{"text": "* $TORCH_HOME/hub, if environment variable TORCH_HOME is set.\n* $XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set.\n* ~/.cache/torch/hub", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"}
{"text": "\nlayout: blog_detail\ntitle: \"PyTorch 2.0 & XLA\u2014The Latest Cutting Edge Features\"\nauthor: Jack Cao, Milad Mohammadi, Alex Wertheim, Yeounoh Chung, Joe Spisak, Will Cromar, Shauheen Zahirazami\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "\nToday, we are excited to share our latest work for PyTorch/XLA 2.0. The release of PyTorch 2.0 is yet another major milestone for this storied community and we are excited to continue to be part of it. When the PyTorch/XLA project started in 2018 between Google and Meta, the focus was on bringing cutting edge Cloud TPUs to help support the PyTorch community. Along the way, others in the community such as Amazon joined the project and very quickly the community expanded. We are excited about XLA's direction and the benefits this project continues to bring to the PyTorch community. In this blog we\u2019d like to showcase some key features that have been in development, show code snippets, and illustrate the benefit through some benchmarks.", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "TorchDynamo / torch.compile (Experimental)\nTorchDynamo (Dynamo) is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It provides a clean API for compiler backends to hook in; its biggest feature is to dynamically modify Python bytecode just before execution. In the PyTorch/XLA 2.0 release, an experimental backend for Dynamo is provided for both inference and training. \nDynamo provides a Torch FX (FX) graph when it recognizes a model pattern and PyTorch/XLA uses a Lazy Tensor approach to compile the FX graph and return the compiled function. To get more insight regarding the technical details about PyTorch/XLA\u2019s dynamo implementation, check out this dev-discuss post and dynamo doc.", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "Here is a small code example of running ResNet18 with torch.compile:\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\ndef eval_model(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.eval()\n dynamo_resnet18 = torch.compile(\n xla_resnet18, backend='torchxla_trace_once')\n for data, _ in loader:\n output = dynamo_resnet18(data)\n\nWith torch.compile PyTorch/XLA only traces the ResNet18 model once during the init time and executes the compiled binary everytime dynamo_resnet18 is invoked, instead of tracing the model every step. To illustrate the benefits of Dynamo+XLA, below is an inference speedup analysis to compare Dynamo and LazyTensor (without Dynamo) using TorchBench on a Cloud TPU v4-8 where the y-axis is the speedup multiplier.\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "Dynamo for training is in the development stage with its implementation being at an earlier stage than inference. Developers are welcome to test this early feature, however, in the 2.0 release, PyTorch/XLA supports the forward and backward pass graphs and not the optimizer graph; the optimizer graph is available in the nightly builds and will land in the PyTorch/XLA 2.1 release. Below is an example of what training looks like using the ResNet18 example with torch.compile:\n```\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\ndef train_model(model, data, target):\n loss_fn = torch.nn.CrossEntropyLoss()\n pred = model(data)\n loss = loss_fn(pred, target)\n loss.backward()\n return pred\ndef train_model_main(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.train()\n dynamo_train_model = torch.compile(\n train_model, backend='aot_torchxla_trace_once')\n for data, target in loader:", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "for data, target in loader:\n output = dynamo_train_model(xla_resnet18, data, target)\n``\nNote that the backend for training isaot_torchxla_trace_once(API will be updated for stable release) whereas the inference backend istorchxla_trace_once` (name subject to change). We expect to extract and execute 3 graphs per training step instead of 1 training step if you use the Lazy tensor. Below is a training speedup analysis to compare Dynamo and Lazy using the TorchBench on Cloud TPU v4-8.\n\nPJRT Runtime (Beta)", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "PJRT Runtime (Beta)\nPyTorch/XLA is migrating from XRT to the new PJRT runtime. PJRT is a better-maintained stack, with demonstrated performance advantages, including, on average, a 35% performance for training on TorchBench 2.0 models. It also supports a richer set of features enabling technologies like SPMD. In the PyTorch/XLA 2.0 release, PJRT is the default runtime for TPU and CPU; GPU support is in experimental state. The PJRT features included in the PyTorch/XLA 2.0 release are:\n* TPU runtime implementation in libtpu using the PJRT Plugin API improves performance by up to 30%\n* torch.distributed support for TPU v2 and v3, including pjrt:// init_method (Experimental)\n* Single-host GPU support. Multi-host support coming soon. (Experimental)", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "Switching to PJRT requires no change (or minimal change for GPUs) to user code (see pjrt.md for more details). Runtime configuration is as simple as setting the PJRT_DEVICE environment variable to the local device type (i.e. TPU, GPU, CPU). Below are examples of using PJRT runtimes on different devices. \n# TPU Device\nPJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\n\n# TPU Pod Device\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"git clone --depth=1 --branch r2.0 https://github.com/pytorch/xla.git\"\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"PJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\"\n\n```\nGPU Device (Experimental)", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "\nGPU Device (Experimental)\nPJRT_DEVICE=GPU GPU_NUM_DEVICES=4 python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=128 --num_epochs=1\n```\nBelow is a performance comparison between XRT and PJRT by task on TorchBench 2.0 on v4-8 TPU. To learn more about PJRT vs. XRT please review the documentation.\n\nParallelization\nGSPMD (Experimental)", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "Parallelization\nGSPMD (Experimental)\nWe are delighted to introduce General and Scalable Parallelization for ML Computation Graphs (GSPMD) in PyTorch as a new experimental data & model sharding solution. GSPMD provides automatic parallelization for common ML workloads, allowing developers to write PyTorch programs as if on a single large device and without custom sharded computation ops and/or collective communication ops. The XLA compiler transforms the single device program into a partitioned one with proper collectives, based on the user provided sharding hints. The API (RFC) will be available in the PyTorch/XLA 2.0 release as an experimental feature on a single TPU VM host. \nNext Steps for GSPMD", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "Next Steps for GSPMD\nGSPMD is experimental in 2.0 release. To bring it to Stable status, we plan to address a number of feature gaps and known issues in the following releases, including multi-host support, DTensor integration, partial replication sharding, asynchronous data loading, and checkpointing. \nFSDP (Beta)\nPyTorch/XLA introduced fully sharded data parallel (FSDP) experimental support in version 1.12. This feature is a parallel representation of PyTorch FSDP and there are subtle differences in how XLA and upstream CUDA kernels are set up. auto_wrap_policy is a new argument that enables developers to automatically specify conditions for propagating partitioning specifications to neural network submodules. auto_wrap_policys may be simply passed in as an argument when wrapping a model with FSDP. Two auto_wrap_policy callables worth noting are: size_based_auto_wrap_policy, transformer_auto_wrap_policy.", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "size_based_auto_wrap_policy enables users to wrap submodules with a minimum number of parameters. The example below wraps model submodules having at least 10M parameters.\nauto_wrap_policy = partial(size_based_auto_wrap_policy, min_num_params=1e7)\n\ntransformer_auto_wrap_policy enables users to wrap all submodules that match a specific layer type. The example below wraps model submodules named torch.nn.Conv2d. To learn more, review this ResNet example by Ronghang Hu.\nauto_wrap_policy = partial(transformer_auto_wrap_policy, transformer_layer_cls={torch.nn.Conv2d})\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "```\nPyTorch/XLA FSDP is now integrated in HuggingFace trainer class (PR) enabling users to train much larger models on PyTorch/XLA (official Hugging Face documentation). A 16B parameters GPT2 model trained on Cloud TPU v4-64 with this FSDP configuration achieved 39% hardware utilization.\n\n\nTPU Accelerator - Num Devices\n\nv4-64\n \n\n\nGPT2 Parameter Count\n\n16B\n \n\n\nLayers Wrapped with FSDP\n\nGPT2Block\n \n\n\nTFLOPs / Chip\n\n275\n \n\n\nPFLOPs / Step\n\n50\n \n\n\nHardware Utilization", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "Hardware Utilization\n\n39%\n \n\n\nDifferences Between FSDP & GSPMD\nFSDP is a data parallelism technique that reduces device memory footprint by storing model parameters, optimizer states, and gradients all sharded. Note that the actual computation is still local to the device and requires all-gathering the sharded model parameters for both forward and backward passes, hence the name \u201cdata parallel\u201d. FSDP is one of the newest additions to PyTorch/XLA to scale large model training.", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "GSPMD on the other hand, is a general parallelization system that enables various types of parallelisms, including both data and model parallelisms. PyTorch/XLA provides a sharding annotation API and XLAShardedTensor abstraction, so a user can annotate any tensor with sharding specs in the PyTorch program. Developers don\u2019t need to manually implement sharded computations or inject collective communications ops to get it right. The XLA compiler does the work so that each computation can run in a distributed manner on multiple devices.\nExamples & Preliminary Results\nTo learn about PyTorch/XLA parallelism sharding API, visit our RFC and see the Sample Code references. Below is a simple example to enable data and model parallelism.\n```\nmodel = SimpleLinear().to(xm.xla_device())\nSharding annotate the linear layer weights.\nxs.mark_sharding(model.fc1.weight, mesh, partition_spec)\nTraining loop\nmodel.train()", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "Training loop\nmodel.train()\nfor step, (data, target) in enumerate(loader):\n optimizer.zero_grad()\n data = data.to(xm.xla_device())\n target = target.to(xm.xla_device())\n # Sharding annotate input data, we can shard any input\n # dimensions. Sharidng the batch dimension enables \n # data parallelism, sharding the feature dimension enables\n # spatial partitioning.\n xs.mark_sharding(data, mesh, partition_spec)\n ouput = model(data)\n loss = loss_fn(output, target)\n optimizer.step()\n xm.mark_step()\n```\nThe following graph highlights the memory efficiency benefits of PyTorch/XLA FSDP and SPMD on Cloud TPU v4-8 running ResNet50.\n\nClosing Thoughts\u2026", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "Closing Thoughts\u2026\nWe are excited to bring these features to the PyTorch community, and this is really just the beginning. Areas like dynamic shapes, deeper support for OpenXLA and many others are in development and we plan to put out more blogs to dive into the details. PyTorch/XLA is developed fully open source and we invite you to join the community of developers by filing issues, submitting pull requests, and sending RFCs on GitHub. You can try PyTorch/XLA on a variety of XLA devices including TPUs and GPUs. Here is how to get started.\nCongratulations again to the PyTorch community on this milestone!\nCheers,\nThe PyTorch Team at Google", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"}
{"text": "\nlayout: blog_detail\ntitle: 'Announcing the Winners of the 2021 PyTorch Annual Hackathon'\nauthor: Team PyTorch\nfeatured-img: 'assets/images/social_hackathon21.png'\n\nMore than 1,900 people worked hard in this year\u2019s PyTorch Annual Hackathon to create unique tools and applications for PyTorch developers and researchers.\nNotice: None of the projects submitted to the hackathon are associated with or offered by Meta Platforms, Inc.\n\n\n\nThis year, participants could enter their projects into following three categories:\n* PyTorch Developer Tools: a tool or library for improving productivity and efficiency for PyTorch researchers and developers.\n* Web and Mobile Applications Powered by PyTorch: a web or mobile interface and/or an embedded device built using PyTorch.", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}
{"text": "\nPyTorch Responsible AI Development Tools: a tool, library, or web/mobile app to support researchers and developers in creating responsible AI that factors in fairness, security, privacy, and more throughout its entire development process.\nThe virtual hackathon ran from September 8 through November 2, 2021, with more than 1,900 registered participants from 110 countries, submitting a total of 65 projects. Entrants were judged on their idea\u2019s quality, originality, potential impact, and how well they implemented it. All projects can be viewed here.\nMeet the winners of each category below!\n\nPYTORCH DEVELOPER TOOLS\nFirst Place: RaNNC\nRaNNC is a middleware to automate hybrid model/data parallelism for training very large-scale neural networks capable of training 100 billion parameter models without any manual tuning.", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}
{"text": "Second Place: XiTorch\nXiTorch provides first and higher order gradients of functional routines, such as optimization, rootfinder, and ODE solver. It also contains operations for implicit linear operators (e.g. large matrix that is expressed only by its matrix-vector multiplication) such as symmetric eigen-decomposition, linear solve, and singular value decomposition.\nThird Place: TorchLiberator\nTorchLiberator automates model surgery, finding the maximum correspondence between weights in two networks.\nHonorable Mentions\n\nPADL manages your entire PyTorch work flow with a single python abstraction and a beautiful functional API, so there\u2019s no more complex configuration or juggling preprocessing, postprocessing and forward passes.\n", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}
{"text": "\nPyTree is a PyTorch package for recursive neural networks that provides highly generic recursive neural network implementations as well as efficient batching methods. \nIndicLP makes it easier for developers and researchers to build applications and models in Indian Languages, thus making NLP a more diverse field. \n\nWEB/MOBILE APPLICATIONS POWERED BY PYTORCH\nFirst Place: PyTorch Driving Guardian\nPyTorch Driving Guardian is a tool that monitors driver alertness, emotional state, and potential blind spots on the road. \nSecond Place: Kronia\nKronia is an Android mobile app built to maximize the harvest outputs for farmers. \nThird Place: Heyoh camera for Mac", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}
{"text": "Heyoh is a Mac virtual camera for Zoom and Meets that augments live video by recognizing hand gestures and smiles and shows animated effects to other video participants. \nHonorable Mentions\n\nMamma AI is a tool that helps doctors with the breast cancer identification process by identifying areas likely to have cancer using ultrasonic and x-ray images. \nAgingClock is a tool that predicts biological age first with methylation genome data, then blood test data and eventually with multimodal omics and lifestyle data.\nIris is an open source photos platform which is more of an alternative of Google Photos that includes features such as Listing photos, Detecting Categories, Detecting and Classifying Faces from Photos, Detecting and Clustering by Location and Things in Photos.\n\nPYTORCH RESPONSIBLE AI DEVELOPMENT TOOLS", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}
{"text": "PYTORCH RESPONSIBLE AI DEVELOPMENT TOOLS\nFirst Place: FairWell\nFairWell aims to address model bias on specific groups of people by allowing data scientists to evaluate their dataset and model predictions and take steps to make their datasets more inclusive and their models less biased. \nSecond Place: promp2slip\nPromp2slip is a library that tests the ethics of language models by using natural adversarial texts. \nThird Place: Phorch\nPhorch adversarially attacks the data using FIGA (Feature Importance Guided Attack) and creates 3 different attack sets of data based on certain parameters. These features are utilized to implement adversarial training as a defense against FIGA using neural net architecture in PyTorch.\nHonorable Mentions", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}
{"text": "Honorable Mentions\n\nGreenops helps to measure the footprints of deep learning models at training, testing and evaluating to reduce energy consumption and carbon footprints.\nXaitk-saliency is an open-source, explainable AI toolkit for visual saliency algorithm interfaces and implementations, built for analytic and autonomy applications.\nThank you,\nTeam PyTorch\n", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"}
{"text": "\nlayout: blog_detail\ntitle: 'Announcing PyTorch Ecosystem Day'\nauthor: Team PyTorch\n\nWe\u2019re proud to announce our first PyTorch Ecosystem Day. The virtual, one-day event will focus completely on our Ecosystem and Industry PyTorch communities!\nPyTorch is a deep learning framework of choice for academics and companies, all thanks to its rich ecosystem of tools and strong community. As with our developers, our ecosystem partners play a pivotal role in the development and growth of the community.\n\n\n\nWe will be hosting our first PyTorch Ecosystem Day, a virtual event designed for our ecosystem and industry communities to showcase their work and discover new opportunities to collaborate.", "source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}
{"text": "PyTorch Ecosystem Day will be held on April 21, with both a morning and evening session, to ensure we reach our global community. Join us virtually for a day filled with discussions on new developments, trends, challenges, and best practices through keynotes, breakout sessions, and a unique networking opportunity hosted through Gather.Town . \nEvent Details\nApril 21, 2021 (Pacific Time)\nFully digital experience \n\nMorning Session: (EMEA)\nOpening Talks - 8:00 am-9:00 am PT\nPoster Exhibition & Breakout Sessions - 9:00 am-12:00 pm PT \nEvening Session (APAC/US)\nOpening Talks - 3:00 pm-4:00 pm PT\nPoster Exhibition & Breakout Sessions - 3:00 pm-6:00 pm PT \nNetworking - 9:00 am-7:00 pm PT\n\nThere are two ways to participate in PyTorch Ecosystem Day:", "source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}
{"text": "\n\nPoster Exhibition from the PyTorch ecosystem and industry communities covering a variety of topics. Posters are available for viewing throughout the duration of the event. To be part of the poster exhibition, please see below for submission details. If your poster is accepted, we highly recommend tending your poster during one of the morning or evening sessions or both!\n\n\nBreakout Sessions are 40-min sessions freely designed by the community. The breakouts can be talks, demos, tutorials or discussions. Note: you must have an accepted poster to apply for the breakout sessions.\n\n", "source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}
{"text": "Call for posters now open! Submit your proposal today! Please send us the title and summary of your projects, tools, and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects. Please no sales pitches. Deadline for submission is March 18, 2021. \nVisit pytorchecosystemday.fbreg.com for more information and we look forward to welcoming you to PyTorch Ecosystem Day on April 21st!", "source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"}
{"text": "\nlayout: blog_detail\ntitle: \"Democratizing AI with PyTorch Foundation and ROCm\u2122 support for PyTorch\"\nauthor: AMD\n\n\nLast year, Meta announced that PyTorch joined the Linux Foundation as a neutral home for growing the machine learning project and community with AMD representation as a part of the founding membership and governing board.\nPyTorch Foundation\u2019s mission is to drive AI adoption by democratizing its software ecosystem through open source principles aligning with the AMD core principle of an Open software ecosystem. AMD strives to foster innovation through the support for latest generations of hardware, tools, libraries, and other components to simplify and accelerate adoption of AI across a broad range of scientific discoveries.\n\n", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "\n", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "AMD, along with key PyTorch codebase developers (including those at Meta AI), delivered a set of updates to the ROCm\u2122 open software ecosystem that brings stable support for AMD Instinct\u2122 accelerators as well as many Radeon\u2122 GPUs. This now gives PyTorch developers the ability to build their next great AI solutions leveraging AMD GPU accelerators & ROCm. The support from PyTorch community in identifying gaps, prioritizing key updates, providing feedback for performance optimizing and supporting our journey from \u201cBeta\u201d to \u201cStable\u201d was immensely helpful and we deeply appreciate the strong collaboration between the two teams at AMD and PyTorch. The move for ROCm support from \u201cBeta\u201d to \u201cStable\u201d came in the PyTorch 1.12 release (June 2022) brings the added support to easily run PyTorch on native environment without having to configure custom dockers. This is a sign of confidence about the quality of support and performance of PyTorch using AMD Instinct and ROCm. The results of these collaborative efforts are evident in the performance measured on key industry benchmarks like Microsoft\u2019s SuperBench shown below in Graph 1.", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "\n\n\n\n\u201cWe are excited to see the significant impact of developers at AMD to contribute to and extend features within PyTorch to make AI models run in a more performant, efficient, and scalable way. A great example of this is the thought-leadership around unified memory approaches between the framework and future hardware systems, and we look forward to seeing that feature progress.\u201d \n- Soumith Chintala, PyTorch lead-maintainer and Director of Engineering, Meta AI\n\n\n", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "\n\n\nThe progressive improvements on both the AMD CDNA\u2122 architecture as well as ROCm and PyTorch shows single GPU model throughput increase from AMD Instinct MI100 to the latest generation AMD Instinct MI200 family GPUs going from ROCm 4.2 to ROCm 5.3 and from PyTorch 1.7 to PyTorch 1.12.\n\nGraph 1: ML model performance over generation using Microsoft Superbench Suite 1, 2, 3\nBelow are a few of the key updates for ROCm support since the PyTorch 1.12 release\nFull Continuous Integration (CI) for ROCm on PyTorch", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "With the ROCm support for PyTorch move from \u201cBeta\u201d to \u201cStable,\u201d all the functions and features commits are now verified through a full Continuous Integration (CI) process. The CI process helps ensure the proper build and test process ahead of an expected Docker and PIP wheel release with stable commits forthcoming.\nSupport for Kineto Profiler\nThe addition of Kineto profiler support to ROCm now helps developers and users understand performance bottlenecks through effective diagnosis and profiling tools. The tool also provides recommendations to improve known issues and visualization through TensorBoard UI.\nKey PyTorch Libraries support added", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "Key PyTorch Libraries support added\nPyTorch ecosystem libraries like TorchText (Text classification), TorchRec (libraries for recommender systems - RecSys), TorchVision (Computer Vision), TorchAudio (audio and signal processing) are fully supported since ROCm 5.1 and upstreamed with PyTorch 1.12.\nKey libraries provided with the ROCm software stack including MIOpen (Convolution models), RCCL (ROCm Collective Communications) and rocBLAS (BLAS for transformers) were further optimized to offer new potential efficiencies and higher performance.", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "MIOpen innovates on several fronts, such as implementing fusion to optimize for memory bandwidth and GPU launch overheads, providing an auto-tuning infrastructure to overcome the large design space of problem configurations, and implementing different algorithms to optimize convolutions for different filter and input sizes. MIOpen is one of the first libraries to publicly support the bfloat16 data-type for convolutions, allowing efficient training at lower precision maintaining expected accuracy.", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "RCCL (pronounced \"Rickle\") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. There is support for direct GPU-to-GPU send and receive operations. It has been optimized to achieve high bandwidth on platforms using PCIe\u00ae, Infinity Fabric\u2122 (GPU to GPU) as well as networking using InfiniBand Verbs or TCP/IP sockets. RCCL supports an arbitrary number of GPUs installed in single or multiple nodes and can be used in either single- or multi-process (e.g., MPI) applications.", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "Along with the above key highlights, over 50 features and functionality improvements were completed jointly between AMD and PyTorch to add stable support for ROCm. These include improvements to tools, compilers, runtime, graph optimizations through TorchScript, INT8 quant path usage, and ONNX runtime integration including support for Navi 21 based Radeon\u2122 PRO datacenter graphics card to name a few.\nAITemplate Inference Engine", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "MetaAI recently published a blog announcing the release of its open source AITemplate (link) for a unified inference system supporting AMD Instinct GPU accelerators using the AMD ROCm stack. This Python based framework can help significantly improve performance through increased utilization of AMD matrix cores for transformer blocks. This is achieved through the AMD Composable Kernel (CK) library which provides performance critical Kernels for ML AI workloads across multiple architectures including GPUs and CPUs through HIP & C++.\nMoreover, the AITemplate also provides out-of-the-box support for widely used AI models like BERT, ResNET, Vision Transformer, Stable Diffusion etc. simplifying deployment process through these pretrained models.\nWhat\u2019s coming with future ROCm releases?\nUnified memory models for CPU + GPU", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "Unified memory models for CPU + GPU\nAs system architecture evolves to address the complexity of large problem sizes and data sets, memory management becomes a key performance bottle neck that needs a cohesive strategy to be addressed through innovations at both hardware and software levels. AMD is uniquely positioned to address this problem with its effective data center solutions integrating AMD EPYC\u2122 CPU cores with its AMD Instinct GPU compute units in a truly unified datacenter APU (Accelerated Processing Unit) form factor set to be launched in 2H 2023.\nThe software work to leverage the unified CPU + GPU memory has already started in collaboration with the PyTorch team, to enable the usage of a fast, low latency, synchronized memory model that enables not only AMD but also other AI accelerators to address the complex memory management problem of today. We are looking forward to this joint effort and announcement soon.\nAcknowledgement", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "Acknowledgement\nThe content in this blog highlights the joint work between AMD and key PyTorch contributors including Meta, working on many of the core features, as well as Microsoft enabling ONNX Runtime support. We are looking forward to working with the other founding members at the PyTorch Foundation on the next steps and improvements to democratize and grow adoption of PyTorch across the industry.\nCAUTIONARY STATEMENT\n", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "This blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the availability, timing and expected benefits of an AMD datacenter APU form factor, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as \"would,\" \"may,\" \"expects,\" \"believes,\" \"plans,\" \"intends,\" \"projects\" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this blog are based on current beliefs, assumptions and expectations, speak only as of the date of this blog and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD\u2019s Securities and Exchange Commission filings, including but not limited to AMD\u2019s most recent reports on Forms 10-K and 10-Q. AMD does not assume, and hereby disclaims, any obligation to update forward-looking statements made in this blog, except as may be required by law.", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "\nEndnotes\n\nMI100D-01 SuperBench v0.5 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122 7763 CPU server tested with 1x AMD Instinct\u2122 MI100 (32GB HBM2e) 300W GPU, SBIOS 2.2, Ubuntu\u00ae 20.04.5 LTS, host ROCm\u2122 5.2.0, guest ROCm 4.2, PyTorch 1.7.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.\n", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "\nMI200D-01 SuperBench v0.6 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122 7763 CPU server tested with 1x AMD Instinct\u2122 MI210 (64GB HBM2e) 300W GPU, SBIOS 2.2, Ubuntu 20.04.5 LTS, host ROCm 5.3.0, guest ROCm 5.3, PyTorch 1.12. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.\nMI200D-02: SuperBench v0.6 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC\u2122\ufe0f 7763 CPU server tested with 1x AMD Instinct\u2122\ufe0f MI250 (128GB HBM2e) 560W GPU, SBIOS M12, Ubuntu 20.04 LTS, host ROCm 5.3.0, guest ROCm 5.3, PyTorch 1.12. Server manufacturers may vary configurations, yielding different results. Performance may vary based factors including use of latest drivers and optimizations.\n", "source": "https://pytorch.org/blog/democratizing-ai-with-pytorch/", "category": "pytorch blogs"}
{"text": "\nlayout: blog_detail\ntitle: 'Introduction to Quantization on PyTorch'\nauthor: Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath, and Seth Weidman\n\nIt\u2019s important to make efficient use of both server-side and on-device compute resources when developing machine learning applications. To support more efficient deployment on servers and edge devices, PyTorch added a support for model quantization using the familiar eager mode Python API.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "Quantization leverages 8bit integer (int8) instructions to reduce the model size and run the inference faster (reduced latency) and can be the difference between a model achieving quality of service goals or even fitting into the resources available on a mobile device. Even when resources aren\u2019t quite so constrained it may enable you to deploy a larger and more accurate model. Quantization is available in PyTorch starting in version 1.3 and with the release of PyTorch 1.4 we published quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library.\nThis blog post provides an overview of the quantization support on PyTorch and its incorporation with the TorchVision domain library.\nWhat is Quantization?\nQuantization refers to techniques for doing both computations and memory accesses with lower precision data, usually int8 compared to floating point implementations. This enables performance gains in several important areas:", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\n4x reduction in model size;\n2-4x reduction in memory bandwidth;\n2-4x faster inference due to savings in memory bandwidth and faster compute with int8 arithmetic (the exact speed up varies depending on the hardware, the runtime, and the model).\nQuantization does not however come without additional cost. Fundamentally quantization means introducing approximations and the resulting networks have slightly less accuracy. These techniques attempt to minimize the gap between the full floating point accuracy and the quantized accuracy.\nWe designed quantization to fit into the PyTorch framework. The means that:\nPyTorch has data types corresponding to quantized tensors, which share many of the features of tensors.\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\nOne can write kernels with quantized tensors, much like kernels for floating point tensors to customize their implementation. PyTorch supports quantized modules for common operations as part of the torch.nn.quantized and torch.nn.quantized.dynamic name-space.\nQuantization is compatible with the rest of PyTorch: quantized models are traceable and scriptable. The quantization method is virtually identical for both server and mobile backends. One can easily mix quantized and floating point operations in a model.\nMapping of floating point tensors to quantized tensors is customizable with user defined observer/fake-quantization blocks. PyTorch provides default implementations that should work for most use cases.\n\n\n\n\nWe developed three techniques for quantizing neural networks in PyTorch as part of quantization tooling in the torch.quantization name-space.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "The Three Modes of Quantization Supported in PyTorch starting version 1.3\n\n\nDynamic Quantization\n The easiest method of quantization PyTorch supports is called dynamic quantization. This involves not just converting the weights to int8 - as happens in all quantization variants - but also converting the activations to int8 on the fly, just before doing the computation (hence \u201cdynamic\u201d). The computations will thus be performed using efficient int8 matrix multiplication and convolution implementations, resulting in faster compute. However, the activations are read and written to memory in floating point format.\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\nPyTorch API: we have a simple API for dynamic quantization in PyTorch. torch.quantization.quantize_dynamic takes in a model, as well as a couple other arguments, and produces a quantized model! Our end-to-end tutorial illustrates this for a BERT model; while the tutorial is long and contains sections on loading pre-trained models and other concepts unrelated to quantization, the part the quantizes the BERT model is simply:\n python\n import torch.quantization\n quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "```\n * See the documentation for the function here an end-to-end example in our tutorials here and here.\n2. ### Post-Training Static Quantization", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\n\nPost-Training Static Quantization\n One can further improve the performance (latency) by converting networks to use both integer arithmetic and int8 memory accesses. Static quantization performs the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting \u201cobserver\u201d modules at different points that record these distributions). This information is used to determine how specifically the different activations should be quantized at inference time (a simple technique would be to simply divide the entire range of activations into 256 levels, but we support more sophisticated methods as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "With this release, we\u2019re supporting several features that allow users to optimize their static quantization:\n 1. Observers: you can customize observer modules which specify how statistics are collected prior to quantization to try out more advanced methods to quantize your data.\n 2. Operator fusion: you can fuse multiple operations into a single operation, saving on memory access while also improving the operation\u2019s numerical accuracy.\n 3. Per-channel quantization: we can independently quantize weights for each output channel in a convolution/linear layer, which can lead to higher accuracy with almost the same speed.\n * ### PyTorch API:\n * To fuse modules, we have torch.quantization.fuse_modules\n * Observers are inserted using torch.quantization.prepare\n * Finally, quantization itself is done using torch.quantization.convert", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "We have a tutorial with an end-to-end example of quantization (this same tutorial also covers our third quantization method, quantization-aware training), but because of our simple API, the three lines that perform post-training static quantization on the pre-trained model myModel are:\n python\n # set quantization config for server (x86)\n deploymentmyModel.qconfig = torch.quantization.get_default_config('fbgemm')\n # insert observers\n torch.quantization.prepare(myModel, inplace=True)\n # Calibrate the model and collect statistics\n # convert to quantized version\n torch.quantization.convert(myModel, inplace=True)\n3. ### Quantization Aware Training", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "``\n3. ### **Quantization Aware Training**\n **Quantization-aware training(QAT)** is the third method, and the one that typically results in highest accuracy of these three. With QAT, all weights and activations are \u201cfake quantized\u201d during both the forward and backward passes of training: that is, float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. Thus, all the weight adjustments during training are made while \u201caware\u201d of the fact that the model will ultimately be quantized; after quantizing, therefore, this method usually yields higher accuracy than the other two methods.\n* ### **PyTorch API**:\n *torch.quantization.prepare_qatinserts fake quantization modules to model quantization.\n * Mimicking the static quantization API,torch.quantization.convert` actually quantizes the model once training is complete.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "For example, in the end-to-end example, we load in a pre-trained model as qat_model, then we simply perform quantization-aware training using:\n python\n # specify quantization config for QAT\n qat_model.qconfig=torch.quantization.get_default_qat_qconfig('fbgemm')\n # prepare QAT\n torch.quantization.prepare_qat(qat_model, inplace=True)\n # convert to quantized version, removing dropout, to check for accuracy on each\n epochquantized_model=torch.quantization.convert(qat_model.eval(), inplace=False)\nDevice and Operator Support\nQuantization support is restricted to a subset of available operators, depending on the method being used, for a list of supported operators, please see the documentation at https://pytorch.org/docs/stable/quantization.html.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "The set of available operators and the quantization numerics also depend on the backend being used to run quantized models. Currently quantized operators are supported only for CPU inference in the following backends: x86 and ARM. Both the quantization configuration (how tensors should be quantized and the quantized kernels (arithmetic with quantized tensors) are backend dependent. One can specify the backend by doing:\nimport torchbackend='fbgemm'\n# 'fbgemm' for server, 'qnnpack' for mobile\nmy_model.qconfig = torch.quantization.get_default_qconfig(backend)\n# prepare and convert model\n# Set the backend on which the quantized kernels need to be run\ntorch.backends.quantized.engine=backend\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "torch.backends.quantized.engine=backend\n```\nHowever, quantization aware training occurs in full floating point and can run on either GPU or CPU. Quantization aware training is typically only used in CNN models when post training static or dynamic quantization doesn\u2019t yield sufficient accuracy. This can occur with models that are highly optimized to achieve small size (such as Mobilenet).\nIntegration in torchvision\nWe\u2019ve also enabled quantization for some of the most popular models in torchvision: Googlenet, Inception, Resnet, ResNeXt, Mobilenet and Shufflenet. We have upstreamed these changes to torchvision in three forms:\n1. Pre-trained quantized weights so that you can use them right away.\n2. Quantization ready model definitions so that you can do post-training quantization or quantization aware training.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\nA script for doing quantization aware training \u2014 which is available for any of these model though, as you will learn below, we only found it necessary for achieving accuracy with Mobilenet.\nWe also have a tutorial showing how you can do transfer learning with quantization using one of the torchvision models.\n\nChoosing an approach\nThe choice of which scheme to use depends on multiple factors:\n1. Model/Target requirements: Some models might be sensitive to quantization, requiring quantization aware training.\n2. Operator/Backend support: Some backends require fully quantized operators.\nCurrently, operator coverage is limited and may restrict the choices listed in the table below:\nThe table below provides a guideline.\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": ".tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;}\n.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;}\n.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}\n.tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:top;font-weight:bold;color:black;}\narticle.pytorch-article table tr th:first-of-type, article.pytorch-article table tr td:first-of-type{padding-left:5px}\n\n\n\nModel Type\nPreferred scheme\nWhy\n\n\nLSTM/RNN\nDynamic Quantization\nThroughput dominated by compute/memory bandwidth for weights\n\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\n\n\nBERT/Transformer\nDynamic Quantization\nThroughput dominated by compute/memory bandwidth for weights\n\n\nCNN\nStatic Quantization\nThroughput limited by memory bandwidth for activations\n\n\nCNN\nQuantization Aware Training\nIn the case where accuracy can't be achieved with static quantization\n\n\nPerformance Results\nQuantization provides a 4x reduction in the model size and a speedup of 2x to 3x compared to floating point implementations depending on the hardware platform and the model being benchmarked. Some sample results are:\n\n\n\nModel\nFloat Latency (ms)", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "Float Latency (ms)\n Quantized Latency (ms) | \n Inference Performance Gain | \n Device | \n Notes | \n\n\n BERT | \n 581 | \n 313 | \n 1.8x | \n Xeon-D2191 (1.6GHz) | \n Batch size = 1, Maximum sequence length= 128, Single thread, x86-64, Dynamic quantization | \n
\n\n Resnet-50 | \n 214 | \n 103 | \n 2x | \n Xeon-D2191 (1.6GHz) | \n Single thread, x86-64, Static quantization | \n
\n\n Mobilenet-v2 | \n 97 | \n 17 | \n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "17\n 5.7x | \n Samsung S9 | \n Static quantization, Floating point numbers are based on Caffe2 run-time and are not optimized | \n
\n\n\n\nAccuracy results\nWe also compared the accuracy of static quantized models with the floating point models on Imagenet. For dynamic quantization, we compared the F1 score of BERT on the GLUE benchmark for MRPC.\nComputer Vision Model accuracy\n\n\nModel\nTop-1 Accuracy (Float)\nTop-1 Accuracy (Quantized)\nQuantization scheme\n\n\nGooglenet\n69.8\n69.7\nStatic post training quantization\n\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\n\n\nInception-v3\n77.5\n77.1\nStatic post training quantization\n\n\nResNet-18\n69.8\n69.4\nStatic post training quantization\n\n\nResnet-50\n76.1\n75.9\nStatic post training quantization\n\n\nResNext-101 32x8d\n79.3\n79\nStatic post training quantization\n\n\nMobilenet-v2\n71.9\n71.6\nQuantization Aware Training\n\n\nShufflenet-v2\n69.4", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "69.4\n68.4 | \nStatic post training quantization | \n\n\n\nSpeech and NLP Model accuracy\n\n\n\nModel\nF1 (GLUEMRPC) Float\nF1 (GLUEMRPC) Quantized\nQuantization scheme\n\n\nBERT\n0.902\n0.895\nDynamic quantization\n\n\n\nConclusion", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\n\n\nConclusion\nTo get started on quantizing your models in PyTorch, start with the tutorials on the PyTorch website. If you are working with sequence data start with dynamic quantization for LSTM, or BERT. If you are working with image data then we recommend starting with the transfer learning with quantization tutorial. Then you can explore static post training quantization. If you find that the accuracy drop with post training quantization is too high, then try quantization aware training.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "If you run into issues you can get community help by posting in at discuss.pytorch.org, use the quantization category for quantization related issues.\nThis post is authored by Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath and Seth Weidman. Special thanks to Jianyu Huang, Lingyi Liu and Haixin Liu for producing quantization metrics included in this post.\nFurther reading:\n\nPyTorch quantization presentation at Neurips: (https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx)\nQuantized Tensors (https://github.com/pytorch/pytorch/wiki/\nIntroducing-Quantized-Tensor)\nQuantization RFC on Github (https://github.com/pytorch/pytorch/\nissues/18318)\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"}
{"text": "\nlayout: blog_detail\ntitle: 'What\u2019s New in PyTorch Profiler 1.9?'\nauthor: Sabrina Smai, Program Manager on the AI Framework team at Microsoft\n\nPyTorch Profiler v1.9 has been released! The goal of this new release (previous PyTorch Profiler release) is to provide you with new state-of-the-art tools to help diagnose and fix machine learning performance issues regardless of whether you are working on one or numerous machines. The objective is to target the execution steps that are the most costly in time and/or memory, and visualize the work load distribution between GPUs and CPUs. \nHere is a summary of the five major features being released:", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\nDistributed Training View: This helps you understand how much time and memory is consumed in your distributed training job. Many issues occur when you take a training model and split the load into worker nodes to be run in parallel as it can be a black box. The overall model goal is to speed up model training. This distributed training view will help you diagnose and debug issues within individual nodes. \nMemory View: This view allows you to understand your memory usage better. This tool will help you avoid the famously pesky Out of Memory error by showing active memory allocations at various points of your program run. \nGPU Utilization Visualization: This tool helps you make sure that your GPU is being fully utilized. \nCloud Storage Support: Tensorboard plugin can now read profiling data from Azure Blob Storage, Amazon S3, and Google Cloud Platform.\n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\nJump to Source Code: This feature allows you to visualize stack tracing information and jump directly into the source code. This helps you quickly optimize and iterate on your code based on your profiling results. \n\nGetting Started with PyTorch Profiling Tool\nPyTorch includes a profiling functionality called \u00ab PyTorch Profiler \u00bb. The PyTorch Profiler tutorial can be found here.\nTo instrument your PyTorch code for profiling, you must:\n$ pip install torch-tb-profiler\nimport torch.profiler as profiler\nWith profiler.profile(XXXX)\n\nComments:\n\u2022 For CUDA and CPU profiling, see below: \nwith torch.profiler.profile( \nactivities=[ \ntorch.profiler.ProfilerActivity.CPU, \ntorch.profiler.ProfilerActivity.CUDA], \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "torch.profiler.ProfilerActivity.CUDA], \n```\n\u2022 With profiler.record_function(\u201c$NAME\u201d): allows putting a decorator (a tag associated to a name) for a block of function\n\u2022 Profile_memory=True parameter under profiler.profile allows you to profile CPU and GPU memory footprint\nVisualizing PyTorch Model Performance using PyTorch Profiler\nDistributed Training\nRecent advances in deep learning argue for the value of large datasets and large models, which requires you to scale out model training to more computational resources. Distributed Data Parallel (DDP) and NVIDIA Collective Communications Library (NCCL) are the widely adopted paradigms in PyTorch for accelerating your deep learning training. \nIn this release of PyTorch Profiler, DDP with NCCL backend is now supported.\n\n\n\nComputation/Communication Overview", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\nComputation/Communication Overview\nIn the Computation/Communication overview under the Distributed training view, you can observe the computation-to-communication ratio of each worker and [load balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing) nodes between worker as measured by granularity. \nScenario 1:\nIf the computation and overlapping time of one worker is much larger than the others, this may suggest an issue in the workload balance or worker being a straggler. Computation is the sum of kernel time on GPU minus the overlapping time. The overlapping time is the time saved by interleaving communications during computation. The more overlapping time represents better parallelism between computation and communication. Ideally the computation and communication completely overlap with each other. Communication is the total communication time minus the overlapping time. The example image below displays how this scenario appears on Tensorboard. \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\n\nFigure: A straggler example\n\nScenario 2:\nIf there is a small batch size (i.e. less computation on each worker) or the data to be transferred is large, the computation-to-communication may also be small and be seen in the profiler with low GPU utilization and long waiting times. This computation/communication view will allow you to diagnose your code to reduce communication by adopting gradient accumulation, or to decrease the communication proportion by increasing batch size. DDP communication time depends on model size. Batch size has no relationship with model size. So increasing batch size could make computation time longer and make computation-to-communication ratio bigger. \nSynchronizing/Communication Overview", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "Synchronizing/Communication Overview\nIn the Synchronizing/Communication view, you can observe the efficiency of communication. This is done by taking the step time minus computation and communication time. Synchronizing time is part of the total communication time for waiting and synchronizing with other workers. The Synchronizing/Communication view includes initialization, data loader, CPU computation, and so on Insights like what is the ratio of total communication is really used for exchanging data and what is the idle time of waiting for data from other workers can be drawn from this view. \n\n\n\nFor example, if there is an inefficient workload balance or straggler issue, you\u2019ll be able to identify it in this Synchronizing/Communication view. This view will show several workers\u2019 waiting time being longer than others. \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\n\n\nThis table view above allows you to see the detailed statistics of all communication ops in each node. This allows you to see what operation types are being called, how many times each op is called, what is the size of the data being transferred by each op, etc. \nMemory View:\nThis memory view tool helps you understand the hardware resource consumption of the operators in your model. Understanding the time and memory consumption on the operator-level allows you to resolve performance bottlenecks and in turn, allow your model to execute faster. Given limited GPU memory size, optimizing the memory usage can: \n1. Allow bigger model which can potentially generalize better on end level tasks.\n2. Allow bigger batch size. Bigger batch sizes increase the training speed.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "The profiler records all the memory allocation during the profiler interval. Selecting the \u201cDevice\u201d will allow you to see each operator\u2019s memory usage on the GPU side or host side. You must enable profile_memory=True to generate the below memory data as shown here. \nWith torch.profiler.profile(\nProfiler_memory=True # this will take 1 \u2013 2 minutes to complete. \n)\n\nImportant Definitions:\n\u2022 \u201cSize Increase\u201d displays the sum of all allocation bytes and minus all the memory release bytes.\n\u2022 \u201cAllocation Size\u201d shows the sum of all allocation bytes without considering the memory release.\n\u2022 \u201cSelf\u201d means the allocated memory is not from any child operators, instead by the operator itself.\n\n\n\nGPU Metric on Timeline:", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\nGPU Metric on Timeline:\nThis feature will help you debug performance issues when one or more GPU are underutilized. Ideally, your program should have high GPU utilization (aiming for 100% GPU utilization), minimal CPU to GPU communication, and no overhead. \nOverview:\nThe overview page highlights the results of three important GPU usage metrics at different levels (i.e. GPU Utilization, Est. SM Efficiency, and Est. Achieved Occupancy). Essentially, each GPU has a bunch of SM each with a bunch of warps that can execute a bunch of threads concurrently. Warps execute a bunch because the amount depends on the GPU. But at a high level, this GPU Metric on Timeline tool allows you can see the whole stack, which is useful. \nIf the GPU utilization result is low, this suggests a potential bottleneck is present in your model. Common reasons: \n\u2022Insufficient parallelism in kernels (i.e., low batch size) \n\u2022Small kernels called in a loop. This is to say the launch overheads are not amortized", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\u2022CPU or I/O bottlenecks lead to the GPU not receiving enough work to keep busy \nLooking of the overview page where the performance recommendation section is where you\u2019ll find potential suggestions on how to increase that GPU utilization. In this example, GPU utilization is low so the performance recommendation was to increase batch size. Increasing batch size 4 to 32, as per the performance recommendation, increased the GPU Utilization by 60.68%.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "GPU Utilization: the step interval time in the profiler when a GPU engine was executing a workload. The high the utilization %, the better. The drawback of using GPU utilization solely to diagnose performance bottlenecks is it is too high-level and coarse. It won\u2019t be able to tell you how many Streaming Multiprocessors are in use. Note that while this metric is useful for detecting periods of idleness, a high value does not indicate efficient use of the GPU, only that it is doing anything at all. For instance, a kernel with a single thread running continuously will get a GPU Utilization of 100%", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "Estimated Stream Multiprocessor Efficiency (Est. SM Efficiency) is a finer grained metric, it indicates what percentage of SMs are in use at any point in the trace This metric reports the percentage of time where there is at least one active warp on a SM and those that are stalled (NVIDIA doc). Est. SM Efficiency also has it\u2019s limitation. For instance, a kernel with only one thread per block can\u2019t fully use each SM. SM Efficiency does not tell us how busy each SM is, only that they are doing anything at all, which can include stalling while waiting on the result of a memory load. To keep an SM busy, it is necessary to have a sufficient number of ready warps that can be run whenever a stall occurs", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "Estimated Achieved Occupancy (Est. Achieved Occupancy) is a layer deeper than Est. SM Efficiency and GPU Utilization for diagnosing performance issues. Estimated Achieved Occupancy indicates how many warps can be active at once per SMs. Having a sufficient number of active warps is usually key to achieving good throughput. Unlike GPU Utilization and SM Efficiency, it is not a goal to make this value as high as possible. As a rule of thumb, good throughput gains can be had by improving this metric to 15% and above. But at some point you will hit diminishing returns. If the value is already at 30% for example, further gains will be uncertain. This metric reports the average values of all warp schedulers for the kernel execution period (NVIDIA doc). The larger the Est. Achieve Occupancy value is the better. \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\n\nOverview details: Resnet50_batchsize4\n\n\n\nOverview details: Resnet50_batchsize32\n\nKernel View\nThe kernel has \u201cBlocks per SM\u201d and \u201cEst. Achieved Occupancy\u201d which is a great tool to compare model runs. \n\n\n\nMean Blocks per SM:\nBlocks per SM = Blocks of this kernel / SM number of this GPU. If this number is less than 1, it indicates the GPU multiprocessors are not fully utilized. \u201cMean Blocks per SM\u201d is weighted average of all runs of this kernel name, using each run\u2019s duration as weight.\nMean Est. Achieved Occupancy:", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "Mean Est. Achieved Occupancy:\nEst. Achieved Occupancy is defined as above in overview. \u201cMean Est. Achieved Occupancy\u201d is weighted average of all runs of this kernel name, using each run\u2019s duration as weight.\nTrace View\nThis trace view displays a timeline that shows the duration of operators in your model and which system executed the operation. This view can help you identify whether the high consumption and long execution is because of input or model training. Currently, this trace view shows GPU Utilization and Est. SM Efficiency on a timeline. \n\n\n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\nGPU utilization is calculated independently and divided into multiple 10 millisecond buckets. The buckets\u2019 GPU utilization values are drawn alongside the timeline between 0 \u2013 100%. In the above example, the \u201cProfilerStep5\u201d GPU utilization during thread 28022\u2019s busy time is higher than the following the one during \u201cOptimizer.step\u201d. This is where you can zoom-in to investigate why that is. \n\n\n\nFrom above, we can see the former\u2019s kernels are longer than the later\u2019s kernels. The later\u2019s kernels are too short in execution, which results in lower GPU utilization. \nEst. SM Efficiency: Each kernel has a calculated est. SM efficiency between 0 \u2013 100%. For example, the below kernel has only 64 blocks, while the SMs in this GPU is 80. Then its \u201cEst. SM Efficiency\u201d is 64/80, which is 0.8. \n\n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\nCloud Storage Support\nAfter running pip install tensorboard, to have data be read through these cloud providers, you can now run: \ntorch-tb-profiler[blob] \ntorch-tb-profiler[gs] \ntorch-tb-profiler[s3] \n\npip install torch-tb-profiler[blob], pip install torch-tb-profiler[gs], or pip install torch-tb-profiler[S3] to have data be read through these cloud providers. For more information, please refer to this README. \nJump to Source Code:\nOne of the great benefits of having both TensorBoard and the PyTorch Profiler being integrated directly in Visual Studio Code (VS Code) is the ability to directly jump to the source code (file and line) from the profiler stack traces. VS Code Python Extension now supports TensorBoard Integration.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "Jump to source is ONLY available when Tensorboard is launched within VS Code. Stack tracing will appear on the plugin UI if the profiling with_stack=True. When you click on a stack trace from the PyTorch Profiler, VS Code will automatically open the corresponding file side by side and jump directly to the line of code of interest for you to debug. This allows you to quickly make actionable optimizations and changes to your code based on the profiling results and suggestions. \n\n\nGify: Jump to Source using Visual Studio Code Plug In UI \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\nFor how to optimize batch size performance, check out the step-by-step tutorial here. PyTorch Profiler is also integrated with PyTorch Lightning and you can simply launch your lightning training jobs with --trainer.profiler=pytorch flag to generate the traces. Check out an example here. \nWhat\u2019s Next for the PyTorch Profiler?\nYou just saw how PyTorch Profiler can help optimize a model. You can now try the Profiler by pip install torch-tb-profiler to optimize your PyTorch model.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "Look out for an advanced version of this tutorial in the future. We are also thrilled to continue to bring state-of-the-art tool to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue here. \nFor new and exciting features coming up with PyTorch Profiler, follow @PyTorch on Twitter and check us out on pytorch.org. \nAcknowledgements\nThe author would like to thank the contributions of the following individuals to this piece. From the Facebook side: Geeta Chauhan, Gisle Dankel, Woo Kim, Sam Farahzad, and Mark Saroufim. On the Microsoft side: AI Framework engineers (Teng Gao, Mike Guo, and Yang Gu), Guoliang Hua, and Thuy Nguyen.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"}
{"text": "\nlayout: blog_detail\ntitle: 'New PyTorch library releases including TorchVision Mobile, TorchAudio I/O, and more'\nauthor: Team PyTorch \n\nToday, we are announcing updates to a number of PyTorch libraries, alongside the PyTorch 1.8 release. The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio as well as new version of TorchCSPRNG. These releases include a number of new features and improvements and, along with the PyTorch 1.8 release, provide a broad set of updates for the PyTorch community to build on and leverage. \nSome highlights include:\n* TorchVision - Added support for PyTorch Mobile including Detectron2Go (D2Go), auto-augmentation of data during training, on the fly type conversion, and AMP autocasting.", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "\nTorchAudio - Major improvements to I/O, including defaulting to sox_io backend and file-like object support. Added Kaldi Pitch feature and support for CMake based build allowing TorchAudio to better support no-Python environments.\nTorchText - Updated the dataset loading API to be compatible with standard PyTorch data loading utilities.\nTorchCSPRNG - Support for cryptographically secure pseudorandom number generators for PyTorch is now stable with new APIs for AES128 ECB/CTR and CUDA support on Windows.\nPlease note that, starting in PyTorch 1.6, features are classified as Stable, Beta, and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can see the detailed announcement here.\n\nTorchVision 0.9.0\n[Stable] TorchVision Mobile: Operators, Android Binaries, and Tutorial", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "We are excited to announce the first on-device support and binaries for a PyTorch domain library. We have seen significant appetite in both research and industry for on-device vision support to allow low latency, privacy friendly, and resource efficient mobile vision experiences. You can follow this new tutorial to build your own Android object detection app using TorchVision operators, D2Go, or your own custom operators and model.\n\n\n\n[Stable] New Mobile models for Classification, Object Detection and Semantic Segmentation\nWe have added support for the MobileNetV3 architecture and provided pre-trained weights for Classification, Object Detection and Segmentation. It is easy to get up and running with these models, just import and load them as you would any torchvision model:\n```python\nimport torch\nimport torchvision\nClassification", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "import torch\nimport torchvision\nClassification\nx = torch.rand(1, 3, 224, 224)\nm_classifier = torchvision.models.mobilenet_v3_large(pretrained=True)\nm_classifier.eval()\npredictions = m_classifier(x)\nQuantized Classification\nx = torch.rand(1, 3, 224, 224)\nm_classifier = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\nm_classifier.eval()\npredictions = m_classifier(x)\nObject Detection: Highly Accurate High Resolution Mobile Model\nx = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)\nSemantic Segmentation: Highly Accurate Mobile Model\nx = torch.rand(1, 3, 520, 520)\nm_segmenter = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\nm_segmenter.eval()\npredictions = m_segmenter(x)\n```", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "predictions = m_segmenter(x)\nThese models are highly competitive with TorchVision\u2019s existing models on resource efficiency, speed, and accuracy. See our [release notes](https://github.com/pytorch/vision/releases) for detailed performance metrics.\n### [Stable] AutoAugment\n[AutoAugment](https://arxiv.org/pdf/1805.09501.pdf) is a common Data Augmentation technique that can increase the accuracy of Scene Classification models. Though the data augmentation policies are directly linked to their trained dataset, empirical studies show that ImageNet policies provide significant improvements when applied to other datasets. We\u2019ve implemented 3 policies learned on the following datasets: ImageNet, CIFA10 and SVHN. These can be used standalone or mixed-and-matched with existing transforms:\n```python\nfrom torchvision import transforms\nt = transforms.AutoAugment()\ntransformed = t(image)\ntransform=transforms.Compose([\n transforms.Resize(256),\n transforms.AutoAugment(),\n transforms.ToTensor()])\n", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "transforms.ToTensor()])\n```\nOther New Features for TorchVision\n\n[Stable] All read and decode methods in the io.image package now support:\nPalette, Grayscale Alpha and RBG Alpha image types during PNG decoding\nOn-the-fly conversion of image from one type to the other during read\n[Stable] WiderFace dataset\n[Stable] Improved FasterRCNN speed and accuracy by introducing a score threshold on RPN\n[Stable] Modulation input for DeformConv2D\n[Stable] Option to write audio to a video file\n[Stable] Utility to draw bounding boxes\n[Beta] Autocast support in all Operators\nFind the full TorchVision release notes here.\n\nTorchAudio 0.8.0\nI/O Improvements\nWe have continued our work from the previous release to improve TorchAudio\u2019s I/O support, including:", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "\n[Stable] Changing the default backend to \u201csox_io\u201d (for Linux/macOS), and updating the \u201csoundfile\u201d backend\u2019s interface to align with that of \u201csox_io\u201d. The legacy backend and interface are still accessible, though it is strongly discouraged to use them.\n[Stable] File-like object support in both \"sox_io\" backend, \u201csoundfile\u201d backend and sox_effects.\n[Stable] New options to change the format, encoding, and bits_per_sample when saving.\n[Stable] Added GSM, HTK, AMB, AMR-NB and AMR-WB format support to the \u201csox_io\u201d backend.\n[Beta] A new functional.apply_codec function which can degrade audio data by applying audio codecs supported by \u201csox_io\u201d backend in an in-memory fashion.\nHere are some examples of features landed in this release:\n```python\n\nLoad audio over HTTP\nwith requests.get(URL, stream=True) as response:\n waveform, sample_rate = torchaudio.load(response.raw)\nSaving to Bytes buffer as 32-bit floating-point PCM\nbuffer_ = io.BytesIO()\ntorchaudio.save(\n buffer_, waveform, sample_rate,", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "buffer_, waveform, sample_rate,\n format=\"wav\", encoding=\"PCM_S\", bits_per_sample=16)\nApply effects while loading audio from S3\nclient = boto3.client('s3')\nresponse = client.get_object(Bucket=S3_BUCKET, Key=S3_KEY)\nwaveform, sample_rate = torchaudio.sox_effects.apply_effect_file(\n response['Body'],\n [[\"lowpass\", \"-1\", \"300\"], [\"rate\", \"8000\"]])\nApply GSM codec to Tensor\nencoded = torchaudio.functional.apply_codec(\n waveform, sample_rate, format=\"gsm\")\n```\nCheck out the revamped audio preprocessing tutorial, Audio Manipulation with TorchAudio.\n[Stable] Switch to CMake-based build", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "[Stable] Switch to CMake-based build\nIn the previous version of TorchAudio, it was utilizing CMake to build third party dependencies. Starting in 0.8.0, TorchaAudio uses CMake to build its C++ extension. This will open the door to integrate TorchAudio in non-Python environments (such as C++ applications and mobile). We will continue working on adding example applications and mobile integrations.\n[Beta] Improved and New Audio Transforms", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "[Beta] Improved and New Audio Transforms\nWe have added two widely requested operators in this release: the SpectralCentroid transform and the Kaldi Pitch feature extraction (detailed in \"A pitch extraction algorithm tuned for automatic speech recognition\"). We\u2019ve also exposed a normalization method to Mel transforms, and additional STFT arguments to Spectrogram. We would like to ask our community to continue to raise feature requests for core audio processing features like these!\nCommunity Contributions", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "Community Contributions\nWe had more contributions from the open source community in this release than ever before, including several completely new features. We would like to extend our sincere thanks to the community. Please check out the newly added CONTRIBUTING.md for ways to contribute code, and remember that reporting bugs and requesting features are just as valuable. We will continue posting well-scoped work items as issues labeled \u201chelp-wanted\u201d and \u201ccontributions-welcome\u201d for anyone who would like to contribute code, and are happy to coach new contributors through the contribution process.\nFind the full TorchAudio release notes here.\nTorchText 0.9.0\n[Beta] Dataset API Updates", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "TorchText 0.9.0\n[Beta] Dataset API Updates\nIn this release, we are updating TorchText\u2019s dataset API to be compatible with PyTorch data utilities, such as DataLoader, and are deprecating TorchText\u2019s custom data abstractions such as Field. The updated datasets are simple string-by-string iterators over the data. For guidance about migrating from the legacy abstractions to use modern PyTorch data utilities, please refer to our migration guide.\nThe text datasets listed below have been updated as part of this work. For examples of how to use these datasets, please refer to our end-to-end text classification tutorial.\n* Language modeling: WikiText2, WikiText103, PennTreebank, EnWik9", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "\nText classification: AG_NEWS, SogouNews, DBpedia, YelpReviewPolarity, YelpReviewFull, YahooAnswers, AmazonReviewPolarity, AmazonReviewFull, IMDB\nSequence tagging: UDPOS, CoNLL2000Chunking\nTranslation: IWSLT2016, IWSLT2017\nQuestion answer: SQuAD1, SQuAD2\nFind the full TorchText release notes here.\n\n[Stable] TorchCSPRNG 0.2.0\nWe released TorchCSPRNG in August 2020, a PyTorch C++/CUDA extension that provides cryptographically secure pseudorandom number generators for PyTorch. Today, we are releasing the 0.2.0 version and designating the library as stable. This release includes a new API for encrypt/decrypt with AES128 ECB/CTR as well as CUDA 11 and Windows CUDA support.\nFind the full TorchCSPRNG release notes here.", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "Thanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the discussion forums and open GitHub issues.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"}
{"text": "Torch Distributed ElasticMakes distributed PyTorch fault-tolerant and elastic.\nGet Started\n===========\nUsage\n^^^^^\n* Quickstart\n* Train script\n* Examples\nDocumentation\n=============\nAPI\n^^^\n* torchrun (Elastic Launch)\n* Elastic Agent\n* Multiprocessing\n* Error Propagation\n* Rendezvous\n* Expiration Timers\n* Metrics\n* Events\nAdvanced\n^^^^^^^^\n* Customization\nPlugins\n^^^^^^^\n* TorchElastic Kubernetes", "source": "https://pytorch.org/docs/stable/distributed.elastic.html", "category": "pytorch docs"}
{"text": "torch.overridesThis module exposes various helper functions for the\n\"torch_function\" protocol. See Extending torch for more detail on\nthe \"torch_function\" protocol.\nFunctions\n=========\ntorch.overrides.get_ignored_functions()\n Return public functions that cannot be overridden by\n \"torch_function\".\n Returns:\n A tuple of functions that are publicly available in the torch\n API but cannot be overridden with \"torch_function\". Mostly\n this is because none of the arguments of these functions are\n tensors or tensor-likes.\n Return type:\n Set[Callable]\n -[ Examples ]-\n\n\n\ntorch.Tensor.as_subclass in torch.overrides.get_ignored_functions()\n True\ntorch.add in torch.overrides.get_ignored_functions()\n False\ntorch.overrides.get_overridable_functions()\n List functions that are overridable via torch_function\n Returns:\n A dictionary that maps namespaces that contain overridable\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"}
{"text": "functions to functions in that namespace that can be overridden.\n Return type:\n Dict[Any, List[Callable]]\ntorch.overrides.resolve_name(f)\n Get a human readable string name for a function passed to\n torch_function\n Parameters:\n f (Callable) -- Function to resolve the name of.\n Returns:\n Name of the function; if eval'ed it should give back the input\n function.\n Return type:\n str\ntorch.overrides.get_testing_overrides()\n Return a dict containing dummy overrides for all overridable\n functions\n Returns:\n A dictionary that maps overridable functions in the PyTorch API\n to lambda functions that have the same signature as the real\n function and unconditionally return -1. These lambda functions\n are useful for testing API coverage for a type that defines\n \"torch_function\".\n Return type:\n Dict[Callable, Callable]\n -[ Examples ]-\n\n\n\nimport inspect\nmy_add = torch.overrides.get_testing_overrides()[torch.add]\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"}
{"text": "\n\n\ninspect.signature(my_add)\n \ntorch.overrides.handle_torch_function(public_api, relevant_args, args, kwargs)\n Implement a function with checks for \"torch_function\"\n overrides.\n See torch::autograd::handle_torch_function for the equivalent of\n this function in the C++ implementation.\n Parameters:\n * public_api (function) -- Function exposed by the public\n torch API originally called like \"public_api(args, kwargs)\"\n on which arguments are now being checked.\n * relevant_args (iterable) -- Iterable of arguments to\n check for torch_function methods.\n * args (tuple) -- Arbitrary positional arguments\n originally passed into \"public_api\".\n * kwargs (tuple) -- Arbitrary keyword arguments originally\n passed into \"public_api\".\n Returns:\n Result from calling \"implementation\" or an \"torch_function\"\n method, as appropriate.\n Return type:\n object\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"}
{"text": "Return type:\n object\n :raises TypeError : if no implementation is found.:\n -[ Example ]-\n\n\n\ndef func(a):\n ... if has_torch_function_unary(a):\n ... return handle_torch_function(func, (a,), a)\n ... return a + 0\ntorch.overrides.has_torch_function()\n Check for torch_function implementations in the elements of an\n iterable or if a torch_function mode is enabled. Considers\n exact \"Tensor\" s and \"Parameter\" s non-dispatchable. Use this to\n guard a call to \"handle_torch_function()\"; don't use it to test if\n something is Tensor-like, use \"is_tensor_like()\" instead. :param\n relevant_args: Iterable or arguments to check for\n torch_function methods. :type relevant_args: iterable\n Returns:\n True if any of the elements of relevant_args have\n torch_function implementations, False otherwise.\n Return type:\n bool\n See also:\n \"torch.is_tensor_like\"\n Checks if something is a Tensor-like, including an exact\n \"Tensor\".\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"}
{"text": "\"Tensor\".\ntorch.overrides.is_tensor_like(inp)\n Returns \"True\" if the passed-in input is a Tensor-like.\n Currently, this occurs whenever there's a \"torch_function\"\n attribute on the type of the input.\n -[ Examples ]-\n A subclass of tensor is generally a Tensor-like.\n\n\n\nclass SubTensor(torch.Tensor): ...\nis_tensor_like(SubTensor([0]))\n True\n Built-in or user types aren't usually Tensor-like.\nis_tensor_like(6)\n False\nis_tensor_like(None)\n False\nclass NotATensor: ...\nis_tensor_like(NotATensor())\n False\n But, they can be made Tensor-like by implementing\n torch_function.\nclass TensorLike:\n ... @classmethod\n ... def torch_function(cls, func, types, args, kwargs):\n ... return -1\nis_tensor_like(TensorLike())\n True\ntorch.overrides.is_tensor_method_or_property(func)\n Returns True if the function passed in is a handler for a method or\n property belonging to \"torch.Tensor\", as passed into\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"}
{"text": "\"torch_function\".\n Note:\n For properties, their \"get\" method must be passed in.\n This may be needed, in particular, for the following reasons:\n 1. Methods/properties sometimes don't contain a module slot.\n 2. They require that the first passed-in argument is an instance of\n \"torch.Tensor\".\n -[ Examples ]-\n\n\n\nis_tensor_method_or_property(torch.Tensor.add)\n True\nis_tensor_method_or_property(torch.add)\n False\n Return type:\n bool\ntorch.overrides.wrap_torch_function(dispatcher)\n Wraps a given function with \"torch_function\" -related\n functionality.\n Parameters:\n dispatcher (Callable) -- A callable that returns an\n iterable of Tensor-likes passed into the function.\n Note:\n This decorator may reduce the performance of your code.\n Generally, it's enough to express your code as a series of\n functions that, themselves, support torch_function. If you\n find yourself in the rare situation where this is not the case,\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"}
{"text": "e.g. if you're wrapping a low-level library and you also need it\n to work for Tensor-likes, then this function is available.\n -[ Examples ]-\n\n\n\ndef dispatcher(a): # Must have the same signature as func\n ... return (a,)\n@torch.overrides.wrap_torch_function(dispatcher)\ndef func(a): # This will make func dispatchable by torch_function\n ... return a + 0\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"}
{"text": "Quantization Accuracy DebuggingThis document provides high level strategies for improving\nquantization accuracy. If a quantized model has error compared to the\noriginal model, we can categorize the error into:\n1. data insensitive error - caused by intrinsic model quantization\n error, large portion of input data has large error\n2. data sensitive error - caused by outlier input data, small\n portion of input data has large error\n3. implementation error - quantized kernel is not matching\n reference implementation\nData insensitive error\n======================\nGeneral tips\n\n\nFor PTQ, ensure that the data you are calibrating with is\n representative of your dataset. For example, for a classification\n problem a general guideline is to have multiple samples in every\n category, and the overall number of samples should be at least 100.\n There is no penalty for calibrating with more data other than\n calibration time.\n", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"}
{"text": "calibration time.\n2. If your model has Conv-BN or Linear-BN patterns, consider fusing\n them. If you are using FX graph mode quantization, this is done\n automatically by the workflow. If you are using Eager mode\n quantization, you can do this manually with the\n \"torch.ao.quantization.fuse_modules\" API.\n3. Increase the precision of dtype of the problematic ops. Usually,\n fp32 will have the highest accuracy, followed by fp16, followed by\n dynamically quantized int8, followed by statically quantized int8.\n 1. Note: this is trading off performance for accuracy.\n 2. Note: availability of kernels per dtype per op can vary by\n backend.\n 3. Note: dtype conversions add an additional performance cost. For\n example, \"fp32_op -> quant -> int8_op -> dequant -> fp32_op ->\n quant -> int8_op -> dequant\" will have a performance penalty\n compared to \"fp32_op -> fp32_op -> quant -> int8_op -> int8_op\n -> dequant\" because of a higher number of required dtype\n conversions.", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"}
{"text": "conversions.\n4. If you are using PTQ, consider using QAT to recover some of the\n accuracy loss from quantization.\nInt8 quantization tips\n\n\nIf you are using per-tensor weight quantization, consider using\n per-channel weight quantization.\nIf you are doing inference on fbgemm, ensure that you set the\n reduce_range argument to False if your CPU is Cooperlake or\n newer, and to True otherwise.\nAudit the input activation distribution variation across different\n samples. If this variation is high, the layer may be suitable for\n dynamic quantization but not static quantization.\nData sensitive error\n====================\nIf you are using static quantization and a small portion of your input\ndata is resulting in high quantization error, you can try:\nAdjust your calibration dataset to make it more representative of\n your inference dataset.\nManually inspect (using Numeric Suite) which layers have high\n", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"}
{"text": "quantization error. For these layers, consider leaving them in\n floating point or adjusting the observer settings to choose a\n better scale and zero_point.\nImplementation error\n====================\nIf you are using PyTorch quantization with your own backend you may\nsee differences between the reference implementation of an operation\n(such as \"dequant -> op_fp32 -> quant\") and the quantized\nimplementation (such as op_int8) of the op on the target hardware.\nThis could mean one of two things:\n1. the differences (usually small) are expected due to specific\n behavior of the target kernel on the target hardware compared to\n fp32/cpu. An example of this is accumulating in an integer dtype.\n Unless the kernel guarantees bitwise equivalency with the reference\n implementation, this is expected.\n2. the kernel on the target hardware has an accuracy issue. In this\n case, reach out to the kernel developer.\nNumerical Debugging Tooling (prototype)Warning:", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"}
{"text": "\nWarning:\n Numerical debugging tooling is early prototype and subject to\n change.\n* torch.ao.ns._numeric_suite Eager mode numeric suite\n* torch.ao.ns._numeric_suite_fx FX numeric suite", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"}
{"text": "JIT Utils - torch.utils.jit\n", "source": "https://pytorch.org/docs/stable/jit_utils.html", "category": "pytorch docs"}
{"text": "Distributed OptimizersWarning:\n Distributed optimizer is not currently supported when using CUDA\n tensors\n\"torch.distributed.optim\" exposes DistributedOptimizer, which takes a\nlist of remote parameters (\"RRef\") and runs the optimizer locally on\nthe workers where the parameters live. The distributed optimizer can\nuse any of the local optimizer Base class to apply the gradients on\neach worker.\nclass torch.distributed.optim.DistributedOptimizer(optimizer_class, params_rref, args, *kwargs)\n DistributedOptimizer takes remote references to parameters\n scattered across workers and applies the given optimizer locally\n for each parameter.\n This class uses \"get_gradients()\" in order to retrieve the\n gradients for specific parameters.\n Concurrent calls to \"step()\", either from the same or different\n clients, will be serialized on each worker -- as each worker's\n optimizer can only work on one set of gradients at a time. However,", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "there is no guarantee that the full forward-backward-optimizer\n sequence will execute for one client at a time. This means that the\n gradients being applied may not correspond to the latest forward\n pass executed on a given worker. Also, there is no guaranteed\n ordering across workers.\n DistributedOptimizer creates the local optimizer with TorchScript\n enabled by default, so that optimizer updates are not blocked by\n the Python Global Interpreter Lock (GIL) in the case of\n multithreaded training (e.g. Distributed Model Parallel). This\n feature is currently enabled for most optimizers. You can also\n follow the recipe in PyTorch tutorials to enable TorchScript\n support for your own custom optimizers.\n Parameters:\n * optimizer_class (optim.Optimizer) -- the class of\n optimizer to instantiate on each worker.\n * params_rref (list[RRef]) -- list of RRefs to local\n or remote parameters to optimize.", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "or remote parameters to optimize.\n * args -- arguments to pass to the optimizer constructor on\n each worker.\n * kwargs -- arguments to pass to the optimizer constructor\n on each worker.\n Example::\n >>> import torch.distributed.autograd as dist_autograd\n >>> import torch.distributed.rpc as rpc\n >>> from torch import optim\n >>> from torch.distributed.optim import DistributedOptimizer\n >>>\n >>> with dist_autograd.context() as context_id:\n >>> # Forward pass.\n >>> rref1 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 3))\n >>> rref2 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 1))\n >>> loss = rref1.to_here() + rref2.to_here()\n >>>\n >>> # Backward pass.\n >>> dist_autograd.backward(context_id, [loss.sum()])\n >>>\n >>> # Optimizer.\n >>> dist_optim = DistributedOptimizer(\n >>> optim.SGD,\n >>> [rref1, rref2],\n >>> lr=0.05,\n >>> )", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "\n\n\n lr=0.05,\n >>> )\n >>> dist_optim.step(context_id)\n\nstep(context_id)\n Performs a single optimization step.\n This will call \"torch.optim.Optimizer.step()\" on each worker\n containing parameters to be optimized, and will block until all\n workers return. The provided \"context_id\" will be used to\n retrieve the corresponding \"context\" that contains the gradients\n that should be applied to the parameters.\n Parameters:\n context_id -- the autograd context id for which we should\n run the optimizer step.\nclass torch.distributed.optim.PostLocalSGDOptimizer(optim, averager)\n Wraps an arbitrary \"torch.optim.Optimizer\" and runs post-local SGD,\n This optimizer runs local optimizer at every step. After the warm-\n up stage, it averages parameters periodically afer the local\n optimizer is applied.\n Parameters:\n * optim (Optimizer) -- The local optimizer.\n * averager (ModelAverager) -- A model averager instance to\n\n\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "run post-localSGD algorithm.\n Example:\n >>> import torch\n >>> import torch.distributed as dist\n >>> import torch.distributed.algorithms.model_averaging.averagers as averagers\n >>> import torch.nn as nn\n >>> from torch.distributed.optim import PostLocalSGDOptimizer\n >>> from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (\n >>> PostLocalSGDState,\n >>> post_localSGD_hook,\n >>> )\n >>>\n >>> model = nn.parallel.DistributedDataParallel(\n >>> module, device_ids=[rank], output_device=rank\n >>> )\n >>>\n >>> # Register a post-localSGD communication hook.\n >>> state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100)\n >>> model.register_comm_hook(state, post_localSGD_hook)\n >>>\n >>> # Create a post-localSGD optimizer that wraps a local optimizer.\n >>> # Note that warmup_steps used in PostLocalSGDOptimizer must be the same as", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "\n\n\nstart_localSGD_iter used in PostLocalSGDState.\n >>> local_optim = torch.optim.SGD(params=model.parameters(), lr=0.01)\n >>> opt = PostLocalSGDOptimizer(\n >>> optim=local_optim,\n >>> averager=averagers.PeriodicModelAverager(period=4, warmup_steps=100)\n >>> )\n >>>\n >>> # In the first 100 steps, DDP runs global gradient averaging at every step.\n >>> # After 100 steps, DDP runs gradient averaging within each subgroup (intra-node by default),\n >>> # and post-localSGD optimizer runs global model averaging every 4 steps after applying the local optimizer.\n >>> for step in range(0, 200):\n >>> opt.zero_grad()\n >>> loss = loss_fn(output, labels)\n >>> loss.backward()\n >>> opt.step()\n\nload_state_dict(state_dict)\n This is the same as \"torch.optim.Optimizer\" \"load_state_dict()\",\n but also restores model averager's step value to the one saved\n in the provided \"state_dict\".\n\n\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
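The warm-up/period interaction described in the comments above (DDP averages gradients globally for the first `warmup_steps` steps, then the post-localSGD optimizer averages the model every `period` steps) can be sketched as a small stand-alone schedule helper. This is an illustrative sketch only: `averages_model_at` is a hypothetical name, not part of the PyTorch API, and the exact boundary step in the real `PeriodicModelAverager` may differ.

```python
def averages_model_at(step, warmup_steps=100, period=4):
    """Sketch of the post-localSGD schedule: return True if global
    model averaging would run after the local optimizer step `step`."""
    if step < warmup_steps:
        # Warm-up stage: DDP still averages gradients globally at
        # every step, so the model averager stays idle.
        return False
    # After warm-up, parameters are averaged every `period` steps.
    return (step - warmup_steps) % period == 0
```

With the defaults from the example above (warmup_steps=100, period=4), no model averaging happens before step 100, and it fires every fourth step afterwards.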
{"text": "in the provided \"state_dict\".\n If there is no \"\"step\"\" entry in \"state_dict\", it will raise a\n warning and initialize the model averager's step to 0.\n state_dict()\n This is the same as \"torch.optim.Optimizer\" \"state_dict()\", but\n adds an extra entry to record model averager's step to the\n checkpoint to ensure reload does not cause unnecessary warm up\n again.\n step()\n Performs a single optimization step (parameter update).\nclass torch.distributed.optim.ZeroRedundancyOptimizer(params, optimizer_class, process_group=None, parameters_as_bucket_view=False, overlap_with_ddp=False, **defaults)\n This class wraps an arbitrary \"optim.Optimizer\" and shards its\n states across ranks in the group as described by ZeRO. The local\n optimizer instance in each rank is only responsible for updating\n approximately \"1 / world_size\" parameters and hence only needs to\n keep \"1 / world_size\" optimizer states. After parameters are", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "updated locally, each rank will broadcast its parameters to all\n other peers to keep all model replicas in the same state.\n \"ZeroRedundancyOptimizer\" can be used in conjunction with\n \"torch.nn.parallel.DistributedDataParallel\" to reduce per-rank peak\n memory consumption.\n \"ZeroRedundancyOptimizer\" uses a sorted-greedy algorithm to pack a\n number of parameters at each rank. Each parameter belongs to a\n single rank and is not divided among ranks. The partition is\n arbitrary and might not match the parameter registration or\n usage order.\n Parameters:\n params (\"Iterable\") -- an \"Iterable\" of \"torch.Tensor\" s or\n \"dict\" s giving all parameters, which will be sharded across\n ranks.\n Keyword Arguments:\n * optimizer_class (\"torch.nn.Optimizer\") -- the class of the\n local optimizer.\n * process_group (\"ProcessGroup\", optional) --\n \"torch.distributed\" \"ProcessGroup\" (default:\n \"dist.group.WORLD\" initialized by", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "\"dist.group.WORLD\" initialized by\n \"torch.distributed.init_process_group()\").\n * parameters_as_bucket_view (bool, optional) -- if\n \"True\", parameters are packed into buckets to speed up\n communication, and \"param.data\" fields point to bucket views\n at different offsets; if \"False\", each individual parameter is\n communicated separately, and each \"param.data\" stays intact\n (default: \"False\").\n * overlap_with_ddp (bool, optional) -- if \"True\",\n \"step()\" is overlapped with \"DistributedDataParallel\" 's\n gradient synchronization; this requires (1) either a\n functional optimizer for the \"optimizer_class\" argument or one\n with a functional equivalent and (2) registering a DDP\n communication hook constructed from one of the functions in\n \"ddp_zero_hook.py\"; parameters are packed into buckets\n matching those in \"DistributedDataParallel\", meaning that the", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "\"parameters_as_bucket_view\" argument is ignored. If \"False\",\n \"step()\" runs disjointly after the backward pass (per normal).\n (default: \"False\")\n * **defaults -- any trailing arguments, which are forwarded\n to the local optimizer.\n Example:\n >>> import torch.nn as nn\n >>> from torch.distributed.optim import ZeroRedundancyOptimizer\n >>> from torch.nn.parallel import DistributedDataParallel as DDP\n >>> model = nn.Sequential(*[nn.Linear(2000, 2000).to(rank) for _ in range(20)])\n >>> ddp = DDP(model, device_ids=[rank])\n >>> opt = ZeroRedundancyOptimizer(\n >>> ddp.parameters(),\n >>> optimizer_class=torch.optim.Adam,\n >>> lr=0.01\n >>> )\n >>> ddp(inputs).sum().backward()\n >>> opt.step()\n Warning:\n Currently, \"ZeroRedundancyOptimizer\" requires that all of the\n passed-in parameters are the same dense type.\n Warning:\n If you pass \"overlap_with_ddp=True\", be wary of the following:", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "Given the way that overlapping \"DistributedDataParallel\" with\n \"ZeroRedundancyOptimizer\" is currently implemented, the first two\n or three training iterations do not perform parameter updates in\n the optimizer step, depending on if \"static_graph=False\" or\n \"static_graph=True\", respectively. This is because it needs\n information about the gradient bucketing strategy used by\n \"DistributedDataParallel\", which is not finalized until the\n second forward pass if \"static_graph=False\" or until the third\n forward pass if \"static_graph=True\". To adjust for this, one\n option is to prepend dummy inputs.\n Warning:\n ZeroRedundancyOptimizer is experimental and subject to change.\n add_param_group(param_group)\n Add a parameter group to the \"Optimizer\" 's \"param_groups\".\n This can be useful when fine tuning a pre-trained network, as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "as training progresses.\n Parameters:\n param_group (dict) -- specifies the parameters to be\n optimized and group-specific optimization options.\n Warning:\n This method handles updating the shards on all partitions but\n needs to be called on all ranks. Calling this on a subset of\n the ranks will cause the training to hang because\n communication primitives are called depending on the managed\n parameters and expect all the ranks to participate on the same\n set of parameters.\n consolidate_state_dict(to=0)\n Consolidate a list of \"state_dict\" s (one per rank) on the\n target rank.\n Parameters:\n to (int) -- the rank that receives the optimizer states\n (default: 0).\n Raises:\n RuntimeError -- if \"overlap_with_ddp=True\" and this\n method is called before this \"ZeroRedundancyOptimizer\"\n instance has been fully initialized, which happens once", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "\"DistributedDataParallel\" gradient buckets have been\n rebuilt.\n Warning:\n This needs to be called on all ranks.\n join_hook(**kwargs)\n Returns the ZeRO join hook, which enables training on uneven\n inputs by shadowing the collective communications in the\n optimizer step.\n Gradients must be properly set before this hook is called.\n Parameters:\n kwargs (dict) -- a \"dict\" containing any keyword\n arguments to modify the behavior of the join hook at run\n time; all \"Joinable\" instances sharing the same join context\n manager are forwarded the same value for \"kwargs\".\n This hook does not support any keyword arguments; i.e. \"kwargs\"\n is unused.\n load_state_dict(state_dict)\n Load the state pertaining to the given rank from the input\n \"state_dict\", updating the local optimizer as needed.\n Parameters:\n state_dict (dict) -- optimizer state; should be an", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "object returned from a call to \"state_dict()\".\n Raises:\n RuntimeError -- if \"overlap_with_ddp=True\" and this\n method is called before this \"ZeroRedundancyOptimizer\"\n instance has been fully initialized, which happens once\n \"DistributedDataParallel\" gradient buckets have been\n rebuilt.\n state_dict()\n Returns the last global optimizer state known to this rank.\n Raises:\n RuntimeError -- if \"overlap_with_ddp=True\" and this\n method is called before this \"ZeroRedundancyOptimizer\"\n instance has been fully initialized, which happens once\n \"DistributedDataParallel\" gradient buckets have been\n rebuilt; or if this method is called without a preceding call\n to \"consolidate_state_dict()\".\n Return type:\n Dict[str, Any]\n step(closure=None, **kwargs)\n Performs a single optimizer step and syncs parameters across all\n ranks.\n Parameters:", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
{"text": "ranks.\n Parameters:\n closure (Callable) -- a closure that re-evaluates the\n model and returns the loss; optional for most optimizers.\n Returns:\n Optional loss depending on the underlying local optimizer.\n Return type:\n Optional[float]", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"}
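The sorted-greedy packing that "ZeroRedundancyOptimizer" uses to assign whole parameters to ranks (described earlier: each parameter belongs to a single rank, the partition may not match registration order) can be illustrated with a small pure-Python sketch. `greedy_partition` is a hypothetical helper for illustration, not the actual implementation: parameters are sorted by size, largest first, and each is placed on the currently least-loaded rank.

```python
def greedy_partition(param_sizes, world_size):
    """Assign each parameter (by index) to a rank, sorted-greedy style:
    largest parameters first, each placed on the currently least-loaded
    rank. Returns one list of parameter indices per rank."""
    loads = [0] * world_size
    shards = [[] for _ in range(world_size)]
    # Visit parameters from largest to smallest so big tensors
    # don't pile up on a single rank.
    order = sorted(range(len(param_sizes)),
                   key=lambda i: param_sizes[i], reverse=True)
    for i in order:
        r = loads.index(min(loads))  # current least-loaded rank
        shards[r].append(i)
        loads[r] += param_sizes[i]
    return shards
```

For example, sizes `[10, 9, 2, 1]` over 2 ranks end up balanced at 11 elements each, and the resulting ownership order has nothing to do with registration order, matching the note above.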
{"text": "Distributed Autograd Design\n\nThis note presents the detailed design for distributed autograd and\nwalks through its internals. Make sure you're familiar\nwith Autograd mechanics and the Distributed RPC Framework before\nproceeding.\nBackground\n==========\nLet's say you have two nodes and a very simple model partitioned\nacross two nodes. This can be implemented using\n\"torch.distributed.rpc\" as follows:\n import torch\n import torch.distributed.rpc as rpc\n def my_add(t1, t2):\n return torch.add(t1, t2)\n # On worker 0:\n t1 = torch.rand((3, 3), requires_grad=True)\n t2 = torch.rand((3, 3), requires_grad=True)\n # Perform some computation remotely.\n t3 = rpc.rpc_sync(\"worker1\", my_add, args=(t1, t2))\n # Perform some computation locally based on remote result.\n t4 = torch.rand((3, 3), requires_grad=True)\n t5 = torch.mul(t3, t4)\n # Compute some loss.\n loss = t5.sum()\nThe main motivation behind distributed autograd is to enable running a", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "backward pass on such distributed models with the \"loss\" that we've\ncomputed and record appropriate gradients for all tensors that require\ngradients.\nAutograd recording during the forward pass\n==========================================\nPyTorch builds the autograd graph during the forward pass and this\ngraph is used to execute the backward pass. For more details see How\nautograd encodes the history.\nFor distributed autograd, we need to keep track of all RPCs during the\nforward pass to ensure the backward pass is executed appropriately.\nFor this purpose, we attach \"send\" and \"recv\" functions to the\nautograd graph when we perform an RPC.\n* The \"send\" function is attached to the source of the RPC and its\n output edges point to the autograd function for the input tensors of\n the RPC. The input for this function during the backward pass is\n received from the destination as the output of the appropriate\n \"recv\" function.\n* The \"recv\" function is attached to the destination of the RPC and", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "its inputs are retrieved from operators executed on the destination\n using the input tensors. The output gradients of this function are\n sent to the source node to the appropriate \"send\" function during\n the backward pass.\n* Each \"send-recv\" pair is assigned a globally unique\n \"autograd_message_id\" to uniquely identify the pair. This is useful\n to look up the corresponding function on a remote node during the\n backward pass.\n* For RRef, whenever we call \"torch.distributed.rpc.RRef.to_here()\" we\n attach an appropriate \"send-recv\" pair for the tensors involved.\nAs an example, this is what the autograd graph for our example above\nwould look like (t5.sum() excluded for simplicity):\n[image]\nDistributed Autograd Context\n============================\nEach forward and backward pass that uses distributed autograd is\nassigned a unique \"torch.distributed.autograd.context\" and this\ncontext has a globally unique \"autograd_context_id\". This context is\ncreated on each node as needed.", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
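The send/recv bookkeeping described above can be illustrated with a tiny stand-alone registry (hypothetical names; the real bookkeeping lives inside the RPC framework): each RPC records its "send" function under a globally unique `autograd_message_id`, and the backward pass later uses that id to look the function up on the remote node.

```python
import itertools

# Globally unique message ids (a toy stand-in for autograd_message_id).
_next_message_id = itertools.count()

class AutogradRegistry:
    """Per-node bookkeeping sketch: maps autograd_message_id -> the
    'send' function recorded during the forward pass, so the backward
    pass can find it when a gradient arrives over RPC."""
    def __init__(self):
        self.send_functions = {}

    def record_send(self, send_fn):
        message_id = next(_next_message_id)
        self.send_functions[message_id] = send_fn
        return message_id

    def lookup_send(self, message_id):
        return self.send_functions[message_id]

# Forward pass on the source node: record a send for one RPC.
source = AutogradRegistry()
msg_id = source.record_send(lambda grad: grad * 1.0)
```

During the backward pass the destination would ship `msg_id` (plus the context id) back with the gradient, and the source would run `source.lookup_send(msg_id)` to continue its local backward execution.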
{"text": "created on each node as needed.\nThis context serves the following purpose:\n1. Multiple nodes running distributed backward passes might accumulate\n gradients on the same tensor and as a result the \".grad\" field of\n the tensor would have gradients from a variety of distributed\n backward passes before we have the opportunity to run the\n optimizer. This is similar to calling \"torch.autograd.backward()\"\n multiple times locally. In order to provide a way of separating out\n the gradients for each backward pass, the gradients are accumulated\n in the \"torch.distributed.autograd.context\" for each backward pass.\n2. During the forward pass we store the \"send\" and \"recv\" functions\n for each autograd pass in this context. This ensures we hold\n references to the appropriate nodes in the autograd graph to keep\n it alive. In addition to this, it is easy to look up the\n appropriate \"send\" and \"recv\" functions during the backward pass.\n3. In general we also use this context to store some metadata for each", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "distributed autograd pass.\nFrom the user's perspective the autograd context is set up as follows:\n import torch.distributed.autograd as dist_autograd\n with dist_autograd.context() as context_id:\n loss = model.forward()\n dist_autograd.backward(context_id, loss)\nIt is important to note that your model's forward pass must be invoked\nwithin the distributed autograd context manager, as a valid context is\nneeded in order to ensure that all \"send\" and \"recv\" functions are\nstored properly to run the backward pass across all participating\nnodes.\nDistributed Backward Pass\n=========================\nIn this section we outline the challenge of computing dependencies\naccurately during a distributed backward pass and describe a couple of\nalgorithms (with tradeoffs) on how we can execute a distributed\nbackward pass.\nComputing dependencies\n\nConsider the following piece of code being run on a single machine:\n import torch\n a = torch.rand((3, 3), requires_grad=True)", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "a = torch.rand((3, 3), requires_grad=True)\n b = torch.rand((3, 3), requires_grad=True)\n c = torch.rand((3, 3), requires_grad=True)\n d = a + b\n e = b * c\n d.sum().backward()\nThis is what the autograd graph for the code above would look like:\n[image]\nThe first step the autograd engine performs as part of the backward\npass is computing the number of dependencies for each node in the\nautograd graph. This helps the autograd engine know when a node in the\ngraph is ready for execution. The numbers in brackets for \"add(1)\" and\n\"mul(0)\" denote the number of dependencies. As you can see, this means\nduring the backward pass the \"add\" node needs 1 input and the \"mul\"\nnode doesn't need any inputs (in other words doesn't need to be\nexecuted). The local autograd engine computes these dependencies by\ntraversing the graph from the root nodes (\"d\" in this case).\nThe fact that certain nodes in the autograd graph might not be\nexecuted in the backward pass poses a challenge for distributed", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "autograd. Consider this piece of code which uses RPC.\n import torch\n import torch.distributed.rpc as rpc\n a = torch.rand((3, 3), requires_grad=True)\n b = torch.rand((3, 3), requires_grad=True)\n c = torch.rand((3, 3), requires_grad=True)\n d = rpc.rpc_sync(\"worker1\", torch.add, args=(a, b))\n e = rpc.rpc_sync(\"worker1\", torch.mul, args=(b, c))\n loss = d.sum()\nThe associated autograd graph for the code above would be:\n[image]\nComputing dependencies of this distributed autograd graph is much more\nchallenging and requires some overhead (either in terms of computation\nor network communication).\nFor performance sensitive applications we can avoid a lot of overhead\nby assuming every \"send\" and \"recv\" function are valid as part of the\nbackward pass (most applications don't perform RPCs that aren't used).\nThis simplifies the distributed autograd algorithm and is much more\nefficient, but at the cost that the application needs to be aware of", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "the limitations. This algorithm is called the FAST mode algorithm and\nis described in detail below.\nIn the general case it might not be necessary that every \"send\" and\n\"recv\" function is valid as part of the backward pass. To address\nthis, we have proposed a SMART mode algorithm which is described in a\nlater section. Please note that currently, only the FAST mode\nalgorithm is implemented.\nFAST mode algorithm\n\nThe key assumption of this algorithm is that each \"send\" function has\na dependency of 1 when we run a backward pass. In other words, we\nassume we'll receive a gradient over RPC from another node.\nThe algorithm is as follows:\n1. We start from the worker which has the roots for the backward pass\n (all roots must be local).\n2. Lookup all the \"send\" functions for the current Distributed\n Autograd Context.\n3. Compute dependencies locally starting from the provided roots and\n all the \"send\" functions we retrieved.\n4. After computing dependencies, kick off the local autograd engine", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "with the provided roots.\n5. When the autograd engine executes the \"recv\" function, the \"recv\"\n function sends the input gradients via RPC to the appropriate\n worker. Each \"recv\" function knows the destination worker id since\n it is recorded as part of the forward pass. The \"recv\" function\n also sends over the \"autograd_context_id\" and \"autograd_message_id\"\n to the remote host.\n6. When this request is received on the remote host, we use the\n \"autograd_context_id\" and \"autograd_message_id\" to look up the\n appropriate \"send\" function.\n7. If this is the first time a worker has received a request for the\n given \"autograd_context_id\", it will compute dependencies locally\n as described in points 1-3 above.\n8. The \"send\" function retrieved in 6. is then enqueued for execution\n on the local autograd engine for that worker.\n9. Finally, instead of accumulating the gradients on the \".grad\" field\n of the Tensor, we accumulate the gradients separately per", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "Distributed Autograd Context. The gradients are stored in a\n \"Dict[Tensor, Tensor]\", which is basically a map from Tensor to its\n associated gradient and this map can be retrieved using the\n \"get_gradients()\" API.\nAs an example the complete code with distributed autograd would be as\nfollows:\n import torch\n import torch.distributed.autograd as dist_autograd\n import torch.distributed.rpc as rpc\n def my_add(t1, t2):\n return torch.add(t1, t2)\n # On worker 0:\n # Setup the autograd context. Computations that take\n # part in the distributed backward pass must be within\n # the distributed autograd context manager.\n with dist_autograd.context() as context_id:\n t1 = torch.rand((3, 3), requires_grad=True)\n t2 = torch.rand((3, 3), requires_grad=True)\n # Perform some computation remotely.\n t3 = rpc.rpc_sync(\"worker1\", my_add, args=(t1, t2))\n # Perform some computation locally based on remote result.\n t4 = torch.rand((3, 3), requires_grad=True)", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "t4 = torch.rand((3, 3), requires_grad=True)\n t5 = torch.mul(t3, t4)\n # Compute some loss.\n loss = t5.sum()\n # Run the backward pass.\n dist_autograd.backward(context_id, [loss])\n # Retrieve the gradients from the context.\n dist_autograd.get_gradients(context_id)\nThe distributed autograd graph with dependencies would be as follows\n(t5.sum() excluded for simplicity):\n[image]\nThe FAST mode algorithm applied to the above example would be as\nfollows:\n1. On \"Worker 0\" we start from the roots \"loss\" and \"send1\" to compute\n dependencies. As a result \"send1\" is marked with a dependency of 1\n and \"mul\" on \"Worker 0\" is marked with a dependency of 1.\n2. Now, we kick off the local autograd engine on \"Worker 0\". We first\n execute the \"mul\" function and accumulate its output in the autograd\n context as the gradient for \"t4\". Then, we execute \"recv2\" which\n sends the gradients to \"Worker 1\".\n3. Since this is the first time \"Worker 1\" has heard about this", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "backward pass, it starts dependency computation and marks the\n dependencies for \"send2\", \"add\" and \"recv1\" appropriately.\n4. Next, we enqueue \"send2\" on the local autograd engine of \"Worker\n 1\", which in turn executes \"add\" and \"recv1\".\n5. When \"recv1\" is executed it sends the gradients over to \"Worker 0\".\n6. Since \"Worker 0\" has already computed dependencies for this\n backward pass, it just enqueues and executes \"send1\" locally.\n7. Finally, gradients for \"t1\", \"t2\" and \"t4\" are accumulated in the\n Distributed Autograd Context.\nSMART mode algorithm\n\nFull details of this algorithm are still in the works, but for the\ngeneral idea you can refer to Distributed Autograd Algorithm Smart\nmode section in the RFC.\nDistributed Optimizer\n=====================\nThe \"DistributedOptimizer\" operates as follows:\n1. Takes a list of remote parameters (\"RRef\") to optimize. These could\n also be local parameters wrapped within a local \"RRef\".", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "\nTakes an \"Optimizer\" class as the local optimizer to run on all\n distinct \"RRef\" owners.\nThe distributed optimizer creates an instance of the local\n \"Optimizer\" on each of the worker nodes and holds an \"RRef\" to\n them.\nWhen \"torch.distributed.optim.DistributedOptimizer.step()\" is\n invoked, the distributed optimizer uses RPC to remotely execute all\n the local optimizers on the appropriate remote workers. A\n distributed autograd \"context_id\" must be provided as input to\n \"torch.distributed.optim.DistributedOptimizer.step()\". This is used\n by local optimizers to apply gradients stored in the corresponding\n context.\nIf multiple concurrent distributed optimizers are updating the same\n parameters on a worker, these updates are serialized via a lock.\nSimple end to end example\n=========================\nPutting it all together, the following is a simple end to end example\nusing distributed autograd and the distributed optimizer. If the code\n", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "is placed into a file called \"dist_autograd_simple.py\", it can be run\nwith the command \"MASTER_ADDR=\"localhost\" MASTER_PORT=29500 python\ndist_autograd_simple.py\":\n import torch\n import torch.multiprocessing as mp\n import torch.distributed.autograd as dist_autograd\n from torch.distributed import rpc\n from torch import optim\n from torch.distributed.optim import DistributedOptimizer\n def random_tensor():\n return torch.rand((3, 3), requires_grad=True)\n def _run_process(rank, dst_rank, world_size):\n name = \"worker{}\".format(rank)\n dst_name = \"worker{}\".format(dst_rank)\n # Initialize RPC.\n rpc.init_rpc(\n name=name,\n rank=rank,\n world_size=world_size\n )\n # Use a distributed autograd context.\n with dist_autograd.context() as context_id:\n # Forward pass (create references on remote nodes).\n rref1 = rpc.remote(dst_name, random_tensor)\n rref2 = rpc.remote(dst_name, random_tensor)", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
{"text": "loss = rref1.to_here() + rref2.to_here()\n # Backward pass (run distributed autograd).\n dist_autograd.backward(context_id, [loss.sum()])\n # Build DistributedOptimizer.\n dist_optim = DistributedOptimizer(\n optim.SGD,\n [rref1, rref2],\n lr=0.05,\n )\n # Run the distributed optimizer step.\n dist_optim.step(context_id)\n def run_process(rank, world_size):\n dst_rank = (rank + 1) % world_size\n _run_process(rank, dst_rank, world_size)\n rpc.shutdown()\n if __name__ == '__main__':\n # Run world_size workers\n world_size = 2\n mp.spawn(run_process, args=(world_size,), nprocs=world_size)", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"}
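The dependency counting described in the "Computing dependencies" section above (where "add" ends up with 1 dependency and "mul" with 0, so "mul" never executes) can be sketched as a plain graph traversal. This is an illustration only, not the real autograd engine; the node names mirror the `d = a + b; e = b * c` example.

```python
from collections import deque

def count_dependencies(edges, root):
    """Count backward-pass dependencies per node: the number of edges
    that reach it starting from `root`. `edges` maps each node to the
    nodes gradients flow into next. Unreached nodes keep a count of 0
    and therefore never need to execute."""
    deps = {n: 0 for n in edges}
    seen, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            deps[nxt] = deps.get(nxt, 0) + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return deps

# Graph from the example: d = a + b; e = b * c; backward from d only.
graph = {"d": ["add"], "add": ["a", "b"], "e": ["mul"], "mul": ["b", "c"]}
```

Running `count_dependencies(graph, "d")` reproduces the "add(1)" / "mul(0)" annotation from the text: only nodes reachable from the backward root accumulate dependencies.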
{"text": "torch.utils.mobile_optimizer\n\nWarning:\n This API is in beta and may change in the near future.\nTorch mobile supports the\n\"torch.utils.mobile_optimizer.optimize_for_mobile\" utility to run a\nlist of optimization passes on modules in eval mode. The method takes\nthe following parameters: a torch.jit.ScriptModule object, a\nblocklisting optimization set, a preserved method list, and a backend.\nFor the CPU backend, by default, if the optimization blocklist is None or\nempty, \"optimize_for_mobile\" will run the following optimizations:\n * Conv2D + BatchNorm fusion (blocklisting option\n mobile_optimizer.MobileOptimizerType.CONV_BN_FUSION): This\n optimization pass folds \"Conv2d-BatchNorm2d\" into \"Conv2d\" in the\n \"forward\" method of this module and all its submodules. The\n weight and bias of the \"Conv2d\" are correspondingly updated.\n * Insert and Fold prepacked ops (blocklisting option\n mobile_optimizer.MobileOptimizerType.INSERT_FOLD_PREPACK_OPS):", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"}
{"text": "This optimization pass rewrites the graph to replace 2D\n convolutions and linear ops with their prepacked counterparts.\n Prepacked ops are stateful ops in that they require some state\n to be created, such as prepacked weights, and they use this\n state during op execution. XNNPACK is one such\n backend that provides prepacked ops, with kernels optimized for\n mobile platforms (such as ARM CPUs). Prepacking of weights enables\n efficient memory access and thus faster kernel execution. At the\n moment, the \"optimize_for_mobile\" pass rewrites the graph to replace\n \"Conv2D/Linear\" with 1) an op that pre-packs weights for XNNPACK\n conv2d/linear ops and 2) an op that takes the pre-packed weights and\n activations as input and generates the output activations. Since step 1\n needs to be done only once, we fold the weight pre-packing such\n that it is done only once at model load time. This pass of\n \"optimize_for_mobile\" performs steps 1 and 2 and then folds, i.e. removes,", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"}
{"text": "weight pre-packing ops.\n * ReLU/Hardtanh fusion: XNNPACK ops support fusion of clamping.\n That is, clamping of the output activation is done as part of the\n kernel, including for 2D convolution and linear op kernels, so\n clamping effectively comes for free. Any op that can be\n expressed as a clamping op, such as \"ReLU\" or \"hardtanh\", can\n therefore be fused with the preceding \"Conv2D\" or \"linear\" op in\n XNNPACK. This pass rewrites the graph by finding \"ReLU/hardtanh\"\n ops that follow XNNPACK\n \"Conv2D/linear\" ops, written by the previous pass, and fuses them\n together.\n * Dropout removal (blocklisting option\n mobile_optimizer.MobileOptimizerType.REMOVE_DROPOUT): This\n optimization pass removes \"dropout\" and \"dropout_\" nodes from\n this module when training is false.\n * Conv packed params hoisting (blocklisting option\n mobile_optimizer.MobileOptimizerType.HOIST_CONV_PACKED_PARAMS):\n This optimization pass moves convolution packed params to the", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"}
{"text": "root module, so that the convolution structs can be deleted. This\n decreases model size without impacting numerics.\n * Add/ReLU fusion (blocklisting option\n mobile_optimizer.MobileOptimizerType.FUSE_ADD_RELU): This pass\n finds instances of \"relu\" ops that follow \"add\" ops and fuses\n them into a single \"add_relu\".\nFor the Vulkan backend, by default, if the optimization blocklist is None or\nempty, \"optimize_for_mobile\" will run the following optimization:\n * Automatic GPU Transfer (blocklisting option\n mobile_optimizer.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER):\n This optimization pass rewrites the graph so that moving input and\n output data to and from the GPU becomes part of the model.\n\"optimize_for_mobile\" will also invoke the freeze_module pass, which only\npreserves the \"forward\" method. If you have other methods that need to\nbe preserved, add them to the preserved method list and pass it into\nthe method.", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"}
{"text": "the method.\ntorch.utils.mobile_optimizer.optimize_for_mobile(script_module, optimization_blocklist=None, preserved_methods=None, backend='CPU')\n Parameters:\n * script_module (ScriptModule) -- An instance of torch\n script module with type of ScriptModule.\n * optimization_blocklist\n (Optional[Set[_MobileOptimizerType]]) -- A set\n with type of MobileOptimizerType. When the set is not passed,\n the optimization method will run all the optimization passes;\n otherwise, it will run only the optimization passes\n that are not included inside optimization_blocklist.\n * preserved_methods (Optional[List]) -- A list of\n methods that need to be preserved when the freeze_module pass is\n invoked\n * backend (str) -- Device type to use for running the\n result model ('CPU'(default), 'Vulkan' or 'Metal').\n Returns:\n A new optimized torch script module\n Return type:\n RecursiveScriptModule", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"}
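The blocklist semantics above (run every default CPU pass that is not in `optimization_blocklist`) can be mirrored by a tiny sketch. The pass names match the `MobileOptimizerType` options quoted earlier, but `passes_to_run` itself is a hypothetical helper for illustration, not a PyTorch API.

```python
# Default CPU passes, named after the MobileOptimizerType options above.
DEFAULT_CPU_PASSES = {
    "CONV_BN_FUSION",
    "INSERT_FOLD_PREPACK_OPS",
    "REMOVE_DROPOUT",
    "HOIST_CONV_PACKED_PARAMS",
    "FUSE_ADD_RELU",
}

def passes_to_run(blocklist=None):
    """Mirror optimize_for_mobile's blocklist rule: with no blocklist
    (None or empty), run every default pass; otherwise skip exactly
    the listed ones."""
    if not blocklist:
        return set(DEFAULT_CPU_PASSES)
    return DEFAULT_CPU_PASSES - set(blocklist)
```

Passing `{"CONV_BN_FUSION"}` as the blocklist, for instance, leaves the other four default passes enabled, which is the behavior the parameter description above specifies.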
{"text": "Quantization Backend Configuration\nFX Graph Mode Quantization allows the user to configure various\nquantization behaviors of an op in order to match the expectations of\ntheir backend.\nIn the future, this document will contain a detailed spec of these\nconfigurations.\nDefault values for native configurations\n========================================\nBelow is the output of the configuration for quantization of ops in\nx86 and qnnpack (PyTorch's default quantized backends).\nResults:\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'root_module': ,\n 'reference_quantized_module_for_root': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n 'fuser_method': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n 'fuser_method': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': clamp,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': contiguous,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'qat_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'input_type_to_index': {'weight': 1, 'bias': 2},\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'qat_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'input_type_to_index': {'weight': 1, 'bias': 2},\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'qat_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'input_type_to_index': {'weight': 1, 'bias': 2},\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'qat_module': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'qat_module': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'qat_module': ,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': detach,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': detach_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.quint4x2, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'qat_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n 'input_output_observed': False,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.quint4x2, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'input_output_observed': False,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.quint4x2, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'qat_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n 'input_output_observed': False,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.quint4x2, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'input_output_observed': False,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'input_type_to_index': {'weight': 2, 'bias': 3},\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': hardsigmoid,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': hardsigmoid_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'input_type_to_index': {'weight': 3, 'bias': 4},\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'input_type_to_index': {'weight': 2, 'bias': 3},\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'qat_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'input_type_to_index': {'weight': 1, 'bias': 2},\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'qat_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'qat_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': torch.nn.functional.max_pool1d,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': torch.nn.functional.max_pool2d,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': torch.nn.functional.max_pool3d,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
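The `torch.nn.functional.max_pool1d/2d/3d` records above all use `ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT`: the pooled output reuses the input's observer (and hence its scale/zero-point) rather than getting its own. A minimal stand-alone sketch of why that is sound for max pooling; the scale and zero-point values are made-up illustration values, not taken from the dump:

```python
# max is monotonic, so taking the max over raw quantized integers and
# dequantizing afterwards equals dequantizing first and then pooling.
# That is why the output can safely share the input's quantization
# parameters instead of being observed separately.
scale, zero_point = 0.05, 3  # hypothetical quint8 affine parameters

def dequantize(q):
    # Standard affine dequantization: real = (q - zero_point) * scale.
    return (q - zero_point) * scale

quantized_window = [7, 12, 5, 9]  # a hypothetical 1d pooling window

# Pool in the integer domain, then dequantize ...
pooled_then_dequant = dequantize(max(quantized_window))
# ... versus dequantize each value, then pool.
dequant_then_pooled = max(dequantize(q) for q in quantized_window)

assert pooled_then_dequant == dequant_then_pooled
```

The same reasoning applies to the other share-observer patterns in the dump (e.g. `mean`, `permute`): they are monotonic or value-preserving, so the input's quantization parameters remain valid for the output.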
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': mean,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': permute,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
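The `num_tensor_args_to_observation_type` entries above (the pattern names were stripped during extraction; in the native backend config these belong to binary ops such as `torch.add` and `torch.mul`, which is an assumption here) choose the observation type based on how many of the op's arguments are tensors. A self-contained sketch of that dispatch, with `ObservationType` re-declared locally for illustration:

```python
from enum import Enum

class ObservationType(Enum):
    # Local mirror of torch.ao.quantization.backend_config.ObservationType,
    # re-declared here so the sketch runs without torch installed.
    OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT = 0
    OUTPUT_SHARE_OBSERVER_WITH_INPUT = 1

# Mapping copied verbatim from the dump: with exactly one tensor argument
# (the other operand is a scalar), the output shares the input's observer;
# with zero or two tensor arguments it gets its own observer.
NUM_TENSOR_ARGS_TO_OBSERVATION_TYPE = {
    0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
    1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,
    2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
}

def observation_type_for(num_tensor_args: int) -> ObservationType:
    """Pick the observation type for a binary op by tensor-arg count."""
    return NUM_TENSOR_ARGS_TO_OBSERVATION_TYPE[num_tensor_args]
```

Intuitively, `tensor + scalar` only shifts the tensor's values, so the input's quantization parameters can be reused; `tensor + tensor` produces a new value range that needs its own observer.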
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895630>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'fuser_method': .fuser_method at 0x7f8f278955a0>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, , ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, , ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f278956c0>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'fuser_method': .fuser_method at 0x7f8f27895750>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, , ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, , ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f278957e0>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'fuser_method': .fuser_method at 0x7f8f27895870>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, , ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, , ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895900>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895990>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'bias_dtype': torch.float32,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': relu,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': relu_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895a20>,\n },\n {\n 'pattern': (, ),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895ab0>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895b40>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895bd0>,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': repeat,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': repeat_interleave,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': reshape,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': resize_,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': shape,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': sigmoid,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': sigmoid_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': size,\n 'dtype_configs': [", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "{\n 'pattern': size,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': squeeze,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': squeeze_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': tanh,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': tanh_,\n 'dtype_configs': [", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'pattern': tanh_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': transpose,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': unsqueeze,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': unsqueeze_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': view,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n }", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "torch.utils.checkpoint\nNote:\n Checkpointing is implemented by rerunning a forward-pass segment for\n each checkpointed segment during backward. This can cause\n persistent states like the RNG state to be advanced further than\n they would be without checkpointing. By default, checkpointing\n includes logic to juggle the RNG state such that checkpointed passes\n making use of RNG (through dropout for example) have deterministic\n output as compared to non-checkpointed passes. The logic to stash\n and restore RNG states can incur a moderate performance hit\n depending on the runtime of checkpointed operations. If\n deterministic output compared to non-checkpointed passes is not\n required, supply \"preserve_rng_state=False\" to \"checkpoint\" or\n \"checkpoint_sequential\" to omit stashing and restoring the RNG state\n during each checkpoint.\nThe stashing logic saves and restores the RNG\n state for the current device and the device of all cuda Tensor", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"}
{"text": "arguments to the \"run_fn\". However, the logic has no way to\n anticipate if the user will move Tensors to a new device within the\n \"run_fn\" itself. Therefore, if you move Tensors to a new device\n (\"new\" meaning not belonging to the set of [current device + devices\n of Tensor arguments]) within \"run_fn\", deterministic output compared\n to non-checkpointed passes is never guaranteed.\ntorch.utils.checkpoint.checkpoint(function, *args, use_reentrant=True, **kwargs)\n Checkpoint a model or part of the model.\n Checkpointing works by trading compute for memory. Rather than\n storing all intermediate activations of the entire computation\n graph for computing backward, the checkpointed part does *not*\n save intermediate activations, and instead recomputes them in the\n backward pass. It can be applied to any part of a model.\n Specifically, in the forward pass, \"function\" will run in\n \"torch.no_grad()\" manner, i.e., not storing the intermediate", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"}
{"text": "activations. Instead, the forward pass saves the inputs tuple and\n the \"function\" parameter. In the backward pass, the saved inputs\n and \"function\" are retrieved, and the forward pass is computed on\n \"function\" again, now tracking the intermediate activations, and\n then the gradients are calculated using these activation values.\n The output of \"function\" can contain non-Tensor values and gradient\n recording is only performed for the Tensor values. Note that if the\n output consists of nested structures (ex: custom objects, lists,\n dicts etc.) consisting of Tensors, these Tensors nested in custom\n structures will not be considered as part of autograd.\n Warning:\n If the \"function\" invocation during backward does anything\n different from the one during forward, e.g., due to some global\n variable, the checkpointed version won't be equivalent, and\n unfortunately it can't be detected.\n Warning:\n If \"use_reentrant=True\" is specified, then if the checkpointed", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"}
{"text": "segment contains tensors detached from the computational graph by\n detach() or torch.no_grad(), the backward pass will raise an\n error. This is because checkpoint makes all the outputs require\n gradients which causes issues when a tensor is defined to have no\n gradient in the model. To circumvent this, detach the tensors\n outside of the checkpoint function. Note that the checkpointed\n segment can contain tensors detached from the computational graph\n if \"use_reentrant=False\" is specified.\n Warning:\n If \"use_reentrant=True\" is specified, at least one of the inputs\n needs to have \"requires_grad=True\" if grads are needed for model\n inputs, otherwise the checkpointed part of the model won't have\n gradients. At least one of the outputs needs to have\n \"requires_grad=True\" as well. Note that this does not apply if\n \"use_reentrant=False\" is specified.\n Warning:\n If \"use_reentrant=True\" is specified, checkpointing currently", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"}
{"text": "only supports \"torch.autograd.backward()\" and only if its\n inputs argument is not passed. \"torch.autograd.grad()\" is not\n supported. If \"use_reentrant=False\" is specified, checkpointing\n will work with \"torch.autograd.grad()\".\n Parameters:\n * function -- describes what to run in the forward pass of\n the model or part of the model. It should also know how to\n handle the inputs passed as the tuple. For example, in LSTM,\n if user passes \"(activation, hidden)\", \"function\" should\n correctly use the first input as \"activation\" and the second\n input as \"hidden\"\n * preserve_rng_state (bool, optional) -- Omit stashing\n and restoring the RNG state during each checkpoint. Default:\n \"True\"\n * use_reentrant (bool, optional) -- Use checkpointing\n implementation that requires re-entrant autograd. If\n \"use_reentrant=False\" is specified, \"checkpoint\" will use an", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"}
{"text": "implementation that does not require re-entrant autograd. This\n allows \"checkpoint\" to support additional functionality, such\n as working as expected with \"torch.autograd.grad\" and support\n for keyword arguments input into the checkpointed function.\n Note that future versions of PyTorch will default to\n \"use_reentrant=False\". Default: \"True\"\n * args -- tuple containing inputs to the \"function\"\n Returns:\n Output of running \"function\" on \"*args\"\ntorch.utils.checkpoint.checkpoint_sequential(functions, segments, input, use_reentrant=True, **kwargs)\n A helper function for checkpointing sequential models.\n Sequential models execute a list of modules/functions in order\n (sequentially). Therefore, we can divide such a model into various\n segments and checkpoint each segment. All segments except the last\n will run in \"torch.no_grad()\" manner, i.e., not storing the\n intermediate activations. The inputs of each checkpointed segment", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"}
{"text": "will be saved for re-running the segment in the backward pass.\n See \"checkpoint()\" on how checkpointing works.\n Warning:\n Checkpointing currently only supports \"torch.autograd.backward()\"\n and only if its inputs argument is not passed.\n \"torch.autograd.grad()\" is not supported.\n Parameters:\n * functions -- A \"torch.nn.Sequential\" or the list of\n modules or functions (comprising the model) to run\n sequentially.\n * segments -- Number of chunks to create in the model\n * input -- A Tensor that is input to \"functions\"\n * preserve_rng_state (bool, optional) -- Omit stashing\n and restoring the RNG state during each checkpoint. Default:\n \"True\"\n * use_reentrant (bool, optional) -- Use checkpointing\n implementation that requires re-entrant autograd. If\n \"use_reentrant=False\" is specified, \"checkpoint\" will use an\n implementation that does not require re-entrant autograd. This", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"}
{"text": "allows \"checkpoint\" to support additional functionality, such\n as working as expected with \"torch.autograd.grad\" and support\n for keyword arguments input into the checkpointed function.\n Default: \"True\"\n Returns:\n Output of running \"functions\" sequentially on \"*inputs\"\n -[ Example ]-\n\n\n\nmodel = nn.Sequential(...)\ninput_var = checkpoint_sequential(model, chunks, input_var)\n\n\n", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"}
{"text": "torch.functorch.func, previously known as \"functorch\", is JAX-like composable\nfunction transforms for PyTorch.\nNote:\n This library is currently in beta. What this means is that the\n features generally work (unless otherwise documented) and we (the\n PyTorch team) are committed to bringing this library forward.\n However, the APIs may change under user feedback and we don't have\n full coverage over PyTorch operations.If you have suggestions on the\n API or use-cases you'd like to be covered, please open an GitHub\n issue or reach out. We'd love to hear about how you're using the\n library.\nWhat are composable function transforms?\n========================================\n* A \"function transform\" is a higher-order function that accepts a\n numerical function and returns a new function that computes a\n different quantity.\n* \"torch.func\" has auto-differentiation transforms (\"grad(f)\" returns\n a function that computes the gradient of \"f\"), a", "source": "https://pytorch.org/docs/stable/func.html", "category": "pytorch docs"}
{"text": "a function that computes the gradient of \"f\"), a\n vectorization/batching transform (\"vmap(f)\" returns a function that\n computes \"f\" over batches of inputs), and others.\n* These function transforms can compose with each other arbitrarily.\n For example, composing \"vmap(grad(f))\" computes a quantity called\n per-sample-gradients that stock PyTorch cannot efficiently compute\n today.\nWhy composable function transforms?\n===================================\nThere are a number of use cases that are tricky to do in PyTorch\ntoday:\n* computing per-sample-gradients (or other per-sample quantities)\n* running ensembles of models on a single machine\n* efficiently batching together tasks in the inner-loop of MAML\n* efficiently computing Jacobians and Hessians\n* efficiently computing batched Jacobians and Hessians\nComposing \"vmap()\", \"grad()\", and \"vjp()\" transforms allows us to\nexpress the above without designing a separate subsystem for each.\nThis idea of composable function transforms comes from the JAX\nframework.", "source": "https://pytorch.org/docs/stable/func.html", "category": "pytorch docs"}
{"text": "framework.\nRead More\n=========\n* torch.func Whirlwind Tour\n * What is torch.func?\n * Why composable function transforms?\n * What are the transforms?\n* torch.func API Reference\n * Function Transforms\n * Utilities for working with torch.nn.Modules\n* UX Limitations\n * General limitations\n * torch.autograd APIs\n * vmap limitations\n * Randomness\n* Migrating from functorch to torch.func\n * function transforms\n * NN module utilities\n * functorch.compile", "source": "https://pytorch.org/docs/stable/func.html", "category": "pytorch docs"}
{"text": "torch.ao.ns._numeric_suiteWarning:\n This module is an early prototype and is subject to change.\ntorch.ao.ns._numeric_suite.compare_weights(float_dict, quantized_dict)\n Compare the weights of the float module with its corresponding\n quantized module. Return a dict with key corresponding to module\n names and each entry being a dictionary with two keys 'float' and\n 'quantized', containing the float and quantized weights. This dict\n can be used to compare and compute the quantization error of the\n weights of float and quantized models.\n Example usage:\n wt_compare_dict = compare_weights(\n float_model.state_dict(), qmodel.state_dict())\n for key in wt_compare_dict:\n print(\n key,\n compute_error(\n wt_compare_dict[key]['float'],\n wt_compare_dict[key]['quantized'].dequantize()\n )\n )\n Parameters:\n * float_dict (Dict[str, Any]) -- state dict of", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "the float model\n * quantized_dict (Dict[str, Any]) -- state dict\n of the quantized model\n Returns:\n dict with key corresponding to module names and each entry being\n a dictionary with two keys 'float' and 'quantized', containing\n the float and quantized weights\n Return type:\n weight_dict\ntorch.ao.ns._numeric_suite.get_logger_dict(mod, prefix='')\n Traverse the modules and save all logger stats into target dict.\n This is mainly used for quantization accuracy debug.\n Type of loggers supported:\n ShadowLogger: used to log the outputs of the quantized module\n and its matching float shadow module, OutputLogger: used to log\n the outputs of the modules\n Parameters:\n * mod (Module) -- module we want to save all logger stats\n * prefix (str) -- prefix for the current module\n Returns:\n the dictionary used to save all logger stats\n Return type:\n target_dict\nclass torch.ao.ns._numeric_suite.Logger", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "class torch.ao.ns._numeric_suite.Logger\n Base class for stats logging\n forward(x)\nclass torch.ao.ns._numeric_suite.ShadowLogger\n Class used in Shadow module to record the outputs of the original\n and shadow modules.\n forward(x, y)\nclass torch.ao.ns._numeric_suite.OutputLogger\n Class used to log the outputs of the module\n forward(x)\nclass torch.ao.ns._numeric_suite.Shadow(q_module, float_module, logger_cls)\n Shadow module attaches the float module to its matching quantized\n module as the shadow. Then it uses Logger module to process the\n outputs of both modules.\n Parameters:\n * q_module -- module quantized from float_module that we\n want to shadow\n * float_module -- float module used to shadow q_module\n * logger_cls -- type of logger used to process the outputs\n of q_module and float_module. ShadowLogger or custom loggers\n can be used.\n forward(x)\n Return type:\n Tensor\n add(x, y)\n Return type:\n Tensor*", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "add(x, y)\n Return type:\n Tensor\n add_scalar(x, y)\n Return type:\n Tensor\n mul(x, y)\n Return type:\n Tensor\n mul_scalar(x, y)\n Return type:\n Tensor\n cat(x, dim=0)\n Return type:\n Tensor\n add_relu(x, y)\n Return type:\n Tensor\ntorch.ao.ns._numeric_suite.prepare_model_with_stubs(float_module, q_module, module_swap_list, logger_cls)\n Prepare the model by attaching the float module to its matching\n quantized module as the shadow if the float module type is in\n module_swap_list.\n Example usage:\n prepare_model_with_stubs(float_model, q_model, module_swap_list, Logger)\n q_model(data)\n ob_dict = get_logger_dict(q_model)\n Parameters:\n * float_module (Module) -- float module used to generate\n the q_module\n * q_module (Module) -- module quantized from float_module\n * module_swap_list (Set[type]) -- list of float\n module types to attach the shadow", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "module types to attach the shadow\n * logger_cls (Callable) -- type of logger to be used in\n shadow module to process the outputs of quantized module and\n its float shadow module\ntorch.ao.ns._numeric_suite.compare_model_stub(float_model, q_model, module_swap_list, *data, logger_cls=)\n Compare quantized module in a model with its floating point\n counterpart, feeding both of them the same input. Return a dict\n with key corresponding to module names and each entry being a\n dictionary with two keys 'float' and 'quantized', containing the\n output tensors of quantized and its matching float shadow module.\n This dict can be used to compare and compute the module level\n quantization error.\n This function first call prepare_model_with_stubs() to swap the\n quantized module that we want to compare with the Shadow module,\n which takes quantized module, corresponding float module and logger", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "as input, and creates a forward path inside to make the float\n module to shadow quantized module sharing the same input. The\n logger can be customizable, default logger is ShadowLogger and it\n will save the outputs of the quantized module and float module that\n can be used to compute the module level quantization error.\n Example usage:\n module_swap_list = [torchvision.models.quantization.resnet.QuantizableBasicBlock]\n ob_dict = compare_model_stub(float_model,qmodel,module_swap_list, data)\n for key in ob_dict:\n print(key, compute_error(ob_dict[key]['float'], ob_dict[key]['quantized'].dequantize()))\n Parameters:\n * float_model (Module) -- float model used to generate the\n q_model\n * q_model (Module) -- model quantized from float_model\n * module_swap_list (Set[type]) -- list of float\n module types at which shadow modules will be attached.\n * data -- input data used to run the prepared q_model", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "\nlogger_cls -- type of logger to be used in shadow module\n to process the outputs of quantized module and its float\n shadow module\n Return type:\n Dict[str, Dict]\ntorch.ao.ns._numeric_suite.get_matching_activations(float_module, q_module)\n Find the matching activation between float and quantized modules.\n Parameters:\nfloat_module (Module) -- float module used to generate\n the q_module\nq_module (Module) -- module quantized from float_module\n Returns:\n dict with key corresponding to quantized module names and each\n entry being a dictionary with two keys 'float' and 'quantized',\n containing the matching float and quantized activations\n Return type:\n act_dict\ntorch.ao.ns._numeric_suite.prepare_model_outputs(float_module, q_module, logger_cls=, allow_list=None)\n Prepare the model by attaching the logger to both float module and\n\n\n", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "quantized module if they are in the allow_list.\n Parameters:\n * float_module (Module) -- float module used to generate\n the q_module\n * q_module (Module) -- module quantized from float_module\n * logger_cls -- type of logger to be attached to\n float_module and q_module\n * allow_list -- list of module types to attach logger\ntorch.ao.ns._numeric_suite.compare_model_outputs(float_model, q_model, *data, logger_cls=, allow_list=None)\n Compare output activations between float and quantized models at\n corresponding locations for the same input. Return a dict with key\n corresponding to quantized module names and each entry being a\n dictionary with two keys 'float' and 'quantized', containing the\n activations of quantized model and float model at matching\n locations. This dict can be used to compare and compute the\n propagation quantization error.\n Example usage:", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "Example usage:\n act_compare_dict = compare_model_outputs(float_model, qmodel, data)\n for key in act_compare_dict:\n print(\n key,\n compute_error(\n act_compare_dict[key]['float'],\n act_compare_dict[key]['quantized'].dequantize()\n )\n )\n Parameters:\n * float_model (Module) -- float model used to generate the\n q_model\n * q_model (Module) -- model quantized from float_model\n * data -- input data used to run the prepared float_model\n and q_model\n * logger_cls -- type of logger to be attached to\n float_module and q_module\n * allow_list -- list of module types to attach logger\n Returns:\n dict with key corresponding to quantized module names and each\n entry being a dictionary with two keys 'float' and 'quantized',\n containing the matching float and quantized activations\n Return type:\n act_compare_dict", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"}
{"text": "torch.utils.model_zooMoved to torch.hub.\ntorch.utils.model_zoo.load_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None)\n Loads the Torch serialized object at the given URL.\n If downloaded file is a zip file, it will be automatically\n decompressed.\n If the object is already present in model_dir, it's deserialized\n and returned. The default value of \"model_dir\" is\n \"/checkpoints\" where \"hub_dir\" is the directory returned\n by \"get_dir()\".\n Parameters:\n * url (str) -- URL of the object to download\n * model_dir (str, optional) -- directory in which to\n save the object\n * map_location (optional) -- a function or a dict\n specifying how to remap storage locations (see torch.load)\n * progress (bool, optional) -- whether or not to\n display a progress bar to stderr. Default: True\n * check_hash (bool, optional) -- If True, the filename", "source": "https://pytorch.org/docs/stable/model_zoo.html", "category": "pytorch docs"}
{"text": "part of the URL should follow the naming convention\n \"filename-.ext\" where \"\" is the first eight or\n more digits of the SHA256 hash of the contents of the file.\n The hash is used to ensure unique names and to verify the\n contents of the file. Default: False\n * file_name (str, optional) -- name for the downloaded\n file. Filename from \"url\" will be used if not set.\n Return type:\n Dict[str, Any]\n -[ Example ]-\n\n\n\nstate_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth')\n\n\n", "source": "https://pytorch.org/docs/stable/model_zoo.html", "category": "pytorch docs"}
{"text": "Warning:\n There are known non-determinism issues for RNN functions on some\n versions of cuDNN and CUDA. You can enforce deterministic behavior\n by setting the following environment variables:On CUDA 10.1, set\n environment variable \"CUDA_LAUNCH_BLOCKING=1\". This may affect\n performance.On CUDA 10.2 or later, set environment variable (note\n the leading colon symbol) \"CUBLAS_WORKSPACE_CONFIG=:16:8\" or\n \"CUBLAS_WORKSPACE_CONFIG=:4096:2\".See the cuDNN 8 Release Notes for\n more information.", "source": "https://pytorch.org/docs/stable/cudnn_rnn_determinism.html", "category": "pytorch docs"}
{"text": "PyTorch documentationPyTorch is an optimized tensor library for deep learning using GPUs\nand CPUs.\nFeatures described in this documentation are classified by release\nstatus:\n Stable: These features will be maintained long-term and there\n should generally be no major performance limitations or gaps in\n documentation. We also expect to maintain backwards compatibility\n (although breaking changes can happen and notice will be given one\n release ahead of time).\n Beta: These features are tagged as Beta because the API may\n change based on user feedback, because the performance needs to\n improve, or because coverage across operators is not yet complete.\n For Beta features, we are committing to seeing the feature through\n to the Stable classification. We are not, however, committing to\n backwards compatibility.\n Prototype: These features are typically not available as part of\n binary distributions like PyPI or Conda, except sometimes behind", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "run-time flags, and are at an early stage for feedback and testing.\nCommunity\n^^^^^^^^^\n* PyTorch Governance | Build + CI\n* PyTorch Contribution Guide\n* PyTorch Design Philosophy\n* PyTorch Governance | Mechanics\n* PyTorch Governance | Maintainers\nDeveloper Notes\n^^^^^^^^^^^^^^^\n* CUDA Automatic Mixed Precision examples\n* Autograd mechanics\n* Broadcasting semantics\n* CPU threading and TorchScript inference\n* CUDA semantics\n* Distributed Data Parallel\n* Extending PyTorch\n* Extending torch.func with autograd.Function\n* Frequently Asked Questions\n* Gradcheck mechanics\n* HIP (ROCm) semantics\n* Features for large-scale deployments\n* Modules\n* MPS backend\n* Multiprocessing best practices\n* Numerical accuracy\n* Reproducibility\n* Serialization semantics\n* Windows FAQ\nLanguage Bindings\n^^^^^^^^^^^^^^^^^\n* C++\n* Javadoc\n* torch::deploy\nPython API\n^^^^^^^^^^\n* torch\n * Tensors\n * Generators\n * Random sampling\n * Serialization\n * Parallelism\n * Locally disabling gradient computation\n * Math operations", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\nMath operations\nUtilities\nSymbolic Numbers\nOptimizations\nOperator Tags\nEngine Configuration\ntorch.nn\nParameter\nUninitializedParameter\nUninitializedBuffer\nContainers\nConvolution Layers\nPooling layers\nPadding Layers\nNon-linear Activations (weighted sum, nonlinearity)\nNon-linear Activations (other)\nNormalization Layers\nRecurrent Layers\nTransformer Layers\nLinear Layers\nDropout Layers\nSparse Layers\nDistance Functions\nLoss Functions\nVision Layers\nShuffle Layers\nDataParallel Layers (multi-GPU, distributed)\nUtilities\nQuantized Functions\nLazy Modules Initialization\ntorch.nn.functional\nConvolution functions\nPooling functions\nNon-linear activation functions\nLinear functions\nDropout functions\nSparse functions\nDistance functions\nLoss functions\nVision functions\nDataParallel functions (multi-GPU, distributed)\ntorch.Tensor\nData types\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\ntorch.Tensor\nData types\nInitializing and basic operations\nTensor class reference\nTensor Attributes\ntorch.dtype\ntorch.device\ntorch.layout\ntorch.memory_format\nTensor Views\ntorch.amp\nAutocasting\nGradient Scaling\nAutocast Op Reference\ntorch.autograd\ntorch.autograd.backward\ntorch.autograd.grad\nForward-mode Automatic Differentiation\nFunctional higher level API\nLocally disabling gradient computation\nDefault gradient layouts\nIn-place operations on Tensors\nVariable (deprecated)\nTensor autograd functions\nFunction\nContext method mixins\nNumerical gradient checking\nProfiler\nAnomaly detection\nAutograd graph\ntorch.library\ntorch.cuda\nStreamContext\ntorch.cuda.can_device_access_peer\ntorch.cuda.current_blas_handle\ntorch.cuda.current_device\ntorch.cuda.current_stream\ntorch.cuda.default_stream\ndevice\ntorch.cuda.device_count\ndevice_of\ntorch.cuda.get_arch_list\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\ndevice_of\ntorch.cuda.get_arch_list\ntorch.cuda.get_device_capability\ntorch.cuda.get_device_name\ntorch.cuda.get_device_properties\ntorch.cuda.get_gencode_flags\ntorch.cuda.get_sync_debug_mode\ntorch.cuda.init\ntorch.cuda.ipc_collect\ntorch.cuda.is_available\ntorch.cuda.is_initialized\ntorch.cuda.memory_usage\ntorch.cuda.set_device\ntorch.cuda.set_stream\ntorch.cuda.set_sync_debug_mode\ntorch.cuda.stream\ntorch.cuda.synchronize\ntorch.cuda.utilization\ntorch.cuda.OutOfMemoryError\nRandom Number Generator\nCommunication collectives\nStreams and events\nGraphs (beta)\nMemory management\nNVIDIA Tools Extension (NVTX)\nJiterator (beta)\nStream Sanitizer (prototype)\ntorch.backends\ntorch.backends.cuda\ntorch.backends.cudnn\ntorch.backends.mps\ntorch.backends.mkl\ntorch.backends.mkldnn\ntorch.backends.openmp\ntorch.backends.opt_einsum\ntorch.backends.xeon\ntorch.distributed\nBackends\nBasics\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\ntorch.distributed\nBackends\nBasics\nInitialization\nPost-Initialization\nDistributed Key-Value Store\nGroups\nPoint-to-point communication\nSynchronous and asynchronous collective operations\nCollective functions\nProfiling Collective Communication\nMulti-GPU collective functions\nThird-party backends\nLaunch utility\nSpawn utility\nDebugging \"torch.distributed\" applications\nLogging\ntorch.distributed.algorithms.join\ntorch.distributed.elastic\nGet Started\nDocumentation\ntorch.distributed.fsdp\ntorch.distributed.optim\ntorch.distributed.tensor.parallel\ntorch.distributed.checkpoint\ntorch.distributions\nScore function\nPathwise derivative\nDistribution\nExponentialFamily\nBernoulli\nBeta\nBinomial\nCategorical\nCauchy\nChi2\nContinuousBernoulli\nDirichlet\nExponential\nFisherSnedecor\nGamma\nGeometric\nGumbel\nHalfCauchy\nHalfNormal\nIndependent\nKumaraswamy\nLKJCholesky\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\nIndependent\nKumaraswamy\nLKJCholesky\nLaplace\nLogNormal\nLowRankMultivariateNormal\nMixtureSameFamily\nMultinomial\nMultivariateNormal\nNegativeBinomial\nNormal\nOneHotCategorical\nPareto\nPoisson\nRelaxedBernoulli\nLogitRelaxedBernoulli\nRelaxedOneHotCategorical\nStudentT\nTransformedDistribution\nUniform\nVonMises\nWeibull\nWishart\nKL Divergence\nTransforms\nConstraints\nConstraint Registry\ntorch._dynamo\ntorch.fft\nFast Fourier Transforms\nHelper Functions\ntorch.func\nWhat are composable function transforms?\nWhy composable function transforms?\nRead More\ntorch.futures\ntorch.fx\nOverview\nWriting Transformations\nDebugging\nLimitations of Symbolic Tracing\nAPI Reference\ntorch.hub\nPublishing models\nLoading models from Hub\ntorch.jit\nTorchScript Language Reference\nCreating TorchScript Code\nMixing Tracing and Scripting\nTorchScript Language\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\nTorchScript Language\nBuilt-in Functions and Modules\nDebugging\nFrequently Asked Questions\nKnown Issues\nAppendix\ntorch.linalg\nMatrix Properties\nDecompositions\nSolvers\nInverses\nMatrix Functions\nMatrix Products\nTensor Operations\nMisc\nExperimental Functions\ntorch.monitor\nAPI Reference\ntorch.signal\ntorch.signal.windows\ntorch.special\nFunctions\ntorch.overrides\nFunctions\ntorch.package\nTutorials\nHow do I...\nExplanation\nAPI Reference\ntorch.profiler\nOverview\nAPI Reference\nIntel Instrumentation and Tracing Technology APIs\ntorch.nn.init\ntorch.onnx\nExample: AlexNet from PyTorch to ONNX\nTracing vs Scripting\nAvoiding Pitfalls\nLimitations\nAdding support for operators\nFrequently Asked Questions\nContributing / developing\nFunctions\nClasses\ntorch.onnx diagnostics\nOverview\nDiagnostic Rules\nAPI Reference\ntorch.optim\nHow to use an optimizer\nBase class\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\nHow to use an optimizer\nBase class\nAlgorithms\nHow to adjust learning rate\nStochastic Weight Averaging\nComplex Numbers\nCreating Complex Tensors\nTransition from the old representation\nAccessing real and imag\nAngle and abs\nLinear Algebra\nSerialization\nAutograd\nDDP Communication Hooks\nHow to Use a Communication Hook?\nWhat Does a Communication Hook Operate On?\nDefault Communication Hooks\nPowerSGD Communication Hook\nDebugging Communication Hooks\nCheckpointing of Communication Hooks\nAcknowledgements\nPipeline Parallelism\nModel Parallelism using multiple GPUs\nPipelined Execution\nPipe APIs in PyTorch\nTutorials\nAcknowledgements\nQuantization\nIntroduction to Quantization\nQuantization API Summary\nQuantization Stack\nQuantization Support Matrix\nQuantization API Reference\nQuantization Backend Configuration\nQuantization Accuracy Debugging\nQuantization Customizations\nBest Practices\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\nQuantization Customizations\nBest Practices\nFrequently Asked Questions\nCommon Errors\nDistributed RPC Framework\nBasics\nRPC\nRRef\nRemoteModule\nDistributed Autograd Framework\nDistributed Optimizer\nDesign Notes\nTutorials\ntorch.random\ntorch.masked\nIntroduction\nSupported Operators\ntorch.nested\nIntroduction\nConstruction\nsize\nunbind\nNested tensor constructor and conversion functions\nSupported operations\ntorch.sparse\nWhy and when to use sparsity\nFunctionality overview\nOperator overview\nSparse COO tensors\nSparse Compressed Tensors\nSupported operations\ntorch.Storage\ntorch.testing\ntorch.utils.benchmark\ntorch.utils.bottleneck\ntorch.utils.checkpoint\ntorch.utils.cpp_extension\ntorch.utils.data\nDataset Types\nData Loading Order and \"Sampler\"\nLoading Batched and Non-Batched Data\nSingle- and Multi-process Data Loading\nMemory Pinning\ntorch.utils.jit\ntorch.utils.dlpack\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "\ntorch.utils.jit\ntorch.utils.dlpack\ntorch.utils.mobile_optimizer\ntorch.utils.model_zoo\ntorch.utils.tensorboard\nType Info\ntorch.finfo\ntorch.iinfo\nNamed Tensors\nCreating named tensors\nNamed dimensions\nName propagation semantics\nExplicit alignment by names\nManipulating dimensions\nAutograd support\nCurrently supported operations and subsystems\nNamed tensor API reference\nNamed Tensors operator coverage\nKeeps input names\nRemoves dimensions\nUnifies names from inputs\nPermutes dimensions\nContracts away dims\nFactory functions\nout function and in-place variants\ntorch.config\nLibraries\n^^^^^^^^^\ntorchaudio\nTorchData\nTorchRec\nTorchServe\ntorchtext\ntorchvision\nPyTorch on XLA Devices\nIndices and tables* Index\nModule Index\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"}
{"text": "EventsModule contains events processing mechanisms that are integrated with\nthe standard python logging.\nExample of usage:\n from torch.distributed.elastic import events\n event = events.Event(name=\"test_event\", source=events.EventSource.WORKER, metadata={...})\n events.get_logging_handler(destination=\"console\").info(event)\nAPI Methods\n===========\ntorch.distributed.elastic.events.record(event, destination='null')\ntorch.distributed.elastic.events.get_logging_handler(destination='null')\n Return type:\n Handler\nEvent Objects\n=============\nclass torch.distributed.elastic.events.api.Event(name, source, timestamp=0, metadata=)\n The class represents the generic event that occurs during the\n torchelastic job execution. The event can be any kind of meaningful\n action.\n Parameters:\n * name (str) -- event name.\n * source (EventSource) -- the event producer, e.g. agent\n or worker\n * timestamp (int) -- timestamp in milliseconds when event", "source": "https://pytorch.org/docs/stable/elastic/events.html", "category": "pytorch docs"}
{"text": "occured.\n * metadata (Dict[str, Optional[Union[str,\n int, float, bool]]]) -- additional data that\n is associated with the event.\nclass torch.distributed.elastic.events.api.EventSource(value)\n Known identifiers of the event producers.\ntorch.distributed.elastic.events.api.EventMetadataValue\n alias of \"Optional\"[\"Union\"[\"str\", \"int\", \"float\", \"bool\"]]", "source": "https://pytorch.org/docs/stable/elastic/events.html", "category": "pytorch docs"}
{"text": "MetricsMetrics API\nOverview:\nThe metrics API in torchelastic is used to publish telemetry metrics.\nIt is designed to be used by torchelastic's internal modules to\npublish metrics for the end user with the goal of increasing\nvisibility and helping with debugging. However you may use the same\nAPI in your jobs to publish metrics to the same metrics \"sink\".\nA \"metric\" can be thought of as timeseries data and is uniquely\nidentified by the string-valued tuple \"(metric_group, metric_name)\".\ntorchelastic makes no assumptions about what a \"metric_group\" is and\nwhat relationship it has with \"metric_name\". It is totally up to the\nuser to use these two fields to uniquely identify a metric.\nNote:\n The metric group \"torchelastic\" is reserved by torchelastic for\n platform level metrics that it produces. For instance torchelastic\n may output the latency (in milliseconds) of a re-rendezvous\n operation from the agent as \"(torchelastic,\n agent.rendezvous.duration.ms)\"", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"}
{"text": "agent.rendezvous.duration.ms)\"\nA sensible way to use metric groups is to map them to a stage or\nmodule in your job. You may also encode certain high level properties\nthe job such as the region or stage (dev vs prod).\nPublish Metrics:\nUsing torchelastic's metrics API is similar to using python's logging\nframework. You first have to configure a metrics handler before trying\nto add metric data.\nThe example below measures the latency for the \"calculate()\" function.\n import time\n import torch.distributed.elastic.metrics as metrics\n # makes all metrics other than the one from \"my_module\" to go /dev/null\n metrics.configure(metrics.NullMetricsHandler())\n metrics.configure(metrics.ConsoleMetricsHandler(), \"my_module\")\n def my_method():\n start = time.time()\n calculate()\n end = time.time()\n metrics.put_metric(\"calculate_latency\", int(end-start), \"my_module\")\nYou may also use the torch.distributed.elastic.metrics.prof` decorator\nto conveniently and succinctly profile functions", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"}
{"text": "to conveniently and succinctly profile functions\n # -- in module examples.foobar --\n import torch.distributed.elastic.metrics as metrics\n metrics.configure(metrics.ConsoleMetricsHandler(), \"foobar\")\n metrics.configure(metrics.ConsoleMetricsHandler(), \"Bar\")\n @metrics.prof\n def foo():\n pass\n class Bar():\n @metrics.prof\n def baz():\n pass\n\"@metrics.prof\" will publish the following metrics\n .success - 1 if the function finished successfully\n .failure - 1 if the function threw an exception\n .duration.ms - function duration in milliseconds\nConfiguring Metrics Handler:\ntorch.distributed.elastic.metrics.MetricHandler is responsible for\nemitting the added metric values to a particular destination. Metric\ngroups can be configured with different metric handlers.\nBy default torchelastic emits all metrics to \"/dev/null\". By adding\nthe following configuration metrics, \"torchelastic\" and \"my_app\"", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"}
{"text": "metric groups will be printed out to console.\n import torch.distributed.elastic.metrics as metrics\n metrics.configure(metrics.ConsoleMetricHandler(), group = \"torchelastic\")\n metrics.configure(metrics.ConsoleMetricHandler(), group = \"my_app\")\nWriting a Custom Metric Handler:\nIf you want your metrics to be emitted to a custom location, implement\nthe torch.distributed.elastic.metrics.MetricHandler interface and\nconfigure your job to use your custom metric handler.\nBelow is a toy example that prints the metrics to \"stdout\"\n import torch.distributed.elastic.metrics as metrics\n class StdoutMetricHandler(metrics.MetricHandler):\n def emit(self, metric_data):\n ts = metric_data.timestamp\n group = metric_data.group_name\n name = metric_data.name\n value = metric_data.value\n print(f\"[{ts}][{group}]: {name}={value}\")\n metrics.configure(StdoutMetricHandler(), group=\"my_app\")\nNow all metrics in the group \"my_app\" will be printed to stdout as:", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"}
{"text": "[1574213883.4182858][my_app]: my_metric=\n [1574213940.5237644][my_app]: my_metric=\nMetric Handlers\n===============\nBelow are the metric handlers that come included with torchelastic.\nclass torch.distributed.elastic.metrics.api.MetricHandler\nclass torch.distributed.elastic.metrics.api.ConsoleMetricHandler\nclass torch.distributed.elastic.metrics.api.NullMetricHandler\nMethods\n=======\ntorch.distributed.elastic.metrics.configure(handler, group=None)\ntorch.distributed.elastic.metrics.prof(fn=None, group='torchelastic')\n @profile decorator publishes duration.ms, count, success, failure\n metrics for the function that it decorates. The metric name\n defaults to the qualified name (\"class_name.def_name\") of the\n function. If the function does not belong to a class, it uses the\n leaf module name instead.\n Usage\n @metrics.prof\n def x():\n pass\n @metrics.prof(group=\"agent\")\n def y():\n pass", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"}
{"text": "def y():\n pass\ntorch.distributed.elastic.metrics.put_metric(metric_name, metric_value, metric_group='torchelastic')\n Publishes a metric data point.\n Usage\n put_metric(\"metric_name\", 1)\n put_metric(\"metric_name\", 1, \"metric_group_name\")", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"}
{"text": "QuickstartTo launch a fault-tolerant job, run the following on all nodes.\n torchrun\n --nnodes=NUM_NODES\n --nproc_per_node=TRAINERS_PER_NODE\n --max_restarts=NUM_ALLOWED_FAILURES\n --rdzv_id=JOB_ID\n --rdzv_backend=c10d\n --rdzv_endpoint=HOST_NODE_ADDR\n YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)\nTo launch an elastic job, run the following on at least \"MIN_SIZE\"\nnodes and at most \"MAX_SIZE\" nodes.\n torchrun\n --nnodes=MIN_SIZE:MAX_SIZE\n --nproc_per_node=TRAINERS_PER_NODE\n --max_restarts=NUM_ALLOWED_FAILURES_OR_MEMBERSHIP_CHANGES\n --rdzv_id=JOB_ID\n --rdzv_backend=c10d\n --rdzv_endpoint=HOST_NODE_ADDR\n YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)\nNote:\n TorchElastic models failures as membership changes. When a node\n fails, this is treated as a \"scale down\" event. When the failed node\n is replaced by the scheduler, it is a \"scale up\" event. Hence for", "source": "https://pytorch.org/docs/stable/elastic/quickstart.html", "category": "pytorch docs"}
{"text": "both fault tolerant and elastic jobs, \"--max_restarts\" is used to\n control the total number of restarts before giving up, regardless of\n whether the restart was caused due to a failure or a scaling event.\n\"HOST_NODE_ADDR\", in form [:] (e.g.\nnode1.example.com:29400), specifies the node and the port on which the\nC10d rendezvous backend should be instantiated and hosted. It can be\nany node in your training cluster, but ideally you should pick a node\nthat has a high bandwidth.\nNote:\n If no port number is specified \"HOST_NODE_ADDR\" defaults to 29400.\nNote:\n The \"--standalone\" option can be passed to launch a single node job\n with a sidecar rendezvous backend. You don\u00e2\u0080\u0099t have to pass \"--\n rdzv_id\", \"--rdzv_endpoint\", and \"--rdzv_backend\" when the \"--\n standalone\" option is used.\nNote:\n Learn more about writing your distributed training script here.\nIf \"torchrun\" does not meet your requirements you may use our APIs\ndirectly for more powerful customization. Start by taking a look at", "source": "https://pytorch.org/docs/stable/elastic/quickstart.html", "category": "pytorch docs"}
{"text": "the elastic agent API.", "source": "https://pytorch.org/docs/stable/elastic/quickstart.html", "category": "pytorch docs"}
{"text": "ExamplesPlease refer to the elastic/examples README.", "source": "https://pytorch.org/docs/stable/elastic/examples.html", "category": "pytorch docs"}
{"text": "CustomizationThis section describes how to customize TorchElastic to fit your\nneeds.\nLauncher\n========\nThe launcher program that ships with TorchElastic should be sufficient\nfor most use-cases (see torchrun (Elastic Launch)). You can implement\na custom launcher by programmatically creating an agent and passing it\nspecs for your workers as shown below.\n # my_launcher.py\n if name == \"main\":\n args = parse_args(sys.argv[1:])\n rdzv_handler = RendezvousHandler(...)\n spec = WorkerSpec(\n local_world_size=args.nproc_per_node,\n fn=trainer_entrypoint_fn,\n args=(trainer_entrypoint_fn args.fn_args,...),\n rdzv_handler=rdzv_handler,\n max_restarts=args.max_restarts,\n monitor_interval=args.monitor_interval,\n )\n agent = LocalElasticAgent(spec, start_method=\"spawn\")\n try:\n run_result = agent.run()\n if run_result.is_failed():\n print(f\"worker 0 failed with: run_result.failures[0]\")\n else:", "source": "https://pytorch.org/docs/stable/elastic/customization.html", "category": "pytorch docs"}
{"text": "else:\n print(f\"worker 0 return value is: run_result.return_values[0]\")\n except Exception ex:\n # handle exception\nRendezvous Handler\n==================\nTo implement your own rendezvous, extend\n\"torch.distributed.elastic.rendezvous.RendezvousHandler\" and implement\nits methods.\nWarning:\n Rendezvous handlers are tricky to implement. Before you begin make\n sure you completely understand the properties of rendezvous. Please\n refer to Rendezvous for more information.\nOnce implemented you can pass your custom rendezvous handler to the\nworker spec when creating the agent.\n spec = WorkerSpec(\n rdzv_handler=MyRendezvousHandler(params),\n ...\n )\n elastic_agent = LocalElasticAgent(spec, start_method=start_method)\n elastic_agent.run(spec.role)\nMetric Handler\n==============\nTorchElastic emits platform level metrics (see Metrics). By default\nmetrics are emitted to /dev/null so you will not see them. To have\nthe metrics pushed to a metric handling service in your", "source": "https://pytorch.org/docs/stable/elastic/customization.html", "category": "pytorch docs"}
{"text": "infrastructure, implement a\ntorch.distributed.elastic.metrics.MetricHandler and configure it\nin your custom launcher.\n # my_launcher.py\n import torch.distributed.elastic.metrics as metrics\n class MyMetricHandler(metrics.MetricHandler):\n def emit(self, metric_data: metrics.MetricData):\n # push metric_data to your metric sink\n def main():\n metrics.configure(MyMetricHandler())\n spec = WorkerSpec(...)\n agent = LocalElasticAgent(spec)\n agent.run()\nEvents Handler\n==============\nTorchElastic supports events recording (see Events). The events module\ndefines API that allows you to record events and implement custom\nEventHandler. EventHandler is used for publishing events produced\nduring torchelastic execution to different sources, e.g. AWS\nCloudWatch. By default it uses\ntorch.distributed.elastic.events.NullEventHandler that ignores\nevents. To configure custom events handler you need to implement\ntorch.distributed.elastic.events.EventHandler interface and", "source": "https://pytorch.org/docs/stable/elastic/customization.html", "category": "pytorch docs"}
{"text": "configure it in your custom launcher.\n # my_launcher.py\n import torch.distributed.elastic.events as events\n class MyEventHandler(events.EventHandler):\n def record(self, event: events.Event):\n # process event\n def main():\n events.configure(MyEventHandler())\n spec = WorkerSpec(...)\n agent = LocalElasticAgent(spec)\n agent.run()", "source": "https://pytorch.org/docs/stable/elastic/customization.html", "category": "pytorch docs"}
{"text": "MultiprocessingLibrary that launches and manages \"n\" copies of worker subprocesses\neither specified by a function or a binary.\nFor functions, it uses \"torch.multiprocessing\" (and therefore python\n\"multiprocessing\") to spawn/fork worker processes. For binaries it\nuses python \"subprocessing.Popen\" to create worker processes.\nUsage 1: Launching two trainers as a function\n from torch.distributed.elastic.multiprocessing import Std, start_processes\n def trainer(a, b, c):\n pass # train\n # runs two trainers\n # LOCAL_RANK=0 trainer(1,2,3)\n # LOCAL_RANK=1 trainer(4,5,6)\n ctx = start_processes(\n name=\"trainer\",\n entrypoint=trainer,\n args={0: (1,2,3), 1: (4,5,6)},\n envs={0: {\"LOCAL_RANK\": 0}, 1: {\"LOCAL_RANK\": 1}},\n log_dir=\"/tmp/foobar\",\n redirects=Std.ALL, # write all worker stdout/stderr to a log file\n tee={0: Std.ERR}, # tee only local rank 0's stderr to console\n )", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"}
{"text": ")\n # waits for all copies of trainer to finish\n ctx.wait()\nUsage 2: Launching 2 echo workers as a binary\n # same as invoking\n # echo hello\n # echo world > stdout.log\n ctx = start_processes(\n name=\"echo\"\n entrypoint=\"echo\",\n log_dir=\"/tmp/foobar\",\n args={0: \"hello\", 1: \"world\"},\n redirects={1: Std.OUT},\n )\nJust like \"torch.multiprocessing\", the return value of the function\n\"start_processes()\" is a process context (\"api.PContext\"). If a\nfunction was launched, a \"api.MultiprocessContext\" is returned and if\na binary was launched a \"api.SubprocessContext\" is returned. Both are\nspecific implementations of the parent \"api.PContext\" class.\nStarting Multiple Workers\n=========================\ntorch.distributed.elastic.multiprocessing.start_processes(name, entrypoint, args, envs, log_dir, start_method='spawn', redirects=Std.NONE, tee=Std.NONE)\n Starts \"n\" copies of \"entrypoint\" processes with the provided", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"}
{"text": "options. \"entrypoint\" is either a \"Callable\" (function) or a \"str\"\n (binary). The number of copies is determined by the number of\n entries for \"args\" and \"envs\" arguments, which need to have the\n same key set.\n \"args\" and \"env\" parameters are the arguments and environment\n variables to pass down to the entrypoint mapped by the replica\n index (local rank). All local ranks must be accounted for. That is,\n the keyset should be \"{0,1,...,(nprocs-1)}\".\n Note:\n When the \"entrypoint\" is a binary (\"str\"), \"args\" can only be\n strings. If any other type is given, then it is casted to a\n string representation (e.g. \"str(arg1)\"). Furthermore, a binary\n failure will only write an \"error.json\" error file if the main\n function is annotated with\n \"torch.distributed.elastic.multiprocessing.errors.record\". For\n function launches, this is done by default and there is no need\n to manually annotate with the \"@record\" annotation.", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"}
{"text": "\"redirects\" and \"tee\" are bitmasks specifying which std stream(s)\n to redirect to a log file in the \"log_dir\". Valid mask values are\n defined in \"Std\". To redirect/tee only certain local ranks, pass\n \"redirects\" as a map with the key as the local rank to specify the\n redirect behavior for. Any missing local ranks will default to\n \"Std.NONE\".\n \"tee\" acts like the unix \"tee\" command in that it redirects +\n prints to console. To avoid worker stdout/stderr from printing to\n console, use the \"redirects\" parameter.\n For each process, the \"log_dir\" will contain:\n 1. \"{local_rank}/error.json\": if the process failed, a file with\n the error info\n 2. \"{local_rank}/stdout.json\": if \"redirect & STDOUT == STDOUT\"\n 3. \"{local_rank}/stderr.json\": if \"redirect & STDERR == STDERR\"\n Note:\n It is expected that the \"log_dir\" exists, is empty, and is a\n directory.\n Example:\n log_dir = \"/tmp/test\"\n # ok; two copies of foo: foo(\"bar0\"), foo(\"bar1\")\n start_processes(", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"}
{"text": "start_processes(\n name=\"trainer\",\n entrypoint=foo,\n args:{0:(\"bar0\",), 1:(\"bar1\",),\n envs:{0:{}, 1:{}},\n log_dir=log_dir\n )\n # invalid; envs missing for local rank 1\n start_processes(\n name=\"trainer\",\n entrypoint=foo,\n args:{0:(\"bar0\",), 1:(\"bar1\",),\n envs:{0:{}},\n log_dir=log_dir\n )\n # ok; two copies of /usr/bin/touch: touch file1, touch file2\n start_processes(\n name=\"trainer\",\n entrypoint=\"/usr/bin/touch\",\n args:{0:(\"file1\",), 1:(\"file2\",),\n envs:{0:{}, 1:{}},\n log_dir=log_dir\n )\n # caution; arguments casted to string, runs:\n # echo \"1\" \"2\" \"3\" and echo \"[1, 2, 3]\"\n start_processes(\n name=\"trainer\",\n entrypoint=\"/usr/bin/echo\",\n args:{0:(1,2,3), 1:([1,2,3],),\n envs:{0:{}, 1:{}},\n log_dir=log_dir\n )\n Parameters:\n * name (str) -- a human readable short name that describes", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"}
{"text": "what the processes are (used as header when tee'ing\n stdout/stderr outputs)\n * entrypoint (Union[Callable, str]) -- either a\n \"Callable\" (function) or \"cmd\" (binary)\n * args (Dict[int, Tuple]) -- arguments to each\n replica\n * envs (Dict[int, Dict[str, str]]) --\n env vars to each replica\n * log_dir (str) -- directory used to write log files\n * start_method (str) -- multiprocessing start method\n (spawn, fork, forkserver) ignored for binaries\n * redirects (Union[Std, Dict[int,\n Std]]) -- which std streams to redirect to a log file\n * tee (Union[Std, Dict[int, Std]]) --\n which std streams to redirect + print to console\n Return type:\n PContext\nProcess Context\n===============", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"}
{"text": "PContext\nProcess Context\n===============\nclass torch.distributed.elastic.multiprocessing.api.PContext(name, entrypoint, args, envs, stdouts, stderrs, tee_stdouts, tee_stderrs, error_files)\n The base class that standardizes operations over a set of processes\n that are launched via different mechanisms. The name \"PContext\" is\n intentional to disambiguate with\n \"torch.multiprocessing.ProcessContext\".\n Warning:\n stdouts and stderrs should ALWAYS be a superset of tee_stdouts\n and tee_stderrs (respectively) this is b/c tee is implemented as\n a redirect + tail -f \nclass torch.distributed.elastic.multiprocessing.api.MultiprocessContext(name, entrypoint, args, envs, stdouts, stderrs, tee_stdouts, tee_stderrs, error_files, start_method)\n \"PContext\" holding worker processes invoked as a function.\nclass torch.distributed.elastic.multiprocessing.api.SubprocessContext(name, entrypoint, args, envs, stdouts, stderrs, tee_stdouts, tee_stderrs, error_files)", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"}
{"text": "\"PContext\" holding worker processes invoked as a binary.\nclass torch.distributed.elastic.multiprocessing.api.RunProcsResult(return_values=, failures=, stdouts=, stderrs=)\n Results of a completed run of processes started with\n \"start_processes()\". Returned by \"PContext\".\n Note the following:\n 1. All fields are mapped by local rank\n 2. \"return_values\" - only populated for functions (not the\n binaries).\n 3. \"stdouts\" - path to stdout.log (empty string if no redirect)\n 4. \"stderrs\" - path to stderr.log (empty string if no redirect)", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"}
{"text": "Elastic AgentServer\nThe elastic agent is the control plane of torchelastic. It is a\nprocess that launches and manages underlying worker processes. The\nagent is responsible for:\n1. Working with distributed torch: the workers are started with all\n the necessary information to successfully and trivially call\n \"torch.distributed.init_process_group()\".\n2. Fault tolerance: monitors workers and upon detecting worker\n failures or unhealthiness, tears down all workers and restarts\n everyone.\n3. Elasticity: Reacts to membership changes and restarts workers with\n the new members.\nThe simplest agents are deployed per node and works with local\nprocesses. A more advanced agent can launch and manage workers\nremotely. Agents can be completely decentralized, making decisions\nbased on the workers it manages. Or can be coordinated, communicating\nto other agents (that manage workers in the same job) to make a\ncollective decision.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "collective decision.\nBelow is a diagram of an agent that manages a local group of workers.\n[image]\nConcepts\n========\nThis section describes the high-level classes and concepts that are\nrelevant to understanding the role of the \"agent\" in torchelastic.\nclass torch.distributed.elastic.agent.server.ElasticAgent\n Agent process responsible for managing one or more worker\n processes. The worker processes are assumed to be regular\n distributed PyTorch scripts. When the worker process is created by\n the agent, the agent provides the necessary information for the\n worker processes to properly initialize a torch process group.\n The exact deployment topology and ratio of agent-to-worker is\n dependent on the specific implementation of the agent and the\n user's job placement preferences. For instance, to run a\n distributed training job on GPU with 8 trainers (one per GPU) one\n can:\n 1. Use 8 x single GPU instances, place an agent per instance,\n managing 1 worker per agent.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "managing 1 worker per agent.\n 2. Use 4 x double GPU instances, place an agent per instance,\n managing 2 workers per agent.\n 3. Use 2 x quad GPU instances, place an agent per instance,\n managing 4 workers per agent.\n 4. Use 1 x 8 GPU instance, place an agent per instance, managing 8\n workers per agent.\n Usage\n group_result = agent.run()\n if group_result.is_failed():\n # workers failed\n failure = group_result.failures[0]\n log.exception(f\"worker 0 failed with exit code : {failure.exit_code}\")\n else:\n return group_result.return_values[0] # return rank 0's results\n abstract get_worker_group(role='default')\n Returns:\n The \"WorkerGroup\" for the given \"role\". Note that the worker\n group is a mutable object and hence in a multi-\n threaded/process environment it may change state.\n Implementors are encouraged (but not required) to return a\n defensive read-only copy.\n Return type:", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "Return type:\n WorkerGroup\n abstract run(role='default')\n Runs the agent, retrying the worker group on failures up to\n \"max_restarts\".\n Returns:\n The result of the execution, containing the return values or\n failure details for each worker mapped by the worker's global\n rank.\n Raises:\n Exception - any other failures NOT related to worker\n process --\n Return type:\n RunResult\nclass torch.distributed.elastic.agent.server.WorkerSpec(role, local_world_size, rdzv_handler, fn=None, entrypoint=None, args=(), max_restarts=3, monitor_interval=30.0, master_port=None, master_addr=None, local_addr=None, redirects=Std.NONE, tee=Std.NONE)\n Contains blueprint information about a particular type of worker.\n For a given role, there must only exist a single worker spec.\n Worker spec is expected to be homogenous across all nodes\n (machine), that is each node runs the same number of workers for a\n particular spec.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "particular spec.\n Parameters:\n * role (str) -- user-defined role for the workers with\n this spec\n * local_world_size (int) -- number local workers to run\n * fn (Optional[Callable]) -- (deprecated use\n entrypoint instead)\n * entrypoint (Optional[Union[Callable,\n str]]) -- worker function or command\n * args (Tuple) -- arguments to pass to \"entrypoint\"\n * rdzv_handler (RendezvousHandler) -- handles rdzv for\n this set of workers\n * max_restarts (int) -- number of max retries for the\n workers\n * monitor_interval (float) -- monitor status of workers\n every \"n\" seconds\n * master_port (Optional[int]) -- fixed port to run\n the c10d store on rank 0 if not specified then will chose a\n random free port\n * master_addr (Optional[str]) -- fixed master_addr\n to run the c10d store on rank 0 if not specified then will", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "chose hostname on agent rank 0\n * redirects (Union[Std, Dict[int,\n Std]]) -- redirect std streams to a file, selectively\n redirect for a particular local rank by passing a map\n * tee (Union[Std, Dict[int, Std]]) --\n tees the specified std stream(s) to console + file,\n selectively tee for a particular local rank by passing a map,\n takes precedence over \"redirects\" settings.\n get_entrypoint_name()\n If the entrypoint is a function (e.g. \"Callable\") returns its\n \"qualname\", else if the entrypoint is a binary (e.g. \"str\"),\n returns the binary name.\nclass torch.distributed.elastic.agent.server.WorkerState(value)\n State of the \"WorkerGroup\". Workers in a worker group change state\n as a unit. If a single worker in a worker group fails the entire\n set is considered failed:\n UNKNOWN - agent lost track of worker group state, unrecoverable", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "INIT - worker group object created not yet started\n HEALTHY - workers running and healthy\n UNHEALTHY - workers running and unhealthy\n STOPPED - workers stopped (interrupted) by the agent\n SUCCEEDED - workers finished running (exit 0)\n FAILED - workers failed to successfully finish (exit !0)\n A worker group starts from an initial \"INIT\" state, then progresses\n to \"HEALTHY\" or \"UNHEALTHY\" states, and finally reaches a terminal\n \"SUCCEEDED\" or \"FAILED\" state.\n Worker groups can be interrupted and temporarily put into \"STOPPED\"\n state by the agent. Workers in \"STOPPED\" state are scheduled to be\n restarted in the near future by the agent. Some examples of workers\n being put into \"STOPPED\" state are:\n 1. Worker group failure|unhealthy observed\n 2. Membership change detected\n When actions (start, stop, rdzv, retry, etc) on worker group fails\n and results in the action being partially applied to the worker", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "group the state will be \"UNKNOWN\". Typically this happens on\n uncaught/unhandled exceptions during state change events on the\n agent. The agent is not expected to recover worker groups in\n \"UNKNOWN\" state and is better off self terminating and allowing the\n job manager to retry the node.\n static is_running(state)\n Returns:\n True if the worker state represents workers still running\n (e.g. that the process exists but not necessarily healthy).\n Return type:\n bool\nclass torch.distributed.elastic.agent.server.Worker(local_rank, global_rank=- 1, role_rank=- 1, world_size=- 1, role_world_size=- 1)\n Represents a worker instance. Contrast this with \"WorkerSpec\" that\n represents the specifications of a worker. A \"Worker\" is created\n from a \"WorkerSpec\". A \"Worker\" is to a \"WorkerSpec\" as an object\n is to a class.\n The \"id\" of the worker is interpreted by the specific\n implementation of \"ElasticAgent\". For a local agent, it could be", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "the \"pid (int)\" of the worker, for a remote agent it could be\n encoded as \"host:port (string)\".\n Parameters:\n * id (Any) -- uniquely identifies a worker (interpreted by\n the agent)\n * local_rank (int) -- local rank of the worker\n * global_rank (int) -- global rank of the worker\n * role_rank (int) -- rank of the worker across all workers\n that have the same role\n * world_size (int) -- number of workers (globally)\n * role_world_size (int) -- number of workers that have the\n same role\nclass torch.distributed.elastic.agent.server.WorkerGroup(spec)\n Represents the set of \"Worker\" instances for the given \"WorkerSpec\"\n managed by \"ElasticAgent\". Whether the worker group contains cross\n instance workers or not depends on the implementation of the agent.\nImplementations\n===============\nBelow are the agent implementations provided by torchelastic.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "class torch.distributed.elastic.agent.server.local_elastic_agent.LocalElasticAgent(spec, start_method='spawn', exit_barrier_timeout=300, log_dir=None)\n An implementation of \"torchelastic.agent.server.ElasticAgent\" that\n handles host-local workers. This agent is deployed per host and is\n configured to spawn \"n\" workers. When using GPUs, \"n\" maps to the\n number of GPUs available on the host.\n The local agent does not communicate to other local agents deployed\n on other hosts, even if the workers may communicate inter-host. The\n worker id is interpreted to be a local process. The agent starts\n and stops all worker processes as a single unit.\n The worker function and argument passed to the worker function must\n be python multiprocessing compatible. To pass multiprocessing data\n structures to the workers you may create the data structure in the\n same multiprocessing context as the specified \"start_method\" and\n pass it as a function argument.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "pass it as a function argument.\n The \"exit_barrier_timeout\" specifies the amount of time (in\n seconds) to wait for other agents to finish. This acts as a safety\n net to handle cases where workers finish at different times, to\n prevent agents from viewing workers that finished early as a scale-\n down event. It is strongly advised that the user code deal with\n ensuring that workers are terminated in a synchronous manner rather\n than relying on the exit_barrier_timeout.\n A named pipe based watchdog can be enabled in \"LocalElasticAgent\"\n if an environment variable \"TORCHELASTIC_ENABLE_FILE_TIMER\" with\n value 1 has been defined in the \"LocalElasticAgent\" process.\n Optionally, another environment variable\n \"TORCHELASTIC_TIMER_FILE\" can be set with a unique file name for\n the named pipe. If the environment variable\n \"TORCHELASTIC_TIMER_FILE\" is not set, \"LocalElasticAgent\" will\n internally create a unique file name and set it to the environment", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "variable \"TORCHELASTIC_TIMER_FILE\", and this environment variable\n will be propagated to the worker processes to allow them to connect\n to the same named pipe that \"LocalElasticAgent\" uses.\n Example launching function\n def trainer(args) -> str:\n return \"do train\"\n def main():\n start_method=\"spawn\"\n shared_queue= multiprocessing.get_context(start_method).Queue()\n spec = WorkerSpec(\n role=\"trainer\",\n local_world_size=nproc_per_process,\n entrypoint=trainer,\n args=(\"foobar\",),\n ...)\n agent = LocalElasticAgent(spec, start_method)\n results = agent.run()\n if results.is_failed():\n print(\"trainer failed\")\n else:\n print(f\"rank 0 return value: {results.return_values[0]}\")\n # prints -> rank 0 return value: do train\n Example launching binary\n def main():", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "Example launching binary\n def main():\n spec = WorkerSpec(\n role=\"trainer\",\n local_world_size=nproc_per_process,\n entrypoint=\"/usr/local/bin/trainer\",\n args=(\"--trainer_args\", \"foobar\"),\n ...)\n agent = LocalElasticAgent(spec)\n results = agent.run()\n if not results.is_failed():\n print(\"binary launches do not have return values\")\nExtending the Agent\n===================\nTo extend the agent you can implement \"`ElasticAgent\" directly,\nhowever we recommend you extend \"SimpleElasticAgent\" instead, which\nprovides most of the scaffolding and leaves you with a few specific\nabstract methods to implement.\nclass torch.distributed.elastic.agent.server.SimpleElasticAgent(spec, exit_barrier_timeout=300)\n An \"ElasticAgent\" that manages workers (\"WorkerGroup\") for a single\n \"WorkerSpec\" (e.g. one particular type of worker role).", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "_assign_worker_ranks(store, group_rank, group_world_size, spec)\n Determines proper ranks for worker processes. The rank\n assignment is done according to the following algorithm:\n 1. Each agent writes its configuration(group_rank,\n group_world_size , num_workers) to the common store.\n 2. Each agent retrieves configuration for all agents and\n performs two level sort using role and rank.\n 3. Determine the global rank: the global rank of the workers for\n the current agent is the offset of the infos array up to\n group_rank of the agent. The offset is computed as a sum of\n local_world_size of all agents that have rank less than the\n group_rank. The workers would have the ranks: [offset,\n offset+local_world_size)\n 4. Determine the role rank: The role rank is determined using\n the algorithms in the point 3 with the exception that the\n offset is done from the first agent that has the same role as", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "current one and has the minimum group rank.\n Return type:\n List[Worker]\n _exit_barrier()\n Wait for \"exit_barrier_timeout\" seconds for all agents to finish\n executing their local workers (either successfully or not). This\n acts as a safety guard against user scripts that terminate at\n different times. This barrier keeps the agent process alive\n until all workers finish.\n _initialize_workers(worker_group)\n Starts a fresh set of workers for the worker_group. Essentially\n a rendezvous followed by a start_workers.\n The caller should first call \"_stop_workers()\" to stop running\n workers prior to calling this method.\n Optimistically sets the state of the worker group that just\n started as \"HEALTHY\" and delegates the actual monitoring of\n state to \"_monitor_workers()\" method\n abstract _monitor_workers(worker_group)\n Checks on the workers for the \"worker_group\" and returns the new\n state of the worker group.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "state of the worker group.\n Return type:\n RunResult\n _rendezvous(worker_group)\n Runs rendezvous for the workers specified by worker spec.\n Assigns workers a new global rank and world size. Updates the\n rendezvous store for the worker group.\n _restart_workers(worker_group)\n Restarts (stops, rendezvous, starts) all local workers in the\n group.\n abstract _shutdown(death_sig=Signals.SIGTERM)\n Cleans up any resources that were allocated during the agent's\n work.\n Parameters:\n death_sig (Signals) -- Signal to send to the child\n process, SIGTERM is default\n abstract _start_workers(worker_group)\n Starts \"worker_group.spec.local_world_size\" number of workers\n according to worker spec for the worker group .\n Returns a map of \"local_rank\" to worker \"id\".\n Return type:\n Dict[int, Any]\n abstract _stop_workers(worker_group)\n Stops all workers in the given worker group. Implementors must", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "deal with workers in all states defined by \"WorkerState\". That\n is, it must gracefully handle stopping non-existent workers,\n unhealthy (stuck) workers, etc.\nclass torch.distributed.elastic.agent.server.api.RunResult(state, return_values=, failures=)\n Results returned by the worker executions. Run results follow an\n \"all-or-nothing\" policy where the run is successful if and only if\n ALL local workers managed by this agent complete successfully.\n If the result is successful (e.g. \"is_failed() = False\") then the\n \"return_values\" field contains the outputs (return values) of the\n workers managed by THIS agent mapped by their GLOBAL ranks. That is\n \"result.return_values[0]\" is the return value of global rank 0.\n Note:\n \"return_values\" are only meaningful for when the worker\n entrypoint is a function. Workers specified as a binary\n entrypoint do not canonically have a return value and the\n \"return_values\" field is meaningless and may be empty.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "If \"is_failed()\" returns \"True\" then the \"failures\" field contains\n the failure information, again, mapped by the GLOBAL rank of the\n worker that failed.\n The keys in \"return_values\" and \"failures\" are mutually exclusive,\n that is, a worker's final state can only be one of: succeeded,\n failed. Workers intentionally terminated by the agent according to\n the agent's restart policy, are not represented in either\n \"return_values\" nor \"failures\".\nWatchdog in the Agent\n=====================\nA named pipe based watchdog can be enabled in \"LocalElasticAgent\" if\nan environment variable \"TORCHELASTIC_ENABLE_FILE_TIMER\" with value 1\nhas been defined in the \"LocalElasticAgent\" process. Optionally,\nanother environment variable \"TORCHELASTIC_TIMER_FILE\" can be set\nwith a unique file name for the named pipe. If the environment\nvariable \"TORCHELASTIC_TIMER_FILE\" is not set, \"LocalElasticAgent\"\nwill internally create a unique file name and set it to the", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
{"text": "environment variable \"TORCHELASTIC_TIMER_FILE\", and this environment\nvariable will be propagated to the worker processes to allow them to\nconnect to the same named pipe that \"LocalElasticAgent\" uses.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"}
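The propagation logic described above (honor an existing "TORCHELASTIC_TIMER_FILE", otherwise invent a unique name and export it so workers connect to the same named pipe) can be sketched in plain Python. The environment variable names come from the docs; the helper function and file-naming scheme are illustrative, not the "LocalElasticAgent" implementation:

```python
import os
import tempfile
import uuid

def resolve_timer_file(env=os.environ):
    """Illustrative sketch: honor an existing TORCHELASTIC_TIMER_FILE,
    otherwise create a unique name and export it so that worker
    processes inherit the same named-pipe path."""
    if env.get("TORCHELASTIC_ENABLE_FILE_TIMER") != "1":
        return None  # watchdog disabled
    if "TORCHELASTIC_TIMER_FILE" not in env:
        unique = os.path.join(tempfile.gettempdir(),
                              "watchdog_timer_" + uuid.uuid4().hex)
        env["TORCHELASTIC_TIMER_FILE"] = unique  # propagated to workers
    return env["TORCHELASTIC_TIMER_FILE"]

# Enabled without an explicit file name: a unique name is generated
# and exported so the workers see the same path.
env = {"TORCHELASTIC_ENABLE_FILE_TIMER": "1"}
path = resolve_timer_file(env)
assert env["TORCHELASTIC_TIMER_FILE"] == path

# Without the enabling flag, the watchdog stays off.
assert resolve_timer_file({}) is None
```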
{"text": "Expiration Timers\n=================\nExpiration timers are set up on the same process as the agent and used\nfrom your script to deal with stuck workers. When you go into a code-\nblock that has the potential to get stuck you can acquire an\nexpiration timer, which instructs the timer server to kill the process\nif it does not release the timer by the self-imposed expiration\ndeadline.\nUsage:\n    import multiprocessing as mp\n    import torchelastic.timer as timer\n    import torchelastic.agent.server as agent\n    def main():\n        start_method = \"spawn\"\n        message_queue = mp.get_context(start_method).Queue()\n        server = timer.LocalTimerServer(message_queue, max_interval=0.01)\n        server.start()  # non-blocking\n        spec = WorkerSpec(\n            fn=trainer_func,\n            args=(message_queue,),\n            ...)\n        agent = agent.LocalElasticAgent(spec, start_method)\n        agent.run()\n    def trainer_func(message_queue):\n        timer.configure(timer.LocalTimerClient(message_queue))", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"}
{"text": "with timer.expires(after=60): # 60 second expiry\n # do some work\nIn the example above if \"trainer_func\" takes more than 60 seconds to\ncomplete, then the worker process is killed and the agent retries the\nworker group.\nClient Methods\n==============\ntorch.distributed.elastic.timer.configure(timer_client)\n Configures a timer client. Must be called before using \"expires\".\ntorch.distributed.elastic.timer.expires(after, scope=None, client=None)\n Acquires a countdown timer that expires in \"after\" seconds from\n now, unless the code-block that it wraps is finished within the\n timeframe. When the timer expires, this worker is eligible to be\n reaped. The exact meaning of \"reaped\" depends on the client\n implementation. In most cases, reaping means to terminate the\n worker process. Note that the worker is NOT guaranteed to be reaped\n at exactly \"time.now() + after\", but rather the worker is\n \"eligible\" for being reaped and the \"TimerServer\" that the client", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"}
{"text": "talks to will ultimately make the decision when and how to reap the\n workers with expired timers.\n Usage:\n torch.distributed.elastic.timer.configure(LocalTimerClient())\n with expires(after=10):\n torch.distributed.all_reduce(...)\nServer/Client Implementations\n=============================\nBelow are the timer server and client pairs that are provided by\ntorchelastic.\nNote:\n Timer server and clients always have to be implemented and used in\n pairs since there is a messaging protocol between the server and\n client.\nBelow is a pair of timer server and client that is implemented based\non a \"multiprocess.Queue\".\nclass torch.distributed.elastic.timer.LocalTimerServer(mp_queue, max_interval=60, daemon=True)\n Server that works with \"LocalTimerClient\". Clients are expected to\n be subprocesses to the parent process that is running this server.\n Each host in the job is expected to start its own timer server\n locally and each server instance manages timers for local workers", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"}
{"text": "(running on processes on the same host).\nclass torch.distributed.elastic.timer.LocalTimerClient(mp_queue)\n Client side of \"LocalTimerServer\". This client is meant to be used\n on the same host that the \"LocalTimerServer\" is running on and uses\n pid to uniquely identify a worker. This is particularly useful in\n situations where one spawns a subprocess (trainer) per GPU on a\n host with multiple GPU devices.\nBelow is another pair of timer server and client that is implemented\nbased on a named pipe.\nclass torch.distributed.elastic.timer.FileTimerServer(file_path, max_interval=10, daemon=True, log_event=None)\n Server that works with \"FileTimerClient\". Clients are expected to\n be running on the same host as the process that is running this\n server. Each host in the job is expected to start its own timer\n server locally and each server instance manages timers for local\n workers (running on processes on the same host).\n Parameters:", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"}
{"text": "Parameters:\n * file_path (str) -- str, the path of a FIFO special file\n to be created.\n * max_interval (float) -- float, max interval in seconds\n for each watchdog loop.\n * daemon (bool) -- bool, running the watchdog thread in\n daemon mode or not. A daemon thread will not block a process\n from stopping.\n * log_event (Callable[[str,\n Optional[FileTimerRequest]], None]) -- an optional callback\n for logging the events in JSON format.\nclass torch.distributed.elastic.timer.FileTimerClient(file_path, signal=Signals.SIGKILL)\n Client side of \"FileTimerServer\". This client is meant to be used\n on the same host that the \"FileTimerServer\" is running on and uses\n pid to uniquely identify a worker. This client uses a named_pipe to\n send timer requests to the \"FileTimerServer\". This client is a\n producer while the \"FileTimerServer\" is a consumer. Multiple", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"}
{"text": "clients can work with the same \"FileTimerServer\".\n Parameters:\n * file_path (str) -- str, the path of a FIFO special file.\n \"FileTimerServer\" must have created it by calling os.mkfifo().\n * signal -- signal, the signal to use to kill the process.\n Using a negative or zero signal will not kill the process.\nWriting a custom timer server/client\n====================================\nTo write your own timer server and client extend the\n\"torch.distributed.elastic.timer.TimerServer\" for the server and\n\"torch.distributed.elastic.timer.TimerClient\" for the client. The\n\"TimerRequest\" object is used to pass messages between the server and\nclient.\nclass torch.distributed.elastic.timer.TimerRequest(worker_id, scope_id, expiration_time)\n Data object representing a countdown timer acquisition and release\n that is used between the \"TimerClient\" and \"TimerServer\". A\n negative \"expiration_time\" should be interpreted as a \"release\"\n request.\n Note:", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"}
{"text": "request.\n Note:\n the type of \"worker_id\" is implementation specific. It is\n whatever the TimerServer and TimerClient implementations use\n to uniquely identify a worker.\nclass torch.distributed.elastic.timer.TimerServer(request_queue, max_interval, daemon=True)\n Entity that monitors active timers and expires them in a timely\n fashion. This server is responsible for reaping workers that have\n expired timers.\n abstract clear_timers(worker_ids)\n Clears all timers for the given \"worker_ids\".\n abstract get_expired_timers(deadline)\n Returns all expired timers for each worker_id. An expired timer\n is a timer for which the expiration_time is less than or equal\n to the provided deadline.\n Return type:\n Dict[str, List[TimerRequest]]\n abstract register_timers(timer_requests)\n Processes the incoming timer requests and registers them with\n the server. The timer request can either be an acquire-timer or", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"}
{"text": "release-timer request. Timer requests with a negative\n expiration_time should be interpreted as a release-timer\n request.\nclass torch.distributed.elastic.timer.TimerClient\n Client library to acquire and release countdown timers by\n communicating with the TimerServer.\n abstract acquire(scope_id, expiration_time)\n Acquires a timer for the worker that holds this client object\n given the scope_id and expiration_time. Typically registers the\n timer with the TimerServer.\n abstract release(scope_id)\n Releases the timer for the \"scope_id\" on the worker this client\n represents. After this method is called, the countdown timer on\n the scope is no longer in effect.", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"}
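A custom server/client pair per the section above must implement the acquire/release message protocol: a negative "expiration_time" is a release request, and "get_expired_timers(deadline)" returns timers whose expiration is at or before the deadline. The in-memory server below is an illustrative, torch-free stand-in that mirrors the abstract interface; it is not the real torch.distributed.elastic.timer code:

```python
from collections import defaultdict

class TimerRequest:
    """Message passed from client to server; a negative
    expiration_time is interpreted as a release request."""
    def __init__(self, worker_id, scope_id, expiration_time):
        self.worker_id = worker_id
        self.scope_id = scope_id
        self.expiration_time = expiration_time

class InMemoryTimerServer:
    """Toy analogue of TimerServer: registers, clears, expires timers."""
    def __init__(self):
        self._timers = {}  # (worker_id, scope_id) -> TimerRequest

    def register_timers(self, timer_requests):
        for req in timer_requests:
            key = (req.worker_id, req.scope_id)
            if req.expiration_time < 0:   # release-timer request
                self._timers.pop(key, None)
            else:                         # acquire-timer request
                self._timers[key] = req

    def clear_timers(self, worker_ids):
        self._timers = {k: v for k, v in self._timers.items()
                        if k[0] not in worker_ids}

    def get_expired_timers(self, deadline):
        expired = defaultdict(list)
        for (worker_id, _), req in self._timers.items():
            if req.expiration_time <= deadline:
                expired[worker_id].append(req)
        return dict(expired)

# A worker acquires a timer, the server checks for expiry, the worker
# releases the timer by sending a negative expiration_time.
server = InMemoryTimerServer()
server.register_timers([TimerRequest(1, "train", 100.0)])
assert server.get_expired_timers(deadline=50.0) == {}   # not yet expired
assert 1 in server.get_expired_timers(deadline=100.0)   # expired at deadline
server.register_timers([TimerRequest(1, "train", -1)])  # release
assert server.get_expired_timers(deadline=1000.0) == {}
```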
{"text": "Train script\n============\nIf your train script works with \"torch.distributed.launch\" it will\ncontinue working with \"torchrun\" with these differences:\n1. No need to manually pass \"RANK\", \"WORLD_SIZE\", \"MASTER_ADDR\", and\n \"MASTER_PORT\".\n2. \"rdzv_backend\" and \"rdzv_endpoint\" can be provided. For most users\n this will be set to \"c10d\" (see rendezvous). The default\n \"rdzv_backend\" creates a non-elastic rendezvous where\n \"rdzv_endpoint\" holds the master address.\n3. Make sure you have a \"load_checkpoint(path)\" and\n \"save_checkpoint(path)\" logic in your script. When any number of\n workers fail we restart all the workers with the same program\n arguments so you will lose progress up to the most recent\n checkpoint (see elastic launch).\n4. \"use_env\" flag has been removed. If you were parsing local rank by\n parsing the \"--local_rank\" option, you need to get the local rank\n from the environment variable \"LOCAL_RANK\" (e.g.\n \"int(os.environ[\"LOCAL_RANK\"])\").", "source": "https://pytorch.org/docs/stable/elastic/train_script.html", "category": "pytorch docs"}
{"text": "\"int(os.environ[\"LOCAL_RANK\"])\").\nBelow is an expository example of a training script that checkpoints\non each epoch, hence the worst-case progress lost on failure is one\nfull epoch worth of training.\n    def main():\n        args = parse_args(sys.argv[1:])\n        state = load_checkpoint(args.checkpoint_path)\n        initialize(state)\n        # torch.distributed.run ensures that this will work\n        # by exporting all the env vars needed to initialize the process group\n        torch.distributed.init_process_group(backend=args.backend)\n        for i in range(state.epoch, state.total_num_epochs):\n            for batch in iter(state.dataset):\n                train(batch, state.model)\n            state.epoch += 1\n            save_checkpoint(state)\nFor concrete examples of torchelastic-compliant train scripts, visit\nour examples page.", "source": "https://pytorch.org/docs/stable/elastic/train_script.html", "category": "pytorch docs"}
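The load/save-checkpoint contract from point 3 above can be sketched in plain Python. "TrainState", the pickle-based persistence, and the file layout are illustrative choices, not part of torchrun; the point is that a restarted script resumes from the last saved epoch:

```python
import os
import pickle
import tempfile
from dataclasses import dataclass, field

@dataclass
class TrainState:
    epoch: int = 0
    total_num_epochs: int = 4
    losses: list = field(default_factory=list)

def save_checkpoint(state, path):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path):
    # Resume from the checkpoint if one exists, else start fresh.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return TrainState()

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
state = load_checkpoint(ckpt)                 # fresh run: epoch 0
for epoch in range(state.epoch, state.total_num_epochs):
    state.losses.append(1.0 / (epoch + 1))    # stand-in for real training
    state.epoch = epoch + 1
    save_checkpoint(state, ckpt)              # checkpoint once per epoch
    if epoch == 1:
        break                                 # simulate a mid-run failure

resumed = load_checkpoint(ckpt)               # torchrun restarts the script...
assert resumed.epoch == 2                     # ...losing at most one epoch
```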
{"text": "TorchElastic Kubernetes\n=======================\nPlease refer to our GitHub's Kubernetes README for more information on\nElastic Job Controller and custom resource definition.", "source": "https://pytorch.org/docs/stable/elastic/kubernetes.html", "category": "pytorch docs"}
{"text": "Automatic Mixed Precision package - torch.amp\n=============================================\n\"torch.amp\" provides convenience methods for mixed precision, where\nsome operations use the \"torch.float32\" (\"float\") datatype and other\noperations use lower precision floating point datatype\n(\"lower_precision_fp\"): \"torch.float16\" (\"half\") or \"torch.bfloat16\".\nSome ops, like linear layers and convolutions, are much faster in\n\"lower_precision_fp\". Other ops, like reductions, often require the\ndynamic range of \"float32\". Mixed precision tries to match each op to\nits appropriate datatype.\nOrdinarily, \"automatic mixed precision training\" with datatype of\n\"torch.float16\" uses \"torch.autocast\" and \"torch.cuda.amp.GradScaler\"\ntogether, as shown in the CUDA Automatic Mixed Precision examples and\nCUDA Automatic Mixed Precision recipe. However, \"torch.autocast\" and\n\"torch.cuda.amp.GradScaler\" are modular, and may be used separately if\ndesired. As shown in the CPU example section of \"torch.autocast\",", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "\"automatic mixed precision training/inference\" on CPU with datatype of\n\"torch.bfloat16\" only uses \"torch.autocast\".\nFor CUDA and CPU, APIs are also provided separately:\n* \"torch.autocast(\"cuda\", args...)\" is equivalent to\n \"torch.cuda.amp.autocast(args...)\".\n* \"torch.autocast(\"cpu\", args...)\" is equivalent to\n \"torch.cpu.amp.autocast(args...)\". For CPU, only lower precision\n floating point datatype of \"torch.bfloat16\" is supported for now.\n* Autocasting\n* Gradient Scaling\n* Autocast Op Reference\n * Op Eligibility\n * CUDA Op-Specific Behavior\n * CUDA Ops that can autocast to \"float16\"\n * CUDA Ops that can autocast to \"float32\"\n * CUDA Ops that promote to the widest input type\n * Prefer \"binary_cross_entropy_with_logits\" over\n \"binary_cross_entropy\"\n * CPU Op-Specific Behavior\n * CPU Ops that can autocast to \"bfloat16\"\n * CPU Ops that can autocast to \"float32\"\n * CPU Ops that promote to the widest input type\nAutocasting\n===========", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "Autocasting\nclass torch.autocast(device_type, dtype=None, enabled=True, cache_enabled=None)\n Instances of \"autocast\" serve as context managers or decorators\n that allow regions of your script to run in mixed precision.\n In these regions, ops run in an op-specific dtype chosen by\n autocast to improve performance while maintaining accuracy. See the\n Autocast Op Reference for details.\n When entering an autocast-enabled region, Tensors may be any type.\n You should not call \"half()\" or \"bfloat16()\" on your model(s) or\n inputs when using autocasting.\n \"autocast\" should wrap only the forward pass(es) of your network,\n including the loss computation(s). Backward passes under autocast\n are not recommended. Backward ops run in the same type that\n autocast used for corresponding forward ops.\n Example for CUDA Devices:\n # Creates model and optimizer in default precision\n model = Net().cuda()\n optimizer = optim.SGD(model.parameters(), ...)", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "    for input, target in data:\n        optimizer.zero_grad()\n        # Enables autocasting for the forward pass (model + loss)\n        with autocast():\n            output = model(input)\n            loss = loss_fn(output, target)\n        # Exits the context manager before backward()\n        loss.backward()\n        optimizer.step()\n See the CUDA Automatic Mixed Precision examples for usage (along\n with gradient scaling) in more complex scenarios (e.g., gradient\n penalty, multiple models/losses, custom autograd functions).\n \"autocast\" can also be used as a decorator, e.g., on the \"forward\"\n method of your model:\n    class AutocastModel(nn.Module):\n        ...\n        @autocast()\n        def forward(self, input):\n            ...\n Floating-point Tensors produced in an autocast-enabled region may\n be \"float16\". After returning to an autocast-disabled region, using\n them with floating-point Tensors of different dtypes may cause type", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "mismatch errors. If so, cast the Tensor(s) produced in the\n autocast region back to \"float32\" (or other dtype if desired). If a\n Tensor from the autocast region is already \"float32\", the cast is a\n no-op, and incurs no additional overhead. CUDA Example:\n # Creates some tensors in default dtype (here assumed to be float32)\n a_float32 = torch.rand((8, 8), device=\"cuda\")\n b_float32 = torch.rand((8, 8), device=\"cuda\")\n c_float32 = torch.rand((8, 8), device=\"cuda\")\n d_float32 = torch.rand((8, 8), device=\"cuda\")\n with autocast():\n # torch.mm is on autocast's list of ops that should run in float16.\n # Inputs are float32, but the op runs in float16 and produces float16 output.\n # No manual casts are required.\n e_float16 = torch.mm(a_float32, b_float32)\n # Also handles mixed input types\n f_float16 = torch.mm(d_float32, e_float16)\n # After exiting autocast, calls f_float16.float() to use with d_float32", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "g_float32 = torch.mm(d_float32, f_float16.float())\n CPU Training Example:\n    # Creates model and optimizer in default precision\n    model = Net()\n    optimizer = optim.SGD(model.parameters(), ...)\n    for epoch in epochs:\n        for input, target in data:\n            optimizer.zero_grad()\n            # Runs the forward pass with autocasting.\n            with torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n                output = model(input)\n                loss = loss_fn(output, target)\n            loss.backward()\n            optimizer.step()\n CPU Inference Example:\n    # Creates model in default precision\n    model = Net().eval()\n    with torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n        for input in data:\n            # Runs the forward pass with autocasting.\n            output = model(input)\n CPU Inference Example with Jit Trace:\n    class TestModel(nn.Module):\n        def __init__(self, input_size, num_classes):", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "            super(TestModel, self).__init__()\n            self.fc1 = nn.Linear(input_size, num_classes)\n        def forward(self, x):\n            return self.fc1(x)\n    input_size = 2\n    num_classes = 2\n    model = TestModel(input_size, num_classes).eval()\n    # For now, we suggest disabling the Jit Autocast Pass,\n    # as per the issue: https://github.com/pytorch/pytorch/issues/75956\n    torch._C._jit_set_autocast_mode(False)\n    with torch.cpu.amp.autocast(cache_enabled=False):\n        model = torch.jit.trace(model, torch.randn(1, input_size))\n    model = torch.jit.freeze(model)\n    # Models Run\n    for _ in range(3):\n        model(torch.randn(1, input_size))\n Type mismatch errors in an autocast-enabled region are a bug; if\n this is what you observe, please file an issue.\n \"autocast(enabled=False)\" subregions can be nested in autocast-\n enabled regions. Locally disabling autocast can be useful, for\n example, if you want to force a subregion to run in a particular", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "\"dtype\". Disabling autocast gives you explicit control over the\n execution type. In the subregion, inputs from the surrounding\n region should be cast to \"dtype\" before use:\n # Creates some tensors in default dtype (here assumed to be float32)\n a_float32 = torch.rand((8, 8), device=\"cuda\")\n b_float32 = torch.rand((8, 8), device=\"cuda\")\n c_float32 = torch.rand((8, 8), device=\"cuda\")\n d_float32 = torch.rand((8, 8), device=\"cuda\")\n with autocast():\n e_float16 = torch.mm(a_float32, b_float32)\n with autocast(enabled=False):\n # Calls e_float16.float() to ensure float32 execution\n # (necessary because e_float16 was created in an autocasted region)\n f_float32 = torch.mm(c_float32, e_float16.float())\n # No manual casts are required when re-entering the autocast-enabled region.\n # torch.mm again runs in float16 and produces float16 output, regardless of input types.", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "g_float16 = torch.mm(d_float32, f_float32)\n The autocast state is thread-local. If you want it enabled in a\n new thread, the context manager or decorator must be invoked in\n that thread. This affects \"torch.nn.DataParallel\" and\n \"torch.nn.parallel.DistributedDataParallel\" when used with more\n than one GPU per process (see Working with Multiple GPUs).\n Parameters:\n * device_type (str, required) -- Whether to use 'cuda'\n or 'cpu' device\n * enabled (bool, optional) -- Whether autocasting\n should be enabled in the region. Default: \"True\"\n * dtype (torch_dtype, optional) -- Whether to use\n torch.float16 or torch.bfloat16.\n * cache_enabled (bool, optional) -- Whether the weight\n cache inside autocast should be enabled. Default: \"True\"\nclass torch.cuda.amp.autocast(enabled=True, dtype=torch.float16, cache_enabled=True)\n See \"torch.autocast\". \"torch.cuda.amp.autocast(args...)\" is", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "equivalent to \"torch.autocast(\"cuda\", args...)\"\ntorch.cuda.amp.custom_fwd(fwd=None, *, cast_inputs=None)\n Helper decorator for \"forward\" methods of custom autograd functions\n (subclasses of \"torch.autograd.Function\"). See the example page\n for more detail.\n Parameters:\n cast_inputs (\"torch.dtype\" or None, optional, default=None)\n -- If not \"None\", when \"forward\" runs in an autocast-enabled\n region, casts incoming floating-point CUDA Tensors to the target\n dtype (non-floating-point Tensors are not affected), then\n executes \"forward\" with autocast disabled. If \"None\",\n \"forward\"'s internal ops execute with the current autocast\n state.\n Note:\n If the decorated \"forward\" is called outside an autocast-enabled\n region, \"custom_fwd\" is a no-op and \"cast_inputs\" has no effect.\ntorch.cuda.amp.custom_bwd(bwd)\n Helper decorator for backward methods of custom autograd functions\n (subclasses of \"torch.autograd.Function\"). Ensures that \"backward\"", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "executes with the same autocast state as \"forward\". See the example\n page for more detail.\nclass torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16, cache_enabled=True)\n See \"torch.autocast\". \"torch.cpu.amp.autocast(args...)\" is\n equivalent to \"torch.autocast(\"cpu\", args...)\"\nGradient Scaling\n================\nIf the forward pass for a particular op has \"float16\" inputs, the\nbackward pass for that op will produce \"float16\" gradients. Gradient\nvalues with small magnitudes may not be representable in \"float16\".\nThese values will flush to zero (\"underflow\"), so the update for the\ncorresponding parameters will be lost.\nTo prevent underflow, \"gradient scaling\" multiplies the network's\nloss(es) by a scale factor and invokes a backward pass on the scaled\nloss(es). Gradients flowing backward through the network are then\nscaled by the same factor. In other words, gradient values have a\nlarger magnitude, so they don't flush to zero.\nEach parameter's gradient (\".grad\" attribute) should be unscaled", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "before the optimizer updates the parameters, so the scale factor does\nnot interfere with the learning rate.\nclass torch.cuda.amp.GradScaler(init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True)\n get_backoff_factor()\n Returns a Python float containing the scale backoff factor.\n get_growth_factor()\n Returns a Python float containing the scale growth factor.\n get_growth_interval()\n Returns a Python int containing the growth interval.\n get_scale()\n Returns a Python float containing the current scale, or 1.0 if\n scaling is disabled.\n Warning:\n \"get_scale()\" incurs a CPU-GPU sync.\n is_enabled()\n Returns a bool indicating whether this instance is enabled.\n load_state_dict(state_dict)\n Loads the scaler state. If this instance is disabled,\n \"load_state_dict()\" is a no-op.\n Parameters:\n state_dict (dict) -- scaler state. Should be an object\n returned from a call to \"state_dict()\".", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "returned from a call to \"state_dict()\".\n scale(outputs)\n Multiplies ('scales') a tensor or list of tensors by the scale\n factor.\n Returns scaled outputs. If this instance of \"GradScaler\" is not\n enabled, outputs are returned unmodified.\n Parameters:\n outputs (Tensor or iterable of Tensors) -- Outputs\n to scale.\n set_backoff_factor(new_factor)\n Parameters:\n new_factor (float) -- Value to use as the new scale\n backoff factor.\n set_growth_factor(new_factor)\n Parameters:\n new_factor (float) -- Value to use as the new scale\n growth factor.\n set_growth_interval(new_interval)\n Parameters:\n new_interval (int) -- Value to use as the new growth\n interval.\n state_dict()\n Returns the state of the scaler as a \"dict\". It contains five\n entries:\n * \"\"scale\"\" - a Python float containing the current scale", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "* \"\"growth_factor\"\" - a Python float containing the current\n growth factor\n * \"\"backoff_factor\"\" - a Python float containing the current\n backoff factor\n * \"\"growth_interval\"\" - a Python int containing the current\n growth interval\n * \"\"_growth_tracker\"\" - a Python int containing the number of\n recent consecutive unskipped steps.\n If this instance is not enabled, returns an empty dict.\n Note:\n If you wish to checkpoint the scaler's state after a\n particular iteration, \"state_dict()\" should be called after\n \"update()\".\n step(optimizer, *args, **kwargs)\n \"step()\" carries out the following two operations:\n 1. Internally invokes \"unscale_(optimizer)\" (unless \"unscale_()\"\n was explicitly called for \"optimizer\" earlier in the\n iteration). As part of the \"unscale_()\", gradients are\n checked for infs/NaNs.\n 2. If no inf/NaN gradients are found, invokes \"optimizer.step()\"", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "using the unscaled gradients. Otherwise, \"optimizer.step()\"\n is skipped to avoid corrupting the params.\n \"*args\" and \"**kwargs\" are forwarded to \"optimizer.step()\".\n Returns the return value of \"optimizer.step(*args, **kwargs)\".\n Parameters:\n * optimizer (torch.optim.Optimizer) -- Optimizer that\n applies the gradients.\n * args -- Any arguments.\n * kwargs -- Any keyword arguments.\n Warning:\n Closure use is not currently supported.\n unscale_(optimizer)\n Divides (\"unscales\") the optimizer's gradient tensors by the\n scale factor.\n \"unscale_()\" is optional, serving cases where you need to modify\n or inspect gradients between the backward pass(es) and \"step()\".\n If \"unscale_()\" is not called explicitly, gradients will be\n unscaled automatically during \"step()\".\n Simple example, using \"unscale_()\" to enable clipping of\n unscaled gradients:\n ...", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "unscaled gradients:\n ...\n scaler.scale(loss).backward()\n scaler.unscale_(optimizer)\n torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)\n scaler.step(optimizer)\n scaler.update()\n Parameters:\n optimizer (torch.optim.Optimizer) -- Optimizer that\n owns the gradients to be unscaled.\n Note:\n \"unscale_()\" does not incur a CPU-GPU sync.\n Warning:\n \"unscale_()\" should only be called once per optimizer per\n \"step()\" call, and only after all gradients for that\n optimizer's assigned parameters have been accumulated. Calling\n \"unscale_()\" twice for a given optimizer between each \"step()\"\n triggers a RuntimeError.\n Warning:\n \"unscale_()\" may unscale sparse gradients out of place,\n replacing the \".grad\" attribute.\n update(new_scale=None)\n Updates the scale factor.\n If any optimizer steps were skipped the scale is multiplied by", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "\"backoff_factor\" to reduce it. If \"growth_interval\" unskipped\n iterations occurred consecutively, the scale is multiplied by\n \"growth_factor\" to increase it.\n Passing \"new_scale\" sets the new scale value manually.\n (\"new_scale\" is not used directly, it's used to fill\n GradScaler's internal scale tensor. So if \"new_scale\" was a\n tensor, later in-place changes to that tensor will not further\n affect the scale GradScaler uses internally.)\n Parameters:\n new_scale (float or \"torch.cuda.FloatTensor\", optional,\n default=None) -- New scale factor.\n Warning:\n \"update()\" should only be called at the end of the iteration,\n after \"scaler.step(optimizer)\" has been invoked for all\n optimizers used this iteration.\nAutocast Op Reference\n=====================\nOp Eligibility\n\nOps that run in \"float64\" or non-floating-point dtypes are not\neligible, and will run in these types whether or not autocast is\nenabled.", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
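Both "GradScaler" mechanisms described above — rescuing underflowed "float16" gradients with a scale factor, and adjusting that factor via "backoff_factor"/"growth_factor" in "update()" — can be checked numerically. This is an illustrative pure-Python sketch (the IEEE half-precision round-trip stands in for float16 tensors; none of this is the actual GradScaler code):

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through IEEE half precision
    # (the "e" struct format).
    return struct.unpack("e", struct.pack("e", x))[0]

SCALE = 65536.0  # 2**16, GradScaler's default init_scale

# 1) Scaling rescues a gradient that float16 would flush to zero.
tiny_grad = 2.0 ** -27
assert to_fp16(tiny_grad) == 0.0      # underflow: the update would be lost
scaled = to_fp16(tiny_grad * SCALE)   # backward through the scaled loss
assert scaled == 2.0 ** -11           # now representable in float16
assert scaled / SCALE == tiny_grad    # unscale_() recovers the gradient

# 2) update(): back off when a step was skipped, grow after
#    growth_interval consecutive unskipped steps.
def update_scale(scale, found_inf, growth_tracker,
                 growth_factor=2.0, backoff_factor=0.5,
                 growth_interval=2000):
    if found_inf:
        return scale * backoff_factor, 0   # skipped step: reduce scale
    growth_tracker += 1
    if growth_tracker == growth_interval:
        return scale * growth_factor, 0    # enough clean steps: grow
    return scale, growth_tracker

scale, tracker = update_scale(SCALE, found_inf=True, growth_tracker=0)
assert scale == 32768.0                    # halved after an inf/NaN step
for _ in range(2000):                      # 2000 clean steps in a row
    scale, tracker = update_scale(scale, found_inf=False,
                                  growth_tracker=tracker)
assert scale == 65536.0                    # doubled back to the original
```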
{"text": "enabled.\nOnly out-of-place ops and Tensor methods are eligible. In-place\nvariants and calls that explicitly supply an \"out=...\" Tensor are\nallowed in autocast-enabled regions, but won't go through autocasting.\nFor example, in an autocast-enabled region \"a.addmm(b, c)\" can\nautocast, but \"a.addmm_(b, c)\" and \"a.addmm(b, c, out=d)\" cannot. For\nbest performance and stability, prefer out-of-place ops in autocast-\nenabled regions.\nOps called with an explicit \"dtype=...\" argument are not eligible, and\nwill produce output that respects the \"dtype\" argument.\nCUDA Op-Specific Behavior\n\nThe following lists describe the behavior of eligible ops in autocast-\nenabled regions. These ops always go through autocasting whether they\nare invoked as part of a \"torch.nn.Module\", as a function, or as a\n\"torch.Tensor\" method. If functions are exposed in multiple\nnamespaces, they go through autocasting regardless of the namespace.\nOps not listed below do not go through autocasting. They run in the", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "type defined by their inputs. However, autocasting may still change\nthe type in which unlisted ops run if they're downstream from\nautocasted ops.\nIf an op is unlisted, we assume it's numerically stable in \"float16\".\nIf you believe an unlisted op is numerically unstable in \"float16\",\nplease file an issue.\nCUDA Ops that can autocast to \"float16\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\"__matmul__\", \"addbmm\", \"addmm\", \"addmv\", \"addr\", \"baddbmm\", \"bmm\",\n\"chain_matmul\", \"multi_dot\", \"conv1d\", \"conv2d\", \"conv3d\",\n\"conv_transpose1d\", \"conv_transpose2d\", \"conv_transpose3d\", \"GRUCell\",\n\"linear\", \"LSTMCell\", \"matmul\", \"mm\", \"mv\", \"prelu\", \"RNNCell\"\nCUDA Ops that can autocast to \"float32\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\"__pow__\", \"__rdiv__\", \"__rpow__\", \"__rtruediv__\", \"acos\", \"asin\",\n\"binary_cross_entropy_with_logits\", \"cosh\", \"cosine_embedding_loss\",\n\"cdist\", \"cosine_similarity\", \"cross_entropy\", \"cumprod\", \"cumsum\",\n\"dist\", \"erfinv\", \"exp\", \"expm1\", \"group_norm\",", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "\"dist\", \"erfinv\", \"exp\", \"expm1\", \"group_norm\",\n\"hinge_embedding_loss\", \"kl_div\", \"l1_loss\", \"layer_norm\", \"log\",\n\"log_softmax\", \"log10\", \"log1p\", \"log2\", \"margin_ranking_loss\",\n\"mse_loss\", \"multilabel_margin_loss\", \"multi_margin_loss\", \"nll_loss\",\n\"norm\", \"normalize\", \"pdist\", \"poisson_nll_loss\", \"pow\", \"prod\",\n\"reciprocal\", \"rsqrt\", \"sinh\", \"smooth_l1_loss\", \"soft_margin_loss\",\n\"softmax\", \"softmin\", \"softplus\", \"sum\", \"renorm\", \"tan\",\n\"triplet_margin_loss\"\nCUDA Ops that promote to the widest input type\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThese ops don't require a particular dtype for stability, but take\nmultiple inputs and require that the inputs' dtypes match. If all of\nthe inputs are \"float16\", the op runs in \"float16\". If any of the\ninputs is \"float32\", autocast casts all inputs to \"float32\" and runs\nthe op in \"float32\".\n\"addcdiv\", \"addcmul\", \"atan2\", \"bilinear\", \"cross\", \"dot\",\n\"grid_sample\", \"index_put\", \"scatter_add\", \"tensordot\"", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
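The promote-to-widest rule for the multi-input ops listed above can be sketched as a tiny dtype-resolution helper. The function and width table are illustrative, not a torch API; the point is only the decision rule (all-"float16" inputs run in "float16", any "float32" input promotes everything to "float32"):

```python
# Width order used by the promotion rule: float32 is wider than float16.
_WIDTH = {"float16": 16, "float32": 32}

def promote_dtype(*input_dtypes):
    """If every input is float16 the op runs in float16; if any input
    is float32, all inputs are cast to float32 and the op runs there."""
    return max(input_dtypes, key=lambda d: _WIDTH[d])

assert promote_dtype("float16", "float16") == "float16"
assert promote_dtype("float16", "float32") == "float32"  # e.g. addcmul, atan2
assert promote_dtype("float32", "float16", "float16") == "float32"
```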
{"text": "Some ops not listed here (e.g., binary ops like \"add\") natively\npromote inputs without autocasting's intervention. If inputs are a\nmixture of \"float16\" and \"float32\", these ops run in \"float32\" and\nproduce \"float32\" output, regardless of whether autocast is enabled.\nPrefer \"binary_cross_entropy_with_logits\" over \"binary_cross_entropy\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe backward passes of \"torch.nn.functional.binary_cross_entropy()\"\n(and \"torch.nn.BCELoss\", which wraps it) can produce gradients that\naren't representable in \"float16\". In autocast-enabled regions, the\nforward input may be \"float16\", which means the backward gradient must\nbe representable in \"float16\" (autocasting \"float16\" forward inputs to\n\"float32\" doesn't help, because that cast must be reversed in\nbackward). Therefore, \"binary_cross_entropy\" and \"BCELoss\" raise an\nerror in autocast-enabled regions.\nMany models use a sigmoid layer right before the binary cross entropy", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "layer. In this case, combine the two layers using\n\"torch.nn.functional.binary_cross_entropy_with_logits()\" or\n\"torch.nn.BCEWithLogitsLoss\". \"binary_cross_entropy_with_logits\" and\n\"BCEWithLogits\" are safe to autocast.\nCPU Op-Specific Behavior\n\nThe following lists describe the behavior of eligible ops in autocast-\nenabled regions. These ops always go through autocasting whether they\nare invoked as part of a \"torch.nn.Module\", as a function, or as a\n\"torch.Tensor\" method. If functions are exposed in multiple\nnamespaces, they go through autocasting regardless of the namespace.\nOps not listed below do not go through autocasting. They run in the\ntype defined by their inputs. However, autocasting may still change\nthe type in which unlisted ops run if they're downstream from\nautocasted ops.\nIf an op is unlisted, we assume it's numerically stable in \"bfloat16\".\nIf you believe an unlisted op is numerically unstable in \"bfloat16\",\nplease file an issue.\nCPU Ops that can autocast to \"bfloat16\"", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "CPU Ops that can autocast to \"bfloat16\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\"conv1d\", \"conv2d\", \"conv3d\", \"bmm\", \"mm\", \"baddbmm\", \"addmm\",\n\"addbmm\", \"linear\", \"matmul\", \"_convolution\"\nCPU Ops that can autocast to \"float32\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\"conv_transpose1d\", \"conv_transpose2d\", \"conv_transpose3d\",\n\"avg_pool3d\", \"binary_cross_entropy\", \"grid_sampler\",\n\"grid_sampler_2d\", \"_grid_sampler_2d_cpu_fallback\", \"grid_sampler_3d\",\n\"polar\", \"prod\", \"quantile\", \"nanquantile\", \"stft\", \"cdist\", \"trace\",\n\"view_as_complex\", \"cholesky\", \"cholesky_inverse\", \"cholesky_solve\",\n\"inverse\", \"lu_solve\", \"orgqr\", \"inverse\", \"ormqr\", \"pinverse\",\n\"max_pool3d\", \"max_unpool2d\", \"max_unpool3d\", \"adaptive_avg_pool3d\",\n\"reflection_pad1d\", \"reflection_pad2d\", \"replication_pad1d\",\n\"replication_pad2d\", \"replication_pad3d\", \"mse_loss\", \"ctc_loss\",\n\"kl_div\", \"multilabel_margin_loss\", \"fft_fft\", \"fft_ifft\", \"fft_fft2\",\n\"fft_ifft2\", \"fft_fftn\", \"fft_ifftn\", \"fft_rfft\", \"fft_irfft\",", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "\"fft_rfft2\", \"fft_irfft2\", \"fft_rfftn\", \"fft_irfftn\", \"fft_hfft\",\n\"fft_ihfft\", \"linalg_matrix_norm\", \"linalg_cond\",\n\"linalg_matrix_rank\", \"linalg_solve\", \"linalg_cholesky\",\n\"linalg_svdvals\", \"linalg_eigvals\", \"linalg_eigvalsh\", \"linalg_inv\",\n\"linalg_householder_product\", \"linalg_tensorinv\",\n\"linalg_tensorsolve\", \"fake_quantize_per_tensor_affine\", \"eig\",\n\"geqrf\", \"lstsq\", \"_lu_with_info\", \"qr\", \"solve\", \"svd\", \"symeig\",\n\"triangular_solve\", \"fractional_max_pool2d\", \"fractional_max_pool3d\",\n\"adaptive_max_pool3d\", \"multilabel_margin_loss_forward\", \"linalg_qr\",\n\"linalg_cholesky_ex\", \"linalg_svd\", \"linalg_eig\", \"linalg_eigh\",\n\"linalg_lstsq\", \"linalg_inv_ex\"\nCPU Ops that promote to the widest input type\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThese ops don't require a particular dtype for stability, but take\nmultiple inputs and require that the inputs' dtypes match. If all of\nthe inputs are \"bfloat16\", the op runs in \"bfloat16\". If any of the", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "inputs is \"float32\", autocast casts all inputs to \"float32\" and runs\nthe op in \"float32\".\n\"cat\", \"stack\", \"index_copy\"\nSome ops not listed here (e.g., binary ops like \"add\") natively\npromote inputs without autocasting's intervention. If inputs are a\nmixture of \"bfloat16\" and \"float32\", these ops run in \"float32\" and\nproduce \"float32\" output, regardless of whether autocast is enabled.", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"}
{"text": "torch._dynamoWarning:\n This module is an early prototype and is subject to change.\ntorch._dynamo.allow_in_graph(fn)\n Customize which functions TorchDynamo will include in the generated\n graph. Similar to torch.fx.wrap().\n torch._dynamo.allow_in_graph(my_custom_function)\n @torch._dynamo.optimize(...)\n def fn(a):\n x = torch.add(x, 1)\n x = my_custom_function(x)\n x = torch.add(x, 1)\n return x\n fn(...)\n Will capture a single graph containing my_custom_function().\ntorch._dynamo.disallow_in_graph(fn)\n Customize which functions TorchDynamo will exclude in the generated\n graph and force a graph break on.\n torch._dynamo.disallow_in_graph(torch.sub)\n @torch._dynamo.optimize(...)\n def fn(a):\n x = torch.add(x, 1)\n x = torch.sub(x, 1)\n x = torch.add(x, 1)\n return x\n fn(...)\n Will break the graph on torch.sub, and give two graphs each with\n a single torch.add() op.", "source": "https://pytorch.org/docs/stable/_dynamo.html", "category": "pytorch docs"}
{"text": "a single torch.add() op.\ntorch._dynamo.graph_break()\n Force a graph break\ntorch._dynamo.optimize(backend='inductor', , nopython=False, guard_export_fn=None, guard_fail_fn=None, disable=False, dynamic=False)\n The main entrypoint of TorchDynamo. Do graph capture and call\n backend() to optimize extracted graphs.\n Parameters:\n * backend -- One of the two things: - Either, a\n function/callable taking a torch.fx.GraphModule and\n example_inputs and returning a python callable that runs the\n graph faster. One can also provide additional context for the\n backend, like torch.jit.fuser(\"fuser2\"), by setting the\n backend_ctx_ctor attribute. See\n AOTAutogradMemoryEfficientFusionWithContext for the usage. -\n Or, a string backend name in torch._dynamo.list_backends()\n * nopython -- If True, graph breaks will be errors and there\n will be a single whole-program graph.\n * disable* -- If True, turn this decorator into a no-op", "source": "https://pytorch.org/docs/stable/_dynamo.html", "category": "pytorch docs"}
{"text": "\ndynamic -- If True, turn on dynamic shapes support\n Example Usage:\n @torch._dynamo.optimize()\n def toy_example(a, b):\n ...\ntorch._dynamo.optimize_assert(backend, , hooks=Hooks(guard_export_fn=None, guard_fail_fn=None), export=False, dynamic=False)\n The same as torch._dynamo.optimize(backend, nopython=True)*\ntorch._dynamo.run(fn=None)\n Don't do any dynamic compiles, just use prior optimizations\ntorch._dynamo.disable(fn=None)\n Decorator and context manager to disable TorchDynamo\ntorch._dynamo.reset()\n Clear all compile caches and restore initial state\ntorch._dynamo.list_backends()\n Return valid strings that can be passed to:\n @torch._dynamo.optimize()\n def foo(...):\n ....\ntorch._dynamo.skip(fn=None)\n Skip frames associated with the function code, but still process\n recursively invoked frames\nclass torch._dynamo.OptimizedModule(mod, dynamo_ctx)\n Wraps the original nn.Module object and later patches its forward\n", "source": "https://pytorch.org/docs/stable/_dynamo.html", "category": "pytorch docs"}
{"text": "method to optimized self.forward method.", "source": "https://pytorch.org/docs/stable/_dynamo.html", "category": "pytorch docs"}
{"text": "Tensor ViewsPyTorch allows a tensor to be a \"View\" of an existing tensor. View\ntensor shares the same underlying data with its base tensor.\nSupporting \"View\" avoids explicit data copy, thus allows us to do fast\nand memory efficient reshaping, slicing and element-wise operations.\nFor example, to get a view of an existing tensor \"t\", you can call\n\"t.view(...)\".\n\n\n\nt = torch.rand(4, 4)\nb = t.view(2, 8)\nt.storage().data_ptr() == b.storage().data_ptr() # t and b share the same underlying data.\n True\n # Modifying view tensor changes base tensor as well.\nb[0][0] = 3.14\nt[0][0]\n tensor(3.14)\nSince views share underlying data with its base tensor, if you edit\nthe data in the view, it will be reflected in the base tensor as well.\nTypically a PyTorch op returns a new tensor as output, e.g. \"add()\".\nBut in case of view ops, outputs are views of input tensors to avoid\nunnecessary data copy. No data movement occurs when creating a view,\n\n\n", "source": "https://pytorch.org/docs/stable/tensor_view.html", "category": "pytorch docs"}
{"text": "view tensor just changes the way it interprets the same data. Taking a\nview of contiguous tensor could potentially produce a non-contiguous\ntensor. Users should pay additional attention as contiguity might have\nimplicit performance impact. \"transpose()\" is a common example.\n\n\n\nbase = torch.tensor([[0, 1],[2, 3]])\nbase.is_contiguous()\n True\nt = base.transpose(0, 1) # t is a view of base. No data movement happened here.\n # View tensors might be non-contiguous.\nt.is_contiguous()\n False\n # To get a contiguous tensor, call .contiguous() to enforce\n # copying data when t is not contiguous.\nc = t.contiguous()\nFor reference, here\u00e2\u0080\u0099s a full list of view ops in PyTorch:\n* Basic slicing and indexing op, e.g. \"tensor[0, 2:, 1:7:2]\" returns a\n view of base \"tensor\", see note below.\n* \"adjoint()\"\n* \"as_strided()\"\n* \"detach()\"\n* \"diagonal()\"\n* \"expand()\"\n* \"expand_as()\"\n* \"movedim()\"\n* \"narrow()\"\n* \"permute()\"\n* \"select()\"\n* \"squeeze()\"\n* \"transpose()\"\n* \"t()\"\n* \"T\"\n* \"H\"\n\n\n", "source": "https://pytorch.org/docs/stable/tensor_view.html", "category": "pytorch docs"}
{"text": "\n\"squeeze()\"\n\"transpose()\"\n\"t()\"\n\"T\"\n\"H\"\n\"mT\"\n\"mH\"\n\"real\"\n\"imag\"\n\"view_as_real()\"\n\"unflatten()\"\n\"unfold()\"\n\"unsqueeze()\"\n\"view()\"\n\"view_as()\"\n\"unbind()\"\n\"split()\"\n\"hsplit()\"\n\"vsplit()\"\n\"tensor_split()\"\n\"split_with_sizes()\"\n\"swapaxes()\"\n\"swapdims()\"\n\"chunk()\"\n\"indices()\" (sparse tensor only)\n\"values()\" (sparse tensor only)\nNote:\n When accessing the contents of a tensor via indexing, PyTorch\n follows Numpy behaviors that basic indexing returns views, while\n advanced indexing returns a copy. Assignment via either basic or\n advanced indexing is in-place. See more examples in Numpy indexing\n documentation.\nIt's also worth mentioning a few ops with special behaviors:\n\"reshape()\", \"reshape_as()\" and \"flatten()\" can return either a view\n or new tensor, user code shouldn't rely on whether it's view or not.\n\"contiguous()\" returns itself if input tensor is already\n contiguous, otherwise it returns a new contiguous tensor by copying\n data.\n", "source": "https://pytorch.org/docs/stable/tensor_view.html", "category": "pytorch docs"}
{"text": "data.\nFor a more detailed walk-through of PyTorch internal implementation,\nplease refer to ezyang's blogpost about PyTorch Internals.", "source": "https://pytorch.org/docs/stable/tensor_view.html", "category": "pytorch docs"}
{"text": "torch.Storage\"torch.Storage\" is an alias for the storage class that corresponds\nwith the default data type (\"torch.get_default_dtype()\"). For\ninstance, if the default data type is \"torch.float\", \"torch.Storage\"\nresolves to \"torch.FloatStorage\".\nThe \"torch.Storage\" and \"torch.cuda.Storage\" classes, like\n\"torch.FloatStorage\", \"torch.IntStorage\", etc., are not actually ever\ninstantiated. Calling their constructors creates a\n\"torch.TypedStorage\" with the appropriate \"torch.dtype\" and\n\"torch.device\". \"torch.Storage\" classes have all of the same\nclass methods that \"torch.TypedStorage\" has.\nA \"torch.TypedStorage\" is a contiguous, one-dimensional array of\nelements of a particular \"torch.dtype\". It can be given any\n\"torch.dtype\", and the internal data will be interpreted\nappropriately. \"torch.TypedStorage\" contains a \"torch.UntypedStorage\"\nwhich holds the data as an untyped array of bytes.\nEvery strided \"torch.Tensor\" contains a \"torch.TypedStorage\", which", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "stores all of the data that the \"torch.Tensor\" views.\nWarning:\n All storage classes except for \"torch.UntypedStorage\" will be\n removed in the future, and \"torch.UntypedStorage\" will be used in\n all cases.\nclass torch.TypedStorage(args, wrap_storage=None, dtype=None, device=None, internal=False)\n bfloat16()\n Casts this storage to bfloat16 type\n bool()\n Casts this storage to bool type\n byte()\n Casts this storage to byte type\n char()\n Casts this storage to char type\n clone()\n Returns a copy of this storage\n complex_double()\n Casts this storage to complex double type\n complex_float()\n Casts this storage to complex float type\n copy(source, non_blocking=None)\n cpu()\n Returns a CPU copy of this storage if it's not already on the\n CPU\n cuda(device=None, non_blocking=False, *kwargs)\n Returns a copy of this object in CUDA memory.\n If this object is already in CUDA memory and on the correct", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "device, then no copy is performed and the original object is\n returned.\n Parameters:\n * device (int) -- The destination GPU id. Defaults to\n the current device.\n * non_blocking (bool) -- If \"True\" and the source is in\n pinned memory, the copy will be asynchronous with respect\n to the host. Otherwise, the argument has no effect.\n * *kwargs -- For compatibility, may contain the key\n \"async\" in place of the \"non_blocking\" argument.\n Return type:\n T\n data_ptr()\n property device\n double()\n Casts this storage to double type\n dtype: dtype\n element_size()\n fill_(value)\n float()\n Casts this storage to float type\n classmethod from_buffer(args, kwargs)\n classmethod from_file(filename, shared=False, size=0) -> Storage\n If shared is True, then memory is shared between all\n processes. All changes are written to the file. If shared is", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "False, then the changes on the storage do not affect the file.\n size is the number of elements in the storage. If shared is\n False, then the file must contain at least size *\n sizeof(Type) bytes (Type is the type of storage). If shared\n is True the file will be created if needed.\n Parameters:\n * filename (str) -- file name to map\n * shared (bool) -- whether to share memory\n * size (int) -- number of elements in the storage\n get_device()\n Return type:\n int\n half()\n Casts this storage to half type\n int()\n Casts this storage to int type\n property is_cuda\n is_pinned()\n is_shared()\n is_sparse = False\n long()\n Casts this storage to long type\n nbytes()\n pickle_storage_type()\n pin_memory()\n Coppies the storage to pinned memory, if it's not already\n pinned.\n resize_(size)\n share_memory_()\n Moves the storage to shared memory.", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "Moves the storage to shared memory.\n This is a no-op for storages already in shared memory and for\n CUDA storages, which do not need to be moved for sharing across\n processes. Storages in shared memory cannot be resized.\n Returns: self\n short()\n Casts this storage to short type\n size()\n tolist()\n Returns a list containing the elements of this storage\n type(dtype=None, non_blocking=False)\n Returns the type if dtype is not provided, else casts this\n object to the specified type.\n If this is already of the correct type, no copy is performed and\n the original object is returned.\n Parameters:\n * dtype (type or string) -- The desired type\n * non_blocking (bool) -- If \"True\", and the source is\n in pinned memory and destination is on the GPU or vice\n versa, the copy is performed asynchronously with respect to\n the host. Otherwise, the argument has no effect.", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "\n*kwargs -- For compatibility, may contain the key\n \"async\" in place of the \"non_blocking\" argument. The\n \"async\" arg is deprecated.\n Return type:\n Union[T, str]\n untyped()\n Returns the internal \"torch.UntypedStorage\"\nclass torch.UntypedStorage(args, kwargs)\n bfloat16()\n Casts this storage to bfloat16 type\n bool()\n Casts this storage to bool type\n byte()\n Casts this storage to byte type\n char()\n Casts this storage to char type\n clone()\n Returns a copy of this storage\n complex_double()\n Casts this storage to complex double type\n complex_float()\n Casts this storage to complex float type\n copy_()\n cpu()\n Returns a CPU copy of this storage if it's not already on the\n CPU\n cuda(device=None, non_blocking=False, **kwargs)\n Returns a copy of this object in CUDA memory.\n If this object is already in CUDA memory and on the correct\n", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "device, then no copy is performed and the original object is\n returned.\n Parameters:\n * device (int) -- The destination GPU id. Defaults to\n the current device.\n * non_blocking (bool) -- If \"True\" and the source is in\n pinned memory, the copy will be asynchronous with respect\n to the host. Otherwise, the argument has no effect.\n * *kwargs -- For compatibility, may contain the key\n \"async\" in place of the \"non_blocking\" argument.\n data_ptr()\n device: device\n double()\n Casts this storage to double type\n element_size()\n fill_()\n float()\n Casts this storage to float type\n static from_buffer()\n static from_file(filename, shared=False, size=0) -> Storage\n If shared is True, then memory is shared between all\n processes. All changes are written to the file. If shared is\n False*, then the changes on the storage do not affect the file.", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "size is the number of elements in the storage. If shared is\n False, then the file must contain at least size *\n sizeof(Type) bytes (Type is the type of storage). If shared\n is True the file will be created if needed.\n Parameters:\n * filename (str) -- file name to map\n * shared (bool) -- whether to share memory\n * size (int) -- number of elements in the storage\n get_device()\n Return type:\n int\n half()\n Casts this storage to half type\n int()\n Casts this storage to int type\n property is_cuda\n is_pinned()\n is_shared()\n is_sparse: bool = False\n is_sparse_csr: bool = False\n long()\n Casts this storage to long type\n mps()\n Returns a CPU copy of this storage if it's not already on the\n CPU\n nbytes()\n new()\n pin_memory()\n Copies the storage to pinned memory, if it's not already pinned.\n resize_()\n share_memory_()\n Moves the storage to shared memory.", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "Moves the storage to shared memory.\n This is a no-op for storages already in shared memory and for\n CUDA storages, which do not need to be moved for sharing across\n processes. Storages in shared memory cannot be resized.\n Returns: self\n short()\n Casts this storage to short type\n size()\n Return type:\n int\n tolist()\n Returns a list containing the elements of this storage\n type(dtype=None, non_blocking=False, kwargs)\n Returns the type if dtype is not provided, else casts this\n object to the specified type.\n If this is already of the correct type, no copy is performed and\n the original object is returned.\n Parameters:\n * dtype (*type or string*) -- The desired type\n * non_blocking (bool) -- If \"True\", and the source is\n in pinned memory and destination is on the GPU or vice\n versa, the copy is performed asynchronously with respect to", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "the host. Otherwise, the argument has no effect.\n * *kwargs -- For compatibility, may contain the key\n \"async\" in place of the \"non_blocking\" argument. The\n \"async\" arg is deprecated.\n untyped()\nclass torch.DoubleStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.float64\nclass torch.FloatStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.float32\nclass torch.HalfStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.float16\nclass torch.LongStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.int64\nclass torch.IntStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.int32\nclass torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.int16", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "dtype: dtype = torch.int16\nclass torch.CharStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.int8\nclass torch.ByteStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.uint8\nclass torch.BoolStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.bool\nclass torch.BFloat16Storage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.bfloat16\nclass torch.ComplexDoubleStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.complex128\nclass torch.ComplexFloatStorage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.complex64\nclass torch.QUInt8Storage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.quint8\nclass torch.QInt8Storage(args, wrap_storage=None, dtype=None, device=None, _internal=False)", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "dtype: dtype = torch.qint8\nclass torch.QInt32Storage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.qint32\nclass torch.QUInt4x2Storage(args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.quint4x2\nclass torch.QUInt2x4Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\n dtype: dtype = torch.quint2x4", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"}
{"text": "torch.monitorWarning:\n This module is a prototype release, and its interfaces and\n functionality may change without warning in future PyTorch releases.\n\"torch.monitor\" provides an interface for logging events and counters\nfrom PyTorch.\nThe stat interfaces are designed to be used for tracking high level\nmetrics that are periodically logged out to be used for monitoring\nsystem performance. Since the stats aggregate with a specific window\nsize you can log to them from critical loops with minimal performance\nimpact.\nFor more infrequent events or values such as loss, accuracy, usage\ntracking the event interface can be directly used.\nEvent handlers can be registered to handle the events and pass them to\nan external event sink.\nAPI Reference\n=============\nclass torch.monitor.Aggregation\n These are types of aggregations that can be used to accumulate\n stats.\n Members:\n VALUE :\n VALUE returns the last value to be added.\n MEAN :", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"}
{"text": "MEAN :\n MEAN computes the arithmetic mean of all the added values.\n COUNT :\n COUNT returns the total number of added values.\n SUM :\n SUM returns the sum of the added values.\n MAX :\n MAX returns the max of the added values.\n MIN :\n MIN returns the min of the added values.\n property name\nclass torch.monitor.Stat\n Stat is used to compute summary statistics in a performant way over\n fixed intervals. Stat logs the statistics as an Event once every\n \"window_size\" duration. When the window closes the stats are logged\n via the event handlers as a \"torch.monitor.Stat\" event.\n \"window_size\" should be set to something relatively high to avoid a\n huge number of events being logged. Ex: 60s. Stat uses millisecond\n precision.\n If \"max_samples\" is set, the stat will cap the number of samples\n per window by discarding add calls once \"max_samples\" adds have\n occurred. If it's not set, all \"add\" calls during the window will", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"}
{"text": "be included. This is an optional field to make aggregations more\n directly comparable across windows when the number of samples might\n vary.\n When the Stat is destructed it will log any remaining data even if\n the window hasn't elapsed.\n init(self: torch._C._monitor.Stat, name: str, aggregations: List[torch._C._monitor.Aggregation], window_size: datetime.timedelta, max_samples: int = 9223372036854775807) -> None\n Constructs the \"Stat\".\n add(self: torch._C._monitor.Stat, v: float) -> None\n Adds a value to the stat to be aggregated according to the\n configured stat type and aggregations.\n property count\n Number of data points that have currently been collected. Resets\n once the event has been logged.\n get(self: torch._C._monitor.Stat) -> Dict[torch._C._monitor.Aggregation, float]\n Returns the current value of the stat, primarily for testing\n purposes. If the stat has logged and no additional values have\n been added this will be zero.", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"}
{"text": "been added this will be zero.\n property name\n The name of the stat that was set during creation.\nclass torch.monitor.data_value_t\n data_value_t is one of \"str\", \"float\", \"int\", \"bool\".\nclass torch.monitor.Event\n Event represents a specific typed event to be logged. This can\n represent high-level data points such as loss or accuracy per epoch\n or more low-level aggregations such as through the Stats provided\n through this library.\n All Events of the same type should have the same name so downstream\n handlers can correctly process them.\n init(self: torch._C._monitor.Event, name: str, timestamp: datetime.datetime, data: Dict[str, data_value_t]) -> None\n Constructs the \"Event\".\n property data\n The structured data contained within the \"Event\".\n property name\n The name of the \"Event\".\n property timestamp\n The timestamp when the \"Event\" happened.\nclass torch.monitor.EventHandlerHandle\n EventHandlerHandle is a wrapper type returned by", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"}
{"text": "\"register_event_handler\" used to unregister the handler via\n \"unregister_event_handler\". This cannot be directly initialized.\ntorch.monitor.log_event(event: torch._C._monitor.Event) -> None\n log_event logs the specified event to all of the registered event\n handlers. It's up to the event handlers to log the event out to the\n corresponding event sink.\n If there are no event handlers registered this method is a no-op.\ntorch.monitor.register_event_handler(callback: Callable[[torch._C._monitor.Event], None]) -> torch._C._monitor.EventHandlerHandle\n register_event_handler registers a callback to be called whenever\n an event is logged via \"log_event\". These handlers should avoid\n blocking the main thread since that may interfere with training as\n they run during the \"log_event\" call.\ntorch.monitor.unregister_event_handler(handler: torch._C._monitor.EventHandlerHandle) -> None\n unregister_event_handler unregisters the \"EventHandlerHandle\"", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"}
{"text": "returned after calling \"register_event_handler\". After this returns\n the event handler will no longer receive events.\nclass torch.monitor.TensorboardEventHandler(writer)\n TensorboardEventHandler is an event handler that will write known\n events to the provided SummaryWriter.\n This currently only supports \"torch.monitor.Stat\" events which are\n logged as scalars.\n -[ Example ]-\n\n\n\nfrom torch.utils.tensorboard import SummaryWriter\nfrom torch.monitor import TensorboardEventHandler, register_event_handler\nwriter = SummaryWriter(\"log_dir\")\nregister_event_handler(TensorboardEventHandler(writer))\n init(writer)\n Constructs the \"TensorboardEventHandler\".\n\n\n", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"}
{"text": "Note:\n If the following conditions are satisfied: 1) cudnn is enabled, 2)\n input data is on the GPU 3) input data has dtype \"torch.float16\" 4)\n V100 GPU is used, 5) input data is not in \"PackedSequence\" format\n persistent algorithm can be selected to improve performance.", "source": "https://pytorch.org/docs/stable/cudnn_persistent_rnn.html", "category": "pytorch docs"}
{"text": "C++Note:\n If you are looking for the PyTorch C++ API docs, directly go here.\nPyTorch provides several features for working with C++, and it\u00e2\u0080\u0099s best\nto choose from them based on your needs. At a high level, the\nfollowing support is available:\nTorchScript C++ API\n===================\nTorchScript allows PyTorch models defined in Python to be serialized\nand then loaded and run in C++ capturing the model code via\ncompilation or tracing its execution. You can learn more in the\nLoading a TorchScript Model in C++ tutorial. This means you can define\nyour models in Python as much as possible, but subsequently export\nthem via TorchScript for doing no-Python execution in production or\nembedded environments. The TorchScript C++ API is used to interact\nwith these models and the TorchScript execution engine, including:\n* Loading serialized TorchScript models saved from Python\n* Doing simple model modifications if needed (e.g. pulling out\n submodules)", "source": "https://pytorch.org/docs/stable/cpp_index.html", "category": "pytorch docs"}
{"text": "submodules)\n* Constructing the input and doing preprocessing using C++ Tensor API\nExtending PyTorch and TorchScript with C++ Extensions\n=====================================================\nTorchScript can be augmented with user-supplied code through custom\noperators and custom classes. Once registered with TorchScript, these\noperators and classes can be invoked in TorchScript code run from\nPython or from C++ as part of a serialized TorchScript model. The\nExtending TorchScript with Custom C++ Operators tutorial walks through\ninterfacing TorchScript with OpenCV. In addition to wrapping a\nfunction call with a custom operator, C++ classes and structs can be\nbound into TorchScript through a pybind11-like interface which is\nexplained in the Extending TorchScript with Custom C++ Classes\ntutorial.\nTensor and Autograd in C++\n==========================\nMost of the tensor and autograd operations in PyTorch Python API are\nalso available in the C++ API. These include:", "source": "https://pytorch.org/docs/stable/cpp_index.html", "category": "pytorch docs"}
{"text": "also available in the C++ API. These include:\n* \"torch::Tensor\" methods such as \"add\" / \"reshape\" / \"clone\". For the\n full list of methods available, please see:\n https://pytorch.org/cppdocs/api/classat_1_1_tensor.html\n* C++ tensor indexing API that looks and behaves the same as the\n Python API. For details on its usage, please see:\n https://pytorch.org/cppdocs/notes/tensor_indexing.html\n* The tensor autograd APIs and the \"torch::autograd\" package that are\n crucial for building dynamic neural networks in C++ frontend. For\n more details, please see:\n https://pytorch.org/tutorials/advanced/cpp_autograd.html\nAuthoring Models in C++\n=======================\nThe \"author in TorchScript, infer in C++\" workflow requires model\nauthoring to be done in TorchScript. However, there might be cases\nwhere the model has to be authored in C++ (e.g. in workflows where a\nPython component is undesirable). To serve such use cases, we provide\nthe full capability of authoring and training a neural net model", "source": "https://pytorch.org/docs/stable/cpp_index.html", "category": "pytorch docs"}
{"text": "purely in C++, with familiar components such as \"torch::nn\" /\n\"torch::nn::functional\" / \"torch::optim\" that closely resemble the\nPython API.\n* For an overview of the PyTorch C++ model authoring and training API,\n please see: https://pytorch.org/cppdocs/frontend.html\n* For a detailed tutorial on how to use the API, please see:\n https://pytorch.org/tutorials/advanced/cpp_frontend.html\n* Docs for components such as \"torch::nn\" / \"torch::nn::functional\" /\n \"torch::optim\" can be found at:\n https://pytorch.org/cppdocs/api/library_root.html\nPackaging for C++\n=================\nFor guidance on how to install and link with libtorch (the library\nthat contains all of the above C++ APIs), please see:\nhttps://pytorch.org/cppdocs/installing.html. Note that on Linux there\nare two types of libtorch binaries provided: one compiled with GCC\npre-cxx11 ABI and the other with GCC cxx11 ABI, and you should make\nthe selection based on the GCC ABI your system is using.", "source": "https://pytorch.org/docs/stable/cpp_index.html", "category": "pytorch docs"}
{"text": "torch.randomtorch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices')\n Forks the RNG, so that when you return, the RNG is reset to the\n state that it was previously in.\n Parameters:\n * devices (iterable of CUDA IDs) -- CUDA devices for which\n to fork the RNG. CPU RNG state is always forked. By default,\n \"fork_rng()\" operates on all devices, but will emit a warning\n if your machine has a lot of devices, since this function will\n run very slowly in that case. If you explicitly specify\n devices, this warning will be suppressed\n * enabled (bool) -- if \"False\", the RNG is not forked.\n This is a convenience argument for easily disabling the\n context manager without having to delete it and unindent your\n Python code under it.\n Return type:\n Generator\ntorch.random.get_rng_state()\n Returns the random number generator state as a torch.ByteTensor.\n Return type:", "source": "https://pytorch.org/docs/stable/random.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\ntorch.random.initial_seed()\n Returns the initial seed for generating random numbers as a Python\n long.\n Return type:\n int\ntorch.random.manual_seed(seed)\n Sets the seed for generating random numbers. Returns a\n torch.Generator object.\n Parameters:\n seed (int) -- The desired seed. Value must be within the\n inclusive range [-0x8000_0000_0000_0000,\n 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised.\n Negative inputs are remapped to positive values with the formula\n 0xffff_ffff_ffff_ffff + seed.\n Return type:\n Generator\ntorch.random.seed()\n Sets the seed for generating random numbers to a non-deterministic\n random number. Returns a 64 bit number used to seed the RNG.\n Return type:\n int\ntorch.random.set_rng_state(new_state)\n Sets the random number generator state.\n Parameters:\n new_state (torch.ByteTensor) -- The desired state", "source": "https://pytorch.org/docs/stable/random.html", "category": "pytorch docs"}
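The interplay of "manual_seed()" and "fork_rng()" above can be sketched in a short CPU-only example (a minimal sketch; the variable names are illustrative):

```python
import torch

# Draw two numbers from a seeded generator, interrupting the sequence
# with fork_rng(): the RNG state is saved on entry and restored on exit.
torch.manual_seed(0)
a = torch.rand(2)
with torch.random.fork_rng():
    torch.manual_seed(123)   # scrambles the RNG only inside the fork
    _ = torch.rand(5)
b = torch.rand(2)            # continues the original seeded sequence

# The interrupted sequence equals an uninterrupted one.
torch.manual_seed(0)
c = torch.rand(4)
print(torch.equal(torch.cat([a, b]), c))  # True
```

The same pattern is what makes "fork_rng()" useful around code (e.g. data augmentation checks) that should not perturb the caller's random stream.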
{"text": "AvgPool1dclass torch.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)\n Applies a 1D average pooling over an input signal composed of\n several input planes.\n In the simplest case, the output value of the layer with input size\n (N, C, L), output (N, C, L_{out}) and \"kernel_size\" k can be\n precisely described as:\n \\text{out}(N_i, C_j, l) = \\frac{1}{k} \\sum_{m=0}^{k-1}\n \\text{input}(N_i, C_j, \\text{stride} \\times l + m)\n If \"padding\" is non-zero, then the input is implicitly zero-padded\n on both sides for \"padding\" number of points.\n Note:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n The parameters \"kernel_size\", \"stride\", \"padding\" can each be an\n \"int\" or a one-element tuple.\n Parameters:\n * kernel_size (Union[int, Tuple[int]]) --", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool1d.html", "category": "pytorch docs"}
{"text": "the size of the window\n * stride (Union[int, Tuple[int]]) -- the\n stride of the window. Default value is \"kernel_size\"\n * padding (Union[int, Tuple[int]]) --\n implicit zero padding to be added on both sides\n * ceil_mode (bool) -- when True, will use ceil instead\n of floor to compute the output shape\n * count_include_pad (bool) -- when True, will include the\n zero-padding in the averaging calculation\n Shape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n L_{out} = \\left\\lfloor \\frac{L_{in} + 2 \\times\n \\text{padding} - \\text{kernel_size}}{\\text{stride}} +\n 1\\right\\rfloor\n Examples:\n >>> # pool with window of size=3, stride=2\n >>> m = nn.AvgPool1d(3, stride=2)\n >>> m(torch.tensor([[[1., 2, 3, 4, 5, 6, 7]]]))\n tensor([[[2., 4., 6.]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.tanhTensor.tanh() -> Tensor\n See \"torch.tanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tanh.html", "category": "pytorch docs"}
{"text": "torch.eqtorch.eq(input, other, *, out=None) -> Tensor\n Computes element-wise equality\n The second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\n Parameters:\n * input (Tensor) -- the tensor to compare\n * other (Tensor or float) -- the tensor or value to\n compare\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Returns:\n A boolean tensor that is True where \"input\" is equal to \"other\"\n and False elsewhere\n Example:\n >>> torch.eq(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[ True, False],\n [False, True]])", "source": "https://pytorch.org/docs/stable/generated/torch.eq.html", "category": "pytorch docs"}
{"text": "torch.floortorch.floor(input, *, out=None) -> Tensor\n Returns a new tensor with the floor of the elements of \"input\", the\n largest integer less than or equal to each element.\n For integer inputs, follows the array-api convention of returning a\n copy of the input tensor.\n \\text{out}_{i} = \\left\\lfloor \\text{input}_{i} \\right\\rfloor\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.8166, 1.5308, -0.2530, -0.2091])\n >>> torch.floor(a)\n tensor([-1., 1., -1., -1.])", "source": "https://pytorch.org/docs/stable/generated/torch.floor.html", "category": "pytorch docs"}
{"text": "torch.autograd.Function.jvpstatic Function.jvp(ctx, grad_inputs)\n Defines a formula for differentiating the operation with forward\n mode automatic differentiation. This function is to be overridden\n by all subclasses. It must accept a context \"ctx\" as the first\n argument, followed by as many inputs as the \"forward()\" got (None\n will be passed in for non tensor inputs of the forward function),\n and it should return as many tensors as there were outputs to\n \"forward()\". Each argument is the gradient w.r.t. the given input,\n and each returned value should be the gradient w.r.t. the\n corresponding output. If an output is not a Tensor or the function\n is not differentiable with respect to that output, you can just\n pass None as a gradient for that output.\n You can use the \"ctx\" object to pass any value from the forward to\n this function.\n Return type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.jvp.html", "category": "pytorch docs"}
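A concrete sketch of overriding "jvp" (modeled on the custom-Function forward-AD pattern in the Extending PyTorch notes; the "Square" function and the "ctx.x" attribute name are illustrative, not part of the API):

```python
import torch
import torch.autograd.forward_ad as fwAD

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.x = x            # stash the primal on ctx for use in jvp
        return x * x

    @staticmethod
    def jvp(ctx, x_t):
        # One tangent in per forward input, one tangent out per output:
        # d(x^2) = 2 * x * dx.
        return 2 * ctx.x * x_t

with fwAD.dual_level():
    dual = fwAD.make_dual(torch.tensor(3.0), torch.tensor(1.0))
    out = Square.apply(dual)
    tangent = fwAD.unpack_dual(out).tangent
```

Running the function on a dual tensor with tangent 1.0 at x = 3.0 propagates the directional derivative 2 * 3.0 * 1.0 through "jvp".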
{"text": "ConvReLU3dclass torch.ao.nn.intrinsic.quantized.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n A ConvReLU3d module is a fused module of Conv3d and ReLU\n We adopt the same interface as \"torch.ao.nn.quantized.Conv3d\".\n Attributes: Same as torch.ao.nn.quantized.Conv3d", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.ConvReLU3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.corrcoefTensor.corrcoef() -> Tensor\n See \"torch.corrcoef()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.corrcoef.html", "category": "pytorch docs"}
{"text": "torch.Tensor.tolistTensor.tolist() -> list or number\n Returns the tensor as a (nested) list. For scalars, a standard\n Python number is returned, just like with \"item()\". Tensors are\n automatically moved to the CPU first if necessary.\n This operation is not differentiable.\n Examples:\n >>> a = torch.randn(2, 2)\n >>> a.tolist()\n [[0.012766935862600803, 0.5415473580360413],\n [-0.08909505605697632, 0.7729271650314331]]\n >>> a[0,0].tolist()\n 0.012766935862600803", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tolist.html", "category": "pytorch docs"}
{"text": "torch.autograd.gradgradchecktorch.autograd.gradgradcheck(func, inputs, grad_outputs=None, *, eps=1e-06, atol=1e-05, rtol=0.001, gen_non_contig_grad_outputs=False, raise_exception=True, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False, check_fwd_over_rev=False, check_rev_over_rev=True, fast_mode=False)\n Check gradients of gradients computed via small finite differences\n against analytical gradients w.r.t. tensors in \"inputs\" and\n \"grad_outputs\" that are of floating point or complex type and with\n \"requires_grad=True\".\n This function checks that backpropagating through the gradients\n computed to the given \"grad_outputs\" is correct.\n The check between numerical and analytical gradients uses\n \"allclose()\".\n Note:\n The default values are designed for \"input\" and \"grad_outputs\" of\n double precision. This check will likely fail if they are of less\n precision, e.g., \"FloatTensor\".\n Warning:", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradgradcheck.html", "category": "pytorch docs"}
{"text": "precision, e.g., \"FloatTensor\".\n Warning:\n If any checked tensor in \"input\" and \"grad_outputs\" has\n overlapping memory, i.e., different indices pointing to the same\n memory address (e.g., from \"torch.expand()\"), this check will\n likely fail because the numerical gradients computed by point\n perturbation at such indices will change values at all other\n indices that share the same memory address.\n Parameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor or a tuple of Tensors\n * inputs (tuple of Tensor or Tensor) -- inputs to the\n function\n * grad_outputs (tuple of Tensor or Tensor,\n optional) -- The gradients with respect to the function's\n outputs.\n * eps (float, optional) -- perturbation for finite\n differences\n * atol (float, optional) -- absolute tolerance\n * rtol (float, optional) -- relative tolerance", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradgradcheck.html", "category": "pytorch docs"}
{"text": "* gen_non_contig_grad_outputs (bool, optional) -- if\n \"grad_outputs\" is \"None\" and \"gen_non_contig_grad_outputs\" is\n \"True\", the randomly generated gradient outputs are made to be\n noncontiguous\n * raise_exception (bool, optional) -- indicating\n whether to raise an exception if the check fails. The\n exception gives more information about the exact nature of the\n failure. This is helpful when debugging gradchecks.\n * nondet_tol (float, optional) -- tolerance for non-\n determinism. When running identical inputs through the\n differentiation, the results must either match exactly\n (default, 0.0) or be within this tolerance. Note that a small\n amount of nondeterminism in the gradient will lead to larger\n inaccuracies in the second derivative.\n * check_undefined_grad (bool, optional) -- if True,\n check if undefined output grads are supported and treated as\n zeros", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradgradcheck.html", "category": "pytorch docs"}
{"text": "zeros\n * check_batched_grad (bool, optional) -- if True,\n check if we can compute batched gradients using prototype vmap\n support. Defaults to False.\n * fast_mode (bool, optional) -- if True, run a faster\n implementation of gradgradcheck that no longer computes the\n entire jacobian.\n Returns:\n True if all differences satisfy allclose condition\n Return type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradgradcheck.html", "category": "pytorch docs"}
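A minimal use of "gradgradcheck" with double-precision inputs, as the note above recommends (the test function "f" is illustrative):

```python
import torch
from torch.autograd import gradgradcheck

# Double-precision input with requires_grad=True, as the defaults expect.
x = torch.randn(4, dtype=torch.double, requires_grad=True)

def f(inp):
    # Twice-differentiable test function: d2/dx2 of x**3 is 6x.
    return (inp ** 3).sum()

ok = gradgradcheck(f, (x,))
print(ok)  # True
```

With "raise_exception=True" (the default), a failing check raises with diagnostics instead of returning False.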
{"text": "torch.Tensor.bmmTensor.bmm(batch2) -> Tensor\n See \"torch.bmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bmm.html", "category": "pytorch docs"}
{"text": "default_fused_wt_fake_quanttorch.quantization.fake_quantize.default_fused_wt_fake_quant\n alias of functools.partial(, observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fused_wt_fake_quant.html", "category": "pytorch docs"}
{"text": "torch.jit.tracetorch.jit.trace(func, example_inputs=None, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=, example_kwarg_inputs=None)\n Trace a function and return an executable or \"ScriptFunction\" that\n will be optimized using just-in-time compilation. Tracing is ideal\n for code that operates only on \"Tensor\"s and lists, dictionaries,\n and tuples of \"Tensor\"s.\n Using torch.jit.trace and torch.jit.trace_module, you can turn\n an existing module or Python function into a TorchScript\n \"ScriptFunction\" or \"ScriptModule\". You must provide example\n inputs, and we run the function, recording the operations performed\n on all the tensors.\n * The resulting recording of a standalone function produces\n ScriptFunction.\n * The resulting recording of nn.Module.forward or nn.Module\n produces ScriptModule.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"}
{"text": "produces ScriptModule.\n This module also contains any parameters that the original module\n had as well.\n Warning:\n Tracing only correctly records functions and modules which are\n not data dependent (e.g., do not have conditionals on data in\n tensors) and do not have any untracked external dependencies\n (e.g., perform input/output or access global variables). Tracing\n only records operations done when the given function is run on\n the given tensors. Therefore, the returned ScriptModule will\n always run the same traced graph on any input. This has some\n important implications when your module is expected to run\n different sets of operations, depending on the input and/or the\n module state. For example,\n * Tracing will not record any control-flow like if-statements or\n loops. When this control-flow is constant across your module,\n this is fine and it often inlines the control-flow decisions.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"}
{"text": "But sometimes the control-flow is actually part of the model\n itself. For instance, a recurrent network is a loop over the\n (possibly dynamic) length of an input sequence.\n * In the returned \"ScriptModule\", operations that have different\n behaviors in \"training\" and \"eval\" modes will always behave as\n if it is in the mode it was in during tracing, no matter which\n mode the ScriptModule is in.\n In cases like these, tracing would not be appropriate and\n \"scripting\" is a better choice. If you trace such models, you may\n silently get incorrect results on subsequent invocations of the\n model. The tracer will try to emit warnings when doing something\n that may cause an incorrect trace to be produced.\n Parameters:\n func (callable or torch.nn.Module) -- A Python\n function or torch.nn.Module that will be run with\n example_inputs. func arguments and return values must be", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"}
{"text": "tensors or (possibly nested) tuples that contain tensors. When a\n module is passed torch.jit.trace, only the \"forward\" method is\n run and traced (see \"torch.jit.trace_module\" for details).\n Keyword Arguments:\n * example_inputs (tuple or torch.Tensor or None,\n optional) -- A tuple of example inputs that will be passed\n to the function while tracing. Default: \"None\". Either this\n argument or \"example_kwarg_inputs\" should be specified. The\n resulting trace can be run with inputs of different types and\n shapes assuming the traced operations support those types and\n shapes. example_inputs may also be a single Tensor in which\n case it is automatically wrapped in a tuple. When the value is\n None, \"example_kwarg_inputs\" should be specified.\n * check_trace (\"bool\", optional) -- Check if the same inputs\n run through traced code produce the same outputs. Default:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"}
{"text": "\"True\". You might want to disable this if, for example, your\n network contains non- deterministic ops or if you are sure\n that the network is correct despite a checker failure.\n * check_inputs (list of tuples, optional) -- A list of\n tuples of input arguments that should be used to check the\n trace against what is expected. Each tuple is equivalent to a\n set of input arguments that would be specified in\n \"example_inputs\". For best results, pass in a set of checking\n inputs representative of the space of shapes and types of\n inputs you expect the network to see. If not specified, the\n original \"example_inputs\" are used for checking\n * check_tolerance (float, optional) -- Floating-point\n comparison tolerance to use in the checker procedure. This\n can be used to relax the checker strictness in the event that\n results diverge numerically for a known reason, such as\n operator fusion.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"}
{"text": "operator fusion.\n * strict (\"bool\", optional) -- run the tracer in a strict\n mode or not (default: \"True\"). Only turn this off when you\n want the tracer to record your mutable container types\n (currently \"list\"/\"dict\") and you are sure that the container\n you are using in your problem is a \"constant\" structure and\n does not get used as control flow (if, for) conditions.\n * example_kwarg_inputs (dict, optional) -- This\n parameter is a pack of keyword arguments of example inputs\n that will be passed to the function while tracing. Default:\n \"None\". Either this argument or \"example_inputs\" should be\n specified. The dict will be unpacked by the argument names of\n the traced function. If the keys of the dict do not match the\n traced function's argument names, a runtime exception will be\n raised.\n Returns:\n If func is nn.Module or \"forward\" of nn.Module, trace", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"}
{"text": "returns a \"ScriptModule\" object with a single \"forward\" method\n containing the traced code. The returned ScriptModule will\n have the same set of sub-modules and parameters as the original\n \"nn.Module\". If \"func\" is a standalone function, \"trace\"\n returns ScriptFunction.\n Example (tracing a function):\n import torch\n def foo(x, y):\n return 2 * x + y\n # Run foo with the provided inputs and record the tensor operations\n traced_foo = torch.jit.trace(foo, (torch.rand(3), torch.rand(3)))\n # traced_foo can now be run with the TorchScript interpreter or saved\n # and loaded in a Python-free environment\n Example (tracing an existing module):\n import torch\n import torch.nn as nn\n class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv = nn.Conv2d(1, 1, 3)\n def forward(self, x):\n return self.conv(x)\n n = Net()", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"}
{"text": "return self.conv(x)\n n = Net()\n example_weight = torch.rand(1, 1, 3, 3)\n example_forward_input = torch.rand(1, 1, 3, 3)\n # Trace a specific method and construct ScriptModule with\n # a single forward method\n module = torch.jit.trace(n.forward, example_forward_input)\n # Trace a module (implicitly traces forward) and construct a\n # ScriptModule with a single forward method\n module = torch.jit.trace(n, example_forward_input)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"}
{"text": "Unflattenclass torch.nn.Unflatten(dim, unflattened_size)\n Unflattens a tensor dim expanding it to a desired shape. For use\n with \"Sequential\".\n * \"dim\" specifies the dimension of the input tensor to be\n unflattened, and it can be either int or str when Tensor or\n NamedTensor is used, respectively.\n * \"unflattened_size\" is the new shape of the unflattened dimension\n of the tensor and it can be a tuple of ints or a list of ints\n or torch.Size for Tensor input; a NamedShape (tuple of\n (name, size) tuples) for NamedTensor input.\n Shape:\n * Input: (*, S_{\\text{dim}}, *), where S_{\\text{dim}} is the\n size at dimension \"dim\" and * means any number of dimensions\n including none.\n * Output: (*, U_1, ..., U_n, *), where U = \"unflattened_size\"\n and \\prod_{i=1}^n U_i = S_{\\text{dim}}.\n Parameters:\n * dim (Union[int, str]) -- Dimension to be\n unflattened", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unflatten.html", "category": "pytorch docs"}
{"text": "unflattened\n * unflattened_size (Union[torch.Size, Tuple,\n List, NamedShape]) -- New shape of the unflattened\n dimension\n -[ Examples ]-\n >>> input = torch.randn(2, 50)\n >>> # With tuple of ints\n >>> m = nn.Sequential(\n >>> nn.Linear(50, 50),\n >>> nn.Unflatten(1, (2, 5, 5))\n >>> )\n >>> output = m(input)\n >>> output.size()\n torch.Size([2, 2, 5, 5])\n >>> # With torch.Size\n >>> m = nn.Sequential(\n >>> nn.Linear(50, 50),\n >>> nn.Unflatten(1, torch.Size([2, 5, 5]))\n >>> )\n >>> output = m(input)\n >>> output.size()\n torch.Size([2, 2, 5, 5])\n >>> # With namedshape (tuple of tuples)\n >>> input = torch.randn(2, 50, names=('N', 'features'))\n >>> unflatten = nn.Unflatten('features', (('C', 2), ('H', 5), ('W', 5)))\n >>> output = unflatten(input)\n >>> output.size()\n torch.Size([2, 2, 5, 5])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unflatten.html", "category": "pytorch docs"}
{"text": "torch.Tensor.coalesceTensor.coalesce() -> Tensor\n Returns a coalesced copy of \"self\" if \"self\" is an uncoalesced\n tensor.\n Returns \"self\" if \"self\" is a coalesced tensor.\n Warning:\n Throws an error if \"self\" is not a sparse COO tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.coalesce.html", "category": "pytorch docs"}
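A small sketch of what "coalesce()" does to duplicate indices (the tensor values here are illustrative):

```python
import torch

# Two values share index 0 in this uncoalesced sparse COO tensor;
# coalesce() sums the duplicates and sorts the indices.
i = torch.tensor([[0, 0, 1]])
v = torch.tensor([1.0, 2.0, 3.0])
s = torch.sparse_coo_tensor(i, v, (2,))

print(s.is_coalesced())      # False
c = s.coalesce()
print(c.values().tolist())   # [3.0, 3.0]
print(c.indices().tolist())  # [[0, 1]]
```

Calling ".values()" or ".indices()" on the uncoalesced tensor "s" directly would raise, which is why the coalesced copy is needed first.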
{"text": "torch.outertorch.outer(input, vec2, *, out=None) -> Tensor\n Outer product of \"input\" and \"vec2\". If \"input\" is a vector of size\n n and \"vec2\" is a vector of size m, then \"out\" must be a matrix of\n size (n \\times m).\n Note:\n This function does not broadcast.\n Parameters:\n * input (Tensor) -- 1-D input vector\n * vec2 (Tensor) -- 1-D input vector\n Keyword Arguments:\n out (Tensor, optional) -- optional output matrix\n Example:\n >>> v1 = torch.arange(1., 5.)\n >>> v2 = torch.arange(1., 4.)\n >>> torch.outer(v1, v2)\n tensor([[ 1., 2., 3.],\n [ 2., 4., 6.],\n [ 3., 6., 9.],\n [ 4., 8., 12.]])", "source": "https://pytorch.org/docs/stable/generated/torch.outer.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.avg_pool3dtorch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) -> Tensor\n Applies 3D average-pooling operation in kT \\times kH \\times kW\n regions by step size sT \\times sH \\times sW steps. The number of\n output features is equal to \\lfloor\\frac{\\text{input\n planes}}{sT}\\rfloor.\n See \"AvgPool3d\" for details and output shape.\n Parameters:\n * input -- input tensor (\\text{minibatch} ,\n \\text{in_channels} , iT , iH , iW)\n * kernel_size -- size of the pooling region. Can be a single\n number or a tuple (kT, kH, kW)\n * stride -- stride of the pooling operation. Can be a single\n number or a tuple (sT, sH, sW). Default: \"kernel_size\"\n * padding -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple (padT, padH, padW),\n Default: 0", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool3d.html", "category": "pytorch docs"}
{"text": "Default: 0\n * ceil_mode -- when True, will use ceil instead of floor\n in the formula to compute the output shape\n * count_include_pad -- when True, will include the zero-\n padding in the averaging calculation\n * divisor_override -- if specified, it will be used as\n divisor, otherwise size of the pooling region will be used.\n Default: None", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool3d.html", "category": "pytorch docs"}
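The parameters above, including "divisor_override", can be illustrated with a tiny volume (a minimal sketch; the input values are chosen so the averages are easy to verify by hand):

```python
import torch
import torch.nn.functional as F

# A 2x2x2 volume of ones, pooled with a single kernel-sized window.
x = torch.ones(1, 1, 2, 2, 2)
out = F.avg_pool3d(x, kernel_size=2)
print(tuple(out.shape))  # (1, 1, 1, 1, 1)
print(out.item())        # 1.0

# divisor_override replaces the window size (8) as the divisor.
out2 = F.avg_pool3d(x, kernel_size=2, divisor_override=4)
print(out2.item())       # sum 8 / divisor 4 = 2.0
```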
{"text": "torch.autograd.graph.Node.next_functionsabstract property Node.next_functions: Tuple[Tuple[Optional[Node], int], ...]", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.next_functions.html", "category": "pytorch docs"}
{"text": "torch.Tensor.byteTensor.byte(memory_format=torch.preserve_format) -> Tensor\n \"self.byte()\" is equivalent to \"self.to(torch.uint8)\". See \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.byte.html", "category": "pytorch docs"}
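A quick sketch of the "byte()" conversion (values chosen to show the truncation behavior of the float-to-integer cast):

```python
import torch

# Converting to uint8 truncates the fractional part toward zero.
t = torch.tensor([0.0, 1.7, 255.9])
b = t.byte()                 # same as t.to(torch.uint8)
print(b.tolist())            # [0, 1, 255]
print(b.dtype)               # torch.uint8
```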
{"text": "LinearReLUclass torch.ao.nn.intrinsic.quantized.dynamic.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)\n A LinearReLU module fused from Linear and ReLU modules that can be\n used for dynamic quantization. Supports both, FP16 and INT8\n quantization.\n We adopt the same interface as\n \"torch.ao.nn.quantized.dynamic.Linear\".\n Variables:\n torch.ao.nn.quantized.dynamic.Linear (Same as) --\n Examples:\n >>> m = nn.intrinsic.quantized.dynamic.LinearReLU(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.dynamic.LinearReLU.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.adaptive_max_pool2dtorch.nn.functional.adaptive_max_pool2d(*args, **kwargs)\n Applies a 2D adaptive max pooling over an input signal composed of\n several input planes.\n See \"AdaptiveMaxPool2d\" for details and output shape.\n Parameters:\n * output_size -- the target output size (single integer or\n double-integer tuple)\n * return_indices -- whether to return pooling indices.\n Default: \"False\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_max_pool2d.html", "category": "pytorch docs"}
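A small sketch of adaptive pooling (the input is chosen so each quadrant maximum is obvious):

```python
import torch
import torch.nn.functional as F

# A 4x4 input pooled adaptively down to 2x2: each output cell is the
# max over its quadrant, whatever the input size happens to be.
x = torch.arange(16.0).reshape(1, 1, 4, 4)
out = F.adaptive_max_pool2d(x, output_size=(2, 2))
print(out.squeeze().tolist())   # [[5.0, 7.0], [13.0, 15.0]]

# return_indices=True also yields the flat argmax positions.
out, idx = F.adaptive_max_pool2d(x, output_size=(2, 2), return_indices=True)
```

Unlike "max_pool2d", the kernel size and stride are derived from "output_size", so the same call works for any input resolution.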
{"text": "MaxUnpool1dclass torch.nn.MaxUnpool1d(kernel_size, stride=None, padding=0)\n Computes a partial inverse of \"MaxPool1d\".\n \"MaxPool1d\" is not fully invertible, since the non-maximal values\n are lost.\n \"MaxUnpool1d\" takes in as input the output of \"MaxPool1d\" including\n the indices of the maximal values and computes a partial inverse in\n which all non-maximal values are set to zero.\n Note:\n \"MaxPool1d\" can map several input sizes to the same output sizes.\n Hence, the inversion process can get ambiguous. To accommodate\n this, you can provide the needed output size as an additional\n argument \"output_size\" in the forward call. See the Inputs and\n Example below.\n Parameters:\n * kernel_size (int or tuple) -- Size of the max\n pooling window.\n * stride (int or tuple) -- Stride of the max pooling\n window. It is set to \"kernel_size\" by default.\n * padding (int or tuple) -- Padding that was added to\n the input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool1d.html", "category": "pytorch docs"}
{"text": "the input\n Inputs:\n * input: the input Tensor to invert\n * indices: the indices given out by \"MaxPool1d\"\n * output_size (optional): the targeted output size\n Shape:\n * Input: (N, C, H_{in}) or (C, H_{in}).\n * Output: (N, C, H_{out}) or (C, H_{out}), where\n H_{out} = (H_{in} - 1) \\times \\text{stride}[0] - 2 \\times\n \\text{padding}[0] + \\text{kernel_size}[0]\n or as given by \"output_size\" in the call operator\n Example:\n >>> pool = nn.MaxPool1d(2, stride=2, return_indices=True)\n >>> unpool = nn.MaxUnpool1d(2, stride=2)\n >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8]]])\n >>> output, indices = pool(input)\n >>> unpool(output, indices)\n tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]])\n >>> # Example showcasing the use of output_size\n >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8, 9]]])\n >>> output, indices = pool(input)\n >>> unpool(output, indices, output_size=input.size())", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool1d.html", "category": "pytorch docs"}
{"text": "tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8., 0.]]])\n >>> unpool(output, indices)\n tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.argmaxTensor.argmax(dim=None, keepdim=False) -> LongTensor\n See \"torch.argmax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.argmax.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.max_pool2dtorch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\n Applies a 2D max pooling over an input signal composed of several\n input planes.\n Note:\n The order of \"ceil_mode\" and \"return_indices\" is different from\n what is seen in \"MaxPool2d\", and will change in a future release.\n See \"MaxPool2d\" for details.\n Parameters:\n * input -- input tensor (\\text{minibatch} ,\n \\text{in_channels} , iH , iW), minibatch dim optional.\n * kernel_size -- size of the pooling region. Can be a single\n number or a tuple (kH, kW)\n * stride -- stride of the pooling operation. Can be a single\n number or a tuple (sH, sW). Default: \"kernel_size\"\n * padding -- Implicit negative infinity padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2.\n * dilation -- The stride between elements within a sliding", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool2d.html", "category": "pytorch docs"}
{"text": "window, must be > 0.\n * ceil_mode -- If \"True\", will use ceil instead of floor\n to compute the output shape. This ensures that every element\n in the input tensor is covered by a sliding window.\n * return_indices -- If \"True\", will return the argmax along\n with the max values. Useful for\n \"torch.nn.functional.max_unpool2d\" later", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool2d.html", "category": "pytorch docs"}
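The "ceil_mode" behavior described above can be seen with a 5x5 input, where floor mode drops the ragged final window and ceil mode keeps it (a minimal sketch; shapes follow the MaxPool2d output formula):

```python
import torch
import torch.nn.functional as F

x = torch.arange(25.0).reshape(1, 1, 5, 5)

# With kernel_size=2 (stride defaults to kernel_size):
# floor((5 - 2) / 2) + 1 = 2, ceil((5 - 2) / 2) + 1 = 3.
f1 = F.max_pool2d(x, kernel_size=2)
f2 = F.max_pool2d(x, kernel_size=2, ceil_mode=True)
print(tuple(f1.shape))  # (1, 1, 2, 2)
print(tuple(f2.shape))  # (1, 1, 3, 3)

# return_indices yields the argmax locations needed by max_unpool2d.
out, idx = F.max_pool2d(x, kernel_size=2, return_indices=True)
```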
{"text": "torch.Tensor.unsqueeze_Tensor.unsqueeze_(dim) -> Tensor\n In-place version of \"unsqueeze()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unsqueeze_.html", "category": "pytorch docs"}
{"text": "QFunctionalclass torch.ao.nn.quantized.QFunctional\n Wrapper class for quantized operations.\n The instance of this class can be used instead of the\n \"torch.ops.quantized\" prefix. See example usage below.\n Note:\n This class does not provide a \"forward\" hook. Instead, you must\n use one of the underlying functions (e.g. \"add\").\n Examples:\n >>> q_add = QFunctional()\n >>> a = torch.quantize_per_tensor(torch.tensor(3.0), 1.0, 0, torch.qint32)\n >>> b = torch.quantize_per_tensor(torch.tensor(4.0), 1.0, 0, torch.qint32)\n >>> q_add.add(a, b) # Equivalent to torch.ops.quantized.add(a, b, 1.0, 0)\n Valid operation names:\n * add\n * cat\n * mul\n * add_relu\n * add_scalar\n * mul_scalar", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.QFunctional.html", "category": "pytorch docs"}
{"text": "LazyBatchNorm1dclass torch.nn.LazyBatchNorm1d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n A \"torch.nn.BatchNorm1d\" module with lazy initialization of the\n \"num_features\" argument of the \"BatchNorm1d\" that is inferred from\n the \"input.size(1)\". The attributes that will be lazily initialized\n are weight, bias, running_mean and running_var.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm1d.html", "category": "pytorch docs"}
{"text": "\"True\"\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics in both\n training and eval modes. Default: \"True\"\n cls_to_become\n alias of \"BatchNorm1d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm1d.html", "category": "pytorch docs"}
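A short sketch of the lazy-initialization behaviour described above: `num_features` is omitted from the constructor and inferred from `input.size(1)` on the first forward pass.

```python
import torch
import torch.nn as nn

bn = nn.LazyBatchNorm1d()   # num_features not given
x = torch.randn(8, 16)      # batch of 8 samples, 16 features each

out = bn(x)                 # first forward materializes weight/bias/stats

# The lazily initialized attributes now have their inferred size.
print(bn.weight.shape)      # torch.Size([16])
print(bn.running_mean.shape)
```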
{"text": "torch.fliplrtorch.fliplr(input) -> Tensor\n Flip tensor in the left/right direction, returning a new tensor.\n Flip the entries in each row in the left/right direction. Columns\n are preserved, but appear in a different order than before.\n Note:\n Requires the tensor to be at least 2-D.\n Note:\n torch.fliplr makes a copy of \"input\"'s data. This is different\n from NumPy's np.fliplr, which returns a view in constant time.\n Since copying a tensor's data is more work than viewing that\n data, torch.fliplr is expected to be slower than np.fliplr.\n Parameters:\n input (Tensor) -- Must be at least 2-dimensional.\n Example:\n >>> x = torch.arange(4).view(2, 2)\n >>> x\n tensor([[0, 1],\n [2, 3]])\n >>> torch.fliplr(x)\n tensor([[1, 0],\n [3, 2]])", "source": "https://pytorch.org/docs/stable/generated/torch.fliplr.html", "category": "pytorch docs"}
{"text": "EmbeddingBagclass torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, _weight=None, include_last_offset=False, padding_idx=None, device=None, dtype=None)\n Computes sums or means of 'bags' of embeddings, without\n instantiating the intermediate embeddings.\n For bags of constant length, no \"per_sample_weights\", no indices\n equal to \"padding_idx\", and with 2D inputs, this class\n * with \"mode=\"sum\"\" is equivalent to \"Embedding\" followed by\n \"torch.sum(dim=1)\",\n * with \"mode=\"mean\"\" is equivalent to \"Embedding\" followed by\n \"torch.mean(dim=1)\",\n * with \"mode=\"max\"\" is equivalent to \"Embedding\" followed by\n \"torch.max(dim=1)\".\n However, \"EmbeddingBag\" is much more time and memory efficient than\n using a chain of these operations.\n EmbeddingBag also supports per-sample weights as an argument to the\n forward pass. This scales the output of the Embedding before", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "performing a weighted reduction as specified by \"mode\". If\n \"per_sample_weights\" is passed, the only supported \"mode\" is\n \"\"sum\"\", which computes a weighted sum according to\n \"per_sample_weights\".\n Parameters:\n * num_embeddings (int) -- size of the dictionary of\n embeddings\n * embedding_dim (int) -- the size of each embedding vector\n * max_norm (float, optional) -- If given, each\n embedding vector with norm larger than \"max_norm\" is\n renormalized to have norm \"max_norm\".\n * norm_type (float, optional) -- The p of the p-norm\n to compute for the \"max_norm\" option. Default \"2\".\n * scale_grad_by_freq (bool, optional) -- if given,\n this will scale gradients by the inverse of frequency of the\n words in the mini-batch. Default \"False\". Note: this option is\n not supported when \"mode=\"max\"\".\n * mode (str, optional) -- \"\"sum\"\", \"\"mean\"\" or", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "\"\"max\"\". Specifies the way to reduce the bag. \"\"sum\"\" computes\n the weighted sum, taking \"per_sample_weights\" into\n consideration. \"\"mean\"\" computes the average of the values in\n the bag, \"\"max\"\" computes the max value over each bag.\n Default: \"\"mean\"\"\n * sparse (bool, optional) -- if \"True\", gradient\n w.r.t. \"weight\" matrix will be a sparse tensor. See Notes for\n more details regarding sparse gradients. Note: this option is\n not supported when \"mode=\"max\"\".\n * include_last_offset (bool, optional) -- if \"True\",\n \"offsets\" has one additional element, where the last element\n is equivalent to the size of indices. This matches the CSR\n format.\n * padding_idx (int, optional) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "updated during training, i.e. it remains as a fixed \"pad\". For\n a newly constructed EmbeddingBag, the embedding vector at\n \"padding_idx\" will default to all zeros, but can be updated to\n another value to be used as the padding vector. Note that the\n embedding vector at \"padding_idx\" is excluded from the\n reduction.\n Variables:\n weight (Tensor) -- the learnable weights of the module of\n shape (num_embeddings, embedding_dim) initialized from\n \\mathcal{N}(0, 1).\n Examples:\n >>> # an EmbeddingBag module containing 10 tensors of size 3\n >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')\n >>> # a batch of 2 samples of 4 indices each\n >>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9], dtype=torch.long)\n >>> offsets = torch.tensor([0, 4], dtype=torch.long)\n >>> embedding_sum(input, offsets)\n tensor([[-0.8861, -5.4350, -0.0523],\n [ 1.1306, -2.5798, -1.0044]])\n >>> # Example with padding_idx", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "\n\n\nExample with padding_idx\n >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum', padding_idx=2)\n >>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9], dtype=torch.long)\n >>> offsets = torch.tensor([0, 4], dtype=torch.long)\n >>> embedding_sum(input, offsets)\n tensor([[ 0.0000, 0.0000, 0.0000],\n [-0.7082, 3.2145, -2.6251]])\n >>> # An EmbeddingBag can be loaded from an Embedding like so\n >>> embedding = nn.Embedding(10, 3, padding_idx=2)\n >>> embedding_sum = nn.EmbeddingBag.from_pretrained(\n embedding.weight,\n padding_idx=embedding.padding_idx,\n mode='sum')\n\nforward(input, offsets=None, per_sample_weights=None)\n Forward pass of EmbeddingBag.\n Parameters:\n * input (Tensor) -- Tensor containing bags of indices\n into the embedding matrix.\n * offsets (Tensor, optional) -- Only used when\n \"input\" is 1D. \"offsets\" determines the starting index\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "position of each bag (sequence) in \"input\".\n * per_sample_weights (Tensor, optional) -- a tensor\n of float / double weights, or None to indicate all weights\n should be taken to be \"1\". If specified,\n \"per_sample_weights\" must have exactly the same shape as\n input and is treated as having the same \"offsets\", if those\n are not \"None\". Only supported for \"mode='sum'\".\n Returns:\n Tensor output shape of (B, embedding_dim).\n Return type:\n Tensor\n Note:\n A few notes about \"input\" and \"offsets\":\n * \"input\" and \"offsets\" have to be of the same type, either\n int or long\n * If \"input\" is 2D of shape (B, N), it will be treated as\n \"B\" bags (sequences) each of fixed length \"N\", and this will\n return \"B\" values aggregated in a way depending on the\n \"mode\". \"offsets\" is ignored and required to be \"None\" in\n this case.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "this case.\n * If \"input\" is 1D of shape (N), it will be treated as a\n concatenation of multiple bags (sequences). \"offsets\" is\n required to be a 1D tensor containing the starting index\n positions of each bag in \"input\". Therefore, for \"offsets\"\n of shape (B), \"input\" will be viewed as having \"B\" bags.\n Empty bags (i.e., having 0-length) will have returned\n vectors filled by zeros.\n classmethod from_pretrained(embeddings, freeze=True, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, include_last_offset=False, padding_idx=None)\n Creates EmbeddingBag instance from given 2-dimensional\n FloatTensor.\n Parameters:\n * embeddings (Tensor) -- FloatTensor containing weights\n for the EmbeddingBag. First dimension is being passed to\n EmbeddingBag as 'num_embeddings', second as\n 'embedding_dim'.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "'embedding_dim'.\n * freeze (bool, optional) -- If \"True\", the tensor\n does not get updated in the learning process. Equivalent to\n \"embeddingbag.weight.requires_grad = False\". Default:\n \"True\"\n * max_norm (float, optional) -- See module\n initialization documentation. Default: \"None\"\n * norm_type (float, optional) -- See module\n initialization documentation. Default \"2\".\n * scale_grad_by_freq (bool, optional) -- See module\n initialization documentation. Default \"False\".\n * mode (str, optional) -- See module initialization\n documentation. Default: \"\"mean\"\"\n * sparse (bool, optional) -- See module\n initialization documentation. Default: \"False\".\n * include_last_offset (bool, optional) -- See\n module initialization documentation. Default: \"False\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "\n * padding_idx (int, optional) -- See module\n initialization documentation. Default: \"None\".\n Return type:\n EmbeddingBag\n Examples:\n >>> # FloatTensor containing pretrained weights\n >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])\n >>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight)\n >>> # Get embeddings for index 1\n >>> input = torch.LongTensor([[1, 0]])\n >>> embeddingbag(input)\n tensor([[ 2.5000, 3.7000, 4.6500]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "default_float_qparams_observertorch.quantization.observer.default_float_qparams_observer\n alias of functools.partial(,\n dtype=torch.quint8, qscheme=torch.per_channel_affine_float_qparams,\n ch_axis=0){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_float_qparams_observer.html", "category": "pytorch docs"}
{"text": "torch.Tensor.retains_gradTensor.retains_grad\n Is \"True\" if this Tensor is non-leaf and its \"grad\" is enabled to\n be populated during \"backward()\", \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.retains_grad.html", "category": "pytorch docs"}
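A small illustration of the flag, using `retain_grad()` to opt a non-leaf tensor in to having its `grad` populated:

```python
import torch

x = torch.ones(3, requires_grad=True)  # leaf: .grad is always populated
y = x * 2                              # non-leaf: .grad not kept by default

print(x.retains_grad)  # False (the flag applies to non-leaf tensors only)
print(y.retains_grad)  # False

y.retain_grad()        # request that y.grad be populated during backward()
y.sum().backward()
print(y.retains_grad)  # True
print(y.grad)          # tensor([1., 1., 1.])
```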
{"text": "torch.Tensor.index_copy_Tensor.index_copy_(dim, index, tensor) -> Tensor\n Copies the elements of \"tensor\" into the \"self\" tensor by selecting\n the indices in the order given in \"index\". For example, if \"dim ==\n 0\" and \"index[i] == j\", then the \"i\"th row of \"tensor\" is copied to\n the \"j\"th row of \"self\".\n The \"dim\"th dimension of \"tensor\" must have the same size as the\n length of \"index\" (which must be a vector), and all other\n dimensions must match \"self\", or an error will be raised.\n Note:\n If \"index\" contains duplicate entries, multiple elements from\n \"tensor\" will be copied to the same index of \"self\". The result\n is nondeterministic since it depends on which copy occurs last.\n Parameters:\n * dim (int) -- dimension along which to index\n * index (LongTensor) -- indices of \"tensor\" to select from\n * tensor (Tensor) -- the tensor containing values to copy\n Example:\n >>> x = torch.zeros(5, 3)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_copy_.html", "category": "pytorch docs"}
{"text": "Example:\n >>> x = torch.zeros(5, 3)\n >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n >>> index = torch.tensor([0, 4, 2])\n >>> x.index_copy_(0, index, t)\n tensor([[ 1., 2., 3.],\n [ 0., 0., 0.],\n [ 7., 8., 9.],\n [ 0., 0., 0.],\n [ 4., 5., 6.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_copy_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.vsplitTensor.vsplit(split_size_or_sections) -> List of Tensors\n See \"torch.vsplit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.vsplit.html", "category": "pytorch docs"}
{"text": "MultiheadAttentionclass torch.nn.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None)\n Allows the model to jointly attend to information from different\n representation subspaces as described in the paper: Attention Is\n All You Need.\n Multi-Head Attention is defined as:\n \\text{MultiHead}(Q, K, V) =\n \\text{Concat}(head_1,\\dots,head_h)W^O\n where head_i = \\text{Attention}(QW_i^Q, KW_i^K, VW_i^V).\n \"forward()\" will use a special optimized implementation if all of\n the following conditions are met:\n * self attention is being computed (i.e., \"query\", \"key\", and\n \"value\" are the same tensor. This restriction will be loosened in\n the future.)\n * inputs are batched (3D) with \"batch_first==True\"\n * Either autograd is disabled (using \"torch.inference_mode\" or\n \"torch.no_grad\") or no tensor argument \"requires_grad\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "\n * training is disabled (using \".eval()\")\n * \"add_bias_kv\" is \"False\"\n * \"add_zero_attn\" is \"False\"\n * \"batch_first\" is \"True\" and the input is batched\n * \"kdim\" and \"vdim\" are equal to \"embed_dim\"\n * if a NestedTensor is passed, neither \"key_padding_mask\" nor\n \"attn_mask\" is passed\n * autocast is disabled\n If the optimized implementation is in use, a NestedTensor can be\n passed for \"query\"/\"key\"/\"value\" to represent padding more\n efficiently than using a padding mask. In this case, a NestedTensor\n will be returned, and an additional speedup proportional to the\n fraction of the input that is padding can be expected.\n Parameters:\n * embed_dim -- Total dimension of the model.\n * num_heads -- Number of parallel attention heads. Note that\n \"embed_dim\" will be split across \"num_heads\" (i.e. each head\n will have dimension \"embed_dim // num_heads\").\n * dropout -- Dropout probability on \"attn_output_weights\".\n Default: \"0.0\" (no dropout).\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "Default: \"0.0\" (no dropout).\n * bias -- If specified, adds bias to input / output\n projection layers. Default: \"True\".\n * add_bias_kv -- If specified, adds bias to the key and\n value sequences at dim=0. Default: \"False\".\n * add_zero_attn -- If specified, adds a new batch of zeros\n to the key and value sequences at dim=1. Default: \"False\".\n * kdim -- Total number of features for keys. Default: \"None\"\n (uses \"kdim=embed_dim\").\n * vdim -- Total number of features for values. Default:\n \"None\" (uses \"vdim=embed_dim\").\n * batch_first -- If \"True\", then the input and output\n tensors are provided as (batch, seq, feature). Default:\n \"False\" (seq, batch, feature).\n Examples:\n >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)\n >>> attn_output, attn_output_weights = multihead_attn(query, key, value)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True, is_causal=False)\n Parameters:\n * query (Tensor) -- Query embeddings of shape (L, E_q)\n for unbatched input, (L, N, E_q) when \"batch_first=False\"\n or (N, L, E_q) when \"batch_first=True\", where L is the\n target sequence length, N is the batch size, and E_q is the\n query embedding dimension \"embed_dim\". Queries are compared\n against key-value pairs to produce the output. See\n \"Attention Is All You Need\" for more details.\n * key (Tensor) -- Key embeddings of shape (S, E_k) for\n unbatched input, (S, N, E_k) when \"batch_first=False\" or\n (N, S, E_k) when \"batch_first=True\", where S is the source\n sequence length, N is the batch size, and E_k is the key\n embedding dimension \"kdim\". See \"Attention Is All You Need\"\n for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "for more details.\n * value (Tensor) -- Value embeddings of shape (S, E_v)\n for unbatched input, (S, N, E_v) when \"batch_first=False\"\n or (N, S, E_v) when \"batch_first=True\", where S is the\n source sequence length, N is the batch size, and E_v is the\n value embedding dimension \"vdim\". See \"Attention Is All You\n Need\" for more details.\n * key_padding_mask (Optional[Tensor]) -- If\n specified, a mask of shape (N, S) indicating which elements\n within \"key\" to ignore for the purpose of attention (i.e.\n treat as \"padding\"). For unbatched query, shape should be\n (S). Binary and byte masks are supported. For a binary\n mask, a \"True\" value indicates that the corresponding \"key\"\n value will be ignored for the purpose of attention. For a\n float mask, it will be directly added to the corresponding\n \"key\" value.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "\"key\" value.\n * need_weights (bool) -- If specified, returns\n \"attn_output_weights\" in addition to \"attn_outputs\".\n Default: \"True\".\n * attn_mask (Optional[Tensor]) -- If specified, a\n 2D or 3D mask preventing attention to certain positions.\n Must be of shape (L, S) or (N\\cdot\\text{num_heads}, L, S),\n where N is the batch size, L is the target sequence length,\n and S is the source sequence length. A 2D mask will be\n broadcasted across the batch while a 3D mask allows for a\n different mask for each entry in the batch. Binary, byte,\n and float masks are supported. For a binary mask, a \"True\"\n value indicates that the corresponding position is not\n allowed to attend. For a byte mask, a non-zero value\n indicates that the corresponding position is not allowed to\n attend. For a float mask, the mask values will be added to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "the attention weight.\n * is_causal (bool) -- If specified, applies a causal\n mask as attention mask. Mutually exclusive with providing\n attn_mask. Default: \"False\".\n * average_attn_weights (bool) -- If true, indicates\n that the returned \"attn_weights\" should be averaged across\n heads. Otherwise, \"attn_weights\" are provided separately\n per head. Note that this flag only has an effect when\n \"need_weights=True\". Default: \"True\" (i.e. average weights\n across heads)\n Return type:\n Tuple[Tensor, Optional[Tensor]]\n Outputs:\n * attn_output - Attention outputs of shape (L, E) when\n input is unbatched, (L, N, E) when \"batch_first=False\" or\n (N, L, E) when \"batch_first=True\", where L is the target\n sequence length, N is the batch size, and E is the\n embedding dimension \"embed_dim\".\n * attn_output_weights - Only returned when", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "\"need_weights=True\". If \"average_attn_weights=True\",\n returns attention weights averaged across heads of shape\n (L, S) when input is unbatched or (N, L, S), where N is the\n batch size, L is the target sequence length, and S is the\n source sequence length. If \"average_attn_weights=False\",\n returns attention weights per head of shape\n (\\text{num_heads}, L, S) when input is unbatched or (N,\n \\text{num_heads}, L, S).\n Note:\n batch_first argument is ignored for unbatched inputs.\n merge_masks(attn_mask, key_padding_mask, query)\n Determine mask type and combine masks if necessary. If only one\n mask is provided, that mask and the corresponding mask type will\n be returned. If both masks are provided, they will be both\n expanded to shape \"(batch_size, num_heads, seq_len, seq_len)\",\n combined with logical \"or\" and mask type 2 will be returned", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
{"text": ":param attn_mask: attention mask of shape \"(seq_len, seq_len)\", mask type 0\n :param key_padding_mask: padding mask of shape \"(batch_size, seq_len)\", mask type 1\n :param query: query embeddings of shape \"(batch_size, seq_len, embed_dim)\"\n Returns:\n merged_mask -- merged mask\n mask_type -- merged mask type (0, 1, or 2)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"}
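A minimal self-attention sketch with `batch_first=True`; the dimensions are illustrative:

```python
import torch
import torch.nn as nn

embed_dim, num_heads = 16, 4
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

# Self-attention: query, key and value are the same tensor.
x = torch.randn(2, 5, embed_dim)       # (N, L, E): batch of 2, length 5
attn_out, attn_weights = mha(x, x, x)

print(attn_out.shape)      # torch.Size([2, 5, 16])
print(attn_weights.shape)  # torch.Size([2, 5, 5]) -- averaged over heads
                           # since average_attn_weights defaults to True
```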
{"text": "torch.bitwise_xortorch.bitwise_xor(input, other, *, out=None) -> Tensor\n Computes the bitwise XOR of \"input\" and \"other\". The input tensor\n must be of integral or Boolean types. For bool tensors, it computes\n the logical XOR.\n Parameters:\n * input -- the first input tensor\n * other -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.bitwise_xor(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([-2, -2, 0], dtype=torch.int8)\n >>> torch.bitwise_xor(torch.tensor([True, True, False]), torch.tensor([False, True, False]))\n tensor([ True, False, False])", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_xor.html", "category": "pytorch docs"}
{"text": "torch.cuda.list_gpu_processestorch.cuda.list_gpu_processes(device=None)\n Returns a human-readable printout of the running processes and\n their GPU memory use for a given device.\n This can be useful to display periodically during training, or when\n handling out-of-memory exceptions.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns printout for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.list_gpu_processes.html", "category": "pytorch docs"}
{"text": "torch.full_liketorch.full_like(input, fill_value, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\n Returns a tensor with the same size as \"input\" filled with\n \"fill_value\". \"torch.full_like(input, fill_value)\" is equivalent to\n \"torch.full(input.size(), fill_value, dtype=input.dtype,\n layout=input.layout, device=input.device)\".\n Parameters:\n * input (Tensor) -- the size of \"input\" will determine\n size of the output tensor.\n * fill_value -- the number to fill the output tensor with.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n * device (\"torch.device\", optional) -- the desired device of", "source": "https://pytorch.org/docs/stable/generated/torch.full_like.html", "category": "pytorch docs"}
{"text": "returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.full_like.html", "category": "pytorch docs"}
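A brief sketch of `torch.full_like`, showing the inherited properties and a keyword override:

```python
import torch

x = torch.randn(2, 3, dtype=torch.float64)
y = torch.full_like(x, 7)   # inherits size, dtype, layout and device of x

print(y.shape, y.dtype)     # torch.Size([2, 3]) torch.float64

# Keyword arguments override the inherited defaults:
z = torch.full_like(x, 7, dtype=torch.int32)
print(z.dtype)              # torch.int32
```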
{"text": "ConvTranspose2dclass torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n Applies a 2D transposed convolution operator over an input image\n composed of several input planes.\n This module can be seen as the gradient of Conv2d with respect to\n its input. It is also known as a fractionally-strided convolution\n or a deconvolution (although it is not an actual deconvolution\n operation as it does not compute a true inverse of convolution).\n For more information, see the visualizations here and the\n Deconvolutional Networks paper.\n This module supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n * \"stride\" controls the stride for the cross-correlation.\n * \"padding\" controls the amount of implicit zero padding on both", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "sides for \"dilation * (kernel_size - 1) - padding\" number of\n points. See note below for details.\n * \"output_padding\" controls the additional size added to one side\n of the output shape. See note below for details.\n * \"dilation\" controls the spacing between the kernel points; also\n known as the \u00e0 trous algorithm. It is harder to describe, but the\n link here has a nice visualization of what \"dilation\" does.\n * \"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". For example,\n * At groups=1, all inputs are convolved to all outputs.\n * At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n * At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\n The parameters \"kernel_size\", \"stride\", \"padding\", \"output_padding\"\n can either be:\n * a single \"int\" -- in which case the same value is used for the\n height and width dimensions\n * a \"tuple\" of two ints -- in which case, the first int is\n used for the height dimension, and the second int for the\n width dimension\n Note:\n The \"padding\" argument effectively adds \"dilation * (kernel_size\n - 1) - padding\" amount of zero padding to both sizes of the\n input. This is set so that when a \"Conv2d\" and a\n \"ConvTranspose2d\" are initialized with same parameters, they are\n inverses of each other in regard to the input and output shapes.\n However, when \"stride > 1\", \"Conv2d\" maps multiple input shapes\n to the same output shape. \"output_padding\" is provided to resolve\n this ambiguity by effectively increasing the calculated output", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "shape on one side. Note that \"output_padding\" is only used to\n find output shape, but does not actually add zero-padding to\n output.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Parameters:\n * in_channels (int) -- Number of channels in the input\n image\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- \"dilation *", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "(kernel_size - 1) - padding\" zero-padding will be added to\n both sides of each dimension in the input. Default: 0\n * output_padding (int or tuple, optional) --\n Additional size added to one side of each dimension in the\n output shape. Default: 0\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n Shape:\n * Input: (N, C_{in}, H_{in}, W_{in}) or (C_{in}, H_{in}, W_{in})\n * Output: (N, C_{out}, H_{out}, W_{out}) or (C_{out}, H_{out},\n W_{out}), where\n H_{out} = (H_{in} - 1) \\times \\text{stride}[0] - 2 \\times\n \\text{padding}[0] + \\text{dilation}[0] \\times", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "(\\text{kernel_size}[0] - 1) + \\text{output_padding}[0] + 1\n W_{out} = (W_{in} - 1) \\times \\text{stride}[1] - 2 \\times\n \\text{padding}[1] + \\text{dilation}[1] \\times\n (\\text{kernel_size}[1] - 1) + \\text{output_padding}[1] + 1\n Variables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{in_channels},\n \\frac{\\text{out_channels}}{\\text{groups}},\n \\text{kernel_size[0]}, \\text{kernel_size[1]}). The values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{out} *\n \\prod_{i=0}^{1}\\text{kernel_size}[i]}\n * bias (Tensor) -- the learnable bias of the module of\n shape (out_channels) If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{out} *\n \\prod_{i=0}^{1}\\text{kernel_size}[i]}\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> # With square kernels and equal stride\n >>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> input = torch.randn(20, 16, 50, 100)\n >>> output = m(input)\n >>> # exact output size can be also specified as an argument\n >>> input = torch.randn(1, 16, 12, 12)\n >>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)\n >>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)\n >>> h = downsample(input)\n >>> h.size()\n torch.Size([1, 16, 6, 6])\n >>> output = upsample(h, output_size=input.size())\n >>> output.size()\n torch.Size([1, 16, 12, 12])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "torch.cuda.comm.reduce_addtorch.cuda.comm.reduce_add(inputs, destination=None)\n Sums tensors from multiple GPUs.\n All inputs should have matching shapes, dtype, and layout. The\n output tensor will be of the same shape, dtype, and layout.\n Parameters:\n * inputs (Iterable[Tensor]) -- an iterable of\n tensors to add.\n * destination (int, optional) -- a device on which the\n output will be placed (default: current device).\n Returns:\n A tensor containing an elementwise sum of all inputs, placed on\n the \"destination\" device.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.reduce_add.html", "category": "pytorch docs"}
{"text": "torch.Tensor.negativeTensor.negative() -> Tensor\n See \"torch.negative()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.negative.html", "category": "pytorch docs"}
{"text": "torch.Tensor.tTensor.t() -> Tensor\n See \"torch.t()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.t.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cauchy_Tensor.cauchy_(median=0, sigma=1, *, generator=None) -> Tensor\n Fills the tensor with numbers drawn from the Cauchy distribution:\n f(x) = \\dfrac{1}{\\pi} \\dfrac{\\sigma}{(x - \\text{median})^2 +\n \\sigma^2}", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cauchy_.html", "category": "pytorch docs"}
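The entry above has no usage example; here is a minimal sketch of filling a tensor in place with Cauchy samples (tensor size and parameters are chosen arbitrarily for illustration):

```python
import torch

# Fill a tensor in place with samples from Cauchy(median=0, sigma=1).
t = torch.empty(1000)
t.cauchy_(median=0.0, sigma=1.0)

# The Cauchy distribution has no finite mean, so a sanity check uses
# the sample median, which should land near the median parameter.
sample_median = t.median()
```

Note that because the distribution is heavy-tailed, individual samples can be arbitrarily large even though the sample median stays near 0.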
{"text": "torch.autograd.functional.hvptorch.autograd.functional.hvp(func, inputs, v=None, create_graph=False, strict=False)\n Function that computes the dot product between the Hessian of a\n given scalar function and a vector \"v\" at the point given by the\n inputs.\n Parameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor with a single element.\n * inputs (tuple of Tensors or Tensor) -- inputs to the\n function \"func\".\n * v (tuple of Tensors or Tensor) -- The vector for\n which the Hessian vector product is computed. Must be the same\n size as the input of \"func\". This argument is optional when\n \"func\"'s input contains a single element and (if it is not\n provided) will be set as a Tensor containing a single \"1\".\n * create_graph (bool, optional) -- If \"True\", both the\n output and result will be computed in a differentiable way.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html", "category": "pytorch docs"}
{"text": "Note that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * strict (bool, optional) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n Tensor of zeros as the hvp for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n Returns:\n tuple with:\n func_output (tuple of Tensors or Tensor): output of\n \"func(inputs)\"\n hvp (tuple of Tensors or Tensor): result of the dot product\n with the same shape as the inputs.\n Return type:\n output (tuple)\n -[ Example ]-\n\n\n\ndef pow_reducer(x):\n ... return x.pow(3).sum()\ninputs = torch.rand(2, 2)\nv = torch.ones(2, 2)\nhvp(pow_reducer, inputs, v)\n (tensor(0.1448),\n tensor([[2.0239, 1.6456],\n [2.4988, 1.4310]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html", "category": "pytorch docs"}
{"text": "[2.4988, 1.4310]]))\n\n\n\nhvp(pow_reducer, inputs, v, create_graph=True)\n (tensor(0.1448, grad_fn=),\n tensor([[2.0239, 1.6456],\n [2.4988, 1.4310]], grad_fn=))\ndef pow_adder_reducer(x, y):\n ... return (2 * x.pow(2) + 3 * y.pow(2)).sum()\ninputs = (torch.rand(2), torch.rand(2))\nv = (torch.zeros(2), torch.ones(2))\nhvp(pow_adder_reducer, inputs, v)\n (tensor(2.3030),\n (tensor([0., 0.]),\n tensor([6., 6.])))\n Note:\n This function is significantly slower than vhp due to backward\n mode AD constraints. If your function is twice continuously\n differentiable, then hvp = vhp.t(). So if you know that your\n function satisfies this condition, you should use vhp instead,\n which is much faster with the current implementation.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html", "category": "pytorch docs"}
{"text": "torch.Tensor.trilTensor.tril(diagonal=0) -> Tensor\n See \"torch.tril()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tril.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ltTensor.lt(other) -> Tensor\n See \"torch.lt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lt.html", "category": "pytorch docs"}
{"text": "torch.Tensor.expTensor.exp() -> Tensor\n See \"torch.exp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.exp.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.conv2dtorch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor\n Applies a 2D convolution over an input image composed of several\n input planes.\n This operator supports TensorFloat32.\n See \"Conv2d\" for details and output shape.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Note:\n This operator supports complex data types i.e. \"complex32,\n complex64, complex128\".\n Parameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)\n * weight -- filters of shape (\\text{out_channels} ,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html", "category": "pytorch docs"}
{"text": "\\frac{\\text{in_channels}}{\\text{groups}} , kH , kW)\n * bias -- optional bias tensor of shape\n (\\text{out_channels}). Default: \"None\"\n * stride -- the stride of the convolving kernel. Can be a\n single number or a tuple (sH, sW). Default: 1\n * padding --\n implicit paddings on both sides of the input. Can be a string\n {'valid', 'same'}, single number or a tuple (padH, padW).\n Default: 0 \"padding='valid'\" is the same as no padding.\n \"padding='same'\" pads the input so the output has the same\n shape as the input. However, this mode doesn't support any\n stride values other than 1.\n Warning:\n For \"padding='same'\", if the \"weight\" is even-length and\n \"dilation\" is odd in any dimension, a full \"pad()\" operation\n may be needed internally, lowering performance.\n * dilation -- the spacing between kernel elements. Can be a\n single number or a tuple (dH, dW). Default: 1", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html", "category": "pytorch docs"}
{"text": "\ngroups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n Examples:\n >>> # With square kernels and equal stride\n >>> filters = torch.randn(8, 4, 3, 3)\n >>> inputs = torch.randn(1, 4, 5, 5)\n >>> F.conv2d(inputs, filters, padding=1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html", "category": "pytorch docs"}
{"text": "torch.func.vmaptorch.func.vmap(func, in_dims=0, out_dims=0, randomness='error', *, chunk_size=None)\n vmap is the vectorizing map; \"vmap(func)\" returns a new function\n that maps \"func\" over some dimension of the inputs. Semantically,\n vmap pushes the map into PyTorch operations called by \"func\",\n effectively vectorizing those operations.\n vmap is useful for handling batch dimensions: one can write a\n function \"func\" that runs on examples and then lift it to a\n function that can take batches of examples with \"vmap(func)\". vmap\n can also be used to compute batched gradients when composed with\n autograd.\n Note:\n \"torch.vmap()\" is aliased to \"torch.func.vmap()\" for convenience.\n Use whichever one you'd like.\n Parameters:\n * func (function) -- A Python function that takes one or\n more arguments. Must return one or more Tensors.\n * in_dims (int or nested structure) -- Specifies which", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"}
{"text": "dimension of the inputs should be mapped over. \"in_dims\"\n should have a structure like the inputs. If the \"in_dim\" for a\n particular input is None, then that indicates there is no map\n dimension. Default: 0.\n * out_dims (int or Tuple[int]) -- Specifies\n where the mapped dimension should appear in the outputs. If\n \"out_dims\" is a Tuple, then it should have one element per\n output. Default: 0.\n * randomness (str) -- Specifies whether the randomness in\n this vmap should be the same or different across batches. If\n 'different', the randomness for each batch will be different.\n If 'same', the randomness will be the same across batches. If\n 'error', any calls to random functions will error. Default:\n 'error'. WARNING: this flag only applies to random PyTorch\n operations and does not apply to Python's random module or\n numpy randomness.", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"}
{"text": "numpy randomness.\n * chunk_size (None or int) -- If None (default), apply\n a single vmap over inputs. If not None, then compute the vmap\n \"chunk_size\" samples at a time. Note that \"chunk_size=1\" is\n equivalent to computing the vmap with a for-loop. If you run\n into memory issues computing the vmap, please try a non-None\n chunk_size.\n Returns:\n Returns a new \"batched\" function. It takes the same inputs as\n \"func\", except each input has an extra dimension at the index\n specified by \"in_dims\". It returns the same outputs as\n \"func\", except each output has an extra dimension at the index\n specified by \"out_dims\".\n Return type:\n Callable\n One example of using \"vmap()\" is to compute batched dot products.\n PyTorch doesn't provide a batched \"torch.dot\" API; instead of\n unsuccessfully rummaging through docs, use \"vmap()\" to construct a\n new function.\n\n\n\ntorch.dot # [D], [D] -> []\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"}
{"text": "\n\n\nbatched_dot = torch.func.vmap(torch.dot) # [N, D], [N, D] -> [N]\nx, y = torch.randn(2, 5), torch.randn(2, 5)\nbatched_dot(x, y)\n \"vmap()\" can be helpful in hiding batch dimensions, leading to a\n simpler model authoring experience.\nbatch_size, feature_size = 3, 5\nweights = torch.randn(feature_size, requires_grad=True)\ndef model(feature_vec):\n # Very simple linear model with activation\n return feature_vec.dot(weights).relu()\nexamples = torch.randn(batch_size, feature_size)\nresult = torch.vmap(model)(examples)\n \"vmap()\" can also help vectorize computations that were previously\n difficult or impossible to batch. One example is higher-order\n gradient computation. The PyTorch autograd engine computes vjps\n (vector-Jacobian products). Computing a full Jacobian matrix for\n some function f: R^N -> R^N usually requires N calls to\n \"autograd.grad\", one per Jacobian row. Using \"vmap()\", we can\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"}
{"text": "vectorize the whole computation, computing the Jacobian in a single\n call to \"autograd.grad\".\n\n\n\n# Setup\nN = 5\nf = lambda x: x ** 2\nx = torch.randn(N, requires_grad=True)\ny = f(x)\nI_N = torch.eye(N)\n# Sequential approach\njacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]\n for v in I_N.unbind()]\njacobian = torch.stack(jacobian_rows)\n# Vectorized gradient computation\ndef get_vjp(v):\n return torch.autograd.grad(y, x, v)\njacobian = torch.vmap(get_vjp)(I_N)\n \"vmap()\" can also be nested, producing an output with multiple\n batched dimensions\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.vmap(torch.vmap(torch.dot)) # [N1, N0, D], [N1, N0, D] -> [N1, N0]\nx, y = torch.randn(2, 3, 5), torch.randn(2, 3, 5)\nbatched_dot(x, y) # tensor of size [2, 3]\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"}
{"text": "\n\n\nbatched_dot(x, y) # tensor of size [2, 3]\n If the inputs are not batched along the first dimension, \"in_dims\"\n specifies the dimension that each input is batched along, as in\ntorch.dot # [N], [N] -> []\nbatched_dot = torch.vmap(torch.dot, in_dims=1) # [N, D], [N, D] -> [D]\nx, y = torch.randn(2, 5), torch.randn(2, 5)\nbatched_dot(x, y) # output is [5] instead of [2] if batched along the 0th dimension\n If there are multiple inputs each of which is batched along\n different dimensions, \"in_dims\" must be a tuple with the batch\n dimension for each input, as in\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.vmap(torch.dot, in_dims=(0, None)) # [N, D], [D] -> [N]\nx, y = torch.randn(2, 5), torch.randn(5)\nbatched_dot(x, y) # second arg doesn't have a batch dim because in_dim[1] was None\n If the input is a Python struct, \"in_dims\" must be a tuple\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"}
{"text": "containing a struct matching the shape of the input:\n\n\n\nf = lambda dict: torch.dot(dict['x'], dict['y'])\nx, y = torch.randn(2, 5), torch.randn(5)\ninput = {'x': x, 'y': y}\nbatched_dot = torch.vmap(f, in_dims=({'x': 0, 'y': None},))\nbatched_dot(input)\n By default, the output is batched along the first dimension.\n However, it can be batched along any dimension by using \"out_dims\"\nf = lambda x: x ** 2\nx = torch.randn(2, 5)\nbatched_pow = torch.vmap(f, out_dims=1)\nbatched_pow(x) # [5, 2]\n For any function that uses kwargs, the returned function will not\n batch the kwargs but will accept kwargs\nx = torch.randn([2, 5])\ndef fn(x, scale=4.):\n return x * scale\nbatched_pow = torch.vmap(fn)\nassert torch.allclose(batched_pow(x), x * 4)\nbatched_pow(x, scale=x) # scale is not batched, output has shape [2, 2, 5]\n Note:\n vmap does not provide general autobatching or handle variable-\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"}
{"text": "length sequences out of the box.", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bernoulli_Tensor.bernoulli_(p=0.5, *, generator=None) -> Tensor\n Fills each location of \"self\" with an independent sample from\n \\text{Bernoulli}(\\texttt{p}). \"self\" can have integral \"dtype\".\n \"p\" should either be a scalar or tensor containing probabilities to\n be used for drawing the binary random number.\n If it is a tensor, the \\text{i}^{th} element of \"self\" tensor will\n be set to a value sampled from\n \\text{Bernoulli}(\\texttt{p_tensor[i]}). In this case \"p\" must have\n floating point \"dtype\".\n See also \"bernoulli()\" and \"torch.bernoulli()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bernoulli_.html", "category": "pytorch docs"}
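A minimal sketch of both call forms described above, with a scalar "p" and with a per-element probability tensor (shapes chosen for illustration):

```python
import torch

# Scalar p: every element is 1 with probability 0.5.
# "self" may have integral dtype, as the docs note.
t = torch.empty(4, 4, dtype=torch.int64)
t.bernoulli_(p=0.5)

# Tensor p: element i is 1 with probability p[i]; p must be
# floating point and match self's shape. With p = [0, 1] the
# result is deterministic.
p = torch.tensor([0.0, 1.0])
draws = torch.empty(2).bernoulli_(p)
```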
{"text": "torch.Tensor.is_metaTensor.is_meta\n Is \"True\" if the Tensor is a meta tensor, \"False\" otherwise. Meta\n tensors are like normal tensors, but they carry no data.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_meta.html", "category": "pytorch docs"}
{"text": "torch.jit.onednn_fusion_enabledtorch.jit.onednn_fusion_enabled()\n Returns whether onednn JIT fusion is enabled", "source": "https://pytorch.org/docs/stable/generated/torch.jit.onednn_fusion_enabled.html", "category": "pytorch docs"}
{"text": "torch.Tensor.absolute_Tensor.absolute_() -> Tensor\n In-place version of \"absolute()\" Alias for \"abs_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.absolute_.html", "category": "pytorch docs"}
{"text": "torch.logaddexp2torch.logaddexp2(input, other, *, out=None) -> Tensor\n Logarithm of the sum of exponentiations of the inputs in base-2.\n Calculates pointwise \\log_2\\left(2^x + 2^y\\right). See\n \"torch.logaddexp()\" for more details.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.logaddexp2.html", "category": "pytorch docs"}
{"text": "torch.cuda.memory_snapshottorch.cuda.memory_snapshot()\n Returns a snapshot of the CUDA memory allocator state across all\n devices.\n Interpreting the output of this function requires familiarity with\n the memory allocator internals.\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_snapshot.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sigmoidTensor.sigmoid() -> Tensor\n See \"torch.sigmoid()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sigmoid.html", "category": "pytorch docs"}
{"text": "LazyInstanceNorm2dclass torch.nn.LazyInstanceNorm2d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n A \"torch.nn.InstanceNorm2d\" module with lazy initialization of the\n \"num_features\" argument of the \"InstanceNorm2d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight, bias, running_mean and running_var.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * num_features -- C from an expected input of size (N, C, H,\n W) or (C, H, W)\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm2d.html", "category": "pytorch docs"}
{"text": "initialized the same way as done for batch normalization.\n Default: \"False\".\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n Shape:\n * Input: (N, C, H, W) or (C, H, W)\n * Output: (N, C, H, W) or (C, H, W) (same shape as input)\n cls_to_become\n alias of \"InstanceNorm2d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm2d.html", "category": "pytorch docs"}
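A minimal sketch of the lazy initialization described above (input shape chosen for illustration): "num_features" is never passed; it is inferred from the channel dimension of the first input, and the parameters materialize on the first forward call.

```python
import torch
import torch.nn as nn

m = nn.LazyInstanceNorm2d()      # num_features not specified yet
x = torch.randn(2, 3, 8, 8)      # (N, C, H, W) with C = 3
y = m(x)                         # first forward infers num_features = 3

# With the affine parameters enabled (as in the signature above),
# weight and bias now exist with shape (3,).
```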
{"text": "torch.isrealtorch.isreal(input) -> Tensor\n Returns a new tensor with boolean elements representing if each\n element of \"input\" is real-valued or not. All real-valued types are\n considered real. Complex values are considered real when their\n imaginary part is 0.\n Parameters:\n input (Tensor) -- the input tensor.\n Returns:\n A boolean tensor that is True where \"input\" is real and False\n elsewhere\n Example:\n >>> torch.isreal(torch.tensor([1, 1+1j, 2+0j]))\n tensor([True, False, True])", "source": "https://pytorch.org/docs/stable/generated/torch.isreal.html", "category": "pytorch docs"}
{"text": "TransformerEncoderLayerclass torch.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)\n TransformerEncoderLayer is made up of self-attn and feedforward\n network. This standard encoder layer is based on the paper\n \"Attention Is All You Need\". Ashish Vaswani, Noam Shazeer, Niki\n Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser,\n and Illia Polosukhin. 2017. Attention is all you need. In Advances\n in Neural Information Processing Systems, pages 6000-6010. Users\n may modify or implement in a different way during application.\n Parameters:\n * d_model (int) -- the number of expected features in the\n input (required).\n * nhead (int) -- the number of heads in the\n multiheadattention models (required).\n * dim_feedforward (int) -- the dimension of the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"}
{"text": "feedforward network model (default=2048).\n * dropout (float) -- the dropout value (default=0.1).\n * activation (Union[str,\n Callable[[Tensor], Tensor]]) -- the\n activation function of the intermediate layer, can be a string\n (\"relu\" or \"gelu\") or a unary callable. Default: relu\n * layer_norm_eps (float) -- the eps value in layer\n normalization components (default=1e-5).\n * batch_first (bool) -- If \"True\", then the input and\n output tensors are provided as (batch, seq, feature). Default:\n \"False\" (seq, batch, feature).\n * norm_first (bool) -- if \"True\", layer norm is done prior\n to attention and feedforward operations, respectively.\n Otherwise it's done after. Default: \"False\" (after).\n Examples::\n >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)\n >>> src = torch.rand(10, 32, 512)\n >>> out = encoder_layer(src)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"}
{"text": "\n\n\nout = encoder_layer(src)\n Alternatively, when \"batch_first\" is \"True\":\n >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)\n >>> src = torch.rand(32, 10, 512)\n >>> out = encoder_layer(src)\n Fast path:\n forward() will use a special optimized implementation if all of\n the following conditions are met:\n * Either autograd is disabled (using \"torch.inference_mode\" or\n \"torch.no_grad\") or no tensor argument \"requires_grad\"\n * training is disabled (using \".eval()\")\n * batch_first is \"True\" and the input is batched (i.e.,\n \"src.dim() == 3\")\n * activation is one of: \"\"relu\"\", \"\"gelu\"\",\n \"torch.functional.relu\", or \"torch.functional.gelu\"\n * at most one of \"src_mask\" and \"src_key_padding_mask\" is passed\n * if src is a NestedTensor, neither \"src_mask\" nor\n \"src_key_padding_mask\" is passed\n * the two \"LayerNorm\" instances have a consistent \"eps\" value\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"}
{"text": "(this will naturally be the case unless the caller has\n manually modified one without modifying the other)\n If the optimized implementation is in use, a NestedTensor can be\n passed for \"src\" to represent padding more efficiently than\n using a padding mask. In this case, a NestedTensor will be\n returned, and an additional speedup proportional to the fraction\n of the input that is padding can be expected.\n forward(src, src_mask=None, src_key_padding_mask=None, is_causal=False)\n Pass the input through the encoder layer.\n Parameters:\n * src (Tensor) -- the sequence to the encoder layer\n (required).\n * src_mask (Optional[Tensor]) -- the mask for the\n src sequence (optional).\n * is_causal (bool) -- If specified, applies a causal\n mask as src_mask. Mutually exclusive with providing\n src_mask. Default: \"False\".\n * src_key_padding_mask (Optional[Tensor]) -- the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"}
{"text": "mask for the src keys per batch (optional).\n Return type:\n Tensor\n Shape:\n see the docs in Transformer class.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"}
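A sketch of calling forward() with a "src_mask", using the causal mask helper on nn.Transformer (dimensions kept small for illustration):

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=16, nhead=2)
src = torch.rand(5, 3, 16)  # (seq, batch, feature): batch_first=False

# Causal mask: position i may attend only to positions <= i.
mask = nn.Transformer.generate_square_subsequent_mask(5)
out = layer(src, src_mask=mask)
```

The output shape matches the input shape, as described in the Transformer class docs.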
{"text": "MaxUnpool3dclass torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0)\n Computes a partial inverse of \"MaxPool3d\".\n \"MaxPool3d\" is not fully invertible, since the non-maximal values\n are lost. \"MaxUnpool3d\" takes in as input the output of \"MaxPool3d\"\n including the indices of the maximal values and computes a partial\n inverse in which all non-maximal values are set to zero.\n Note:\n \"MaxPool3d\" can map several input sizes to the same output sizes.\n Hence, the inversion process can get ambiguous. To accommodate\n this, you can provide the needed output size as an additional\n argument \"output_size\" in the forward call. See the Inputs\n section below.\n Parameters:\n * kernel_size (int or tuple) -- Size of the max\n pooling window.\n * stride (int or tuple) -- Stride of the max pooling\n window. It is set to \"kernel_size\" by default.\n * padding (int or tuple) -- Padding that was added to\n the input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html", "category": "pytorch docs"}
{"text": "the input\n Inputs:\n * input: the input Tensor to invert\n * indices: the indices given out by \"MaxPool3d\"\n * output_size (optional): the targeted output size\n Shape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n D_{out} = (D_{in} - 1) \\times \\text{stride[0]} - 2 \\times\n \\text{padding[0]} + \\text{kernel_size[0]}\n H_{out} = (H_{in} - 1) \\times \\text{stride[1]} - 2 \\times\n \\text{padding[1]} + \\text{kernel_size[1]}\n W_{out} = (W_{in} - 1) \\times \\text{stride[2]} - 2 \\times\n \\text{padding[2]} + \\text{kernel_size[2]}\n or as given by \"output_size\" in the call operator\n Example:\n >>> # pool of square window of size=3, stride=2\n >>> pool = nn.MaxPool3d(3, stride=2, return_indices=True)\n >>> unpool = nn.MaxUnpool3d(3, stride=2)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html", "category": "pytorch docs"}
{"text": "\n\n\nunpool = nn.MaxUnpool3d(3, stride=2)\n >>> output, indices = pool(torch.randn(20, 16, 51, 33, 15))\n >>> unpooled_output = unpool(output, indices)\n >>> unpooled_output.size()\n torch.Size([20, 16, 51, 33, 15])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_leafTensor.is_leaf\n All Tensors that have \"requires_grad\" which is \"False\" will be leaf\n Tensors by convention.\n For Tensors that have \"requires_grad\" which is \"True\", they will be\n leaf Tensors if they were created by the user. This means that they\n are not the result of an operation and so \"grad_fn\" is None.\n Only leaf Tensors will have their \"grad\" populated during a call to\n \"backward()\". To get \"grad\" populated for non-leaf Tensors, you can\n use \"retain_grad()\".\n Example:\n >>> a = torch.rand(10, requires_grad=True)\n >>> a.is_leaf\n True\n >>> b = torch.rand(10, requires_grad=True).cuda()\n >>> b.is_leaf\n False\n # b was created by the operation that cast a cpu Tensor into a cuda Tensor\n >>> c = torch.rand(10, requires_grad=True) + 2\n >>> c.is_leaf\n False\n # c was created by the addition operation\n >>> d = torch.rand(10).cuda()\n >>> d.is_leaf\n True", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_leaf.html", "category": "pytorch docs"}
{"text": "\n\n\nd.is_leaf\n True\n # d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)\n >>> e = torch.rand(10).cuda().requires_grad_()\n >>> e.is_leaf\n True\n # e requires gradients and has no operations creating it\n >>> f = torch.rand(10, requires_grad=True, device=\"cuda\")\n >>> f.is_leaf\n True\n # f requires grad, has no operation creating it\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_leaf.html", "category": "pytorch docs"}
{"text": "torch.jit.waittorch.jit.wait(future)\n Forces completion of a torch.jit.Future[T] asynchronous task,\n returning the result of the task. See \"fork()\" for docs and\n examples.\n Parameters:\n future (torch.jit.Future[T]) -- an asynchronous task\n reference, created through torch.jit.fork\n Returns:\n the return value of the completed task\n Return type:\n T", "source": "https://pytorch.org/docs/stable/generated/torch.jit.wait.html", "category": "pytorch docs"}
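A minimal fork/wait sketch for the entry above (the forked function and operands are arbitrary illustrations):

```python
import torch

def add(a, b):
    return a + b

x = torch.ones(3)
y = torch.full((3,), 2.0)

# fork launches the task and immediately returns a Future;
# wait blocks until the task completes and returns its result.
fut = torch.jit.fork(add, x, y)
result = torch.jit.wait(fut)
```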
{"text": "torch.Tensor.scatter_addTensor.scatter_add(dim, index, src) -> Tensor\n Out-of-place version of \"torch.Tensor.scatter_add_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add.html", "category": "pytorch docs"}
{"text": "torch.Tensor.reshapeTensor.reshape(shape) -> Tensor\n Returns a tensor with the same data and number of elements as\n \"self\" but with the specified shape. This method returns a view if\n \"shape\" is compatible with the current shape. See\n \"torch.Tensor.view()\" on when it is possible to return a view.\n See \"torch.reshape()\"\n Parameters:\n shape (tuple of ints or int...) -- the desired shape", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.reshape.html", "category": "pytorch docs"}
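A minimal sketch of the view-versus-copy behavior described above: a contiguous tensor reshapes to a view sharing storage, while a non-contiguous tensor forces a copy.

```python
import torch

x = torch.arange(6)

# Contiguous input: reshape returns a view of the same storage.
y = x.reshape(2, 3)
same_storage = y.data_ptr() == x.data_ptr()   # True

# Non-contiguous input (after transpose): reshape must copy.
z = y.t().reshape(6)
copied = z.data_ptr() != y.data_ptr()         # True
```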
{"text": "ObserverBaseclass torch.quantization.observer.ObserverBase(dtype)\n Base observer Module. Any observer implementation should derive\n from this class.\n Concrete observers should follow the same API. In forward, they\n will update the statistics of the observed Tensor. And they should\n provide a calculate_qparams function that computes the\n quantization parameters given the collected statistics.\n Parameters:\n dtype -- dtype argument to the quantize node needed to\n implement the reference model spec.\n classmethod with_args(**kwargs)\n Wrapper that allows creation of class factories.\n This can be useful when there is a need to create classes with\n the same constructor arguments, but different instances. Can be\n used in conjunction with _callable_args\n Example:\n >>> Foo.with_args = classmethod(_with_args)\n >>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)\n >>> foo_instance1 = foo_builder()", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.ObserverBase.html", "category": "pytorch docs"}
{"text": "\n\n\nfoo_instance1 = foo_builder()\n >>> foo_instance2 = foo_builder()\n >>> id(foo_instance1) == id(foo_instance2)\n False\n classmethod with_callable_args(**kwargs)\n Wrapper that allows creation of class factories args that need\n to be called at construction time.\n This can be useful when there is a need to create classes with\n the same constructor arguments, but different instances and\n those arguments should only be calculated at construction time.\n Can be used in conjunction with _with_args\n Example:\n >>> Foo.with_callable_args = classmethod(_with_callable_args)\n >>> Foo.with_args = classmethod(_with_args)\n >>> foo_builder = Foo.with_callable_args(cur_time=get_time_func).with_args(name=\"dan\")\n >>> foo_instance1 = foo_builder()\n >>> # wait 50\n >>> foo_instance2 = foo_builder()\n >>> id(foo_instance1.creation_time) == id(foo_instance2.creation_time)\n False\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.ObserverBase.html", "category": "pytorch docs"}
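The factory pattern above can be exercised with a concrete built-in observer such as MinMaxObserver (a sketch; the exact qparams depend on the data observed):

```python
import torch
from torch.ao.quantization.observer import MinMaxObserver

# Build a factory preconfigured with constructor arguments, then
# instantiate independent observers from it.
obs_builder = MinMaxObserver.with_args(dtype=torch.quint8)
obs1 = obs_builder()
obs2 = obs_builder()

# In forward, observers update their statistics; calculate_qparams
# then derives quantization parameters from those statistics.
obs1(torch.tensor([-1.0, 0.0, 2.0]))
scale, zero_point = obs1.calculate_qparams()
```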
{"text": "torch.Tensor.igamma_Tensor.igamma_(other) -> Tensor\n In-place version of \"igamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.igamma_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.log10Tensor.log10() -> Tensor\n See \"torch.log10()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log10.html", "category": "pytorch docs"}
{"text": "torch.cuda.can_device_access_peertorch.cuda.can_device_access_peer(device, peer_device)\n Checks if peer access between two devices is possible.\n Return type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.can_device_access_peer.html", "category": "pytorch docs"}
{"text": "torch.linalg.dettorch.linalg.det(A, *, out=None) -> Tensor\n Computes the determinant of a square matrix.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n See also:\n \"torch.linalg.slogdet()\" computes the sign and natural logarithm\n of the absolute value of the determinant of square matrices.\n Parameters:\n A (Tensor) -- tensor of shape (*, n, n) where * is\n zero or more batch dimensions.\n Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Examples:\n >>> A = torch.randn(3, 3)\n >>> torch.linalg.det(A)\n tensor(0.0934)\n >>> A = torch.randn(3, 2, 2)\n >>> torch.linalg.det(A)\n tensor([1.1990, 0.4099, 0.7386])", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.det.html", "category": "pytorch docs"}
{"text": "TripletMarginWithDistanceLossclass torch.nn.TripletMarginWithDistanceLoss(*, distance_function=None, margin=1.0, swap=False, reduction='mean')\n Creates a criterion that measures the triplet loss given input\n tensors a, p, and n (representing anchor, positive, and negative\n examples, respectively), and a nonnegative, real-valued function\n (\"distance function\") used to compute the relationship between the\n anchor and positive example (\"positive distance\") and the anchor\n and negative example (\"negative distance\").\n The unreduced loss (i.e., with \"reduction\" set to \"'none'\") can be\n described as:\n \\ell(a, p, n) = L = {l_1,\\dots,l_N}^\\top, \\quad l_i = \\max\n {d(a_i, p_i) - d(a_i, n_i) + {\\rm margin}, 0}\n where N is the batch size; d is a nonnegative, real-valued function\n quantifying the closeness of two tensors, referred to as the\n \"distance_function\"; and margin is a nonnegative margin", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"}
{"text": "representing the minimum difference between the positive and\n negative distances that is required for the loss to be 0. The\n input tensors have N elements each and can be of any shape that the\n distance function can handle.\n If \"reduction\" is not \"'none'\" (default \"'mean'\"), then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{sum'.}\n \\end{cases}\n See also \"TripletMarginLoss\", which computes the triplet loss for\n input tensors using the l_p distance as the distance function.\n Parameters:\n * distance_function (Callable, optional) -- A\n nonnegative, real-valued function that quantifies the\n closeness of two tensors. If not specified,\n nn.PairwiseDistance will be used. Default: \"None\"\n * margin (float, optional) -- A nonnegative margin\n representing the minimum difference between the positive and", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"}
{"text": "negative distances required for the loss to be 0. Larger\n margins penalize cases where the negative examples are not\n distant enough from the anchors, relative to the positives.\n Default: 1.\n * swap (bool, optional) -- Whether to use the distance\n swap described in the paper Learning shallow convolutional\n feature descriptors with triplet losses by V. Balntas, E.\n Riba et al. If True, and if the positive example is closer to\n the negative example than the anchor is, swaps the positive\n example and the anchor in the loss computation. Default:\n \"False\".\n * reduction (str, optional) -- Specifies the\n (optional) reduction to apply to the output: \"'none'\" |\n \"'mean'\" | \"'sum'\". \"'none'\": no reduction will be applied,\n \"'mean'\": the sum of the output will be divided by the number\n of elements in the output, \"'sum'\": the output will be summed.\n Default: \"'mean'\"\n Shape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"}
{"text": "Default: \"'mean'\"\n Shape:\n * Input: (N, *) where * represents any number of additional\n dimensions as supported by the distance function.\n * Output: A Tensor of shape (N) if \"reduction\" is \"'none'\", or a\n scalar otherwise.\n Examples:\n >>> # Initialize embeddings\n >>> embedding = nn.Embedding(1000, 128)\n >>> anchor_ids = torch.randint(0, 1000, (1,))\n >>> positive_ids = torch.randint(0, 1000, (1,))\n >>> negative_ids = torch.randint(0, 1000, (1,))\n >>> anchor = embedding(anchor_ids)\n >>> positive = embedding(positive_ids)\n >>> negative = embedding(negative_ids)\n >>>\n >>> # Built-in Distance Function\n >>> triplet_loss = \\\n >>> nn.TripletMarginWithDistanceLoss(distance_function=nn.PairwiseDistance())\n >>> output = triplet_loss(anchor, positive, negative)\n >>> output.backward()\n >>>\n >>> # Custom Distance Function\n >>> def l_infinity(x1, x2):", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"}
{"text": "\n\n\ndef l_infinity(x1, x2):\n >>> return torch.max(torch.abs(x1 - x2), dim=1).values\n >>>\n >>> triplet_loss = (\n >>> nn.TripletMarginWithDistanceLoss(distance_function=l_infinity, margin=1.5))\n >>> output = triplet_loss(anchor, positive, negative)\n >>> output.backward()\n >>>\n >>> # Custom Distance Function (Lambda)\n >>> triplet_loss = (\n >>> nn.TripletMarginWithDistanceLoss(\n >>> distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y)))\n >>> output = triplet_loss(anchor, positive, negative)\n >>> output.backward()\n Reference:\n V. Balntas, et al.: Learning shallow convolutional feature\n descriptors with triplet losses:\n http://www.bmva.org/bmvc/2016/papers/paper119/index.html\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"}
{"text": "torch.rot90torch.rot90(input, k=1, dims=[0, 1]) -> Tensor\n Rotate an n-D tensor by 90 degrees in the plane specified by dims\n axis. Rotation direction is from the first towards the second axis\n if k > 0, and from the second towards the first for k < 0.\n Parameters:\n * input (Tensor) -- the input tensor.\n * k (int) -- number of times to rotate. Default value is 1\n * dims (a list or tuple) -- axis to rotate. Default\n value is [0, 1]\n Example:\n >>> x = torch.arange(4).view(2, 2)\n >>> x\n tensor([[0, 1],\n [2, 3]])\n >>> torch.rot90(x, 1, [0, 1])\n tensor([[1, 3],\n [0, 2]])\n >>> x = torch.arange(8).view(2, 2, 2)\n >>> x\n tensor([[[0, 1],\n [2, 3]],\n [[4, 5],\n [6, 7]]])\n >>> torch.rot90(x, 1, [1, 2])\n tensor([[[1, 3],\n [0, 2]],\n [[5, 7],\n [4, 6]]])", "source": "https://pytorch.org/docs/stable/generated/torch.rot90.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.prune.global_unstructuredtorch.nn.utils.prune.global_unstructured(parameters, pruning_method, importance_scores=None, kwargs)\n Globally prunes tensors corresponding to all parameters in\n \"parameters\" by applying the specified \"pruning_method\". Modifies\n modules in place by:\n 1. adding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n 2. replacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n Parameters:\n * parameters (*Iterable of (module, name)\n *tuples) -- parameters of the model to prune in a global\n fashion, i.e. by aggregating all weights prior to deciding\n which ones to prune. module must be of type \"nn.Module\", and\n name must be a string.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html", "category": "pytorch docs"}
{"text": "name must be a string.\n * pruning_method (function) -- a valid pruning function\n from this module, or a custom one implemented by the user that\n satisfies the implementation guidelines and has\n \"PRUNING_TYPE='unstructured'\".\n * importance_scores (dict) -- a dictionary mapping\n (module, name) tuples to the corresponding parameter's\n importance scores tensor. The tensor should be the same shape\n as the parameter, and is used for computing mask for pruning.\n If unspecified or None, the parameter will be used in place of\n its importance scores.\n * kwargs -- other keyword arguments such as: amount (int or\n float): quantity of parameters to prune across the specified\n parameters. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\n Raises:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html", "category": "pytorch docs"}
{"text": "Raises:\n TypeError -- if \"PRUNING_TYPE != 'unstructured'\"\n Note:\n Since global structured pruning doesn't make much sense unless\n the norm is normalized by the size of the parameter, we now limit\n the scope of global pruning to unstructured methods.\n -[ Examples ]-\n\n\n\nfrom torch.nn.utils import prune\nfrom collections import OrderedDict\nnet = nn.Sequential(OrderedDict([\n ... ('first', nn.Linear(10, 4)),\n ... ('second', nn.Linear(4, 1)),\n ... ]))\nparameters_to_prune = (\n ... (net.first, 'weight'),\n ... (net.second, 'weight'),\n ... )\nprune.global_unstructured(\n ... parameters_to_prune,\n ... pruning_method=prune.L1Unstructured,\n ... amount=10,\n ... )\nprint(sum(torch.nn.utils.parameters_to_vector(net.buffers()) == 0))\n tensor(10)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html", "category": "pytorch docs"}
{"text": "TransformerDecoderLayerclass torch.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)\n TransformerDecoderLayer is made up of self-attn, multi-head-attn\n and feedforward network. This standard decoder layer is based on\n the paper \"Attention Is All You Need\". Ashish Vaswani, Noam\n Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,\n Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you\n need. In Advances in Neural Information Processing Systems, pages\n 6000-6010. Users may modify or implement in a different way during\n application.\n Parameters:\n * d_model (int) -- the number of expected features in the\n input (required).\n * nhead (int) -- the number of heads in the\n multiheadattention models (required).\n * dim_feedforward (int) -- the dimension of the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html", "category": "pytorch docs"}
{"text": "feedforward network model (default=2048).\n * dropout (float) -- the dropout value (default=0.1).\n * activation (Union[str,\n Callable[[Tensor], Tensor]]) -- the\n activation function of the intermediate layer, can be a string\n (\"relu\" or \"gelu\") or a unary callable. Default: relu\n * layer_norm_eps (float) -- the eps value in layer\n normalization components (default=1e-5).\n * batch_first (bool) -- If \"True\", then the input and\n output tensors are provided as (batch, seq, feature). Default:\n \"False\" (seq, batch, feature).\n * norm_first (bool) -- if \"True\", layer norm is done prior\n to self attention, multihead attention and feedforward\n operations, respectively. Otherwise it's done after. Default:\n \"False\" (after).\n Examples::\n >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)\n >>> memory = torch.rand(10, 32, 512)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html", "category": "pytorch docs"}
{"text": "\n\n\nmemory = torch.rand(10, 32, 512)\n >>> tgt = torch.rand(20, 32, 512)\n >>> out = decoder_layer(tgt, memory)\n Alternatively, when \"batch_first\" is \"True\":\n >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True)\n >>> memory = torch.rand(32, 10, 512)\n >>> tgt = torch.rand(32, 20, 512)\n >>> out = decoder_layer(tgt, memory)\n forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, tgt_is_causal=False, memory_is_causal=False)\n Pass the inputs (and mask) through the decoder layer.\n Parameters:\n * tgt (Tensor) -- the sequence to the decoder layer\n (required).\n * memory (Tensor) -- the sequence from the last layer\n of the encoder (required).\n * tgt_mask (Optional[Tensor]) -- the mask for the\n tgt sequence (optional).\n * memory_mask (Optional[Tensor]) -- the mask for\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html", "category": "pytorch docs"}
{"text": "the memory sequence (optional).\n * tgt_key_padding_mask (Optional[Tensor]) -- the\n mask for the tgt keys per batch (optional).\n * memory_key_padding_mask (Optional[Tensor]) --\n the mask for the memory keys per batch (optional).\n * tgt_is_causal (bool) -- If specified, applies a\n causal mask as tgt mask. Mutually exclusive with providing\n tgt_mask. Default: \"False\".\n * memory_is_causal (bool) -- If specified, applies a\n causal mask as tgt mask. Mutually exclusive with providing\n memory_mask. Default: \"False\".\n Return type:\n Tensor\n Shape:\n see the docs in Transformer class.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html", "category": "pytorch docs"}
{"text": "torch.randint_liketorch.randint_like(input, low=0, high, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\n Returns a tensor with the same shape as Tensor \"input\" filled with\n random integers generated uniformly between \"low\" (inclusive) and\n \"high\" (exclusive).\n Parameters:\n * input (Tensor) -- the size of \"input\" will determine\n size of the output tensor.\n * low (int, optional) -- Lowest integer to be drawn\n from the distribution. Default: 0.\n * high (int) -- One above the highest integer to be drawn\n from the distribution.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of", "source": "https://pytorch.org/docs/stable/generated/torch.randint_like.html", "category": "pytorch docs"}
{"text": "\"input\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.randint_like.html", "category": "pytorch docs"}
{"text": "torch.Tensor.masked_selectTensor.masked_select(mask) -> Tensor\n See \"torch.masked_select()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_select.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bernoulliTensor.bernoulli(*, generator=None) -> Tensor\n Returns a result tensor where each \\texttt{result[i]} is\n independently sampled from \\text{Bernoulli}(\\texttt{self[i]}).\n \"self\" must have floating point \"dtype\", and the result will have\n the same \"dtype\".\n See \"torch.bernoulli()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bernoulli.html", "category": "pytorch docs"}
{"text": "torch.fft.fftfreqtorch.fft.fftfreq(n, d=1.0, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Computes the discrete Fourier Transform sample frequencies for a\n signal of size \"n\".\n Note:\n By convention, \"fft()\" returns positive frequency terms first,\n followed by the negative frequencies in reverse order, so that\n \"f[-i]\" for all 0 < i \\leq n/2` in Python gives the negative\n frequency terms. For an FFT of length \"n\" and with inputs spaced\n in length unit \"d\", the frequencies are:\n f = [0, 1, ..., (n - 1) // 2, -(n // 2), ..., -1] / (d * n)\n Note:\n For even lengths, the Nyquist frequency at \"f[n/2]\" can be\n thought of as either negative or positive. \"fftfreq()\" follows\n NumPy's convention of taking it to be negative.\n Parameters:\n * n (int) -- the FFT length\n * d (float, optional*) -- The sampling length scale.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html", "category": "pytorch docs"}
{"text": "The spacing between individual samples of the FFT input. The\n default assumes unit spacing, dividing that result by the\n actual spacing gives the result in physical frequency units.\n Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html", "category": "pytorch docs"}
{"text": "record operations on the returned tensor. Default: \"False\".\n -[ Example ]-\n\n\n\ntorch.fft.fftfreq(5)\n tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])\n For even input, we can see the Nyquist frequency at \"f[2]\" is given\n as negative:\ntorch.fft.fftfreq(4)\n tensor([ 0.0000, 0.2500, -0.5000, -0.2500])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html", "category": "pytorch docs"}
{"text": "torch.Tensor.broadcast_toTensor.broadcast_to(shape) -> Tensor\n See \"torch.broadcast_to()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.broadcast_to.html", "category": "pytorch docs"}
{"text": "torch.cuda.nvtx.range_pushtorch.cuda.nvtx.range_push(msg)\n Pushes a range onto a stack of nested range span. Returns zero-\n based depth of the range that is started.\n Parameters:\n msg (str) -- ASCII message to associate with range", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.range_push.html", "category": "pytorch docs"}
{"text": "GELUclass torch.nn.GELU(approximate='none')\n Applies the Gaussian Error Linear Units function:\n \\text{GELU}(x) = x * \\Phi(x)\n where \\Phi(x) is the Cumulative Distribution Function for Gaussian\n Distribution.\n When the approximate argument is 'tanh', Gelu is estimated with:\n \\text{GELU}(x) = 0.5 * x * (1 + \\text{Tanh}(\\sqrt(2 / \\pi) * (x\n + 0.044715 * x^3)))\n Parameters:\n approximate (str, optional) -- the gelu approximation\n algorithm to use: \"'none'\" | \"'tanh'\". Default: \"'none'\"\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.GELU()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GELU.html", "category": "pytorch docs"}
{"text": "torch.func.functionalizetorch.func.functionalize(func, , remove='mutations')\n functionalize is a transform that can be used to remove\n (intermediate) mutations and aliasing from a function, while\n preserving the function's semantics.\n \"functionalize(func)\" returns a new function with the same\n semantics as \"func\", but with all intermediate mutations removed.\n Every inplace operation performed on an intermediate tensor:\n \"intermediate.foo_()\" gets replaced by its out-of-place equivalent:\n \"intermediate_updated = intermediate.foo()\".\n functionalize is useful for shipping a pytorch program off to\n backends or compilers that aren't able to easily represent\n mutations or aliasing operators.\n Parameters:\n * func (Callable) -- A Python function that takes one or\n more arguments.\n * remove (str*) -- An optional string argument, that takes\n on either the value 'mutations' or 'mutations_and_views'. If", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"}
{"text": "'mutations' is passed in then all mutating operators will be\n replaced with their non-mutating equivalents. If\n 'mutations_and_views' is passed in, then additionally, all\n aliasing operators will be replaced with their non-aliasing\n equivalents. Default: 'mutations'.\n Returns:\n Returns a new \"functionalized\" function. It takes the same\n inputs as \"func\", and has the same behavior, but any mutations\n (and optionally aliasing) performed on intermeidate tensors in\n the function will be removed.\n Return type:\n Callable\n functionalize will also remove mutations (and views) that were\n performed on function inputs. However to preserve semantics,\n functionalize will \"fix up\" the mutations after the transform has\n finished running, by detecting if any tensor inputs \"should have\"\n been mutated, and copying the new data back to the inputs if\n necessary.\n Example:\n >>> import torch", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"}
{"text": "necessary.\n Example:\n >>> import torch\n >>> from torch.fx.experimental.proxy_tensor import make_fx\n >>> from torch.func import functionalize\n >>>\n >>> # A function that uses mutations and views, but only on intermediate tensors.\n >>> def f(a):\n ... b = a + 1\n ... c = b.view(-1)\n ... c.add_(1)\n ... return b\n ...\n >>> inpt = torch.randn(2)\n >>>\n >>> out1 = f(inpt)\n >>> out2 = functionalize(f)(inpt)\n >>>\n >>> # semantics are the same (outputs are equivalent)\n >>> print(torch.allclose(out1, out2))\n True\n >>>\n >>> f_traced = make_fx(f)(inpt)\n >>> f_no_mutations_traced = make_fx(functionalize(f))(inpt)\n >>> f_no_mutations_and_views_traced = make_fx(functionalize(f, remove='mutations_and_views'))(inpt)\n >>>\n >>> print(f_traced.code)\n def forward(self, a_1):\n add = torch.ops.aten.add(a_1, 1); a_1 = None\n view = torch.ops.aten.view(add, [-1])", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"}
{"text": "view = torch.ops.aten.view(add, [-1])\n add_ = torch.ops.aten.add_(view, 1); view = None\n return add\n >>> print(f_no_mutations_traced.code)\n def forward(self, a_1):\n add = torch.ops.aten.add(a_1, 1); a_1 = None\n view = torch.ops.aten.view(add, [-1]); add = None\n add_1 = torch.ops.aten.add(view, 1); view = None\n view_1 = torch.ops.aten.view(add_1, [2]); add_1 = None\n return view_1\n >>> print(f_no_mutations_and_views_traced.code)\n def forward(self, a_1):\n add = torch.ops.aten.add(a_1, 1); a_1 = None\n view_copy = torch.ops.aten.view_copy(add, [-1]); add = None\n add_1 = torch.ops.aten.add(view_copy, 1); view_copy = None\n view_copy_1 = torch.ops.aten.view_copy(add_1, [2]); add_1 = None\n return view_copy_1\n >>> # A function that mutates its input tensor\n >>> def f(a):\n ... b = a.view(-1)\n ... b.add_(1)\n ... return a\n ...", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"}
{"text": "... return a\n ...\n >>> f_no_mutations_and_views_traced = make_fx(functionalize(f, remove='mutations_and_views'))(inpt)\n >>> #\n >>> # All mutations and views have been removed,\n >>> # but there is an extra copy_ in the graph to correctly apply the mutation to the input\n >>> # after the function has completed.\n >>> print(f_no_mutations_and_views_traced.code)\n def forward(self, a_1):\n view_copy = torch.ops.aten.view_copy(a_1, [-1])\n add = torch.ops.aten.add(view_copy, 1); view_copy = None\n view_copy_1 = torch.ops.aten.view_copy(add, [2]); add = None\n copy_ = torch.ops.aten.copy_(a_1, view_copy_1); a_1 = None\n return view_copy_1\n There are a few \"failure modes\" for functionalize that are worth\n calling out:\n 1. Like other torch.func transforms, functionalize() doesn't\n work with functions that directly use .backward(). The same", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"}
{"text": "is true for torch.autograd.grad. If you want to use autograd,\n you can compute gradients directly with\n functionalize(grad(f)).\n 2. Like other torch.func transforms, functionalize() doesn't\n work with global state. If you call functionalize(f) on a\n function that takes views / mutations of non-local state,\n functionalization will simply no-op and pass the\n view/mutation calls directly to the backend. One way to work\n around this is is to ensure that any non-local state creation\n is wrapped into a larger function, which you then call\n functionalize on.\n 3. resize_() has some limitations: functionalize will only\n work on programs that use resize_()` as long as the tensor\n being resized is not a view.\n 4. as_strided() has some limitations: functionalize will not\n work on as_strided() calls that result in tensors with\n overlapping memory.", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"}
{"text": "overlapping memory.\n Finally, a helpful mental model for understanding functionalization\n is that most user pytorch programs are writting with the public\n torch API. When executed, torch operators are generally decomposed\n into our internal C++ \"ATen\" API. The logic for functionalization\n happens entirely at the level of ATen. Functionalization knows how\n to take every aliasing operator in ATen, and map it to its non-\n aliasing equivalent (e.g. \"tensor.view({-1})\" ->\n \"at::view_copy(tensor, {-1})\"), and how to take every mutating\n operator in ATen, and map it to its non-mutating equivalent (e.g.\n \"tensor.add_(1)\" -> \"at::add(tensor, -1)\"), while tracking aliases\n and mutations out-of-line to know when to fix things up.\n Information about which ATen operators are aliasing or mutating all\n comes from https://github.com/pytorch/pytorch/blob/master/aten/src\n /ATen/native/native_functions.yaml.", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"}
{"text": "torch.bernoullitorch.bernoulli(input, , generator=None, out=None) -> Tensor\n Draws binary random numbers (0 or 1) from a Bernoulli distribution.\n The \"input\" tensor should be a tensor containing probabilities to\n be used for drawing the binary random number. Hence, all values in\n \"input\" have to be in the range: 0 \\leq \\text{input}i \\leq 1.\n The \\text{i}^{th} element of the output tensor will draw a value 1\n according to the \\text{i}^{th} probability value given in \"input\".\n \\text{out} \\sim \\mathrm{Bernoulli}(p = \\text{input}_{i})\n The returned \"out\" tensor only has values 0 or 1 and is of the same\n shape as \"input\".\n \"out\" can have integral \"dtype\", but \"input\" must have floating\n point \"dtype\".\n Parameters:\n input (Tensor) -- the input tensor of probability values\n for the Bernoulli distribution\n Keyword Arguments:\n * generator* (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling", "source": "https://pytorch.org/docs/stable/generated/torch.bernoulli.html", "category": "pytorch docs"}
{"text": "number generator for sampling\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.empty(3, 3).uniform_(0, 1) # generate a uniform random matrix with range [0, 1]\n >>> a\n tensor([[ 0.1737, 0.0950, 0.3609],\n [ 0.7148, 0.0289, 0.2676],\n [ 0.9456, 0.8937, 0.7202]])\n >>> torch.bernoulli(a)\n tensor([[ 1., 0., 0.],\n [ 0., 0., 0.],\n [ 1., 1., 1.]])\n >>> a = torch.ones(3, 3) # probability of drawing \"1\" is 1\n >>> torch.bernoulli(a)\n tensor([[ 1., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.]])\n >>> a = torch.zeros(3, 3) # probability of drawing \"1\" is 0\n >>> torch.bernoulli(a)\n tensor([[ 0., 0., 0.],\n [ 0., 0., 0.],\n [ 0., 0., 0.]])", "source": "https://pytorch.org/docs/stable/generated/torch.bernoulli.html", "category": "pytorch docs"}
{"text": "torch.minimumtorch.minimum(input, other, , out=None) -> Tensor\n Computes the element-wise minimum of \"input\" and \"other\".\n Note:\n If one of the elements being compared is a NaN, then that element\n is returned. \"minimum()\" is not supported for tensors with\n complex dtypes.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.tensor((1, 2, -1))\n >>> b = torch.tensor((3, 0, 4))\n >>> torch.minimum(a, b)\n tensor([1, 0, -1])", "source": "https://pytorch.org/docs/stable/generated/torch.minimum.html", "category": "pytorch docs"}
{"text": "torch.logical_andtorch.logical_and(input, other, , out=None) -> Tensor\n Computes the element-wise logical AND of the given input tensors.\n Zeros are treated as \"False\" and nonzeros are treated as \"True\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the tensor to compute AND with\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> torch.logical_and(torch.tensor([True, False, True]), torch.tensor([True, False, False]))\n tensor([ True, False, False])\n >>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)\n >>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)\n >>> torch.logical_and(a, b)\n tensor([False, False, True, False])\n >>> torch.logical_and(a.double(), b.double())\n tensor([False, False, True, False])\n >>> torch.logical_and(a.double(), b)\n tensor([False, False, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.logical_and.html", "category": "pytorch docs"}
{"text": "tensor([False, False, True, False])\n >>> torch.logical_and(a, b, out=torch.empty(4, dtype=torch.bool))\n tensor([False, False, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.logical_and.html", "category": "pytorch docs"}
{"text": "CELUclass torch.nn.CELU(alpha=1.0, inplace=False)\n Applies the element-wise function:\n \\text{CELU}(x) = \\max(0,x) + \\min(0, \\alpha * (\\exp(x/\\alpha) -\n 1))\n More details can be found in the paper Continuously Differentiable\n Exponential Linear Units .\n Parameters:\n * alpha (float) -- the \\alpha value for the CELU\n formulation. Default: 1.0\n * inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.CELU()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CELU.html", "category": "pytorch docs"}
{"text": "TransformerDecoderclass torch.nn.TransformerDecoder(decoder_layer, num_layers, norm=None)\n TransformerDecoder is a stack of N decoder layers\n Parameters:\n * decoder_layer -- an instance of the\n TransformerDecoderLayer() class (required).\n * num_layers -- the number of sub-decoder-layers in the\n decoder (required).\n * norm -- the layer normalization component (optional).\n Examples::\n >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)\n >>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)\n >>> memory = torch.rand(10, 32, 512)\n >>> tgt = torch.rand(20, 32, 512)\n >>> out = transformer_decoder(tgt, memory)\n forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)\n Pass the inputs (and mask) through the decoder layer in turn.\n Parameters:\n * tgt (Tensor) -- the sequence to the decoder", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html", "category": "pytorch docs"}
{"text": "(required).\n * memory (Tensor) -- the sequence from the last layer\n of the encoder (required).\n * tgt_mask (Optional[Tensor]) -- the mask for the\n tgt sequence (optional).\n * memory_mask (Optional[Tensor]) -- the mask for\n the memory sequence (optional).\n * tgt_key_padding_mask (Optional[Tensor]) -- the\n mask for the tgt keys per batch (optional).\n * memory_key_padding_mask (Optional[Tensor]) --\n the mask for the memory keys per batch (optional).\n Return type:\n Tensor\n Shape:\n see the docs in Transformer class.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html", "category": "pytorch docs"}
{"text": "avg_pool3dclass torch.ao.nn.quantized.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)\n Applies 3D average-pooling operation in kD \\times kH \\times kW\n regions by step size sD \\times sH \\times sW steps. The number of\n output features is equal to the number of input planes.\n Note:\n The input quantization parameters propagate to the output.\n Parameters:\n * input -- quantized input tensor (\\text{minibatch} ,\n \\text{in_channels} , iD , iH , iW)\n * kernel_size -- size of the pooling region. Can be a single\n number or a tuple (kD, kH, kW)\n * stride -- stride of the pooling operation. Can be a single\n number or a tuple (sD, sH, sW). Default: \"kernel_size\"\n * padding -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple (padD, padH, padW).\n Default: 0", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool3d.html", "category": "pytorch docs"}
{"text": "Default: 0\n * ceil_mode -- when True, will use ceil instead of floor\n in the formula to compute the output shape. Default: \"False\"\n * count_include_pad -- when True, will include the zero-\n padding in the averaging calculation. Default: \"True\"\n * divisor_override -- if specified, it will be used as\n divisor, otherwise size of the pooling region will be used.\n Default: None", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool3d.html", "category": "pytorch docs"}
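The entry above does not ship with an example, so here is a minimal sketch of calling the quantized 3D average pool; the tensor shape and quantization parameters (`scale=0.05`, `zero_point=0`) are illustrative assumptions, not values from the docs:

```python
import torch
from torch.ao.nn.quantized import functional as qF

# Quantized ops require a quantized input, so quantize a float tensor first.
x = torch.rand(1, 2, 4, 4, 4)  # (minibatch, in_channels, iD, iH, iW)
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.quint8)

# A 2x2x2 pooling window with stride 2 halves each spatial dimension.
out = qF.avg_pool3d(qx, kernel_size=2, stride=2)
```

Per the note above, the output stays quantized and carries the input's quantization parameters.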
{"text": "torch.Tensor.nan_to_numTensor.nan_to_num(nan=0.0, posinf=None, neginf=None) -> Tensor\n See \"torch.nan_to_num()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nan_to_num.html", "category": "pytorch docs"}
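A short sketch of the method the entry above points to, with illustrative replacement values:

```python
import torch

t = torch.tensor([float('nan'), float('inf'), float('-inf'), 1.5])

# Defaults: nan -> 0.0, posinf/neginf -> the dtype's largest/smallest finite value.
clean = t.nan_to_num()

# The replacement values can also be chosen explicitly.
custom = t.nan_to_num(nan=0.0, posinf=100.0, neginf=-100.0)
```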
{"text": "torch.nn.functional.dropouttorch.nn.functional.dropout(input, p=0.5, training=True, inplace=False)\n During training, randomly zeroes some of the elements of the input\n tensor with probability \"p\" using samples from a Bernoulli\n distribution.\n See \"Dropout\" for details.\n Parameters:\n * p (float) -- probability of an element to be zeroed.\n Default: 0.5\n * training (bool) -- apply dropout if \"True\". Default:\n \"True\"\n * inplace (bool) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout.html", "category": "pytorch docs"}
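A minimal sketch of the training/eval behaviour described above (the seed and `p=0.5` are illustrative choices):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.ones(8)

# training=True: each element is zeroed with probability p, and the
# survivors are scaled by 1/(1-p) so the expected value is unchanged.
y_train = F.dropout(x, p=0.5, training=True)

# training=False: dropout is the identity.
y_eval = F.dropout(x, p=0.5, training=False)
```

With `p=0.5` on a tensor of ones, every output element is therefore either 0.0 or 2.0.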
{"text": "torch.Tensor.dotTensor.dot(other) -> Tensor\n See \"torch.dot()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dot.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fminTensor.fmin(other) -> Tensor\n See \"torch.fmin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fmin.html", "category": "pytorch docs"}
{"text": "torch.Tensor.expandTensor.expand(*sizes) -> Tensor\n Returns a new view of the \"self\" tensor with singleton dimensions\n expanded to a larger size.\n Passing -1 as the size for a dimension means not changing the size\n of that dimension.\n Tensor can be also expanded to a larger number of dimensions, and\n the new ones will be appended at the front. For the new dimensions,\n the size cannot be set to -1.\n Expanding a tensor does not allocate new memory, but only creates a\n new view on the existing tensor where a dimension of size one is\n expanded to a larger size by setting the \"stride\" to 0. Any\n dimension of size 1 can be expanded to an arbitrary value without\n allocating new memory.\n Parameters:\n *sizes (torch.Size or int...) -- the desired\n expanded size\n Warning:\n More than one element of an expanded tensor may refer to a single\n memory location. As a result, in-place operations (especially", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html", "category": "pytorch docs"}
{"text": "ones that are vectorized) may result in incorrect behavior. If\n you need to write to the tensors, please clone them first.\n Example:\n >>> x = torch.tensor([[1], [2], [3]])\n >>> x.size()\n torch.Size([3, 1])\n >>> x.expand(3, 4)\n tensor([[ 1, 1, 1, 1],\n [ 2, 2, 2, 2],\n [ 3, 3, 3, 3]])\n >>> x.expand(-1, 4) # -1 means not changing the size of that dimension\n tensor([[ 1, 1, 1, 1],\n [ 2, 2, 2, 2],\n [ 3, 3, 3, 3]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html", "category": "pytorch docs"}
{"text": "RReLUclass torch.nn.RReLU(lower=0.125, upper=0.3333333333333333, inplace=False)\n Applies the randomized leaky rectified linear unit function,\n element-wise, as described in the paper:\n Empirical Evaluation of Rectified Activations in Convolutional\n Network.\n The function is defined as:\n \\text{RReLU}(x) = \\begin{cases} x & \\text{if } x \\geq 0 \\\\ ax & \\text{ otherwise } \\end{cases}\n where a is randomly sampled from uniform distribution\n \\mathcal{U}(\\text{lower}, \\text{upper}).\n See: https://arxiv.org/pdf/1505.00853.pdf\n Parameters:\n * lower (float) -- lower bound of the uniform\n distribution. Default: \\frac{1}{8}\n * upper (float) -- upper bound of the uniform\n distribution. Default: \\frac{1}{3}\n * inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n [image]\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RReLU.html", "category": "pytorch docs"}
{"text": "[image]\n Examples:\n >>> m = nn.RReLU(0.1, 0.3)\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RReLU.html", "category": "pytorch docs"}
{"text": "torch.Tensor.transpose_Tensor.transpose_(dim0, dim1) -> Tensor\n In-place version of \"transpose()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.transpose_.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.max_unpool1dtorch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None)\n Computes a partial inverse of \"MaxPool1d\".\n See \"MaxUnpool1d\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool1d.html", "category": "pytorch docs"}
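Since the entry above is only a stub, here is a minimal round-trip sketch (the input values are illustrative); note that the pooling call must use `return_indices=True` to produce the indices that `max_unpool1d` consumes:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[1., 2., 3., 4., 5., 6., 7., 8.]]])

# return_indices=True yields the argmax locations alongside the pooled values.
pooled, indices = F.max_pool1d(x, kernel_size=2, return_indices=True)

# Each max is placed back at its original position; all other slots are zero.
unpooled = F.max_unpool1d(pooled, indices, kernel_size=2)
```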
{"text": "torch.nn.functional.lineartorch.nn.functional.linear(input, weight, bias=None) -> Tensor\n Applies a linear transformation to the incoming data: y = xA^T + b.\n This operation supports 2-D \"weight\" with sparse layout.\n Warning:\n Sparse support is a beta feature and some layout(s)/dtype/device\n combinations may not be supported, or may not have autograd\n support. If you notice missing functionality please open a\n feature request.\n This operator supports TensorFloat32.\n Shape:\n * Input: (*, in_features) where * means any number of\n additional dimensions, including none\n * Weight: (out_features, in_features) or (in_features)\n * Bias: (out_features) or ()\n * Output: (*, out_features) or (*), based on the shape of the\n weight", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.linear.html", "category": "pytorch docs"}
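A minimal sketch of the shape convention above (the sizes are illustrative): the weight is stored as `(out_features, in_features)`, so the op computes `x @ W.T + b`:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 3)   # (batch, in_features)
W = torch.randn(5, 3)   # (out_features, in_features) -- note the transposed layout
b = torch.randn(5)

y = F.linear(x, W, b)   # equivalent to x @ W.T + b, shape (4, 5)
```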
{"text": "torch.nansumtorch.nansum(input, *, dtype=None) -> Tensor\n Returns the sum of all elements, treating Not a Numbers (NaNs) as\n zero.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Example:\n >>> a = torch.tensor([1., 2., float('nan'), 4.])\n >>> torch.nansum(a)\n tensor(7.)\n torch.nansum(input, dim, keepdim=False, *, dtype=None) -> Tensor\n Returns the sum of each row of the \"input\" tensor in the given\n dimension \"dim\", treating Not a Numbers (NaNs) as zero. If \"dim\" is\n a list of dimensions, reduce over all of them.\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.", "source": "https://pytorch.org/docs/stable/generated/torch.nansum.html", "category": "pytorch docs"}
{"text": "Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Example:\n >>> torch.nansum(torch.tensor([1., float(\"nan\")]))\n 1.0\n >>> a = torch.tensor([[1, 2], [3., float(\"nan\")]])\n >>> torch.nansum(a)\n tensor(6.)\n >>> torch.nansum(a, dim=0)\n tensor([4., 2.])\n >>> torch.nansum(a, dim=1)\n tensor([3., 3.])", "source": "https://pytorch.org/docs/stable/generated/torch.nansum.html", "category": "pytorch docs"}
{"text": "torch.Tensor.maximumTensor.maximum(other) -> Tensor\n See \"torch.maximum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.maximum.html", "category": "pytorch docs"}
{"text": "torch.ttorch.t(input) -> Tensor\n Expects \"input\" to be <= 2-D tensor and transposes dimensions 0 and\n 1.\n 0-D and 1-D tensors are returned as is. When input is a 2-D tensor\n this is equivalent to \"transpose(input, 0, 1)\".\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> x = torch.randn(())\n >>> x\n tensor(0.1995)\n >>> torch.t(x)\n tensor(0.1995)\n >>> x = torch.randn(3)\n >>> x\n tensor([ 2.4320, -0.4608, 0.7702])\n >>> torch.t(x)\n tensor([ 2.4320, -0.4608, 0.7702])\n >>> x = torch.randn(2, 3)\n >>> x\n tensor([[ 0.4875, 0.9158, -0.5872],\n [ 0.3938, -0.6929, 0.6932]])\n >>> torch.t(x)\n tensor([[ 0.4875, 0.3938],\n [ 0.9158, -0.6929],\n [-0.5872, 0.6932]])\n See also \"torch.transpose()\".", "source": "https://pytorch.org/docs/stable/generated/torch.t.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lt_Tensor.lt_(other) -> Tensor\n In-place version of \"lt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lt_.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.binary_cross_entropytorch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean')\n Function that measures the Binary Cross Entropy between the target\n and input probabilities.\n See \"BCELoss\" for details.\n Parameters:\n * input (Tensor) -- Tensor of arbitrary shape as\n probabilities.\n * target (Tensor) -- Tensor of the same shape as input\n with values between 0 and 1.\n * weight (Tensor, optional) -- a manual rescaling\n weight; if provided, it's repeated to match input tensor shape\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html", "category": "pytorch docs"}
{"text": "minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Return type:\n Tensor\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n Examples:\n >>> input = torch.randn(3, 2, requires_grad=True)\n >>> target = torch.rand(3, 2, requires_grad=False)\n >>> loss = F.binary_cross_entropy(torch.sigmoid(input), target)\n >>> loss.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html", "category": "pytorch docs"}
{"text": "torch.fmintorch.fmin(input, other, *, out=None) -> Tensor\n Computes the element-wise minimum of \"input\" and \"other\".\n This is like \"torch.minimum()\" except it handles NaNs differently:\n if exactly one of the two elements being compared is a NaN then the\n non-NaN element is taken as the minimum. Only if both elements are\n NaN is NaN propagated.\n This function is a wrapper around C++'s \"std::fmin\" and is similar\n to NumPy's \"fmin\" function.\n Supports broadcasting to a common shape, type promotion, and\n integer and floating-point inputs.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([2.2, float('nan'), 2.1, float('nan')])\n >>> b = torch.tensor([-9.3, 0.1, float('nan'), float('nan')])\n >>> torch.fmin(a, b)\n tensor([-9.3000, 0.1000, 2.1000, nan])", "source": "https://pytorch.org/docs/stable/generated/torch.fmin.html", "category": "pytorch docs"}
{"text": "torch.mintorch.min(input) -> Tensor\n Returns the minimum value of all elements in the \"input\" tensor.\n Warning:\n This function produces deterministic (sub)gradients unlike\n \"min(dim=0)\"\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 0.6750, 1.0857, 1.7197]])\n >>> torch.min(a)\n tensor(0.6750)\n torch.min(input, dim, keepdim=False, *, out=None)\n Returns a namedtuple \"(values, indices)\" where \"values\" is the\n minimum value of each row of the \"input\" tensor in the given\n dimension \"dim\". And \"indices\" is the index location of each\n minimum value found (argmin).\n If \"keepdim\" is \"True\", the output tensors are of the same size as\n \"input\" except in the dimension \"dim\" where they are of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensors having 1 fewer dimension than \"input\".\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.min.html", "category": "pytorch docs"}
{"text": "Note:\n If there are multiple minimal values in a reduced row then the\n indices of the first minimal value are returned.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n out (tuple, optional) -- the tuple of two output\n tensors (min, min_indices)\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[-0.6248, 1.1334, -1.1899, -0.2803],\n [-1.4644, -0.2635, -0.3651, 0.6134],\n [ 0.2457, 0.0384, 1.0128, 0.7015],\n [-0.1153, 2.9849, 2.1458, 0.5788]])\n >>> torch.min(a, 1)\n torch.return_types.min(values=tensor([-1.1899, -1.4644, 0.0384, -0.1153]), indices=tensor([2, 0, 1, 0]))\n torch.min(input, other, *, out=None) -> Tensor\n See \"torch.minimum()\".", "source": "https://pytorch.org/docs/stable/generated/torch.min.html", "category": "pytorch docs"}
{"text": "torch.Tensor.remainderTensor.remainder(divisor) -> Tensor\n See \"torch.remainder()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.remainder.html", "category": "pytorch docs"}
{"text": "torch._asserttorch._assert(condition, message)\n A wrapper around Python's assert which is symbolically traceable.", "source": "https://pytorch.org/docs/stable/generated/torch._assert.html", "category": "pytorch docs"}
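A small sketch of what "symbolically traceable assert" buys you in eager mode (the `mean_2d` helper is a hypothetical example, not part of the API):

```python
import torch

def mean_2d(x):
    # Behaves like a plain `assert` when run eagerly, but unlike the
    # `assert` statement it can survive symbolic tracing (e.g. torch.fx).
    torch._assert(x.dim() == 2, "expected a 2-D tensor")
    return x.mean()

ok = mean_2d(torch.ones(2, 3))    # passes: 2-D input

try:
    mean_2d(torch.ones(3))        # 1-D input triggers the assertion
    raised = False
except AssertionError:
    raised = True
```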
{"text": "torch._foreach_log10_torch._foreach_log10_(self: List[Tensor]) -> None\n Apply \"torch.log10()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log10_.html", "category": "pytorch docs"}
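A minimal sketch of this private foreach op (note it is an internal `torch._*` API, so its availability is an assumption about the installed version); the trailing underscore marks the in-place variant, which returns `None` and overwrites each tensor in the list:

```python
import torch

ts = [torch.tensor([1.0, 10.0]), torch.tensor([100.0])]

# In-place: each tensor in the list now holds its elementwise log10.
torch._foreach_log10_(ts)
```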
{"text": "torch.Tensor.argsortTensor.argsort(dim=-1, descending=False) -> LongTensor\n See \"torch.argsort()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.argsort.html", "category": "pytorch docs"}
{"text": "CosineAnnealingWarmRestartsclass torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=-1, verbose=False)\n Set the learning rate of each parameter group using a cosine\n annealing schedule, where \\eta_{max} is set to the initial lr,\n T_{cur} is the number of epochs since the last restart and T_{i} is\n the number of epochs between two warm restarts in SGDR:\n \\eta_t = \\eta_{min} + \\frac{1}{2}(\\eta_{max} -\n \\eta_{min})\\left(1 +\n \\cos\\left(\\frac{T_{cur}}{T_{i}}\\pi\\right)\\right)\n When T_{cur}=T_{i}, set \\eta_t = \\eta_{min}. When T_{cur}=0 after\n restart, set \\eta_t=\\eta_{max}.\n It has been proposed in SGDR: Stochastic Gradient Descent with Warm\n Restarts.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * T_0 (int) -- Number of iterations for the first restart.\n * T_mult (int, optional) -- A factor by which T_{i} increases\n after a restart. Default: 1.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html", "category": "pytorch docs"}
{"text": "after a restart. Default: 1.\n * eta_min (float, optional) -- Minimum learning rate.\n Default: 0.\n * last_epoch (int, optional) -- The index of last\n epoch. Default: -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the schedulers state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.dict which\n is not the optimizer.\n step(epoch=None)\n Step could be called after every batch update\n -[ Example ]-", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html", "category": "pytorch docs"}
{"text": "-[ Example ]-\n >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)\n >>> iters = len(dataloader)\n >>> for epoch in range(20):\n >>> for i, sample in enumerate(dataloader):\n >>> inputs, labels = sample['inputs'], sample['labels']\n >>> optimizer.zero_grad()\n >>> outputs = net(inputs)\n >>> loss = criterion(outputs, labels)\n >>> loss.backward()\n >>> optimizer.step()\n >>> scheduler.step(epoch + i / iters)\n This function can be called in an interleaved way.\n -[ Example ]-\n >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)\n >>> for epoch in range(20):\n >>> scheduler.step()\n >>> scheduler.step(26)\n >>> scheduler.step() # scheduler.step(27), instead of scheduler(20)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html", "category": "pytorch docs"}
{"text": "torch.inversetorch.inverse(input, *, out=None) -> Tensor\n Alias for \"torch.linalg.inv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.inverse.html", "category": "pytorch docs"}
{"text": "upsample_bilinearclass torch.ao.nn.quantized.functional.upsample_bilinear(input, size=None, scale_factor=None)\n Upsamples the input, using bilinear upsampling.\n Warning:\n This function is deprecated in favor of\n \"torch.nn.quantized.functional.interpolate()\". This is equivalent\n to \"nn.quantized.functional.interpolate(..., mode='bilinear',\n align_corners=True)\".\n Note:\n The input quantization parameters propagate to the output.\n Note:\n Only 2D inputs are supported\n Parameters:\n * input (Tensor) -- quantized input\n * size (int or Tuple[int, int]) -- output\n spatial size.\n * scale_factor (int or Tuple[int, int]) --\n multiplier for spatial size", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample_bilinear.html", "category": "pytorch docs"}
{"text": "torch.Tensor.diagonal_scatterTensor.diagonal_scatter(src, offset=0, dim1=0, dim2=1) -> Tensor\n See \"torch.diagonal_scatter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diagonal_scatter.html", "category": "pytorch docs"}
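Since the entry above is only a stub, here is a minimal sketch of the method (values are illustrative). It returns a copy with `src` written along the selected diagonal; the original tensor is untouched:

```python
import torch

base = torch.zeros(3, 3)
src = torch.tensor([1., 2., 3.])

# Copy of `base` with `src` written along the main diagonal.
out = base.diagonal_scatter(src)

# offset=1 targets the first superdiagonal, which has only 2 elements here.
upper = base.diagonal_scatter(src[:2], offset=1)
```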
{"text": "torch.unflattentorch.unflatten(input, dim, sizes) -> Tensor\n Expands a dimension of the input tensor over multiple dimensions.\n See also:\n \"torch.flatten()\" the inverse of this function. It coalesces\n several dimensions into one.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- Dimension to be unflattened, specified as\n an index into \"input.shape\".\n * sizes (Tuple[int]) -- New shape of the unflattened\n dimension. One of its elements can be -1 in which case the\n corresponding output dimension is inferred. Otherwise, the\n product of \"sizes\" must equal \"input.shape[dim]\".\n Returns:\n A View of input with the specified dimension unflattened.\n Examples::\n >>> torch.unflatten(torch.randn(3, 4, 1), 1, (2, 2)).shape\n torch.Size([3, 2, 2, 1])\n >>> torch.unflatten(torch.randn(3, 4, 1), 1, (-1, 2)).shape\n torch.Size([3, 2, 2, 1])", "source": "https://pytorch.org/docs/stable/generated/torch.unflatten.html", "category": "pytorch docs"}
{"text": "torch.Size([3, 2, 2, 1])\n >>> torch.unflatten(torch.randn(5, 12, 3), -1, (2, 2, 3, 1, 1)).shape\n torch.Size([5, 2, 2, 3, 1, 1, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.unflatten.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.interpolatetorch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None, antialias=False)\n Down/up samples the input to either the given \"size\" or the given\n \"scale_factor\"\n The algorithm used for interpolation is determined by \"mode\".\n Currently temporal, spatial and volumetric sampling are supported,\n i.e. expected inputs are 3-D, 4-D or 5-D in shape.\n The input dimensions are interpreted in the form: mini-batch x\n channels x [optional depth] x [optional height] x width.\n The modes available for resizing are: nearest, linear (3D-\n only), bilinear, bicubic (4D-only), trilinear (5D-only),\n area, nearest-exact\n Parameters:\n * input (Tensor) -- the input tensor\n * size (int or Tuple[int] or Tuple[int,\n int] or Tuple[int, int, int]) -- output\n spatial size.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"}
{"text": "spatial size.\n * scale_factor (float or Tuple[float]) --\n multiplier for spatial size. If scale_factor is a tuple, its\n length has to match the number of spatial dimensions;\n input.dim() - 2.\n * mode (str) -- algorithm used for upsampling: \"'nearest'\"\n | \"'linear'\" | \"'bilinear'\" | \"'bicubic'\" | \"'trilinear'\" |\n \"'area'\" | \"'nearest-exact'\". Default: \"'nearest'\"\n * align_corners (bool, optional) -- Geometrically, we\n consider the pixels of the input and output as squares rather\n than points. If set to \"True\", the input and output tensors\n are aligned by the center points of their corner pixels,\n preserving the values at the corner pixels. If set to \"False\",\n the input and output tensors are aligned by the corner points\n of their corner pixels, and the interpolation uses edge value\n padding for out-of-boundary values, making this operation", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"}
{"text": "independent of input size when \"scale_factor\" is kept the\n same. This only has an effect when \"mode\" is \"'linear'\",\n \"'bilinear'\", \"'bicubic'\" or \"'trilinear'\". Default: \"False\"\n * recompute_scale_factor (bool, optional) -- recompute\n the scale_factor for use in the interpolation calculation. If\n recompute_scale_factor is \"True\", then scale_factor must\n be passed in and scale_factor is used to compute the output\n size. The computed output size will be used to infer new\n scales for the interpolation. Note that when scale_factor is\n floating-point, it may differ from the recomputed\n scale_factor due to rounding and precision issues. If\n recompute_scale_factor is \"False\", then size or\n scale_factor will be used directly for interpolation.\n Default: \"None\".\n * antialias (bool, optional) -- flag to apply anti-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"}
{"text": "aliasing. Default: \"False\". Using anti-alias option together\n with \"align_corners=False\", interpolation result would match\n Pillow result for downsampling operation. Supported modes:\n \"'bilinear'\", \"'bicubic'\".\n Return type:\n Tensor\n Note:\n With \"mode='bicubic'\", it's possible to cause overshoot, in other\n words it can produce negative values or values greater than 255\n for images. Explicitly call \"result.clamp(min=0, max=255)\" if you\n want to reduce the overshoot when displaying the image.\n Note:\n Mode \"mode='nearest-exact'\" matches Scikit-Image and PIL nearest\n neighbours interpolation algorithms and fixes known issues with\n \"mode='nearest'\". This mode is introduced to keep backward\n compatibility. Mode \"mode='nearest'\" matches buggy OpenCV's\n \"INTER_NEAREST\" interpolation algorithm.\n Note:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"}
{"text": "information.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"}
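The parameter descriptions above can be sketched with a small 4-D example (the sizes and modes chosen here are illustrative):

```python
import torch
import torch.nn.functional as F

# 4-D input: (minibatch, channels, height, width)
img = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)

# Upsample 2x with nearest-neighbour: each pixel is simply repeated.
up = F.interpolate(img, scale_factor=2, mode='nearest')

# Downsample to an explicit output size with bilinear interpolation.
down = F.interpolate(img, size=(2, 2), mode='bilinear', align_corners=False)
```

Exactly one of `size` and `scale_factor` is given per call; `mode` must match the input's dimensionality (bilinear/bicubic are 4-D only).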
{"text": "torch.not_equaltorch.not_equal(input, other, *, out=None) -> Tensor\n Alias for \"torch.ne()\".", "source": "https://pytorch.org/docs/stable/generated/torch.not_equal.html", "category": "pytorch docs"}
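A one-line sketch of the alias (values illustrative); all three spellings produce the same boolean mask:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([1, 0, 3])

mask = torch.not_equal(a, b)   # identical to torch.ne(a, b) and to a != b
```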
{"text": "LPPool2dclass torch.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False)\n Applies a 2D power-average pooling over an input signal composed of\n several input planes.\n On each window, the function computed is:\n f(X) = \\sqrt[p]{\\sum_{x \\in X} x^{p}}\n * At p = \\infty, one gets Max Pooling\n * At p = 1, one gets Sum Pooling (which is proportional to average\n pooling)\n The parameters \"kernel_size\", \"stride\" can either be:\n * a single \"int\" -- in which case the same value is used for the\n height and width dimension\n * a \"tuple\" of two ints -- in which case, the first int is\n used for the height dimension, and the second int for the\n width dimension\n Note:\n If the sum to the power of p is zero, the gradient of this\n function is not defined. This implementation will set the\n gradient to zero in this case.\n Parameters:\n * kernel_size (Union[int, Tuple[int,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LPPool2d.html", "category": "pytorch docs"}
{"text": "int]]) -- the size of the window\n * stride (Union[int, Tuple[int, int]])\n -- the stride of the window. Default value is \"kernel_size\"\n * ceil_mode (bool) -- when True, will use ceil instead\n of floor to compute the output shape\n Shape:\n * Input: (N, C, H_{in}, W_{in})\n * Output: (N, C, H_{out}, W_{out}), where\n H_{out} = \\left\\lfloor\\frac{H_{in} -\n \\text{kernel_size}[0]}{\\text{stride}[0]} + 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} -\n \\text{kernel_size}[1]}{\\text{stride}[1]} + 1\\right\\rfloor\n Examples:\n >>> # power-2 pool of square window of size=3, stride=2\n >>> m = nn.LPPool2d(2, 3, stride=2)\n >>> # pool of non-square window of power 1.2\n >>> m = nn.LPPool2d(1.2, (3, 2), stride=(2, 1))\n >>> input = torch.randn(20, 16, 50, 32)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LPPool2d.html", "category": "pytorch docs"}
{"text": "default_histogram_observertorch.quantization.observer.default_histogram_observer\n alias of functools.partial(HistogramObserver, quant_min=0,\n quant_max=127)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_histogram_observer.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cumsum_Tensor.cumsum_(dim, dtype=None) -> Tensor\n In-place version of \"cumsum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cumsum_.html", "category": "pytorch docs"}
{"text": "PixelShuffleclass torch.nn.PixelShuffle(upscale_factor)\n Rearranges elements in a tensor of shape (*, C \\times r^2, H, W) to\n a tensor of shape (*, C, H \\times r, W \\times r), where r is an\n upscale factor.\n This is useful for implementing efficient sub-pixel convolution\n with a stride of 1/r.\n See the paper: Real-Time Single Image and Video Super-Resolution\n Using an Efficient Sub-Pixel Convolutional Neural Network by Shi\n et al. (2016) for more details.\n Parameters:\n upscale_factor (int) -- factor to increase spatial\n resolution by\n Shape:\n * Input: (*, C_{in}, H_{in}, W_{in}), where * is zero or more\n batch dimensions\n * Output: (*, C_{out}, H_{out}, W_{out}), where\n C_{out} = C_{in} \\div \\text{upscale_factor}^2\n H_{out} = H_{in} \\times \\text{upscale_factor}\n W_{out} = W_{in} \\times \\text{upscale_factor}\n Examples:\n >>> pixel_shuffle = nn.PixelShuffle(3)\n >>> input = torch.randn(1, 9, 4, 4)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html", "category": "pytorch docs"}
{"text": ">>> input = torch.randn(1, 9, 4, 4)\n >>> output = pixel_shuffle(input)\n >>> print(output.size())\n torch.Size([1, 1, 12, 12])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html", "category": "pytorch docs"}
{"text": "default_histogram_fake_quanttorch.quantization.fake_quantize.default_histogram_fake_quant\n alias of functools.partial(FakeQuantize,\n observer=HistogramObserver, quant_min=0,\n quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine,\n reduce_range=True)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_histogram_fake_quant.html", "category": "pytorch docs"}
{"text": "torch.Tensor.minTensor.min(dim=None, keepdim=False)\n See \"torch.min()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.min.html", "category": "pytorch docs"}
{"text": "Conv3dclass torch.ao.nn.qat.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None, device=None, dtype=None)\n A Conv3d module attached with FakeQuantize modules for weight, used\n for quantization aware training.\n We adopt the same interface as torch.nn.Conv3d, please see\n https://pytorch.org/docs/stable/nn.html?highlight=conv3d#torch.nn.Conv3d\n for documentation.\n Similar to torch.nn.Conv3d, with FakeQuantize modules initialized\n to default.\n Variables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.Conv3d.html", "category": "pytorch docs"}
{"text": "PoissonNLLLossclass torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')\n Negative log likelihood loss with Poisson distribution of target.\n The loss can be described as:\n \\text{target} \\sim \\mathrm{Poisson}(\\text{input})\n \\text{loss}(\\text{input}, \\text{target}) = \\text{input} -\n \\text{target} * \\log(\\text{input}) +\n \\log(\\text{target!})\n The last term can be omitted or approximated with Stirling formula.\n The approximation is used for target values more than 1. For\n targets less or equal to 1 zeros are added to the loss.\n Parameters:\n * log_input (bool, optional) -- if \"True\" the loss is\n computed as \\exp(\\text{input}) - \\text{target}*\\text{input},\n if \"False\" the loss is \\text{input} -\n \\text{target}*\\log(\\text{input}+\\text{eps}).\n * full (bool, optional) --\n whether to compute full loss, i.e. to add the Stirling", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PoissonNLLLoss.html", "category": "pytorch docs"}
{"text": "approximation term\n \\text{target}*\\log(\\text{target}) - \\text{target} + 0.5 *\n \\log(2\\pi\\text{target}).\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * eps (float, optional) -- Small value to avoid\n evaluation of \\log(0) when \"log_input = False\". Default: 1e-8\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PoissonNLLLoss.html", "category": "pytorch docs"}
{"text": "\"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Examples:\n >>> loss = nn.PoissonNLLLoss()\n >>> log_input = torch.randn(5, 2, requires_grad=True)\n >>> target = torch.randn(5, 2)\n >>> output = loss(log_input, target)\n >>> output.backward()\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n * Output: scalar by default. If \"reduction\" is \"'none'\", then\n (*), the same shape as the input.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PoissonNLLLoss.html", "category": "pytorch docs"}
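As a quick sanity check of the formula above (a sketch, assuming a working PyTorch install): with log_input=True and full=False, the per-element loss reduces to exp(input) - target * input, averaged under reduction='mean'.

```python
import torch
import torch.nn as nn

# With log_input=True and full=False (the defaults), the per-element
# loss is exp(input) - target * input; reduction='mean' averages it.
loss_fn = nn.PoissonNLLLoss(log_input=True, full=False, reduction="mean")
log_input = torch.randn(5, 2)
target = torch.rand(5, 2)  # non-negative, Poisson-style targets

out = loss_fn(log_input, target)
manual = (torch.exp(log_input) - target * log_input).mean()
```

The two values should agree to floating-point tolerance.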
{"text": "torch._foreach_acostorch._foreach_acos(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.acos()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_acos.html", "category": "pytorch docs"}
{"text": "torch.bincounttorch.bincount(input, weights=None, minlength=0) -> Tensor\n Count the frequency of each value in an array of non-negative ints.\n The number of bins (size 1) is one larger than the largest value in\n \"input\" unless \"input\" is empty, in which case the result is a\n tensor of size 0. If \"minlength\" is specified, the number of bins\n is at least \"minlength\" and if \"input\" is empty, then the result is\n tensor of size \"minlength\" filled with zeros. If \"n\" is the value\n at position \"i\", \"out[n] += weights[i]\" if \"weights\" is specified\n else \"out[n] += 1\".\n Note:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n Parameters:\n * input (Tensor) -- 1-d int tensor\n * weights (Tensor) -- optional, weight for each value in\n the input tensor. Should be of same size as input tensor.\n * minlength (int) -- optional, minimum number of bins.", "source": "https://pytorch.org/docs/stable/generated/torch.bincount.html", "category": "pytorch docs"}
{"text": "Should be non-negative.\n Returns:\n a tensor of shape \"Size([max(input) + 1])\" if \"input\" is non-\n empty, else \"Size(0)\"\n Return type:\n output (Tensor)\n Example:\n >>> input = torch.randint(0, 8, (5,), dtype=torch.int64)\n >>> weights = torch.linspace(0, 1, steps=5)\n >>> input, weights\n (tensor([4, 3, 6, 3, 4]),\n tensor([ 0.0000, 0.2500, 0.5000, 0.7500, 1.0000]))\n >>> torch.bincount(input)\n tensor([0, 0, 0, 2, 2, 0, 1])\n >>> input.bincount(weights)\n tensor([0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 0.0000, 0.5000])", "source": "https://pytorch.org/docs/stable/generated/torch.bincount.html", "category": "pytorch docs"}
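A minimal runnable sketch of the semantics described above (assuming a working PyTorch install), using a fixed input rather than the random one in the docs so the counts are predictable:

```python
import torch

input = torch.tensor([4, 3, 6, 3, 4])
counts = torch.bincount(input)                 # length = max(input) + 1 = 7
weighted = torch.bincount(input, weights=torch.linspace(0, 1, steps=5))
padded = torch.bincount(input, minlength=10)   # at least 10 bins, zero-filled
```

Bin 3 collects weights 0.25 + 0.75, bin 4 collects 0.0 + 1.0, bin 6 collects 0.5.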
{"text": "torch.triltorch.tril(input, diagonal=0, *, out=None) -> Tensor\n Returns the lower triangular part of the matrix (2-D tensor) or\n batch of matrices \"input\", the other elements of the result tensor\n \"out\" are set to 0.\n The lower triangular part of the matrix is defined as the elements\n on and below the diagonal.\n The argument \"diagonal\" controls which diagonal to consider. If\n \"diagonal\" = 0, all elements on and below the main diagonal are\n retained. A positive value includes just as many diagonals above\n the main diagonal, and similarly a negative value excludes just as\n many diagonals below the main diagonal. The main diagonal is the\n set of indices \\lbrace (i, i) \\rbrace for i \\in [0, \\min{d_{1},\n d_{2}} - 1] where d_{1}, d_{2} are the dimensions of the matrix.\n Parameters:\n * input (Tensor) -- the input tensor.\n * diagonal (int, optional) -- the diagonal to consider\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.tril.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(3, 3)\n >>> a\n tensor([[-1.0813, -0.8619, 0.7105],\n [ 0.0935, 0.1380, 2.2112],\n [-0.3409, -0.9828, 0.0289]])\n >>> torch.tril(a)\n tensor([[-1.0813, 0.0000, 0.0000],\n [ 0.0935, 0.1380, 0.0000],\n [-0.3409, -0.9828, 0.0289]])\n >>> b = torch.randn(4, 6)\n >>> b\n tensor([[ 1.2219, 0.5653, -0.2521, -0.2345, 1.2544, 0.3461],\n [ 0.4785, -0.4477, 0.6049, 0.6368, 0.8775, 0.7145],\n [ 1.1502, 3.2716, -1.1243, -0.5413, 0.3615, 0.6864],\n [-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0978]])\n >>> torch.tril(b, diagonal=1)\n tensor([[ 1.2219, 0.5653, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.4785, -0.4477, 0.6049, 0.0000, 0.0000, 0.0000],\n [ 1.1502, 3.2716, -1.1243, -0.5413, 0.0000, 0.0000],", "source": "https://pytorch.org/docs/stable/generated/torch.tril.html", "category": "pytorch docs"}
{"text": "[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0000]])\n >>> torch.tril(b, diagonal=-1)\n tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.4785, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 1.1502, 3.2716, 0.0000, 0.0000, 0.0000, 0.0000],\n [-0.0614, -0.7344, -1.3164, 0.0000, 0.0000, 0.0000]])", "source": "https://pytorch.org/docs/stable/generated/torch.tril.html", "category": "pytorch docs"}
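A deterministic version of the examples above (a sketch, assuming a working PyTorch install), using a small integer-valued matrix so the kept and zeroed entries are easy to verify by eye:

```python
import torch

m = torch.arange(1.0, 10.0).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]
lower = torch.tril(m)                       # keep main diagonal and below
strict = torch.tril(m, diagonal=-1)         # strictly below the diagonal
```

With diagonal=-1 the main diagonal itself is also zeroed out.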
{"text": "torch._foreach_expm1torch._foreach_expm1(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.expm1()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_expm1.html", "category": "pytorch docs"}
{"text": "torch.cuda.max_memory_allocatedtorch.cuda.max_memory_allocated(device=None)\n Returns the maximum GPU memory occupied by tensors in bytes for a\n given device.\n By default, this returns the peak allocated memory since the\n beginning of this program. \"reset_peak_memory_stats()\" can be used\n to reset the starting point in tracking this metric. For example,\n these two functions can measure the peak allocated memory usage of\n each iteration in a training loop.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n int\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html", "category": "pytorch docs"}
{"text": "torch.cuda.caching_allocator_deletetorch.cuda.caching_allocator_delete(mem_ptr)\n Deletes memory allocated using the CUDA memory allocator.\n Memory allocated with \"caching_allocator_alloc()\" is freed here.\n The associated device and stream are tracked inside the allocator.\n Parameters:\n mem_ptr (int) -- memory address to be freed by the\n allocator.\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.caching_allocator_delete.html", "category": "pytorch docs"}
{"text": "torch.Tensor.multinomialTensor.multinomial(num_samples, replacement=False, *, generator=None) -> Tensor\n See \"torch.multinomial()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.multinomial.html", "category": "pytorch docs"}
{"text": "torch._foreach_zero_torch._foreach_zero_(self: List[Tensor]) -> None\n Apply \"torch.zero_()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_zero_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.round_Tensor.round_(decimals=0) -> Tensor\n In-place version of \"round()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.round_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.msortTensor.msort() -> Tensor\n See \"torch.msort()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.msort.html", "category": "pytorch docs"}
{"text": "torch.Tensor.resolve_conjTensor.resolve_conj() -> Tensor\n See \"torch.resolve_conj()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resolve_conj.html", "category": "pytorch docs"}
{"text": "LazyConvTranspose1dclass torch.nn.LazyConvTranspose1d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n A \"torch.nn.ConvTranspose1d\" module with lazy initialization of the\n \"in_channels\" argument of the \"ConvTranspose1d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight and bias.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose1d.html", "category": "pytorch docs"}
{"text": "both sides of the input. Default: 0\n * output_padding (int or tuple, optional) --\n Additional size added to one side of the output shape.\n Default: 0\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n See also:\n \"torch.nn.ConvTranspose1d\" and\n \"torch.nn.modules.lazy.LazyModuleMixin\"\n cls_to_become\n alias of \"ConvTranspose1d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.erfinv_Tensor.erfinv_() -> Tensor\n In-place version of \"erfinv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erfinv_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.rot90Tensor.rot90(k, dims) -> Tensor\n See \"torch.rot90()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.rot90.html", "category": "pytorch docs"}
{"text": "torch.Tensor.tril_Tensor.tril_(diagonal=0) -> Tensor\n In-place version of \"tril()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tril_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.floatTensor.float(memory_format=torch.preserve_format) -> Tensor\n \"self.float()\" is equivalent to \"self.to(torch.float32)\". See\n \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.float.html", "category": "pytorch docs"}
{"text": "Embeddingclass torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, _freeze=False, device=None, dtype=None)\n A simple lookup table that stores embeddings of a fixed dictionary\n and size.\n This module is often used to store word embeddings and retrieve\n them using indices. The input to the module is a list of indices,\n and the output is the corresponding word embeddings.\n Parameters:\n * num_embeddings (int) -- size of the dictionary of\n embeddings\n * embedding_dim (int) -- the size of each embedding vector\n * padding_idx (int, optional) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not\n updated during training, i.e. it remains as a fixed \"pad\". For\n a newly constructed Embedding, the embedding vector at", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"}
{"text": "\"padding_idx\" will default to all zeros, but can be updated to\n another value to be used as the padding vector.\n * max_norm (float, optional) -- If given, each\n embedding vector with norm larger than \"max_norm\" is\n renormalized to have norm \"max_norm\".\n * norm_type (float, optional) -- The p of the p-norm\n to compute for the \"max_norm\" option. Default \"2\".\n * scale_grad_by_freq (bool, optional) -- If given,\n this will scale gradients by the inverse of frequency of the\n words in the mini-batch. Default \"False\".\n * sparse (bool, optional) -- If \"True\", gradient\n w.r.t. \"weight\" matrix will be a sparse tensor. See Notes for\n more details regarding sparse gradients.\n Variables:\n weight (Tensor) -- the learnable weights of the module of\n shape (num_embeddings, embedding_dim) initialized from\n \\mathcal{N}(0, 1)\n Shape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"}
{"text": "\\mathcal{N}(0, 1)\n Shape:\n * Input: (*), IntTensor or LongTensor of arbitrary shape\n containing the indices to extract\n * Output: (*, H), where * is the input shape and\n H=\\text{embedding_dim}\n Note:\n Keep in mind that only a limited number of optimizers support\n sparse gradients: currently it's \"optim.SGD\" (CUDA and CPU),\n \"optim.SparseAdam\" (CUDA and CPU) and \"optim.Adagrad\" (CPU)\n Note:\n When \"max_norm\" is not \"None\", \"Embedding\"'s forward method will\n modify the \"weight\" tensor in-place. Since tensors needed for\n gradient computations cannot be modified in-place, performing a\n differentiable operation on \"Embedding.weight\" before calling\n \"Embedding\"'s forward method requires cloning \"Embedding.weight\"\n when \"max_norm\" is not \"None\". For example:\n n, d, m = 3, 5, 7\n embedding = nn.Embedding(n, d, max_norm=True)\n W = torch.randn((m, d), requires_grad=True)\n idx = torch.tensor([1, 2])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"}
{"text": "idx = torch.tensor([1, 2])\n a = embedding.weight.clone() @ W.t() # weight must be cloned for this to be differentiable\n b = embedding(idx) @ W.t() # modifies weight in-place\n out = (a.unsqueeze(0) + b.unsqueeze(1))\n loss = out.sigmoid().prod()\n loss.backward()\n Examples:\n >>> # an Embedding module containing 10 tensors of size 3\n >>> embedding = nn.Embedding(10, 3)\n >>> # a batch of 2 samples of 4 indices each\n >>> input = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])\n >>> embedding(input)\n tensor([[[-0.0251, -1.6902, 0.7172],\n [-0.6431, 0.0748, 0.6969],\n [ 1.4970, 1.3448, -0.9685],\n [-0.3677, -2.7265, -0.1685]],\n [[ 1.4970, 1.3448, -0.9685],\n [ 0.4362, -0.4004, 0.9400],\n [-0.6431, 0.0748, 0.6969],\n [ 0.9124, -2.3616, 1.1151]]])\n >>> # example with padding_idx\n >>> embedding = nn.Embedding(10, 3, padding_idx=0)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"}
{"text": ">>> input = torch.LongTensor([[0, 2, 0, 5]])\n >>> embedding(input)\n tensor([[[ 0.0000, 0.0000, 0.0000],\n [ 0.1535, -2.0309, 0.9315],\n [ 0.0000, 0.0000, 0.0000],\n [-0.1655, 0.9897, 0.0635]]])\n >>> # example of changing pad vector\n >>> padding_idx = 0\n >>> embedding = nn.Embedding(3, 3, padding_idx=padding_idx)\n >>> embedding.weight\n Parameter containing:\n tensor([[ 0.0000, 0.0000, 0.0000],\n [-0.7895, -0.7089, -0.0364],\n [ 0.6778, 0.5803, 0.2678]], requires_grad=True)\n >>> with torch.no_grad():\n ... embedding.weight[padding_idx] = torch.ones(3)\n >>> embedding.weight\n Parameter containing:\n tensor([[ 1.0000, 1.0000, 1.0000],\n [-0.7895, -0.7089, -0.0364],\n [ 0.6778, 0.5803, 0.2678]], requires_grad=True)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"}
{"text": "classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)\n Creates Embedding instance from given 2-dimensional FloatTensor.\n Parameters:\n * embeddings (Tensor) -- FloatTensor containing weights\n for the Embedding. First dimension is being passed to\n Embedding as \"num_embeddings\", second as \"embedding_dim\".\n * freeze (bool, optional) -- If \"True\", the tensor\n does not get updated in the learning process. Equivalent to\n \"embedding.weight.requires_grad = False\". Default: \"True\"\n * padding_idx (int, optional) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not\n updated during training, i.e. it remains as a fixed \"pad\".\n * max_norm (float, optional) -- See module", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"}
{"text": "initialization documentation.\n * norm_type (float, optional) -- See module\n initialization documentation. Default \"2\".\n * scale_grad_by_freq (bool, optional) -- See module\n initialization documentation. Default \"False\".\n * sparse (bool, optional) -- See module\n initialization documentation.\n Examples:\n >>> # FloatTensor containing pretrained weights\n >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])\n >>> embedding = nn.Embedding.from_pretrained(weight)\n >>> # Get embeddings for index 1\n >>> input = torch.LongTensor([1])\n >>> embedding(input)\n tensor([[ 4.0000, 5.1000, 6.3000]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"}
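A self-contained sketch of from_pretrained (assuming a working PyTorch install): the first dimension of the weight tensor becomes num_embeddings, the second embedding_dim, and the table is frozen by default.

```python
import torch
import torch.nn as nn

# Two pretrained 3-dimensional embedding vectors.
weight = torch.tensor([[1.0, 2.3, 3.0], [4.0, 5.1, 6.3]])
embedding = nn.Embedding.from_pretrained(weight)  # freeze=True by default

row = embedding(torch.tensor([1]))  # look up the embedding at index 1
```

Passing freeze=False instead would leave embedding.weight trainable.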
{"text": "torch.Tensor.amaxTensor.amax(dim=None, keepdim=False) -> Tensor\n See \"torch.amax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.amax.html", "category": "pytorch docs"}
{"text": "torch.sparse_csc_tensortorch.sparse_csc_tensor(ccol_indices, row_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\n Constructs a sparse tensor in CSC (Compressed Sparse Column) with\n specified values at the given \"ccol_indices\" and \"row_indices\".\n Sparse matrix multiplication operations in CSC format are typically\n faster than that for sparse tensors in COO format. Make sure you\n have a look at the note on the data type of the indices.\n Note:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n Parameters:\n * ccol_indices (array_like) -- (B+1)-dimensional array of\n size \"(batchsize, ncols + 1)\". The last element of each", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html", "category": "pytorch docs"}
{"text": "batch is the number of non-zeros. This tensor encodes the\n index in values and row_indices depending on where the given\n column starts. Each successive number in the tensor subtracted\n by the number before it denotes the number of elements in a\n given column.\n * row_indices (array_like) -- Row co-ordinates of each\n element in values. (B+1)-dimensional tensor with the same\n length as values.\n * values (array_like) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other types\n that represent a (1+K)-dimensional tensor where \"K\" is the\n number of dense dimensions.\n * size (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(batchsize, nrows, ncols, densesize)\". If\n not provided, the size will be inferred as the minimum size\n big enough to hold all non-zero elements.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * check_invariants (bool, optional) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n Example::\n >>> ccol_indices = [0, 2, 4]\n >>> row_indices = [0, 1, 0, 1]\n >>> values = [1, 2, 3, 4]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html", "category": "pytorch docs"}
{"text": ">>> values = [1, 2, 3, 4]\n >>> torch.sparse_csc_tensor(torch.tensor(ccol_indices, dtype=torch.int64),\n ... torch.tensor(row_indices, dtype=torch.int64),\n ... torch.tensor(values), dtype=torch.double)\n tensor(ccol_indices=tensor([0, 2, 4]),\n row_indices=tensor([0, 1, 0, 1]),\n values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,\n dtype=torch.float64, layout=torch.sparse_csc)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html", "category": "pytorch docs"}
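To make the CSC encoding above concrete (a sketch, assuming a working PyTorch install): column 0 spans entries [0, 2) of values/row_indices and column 1 spans [2, 4), so densifying recovers a 2x2 matrix laid out column by column.

```python
import torch

ccol_indices = torch.tensor([0, 2, 4])      # column 0: slots 0-1, column 1: slots 2-3
row_indices = torch.tensor([0, 1, 0, 1])    # row of each stored value
values = torch.tensor([1.0, 2.0, 3.0, 4.0])

csc = torch.sparse_csc_tensor(ccol_indices, row_indices, values, size=(2, 2))
dense = csc.to_dense()  # column 0 -> [1, 2], column 1 -> [3, 4]
```

Note the values fill column-major: the dense result is [[1, 3], [2, 4]].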
{"text": "torch.Tensor.traceTensor.trace() -> Tensor\n See \"torch.trace()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.trace.html", "category": "pytorch docs"}
{"text": "torch.cuda.memory_summarytorch.cuda.memory_summary(device=None, abbreviated=False)\n Returns a human-readable printout of the current memory allocator\n statistics for a given device.\n This can be useful to display periodically during training, or when\n handling out-of-memory exceptions.\n Parameters:\n * device (torch.device or int, optional) --\n selected device. Returns printout for the current device,\n given by \"current_device()\", if \"device\" is \"None\" (default).\n * abbreviated (bool, optional) -- whether to return an\n abbreviated summary (default: False).\n Return type:\n str\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_summary.html", "category": "pytorch docs"}
{"text": "torch.difftorch.diff(input, n=1, dim=-1, prepend=None, append=None) -> Tensor\n Computes the n-th forward difference along the given dimension.\n The first-order differences are given by out[i] = input[i + 1] -\n input[i]. Higher-order differences are calculated by using\n \"torch.diff()\" recursively.\n Parameters:\n * input (Tensor) -- the tensor to compute the differences\n on\n * n (int, optional) -- the number of times to\n recursively compute the difference\n * dim (int, optional) -- the dimension to compute the\n difference along. Default is the last dimension.\n * prepend (Tensor, optional) -- values to prepend or\n append to \"input\" along \"dim\" before computing the difference.\n Their dimensions must be equivalent to that of input, and\n their shapes must match input's shape except on \"dim\".\n * append (Tensor, optional) -- values to prepend or", "source": "https://pytorch.org/docs/stable/generated/torch.diff.html", "category": "pytorch docs"}
{"text": "append to \"input\" along \"dim\" before computing the difference.\n Their dimensions must be equivalent to that of input, and\n their shapes must match input's shape except on \"dim\".\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([1, 3, 2])\n >>> torch.diff(a)\n tensor([ 2, -1])\n >>> b = torch.tensor([4, 5])\n >>> torch.diff(a, append=b)\n tensor([ 2, -1, 2, 1])\n >>> c = torch.tensor([[1, 2, 3], [3, 4, 5]])\n >>> torch.diff(c, dim=0)\n tensor([[2, 2, 2]])\n >>> torch.diff(c, dim=1)\n tensor([[1, 1],\n [1, 1]])", "source": "https://pytorch.org/docs/stable/generated/torch.diff.html", "category": "pytorch docs"}
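The recursive definition above can be checked directly (a sketch, assuming a working PyTorch install): a second-order difference is just diff applied to the first-order result.

```python
import torch

a = torch.tensor([1, 3, 2])
d1 = torch.diff(a)         # [3 - 1, 2 - 3] = [2, -1]
d2 = torch.diff(a, n=2)    # diff applied twice: diff([2, -1]) = [-3]
```

Equivalently, torch.diff(a, n=2) matches torch.diff(torch.diff(a)).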
{"text": "torch.Tensor.eq_Tensor.eq_(other) -> Tensor\n In-place version of \"eq()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.eq_.html", "category": "pytorch docs"}
{"text": "torch.addbmmtorch.addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) -> Tensor\n Performs a batch matrix-matrix product of matrices stored in\n \"batch1\" and \"batch2\", with a reduced add step (all matrix\n multiplications get accumulated along the first dimension). \"input\"\n is added to the final result.\n \"batch1\" and \"batch2\" must be 3-D tensors each containing the same\n number of matrices.\n If \"batch1\" is a (b \\times n \\times m) tensor, \"batch2\" is a (b\n \\times m \\times p) tensor, \"input\" must be broadcastable with a (n\n \\times p) tensor and \"out\" will be a (n \\times p) tensor.\n out = \\beta\\ \\text{input} + \\alpha\\ (\\sum_{i=0}^{b-1}\n \\text{batch1}_i \\mathbin{@} \\text{batch2}_i)\n If \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\n For inputs of type FloatTensor or DoubleTensor, arguments\n \"beta\" and \"alpha\" must be real numbers, otherwise they should be\n integers.\n This operator supports TensorFloat32.", "source": "https://pytorch.org/docs/stable/generated/torch.addbmm.html", "category": "pytorch docs"}
{"text": "This operator supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n Parameters:\n * batch1 (Tensor) -- the first batch of matrices to be\n multiplied\n * batch2 (Tensor) -- the second batch of matrices to be\n multiplied\n Keyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * input (Tensor) -- matrix to be added\n * alpha (Number, optional) -- multiplier for batch1 @\n batch2 (\\alpha)\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> M = torch.randn(3, 5)\n >>> batch1 = torch.randn(10, 3, 4)\n >>> batch2 = torch.randn(10, 4, 5)\n >>> torch.addbmm(M, batch1, batch2)\n tensor([[ 6.6311, 0.0503, 6.9768, -12.0362, -2.1653],\n [ -4.8185, -1.4255, -6.6760, 8.9453, 2.5743],\n [ -3.8202, 4.3691, 1.0943, -1.1109, 5.4730]])", "source": "https://pytorch.org/docs/stable/generated/torch.addbmm.html", "category": "pytorch docs"}
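The reduced-add formula above can be reproduced with bmm plus a sum (a sketch, assuming a working PyTorch install), which makes the accumulation along the batch dimension explicit:

```python
import torch

M = torch.randn(3, 5)
batch1 = torch.randn(10, 3, 4)
batch2 = torch.randn(10, 4, 5)

out = torch.addbmm(M, batch1, batch2)  # beta=1, alpha=1 by default
# Equivalent: sum the per-batch matmuls over dim 0, then add M.
manual = M + torch.bmm(batch1, batch2).sum(dim=0)
```

Unlike baddbmm, the batch dimension is reduced away: the result is (n, p), not (b, n, p).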
{"text": "ScriptModuleclass torch.jit.ScriptModule\n A wrapper around C++ \"torch::jit::Module\". \"ScriptModule\"s contain\n methods, attributes, parameters, and constants. These can be\n accessed the same way as on a normal \"nn.Module\".\n add_module(name, module)\n Adds a child module to the current module.\n The module can be accessed as an attribute using the given name.\n Parameters:\n * name (str) -- name of the child module. The child\n module can be accessed from this module using the given\n name\n * module (Module) -- child module to be added to the\n module.\n apply(fn)\n Applies \"fn\" recursively to every submodule (as returned by\n \".children()\") as well as self. Typical use includes\n initializing the parameters of a model (see also torch.nn.init).\n Parameters:\n fn (\"Module\" -> None) -- function to be applied to each\n submodule\n Returns:\n self\n Return type:\n Module", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "self\n Return type:\n Module\n Example:\n >>> @torch.no_grad()\n >>> def init_weights(m):\n >>> print(m)\n >>> if type(m) == nn.Linear:\n >>> m.weight.fill_(1.0)\n >>> print(m.weight)\n >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))\n >>> net.apply(init_weights)\n Linear(in_features=2, out_features=2, bias=True)\n Parameter containing:\n tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n Linear(in_features=2, out_features=2, bias=True)\n Parameter containing:\n tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n )\n bfloat16()\n Casts all floating point parameters and buffers to \"bfloat16\"\n datatype.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "datatype.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n buffers(recurse=True)\n Returns an iterator over module buffers.\n Parameters:\n recurse (bool) -- if True, then yields buffers of this\n module and all submodules. Otherwise, yields only buffers\n that are direct members of this module.\n Yields:\n torch.Tensor -- module buffer\n Return type:\n Iterator[Tensor]\n Example:\n >>> for buf in model.buffers():\n >>> print(type(buf), buf.size())\n <class 'torch.Tensor'> (20L,)\n <class 'torch.Tensor'> (20L, 1L, 5L, 5L)\n children()\n Returns an iterator over immediate children modules.\n Yields:\n Module -- a child module\n Return type:\n Iterator[Module]\n property code\n Returns a pretty-printed representation (as valid Python syntax)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "of the internal graph for the \"forward\" method. See Inspecting\n Code for details.\n property code_with_constants\n Returns a tuple of:\n [0] a pretty-printed representation (as valid Python syntax) of\n the internal graph for the \"forward\" method. See code. [1] a\n ConstMap following the CONSTANT.cN format of the output in [0].\n The indices in the [0] output are keys to the underlying\n constant's values.\n See Inspecting Code for details.\n cpu()\n Moves all model parameters and buffers to the CPU.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n cuda(device=None)\n Moves all model parameters and buffers to the GPU.\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on GPU while being optimized.\n Note:\n This method modifies the module in-place.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "This method modifies the module in-place.\n Parameters:\n device (int, optional) -- if specified, all\n parameters will be copied to that device\n Returns:\n self\n Return type:\n Module\n double()\n Casts all floating point parameters and buffers to \"double\"\n datatype.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n eval()\n Sets the module in evaluation mode.\n This has an effect only on certain modules. See documentation\n of particular modules for details of their behaviors in\n training/evaluation mode, if they are affected, e.g. \"Dropout\",\n \"BatchNorm\", etc.\n This is equivalent to \"self.train(False)\".\n See Locally disabling gradient computation for a comparison\n between .eval() and several similar mechanisms that may be\n confused with it.\n Returns:\n self\n Return type:\n Module", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "self\n Return type:\n Module\n extra_repr()\n Set the extra representation of the module\n To print customized extra information, you should re-implement\n this method in your own modules. Both single-line and multi-line\n strings are acceptable.\n Return type:\n str\n float()\n Casts all floating point parameters and buffers to \"float\"\n datatype.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n get_buffer(target)\n Returns the buffer given by \"target\" if it exists, otherwise\n throws an error.\n See the docstring for \"get_submodule\" for a more detailed\n explanation of this method's functionality as well as how to\n correctly specify \"target\".\n Parameters:\n target (str) -- The fully-qualified string name of the\n buffer to look for. (See \"get_submodule\" for how to specify a\n fully-qualified string.)\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "fully-qualified string.)\n Returns:\n The buffer referenced by \"target\"\n Return type:\n torch.Tensor\n Raises:\n AttributeError -- If the target string references an\n invalid path or resolves to something that is not a\n buffer\n get_extra_state()\n Returns any extra state to include in the module's state_dict.\n Implement this and a corresponding \"set_extra_state()\" for your\n module if you need to store extra state. This function is called\n when building the module's state_dict().\n Note that extra state should be picklable to ensure working\n serialization of the state_dict. We only provide provide\n backwards compatibility guarantees for serializing Tensors;\n other objects may break backwards compatibility if their\n serialized pickled form changes.\n Returns:\n Any extra state to store in the module's state_dict\n Return type:\n object\n get_parameter(target)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "object\n get_parameter(target)\n Returns the parameter given by \"target\" if it exists, otherwise\n throws an error.\n See the docstring for \"get_submodule\" for a more detailed\n explanation of this method's functionality as well as how to\n correctly specify \"target\".\n Parameters:\n target (str) -- The fully-qualified string name of the\n Parameter to look for. (See \"get_submodule\" for how to\n specify a fully-qualified string.)\n Returns:\n The Parameter referenced by \"target\"\n Return type:\n torch.nn.Parameter\n Raises:\n AttributeError -- If the target string references an\n invalid path or resolves to something that is not an\n \"nn.Parameter\"\n get_submodule(target)\n Returns the submodule given by \"target\" if it exists, otherwise\n throws an error.\n For example, let's say you have an \"nn.Module\" \"A\" that looks\n like this:\n A(\n (net_b): Module(", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "A(\n (net_b): Module(\n (net_c): Module(\n (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))\n )\n (linear): Linear(in_features=100, out_features=200, bias=True)\n )\n )\n (The diagram shows an \"nn.Module\" \"A\". \"A\" has a nested\n submodule \"net_b\", which itself has two submodules \"net_c\" and\n \"linear\". \"net_c\" then has a submodule \"conv\".)\n To check whether or not we have the \"linear\" submodule, we would\n call \"get_submodule(\"net_b.linear\")\". To check whether we have\n the \"conv\" submodule, we would call\n \"get_submodule(\"net_b.net_c.conv\")\".\n The runtime of \"get_submodule\" is bounded by the degree of\n module nesting in \"target\". A query against \"named_modules\"\n achieves the same result, but it is O(N) in the number of\n transitive modules. So, for a simple check to see if some\n submodule exists, \"get_submodule\" should always be used.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "Parameters:\n target (str) -- The fully-qualified string name of the\n submodule to look for. (See above example for how to specify\n a fully-qualified string.)\n Returns:\n The submodule referenced by \"target\"\n Return type:\n torch.nn.Module\n Raises:\n AttributeError -- If the target string references an\n invalid path or resolves to something that is not an\n \"nn.Module\"\n property graph\n Returns a string representation of the internal graph for the\n \"forward\" method. See Interpreting Graphs for details.\n half()\n Casts all floating point parameters and buffers to \"half\"\n datatype.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n property inlined_graph\n Returns a string representation of the internal graph for the\n \"forward\" method. This graph will be preprocessed to inline all", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "function and method calls. See Interpreting Graphs for details.\n ipu(device=None)\n Moves all model parameters and buffers to the IPU.\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on IPU while being optimized.\n Note:\n This method modifies the module in-place.\n Parameters:\n device (int, optional) -- if specified, all\n parameters will be copied to that device\n Returns:\n self\n Return type:\n Module\n load_state_dict(state_dict, strict=True)\n Copies parameters and buffers from \"state_dict\" into this module\n and its descendants. If \"strict\" is \"True\", then the keys of\n \"state_dict\" must exactly match the keys returned by this\n module's \"state_dict()\" function.\n Parameters:\n * state_dict (dict) -- a dict containing parameters and\n persistent buffers.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "persistent buffers.\n * strict (bool, optional) -- whether to strictly\n enforce that the keys in \"state_dict\" match the keys\n returned by this module's \"state_dict()\" function. Default:\n \"True\"\n Returns:\n * missing_keys is a list of str containing the missing\n keys\n * unexpected_keys is a list of str containing the\n unexpected keys\n Return type:\n \"NamedTuple\" with \"missing_keys\" and \"unexpected_keys\" fields\n Note:\n If a parameter or buffer is registered as \"None\" and its\n corresponding key exists in \"state_dict\", \"load_state_dict()\"\n will raise a \"RuntimeError\".\n modules()\n Returns an iterator over all modules in the network.\n Yields:\n Module -- a module in the network\n Return type:\n Iterator[Module]\n Note:\n Duplicate modules are returned only once. In the following\n example, \"l\" will be returned only once.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "example, \"l\" will be returned only once.\n Example:\n >>> l = nn.Linear(2, 2)\n >>> net = nn.Sequential(l, l)\n >>> for idx, m in enumerate(net.modules()):\n ... print(idx, '->', m)\n 0 -> Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n )\n 1 -> Linear(in_features=2, out_features=2, bias=True)\n named_buffers(prefix='', recurse=True, remove_duplicate=True)\n Returns an iterator over module buffers, yielding both the name\n of the buffer as well as the buffer itself.\n Parameters:\n * prefix (str) -- prefix to prepend to all buffer\n names.\n * recurse (bool, optional) -- if True, then yields\n buffers of this module and all submodules. Otherwise,\n yields only buffers that are direct members of this module.\n Defaults to True.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "Defaults to True.\n * remove_duplicate (bool, optional) -- whether to\n remove the duplicated buffers in the result. Defaults to\n True.\n Yields:\n (str, torch.Tensor) -- Tuple containing the name and buffer\n Return type:\n Iterator[Tuple[str, Tensor]]\n Example:\n >>> for name, buf in self.named_buffers():\n >>> if name in ['running_var']:\n >>> print(buf.size())\n named_children()\n Returns an iterator over immediate children modules, yielding\n both the name of the module as well as the module itself.\n Yields:\n (str, Module) -- Tuple containing a name and child module\n Return type:\n Iterator[Tuple[str, Module]]\n Example:\n >>> for name, module in model.named_children():\n >>> if name in ['conv4', 'conv5']:\n >>> print(module)\n named_modules(memo=None, prefix='', remove_duplicate=True)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "Returns an iterator over all modules in the network, yielding\n both the name of the module as well as the module itself.\n Parameters:\n * memo (Optional[Set[Module]]) -- a memo to\n store the set of modules already added to the result\n * prefix (str) -- a prefix that will be added to the\n name of the module\n * remove_duplicate (bool) -- whether to remove the\n duplicated module instances in the result or not\n Yields:\n (str, Module) -- Tuple of name and module\n Note:\n Duplicate modules are returned only once. In the following\n example, \"l\" will be returned only once.\n Example:\n >>> l = nn.Linear(2, 2)\n >>> net = nn.Sequential(l, l)\n >>> for idx, m in enumerate(net.named_modules()):\n ... print(idx, '->', m)\n 0 -> ('', Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "(1): Linear(in_features=2, out_features=2, bias=True)\n ))\n 1 -> ('0', Linear(in_features=2, out_features=2, bias=True))\n named_parameters(prefix='', recurse=True, remove_duplicate=True)\n Returns an iterator over module parameters, yielding both the\n name of the parameter as well as the parameter itself.\n Parameters:\n * prefix (str) -- prefix to prepend to all parameter\n names.\n * recurse (bool) -- if True, then yields parameters of\n this module and all submodules. Otherwise, yields only\n parameters that are direct members of this module.\n * remove_duplicate (bool, optional) -- whether to\n remove the duplicated parameters in the result. Defaults to\n True.\n Yields:\n (str, Parameter) -- Tuple containing the name and parameter\n Return type:\n Iterator[Tuple[str, Parameter]]\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "Example:\n >>> for name, param in self.named_parameters():\n >>> if name in ['bias']:\n >>> print(param.size())\n parameters(recurse=True)\n Returns an iterator over module parameters.\n This is typically passed to an optimizer.\n Parameters:\n recurse (bool) -- if True, then yields parameters of\n this module and all submodules. Otherwise, yields only\n parameters that are direct members of this module.\n Yields:\n Parameter -- module parameter\n Return type:\n Iterator[Parameter]\n Example:\n >>> for param in model.parameters():\n >>> print(type(param), param.size())\n (20L,)\n (20L, 1L, 5L, 5L)\n register_backward_hook(hook)\n Registers a backward hook on the module.\n This function is deprecated in favor of\n \"register_full_backward_hook()\" and the behavior of this", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "function will change in future versions.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_buffer(name, tensor, persistent=True)\n Adds a buffer to the module.\n This is typically used to register a buffer that should not to\n be considered a model parameter. For example, BatchNorm's\n \"running_mean\" is not a parameter, but is part of the module's\n state. Buffers, by default, are persistent and will be saved\n alongside parameters. This behavior can be changed by setting\n \"persistent\" to \"False\". The only difference between a\n persistent buffer and a non-persistent buffer is that the latter\n will not be a part of this module's \"state_dict\".\n Buffers can be accessed as attributes using given names.\n Parameters:\n * name (str) -- name of the buffer. The buffer can be", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "accessed from this module using the given name\n * tensor (Tensor or None) -- buffer to be\n registered. If \"None\", then operations that run on buffers,\n such as \"cuda\", are ignored. If \"None\", the buffer is\n not included in the module's \"state_dict\".\n * persistent (bool) -- whether the buffer is part of\n this module's \"state_dict\".\n Example:\n >>> self.register_buffer('running_mean', torch.zeros(num_features))\n register_forward_hook(hook, *, prepend=False, with_kwargs=False)\n Registers a forward hook on the module.\n The hook will be called every time after \"forward()\" has\n computed an output.\n If \"with_kwargs\" is \"False\" or not specified, the input contains\n only the positional arguments given to the module. Keyword\n arguments won't be passed to the hooks and only to the\n \"forward\". The hook can modify the output. It can modify the", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "input inplace but it will not have effect on forward since this\n is called after \"forward()\" is called. The hook should have the\n following signature:\n hook(module, args, output) -> None or modified output\n If \"with_kwargs\" is \"True\", the forward hook will be passed the\n \"kwargs\" given to the forward function and be expected to return\n the output possibly modified. The hook should have the following\n signature:\n hook(module, args, kwargs, output) -> None or modified output\n Parameters:\n * hook (Callable) -- The user defined hook to be\n registered.\n * prepend (bool) -- If \"True\", the provided \"hook\" will\n be fired before all existing \"forward\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"forward\" hooks on this\n \"torch.nn.modules.Module\". Note that global \"forward\" hooks", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "registered with \"register_module_forward_hook()\" will fire\n before all hooks registered by this method. Default:\n \"False\"\n * with_kwargs (bool) -- If \"True\", the \"hook\" will be\n passed the kwargs given to the forward function. Default:\n \"False\"\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_forward_pre_hook(hook, *, prepend=False, with_kwargs=False)\n Registers a forward pre-hook on the module.\n The hook will be called every time before \"forward()\" is\n invoked.\n If \"with_kwargs\" is false or not specified, the input contains\n only the positional arguments given to the module. Keyword\n arguments won't be passed to the hooks and only to the\n \"forward\". The hook can modify the input. User can either return\n a tuple or a single modified value in the hook. We will wrap the", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "value into a tuple if a single value is returned (unless that\n value is already a tuple). The hook should have the following\n signature:\n hook(module, args) -> None or modified input\n If \"with_kwargs\" is true, the forward pre-hook will be passed\n the kwargs given to the forward function. And if the hook\n modifies the input, both the args and kwargs should be returned.\n The hook should have the following signature:\n hook(module, args, kwargs) -> None or a tuple of modified input and kwargs\n Parameters:\n * hook (Callable) -- The user defined hook to be\n registered.\n * prepend (bool) -- If true, the provided \"hook\" will\n be fired before all existing \"forward_pre\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"forward_pre\" hooks on\n this \"torch.nn.modules.Module\". Note that global", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "\"forward_pre\" hooks registered with\n \"register_module_forward_pre_hook()\" will fire before all\n hooks registered by this method. Default: \"False\"\n * with_kwargs (bool) -- If true, the \"hook\" will be\n passed the kwargs given to the forward function. Default:\n \"False\"\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_full_backward_hook(hook, prepend=False)\n Registers a backward hook on the module.\n The hook will be called every time the gradients with respect to\n a module are computed, i.e. the hook will execute if and only if\n the gradients with respect to module outputs are computed. The\n hook should have the following signature:\n hook(module, grad_input, grad_output) -> tuple(Tensor) or None\n The \"grad_input\" and \"grad_output\" are tuples that contain the", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "gradients with respect to the inputs and outputs respectively.\n The hook should not modify its arguments, but it can optionally\n return a new gradient with respect to the input that will be\n used in place of \"grad_input\" in subsequent computations.\n \"grad_input\" will only correspond to the inputs given as\n positional arguments and all kwarg arguments are ignored.\n Entries in \"grad_input\" and \"grad_output\" will be \"None\" for all\n non-Tensor arguments.\n For technical reasons, when this hook is applied to a Module,\n its forward function will receive a view of each Tensor passed\n to the Module. Similarly the caller will receive a view of each\n Tensor returned by the Module's forward function.\n Warning:\n Modifying inputs or outputs inplace is not allowed when using\n backward hooks and will raise an error.\n Parameters:\n * hook (Callable) -- The user-defined hook to be\n registered.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "registered.\n * prepend (bool) -- If true, the provided \"hook\" will\n be fired before all existing \"backward\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"backward\" hooks on this\n \"torch.nn.modules.Module\". Note that global \"backward\"\n hooks registered with\n \"register_module_full_backward_hook()\" will fire before all\n hooks registered by this method.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_full_backward_pre_hook(hook, prepend=False)\n Registers a backward pre-hook on the module.\n The hook will be called every time the gradients for the module\n are computed. The hook should have the following signature:\n hook(module, grad_output) -> Tensor or None", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "The \"grad_output\" is a tuple. The hook should not modify its\n arguments, but it can optionally return a new gradient with\n respect to the output that will be used in place of\n \"grad_output\" in subsequent computations. Entries in\n \"grad_output\" will be \"None\" for all non-Tensor arguments.\n For technical reasons, when this hook is applied to a Module,\n its forward function will receive a view of each Tensor passed\n to the Module. Similarly the caller will receive a view of each\n Tensor returned by the Module's forward function.\n Warning:\n Modifying inputs inplace is not allowed when using backward\n hooks and will raise an error.\n Parameters:\n * hook (Callable) -- The user-defined hook to be\n registered.\n * prepend (bool) -- If true, the provided \"hook\" will\n be fired before all existing \"backward_pre\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "will be fired after all existing \"backward_pre\" hooks on\n this \"torch.nn.modules.Module\". Note that global\n \"backward_pre\" hooks registered with\n \"register_module_full_backward_pre_hook()\" will fire before\n all hooks registered by this method.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_load_state_dict_post_hook(hook)\n Registers a post hook to be run after module's \"load_state_dict\"\n is called.\n It should have the following signature::\n hook(module, incompatible_keys) -> None\n The \"module\" argument is the current module that this hook is\n registered on, and the \"incompatible_keys\" argument is a\n \"NamedTuple\" consisting of attributes \"missing_keys\" and\n \"unexpected_keys\". \"missing_keys\" is a \"list\" of \"str\"", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "containing the missing keys and \"unexpected_keys\" is a \"list\" of\n \"str\" containing the unexpected keys.\n The given incompatible_keys can be modified inplace if needed.\n Note that the checks performed when calling \"load_state_dict()\"\n with \"strict=True\" are affected by modifications the hook makes\n to \"missing_keys\" or \"unexpected_keys\", as expected. Additions\n to either set of keys will result in an error being thrown when\n \"strict=True\", and clearing out both missing and unexpected keys\n will avoid an error.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_module(name, module)\n Alias for \"add_module()\".\n register_parameter(name, param)\n Adds a parameter to the module.\n The parameter can be accessed as an attribute using given name.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "Parameters:\n * name (str) -- name of the parameter. The parameter\n can be accessed from this module using the given name\n * param (Parameter or None) -- parameter to be\n added to the module. If \"None\", then operations that run on\n parameters, such as \"cuda\", are ignored. If \"None\", the\n parameter is not included in the module's \"state_dict\".\n register_state_dict_pre_hook(hook)\n These hooks will be called with arguments: \"self\", \"prefix\", and\n \"keep_vars\" before calling \"state_dict\" on \"self\". The\n registered hooks can be used to perform pre-processing before\n the \"state_dict\" call is made.\n requires_grad_(requires_grad=True)\n Change if autograd should record operations on parameters in\n this module.\n This method sets the parameters' \"requires_grad\" attributes in-\n place.\n This method is helpful for freezing part of the module for", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "finetuning or training parts of a model individually (e.g., GAN\n training).\n See Locally disabling gradient computation for a comparison\n between .requires_grad_() and several similar mechanisms that\n may be confused with it.\n Parameters:\n requires_grad (bool) -- whether autograd should record\n operations on parameters in this module. Default: \"True\".\n Returns:\n self\n Return type:\n Module\n save(f, extra_files={})\n See \"torch.jit.save\" for details.\n set_extra_state(state)\n This function is called from \"load_state_dict()\" to handle any\n extra state found within the state_dict. Implement this\n function and a corresponding \"get_extra_state()\" for your module\n if you need to store extra state within its state_dict.\n Parameters:\n state (dict) -- Extra state from the state_dict\n share_memory()\n See \"torch.Tensor.share_memory()\"\n Return type:\n T", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "Return type:\n T\n state_dict(args, destination=None, prefix='', keep_vars=False)\n Returns a dictionary containing references to the whole state of\n the module.\n Both parameters and persistent buffers (e.g. running averages)\n are included. Keys are corresponding parameter and buffer names.\n Parameters and buffers set to \"None\" are not included.\n Note:\n The returned object is a shallow copy. It contains references\n to the module's parameters and buffers.\n Warning:\n Currently \"state_dict()\" also accepts positional arguments for\n \"destination\", \"prefix\" and \"keep_vars\" in order. However,\n this is being deprecated and keyword arguments will be\n enforced in future releases.\n Warning:\n Please avoid the use of argument \"destination\" as it is not\n designed for end-users.\n Parameters:\n * destination (dict, optional*) -- If provided, the", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "state of module will be updated into the dict and the same\n object is returned. Otherwise, an \"OrderedDict\" will be\n created and returned. Default: \"None\".\n * prefix (str, optional) -- a prefix added to\n parameter and buffer names to compose the keys in\n state_dict. Default: \"''\".\n * keep_vars (bool, optional) -- by default the\n \"Tensor\" s returned in the state dict are detached from\n autograd. If it's set to \"True\", detaching will not be\n performed. Default: \"False\".\n Returns:\n a dictionary containing a whole state of the module\n Return type:\n dict\n Example:\n >>> module.state_dict().keys()\n ['bias', 'weight']\n to(args, *kwargs)\n Moves and/or casts the parameters and buffers.\n This can be called as\n to(device=None, dtype=None, non_blocking=False)\n to(dtype, non_blocking=False)\n to(tensor, non_blocking=False)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "to(tensor, non_blocking=False)\n to(memory_format=torch.channels_last)\n Its signature is similar to \"torch.Tensor.to()\", but only\n accepts floating point or complex \"dtype\"s. In addition, this\n method will only cast the floating point or complex parameters\n and buffers to \"dtype\" (if given). The integral parameters and\n buffers will be moved \"device\", if that is given, but with\n dtypes unchanged. When \"non_blocking\" is set, it tries to\n convert/move asynchronously with respect to the host if\n possible, e.g., moving CPU Tensors with pinned memory to CUDA\n devices.\n See below for examples.\n Note:\n This method modifies the module in-place.\n Parameters:\n * device (\"torch.device\") -- the desired device of the\n parameters and buffers in this module\n * dtype (\"torch.dtype\") -- the desired floating point or\n complex dtype of the parameters and buffers in this module", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "\ntensor (torch.Tensor) -- Tensor whose dtype and\n device are the desired dtype and device for all parameters\n and buffers in this module\n * memory_format (\"torch.memory_format\") -- the desired\n memory format for 4D parameters and buffers in this module\n (keyword only argument)\n Returns:\n self\n Return type:\n Module\n Examples:\n >>> linear = nn.Linear(2, 2)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1913, -0.3420],\n [-0.5113, -0.2325]])\n >>> linear.to(torch.double)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1913, -0.3420],\n [-0.5113, -0.2325]], dtype=torch.float64)\n >>> gpu1 = torch.device(\"cuda:1\")\n >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)\n Linear(in_features=2, out_features=2, bias=True)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "\n\n\nlinear.weight\n Parameter containing:\n tensor([[ 0.1914, -0.3420],\n [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')\n >>> cpu = torch.device(\"cpu\")\n >>> linear.to(cpu)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1914, -0.3420],\n [-0.5112, -0.2324]], dtype=torch.float16)\n >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.3741+0.j, 0.2382+0.j],\n [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)\n >>> linear(torch.ones(3, 2, dtype=torch.cdouble))\n tensor([[0.6122+0.j, 0.1150+0.j],\n [0.6122+0.j, 0.1150+0.j],\n [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)\n to_empty(*, device)\n Moves the parameters and buffers to the specified device without\n copying storage.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "copying storage.\n Parameters:\n device (\"torch.device\") -- The desired device of the\n parameters and buffers in this module.\n Returns:\n self\n Return type:\n Module\n train(mode=True)\n Sets the module in training mode.\n This has any effect only on certain modules. See documentations\n of particular modules for details of their behaviors in\n training/evaluation mode, if they are affected, e.g. \"Dropout\",\n \"BatchNorm\", etc.\n Parameters:\n mode (bool) -- whether to set training mode (\"True\") or\n evaluation mode (\"False\"). Default: \"True\".\n Returns:\n self\n Return type:\n Module\n type(dst_type)\n Casts all parameters and buffers to \"dst_type\".\n Note:\n This method modifies the module in-place.\n Parameters:\n dst_type (type or string) -- the desired type\n Returns:\n self\n Return type:\n Module\n xpu(device=None)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "Module\n xpu(device=None)\n Moves all model parameters and buffers to the XPU.\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on XPU while being optimized.\n Note:\n This method modifies the module in-place.\n Parameters:\n device (int, optional) -- if specified, all\n parameters will be copied to that device\n Returns:\n self\n Return type:\n Module\n zero_grad(set_to_none=False)\n Sets gradients of all model parameters to zero. See similar\n function under \"torch.optim.Optimizer\" for more context.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. See \"torch.optim.Optimizer.zero_grad()\"\n for details.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.avg_pool2dtorch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) -> Tensor\n Applies 2D average-pooling operation in kH \\times kW regions by\n step size sH \\times sW steps. The number of output features is\n equal to the number of input planes.\n See \"AvgPool2d\" for details and output shape.\n Parameters:\n * input -- input tensor (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)\n * kernel_size -- size of the pooling region. Can be a single\n number or a tuple (kH, kW)\n * stride -- stride of the pooling operation. Can be a single\n number or a tuple (sH, sW). Default: \"kernel_size\"\n * padding -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple (padH, padW).\n Default: 0\n * ceil_mode -- when True, will use ceil instead of floor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool2d.html", "category": "pytorch docs"}
{"text": "in the formula to compute the output shape. Default: \"False\"\n * count_include_pad -- when True, will include the zero-\n padding in the averaging calculation. Default: \"True\"\n * divisor_override -- if specified, it will be used as\n divisor, otherwise size of the pooling region will be used.\n Default: None", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool2d.html", "category": "pytorch docs"}
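The avg_pool2d parameters documented above can be checked by hand on a tiny input; a minimal sketch (the input values are arbitrary):

```python
import torch
import torch.nn.functional as F

# One batch, one channel, a 2x4 spatial map.
x = torch.tensor([[[[1., 2., 3., 4.],
                    [5., 6., 7., 8.]]]])

# Non-overlapping 2x2 windows: (1+2+5+6)/4 = 3.5 and (3+4+7+8)/4 = 5.5.
out = F.avg_pool2d(x, kernel_size=2, stride=2)
print(out)      # tensor([[[[3.5000, 5.5000]]]])

# divisor_override replaces the window size in the denominator,
# so a divisor of 1 turns averaging into summing.
out2 = F.avg_pool2d(x, kernel_size=2, stride=2, divisor_override=1)
print(out2)     # tensor([[[[14., 22.]]]])
```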
{"text": "torch.Tensor.valuesTensor.values() -> Tensor\n Return the values tensor of a sparse COO tensor.\n Warning:\n Throws an error if \"self\" is not a sparse COO tensor.\n See also \"Tensor.indices()\".\n Note:\n This method can only be called on a coalesced sparse tensor. See\n \"Tensor.coalesce()\" for details.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.values.html", "category": "pytorch docs"}
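The coalescing requirement noted above can be demonstrated with a tiny COO tensor that carries a duplicate index; a minimal sketch:

```python
import torch

# A 1-D sparse COO tensor with index 0 appearing twice.
i = torch.tensor([[0, 0, 1]])
v = torch.tensor([1., 2., 3.])
s = torch.sparse_coo_tensor(i, v, (2,))

# values()/indices() require a coalesced tensor; coalesce() merges
# duplicate entries by summing their values.
c = s.coalesce()
print(c.values())    # tensor([3., 3.])  -- the two entries at index 0 merged
print(c.indices())   # tensor([[0, 1]])
```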
{"text": "torch.Tensor.to_sparseTensor.to_sparse(sparseDims) -> Tensor\n Returns a sparse copy of the tensor. PyTorch supports sparse\n tensors in coordinate format.\n Parameters:\n sparseDims (int, optional) -- the number of sparse\n dimensions to include in the new sparse tensor\n Example:\n >>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])\n >>> d\n tensor([[ 0, 0, 0],\n [ 9, 0, 10],\n [ 0, 0, 0]])\n >>> d.to_sparse()\n tensor(indices=tensor([[1, 1],\n [0, 2]]),\n values=tensor([ 9, 10]),\n size=(3, 3), nnz=2, layout=torch.sparse_coo)\n >>> d.to_sparse(1)\n tensor(indices=tensor([[1]]),\n values=tensor([[ 9, 0, 10]]),\n size=(3, 3), nnz=1, layout=torch.sparse_coo)\n to_sparse(*, layout=None, blocksize=None, dense_dim=None) -> Tensor\n Returns a sparse tensor with the specified layout and blocksize.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html", "category": "pytorch docs"}
{"text": "If the \"self\" is strided, the number of dense dimensions could be\n specified, and a hybrid sparse tensor will be created, with\n dense_dim dense dimensions and self.dim() - 2 - dense_dim batch\n dimension.\n Note:\n If the \"self\" layout and blocksize parameters match with the\n specified layout and blocksize, return \"self\". Otherwise, return\n a sparse tensor copy of \"self\".\n Parameters:\n * layout (\"torch.layout\", optional) -- The desired sparse\n layout. One of \"torch.sparse_coo\", \"torch.sparse_csr\",\n \"torch.sparse_csc\", \"torch.sparse_bsr\", or \"torch.sparse_bsc\".\n Default: if \"None\", \"torch.sparse_coo\".\n * blocksize (list, tuple, \"torch.Size\", optional) -- Block\n size of the resulting BSR or BSC tensor. For other layouts,\n specifying the block size that is not \"None\" will result in a\n RuntimeError exception. A block size must be a tuple of\n length two such that its items evenly divide the two sparse\n dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html", "category": "pytorch docs"}
{"text": "dimensions.\n * dense_dim (int, optional) -- Number of dense\n dimensions of the resulting CSR, CSC, BSR or BSC tensor. This\n argument should be used only if \"self\" is a strided tensor,\n and must be a value between 0 and dimension of \"self\" tensor\n minus two.\n Example:\n >>> x = torch.tensor([[1, 0], [0, 0], [2, 3]])\n >>> x.to_sparse(layout=torch.sparse_coo)\n tensor(indices=tensor([[0, 2, 2],\n [0, 0, 1]]),\n values=tensor([1, 2, 3]),\n size=(3, 2), nnz=3, layout=torch.sparse_coo)\n >>> x.to_sparse(layout=torch.sparse_bsr, blocksize=(1, 2))\n tensor(crow_indices=tensor([0, 1, 1, 2]),\n col_indices=tensor([0, 0]),\n values=tensor([[[1, 0]],\n [[2, 3]]]), size=(3, 2), nnz=2, layout=torch.sparse_bsr)\n >>> x.to_sparse(layout=torch.sparse_bsr, blocksize=(2, 1))\n RuntimeError: Tensor size(-2) 3 needs to be divisible by blocksize[0] 2", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html", "category": "pytorch docs"}
{"text": ">>> x.to_sparse(layout=torch.sparse_csr, blocksize=(3, 1))\n RuntimeError: to_sparse for Strided to SparseCsr conversion does not use specified blocksize\n >>> x = torch.tensor([[[1], [0]], [[0], [0]], [[2], [3]]])\n >>> x.to_sparse(layout=torch.sparse_csr, dense_dim=1)\n tensor(crow_indices=tensor([0, 1, 1, 3]),\n col_indices=tensor([0, 0, 1]),\n values=tensor([[1],\n [2],\n [3]]), size=(3, 2, 1), nnz=3, layout=torch.sparse_csr)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ccol_indicesTensor.ccol_indices()", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ccol_indices.html", "category": "pytorch docs"}
{"text": "torch.Tensor.selectTensor.select(dim, index) -> Tensor\n See \"torch.select()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.select.html", "category": "pytorch docs"}
{"text": "Adamaxclass torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, foreach=None, *, maximize=False, differentiable=False)\n Implements Adamax algorithm (a variant of Adam based on the\n infinity norm).\n \\begin{aligned} &\\textbf{input}: \\gamma \\text{ (lr)}, \\beta_1, \\beta_2 \\text{ (betas)}, \\theta_0 \\text{ (params)}, f(\\theta) \\text{ (objective)}, \\lambda \\text{ (weight decay)}, \\epsilon \\text{ (epsilon)} \\\\ &\\textbf{initialize}: m_0 \\leftarrow 0 \\text{ (first moment)}, u_0 \\leftarrow 0 \\text{ (infinity norm)} \\\\ &\\textbf{for } t = 1 \\textbf{ to } \\ldots \\textbf{ do} \\\\ &\\quad g_t \\leftarrow \\nabla_{\\theta} f_t(\\theta_{t-1}) \\\\ &\\quad \\textbf{if } \\lambda \\neq 0", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"}
{"text": "&\\quad \\textbf{if } \\lambda \\neq 0: \\; g_t \\leftarrow g_t + \\lambda \\theta_{t-1} \\\\ &\\quad m_t \\leftarrow \\beta_1 m_{t-1} + (1 - \\beta_1) g_t \\\\ &\\quad u_t \\leftarrow \\max(\\beta_2 u_{t-1}, |g_t| + \\epsilon) \\\\ &\\quad \\theta_t \\leftarrow \\theta_{t-1} - \\frac{\\gamma m_t}{(1 - \\beta_1^t) u_t} \\\\ &\\textbf{return } \\theta_t \\end{aligned}\n For further details regarding the algorithm we refer to Adam: A\n Method for Stochastic Optimization.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 2e-3)\n * betas (Tuple[float, float], optional) --", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"}
{"text": "coefficients used for computing running averages of gradient\n and its square\n * eps (float, optional) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n * differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"}
{"text": "context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"}
{"text": "hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"}
{"text": "registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"}
{"text": "None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"}
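The set_to_none behaviors described above can be observed directly; a minimal sketch:

```python
import torch

p = torch.ones(3, requires_grad=True)
opt = torch.optim.SGD([p], lr=0.1)

(p * 2.0).sum().backward()
print(p.grad)                       # tensor([2., 2., 2.])

opt.zero_grad(set_to_none=False)    # grads become zero-filled tensors
zeroed = p.grad.clone()             # still a Tensor, all zeros
print(zeroed)                       # tensor([0., 0., 0.])

(p * 2.0).sum().backward()
opt.zero_grad(set_to_none=True)     # grads are dropped entirely
print(p.grad is None)               # True
```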
{"text": "torch._foreach_cos_torch._foreach_cos_(self: List[Tensor]) -> None\n Apply \"torch.cos()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_cos_.html", "category": "pytorch docs"}
{"text": "ModuleDictclass torch.nn.ModuleDict(modules=None)\n Holds submodules in a dictionary.\n \"ModuleDict\" can be indexed like a regular Python dictionary, but\n modules it contains are properly registered, and will be visible by\n all \"Module\" methods.\n \"ModuleDict\" is an ordered dictionary that respects\n * the order of insertion, and\n * in \"update()\", the order of the merged \"OrderedDict\", \"dict\"\n (started from Python 3.6) or another \"ModuleDict\" (the argument\n to \"update()\").\n Note that \"update()\" with other unordered mapping types (e.g.,\n Python's plain \"dict\" before Python version 3.6) does not preserve\n the order of the merged mapping.\n Parameters:\n modules (iterable, optional) -- a mapping (dictionary)\n of (string: module) or an iterable of key-value pairs of type\n (string, module)\n Example:\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleDict.html", "category": "pytorch docs"}
{"text": "super(MyModule, self).__init__()\n self.choices = nn.ModuleDict({\n 'conv': nn.Conv2d(10, 10, 3),\n 'pool': nn.MaxPool2d(3)\n })\n self.activations = nn.ModuleDict([\n ['lrelu', nn.LeakyReLU()],\n ['prelu', nn.PReLU()]\n ])\n def forward(self, x, choice, act):\n x = self.choices[choice](x)\n x = self.activations[act](x)\n return x\n clear()\n Remove all items from the ModuleDict.\n items()\n Return an iterable of the ModuleDict key/value pairs.\n Return type:\n Iterable[Tuple[str, Module]]\n keys()\n Return an iterable of the ModuleDict keys.\n Return type:\n Iterable[str]\n pop(key)\n Remove key from the ModuleDict and return its module.\n Parameters:\n key (str) -- key to pop from the ModuleDict\n Return type:\n Module\n update(modules)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleDict.html", "category": "pytorch docs"}
{"text": "Module\n update(modules)\n Update the \"ModuleDict\" with the key-value pairs from a mapping\n or an iterable, overwriting existing keys.\n Note:\n If \"modules\" is an \"OrderedDict\", a \"ModuleDict\", or an\n iterable of key-value pairs, the order of new elements in it\n is preserved.\n Parameters:\n modules (iterable) -- a mapping (dictionary) from\n string to \"Module\", or an iterable of key-value pairs of type\n (string, \"Module\")\n values()\n Return an iterable of the ModuleDict values.\n Return type:\n Iterable[Module]", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleDict.html", "category": "pytorch docs"}
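As a quick illustration of the registration behavior described above, a minimal sketch (the layer names are arbitrary):

```python
import torch.nn as nn

layers = nn.ModuleDict({
    'conv': nn.Conv2d(10, 10, 3),
    'pool': nn.MaxPool2d(3),
})
layers['act'] = nn.ReLU()      # item assignment registers the submodule too

# Because the modules are registered, Module machinery sees them:
n_params = sum(p.numel() for p in layers.parameters())
print(list(layers.keys()))     # ['conv', 'pool', 'act'] -- insertion order
print(n_params)                # 910 (10*10*3*3 weights + 10 biases)
```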
{"text": "torch.Tensor.tensor_splitTensor.tensor_split(indices_or_sections, dim=0) -> List of Tensors\n See \"torch.tensor_split()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tensor_split.html", "category": "pytorch docs"}
{"text": "OneCycleLRclass torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, three_phase=False, last_epoch=-1, verbose=False)\n Sets the learning rate of each parameter group according to the\n 1cycle learning rate policy. The 1cycle policy anneals the learning\n rate from an initial learning rate to some maximum learning rate\n and then from that maximum learning rate to some minimum learning\n rate much lower than the initial learning rate. This policy was\n initially described in the paper Super-Convergence: Very Fast\n Training of Neural Networks Using Large Learning Rates.\n The 1cycle learning rate policy changes the learning rate after\n every batch. step should be called after a batch has been used\n for training.\n This scheduler is not chainable.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"}
{"text": "This scheduler is not chainable.\n Note also that the total number of steps in the cycle can be\n determined in one of two ways (listed in order of precedence):\n 1. A value for total_steps is explicitly provided.\n 2. A number of epochs (epochs) and a number of steps per epoch\n (steps_per_epoch) are provided. In this case, the number of\n total steps is inferred by total_steps = epochs *\n steps_per_epoch\n You must either provide a value for total_steps or provide a value\n for both epochs and steps_per_epoch.\n The default behaviour of this scheduler follows the fastai\n implementation of 1cycle, which claims that \"unpublished work has\n shown even better results by using only two phases\". To mimic the\n behaviour of the original paper instead, set \"three_phase=True\".\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * max_lr (float or list) -- Upper learning rate\n boundaries in the cycle for each parameter group.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"}
{"text": "* total_steps (int) -- The total number of steps in the\n cycle. Note that if a value is not provided here, then it must\n be inferred by providing a value for epochs and\n steps_per_epoch. Default: None\n * epochs (int) -- The number of epochs to train for. This\n is used along with steps_per_epoch in order to infer the total\n number of steps in the cycle if a value for total_steps is not\n provided. Default: None\n * steps_per_epoch (int) -- The number of steps per epoch\n to train for. This is used along with epochs in order to infer\n the total number of steps in the cycle if a value for\n total_steps is not provided. Default: None\n * pct_start (float) -- The percentage of the cycle (in\n number of steps) spent increasing the learning rate. Default:\n 0.3\n * anneal_strategy (str) -- {'cos', 'linear'} Specifies the\n annealing strategy: \"cos\" for cosine annealing, \"linear\" for", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"}
{"text": "linear annealing. Default: 'cos'\n * cycle_momentum (bool) -- If \"True\", momentum is cycled\n inversely to learning rate between 'base_momentum' and\n 'max_momentum'. Default: True\n * base_momentum (float or list) -- Lower momentum\n boundaries in the cycle for each parameter group. Note that\n momentum is cycled inversely to learning rate; at the peak of\n a cycle, momentum is 'base_momentum' and learning rate is\n 'max_lr'. Default: 0.85\n * max_momentum (float or list) -- Upper momentum\n boundaries in the cycle for each parameter group.\n Functionally, it defines the cycle amplitude (max_momentum -\n base_momentum). Note that momentum is cycled inversely to\n learning rate; at the start of a cycle, momentum is\n 'max_momentum' and learning rate is 'base_lr' Default: 0.95\n * div_factor (float) -- Determines the initial learning", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"}
{"text": "rate via initial_lr = max_lr/div_factor Default: 25\n * final_div_factor (float) -- Determines the minimum\n learning rate via min_lr = initial_lr/final_div_factor\n Default: 1e4\n * three_phase (bool) -- If \"True\", use a third phase of\n the schedule to annihilate the learning rate according to\n 'final_div_factor' instead of modifying the second phase (the\n first two phases will be symmetrical about the step indicated\n by 'pct_start').\n * last_epoch (int) -- The index of the last batch. This\n parameter is used when resuming a training job. Since step()\n should be invoked after each batch instead of after each\n epoch, this number represents the total number of batches\n computed, not the total number of epochs computed. When\n last_epoch=-1, the schedule is started from the beginning.\n Default: -1\n * verbose (bool) -- If \"True\", prints a message to stdout", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"}
{"text": "for each update. Default: \"False\".\n -[ Example ]-\n >>> data_loader = torch.utils.data.DataLoader(...)\n >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\n >>> scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=len(data_loader), epochs=10)\n >>> for epoch in range(10):\n >>>     for batch in data_loader:\n >>>         train_batch(...)\n >>>         scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the scheduler's state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"}
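The total_steps, div_factor and final_div_factor relationships documented above can be checked numerically; a minimal sketch using a dummy parameter:

```python
import torch

p = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.SGD([p], lr=0.1)    # this lr is replaced by the scheduler
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1.0, total_steps=100)

init_lr = opt.param_groups[0]['lr']
print(init_lr)        # initial_lr = max_lr / div_factor = 1.0 / 25 = 0.04

lrs = []
for _ in range(99):
    opt.step()        # optimizer first ...
    sched.step()      # ... then the scheduler, once per batch
    lrs.append(opt.param_groups[0]['lr'])

print(max(lrs))       # 1.0 -- peak reached around step pct_start * total_steps
print(lrs[-1])        # ~4e-06 (= initial_lr / final_div_factor)
```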
{"text": "torch.Tensor.untyped_storageTensor.untyped_storage() -> torch.UntypedStorage\n Returns the underlying \"UntypedStorage\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.untyped_storage.html", "category": "pytorch docs"}
{"text": "InstanceNorm3dclass torch.ao.nn.quantized.InstanceNorm3d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\n This is the quantized version of \"InstanceNorm3d\".\n Additional args:\n * scale - quantization scale of the output, type: double.\n * zero_point - quantization zero point of the output, type:\n long.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.InstanceNorm3d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.avg_pool1dtorch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) -> Tensor\n Applies a 1D average pooling over an input signal composed of\n several input planes.\n See \"AvgPool1d\" for details and output shape.\n Parameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iW)\n * kernel_size -- the size of the window. Can be a single\n number or a tuple (kW,)\n * stride -- the stride of the window. Can be a single number\n or a tuple (sW,). Default: \"kernel_size\"\n * padding -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple (padW,). Default: 0\n * ceil_mode -- when True, will use ceil instead of floor\n to compute the output shape. Default: \"False\"\n * count_include_pad -- when True, will include the zero-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool1d.html", "category": "pytorch docs"}
{"text": "padding in the averaging calculation. Default: \"True\"\n Examples:\n >>> # pool of square window of size=3, stride=2\n >>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]], dtype=torch.float32)\n >>> F.avg_pool1d(input, kernel_size=3, stride=2)\n tensor([[[ 2., 4., 6.]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_not_Tensor.bitwise_not_() -> Tensor\n In-place version of \"bitwise_not()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_not_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sparse_dimTensor.sparse_dim() -> int\n Return the number of sparse dimensions in a sparse tensor \"self\".\n Note:\n Returns \"0\" if \"self\" is not a sparse tensor.\n See also \"Tensor.dense_dim()\" and hybrid tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_dim.html", "category": "pytorch docs"}
{"text": "torch.Tensor.hypotTensor.hypot(other) -> Tensor\n See \"torch.hypot()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.hypot.html", "category": "pytorch docs"}
{"text": "torch.scattertorch.scatter(input, dim, index, src) -> Tensor\n Out-of-place version of \"torch.Tensor.scatter_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.scatter.html", "category": "pytorch docs"}
{"text": "torch.swapdimstorch.swapdims(input, dim0, dim1) -> Tensor\n Alias for \"torch.transpose()\".\n This function is equivalent to NumPy's swapaxes function.\n Examples:\n >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])\n >>> x\n tensor([[[0, 1],\n [2, 3]],\n [[4, 5],\n [6, 7]]])\n >>> torch.swapdims(x, 0, 1)\n tensor([[[0, 1],\n [4, 5]],\n [[2, 3],\n [6, 7]]])\n >>> torch.swapdims(x, 0, 2)\n tensor([[[0, 4],\n [2, 6]],\n [[1, 5],\n [3, 7]]])", "source": "https://pytorch.org/docs/stable/generated/torch.swapdims.html", "category": "pytorch docs"}
{"text": "torch.Tensor.true_divide_Tensor.true_divide_(value) -> Tensor\n In-place version of \"true_divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.true_divide_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fmaxTensor.fmax(other) -> Tensor\n See \"torch.fmax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fmax.html", "category": "pytorch docs"}
{"text": "torch.is_storagetorch.is_storage(obj)\n Returns True if obj is a PyTorch storage object.\n Parameters:\n obj (Object) -- Object to test", "source": "https://pytorch.org/docs/stable/generated/torch.is_storage.html", "category": "pytorch docs"}
{"text": "Generatorclass torch.Generator(device='cpu')\n Creates and returns a generator object that manages the state of\n the algorithm which produces pseudo random numbers. Used as a\n keyword argument in many in-place random sampling functions.\n Parameters:\n device (\"torch.device\", optional) -- the desired device for\n the generator.\n Returns:\n A torch.Generator object.\n Return type:\n Generator\n Example:\n >>> g_cpu = torch.Generator()\n >>> g_cuda = torch.Generator(device='cuda')\n device\n Generator.device -> device\n Gets the current device of the generator.\n Example:\n >>> g_cpu = torch.Generator()\n >>> g_cpu.device\n device(type='cpu')\n get_state() -> Tensor\n Returns the Generator state as a \"torch.ByteTensor\".\n Returns:\n A \"torch.ByteTensor\" which contains all the necessary bits to\n restore a Generator to a specific point in time.\n Return type:\n Tensor\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.Generator.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n Example:\n >>> g_cpu = torch.Generator()\n >>> g_cpu.get_state()\n initial_seed() -> int\n Returns the initial seed for generating random numbers.\n Example:\n >>> g_cpu = torch.Generator()\n >>> g_cpu.initial_seed()\n 2147483647\n manual_seed(seed) -> Generator\n Sets the seed for generating random numbers. Returns a\n torch.Generator object. It is recommended to set a large seed,\n i.e. a number that has a good balance of 0 and 1 bits. Avoid\n having many 0 bits in the seed.\n Parameters:\n seed (int) -- The desired seed. Value must be within\n the inclusive range [-0x8000_0000_0000_0000,\n 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised.\n Negative inputs are remapped to positive values with the\n formula 0xffff_ffff_ffff_ffff + seed.\n Returns:\n A torch.Generator object.\n Return type:\n Generator\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.Generator.html", "category": "pytorch docs"}
{"text": "Generator\n Example:\n >>> g_cpu = torch.Generator()\n >>> g_cpu.manual_seed(2147483647)\n seed() -> int\n Gets a non-deterministic random number from std::random_device\n or the current time and uses it to seed a Generator.\n Example:\n >>> g_cpu = torch.Generator()\n >>> g_cpu.seed()\n 1516516984916\n set_state(new_state) -> void\n Sets the Generator state.\n Parameters:\n new_state (torch.ByteTensor) -- The desired state.\n Example:\n >>> g_cpu = torch.Generator()\n >>> g_cpu_other = torch.Generator()\n >>> g_cpu.set_state(g_cpu_other.get_state())", "source": "https://pytorch.org/docs/stable/generated/torch.Generator.html", "category": "pytorch docs"}
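The manual_seed / get_state / set_state methods above compose into reproducible sampling; a minimal sketch:

```python
import torch

g = torch.Generator()
g.manual_seed(42)
a = torch.randn(3, generator=g)

g.manual_seed(42)                  # re-seeding replays the same stream
b = torch.randn(3, generator=g)
print(torch.equal(a, b))           # True

state = g.get_state()              # snapshot mid-stream ...
c = torch.randn(3, generator=g)
g.set_state(state)                 # ... and rewind to it later
d = torch.randn(3, generator=g)
print(torch.equal(c, d))           # True
```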
{"text": "torch.Tensor.ge_Tensor.ge_(other) -> Tensor\n In-place version of \"ge()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ge_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.pin_memoryTensor.pin_memory() -> Tensor\n Copies the tensor to pinned memory, if it's not already pinned.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.pin_memory.html", "category": "pytorch docs"}
{"text": "torch.Tensor.gtTensor.gt(other) -> Tensor\n See \"torch.gt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gt.html", "category": "pytorch docs"}
{"text": "torch.cummaxtorch.cummax(input, dim, *, out=None)\n Returns a namedtuple \"(values, indices)\" where \"values\" is the\n cumulative maximum of elements of \"input\" in the dimension \"dim\".\n And \"indices\" is the index location of each maximum value found in\n the dimension \"dim\".\n y_i = max(x_1, x_2, x_3, \\dots, x_i)\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to do the operation over\n Keyword Arguments:\n out (tuple, optional) -- the result tuple of two\n output tensors (values, indices)\n Example:\n >>> a = torch.randn(10)\n >>> a\n tensor([-0.3449, -1.5447, 0.0685, -1.5104, -1.1706, 0.2259, 1.4696, -1.3284,\n 1.9946, -0.8209])\n >>> torch.cummax(a, dim=0)\n torch.return_types.cummax(\n values=tensor([-0.3449, -0.3449, 0.0685, 0.0685, 0.0685, 0.2259, 1.4696, 1.4696,\n 1.9946, 1.9946]),\n indices=tensor([0, 0, 2, 2, 2, 5, 6, 6, 8, 8]))", "source": "https://pytorch.org/docs/stable/generated/torch.cummax.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.upsample_nearesttorch.nn.functional.upsample_nearest(input, size=None, scale_factor=None)\n Upsamples the input, using nearest neighbours' pixel values.\n Warning:\n This function is deprecated in favor of\n \"torch.nn.functional.interpolate()\". This is equivalent with\n \"nn.functional.interpolate(..., mode='nearest')\".\n Currently spatial and volumetric upsampling are supported (i.e.\n expected inputs are 4 or 5 dimensional).\n Parameters:\n * input (Tensor) -- input\n * size (int or Tuple[int, int] or\n Tuple[int, int, int]) -- output spatial size.\n * scale_factor (int) -- multiplier for spatial size. Has\n to be an integer.\n Note:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample_nearest.html", "category": "pytorch docs"}
{"text": "torch.Tensor.index_put_Tensor.index_put_(indices, values, accumulate=False) -> Tensor\n Puts values from the tensor \"values\" into the tensor \"self\" using\n the indices specified in \"indices\" (which is a tuple of Tensors).\n The expression \"tensor.index_put_(indices, values)\" is equivalent\n to \"tensor[indices] = values\". Returns \"self\".\n If \"accumulate\" is \"True\", the elements in \"values\" are added to\n \"self\". If accumulate is \"False\", the behavior is undefined if\n indices contain duplicate elements.\n Parameters:\n * indices (tuple of LongTensor) -- tensors used to index\n into self.\n * values (Tensor) -- tensor of same dtype as self.\n * accumulate (bool) -- whether to accumulate into self", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_put_.html", "category": "pytorch docs"}
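The equivalence with tensor[indices] = values and the accumulate flag described above can be shown in a few lines; the index values are arbitrary:

```python
import torch

t = torch.zeros(3, 3)
rows = torch.tensor([0, 1, 2])
cols = torch.tensor([2, 1, 0])
# Same effect as:  t[rows, cols] = torch.tensor([1., 2., 3.])
t.index_put_((rows, cols), torch.tensor([1., 2., 3.]))
print(t[0, 2].item(), t[1, 1].item(), t[2, 0].item())   # 1.0 2.0 3.0

# With accumulate=True, repeated indices add instead of overwrite:
u = torch.zeros(3)
u.index_put_((torch.tensor([1, 1, 1]),),
             torch.tensor([1., 1., 1.]), accumulate=True)
print(u)                                                # tensor([0., 3., 0.])
```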
{"text": "FloatFunctionalclass torch.ao.nn.quantized.FloatFunctional\n State collector class for float operations.\n The instance of this class can be used instead of the \"torch.\"\n prefix for some operations. See example usage below.\n Note:\n This class does not provide a \"forward\" hook. Instead, you must\n use one of the underlying functions (e.g. \"add\").\n Examples:\n >>> f_add = FloatFunctional()\n >>> a = torch.tensor(3.0)\n >>> b = torch.tensor(4.0)\n >>> f_add.add(a, b) # Equivalent to torch.add(a, b)\n Valid operation names:\n * add\n * cat\n * mul\n * add_relu\n * add_scalar\n * mul_scalar", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.FloatFunctional.html", "category": "pytorch docs"}
{"text": "torch.Tensor.hypot_Tensor.hypot_(other) -> Tensor\n In-place version of \"hypot()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.hypot_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.mmTensor.mm(mat2) -> Tensor\n See \"torch.mm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mm.html", "category": "pytorch docs"}
{"text": "ELUclass torch.nn.ELU(alpha=1.0, inplace=False)\n Applies the Exponential Linear Unit (ELU) function, element-wise,\n as described in the paper: Fast and Accurate Deep Network Learning\n by Exponential Linear Units (ELUs).\n ELU is defined as:\n \\text{ELU}(x) = \\begin{cases} x, & \\text{ if } x > 0 \\\\ \\alpha *\n (\\exp(x) - 1), & \\text{ if } x \\leq 0 \\end{cases}\n Parameters:\n * alpha (float) -- the \\alpha value for the ELU\n formulation. Default: 1.0\n * inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.ELU()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ELU.html", "category": "pytorch docs"}
{"text": "torch.Tensor.swapdimsTensor.swapdims(dim0, dim1) -> Tensor\n See \"torch.swapdims()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.swapdims.html", "category": "pytorch docs"}
{"text": "torch.Tensor.atanTensor.atan() -> Tensor\n See \"torch.atan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atan.html", "category": "pytorch docs"}
{"text": "torch.optim.Optimizer.stepOptimizer.step(closure)\n Performs a single optimization step (parameter update).\n Parameters:\n closure (Callable) -- A closure that reevaluates the model\n and returns the loss. Optional for most optimizers.\n Note:\n Unless otherwise specified, this function should not modify the\n \".grad\" field of the parameters.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.step.html", "category": "pytorch docs"}
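A toy sketch of the closure protocol, using plain gradient descent on f(x) = x**2 instead of a real torch optimizer (the class and names here are illustrative, not PyTorch API):

```python
class ToyGD:
    """Minimal optimizer-like object whose step() may take a closure."""
    def __init__(self, lr=0.1):
        self.x = 5.0      # the single "parameter"
        self.lr = lr

    def step(self, closure=None):
        # the closure "reevaluates the model and returns the loss"
        loss = closure() if closure is not None else None
        grad = 2 * self.x          # analytic d/dx of x**2
        self.x -= self.lr * grad   # parameter update only; no .grad mutation
        return loss

opt = ToyGD(lr=0.1)

def closure():
    return opt.x ** 2

for _ in range(50):
    opt.step(closure)
print(opt.x)  # close to 0, the minimiser of x**2
```

Optimizers like LBFGS need the closure because they reevaluate the loss several times per step; for SGD-style optimizers it is optional, as the text says.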
{"text": "torch.Tensor.is_quantizedTensor.is_quantized\n Is \"True\" if the Tensor is quantized, \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_quantized.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arcsinhTensor.arcsinh() -> Tensor\n See \"torch.arcsinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arcsinh.html", "category": "pytorch docs"}
{"text": "torch.baddbmmtorch.baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) -> Tensor\n Performs a batch matrix-matrix product of matrices in \"batch1\" and\n \"batch2\". \"input\" is added to the final result.\n \"batch1\" and \"batch2\" must be 3-D tensors each containing the same\n number of matrices.\n If \"batch1\" is a (b \\times n \\times m) tensor, \"batch2\" is a (b\n \\times m \\times p) tensor, then \"input\" must be broadcastable with\n a (b \\times n \\times p) tensor and \"out\" will be a (b \\times n\n \\times p) tensor. Both \"alpha\" and \"beta\" mean the same as the\n scaling factors used in \"torch.addbmm()\".\n \\text{out}_i = \\beta\\ \\text{input}_i + \\alpha\\ (\\text{batch1}_i\n \\mathbin{@} \\text{batch2}_i)\n If \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\n For inputs of type FloatTensor or DoubleTensor, arguments\n \"beta\" and \"alpha\" must be real numbers, otherwise they should be\n integers.", "source": "https://pytorch.org/docs/stable/generated/torch.baddbmm.html", "category": "pytorch docs"}
{"text": "This operator supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n Parameters:\n * input (Tensor) -- the tensor to be added\n * batch1 (Tensor) -- the first batch of matrices to be\n multiplied\n * batch2 (Tensor) -- the second batch of matrices to be\n multiplied\n Keyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * alpha (Number, optional) -- multiplier for\n \\text{batch1} \\mathbin{@} \\text{batch2} (\\alpha)\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> M = torch.randn(10, 3, 5)\n >>> batch1 = torch.randn(10, 3, 4)\n >>> batch2 = torch.randn(10, 4, 5)\n >>> torch.baddbmm(M, batch1, batch2).size()\n torch.Size([10, 3, 5])", "source": "https://pytorch.org/docs/stable/generated/torch.baddbmm.html", "category": "pytorch docs"}
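The formula out_i = beta * input_i + alpha * (batch1_i @ batch2_i) can be checked with a tiny pure-Python batched matmul; a sketch of the arithmetic, not the real kernel:

```python
def matmul(a, b):
    """Naive (n, m) @ (m, p) product on nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def baddbmm(inp, batch1, batch2, beta=1, alpha=1):
    # one matmul per batch element, then the affine combination with `inp`
    out = []
    for M, A, B in zip(inp, batch1, batch2):
        P = matmul(A, B)
        out.append([[beta * M[i][j] + alpha * P[i][j]
                     for j in range(len(P[0]))] for i in range(len(P))])
    return out

inp = [[[1, 1], [1, 1]]]   # batch of one 2x2 matrix
b1  = [[[1, 0], [0, 1]]]   # identity, so b1 @ b2 == b2
b2  = [[[2, 3], [4, 5]]]
print(baddbmm(inp, b1, b2, beta=2, alpha=1))
# [[[4, 5], [6, 7]]], i.e. 2 * inp + (I @ b2)
```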
{"text": "HistogramObserverclass torch.quantization.observer.HistogramObserver(bins=2048, upsample_rate=128, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07)\n The module records the running histogram of tensor values along\n with min/max values. \"calculate_qparams\" will calculate scale and\n zero_point.\n Parameters:\n * bins (int) -- Number of bins to use for the histogram\n * upsample_rate (int) -- Factor by which the histograms\n are upsampled, this is used to interpolate histograms with\n varying ranges across observations\n * dtype (dtype) -- dtype argument to the quantize node\n needed to implement the reference model spec\n * qscheme -- Quantization scheme to be used\n * reduce_range -- Reduces the range of the quantized data\n type by 1 bit\n * eps (Tensor) -- Epsilon value for float32, Defaults to", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.HistogramObserver.html", "category": "pytorch docs"}
{"text": "torch.finfo(torch.float32).eps.\n The scale and zero point are computed as follows:\n 1. Create the histogram of the incoming inputs.\n The histogram is computed continuously, and the ranges per\n bin change with every new tensor observed.\n 2. Search the distribution in the histogram for optimal min/max\n values.\n The search for the min/max values ensures the minimization of\n the quantization error with respect to the floating point\n model.\n 3. Compute the scale and zero point the same way as in the\n \"MinMaxObserver\"", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.HistogramObserver.html", "category": "pytorch docs"}
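Step 3 ("compute the scale and zero point the same way as in MinMaxObserver") reduces to a small formula once min/max are fixed. A sketch for per-tensor affine quint8, under the usual convention that the observed range is extended to include zero; the helper name is ours:

```python
def qparams(min_val, max_val, qmin=0, qmax=255):
    """Affine quantization parameters from an observed float range."""
    min_val = min(min_val, 0.0)   # the quantized range must contain zero
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = qmin - round(min_val / scale)
    return scale, int(zero_point)

scale, zp = qparams(-1.0, 1.0)
print(scale, zp)  # one step covers 2/255 of the range; zero sits mid-scale
```

HistogramObserver's extra work is in steps 1-2: choosing the min/max that minimize quantization error before this formula is applied.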
{"text": "torch.promote_typestorch.promote_types(type1, type2) -> dtype\n Returns the \"torch.dtype\" with the smallest size and scalar kind\n that is not smaller nor of lower kind than either type1 or\n type2. See type promotion documentation for more information on\n the type promotion logic.\n Parameters:\n * type1 (\"torch.dtype\") --\n * type2 (\"torch.dtype\") --\n Example:\n >>> torch.promote_types(torch.int32, torch.float32)\n torch.float32\n >>> torch.promote_types(torch.uint8, torch.long)\n torch.long", "source": "https://pytorch.org/docs/stable/generated/torch.promote_types.html", "category": "pytorch docs"}
{"text": "torch.Tensor.resolve_negTensor.resolve_neg() -> Tensor\n See \"torch.resolve_neg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resolve_neg.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.threshold_torch.nn.functional.threshold_(input, threshold, value) -> Tensor\n In-place version of \"threshold()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.threshold_.html", "category": "pytorch docs"}
{"text": "torch.linalg.lstsqtorch.linalg.lstsq(A, B, rcond=None, *, driver=None)\n Computes a solution to the least squares problem of a system of\n linear equations.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the least squares\n problem for a linear system AX = B with A \\in \\mathbb{K}^{m\n \\times n}, B \\in \\mathbb{K}^{m \\times k} is defined as\n \\min_{X \\in \\mathbb{K}^{n \\times k}} \\|AX - B\\|_F\n where \\|\\cdot\\|_F denotes the Frobenius norm.\n Supports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\n \"driver\" chooses the backend function that will be used. For CPU\n inputs the valid values are 'gels', 'gelsy', 'gelsd' and\n 'gelss'. To choose the best driver on CPU consider:\n * If \"A\" is well-conditioned (its condition number is not too\n large), or you do not mind some precision loss.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"}
{"text": "* For a general matrix: 'gelsy' (QR with pivoting) (default)\n * If \"A\" is full-rank: 'gels' (QR)\n * If \"A\" is not well-conditioned:\n * 'gelsd' (tridiagonal reduction and SVD)\n * But if you run into memory issues: 'gelss' (full SVD).\n For CUDA input, the only valid driver is 'gels', which assumes\n that \"A\" is full-rank.\n See also the full description of these drivers.\n \"rcond\" is used to determine the effective rank of the matrices in\n \"A\" when \"driver\" is one of ('gelsy', 'gelsd', 'gelss'). In\n this case, if \\sigma_i are the singular values of A in decreasing\n order, \\sigma_i will be rounded down to zero if \\sigma_i \\leq\n \\text{rcond} \\cdot \\sigma_1. If \"rcond\"= None (default), \"rcond\"\n is set to the machine precision of the dtype of \"A\" times max(m,\n n).\n This function returns the solution to the problem and some extra\n information in a named tuple of four tensors (solution, residuals,", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"}
{"text": "rank, singular_values). For inputs \"A\", \"B\" of shape (*, m, n),\n (*, m, k) respectively, it contains\n * solution: the least squares solution. It has shape (*, n, k).\n * residuals: the squared residuals of the solutions, that is,\n \\|AX - B\\|_F^2. It has shape equal to the batch dimensions of\n \"A\". It is computed when m > n and every matrix in \"A\" is full-\n rank, otherwise, it is an empty tensor. If \"A\" is a batch of\n matrices and any matrix in the batch is not full rank, then an\n empty tensor is returned. This behavior may change in a future\n PyTorch release.\n * rank: tensor of ranks of the matrices in \"A\". It has shape\n equal to the batch dimensions of \"A\". It is computed when\n \"driver\" is one of ('gelsy', 'gelsd', 'gelss'), otherwise\n it is an empty tensor.\n * singular_values: tensor of singular values of the matrices in\n \"A\". It has shape (*, min(m, n)). It is computed when \"driver\"", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"}
{"text": "is one of ('gelsd', 'gelss'), otherwise it is an empty\n tensor.\n Note:\n This function computes X = \"A\".pinverse() @ \"B\" in a faster\n and more numerically stable way than performing the computations\n separately.\n Warning:\n The default value of \"rcond\" may change in a future PyTorch\n release. It is therefore recommended to use a fixed value to\n avoid potential breaking changes.\n Parameters:\n * A (Tensor) -- lhs tensor of shape (*, m, n) where * is\n zero or more batch dimensions.\n * B (Tensor) -- rhs tensor of shape (*, m, k) where * is\n zero or more batch dimensions.\n * rcond (float, optional) -- used to determine the\n effective rank of \"A\". If \"rcond\"= None, \"rcond\" is set to\n the machine precision of the dtype of \"A\" times max(m, n).\n Default: None.\n Keyword Arguments:\n driver (str, optional) -- name of the LAPACK/MAGMA", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"}
{"text": "method to be used. If None, 'gelsy' is used for CPU inputs\n and 'gels' for CUDA inputs. Default: None.\n Returns:\n A named tuple (solution, residuals, rank, singular_values).\n Examples:\n >>> A = torch.randn(1,3,3)\n >>> A\n tensor([[[-1.0838, 0.0225, 0.2275],\n [ 0.2438, 0.3844, 0.5499],\n [ 0.1175, -0.9102, 2.0870]]])\n >>> B = torch.randn(2,3,3)\n >>> B\n tensor([[[-0.6772, 0.7758, 0.5109],\n [-1.4382, 1.3769, 1.1818],\n [-0.3450, 0.0806, 0.3967]],\n [[-1.3994, -0.1521, -0.1473],\n [ 1.9194, 1.0458, 0.6705],\n [-1.1802, -0.9796, 1.4086]]])\n >>> X = torch.linalg.lstsq(A, B).solution # A is broadcasted to shape (2, 3, 3)\n >>> torch.dist(X, torch.linalg.pinv(A) @ B)\n tensor(1.5152e-06)\n >>> S = torch.linalg.lstsq(A, B, driver='gelsd').singular_values\n >>> torch.dist(S, torch.linalg.svdvals(A))\n tensor(2.3842e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"}
{"text": "tensor(2.3842e-07)\n >>> A[:, 0].zero_() # Decrease the rank of A\n >>> rank = torch.linalg.lstsq(A, B).rank\n >>> rank\n tensor([2])", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"}
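For a well-conditioned full-rank A, the solution lstsq computes agrees with the normal equations X = (AᵀA)⁻¹AᵀB. A hand-checkable sketch for the single-column case, where that formula collapses to a scalar division (the helper name is ours):

```python
def lstsq_1col(A, b):
    """Least squares for A of shape (m, 1): x = (A.T @ b) / (A.T @ A)."""
    num = sum(a * y for a, y in zip(A, b))   # A.T @ b
    den = sum(a * a for a in A)              # A.T @ A
    return num / den

A = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]       # exactly 2 * A, so the residual is zero
print(lstsq_1col(A, b))   # 2.0
```

The drivers above ('gels', 'gelsy', ...) avoid forming AᵀA explicitly because squaring the matrix squares its condition number; the normal-equation form is only for intuition.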
{"text": "LinearLRclass torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.3333333333333333, end_factor=1.0, total_iters=5, last_epoch=- 1, verbose=False)\n Decays the learning rate of each parameter group by linearly\n changing a small multiplicative factor until the number of epochs\n reaches a pre-defined milestone: total_iters. Notice that such\n decay can happen simultaneously with other changes to the learning\n rate from outside this scheduler. When last_epoch=-1, sets initial\n lr as lr.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * start_factor (float) -- The factor by which the learning\n rate is multiplied in the first epoch. The multiplication\n factor changes towards end_factor in the following epochs.\n Default: 1./3.\n * end_factor (float) -- The factor by which the learning\n rate is multiplied at the end of the linear changing process.\n Default: 1.0.\n * total_iters (int) -- The number of iterations that", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LinearLR.html", "category": "pytorch docs"}
{"text": "multiplicative factor takes to reach 1. Default: 5.\n * last_epoch (int) -- The index of the last epoch.\n Default: -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-\n >>> # Assuming optimizer uses lr = 0.05 for all groups\n >>> # lr = 0.025    if epoch == 0\n >>> # lr = 0.03125  if epoch == 1\n >>> # lr = 0.0375   if epoch == 2\n >>> # lr = 0.04375  if epoch == 3\n >>> # lr = 0.05     if epoch >= 4\n >>> scheduler = LinearLR(self.opt, start_factor=0.5, total_iters=4)\n >>> for epoch in range(100):\n >>>     train(...)\n >>>     validate(...)\n >>>     scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the scheduler's state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LinearLR.html", "category": "pytorch docs"}
{"text": "print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LinearLR.html", "category": "pytorch docs"}
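The factor schedule from the LinearLR example can be reproduced with a few lines of arithmetic; a sketch of the decay rule, not the scheduler implementation:

```python
def linear_lr(base_lr, epoch, start_factor=0.5, end_factor=1.0, total_iters=4):
    # the factor ramps linearly from start_factor to end_factor over
    # total_iters epochs, then stays at end_factor
    t = min(epoch, total_iters) / total_iters
    factor = start_factor + (end_factor - start_factor) * t
    return base_lr * factor

print([round(linear_lr(0.05, e), 5) for e in range(6)])
# [0.025, 0.03125, 0.0375, 0.04375, 0.05, 0.05]
```

These match the per-epoch values listed in the docstring example above for `start_factor=0.5, total_iters=4` and a base lr of 0.05.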
{"text": "torch.sigmoidtorch.sigmoid(input, *, out=None) -> Tensor\n Alias for \"torch.special.expit()\".", "source": "https://pytorch.org/docs/stable/generated/torch.sigmoid.html", "category": "pytorch docs"}
{"text": "LazyBatchNorm2dclass torch.nn.LazyBatchNorm2d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n A \"torch.nn.BatchNorm2d\" module with lazy initialization of the\n \"num_features\" argument of the \"BatchNorm2d\" that is inferred from\n the \"input.size(1)\". The attributes that will be lazily initialized\n are weight, bias, running_mean and running_var.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm2d.html", "category": "pytorch docs"}
{"text": "\"True\"\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics in both\n training and eval modes. Default: \"True\"\n cls_to_become\n alias of \"BatchNorm2d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm2d.html", "category": "pytorch docs"}
{"text": "torch.copysigntorch.copysign(input, other, *, out=None) -> Tensor\n Create a new floating-point tensor with the magnitude of \"input\"\n and the sign of \"other\", elementwise.\n \\text{out}_{i} = \\begin{cases} -|\\text{input}_{i}| &\n \\text{if } \\text{other}_{i} \\leq -0.0 \\\\ |\\text{input}_{i}|\n & \\text{if } \\text{other}_{i} \\geq 0.0 \\\\ \\end{cases}\n Supports broadcasting to a common shape, and integer and float\n inputs.\n Parameters:\n * input (Tensor) -- magnitudes.\n * other (Tensor or Number) -- contains value(s) whose\n signbit(s) are applied to the magnitudes in \"input\".\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(5)\n >>> a\n tensor([-1.2557, -0.0026, -0.5387, 0.4740, -0.9244])\n >>> torch.copysign(a, 1)\n tensor([1.2557, 0.0026, 0.5387, 0.4740, 0.9244])\n >>> a = torch.randn(4, 4)\n >>> a", "source": "https://pytorch.org/docs/stable/generated/torch.copysign.html", "category": "pytorch docs"}
{"text": "tensor([[ 0.7079, 0.2778, -1.0249, 0.5719],\n [-0.0059, -0.2600, -0.4475, -1.3948],\n [ 0.3667, -0.9567, -2.5757, -0.1751],\n [ 0.2046, -0.0742, 0.2998, -0.1054]])\n >>> b = torch.randn(4)\n >>> b\n tensor([ 0.2373, 0.3120, 0.3190, -1.1128])\n >>> torch.copysign(a, b)\n tensor([[ 0.7079, 0.2778, 1.0249, -0.5719],\n [ 0.0059, 0.2600, 0.4475, -1.3948],\n [ 0.3667, 0.9567, 2.5757, -0.1751],\n [ 0.2046, 0.0742, 0.2998, -0.1054]])\n >>> a = torch.tensor([1.])\n >>> b = torch.tensor([-0.])\n >>> torch.copysign(a, b)\n tensor([-1.])\n Note:\n copysign handles signed zeros. If the other argument has a\n negative zero (-0), the corresponding output value will be\n negative.", "source": "https://pytorch.org/docs/stable/generated/torch.copysign.html", "category": "pytorch docs"}
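Python's own `math.copysign` implements the same elementwise rule, including the signed-zero case from the note; a scalar sketch:

```python
import math

# magnitude comes from the first argument, sign from the second
print(math.copysign(1.2557, 1.0))    # 1.2557
print(math.copysign(1.2557, -3.0))   # -1.2557
print(math.copysign(0.7079, -0.0))   # -0.7079: negative zero carries its sign
```

This is why the docs write the condition as `other <= -0.0` rather than `other < 0`: IEEE-754 negative zero compares equal to zero but still has a set sign bit.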
{"text": "torch.Tensor.histcTensor.histc(bins=100, min=0, max=0) -> Tensor\n See \"torch.histc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.histc.html", "category": "pytorch docs"}
{"text": "torch.pca_lowranktorch.pca_lowrank(A, q=None, center=True, niter=2)\n Performs linear Principal Component Analysis (PCA) on a low-rank\n matrix, batches of such matrices, or a sparse matrix.\n This function returns a namedtuple \"(U, S, V)\" which is the nearly\n optimal approximation of a singular value decomposition of a\n centered matrix A such that A = U diag(S) V^T.\n Note:\n The relation of \"(U, S, V)\" to PCA is as follows:\n * A is a data matrix with \"m\" samples and \"n\" features\n * the V columns represent the principal directions\n * S ** 2 / (m - 1) contains the eigenvalues of A^T A / (m - 1)\n which is the covariance of \"A\" when \"center=True\" is provided.\n * \"matmul(A, V[:, :k])\" projects data to the first k principal\n components\n Note:\n Different from the standard SVD, the sizes of the returned\n matrices depend on the specified rank and q values as follows:\n * U is m x q matrix\n * S is q-vector\n * V is n x q matrix\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.pca_lowrank.html", "category": "pytorch docs"}
{"text": "To obtain repeatable results, reset the seed for the pseudorandom\n number generator.\n Parameters:\n * A (Tensor) -- the input tensor of size (*, m, n)\n * q (int, optional) -- a slightly overestimated rank\n of A. By default, \"q = min(6, m, n)\".\n * center (bool, optional) -- if True, center the input\n tensor, otherwise, assume that the input is centered.\n * niter (int, optional) -- the number of subspace\n iterations to conduct; niter must be a nonnegative integer,\n and defaults to 2.\n Return type:\n Tuple[Tensor, Tensor, Tensor]\n References:\n * Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding\n structure with randomness: probabilistic algorithms for\n constructing approximate matrix decompositions,\n arXiv:0909.4061 [math.NA; math.PR], 2009 (available at\n arXiv).", "source": "https://pytorch.org/docs/stable/generated/torch.pca_lowrank.html", "category": "pytorch docs"}
{"text": "torch.uniquetorch.unique(input, sorted=True, return_inverse=False, return_counts=False, dim=None) -> Tuple[Tensor, Tensor, Tensor]\n Returns the unique elements of the input tensor.\n Note:\n This function is different from \"torch.unique_consecutive()\" in\n the sense that this function also eliminates non-consecutive\n duplicate values.\n Note:\n Currently in the CUDA implementation and the CPU implementation\n when dim is specified, torch.unique always sorts the tensor at\n the beginning regardless of the sort argument. Sorting could be\n slow, so if your input tensor is already sorted, it is\n recommended to use \"torch.unique_consecutive()\" which avoids the\n sorting.\n Parameters:\n * input (Tensor) -- the input tensor\n * sorted (bool) -- Whether to sort the unique elements in\n ascending order before returning as output.\n * return_inverse (bool) -- Whether to also return the", "source": "https://pytorch.org/docs/stable/generated/torch.unique.html", "category": "pytorch docs"}
{"text": "indices for where elements in the original input ended up in\n the returned unique list.\n * return_counts (bool) -- Whether to also return the\n counts for each unique element.\n * dim (int) -- the dimension to apply unique. If \"None\",\n the unique of the flattened input is returned. default: \"None\"\n Returns:\n A tensor or a tuple of tensors containing\n * output (Tensor): the output list of unique scalar\n elements.\n * inverse_indices (Tensor): (optional) if\n \"return_inverse\" is True, there will be an additional\n returned tensor (same shape as input) representing the\n indices for where elements in the original input map to in\n the output; otherwise, this function will only return a\n single tensor.\n * counts (Tensor): (optional) if \"return_counts\" is\n True, there will be an additional returned tensor (same", "source": "https://pytorch.org/docs/stable/generated/torch.unique.html", "category": "pytorch docs"}
{"text": "shape as output or output.size(dim), if dim was specified)\n representing the number of occurrences for each unique\n value or tensor.\n Return type:\n (Tensor, Tensor (optional), Tensor (optional))\n Example:\n >>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long))\n >>> output\n tensor([1, 2, 3])\n >>> output, inverse_indices = torch.unique(\n ... torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True)\n >>> output\n tensor([1, 2, 3])\n >>> inverse_indices\n tensor([0, 2, 1, 2])\n >>> output, inverse_indices = torch.unique(\n ... torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True)\n >>> output\n tensor([1, 2, 3])\n >>> inverse_indices\n tensor([[0, 2],\n [1, 2]])", "source": "https://pytorch.org/docs/stable/generated/torch.unique.html", "category": "pytorch docs"}
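The sorted output / inverse_indices / counts triple can be mimicked on a flat Python list; a sketch of the flattened (dim=None) case, with the same inputs as the doc example:

```python
def unique(values):
    """Return (sorted unique values, inverse indices, counts)."""
    output = sorted(set(values))
    pos = {v: i for i, v in enumerate(output)}
    inverse = [pos[v] for v in values]           # where each input element landed
    counts = [values.count(v) for v in output]   # occurrences per unique value
    return output, inverse, counts

out, inv, cnt = unique([1, 3, 2, 3])
print(out, inv, cnt)  # [1, 2, 3] [0, 2, 1, 2] [1, 1, 2]
```

The inverse indices here match the doc example: reading `output[i]` for each `i` in `inverse` reconstructs the original input.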
{"text": "AdaptiveLogSoftmaxWithLossclass torch.nn.AdaptiveLogSoftmaxWithLoss(in_features, n_classes, cutoffs, div_value=4.0, head_bias=False, device=None, dtype=None)\n Efficient softmax approximation as described in Efficient softmax\n approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha\n Cissé, David Grangier, and Hervé Jégou.\n Adaptive softmax is an approximate strategy for training models\n with large output spaces. It is most effective when the label\n distribution is highly imbalanced, for example in natural language\n modelling, where the word frequency distribution approximately\n follows Zipf's law.\n Adaptive softmax partitions the labels into several clusters,\n according to their frequency. These clusters may contain a\n different number of targets each. Additionally, clusters containing\n less frequent labels assign lower dimensional embeddings to those\n labels, which speeds up the computation. For each minibatch, only", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"}
{"text": "clusters for which at least one target is present are evaluated.\n The idea is that the clusters which are accessed frequently (like\n the first one, containing most frequent labels), should also be\n cheap to compute -- that is, contain a small number of assigned\n labels.\n We highly recommend taking a look at the original paper for more\n details.\n * \"cutoffs\" should be an ordered Sequence of integers sorted in the\n increasing order. It controls number of clusters and the\n partitioning of targets into clusters. For example setting\n \"cutoffs = [10, 100, 1000]\" means that first 10 targets will be\n assigned to the 'head' of the adaptive softmax, targets 11, 12,\n ..., 100 will be assigned to the first cluster, and targets\n 101, 102, ..., 1000 will be assigned to the second cluster,\n while targets 1001, 1002, ..., n_classes - 1 will be assigned\n to the last, third cluster.\n * \"div_value\" is used to compute the size of each additional", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"}
{"text": "cluster, which is given as \\left\\lfloor\\frac{\\texttt{in\\_features}}{\\texttt{div\\_value}^{idx}}\\right\\rfloor, where idx is the\n cluster index (with clusters for less frequent words having\n larger indices, and indices starting from 1).\n * \"head_bias\" if set to True, adds a bias term to the 'head' of the\n adaptive softmax. See paper for details. Set to False in the\n official implementation.\n Warning:\n Labels passed as inputs to this module should be sorted according\n to their frequency. This means that the most frequent label\n should be represented by the index 0, and the least frequent\n label should be represented by the index n_classes - 1.\n Note:\n This module returns a \"NamedTuple\" with \"output\" and \"loss\"\n fields. See further documentation for details.\n Note:\n To compute log-probabilities for all classes, the \"log_prob\"\n method can be used.\n Parameters:\n * in_features (int) -- Number of features in the input\n tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"}
{"text": "tensor\n * n_classes (int) -- Number of classes in the dataset\n * cutoffs (Sequence) -- Cutoffs used to assign targets to\n their buckets\n * div_value (float, optional) -- value used as an\n exponent to compute sizes of the clusters. Default: 4.0\n * head_bias (bool, optional) -- If \"True\", adds a bias\n term to the 'head' of the adaptive softmax. Default: \"False\"\n Returns:\n * output is a Tensor of size \"N\" containing computed target\n log probabilities for each example\n * loss is a Scalar representing the computed negative log\n likelihood loss\n Return type:\n \"NamedTuple\" with \"output\" and \"loss\" fields\n Shape:\n * input: (N, \\texttt{in_features}) or (\\texttt{in_features})\n * target: (N) or () where each value satisfies 0 <=\n \\texttt{target[i]} <= \\texttt{n_classes}\n * output1: (N) or ()\n * output2: \"Scalar\"\n log_prob(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"}
{"text": "Computes log probabilities for all \\texttt{n_classes}\n Parameters:\n input (Tensor) -- a minibatch of examples\n Returns:\n log-probabilities for each class c in range 0 <= c <=\n \\texttt{n_classes}, where \\texttt{n_classes} is a parameter\n passed to \"AdaptiveLogSoftmaxWithLoss\" constructor.\n Return type:\n Tensor\n Shape:\n * Input: (N, \\texttt{in_features})\n * Output: (N, \\texttt{n_classes})\n predict(input)\n This is equivalent to self.log_prob(input).argmax(dim=1), but\n is more efficient in some cases.\n Parameters:\n input (Tensor) -- a minibatch of examples\n Returns:\n a class with the highest probability for each example\n Return type:\n output (Tensor)\n Shape:\n * Input: (N, \\texttt{in_features})\n * Output: (N)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"}
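The cutoff-to-cluster mapping described above for `cutoffs = [10, 100, 1000]` is just a bisection over the cutoff list. An illustrative sketch using 0-based target indices (the helper name is ours, not part of the module's API):

```python
import bisect

def cluster_of(target, cutoffs):
    """0 = head (most frequent labels); 1.. = increasingly rare tail clusters."""
    return bisect.bisect_right(cutoffs, target)

cutoffs = [10, 100, 1000]
print(cluster_of(3, cutoffs))     # 0: one of the first 10 targets, in the head
print(cluster_of(10, cutoffs))    # 1: first tail cluster (targets 10..99)
print(cluster_of(999, cutoffs))   # 2: second tail cluster
print(cluster_of(1500, cutoffs))  # 3: last cluster (up to n_classes - 1)
```

This is why the warning insists labels are sorted by frequency: the bisection only yields the cheap head cluster for the labels the module assumes are most common.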
{"text": "torch.atleast_1dtorch.atleast_1d(tensors)\n Returns a 1-dimensional view of each input tensor with zero\n dimensions. Input tensors with one or more dimensions are returned\n as-is.\n Parameters:\n input (Tensor or list of Tensors) --\n Returns:\n output (Tensor or tuple of Tensors)\n Example:\n >>> x = torch.arange(2)\n >>> x\n tensor([0, 1])\n >>> torch.atleast_1d(x)\n tensor([0, 1])\n >>> x = torch.tensor(1.)\n >>> x\n tensor(1.)\n >>> torch.atleast_1d(x)\n tensor([1.])\n >>> x = torch.tensor(0.5)\n >>> y = torch.tensor(1.)\n >>> torch.atleast_1d((x, y))\n (tensor([0.5000]), tensor([1.]))", "source": "https://pytorch.org/docs/stable/generated/torch.atleast_1d.html", "category": "pytorch docs"}
{"text": "torch.set_flush_denormaltorch.set_flush_denormal(mode) -> bool\n Disables denormal floating numbers on CPU.\n Returns \"True\" if your system supports flushing denormal numbers\n and it successfully configures flush denormal mode.\n \"set_flush_denormal()\" is only supported on x86 architectures\n supporting SSE3.\n Parameters:\n mode (bool) -- Controls whether to enable flush denormal\n mode or not\n Example:\n >>> torch.set_flush_denormal(True)\n True\n >>> torch.tensor([1e-323], dtype=torch.float64)\n tensor([ 0.], dtype=torch.float64)\n >>> torch.set_flush_denormal(False)\n True\n >>> torch.tensor([1e-323], dtype=torch.float64)\n tensor(9.88131e-324 *\n [ 1.0000], dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.set_flush_denormal.html", "category": "pytorch docs"}
{"text": "SiLUclass torch.nn.SiLU(inplace=False)\n Applies the Sigmoid Linear Unit (SiLU) function, element-wise. The\n SiLU function is also known as the swish function.\n \\text{silu}(x) = x * \\sigma(x), \\text{where } \\sigma(x) \\text{\n is the logistic sigmoid.}\n Note:\n See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid\n Linear Unit) was originally coined, and see Sigmoid-Weighted\n Linear Units for Neural Network Function Approximation in\n Reinforcement Learning and Swish: a Self-Gated Activation\n Function where the SiLU was experimented with later.\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.SiLU()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SiLU.html", "category": "pytorch docs"}
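The definition silu(x) = x · σ(x) in scalar form, as a quick sketch:

```python
import math

def silu(x):
    # x times the logistic sigmoid of x
    return x / (1.0 + math.exp(-x))

print(silu(0.0))            # 0.0
print(round(silu(1.0), 4))  # 0.7311, i.e. 1 * sigmoid(1)
```

Unlike ReLU, silu is smooth and slightly negative for small negative inputs before decaying back toward zero.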
{"text": "torch.Tensor.flattenTensor.flatten(start_dim=0, end_dim=- 1) -> Tensor\n See \"torch.flatten()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.flatten.html", "category": "pytorch docs"}
{"text": "ELUclass torch.ao.nn.quantized.ELU(scale, zero_point, alpha=1.0)\n This is the quantized equivalent of \"ELU\".\n Parameters:\n * scale -- quantization scale of the output tensor\n * zero_point -- quantization zero point of the output tensor\n * alpha (float) -- the alpha constant", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ELU.html", "category": "pytorch docs"}
{"text": "torch.nn.parallel.data_paralleltorch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)\n Evaluates module(input) in parallel across the GPUs given in\n device_ids.\n This is the functional version of the DataParallel module.\n Parameters:\n * module (Module) -- the module to evaluate in parallel\n * inputs (Tensor) -- inputs to the module\n * device_ids (list of python:int or torch.device) --\n GPU ids on which to replicate module\n * output_device (list of python:int or torch.device)\n -- GPU location of the output. Use -1 to indicate the CPU.\n (default: device_ids[0])\n Returns:\n a Tensor containing the result of module(input) located on\n output_device", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.torch.nn.parallel.data_parallel.html", "category": "pytorch docs"}
{"text": "torch.cuda.inittorch.cuda.init()\n Initialize PyTorch's CUDA state. You may need to call this\n explicitly if you are interacting with PyTorch via its C API, as\n Python bindings for CUDA functionality will not be available until\n this initialization takes place. Ordinary users should not need\n this, as all of PyTorch's CUDA methods automatically initialize\n CUDA state on-demand.\n Does nothing if the CUDA state is already initialized.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.init.html", "category": "pytorch docs"}
{"text": "torch.Tensor.realTensor.real\n Returns a new tensor containing real values of the \"self\" tensor\n for a complex-valued input tensor. The returned tensor and \"self\"\n share the same underlying storage.\n Returns \"self\" if \"self\" is a real-valued tensor tensor.\n Example::\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])\n >>> x.real\n tensor([ 0.3100, -0.5445, -1.6492, -0.0638])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.real.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.bartletttorch.signal.windows.bartlett(M, , sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes the Bartlett window.\n The Bartlett window is defined as follows:\n w_n = 1 - \\left| \\frac{2n}{M - 1} - 1 \\right| = \\begin{cases}\n \\frac{2n}{M - 1} & \\text{if } 0 \\leq n \\leq \\frac{M - 1}{2} \\\n 2 - \\frac{2n}{M - 1} & \\text{if } \\frac{M - 1}{2} < n < M \\\n \\end{cases}\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True*.", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.bartlett.html", "category": "pytorch docs"}
{"text": "design. Default: True.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric Bartlett window.\n >>> torch.signal.windows.bartlett(10)", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.bartlett.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.signal.windows.bartlett(10)\n tensor([0.0000, 0.2222, 0.4444, 0.6667, 0.8889, 0.8889, 0.6667, 0.4444, 0.2222, 0.0000])\n >>> # Generates a periodic Bartlett window.\n >>> torch.signal.windows.bartlett(10, sym=False)\n tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000, 0.8000, 0.6000, 0.4000, 0.2000])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.bartlett.html", "category": "pytorch docs"}
{"text": "torch.poissontorch.poisson(input, generator=None) -> Tensor\n Returns a tensor of the same size as \"input\" with each element\n sampled from a Poisson distribution with rate parameter given by\n the corresponding element in \"input\" i.e.,\n \\text{out}_i \\sim \\text{Poisson}(\\text{input}_i)\n \"input\" must be non-negative.\n Parameters:\n input (Tensor) -- the input tensor containing the rates of\n the Poisson distribution\n Keyword Arguments:\n generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n Example:\n >>> rates = torch.rand(4, 4) * 5 # rate parameter between 0 and 5\n >>> torch.poisson(rates)\n tensor([[9., 1., 3., 5.],\n [8., 6., 6., 0.],\n [0., 4., 5., 3.],\n [2., 1., 4., 2.]])", "source": "https://pytorch.org/docs/stable/generated/torch.poisson.html", "category": "pytorch docs"}
{"text": "torch.asintorch.asin(input, , out=None) -> Tensor\n Returns a new tensor with the arcsine of the elements of \"input\".\n \\text{out}{i} = \\sin^{-1}(\\text{input})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.5962, 1.4985, -0.4396, 1.4525])\n >>> torch.asin(a)\n tensor([-0.6387, nan, -0.4552, nan])", "source": "https://pytorch.org/docs/stable/generated/torch.asin.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arcsin_Tensor.arcsin_() -> Tensor\n In-place version of \"arcsin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arcsin_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.geqrfTensor.geqrf()\n See \"torch.geqrf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.geqrf.html", "category": "pytorch docs"}
{"text": "torch.Tensor.whereTensor.where(condition, y) -> Tensor\n \"self.where(condition, y)\" is equivalent to \"torch.where(condition,\n self, y)\". See \"torch.where()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.where.html", "category": "pytorch docs"}
{"text": "torch.sym_mintorch.sym_min(a, b)\n SymInt-aware utility for max().", "source": "https://pytorch.org/docs/stable/generated/torch.sym_min.html", "category": "pytorch docs"}
{"text": "torch.index_addtorch.index_add(input, dim, index, source, *, alpha=1, out=None) -> Tensor\n See \"index_add_()\" for function description.", "source": "https://pytorch.org/docs/stable/generated/torch.index_add.html", "category": "pytorch docs"}
{"text": "HuberLossclass torch.nn.HuberLoss(reduction='mean', delta=1.0)\n Creates a criterion that uses a squared term if the absolute\n element-wise error falls below delta and a delta-scaled L1 term\n otherwise. This loss combines advantages of both \"L1Loss\" and\n \"MSELoss\"; the delta-scaled L1 region makes the loss less sensitive\n to outliers than \"MSELoss\", while the L2 region provides smoothness\n over \"L1Loss\" near 0. See Huber loss for more information.\n For a batch of size N, the unreduced loss can be described as:\n \\ell(x, y) = L = {l_1, ..., l_N}^T\n with\n l_n = \\begin{cases} 0.5 (x_n - y_n)^2, & \\text{if } |x_n - y_n|\n < delta \\ delta * (|x_n - y_n| - 0.5 * delta), &\n \\text{otherwise } \\end{cases}\n If reduction is not none, then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{sum'.}\n \\end{cases}\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HuberLoss.html", "category": "pytorch docs"}
{"text": "\\end{cases}\n Note:\n When delta is set to 1, this loss is equivalent to\n \"SmoothL1Loss\". In general, this loss differs from \"SmoothL1Loss\"\n by a factor of delta (AKA beta in Smooth L1). See \"SmoothL1Loss\"\n for additional discussion on the differences in behavior between\n the two losses.\n Parameters:\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Default: \"'mean'\"\n * delta (float, optional) -- Specifies the threshold\n at which to change between delta-scaled L1 and L2 loss. The\n value must be positive. Default: 1.0\n Shape:\n * Input: () where * means any number of dimensions.\n * Target: (), same shape as the input.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HuberLoss.html", "category": "pytorch docs"}
{"text": "\nTarget: (*), same shape as the input.\nOutput: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as the input.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HuberLoss.html", "category": "pytorch docs"}
{"text": "Moduleclass torch.nn.Module\n Base class for all neural network modules.\n Your models should also subclass this class.\n Modules can also contain other Modules, allowing to nest them in a\n tree structure. You can assign the submodules as regular\n attributes:\n import torch.nn as nn\n import torch.nn.functional as F\n class Model(nn.Module):\n def init(self):\n super().init()\n self.conv1 = nn.Conv2d(1, 20, 5)\n self.conv2 = nn.Conv2d(20, 20, 5)\n def forward(self, x):\n x = F.relu(self.conv1(x))\n return F.relu(self.conv2(x))\n Submodules assigned in this way will be registered, and will have\n their parameters converted too when you call \"to()\", etc.\n Note:\n As per the example above, an \"init()\" call to the parent\n class must be made before assignment on the child.\n Variables:\n training (bool) -- Boolean represents whether this module", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "is in training or evaluation mode.\n add_module(name, module)\n Adds a child module to the current module.\n The module can be accessed as an attribute using the given name.\n Parameters:\n * name (str) -- name of the child module. The child\n module can be accessed from this module using the given\n name\n * module (Module) -- child module to be added to the\n module.\n apply(fn)\n Applies \"fn\" recursively to every submodule (as returned by\n \".children()\") as well as self. Typical use includes\n initializing the parameters of a model (see also torch.nn.init).\n Parameters:\n fn (\"Module\" -> None) -- function to be applied to each\n submodule\n Returns:\n self\n Return type:\n Module\n Example:\n >>> @torch.no_grad()\n >>> def init_weights(m):\n >>> print(m)\n >>> if type(m) == nn.Linear:\n >>> m.weight.fill_(1.0)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "\n\n\n m.weight.fill_(1.0)\n >>> print(m.weight)\n >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))\n >>> net.apply(init_weights)\n Linear(in_features=2, out_features=2, bias=True)\n Parameter containing:\n tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n Linear(in_features=2, out_features=2, bias=True)\n Parameter containing:\n tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n )\n\nbfloat16()\n Casts all floating point parameters and buffers to \"bfloat16\"\n datatype.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n buffers(recurse=True)\n Returns an iterator over module buffers.\n Parameters:\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Parameters:\n recurse (bool) -- if True, then yields buffers of this\n module and all submodules. Otherwise, yields only buffers\n that are direct members of this module.\n Yields:\n torch.Tensor -- module buffer\n Return type:\n Iterator[Tensor]\n Example:\n >>> for buf in model.buffers():\n >>> print(type(buf), buf.size())\n (20L,)\n (20L, 1L, 5L, 5L)\n children()\n Returns an iterator over immediate children modules.\n Yields:\n Module -- a child module\n Return type:\n Iterator[Module]\n cpu()\n Moves all model parameters and buffers to the CPU.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n cuda(device=None)\n Moves all model parameters and buffers to the GPU.\n This also makes associated parameters and buffers different", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "objects. So it should be called before constructing optimizer if\n the module will live on GPU while being optimized.\n Note:\n This method modifies the module in-place.\n Parameters:\n device (int, optional) -- if specified, all\n parameters will be copied to that device\n Returns:\n self\n Return type:\n Module\n double()\n Casts all floating point parameters and buffers to \"double\"\n datatype.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n eval()\n Sets the module in evaluation mode.\n This has any effect only on certain modules. See documentations\n of particular modules for details of their behaviors in\n training/evaluation mode, if they are affected, e.g. \"Dropout\",\n \"BatchNorm\", etc.\n This is equivalent with \"self.train(False)\".\n See Locally disabling gradient computation for a comparison", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "between .eval() and several similar mechanisms that may be\n confused with it.\n Returns:\n self\n Return type:\n Module\n extra_repr()\n Set the extra representation of the module\n To print customized extra information, you should re-implement\n this method in your own modules. Both single-line and multi-line\n strings are acceptable.\n Return type:\n str\n float()\n Casts all floating point parameters and buffers to \"float\"\n datatype.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n forward(*input)\n Defines the computation performed at every call.\n Should be overridden by all subclasses.\n Note:\n Although the recipe for forward pass needs to be defined\n within this function, one should call the \"Module\" instance\n afterwards instead of this since the former takes care of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "running the registered hooks while the latter silently ignores\n them.\n get_buffer(target)\n Returns the buffer given by \"target\" if it exists, otherwise\n throws an error.\n See the docstring for \"get_submodule\" for a more detailed\n explanation of this method's functionality as well as how to\n correctly specify \"target\".\n Parameters:\n target (str) -- The fully-qualified string name of the\n buffer to look for. (See \"get_submodule\" for how to specify a\n fully-qualified string.)\n Returns:\n The buffer referenced by \"target\"\n Return type:\n torch.Tensor\n Raises:\n AttributeError -- If the target string references an\n invalid path or resolves to something that is not a\n buffer\n get_extra_state()\n Returns any extra state to include in the module's state_dict.\n Implement this and a corresponding \"set_extra_state()\" for your", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "module if you need to store extra state. This function is called\n when building the module's state_dict().\n Note that extra state should be picklable to ensure working\n serialization of the state_dict. We only provide provide\n backwards compatibility guarantees for serializing Tensors;\n other objects may break backwards compatibility if their\n serialized pickled form changes.\n Returns:\n Any extra state to store in the module's state_dict\n Return type:\n object\n get_parameter(target)\n Returns the parameter given by \"target\" if it exists, otherwise\n throws an error.\n See the docstring for \"get_submodule\" for a more detailed\n explanation of this method's functionality as well as how to\n correctly specify \"target\".\n Parameters:\n target (str) -- The fully-qualified string name of the\n Parameter to look for. (See \"get_submodule\" for how to\n specify a fully-qualified string.)\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Returns:\n The Parameter referenced by \"target\"\n Return type:\n torch.nn.Parameter\n Raises:\n AttributeError -- If the target string references an\n invalid path or resolves to something that is not an\n \"nn.Parameter\"\n get_submodule(target)\n Returns the submodule given by \"target\" if it exists, otherwise\n throws an error.\n For example, let's say you have an \"nn.Module\" \"A\" that looks\n like this:\n A(\n (net_b): Module(\n (net_c): Module(\n (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))\n )\n (linear): Linear(in_features=100, out_features=200, bias=True)\n )\n )\n (The diagram shows an \"nn.Module\" \"A\". \"A\" has a nested\n submodule \"net_b\", which itself has two submodules \"net_c\" and\n \"linear\". \"net_c\" then has a submodule \"conv\".)\n To check whether or not we have the \"linear\" submodule, we would", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "call \"get_submodule(\"net_b.linear\")\". To check whether we have\n the \"conv\" submodule, we would call\n \"get_submodule(\"net_b.net_c.conv\")\".\n The runtime of \"get_submodule\" is bounded by the degree of\n module nesting in \"target\". A query against \"named_modules\"\n achieves the same result, but it is O(N) in the number of\n transitive modules. So, for a simple check to see if some\n submodule exists, \"get_submodule\" should always be used.\n Parameters:\n target (str) -- The fully-qualified string name of the\n submodule to look for. (See above example for how to specify\n a fully-qualified string.)\n Returns:\n The submodule referenced by \"target\"\n Return type:\n torch.nn.Module\n Raises:\n AttributeError -- If the target string references an\n invalid path or resolves to something that is not an\n \"nn.Module\"\n half()\n Casts all floating point parameters and buffers to \"half\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "datatype.\n Note:\n This method modifies the module in-place.\n Returns:\n self\n Return type:\n Module\n ipu(device=None)\n Moves all model parameters and buffers to the IPU.\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on IPU while being optimized.\n Note:\n This method modifies the module in-place.\n Parameters:\n device (int, optional) -- if specified, all\n parameters will be copied to that device\n Returns:\n self\n Return type:\n Module\n load_state_dict(state_dict, strict=True)\n Copies parameters and buffers from \"state_dict\" into this module\n and its descendants. If \"strict\" is \"True\", then the keys of\n \"state_dict\" must exactly match the keys returned by this\n module's \"state_dict()\" function.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Parameters:\n * state_dict (dict) -- a dict containing parameters and\n persistent buffers.\n * strict (bool, optional) -- whether to strictly\n enforce that the keys in \"state_dict\" match the keys\n returned by this module's \"state_dict()\" function. Default:\n \"True\"\n Returns:\n * missing_keys is a list of str containing the missing\n keys\n * unexpected_keys is a list of str containing the\n unexpected keys\n Return type:\n \"NamedTuple\" with \"missing_keys\" and \"unexpected_keys\" fields\n Note:\n If a parameter or buffer is registered as \"None\" and its\n corresponding key exists in \"state_dict\", \"load_state_dict()\"\n will raise a \"RuntimeError\".\n modules()\n Returns an iterator over all modules in the network.\n Yields:\n Module -- a module in the network\n Return type:\n Iterator[Module]\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Iterator[Module]\n Note:\n Duplicate modules are returned only once. In the following\n example, \"l\" will be returned only once.\n Example:\n >>> l = nn.Linear(2, 2)\n >>> net = nn.Sequential(l, l)\n >>> for idx, m in enumerate(net.modules()):\n ... print(idx, '->', m)\n 0 -> Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n )\n 1 -> Linear(in_features=2, out_features=2, bias=True)\n named_buffers(prefix='', recurse=True, remove_duplicate=True)\n Returns an iterator over module buffers, yielding both the name\n of the buffer as well as the buffer itself.\n Parameters:\n * prefix (str) -- prefix to prepend to all buffer\n names.\n * recurse (bool, optional) -- if True, then yields\n buffers of this module and all submodules. Otherwise,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "yields only buffers that are direct members of this module.\n Defaults to True.\n * remove_duplicate (bool, optional) -- whether to\n remove the duplicated buffers in the result. Defaults to\n True.\n Yields:\n (str, torch.Tensor) -- Tuple containing the name and buffer\n Return type:\n Iterator[Tuple[str, Tensor]]\n Example:\n >>> for name, buf in self.named_buffers():\n >>> if name in ['running_var']:\n >>> print(buf.size())\n named_children()\n Returns an iterator over immediate children modules, yielding\n both the name of the module as well as the module itself.\n Yields:\n (str, Module) -- Tuple containing a name and child module\n Return type:\n Iterator[Tuple[str, Module]]\n Example:\n >>> for name, module in model.named_children():\n >>> if name in ['conv4', 'conv5']:\n >>> print(module)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "\n\n\n print(module)\n\nnamed_modules(memo=None, prefix='', remove_duplicate=True)\n Returns an iterator over all modules in the network, yielding\n both the name of the module as well as the module itself.\n Parameters:\n * memo (Optional[Set[Module]]) -- a memo to\n store the set of modules already added to the result\n * prefix (str) -- a prefix that will be added to the\n name of the module\n * remove_duplicate (bool) -- whether to remove the\n duplicated module instances in the result or not\n Yields:\n (str, Module) -- Tuple of name and module\n Note:\n Duplicate modules are returned only once. In the following\n example, \"l\" will be returned only once.\n Example:\n >>> l = nn.Linear(2, 2)\n >>> net = nn.Sequential(l, l)\n >>> for idx, m in enumerate(net.named_modules()):\n ... print(idx, '->', m)\n 0 -> ('', Sequential(\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "0 -> ('', Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n ))\n 1 -> ('0', Linear(in_features=2, out_features=2, bias=True))\n named_parameters(prefix='', recurse=True, remove_duplicate=True)\n Returns an iterator over module parameters, yielding both the\n name of the parameter as well as the parameter itself.\n Parameters:\n * prefix (str) -- prefix to prepend to all parameter\n names.\n * recurse (bool) -- if True, then yields parameters of\n this module and all submodules. Otherwise, yields only\n parameters that are direct members of this module.\n * remove_duplicate (bool, optional) -- whether to\n remove the duplicated parameters in the result. Defaults to\n True.\n Yields:\n (str, Parameter) -- Tuple containing the name and parameter\n Return type:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Return type:\n Iterator[Tuple[str, Parameter]]\n Example:\n >>> for name, param in self.named_parameters():\n >>> if name in ['bias']:\n >>> print(param.size())\n parameters(recurse=True)\n Returns an iterator over module parameters.\n This is typically passed to an optimizer.\n Parameters:\n recurse (bool) -- if True, then yields parameters of\n this module and all submodules. Otherwise, yields only\n parameters that are direct members of this module.\n Yields:\n Parameter -- module parameter\n Return type:\n Iterator[Parameter]\n Example:\n >>> for param in model.parameters():\n >>> print(type(param), param.size())\n (20L,)\n (20L, 1L, 5L, 5L)\n register_backward_hook(hook)\n Registers a backward hook on the module.\n This function is deprecated in favor of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "This function is deprecated in favor of\n \"register_full_backward_hook()\" and the behavior of this\n function will change in future versions.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_buffer(name, tensor, persistent=True)\n Adds a buffer to the module.\n This is typically used to register a buffer that should not to\n be considered a model parameter. For example, BatchNorm's\n \"running_mean\" is not a parameter, but is part of the module's\n state. Buffers, by default, are persistent and will be saved\n alongside parameters. This behavior can be changed by setting\n \"persistent\" to \"False\". The only difference between a\n persistent buffer and a non-persistent buffer is that the latter\n will not be a part of this module's \"state_dict\".\n Buffers can be accessed as attributes using given names.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Parameters:\n * name (str) -- name of the buffer. The buffer can be\n accessed from this module using the given name\n * tensor (Tensor or None) -- buffer to be\n registered. If \"None\", then operations that run on buffers,\n such as \"cuda\", are ignored. If \"None\", the buffer is\n not included in the module's \"state_dict\".\n * persistent (bool) -- whether the buffer is part of\n this module's \"state_dict\".\n Example:\n >>> self.register_buffer('running_mean', torch.zeros(num_features))\n register_forward_hook(hook, *, prepend=False, with_kwargs=False)\n Registers a forward hook on the module.\n The hook will be called every time after \"forward()\" has\n computed an output.\n If \"with_kwargs\" is \"False\" or not specified, the input contains\n only the positional arguments given to the module. Keyword\n arguments won't be passed to the hooks and only to the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "\"forward\". The hook can modify the output. It can modify the\n input inplace but it will not have effect on forward since this\n is called after \"forward()\" is called. The hook should have the\n following signature:\n hook(module, args, output) -> None or modified output\n If \"with_kwargs\" is \"True\", the forward hook will be passed the\n \"kwargs\" given to the forward function and be expected to return\n the output possibly modified. The hook should have the following\n signature:\n hook(module, args, kwargs, output) -> None or modified output\n Parameters:\n * hook (Callable) -- The user defined hook to be\n registered.\n * prepend (bool) -- If \"True\", the provided \"hook\" will\n be fired before all existing \"forward\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"forward\" hooks on this", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "\"torch.nn.modules.Module\". Note that global \"forward\" hooks\n registered with \"register_module_forward_hook()\" will fire\n before all hooks registered by this method. Default:\n \"False\"\n * with_kwargs (bool) -- If \"True\", the \"hook\" will be\n passed the kwargs given to the forward function. Default:\n \"False\"\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_forward_pre_hook(hook, *, prepend=False, with_kwargs=False)\n Registers a forward pre-hook on the module.\n The hook will be called every time before \"forward()\" is\n invoked.\n If \"with_kwargs\" is false or not specified, the input contains\n only the positional arguments given to the module. Keyword\n arguments won't be passed to the hooks and only to the\n \"forward\". The hook can modify the input. User can either return", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "a tuple or a single modified value in the hook. We will wrap the\n value into a tuple if a single value is returned (unless that\n value is already a tuple). The hook should have the following\n signature:\n hook(module, args) -> None or modified input\n If \"with_kwargs\" is true, the forward pre-hook will be passed\n the kwargs given to the forward function. And if the hook\n modifies the input, both the args and kwargs should be returned.\n The hook should have the following signature:\n hook(module, args, kwargs) -> None or a tuple of modified input and kwargs\n Parameters:\n * hook (Callable) -- The user defined hook to be\n registered.\n * prepend (bool) -- If true, the provided \"hook\" will\n be fired before all existing \"forward_pre\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"forward_pre\" hooks on", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "this \"torch.nn.modules.Module\". Note that global\n \"forward_pre\" hooks registered with\n \"register_module_forward_pre_hook()\" will fire before all\n hooks registered by this method. Default: \"False\"\n * with_kwargs (bool) -- If true, the \"hook\" will be\n passed the kwargs given to the forward function. Default:\n \"False\"\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_full_backward_hook(hook, prepend=False)\n Registers a backward hook on the module.\n The hook will be called every time the gradients with respect to\n a module are computed, i.e. the hook will execute if and only if\n the gradients with respect to module outputs are computed. The\n hook should have the following signature:\n hook(module, grad_input, grad_output) -> tuple(Tensor) or None", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "The \"grad_input\" and \"grad_output\" are tuples that contain the\n gradients with respect to the inputs and outputs respectively.\n The hook should not modify its arguments, but it can optionally\n return a new gradient with respect to the input that will be\n used in place of \"grad_input\" in subsequent computations.\n \"grad_input\" will only correspond to the inputs given as\n positional arguments and all kwarg arguments are ignored.\n Entries in \"grad_input\" and \"grad_output\" will be \"None\" for all\n non-Tensor arguments.\n For technical reasons, when this hook is applied to a Module,\n its forward function will receive a view of each Tensor passed\n to the Module. Similarly the caller will receive a view of each\n Tensor returned by the Module's forward function.\n Warning:\n Modifying inputs or outputs inplace is not allowed when using\n backward hooks and will raise an error.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Parameters:\n * hook (Callable) -- The user-defined hook to be\n registered.\n * prepend (bool) -- If true, the provided \"hook\" will\n be fired before all existing \"backward\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"backward\" hooks on this\n \"torch.nn.modules.Module\". Note that global \"backward\"\n hooks registered with\n \"register_module_full_backward_hook()\" will fire before all\n hooks registered by this method.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_full_backward_pre_hook(hook, prepend=False)\n Registers a backward pre-hook on the module.\n The hook will be called every time the gradients for the module\n are computed. The hook should have the following signature:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "hook(module, grad_output) -> Tensor or None\n The \"grad_output\" is a tuple. The hook should not modify its\n arguments, but it can optionally return a new gradient with\n respect to the output that will be used in place of\n \"grad_output\" in subsequent computations. Entries in\n \"grad_output\" will be \"None\" for all non-Tensor arguments.\n For technical reasons, when this hook is applied to a Module,\n its forward function will receive a view of each Tensor passed\n to the Module. Similarly the caller will receive a view of each\n Tensor returned by the Module's forward function.\n Warning:\n Modifying inputs inplace is not allowed when using backward\n hooks and will raise an error.\n Parameters:\n * hook (Callable) -- The user-defined hook to be\n registered.\n * prepend (bool) -- If true, the provided \"hook\" will\n be fired before all existing \"backward_pre\" hooks on this", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "\"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"backward_pre\" hooks on\n this \"torch.nn.modules.Module\". Note that global\n \"backward_pre\" hooks registered with\n \"register_module_full_backward_pre_hook()\" will fire before\n all hooks registered by this method.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_load_state_dict_post_hook(hook)\n Registers a post hook to be run after module's \"load_state_dict\"\n is called.\n It should have the following signature::\n hook(module, incompatible_keys) -> None\n The \"module\" argument is the current module that this hook is\n registered on, and the \"incompatible_keys\" argument is a\n \"NamedTuple\" consisting of attributes \"missing_keys\" and\n \"unexpected_keys\". \"missing_keys\" is a \"list\" of \"str\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "containing the missing keys and \"unexpected_keys\" is a \"list\" of\n \"str\" containing the unexpected keys.\n The given incompatible_keys can be modified inplace if needed.\n Note that the checks performed when calling \"load_state_dict()\"\n with \"strict=True\" are affected by modifications the hook makes\n to \"missing_keys\" or \"unexpected_keys\", as expected. Additions\n to either set of keys will result in an error being thrown when\n \"strict=True\", and clearing out both missing and unexpected keys\n will avoid an error.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_module(name, module)\n Alias for \"add_module()\".\n register_parameter(name, param)\n Adds a parameter to the module.\n The parameter can be accessed as an attribute using given name.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Parameters:\n * name (str) -- name of the parameter. The parameter\n can be accessed from this module using the given name\n * param (Parameter or None) -- parameter to be\n added to the module. If \"None\", then operations that run on\n parameters, such as \"cuda\", are ignored. If \"None\", the\n parameter is not included in the module's \"state_dict\".\n register_state_dict_pre_hook(hook)\n These hooks will be called with arguments: \"self\", \"prefix\", and\n \"keep_vars\" before calling \"state_dict\" on \"self\". The\n registered hooks can be used to perform pre-processing before\n the \"state_dict\" call is made.\n requires_grad_(requires_grad=True)\n Change if autograd should record operations on parameters in\n this module.\n This method sets the parameters' \"requires_grad\" attributes in-\n place.\n This method is helpful for freezing part of the module for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "finetuning or training parts of a model individually (e.g., GAN\n training).\n See Locally disabling gradient computation for a comparison\n between .requires_grad_() and several similar mechanisms that\n may be confused with it.\n Parameters:\n requires_grad (bool) -- whether autograd should record\n operations on parameters in this module. Default: \"True\".\n Returns:\n self\n Return type:\n Module\n set_extra_state(state)\n This function is called from \"load_state_dict()\" to handle any\n extra state found within the state_dict. Implement this\n function and a corresponding \"get_extra_state()\" for your module\n if you need to store extra state within its state_dict.\n Parameters:\n state (dict) -- Extra state from the state_dict\n share_memory()\n See \"torch.Tensor.share_memory_()\"\n Return type:\n T", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Return type:\n T\n state_dict(, destination: T_destination, prefix: str = '', keep_vars: bool = False) -> T_destination\n state_dict(, prefix: str = '', keep_vars: bool = False) -> Dict[str, Any]\n Returns a dictionary containing references to the whole state of\n the module.\n Both parameters and persistent buffers (e.g. running averages)\n are included. Keys are corresponding parameter and buffer names.\n Parameters and buffers set to \"None\" are not included.\n Note:\n The returned object is a shallow copy. It contains references\n to the module's parameters and buffers.\n Warning:\n Currently \"state_dict()\" also accepts positional arguments for\n \"destination\", \"prefix\" and \"keep_vars\" in order. However,\n this is being deprecated and keyword arguments will be\n enforced in future releases.\n Warning:\n Please avoid the use of argument \"destination\" as it is not\n designed for end-users.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "designed for end-users.\n Parameters:\n * destination (dict, optional) -- If provided, the\n state of module will be updated into the dict and the same\n object is returned. Otherwise, an \"OrderedDict\" will be\n created and returned. Default: \"None\".\n * prefix (str, optional) -- a prefix added to\n parameter and buffer names to compose the keys in\n state_dict. Default: \"''\".\n * keep_vars (bool, optional) -- by default the\n \"Tensor\" s returned in the state dict are detached from\n autograd. If it's set to \"True\", detaching will not be\n performed. Default: \"False\".\n Returns:\n a dictionary containing a whole state of the module\n Return type:\n dict\n Example:\n >>> module.state_dict().keys()\n ['bias', 'weight']\n to(device: Optional[Union[int, device]] = ..., dtype: Optional[Union[dtype, str]] = ..., non_blocking: bool = ...) -> T", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "to(dtype: Union[dtype, str], non_blocking: bool = ...) -> T\n to(tensor: Tensor, non_blocking: bool = ...) -> T\n Moves and/or casts the parameters and buffers.\n This can be called as\n to(device=None, dtype=None, non_blocking=False)\n to(dtype, non_blocking=False)\n to(tensor, non_blocking=False)\n to(memory_format=torch.channels_last)\n Its signature is similar to \"torch.Tensor.to()\", but only\n accepts floating point or complex \"dtype\"s. In addition, this\n method will only cast the floating point or complex parameters\n and buffers to \"dtype\" (if given). The integral parameters and\n buffers will be moved \"device\", if that is given, but with\n dtypes unchanged. When \"non_blocking\" is set, it tries to\n convert/move asynchronously with respect to the host if\n possible, e.g., moving CPU Tensors with pinned memory to CUDA\n devices.\n See below for examples.\n Note:\n This method modifies the module in-place.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Parameters:\n * device (\"torch.device\") -- the desired device of the\n parameters and buffers in this module\n * dtype (\"torch.dtype\") -- the desired floating point or\n complex dtype of the parameters and buffers in this module\n * tensor (torch.Tensor) -- Tensor whose dtype and\n device are the desired dtype and device for all parameters\n and buffers in this module\n * memory_format (\"torch.memory_format\") -- the desired\n memory format for 4D parameters and buffers in this module\n (keyword only argument)\n Returns:\n self\n Return type:\n Module\n Examples:\n >>> linear = nn.Linear(2, 2)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1913, -0.3420],\n [-0.5113, -0.2325]])\n >>> linear.to(torch.double)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Parameter containing:\n tensor([[ 0.1913, -0.3420],\n [-0.5113, -0.2325]], dtype=torch.float64)\n >>> gpu1 = torch.device(\"cuda:1\")\n >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1914, -0.3420],\n [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')\n >>> cpu = torch.device(\"cpu\")\n >>> linear.to(cpu)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1914, -0.3420],\n [-0.5112, -0.2324]], dtype=torch.float16)\n >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.3741+0.j, 0.2382+0.j],\n [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "\n\n\nlinear(torch.ones(3, 2, dtype=torch.cdouble))\n tensor([[0.6122+0.j, 0.1150+0.j],\n [0.6122+0.j, 0.1150+0.j],\n [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)\n to_empty(, device)\n Moves the parameters and buffers to the specified device without\n copying storage.\n Parameters:\n device (\"torch.device\") -- The desired device of the\n parameters and buffers in this module.\n Returns:\n self\n Return type:\n Module\n train(mode=True)\n Sets the module in training mode.\n This has any effect only on certain modules. See documentations\n of particular modules for details of their behaviors in\n training/evaluation mode, if they are affected, e.g. \"Dropout\",\n \"BatchNorm\", etc.\n Parameters:\n mode (bool*) -- whether to set training mode (\"True\") or\n evaluation mode (\"False\"). Default: \"True\".\n Returns:\n self\n Return type:\n Module\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "self\n Return type:\n Module\n type(dst_type)\n Casts all parameters and buffers to \"dst_type\".\n Note:\n This method modifies the module in-place.\n Parameters:\n dst_type (type or string) -- the desired type\n Returns:\n self\n Return type:\n Module\n xpu(device=None)\n Moves all model parameters and buffers to the XPU.\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on XPU while being optimized.\n Note:\n This method modifies the module in-place.\n Parameters:\n device (int, optional) -- if specified, all\n parameters will be copied to that device\n Returns:\n self\n Return type:\n Module\n zero_grad(set_to_none=False)\n Sets gradients of all model parameters to zero. See similar\n function under \"torch.optim.Optimizer\" for more context.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. See \"torch.optim.Optimizer.zero_grad()\"\n for details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"}
{"text": "torch.meshgridtorch.meshgrid(tensors, indexing=None)\n Creates grids of coordinates specified by the 1D inputs in\n attr:tensors.\n This is helpful when you want to visualize data over some range of\n inputs. See below for a plotting example.\n Given N 1D tensors T_0 \\ldots T_{N-1} as inputs with corresponding\n sizes S_0 \\ldots S_{N-1}, this creates N N-dimensional tensors G_0\n \\ldots G_{N-1}, each with shape (S_0, ..., S_{N-1}) where the\n output G_i is constructed by expanding T_i to the result shape.\n Note:\n 0D inputs are treated equivalently to 1D inputs of a single\n element.\n Warning:\n torch.meshgrid(tensors) currently has the same behavior as\n calling numpy.meshgrid(arrays, indexing='ij').In the future\n torch.meshgrid will transition to indexing='xy'* as the\n default.https://github.com/pytorch/pytorch/issues/50276 tracks\n this issue with the goal of migrating to NumPy's behavior.\n See also:", "source": "https://pytorch.org/docs/stable/generated/torch.meshgrid.html", "category": "pytorch docs"}
{"text": "See also:\n \"torch.cartesian_prod()\" has the same effect but it collects the\n data in a tensor of vectors.\n Parameters:\n * tensors (list of Tensor) -- list of scalars or 1\n dimensional tensors. Scalars will be treated as tensors of\n size (1,) automatically\n * indexing (Optional[str]) --\n (str, optional): the indexing mode, either \"xy\" or \"ij\",\n defaults to \"ij\". See warning for future changes.\n If \"xy\" is selected, the first dimension corresponds to the\n cardinality of the second input and the second dimension\n corresponds to the cardinality of the first input.\n If \"ij\" is selected, the dimensions are in the same order as\n the cardinality of the inputs.\n Returns:\n If the input has N tensors of size S_0 \\ldots S_{N-1}`, then the\n output will also have N tensors, where each tensor is of shape\n (S_0, ..., S_{N-1}).\n Return type:\n seq (sequence of Tensors)\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.meshgrid.html", "category": "pytorch docs"}
{"text": "seq (sequence of Tensors)\n Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> y = torch.tensor([4, 5, 6])\n Observe the element-wise pairings across the grid, (1, 4),\n (1, 5), ..., (3, 6). This is the same thing as the\n cartesian product.\n >>> grid_x, grid_y = torch.meshgrid(x, y, indexing='ij')\n >>> grid_x\n tensor([[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]])\n >>> grid_y\n tensor([[4, 5, 6],\n [4, 5, 6],\n [4, 5, 6]])\n This correspondence can be seen when these grids are\n stacked properly.\n >>> torch.equal(torch.cat(tuple(torch.dstack([grid_x, grid_y]))),\n ... torch.cartesian_prod(x, y))\n True\n torch.meshgrid is commonly used to produce a grid for\n plotting.\n >>> import matplotlib.pyplot as plt\n >>> xs = torch.linspace(-5, 5, steps=100)\n >>> ys = torch.linspace(-5, 5, steps=100)\n >>> x, y = torch.meshgrid(xs, ys, indexing='xy')", "source": "https://pytorch.org/docs/stable/generated/torch.meshgrid.html", "category": "pytorch docs"}
{"text": "\n\n\nz = torch.sin(torch.sqrt(x * x + y * y))\n >>> ax = plt.axes(projection='3d')\n >>> ax.plot_surface(x.numpy(), y.numpy(), z.numpy())\n >>> plt.show()\n [image]\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.meshgrid.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.dropout3dtorch.nn.functional.dropout3d(input, p=0.5, training=True, inplace=False)\n Randomly zero out entire channels (a channel is a 3D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 3D tensor \\text{input}[i, j]) of the input tensor). Each channel\n will be zeroed out independently on every forward call with\n probability \"p\" using samples from a Bernoulli distribution.\n See \"Dropout3d\" for details.\n Parameters:\n * p (float) -- probability of a channel to be zeroed.\n Default: 0.5\n * training (bool) -- apply dropout if is \"True\". Default:\n \"True\"\n * inplace (bool) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout3d.html", "category": "pytorch docs"}
{"text": "MSELossclass torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')\n Creates a criterion that measures the mean squared error (squared\n L2 norm) between each element in the input x and target y.\n The unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = {l_1,\\dots,l_N}^\\top, \\quad l_n = \\left( x_n\n - y_n \\right)^2,\n where N is the batch size. If \"reduction\" is not \"'none'\" (default\n \"'mean'\"), then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{sum'.}\n \\end{cases}\n x and y are tensors of arbitrary shapes with a total of n elements\n each.\n The mean operation still operates over all the elements, and\n divides by n.\n The division by n can be avoided if one sets \"reduction = 'sum'\".\n Parameters:\n * size_average (bool, optional) -- Deprecated (see", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html", "category": "pytorch docs"}
{"text": "\"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html", "category": "pytorch docs"}
{"text": "\"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (), where * means any number of dimensions.\n * Target: (), same shape as the input.\n Examples:\n >>> loss = nn.MSELoss()\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randn(3, 5)\n >>> output = loss(input, target)\n >>> output.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html", "category": "pytorch docs"}
{"text": "torch.seedtorch.seed()\n Sets the seed for generating random numbers to a non-deterministic\n random number. Returns a 64 bit number used to seed the RNG.\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.seed.html", "category": "pytorch docs"}
{"text": "torch.linalg.eightorch.linalg.eigh(A, UPLO='L', , out=None)\n Computes the eigenvalue decomposition of a complex Hermitian or\n real symmetric matrix.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the eigenvalue\n decomposition* of a complex Hermitian or real symmetric matrix A\n \\in \\mathbb{K}^{n \\times n} is defined as\n A = Q \\operatorname{diag}(\\Lambda) Q^{\\text{H}}\\mathrlap{\\qquad\n Q \\in \\mathbb{K}^{n \\times n}, \\Lambda \\in \\mathbb{R}^n}\n where Q^{\\text{H}} is the conjugate transpose when Q is complex,\n and the transpose when Q is real-valued. Q is orthogonal in the\n real case and unitary in the complex case.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n \"A\" is assumed to be Hermitian (resp. symmetric), but this is not\n checked internally, instead:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"}
{"text": "checked internally, instead:\n * If \"UPLO\"= 'L' (default), only the lower triangular part of the\n matrix is used in the computation.\n * If \"UPLO\"= 'U', only the upper triangular part of the matrix is\n used.\n The eigenvalues are returned in ascending order.\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n Note:\n The eigenvalues of real symmetric or complex Hermitian matrices\n are always real.\n Warning:\n The eigenvectors of a symmetric matrix are not unique, nor are\n they continuous with respect to \"A\". Due to this lack of\n uniqueness, different hardware and software may compute different\n eigenvectors.This non-uniqueness is caused by the fact that\n multiplying an eigenvector by -1 in the real case or by e^{i\n \\phi}, \\phi \\in \\mathbb{R} in the complex case produces another\n set of valid eigenvectors of the matrix. For this reason, the", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"}
{"text": "loss function shall not depend on the phase of the eigenvectors,\n as this quantity is not well-defined. This is checked for complex\n inputs when computing the gradients of this function. As such,\n when inputs are complex and are on a CUDA device, the computation\n of the gradients of this function synchronizes that device with\n the CPU.\n Warning:\n Gradients computed using the eigenvectors tensor will only be\n finite when \"A\" has distinct eigenvalues. Furthermore, if the\n distance between any two eigenvalues is close to zero, the\n gradient will be numerically unstable, as it depends on the\n eigenvalues \\lambda_i through the computation of \\frac{1}{\\min_{i\n \\neq j} \\lambda_i - \\lambda_j}.\n See also:\n \"torch.linalg.eigvalsh()\" computes only the eigenvalues of a\n Hermitian matrix. Unlike \"torch.linalg.eigh()\", the gradients of\n \"eigvalsh()\" are always numerically stable.\n \"torch.linalg.cholesky()\" for a different decomposition of a", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"}
{"text": "Hermitian matrix. The Cholesky decomposition gives less\n information about the matrix but is much faster to compute than\n the eigenvalue decomposition.\n \"torch.linalg.eig()\" for a (slower) function that computes the\n eigenvalue decomposition of a not necessarily Hermitian square\n matrix.\n \"torch.linalg.svd()\" for a (slower) function that computes the\n more general SVD decomposition of matrices of any shape.\n \"torch.linalg.qr()\" for another (much faster) decomposition that\n works on general matrices.\n Parameters:\n * A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions consisting of symmetric or\n Hermitian matrices.\n * UPLO ('L', 'U', optional) -- controls whether to\n use the upper or lower triangular part of \"A\" in the\n computations. Default: 'L'.\n Keyword Arguments:\n out (tuple, optional*) -- output tuple of two tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"}
{"text": "Ignored if None. Default: None.\n Returns:\n A named tuple (eigenvalues, eigenvectors) which corresponds to\n \\Lambda and Q above.\n eigenvalues will always be real-valued, even when \"A\" is\n complex. It will also be ordered in ascending order.\n eigenvectors will have the same dtype as \"A\" and will contain\n the eigenvectors as its columns.\n Examples::\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A = A + A.T.conj() # creates a Hermitian matrix\n >>> A\n tensor([[2.9228+0.0000j, 0.2029-0.0862j],\n [0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)\n >>> L, Q = torch.linalg.eigh(A)\n >>> L\n tensor([0.3277, 2.9415], dtype=torch.float64)\n >>> Q\n tensor([[-0.0846+-0.0000j, -0.9964+0.0000j],\n [ 0.9170+0.3898j, -0.0779-0.0331j]], dtype=torch.complex128)\n >>> torch.dist(Q @ torch.diag(L.cdouble()) @ Q.T.conj(), A)\n tensor(6.1062e-16, dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"}
{"text": "tensor(6.1062e-16, dtype=torch.float64)\n >>> A = torch.randn(3, 2, 2, dtype=torch.float64)\n >>> A = A + A.mT # creates a batch of symmetric matrices\n >>> L, Q = torch.linalg.eigh(A)\n >>> torch.dist(Q @ torch.diag_embed(L) @ Q.mH, A)\n tensor(1.5423e-15, dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.index_fillTensor.index_fill(dim, index, value) -> Tensor\n Out-of-place version of \"torch.Tensor.index_fill_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_fill.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addmmTensor.addmm(mat1, mat2, *, beta=1, alpha=1) -> Tensor\n See \"torch.addmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addmm.html", "category": "pytorch docs"}
{"text": "torch.autograd.forward_ad.unpack_dualtorch.autograd.forward_ad.unpack_dual(tensor, *, level=None)\n Unpacks a \"dual tensor\" to get both its Tensor value and its\n forward AD gradient. The result is a namedtuple \"(primal, tangent)\"\n where \"primal\" is a view of \"tensor\"'s primal and \"tangent\" is\n \"tensor\"'s tangent as-is. Neither of these tensors can be dual\n tensor of level \"level\".\n This function is backward differentiable.\n Example:\n >>> with dual_level():\n ... inp = make_dual(x, x_t)\n ... out = f(inp)\n ... y, jvp = unpack_dual(out)\n ... jvp = unpack_dual(out).tangent\n Please see the forward-mode AD tutorial for detailed steps on how\n to use this API.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.unpack_dual.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.normalizetorch.nn.functional.normalize(input, p=2.0, dim=1, eps=1e-12, out=None)\n Performs L_p normalization of inputs over specified dimension.\n For a tensor \"input\" of sizes (n_0, ..., n_{dim}, ..., n_k), each\n n_{dim} -element vector v along dimension \"dim\" is transformed as\n v = \\frac{v}{\\max(\\lVert v \\rVert_p, \\epsilon)}.\n With the default arguments it uses the Euclidean norm over vectors\n along dimension 1 for normalization.\n Parameters:\n * input (Tensor) -- input tensor of any shape\n * p (float) -- the exponent value in the norm formulation.\n Default: 2\n * dim (int) -- the dimension to reduce. Default: 1\n * eps (float) -- small value to avoid division by zero.\n Default: 1e-12\n * out (Tensor, optional) -- the output tensor. If\n \"out\" is used, this operation won't be differentiable.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.normalize.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.conv3dtorch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor\n Applies a 3D convolution over an input image composed of several\n input planes.\n This operator supports TensorFloat32.\n See \"Conv3d\" for details and output shape.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Note:\n This operator supports complex data types i.e. \"complex32,\n complex64, complex128\".\n Parameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iT , iH , iW)\n * weight -- filters of shape (\\text{out_channels} ,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html", "category": "pytorch docs"}
{"text": "\\frac{\\text{in_channels}}{\\text{groups}} , kT , kH , kW)\n * bias -- optional bias tensor of shape\n (\\text{out_channels}). Default: None\n * stride -- the stride of the convolving kernel. Can be a\n single number or a tuple (sT, sH, sW). Default: 1\n * padding --\n implicit paddings on both sides of the input. Can be a string\n {'valid', 'same'}, single number or a tuple (padT, padH,\n padW). Default: 0 \"padding='valid'\" is the same as no\n padding. \"padding='same'\" pads the input so the output has the\n same shape as the input. However, this mode doesn't support\n any stride values other than 1.\n Warning:\n For \"padding='same'\", if the \"weight\" is even-length and\n \"dilation\" is odd in any dimension, a full \"pad()\" operation\n may be needed internally. Lowering performance.\n * dilation -- the spacing between kernel elements. Can be a", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html", "category": "pytorch docs"}
{"text": "single number or a tuple (dT, dH, dW). Default: 1\n * groups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n Examples:\n >>> filters = torch.randn(33, 16, 3, 3, 3)\n >>> inputs = torch.randn(20, 16, 50, 10, 20)\n >>> F.conv3d(inputs, filters)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_sparseTensor.is_sparse\n Is \"True\" if the Tensor uses sparse storage layout, \"False\"\n otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_sparse.html", "category": "pytorch docs"}
{"text": "ReplicationPad3dclass torch.nn.ReplicationPad3d(padding)\n Pads the input tensor using replication of the input boundary.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 6-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom},\n \\text{padding_front}, \\text{padding_back})\n Shape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n D_{out} = D_{in} + \\text{padding_front} +\n \\text{padding_back}\n H_{out} = H_{in} + \\text{padding_top} +\n \\text{padding_bottom}\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ReplicationPad3d(3)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad3d.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> m = nn.ReplicationPad3d(3)\n >>> input = torch.randn(16, 3, 8, 320, 480)\n >>> output = m(input)\n >>> # using different paddings for different sides\n >>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1))\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad3d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.gaussian_nll_losstorch.nn.functional.gaussian_nll_loss(input, target, var, full=False, eps=1e-06, reduction='mean')\n Gaussian negative log likelihood loss.\n See \"GaussianNLLLoss\" for details.\n Parameters:\n * input (Tensor) -- expectation of the Gaussian\n distribution.\n * target (Tensor) -- sample from the Gaussian\n distribution.\n * var (Tensor) -- tensor of positive variance(s), one for\n each of the expectations in the input (heteroscedastic), or a\n single one (homoscedastic).\n * full (bool, optional) -- include the constant term\n in the loss calculation. Default: \"False\".\n * eps (float, optional) -- value added to var, for\n stability. Default: 1e-6.\n * reduction (str, optional) -- specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gaussian_nll_loss.html", "category": "pytorch docs"}
{"text": "\"'none'\": no reduction will be applied, \"'mean'\": the output\n is the average of all batch member losses, \"'sum'\": the output\n is the sum of all batch member losses. Default: \"'mean'\".\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gaussian_nll_loss.html", "category": "pytorch docs"}
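As a sanity check on the formula, when `var == 1` and `full=False` the loss reduces to half the mean squared error. A minimal sketch:

```python
import torch
import torch.nn.functional as F

# Heteroscedastic case: one predicted variance per expectation.
input = torch.randn(4, 2)
target = torch.randn(4, 2)
var = torch.ones(4, 2)  # variances must be positive

loss = F.gaussian_nll_loss(input, target, var, reduction='mean')

# With var == 1 and full=False: loss = 0.5 * mean((input - target) ** 2)
expected = 0.5 * ((input - target) ** 2).mean()
assert torch.allclose(loss, expected, atol=1e-5)
```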
{"text": "torch._foreach_round_torch._foreach_round_(self: List[Tensor]) -> None\n Apply \"torch.round()\" to each Tensor of the input list, in place.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_round_.html", "category": "pytorch docs"}
{"text": "torch.fractorch.frac(input, *, out=None) -> Tensor\n Computes the fractional portion of each element in \"input\".\n \\text{out}_{i} = \\text{input}_{i} - \\left\\lfloor\n |\\text{input}_{i}| \\right\\rfloor *\n \\operatorname{sgn}(\\text{input}_{i})\n Example:\n >>> torch.frac(torch.tensor([1, 2.5, -3.2]))\n tensor([ 0.0000, 0.5000, -0.2000])", "source": "https://pytorch.org/docs/stable/generated/torch.frac.html", "category": "pytorch docs"}
{"text": "adaptive_avg_pool3dclass torch.ao.nn.quantized.functional.adaptive_avg_pool3d(input, output_size)\n Applies a 3D adaptive average pooling over a quantized input signal\n composed of several quantized input planes.\n Note:\n The input quantization parameters propagate to the output.\n See \"AdaptiveAvgPool3d\" for details and output shape.\n Parameters:\n output_size (None) -- the target output size (single\n integer or double-integer tuple)\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.adaptive_avg_pool3d.html", "category": "pytorch docs"}
{"text": "torch.smmtorch.smm(input, mat) -> Tensor\n Performs a matrix multiplication of the sparse matrix \"input\" with\n the dense matrix \"mat\".\n Parameters:\n * input (Tensor) -- a sparse matrix to be matrix\n multiplied\n * mat (Tensor) -- a dense matrix to be matrix multiplied", "source": "https://pytorch.org/docs/stable/generated/torch.smm.html", "category": "pytorch docs"}
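A minimal sketch of `torch.smm` on a small COO sparse matrix; the product is returned as a sparse tensor and can be compared against the equivalent dense matmul:

```python
import torch

# Sparse 2x3 matrix with two non-zeros, times a dense 3x2 matrix.
indices = torch.tensor([[0, 1], [2, 0]])
values = torch.tensor([3.0, 4.0])
s = torch.sparse_coo_tensor(indices, values, (2, 3)).coalesce()
d = torch.ones(3, 2)

out = torch.smm(s, d)           # result is returned as a sparse tensor
dense_ref = s.to_dense() @ d    # same product computed densely
assert torch.equal(out.to_dense(), dense_ref)
```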
{"text": "torch.Tensor.erfTensor.erf() -> Tensor\n See \"torch.erf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erf.html", "category": "pytorch docs"}
{"text": "torch.fft.rfftfreqtorch.fft.rfftfreq(n, d=1.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Computes the sample frequencies for \"rfft()\" with a signal of size\n \"n\".\n Note:\n \"rfft()\" returns Hermitian one-sided output, so only the positive\n frequency terms are returned. For a real FFT of length \"n\" and\n with inputs spaced in length unit \"d\", the frequencies are:\n f = torch.arange((n + 1) // 2) / (d * n)\n Note:\n For even lengths, the Nyquist frequency at \"f[n/2]\" can be\n thought of as either negative or positive. Unlike \"fftfreq()\",\n \"rfftfreq()\" always returns it as positive.\n Parameters:\n * n (int) -- the real FFT length\n * d (float, optional) -- The sampling length scale.\n The spacing between individual samples of the FFT input. The\n default assumes unit spacing; dividing that result by the\n actual spacing gives the result in physical frequency units.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n -[ Example ]-\n >>> torch.fft.rfftfreq(5)\n tensor([0.0000, 0.2000, 0.4000])\n >>> torch.fft.rfftfreq(4)\n tensor([0.0000, 0.2500, 0.5000])", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html", "category": "pytorch docs"}
{"text": "Compared to the output from \"fftfreq()\", we see that the Nyquist\n frequency at \"f[2]\" has changed sign:\n >>> torch.fft.fftfreq(4)\n tensor([ 0.0000, 0.2500, -0.5000, -0.2500])", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html", "category": "pytorch docs"}
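The sign flip of the Nyquist bin can be checked directly by comparing the two frequency helpers on an even-length signal:

```python
import torch

# rfftfreq only returns the non-negative frequency bins, and reports the
# Nyquist bin for even n as +0.5/d rather than -0.5/d.
full = torch.fft.fftfreq(4)    # [0.0, 0.25, -0.5, -0.25]
half = torch.fft.rfftfreq(4)   # [0.0, 0.25, 0.5]

assert torch.equal(half, torch.tensor([0.0, 0.25, 0.5]))
assert torch.equal(full, torch.tensor([0.0, 0.25, -0.5, -0.25]))
assert half[-1] == -full[2]    # the Nyquist frequency flips sign
```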
{"text": "GRUclass torch.nn.GRU(*args, **kwargs)\n Applies a multi-layer gated recurrent unit (GRU) RNN to an input\n sequence.\n For each element in the input sequence, each layer computes the\n following function:\n \\begin{array}{ll} r_t = \\sigma(W_{ir} x_t + b_{ir} + W_{hr}\n h_{(t-1)} + b_{hr}) \\\\ z_t = \\sigma(W_{iz} x_t + b_{iz} +\n W_{hz} h_{(t-1)} + b_{hz}) \\\\ n_t = \\tanh(W_{in} x_t +\n b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\\\ h_t = (1 -\n z_t) * n_t + z_t * h_{(t-1)} \\end{array}\n where h_t is the hidden state at time t, x_t is the input at time\n t, h_{(t-1)} is the hidden state of the layer at time t-1 or\n the initial hidden state at time 0, and r_t, z_t, n_t are the\n reset, update, and new gates, respectively. \\sigma is the sigmoid\n function, and * is the Hadamard product.\n In a multilayer GRU, the input x^{(l)}_t of the l-th layer (l >=\n 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"}
{"text": "by dropout \\delta^{(l-1)}_t where each \\delta^{(l-1)}_t is a\n Bernoulli random variable which is 0 with probability \"dropout\".\n Parameters:\n * input_size -- The number of expected features in the input\n x\n * hidden_size -- The number of features in the hidden state\n h\n * num_layers -- Number of recurrent layers. E.g., setting\n \"num_layers=2\" would mean stacking two GRUs together to form a\n stacked GRU, with the second GRU taking in outputs of the\n first GRU and computing the final results. Default: 1\n * bias -- If \"False\", then the layer does not use bias\n weights b_ih and b_hh. Default: \"True\"\n * batch_first -- If \"True\", then the input and output\n tensors are provided as (batch, seq, feature) instead of\n (seq, batch, feature). Note that this does not apply to\n hidden or cell states. See the Inputs/Outputs sections below\n for details. Default: \"False\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"}
{"text": "for details. Default: \"False\"\n * dropout -- If non-zero, introduces a Dropout layer on\n the outputs of each GRU layer except the last layer, with\n dropout probability equal to \"dropout\". Default: 0\n * bidirectional -- If \"True\", becomes a bidirectional GRU.\n Default: \"False\"\n Inputs: input, h_0\n * input: tensor of shape (L, H_{in}) for unbatched input,\n (L, N, H_{in}) when \"batch_first=False\" or (N, L, H_{in}) when\n \"batch_first=True\" containing the features of the input\n sequence. The input can also be a packed variable length\n sequence. See \"torch.nn.utils.rnn.pack_padded_sequence()\" or\n \"torch.nn.utils.rnn.pack_sequence()\" for details.\n * h_0: tensor of shape (D * \\text{num_layers}, H_{out}) or\n (D * \\text{num_layers}, N, H_{out}) containing the initial\n hidden state for the input sequence. Defaults to zeros if not\n provided.\n where:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"}
{"text": "provided.\n where:\n \\begin{aligned} N ={} & \\text{batch size} \\ L ={} &\n \\text{sequence length} \\ D ={} & 2 \\text{ if\n bidirectional=True otherwise } 1 \\ H_{in} ={} &\n \\text{input_size} \\ H_{out} ={} & \\text{hidden_size}\n \\end{aligned}\n Outputs: output, h_n\n * output: tensor of shape (L, D * H_{out}) for unbatched\n input, (L, N, D * H_{out}) when \"batch_first=False\" or (N, L,\n D * H_{out}) when \"batch_first=True\" containing the output\n features (h_t) from the last layer of the GRU, for each t.\n If a \"torch.nn.utils.rnn.PackedSequence\" has been given as the\n input, the output will also be a packed sequence.\n * h_n: tensor of shape (D * \\text{num_layers}, H_{out}) or\n (D * \\text{num_layers}, N, H_{out}) containing the final\n hidden state for the input sequence.\n Variables:\n * weight_ih_l[k] -- the learnable input-hidden weights of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"}
{"text": "the \\text{k}^{th} layer (W_ir|W_iz|W_in), of shape\n (3*hidden_size, input_size) for k = 0. Otherwise, the\n shape is (3*hidden_size, num_directions * hidden_size)\n * weight_hh_l[k] -- the learnable hidden-hidden weights of\n the \\text{k}^{th} layer (W_hr|W_hz|W_hn), of shape\n (3*hidden_size, hidden_size)\n * bias_ih_l[k] -- the learnable input-hidden bias of the\n \\text{k}^{th} layer (b_ir|b_iz|b_in), of shape\n (3*hidden_size)\n * bias_hh_l[k] -- the learnable hidden-hidden bias of the\n \\text{k}^{th} layer (b_hr|b_hz|b_hn), of shape\n (3*hidden_size)\n Note:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden_size}}\n Note:\n For bidirectional GRUs, forward and backward are directions 0 and\n 1 respectively. Example of splitting the output layers when\n \"batch_first=False\": \"output.view(seq_len, batch, num_directions,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"}
{"text": "hidden_size)\".\n Note:\n \"batch_first\" argument is ignored for unbatched inputs.\n Note:\n If the following conditions are satisfied: 1) cudnn is enabled,\n 2) input data is on the GPU 3) input data has dtype\n \"torch.float16\" 4) V100 GPU is used, 5) input data is not in\n \"PackedSequence\" format persistent algorithm can be selected to\n improve performance.\n Examples:\n >>> rnn = nn.GRU(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> output, hn = rnn(input, h0)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"}
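The Inputs/Outputs shape rules above can be exercised with a bidirectional, `batch_first` GRU; a minimal sketch:

```python
import torch
import torch.nn as nn

# Check the documented output shapes for a bidirectional, batch_first GRU.
L, N, H_in, H_out, layers = 5, 3, 10, 20, 2
rnn = nn.GRU(H_in, H_out, num_layers=layers, batch_first=True, bidirectional=True)
x = torch.randn(N, L, H_in)      # (N, L, H_in) because batch_first=True
output, h_n = rnn(x)             # h_0 defaults to zeros

D = 2                            # bidirectional=True
assert output.shape == (N, L, D * H_out)
assert h_n.shape == (D * layers, N, H_out)
```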
{"text": "SequentialLRclass torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=- 1, verbose=False)\n Receives a list of schedulers that are expected to be called\n sequentially during the optimization process, and a list of\n milestone points that gives the exact intervals during which each\n scheduler is supposed to be called at a given epoch.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * schedulers (list) -- List of chained schedulers.\n * milestones (list) -- List of integers that reflects\n milestone points.\n * last_epoch (int) -- The index of last epoch. Default:\n -1.\n * verbose (bool) -- Does nothing.\n -[ Example ]-\n >>> # Assuming optimizer uses lr = 1. for all groups\n >>> # lr = 0.1 if epoch == 0\n >>> # lr = 0.1 if epoch == 1\n >>> # lr = 0.9 if epoch == 2\n >>> # lr = 0.81 if epoch == 3\n >>> # lr = 0.729 if epoch == 4", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html", "category": "pytorch docs"}
{"text": ">>> scheduler1 = ConstantLR(self.opt, factor=0.1, total_iters=2)\n >>> scheduler2 = ExponentialLR(self.opt, gamma=0.9)\n >>> scheduler = SequentialLR(self.opt, schedulers=[scheduler1, scheduler2], milestones=[2])\n >>> for epoch in range(100):\n ...     train(...)\n ...     validate(...)\n ...     scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the schedulers state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer. The wrapped scheduler states will also be\n saved.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html", "category": "pytorch docs"}
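The docstring example above refers to an existing `self.opt`; a self-contained version with a real optimizer looks like this (the `Linear` model is just a stand-in to give the optimizer parameters):

```python
import torch
from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=1.0)

scheduler = SequentialLR(
    opt,
    schedulers=[ConstantLR(opt, factor=0.1, total_iters=2),
                ExponentialLR(opt, gamma=0.9)],
    milestones=[2],
)

lrs = []
for epoch in range(5):
    lrs.append(opt.param_groups[0]['lr'])
    opt.step()          # optimizer.step() before scheduler.step()
    scheduler.step()

# Epochs 0-1 use the constant factor 0.1; after the milestone,
# consecutive epochs decay by the ExponentialLR gamma of 0.9.
assert abs(lrs[0] - 0.1) < 1e-6 and abs(lrs[1] - 0.1) < 1e-6
assert abs(lrs[4] - 0.9 * lrs[3]) < 1e-6
```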
{"text": "torch.Tensor.argwhereTensor.argwhere() -> Tensor\n See \"torch.argwhere()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.argwhere.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addcdivTensor.addcdiv(tensor1, tensor2, *, value=1) -> Tensor\n See \"torch.addcdiv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addcdiv.html", "category": "pytorch docs"}
{"text": "torch.floor_dividetorch.floor_divide(input, other, *, out=None) -> Tensor\n Note:\n Before PyTorch 1.13 \"torch.floor_divide()\" incorrectly performed\n truncation division. To restore the previous behavior use\n \"torch.div()\" with \"rounding_mode='trunc'\".\n Computes \"input\" divided by \"other\", elementwise, and floors the\n result.\n \\text{{out}}_i = \\text{floor} \\left(\n \\frac{{\\text{{input}}_i}}{{\\text{{other}}_i}} \\right)\n Supports broadcasting to a common shape, type promotion, and\n integer and float inputs.\n Parameters:\n * input (Tensor or Number) -- the dividend\n * other (Tensor or Number) -- the divisor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([4.0, 3.0])\n >>> b = torch.tensor([2.0, 2.0])\n >>> torch.floor_divide(a, b)\n tensor([2.0, 1.0])\n >>> torch.floor_divide(a, 1.4)\n tensor([2.0, 2.0])", "source": "https://pytorch.org/docs/stable/generated/torch.floor_divide.html", "category": "pytorch docs"}
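The difference between flooring and the pre-1.13 truncation behavior only shows up for negative quotients; a minimal sketch:

```python
import torch

a = torch.tensor([-7.0, 7.0])
b = torch.tensor([2.0, 2.0])

floored = torch.floor_divide(a, b)                  # floor(-3.5) -> -4
truncated = torch.div(a, b, rounding_mode='trunc')  # trunc(-3.5) -> -3

assert torch.equal(floored, torch.tensor([-4.0, 3.0]))
assert torch.equal(truncated, torch.tensor([-3.0, 3.0]))
```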
{"text": "torch.get_float32_matmul_precisiontorch.get_float32_matmul_precision()\n Returns the current value of float32 matrix multiplication\n precision. Refer to \"torch.set_float32_matmul_precision()\"\n documentation for more details.\n Return type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.get_float32_matmul_precision.html", "category": "pytorch docs"}
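The getter pairs with the setter; a minimal sketch that reads, changes, and restores the global setting:

```python
import torch

previous = torch.get_float32_matmul_precision()
assert previous in ('highest', 'high', 'medium')

torch.set_float32_matmul_precision('high')
assert torch.get_float32_matmul_precision() == 'high'

torch.set_float32_matmul_precision(previous)  # restore the prior setting
```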
{"text": "prepare_qat_fxclass torch.quantization.quantize_fx.prepare_qat_fx(model, qconfig_mapping, example_inputs, prepare_custom_config=None, backend_config=None)\n Prepare a model for quantization aware training.\n Parameters:\n * model -- torch.nn.Module model\n * qconfig_mapping -- see \"prepare_fx()\"\n * example_inputs -- see \"prepare_fx()\"\n * prepare_custom_config -- see \"prepare_fx()\"\n * backend_config -- see \"prepare_fx()\"\n Returns:\n A GraphModule with fake quant modules (configured by\n qconfig_mapping and backend_config), ready for quantization\n aware training\n Return type:\n ObservedGraphModule\n Example:\n import torch\n from torch.ao.quantization import get_default_qat_qconfig_mapping\n from torch.ao.quantization.quantize_fx import prepare_qat_fx\n class Submodule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = torch.nn.Linear(5, 5)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"}
{"text": "self.linear = torch.nn.Linear(5, 5)\n def forward(self, x):\n x = self.linear(x)\n return x\n class M(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = torch.nn.Linear(5, 5)\n self.sub = Submodule()\n def forward(self, x):\n x = self.linear(x)\n x = self.sub(x) + x\n return x\n # initialize a floating point model\n float_model = M().train()\n # (optional, but preferred) load the weights from pretrained model\n # float_model.load_weights(...)\n # define the training loop for quantization aware training\n def train_loop(model, train_data):\n model.train()\n for image, target in train_data:\n ...\n # qconfig is the configuration for how we insert observers for a particular\n # operator\n # qconfig = get_default_qconfig(\"fbgemm\")\n # Example of customizing qconfig:", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"}
{"text": "Example of customizing qconfig:\n # qconfig = torch.ao.quantization.QConfig(\n # activation=FakeQuantize.with_args(observer=MinMaxObserver.with_args(dtype=torch.qint8)),\n # weight=FakeQuantize.with_args(observer=MinMaxObserver.with_args(dtype=torch.qint8)))\n # `activation` and `weight` are constructors of observer module\n # qconfig_mapping is a collection of quantization configurations, user can\n # set the qconfig for each operator (torch op calls, functional calls, module calls)\n # in the model through qconfig_mapping\n # the following call will get the qconfig_mapping that works best for models\n # that target \"fbgemm\" backend\n qconfig_mapping = get_default_qat_qconfig_mapping(\"fbgemm\")\n # We can customize qconfig_mapping in different ways, please take a look at\n # the docstring for :func:`~torch.ao.quantization.prepare_fx` for different ways\n # to configure this\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"}
{"text": "to configure this\n # example_inputs is a tuple of inputs, that is used to infer the type of the\n # outputs in the model\n # currently it's not used, but please make sure model(*example_inputs) runs\n example_inputs = (torch.randn(1, 3, 224, 224),)\n # TODO: add backend_config after we split the backend_config for fbgemm and qnnpack\n # e.g. backend_config = get_default_backend_config(\"fbgemm\")\n # `prepare_qat_fx` inserts observers in the model based on qconfig_mapping and\n # backend_config, if the configuration for an operator in qconfig_mapping\n # is supported in the backend_config (meaning it's supported by the target\n # hardware), we'll insert fake_quantize modules according to the qconfig_mapping\n # otherwise the configuration in qconfig_mapping will be ignored\n # see :func:`~torch.ao.quantization.prepare_fx` for a detailed explanation of\n # how qconfig_mapping interacts with backend_config\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"}
{"text": "prepared_model = prepare_qat_fx(float_model, qconfig_mapping, example_inputs)\n # Run training\n train_loop(prepared_model, train_data)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"}
{"text": "torch.Tensor.stdTensor.std(dim=None, *, correction=1, keepdim=False) -> Tensor\n See \"torch.std()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.std.html", "category": "pytorch docs"}
{"text": "BNReLU3dclass torch.ao.nn.intrinsic.quantized.BNReLU3d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)\n A BNReLU3d module is a fused module of BatchNorm3d and ReLU\n We adopt the same interface as \"torch.ao.nn.quantized.BatchNorm3d\".\n Variables:\n Same as \"torch.ao.nn.quantized.BatchNorm3d\"", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.BNReLU3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sign_Tensor.sign_() -> Tensor\n In-place version of \"sign()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sign_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.floorTensor.floor() -> Tensor\n See \"torch.floor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.floor.html", "category": "pytorch docs"}
{"text": "torch.normaltorch.normal(mean, std, *, generator=None, out=None) -> Tensor\n Returns a tensor of random numbers drawn from separate normal\n distributions whose mean and standard deviation are given.\n The \"mean\" is a tensor with the mean of each output element's\n normal distribution\n The \"std\" is a tensor with the standard deviation of each output\n element's normal distribution\n The shapes of \"mean\" and \"std\" don't need to match, but the total\n number of elements in each tensor need to be the same.\n Note:\n When the shapes do not match, the shape of \"mean\" is used as the\n shape for the returned output tensor\n Note:\n When \"std\" is a CUDA tensor, this function synchronizes its\n device with the CPU.\n Parameters:\n * mean (Tensor) -- the tensor of per-element means\n * std (Tensor) -- the tensor of per-element standard\n deviations\n Keyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom", "source": "https://pytorch.org/docs/stable/generated/torch.normal.html", "category": "pytorch docs"}
{"text": "number generator for sampling\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1))\n tensor([ 1.0425, 3.5672, 2.7969, 4.2925, 4.7229, 6.2134,\n 8.0505, 8.1408, 9.0563, 10.0566])\n torch.normal(mean=0.0, std, *, out=None) -> Tensor\n Similar to the function above, but the means are shared among all\n drawn elements.\n Parameters:\n * mean (float, optional) -- the mean for all\n distributions\n * std (Tensor) -- the tensor of per-element standard\n deviations\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.normal(mean=0.5, std=torch.arange(1., 6.))\n tensor([-1.2793, -1.0732, -2.0687, 5.1177, -1.2303])\n torch.normal(mean, std=1.0, *, out=None) -> Tensor\n Similar to the function above, but the standard deviations are\n shared among all drawn elements.", "source": "https://pytorch.org/docs/stable/generated/torch.normal.html", "category": "pytorch docs"}
{"text": "shared among all drawn elements.\n Parameters:\n * mean (Tensor) -- the tensor of per-element means\n * std (float, optional) -- the standard deviation for\n all distributions\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor\n Example:\n >>> torch.normal(mean=torch.arange(1., 6.))\n tensor([ 1.1552, 2.6148, 2.6535, 5.8318, 4.2361])\n torch.normal(mean, std, size, *, out=None) -> Tensor\n Similar to the function above, but the means and standard\n deviations are shared among all drawn elements. The resulting\n tensor has size given by \"size\".\n Parameters:\n * mean (float) -- the mean for all distributions\n * std (float) -- the standard deviation for all\n distributions\n * size (int...) -- a sequence of integers defining the\n shape of the output tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.normal.html", "category": "pytorch docs"}
{"text": "Example:\n >>> torch.normal(2, 3, size=(1, 4))\n tensor([[-1.3987, -1.9544, 3.6048, 0.7909]])", "source": "https://pytorch.org/docs/stable/generated/torch.normal.html", "category": "pytorch docs"}
{"text": "RNNCellclass torch.ao.nn.quantized.dynamic.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', dtype=torch.qint8)\n An Elman RNN cell with tanh or ReLU non-linearity. A dynamic\n quantized RNNCell module with floating point tensor as inputs and\n outputs. Weights are quantized to 8 bits. We adopt the same\n interface as torch.nn.RNNCell, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.RNNCell for\n documentation.\n Examples:\n >>> rnn = nn.RNNCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx = rnn(input[i], hx)\n ... output.append(hx)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.RNNCell.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cpuTensor.cpu(memory_format=torch.preserve_format) -> Tensor\n Returns a copy of this object in CPU memory.\n If this object is already in CPU memory and on the correct device,\n then no copy is performed and the original object is returned.\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cpu.html", "category": "pytorch docs"}
{"text": "torch.selecttorch.select(input, dim, index) -> Tensor\n Slices the \"input\" tensor along the selected dimension at the given\n index. This function returns a view of the original tensor with the\n given dimension removed.\n Note:\n If \"input\" is a sparse tensor and returning a view of the tensor\n is not possible, a RuntimeError exception is raised. If this is\n the case, consider using the \"torch.select_copy()\" function.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to slice\n * index (int) -- the index to select with\n Note:\n \"select()\" is equivalent to slicing. For example,\n \"tensor.select(0, index)\" is equivalent to \"tensor[index]\" and\n \"tensor.select(2, index)\" is equivalent to \"tensor[:,:,index]\".", "source": "https://pytorch.org/docs/stable/generated/torch.select.html", "category": "pytorch docs"}
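The slicing equivalences and the view semantics can be demonstrated in a few lines:

```python
import torch

x = torch.arange(24).reshape(2, 3, 4)

# Selecting along dim 0 removes that dimension, like x[1].
assert torch.equal(torch.select(x, 0, 1), x[1])
assert torch.select(x, 0, 1).shape == (3, 4)

# Selecting along dim 2 is equivalent to x[:, :, index].
assert torch.equal(torch.select(x, 2, 3), x[:, :, 3])

# The result is a view: writing through it modifies the original tensor.
torch.select(x, 0, 0)[0, 0] = -1
assert x[0, 0, 0] == -1
```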
{"text": "torch.cuda.current_blas_handletorch.cuda.current_blas_handle()\n Returns cublasHandle_t pointer to current cuBLAS handle", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.current_blas_handle.html", "category": "pytorch docs"}
{"text": "torch.Tensor.intTensor.int(memory_format=torch.preserve_format) -> Tensor\n \"self.int()\" is equivalent to \"self.to(torch.int32)\". See \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.int.html", "category": "pytorch docs"}
{"text": "torch.Tensor.erfcTensor.erfc() -> Tensor\n See \"torch.erfc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erfc.html", "category": "pytorch docs"}
{"text": "torch.Tensor.absTensor.abs() -> Tensor\n See \"torch.abs()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.abs.html", "category": "pytorch docs"}
{"text": "torch.Tensor.scatterTensor.scatter(dim, index, src) -> Tensor\n Out-of-place version of \"torch.Tensor.scatter_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.soft_margin_losstorch.nn.functional.soft_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor\n See \"SoftMarginLoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.soft_margin_loss.html", "category": "pytorch docs"}
{"text": "torch.from_dlpacktorch.from_dlpack(ext_tensor) -> Tensor\n Converts a tensor from an external library into a \"torch.Tensor\".\n The returned PyTorch tensor will share the memory with the input\n tensor (which may have come from another library). Note that in-\n place operations will therefore also affect the data of the input\n tensor. This may lead to unexpected issues (e.g., other libraries\n may have read-only flags or immutable data structures), so the user\n should only do this if they know for sure that this is fine.\n Parameters:\n ext_tensor (object with \"__dlpack__\" attribute, or a DLPack\n capsule) --\n The tensor or DLPack capsule to convert.\n If \"ext_tensor\" is a tensor (or ndarray) object, it must support\n the \"__dlpack__\" protocol (i.e., have an \"ext_tensor.__dlpack__\"\n method). Otherwise \"ext_tensor\" may be a DLPack capsule, which\n is an opaque \"PyCapsule\" instance, typically produced by a\n \"to_dlpack\" function or method.", "source": "https://pytorch.org/docs/stable/generated/torch.from_dlpack.html", "category": "pytorch docs"}
{"text": "\"to_dlpack\" function or method.\n Return type:\n Tensor\n Examples:\n >>> import torch.utils.dlpack\n >>> t = torch.arange(4)\n # Convert a tensor directly (supported in PyTorch >= 1.10)\n >>> t2 = torch.from_dlpack(t)\n >>> t2[:2] = -1 # show that memory is shared\n >>> t2\n tensor([-1, -1, 2, 3])\n >>> t\n tensor([-1, -1, 2, 3])\n # The old-style DLPack usage, with an intermediate capsule object\n >>> capsule = torch.utils.dlpack.to_dlpack(t)\n >>> capsule\n \n >>> t3 = torch.from_dlpack(capsule)\n >>> t3\n tensor([-1, -1, 2, 3])\n >>> t3[0] = -9 # now we're sharing memory between 3 tensors\n >>> t3\n tensor([-9, -1, 2, 3])\n >>> t2\n tensor([-9, -1, 2, 3])\n >>> t\n tensor([-9, -1, 2, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.from_dlpack.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_set_toTensor.is_set_to(tensor) -> bool\n Returns True if both tensors are pointing to the exact same memory\n (same storage, offset, size and stride).", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_set_to.html", "category": "pytorch docs"}
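A minimal sketch: `Tensor.set_(source)` makes one tensor adopt another's storage, size, and stride, which is exactly the condition `is_set_to` checks:

```python
import torch

a = torch.randn(3, 4)
b = torch.empty(0)
b.set_(a)   # make b use a's storage, size, and stride

assert b.is_set_to(a)
# A tensor with the same shape but its own storage does not qualify.
assert not torch.randn(3, 4).is_set_to(a)
```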
{"text": "DTypeWithConstraintsclass torch.ao.quantization.backend_config.DTypeWithConstraints(dtype=None, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None)\n Config for specifying additional constraints for a given dtype,\n such as quantization value ranges, scale value ranges, and fixed\n quantization params, to be used in \"DTypeConfig\".\n The constraints currently supported are:\n * quant_min_lower_bound and quant_max_upper_bound: Lower and\n upper bounds for the minimum and maximum quantized values\n respectively. If the QConfig's quant_min and quant_max fall\n outside this range, then the QConfig will be ignored.\n * scale_min_lower_bound and scale_max_upper_bound: Lower and\n upper bounds for the minimum and maximum scale values\n respectively. If the QConfig's minimum scale value (currently", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeWithConstraints.html", "category": "pytorch docs"}
{"text": "exposed as eps) falls below the lower bound, then the QConfig\n will be ignored. Note that the upper bound is currently not\n enforced.\n * scale_exact_match and zero_point_exact_match: Exact match\n requirements for scale and zero point, to be used for operators\n with fixed quantization parameters such as sigmoid and tanh. If\n the observer specified in the QConfig is neither\n FixedQParamsObserver nor FixedQParamsFakeQuantize, or if the\n quantization parameters don't match, then the QConfig will be\n ignored.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeWithConstraints.html", "category": "pytorch docs"}
{"text": "Hardtanhclass torch.nn.Hardtanh(min_val=- 1.0, max_val=1.0, inplace=False, min_value=None, max_value=None)\n Applies the HardTanh function element-wise.\n HardTanh is defined as:\n \\text{HardTanh}(x) = \\begin{cases} \\text{max_val} & \\text{\n if } x > \\text{ max_val } \\\\ \\text{min_val} & \\text{ if }\n x < \\text{ min_val } \\\\ x & \\text{ otherwise }\n \\end{cases}\n Parameters:\n * min_val (float) -- minimum value of the linear region\n range. Default: -1\n * max_val (float) -- maximum value of the linear region\n range. Default: 1\n * inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Keyword arguments \"min_value\" and \"max_value\" have been deprecated\n in favor of \"min_val\" and \"max_val\".\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.Hardtanh(-2, 2)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardtanh.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> m = nn.Hardtanh(-2, 2)\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardtanh.html", "category": "pytorch docs"}
{"text": "ConvBn2dclass torch.ao.nn.intrinsic.qat.ConvBn2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\n A ConvBn2d module is a module fused from Conv2d and BatchNorm2d,\n attached with FakeQuantize modules for weight, used in quantization\n aware training.\n We combined the interface of \"torch.nn.Conv2d\" and\n \"torch.nn.BatchNorm2d\".\n Similar to \"torch.nn.Conv2d\", with FakeQuantize modules initialized\n to default.\n Variables:\n * freeze_bn --\n * weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBn2d.html", "category": "pytorch docs"}
{"text": "torch.wheretorch.where(condition, input, other, *, out=None) -> Tensor\n Return a tensor of elements selected from either \"input\" or\n \"other\", depending on \"condition\".\n The operation is defined as:\n \text{out}_i = \begin{cases} \text{input}_i & \text{if } \text{condition}_i \\ \text{other}_i & \text{otherwise} \end{cases}\n Note:\n The tensors \"condition\", \"input\", \"other\" must be broadcastable.\n Parameters:\n * condition (BoolTensor) -- When True (nonzero), yield\n input, otherwise yield other\n * input (Tensor or Scalar) -- value (if \"input\" is a\n scalar) or values selected at indices where \"condition\" is\n \"True\"\n * other (Tensor or Scalar) -- value (if \"other\" is a\n scalar) or values selected at indices where \"condition\" is\n \"False\"\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.where.html", "category": "pytorch docs"}
{"text": "Returns:\n A tensor of shape equal to the broadcasted shape of \"condition\",\n \"input\", \"other\"\n Return type:\n Tensor\n Example:\n >>> x = torch.randn(3, 2)\n >>> y = torch.ones(3, 2)\n >>> x\n tensor([[-0.4620, 0.3139],\n [ 0.3898, -0.7197],\n [ 0.0478, -0.1657]])\n >>> torch.where(x > 0, x, y)\n tensor([[ 1.0000, 0.3139],\n [ 0.3898, 1.0000],\n [ 0.0478, 1.0000]])\n >>> x = torch.randn(2, 2, dtype=torch.double)\n >>> x\n tensor([[ 1.0779, 0.0383],\n [-0.8785, -1.1089]], dtype=torch.float64)\n >>> torch.where(x > 0, x, 0.)\n tensor([[1.0779, 0.0383],\n [0.0000, 0.0000]], dtype=torch.float64)\n torch.where(condition) -> tuple of LongTensor\n \"torch.where(condition)\" is identical to \"torch.nonzero(condition,\n as_tuple=True)\".\n Note:\n See also \"torch.nonzero()\".", "source": "https://pytorch.org/docs/stable/generated/torch.where.html", "category": "pytorch docs"}
{"text": "torch.Tensor.clamp_Tensor.clamp_(min=None, max=None) -> Tensor\n In-place version of \"clamp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clamp_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.leTensor.le(other) -> Tensor\n See \"torch.le()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.le.html", "category": "pytorch docs"}
{"text": "GRUCellclass torch.ao.nn.quantized.dynamic.GRUCell(input_size, hidden_size, bias=True, dtype=torch.qint8)\n A gated recurrent unit (GRU) cell\n A dynamic quantized GRUCell module with floating point tensor as\n inputs and outputs. Weights are quantized to 8 bits. We adopt the\n same interface as torch.nn.GRUCell, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.GRUCell for\n documentation.\n Examples:\n >>> rnn = nn.GRUCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx = rnn(input[i], hx)\n ... output.append(hx)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRUCell.html", "category": "pytorch docs"}
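The record above shows the float `nn.GRUCell` interface; a quantized cell of this class is normally produced by dynamic quantization rather than constructed directly. A minimal sketch (the `Tagger` wrapper module is a hypothetical name for illustration; assumes a CPU quantized engine such as fbgemm or qnnpack is available):

```python
import torch
import torch.nn as nn

class Tagger(nn.Module):
    """Tiny wrapper so quantize_dynamic can swap the child cell."""
    def __init__(self):
        super().__init__()
        self.cell = nn.GRUCell(10, 20)

    def forward(self, x, hx):
        return self.cell(x, hx)

# Dynamic quantization replaces nn.GRUCell with the dynamic qint8 variant:
# weights are stored as qint8, inputs and outputs stay float32.
model = torch.ao.quantization.quantize_dynamic(
    Tagger(), {nn.GRUCell}, dtype=torch.qint8
)

hx = torch.zeros(3, 20)
for step in torch.randn(6, 3, 10):
    hx = model(step, hx)  # same call signature as the float cell
```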
{"text": "torch.Tensor.narrowTensor.narrow(dimension, start, length) -> Tensor\n See \"torch.narrow()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.narrow.html", "category": "pytorch docs"}
{"text": "PerChannelMinMaxObserverclass torch.quantization.observer.PerChannelMinMaxObserver(ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07)\n Observer module for computing the quantization parameters based on\n the running per channel min and max values.\n This observer uses the tensor min/max statistics to compute the per\n channel quantization parameters. The module records the running\n minimum and maximum of incoming tensors, and uses this statistic to\n compute the quantization parameters.\n Parameters:\n * ch_axis -- Channel axis\n * dtype -- dtype argument to the quantize node needed to\n implement the reference model spec.\n * qscheme -- Quantization scheme to be used\n * reduce_range -- Reduces the range of the quantized data\n type by 1 bit\n * quant_min -- Minimum quantization value. If unspecified,", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.PerChannelMinMaxObserver.html", "category": "pytorch docs"}
{"text": "it will follow the 8-bit setup.\n * quant_max -- Maximum quantization value. If unspecified,\n it will follow the 8-bit setup.\n * eps (Tensor) -- Epsilon value for float32. Defaults to\n torch.finfo(torch.float32).eps.\n The quantization parameters are computed the same way as in\n \"MinMaxObserver\", with the difference that the running min/max\n values are stored per channel. Scales and zero points are thus\n computed per channel as well.\n Note:\n If the running minimum equals the running maximum, the scales\n and zero_points are set to 1.0 and 0.\n reset_min_max_vals()\n Resets the min/max values.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.PerChannelMinMaxObserver.html", "category": "pytorch docs"}
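A minimal sketch of driving the observer by hand, using only the constructor arguments documented above (the example weight tensor is made up for illustration):

```python
import torch
from torch.ao.quantization.observer import PerChannelMinMaxObserver

# Track per-channel min/max along dim 0, as one would for a weight tensor.
obs = PerChannelMinMaxObserver(ch_axis=0, dtype=torch.qint8,
                               qscheme=torch.per_channel_symmetric)

w = torch.tensor([[-1.0, 0.5],
                  [ 0.25, 2.0],
                  [ 0.5, 0.1]])
obs(w)  # a forward pass records the running min/max statistics

scale, zero_point = obs.calculate_qparams()
# One (scale, zero_point) pair per channel: three rows -> three pairs.
```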
{"text": "torch.blackman_windowtorch.blackman_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Blackman window function.\n w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) +\n 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right)\n where N is the full window size.\n The input \"window_length\" is a positive integer controlling the\n returned window size. The \"periodic\" flag determines whether the\n returned window trims off the last duplicate value from the\n symmetric window and is ready to be used as a periodic window with\n functions like \"torch.stft()\". Therefore, if \"periodic\" is true,\n the N in the above formula is in fact \text{window_length} + 1. Also,\n we always have \"torch.blackman_window(L, periodic=True)\" equal to\n \"torch.blackman_window(L + 1, periodic=False)[:-1]\".\n Note:\n If \"window_length\" = 1, the returned window contains a single\n value 1.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.blackman_window.html", "category": "pytorch docs"}
{"text": "value 1.\n Parameters:\n * window_length (int) -- the size of returned window\n * periodic (bool, optional) -- If True, returns a\n window to be used as periodic function. If False, return a\n symmetric window.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). Only floating point\n types are supported.\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.", "source": "https://pytorch.org/docs/stable/generated/torch.blackman_window.html", "category": "pytorch docs"}
{"text": "tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Returns:\n A 1-D tensor of size (\\text{window_length},) containing the\n window\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.blackman_window.html", "category": "pytorch docs"}
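The periodic/symmetric relationship stated in the entry above can be checked directly; a minimal sketch:

```python
import torch

# A periodic window of length L equals the symmetric window of
# length L + 1 with its duplicated last sample dropped.
L = 8
periodic = torch.blackman_window(L, periodic=True)
symmetric = torch.blackman_window(L + 1, periodic=False)

match = torch.allclose(periodic, symmetric[:-1])
```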
{"text": "torch.Tensor.svdTensor.svd(some=True, compute_uv=True)\n See \"torch.svd()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.svd.html", "category": "pytorch docs"}
{"text": "torch.cuda.streamtorch.cuda.stream(stream)\n Wrapper around the Context-manager StreamContext that selects a\n given stream.\n Parameters:\n stream (Stream) -- selected stream. This manager is a no-\n op if it's \"None\".\n Return type:\n StreamContext\n Note: In eager mode stream is of type Stream class while in JIT it\n is an object of the custom class \"torch.classes.cuda.Stream\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.stream.html", "category": "pytorch docs"}
{"text": "torch.Tensor.log_Tensor.log_() -> Tensor\n In-place version of \"log()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log_.html", "category": "pytorch docs"}
{"text": "device_ofclass torch.cuda.device_of(obj)\n Context-manager that changes the current device to that of given\n object.\n You can use both tensors and storages as arguments. If a given\n object is not allocated on a GPU, this is a no-op.\n Parameters:\n obj (Tensor or Storage) -- object allocated on the\n selected device.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.device_of.html", "category": "pytorch docs"}
{"text": "torch.histogramtorch.histogram(input, bins, *, range=None, weight=None, density=False, out=None)\n Computes a histogram of the values in a tensor.\n \"bins\" can be an integer or a 1D tensor.\n If \"bins\" is an int, it specifies the number of equal-width bins.\n By default, the lower and upper range of the bins is determined by\n the minimum and maximum elements of the input tensor. The \"range\"\n argument can be provided to specify a range for the bins.\n If \"bins\" is a 1D tensor, it specifies the sequence of bin edges\n including the rightmost edge. It should contain at least 2 elements\n and its elements should be increasing.\n Parameters:\n * input (Tensor) -- the input tensor.\n * bins -- int or 1D Tensor. If int, defines the number of\n equal-width bins. If tensor, defines the sequence of bin edges\n including the rightmost edge.\n Keyword Arguments:\n * range (tuple of float, optional) -- Defines the range of\n the bins.", "source": "https://pytorch.org/docs/stable/generated/torch.histogram.html", "category": "pytorch docs"}
{"text": "the bins.\n * weight (Tensor) -- If provided, weight should have the\n same shape as input. Each value in input contributes its\n associated weight towards its bin's result.\n * density (bool) -- If False, the result will contain the\n count (or total weight) in each bin. If True, the result is\n the value of the probability density function over the bins,\n normalized such that the integral over the range of the bins\n is 1.\n * out (tuple, optional) -- The result tuple of two output\n tensors (hist, bin_edges).\n Returns:\n hist (Tensor): 1D Tensor containing the values of the\n histogram.\n bin_edges (Tensor): 1D Tensor containing the edges of the\n histogram bins.\n Return type:\n (Tensor, Tensor)\n Example:\n >>> torch.histogram(torch.tensor([1., 2, 1]), bins=4, range=(0., 3.), weight=torch.tensor([1., 2., 4.]))", "source": "https://pytorch.org/docs/stable/generated/torch.histogram.html", "category": "pytorch docs"}
{"text": "(tensor([ 0., 5., 2., 0.]), tensor([0., 0.75, 1.5, 2.25, 3.]))\n >>> torch.histogram(torch.tensor([1., 2, 1]), bins=4, range=(0., 3.), weight=torch.tensor([1., 2., 4.]), density=True)\n (tensor([ 0., 0.9524, 0.3810, 0.]), tensor([0., 0.75, 1.5, 2.25, 3.]))", "source": "https://pytorch.org/docs/stable/generated/torch.histogram.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arctanTensor.arctan() -> Tensor\n See \"torch.arctan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctan.html", "category": "pytorch docs"}
{"text": "torch.polygammatorch.polygamma(n, input, *, out=None) -> Tensor\n Alias for \"torch.special.polygamma()\".", "source": "https://pytorch.org/docs/stable/generated/torch.polygamma.html", "category": "pytorch docs"}
{"text": "torch.cuda.comm.broadcast_coalescedtorch.cuda.comm.broadcast_coalesced(tensors, devices, buffer_size=10485760)\n Broadcasts a sequence of tensors to the specified GPUs. Small\n tensors are first coalesced into a buffer to reduce the number of\n synchronizations.\n Parameters:\n * tensors (sequence) -- tensors to broadcast. Must be on\n the same device, either CPU or GPU.\n * devices (Iterable[torch.device, str or\n int]) -- an iterable of GPU devices, among which to\n broadcast.\n * buffer_size (int) -- maximum size of the buffer used for\n coalescing\n Returns:\n A tuple containing copies of \"tensors\", placed on \"devices\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.broadcast_coalesced.html", "category": "pytorch docs"}
{"text": "torch._foreach_abstorch._foreach_abs(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.abs()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_abs.html", "category": "pytorch docs"}
{"text": "torch.negtorch.neg(input, *, out=None) -> Tensor\n Returns a new tensor with the negative of the elements of \"input\".\n \text{out} = -1 \times \text{input}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(5)\n >>> a\n tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])\n >>> torch.neg(a)\n tensor([-0.0090, 0.2262, 0.0682, 0.2866, -0.3940])", "source": "https://pytorch.org/docs/stable/generated/torch.neg.html", "category": "pytorch docs"}
{"text": "torch.Tensor.floor_Tensor.floor_() -> Tensor\n In-place version of \"floor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.floor_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.heavisideTensor.heaviside(values) -> Tensor\n See \"torch.heaviside()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.heaviside.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.max_unpool2dtorch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)\n Computes a partial inverse of \"MaxPool2d\".\n See \"MaxUnpool2d\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool2d.html", "category": "pytorch docs"}
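The `indices` argument above comes from a pooling call made with `return_indices=True`; a minimal sketch of the round trip:

```python
import torch
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)

# Keep the argmax indices so the pooling can be partially undone.
pooled, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
restored = F.max_unpool2d(pooled, indices, kernel_size=2)

# restored has the input's shape: maxima back in place, zeros elsewhere.
```

The inverse is only partial because the non-maximal values are unrecoverable; they come back as zeros.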
{"text": "torch.Tensor.scatter_add_Tensor.scatter_add_(dim, index, src) -> Tensor\n Adds all values from the tensor \"src\" into \"self\" at the indices\n specified in the \"index\" tensor in a similar fashion as\n \"scatter_()\". For each value in \"src\", it is added to an index in\n \"self\" which is specified by its index in \"src\" for \"dimension !=\n dim\" and by the corresponding value in \"index\" for \"dimension =\n dim\".\n For a 3-D tensor, \"self\" is updated as:\n self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0\n self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1\n self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2\n \"self\", \"index\" and \"src\" should have same number of dimensions. It\n is also required that \"index.size(d) <= src.size(d)\" for all\n dimensions \"d\", and that \"index.size(d) <= self.size(d)\" for all\n dimensions \"d != dim\". Note that \"index\" and \"src\" do not\n broadcast.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html", "category": "pytorch docs"}
{"text": "broadcast.\n Note:\n This operation may behave nondeterministically when given tensors\n on a CUDA device. See Reproducibility for more information.\n Note:\n The backward pass is implemented only for \"src.shape ==\n index.shape\".\n Parameters:\n * dim (int) -- the axis along which to index\n * index (LongTensor) -- the indices of elements to scatter\n and add, can be either empty or of the same dimensionality as\n \"src\". When empty, the operation returns \"self\" unchanged.\n * src (Tensor) -- the source elements to scatter and add\n Example:\n >>> src = torch.ones((2, 5))\n >>> index = torch.tensor([[0, 1, 2, 0, 0]])\n >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)\n tensor([[1., 0., 0., 1., 1.],\n [0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 0.]])\n >>> index = torch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]])\n >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html", "category": "pytorch docs"}
{"text": "tensor([[2., 0., 0., 1., 1.],\n [0., 2., 0., 0., 0.],\n [0., 0., 2., 1., 1.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html", "category": "pytorch docs"}
{"text": "torch.jit.optimize_for_inferencetorch.jit.optimize_for_inference(mod, other_methods=None)\n Performs a set of optimization passes to optimize a model for the\n purposes of inference. If the model is not already frozen,\n optimize_for_inference will invoke torch.jit.freeze\n automatically.\n In addition to generic optimizations that should speed up your\n model regardless of environment, prepare for inference will also\n bake in build specific settings such as the presence of CUDNN or\n MKLDNN, and may in the future make transformations which speed\n things up on one machine but slow things down on another.\n Accordingly, serialization after invoking optimize_for_inference is\n not implemented and not guaranteed.\n This is still in prototype and may slow down your model. Primary\n use cases targeted so far have been vision models on CPU and, to a\n lesser extent, GPU.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html", "category": "pytorch docs"}
{"text": "Example (optimizing a module with Conv->Batchnorm):\n import torch\n in_channels, out_channels = 3, 32\n conv = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, bias=True)\n bn = torch.nn.BatchNorm2d(out_channels, eps=.001)\n mod = torch.nn.Sequential(conv, bn)\n frozen_mod = torch.jit.optimize_for_inference(torch.jit.script(mod.eval()))\n assert \"batch_norm\" not in str(frozen_mod.graph)\n # if built with MKLDNN, convolution will be run with MKLDNN weights\n assert \"MKLDNN\" in str(frozen_mod.graph)\n Return type:\n ScriptModule", "source": "https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addcmulTensor.addcmul(tensor1, tensor2, *, value=1) -> Tensor\n See \"torch.addcmul()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addcmul.html", "category": "pytorch docs"}
{"text": "torch.cuda.is_current_stream_capturingtorch.cuda.is_current_stream_capturing()\n Returns True if CUDA graph capture is underway on the current CUDA\n stream, False otherwise.\n If a CUDA context does not exist on the current device, returns\n False without initializing the context.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.is_current_stream_capturing.html", "category": "pytorch docs"}
{"text": "torch.Tensor.aminTensor.amin(dim=None, keepdim=False) -> Tensor\n See \"torch.amin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.amin.html", "category": "pytorch docs"}
{"text": "torch.is_warn_always_enabledtorch.is_warn_always_enabled()\n Returns True if the global warn_always flag is turned on. Refer to\n \"torch.set_warn_always()\" documentation for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.is_warn_always_enabled.html", "category": "pytorch docs"}
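A minimal sketch of checking and toggling the global flag with its documented companion `torch.set_warn_always()`:

```python
import torch

# The flag is process-global; flip it on, read it back, then restore.
prev = torch.is_warn_always_enabled()
torch.set_warn_always(True)
enabled = torch.is_warn_always_enabled()  # True while the flag is set
torch.set_warn_always(prev)               # restore the previous setting
```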
{"text": "torch.Tensor.repeat_interleaveTensor.repeat_interleave(repeats, dim=None, *, output_size=None) -> Tensor\n See \"torch.repeat_interleave()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.repeat_interleave.html", "category": "pytorch docs"}
{"text": "upsampleclass torch.ao.nn.quantized.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)\n Upsamples the input to either the given \"size\" or the given\n \"scale_factor\".\n Warning:\n This function is deprecated in favor of\n \"torch.nn.quantized.functional.interpolate()\". It is equivalent\n to \"nn.quantized.functional.interpolate(...)\".\n See \"torch.nn.functional.interpolate()\" for implementation details.\n The input dimensions are interpreted in the form: mini-batch x\n channels x [optional depth] x [optional height] x width.\n Note:\n The input quantization parameters propagate to the output.\n Note:\n Only 2D input is supported for quantized inputs.\n Note:\n Only the following modes are supported for the quantized inputs:\n * bilinear\n * nearest\n Parameters:\n * input (Tensor) -- quantized input tensor\n * size (int or Tuple[int] or Tuple[int,", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html", "category": "pytorch docs"}
{"text": "int] or Tuple[int, int, int]) -- output\n spatial size.\n * scale_factor (float or Tuple[float]) --\n multiplier for spatial size. Has to be an integer.\n * mode (str) -- algorithm used for upsampling: \"'nearest'\"\n | \"'bilinear'\"\n * align_corners (bool, optional) -- Geometrically, we\n consider the pixels of the input and output as squares rather\n than points. If set to \"True\", the input and output tensors\n are aligned by the center points of their corner pixels,\n preserving the values at the corner pixels. If set to \"False\",\n the input and output tensors are aligned by the corner points\n of their corner pixels, and the interpolation uses edge value\n padding for out-of-boundary values, making this operation\n independent of input size when \"scale_factor\" is kept the\n same. This only has an effect when \"mode\" is \"'bilinear'\".\n Default: \"False\"", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html", "category": "pytorch docs"}
{"text": "Default: \"False\"\n Warning:\n With \"align_corners = True\", the linearly interpolating modes\n (bilinear) don't proportionally align the output and input\n pixels, and thus the output values can depend on the input size.\n This was the default behavior for these modes up to version\n 0.3.1. Since then, the default behavior is \"align_corners =\n False\". See \"Upsample\" for concrete examples on how this affects\n the outputs.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html", "category": "pytorch docs"}
{"text": "torch.Tensor.nansumTensor.nansum(dim=None, keepdim=False, dtype=None) -> Tensor\n See \"torch.nansum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nansum.html", "category": "pytorch docs"}
{"text": "torch.Tensor.unbindTensor.unbind(dim=0) -> seq\n See \"torch.unbind()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unbind.html", "category": "pytorch docs"}
{"text": "torch.isclosetorch.isclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor\n Returns a new tensor with boolean elements representing if each\n element of \"input\" is \"close\" to the corresponding element of\n \"other\". Closeness is defined as:\n \\lvert \\text{input} - \\text{other} \\rvert \\leq \\texttt{atol} +\n \\texttt{rtol} \\times \\lvert \\text{other} \\rvert\n where \"input\" and \"other\" are finite. Where \"input\" and/or \"other\"\n are nonfinite they are close if and only if they are equal, with\n NaNs being considered equal to each other when \"equal_nan\" is True.\n Parameters:\n * input (Tensor) -- first tensor to compare\n * other (Tensor) -- second tensor to compare\n * atol (float, optional) -- absolute tolerance.\n Default: 1e-08\n * rtol (float, optional) -- relative tolerance.\n Default: 1e-05\n * equal_nan (bool, optional) -- if \"True\", then two", "source": "https://pytorch.org/docs/stable/generated/torch.isclose.html", "category": "pytorch docs"}
{"text": "\"NaN\" s will be considered equal. Default: \"False\"\n Examples:\n >>> torch.isclose(torch.tensor((1., 2, 3)), torch.tensor((1 + 1e-10, 3, 4)))\n tensor([ True, False, False])\n >>> torch.isclose(torch.tensor((float('inf'), 4)), torch.tensor((float('inf'), 6)), rtol=.5)\n tensor([True, True])", "source": "https://pytorch.org/docs/stable/generated/torch.isclose.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.kl_divtorch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False)\n The Kullback-Leibler divergence loss.\n See \"KLDivLoss\" for details.\n Parameters:\n * input (Tensor) -- Tensor of arbitrary shape in log-\n probabilities.\n * target (Tensor) -- Tensor of the same shape as input.\n See \"log_target\" for the target's interpretation.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html", "category": "pytorch docs"}
{"text": "over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'batchmean'\" | \"'sum'\" |\n \"'mean'\". \"'none'\": no reduction will be applied\n \"'batchmean'\": the sum of the output will be divided by the\n batchsize \"'sum'\": the output will be summed \"'mean'\": the\n output will be divided by the number of elements in the output\n Default: \"'mean'\"\n * log_target (bool) -- A flag indicating whether \"target\"\n is passed in the log space. It is recommended to pass certain\n distributions (like \"softmax\") in the log space to avoid\n numerical issues caused by explicit \"log\". Default: \"False\"\n Return type:\n Tensor\n Note:\n \"size_average\" and \"reduce\" are in the process of being", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html", "category": "pytorch docs"}
{"text": "deprecated, and in the meantime, specifying either of those two\n args will override \"reduction\".\n Note:\n \"reduction\" = \"'mean'\" doesn't return the true KL divergence\n value; please use \"reduction\" = \"'batchmean'\", which aligns with\n the mathematical definition of KL divergence. In the next major\n release, \"'mean'\" will be changed to behave the same as\n \"'batchmean'\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html", "category": "pytorch docs"}
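A minimal sketch of the recommended usage, with `input` as log-probabilities and `reduction='batchmean'` as the note above advises (the random tensors are made up for illustration):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
# input must hold log-probabilities; target holds plain probabilities
# (unless log_target=True).
log_probs = F.log_softmax(torch.randn(3, 5), dim=1)
target = F.softmax(torch.randn(3, 5), dim=1)

# 'batchmean' (sum / batch size) matches the mathematical KL definition;
# 'mean' would divide by the number of elements instead.
loss = F.kl_div(log_probs, target, reduction="batchmean")
```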
{"text": "torch.raveltorch.ravel(input) -> Tensor\n Return a contiguous flattened tensor. A copy is made only if\n needed.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> t = torch.tensor([[[1, 2],\n ... [3, 4]],\n ... [[5, 6],\n ... [7, 8]]])\n >>> torch.ravel(t)\n tensor([1, 2, 3, 4, 5, 6, 7, 8])", "source": "https://pytorch.org/docs/stable/generated/torch.ravel.html", "category": "pytorch docs"}
{"text": "torch.get_default_dtypetorch.get_default_dtype() -> torch.dtype\n Get the current default floating point \"torch.dtype\".\n Example:\n >>> torch.get_default_dtype() # initial default for floating point is torch.float32\n torch.float32\n >>> torch.set_default_dtype(torch.float64)\n >>> torch.get_default_dtype() # default is now changed to torch.float64\n torch.float64\n >>> torch.set_default_tensor_type(torch.FloatTensor) # setting tensor type also affects this\n >>> torch.get_default_dtype() # changed to torch.float32, the dtype for torch.FloatTensor\n torch.float32", "source": "https://pytorch.org/docs/stable/generated/torch.get_default_dtype.html", "category": "pytorch docs"}
{"text": "torch.autograd.backwardtorch.autograd.backward(tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None, inputs=None)\n Computes the sum of gradients of given tensors with respect to\n graph leaves.\n The graph is differentiated using the chain rule. If any of\n \"tensors\" are non-scalar (i.e. their data has more than one\n element) and require gradient, then the Jacobian-vector product\n would be computed, in this case the function additionally requires\n specifying \"grad_tensors\". It should be a sequence of matching\n length, that contains the \"vector\" in the Jacobian-vector product,\n usually the gradient of the differentiated function w.r.t.\n corresponding tensors (\"None\" is an acceptable value for all\n tensors that don't need gradient tensors).\n This function accumulates gradients in the leaves - you might need\n to zero \".grad\" attributes or set them to \"None\" before calling it.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.backward.html", "category": "pytorch docs"}
{"text": "See Default gradient layouts for details on the memory layout of\n accumulated gradients.\n Note:\n Using this method with \"create_graph=True\" will create a\n reference cycle between the parameter and its gradient which can\n cause a memory leak. We recommend using \"autograd.grad\" when\n creating the graph to avoid this. If you have to use this\n function, make sure to reset the \".grad\" fields of your\n parameters to \"None\" after use to break the cycle and avoid the\n leak.\n Note:\n If you run any forward ops, create \"grad_tensors\", and/or call\n \"backward\" in a user-specified CUDA stream context, see Stream\n semantics of backward passes.\n Note:\n When \"inputs\" are provided and a given input is not a leaf, the\n current implementation will call its grad_fn (even though it is\n not strictly needed to get these gradients). It is an\n implementation detail on which the user should not rely. See\n https://github.com/pytorch/pytorch/pull/60521#issuecomment-867061780\n for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.backward.html", "category": "pytorch docs"}
{"text": "Parameters:\n * tensors (Sequence[Tensor] or Tensor) -- Tensors\n of which the derivative will be computed.\n * grad_tensors (Sequence[Tensor or None] or\n Tensor, optional) -- The \"vector\" in the Jacobian-\n vector product, usually gradients w.r.t. each element of\n corresponding tensors. None values can be specified for scalar\n Tensors or ones that don't require grad. If a None value would\n be acceptable for all grad_tensors, then this argument is\n optional.\n * retain_graph (bool, optional) -- If \"False\", the\n graph used to compute the grad will be freed. Note that in\n nearly all cases setting this option to \"True\" is not needed\n and often can be worked around in a much more efficient way.\n Defaults to the value of \"create_graph\".", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.backward.html", "category": "pytorch docs"}
{"text": "Defaults to the value of \"create_graph\".\n * create_graph (bool, optional) -- If \"True\", graph of\n the derivative will be constructed, allowing higher order\n derivative products to be computed. Defaults to \"False\".\n * inputs (Sequence[Tensor] or Tensor,\n optional) -- Inputs w.r.t. which the gradient will be\n accumulated into \".grad\". All other Tensors will be ignored.\n If not provided, the gradient is accumulated into all the leaf\n Tensors that were used to compute \"tensors\".", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.backward.html", "category": "pytorch docs"}
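A minimal sketch of the "grad_tensors" parameter described above: for a non-scalar output, the supplied vector is the one contracted with the Jacobian.

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * x  # non-scalar output, so a "vector" is required

# Equivalent to y.backward(gradient=v): accumulates v @ (dy/dx) into x.grad.
v = torch.ones(3)
torch.autograd.backward(y, grad_tensors=v)
# With v = ones, x.grad is dy/dx summed per element: 2 * x
```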
{"text": "torch.geqrftorch.geqrf(input, *, out=None)\n This is a low-level function for calling LAPACK's geqrf directly.\n This function returns a namedtuple (a, tau) as defined in LAPACK\n documentation for geqrf.\n Computes a QR decomposition of \"input\". Both Q and R matrices\n are stored in the same output tensor a. The elements of R are\n stored on and above the diagonal. Elementary reflectors (or\n Householder vectors) implicitly defining matrix Q are stored\n below the diagonal. The results of this function can be used\n together with \"torch.linalg.householder_product()\" to obtain the\n Q matrix or with \"torch.ormqr()\", which uses an implicit\n representation of the Q matrix, for an efficient matrix-matrix\n multiplication.\n See LAPACK documentation for geqrf for further details.\n Note:\n See also \"torch.linalg.qr()\", which computes Q and R matrices,\n and \"torch.linalg.lstsq()\" with the \"driver=\"gels\"\" option for a", "source": "https://pytorch.org/docs/stable/generated/torch.geqrf.html", "category": "pytorch docs"}
{"text": "function that can solve matrix equations using a QR\n decomposition.\n Parameters:\n input (Tensor) -- the input matrix\n Keyword Arguments:\n out (tuple, optional) -- the output tuple of (Tensor,\n Tensor). Ignored if None. Default: None.", "source": "https://pytorch.org/docs/stable/generated/torch.geqrf.html", "category": "pytorch docs"}
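A small sketch (assumed, not from the scraped page) of unpacking geqrf's compact output into explicit Q and R factors via `torch.linalg.householder_product`, as suggested above:

```python
import torch

torch.manual_seed(0)
A = torch.randn(3, 3, dtype=torch.float64)

a, tau = torch.geqrf(A)                       # packed QR representation
R = torch.triu(a)                             # R lives on and above the diagonal
Q = torch.linalg.householder_product(a, tau)  # materialize Q from the reflectors

print(torch.allclose(Q @ R, A))  # True: Q @ R reconstructs A
```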
{"text": "torch.autograd.Function.backwardstatic Function.backward(ctx, *grad_outputs)\n Defines a formula for differentiating the operation with backward\n mode automatic differentiation (alias to the vjp function).\n This function is to be overridden by all subclasses.\n It must accept a context \"ctx\" as the first argument, followed by\n as many outputs as the \"forward()\" returned (None will be passed in\n for non-tensor outputs of the forward function), and it should\n return as many tensors as there were inputs to \"forward()\". Each\n argument is the gradient w.r.t. the given output, and each returned\n value should be the gradient w.r.t. the corresponding input. If an\n input is not a Tensor or is a Tensor not requiring grads, you can\n just pass None as a gradient for that input.\n The context can be used to retrieve tensors saved during the\n forward pass. It also has an attribute \"ctx.needs_input_grad\" as a", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.backward.html", "category": "pytorch docs"}
{"text": "tuple of booleans representing whether each input needs gradient.\n E.g., \"backward()\" will have \"ctx.needs_input_grad[0] = True\" if\n the first input to \"forward()\" needs gradient computed w.r.t. the\n output.\n Return type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.backward.html", "category": "pytorch docs"}
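A minimal custom Function following the contract described above (a sketch, not from the scraped page; the class name `Square` is illustrative):

```python
import torch

# backward() receives one grad_output per forward output and returns
# one gradient per forward input.
class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash the input for the backward pass
        return x ** 2

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d(x^2)/dx = 2x, chained with the incoming gradient
        return grad_output * 2 * x

x = torch.tensor([1.0, 3.0], requires_grad=True)
Square.apply(x).sum().backward()
print(x.grad)  # tensor([2., 6.])
```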
{"text": "torch.Tensor.isneginfTensor.isneginf() -> Tensor\n See \"torch.isneginf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isneginf.html", "category": "pytorch docs"}
{"text": "torch.cumprodtorch.cumprod(input, dim, *, dtype=None, out=None) -> Tensor\n Returns the cumulative product of elements of \"input\" in the\n dimension \"dim\".\n For example, if \"input\" is a vector of size N, the result will also\n be a vector of size N, with elements\n y_i = x_1 \\times x_2\\times x_3\\times \\dots \\times x_i\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to do the operation over\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(10)\n >>> a\n tensor([ 0.6001, 0.2069, -0.1919, 0.9792, 0.6727, 1.0062, 0.4126,\n -0.2129, -0.4206, 0.1968])", "source": "https://pytorch.org/docs/stable/generated/torch.cumprod.html", "category": "pytorch docs"}
{"text": "-0.2129, -0.4206, 0.1968])\n >>> torch.cumprod(a, dim=0)\n tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0158, -0.0065,\n 0.0014, -0.0006, -0.0001])\n >>> a[5] = 0.0\n >>> torch.cumprod(a, dim=0)\n tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0000, -0.0000,\n 0.0000, -0.0000, -0.0000])", "source": "https://pytorch.org/docs/stable/generated/torch.cumprod.html", "category": "pytorch docs"}
{"text": "torch.Tensor.diagTensor.diag(diagonal=0) -> Tensor\n See \"torch.diag()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diag.html", "category": "pytorch docs"}
{"text": "torch.rsqrttorch.rsqrt(input, *, out=None) -> Tensor\n Returns a new tensor with the reciprocal of the square-root of each\n of the elements of \"input\".\n \\text{out}_{i} = \\frac{1}{\\sqrt{\\text{input}_{i}}}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.0370, 0.2970, 1.5420, -0.9105])\n >>> torch.rsqrt(a)\n tensor([ nan, 1.8351, 0.8053, nan])", "source": "https://pytorch.org/docs/stable/generated/torch.rsqrt.html", "category": "pytorch docs"}
{"text": "torch.dstacktorch.dstack(tensors, *, out=None) -> Tensor\n Stack tensors in sequence depthwise (along third axis).\n This is equivalent to concatenation along the third axis after 1-D\n and 2-D tensors have been reshaped by \"torch.atleast_3d()\".\n Parameters:\n tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([1, 2, 3])\n >>> b = torch.tensor([4, 5, 6])\n >>> torch.dstack((a,b))\n tensor([[[1, 4],\n [2, 5],\n [3, 6]]])\n >>> a = torch.tensor([[1],[2],[3]])\n >>> b = torch.tensor([[4],[5],[6]])\n >>> torch.dstack((a,b))\n tensor([[[1, 4]],\n [[2, 5]],\n [[3, 6]]])", "source": "https://pytorch.org/docs/stable/generated/torch.dstack.html", "category": "pytorch docs"}
{"text": "torch.Tensor.tan_Tensor.tan_() -> Tensor\n In-place version of \"tan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tan_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.subTensor.sub(other, *, alpha=1) -> Tensor\n See \"torch.sub()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sub.html", "category": "pytorch docs"}
{"text": "torch._foreach_tantorch._foreach_tan(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.tan()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_tan.html", "category": "pytorch docs"}
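A one-line usage sketch (not from the scraped page; note the leading underscore marks this as a private API, assumed here to match `torch.tan` applied per tensor):

```python
import torch

# Apply tan across a whole list of tensors in a single call.
ts = [torch.tensor([0.0, 1.0]), torch.tensor([0.5])]
outs = torch._foreach_tan(ts)

print(all(torch.allclose(o, torch.tan(t)) for o, t in zip(outs, ts)))  # True
```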
{"text": "torch.disttorch.dist(input, other, p=2) -> Tensor\n Returns the p-norm of (\"input\" - \"other\")\n The shapes of \"input\" and \"other\" must be broadcastable.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the right-hand-side input tensor\n * p (float, optional) -- the norm to be computed\n Example:\n >>> x = torch.randn(4)\n >>> x\n tensor([-1.5393, -0.8675, 0.5916, 1.6321])\n >>> y = torch.randn(4)\n >>> y\n tensor([ 0.0967, -1.0511, 0.6295, 0.8360])\n >>> torch.dist(x, y, 3.5)\n tensor(1.6727)\n >>> torch.dist(x, y, 3)\n tensor(1.6973)\n >>> torch.dist(x, y, 0)\n tensor(4.)\n >>> torch.dist(x, y, 1)\n tensor(2.6537)", "source": "https://pytorch.org/docs/stable/generated/torch.dist.html", "category": "pytorch docs"}
{"text": "torch.func.vjptorch.func.vjp(func, *primals, has_aux=False)\n Standing for the vector-Jacobian product, returns a tuple\n containing the results of \"func\" applied to \"primals\" and a\n function that, when given \"cotangents\", computes the reverse-mode\n Jacobian of \"func\" with respect to \"primals\" times \"cotangents\".\n Parameters:\n * func (Callable) -- A Python function that takes one or\n more arguments. Must return one or more Tensors.\n * primals (Tensors) -- Positional arguments to \"func\" that\n must all be Tensors. The returned function will also be\n computing the derivative with respect to these arguments\n * has_aux (bool) -- Flag indicating that \"func\" returns a\n \"(output, aux)\" tuple where the first element is the output of\n the function to be differentiated and the second element is\n other auxiliary objects that will not be differentiated.\n Default: False.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.func.vjp.html", "category": "pytorch docs"}
{"text": "Default: False.\n Returns:\n Returns a \"(output, vjp_fn)\" tuple containing the output of\n \"func\" applied to \"primals\" and a function that computes the vjp\n of \"func\" with respect to all \"primals\" using the cotangents\n passed to the returned function. If \"has_aux is True\", then\n instead returns a \"(output, vjp_fn, aux)\" tuple. The returned\n \"vjp_fn\" function will return a tuple of each VJP.\n When used in simple cases, \"vjp()\" behaves the same as \"grad()\"\n >>> x = torch.randn([5])\n >>> f = lambda x: x.sin().sum()\n >>> (_, vjpfunc) = torch.func.vjp(f, x)\n >>> grad = vjpfunc(torch.tensor(1.))[0]\n >>> assert torch.allclose(grad, torch.func.grad(f)(x))\n However, \"vjp()\" can support functions with multiple outputs by\n passing in the cotangents for each of the outputs\n >>> x = torch.randn([5])\n >>> f = lambda x: (x.sin(), x.cos())\n >>> (_, vjpfunc) = torch.func.vjp(f, x)\n >>> vjps = vjpfunc((torch.ones([5]), torch.ones([5])))", "source": "https://pytorch.org/docs/stable/generated/torch.func.vjp.html", "category": "pytorch docs"}
{"text": ">>> assert torch.allclose(vjps[0], x.cos() + -x.sin())\n \"vjp()\" can even support outputs being Python structs\n >>> x = torch.randn([5])\n >>> f = lambda x: {'first': x.sin(), 'second': x.cos()}\n >>> (_, vjpfunc) = torch.func.vjp(f, x)\n >>> cotangents = {'first': torch.ones([5]), 'second': torch.ones([5])}\n >>> vjps = vjpfunc(cotangents)\n >>> assert torch.allclose(vjps[0], x.cos() + -x.sin())\n The function returned by \"vjp()\" will compute the partials with\n respect to each of the \"primals\"\n >>> x, y = torch.randn([5, 4]), torch.randn([4, 5])\n >>> (_, vjpfunc) = torch.func.vjp(torch.matmul, x, y)\n >>> cotangents = torch.randn([5, 5])\n >>> vjps = vjpfunc(cotangents)\n >>> assert len(vjps) == 2\n >>> assert torch.allclose(vjps[0], torch.matmul(cotangents, y.transpose(0, 1)))\n >>> assert torch.allclose(vjps[1], torch.matmul(x.transpose(0, 1), cotangents))\n \"primals\" are the positional arguments for \"f\". All kwargs use\n their default value\n >>> x = torch.randn([5])", "source": "https://pytorch.org/docs/stable/generated/torch.func.vjp.html", "category": "pytorch docs"}
{"text": "their default value\n >>> x = torch.randn([5])\n >>> def f(x, scale=4.):\n ...     return x * scale\n >>> (_, vjpfunc) = torch.func.vjp(f, x)\n >>> vjps = vjpfunc(torch.ones_like(x))\n >>> assert torch.allclose(vjps[0], torch.full(x.shape, 4.))\n Note:\n Using PyTorch \"torch.no_grad\" together with \"vjp\". Case 1: Using\n \"torch.no_grad\" inside a function:\n >>> def f(x):\n >>>     with torch.no_grad():\n >>>         c = x ** 2\n >>>     return x - c\n In this case, \"vjp(f)(x)\" will respect the inner\n \"torch.no_grad\". Case 2: Using \"vjp\" inside \"torch.no_grad\"\n context manager:\n >>> with torch.no_grad():\n >>>     vjp(f)(x)\n In this case, \"vjp\" will respect the inner \"torch.no_grad\", but\n not the outer one. This is because \"vjp\" is a \"function\n transform\": its result should not depend on the result of a\n context manager outside of \"f\".", "source": "https://pytorch.org/docs/stable/generated/torch.func.vjp.html", "category": "pytorch docs"}
{"text": "Tanhshrinkclass torch.nn.Tanhshrink\n Applies the element-wise function:\n \\text{Tanhshrink}(x) = x - \\tanh(x)\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n [image]\n Examples:\n >>> m = nn.Tanhshrink()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Tanhshrink.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arccosTensor.arccos() -> Tensor\n See \"torch.arccos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arccos.html", "category": "pytorch docs"}
{"text": "torch.Tensor.row_indicesTensor.row_indices()", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.row_indices.html", "category": "pytorch docs"}
{"text": "Linearclass torch.ao.nn.qat.dynamic.Linear(in_features, out_features, bias=True, qconfig=None, device=None, dtype=None)\n A linear module attached with FakeQuantize modules for weight, used\n for dynamic quantization aware training.\n We adopt the same interface as torch.nn.Linear, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for\n documentation.\n Similar to torch.nn.Linear, with FakeQuantize modules initialized\n to default.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.dynamic.Linear.html", "category": "pytorch docs"}
{"text": "MaxUnpool2dclass torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0)\n Computes a partial inverse of \"MaxPool2d\".\n \"MaxPool2d\" is not fully invertible, since the non-maximal values\n are lost.\n \"MaxUnpool2d\" takes in as input the output of \"MaxPool2d\" including\n the indices of the maximal values and computes a partial inverse in\n which all non-maximal values are set to zero.\n Note:\n \"MaxPool2d\" can map several input sizes to the same output sizes.\n Hence, the inversion process can get ambiguous. To accommodate\n this, you can provide the needed output size as an additional\n argument \"output_size\" in the forward call. See the Inputs and\n Example below.\n Parameters:\n * kernel_size (int or tuple) -- Size of the max\n pooling window.\n * stride (int or tuple) -- Stride of the max pooling\n window. It is set to \"kernel_size\" by default.\n * padding (int or tuple) -- Padding that was added to\n the input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html", "category": "pytorch docs"}
{"text": "the input\n Inputs:\n * input: the input Tensor to invert\n * indices: the indices given out by \"MaxPool2d\"\n * output_size (optional): the targeted output size\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n H_{out} = (H_{in} - 1) \\times \\text{stride[0]} - 2 \\times\n \\text{padding[0]} + \\text{kernel_size[0]}\n W_{out} = (W_{in} - 1) \\times \\text{stride[1]} - 2 \\times\n \\text{padding[1]} + \\text{kernel_size[1]}\n or as given by \"output_size\" in the call operator\n Example:\n >>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)\n >>> unpool = nn.MaxUnpool2d(2, stride=2)\n >>> input = torch.tensor([[[[ 1., 2., 3., 4.],\n [ 5., 6., 7., 8.],\n [ 9., 10., 11., 12.],\n [13., 14., 15., 16.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html", "category": "pytorch docs"}
{"text": ">>> output, indices = pool(input)\n >>> unpool(output, indices)\n tensor([[[[ 0., 0., 0., 0.],\n [ 0., 6., 0., 8.],\n [ 0., 0., 0., 0.],\n [ 0., 14., 0., 16.]]]])\n >>> # Now using output_size to resolve an ambiguous size for the inverse\n >>> input = torch.tensor([[[[ 1., 2., 3., 4., 5.],\n [ 6., 7., 8., 9., 10.],\n [11., 12., 13., 14., 15.],\n [16., 17., 18., 19., 20.]]]])\n >>> output, indices = pool(input)\n >>> # This call will not work without specifying output_size\n >>> unpool(output, indices, output_size=input.size())\n tensor([[[[ 0., 0., 0., 0., 0.],\n [ 0., 7., 0., 9., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 17., 0., 19., 0.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html", "category": "pytorch docs"}
{"text": "LSTMCellclass torch.ao.nn.quantized.dynamic.LSTMCell(*args, **kwargs)\n A long short-term memory (LSTM) cell.\n A dynamic quantized LSTMCell module with floating point tensor as\n inputs and outputs. Weights are quantized to 8 bits. We adopt the\n same interface as torch.nn.LSTMCell, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.LSTMCell for\n documentation.\n Examples:\n >>> rnn = nn.LSTMCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> cx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ...     hx, cx = rnn(input[i], (hx, cx))\n ...     output.append(hx)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.LSTMCell.html", "category": "pytorch docs"}
{"text": "torch.sparse.sumtorch.sparse.sum(input, dim=None, dtype=None)\n Returns the sum of each row of the sparse tensor \"input\" in the\n given dimensions \"dim\". If \"dim\" is a list of dimensions, reduce\n over all of them. When summing over all \"sparse_dim\", this method\n returns a dense tensor instead of a sparse tensor.\n All summed \"dim\" are squeezed (see \"torch.squeeze()\"), resulting in\n an output tensor having \"dim\" fewer dimensions than \"input\".\n During backward, only gradients at \"nnz\" locations of \"input\" will\n propagate back. Note that the gradients of \"input\" are coalesced.\n Parameters:\n * input (Tensor) -- the input sparse tensor\n * dim (int or tuple of ints) -- a dimension or a list\n of dimensions to reduce. Default: reduce over all dims.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: dtype of \"input\".\n Return type:\n Tensor\n Example:\n >>> nnz = 3\n >>> dims = [5, 5, 2, 3]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sum.html", "category": "pytorch docs"}
{"text": ">>> nnz = 3\n >>> dims = [5, 5, 2, 3]\n >>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),\n torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)\n >>> V = torch.randn(nnz, dims[2], dims[3])\n >>> size = torch.Size(dims)\n >>> S = torch.sparse_coo_tensor(I, V, size)\n >>> S\n tensor(indices=tensor([[2, 0, 3],\n [2, 4, 1]]),\n values=tensor([[[-0.6438, -1.6467, 1.4004],\n [ 0.3411, 0.0918, -0.2312]],\n [[ 0.5348, 0.0634, -2.0494],\n [-0.7125, -1.0646, 2.1844]],\n [[ 0.1276, 0.1874, -0.6334],\n [-1.9682, -0.5340, 0.7483]]]),\n size=(5, 5, 2, 3), nnz=3, layout=torch.sparse_coo)\n # when sum over only part of sparse_dims, return a sparse tensor\n >>> torch.sparse.sum(S, [1, 3])\n tensor(indices=tensor([[0, 2, 3]]),", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sum.html", "category": "pytorch docs"}
{"text": "tensor(indices=tensor([[0, 2, 3]]),\n values=tensor([[-1.4512, 0.4073],\n [-0.8901, 0.2017],\n [-0.3183, -1.7539]]),\n size=(5, 2), nnz=3, layout=torch.sparse_coo)\n # when sum over all sparse dim, return a dense tensor\n # with summed dims squeezed\n >>> torch.sparse.sum(S, [0, 1, 3])\n tensor([-2.6596, -1.1450])", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sum.html", "category": "pytorch docs"}
{"text": "torch.remaindertorch.remainder(input, other, *, out=None) -> Tensor\n Computes Python's modulus operation entrywise. The result has the\n same sign as the divisor \"other\" and its absolute value is less\n than that of \"other\".\n It may also be defined in terms of \"torch.div()\" as\n torch.remainder(a, b) == a - a.div(b, rounding_mode=\"floor\") * b\n Supports broadcasting to a common shape, type promotion, and\n integer and float inputs.\n Note:\n Complex inputs are not supported. In some cases, it is not\n mathematically possible to satisfy the definition of a modulo\n operation with complex numbers. See \"torch.fmod()\" for how\n division by zero is handled.\n See also:\n \"torch.fmod()\" which implements C++'s std::fmod. This one is\n defined in terms of division rounding towards zero.\n Parameters:\n * input (Tensor or Scalar) -- the dividend\n * other (Tensor or Scalar) -- the divisor\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.remainder.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.remainder(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)\n tensor([ 1., 0., 1., 1., 0., 1.])\n >>> torch.remainder(torch.tensor([1, 2, 3, 4, 5]), -1.5)\n tensor([ -0.5000, -1.0000, 0.0000, -0.5000, -1.0000 ])", "source": "https://pytorch.org/docs/stable/generated/torch.remainder.html", "category": "pytorch docs"}
{"text": "torch.moveaxistorch.moveaxis(input, source, destination) -> Tensor\n Alias for \"torch.movedim()\".\n This function is equivalent to NumPy's moveaxis function.\n Examples:\n >>> t = torch.randn(3,2,1)\n >>> t\n tensor([[[-0.3362],\n [-0.8437]],\n [[-0.9627],\n [ 0.1727]],\n [[ 0.5173],\n [-0.1398]]])\n >>> torch.moveaxis(t, 1, 0).shape\n torch.Size([2, 3, 1])\n >>> torch.moveaxis(t, 1, 0)\n tensor([[[-0.3362],\n [-0.9627],\n [ 0.5173]],\n [[-0.8437],\n [ 0.1727],\n [-0.1398]]])\n >>> torch.moveaxis(t, (1, 2), (0, 1)).shape\n torch.Size([2, 1, 3])\n >>> torch.moveaxis(t, (1, 2), (0, 1))\n tensor([[[-0.3362, -0.9627, 0.5173]],\n [[-0.8437, 0.1727, -0.1398]]])", "source": "https://pytorch.org/docs/stable/generated/torch.moveaxis.html", "category": "pytorch docs"}
{"text": "torch.ormqrtorch.ormqr(input, tau, other, left=True, transpose=False, *, out=None) -> Tensor\n Computes the matrix-matrix multiplication of a product of\n Householder matrices with a general matrix.\n Multiplies an m \\times n matrix C (given by \"other\") with a matrix\n Q, where Q is represented using Householder reflectors (input,\n tau). See Representation of Orthogonal or Unitary Matrices for\n further details.\n If \"left\" is True then op(Q) times C is computed, otherwise\n the result is C times op(Q). When \"left\" is True, the\n implicit matrix Q has size m \\times m. It has size n \\times n\n otherwise. If \"transpose\" is True then op is the conjugate\n transpose operation, otherwise it's a no-op.\n Supports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batched inputs, and, if the input is batched, the output\n is batched with the same dimensions.\n See also:\n \"torch.geqrf()\" can be used to form the Householder", "source": "https://pytorch.org/docs/stable/generated/torch.ormqr.html", "category": "pytorch docs"}
{"text": "representation (input, tau) of matrix Q from the QR\n decomposition.\n Note:\n This function supports backward but it is only fast when \"(input,\n tau)\" do not require gradients and/or \"tau.size(-1)\" is very\n small.\n Parameters:\n * input (Tensor) -- tensor of shape (*, mn, k) where *\n is zero or more batch dimensions and mn equals to m or n\n depending on the \"left\".\n * tau (Tensor) -- tensor of shape (*, min(mn, k)) where\n * is zero or more batch dimensions.\n * other (Tensor) -- tensor of shape (*, m, n) where *\n is zero or more batch dimensions.\n * left (bool) -- controls the order of multiplication.\n * transpose (bool) -- controls whether the matrix Q is\n conjugate transposed or not.\n Keyword Arguments:\n out (Tensor, optional) -- the output Tensor. Ignored\n if None. Default: None.", "source": "https://pytorch.org/docs/stable/generated/torch.ormqr.html", "category": "pytorch docs"}
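A sketch (assumed, not from the scraped page) of applying the implicit Q produced by geqrf via ormqr, checked against an explicitly materialized Q:

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 4, dtype=torch.float64)
C = torch.randn(4, 2, dtype=torch.float64)

a, tau = torch.geqrf(A)        # Householder representation of Q
qc = torch.ormqr(a, tau, C)    # defaults left=True, transpose=False: Q @ C

# Explicit Q for comparison; ormqr never forms this matrix itself.
Q = torch.linalg.householder_product(a, tau)
print(torch.allclose(qc, Q @ C))  # True
```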
{"text": "torch.cuda.set_sync_debug_modetorch.cuda.set_sync_debug_mode(debug_mode)\n Sets the debug mode for CUDA synchronizing operations.\n Parameters:\n debug_mode (str or int) -- if \"default\" or 0, don't\n error or warn on synchronizing operations, if \"warn\" or 1, warn\n on synchronizing operations, if \"error\" or 2, error out on\n synchronizing operations.\n Warning:\n This is an experimental feature, and not all synchronizing\n operations will trigger a warning or error. In particular,\n operations in the torch.distributed and torch.sparse namespaces\n are not covered yet.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_sync_debug_mode.html", "category": "pytorch docs"}
{"text": "torch.log10torch.log10(input, *, out=None) -> Tensor\n Returns a new tensor with the logarithm to the base 10 of the\n elements of \"input\".\n y_{i} = \\log_{10} (x_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.rand(5)\n >>> a\n tensor([ 0.5224, 0.9354, 0.7257, 0.1301, 0.2251])\n >>> torch.log10(a)\n tensor([-0.2820, -0.0290, -0.1392, -0.8857, -0.6476])", "source": "https://pytorch.org/docs/stable/generated/torch.log10.html", "category": "pytorch docs"}
{"text": "torch.flattentorch.flatten(input, start_dim=0, end_dim=-1) -> Tensor\n Flattens \"input\" by reshaping it into a one-dimensional tensor. If\n \"start_dim\" or \"end_dim\" are passed, only dimensions starting with\n \"start_dim\" and ending with \"end_dim\" are flattened. The order of\n elements in \"input\" is unchanged.\n Unlike NumPy's flatten, which always copies input's data, this\n function may return the original object, a view, or copy. If no\n dimensions are flattened, then the original object \"input\" is\n returned. Otherwise, if input can be viewed as the flattened shape,\n then that view is returned. Finally, only if the input cannot be\n viewed as the flattened shape is input's data copied. See\n \"torch.Tensor.view()\" for details on when a view will be returned.\n Note:\n Flattening a zero-dimensional tensor will return a one-\n dimensional view.\n Parameters:\n * input (Tensor) -- the input tensor.\n * start_dim (int) -- the first dim to flatten", "source": "https://pytorch.org/docs/stable/generated/torch.flatten.html", "category": "pytorch docs"}
{"text": "\nend_dim (int) -- the last dim to flatten\n Example:\n >>> t = torch.tensor([[[1, 2],\n ... [3, 4]],\n ... [[5, 6],\n ... [7, 8]]])\n >>> torch.flatten(t)\n tensor([1, 2, 3, 4, 5, 6, 7, 8])\n >>> torch.flatten(t, start_dim=1)\n tensor([[1, 2, 3, 4],\n [5, 6, 7, 8]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.flatten.html", "category": "pytorch docs"}
{"text": "torch.Tensor.indicesTensor.indices() -> Tensor\n Return the indices tensor of a sparse COO tensor.\n Warning:\n Throws an error if \"self\" is not a sparse COO tensor.\n See also \"Tensor.values()\".\n Note:\n This method can only be called on a coalesced sparse tensor. See\n \"Tensor.coalesce()\" for details.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.indices.html", "category": "pytorch docs"}
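A small sketch (not from the scraped page) showing why `coalesce()` must precede `indices()`; duplicate entries at the same position are summed:

```python
import torch

# Two entries share position (1, 0); coalesce() merges them.
i = torch.tensor([[0, 1, 1], [2, 0, 0]])
v = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()

print(s.indices())  # tensor([[0, 1], [2, 0]])
print(s.values())   # tensor([3., 9.]) -- 4.0 + 5.0 merged at (1, 0)
```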
{"text": "torch.Tensor.erfc_Tensor.erfc_() -> Tensor\n In-place version of \"erfc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erfc_.html", "category": "pytorch docs"}
{"text": "torch.autograd.profiler.profile.self_cpu_time_totalproperty profile.self_cpu_time_total\n Returns total time spent on CPU obtained as a sum of all self times\n across all the events.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.self_cpu_time_total.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.clip_grad_norm_torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2.0, error_if_nonfinite=False, foreach=None)\n Clips gradient norm of an iterable of parameters.\n The norm is computed over all gradients together, as if they were\n concatenated into a single vector. Gradients are modified in-place.\n Parameters:\n * parameters (Iterable[Tensor] or Tensor) -- an\n iterable of Tensors or a single Tensor that will have\n gradients normalized\n * max_norm (float) -- max norm of the gradients\n * norm_type (float) -- type of the used p-norm. Can be\n \"'inf'\" for infinity norm.\n * error_if_nonfinite (bool) -- if True, an error is thrown\n if the total norm of the gradients from \"parameters\" is \"nan\",\n \"inf\", or \"-inf\". Default: False (will switch to True in the\n future)\n * foreach (bool) -- use the faster foreach-based", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html", "category": "pytorch docs"}
{"text": "implementation. If \"None\", use the foreach implementation for\n CUDA and CPU tensors and silently fall back to the slow\n implementation for other device types. Default: \"None\"\n Returns:\n Total norm of the parameter gradients (viewed as a single\n vector).\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.to_mkldnnTensor.to_mkldnn() -> Tensor\n Returns a copy of the tensor in \"torch.mkldnn\" layout.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_mkldnn.html", "category": "pytorch docs"}
{"text": "torch.Tensor.erf_Tensor.erf_() -> Tensor\n In-place version of \"erf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erf_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.boolTensor.bool(memory_format=torch.preserve_format) -> Tensor\n \"self.bool()\" is equivalent to \"self.to(torch.bool)\". See \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bool.html", "category": "pytorch docs"}
{"text": "torch.view_as_complextorch.view_as_complex(input) -> Tensor\n Returns a view of \"input\" as a complex tensor. For an input complex\n tensor of \"size\" m1, m2, \\dots, mi, 2, this function returns a new\n complex tensor of \"size\" m1, m2, \\dots, mi where the last dimension\n of the input tensor is expected to represent the real and imaginary\n components of complex numbers.\n Warning:\n \"view_as_complex()\" is only supported for tensors with\n \"torch.dtype\" \"torch.float64\" and \"torch.float32\". The input is\n expected to have the last dimension of \"size\" 2. In addition, the\n tensor must have a stride of 1 for its last dimension. The\n strides of all other dimensions must be even numbers.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> x=torch.randn(4, 2)\n >>> x\n tensor([[ 1.6116, -0.5772],\n [-1.4606, -0.9120],\n [ 0.0786, -1.7497],\n [-0.6561, -1.6623]])", "source": "https://pytorch.org/docs/stable/generated/torch.view_as_complex.html", "category": "pytorch docs"}
{"text": "[-0.6561, -1.6623]])\n >>> torch.view_as_complex(x)\n tensor([(1.6116-0.5772j), (-1.4606-0.9120j), (0.0786-1.7497j), (-0.6561-1.6623j)])", "source": "https://pytorch.org/docs/stable/generated/torch.view_as_complex.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.pixel_shuffletorch.nn.functional.pixel_shuffle(input, upscale_factor) -> Tensor\n Rearranges elements in a tensor of shape (*, C \\times r^2, H, W) to\n a tensor of shape (*, C, H \\times r, W \\times r), where r is the\n \"upscale_factor\".\n See \"PixelShuffle\" for details.\n Parameters:\n * input (Tensor) -- the input tensor\n * upscale_factor (int) -- factor to increase spatial\n resolution by\n Examples:\n >>> input = torch.randn(1, 9, 4, 4)\n >>> output = torch.nn.functional.pixel_shuffle(input, 3)\n >>> print(output.size())\n torch.Size([1, 1, 12, 12])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pixel_shuffle.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arcsinh_Tensor.arcsinh_() -> Tensor\n In-place version of \"arcsinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arcsinh_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lessTensor.less()\n lt(other) -> Tensor\n See \"torch.less()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.less.html", "category": "pytorch docs"}
{"text": "ConvBnReLU3dclass torch.ao.nn.intrinsic.qat.ConvBnReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\n A ConvBnReLU3d module is a module fused from Conv3d, BatchNorm3d\n and ReLU, attached with FakeQuantize modules for weight, used in\n quantization aware training.\n We combined the interface of \"torch.nn.Conv3d\" and\n \"torch.nn.BatchNorm3d\" and \"torch.nn.ReLU\".\n Similar to torch.nn.Conv3d, with FakeQuantize modules initialized\n to default.\n Variables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBnReLU3d.html", "category": "pytorch docs"}
{"text": "torch.logical_xortorch.logical_xor(input, other, *, out=None) -> Tensor\n Computes the element-wise logical XOR of the given input tensors.\n Zeros are treated as \"False\" and nonzeros are treated as \"True\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the tensor to compute XOR with\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.logical_xor(torch.tensor([True, False, True]), torch.tensor([True, False, False]))\n tensor([False, False, True])\n >>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)\n >>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)\n >>> torch.logical_xor(a, b)\n tensor([ True, True, False, False])\n >>> torch.logical_xor(a.double(), b.double())\n tensor([ True, True, False, False])\n >>> torch.logical_xor(a.double(), b)\n tensor([ True, True, False, False])", "source": "https://pytorch.org/docs/stable/generated/torch.logical_xor.html", "category": "pytorch docs"}
{"text": "tensor([ True, True, False, False])\n >>> torch.logical_xor(a, b, out=torch.empty(4, dtype=torch.bool))\n tensor([ True, True, False, False])", "source": "https://pytorch.org/docs/stable/generated/torch.logical_xor.html", "category": "pytorch docs"}
{"text": "SobolEngineclass torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None)\n The \"torch.quasirandom.SobolEngine\" is an engine for generating\n (scrambled) Sobol sequences. Sobol sequences are an example of low\n discrepancy quasi-random sequences.\n This implementation of an engine for Sobol sequences is capable of\n sampling sequences up to a maximum dimension of 21201. It uses\n direction numbers from https://web.maths.unsw.edu.au/~fkuo/sobol/\n obtained using the search criterion D(6) up to the dimension 21201.\n This is the recommended choice by the authors.\n -[ References ]-\n * Art B. Owen. Scrambling Sobol and Niederreiter-Xing points.\n Journal of Complexity, 14(4):466-489, December 1998.\n * I. M. Sobol. The distribution of points in a cube and the\n accurate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Phys.,\n 7:784-802, 1967.\n Parameters:\n * dimension (Int) -- The dimensionality of the sequence to\n be drawn", "source": "https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html", "category": "pytorch docs"}
{"text": "be drawn\n * scramble (bool, optional) -- Setting this to \"True\"\n will produce scrambled Sobol sequences. Scrambling is capable\n of producing better Sobol sequences. Default: \"False\".\n * seed (Int, optional) -- This is the seed for the\n scrambling. The seed of the random number generator is set to\n this, if specified. Otherwise, it uses a random seed. Default:\n \"None\"\n Examples:\n >>> soboleng = torch.quasirandom.SobolEngine(dimension=5)\n >>> soboleng.draw(3)\n tensor([[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [0.5000, 0.5000, 0.5000, 0.5000, 0.5000],\n [0.7500, 0.2500, 0.2500, 0.2500, 0.7500]])\n draw(n=1, out=None, dtype=torch.float32)\n Function to draw a sequence of \"n\" points from a Sobol sequence.\n Note that the samples are dependent on the previous samples. The\n size of the result is (n, dimension).\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html", "category": "pytorch docs"}
{"text": "Parameters:\n * n (Int, optional) -- The length of sequence of\n points to draw. Default: 1\n * out (Tensor, optional) -- The output tensor\n * dtype (\"torch.dtype\", optional) -- the desired data\n type of the returned tensor. Default: \"torch.float32\"\n Return type:\n Tensor\n draw_base2(m, out=None, dtype=torch.float32)\n Function to draw a sequence of \"2m\" points from a Sobol\n sequence. Note that the samples are dependent on the previous\n samples. The size of the result is (2m, dimension).\n Parameters:\n * m (Int) -- The (base2) exponent of the number of\n points to draw.\n * out (Tensor, optional) -- The output tensor\n * dtype (\"torch.dtype\", optional) -- the desired data\n type of the returned tensor. Default: \"torch.float32\"\n Return type:\n Tensor\n fast_forward(n)", "source": "https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html", "category": "pytorch docs"}
{"text": "Tensor\n fast_forward(n)\n Function to fast-forward the state of the \"SobolEngine\" by \"n\"\n steps. This is equivalent to drawing \"n\" samples without using\n the samples.\n Parameters:\n n (Int) -- The number of steps to fast-forward by.\n reset()\n Function to reset the \"SobolEngine\" to base state.", "source": "https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html", "category": "pytorch docs"}
{"text": "PackedSequenceclass torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)\n Holds the data and list of \"batch_sizes\" of a packed sequence.\n All RNN modules accept packed sequences as inputs.\n Note:\n Instances of this class should never be created manually. They\n are meant to be instantiated by functions like\n \"pack_padded_sequence()\".Batch sizes represent the number\n elements at each sequence step in the batch, not the varying\n sequence lengths passed to \"pack_padded_sequence()\". For\n instance, given data \"abc\" and \"x\" the \"PackedSequence\" would\n contain data \"axbc\" with \"batch_sizes=[2,1,1]\".\n Variables:\n * data (Tensor) -- Tensor containing packed sequence\n * batch_sizes (Tensor) -- Tensor of integers holding\n information about the batch size at each sequence step\n * sorted_indices (Tensor, optional) -- Tensor of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html", "category": "pytorch docs"}
{"text": "integers holding how this \"PackedSequence\" is constructed from\n sequences.\n * unsorted_indices (Tensor, optional) -- Tensor of\n integers holding how this to recover the original sequences\n with correct order.\n Note:\n \"data\" can be on arbitrary device and of arbitrary dtype.\n \"sorted_indices\" and \"unsorted_indices\" must be \"torch.int64\"\n tensors on the same device as \"data\".However, \"batch_sizes\"\n should always be a CPU \"torch.int64\" tensor.This invariant is\n maintained throughout \"PackedSequence\" class, and all functions\n that construct a :class:PackedSequence in PyTorch (i.e., they\n only pass in tensors conforming to this constraint).\n batch_sizes: Tensor\n Alias for field number 1\n count(value, /)\n Return number of occurrences of value.\n data: Tensor\n Alias for field number 0\n index(value, start=0, stop=9223372036854775807, /)\n Return first index of value.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html", "category": "pytorch docs"}
{"text": "Return first index of value.\n Raises ValueError if the value is not present.\n property is_cuda\n Returns true if self.data stored on a gpu\n is_pinned()\n Returns true if self.data stored on in pinned memory\n sorted_indices: Optional[Tensor]\n Alias for field number 2\n to(args, kwargs)\n Performs dtype and/or device conversion on self.data.\n It has similar signature as \"torch.Tensor.to()\", except optional\n arguments like non_blocking and copy* should be passed as\n kwargs, not args, or they will not apply to the index tensors.\n Note:\n If the \"self.data\" Tensor already has the correct\n \"torch.dtype\" and \"torch.device\", then \"self\" is returned.\n Otherwise, returns a copy with the desired configuration.\n unsorted_indices: Optional[Tensor]\n Alias for field number 3", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html", "category": "pytorch docs"}
{"text": "Softplusclass torch.nn.Softplus(beta=1, threshold=20)\n Applies the Softplus function \\text{Softplus}(x) = \\frac{1}{\\beta}\n * \\log(1 + \\exp(\\beta * x)) element-wise.\n SoftPlus is a smooth approximation to the ReLU function and can be\n used to constrain the output of a machine to always be positive.\n For numerical stability the implementation reverts to the linear\n function when input \\times \\beta > threshold.\n Parameters:\n * beta (int) -- the \\beta value for the Softplus\n formulation. Default: 1\n * threshold (int) -- values above this revert to a linear\n function. Default: 20\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.Softplus()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softplus.html", "category": "pytorch docs"}
{"text": "torch.logical_ortorch.logical_or(input, other, , out=None) -> Tensor\n Computes the element-wise logical OR of the given input tensors.\n Zeros are treated as \"False\" and nonzeros are treated as \"True\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the tensor to compute OR with\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> torch.logical_or(torch.tensor([True, False, True]), torch.tensor([True, False, False]))\n tensor([ True, False, True])\n >>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)\n >>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)\n >>> torch.logical_or(a, b)\n tensor([ True, True, True, False])\n >>> torch.logical_or(a.double(), b.double())\n tensor([ True, True, True, False])\n >>> torch.logical_or(a.double(), b)\n tensor([ True, True, True, False])\n >>> torch.logical_or(a, b, out=torch.empty(4, dtype=torch.bool))", "source": "https://pytorch.org/docs/stable/generated/torch.logical_or.html", "category": "pytorch docs"}
{"text": "tensor([ True, True, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.logical_or.html", "category": "pytorch docs"}
{"text": "thresholdclass torch.ao.nn.quantized.functional.threshold(input, threshold, value)\n Applies the quantized version of the threshold function element-\n wise:\n x = \\begin{cases} x & \\text{if~} x > \\text{threshold} \\\n \\text{value} & \\text{otherwise} \\end{cases}\n See \"Threshold\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.threshold.html", "category": "pytorch docs"}
{"text": "torch.Tensor.diagonalTensor.diagonal(offset=0, dim1=0, dim2=1) -> Tensor\n See \"torch.diagonal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diagonal.html", "category": "pytorch docs"}
{"text": "MarginRankingLossclass torch.nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')\n Creates a criterion that measures the loss given inputs x1, x2, two\n 1D mini-batch or 0D Tensors, and a label 1D mini-batch or 0D\n Tensor y (containing 1 or -1).\n If y = 1 then it assumed the first input should be ranked higher\n (have a larger value) than the second input, and vice-versa for y =\n -1.\n The loss function for each pair of samples in the mini-batch is:\n \\text{loss}(x1, x2, y) = \\max(0, -y * (x1 - x2) + \\text{margin})\n Parameters:\n * margin (float, optional) -- Has a default value of\n 0.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html", "category": "pytorch docs"}
{"text": "minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input1: (N) or () where N is the batch size.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html", "category": "pytorch docs"}
{"text": "\nInput2: (N) or (), same shape as the Input1.\nTarget: (N) or (), same shape as the inputs.\nOutput: scalar. If \"reduction\" is \"'none'\" and Input size is\n not (), then (N).\n Examples:\n\n\nloss = nn.MarginRankingLoss()\ninput1 = torch.randn(3, requires_grad=True)\ninput2 = torch.randn(3, requires_grad=True)\ntarget = torch.randn(3).sign()\noutput = loss(input1, input2, target)\noutput.backward()\n\n\n\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cumprodTensor.cumprod(dim, dtype=None) -> Tensor\n See \"torch.cumprod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cumprod.html", "category": "pytorch docs"}
{"text": "LocalResponseNormclass torch.nn.LocalResponseNorm(size, alpha=0.0001, beta=0.75, k=1.0)\n Applies local response normalization over an input signal composed\n of several input planes, where channels occupy the second\n dimension. Applies normalization across channels.\n b_{c} = a_{c}\\left(k + \\frac{\\alpha}{n} \\sum_{c'=\\max(0,\n c-n/2)}^{\\min(N-1,c+n/2)}a_{c'}^2\\right)^{-\\beta}\n Parameters:\n * size (int) -- amount of neighbouring channels used for\n normalization\n * alpha (float) -- multiplicative factor. Default: 0.0001\n * beta (float) -- exponent. Default: 0.75\n * k (float) -- additive factor. Default: 1\n Shape:\n * Input: (N, C, )\n * Output: (N, C, ) (same shape as input)\n Examples:\n >>> lrn = nn.LocalResponseNorm(2)\n >>> signal_2d = torch.randn(32, 5, 24, 24)\n >>> signal_4d = torch.randn(16, 5, 7, 7, 7, 7)\n >>> output_2d = lrn(signal_2d)\n >>> output_4d = lrn(signal_4d)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LocalResponseNorm.html", "category": "pytorch docs"}
{"text": "torch.jit.trace_moduletorch.jit.trace_module(mod, inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=, example_inputs_is_kwarg=False)\n Trace a module and return an executable \"ScriptModule\" that will be\n optimized using just-in-time compilation. When a module is passed\n to \"torch.jit.trace\", only the \"forward\" method is run and traced.\n With \"trace_module\", you can specify a dictionary of method names\n to example inputs to trace (see the \"inputs\") argument below.\n See \"torch.jit.trace\" for more information on tracing.\n Parameters:\n * mod (torch.nn.Module) -- A \"torch.nn.Module\" containing\n methods whose names are specified in \"inputs\". The given\n methods will be compiled as a part of a single ScriptModule.\n * inputs (dict) -- A dict containing sample inputs indexed", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"}
{"text": "by method names in \"mod\". The inputs will be passed to methods\n whose names correspond to inputs' keys while tracing. \"{\n 'forward' : example_forward_input, 'method2':\n example_method2_input}\"\n Keyword Arguments:\n * check_trace (\"bool\", optional) -- Check if the same inputs\n run through traced code produce the same outputs. Default:\n \"True\". You might want to disable this if, for example, your\n network contains non- deterministic ops or if you are sure\n that the network is correct despite a checker failure.\n * check_inputs (list of dicts, optional) -- A list of\n dicts of input arguments that should be used to check the\n trace against what is expected. Each tuple is equivalent to a\n set of input arguments that would be specified in \"inputs\".\n For best results, pass in a set of checking inputs\n representative of the space of shapes and types of inputs you", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"}
{"text": "expect the network to see. If not specified, the original\n \"inputs\" are used for checking\n * check_tolerance (float, optional) -- Floating-point\n comparison tolerance to use in the checker procedure. This can\n be used to relax the checker strictness in the event that\n results diverge numerically for a known reason, such as\n operator fusion.\n * example_inputs_is_kwarg (\"bool\", optional) -- This\n parameter indicate whether the example inputs is a pack pack\n of keyword arguments. Default: \"False\".\n Returns:\n A \"ScriptModule\" object with a single \"forward\" method\n containing the traced code. When \"func\" is a \"torch.nn.Module\",\n the returned \"ScriptModule\" will have the same set of sub-\n modules and parameters as \"func\".\n Example (tracing a module with multiple methods):\n import torch\n import torch.nn as nn\n class Net(nn.Module):\n def init(self):", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"}
{"text": "def init(self):\n super(Net, self).init()\n self.conv = nn.Conv2d(1, 1, 3)\n def forward(self, x):\n return self.conv(x)\n def weighted_kernel_sum(self, weight):\n return weight * self.conv.weight\n n = Net()\n example_weight = torch.rand(1, 1, 3, 3)\n example_forward_input = torch.rand(1, 1, 3, 3)\n # Trace a specific method and construct ScriptModule with\n # a single forward method\n module = torch.jit.trace(n.forward, example_forward_input)\n # Trace a module (implicitly traces forward) and construct a\n # ScriptModule with a single forward method\n module = torch.jit.trace(n, example_forward_input)\n # Trace specific methods on a module (specified in inputs), constructs\n # a ScriptModule with forward and weighted_kernel_sum methods\n inputs = {'forward' : example_forward_input, 'weighted_kernel_sum' : example_weight}", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"}
{"text": "module = torch.jit.trace_module(n, inputs)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"}
{"text": "ReplicationPad2dclass torch.nn.ReplicationPad2d(padding)\n Pads the input tensor using replication of the input boundary.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 4-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom})\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n H_{out} = H_{in} + \\text{padding_top} +\n \\text{padding_bottom}\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ReplicationPad2d(2)\n >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)\n >>> input\n tensor([[[[0., 1., 2.],\n [3., 4., 5.],\n [6., 7., 8.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad2d.html", "category": "pytorch docs"}
{"text": "[6., 7., 8.]]]])\n >>> m(input)\n tensor([[[[0., 0., 0., 1., 2., 2., 2.],\n [0., 0., 0., 1., 2., 2., 2.],\n [0., 0., 0., 1., 2., 2., 2.],\n [3., 3., 3., 4., 5., 5., 5.],\n [6., 6., 6., 7., 8., 8., 8.],\n [6., 6., 6., 7., 8., 8., 8.],\n [6., 6., 6., 7., 8., 8., 8.]]]])\n >>> # using different paddings for different sides\n >>> m = nn.ReplicationPad2d((1, 1, 2, 0))\n >>> m(input)\n tensor([[[[0., 0., 1., 2., 2.],\n [0., 0., 1., 2., 2.],\n [0., 0., 1., 2., 2.],\n [3., 3., 4., 5., 5.],\n [6., 6., 7., 8., 8.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.to_denseTensor.to_dense() -> Tensor\n Creates a strided copy of \"self\" if \"self\" is not a strided tensor,\n otherwise returns \"self\".\n Example:\n >>> s = torch.sparse_coo_tensor(\n ... torch.tensor([[1, 1],\n ... [0, 2]]),\n ... torch.tensor([9, 10]),\n ... size=(3, 3))\n >>> s.to_dense()\n tensor([[ 0, 0, 0],\n [ 9, 0, 10],\n [ 0, 0, 0]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_dense.html", "category": "pytorch docs"}
{"text": "Dropoutclass torch.nn.Dropout(p=0.5, inplace=False)\n During training, randomly zeroes some of the elements of the input\n tensor with probability \"p\" using samples from a Bernoulli\n distribution. Each channel will be zeroed out independently on\n every forward call.\n This has proven to be an effective technique for regularization and\n preventing the co-adaptation of neurons as described in the paper\n Improving neural networks by preventing co-adaptation of feature\n detectors .\n Furthermore, the outputs are scaled by a factor of \\frac{1}{1-p}\n during training. This means that during evaluation the module\n simply computes an identity function.\n Parameters:\n * p (float) -- probability of an element to be zeroed.\n Default: 0.5\n * inplace (bool) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n Shape:\n * Input: (). Input can be of any shape\n * Output: (). Output is of the same shape as input\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> m = nn.Dropout(p=0.2)\n >>> input = torch.randn(20, 16)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html", "category": "pytorch docs"}
{"text": "avg_pool2dclass torch.ao.nn.quantized.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)\n Applies 2D average-pooling operation in kH \\times kW regions by\n step size sH \\times sW steps. The number of output features is\n equal to the number of input planes.\n Note:\n The input quantization parameters propagate to the output.\n See \"AvgPool2d\" for details and output shape.\n Parameters:\n * input -- quantized input tensor (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)\n * kernel_size -- size of the pooling region. Can be a single\n number or a tuple (kH, kW)\n * stride -- stride of the pooling operation. Can be a single\n number or a tuple (sH, sW). Default: \"kernel_size\"\n * padding -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple (padH, padW).\n Default: 0", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool2d.html", "category": "pytorch docs"}
{"text": "Default: 0\n * ceil_mode -- when True, will use ceil instead of floor\n in the formula to compute the output shape. Default: \"False\"\n * count_include_pad -- when True, will include the zero-\n padding in the averaging calculation. Default: \"True\"\n * divisor_override -- if specified, it will be used as\n divisor, otherwise size of the pooling region will be used.\n Default: None", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.neTensor.ne(other) -> Tensor\n See \"torch.ne()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ne.html", "category": "pytorch docs"}
{"text": "torch.foreach_asin_torch._foreach_asin(self: List[Tensor]) -> None\n Apply \"torch.asin()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_asin_.html", "category": "pytorch docs"}
{"text": "Linearclass torch.ao.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)\n A dynamic quantized linear module with floating point tensor as\n inputs and outputs. We adopt the same interface as\n torch.nn.Linear, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for\n documentation.\n Similar to \"torch.nn.Linear\", attributes will be randomly\n initialized at module creation time and will be overwritten later\n Variables:\n * weight (Tensor) -- the non-learnable quantized weights\n of the module which are of shape (\\text{out_features},\n \\text{in_features}).\n * bias (Tensor) -- the non-learnable floating point bias\n of the module of shape (\\text{out_features}). If \"bias\" is\n \"True\", the values are initialized to zero.\n Examples:\n >>> m = nn.quantized.dynamic.Linear(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.Linear.html", "category": "pytorch docs"}
{"text": "\n\n\nprint(output.size())\n torch.Size([128, 30])\n classmethod from_float(mod)\n Create a dynamic quantized module from a float module or\n qparams_dict\n Parameters:\n mod (Module) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n classmethod from_reference(ref_qlinear)\n Create a (fbgemm/qnnpack) dynamic quantized module from a\n reference quantized module :param ref_qlinear: a reference\n quantized module, either produced by :type ref_qlinear: Module\n :param torch.ao.quantization functions or provided by the user:\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.Linear.html", "category": "pytorch docs"}
{"text": "LazyLinearclass torch.nn.LazyLinear(out_features, bias=True, device=None, dtype=None)\n A \"torch.nn.Linear\" module where in_features is inferred.\n In this module, the weight and bias are of\n \"torch.nn.UninitializedParameter\" class. They will be initialized\n after the first call to \"forward\" is done and the module will\n become a regular \"torch.nn.Linear\" module. The \"in_features\"\n argument of the \"Linear\" is inferred from the \"input.shape[-1]\".\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * out_features (int) -- size of each output sample\n * bias (UninitializedParameter) -- If set to \"False\", the\n layer will not learn an additive bias. Default: \"True\"\n Variables:\n * weight (torch.nn.parameter.UninitializedParameter) --\n the learnable weights of the module of shape\n (\\text{out_features}, \\text{in_features}). The values are", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html", "category": "pytorch docs"}
{"text": "initialized from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}), where k =\n \\frac{1}{\\text{in_features}}\n * bias (torch.nn.parameter.UninitializedParameter) -- the\n learnable bias of the module of shape (\\text{out_features}).\n If \"bias\" is \"True\", the values are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{in_features}}\n cls_to_become\n alias of \"Linear\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.triplet_margin_with_distance_losstorch.nn.functional.triplet_margin_with_distance_loss(anchor, positive, negative, , distance_function=None, margin=1.0, swap=False, reduction='mean')\n See \"TripletMarginWithDistanceLoss\" for details.\n Return type:\n Tensor*", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.triplet_margin_with_distance_loss.html", "category": "pytorch docs"}
{"text": "MultiheadAttentionclass torch.nn.quantizable.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None)\n dequantize()\n Utility to convert the quantized MHA back to float.\n The motivation for this is that it is not trivial to conver the\n weights from the format that is used in the quantized version\n back to the float.\n forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True, is_causal=False)\n Note::\n Please, refer to \"forward()\" for more information\n Parameters:\n * query (Tensor) -- map a query and a set of key-value\n pairs to an output. See \"Attention Is All You Need\" for\n more details.\n * key (Tensor) -- map a query and a set of key-value\n pairs to an output. See \"Attention Is All You Need\" for\n more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "more details.\n * value (Tensor) -- map a query and a set of key-value\n pairs to an output. See \"Attention Is All You Need\" for\n more details.\n * key_padding_mask (Optional[Tensor]) -- if\n provided, specified padding elements in the key will be\n ignored by the attention. When given a binary mask and a\n value is True, the corresponding value on the attention\n layer will be ignored. When given a byte mask and a value\n is non-zero, the corresponding value on the attention layer\n will be ignored\n * need_weights (bool) -- output attn_output_weights.\n * attn_mask (Optional[Tensor]) -- 2D or 3D mask\n that prevents attention to certain positions. A 2D mask\n will be broadcasted for all the batches while a 3D mask\n allows to specify a different mask for the entries of each\n batch.\n Return type:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "batch.\n Return type:\n Tuple[Tensor, Optional[Tensor]]\n Shape:\n * Inputs:\n * query: (L, N, E) where L is the target sequence length, N\n is the batch size, E is the embedding dimension. (N, L, E)\n if \"batch_first\" is \"True\".\n * key: (S, N, E), where S is the source sequence length, N is\n the batch size, E is the embedding dimension. (N, S, E) if\n \"batch_first\" is \"True\".\n * value: (S, N, E) where S is the source sequence length, N\n is the batch size, E is the embedding dimension. (N, S, E)\n if \"batch_first\" is \"True\".\n * key_padding_mask: (N, S) where N is the batch size, S is\n the source sequence length. If a ByteTensor is provided,\n the non-zero positions will be ignored while the position\n with the zero positions will be unchanged. If a BoolTensor\n is provided, the positions with the value of \"True\" will be", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "ignored while the position with the value of \"False\" will\n be unchanged.\n * attn_mask: 2D mask (L, S) where L is the target sequence\n length, S is the source sequence length. 3D mask\n (N*num_heads, L, S) where N is the batch size, L is the\n target sequence length, S is the source sequence length.\n attn_mask ensure that position i is allowed to attend the\n unmasked positions. If a ByteTensor is provided, the non-\n zero positions are not allowed to attend while the zero\n positions will be unchanged. If a BoolTensor is provided,\n positions with \"True\" is not allowed to attend while\n \"False\" values will be unchanged. If a FloatTensor is\n provided, it will be added to the attention weight.\n * is_causal: If specified, applies a causal mask as attention\n mask. Mutually exclusive with providing attn_mask. Default:\n \"False\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "\"False\".\n * average_attn_weights: If true, indicates that the returned\n \"attn_weights\" should be averaged across heads. Otherwise,\n \"attn_weights\" are provided separately per head. Note that\n this flag only has an effect when \"need_weights=True.\".\n Default: True (i.e. average weights across heads)\n * Outputs:\n * attn_output: (L, N, E) where L is the target sequence\n length, N is the batch size, E is the embedding dimension.\n (N, L, E) if \"batch_first\" is \"True\".\n * attn_output_weights: If \"average_attn_weights=True\",\n returns attention weights averaged across heads of shape\n (N, L, S), where N is the batch size, L is the target\n sequence length, S is the source sequence length. If\n \"average_attn_weights=False\", returns attention weights per\n head of shape (N, num_heads, L, S).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"}
{"text": "torch.getorch.ge(input, other, , out=None) -> Tensor\n Computes \\text{input} \\geq \\text{other} element-wise.\n The second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\n Parameters:\n * input (Tensor) -- the tensor to compare\n * other (Tensor or float) -- the tensor or value to\n compare\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Returns:\n A boolean tensor that is True where \"input\" is greater than or\n equal to \"other\" and False elsewhere\n Example:\n >>> torch.ge(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[True, True], [False, True]])", "source": "https://pytorch.org/docs/stable/generated/torch.ge.html", "category": "pytorch docs"}
{"text": "torch.Tensor.map_\nTensor.map_(tensor, callable)\n Applies \"callable\" for each element in \"self\" tensor and the given\n \"tensor\" and stores the results in \"self\" tensor. \"self\" tensor and\n the given \"tensor\" must be broadcastable.\n The \"callable\" should have the signature:\n def callable(a, b) -> number", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.map_.html", "category": "pytorch docs"}
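A minimal sketch of `Tensor.map_` with hand-picked values (note that this is a slow, CPU-only Python-level loop, not a vectorized op):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([10.0, 20.0, 30.0])

# map_ walks paired elements of `a` and `b`, applies the callable,
# and writes each result back into `a` in place.
a.map_(b, lambda x, y: float(x) + float(y))
print(a)  # tensor([11., 22., 33.])
```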
{"text": "Conv2d\nclass torch.ao.nn.qat.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None, device=None, dtype=None)\n A Conv2d module attached with FakeQuantize modules for weight, used\n for quantization aware training.\n We adopt the same interface as torch.nn.Conv2d, please see\n https://pytorch.org/docs/stable/nn.html?highlight=conv2d#torch.nn.Conv2d\n for documentation.\n Similar to torch.nn.Conv2d, with FakeQuantize modules initialized\n to default.\n Variables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.Conv2d.html", "category": "pytorch docs"}
{"text": "linear\nclass torch.ao.nn.quantized.functional.linear(input, weight, bias=None, scale=None, zero_point=None)\n Applies a linear transformation to the incoming quantized data: y =\n xA^T + b. See \"Linear\"\n Note:\n Current implementation packs weights on every call, which has\n penalty on performance. If you want to avoid the overhead, use\n \"Linear\".\n Parameters:\n * input (Tensor) -- Quantized input of type torch.quint8\n * weight (Tensor) -- Quantized weight of type\n torch.qint8\n * bias (Tensor) -- None or fp32 bias of type torch.float\n * scale (double) -- output scale. If None, derived from\n the input scale\n * zero_point (python:long) -- output zero point. If None,\n derived from the input zero_point\n Return type:\n Tensor\n Shape:\n * Input: (N, *, in_features) where * means any number of\n additional dimensions\n * Weight: (out_features, in_features)\n * Bias: (out_features)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.linear.html", "category": "pytorch docs"}
{"text": "* Bias: (out_features)\n * Output: (N, *, out_features)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.linear.html", "category": "pytorch docs"}
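A rough usage sketch of the quantized functional linear (tensor values and scales are arbitrary; this requires a PyTorch build with a quantized engine such as fbgemm or qnnpack):

```python
import torch
import torch.ao.nn.quantized.functional as qF

# Quantize a float input to quint8 and a float weight to qint8,
# matching the dtypes the function expects.
x = torch.quantize_per_tensor(torch.randn(2, 3), scale=0.1, zero_point=128,
                              dtype=torch.quint8)
w = torch.quantize_per_tensor(torch.randn(4, 3), scale=0.05, zero_point=0,
                              dtype=torch.qint8)

# Output scale/zero_point are given explicitly here instead of being
# derived from the input.
y = qF.linear(x, w, bias=None, scale=0.2, zero_point=128)
print(y.shape, y.dtype)  # torch.Size([2, 4]) torch.quint8
```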
{"text": "MovingAveragePerChannelMinMaxObserver\nclass torch.quantization.observer.MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None, eps=1.1920928955078125e-07, **kwargs)\n Observer module for computing the quantization parameters based on\n the running per channel min and max values.\n This observer uses the tensor min/max statistics to compute the per\n channel quantization parameters. The module records the running\n minimum and maximum of incoming tensors, and uses this statistic to\n compute the quantization parameters.\n Parameters:\n * averaging_constant -- Averaging constant for min/max.\n * ch_axis -- Channel axis\n * dtype -- Quantized data type\n * qscheme -- Quantization scheme to be used\n * reduce_range -- Reduces the range of the quantized data\n type by 1 bit", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAveragePerChannelMinMaxObserver.html", "category": "pytorch docs"}
{"text": "type by 1 bit\n * quant_min -- Minimum quantization value. If unspecified,\n it will follow the 8-bit setup.\n * quant_max -- Maximum quantization value. If unspecified,\n it will follow the 8-bit setup.\n * eps (Tensor) -- Epsilon value for float32. Defaults to\n torch.finfo(torch.float32).eps.\n The quantization parameters are computed the same way as in\n \"MovingAverageMinMaxObserver\", with the difference that the running\n min/max values are stored per channel. Scales and zero points are\n thus computed per channel as well.\n Note:\n If the running minimum equals the running maximum, the scales\n and zero_points are set to 1.0 and 0.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAveragePerChannelMinMaxObserver.html", "category": "pytorch docs"}
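A small sketch of the observer in isolation (input shape is arbitrary; with `ch_axis=0` the qparams have one entry per channel along dim 0):

```python
import torch
from torch.ao.quantization.observer import MovingAveragePerChannelMinMaxObserver

obs = MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0)

# Feed a few batches so the running per-channel min/max get updated.
for _ in range(3):
    obs(torch.randn(4, 8))

scale, zero_point = obs.calculate_qparams()
print(scale.numel(), zero_point.numel())  # 4 entries each: one per channel
```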
{"text": "torch.nn.functional.hardswish\ntorch.nn.functional.hardswish(input, inplace=False)\n Applies the hardswish function, element-wise, as described in the\n paper:\n Searching for MobileNetV3.\n \\text{Hardswish}(x) = \\begin{cases} 0 & \\text{if~} x \\le -3, \\\\ x & \\text{if~} x \\ge +3, \\\\ x \\cdot (x + 3) / 6 & \\text{otherwise} \\end{cases}\n See \"Hardswish\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardswish.html", "category": "pytorch docs"}
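The piecewise definition can be checked directly against `F.hardswish` (a small sketch with hand-picked points covering all three cases):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-4.0, -3.0, 0.0, 1.0, 3.0, 5.0])
y = F.hardswish(x)

# Manual evaluation of the three cases: 0 for x <= -3, x for x >= 3,
# and x * (x + 3) / 6 otherwise.
manual = torch.where(x <= -3, torch.zeros_like(x),
                     torch.where(x >= 3, x, x * (x + 3) / 6))
print(y)  # tensor([0.0000, 0.0000, 0.0000, 0.6667, 3.0000, 5.0000])
```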
{"text": "torch.quantized_batch_norm\ntorch.quantized_batch_norm(input, weight=None, bias=None, mean, var, eps, output_scale, output_zero_point) -> Tensor\n Applies batch normalization on a 4D (NCHW) quantized tensor.\n y = \\frac{x - \\mathrm{E}[x]}{\\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n Parameters:\n * input (Tensor) -- quantized tensor\n * weight (Tensor) -- float tensor that corresponds to the\n gamma, size C\n * bias (Tensor) -- float tensor that corresponds to the\n beta, size C\n * mean (Tensor) -- float mean value in batch\n normalization, size C\n * var (Tensor) -- float tensor for variance, size C\n * eps (float) -- a value added to the denominator for\n numerical stability.\n * output_scale (float) -- output quantized tensor scale\n * output_zero_point (int) -- output quantized tensor\n zero_point\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_batch_norm.html", "category": "pytorch docs"}
{"text": "zero_point\n Returns:\n A quantized tensor with batch normalization applied.\n Return type:\n Tensor\n Example:\n >>> qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)\n >>> torch.quantized_batch_norm(qx, torch.ones(2), torch.zeros(2), torch.rand(2), torch.rand(2), 0.00001, 0.2, 2)\n tensor([[[[-0.2000, -0.2000],\n [ 1.6000, -0.2000]],\n [[-0.4000, -0.4000],\n [-0.4000, 0.6000]]],\n [[[-0.2000, -0.2000],\n [-0.2000, -0.2000]],\n [[ 0.6000, -0.4000],\n [ 0.6000, -0.4000]]]], size=(2, 2, 2, 2), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=0.2, zero_point=2)", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_batch_norm.html", "category": "pytorch docs"}
{"text": "torch.linalg.cholesky_ex\ntorch.linalg.cholesky_ex(A, *, upper=False, check_errors=False, out=None)\n Computes the Cholesky decomposition of a complex Hermitian or real\n symmetric positive-definite matrix.\n This function skips the (slow) error checking and error message\n construction of \"torch.linalg.cholesky()\", instead directly\n returning the LAPACK error codes as part of a named tuple \"(L,\n info)\". This makes this function a faster way to check if a matrix\n is positive-definite, and it provides an opportunity to handle\n decomposition errors more gracefully or performantly than\n \"torch.linalg.cholesky()\" does.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n If \"A\" is not a Hermitian positive-definite matrix, or if it's a\n batch of matrices and one or more of them is not a Hermitian", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky_ex.html", "category": "pytorch docs"}
{"text": "positive-definite matrix, then \"info\" stores a positive integer for\n the corresponding matrix. The positive integer indicates the order\n of the leading minor that is not positive-definite, and the\n decomposition could not be completed. \"info\" filled with zeros\n indicates that the decomposition was successful. If\n \"check_errors=True\" and \"info\" contains positive integers, then a\n RuntimeError is thrown.\n Note:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors=True\".\n Warning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n See also:\n \"torch.linalg.cholesky()\" is a NumPy compatible variant that\n always checks for errors.\n Parameters:\n A (Tensor) -- the Hermitian n times n matrix or the\n batch of such matrices of size (*, n, n) where * is one or\n more batch dimensions.\n Keyword Arguments:\n * upper (bool, optional) -- whether to return an upper", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky_ex.html", "category": "pytorch docs"}
{"text": "triangular matrix. The tensor returned with upper=True is the\n conjugate transpose of the tensor returned with upper=False.\n * check_errors (bool, optional) -- controls whether to\n check the content of \"infos\". Default: False.\n * out (tuple, optional) -- tuple of two tensors to\n write the output to. Ignored if None. Default: None.\n Examples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A = A @ A.t().conj() # creates a Hermitian positive-definite matrix\n >>> L, info = torch.linalg.cholesky_ex(A)\n >>> A\n tensor([[ 2.3792+0.0000j, -0.9023+0.9831j],\n [-0.9023-0.9831j, 0.8757+0.0000j]], dtype=torch.complex128)\n >>> L\n tensor([[ 1.5425+0.0000j, 0.0000+0.0000j],\n [-0.5850-0.6374j, 0.3567+0.0000j]], dtype=torch.complex128)\n >>> info\n tensor(0, dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky_ex.html", "category": "pytorch docs"}
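A small sketch of using `info` to detect a non-positive-definite input without raising (the matrices are hand-picked for illustration):

```python
import torch

# Symmetric but indefinite: eigenvalues are 3 and -1, so the
# decomposition fails and info reports the offending leading minor.
A = torch.tensor([[1.0, 2.0],
                  [2.0, 1.0]])
L_a, info_a = torch.linalg.cholesky_ex(A)
print(info_a)  # positive: order of the first non-positive-definite leading minor

# A diagonal positive-definite matrix decomposes cleanly.
B = torch.diag(torch.tensor([2.0, 3.0]))
L_b, info_b = torch.linalg.cholesky_ex(B)
print(info_b)  # tensor(0, dtype=torch.int32)
```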
{"text": "default_observer\ntorch.quantization.observer.default_observer\n alias of functools.partial(, quant_min=0,\n quant_max=127){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_observer.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arctanh_\nTensor.arctanh_() -> Tensor\n In-place version of \"arctanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctanh_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.neg_\nTensor.neg_() -> Tensor\n In-place version of \"neg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.neg_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sparse_mask\nTensor.sparse_mask(mask) -> Tensor\n Returns a new sparse tensor with values from a strided tensor\n \"self\" filtered by the indices of the sparse tensor \"mask\". The\n values of \"mask\" sparse tensor are ignored. \"self\" and \"mask\"\n tensors must have the same shape.\n Note:\n The returned sparse tensor might contain duplicate values if\n \"mask\" is not coalesced. It is therefore advisable to pass\n \"mask.coalesce()\" if such behavior is not desired.\n Note:\n The returned sparse tensor has the same indices as the sparse\n tensor \"mask\", even when the corresponding values in \"self\" are\n zeros.\n Parameters:\n mask (Tensor) -- a sparse tensor whose indices are used as\n a filter\n Example:\n >>> nse = 5\n >>> dims = (5, 5, 2, 2)\n >>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)),\n ... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_mask.html", "category": "pytorch docs"}
{"text": ">>> V = torch.randn(nse, dims[2], dims[3])\n >>> S = torch.sparse_coo_tensor(I, V, dims).coalesce()\n >>> D = torch.randn(dims)\n >>> D.sparse_mask(S)\n tensor(indices=tensor([[0, 0, 0, 2],\n [0, 1, 4, 3]]),\n values=tensor([[[ 1.6550, 0.2397],\n [-0.1611, -0.0779]],\n [[ 0.2326, -1.0558],\n [ 1.4711, 1.9678]],\n [[-0.5138, -0.0411],\n [ 1.9417, 0.5158]],\n [[ 0.0793, 0.0036],\n [-0.2569, -0.1055]]]),\n size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_mask.html", "category": "pytorch docs"}
{"text": "torch.cuda.max_memory_cached\ntorch.cuda.max_memory_cached(device=None)\n Deprecated; see \"max_memory_reserved()\".\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_cached.html", "category": "pytorch docs"}
{"text": "KLDivLoss\nclass torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False)\n The Kullback-Leibler divergence loss.\n For tensors of the same shape y_{\\text{pred}},\\ y_{\\text{true}},\n where y_{\\text{pred}} is the \"input\" and y_{\\text{true}} is the\n \"target\", we define the pointwise KL-divergence as\n L(y_{\\text{pred}},\\ y_{\\text{true}}) = y_{\\text{true}} \\cdot\n \\log \\frac{y_{\\text{true}}}{y_{\\text{pred}}} =\n y_{\\text{true}} \\cdot (\\log y_{\\text{true}} - \\log\n y_{\\text{pred}})\n To avoid underflow issues when computing this quantity, this loss\n expects the argument \"input\" in the log-space. The argument\n \"target\" may also be provided in the log-space if \"log_target\"=\n True.\n To summarise, this function is roughly equivalent to computing\n if not log_target: # default\n loss_pointwise = target * (target.log() - input)\n else:\n loss_pointwise = target.exp() * (target - input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html", "category": "pytorch docs"}
{"text": "and then reducing this result depending on the argument \"reduction\"\n as\n if reduction == \"mean\": # default\n loss = loss_pointwise.mean()\n elif reduction == \"batchmean\": # mathematically correct\n loss = loss_pointwise.sum() / input.size(0)\n elif reduction == \"sum\":\n loss = loss_pointwise.sum()\n else: # reduction == \"none\"\n loss = loss_pointwise\n Note:\n As all the other losses in PyTorch, this function expects the\n first argument, \"input\", to be the output of the model (e.g. the\n neural network) and the second, \"target\", to be the observations\n in the dataset. This differs from the standard mathematical\n notation KL(P\\ ||\\ Q) where P denotes the distribution of the\n observations and Q denotes the model.\n Warning:\n \"reduction\"= \"mean\" doesn't return the true KL divergence\n value, please use \"reduction\"= \"batchmean\" which aligns with\n the mathematical definition. In a future release, \"mean\" will", "source": "https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html", "category": "pytorch docs"}
{"text": "be changed to be the same as \"batchmean\".\n Parameters:\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to False, the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is False. Default: True\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is False, returns a loss per\n batch element instead and ignores \"size_average\". Default:\n True\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output. Default: \"mean\"\n * log_target (bool, optional) -- Specifies whether", "source": "https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html", "category": "pytorch docs"}
{"text": "target is in the log space. Default: False\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n * Output: scalar by default. If \"reduction\" is 'none', then\n (*), same shape as the input.\n Examples:\n >>> import torch.nn.functional as F\n >>> kl_loss = nn.KLDivLoss(reduction=\"batchmean\")\n >>> # input should be a distribution in the log space\n >>> input = F.log_softmax(torch.randn(3, 5, requires_grad=True), dim=1)\n >>> # Sample a batch of distributions. Usually this would come from the dataset\n >>> target = F.softmax(torch.rand(3, 5), dim=1)\n >>> output = kl_loss(input, target)\n >>> kl_loss = nn.KLDivLoss(reduction=\"batchmean\", log_target=True)\n >>> log_target = F.log_softmax(torch.rand(3, 5), dim=1)\n >>> output = kl_loss(input, log_target)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html", "category": "pytorch docs"}
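To make the "batchmean" reduction concrete, here is a runnable sketch checking the loss against the pointwise formula above (random seeded inputs):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
input = F.log_softmax(torch.randn(3, 5), dim=1)  # log-probabilities
target = F.softmax(torch.randn(3, 5), dim=1)     # probabilities

loss = nn.KLDivLoss(reduction="batchmean")(input, target)

# Pointwise KL summed over all elements, then divided by the batch
# size (not the element count) -- that is what "batchmean" means.
manual = (target * (target.log() - input)).sum() / input.size(0)
print(torch.allclose(loss, manual))  # True
```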
{"text": "ChainedScheduler\nclass torch.optim.lr_scheduler.ChainedScheduler(schedulers)\n Chains list of learning rate schedulers. It takes a list of\n chainable learning rate schedulers and performs consecutive step()\n functions belonging to them by just one call.\n Parameters:\n schedulers (list) -- List of chained schedulers.\n -[ Example ]-\n >>> # Assuming optimizer uses lr = 1. for all groups\n >>> # lr = 0.09 if epoch == 0\n >>> # lr = 0.081 if epoch == 1\n >>> # lr = 0.729 if epoch == 2\n >>> # lr = 0.6561 if epoch == 3\n >>> # lr = 0.59049 if epoch >= 4\n >>> scheduler1 = ConstantLR(self.opt, factor=0.1, total_iters=2)\n >>> scheduler2 = ExponentialLR(self.opt, gamma=0.9)\n >>> scheduler = ChainedScheduler([scheduler1, scheduler2])\n >>> for epoch in range(100):\n >>> train(...)\n >>> validate(...)\n >>> scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ChainedScheduler.html", "category": "pytorch docs"}
{"text": "load_state_dict(state_dict)\n Loads the schedulers state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer. The wrapped scheduler states will also be\n saved.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ChainedScheduler.html", "category": "pytorch docs"}
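A runnable sketch (the optimizer and parameter are placeholders): the observed learning rate jumps up once `ConstantLR` reaches `total_iters`, since its 0.1 factor is removed while `ExponentialLR` keeps decaying.

```python
import torch

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=1.0)
s1 = torch.optim.lr_scheduler.ConstantLR(opt, factor=0.1, total_iters=2)
s2 = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)
scheduler = torch.optim.lr_scheduler.ChainedScheduler([s1, s2])

lrs = []
for epoch in range(5):
    lrs.append(opt.param_groups[0]["lr"])
    opt.step()
    scheduler.step()  # one call advances both chained schedulers
print(lrs)
```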
{"text": "torch.fmax\ntorch.fmax(input, other, *, out=None) -> Tensor\n Computes the element-wise maximum of \"input\" and \"other\".\n This is like \"torch.maximum()\" except it handles NaNs differently:\n if exactly one of the two elements being compared is a NaN then the\n non-NaN element is taken as the maximum. Only if both elements are\n NaN is NaN propagated.\n This function is a wrapper around C++'s \"std::fmax\" and is similar\n to NumPy's \"fmax\" function.\n Supports broadcasting to a common shape, type promotion, and\n integer and floating-point inputs.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([9.7, float('nan'), 3.1, float('nan')])\n >>> b = torch.tensor([-2.2, 0.5, float('nan'), float('nan')])\n >>> torch.fmax(a, b)\n tensor([9.7000, 0.5000, 3.1000, nan])", "source": "https://pytorch.org/docs/stable/generated/torch.fmax.html", "category": "pytorch docs"}
{"text": "get_observer_state_dict\nclass torch.quantization.observer.get_observer_state_dict(mod)\n Returns the state dict corresponding to the observer stats.\n Traverse the model state_dict and extract out the stats.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.get_observer_state_dict.html", "category": "pytorch docs"}
{"text": "torch.acosh\ntorch.acosh(input, *, out=None) -> Tensor\n Returns a new tensor with the inverse hyperbolic cosine of the\n elements of \"input\".\n \\text{out}_{i} = \\cosh^{-1}(\\text{input}_{i})\n Note:\n The domain of the inverse hyperbolic cosine is [1, inf) and\n values outside this range will be mapped to \"NaN\", except for +\n INF for which the output is mapped to + INF.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4).uniform_(1, 2)\n >>> a\n tensor([ 1.3192, 1.9915, 1.9674, 1.7151 ])\n >>> torch.acosh(a)\n tensor([ 0.7791, 1.3120, 1.2979, 1.1341 ])", "source": "https://pytorch.org/docs/stable/generated/torch.acosh.html", "category": "pytorch docs"}
{"text": "torch.div\ntorch.div(input, other, *, rounding_mode=None, out=None) -> Tensor\n Divides each element of the input \"input\" by the corresponding\n element of \"other\".\n \\text{out}_i = \\frac{\\text{input}_i}{\\text{other}_i}\n Note:\n By default, this performs a \"true\" division like Python 3. See\n the \"rounding_mode\" argument for floor division.\n Supports broadcasting to a common shape, type promotion, and\n integer, float, and complex inputs. Always promotes integer types\n to the default scalar type.\n Parameters:\n * input (Tensor) -- the dividend\n * other (Tensor or Number) -- the divisor\n Keyword Arguments:\n * rounding_mode (str, optional) --\n Type of rounding applied to the result:\n * None - default behavior. Performs no rounding and, if both\n \"input\" and \"other\" are integer types, promotes the inputs\n to the default scalar type. Equivalent to true division in", "source": "https://pytorch.org/docs/stable/generated/torch.div.html", "category": "pytorch docs"}
{"text": "Python (the \"/\" operator) and NumPy's \"np.true_divide\".\n * \"trunc\" - rounds the results of the division towards zero.\n Equivalent to C-style integer division.\n * \"floor\" - rounds the results of the division down.\n Equivalent to floor division in Python (the \"//\" operator)\n and NumPy's \"np.floor_divide\".\n * out (Tensor, optional) -- the output tensor.\n Examples:\n >>> x = torch.tensor([ 0.3810, 1.2774, -0.2972, -0.3719, 0.4637])\n >>> torch.div(x, 0.5)\n tensor([ 0.7620, 2.5548, -0.5944, -0.7438, 0.9274])\n >>> a = torch.tensor([[-0.3711, -1.9353, -0.4605, -0.2917],\n ... [ 0.1815, -1.0111, 0.9805, -1.5923],\n ... [ 0.1062, 1.4581, 0.7759, -1.2344],\n ... [-0.1830, -0.0313, 1.1908, -1.4757]])\n >>> b = torch.tensor([ 0.8032, 0.2930, -0.8113, -0.2308])\n >>> torch.div(a, b)\n tensor([[-0.4620, -6.6051, 0.5676, 1.2639],", "source": "https://pytorch.org/docs/stable/generated/torch.div.html", "category": "pytorch docs"}
{"text": "[ 0.2260, -3.4509, -1.2086, 6.8990],\n [ 0.1322, 4.9764, -0.9564, 5.3484],\n [-0.2278, -0.1068, -1.4678, 6.3938]])\n >>> torch.div(a, b, rounding_mode='trunc')\n tensor([[-0., -6., 0., 1.],\n [ 0., -3., -1., 6.],\n [ 0., 4., -0., 5.],\n [-0., -0., -1., 6.]])\n >>> torch.div(a, b, rounding_mode='floor')\n tensor([[-1., -7., 0., 1.],\n [ 0., -4., -2., 6.],\n [ 0., 4., -1., 5.],\n [-1., -1., -2., 6.]])", "source": "https://pytorch.org/docs/stable/generated/torch.div.html", "category": "pytorch docs"}
{"text": "torch.Tensor.index_reduce_\nTensor.index_reduce_(dim, index, source, reduce, *, include_self=True) -> Tensor\n Accumulate the elements of \"source\" into the \"self\" tensor by\n accumulating to the indices in the order given in \"index\" using the\n reduction given by the \"reduce\" argument. For example, if \"dim ==\n 0\", \"index[i] == j\", \"reduce == prod\" and \"include_self == True\"\n then the \"i\"th row of \"source\" is multiplied by the \"j\"th row of\n \"self\". If \"include_self=True\", the values in the \"self\" tensor\n are included in the reduction, otherwise, rows in the \"self\" tensor\n that are accumulated to are treated as if they were filled with the\n reduction identities.\n The \"dim\"th dimension of \"source\" must have the same size as the\n length of \"index\" (which must be a vector), and all other\n dimensions must match \"self\", or an error will be raised.\n For a 3-D tensor with \"reduce='prod'\" and \"include_self=True\" the\n output is given as:", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce_.html", "category": "pytorch docs"}
{"text": "output is given as:\n self[index[i], :, :] = src[i, :, :] # if dim == 0\n self[:, index[i], :] = src[:, i, :] # if dim == 1\n self[:, :, index[i]] = src[:, :, i] # if dim == 2\n Note:\n This operation may behave nondeterministically when given tensors\n on a CUDA device. See Reproducibility for more information.\n Note:\n This function only supports floating point tensors.\n Warning:\n This function is in beta and may change in the near future.\n Parameters:\n * dim (int) -- dimension along which to index\n * index (Tensor) -- indices of \"source\" to select from,\n should have dtype either torch.int64 or torch.int32\n * source (FloatTensor) -- the tensor containing values to\n accumulate\n * reduce (str) -- the reduction operation to apply\n (\"prod\", \"mean\", \"amax\", \"amin\")\n Keyword Arguments:\n include_self (bool) -- whether the elements from the", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce_.html", "category": "pytorch docs"}
{"text": "\"self\" tensor are included in the reduction\n Example:\n >>> x = torch.empty(5, 3).fill_(2)\n >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], dtype=torch.float)\n >>> index = torch.tensor([0, 4, 2, 0])\n >>> x.index_reduce_(0, index, t, 'prod')\n tensor([[20., 44., 72.],\n [ 2., 2., 2.],\n [14., 16., 18.],\n [ 2., 2., 2.],\n [ 8., 10., 12.]])\n >>> x = torch.empty(5, 3).fill_(2)\n >>> x.index_reduce_(0, index, t, 'prod', include_self=False)\n tensor([[10., 22., 36.],\n [ 2., 2., 2.],\n [ 7., 8., 9.],\n [ 2., 2., 2.],\n [ 4., 5., 6.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce_.html", "category": "pytorch docs"}
{"text": "torch.autograd.functional.vhp\ntorch.autograd.functional.vhp(func, inputs, v=None, create_graph=False, strict=False)\n Function that computes the dot product between a vector \"v\" and the\n Hessian of a given scalar function at the point given by the\n inputs.\n Parameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor with a single element.\n * inputs (tuple of Tensors or Tensor) -- inputs to the\n function \"func\".\n * v (tuple of Tensors or Tensor) -- The vector for\n which the vector Hessian product is computed. Must be the same\n size as the input of \"func\". This argument is optional when\n \"func\"'s input contains a single element and (if it is not\n provided) will be set as a Tensor containing a single \"1\".\n * create_graph (bool, optional) -- If \"True\", both the\n output and result will be computed in a differentiable way.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vhp.html", "category": "pytorch docs"}
{"text": "Note that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * strict (bool, optional) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n Tensor of zeros as the vhp for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n Returns:\n tuple with:\n func_output (tuple of Tensors or Tensor): output of\n \"func(inputs)\"\n vhp (tuple of Tensors or Tensor): result of the dot product\n with the same shape as the inputs.\n Return type:\n output (tuple)\n -[ Example ]-\n >>> def pow_reducer(x):\n ... return x.pow(3).sum()\n >>> inputs = torch.rand(2, 2)\n >>> v = torch.ones(2, 2)\n >>> vhp(pow_reducer, inputs, v)\n (tensor(0.5591),\n tensor([[1.0689, 1.2431],\n [3.0989, 4.4456]]))", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vhp.html", "category": "pytorch docs"}
{"text": "[3.0989, 4.4456]]))\n >>> vhp(pow_reducer, inputs, v, create_graph=True)\n (tensor(0.5591, grad_fn=),\n tensor([[1.0689, 1.2431],\n [3.0989, 4.4456]], grad_fn=))\n >>> def pow_adder_reducer(x, y):\n ... return (2 * x.pow(2) + 3 * y.pow(2)).sum()\n >>> inputs = (torch.rand(2), torch.rand(2))\n >>> v = (torch.zeros(2), torch.ones(2))\n >>> vhp(pow_adder_reducer, inputs, v)\n (tensor(4.8053),\n (tensor([0., 0.]),\n tensor([6., 6.])))", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vhp.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.prune.ln_structured\ntorch.nn.utils.prune.ln_structured(module, name, amount, n, dim, importance_scores=None)\n Prunes tensor corresponding to parameter called \"name\" in \"module\"\n by removing the specified \"amount\" of (currently unpruned) channels\n along the specified \"dim\" with the lowest L\"n\"-norm. Modifies the\n module in place (and also returns the modified module) by:\n 1. adding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n 2. replacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * amount (int or float) -- quantity of parameters to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.ln_structured.html", "category": "pytorch docs"}
{"text": "prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * n (int, float, inf, -inf, 'fro',\n 'nuc') -- See documentation of valid entries for argument\n \"p\" in \"torch.norm()\".\n * dim (int) -- index of the dim along which we define\n channels to prune.\n * importance_scores (torch.Tensor) -- tensor of importance\n scores (of same shape as module parameter) used to compute\n mask for pruning. The values in this tensor indicate the\n importance of the corresponding elements in the parameter\n being pruned. If unspecified or None, the module parameter\n will be used in its place.\n Returns:\n modified (i.e. pruned) version of the input module\n Return type:\n module (nn.Module)\n -[ Examples ]-\n >>> from torch.nn.utils import prune\n >>> m = prune.ln_structured(", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.ln_structured.html", "category": "pytorch docs"}
{"text": ">>> m = prune.ln_structured(\n ... nn.Conv2d(5, 3, 2), 'weight', amount=0.3, dim=1, n=float('-inf')\n ... )", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.ln_structured.html", "category": "pytorch docs"}
{"text": "torch._foreach_trunc_\ntorch._foreach_trunc_(self: List[Tensor]) -> None\n Apply \"torch.trunc()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_trunc_.html", "category": "pytorch docs"}
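A quick sketch of the in-place foreach variant (input tensors are arbitrary; note the API is private/underscore-prefixed):

```python
import torch

ts = [torch.tensor([1.7, -1.7]), torch.tensor([0.2, 2.9])]

# Truncates every tensor in the list toward zero, in place.
torch._foreach_trunc_(ts)
print(ts[0], ts[1])  # tensor([ 1., -1.]) tensor([0., 2.])
```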
{"text": "default_dynamic_qconfig\ntorch.quantization.qconfig.default_dynamic_qconfig\n alias of QConfig(activation=functools.partial(,\n dtype=torch.quint8, quant_min=0, quant_max=255, is_dynamic=True){},\n weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_dynamic_qconfig.html", "category": "pytorch docs"}
{"text": "torch.Tensor.renorm_\nTensor.renorm_(p, dim, maxnorm) -> Tensor\n In-place version of \"renorm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.renorm_.html", "category": "pytorch docs"}
{"text": "CTCLoss\nclass torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)\n The Connectionist Temporal Classification loss.\n Calculates loss between a continuous (unsegmented) time series and\n a target sequence. CTCLoss sums over the probability of possible\n alignments of input to target, producing a loss value which is\n differentiable with respect to each input node. The alignment of\n input to target is assumed to be \"many-to-one\", which limits the\n length of the target sequence such that it must be \\leq the input\n length.\n Parameters:\n * blank (int, optional) -- blank label. Default 0.\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the output\n losses will be divided by the target lengths and then the mean\n over the batch is taken. Default: \"'mean'\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"}
{"text": "over the batch is taken. Default: \"'mean'\"\n * zero_infinity (bool, optional) -- Whether to zero\n infinite losses and the associated gradients. Default: \"False\"\n Infinite losses mainly occur when the inputs are too short to\n be aligned to the targets.\n Shape:\n * Log_probs: Tensor of size (T, N, C) or (T, C), where T =\n \\text{input length}, N = \\text{batch size}, and C =\n \\text{number of classes (including blank)}. The logarithmized\n probabilities of the outputs (e.g. obtained with\n \"torch.nn.functional.log_softmax()\").\n * Targets: Tensor of size (N, S) or\n (\\operatorname{sum}(\\text{target_lengths})), where N =\n \\text{batch size} and S = \\text{max target length, if shape is\n } (N, S). It represents the target sequences. Each element in\n the target sequence is a class index, and the target index\n cannot be blank (default=0). In the (N, S) form, targets are", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"}
{"text": "padded to the length of the longest sequence, and stacked. In\n the (\\operatorname{sum}(\\text{target_lengths})) form, the\n targets are assumed to be un-padded and concatenated within 1\n dimension.\n * Input_lengths: Tuple or tensor of size (N) or (), where N =\n \\text{batch size}. It represents the lengths of the inputs\n (must each be \\leq T), and the lengths are specified for each\n sequence to achieve masking under the assumption that\n sequences are padded to equal lengths.\n * Target_lengths: Tuple or tensor of size (N) or (), where N =\n \\text{batch size}. It represents the lengths of the targets.\n Lengths are specified for each sequence to achieve masking\n under the assumption that sequences are padded to equal\n lengths. If target shape is (N,S), target_lengths are\n effectively the stop index s_n for each target sequence, such\n that \"target_n = targets[n,0:s_n]\" for each target in a batch.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"}
{"text": "Lengths must each be \\leq S. If the targets are given as a 1d\n tensor that is the concatenation of individual targets, the\n target_lengths must add up to the total length of the tensor.\n * Output: scalar. If \"reduction\" is \"'none'\", then (N) if input\n is batched or () if input is unbatched, where N = \\text{batch\n size}.\n Examples:\n >>> # Targets are to be padded\n >>> T = 50 # Input sequence length\n >>> C = 20 # Number of classes (including blank)\n >>> N = 16 # Batch size\n >>> S = 30 # Target sequence length of longest target in batch (padding length)\n >>> S_min = 10 # Minimum target length, for demonstration purposes\n >>>\n >>> # Initialize random batch of input vectors, for *size = (T,N,C)\n >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()\n >>>\n >>> # Initialize random batch of targets (0 = blank, 1:C = classes)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"}
{"text": ">>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)\n >>>\n >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)\n >>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)\n >>> ctc_loss = nn.CTCLoss()\n >>> loss = ctc_loss(input, target, input_lengths, target_lengths)\n >>> loss.backward()\n >>>\n >>>\n >>> # Targets are to be un-padded\n >>> T = 50 # Input sequence length\n >>> C = 20 # Number of classes (including blank)\n >>> N = 16 # Batch size\n >>>\n >>> # Initialize random batch of input vectors, for *size = (T,N,C)\n >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()\n >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)\n >>>\n >>> # Initialize random batch of targets (0 = blank, 1:C = classes)\n >>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"}
{"text": ">>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)\n >>> ctc_loss = nn.CTCLoss()\n >>> loss = ctc_loss(input, target, input_lengths, target_lengths)\n >>> loss.backward()\n >>>\n >>>\n >>> # Targets are to be un-padded and unbatched (effectively N=1)\n >>> T = 50 # Input sequence length\n >>> C = 20 # Number of classes (including blank)\n >>>\n >>> # Initialize random batch of input vectors, for *size = (T,C)\n >>> input = torch.randn(T, C).log_softmax(2).detach().requires_grad_()\n >>> input_lengths = torch.tensor(T, dtype=torch.long)\n >>>\n >>> # Initialize random batch of targets (0 = blank, 1:C = classes)\n >>> target_lengths = torch.randint(low=1, high=T, size=(), dtype=torch.long)\n >>> target = torch.randint(low=1, high=C, size=(target_lengths,), dtype=torch.long)\n >>> ctc_loss = nn.CTCLoss()\n >>> loss = ctc_loss(input, target, input_lengths, target_lengths)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"}
{"text": ">>> loss.backward()\n Reference:\n A. Graves et al.: Connectionist Temporal Classification:\n Labelling Unsegmented Sequence Data with Recurrent Neural\n Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf\n Note:\n In order to use CuDNN, the following must be satisfied: \"targets\"\n must be in concatenated format, all \"input_lengths\" must be T,\n blank=0, \"target_lengths\" \\leq 256, and the integer arguments must\n be of dtype \"torch.int32\". The regular implementation uses the\n (more common in PyTorch) torch.long dtype.\n Note:\n In some circumstances when using the CUDA backend with CuDNN,\n this operator may select a nondeterministic algorithm to increase\n performance. If this is undesirable, you can try to make the\n operation deterministic (potentially at a performance cost) by\n setting \"torch.backends.cudnn.deterministic = True\". Please see\n the notes on Reproducibility for background.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"}
{"text": "torch.exp2torch.exp2(input, *, out=None) -> Tensor\n Alias for \"torch.special.exp2()\".", "source": "https://pytorch.org/docs/stable/generated/torch.exp2.html", "category": "pytorch docs"}
{"text": "torch.Tensor.log1pTensor.log1p() -> Tensor\n See \"torch.log1p()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log1p.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.unfoldtorch.nn.functional.unfold(input, kernel_size, dilation=1, padding=0, stride=1)\n Extracts sliding local blocks from a batched input tensor.\n Warning:\n Currently, only 4-D input tensors (batched image-like tensors)\n are supported.\n Warning:\n More than one element of the unfolded tensor may refer to a\n single memory location. As a result, in-place operations\n (especially ones that are vectorized) may result in incorrect\n behavior. If you need to write to the tensor, please clone it\n first.\n See \"torch.nn.Unfold\" for details\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.unfold.html", "category": "pytorch docs"}
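The unfold entry above can be exercised with a short CPU-only sketch (an illustrative example, not taken from the linked page): it extracts 2x2 sliding blocks from a 4-D input and clones before writing, as the warning advises.

```python
import torch
import torch.nn.functional as F

# A 4-D (batched image-like) input, as the warning above requires.
x = torch.arange(16.0).reshape(1, 1, 4, 4)

# Extract 2x2 sliding blocks: output is (N, C*kh*kw, L), where L is the
# number of block positions -- here (4-2+1)*(4-2+1) = 9.
blocks = F.unfold(x, kernel_size=2)
print(blocks.shape)  # torch.Size([1, 4, 9])

# Column 0 is the top-left block, flattened channel-first.
print(blocks[0, :, 0])  # tensor([0., 1., 4., 5.])

# Per the warning, clone before any in-place write.
writable = blocks.clone()
writable[0, 0, 0] = -1.0
```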
{"text": "torch._foreach_erfctorch._foreach_erfc(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.erfc()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_erfc.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sqrtTensor.sqrt() -> Tensor\n See \"torch.sqrt()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sqrt.html", "category": "pytorch docs"}
{"text": "torch.masked_selecttorch.masked_select(input, mask, *, out=None) -> Tensor\n Returns a new 1-D tensor which indexes the \"input\" tensor according\n to the boolean mask \"mask\" which is a BoolTensor.\n The shapes of the \"mask\" tensor and the \"input\" tensor don't need\n to match, but they must be broadcastable.\n Note:\n The returned tensor does not use the same storage as the\n original tensor.\n Parameters:\n * input (Tensor) -- the input tensor.\n * mask (BoolTensor) -- the tensor containing the binary\n mask to index with\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> x = torch.randn(3, 4)\n >>> x\n tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],\n [-1.2035, 1.2252, 0.5002, 0.6248],\n [ 0.1307, -2.0608, 0.1244, 2.0139]])\n >>> mask = x.ge(0.5)\n >>> mask\n tensor([[False, False, False, False],\n [False, True, True, True],", "source": "https://pytorch.org/docs/stable/generated/torch.masked_select.html", "category": "pytorch docs"}
{"text": "[False, True, True, True],\n [False, False, False, True]])\n >>> torch.masked_select(x, mask)\n tensor([ 1.2252, 0.5002, 0.6248, 2.0139])", "source": "https://pytorch.org/docs/stable/generated/torch.masked_select.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_sparse_csrTensor.is_sparse_csr\n Is \"True\" if the Tensor uses sparse CSR storage layout, \"False\"\n otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_sparse_csr.html", "category": "pytorch docs"}
{"text": "torch.Tensor.allcloseTensor.allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor\n See \"torch.allclose()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.allclose.html", "category": "pytorch docs"}
{"text": "torch.log2torch.log2(input, *, out=None) -> Tensor\n Returns a new tensor with the logarithm to the base 2 of the\n elements of \"input\".\n y_{i} = \\log_{2} (x_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.rand(5)\n >>> a\n tensor([ 0.8419, 0.8003, 0.9971, 0.5287, 0.0490])\n >>> torch.log2(a)\n tensor([-0.2483, -0.3213, -0.0042, -0.9196, -4.3504])", "source": "https://pytorch.org/docs/stable/generated/torch.log2.html", "category": "pytorch docs"}
{"text": "torch.autograd.profiler.profile.export_chrome_traceprofile.export_chrome_trace(path)\n Exports an EventList as a Chrome tracing tools file.\n The checkpoint can be later loaded and inspected under\n \"chrome://tracing\" URL.\n Parameters:\n path (str) -- Path where the trace will be written.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.export_chrome_trace.html", "category": "pytorch docs"}
{"text": "FusedMovingAvgObsFakeQuantizeclass torch.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize(observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=0, quant_max=255, **observer_kwargs)\n Fused module that is used to observe the input tensor (compute\n min/max), compute scale/zero_point and fake_quantize the tensor.\n This module uses a calculation similar to MovingAverageMinMaxObserver\n for the inputs, to compute the min/max values in order to compute\n the scale/zero_point. The qscheme input in the observer is used to\n differentiate between symmetric/affine quantization scheme.\n The output of this module is given by x_out = (clamp(round(x/scale\n + zero_point), quant_min, quant_max) - zero_point) * scale\n Similar to \"FakeQuantize\", and accepts the same attributes as the\n base class.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize.html", "category": "pytorch docs"}
{"text": "torch.linalg.diagonaltorch.linalg.diagonal(A, *, offset=0, dim1=-2, dim2=-1) -> Tensor\n Alias for \"torch.diagonal()\" with defaults \"dim1\"=-2, \"dim2\"=-1.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.diagonal.html", "category": "pytorch docs"}
{"text": "torch.sinctorch.sinc(input, *, out=None) -> Tensor\n Alias for \"torch.special.sinc()\".", "source": "https://pytorch.org/docs/stable/generated/torch.sinc.html", "category": "pytorch docs"}
{"text": "quantize_qatclass torch.quantization.quantize_qat(model, run_fn, run_args, inplace=False)\n Do quantization aware training and output a quantized model\n Parameters:\n * model -- input model\n * run_fn -- a function for evaluating the prepared model,\n can be a function that simply runs the prepared model or a\n training loop\n * run_args -- positional arguments for run_fn\n * inplace -- carry out model transformations in-place, the\n original module is mutated\n Returns:\n Quantized model.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_qat.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ceil_Tensor.ceil_() -> Tensor\n In-place version of \"ceil()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ceil_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.index_copyTensor.index_copy(dim, index, tensor2) -> Tensor\n Out-of-place version of \"torch.Tensor.index_copy_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_copy.html", "category": "pytorch docs"}
{"text": "Streamclass torch.cuda.Stream(device=None, priority=0, **kwargs)\n Wrapper around a CUDA stream.\n A CUDA stream is a linear sequence of execution that belongs to a\n specific device, independent from other streams. See CUDA\n semantics for details.\n Parameters:\n * device (torch.device or int, optional) -- a\n device on which to allocate the stream. If \"device\" is \"None\"\n (default) or a negative integer, this will use the current\n device.\n * priority (int, optional) -- priority of the stream.\n Can be either -1 (high priority) or 0 (low priority). By\n default, streams have priority 0.\n Note:\n Although CUDA versions >= 11 support more than two levels of\n priorities, in PyTorch, we only support two levels of priorities.\n query()\n Checks if all the work submitted has been completed.\n Returns:\n A boolean indicating if all kernels in this stream are\n completed.\n record_event(event=None)", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html", "category": "pytorch docs"}
{"text": "completed.\n record_event(event=None)\n Records an event.\n Parameters:\n event (torch.cuda.Event, optional) -- event to\n record. If not given, a new one will be allocated.\n Returns:\n Recorded event.\n synchronize()\n Wait for all the kernels in this stream to complete.\n Note:\n This is a wrapper around \"cudaStreamSynchronize()\": see CUDA\n Stream documentation for more info.\n wait_event(event)\n Makes all future work submitted to the stream wait for an event.\n Parameters:\n event (torch.cuda.Event) -- an event to wait for.\n Note:\n This is a wrapper around \"cudaStreamWaitEvent()\": see CUDA\n Stream documentation for more info.This function returns\n without waiting for \"event\": only future operations are\n affected.\n wait_stream(stream)\n Synchronizes with another stream.\n All future work submitted to this stream will wait until all", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html", "category": "pytorch docs"}
{"text": "kernels submitted to a given stream at the time of call\n complete.\n Parameters:\n stream (Stream) -- a stream to synchronize.\n Note:\n This function returns without waiting for currently enqueued\n kernels in \"stream\": only future operations are affected.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html", "category": "pytorch docs"}
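The Stream API above can be sketched in a few lines. This is an illustrative example, not from the linked page; it guards on CUDA availability so the block is a no-op on CPU-only machines.

```python
import torch

# torch.cuda.Stream only exists on CUDA devices, so guard the sketch;
# on CPU-only machines nothing is enqueued.
if torch.cuda.is_available():
    s = torch.cuda.Stream()          # priority 0 (low) by default
    a = torch.randn(1000, device="cuda")
    with torch.cuda.stream(s):       # enqueue work on the side stream
        b = a * 2
    s.synchronize()                  # wait for all kernels on s to finish
    done = s.query()                 # True once submitted work completed
else:
    done = True                      # nothing was enqueued
print(done)
```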
{"text": "torch.nn.functional.padtorch.nn.functional.pad(input, pad, mode='constant', value=None) -> Tensor\n Pads tensor.\n Padding size:\n The padding size by which to pad some dimensions of \"input\" is\n described starting from the last dimension and moving forward.\n \\left\\lfloor\\frac{\\text{len(pad)}}{2}\\right\\rfloor dimensions of\n \"input\" will be padded. For example, to pad only the last\n dimension of the input tensor, then \"pad\" has the form\n (\\text{padding_left}, \\text{padding_right}); to pad the last 2\n dimensions of the input tensor, then use (\\text{padding_left},\n \\text{padding_right}, \\text{padding_top},\n \\text{padding_bottom}); to pad the last 3 dimensions, use\n (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom},\n \\text{padding_front}, \\text{padding_back}).\n Padding mode:\n See \"torch.nn.ConstantPad2d\", \"torch.nn.ReflectionPad2d\", and", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html", "category": "pytorch docs"}
{"text": "\"torch.nn.ReplicationPad2d\" for concrete examples on how each of\n the padding modes works. Constant padding is implemented for\n arbitrary dimensions. Replicate and reflection padding are\n implemented for padding the last 3 dimensions of a 4D or 5D\n input tensor, the last 2 dimensions of a 3D or 4D input tensor,\n or the last dimension of a 2D or 3D input tensor.\n Note:\n When using the CUDA backend, this operation may induce\n nondeterministic behaviour in its backward pass that is not\n easily switched off. Please see the notes on Reproducibility for\n background.\n Parameters:\n * input (Tensor) -- N-dimensional tensor\n * pad (tuple) -- m-elements tuple, where \\frac{m}{2} \\leq\n input dimensions and m is even.\n * mode -- \"'constant'\", \"'reflect'\", \"'replicate'\" or\n \"'circular'\". Default: \"'constant'\"\n * value -- fill value for \"'constant'\" padding. Default: \"0\"\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> t4d = torch.empty(3, 3, 4, 2)\n >>> p1d = (1, 1) # pad last dim by 1 on each side\n >>> out = F.pad(t4d, p1d, \"constant\", 0) # effectively zero padding\n >>> print(out.size())\n torch.Size([3, 3, 4, 4])\n >>> p2d = (1, 1, 2, 2) # pad last dim by (1, 1) and 2nd to last by (2, 2)\n >>> out = F.pad(t4d, p2d, \"constant\", 0)\n >>> print(out.size())\n torch.Size([3, 3, 8, 4])\n >>> t4d = torch.empty(3, 3, 4, 2)\n >>> p3d = (0, 1, 2, 1, 3, 3) # pad by (0, 1), (2, 1), and (3, 3)\n >>> out = F.pad(t4d, p3d, \"constant\", 0)\n >>> print(out.size())\n torch.Size([3, 9, 7, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html", "category": "pytorch docs"}
{"text": "torch.sym_nottorch.sym_not(a)\n SymInt-aware utility for logical negation.\n Parameters:\n a (SymBool or bool) -- Object to negate", "source": "https://pytorch.org/docs/stable/generated/torch.sym_not.html", "category": "pytorch docs"}
{"text": "torch.Tensor.requires_grad_Tensor.requires_grad_(requires_grad=True) -> Tensor\n Change if autograd should record operations on this tensor: sets\n this tensor's \"requires_grad\" attribute in-place. Returns this\n tensor.\n \"requires_grad_()\"'s main use case is to tell autograd to begin\n recording operations on a Tensor \"tensor\". If \"tensor\" has\n \"requires_grad=False\" (because it was obtained through a\n DataLoader, or required preprocessing or initialization),\n \"tensor.requires_grad_()\" makes it so that autograd will begin to\n record operations on \"tensor\".\n Parameters:\n requires_grad (bool) -- If autograd should record\n operations on this tensor. Default: \"True\".\n Example:\n >>> # Let's say we want to preprocess some saved weights and use\n >>> # the result as new weights.\n >>> saved_weights = [0.1, 0.2, 0.3, 0.25]\n >>> loaded_weights = torch.tensor(saved_weights)\n >>> weights = preprocess(loaded_weights) # some function", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.requires_grad_.html", "category": "pytorch docs"}
{"text": ">>> weights\n tensor([-0.5503, 0.4926, -2.1158, -0.8303])\n >>> # Now, start to record operations done to weights\n >>> weights.requires_grad_()\n >>> out = weights.pow(2).sum()\n >>> out.backward()\n >>> weights.grad\n tensor([-1.1007, 0.9853, -4.2316, -1.6606])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.requires_grad_.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.softshrinktorch.nn.functional.softshrink(input, lambd=0.5) -> Tensor\n Applies the soft shrinkage function elementwise\n See \"Softshrink\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softshrink.html", "category": "pytorch docs"}
{"text": "torch.cuda.current_streamtorch.cuda.current_stream(device=None)\n Returns the currently selected \"Stream\" for a given device.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns the currently selected \"Stream\" for the current\n device, given by \"current_device()\", if \"device\" is \"None\"\n (default).\n Return type:\n Stream", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.current_stream.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_xor_Tensor.bitwise_xor_() -> Tensor\n In-place version of \"bitwise_xor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_xor_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.contiguousTensor.contiguous(memory_format=torch.contiguous_format) -> Tensor\n Returns a contiguous in memory tensor containing the same data as\n \"self\" tensor. If \"self\" tensor is already in the specified memory\n format, this function returns the \"self\" tensor.\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.contiguous_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html", "category": "pytorch docs"}
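The contiguous() entry has no example; a minimal sketch (illustrative, not from the linked page) shows the common case where a transpose view is non-contiguous and contiguous() copies it, while an already-contiguous tensor is returned as-is.

```python
import torch

x = torch.arange(6).reshape(2, 3)
t = x.t()                 # transposing swaps strides, producing a
print(t.is_contiguous())  # non-contiguous view: False

c = t.contiguous()        # copies the data into contiguous layout
print(c.is_contiguous())  # True; same values as t

# A tensor already in the requested memory format is returned as-is,
# not copied.
same = x.contiguous()
print(same is x)          # True
```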
{"text": "torch.std_meantorch.std_mean(input, dim=None, *, correction=1, keepdim=False, out=None)\n Calculates the standard deviation and mean over the dimensions\n specified by \"dim\". \"dim\" can be a single dimension, list of\n dimensions, or \"None\" to reduce over all dimensions.\n The standard deviation (\\sigma) is calculated as\n \\sigma = \\sqrt{\\frac{1}{N - \\delta\n N}\\sum_{i=0}^{N-1}(x_i-\\bar{x})^2}\n where x is the sample set of elements, \\bar{x} is the sample mean,\n N is the number of samples and \\delta N is the \"correction\".\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional) -- the", "source": "https://pytorch.org/docs/stable/generated/torch.std_mean.html", "category": "pytorch docs"}
{"text": "dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\n Keyword Arguments:\n * correction (int) --\n difference between the sample size and sample degrees of\n freedom. Defaults to Bessel's correction, \"correction=1\".\n Changed in version 2.0: Previously this argument was called\n \"unbiased\" and was a boolean with \"True\" corresponding to\n \"correction=1\" and \"False\" being \"correction=0\".\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n * out (Tensor, optional) -- the output tensor.\n Returns:\n A tuple (std, mean) containing the standard deviation and mean.\n -[ Example ]-\n >>> a = torch.tensor(\n ... [[ 0.2035, 1.2959, 1.8101, -0.4644],\n ... [ 1.5027, -0.3270, 0.5905, 0.6538],\n ... [-1.5745, 1.3330, -0.5596, -0.6548],\n ... [ 0.1264, -0.5080, 1.6420, 0.1992]])", "source": "https://pytorch.org/docs/stable/generated/torch.std_mean.html", "category": "pytorch docs"}
{"text": ">>> torch.std_mean(a, dim=0, keepdim=True)\n (tensor([[1.2620, 1.0028, 1.0957, 0.6038]]),\n tensor([[ 0.0645, 0.4485, 0.8707, -0.0665]]))", "source": "https://pytorch.org/docs/stable/generated/torch.std_mean.html", "category": "pytorch docs"}
{"text": "torch.cuda.manual_seed_alltorch.cuda.manual_seed_all(seed)\n Sets the seed for generating random numbers on all GPUs. It's safe\n to call this function if CUDA is not available; in that case, it is\n silently ignored.\n Parameters:\n seed (int) -- The desired seed.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.manual_seed_all.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.adaptive_avg_pool1dtorch.nn.functional.adaptive_avg_pool1d(input, output_size) -> Tensor\n Applies a 1D adaptive average pooling over an input signal composed\n of several input planes.\n See \"AdaptiveAvgPool1d\" for details and output shape.\n Parameters:\n output_size -- the target output size (single integer)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_avg_pool1d.html", "category": "pytorch docs"}
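The adaptive_avg_pool1d entry has no example; a minimal sketch (illustrative, not from the linked page) shows how any input length is mapped to the requested output size by automatically chosen pooling windows.

```python
import torch
import torch.nn.functional as F

# (N, C, L_in) -> (N, C, output_size); the kernel and stride are chosen
# automatically so any input length maps to the requested output size.
x = torch.arange(8.0).reshape(1, 1, 8)
y = F.adaptive_avg_pool1d(x, output_size=2)
print(y)  # tensor([[[1.5000, 5.5000]]]) -- each half of 0..7 averaged
```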
{"text": "Flattenclass torch.nn.Flatten(start_dim=1, end_dim=-1)\n Flattens a contiguous range of dims into a tensor. For use with\n \"Sequential\".\n Shape:\n * Input: (*, S_{\\text{start}}, ..., S_{i}, ..., S_{\\text{end}},\n *), where S_{i} is the size at dimension i and * means any\n number of dimensions including none.\n * Output: (*, \\prod_{i=\\text{start}}^{\\text{end}} S_{i}, *).\n Parameters:\n * start_dim (int) -- first dim to flatten (default = 1).\n * end_dim (int) -- last dim to flatten (default = -1).\n Examples::\n >>> input = torch.randn(32, 1, 5, 5)\n >>> # With default parameters\n >>> m = nn.Flatten()\n >>> output = m(input)\n >>> output.size()\n torch.Size([32, 25])\n >>> # With non-default parameters\n >>> m = nn.Flatten(0, 2)\n >>> output = m(input)\n >>> output.size()\n torch.Size([160, 5])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html", "category": "pytorch docs"}
{"text": "torch.Tensor.mulTensor.mul(value) -> Tensor\n See \"torch.mul()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mul.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.rnn.pad_packed_sequencetorch.nn.utils.rnn.pad_packed_sequence(sequence, batch_first=False, padding_value=0.0, total_length=None)\n Pads a packed batch of variable length sequences.\n It is an inverse operation to \"pack_padded_sequence()\".\n The returned Tensor's data will be of size \"T x B x *\", where T\n is the length of the longest sequence and B is the batch size. If\n \"batch_first\" is True, the data will be transposed into \"B x T x *\"\n format.\n -[ Example ]-\n >>> from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence\n >>> seq = torch.tensor([[1, 2, 0], [3, 0, 0], [4, 5, 6]])\n >>> lens = [2, 1, 3]\n >>> packed = pack_padded_sequence(seq, lens, batch_first=True, enforce_sorted=False)\n >>> packed\n PackedSequence(data=tensor([4, 1, 3, 5, 2, 6]), batch_sizes=tensor([3, 2, 1]),\n sorted_indices=tensor([2, 0, 1]), unsorted_indices=tensor([1, 2, 0]))", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_packed_sequence.html", "category": "pytorch docs"}
{"text": ">>> seq_unpacked, lens_unpacked = pad_packed_sequence(packed, batch_first=True)\n >>> seq_unpacked\n tensor([[1, 2, 0],\n [3, 0, 0],\n [4, 5, 6]])\n >>> lens_unpacked\n tensor([2, 1, 3])\n Note:\n \"total_length\" is useful to implement the \"pack sequence ->\n recurrent network -> unpack sequence\" pattern in a \"Module\"\n wrapped in \"DataParallel\". See this FAQ section for details.\n Parameters:\n * sequence (PackedSequence) -- batch to pad\n * batch_first (bool, optional) -- if \"True\", the\n output will be in \"B x T x *\" format.\n * padding_value (float, optional) -- values for padded\n elements.\n * total_length (int, optional) -- if not \"None\", the\n output will be padded to have length \"total_length\". This\n method will throw \"ValueError\" if \"total_length\" is less than\n the max sequence length in \"sequence\".\n Returns:\n Tuple of Tensor containing the padded sequence, and a Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_packed_sequence.html", "category": "pytorch docs"}
{"text": "containing the list of lengths of each sequence in the batch.\n Batch elements will be re-ordered as they were ordered\n originally when the batch was passed to \"pack_padded_sequence\"\n or \"pack_sequence\".\n Return type:\n Tuple[Tensor, Tensor]", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_packed_sequence.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arctan2_Tensor.arctan2_()\n atan2_(other) -> Tensor\n In-place version of \"arctan2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctan2_.html", "category": "pytorch docs"}
{"text": "ConvBn1dclass torch.ao.nn.intrinsic.ConvBn1d(conv, bn)\n This is a sequential container which calls the Conv 1d and Batch\n Norm 1d modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBn1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_and_Tensor.bitwise_and_() -> Tensor\n In-place version of \"bitwise_and()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_and_.html", "category": "pytorch docs"}
{"text": "torch.chain_matmultorch.chain_matmul(*matrices, out=None)\n Returns the matrix product of the N 2-D tensors. This product is\n efficiently computed using the matrix chain order algorithm, which\n selects the multiplication order that incurs the lowest cost in\n terms of arithmetic operations ([CLRS]). Note that since this is a\n function to compute the product, N needs to be greater than or\n equal to 2; if equal to 2 then a trivial matrix-matrix product is\n returned. If N is 1, then this is a no-op - the original matrix is\n returned as is.\n Warning:\n \"torch.chain_matmul()\" is deprecated and will be removed in a\n future PyTorch release. Use \"torch.linalg.multi_dot()\" instead,\n which accepts a list of two or more tensors rather than multiple\n arguments.\n Parameters:\n * matrices (sequence of Tensors) -- a sequence of 2 or more 2-D\n tensors whose product is to be determined.\n * out (Tensor, optional) -- the output tensor. Ignored", "source": "https://pytorch.org/docs/stable/generated/torch.chain_matmul.html", "category": "pytorch docs"}
{"text": "if \"out\" = \"None\".\n Returns:\n if the i^{th} tensor was of dimensions p_{i} \\times p_{i + 1},\n then the product would be of dimensions p_{1} \\times p_{N + 1}.\n Return type:\n Tensor\n Example:\n >>> a = torch.randn(3, 4)\n >>> b = torch.randn(4, 5)\n >>> c = torch.randn(5, 6)\n >>> d = torch.randn(6, 7)\n >>> # will raise a deprecation warning\n >>> torch.chain_matmul(a, b, c, d)\n tensor([[ -2.3375, -3.9790, -4.1119, -6.6577, 9.5609, -11.5095, -3.2614],\n [ 21.4038, 3.3378, -8.4982, -5.2457, -10.2561, -2.4684, 2.7163],\n [ -0.9647, -5.8917, -2.3213, -5.2284, 12.8615, -12.2816, -2.5095]])", "source": "https://pytorch.org/docs/stable/generated/torch.chain_matmul.html", "category": "pytorch docs"}
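Since the entry above deprecates chain_matmul in favor of torch.linalg.multi_dot, a short sketch (illustrative, not from the linked page) shows the replacement call and checks it against plain left-to-right multiplication.

```python
import torch

# multi_dot takes a single list of tensors and, like chain_matmul,
# picks the multiplication order with the lowest arithmetic cost.
a = torch.randn(3, 4)
b = torch.randn(4, 5)
c = torch.randn(5, 6)
d = torch.randn(6, 7)

out = torch.linalg.multi_dot([a, b, c, d])
print(out.shape)  # torch.Size([3, 7])

# Same result as multiplying left to right, up to floating-point error.
ref = a @ b @ c @ d
print(torch.allclose(out, ref, atol=1e-4))  # True
```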
{"text": "torch._foreach_log2_torch._foreach_log2_(self: List[Tensor]) -> None\n Apply \"torch.log2()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log2_.html", "category": "pytorch docs"}
{"text": "torch.cuda.utilizationtorch.cuda.utilization(device=None)\n Returns the percent of time over the past sample period during\n which one or more kernels was executing on the GPU as given by\n nvidia-smi.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n int\n Warning: Each sample period may be between 1 second and 1/6 second,\n depending on the product being queried.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.utilization.html", "category": "pytorch docs"}
{"text": "torch.get_num_threadstorch.get_num_threads() -> int\n Returns the number of threads used for parallelizing CPU operations", "source": "https://pytorch.org/docs/stable/generated/torch.get_num_threads.html", "category": "pytorch docs"}
{"text": "torch.Tensor.hsplitTensor.hsplit(split_size_or_sections) -> List of Tensors\n See \"torch.hsplit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.hsplit.html", "category": "pytorch docs"}
{"text": "CrossEntropyLossclass torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean', label_smoothing=0.0)\n This criterion computes the cross entropy loss between input logits\n and target.\n It is useful when training a classification problem with C\n classes. If provided, the optional argument \"weight\" should be a 1D\n Tensor assigning weight to each of the classes. This is\n particularly useful when you have an unbalanced training set.\n The input is expected to contain the unnormalized logits for each\n class (which do not need to be positive or sum to 1, in general).\n input has to be a Tensor of size (C) for unbatched input,\n (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \\geq 1\n for the K-dimensional case. The last being useful for higher\n dimension inputs, such as computing cross entropy loss per-pixel\n for 2D images.\n The target that this criterion expects should contain either:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"}
{"text": "\nClass indices in the range [0, C) where C is the number of\n classes; if ignore_index is specified, this loss also accepts\n this class index (this index may not necessarily be in the class\n range). The unreduced (i.e. with \"reduction\" set to \"'none'\")\n loss for this case can be described as:\n \\ell(x, y) = L = {l_1,\\dots,l_N}^\\top, \\quad l_n = - w_{y_n}\n \\log \\frac{\\exp(x_{n,y_n})}{\\sum_{c=1}^C \\exp(x_{n,c})} \\cdot\n \\mathbb{1}{y_n \\not= \\text{ignore_index}}\n where x is the input, y is the target, w is the weight, C is the\n number of classes, and N spans the minibatch dimension as well as\n d_1, ..., d_k for the K-dimensional case. If \"reduction\" is not\n \"'none'\" (default \"'mean'\"), then\n \\ell(x, y) = \\begin{cases} \\sum_{n=1}^N\n \\frac{1}{\\sum_{n=1}^N w_{y_n} \\cdot \\mathbb{1}{y_n \\not=\n \\text{ignore_index}}} l_n, & \\text{if reduction} =\n \\text{`mean';}\\ \\sum_{n=1}^N l_n, & \\text{if\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"}
{"text": "reduction} = \\text{sum'.} \\end{cases}\n Note that this case is equivalent to the combination of\n \"LogSoftmax\" and \"NLLLoss\".\n * Probabilities for each class; useful when labels beyond a single\n class per minibatch item are required, such as for blended\n labels, label smoothing, etc. The unreduced (i.e. with\n \"reduction\" set to \"'none'\") loss for this case can be described\n as:\n \\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = -\n \\sum_{c=1}^C w_c \\log \\frac{\\exp(x_{n,c})}{\\sum_{i=1}^C\n \\exp(x_{n,i})} y_{n,c}\n where x is the input, y is the target, w is the weight, C is the\n number of classes, and N spans the minibatch dimension as well as\n d_1, ..., d_k for the *K*-dimensional case. If \"reduction\" is not\n \"'none'\" (default \"'mean'\"), then\n \\ell(x, y) = \\begin{cases} \\frac{\\sum_{n=1}^N l_n}{N}, &\n \\text{if reduction} = \\text{mean';}\\ \\sum_{n=1}^N l_n,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"}
{"text": "& \\text{if reduction} = \\text{`sum'.} \\end{cases}\n Note:\n The performance of this criterion is generally better when\n target contains class indices, as this allows for optimized\n computation. Consider providing target as class probabilities\n only when a single class label per minibatch item is too\n restrictive.\n Parameters:\n * weight (Tensor, optional) -- a manual rescaling\n weight given to each class. If given, has to be a Tensor of\n size C\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * ignore_index (int, optional) -- Specifies a target", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"}
{"text": "value that is ignored and does not contribute to the input\n gradient. When \"size_average\" is \"True\", the loss is averaged\n over non-ignored targets. Note that \"ignore_index\" is only\n applicable when the target contains class indices.\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the weighted\n mean of the output is taken, \"'sum'\": the output will be\n summed. Note: \"size_average\" and \"reduce\" are in the process\n of being deprecated, and in the meantime, specifying either of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"}
{"text": "those two args will override \"reduction\". Default: \"'mean'\"\n * label_smoothing (float, optional) -- A float in\n [0.0, 1.0]. Specifies the amount of smoothing when computing\n the loss, where 0.0 means no smoothing. The targets become a\n mixture of the original ground truth and a uniform\n distribution as described in Rethinking the Inception\n Architecture for Computer Vision. Default: 0.0.\n Shape:\n * Input: Shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K\n \\geq 1 in the case of K-dimensional loss.\n * Target: If containing class indices, shape (), (N) or (N, d_1,\n d_2, ..., d_K) with K \\geq 1 in the case of K-dimensional loss\n where each value should be between [0, C). If containing class\n probabilities, same shape as the input and each value should\n be between [0, 1].\n * Output: If reduction is 'none', shape (), (N) or (N, d_1, d_2,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"}
{"text": "..., d_K) with K \\geq 1 in the case of K-dimensional loss,\n depending on the shape of the input. Otherwise, scalar.\n where:\n \\begin{aligned} C ={} & \\text{number of classes} \\ N\n ={} & \\text{batch size} \\ \\end{aligned}\n Examples:\n >>> # Example of target with class indices\n >>> loss = nn.CrossEntropyLoss()\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.empty(3, dtype=torch.long).random_(5)\n >>> output = loss(input, target)\n >>> output.backward()\n >>>\n >>> # Example of target with class probabilities\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randn(3, 5).softmax(dim=1)\n >>> output = loss(input, target)\n >>> output.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"}
{"text": "prepare_qatclass torch.quantization.prepare_qat(model, mapping=None, inplace=False)\n Prepares a copy of the model for quantization calibration or\n quantization-aware training and converts it to quantized version.\n Quantization configuration should be assigned preemptively to\n individual submodules in .qconfig attribute.\n Parameters:\n * model -- input model to be modified in-place\n * mapping -- dictionary that maps float modules to quantized\n modules to be replaced.\n * inplace -- carry out model transformations in-place, the\n original module is mutated", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.prepare_qat.html", "category": "pytorch docs"}
{"text": "torch.Tensor.acos_Tensor.acos_() -> Tensor\n In-place version of \"acos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.acos_.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.thresholdtorch.nn.functional.threshold(input, threshold, value, inplace=False)\n Thresholds each element of the input Tensor.\n See \"Threshold\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.threshold.html", "category": "pytorch docs"}
{"text": "torch.xlogytorch.xlogy(input, other, *, out=None) -> Tensor\n Alias for \"torch.special.xlogy()\".", "source": "https://pytorch.org/docs/stable/generated/torch.xlogy.html", "category": "pytorch docs"}
{"text": "torch.sumtorch.sum(input, , dtype=None) -> Tensor\n Returns the sum of all elements in the \"input\" tensor.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Example:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 0.1133, -0.9567, 0.2958]])\n >>> torch.sum(a)\n tensor(-0.5475)\n torch.sum(input, dim, keepdim=False, , dtype=None) -> Tensor\n Returns the sum of each row of the \"input\" tensor in the given\n dimension \"dim\". If \"dim\" is a list of dimensions, reduce over all\n of them.\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.", "source": "https://pytorch.org/docs/stable/generated/torch.sum.html", "category": "pytorch docs"}
{"text": "Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.0569, -0.2475, 0.0737, -0.3429],\n [-0.2993, 0.9138, 0.9337, -1.6864],\n [ 0.1132, 0.7892, -0.1003, 0.5688],\n [ 0.3637, -0.9906, -0.4752, -1.5197]])\n >>> torch.sum(a, 1)", "source": "https://pytorch.org/docs/stable/generated/torch.sum.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.sum(a, 1)\n tensor([-0.4598, -0.1381, 1.3708, -2.6217])\n >>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)\n >>> torch.sum(b, (2, 1))\n tensor([ 435., 1335., 2235., 3135.])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sum.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_cudaTensor.is_cuda\n Is \"True\" if the Tensor is stored on the GPU, \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_cuda.html", "category": "pytorch docs"}
{"text": "torch.autograd.gradtorch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False, is_grads_batched=False)\n Computes and returns the sum of gradients of outputs with respect\n to the inputs.\n \"grad_outputs\" should be a sequence of length matching \"output\"\n containing the \"vector\" in vector-Jacobian product, usually the\n pre-computed gradients w.r.t. each of the outputs. If an output\n doesn't require_grad, then the gradient can be \"None\").\n Note:\n If you run any forward ops, create \"grad_outputs\", and/or call\n \"grad\" in a user-specified CUDA stream context, see Stream\n semantics of backward passes.\n Note:\n \"only_inputs\" argument is deprecated and is ignored now (defaults\n to \"True\"). To accumulate gradient for other parts of the graph,\n please use \"torch.autograd.backward\".\n Parameters:\n * outputs (sequence of Tensor) -- outputs of the\n differentiated function.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.grad.html", "category": "pytorch docs"}
{"text": "differentiated function.\n * inputs (sequence of Tensor) -- Inputs w.r.t. which the\n gradient will be returned (and not accumulated into \".grad\").\n * grad_outputs (sequence of Tensor) -- The \"vector\" in the\n vector-Jacobian product. Usually gradients w.r.t. each output.\n None values can be specified for scalar Tensors or ones that\n don't require grad. If a None value would be acceptable for\n all grad_tensors, then this argument is optional. Default:\n None.\n * retain_graph (bool, optional) -- If \"False\", the\n graph used to compute the grad will be freed. Note that in\n nearly all cases setting this option to \"True\" is not needed\n and often can be worked around in a much more efficient way.\n Defaults to the value of \"create_graph\".\n * create_graph (bool, optional) -- If \"True\", graph of\n the derivative will be constructed, allowing to compute higher", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.grad.html", "category": "pytorch docs"}
{"text": "order derivative products. Default: \"False\".\n * allow_unused (bool, optional) -- If \"False\",\n specifying inputs that were not used when computing outputs\n (and therefore their grad is always zero) is an error.\n Defaults to \"False\".\n * is_grads_batched (bool, optional) -- If \"True\", the\n first dimension of each tensor in \"grad_outputs\" will be\n interpreted as the batch dimension. Instead of computing a\n single vector-Jacobian product, we compute a batch of vector-\n Jacobian products for each \"vector\" in the batch. We use the\n vmap prototype feature as the backend to vectorize calls to\n the autograd engine so that this computation can be performed\n in a single call. This should lead to performance improvements\n when compared to manually looping and performing backward\n multiple times. Note that due to this feature being\n experimental, there may be performance cliffs. Please use", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.grad.html", "category": "pytorch docs"}
{"text": "\"torch._C._debug_only_display_vmap_fallback_warnings(True)\" to\n show any performance warnings and file an issue on github if\n warnings exist for your use case. Defaults to \"False\".\n Return type:\n Tuple[Tensor, ...]", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.grad.html", "category": "pytorch docs"}
{"text": "torch.Tensor.index_add_Tensor.index_add_(dim, index, source, *, alpha=1) -> Tensor\n Accumulate the elements of \"alpha\" times \"source\" into the \"self\"\n tensor by adding to the indices in the order given in \"index\". For\n example, if \"dim == 0\", \"index[i] == j\", and \"alpha=-1\", then the\n \"i\"th row of \"source\" is subtracted from the \"j\"th row of \"self\".\n The \"dim\"th dimension of \"source\" must have the same size as the\n length of \"index\" (which must be a vector), and all other\n dimensions must match \"self\", or an error will be raised.\n For a 3-D tensor the output is given as:\n self[index[i], :, :] += alpha * src[i, :, :] # if dim == 0\n self[:, index[i], :] += alpha * src[:, i, :] # if dim == 1\n self[:, :, index[i]] += alpha * src[:, :, i] # if dim == 2\n Note:\n This operation may behave nondeterministically when given tensors\n on a CUDA device. See Reproducibility for more information.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_add_.html", "category": "pytorch docs"}
{"text": "Parameters:\n * dim (int) -- dimension along which to index\n * index (Tensor) -- indices of \"source\" to select from,\n should have dtype either torch.int64 or torch.int32\n * source (Tensor) -- the tensor containing values to add\n Keyword Arguments:\n alpha (Number) -- the scalar multiplier for \"source\"\n Example:\n >>> x = torch.ones(5, 3)\n >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n >>> index = torch.tensor([0, 4, 2])\n >>> x.index_add_(0, index, t)\n tensor([[ 2., 3., 4.],\n [ 1., 1., 1.],\n [ 8., 9., 10.],\n [ 1., 1., 1.],\n [ 5., 6., 7.]])\n >>> x.index_add_(0, index, t, alpha=-1)\n tensor([[ 1., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_add_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ceilTensor.ceil() -> Tensor\n See \"torch.ceil()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ceil.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bfloat16Tensor.bfloat16(memory_format=torch.preserve_format) -> Tensor\n \"self.bfloat16()\" is equivalent to \"self.to(torch.bfloat16)\". See\n \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bfloat16.html", "category": "pytorch docs"}
{"text": "torch.Tensor.matmulTensor.matmul(tensor2) -> Tensor\n See \"torch.matmul()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.matmul.html", "category": "pytorch docs"}
{"text": "torch.Tensor.adjointTensor.adjoint() -> Tensor\n Alias for \"adjoint()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.adjoint.html", "category": "pytorch docs"}
{"text": "torch.tensordottorch.tensordot(a, b, dims=2, out=None)\n Returns a contraction of a and b over multiple dimensions.\n \"tensordot\" implements a generalized matrix product.\n Parameters:\n * a (Tensor) -- Left tensor to contract\n * b (Tensor) -- Right tensor to contract\n * dims (int or Tuple[List[int],\n List[int]] or List[List[int]]\n containing two lists or Tensor) -- number of dimensions\n to contract or explicit lists of dimensions for \"a\" and \"b\"\n respectively\n When called with a non-negative integer argument \"dims\" = d, and\n the number of dimensions of \"a\" and \"b\" is m and n, respectively,\n \"tensordot()\" computes\n r_{i_0,...,i_{m-d}, i_d,...,i_n} = \\sum_{k_0,...,k_{d-1}}\n a_{i_0,...,i_{m-d},k_0,...,k_{d-1}} \\times b_{k_0,...,k_{d-1},\n i_d,...,i_n}.\n When called with \"dims\" of the list form, the given dimensions will", "source": "https://pytorch.org/docs/stable/generated/torch.tensordot.html", "category": "pytorch docs"}
{"text": "be contracted in place of the last d of \"a\" and the first d of b.\n The sizes in these dimensions must match, but \"tensordot()\" will\n deal with broadcasted dimensions.\n Examples:\n >>> a = torch.arange(60.).reshape(3, 4, 5)\n >>> b = torch.arange(24.).reshape(4, 3, 2)\n >>> torch.tensordot(a, b, dims=([1, 0], [0, 1]))\n tensor([[4400., 4730.],\n [4532., 4874.],\n [4664., 5018.],\n [4796., 5162.],\n [4928., 5306.]])\n >>> a = torch.randn(3, 4, 5, device='cuda')\n >>> b = torch.randn(4, 5, 6, device='cuda')\n >>> c = torch.tensordot(a, b, dims=2).cpu()\n tensor([[ 8.3504, -2.5436, 6.2922, 2.7556, -1.0732, 3.2741],\n [ 3.3161, 0.0704, 5.0187, -0.4079, -4.3126, 4.8744],\n [ 0.8223, 3.9445, 3.2168, -0.2400, 3.4117, 1.7780]])\n >>> a = torch.randn(3, 5, 4, 6)\n >>> b = torch.randn(6, 4, 5, 3)\n >>> torch.tensordot(a, b, dims=([2, 1, 3], [1, 2, 0]))", "source": "https://pytorch.org/docs/stable/generated/torch.tensordot.html", "category": "pytorch docs"}
{"text": "tensor([[ 7.7193, -2.4867, -10.3204],\n [ 1.5513, -14.4737, -6.5113],\n [ -0.2850, 4.2573, -3.5997]])", "source": "https://pytorch.org/docs/stable/generated/torch.tensordot.html", "category": "pytorch docs"}
{"text": "torch.mvlgammatorch.mvlgamma(input, p, *, out=None) -> Tensor\n Alias for \"torch.special.multigammaln()\".", "source": "https://pytorch.org/docs/stable/generated/torch.mvlgamma.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.nuttalltorch.signal.windows.nuttall(M, , sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes the minimum 4-term Blackman-Harris window according to\n Nuttall.\n w_n = 1 - 0.36358 \\cos{(z_n)} + 0.48917 \\cos{(2z_n)} - 0.13659\n \\cos{(3z_n)} + 0.01064 \\cos{(4z_n)}\n where \"z_n = 2 \u00cf\u0080 n/ M\".\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * dtype* (\"torch.dtype\", optional) -- the desired data type", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.nuttall.html", "category": "pytorch docs"}
{"text": "of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n References:\n - A. Nuttall, \u00e2\u0080\u009cSome windows with very good sidelobe behavior,\u00e2\u0080\u009d\n IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, no. 1, pp. 84-91,\n Feb 1981. https://doi.org/10.1109/TASSP.1981.1163506", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.nuttall.html", "category": "pytorch docs"}
{"text": "\nHeinzel G. et al., \u00e2\u0080\u009cSpectrum and spectral density estimation by the Discrete Fourier transform (DFT),\n including a comprehensive list of window functions and some new flat-top windows\u00e2\u0080\u009d,\n February 15, 2002 https://holometer.fnal.gov/GH_FFT.pdf\n Examples:\n >>> # Generates a symmetric Nutall window.\n >>> torch.signal.windows.general_hamming(5, sym=True)\n tensor([3.6280e-04, 2.2698e-01, 1.0000e+00, 2.2698e-01, 3.6280e-04])\n >>> # Generates a periodic Nuttall window.\n >>> torch.signal.windows.general_hamming(5, sym=False)\n tensor([3.6280e-04, 1.1052e-01, 7.9826e-01, 7.9826e-01, 1.1052e-01])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.nuttall.html", "category": "pytorch docs"}
{"text": "Upsampleclass torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)\n Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D\n (volumetric) data.\n The input data is assumed to be of the form minibatch x channels x\n [optional depth] x [optional height] x width. Hence, for spatial\n inputs, we expect a 4D Tensor and for volumetric inputs, we expect\n a 5D Tensor.\n The algorithms available for upsampling are nearest neighbor and\n linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input\n Tensor, respectively.\n One can either give a \"scale_factor\" or the target output \"size\" to\n calculate the output size. (You cannot give both, as it is\n ambiguous)\n Parameters:\n * size (int or Tuple[int] or Tuple[int,\n int] or Tuple[int, int, int],\n optional) -- output spatial sizes\n * scale_factor (float or Tuple[float*] or", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"}
{"text": "Tuple[float, float] or Tuple[float,\n float, float], optional) -- multiplier for\n spatial size. Has to match input size if it is a tuple.\n * mode (str, optional) -- the upsampling algorithm:\n one of \"'nearest'\", \"'linear'\", \"'bilinear'\", \"'bicubic'\" and\n \"'trilinear'\". Default: \"'nearest'\"\n * align_corners (bool, optional) -- if \"True\", the\n corner pixels of the input and output tensors are aligned, and\n thus preserving the values at those pixels. This only has\n effect when \"mode\" is \"'linear'\", \"'bilinear'\", \"'bicubic'\",\n or \"'trilinear'\". Default: \"False\"\n * recompute_scale_factor (bool, optional) -- recompute\n the scale_factor for use in the interpolation calculation. If\n recompute_scale_factor is \"True\", then scale_factor must\n be passed in and scale_factor* is used to compute the output", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"}
{"text": "size. The computed output size will be used to infer new\n scales for the interpolation. Note that when scale_factor is\n floating-point, it may differ from the recomputed\n scale_factor due to rounding and precision issues. If\n recompute_scale_factor is \"False\", then size or\n scale_factor will be used directly for interpolation.\n Shape:\n * Input: (N, C, W_{in}), (N, C, H_{in}, W_{in}) or (N, C,\n D_{in}, H_{in}, W_{in})\n * Output: (N, C, W_{out}), (N, C, H_{out}, W_{out}) or (N, C,\n D_{out}, H_{out}, W_{out}), where\n D_{out} = \\left\\lfloor D_{in} \\times \\text{scale_factor}\n \\right\\rfloor\n H_{out} = \\left\\lfloor H_{in} \\times \\text{scale_factor}\n \\right\\rfloor\n W_{out} = \\left\\lfloor W_{in} \\times \\text{scale_factor}\n \\right\\rfloor\n Warning:\n With \"align_corners = True\", the linearly interpolating modes\n (linear, bilinear, bicubic, and trilinear) don't", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"}
{"text": "proportionally align the output and input pixels, and thus the\n output values can depend on the input size. This was the default\n behavior for these modes up to version 0.3.1. Since then, the\n default behavior is \"align_corners = False\". See below for\n concrete examples on how this affects the outputs.\n Note:\n If you want downsampling/general resizing, you should use\n \"interpolate()\".\n Examples:\n >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)\n >>> input\n tensor([[[[1., 2.],\n [3., 4.]]]])\n >>> m = nn.Upsample(scale_factor=2, mode='nearest')\n >>> m(input)\n tensor([[[[1., 1., 2., 2.],\n [1., 1., 2., 2.],\n [3., 3., 4., 4.],\n [3., 3., 4., 4.]]]])\n >>> m = nn.Upsample(scale_factor=2, mode='bilinear') # align_corners=False\n >>> m(input)\n tensor([[[[1.0000, 1.2500, 1.7500, 2.0000],\n [1.5000, 1.7500, 2.2500, 2.5000],", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"}
{"text": "[1.5000, 1.7500, 2.2500, 2.5000],\n [2.5000, 2.7500, 3.2500, 3.5000],\n [3.0000, 3.2500, 3.7500, 4.0000]]]])\n >>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)\n >>> m(input)\n tensor([[[[1.0000, 1.3333, 1.6667, 2.0000],\n [1.6667, 2.0000, 2.3333, 2.6667],\n [2.3333, 2.6667, 3.0000, 3.3333],\n [3.0000, 3.3333, 3.6667, 4.0000]]]])\n >>> # Try scaling the same data in a larger tensor\n >>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)\n >>> input_3x3[:, :, :2, :2].copy_(input)\n tensor([[[[1., 2.],\n [3., 4.]]]])\n >>> input_3x3\n tensor([[[[1., 2., 0.],\n [3., 4., 0.],\n [0., 0., 0.]]]])\n >>> m = nn.Upsample(scale_factor=2, mode='bilinear') # align_corners=False\n >>> # Notice that values in top left corner are the same with the small input (except at boundary)\n >>> m(input_3x3)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"}
{"text": "\n\n\nm(input_3x3)\n tensor([[[[1.0000, 1.2500, 1.7500, 1.5000, 0.5000, 0.0000],\n [1.5000, 1.7500, 2.2500, 1.8750, 0.6250, 0.0000],\n [2.5000, 2.7500, 3.2500, 2.6250, 0.8750, 0.0000],\n [2.2500, 2.4375, 2.8125, 2.2500, 0.7500, 0.0000],\n [0.7500, 0.8125, 0.9375, 0.7500, 0.2500, 0.0000],\n [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])\n >>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)\n >>> # Notice that values in top left corner are now changed\n >>> m(input_3x3)\n tensor([[[[1.0000, 1.4000, 1.8000, 1.6000, 0.8000, 0.0000],\n [1.8000, 2.2000, 2.6000, 2.2400, 1.1200, 0.0000],\n [2.6000, 3.0000, 3.4000, 2.8800, 1.4400, 0.0000],\n [2.4000, 2.7200, 3.0400, 2.5600, 1.2800, 0.0000],\n [1.2000, 1.3600, 1.5200, 1.2800, 0.6400, 0.0000],\n [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"}
{"text": "Conv3dclass torch.ao.nn.quantized.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n Applies a 3D convolution over a quantized input signal composed of\n several quantized input planes.\n For details on input arguments, parameters, and implementation see\n \"Conv3d\".\n Note:\n Only zeros is supported for the \"padding_mode\" argument.\n Note:\n Only torch.quint8 is supported for the input data type.\n Variables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * scale (Tensor) -- scalar for the output scale\n * zero_point (Tensor) -- scalar for the output zero point\n See \"Conv3d\" for other attributes.\n Examples:\n >>> # With square kernels and equal stride\n >>> m = nn.quantized.Conv3d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv3d.html", "category": "pytorch docs"}
{"text": "\n\n\nm = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2))\n >>> # non-square kernels and unequal stride and with padding and dilation\n >>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2), dilation=(1, 2, 2))\n >>> input = torch.randn(20, 16, 56, 56, 56)\n >>> # quantize input to quint8\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n classmethod from_float(mod)\n Creates a quantized module from a float module or qparams_dict.\n Parameters:\n mod (Module) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv3d.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.rnn.pad_sequencetorch.nn.utils.rnn.pad_sequence(sequences, batch_first=False, padding_value=0.0)\n Pad a list of variable length Tensors with \"padding_value\"\n \"pad_sequence\" stacks a list of Tensors along a new dimension, and\n pads them to equal length. For example, if the input is list of\n sequences with size \"L x \" and if batch_first is False, and \"T x B\n x \" otherwise.\n B is batch size. It is equal to the number of elements in\n \"sequences\". T is length of the longest sequence. L is length\n of the sequence. *** is any number of trailing dimensions,\n including none.\n -[ Example ]-\n\n\n\nfrom torch.nn.utils.rnn import pad_sequence\na = torch.ones(25, 300)\nb = torch.ones(22, 300)\nc = torch.ones(15, 300)\npad_sequence([a, b, c]).size()\n torch.Size([25, 3, 300])\n Note:\n This function returns a Tensor of size \"T x B x \" or \"B x T x \"\n where T is the length of the longest sequence. This function\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_sequence.html", "category": "pytorch docs"}
{"text": "assumes trailing dimensions and type of all the Tensors in\n sequences are same.\n Parameters:\n * sequences (list[Tensor]) -- list of variable\n length sequences.\n * batch_first (bool, optional) -- output will be in \"B\n x T x \" if True, or in \"T x B x \" otherwise. Default: False.\n * padding_value (float, optional) -- value for padded\n elements. Default: 0.\n Returns:\n Tensor of size \"T x B x \" if \"batch_first\" is \"False\". Tensor\n of size \"B x T x \" otherwise\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_sequence.html", "category": "pytorch docs"}
{"text": "torch.initial_seedtorch.initial_seed()\n Returns the initial seed for generating random numbers as a Python\n long.\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.initial_seed.html", "category": "pytorch docs"}
{"text": "load_observer_state_dictclass torch.quantization.observer.load_observer_state_dict(mod, obs_dict)\n Given input model and a state_dict containing model observer stats,\n load the stats back into the model. The observer state_dict can be\n saved using torch.ao.quantization.get_observer_state_dict", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.load_observer_state_dict.html", "category": "pytorch docs"}
{"text": "torch.scatter_addtorch.scatter_add(input, dim, index, src) -> Tensor\n Out-of-place version of \"torch.Tensor.scatter_add_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.scatter_add.html", "category": "pytorch docs"}
{"text": "torch.trapezoidtorch.trapezoid(y, x=None, *, dx=None, dim=- 1) -> Tensor\n Computes the trapezoidal rule along \"dim\". By default the spacing\n between elements is assumed to be 1, but \"dx\" can be used to\n specify a different constant spacing, and \"x\" can be used to\n specify arbitrary spacing along \"dim\".\n Assuming \"y\" is a one-dimensional tensor with elements {y_0, y_1,\n ..., y_n}, the default computation is\n \\begin{aligned} \\sum_{i = 1}^{n-1} \\frac{1}{2} (y_i +\n y_{i-1}) \\end{aligned}\n When \"dx\" is specified the computation becomes\n \\begin{aligned} \\sum_{i = 1}^{n-1} \\frac{\\Delta x}{2} (y_i +\n y_{i-1}) \\end{aligned}\n effectively multiplying the result by \"dx\". When \"x\" is specified,\n assuming \"x\" is also a one-dimensional tensor with elements {x_0,\n x_1, ..., x_n}, the computation becomes\n \\begin{aligned} \\sum_{i = 1}^{n-1} \\frac{(x_i - x_{i-1})}{2}\n (y_i + y_{i-1}) \\end{aligned}", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"}
{"text": "(y_i + y_{i-1}) \\end{aligned}\n    When \"x\" and \"y\" have the same size, the computation is as described above and no broadcasting is needed. The broadcasting behavior of this function is as follows when their sizes are different. For both \"x\" and \"y\", the function computes the difference between consecutive elements along dimension \"dim\". This effectively creates two tensors, x_diff and y_diff, that have the same shape as the original tensors except their lengths along the dimension \"dim\" are reduced by 1. After that, those two tensors are broadcast together to compute the final output as part of the trapezoidal rule. See the examples below for details.\n    Note:\n        The trapezoidal rule is a technique for approximating the definite integral of a function by averaging its left and right Riemann sums. The approximation becomes more accurate as the resolution of the partition increases.\n    Parameters:\n        * y (Tensor) -- Values to use when computing the", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"}
{"text": "trapezoidal rule.\n        * x (Tensor) -- If specified, defines spacing between values as specified above.\n    Keyword Arguments:\n        * dx (float) -- constant spacing between values. If neither \"x\" nor \"dx\" is specified then this defaults to 1. Effectively multiplies the result by its value.\n        * dim (int) -- The dimension along which to compute the trapezoidal rule. The last (inner-most) dimension by default.\n    Examples:\n        >>> # Computes the trapezoidal rule in 1D, spacing is implicitly 1\n        >>> y = torch.tensor([1, 5, 10])\n        >>> torch.trapezoid(y)\n        tensor(10.5)\n        >>> # Computes the same trapezoidal rule directly to verify\n        >>> (1 + 10 + 10) / 2\n        10.5\n        >>> # Computes the trapezoidal rule in 1D with constant spacing of 2\n        >>> # NOTE: the result is the same as before, but multiplied by 2\n        >>> torch.trapezoid(y, dx=2)\n        21.0\n        >>> # Computes the trapezoidal rule in 1D with arbitrary spacing", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"}
{"text": "\n>>> x = torch.tensor([1, 3, 6])\n>>> torch.trapezoid(y, x)\n28.5\n>>> # Computes the same trapezoidal rule directly to verify\n>>> ((3 - 1) * (1 + 5) + (6 - 3) * (5 + 10)) / 2\n28.5\n>>> # Computes the trapezoidal rule for each row of a 3x3 matrix\n>>> y = torch.arange(9).reshape(3, 3)\n>>> y\ntensor([[0, 1, 2],\n        [3, 4, 5],\n        [6, 7, 8]])\n>>> torch.trapezoid(y)\ntensor([ 2.,  8., 14.])\n>>> # Computes the trapezoidal rule for each column of the matrix\n>>> torch.trapezoid(y, dim=0)\ntensor([ 6.,  8., 10.])\n>>> # Computes the trapezoidal rule for each row of a 3x3 ones matrix\n>>> # with the same arbitrary spacing\n>>> y = torch.ones(3, 3)\n>>> x = torch.tensor([1, 3, 6])\n>>> torch.trapezoid(y, x)\ntensor([5., 5., 5.])\n>>> # Computes the trapezoidal rule for each row of a 3x3 ones matrix\n>>> # with different arbitrary spacing per row\n>>> y = torch.ones(3, 3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"}
{"text": "\n>>> y = torch.ones(3, 3)\n>>> x = torch.tensor([[1, 2, 3], [1, 3, 5], [1, 4, 7]])\n>>> torch.trapezoid(y, x)\ntensor([2., 4., 6.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"}
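The trapezoid formulas above can be checked with a plain-Python 1-D sketch (an illustration of the documented math, not torch.trapezoid itself), reproducing the worked numbers from the examples:

```python
def trapezoid(y, x=None, dx=1.0):
    # 1-D trapezoidal rule as described in the torch.trapezoid entry:
    # sum over i of (spacing_i / 2) * (y_i + y_{i-1}).
    if x is not None:
        return sum((x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2 for i in range(1, len(y)))
    return sum(dx * (y[i] + y[i - 1]) / 2 for i in range(1, len(y)))

trapezoid([1, 5, 10])               # -> 10.5, matching torch.trapezoid(y)
trapezoid([1, 5, 10], dx=2)         # -> 21.0, the same result scaled by dx
trapezoid([1, 5, 10], x=[1, 3, 6])  # -> 28.5, arbitrary spacing
```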
{"text": "RAdamclass torch.optim.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, *, foreach=None, differentiable=False)\n    Implements RAdam algorithm.\n    \\begin{aligned}\n        &\\rule{110mm}{0.4pt} \\\\\n        &\\textbf{input} : \\gamma \\text{ (lr)}, \\: \\beta_1, \\beta_2 \\text{ (betas)}, \\: \\theta_0 \\text{ (params)}, \\: f(\\theta) \\text{ (objective)}, \\: \\lambda \\text{ (weight decay)}, \\\\\n        &\\hspace{13mm} \\epsilon \\text{ (epsilon)} \\\\\n        &\\textbf{initialize} : m_0 \\leftarrow 0 \\text{ (first moment)}, \\: v_0 \\leftarrow 0 \\text{ (second moment)}, \\\\\n        &\\hspace{18mm} \\rho_{\\infty} \\leftarrow 2/(1-\\beta_2) - 1 \\\\[-1ex]\n        &\\rule{110mm}{0.4pt} \\\\\n        &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\: \\textbf{do} \\\\\n        &\\hspace{6mm} g_t \\leftarrow \\nabla_{\\theta} f_t(\\theta_{t-1}) \\\\\n        &\\hspace{6mm} \\textbf{if} \\: \\lambda \\neq 0 \\\\", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"}
{"text": "        &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda \\theta_{t-1} \\\\\n        &\\hspace{6mm} m_t \\leftarrow \\beta_1 m_{t-1} + (1-\\beta_1) g_t \\\\\n        &\\hspace{6mm} v_t \\leftarrow \\beta_2 v_{t-1} + (1-\\beta_2) g_t^2 \\\\\n        &\\hspace{6mm} \\widehat{m_t} \\leftarrow m_t / \\big(1-\\beta_1^t\\big) \\\\\n        &\\hspace{6mm} \\rho_t \\leftarrow \\rho_{\\infty} - 2 t \\beta_2^t / \\big(1-\\beta_2^t\\big) \\\\\n        &\\hspace{6mm} \\textbf{if} \\: \\rho_t > 5 \\\\\n        &\\hspace{12mm} l_t \\leftarrow \\sqrt{1-\\beta_2^t} \\, / \\, \\big(\\sqrt{v_t}+\\epsilon\\big) \\\\\n        &\\hspace{12mm} r_t \\leftarrow \\sqrt{\\frac{(\\rho_t-4)(\\rho_t-2)\\rho_{\\infty}}{(\\rho_{\\infty}-4)(\\rho_{\\infty}-2)\\rho_t}} \\\\\n        &\\hspace{12mm} \\theta_t \\leftarrow \\theta_{t-1} - \\gamma \\, \\widehat{m_t} \\, r_t \\, l_t \\\\\n        &\\hspace{6mm} \\textbf{else} \\\\", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"}
{"text": "        &\\hspace{12mm} \\theta_t \\leftarrow \\theta_{t-1} - \\gamma \\, \\widehat{m_t} \\\\\n        &\\rule{110mm}{0.4pt} \\\\\n        &\\textbf{return} \\: \\theta_t \\\\\n        &\\rule{110mm}{0.4pt}\n    \\end{aligned}\n    For further details regarding the algorithm we refer to On the variance of the adaptive learning rate and beyond.\n    This implementation uses the same weight_decay implementation as Adam (where the weight_decay is applied to the gradient) and not the one from AdamW (where weight_decay is applied to the update). This differs from the authors' implementation.\n    Parameters:\n        * params (iterable) -- iterable of parameters to optimize or dicts defining parameter groups\n        * lr (float, optional) -- learning rate (default: 1e-3)\n        * betas (Tuple[float, float], optional) -- coefficients used for computing running averages of gradient", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"}
{"text": "and its square (default: (0.9, 0.999))\n * eps (float, optional) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"}
{"text": "add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"}
{"text": "registered.\n    Returns:\n        a handle that can be used to remove the added hook by calling \"handle.remove()\"\n    Return type:\n        \"torch.utils.hooks.RemovableHandle\"\n    register_step_pre_hook(hook)\n        Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature:\n            hook(optimizer, args, kwargs) -> None or modified args and kwargs\n        The \"optimizer\" argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.\n    Parameters:\n        hook (Callable) -- The user defined hook to be registered.\n    Returns:\n        a handle that can be used to remove the added hook by calling \"handle.remove()\"\n    Return type:\n        \"torch.utils.hooks.RemovableHandle\"\n    state_dict()\n        Returns the state of the optimizer as a \"dict\".", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"}
{"text": "It contains two entries:\n    * state - a dict holding current optimization state. Its content differs between optimizer classes.\n    * param_groups - a list containing all parameter groups, where each parameter group is a dict\n    zero_grad(set_to_none=False)\n        Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n        Parameters:\n            set_to_none (bool) -- instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently. 2. If the user requests \"zero_grad(set_to_none=True)\" followed by a backward pass, \".grad\"s are guaranteed to be None for params that did not", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"}
{"text": "receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"}
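The RAdam update rule in the algorithm block above can be sketched for a single scalar parameter in plain Python (an illustration of the documented math, not torch.optim.RAdam; weight_decay is omitted for brevity):

```python
import math

def radam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # One scalar RAdam step following the pseudocode above.
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    m = beta1 * m + (1 - beta1) * g          # first moment
    v = beta2 * v + (1 - beta2) * g * g      # second moment
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment
    rho_t = rho_inf - 2 * t * beta2 ** t / (1 - beta2 ** t)
    if rho_t > 5:
        # Variance is tractable: apply the rectification term r_t.
        l_t = math.sqrt(1 - beta2 ** t) / (math.sqrt(v) + eps)
        r_t = math.sqrt((rho_t - 4) * (rho_t - 2) * rho_inf /
                        ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        theta = theta - lr * m_hat * r_t * l_t
    else:
        # Early steps: fall back to an un-adapted momentum update.
        theta = theta - lr * m_hat
    return theta, m, v
```

At t=1 with betas=(0.9, 0.999), rho_t works out to exactly 1, so the first step takes the un-rectified branch: theta moves by -lr * g regardless of the gradient magnitude history.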
{"text": "PixelUnshuffleclass torch.nn.PixelUnshuffle(downscale_factor)\n    Reverses the \"PixelShuffle\" operation by rearranging elements in a tensor of shape (*, C, H \\times r, W \\times r) to a tensor of shape (*, C \\times r^2, H, W), where r is a downscale factor.\n    See the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016) for more details.\n    Parameters:\n        downscale_factor (int) -- factor to decrease spatial resolution by\n    Shape:\n        * Input: (*, C_{in}, H_{in}, W_{in}), where * is zero or more batch dimensions\n        * Output: (*, C_{out}, H_{out}, W_{out}), where\n          C_{out} = C_{in} \\times \\text{downscale\\_factor}^2\n          H_{out} = H_{in} \\div \\text{downscale\\_factor}\n          W_{out} = W_{in} \\div \\text{downscale\\_factor}\n    Examples:\n        >>> pixel_unshuffle = nn.PixelUnshuffle(3)\n        >>> input = torch.randn(1, 1, 12, 12)\n        >>> output = pixel_unshuffle(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PixelUnshuffle.html", "category": "pytorch docs"}
{"text": "\n>>> output = pixel_unshuffle(input)\n>>> print(output.size())\ntorch.Size([1, 9, 4, 4])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PixelUnshuffle.html", "category": "pytorch docs"}
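The PixelUnshuffle shape rules above can be checked with a small helper (a plain-Python sketch of the documented shape arithmetic, not the torch op; the name pixel_unshuffle_shape is illustrative):

```python
def pixel_unshuffle_shape(c, h, w, r):
    # PixelUnshuffle maps (C, H*r, W*r) -> (C*r^2, H, W), so given input
    # spatial dims (h, w) the output is (c*r^2, h//r, w//r).
    assert h % r == 0 and w % r == 0, "H and W must be divisible by downscale_factor"
    return (c * r * r, h // r, w // r)

pixel_unshuffle_shape(1, 12, 12, 3)  # -> (9, 4, 4), matching the example above
```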
{"text": "torch._foreach_exp_torch._foreach_exp_(self: List[Tensor]) -> None\n    Apply \"torch.exp()\" to each Tensor of the input list, in place.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_exp_.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.log_softmaxtorch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None)\n Applies a softmax followed by a logarithm.\n While mathematically equivalent to log(softmax(x)), doing these two\n operations separately is slower and numerically unstable. This\n function uses an alternative formulation to compute the output and\n gradient correctly.\n See \"LogSoftmax\" for more details.\n Parameters:\n * input (Tensor) -- input\n * dim (int) -- A dimension along which log_softmax will be\n computed.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.log_softmax.html", "category": "pytorch docs"}
{"text": "AvgPool3dclass torch.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)\n Applies a 3D average pooling over an input signal composed of\n several input planes.\n In the simplest case, the output value of the layer with input size\n (N, C, D, H, W), output (N, C, D_{out}, H_{out}, W_{out}) and\n \"kernel_size\" (kD, kH, kW) can be precisely described as:\n \\begin{aligned} \\text{out}(N_i, C_j, d, h, w) ={} &\n \\sum_{k=0}^{kD-1} \\sum_{m=0}^{kH-1} \\sum_{n=0}^{kW-1} \\\n & \\frac{\\text{input}(N_i, C_j, \\text{stride}[0] \\times d + k,\n \\text{stride}[1] \\times h + m, \\text{stride}[2] \\times w + n)}\n {kD \\times kH \\times kW} \\end{aligned}\n If \"padding\" is non-zero, then the input is implicitly zero-padded\n on all three sides for \"padding\" number of points.\n Note:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html", "category": "pytorch docs"}
{"text": "windows that would start in the right padded region are ignored.\n The parameters \"kernel_size\", \"stride\" can either be:\n * a single \"int\" -- in which case the same value is used for the\n depth, height and width dimension\n * a \"tuple\" of three ints -- in which case, the first int is\n used for the depth dimension, the second int for the height\n dimension and the third int for the width dimension\n Parameters:\n * kernel_size (Union[int, Tuple[int, int,\n int]]) -- the size of the window\n * stride (Union[int, Tuple[int, int,\n int]]) -- the stride of the window. Default value is\n \"kernel_size\"\n * padding (Union[int, Tuple[int, int,\n int]]) -- implicit zero padding to be added on all\n three sides\n * ceil_mode (bool) -- when True, will use ceil instead\n of floor to compute the output shape", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html", "category": "pytorch docs"}
{"text": "of floor to compute the output shape\n * count_include_pad (bool) -- when True, will include the\n zero-padding in the averaging calculation\n * divisor_override (Optional[int]) -- if specified,\n it will be used as divisor, otherwise \"kernel_size\" will be\n used\n Shape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n D_{out} = \\left\\lfloor\\frac{D_{in} + 2 \\times\n \\text{padding}[0] -\n \\text{kernel_size}[0]}{\\text{stride}[0]} + 1\\right\\rfloor\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times\n \\text{padding}[1] -\n \\text{kernel_size}[1]}{\\text{stride}[1]} + 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[2] -\n \\text{kernel_size}[2]}{\\text{stride}[2]} + 1\\right\\rfloor\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> # pool of square window of size=3, stride=2\n >>> m = nn.AvgPool3d(3, stride=2)\n >>> # pool of non-square window\n >>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))\n >>> input = torch.randn(20, 16, 50, 44, 31)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html", "category": "pytorch docs"}
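The AvgPool3d output-shape formulas above can be verified with a small helper over the spatial dimensions only (a plain-Python sketch of the documented arithmetic, not the torch op; batch and channel dims pass through unchanged):

```python
import math

def avgpool3d_out_shape(din, hin, win, kernel, stride=None, padding=(0, 0, 0),
                        ceil_mode=False):
    # D_out = floor((D_in + 2*pad - k) / stride) + 1, per dimension,
    # with ceil instead of floor when ceil_mode=True.
    stride = stride or kernel  # default stride is kernel_size
    rnd = math.ceil if ceil_mode else math.floor
    return tuple(rnd((i + 2 * p - k) / s) + 1
                 for i, k, s, p in zip((din, hin, win), kernel, stride, padding))

# Matches the docs example: input (20, 16, 50, 44, 31) through
# nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2)) gives spatial dims (24, 43, 15).
avgpool3d_out_shape(50, 44, 31, (3, 2, 2), stride=(2, 1, 2))  # -> (24, 43, 15)
```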
{"text": "torch.Tensor.masked_fillTensor.masked_fill(mask, value) -> Tensor\n Out-of-place version of \"torch.Tensor.masked_fill_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sparse_resize_and_clear_Tensor.sparse_resize_and_clear_(size, sparse_dim, dense_dim) -> Tensor\n Removes all specified elements from a sparse tensor \"self\" and\n resizes \"self\" to the desired size and the number of sparse and\n dense dimensions.\n Parameters:\n * size (torch.Size) -- the desired size.\n * sparse_dim (int) -- the number of sparse dimensions\n * dense_dim (int) -- the number of dense dimensions", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_and_clear_.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.stateless.functional_calltorch.nn.utils.stateless.functional_call(module, parameters_and_buffers, args, kwargs=None, *, tie_weights=True)\n Performs a functional call on the module by replacing the module\n parameters and buffers with the provided ones.\n Warning:\n This API is deprecated as of PyTorch 2.0 and will be removed in a\n future version of PyTorch. Please use\n \"torch.func.functional_call()\" instead, which is a drop-in\n replacement for this API.\n Note:\n If the module has active parametrizations, passing a value in the\n \"parameters_and_buffers\" argument with the name set to the\n regular parameter name will completely disable the\n parametrization. If you want to apply the parametrization\n function to the value passed please set the key as\n \"{submodule_name}.parametrizations.{parameter_name}.original\".\n Note:\n If the module performs in-place operations on parameters/buffers,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html", "category": "pytorch docs"}
{"text": "these will be reflected in the parameters_and_buffers input. Example:\n        >>> a = {'foo': torch.zeros(())}\n        >>> mod = Foo()  # does self.foo = self.foo + 1\n        >>> print(mod.foo)  # tensor(0.)\n        >>> functional_call(mod, a, torch.ones(()))\n        >>> print(mod.foo)  # tensor(0.)\n        >>> print(a['foo'])  # tensor(1.)\n    Note:\n        If the module has tied weights, whether or not functional_call respects the tying is determined by the tie_weights flag. Example:\n        >>> a = {'foo': torch.zeros(())}\n        >>> mod = Foo()  # has both self.foo and self.foo_tied which are tied. Returns x + self.foo + self.foo_tied\n        >>> print(mod.foo)  # tensor(1.)\n        >>> mod(torch.zeros(()))  # tensor(2.)\n        >>> functional_call(mod, a, torch.zeros(()))  # tensor(0.) since it will change self.foo_tied too\n        >>> functional_call(mod, a, torch.zeros(()), tie_weights=False)  # tensor(1.) -- self.foo_tied is not updated", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html", "category": "pytorch docs"}
{"text": "\n>>> new_a = {'foo': torch.zeros(()), 'foo_tied': torch.zeros(())}\n>>> functional_call(mod, new_a, torch.zeros(()))  # tensor(0.)\n    Parameters:\n        * module (torch.nn.Module) -- the module to call\n        * parameters_and_buffers (dict of str and Tensor) -- the parameters that will be used in the module call.\n        * args (Any or tuple) -- arguments to be passed to the module call. If not a tuple, considered a single argument.\n        * kwargs (dict) -- keyword arguments to be passed to the module call\n        * tie_weights (bool, optional) -- If True, then parameters and buffers tied in the original model will be treated as tied in the reparametrized version. Therefore, if True and different values are passed for the tied parameters and buffers, it will error. If False, it will not respect the originally tied parameters and buffers unless the values\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html", "category": "pytorch docs"}
{"text": "passed for both weights are the same. Default: True.\n Returns:\n the result of calling \"module\".\n Return type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.prune.custom_from_masktorch.nn.utils.prune.custom_from_mask(module, name, mask)\n Prunes tensor corresponding to parameter called \"name\" in \"module\"\n by applying the pre-computed mask in \"mask\". Modifies module in\n place (and also return the modified module) by:\n 1. adding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n 2. replacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * mask (Tensor) -- binary mask to be applied to the\n parameter.\n Returns:\n modified (i.e. pruned) version of the input module\n Return type:\n module (nn.Module)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.custom_from_mask.html", "category": "pytorch docs"}
{"text": "Return type:\n    module (nn.Module)\n-[ Examples ]-\n\n>>> from torch.nn.utils import prune\n>>> m = prune.custom_from_mask(\n...     nn.Linear(5, 3), name='bias', mask=torch.tensor([0, 1, 0])\n... )\n>>> print(m.bias_mask)\ntensor([0., 1., 0.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.custom_from_mask.html", "category": "pytorch docs"}
{"text": "torch.Tensor.xlogyTensor.xlogy(other) -> Tensor\n See \"torch.xlogy()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.xlogy.html", "category": "pytorch docs"}
{"text": "torch.Tensor.softmaxTensor.softmax(dim) -> Tensor\n Alias for \"torch.nn.functional.softmax()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.softmax.html", "category": "pytorch docs"}
{"text": "torch.autograd.functional.jvptorch.autograd.functional.jvp(func, inputs, v=None, create_graph=False, strict=False)\n Function that computes the dot product between the Jacobian of the\n given function at the point given by the inputs and a vector \"v\".\n Parameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a tuple of Tensors or a Tensor.\n * inputs (tuple of Tensors or Tensor) -- inputs to the\n function \"func\".\n * v (tuple of Tensors or Tensor) -- The vector for\n which the Jacobian vector product is computed. Must be the\n same size as the input of \"func\". This argument is optional\n when the input to \"func\" contains a single element and (if it\n is not provided) will be set as a Tensor containing a single\n \"1\".\n * create_graph (bool, optional) -- If \"True\", both the\n output and result will be computed in a differentiable way.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html", "category": "pytorch docs"}
{"text": "Note that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * strict (bool, optional) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n Tensor of zeros as the jvp for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n Returns:\n tuple with:\n func_output (tuple of Tensors or Tensor): output of\n \"func(inputs)\"\n jvp (tuple of Tensors or Tensor): result of the dot product\n with the same shape as the output.\n Return type:\n output (tuple)\n Note:\n \"autograd.functional.jvp\" computes the jvp by using the backward\n of the backward (sometimes called the double backwards trick).\n This is not the most performant way of computing the jvp. Please", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html", "category": "pytorch docs"}
{"text": "consider using \"torch.func.jvp()\" or the low-level forward-mode AD API instead.\n-[ Example ]-\n\n>>> def exp_reducer(x):\n...     return x.exp().sum(dim=1)\n>>> inputs = torch.rand(4, 4)\n>>> v = torch.ones(4, 4)\n>>> jvp(exp_reducer, inputs, v)\n(tensor([6.3090, 4.6742, 7.9114, 8.2106]),\n tensor([6.3090, 4.6742, 7.9114, 8.2106]))\n>>> jvp(exp_reducer, inputs, v, create_graph=True)\n(tensor([6.3090, 4.6742, 7.9114, 8.2106], grad_fn=<...>),\n tensor([6.3090, 4.6742, 7.9114, 8.2106], grad_fn=<...>))\n>>> def adder(x, y):\n...     return 2 * x + 3 * y\n>>> inputs = (torch.rand(2), torch.rand(2))\n>>> v = (torch.ones(2), torch.ones(2))\n>>> jvp(adder, inputs, v)\n(tensor([2.2399, 2.5005]),\n tensor([5., 5.]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html", "category": "pytorch docs"}
{"text": "torch.func.grad_and_valuetorch.func.grad_and_value(func, argnums=0, has_aux=False)\n Returns a function to compute a tuple of the gradient and primal,\n or forward, computation.\n Parameters:\n * func (Callable) -- A Python function that takes one or\n more arguments. Must return a single-element Tensor. If\n specified \"has_aux\" equals \"True\", function can return a tuple\n of single-element Tensor and other auxiliary objects:\n \"(output, aux)\".\n * argnums (int or Tuple[int]) -- Specifies\n arguments to compute gradients with respect to. \"argnums\" can\n be single integer or tuple of integers. Default: 0.\n * has_aux (bool) -- Flag indicating that \"func\" returns a\n tensor and other auxiliary objects: \"(output, aux)\". Default:\n False.\n Returns:\n Function to compute a tuple of gradients with respect to its\n inputs and the forward computation. By default, the output of", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad_and_value.html", "category": "pytorch docs"}
{"text": "the function is a tuple of the gradient tensor(s) with respect\n to the first argument and the primal computation. If specified\n \"has_aux\" equals \"True\", tuple of gradients and tuple of the\n forward computation with output auxiliary objects is returned.\n If \"argnums\" is a tuple of integers, a tuple of a tuple of the\n output gradients with respect to each \"argnums\" value and the\n forward computation is returned.\n Return type:\n Callable\n See \"grad()\" for examples", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad_and_value.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sizeTensor.size(dim=None) -> torch.Size or int\n Returns the size of the \"self\" tensor. If \"dim\" is not specified,\n the returned value is a \"torch.Size\", a subclass of \"tuple\". If\n \"dim\" is specified, returns an int holding the size of that\n dimension.\n Parameters:\n dim (int, optional) -- The dimension for which to\n retrieve the size.\n Example:\n >>> t = torch.empty(3, 4, 5)\n >>> t.size()\n torch.Size([3, 4, 5])\n >>> t.size(dim=1)\n 4", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.size.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_xorTensor.bitwise_xor() -> Tensor\n See \"torch.bitwise_xor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_xor.html", "category": "pytorch docs"}
{"text": "torch.func.hessiantorch.func.hessian(func, argnums=0)\n Computes the Hessian of \"func\" with respect to the arg(s) at index\n \"argnum\" via a forward-over-reverse strategy.\n The forward-over-reverse strategy (composing\n \"jacfwd(jacrev(func))\") is a good default for good performance. It\n is possible to compute Hessians through other compositions of\n \"jacfwd()\" and \"jacrev()\" like \"jacfwd(jacfwd(func))\" or\n \"jacrev(jacrev(func))\".\n Parameters:\n * func (function) -- A Python function that takes one or\n more arguments, one of which must be a Tensor, and returns one\n or more Tensors\n * argnums (int or Tuple[int]) -- Optional,\n integer or tuple of integers, saying which arguments to get\n the Hessian with respect to. Default: 0.\n Returns:\n Returns a function that takes in the same inputs as \"func\" and\n returns the Hessian of \"func\" with respect to the arg(s) at\n \"argnums\".\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.func.hessian.html", "category": "pytorch docs"}
{"text": "\"argnums\".\n    Note:\n        You may see this API error out with \"forward-mode AD not implemented for operator X\". If so, please file a bug report and we will prioritize it. An alternative is to use \"jacrev(jacrev(func))\", which has better operator coverage.\n    A basic usage with a R^N -> R^1 function gives a N x N Hessian:\n\n>>> from torch.func import hessian\n>>> def f(x):\n...     return x.sin().sum()\n>>> x = torch.randn(5)\n>>> hess = hessian(f)(x)  # equivalent to jacfwd(jacrev(f))(x)\n>>> assert torch.allclose(hess, torch.diag(-x.sin()))\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.hessian.html", "category": "pytorch docs"}
{"text": "torch.slogdettorch.slogdet(input)\n Alias for \"torch.linalg.slogdet()\"", "source": "https://pytorch.org/docs/stable/generated/torch.slogdet.html", "category": "pytorch docs"}
{"text": "torch.broadcast_tensorstorch.broadcast_tensors(tensors) -> List of Tensors\n Broadcasts the given tensors according to Broadcasting semantics.\n Parameters:\n tensors -- any number of tensors of the same type\n Warning:\n More than one element of a broadcasted tensor may refer to a\n single memory location. As a result, in-place operations\n (especially ones that are vectorized) may result in incorrect\n behavior. If you need to write to the tensors, please clone them\n first.\n Example:\n >>> x = torch.arange(3).view(1, 3)\n >>> y = torch.arange(2).view(2, 1)\n >>> a, b = torch.broadcast_tensors(x, y)\n >>> a.size()\n torch.Size([2, 3])\n >>> a\n tensor([[0, 1, 2],\n [0, 1, 2]])", "source": "https://pytorch.org/docs/stable/generated/torch.broadcast_tensors.html", "category": "pytorch docs"}
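The broadcasting semantics that torch.broadcast_tensors follows can be sketched as a plain-Python shape calculator (an illustration of the documented rules, not the torch op; the name broadcast_shape is illustrative):

```python
def broadcast_shape(*shapes):
    # Right-align the shapes by padding with leading 1s, then per dimension
    # either all non-1 sizes agree (result is that size) or broadcasting fails.
    ndim = max(len(s) for s in shapes)
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise ValueError(f"shapes are not broadcastable at {dims}")
        out.append(max(dims))
    return tuple(out)

broadcast_shape((1, 3), (2, 1))  # -> (2, 3), matching the example above
```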
{"text": "torch.autograd.profiler.profile.total_averageprofile.total_average()\n Averages all events.\n Returns:\n A FunctionEventAvg object.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.total_average.html", "category": "pytorch docs"}
{"text": "torch.greater_equaltorch.greater_equal(input, other, *, out=None) -> Tensor\n Alias for \"torch.ge()\".", "source": "https://pytorch.org/docs/stable/generated/torch.greater_equal.html", "category": "pytorch docs"}
{"text": "torch.Tensor.qrTensor.qr(some=True)\n See \"torch.qr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.qr.html", "category": "pytorch docs"}
{"text": "torch.Tensor.mvTensor.mv(vec) -> Tensor\n See \"torch.mv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mv.html", "category": "pytorch docs"}
{"text": "ObservationTypeclass torch.ao.quantization.backend_config.ObservationType(value)\n An enum that represents different ways of how an operator/operator\n pattern should be observed\n OUTPUT_SHARE_OBSERVER_WITH_INPUT = 1\n this means the output will use the same observer instance as\n input, based on qconfig.activation example: torch.cat, maxpool\n OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT = 0\n this means input and output are observed with different\n observers, based on qconfig.activation example: conv, linear,\n softmax", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.ObservationType.html", "category": "pytorch docs"}
{"text": "torch.jit.scripttorch.jit.script(obj, optimize=None, _frames_up=0, _rcb=None, example_inputs=None)\n Scripting a function or \"nn.Module\" will inspect the source code,\n compile it as TorchScript code using the TorchScript compiler, and\n return a \"ScriptModule\" or \"ScriptFunction\". TorchScript itself is\n a subset of the Python language, so not all features in Python\n work, but we provide enough functionality to compute on tensors and\n do control-dependent operations. For a complete guide, see the\n TorchScript Language Reference.\n Scripting a dictionary or list copies the data inside it into a\n TorchScript instance than can be subsequently passed by reference\n between Python and TorchScript with zero copy overhead.\n \"torch.jit.script\" can be used as a function for modules,\n functions, dictionaries and lists\n and as a decorator \"@torch.jit.script\" for TorchScript Classes\n and functions.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"}
{"text": "and functions.\n Parameters:\n * obj (Callable, class, or nn.Module) -- The\n \"nn.Module\", function, class type, dictionary, or list to\n compile.\n * example_inputs (Union[List[Tuple],\n Dict[Callable, List[Tuple]], None])\n -- Provide example inputs to annotate the arguments for a\n function or \"nn.Module\".\n Returns:\n If \"obj\" is \"nn.Module\", \"script\" returns a \"ScriptModule\"\n object. The returned \"ScriptModule\" will have the same set of\n sub-modules and parameters as the original \"nn.Module\". If \"obj\"\n is a standalone function, a \"ScriptFunction\" will be returned.\n If \"obj\" is a \"dict\", then \"script\" returns an instance of\n torch._C.ScriptDict. If \"obj\" is a \"list\", then \"script\"\n returns an instance of torch._C.ScriptList.\n Scripting a function\n The \"@torch.jit.script\" decorator will construct a", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"}
{"text": "\"ScriptFunction\" by compiling the body of the function.\n Example (scripting a function):\n import torch\n @torch.jit.script\n def foo(x, y):\n if x.max() > y.max():\n r = x\n else:\n r = y\n return r\n print(type(foo)) # torch.jit.ScriptFunction\n # See the compiled graph as Python code\n print(foo.code)\n # Call the function using the TorchScript interpreter\n foo(torch.ones(2, 2), torch.ones(2, 2))\n **Scripting a function using example_inputs\n Example inputs can be used to annotate a function arguments.\n Example (annotating a function before scripting):\n import torch\n def test_sum(a, b):\n return a + b\n # Annotate the arguments to be int\n scripted_fn = torch.jit.script(test_sum, example_inputs=[(3, 4)])\n print(type(scripted_fn)) # torch.jit.ScriptFunction\n # See the compiled graph as Python code", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"}
{"text": "See the compiled graph as Python code\n print(scripted_fn.code)\n # Call the function using the TorchScript interpreter\n scripted_fn(20, 100)\n\nScripting an nn.Module\n Scripting an \"nn.Module\" by default will compile the \"forward\"\n method and recursively compile any methods, submodules, and\n functions called by \"forward\". If a \"nn.Module\" only uses\n features supported in TorchScript, no changes to the original\n module code should be necessary. \"script\" will construct\n \"ScriptModule\" that has copies of the attributes, parameters,\n and methods of the original module.\n Example (scripting a simple module with a Parameter):\n import torch\n class MyModule(torch.nn.Module):\n def init(self, N, M):\n super(MyModule, self).init()\n # This parameter will be copied to the new ScriptModule\n self.weight = torch.nn.Parameter(torch.rand(N, M))", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"}
{"text": "When this submodule is used, it will be compiled\n self.linear = torch.nn.Linear(N, M)\n def forward(self, input):\n output = self.weight.mv(input)\n # This calls the `forward` method of the `nn.Linear` module, which will\n # cause the `self.linear` submodule to be compiled to a `ScriptModule` here\n output = self.linear(output)\n return output\n scripted_module = torch.jit.script(MyModule(2, 3))\n Example (scripting a module with traced submodules):\n import torch\n import torch.nn as nn\n import torch.nn.functional as F\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n # torch.jit.trace produces a ScriptModule's conv1 and conv2\n self.conv1 = torch.jit.trace(nn.Conv2d(1, 20, 5), torch.rand(1, 1, 16, 16))\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"}
{"text": "self.conv2 = torch.jit.trace(nn.Conv2d(20, 20, 5), torch.rand(1, 20, 16, 16))\n def forward(self, input):\n input = F.relu(self.conv1(input))\n input = F.relu(self.conv2(input))\n return input\n scripted_module = torch.jit.script(MyModule())\n To compile a method other than \"forward\" (and recursively\n compile anything it calls), add the \"@torch.jit.export\"\n decorator to the method. To opt out of compilation use\n \"@torch.jit.ignore\" or \"@torch.jit.unused\".\n Example (an exported and ignored method in a module):\n import torch\n import torch.nn as nn\n class MyModule(nn.Module):\n def init(self):\n super(MyModule, self).init()\n @torch.jit.export\n def some_entry_point(self, input):\n return input + 10\n @torch.jit.ignore\n def python_only_fn(self, input):", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"}
{"text": "def python_only_fn(self, input):\n # This function won't be compiled, so any\n # Python APIs can be used\n import pdb\n pdb.set_trace()\n def forward(self, input):\n if self.training:\n self.python_only_fn(input)\n return input * 99\n scripted_module = torch.jit.script(MyModule())\n print(scripted_module.some_entry_point(torch.randn(2, 2)))\n print(scripted_module(torch.randn(2, 2)))\n Example ( Annotating forward of nn.Module using example_inputs):\n import torch\n import torch.nn as nn\n from typing import NamedTuple\n class MyModule(NamedTuple):\n result: List[int]\n class TestNNModule(torch.nn.Module):\n def forward(self, a) -> MyModule:\n result = MyModule(result=a)\n return result\n pdt_model = TestNNModule()", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"}
{"text": "pdt_model = TestNNModule()\n # Runs the pdt_model in eager model with the inputs provided and annotates the arguments of forward\n scripted_model = torch.jit.script(pdt_model, example_inputs={pdt_model: [([10, 20, ], ), ], })\n # Run the scripted_model with actual inputs\n print(scripted_model([20]))", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"}
{"text": "POE0001:node-missing-onnx-shape-inferenceNode is missing ONNX shape inference. This usually happens when the\nnode is not valid under standard ONNX operator spec.", "source": "https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0001:node-missing-onnx-shape-inference.html", "category": "pytorch docs"}
{"text": "POE0004:operator-supported-in-newer-opset-versionOperator is supported in newer opset version.\nExample:\n torch.onnx.export(model, args, ..., opset_version=9)", "source": "https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0004:operator-supported-in-newer-opset-version.html", "category": "pytorch docs"}
{"text": "POE0003:missing-standard-symbolic-functionMissing symbolic function for standard PyTorch operator, cannot\ntranslate node to ONNX.", "source": "https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0003:missing-standard-symbolic-function.html", "category": "pytorch docs"}
{"text": "POE0002:missing-custom-symbolic-functionMissing symbolic function for custom PyTorch operator, cannot\ntranslate node to ONNX.", "source": "https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0002:missing-custom-symbolic-function.html", "category": "pytorch docs"}
{"text": "ConvReLU3dclass torch.ao.nn.intrinsic.qat.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)\n A ConvReLU3d module is a fused module of Conv3d and ReLU, attached\n with FakeQuantize modules for weight for quantization aware\n training.\n We combined the interface of \"Conv3d\" and \"BatchNorm3d\".\n Variables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvReLU3d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.softmintorch.nn.functional.softmin(input, dim=None, _stacklevel=3, dtype=None)\n Applies a softmin function.\n Note that \\text{Softmin}(x) = \\text{Softmax}(-x). See softmax\n definition for mathematical formula.\n See \"Softmin\" for more details.\n Parameters:\n * input (Tensor) -- input\n * dim (int) -- A dimension along which softmin will be\n computed (so every slice along dim will sum to 1).\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softmin.html", "category": "pytorch docs"}
{"text": "prepareclass torch.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None)\n Prepares a copy of the model for quantization calibration or\n quantization-aware training.\n Quantization configuration should be assigned preemptively to\n individual submodules in .qconfig attribute.\n The model will be attached with observer or fake quant modules, and\n qconfig will be propagated.\n Parameters:\n * model -- input model to be modified in-place\n * inplace -- carry out model transformations in-place, the\n original module is mutated\n * allow_list -- list of quantizable modules\n * observer_non_leaf_module_list -- list of non-leaf modules\n we want to add observer\n * prepare_custom_config_dict -- customization configuration\n dictionary for prepare function\n # Example of prepare_custom_config_dict:\n prepare_custom_config_dict = {", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.prepare.html", "category": "pytorch docs"}
{"text": "prepare_custom_config_dict = {\n # user will manually define the corresponding observed\n # module class which has a from_float class method that converts\n # float custom module to observed custom module\n \"float_to_observed_custom_module_class\": {\n CustomModule: ObservedCustomModule\n }\n }", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.prepare.html", "category": "pytorch docs"}
{"text": "GaussianNLLLossclass torch.nn.GaussianNLLLoss(*, full=False, eps=1e-06, reduction='mean')\n Gaussian negative log likelihood loss.\n The targets are treated as samples from Gaussian distributions with\n expectations and variances predicted by the neural network. For a\n \"target\" tensor modelled as having Gaussian distribution with a\n tensor of expectations \"input\" and a tensor of positive variances\n \"var\" the loss is:\n \\text{loss} =\n \\frac{1}{2}\\left(\\log\\left(\\text{max}\\left(\\text{var}, \\\n \\text{eps}\\right)\\right) + \\frac{\\left(\\text{input} -\n \\text{target}\\right)^2} {\\text{max}\\left(\\text{var}, \\\n \\text{eps}\\right)}\\right) + \\text{const.}\n where \"eps\" is used for stability. By default, the constant term of\n the loss function is omitted unless \"full\" is \"True\". If \"var\" is\n not the same size as \"input\" (due to a homoscedastic assumption),\n it must either have a final dimension of 1 or have one fewer", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html", "category": "pytorch docs"}
{"text": "dimension (with all other sizes being the same) for correct\n broadcasting.\n Parameters:\n * full (bool, optional) -- include the constant term\n in the loss calculation. Default: \"False\".\n * eps (float, optional) -- value used to clamp \"var\"\n (see note below), for stability. Default: 1e-6.\n * reduction (str, optional) -- specifies the reduction\n to apply to the output:\"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the output\n is the average of all batch member losses, \"'sum'\": the output\n is the sum of all batch member losses. Default: \"'mean'\".\n Shape:\n * Input: (N, ) or () where * means any number of additional\n dimensions\n * Target: (N, ) or (), same shape as the input, or same shape\n as the input but with one dimension equal to 1 (to allow for\n broadcasting)\n * Var: (N, ) or (), same shape as the input, or same shape as", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html", "category": "pytorch docs"}
{"text": "the input but with one dimension equal to 1, or same shape as\n the input but with one fewer dimension (to allow for\n broadcasting)\n * Output: scalar if \"reduction\" is \"'mean'\" (default) or\n \"'sum'\". If \"reduction\" is \"'none'\", then (N, *), same shape\n as the input\n Examples::\n >>> loss = nn.GaussianNLLLoss()\n >>> input = torch.randn(5, 2, requires_grad=True)\n >>> target = torch.randn(5, 2)\n >>> var = torch.ones(5, 2, requires_grad=True) # heteroscedastic\n >>> output = loss(input, target, var)\n >>> output.backward()\n >>> loss = nn.GaussianNLLLoss()\n >>> input = torch.randn(5, 2, requires_grad=True)\n >>> target = torch.randn(5, 2)\n >>> var = torch.ones(5, 1, requires_grad=True) # homoscedastic\n >>> output = loss(input, target, var)\n >>> output.backward()\n Note:\n The clamping of \"var\" is ignored with respect to autograd, and so\n the gradients are unaffected by it.\n Reference:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html", "category": "pytorch docs"}
{"text": "Reference:\n Nix, D. A. and Weigend, A. S., \"Estimating the mean and variance\n of the target probability distribution\", Proceedings of 1994\n IEEE International Conference on Neural Networks (ICNN'94),\n Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi:\n 10.1109/ICNN.1994.374138.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html", "category": "pytorch docs"}
{"text": "convert_fxclass torch.quantization.quantize_fx.convert_fx(graph_module, convert_custom_config=None, _remove_qconfig=True, qconfig_mapping=None, backend_config=None)\n Convert a calibrated or trained model to a quantized model\n Parameters:\n * graph_module () -- A prepared and calibrated/trained\n model (GraphModule)\n * *convert_custom_config () -- custom configurations for\n convert function. See \"ConvertCustomConfig\" for more details\n * *_remove_qconfig () -- Option to remove the qconfig\n attributes in the model after convert.\n * *qconfig_mapping () --\n config for specifying how to convert a model for quantization.\n The keys must include the ones in the qconfig_mapping\n passed to prepare_fx or prepare_qat_fx, with the\n same values or None. Additional keys can be specified\n with values set to None*.\n For each entry whose value is set to None, we skip", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.convert_fx.html", "category": "pytorch docs"}
{"text": "quantizing that entry in the model:\n qconfig_mapping = QConfigMapping\n .set_global(qconfig_from_prepare)\n .set_object_type(torch.nn.functional.add, None) # skip quantizing torch.nn.functional.add\n .set_object_type(torch.nn.functional.linear, qconfig_from_prepare)\n .set_module_name(\"foo.bar\", None) # skip quantizing module \"foo.bar\"\n * backend_config (BackendConfig): A configuration for the\n backend which describes how\n operators should be quantized in the backend, this\n includes quantization mode support\n (static/dynamic/weight_only), dtype support (quint8/qint8\n etc.), observer placement for each operators and fused\n operators. See \"BackendConfig\" for more details\n Returns:\n A quantized model (torch.nn.Module)\n Return type:\n Module\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.convert_fx.html", "category": "pytorch docs"}
{"text": "Return type:\n Module\n Example:\n # prepared_model: the model after prepare_fx/prepare_qat_fx and calibration/training\n # convert_fx converts a calibrated/trained model to a quantized model for the\n # target hardware, this includes converting the model first to a reference\n # quantized model, and then lower the reference quantized model to a backend\n # Currently, the supported backends are fbgemm (onednn), qnnpack (xnnpack) and\n # they share the same set of quantized operators, so we are using the same\n # lowering procedure\n #\n # backend_config defines the corresponding reference quantized module for\n # the weighted modules in the model, e.g. nn.Linear\n # TODO: add backend_config after we split the backend_config for fbgemm and qnnpack\n # e.g. backend_config = get_default_backend_config(\"fbgemm\")\n quantized_model = convert_fx(prepared_model)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.convert_fx.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addmv_Tensor.addmv_(mat, vec, *, beta=1, alpha=1) -> Tensor\n In-place version of \"addmv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addmv_.html", "category": "pytorch docs"}
{"text": "torch.gcdtorch.gcd(input, other, , out=None) -> Tensor\n Computes the element-wise greatest common divisor (GCD) of \"input\"\n and \"other\".\n Both \"input\" and \"other\" must have integer types.\n Note:\n This defines gcd(0, 0) = 0.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.tensor([5, 10, 15])\n >>> b = torch.tensor([3, 4, 5])\n >>> torch.gcd(a, b)\n tensor([1, 2, 5])\n >>> c = torch.tensor([3])\n >>> torch.gcd(a, c)\n tensor([1, 1, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.gcd.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arctan2Tensor.arctan2(other) -> Tensor\n See \"torch.arctan2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctan2.html", "category": "pytorch docs"}
{"text": "torch.arctantorch.arctan(input, *, out=None) -> Tensor\n Alias for \"torch.atan()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arctan.html", "category": "pytorch docs"}
{"text": "torch.Tensor.log_normal_Tensor.log_normal_(mean=1, std=2, *, generator=None)\n Fills \"self\" tensor with numbers samples from the log-normal\n distribution parameterized by the given mean \\mu and standard\n deviation \\sigma. Note that \"mean\" and \"std\" are the mean and\n standard deviation of the underlying normal distribution, and not\n of the returned distribution:\n f(x) = \\dfrac{1}{x \\sigma \\sqrt{2\\pi}}\\ e^{-\\frac{(\\ln x -\n \\mu)^2}{2\\sigma^2}}", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log_normal_.html", "category": "pytorch docs"}
{"text": "ConvBnReLU3dclass torch.ao.nn.intrinsic.ConvBnReLU3d(conv, bn, relu)\n This is a sequential container which calls the Conv 3d, Batch Norm\n 3d, and ReLU modules. During quantization this will be replaced\n with the corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU3d.html", "category": "pytorch docs"}
{"text": "torch.func.functional_calltorch.func.functional_call(module, parameter_and_buffer_dicts, args, kwargs=None, *, tie_weights=True)\n Performs a functional call on the module by replacing the module\n parameters and buffers with the provided ones.\n Note:\n If the module has active parametrizations, passing a value in the\n \"parameters_and_buffers\" argument with the name set to the\n regular parameter name will completely disable the\n parametrization. If you want to apply the parametrization\n function to the value passed please set the key as\n \"{submodule_name}.parametrizations.{parameter_name}.original\".\n Note:\n If the module performs in-place operations on parameters/buffers,\n these will be reflected in the \"parameters_and_buffers\" input.\n Example:\n >>> a = {'foo': torch.zeros(())}\n >>> mod = Foo() # does self.foo = self.foo + 1\n >>> print(mod.foo) # tensor(0.)", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"}
{"text": "\n\n\nprint(mod.foo) # tensor(0.)\n >>> functional_call(mod, a, torch.ones(()))\n >>> print(mod.foo) # tensor(0.)\n >>> print(a['foo']) # tensor(1.)\n Note:\n If the module has tied weights, whether or not functional_call\n respects the tying is determined by the tie_weights flag.Example:\n >>> a = {'foo': torch.zeros(())}\n >>> mod = Foo() # has both self.foo and self.foo_tied which are tied. Returns x + self.foo + self.foo_tied\n >>> print(mod.foo) # tensor(1.)\n >>> mod(torch.zeros(())) # tensor(2.)\n >>> functional_call(mod, a, torch.zeros(())) # tensor(0.) since it will change self.foo_tied too\n >>> functional_call(mod, a, torch.zeros(()), tie_weights=False) # tensor(1.)--self.foo_tied is not updated\n >>> new_a = {'foo', torch.zeros(()), 'foo_tied': torch.zeros(())}\n >>> functional_call(mod, new_a, torch.zeros()) # tensor(0.)\n An example of passing mutliple dictionaries\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"}
{"text": "An example of passing mutliple dictionaries\n a = ({'weight': torch.ones(1, 1)}, {'buffer': torch.zeros(1)}) # two separate dictionaries\n mod = nn.Bar(1, 1) # return self.weight @ x + self.buffer\n print(mod.weight) # tensor(...)\n print(mod.buffer) # tensor(...)\n x = torch.randn((1, 1))\n print(x)\n functional_call(mod, a, x) # same as x\n print(mod.weight) # same as before functional_call\n And here is an example of applying the grad transform over the\n parameters of a model.\n import torch\n import torch.nn as nn\n from torch.func import functional_call, grad\n x = torch.randn(4, 3)\n t = torch.randn(4, 3)\n model = nn.Linear(3, 3)\n def compute_loss(params, x, t):\n y = functional_call(model, params, x)\n return nn.functional.mse_loss(y, t)\n grad_weights = grad(compute_loss)(dict(model.named_parameters()), x, t)\n Note:\n If the user does not need grad tracking outside of grad", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"}
{"text": "transforms, they can detach all of the parameters for better\n performance and memory usageExample:\n >>> detached_params = {k: v.detach() for k, v in model.named_parameters()}\n >>> grad_weights = grad(compute_loss)(detached_params, x, t)\n >>> grad_weights.grad_fn # None--it's not tracking gradients outside of grad\n This means that the user cannot call \"grad_weight.backward()\".\n However, if they don't need autograd tracking outside of the\n transforms, this will result in less memory usage and faster\n speeds.\n Parameters:\n * module (torch.nn.Module) -- the module to call\n * parameters_and_buffers (Dict[str,Tensor] or\n tuple of Dict[str, Tensor]) -- the parameters\n that will be used in the module call. If given a tuple of\n dictionaries, they must have distinct keys so that all\n dictionaries can be used together\n * args (Any or tuple) -- arguments to be passed to the", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"}
{"text": "module call. If not a tuple, considered a single argument.\n * kwargs (dict) -- keyword arguments to be passed to the\n module call\n * tie_weights (bool, optional) -- If True, then\n parameters and buffers tied in the original model will be\n treated as tied in the reparamaterized version. Therefore, if\n True and different values are passed for the tied paramaters\n and buffers, it will error. If False, it will not respect the\n originally tied parameters and buffers unless the values\n passed for both weights are the same. Default: True.\n Returns:\n the result of calling \"module\".\n Return type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"}
{"text": "torch.linalg.eigtorch.linalg.eig(A, , out=None)\n Computes the eigenvalue decomposition of a square matrix if it\n exists.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the eigenvalue\n decomposition* of a square matrix A \\in \\mathbb{K}^{n \\times n}\n (if it exists) is defined as\n A = V \\operatorname{diag}(\\Lambda) V^{-1}\\mathrlap{\\qquad V \\in\n \\mathbb{C}^{n \\times n}, \\Lambda \\in \\mathbb{C}^n}\n This decomposition exists if and only if A is diagonalizable. This\n is the case when all its eigenvalues are different.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n Note:\n The eigenvalues and eigenvectors of a real matrix may be complex.\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n Warning:\n This function assumes that \"A\" is diagonalizable (for example,", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"}
{"text": "when all the eigenvalues are different). If it is not\n diagonalizable, the returned eigenvalues will be correct but A\n \\neq V \\operatorname{diag}(\\Lambda)V^{-1}.\n Warning:\n The returned eigenvectors are normalized to have norm 1. Even\n then, the eigenvectors of a matrix are not unique, nor are they\n continuous with respect to \"A\". Due to this lack of uniqueness,\n different hardware and software may compute different\n eigenvectors.This non-uniqueness is caused by the fact that\n multiplying an eigenvector by by e^{i \\phi}, \\phi \\in \\mathbb{R}\n produces another set of valid eigenvectors of the matrix. For\n this reason, the loss function shall not depend on the phase of\n the eigenvectors, as this quantity is not well-defined. This is\n checked when computing the gradients of this function. As such,\n when inputs are on a CUDA device, this function synchronizes that\n device with the CPU when computing the gradients. This is checked", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"}
{"text": "when computing the gradients of this function. As such, when\n inputs are on a CUDA device, the computation of the gradients of\n this function synchronizes that device with the CPU.\n Warning:\n Gradients computed using the eigenvectors tensor will only be\n finite when \"A\" has distinct eigenvalues. Furthermore, if the\n distance between any two eigenvalues is close to zero, the\n gradient will be numerically unstable, as it depends on the\n eigenvalues \\lambda_i through the computation of \\frac{1}{\\min_{i\n \\neq j} \\lambda_i - \\lambda_j}.\n See also:\n \"torch.linalg.eigvals()\" computes only the eigenvalues. Unlike\n \"torch.linalg.eig()\", the gradients of \"eigvals()\" are always\n numerically stable.\n \"torch.linalg.eigh()\" for a (faster) function that computes the\n eigenvalue decomposition for Hermitian and symmetric matrices.\n \"torch.linalg.svd()\" for a function that computes another type of\n spectral decomposition that works on matrices of any shape.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"}
{"text": "\"torch.linalg.qr()\" for another (much faster) decomposition that\n works on matrices of any shape.\n Parameters:\n A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions consisting of diagonalizable\n matrices.\n Keyword Arguments:\n out (tuple, optional) -- output tuple of two tensors.\n Ignored if None. Default: None.\n Returns:\n A named tuple (eigenvalues, eigenvectors) which corresponds to\n \\Lambda and V above.\n eigenvalues and eigenvectors will always be complex-valued,\n even when \"A\" is real. The eigenvectors will be given by the\n columns of eigenvectors*.\n Examples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A\n tensor([[ 0.9828+0.3889j, -0.4617+0.3010j],\n [ 0.1662-0.7435j, -0.6139+0.0562j]], dtype=torch.complex128)\n >>> L, V = torch.linalg.eig(A)\n >>> L\n tensor([ 1.1226+0.5738j, -0.7537-0.1286j], dtype=torch.complex128)\n >>> V", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"}
{"text": "\n\n\nV\n tensor([[ 0.9218+0.0000j, 0.1882-0.2220j],\n [-0.0270-0.3867j, 0.9567+0.0000j]], dtype=torch.complex128)\n >>> torch.dist(V @ torch.diag(L) @ torch.linalg.inv(V), A)\n tensor(7.7119e-16, dtype=torch.float64)\n >>> A = torch.randn(3, 2, 2, dtype=torch.float64)\n >>> L, V = torch.linalg.eig(A)\n >>> torch.dist(V @ torch.diag_embed(L) @ torch.linalg.inv(V), A)\n tensor(3.2841e-16, dtype=torch.float64)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.hinge_embedding_losstorch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') -> Tensor\n See \"HingeEmbeddingLoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hinge_embedding_loss.html", "category": "pytorch docs"}
{"text": "DataParallelclass torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)\n Implements data parallelism at the module level.\n This container parallelizes the application of the given \"module\"\n by splitting the input across the specified devices by chunking in\n the batch dimension (other objects will be copied once per device).\n In the forward pass, the module is replicated on each device, and\n each replica handles a portion of the input. During the backwards\n pass, gradients from each replica are summed into the original\n module.\n The batch size should be larger than the number of GPUs used.\n Warning:\n It is recommended to use \"DistributedDataParallel\", instead of\n this class, to do multi-GPU training, even if there is only a\n single node. See: Use nn.parallel.DistributedDataParallel instead\n of multiprocessing or nn.DataParallel and Distributed Data\n Parallel.\n Arbitrary positional and keyword inputs are allowed to be passed", "source": "https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html", "category": "pytorch docs"}
{"text": "into DataParallel but some types are specially handled. tensors\n will be scattered on dim specified (default 0). tuple, list and\n dict types will be shallow copied. The other types will be shared\n among different threads and can be corrupted if written to in the\n model's forward pass.\n The parallelized \"module\" must have its parameters and buffers on\n \"device_ids[0]\" before running this \"DataParallel\" module.\n Warning:\n In each forward, \"module\" is replicated on each device, so\n any updates to the running module in \"forward\" will be lost. For\n example, if \"module\" has a counter attribute that is incremented\n in each \"forward\", it will always stay at the initial value\n because the update is done on the replicas which are destroyed\n after \"forward\". However, \"DataParallel\" guarantees that the\n replica on \"device[0]\" will have its parameters and buffers\n sharing storage with the base parallelized \"module\". So **in-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html", "category": "pytorch docs"}
{"text": "place updates to the parameters or buffers on \"device[0]\" will\n be recorded. E.g., \"BatchNorm2d\" and \"spectral_norm()\" rely on\n this behavior to update the buffers.\n Warning:\n Forward and backward hooks defined on \"module\" and its submodules\n will be invoked \"len(device_ids)\" times, each with inputs located\n on a particular device. Particularly, the hooks are only\n guaranteed to be executed in correct order with respect to\n operations on corresponding devices. For example, it is not\n guaranteed that hooks set via \"register_forward_pre_hook()\" be\n executed before all \"len(device_ids)\" \"forward()\" calls, but\n that each such hook be executed before the corresponding\n \"forward()\" call of that device.\n Warning:\n When \"module\" returns a scalar (i.e., 0-dimensional tensor) in\n \"forward()\", this wrapper will return a vector of length equal to\n number of devices used in data parallelism, containing the result\n from each device.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html", "category": "pytorch docs"}
{"text": "from each device.\n Note:\n There is a subtlety in using the \"pack sequence -> recurrent\n network -> unpack sequence\" pattern in a \"Module\" wrapped in\n \"DataParallel\". See My recurrent network doesn't work with data\n parallelism section in FAQ for details.\n Parameters:\n * module (Module) -- module to be parallelized\n * device_ids (list of python:int or torch.device) --\n CUDA devices (default: all devices)\n * output_device (int or torch.device) -- device\n location of output (default: device_ids[0])\n Variables:\n module (Module) -- the module to be parallelized\n Example:\n >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])\n >>> output = net(input_var) # input_var can be on any device, including CPU", "source": "https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html", "category": "pytorch docs"}
{"text": "GLUclass torch.nn.GLU(dim=- 1)\n Applies the gated linear unit function {GLU}(a, b)= a \\otimes\n \\sigma(b) where a is the first half of the input matrices and b is\n the second half.\n Parameters:\n dim (int) -- the dimension on which to split the input.\n Default: -1\n Shape:\n * Input: (\\ast_1, N, \\ast_2) where *** means, any number of\n additional dimensions\n * Output: (\\ast_1, M, \\ast_2) where M=N/2\n Examples:\n >>> m = nn.GLU()\n >>> input = torch.randn(4, 2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GLU.html", "category": "pytorch docs"}
{"text": "torch.Tensor.diagflatTensor.diagflat(offset=0) -> Tensor\n See \"torch.diagflat()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diagflat.html", "category": "pytorch docs"}
{"text": "ReflectionPad1dclass torch.nn.ReflectionPad1d(padding)\n Pads the input tensor using the reflection of the input boundary.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 2-tuple,\n uses (\\text{padding_left}, \\text{padding_right})\n Shape:\n * Input: (C, W_{in}) or (N, C, W_{in}).\n * Output: (C, W_{out}) or (N, C, W_{out}), where\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ReflectionPad1d(2)\n >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)\n >>> input\n tensor([[[0., 1., 2., 3.],\n [4., 5., 6., 7.]]])\n >>> m(input)\n tensor([[[2., 1., 0., 1., 2., 3., 2., 1.],\n [6., 5., 4., 5., 6., 7., 6., 5.]]])\n >>> # using different paddings for different sides", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad1d.html", "category": "pytorch docs"}
{"text": "\n\n\nm = nn.ReflectionPad1d((3, 1))\n >>> m(input)\n tensor([[[3., 2., 1., 0., 1., 2., 3., 2.],\n [7., 6., 5., 4., 5., 6., 7., 6.]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad1d.html", "category": "pytorch docs"}
{"text": "conv1dclass torch.ao.nn.quantized.functional.conv1d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)\n Applies a 1D convolution over a quantized 1D input composed of\n several input planes.\n See \"Conv1d\" for details and output shape.\n Parameters:\n * input -- quantized input tensor of shape (\\text{minibatch}\n , \\text{in_channels} , iW)\n * weight -- quantized filters of shape (\\text{out_channels}\n , \\frac{\\text{in_channels}}{\\text{groups}} , iW)\n * bias -- non-quantized bias tensor of shape\n (\\text{out_channels}). The tensor type must be torch.float.\n * stride -- the stride of the convolving kernel. Can be a\n single number or a tuple (sW,). Default: 1\n * padding -- implicit paddings on both sides of the input.\n Can be a single number or a tuple (padW,). Default: 0", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv1d.html", "category": "pytorch docs"}
{"text": "\ndilation -- the spacing between kernel elements. Can be a\n single number or a tuple (dW,). Default: 1\ngroups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\npadding_mode -- the padding mode to use. Only \"zeros\" is\n supported for quantized convolution at the moment. Default:\n \"zeros\"\nscale -- quantization scale for the output. Default: 1.0\nzero_point -- quantization zero_point for the output.\n Default: 0\ndtype -- quantization data type to use. Default:\n \"torch.quint8\"\n Examples:\n\n\nfrom torch.ao.nn.quantized import functional as qF\nfilters = torch.randn(33, 16, 3, dtype=torch.float)\ninputs = torch.randn(20, 16, 50, dtype=torch.float)\nbias = torch.randn(33, dtype=torch.float)\nscale, zero_point = 1.0, 0\ndtype_inputs = torch.quint8\ndtype_filters = torch.qint8\n\n\n\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv1d.html", "category": "pytorch docs"}
{"text": "\n\n\ndtype_filters = torch.qint8\n >>>\n >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)\n >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)\n >>> qF.conv1d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv1d.html", "category": "pytorch docs"}
{"text": "conv2dclass torch.ao.nn.quantized.functional.conv2d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)\n Applies a 2D convolution over a quantized 2D input composed of\n several input planes.\n See \"Conv2d\" for details and output shape.\n Parameters:\n * input -- quantized input tensor of shape (\\text{minibatch}\n , \\text{in_channels} , iH , iW)\n * weight -- quantized filters of shape (\\text{out_channels}\n , \\frac{\\text{in_channels}}{\\text{groups}} , kH , kW)\n * bias -- non-quantized bias tensor of shape\n (\\text{out_channels}). The tensor type must be torch.float.\n * stride -- the stride of the convolving kernel. Can be a\n single number or a tuple (sH, sW). Default: 1\n * padding -- implicit paddings on both sides of the input.\n Can be a single number or a tuple (padH, padW). Default: 0", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv2d.html", "category": "pytorch docs"}
{"text": "\ndilation -- the spacing between kernel elements. Can be a\n single number or a tuple (dH, dW). Default: 1\ngroups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\npadding_mode -- the padding mode to use. Only \"zeros\" is\n supported for quantized convolution at the moment. Default:\n \"zeros\"\nscale -- quantization scale for the output. Default: 1.0\nzero_point -- quantization zero_point for the output.\n Default: 0\ndtype -- quantization data type to use. Default:\n \"torch.quint8\"\n Examples:\n\n\nfrom torch.ao.nn.quantized import functional as qF\nfilters = torch.randn(8, 4, 3, 3, dtype=torch.float)\ninputs = torch.randn(1, 4, 5, 5, dtype=torch.float)\nbias = torch.randn(8, dtype=torch.float)\nscale, zero_point = 1.0, 0\ndtype_inputs = torch.quint8\ndtype_filters = torch.qint8\n\n\n\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv2d.html", "category": "pytorch docs"}
{"text": "\n\n\ndtype_filters = torch.qint8\n >>>\n >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)\n >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)\n >>> qF.conv2d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.exp_Tensor.exp_() -> Tensor\n In-place version of \"exp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.exp_.html", "category": "pytorch docs"}
{"text": "torch.manual_seedtorch.manual_seed(seed)\n Sets the seed for generating random numbers. Returns a\n torch.Generator object.\n Parameters:\n seed (int) -- The desired seed. Value must be within the\n inclusive range [-0x8000_0000_0000_0000,\n 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised.\n Negative inputs are remapped to positive values with the formula\n 0xffff_ffff_ffff_ffff + seed.\n Return type:\n Generator", "source": "https://pytorch.org/docs/stable/generated/torch.manual_seed.html", "category": "pytorch docs"}
{"text": "torch.Tensor.register_hookTensor.register_hook(hook)\n Registers a backward hook.\n The hook will be called every time a gradient with respect to the\n Tensor is computed. The hook should have the following signature:\n hook(grad) -> Tensor or None\n The hook should not modify its argument, but it can optionally\n return a new gradient which will be used in place of \"grad\".\n This function returns a handle with a method \"handle.remove()\" that\n removes the hook from the module.\n Note:\n See Backward Hooks execution for more information on how when\n this hook is executed, and how its execution is ordered relative\n to other hooks.\n Example:\n >>> v = torch.tensor([0., 0., 0.], requires_grad=True)\n >>> h = v.register_hook(lambda grad: grad * 2) # double the gradient\n >>> v.backward(torch.tensor([1., 2., 3.]))\n >>> v.grad\n 2\n 4\n 6\n [torch.FloatTensor of size (3,)]\n >>> h.remove() # removes the hook", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.register_hook.html", "category": "pytorch docs"}
{"text": "torch.index_copytorch.index_copy(input, dim, index, source, *, out=None) -> Tensor\n See \"index_add_()\" for function description.", "source": "https://pytorch.org/docs/stable/generated/torch.index_copy.html", "category": "pytorch docs"}
{"text": "torch.Tensor.atan2Tensor.atan2(other) -> Tensor\n See \"torch.atan2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atan2.html", "category": "pytorch docs"}
{"text": "torch.set_warn_alwaystorch.set_warn_always(b)\n When this flag is False (default) then some PyTorch warnings may\n only appear once per process. This helps avoid excessive warning\n information. Setting it to True causes these warnings to always\n appear, which may be helpful when debugging.\n Parameters:\n b (\"bool\") -- If True, force warnings to always be emitted\n If False, set to the default behaviour", "source": "https://pytorch.org/docs/stable/generated/torch.set_warn_always.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.pixel_unshuffletorch.nn.functional.pixel_unshuffle(input, downscale_factor) -> Tensor\n Reverses the \"PixelShuffle\" operation by rearranging elements in a\n tensor of shape (, C, H \\times r, W \\times r) to a tensor of shape\n (, C \\times r^2, H, W), where r is the \"downscale_factor\".\n See \"PixelUnshuffle\" for details.\n Parameters:\n * input (Tensor) -- the input tensor\n * downscale_factor (int) -- factor to increase spatial\n resolution by\n Examples:\n >>> input = torch.randn(1, 1, 12, 12)\n >>> output = torch.nn.functional.pixel_unshuffle(input, 3)\n >>> print(output.size())\n torch.Size([1, 9, 4, 4])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pixel_unshuffle.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.sigmoidtorch.nn.functional.sigmoid(input) -> Tensor\n Applies the element-wise function \\text{Sigmoid}(x) = \\frac{1}{1 +\n \\exp(-x)}\n See \"Sigmoid\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.sigmoid.html", "category": "pytorch docs"}
{"text": "Conv2dclass torch.ao.nn.quantized.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n Applies a 2D convolution over a quantized input signal composed of\n several quantized input planes.\n For details on input arguments, parameters, and implementation see\n \"Conv2d\".\n Note:\n Only zeros is supported for the \"padding_mode\" argument.\n Note:\n Only torch.quint8 is supported for the input data type.\n Variables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * scale (Tensor) -- scalar for the output scale\n * zero_point (Tensor) -- scalar for the output zero point\n See \"Conv2d\" for other attributes.\n Examples:\n >>> # With square kernels and equal stride\n >>> m = nn.quantized.Conv2d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv2d.html", "category": "pytorch docs"}
{"text": "\n\n\nm = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> # non-square kernels and unequal stride and with padding and dilation\n >>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))\n >>> input = torch.randn(20, 16, 50, 100)\n >>> # quantize input to quint8\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n classmethod from_float(mod)\n Creates a quantized module from a float module or qparams_dict.\n Parameters:\n mod (Module) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv2d.html", "category": "pytorch docs"}
{"text": "torch.bitwise_ortorch.bitwise_or(input, other, , out=None) -> Tensor\n Computes the bitwise OR of \"input\" and \"other\". The input tensor\n must be of integral or Boolean types. For bool tensors, it computes\n the logical OR.\n Parameters:\n * input -- the first input tensor\n * other -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> torch.bitwise_or(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([-1, -2, 3], dtype=torch.int8)\n >>> torch.bitwise_or(torch.tensor([True, True, False]), torch.tensor([False, True, False]))\n tensor([ True, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_or.html", "category": "pytorch docs"}
{"text": "torch.unsqueezetorch.unsqueeze(input, dim) -> Tensor\n Returns a new tensor with a dimension of size one inserted at the\n specified position.\n The returned tensor shares the same underlying data with this\n tensor.\n A \"dim\" value within the range \"[-input.dim() - 1, input.dim() +\n 1)\" can be used. Negative \"dim\" will correspond to \"unsqueeze()\"\n applied at \"dim\" = \"dim + input.dim() + 1\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the index at which to insert the singleton\n dimension\n Example:\n >>> x = torch.tensor([1, 2, 3, 4])\n >>> torch.unsqueeze(x, 0)\n tensor([[ 1, 2, 3, 4]])\n >>> torch.unsqueeze(x, 1)\n tensor([[ 1],\n [ 2],\n [ 3],\n [ 4]])", "source": "https://pytorch.org/docs/stable/generated/torch.unsqueeze.html", "category": "pytorch docs"}
{"text": "torch.set_num_threadstorch.set_num_threads(int)\n Sets the number of threads used for intraop parallelism on CPU.\n Warning:\n To ensure that the correct number of threads is used,\n set_num_threads must be called before running eager, JIT or\n autograd code.", "source": "https://pytorch.org/docs/stable/generated/torch.set_num_threads.html", "category": "pytorch docs"}
{"text": "torch.squaretorch.square(input, , out=None) -> Tensor\n Returns a new tensor with the square of the elements of \"input\".\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-2.0755, 1.0226, 0.0831, 0.4806])\n >>> torch.square(a)\n tensor([ 4.3077, 1.0457, 0.0069, 0.2310])", "source": "https://pytorch.org/docs/stable/generated/torch.square.html", "category": "pytorch docs"}
{"text": "torch.Tensor.doubleTensor.double(memory_format=torch.preserve_format) -> Tensor\n \"self.double()\" is equivalent to \"self.to(torch.float64)\". See\n \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.double.html", "category": "pytorch docs"}
{"text": "torch.Tensor.i0_Tensor.i0_() -> Tensor\n In-place version of \"i0()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.i0_.html", "category": "pytorch docs"}
{"text": "torch.alltorch.all(input) -> Tensor\n Tests if all elements in \"input\" evaluate to True.\n Note:\n This function matches the behaviour of NumPy in returning output\n of dtype bool for all supported dtypes except uint8. For\n uint8 the dtype of output is uint8 itself.\n Example:\n >>> a = torch.rand(1, 2).bool()\n >>> a\n tensor([[False, True]], dtype=torch.bool)\n >>> torch.all(a)\n tensor(False, dtype=torch.bool)\n >>> a = torch.arange(0, 3)\n >>> a\n tensor([0, 1, 2])\n >>> torch.all(a)\n tensor(False)\n torch.all(input, dim, keepdim=False, , out=None) -> Tensor\n For each row of \"input\" in the given dimension \"dim\", returns\n True if all elements in the row evaluate to True and False*\n otherwise.\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in", "source": "https://pytorch.org/docs/stable/generated/torch.all.html", "category": "pytorch docs"}
{"text": "the output tensor having 1 fewer dimension than \"input\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.rand(4, 2).bool()\n >>> a\n tensor([[True, True],\n [True, False],\n [True, True],\n [True, True]], dtype=torch.bool)\n >>> torch.all(a, dim=1)\n tensor([ True, False, True, True], dtype=torch.bool)\n >>> torch.all(a, dim=0)\n tensor([ True, False], dtype=torch.bool)", "source": "https://pytorch.org/docs/stable/generated/torch.all.html", "category": "pytorch docs"}
{"text": "torch.Tensor.prodTensor.prod(dim=None, keepdim=False, dtype=None) -> Tensor\n See \"torch.prod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.prod.html", "category": "pytorch docs"}
{"text": "torch.lu_solvetorch.lu_solve(b, LU_data, LU_pivots, , out=None) -> Tensor\n Returns the LU solve of the linear system Ax = b using the\n partially pivoted LU factorization of A from \"lu_factor()\".\n This function supports \"float\", \"double\", \"cfloat\" and \"cdouble\"\n dtypes for \"input\".\n Warning:\n \"torch.lu_solve()\" is deprecated in favor of\n \"torch.linalg.lu_solve()\". \"torch.lu_solve()\" will be removed in\n a future PyTorch release. \"X = torch.lu_solve(B, LU, pivots)\"\n should be replaced with\n X = linalg.lu_solve(LU, pivots, B)\n Parameters:\n * b (Tensor) -- the RHS tensor of size (, m, k), where *\n is zero or more batch dimensions.\n * LU_data (Tensor) -- the pivoted LU factorization of A\n from \"lu_factor()\" of size (, m, m), where * is zero or more\n batch dimensions.\n * LU_pivots (IntTensor) -- the pivots of the LU\n factorization from \"lu_factor()\" of size (, m), where * is", "source": "https://pytorch.org/docs/stable/generated/torch.lu_solve.html", "category": "pytorch docs"}
{"text": "zero or more batch dimensions. The batch dimensions of\n \"LU_pivots\" must be equal to the batch dimensions of\n \"LU_data\".\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> A = torch.randn(2, 3, 3)\n >>> b = torch.randn(2, 3, 1)\n >>> LU, pivots = torch.linalg.lu_factor(A)\n >>> x = torch.lu_solve(b, LU, pivots)\n >>> torch.dist(A @ x, b)\n tensor(1.00000e-07 *\n 2.8312)", "source": "https://pytorch.org/docs/stable/generated/torch.lu_solve.html", "category": "pytorch docs"}
{"text": "torch.cuda.comm.broadcasttorch.cuda.comm.broadcast(tensor, devices=None, , out=None)\n Broadcasts a tensor to specified GPU devices.\n Parameters:\n * tensor (Tensor) -- tensor to broadcast. Can be on CPU or\n GPU.\n * devices (Iterable[torch.device, str or\n int], optional*) -- an iterable of GPU devices, among\n which to broadcast.\n * out (*Sequence[Tensor], optional, keyword-\n only*) -- the GPU tensors to store output results.\n Note:\n Exactly one of \"devices\" and \"out\" must be specified.\n Returns:\n * If \"devices\" is specified,\n a tuple containing copies of \"tensor\", placed on \"devices\".\n * If \"out\" is specified,\n a tuple containing \"out\" tensors, each containing a copy of\n \"tensor\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.broadcast.html", "category": "pytorch docs"}
{"text": "torch.Tensor.itemTensor.item() -> number\n Returns the value of this tensor as a standard Python number. This\n only works for tensors with one element. For other cases, see\n \"tolist()\".\n This operation is not differentiable.\n Example:\n >>> x = torch.tensor([1.0])\n >>> x.item()\n 1.0", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.item.html", "category": "pytorch docs"}
{"text": "torch.fmodtorch.fmod(input, other, *, out=None) -> Tensor\n Applies C++'s std::fmod entrywise. The result has the same sign as\n the dividend \"input\" and its absolute value is less than that of\n \"other\".\n This function may be defined in terms of \"torch.div()\" as\n torch.fmod(a, b) == a - a.div(b, rounding_mode=\"trunc\") * b\n Supports broadcasting to a common shape, type promotion, and\n integer and float inputs.\n Note:\n When the divisor is zero, returns \"NaN\" for floating point dtypes\n on both CPU and GPU; raises \"RuntimeError\" for integer division\n by zero on CPU; Integer division by zero on GPU may return any\n value.\n Note:\n Complex inputs are not supported. In some cases, it is not\n mathematically possible to satisfy the definition of a modulo\n operation with complex numbers.\n See also:\n \"torch.remainder()\" which implements Python's modulus operator.\n This one is defined using division rounding down the result.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.fmod.html", "category": "pytorch docs"}
{"text": "Parameters:\n * input (Tensor) -- the dividend\n * other (Tensor or Scalar) -- the divisor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.fmod(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)\n tensor([-1., -0., -1., 1., 0., 1.])\n >>> torch.fmod(torch.tensor([1, 2, 3, 4, 5]), -1.5)\n tensor([1.0000, 0.5000, 0.0000, 1.0000, 0.5000])", "source": "https://pytorch.org/docs/stable/generated/torch.fmod.html", "category": "pytorch docs"}
{"text": "torch.argmintorch.argmin(input, dim=None, keepdim=False) -> LongTensor\n Returns the indices of the minimum value(s) of the flattened tensor\n or along a dimension\n This is the second value returned by \"torch.min()\". See its\n documentation for the exact semantics of this method.\n Note:\n If there are multiple minimal values then the indices of the\n first minimal value are returned.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce. If \"None\", the\n argmin of the flattened input is returned.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not..\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.1139, 0.2254, -0.1381, 0.3687],\n [ 1.0100, -1.1975, -0.0102, -0.4732],\n [-0.9240, 0.1207, -0.7506, -1.0213],\n [ 1.7809, -1.2960, 0.9384, 0.1438]])\n >>> torch.argmin(a)\n tensor(13)", "source": "https://pytorch.org/docs/stable/generated/torch.argmin.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.argmin(a)\n tensor(13)\n >>> torch.argmin(a, dim=1)\n tensor([ 2, 1, 3, 1])\n >>> torch.argmin(a, dim=1, keepdim=True)\n tensor([[2],\n [1],\n [3],\n [1]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.argmin.html", "category": "pytorch docs"}
{"text": "torch.Tensor.type_asTensor.type_as(tensor) -> Tensor\n Returns this tensor cast to the type of the given tensor.\n This is a no-op if the tensor is already of the correct type. This\n is equivalent to \"self.type(tensor.type())\"\n Parameters:\n tensor (Tensor) -- the tensor which has the desired type", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.type_as.html", "category": "pytorch docs"}
{"text": "Conv1dclass torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n Applies a 1D convolution over an input signal composed of several\n input planes.\n In the simplest case, the output value of the layer with input size\n (N, C_{\\text{in}}, L) and output (N, C_{\\text{out}},\n L_{\\text{out}}) can be precisely described as:\n \\text{out}(N_i, C_{\\text{out}j}) =\n \\text{bias}(Cj}) + \\sum^{C_{in} - 1}\n \\text{weight}(C_{\\text{out}_j}, k) \\star \\text{input}(N_i, k)\n where \\star is the valid cross-correlation operator, N is a batch\n size, C denotes a number of channels, L is a length of signal\n sequence.\n This module supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n * \"stride\" controls the stride for the cross-correlation, a single\n number or a one-element tuple.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"}
{"text": "number or a one-element tuple.\n * \"padding\" controls the amount of padding applied to the input. It\n can be either a string {'valid', 'same'} or a tuple of ints\n giving the amount of implicit padding applied on both sides.\n * \"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n * \"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". For example,\n * At groups=1, all inputs are convolved to all outputs.\n * At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n * At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"}
{"text": "with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\n Note:\n When groups == in_channels and out_channels == K *\n in_channels, where K is a positive integer, this operation is\n also known as a \"depthwise convolution\".In other words, for an\n input of size (N, C_{in}, L_{in}), a depthwise convolution with a\n depthwise multiplier K can be performed with the arguments\n (C_\\text{in}=C_\\text{in}, C_\\text{out}=C_\\text{in} \\times\n \\text{K}, ..., \\text{groups}=C_\\text{in}).\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"}
{"text": "Note:\n \"padding='valid'\" is the same as no padding. \"padding='same'\"\n pads the input so the output has the shape as the input. However,\n this mode doesn't support any stride values other than 1.\n Note:\n This module supports complex data types i.e. \"complex32,\n complex64, complex128\".\n Parameters:\n * in_channels (int) -- Number of channels in the input\n image\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int, tuple or str, optional) --\n Padding added to both sides of the input. Default: 0\n * padding_mode (str, optional) -- \"'zeros'\",\n \"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * dilation (int or tuple, optional) -- Spacing", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"}
{"text": "between kernel elements. Default: 1\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n Shape:\n * Input: (N, C_{in}, L_{in}) or (C_{in}, L_{in})\n * Output: (N, C_{out}, L_{out}) or (C_{out}, L_{out}), where\n L_{out} = \\left\\lfloor\\frac{L_{in} + 2 \\times\n \\text{padding} - \\text{dilation} \\times\n (\\text{kernel_size} - 1) - 1}{\\text{stride}} +\n 1\\right\\rfloor\n Variables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{out_channels},\n \\frac{\\text{in_channels}}{\\text{groups}},\n \\text{kernel_size}). The values of these weights are sampled\n from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{groups}{C_\\text{in} * \\text{kernel_size}}", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"}
{"text": "\nbias (Tensor) -- the learnable bias of the module of\n shape (out_channels). If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{in} *\n \\text{kernel_size}}\n Examples:\n >>> m = nn.Conv1d(16, 33, 3, stride=2)\n >>> input = torch.randn(20, 16, 50)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"}
{"text": "JitScalarTypeclass torch.onnx.JitScalarType(value)\n Scalar types defined in torch.\n Use \"JitScalarType\" to convert from torch and JIT scalar types to\n ONNX scalar types.\n -[ Examples ]-\n\n\n\nJitScalarType.from_value(torch.ones(1, 2)).onnx_type()\n TensorProtoDataType.FLOAT\nJitScalarType.from_value(torch_c_value_with_type_float).onnx_type()\n TensorProtoDataType.FLOAT\nJitScalarType.from_dtype(torch.get_default_dtype).onnx_type()\n TensorProtoDataType.FLOAT\n dtype()\n Convert a JitScalarType to a torch dtype.\n Return type:\n dtype\n classmethod from_dtype(dtype)\n Convert a torch dtype to JitScalarType.\n Note: DO NOT USE this API when dtype comes from a\n torch._C.Value.type() calls.\n A \"RuntimeError: INTERNAL ASSERT FAILED at\n \"../aten/src/ATen/core/jit_type_base.h\" can be raised in\n several scenarios where shape info is not present. Instead\n use from_value API which is safer.\n Parameters:\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html", "category": "pytorch docs"}
{"text": "Parameters:\n dtype (Optional[dtype]) -- A torch.dtype to\n create a JitScalarType from\n Returns:\n JitScalarType\n Raises:\n OnnxExporterError -- if dtype is not a valid torch.dtype\n or if it is None.\n Return type:\n JitScalarType\n classmethod from_value(value, default=None)\n Create a JitScalarType from an value's scalar type.\n Parameters:\n * value (Union[None, Value, Tensor]) --\n An object to fetch scalar type from.\n * default -- The JitScalarType to return if a valid\n scalar cannot be fetched from value\n Returns:\n JitScalarType.\n Raises:\n * OnnxExporterError -- if value does not have a valid\n scalar type and default is None.\n * SymbolicValueError -- when value.type()'s info are\n empty and default is None\n Return type:\n JitScalarType\n onnx_compatible()", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html", "category": "pytorch docs"}
{"text": "JitScalarType\n onnx_compatible()\n Return whether this JitScalarType is compatible with ONNX.\n Return type:\n bool\n onnx_type()\n Convert a JitScalarType to an ONNX data type.\n Return type:\n TensorProtoDataType\n scalar_name()\n Convert a JitScalarType to a JIT scalar type name.\n Return type:\n Literal['Byte', 'Char', 'Double', 'Float', 'Half', 'Int',\n 'Long', 'Short', 'Bool', 'ComplexHalf', 'ComplexFloat',\n 'ComplexDouble', 'QInt8', 'QUInt8', 'QInt32', 'BFloat16',\n 'Undefined']\n torch_name()\n Convert a JitScalarType to a torch type name.\n Return type:\n Literal['bool', 'uint8_t', 'int8_t', 'double', 'float',\n 'half', 'int', 'int64_t', 'int16_t', 'complex32',\n 'complex64', 'complex128', 'qint8', 'quint8', 'qint32',\n 'bfloat16']", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html", "category": "pytorch docs"}
{"text": "torch.Tensor.luTensor.lu(pivot=True, get_infos=False)\n See \"torch.lu()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lu.html", "category": "pytorch docs"}
{"text": "torch.foreach_sin_torch._foreach_sin(self: List[Tensor]) -> None\n Apply \"torch.sin()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sin_.html", "category": "pytorch docs"}
{"text": "torch.cuda.comm.scattertorch.cuda.comm.scatter(tensor, devices=None, chunk_sizes=None, dim=0, streams=None, , out=None)\n Scatters tensor across multiple GPUs.\n Parameters:\n * tensor (Tensor) -- tensor to scatter. Can be on CPU or\n GPU.\n * devices (Iterable[torch.device, str or\n int], optional*) -- an iterable of GPU devices, among\n which to scatter.\n * chunk_sizes (*Iterable[int], optional) -- sizes\n of chunks to be placed on each device. It should match\n \"devices\" in length and sums to \"tensor.size(dim)\". If not\n specified, \"tensor\" will be divided into equal chunks.\n * dim (int, optional) -- A dimension along which to\n chunk \"tensor\". Default: \"0\".\n * streams (Iterable[Stream], *optional) -- an\n iterable of Streams, among which to execute the scatter. If\n not specified, the default stream will be utilized.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.scatter.html", "category": "pytorch docs"}
{"text": "\nout (Sequence[Tensor], optional, keyword-\n only) -- the GPU tensors to store output results. Sizes of\n these tensors must match that of \"tensor\", except for \"dim\",\n where the total size must sum to \"tensor.size(dim)\".\n Note:\n Exactly one of \"devices\" and \"out\" must be specified. When \"out\"\n is specified, \"chunk_sizes\" must not be specified and will be\n inferred from sizes of \"out\".\n Returns:\nIf \"devices\" is specified,\n a tuple containing chunks of \"tensor\", placed on \"devices\".\nIf \"out\" is specified,\n a tuple containing \"out\" tensors, each containing a chunk\n of \"tensor\".\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.scatter.html", "category": "pytorch docs"}
{"text": "torch.sparse.log_softmaxtorch.sparse.log_softmax(input, dim, , dtype=None) -> Tensor\n Applies a softmax function followed by logarithm.\n See \"softmax\" for more details.\n Parameters:\n * input (Tensor) -- input\n * dim (int) -- A dimension along which softmax will be\n computed.\n * dtype* (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.log_softmax.html", "category": "pytorch docs"}
{"text": "torch.Tensor.atan2_Tensor.atan2_(other) -> Tensor\n In-place version of \"atan2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atan2_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cosTensor.cos() -> Tensor\n See \"torch.cos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cos.html", "category": "pytorch docs"}
{"text": "torch.innertorch.inner(input, other, , out=None) -> Tensor\n Computes the dot product for 1D tensors. For higher dimensions,\n sums the product of elements from \"input\" and \"other\" along their\n last dimension.\n Note:\n If either \"input\" or \"other\" is a scalar, the result is\n equivalent to torch.mul(input, other).If both \"input\" and\n \"other\" are non-scalars, the size of their last dimension must\n match and the result is equivalent to torch.tensordot(input,\n other, dims=([-1], [-1]))\n Parameters:\n * input (Tensor) -- First input tensor\n * other (Tensor) -- Second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- Optional output tensor to\n write result into. The output shape is input.shape[:-1] +\n other.shape[:-1]*.\n Example:\n # Dot product\n >>> torch.inner(torch.tensor([1, 2, 3]), torch.tensor([0, 2, 1]))\n tensor(7)\n # Multidimensional input tensors\n >>> a = torch.randn(2, 3)", "source": "https://pytorch.org/docs/stable/generated/torch.inner.html", "category": "pytorch docs"}
{"text": "\n\n\na = torch.randn(2, 3)\n >>> a\n tensor([[0.8173, 1.0874, 1.1784],\n [0.3279, 0.1234, 2.7894]])\n >>> b = torch.randn(2, 4, 3)\n >>> b\n tensor([[[-0.4682, -0.7159, 0.1506],\n [ 0.4034, -0.3657, 1.0387],\n [ 0.9892, -0.6684, 0.1774],\n [ 0.9482, 1.3261, 0.3917]],\n [[ 0.4537, 0.7493, 1.1724],\n [ 0.2291, 0.5749, -0.2267],\n [-0.7920, 0.3607, -0.3701],\n [ 1.3666, -0.5850, -1.7242]]])\n >>> torch.inner(a, b)\n tensor([[[-0.9837, 1.1560, 0.2907, 2.6785],\n [ 2.5671, 0.5452, -0.6912, -1.5509]],\n [[ 0.1782, 2.9843, 0.7366, 1.5672],\n [ 3.5115, -0.4864, -1.2476, -4.4337]]])\n # Scalar input\n >>> torch.inner(a, torch.tensor(2))\n tensor([[1.6347, 2.1748, 2.3567],\n [0.6558, 0.2469, 5.5787]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.inner.html", "category": "pytorch docs"}
{"text": "torch.Tensor.coshTensor.cosh() -> Tensor\n See \"torch.cosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cosh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.t_Tensor.t_() -> Tensor\n In-place version of \"t()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.t_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.choleskyTensor.cholesky(upper=False) -> Tensor\n See \"torch.cholesky()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky.html", "category": "pytorch docs"}
{"text": "LSTMCellclass torch.nn.LSTMCell(input_size, hidden_size, bias=True, device=None, dtype=None)\n A long short-term memory (LSTM) cell.\n \\begin{array}{ll} i = \\sigma(W_{ii} x + b_{ii} + W_{hi} h +\n b_{hi}) \\ f = \\sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\\n g = \\tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\ o =\n \\sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\ c' = f * c + i\n * g \\ h' = o * \\tanh(c') \\ \\end{array}\n where \\sigma is the sigmoid function, and * is the Hadamard\n product.\n Parameters:\n * input_size (int) -- The number of expected features in\n the input x\n * hidden_size (int) -- The number of features in the\n hidden state h\n * bias (bool) -- If \"False\", then the layer does not use\n bias weights b_ih and b_hh. Default: \"True\"\n Inputs: input, (h_0, c_0)\n * input of shape (batch, input_size) or (input_size):\n tensor containing input features", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html", "category": "pytorch docs"}
{"text": "tensor containing input features\n * h_0 of shape (batch, hidden_size) or (hidden_size):\n tensor containing the initial hidden state\n * c_0 of shape (batch, hidden_size) or (hidden_size):\n tensor containing the initial cell state\n If (h_0, c_0) is not provided, both h_0 and c_0\n default to zero.\n Outputs: (h_1, c_1)\n * h_1 of shape (batch, hidden_size) or (hidden_size):\n tensor containing the next hidden state\n * c_1 of shape (batch, hidden_size) or (hidden_size):\n tensor containing the next cell state\n Variables:\n * weight_ih (torch.Tensor) -- the learnable input-hidden\n weights, of shape (4hidden_size, input_size)\n * weight_hh (torch.Tensor) -- the learnable hidden-hidden\n weights, of shape (4hidden_size, hidden_size)\n * bias_ih -- the learnable input-hidden bias, of shape\n (4hidden_size)*", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html", "category": "pytorch docs"}
{"text": "(4hidden_size)\n * bias_hh -- the learnable hidden-hidden bias, of shape\n (4hidden_size)\n Note:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden_size}}\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n Examples:\n >>> rnn = nn.LSTMCell(10, 20) # (input_size, hidden_size)\n >>> input = torch.randn(2, 3, 10) # (time_steps, batch, input_size)\n >>> hx = torch.randn(3, 20) # (batch, hidden_size)\n >>> cx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(input.size()[0]):\n ... hx, cx = rnn(input[i], (hx, cx))\n ... output.append(hx)\n >>> output = torch.stack(output, dim=0)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html", "category": "pytorch docs"}
{"text": "conv3dclass torch.ao.nn.quantized.functional.conv3d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)\n Applies a 3D convolution over a quantized 3D input composed of\n several input planes.\n See \"Conv3d\" for details and output shape.\n Parameters:\n * input -- quantized input tensor of shape (\\text{minibatch}\n , \\text{in_channels} , iD , iH , iW)\n * weight -- quantized filters of shape (\\text{out_channels}\n , \\frac{\\text{in_channels}}{\\text{groups}} , kD , kH , kW)\n * bias -- non-quantized bias tensor of shape\n (\\text{out_channels}). The tensor type must be torch.float.\n * stride -- the stride of the convolving kernel. Can be a\n single number or a tuple (sD, sH, sW). Default: 1\n * padding -- implicit paddings on both sides of the input.\n Can be a single number or a tuple (padD, padH, padW).\n Default: 0", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html", "category": "pytorch docs"}
{"text": "Default: 0\n * dilation -- the spacing between kernel elements. Can be a\n single number or a tuple (dD, dH, dW). Default: 1\n * groups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n * padding_mode -- the padding mode to use. Only \"zeros\" is\n supported for quantized convolution at the moment. Default:\n \"zeros\"\n * scale -- quantization scale for the output. Default: 1.0\n * zero_point -- quantization zero_point for the output.\n Default: 0\n * dtype -- quantization data type to use. Default:\n \"torch.quint8\"\n Examples:\n >>> from torch.ao.nn.quantized import functional as qF\n >>> filters = torch.randn(8, 4, 3, 3, 3, dtype=torch.float)\n >>> inputs = torch.randn(1, 4, 5, 5, 5, dtype=torch.float)\n >>> bias = torch.randn(8, dtype=torch.float)\n >>>\n >>> scale, zero_point = 1.0, 0\n >>> dtype_inputs = torch.quint8", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html", "category": "pytorch docs"}
{"text": "\n\n\ndtype_inputs = torch.quint8\n >>> dtype_filters = torch.qint8\n >>>\n >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)\n >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)\n >>> qF.conv3d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.prune.is_prunedtorch.nn.utils.prune.is_pruned(module)\n Check whether \"module\" is pruned by looking for \"forward_pre_hooks\"\n in its modules that inherit from the \"BasePruningMethod\".\n Parameters:\n module (nn.Module) -- object that is either pruned or\n unpruned\n Returns:\n binary answer to whether \"module\" is pruned.\n -[ Examples ]-\n\n\n\nfrom torch.nn.utils import prune\nm = nn.Linear(5, 7)\nprint(prune.is_pruned(m))\n False\nprune.random_unstructured(m, name='weight', amount=0.2)\nprint(prune.is_pruned(m))\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.is_pruned.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ndimTensor.ndim\n Alias for \"dim()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ndim.html", "category": "pytorch docs"}
{"text": "max_pool1dclass torch.ao.nn.quantized.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\n Applies a 1D max pooling over a quantized input signal composed of\n several quantized input planes.\n Note:\n The input quantization parameters are propagated to the output.\n See \"MaxPool1d\" for details.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.max_pool1d.html", "category": "pytorch docs"}
{"text": "torch.cuda.ipc_collecttorch.cuda.ipc_collect()\n Force collects GPU memory after it has been released by CUDA IPC.\n Note:\n Checks if any sent CUDA tensors could be cleaned from the memory.\n Force closes shared memory file used for reference counting if\n there is no active counters. Useful when the producer process\n stopped actively sending tensors and want to release unused\n memory.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.ipc_collect.html", "category": "pytorch docs"}
{"text": "torch.Tensor.conj_physical_Tensor.conj_physical_() -> Tensor\n In-place version of \"conj_physical()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.conj_physical_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.view_asTensor.view_as(other) -> Tensor\n View this tensor as the same size as \"other\". \"self.view_as(other)\"\n is equivalent to \"self.view(other.size())\".\n Please see \"view()\" for more information about \"view\".\n Parameters:\n other (\"torch.Tensor\") -- The result tensor has the same\n size as \"other\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view_as.html", "category": "pytorch docs"}
{"text": "torch.Tensor.mvlgamma_Tensor.mvlgamma_(p) -> Tensor\n In-place version of \"mvlgamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mvlgamma_.html", "category": "pytorch docs"}
{"text": "torch.addtorch.add(input, other, , alpha=1, out=None) -> Tensor\n Adds \"other\", scaled by \"alpha\", to \"input\".\n \\text{{out}}_i = \\text{{input}}_i + \\text{{alpha}} \\times\n \\text{{other}}_i\n Supports broadcasting to a common shape, type promotion, and\n integer, float, and complex inputs.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor or Number) -- the tensor or number to\n add to \"input\".\n Keyword Arguments:\n * alpha (Number) -- the multiplier for \"other\".\n * out (Tensor, optional*) -- the output tensor.\n Examples:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.0202, 1.0985, 1.3506, -0.6056])\n >>> torch.add(a, 20)\n tensor([ 20.0202, 21.0985, 21.3506, 19.3944])\n >>> b = torch.randn(4)\n >>> b\n tensor([-0.9732, -0.3497, 0.6245, 0.4022])\n >>> c = torch.randn(4, 1)\n >>> c\n tensor([[ 0.3743],\n [-1.7724],\n [-0.5811],", "source": "https://pytorch.org/docs/stable/generated/torch.add.html", "category": "pytorch docs"}
{"text": "[-1.7724],\n [-0.5811],\n [-0.8017]])\n >>> torch.add(b, c, alpha=10)\n tensor([[ 2.7695, 3.3930, 4.3672, 4.1450],\n [-18.6971, -18.0736, -17.0994, -17.3216],\n [ -6.7845, -6.1610, -5.1868, -5.4090],\n [ -8.9902, -8.3667, -7.3925, -7.6147]])", "source": "https://pytorch.org/docs/stable/generated/torch.add.html", "category": "pytorch docs"}
{"text": "torch.cuda.get_sync_debug_modetorch.cuda.get_sync_debug_mode()\n Returns current value of debug mode for cuda synchronizing\n operations.\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_sync_debug_mode.html", "category": "pytorch docs"}
{"text": "torch.cattorch.cat(tensors, dim=0, , out=None) -> Tensor\n Concatenates the given sequence of \"seq\" tensors in the given\n dimension. All tensors must either have the same shape (except in\n the concatenating dimension) or be empty.\n \"torch.cat()\" can be seen as an inverse operation for\n \"torch.split()\" and \"torch.chunk()\".\n \"torch.cat()\" can be best understood via examples.\n Parameters:\n * tensors (sequence of Tensors) -- any python sequence of\n tensors of the same type. Non-empty tensors provided must have\n the same shape, except in the cat dimension.\n * dim (int, optional) -- the dimension over which the\n tensors are concatenated\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> x = torch.randn(2, 3)\n >>> x\n tensor([[ 0.6580, -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497]])\n >>> torch.cat((x, x, x), 0)\n tensor([[ 0.6580, -1.0969, -0.4614],", "source": "https://pytorch.org/docs/stable/generated/torch.cat.html", "category": "pytorch docs"}
{"text": "tensor([[ 0.6580, -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497],\n [ 0.6580, -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497],\n [ 0.6580, -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497]])\n >>> torch.cat((x, x, x), 1)\n tensor([[ 0.6580, -1.0969, -0.4614, 0.6580, -1.0969, -0.4614, 0.6580,\n -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497, -0.1034, -0.5790, 0.1497, -0.1034,\n -0.5790, 0.1497]])", "source": "https://pytorch.org/docs/stable/generated/torch.cat.html", "category": "pytorch docs"}
{"text": "torch.loadtorch.load(f, map_location=None, pickle_module=pickle, , weights_only=False, *pickle_load_args)\n Loads an object saved with \"torch.save()\" from a file.\n \"torch.load()\" uses Python's unpickling facilities but treats\n storages, which underlie tensors, specially. They are first\n deserialized on the CPU and are then moved to the device they were\n saved from. If this fails (e.g. because the run time system doesn't\n have certain devices), an exception is raised. However, storages\n can be dynamically remapped to an alternative set of devices using\n the \"map_location\" argument.\n If \"map_location\" is a callable, it will be called once for each\n serialized storage with two arguments: storage and location. The\n storage argument will be the initial deserialization of the\n storage, residing on the CPU. Each serialized storage has a\n location tag associated with it which identifies the device it was\n saved from, and this tag is the second argument passed to", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"}
{"text": "\"map_location\". The builtin location tags are \"'cpu'\" for CPU\n tensors and \"'cuda:device_id'\" (e.g. \"'cuda:2'\") for CUDA tensors.\n \"map_location\" should return either \"None\" or a storage. If\n \"map_location\" returns a storage, it will be used as the final\n deserialized object, already moved to the right device. Otherwise,\n \"torch.load()\" will fall back to the default behavior, as if\n \"map_location\" wasn't specified.\n If \"map_location\" is a \"torch.device\" object or a string containing\n a device tag, it indicates the location where all tensors should be\n loaded.\n Otherwise, if \"map_location\" is a dict, it will be used to remap\n location tags appearing in the file (keys), to ones that specify\n where to put the storages (values).\n User extensions can register their own location tags and tagging\n and deserialization methods using\n \"torch.serialization.register_package()\".\n Parameters:\n * f (Union[str, PathLike, BinaryIO*,", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"}
{"text": "IO[bytes]]*) -- a file-like object (has to implement\n \"read()\", \"readline()\", \"tell()\", and \"seek()\"), or a string\n or os.PathLike object containing a file name\n * map_location\n (*Optional[Union[Callable[[Tensor, str],\n Tensor], device, str, Dict[str,\n str]]]*) -- a function, \"torch.device\", string or a\n dict specifying how to remap storage locations\n * pickle_module (*Optional[Any]) -- module used for\n unpickling metadata and objects (has to match the\n \"pickle_module\" used to serialize file)\n * weights_only (bool) -- Indicates whether unpickler\n should be restricted to loading only tensors, primitive types\n and dictionaries\n * pickle_load_args (Any*) -- (Python 3 only) optional\n keyword arguments passed over to \"pickle_module.load()\" and\n \"pickle_module.Unpickler()\", e.g., \"errors=...\".\n Return type:", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"}
{"text": "Return type:\n Any\n Warning:\n \"torch.load()\" unless weights_only parameter is set to True,\n uses \"pickle\" module implicitly, which is known to be insecure.\n It is possible to construct malicious pickle data which will\n execute arbitrary code during unpickling. Never load data that\n could have come from an untrusted source in an unsafe mode, or\n that could have been tampered with. Only load data you trust.\n Note:\n When you call \"torch.load()\" on a file which contains GPU\n tensors, those tensors will be loaded to GPU by default. You can\n call \"torch.load(.., map_location='cpu')\" and then\n \"load_state_dict()\" to avoid GPU RAM surge when loading a model\n checkpoint.\n Note:\n By default, we decode byte strings as \"utf-8\". This is to avoid\n a common error case \"UnicodeDecodeError: 'ascii' codec can't\n decode byte 0x...\" when loading files saved by Python 2 in Python\n 3. If this default is incorrect, you may use an extra \"encoding\"", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"}
{"text": "keyword argument to specify how these objects should be loaded,\n e.g., \"encoding='latin1'\" decodes them to strings using \"latin1\"\n encoding, and \"encoding='bytes'\" keeps them as byte arrays which\n can be decoded later with \"byte_array.decode(...)\".\n -[ Example ]-\n\n\n\ntorch.load('tensors.pt')\n # Load all tensors onto the CPU\ntorch.load('tensors.pt', map_location=torch.device('cpu'))\n # Load all tensors onto the CPU, using a function\ntorch.load('tensors.pt', map_location=lambda storage, loc: storage)\n # Load all tensors onto GPU 1\ntorch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))\n # Map tensors from GPU 1 to GPU 0\ntorch.load('tensors.pt', map_location={'cuda:1': 'cuda:0'})\n # Load tensor from io.BytesIO object\nwith open('tensor.pt', 'rb') as f:\n ... buffer = io.BytesIO(f.read())\ntorch.load(buffer)\n # Load a module with 'ascii' encoding for unpickling\ntorch.load('module.pt', encoding='ascii')\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"}
{"text": "torch.Tensor.unflattenTensor.unflatten(dim, sizes) -> Tensor\n See \"torch.unflatten()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unflatten.html", "category": "pytorch docs"}
{"text": "torch.quantiletorch.quantile(input, q, dim=None, keepdim=False, *, interpolation='linear', out=None) -> Tensor\n Computes the q-th quantiles of each row of the \"input\" tensor along\n the dimension \"dim\".\n To compute the quantile, we map q in [0, 1] to the range of indices\n [0, n] to find the location of the quantile in the sorted input. If\n the quantile lies between two data points \"a < b\" with indices \"i\"\n and \"j\" in the sorted order, result is computed according to the\n given \"interpolation\" method as follows:\n * \"linear\": \"a + (b - a) * fraction\", where \"fraction\" is the\n fractional part of the computed quantile index.\n * \"lower\": \"a\".\n * \"higher\": \"b\".\n * \"nearest\": \"a\" or \"b\", whichever's index is closer to the\n computed quantile index (rounding down for .5 fractions).\n * \"midpoint\": \"(a + b) / 2\".\n If \"q\" is a 1D tensor, the first dimension of the output represents\n the quantiles and has size equal to the size of \"q\", the remaining", "source": "https://pytorch.org/docs/stable/generated/torch.quantile.html", "category": "pytorch docs"}
{"text": "dimensions are what remains from the reduction.\n Note:\n By default \"dim\" is \"None\" resulting in the \"input\" tensor being\n flattened before computation.\n Parameters:\n * input (Tensor) -- the input tensor.\n * q (float or Tensor) -- a scalar or 1D tensor of\n values in the range [0, 1].\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n * interpolation (str) -- interpolation method to use when\n the desired quantile lies between two data points. Can be\n \"linear\", \"lower\", \"higher\", \"midpoint\" and \"nearest\". Default\n is \"linear\".\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(2, 3)\n >>> a\n tensor([[ 0.0795, -1.2117, 0.9765],\n [ 1.1707, 0.6706, 0.4884]])\n >>> q = torch.tensor([0.25, 0.5, 0.75])", "source": "https://pytorch.org/docs/stable/generated/torch.quantile.html", "category": "pytorch docs"}
{"text": "\n\n\nq = torch.tensor([0.25, 0.5, 0.75])\n >>> torch.quantile(a, q, dim=1, keepdim=True)\n tensor([[[-0.5661],\n [ 0.5795]],\n [[ 0.0795],\n [ 0.6706]],\n [[ 0.5280],\n [ 0.9206]]])\n >>> torch.quantile(a, q, dim=1, keepdim=True).shape\n torch.Size([3, 2, 1])\n >>> a = torch.arange(4.)\n >>> a\n tensor([0., 1., 2., 3.])\n >>> torch.quantile(a, 0.6, interpolation='linear')\n tensor(1.8000)\n >>> torch.quantile(a, 0.6, interpolation='lower')\n tensor(1.)\n >>> torch.quantile(a, 0.6, interpolation='higher')\n tensor(2.)\n >>> torch.quantile(a, 0.6, interpolation='midpoint')\n tensor(1.5000)\n >>> torch.quantile(a, 0.6, interpolation='nearest')\n tensor(2.)\n >>> torch.quantile(a, 0.4, interpolation='nearest')\n tensor(1.)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantile.html", "category": "pytorch docs"}
{"text": "torch.Tensor.toTensor.to(args, kwargs) -> Tensor\n Performs Tensor dtype and/or device conversion. A \"torch.dtype\" and\n \"torch.device\" are inferred from the arguments of \"self.to(args,\n **kwargs)\".\n Note:\n If the \"self\" Tensor already has the correct \"torch.dtype\" and\n \"torch.device\", then \"self\" is returned. Otherwise, the returned\n tensor is a copy of \"self\" with the desired \"torch.dtype\" and\n \"torch.device\".\n Here are the ways to call \"to\":\n to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) -> Tensor\n Returns a Tensor with the specified \"dtype\"\n Args:\n memory_format (\"torch.memory_format\", optional): the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n torch.to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) -> Tensor\n Returns a Tensor with the specified \"device\" and (optional)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to.html", "category": "pytorch docs"}
{"text": "\"dtype\". If \"dtype\" is \"None\" it is inferred to be\n \"self.dtype\". When \"non_blocking\", tries to convert\n asynchronously with respect to the host if possible, e.g.,\n converting a CPU Tensor with pinned memory to a CUDA Tensor.\n When \"copy\" is set, a new Tensor is created even when the\n Tensor already matches the desired conversion.\n Args:\n memory_format (\"torch.memory_format\", optional): the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n torch.to(other, non_blocking=False, copy=False) -> Tensor\n Returns a Tensor with same \"torch.dtype\" and \"torch.device\"\n as the Tensor \"other\". When \"non_blocking\", tries to convert\n asynchronously with respect to the host if possible, e.g.,\n converting a CPU Tensor with pinned memory to a CUDA Tensor.\n When \"copy\" is set, a new Tensor is created even when the\n Tensor already matches the desired conversion.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to.html", "category": "pytorch docs"}
{"text": "Example:\n >>> tensor = torch.randn(2, 2) # Initially dtype=float32, device=cpu\n >>> tensor.to(torch.float64)\n tensor([[-0.5044, 0.0005],\n [ 0.3310, -0.0584]], dtype=torch.float64)\n >>> cuda0 = torch.device('cuda:0')\n >>> tensor.to(cuda0)\n tensor([[-0.5044, 0.0005],\n [ 0.3310, -0.0584]], device='cuda:0')\n >>> tensor.to(cuda0, dtype=torch.float64)\n tensor([[-0.5044, 0.0005],\n [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')\n >>> other = torch.randn((), dtype=torch.float64, device=cuda0)\n >>> tensor.to(other, non_blocking=True)\n tensor([[-0.5044, 0.0005],\n [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to.html", "category": "pytorch docs"}
{"text": "torch.Tensor.gcdTensor.gcd(other) -> Tensor\n See \"torch.gcd()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gcd.html", "category": "pytorch docs"}
{"text": "torch.Tensor.baddbmmTensor.baddbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor\n See \"torch.baddbmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.baddbmm.html", "category": "pytorch docs"}
{"text": "add_quant_dequantclass torch.quantization.add_quant_dequant(module)\n Wrap the leaf child module in QuantWrapper if it has a valid\n qconfig Note that this function will modify the children of module\n inplace and it can return a new module which wraps the input module\n as well.\n Parameters:\n * module -- input module with qconfig attributes for all the\n leaf modules\n * quantize (that we want to) --\n Returns:\n Either the inplace modified module with submodules wrapped in\n QuantWrapper based on qconfig or a new QuantWrapper module\n which wraps the input module, the latter case only happens when\n the input module is a leaf module and we want to quantize it.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.add_quant_dequant.html", "category": "pytorch docs"}
{"text": "RecordingObserverclass torch.quantization.observer.RecordingObserver(dtype=torch.quint8, **kwargs)\n The module is mainly for debug and records the tensor values during\n runtime.\n Parameters:\n * dtype -- Quantized data type\n * qscheme -- Quantization scheme to be used\n * reduce_range -- Reduces the range of the quantized data\n type by 1 bit", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.RecordingObserver.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_signedTensor.is_signed() -> bool\n Returns True if the data type of \"self\" is a signed data type.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_signed.html", "category": "pytorch docs"}
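As a minimal sketch (not part of the scraped docs), `is_signed()` depends only on the tensor's dtype, never on the values it holds:

```python
import torch

# Illustrative sketch: is_signed() reflects the dtype, not the values.
assert torch.tensor([1, 2], dtype=torch.int32).is_signed()
assert torch.tensor([1.0]).is_signed()                  # float32 is signed
assert not torch.tensor([1], dtype=torch.uint8).is_signed()
assert not torch.tensor([True]).is_signed()             # bool is unsigned
```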
{"text": "torch.broadcast_totorch.broadcast_to(input, shape) -> Tensor\n Broadcasts \"input\" to the shape \"shape\". Equivalent to calling\n \"input.expand(shape)\". See \"expand()\" for details.\n Parameters:\n * input (Tensor) -- the input tensor.\n * shape (list, tuple, or \"torch.Size\") -- the new shape.\n Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> torch.broadcast_to(x, (3, 3))\n tensor([[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]])", "source": "https://pytorch.org/docs/stable/generated/torch.broadcast_to.html", "category": "pytorch docs"}
{"text": "Hardswishclass torch.nn.Hardswish(inplace=False)\n Applies the Hardswish function, element-wise, as described in the\n paper: Searching for MobileNetV3.\n Hardswish is defined as:\n \\text{Hardswish}(x) = \\begin{cases} 0 & \\text{if~} x \\le -3, \\\\ x & \\text{if~} x \\ge +3, \\\\ x \\cdot (x + 3) / 6 & \\text{otherwise} \\end{cases}\n Parameters:\n inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.Hardswish()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardswish.html", "category": "pytorch docs"}
{"text": "torch.Tensor.greater_Tensor.greater_(other) -> Tensor\n In-place version of \"greater()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.greater_.html", "category": "pytorch docs"}
{"text": "ReduceLROnPlateauclass torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False)\n Reduce learning rate when a metric has stopped improving. Models\n often benefit from reducing the learning rate by a factor of 2-10\n once learning stagnates. This scheduler reads a metrics quantity\n and if no improvement is seen for a 'patience' number of epochs,\n the learning rate is reduced.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * mode (str) -- One of min, max. In min mode, lr\n will be reduced when the quantity monitored has stopped\n decreasing; in max mode it will be reduced when the quantity\n monitored has stopped increasing. Default: 'min'.\n * factor (float) -- Factor by which the learning rate will\n be reduced. new_lr = lr * factor. Default: 0.1.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html", "category": "pytorch docs"}
{"text": "\npatience (int) -- Number of epochs with no improvement\n after which learning rate will be reduced. For example, if\n patience = 2, then we will ignore the first 2 epochs with no\n improvement, and will only decrease the LR after the 3rd epoch\n if the loss still hasn't improved then. Default: 10.\nthreshold (float) -- Threshold for measuring the new\n optimum, to only focus on significant changes. Default: 1e-4.\nthreshold_mode (str) -- One of rel, abs. In rel\n mode, dynamic_threshold = best * ( 1 + threshold ) in 'max'\n mode or best * ( 1 - threshold ) in min mode. In abs mode,\n dynamic_threshold = best + threshold in max mode or best -\n threshold in min mode. Default: 'rel'.\ncooldown (int) -- Number of epochs to wait before\n resuming normal operation after lr has been reduced. Default:\n 0.\nmin_lr (float or list) -- A scalar or a list of\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html", "category": "pytorch docs"}
{"text": "scalars. A lower bound on the learning rate of all param\n groups or each group respectively. Default: 0.\n * eps (float) -- Minimal decay applied to lr. If the\n difference between new and old lr is smaller than eps, the\n update is ignored. Default: 1e-8.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-\n\n\n\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\nscheduler = ReduceLROnPlateau(optimizer, 'min')\nfor epoch in range(10):\n train(...)\n val_loss = validate(...)\n # Note that step should be called after validate()\n scheduler.step(val_loss)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html", "category": "pytorch docs"}
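The patience/factor interaction described above can be checked with a runnable sketch (illustrative only, not from the scraped docs): with `patience=1`, the learning rate is cut by `factor` once the monitored loss has failed to improve for more than one consecutive epoch.

```python
import torch

# Illustrative sketch: a flat loss triggers lr reductions.
param = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.SGD([param], lr=0.1)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode='min', factor=0.5, patience=1)

for loss in [1.0, 1.0, 1.0, 1.0, 1.0]:   # no improvement in five epochs
    sched.step(loss)

# Two reductions over five flat epochs: 0.1 -> 0.05 -> 0.025
assert abs(opt.param_groups[0]['lr'] - 0.025) < 1e-12
```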
{"text": "UninitializedBufferclass torch.nn.parameter.UninitializedBuffer(requires_grad=False, device=None, dtype=None)\n A buffer that is not initialized.\n Uninitialized Buffer is a special case of \"torch.Tensor\" where\n the shape of the data is still unknown.\n Unlike a \"torch.Tensor\", uninitialized buffers hold no data and\n attempting to access some properties, like their shape, will throw\n a runtime error. The only operations that can be performed on an\n uninitialized buffer are changing its datatype, moving it to a\n different device and converting it to a regular \"torch.Tensor\".\n The default device or dtype to use when the buffer is materialized\n can be set during construction using e.g. \"device='cuda'\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedBuffer.html", "category": "pytorch docs"}
{"text": "ConvTranspose3dclass torch.ao.nn.quantized.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n Applies a 3D transposed convolution operator over an input image\n composed of several input planes. For details on input arguments,\n parameters, and implementation see \"ConvTranspose3d\".\n Note:\n Currently only the FBGEMM engine is implemented. Please, set the\n torch.backends.quantized.engine = 'fbgemm'\n For special notes, please, see \"Conv3d\"\n Variables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * scale (Tensor) -- scalar for the output scale\n * zero_point (Tensor) -- scalar for the output zero point\n See \"ConvTranspose3d\" for other attributes.\n Examples:\n >>> torch.backends.quantized.engine = 'fbgemm'\n >>> from torch.nn import quantized as nnq", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "\n\n\nfrom torch.nn import quantized as nnq\n >>> # With cubic kernels and equal stride\n >>> m = nnq.ConvTranspose3d(16, 33, 3, stride=2)\n >>> # non-cubic kernels and unequal stride and with padding\n >>> m = nnq.ConvTranspose3d(16, 33, (3, 3, 5), stride=(2, 1, 1), padding=(4, 2, 2))\n >>> input = torch.randn(20, 16, 50, 100, 100)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n >>> # exact output size can be also specified as an argument\n >>> input = torch.randn(1, 16, 12, 12, 12)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> downsample = nnq.Conv3d(16, 16, 3, stride=2, padding=1)\n >>> upsample = nnq.ConvTranspose3d(16, 16, 3, stride=2, padding=1)\n >>> h = downsample(q_input)\n >>> h.size()\n torch.Size([1, 16, 6, 6, 6])\n >>> output = upsample(h, output_size=input.size())\n >>> output.size()\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "\n\n\noutput.size()\n torch.Size([1, 16, 12, 12, 12])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "torch.cuda.is_initializedtorch.cuda.is_initialized()\n Returns whether PyTorch's CUDA state has been initialized.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.is_initialized.html", "category": "pytorch docs"}
{"text": "torch.autograd.function.FunctionCtx.mark_dirtyFunctionCtx.mark_dirty(*args)\n Marks given tensors as modified in an in-place operation.\n This should be called at most once, only from inside the\n \"forward()\" method, and all arguments should be inputs.\n Every tensor that's been modified in-place in a call to \"forward()\"\n should be given to this function, to ensure correctness of our\n checks. It doesn't matter whether the function is called before or\n after modification.\n Examples::\n >>> class Inplace(Function):\n >>> @staticmethod\n >>> def forward(ctx, x):\n >>> x_npy = x.numpy() # x_npy shares storage with x\n >>> x_npy += 1\n >>> ctx.mark_dirty(x)\n >>> return x\n >>>\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, grad_output):\n >>> return grad_output\n >>>", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_dirty.html", "category": "pytorch docs"}
{"text": "\n\n\n return grad_output\n >>>\n >>> a = torch.tensor(1., requires_grad=True, dtype=torch.double).clone()\n >>> b = a * a\n >>> Inplace.apply(a) # This would lead to wrong gradients!\n >>> # but the engine would not know unless we mark_dirty\n >>> b.backward() # RuntimeError: one of the variables needed for gradient\n >>> # computation has been modified by an inplace operation\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_dirty.html", "category": "pytorch docs"}
{"text": "torch.Tensor.index_putTensor.index_put(indices, values, accumulate=False) -> Tensor\n Out-place version of \"index_put_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_put.html", "category": "pytorch docs"}
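Since the `index_put` entry only says "out-place version of `index_put_()`", a short illustrative sketch (not part of the scraped docs) may help: `indices` is a tuple of index tensors, exactly as in `x[indices] = values`.

```python
import torch

# Illustrative sketch: index_put is the out-of-place form of index_put_.
x = torch.zeros(3, 3)
rows = torch.tensor([0, 2])
cols = torch.tensor([1, 1])
y = x.index_put((rows, cols), torch.tensor([5.0, 7.0]))
assert y[0, 1] == 5.0 and y[2, 1] == 7.0
assert x.sum() == 0.0                     # the original tensor is untouched

# With accumulate=True, values at repeated indices add instead of overwrite.
z = torch.zeros(3).index_put(
    (torch.tensor([1, 1]),), torch.tensor([2.0, 3.0]), accumulate=True)
assert z[1] == 5.0
```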
{"text": "torch.quantize_per_tensortorch.quantize_per_tensor(input, scale, zero_point, dtype) -> Tensor\n Converts a float tensor to a quantized tensor with given scale and\n zero point.\n Parameters:\n * input (Tensor) -- float tensor or list of tensors to\n quantize\n * scale (float or Tensor) -- scale to apply in\n quantization formula\n * zero_point (int or Tensor) -- offset in integer\n value that maps to float zero\n * dtype (\"torch.dtype\") -- the desired data type of returned\n tensor. Has to be one of the quantized dtypes: \"torch.quint8\",\n \"torch.qint8\", \"torch.qint32\"\n Returns:\n A newly quantized tensor or list of quantized tensors.\n Return type:\n Tensor\n Example:\n >>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8)\n tensor([-1., 0., 1., 2.], size=(4,), dtype=torch.quint8,", "source": "https://pytorch.org/docs/stable/generated/torch.quantize_per_tensor.html", "category": "pytorch docs"}
{"text": "quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)\n >>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8).int_repr()\n tensor([ 0, 10, 20, 30], dtype=torch.uint8)\n >>> torch.quantize_per_tensor([torch.tensor([-1.0, 0.0]), torch.tensor([-2.0, 2.0])],\n >>> torch.tensor([0.1, 0.2]), torch.tensor([10, 20]), torch.quint8)\n (tensor([-1., 0.], size=(2,), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10),\n tensor([-2., 2.], size=(2,), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=0.2, zero_point=20))\n >>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), torch.tensor(0.1), torch.tensor(10), torch.quint8)\n tensor([-1., 0., 1., 2.], size=(4,), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=0.10, zero_point=10)", "source": "https://pytorch.org/docs/stable/generated/torch.quantize_per_tensor.html", "category": "pytorch docs"}
{"text": "torch.svd_lowranktorch.svd_lowrank(A, q=6, niter=2, M=None)\n Return the singular value decomposition \"(U, S, V)\" of a matrix,\n batches of matrices, or a sparse matrix A such that A \\approx U\n diag(S) V^T. In case M is given, then SVD is computed for the\n matrix A - M.\n Note:\n The implementation is based on the Algorithm 5.1 from Halko et\n al, 2009.\n Note:\n To obtain repeatable results, reset the seed for the pseudorandom\n number generator\n Note:\n The input is assumed to be a low-rank matrix.\n Note:\n In general, use the full-rank SVD implementation\n \"torch.linalg.svd()\" for dense matrices due to its 10-fold higher\n performance characteristics. The low-rank SVD will be useful for\n huge sparse matrices that \"torch.linalg.svd()\" cannot handle.\n Args:\n A (Tensor): the input tensor of size (*, m, n)\n q (int, optional): a slightly overestimated rank of A.\n niter (int, optional): the number of subspace iterations to", "source": "https://pytorch.org/docs/stable/generated/torch.svd_lowrank.html", "category": "pytorch docs"}
{"text": "conduct; niter must be a nonnegative integer, and defaults to\n 2\n M (Tensor, optional): the input tensor's mean of size\n (*, 1, n).\n References:\n * Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding\n structure with randomness: probabilistic algorithms for\n constructing approximate matrix decompositions,\n arXiv:0909.4061 [math.NA; math.PR], 2009 (available at arXiv).\n Return type:\n Tuple[Tensor, Tensor, Tensor]", "source": "https://pytorch.org/docs/stable/generated/torch.svd_lowrank.html", "category": "pytorch docs"}
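As an illustrative sketch (not part of the scraped docs), the low-rank assumption above can be exercised on a matrix of known rank: with `q` slightly above the true rank, the reconstruction is essentially exact.

```python
import torch

# Illustrative sketch: approximate a rank-4 matrix with svd_lowrank.
torch.manual_seed(0)                           # reset seed for repeatability
A = torch.randn(100, 4) @ torch.randn(4, 60)   # exactly rank 4
U, S, V = torch.svd_lowrank(A, q=6)            # q overestimates the rank
assert U.shape == (100, 6) and S.shape == (6,) and V.shape == (60, 6)

A_hat = U @ torch.diag(S) @ V.T                # A ~= U diag(S) V^T
assert torch.allclose(A, A_hat, atol=1e-3)
```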
{"text": "torch.allclosetorch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> bool\n This function checks if \"input\" and \"other\" satisfy the condition:\n \\lvert \\text{input} - \\text{other} \\rvert \\leq \\texttt{atol} +\n \\texttt{rtol} \\times \\lvert \\text{other} \\rvert\n elementwise, for all elements of \"input\" and \"other\". The behaviour\n of this function is analogous to numpy.allclose\n Parameters:\n * input (Tensor) -- first tensor to compare\n * other (Tensor) -- second tensor to compare\n * atol (float, optional) -- absolute tolerance.\n Default: 1e-08\n * rtol (float, optional) -- relative tolerance.\n Default: 1e-05\n * equal_nan (bool, optional) -- if \"True\", then two\n \"NaN\" s will be considered equal. Default: \"False\"\n Example:\n >>> torch.allclose(torch.tensor([10000., 1e-07]), torch.tensor([10000.1, 1e-08]))\n False", "source": "https://pytorch.org/docs/stable/generated/torch.allclose.html", "category": "pytorch docs"}
{"text": "False\n >>> torch.allclose(torch.tensor([10000., 1e-08]), torch.tensor([10000.1, 1e-09]))\n True\n >>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]))\n False\n >>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]), equal_nan=True)\n True", "source": "https://pytorch.org/docs/stable/generated/torch.allclose.html", "category": "pytorch docs"}
{"text": "FeatureAlphaDropoutclass torch.nn.FeatureAlphaDropout(p=0.5, inplace=False)\n Randomly masks out entire channels (a channel is a feature map,\n e.g. the j-th channel of the i-th sample in the batch input is a\n tensor \\text{input}[i, j]) of the input tensor. Instead of setting\n activations to zero, as in regular Dropout, the activations are set\n to the negative saturation value of the SELU activation function.\n More details can be found in the paper Self-Normalizing Neural\n Networks .\n Each element will be masked independently for each sample on every\n forward call with probability \"p\" using samples from a Bernoulli\n distribution. The elements to be masked are randomized on every\n forward call, and scaled and shifted to maintain zero mean and unit\n variance.\n Usually the input comes from \"nn.AlphaDropout\" modules.\n As described in the paper Efficient Object Localization Using\n Convolutional Networks , if adjacent pixels within feature maps are", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FeatureAlphaDropout.html", "category": "pytorch docs"}
{"text": "strongly correlated (as is normally the case in early convolution\n layers) then i.i.d. dropout will not regularize the activations and\n will otherwise just result in an effective learning rate decrease.\n In this case, \"nn.AlphaDropout()\" will help promote independence\n between feature maps and should be used instead.\n Parameters:\n * p (float, optional) -- probability of an element to\n be zeroed. Default: 0.5\n * inplace (bool, optional) -- If set to \"True\", will\n do this operation in-place\n Shape:\n * Input: (N, C, D, H, W) or (C, D, H, W).\n * Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input).\n Examples:\n >>> m = nn.FeatureAlphaDropout(p=0.2)\n >>> input = torch.randn(20, 16, 4, 32, 32)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FeatureAlphaDropout.html", "category": "pytorch docs"}
{"text": "torch.Tensor.vdotTensor.vdot(other) -> Tensor\n See \"torch.vdot()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.vdot.html", "category": "pytorch docs"}
{"text": "torch.cuda.memory_reservedtorch.cuda.memory_reserved(device=None)\n Returns the current GPU memory managed by the caching allocator in\n bytes for a given device.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n int\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_reserved.html", "category": "pytorch docs"}
{"text": "torch._foreach_acos_torch._foreach_acos_(self: List[Tensor]) -> None\n Apply \"torch.acos()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_acos_.html", "category": "pytorch docs"}
{"text": "torch.sym_inttorch.sym_int(a)\n SymInt-aware utility for int casting.\n Parameters:\n a (SymInt, SymFloat, or object) -- Object to cast", "source": "https://pytorch.org/docs/stable/generated/torch.sym_int.html", "category": "pytorch docs"}
{"text": "torch.fft.ifft2torch.fft.ifft2(input, s=None, dim=(-2, -1), norm=None, *, out=None) -> Tensor\n Computes the 2 dimensional inverse discrete Fourier transform of\n \"input\". Equivalent to \"ifftn()\" but IFFTs only the last two\n dimensions by default.\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions.\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the IFFT. If a length \"-1\" is specified, no padding\n is done in that dimension. Default: \"s = [input.size(d) for d\n in dim]\"\n * dim (Tuple[int], optional) -- Dimensions to be\n transformed. Default: last two dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html", "category": "pytorch docs"}
{"text": "transformed. Default: last two dimensions.\n * norm (str, optional) --\n Normalization mode. For the backward transform (\"ifft2()\"),\n these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the IFFT\n orthonormal)\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"fft2()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"ifft2()\" the exact\n inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nx = torch.rand(10, 10, dtype=torch.complex64)\nifft2 = torch.fft.ifft2(x)\n The discrete Fourier transform is separable, so \"ifft2()\" here is\n equivalent to two one-dimensional \"ifft()\" calls:\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html", "category": "pytorch docs"}
{"text": "\n\n\ntwo_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)\ntorch.testing.assert_close(ifft2, two_iffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.leaky_relu_torch.nn.functional.leaky_relu_(input, negative_slope=0.01) -> Tensor\n In-place version of \"leaky_relu()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.leaky_relu_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.masked_fill_Tensor.masked_fill_(mask, value)\n Fills elements of \"self\" tensor with \"value\" where \"mask\" is True.\n The shape of \"mask\" must be broadcastable with the shape of the\n underlying tensor.\n Parameters:\n * mask (BoolTensor) -- the boolean mask\n * value (float) -- the value to fill in with", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill_.html", "category": "pytorch docs"}
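A brief illustrative sketch (not part of the scraped docs) of the broadcasting rule stated above: the mask must be broadcastable with the filled tensor, so a `(1, 3)` mask applies to every row of a `(2, 3)` tensor.

```python
import torch

# Illustrative sketch: masked_fill_ writes `value` wherever `mask` is True.
x = torch.arange(6.0).reshape(2, 3)
x.masked_fill_(x > 3, -1.0)
assert x.tolist() == [[0.0, 1.0, 2.0], [3.0, -1.0, -1.0]]

# A (1, 3) mask broadcasts over both rows.
y = torch.ones(2, 3)
y.masked_fill_(torch.tensor([[True, False, False]]), 0.0)
assert y.tolist() == [[0.0, 1.0, 1.0], [0.0, 1.0, 1.0]]
```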
{"text": "default_weight_fake_quanttorch.quantization.fake_quantize.default_weight_fake_quant\n alias of functools.partial(FakeQuantize,\n observer=MovingAverageMinMaxObserver,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric, reduce_range=False)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_weight_fake_quant.html", "category": "pytorch docs"}
{"text": "ConvTranspose3dclass torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n Applies a 3D transposed convolution operator over an input image\n composed of several input planes. The transposed convolution\n operator multiplies each input value element-wise by a learnable\n kernel, and sums over the outputs from all input feature planes.\n This module can be seen as the gradient of Conv3d with respect to\n its input. It is also known as a fractionally-strided convolution\n or a deconvolution (although it is not an actual deconvolution\n operation as it does not compute a true inverse of convolution).\n For more information, see the visualizations here and the\n Deconvolutional Networks paper.\n This module supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "use different precision for backward.\n * \"stride\" controls the stride for the cross-correlation.\n * \"padding\" controls the amount of implicit zero padding on both\n sides for \"dilation * (kernel_size - 1) - padding\" number of\n points. See note below for details.\n * \"output_padding\" controls the additional size added to one side\n of the output shape. See note below for details.\n * \"dilation\" controls the spacing between the kernel points; also\n known as the \u00e0 trous algorithm. It is harder to describe, but the\n link here has a nice visualization of what \"dilation\" does.\n * \"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". For example,\n * At groups=1, all inputs are convolved to all outputs.\n * At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "channels and producing half the output channels, and both\n subsequently concatenated.\n * At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\n The parameters \"kernel_size\", \"stride\", \"padding\", \"output_padding\"\n can either be:\n * a single \"int\" -- in which case the same value is used for the\n depth, height and width dimensions\n * a \"tuple\" of three ints -- in which case, the first int is\n used for the depth dimension, the second int for the height\n dimension and the third int for the width dimension\n Note:\n The \"padding\" argument effectively adds \"dilation * (kernel_size\n - 1) - padding\" amount of zero padding to both sides of the\n input. This is set so that when a \"Conv3d\" and a\n \"ConvTranspose3d\" are initialized with same parameters, they are\n inverses of each other in regard to the input and output shapes.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "However, when \"stride > 1\", \"Conv3d\" maps multiple input shapes\n to the same output shape. \"output_padding\" is provided to resolve\n this ambiguity by effectively increasing the calculated output\n shape on one side. Note that \"output_padding\" is only used to\n find output shape, but does not actually add zero-padding to\n output.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Parameters:\n * in_channels (int) -- Number of channels in the input\n image\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to\n both sides of each dimension in the input. Default: 0\n * output_padding (int or tuple, optional) --\n Additional size added to one side of each dimension in the\n output shape. Default: 0\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n Shape:\n * Input: (N, C_{in}, D_{in}, H_{in}, W_{in}) or (C_{in}, D_{in},\n H_{in}, W_{in})\n * Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) or (C_{out},", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "D_{out}, H_{out}, W_{out}), where\n D_{out} = (D_{in} - 1) \\times \\text{stride}[0] - 2 \\times\n \\text{padding}[0] + \\text{dilation}[0] \\times\n (\\text{kernel_size}[0] - 1) + \\text{output_padding}[0] + 1\n H_{out} = (H_{in} - 1) \\times \\text{stride}[1] - 2 \\times\n \\text{padding}[1] + \\text{dilation}[1] \\times\n (\\text{kernel_size}[1] - 1) + \\text{output_padding}[1] + 1\n W_{out} = (W_{in} - 1) \\times \\text{stride}[2] - 2 \\times\n \\text{padding}[2] + \\text{dilation}[2] \\times\n (\\text{kernel_size}[2] - 1) + \\text{output_padding}[2] + 1\n Variables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{in_channels},\n \\frac{\\text{out_channels}}{\\text{groups}},\n \\text{kernel_size[0]}, \\text{kernel_size[1]},\n \\text{kernel_size[2]}). The values of these weights are\n sampled from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"}
{"text": "\\frac{groups}{C_\\text{out} *\n \\prod_{i=0}^{2}\\text{kernel_size}[i]}\n * bias (Tensor) -- the learnable bias of the module of\n shape (out_channels) If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{out} *\n \\prod_{i=0}^{2}\\text{kernel_size}[i]}\n Examples:\n >>> # With square kernels and equal stride\n >>> m = nn.ConvTranspose3d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))\n >>> input = torch.randn(20, 16, 10, 50, 100)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"}
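The output-shape formulas above can be verified numerically; the sketch below (illustrative, not part of the scraped docs) computes each spatial dimension as `(size - 1)*stride - 2*padding + dilation*(kernel - 1) + output_padding + 1` and checks it against the module.

```python
import torch
from torch import nn

# Illustrative sketch: verify the ConvTranspose3d output-shape formula.
def convtranspose_out(size, k, s, p, op=0, d=1):
    # (size - 1)*stride - 2*padding + dilation*(kernel - 1) + out_pad + 1
    return (size - 1) * s - 2 * p + d * (k - 1) + op + 1

m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))
x = torch.randn(2, 16, 10, 50, 100)
out = m(x)

expected = (convtranspose_out(10, 3, 2, 0),     # depth:  21
            convtranspose_out(50, 5, 1, 4),     # height: 46
            convtranspose_out(100, 2, 1, 2))    # width:  97
assert out.shape == (2, 33, *expected)
```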
{"text": "torch.cholesky_inversetorch.cholesky_inverse(input, upper=False, *, out=None) -> Tensor\n Computes the inverse of a symmetric positive-definite matrix A\n using its Cholesky factor u: returns matrix \"inv\". The inverse is\n computed using LAPACK routines \"dpotri\" and \"spotri\" (and the\n corresponding MAGMA routines).\n If \"upper\" is \"False\", u is lower triangular such that the returned\n tensor is\n inv = (u u^T)^{-1}\n If \"upper\" is \"True\", u is upper triangular such\n that the returned tensor is\n inv = (u^T u)^{-1}\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if A is a batch of matrices then\n the output has the same batch dimensions.\n Parameters:\n * input (Tensor) -- the input tensor A of size (*, n, n),\n consisting of symmetric positive-definite matrices where * is\n zero or more batch dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html", "category": "pytorch docs"}
{"text": "zero or more batch dimensions.\n * upper (bool, optional) -- flag that indicates\n whether to return an upper or lower triangular matrix. Default:\n False\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor for inv\n Example:\n >>> a = torch.randn(3, 3)\n >>> a = torch.mm(a, a.t()) + 1e-05 * torch.eye(3) # make symmetric positive definite\n >>> u = torch.linalg.cholesky(a)\n >>> a\n tensor([[ 0.9935, -0.6353, 1.5806],\n [ -0.6353, 0.8769, -1.7183],\n [ 1.5806, -1.7183, 10.6618]])\n >>> torch.cholesky_inverse(u)\n tensor([[ 1.9314, 1.2251, -0.0889],\n [ 1.2251, 2.4439, 0.2122],\n [-0.0889, 0.2122, 0.1412]])\n >>> a.inverse()\n tensor([[ 1.9314, 1.2251, -0.0889],\n [ 1.2251, 2.4439, 0.2122],\n [-0.0889, 0.2122, 0.1412]])\n >>> a = torch.randn(3, 2, 2) # Example for batched input", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html", "category": "pytorch docs"}
{"text": "\n\n\na = a @ a.mT + 1e-03 # make symmetric positive-definite\n >>> l = torch.linalg.cholesky(a)\n >>> z = l @ l.mT\n >>> torch.dist(z, a)\n tensor(3.5894e-07)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html", "category": "pytorch docs"}
{"text": "torch.Tensor.isfiniteTensor.isfinite() -> Tensor\n See \"torch.isfinite()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isfinite.html", "category": "pytorch docs"}
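Since the `isfinite` entry only points to `torch.isfinite()`, a one-liner sketch (illustrative, not part of the scraped docs) shows its behavior: everything except `inf`, `-inf`, and `nan` counts as finite.

```python
import torch

# Illustrative sketch: isfinite flags inf, -inf, and nan as non-finite.
x = torch.tensor([1.0, float('inf'), float('-inf'), float('nan')])
assert x.isfinite().tolist() == [True, False, False, False]
```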
{"text": "GroupNormclass torch.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True, device=None, dtype=None)\n Applies Group Normalization over a mini-batch of inputs as\n described in the paper Group Normalization\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The input channels are separated into \"num_groups\" groups, each\n containing \"num_channels / num_groups\" channels. \"num_channels\"\n must be divisible by \"num_groups\". The mean and standard-deviation\n are calculated separately over each group. \\gamma and \\beta are\n learnable per-channel affine transform parameter vectors of size\n \"num_channels\" if \"affine\" is \"True\". The standard-deviation is\n calculated via the biased estimator, equivalent to\n torch.var(input, unbiased=False).\n This layer uses statistics computed from input data in both\n training and evaluation modes.\n Parameters:\n * num_groups (int) -- number of groups to separate the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GroupNorm.html", "category": "pytorch docs"}
{"text": "channels into\n * num_channels (int) -- number of channels expected in\n input\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable per-channel affine\n parameters initialized to ones (for weights) and zeros (for\n biases). Default: \"True\".\n Shape:\n * Input: (N, C, ) where C=\\text{num_channels}\n * Output: (N, C, ) (same shape as input)\n Examples:\n >>> input = torch.randn(20, 6, 10, 10)\n >>> # Separate 6 channels into 3 groups\n >>> m = nn.GroupNorm(3, 6)\n >>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)\n >>> m = nn.GroupNorm(6, 6)\n >>> # Put all 6 channels into a single group (equivalent with LayerNorm)\n >>> m = nn.GroupNorm(1, 6)\n >>> # Activating the module\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GroupNorm.html", "category": "pytorch docs"}
{"text": "torch.cuda.caching_allocator_alloctorch.cuda.caching_allocator_alloc(size, device=None, stream=None)\n Performs a memory allocation using the CUDA memory allocator.\n Memory is allocated for a given device and a stream, this function\n is intended to be used for interoperability with other frameworks.\n Allocated memory is released through \"caching_allocator_delete()\".\n Parameters:\n * size (int) -- number of bytes to be allocated.\n * device (torch.device or int, optional) --\n selected device. If it is \"None\" the default CUDA device is\n used.\n * stream (torch.cuda.Stream or int, optional) --\n selected stream. If is \"None\" then the default stream for the\n selected device is used.\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.caching_allocator_alloc.html", "category": "pytorch docs"}
{"text": "BatchNorm1dclass torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n Applies Batch Normalization over a 2D or 3D input as described in\n the paper Batch Normalization: Accelerating Deep Network Training\n by Reducing Internal Covariate Shift .\n y = \\frac{x - \\mathrm{E}[x]}{\\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The mean and standard-deviation are calculated per-dimension over\n the mini-batches and \\gamma and \\beta are learnable parameter\n vectors of size C (where C is the number of features or\n channels of the input). By default, the elements of \\gamma are set\n to 1 and the elements of \\beta are set to 0. The standard-deviation\n is calculated via the biased estimator, equivalent to\n torch.var(input, unbiased=False).\n Also by default, during training this layer keeps running estimates\n of its computed mean and variance, which are then used for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html", "category": "pytorch docs"}
{"text": "normalization during evaluation. The running estimates are kept\n with a default \"momentum\" of 0.1.\n If \"track_running_stats\" is set to \"False\", this layer then does\n not keep running estimates, and batch statistics are instead used\n during evaluation time as well.\n Note:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n Because the Batch Normalization is done over the C dimension,\n computing statistics on (N, L) slices, it's common terminology to\n call this Temporal Batch Normalization.\n Parameters:\n * num_features (int) -- number of features or channels C\n of the input\n * eps (float) -- a value added to the denominator for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html", "category": "pytorch docs"}
{"text": "numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n Shape:\n * Input: (N, C) or (N, C, L), where N is the batch size, C is\n the number of features or channels, and L is the sequence\n length", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html", "category": "pytorch docs"}
{"text": "length\n * Output: (N, C) or (N, C, L) (same shape as input)\n Examples:\n >>> # With Learnable Parameters\n >>> m = nn.BatchNorm1d(100)\n >>> # Without Learnable Parameters\n >>> m = nn.BatchNorm1d(100, affine=False)\n >>> input = torch.randn(20, 100)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.hardsigmoidtorch.nn.functional.hardsigmoid(input, inplace=False)\n Applies the element-wise function\n \\text{Hardsigmoid}(x) = \\begin{cases} 0 & \\text{if~} x \\le\n -3, \\ 1 & \\text{if~} x \\ge +3, \\ x / 6 + 1 / 2 &\n \\text{otherwise} \\end{cases}\n Parameters:\n inplace (bool) -- If set to \"True\", will do this operation\n in-place. Default: \"False\"\n Return type:\n Tensor\n See \"Hardsigmoid\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardsigmoid.html", "category": "pytorch docs"}
{"text": "torch.cuda.synchronizetorch.cuda.synchronize(device=None)\n Waits for all kernels in all streams on a CUDA device to complete.\n Parameters:\n device (torch.device or int, optional) -- device\n for which to synchronize. It uses the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.synchronize.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logical_xor_Tensor.logical_xor_() -> Tensor\n In-place version of \"logical_xor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_xor_.html", "category": "pytorch docs"}
{"text": "torch.addcmultorch.addcmul(input, tensor1, tensor2, , value=1, out=None) -> Tensor\n Performs the element-wise multiplication of \"tensor1\" by \"tensor2\",\n multiplies the result by the scalar \"value\" and adds it to \"input\".\n \\text{out}_i = \\text{input}_i + \\text{value} \\times\n \\text{tensor1}_i \\times \\text{tensor2}_i\n The shapes of \"tensor\", \"tensor1\", and \"tensor2\" must be\n broadcastable.\n For inputs of type FloatTensor or DoubleTensor, \"value\" must be\n a real number, otherwise an integer.\n Parameters:\n * input (Tensor) -- the tensor to be added\n * tensor1 (Tensor) -- the tensor to be multiplied\n * tensor2 (Tensor) -- the tensor to be multiplied\n Keyword Arguments:\n * value (Number, optional) -- multiplier for tensor1\n . tensor2\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> t = torch.randn(1, 3)\n >>> t1 = torch.randn(3, 1)\n >>> t2 = torch.randn(1, 3)", "source": "https://pytorch.org/docs/stable/generated/torch.addcmul.html", "category": "pytorch docs"}
{"text": "\n\n\nt2 = torch.randn(1, 3)\n >>> torch.addcmul(t, t1, t2, value=0.1)\n tensor([[-0.8635, -0.6391, 1.6174],\n [-0.7617, -0.5879, 1.7388],\n [-0.8353, -0.6249, 1.6511]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.addcmul.html", "category": "pytorch docs"}
{"text": "torch.Tensor.multiplyTensor.multiply(value) -> Tensor\n See \"torch.multiply()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.multiply.html", "category": "pytorch docs"}
{"text": "MovingAverageMinMaxObserverclass torch.quantization.observer.MovingAverageMinMaxObserver(averaging_constant=0.01, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, eps=1.1920928955078125e-07, kwargs)\n Observer module for computing the quantization parameters based on\n the moving average of the min and max values.\n This observer computes the quantization parameters based on the\n moving averages of minimums and maximums of the incoming tensors.\n The module records the average minimum and maximum of incoming\n tensors, and uses this statistic to compute the quantization\n parameters.\n Parameters:\n * averaging_constant -- Averaging constant for min/max.\n * dtype -- dtype argument to the quantize node needed to\n implement the reference model spec.\n * qscheme -- Quantization scheme to be used\n * reduce_range** -- Reduces the range of the quantized data\n type by 1 bit", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAverageMinMaxObserver.html", "category": "pytorch docs"}
{"text": "type by 1 bit\n * quant_min -- Minimum quantization value. If unspecified,\n it will follow the 8-bit setup.\n * quant_max -- Maximum quantization value. If unspecified,\n it will follow the 8-bit setup.\n * eps (Tensor) -- Epsilon value for float32, Defaults to\n torch.finfo(torch.float32).eps.\n The moving average min/max is computed as follows\n \\begin{array}{ll} x_\\text{min} = \\begin{cases}\n \\min(X) & \\text{if~}x_\\text{min} = \\text{None} \\ (1\n - c) x_\\text{min} + c \\min(X) & \\text{otherwise}\n \\end{cases}\\ x_\\text{max} = \\begin{cases}\n \\max(X) & \\text{if~}x_\\text{max} = \\text{None} \\ (1\n - c) x_\\text{max} + c \\max(X) & \\text{otherwise}\n \\end{cases}\\ \\end{array}\n where x_\\text{min/max} is the running average min/max, X is is the\n incoming tensor, and c is the \"averaging_constant\".\n The scale and zero point are then computed as in \"MinMaxObserver\".\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAverageMinMaxObserver.html", "category": "pytorch docs"}
{"text": "Note:\n Only works with \"torch.per_tensor_affine\" quantization scheme.\n Note:\n If the running minimum equals to the running maximum, the scale\n and zero_point are set to 1.0 and 0.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAverageMinMaxObserver.html", "category": "pytorch docs"}
{"text": "torch.cuda.get_gencode_flagstorch.cuda.get_gencode_flags()\n Returns NVCC gencode flags this library was compiled with.\n Return type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_gencode_flags.html", "category": "pytorch docs"}
{"text": "Softsignclass torch.nn.Softsign\n Applies the element-wise function:\n \\text{SoftSign}(x) = \\frac{x}{ 1 + |x|}\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.Softsign()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softsign.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_sharedTensor.is_shared()\n Checks if tensor is in shared memory.\n This is always \"True\" for CUDA tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_shared.html", "category": "pytorch docs"}
{"text": "torch.Tensor.topkTensor.topk(k, dim=None, largest=True, sorted=True)\n See \"torch.topk()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.topk.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.prune.removetorch.nn.utils.prune.remove(module, name)\n Removes the pruning reparameterization from a module and the\n pruning method from the forward hook. The pruned parameter named\n \"name\" remains permanently pruned, and the parameter named\n \"name+'_orig'\" is removed from the parameter list. Similarly, the\n buffer named \"name+'_mask'\" is removed from the buffers.\n Note:\n Pruning itself is NOT undone or reversed!\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n -[ Examples ]-\n\n\n\nm = random_unstructured(nn.Linear(5, 7), name='weight', amount=0.2)\nm = remove(m, name='weight')\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.remove.html", "category": "pytorch docs"}
{"text": "torch.Tensor.q_per_channel_scalesTensor.q_per_channel_scales() -> Tensor\n Given a Tensor quantized by linear (affine) per-channel\n quantization, returns a Tensor of scales of the underlying\n quantizer. It has the number of elements that matches the\n corresponding dimensions (from q_per_channel_axis) of the tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_per_channel_scales.html", "category": "pytorch docs"}
{"text": "torch.take_along_dimtorch.take_along_dim(input, indices, dim, , out=None) -> Tensor\n Selects values from \"input\" at the 1-dimensional indices from\n \"indices\" along the given \"dim\".\n Functions that return indices along a dimension, like\n \"torch.argmax()\" and \"torch.argsort()\", are designed to work with\n this function. See the examples below.\n Note:\n This function is similar to NumPy's take_along_axis. See also\n \"torch.gather()\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * indices (tensor) -- the indices into \"input\". Must have\n long dtype.\n * dim (int) -- dimension to select along.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> t = torch.tensor([[10, 30, 20], [60, 40, 50]])\n >>> max_idx = torch.argmax(t)\n >>> torch.take_along_dim(t, max_idx)\n tensor([60])\n >>> sorted_idx = torch.argsort(t, dim=1)", "source": "https://pytorch.org/docs/stable/generated/torch.take_along_dim.html", "category": "pytorch docs"}
{"text": "\n\n\nsorted_idx = torch.argsort(t, dim=1)\n >>> torch.take_along_dim(t, sorted_idx, dim=1)\n tensor([[10, 20, 30],\n [40, 50, 60]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.take_along_dim.html", "category": "pytorch docs"}
{"text": "torch.get_rng_statetorch.get_rng_state()\n Returns the random number generator state as a torch.ByteTensor.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.get_rng_state.html", "category": "pytorch docs"}
{"text": "torch.nn.modules.module.register_module_forward_hooktorch.nn.modules.module.register_module_forward_hook(hook)\n Registers a global forward hook for all the modules\n Warning:\n This adds global state to the nn.module module and it is only\n intended for debugging/profiling purposes.\n The hook will be called every time after \"forward()\" has computed\n an output. It should have the following signature:\n hook(module, input, output) -> None or modified output\n The input contains only the positional arguments given to the\n module. Keyword arguments won't be passed to the hooks and only to\n the \"forward\". The hook can modify the output. It can modify the\n input inplace but it will not have effect on forward since this is\n called after \"forward()\" is called.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html", "category": "pytorch docs"}
{"text": "\"torch.utils.hooks.RemovableHandle\"\n This hook will be executed before specific module hooks registered\n with \"register_forward_hook\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html", "category": "pytorch docs"}
{"text": "torch.lutorch.lu(args, *kwargs)\n Computes the LU factorization of a matrix or batches of matrices\n \"A\". Returns a tuple containing the LU factorization and pivots of\n \"A\". Pivoting is done if \"pivot\" is set to \"True\".\n Warning:\n \"torch.lu()\" is deprecated in favor of \"torch.linalg.lu_factor()\"\n and \"torch.linalg.lu_factor_ex()\". \"torch.lu()\" will be removed\n in a future PyTorch release. \"LU, pivots, info = torch.lu(A,\n compute_pivots)\" should be replaced with\n LU, pivots = torch.linalg.lu_factor(A, compute_pivots)\n \"LU, pivots, info = torch.lu(A, compute_pivots, get_infos=True)\"\n should be replaced with\n LU, pivots, info = torch.linalg.lu_factor_ex(A, compute_pivots)\n Note:\n * The returned permutation matrix for every matrix in the batch\n is represented by a 1-indexed vector of size \"min(A.shape[-2],\n A.shape[-1])\". \"pivots[i] == j\" represents that in the \"i\"-th\n step of the algorithm, the \"i\"-th row was permuted with the", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"}
{"text": "\"j-1\"-th row.\n * LU factorization with \"pivot\" = \"False\" is not available for\n CPU, and attempting to do so will throw an error. However, LU\n factorization with \"pivot\" = \"False\" is available for CUDA.\n * This function does not check if the factorization was\n successful or not if \"get_infos\" is \"True\" since the status of\n the factorization is present in the third element of the return\n tuple.\n * In the case of batches of square matrices with size less or\n equal to 32 on a CUDA device, the LU factorization is repeated\n for singular matrices due to the bug in the MAGMA library (see\n magma issue 13).\n * \"L\", \"U\", and \"P\" can be derived using \"torch.lu_unpack()\".\n Warning:\n The gradients of this function will only be finite when \"A\" is\n full rank. This is because the LU decomposition is just\n differentiable at full rank matrices. Furthermore, if \"A\" is\n close to not being full rank, the gradient will be numerically", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"}
{"text": "unstable as it depends on the computation of L^{-1} and U^{-1}.\n Parameters:\n * A (Tensor) -- the tensor to factor of size (, m, n)\n * pivot (bool, optional) -- controls whether pivoting\n is done. Default: \"True\"\n * get_infos (bool, optional) -- if set to \"True\",\n returns an info IntTensor. Default: \"False\"\n * out (tuple, optional) -- optional output tuple. If\n \"get_infos\" is \"True\", then the elements in the tuple are\n Tensor, IntTensor, and IntTensor. If \"get_infos\" is \"False\",\n then the elements in the tuple are Tensor, IntTensor. Default:\n \"None\"\n Returns:\n A tuple of tensors containing\n * factorization (Tensor): the factorization of size (,\n m, n)\n * pivots (IntTensor): the pivots of size (*,\n \\text{min}(m, n)). \"pivots\" stores all the intermediate\n transpositions of rows. The final permutation \"perm\" could", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"}
{"text": "be reconstructed by applying \"swap(perm[i], perm[pivots[i]\n - 1])\" for \"i = 0, ..., pivots.size(-1) - 1\", where \"perm\"\n is initially the identity permutation of m elements\n (essentially this is what \"torch.lu_unpack()\" is doing).\n * infos (IntTensor, optional): if \"get_infos\" is\n \"True\", this is a tensor of size (*) where non-zero values\n indicate whether factorization for the matrix or each\n minibatch has succeeded or failed\n Return type:\n (Tensor, IntTensor, IntTensor (optional))\n Example:\n >>> A = torch.randn(2, 3, 3)\n >>> A_LU, pivots = torch.lu(A)\n >>> A_LU\n tensor([[[ 1.3506, 2.5558, -0.0816],\n [ 0.1684, 1.1551, 0.1940],\n [ 0.1193, 0.6189, -0.5497]],\n [[ 0.4526, 1.2526, -0.3285],\n [-0.7988, 0.7175, -0.9701],\n [ 0.2634, -0.9255, -0.3459]]])\n >>> pivots\n tensor([[ 3, 3, 3],", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"}
{"text": "\n\n\npivots\n tensor([[ 3, 3, 3],\n [ 3, 3, 3]], dtype=torch.int32)\n >>> A_LU, pivots, info = torch.lu(A, get_infos=True)\n >>> if info.nonzero().size(0) == 0:\n ... print('LU factorization succeeded for all samples!')\n LU factorization succeeded for all samples!\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addbmm_Tensor.addbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor\n In-place version of \"addbmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addbmm_.html", "category": "pytorch docs"}
{"text": "torch.absolutetorch.absolute(input, *, out=None) -> Tensor\n Alias for \"torch.abs()\"", "source": "https://pytorch.org/docs/stable/generated/torch.absolute.html", "category": "pytorch docs"}
{"text": "torch.Tensor.requires_gradTensor.requires_grad\n Is \"True\" if gradients need to be computed for this Tensor, \"False\"\n otherwise.\n Note:\n The fact that gradients need to be computed for a Tensor do not\n mean that the \"grad\" attribute will be populated, see \"is_leaf\"\n for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.requires_grad.html", "category": "pytorch docs"}
{"text": "torch.trunctorch.trunc(input, , out=None) -> Tensor\n Returns a new tensor with the truncated integer values of the\n elements of \"input\".\n For integer inputs, follows the array-api convention of returning a\n copy of the input tensor.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 3.4742, 0.5466, -0.8008, -0.9079])\n >>> torch.trunc(a)\n tensor([ 3., 0., -0., -0.])", "source": "https://pytorch.org/docs/stable/generated/torch.trunc.html", "category": "pytorch docs"}
{"text": "torch.linalg.choleskytorch.linalg.cholesky(A, , upper=False, out=None) -> Tensor\n Computes the Cholesky decomposition of a complex Hermitian or real\n symmetric positive-definite matrix.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the Cholesky\n decomposition* of a complex Hermitian or real symmetric positive-\n definite matrix A \\in \\mathbb{K}^{n \\times n} is defined as\n A = LL^{\\text{H}}\\mathrlap{\\qquad L \\in \\mathbb{K}^{n \\times n}}\n where L is a lower triangular matrix with real positive diagonal\n (even in the complex case) and L^{\\text{H}} is the conjugate\n transpose when L is complex, and the transpose when L is real-\n valued.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n See also:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html", "category": "pytorch docs"}
{"text": "device with the CPU.\n See also:\n \"torch.linalg.cholesky_ex()\" for a version of this operation that\n skips the (slow) error checking by default and instead returns\n the debug information. This makes it a faster way to check if a\n matrix is positive-definite.\n \"torch.linalg.eigh()\" for a different decomposition of a\n Hermitian matrix. The eigenvalue decomposition gives more\n information about the matrix but it slower to compute than the\n Cholesky decomposition.\n Parameters:\n A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions consisting of symmetric or\n Hermitian positive-definite matrices.\n Keyword Arguments:\n * upper (bool, optional) -- whether to return an upper\n triangular matrix. The tensor returned with upper=True is the\n conjugate transpose of the tensor returned with upper=False.\n * out (Tensor, optional*) -- output tensor. Ignored if", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html", "category": "pytorch docs"}
{"text": "None. Default: None.\n Raises:\n RuntimeError -- if the \"A\" matrix or any matrix in a batched\n \"A\" is not Hermitian (resp. symmetric) positive-definite. If\n \"A\" is a batch of matrices, the error message will include\n the batch index of the first matrix that fails to meet this\n condition.\n Examples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A = A @ A.T.conj() + torch.eye(2) # creates a Hermitian positive-definite matrix\n >>> A\n tensor([[2.5266+0.0000j, 1.9586-2.0626j],\n [1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)\n >>> L = torch.linalg.cholesky(A)\n >>> L\n tensor([[1.5895+0.0000j, 0.0000+0.0000j],\n [1.2322+1.2976j, 2.4928+0.0000j]], dtype=torch.complex128)\n >>> torch.dist(L @ L.T.conj(), A)\n tensor(4.4692e-16, dtype=torch.float64)\n >>> A = torch.randn(3, 2, 2, dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html", "category": "pytorch docs"}
{"text": "\n\n\nA = A @ A.mT + torch.eye(2) # batch of symmetric positive-definite matrices\n >>> L = torch.linalg.cholesky(A)\n >>> torch.dist(L @ L.mT, A)\n tensor(5.8747e-16, dtype=torch.float64)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.conv_transpose3dtorch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor\n Applies a 3D transposed convolution operator over an input image\n composed of several input planes, sometimes also called\n \"deconvolution\"\n This operator supports TensorFloat32.\n See \"ConvTranspose3d\" for details and output shape.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Parameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iT , iH , iW)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose3d.html", "category": "pytorch docs"}
{"text": "\\text{in_channels} , iT , iH , iW)\n * weight -- filters of shape (\\text{in_channels} ,\n \\frac{\\text{out_channels}}{\\text{groups}} , kT , kH , kW)\n * bias -- optional bias of shape (\\text{out_channels}).\n Default: None\n * stride -- the stride of the convolving kernel. Can be a\n single number or a tuple \"(sT, sH, sW)\". Default: 1\n * padding -- \"dilation * (kernel_size - 1) - padding\" zero-\n padding will be added to both sides of each dimension in the\n input. Can be a single number or a tuple \"(padT, padH, padW)\".\n Default: 0\n * output_padding -- additional size added to one side of\n each dimension in the output shape. Can be a single number or\n a tuple \"(out_padT, out_padH, out_padW)\". Default: 0\n * groups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n * dilation -- the spacing between kernel elements. Can be a", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose3d.html", "category": "pytorch docs"}
{"text": "single number or a tuple (dT, dH, dW). Default: 1\n Examples:\n >>> inputs = torch.randn(20, 16, 50, 10, 20)\n >>> weights = torch.randn(16, 33, 3, 3, 3)\n >>> F.conv_transpose3d(inputs, weights)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose3d.html", "category": "pytorch docs"}
{"text": "torch.cuda.memory_usagetorch.cuda.memory_usage(device=None)\n Returns the percent of time over the past sample period during\n which global (device) memory was being read or written. as given by\n nvidia-smi.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n int\n Warning: Each sample period may be between 1 second and 1/6 second,\n depending on the product being queried.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_usage.html", "category": "pytorch docs"}
{"text": "ConvTranspose1dclass torch.ao.nn.quantized.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n Applies a 1D transposed convolution operator over an input image\n composed of several input planes. For details on input arguments,\n parameters, and implementation see \"ConvTranspose1d\".\n Note:\n Currently only the QNNPACK engine is implemented. Please, set the\n torch.backends.quantized.engine = 'qnnpack'\n For special notes, please, see \"Conv1d\"\n Variables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * scale (Tensor) -- scalar for the output scale\n * zero_point (Tensor) -- scalar for the output zero point\n See \"ConvTranspose2d\" for other attributes.\n Examples:\n >>> torch.backends.quantized.engine = 'qnnpack'\n >>> from torch.nn import quantized as nnq", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose1d.html", "category": "pytorch docs"}
{"text": "\n\n\nfrom torch.nn import quantized as nnq\n >>> # With square kernels and equal stride\n >>> m = nnq.ConvTranspose1d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nnq.ConvTranspose1d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> input = torch.randn(20, 16, 50)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n >>> # exact output size can be also specified as an argument\n >>> input = torch.randn(1, 16, 12)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> downsample = nnq.Conv1d(16, 16, 3, stride=2, padding=1)\n >>> upsample = nnq.ConvTranspose1d(16, 16, 3, stride=2, padding=1)\n >>> h = downsample(q_input)\n >>> h.size()\n torch.Size([1, 16, 6])\n >>> output = upsample(h, output_size=input.size())\n >>> output.size()\n torch.Size([1, 16, 12])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose1d.html", "category": "pytorch docs"}
{"text": "torch.linalg.solve_triangulartorch.linalg.solve_triangular(A, B, , upper, left=True, unitriangular=False, out=None) -> Tensor\n Computes the solution of a triangular system of linear equations\n with a unique solution.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, this function\n computes the solution X \\in \\mathbb{K}^{n \\times k} of the linear\n system associated to the triangular matrix A \\in \\mathbb{K}^{n\n \\times n} without zeros on the diagonal (that is, it is invertible)\n and the rectangular matrix , B \\in \\mathbb{K}^{n \\times k}, which\n is defined as\n AX = B\n The argument \"upper\" signals whether A is upper or lower\n triangular.\n If \"left\"= False, this function returns the matrix X \\in\n \\mathbb{K}^{n \\times k} that solves the system\n XA = B\\mathrlap{\\qquad A \\in \\mathbb{K}^{k \\times k}, B \\in\n \\mathbb{K}^{n \\times k}.}\n If \"upper\"= True (resp. False*) just the upper (resp. lower)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html", "category": "pytorch docs"}
{"text": "triangular half of \"A\" will be accessed. The elements below the\n main diagonal will be considered to be zero and will not be\n accessed.\n If \"unitriangular\"= True, the diagonal of \"A\" is assumed to be\n ones and will not be accessed.\n The result may contain NaN s if the diagonal of \"A\" contains\n zeros or elements that are very close to zero and \"unitriangular\"=\n False (default) or if the input matrix has very small eigenvalues.\n Supports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\n See also:\n \"torch.linalg.solve()\" computes the solution of a general square\n system of linear equations with a unique solution.\n Parameters:\n * A (Tensor) -- tensor of shape (, n, n) (or (, k,\n k) if \"left\"= True) where *** is zero or more batch\n dimensions.\n * B (Tensor) -- right-hand side tensor of shape (, n,", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html", "category": "pytorch docs"}
{"text": "k).\n Keyword Arguments:\n * upper (bool) -- whether \"A\" is an upper or lower\n triangular matrix.\n * left (bool, optional) -- whether to solve the system\n AX=B or XA = B. Default: True.\n * unitriangular (bool, optional) -- if True, the\n diagonal elements of \"A\" are assumed to be all equal to 1.\n Default: False.\n * out (Tensor, optional) -- output tensor. B may be\n passed as out and the result is computed in-place on B.\n Ignored if None. Default: None*.\n Examples:\n >>> A = torch.randn(3, 3).triu_()\n >>> b = torch.randn(3, 4)\n >>> X = torch.linalg.solve_triangular(A, B, upper=True)\n >>> torch.allclose(A @ X, B)\n True\n >>> A = torch.randn(2, 3, 3).tril_()\n >>> B = torch.randn(2, 3, 4)\n >>> X = torch.linalg.solve_triangular(A, B, upper=False)\n >>> torch.allclose(A @ X, B)\n True\n >>> A = torch.randn(2, 4, 4).tril_()", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html", "category": "pytorch docs"}
{"text": "\n\n\nA = torch.randn(2, 4, 4).tril_()\n >>> B = torch.randn(2, 3, 4)\n >>> X = torch.linalg.solve_triangular(A, B, upper=False, left=False)\n >>> torch.allclose(X @ A, B)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html", "category": "pytorch docs"}
{"text": "torch.covtorch.cov(input, *, correction=1, fweights=None, aweights=None) -> Tensor\n Estimates the covariance matrix of the variables given by the\n \"input\" matrix, where rows are the variables and columns are the\n observations.\n A covariance matrix is a square matrix giving the covariance of\n each pair of variables. The diagonal contains the variance of each\n variable (covariance of a variable with itself). By definition, if\n \"input\" represents a single variable (Scalar or 1D) then its\n variance is returned.\n The unbiased sample covariance of the variables x and y is given\n by:\n \\text{cov}w(x,y) = \\frac{\\sum^{N}(x_{i} -\n \\bar{x})(y_{i} - \\bar{y})}{N~-~1}\n where \\bar{x} and \\bar{y} are the simple means of the x and y\n respectively.\n If \"fweights\" and/or \"aweights\" are provided, the unbiased weighted\n covariance is calculated, which is given by:\n \\text{cov}w(x,y) = \\frac{\\sum^{N}w_i(x_{i} -", "source": "https://pytorch.org/docs/stable/generated/torch.cov.html", "category": "pytorch docs"}
{"text": "\\mu_x^)(y_{i} - \\mu_y^)}{\\sum^{N}_{i = 1}w_i~-~1}\n where w denotes \"fweights\" or \"aweights\" based on whichever is\n provided, or w = fweights \\times aweights if both are provided, and\n \\mu_x^ = \\frac{\\sum^{N}{i = 1}w_ix }{\\sum^{N}_{i = 1}w_i} is\n the weighted mean of the variable.\n Parameters:\n input (Tensor) -- A 2D matrix containing multiple\n variables and observations, or a Scalar or 1D vector\n representing a single variable.\n Keyword Arguments:\n * correction (int, optional) -- difference between the\n sample size and sample degrees of freedom. Defaults to\n Bessel's correction, \"correction = 1\" which returns the\n unbiased estimate, even if both \"fweights\" and \"aweights\" are\n specified. \"correction = 0\" will return the simple average.\n Defaults to \"1\".\n * fweights (tensor, optional*) -- A Scalar or 1D tensor\n of observation vector frequencies representing the number of", "source": "https://pytorch.org/docs/stable/generated/torch.cov.html", "category": "pytorch docs"}
{"text": "times each observation should be repeated. Its numel must\n equal the number of columns of \"input\". Must have integral\n dtype. Ignored if \"None\". Defaults to None`*.\n * **aweights** (*tensor**, **optional*) -- A Scalar or 1D array\n of observation vector weights. These relative weights are\n typically large for observations considered \u00e2\u0080\u009cimportant\u00e2\u0080\u009d and\n smaller for observations considered less \u00e2\u0080\u009cimportant\u00e2\u0080\u009d. Its\n numel must equal the number of columns of \"input\". Must have\n floating point dtype. Ignored if \"None\". *Defaults toNone`.\n Returns:\n (Tensor) The covariance matrix of the variables.\n See also: \"torch.corrcoef()\" normalized covariance matrix.\n Example::\n >>> x = torch.tensor([[0, 2], [1, 1], [2, 0]]).T\n >>> x\n tensor([[0, 1, 2],\n [2, 1, 0]])\n >>> torch.cov(x)\n tensor([[ 1., -1.],\n [-1., 1.]])\n >>> torch.cov(x, correction=0)\n tensor([[ 0.6667, -0.6667],", "source": "https://pytorch.org/docs/stable/generated/torch.cov.html", "category": "pytorch docs"}
{"text": "tensor([[ 0.6667, -0.6667],\n [-0.6667, 0.6667]])\n >>> fw = torch.randint(1, 10, (3,))\n >>> fw\n tensor([1, 6, 9])\n >>> aw = torch.rand(3)\n >>> aw\n tensor([0.4282, 0.0255, 0.4144])\n >>> torch.cov(x, fweights=fw, aweights=aw)\n tensor([[ 0.4169, -0.4169],\n [-0.4169, 0.4169]])", "source": "https://pytorch.org/docs/stable/generated/torch.cov.html", "category": "pytorch docs"}
{"text": "torch.asarraytorch.asarray(obj, *, dtype=None, device=None, copy=None, requires_grad=False) -> Tensor\n Converts \"obj\" to a tensor.\n \"obj\" can be one of:\n 1. a tensor\n 2. a NumPy array\n 3. a DLPack capsule\n 4. an object that implements Python's buffer protocol\n 5. a scalar\n 6. a sequence of scalars\n When \"obj\" is a tensor, NumPy array, or DLPack capsule the returned\n tensor will, by default, not require a gradient, have the same\n datatype as \"obj\", be on the same device, and share memory with it.\n These properties can be controlled with the \"dtype\", \"device\",\n \"copy\", and \"requires_grad\" keyword arguments. If the returned\n tensor is of a different datatype, on a different device, or a copy\n is requested then it will not share its memory with \"obj\". If\n \"requires_grad\" is \"True\" then the returned tensor will require a\n gradient, and if \"obj\" is also a tensor with an autograd history\n then the returned tensor will have the same history.", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"}
{"text": "When \"obj\" is not a tensor, NumPy Array, or DLPack capsule but\n implements Python's buffer protocol then the buffer is interpreted\n as an array of bytes grouped according to the size of the datatype\n passed to the \"dtype\" keyword argument. (If no datatype is passed\n then the default floating point datatype is used, instead.) The\n returned tensor will have the specified datatype (or default\n floating point datatype if none is specified) and, by default, be\n on the CPU device and share memory with the buffer.\n When \"obj\" is none of the above but a scalar or sequence of scalars\n then the returned tensor will, by default, infer its datatype from\n the scalar values, be on the CPU device, and not share its memory.\n See also:\n \"torch.tensor()\" creates a tensor that always copies the data\n from the input object. \"torch.from_numpy()\" creates a tensor that\n always shares memory from NumPy arrays. \"torch.frombuffer()\"\n creates a tensor that always shares memory from objects that", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"}
{"text": "implement the buffer protocol. \"torch.from_dlpack()\" creates a\n tensor that always shares memory from DLPack capsules.\n Parameters:\n obj (object) -- a tensor, NumPy array, DLPack Capsule,\n object that implements Python's buffer protocol, scalar, or\n sequence of scalars.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the datatype of the\n returned tensor. Default: \"None\", which causes the datatype of\n the returned tensor to be inferred from \"obj\".\n * copy (bool, optional) -- controls whether the\n returned tensor shares memory with \"obj\". Default: \"None\",\n which causes the returned tensor to share memory with \"obj\"\n whenever possible. If \"True\" then the returned tensor does not\n share its memory. If \"False\" then the returned tensor shares\n its memory with \"obj\" and an error is thrown if it cannot.\n * device (\"torch.device\", optional) -- the device of the", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"}
{"text": "returned tensor. Default: \"None\", which causes the device of\n \"obj\" to be used.\n * requires_grad (bool, optional) -- whether the\n returned tensor requires grad. Default: \"False\", which causes\n the returned tensor not to require a gradient. If \"True\", then\n the returned tensor will require a gradient, and if \"obj\" is\n also a tensor with an autograd history then the returned\n tensor will have the same history.\n Example:\n >>> a = torch.tensor([1, 2, 3])\n >>> # Shares memory with tensor 'a'\n >>> b = torch.asarray(a)\n >>> a.data_ptr() == b.data_ptr()\n True\n >>> # Forces memory copy\n >>> c = torch.asarray(a, copy=True)\n >>> a.data_ptr() == c.data_ptr()\n False\n >>> a = torch.tensor([1, 2, 3], requires_grad=True).float()\n >>> b = a + 2\n >>> b\n tensor([1., 2., 3.], grad_fn=)\n >>> # Shares memory with tensor 'b', with no grad\n >>> c = torch.asarray(b)\n >>> c", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"}
{"text": "\n\n\nc = torch.asarray(b)\n >>> c\n tensor([1., 2., 3.])\n >>> # Shares memory with tensor 'b', retaining autograd history\n >>> d = torch.asarray(b, requires_grad=True)\n >>> d\n tensor([1., 2., 3.], grad_fn=)\n >>> array = numpy.array([1, 2, 3])\n >>> # Shares memory with array 'array'\n >>> t1 = torch.asarray(array)\n >>> array.array_interface['data'][0] == t1.data_ptr()\n True\n >>> # Copies memory due to dtype mismatch\n >>> t2 = torch.asarray(array, dtype=torch.float32)\n >>> array.array_interface['data'][0] == t1.data_ptr()\n False\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"}
{"text": "torch.concattorch.concat(tensors, dim=0, *, out=None) -> Tensor\n Alias of \"torch.cat()\".", "source": "https://pytorch.org/docs/stable/generated/torch.concat.html", "category": "pytorch docs"}
{"text": "torch.argwheretorch.argwhere(input) -> Tensor\n Returns a tensor containing the indices of all non-zero elements of\n \"input\". Each row in the result contains the indices of a non-zero\n element in \"input\". The result is sorted lexicographically, with\n the last index changing the fastest (C-style).\n If \"input\" has n dimensions, then the resulting indices tensor\n \"out\" is of size (z \\times n), where z is the total number of non-\n zero elements in the \"input\" tensor.\n Note:\n This function is similar to NumPy's argwhere.When \"input\" is on\n CUDA, this function causes host-device synchronization.\n Parameters:\n {input} --\n Example:\n >>> t = torch.tensor([1, 0, 1])\n >>> torch.argwhere(t)\n tensor([[0],\n [2]])\n >>> t = torch.tensor([[1, 0, 1], [0, 1, 1]])\n >>> torch.argwhere(t)\n tensor([[0, 0],\n [0, 2],\n [1, 1],\n [1, 2]])", "source": "https://pytorch.org/docs/stable/generated/torch.argwhere.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fill_Tensor.fill_(value) -> Tensor\n Fills \"self\" tensor with the specified value.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fill_.html", "category": "pytorch docs"}
{"text": "NoopObserverclass torch.quantization.observer.NoopObserver(dtype=torch.float16, custom_op_name='')\n Observer that doesn't do anything and just passes its configuration\n to the quantized module's \".from_float()\".\n Primarily used for quantization to float16 which doesn't require\n determining ranges.\n Parameters:\n * dtype -- Quantized data type\n * custom_op_name -- (temporary) specify this observer for an\n operator that doesn't require any observation (Can be used in\n Graph Mode Passes for special case ops).", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.NoopObserver.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.hardtanhtorch.nn.functional.hardtanh(input, min_val=- 1., max_val=1., inplace=False) -> Tensor\n Applies the HardTanh function element-wise. See \"Hardtanh\" for more\n details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh.html", "category": "pytorch docs"}
{"text": "torch.foreach_frac_torch._foreach_frac(self: List[Tensor]) -> None\n Apply \"torch.frac()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_frac_.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.mishtorch.nn.functional.mish(input, inplace=False)\n Applies the Mish function, element-wise. Mish: A Self Regularized\n Non-Monotonic Neural Activation Function.\n \\text{Mish}(x) = x * \\text{Tanh}(\\text{Softplus}(x))\n Note:\n See Mish: A Self Regularized Non-Monotonic Neural Activation\n Function\n See \"Mish\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.mish.html", "category": "pytorch docs"}
{"text": "torch.Tensor.square_Tensor.square_() -> Tensor\n In-place version of \"square()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.square_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sgn_Tensor.sgn_() -> Tensor\n In-place version of \"sgn()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sgn_.html", "category": "pytorch docs"}
{"text": "torch.fft.fftshifttorch.fft.fftshift(input, dim=None) -> Tensor\n Reorders n-dimensional FFT data, as provided by \"fftn()\", to have\n negative frequency terms first.\n This performs a periodic shift of n-dimensional data such that the\n origin \"(0, ..., 0)\" is moved to the center of the tensor.\n Specifically, to \"input.shape[dim] // 2\" in each selected\n dimension.\n Note:\n By convention, the FFT returns positive frequency terms first,\n followed by the negative frequencies in reverse order, so that\n \"f[-i]\" for all 0 < i \\leq n/2 in Python gives the negative\n frequency terms. \"fftshift()\" rearranges all frequencies into\n ascending order from negative to positive with the zero-frequency\n term in the center.\n Note:\n For even lengths, the Nyquist frequency at \"f[n/2]\" can be\n thought of as either negative or positive. \"fftshift()\" always\n puts the Nyquist term at the 0-index. This is the same convention\n used by \"fftfreq()\".\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html", "category": "pytorch docs"}
{"text": "used by \"fftfreq()\".\n Parameters:\n * input (Tensor) -- the tensor in FFT order\n * dim (int, Tuple[int], optional) -- The\n dimensions to rearrange. Only dimensions specified here will\n be rearranged, any other dimensions will be left in their\n original order. Default: All dimensions of \"input\".\n -[ Example ]-\n\n\n\nf = torch.fft.fftfreq(4)\nf\n tensor([ 0.0000, 0.2500, -0.5000, -0.2500])\ntorch.fft.fftshift(f)\n tensor([-0.5000, -0.2500, 0.0000, 0.2500])\n Also notice that the Nyquist frequency term at \"f[2]\" was moved to\n the beginning of the tensor.\n This also works for multi-dimensional transforms:\nx = torch.fft.fftfreq(5, d=1/5) + 0.1 * torch.fft.fftfreq(5, d=1/5).unsqueeze(1)\nx\n tensor([[ 0.0000, 1.0000, 2.0000, -2.0000, -1.0000],\n [ 0.1000, 1.1000, 2.1000, -1.9000, -0.9000],\n [ 0.2000, 1.2000, 2.2000, -1.8000, -0.8000],\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html", "category": "pytorch docs"}
{"text": "[-0.2000, 0.8000, 1.8000, -2.2000, -1.2000],\n [-0.1000, 0.9000, 1.9000, -2.1000, -1.1000]])\n\n\n\ntorch.fft.fftshift(x)\n tensor([[-2.2000, -1.2000, -0.2000, 0.8000, 1.8000],\n [-2.1000, -1.1000, -0.1000, 0.9000, 1.9000],\n [-2.0000, -1.0000, 0.0000, 1.0000, 2.0000],\n [-1.9000, -0.9000, 0.1000, 1.1000, 2.1000],\n [-1.8000, -0.8000, 0.2000, 1.2000, 2.2000]])\n \"fftshift()\" can also be useful for spatial data. If our data is\n defined on a centered grid (\"[-(N//2), (N-1)//2]\") then we can use\n the standard FFT defined on an uncentered grid (\"[0, N)\") by first\n applying an \"ifftshift()\".\nx_centered = torch.arange(-5, 5)\nx_uncentered = torch.fft.ifftshift(x_centered)\nfft_uncentered = torch.fft.fft(x_uncentered)\n Similarly, we can convert the frequency domain components to\n centered convention by applying \"fftshift()\".\nfft_centered = torch.fft.fftshift(fft_uncentered)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html", "category": "pytorch docs"}
{"text": "The inverse transform, from centered Fourier space back to centered\n spatial data, can be performed by applying the inverse shifts in\n reverse order:\n\n\n\nx_centered_2 = torch.fft.fftshift(torch.fft.ifft(torch.fft.ifftshift(fft_centered)))\ntorch.testing.assert_close(x_centered.to(torch.complex64), x_centered_2, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html", "category": "pytorch docs"}
{"text": "float_qparams_weight_only_qconfigtorch.quantization.qconfig.float_qparams_weight_only_qconfig\n alias of QConfig(activation=,\n weight=functools.partial(,\n dtype=torch.quint8, qscheme=torch.per_channel_affine_float_qparams,\n ch_axis=0){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float_qparams_weight_only_qconfig.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.gumbel_softmaxtorch.nn.functional.gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=- 1)\n Samples from the Gumbel-Softmax distribution (Link 1 Link 2) and\n optionally discretizes.\n Parameters:\n * logits (Tensor) -- [..., num_features] unnormalized\n log probabilities\n * tau (float) -- non-negative scalar temperature\n * hard (bool) -- if \"True\", the returned samples will be\n discretized as one-hot vectors, but will be differentiated as\n if it is the soft sample in autograd\n * dim (int) -- A dimension along which softmax will be\n computed. Default: -1.\n Returns:\n Sampled tensor of same shape as logits from the Gumbel-Softmax\n distribution. If \"hard=True\", the returned samples will be one-\n hot, otherwise they will be probability distributions that sum\n to 1 across dim.\n Return type:\n Tensor\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n Note:\n This function is here for legacy reasons, may be removed from\n nn.Functional in the future.\n Note:\n The main trick for hard is to do y_hard - y_soft.detach() +\n y_softIt achieves two things: - makes the output value exactly\n one-hot (since we add then subtract y_soft value) - makes the\n gradient equal to y_soft gradient (since we strip all other\n gradients)\n Examples::\n >>> logits = torch.randn(20, 32)\n >>> # Sample soft categorical using reparametrization trick:\n >>> F.gumbel_softmax(logits, tau=1, hard=False)\n >>> # Sample hard categorical using \"Straight-through\" trick:\n >>> F.gumbel_softmax(logits, tau=1, hard=True)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html", "category": "pytorch docs"}
{"text": "torch._foreach_log10torch._foreach_log10(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.log10()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log10.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.parametrize.register_parametrizationtorch.nn.utils.parametrize.register_parametrization(module, tensor_name, parametrization, *, unsafe=False)\n Adds a parametrization to a tensor in a module.\n Assume that \"tensor_name=\"weight\"\" for simplicity. When accessing\n \"module.weight\", the module will return the parametrized version\n \"parametrization(module.weight)\". If the original tensor requires a\n gradient, the backward pass will differentiate through\n \"parametrization\", and the optimizer will update the tensor\n accordingly.\n The first time that a module registers a parametrization, this\n function will add an attribute \"parametrizations\" to the module of\n type \"ParametrizationList\".\n The list of parametrizations on the tensor \"weight\" will be\n accessible under \"module.parametrizations.weight\".\n The original tensor will be accessible under\n \"module.parametrizations.weight.original\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"}
{"text": "\"module.parametrizations.weight.original\".\n Parametrizations may be concatenated by registering several\n parametrizations on the same attribute.\n The training mode of a registered parametrization is updated on\n registration to match the training mode of the host module\n Parametrized parameters and buffers have an inbuilt caching system\n that can be activated using the context manager \"cached()\".\n A \"parametrization\" may optionally implement a method with\n signature\n def right_inverse(self, X: Tensor) -> Union[Tensor, Sequence[Tensor]]\n This method is called on the unparametrized tensor when the first\n parametrization is registered to compute the initial value of the\n original tensor. If this method is not implemented, the original\n tensor will be just the unparametrized tensor.\n If all the parametrizations registered on a tensor implement\n right_inverse it is possible to initialize a parametrized tensor\n by assigning to it, as shown in the example below.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"}
{"text": "It is possible for the first parametrization to depend on several\n inputs. This may be implemented returning a tuple of tensors from\n \"right_inverse\" (see the example implementation of a \"RankOne\"\n parametrization below).\n In this case, the unconstrained tensors are also located under\n \"module.parametrizations.weight\" with names \"original0\",\n \"original1\",...\n Note:\n If unsafe=False (default) both the forward and right_inverse\n methods will be called once to perform a number of consistency\n checks. If unsafe=True, then right_inverse will be called if the\n tensor is not parametrized, and nothing will be called otherwise.\n Note:\n In most situations, \"right_inverse\" will be a function such that\n \"forward(right_inverse(X)) == X\" (see right inverse). Sometimes,\n when the parametrization is not surjective, it may be reasonable\n to relax this.\n Warning:\n If a parametrization depends on several inputs,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"}
{"text": "\"register_parametrization()\" will register a number of new\n parameters. If such parametrization is registered after the\n optimizer is created, these new parameters will need to be added\n manually to the optimizer. See\n \"torch.Optimizer.add_param_group()\".\n Parameters:\n * module (nn.Module) -- module on which to register the\n parametrization\n * tensor_name (str) -- name of the parameter or buffer on\n which to register the parametrization\n * parametrization (nn.Module) -- the parametrization to\n register\n Keyword Arguments:\n unsafe (bool) -- a boolean flag that denotes whether the\n parametrization may change the dtype and shape of the tensor.\n Default: False Warning: the parametrization is not checked for\n consistency upon registration. Enable this flag at your own\n risk.\n Raises:\n ValueError -- if the module does not have a parameter or a\n buffer named \"tensor_name\"\n Return type:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"}
{"text": "buffer named \"tensor_name\"\n Return type:\n Module\n -[ Examples ]-\n\n\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.utils.parametrize as P\nclass Symmetric(nn.Module):\n def forward(self, X):\n return X.triu() + X.triu(1).T # Return a symmetric matrix\ndef right_inverse(self, A):\n return A.triu()\n\nm = nn.Linear(5, 5)\nP.register_parametrization(m, \"weight\", Symmetric())\nprint(torch.allclose(m.weight, m.weight.T)) # m.weight is now symmetric\n True\nA = torch.rand(5, 5)\nA = A + A.T # A is now symmetric\nm.weight = A # Initialize the weight to be the symmetric matrix A\nprint(torch.allclose(m.weight, A))\n True\nclass RankOne(nn.Module):\n def forward(self, x, y):\n # Form a rank 1 matrix multiplying two vectors\n return x.unsqueeze(-1) @ y.unsqueeze(-2)\ndef right_inverse(self, Z):\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"}
{"text": "\n\n\ndef right_inverse(self, Z):\n # Project Z onto the rank 1 matrices\n U, S, Vh = torch.linalg.svd(Z, full_matrices=False)\n # Return rescaled singular vectors\n s0_sqrt = S[0].sqrt().unsqueeze(-1)\n return U[..., :, 0] * s0_sqrt, Vh[..., 0, :] * s0_sqrt\n\nlinear_rank_one = P.register_parametrization(nn.Linear(4, 4), \"weight\", RankOne())\nprint(torch.linalg.matrix_rank(linear_rank_one.weight).item())\n 1\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"}
{"text": "default_qat_qconfig_v2torch.quantization.qconfig.default_qat_qconfig_v2\n alias of QConfig(activation=functools.partial(,\n observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8){},\n weight=functools.partial(, observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qat_qconfig_v2.html", "category": "pytorch docs"}
{"text": "torch.Tensor.truncTensor.trunc() -> Tensor\n See \"torch.trunc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.trunc.html", "category": "pytorch docs"}
{"text": "torch.fft.rfft2torch.fft.rfft2(input, s=None, dim=(- 2, - 1), norm=None, , out=None) -> Tensor\n Computes the 2-dimensional discrete Fourier transform of real\n \"input\". Equivalent to \"rfftn()\" but FFTs only the last two\n dimensions by default.\n The FFT of a real signal is Hermitian-symmetric, \"X[i, j] =\n conj(X[-i, -j])\", so the full \"fft2()\" output contains redundant\n information. \"rfft2()\" instead omits the negative frequencies in\n the last dimension.\n Note:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimensions.\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], *optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html", "category": "pytorch docs"}
{"text": "padding is done in that dimension. Default: \"s =\n [input.size(d) for d in dim]\"\n * dim (Tuple[int], optional) -- Dimensions to be\n transformed. Default: last two dimensions.\n * norm (str, optional) --\n Normalization mode. For the forward transform (\"rfft2()\"),\n these correspond to:\n * \"\"forward\"\" - normalize by \"1/n\"\n * \"\"backward\"\" - no normalization\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real FFT\n orthonormal)\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"irfft2()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. This is required to make \"irfft2()\" the exact\n inverse.\n Default is \"\"backward\"\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nt = torch.rand(10, 10)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html", "category": "pytorch docs"}
{"text": "-[ Example ]-\n\n\n\nt = torch.rand(10, 10)\nrfft2 = torch.fft.rfft2(t)\nrfft2.size()\n torch.Size([10, 6])\n Compared against the full output from \"fft2()\", we have all\n elements up to the Nyquist frequency.\nfft2 = torch.fft.fft2(t)\ntorch.testing.assert_close(fft2[..., :6], rfft2, check_stride=False)\n The discrete Fourier transform is separable, so \"rfft2()\" here is\n equivalent to a combination of \"fft()\" and \"rfft()\":\ntwo_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)\ntorch.testing.assert_close(rfft2, two_ffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html", "category": "pytorch docs"}
{"text": "GRUclass torch.ao.nn.quantized.dynamic.GRU(args, kwargs)\n Applies a multi-layer gated recurrent unit (GRU) RNN to an input\n sequence.\n For each element in the input sequence, each layer computes the\n following function:\n \\begin{array}{ll} r_t = \\sigma(W_{ir} x_t + b_{ir} + W_{hr}\n h_{(t-1)} + b_{hr}) \\ z_t = \\sigma(W_{iz} x_t + b_{iz} +\n W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \\tanh(W_{in} x_t +\n b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 -\n z_t) * n_t + z_t * h_{(t-1)} \\end{array}\n where h_t is the hidden state at time t, x_t is the input at time\n t, h_{(t-1)} is the hidden state of the layer at time t-1 or\n the initial hidden state at time 0*, and r_t, z_t, n_t are the\n reset, update, and new gates, respectively. \\sigma is the sigmoid\n function, and * is the Hadamard product.\n In a multilayer GRU, the input x^{(l)}_t of the l -th layer (l >=\n 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"}
{"text": "by dropout \\delta^{(l-1)}_t where each \\delta^{(l-1)}_t is a\n Bernoulli random variable which is 0 with probability \"dropout\".\n Parameters:\n * input_size -- The number of expected features in the input\n x\n * hidden_size -- The number of features in the hidden state\n h\n * num_layers -- Number of recurrent layers. E.g., setting\n \"num_layers=2\" would mean stacking two GRUs together to form a\n stacked GRU, with the second GRU taking in outputs of the\n first GRU and computing the final results. Default: 1\n * bias -- If \"False\", then the layer does not use bias\n weights b_ih and b_hh. Default: \"True\"\n * batch_first -- If \"True\", then the input and output\n tensors are provided as (batch, seq, feature). Default:\n \"False\"\n * dropout -- If non-zero, introduces a Dropout layer on\n the outputs of each GRU layer except the last layer, with", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"}
{"text": "dropout probability equal to \"dropout\". Default: 0\n * bidirectional -- If \"True\", becomes a bidirectional GRU.\n Default: \"False\"\n Inputs: input, h_0\n * input of shape (seq_len, batch, input_size): tensor\n containing the features of the input sequence. The input can\n also be a packed variable length sequence. See\n \"torch.nn.utils.rnn.pack_padded_sequence()\" for details.\n * h_0 of shape (num_layers * num_directions, batch,\n hidden_size): tensor containing the initial hidden state for\n each element in the batch. Defaults to zero if not provided.\n If the RNN is bidirectional, num_directions should be 2, else\n it should be 1.\n Outputs: output, h_n\n * output of shape (seq_len, batch, num_directions *\n hidden_size): tensor containing the output features h_t from\n the last layer of the GRU, for each t. If a\n \"torch.nn.utils.rnn.PackedSequence\" has been given as the", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"}
{"text": "input, the output will also be a packed sequence. For the\n unpacked case, the directions can be separated using\n \"output.view(seq_len, batch, num_directions, hidden_size)\",\n with forward and backward being direction 0 and 1\n respectively.\n Similarly, the directions can be separated in the packed case.\n * h_n of shape (num_layers * num_directions, batch,\n hidden_size): tensor containing the hidden state for t =\n seq_len\n Like output, the layers can be separated using\n \"h_n.view(num_layers, num_directions, batch, hidden_size)\".\n Shape:\n * Input1: (L, N, H_{in}) tensor containing input features where\n H_{in}=\\text{input_size} and L represents a sequence\n length.\n * Input2: (S, N, H_{out}) tensor containing the initial hidden\n state for each element in the batch.\n H_{out}=\\text{hidden_size} Defaults to zero if not provided.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"}
{"text": "where S=\\text{num_layers} * \\text{num_directions} If the RNN\n is bidirectional, num_directions should be 2, else it should\n be 1.\n * Output1: (L, N, H_{all}) where H_{all}=\\text{num_directions} * \\text{hidden_size}\n * Output2: (S, N, H_{out}) tensor containing the next hidden\n state for each element in the batch\n Variables:\n * weight_ih_l[k] -- the learnable input-hidden weights of\n the \\text{k}^{th} layer (W_ir|W_iz|W_in), of shape\n (3*hidden_size, input_size) for k = 0. Otherwise, the\n shape is (3*hidden_size, num_directions * hidden_size)\n * weight_hh_l[k] -- the learnable hidden-hidden weights of\n the \\text{k}^{th} layer (W_hr|W_hz|W_hn), of shape\n (3*hidden_size, hidden_size)\n * bias_ih_l[k] -- the learnable input-hidden bias of the\n \\text{k}^{th} layer (b_ir|b_iz|b_in), of shape\n (3*hidden_size)\n * bias_hh_l[k] -- the learnable hidden-hidden bias of the", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"}
{"text": "\\text{k}^{th} layer (b_hr|b_hz|b_hn), of shape\n (3*hidden_size)\n Note:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden_size}}\n Note:\n If the following conditions are satisfied: 1) cudnn is enabled,\n 2) input data is on the GPU, 3) input data has dtype\n \"torch.float16\", 4) a V100 GPU is used, and 5) input data is not\n in \"PackedSequence\" format, then the persistent algorithm can be\n selected to improve performance.\n Examples:\n >>> rnn = nn.GRU(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> output, hn = rnn(input, h0)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"}
{"text": "update_bn_statsclass torch.ao.nn.intrinsic.qat.update_bn_stats(mod)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.update_bn_stats.html", "category": "pytorch docs"}
{"text": "torch.Tensor.crow_indicesTensor.crow_indices() -> IntTensor\n Returns the tensor containing the compressed row indices of the\n \"self\" tensor when \"self\" is a sparse CSR tensor of layout\n \"sparse_csr\". The \"crow_indices\" tensor is strictly of shape\n (\"self\".size(0) + 1) and of type \"int32\" or \"int64\". When using MKL\n routines such as sparse matrix multiplication, it is necessary to\n use \"int32\" indexing in order to avoid downcasting and potentially\n losing information.\n Example::\n >>> csr = torch.eye(5,5).to_sparse_csr()\n >>> csr.crow_indices()\n tensor([0, 1, 2, 3, 4, 5], dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.crow_indices.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addTensor.add(other, *, alpha=1) -> Tensor\n Add a scalar or tensor to \"self\" tensor. If both \"alpha\" and\n \"other\" are specified, each element of \"other\" is scaled by \"alpha\"\n before being used.\n When \"other\" is a tensor, the shape of \"other\" must be\n broadcastable with the shape of the underlying tensor\n See \"torch.add()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.add.html", "category": "pytorch docs"}
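The alpha scaling described above is easy to see in a minimal sketch (assuming a recent PyTorch build): each element of `other` is multiplied by `alpha` before the addition.

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
other = torch.tensor([10.0, 20.0, 30.0])

# "other" is scaled by alpha before being added: result = x + alpha * other
y = x.add(other, alpha=0.5)
print(y)  # tensor([ 6., 12., 18.])
```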
{"text": "torch.cuda.get_device_nametorch.cuda.get_device_name(device=None)\n Gets the name of a device.\n Parameters:\n device (torch.device or int, optional) -- device\n for which to return the name. This function is a no-op if this\n argument is a negative integer. It uses the current device,\n given by \"current_device()\", if \"device\" is \"None\" (default).\n Returns:\n the name of the device\n Return type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_device_name.html", "category": "pytorch docs"}
{"text": "torch.Tensor.reciprocalTensor.reciprocal() -> Tensor\n See \"torch.reciprocal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.reciprocal.html", "category": "pytorch docs"}
{"text": "torch.autograd.functional.vjptorch.autograd.functional.vjp(func, inputs, v=None, create_graph=False, strict=False)\n Function that computes the dot product between a vector \"v\" and the\n Jacobian of the given function at the point given by the inputs.\n Parameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a tuple of Tensors or a Tensor.\n * inputs (tuple of Tensors or Tensor) -- inputs to the\n function \"func\".\n * v (tuple of Tensors or Tensor) -- The vector for\n which the vector Jacobian product is computed. Must be the\n same size as the output of \"func\". This argument is optional\n when the output of \"func\" contains a single element and (if it\n is not provided) will be set as a Tensor containing a single\n \"1\".\n * create_graph (bool, optional) -- If \"True\", both the\n output and result will be computed in a differentiable way.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vjp.html", "category": "pytorch docs"}
{"text": "Note that when \"strict\" is \"False\", the result cannot require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * strict (bool, optional) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n Tensor of zeros as the vjp for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n Returns:\n tuple with:\n func_output (tuple of Tensors or Tensor): output of\n \"func(inputs)\"\n vjp (tuple of Tensors or Tensor): result of the dot product\n with the same shape as the inputs.\n Return type:\n output (tuple)\n -[ Example ]-\n >>> def exp_reducer(x):\n ...     return x.exp().sum(dim=1)\n >>> inputs = torch.rand(4, 4)\n >>> v = torch.ones(4)\n >>> vjp(exp_reducer, inputs, v)\n (tensor([5.7817, 7.2458, 5.7830, 6.7782]),\n tensor([[1.4458, 1.3962, 1.3042, 1.6354],", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vjp.html", "category": "pytorch docs"}
{"text": "tensor([[1.4458, 1.3962, 1.3042, 1.6354],\n [2.1288, 1.0652, 1.5483, 2.5035],\n [2.2046, 1.1292, 1.1432, 1.3059],\n [1.3225, 1.6652, 1.7753, 2.0152]]))\n >>> vjp(exp_reducer, inputs, v, create_graph=True)\n (tensor([5.7817, 7.2458, 5.7830, 6.7782], grad_fn=<SumBackward1>),\n tensor([[1.4458, 1.3962, 1.3042, 1.6354],\n [2.1288, 1.0652, 1.5483, 2.5035],\n [2.2046, 1.1292, 1.1432, 1.3059],\n [1.3225, 1.6652, 1.7753, 2.0152]], grad_fn=<MulBackward0>))\n >>> def adder(x, y):\n ...     return 2 * x + 3 * y\n >>> inputs = (torch.rand(2), torch.rand(2))\n >>> v = torch.ones(2)\n >>> vjp(adder, inputs, v)\n (tensor([2.4225, 2.3340]),\n (tensor([2., 2.]), tensor([3., 3.])))", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vjp.html", "category": "pytorch docs"}
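The vjp in the example above can also be checked analytically. For `exp_reducer`, each row-sum depends only on its own row, so with `v` equal to all ones the vector-Jacobian product is simply `exp(inputs)` elementwise. A minimal sketch of that sanity check:

```python
import torch
from torch.autograd.functional import vjp

def exp_reducer(x):
    return x.exp().sum(dim=1)

inputs = torch.rand(4, 4)
v = torch.ones(4)
out, grad = vjp(exp_reducer, inputs, v)

# d/dx_ij of sum_j exp(x_ij) is exp(x_ij), and v is all ones,
# so the vjp equals exp(inputs) elementwise
assert torch.allclose(grad, inputs.exp())
```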
{"text": "torch.nan_to_numtorch.nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None) -> Tensor\n Replaces \"NaN\", positive infinity, and negative infinity values in\n \"input\" with the values specified by \"nan\", \"posinf\", and \"neginf\",\n respectively. By default, \"NaN\"s are replaced with zero, positive\n infinity is replaced with the greatest finite value representable\n by \"input\"'s dtype, and negative infinity is replaced with the\n least finite value representable by \"input\"'s dtype.\n Parameters:\n * input (Tensor) -- the input tensor.\n * nan (Number, optional) -- the value to replace\n \"NaN\"s with. Default is zero.\n * posinf (Number, optional) -- if a Number, the value\n to replace positive infinity values with. If None, positive\n infinity values are replaced with the greatest finite value\n representable by \"input\"'s dtype. Default is None.\n * neginf (Number, optional) -- if a Number, the value", "source": "https://pytorch.org/docs/stable/generated/torch.nan_to_num.html", "category": "pytorch docs"}
{"text": "to replace negative infinity values with. If None, negative\n infinity values are replaced with the lowest finite value\n representable by \"input\"'s dtype. Default is None.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> x = torch.tensor([float('nan'), float('inf'), -float('inf'), 3.14])\n >>> torch.nan_to_num(x)\n tensor([ 0.0000e+00, 3.4028e+38, -3.4028e+38, 3.1400e+00])\n >>> torch.nan_to_num(x, nan=2.0)\n tensor([ 2.0000e+00, 3.4028e+38, -3.4028e+38, 3.1400e+00])\n >>> torch.nan_to_num(x, nan=2.0, posinf=1.0)\n tensor([ 2.0000e+00, 1.0000e+00, -3.4028e+38, 3.1400e+00])", "source": "https://pytorch.org/docs/stable/generated/torch.nan_to_num.html", "category": "pytorch docs"}
{"text": "FractionalMaxPool3dclass torch.nn.FractionalMaxPool3d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)\n Applies a 3D fractional max pooling over an input signal composed\n of several input planes.\n Fractional MaxPooling is described in detail in the paper\n Fractional MaxPooling by Ben Graham\n The max-pooling operation is applied in kT \\times kH \\times kW\n regions by a stochastic step size determined by the target output\n size. The number of output features is equal to the number of input\n planes.\n Parameters:\n * kernel_size (Union[int, Tuple[int, int,\n int]]) -- the size of the window to take a max over.\n Can be a single number k (for a cubic kernel of k x k x k) or\n a tuple (kt, kh, kw)\n * output_size (Union[int, Tuple[int, int,\n int]]) -- the target output size of the image of the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html", "category": "pytorch docs"}
{"text": "form oT x oH x oW. Can be a tuple (oT, oH, oW) or a single\n number oH for a cubic output oH x oH x oH\n * output_ratio (Union[float, Tuple[float,\n float, float]]) -- If one wants to have an output\n size as a ratio of the input size, this option can be given.\n This has to be a number or tuple in the range (0, 1)\n * return_indices (bool) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to\n \"nn.MaxUnpool3d()\". Default: \"False\"\n Shape:\n * Input: (N, C, T_{in}, H_{in}, W_{in}) or (C, T_{in}, H_{in},\n W_{in}).\n * Output: (N, C, T_{out}, H_{out}, W_{out}) or (C, T_{out},\n H_{out}, W_{out}), where (T_{out}, H_{out},\n W_{out})=\\text{output_size} or (T_{out}, H_{out},\n W_{out})=\\text{output_ratio} \\times (T_{in}, H_{in}, W_{in})\n -[ Examples ]-\n >>> # pool of cubic window of size=3, and target output size 13x12x11", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html", "category": "pytorch docs"}
{"text": ">>> m = nn.FractionalMaxPool3d(3, output_size=(13, 12, 11))\n >>> # pool of cubic window and target output size being half of input size\n >>> m = nn.FractionalMaxPool3d(3, output_ratio=(0.5, 0.5, 0.5))\n >>> input = torch.randn(20, 16, 50, 32, 16)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.gerTensor.ger(vec2) -> Tensor\n See \"torch.ger()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ger.html", "category": "pytorch docs"}
{"text": "torch.Tensor.col_indicesTensor.col_indices() -> IntTensor\n Returns the tensor containing the column indices of the \"self\"\n tensor when \"self\" is a sparse CSR tensor of layout \"sparse_csr\".\n The \"col_indices\" tensor is strictly of shape (\"self\".nnz()) and of\n type \"int32\" or \"int64\". When using MKL routines such as sparse\n matrix multiplication, it is necessary to use \"int32\" indexing in\n order to avoid downcasting and potentially losing information.\n Example::\n >>> csr = torch.eye(5,5).to_sparse_csr()\n >>> csr.col_indices()\n tensor([0, 1, 2, 3, 4], dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.col_indices.html", "category": "pytorch docs"}
{"text": "torch.Tensor.uniform_Tensor.uniform_(from=0, to=1) -> Tensor\n Fills \"self\" tensor with numbers sampled from the continuous\n uniform distribution:\n P(x) = \\dfrac{1}{\\text{to} - \\text{from}}", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.uniform_.html", "category": "pytorch docs"}
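A short sketch of the sampling range described by the density above: `uniform_` draws from the half-open interval [from, to), so every sample lies within the given bounds.

```python
import torch

torch.manual_seed(0)  # for reproducibility
t = torch.empty(10000).uniform_(2.0, 5.0)

# all samples fall inside [from, to)
assert t.min().item() >= 2.0
assert t.max().item() < 5.0
```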
{"text": "torch.Tensor.deviceTensor.device\n Is the \"torch.device\" where this Tensor is.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.device.html", "category": "pytorch docs"}
{"text": "torch.Tensor.unfoldTensor.unfold(dimension, size, step) -> Tensor\n Returns a view of the original tensor which contains all slices of\n size \"size\" from \"self\" tensor in the dimension \"dimension\".\n Step between two slices is given by \"step\".\n If sizedim is the size of dimension \"dimension\" for \"self\", the\n size of dimension \"dimension\" in the returned tensor will be\n (sizedim - size) / step + 1.\n An additional dimension of size \"size\" is appended in the returned\n tensor.\n Parameters:\n * dimension (int) -- dimension in which unfolding happens\n * size (int) -- the size of each slice that is unfolded\n * step (int) -- the step between each slice\n Example:\n >>> x = torch.arange(1., 8)\n >>> x\n tensor([ 1., 2., 3., 4., 5., 6., 7.])\n >>> x.unfold(0, 2, 1)\n tensor([[ 1., 2.],\n [ 2., 3.],\n [ 3., 4.],\n [ 4., 5.],\n [ 5., 6.],\n [ 6., 7.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html", "category": "pytorch docs"}
{"text": "[ 6., 7.]])\n >>> x.unfold(0, 2, 2)\n tensor([[ 1., 2.],\n [ 3., 4.],\n [ 5., 6.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html", "category": "pytorch docs"}
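The slices returned by unfold can be verified against plain indexing; each row of the result is a contiguous slice of the source tensor, and the row count follows the (sizedim - size) / step + 1 formula above. A minimal sketch:

```python
import torch

x = torch.arange(1.0, 8.0)    # tensor([1., 2., 3., 4., 5., 6., 7.])
windows = x.unfold(0, 2, 1)   # (7 - 2) / 1 + 1 = 6 slices of size 2

assert windows.shape == (6, 2)
for i in range(windows.size(0)):
    # row i is exactly the slice x[i : i + size]
    assert torch.equal(windows[i], x[i:i + 2])
```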
{"text": "torch._foreach_logtorch._foreach_log(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.log()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log.html", "category": "pytorch docs"}
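A minimal sketch of the per-tensor behavior: the result matches applying `torch.log` to each list element individually.

```python
import torch

tensors = [torch.tensor([1.0, 2.0]), torch.tensor([4.0, 8.0])]
logged = torch._foreach_log(tensors)

# each output tensor equals torch.log of the corresponding input
for out, t in zip(logged, tensors):
    assert torch.allclose(out, torch.log(t))
```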
{"text": "torch.digammatorch.digamma(input, *, out=None) -> Tensor\n Alias for \"torch.special.digamma()\".", "source": "https://pytorch.org/docs/stable/generated/torch.digamma.html", "category": "pytorch docs"}
{"text": "Tanhclass torch.nn.Tanh\n Applies the Hyperbolic Tangent (Tanh) function element-wise.\n Tanh is defined as:\n \\text{Tanh}(x) = \\tanh(x) = \\frac{\\exp(x) - \\exp(-x)} {\\exp(x) +\n \\exp(-x)}\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.Tanh()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Tanh.html", "category": "pytorch docs"}
{"text": "torch.trapztorch.trapz(y, x, *, dim=- 1) -> Tensor\n Alias for \"torch.trapezoid()\".", "source": "https://pytorch.org/docs/stable/generated/torch.trapz.html", "category": "pytorch docs"}
{"text": "torch.Tensor.geometric_Tensor.geometric_(p, *, generator=None) -> Tensor\n Fills \"self\" tensor with elements drawn from the geometric\n distribution:\n f(X=k) = (1 - p)^{k - 1} p", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.geometric_.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.layer_normtorch.nn.functional.layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)\n Applies Layer Normalization for last certain number of dimensions.\n See \"LayerNorm\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.layer_norm.html", "category": "pytorch docs"}
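A small sketch of what normalizing over the last dimension means here: after `F.layer_norm`, statistics are computed per row, so each row has approximately zero mean and unit standard deviation.

```python
import torch
import torch.nn.functional as F

x = torch.randn(3, 5)
y = F.layer_norm(x, normalized_shape=(5,))

# mean and (biased) std are computed over the last dimension, per row
assert torch.allclose(y.mean(dim=-1), torch.zeros(3), atol=1e-5)
assert torch.allclose(y.std(dim=-1, unbiased=False), torch.ones(3), atol=1e-2)
```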
{"text": "torch.isinftorch.isinf(input) -> Tensor\n Tests if each element of \"input\" is infinite (positive or negative\n infinity) or not.\n Note:\n Complex values are infinite when their real or imaginary part is\n infinite.\n Parameters:\n input (Tensor) -- the input tensor.\n Returns:\n A boolean tensor that is True where \"input\" is infinite and\n False elsewhere\n Example:\n >>> torch.isinf(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))\n tensor([False, True, False, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.isinf.html", "category": "pytorch docs"}
{"text": "Attributeclass torch.jit.Attribute(value, type)\n This method is a pass-through function that returns value, mostly\n used to indicate to the TorchScript compiler that the left-hand\n side expression is a class instance attribute with type of type.\n Note that torch.jit.Attribute should only be used in the __init__\n method of jit.ScriptModule subclasses.\n Though TorchScript can infer correct types for most Python\n expressions, there are some cases where type inference can be\n wrong, including:\n * Empty containers like [] and {}, which TorchScript assumes to\n be containers of Tensor\n * Optional types like Optional[T] but assigned a valid value of\n type T: TorchScript would assume it is type T rather than\n Optional[T]\n In eager mode, it is simply a pass-through function that returns\n value without other implications.\n Example:\n import torch\n from typing import Dict\n class AttributeModule(torch.jit.ScriptModule):", "source": "https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html", "category": "pytorch docs"}
{"text": "def __init__(self):\n super(AttributeModule, self).__init__()\n self.foo = torch.jit.Attribute(0.1, float)\n # we should be able to use self.foo as a float here\n assert 0.0 < self.foo\n self.names_ages = torch.jit.Attribute({}, Dict[str, int])\n self.names_ages[\"someone\"] = 20\n assert isinstance(self.names_ages[\"someone\"], int)\n m = AttributeModule()\n # m will contain two attributes\n # 1. foo of type float\n # 2. names_ages of type Dict[str, int]\n Note: it is now preferred to use type annotations instead of\n torch.jit.Attribute:\n import torch\n from typing import Dict\n class AttributeModule(torch.nn.Module):\n names: Dict[str, int]\n def __init__(self):\n super(AttributeModule, self).__init__()\n self.names = {}\n m = AttributeModule()\n Parameters:\n * value -- An initial value to be assigned to attribute.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html", "category": "pytorch docs"}
{"text": "\ntype -- A Python type\n Returns:\n Returns value\n count(value, /)\n Return number of occurrences of value.\n index(value, start=0, stop=9223372036854775807, /)\n Return first index of value.\n Raises ValueError if the value is not present.\n type\n Alias for field number 1\n value\n Alias for field number 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html", "category": "pytorch docs"}
{"text": "torch.Tensor.longTensor.long(memory_format=torch.preserve_format) -> Tensor\n \"self.long()\" is equivalent to \"self.to(torch.int64)\". See \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.long.html", "category": "pytorch docs"}
{"text": "torch.Tensor.mvlgammaTensor.mvlgamma(p) -> Tensor\n See \"torch.mvlgamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mvlgamma.html", "category": "pytorch docs"}
{"text": "torch.Tensor.nan_to_num_Tensor.nan_to_num_(nan=0.0, posinf=None, neginf=None) -> Tensor\n In-place version of \"nan_to_num()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nan_to_num_.html", "category": "pytorch docs"}
{"text": "ConvBn3dclass torch.ao.nn.intrinsic.ConvBn3d(conv, bn)\n This is a sequential container which calls the Conv 3d and Batch\n Norm 3d modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBn3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.argminTensor.argmin(dim=None, keepdim=False) -> LongTensor\n See \"torch.argmin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.argmin.html", "category": "pytorch docs"}
{"text": "torch.asinhtorch.asinh(input, *, out=None) -> Tensor\n Returns a new tensor with the inverse hyperbolic sine of the\n elements of \"input\".\n \\text{out}_{i} = \\sinh^{-1}(\\text{input}_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.1606, -1.4267, -1.0899, -1.0250 ])\n >>> torch.asinh(a)\n tensor([ 0.1599, -1.1534, -0.9435, -0.8990 ])", "source": "https://pytorch.org/docs/stable/generated/torch.asinh.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.kaisertorch.signal.windows.kaiser(M, *, beta=12.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes the Kaiser window.\n The Kaiser window is defined as follows:\n w_n = I_0 \\left( \\beta \\sqrt{1 - \\left( {\\frac{n - N/2}{N/2}}\n \\right) ^2 } \\right) / I_0( \\beta )\n where \"I_0\" is the zeroth order modified Bessel function of the\n first kind (see \"torch.special.i0()\"), and \"N = M - 1 if sym else\n M\".\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * beta (float, optional) -- shape parameter for the\n window. Must be non-negative. Default: 12.0\n * sym (bool, optional) -- If False, returns a", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html", "category": "pytorch docs"}
{"text": "periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric Kaiser window with the default beta of 12.0.\n >>> torch.signal.windows.kaiser(10)\n tensor([4.0065e-05, 2.1875e-03, 4.3937e-02, 3.2465e-01, 8.8250e-01, 8.8250e-01, 3.2465e-01, 4.3937e-02, 2.1875e-03, 4.0065e-05])\n >>> # Generates a periodic Kaiser window.\n >>> torch.signal.windows.kaiser(10, sym=False)\n tensor([1.9858e-07, 5.1365e-05, 3.8659e-03, 8.4658e-02, 5.3941e-01, 1.0000e+00, 5.3941e-01, 8.4658e-02, 3.8659e-03, 5.1365e-05])", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html", "category": "pytorch docs"}
{"text": "torch.linalg.crosstorch.linalg.cross(input, other, *, dim=-1, out=None) -> Tensor\n Computes the cross product of two 3-dimensional vectors.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of vectors, for which it computes the product\n along the dimension \"dim\". It broadcasts over the batch dimensions.\n Parameters:\n * input (Tensor) -- the first input tensor.\n * other (Tensor) -- the second input tensor.\n * dim (int, optional) -- the dimension along which to\n take the cross-product. Default: -1.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor. Ignored\n if None. Default: None.\n -[ Example ]-\n >>> a = torch.randn(4, 3)\n >>> a\n tensor([[-0.3956, 1.1455, 1.6895],\n [-0.5849, 1.3672, 0.3599],\n [-1.1626, 0.7180, -0.0521],\n [-0.1339, 0.9902, -2.0225]])\n >>> b = torch.randn(4, 3)\n >>> b", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cross.html", "category": "pytorch docs"}
{"text": ">>> b = torch.randn(4, 3)\n >>> b\n tensor([[-0.0257, -1.4725, -1.2251],\n [-1.1479, -0.7005, -1.9757],\n [-1.3904, 0.3726, -1.1836],\n [-0.9688, -0.7153, 0.2159]])\n >>> torch.linalg.cross(a, b)\n tensor([[ 1.0844, -0.5281, 0.6120],\n [-2.4490, -1.5687, 1.9792],\n [-0.8304, -1.3037, 0.5650],\n [-1.2329, 1.9883, 1.0551]])\n >>> a = torch.randn(1, 3) # a is broadcast to match shape of b\n >>> a\n tensor([[-0.9941, -0.5132, 0.5681]])\n >>> torch.linalg.cross(a, b)\n tensor([[ 1.4653, -1.2325, 1.4507],\n [ 1.4119, -2.6163, 0.1073],\n [ 0.3957, -1.9666, -1.0840],\n [ 0.2956, -0.3357, 0.2139]])", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cross.html", "category": "pytorch docs"}
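Two algebraic identities give a quick check of the batched cross product: it is anticommutative, and each result vector is orthogonal to both inputs. A minimal sketch:

```python
import torch

a = torch.randn(4, 3)
b = torch.randn(4, 3)
c = torch.linalg.cross(a, b)

# anticommutativity: a x b == -(b x a)
assert torch.allclose(c, -torch.linalg.cross(b, a))
# orthogonality: (a x b) . a == 0 for every batch entry
assert torch.allclose((c * a).sum(dim=-1), torch.zeros(4), atol=1e-4)
```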
{"text": "torch.combinationstorch.combinations(input, r=2, with_replacement=False) -> seq\n Compute combinations of length r of the given tensor. The behavior\n is similar to python's itertools.combinations when\n with_replacement is set to False, and\n itertools.combinations_with_replacement when with_replacement\n is set to True.\n Parameters:\n * input (Tensor) -- 1D vector.\n * r (int, optional) -- number of elements to combine\n * with_replacement (bool, optional) -- whether to\n allow duplication in combination\n Returns:\n A tensor equivalent to converting all the input tensors into\n lists, do itertools.combinations or\n itertools.combinations_with_replacement on these lists, and\n finally convert the resulting list into tensor.\n Return type:\n Tensor\n Example:\n >>> a = [1, 2, 3]\n >>> list(itertools.combinations(a, r=2))\n [(1, 2), (1, 3), (2, 3)]", "source": "https://pytorch.org/docs/stable/generated/torch.combinations.html", "category": "pytorch docs"}
{"text": "[(1, 2), (1, 3), (2, 3)]\n >>> list(itertools.combinations(a, r=3))\n [(1, 2, 3)]\n >>> list(itertools.combinations_with_replacement(a, r=2))\n [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]\n >>> tensor_a = torch.tensor(a)\n >>> torch.combinations(tensor_a)\n tensor([[1, 2],\n [1, 3],\n [2, 3]])\n >>> torch.combinations(tensor_a, r=3)\n tensor([[1, 2, 3]])\n >>> torch.combinations(tensor_a, with_replacement=True)\n tensor([[1, 1],\n [1, 2],\n [1, 3],\n [2, 2],\n [2, 3],\n [3, 3]])", "source": "https://pytorch.org/docs/stable/generated/torch.combinations.html", "category": "pytorch docs"}
{"text": "Unfoldclass torch.nn.Unfold(kernel_size, dilation=1, padding=0, stride=1)\n Extracts sliding local blocks from a batched input tensor.\n Consider a batched \"input\" tensor of shape (N, C, *), where N is\n the batch dimension, C is the channel dimension, and * represent\n arbitrary spatial dimensions. This operation flattens each sliding\n \"kernel_size\"-sized block within the spatial dimensions of \"input\"\n into a column (i.e., last dimension) of a 3-D \"output\" tensor of\n shape (N, C \\times \\prod(\\text{kernel_size}), L), where C \\times\n \\prod(\\text{kernel_size}) is the total number of values within\n each block (a block has \\prod(\\text{kernel_size}) spatial\n locations each containing a C-channeled vector), and L is the total\n number of such blocks:\n L = \\prod_d \\left\\lfloor\\frac{\\text{spatial_size}[d] + 2 \\times\n \\text{padding}[d] - \\text{dilation}[d] \\times\n (\\text{kernel_size}[d] - 1) - 1}{\\text{stride}[d]} +\n 1\\right\\rfloor,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"}
{"text": "1\\right\\rfloor,\n where \\text{spatial_size} is formed by the spatial dimensions of\n \"input\" (* above), and d is over all spatial dimensions.\n Therefore, indexing \"output\" at the last dimension (column\n dimension) gives all values within a certain block.\n The \"padding\", \"stride\" and \"dilation\" arguments specify how the\n sliding blocks are retrieved.\n * \"stride\" controls the stride for the sliding blocks.\n * \"padding\" controls the amount of implicit zero-padding on both\n sides for \"padding\" number of points for each dimension before\n reshaping.\n * \"dilation\" controls the spacing between the kernel points; also\n known as the à trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n Parameters:\n * kernel_size (int or tuple) -- the size of the\n sliding blocks\n * dilation (int or tuple, optional) -- a parameter\n that controls the stride of elements within the neighborhood.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"}
{"text": "Default: 1\n * padding (int or tuple, optional) -- implicit\n zero padding to be added on both sides of input. Default: 0\n * stride (int or tuple, optional) -- the stride of\n the sliding blocks in the input spatial dimensions. Default: 1\n * If \"kernel_size\", \"dilation\", \"padding\" or \"stride\" is an int or\n a tuple of length 1, their values will be replicated across all\n spatial dimensions.\n * For the case of two input spatial dimensions this operation is\n sometimes called \"im2col\".\n Note:\n \"Fold\" calculates each combined value in the resulting large\n tensor by summing all values from all containing blocks. \"Unfold\"\n extracts the values in the local blocks by copying from the large\n tensor. So, if the blocks overlap, they are not inverses of each\n other.In general, folding and unfolding operations are related as\n follows. Consider \"Fold\" and \"Unfold\" instances created with the\n same parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"}
{"text": "same parameters:\n >>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)\n >>> fold = nn.Fold(output_size=..., **fold_params)\n >>> unfold = nn.Unfold(**fold_params)\n Then for any (supported) \"input\" tensor the following equality\n holds:\n fold(unfold(input)) == divisor * input\n where \"divisor\" is a tensor that depends only on the shape and\n dtype of the \"input\":\n >>> input_ones = torch.ones(input.shape, dtype=input.dtype)\n >>> divisor = fold(unfold(input_ones))\n When the \"divisor\" tensor contains no zero elements, then \"fold\"\n and \"unfold\" operations are inverses of each other (up to\n constant divisor).\n Warning:\n Currently, only 4-D input tensors (batched image-like tensors)\n are supported.\n Shape:\n * Input: (N, C, *)\n * Output: (N, C \\times \\prod(\\text{kernel_size}), L) as\n described above\n Examples:\n >>> unfold = nn.Unfold(kernel_size=(2, 3))\n >>> input = torch.randn(2, 5, 3, 4)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"}
{"text": ">>> input = torch.randn(2, 5, 3, 4)\n >>> output = unfold(input)\n >>> # each patch contains 30 values (2x3=6 vectors, each of 5 channels)\n >>> # 4 blocks (2x3 kernels) in total in the 3x4 input\n >>> output.size()\n torch.Size([2, 30, 4])\n >>> # Convolution is equivalent to Unfold + Matrix Multiplication + Fold (or view to output shape)\n >>> inp = torch.randn(1, 3, 10, 12)\n >>> w = torch.randn(2, 3, 4, 5)\n >>> inp_unf = torch.nn.functional.unfold(inp, (4, 5))\n >>> out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)\n >>> out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1))\n >>> # or equivalently (and avoiding a copy),\n >>> # out = out_unf.view(1, 2, 7, 8)\n >>> (torch.nn.functional.conv2d(inp, w) - out).abs().max()\n tensor(1.9073e-06)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"}
{"text": "torch.unique_consecutivetorch.unique_consecutive(*args, **kwargs)\n Eliminates all but the first element from every consecutive group\n of equivalent elements.\n Note:\n This function is different from \"torch.unique()\" in the sense\n that this function only eliminates consecutive duplicate values.\n This semantics is similar to std::unique in C++.\n Parameters:\n * input (Tensor) -- the input tensor\n * return_inverse (bool) -- Whether to also return the\n indices for where elements in the original input ended up in\n the returned unique list.\n * return_counts (bool) -- Whether to also return the\n counts for each unique element.\n * dim (int) -- the dimension to apply unique. If \"None\",\n the unique of the flattened input is returned. Default: \"None\"\n Returns:\n A tensor or a tuple of tensors containing\n * output (Tensor): the output list of unique scalar\n elements.", "source": "https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html", "category": "pytorch docs"}
{"text": "elements.\n * inverse_indices (Tensor): (optional) if\n \"return_inverse\" is True, there will be an additional\n returned tensor (same shape as input) representing the\n indices for where elements in the original input map to in\n the output; otherwise, this function will only return a\n single tensor.\n * counts (Tensor): (optional) if \"return_counts\" is\n True, there will be an additional returned tensor (same\n shape as output or output.size(dim), if dim was specified)\n representing the number of occurrences for each unique\n value or tensor.\n Return type:\n (Tensor, Tensor (optional), Tensor (optional))\n Example:\n >>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])\n >>> output = torch.unique_consecutive(x)\n >>> output\n tensor([1, 2, 3, 1, 2])\n >>> output, inverse_indices = torch.unique_consecutive(x, return_inverse=True)\n >>> output", "source": "https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html", "category": "pytorch docs"}
{"text": "\n\n\noutput\n tensor([1, 2, 3, 1, 2])\n >>> inverse_indices\n tensor([0, 0, 1, 1, 2, 3, 3, 4])\n >>> output, counts = torch.unique_consecutive(x, return_counts=True)\n >>> output\n tensor([1, 2, 3, 1, 2])\n >>> counts\n tensor([2, 2, 1, 2, 1])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html", "category": "pytorch docs"}
{"text": "torch.tracetorch.trace(input) -> Tensor\n Returns the sum of the elements of the diagonal of the input 2-D\n matrix.\n Example:\n >>> x = torch.arange(1., 10.).view(3, 3)\n >>> x\n tensor([[ 1., 2., 3.],\n [ 4., 5., 6.],\n [ 7., 8., 9.]])\n >>> torch.trace(x)\n tensor(15.)", "source": "https://pytorch.org/docs/stable/generated/torch.trace.html", "category": "pytorch docs"}
{"text": "SoftMarginLossclass torch.nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean')\n Creates a criterion that optimizes a two-class classification\n logistic loss between input tensor x and target tensor y\n (containing 1 or -1).\n \\text{loss}(x, y) = \\sum_i \\frac{\\log(1 +\n \\exp(-y[i]x[i]))}{\\text{x.nelement}()}\n Parameters:\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SoftMarginLoss.html", "category": "pytorch docs"}
{"text": "\"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (), where * means any number of dimensions.\n * Target: (), same shape as the input.\n * Output: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as input.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SoftMarginLoss.html", "category": "pytorch docs"}
{"text": "get_default_qat_qconfig_mappingclass torch.ao.quantization.qconfig_mapping.get_default_qat_qconfig_mapping(backend='x86', version=1)\n Return the default QConfigMapping for quantization aware training.\n Parameters:\n * backend () -- the quantization backend for the default\n qconfig mapping, should be one of [\"x86\" (default), \"fbgemm\",\n \"qnnpack\", \"onednn\"]\n * *version (*) -- the version for the default qconfig\n mapping\n Return type:\n QConfigMapping", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.get_default_qat_qconfig_mapping.html", "category": "pytorch docs"}
{"text": "torch.Tensor.to_sparse_bsrTensor.to_sparse_bsr(blocksize, dense_dim) -> Tensor\n Convert a tensor to a block sparse row (BSR) storage format of\n given blocksize. If the \"self\" is strided, then the number of\n dense dimensions could be specified, and a hybrid BSR tensor will\n be created, with dense_dim dense dimensions and self.dim() - 2 -\n dense_dim batch dimension.\n Parameters:\n * blocksize (list, tuple, \"torch.Size\", optional) -- Block\n size of the resulting BSR tensor. A block size must be a tuple\n of length two such that its items evenly divide the two sparse\n dimensions.\n * dense_dim (int, optional) -- Number of dense\n dimensions of the resulting BSR tensor. This argument should\n be used only if \"self\" is a strided tensor, and must be a\n value between 0 and dimension of \"self\" tensor minus two.\n Example:\n >>> dense = torch.randn(10, 10)\n >>> sparse = dense.to_sparse_csr()", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsr.html", "category": "pytorch docs"}
{"text": "\n\n\nsparse = dense.to_sparse_csr()\n >>> sparse_bsr = sparse.to_sparse_bsr((5, 5))\n >>> sparse_bsr.col_indices()\n tensor([0, 1, 0, 1])\n >>> dense = torch.zeros(4, 3, 1)\n >>> dense[0:2, 0] = dense[0:2, 2] = dense[2:4, 1] = 1\n >>> dense.to_sparse_bsr((2, 1), 1)\n tensor(crow_indices=tensor([0, 2, 3]),\n col_indices=tensor([0, 2, 1]),\n values=tensor([[[[1.]],\n [[1.]]],\n [[[1.]],\n [[1.]]],\n [[[1.]],\n [[1.]]]]), size=(4, 3, 1), nnz=3,\n layout=torch.sparse_bsr)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsr.html", "category": "pytorch docs"}
{"text": "torch.Tensor.innerTensor.inner(other) -> Tensor\n See \"torch.inner()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.inner.html", "category": "pytorch docs"}
{"text": "torch.index_selecttorch.index_select(input, dim, index, , out=None) -> Tensor\n Returns a new tensor which indexes the \"input\" tensor along\n dimension \"dim\" using the entries in \"index\" which is a\n LongTensor.\n The returned tensor has the same number of dimensions as the\n original tensor (\"input\"). The \"dim\"th dimension has the same size\n as the length of \"index\"; other dimensions have the same size as in\n the original tensor.\n Note:\n The returned tensor does not use the same storage as the\n original tensor. If \"out\" has a different shape than expected,\n we silently change it to the correct shape, reallocating the\n underlying storage if necessary.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension in which we index\n * index (IntTensor or LongTensor*) -- the 1-D tensor\n containing the indices to index\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.index_select.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> x = torch.randn(3, 4)\n >>> x\n tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],\n [-0.4664, 0.2647, -0.1228, -1.1068],\n [-1.1734, -0.6571, 0.7230, -0.6004]])\n >>> indices = torch.tensor([0, 2])\n >>> torch.index_select(x, 0, indices)\n tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],\n [-1.1734, -0.6571, 0.7230, -0.6004]])\n >>> torch.index_select(x, 1, indices)\n tensor([[ 0.1427, -0.5414],\n [-0.4664, -0.1228],\n [-1.1734, 0.7230]])", "source": "https://pytorch.org/docs/stable/generated/torch.index_select.html", "category": "pytorch docs"}
{"text": "torch.igammactorch.igammac(input, other, *, out=None) -> Tensor\n Alias for \"torch.special.gammaincc()\".", "source": "https://pytorch.org/docs/stable/generated/torch.igammac.html", "category": "pytorch docs"}
{"text": "Dropout2dclass torch.nn.Dropout2d(p=0.5, inplace=False)\n Randomly zero out entire channels (a channel is a 2D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 2D tensor \\text{input}[i, j]). Each channel will be zeroed out\n independently on every forward call with probability \"p\" using\n samples from a Bernoulli distribution.\n Usually the input comes from \"nn.Conv2d\" modules.\n As described in the paper Efficient Object Localization Using\n Convolutional Networks , if adjacent pixels within feature maps are\n strongly correlated (as is normally the case in early convolution\n layers) then i.i.d. dropout will not regularize the activations and\n will otherwise just result in an effective learning rate decrease.\n In this case, \"nn.Dropout2d()\" will help promote independence\n between feature maps and should be used instead.\n Parameters:\n * p (float, optional) -- probability of an element to\n be zero-ed.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html", "category": "pytorch docs"}
{"text": "be zero-ed.\n * inplace (bool, optional) -- If set to \"True\", will\n do this operation in-place\n Warning:\n Due to historical reasons, this class will perform 1D channel-\n wise dropout for 3D inputs (as done by \"nn.Dropout1d\"). Thus, it\n currently does NOT support inputs without a batch dimension of\n shape (C, H, W). This behavior will change in a future release to\n interpret 3D inputs as no-batch-dim inputs. To maintain the old\n behavior, switch to \"nn.Dropout1d\".\n Shape:\n * Input: (N, C, H, W) or (N, C, L).\n * Output: (N, C, H, W) or (N, C, L) (same shape as input).\n Examples:\n >>> m = nn.Dropout2d(p=0.2)\n >>> input = torch.randn(20, 16, 32, 32)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logical_not_Tensor.logical_not_() -> Tensor\n In-place version of \"logical_not()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_not_.html", "category": "pytorch docs"}
{"text": "torch.linalg.svdtorch.linalg.svd(A, full_matrices=True, , driver=None, out=None)\n Computes the singular value decomposition (SVD) of a matrix.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the full SVD of\n a matrix A \\in \\mathbb{K}^{m \\times n}, if k = min(m,n), is\n defined as\n A = U \\operatorname{diag}(S) V^{\\text{H}} \\mathrlap{\\qquad U \\in\n \\mathbb{K}^{m \\times m}, S \\in \\mathbb{R}^k, V \\in \\mathbb{K}^{n\n \\times n}}\n where \\operatorname{diag}(S) \\in \\mathbb{K}^{m \\times n},\n V^{\\text{H}} is the conjugate transpose when V is complex, and the\n transpose when V is real-valued. The matrices U, V (and thus\n V^{\\text{H}}) are orthogonal in the real case, and unitary in the\n complex case.\n When m > n (resp. m < n) we can drop the last m - n (resp. n\n - m) columns of U (resp. V) to form the reduced SVD*:\n A = U \\operatorname{diag}(S) V^{\\text{H}} \\mathrlap{\\qquad U \\in", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"}
{"text": "\\mathbb{K}^{m \\times k}, S \\in \\mathbb{R}^k, V \\in \\mathbb{K}^{k\n \\times n}}\n where \\operatorname{diag}(S) \\in \\mathbb{K}^{k \\times k}. In this\n case, U and V also have orthonormal columns.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n The returned decomposition is a named tuple (U, S, Vh) which\n corresponds to U, S, V^{\\text{H}} above.\n The singular values are returned in descending order.\n The parameter \"full_matrices\" chooses between the full (default)\n and reduced SVD.\n The \"driver\" kwarg may be used in CUDA with a cuSOLVER backend to\n choose the algorithm used to compute the SVD. The choice of a\n driver is a trade-off between accuracy and speed.\n * If \"A\" is well-conditioned (its condition number is not too\n large), or you do not mind some precision loss.\n * For a general matrix: 'gesvdj' (Jacobi method)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"}
{"text": "\nIf \"A\" is tall or wide (m >> n or m << n): 'gesvda'\n (Approximate method)\nIf \"A\" is not well-conditioned or precision is relevant:\n 'gesvd' (QR based)\n By default (\"driver\"= None), we call 'gesvdj' and, if it fails,\n we fallback to 'gesvd'.\n Differences with numpy.linalg.svd:\nUnlike numpy.linalg.svd, this function always returns a tuple\n of three tensors and it doesn't support compute_uv argument.\n Please use \"torch.linalg.svdvals()\", which computes only the\n singular values, instead of compute_uv=False.\n Note:\n When \"full_matrices\"= True, the gradients with respect to\n U[..., :, min(m, n):] and Vh[..., min(m, n):, :] will be\n ignored, as those vectors can be arbitrary bases of the\n corresponding subspaces.\n Warning:\n The returned tensors U and V are not unique, nor are they\n continuous with respect to \"A\". Due to this lack of uniqueness,\n different hardware and software may compute different singular\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"}
{"text": "vectors.This non-uniqueness is caused by the fact that\n multiplying any pair of singular vectors u_k, v_k by -1 in the\n real case or by e^{i \\phi}, \\phi \\in \\mathbb{R} in the complex\n case produces another two valid singular vectors of the matrix.\n For this reason, the loss function shall not depend on this e^{i\n \\phi} quantity, as it is not well-defined. This is checked for\n complex inputs when computing the gradients of this function. As\n such, when inputs are complex and are on a CUDA device, the\n computation of the gradients of this function synchronizes that\n device with the CPU.\n Warning:\n Gradients computed using U or Vh will only be finite when \"A\"\n does not have repeated singular values. If \"A\" is rectangular,\n additionally, zero must also not be one of its singular values.\n Furthermore, if the distance between any two singular values is\n close to zero, the gradient will be numerically unstable, as it", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"}
{"text": "depends on the singular values \\sigma_i through the computation\n of \\frac{1}{\\min_{i \\neq j} \\sigma_i^2 - \\sigma_j^2}. In the\n rectangular case, the gradient will also be numerically unstable\n when \"A\" has small singular values, as it also depends on the\n computation of \\frac{1}{\\sigma_i}.\n See also:\n \"torch.linalg.svdvals()\" computes only the singular values.\n Unlike \"torch.linalg.svd()\", the gradients of \"svdvals()\" are\n always numerically stable.\n \"torch.linalg.eig()\" for a function that computes another type of\n spectral decomposition of a matrix. The eigendecomposition works\n just on square matrices.\n \"torch.linalg.eigh()\" for a (faster) function that computes the\n eigenvalue decomposition for Hermitian and symmetric matrices.\n \"torch.linalg.qr()\" for another (much faster) decomposition that\n works on general matrices.\n Parameters:\n * A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"}
{"text": "zero or more batch dimensions.\n * full_matrices (bool, optional) -- controls whether\n to compute the full or reduced SVD, and consequently, the\n shape of the returned tensors U and Vh. Default: True.\n Keyword Arguments:\n * driver (str, optional) -- name of the cuSOLVER\n method to be used. This keyword argument only works on CUDA\n inputs. Available options are: None, gesvd, gesvdj, and\n gesvda. Default: None.\n * out (tuple, optional) -- output tuple of three\n tensors. Ignored if None.\n Returns:\n A named tuple (U, S, Vh) which corresponds to U, S,\n V^{\\text{H}} above.\n S will always be real-valued, even when \"A\" is complex. It\n will also be ordered in descending order.\n U and Vh will have the same dtype as \"A\". The left / right\n singular vectors will be given by the columns of U and the\n rows of Vh respectively.\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"}
{"text": "rows of Vh respectively.\n Examples:\n >>> A = torch.randn(5, 3)\n >>> U, S, Vh = torch.linalg.svd(A, full_matrices=False)\n >>> U.shape, S.shape, Vh.shape\n (torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3]))\n >>> torch.dist(A, U @ torch.diag(S) @ Vh)\n tensor(1.0486e-06)\n >>> U, S, Vh = torch.linalg.svd(A)\n >>> U.shape, S.shape, Vh.shape\n (torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3]))\n >>> torch.dist(A, U[:, :3] @ torch.diag(S) @ Vh)\n tensor(1.0486e-06)\n >>> A = torch.randn(7, 5, 3)\n >>> U, S, Vh = torch.linalg.svd(A, full_matrices=False)\n >>> torch.dist(A, U @ torch.diag_embed(S) @ Vh)\n tensor(3.0957e-06)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"}
{"text": "torch.Tensor.record_streamTensor.record_stream(stream)\n Ensures that the tensor memory is not reused for another tensor\n until all current work queued on \"stream\" are complete.\n Note:\n The caching allocator is aware of only the stream where a tensor\n was allocated. Due to the awareness, it already correctly manages\n the life cycle of tensors on only one stream. But if a tensor is\n used on a stream different from the stream of origin, the\n allocator might reuse the memory unexpectedly. Calling this\n method lets the allocator know which streams have used the\n tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html", "category": "pytorch docs"}
{"text": "torch.Tensor.squeeze_Tensor.squeeze_(dim=None) -> Tensor\n In-place version of \"squeeze()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.squeeze_.html", "category": "pytorch docs"}
{"text": "LazyBatchNorm3dclass torch.nn.LazyBatchNorm3d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n A \"torch.nn.BatchNorm3d\" module with lazy initialization of the\n \"num_features\" argument of the \"BatchNorm3d\" that is inferred from\n the \"input.size(1)\". The attributes that will be lazily initialized\n are weight, bias, running_mean and running_var.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm3d.html", "category": "pytorch docs"}
{"text": "\"True\"\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n cls_to_become\n alias of \"BatchNorm3d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm3d.html", "category": "pytorch docs"}
{"text": "torch.foreach_reciprocal_torch._foreach_reciprocal(self: List[Tensor]) -> None\n Apply \"torch.reciprocal()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_reciprocal_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.tanTensor.tan() -> Tensor\n See \"torch.tan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tan.html", "category": "pytorch docs"}
{"text": "torch.Tensor.pinverseTensor.pinverse() -> Tensor\n See \"torch.pinverse()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.pinverse.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_contiguousTensor.is_contiguous(memory_format=torch.contiguous_format) -> bool\n Returns True if \"self\" tensor is contiguous in memory in the order\n specified by memory format.\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- Specifies\n memory allocation order. Default: \"torch.contiguous_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_contiguous.html", "category": "pytorch docs"}
{"text": "torch.Tensor.q_scaleTensor.q_scale() -> float\n Given a Tensor quantized by linear(affine) quantization, returns\n the scale of the underlying quantizer().", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_scale.html", "category": "pytorch docs"}
{"text": "torch.get_deterministic_debug_modetorch.get_deterministic_debug_mode()\n Returns the current value of the debug mode for deterministic\n operations. Refer to \"torch.set_deterministic_debug_mode()\"\n documentation for more details.\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.get_deterministic_debug_mode.html", "category": "pytorch docs"}
{"text": "MultiLabelSoftMarginLossclass torch.nn.MultiLabelSoftMarginLoss(weight=None, size_average=None, reduce=None, reduction='mean')\n Creates a criterion that optimizes a multi-label one-versus-all\n loss based on max-entropy, between input x and target y of size (N,\n C). For each sample in the minibatch:\n loss(x, y) = - \\frac{1}{C} * \\sum_i y[i] * \\log((1 +\n \\exp(-x[i]))^{-1}) + (1-y[i]) *\n \\log\\left(\\frac{\\exp(-x[i])}{(1 + \\exp(-x[i]))}\\right)\n where i \\in \\left{0, \\; \\cdots , \\; \\text{x.nElement}() -\n 1\\right}, y[i] \\in \\left{0, \\; 1\\right}.\n Parameters:\n * weight (Tensor, optional) -- a manual rescaling\n weight given to each class. If given, it has to be a Tensor of\n size C. Otherwise, it is treated as if having all ones.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html", "category": "pytorch docs"}
{"text": "loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html", "category": "pytorch docs"}
{"text": "deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (N, C) where N is the batch size and C is the\n number of classes.\n * Target: (N, C), label targets padded by -1 ensuring same shape\n as the input.\n * Output: scalar. If \"reduction\" is \"'none'\", then (N).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.foldtorch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1)\n Combines an array of sliding local blocks into a large containing\n tensor.\n Warning:\n Currently, only unbatched (3D) or batched (4D) image-like output\n tensors are supported.\n See \"torch.nn.Fold\" for details\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fold.html", "category": "pytorch docs"}
{"text": "torch.autograd.graph.Node.register_hookabstract Node.register_hook(fn)\n Registers a backward hook.\n The hook will be called every time a gradient with respect to the\n Node is computed. The hook should have the following signature:\n hook(grad_inputs: Tuple[Tensor], grad_outputs: Tuple[Tensor]) -> Tuple[Tensor] or None\n The hook should not modify its argument, but it can optionally\n return a new gradient which will be used in place of\n \"grad_outputs\".\n This function returns a handle with a method \"handle.remove()\" that\n removes the hook from the module.\n Note:\n See Backward Hooks execution for more information on how when\n this hook is executed, and how its execution is ordered relative\n to other hooks.\n Example:\n >>> import torch\n >>> a = torch.tensor([0., 0., 0.], requires_grad=True)\n >>> b = a.clone()\n >>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_hook.html", "category": "pytorch docs"}
{"text": "\n\n\nhandle = b.grad_fn.register_hook(lambda gI, gO: (gO[0] * 2,))\n >>> b.sum().backward(retain_graph=True)\n >>> print(a.grad)\n tensor([2., 2., 2.])\n >>> handle.remove() # Removes the hook\n >>> a.grad = None\n >>> b.sum().backward(retain_graph=True)\n >>> print(a.grad)\n tensor([1., 1., 1.])\n Return type:\n RemovableHandle\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_hook.html", "category": "pytorch docs"}
{"text": "torch.arccostorch.arccos(input, *, out=None) -> Tensor\n Alias for \"torch.acos()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arccos.html", "category": "pytorch docs"}
{"text": "torch.histctorch.histc(input, bins=100, min=0, max=0, , out=None) -> Tensor\n Computes the histogram of a tensor.\n The elements are sorted into equal width bins between \"min\" and\n \"max\". If \"min\" and \"max\" are both zero, the minimum and maximum\n values of the data are used.\n Elements lower than min and higher than max and \"NaN\" elements are\n ignored.\n Parameters:\n * input (Tensor) -- the input tensor.\n * bins (int) -- number of histogram bins\n * min (Scalar) -- lower end of the range (inclusive)\n * max (Scalar) -- upper end of the range (inclusive)\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Returns:\n Histogram represented as a tensor\n Return type:\n Tensor\n Example:\n >>> torch.histc(torch.tensor([1., 2, 1]), bins=4, min=0, max=3)\n tensor([ 0., 2., 1., 0.])", "source": "https://pytorch.org/docs/stable/generated/torch.histc.html", "category": "pytorch docs"}
{"text": "torch.Tensor.float_powerTensor.float_power(exponent) -> Tensor\n See \"torch.float_power()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.float_power.html", "category": "pytorch docs"}
{"text": "torch.linalg.tensorsolvetorch.linalg.tensorsolve(A, B, dims=None, , out=None) -> Tensor\n Computes the solution X to the system torch.tensordot(A, X) =\n B.\n If m is the product of the first \"B\".ndim dimensions of \"A\"\n and n is the product of the rest of the dimensions, this function\n expects m and n to be equal.\n The returned tensor x satisfies tensordot(\"A\", x, dims=x.ndim)\n == \"B\". x has shape \"A\"[B.ndim:].\n If \"dims\" is specified, \"A\" will be reshaped as\n A = movedim(A, dims, range(len(dims) - A.ndim + 1, 0))\n Supports inputs of float, double, cfloat and cdouble dtypes.\n See also:\n \"torch.linalg.tensorinv()\" computes the multiplicative inverse of\n \"torch.tensordot()\".\n Parameters:\n * A (Tensor) -- tensor to solve for. Its shape must\n satisfy prod(\"A\".shape[:\"B\".ndim]) ==\n prod(\"A\".shape[\"B\".ndim:]).\n * B (Tensor) -- tensor of shape \"A\".shape[:\"B\".ndim]*.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html", "category": "pytorch docs"}
{"text": "\ndims (Tuple[int], optional) -- dimensions of\n \"A\" to be moved. If None, no dimensions are moved. Default:\n None.\n Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Raises:\n RuntimeError -- if the reshaped \"A\".view(m, m) with m as\n above is not invertible or the product of the first \"ind\"\n dimensions is not equal to the product of the rest of the\n dimensions.\n Examples:\n >>> A = torch.eye(2 * 3 * 4).reshape((2 * 3, 4, 2, 3, 4))\n >>> B = torch.randn(2 * 3, 4)\n >>> X = torch.linalg.tensorsolve(A, B)\n >>> X.shape\n torch.Size([2, 3, 4])\n >>> torch.allclose(torch.tensordot(A, X, dims=X.ndim), B)\n True\n >>> A = torch.randn(6, 4, 4, 3, 2)\n >>> B = torch.randn(4, 3, 2)\n >>> X = torch.linalg.tensorsolve(A, B, dims=(0, 2))\n >>> X.shape\n torch.Size([6, 4])\n >>> A = A.permute(1, 3, 4, 0, 2)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html", "category": "pytorch docs"}
{"text": "\n\n\nA = A.permute(1, 3, 4, 0, 2)\n >>> A.shape[B.ndim:]\n torch.Size([6, 4])\n >>> torch.allclose(torch.tensordot(A, X, dims=X.ndim), B, atol=1e-6)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html", "category": "pytorch docs"}
{"text": "torch.Tensor.new_fullTensor.new_full(size, fill_value, , dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\n Returns a Tensor of size \"size\" filled with \"fill_value\". By\n default, the returned Tensor has the same \"torch.dtype\" and\n \"torch.device\" as this tensor.\n Parameters:\n fill_value (scalar) -- the number to fill the output\n tensor with.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * layout* (\"torch.layout\", optional) -- the desired layout of", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_full.html", "category": "pytorch docs"}
{"text": "returned Tensor. Default: \"torch.strided\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> tensor = torch.ones((2,), dtype=torch.float64)\n >>> tensor.new_full((3, 4), 3.141592)\n tensor([[ 3.1416, 3.1416, 3.1416, 3.1416],\n [ 3.1416, 3.1416, 3.1416, 3.1416],\n [ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_full.html", "category": "pytorch docs"}
{"text": "torch.Tensor.powTensor.pow(exponent) -> Tensor\n See \"torch.pow()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.pow.html", "category": "pytorch docs"}
{"text": "torch.Tensor.int_reprTensor.int_repr() -> Tensor\n Given a quantized Tensor, \"self.int_repr()\" returns a CPU Tensor\n with uint8_t as data type that stores the underlying uint8_t values\n of the given Tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.int_repr.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addcmul_Tensor.addcmul_(tensor1, tensor2, *, value=1) -> Tensor\n In-place version of \"addcmul()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addcmul_.html", "category": "pytorch docs"}
{"text": "torch.sspaddmmtorch.sspaddmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) -> Tensor\n Matrix multiplies a sparse tensor \"mat1\" with a dense tensor\n \"mat2\", then adds the sparse tensor \"input\" to the result.\n Note: This function is equivalent to \"torch.addmm()\", except\n \"input\" and \"mat1\" are sparse.\n Parameters:\n * input (Tensor) -- a sparse matrix to be added\n * mat1 (Tensor) -- a sparse matrix to be matrix multiplied\n * mat2 (Tensor) -- a dense matrix to be matrix multiplied\n Keyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * alpha (Number, optional) -- multiplier for mat1 @\n mat2 (\\alpha)\n * out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.sspaddmm.html", "category": "pytorch docs"}
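Ignoring sparsity, the arithmetic sspaddmm performs is out = beta * input + alpha * (mat1 @ mat2). A dense pure-Python sketch of that formula (a hypothetical helper; the real op keeps "input" and "mat1" in sparse formats for efficiency):

```python
# Dense sketch of torch.sspaddmm's arithmetic:
# out = beta * input + alpha * (mat1 @ mat2), matrices as lists of lists.
def sspaddmm_dense(input, mat1, mat2, beta=1, alpha=1):
    n, k, m = len(mat1), len(mat2), len(mat2[0])
    out = [[beta * input[i][j] for j in range(m)] for i in range(n)]
    for i in range(n):
        for j in range(m):
            out[i][j] += alpha * sum(mat1[i][t] * mat2[t][j] for t in range(k))
    return out

I = [[1, 0], [0, 1]]   # sparse "input" (here the identity)
A = [[1, 2], [3, 4]]   # sparse mat1
B = [[5, 6], [7, 8]]   # dense mat2
print(sspaddmm_dense(I, A, B))  # [[20, 22], [43, 51]]
```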
{"text": "torch.Tensor.arctan_Tensor.arctan_() -> Tensor\n In-place version of \"arctan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctan_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.digamma_Tensor.digamma_() -> Tensor\n In-place version of \"digamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.digamma_.html", "category": "pytorch docs"}
{"text": "ParameterListclass torch.nn.ParameterList(values=None)\n Holds parameters in a list.\n \"ParameterList\" can be used like a regular Python list, but Tensors\n that are \"Parameter\" are properly registered, and will be visible\n by all \"Module\" methods.\n Note that the constructor, assigning an element of the list, the\n \"append()\" method and the \"extend()\" method will convert any\n \"Tensor\" into \"Parameter\".\n Parameters:\n parameters (iterable, optional) -- an iterable of\n elements to add to the list.\n Example:\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])\n def forward(self, x):\n # ParameterList can act as an iterable, or be indexed using ints\n for i, p in enumerate(self.params):\n x = self.params[i // 2].mm(x) + p.mm(x)\n return x", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterList.html", "category": "pytorch docs"}
{"text": "append(value)\n Appends a given value at the end of the list.\n Parameters:\n value (Any) -- value to append\n Return type:\n ParameterList\n extend(values)\n Appends values from a Python iterable to the end of the list.\n Parameters:\n values (iterable) -- iterable of values to append\n Return type:\n ParameterList", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterList.html", "category": "pytorch docs"}
{"text": "torch.sinhtorch.sinh(input, *, out=None) -> Tensor\n Returns a new tensor with the hyperbolic sine of the elements of\n \"input\".\n \\text{out}_{i} = \\sinh(\\text{input}_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.5380, -0.8632, -0.1265, 0.9399])\n >>> torch.sinh(a)\n tensor([ 0.5644, -0.9744, -0.1268, 1.0845])\n Note:\n When \"input\" is on the CPU, the implementation of torch.sinh may\n use the Sleef library, which rounds very large results to\n infinity or negative infinity. See here for details.", "source": "https://pytorch.org/docs/stable/generated/torch.sinh.html", "category": "pytorch docs"}
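The elementwise map out_i = sinh(input_i) can be checked in plain Python against the defining identity sinh(x) = (e^x - e^-x) / 2 (a sketch, not torch code):

```python
import math

# Elementwise hyperbolic sine via the defining identity
# sinh(x) = (e**x - e**-x) / 2, cross-checked against math.sinh.
def sinh_list(xs):
    return [(math.exp(x) - math.exp(-x)) / 2 for x in xs]

vals = [0.5380, -0.8632, -0.1265, 0.9399]
out = sinh_list(vals)
assert all(abs(o - math.sinh(v)) < 1e-12 for o, v in zip(out, vals))
print([round(o, 4) for o in out])  # close to the tensor values in the doc example
```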
{"text": "inference_modeclass torch.inference_mode(mode=True)\n Context-manager that enables or disables inference mode.\n InferenceMode is a new context manager analogous to \"no_grad\" to be\n used when you are certain your operations will have no interactions\n with autograd (e.g., data processing, model evaluation). Code run\n under this mode gets better performance by disabling view tracking\n and version counter bumps. Note that unlike some other mechanisms\n that locally enable or disable grad, entering inference_mode also\n disables forward-mode AD.\n This context manager is thread local; it will not affect\n computation in other threads.\n Also functions as a decorator. (Make sure to instantiate with\n parentheses.)\n Note:\n Inference mode is one of several mechanisms that can enable or\n disable gradients locally; see Locally disabling gradient\n computation for more information on how they compare.\n Parameters:\n mode (bool) -- Flag whether to enable or disable inference", "source": "https://pytorch.org/docs/stable/generated/torch.inference_mode.html", "category": "pytorch docs"}
{"text": "mode\n Example::\n >>> import torch\n >>> x = torch.ones(1, 2, 3, requires_grad=True)\n >>> with torch.inference_mode():\n ... y = x * x\n >>> y.requires_grad\n False\n >>> y._version\n Traceback (most recent call last):\n File \"\", line 1, in \n RuntimeError: Inference tensors do not track version counter.\n >>> @torch.inference_mode()\n ... def func(x):\n ... return x * x\n >>> out = func(x)\n >>> out.requires_grad\n False", "source": "https://pytorch.org/docs/stable/generated/torch.inference_mode.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arccos_Tensor.arccos_() -> Tensor\n In-place version of \"arccos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arccos_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addmvTensor.addmv(mat, vec, *, beta=1, alpha=1) -> Tensor\n See \"torch.addmv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addmv.html", "category": "pytorch docs"}
{"text": "torch.Tensor.less_Tensor.less_(other) -> Tensor\n In-place version of \"less()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.less_.html", "category": "pytorch docs"}
{"text": "torch._foreach_ceil_torch._foreach_ceil_(self: List[Tensor]) -> None\n Apply \"torch.ceil()\" to each Tensor of the input list, in place.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_ceil_.html", "category": "pytorch docs"}
{"text": "convertclass torch.quantization.convert(module, mapping=None, inplace=False, remove_qconfig=True, is_reference=False, convert_custom_config_dict=None)\n Converts submodules in the input module to a different module\n according to mapping by calling the from_float method on the target\n module class, and removes qconfig at the end if remove_qconfig is\n set to True.\n Parameters:\n * module -- prepared and calibrated module\n * mapping -- a dictionary that maps from source module type\n to target module type, can be overwritten to allow swapping\n user defined Modules\n * inplace -- carry out model transformations in-place, the\n original module is mutated\n * convert_custom_config_dict -- custom configuration\n dictionary for convert function\n # Example of convert_custom_config_dict:\n convert_custom_config_dict = {\n # user will manually define the corresponding quantized", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.convert.html", "category": "pytorch docs"}
{"text": "# module class which has a from_observed class method that converts\n # observed custom module to quantized custom module\n \"observed_to_quantized_custom_module_class\": {\n ObservedCustomModule: QuantizedCustomModule\n }\n }\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.convert.html", "category": "pytorch docs"}
{"text": "torch.Tensor.divide_Tensor.divide_(value, *, rounding_mode=None) -> Tensor\n In-place version of \"divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.divide_.html", "category": "pytorch docs"}
{"text": "graphclass torch.cuda.graph(cuda_graph, pool=None, stream=None)\n Context-manager that captures CUDA work into a\n \"torch.cuda.CUDAGraph\" object for later replay.\n See CUDA Graphs for a general introduction, detailed use, and\n constraints.\n Parameters:\n * cuda_graph (torch.cuda.CUDAGraph) -- Graph object used\n for capture.\n * pool (optional) -- Opaque token (returned by a call to\n \"graph_pool_handle()\" or \"other_Graph_instance.pool()\")\n hinting this graph's capture may share memory from the\n specified pool. See Graph memory management.\n * stream (torch.cuda.Stream, optional) -- If supplied,\n will be set as the current stream in the context. If not\n supplied, \"graph\" sets its own internal side stream as the\n current stream in the context.\n Note:\n For effective memory sharing, if you pass a \"pool\" used by a\n previous capture and the previous capture used an explicit", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.graph.html", "category": "pytorch docs"}
{"text": "\"stream\" argument, you should pass the same \"stream\" argument to\n this capture.\n Warning:\n This API is in beta and may change in future releases.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.graph.html", "category": "pytorch docs"}
{"text": "torch.jit.loadtorch.jit.load(f, map_location=None, _extra_files=None)\n Load a \"ScriptModule\" or \"ScriptFunction\" previously saved with\n \"torch.jit.save\"\n All previously saved modules, no matter their device, are first\n loaded onto CPU, and then are moved to the devices they were saved\n from. If this fails (e.g. because the run time system doesn't have\n certain devices), an exception is raised.\n Parameters:\n * f -- a file-like object (has to implement read, readline,\n tell, and seek), or a string containing a file name\n * map_location (string or torch.device) -- A\n simplified version of \"map_location\" in torch.jit.save used\n to dynamically remap storages to an alternative set of\n devices.\n * _extra_files (dictionary of filename to content) -- The\n extra filenames given in the map would be loaded and their\n content would be stored in the provided map.\n Returns:\n A \"ScriptModule\" object.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.load.html", "category": "pytorch docs"}
{"text": "Example:\n import torch\n import io\n torch.jit.load('scriptmodule.pt')\n # Load ScriptModule from io.BytesIO object\n with open('scriptmodule.pt', 'rb') as f:\n buffer = io.BytesIO(f.read())\n # Load all tensors to the original device\n torch.jit.load(buffer)\n # Load all tensors onto CPU, using a device\n buffer.seek(0)\n torch.jit.load(buffer, map_location=torch.device('cpu'))\n # Load all tensors onto CPU, using a string\n buffer.seek(0)\n torch.jit.load(buffer, map_location='cpu')\n # Load with extra files.\n extra_files = {'foo.txt': ''} # values will be replaced with data\n torch.jit.load('scriptmodule.pt', _extra_files=extra_files)\n print(extra_files['foo.txt'])", "source": "https://pytorch.org/docs/stable/generated/torch.jit.load.html", "category": "pytorch docs"}
{"text": "torch.Tensor.quantileTensor.quantile(q, dim=None, keepdim=False, *, interpolation='linear') -> Tensor\n See \"torch.quantile()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.quantile.html", "category": "pytorch docs"}
{"text": "torch.complextorch.complex(real, imag, *, out=None) -> Tensor\n Constructs a complex tensor with its real part equal to \"real\" and\n its imaginary part equal to \"imag\".\n Parameters:\n * real (Tensor) -- The real part of the complex tensor.\n Must be float or double.\n * imag (Tensor) -- The imaginary part of the complex\n tensor. Must be same dtype as \"real\".\n Keyword Arguments:\n out (Tensor) -- If the inputs are \"torch.float32\", must be\n \"torch.complex64\". If the inputs are \"torch.float64\", must be\n \"torch.complex128\".\n Example:\n >>> real = torch.tensor([1, 2], dtype=torch.float32)\n >>> imag = torch.tensor([3, 4], dtype=torch.float32)\n >>> z = torch.complex(real, imag)\n >>> z\n tensor([(1.+3.j), (2.+4.j)])\n >>> z.dtype\n torch.complex64", "source": "https://pytorch.org/docs/stable/generated/torch.complex.html", "category": "pytorch docs"}
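torch.complex pairs real and imaginary parts elementwise; Python's built-in complex type performs the same pairing, which makes the semantics easy to sketch in plain Python:

```python
# The real/imag pairing torch.complex performs, done over plain lists
# with Python's built-in complex type.
real = [1.0, 2.0]
imag = [3.0, 4.0]
z = [complex(r, i) for r, i in zip(real, imag)]
print(z)  # [(1+3j), (2+4j)]
```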
{"text": "torch._foreach_negtorch._foreach_neg(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.neg()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_neg.html", "category": "pytorch docs"}
{"text": "torch.lcmtorch.lcm(input, other, *, out=None) -> Tensor\n Computes the element-wise least common multiple (LCM) of \"input\"\n and \"other\".\n Both \"input\" and \"other\" must have integer types.\n Note:\n This defines lcm(0, 0) = 0 and lcm(0, a) = 0.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([5, 10, 15])\n >>> b = torch.tensor([3, 4, 5])\n >>> torch.lcm(a, b)\n tensor([15, 20, 15])\n >>> c = torch.tensor([3])\n >>> torch.lcm(a, c)\n tensor([15, 30, 15])", "source": "https://pytorch.org/docs/stable/generated/torch.lcm.html", "category": "pytorch docs"}
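The documented convention lcm(0, 0) = 0 and lcm(0, a) = 0 is easy to reproduce in plain Python via math.gcd (a sketch of the elementwise rule, not torch code):

```python
import math

# Elementwise least common multiple with torch.lcm's convention
# lcm(0, 0) = 0 and lcm(0, a) = 0, built on math.gcd.
def lcm_pair(a, b):
    if a == 0 or b == 0:
        return 0
    return abs(a * b) // math.gcd(a, b)

xs, ys = [5, 10, 15], [3, 4, 5]
print([lcm_pair(x, y) for x, y in zip(xs, ys)])  # [15, 20, 15]
```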
{"text": "torch._foreach_asintorch._foreach_asin(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.asin()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_asin.html", "category": "pytorch docs"}
{"text": "torch.isposinftorch.isposinf(input, *, out=None) -> Tensor\n Tests if each element of \"input\" is positive infinity or not.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([-float('inf'), float('inf'), 1.2])\n >>> torch.isposinf(a)\n tensor([False, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.isposinf.html", "category": "pytorch docs"}
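The test is strictly for +inf: negative infinity, finite values, and nan all map to False. A plain-Python sketch of the same elementwise check:

```python
# Elementwise positive-infinity test mirroring torch.isposinf:
# only +inf maps to True; -inf, finite values, and nan map to False.
def isposinf(xs):
    return [x == float("inf") for x in xs]

a = [-float("inf"), float("inf"), 1.2]
print(isposinf(a))  # [False, True, False]
```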
{"text": "ConvBn1dclass torch.ao.nn.intrinsic.qat.ConvBn1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\n A ConvBn1d module is a module fused from Conv1d and BatchNorm1d,\n attached with FakeQuantize modules for weight, used in quantization\n aware training.\n We combined the interface of \"torch.nn.Conv1d\" and\n \"torch.nn.BatchNorm1d\".\n Similar to \"torch.nn.Conv1d\", with FakeQuantize modules initialized\n to default.\n Variables:\n * freeze_bn --\n * weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBn1d.html", "category": "pytorch docs"}
{"text": "enable_fake_quantclass torch.quantization.fake_quantize.enable_fake_quant(mod)\n Enable fake quantization for this module, if applicable. Example\n usage:\n # model is any PyTorch model\n model.apply(torch.ao.quantization.enable_fake_quant)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.enable_fake_quant.html", "category": "pytorch docs"}
{"text": "RNNclass torch.nn.RNN(*args, **kwargs)\n Applies a multi-layer Elman RNN with \\tanh or \\text{ReLU} non-\n linearity to an input sequence.\n For each element in the input sequence, each layer computes the\n following function:\n h_t = \\tanh(x_t W_{ih}^T + b_{ih} + h_{t-1}W_{hh}^T + b_{hh})\n where h_t is the hidden state at time t, x_t is the input at time\n t, and h_{(t-1)} is the hidden state of the previous layer at\n time t-1 or the initial hidden state at time 0. If\n \"nonlinearity\" is \"'relu'\", then \\text{ReLU} is used instead of\n \\tanh.\n Parameters:\n * input_size -- The number of expected features in the input\n x\n * hidden_size -- The number of features in the hidden state\n h\n * num_layers -- Number of recurrent layers. E.g., setting\n \"num_layers=2\" would mean stacking two RNNs together to form a\n stacked RNN, with the second RNN taking in outputs of the\n first RNN and computing the final results. Default: 1", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"}
{"text": "* nonlinearity -- The non-linearity to use. Can be either\n \"'tanh'\" or \"'relu'\". Default: \"'tanh'\"\n * bias -- If \"False\", then the layer does not use bias\n weights b_ih and b_hh. Default: \"True\"\n * batch_first -- If \"True\", then the input and output\n tensors are provided as (batch, seq, feature) instead of\n (seq, batch, feature). Note that this does not apply to\n hidden or cell states. See the Inputs/Outputs sections below\n for details. Default: \"False\"\n * dropout -- If non-zero, introduces a Dropout layer on\n the outputs of each RNN layer except the last layer, with\n dropout probability equal to \"dropout\". Default: 0\n * bidirectional -- If \"True\", becomes a bidirectional RNN.\n Default: \"False\"\n Inputs: input, h_0\n * input: tensor of shape (L, H_{in}) for unbatched input,\n (L, N, H_{in}) when \"batch_first=False\" or (N, L, H_{in}) when", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"}
{"text": "\"batch_first=True\" containing the features of the input\n sequence. The input can also be a packed variable length\n sequence. See \"torch.nn.utils.rnn.pack_padded_sequence()\" or\n \"torch.nn.utils.rnn.pack_sequence()\" for details.\n * h_0: tensor of shape (D * \\text{num_layers}, H_{out}) for\n unbatched input or (D * \\text{num_layers}, N, H_{out})\n containing the initial hidden state for the input sequence\n batch. Defaults to zeros if not provided.\n where:\n \\begin{aligned} N ={} & \\text{batch size} \\\\ L ={} &\n \\text{sequence length} \\\\ D ={} & 2 \\text{ if\n bidirectional=True otherwise } 1 \\\\ H_{in} ={} &\n \\text{input_size} \\\\ H_{out} ={} & \\text{hidden_size}\n \\end{aligned}\n Outputs: output, h_n\n * output: tensor of shape (L, D * H_{out}) for unbatched\n input, (L, N, D * H_{out}) when \"batch_first=False\" or (N, L,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"}
{"text": "D * H_{out}) when \"batch_first=True\" containing the output\n features (h_t) from the last layer of the RNN, for each t.\n If a \"torch.nn.utils.rnn.PackedSequence\" has been given as the\n input, the output will also be a packed sequence.\n * h_n: tensor of shape (D * \\text{num_layers}, H_{out}) for\n unbatched input or (D * \\text{num_layers}, N, H_{out})\n containing the final hidden state for each element in the\n batch.\n Variables:\n * weight_ih_l[k] -- the learnable input-hidden weights of\n the k-th layer, of shape (hidden_size, input_size) for k =\n 0. Otherwise, the shape is (hidden_size, num_directions *\n hidden_size)\n * weight_hh_l[k] -- the learnable hidden-hidden weights of\n the k-th layer, of shape (hidden_size, hidden_size)\n * bias_ih_l[k] -- the learnable input-hidden bias of the\n k-th layer, of shape (hidden_size)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"}
{"text": "k-th layer, of shape (hidden_size)\n * bias_hh_l[k] -- the learnable hidden-hidden bias of the\n k-th layer, of shape (hidden_size)\n Note:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden_size}}\n Note:\n For bidirectional RNNs, forward and backward are directions 0 and\n 1 respectively. Example of splitting the output layers when\n \"batch_first=False\": \"output.view(seq_len, batch, num_directions,\n hidden_size)\".\n Note:\n \"batch_first\" argument is ignored for unbatched inputs.\n Warning:\n There are known non-determinism issues for RNN functions on some\n versions of cuDNN and CUDA. You can enforce deterministic\n behavior by setting the following environment variables: On CUDA\n 10.1, set environment variable \"CUDA_LAUNCH_BLOCKING=1\". This may\n affect performance. On CUDA 10.2 or later, set environment\n variable (note the leading colon symbol)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"}
{"text": "variable (note the leading colon symbol)\n \"CUBLAS_WORKSPACE_CONFIG=:16:8\" or\n \"CUBLAS_WORKSPACE_CONFIG=:4096:2\". See the cuDNN 8 Release Notes\n for more information.\n Note:\n If the following conditions are satisfied: 1) cudnn is enabled,\n 2) input data is on the GPU, 3) input data has dtype\n \"torch.float16\", 4) a V100 GPU is used, 5) input data is not in\n \"PackedSequence\" format, then a persistent algorithm can be\n selected to improve performance.\n Examples:\n >>> rnn = nn.RNN(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> output, hn = rnn(input, h0)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"}
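The recurrence above, h_t = tanh(x_t W_ih^T + b_ih + h_{t-1} W_hh^T + b_hh), can be sketched for a single time step in plain Python. The toy weights below are hypothetical; the weight shapes follow the Variables section (W_ih is (hidden_size, input_size), W_hh is (hidden_size, hidden_size)):

```python
import math

# One Elman RNN step: h_t = tanh(x W_ih^T + b_ih + h_prev W_hh^T + b_hh).
def rnn_step(x, h_prev, W_ih, b_ih, W_hh, b_hh):
    hidden_size = len(W_ih)
    h_new = []
    for j in range(hidden_size):
        s = b_ih[j] + b_hh[j]
        s += sum(W_ih[j][k] * x[k] for k in range(len(x)))
        s += sum(W_hh[j][k] * h_prev[k] for k in range(len(h_prev)))
        h_new.append(math.tanh(s))
    return h_new

# Toy sizes: input_size=2, hidden_size=3, zero initial hidden state.
h = rnn_step([1.0, -1.0], [0.0, 0.0, 0.0],
             W_ih=[[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]],
             b_ih=[0.0, 0.0, 0.0],
             W_hh=[[0.0, 0.0, 0.0]] * 3,
             b_hh=[0.0, 0.0, 0.0])
print([round(v, 4) for v in h])  # [-0.0997, -0.0997, -0.0997]
```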
{"text": "torch.Tensor.tanh_Tensor.tanh_() -> Tensor\n In-place version of \"tanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tanh_.html", "category": "pytorch docs"}
{"text": "torch.deg2radtorch.deg2rad(input, *, out=None) -> Tensor\n Returns a new tensor with each of the elements of \"input\" converted\n from angles in degrees to radians.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([[180.0, -180.0], [360.0, -360.0], [90.0, -90.0]])\n >>> torch.deg2rad(a)\n tensor([[ 3.1416, -3.1416],\n [ 6.2832, -6.2832],\n [ 1.5708, -1.5708]])", "source": "https://pytorch.org/docs/stable/generated/torch.deg2rad.html", "category": "pytorch docs"}
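Degrees-to-radians is the elementwise map x * pi / 180, which is exactly what the standard library's math.radians computes; a plain-Python cross-check of the doc example's values:

```python
import math

# Elementwise degrees -> radians, i.e. x * pi / 180, via math.radians.
a = [180.0, -180.0, 360.0, -360.0, 90.0, -90.0]
print([round(math.radians(x), 4) for x in a])
# [3.1416, -3.1416, 6.2832, -6.2832, 1.5708, -1.5708]
```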
{"text": "torch.randtorch.rand(*size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor\n Returns a tensor filled with random numbers from a uniform\n distribution on the interval [0, 1).\n The shape of the tensor is defined by the variable argument \"size\".\n Parameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\n Keyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of", "source": "https://pytorch.org/docs/stable/generated/torch.rand.html", "category": "pytorch docs"}
{"text": "returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> torch.rand(4)\n tensor([ 0.5204, 0.2503, 0.3525, 0.5673])\n >>> torch.rand(2, 3)\n tensor([[ 0.8237, 0.5781, 0.6879],\n [ 0.3816, 0.7249, 0.0998]])", "source": "https://pytorch.org/docs/stable/generated/torch.rand.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sincTensor.sinc() -> Tensor\n See \"torch.sinc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sinc.html", "category": "pytorch docs"}
{"text": "torch.autograd.profiler.load_nvproftorch.autograd.profiler.load_nvprof(path)\n Opens an nvprof trace file and parses autograd annotations.\n Parameters:\n path (str) -- path to nvprof trace", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.load_nvprof.html", "category": "pytorch docs"}
{"text": "torch.Tensor.triuTensor.triu(diagonal=0) -> Tensor\n See \"torch.triu()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.triu.html", "category": "pytorch docs"}
{"text": "torch.Tensor.geTensor.ge(other) -> Tensor\n See \"torch.ge()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ge.html", "category": "pytorch docs"}
{"text": "check_sparse_tensor_invariantsclass torch.sparse.check_sparse_tensor_invariants(enable=True)\n A tool to control checking sparse tensor invariants.\n The following options exist to manage sparse tensor invariants\n checking in sparse tensor construction:\n 1. Using a context manager:\n with torch.sparse.check_sparse_tensor_invariants():\n run_my_model()\n 2. Using a procedural approach:\n prev_checks_enabled = torch.sparse.check_sparse_tensor_invariants.is_enabled()\n torch.sparse.check_sparse_tensor_invariants.enable()\n run_my_model()\n if not prev_checks_enabled:\n torch.sparse.check_sparse_tensor_invariants.disable()\n 3. Using function decoration:\n @torch.sparse.check_sparse_tensor_invariants()\n def run_my_model():\n ...\n run_my_model()\n 4. Using \"check_invariants\" keyword argument in sparse tensor\n constructor call. For example:", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html", "category": "pytorch docs"}
{"text": ">>> torch.sparse_csr_tensor([0, 1, 3], [0, 1], [1, 2], check_invariants=True)\n Traceback (most recent call last):\n File \"\", line 1, in \n RuntimeError: crow_indices[..., -1] == nnz is not satisfied.\n static disable()\n Disable sparse tensor invariants checking in sparse tensor\n constructors.\n See \"torch.sparse.check_sparse_tensor_invariants.enable()\" for\n more information.\n static enable()\n Enable sparse tensor invariants checking in sparse tensor\n constructors.\n Note:\n By default, the sparse tensor invariants checks are disabled.\n Use \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\"\n to retrieve the current state of sparse tensor invariants\n checking.\n Note:\n The sparse tensor invariants check flag is effective to all\n sparse tensor constructors, both in Python and ATen.The flag", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html", "category": "pytorch docs"}
{"text": "can be locally overridden by the \"check_invariants\" optional\n argument of the sparse tensor constructor functions.\n static is_enabled()\n Returns True if the sparse tensor invariants checking is\n enabled.\n Note:\n Use \"torch.sparse.check_sparse_tensor_invariants.enable()\" or\n \"torch.sparse.check_sparse_tensor_invariants.disable()\" to\n manage the state of the sparse tensor invariants checks.", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html", "category": "pytorch docs"}
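One invariant the checker enforces appears in the failing example above: for a CSR tensor, crow_indices[..., -1] must equal nnz (the number of stored values). A plain-Python sketch of that check (csr_invariants_ok is a hypothetical helper, not a torch API):

```python
# A few CSR invariants checked in plain Python: crow_indices starts at
# 0, ends at nnz, and is non-decreasing; col_indices has nnz entries.
def csr_invariants_ok(crow_indices, col_indices, values):
    nnz = len(values)
    return (len(col_indices) == nnz
            and crow_indices[0] == 0
            and crow_indices[-1] == nnz
            and all(a <= b for a, b in zip(crow_indices, crow_indices[1:])))

# The doc's failing example: nnz == 2 but crow_indices[-1] == 3.
print(csr_invariants_ok([0, 1, 3], [0, 1], [1, 2]))  # False
print(csr_invariants_ok([0, 1, 2], [0, 1], [1, 2]))  # True
```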
{"text": "torch.sintorch.sin(input, *, out=None) -> Tensor\n Returns a new tensor with the sine of the elements of \"input\".\n \\text{out}_{i} = \\sin(\\text{input}_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.5461, 0.1347, -2.7266, -0.2746])\n >>> torch.sin(a)\n tensor([-0.5194, 0.1343, -0.4032, -0.2711])", "source": "https://pytorch.org/docs/stable/generated/torch.sin.html", "category": "pytorch docs"}
{"text": "torch.autograd.graph.Node.register_prehookabstract Node.register_prehook(fn)\n Registers a backward pre-hook.\n The hook will be called every time a gradient with respect to the\n Node is computed. The hook should have the following signature:\n hook(grad_outputs: Tuple[Tensor]) -> Tuple[Tensor] or None\n The hook should not modify its argument, but it can optionally\n return a new gradient which will be used in place of\n \"grad_outputs\".\n This function returns a handle with a method \"handle.remove()\" that\n removes the hook from the module.\n Note:\n See Backward Hooks execution for more information on how when\n this hook is executed, and how its execution is ordered relative\n to other hooks.\n Example:\n >>> a = torch.tensor([0., 0., 0.], requires_grad=True)\n >>> b = a.clone()\n >>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)\n >>> handle = b.grad_fn.register_prehook(lambda gI: (gI[0] * 2,))", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_prehook.html", "category": "pytorch docs"}
{"text": ">>> b.sum().backward(retain_graph=True)\n >>> print(a.grad)\n tensor([2., 2., 2.])\n >>> handle.remove()\n >>> a.grad = None\n >>> b.sum().backward(retain_graph=True)\n >>> print(a.grad)\n tensor([1., 1., 1.])\n Return type:\n RemovableHandle", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_prehook.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.rnn.pack_padded_sequencetorch.nn.utils.rnn.pack_padded_sequence(input, lengths, batch_first=False, enforce_sorted=True)\n Packs a Tensor containing padded sequences of variable length.\n \"input\" can be of size \"T x B x *\" where T is the length of the\n longest sequence (equal to \"lengths[0]\"), \"B\" is the batch size,\n and \"*\" is any number of dimensions (including 0). If \"batch_first\"\n is \"True\", \"B x T x *\" \"input\" is expected.\n For unsorted sequences, use \"enforce_sorted = False\". If\n \"enforce_sorted\" is \"True\", the sequences should be sorted by\n length in a decreasing order, i.e. \"input[:,0]\" should be the\n longest sequence, and \"input[:,B-1]\" the shortest one.\n \"enforce_sorted = True\" is only necessary for ONNX export.\n Note:\n This function accepts any input that has at least two dimensions.\n You can apply it to pack the labels, and use the output of the\n RNN with them to compute the loss directly. A Tensor can be", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html", "category": "pytorch docs"}
{"text": "retrieved from a \"PackedSequence\" object by accessing its \".data\"\n attribute.\n Parameters:\n * input (Tensor) -- padded batch of variable length\n sequences.\n * lengths (Tensor or list(int)) -- list of\n sequence lengths of each batch element (must be on the CPU if\n provided as a tensor).\n * batch_first (bool, optional) -- if \"True\", the input\n is expected in \"B x T x *\" format.\n * enforce_sorted (bool, optional) -- if \"True\", the\n input is expected to contain sequences sorted by length in a\n decreasing order. If \"False\", the input will get sorted\n unconditionally. Default: \"True\".\n Returns:\n a \"PackedSequence\" object\n Return type:\n PackedSequence", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html", "category": "pytorch docs"}
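The packing scheme itself can be sketched in plain Python for already-sorted, batch-first input: walk the time steps and at step t keep only the sequences still longer than t. The flat data list and the per-step counts correspond, roughly, to a PackedSequence's ".data" and "batch_sizes" (this is a conceptual sketch, not the torch implementation):

```python
# Conceptual sketch of pack_padded_sequence for sorted, batch-first
# input: "data" is the flat packed stream, "batch_sizes[t]" counts how
# many sequences are still alive at time step t.
def pack_padded(batch, lengths):
    data, batch_sizes = [], []
    for t in range(max(lengths)):
        alive = sum(1 for n in lengths if n > t)
        batch_sizes.append(alive)
        for b in range(alive):
            data.append(batch[b][t])
    return data, batch_sizes

# "?" marks padding positions that packing skips entirely.
batch = [["a1", "a2", "a3"], ["b1", "b2", "?"], ["c1", "?", "?"]]
data, batch_sizes = pack_padded(batch, [3, 2, 1])
print(data)         # ['a1', 'b1', 'c1', 'a2', 'b2', 'a3']
print(batch_sizes)  # [3, 2, 1]
```

Note how no padding token ever enters the packed stream, which is what lets an RNN skip wasted computation on padded positions.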
{"text": "torch.nn.utils.clip_grad_value_torch.nn.utils.clip_grad_value_(parameters, clip_value, foreach=None)\n Clips gradient of an iterable of parameters at specified value.\n Gradients are modified in-place.\n Parameters:\n * parameters (Iterable[Tensor] or Tensor) -- an\n iterable of Tensors or a single Tensor that will have\n gradients normalized\n * clip_value (float) -- maximum allowed value of the\n gradients. The gradients are clipped in the range\n \\left[\\text{-clip_value}, \\text{clip_value}\\right]\n * foreach (bool) -- use the faster foreach-based\n implementation If \"None\", use the foreach implementation for\n CUDA and CPU tensors and silently fall back to the slow\n implementation for other device types. Default: \"None\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_value_.html", "category": "pytorch docs"}
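The elementwise semantics are a clamp: every gradient entry is forced into [-clip_value, clip_value], in place. A plain-Python sketch over a list standing in for one parameter's gradient:

```python
# In-place value clipping as described above: each entry is clamped
# into [-clip_value, clip_value].
def clip_grad_value_(grad, clip_value):
    for i, g in enumerate(grad):
        grad[i] = max(-clip_value, min(clip_value, g))

grad = [0.5, -2.0, 3.7, -0.1]
clip_grad_value_(grad, 1.0)
print(grad)  # [0.5, -1.0, 1.0, -0.1]
```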
{"text": "torch.Tensor.igammac_Tensor.igammac_(other) -> Tensor\n In-place version of \"igammac()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.igammac_.html", "category": "pytorch docs"}
{"text": "torch.autograd.functional.hessiantorch.autograd.functional.hessian(func, inputs, create_graph=False, strict=False, vectorize=False, outer_jacobian_strategy='reverse-mode')\n Function that computes the Hessian of a given scalar function.\n Parameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor with a single element.\n * inputs (tuple of Tensors or Tensor) -- inputs to the\n function \"func\".\n * create_graph (bool, optional) -- If \"True\", the\n Hessian will be computed in a differentiable manner. Note that\n when \"strict\" is \"False\", the result can not require gradients\n or be disconnected from the inputs. Defaults to \"False\".\n * strict (bool, optional) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"}
{"text": "Tensor of zeros as the hessian for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n * vectorize (bool, optional) -- This feature is\n experimental. Please consider using \"torch.func.hessian()\"\n instead if you are looking for something less experimental and\n more performant. When computing the hessian, usually we invoke\n \"autograd.grad\" once per row of the hessian. If this flag is\n \"True\", we use the vmap prototype feature as the backend to\n vectorize calls to \"autograd.grad\" so we only invoke it once\n instead of once per row. This should lead to performance\n improvements in many use cases, however, due to this feature\n being incomplete, there may be performance cliffs. Please use\n torch._C._debug_only_display_vmap_fallback_warnings(True) to\n show any performance warnings and file us issues if warnings\n exist for your use case. Defaults to \"False\".", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"}
{"text": "\nouter_jacobian_strategy (str, optional) -- The\n Hessian is computed by computing the Jacobian of a Jacobian.\n The inner Jacobian is always computed in reverse-mode AD.\n Setting strategy to \"\"forward-mode\"\" or \"\"reverse-mode\"\"\n determines whether the outer Jacobian will be computed with\n forward or reverse mode AD. Currently, computing the outer\n Jacobian in \"\"forward-mode\"\" requires \"vectorized=True\".\n Defaults to \"\"reverse-mode\"\".\n Returns:\n if there is a single input, this will be a single Tensor\n containing the Hessian for the input. If it is a tuple, then the\n Hessian will be a tuple of tuples where \"Hessian[i][j]\" will\n contain the Hessian of the \"i\"th input and \"j\"th input with size\n the sum of the size of the \"i\"th input plus the size of the\n \"j\"th input. \"Hessian[i][j]\" will have the same dtype and device\n as the corresponding \"i\"th input.\n Return type:\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"}
{"text": "Return type:\n Hessian (Tensor or a tuple of tuple of Tensors)\n -[ Example ]-\n\n\n\ndef pow_reducer(x):\n ... return x.pow(3).sum()\ninputs = torch.rand(2, 2)\nhessian(pow_reducer, inputs)\n tensor([[[[5.2265, 0.0000],\n [0.0000, 0.0000]],\n [[0.0000, 4.8221],\n [0.0000, 0.0000]]],\n [[[0.0000, 0.0000],\n [1.9456, 0.0000]],\n [[0.0000, 0.0000],\n [0.0000, 3.2550]]]])\nhessian(pow_reducer, inputs, create_graph=True)\n tensor([[[[5.2265, 0.0000],\n [0.0000, 0.0000]],\n [[0.0000, 4.8221],\n [0.0000, 0.0000]]],\n [[[0.0000, 0.0000],\n [1.9456, 0.0000]],\n [[0.0000, 0.0000],\n [0.0000, 3.2550]]]], grad_fn=)\ndef pow_adder_reducer(x, y):\n ... return (2 * x.pow(2) + 3 * y.pow(2)).sum()\ninputs = (torch.rand(2), torch.rand(2))\nhessian(pow_adder_reducer, inputs)\n ((tensor([[4., 0.],\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"}
{"text": "((tensor([[4., 0.],\n [0., 4.]]),\n tensor([[0., 0.],\n [0., 0.]])),\n (tensor([[0., 0.],\n [0., 0.]]),\n tensor([[6., 0.],\n [0., 6.]])))", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"}
{"text": "torch.hstacktorch.hstack(tensors, , out=None) -> Tensor\n Stack tensors in sequence horizontally (column wise).\n This is equivalent to concatenation along the first axis for 1-D\n tensors, and along the second axis for all other tensors.\n Parameters:\n tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.tensor([1, 2, 3])\n >>> b = torch.tensor([4, 5, 6])\n >>> torch.hstack((a,b))\n tensor([1, 2, 3, 4, 5, 6])\n >>> a = torch.tensor([[1],[2],[3]])\n >>> b = torch.tensor([[4],[5],[6]])\n >>> torch.hstack((a,b))\n tensor([[1, 4],\n [2, 5],\n [3, 6]])", "source": "https://pytorch.org/docs/stable/generated/torch.hstack.html", "category": "pytorch docs"}
{"text": "torch.vmaptorch.vmap(func, in_dims=0, out_dims=0, randomness='error', , chunk_size=None)\n vmap is the vectorizing map; \"vmap(func)\" returns a new function\n that maps \"func\" over some dimension of the inputs. Semantically,\n vmap pushes the map into PyTorch operations called by \"func\",\n effectively vectorizing those operations.\n vmap is useful for handling batch dimensions: one can write a\n function \"func\" that runs on examples and then lift it to a\n function that can take batches of examples with \"vmap(func)\". vmap\n can also be used to compute batched gradients when composed with\n autograd.\n Note:\n \"torch.vmap()\" is aliased to \"torch.func.vmap()\" for convenience.\n Use whichever one you'd like.\n Parameters:\n * func (function) -- A Python function that takes one or\n more arguments. Must return one or more Tensors.\n * in_dims (int or nested structure*) -- Specifies which\n dimension of the inputs should be mapped over. \"in_dims\"", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"}
{"text": "should have a structure like the inputs. If the \"in_dim\" for a\n particular input is None, then that indicates there is no map\n dimension. Default: 0.\n * out_dims (int or Tuple[int]) -- Specifies\n where the mapped dimension should appear in the outputs. If\n \"out_dims\" is a Tuple, then it should have one element per\n output. Default: 0.\n * randomness (str) -- Specifies whether the randomness in\n this vmap should be the same or different across batches. If\n 'different', the randomness for each batch will be different.\n If 'same', the randomness will be the same across batches. If\n 'error', any calls to random functions will error. Default:\n 'error'. WARNING: this flag only applies to random PyTorch\n operations and does not apply to Python's random module or\n numpy randomness.\n * chunk_size (None or int) -- If None (default), apply", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"}
{"text": "a single vmap over inputs. If not None, then compute the vmap\n \"chunk_size\" samples at a time. Note that \"chunk_size=1\" is\n equivalent to computing the vmap with a for-loop. If you run\n into memory issues computing the vmap, please try a non-None\n chunk_size.\n Returns:\n Returns a new \"batched\" function. It takes the same inputs as\n \"func\", except each input has an extra dimension at the index\n specified by \"in_dims\". It takes returns the same outputs as\n \"func\", except each output has an extra dimension at the index\n specified by \"out_dims\".\n Return type:\n Callable\n One example of using \"vmap()\" is to compute batched dot products.\n PyTorch doesn't provide a batched \"torch.dot\" API; instead of\n unsuccessfully rummaging through docs, use \"vmap()\" to construct a\n new function.\n\n\n\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.func.vmap(torch.dot) # [N, D], [N, D] -> [N]\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"}
{"text": "\n\n\nx, y = torch.randn(2, 5), torch.randn(2, 5)\nbatched_dot(x, y)\n \"vmap()\" can be helpful in hiding batch dimensions, leading to a\n simpler model authoring experience.\nbatch_size, feature_size = 3, 5\nweights = torch.randn(feature_size, requires_grad=True)\ndef model(feature_vec):\n # Very simple linear model with activation\n return feature_vec.dot(weights).relu()\nexamples = torch.randn(batch_size, feature_size)\nresult = torch.vmap(model)(examples)\n \"vmap()\" can also help vectorize computations that were previously\n difficult or impossible to batch. One example is higher-order\n gradient computation. The PyTorch autograd engine computes vjps\n (vector-Jacobian products). Computing a full Jacobian matrix for\n some function f: R^N -> R^N usually requires N calls to\n \"autograd.grad\", one per Jacobian row. Using \"vmap()\", we can\n vectorize the whole computation, computing the Jacobian in a single\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"}
{"text": "call to \"autograd.grad\".\n\n\n\nSetup\nN = 5\nf = lambda x: x ** 2\nx = torch.randn(N, requires_grad=True)\ny = f(x)\nI_N = torch.eye(N)\nSequential approach\njacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]\n for v in I_N.unbind()]\njacobian = torch.stack(jacobian_rows)\nvectorized gradient computation\ndef get_vjp(v):\n return torch.autograd.grad(y, x, v)\njacobian = torch.vmap(get_vjp)(I_N)\n \"vmap()\" can also be nested, producing an output with multiple\n batched dimensions\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.vmap(torch.vmap(torch.dot)) # [N1, N0, D], [N1, N0, D] -> [N1, N0]\nx, y = torch.randn(2, 3, 5), torch.randn(2, 3, 5)\nbatched_dot(x, y) # tensor of size [2, 3]\n If the inputs are not batched along the first dimension, \"in_dims\"\n specifies the dimension that each inputs are batched along as\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.dot # [N], [N] -> []\nbatched_dot = torch.vmap(torch.dot, in_dims=1) # [N, D], [N, D] -> [D]\nx, y = torch.randn(2, 5), torch.randn(2, 5)\nbatched_dot(x, y) # output is [5] instead of [2] if batched along the 0th dimension\n If there are multiple inputs each of which is batched along\n different dimensions, \"in_dims\" must be a tuple with the batch\n dimension for each input as\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.vmap(torch.dot, in_dims=(0, None)) # [N, D], [D] -> [N]\nx, y = torch.randn(2, 5), torch.randn(5)\nbatched_dot(x, y) # second arg doesn't have a batch dim because in_dim[1] was None\n If the input is a Python struct, \"in_dims\" must be a tuple\n containing a struct matching the shape of the input:\nf = lambda dict: torch.dot(dict['x'], dict['y'])\nx, y = torch.randn(2, 5), torch.randn(5)\ninput = {'x': x, 'y': y}\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"}
{"text": "\n\n\ninput = {'x': x, 'y': y}\nbatched_dot = torch.vmap(f, in_dims=({'x': 0, 'y': None},))\nbatched_dot(input)\n By default, the output is batched along the first dimension.\n However, it can be batched along any dimension by using \"out_dims\"\nf = lambda x: x ** 2\nx = torch.randn(2, 5)\nbatched_pow = torch.vmap(f, out_dims=1)\nbatched_pow(x) # [5, 2]\n For any function that uses kwargs, the returned function will not\n batch the kwargs but will accept kwargs\nx = torch.randn([2, 5])\ndef fn(x, scale=4.):\n return x * scale\nbatched_pow = torch.vmap(fn)\nassert torch.allclose(batched_pow(x), x * 4)\nbatched_pow(x, scale=x) # scale is not batched, output has shape [2, 2, 5]\n Note:\n vmap does not provide general autobatching or handle variable-\n length sequences out of the box.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"}
{"text": "torch.cuda.default_streamtorch.cuda.default_stream(device=None)\n Returns the default \"Stream\" for a given device.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns the default \"Stream\" for the current device,\n given by \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n Stream", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.default_stream.html", "category": "pytorch docs"}
{"text": "torch.Tensor.numpyTensor.numpy(, force=False) -> numpy.ndarray\n Returns the tensor as a NumPy \"ndarray\".\n If \"force\" is \"False\" (the default), the conversion is performed\n only if the tensor is on the CPU, does not require grad, does not\n have its conjugate bit set, and is a dtype and layout that NumPy\n supports. The returned ndarray and the tensor will share their\n storage, so changes to the tensor will be reflected in the ndarray\n and vice versa.\n If \"force\" is \"True\" this is equivalent to calling\n \"t.detach().cpu().resolve_conj().resolve_neg().numpy()\". If the\n tensor isn't on the CPU or the conjugate or negative bit is set,\n the tensor won't share its storage with the returned ndarray.\n Setting \"force\" to \"True\" can be a useful shorthand.\n Parameters:\n force (bool*) -- if \"True\", the ndarray may be a copy of\n the tensor instead of always sharing memory, defaults to\n \"False\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.numpy.html", "category": "pytorch docs"}
{"text": "torch.expm1torch.expm1(input, *, out=None) -> Tensor\n Alias for \"torch.special.expm1()\".", "source": "https://pytorch.org/docs/stable/generated/torch.expm1.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.pdisttorch.nn.functional.pdist(input, p=2) -> Tensor\n Computes the p-norm distance between every pair of row vectors in\n the input. This is identical to the upper triangular portion,\n excluding the diagonal, of torch.norm(input[:, None] - input,\n dim=2, p=p). This function will be faster if the rows are\n contiguous.\n If input has shape N \\times M then the output will have shape\n \\frac{1}{2} N (N - 1).\n This function is equivalent to \"scipy.spatial.distance.pdist(input,\n 'minkowski', p=p)\" if p \\in (0, \\infty). When p = 0 it is\n equivalent to \"scipy.spatial.distance.pdist(input, 'hamming') * M\".\n When p = \\infty, the closest scipy function is\n \"scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x -\n y).max())\".\n Parameters:\n * input -- input tensor of shape N \\times M.\n * p -- p value for the p-norm distance to calculate between\n each vector pair \\in [0, \\infty].", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pdist.html", "category": "pytorch docs"}
{"text": "LogSigmoidclass torch.nn.LogSigmoid\n Applies the element-wise function:\n \\text{LogSigmoid}(x) = \\log\\left(\\frac{ 1 }{ 1 +\n \\exp(-x)}\\right)\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.LogSigmoid()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LogSigmoid.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fracTensor.frac() -> Tensor\n See \"torch.frac()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.frac.html", "category": "pytorch docs"}
{"text": "SELUclass torch.nn.SELU(inplace=False)\n Applied element-wise, as:\n \\text{SELU}(x) = \\text{scale} * (\\max(0,x) + \\min(0, \\alpha *\n (\\exp(x) - 1)))\n with \\alpha = 1.6732632423543772848170429916717 and \\text{scale} =\n 1.0507009873554804934193349852946.\n Warning:\n When using \"kaiming_normal\" or \"kaiming_normal_\" for\n initialisation, \"nonlinearity='linear'\" should be used instead of\n \"nonlinearity='selu'\" in order to get Self-Normalizing Neural\n Networks. See \"torch.nn.init.calculate_gain()\" for more\n information.\n More details can be found in the paper Self-Normalizing Neural\n Networks .\n Parameters:\n inplace (bool, optional) -- can optionally do the\n operation in-place. Default: \"False\"\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.SELU()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SELU.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.nll_losstorch.nn.functional.nll_loss(input, target, weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean')\n The negative log likelihood loss.\n See \"NLLLoss\" for details.\n Parameters:\n * input (Tensor) -- (N, C) where C = number of classes\n or (N, C, H, W) in case of 2D Loss, or (N, C, d_1, d_2, ...,\n d_K) where K \\geq 1 in the case of K-dimensional loss. input\n is expected to be log-probabilities.\n * target (Tensor) -- (N) where each value is 0 \\leq\n \\text{targets}[i] \\leq C-1, or (N, d_1, d_2, ..., d_K) where K\n \\geq 1 for K-dimensional loss.\n * weight (Tensor, optional) -- a manual rescaling\n weight given to each class. If given, has to be a Tensor of\n size C\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html", "category": "pytorch docs"}
{"text": "loss element in the batch. Note that for some losses, there\n multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * ignore_index (int, optional) -- Specifies a target\n value that is ignored and does not contribute to the input\n gradient. When \"size_average\" is \"True\", the loss is averaged\n over non-ignored targets. Default: -100\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html", "category": "pytorch docs"}
{"text": "\"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Return type:\n Tensor\n Example:\n >>> # input is of size N x C = 3 x 5\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> # each element in target has to have 0 <= value < C\n >>> target = torch.tensor([1, 0, 4])\n >>> output = F.nll_loss(F.log_softmax(input, dim=1), target)\n >>> output.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html", "category": "pytorch docs"}
{"text": "torch.compiletorch.compile(model=None, , fullgraph=False, dynamic=False, backend='inductor', mode=None, passes=None, kwargs)\n Optimizes given model/function using Dynamo and specified backend\n Parameters:\n * model (Callable) -- Module/function to optimize\n * fullgraph (bool) -- Whether it is ok to break model into\n several subgraphs\n * dynamic (bool) -- Use dynamic shape tracing\n * backend (str or Callable) -- backend to be used\n * mode (str) -- Can be either \"default\", \"reduce-overhead\"\n or \"max-autotune\"\n * passes (dict) -- A dictionary of passes to the backend.\n Passes currently recognized by inductor backend: - static-\n memory - matmul-tune - matmul-padding - triton-autotune -\n triton-bmm - triton-mm - triton-convolution - rematerialize-\n threshold - rematerialize-acc-threshold\n Return type:\n Callable*\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.compile.html", "category": "pytorch docs"}
{"text": "Return type:\n Callable\n Example:\n @torch.compile(passes={\"matmul-padding\": True}, fullgraph=True)\n def foo(x):\n return torch.sin(x) + torch.cos(x)", "source": "https://pytorch.org/docs/stable/generated/torch.compile.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.local_response_normtorch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1.0)\n Applies local response normalization over an input signal composed\n of several input planes, where channels occupy the second\n dimension. Applies normalization across channels.\n See \"LocalResponseNorm\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.local_response_norm.html", "category": "pytorch docs"}
{"text": "torch.Tensor.kthvalueTensor.kthvalue(k, dim=None, keepdim=False)\n See \"torch.kthvalue()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.kthvalue.html", "category": "pytorch docs"}
{"text": "ModuleListclass torch.nn.ModuleList(modules=None)\n Holds submodules in a list.\n \"ModuleList\" can be indexed like a regular Python list, but modules\n it contains are properly registered, and will be visible by all\n \"Module\" methods.\n Parameters:\n modules (iterable, optional) -- an iterable of modules\n to add\n Example:\n class MyModule(nn.Module):\n def init(self):\n super(MyModule, self).init()\n self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)])\n def forward(self, x):\n # ModuleList can act as an iterable, or be indexed using ints\n for i, l in enumerate(self.linears):\n x = self.linearsi // 2 + l(x)\n return x\n append(module)\n Appends a given module to the end of the list.\n Parameters:\n module (nn.Module) -- module to append\n Return type:\n ModuleList\n extend(modules)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html", "category": "pytorch docs"}
{"text": "ModuleList\n extend(modules)\n Appends modules from a Python iterable to the end of the list.\n Parameters:\n modules (iterable) -- iterable of modules to append\n Return type:\n ModuleList\n insert(index, module)\n Insert a given module before a given index in the list.\n Parameters:\n * index (int) -- index to insert.\n * module (nn.Module) -- module to insert", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.adaptive_max_pool1dtorch.nn.functional.adaptive_max_pool1d(args, kwargs)\n Applies a 1D adaptive max pooling over an input signal composed of\n several input planes.\n See \"AdaptiveMaxPool1d\" for details and output shape.\n Parameters:\n * output_size -- the target output size (single integer)\n * return_indices* -- whether to return pooling indices.\n Default: \"False\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_max_pool1d.html", "category": "pytorch docs"}
{"text": "FakeQuantizeclass torch.quantization.fake_quantize.FakeQuantize(observer=, quant_min=None, quant_max=None, observer_kwargs)\n Simulate the quantize and dequantize operations in training time.\n The output of this module is given by:\n x_out = (\n clamp(round(x/scale + zero_point), quant_min, quant_max) - zero_point\n ) * scale\n * \"scale\" defines the scale factor used for quantization.\n * \"zero_point\" specifies the quantized value to which 0 in floating\n point maps to\n * \"fake_quant_enabled\" controls the application of fake\n quantization on tensors, note that statistics can still be\n updated.\n * \"observer_enabled\" controls statistics collection on tensors\n * \"dtype\" specifies the quantized dtype that is being emulated with\n fake-quantization,\n allowable values are torch.qint8 and torch.quint8.\n Parameters:\n * observer (module) -- Module for observing statistics on", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantize.html", "category": "pytorch docs"}
{"text": "input tensors and calculating scale and zero-point.\n * observer_kwargs (optional) -- Arguments for the observer\n module\n Variables:\n activation_post_process (Module) -- User provided module\n that collects statistics on the input tensor and provides a\n method to calculate scale and zero-point.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantize.html", "category": "pytorch docs"}
{"text": "torch.adjointtorch.adjoint(Tensor) -> Tensor\n Returns a view of the tensor conjugated and with the last two\n dimensions transposed.\n \"x.adjoint()\" is equivalent to \"x.transpose(-2, -1).conj()\" for\n complex tensors and to \"x.transpose(-2, -1)\" for real tensors.\n Example::\n >>> x = torch.arange(4, dtype=torch.float)\n >>> A = torch.complex(x, x).reshape(2, 2)\n >>> A\n tensor([[0.+0.j, 1.+1.j],\n [2.+2.j, 3.+3.j]])\n >>> A.adjoint()\n tensor([[0.-0.j, 2.-2.j],\n [1.-1.j, 3.-3.j]])\n >>> (A.adjoint() == A.mH).all()\n tensor(True)", "source": "https://pytorch.org/docs/stable/generated/torch.adjoint.html", "category": "pytorch docs"}
{"text": "Softminclass torch.nn.Softmin(dim=None)\n Applies the Softmin function to an n-dimensional input Tensor\n rescaling them so that the elements of the n-dimensional output\n Tensor lie in the range [0, 1] and sum to 1.\n Softmin is defined as:\n \\text{Softmin}(x_{i}) = \\frac{\\exp(-x_i)}{\\sum_j \\exp(-x_j)}\n Shape:\n * Input: () where *** means, any number of additional\n dimensions\n * Output: (), same shape as the input\n Parameters:\n dim (int) -- A dimension along which Softmin will be\n computed (so every slice along dim will sum to 1).\n Returns:\n a Tensor of the same dimension and shape as the input, with\n values in the range [0, 1]\n Return type:\n None\n Examples:\n >>> m = nn.Softmin(dim=1)\n >>> input = torch.randn(2, 3)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softmin.html", "category": "pytorch docs"}
{"text": "torch.Tensor.masked_scatterTensor.masked_scatter(mask, tensor) -> Tensor\n Out-of-place version of \"torch.Tensor.masked_scatter_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_scatter.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.parameters_to_vectortorch.nn.utils.parameters_to_vector(parameters)\n Convert parameters to one vector\n Parameters:\n parameters (Iterable[Tensor]) -- an iterator of\n Tensors that are the parameters of a model.\n Returns:\n The parameters represented by a single vector\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parameters_to_vector.html", "category": "pytorch docs"}
{"text": "default_debug_qconfigtorch.quantization.qconfig.default_debug_qconfig\n alias of QConfig(activation=,\n weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_debug_qconfig.html", "category": "pytorch docs"}
{"text": "UninitializedParameterclass torch.nn.parameter.UninitializedParameter(requires_grad=True, device=None, dtype=None)\n A parameter that is not initialized.\n Uninitialized Parameters are a a special case of\n \"torch.nn.Parameter\" where the shape of the data is still unknown.\n Unlike a \"torch.nn.Parameter\", uninitialized parameters hold no\n data and attempting to access some properties, like their shape,\n will throw a runtime error. The only operations that can be\n performed on a uninitialized parameter are changing its datatype,\n moving it to a different device and converting it to a regular\n \"torch.nn.Parameter\".\n The default device or dtype to use when the parameter is\n materialized can be set during construction using e.g.\n \"device='cuda'\".\n cls_to_become\n alias of \"Parameter\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedParameter.html", "category": "pytorch docs"}
{"text": "torch.linalg.tensorinvtorch.linalg.tensorinv(A, ind=2, , out=None) -> Tensor\n Computes the multiplicative inverse of \"torch.tensordot()\".\n If m is the product of the first \"ind\" dimensions of \"A\" and n\n is the product of the rest of the dimensions, this function expects\n m and n to be equal. If this is the case, it computes a tensor\n X such that tensordot(\"A\", X, \"ind\") is the identity matrix\n in dimension m. X will have the shape of \"A\" but with the first\n \"ind\" dimensions pushed back to the end\n X.shape == A.shape[ind:] + A.shape[:ind]\n Supports input of float, double, cfloat and cdouble dtypes.\n Note:\n When \"A\" is a 2-dimensional tensor and \"ind\"= 1*, this\n function computes the (multiplicative) inverse of \"A\" (see\n \"torch.linalg.inv()\").\n Note:\n Consider using \"torch.linalg.tensorsolve()\" if possible for\n multiplying a tensor on the left by the tensor inverse, as:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html", "category": "pytorch docs"}
{"text": "linalg.tensorsolve(A, B) == torch.tensordot(linalg.tensorinv(A), B) # When B is a tensor with shape A.shape[:B.ndim]\n It is always preferred to use \"tensorsolve()\" when possible, as\n it is faster and more numerically stable than computing the\n pseudoinverse explicitly.\n See also:\n \"torch.linalg.tensorsolve()\" computes\n torch.tensordot(tensorinv(\"A\"), \"B\").\n Parameters:\n * A (Tensor) -- tensor to invert. Its shape must satisfy\n prod(\"A\".shape[:\"ind\"]) == prod(\"A\".shape[\"ind\":]).\n * ind (int) -- index at which to compute the inverse of\n \"torch.tensordot()\". Default: 2.\n Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Raises:\n RuntimeError -- if the reshaped \"A\" is not invertible or the\n product of the first \"ind\" dimensions is not equal to the\n product of the rest.\n Examples:\n >>> A = torch.eye(4 * 6).reshape((4, 6, 8, 3))", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html", "category": "pytorch docs"}
{"text": "\n\n\nAinv = torch.linalg.tensorinv(A, ind=2)\n >>> Ainv.shape\n torch.Size([8, 3, 4, 6])\n >>> B = torch.randn(4, 6)\n >>> torch.allclose(torch.tensordot(Ainv, B), torch.linalg.tensorsolve(A, B))\n True\n >>> A = torch.randn(4, 4)\n >>> Atensorinv = torch.linalg.tensorinv(A, ind=1)\n >>> Ainv = torch.linalg.inverse(A)\n >>> torch.allclose(Atensorinv, Ainv)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html", "category": "pytorch docs"}
{"text": "torch.Tensor.apply_Tensor.apply_(callable) -> Tensor\n Applies the function \"callable\" to each element in the tensor,\n replacing each element with the value returned by \"callable\".\n Note:\n This function only works with CPU tensors and should not be used\n in code sections that require high performance.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.apply_.html", "category": "pytorch docs"}
{"text": "torch.softmaxtorch.softmax(input, dim, *, dtype=None) -> Tensor\n Alias for \"torch.nn.functional.softmax()\".", "source": "https://pytorch.org/docs/stable/generated/torch.softmax.html", "category": "pytorch docs"}
{"text": "torch.randinttorch.randint(low=0, high, size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Returns a tensor filled with random integers generated uniformly\n between \"low\" (inclusive) and \"high\" (exclusive).\n The shape of the tensor is defined by the variable argument \"size\".\n Note:\n With the global dtype default (\"torch.float32\"), this function\n returns a tensor with dtype \"torch.int64\".\n Parameters:\n * low (int, optional) -- Lowest integer to be drawn\n from the distribution. Default: 0.\n * high (int) -- One above the highest integer to be drawn\n from the distribution.\n * size (tuple) -- a tuple defining the shape of the output\n tensor.\n Keyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.randint.html", "category": "pytorch docs"}
{"text": "\ndtype (torch.dtype, optional) -- if \"None\", this\n function returns a tensor with dtype \"torch.int64\".\nlayout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\ndevice (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\nrequires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Example:\n\n\ntorch.randint(3, 5, (3,))\n tensor([4, 3, 4])\ntorch.randint(10, (2, 2))\n tensor([[0, 2],\n [5, 5]])\ntorch.randint(3, 10, (2, 2))\n tensor([[4, 5],\n [6, 7]])\n\n\n\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.randint.html", "category": "pytorch docs"}
{"text": "torch.Tensor.hardshrinkTensor.hardshrink(lambd=0.5) -> Tensor\n See \"torch.nn.functional.hardshrink()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.hardshrink.html", "category": "pytorch docs"}
{"text": "get_default_qconfig_mappingclass torch.ao.quantization.qconfig_mapping.get_default_qconfig_mapping(backend='x86', version=0)\n Return the default QConfigMapping for post training quantization.\n Parameters:\n * backend () -- the quantization backend for the default\n qconfig mapping, should be one of [\"x86\" (default), \"fbgemm\",\n \"qnnpack\", \"onednn\"]\n * *version (*) -- the version for the default qconfig\n mapping\n Return type:\n QConfigMapping", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.get_default_qconfig_mapping.html", "category": "pytorch docs"}
{"text": "QuantStubclass torch.quantization.QuantStub(qconfig=None)\n Quantize stub module, before calibration, this is same as an\n observer, it will be swapped as nnq.Quantize in convert.\n Parameters:\n qconfig -- quantization configuration for the tensor, if\n qconfig is not provided, we will get qconfig from parent modules", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.QuantStub.html", "category": "pytorch docs"}
{"text": "torch.anytorch.any(input) -> Tensor\n Tests if any element in \"input\" evaluates to True.\n Note:\n This function matches the behaviour of NumPy in returning output\n of dtype bool for all supported dtypes except uint8. For\n uint8 the dtype of output is uint8 itself.\n Example:\n >>> a = torch.rand(1, 2).bool()\n >>> a\n tensor([[False, True]], dtype=torch.bool)\n >>> torch.any(a)\n tensor(True, dtype=torch.bool)\n >>> a = torch.arange(0, 3)\n >>> a\n tensor([0, 1, 2])\n >>> torch.any(a)\n tensor(True)\n torch.any(input, dim, keepdim=False, , out=None) -> Tensor\n For each row of \"input\" in the given dimension \"dim\", returns\n True if any element in the row evaluate to True and False*\n otherwise.\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in", "source": "https://pytorch.org/docs/stable/generated/torch.any.html", "category": "pytorch docs"}
{"text": "the output tensor having 1 fewer dimension than \"input\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4, 2) < 0\n >>> a\n tensor([[ True, True],\n [False, True],\n [ True, True],\n [False, False]])\n >>> torch.any(a, 1)\n tensor([ True, True, True, False])\n >>> torch.any(a, 0)\n tensor([True, True])", "source": "https://pytorch.org/docs/stable/generated/torch.any.html", "category": "pytorch docs"}
{"text": "torch.Tensor.chunkTensor.chunk(chunks, dim=0) -> List of Tensors\n See \"torch.chunk()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.chunk.html", "category": "pytorch docs"}
{"text": "torch.Tensor.erfinvTensor.erfinv() -> Tensor\n See \"torch.erfinv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erfinv.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sparse_resize_Tensor.sparse_resize_(size, sparse_dim, dense_dim) -> Tensor\n Resizes \"self\" sparse tensor to the desired size and the number of\n sparse and dense dimensions.\n Note:\n If the number of specified elements in \"self\" is zero, then\n \"size\", \"sparse_dim\", and \"dense_dim\" can be any size and\n positive integers such that \"len(size) == sparse_dim +\n dense_dim\".If \"self\" specifies one or more elements, however,\n then each dimension in \"size\" must not be smaller than the\n corresponding dimension of \"self\", \"sparse_dim\" must equal the\n number of sparse dimensions in \"self\", and \"dense_dim\" must equal\n the number of dense dimensions in \"self\".\n Warning:\n Throws an error if \"self\" is not a sparse tensor.\n Parameters:\n * size (torch.Size) -- the desired size. If \"self\" is non-\n empty sparse tensor, the desired size cannot be smaller than\n the original size.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_.html", "category": "pytorch docs"}
{"text": "the original size.\n * sparse_dim (int) -- the number of sparse dimensions\n * dense_dim (int) -- the number of dense dimensions", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_.html", "category": "pytorch docs"}
{"text": "torch.linalg.multi_dottorch.linalg.multi_dot(tensors, , out=None)\n Efficiently multiplies two or more matrices by reordering the\n multiplications so that the fewest arithmetic operations are\n performed.\n Supports inputs of float, double, cfloat and cdouble dtypes. This\n function does not support batched inputs.\n Every tensor in \"tensors\" must be 2D, except for the first and last\n which may be 1D. If the first tensor is a 1D vector of shape (n,)\n it is treated as a row vector of shape (1, n), similarly if the\n last tensor is a 1D vector of shape (n,) it is treated as a\n column vector of shape (n, 1).\n If the first and last tensors are matrices, the output will be a\n matrix. However, if either is a 1D vector, then the output will be\n a 1D vector.\n Differences with numpy.linalg.multi_dot:\n * Unlike numpy.linalg.multi_dot*, the first and last tensors must\n either be 1D or 2D whereas NumPy allows them to be nD\n Warning:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html", "category": "pytorch docs"}
{"text": "Warning:\n This function does not broadcast.\n Note:\n This function is implemented by chaining \"torch.mm()\" calls after\n computing the optimal matrix multiplication order.\n Note:\n The cost of multiplying two matrices with shapes (a, b) and\n (b, c) is a * b * c. Given matrices A, B, C with shapes\n (10, 100), (100, 5), (5, 50) respectively, we can calculate\n the cost of different multiplication orders as follows:\n \\begin{align} \\operatorname{cost}((AB)C) &= 10 \\times 100\n \\times 5 + 10 \\times 5 \\times 50 = 7500 \\\n \\operatorname{cost}(A(BC)) &= 10 \\times 100 \\times 50 + 100\n \\times 5 \\times 50 = 75000 \\end{align}\n In this case, multiplying A and B first followed by C is 10\n times faster.\n Parameters:\n tensors (Sequence[Tensor]) -- two or more tensors to\n multiply. The first and last tensors may be 1D or 2D. Every\n other tensor must be 2D.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Examples:\n >>> from torch.linalg import multi_dot\n >>> multi_dot([torch.tensor([1, 2]), torch.tensor([2, 3])])\n tensor(8)\n >>> multi_dot([torch.tensor([[1, 2]]), torch.tensor([2, 3])])\n tensor([8])\n >>> multi_dot([torch.tensor([[1, 2]]), torch.tensor([[2], [3]])])\n tensor([[8]])\n >>> A = torch.arange(2 * 3).view(2, 3)\n >>> B = torch.arange(3 * 2).view(3, 2)\n >>> C = torch.arange(2 * 2).view(2, 2)\n >>> multi_dot((A, B, C))\n tensor([[ 26, 49],\n [ 80, 148]])", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html", "category": "pytorch docs"}
{"text": "default_qconfigtorch.quantization.qconfig.default_qconfig\n alias of QConfig(activation=functools.partial(, quant_min=0,\n quant_max=127){}, weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qconfig.html", "category": "pytorch docs"}
{"text": "torch.fake_quantize_per_tensor_affinetorch.fake_quantize_per_tensor_affine(input, scale, zero_point, quant_min, quant_max) -> Tensor\n Returns a new tensor with the data in \"input\" fake quantized using\n \"scale\", \"zero_point\", \"quant_min\" and \"quant_max\".\n \\text{output} = min( \\text{quant_max}, max(\n \\text{quant_min}, \\text{std::nearby_int}(\\text{input}\n / \\text{scale}) + \\text{zero_point} ) )\n Parameters:\n * input (Tensor) -- the input value(s), \"torch.float32\"\n tensor\n * scale (double scalar or \"float32\" Tensor) -- quantization\n scale\n * zero_point (int64 scalar or \"int32\" Tensor) --\n quantization zero_point\n * quant_min (int64) -- lower bound of the quantized domain\n * quant_max (int64) -- upper bound of the quantized domain\n Returns:\n A newly fake_quantized \"torch.float32\" tensor\n Return type:\n Tensor\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_tensor_affine.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n Example:\n >>> x = torch.randn(4)\n >>> x\n tensor([ 0.0552, 0.9730, 0.3973, -1.0780])\n >>> torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)\n tensor([0.1000, 1.0000, 0.4000, 0.0000])\n >>> torch.fake_quantize_per_tensor_affine(x, torch.tensor(0.1), torch.tensor(0), 0, 255)\n tensor([0.6000, 0.4000, 0.0000, 0.0000])", "source": "https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_tensor_affine.html", "category": "pytorch docs"}
{"text": "torch.Tensor.rad2degTensor.rad2deg() -> Tensor\n See \"torch.rad2deg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.rad2deg.html", "category": "pytorch docs"}
{"text": "torch.Tensor.viewTensor.view(*shape) -> Tensor\n Returns a new tensor with the same data as the \"self\" tensor but of\n a different \"shape\".\n The returned tensor shares the same data and must have the same\n number of elements, but may have a different size. For a tensor to\n be viewed, the new view size must be compatible with its original\n size and stride, i.e., each new view dimension must either be a\n subspace of an original dimension, or only span across original\n dimensions d, d+1, \\dots, d+k that satisfy the following\n contiguity-like condition that \\forall i = d, \\dots, d+k-1,\n \\text{stride}[i] = \\text{stride}[i+1] \\times \\text{size}[i+1]\n Otherwise, it will not be possible to view \"self\" tensor as \"shape\"\n without copying it (e.g., via \"contiguous()\"). When it is unclear\n whether a \"view()\" can be performed, it is advisable to use\n \"reshape()\", which returns a view if the shapes are compatible, and", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"}
{"text": "copies (equivalent to calling \"contiguous()\") otherwise.\n Parameters:\n shape (torch.Size or int...) -- the desired size\n Example:\n >>> x = torch.randn(4, 4)\n >>> x.size()\n torch.Size([4, 4])\n >>> y = x.view(16)\n >>> y.size()\n torch.Size([16])\n >>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions\n >>> z.size()\n torch.Size([2, 8])\n >>> a = torch.randn(1, 2, 3, 4)\n >>> a.size()\n torch.Size([1, 2, 3, 4])\n >>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension\n >>> b.size()\n torch.Size([1, 3, 2, 4])\n >>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory\n >>> c.size()\n torch.Size([1, 3, 2, 4])\n >>> torch.equal(b, c)\n False\n view(dtype) -> Tensor\n Returns a new tensor with the same data as the \"self\" tensor but of\n a different \"dtype\".\n If the element size of \"dtype\" is different than that of", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"}
{"text": "\"self.dtype\", then the size of the last dimension of the output\n will be scaled proportionally. For instance, if \"dtype\" element\n size is twice that of \"self.dtype\", then each pair of elements in\n the last dimension of \"self\" will be combined, and the size of the\n last dimension of the output will be half that of \"self\". If\n \"dtype\" element size is half that of \"self.dtype\", then each\n element in the last dimension of \"self\" will be split in two, and\n the size of the last dimension of the output will be double that of\n \"self\". For this to be possible, the following conditions must be\n true:\n * \"self.dim()\" must be greater than 0.\n * \"self.stride(-1)\" must be 1.\n Additionally, if the element size of \"dtype\" is greater than that\n of \"self.dtype\", the following conditions must be true as well:\n * \"self.size(-1)\" must be divisible by the ratio between the\n element sizes of the dtypes.\n * \"self.storage_offset()\" must be divisible by the ratio between", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"}
{"text": "the element sizes of the dtypes.\n * The strides of all dimensions, except the last dimension, must\n be divisible by the ratio between the element sizes of the\n dtypes.\n If any of the above conditions are not met, an error is thrown.\n Warning:\n This overload is not supported by TorchScript, and using it in a\n Torchscript program will cause undefined behavior.\n Parameters:\n dtype (\"torch.dtype\") -- the desired dtype\n Example:\n >>> x = torch.randn(4, 4)\n >>> x\n tensor([[ 0.9482, -0.0310, 1.4999, -0.5316],\n [-0.1520, 0.7472, 0.5617, -0.8649],\n [-2.4724, -0.0334, -0.2976, -0.8499],\n [-0.2109, 1.9913, -0.9607, -0.6123]])\n >>> x.dtype\n torch.float32\n >>> y = x.view(torch.int32)\n >>> y\n tensor([[ 1064483442, -1124191867, 1069546515, -1089989247],\n [-1105482831, 1061112040, 1057999968, -1084397505],\n [-1071760287, -1123489973, -1097310419, -1084649136],", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"}
{"text": "[-1101533110, 1073668768, -1082790149, -1088634448]],\n dtype=torch.int32)\n >>> y[0, 0] = 1000000000\n >>> x\n tensor([[ 0.0047, -0.0310, 1.4999, -0.5316],\n [-0.1520, 0.7472, 0.5617, -0.8649],\n [-2.4724, -0.0334, -0.2976, -0.8499],\n [-0.2109, 1.9913, -0.9607, -0.6123]])\n >>> x.view(torch.cfloat)\n tensor([[ 0.0047-0.0310j, 1.4999-0.5316j],\n [-0.1520+0.7472j, 0.5617-0.8649j],\n [-2.4724-0.0334j, -0.2976-0.8499j],\n [-0.2109+1.9913j, -0.9607-0.6123j]])\n >>> x.view(torch.cfloat).size()\n torch.Size([4, 2])\n >>> x.view(torch.uint8)\n tensor([[ 0, 202, 154, 59, 182, 243, 253, 188, 185, 252, 191, 63, 240, 22,\n 8, 191],\n [227, 165, 27, 190, 128, 72, 63, 63, 146, 203, 15, 63, 22, 106,\n 93, 191],\n [205, 59, 30, 192, 112, 206, 8, 189, 7, 95, 152, 190, 12, 147,\n 89, 191],", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"}
{"text": "89, 191],\n [ 43, 246, 87, 190, 235, 226, 254, 63, 111, 240, 117, 191, 177, 191,\n 28, 191]], dtype=torch.uint8)\n >>> x.view(torch.uint8).size()\n torch.Size([4, 16])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"}
{"text": "MultiStepLRclass torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=- 1, verbose=False)\n Decays the learning rate of each parameter group by gamma once the\n number of epoch reaches one of the milestones. Notice that such\n decay can happen simultaneously with other changes to the learning\n rate from outside this scheduler. When last_epoch=-1, sets initial\n lr as lr.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * milestones (list) -- List of epoch indices. Must be\n increasing.\n * gamma (float) -- Multiplicative factor of learning rate\n decay. Default: 0.1.\n * last_epoch (int) -- The index of last epoch. Default:\n -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-\n\n\n\nAssuming optimizer uses lr = 0.05 for all groups\nlr = 0.05 if epoch < 30\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiStepLR.html", "category": "pytorch docs"}
{"text": "\n\n\nlr = 0.05 if epoch < 30\nlr = 0.005 if 30 <= epoch < 80\nlr = 0.0005 if epoch >= 80\nscheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the schedulers state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.dict which\n is not the optimizer.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiStepLR.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sqrt_Tensor.sqrt_() -> Tensor\n In-place version of \"sqrt()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sqrt_.html", "category": "pytorch docs"}
{"text": "torch.autograd.function.FunctionCtx.save_for_backwardFunctionCtx.save_for_backward(*tensors)\n Saves given tensors for a future call to \"backward()\".\n \"save_for_backward\" should be called at most once, only from inside\n the \"forward()\" method, and only with tensors.\n All tensors intended to be used in the backward pass should be\n saved with \"save_for_backward\" (as opposed to directly on \"ctx\") to\n prevent incorrect gradients and memory leaks, and enable the\n application of saved tensor hooks. See\n \"torch.autograd.graph.saved_tensors_hooks\".\n Note that if intermediary tensors, tensors that are neither inputs\n nor outputs of \"forward()\", are saved for backward, your custom\n Function may not support double backward. Custom Functions that do\n not support double backward should decorate their \"backward()\"\n method with \"@once_differentiable\" so that performing double\n backward raises an error. If you'd like to support double backward,", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html", "category": "pytorch docs"}
{"text": "you can either recompute intermediaries based on the inputs during\n backward or return the intermediaries as the outputs of the custom\n Function. See the double backward tutorial for more details.\n In \"backward()\", saved tensors can be accessed through the\n \"saved_tensors\" attribute. Before returning them to the user, a\n check is made to ensure they weren't used in any in-place operation\n that modified their content.\n Arguments can also be \"None\". This is a no-op.\n See Extending torch.autograd for more details on how to use this\n method.\n Example::\n >>> class Func(Function):\n >>> @staticmethod\n >>> def forward(ctx, x: torch.Tensor, y: torch.Tensor, z: int):\n >>> w = x * z\n >>> out = x * y + y * z + w * y\n >>> ctx.save_for_backward(x, y, w, out)\n >>> ctx.z = z # z is not a tensor\n >>> return out\n >>>\n >>> @staticmethod\n >>> @once_differentiable", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html", "category": "pytorch docs"}
{"text": "\n\n\n@once_differentiable\n >>> def backward(ctx, grad_out):\n >>> x, y, w, out = ctx.saved_tensors\n >>> z = ctx.z\n >>> gx = grad_out * (y + y * z)\n >>> gy = grad_out * (x + z + w)\n >>> gz = None\n >>> return gx, gy, gz\n >>>\n >>> a = torch.tensor(1., requires_grad=True, dtype=torch.double)\n >>> b = torch.tensor(2., requires_grad=True, dtype=torch.double)\n >>> c = 4\n >>> d = Func.apply(a, b, c)\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html", "category": "pytorch docs"}
{"text": "torch.fulltorch.full(size, fill_value, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Creates a tensor of size \"size\" filled with \"fill_value\". The\n tensor's dtype is inferred from \"fill_value\".\n Parameters:\n * size (int...*) -- a list, tuple, or \"torch.Size\" of\n integers defining the shape of the output tensor.\n * fill_value (Scalar) -- the value to fill the output\n tensor with.\n Keyword Arguments:\n * out (*Tensor, optional*) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device", "source": "https://pytorch.org/docs/stable/generated/torch.full.html", "category": "pytorch docs"}
{"text": "for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Example:\n >>> torch.full((2, 3), 3.141592)\n tensor([[ 3.1416, 3.1416, 3.1416],\n [ 3.1416, 3.1416, 3.1416]])", "source": "https://pytorch.org/docs/stable/generated/torch.full.html", "category": "pytorch docs"}
{"text": "torch.Tensor.digammaTensor.digamma() -> Tensor\n See \"torch.digamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.digamma.html", "category": "pytorch docs"}
{"text": "default_dynamic_quant_observertorch.quantization.observer.default_dynamic_quant_observer\n alias of functools.partial(,\n dtype=torch.quint8, quant_min=0, quant_max=255, is_dynamic=True){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_dynamic_quant_observer.html", "category": "pytorch docs"}
{"text": "torch._foreach_floortorch._foreach_floor(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.floor()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_floor.html", "category": "pytorch docs"}
{"text": "torch.matrix_exptorch.matrix_exp(A) -> Tensor\n Alias for \"torch.linalg.matrix_exp()\".", "source": "https://pytorch.org/docs/stable/generated/torch.matrix_exp.html", "category": "pytorch docs"}
{"text": "torch.nanquantiletorch.nanquantile(input, q, dim=None, keepdim=False, , interpolation='linear', out=None) -> Tensor\n This is a variant of \"torch.quantile()\" that \"ignores\" \"NaN\"\n values, computing the quantiles \"q\" as if \"NaN\" values in \"input\"\n did not exist. If all values in a reduced row are \"NaN\" then the\n quantiles for that reduction will be \"NaN\". See the documentation\n for \"torch.quantile()\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * q (float or Tensor) -- a scalar or 1D tensor of\n quantile values in the range [0, 1]\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n * interpolation (str*) -- interpolation method to use when\n the desired quantile lies between two data points. Can be\n \"linear\", \"lower\", \"higher\", \"midpoint\" and \"nearest\". Default\n is \"linear\".", "source": "https://pytorch.org/docs/stable/generated/torch.nanquantile.html", "category": "pytorch docs"}
{"text": "is \"linear\".\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> t = torch.tensor([float('nan'), 1, 2])\n >>> t.quantile(0.5)\n tensor(nan)\n >>> t.nanquantile(0.5)\n tensor(1.5000)\n >>> t = torch.tensor([[float('nan'), float('nan')], [1, 2]])\n >>> t\n tensor([[nan, nan],\n [1., 2.]])\n >>> t.nanquantile(0.5, dim=0)\n tensor([1., 2.])\n >>> t.nanquantile(0.5, dim=1)\n tensor([ nan, 1.5000])", "source": "https://pytorch.org/docs/stable/generated/torch.nanquantile.html", "category": "pytorch docs"}
{"text": "torch.aminmaxtorch.aminmax(input, , dim=None, keepdim=False, out=None) -> (Tensor min, Tensor max)\n Computes the minimum and maximum values of the \"input\" tensor.\n Parameters:\n input (Tensor) -- The input tensor\n Keyword Arguments:\n * dim (Optional[int]) -- The dimension along which\n to compute the values. If None, computes the values over the\n entire \"input\" tensor. Default is None*.\n * keepdim (bool) -- If True, the reduced dimensions will\n be kept in the output tensor as dimensions with size 1 for\n broadcasting, otherwise they will be removed, as if calling\n (\"torch.squeeze()\"). Default is False.\n * out (*Optional[Tuple[Tensor, Tensor]]) --\n Optional tensors on which to write the result. Must have the\n same shape and dtype as the expected output. Default is\n None.\n Returns:\n A named tuple (min, max)* containing the minimum and maximum\n values.", "source": "https://pytorch.org/docs/stable/generated/torch.aminmax.html", "category": "pytorch docs"}
{"text": "values.\n Raises:\n RuntimeError -- If any of the dimensions to compute the\n values over has size 0.\n Note:\n NaN values are propagated to the output if at least one value is\n NaN.\n See also:\n \"torch.amin()\" computes just the minimum value \"torch.amax()\"\n computes just the maximum value\n Example:\n >>> torch.aminmax(torch.tensor([1, -3, 5]))\n torch.return_types.aminmax(\n min=tensor(-3),\n max=tensor(5))\n >>> # aminmax propagates NaNs\n >>> torch.aminmax(torch.tensor([1, -3, 5, torch.nan]))\n torch.return_types.aminmax(\n min=tensor(nan),\n max=tensor(nan))\n >>> t = torch.arange(10).view(2, 5)\n >>> t\n tensor([[0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9]])\n >>> t.aminmax(dim=0, keepdim=True)\n torch.return_types.aminmax(\n min=tensor([[0, 1, 2, 3, 4]]),\n max=tensor([[5, 6, 7, 8, 9]]))", "source": "https://pytorch.org/docs/stable/generated/torch.aminmax.html", "category": "pytorch docs"}
{"text": "torch.autograd.functional.jacobiantorch.autograd.functional.jacobian(func, inputs, create_graph=False, strict=False, vectorize=False, strategy='reverse-mode')\n Function that computes the Jacobian of a given function.\n Parameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a tuple of Tensors or a Tensor.\n * inputs (tuple of Tensors or Tensor) -- inputs to the\n function \"func\".\n * create_graph (bool, optional) -- If \"True\", the\n Jacobian will be computed in a differentiable manner. Note\n that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * strict (bool, optional) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html", "category": "pytorch docs"}
{"text": "Tensor of zeros as the jacobian for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n * vectorize (bool, optional) -- This feature is\n experimental. Please consider using \"torch.func.jacrev()\" or\n \"torch.func.jacfwd()\" instead if you are looking for something\n less experimental and more performant. When computing the\n jacobian, usually we invoke \"autograd.grad\" once per row of\n the jacobian. If this flag is \"True\", we perform only a single\n \"autograd.grad\" call with \"batched_grad=True\" which uses the\n vmap prototype feature. Though this should lead to performance\n improvements in many cases, because this feature is still\n experimental, there may be performance cliffs. See\n \"torch.autograd.grad()\"'s \"batched_grad\" parameter for more\n information.\n * strategy (str, optional) -- Set to \"\"forward-mode\"\"", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html", "category": "pytorch docs"}
{"text": "or \"\"reverse-mode\"\" to determine whether the Jacobian will be\n computed with forward or reverse mode AD. Currently,\n \"\"forward-mode\"\" requires \"vectorized=True\". Defaults to\n \"\"reverse-mode\"\". If \"func\" has more outputs than inputs,\n \"\"forward-mode\"\" tends to be more performant. Otherwise,\n prefer to use \"\"reverse-mode\"\".\n Returns:\n if there is a single input and output, this will be a single\n Tensor containing the Jacobian for the linearized inputs and\n output. If one of the two is a tuple, then the Jacobian will be\n a tuple of Tensors. If both of them are tuples, then the\n Jacobian will be a tuple of tuple of Tensors where\n \"Jacobian[i][j]\" will contain the Jacobian of the \"i\"th output\n and \"j\"th input and will have as size the concatenation of the\n sizes of the corresponding output and the corresponding input\n and will have same dtype and device as the corresponding input.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html", "category": "pytorch docs"}
{"text": "If strategy is \"forward-mode\", the dtype will be that of the\n output; otherwise, the input.\n Return type:\n Jacobian (Tensor or nested tuple of Tensors)\n -[ Example ]-\n\n\n\ndef exp_reducer(x):\n ... return x.exp().sum(dim=1)\ninputs = torch.rand(2, 2)\njacobian(exp_reducer, inputs)\n tensor([[[1.4917, 2.4352],\n [0.0000, 0.0000]],\n [[0.0000, 0.0000],\n [2.4369, 2.3799]]])\njacobian(exp_reducer, inputs, create_graph=True)\n tensor([[[1.4917, 2.4352],\n [0.0000, 0.0000]],\n [[0.0000, 0.0000],\n [2.4369, 2.3799]]], grad_fn=)\ndef exp_adder(x, y):\n ... return 2 * x.exp() + 3 * y\ninputs = (torch.rand(2), torch.rand(2))\njacobian(exp_adder, inputs)\n (tensor([[2.8052, 0.0000],\n [0.0000, 3.3963]]),\n tensor([[3., 0.],\n [0., 3.]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html", "category": "pytorch docs"}
{"text": "BatchNorm3dclass torch.ao.nn.quantized.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)\n This is the quantized version of \"BatchNorm3d\".", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.BatchNorm3d.html", "category": "pytorch docs"}
{"text": "torch.nonzerotorch.nonzero(input, , out=None, as_tuple=False) -> LongTensor or tuple of LongTensors\n Note:\n \"torch.nonzero(..., as_tuple=False)\" (default) returns a 2-D\n tensor where each row is the index for a nonzero\n value.\"torch.nonzero(..., as_tuple=True)\" returns a tuple of 1-D\n index tensors, allowing for advanced indexing, so\n \"x[x.nonzero(as_tuple=True)]\" gives all nonzero values of tensor\n \"x\". Of the returned tuple, each index tensor contains nonzero\n indices for a certain dimension.See below for more details on the\n two behaviors.When \"input\" is on CUDA, \"torch.nonzero()\" causes\n host-device synchronization.\n When \"as_tuple\" is \"False\" (default)*:\n Returns a tensor containing the indices of all non-zero elements of\n \"input\". Each row in the result contains the indices of a non-zero\n element in \"input\". The result is sorted lexicographically, with\n the last index changing the fastest (C-style).", "source": "https://pytorch.org/docs/stable/generated/torch.nonzero.html", "category": "pytorch docs"}
{"text": "the last index changing the fastest (C-style).\n If \"input\" has n dimensions, then the resulting indices tensor\n \"out\" is of size (z \\times n), where z is the total number of non-\n zero elements in the \"input\" tensor.\n When \"as_tuple\" is \"True\":\n Returns a tuple of 1-D tensors, one for each dimension in \"input\",\n each containing the indices (in that dimension) of all non-zero\n elements of \"input\" .\n If \"input\" has n dimensions, then the resulting tuple contains n\n tensors of size z, where z is the total number of non-zero elements\n in the \"input\" tensor.\n As a special case, when \"input\" has zero dimensions and a nonzero\n scalar value, it is treated as a one-dimensional tensor with one\n element.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (LongTensor, optional) -- the output tensor\n containing indices\n Returns:\n If \"as_tuple\" is \"False\", the output tensor containing indices.", "source": "https://pytorch.org/docs/stable/generated/torch.nonzero.html", "category": "pytorch docs"}
{"text": "If \"as_tuple\" is \"True\", one 1-D tensor for each dimension,\n containing the indices of each nonzero element along that\n dimension.\n Return type:\n LongTensor or tuple of LongTensor\n Example:\n >>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))\n tensor([[ 0],\n [ 1],\n [ 2],\n [ 4]])\n >>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],\n ... [0.0, 0.4, 0.0, 0.0],\n ... [0.0, 0.0, 1.2, 0.0],\n ... [0.0, 0.0, 0.0,-0.4]]))\n tensor([[ 0, 0],\n [ 1, 1],\n [ 2, 2],\n [ 3, 3]])\n >>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]), as_tuple=True)\n (tensor([0, 1, 2, 4]),)\n >>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],\n ... [0.0, 0.4, 0.0, 0.0],\n ... [0.0, 0.0, 1.2, 0.0],", "source": "https://pytorch.org/docs/stable/generated/torch.nonzero.html", "category": "pytorch docs"}
{"text": "... [0.0, 0.0, 0.0,-0.4]]), as_tuple=True)\n (tensor([0, 1, 2, 3]), tensor([0, 1, 2, 3]))\n >>> torch.nonzero(torch.tensor(5), as_tuple=True)\n (tensor([0]),)", "source": "https://pytorch.org/docs/stable/generated/torch.nonzero.html", "category": "pytorch docs"}
{"text": "torch.set_default_dtypetorch.set_default_dtype(d)\n Sets the default floating point dtype to \"d\". Supports\n torch.float32 and torch.float64 as inputs. Other dtypes may be\n accepted without complaint but are not supported and are unlikely\n to work as expected.\n When PyTorch is initialized its default floating point dtype is\n torch.float32, and the intent of set_default_dtype(torch.float64)\n is to facilitate NumPy-like type inference. The default floating\n point dtype is used to:\n 1. Implicitly determine the default complex dtype. When the default\n floating point type is float32 the default complex dtype is\n complex64, and when the default floating point type is float64\n the default complex type is complex128.\n 2. Infer the dtype for tensors constructed using Python floats or\n complex Python numbers. See examples below.\n 3. Determine the result of type promotion between bool and integer\n tensors and Python floats and complex Python numbers.", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html", "category": "pytorch docs"}
{"text": "Parameters:\n d (\"torch.dtype\") -- the floating point dtype to make the\n default. Either torch.float32 or torch.float64.\n -[ Example ]-\n\n\n\ninitial default for floating point is torch.float32\nPython floats are interpreted as float32\ntorch.tensor([1.2, 3]).dtype\n torch.float32\ninitial default for floating point is torch.complex64\nComplex Python numbers are interpreted as complex64\ntorch.tensor([1.2, 3j]).dtype\n torch.complex64\ntorch.set_default_dtype(torch.float64)\nPython floats are now interpreted as float64\ntorch.tensor([1.2, 3]).dtype # a new floating point tensor\n torch.float64\nComplex Python numbers are now interpreted as complex128\ntorch.tensor([1.2, 3j]).dtype # a new complex tensor\n torch.complex128\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html", "category": "pytorch docs"}
{"text": "torch.arctan2torch.arctan2(input, other, *, out=None) -> Tensor\n Alias for \"torch.atan2()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arctan2.html", "category": "pytorch docs"}
{"text": "torch.Tensor.trunc_Tensor.trunc_() -> Tensor\n In-place version of \"trunc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.trunc_.html", "category": "pytorch docs"}
{"text": "RandomStructuredclass torch.nn.utils.prune.RandomStructured(amount, dim=- 1)\n Prune entire (currently unpruned) channels in a tensor at random.\n Parameters:\n * amount (int or float) -- quantity of parameters to\n prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * dim (int, optional) -- index of the dim along which\n we define channels to prune. Default: -1.\n classmethod apply(module, name, amount, dim=- 1)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"}
{"text": "pruning will act.\n * amount (int or float) -- quantity of parameters\n to prune. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\n * dim (int, optional) -- index of the dim along\n which we define channels to prune. Default: -1.\n apply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n Parameters:\n module (nn.Module) -- module containing the tensor to\n prune\n Returns:\n pruned version of the input tensor\n Return type:\n pruned_tensor (torch.Tensor)\n compute_mask(t, default_mask)\n Computes and returns a mask for the input tensor \"t\". Starting", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"}
{"text": "from a base \"default_mask\" (which should be a mask of ones if\n the tensor has not been pruned yet), generate a random mask to\n apply on top of the \"default_mask\" by randomly zeroing out\n channels along the specified dim of the tensor.\n Parameters:\n * t (torch.Tensor) -- tensor representing the parameter\n to prune\n * default_mask (torch.Tensor) -- Base mask from\n previous pruning iterations, that need to be respected\n after the new mask is applied. Same dims as \"t\".\n Returns:\n mask to apply to \"t\", of same dims as \"t\"\n Return type:\n mask (torch.Tensor)\n Raises:\n IndexError -- if \"self.dim >= len(t.shape)\"\n prune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * t (torch.Tensor) -- tensor to prune (of same", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"}
{"text": "dimensions as \"default_mask\").\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * default_mask (torch.Tensor, optional) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n Returns:\n pruned version of tensor \"t\".\n remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"}
{"text": "list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n Pruning itself is NOT undone or reversed!", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"}
{"text": "torch.cuda.get_rng_state_alltorch.cuda.get_rng_state_all()\n Returns a list of ByteTensor representing the random number states\n of all devices.\n Return type:\n List[Tensor]", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_rng_state_all.html", "category": "pytorch docs"}
{"text": "torch.fixtorch.fix(input, *, out=None) -> Tensor\n Alias for \"torch.trunc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.fix.html", "category": "pytorch docs"}
{"text": "torch.cuda.seed_alltorch.cuda.seed_all()\n Sets the seed for generating random numbers to a random number on\n all GPUs. It's safe to call this function if CUDA is not available;\n in that case, it is silently ignored.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.seed_all.html", "category": "pytorch docs"}
{"text": "set_grad_enabledclass torch.set_grad_enabled(mode)\n Context-manager that sets gradient calculation on or off.\n \"set_grad_enabled\" will enable or disable grads based on its\n argument \"mode\". It can be used as a context-manager or as a\n function.\n This context manager is thread local; it will not affect\n computation in other threads.\n Parameters:\n mode (bool) -- Flag whether to enable grad (\"True\"), or\n disable (\"False\"). This can be used to conditionally enable\n gradients.\n Note:\n set_grad_enabled is one of several mechanisms that can enable or\n disable gradients locally see Locally disabling gradient\n computation for more information on how they compare.\n Note:\n This API does not apply to forward-mode AD.\n Example::\n >>> x = torch.tensor([1.], requires_grad=True)\n >>> is_train = False\n >>> with torch.set_grad_enabled(is_train):\n ... y = x * 2\n >>> y.requires_grad\n False", "source": "https://pytorch.org/docs/stable/generated/torch.set_grad_enabled.html", "category": "pytorch docs"}
{"text": "\n\n\ny.requires_grad\n False\n >>> _ = torch.set_grad_enabled(True)\n >>> y = x * 2\n >>> y.requires_grad\n True\n >>> _ = torch.set_grad_enabled(False)\n >>> y = x * 2\n >>> y.requires_grad\n False\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_grad_enabled.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.fractional_max_pool2dtorch.nn.functional.fractional_max_pool2d(args, kwargs)\n Applies 2D fractional max pooling over an input signal composed of\n several input planes.\n Fractional MaxPooling is described in detail in the paper\n Fractional MaxPooling by Ben Graham\n The max-pooling operation is applied in kH \\times kW regions by a\n stochastic step size determined by the target output size. The\n number of output features is equal to the number of input planes.\n Parameters:\n * kernel_size -- the size of the window to take a max over.\n Can be a single number k (for a square kernel of k \\times k)\n or a tuple (kH, kW)\n * output_size -- the target output size of the image of the\n form oH \\times oW. Can be a tuple (oH, oW) or a single\n number oH for a square image oH \\times oH\n * output_ratio* -- If one wants to have an output size as a", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool2d.html", "category": "pytorch docs"}
{"text": "ratio of the input size, this option can be given. This has to\n be a number or tuple in the range (0, 1)\n * return_indices -- if \"True\", will return the indices along\n with the outputs. Useful to pass to \"max_unpool2d()\".\n Examples::\n >>> input = torch.randn(20, 16, 50, 32)\n >>> # pool of square window of size=3, and target output size 13x12\n >>> F.fractional_max_pool2d(input, 3, output_size=(13, 12))\n >>> # pool of square window and target output size being half of input image size\n >>> F.fractional_max_pool2d(input, 3, output_ratio=(0.5, 0.5))", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool2d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.batch_normtorch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)\n Applies Batch Normalization for each channel across a batch of\n data.\n See \"BatchNorm1d\", \"BatchNorm2d\", \"BatchNorm3d\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.batch_norm.html", "category": "pytorch docs"}
{"text": "torch.jit.set_fusion_strategytorch.jit.set_fusion_strategy(strategy)\n Sets the type and number of specializations that can occur during\n fusion.\n Usage: provide a list of pairs (type, depth) where type is one of\n \"STATIC\" or \"DYNAMIC\" and depth is an integer.\n Behavior - static vs dynamic:\n In STATIC fusion, fused ops are compiled to have fixed input\n shapes. The shape is determined based on some initial profiling\n runs. In DYNAMIC fusion, fused ops are compiled to have variable\n input shapes, so that multiple shapes are possible.\n In both cases, we also recompile on new striding behavior, device,\n or dtype.\n Behavior - fallback functions & depth:\n When an input doesn't match the format required by the\n specialized compiled op, it will run a fallback function.\n Fallback functions are recursively be compiled and specialized\n based on the observed tensor shapes. Since compilation can be", "source": "https://pytorch.org/docs/stable/generated/torch.jit.set_fusion_strategy.html", "category": "pytorch docs"}
{"text": "slow, the \"depth\" parameter is provided to limit the number of\n specializations that can be compiled, before giving up on\n recompiling and falling back to a completely un-fused, un-\n specialized implementation.\n The list of (type, depth) pairs controls the type of\n specializations and the number of specializations. For example:\n [(\"STATIC\", 2), (\"DYNAMIC\", 2)] indicates that the first two\n specializations will use static fusions, the following two\n specializations will use dynamic fusion, and any inputs that\n satisfy none of the 4 options will run an unfused implementation.\n NB: in the future, if more as more fusion backends are added there\n may be more granular apis for specific fusers.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.set_fusion_strategy.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lgamma_Tensor.lgamma_() -> Tensor\n In-place version of \"lgamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lgamma_.html", "category": "pytorch docs"}
{"text": "torch.linalg.matmultorch.linalg.matmul(input, other, *, out=None) -> Tensor\n Alias for \"torch.matmul()\"", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matmul.html", "category": "pytorch docs"}
{"text": "AdaptiveMaxPool2dclass torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False)\n Applies a 2D adaptive max pooling over an input signal composed of\n several input planes.\n The output is of size H_{out} \\times W_{out}, for any input size.\n The number of output features is equal to the number of input\n planes.\n Parameters:\n * output_size (Union[int, None,\n Tuple[Optional[int],\n Optional[int]]]) -- the target output size of the\n image of the form H_{out} \\times W_{out}. Can be a tuple\n (H_{out}, W_{out}) or a single H_{out} for a square image\n H_{out} \\times H_{out}. H_{out} and W_{out} can be either a\n \"int\", or \"None\" which means the size will be the same as that\n of the input.\n * return_indices (bool) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to\n nn.MaxUnpool2d. Default: \"False\"\n Shape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool2d.html", "category": "pytorch docs"}
{"text": "nn.MaxUnpool2d. Default: \"False\"\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where (H_{out}, W_{out})=\\text{output_size}.\n -[ Examples ]-\n\n\n\ntarget output size of 5x7\nm = nn.AdaptiveMaxPool2d((5, 7))\ninput = torch.randn(1, 64, 8, 9)\noutput = m(input)\ntarget output size of 7x7 (square)\nm = nn.AdaptiveMaxPool2d(7)\ninput = torch.randn(1, 64, 10, 9)\noutput = m(input)\ntarget output size of 10x7\nm = nn.AdaptiveMaxPool2d((None, 7))\ninput = torch.randn(1, 64, 10, 9)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.slice_scatterTensor.slice_scatter(src, dim=0, start=None, end=None, step=1) -> Tensor\n See \"torch.slice_scatter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.slice_scatter.html", "category": "pytorch docs"}
{"text": "torch.Tensor.covTensor.cov(*, correction=1, fweights=None, aweights=None) -> Tensor\n See \"torch.cov()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cov.html", "category": "pytorch docs"}
{"text": "UpsamplingBilinear2dclass torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None)\n Applies a 2D bilinear upsampling to an input signal composed of\n several input channels.\n To specify the scale, it takes either the \"size\" or the\n \"scale_factor\" as it's constructor argument.\n When \"size\" is given, it is the output size of the image (h, w).\n Parameters:\n * size (int or Tuple[int, int],\n optional) -- output spatial sizes\n * scale_factor (float or Tuple[float,\n float], optional) -- multiplier for spatial size.\n Warning:\n This class is deprecated in favor of \"interpolate()\". It is\n equivalent to \"nn.functional.interpolate(..., mode='bilinear',\n align_corners=True)\".\n Shape:\n * Input: (N, C, H_{in}, W_{in})\n * Output: (N, C, H_{out}, W_{out}) where\n H_{out} = \\left\\lfloor H_{in} \\times \\text{scale_factor}\n \\right\\rfloor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingBilinear2d.html", "category": "pytorch docs"}
{"text": "\\right\\rfloor\n W_{out} = \\left\\lfloor W_{in} \\times \\text{scale_factor}\n \\right\\rfloor\n Examples:\n >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)\n >>> input\n tensor([[[[1., 2.],\n [3., 4.]]]])\n >>> m = nn.UpsamplingBilinear2d(scale_factor=2)\n >>> m(input)\n tensor([[[[1.0000, 1.3333, 1.6667, 2.0000],\n [1.6667, 2.0000, 2.3333, 2.6667],\n [2.3333, 2.6667, 3.0000, 3.3333],\n [3.0000, 3.3333, 3.6667, 4.0000]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingBilinear2d.html", "category": "pytorch docs"}
{"text": "AvgPool2dclass torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)\n Applies a 2D average pooling over an input signal composed of\n several input planes.\n In the simplest case, the output value of the layer with input size\n (N, C, H, W), output (N, C, H_{out}, W_{out}) and \"kernel_size\"\n (kH, kW) can be precisely described as:\n out(N_i, C_j, h, w) = \\frac{1}{kH * kW} \\sum_{m=0}^{kH-1}\n \\sum_{n=0}^{kW-1} input(N_i, C_j,\n stride[0] \\times h + m, stride[1] \\times w + n)\n If \"padding\" is non-zero, then the input is implicitly zero-padded\n on both sides for \"padding\" number of points.\n Note:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n The parameters \"kernel_size\", \"stride\", \"padding\" can either be:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html", "category": "pytorch docs"}
{"text": "\na single \"int\" -- in which case the same value is used for the\n height and width dimension\na \"tuple\" of two ints -- in which case, the first int is\n used for the height dimension, and the second int for the\n width dimension\n Parameters:\nkernel_size (Union[int, Tuple[int,\n int]]) -- the size of the window\nstride (Union[int, Tuple[int, int]])\n -- the stride of the window. Default value is \"kernel_size\"\npadding (Union[int, Tuple[int,\n int]]) -- implicit zero padding to be added on both\n sides\nceil_mode (bool) -- when True, will use ceil instead\n of floor to compute the output shape\ncount_include_pad (bool) -- when True, will include the\n zero-padding in the averaging calculation\ndivisor_override (Optional[int]) -- if specified,\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html", "category": "pytorch docs"}
{"text": "it will be used as divisor, otherwise size of the pooling\n region will be used.\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times\n \\text{padding}[0] -\n \\text{kernel_size}[0]}{\\text{stride}[0]} + 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[1] -\n \\text{kernel_size}[1]}{\\text{stride}[1]} + 1\\right\\rfloor\n Examples:\n >>> # pool of square window of size=3, stride=2\n >>> m = nn.AvgPool2d(3, stride=2)\n >>> # pool of non-square window\n >>> m = nn.AvgPool2d((3, 2), stride=(2, 1))\n >>> input = torch.randn(20, 16, 50, 32)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fixTensor.fix() -> Tensor\n See \"torch.fix()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fix.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.feature_alpha_dropouttorch.nn.functional.feature_alpha_dropout(input, p=0.5, training=False, inplace=False)\n Randomly masks out entire channels (a channel is a feature map,\n e.g. the j-th channel of the i-th sample in the batch input is a\n tensor \\text{input}[i, j]) of the input tensor). Instead of setting\n activations to zero, as in regular Dropout, the activations are set\n to the negative saturation value of the SELU activation function.\n Each element will be masked independently on every forward call\n with probability \"p\" using samples from a Bernoulli distribution.\n The elements to be masked are randomized on every forward call, and\n scaled and shifted to maintain zero mean and unit variance.\n See \"FeatureAlphaDropout\" for details.\n Parameters:\n * p (float) -- dropout probability of a channel to be\n zeroed. Default: 0.5\n * training (bool) -- apply dropout if is \"True\". Default:\n \"True\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.feature_alpha_dropout.html", "category": "pytorch docs"}
{"text": "\"True\"\n * inplace (bool) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.feature_alpha_dropout.html", "category": "pytorch docs"}
{"text": "torch.Tensor.to_sparse_csrTensor.to_sparse_csr(dense_dim=None) -> Tensor\n Convert a tensor to compressed row storage format (CSR). Except\n for strided tensors, only works with 2D tensors. If the \"self\" is\n strided, then the number of dense dimensions could be specified,\n and a hybrid CSR tensor will be created, with dense_dim dense\n dimensions and self.dim() - 2 - dense_dim batch dimension.\n Parameters:\n dense_dim (int, optional) -- Number of dense\n dimensions of the resulting CSR tensor. This argument should be\n used only if \"self\" is a strided tensor, and must be a value\n between 0 and dimension of \"self\" tensor minus two.\n Example:\n >>> dense = torch.randn(5, 5)\n >>> sparse = dense.to_sparse_csr()\n >>> sparse._nnz()\n 25\n >>> dense = torch.zeros(3, 3, 1, 1)\n >>> dense[0, 0] = dense[1, 2] = dense[2, 1] = 1\n >>> dense.to_sparse_csr(dense_dim=2)\n tensor(crow_indices=tensor([0, 1, 2, 3]),", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csr.html", "category": "pytorch docs"}
{"text": "tensor(crow_indices=tensor([0, 1, 2, 3]),\n col_indices=tensor([0, 2, 1]),\n values=tensor([[[1.]],\n [[1.]],\n [[1.]]]), size=(3, 3, 1, 1), nnz=3,\n layout=torch.sparse_csr)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csr.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.glutorch.nn.functional.glu(input, dim=- 1) -> Tensor\n The gated linear unit. Computes:\n \\text{GLU}(a, b) = a \\otimes \\sigma(b)\n where input is split in half along dim to form a and b,\n \\sigma is the sigmoid function and \\otimes is the element-wise\n product between matrices.\n See Language Modeling with Gated Convolutional Networks.\n Parameters:\n * input (Tensor) -- input tensor\n * dim (int) -- dimension on which to split the input.\n Default: -1\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.glu.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.conv1dtorch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor\n Applies a 1D convolution over an input signal composed of several\n input planes.\n This operator supports TensorFloat32.\n See \"Conv1d\" for details and output shape.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Note:\n This operator supports complex data types i.e. \"complex32,\n complex64, complex128\".\n Parameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iW)\n * weight -- filters of shape (\\text{out_channels} ,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv1d.html", "category": "pytorch docs"}
{"text": "\\frac{\\text{in_channels}}{\\text{groups}} , kW)\n * bias -- optional bias of shape (\\text{out_channels}).\n Default: \"None\"\n * stride -- the stride of the convolving kernel. Can be a\n single number or a one-element tuple (sW,). Default: 1\n * padding --\n implicit paddings on both sides of the input. Can be a string\n {'valid', 'same'}, single number or a one-element tuple\n (padW,). Default: 0 \"padding='valid'\" is the same as no\n padding. \"padding='same'\" pads the input so the output has the\n same shape as the input. However, this mode doesn't support\n any stride values other than 1.\n Warning:\n For \"padding='same'\", if the \"weight\" is even-length and\n \"dilation\" is odd in any dimension, a full \"pad()\" operation\n may be needed internally. Lowering performance.\n * dilation -- the spacing between kernel elements. Can be a", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv1d.html", "category": "pytorch docs"}
{"text": "single number or a one-element tuple (dW,). Default: 1\n * groups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n Examples:\n >>> inputs = torch.randn(33, 16, 30)\n >>> filters = torch.randn(20, 16, 5)\n >>> F.conv1d(inputs, filters)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv1d.html", "category": "pytorch docs"}
{"text": "default_weight_observertorch.quantization.observer.default_weight_observer\n alias of functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_weight_observer.html", "category": "pytorch docs"}
{"text": "torch.gradienttorch.gradient(input, *, spacing=1, dim=None, edge_order=1) -> List of Tensors\n Estimates the gradient of a function g : \\mathbb{R}^n \\rightarrow\n \\mathbb{R} in one or more dimensions using the second-order\n accurate central differences method.\n The gradient of g is estimated using samples. By default, when\n \"spacing\" is not specified, the samples are entirely described by\n \"input\", and the mapping of input coordinates to an output is the\n same as the tensor's mapping of indices to values. For example, for\n a three-dimensional \"input\" the function described is g :\n \\mathbb{R}^3 \\rightarrow \\mathbb{R}, and g(1, 2, 3)\\ == input[1, 2,\n 3].\n When \"spacing\" is specified, it modifies the relationship between\n \"input\" and input coordinates. This is detailed in the \"Keyword\n Arguments\" section below.\n The gradient is estimated by estimating each partial derivative of\n g independently. This estimation is accurate if g is in C^3 (it has", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"}
{"text": "at least 3 continuous derivatives), and the estimation can be\n improved by providing closer samples. Mathematically, the value at\n each interior point of a partial derivative is estimated using\n Taylor\u00e2\u0080\u0099s theorem with remainder. Letting x be an interior point and\n x+h_r be point neighboring it, the partial gradient at f(x+h_r) is\n estimated using:\n \\begin{aligned} f(x+h_r) = f(x) + h_r f'(x) + {h_r}^2\n \\frac{f''(x)}{2} + {h_r}^3 \\frac{f'''(x_r)}{6} \\ \\end{aligned}\n where x_r is a number in the interval [x, x+ h_r] and using the\n fact that f \\in C^3 we derive :\n f'(x) \\approx \\frac{ {h_l}^2 f(x+h_r) - {h_r}^2 f(x-h_l) +\n ({h_r}^2-{h_l}^2 ) f(x) }{ {h_r} {h_l}^2 + {h_r}^2 {h_l} }\n Note:\n We estimate the gradient of functions in complex domain g :\n \\mathbb{C}^n \\rightarrow \\mathbb{C} in the same way.\n The value of each partial derivative at the boundary points is\n computed differently. See edge_order below.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"}
{"text": "Parameters:\n input (\"Tensor\") -- the tensor that represents the values of\n the function\n Keyword Arguments:\n * spacing (\"scalar\", \"list of scalar\", \"list of Tensor\",\n optional) -- \"spacing\" can be used to modify how the \"input\"\n tensor's indices relate to sample coordinates. If \"spacing\" is\n a scalar then the indices are multiplied by the scalar to\n produce the coordinates. For example, if \"spacing=2\" the\n indices (1, 2, 3) become coordinates (2, 4, 6). If \"spacing\"\n is a list of scalars then the corresponding indices are\n multiplied. For example, if \"spacing=(2, -1, 3)\" the indices\n (1, 2, 3) become coordinates (2, -2, 9). Finally, if \"spacing\"\n is a list of one-dimensional tensors then each tensor\n specifies the coordinates for the corresponding dimension. For\n example, if the indices are (1, 2, 3) and the tensors are (t0,\n t1, t2), then the coordinates are (t0[1], t1[2], t2[3])", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"}
{"text": "\ndim (\"int\", \"list of int\", optional) -- the dimension or\n dimensions to approximate the gradient over. By default the\n partial gradient in every dimension is computed. Note that\n when \"dim\" is specified the elements of the \"spacing\"\n argument must correspond with the specified dims.\"\nedge_order (\"int\", optional) -- 1 or 2, for first-order or\n second-order estimation of the boundary (\"edge\") values,\n respectively.\n Examples:\n\n\nEstimates the gradient of f(x)=x^2 at points [-2, -1, 2, 4]\ncoordinates = (torch.tensor([-2., -1., 1., 4.]),)\nvalues = torch.tensor([4., 1., 1., 16.], )\ntorch.gradient(values, spacing = coordinates)\n (tensor([-3., -2., 2., 5.]),)\nEstimates the gradient of the R^2 -> R function whose samples are\ndescribed by the tensor t. Implicit coordinates are [0, 1] for the outermost\n\n\n\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"}
{"text": "\n\n\ndimension and [0, 1, 2, 3] for the innermost dimension, and function estimates\n >>> # partial derivative for both dimensions.\n >>> t = torch.tensor([[1, 2, 4, 8], [10, 20, 40, 80]])\n >>> torch.gradient(t)\n (tensor([[ 9., 18., 36., 72.],\n [ 9., 18., 36., 72.]]),\n tensor([[ 1.0000, 1.5000, 3.0000, 4.0000],\n [10.0000, 15.0000, 30.0000, 40.0000]]))\n >>> # A scalar value for spacing modifies the relationship between tensor indices\n >>> # and input coordinates by multiplying the indices to find the\n >>> # coordinates. For example, below the indices of the innermost\n >>> # 0, 1, 2, 3 translate to coordinates of [0, 2, 4, 6], and the indices of\n >>> # the outermost dimension 0, 1 translate to coordinates of [0, 2].\n >>> torch.gradient(t, spacing = 2.0) # dim = None (implicitly [0, 1])\n (tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],\n [ 4.5000, 9.0000, 18.0000, 36.0000]]),\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"}
{"text": "tensor([[ 0.5000, 0.7500, 1.5000, 2.0000],\n [ 5.0000, 7.5000, 15.0000, 20.0000]]))\n >>> # doubling the spacing between samples halves the estimated partial gradients.\n >>>\n >>> # Estimates only the partial derivative for dimension 1\n >>> torch.gradient(t, dim = 1) # spacing = None (implicitly 1.)\n (tensor([[ 1.0000, 1.5000, 3.0000, 4.0000],\n [10.0000, 15.0000, 30.0000, 40.0000]]),)\n >>> # When spacing is a list of scalars, the relationship between the tensor\n >>> # indices and input coordinates changes based on dimension.\n >>> # For example, below, the indices of the innermost dimension 0, 1, 2, 3 translate\n >>> # to coordinates of [0, 3, 6, 9], and the indices of the outermost dimension\n >>> # 0, 1 translate to coordinates of [0, 2].\n >>> torch.gradient(t, spacing = [3., 2.])\n (tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],\n [ 4.5000, 9.0000, 18.0000, 36.0000]]),", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"}
{"text": "tensor([[ 0.3333, 0.5000, 1.0000, 1.3333],\n [ 3.3333, 5.0000, 10.0000, 13.3333]]))\n >>> # The following example is a replication of the previous one with explicit\n >>> # coordinates.\n >>> coords = (torch.tensor([0, 2]), torch.tensor([0, 3, 6, 9]))\n >>> torch.gradient(t, spacing = coords)\n (tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],\n [ 4.5000, 9.0000, 18.0000, 36.0000]]),\n tensor([[ 0.3333, 0.5000, 1.0000, 1.3333],\n [ 3.3333, 5.0000, 10.0000, 13.3333]]))", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"}
{"text": "torch.select_scatter\ntorch.select_scatter(input, src, dim, index) -> Tensor\n Embeds the values of the \"src\" tensor into \"input\" at the given\n index. This function returns a tensor with fresh storage; it does\n not create a view.\n Parameters:\n * input (Tensor) -- the input tensor.\n * src (Tensor) -- The tensor to embed into \"input\"\n * dim (int) -- the dimension to insert the slice into.\n * index (int) -- the index to select with\n Note:\n \"src\" must be of the proper size in order to be embedded into\n \"input\". Specifically, it should have the same shape as\n \"torch.select(input, dim, index)\"\n Example:\n >>> a = torch.zeros(2, 2)\n >>> b = torch.ones(2)\n >>> a.select_scatter(b, 0, 0)\n tensor([[1., 1.],\n [0., 0.]])", "source": "https://pytorch.org/docs/stable/generated/torch.select_scatter.html", "category": "pytorch docs"}
{"text": "torch.Tensor.q_per_channel_zero_points\nTensor.q_per_channel_zero_points() -> Tensor\n Given a Tensor quantized by linear (affine) per-channel\n quantization, returns a tensor of zero_points of the underlying\n quantizer. It has one element per channel, i.e. its length matches\n the size of the quantized dimension (given by q_per_channel_axis).", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_per_channel_zero_points.html", "category": "pytorch docs"}
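A minimal sketch of the record above: quantize a small float tensor per-channel along axis 1 with hand-picked scales and zero points, then read the zero points back. The particular values are illustrative, not from the original docs.

```python
import torch

# Per-channel affine quantization along dim 1: one (scale, zero_point)
# pair per channel.
x = torch.randn(2, 3)
scales = torch.tensor([0.1, 0.2, 0.3], dtype=torch.float64)
zero_points = torch.tensor([0, 1, 2], dtype=torch.int64)
q = torch.quantize_per_channel(x, scales, zero_points, axis=1, dtype=torch.quint8)

# The zero-point tensor has one entry per channel of the quantized axis.
print(q.q_per_channel_axis())         # 1
print(q.q_per_channel_zero_points())  # tensor([0, 1, 2])
```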
{"text": "LSTM\nclass torch.nn.quantizable.LSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, device=None, dtype=None)\n A quantizable long short-term memory (LSTM).\n For the description and the argument types, please refer to \"LSTM\".\n Variables:\n layers -- instances of the _LSTMLayer\n Note:\n To access the weights and biases, you need to access them per\n layer. See examples below.\n Examples:\n >>> import torch.nn.quantizable as nnqa\n >>> rnn = nnqa.LSTM(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> c0 = torch.randn(2, 3, 20)\n >>> output, (hn, cn) = rnn(input, (h0, c0))\n >>> # To get the weights:\n >>> print(rnn.layers[0].weight_ih)\n tensor([[...]])\n >>> print(rnn.layers[0].weight_hh)\n AssertionError: There is no reverse path in the non-bidirectional layer", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.LSTM.html", "category": "pytorch docs"}
{"text": "torch.result_type\ntorch.result_type(tensor1, tensor2) -> dtype\n Returns the \"torch.dtype\" that would result from performing an\n arithmetic operation on the provided input tensors. See type\n promotion documentation for more information on the type promotion\n logic.\n Parameters:\n * tensor1 (Tensor or Number) -- an input tensor or\n number\n * tensor2 (Tensor or Number) -- an input tensor or\n number\n Example:\n >>> torch.result_type(torch.tensor([1, 2], dtype=torch.int), 1.0)\n torch.float32\n >>> torch.result_type(torch.tensor([1, 2], dtype=torch.uint8), torch.tensor(1))\n torch.uint8", "source": "https://pytorch.org/docs/stable/generated/torch.result_type.html", "category": "pytorch docs"}
{"text": "torch._foreach_atan_\ntorch._foreach_atan_(self: List[Tensor]) -> None\n Apply \"torch.atan()\" to each Tensor of the input list, in place.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_atan_.html", "category": "pytorch docs"}
{"text": "torch.cuda.nvtx.mark\ntorch.cuda.nvtx.mark(msg)\n Describe an instantaneous event that occurred at some point.\n Parameters:\n msg (str) -- ASCII message to associate with the event.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.mark.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cumsum\nTensor.cumsum(dim, dtype=None) -> Tensor\n See \"torch.cumsum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cumsum.html", "category": "pytorch docs"}
{"text": "torch.resolve_neg\ntorch.resolve_neg(input) -> Tensor\n Returns a new tensor with materialized negation if \"input\"'s\n negative bit is set to True, else returns \"input\". The output\n tensor will always have its negative bit set to False.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])\n >>> y = x.conj()\n >>> z = y.imag\n >>> z.is_neg()\n True\n >>> out = z.resolve_neg()\n >>> out\n tensor([-1., -2., 3.])\n >>> out.is_neg()\n False", "source": "https://pytorch.org/docs/stable/generated/torch.resolve_neg.html", "category": "pytorch docs"}
{"text": "torch._foreach_atan\ntorch._foreach_atan(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.atan()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_atan.html", "category": "pytorch docs"}
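A short sketch contrasting the two foreach variants documented above: `_foreach_atan` returns a new list, while `_foreach_atan_` mutates its argument in place. Both are equivalent to a Python-level loop over `torch.atan`, just dispatched as a single fused foreach call.

```python
import torch

xs = [torch.tensor([0.0, 1.0]), torch.tensor([2.0, 3.0])]

out = torch._foreach_atan(xs)   # out-of-place: returns a new list of tensors
torch._foreach_atan_(xs)        # in-place: mutates the tensors in xs

# After the in-place call, xs matches the out-of-place result.
```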
{"text": "torch.conj_physical\ntorch.conj_physical(input, *, out=None) -> Tensor\n Computes the element-wise conjugate of the given \"input\" tensor. If\n \"input\" has a non-complex dtype, this function just returns\n \"input\".\n Note:\n This performs the conjugate operation regardless of whether the\n conjugate bit is set or not.\n Warning:\n In the future, \"torch.conj_physical()\" may return a non-writeable\n view for an \"input\" of non-complex dtype. It's recommended that\n programs not modify the tensor returned by\n \"torch.conj_physical()\" when \"input\" is of non-complex dtype to\n be compatible with this change.\n \\text{out}_{i} = conj(\\text{input}_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.conj_physical(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))\n tensor([-1 - 1j, -2 - 2j, 3 + 3j])", "source": "https://pytorch.org/docs/stable/generated/torch.conj_physical.html", "category": "pytorch docs"}
{"text": "torch.Tensor.new_zeros\nTensor.new_zeros(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\n Returns a Tensor of size \"size\" filled with \"0\". By default, the\n returned Tensor has the same \"torch.dtype\" and \"torch.device\" as\n this tensor.\n Parameters:\n size (int...) -- a list, tuple, or \"torch.Size\" of\n integers defining the shape of the output tensor.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * layout (\"torch.layout\", optional) -- the desired layout of", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_zeros.html", "category": "pytorch docs"}
{"text": "returned Tensor. Default: \"torch.strided\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> tensor = torch.tensor((), dtype=torch.float64)\n >>> tensor.new_zeros((2, 3))\n tensor([[ 0., 0., 0.],\n [ 0., 0., 0.]], dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_zeros.html", "category": "pytorch docs"}
{"text": "propagate_qconfig_\ntorch.quantization.propagate_qconfig_(module, qconfig_dict=None, prepare_custom_config_dict=None)\n Propagate qconfig through the module hierarchy and assign the\n qconfig attribute on each leaf module.\n Parameters:\n * module -- input module\n * qconfig_dict -- dictionary that maps from name or type of\n submodule to quantization configuration; the qconfig applies to\n all submodules of a given module unless a qconfig for the\n submodules is specified (when the submodule already has a\n qconfig attribute)\n * prepare_custom_config_dict -- dictionary for custom\n handling of modules; see docs for \"prepare_fx()\"\n Returns:\n None, module is modified in place with qconfig attached", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.propagate_qconfig_.html", "category": "pytorch docs"}
{"text": "torch.sign\ntorch.sign(input, *, out=None) -> Tensor\n Returns a new tensor with the signs of the elements of \"input\".\n \\text{out}_{i} = \\operatorname{sgn}(\\text{input}_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([0.7, -1.2, 0., 2.3])\n >>> a\n tensor([ 0.7000, -1.2000, 0.0000, 2.3000])\n >>> torch.sign(a)\n tensor([ 1., -1., 0., 1.])", "source": "https://pytorch.org/docs/stable/generated/torch.sign.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ravel\nTensor.ravel() -> Tensor\n See \"torch.ravel()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ravel.html", "category": "pytorch docs"}
{"text": "torch.swapaxes\ntorch.swapaxes(input, axis0, axis1) -> Tensor\n Alias for \"torch.transpose()\".\n This function is equivalent to NumPy's swapaxes function.\n Examples:\n >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])\n >>> x\n tensor([[[0, 1],\n [2, 3]],\n [[4, 5],\n [6, 7]]])\n >>> torch.swapaxes(x, 0, 1)\n tensor([[[0, 1],\n [4, 5]],\n [[2, 3],\n [6, 7]]])\n >>> torch.swapaxes(x, 0, 2)\n tensor([[[0, 4],\n [2, 6]],\n [[1, 5],\n [3, 7]]])", "source": "https://pytorch.org/docs/stable/generated/torch.swapaxes.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.remove_spectral_norm\ntorch.nn.utils.remove_spectral_norm(module, name='weight')\n Removes the spectral normalization reparameterization from a\n module.\n Parameters:\n * module (Module) -- containing module\n * name (str, optional) -- name of weight parameter\n Return type:\n T_module\n -[ Example ]-\n >>> m = spectral_norm(nn.Linear(40, 10))\n >>> remove_spectral_norm(m)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.remove_spectral_norm.html", "category": "pytorch docs"}
{"text": "torch.cuda.seed\ntorch.cuda.seed()\n Sets the seed for generating random numbers to a random number for\n the current GPU. It's safe to call this function if CUDA is not\n available; in that case, it is silently ignored.\n Warning:\n If you are working with a multi-GPU model, this function will\n only initialize the seed on one GPU. To initialize all GPUs, use\n \"seed_all()\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.seed.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.cross_entropy\ntorch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)\n This criterion computes the cross entropy loss between input logits\n and target.\n See \"CrossEntropyLoss\" for details.\n Parameters:\n * input (Tensor) -- Predicted unnormalized logits; see\n Shape section below for supported shapes.\n * target (Tensor) -- Ground truth class indices or class\n probabilities; see Shape section below for supported shapes.\n * weight (Tensor, optional) -- a manual rescaling\n weight given to each class. If given, has to be a Tensor of\n size C\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html", "category": "pytorch docs"}
{"text": "are multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * ignore_index (int, optional) -- Specifies a target\n value that is ignored and does not contribute to the input\n gradient. When \"size_average\" is \"True\", the loss is averaged\n over non-ignored targets. Note that \"ignore_index\" is only\n applicable when the target contains class indices. Default:\n -100\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html", "category": "pytorch docs"}
{"text": "\"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n * label_smoothing (float, optional) -- A float in\n [0.0, 1.0]. Specifies the amount of smoothing when computing\n the loss, where 0.0 means no smoothing. The targets become a\n mixture of the original ground truth and a uniform\n distribution as described in Rethinking the Inception\n Architecture for Computer Vision. Default: 0.0.\n Return type:\n Tensor\n Shape:\n * Input: Shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K\n \\geq 1 in the case of K-dimensional loss.\n * Target: If containing class indices, shape (), (N) or (N, d_1,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html", "category": "pytorch docs"}
{"text": "d_2, ..., d_K) with K \\geq 1 in the case of K-dimensional loss\n where each value should be between [0, C). If containing class\n probabilities, same shape as the input and each value should\n be between [0, 1].\n where:\n \\begin{aligned} C ={} & \\text{number of classes} \\\\ N\n ={} & \\text{batch size} \\\\ \\end{aligned}\n Examples:\n >>> # Example of target with class indices\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randint(5, (3,), dtype=torch.int64)\n >>> loss = F.cross_entropy(input, target)\n >>> loss.backward()\n >>>\n >>> # Example of target with class probabilities\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randn(3, 5).softmax(dim=1)\n >>> loss = F.cross_entropy(input, target)\n >>> loss.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html", "category": "pytorch docs"}
{"text": "torch.arctanh\ntorch.arctanh(input, *, out=None) -> Tensor\n Alias for \"torch.atanh()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arctanh.html", "category": "pytorch docs"}
{"text": "torch.conj\ntorch.conj(input) -> Tensor\n Returns a view of \"input\" with a flipped conjugate bit. If \"input\"\n has a non-complex dtype, this function just returns \"input\".\n Note:\n \"torch.conj()\" performs a lazy conjugation, but the actual\n conjugated tensor can be materialized at any time using\n \"torch.resolve_conj()\".\n Warning:\n In the future, \"torch.conj()\" may return a non-writeable view for\n an \"input\" of non-complex dtype. It's recommended that programs\n not modify the tensor returned by \"torch.conj()\" when\n \"input\" is of non-complex dtype to be compatible with this\n change.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])\n >>> x.is_conj()\n False\n >>> y = torch.conj(x)\n >>> y.is_conj()\n True", "source": "https://pytorch.org/docs/stable/generated/torch.conj.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logical_and\nTensor.logical_and() -> Tensor\n See \"torch.logical_and()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_and.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sinh\nTensor.sinh() -> Tensor\n See \"torch.sinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sinh.html", "category": "pytorch docs"}
{"text": "DeQuantStub\nclass torch.quantization.DeQuantStub(qconfig=None)\n Dequantize stub module. Before calibration, this is the same as\n identity; during convert it will be swapped for nnq.DeQuantize.\n Parameters:\n qconfig -- quantization configuration for the tensor; if\n qconfig is not provided, we will get qconfig from parent modules", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.DeQuantStub.html", "category": "pytorch docs"}
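A minimal sketch of the pass-through behavior noted above: before `prepare`/`convert`, `QuantStub` and `DeQuantStub` act as identity, so the float model runs unchanged. The tiny wrapper module is an assumption for illustration.

```python
import torch
from torch.quantization import QuantStub, DeQuantStub

class M(torch.nn.Module):
    """Float model with quant/dequant markers at the boundaries."""
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # identity until calibration/convert
        self.fc = torch.nn.Linear(4, 2)
        self.dequant = DeQuantStub()  # identity until convert

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

out = M()(torch.randn(3, 4))  # plain float forward pass
```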
{"text": "torch.nn.utils.remove_weight_norm\ntorch.nn.utils.remove_weight_norm(module, name='weight')\n Removes the weight normalization reparameterization from a module.\n Parameters:\n * module (Module) -- containing module\n * name (str, optional) -- name of weight parameter\n Return type:\n T_module\n -[ Example ]-\n >>> m = weight_norm(nn.Linear(20, 40))\n >>> remove_weight_norm(m)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.remove_weight_norm.html", "category": "pytorch docs"}
{"text": "StepLR\nclass torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False)\n Decays the learning rate of each parameter group by gamma every\n step_size epochs. Notice that such decay can happen simultaneously\n with other changes to the learning rate from outside this\n scheduler. When last_epoch=-1, sets initial lr as lr.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * step_size (int) -- Period of learning rate decay.\n * gamma (float) -- Multiplicative factor of learning rate\n decay. Default: 0.1.\n * last_epoch (int) -- The index of last epoch. Default:\n -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-\n >>> # Assuming optimizer uses lr = 0.05 for all groups\n >>> # lr = 0.05 if epoch < 30\n >>> # lr = 0.005 if 30 <= epoch < 60\n >>> # lr = 0.0005 if 60 <= epoch < 90\n >>> # ...", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html", "category": "pytorch docs"}
{"text": ">>> scheduler = StepLR(optimizer, step_size=30, gamma=0.1)\n >>> for epoch in range(100):\n >>> train(...)\n >>> validate(...)\n >>> scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the scheduler's state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html", "category": "pytorch docs"}
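The decay rule above can be checked numerically on a toy model. This sketch uses `step_size=2, gamma=0.1` (smaller than the docs' 30-epoch schedule) so the drop is visible in four steps; the model and optimizer are illustrative assumptions.

```python
import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.1)

lrs = []
for _ in range(4):
    opt.step()           # optimizer step before scheduler step
    sched.step()
    lrs.append(sched.get_last_lr()[0])
# lr is multiplied by gamma=0.1 once every step_size=2 epochs:
# lrs ≈ [0.05, 0.005, 0.005, 0.0005]
```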
{"text": "PReLU\nclass torch.nn.PReLU(num_parameters=1, init=0.25, device=None, dtype=None)\n Applies the element-wise function:\n \\text{PReLU}(x) = \\max(0,x) + a * \\min(0,x)\n or\n \\text{PReLU}(x) = \\begin{cases} x, & \\text{ if } x \\geq 0 \\\\ ax,\n & \\text{ otherwise } \\end{cases}\n Here a is a learnable parameter. When called without arguments,\n nn.PReLU() uses a single parameter a across all input channels.\n If called with nn.PReLU(nChannels), a separate a is used for each\n input channel.\n Note:\n weight decay should not be used when learning a for good\n performance.\n Note:\n Channel dim is the 2nd dim of input. When input has dims < 2,\n then there is no channel dim and the number of channels = 1.\n Parameters:\n * num_parameters (int) -- number of a to learn. Although\n it takes an int as input, only two values are\n legitimate: 1, or the number of channels at input. Default: 1", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html", "category": "pytorch docs"}
{"text": "\n * init (float) -- the initial value of a. Default: 0.25\n Shape:\n * Input: (*) where * means any number of additional\n dimensions.\n * Output: (*), same shape as the input.\n Variables:\n weight (Tensor) -- the learnable weights of shape\n (\"num_parameters\").\n Examples:\n >>> m = nn.PReLU()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html", "category": "pytorch docs"}
{"text": "torch.transpose\ntorch.transpose(input, dim0, dim1) -> Tensor\n Returns a tensor that is a transposed version of \"input\". The given\n dimensions \"dim0\" and \"dim1\" are swapped.\n If \"input\" is a strided tensor then the resulting \"out\" tensor\n shares its underlying storage with the \"input\" tensor, so changing\n the content of one would change the content of the other.\n If \"input\" is a sparse tensor then the resulting \"out\" tensor does\n not share the underlying storage with the \"input\" tensor.\n If \"input\" is a sparse tensor with compressed layout (SparseCSR,\n SparseBSR, SparseCSC or SparseBSC) the arguments \"dim0\" and \"dim1\"\n must be both batch dimensions, or must both be sparse dimensions.\n The batch dimensions of a sparse tensor are the dimensions\n preceding the sparse dimensions.\n Note:\n Transpositions which interchange the sparse dimensions of a\n SparseCSR or SparseCSC layout tensor will result in the", "source": "https://pytorch.org/docs/stable/generated/torch.transpose.html", "category": "pytorch docs"}
{"text": "layout changing between the two options. Transposition of the\n sparse dimensions of a SparseBSR or SparseBSC layout tensor\n will likewise generate a result with the opposite layout.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim0 (int) -- the first dimension to be transposed\n * dim1 (int) -- the second dimension to be transposed\n Example:\n >>> x = torch.randn(2, 3)\n >>> x\n tensor([[ 1.0028, -0.9893, 0.5809],\n [-0.1669, 0.7299, 0.4942]])\n >>> torch.transpose(x, 0, 1)\n tensor([[ 1.0028, -0.1669],\n [-0.9893, 0.7299],\n [ 0.5809, 0.4942]])\n See also \"torch.t()\".", "source": "https://pytorch.org/docs/stable/generated/torch.transpose.html", "category": "pytorch docs"}
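The storage-sharing claim for strided tensors above is easy to demonstrate: writing through the transposed view mutates the original tensor.

```python
import torch

# For strided tensors, transpose returns a view over the same storage.
x = torch.zeros(2, 3)
t = torch.transpose(x, 0, 1)   # shape (3, 2), no copy

t[0, 1] = 7.0                  # writes through to x[1, 0]
# x is now tensor([[0., 0., 0.],
#                  [7., 0., 0.]])
```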
{"text": "torch.cdist\ntorch.cdist(x1, x2, p=2.0, compute_mode='use_mm_for_euclid_dist_if_necessary')\n Computes the batched p-norm distance between each pair of the two\n collections of row vectors.\n Parameters:\n * x1 (Tensor) -- input tensor of shape B \\times P \\times\n M.\n * x2 (Tensor) -- input tensor of shape B \\times R \\times\n M.\n * p (float) -- p value for the p-norm distance to\n calculate between each vector pair \\in [0, \\infty].\n * compute_mode (str) --\n 'use_mm_for_euclid_dist_if_necessary' - will use matrix\n multiplication approach to calculate euclidean distance (p =\n 2) if P > 25 or R > 25 'use_mm_for_euclid_dist' - will always\n use matrix multiplication approach to calculate euclidean\n distance (p = 2) 'donot_use_mm_for_euclid_dist' - will never\n use matrix multiplication approach to calculate euclidean\n distance (p = 2) Default: use_mm_for_euclid_dist_if_necessary.", "source": "https://pytorch.org/docs/stable/generated/torch.cdist.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n If x1 has shape B \\times P \\times M and x2 has shape B \\times R\n \\times M then the output will have shape B \\times P \\times R.\n This function is equivalent to\n scipy.spatial.distance.cdist(input, 'minkowski', p=p) if p \\in (0,\n \\infty). When p = 0 it is equivalent to\n scipy.spatial.distance.cdist(input, 'hamming') * M. When p =\n \\infty, the closest scipy function is\n scipy.spatial.distance.cdist(xn, lambda x, y: np.abs(x -\n y).max()).\n -[ Example ]-\n >>> a = torch.tensor([[0.9041, 0.0196], [-0.3108, -2.4423], [-0.4821, 1.059]])\n >>> a\n tensor([[ 0.9041, 0.0196],\n [-0.3108, -2.4423],\n [-0.4821, 1.0590]])\n >>> b = torch.tensor([[-2.1763, -0.4713], [-0.6986, 1.3702]])\n >>> b\n tensor([[-2.1763, -0.4713],\n [-0.6986, 1.3702]])\n >>> torch.cdist(a, b, p=2)\n tensor([[3.1193, 2.0959],\n [2.7138, 3.8322],\n [2.2830, 0.3791]])", "source": "https://pytorch.org/docs/stable/generated/torch.cdist.html", "category": "pytorch docs"}
{"text": "torch.split\ntorch.split(tensor, split_size_or_sections, dim=0)\n Splits the tensor into chunks. Each chunk is a view of the original\n tensor.\n If \"split_size_or_sections\" is an integer type, then \"tensor\" will\n be split into equally sized chunks (if possible). Last chunk will\n be smaller if the tensor size along the given dimension \"dim\" is\n not divisible by \"split_size\".\n If \"split_size_or_sections\" is a list, then \"tensor\" will be split\n into \"len(split_size_or_sections)\" chunks with sizes in \"dim\"\n according to \"split_size_or_sections\".\n Parameters:\n * tensor (Tensor) -- tensor to split.\n * split_size_or_sections (int) or (list(int))\n -- size of a single chunk or list of sizes for each chunk\n * dim (int) -- dimension along which to split the tensor.\n Return type:\n List[Tensor]\n Example:\n >>> a = torch.arange(10).reshape(5, 2)\n >>> a\n tensor([[0, 1],\n [2, 3],\n [4, 5],", "source": "https://pytorch.org/docs/stable/generated/torch.split.html", "category": "pytorch docs"}
{"text": "[6, 7],\n [8, 9]])\n >>> torch.split(a, 2)\n (tensor([[0, 1],\n [2, 3]]),\n tensor([[4, 5],\n [6, 7]]),\n tensor([[8, 9]]))\n >>> torch.split(a, [1, 4])\n (tensor([[0, 1]]),\n tensor([[2, 3],\n [4, 5],\n [6, 7],\n [8, 9]]))", "source": "https://pytorch.org/docs/stable/generated/torch.split.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sinh_\nTensor.sinh_() -> Tensor\n In-place version of \"sinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sinh_.html", "category": "pytorch docs"}
{"text": "ConvReLU1d\nclass torch.ao.nn.intrinsic.ConvReLU1d(conv, relu)\n This is a sequential container which calls the Conv1d and ReLU\n modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU1d.html", "category": "pytorch docs"}
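A sketch of how this container typically appears in practice: fusing an eval-mode `Conv1d` + `ReLU` pair with `fuse_modules` replaces the conv with a `ConvReLU1d` and the ReLU with an identity. The toy `Sequential` model is an assumption for illustration.

```python
import torch
from torch.quantization import fuse_modules

# fuse_modules expects an eval-mode model for post-training fusion.
m = torch.nn.Sequential(torch.nn.Conv1d(3, 8, 3), torch.nn.ReLU()).eval()

# Fuse submodules '0' (Conv1d) and '1' (ReLU) into one intrinsic module.
fused = fuse_modules(m, [["0", "1"]])
# fused[0] is a ConvReLU1d container; fused[1] became nn.Identity
```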
{"text": "torch.cuda.jiterator._create_jit_fn\ntorch.cuda.jiterator._create_jit_fn(code_string, **kwargs)\n Create a jiterator-generated cuda kernel for an elementwise op.\n The code string has to be a valid CUDA function that describes the\n computation for a single element. The code string has to follow the\n C++ template pattern, as shown in the example below. This function\n will be inlined into the elementwise kernel template, and compiled on\n the fly. The compiled kernel will be cached in memory, as well as in a\n local temp dir.\n Jiterator-generated kernels accept noncontiguous tensors, and\n support broadcasting and type promotion.\n Parameters:\n * code_string (str) -- CUDA code string to be compiled by\n jiterator. The entry functor must return by value.\n * kwargs (Dict, optional) -- Keyword arguments for\n generated function\n Return type:\n Callable\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html", "category": "pytorch docs"}
{"text": "code_string = \"template <typename T> T my_kernel(T x, T y, T alpha) { return -x + alpha * y; }\"\n jitted_fn = create_jit_fn(code_string, alpha=1.0)\n a = torch.rand(3, device='cuda')\n b = torch.rand(3, device='cuda')\n # invoke jitted function like a regular python function\n result = jitted_fn(a, b, alpha=3.14)\n code_string also allows multiple function definitions, and the last\n function will be treated as the entry function.\n Example:\n code_string = \"template <typename T> T util_fn(T x, T y) { return ::sin(x) + ::cos(y); }\"\n code_string += \"template <typename T> T my_kernel(T x, T y, T val) { return ::min(val, util_fn(x, y)); }\"\n jitted_fn = create_jit_fn(code_string, val=0.0)\n a = torch.rand(3, device='cuda')\n b = torch.rand(3, device='cuda')\n # invoke jitted function like a regular python function\n result = jitted_fn(a, b) # using default val=0.0", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html", "category": "pytorch docs"}
{"text": "Jiterator can be used together with python registration to override\n an operator's cuda kernel. The following example overrides gelu's\n cuda kernel with relu.\n Example:\n code_string = \"template <typename T> T my_gelu(T a) { return a > 0 ? a : 0; }\"\n my_gelu = create_jit_fn(code_string)\n my_lib = torch.library.Library(\"aten\", \"IMPL\")\n my_lib.impl('aten::gelu', my_gelu, \"CUDA\")\n # torch.nn.GELU and torch.nn.functional.gelu are now overridden\n a = torch.rand(3, device='cuda')\n torch.allclose(torch.nn.functional.gelu(a), torch.nn.functional.relu(a))\n Warning:\n This API is in beta and may change in future releases.\n Warning:\n This API only supports up to 8 inputs and 1 output.\n Warning:\n All input tensors must live on a CUDA device.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html", "category": "pytorch docs"}
{"text": "default_fake_quant\ntorch.quantization.fake_quantize.default_fake_quant\n alias of functools.partial(<class 'torch.ao.quantization.fake_quantize.FakeQuantize'>,\n observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,\n quant_min=0, quant_max=255, dtype=torch.quint8,\n qscheme=torch.per_tensor_affine, reduce_range=True){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fake_quant.html", "category": "pytorch docs"}
{"text": "torch.logaddexp\ntorch.logaddexp(input, other, *, out=None) -> Tensor\n Logarithm of the sum of exponentiations of the inputs.\n Calculates pointwise \\log\\left(e^x + e^y\\right). This function is\n useful in statistics where the calculated probabilities of events\n may be so small as to exceed the range of normal floating point\n numbers. In such cases the logarithm of the calculated probability\n is stored. This function allows adding probabilities stored in such\n a fashion.\n This op should be disambiguated with \"torch.logsumexp()\" which\n performs a reduction on a single tensor.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.logaddexp(torch.tensor([-1.0]), torch.tensor([-1.0, -2, -3]))\n tensor([-0.3069, -0.6867, -0.8731])", "source": "https://pytorch.org/docs/stable/generated/torch.logaddexp.html", "category": "pytorch docs"}
{"text": "tensor([-0.3069, -0.6867, -0.8731])\n >>> torch.logaddexp(torch.tensor([-100.0, -200, -300]), torch.tensor([-1.0, -2, -3]))\n tensor([-1., -2., -3.])\n >>> torch.logaddexp(torch.tensor([1.0, 2000, 30000]), torch.tensor([-1.0, -2, -3]))\n tensor([1.1269e+00, 2.0000e+03, 3.0000e+04])", "source": "https://pytorch.org/docs/stable/generated/torch.logaddexp.html", "category": "pytorch docs"}
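The numerical-stability motivation above can be shown directly: for very negative log-probabilities the naive formula underflows to `-inf`, while `logaddexp` stays finite.

```python
import torch

a = torch.tensor([-1000.0])
b = torch.tensor([-1000.0])

# Naive: exp(-1000) underflows to 0, so log(0 + 0) = -inf.
naive = torch.log(torch.exp(a) + torch.exp(b))

# Stable: log(e^a + e^b) = -1000 + log(2) ≈ -999.3069
stable = torch.logaddexp(a, b)
```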
{"text": "torch.nn.utils.prune.random_structuredtorch.nn.utils.prune.random_structured(module, name, amount, dim)\n Prunes tensor corresponding to parameter called \"name\" in \"module\"\n by removing the specified \"amount\" of (currently unpruned) channels\n along the specified \"dim\" selected at random. Modifies module in\n place (and also return the modified module) by:\n 1. adding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n 2. replacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * amount (int or float) -- quantity of parameters to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_structured.html", "category": "pytorch docs"}
{"text": "prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * dim (int) -- index of the dim along which we define\n channels to prune.\n Returns:\n modified (i.e. pruned) version of the input module\n Return type:\n module (nn.Module)\n -[ Examples ]-\n\n\n\nm = prune.random_structured(\n ... nn.Linear(5, 3), 'weight', amount=3, dim=1\n ... )\ncolumns_pruned = int(sum(torch.sum(m.weight, dim=0) == 0))\nprint(columns_pruned)\n 3\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_structured.html", "category": "pytorch docs"}
{"text": "torch.argmaxtorch.argmax(input) -> LongTensor\n Returns the indices of the maximum value of all elements in the\n \"input\" tensor.\n This is the second value returned by \"torch.max()\". See its\n documentation for the exact semantics of this method.\n Note:\n If there are multiple maximal values then the indices of the\n first maximal value are returned.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 1.3398, 0.2663, -0.2686, 0.2450],\n [-0.7401, -0.8805, -0.3402, -1.1936],\n [ 0.4907, -1.3948, -1.0691, -0.3132],\n [-1.6092, 0.5419, -0.2993, 0.3195]])\n >>> torch.argmax(a)\n tensor(0)\n torch.argmax(input, dim, keepdim=False) -> LongTensor\n Returns the indices of the maximum values of a tensor across a\n dimension.\n This is the second value returned by \"torch.max()\". See its\n documentation for the exact semantics of this method.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.argmax.html", "category": "pytorch docs"}
{"text": "Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce. If \"None\", the\n argmax of the flattened input is returned.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not. Ignored if \"dim=None\".\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 1.3398, 0.2663, -0.2686, 0.2450],\n [-0.7401, -0.8805, -0.3402, -1.1936],\n [ 0.4907, -1.3948, -1.0691, -0.3132],\n [-1.6092, 0.5419, -0.2993, 0.3195]])\n >>> torch.argmax(a, dim=1)\n tensor([ 0, 2, 0, 1])", "source": "https://pytorch.org/docs/stable/generated/torch.argmax.html", "category": "pytorch docs"}
{"text": "QuantWrapperclass torch.quantization.QuantWrapper(module)\n A wrapper class that wraps the input module, adds QuantStub and\n DeQuantStub and surround the call to module with call to quant and\n dequant modules.\n This is used by the quantization utility functions to add the\n quant and dequant modules, before convert function QuantStub\n will just be observer, it observes the input tensor, after\n convert, QuantStub will be swapped to nnq.Quantize which does\n actual quantization. Similarly for DeQuantStub.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.QuantWrapper.html", "category": "pytorch docs"}
{"text": "torch.Tensor.dsplitTensor.dsplit(split_size_or_sections) -> List of Tensors\n See \"torch.dsplit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dsplit.html", "category": "pytorch docs"}
{"text": "torch.Tensor.gt_Tensor.gt_(other) -> Tensor\n In-place version of \"gt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gt_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.signTensor.sign() -> Tensor\n See \"torch.sign()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sign.html", "category": "pytorch docs"}
{"text": "AdaptiveAvgPool3dclass torch.nn.AdaptiveAvgPool3d(output_size)\n Applies a 3D adaptive average pooling over an input signal composed\n of several input planes.\n The output is of size D x H x W, for any input size. The number of\n output features is equal to the number of input planes.\n Parameters:\n output_size (Union[int, None,\n Tuple[Optional[int], Optional[int],\n Optional[int]]]) -- the target output size of the\n form D x H x W. Can be a tuple (D, H, W) or a single number D\n for a cube D x D x D. D, H and W can be either a \"int\", or\n \"None\" which means the size will be the same as that of the\n input.\n Shape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, S_{0}, S_{1}, S_{2}) or (C, S_{0}, S_{1},\n S_{2}), where S=\\text{output_size}.\n -[ Examples ]-\n\n\n\ntarget output size of 5x7x9\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool3d.html", "category": "pytorch docs"}
{"text": "\n\n\ntarget output size of 5x7x9\nm = nn.AdaptiveAvgPool3d((5, 7, 9))\ninput = torch.randn(1, 64, 8, 9, 10)\noutput = m(input)\ntarget output size of 7x7x7 (cube)\nm = nn.AdaptiveAvgPool3d(7)\ninput = torch.randn(1, 64, 10, 9, 8)\noutput = m(input)\ntarget output size of 7x9x8\nm = nn.AdaptiveAvgPool3d((7, None, None))\ninput = torch.randn(1, 64, 10, 9, 8)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool3d.html", "category": "pytorch docs"}
{"text": "torch.bitwise_right_shifttorch.bitwise_right_shift(input, other, , out=None) -> Tensor\n Computes the right arithmetic shift of \"input\" by \"other\" bits. The\n input tensor must be of integral type. This operator supports\n broadcasting to a common shape and type promotion.\n The operation applied is:\n \\text{out}_i = \\text{input}_i >> \\text{other}_i\n Parameters:\n * input (Tensor or Scalar) -- the first input tensor\n * other (Tensor or Scalar) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> torch.bitwise_right_shift(torch.tensor([-2, -7, 31], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([-1, -7, 3], dtype=torch.int8)", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_right_shift.html", "category": "pytorch docs"}
{"text": "torch.Tensor.normal_Tensor.normal_(mean=0, std=1, *, generator=None) -> Tensor\n Fills \"self\" tensor with elements samples from the normal\n distribution parameterized by \"mean\" and \"std\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.normal_.html", "category": "pytorch docs"}
{"text": "hardsigmoidclass torch.ao.nn.quantized.functional.hardsigmoid(input, inplace=False)\n This is the quantized version of \"hardsigmoid()\".\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardsigmoid.html", "category": "pytorch docs"}
{"text": "celuclass torch.ao.nn.quantized.functional.celu(input, scale, zero_point, alpha=1.)\n Applies the quantized CELU function element-wise.\n \\text{CELU}(x) = \\max(0,x) + \\min(0, \\alpha * (\\exp(x / \\alpha)\n - 1))\n Parameters:\n * input (Tensor) -- quantized input\n * alpha (float) -- the \\alpha value for the CELU\n formulation. Default: 1.0\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.celu.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.huber_losstorch.nn.functional.huber_loss(input, target, reduction='mean', delta=1.0)\n Function that uses a squared term if the absolute element-wise\n error falls below delta and a delta-scaled L1 term otherwise.\n See \"HuberLoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.huber_loss.html", "category": "pytorch docs"}
{"text": "torch.linalg.lutorch.linalg.lu(A, , pivot=True, out=None)\n Computes the LU decomposition with partial pivoting of a matrix.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the LU\n decomposition with partial pivoting of a matrix A \\in\n \\mathbb{K}^{m \\times n} is defined as\n A = PLU\\mathrlap{\\qquad P \\in \\mathbb{K}^{m \\times m}, L \\in\n \\mathbb{K}^{m \\times k}, U \\in \\mathbb{K}^{k \\times n}}\n where k = min(m,n), P is a permutation matrix, L is lower\n triangular with ones on the diagonal and U is upper triangular.\n If \"pivot\"= False and \"A\" is on GPU, then the LU decomposition\n without pivoting is computed\n A = LU\\mathrlap{\\qquad L \\in \\mathbb{K}^{m \\times k}, U \\in\n \\mathbb{K}^{k \\times n}}\n When \"pivot\"= False, the returned matrix \"P\" will be empty. The\n LU decomposition without pivoting may not exist if any of the\n principal minors of \"A\" is singular. In this case, the output\n matrix may contain inf or NaN*.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu.html", "category": "pytorch docs"}
{"text": "matrix may contain inf or NaN.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n See also:\n \"torch.linalg.solve()\" solves a system of linear equations using\n the LU decomposition with partial pivoting.\n Warning:\n The LU decomposition is almost never unique, as often there are\n different permutation matrices that can yield different LU\n decompositions. As such, different platforms, like SciPy, or\n inputs on different devices, may produce different valid\n decompositions.\n Warning:\n Gradient computations are only supported if the input matrix is\n full-rank. If this condition is not met, no error will be thrown,\n but the gradient may not be finite. This is because the LU\n decomposition with pivoting is not differentiable at these\n points.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu.html", "category": "pytorch docs"}
{"text": "points.\n Parameters:\n * A (Tensor) -- tensor of shape (, m, n) where *** is\n zero or more batch dimensions.\n * pivot (bool, optional) -- Controls whether to\n compute the LU decomposition with partial pivoting or no\n pivoting. Default: True.\n Keyword Arguments:\n out (tuple, optional) -- output tuple of three\n tensors. Ignored if None. Default: None.\n Returns:\n A named tuple (P, L, U)*.\n Examples:\n >>> A = torch.randn(3, 2)\n >>> P, L, U = torch.linalg.lu(A)\n >>> P\n tensor([[0., 1., 0.],\n [0., 0., 1.],\n [1., 0., 0.]])\n >>> L\n tensor([[1.0000, 0.0000],\n [0.5007, 1.0000],\n [0.0633, 0.9755]])\n >>> U\n tensor([[0.3771, 0.0489],\n [0.0000, 0.9644]])\n >>> torch.dist(A, P @ L @ U)\n tensor(5.9605e-08)\n >>> A = torch.randn(2, 5, 7, device=\"cuda\")\n >>> P, L, U = torch.linalg.lu(A, pivot=False)\n >>> P", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu.html", "category": "pytorch docs"}
{"text": "\n\n\nP\n tensor([], device='cuda:0')\n >>> torch.dist(A, L @ U)\n tensor(1.0376e-06, device='cuda:0')\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu.html", "category": "pytorch docs"}
{"text": "torch.Tensor.data_ptrTensor.data_ptr() -> int\n Returns the address of the first element of \"self\" tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.data_ptr.html", "category": "pytorch docs"}
{"text": "quantizeclass torch.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False)\n Quantize the input float model with post training static\n quantization.\n First it will prepare the model for calibration, then it calls\n run_fn which will run the calibration step, after that we will\n convert the model to a quantized model.\n Parameters:\n * model -- input float model\n * run_fn -- a calibration function for calibrating the\n prepared model\n * run_args -- positional arguments for run_fn\n * inplace -- carry out model transformations in-place, the\n original module is mutated\n * mapping -- correspondence between original module types\n and quantized counterparts\n Returns:\n Quantized model.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize.html", "category": "pytorch docs"}
{"text": "torch.log1ptorch.log1p(input, , out=None) -> Tensor\n Returns a new tensor with the natural logarithm of (1 + \"input\").\n y_i = \\log_{e} (x_i + 1)\n Note:\n This function is more accurate than \"torch.log()\" for small\n values of \"input\"\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(5)\n >>> a\n tensor([-1.0090, -0.9923, 1.0249, -0.5372, 0.2492])\n >>> torch.log1p(a)\n tensor([ nan, -4.8653, 0.7055, -0.7705, 0.2225])", "source": "https://pytorch.org/docs/stable/generated/torch.log1p.html", "category": "pytorch docs"}
{"text": "torch.diagflattorch.diagflat(input, offset=0) -> Tensor\n * If \"input\" is a vector (1-D tensor), then returns a 2-D square\n tensor with the elements of \"input\" as the diagonal.\n * If \"input\" is a tensor with more than one dimension, then returns\n a 2-D tensor with diagonal elements equal to a flattened \"input\".\n The argument \"offset\" controls which diagonal to consider:\n * If \"offset\" = 0, it is the main diagonal.\n * If \"offset\" > 0, it is above the main diagonal.\n * If \"offset\" < 0, it is below the main diagonal.\n Parameters:\n * input (Tensor) -- the input tensor.\n * offset (int, optional) -- the diagonal to consider.\n Default: 0 (main diagonal).\n Examples:\n >>> a = torch.randn(3)\n >>> a\n tensor([-0.2956, -0.9068, 0.1695])\n >>> torch.diagflat(a)\n tensor([[-0.2956, 0.0000, 0.0000],\n [ 0.0000, -0.9068, 0.0000],\n [ 0.0000, 0.0000, 0.1695]])\n >>> torch.diagflat(a, 1)", "source": "https://pytorch.org/docs/stable/generated/torch.diagflat.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.diagflat(a, 1)\n tensor([[ 0.0000, -0.2956, 0.0000, 0.0000],\n [ 0.0000, 0.0000, -0.9068, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.1695],\n [ 0.0000, 0.0000, 0.0000, 0.0000]])\n >>> a = torch.randn(2, 2)\n >>> a\n tensor([[ 0.2094, -0.3018],\n [-0.1516, 1.9342]])\n >>> torch.diagflat(a)\n tensor([[ 0.2094, 0.0000, 0.0000, 0.0000],\n [ 0.0000, -0.3018, 0.0000, 0.0000],\n [ 0.0000, 0.0000, -0.1516, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 1.9342]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.diagflat.html", "category": "pytorch docs"}
{"text": "torch.Tensor.masked_scatter_Tensor.masked_scatter_(mask, source)\n Copies elements from \"source\" into \"self\" tensor at positions where\n the \"mask\" is True. The shape of \"mask\" must be broadcastable with\n the shape of the underlying tensor. The \"source\" should have at\n least as many elements as the number of ones in \"mask\"\n Parameters:\n * mask (BoolTensor) -- the boolean mask\n * source (Tensor) -- the tensor to copy from\n Note:\n The \"mask\" operates on the \"self\" tensor, not on the given\n \"source\" tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_scatter_.html", "category": "pytorch docs"}
{"text": "dual_levelclass torch.autograd.forward_ad.dual_level\n Context-manager that enables forward AD. All forward AD computation\n must be performed in a \"dual_level\" context.\n Note:\n The \"dual_level\" context appropriately enters and exit the dual\n level to controls the current forward AD level, which is used by\n default by the other functions in this API.We currently don't\n plan to support nested \"dual_level\" contexts, however, so only a\n single forward AD level is supported. To compute higher-order\n forward grads, one can use \"torch.func.jvp()\".\n Example:\n >>> x = torch.tensor([1])\n >>> x_t = torch.tensor([1])\n >>> with dual_level():\n ... inp = make_dual(x, x_t)\n ... # Do computations with inp\n ... out = your_fn(inp)\n ... , grad = unpack_dual(out)\n >>> grad is None\n False\n >>> # After exiting the level, the grad is deleted\n >>> , grad_after = unpack_dual(out)\n >>> grad is None\n True", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.dual_level.html", "category": "pytorch docs"}
{"text": "\n\n\ngrad is None\n True\n Please see the forward-mode AD tutorial for detailed steps on how\n to use this API.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.dual_level.html", "category": "pytorch docs"}
{"text": "torch.Tensor.share_memory_Tensor.share_memory_()\n Moves the underlying storage to shared memory.\n This is a no-op if the underlying storage is already in shared\n memory and for CUDA tensors. Tensors in shared memory cannot be\n resized.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.share_memory_.html", "category": "pytorch docs"}
{"text": "LBFGSclass torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None)\n Implements L-BFGS algorithm, heavily inspired by minFunc.\n Warning:\n This optimizer doesn't support per-parameter options and\n parameter groups (there can be only one).\n Warning:\n Right now all parameters have to be on a single device. This will\n be improved in the future.\n Note:\n This is a very memory intensive optimizer (it requires additional\n \"param_bytes * (history_size + 1)\" bytes). If it doesn't fit in\n memory try reducing the history size, or use a different\n algorithm.\n Parameters:\n * lr (float) -- learning rate (default: 1)\n * max_iter (int) -- maximal number of iterations per\n optimization step (default: 20)\n * max_eval (int) -- maximal number of function evaluations\n per optimization step (default: max_iter * 1.25).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"}
{"text": "\ntolerance_grad (float) -- termination tolerance on first\n order optimality (default: 1e-5).\ntolerance_change (float) -- termination tolerance on\n function value/parameter changes (default: 1e-9).\nhistory_size (int) -- update history size (default:\n 100).\nline_search_fn (str) -- either 'strong_wolfe' or None\n (default: None).\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"}
{"text": "object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"}
{"text": "transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n step(closure)\n Performs a single optimization step.\n Parameters:\n closure (Callable) -- A closure that reevaluates the\n model and returns the loss.\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"}
{"text": "Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addrTensor.addr(vec1, vec2, *, beta=1, alpha=1) -> Tensor\n See \"torch.addr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addr.html", "category": "pytorch docs"}
{"text": "torch.Tensor.typeTensor.type(dtype=None, non_blocking=False, kwargs) -> str or Tensor\n Returns the type if dtype is not provided, else casts this object\n to the specified type.\n If this is already of the correct type, no copy is performed and\n the original object is returned.\n Parameters:\n * dtype (dtype or string) -- The desired type\n * non_blocking (bool*) -- If \"True\", and the source is in\n pinned memory and destination is on the GPU or vice versa, the\n copy is performed asynchronously with respect to the host.\n Otherwise, the argument has no effect.\n * kwargs* -- For compatibility, may contain the key \"async\"\n in place of the \"non_blocking\" argument. The \"async\" arg is\n deprecated.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.type.html", "category": "pytorch docs"}
{"text": "torch.Tensor.narrow_copyTensor.narrow_copy(dimension, start, length) -> Tensor\n See \"torch.narrow_copy()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.narrow_copy.html", "category": "pytorch docs"}
{"text": "LazyInstanceNorm3dclass torch.nn.LazyInstanceNorm3d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n A \"torch.nn.InstanceNorm3d\" module with lazy initialization of the\n \"num_features\" argument of the \"InstanceNorm3d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight, bias, running_mean and running_var.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * num_features -- C from an expected input of size (N, C, D,\n H, W) or (C, D, H, W)\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm3d.html", "category": "pytorch docs"}
{"text": "initialized the same way as done for batch normalization.\n Default: \"False\".\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n Shape:\n * Input: (N, C, D, H, W) or (C, D, H, W)\n * Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input)\n cls_to_become\n alias of \"InstanceNorm3d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm3d.html", "category": "pytorch docs"}
{"text": "ConstantPad2dclass torch.nn.ConstantPad2d(padding, value)\n Pads the input tensor boundaries with a constant value.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 4-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom})\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n H_{out} = H_{in} + \\text{padding_top} +\n \\text{padding_bottom}\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ConstantPad2d(2, 3.5)\n >>> input = torch.randn(1, 2, 2)\n >>> input\n tensor([[[ 1.6585, 0.4320],\n [-0.8701, -0.4649]]])\n >>> m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad2d.html", "category": "pytorch docs"}
{"text": "\n\n\nm(input)\n tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 1.6585, 0.4320, 3.5000, 3.5000],\n [ 3.5000, 3.5000, -0.8701, -0.4649, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]])\n >>> # using different paddings for different sides\n >>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5)\n >>> m(input)\n tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 1.6585, 0.4320],\n [ 3.5000, 3.5000, 3.5000, -0.8701, -0.4649],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.polygamma_Tensor.polygamma_(n) -> Tensor\n In-place version of \"polygamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.polygamma_.html", "category": "pytorch docs"}
{"text": "GRUCellclass torch.nn.GRUCell(input_size, hidden_size, bias=True, device=None, dtype=None)\n A gated recurrent unit (GRU) cell\n \\begin{array}{ll} r = \\sigma(W_{ir} x + b_{ir} + W_{hr} h +\n b_{hr}) \\ z = \\sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\\n n = \\tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' =\n (1 - z) * n + z * h \\end{array}\n where \\sigma is the sigmoid function, and * is the Hadamard\n product.\n Parameters:\n * input_size (int) -- The number of expected features in\n the input x\n * hidden_size (int) -- The number of features in the\n hidden state h\n * bias (bool) -- If \"False\", then the layer does not use\n bias weights b_ih and b_hh. Default: \"True\"\n Inputs: input, hidden\n * input : tensor containing input features\n * hidden : tensor containing the initial hidden state for\n each element in the batch. Defaults to zero if not provided.\n Outputs: h'", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html", "category": "pytorch docs"}
{"text": "Outputs: h'\n * h' : tensor containing the next hidden state for each\n element in the batch\n Shape:\n * input: (N, H_{in}) or (H_{in}) tensor containing input\n features where H_{in} = input_size.\n * hidden: (N, H_{out}) or (H_{out}) tensor containing the\n initial hidden state where H_{out} = hidden_size. Defaults\n to zero if not provided.\n * output: (N, H_{out}) or (H_{out}) tensor containing the next\n hidden state.\n Variables:\n * weight_ih (torch.Tensor) -- the learnable input-hidden\n weights, of shape (3hidden_size, input_size)\n * weight_hh (torch.Tensor) -- the learnable hidden-hidden\n weights, of shape (3hidden_size, hidden_size)\n * bias_ih -- the learnable input-hidden bias, of shape\n (3hidden_size)\n * bias_hh -- the learnable hidden-hidden bias, of shape\n (3hidden_size)\n Note:\n All the weights and biases are initialized from", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html", "category": "pytorch docs"}
{"text": "\\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden_size}}\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n Examples:\n >>> rnn = nn.GRUCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx = rnn(input[i], hx)\n ... output.append(hx)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html", "category": "pytorch docs"}
{"text": "torch.erfinvtorch.erfinv(input, *, out=None) -> Tensor\n Alias for \"torch.special.erfinv()\".", "source": "https://pytorch.org/docs/stable/generated/torch.erfinv.html", "category": "pytorch docs"}
{"text": "torch.Tensor.asin_Tensor.asin_() -> Tensor\n In-place version of \"asin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.asin_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.smmTensor.smm(mat) -> Tensor\n See \"torch.smm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.smm.html", "category": "pytorch docs"}
{"text": "torch.fft.ifftshifttorch.fft.ifftshift(input, dim=None) -> Tensor\n Inverse of \"fftshift()\".\n Parameters:\n * input (Tensor) -- the tensor in FFT order\n * dim (int, Tuple[int], optional) -- The\n dimensions to rearrange. Only dimensions specified here will\n be rearranged, any other dimensions will be left in their\n original order. Default: All dimensions of \"input\".\n -[ Example ]-\n\n\n\nf = torch.fft.fftfreq(5)\nf\n tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])\n A round-trip through \"fftshift()\" and \"ifftshift()\" gives the same\n result:\nshifted = torch.fft.fftshift(f)\ntorch.fft.ifftshift(shifted)\n tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifftshift.html", "category": "pytorch docs"}
{"text": "torch.Tensor.repeatTensor.repeat(sizes) -> Tensor\n Repeats this tensor along the specified dimensions.\n Unlike \"expand()\", this function copies the tensor's data.\n Warning:\n \"repeat()\" behaves differently from numpy.repeat, but is more\n similar to numpy.tile. For the operator similar to\n numpy.repeat, see \"torch.repeat_interleave()\".\n Parameters:\n sizes (torch.Size or int*...) -- The number of times\n to repeat this tensor along each dimension\n Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> x.repeat(4, 2)\n tensor([[ 1, 2, 3, 1, 2, 3],\n [ 1, 2, 3, 1, 2, 3],\n [ 1, 2, 3, 1, 2, 3],\n [ 1, 2, 3, 1, 2, 3]])\n >>> x.repeat(4, 2, 1).size()\n torch.Size([4, 2, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.repeat.html", "category": "pytorch docs"}
{"text": "torch.func.jacrevtorch.func.jacrev(func, argnums=0, *, has_aux=False, chunk_size=None, _preallocate_and_copy=False)\n Computes the Jacobian of \"func\" with respect to the arg(s) at index\n \"argnums\" using reverse mode autodiff\n Note:\n Using \"chunk_size=1\" is equivalent to computing the jacobian row-\n by-row with a for-loop i.e. the constraints of \"vmap()\" are not\n applicable.\n Parameters:\n * func (function) -- A Python function that takes one or\n more arguments, one of which must be a Tensor, and returns one\n or more Tensors\n * argnums (int or Tuple[int]) -- Optional,\n integer or tuple of integers, saying which arguments to get\n the Jacobian with respect to. Default: 0.\n * has_aux (bool) -- Flag indicating that \"func\" returns a\n \"(output, aux)\" tuple where the first element is the output of\n the function to be differentiated and the second element is", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"}
{"text": "auxiliary objects that will not be differentiated. Default:\n False.\n * chunk_size (None or int) -- If None (default), use\n the maximum chunk size (equivalent to doing a single vmap over\n vjp to compute the jacobian). If 1, then compute the jacobian\n row-by-row with a for-loop. If not None, then compute the\n jacobian \"chunk_size\" rows at a time (equivalent to doing\n multiple vmap over vjp). If you run into memory issues\n computing the jacobian, please try to specify a non-None\n chunk_size.\n Returns:\n Returns a function that takes in the same inputs as \"func\" and\n returns the Jacobian of \"func\" with respect to the arg(s) at\n \"argnums\". If \"has_aux is True\", then the returned function\n instead returns a \"(jacobian, aux)\" tuple where \"jacobian\" is\n the Jacobian and \"aux\" is auxiliary objects returned by \"func\".\n A basic usage with a pointwise, unary operation will give a\n diagonal array as the Jacobian", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"}
{"text": "diagonal array as the Jacobian\n\n\n\nfrom torch.func import jacrev\nx = torch.randn(5)\njacobian = jacrev(torch.sin)(x)\nexpected = torch.diag(torch.cos(x))\nassert torch.allclose(jacobian, expected)\n If you would like to compute the output of the function as well as\n the jacobian of the function, use the \"has_aux\" flag to return the\n output as an auxiliary object:\nfrom torch.func import jacrev\nx = torch.randn(5)\ndef f(x):\n return x.sin()\ndef g(x):\n result = f(x)\n return result, result\njacobian_f, f_x = jacrev(g, has_aux=True)(x)\nassert torch.allclose(f_x, f(x))\n \"jacrev()\" can be composed with vmap to produce batched Jacobians:\nfrom torch.func import jacrev, vmap\nx = torch.randn(64, 5)\njacobian = vmap(jacrev(torch.sin))(x)\nassert jacobian.shape == (64, 5, 5)\n Additionally, \"jacrev()\" can be composed with itself to produce\n Hessians\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"}
{"text": "Hessians\n\n\n\nfrom torch.func import jacrev\ndef f(x):\n return x.sin().sum()\nx = torch.randn(5)\nhessian = jacrev(jacrev(f))(x)\nassert torch.allclose(hessian, torch.diag(-x.sin()))\n By default, \"jacrev()\" computes the Jacobian with respect to the\n first input. However, it can compute the Jacobian with respect to a\n different argument by using \"argnums\":\nfrom torch.func import jacrev\ndef f(x, y):\n return x + y ** 2\nx, y = torch.randn(5), torch.randn(5)\njacobian = jacrev(f, argnums=1)(x, y)\nexpected = torch.diag(2 * y)\nassert torch.allclose(jacobian, expected)\n Additionally, passing a tuple to \"argnums\" will compute the\n Jacobian with respect to multiple arguments\nfrom torch.func import jacrev\ndef f(x, y):\n return x + y ** 2\nx, y = torch.randn(5), torch.randn(5)\njacobian = jacrev(f, argnums=(0, 1))(x, y)\nexpectedX = torch.diag(torch.ones_like(x))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"}
{"text": "\n\n\nexpectedX = torch.diag(torch.ones_like(x))\nexpectedY = torch.diag(2 * y)\nassert torch.allclose(jacobian[0], expectedX)\nassert torch.allclose(jacobian[1], expectedY)\n Note:\n Using PyTorch \"torch.no_grad\" together with \"jacrev\". Case 1:\n Using \"torch.no_grad\" inside a function:\n >>> def f(x):\n >>> with torch.no_grad():\n >>> c = x ** 2\n >>> return x - c\n In this case, \"jacrev(f)(x)\" will respect the inner\n \"torch.no_grad\".Case 2: Using \"jacrev\" inside \"torch.no_grad\"\n context manager:\n >>> with torch.no_grad():\n >>> jacrev(f)(x)\n In this case, \"jacrev\" will respect the inner \"torch.no_grad\",\n but not the outer one. This is because \"jacrev\" is a \"function\n transform\": its result should not depend on the result of a\n context manager outside of \"f\".\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"}
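The claim above that a pointwise unary op like `torch.sin` yields a diagonal Jacobian can be checked without `jacrev` at all, using central finite differences. This is a pure-Python sketch of ours for illustration (`numerical_jacobian` and `sin_vec` are hypothetical helpers, not part of `torch.func`):

```python
import math

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of a vector function f: list -> list."""
    n = len(x)
    jac = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(n):
            jac[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return jac

# Elementwise sine: output i depends only on input i, so the
# Jacobian is diagonal with entries cos(x_i).
sin_vec = lambda v: [math.sin(t) for t in v]
x = [0.1, 0.2, 0.3]
jac = numerical_jacobian(sin_vec, x)
assert all(abs(jac[i][j]) < 1e-6 for i in range(3) for j in range(3) if i != j)
assert all(abs(jac[i][i] - math.cos(x[i])) < 1e-4 for i in range(3))
```

`jacrev(torch.sin)(x)` computes the same diagonal structure exactly, via reverse-mode autodiff rather than differencing.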
{"text": "Conv2dclass torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n Applies a 2D convolution over an input signal composed of several\n input planes.\n In the simplest case, the output value of the layer with input size\n (N, C_{\\text{in}}, H, W) and output (N, C_{\\text{out}},\n H_{\\text{out}}, W_{\\text{out}}) can be precisely described as:\n \\text{out}(N_i, C_{\\text{out}_j}) =\n \\text{bias}(C_{\\text{out}_j}) + \\sum_{k = 0}^{C_{\\text{in}} - 1}\n \\text{weight}(C_{\\text{out}_j}, k) \\star \\text{input}(N_i, k)\n where \\star is the valid 2D cross-correlation operator, N is a\n batch size, C denotes a number of channels, H is a height of input\n planes in pixels, and W is width in pixels.\n This module supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"}
{"text": "use different precision for backward.\n * \"stride\" controls the stride for the cross-correlation, a single\n number or a tuple.\n * \"padding\" controls the amount of padding applied to the input. It\n can be either a string {'valid', 'same'} or an int / a tuple of\n ints giving the amount of implicit padding applied on both sides.\n * \"dilation\" controls the spacing between the kernel points; also\n known as the à trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n * \"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". For example,\n * At groups=1, all inputs are convolved to all outputs.\n * At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"}
{"text": "subsequently concatenated.\n * At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\n The parameters \"kernel_size\", \"stride\", \"padding\", \"dilation\" can\n either be:\n * a single \"int\" -- in which case the same value is used for the\n height and width dimension\n * a \"tuple\" of two ints -- in which case, the first int is\n used for the height dimension, and the second int for the\n width dimension\n Note:\n When groups == in_channels and out_channels == K *\n in_channels, where K is a positive integer, this operation is\n also known as a \"depthwise convolution\".In other words, for an\n input of size (N, C_{in}, L_{in}), a depthwise convolution with a\n depthwise multiplier K can be performed with the arguments\n (C_\\text{in}=C_\\text{in}, C_\\text{out}=C_\\text{in} \\times\n \\text{K}, ..., \\text{groups}=C_\\text{in}).\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"}
{"text": "Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Note:\n \"padding='valid'\" is the same as no padding. \"padding='same'\"\n pads the input so the output has the same shape as the input.\n However, this mode doesn't support any stride values other than\n 1.\n Note:\n This module supports complex data types i.e. \"complex32,\n complex64, complex128\".\n Parameters:\n * in_channels (int) -- Number of channels in the input\n image\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"}
{"text": "kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int, tuple or str, optional) --\n Padding added to all four sides of the input. Default: 0\n * padding_mode (str, optional) -- \"'zeros'\",\n \"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n Shape:\n * Input: (N, C_{in}, H_{in}, W_{in}) or (C_{in}, H_{in}, W_{in})\n * Output: (N, C_{out}, H_{out}, W_{out}) or (C_{out}, H_{out},\n W_{out}), where\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"}
{"text": "\\text{padding}[0] - \\text{dilation}[0] \\times\n (\\text{kernel_size}[0] - 1) - 1}{\\text{stride}[0]} +\n 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[1] - \\text{dilation}[1] \\times\n (\\text{kernel_size}[1] - 1) - 1}{\\text{stride}[1]} +\n 1\\right\\rfloor\n Variables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{out_channels},\n \\frac{\\text{in_channels}}{\\text{groups}},\n \\text{kernel_size[0]}, \\text{kernel_size[1]}). The values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{in} *\n \\prod_{i=0}^{1}\\text{kernel_size}[i]}\n * bias (Tensor) -- the learnable bias of the module of\n shape (out_channels). If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"}
{"text": "\\sqrt{k}) where k = \\frac{groups}{C_\\text{in} *\n \\prod_{i=0}^{1}\\text{kernel_size}[i]}\n -[ Examples ]-\n >>> # With square kernels and equal stride\n >>> m = nn.Conv2d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> # non-square kernels and unequal stride and with padding and dilation\n >>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))\n >>> input = torch.randn(20, 16, 50, 100)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"}
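The H_out/W_out floor formulas in the Conv2d shape section can be checked without instantiating a module. A minimal pure-Python sketch (the helper name `conv2d_output_size` is ours, not a torch API):

```python
import math

def conv2d_output_size(size, kernel_size, stride=1, padding=0, dilation=1):
    """Output length along one spatial dim, per the Conv2d shape formula:
    floor((size + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)."""
    return math.floor(
        (size + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1
    )

# Mirror the Conv2d example with input (20, 16, 50, 100) and
# Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1)):
h_out = conv2d_output_size(50, 3, stride=2, padding=4, dilation=3)
w_out = conv2d_output_size(100, 5, stride=1, padding=2, dilation=1)
print((h_out, w_out))  # -> (26, 100), so output.shape == (20, 33, 26, 100)
```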
{"text": "torch.Tensor.detach_Tensor.detach_()\n Detaches the Tensor from the graph that created it, making it a\n leaf. Views cannot be detached in-place.\n This method also affects forward mode AD gradients and the result\n will never have forward mode AD gradients.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.detach_.html", "category": "pytorch docs"}
{"text": "torch.modetorch.mode(input, dim=-1, keepdim=False, *, out=None)\n Returns a namedtuple \"(values, indices)\" where \"values\" is the mode\n value of each row of the \"input\" tensor in the given dimension\n \"dim\", i.e. a value which appears most often in that row, and\n \"indices\" is the index location of each mode value found.\n By default, \"dim\" is the last dimension of the \"input\" tensor.\n If \"keepdim\" is \"True\", the output tensors are of the same size as\n \"input\" except in the dimension \"dim\" where they are of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensors having 1 fewer dimension than \"input\".\n Note:\n This function is not defined for \"torch.cuda.Tensor\" yet.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.mode.html", "category": "pytorch docs"}
{"text": "retained or not.\n Keyword Arguments:\n out (tuple, optional) -- the result tuple of two\n output tensors (values, indices)\n Example:\n >>> a = torch.randint(10, (5,))\n >>> a\n tensor([6, 5, 1, 0, 2])\n >>> b = a + (torch.randn(50, 1) * 5).long()\n >>> torch.mode(b, 0)\n torch.return_types.mode(values=tensor([6, 5, 1, 0, 2]), indices=tensor([2, 2, 2, 2, 2]))", "source": "https://pytorch.org/docs/stable/generated/torch.mode.html", "category": "pytorch docs"}
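The per-row definition above (most frequent value plus an index where it occurs) can be sketched in pure Python. `row_mode` is our illustrative helper, not a torch API, and its tie/index behavior is only one possible choice — torch.mode does not document which index it returns:

```python
from collections import Counter

def row_mode(row):
    """Return (value, index): the most frequent value in `row` and one
    index where it occurs (we pick the last occurrence for illustration)."""
    value, _ = Counter(row).most_common(1)[0]
    index = len(row) - 1 - row[::-1].index(value)
    return value, index

print(row_mode([6, 5, 1, 6, 2]))  # -> (6, 3): 6 appears twice, last at index 3
```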
{"text": "torch.signal.windows.cosinetorch.signal.windows.cosine(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes a window with a simple cosine waveform. Also known as the\n sine window.\n The cosine window is defined as follows:\n w_n = \\cos{\\left(\\frac{\\pi (n + 0.5)}{M} - \\frac{\\pi}{2}\\right)} =\n \\sin{\\left(\\frac{\\pi (n + 0.5)}{M}\\right)}\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * dtype (\"torch.dtype\", optional) -- the desired data type", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html", "category": "pytorch docs"}
{"text": "of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric cosine window.\n >>> torch.signal.windows.cosine(10)\n tensor([0.1564, 0.4540, 0.7071, 0.8910, 0.9877, 0.9877, 0.8910, 0.7071, 0.4540, 0.1564])\n >>> # Generates a periodic cosine window.", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html", "category": "pytorch docs"}
{"text": "\n\n\n>>> # Generates a periodic cosine window.\n >>> torch.signal.windows.cosine(10, sym=False)\n tensor([0.1423, 0.4154, 0.6549, 0.8413, 0.9595, 1.0000, 0.9595, 0.8413, 0.6549, 0.4154])\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html", "category": "pytorch docs"}
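The printed window values can be reproduced in pure Python: they correspond to sampling the sine at half-integer points, w_n = sin(π(n + 0.5)/M), with M replaced by M + 1 when sym=False. This reconstruction (`cosine_window`) is our sketch inferred from the example outputs, not the torch implementation:

```python
import math

def cosine_window(M, sym=True):
    """Cosine (a.k.a. sine) window sampled at half-integer points.
    sym=False uses M + 1 in the denominator, giving a periodic window."""
    denom = M if sym else M + 1
    return [math.sin(math.pi * (n + 0.5) / denom) for n in range(M)]

sym_w = cosine_window(10)
per_w = cosine_window(10, sym=False)
# Leading entries match the documented tensors to 4 decimals:
assert abs(sym_w[0] - 0.1564) < 5e-4
assert abs(per_w[0] - 0.1423) < 5e-4
assert abs(per_w[4] - 0.9595) < 5e-4
```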
{"text": "torch.powtorch.pow(input, exponent, *, out=None) -> Tensor\n Takes the power of each element in \"input\" with \"exponent\" and\n returns a tensor with the result.\n \"exponent\" can be either a single \"float\" number or a Tensor with\n the same number of elements as \"input\".\n When \"exponent\" is a scalar value, the operation applied is:\n \\text{out}_i = x_i ^ \\text{exponent}\n When \"exponent\" is a tensor, the operation applied is:\n \\text{out}_i = x_i ^ {\\text{exponent}_i}\n When \"exponent\" is a tensor, the shapes of \"input\" and \"exponent\"\n must be broadcastable.\n Parameters:\n * input (Tensor) -- the input tensor.\n * exponent (float or tensor) -- the exponent value\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.4331, 1.2475, 0.6834, -0.2791])\n >>> torch.pow(a, 2)\n tensor([ 0.1875, 1.5561, 0.4670, 0.0779])", "source": "https://pytorch.org/docs/stable/generated/torch.pow.html", "category": "pytorch docs"}
{"text": "tensor([ 0.1875, 1.5561, 0.4670, 0.0779])\n >>> exp = torch.arange(1., 5.)\n >>> a = torch.arange(1., 5.)\n >>> a\n tensor([ 1., 2., 3., 4.])\n >>> exp\n tensor([ 1., 2., 3., 4.])\n >>> torch.pow(a, exp)\n tensor([ 1., 4., 27., 256.])\n torch.pow(self, exponent, , out=None) -> Tensor\n \"self\" is a scalar \"float\" value, and \"exponent\" is a tensor. The\n returned tensor \"out\" is of the same shape as \"exponent\"\n The operation applied is:\n \\text{out}_i = \\text{self} ^ {\\text{exponent}_i}\n Parameters:\n * self (float) -- the scalar base value for the power\n operation\n * exponent (Tensor) -- the exponent tensor\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> exp = torch.arange(1., 5.)\n >>> base = 2\n >>> torch.pow(base, exp)\n tensor([ 2., 4., 8., 16.])", "source": "https://pytorch.org/docs/stable/generated/torch.pow.html", "category": "pytorch docs"}
{"text": "torch.logsumexptorch.logsumexp(input, dim, keepdim=False, *, out=None)\n Returns the log of summed exponentials of each row of the \"input\"\n tensor in the given dimension \"dim\". The computation is numerically\n stabilized.\n For summation index j given by dim and other indices i, the\n result is\n \\text{logsumexp}(x)_{i} = \\log \\sum_j \\exp(x_{ij})\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.logsumexp.html", "category": "pytorch docs"}
{"text": "retained or not.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(3, 3)\n >>> torch.logsumexp(a, 1)\n tensor([1.4907, 1.0593, 1.5696])\n >>> torch.dist(torch.logsumexp(a, 1), torch.log(torch.sum(torch.exp(a), 1)))\n tensor(1.6859e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.logsumexp.html", "category": "pytorch docs"}
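The "numerically stabilized" claim in the logsumexp entry refers to the standard max-shift identity: log Σ_j exp(x_j) = m + log Σ_j exp(x_j − m) with m = max_j x_j. A pure-Python sketch (our helper, not the torch kernel):

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(xs))) via the max-shift identity."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# A naive math.log(sum(math.exp(x) for x in xs)) overflows for x ~ 1000;
# the shifted form only ever exponentiates values <= 0:
print(logsumexp([1000.0, 1000.0]))  # -> 1000 + log(2) ~= 1000.6931
```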
{"text": "torch.Tensor.clampTensor.clamp(min=None, max=None) -> Tensor\n See \"torch.clamp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clamp.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cdoubleTensor.cdouble(memory_format=torch.preserve_format) -> Tensor\n \"self.cdouble()\" is equivalent to \"self.to(torch.complex128)\". See\n \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cdouble.html", "category": "pytorch docs"}
{"text": "torch.Tensor.inverseTensor.inverse() -> Tensor\n See \"torch.inverse()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.inverse.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.triplet_margin_losstorch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')\n See \"TripletMarginLoss\" for details\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.triplet_margin_loss.html", "category": "pytorch docs"}
{"text": "torch.Tensor.gradTensor.grad\n This attribute is \"None\" by default and becomes a Tensor the first\n time a call to \"backward()\" computes gradients for \"self\". The\n attribute will then contain the gradients computed and future calls\n to \"backward()\" will accumulate (add) gradients into it.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.grad.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sigmoid_Tensor.sigmoid_() -> Tensor\n In-place version of \"sigmoid()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sigmoid_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bincountTensor.bincount(weights=None, minlength=0) -> Tensor\n See \"torch.bincount()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bincount.html", "category": "pytorch docs"}
{"text": "torch.cuda.memory_cachedtorch.cuda.memory_cached(device=None)\n Deprecated; see \"memory_reserved()\".\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_cached.html", "category": "pytorch docs"}
{"text": "torch.Tensor.shortTensor.short(memory_format=torch.preserve_format) -> Tensor\n \"self.short()\" is equivalent to \"self.to(torch.int16)\". See \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.short.html", "category": "pytorch docs"}
{"text": "torch.cuda.set_rng_statetorch.cuda.set_rng_state(new_state, device='cuda')\n Sets the random number generator state of the specified GPU.\n Parameters:\n * new_state (torch.ByteTensor) -- The desired state\n * device (torch.device or int, optional) -- The\n device to set the RNG state. Default: \"'cuda'\" (i.e.,\n \"torch.device('cuda')\", the current CUDA device).", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_rng_state.html", "category": "pytorch docs"}
{"text": "torch.unbindtorch.unbind(input, dim=0) -> seq\n Removes a tensor dimension.\n Returns a tuple of all slices along a given dimension, already\n without it.\n Parameters:\n * input (Tensor) -- the tensor to unbind\n * dim (int) -- dimension to remove\n Example:\n >>> torch.unbind(torch.tensor([[1, 2, 3],\n ... [4, 5, 6],\n ... [7, 8, 9]]))\n (tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9]))", "source": "https://pytorch.org/docs/stable/generated/torch.unbind.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logical_xorTensor.logical_xor() -> Tensor\n See \"torch.logical_xor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_xor.html", "category": "pytorch docs"}
{"text": "LnStructuredclass torch.nn.utils.prune.LnStructured(amount, n, dim=-1)\n Prune entire (currently unpruned) channels in a tensor based on\n their L\"n\"-norm.\n Parameters:\n * amount (int or float) -- quantity of channels to\n prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * n (int, float, inf, -inf, 'fro',\n 'nuc') -- See documentation of valid entries for argument\n \"p\" in \"torch.norm()\".\n * dim (int, optional) -- index of the dim along which\n we define channels to prune. Default: -1.\n classmethod apply(module, name, amount, n, dim, importance_scores=None)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"}
{"text": "Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * amount (int or float) -- quantity of parameters\n to prune. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\n * n (int, float, inf, -inf, 'fro',\n 'nuc') -- See documentation of valid entries for\n argument \"p\" in \"torch.norm()\".\n * dim (int) -- index of the dim along which we define\n channels to prune.\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as module parameter) used\n to compute mask for pruning. The values in this tensor\n indicate the importance of the corresponding elements in", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"}
{"text": "the parameter being pruned. If unspecified or None, the\n module parameter will be used in its place.\n apply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n Parameters:\n module (nn.Module) -- module containing the tensor to\n prune\n Returns:\n pruned version of the input tensor\n Return type:\n pruned_tensor (torch.Tensor)\n compute_mask(t, default_mask)\n Computes and returns a mask for the input tensor \"t\". Starting\n from a base \"default_mask\" (which should be a mask of ones if\n the tensor has not been pruned yet), generate a mask to apply on\n top of the \"default_mask\" by zeroing out the channels along the\n specified dim with the lowest L\"n\"-norm.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"}
{"text": "Parameters:\n * t (torch.Tensor) -- tensor representing the parameter\n to prune\n * default_mask (torch.Tensor) -- Base mask from\n previous pruning iterations, that need to be respected\n after the new mask is applied. Same dims as \"t\".\n Returns:\n mask to apply to \"t\", of same dims as \"t\"\n Return type:\n mask (torch.Tensor)\n Raises:\n IndexError -- if \"self.dim >= len(t.shape)\"\n prune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * t (torch.Tensor) -- tensor to prune (of same\n dimensions as \"default_mask\").\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"}
{"text": "the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * default_mask (torch.Tensor, optional) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n Returns:\n pruned version of tensor \"t\".\n remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n Pruning itself is NOT undone or reversed!", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"}
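The behavior of compute_mask — zero out whole channels with the smallest Ln norm along a chosen dim — can be sketched in pure Python for a 2D weight and dim=0. `ln_structured_mask` is our illustrative helper, not the torch API:

```python
def ln_structured_mask(weight, amount, n=2):
    """Mask that zeroes the `amount` rows (dim-0 channels) of `weight`
    with the smallest Ln norm, keeping all other entries at 1."""
    norms = [sum(abs(v) ** n for v in row) ** (1.0 / n) for row in weight]
    pruned = set(sorted(range(len(weight)), key=lambda i: norms[i])[:amount])
    return [[0.0 if i in pruned else 1.0 for _ in row]
            for i, row in enumerate(weight)]

w = [[0.1, 0.1], [3.0, 4.0], [1.0, 1.0]]
mask = ln_structured_mask(w, amount=1)  # row 0 has the smallest L2 norm
print(mask)  # -> [[0.0, 0.0], [1.0, 1.0], [1.0, 1.0]]
```

In torch itself the equivalent one-liner is `torch.nn.utils.prune.ln_structured(module, name="weight", amount=1, n=2, dim=0)`, which registers the mask as the `weight_mask` buffer described above.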
{"text": "torch.Tensor.nanmeanTensor.nanmean(dim=None, keepdim=False, *, dtype=None) -> Tensor\n See \"torch.nanmean()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nanmean.html", "category": "pytorch docs"}
{"text": "torch.Tensor.halfTensor.half(memory_format=torch.preserve_format) -> Tensor\n \"self.half()\" is equivalent to \"self.to(torch.float16)\". See\n \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.half.html", "category": "pytorch docs"}
{"text": "torch.Tensor.nextafterTensor.nextafter(other) -> Tensor\n See \"torch.nextafter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nextafter.html", "category": "pytorch docs"}
{"text": "torch.Tensor.acosh_Tensor.acosh_() -> Tensor\n In-place version of \"acosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.acosh_.html", "category": "pytorch docs"}
{"text": "LazyConv2dclass torch.nn.LazyConv2d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n A \"torch.nn.Conv2d\" module with lazy initialization of the\n \"in_channels\" argument of the \"Conv2d\" that is inferred from the\n \"input.size(1)\". The attributes that will be lazily initialized are\n weight and bias.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- Zero-padding\n added to both sides of the input. Default: 0\n * padding_mode (str, optional) -- \"'zeros'\",", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv2d.html", "category": "pytorch docs"}
{"text": "\"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n See also:\n \"torch.nn.Conv2d\" and \"torch.nn.modules.lazy.LazyModuleMixin\"\n cls_to_become\n alias of \"Conv2d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.xlogy_Tensor.xlogy_(other) -> Tensor\n In-place version of \"xlogy()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.xlogy_.html", "category": "pytorch docs"}
{"text": "torch.cuda.get_device_propertiestorch.cuda.get_device_properties(device)\n Gets the properties of a device.\n Parameters:\n device (torch.device or int or str) -- device for\n which to return the properties of the device.\n Returns:\n the properties of the device\n Return type:\n _CudaDeviceProperties", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_device_properties.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ldexp_Tensor.ldexp_(other) -> Tensor\n In-place version of \"ldexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ldexp_.html", "category": "pytorch docs"}
{"text": "torch.krontorch.kron(input, other, *, out=None) -> Tensor\n Computes the Kronecker product, denoted by \\otimes, of \"input\" and\n \"other\".\n If \"input\" is a (a_0 \\times a_1 \\times \\dots \\times a_n) tensor and\n \"other\" is a (b_0 \\times b_1 \\times \\dots \\times b_n) tensor, the\n result will be a (a_0b_0 \\times a_1b_1 \\times \\dots \\times\n a_nb_n) tensor with the following entries:\n (\\text{input} \\otimes \\text{other})_{k_0, k_1, \\dots, k_n} =\n \\text{input}_{i_0, i_1, \\dots, i_n} * \\text{other}_{j_0, j_1,\n \\dots, j_n},\n where k_t = i_t * b_t + j_t for 0 \\leq t \\leq n. If one tensor has\n fewer dimensions than the other it is unsqueezed until it has the\n same number of dimensions.\n Supports real-valued and complex-valued inputs.\n Note:\n This function generalizes the typical definition of the Kronecker\n product for two matrices to two tensors, as described above. When\n \"input\" is a (m \\times n) matrix and \"other\" is a (p \\times q)", "source": "https://pytorch.org/docs/stable/generated/torch.kron.html", "category": "pytorch docs"}
{"text": "matrix, the result will be a (pm \\times qn) block matrix:\n \\mathbf{A} \\otimes \\mathbf{B}=\\begin{bmatrix} a_{11}\n \\mathbf{B} & \\cdots & a_{1 n} \\mathbf{B} \\\\ \\vdots & \\ddots &\n \\vdots \\\\ a_{m 1} \\mathbf{B} & \\cdots & a_{m n} \\mathbf{B}\n \\end{bmatrix}\n where \"input\" is \\mathbf{A} and \"other\" is \\mathbf{B}.\n Parameters:\n * input (Tensor) --\n * other (Tensor) --\n Keyword Arguments:\n out (Tensor, optional) -- The output tensor. Ignored\n if \"None\". Default: \"None\"\n Examples:\n >>> mat1 = torch.eye(2)\n >>> mat2 = torch.ones(2, 2)\n >>> torch.kron(mat1, mat2)\n tensor([[1., 1., 0., 0.],\n [1., 1., 0., 0.],\n [0., 0., 1., 1.],\n [0., 0., 1., 1.]])\n >>> mat1 = torch.eye(2)\n >>> mat2 = torch.arange(1, 5).reshape(2, 2)\n >>> torch.kron(mat1, mat2)\n tensor([[1., 2., 0., 0.],\n [3., 4., 0., 0.],\n [0., 0., 1., 2.],", "source": "https://pytorch.org/docs/stable/generated/torch.kron.html", "category": "pytorch docs"}
{"text": "[0., 0., 1., 2.],\n [0., 0., 3., 4.]])", "source": "https://pytorch.org/docs/stable/generated/torch.kron.html", "category": "pytorch docs"}
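The index rule quoted above (k_t = i_t * b_t + j_t) can be checked with a small pure-Python sketch for the 2-D (matrix) case. This is illustrative only, not PyTorch's implementation:

```python
# Pure-Python sketch of the Kronecker-product index rule
# k_t = i_t * b_t + j_t, restricted to the 2-D (matrix) case.
def kron2d(A, B):
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    out = [[0.0] * (n * q) for _ in range(m * p)]
    for i0 in range(m):
        for i1 in range(n):
            for j0 in range(p):
                for j1 in range(q):
                    # output index k_t = i_t * b_t + j_t
                    out[i0 * p + j0][i1 * q + j1] = A[i0][i1] * B[j0][j1]
    return out

eye2 = [[1.0, 0.0], [0.0, 1.0]]
ones22 = [[1.0, 1.0], [1.0, 1.0]]
print(kron2d(eye2, ones22))
# [[1.0, 1.0, 0.0, 0.0], [1.0, 1.0, 0.0, 0.0],
#  [0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 1.0, 1.0]]
```

This reproduces the `torch.kron(torch.eye(2), torch.ones(2, 2))` example from the docs above.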
{"text": "torch.fft.ihfft\ntorch.fft.ihfft(input, n=None, dim=-1, norm=None, *, out=None) -> Tensor\n Computes the inverse of \"hfft()\".\n \"input\" must be a real-valued signal, interpreted in the Fourier\n domain. The IFFT of a real signal is Hermitian-symmetric, \"X[i] =\n conj(X[-i])\". \"ihfft()\" represents this in the one-sided form where\n only the positive frequencies below the Nyquist frequency are\n included. To compute the full output, use \"ifft()\".\n Note:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimension.\n Parameters:\n * input (Tensor) -- the real input tensor\n * n (int, optional) -- Signal length. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the Hermitian IFFT.\n * dim (int, optional) -- The dimension along which to\n take the one dimensional Hermitian IFFT.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html", "category": "pytorch docs"}
{"text": "take the one dimensional Hermitian IFFT.\n * norm (str, optional) --\n Normalization mode. For the backward transform (\"ihfft()\"),\n these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the IFFT\n orthonormal)\n Calling the forward transform (\"hfft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"ihfft()\" the exact inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n >>> t = torch.arange(5)\n >>> t\n tensor([0, 1, 2, 3, 4])\n >>> torch.fft.ihfft(t)\n tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j])\n Compare against the full output from \"ifft()\":\n >>> torch.fft.ifft(t)", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html", "category": "pytorch docs"}
{"text": ">>> torch.fft.ifft(t)\n tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j, -0.5000+0.1625j,\n -0.5000+0.6882j])", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html", "category": "pytorch docs"}
{"text": "torch.nn.modules.module.register_module_full_backward_hooktorch.nn.modules.module.register_module_full_backward_hook(hook)\n Registers a backward hook common to all the modules.\n Warning:\n This adds global state to the nn.module module and it is only\n intended for debugging/profiling purposes.\n The hook will be called every time the gradients with respect to a\n module are computed, i.e. the hook will execute if and only if the\n gradients with respect to module outputs are computed. The hook\n should have the following signature:\n hook(module, grad_input, grad_output) -> Tensor or None\n The \"grad_input\" and \"grad_output\" are tuples. The hook should not\n modify its arguments, but it can optionally return a new gradient\n with respect to the input that will be used in place of\n \"grad_input\" in subsequent computations. \"grad_input\" will only\n correspond to the inputs given as positional arguments and all", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_full_backward_hook.html", "category": "pytorch docs"}
{"text": "kwarg arguments will not appear in the hook. Entries in\n \"grad_input\" and \"grad_output\" will be \"None\" for all non-Tensor\n arguments.\n For technical reasons, when this hook is applied to a Module, its\n forward function will receive a view of each Tensor passed to the\n Module. Similarly the caller will receive a view of each Tensor\n returned by the Module's forward function.\n Global hooks are called before hooks registered with\n register_backward_hook\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_full_backward_hook.html", "category": "pytorch docs"}
{"text": "torch.Tensor.polygammaTensor.polygamma(n) -> Tensor\n See \"torch.polygamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.polygamma.html", "category": "pytorch docs"}
{"text": "torch.jit.annotate\ntorch.jit.annotate(the_type, the_value)\n This method is a pass-through function that returns the_value,\n used to hint the TorchScript compiler about the type of the_value.\n It is a no-op when running outside of TorchScript.\n Though TorchScript can infer the correct type for most Python\n expressions, there are some cases where type inference can be\n wrong, including:\n * Empty containers like [] and {}, which TorchScript assumes to\n be containers of Tensor\n * Optional types like Optional[T], when assigned a valid value of\n type T: TorchScript would assume it is type T rather than\n Optional[T]\n Note that annotate() does not help in the __init__ method of\n torch.nn.Module subclasses because it is executed in eager mode.\n To annotate types of torch.nn.Module attributes, use \"Annotate()\"\n instead.\n Example:\n import torch\n from typing import Dict\n @torch.jit.script\n def fn():", "source": "https://pytorch.org/docs/stable/generated/torch.jit.annotate.html", "category": "pytorch docs"}
{"text": "@torch.jit.script\n def fn():\n # Telling TorchScript that this empty dictionary is a (str -> int) dictionary\n # instead of default dictionary type of (str -> Tensor).\n d = torch.jit.annotate(Dict[str, int], {})\n # Without torch.jit.annotate above, following statement would fail because of\n # type mismatch.\n d[\"name\"] = 20\n Parameters:\n * the_type -- Python type that should be passed to\n TorchScript compiler as type hint for the_value\n * the_value -- Value or expression to hint type for.\n Returns:\n the_value is passed back as return value.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.annotate.html", "category": "pytorch docs"}
{"text": "torch.isfinitetorch.isfinite(input) -> Tensor\n Returns a new tensor with boolean elements representing if each\n element is finite or not.\n Real values are finite when they are not NaN, negative infinity, or\n infinity. Complex values are finite when both their real and\n imaginary parts are finite.\n Parameters:\n input (Tensor) -- the input tensor.\n Returns:\n A boolean tensor that is True where \"input\" is finite and False\n elsewhere\n Example:\n >>> torch.isfinite(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))\n tensor([True, False, True, False, False])", "source": "https://pytorch.org/docs/stable/generated/torch.isfinite.html", "category": "pytorch docs"}
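The finiteness rule stated above (not NaN or ±inf for real values; both parts finite for complex values) can be mirrored element-wise in plain Python. This is an illustrative sketch, not PyTorch's implementation:

```python
import math

# Element-wise "is finite" check mirroring torch.isfinite's documented
# rule: a real value is finite iff it is not NaN, +inf, or -inf; a
# complex value is finite iff both real and imaginary parts are finite.
def isfinite(values):
    def ok(v):
        if isinstance(v, complex):
            return math.isfinite(v.real) and math.isfinite(v.imag)
        return math.isfinite(v)
    return [ok(v) for v in values]

print(isfinite([1.0, float("inf"), 2.0, float("-inf"), float("nan")]))
# [True, False, True, False, False]
```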
{"text": "torch.set_rng_statetorch.set_rng_state(new_state)\n Sets the random number generator state.\n Parameters:\n new_state (torch.ByteTensor) -- The desired state", "source": "https://pytorch.org/docs/stable/generated/torch.set_rng_state.html", "category": "pytorch docs"}
{"text": "FixedQParamsFakeQuantizeclass torch.quantization.fake_quantize.FixedQParamsFakeQuantize(observer)\n Simulate quantize and dequantize with fixed quantization parameters\n in training time. Only per tensor quantization is supported.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FixedQParamsFakeQuantize.html", "category": "pytorch docs"}
{"text": "torch.greatertorch.greater(input, other, *, out=None) -> Tensor\n Alias for \"torch.gt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.greater.html", "category": "pytorch docs"}
{"text": "torch.Tensor.greater_equalTensor.greater_equal(other) -> Tensor\n See \"torch.greater_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.greater_equal.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sort\nTensor.sort(dim=-1, descending=False)\n See \"torch.sort()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sort.html", "category": "pytorch docs"}
{"text": "torch.linspace\ntorch.linspace(start, end, steps, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Creates a one-dimensional tensor of size \"steps\" whose values are\n evenly spaced from \"start\" to \"end\", inclusive. That is, the values\n are:\n (\\text{start}, \\text{start} + \\frac{\\text{end} -\n \\text{start}}{\\text{steps} - 1}, \\ldots, \\text{start} +\n (\\text{steps} - 2) * \\frac{\\text{end} -\n \\text{start}}{\\text{steps} - 1}, \\text{end})\n From PyTorch 1.11 linspace requires the steps argument. Use\n steps=100 to restore the previous behavior.\n Parameters:\n * start (float) -- the starting value for the set of\n points\n * end (float) -- the ending value for the set of points\n * steps (int) -- size of the constructed tensor\n Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * dtype (torch.dtype, optional) -- the data type to", "source": "https://pytorch.org/docs/stable/generated/torch.linspace.html", "category": "pytorch docs"}
{"text": "perform the computation in. Default: if None, uses the global\n default dtype (see torch.get_default_dtype()) when both\n \"start\" and \"end\" are real, and corresponding complex dtype\n when either is complex.\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Example:\n >>> torch.linspace(3, 10, steps=5)\n tensor([ 3.0000, 4.7500, 6.5000, 8.2500, 10.0000])\n >>> torch.linspace(-10, 10, steps=5)", "source": "https://pytorch.org/docs/stable/generated/torch.linspace.html", "category": "pytorch docs"}
{"text": ">>> torch.linspace(-10, 10, steps=5)\n tensor([-10., -5., 0., 5., 10.])\n >>> torch.linspace(start=-10, end=10, steps=5)\n tensor([-10., -5., 0., 5., 10.])\n >>> torch.linspace(start=-10, end=10, steps=1)\n tensor([-10.])", "source": "https://pytorch.org/docs/stable/generated/torch.linspace.html", "category": "pytorch docs"}
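The spacing formula above boils down to start + k * (end - start) / (steps - 1) for k = 0 .. steps-1. A minimal pure-Python sketch (illustrative, not the PyTorch implementation):

```python
# Pure-Python sketch of torch.linspace's evenly-spaced-values formula:
# value_k = start + k * (end - start) / (steps - 1), both ends included.
def linspace(start, end, steps):
    if steps == 1:
        # a single step yields just the start point, as in the docs example
        return [float(start)]
    step = (end - start) / (steps - 1)
    return [start + k * step for k in range(steps)]

print(linspace(3, 10, 5))    # [3.0, 4.75, 6.5, 8.25, 10.0]
print(linspace(-10, 10, 5))  # [-10.0, -5.0, 0.0, 5.0, 10.0]
```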
{"text": "elu\ntorch.ao.nn.quantized.functional.elu(input, scale, zero_point, alpha=1.0)\n This is the quantized version of \"elu()\".\n Parameters:\n * input (Tensor) -- quantized input\n * scale (float) -- quantization scale of the output tensor\n * zero_point (int) -- quantization zero point of the\n output tensor\n * alpha (float) -- the alpha constant\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.elu.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.pairwise_distancetorch.nn.functional.pairwise_distance(x1, x2, p=2.0, eps=1e-6, keepdim=False) -> Tensor\n See \"torch.nn.PairwiseDistance\" for details", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pairwise_distance.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.multi_margin_losstorch.nn.functional.multi_margin_loss(input, target, p=1, margin=1, weight=None, size_average=None, reduce=None, reduction='mean') -> Tensor\n See \"MultiMarginLoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.multi_margin_loss.html", "category": "pytorch docs"}
{"text": "PolynomialLR\nclass torch.optim.lr_scheduler.PolynomialLR(optimizer, total_iters=5, power=1.0, last_epoch=-1, verbose=False)\n Decays the learning rate of each parameter group using a polynomial\n function in the given total_iters. When last_epoch=-1, sets initial\n lr as lr.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * total_iters (int) -- The number of steps that the\n scheduler decays the learning rate. Default: 5.\n * power (int) -- The power of the polynomial. Default:\n 1.0.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-\n >>> # Assuming optimizer uses lr = 0.001 for all groups\n >>> # lr = 0.001 if epoch == 0\n >>> # lr = 0.00075 if epoch == 1\n >>> # lr = 0.00050 if epoch == 2\n >>> # lr = 0.00025 if epoch == 3\n >>> # lr = 0.0 if epoch >= 4\n >>> scheduler = PolynomialLR(self.opt, total_iters=4, power=1.0)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.PolynomialLR.html", "category": "pytorch docs"}
{"text": ">>> for epoch in range(100):\n >>> train(...)\n >>> validate(...)\n >>> scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the scheduler's state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.PolynomialLR.html", "category": "pytorch docs"}
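The lr values in the example above follow a polynomial decay that, for power=1, is just linear interpolation down to zero over total_iters steps. A minimal sketch of that formula (an assumption inferred from the example values, not the actual scheduler code):

```python
# Polynomial decay matching the PolynomialLR example values:
# lr = base_lr * (1 - min(epoch, total_iters) / total_iters) ** power,
# i.e. linear decay to 0 when power == 1, clamped once epoch >= total_iters.
def polynomial_lr(base_lr, epoch, total_iters, power=1.0):
    progress = min(epoch, total_iters) / total_iters
    return base_lr * (1.0 - progress) ** power

for epoch in range(5):
    print(epoch, polynomial_lr(0.001, epoch, total_iters=4))
```

With base_lr=0.001 and total_iters=4 this reproduces the documented sequence 0.001, 0.00075, 0.0005, 0.00025, 0.0.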
{"text": "torch.Tensor.flipTensor.flip(dims) -> Tensor\n See \"torch.flip()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.flip.html", "category": "pytorch docs"}
{"text": "ReflectionPad2dclass torch.nn.ReflectionPad2d(padding)\n Pads the input tensor using the reflection of the input boundary.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 4-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom})\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out})\n where\n H_{out} = H_{in} + \\text{padding_top} +\n \\text{padding_bottom}\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ReflectionPad2d(2)\n >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)\n >>> input\n tensor([[[[0., 1., 2.],\n [3., 4., 5.],\n [6., 7., 8.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html", "category": "pytorch docs"}
{"text": "[6., 7., 8.]]]])\n >>> m(input)\n tensor([[[[8., 7., 6., 7., 8., 7., 6.],\n [5., 4., 3., 4., 5., 4., 3.],\n [2., 1., 0., 1., 2., 1., 0.],\n [5., 4., 3., 4., 5., 4., 3.],\n [8., 7., 6., 7., 8., 7., 6.],\n [5., 4., 3., 4., 5., 4., 3.],\n [2., 1., 0., 1., 2., 1., 0.]]]])\n >>> # using different paddings for different sides\n >>> m = nn.ReflectionPad2d((1, 1, 2, 0))\n >>> m(input)\n tensor([[[[7., 6., 7., 8., 7.],\n [4., 3., 4., 5., 4.],\n [1., 0., 1., 2., 1.],\n [4., 3., 4., 5., 4.],\n [7., 6., 7., 8., 7.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.takeTensor.take(indices) -> Tensor\n See \"torch.take()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.take.html", "category": "pytorch docs"}
{"text": "torch.matmultorch.matmul(input, other, *, out=None) -> Tensor\n Matrix product of two tensors.\n The behavior depends on the dimensionality of the tensors as\n follows:\n * If both tensors are 1-dimensional, the dot product (scalar) is\n returned.\n * If both arguments are 2-dimensional, the matrix-matrix product is\n returned.\n * If the first argument is 1-dimensional and the second argument is\n 2-dimensional, a 1 is prepended to its dimension for the purpose\n of the matrix multiply. After the matrix multiply, the prepended\n dimension is removed.\n * If the first argument is 2-dimensional and the second argument is\n 1-dimensional, the matrix-vector product is returned.\n * If both arguments are at least 1-dimensional and at least one\n argument is N-dimensional (where N > 2), then a batched matrix\n multiply is returned. If the first argument is 1-dimensional, a\n 1 is prepended to its dimension for the purpose of the batched", "source": "https://pytorch.org/docs/stable/generated/torch.matmul.html", "category": "pytorch docs"}
{"text": "matrix multiply and removed after. If the second argument is\n 1-dimensional, a 1 is appended to its dimension for the purpose\n of the batched matrix multiply and removed after. The non-matrix\n (i.e. batch) dimensions are broadcasted (and thus must be\n broadcastable). For example, if \"input\" is a (j \\times 1 \\times\n n \\times n) tensor and \"other\" is a (k \\times n \\times n) tensor,\n \"out\" will be a (j \\times k \\times n \\times n) tensor.\n Note that the broadcasting logic only looks at the batch\n dimensions when determining if the inputs are broadcastable, and\n not the matrix dimensions. For example, if \"input\" is a (j \\times\n 1 \\times n \\times m) tensor and \"other\" is a (k \\times m \\times\n p) tensor, these inputs are valid for broadcasting even though\n the final two dimensions (i.e. the matrix dimensions) are\n different. \"out\" will be a (j \\times k \\times n \\times p) tensor.\n This operation has support for arguments with sparse layouts. In", "source": "https://pytorch.org/docs/stable/generated/torch.matmul.html", "category": "pytorch docs"}
{"text": "particular the matrix-matrix (both arguments 2-dimensional)\n supports sparse arguments with the same restrictions as\n \"torch.mm()\"\n Warning:\n Sparse support is a beta feature and some layout(s)/dtype/device\n combinations may not be supported, or may not have autograd\n support. If you notice missing functionality please open a\n feature request.\n This operator supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n Note:\n The 1-dimensional dot product version of this function does not\n support an \"out\" parameter.\n Parameters:\n * input (Tensor) -- the first tensor to be multiplied\n * other (Tensor) -- the second tensor to be multiplied\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> # vector x vector\n >>> tensor1 = torch.randn(3)\n >>> tensor2 = torch.randn(3)\n >>> torch.matmul(tensor1, tensor2).size()", "source": "https://pytorch.org/docs/stable/generated/torch.matmul.html", "category": "pytorch docs"}
{"text": ">>> torch.matmul(tensor1, tensor2).size()\n torch.Size([])\n >>> # matrix x vector\n >>> tensor1 = torch.randn(3, 4)\n >>> tensor2 = torch.randn(4)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([3])\n >>> # batched matrix x broadcasted vector\n >>> tensor1 = torch.randn(10, 3, 4)\n >>> tensor2 = torch.randn(4)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([10, 3])\n >>> # batched matrix x batched matrix\n >>> tensor1 = torch.randn(10, 3, 4)\n >>> tensor2 = torch.randn(10, 4, 5)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([10, 3, 5])\n >>> # batched matrix x broadcasted matrix\n >>> tensor1 = torch.randn(10, 3, 4)\n >>> tensor2 = torch.randn(4, 5)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([10, 3, 5])", "source": "https://pytorch.org/docs/stable/generated/torch.matmul.html", "category": "pytorch docs"}
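The dimensionality rules described above (prepend/append a 1 for 1-D operands, broadcast the batch dimensions, require matching inner matrix dimensions) can be captured in a small shape-only sketch. This computes only the documented output shape, not the product itself:

```python
# Pure-Python sketch of torch.matmul's documented *shape* rules.
def matmul_shape(a, b):
    a, b = list(a), list(b)
    squeeze_front = squeeze_back = False
    if len(a) == 1:            # 1-D first arg: prepend a 1, remove it after
        a = [1] + a
        squeeze_front = True
    if len(b) == 1:            # 1-D second arg: append a 1, remove it after
        b = b + [1]
        squeeze_back = True
    if a[-1] != b[-2]:
        raise ValueError("inner matrix dimensions do not match")
    # broadcast the batch dimensions (everything but the last two)
    batch_a, batch_b = a[:-2], b[:-2]
    n = max(len(batch_a), len(batch_b))
    batch_a = [1] * (n - len(batch_a)) + batch_a
    batch_b = [1] * (n - len(batch_b)) + batch_b
    batch = []
    for x, y in zip(batch_a, batch_b):
        if x != 1 and y != 1 and x != y:
            raise ValueError("batch dimensions are not broadcastable")
        batch.append(max(x, y))
    out = batch + [a[-2], b[-1]]
    if squeeze_front:
        del out[-2]
    if squeeze_back:
        del out[-1]
    return tuple(out)

print(matmul_shape((10, 3, 4), (4,)))        # (10, 3)
print(matmul_shape((10, 3, 4), (10, 4, 5)))  # (10, 3, 5)
```

Note it also reproduces the (j, 1, n, m) x (k, m, p) -> (j, k, n, p) broadcasting case from the text.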
{"text": "default_eval_fn\ntorch.quantization.default_eval_fn(model, calib_data)\n Default evaluation function: takes a torch.utils.data.Dataset or a\n list of input Tensors and runs the model on the dataset", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.default_eval_fn.html", "category": "pytorch docs"}
{"text": "Linearclass torch.ao.nn.quantized.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)\n A quantized linear module with quantized tensor as inputs and\n outputs. We adopt the same interface as torch.nn.Linear, please\n see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for\n documentation.\n Similar to \"Linear\", attributes will be randomly initialized at\n module creation time and will be overwritten later\n Variables:\n * weight (Tensor) -- the non-learnable quantized weights\n of the module of shape (\\text{out_features},\n \\text{in_features}).\n * bias (Tensor) -- the non-learnable bias of the module of\n shape (\\text{out_features}). If \"bias\" is \"True\", the values\n are initialized to zero.\n * scale -- scale parameter of output Quantized Tensor,\n type: double\n * zero_point -- zero_point parameter for output Quantized\n Tensor, type: long\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html", "category": "pytorch docs"}
{"text": "Tensor, type: long\n Examples:\n >>> m = nn.quantized.Linear(20, 30)\n >>> input = torch.randn(128, 20)\n >>> input = torch.quantize_per_tensor(input, 1.0, 0, torch.quint8)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])\n classmethod from_float(mod)\n Create a quantized module from an observed float module\n Parameters:\n mod (Module) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n classmethod from_reference(ref_qlinear, output_scale, output_zero_point)\n Create a (fbgemm/qnnpack) quantized module from a reference\n quantized module\n Parameters:\n * ref_qlinear (Module) -- a reference quantized linear\n module, either produced by torch.ao.quantization utilities\n or provided by the user\n * output_scale (float) -- scale for output Tensor\n * output_zero_point (int) -- zero point for output", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html", "category": "pytorch docs"}
{"text": "Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html", "category": "pytorch docs"}
{"text": "torch.lerp\ntorch.lerp(input, end, weight, *, out=None)\n Does a linear interpolation of two tensors \"start\" (given by\n \"input\") and \"end\" based on a scalar or tensor \"weight\" and returns\n the resulting \"out\" tensor.\n \\text{out}_i = \\text{start}_i + \\text{weight}_i \\times\n (\\text{end}_i - \\text{start}_i)\n The shapes of \"start\" and \"end\" must be broadcastable. If \"weight\"\n is a tensor, then the shapes of \"weight\", \"start\", and \"end\" must\n be broadcastable.\n Parameters:\n * input (Tensor) -- the tensor with the starting points\n * end (Tensor) -- the tensor with the ending points\n * weight (float or tensor) -- the weight for the\n interpolation formula\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> start = torch.arange(1., 5.)\n >>> end = torch.empty(4).fill_(10)\n >>> start\n tensor([ 1., 2., 3., 4.])\n >>> end\n tensor([ 10., 10., 10., 10.])", "source": "https://pytorch.org/docs/stable/generated/torch.lerp.html", "category": "pytorch docs"}
{"text": "tensor([ 10., 10., 10., 10.])\n >>> torch.lerp(start, end, 0.5)\n tensor([ 5.5000, 6.0000, 6.5000, 7.0000])\n >>> torch.lerp(start, end, torch.full_like(start, 0.5))\n tensor([ 5.5000, 6.0000, 6.5000, 7.0000])", "source": "https://pytorch.org/docs/stable/generated/torch.lerp.html", "category": "pytorch docs"}
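The interpolation formula above, out_i = start_i + weight * (end_i - start_i), is easy to check element-wise in plain Python (a sketch for a scalar weight only, not PyTorch's implementation):

```python
# Element-wise linear interpolation, mirroring torch.lerp's formula
# out_i = start_i + weight * (end_i - start_i) for a scalar weight.
def lerp(start, end, weight):
    return [s + weight * (e - s) for s, e in zip(start, end)]

print(lerp([1.0, 2.0, 3.0, 4.0], [10.0] * 4, 0.5))
# [5.5, 6.0, 6.5, 7.0]
```

This matches the docs example: lerp of arange(1., 5.) toward a tensor of 10s with weight 0.5.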
{"text": "torch.Tensor.cfloatTensor.cfloat(memory_format=torch.preserve_format) -> Tensor\n \"self.cfloat()\" is equivalent to \"self.to(torch.complex64)\". See\n \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cfloat.html", "category": "pytorch docs"}
{"text": "torch.Tensor.atanhTensor.atanh() -> Tensor\n See \"torch.atanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atanh.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.softmax\ntorch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None)\n Applies a softmax function.\n Softmax is defined as:\n \\text{Softmax}(x_{i}) = \\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}\n It is applied to all slices along dim, and will re-scale them so\n that the elements lie in the range [0, 1] and sum to 1.\n See \"Softmax\" for more details.\n Parameters:\n * input (Tensor) -- input\n * dim (int) -- A dimension along which softmax will be\n computed.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n Return type:\n Tensor\n Note:\n This function doesn't work directly with NLLLoss, which expects\n the Log to be computed between the Softmax and itself. Use", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html", "category": "pytorch docs"}
{"text": "log_softmax instead (it's faster and has better numerical\n properties).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html", "category": "pytorch docs"}
{"text": "torch.sym_floattorch.sym_float(a)\n SymInt-aware utility for float casting.\n Parameters:\n a (SymInt, SymFloat, or object) -- Object to cast", "source": "https://pytorch.org/docs/stable/generated/torch.sym_float.html", "category": "pytorch docs"}
{"text": "torch.addr\ntorch.addr(input, vec1, vec2, *, beta=1, alpha=1, out=None) -> Tensor\n Performs the outer-product of vectors \"vec1\" and \"vec2\" and adds it\n to the matrix \"input\".\n Optional values \"beta\" and \"alpha\" are scaling factors on the outer\n product between \"vec1\" and \"vec2\" and the added matrix \"input\"\n respectively.\n \\text{out} = \\beta\\ \\text{input} + \\alpha\\ (\\text{vec1} \\otimes\n \\text{vec2})\n If \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\n If \"vec1\" is a vector of size n and \"vec2\" is a vector of size\n m, then \"input\" must be broadcastable with a matrix of size (n\n \\times m) and \"out\" will be a matrix of size (n \\times m).\n Parameters:\n * input (Tensor) -- matrix to be added\n * vec1 (Tensor) -- the first vector of the outer product\n * vec2 (Tensor) -- the second vector of the outer product\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.addr.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * alpha (Number, optional) -- multiplier for\n \\text{vec1} \\otimes \\text{vec2} (\\alpha)\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> vec1 = torch.arange(1., 4.)\n >>> vec2 = torch.arange(1., 3.)\n >>> M = torch.zeros(3, 2)\n >>> torch.addr(M, vec1, vec2)\n tensor([[ 1., 2.],\n [ 2., 4.],\n [ 3., 6.]])", "source": "https://pytorch.org/docs/stable/generated/torch.addr.html", "category": "pytorch docs"}
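The formula above, out = beta * input + alpha * (vec1 ⊗ vec2), is a rank-1 update and can be sketched directly in plain Python (illustrative only, without the broadcasting that torch.addr supports):

```python
# Pure-Python sketch of torch.addr's formula:
# out[i][j] = beta * M[i][j] + alpha * vec1[i] * vec2[j].
def addr(M, vec1, vec2, beta=1.0, alpha=1.0):
    return [[beta * M[i][j] + alpha * vec1[i] * vec2[j]
             for j in range(len(vec2))]
            for i in range(len(vec1))]

M = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
print(addr(M, [1.0, 2.0, 3.0], [1.0, 2.0]))
# [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
```

This reproduces the docs example: addr of a 3x2 zero matrix with vec1 = arange(1., 4.) and vec2 = arange(1., 3.).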
{"text": "torch.Tensor.index_selectTensor.index_select(dim, index) -> Tensor\n See \"torch.index_select()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_select.html", "category": "pytorch docs"}
{"text": "torch.linalg.pinv\ntorch.linalg.pinv(A, *, atol=None, rtol=None, hermitian=False, out=None) -> Tensor\n Computes the pseudoinverse (Moore-Penrose inverse) of a matrix.\n The pseudoinverse may be defined algebraically but it is more\n computationally convenient to understand it through the SVD\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n If \"hermitian\"= True, \"A\" is assumed to be Hermitian if complex\n or symmetric if real, but this is not checked internally. Instead,\n just the lower triangular part of the matrix is used in the\n computations.\n The singular values (or the norm of the eigenvalues when\n \"hermitian\"= True) that are below \\max(\\text{atol}, \\sigma_1\n \\cdot \\text{rtol}) threshold are treated as zero and discarded in\n the computation, where \\sigma_1 is the largest singular value (or\n eigenvalue).", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"}
{"text": "eigenvalue).\n If \"rtol\" is not specified and \"A\" is a matrix of dimensions (m,\n n), the relative tolerance is set to be \\text{rtol} = \\max(m, n)\n \\varepsilon and \\varepsilon is the epsilon value for the dtype of\n \"A\" (see \"finfo\"). If \"rtol\" is not specified and \"atol\" is\n specified to be larger than zero then \"rtol\" is set to zero.\n If \"atol\" or \"rtol\" is a \"torch.Tensor\", its shape must be\n broadcastable to that of the singular values of \"A\" as returned by\n \"torch.linalg.svd()\".\n Note:\n This function uses \"torch.linalg.svd()\" if \"hermitian\"= False\n and \"torch.linalg.eigh()\" if \"hermitian\"= True. For CUDA\n inputs, this function synchronizes that device with the CPU.\n Note:\n Consider using \"torch.linalg.lstsq()\" if possible for multiplying\n a matrix on the left by the pseudoinverse, as:\n torch.linalg.lstsq(A, B).solution == A.pinv() @ B\n It is always preferred to use \"lstsq()\" when possible, as it is", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"}
{"text": "faster and more numerically stable than computing the\n pseudoinverse explicitly.\n Note:\n This function has NumPy compatible variant linalg.pinv(A, rcond,\n hermitian=False). However, use of the positional argument\n \"rcond\" is deprecated in favor of \"rtol\".\n Warning:\n This function uses internally \"torch.linalg.svd()\" (or\n \"torch.linalg.eigh()\" when \"hermitian\"= True), so its\n derivative has the same problems as those of these functions. See\n the warnings in \"torch.linalg.svd()\" and \"torch.linalg.eigh()\"\n for more details.\n See also:\n \"torch.linalg.inv()\" computes the inverse of a square matrix.\n \"torch.linalg.lstsq()\" computes \"A\".pinv() @ \"B\" with a\n numerically stable algorithm.\n Parameters:\n * A (Tensor) -- tensor of shape (*, m, n) where \"*\" is\n zero or more batch dimensions.\n * rcond (float, Tensor, optional) -- [NumPy\n Compat]. Alias for \"rtol\". Default: None.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n * atol (float, Tensor, optional) -- the absolute\n tolerance value. When None it's considered to be zero.\n Default: None.\n * rtol (float, Tensor, optional) -- the relative\n tolerance value. See above for the value it takes when None.\n Default: None.\n * hermitian (bool, optional) -- indicates whether \"A\"\n is Hermitian if complex or symmetric if real. Default:\n False.\n * out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Examples:\n >>> A = torch.randn(3, 5)\n >>> A\n tensor([[ 0.5495, 0.0979, -1.4092, -0.1128, 0.4132],\n [-1.1143, -0.3662, 0.3042, 1.6374, -0.9294],\n [-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]])\n >>> torch.linalg.pinv(A)\n tensor([[ 0.0600, -0.1933, -0.2090],\n [-0.0903, -0.0817, -0.4752],\n [-0.7124, -0.1631, -0.2272],", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"}
{"text": "[-0.7124, -0.1631, -0.2272],\n [ 0.1356, 0.3933, -0.5023],\n [-0.0308, -0.1725, -0.5216]])\n >>> A = torch.randn(2, 6, 3)\n >>> Apinv = torch.linalg.pinv(A)\n >>> torch.dist(Apinv @ A, torch.eye(3))\n tensor(8.5633e-07)\n >>> A = torch.randn(3, 3, dtype=torch.complex64)\n >>> A = A + A.T.conj() # creates a Hermitian matrix\n >>> Apinv = torch.linalg.pinv(A, hermitian=True)\n >>> torch.dist(Apinv @ A, torch.eye(3))\n tensor(1.0830e-06)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"}
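For a matrix with full column rank, the pseudoinverse reduces to the closed form A⁺ = (AᵀA)⁻¹Aᵀ, which makes the defining property A⁺ @ A = I easy to check by hand. A minimal pure-Python sketch of that special case (torch.linalg.pinv itself uses the SVD, which also handles rank-deficient inputs):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2x2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def pinv_full_column_rank(A):
    """Moore-Penrose pseudoinverse (A^T A)^-1 A^T for a tall matrix
    with 2 columns and full column rank."""
    At = transpose(A)
    return matmul(inv2x2(matmul(At, A)), At)

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3x2, rank 2
Ap = pinv_full_column_rank(A)              # 2x3
I2 = matmul(Ap, A)                         # should be the 2x2 identity
```

This mirrors the `torch.dist(Apinv @ A, torch.eye(3))` checks in the examples above, just without torch.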
{"text": "HingeEmbeddingLossclass torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean')\n Measures the loss given an input tensor x and a labels tensor y\n (containing 1 or -1). This is usually used for measuring whether\n two inputs are similar or dissimilar, e.g. using the L1 pairwise\n distance as x, and is typically used for learning nonlinear\n embeddings or semi-supervised learning.\n The loss function for the n-th sample in the mini-batch is\n l_n = \\begin{cases} x_n, & \\text{if}\\; y_n = 1,\\\\ \\max\\{0, \\Delta - x_n\\}, & \\text{if}\\; y_n = -1, \\end{cases}\n and the total loss function is\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), & \\text{if reduction} = \\text{'mean';}\\\\ \\operatorname{sum}(L), & \\text{if reduction} = \\text{'sum'.} \\end{cases}\n where L = \\{l_1,\\dots,l_N\\}^\\top.\n Parameters:\n * margin (float, optional) -- Has a default value of\n 1.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html", "category": "pytorch docs"}
{"text": "1.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html", "category": "pytorch docs"}
{"text": "the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (*) where * means any number of dimensions. The sum\n operation operates over all the elements.\n * Target: (*), same shape as the input\n * Output: scalar. If \"reduction\" is \"'none'\", then same shape as\n the input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html", "category": "pytorch docs"}
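The per-sample rule above (x_n for similar pairs, max(0, margin − x_n) for dissimilar ones) plus the reduction is small enough to sketch in pure Python, without torch:

```python
def hinge_embedding_loss(x, y, margin=1.0, reduction='mean'):
    """Per-sample loss: x_n if y_n == 1, max(0, margin - x_n) if y_n == -1."""
    losses = [xn if yn == 1 else max(0.0, margin - xn)
              for xn, yn in zip(x, y)]
    if reduction == 'mean':
        return sum(losses) / len(losses)
    if reduction == 'sum':
        return sum(losses)
    return losses  # 'none'

# a similar pair (y=1) contributes its distance; a dissimilar pair (y=-1)
# is only penalized while it is closer than the margin
print(hinge_embedding_loss([1.0, 0.5], [1, -1]))  # 0.75
```

Note how the dissimilar sample with distance 0.5 contributes margin − 0.5 = 0.5, while a dissimilar pair at distance ≥ 1.0 would contribute nothing.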
{"text": "torch.set_default_devicetorch.set_default_device(device)\n Sets the default \"torch.Tensor\" to be allocated on \"device\". This\n does not affect factory function calls which are called with an\n explicit \"device\" argument. Factory calls will be performed as if\n they were passed \"device\" as an argument.\n To only temporarily change the default device instead of setting it\n globally, use \"with torch.device(device):\" instead.\n The default device is initially \"cpu\". If you set the default\n tensor device to another device (e.g., \"cuda\") without a device\n index, tensors will be allocated on whatever the current device for\n the device type is, even after \"torch.cuda.set_device()\" is called.\n Warning:\n This function imposes a slight performance cost on every Python\n call to the torch API (not just factory functions). If this is\n causing problems for you, please comment on\n https://github.com/pytorch/pytorch/issues/92701\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_device.html", "category": "pytorch docs"}
{"text": "Parameters:\n device (device or string) -- the device to set as\n default\n Example:\n >>> torch.tensor([1.2, 3]).device\n device(type='cpu')\n >>> torch.set_default_device('cuda') # current device is 0\n >>> torch.tensor([1.2, 3]).device\n device(type='cuda', index=0)\n >>> torch.set_default_device('cuda:1')\n >>> torch.tensor([1.2, 3]).device\n device(type='cuda', index=1)", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_device.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.max_pool1dtorch.nn.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\n Applies a 1D max pooling over an input signal composed of several\n input planes.\n Note:\n The order of \"ceil_mode\" and \"return_indices\" is different from\n what is seen in \"MaxPool1d\", and will change in a future release.\n See \"MaxPool1d\" for details.\n Parameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iW), minibatch dim optional.\n * kernel_size -- the size of the window. Can be a single\n number or a tuple (kW,)\n * stride -- the stride of the window. Can be a single number\n or a tuple (sW,). Default: \"kernel_size\"\n * padding -- Implicit negative infinity padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2.\n * dilation -- The stride between elements within a sliding", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool1d.html", "category": "pytorch docs"}
{"text": "window, must be > 0.\n * ceil_mode -- If \"True\", will use ceil instead of floor\n to compute the output shape. This ensures that every element\n in the input tensor is covered by a sliding window.\n * return_indices -- If \"True\", will return the argmax along\n with the max values. Useful for\n \"torch.nn.functional.max_unpool1d\" later", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool1d.html", "category": "pytorch docs"}
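The parameters above (stride defaulting to kernel_size, implicit −inf padding, floor vs. ceil output size, optional argmax indices) can be sketched for a plain Python list; this illustrative version assumes dilation=1 and a single channel, unlike the real kernel:

```python
import math

def max_pool1d(signal, kernel_size, stride=None, padding=0,
               ceil_mode=False, return_indices=False):
    """1D max pooling over a plain list: implicit -inf padding on both
    sides, output length floor((or ceil of) (L_padded - k)/stride + 1)."""
    stride = stride or kernel_size            # default stride: kernel_size
    padded = ([float('-inf')] * padding + list(signal)
              + [float('-inf')] * padding)
    size = (len(padded) - kernel_size) / stride + 1
    n_out = math.ceil(size) if ceil_mode else math.floor(size)
    out, indices = [], []
    for i in range(n_out):
        window = padded[i * stride : i * stride + kernel_size]
        m = max(window)
        out.append(m)
        # argmax index, mapped back into the unpadded input
        indices.append(i * stride + window.index(m) - padding)
    return (out, indices) if return_indices else out

print(max_pool1d([1, 3, 2, 5, 4], 2))  # [3, 5]
```

With ceil_mode=True the same input yields a third (partial) window covering the trailing 4, matching the note that every element is then covered.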
{"text": "torch.Tensor.to_sparse_cscTensor.to_sparse_csc() -> Tensor\n Convert a tensor to compressed column storage (CSC) format. Except\n for strided tensors, only works with 2D tensors. If \"self\" is\n strided, then the number of dense dimensions could be specified,\n and a hybrid CSC tensor will be created, with dense_dim dense\n dimensions and self.dim() - 2 - dense_dim batch dimensions.\n Parameters:\n dense_dim (int, optional) -- Number of dense\n dimensions of the resulting CSC tensor. This argument should be\n used only if \"self\" is a strided tensor, and must be a value\n between 0 and the dimension of the \"self\" tensor minus two.\n Example:\n >>> dense = torch.randn(5, 5)\n >>> sparse = dense.to_sparse_csc()\n >>> sparse._nnz()\n 25\n >>> dense = torch.zeros(3, 3, 1, 1)\n >>> dense[0, 0] = dense[1, 2] = dense[2, 1] = 1\n >>> dense.to_sparse_csc(dense_dim=2)\n tensor(ccol_indices=tensor([0, 1, 2, 3]),", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csc.html", "category": "pytorch docs"}
{"text": "tensor(ccol_indices=tensor([0, 1, 2, 3]),\n row_indices=tensor([0, 2, 1]),\n values=tensor([[[1.]],\n [[1.]],\n [[1.]]]), size=(3, 3, 1, 1), nnz=3,\n layout=torch.sparse_csc)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csc.html", "category": "pytorch docs"}
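What the ccol_indices / row_indices / values triple in the output above encodes can be shown with a toy dense-to-CSC converter over nested lists (pure Python, no hybrid/batch dimensions):

```python
def dense_to_csc(dense):
    """Convert a 2D list (rows x cols) to CSC arrays: for each column,
    the rows and values of its non-zeros, plus cumulative column pointers."""
    n_rows, n_cols = len(dense), len(dense[0])
    ccol_indices, row_indices, values = [0], [], []
    for c in range(n_cols):               # CSC walks column by column
        for r in range(n_rows):
            if dense[r][c] != 0:
                row_indices.append(r)
                values.append(dense[r][c])
        ccol_indices.append(len(values))  # non-zeros seen so far
    return ccol_indices, row_indices, values

# the 3x3 pattern from the example above: ones at (0,0), (1,2), (2,1)
dense = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
print(dense_to_csc(dense))  # ([0, 1, 2, 3], [0, 2, 1], [1, 1, 1])
```

Column c's non-zeros live at positions ccol_indices[c] through ccol_indices[c+1] of row_indices/values, which is exactly how the tensor printed above should be read.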
{"text": "torch.savetorch.save(obj, f, pickle_module=pickle, pickle_protocol=DEFAULT_PROTOCOL, _use_new_zipfile_serialization=True)\n Saves an object to a disk file.\n See also: Saving and loading tensors\n Parameters:\n * obj (object) -- saved object\n * f (Union[str, PathLike, BinaryIO,\n IO[bytes]]) -- a file-like object (has to implement\n write and flush) or a string or os.PathLike object containing\n a file name\n * pickle_module (Any) -- module used for pickling metadata\n and objects\n * pickle_protocol (int) -- can be specified to override\n the default protocol\n Note:\n A common PyTorch convention is to save tensors using .pt file\n extension.\n Note:\n PyTorch preserves storage sharing across serialization. See\n Saving and loading tensors preserves views for more details.\n Note:\n The 1.6 release of PyTorch switched \"torch.save\" to use a new", "source": "https://pytorch.org/docs/stable/generated/torch.save.html", "category": "pytorch docs"}
{"text": "zipfile-based file format. \"torch.load\" still retains the ability\n to load files in the old format. If for any reason you want\n \"torch.save\" to use the old format, pass the kwarg\n \"_use_new_zipfile_serialization=False\".\n -[ Example ]-\n >>> # Save to file\n >>> x = torch.tensor([0, 1, 2, 3, 4])\n >>> torch.save(x, 'tensor.pt')\n >>> # Save to io.BytesIO buffer\n >>> buffer = io.BytesIO()\n >>> torch.save(x, buffer)", "source": "https://pytorch.org/docs/stable/generated/torch.save.html", "category": "pytorch docs"}
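The "file-like object (has to implement write and flush)" contract means any buffer works as a target, not just paths. A sketch of the save/load round trip using stdlib pickle as a stand-in serializer (torch.save's actual format is pickle metadata inside a zipfile):

```python
import io
import pickle

def save(obj, f):
    """Minimal stand-in for the torch.save calling convention:
    any object with write/flush is an acceptable destination."""
    pickle.dump(obj, f)
    f.flush()

buffer = io.BytesIO()
save({'step': 10, 'weights': [0.1, 0.2]}, buffer)

buffer.seek(0)                    # rewind before loading, as with torch.load
restored = pickle.load(buffer)
print(restored)                   # {'step': 10, 'weights': [0.1, 0.2]}
```

The seek(0) before loading is the step most often forgotten when round-tripping through an in-memory buffer.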
{"text": "torch.triutorch.triu(input, diagonal=0, *, out=None) -> Tensor\n Returns the upper triangular part of a matrix (2-D tensor) or batch\n of matrices \"input\"; the other elements of the result tensor \"out\"\n are set to 0.\n The upper triangular part of the matrix is defined as the elements\n on and above the diagonal.\n The argument \"diagonal\" controls which diagonal to consider. If\n \"diagonal\" = 0, all elements on and above the main diagonal are\n retained. A positive value excludes just as many diagonals above\n the main diagonal, and similarly a negative value includes just as\n many diagonals below the main diagonal. The main diagonal is the\n set of indices \\lbrace (i, i) \\rbrace for i \\in [0, \\min\\{d_{1},\n d_{2}\\} - 1] where d_{1}, d_{2} are the dimensions of the matrix.\n Parameters:\n * input (Tensor) -- the input tensor.\n * diagonal (int, optional) -- the diagonal to consider\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.triu.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(3, 3)\n >>> a\n tensor([[ 0.2309, 0.5207, 2.0049],\n [ 0.2072, -1.0680, 0.6602],\n [ 0.3480, -0.5211, -0.4573]])\n >>> torch.triu(a)\n tensor([[ 0.2309, 0.5207, 2.0049],\n [ 0.0000, -1.0680, 0.6602],\n [ 0.0000, 0.0000, -0.4573]])\n >>> torch.triu(a, diagonal=1)\n tensor([[ 0.0000, 0.5207, 2.0049],\n [ 0.0000, 0.0000, 0.6602],\n [ 0.0000, 0.0000, 0.0000]])\n >>> torch.triu(a, diagonal=-1)\n tensor([[ 0.2309, 0.5207, 2.0049],\n [ 0.2072, -1.0680, 0.6602],\n [ 0.0000, -0.5211, -0.4573]])\n >>> b = torch.randn(4, 6)\n >>> b\n tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],\n [-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],\n [ 0.4333, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],", "source": "https://pytorch.org/docs/stable/generated/torch.triu.html", "category": "pytorch docs"}
{"text": "[-0.9888, 1.0679, -1.3337, -1.6556, 0.4798, 0.2830]])\n >>> torch.triu(b, diagonal=1)\n tensor([[ 0.0000, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],\n [ 0.0000, 0.0000, -1.2919, 1.3378, -0.1768, -1.0857],\n [ 0.0000, 0.0000, 0.0000, -1.0432, 0.9348, -0.4410],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.4798, 0.2830]])\n >>> torch.triu(b, diagonal=-1)\n tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],\n [-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],\n [ 0.0000, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],\n [ 0.0000, 0.0000, -1.3337, -1.6556, 0.4798, 0.2830]])", "source": "https://pytorch.org/docs/stable/generated/torch.triu.html", "category": "pytorch docs"}
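The diagonal rule above boils down to one predicate per entry: position (i, j) is kept exactly when j − i >= diagonal. A pure-Python sketch over nested lists makes that explicit:

```python
def triu(matrix, diagonal=0):
    """Keep entries on/above the chosen diagonal: entry (i, j) survives
    exactly when j - i >= diagonal; everything else becomes 0."""
    return [[v if j - i >= diagonal else 0 for j, v in enumerate(row)]
            for i, row in enumerate(matrix)]

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(triu(a))               # [[1, 2, 3], [0, 5, 6], [0, 0, 9]]
print(triu(a, diagonal=1))   # [[0, 2, 3], [0, 0, 6], [0, 0, 0]]
print(triu(a, diagonal=-1))  # [[1, 2, 3], [4, 5, 6], [0, 8, 9]]
```

This reproduces the zero patterns of the torch examples above: diagonal=1 also zeroes the main diagonal, diagonal=-1 keeps one extra subdiagonal.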
{"text": "torch.Tensor.select_scatterTensor.select_scatter(src, dim, index) -> Tensor\n See \"torch.select_scatter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.select_scatter.html", "category": "pytorch docs"}
{"text": "torch.linalg.svdvalstorch.linalg.svdvals(A, *, driver=None, out=None) -> Tensor\n Computes the singular values of a matrix.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n The singular values are returned in descending order.\n Note:\n This function is equivalent to NumPy's linalg.svd(A,\n compute_uv=False).\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n See also:\n \"torch.linalg.svd()\" computes the full singular value\n decomposition.\n Parameters:\n A (Tensor) -- tensor of shape (*, m, n) where * is\n zero or more batch dimensions.\n Keyword Arguments:\n * driver (str, optional) -- name of the cuSOLVER\n method to be used. This keyword argument only works on CUDA\n inputs. Available options are: None, gesvd, gesvdj, and", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svdvals.html", "category": "pytorch docs"}
{"text": "gesvda. Check \"torch.linalg.svd()\" for details. Default:\n None.\n * out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Returns:\n A real-valued tensor, even when \"A\" is complex.\n Examples:\n >>> A = torch.randn(5, 3)\n >>> S = torch.linalg.svdvals(A)\n >>> S\n tensor([2.5139, 2.1087, 1.1066])\n >>> torch.dist(S, torch.linalg.svd(A, full_matrices=False).S)\n tensor(2.4576e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svdvals.html", "category": "pytorch docs"}
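Singular values are the square roots of the eigenvalues of AᵀA, which for a 2×2 real matrix has a closed form via the trace and determinant. A pure-Python sketch of that special case (torch.linalg.svdvals instead dispatches to LAPACK/cuSOLVER drivers and handles arbitrary shapes):

```python
import math

def svdvals_2x2(A):
    """Singular values of a real 2x2 matrix, descending, as the square
    roots of the eigenvalues of A^T A (closed form via trace/det)."""
    (a, b), (c, d) = A
    # A^T A = [[a^2 + c^2, ab + cd], [ab + cd, b^2 + d^2]]
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigs = ((tr + disc) / 2, (tr - disc) / 2)   # larger eigenvalue first
    return [math.sqrt(max(e, 0.0)) for e in eigs]

S = svdvals_2x2([[1.0, 0.0], [1.0, 1.0]])
# S is descending, and the product of the singular values equals |det A| = 1
```

For this matrix the larger singular value happens to be the golden ratio (1 + √5)/2, a handy sanity check.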
{"text": "AdamWclass torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False, *, maximize=False, foreach=None, capturable=False, differentiable=False)\n Implements AdamW algorithm.\n For each step t, with g_t = \\nabla_{\\theta} f_t(\\theta_{t-1}) (and g_t\n negated when maximize is True), the update is:\n \\theta_t \\leftarrow \\theta_{t-1} - \\gamma \\lambda \\theta_{t-1} \\quad \\text{(decoupled weight decay)}\n m_t \\leftarrow \\beta_1 m_{t-1} + (1 - \\beta_1) g_t\n v_t \\leftarrow \\beta_2 v_{t-1} + (1 - \\beta_2) g_t^2\n \\widehat{m_t} \\leftarrow m_t / (1 - \\beta_1^t), \\quad \\widehat{v_t} \\leftarrow v_t / (1 - \\beta_2^t)\n \\theta_t \\leftarrow \\theta_t - \\gamma \\widehat{m_t} / \\big(\\sqrt{\\widehat{v_t}} + \\epsilon\\big)\n with m_0 = v_0 = 0 (first and second moments). If amsgrad is True, the\n last step instead divides by \\sqrt{\\widehat{v_t}^{max}} + \\epsilon, where\n \\widehat{v_t}^{max} = \\max(\\widehat{v_{t-1}}^{max}, \\widehat{v_t}).\n For further details regarding the algorithm we refer to Decoupled\n Weight Decay Regularization.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 1e-3)\n * betas (Tuple[float, float], optional) --\n coefficients used for computing running averages of gradient", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"}
{"text": "and its square (default: (0.9, 0.999))\n * eps (float, optional) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n * weight_decay (float, optional) -- weight decay\n coefficient (default: 1e-2)\n * amsgrad (bool, optional) -- whether to use the\n AMSGrad variant of this algorithm from the paper On the\n Convergence of Adam and Beyond (default: False)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * capturable (bool, optional) -- whether this instance", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"}
{"text": "is safe to capture in a CUDA graph. Passing True can impair\n ungraphed performance, so if you don't intend to graph capture\n this instance, leave it False (default: False)\n * differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"}
{"text": "load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"}
{"text": "The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"}
{"text": "Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"}
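The AdamW update described above can be sketched for a single scalar parameter in pure Python; the point to notice is that weight decay multiplies theta directly (decoupled) rather than being folded into the gradient as in Adam. This is an illustrative sketch, not torch's vectorized implementation:

```python
def adamw_step(theta, grad, state, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=1e-2):
    """One AdamW update for a single scalar parameter."""
    beta1, beta2 = betas
    state['t'] += 1
    theta -= lr * weight_decay * theta                  # decoupled decay
    state['m'] = beta1 * state['m'] + (1 - beta1) * grad
    state['v'] = beta2 * state['v'] + (1 - beta2) * grad ** 2
    m_hat = state['m'] / (1 - beta1 ** state['t'])      # bias correction
    v_hat = state['v'] / (1 - beta2 ** state['t'])
    return theta - lr * m_hat / (v_hat ** 0.5 + eps)

state = {'m': 0.0, 'v': 0.0, 't': 0}
theta = adamw_step(1.0, 0.5, state, lr=0.1)
# after bias correction, the first step moves theta by roughly lr
# in the gradient direction, on top of the lr * weight_decay shrink
```

With lr=0.1 and weight_decay=0.01, theta first shrinks to 0.999, then steps by ≈0.1, landing near 0.899.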
{"text": "torch.Tensor.cumminTensor.cummin(dim)\n See \"torch.cummin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cummin.html", "category": "pytorch docs"}
{"text": "FuseCustomConfigclass torch.ao.quantization.fx.custom_config.FuseCustomConfig\n Custom configuration for \"fuse_fx()\".\n Example usage:\n fuse_custom_config = FuseCustomConfig().set_preserved_attributes([\"attr1\", \"attr2\"])\n classmethod from_dict(fuse_custom_config_dict)\n Create a \"FuseCustomConfig\" from a dictionary with the\n following items:\n \"preserved_attributes\": a list of attributes that persist\n even if they are not used in \"forward\"\n This function is primarily for backward compatibility and may be\n removed in the future.\n Return type:\n FuseCustomConfig\n set_preserved_attributes(attributes)\n Set the names of the attributes that will persist in the graph\n module even if they are not used in the model's \"forward\"\n method.\n Return type:\n FuseCustomConfig\n to_dict()\n Convert this \"FuseCustomConfig\" to a dictionary with the items\n described in \"from_dict()\".\n Return type:", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.FuseCustomConfig.html", "category": "pytorch docs"}
{"text": "Return type:\n Dict[str, Any]", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.FuseCustomConfig.html", "category": "pytorch docs"}
{"text": "Adamclass torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False, *, foreach=None, maximize=False, capturable=False, differentiable=False, fused=None)\n Implements Adam algorithm.\n For each step t, with g_t = \\nabla_{\\theta} f_t(\\theta_{t-1}) (and g_t\n negated when maximize is True), the update is:\n g_t \\leftarrow g_t + \\lambda \\theta_{t-1} \\quad \\text{(if weight decay } \\lambda \\neq 0\\text{)}\n m_t \\leftarrow \\beta_1 m_{t-1} + (1 - \\beta_1) g_t\n v_t \\leftarrow \\beta_2 v_{t-1} + (1 - \\beta_2) g_t^2\n \\widehat{m_t} \\leftarrow m_t / (1 - \\beta_1^t), \\quad \\widehat{v_t} \\leftarrow v_t / (1 - \\beta_2^t)\n \\theta_t \\leftarrow \\theta_{t-1} - \\gamma \\widehat{m_t} / \\big(\\sqrt{\\widehat{v_t}} + \\epsilon\\big)\n with m_0 = v_0 = 0 (first and second moments). If amsgrad is True, the\n last step instead divides by \\sqrt{\\widehat{v_t}^{max}} + \\epsilon, where\n \\widehat{v_t}^{max} = \\max(\\widehat{v_{t-1}}^{max}, \\widehat{v_t}).\n For further details regarding the algorithm we refer to Adam: A\n Method for Stochastic Optimization.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 1e-3)\n * betas (Tuple[float, float], optional) --\n coefficients used for computing running averages of gradient\n and its square (default: (0.9, 0.999))", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"}
{"text": "and its square (default: (0.9, 0.999))\n * eps (float, optional) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * amsgrad (bool, optional) -- whether to use the\n AMSGrad variant of this algorithm from the paper On the\n Convergence of Adam and Beyond (default: False)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used (default: None)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n * capturable (bool, optional) -- whether this instance\n is safe to capture in a CUDA graph. Passing True can impair\n ungraphed performance, so if you don't intend to graph capture\n this instance, leave it False (default: False)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"}
{"text": "* differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n * fused (bool, optional) -- whether the fused\n implementation (CUDA only) is used. Currently,\n torch.float64, torch.float32, torch.float16, and\n torch.bfloat16 are supported. Since the fused implementation\n is usually significantly faster than the for-loop\n implementation, we try to use it whenever possible (all\n parameters are on CUDA and are of a supported type). Else, we\n continue with the for-loop implementation. (default: None)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"}
{"text": "This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"}
{"text": "\"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"}
{"text": "differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"}
{"text": "it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"}
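Adam's update for a single scalar parameter can be sketched in pure Python; unlike AdamW above, weight decay here is a coupled L2 penalty added to the gradient before the moment estimates, so it gets rescaled by the adaptive step. An illustrative sketch, not torch's implementation:

```python
def adam_step(theta, grad, state, lr=1e-3, betas=(0.9, 0.999),
              eps=1e-8, weight_decay=0.0):
    """One Adam update for a single scalar parameter."""
    beta1, beta2 = betas
    state['t'] += 1
    if weight_decay != 0:
        grad = grad + weight_decay * theta              # coupled L2 penalty
    state['m'] = beta1 * state['m'] + (1 - beta1) * grad
    state['v'] = beta2 * state['v'] + (1 - beta2) * grad ** 2
    m_hat = state['m'] / (1 - beta1 ** state['t'])      # bias correction
    v_hat = state['v'] / (1 - beta2 ** state['t'])
    return theta - lr * m_hat / (v_hat ** 0.5 + eps)

state = {'m': 0.0, 'v': 0.0, 't': 0}
theta = adam_step(1.0, 0.5, state, lr=0.1, weight_decay=0.1)
# because sqrt(v_hat) ~ |g|, the bias-corrected first step has
# magnitude ~ lr regardless of the gradient's scale
```

Here the decayed gradient is 0.5 + 0.1·1.0 = 0.6, and the first step lands theta near 0.9.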
{"text": "torch.nn.utils.skip_inittorch.nn.utils.skip_init(module_cls, *args, **kwargs)\n Given a module class object and args / kwargs, instantiates the\n module without initializing parameters / buffers. This can be\n useful if initialization is slow or if custom initialization will\n be performed, making the default initialization unnecessary. There\n are some caveats to this, due to the way this function is\n implemented:\n 1. The module must accept a device arg in its constructor that is\n passed to any parameters or buffers created during construction.\n 2. The module must not perform any computation on parameters in its\n constructor except initialization (i.e. functions from\n \"torch.nn.init\").\n If these conditions are satisfied, the module can be instantiated\n with parameter / buffer values uninitialized, as if having been\n created using \"torch.empty()\".\n Parameters:\n * module_cls -- Class object; should be a subclass of\n \"torch.nn.Module\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.skip_init.html", "category": "pytorch docs"}
{"text": "\"torch.nn.Module\"\n * args -- args to pass to the module's constructor\n * kwargs -- kwargs to pass to the module's constructor\n Returns:\n Instantiated module with uninitialized parameters / buffers\n Example:\n >>> import torch\n >>> m = torch.nn.utils.skip_init(torch.nn.Linear, 5, 1)\n >>> m.weight\n Parameter containing:\n tensor([[0.0000e+00, 1.5846e+29, 7.8307e+00, 2.5250e-29, 1.1210e-44]],\n requires_grad=True)\n >>> m2 = torch.nn.utils.skip_init(torch.nn.Linear, in_features=6, out_features=1)\n >>> m2.weight\n Parameter containing:\n tensor([[-1.4677e+24, 4.5915e-41, 1.4013e-45, 0.0000e+00, -1.4677e+24,\n 4.5915e-41]], requires_grad=True)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.skip_init.html", "category": "pytorch docs"}
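The idea of "construct now, initialize later" can be illustrated in plain Python with object.__new__, which allocates an instance without running its (potentially expensive) __init__. This is only an analogy for the concept: torch's skip_init actually constructs the module on a meta device and then materializes empty storage, which is why it needs the device-arg caveat above.

```python
import random

class Linear:
    def __init__(self, n):
        self.n = n
        # stand-in for an expensive default initialization
        self.weight = [random.gauss(0.0, 1.0) for _ in range(n)]

def skip_init(cls, *args, **kwargs):
    """Illustrative only: allocate the object without running __init__,
    so no attributes are initialized; the caller sets them afterwards."""
    obj = object.__new__(cls)
    obj._init_args = (args, kwargs)   # remember what would have been passed
    return obj

m = skip_init(Linear, 5)
assert not hasattr(m, 'weight')      # nothing was initialized
m.weight = [0.0] * 5                 # custom initialization happens here
```

As with the real skip_init, using the object before supplying your own initialization would mean reading garbage (here, a missing attribute; there, torch.empty-style memory).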
{"text": "QConfig\nclass torch.quantization.qconfig.QConfig(activation, weight)\n Describes how to quantize a layer or a part of the network by\n providing settings (observer classes) for activations and weights\n respectively.\n Note that QConfig needs to contain observer classes (like\n MinMaxObserver) or a callable that returns instances on invocation,\n not the concrete observer instances themselves. The quantization\n preparation function will instantiate observers multiple times for\n each of the layers.\n Observer classes usually have reasonable default arguments, but\n they can be overridden with the with_args method (which behaves like\n functools.partial):\n my_qconfig = QConfig(\n activation=MinMaxObserver.with_args(dtype=torch.qint8),\n weight=default_observer.with_args(dtype=torch.qint8))", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.QConfig.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.gelu\ntorch.nn.functional.gelu(input, approximate='none') -> Tensor\n When the approximate argument is 'none', it applies element-wise\n the function \\text{GELU}(x) = x * \\Phi(x)\n where \\Phi(x) is the Cumulative Distribution Function for the Gaussian\n Distribution.\n When the approximate argument is 'tanh', GELU is estimated with\n \\text{GELU}(x) = 0.5 * x * (1 + \\text{Tanh}(\\sqrt{2 / \\pi} * (x\n + 0.044715 * x^3)))\n See Gaussian Error Linear Units (GELUs).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gelu.html", "category": "pytorch docs"}
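The two `approximate` modes above can be checked against each other numerically. A minimal sketch (our own illustration, not part of the scraped docs; assumes a PyTorch version where `gelu` accepts the `approximate` keyword):

```python
import math
import torch
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 1.0])

# Exact GELU: x * Phi(x), with Phi the standard normal CDF.
exact = F.gelu(x, approximate='none')

# Tanh estimate, as in the formula above.
tanh_est = F.gelu(x, approximate='tanh')

# Recompute the tanh form by hand to confirm it matches the documented formula.
manual = 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * x ** 3)))

assert torch.allclose(tanh_est, manual, atol=1e-6)
# The estimate tracks the exact value closely on moderate inputs.
assert torch.allclose(exact, tanh_est, atol=1e-3)
```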
{"text": "torch.linalg.ldl_factor\ntorch.linalg.ldl_factor(A, *, hermitian=False, out=None)\n Computes a compact representation of the LDL factorization of a\n Hermitian or symmetric (possibly indefinite) matrix.\n When \"A\" is complex valued it can be Hermitian (\"hermitian\"=\n True) or symmetric (\"hermitian\"= False).\n The factorization is of the form A = L D L^T. If\n \"hermitian\" is True then the transpose operation is the conjugate\n transpose.\n L (or U) and D are stored in compact form in \"LD\". They follow the\n format specified by LAPACK's sytrf function. These tensors may be\n used in \"torch.linalg.ldl_solve()\" to solve linear systems.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU. For a version of this function that does not", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html", "category": "pytorch docs"}
{"text": "synchronize, see \"torch.linalg.ldl_factor_ex()\".\n Parameters:\n A (Tensor) -- tensor of shape (*, n, n) where * is zero or\n more batch dimensions consisting of symmetric or Hermitian\n matrices.\n Keyword Arguments:\n * hermitian (bool, optional) -- whether to consider\n the input to be Hermitian or symmetric. For real-valued\n matrices, this switch has no effect. Default: False.\n * out (tuple, optional) -- tuple of two tensors to\n write the output to. Ignored if None. Default: None.\n Returns:\n A named tuple (LD, pivots).\n Examples:\n >>> A = torch.randn(3, 3)\n >>> A = A @ A.mT # make symmetric\n >>> A\n tensor([[7.2079, 4.2414, 1.9428],\n [4.2414, 3.4554, 0.3264],\n [1.9428, 0.3264, 1.3823]])\n >>> LD, pivots = torch.linalg.ldl_factor(A)\n >>> LD\n tensor([[ 7.2079, 0.0000, 0.0000],", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html", "category": "pytorch docs"}
{"text": "tensor([[ 7.2079, 0.0000, 0.0000],\n [ 0.5884, 0.9595, 0.0000],\n [ 0.2695, -0.8513, 0.1633]])\n >>> pivots\n tensor([1, 2, 3], dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html", "category": "pytorch docs"}
{"text": "torch.cuda.change_current_allocator\ntorch.cuda.change_current_allocator(allocator)\n Changes the currently used memory allocator to be the one provided.\n If the current allocator has already been used/initialized, this\n function will error.\n Parameters:\n allocator (torch.cuda.memory._CUDAAllocator) -- allocator\n to be set as the active one.\n Note:\n See Memory management for details on creating and using a custom\n allocator", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.change_current_allocator.html", "category": "pytorch docs"}
{"text": "torch.bitwise_not\ntorch.bitwise_not(input, *, out=None) -> Tensor\n Computes the bitwise NOT of the given input tensor. The input\n tensor must be of integral or Boolean types. For bool tensors, it\n computes the logical NOT.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.bitwise_not(torch.tensor([-1, -2, 3], dtype=torch.int8))\n tensor([ 0, 1, -4], dtype=torch.int8)", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_not.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.hardshrink\ntorch.nn.functional.hardshrink(input, lambd=0.5) -> Tensor\n Applies the hard shrinkage function element-wise.\n See \"Hardshrink\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardshrink.html", "category": "pytorch docs"}
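As a sketch of the shrinkage rule (entries with |x| <= lambd are zeroed, the rest pass through unchanged; our own example, not from the scraped page):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-1.0, -0.25, 0.0, 0.3, 2.0])
out = F.hardshrink(x, lambd=0.5)

# entries with |x| <= 0.5 are zeroed; the others are kept as-is
print(out)  # tensor([-1.,  0.,  0.,  0.,  2.])
```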
{"text": "torch.atleast_2d\ntorch.atleast_2d(*tensors)\n Returns a 2-dimensional view of each input tensor with zero\n dimensions. Input tensors with two or more dimensions are returned\n as-is.\n Parameters:\n input (Tensor or list of Tensors) --\n Returns:\n output (Tensor or tuple of Tensors)\n Example:\n >>> x = torch.tensor(1.)\n >>> x\n tensor(1.)\n >>> torch.atleast_2d(x)\n tensor([[1.]])\n >>> x = torch.arange(4).view(2, 2)\n >>> x\n tensor([[0, 1],\n [2, 3]])\n >>> torch.atleast_2d(x)\n tensor([[0, 1],\n [2, 3]])\n >>> x = torch.tensor(0.5)\n >>> y = torch.tensor(1.)\n >>> torch.atleast_2d((x, y))\n (tensor([[0.5000]]), tensor([[1.]]))", "source": "https://pytorch.org/docs/stable/generated/torch.atleast_2d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.dropout1d\ntorch.nn.functional.dropout1d(input, p=0.5, training=True, inplace=False)\n Randomly zero out entire channels (a channel is a 1D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 1D tensor \\text{input}[i, j]) of the input tensor. Each channel\n will be zeroed out independently on every forward call with\n probability \"p\" using samples from a Bernoulli distribution.\n See \"Dropout1d\" for details.\n Parameters:\n * p (float) -- probability of a channel to be zeroed.\n Default: 0.5\n * training (bool) -- apply dropout if \"True\". Default:\n \"True\"\n * inplace (bool) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout1d.html", "category": "pytorch docs"}
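A small sketch of the channel-wise behavior (our own example, not from the scraped page): every (sample, channel) row is either dropped entirely or rescaled by 1/(1-p), never partially zeroed.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.ones(2, 3, 4)  # (batch, channels, length)
out = F.dropout1d(x, p=0.5, training=True)

# dropout1d acts on whole channels: each row of length 4 is either
# all zeros or all 1/(1-0.5) = 2.0
rows = out.view(-1, 4)
assert all(bool((r == 0).all()) or bool((r == 2).all()) for r in rows)
```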
{"text": "torch.signal.windows.exponential\ntorch.signal.windows.exponential(M, *, center=None, tau=1.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes a window with an exponential waveform. Also known as the\n Poisson window.\n The exponential window is defined as follows:\n w_n = \\exp{\\left(-\\frac{|n - c|}{\\tau}\\right)}\n where c is the \"center\" of the window.\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * center (float, optional) -- where the center of the\n window will be located. Default: M / 2 if sym is False,\n else (M - 1) / 2.\n * tau (float, optional) -- the decay value. Tau is\n generally associated with a percentage, which means that the", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html", "category": "pytorch docs"}
{"text": "value should vary within the interval (0, 100]. If tau is 100,\n it is considered the uniform window. Default: 1.0.\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html", "category": "pytorch docs"}
{"text": "tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric exponential window of size 10 and with a decay value of 1.0.\n >>> # The center will be at (M - 1) / 2, where M is 10.\n >>> torch.signal.windows.exponential(10)\n tensor([0.0111, 0.0302, 0.0821, 0.2231, 0.6065, 0.6065, 0.2231, 0.0821, 0.0302, 0.0111])\n >>> # Generates a periodic exponential window with a decay factor equal to .5\n >>> torch.signal.windows.exponential(10, sym=False, tau=.5)\n tensor([4.5400e-05, 3.3546e-04, 2.4788e-03, 1.8316e-02, 1.3534e-01, 1.0000e+00, 1.3534e-01, 1.8316e-02, 2.4788e-03, 3.3546e-04])", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html", "category": "pytorch docs"}
{"text": "torch.ne\ntorch.ne(input, other, *, out=None) -> Tensor\n Computes \\text{input} \\neq \\text{other} element-wise.\n The second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\n Parameters:\n * input (Tensor) -- the tensor to compare\n * other (Tensor or float) -- the tensor or value to\n compare\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Returns:\n A boolean tensor that is True where \"input\" is not equal to\n \"other\" and False elsewhere\n Example:\n >>> torch.ne(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[False, True], [True, False]])", "source": "https://pytorch.org/docs/stable/generated/torch.ne.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logcumsumexp\nTensor.logcumsumexp(dim) -> Tensor\n See \"torch.logcumsumexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logcumsumexp.html", "category": "pytorch docs"}
{"text": "default_activation_only_qconfigtorch.quantization.qconfig.default_activation_only_qconfig\n alias of QConfig(activation=functools.partial(,\n observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8,\n qscheme=torch.per_tensor_affine, reduce_range=True){},\n weight=)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_activation_only_qconfig.html", "category": "pytorch docs"}
{"text": "torch.set_float32_matmul_precision\ntorch.set_float32_matmul_precision(precision)\n Sets the internal precision of float32 matrix multiplications.\n Running float32 matrix multiplications in lower precision may\n significantly increase performance, and in some programs the loss\n of precision has a negligible impact.\n Supports three settings:\n * \"highest\", float32 matrix multiplications use the float32\n datatype for internal computations.\n * \"high\", float32 matrix multiplications use the TensorFloat32\n or bfloat16_3x datatypes for internal computations, if fast\n matrix multiplication algorithms using those datatypes\n internally are available. Otherwise float32 matrix\n multiplications are computed as if the precision is \"highest\".\n * \"medium\", float32 matrix multiplications use the bfloat16\n datatype for internal computations, if a fast matrix\n multiplication algorithm using that datatype internally is", "source": "https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html", "category": "pytorch docs"}
{"text": "available. Otherwise float32 matrix multiplications are\n computed as if the precision is \"high\".\n Note:\n This does not change the output dtype of float32 matrix\n multiplications, it controls how the internal computation of the\n matrix multiplication is performed.\n Note:\n This does not change the precision of convolution operations.\n Other flags, like torch.backends.cudnn.allow_tf32, may control\n the precision of convolution operations.\n Note:\n This flag currently only affects one native device type: CUDA. If\n \"high\" or \"medium\" are set then the TensorFloat32 datatype will\n be used when computing float32 matrix multiplications, equivalent\n to setting torch.backends.cuda.matmul.allow_tf32 = True. When\n \"highest\" (the default) is set then the float32 datatype is used\n for internal computations, equivalent to setting\n torch.backends.cuda.matmul.allow_tf32 = False.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html", "category": "pytorch docs"}
{"text": "Parameters:\n precision (str) -- can be set to \"highest\" (default),\n \"high\", or \"medium\" (see above).", "source": "https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html", "category": "pytorch docs"}
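The setting can be read back with `torch.get_float32_matmul_precision()`. A small sketch (our own example, not from the scraped page; note the output dtype stays float32 regardless of the setting):

```python
import torch

prev = torch.get_float32_matmul_precision()  # save the current setting

torch.set_float32_matmul_precision('medium')
assert torch.get_float32_matmul_precision() == 'medium'

# Only the internal accumulation changes; the result is still float32.
c = torch.randn(8, 8) @ torch.randn(8, 8)
assert c.dtype == torch.float32

torch.set_float32_matmul_precision(prev)  # restore
```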
{"text": "Linear\nclass torch.ao.nn.qat.Linear(in_features, out_features, bias=True, qconfig=None, device=None, dtype=None)\n A linear module attached with FakeQuantize modules for weight, used\n for quantization aware training.\n We adopt the same interface as torch.nn.Linear; please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for\n documentation.\n Similar to torch.nn.Linear, with FakeQuantize modules initialized\n to default.\n Variables:\n weight (torch.Tensor) -- fake quant module for weight\n classmethod from_float(mod)\n Create a qat module from a float module or qparams_dict.\n Parameters:\n mod -- a float module, either produced by torch.ao.quantization\n utilities or directly from the user", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.Linear.html", "category": "pytorch docs"}
{"text": "LazyModuleMixin\nclass torch.nn.modules.lazy.LazyModuleMixin(*args, **kwargs)\n A mixin for modules that lazily initialize parameters, also known\n as \"lazy modules.\"\n Modules that lazily initialize parameters, or \"lazy modules\",\n derive the shapes of their parameters from the first input(s) to\n their forward method. Until that first forward they contain\n \"torch.nn.UninitializedParameter\" s that should not be accessed or\n used, and afterward they contain regular \"torch.nn.Parameter\" s.\n Lazy modules are convenient since they don't require computing some\n module arguments, like the \"in_features\" argument of a typical\n \"torch.nn.Linear\".\n After construction, networks with lazy modules should first be\n converted to the desired dtype and placed on the expected device.\n This is because lazy modules only perform shape inference so the\n usual dtype and device placement behavior applies. The lazy modules\n should then perform \"dry runs\" to initialize all the components in", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"}
{"text": "the module. These \"dry runs\" send inputs of the correct size,\n dtype, and device through the network and to each one of its lazy\n modules. After this the network can be used as usual.\n >>> class LazyMLP(torch.nn.Module):\n ...     def __init__(self):\n ...         super().__init__()\n ...         self.fc1 = torch.nn.LazyLinear(10)\n ...         self.relu1 = torch.nn.ReLU()\n ...         self.fc2 = torch.nn.LazyLinear(1)\n ...         self.relu2 = torch.nn.ReLU()\n ...\n ...     def forward(self, input):\n ...         x = self.relu1(self.fc1(input))\n ...         y = self.relu2(self.fc2(x))\n ...         return y\n >>> # constructs a network with lazy modules\n >>> lazy_mlp = LazyMLP()\n >>> # transforms the network's device and dtype\n >>> # NOTE: these transforms can and should be applied after construction and before any 'dry runs'\n >>> lazy_mlp = lazy_mlp.cuda().double()\n >>> lazy_mlp\n LazyMLP( (fc1): LazyLinear(in_features=0, out_features=10, bias=True)\n (relu1): ReLU()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"}
{"text": "(relu1): ReLU()\n (fc2): LazyLinear(in_features=0, out_features=1, bias=True)\n (relu2): ReLU()\n )\n >>> # performs a dry run to initialize the network's lazy modules\n >>> lazy_mlp(torch.ones(10,10).cuda())\n >>> # after initialization, LazyLinear modules become regular Linear modules\n >>> lazy_mlp\n LazyMLP(\n (fc1): Linear(in_features=10, out_features=10, bias=True)\n (relu1): ReLU()\n (fc2): Linear(in_features=10, out_features=1, bias=True)\n (relu2): ReLU()\n )\n >>> # attaches an optimizer, since parameters can now be used as usual\n >>> optim = torch.optim.SGD(lazy_mlp.parameters(), lr=0.01)\n A final caveat when using lazy modules is that the order of\n initialization of a network's parameters may change, since the lazy\n modules are always initialized after other modules. For example, if\n the LazyMLP class defined above had a \"torch.nn.LazyLinear\" module\n first and then a regular \"torch.nn.Linear\" second, the second", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"}
{"text": "module would be initialized on construction and the first module\n would be initialized during the first dry run. This can cause the\n parameters of a network using lazy modules to be initialized\n differently than the parameters of a network without lazy modules\n as the order of parameter initializations, which often depends on a\n stateful random number generator, is different. Check\n Reproducibility for more details.\n Lazy modules can be serialized with a state dict like other\n modules. For example:\n >>> lazy_mlp = LazyMLP()\n >>> # The state dict shows the uninitialized parameters\n >>> lazy_mlp.state_dict()\n OrderedDict([('fc1.weight', Uninitialized parameter),\n ('fc1.bias',\n tensor([-1.8832e+25, 4.5636e-41, -1.8832e+25, 4.5636e-41, -6.1598e-30,\n 4.5637e-41, -1.8788e+22, 4.5636e-41, -2.0042e-31, 4.5637e-41])),\n ('fc2.weight', Uninitialized parameter),\n ('fc2.bias', tensor([0.0019]))])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"}
{"text": "('fc2.bias', tensor([0.0019]))])\n Lazy modules can load regular \"torch.nn.Parameter\" s (i.e. you can\n serialize/deserialize initialized LazyModules and they will remain\n initialized):\n >>> full_mlp = LazyMLP()\n >>> # Dry run to initialize another module\n >>> full_mlp.forward(torch.ones(10, 1))\n >>> # Load an initialized state into a lazy module\n >>> lazy_mlp.load_state_dict(full_mlp.state_dict())\n >>> # The state dict now holds valid values\n >>> lazy_mlp.state_dict()\n OrderedDict([('fc1.weight',\n tensor([[-0.3837],\n [ 0.0907],\n [ 0.6708],\n [-0.5223],\n [-0.9028],\n [ 0.2851],\n [-0.4537],\n [ 0.6813],\n [ 0.5766],\n [-0.8678]])),\n ('fc1.bias',\n tensor([-1.8832e+25, 4.5636e-41, -1.8832e+25, 4.5636e-41, -6.1598e-30,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"}
{"text": "4.5637e-41, -1.8788e+22, 4.5636e-41, -2.0042e-31, 4.5637e-41])),\n ('fc2.weight',\n tensor([[ 0.1320, 0.2938, 0.0679, 0.2793, 0.1088, -0.1795, -0.2301, 0.2807,\n 0.2479, 0.1091]])),\n ('fc2.bias', tensor([0.0019]))])\n Note, however, that the loaded parameters will not be replaced when\n doing a \"dry run\" if they are initialized when the state is loaded.\n This prevents using initialized modules in different contexts.\n has_uninitialized_params()\n Check if a module has parameters that are not initialized\n initialize_parameters(*args, **kwargs)\n Initialize parameters according to the input batch properties.\n This adds an interface to isolate parameter initialization from\n the forward pass when doing parameter shape inference.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"}
{"text": "torch.fft.rfft\ntorch.fft.rfft(input, n=None, dim=-1, norm=None, *, out=None) -> Tensor\n Computes the one dimensional Fourier transform of real-valued\n \"input\".\n The FFT of a real signal is Hermitian-symmetric, \"X[i] =\n conj(X[-i])\" so the output contains only the positive frequencies\n below the Nyquist frequency. To compute the full output, use\n \"fft()\".\n Note:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimension.\n Parameters:\n * input (Tensor) -- the real input tensor\n * n (int, optional) -- Signal length. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the real FFT.\n * dim (int, optional) -- The dimension along which to\n take the one dimensional real FFT.\n * norm (str, optional) --\n Normalization mode. For the forward transform (\"rfft()\"),", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft.html", "category": "pytorch docs"}
{"text": "these correspond to:\n * \"forward\" - normalize by \"1/n\"\n * \"backward\" - no normalization\n * \"ortho\" - normalize by \"1/sqrt(n)\" (making the FFT\n orthonormal)\n Calling the backward transform (\"irfft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"irfft()\" the exact inverse.\n Default is \"backward\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n >>> t = torch.arange(4)\n >>> t\n tensor([0, 1, 2, 3])\n >>> torch.fft.rfft(t)\n tensor([ 6.+0.j, -2.+2.j, -2.+0.j])\n Compare against the full output from \"fft()\":\n >>> torch.fft.fft(t)\n tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\n Notice that the symmetric element \"T[-1] == T[1].conj()\" is\n omitted. At the Nyquist frequency \"T[-2] == T[2]\" is its own\n symmetric pair, and therefore must always be real-valued.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft.html", "category": "pytorch docs"}
{"text": "torch.nanmean\ntorch.nanmean(input, dim=None, keepdim=False, *, dtype=None, out=None) -> Tensor\n Computes the mean of all non-NaN elements along the specified\n dimensions.\n This function is identical to \"torch.mean()\" when there are no\n NaN values in the \"input\" tensor. In the presence of NaN,\n \"torch.mean()\" will propagate the NaN to the output whereas\n \"torch.nanmean()\" will ignore the NaN values (torch.nanmean(a)\n is equivalent to torch.mean(a[~a.isnan()])).\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.", "source": "https://pytorch.org/docs/stable/generated/torch.nanmean.html", "category": "pytorch docs"}
{"text": "are reduced.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n * out (Tensor, optional) -- the output tensor.\n See also:\n \"torch.mean()\" computes the mean value, propagating NaN.\n Example:\n >>> x = torch.tensor([[torch.nan, 1, 2], [1, 2, 3]])\n >>> x.mean()\n tensor(nan)\n >>> x.nanmean()\n tensor(1.8000)\n >>> x.mean(dim=0)\n tensor([ nan, 1.5000, 2.5000])\n >>> x.nanmean(dim=0)\n tensor([1.0000, 1.5000, 2.5000])\n # If all elements in the reduced dimensions are NaN then the result is NaN\n >>> torch.tensor([torch.nan]).nanmean()\n tensor(nan)", "source": "https://pytorch.org/docs/stable/generated/torch.nanmean.html", "category": "pytorch docs"}
{"text": "Identity\nclass torch.nn.utils.prune.Identity\n Utility pruning method that does not prune any units but generates\n the pruning parametrization with a mask of ones.\n classmethod apply(module, name)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n apply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n Parameters:\n module (nn.Module) -- module containing the tensor to\n prune\n Returns:\n pruned version of the input tensor\n Return type:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html", "category": "pytorch docs"}
{"text": "Return type:\n pruned_tensor (torch.Tensor)\n prune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * t (torch.Tensor) -- tensor to prune (of same\n dimensions as \"default_mask\").\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * default_mask (torch.Tensor, optional) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html", "category": "pytorch docs"}
{"text": "Returns:\n pruned version of tensor \"t\".\n remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n Pruning itself is NOT undone or reversed!", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html", "category": "pytorch docs"}
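The class is normally driven through the functional helper `torch.nn.utils.prune.identity`. A sketch of the apply/remove lifecycle described above (our own example, not from the scraped page):

```python
import torch
from torch.nn.utils import prune

m = torch.nn.Linear(3, 2)
w = m.weight.detach().clone()

prune.identity(m, name='weight')      # installs a mask of ones: nothing is pruned
assert torch.equal(m.weight, w)       # values are unchanged
assert torch.all(m.weight_mask == 1)  # reparametrization buffers now exist
assert hasattr(m, 'weight_orig')

prune.remove(m, 'weight')             # make the (no-op) pruning permanent
assert torch.equal(m.weight, w)
assert not hasattr(m, 'weight_orig')
```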
{"text": "torch.kthvalue\ntorch.kthvalue(input, k, dim=None, keepdim=False, *, out=None)\n Returns a namedtuple \"(values, indices)\" where \"values\" is the \"k\"\n th smallest element of each row of the \"input\" tensor in the given\n dimension \"dim\". And \"indices\" is the index location of each\n element found.\n If \"dim\" is not given, the last dimension of the input is chosen.\n If \"keepdim\" is \"True\", both the \"values\" and \"indices\" tensors are\n the same size as \"input\", except in the dimension \"dim\" where they\n are of size 1. Otherwise, \"dim\" is squeezed (see\n \"torch.squeeze()\"), resulting in both the \"values\" and \"indices\"\n tensors having 1 fewer dimension than the \"input\" tensor.\n Note:\n When \"input\" is a CUDA tensor and there are multiple valid \"k\" th\n values, this function may nondeterministically return \"indices\"\n for any of them.\n Parameters:\n * input (Tensor) -- the input tensor.\n * k (int) -- k for the k-th smallest element", "source": "https://pytorch.org/docs/stable/generated/torch.kthvalue.html", "category": "pytorch docs"}
{"text": "* dim (int, optional) -- the dimension to find the kth\n value along\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n out (tuple, optional) -- the output tuple of (Tensor,\n LongTensor) can be optionally given to be used as output buffers\n Example:\n >>> x = torch.arange(1., 6.)\n >>> x\n tensor([ 1., 2., 3., 4., 5.])\n >>> torch.kthvalue(x, 4)\n torch.return_types.kthvalue(values=tensor(4.), indices=tensor(3))\n >>> x = torch.arange(1., 7.).resize_(2, 3)\n >>> x\n tensor([[ 1., 2., 3.],\n [ 4., 5., 6.]])\n >>> torch.kthvalue(x, 2, 0, True)\n torch.return_types.kthvalue(values=tensor([[4., 5., 6.]]), indices=tensor([[1, 1, 1]]))", "source": "https://pytorch.org/docs/stable/generated/torch.kthvalue.html", "category": "pytorch docs"}
{"text": "torch._foreach_sinh_\ntorch._foreach_sinh_(self: List[Tensor]) -> None\n Apply \"torch.sinh()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sinh_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.nanmedian\nTensor.nanmedian(dim=None, keepdim=False)\n See \"torch.nanmedian()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nanmedian.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fix_\nTensor.fix_() -> Tensor\n In-place version of \"fix()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fix_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.nonzero\nTensor.nonzero() -> LongTensor\n See \"torch.nonzero()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nonzero.html", "category": "pytorch docs"}
{"text": "interpolate\ntorch.ao.nn.quantized.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None)\n Down/up samples the input to either the given \"size\" or the given\n \"scale_factor\".\n See \"torch.nn.functional.interpolate()\" for implementation details.\n The input dimensions are interpreted in the form: mini-batch x\n channels x [optional depth] x [optional height] x width.\n Note:\n The input quantization parameters propagate to the output.\n Note:\n Only 2D/3D input is supported for quantized inputs\n Note:\n Only the following modes are supported for the quantized inputs:\n * bilinear\n * nearest\n Parameters:\n * input (Tensor) -- the input tensor\n * size (int or Tuple[int] or Tuple[int,\n int] or Tuple[int, int, int]) -- output\n spatial size.\n * scale_factor (float or Tuple[float]) --", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.interpolate.html", "category": "pytorch docs"}
{"text": "multiplier for spatial size. Has to match input size if it is\n a tuple.\n * mode (str) -- algorithm used for upsampling: \"'nearest'\"\n | \"'bilinear'\"\n * align_corners (bool, optional) -- Geometrically, we\n consider the pixels of the input and output as squares rather\n than points. If set to \"True\", the input and output tensors\n are aligned by the center points of their corner pixels,\n preserving the values at the corner pixels. If set to \"False\",\n the input and output tensors are aligned by the corner points\n of their corner pixels, and the interpolation uses edge value\n padding for out-of-boundary values, making this operation\n independent of input size when \"scale_factor\" is kept the\n same. This only has an effect when \"mode\" is \"'bilinear'\".\n Default: \"False\"", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.interpolate.html", "category": "pytorch docs"}
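A sketch of quantized interpolation (our own example, not from the scraped page; note the input quantization parameters carrying over to the output, as the first Note above states):

```python
import torch
from torch.ao.nn.quantized import functional as qF

x = torch.rand(1, 3, 4, 4)
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.quint8)

# 2x nearest-neighbor upsampling of a quantized 4D (NCHW) tensor
out = qF.interpolate(qx, scale_factor=2, mode='nearest')
assert out.shape == (1, 3, 8, 8)

# the input quantization parameters propagate to the output
assert out.q_scale() == qx.q_scale()
assert out.q_zero_point() == qx.q_zero_point()
```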
{"text": "torch.mul\ntorch.mul(input, other, *, out=None) -> Tensor\n Multiplies \"input\" by \"other\".\n \\text{out}_i = \\text{input}_i \\times \\text{other}_i\n Supports broadcasting to a common shape, type promotion, and\n integer, float, and complex inputs.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor or Number) --\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Examples:\n >>> a = torch.randn(3)\n >>> a\n tensor([ 0.2015, -0.4255, 2.6087])\n >>> torch.mul(a, 100)\n tensor([ 20.1494, -42.5491, 260.8663])\n >>> b = torch.randn(4, 1)\n >>> b\n tensor([[ 1.1207],\n [-0.3137],\n [ 0.0700],\n [ 0.8378]])\n >>> c = torch.randn(1, 4)\n >>> c\n tensor([[ 0.5146, 0.1216, -0.5244, 2.2382]])\n >>> torch.mul(b, c)\n tensor([[ 0.5767, 0.1363, -0.5877, 2.5083],\n [-0.1614, -0.0382, 0.1645, -0.7021],", "source": "https://pytorch.org/docs/stable/generated/torch.mul.html", "category": "pytorch docs"}
{"text": "[ 0.0360, 0.0085, -0.0367, 0.1567],\n [ 0.4312, 0.1019, -0.4394, 1.8753]])", "source": "https://pytorch.org/docs/stable/generated/torch.mul.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.adaptive_avg_pool3dtorch.nn.functional.adaptive_avg_pool3d(input, output_size)\n Applies a 3D adaptive average pooling over an input signal composed\n of several input planes.\n See \"AdaptiveAvgPool3d\" for details and output shape.\n Parameters:\n output_size (int or tuple) -- the target output size (single\n integer or triple-integer tuple)\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_avg_pool3d.html", "category": "pytorch docs"}
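As a quick illustration of the adaptive behavior: the output spatial size is fixed regardless of the input's depth/height/width. A minimal sketch:

```python
import torch
import torch.nn.functional as F

# Two inputs with different spatial sizes both pool down to 2 x 2 x 2.
a = torch.randn(1, 64, 8, 9, 10)   # N x C x D x H x W
b = torch.randn(1, 64, 5, 7, 11)

out_a = F.adaptive_avg_pool3d(a, (2, 2, 2))
out_b = F.adaptive_avg_pool3d(b, 2)  # a single int applies to all three dims

print(out_a.shape)  # torch.Size([1, 64, 2, 2, 2])
print(out_b.shape)  # torch.Size([1, 64, 2, 2, 2])
```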
{"text": "torch.use_deterministic_algorithmstorch.use_deterministic_algorithms(mode, *, warn_only=False)\n Sets whether PyTorch operations must use \"deterministic\"\n algorithms. That is, algorithms which, given the same input, and\n when run on the same software and hardware, always produce the same\n output. When enabled, operations will use deterministic algorithms\n when available, and if only nondeterministic algorithms are\n available they will throw a \"RuntimeError\" when called.\n Note:\n This setting alone is not always enough to make an application\n reproducible. Refer to Reproducibility for more information.\n Note:\n \"torch.set_deterministic_debug_mode()\" offers an alternative\n interface for this feature.\n The following normally-nondeterministic operations will act\n deterministically when \"mode=True\":\n * \"torch.nn.Conv1d\" when called on CUDA tensor\n * \"torch.nn.Conv2d\" when called on CUDA tensor", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"}
{"text": "* \"torch.nn.Conv3d\" when called on CUDA tensor\n * \"torch.nn.ConvTranspose1d\" when called on CUDA tensor\n * \"torch.nn.ConvTranspose2d\" when called on CUDA tensor\n * \"torch.nn.ConvTranspose3d\" when called on CUDA tensor\n * \"torch.bmm()\" when called on sparse-dense CUDA tensors\n * \"torch.Tensor.__getitem__()\" when attempting to differentiate\n a CPU tensor and the index is a list of tensors\n * \"torch.Tensor.index_put()\" with \"accumulate=False\"\n * \"torch.Tensor.index_put()\" with \"accumulate=True\" when called\n on a CPU tensor\n * \"torch.Tensor.put_()\" with \"accumulate=True\" when called on a\n CPU tensor\n * \"torch.Tensor.scatter_add_()\" when called on a CUDA tensor\n * \"torch.gather()\" when called on a CUDA tensor that requires\n grad\n * \"torch.index_add()\" when called on CUDA tensor\n * \"torch.index_select()\" when attempting to differentiate a CUDA\n tensor", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"}
{"text": "* \"torch.repeat_interleave()\" when attempting to differentiate a\n CUDA tensor\n * \"torch.Tensor.index_copy()\" when called on a CPU or CUDA\n tensor\n The following normally-nondeterministic operations will throw a\n \"RuntimeError\" when \"mode=True\":\n * \"torch.nn.AvgPool3d\" when attempting to differentiate a CUDA\n tensor\n * \"torch.nn.AdaptiveAvgPool2d\" when attempting to differentiate\n a CUDA tensor\n * \"torch.nn.AdaptiveAvgPool3d\" when attempting to differentiate\n a CUDA tensor\n * \"torch.nn.MaxPool3d\" when attempting to differentiate a CUDA\n tensor\n * \"torch.nn.AdaptiveMaxPool2d\" when attempting to differentiate\n a CUDA tensor\n * \"torch.nn.FractionalMaxPool2d\" when attempting to\n differentiate a CUDA tensor\n * \"torch.nn.FractionalMaxPool3d\" when attempting to\n differentiate a CUDA tensor\n * \"torch.nn.MaxUnpool1d\"\n * \"torch.nn.MaxUnpool2d\"\n * \"torch.nn.MaxUnpool3d\"", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"}
{"text": "* \"torch.nn.functional.interpolate()\" when attempting to\n differentiate a CUDA tensor and one of the following modes is\n used:\n * \"linear\"\n * \"bilinear\"\n * \"bicubic\"\n * \"trilinear\"\n * \"torch.nn.ReflectionPad1d\" when attempting to differentiate a\n CUDA tensor\n * \"torch.nn.ReflectionPad2d\" when attempting to differentiate a\n CUDA tensor\n * \"torch.nn.ReflectionPad3d\" when attempting to differentiate a\n CUDA tensor\n * \"torch.nn.ReplicationPad1d\" when attempting to differentiate a\n CUDA tensor\n * \"torch.nn.ReplicationPad2d\" when attempting to differentiate a\n CUDA tensor\n * \"torch.nn.ReplicationPad3d\" when attempting to differentiate a\n CUDA tensor\n * \"torch.nn.NLLLoss\" when called on a CUDA tensor\n * \"torch.nn.CTCLoss\" when attempting to differentiate a CUDA\n tensor\n * \"torch.nn.EmbeddingBag\" when attempting to differentiate a", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"}
{"text": "CUDA tensor when \"mode='max'\"\n * \"torch.Tensor.put_()\" when \"accumulate=False\"\n * \"torch.Tensor.put_()\" when \"accumulate=True\" and called on a\n CUDA tensor\n * \"torch.histc()\" when called on a CUDA tensor\n * \"torch.bincount()\" when called on a CUDA tensor\n * \"torch.kthvalue()\" when called on a CUDA tensor\n * \"torch.median()\" with indices output when called on a CUDA\n tensor\n * \"torch.nn.functional.grid_sample()\" when attempting to\n differentiate a CUDA tensor\n * \"torch.cumsum()\" when called on a CUDA tensor when dtype is\n floating point or complex\n A handful of CUDA operations are nondeterministic if the CUDA\n version is 10.2 or greater, unless the environment variable\n \"CUBLAS_WORKSPACE_CONFIG=:4096:8\" or\n \"CUBLAS_WORKSPACE_CONFIG=:16:8\" is set. See the CUDA documentation\n for more details:\n https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility\n If one of these environment variable", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"}
{"text": "configurations is not set, a \"RuntimeError\" will be raised from\n these operations when called with CUDA tensors:\n * \"torch.mm()\"\n * \"torch.mv()\"\n * \"torch.bmm()\"\n Note that deterministic operations tend to have worse performance\n than nondeterministic operations.\n Note:\n This flag does not detect or prevent nondeterministic behavior\n caused by calling an inplace operation on a tensor with an\n internal memory overlap or by giving such a tensor as the \"out\"\n argument for an operation. In these cases, multiple writes of\n different data may target a single memory location, and the order\n of writes is not guaranteed.\n Parameters:\n mode (\"bool\") -- If True, makes potentially nondeterministic\n operations switch to a deterministic algorithm or throw a\n runtime error. If False, allows nondeterministic operations.\n Keyword Arguments:\n warn_only (\"bool\", optional) -- If True, operations that do", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"}
{"text": "not have a deterministic implementation will throw a warning\n instead of an error. Default: \"False\"\n Example:\n >>> torch.use_deterministic_algorithms(True)\n # Forward mode nondeterministic error\n >>> torch.randn(10, device='cuda').kthvalue(0)\n ...\n RuntimeError: kthvalue CUDA does not have a deterministic implementation...\n # Backward mode nondeterministic error\n >>> torch.nn.AvgPool3d(1)(torch.randn(3, 4, 5, 6, requires_grad=True).cuda()).sum().backward()\n ...\n RuntimeError: avg_pool3d_backward_cuda does not have a deterministic implementation...", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"}
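Beyond the error-raising example above, the flag can be toggled and inspected programmatically; `warn_only=True` downgrades the error to a warning, which is handy when auditing a model for nondeterministic ops. A short sketch:

```python
import torch

# Enable determinism, but only warn about ops that lack a deterministic
# implementation instead of raising RuntimeError.
torch.use_deterministic_algorithms(True, warn_only=True)
assert torch.are_deterministic_algorithms_enabled()
assert torch.is_deterministic_algorithms_warn_only_enabled()

# Restore the default (nondeterministic algorithms allowed).
torch.use_deterministic_algorithms(False)
assert not torch.are_deterministic_algorithms_enabled()
```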
{"text": "torch.as_stridedtorch.as_strided(input, size, stride, storage_offset=None) -> Tensor\n Create a view of an existing torch.Tensor \"input\" with specified\n \"size\", \"stride\" and \"storage_offset\".\n Warning:\n Prefer using other view functions, like \"torch.Tensor.expand()\",\n to setting a view's strides manually with as_strided, as this\n function's behavior depends on the implementation of a tensor's\n storage. The constructed view of the storage must only refer to\n elements within the storage or a runtime error will be thrown,\n and if the view is \"overlapped\" (with multiple indices referring\n to the same element in memory) its behavior is undefined.\n Parameters:\n * input (Tensor) -- the input tensor.\n * size (tuple or ints) -- the shape of the output\n tensor\n * stride (tuple or ints) -- the stride of the output\n tensor\n * storage_offset (int, optional) -- the offset in the", "source": "https://pytorch.org/docs/stable/generated/torch.as_strided.html", "category": "pytorch docs"}
{"text": "underlying storage of the output tensor. If \"None\", the\n storage_offset of the output tensor will match the input\n tensor.\n Example:\n >>> x = torch.randn(3, 3)\n >>> x\n tensor([[ 0.9039, 0.6291, 1.0795],\n [ 0.1586, 2.1939, -0.4900],\n [-0.1909, -0.7503, 1.9355]])\n >>> t = torch.as_strided(x, (2, 2), (1, 2))\n >>> t\n tensor([[0.9039, 1.0795],\n [0.6291, 0.1586]])\n >>> t = torch.as_strided(x, (2, 2), (1, 2), 1)\n tensor([[0.6291, 0.1586],\n [1.0795, 2.1939]])", "source": "https://pytorch.org/docs/stable/generated/torch.as_strided.html", "category": "pytorch docs"}
{"text": "torch.einsumtorch.einsum(equation, operands) -> Tensor\n Sums the product of the elements of the input \"operands\" along\n dimensions specified using a notation based on the Einstein\n summation convention.\n Einsum allows computing many common multi-dimensional linear\n algebraic array operations by representing them in a short-hand\n format based on the Einstein summation convention, given by\n \"equation\". The details of this format are described below, but the\n general idea is to label every dimension of the input \"operands\"\n with some subscript and define which subscripts are part of the\n output. The output is then computed by summing the product of the\n elements of the \"operands\" along the dimensions whose subscripts\n are not part of the output. For example, matrix multiplication can\n be computed using einsum as torch.einsum(\"ij,jk->ik\", A, B).\n Here, j is the summation subscript and i and k the output\n subscripts (see section below for more details on why).", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"}
{"text": "Equation:\n The \"equation\" string specifies the subscripts (letters in\n [a-zA-Z]) for each dimension of the input \"operands\" in the\n same order as the dimensions, separating subscripts for each\n operand by a comma (','), e.g. 'ij,jk' specify subscripts for\n two 2D operands. The dimensions labeled with the same subscript\n must be broadcastable, that is, their size must either match or\n be 1. The exception is if a subscript is repeated for the same\n input operand, in which case the dimensions labeled with this\n subscript for this operand must match in size and the operand\n will be replaced by its diagonal along these dimensions. The\n subscripts that appear exactly once in the \"equation\" will be\n part of the output, sorted in increasing alphabetical order. The\n output is computed by multiplying the input \"operands\" element-\n wise, with their dimensions aligned based on the subscripts, and", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"}
{"text": "then summing out the dimensions whose subscripts are not part of\n the output.\n Optionally, the output subscripts can be explicitly defined by\n adding an arrow ('->') at the end of the equation followed by\n the subscripts for the output. For instance, the following\n equation computes the transpose of a matrix multiplication:\n 'ij,jk->ki'. The output subscripts must appear at least once for\n some input operand and at most once for the output.\n Ellipsis ('...') can be used in place of subscripts to broadcast\n the dimensions covered by the ellipsis. Each input operand may\n contain at most one ellipsis which will cover the dimensions not\n covered by subscripts, e.g. for an input operand with 5\n dimensions, the ellipsis in the equation 'ab...c' cover the\n third and fourth dimensions. The ellipsis does not need to cover\n the same number of dimensions across the \"operands\" but the", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"}
{"text": "'shape' of the ellipsis (the size of the dimensions covered by\n them) must broadcast together. If the output is not explicitly\n defined with the arrow ('->') notation, the ellipsis will come\n first in the output (left-most dimensions), before the subscript\n labels that appear exactly once for the input operands. e.g. the\n following equation implements batch matrix multiplication\n '...ij,...jk'.\n A few final notes: the equation may contain whitespaces between\n the different elements (subscripts, ellipsis, arrow and comma)\n but something like '. . .' is not valid. An empty string ''\n is valid for scalar operands.\n Note:\n \"torch.einsum\" handles ellipsis ('...') differently from NumPy in\n that it allows dimensions covered by the ellipsis to be summed\n over, that is, ellipsis are not required to be part of the\n output.\n Note:\n This function uses opt_einsum (https://optimized-", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"}
{"text": "einsum.readthedocs.io/en/stable/) to speed up computation or to\n consume less memory by optimizing contraction order. This\n optimization occurs when there are at least three inputs, since\n the order does not matter otherwise. Note that finding the\n optimal path is an NP-hard problem, thus, opt_einsum relies on\n different heuristics to achieve near-optimal results. If\n opt_einsum is not available, the default order is to contract\n from left to right. To bypass this default behavior, add the\n following line to disable the usage of opt_einsum and skip path\n calculation: torch.backends.opt_einsum.enabled = False. To\n specify which strategy you'd like for opt_einsum to compute the\n contraction path, add the following line:\n torch.backends.opt_einsum.strategy = 'auto'. The default\n strategy is 'auto', and we also support 'greedy' and 'optimal'.\n Disclaimer that the runtime of 'optimal' is factorial in the", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"}
{"text": "number of inputs! See more details in the opt_einsum\n documentation (https://optimized-\n einsum.readthedocs.io/en/stable/path_finding.html).\n Note:\n As of PyTorch 1.10 \"torch.einsum()\" also supports the sublist\n format (see examples below). In this format, subscripts for each\n operand are specified by sublists, lists of integers in the range\n [0, 52). These sublists follow their operands, and an extra\n sublist can appear at the end of the input to specify the\n output's subscripts, e.g. torch.einsum(op1, sublist1, op2,\n sublist2, ..., [sublist_out]). Python's Ellipsis object may\n be provided in a sublist to enable broadcasting as described in\n the Equation section above.\n Parameters:\n * equation (str) -- The subscripts for the Einstein\n summation.\n * operands (List[Tensor]) -- The tensors to compute\n the Einstein summation of.\n Return type:\n Tensor\n Examples:\n >>> # trace", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"}
{"text": ">>> torch.einsum('ii', torch.randn(4, 4))\n tensor(-1.2104)\n >>> # diagonal\n >>> torch.einsum('ii->i', torch.randn(4, 4))\n tensor([-0.1034, 0.7952, -0.2433, 0.4545])\n >>> # outer product\n >>> x = torch.randn(5)\n >>> y = torch.randn(4)\n >>> torch.einsum('i,j->ij', x, y)\n tensor([[ 0.1156, -0.2897, -0.3918, 0.4963],\n [-0.3744, 0.9381, 1.2685, -1.6070],\n [ 0.7208, -1.8058, -2.4419, 3.0936],\n [ 0.1713, -0.4291, -0.5802, 0.7350],\n [ 0.5704, -1.4290, -1.9323, 2.4480]])\n >>> # batch matrix multiplication\n >>> As = torch.randn(3, 2, 5)\n >>> Bs = torch.randn(3, 5, 4)\n >>> torch.einsum('bij,bjk->bik', As, Bs)\n tensor([[[-1.0564, -1.5904, 3.2023, 3.1271],\n [-1.6706, -0.8097, -0.8025, -2.1183]],\n [[ 4.2239, 0.3107, -0.5756, -0.2354],\n [-1.4558, -0.3460, 1.5087, -0.8530]],", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"}
{"text": "[[ 2.8153, 1.8787, -4.3839, -1.2112],\n [ 0.3728, -2.1131, 0.0921, 0.8305]]])\n >>> # with sublist format and ellipsis\n >>> torch.einsum(As, [..., 0, 1], Bs, [..., 1, 2], [..., 0, 2])\n tensor([[[-1.0564, -1.5904, 3.2023, 3.1271],\n [-1.6706, -0.8097, -0.8025, -2.1183]],\n [[ 4.2239, 0.3107, -0.5756, -0.2354],\n [-1.4558, -0.3460, 1.5087, -0.8530]],\n [[ 2.8153, 1.8787, -4.3839, -1.2112],\n [ 0.3728, -2.1131, 0.0921, 0.8305]]])\n >>> # batch permute\n >>> A = torch.randn(2, 3, 4, 5)\n >>> torch.einsum('...ij->...ji', A).shape\n torch.Size([2, 3, 5, 4])\n >>> # equivalent to torch.nn.functional.bilinear\n >>> A = torch.randn(3, 5, 4)\n >>> l = torch.randn(2, 5)\n >>> r = torch.randn(2, 4)\n >>> torch.einsum('bn,anm,bm->ba', l, A, r)\n tensor([[-0.3430, -5.2405, 0.4494],\n [ 0.3311, 5.5201, -3.0356]])", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"}
{"text": "torch.less_equaltorch.less_equal(input, other, *, out=None) -> Tensor\n Alias for \"torch.le()\".", "source": "https://pytorch.org/docs/stable/generated/torch.less_equal.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.margin_ranking_losstorch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') -> Tensor\n See \"MarginRankingLoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.margin_ranking_loss.html", "category": "pytorch docs"}
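Since this entry only points at MarginRankingLoss, here is a small worked sketch of the functional form: with target y, the per-element loss is max(0, -y * (x1 - x2) + margin), averaged under the default reduction.

```python
import torch
import torch.nn.functional as F

# target = 1 means input1 should be ranked higher than input2.
x1 = torch.tensor([1.0, 2.0, 3.0])
x2 = torch.tensor([3.0, 2.0, 1.0])
target = torch.ones(3)

# Per element: max(0, -1*(1-3)+0.5)=2.5, max(0, 0+0.5)=0.5, max(0, -2+0.5)=0
loss = F.margin_ranking_loss(x1, x2, target, margin=0.5)
print(loss.item())  # (2.5 + 0.5 + 0.0) / 3 = 1.0
```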
{"text": "torch.linalg.ldl_factor_extorch.linalg.ldl_factor_ex(A, *, hermitian=False, check_errors=False, out=None)\n This is a version of \"ldl_factor()\" that does not perform error\n checks unless \"check_errors=True\". It also returns the \"info\"\n tensor returned by LAPACK's sytrf. \"info\" stores integer error\n codes from the backend library. A positive integer indicates the\n diagonal element of D that is zero. Division by 0 will occur if the\n result is used for solving a system of linear equations. \"info\"\n filled with zeros indicates that the factorization was successful.\n If \"check_errors=True\" and \"info\" contains positive integers, then\n a RuntimeError is thrown.\n Note:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors=True\".\n Warning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n Parameters:\n A (Tensor) -- tensor of shape (*, n, n) where * is zero or", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html", "category": "pytorch docs"}
{"text": "more batch dimensions consisting of symmetric or Hermitian\n matrices.\n Keyword Arguments:\n * hermitian (bool, optional) -- whether to consider\n the input to be Hermitian or symmetric. For real-valued\n matrices, this switch has no effect. Default: False.\n * check_errors (bool, optional) -- controls whether to\n check the content of \"info\" and raise an error if it is non-\n zero. Default: False.\n * out (tuple, optional) -- tuple of three tensors to\n write the output to. Ignored if None. Default: None.\n Returns:\n A named tuple (LD, pivots, info).\n Examples:\n >>> A = torch.randn(3, 3)\n >>> A = A @ A.mT # make symmetric\n >>> A\n tensor([[7.2079, 4.2414, 1.9428],\n [4.2414, 3.4554, 0.3264],\n [1.9428, 0.3264, 1.3823]])\n >>> LD, pivots, info = torch.linalg.ldl_factor_ex(A)\n >>> LD", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html", "category": "pytorch docs"}
{"text": "tensor([[ 7.2079, 0.0000, 0.0000],\n [ 0.5884, 0.9595, 0.0000],\n [ 0.2695, -0.8513, 0.1633]])\n >>> pivots\n tensor([1, 2, 3], dtype=torch.int32)\n >>> info\n tensor(0, dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.prune.random_unstructuredtorch.nn.utils.prune.random_unstructured(module, name, amount)\n Prunes tensor corresponding to parameter called \"name\" in \"module\"\n by removing the specified \"amount\" of (currently unpruned) units\n selected at random. Modifies module in place (and also return the\n modified module) by:\n 1. adding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n 2. replacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * amount (int or float) -- quantity of parameters to\n prune. If \"float\", should be between 0.0 and 1.0 and represent", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_unstructured.html", "category": "pytorch docs"}
{"text": "the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n Returns:\n modified (i.e. pruned) version of the input module\n Return type:\n module (nn.Module)\n -[ Examples ]-\n >>> m = prune.random_unstructured(nn.Linear(2, 3), 'weight', amount=1)\n >>> torch.sum(m.weight_mask == 0)\n tensor(1)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_unstructured.html", "category": "pytorch docs"}
{"text": "clampclass torch.ao.nn.quantized.functional.clamp(input, min_, max_) -> Tensor\n Applies the clamp function element-wise. See \"clamp\" for more\n details.\n Parameters:\n * input (Tensor) -- quantized input\n * min_ -- minimum value for clamping\n * max_ -- maximum value for clamping\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.clamp.html", "category": "pytorch docs"}
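A minimal sketch of calling the quantized clamp (assuming a torch build with the `torch.ao.nn.quantized` namespace); values are clamped in the dequantized domain, up to quantization rounding:

```python
import torch
from torch.ao.nn.quantized import functional as qF

# Quantize a small float tensor; scale 0.1, zero_point 64 represents
# the range roughly [-6.4, 19.1] in quint8.
x = torch.tensor([-2.0, -0.5, 0.5, 2.0])
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=64, dtype=torch.quint8)

# Clamp to [-1, 1]; the result is still a quantized tensor with the
# same quantization parameters.
out = qF.clamp(qx, -1.0, 1.0)
print(out.dequantize())  # roughly [-1.0, -0.5, 0.5, 1.0]
```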
{"text": "torch.Tensor.scatter_Tensor.scatter_(dim, index, src, reduce=None) -> Tensor\n Writes all values from the tensor \"src\" into \"self\" at the indices\n specified in the \"index\" tensor. For each value in \"src\", its\n output index is specified by its index in \"src\" for \"dimension !=\n dim\" and by the corresponding value in \"index\" for \"dimension =\n dim\".\n For a 3-D tensor, \"self\" is updated as:\n self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0\n self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1\n self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2\n This is the reverse operation of the manner described in\n \"gather()\".\n \"self\", \"index\" and \"src\" (if it is a Tensor) should all have the\n same number of dimensions. It is also required that \"index.size(d)\n <= src.size(d)\" for all dimensions \"d\", and that \"index.size(d) <=\n self.size(d)\" for all dimensions \"d != dim\". Note that \"index\" and\n \"src\" do not broadcast.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html", "category": "pytorch docs"}
{"text": "\"src\" do not broadcast.\n Moreover, as for \"gather()\", the values of \"index\" must be between\n \"0\" and \"self.size(dim) - 1\" inclusive.\n Warning:\n When indices are not unique, the behavior is non-deterministic\n (one of the values from \"src\" will be picked arbitrarily) and the\n gradient will be incorrect (it will be propagated to all\n locations in the source that correspond to the same index)!\n Note:\n The backward pass is implemented only for \"src.shape ==\n index.shape\".\n Additionally accepts an optional \"reduce\" argument that allows\n specification of an optional reduction operation, which is applied\n to all values in the tensor \"src\" into \"self\" at the indices\n specified in the \"index\". For each value in \"src\", the reduction\n operation is applied to an index in \"self\" which is specified by\n its index in \"src\" for \"dimension != dim\" and by the corresponding\n value in \"index\" for \"dimension = dim\".\n Given a 3-D tensor and reduction using the multiplication", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html", "category": "pytorch docs"}
{"text": "operation, \"self\" is updated as:\n self[index[i][j][k]][j][k] *= src[i][j][k] # if dim == 0\n self[i][index[i][j][k]][k] *= src[i][j][k] # if dim == 1\n self[i][j][index[i][j][k]] *= src[i][j][k] # if dim == 2\n Reducing with the addition operation is the same as using\n \"scatter_add_()\".\n Parameters:\n * dim (int) -- the axis along which to index\n * index (LongTensor) -- the indices of elements to\n scatter, can be either empty or of the same dimensionality as\n \"src\". When empty, the operation returns \"self\" unchanged.\n * src (Tensor or float) -- the source element(s) to\n scatter.\n * reduce (str, optional) -- reduction operation to\n apply, can be either \"'add'\" or \"'multiply'\".\n Example:\n >>> src = torch.arange(1, 11).reshape((2, 5))\n >>> src\n tensor([[ 1, 2, 3, 4, 5],\n [ 6, 7, 8, 9, 10]])\n >>> index = torch.tensor([[0, 1, 2, 0]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html", "category": "pytorch docs"}
{"text": ">>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src)\n tensor([[1, 0, 0, 4, 0],\n [0, 2, 0, 0, 0],\n [0, 0, 3, 0, 0]])\n >>> index = torch.tensor([[0, 1, 2], [0, 1, 4]])\n >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src)\n tensor([[1, 2, 3, 0, 0],\n [6, 7, 0, 0, 8],\n [0, 0, 0, 0, 0]])\n >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),\n ... 1.23, reduce='multiply')\n tensor([[2.0000, 2.0000, 2.4600, 2.0000],\n [2.0000, 2.0000, 2.0000, 2.4600]])\n >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),\n ... 1.23, reduce='add')\n tensor([[2.0000, 2.0000, 3.2300, 2.0000],\n [2.0000, 2.0000, 2.0000, 3.2300]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html", "category": "pytorch docs"}
{"text": "torch.ceiltorch.ceil(input, *, out=None) -> Tensor\n Returns a new tensor with the ceil of the elements of \"input\", the\n smallest integer greater than or equal to each element.\n For integer inputs, follows the array-api convention of returning a\n copy of the input tensor.\n \\text{out}_{i} = \\left\\lceil \\text{input}_{i} \\right\\rceil\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.6341, -1.4208, -1.0900, 0.5826])\n >>> torch.ceil(a)\n tensor([-0., -1., -1., 1.])", "source": "https://pytorch.org/docs/stable/generated/torch.ceil.html", "category": "pytorch docs"}
{"text": "torch.Tensor.remainder_Tensor.remainder_(divisor) -> Tensor\n In-place version of \"remainder()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.remainder_.html", "category": "pytorch docs"}
{"text": "torch.realtorch.real(input) -> Tensor\n Returns a new tensor containing real values of the \"self\" tensor.\n The returned tensor and \"self\" share the same underlying storage.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])\n >>> x.real\n tensor([ 0.3100, -0.5445, -1.6492, -0.0638])", "source": "https://pytorch.org/docs/stable/generated/torch.real.html", "category": "pytorch docs"}
{"text": "torch.jit.isinstancetorch.jit.isinstance(obj, target_type)\n This function provides for container type refinement in\n TorchScript. It can refine parameterized containers of the List,\n Dict, Tuple, and Optional types. E.g. \"List[str]\", \"Dict[str,\n List[torch.Tensor]]\", \"Optional[Tuple[int,str,int]]\". It can also\n refine basic types such as bools and ints that are available in\n TorchScript.\n Parameters:\n * obj -- object to refine the type of\n * target_type -- type to try to refine obj to\n Returns:\n True if obj was successfully refined to the type of target_type,\n False otherwise with no new type refinement\n Return type:\n \"bool\"\n Example (using \"torch.jit.isinstance\" for type refinement):\n import torch\n from typing import Any, Dict, List\n class MyModule(torch.nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n def forward(self, input: Any): # note the Any type", "source": "https://pytorch.org/docs/stable/generated/torch.jit.isinstance.html", "category": "pytorch docs"}
{"text": "if torch.jit.isinstance(input, List[torch.Tensor]):\n for t in input:\n y = t.clamp(0, 0.5)\n elif torch.jit.isinstance(input, Dict[str, str]):\n for val in input.values():\n print(val)\n m = torch.jit.script(MyModule())\n x = [torch.rand(3,3), torch.rand(4,3)]\n m(x)\n y = {\"key1\":\"val1\",\"key2\":\"val2\"}\n m(y)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.isinstance.html", "category": "pytorch docs"}
{"text": "torch.Tensor.index_fill_Tensor.index_fill_(dim, index, value) -> Tensor\n Fills the elements of the \"self\" tensor with value \"value\" by\n selecting the indices in the order given in \"index\".\n Parameters:\n * dim (int) -- dimension along which to index\n * index (LongTensor) -- indices of \"self\" tensor to fill\n in\n * value (float) -- the value to fill with\n Example::\n >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n >>> index = torch.tensor([0, 2])\n >>> x.index_fill_(1, index, -1)\n tensor([[-1., 2., -1.],\n [-1., 5., -1.],\n [-1., 8., -1.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_fill_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cloneTensor.clone(*, memory_format=torch.preserve_format) -> Tensor\n See \"torch.clone()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clone.html", "category": "pytorch docs"}
{"text": "LPPool1dclass torch.nn.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False)\n Applies a 1D power-average pooling over an input signal composed of\n several input planes.\n On each window, the function computed is:\n f(X) = \\sqrt[p]{\\sum_{x \\in X} x^{p}}\n * At p = \\infty, one gets Max Pooling\n * At p = 1, one gets Sum Pooling (which is proportional to Average\n Pooling)\n Note:\n If the sum to the power of p is zero, the gradient of this\n function is not defined. This implementation will set the\n gradient to zero in this case.\n Parameters:\n * kernel_size (Union[int, Tuple[int]]) --\n a single int, the size of the window\n * stride (Union[int, Tuple[int]]) -- a\n single int, the stride of the window. Default value is\n \"kernel_size\"\n * ceil_mode (bool) -- when True, will use ceil instead\n of floor to compute the output shape\n Shape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LPPool1d.html", "category": "pytorch docs"}
{"text": "Shape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n L_{out} = \\left\\lfloor\\frac{L_{in} -\n \\text{kernel_size}}{\\text{stride}} + 1\\right\\rfloor\n Examples::\n >>> # power-2 pool of window of length 3, with stride 2.\n >>> m = nn.LPPool1d(2, 3, stride=2)\n >>> input = torch.randn(20, 16, 50)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LPPool1d.html", "category": "pytorch docs"}
{"text": "Embeddingclass torch.ao.nn.quantized.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, dtype=torch.quint8)\n A quantized Embedding module with quantized packed weights as\n inputs. We adopt the same interface as torch.nn.Embedding, please\n see https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding for\n documentation.\n Similar to \"Embedding\", attributes will be randomly initialized at\n module creation time and will be overwritten later\n Variables:\n weight (Tensor) -- the non-learnable quantized weights of\n the module of shape (\\text{num_embeddings},\n \\text{embedding_dim}).\n Examples::\n >>> m = nn.quantized.Embedding(num_embeddings=10, embedding_dim=12)\n >>> indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8])\n >>> output = m(indices)\n >>> print(output.size())\n torch.Size([9, 12])\n classmethod from_float(mod)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Embedding.html", "category": "pytorch docs"}
{"text": "classmethod from_float(mod)\n Create a quantized embedding module from a float module\n Parameters:\n mod (Module) -- a float module, either produced by\n torch.ao.quantization utilities or provided by user", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Embedding.html", "category": "pytorch docs"}
{"text": "torch.multiplytorch.multiply(input, other, *, out=None)\n Alias for \"torch.mul()\".", "source": "https://pytorch.org/docs/stable/generated/torch.multiply.html", "category": "pytorch docs"}
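Since "torch.multiply" is a pure alias, a quick sketch (not from the linked page; assumes a standard PyTorch install) can confirm it matches "torch.mul" element-wise:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0, 6.0])

# torch.multiply is an alias for torch.mul, so both calls agree exactly
assert torch.equal(torch.multiply(a, b), torch.mul(a, b))
print(torch.multiply(a, b))  # tensor([ 4., 10., 18.])
```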
{"text": "AlphaDropoutclass torch.nn.AlphaDropout(p=0.5, inplace=False)\n Applies Alpha Dropout over the input.\n Alpha Dropout is a type of Dropout that maintains the self-\n normalizing property. For an input with zero mean and unit standard\n deviation, the output of Alpha Dropout maintains the original mean\n and standard deviation of the input. Alpha Dropout goes hand-in-\n hand with SELU activation function, which ensures that the outputs\n have zero mean and unit standard deviation.\n During training, it randomly masks some of the elements of the\n input tensor with probability p using samples from a Bernoulli\n distribution. The elements to be masked are randomized on every\n forward call, and scaled and shifted to maintain zero mean and unit\n standard deviation.\n During evaluation the module simply computes an identity function.\n More details can be found in the paper Self-Normalizing Neural\n Networks .\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AlphaDropout.html", "category": "pytorch docs"}
{"text": "Networks .\n Parameters:\n * p (float) -- probability of an element to be dropped.\n Default: 0.5\n * inplace (bool, optional) -- If set to \"True\", will\n do this operation in-place\n Shape:\n * Input: (*). Input can be of any shape\n * Output: (*). Output is of the same shape as input\n Examples:\n >>> m = nn.AlphaDropout(p=0.2)\n >>> input = torch.randn(20, 16)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AlphaDropout.html", "category": "pytorch docs"}
{"text": "torch.logdettorch.logdet(input) -> Tensor\n Calculates log determinant of a square matrix or batches of square\n matrices.\n It returns \"-inf\" if the input has a determinant of zero, and \"NaN\"\n if it has a negative determinant.\n Note:\n Backward through \"logdet()\" internally uses SVD results when\n \"input\" is not invertible. In this case, double backward through\n \"logdet()\" will be unstable when \"input\" doesn't have distinct\n singular values. See \"torch.linalg.svd()\" for details.\n See also:\n \"torch.linalg.slogdet()\" computes the sign (resp. angle) and\n natural logarithm of the absolute value of the determinant of\n real-valued (resp. complex) square matrices.\n Parameters:\n input (Tensor) -- the input tensor of size \"(*, n, n)\"\n where \"*\" is zero or more batch dimensions.\n Example:\n >>> A = torch.randn(3, 3)\n >>> torch.det(A)\n tensor(0.2611)\n >>> torch.logdet(A)\n tensor(-1.3430)\n >>> A", "source": "https://pytorch.org/docs/stable/generated/torch.logdet.html", "category": "pytorch docs"}
{"text": "tensor(-1.3430)\n >>> A\n tensor([[[ 0.9254, -0.6213],\n [-0.5787, 1.6843]],\n [[ 0.3242, -0.9665],\n [ 0.4539, -0.0887]],\n [[ 1.1336, -0.4025],\n [-0.7089, 0.9032]]])\n >>> A.det()\n tensor([1.1990, 0.4099, 0.7386])\n >>> A.det().log()\n tensor([ 0.1815, -0.8917, -0.3031])", "source": "https://pytorch.org/docs/stable/generated/torch.logdet.html", "category": "pytorch docs"}
{"text": "torch.Tensor.maxTensor.max(dim=None, keepdim=False)\n See \"torch.max()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.max.html", "category": "pytorch docs"}
{"text": "torch.abstorch.abs(input, *, out=None) -> Tensor\n Computes the absolute value of each element in \"input\".\n \\text{out}_{i} = |\\text{input}_{i}|\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.abs(torch.tensor([-1, -2, 3]))\n tensor([ 1, 2, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.abs.html", "category": "pytorch docs"}
{"text": "torch.positivetorch.positive(input) -> Tensor\n Returns \"input\". Throws a runtime error if \"input\" is a bool\n tensor.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> t = torch.randn(5)\n >>> t\n tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])\n >>> torch.positive(t)\n tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])", "source": "https://pytorch.org/docs/stable/generated/torch.positive.html", "category": "pytorch docs"}
{"text": "prepare_fxclass torch.quantization.quantize_fx.prepare_fx(model, qconfig_mapping, example_inputs, prepare_custom_config=None, _equalization_config=None, backend_config=None)\n Prepare a model for post training static quantization\n Parameters:\n * model () -- torch.nn.Module model\n * qconfig_mapping () -- QConfigMapping object to\n configure how a model is quantized, see \"QConfigMapping\" for\n more details\n * example_inputs () -- Example inputs for forward\n function of the model, Tuple of positional args (keyword args\n can be passed as positional args as well)\n * prepare_custom_config () -- customization configuration\n for quantization tool. See \"PrepareCustomConfig\" for more\n details\n * _equalization_config () -- config for specifying how to\n perform equalization on the model\n * backend_config () -- config that specifies how", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"}
{"text": "operators are quantized in a backend, this includes how the\n operators are observed, supported fusion patterns, how\n quantize/dequantize ops are inserted, supported dtypes etc.\n See \"BackendConfig\" for more details\n Returns:\n A GraphModule with observer (configured by qconfig_mapping),\n ready for calibration\n Return type:\n ObservedGraphModule\n Example:\n import torch\n from torch.ao.quantization import get_default_qconfig_mapping\n from torch.ao.quantization.quantize_fx import prepare_fx\n class Submodule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = torch.nn.Linear(5, 5)\n def forward(self, x):\n x = self.linear(x)\n return x\n class M(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = torch.nn.Linear(5, 5)\n self.sub = Submodule()\n def forward(self, x):", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"}
{"text": "def forward(self, x):\n x = self.linear(x)\n x = self.sub(x) + x\n return x\n # initialize a floating point model\n float_model = M().eval()\n # define calibration function\n def calibrate(model, data_loader):\n model.eval()\n with torch.no_grad():\n for image, target in data_loader:\n model(image)\n # qconfig is the configuration for how we insert observers for a particular\n # operator\n # qconfig = get_default_qconfig(\"fbgemm\")\n # Example of customizing qconfig:\n # qconfig = torch.ao.quantization.QConfig(\n # activation=MinMaxObserver.with_args(dtype=torch.qint8),\n # weight=MinMaxObserver.with_args(dtype=torch.qint8))\n # activation and weight are constructors of observer module\n # qconfig_mapping is a collection of quantization configurations, user can\n # set the qconfig for each operator (torch op calls, functional calls, module calls)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"}
{"text": "# in the model through qconfig_mapping\n # the following call will get the qconfig_mapping that works best for models\n # that target \"fbgemm\" backend\n qconfig_mapping = get_default_qconfig_mapping(\"fbgemm\")\n # We can customize qconfig_mapping in different ways.\n # e.g. set the global qconfig, which means we will use the same qconfig for\n # all operators in the model, this can be overwritten by other settings\n # qconfig_mapping = QConfigMapping().set_global(qconfig)\n # e.g. quantize the linear submodule with a specific qconfig\n # qconfig_mapping = QConfigMapping().set_module_name(\"linear\", qconfig)\n # e.g. quantize all nn.Linear modules with a specific qconfig\n # qconfig_mapping = QConfigMapping().set_object_type(torch.nn.Linear, qconfig)\n # for a more complete list, please see the docstring for :class:`torch.ao.quantization.QConfigMapping`\n # argument\n # example_inputs is a tuple of inputs, that is used to infer the type of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"}
{"text": "# outputs in the model\n # currently it's not used, but please make sure model(*example_inputs) runs\n example_inputs = (torch.randn(1, 3, 224, 224),)\n # TODO: add backend_config after we split the backend_config for fbgemm and qnnpack\n # e.g. backend_config = get_default_backend_config(\"fbgemm\")\n # `prepare_fx` inserts observers in the model based on qconfig_mapping and\n # backend_config. If the configuration for an operator in qconfig_mapping\n # is supported in the backend_config (meaning it's supported by the target\n # hardware), we'll insert observer modules according to the qconfig_mapping\n # otherwise the configuration in qconfig_mapping will be ignored\n #\n # Example:\n # in qconfig_mapping, user sets linear module to be quantized with quint8 for\n # activation and qint8 for weight:\n # qconfig = torch.ao.quantization.QConfig(\n # observer=MinMaxObserver.with_args(dtype=torch.quint8),\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"}
{"text": "# weight=MinMaxObserver.with_args(dtype=torch.qint8))\n # Note: current qconfig api does not support setting output observer, but\n # we may extend this to support these more fine grained control in the\n # future\n #\n # qconfig_mapping = QConfigMapping().set_object_type(torch.nn.Linear, qconfig)\n # in backend config, linear module also supports in this configuration:\n # weighted_int8_dtype_config = DTypeConfig(\n # input_dtype=torch.quint8,\n # output_dtype=torch.quint8,\n # weight_dtype=torch.qint8,\n # bias_type=torch.float)\n # linear_pattern_config = BackendPatternConfig(torch.nn.Linear) \\\n # .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) \\\n # .add_dtype_config(weighted_int8_dtype_config) \\\n # ...\n # backend_config = BackendConfig().set_backend_pattern_config(linear_pattern_config)\n # `prepare_fx` will check that the setting requested by user in qconfig_mapping\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"}
{"text": "# is supported by the backend_config and insert observers and fake quant modules\n # in the model\n prepared_model = prepare_fx(float_model, qconfig_mapping, example_inputs)\n # Run calibration\n calibrate(prepared_model, sample_inference_data)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"}
{"text": "ExponentialLRclass torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=-1, verbose=False)\n Decays the learning rate of each parameter group by gamma every\n epoch. When last_epoch=-1, sets initial lr as lr.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * gamma (float) -- Multiplicative factor of learning rate\n decay.\n * last_epoch (int) -- The index of last epoch. Default:\n -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the schedulers state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ExponentialLR.html", "category": "pytorch docs"}
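The decay behavior described above can be sketched in a few lines (this snippet is not from the linked page; the single-parameter "model" is hypothetical, just enough to construct an optimizer). Each `scheduler.step()` multiplies every parameter group's learning rate by `gamma`:

```python
import torch

# hypothetical one-parameter "model", only needed to build an optimizer
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=1.0)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.1)

for epoch in range(3):
    optimizer.step()    # update parameters first ...
    scheduler.step()    # ... then decay the lr by gamma
    print(epoch, scheduler.get_last_lr())  # lr shrinks by 10x each epoch
```

Note the ordering: since PyTorch 1.1, `scheduler.step()` should be called after `optimizer.step()` within each epoch.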
{"text": "state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ExponentialLR.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logdetTensor.logdet() -> Tensor\n See \"torch.logdet()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logdet.html", "category": "pytorch docs"}
{"text": "torch.Tensor.log1p_Tensor.log1p_() -> Tensor\n In-place version of \"log1p()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log1p_.html", "category": "pytorch docs"}
{"text": "torch.dsplittorch.dsplit(input, indices_or_sections) -> List of Tensors\n Splits \"input\", a tensor with three or more dimensions, into\n multiple tensors depthwise according to \"indices_or_sections\". Each\n split is a view of \"input\".\n This is equivalent to calling torch.tensor_split(input,\n indices_or_sections, dim=2) (the split dimension is 2), except that\n if \"indices_or_sections\" is an integer it must evenly divide the\n split dimension or a runtime error will be thrown.\n This function is based on NumPy's \"numpy.dsplit()\".\n Parameters:\n * input (Tensor) -- tensor to split.\n * indices_or_sections (int or list or tuple of\n ints) -- See argument in \"torch.tensor_split()\".\n Example::\n >>> t = torch.arange(16.0).reshape(2, 2, 4)\n >>> t\n tensor([[[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.]],\n [[ 8., 9., 10., 11.],\n [12., 13., 14., 15.]]])\n >>> torch.dsplit(t, 2)", "source": "https://pytorch.org/docs/stable/generated/torch.dsplit.html", "category": "pytorch docs"}
{"text": ">>> torch.dsplit(t, 2)\n (tensor([[[ 0., 1.],\n [ 4., 5.]],\n [[ 8., 9.],\n [12., 13.]]]),\n tensor([[[ 2., 3.],\n [ 6., 7.]],\n [[10., 11.],\n [14., 15.]]]))\n >>> torch.dsplit(t, [3, 6])\n (tensor([[[ 0., 1., 2.],\n [ 4., 5., 6.]],\n [[ 8., 9., 10.],\n [12., 13., 14.]]]),\n tensor([[[ 3.],\n [ 7.]],\n [[11.],\n [15.]]]),\n tensor([], size=(2, 2, 0)))", "source": "https://pytorch.org/docs/stable/generated/torch.dsplit.html", "category": "pytorch docs"}
{"text": "torch._foreach_log1ptorch._foreach_log1p(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.log1p()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log1p.html", "category": "pytorch docs"}
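A minimal sketch (not from the linked page) showing that the fused foreach variant matches a plain per-tensor loop; note "torch._foreach_log1p" is a private API and may change between releases:

```python
import torch

tensors = [torch.tensor([0.0, 1.0]), torch.tensor([2.0, 3.0])]

# one fused call over the whole list ...
fused = torch._foreach_log1p(tensors)

# ... is equivalent to applying torch.log1p tensor by tensor
looped = [torch.log1p(t) for t in tensors]

for f, l in zip(fused, looped):
    assert torch.allclose(f, l)
```

The foreach variants exist to reduce kernel-launch overhead when the same unary op is applied across many tensors (e.g. inside optimizers).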
{"text": "torch.linalg.matrix_ranktorch.linalg.matrix_rank(A, *, atol=None, rtol=None, hermitian=False, out=None) -> Tensor\n Computes the numerical rank of a matrix.\n The matrix rank is computed as the number of singular values (or\n eigenvalues in absolute value when \"hermitian\"=True) that are\n greater than \\max(\\text{atol}, \\sigma_1 * \\text{rtol}) threshold,\n where \\sigma_1 is the largest singular value (or eigenvalue).\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n If \"hermitian\"=True, \"A\" is assumed to be Hermitian if complex\n or symmetric if real, but this is not checked internally. Instead,\n just the lower triangular part of the matrix is used in the\n computations.\n If \"rtol\" is not specified and \"A\" is a matrix of dimensions (m,\n n), the relative tolerance is set to be \\text{rtol} = \\max(m, n)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html", "category": "pytorch docs"}
{"text": "\\varepsilon and \\varepsilon is the epsilon value for the dtype of\n \"A\" (see \"finfo\"). If \"rtol\" is not specified and \"atol\" is\n specified to be larger than zero then \"rtol\" is set to zero.\n If \"atol\" or \"rtol\" is a \"torch.Tensor\", its shape must be\n broadcastable to that of the singular values of \"A\" as returned by\n \"torch.linalg.svdvals()\".\n Note:\n This function has NumPy compatible variant linalg.matrix_rank(A,\n tol, hermitian=False). However, use of the positional argument\n \"tol\" is deprecated in favor of \"atol\" and \"rtol\".\n Note:\n The matrix rank is computed using a singular value decomposition\n \"torch.linalg.svdvals()\" if \"hermitian\"=False (default) and\n the eigenvalue decomposition \"torch.linalg.eigvalsh()\" when\n \"hermitian\"=True. When inputs are on a CUDA device, this\n function synchronizes that device with the CPU.\n Parameters:\n * A (Tensor) -- tensor of shape (*, m, n) where \"*\" is\n zero or more batch dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html", "category": "pytorch docs"}
{"text": "zero or more batch dimensions.\n * tol (float, Tensor, optional) -- [NumPy Compat]\n Alias for \"atol\". Default: None.\n Keyword Arguments:\n * atol (float, Tensor, optional) -- the absolute\n tolerance value. When None it's considered to be zero.\n Default: None.\n * rtol (float, Tensor, optional) -- the relative\n tolerance value. See above for the value it takes when None.\n Default: None.\n * hermitian (bool) -- indicates whether \"A\" is Hermitian\n if complex or symmetric if real. Default: False.\n * out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Examples:\n >>> A = torch.eye(10)\n >>> torch.linalg.matrix_rank(A)\n tensor(10)\n >>> B = torch.eye(10)\n >>> B[0, 0] = 0\n >>> torch.linalg.matrix_rank(B)\n tensor(9)\n >>> A = torch.randn(4, 3, 2)\n >>> torch.linalg.matrix_rank(A)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html", "category": "pytorch docs"}
{"text": ">>> torch.linalg.matrix_rank(A)\n tensor([2, 2, 2, 2])\n >>> A = torch.randn(2, 4, 2, 3)\n >>> torch.linalg.matrix_rank(A)\n tensor([[2, 2, 2, 2],\n [2, 2, 2, 2]])\n >>> A = torch.randn(2, 4, 3, 3, dtype=torch.complex64)\n >>> torch.linalg.matrix_rank(A)\n tensor([[3, 3, 3, 3],\n [3, 3, 3, 3]])\n >>> torch.linalg.matrix_rank(A, hermitian=True)\n tensor([[3, 3, 3, 3],\n [3, 3, 3, 3]])\n >>> torch.linalg.matrix_rank(A, atol=1.0, rtol=0.0)\n tensor([[3, 2, 2, 2],\n [1, 2, 1, 2]])\n >>> torch.linalg.matrix_rank(A, atol=1.0, rtol=0.0, hermitian=True)\n tensor([[2, 2, 2, 1],\n [1, 2, 2, 2]])", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html", "category": "pytorch docs"}
{"text": "torch.from_numpytorch.from_numpy(ndarray) -> Tensor\n Creates a \"Tensor\" from a \"numpy.ndarray\".\n The returned tensor and \"ndarray\" share the same memory.\n Modifications to the tensor will be reflected in the \"ndarray\" and\n vice versa. The returned tensor is not resizable.\n It currently accepts \"ndarray\" with dtypes of \"numpy.float64\",\n \"numpy.float32\", \"numpy.float16\", \"numpy.complex64\",\n \"numpy.complex128\", \"numpy.int64\", \"numpy.int32\", \"numpy.int16\",\n \"numpy.int8\", \"numpy.uint8\", and \"numpy.bool\".\n Warning:\n Writing to a tensor created from a read-only NumPy array is not\n supported and will result in undefined behavior.\n Example:\n >>> a = numpy.array([1, 2, 3])\n >>> t = torch.from_numpy(a)\n >>> t\n tensor([ 1, 2, 3])\n >>> t[0] = -1\n >>> a\n array([-1, 2, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.from_numpy.html", "category": "pytorch docs"}
{"text": "torch.diag_embedtorch.diag_embed(input, offset=0, dim1=- 2, dim2=- 1) -> Tensor\n Creates a tensor whose diagonals of certain 2D planes (specified by\n \"dim1\" and \"dim2\") are filled by \"input\". To facilitate creating\n batched diagonal matrices, the 2D planes formed by the last two\n dimensions of the returned tensor are chosen by default.\n The argument \"offset\" controls which diagonal to consider:\n * If \"offset\" = 0, it is the main diagonal.\n * If \"offset\" > 0, it is above the main diagonal.\n * If \"offset\" < 0, it is below the main diagonal.\n The size of the new matrix will be calculated to make the specified\n diagonal of the size of the last input dimension. Note that for\n \"offset\" other than 0, the order of \"dim1\" and \"dim2\" matters.\n Exchanging them is equivalent to changing the sign of \"offset\".\n Applying \"torch.diagonal()\" to the output of this function with the\n same arguments yields a matrix identical to input. However,", "source": "https://pytorch.org/docs/stable/generated/torch.diag_embed.html", "category": "pytorch docs"}
{"text": "\"torch.diagonal()\" has different default dimensions, so those need\n to be explicitly specified.\n Parameters:\n * input (Tensor) -- the input tensor. Must be at least\n 1-dimensional.\n * offset (int, optional) -- which diagonal to\n consider. Default: 0 (main diagonal).\n * dim1 (int, optional) -- first dimension with respect\n to which to take diagonal. Default: -2.\n * dim2 (int, optional) -- second dimension with\n respect to which to take diagonal. Default: -1.\n Example:\n >>> a = torch.randn(2, 3)\n >>> torch.diag_embed(a)\n tensor([[[ 1.5410, 0.0000, 0.0000],\n [ 0.0000, -0.2934, 0.0000],\n [ 0.0000, 0.0000, -2.1788]],\n [[ 0.5684, 0.0000, 0.0000],\n [ 0.0000, -1.0845, 0.0000],\n [ 0.0000, 0.0000, -1.3986]]])\n >>> torch.diag_embed(a, offset=1, dim1=0, dim2=2)\n tensor([[[ 0.0000, 1.5410, 0.0000, 0.0000],", "source": "https://pytorch.org/docs/stable/generated/torch.diag_embed.html", "category": "pytorch docs"}
{"text": "[ 0.0000, 0.5684, 0.0000, 0.0000]],\n [[ 0.0000, 0.0000, -0.2934, 0.0000],\n [ 0.0000, 0.0000, -1.0845, 0.0000]],\n [[ 0.0000, 0.0000, 0.0000, -2.1788],\n [ 0.0000, 0.0000, 0.0000, -1.3986]],\n [[ 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000]]])", "source": "https://pytorch.org/docs/stable/generated/torch.diag_embed.html", "category": "pytorch docs"}
{"text": "torch.Tensor.count_nonzeroTensor.count_nonzero(dim=None) -> Tensor\n See \"torch.count_nonzero()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.count_nonzero.html", "category": "pytorch docs"}
{"text": "torch.Tensor.take_along_dimTensor.take_along_dim(indices, dim) -> Tensor\n See \"torch.take_along_dim()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.take_along_dim.html", "category": "pytorch docs"}
{"text": "torch.optim.Optimizer.load_state_dictOptimizer.load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an object\n returned from a call to \"state_dict()\".", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.load_state_dict.html", "category": "pytorch docs"}
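A save/restore round trip can be sketched as follows (not from the linked page; the one-parameter setup is hypothetical). "load_state_dict" restores both the hyperparameters in param_groups and the per-parameter state such as momentum buffers:

```python
import torch

param = torch.nn.Parameter(torch.ones(2))
opt = torch.optim.SGD([param], lr=0.5, momentum=0.9)

# one step so the optimizer accumulates momentum buffers
param.grad = torch.ones(2)
opt.step()
snapshot = opt.state_dict()   # hyperparameters + per-parameter state

# a fresh optimizer (different lr) takes over the saved state
param2 = torch.nn.Parameter(torch.ones(2))
opt2 = torch.optim.SGD([param2], lr=0.1)
opt2.load_state_dict(snapshot)

print(opt2.param_groups[0]["lr"])        # 0.5, restored from the snapshot
print(opt2.param_groups[0]["momentum"])  # 0.9
```

In practice the snapshot is what you pass through "torch.save"/"torch.load" when checkpointing training.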
{"text": "torch.nn.functional.relu6torch.nn.functional.relu6(input, inplace=False) -> Tensor\n Applies the element-wise function \\text{ReLU6}(x) = \\min(\\max(0,x),\n 6).\n See \"ReLU6\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.relu6.html", "category": "pytorch docs"}
{"text": "torch.cuda.memory_allocatedtorch.cuda.memory_allocated(device=None)\n Returns the current GPU memory occupied by tensors in bytes for a\n given device.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n int\n Note:\n This is likely less than the amount shown in nvidia-smi since\n some unused memory can be held by the caching allocator and some\n context needs to be created on GPU. See Memory management for\n more details about GPU memory management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_allocated.html", "category": "pytorch docs"}
{"text": "torch.Tensor.not_equal_Tensor.not_equal_(other) -> Tensor\n In-place version of \"not_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.not_equal_.html", "category": "pytorch docs"}
{"text": "torch.atleast_3dtorch.atleast_3d(*tensors)\n Returns a 3-dimensional view of each input tensor with zero\n dimensions. Input tensors with three or more dimensions are\n returned as-is.\n Parameters:\n input (Tensor or list of Tensors) --\n Returns:\n output (Tensor or tuple of Tensors)\n -[ Example ]-\n >>> x = torch.tensor(0.5)\n >>> x\n tensor(0.5000)\n >>> torch.atleast_3d(x)\n tensor([[[0.5000]]])\n >>> y = torch.arange(4).view(2, 2)\n >>> y\n tensor([[0, 1],\n [2, 3]])\n >>> torch.atleast_3d(y)\n tensor([[[0],\n [1]],\n [[2],\n [3]]])\n >>> x = torch.tensor(1).view(1, 1, 1)\n >>> x\n tensor([[[1]]])\n >>> torch.atleast_3d(x)\n tensor([[[1]]])\n >>> x = torch.tensor(0.5)\n >>> y = torch.tensor(1.)\n >>> torch.atleast_3d((x, y))\n (tensor([[[0.5000]]]), tensor([[[1.]]]))", "source": "https://pytorch.org/docs/stable/generated/torch.atleast_3d.html", "category": "pytorch docs"}
{"text": "torch.cummintorch.cummin(input, dim, *, out=None)\n Returns a namedtuple \"(values, indices)\" where \"values\" is the\n cumulative minimum of elements of \"input\" in the dimension \"dim\".\n And \"indices\" is the index location of each minimum value found in\n the dimension \"dim\".\n y_i = min(x_1, x_2, x_3, \\dots, x_i)\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to do the operation over\n Keyword Arguments:\n out (tuple, optional) -- the result tuple of two\n output tensors (values, indices)\n Example:\n >>> a = torch.randn(10)\n >>> a\n tensor([-0.2284, -0.6628, 0.0975, 0.2680, -1.3298, -0.4220, -0.3885, 1.1762,\n 0.9165, 1.6684])\n >>> torch.cummin(a, dim=0)\n torch.return_types.cummin(\n values=tensor([-0.2284, -0.6628, -0.6628, -0.6628, -1.3298, -1.3298, -1.3298, -1.3298,\n -1.3298, -1.3298]),\n indices=tensor([0, 1, 1, 1, 4, 4, 4, 4, 4, 4]))", "source": "https://pytorch.org/docs/stable/generated/torch.cummin.html", "category": "pytorch docs"}
{"text": "torch.cuda.set_rng_state_alltorch.cuda.set_rng_state_all(new_states)\n Sets the random number generator state of all devices.\n Parameters:\n new_states (Iterable of torch.ByteTensor) -- The desired\n state for each device", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_rng_state_all.html", "category": "pytorch docs"}
{"text": "torch.Tensor.deg2radTensor.deg2rad() -> Tensor\n See \"torch.deg2rad()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.deg2rad.html", "category": "pytorch docs"}
{"text": "torch.ldexptorch.ldexp(input, other, *, out=None) -> Tensor\n Multiplies \"input\" by 2 ** \"other\".\n \\text{out}_i = \\text{input}_i * 2^{\\text{other}_i}\n Typically this function is used to construct floating point numbers\n by multiplying mantissas in \"input\" with integral powers of two\n created from the exponents in \"other\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- a tensor of exponents, typically\n integers.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.ldexp(torch.tensor([1.]), torch.tensor([1]))\n tensor([2.])\n >>> torch.ldexp(torch.tensor([1.0]), torch.tensor([1, 2, 3, 4]))\n tensor([ 2., 4., 8., 16.])", "source": "https://pytorch.org/docs/stable/generated/torch.ldexp.html", "category": "pytorch docs"}
{"text": "Sigmoidclass torch.nn.Sigmoid\n Applies the element-wise function:\n \\text{Sigmoid}(x) = \\sigma(x) = \\frac{1}{1 + \\exp(-x)}\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n [image]\n Examples:\n >>> m = nn.Sigmoid()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html", "category": "pytorch docs"}
{"text": "torch.cuda.graph_pool_handletorch.cuda.graph_pool_handle()\n Returns an opaque token representing the id of a graph memory pool.\n See Graph memory management.\n Warning:\n This API is in beta and may change in future releases.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.graph_pool_handle.html", "category": "pytorch docs"}
{"text": "torch.Tensor.rollTensor.roll(shifts, dims) -> Tensor\n See \"torch.roll()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.roll.html", "category": "pytorch docs"}
{"text": "torch.jit.enable_onednn_fusiontorch.jit.enable_onednn_fusion(enabled)\n Enables or disables onednn JIT fusion based on the parameter\n enabled.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.enable_onednn_fusion.html", "category": "pytorch docs"}
{"text": "ReLU6class torch.nn.ReLU6(inplace=False)\n Applies the element-wise function:\n \\text{ReLU6}(x) = \\min(\\max(0,x), 6)\n Parameters:\n inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n [image]\n Examples:\n >>> m = nn.ReLU6()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReLU6.html", "category": "pytorch docs"}
{"text": "adaptive_avg_pool2dclass torch.ao.nn.quantized.functional.adaptive_avg_pool2d(input, output_size)\n Applies a 2D adaptive average pooling over a quantized input signal\n composed of several quantized input planes.\n Note:\n The input quantization parameters propagate to the output.\n See \"AdaptiveAvgPool2d\" for details and output shape.\n Parameters:\n output_size (None) -- the target output size (single\n integer or double-integer tuple)\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.adaptive_avg_pool2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.atanh_Tensor.atanh_() -> Tensor\n In-place version of \"atanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atanh_.html", "category": "pytorch docs"}
{"text": "DistributedDataParallelclass torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, find_unused_parameters=False, check_reduction=False, gradient_as_bucket_view=False, static_graph=False)\n Implements distributed data parallelism that is based on\n \"torch.distributed\" package at the module level.\n This container provides data parallelism by synchronizing gradients\n across each model replica. The devices to synchronize across are\n specified by the input \"process_group\", which is the entire world\n by default. Note that \"DistributedDataParallel\" does not chunk or\n otherwise shard the input across participating GPUs; the user is\n responsible for defining how to do so, for example through the use\n of a \"DistributedSampler\".\n See also: Basics and Use nn.parallel.DistributedDataParallel\n instead of multiprocessing or nn.DataParallel. The same constraints", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "on input as in \"torch.nn.DataParallel\" apply.\n Creation of this class requires that \"torch.distributed\" be\n already initialized, by calling\n \"torch.distributed.init_process_group()\".\n \"DistributedDataParallel\" is proven to be significantly faster than\n \"torch.nn.DataParallel\" for single-node multi-GPU data parallel\n training.\n To use \"DistributedDataParallel\" on a host with N GPUs, you should\n spawn up \"N\" processes, ensuring that each process exclusively\n works on a single GPU from 0 to N-1. This can be done by either\n setting \"CUDA_VISIBLE_DEVICES\" for every process or by calling:\n >>> torch.cuda.set_device(i)\n where i is from 0 to N-1. In each process, you should refer to the\n following to construct this module:\n >>> torch.distributed.init_process_group(\n >>> backend='nccl', world_size=N, init_method='...'\n >>> )\n >>> model = DistributedDataParallel(model, device_ids=[i], output_device=i)\n In order to spawn up multiple processes per node, you can use\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "either \"torch.distributed.launch\" or \"torch.multiprocessing.spawn\".\n Note:\n Please refer to PyTorch Distributed Overview for a brief\n introduction to all features related to distributed training.\n Note:\n \"DistributedDataParallel\" can be used in conjunction with\n \"torch.distributed.optim.ZeroRedundancyOptimizer\" to reduce per-\n rank optimizer states memory footprint. Please refer to\n ZeroRedundancyOptimizer recipe for more details.\n Note:\n \"nccl\" backend is currently the fastest and highly recommended\n backend when using GPUs. This applies to both single-node and\n multi-node distributed training.\n Note:\n This module also supports mixed-precision distributed training.\n This means that your model can have different types of parameters\n such as mixed types of \"fp16\" and \"fp32\", the gradient reduction\n on these mixed types of parameters will just work fine.\n Note:\n If you use \"torch.save\" on one process to checkpoint the module,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
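The per-process setup described above can be sketched end-to-end. This is a minimal single-process illustration (world_size=1, "gloo" backend on CPU, a hypothetical localhost rendezvous) rather than a real multi-GPU launch, which would use "nccl" and one process per GPU:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Hypothetical localhost rendezvous; real jobs get these from the launcher.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# world_size=1 keeps the sketch runnable on a single CPU machine;
# a real job would use backend="nccl" with one process per GPU.
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = DDP(nn.Linear(4, 2))           # gradient hooks are registered here
loss = model(torch.randn(8, 4)).sum()  # forward is a synchronization point
loss.backward()                        # gradients are allreduced in buckets

dist.destroy_process_group()
```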
{"text": "and \"torch.load\" on some other processes to recover it, make sure\n that \"map_location\" is configured properly for every process.\n Without \"map_location\", \"torch.load\" would recover the module to\n devices where the module was saved from.\n Note:\n When a model is trained on \"M\" nodes with \"batch=N\", the gradient\n will be \"M\" times smaller when compared to the same model trained\n on a single node with \"batch=M*N\" if the loss is summed (NOT\n averaged as usual) across instances in a batch (because the\n gradients between different nodes are averaged). You should take\n this into consideration when you want to obtain a mathematically\n equivalent training process compared to the local training\n counterpart. But in most cases, you can just treat a\n DistributedDataParallel wrapped model, a DataParallel wrapped\n model and an ordinary model on a single GPU as the same (E.g.\n using the same learning rate for equivalent batch size).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
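The scaling argument in the note above can be checked numerically in a single process, with no distributed setup: averaging the per-node gradients of summed per-node losses yields a gradient "M" times smaller than the gradient of the summed loss over the full batch.

```python
import torch

M, N = 4, 3                      # M nodes, per-node batch of N
x = torch.randn(M * N)

# Single node, batch M*N, summed loss.
w = torch.tensor(2.0, requires_grad=True)
(w * x).sum().backward()
g_single = w.grad.item()

# M nodes, batch N each: each node's gradient of its summed loss
# is the sum over its shard; DDP averages these across nodes.
node_grads = [x[i * N:(i + 1) * N].sum().item() for i in range(M)]
g_ddp = sum(node_grads) / M

# The averaged DDP gradient is M times smaller than the single-node one.
```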
{"text": "Note:\n Parameters are never broadcast between processes. The module\n performs an all-reduce step on gradients and assumes that they\n will be modified by the optimizer in all processes in the same\n way. Buffers (e.g. BatchNorm stats) are broadcast from the module\n in process of rank 0, to all other replicas in the system in\n every iteration.\n Note:\n If you are using DistributedDataParallel in conjunction with the\n Distributed RPC Framework, you should always use\n \"torch.distributed.autograd.backward()\" to compute gradients and\n \"torch.distributed.optim.DistributedOptimizer\" for optimizing\n parameters.Example:\n >>> import torch.distributed.autograd as dist_autograd\n >>> from torch.nn.parallel import DistributedDataParallel as DDP\n >>> import torch\n >>> from torch import optim\n >>> from torch.distributed.optim import DistributedOptimizer\n >>> import torch.distributed.rpc as rpc", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": ">>> from torch.distributed.rpc import RRef\n >>>\n >>> t1 = torch.rand((3, 3), requires_grad=True)\n >>> t2 = torch.rand((3, 3), requires_grad=True)\n >>> rref = rpc.remote(\"worker1\", torch.add, args=(t1, t2))\n >>> ddp_model = DDP(my_model)\n >>>\n >>> # Setup optimizer\n >>> optimizer_params = [rref]\n >>> for param in ddp_model.parameters():\n >>> optimizer_params.append(RRef(param))\n >>>\n >>> dist_optim = DistributedOptimizer(\n >>> optim.SGD,\n >>> optimizer_params,\n >>> lr=0.05,\n >>> )\n >>>\n >>> with dist_autograd.context() as context_id:\n >>> pred = ddp_model(rref.to_here())\n >>> loss = loss_func(pred, target)\n >>> dist_autograd.backward(context_id, [loss])\n >>> dist_optim.step(context_id)\n Note:\n DistributedDataParallel currently offers limited support for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "gradient checkpointing with \"torch.utils.checkpoint()\". DDP will\n work as expected when there are no unused parameters in the model\n and each layer is checkpointed at most once (make sure you are\n not passing find_unused_parameters=True to DDP). We currently\n do not support the case where a layer is checkpointed multiple\n times, or when there are unused parameters in the checkpointed\n model.\n Note:\n To let a non-DDP model load a state dict from a DDP model,\n \"consume_prefix_in_state_dict_if_present()\" needs to be applied\n to strip the prefix \"module.\" in the DDP state dict before\n loading.\n Warning:\n Constructor, forward method, and differentiation of the output\n (or a function of the output of this module) are distributed\n synchronization points. Take that into account in case different\n processes might be executing different code.\n Warning:\n This module assumes all parameters are registered in the model by", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
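The prefix-stripping note above can be sketched as follows, simulating a DDP-style checkpoint in-process (no actual distributed setup; the "module." prefix is added by hand to stand in for one saved from a wrapped model):

```python
import torch.nn as nn
from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present

# Simulate a checkpoint saved from a DDP-wrapped model: every key
# carries the "module." prefix that DDP adds.
ddp_state = {"module." + k: v for k, v in nn.Linear(3, 3).state_dict().items()}

# Strip the prefix in place so a plain (non-DDP) model can load it.
consume_prefix_in_state_dict_if_present(ddp_state, "module.")

plain = nn.Linear(3, 3)
plain.load_state_dict(ddp_state)  # loads cleanly without the prefix
```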
{"text": "the time it is created. No parameters should be added nor removed\n later. The same applies to buffers.\n Warning:\n This module assumes that the parameters registered in the model\n of each distributed process are in the same order. The module\n itself will conduct gradient \"allreduce\" following the reverse\n order of the registered parameters of the model. In other words,\n it is the users' responsibility to ensure that each distributed\n process has the exact same model and thus the exact same\n parameter registration order.\n Warning:\n This module allows parameters with non-rowmajor-contiguous\n strides. For example, your model may contain some parameters\n whose \"torch.memory_format\" is \"torch.contiguous_format\" and\n others whose format is \"torch.channels_last\". However,\n corresponding parameters in different processes must have the\n same strides.\n Warning:\n This module doesn't work with \"torch.autograd.grad()\" (i.e. it", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "will only work if gradients are to be accumulated in \".grad\"\n attributes of parameters).\n Warning:\n If you plan on using this module with a \"nccl\" backend or a\n \"gloo\" backend (that uses Infiniband), together with a DataLoader\n that uses multiple workers, please change the multiprocessing\n start method to \"forkserver\" (Python 3 only) or \"spawn\".\n Unfortunately Gloo (that uses Infiniband) and NCCL2 are not fork\n safe, and you will likely experience deadlocks if you don't\n change this setting.\n Warning:\n You should never try to change your model's parameters after\n wrapping up your model with \"DistributedDataParallel\". Because,\n when wrapping up your model with \"DistributedDataParallel\", the\n constructor of \"DistributedDataParallel\" will register the\n additional gradient reduction functions on all the parameters of\n the model itself at the time of construction. If you change the\n model's parameters afterwards, gradient reduction functions no", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "longer match the correct set of parameters.\n Warning:\n Using \"DistributedDataParallel\" in conjunction with the\n Distributed RPC Framework is experimental and subject to change.\n Parameters:\n * module (Module) -- module to be parallelized\n * device_ids (list of python:int or torch.device) --\n CUDA devices. 1) For single-device modules, \"device_ids\" can\n contain exactly one device id, which represents the only CUDA\n device where the input module corresponding to this process\n resides. Alternatively, \"device_ids\" can also be \"None\". 2)\n For multi-device modules and CPU modules, \"device_ids\" must be\n \"None\".\n When \"device_ids\" is \"None\" for both cases, both the input\n data for the forward pass and the actual module must be placed\n on the correct device. (default: \"None\")\n * output_device (int or torch.device) -- Device\n location of output for single-device CUDA modules. For multi-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "device modules and CPU modules, it must be \"None\", and the\n module itself dictates the output location. (default:\n \"device_ids[0]\" for single-device modules)\n * broadcast_buffers (bool) -- Flag that enables syncing\n (broadcasting) buffers of the module at beginning of the\n \"forward\" function. (default: \"True\")\n * process_group -- The process group to be used for\n distributed data all-reduction. If \"None\", the default process\n group, which is created by\n \"torch.distributed.init_process_group()\", will be used.\n (default: \"None\")\n * bucket_cap_mb -- \"DistributedDataParallel\" will bucket\n parameters into multiple buckets so that gradient reduction of\n each bucket can potentially overlap with backward computation.\n \"bucket_cap_mb\" controls the bucket size in MegaBytes (MB).\n (default: 25)\n * find_unused_parameters (bool) -- Traverse the autograd", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "graph from all tensors contained in the return value of the\n wrapped module's \"forward\" function. Parameters that don't\n receive gradients as part of this graph are preemptively\n marked as being ready to be reduced. In addition, parameters\n that may have been used in the wrapped module's \"forward\"\n function but were not part of loss computation and thus would\n also not receive gradients are preemptively marked as ready to\n be reduced. (default: \"False\")\n * check_reduction -- This argument is deprecated.\n * gradient_as_bucket_view (bool) -- When set to \"True\",\n gradients will be views pointing to different offsets of\n \"allreduce\" communication buckets. This can reduce peak memory\n usage, where the saved memory size will be equal to the total\n gradients size. Moreover, it avoids the overhead of copying\n between gradients and \"allreduce\" communication buckets. When", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "gradients are views, \"detach_()\" cannot be called on the\n gradients. If hitting such errors, please fix it by referring\n to the \"zero_grad()\" function in \"torch/optim/optimizer.py\" as\n a solution. Note that gradients will be views after first\n iteration, so the peak memory saving should be checked after\n first iteration.\n * static_graph (bool) --\n When set to \"True\", DDP knows the trained graph is static.\n Static graph means 1) The set of used and unused parameters\n will not change during the whole training loop; in this case,\n it does not matter whether users set \"find_unused_parameters =\n True\" or not. 2) How the graph is trained will not change\n during the whole training loop (meaning there is no control\n flow depending on iterations). When static_graph is set to be\n \"True\", DDP will support cases that can not be supported in\n the past: 1) Reentrant backwards. 2) Activation checkpointing", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "multiple times. 3) Activation checkpointing when model has\n unused parameters. 4) There are model parameters that are\n outside of forward function. 5) Potentially improve\n performance when there are unused parameters, as DDP will not\n search graph in each iteration to detect unused parameters\n when static_graph is set to be \"True\". To check whether you\n can set static_graph to be \"True\", one way is to check ddp\n logging data at the end of your previous model training, if\n \"ddp_logging_data.get(\"can_set_static_graph\") == True\", mostly\n you can set \"static_graph = True\" as well.\n Example::\n >>> model_DDP = torch.nn.parallel.DistributedDataParallel(model)\n >>> # Training loop\n >>> ...\n >>> ddp_logging_data = model_DDP._get_ddp_logging_data()\n >>> static_graph = ddp_logging_data.get(\"can_set_static_graph\")\n Variables:\n module (Module) -- the module to be parallelized.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "Example:\n >>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')\n >>> net = torch.nn.parallel.DistributedDataParallel(model)\n join(divide_by_initial_world_size=True, enable=True, throw_on_early_termination=False)\n A context manager to be used in conjunction with an instance of\n \"torch.nn.parallel.DistributedDataParallel\" to be able to train\n with uneven inputs across participating processes.\n This context manager will keep track of already-joined DDP\n processes, and \"shadow\" the forward and backward passes by\n inserting collective communication operations to match with the\n ones created by non-joined DDP processes. This will ensure each\n collective call has a corresponding call by already-joined DDP\n processes, preventing hangs or errors that would otherwise\n happen when training with uneven inputs across processes.\n Alternatively, if the flag \"throw_on_early_termination\" is", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "specified to be \"True\", all trainers will throw an error once\n one rank runs out of inputs, allowing these errors to be caught\n and handled according to application logic.\n Once all DDP processes have joined, the context manager will\n broadcast the model corresponding to the last joined process to\n all processes to ensure the model is the same across all\n processes (which is guaranteed by DDP).\n To use this to enable training with uneven inputs across\n processes, simply wrap this context manager around your training\n loop. No further modifications to the model or data loading are\n required.\n Warning:\n If the model or training loop this context manager is wrapped\n around has additional distributed collective operations, such\n as \"SyncBatchNorm\" in the model's forward pass, then the flag\n \"throw_on_early_termination\" must be enabled. This is because\n this context manager is not aware of non-DDP collective", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "communication. This flag will cause all ranks to throw when\n any one rank exhausts inputs, allowing these errors to be\n caught and recovered from across all ranks.\n Parameters:\n * divide_by_initial_world_size (bool) -- If \"True\",\n will divide gradients by the initial \"world_size\" DDP\n training was launched with. If \"False\", will compute the\n effective world size (number of ranks that have not\n depleted their inputs yet) and divide gradients by that\n during allreduce. Set \"divide_by_initial_world_size=True\"\n to ensure every input sample including the uneven inputs\n have equal weight in terms of how much they contribute to\n the global gradient. This is achieved by always dividing\n the gradient by the initial \"world_size\" even when we\n encounter uneven inputs. If you set this to \"False\", we\n divide the gradient by the remaining number of nodes. This", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "ensures parity with training on a smaller \"world_size\"\n although it also means the uneven inputs would contribute\n more towards the global gradient. Typically, you would want\n to set this to \"True\" for cases where the last few inputs\n of your training job are uneven. In extreme cases, where\n there is a large discrepancy in the number of inputs,\n setting this to \"False\" might provide better results.\n * enable (bool) -- Whether to enable uneven input\n detection or not. Pass in \"enable=False\" to disable in\n cases where you know that inputs are even across\n participating processes. Default is \"True\".\n * throw_on_early_termination (bool) -- Whether to throw\n an error or continue training when at least one rank has\n exhausted inputs. If \"True\", will throw upon the first rank\n reaching end of data. If \"False\", will continue training", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "with a smaller effective world size until all ranks are\n joined. Note that if this flag is specified, then the flag\n \"divide_by_initial_world_size\" would be ignored. Default is\n \"False\".\n Example:\n >>> import torch\n >>> import torch.distributed as dist\n >>> import os\n >>> import torch.multiprocessing as mp\n >>> import torch.nn as nn\n >>> # On each spawned worker\n >>> def worker(rank):\n >>> dist.init_process_group(\"nccl\", rank=rank, world_size=2)\n >>> torch.cuda.set_device(rank)\n >>> model = nn.Linear(1, 1, bias=False).to(rank)\n >>> model = torch.nn.parallel.DistributedDataParallel(\n >>> model, device_ids=[rank], output_device=rank\n >>> )\n >>> # Rank 1 gets one more input than rank 0.\n >>> inputs = [torch.tensor([1]).float() for _ in range(10 + rank)]\n >>> with model.join():", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": ">>> for _ in range(5):\n >>> for inp in inputs:\n >>> loss = model(inp).sum()\n >>> loss.backward()\n >>> # Without the join() API, the below synchronization will hang\n >>> # blocking for rank 1's allreduce to complete.\n >>> torch.cuda.synchronize(device=rank)\n join_hook(**kwargs)\n Returns the DDP join hook, which enables training on uneven\n inputs by shadowing the collective communications in the forward\n and backward passes.\n Parameters:\n kwargs (dict) -- a \"dict\" containing any keyword\n arguments to modify the behavior of the join hook at run\n time; all \"Joinable\" instances sharing the same join context\n manager are forwarded the same value for \"kwargs\".\n The hook supports the following keyword arguments:\n divide_by_initial_world_size (bool, optional):", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "If \"True\", then gradients are divided by the initial world\n size that DDP was launched with. If \"False\", then\n gradients are divided by the effective world size (i.e.\n the number of non-joined processes), meaning that the\n uneven inputs contribute more toward the global gradient.\n Typically, this should be set to \"True\" if the degree of\n unevenness is small but can be set to \"False\" in extreme\n cases for possibly better results. Default is \"True\".\n no_sync()\n A context manager to disable gradient synchronizations across\n DDP processes. Within this context, gradients will be\n accumulated on module variables, which will later be\n synchronized in the first forward-backward pass exiting the\n context.\n Example:\n >>> ddp = torch.nn.parallel.DistributedDataParallel(model, pg)\n >>> with ddp.no_sync():\n >>> for input in inputs:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": ">>> ddp(input).backward() # no synchronization, accumulate grads\n >>> ddp(another_input).backward() # synchronize grads\n Warning:\n The forward pass should be included inside the context\n manager, or else gradients will still be synchronized.\n register_comm_hook(state, hook)\n Registers a communication hook, which gives users a flexible way\n to specify how DDP aggregates gradients across multiple workers.\n This hook is very useful for researchers trying out new ideas.\n For example, it can be used to implement algorithms like\n GossipGrad and gradient compression, which involve different\n communication strategies for parameter syncs while running\n Distributed DataParallel training.\n Parameters:\n * state (object) --\n Passed to the hook to maintain any state information during", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "the training process. Examples include error feedback in\n gradient compression, peers to communicate with next in\n GossipGrad, etc.\n It is locally stored by each worker and shared by all the\n gradient tensors on the worker.\n * hook (Callable) --\n Callable with the following signature: \"hook(state: object,\n bucket: dist.GradBucket) ->\n torch.futures.Future[torch.Tensor]\":\n This function is called once the bucket is ready. The hook\n can perform whatever processing is needed and return a\n Future indicating completion of any async work (ex:\n allreduce). If the hook doesn't perform any communication,\n it still must return a completed Future. The Future should\n hold the new value of grad bucket's tensors. Once a bucket\n is ready, c10d reducer would call this hook and use the\n tensors returned by the Future and copy grads to individual", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "parameters. Note that the future's return type must be a\n single tensor.\n We also provide an API called \"get_future\" to retrieve a\n Future associated with the completion of\n \"c10d.ProcessGroup.Work\". \"get_future\" is currently\n supported for NCCL and also supported for most operations\n on GLOO and MPI, except for peer to peer operations\n (send/recv).\n Warning:\n Grad bucket's tensors will not be predivided by world_size.\n User is responsible to divide by the world_size in case of\n operations like allreduce.\n Warning:\n DDP communication hook can only be registered once and should\n be registered before calling backward.\n Warning:\n The Future object that hook returns should contain a single\n tensor that has the same shape with the tensors inside grad\n bucket.\n Warning:\n \"get_future\" API supports NCCL, and partially GLOO and MPI", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "backends (no support for peer-to-peer operations like\n send/recv) and will return a \"torch.futures.Future\".\n Example::\n Below is an example of a noop hook that returns the same\n tensor.\n >>> def noop(state: object, bucket: dist.GradBucket) -> torch.futures.Future[torch.Tensor]:\n >>> fut = torch.futures.Future()\n >>> fut.set_result(bucket.buffer())\n >>> return fut\n >>> ddp.register_comm_hook(state=None, hook=noop)\n Example::\n Below is an example of a Parallel SGD algorithm where\n gradients are encoded before allreduce, and then decoded\n after allreduce.\n >>> def encode_and_decode(state: object, bucket: dist.GradBucket) -> torch.futures.Future[torch.Tensor]:\n >>> encoded_tensor = encode(bucket.buffer()) # encode gradients\n >>> fut = torch.distributed.all_reduce(encoded_tensor).get_future()\n >>> # Define the then callback to decode.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": ">>> def decode(fut):\n >>> decoded_tensor = decode(fut.value()[0]) # decode gradients\n >>> return decoded_tensor\n >>> return fut.then(decode)\n >>> ddp.register_comm_hook(state=None, hook=encode_and_decode)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"}
{"text": "torch.sparse_compressed_tensor\ntorch.sparse_compressed_tensor(compressed_indices, plain_indices, values, size=None, *, dtype=None, layout=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\n Constructs a sparse tensor in Compressed Sparse format - CSR, CSC,\n BSR, or BSC - with specified values at the given\n \"compressed_indices\" and \"plain_indices\". Sparse matrix\n multiplication operations in Compressed Sparse format are typically\n faster than those for sparse tensors in COO format. Make sure you\n have a look at the note on the data type of the indices.\n Note:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n Parameters:\n * compressed_indices (array_like) -- (B+1)-dimensional", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"}
{"text": "array of size \"(batchsize, compressed_dim_size + 1)\". The\n last element of each batch is the number of non-zero elements\n or blocks. This tensor encodes the index in \"values\" and\n \"plain_indices\" depending on where the given compressed\n dimension (row or column) starts. Each successive number in\n the tensor subtracted by the number before it denotes the\n number of elements or blocks in a given compressed dimension.\n * plain_indices (array_like) -- Plain dimension (column or\n row) co-ordinates of each element or block in values.\n (B+1)-dimensional tensor with the same length as values.\n * values (array_like) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other\n types, that represent a (1+K)-dimensional (for CSR and CSC\n layouts) or (1+2+K)-dimensional tensor (for BSR and BSC\n layouts) where \"K\" is the number of dense dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"}
{"text": "\nsize (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(batchsize, nrows * blocksize[0], ncols *\n blocksize[1], densesize)\" where \"blocksize[0] == blocksize[1]\n == 1\" for CSR and CSC formats. If not provided, the size will\n be inferred as the minimum size big enough to hold all non-\n zero elements or blocks.\n Keyword Arguments:\ndtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\nlayout (\"torch.layout\", required) -- the desired layout of\n returned tensor: \"torch.sparse_csr\", \"torch.sparse_csc\",\n \"torch.sparse_bsr\", or \"torch.sparse_bsc\".\ndevice (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"}
{"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * check_invariants (bool, optional) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n Example::\n >>> compressed_indices = [0, 2, 4]\n >>> plain_indices = [0, 1, 0, 1]\n >>> values = [1, 2, 3, 4]\n >>> torch.sparse_compressed_tensor(torch.tensor(compressed_indices, dtype=torch.int64),\n ... torch.tensor(plain_indices, dtype=torch.int64),\n ... torch.tensor(values), dtype=torch.double, layout=torch.sparse_csr)\n tensor(crow_indices=tensor([0, 2, 4]),\n col_indices=tensor([0, 1, 0, 1]),\n values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"}
{"text": "dtype=torch.float64, layout=torch.sparse_csr)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"}
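The role of "compressed_indices" described above can be made concrete: successive differences of "crow_indices" give the number of stored elements per row. A small CSR sketch mirroring the example values:

```python
import torch

crow = torch.tensor([0, 2, 4])         # compressed (row) indices
col = torch.tensor([0, 1, 0, 1])       # plain (column) indices
vals = torch.tensor([1., 2., 3., 4.])

csr = torch.sparse_csr_tensor(crow, col, vals, size=(2, 2))

# Successive differences of crow_indices = non-zeros per row.
per_row = (crow[1:] - crow[:-1]).tolist()

dense = csr.to_dense()  # recover the equivalent dense matrix
```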
{"text": "InstanceNorm3d\nclass torch.nn.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\n Applies Instance Normalization over a 5D input (a mini-batch of 3D\n inputs with additional channel dimension) as described in the paper\n Instance Normalization: The Missing Ingredient for Fast\n Stylization.\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The mean and standard-deviation are calculated per-dimension\n separately for each object in a mini-batch. \\gamma and \\beta are\n learnable parameter vectors of size C (where C is the input size)\n if \"affine\" is \"True\". The standard-deviation is calculated via the\n biased estimator, equivalent to torch.var(input, unbiased=False).\n By default, this layer uses instance statistics computed from input\n data in both training and evaluation modes.\n If \"track_running_stats\" is set to \"True\", during training this", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html", "category": "pytorch docs"}
{"text": "layer keeps running estimates of its computed mean and variance,\n which are then used for normalization during evaluation. The\n running estimates are kept with a default \"momentum\" of 0.1.\n Note:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n Note:\n \"InstanceNorm3d\" and \"LayerNorm\" are very similar, but have some\n subtle differences. \"InstanceNorm3d\" is applied on each channel\n of channeled data like 3D models with RGB color, but \"LayerNorm\"\n is usually applied on the entire sample and often in NLP tasks.\n Additionally, \"LayerNorm\" applies an elementwise affine transform,\n while \"InstanceNorm3d\" usually does not apply an affine transform.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html", "category": "pytorch docs"}
{"text": "Parameters:\n * num_features (int) -- C from an expected input of size\n (N, C, D, H, W) or (C, D, H, W)\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,\n initialized the same way as done for batch normalization.\n Default: \"False\".\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n Shape:\n * Input: (N, C, D, H, W) or (C, D, H, W)\n * Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm3d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm3d(100, affine=True)\n >>> input = torch.randn(20, 100, 35, 45, 10)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html", "category": "pytorch docs"}
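The momentum update rule from the note above can be checked on the running mean when "track_running_stats=True". This is a sketch comparing only the mean (the variance bookkeeping is more involved); it assumes the batch statistic for the running mean is the per-channel mean over the batch and spatial dimensions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.InstanceNorm3d(2, momentum=0.1, track_running_stats=True)
x = torch.randn(4, 2, 3, 3, 3)

m.train()
_ = m(x)

# running_mean follows (1 - momentum) * old + momentum * batch statistic,
# starting from zeros, so after one step it is 0.1 * per-channel mean.
expected = 0.1 * x.mean(dim=(0, 2, 3, 4))
```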
{"text": "torch.nn.functional.bilineartorch.nn.functional.bilinear(input1, input2, weight, bias=None) -> Tensor\n Applies a bilinear transformation to the incoming data: y = x_1^T A\n x_2 + b\n Shape:\n * input1: (N, *, H_{in1}) where H_{in1}=\\text{in1_features} and\n * means any number of additional dimensions. All but the last\n dimension of the inputs should be the same.\n * input2: (N, *, H_{in2}) where H_{in2}=\\text{in2_features}\n * weight: (\\text{out_features}, \\text{in1_features},\n \\text{in2_features})\n * bias: (\\text{out_features})\n * output: (N, *, H_{out}) where H_{out}=\\text{out_features} and\n all but the last dimension are the same shape as the input.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.bilinear.html", "category": "pytorch docs"}
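A quick numerical check (not part of the scraped docs; the shapes below are illustrative) that torch.nn.functional.bilinear matches the formula y = x_1^T A x_2 + b, spelled out with einsum:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes: batch of 4, in1_features=5, in2_features=6, out_features=3.
x1 = torch.randn(4, 5)
x2 = torch.randn(4, 6)
A = torch.randn(3, 5, 6)   # (out_features, in1_features, in2_features)
b = torch.randn(3)

out = F.bilinear(x1, x2, A, b)          # shape (4, 3)

# Same computation written out:
# y[n, k] = sum_ij x1[n, i] * A[k, i, j] * x2[n, j] + b[k]
manual = torch.einsum('ni,kij,nj->nk', x1, A, x2) + b
assert torch.allclose(out, manual, atol=1e-5)
```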
{"text": "torch.Tensor.bitwise_notTensor.bitwise_not() -> Tensor\n See \"torch.bitwise_not()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_not.html", "category": "pytorch docs"}
{"text": "torch.linalg.householder_producttorch.linalg.householder_product(A, tau, *, out=None) -> Tensor\n Computes the first n columns of a product of Householder\n matrices.\n Let \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, and let V \\in\n \\mathbb{K}^{m \\times n} be a matrix with columns v_i \\in\n \\mathbb{K}^m for i=1,\\ldots,n with m \\geq n. Denote by w_i the\n vector resulting from zeroing out the first i-1 components of v_i\n and setting to 1 the i-th. For a vector \\tau \\in \\mathbb{K}^k\n with k \\leq n, this function computes the first n columns of the\n matrix\n H_1H_2 ... H_k \\qquad\\text{with}\\qquad H_i = \\mathrm{I}_m -\n \\tau_i w_i w_i^{\\text{H}}\n where \\mathrm{I}_m is the m-dimensional identity matrix and\n w^{\\text{H}} is the conjugate transpose when w is complex, and the\n transpose when w is real-valued. The output matrix is the same size\n as the input matrix \"A\".\n See Representation of Orthogonal or Unitary Matrices for further\n details.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.householder_product.html", "category": "pytorch docs"}
{"text": "details.\n Supports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\n See also:\n \"torch.geqrf()\" can be used together with this function to form\n the Q from the \"qr()\" decomposition.\n \"torch.ormqr()\" is a related function that computes the matrix\n multiplication of a product of Householder matrices with another\n matrix. However, that function is not supported by autograd.\n Warning:\n Gradient computations are only well-defined if tau_i \\neq\n \\frac{1}{||v_i||^2}. If this condition is not met, no error will\n be thrown, but the gradient produced may contain NaN.\n Parameters:\n * A (Tensor) -- tensor of shape (*, m, n) where \"*\" is\n zero or more batch dimensions.\n * tau (Tensor) -- tensor of shape (*, k) where \"*\" is\n zero or more batch dimensions.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.householder_product.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Raises:\n RuntimeError -- if \"A\" doesn't satisfy the requirement m >=\n n, or \"tau\" doesn't satisfy the requirement n >= k.\n Examples:\n >>> A = torch.randn(2, 2)\n >>> h, tau = torch.geqrf(A)\n >>> Q = torch.linalg.householder_product(h, tau)\n >>> torch.dist(Q, torch.linalg.qr(A).Q)\n tensor(0.)\n >>> h = torch.randn(3, 2, 2, dtype=torch.complex128)\n >>> tau = torch.randn(3, 1, dtype=torch.complex128)\n >>> Q = torch.linalg.householder_product(h, tau)\n >>> Q\n tensor([[[ 1.8034+0.4184j, 0.2588-1.0174j],\n [-0.6853+0.7953j, 2.0790+0.5620j]],\n [[ 1.4581+1.6989j, -1.5360+0.1193j],\n [ 1.3877-0.6691j, 1.3512+1.3024j]],\n [[ 1.4766+0.5783j, 0.0361+0.6587j],\n [ 0.6396+0.1612j, 1.3693+0.4481j]]], dtype=torch.complex128)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.householder_product.html", "category": "pytorch docs"}
{"text": "torch.set_num_interop_threadstorch.set_num_interop_threads(int)\n Sets the number of threads used for interop parallelism (e.g. in\n JIT interpreter) on CPU.\n Warning:\n Can only be called once and before any inter-op parallel work is\n started (e.g. JIT execution).", "source": "https://pytorch.org/docs/stable/generated/torch.set_num_interop_threads.html", "category": "pytorch docs"}
{"text": "torch.stacktorch.stack(tensors, dim=0, *, out=None) -> Tensor\n Concatenates a sequence of tensors along a new dimension.\n All tensors need to be of the same size.\n Parameters:\n * tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\n * dim (int) -- dimension to insert. Has to be between 0\n and the number of dimensions of concatenated tensors\n (inclusive)\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.stack.html", "category": "pytorch docs"}
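Not from the scraped page, but a short sketch of how torch.stack differs from torch.cat: stack always inserts a new dimension, while cat reuses an existing one.

```python
import torch

a = torch.zeros(2, 3)
b = torch.ones(2, 3)

# stack creates a new dimension at the given position...
assert torch.stack([a, b], dim=0).shape == (2, 2, 3)
assert torch.stack([a, b], dim=2).shape == (2, 3, 2)

# ...whereas cat joins along an existing dimension.
assert torch.cat([a, b], dim=0).shape == (4, 3)

# stack is equivalent to unsqueezing each tensor and concatenating.
assert torch.equal(torch.stack([a, b]),
                   torch.cat([a.unsqueeze(0), b.unsqueeze(0)]))
```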
{"text": "torch.Tensor.multiply_Tensor.multiply_(value) -> Tensor\n In-place version of \"multiply()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.multiply_.html", "category": "pytorch docs"}
{"text": "torch.nextaftertorch.nextafter(input, other, *, out=None) -> Tensor\n Return the next floating-point value after \"input\" towards \"other\",\n elementwise.\n The shapes of \"input\" and \"other\" must be broadcastable.\n Parameters:\n * input (Tensor) -- the first input tensor\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> eps = torch.finfo(torch.float32).eps\n >>> torch.nextafter(torch.tensor([1.0, 2.0]), torch.tensor([2.0, 1.0])) == torch.tensor([eps + 1, 2 - eps])\n tensor([True, True])", "source": "https://pytorch.org/docs/stable/generated/torch.nextafter.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fmodTensor.fmod(divisor) -> Tensor\n See \"torch.fmod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fmod.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logTensor.log() -> Tensor\n See \"torch.log()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_or_Tensor.bitwise_or_() -> Tensor\n In-place version of \"bitwise_or()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_or_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.baddbmm_Tensor.baddbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor\n In-place version of \"baddbmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.baddbmm_.html", "category": "pytorch docs"}
{"text": "torch.fft.irfft2torch.fft.irfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor\n Computes the inverse of \"rfft2()\". Equivalent to \"irfftn()\" but\n IFFTs only the last two dimensions by default.\n \"input\" is interpreted as a one-sided Hermitian signal in the\n Fourier domain, as produced by \"rfft2()\". By the Hermitian\n property, the output will be real-valued.\n Note:\n Some input frequencies must be real-valued to satisfy the\n Hermitian property. In these cases the imaginary component will\n be ignored. For example, any imaginary component in the zero-\n frequency term cannot be represented in a real output and so will\n always be ignored.\n Note:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"s\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html", "category": "pytorch docs"}
{"text": "odd signals will not round-trip properly. So, it is recommended\n to always pass the signal shape \"s\".\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension. With default arguments,\n the size of the last dimension should be (2^n + 1) as argument \"s\"\n defaults to even output size = 2 * (last_dim_size - 1)\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Defaults to even output in\n the last dimension: \"s[-1] = 2*(input.size(dim[-1]) - 1)\".\n * dim (Tuple[int], optional) -- Dimensions to be", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html", "category": "pytorch docs"}
{"text": "transformed. The last dimension must be the half-Hermitian\n compressed dimension. Default: last two dimensions.\n * norm (str, optional) --\n Normalization mode. For the backward transform (\"irfft2()\"),\n these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real IFFT\n orthonormal)\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"rfft2()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"irfft2()\" the exact\n inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n >>> t = torch.rand(10, 9)\n >>> T = torch.fft.rfft2(t)\n Without specifying the output length to \"irfft2()\", the output will", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html", "category": "pytorch docs"}
{"text": "not round-trip properly because the input is odd-length in the last\n dimension:\n >>> torch.fft.irfft2(T).size()\n torch.Size([10, 8])\n So, it is recommended to always pass the signal shape \"s\":\n >>> roundtrip = torch.fft.irfft2(T, t.size())\n >>> roundtrip.size()\n torch.Size([10, 9])\n >>> torch.testing.assert_close(roundtrip, t, check_stride=False)", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html", "category": "pytorch docs"}
{"text": "torch._foreach_ceiltorch._foreach_ceil(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.ceil()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_ceil.html", "category": "pytorch docs"}
{"text": "torch.var_meantorch.var_mean(input, dim=None, *, correction=1, keepdim=False, out=None)\n Calculates the variance and mean over the dimensions specified by\n \"dim\". \"dim\" can be a single dimension, list of dimensions, or\n \"None\" to reduce over all dimensions.\n The variance (\\sigma^2) is calculated as\n \\sigma^2 = \\frac{1}{N - \\delta N}\\sum_{i=0}^{N-1}(x_i-\\bar{x})^2\n where x is the sample set of elements, \\bar{x} is the sample mean,\n N is the number of samples and \\delta N is the \"correction\".\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.", "source": "https://pytorch.org/docs/stable/generated/torch.var_mean.html", "category": "pytorch docs"}
{"text": "are reduced.\n Keyword Arguments:\n * correction (int) --\n difference between the sample size and sample degrees of\n freedom. Defaults to Bessel's correction, \"correction=1\".\n Changed in version 2.0: Previously this argument was called\n \"unbiased\" and was a boolean with \"True\" corresponding to\n \"correction=1\" and \"False\" being \"correction=0\".\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n * out (Tensor, optional) -- the output tensor.\n Returns:\n A tuple (var, mean) containing the variance and mean.\n -[ Example ]-\n >>> a = torch.tensor(\n ... [[ 0.2035, 1.2959, 1.8101, -0.4644],\n ... [ 1.5027, -0.3270, 0.5905, 0.6538],\n ... [-1.5745, 1.3330, -0.5596, -0.6548],\n ... [ 0.1264, -0.5080, 1.6420, 0.1992]])\n >>> torch.var_mean(a, dim=0, keepdim=True)\n (tensor([[1.5926, 1.0056, 1.2005, 0.3646]]),\n tensor([[ 0.0645, 0.4485, 0.8707, -0.0665]]))", "source": "https://pytorch.org/docs/stable/generated/torch.var_mean.html", "category": "pytorch docs"}
{"text": "torch.Tensor.resize_Tensor.resize_(*sizes, memory_format=torch.contiguous_format) -> Tensor\n Resizes \"self\" tensor to the specified size. If the number of\n elements is larger than the current storage size, then the\n underlying storage is resized to fit the new number of elements. If\n the number of elements is smaller, the underlying storage is not\n changed. Existing elements are preserved but any new memory is\n uninitialized.\n Warning:\n This is a low-level method. The storage is reinterpreted as\n C-contiguous, ignoring the current strides (unless the target\n size equals the current size, in which case the tensor is left\n unchanged). For most purposes, you will instead want to use\n \"view()\", which checks for contiguity, or \"reshape()\", which\n copies data if needed. To change the size in-place with custom\n strides, see \"set_()\".\n Parameters:\n * sizes (torch.Size or int...) -- the desired size", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resize_.html", "category": "pytorch docs"}
{"text": "\nmemory_format (\"torch.memory_format\", optional) -- the\n desired memory format of Tensor. Default:\n \"torch.contiguous_format\". Note that memory format of \"self\"\n is going to be unaffected if \"self.size()\" matches \"sizes\".\n Example:\n >>> x = torch.tensor([[1, 2], [3, 4], [5, 6]])\n >>> x.resize_(2, 2)\n tensor([[ 1, 2],\n [ 3, 4]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resize_.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.lp_pool2dtorch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False)\n Applies a 2D power-average pooling over an input signal composed of\n several input planes. If the sum of all inputs to the power of p\n is zero, the gradient is set to zero as well.\n See \"LPPool2d\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.lp_pool2d.html", "category": "pytorch docs"}
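As a sketch of what power-average pooling computes (illustrative, not from the scraped page; it relies on the LPPool2d formula f(X) = (sum_{x in X} x^p)^{1/p}), lp_pool2d can be reproduced with avg_pool2d:

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 1, 4, 4) + 0.1        # positive input, shape (N, C, H, W)
out = F.lp_pool2d(x, norm_type=2.0, kernel_size=2)

# Manual equivalent: sum of x^2 over each 2x2 window, then the square root.
# avg_pool2d gives the mean, so multiply by the window size (4) to get the sum.
manual = F.avg_pool2d(x.pow(2), kernel_size=2).mul(4).sqrt()
assert out.shape == (1, 1, 2, 2)
assert torch.allclose(out, manual, atol=1e-5)
```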
{"text": "torch.sparse_bsr_tensortorch.sparse_bsr_tensor(crow_indices, col_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\n Constructs a sparse tensor in BSR (Block Compressed Sparse Row)\n format with specified 2-dimensional blocks at the given\n \"crow_indices\" and \"col_indices\". Sparse matrix multiplication\n operations in BSR format are typically faster than those for sparse\n tensors in COO format. Make sure you have a look at the note on the\n data type of the indices.\n Note:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n Parameters:\n * crow_indices (array_like) -- (B+1)-dimensional array of\n size \"(batchsize, nrowblocks + 1)\". The last element of each", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html", "category": "pytorch docs"}
{"text": "batch is the number of non-zeros. This tensor encodes the\n block index in values and col_indices depending on where the\n given row block starts. Each successive number in the tensor\n subtracted by the number before it denotes the number of\n blocks in a given row.\n * col_indices (array_like) -- Column block co-ordinates of\n each block in values. (B+1)-dimensional tensor with the same\n length as values.\n * values (array_list) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other types\n that represents a (1 + 2 + K)-dimensional tensor where \"K\" is\n the number of dense dimensions.\n * size (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(batchsize, nrows * blocksize[0], ncols *\n blocksize[1], densesize)\" where \"blocksize ==\n values.shape[1:3]\". If not provided, the size will be inferred", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html", "category": "pytorch docs"}
{"text": "as the minimum size big enough to hold all non-zero blocks.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * check_invariants (bool, optional) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n Example::\n >>> crow_indices = [0, 1, 2]\n >>> col_indices = [0, 1]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html", "category": "pytorch docs"}
{"text": ">>> col_indices = [0, 1]\n >>> values = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]\n >>> torch.sparse_bsr_tensor(torch.tensor(crow_indices, dtype=torch.int64),\n ... torch.tensor(col_indices, dtype=torch.int64),\n ... torch.tensor(values), dtype=torch.double)\n tensor(crow_indices=tensor([0, 1, 2]),\n col_indices=tensor([0, 1]),\n values=tensor([[[1., 2.],\n [3., 4.]],\n [[5., 6.],\n [7., 8.]]]), size=(2, 2), nnz=2, dtype=torch.float64,\n layout=torch.sparse_bsr)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html", "category": "pytorch docs"}
{"text": "LogSoftmaxclass torch.nn.LogSoftmax(dim=None)\n Applies the \\log(\\text{Softmax}(x)) function to an n-dimensional\n input Tensor. The LogSoftmax formulation can be simplified as:\n \\text{LogSoftmax}(x_{i}) = \\log\\left(\\frac{\\exp(x_i) }{ \\sum_j\n \\exp(x_j)} \\right)\n Shape:\n * Input: (*) where * means any number of additional\n dimensions\n * Output: (*), same shape as the input\n Parameters:\n dim (int) -- A dimension along which LogSoftmax will be\n computed.\n Returns:\n a Tensor of the same dimension and shape as the input with\n values in the range [-inf, 0)\n Return type:\n None\n Examples:\n >>> m = nn.LogSoftmax(dim=1)\n >>> input = torch.randn(2, 3)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html", "category": "pytorch docs"}
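A brief numerical sanity check (not part of the scraped docs): LogSoftmax agrees with log(softmax(x)), and its outputs exponentiate back to a proper distribution.

```python
import torch
import torch.nn as nn

m = nn.LogSoftmax(dim=1)
x = torch.randn(2, 3)
out = m(x)

# Same values as composing log with softmax (LogSoftmax is the
# numerically safer fused form).
assert torch.allclose(out, torch.log(torch.softmax(x, dim=1)), atol=1e-6)
# exp(out) sums to 1 along dim=1, and every entry lies in [-inf, 0).
assert torch.allclose(out.exp().sum(dim=1), torch.ones(2), atol=1e-6)
assert (out < 0).all()
```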
{"text": "torch.func.jvptorch.func.jvp(func, primals, tangents, *, strict=False, has_aux=False)\n Standing for the Jacobian-vector product, returns a tuple\n containing the output of func(primals) and the \"Jacobian of\n \"func\" evaluated at \"primals\"\" times \"tangents\". This is also known\n as forward-mode autodiff.\n Parameters:\n * func (function) -- A Python function that takes one or\n more arguments, one of which must be a Tensor, and returns one\n or more Tensors\n * primals (Tensors) -- Positional arguments to \"func\" that\n must all be Tensors. The returned function will also be\n computing the derivative with respect to these arguments\n * tangents (Tensors) -- The \"vector\" for which Jacobian-\n vector-product is computed. Must be the same structure and\n sizes as the inputs to \"func\".\n * has_aux (bool) -- Flag indicating that \"func\" returns a\n \"(output, aux)\" tuple where the first element is the output of", "source": "https://pytorch.org/docs/stable/generated/torch.func.jvp.html", "category": "pytorch docs"}
{"text": "the function to be differentiated and the second element is\n other auxiliary objects that will not be differentiated.\n Default: False.\n Returns:\n Returns a \"(output, jvp_out)\" tuple containing the output of\n \"func\" evaluated at \"primals\" and the Jacobian-vector product.\n If \"has_aux is True\", then instead returns a \"(output, jvp_out,\n aux)\" tuple.\n Note:\n You may see this API error out with \"forward-mode AD not\n implemented for operator X\". If so, please file a bug report and\n we will prioritize it.\n jvp is useful when you wish to compute gradients of a function R^1\n -> R^N\n >>> from torch.func import jvp\n >>> x = torch.randn([])\n >>> f = lambda x: x * torch.tensor([1., 2., 3.])\n >>> value, grad = jvp(f, (x,), (torch.tensor(1.),))\n >>> assert torch.allclose(value, f(x))\n >>> assert torch.allclose(grad, torch.tensor([1., 2., 3.]))\n \"jvp()\" can support functions with multiple inputs by passing in\n the tangents for each of the inputs", "source": "https://pytorch.org/docs/stable/generated/torch.func.jvp.html", "category": "pytorch docs"}
{"text": "the tangents for each of the inputs\n >>> from torch.func import jvp\n >>> x = torch.randn(5)\n >>> y = torch.randn(5)\n >>> f = lambda x, y: (x * y)\n >>> _, output = jvp(f, (x, y), (torch.ones(5), torch.ones(5)))\n >>> assert torch.allclose(output, x + y)", "source": "https://pytorch.org/docs/stable/generated/torch.func.jvp.html", "category": "pytorch docs"}
{"text": "torch.angletorch.angle(input, *, out=None) -> Tensor\n Computes the element-wise angle (in radians) of the given \"input\"\n tensor.\n \\text{out}_{i} = \\text{angle}(\\text{input}_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Note:\n Starting in PyTorch 1.8, angle returns pi for negative real\n numbers, zero for non-negative real numbers, and propagates NaNs.\n Previously the function would return zero for all real numbers\n and not propagate floating-point NaNs.\n Example:\n >>> torch.angle(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])) * 180 / 3.14159\n tensor([ 135., 135., -45.])", "source": "https://pytorch.org/docs/stable/generated/torch.angle.html", "category": "pytorch docs"}
{"text": "torch.flipudtorch.flipud(input) -> Tensor\n Flip tensor in the up/down direction, returning a new tensor.\n Flip the entries in each column in the up/down direction. Rows are\n preserved, but appear in a different order than before.\n Note:\n Requires the tensor to be at least 1-D.\n Note:\n torch.flipud makes a copy of \"input\"'s data. This is different\n from NumPy's np.flipud, which returns a view in constant time.\n Since copying a tensor's data is more work than viewing that\n data, torch.flipud is expected to be slower than np.flipud.\n Parameters:\n input (Tensor) -- Must be at least 1-dimensional.\n Example:\n >>> x = torch.arange(4).view(2, 2)\n >>> x\n tensor([[0, 1],\n [2, 3]])\n >>> torch.flipud(x)\n tensor([[2, 3],\n [0, 1]])", "source": "https://pytorch.org/docs/stable/generated/torch.flipud.html", "category": "pytorch docs"}
{"text": "torch._foreach_abs_torch._foreach_abs_(self: List[Tensor]) -> None\n Apply \"torch.abs()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_abs_.html", "category": "pytorch docs"}
{"text": "torch.cartesian_prodtorch.cartesian_prod(*tensors)\n Does the cartesian product of the given sequence of tensors. The\n behavior is similar to python's itertools.product.\n Parameters:\n *tensors (Tensor) -- any number of 1-dimensional tensors.\n Returns:\n A tensor equivalent to converting all the input tensors into\n lists, doing itertools.product on these lists, and finally\n converting the resulting list into a tensor.\n Return type:\n Tensor\n Example:\n >>> import itertools\n >>> a = [1, 2, 3]\n >>> b = [4, 5]\n >>> list(itertools.product(a, b))\n [(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]\n >>> tensor_a = torch.tensor(a)\n >>> tensor_b = torch.tensor(b)\n >>> torch.cartesian_prod(tensor_a, tensor_b)\n tensor([[1, 4],\n [1, 5],\n [2, 4],\n [2, 5],\n [3, 4],\n [3, 5]])", "source": "https://pytorch.org/docs/stable/generated/torch.cartesian_prod.html", "category": "pytorch docs"}
{"text": "BNReLU2dclass torch.ao.nn.intrinsic.BNReLU2d(batch_norm, relu)\n This is a sequential container which calls the BatchNorm 2d and\n ReLU modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.BNReLU2d.html", "category": "pytorch docs"}
{"text": "torch.autograd.profiler.profile.key_averagesprofile.key_averages(group_by_input_shapes=False, group_by_stack_n=0)\n Averages all function events over their keys.\n Parameters:\n * group_by_input_shapes -- group entries by (event name,\n input shapes) rather than just event name. This is useful to\n see which input shapes contribute to the runtime the most and\n may help with size-specific optimizations or choosing the best\n candidates for quantization (aka fitting a roof line)\n * group_by_stack_n -- group by top n stack trace entries\n Returns:\n An EventList containing FunctionEventAvg objects.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.key_averages.html", "category": "pytorch docs"}
{"text": "Hardsigmoidclass torch.nn.Hardsigmoid(inplace=False)\n Applies the Hardsigmoid function element-wise.\n Hardsigmoid is defined as:\n \\text{Hardsigmoid}(x) = \\begin{cases} 0 & \\text{if~} x \\le\n -3, \\\\ 1 & \\text{if~} x \\ge +3, \\\\ x / 6 + 1 / 2 &\n \\text{otherwise} \\end{cases}\n Parameters:\n inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.Hardsigmoid()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html", "category": "pytorch docs"}
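The piecewise definition above can be checked directly; this sketch (not from the scraped page) compares the module against clamp(x/6 + 1/2, 0, 1), which is the same function:

```python
import torch
import torch.nn as nn

m = nn.Hardsigmoid()
x = torch.tensor([-4.0, -3.0, 0.0, 3.0, 4.0])
out = m(x)

# 0 for x <= -3, 1 for x >= 3, x/6 + 1/2 in between.
manual = torch.clamp(x / 6 + 0.5, min=0.0, max=1.0)
assert torch.allclose(out, manual)
assert out[0].item() == 0.0 and out[-1].item() == 1.0
```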
{"text": "torch._foreach_sqrttorch._foreach_sqrt(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.sqrt()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sqrt.html", "category": "pytorch docs"}
{"text": "torch.linalg.vandertorch.linalg.vander(x, N=None) -> Tensor\n Generates a Vandermonde matrix.\n Returns the Vandermonde matrix V\n V = \\begin{pmatrix} 1 & x_1 & x_1^2 & \\dots &\n x_1^{N-1}\\\\ 1 & x_2 & x_2^2 & \\dots & x_2^{N-1}\\\\\n 1 & x_3 & x_3^2 & \\dots & x_3^{N-1}\\\\ \\vdots & \\vdots &\n \\vdots & \\ddots & \\vdots \\\\ 1 & x_n & x_n^2 & \\dots &\n x_n^{N-1} \\end{pmatrix}\n for N > 1. If \"N\" = None, then N = x.size(-1) so that the\n output is a square matrix.\n Supports inputs of float, double, cfloat, cdouble, and integral\n dtypes. Also supports batches of vectors, and if \"x\" is a batch of\n vectors then the output has the same batch dimensions.\n Differences with numpy.vander:\n * Unlike numpy.vander, this function returns the powers of \"x\" in\n ascending order. To get them in the reverse order call\n \"linalg.vander(x, N).flip(-1)\".\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.vander.html", "category": "pytorch docs"}
{"text": "Parameters:\n x (Tensor) -- tensor of shape (*, n) where \"*\" is zero\n or more batch dimensions consisting of vectors.\n Keyword Arguments:\n N (int, optional) -- Number of columns in the output.\n Default: x.size(-1)\n Example:\n >>> x = torch.tensor([1, 2, 3, 5])\n >>> linalg.vander(x)\n tensor([[ 1, 1, 1, 1],\n [ 1, 2, 4, 8],\n [ 1, 3, 9, 27],\n [ 1, 5, 25, 125]])\n >>> linalg.vander(x, N=3)\n tensor([[ 1, 1, 1],\n [ 1, 2, 4],\n [ 1, 3, 9],\n [ 1, 5, 25]])", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.vander.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.silutorch.nn.functional.silu(input, inplace=False)\n Applies the Sigmoid Linear Unit (SiLU) function, element-wise. The\n SiLU function is also known as the swish function.\n \\text{silu}(x) = x * \\sigma(x), \\text{where } \\sigma(x) \\text{\n is the logistic sigmoid.}\n Note:\n See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid\n Linear Unit) was originally coined, and see Sigmoid-Weighted\n Linear Units for Neural Network Function Approximation in\n Reinforcement Learning and Swish: a Self-Gated Activation\n Function where the SiLU was experimented with later.\n See \"SiLU\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.silu.html", "category": "pytorch docs"}
{"text": "torch.clonetorch.clone(input, *, memory_format=torch.preserve_format) -> Tensor\n Returns a copy of \"input\".\n Note:\n This function is differentiable, so gradients will flow back from\n the result of this operation to \"input\". To create a tensor\n without an autograd relationship to \"input\" see \"detach()\".\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.clone.html", "category": "pytorch docs"}
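Illustrating the note above (an informal sketch, not from the scraped page): clone() keeps the autograd connection to the source tensor, while detach() severs it.

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)

y = torch.clone(x)            # differentiable copy
y.sum().backward()
assert torch.equal(x.grad, torch.ones(2))   # gradient flowed back through the clone

z = x.detach()                # shares storage, but no autograd relationship
assert z.requires_grad is False
assert torch.clone(x).requires_grad is True
```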
{"text": "LinearReLUclass torch.ao.nn.intrinsic.qat.LinearReLU(in_features, out_features, bias=True, qconfig=None)\n A LinearReLU module fused from Linear and ReLU modules, attached\n with FakeQuantize modules for weight, used in quantization aware\n training.\n We adopt the same interface as \"torch.nn.Linear\".\n Similar to torch.nn.intrinsic.LinearReLU, with FakeQuantize\n modules initialized to default.\n Variables:\n weight (torch.Tensor) -- fake quant module for weight\n Examples:\n >>> m = nn.qat.LinearReLU(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.LinearReLU.html", "category": "pytorch docs"}
{"text": "torch._foreach_cosh_torch._foreach_cosh_(self: List[Tensor]) -> None\n Apply \"torch.cosh()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_cosh_.html", "category": "pytorch docs"}
{"text": "torch.imagtorch.imag(input) -> Tensor\n Returns a new tensor containing imaginary values of the \"self\"\n tensor. The returned tensor and \"self\" share the same underlying\n storage.\n Warning:\n \"imag()\" is only supported for tensors with complex dtypes.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])\n >>> x.imag\n tensor([ 0.3553, -0.7896, -0.0633, -0.8119])", "source": "https://pytorch.org/docs/stable/generated/torch.imag.html", "category": "pytorch docs"}
{"text": "RMSpropclass torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False, foreach=None, maximize=False, differentiable=False)\n Implements RMSprop algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\ &\\textbf{input} : \\alpha \\text{ (alpha)},\\: \\gamma\n \\text{ (lr)}, \\: \\theta_0 \\text{ (params)}, \\:\n f(\\theta) \\text{ (objective)} \\\n &\\hspace{13mm} \\lambda \\text{ (weight decay)},\\: \\mu \\text{\n (momentum)},\\: centered\\ &\\textbf{initialize} : v_0\n \\leftarrow 0 \\text{ (square average)}, \\: \\textbf{b}_0\n \\leftarrow 0 \\text{ (buffer)}, \\: g^{ave}_0 \\leftarrow 0\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla f_t (\\theta_{t-1}) \\\n &\\hspace{5mm}if \\: \\lambda \\neq 0", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"}
{"text": "&\\hspace{5mm}if \\: \\lambda \\neq 0\n \\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda\n \\theta_{t-1} \\ &\\hspace{5mm}v_t\n \\leftarrow \\alpha v_{t-1} + (1 - \\alpha) g^2_t\n \\hspace{8mm}\n \\ &\\hspace{5mm} \\tilde{v_t} \\leftarrow v_t\n \\ &\\hspace{5mm}if \\: centered\n \\ &\\hspace{10mm} g^{ave}t \\leftarrow g^{ave} \\alpha\n + (1-\\alpha) g_t \\ &\\hspace{10mm} \\tilde{v_t}\n \\leftarrow \\tilde{v_t} - \\big(g^{ave}{t} \\big)^2 \\\n &\\hspace{5mm}if \\: \\mu > 0\n \\ &\\hspace{10mm} \\textbf{b}_t\\leftarrow \\mu\n \\textbf{b} + g_t/ \\big(\\sqrt{\\tilde{v_t}} +\n \\epsilon \\big) \\\n &\\hspace{10mm} \\theta_t \\leftarrow \\theta_{t-1} - \\gamma\n \\textbf{b}t \\ &\\hspace{5mm} else\n \\ &\\hspace{10mm}\\theta_t \\leftarrow \\theta -\n \\gamma g_t/ \\big(\\sqrt{\\tilde{v_t}} + \\epsilon \\big)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"}
{"text": "\\hspace{3mm} \\ &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\n For further details regarding the algorithm we refer to lecture\n notes by G. Hinton. and centered version Generating Sequences With\n Recurrent Neural Networks. The implementation here takes the square\n root of the gradient average before adding epsilon (note that\n TensorFlow interchanges these two operations). The effective\n learning rate is thus \\gamma/(\\sqrt{v} + \\epsilon) where \\gamma is\n the scheduled learning rate and v is the weighted moving average of\n the squared gradient.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 1e-2)\n * momentum (float, optional) -- momentum factor\n (default: 0)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"}
{"text": "(default: 0)\n * alpha (float, optional) -- smoothing constant\n (default: 0.99)\n * eps (float, optional) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n * centered (bool, optional) -- if \"True\", compute the\n centered RMSProp, the gradient is normalized by an estimation\n of its variance\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n * differentiable (bool, optional) -- whether autograd", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"}
{"text": "should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"}
{"text": "optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"}
{"text": "registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"}
{"text": "None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"}
{"text": "torch.qrtorch.qr(input, some=True, *, out=None)\n Computes the QR decomposition of a matrix or a batch of matrices\n \"input\", and returns a namedtuple (Q, R) of tensors such that\n \\text{input} = Q R with Q being an orthogonal matrix or batch of\n orthogonal matrices and R being an upper triangular matrix or batch\n of upper triangular matrices.\n If \"some\" is \"True\", then this function returns the thin (reduced)\n QR factorization. Otherwise, if \"some\" is \"False\", this function\n returns the complete QR factorization.\n Warning:\n \"torch.qr()\" is deprecated in favor of \"torch.linalg.qr()\" and\n will be removed in a future PyTorch release. The boolean\n parameter \"some\" has been replaced with a string parameter\n \"mode\".\"Q, R = torch.qr(A)\" should be replaced with\n Q, R = torch.linalg.qr(A)\n \"Q, R = torch.qr(A, some=False)\" should be replaced with\n Q, R = torch.linalg.qr(A, mode=\"complete\")\n Warning:", "source": "https://pytorch.org/docs/stable/generated/torch.qr.html", "category": "pytorch docs"}
{"text": "Warning:\n If you plan to backpropagate through QR, note that the current\n backward implementation is only well-defined when the first\n \\min(input.size(-1), input.size(-2)) columns of \"input\" are\n linearly independent. This behavior will probably change once QR\n supports pivoting.\n Note:\n This function uses LAPACK for CPU inputs and MAGMA for CUDA\n inputs, and may produce different (valid) decompositions on\n different device types or different platforms.\n Parameters:\n * input (Tensor) -- the input tensor of size (, m, n)\n where *** is zero or more batch dimensions consisting of\n matrices of dimension m \\times n.\n * some (bool, optional) --\n Set to \"True\" for reduced QR decomposition and \"False\" for\n complete QR decomposition. If k = min(m, n) then:\n * \"some=True\" : returns (Q, R) with dimensions (m, k),\n (k, n) (default)\n * \"'some=False'\": returns (Q, R)* with dimensions (m, m),", "source": "https://pytorch.org/docs/stable/generated/torch.qr.html", "category": "pytorch docs"}
{"text": "(m, n)\n Keyword Arguments:\n out (tuple, optional) -- tuple of Q and R tensors.\n The dimensions of Q and R are detailed in the description of\n \"some\" above.\n Example:\n >>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])\n >>> q, r = torch.qr(a)\n >>> q\n tensor([[-0.8571, 0.3943, 0.3314],\n [-0.4286, -0.9029, -0.0343],\n [ 0.2857, -0.1714, 0.9429]])\n >>> r\n tensor([[ -14.0000, -21.0000, 14.0000],\n [ 0.0000, -175.0000, 70.0000],\n [ 0.0000, 0.0000, -35.0000]])\n >>> torch.mm(q, r).round()\n tensor([[ 12., -51., 4.],\n [ 6., 167., -68.],\n [ -4., 24., -41.]])\n >>> torch.mm(q.t(), q).round()\n tensor([[ 1., 0., 0.],\n [ 0., 1., -0.],\n [ 0., -0., 1.]])\n >>> a = torch.randn(3, 4, 5)\n >>> q, r = torch.qr(a, some=False)\n >>> torch.allclose(torch.matmul(q, r), a)", "source": "https://pytorch.org/docs/stable/generated/torch.qr.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.allclose(torch.matmul(q, r), a)\n True\n >>> torch.allclose(torch.matmul(q.mT, q), torch.eye(5))\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.qr.html", "category": "pytorch docs"}
{"text": "torch.linalg.lu_factor_extorch.linalg.lu_factor_ex(A, , pivot=True, check_errors=False, out=None)\n This is a version of \"lu_factor()\" that does not perform error\n checks unless \"check_errors\"= True. It also returns the \"info\"\n tensor returned by LAPACK's getrf.\n Note:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors\"= True.\n Warning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n Parameters:\n A (Tensor) -- tensor of shape (, m, n) where *** is\n zero or more batch dimensions.\n Keyword Arguments:\n * pivot (bool, optional) -- Whether to compute the LU\n decomposition with partial pivoting, or the regular LU\n decomposition. \"pivot\"= False not supported on CPU. Default:\n True.\n * check_errors (bool, optional) -- controls whether to\n check the content of \"infos\" and raise an error if it is non-", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor_ex.html", "category": "pytorch docs"}
{"text": "zero. Default: False.\n * out (tuple, optional) -- tuple of three tensors to\n write the output to. Ignored if None. Default: None.\n Returns:\n A named tuple (LU, pivots, info).", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor_ex.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sum_to_sizeTensor.sum_to_size(size) -> Tensor\n Sum \"this\" tensor to \"size\". \"size\" must be broadcastable to \"this\"\n tensor size.\n Parameters:\n size (int*...) -- a sequence of integers defining the\n shape of the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sum_to_size.html", "category": "pytorch docs"}
{"text": "torch.logcumsumexptorch.logcumsumexp(input, dim, , out=None) -> Tensor\n Returns the logarithm of the cumulative summation of the\n exponentiation of elements of \"input\" in the dimension \"dim\".\n For summation index j given by dim and other indices i, the\n result is\n \\text{logcumsumexp}(x){ij} = \\log \\sum\\limits^{i}\n \\exp(x_{ij})\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to do the operation over\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(10)\n >>> torch.logcumsumexp(a, dim=0)\n tensor([-0.42296738, -0.04462666, 0.86278635, 0.94622083, 1.05277811,\n 1.39202815, 1.83525007, 1.84492621, 2.06084887, 2.06844475]))", "source": "https://pytorch.org/docs/stable/generated/torch.logcumsumexp.html", "category": "pytorch docs"}
{"text": "torch.Tensor.conj_physicalTensor.conj_physical() -> Tensor\n See \"torch.conj_physical()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.conj_physical.html", "category": "pytorch docs"}
{"text": "torch.Tensor.unsqueezeTensor.unsqueeze(dim) -> Tensor\n See \"torch.unsqueeze()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unsqueeze.html", "category": "pytorch docs"}
{"text": "deviceclass torch.cuda.device(device)\n Context-manager that changes the selected device.\n Parameters:\n device (torch.device or int) -- device index to\n select. It's a no-op if this argument is a negative integer or\n \"None\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.device.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fmod_Tensor.fmod_(divisor) -> Tensor\n In-place version of \"fmod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fmod_.html", "category": "pytorch docs"}
{"text": "torch.diagonaltorch.diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor\n Returns a partial view of \"input\" with the its diagonal elements\n with respect to \"dim1\" and \"dim2\" appended as a dimension at the\n end of the shape.\n The argument \"offset\" controls which diagonal to consider:\n * If \"offset\" = 0, it is the main diagonal.\n * If \"offset\" > 0, it is above the main diagonal.\n * If \"offset\" < 0, it is below the main diagonal.\n Applying \"torch.diag_embed()\" to the output of this function with\n the same arguments yields a diagonal matrix with the diagonal\n entries of the input. However, \"torch.diag_embed()\" has different\n default dimensions, so those need to be explicitly specified.\n Parameters:\n * input (Tensor) -- the input tensor. Must be at least\n 2-dimensional.\n * offset (int, optional) -- which diagonal to\n consider. Default: 0 (main diagonal).\n * dim1 (int, optional) -- first dimension with respect", "source": "https://pytorch.org/docs/stable/generated/torch.diagonal.html", "category": "pytorch docs"}
{"text": "to which to take diagonal. Default: 0.\n * dim2 (int, optional) -- second dimension with\n respect to which to take diagonal. Default: 1.\n Note:\n To take a batch diagonal, pass in dim1=-2, dim2=-1.\n Examples:\n >>> a = torch.randn(3, 3)\n >>> a\n tensor([[-1.0854, 1.1431, -0.1752],\n [ 0.8536, -0.0905, 0.0360],\n [ 0.6927, -0.3735, -0.4945]])\n >>> torch.diagonal(a, 0)\n tensor([-1.0854, -0.0905, -0.4945])\n >>> torch.diagonal(a, 1)\n tensor([ 1.1431, 0.0360])\n >>> x = torch.randn(2, 5, 4, 2)\n >>> torch.diagonal(x, offset=-1, dim1=1, dim2=2)\n tensor([[[-1.2631, 0.3755, -1.5977, -1.8172],\n [-1.1065, 1.0401, -0.2235, -0.7938]],\n [[-1.7325, -0.3081, 0.6166, 0.2335],\n [ 1.0500, 0.7336, -0.3836, -1.1015]]])", "source": "https://pytorch.org/docs/stable/generated/torch.diagonal.html", "category": "pytorch docs"}
{"text": "MultiLabelMarginLossclass torch.nn.MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean')\n Creates a criterion that optimizes a multi-class multi-\n classification hinge loss (margin-based loss) between input x (a 2D\n mini-batch Tensor) and output y (which is a 2D Tensor of target\n class indices). For each sample in the mini-batch:\n \\text{loss}(x, y) = \\sum_{ij}\\frac{\\max(0, 1 - (x[y[j]] -\n x[i]))}{\\text{x.size}(0)}\n where x \\in \\left{0, \\; \\cdots , \\; \\text{x.size}(0) - 1\\right},\n y \\in \\left{0, \\; \\cdots , \\; \\text{y.size}(0) - 1\\right}, 0 \\leq\n y[j] \\leq \\text{x.size}(0)-1, and i \\neq y[j] for all i and j.\n y and x must have the same size.\n The criterion only considers a contiguous block of non-negative\n targets that starts at the front.\n This allows for different samples to have variable amounts of\n target classes.\n Parameters:\n * size_average (bool, optional) -- Deprecated (see", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html", "category": "pytorch docs"}
{"text": "\"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html", "category": "pytorch docs"}
{"text": "\"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (C) or (N, C) where N is the batch size and C is\n the number of classes.\n * Target: (C) or (N, C), label targets padded by -1 ensuring\n same shape as the input.\n * Output: scalar. If \"reduction\" is \"'none'\", then (N).\n Examples:\n >>> loss = nn.MultiLabelMarginLoss()\n >>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])\n >>> # for target y, only consider labels 3 and 0, not after label -1\n >>> y = torch.LongTensor([[3, 0, -1, 1]])\n >>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))\n >>> loss(x, y)\n tensor(0.85...)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html", "category": "pytorch docs"}
{"text": "BatchNorm3dclass torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n Applies Batch Normalization over a 5D input (a mini-batch of 3D\n inputs with additional channel dimension) as described in the paper\n Batch Normalization: Accelerating Deep Network Training by Reducing\n Internal Covariate Shift .\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The mean and standard-deviation are calculated per-dimension over\n the mini-batches and \\gamma and \\beta are learnable parameter\n vectors of size C (where C is the input size). By default, the\n elements of \\gamma are set to 1 and the elements of \\beta are set\n to 0. The standard-deviation is calculated via the biased\n estimator, equivalent to torch.var(input, unbiased=False).\n Also by default, during training this layer keeps running estimates\n of its computed mean and variance, which are then used for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html", "category": "pytorch docs"}
{"text": "normalization during evaluation. The running estimates are kept\n with a default \"momentum\" of 0.1.\n If \"track_running_stats\" is set to \"False\", this layer then does\n not keep running estimates, and batch statistics are instead used\n during evaluation time as well.\n Note:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n Because the Batch Normalization is done over the C dimension,\n computing statistics on (N, D, H, W) slices, it's common\n terminology to call this Volumetric Batch Normalization or Spatio-\n temporal Batch Normalization.\n Parameters:\n * num_features (int) -- C from an expected input of size\n (N, C, D, H, W)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html", "category": "pytorch docs"}
{"text": "(N, C, D, H, W)\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n Shape:\n * Input: (N, C, D, H, W)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html", "category": "pytorch docs"}
{"text": "Shape:\n * Input: (N, C, D, H, W)\n * Output: (N, C, D, H, W) (same shape as input)\n Examples:\n >>> # With Learnable Parameters\n >>> m = nn.BatchNorm3d(100)\n >>> # Without Learnable Parameters\n >>> m = nn.BatchNorm3d(100, affine=False)\n >>> input = torch.randn(20, 100, 35, 45, 10)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html", "category": "pytorch docs"}
{"text": "Softshrinkclass torch.nn.Softshrink(lambd=0.5)\n Applies the soft shrinkage function elementwise:\n \\text{SoftShrinkage}(x) = \\begin{cases} x - \\lambda, & \\text{ if\n } x > \\lambda \\ x + \\lambda, & \\text{ if } x < -\\lambda \\ 0, &\n \\text{ otherwise } \\end{cases}\n Parameters:\n lambd (float) -- the \\lambda (must be no less than zero)\n value for the Softshrink formulation. Default: 0.5\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.Softshrink()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softshrink.html", "category": "pytorch docs"}
{"text": "torch.Tensor.slogdetTensor.slogdet()\n See \"torch.slogdet()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.slogdet.html", "category": "pytorch docs"}
{"text": "torch.foreach_sigmoid_torch._foreach_sigmoid(self: List[Tensor]) -> None\n Apply \"torch.sigmoid()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sigmoid_.html", "category": "pytorch docs"}
{"text": "torch.scatter_reducetorch.scatter_reduce(input, dim, index, src, reduce, *, include_self=True) -> Tensor\n Out-of-place version of \"torch.Tensor.scatter_reduce_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.scatter_reduce.html", "category": "pytorch docs"}
{"text": "torch.crosstorch.cross(input, other, dim=None, , out=None) -> Tensor\n Returns the cross product of vectors in dimension \"dim\" of \"input\"\n and \"other\".\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of vectors, for which it computes the product\n along the dimension \"dim\". In this case, the output has the same\n batch dimensions as the inputs.\n If \"dim\" is not given, it defaults to the first dimension found\n with the size 3. Note that this might be unexpected.\n See also:\n \"torch.linalg.cross()\" which requires specifying dim (defaulting\n to -1).\n Warning:\n This function may change in a future PyTorch release to match the\n default behaviour in \"torch.linalg.cross()\". We recommend using\n \"torch.linalg.cross()\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n * dim (int, optional*) -- the dimension to take the\n cross-product in.", "source": "https://pytorch.org/docs/stable/generated/torch.cross.html", "category": "pytorch docs"}
{"text": "cross-product in.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4, 3)\n >>> a\n tensor([[-0.3956, 1.1455, 1.6895],\n [-0.5849, 1.3672, 0.3599],\n [-1.1626, 0.7180, -0.0521],\n [-0.1339, 0.9902, -2.0225]])\n >>> b = torch.randn(4, 3)\n >>> b\n tensor([[-0.0257, -1.4725, -1.2251],\n [-1.1479, -0.7005, -1.9757],\n [-1.3904, 0.3726, -1.1836],\n [-0.9688, -0.7153, 0.2159]])\n >>> torch.cross(a, b, dim=1)\n tensor([[ 1.0844, -0.5281, 0.6120],\n [-2.4490, -1.5687, 1.9792],\n [-0.8304, -1.3037, 0.5650],\n [-1.2329, 1.9883, 1.0551]])\n >>> torch.cross(a, b)\n tensor([[ 1.0844, -0.5281, 0.6120],\n [-2.4490, -1.5687, 1.9792],\n [-0.8304, -1.3037, 0.5650],\n [-1.2329, 1.9883, 1.0551]])", "source": "https://pytorch.org/docs/stable/generated/torch.cross.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sinc_Tensor.sinc_() -> Tensor\n In-place version of \"sinc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sinc_.html", "category": "pytorch docs"}
{"text": "torch.is_inference_mode_enabledtorch.is_inference_mode_enabled()\n Returns True if inference mode is currently enabled.", "source": "https://pytorch.org/docs/stable/generated/torch.is_inference_mode_enabled.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lerp_Tensor.lerp_(end, weight) -> Tensor\n In-place version of \"lerp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lerp_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.nanquantileTensor.nanquantile(q, dim=None, keepdim=False, *, interpolation='linear') -> Tensor\n See \"torch.nanquantile()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nanquantile.html", "category": "pytorch docs"}
{"text": "torch.cuda.nvtx.range_poptorch.cuda.nvtx.range_pop()\n Pops a range off of a stack of nested range spans. Returns the\n zero-based depth of the range that is ended.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.range_pop.html", "category": "pytorch docs"}
{"text": "torch.dequantizetorch.dequantize(tensor) -> Tensor\n Returns an fp32 Tensor by dequantizing a quantized Tensor\n Parameters:\n tensor (Tensor) -- A quantized Tensor\n torch.dequantize(tensors) -> sequence of Tensors\n Given a list of quantized Tensors, dequantize them and return a\n list of fp32 Tensors\n Parameters:\n tensors (sequence of Tensors) -- A list of quantized\n Tensors", "source": "https://pytorch.org/docs/stable/generated/torch.dequantize.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_left_shiftTensor.bitwise_left_shift(other) -> Tensor\n See \"torch.bitwise_left_shift()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_left_shift.html", "category": "pytorch docs"}
{"text": "LinearReLUclass torch.ao.nn.intrinsic.quantized.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)\n A LinearReLU module fused from Linear and ReLU modules\n We adopt the same interface as \"torch.ao.nn.quantized.Linear\".\n Variables:\n torch.ao.nn.quantized.Linear (Same as) --\n Examples:\n >>> m = nn.intrinsic.LinearReLU(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.LinearReLU.html", "category": "pytorch docs"}
{"text": "FakeQuantizeBaseclass torch.quantization.fake_quantize.FakeQuantizeBase\n Base fake quantize module Any fake quantize implementation should\n derive from this class.\n Concrete fake quantize module should follow the same API. In\n forward, they will update the statistics of the observed Tensor and\n fake quantize the input. They should also provide a\n calculate_qparams function that computes the quantization\n parameters given the collected statistics.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantizeBase.html", "category": "pytorch docs"}
{"text": "torch.optim.Optimizer.add_param_groupOptimizer.add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as frozen\n layers can be made trainable and added to the \"Optimizer\" as\n training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.add_param_group.html", "category": "pytorch docs"}
{"text": "ConvBnReLU2dclass torch.ao.nn.intrinsic.ConvBnReLU2d(conv, bn, relu)\n This is a sequential container which calls the Conv 2d, Batch Norm\n 2d, and ReLU modules. During quantization this will be replaced\n with the corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.new_onesTensor.new_ones(size, , dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\n Returns a Tensor of size \"size\" filled with \"1\". By default, the\n returned Tensor has the same \"torch.dtype\" and \"torch.device\" as\n this tensor.\n Parameters:\n size (int...*) -- a list, tuple, or \"torch.Size\" of\n integers defining the shape of the output tensor.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n * requires_grad (*bool, optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * layout** (\"torch.layout\", optional) -- the desired layout of", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_ones.html", "category": "pytorch docs"}
{"text": "returned Tensor. Default: \"torch.strided\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> tensor = torch.tensor((), dtype=torch.int32)\n >>> tensor.new_ones((2, 3))\n tensor([[ 1, 1, 1],\n [ 1, 1, 1]], dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_ones.html", "category": "pytorch docs"}
{"text": "AdaptiveMaxPool3dclass torch.nn.AdaptiveMaxPool3d(output_size, return_indices=False)\n Applies a 3D adaptive max pooling over an input signal composed of\n several input planes.\n The output is of size D_{out} \\times H_{out} \\times W_{out}, for\n any input size. The number of output features is equal to the\n number of input planes.\n Parameters:\n * output_size (Union[int, None,\n Tuple[Optional[int], Optional[int],\n Optional[int]]]) -- the target output size of the\n image of the form D_{out} \\times H_{out} \\times W_{out}. Can\n be a tuple (D_{out}, H_{out}, W_{out}) or a single D_{out} for\n a cube D_{out} \\times D_{out} \\times D_{out}. D_{out}, H_{out}\n and W_{out} can be either a \"int\", or \"None\" which means the\n size will be the same as that of the input.\n * return_indices (bool) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool3d.html", "category": "pytorch docs"}
{"text": "nn.MaxUnpool3d. Default: \"False\"\n Shape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where (D_{out}, H_{out},\n W_{out})=\\text{output_size}.\n -[ Examples ]-\n\n\n\ntarget output size of 5x7x9\nm = nn.AdaptiveMaxPool3d((5, 7, 9))\ninput = torch.randn(1, 64, 8, 9, 10)\noutput = m(input)\ntarget output size of 7x7x7 (cube)\nm = nn.AdaptiveMaxPool3d(7)\ninput = torch.randn(1, 64, 10, 9, 8)\noutput = m(input)\ntarget output size of 7x9x8\nm = nn.AdaptiveMaxPool3d((7, None, None))\ninput = torch.randn(1, 64, 10, 9, 8)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool3d.html", "category": "pytorch docs"}
{"text": "torch.optim.Optimizer.zero_gradOptimizer.zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set the\n grads to None. This will in general have lower memory footprint,\n and can modestly improve performance. However, it changes\n certain behaviors. For example: 1. When the user tries to access\n a gradient and perform manual ops on it, a None attribute or a\n Tensor full of 0s will behave differently. 2. If the user\n requests \"zero_grad(set_to_none=True)\" followed by a backward\n pass, \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a different\n behavior if the gradient is 0 or None (in one case it does the\n step with a gradient of 0 and in the other it skips the step\n altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html", "category": "pytorch docs"}
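The record above describes the behavioral difference between zeroing gradients and setting them to None. A minimal runnable sketch of that difference (not part of the scraped docs; module and optimizer choices are illustrative):

```python
import torch

# After backward, zero_grad(set_to_none=False) leaves a tensor of zeros,
# while zero_grad(set_to_none=True) frees the gradient entirely (None).
param = torch.randn(3, requires_grad=True)
opt = torch.optim.SGD([param], lr=0.1)

param.sum().backward()
opt.zero_grad(set_to_none=False)
print(param.grad)   # a tensor of zeros

param.sum().backward()
opt.zero_grad(set_to_none=True)
print(param.grad)   # None
```

The None form has a lower memory footprint, but as the docs note, any code that reads `.grad` directly must handle both cases.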
{"text": "torch.nn.modules.module.register_module_backward_hooktorch.nn.modules.module.register_module_backward_hook(hook)\n Registers a backward hook common to all the modules.\n This function is deprecated in favor of\n \"torch.nn.modules.module.register_module_full_backward_hook()\" and\n the behavior of this function will change in future versions.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_backward_hook.html", "category": "pytorch docs"}
{"text": "torch._foreach_coshtorch._foreach_cosh(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.cosh()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_cosh.html", "category": "pytorch docs"}
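A short sketch of the batched API described above (not part of the scraped docs; note that `torch._foreach_cosh` is a private, underscore-prefixed API):

```python
import torch

# _foreach_cosh applies torch.cosh to every tensor in the list in one
# batched call and returns a new list of results.
tensors = [torch.zeros(2), torch.ones(3)]
result = torch._foreach_cosh(tensors)
print(result[0])   # cosh(0) == 1 everywhere
```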
{"text": "ConstantPad3dclass torch.nn.ConstantPad3d(padding, value)\n Pads the input tensor boundaries with a constant value.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 6-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom},\n \\text{padding_front}, \\text{padding_back})\n Shape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n D_{out} = D_{in} + \\text{padding_front} +\n \\text{padding_back}\n H_{out} = H_{in} + \\text{padding_top} +\n \\text{padding_bottom}\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ConstantPad3d(3, 3.5)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad3d.html", "category": "pytorch docs"}
{"text": ">>> m = nn.ConstantPad3d(3, 3.5)\n >>> input = torch.randn(16, 3, 10, 20, 30)\n >>> output = m(input)\n >>> # using different paddings for different sides\n >>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad3d.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.weight_normtorch.nn.utils.weight_norm(module, name='weight', dim=0)\n Applies weight normalization to a parameter in the given module.\n \\mathbf{w} = g \\dfrac{\\mathbf{v}}{\\|\\mathbf{v}\\|}\n Weight normalization is a reparameterization that decouples the\n magnitude of a weight tensor from its direction. This replaces the\n parameter specified by \"name\" (e.g. \"'weight'\") with two\n parameters: one specifying the magnitude (e.g. \"'weight_g'\") and\n one specifying the direction (e.g. \"'weight_v'\"). Weight\n normalization is implemented via a hook that recomputes the weight\n tensor from the magnitude and direction before every \"forward()\"\n call.\n By default, with \"dim=0\", the norm is computed independently per\n output channel/plane. To compute a norm over the entire weight\n tensor, use \"dim=None\".\n See https://arxiv.org/abs/1602.07868\n Parameters:\n * module (Module) -- containing module", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html", "category": "pytorch docs"}
{"text": "* module (Module) -- containing module\n * name (str, optional) -- name of weight parameter\n * dim (int, optional) -- dimension over which to\n compute the norm\n Returns:\n The original module with the weight norm hook\n Return type:\n T_module\n Example:\n >>> m = weight_norm(nn.Linear(20, 40), name='weight')\n >>> m\n Linear(in_features=20, out_features=40, bias=True)\n >>> m.weight_g.size()\n torch.Size([40, 1])\n >>> m.weight_v.size()\n torch.Size([40, 20])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html", "category": "pytorch docs"}
{"text": "torch.cuda.make_graphed_callablestorch.cuda.make_graphed_callables(callables, sample_args, num_warmup_iters=3, allow_unused_input=False)\n Accepts callables (functions or \"nn.Module\"s) and returns graphed\n versions.\n Each graphed callable's forward pass runs its source callable's\n forward CUDA work as a CUDA graph inside a single autograd node.\n The graphed callable's forward pass also appends a backward node to\n the autograd graph. During backward, this node runs the callable's\n backward work as a CUDA graph.\n Therefore, each graphed callable should be a drop-in replacement\n for its source callable in an autograd-enabled training loop.\n See Partial-network capture for detailed use and constraints.\n If you pass a tuple of several callables, their captures will use\n the same memory pool. See Graph memory management for when this is\n appropriate.\n Parameters:\n * callables (torch.nn.Module or Python function, or", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html", "category": "pytorch docs"}
{"text": "tuple of these) -- Callable or callables to graph. See\n Graph memory management for when passing a tuple of callables\n is appropriate. If you pass a tuple of callables, their order\n in the tuple must be the same order they'll run in the live\n workload.\n * sample_args (tuple of Tensors, or tuple of tuples of\n Tensors) -- Sample args for each callable. If a single\n callable was passed, \"sample_args\" must be a single tuple of\n argument Tensors. If a tuple of callables was passed,\n \"sample_args\" must be a tuple of tuples of argument Tensors.\n * num_warmup_iters (int) -- The number of warmup\n iterations. Currently, \"DistributedDataParallel\" needs 11\n iterations for warm up. Default: \"3\".\n * allow_unused_input (bool) -- If False, specifying inputs\n that were not used when computing outputs (and therefore their\n grad is always zero) is an error. Defaults to False.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html", "category": "pytorch docs"}
{"text": "Note:\n The \"requires_grad\" state of each Tensor in \"sample_args\" must\n match the state that's expected for the corresponding real input\n in the training loop.\n Warning:\n This API is in beta and may change in future releases.\n Warning:\n \"sample_args\" for each callable must contain only Tensors. Other\n types are not allowed.\n Warning:\n Returned callables do not support higher order differentiation\n (e.g., double backward).\n Warning:\n In any \"Module\" passed to \"make_graphed_callables()\", only\n parameters may be trainable. Buffers must have\n \"requires_grad=False\".\n Warning:\n After you pass a \"torch.nn.Module\" through\n \"make_graphed_callables()\", you may not add or remove any of that\n Module's parameters or buffers.\n Warning:\n \"torch.nn.Module\"s passed to \"make_graphed_callables()\" must not\n have module hooks registered on them at the time they are passed.\n However, registering hooks on modules after passing them", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html", "category": "pytorch docs"}
{"text": "through \"make_graphed_callables()\" is allowed.\n Warning:\n When running a graphed callable, you must pass its arguments in\n the same order and format they appeared in that callable's\n \"sample_args\".\n Warning:\n Automatic mixed precision is supported in\n \"make_graphed_callables()\" only with caching disabled: the\n torch.cuda.amp.autocast() context manager must have\n \"cache_enabled=False\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.spectral_normtorch.nn.utils.spectral_norm(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None)\n Applies spectral normalization to a parameter in the given module.\n \\mathbf{W}_{SN} = \\dfrac{\\mathbf{W}}{\\sigma(\\mathbf{W})},\n \\sigma(\\mathbf{W}) = \\max_{\\mathbf{h}: \\mathbf{h} \\ne 0}\n \\dfrac{\\|\\mathbf{W} \\mathbf{h}\\|_2}{\\|\\mathbf{h}\\|_2}\n Spectral normalization stabilizes the training of discriminators\n (critics) in Generative Adversarial Networks (GANs) by rescaling\n the weight tensor with the spectral norm \\sigma of the weight\n matrix, calculated using the power iteration method. If the\n dimension of the weight tensor is greater than 2, it is reshaped to\n 2D in the power iteration method to get the spectral norm. This is\n implemented via a hook that calculates the spectral norm and\n rescales the weight before every \"forward()\" call.\n See Spectral Normalization for Generative Adversarial Networks .\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html", "category": "pytorch docs"}
{"text": "Parameters:\n * module (nn.Module) -- containing module\n * name (str, optional) -- name of weight parameter\n * n_power_iterations (int, optional) -- number of\n power iterations to calculate spectral norm\n * eps (float, optional) -- epsilon for numerical\n stability in calculating norms\n * dim (int, optional) -- dimension corresponding to\n number of outputs, the default is \"0\", except for modules that\n are instances of ConvTranspose{1,2,3}d, when it is \"1\"\n Returns:\n The original module with the spectral norm hook\n Return type:\n T_module\n Note:\n This function has been reimplemented as\n \"torch.nn.utils.parametrizations.spectral_norm()\" using the new\n parametrization functionality in\n \"torch.nn.utils.parametrize.register_parametrization()\". Please\n use the newer version. This function will be deprecated in a\n future version of PyTorch.\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html", "category": "pytorch docs"}
{"text": "future version of PyTorch.\n Example:\n >>> m = spectral_norm(nn.Linear(20, 40))\n >>> m\n Linear(in_features=20, out_features=40, bias=True)\n >>> m.weight_u.size()\n torch.Size([40])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html", "category": "pytorch docs"}
{"text": "torch.rolltorch.roll(input, shifts, dims=None) -> Tensor\n Roll the tensor \"input\" along the given dimension(s). Elements that\n are shifted beyond the last position are re-introduced at the first\n position. If \"dims\" is None, the tensor will be flattened before\n rolling and then restored to the original shape.\n Parameters:\n * input (Tensor) -- the input tensor.\n * shifts (int or tuple of ints) -- The number of\n places by which the elements of the tensor are shifted. If\n shifts is a tuple, dims must be a tuple of the same size, and\n each dimension will be rolled by the corresponding value\n * dims (int or tuple of ints) -- Axis along which to\n roll\n Example:\n >>> x = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(4, 2)\n >>> x\n tensor([[1, 2],\n [3, 4],\n [5, 6],\n [7, 8]])\n >>> torch.roll(x, 1)\n tensor([[8, 1],\n [2, 3],\n [4, 5],", "source": "https://pytorch.org/docs/stable/generated/torch.roll.html", "category": "pytorch docs"}
{"text": "[2, 3],\n [4, 5],\n [6, 7]])\n >>> torch.roll(x, 1, 0)\n tensor([[7, 8],\n [1, 2],\n [3, 4],\n [5, 6]])\n >>> torch.roll(x, -1, 0)\n tensor([[3, 4],\n [5, 6],\n [7, 8],\n [1, 2]])\n >>> torch.roll(x, shifts=(2, 1), dims=(0, 1))\n tensor([[6, 5],\n [8, 7],\n [2, 1],\n [4, 3]])", "source": "https://pytorch.org/docs/stable/generated/torch.roll.html", "category": "pytorch docs"}
{"text": "torch.Tensor.new_tensorTensor.new_tensor(data, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\n Returns a new Tensor with \"data\" as the tensor data. By default,\n the returned Tensor has the same \"torch.dtype\" and \"torch.device\"\n as this tensor.\n Warning:\n \"new_tensor()\" always copies \"data\". If you have a Tensor \"data\"\n and want to avoid a copy, use \"torch.Tensor.requires_grad_()\" or\n \"torch.Tensor.detach()\". If you have a numpy array and want to\n avoid a copy, use \"torch.from_numpy()\".\n Warning:\n When data is a tensor x, \"new_tensor()\" reads out the data\n from whatever it is passed, and constructs a leaf variable.\n Therefore \"tensor.new_tensor(x)\" is equivalent to\n \"x.clone().detach()\" and \"tensor.new_tensor(x,\n requires_grad=True)\" is equivalent to\n \"x.clone().detach().requires_grad_(True)\". The equivalents using\n \"clone()\" and \"detach()\" are recommended.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html", "category": "pytorch docs"}
{"text": "Parameters:\n data (array_like) -- The returned Tensor copies \"data\".\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> tensor = torch.ones((2,), dtype=torch.int8)\n >>> data = [[0, 1], [2, 3]]\n >>> tensor.new_tensor(data)\n tensor([[ 0, 1],", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html", "category": "pytorch docs"}
{"text": "tensor([[ 0, 1],\n [ 2, 3]], dtype=torch.int8)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html", "category": "pytorch docs"}
{"text": "torch.set_printoptionstorch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None, sci_mode=None)\n Set options for printing. Items shamelessly taken from NumPy\n Parameters:\n * precision -- Number of digits of precision for floating\n point output (default = 4).\n * threshold -- Total number of array elements which trigger\n summarization rather than full repr (default = 1000).\n * edgeitems -- Number of array items in summary at beginning\n and end of each dimension (default = 3).\n * linewidth -- The number of characters per line for the\n purpose of inserting line breaks (default = 80). Thresholded\n matrices will ignore this parameter.\n * profile -- Sane defaults for pretty printing. Can override\n with any of the above options. (any one of default, short,\n full)\n * sci_mode -- Enable (True) or disable (False) scientific", "source": "https://pytorch.org/docs/stable/generated/torch.set_printoptions.html", "category": "pytorch docs"}
{"text": "notation. If None (default) is specified, the value is defined\n by torch._tensor_str._Formatter. This value is automatically\n chosen by the framework.\n Example:\n >>> # Limit the precision of elements\n >>> torch.set_printoptions(precision=2)\n >>> torch.tensor([1.12345])\n tensor([1.12])\n >>> # Limit the number of elements shown\n >>> torch.set_printoptions(threshold=5)\n >>> torch.arange(10)\n tensor([0, 1, 2, ..., 7, 8, 9])\n >>> # Restore defaults\n >>> torch.set_printoptions(profile='default')\n >>> torch.tensor([1.12345])\n tensor([1.1235])\n >>> torch.arange(10)\n tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])", "source": "https://pytorch.org/docs/stable/generated/torch.set_printoptions.html", "category": "pytorch docs"}
{"text": "torch.jit.ignoretorch.jit.ignore(drop=False, **kwargs)\n This decorator indicates to the compiler that a function or method\n should be ignored and left as a Python function. This allows you to\n leave code in your model that is not yet TorchScript compatible. If\n called from TorchScript, ignored functions will dispatch the call\n to the Python interpreter. Models with ignored functions cannot be\n exported; use \"@torch.jit.unused\" instead.\n Example (using \"@torch.jit.ignore\" on a method):\n import torch\n import torch.nn as nn\n class MyModule(nn.Module):\n @torch.jit.ignore\n def debugger(self, x):\n import pdb\n pdb.set_trace()\n def forward(self, x):\n x += 10\n # The compiler would normally try to compile debugger,\n # but since it is @ignored, it will be left as a call\n # to Python\n self.debugger(x)\n return x", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ignore.html", "category": "pytorch docs"}
{"text": "return x\n m = torch.jit.script(MyModule())\n # Error! The call debugger cannot be saved since it calls into Python\n m.save(\"m.pt\")\n Example (using \"@torch.jit.ignore(drop=True)\" on a method):\n import torch\n import torch.nn as nn\n class MyModule(nn.Module):\n @torch.jit.ignore(drop=True)\n def training_method(self, x):\n import pdb\n pdb.set_trace()\n def forward(self, x):\n if self.training:\n self.training_method(x)\n return x\n m = torch.jit.script(MyModule())\n # This is OK since training_method is not saved, the call is replaced\n # with a raise.\n m.save(\"m.pt\")", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ignore.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.adaptive_avg_pool2dtorch.nn.functional.adaptive_avg_pool2d(input, output_size)\n Applies a 2D adaptive average pooling over an input signal composed\n of several input planes.\n See \"AdaptiveAvgPool2d\" for details and output shape.\n Parameters:\n output_size (None) -- the target output size (single\n integer or double-integer tuple)\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_avg_pool2d.html", "category": "pytorch docs"}
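The record above has no usage example; a minimal runnable sketch (not part of the scraped docs, input sizes chosen arbitrarily):

```python
import torch
import torch.nn.functional as F

# Adaptive average pooling chooses kernel and stride internally so the
# output has exactly the requested spatial size, whatever the input size.
x = torch.randn(1, 16, 9, 11)
out = F.adaptive_avg_pool2d(x, (5, 7))
print(out.shape)   # torch.Size([1, 16, 5, 7])

# output_size=1 collapses each channel to its mean (global average pooling).
gap = F.adaptive_avg_pool2d(x, 1)
```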
{"text": "torch.sparse_bsc_tensortorch.sparse_bsc_tensor(ccol_indices, row_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\n Constructs a sparse tensor in BSC (Block Compressed Sparse Column)\n format with specified 2-dimensional blocks at the given\n \"ccol_indices\" and \"row_indices\". Sparse matrix multiplication\n operations in BSC format are typically faster than those for sparse\n tensors in COO format. Make sure you have a look at the note on the\n data type of the indices.\n Note:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n Parameters:\n * ccol_indices (array_like) -- (B+1)-dimensional array of\n size \"(batchsize, ncolblocks + 1)\". The last element of each", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html", "category": "pytorch docs"}
{"text": "batch is the number of non-zeros. This tensor encodes the\n index in values and row_indices depending on where the given\n column starts. Each successive number in the tensor subtracted\n by the number before it denotes the number of elements in a\n given column.\n * row_indices (array_like) -- Row block coordinates of\n each block in values. (B+1)-dimensional tensor with the same\n length as values.\n * values (array_like) -- Initial blocks for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", or other types that\n represent a (1 + 2 + K)-dimensional tensor where \"K\" is the\n number of dense dimensions.\n * size (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(batchsize, nrows * blocksize[0], ncols *\n blocksize[1], densesize)\". If not provided, the size will be\n inferred as the minimum size big enough to hold all non-zero\n blocks.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html", "category": "pytorch docs"}
{"text": "blocks.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * check_invariants (bool, optional) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n Example::\n >>> ccol_indices = [0, 1, 2]\n >>> row_indices = [0, 1]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html", "category": "pytorch docs"}
{"text": "\n\n\nrow_indices = [0, 1]\n >>> values = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]\n >>> torch.sparse_bsc_tensor(torch.tensor(ccol_indices, dtype=torch.int64),\n ... torch.tensor(row_indices, dtype=torch.int64),\n ... torch.tensor(values), dtype=torch.double)\n tensor(ccol_indices=tensor([0, 1, 2]),\n row_indices=tensor([0, 1]),\n values=tensor([[[1., 2.],\n [3., 4.]],\n [[5., 6.],\n [7., 8.]]]), size=(2, 2), nnz=2, dtype=torch.float64,\n layout=torch.sparse_bsc)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html", "category": "pytorch docs"}
{"text": "LSTMclass torch.nn.LSTM(*args, **kwargs)\n Applies a multi-layer long short-term memory (LSTM) RNN to an input\n sequence.\n For each element in the input sequence, each layer computes the\n following function:\n \\begin{array}{ll} \\ i_t = \\sigma(W_{ii} x_t + b_{ii} +\n W_{hi} h_{t-1} + b_{hi}) \\ f_t = \\sigma(W_{if} x_t + b_{if}\n + W_{hf} h_{t-1} + b_{hf}) \\ g_t = \\tanh(W_{ig} x_t +\n b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\ o_t = \\sigma(W_{io} x_t\n + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\ c_t = f_t \\odot\n c_{t-1} + i_t \\odot g_t \\ h_t = o_t \\odot \\tanh(c_t) \\\n \\end{array}\n where h_t is the hidden state at time t, c_t is the cell state at\n time t, x_t is the input at time t, h_{t-1} is the hidden state\n of the layer at time t-1 or the initial hidden state at time 0,\n and i_t, f_t, g_t, o_t are the input, forget, cell, and output\n gates, respectively. \\sigma is the sigmoid function, and \\odot is\n the Hadamard product.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "the Hadamard product.\n In a multilayer LSTM, the input x^{(l)}_t of the l-th layer (l >=\n 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied\n by dropout \\delta^{(l-1)}_t where each \\delta^{(l-1)}_t is a\n Bernoulli random variable which is 0 with probability \"dropout\".\n If \"proj_size > 0\" is specified, LSTM with projections will be\n used. This changes the LSTM cell in the following way. First, the\n dimension of h_t will be changed from \"hidden_size\" to \"proj_size\"\n (the dimensions of W will be changed accordingly). Second, the\n output hidden state of each layer will be multiplied by a learnable\n projection matrix: h_t = W_{hr}h_t. Note that as a consequence of\n this, the output of the LSTM network will have a different shape as\n well. See the Inputs/Outputs sections below for the exact\n dimensions of all variables. You can find more details in\n https://arxiv.org/abs/1402.1128.\n Parameters:\n * input_size -- The number of expected features in the input\n x", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "x\n * hidden_size -- The number of features in the hidden state\n h\n * num_layers -- Number of recurrent layers. E.g., setting\n \"num_layers=2\" would mean stacking two LSTMs together to form\n a stacked LSTM, with the second LSTM taking in outputs of\n the first LSTM and computing the final results. Default: 1\n * bias -- If \"False\", then the layer does not use bias\n weights b_ih and b_hh. Default: \"True\"\n * batch_first -- If \"True\", then the input and output\n tensors are provided as (batch, seq, feature) instead of\n (seq, batch, feature). Note that this does not apply to\n hidden or cell states. See the Inputs/Outputs sections below\n for details. Default: \"False\"\n * dropout -- If non-zero, introduces a Dropout layer on\n the outputs of each LSTM layer except the last layer, with\n dropout probability equal to \"dropout\". Default: 0", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "* bidirectional -- If \"True\", becomes a bidirectional LSTM.\n Default: \"False\"\n * proj_size -- If \"> 0\", will use LSTM with projections of\n corresponding size. Default: 0\n Inputs: input, (h_0, c_0)\n * input: tensor of shape (L, H_{in}) for unbatched input,\n (L, N, H_{in}) when \"batch_first=False\" or (N, L, H_{in}) when\n \"batch_first=True\" containing the features of the input\n sequence. The input can also be a packed variable length\n sequence. See \"torch.nn.utils.rnn.pack_padded_sequence()\" or\n \"torch.nn.utils.rnn.pack_sequence()\" for details.\n * h_0: tensor of shape (D * \\text{num_layers}, H_{out}) for\n unbatched input or (D * \\text{num_layers}, N, H_{out})\n containing the initial hidden state for each element in the\n input sequence. Defaults to zeros if (h_0, c_0) is not\n provided.\n * c_0: tensor of shape (D * \\text{num_layers}, H_{cell})", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "for unbatched input or (D * \\text{num_layers}, N, H_{cell})\n containing the initial cell state for each element in the\n input sequence. Defaults to zeros if (h_0, c_0) is not\n provided.\n where:\n \\begin{aligned} N ={} & \\text{batch size} \\ L ={} &\n \\text{sequence length} \\ D ={} & 2 \\text{ if\n bidirectional=True otherwise } 1 \\ H_{in} ={} &\n \\text{input_size} \\ H_{cell} ={} & \\text{hidden_size}\n \\ H_{out} ={} & \\text{proj_size if }\n \\text{proj_size}>0 \\text{ otherwise hidden_size} \\\n \\end{aligned}\n Outputs: output, (h_n, c_n)\n * output: tensor of shape (L, D * H_{out}) for unbatched\n input, (L, N, D * H_{out}) when \"batch_first=False\" or (N, L,\n D * H_{out}) when \"batch_first=True\" containing the output\n features (h_t) from the last layer of the LSTM, for each\n t. If a \"torch.nn.utils.rnn.PackedSequence\" has been given", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "as the input, the output will also be a packed sequence. When\n \"bidirectional=True\", output will contain a concatenation of\n the forward and reverse hidden states at each time step in the\n sequence.\n * h_n: tensor of shape (D * \\text{num_layers}, H_{out}) for\n unbatched input or (D * \\text{num_layers}, N, H_{out})\n containing the final hidden state for each element in the\n sequence. When \"bidirectional=True\", h_n will contain a\n concatenation of the final forward and reverse hidden states,\n respectively.\n * c_n: tensor of shape (D * \\text{num_layers}, H_{cell})\n for unbatched input or (D * \\text{num_layers}, N, H_{cell})\n containing the final cell state for each element in the\n sequence. When \"bidirectional=True\", c_n will contain a\n concatenation of the final forward and reverse cell states,\n respectively.\n Variables:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "respectively.\n Variables:\n * weight_ih_l[k] -- the learnable input-hidden weights of\n the \\text{k}^{th} layer (W_ii|W_if|W_ig|W_io), of shape\n (4*hidden_size, input_size) for k = 0. Otherwise, the\n shape is (4*hidden_size, num_directions * hidden_size). If\n \"proj_size > 0\" was specified, the shape will be\n (4*hidden_size, num_directions * proj_size) for k > 0\n * weight_hh_l[k] -- the learnable hidden-hidden weights of\n the \\text{k}^{th} layer (W_hi|W_hf|W_hg|W_ho), of shape\n (4*hidden_size, hidden_size). If \"proj_size > 0\" was\n specified, the shape will be (4*hidden_size, proj_size).\n * bias_ih_l[k] -- the learnable input-hidden bias of the\n \\text{k}^{th} layer (b_ii|b_if|b_ig|b_io), of shape\n (4*hidden_size)\n * bias_hh_l[k] -- the learnable hidden-hidden bias of the\n \\text{k}^{th} layer (b_hi|b_hf|b_hg|b_ho), of shape\n (4*hidden_size)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "(4*hidden_size)\n * weight_hr_l[k] -- the learnable projection weights of the\n \\text{k}^{th} layer of shape (proj_size, hidden_size). Only\n present when \"proj_size > 0\" was specified.\n * weight_ih_l[k]_reverse -- Analogous to weight_ih_l[k]\n for the reverse direction. Only present when\n \"bidirectional=True\".\n * weight_hh_l[k]_reverse -- Analogous to weight_hh_l[k]\n for the reverse direction. Only present when\n \"bidirectional=True\".\n * bias_ih_l[k]_reverse -- Analogous to bias_ih_l[k] for\n the reverse direction. Only present when \"bidirectional=True\".\n * bias_hh_l[k]_reverse -- Analogous to bias_hh_l[k] for\n the reverse direction. Only present when \"bidirectional=True\".\n * weight_hr_l[k]_reverse -- Analogous to weight_hr_l[k]\n for the reverse direction. Only present when\n \"bidirectional=True\" and \"proj_size > 0\" was specified.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "Note:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden_size}}\n Note:\n For bidirectional LSTMs, forward and backward are directions 0\n and 1 respectively. Example of splitting the output layers when\n \"batch_first=False\": \"output.view(seq_len, batch, num_directions,\n hidden_size)\".\n Note:\n For bidirectional LSTMs, h_n is not equivalent to the last\n element of output; the former contains the final forward and\n reverse hidden states, while the latter contains the final\n forward hidden state and the initial reverse hidden state.\n Note:\n \"batch_first\" argument is ignored for unbatched inputs.\n Warning:\n There are known non-determinism issues for RNN functions on some\n versions of cuDNN and CUDA. You can enforce deterministic\n behavior by setting the following environment variables:On CUDA\n 10.1, set environment variable \"CUDA_LAUNCH_BLOCKING=1\". This may", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
{"text": "affect performance.On CUDA 10.2 or later, set environment\n variable (note the leading colon symbol)\n \"CUBLAS_WORKSPACE_CONFIG=:16:8\" or\n \"CUBLAS_WORKSPACE_CONFIG=:4096:2\".See the cuDNN 8 Release Notes\n for more information.\n Note:\n If the following conditions are satisfied: 1) cudnn is enabled,\n 2) input data is on the GPU, 3) input data has dtype\n \"torch.float16\", 4) V100 GPU is used, 5) input data is not in\n \"PackedSequence\" format, a persistent algorithm can be selected\n to improve performance.\n Examples:\n >>> rnn = nn.LSTM(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> c0 = torch.randn(2, 3, 20)\n >>> output, (hn, cn) = rnn(input, (h0, c0))", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"}
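The LSTM record above states that with `proj_size > 0` the hidden state (and hence `output` and `h_n`) shrinks to `proj_size` while the cell state keeps `hidden_size`. A minimal runnable sketch of those shapes (not part of the scraped docs; sizes chosen arbitrarily):

```python
import torch
import torch.nn as nn

# With proj_size=5, H_out becomes proj_size while H_cell stays hidden_size.
rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, proj_size=5)
x = torch.randn(7, 3, 10)            # (L, N, H_in); initial states default to zeros
output, (hn, cn) = rnn(x)
print(output.shape)   # torch.Size([7, 3, 5])  -> (L, N, D * H_out), H_out = proj_size
print(hn.shape)       # torch.Size([2, 3, 5])  -> (D * num_layers, N, H_out)
print(cn.shape)       # torch.Size([2, 3, 20]) -> (D * num_layers, N, H_cell)
```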
{"text": "torch.zeros_liketorch.zeros_like(input, , dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\n Returns a tensor filled with the scalar value 0, with the same\n size as \"input\". \"torch.zeros_like(input)\" is equivalent to\n \"torch.zeros(input.size(), dtype=input.dtype, layout=input.layout,\n device=input.device)\".\n Warning:\n As of 0.4, this function does not support an \"out\" keyword. As an\n alternative, the old \"torch.zeros_like(input, out=output)\" is\n equivalent to \"torch.zeros(input.size(), out=output)\".\n Parameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * layout* (\"torch.layout\", optional) -- the desired layout of", "source": "https://pytorch.org/docs/stable/generated/torch.zeros_like.html", "category": "pytorch docs"}
{"text": "returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n Example:\n >>> input = torch.empty(2, 3)\n >>> torch.zeros_like(input)\n tensor([[ 0., 0., 0.],\n [ 0., 0., 0.]])", "source": "https://pytorch.org/docs/stable/generated/torch.zeros_like.html", "category": "pytorch docs"}
{"text": "torch.hsplittorch.hsplit(input, indices_or_sections) -> List of Tensors\n Splits \"input\", a tensor with one or more dimensions, into multiple\n tensors horizontally according to \"indices_or_sections\". Each split\n is a view of \"input\".\n If \"input\" is one dimensional this is equivalent to calling\n torch.tensor_split(input, indices_or_sections, dim=0) (the split\n dimension is zero), and if \"input\" has two or more dimensions it's\n equivalent to calling torch.tensor_split(input,\n indices_or_sections, dim=1) (the split dimension is 1), except that\n if \"indices_or_sections\" is an integer it must evenly divide the\n split dimension or a runtime error will be thrown.\n This function is based on NumPy's \"numpy.hsplit()\".\n Parameters:\n * input (Tensor) -- tensor to split.\n * indices_or_sections (int or list or tuple of\n ints) -- See argument in \"torch.tensor_split()\".\n Example::\n >>> t = torch.arange(16.0).reshape(4,4)\n >>> t", "source": "https://pytorch.org/docs/stable/generated/torch.hsplit.html", "category": "pytorch docs"}
{"text": "\n\n\nt\n tensor([[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.],\n [ 8., 9., 10., 11.],\n [12., 13., 14., 15.]])\n >>> torch.hsplit(t, 2)\n (tensor([[ 0., 1.],\n [ 4., 5.],\n [ 8., 9.],\n [12., 13.]]),\n tensor([[ 2., 3.],\n [ 6., 7.],\n [10., 11.],\n [14., 15.]]))\n >>> torch.hsplit(t, [3, 6])\n (tensor([[ 0., 1., 2.],\n [ 4., 5., 6.],\n [ 8., 9., 10.],\n [12., 13., 14.]]),\n tensor([[ 3.],\n [ 7.],\n [11.],\n [15.]]),\n tensor([], size=(4, 0)))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.hsplit.html", "category": "pytorch docs"}
{"text": "torch.Tensor.aminmaxTensor.aminmax(*, dim=None, keepdim=False) -> (Tensor min, Tensor max)\n See \"torch.aminmax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.aminmax.html", "category": "pytorch docs"}
{"text": "ConvBn3dclass torch.ao.nn.intrinsic.qat.ConvBn3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\n A ConvBn3d module is a module fused from Conv3d and BatchNorm3d,\n attached with FakeQuantize modules for weight, used in quantization\n aware training.\n We combined the interface of \"torch.nn.Conv3d\" and\n \"torch.nn.BatchNorm3d\".\n Similar to \"torch.nn.Conv3d\", with FakeQuantize modules initialized\n to default.\n Variables:\n * freeze_bn --\n * weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBn3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.retain_gradTensor.retain_grad() -> None\n Enables this Tensor to have their \"grad\" populated during\n \"backward()\". This is a no-op for leaf tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.retain_grad.html", "category": "pytorch docs"}
{"text": "BCELossclass torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')\n Creates a criterion that measures the Binary Cross Entropy between\n the target and the input probabilities:\n The unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = {l_1,\\dots,l_N}^\\top, \\quad l_n = - w_n\n \\left[ y_n \\cdot \\log x_n + (1 - y_n) \\cdot \\log (1 - x_n)\n \\right],\n where N is the batch size. If \"reduction\" is not \"'none'\" (default\n \"'mean'\"), then\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{sum'.}\n \\end{cases}\n This is used for measuring the error of a reconstruction in for\n example an auto-encoder. Note that the targets y should be numbers\n between 0 and 1.\n Notice that if x_n is either 0 or 1, one of the log terms would be\n mathematically undefined in the above loss equation. PyTorch", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html", "category": "pytorch docs"}
{"text": "chooses to set \\log (0) = -\\infty, since \\lim_{x\\to 0} \\log (x) =\n -\\infty. However, an infinite term in the loss equation is not\n desirable for several reasons.\n For one, if either y_n = 0 or (1 - y_n) = 0, then we would be\n multiplying 0 with infinity. Secondly, if we have an infinite loss\n value, then we would also have an infinite term in our gradient,\n since \\lim_{x\\to 0} \\frac{d}{dx} \\log (x) = \\infty. This would make\n BCELoss's backward method nonlinear with respect to x_n, and using\n it for things like linear regression would not be straight-forward.\n Our solution is that BCELoss clamps its log function outputs to be\n greater than or equal to -100. This way, we can always have a\n finite loss value and a linear backward method.\n Parameters:\n * weight (Tensor, optional) -- a manual rescaling\n weight given to the loss of each batch element. If given, has\n to be a Tensor of size nbatch.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html", "category": "pytorch docs"}
{"text": "to be a Tensor of size nbatch.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html", "category": "pytorch docs"}
{"text": "the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (), where * means any number of dimensions.\n * Target: (), same shape as the input.\n * Output: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as input.\n Examples:\n >>> m = nn.Sigmoid()\n >>> loss = nn.BCELoss()\n >>> input = torch.randn(3, requires_grad=True)\n >>> target = torch.empty(3).random_(2)\n >>> output = loss(m(input), target)\n >>> output.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html", "category": "pytorch docs"}
{"text": "torch.Tensor.outerTensor.outer(vec2) -> Tensor\n See \"torch.outer()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.outer.html", "category": "pytorch docs"}
{"text": "torch.Tensor.clipTensor.clip(min=None, max=None) -> Tensor\n Alias for \"clamp()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clip.html", "category": "pytorch docs"}
{"text": "torch.Tensor.squareTensor.square() -> Tensor\n See \"torch.square()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.square.html", "category": "pytorch docs"}
{"text": "torch.hann_windowtorch.hann_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Hann window function.\n w[n] = \\frac{1}{2}\\ \\left[1 - \\cos \\left( \\frac{2 \\pi n}{N - 1}\n \\right)\\right] = \\sin^2 \\left( \\frac{\\pi n}{N - 1}\n \\right),\n where N is the full window size.\n The input \"window_length\" is a positive integer controlling the\n returned window size. \"periodic\" flag determines whether the\n returned window trims off the last duplicate value from the\n symmetric window and is ready to be used as a periodic window with\n functions like \"torch.stft()\". Therefore, if \"periodic\" is true,\n the N in above formula is in fact \\text{window_length} + 1. Also,\n we always have \"torch.hann_window(L, periodic=True)\" equal to\n \"torch.hann_window(L + 1, periodic=False)[:-1])\".\n Note:\n If \"window_length\" =1, the returned window contains a single\n value 1.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.hann_window.html", "category": "pytorch docs"}
{"text": "value 1.\n Parameters:\n * window_length (int) -- the size of returned window\n * periodic (bool, optional) -- If True, returns a\n window to be used as periodic function. If False, return a\n symmetric window.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). Only floating point\n types are supported.\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.", "source": "https://pytorch.org/docs/stable/generated/torch.hann_window.html", "category": "pytorch docs"}
{"text": "tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Returns:\n A 1-D tensor of size (\\text{window_length},) containing the\n window\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.hann_window.html", "category": "pytorch docs"}
{"text": "fuse_fxclass torch.quantization.quantize_fx.fuse_fx(model, fuse_custom_config=None, backend_config=None)\n Fuse modules like conv+bn, conv+bn+relu etc, model must be in eval\n mode. Fusion rules are defined in\n torch.quantization.fx.fusion_pattern.py\n Parameters:\n * model () -- a torch.nn.Module model\n * *fuse_custom_config (*) -- custom configurations for\n fuse_fx. See \"FuseCustomConfig\" for more details\n Return type:\n GraphModule\n Example:\n from torch.ao.quantization import fuse_fx\n m = Model().eval()\n m = fuse_fx(m)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.fuse_fx.html", "category": "pytorch docs"}
{"text": "torch.linalg.solvetorch.linalg.solve(A, B, , left=True, out=None) -> Tensor\n Computes the solution of a square system of linear equations with a\n unique solution.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, this function\n computes the solution X \\in \\mathbb{K}^{n \\times k} of the linear\n system associated to A \\in \\mathbb{K}^{n \\times n}, B \\in\n \\mathbb{K}^{n \\times k}, which is defined as\n AX = B\n If \"left\"= False*, this function returns the matrix X \\in\n \\mathbb{K}^{n \\times k} that solves the system\n XA = B\\mathrlap{\\qquad A \\in \\mathbb{K}^{k \\times k}, B \\in\n \\mathbb{K}^{n \\times k}.}\n This system of linear equations has one solution if and only if A\n is invertible. This function assumes that A is invertible.\n Supports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve.html", "category": "pytorch docs"}
{"text": "Letting *** be zero or more batch dimensions,\n * If \"A\" has shape (, n, n) and \"B\" has shape (, n) (a batch\n of vectors) or shape (, n, k) (a batch of matrices or\n \"multiple right-hand sides\"), this function returns X of shape\n (, n) or (, n, k) respectively.\n * Otherwise, if \"A\" has shape (, n, n) and \"B\" has shape (n,)\n or (n, k), \"B\" is broadcasted to have shape (, n) or (, n,\n k) respectively. This function then returns the solution of the\n resulting batch of systems of linear equations.\n Note:\n This function computes X = \"A\".inverse() @ \"B\" in a faster\n and more numerically stable way than performing the computations\n separately.\n Note:\n It is possible to compute the solution of the system XA = B by\n passing the inputs \"A\" and \"B\" transposed and transposing the\n output returned by this function.\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n See also:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve.html", "category": "pytorch docs"}
{"text": "device with the CPU.\n See also:\n \"torch.linalg.solve_triangular()\" computes the solution of a\n triangular system of linear equations with a unique solution.\n Parameters:\n * A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions.\n * B (Tensor) -- right-hand side tensor of shape (, n)\n or (, n, k) or (n,) or (n, k) according to the rules\n described above\n Keyword Arguments:\n * left (bool, optional) -- whether to solve the system\n AX=B or XA = B. Default: True.\n * out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Raises:\n RuntimeError* -- if the \"A\" matrix is not invertible or any\n matrix in a batched \"A\" is not invertible.\n Examples:\n >>> A = torch.randn(3, 3)\n >>> b = torch.randn(3)\n >>> x = torch.linalg.solve(A, b)\n >>> torch.allclose(A @ x, b)\n True\n >>> A = torch.randn(2, 3, 3)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve.html", "category": "pytorch docs"}
{"text": "True\n >>> A = torch.randn(2, 3, 3)\n >>> B = torch.randn(2, 3, 4)\n >>> X = torch.linalg.solve(A, B)\n >>> X.shape\n torch.Size([2, 3, 4])\n >>> torch.allclose(A @ X, B)\n True\n >>> A = torch.randn(2, 3, 3)\n >>> b = torch.randn(3, 1)\n >>> x = torch.linalg.solve(A, b) # b is broadcasted to size (2, 3, 1)\n >>> x.shape\n torch.Size([2, 3, 1])\n >>> torch.allclose(A @ x, b)\n True\n >>> b = torch.randn(3)\n >>> x = torch.linalg.solve(A, b) # b is broadcasted to size (2, 3)\n >>> x.shape\n torch.Size([2, 3])\n >>> Ax = A @ x.unsqueeze(-1)\n >>> torch.allclose(Ax, b.unsqueeze(-1).expand_as(Ax))\n True", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve.html", "category": "pytorch docs"}
{"text": "torch.polartorch.polar(abs, angle, , out=None) -> Tensor\n Constructs a complex tensor whose elements are Cartesian\n coordinates corresponding to the polar coordinates with absolute\n value \"abs\" and angle \"angle\".\n \\text{out} = \\text{abs} \\cdot \\cos(\\text{angle}) + \\text{abs}\n \\cdot \\sin(\\text{angle}) \\cdot j\n Note:\n torch.polar is similar to std::polar and does not compute the\n polar decomposition of a complex tensor like Python's\n cmath.polar and SciPy's linalg.polar do. The behavior of this\n function is undefined if abs is negative or NaN, or if angle\n is infinite.\n Parameters:\n * abs (Tensor) -- The absolute value the complex tensor.\n Must be float or double.\n * angle (Tensor) -- The angle of the complex tensor. Must\n be same dtype as \"abs\".\n Keyword Arguments:\n out (Tensor*) -- If the inputs are \"torch.float32\", must be\n \"torch.complex64\". If the inputs are \"torch.float64\", must be", "source": "https://pytorch.org/docs/stable/generated/torch.polar.html", "category": "pytorch docs"}
{"text": "\"torch.complex128\".\n Example:\n >>> import numpy as np\n >>> abs = torch.tensor([1, 2], dtype=torch.float64)\n >>> angle = torch.tensor([np.pi / 2, 5 * np.pi / 4], dtype=torch.float64)\n >>> z = torch.polar(abs, angle)\n >>> z\n tensor([(0.0000+1.0000j), (-1.4142-1.4142j)], dtype=torch.complex128)", "source": "https://pytorch.org/docs/stable/generated/torch.polar.html", "category": "pytorch docs"}
{"text": "torch.foreach_sqrt_torch._foreach_sqrt(self: List[Tensor]) -> None\n Apply \"torch.sqrt()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sqrt_.html", "category": "pytorch docs"}
{"text": "torch.numeltorch.numel(input) -> int\n Returns the total number of elements in the \"input\" tensor.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> a = torch.randn(1, 2, 3, 4, 5)\n >>> torch.numel(a)\n 120\n >>> a = torch.zeros(4,4)\n >>> torch.numel(a)\n 16", "source": "https://pytorch.org/docs/stable/generated/torch.numel.html", "category": "pytorch docs"}
{"text": "torch.Tensor.igammacTensor.igammac(other) -> Tensor\n See \"torch.igammac()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.igammac.html", "category": "pytorch docs"}
{"text": "torch.lttorch.lt(input, other, , out=None) -> Tensor\n Computes \\text{input} < \\text{other} element-wise.\n The second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\n Parameters:\n * input (Tensor) -- the tensor to compare\n * other (Tensor or float) -- the tensor or value to\n compare\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Returns:\n A boolean tensor that is True where \"input\" is less than \"other\"\n and False elsewhere\n Example:\n >>> torch.lt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[False, False], [True, False]])", "source": "https://pytorch.org/docs/stable/generated/torch.lt.html", "category": "pytorch docs"}
{"text": "torch.triu_indicestorch.triu_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor\n Returns the indices of the upper triangular part of a \"row\" by\n \"col\" matrix in a 2-by-N Tensor, where the first row contains row\n coordinates of all indices and the second row contains column\n coordinates. Indices are ordered based on rows and then columns.\n The upper triangular part of the matrix is defined as the elements\n on and above the diagonal.\n The argument \"offset\" controls which diagonal to consider. If\n \"offset\" = 0, all elements on and above the main diagonal are\n retained. A positive value excludes just as many diagonals above\n the main diagonal, and similarly a negative value includes just as\n many diagonals below the main diagonal. The main diagonal are the\n set of indices \\lbrace (i, i) \\rbrace for i \\in [0, \\min{d_{1},\n d_{2}} - 1] where d_{1}, d_{2} are the dimensions of the matrix.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.triu_indices.html", "category": "pytorch docs"}
{"text": "Note:\n When running on CUDA, \"row * col\" must be less than 2^{59} to\n prevent overflow during calculation.\n Parameters:\n * row (\"int\") -- number of rows in the 2-D matrix.\n * col (\"int\") -- number of columns in the 2-D matrix.\n * offset (\"int\") -- diagonal offset from the main diagonal.\n Default: if not provided, 0.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", \"torch.long\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * layout (\"torch.layout\", optional) -- currently only\n support \"torch.strided\".\n Example:\n >>> a = torch.triu_indices(3, 3)\n >>> a", "source": "https://pytorch.org/docs/stable/generated/torch.triu_indices.html", "category": "pytorch docs"}
{"text": "\n\n\na = torch.triu_indices(3, 3)\n >>> a\n tensor([[0, 0, 0, 1, 1, 2],\n [0, 1, 2, 1, 2, 2]])\n >>> a = torch.triu_indices(4, 3, -1)\n >>> a\n tensor([[0, 0, 0, 1, 1, 1, 2, 2, 3],\n [0, 1, 2, 0, 1, 2, 1, 2, 2]])\n >>> a = torch.triu_indices(4, 3, 1)\n >>> a\n tensor([[0, 0, 1],\n [1, 2, 2]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.triu_indices.html", "category": "pytorch docs"}
{"text": "LazyInstanceNorm1dclass torch.nn.LazyInstanceNorm1d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n A \"torch.nn.InstanceNorm1d\" module with lazy initialization of the\n \"num_features\" argument of the \"InstanceNorm1d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight, bias, running_mean and running_var.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * num_features -- C from an expected input of size (N, C, L)\n or (C, L)\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm1d.html", "category": "pytorch docs"}
{"text": "initialized the same way as done for batch normalization.\n Default: \"False\".\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n Shape:\n * Input: (N, C, L) or (C, L)\n * Output: (N, C, L) or (C, L) (same shape as input)\n cls_to_become\n alias of \"InstanceNorm1d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm1d.html", "category": "pytorch docs"}
{"text": "enable_observerclass torch.quantization.fake_quantize.enable_observer(mod)\n Enable observation for this module, if applicable. Example usage:\n # model is any PyTorch model\n model.apply(torch.ao.quantization.enable_observer)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.enable_observer.html", "category": "pytorch docs"}
{"text": "torch.fake_quantize_per_channel_affinetorch.fake_quantize_per_channel_affine(input, scale, zero_point, quant_min, quant_max) -> Tensor\n Returns a new tensor with the data in \"input\" fake quantized per\n channel using \"scale\", \"zero_point\", \"quant_min\" and \"quant_max\",\n across the channel specified by \"axis\".\n \\text{output} = min( \\text{quant_max}, max(\n \\text{quant_min}, \\text{std::nearby_int}(\\text{input}\n / \\text{scale}) + \\text{zero_point} ) )\n Parameters:\n * input (Tensor) -- the input value(s), in \"torch.float32\"\n * scale (Tensor) -- quantization scale, per channel in\n \"torch.float32\"\n * zero_point (Tensor) -- quantization zero_point, per\n channel in \"torch.int32\" or \"torch.half\" or \"torch.float32\"\n * axis (int32) -- channel axis\n * quant_min (int64) -- lower bound of the quantized domain", "source": "https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_channel_affine.html", "category": "pytorch docs"}
{"text": "\nquant_max (int64) -- upper bound of the quantized domain\n Returns:\n A newly fake_quantized per channel \"torch.float32\" tensor\n Return type:\n Tensor\n Example:\n >>> x = torch.randn(2, 2, 2)\n >>> x\n tensor([[[-0.2525, -0.0466],\n [ 0.3491, -0.2168]],\n [[-0.5906, 1.6258],\n [ 0.6444, -0.0542]]])\n >>> scales = (torch.randn(2) + 1) * 0.05\n >>> scales\n tensor([0.0475, 0.0486])\n >>> zero_points = torch.zeros(2).to(torch.int32)\n >>> zero_points\n tensor([0, 0])\n >>> torch.fake_quantize_per_channel_affine(x, scales, zero_points, 1, 0, 255)\n tensor([[[0.0000, 0.0000],\n [0.3405, 0.0000]],\n [[0.0000, 1.6134],\n [0.6323, 0.0000]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_channel_affine.html", "category": "pytorch docs"}
{"text": "torch.Tensor.subtractTensor.subtract(other, *, alpha=1) -> Tensor\n See \"torch.subtract()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.subtract.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.instance_normtorch.nn.functional.instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05)\n Applies Instance Normalization for each channel in each data sample\n in a batch.\n See \"InstanceNorm1d\", \"InstanceNorm2d\", \"InstanceNorm3d\" for\n details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.instance_norm.html", "category": "pytorch docs"}
{"text": "torch.Tensor.random_Tensor.random_(from=0, to=None, , generator=None) -> Tensor\n Fills \"self\" tensor with numbers sampled from the discrete uniform\n distribution over \"[from, to - 1]\". If not specified, the values\n are usually only bounded by \"self\" tensor's data type. However, for\n floating point types, if unspecified, range will be \"[0,\n 2^mantissa]\" to ensure that every value is representable. For\n example, torch.tensor(1, dtype=torch.double).random_()* will be\n uniform in \"[0, 2^53]\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.random_.html", "category": "pytorch docs"}
{"text": "per_channel_dynamic_qconfigtorch.quantization.qconfig.per_channel_dynamic_qconfig\n alias of QConfig(activation=functools.partial(,\n dtype=torch.quint8, quant_min=0, quant_max=255, is_dynamic=True){},\n weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_channel_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.per_channel_dynamic_qconfig.html", "category": "pytorch docs"}
{"text": "torch.fft.ihfftntorch.fft.ihfftn(input, s=None, dim=None, norm=None, , out=None) -> Tensor\n Computes the N-dimensional inverse discrete Fourier transform of\n real \"input\".\n \"input\" must be a real-valued signal, interpreted in the Fourier\n domain. The n-dimensional IFFT of a real signal is Hermitian-\n symmetric, \"X[i, j, ...] = conj(X[-i, -j, ...])\". \"ihfftn()\"\n represents this in the one-sided form where only the positive\n frequencies below the Nyquist frequency are included in the last\n signal dimension. To compute the full output, use \"ifftn()\".\n Note:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimensions.\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], *optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfftn.html", "category": "pytorch docs"}
{"text": "either be zero-padded or trimmed to the length \"s[i]\" before\n computing the Hermitian IFFT. If a length \"-1\" is specified,\n no padding is done in that dimension. Default: \"s =\n [input.size(d) for d in dim]\"\n * dim (Tuple[int], optional) -- Dimensions to be\n transformed. Default: all dimensions, or the last \"len(s)\"\n dimensions if \"s\" is given.\n * norm (str, optional) --\n Normalization mode. For the backward transform (\"ihfftn()\"),\n these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n IFFT orthonormal)\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"hfftn()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"ihfftn()\" the exact\n inverse.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfftn.html", "category": "pytorch docs"}
{"text": "inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nT = torch.rand(10, 10)\nihfftn = torch.fft.ihfftn(T)\nihfftn.size()\n torch.Size([10, 6])\n Compared against the full output from \"ifftn()\", we have all\n elements up to the Nyquist frequency.\nifftn = torch.fft.ifftn(t)\ntorch.allclose(ifftn[..., :6], ihfftn)\n True\n The discrete Fourier transform is separable, so \"ihfftn()\" here is\n equivalent to a combination of \"ihfft()\" and \"ifft()\":\ntwo_iffts = torch.fft.ifft(torch.fft.ihfft(t, dim=1), dim=0)\ntorch.allclose(ihfftn, two_iffts)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfftn.html", "category": "pytorch docs"}
{"text": "torch.isnantorch.isnan(input) -> Tensor\n Returns a new tensor with boolean elements representing if each\n element of \"input\" is NaN or not. Complex values are considered NaN\n when either their real and/or imaginary part is NaN.\n Parameters:\n input (Tensor) -- the input tensor.\n Returns:\n A boolean tensor that is True where \"input\" is NaN and False\n elsewhere\n Example:\n >>> torch.isnan(torch.tensor([1, float('nan'), 2]))\n tensor([False, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.isnan.html", "category": "pytorch docs"}
{"text": "torch.linalg.eigvalstorch.linalg.eigvals(A, , out=None) -> Tensor\n Computes the eigenvalues of a square matrix.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the eigenvalues\n of a square matrix A \\in \\mathbb{K}^{n \\times n} are defined as the\n roots (counted with multiplicity) of the polynomial p of degree\n n given by\n p(\\lambda) = \\operatorname{det}(A - \\lambda\n \\mathrm{I}_n)\\mathrlap{\\qquad \\lambda \\in \\mathbb{C}}\n where \\mathrm{I}_n is the n*-dimensional identity matrix.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n Note:\n The eigenvalues of a real matrix may be complex, as the roots of\n a real polynomial may be complex.The eigenvalues of a matrix are\n always well-defined, even when the matrix is not diagonalizable.\n Note:\n When inputs are on a CUDA device, this function synchronizes that", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvals.html", "category": "pytorch docs"}
{"text": "device with the CPU.\n See also:\n \"torch.linalg.eig()\" computes the full eigenvalue decomposition.\n Parameters:\n A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions.\n Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None*.\n Returns:\n A complex-valued tensor containing the eigenvalues even when \"A\"\n is real.\n Examples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> L = torch.linalg.eigvals(A)\n >>> L\n tensor([ 1.1226+0.5738j, -0.7537-0.1286j], dtype=torch.complex128)\n >>> torch.dist(L, torch.linalg.eig(A).eigenvalues)\n tensor(2.4576e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvals.html", "category": "pytorch docs"}
{"text": "disable_fake_quantclass torch.quantization.fake_quantize.disable_fake_quant(mod)\n Disable fake quantization for this module, if applicable. Example\n usage:\n # model is any PyTorch model\n model.apply(torch.ao.quantization.disable_fake_quant)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.disable_fake_quant.html", "category": "pytorch docs"}
{"text": "torch.Tensor.clip_Tensor.clip_(min=None, max=None) -> Tensor\n Alias for \"clamp_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clip_.html", "category": "pytorch docs"}
{"text": "torch.amintorch.amin(input, dim, keepdim=False, , out=None) -> Tensor\n Returns the minimum value of each slice of the \"input\" tensor in\n the given dimension(s) \"dim\".\n Note:\n The difference between \"max\"/\"min\" and \"amax\"/\"amin\" is:\n * \"amax\"/\"amin\" supports reducing on multiple dimensions,\n * \"amax\"/\"amin\" does not return indices,\n * \"amax\"/\"amin\" evenly distributes gradient between equal\n values, while \"max(dim)\"/\"min(dim)\" propagates gradient only\n to a single index in the source tensor.\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints*) -- the dimension or\n dimensions to reduce.", "source": "https://pytorch.org/docs/stable/generated/torch.amin.html", "category": "pytorch docs"}
{"text": "dimensions to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.6451, -0.4866, 0.2987, -1.3312],\n [-0.5744, 1.2980, 1.8397, -0.2713],\n [ 0.9128, 0.9214, -1.7268, -0.2995],\n [ 0.9023, 0.4853, 0.9075, -1.6165]])\n >>> torch.amin(a, 1)\n tensor([-1.3312, -0.5744, -1.7268, -1.6165])", "source": "https://pytorch.org/docs/stable/generated/torch.amin.html", "category": "pytorch docs"}
{"text": "torch.ones_liketorch.ones_like(input, , dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\n Returns a tensor filled with the scalar value 1, with the same\n size as \"input\". \"torch.ones_like(input)\" is equivalent to\n \"torch.ones(input.size(), dtype=input.dtype, layout=input.layout,\n device=input.device)\".\n Warning:\n As of 0.4, this function does not support an \"out\" keyword. As an\n alternative, the old \"torch.ones_like(input, out=output)\" is\n equivalent to \"torch.ones(input.size(), out=output)\".\n Parameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * layout* (\"torch.layout\", optional) -- the desired layout of", "source": "https://pytorch.org/docs/stable/generated/torch.ones_like.html", "category": "pytorch docs"}
{"text": "returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n Example:\n >>> input = torch.empty(2, 3)\n >>> torch.ones_like(input)\n tensor([[ 1., 1., 1.],\n [ 1., 1., 1.]])", "source": "https://pytorch.org/docs/stable/generated/torch.ones_like.html", "category": "pytorch docs"}
{"text": "torch.subtorch.sub(input, other, , alpha=1, out=None) -> Tensor\n Subtracts \"other\", scaled by \"alpha\", from \"input\".\n \\text{{out}}_i = \\text{{input}}_i - \\text{{alpha}} \\times\n \\text{{other}}_i\n Supports broadcasting to a common shape, type promotion, and\n integer, float, and complex inputs.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor or Number) -- the tensor or number to\n subtract from \"input\".\n Keyword Arguments:\n * alpha (Number) -- the multiplier for \"other\".\n * out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.tensor((1, 2))\n >>> b = torch.tensor((0, 1))\n >>> torch.sub(a, b, alpha=2)\n tensor([1, 0])", "source": "https://pytorch.org/docs/stable/generated/torch.sub.html", "category": "pytorch docs"}
{"text": "QConfigMappingclass torch.ao.quantization.qconfig_mapping.QConfigMapping\n Mapping from model ops to \"torch.ao.quantization.QConfig\" s.\n The user can specify QConfigs using the following methods (in\n increasing match priority):\n \"set_global\" : sets the global (default) QConfig\n \"set_object_type\" : sets the QConfig for a given module type,\n function, or method name\n \"set_module_name_regex\" : sets the QConfig for modules matching\n the given regex string\n \"set_module_name\" : sets the QConfig for modules matching the\n given module name\n \"set_module_name_object_type_order\" : sets the QConfig for\n modules matching a combination of the given module name, object\n type, and the index at which the module appears\n Example usage:\n qconfig_mapping = QConfigMapping()\n .set_global(global_qconfig)\n .set_object_type(torch.nn.Linear, qconfig1)\n .set_object_type(torch.nn.ReLU, qconfig1)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"}
{"text": ".set_module_name_regex(\"foo.bar.conv[0-9]+\", qconfig1)\n .set_module_name_regex(\"foo.\", qconfig2)\n .set_module_name(\"module1\", qconfig1)\n .set_module_name(\"module2\", qconfig2)\n .set_module_name_object_type_order(\"foo.bar\", torch.nn.functional.linear, 0, qconfig3)\n classmethod from_dict(qconfig_dict)\n Create a \"QConfigMapping\" from a dictionary with the following\n keys (all optional):\n \"\" (for global QConfig)\n \"object_type\"\n \"module_name_regex\"\n \"module_name\"\n \"module_name_object_type_order\"\n The values of this dictionary are expected to be lists of\n tuples.\n Return type:\n QConfigMapping\n set_global(global_qconfig)\n Set the global (default) QConfig.\n Return type:\n QConfigMapping*\n set_module_name(module_name, qconfig)\n Set the QConfig for modules matching the given module name. If\n the QConfig for an existing module name was already set, the new", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"}
{"text": "QConfig will override the old one.\n Return type:\n QConfigMapping\n set_module_name_object_type_order(module_name, object_type, index, qconfig)\n Set the QConfig for modules matching a combination of the given\n module name, object type, and the index at which the module\n appears.\n If the QConfig for an existing (module name, object type, index)\n was already set, the new QConfig will override the old one.\n Return type:\n QConfigMapping\n set_module_name_regex(module_name_regex, qconfig)\n Set the QConfig for modules matching the given regex string.\n Regexes will be matched in the order in which they are\n registered through this method. Thus, the caller should register\n more specific patterns first, e.g.:\n qconfig_mapping = QConfigMapping()\n .set_module_name_regex(\"foo.bar.conv[0-9]+\", qconfig1)\n .set_module_name_regex(\"foo.bar.\", qconfig2)\n .set_module_name_regex(\"foo.*\", qconfig3)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"}
{"text": "In this example, \"foo.bar.conv0\" would match qconfig1,\n \"foo.bar.linear\" would match qconfig2, and \"foo.baz.relu\" would\n match qconfig3.\n If the QConfig for an existing module name regex was already\n set, the new QConfig will override the old one while preserving\n the order in which the regexes were originally registered.\n Return type:\n QConfigMapping\n set_object_type(object_type, qconfig)\n Set the QConfig for a given module type, function, or method\n name. If the QConfig for an existing object type was already\n set, the new QConfig will override the old one.\n Return type:\n QConfigMapping\n to_dict()\n Convert this \"QConfigMapping\" to a dictionary with the following\n keys:\n \"\" (for global QConfig)\n \"object_type\"\n \"module_name_regex\"\n \"module_name\"\n \"module_name_object_type_order\"\n The values of this dictionary are lists of tuples.\n Return type:\n Dict[str, Any]", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"}
{"text": "torch.vartorch.var(input, dim=None, , correction=1, keepdim=False, out=None) -> Tensor\n Calculates the variance over the dimensions specified by \"dim\".\n \"dim\" can be a single dimension, list of dimensions, or \"None\" to\n reduce over all dimensions.\n The variance (\\sigma^2) is calculated as\n \\sigma^2 = \\frac{1}{N - \\delta N}\\sum_{i=0}^{N-1}(x_i-\\bar{x})^2\n where x is the sample set of elements, \\bar{x} is the sample mean,\n N is the number of samples and \\delta N is the \"correction\".\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional*) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.", "source": "https://pytorch.org/docs/stable/generated/torch.var.html", "category": "pytorch docs"}
{"text": "are reduced.\n Keyword Arguments:\n * correction (int) --\n difference between the sample size and sample degrees of\n freedom. Defaults to Bessel's correction, \"correction=1\".\n Changed in version 2.0: Previously this argument was called\n \"unbiased\" and was a boolean with \"True\" corresponding to\n \"correction=1\" and \"False\" being \"correction=0\".\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n * out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\na = torch.tensor(\n ... [[ 0.2035, 1.2959, 1.8101, -0.4644],\n ... [ 1.5027, -0.3270, 0.5905, 0.6538],\n ... [-1.5745, 1.3330, -0.5596, -0.6548],\n ... [ 0.1264, -0.5080, 1.6420, 0.1992]])\ntorch.var(a, dim=1, keepdim=True)\n tensor([[1.0631],\n [0.5590],\n [1.4893],\n [0.8258]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.var.html", "category": "pytorch docs"}
{"text": "torch.multinomialtorch.multinomial(input, num_samples, replacement=False, , generator=None, out=None) -> LongTensor\n Returns a tensor where each row contains \"num_samples\" indices\n sampled from the multinomial probability distribution located in\n the corresponding row of tensor \"input\".\n Note:\n The rows of \"input\" do not need to sum to one (in which case we\n use the values as weights), but must be non-negative, finite and\n have a non-zero sum.\n Indices are ordered from left to right according to when each was\n sampled (first samples are placed in first column).\n If \"input\" is a vector, \"out\" is a vector of size \"num_samples\".\n If \"input\" is a matrix with m* rows, \"out\" is an matrix of shape\n (m \\times \\text{num_samples}).\n If replacement is \"True\", samples are drawn with replacement.\n If not, they are drawn without replacement, which means that when a\n sample index is drawn for a row, it cannot be drawn again for that\n row.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.multinomial.html", "category": "pytorch docs"}
{"text": "row.\n Note:\n When drawn without replacement, \"num_samples\" must be lower than\n number of non-zero elements in \"input\" (or the min number of non-\n zero elements in each row of \"input\" if it is a matrix).\n Parameters:\n * input (Tensor) -- the input tensor containing\n probabilities\n * num_samples (int) -- number of samples to draw\n * replacement (bool, optional) -- whether to draw with\n replacement or not\n Keyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> weights = torch.tensor([0, 10, 3, 0], dtype=torch.float) # create a tensor of weights\n >>> torch.multinomial(weights, 2)\n tensor([1, 2])\n >>> torch.multinomial(weights, 4) # ERROR!\n RuntimeError: invalid argument 2: invalid multinomial distribution (with replacement=False,", "source": "https://pytorch.org/docs/stable/generated/torch.multinomial.html", "category": "pytorch docs"}
{"text": "not enough non-negative category to sample) at ../aten/src/TH/generic/THTensorRandom.cpp:320\n >>> torch.multinomial(weights, 4, replacement=True)\n tensor([ 2, 1, 1, 1])", "source": "https://pytorch.org/docs/stable/generated/torch.multinomial.html", "category": "pytorch docs"}
{"text": "torch.histogramddtorch.histogramdd(input, bins, *, range=None, weight=None, density=False, out=None) -> (Tensor, Tensor[])\n Computes a multi-dimensional histogram of the values in a tensor.\n Interprets the elements of an input tensor whose innermost\n dimension has size N as a collection of N-dimensional points. Maps\n each of the points into a set of N-dimensional bins and returns the\n number of points (or total weight) in each bin.\n \"input\" must be a tensor with at least 2 dimensions. If input has\n shape (M, N), each of its M rows defines a point in N-dimensional\n space. If input has three or more dimensions, all but the last\n dimension are flattened.\n Each dimension is independently associated with its own strictly\n increasing sequence of bin edges. Bin edges may be specified\n explicitly by passing a sequence of 1D tensors. Alternatively, bin\n edges may be constructed automatically by passing a sequence of", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"}
{"text": "integers specifying the number of equal-width bins in each\n dimension.\n For each N-dimensional point in input:\n * Each of its coordinates is binned independently among the bin\n edges\n corresponding to its dimension\n * Binning results are combined to identify the N-dimensional bin\n (if any)\n into which the point falls\n * If the point falls into a bin, the bin's count (or total\n weight) is incremented\n * Points which do not fall into any bin do not contribute to the\n output\n \"bins\" can be a sequence of N 1D tensors, a sequence of N ints, or\n a single int.\n If \"bins\" is a sequence of N 1D tensors, it explicitly specifies\n the N sequences of bin edges. Each 1D tensor should contain a\n strictly increasing sequence with at least one element. A sequence\n of K bin edges defines K-1 bins, explicitly specifying the left and\n right edges of all bins. Every bin is exclusive of its left edge.", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"}
{"text": "Only the rightmost bin is inclusive of its right edge.\n If \"bins\" is a sequence of N ints, it specifies the number of\n equal-width bins in each dimension. By default, the leftmost and\n rightmost bin edges in each dimension are determined by the minimum\n and maximum elements of the input tensor in the corresponding\n dimension. The \"range\" argument can be provided to manually specify\n the leftmost and rightmost bin edges in each dimension.\n If \"bins\" is an int, it specifies the number of equal-width bins\n for all dimensions.\n Note:\n See also \"torch.histogram()\", which specifically computes 1D\n histograms. While \"torch.histogramdd()\" infers the dimensionality\n of its bins and binned values from the shape of \"input\",\n \"torch.histogram()\" accepts and flattens \"input\" of any shape.\n Parameters:\n * input (Tensor) -- the input tensor.\n * bins -- Tensor[], int[], or int. If Tensor[], defines the", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"}
{"text": "sequences of bin edges. If int[], defines the number of equal-\n width bins in each dimension. If int, defines the number of\n equal-width bins for all dimensions.\n Keyword Arguments:\n * range (sequence of python:float) -- Defines the leftmost\n and rightmost bin edges in each dimension.\n * weight (Tensor) -- By default, each value in the input\n has weight 1. If a weight tensor is passed, each N-dimensional\n coordinate in input contributes its associated weight towards\n its bin's result. The weight tensor should have the same shape\n as the \"input\" tensor excluding its innermost dimension N.\n * density (bool) -- If False (default), the result will\n contain the count (or total weight) in each bin. If True, each\n count (weight) is divided by the total count (total weight),\n then divided by the volume of its associated bin.\n Returns:\n N-dimensional Tensor containing the values of the histogram.", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"}
{"text": "bin_edges(Tensor[]): sequence of N 1D Tensors containing the bin\n edges.\n Return type:\n hist (Tensor)\n Example::\n >>> torch.histogramdd(torch.tensor([[0., 1.], [1., 0.], [2., 0.], [2., 2.]]), bins=[3, 3],\n ... weight=torch.tensor([1., 2., 4., 8.]))\n torch.return_types.histogramdd(\n hist=tensor([[0., 1., 0.],\n [2., 0., 0.],\n [4., 0., 8.]]),\n bin_edges=(tensor([0.0000, 0.6667, 1.3333, 2.0000]),\n tensor([0.0000, 0.6667, 1.3333, 2.0000])))\n >>> torch.histogramdd(torch.tensor([[0., 0.], [1., 1.], [2., 2.]]), bins=[2, 2],\n ... range=[0., 1., 0., 1.], density=True)\n torch.return_types.histogramdd(\n hist=tensor([[2., 0.],\n [0., 2.]]),\n bin_edges=(tensor([0.0000, 0.5000, 1.0000]),\n tensor([0.0000, 0.5000, 1.0000])))", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logaddexpTensor.logaddexp(other) -> Tensor\n See \"torch.logaddexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logaddexp.html", "category": "pytorch docs"}
{"text": "torch.row_stacktorch.row_stack(tensors, *, out=None) -> Tensor\n Alias of \"torch.vstack()\".", "source": "https://pytorch.org/docs/stable/generated/torch.row_stack.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_conjTensor.is_conj() -> bool\n Returns True if the conjugate bit of \"self\" is set to true.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_conj.html", "category": "pytorch docs"}
{"text": "torch.emptytorch.empty(size, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False, memory_format=torch.contiguous_format) -> Tensor\n Returns a tensor filled with uninitialized data. The shape of the\n tensor is defined by the variable argument \"size\".\n Parameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\n Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of", "source": "https://pytorch.org/docs/stable/generated/torch.empty.html", "category": "pytorch docs"}
{"text": "returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.contiguous_format\".\n Example:\n >>> torch.empty((2,3), dtype=torch.int64)\n tensor([[ 9.4064e+13, 2.8000e+01, 9.3493e+13],\n [ 7.5751e+18, 7.1428e+18, 7.5955e+18]])", "source": "https://pytorch.org/docs/stable/generated/torch.empty.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.elutorch.nn.functional.elu(input, alpha=1.0, inplace=False)\n Applies the Exponential Linear Unit (ELU) function element-wise.\n See \"ELU\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.elu.html", "category": "pytorch docs"}
{"text": "ConvTranspose2dclass torch.ao.nn.quantized.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n Applies a 2D transposed convolution operator over an input image\n composed of several input planes. For details on input arguments,\n parameters, and implementation see \"ConvTranspose2d\".\n For special notes, please, see \"Conv2d\"\n Variables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * scale (Tensor) -- scalar for the output scale\n * zero_point (Tensor) -- scalar for the output zero point\n See \"ConvTranspose2d\" for other attributes.\n Examples:\n >>> # QNNPACK or FBGEMM as backend\n >>> torch.backends.quantized.engine = 'qnnpack'\n >>> # With square kernels and equal stride\n >>> import torch.nn.quantized as nnq\n >>> m = nnq.ConvTranspose2d(16, 33, 3, stride=2)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "\n\n\nnon-square kernels and unequal stride and with padding\n >>> m = nnq.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> input = torch.randn(20, 16, 50, 100)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n >>> # exact output size can be also specified as an argument\n >>> input = torch.randn(1, 16, 12, 12)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> downsample = nnq.Conv2d(16, 16, 3, stride=2, padding=1)\n >>> upsample = nnq.ConvTranspose2d(16, 16, 3, stride=2, padding=1)\n >>> h = downsample(q_input)\n >>> h.size()\n torch.Size([1, 16, 6, 6])\n >>> output = upsample(h, output_size=input.size())\n >>> output.size()\n torch.Size([1, 16, 12, 12])\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose2d.html", "category": "pytorch docs"}
{"text": "torch.vdottorch.vdot(input, other, , out=None) -> Tensor\n Computes the dot product of two 1D vectors along a dimension.\n In symbols, this function computes\n \\sum_{i=1}^n \\overline{x_i}y_i.\n where \\overline{x_i} denotes the conjugate for complex vectors, and\n it is the identity for real vectors.\n Note:\n Unlike NumPy's vdot, torch.vdot intentionally only supports\n computing the dot product of two 1D tensors with the same number\n of elements.\n See also:\n \"torch.linalg.vecdot()\" computes the dot product of two batches\n of vectors along a dimension.\n Parameters:\n * input (Tensor) -- first tensor in the dot product, must\n be 1D. Its conjugate is used if it's complex.\n * other (Tensor*) -- second tensor in the dot product, must\n be 1D.\n Keyword args:\n Note:\n out (Tensor, optional): the output tensor.\n Example:\n >>> torch.vdot(torch.tensor([2, 3]), torch.tensor([2, 1]))\n tensor(7)", "source": "https://pytorch.org/docs/stable/generated/torch.vdot.html", "category": "pytorch docs"}
{"text": "tensor(7)\n >>> a = torch.tensor((1 +2j, 3 - 1j))\n >>> b = torch.tensor((2 +1j, 4 - 0j))\n >>> torch.vdot(a, b)\n tensor([16.+1.j])\n >>> torch.vdot(b, a)\n tensor([16.-1.j])", "source": "https://pytorch.org/docs/stable/generated/torch.vdot.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.vector_to_parameterstorch.nn.utils.vector_to_parameters(vec, parameters)\n Convert one vector to the parameters\n Parameters:\n * vec (Tensor) -- a single vector represents the\n parameters of a model.\n * parameters (Iterable[Tensor]) -- an iterator of\n Tensors that are the parameters of a model.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.vector_to_parameters.html", "category": "pytorch docs"}
{"text": "torch.svdtorch.svd(input, some=True, compute_uv=True, , out=None)\n Computes the singular value decomposition of either a matrix or\n batch of matrices \"input\". The singular value decomposition is\n represented as a namedtuple (U, S, V), such that \"input\" = U\n \\text{diag}(S) V^{\\text{H}}. where V^{\\text{H}} is the transpose of\n V for real inputs, and the conjugate transpose of V for complex\n inputs. If \"input\" is a batch of matrices, then U, S, and V\n are also batched with the same batch dimensions as \"input\".\n If \"some\" is True (default), the method returns the reduced\n singular value decomposition. In this case, if the last two\n dimensions of \"input\" are m and n, then the returned U and\n V matrices will contain only min(n, m) orthonormal columns.\n If \"compute_uv\" is False, the returned U and V will be zero-\n filled matrices of shape (m, m) and (n, n)* respectively, and\n the same device as \"input\". The argument \"some\" has no effect when", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"}
{"text": "\"compute_uv\" is False.\n Supports \"input\" of float, double, cfloat and cdouble data types.\n The dtypes of U and V are the same as \"input\"'s. S will\n always be real-valued, even if \"input\" is complex.\n Warning:\n \"torch.svd()\" is deprecated in favor of \"torch.linalg.svd()\" and\n will be removed in a future PyTorch release.\"U, S, V =\n torch.svd(A, some=some, compute_uv=True)\" (default) should be\n replaced with\n U, S, Vh = torch.linalg.svd(A, full_matrices=not some)\n V = Vh.mH\n \"_, S, _ = torch.svd(A, some=some, compute_uv=False)\" should be\n replaced with\n S = torch.linalg.svdvals(A)\n Note:\n Differences with \"torch.linalg.svd()\":\n * \"some\" is the opposite of \"torch.linalg.svd()\"'s\n \"full_matrices\". Note that default value for both is True, so\n the default behavior is effectively the opposite.\n * \"torch.svd()\" returns V, whereas \"torch.linalg.svd()\" returns\n Vh, that is, V^{\\text{H}}.", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"}
{"text": "Vh, that is, V^{\\text{H}}.\n * If \"compute_uv\" is False, \"torch.svd()\" returns zero-filled\n tensors for U and Vh, whereas \"torch.linalg.svd()\" returns\n empty tensors.\n Note:\n The singular values are returned in descending order. If \"input\"\n is a batch of matrices, then the singular values of each matrix\n in the batch are returned in descending order.\n Note:\n The S tensor can only be used to compute gradients if\n \"compute_uv\" is True.\n Note:\n When \"some\" is False, the gradients on U[..., :, min(m, n):]\n and V[..., :, min(m, n):] will be ignored in the backward pass,\n as those vectors can be arbitrary bases of the corresponding\n subspaces.\n Note:\n The implementation of \"torch.linalg.svd()\" on CPU uses LAPACK's\n routine ?gesdd (a divide-and-conquer algorithm) instead of\n ?gesvd for speed. Analogously, on GPU, it uses cuSOLVER's\n routines gesvdj and gesvdjBatched on CUDA 10.1.243 and later,", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"}
{"text": "and MAGMA's routine gesdd on earlier versions of CUDA.\n Note:\n The returned U will not be contiguous. The matrix (or batch of\n matrices) will be represented as a column-major matrix (i.e.\n Fortran-contiguous).\n Warning:\n The gradients with respect to U and V will only be finite\n when the input does not have zero nor repeated singular values.\n Warning:\n If the distance between any two singular values is close to zero,\n the gradients with respect to U and V will be numerically\n unstable, as they depends on \\frac{1}{\\min_{i \\neq j} \\sigma_i^2\n - \\sigma_j^2}. The same happens when the matrix has small\n singular values, as these gradients also depend on S\u00e2\u0081\u00bb\u00c2\u00b9.\n Warning:\n For complex-valued \"input\" the singular value decomposition is\n not unique, as U and V may be multiplied by an arbitrary\n phase factor e^{i \\phi} on every column. The same happens when\n \"input\" has repeated singular values, where one may multiply the", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"}
{"text": "columns of the spanning subspace in U and V by a rotation\n matrix and the resulting vectors will span the same subspace.\n Different platforms, like NumPy, or inputs on different device\n types, may produce different U and V tensors.\n Parameters:\n * input (Tensor) -- the input tensor of size (, m, n)\n where *** is zero or more batch dimensions consisting of (m,\n n) matrices.\n * some (bool, optional) -- controls whether to compute\n the reduced or full decomposition, and consequently, the shape\n of returned U and V. Default: True.\n * compute_uv (bool, optional) -- controls whether to\n compute U and V. Default: True.\n Keyword Arguments:\n out (tuple, optional*) -- the output tuple of tensors\n Example:\n >>> a = torch.randn(5, 3)\n >>> a\n tensor([[ 0.2364, -0.7752, 0.6372],\n [ 1.7201, 0.7394, -0.0504],\n [-0.3371, -1.0584, 0.5296],", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"}
{"text": "[-0.3371, -1.0584, 0.5296],\n [ 0.3550, -0.4022, 1.5569],\n [ 0.2445, -0.0158, 1.1414]])\n >>> u, s, v = torch.svd(a)\n >>> u\n tensor([[ 0.4027, 0.0287, 0.5434],\n [-0.1946, 0.8833, 0.3679],\n [ 0.4296, -0.2890, 0.5261],\n [ 0.6604, 0.2717, -0.2618],\n [ 0.4234, 0.2481, -0.4733]])\n >>> s\n tensor([2.3289, 2.0315, 0.7806])\n >>> v\n tensor([[-0.0199, 0.8766, 0.4809],\n [-0.5080, 0.4054, -0.7600],\n [ 0.8611, 0.2594, -0.4373]])\n >>> torch.dist(a, torch.mm(torch.mm(u, torch.diag(s)), v.t()))\n tensor(8.6531e-07)\n >>> a_big = torch.randn(7, 5, 3)\n >>> u, s, v = torch.svd(a_big)\n >>> torch.dist(a_big, torch.matmul(torch.matmul(u, torch.diag_embed(s)), v.mT))\n tensor(2.6503e-06)", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"}
{"text": "RNNBaseclass torch.nn.RNNBase(mode, input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, proj_size=0, device=None, dtype=None)\n flatten_parameters()\n Resets parameter data pointer so that they can use faster code\n paths.\n Right now, this works only if the module is on the GPU and cuDNN\n is enabled. Otherwise, it's a no-op.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNNBase.html", "category": "pytorch docs"}
{"text": "torch.tril_indicestorch.tril_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor\n Returns the indices of the lower triangular part of a \"row\"-by-\n \"col\" matrix in a 2-by-N Tensor, where the first row contains row\n coordinates of all indices and the second row contains column\n coordinates. Indices are ordered based on rows and then columns.\n The lower triangular part of the matrix is defined as the elements\n on and below the diagonal.\n The argument \"offset\" controls which diagonal to consider. If\n \"offset\" = 0, all elements on and below the main diagonal are\n retained. A positive value includes just as many diagonals above\n the main diagonal, and similarly a negative value excludes just as\n many diagonals below the main diagonal. The main diagonal are the\n set of indices \\lbrace (i, i) \\rbrace for i \\in [0, \\min{d_{1},\n d_{2}} - 1] where d_{1}, d_{2} are the dimensions of the matrix.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.tril_indices.html", "category": "pytorch docs"}
{"text": "Note:\n When running on CUDA, \"row * col\" must be less than 2^{59} to\n prevent overflow during calculation.\n Parameters:\n * row (\"int\") -- number of rows in the 2-D matrix.\n * col (\"int\") -- number of columns in the 2-D matrix.\n * offset (\"int\") -- diagonal offset from the main diagonal.\n Default: if not provided, 0.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", \"torch.long\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * layout (\"torch.layout\", optional) -- currently only\n support \"torch.strided\".\n Example:\n >>> a = torch.tril_indices(3, 3)\n >>> a", "source": "https://pytorch.org/docs/stable/generated/torch.tril_indices.html", "category": "pytorch docs"}
{"text": ">>> a = torch.tril_indices(3, 3)\n >>> a\n tensor([[0, 1, 1, 2, 2, 2],\n [0, 0, 1, 0, 1, 2]])\n >>> a = torch.tril_indices(4, 3, -1)\n >>> a\n tensor([[1, 2, 2, 3, 3, 3],\n [0, 0, 1, 0, 1, 2]])\n >>> a = torch.tril_indices(4, 3, 1)\n >>> a\n tensor([[0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3],\n [0, 1, 0, 1, 2, 0, 1, 2, 0, 1, 2]])", "source": "https://pytorch.org/docs/stable/generated/torch.tril_indices.html", "category": "pytorch docs"}
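A brief sketch of how these index pairs are typically consumed (the variable names here are illustrative, not from the page above): the 2-by-N result can be fed directly into advanced indexing to read or write the lower triangle of a matrix.

```python
import torch

# Build a 3x3 matrix and extract its lower-triangular entries
# using the row/column index pairs returned by tril_indices.
m = torch.arange(9.0).reshape(3, 3)
idx = torch.tril_indices(3, 3)
lower = m[idx[0], idx[1]]  # elements on and below the main diagonal

# The same index pairs can be used to assign in place:
z = torch.zeros(3, 3)
z[idx[0], idx[1]] = lower  # z now equals torch.tril(m)
```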
{"text": "torch.Tensor.copy_Tensor.copy_(src, non_blocking=False) -> Tensor\n Copies the elements from \"src\" into \"self\" tensor and returns\n \"self\".\n The \"src\" tensor must be broadcastable with the \"self\" tensor. It\n may be of a different data type or reside on a different device.\n Parameters:\n * src (Tensor) -- the source tensor to copy from\n * non_blocking (bool) -- if \"True\" and this copy is\n between CPU and GPU, the copy may occur asynchronously with\n respect to the host. For other cases, this argument has no\n effect.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.copy_.html", "category": "pytorch docs"}
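A minimal sketch of the behavior described above, showing that \"copy_\" both broadcasts \"src\" into \"self\" and converts its dtype on the fly (tensor values are illustrative):

```python
import torch

# Destination is float64; source is an int32 vector broadcastable
# from shape (3,) to (2, 3). copy_ handles both the broadcast and
# the dtype conversion, returning self.
dst = torch.zeros(2, 3, dtype=torch.float64)
src = torch.tensor([1, 2, 3], dtype=torch.int32)
dst.copy_(src)
```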
{"text": "torch.Tensor.chalfTensor.chalf(memory_format=torch.preserve_format) -> Tensor\n \"self.chalf()\" is equivalent to \"self.to(torch.complex32)\". See\n \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.chalf.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.fractional_max_pool3dtorch.nn.functional.fractional_max_pool3d(*args, **kwargs)\n Applies 3D fractional max pooling over an input signal composed of\n several input planes.\n Fractional MaxPooling is described in detail in the paper\n Fractional MaxPooling by Ben Graham\n The max-pooling operation is applied in kT \\times kH \\times kW\n regions by a stochastic step size determined by the target output\n size. The number of output features is equal to the number of input\n planes.\n Parameters:\n * kernel_size -- the size of the window to take a max over.\n Can be a single number k (for a cubic kernel of k \\times k\n \\times k) or a tuple (kT, kH, kW)\n * output_size -- the target output size of the form oT\n \\times oH \\times oW. Can be a tuple (oT, oH, oW) or a single\n number oH for a cubic output oH \\times oH \\times oH\n * output_ratio -- If one wants to have an output size as a", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool3d.html", "category": "pytorch docs"}
{"text": "ratio of the input size, this option can be given. This has to\n be a number or tuple in the range (0, 1)\n * return_indices -- if \"True\", will return the indices along\n with the outputs. Useful to pass to \"max_unpool3d()\".\n Shape:\n * Input: (N, C, T_{in}, H_{in}, W_{in}) or (C, T_{in}, H_{in},\n W_{in}).\n * Output: (N, C, T_{out}, H_{out}, W_{out}) or (C, T_{out},\n H_{out}, W_{out}), where (T_{out}, H_{out},\n W_{out})=\\text{output_size} or (T_{out}, H_{out},\n W_{out})=\\text{output_ratio} \\times (T_{in}, H_{in}, W_{in})\n Examples::\n >>> input = torch.randn(20, 16, 50, 32, 16)\n >>> # pool of cubic window of size=3, and target output size 13x12x11\n >>> F.fractional_max_pool3d(input, 3, output_size=(13, 12, 11))\n >>> # pool of cubic window and target output size being half of input size\n >>> F.fractional_max_pool3d(input, 3, output_ratio=(0.5, 0.5, 0.5))", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool3d.html", "category": "pytorch docs"}
{"text": "ConvBnReLU2dclass torch.ao.nn.intrinsic.qat.ConvBnReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\n A ConvBnReLU2d module is a module fused from Conv2d, BatchNorm2d\n and ReLU, attached with FakeQuantize modules for weight, used in\n quantization aware training.\n We combined the interface of \"torch.nn.Conv2d\" and\n \"torch.nn.BatchNorm2d\" and \"torch.nn.ReLU\".\n Similar to torch.nn.Conv2d, with FakeQuantize modules initialized\n to default.\n Variables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBnReLU2d.html", "category": "pytorch docs"}
{"text": "ParametrizationListclass torch.nn.utils.parametrize.ParametrizationList(modules, original, unsafe=False)\n A sequential container that holds and manages the \"original\" or\n \"original0\", \"original1\", ... parameters or buffers of a\n parametrized \"torch.nn.Module\".\n It is the type of \"module.parametrizations[tensor_name]\" when\n \"module[tensor_name]\" has been parametrized with\n \"register_parametrization()\".\n If the first registered parametrization has a \"right_inverse\" that\n returns one tensor or does not have a \"right_inverse\" (in which\n case we assume that \"right_inverse\" is the identity), it will hold\n the tensor under the name \"original\". If it has a \"right_inverse\"\n that returns more than one tensor, these will be registered as\n \"original0\", \"original1\", ...\n Warning:\n This class is used internally by \"register_parametrization()\". It\n is documented here for completeness. It shall not be instantiated\n by the user.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.ParametrizationList.html", "category": "pytorch docs"}
{"text": "by the user.\n Parameters:\n * modules (sequence) -- sequence of modules representing\n the parametrizations\n * original (Parameter or Tensor) -- parameter or\n buffer that is parametrized\n * unsafe (bool) -- a boolean flag that denotes whether the\n parametrization may change the dtype and shape of the tensor.\n Default: False Warning: the parametrization is not checked\n for consistency upon registration. Enable this flag at your\n own risk.\n right_inverse(value)\n Calls the methods \"right_inverse\" (see\n \"register_parametrization()\") of the parametrizations in the\n inverse order they were registered in. Then, it stores the\n result in \"self.original\" if \"right_inverse\" outputs one tensor\n or in \"self.original0\", \"self.original1\", ... if it outputs\n several.\n Parameters:\n value (Tensor) -- Value to which initialize the module", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.ParametrizationList.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.cosine_embedding_losstorch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') -> Tensor\n See \"CosineEmbeddingLoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cosine_embedding_loss.html", "category": "pytorch docs"}
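Since this entry only points to \"CosineEmbeddingLoss\", a small usage sketch may help (shapes and values are illustrative): the target is +1 for pairs that should be similar and -1 for pairs that should be dissimilar.

```python
import torch
import torch.nn.functional as F

# Four pairs of 8-dim embeddings; target = 1 pulls a pair together,
# target = -1 pushes it apart (up to the margin).
x1 = torch.randn(4, 8)
x2 = torch.randn(4, 8)
target = torch.tensor([1, -1, 1, -1])

# With reduction='mean' (the default), a scalar loss is returned.
loss = F.cosine_embedding_loss(x1, x2, target, margin=0.5)
```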
{"text": "torch._foreach_fractorch._foreach_frac(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.frac()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_frac.html", "category": "pytorch docs"}
{"text": "torch.stfttorch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None)\n Short-time Fourier transform (STFT).\n Warning:\n From version 1.8.0, \"return_complex\" must always be given\n explicitly for real inputs and return_complex=False has been\n deprecated. Strongly prefer return_complex=True as in a future\n pytorch release, this function will only return complex\n tensors. Note that \"torch.view_as_real()\" can be used to recover a\n real tensor with an extra last dimension for real and imaginary\n components.\n The STFT computes the Fourier transform of short overlapping\n windows of the input. This gives the frequency components of the\n signal as they change over time. The interface of this function is\n modeled after (but not a drop-in replacement for) librosa stft\n function.\n Ignoring the optional batch dimension, this method computes the\n following expression:", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"}
{"text": "following expression:\n X[\\omega, m] = \\sum_{k = 0}^{\\text{win_length-1}}%\n \\text{window}[k]\\ \\text{input}[m \\times \\text{hop_length} + k]\\\n % \\exp\\left(- j \\frac{2 \\pi \\cdot \\omega\n k}{\\text{win_length}}\\right),\n where m is the index of the sliding window, and \\omega is the\n frequency 0 \\leq \\omega < \\text{n_fft} for \"onesided=False\", or 0\n \\leq \\omega < \\lfloor \\text{n_fft} / 2 \\rfloor + 1 for\n \"onesided=True\".\n * \"input\" must be either a 1-D time sequence or a 2-D batch of time\n sequences.\n * If \"hop_length\" is \"None\" (default), it is treated as equal to\n \"floor(n_fft / 4)\".\n * If \"win_length\" is \"None\" (default), it is treated as equal to\n \"n_fft\".\n * \"window\" can be a 1-D tensor of size \"win_length\", e.g., from\n \"torch.hann_window()\". If \"window\" is \"None\" (default), it is\n treated as if having 1 everywhere in the window. If\n \\text{win_length} < \\text{n_fft}, \"window\" will be padded on", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"}
{"text": "both sides to length \"n_fft\" before being applied.\n * If \"center\" is \"True\" (default), \"input\" will be padded on both\n sides so that the t-th frame is centered at time t \\times\n \\text{hop_length}. Otherwise, the t-th frame begins at time t\n \\times \\text{hop_length}.\n * \"pad_mode\" determines the padding method used on \"input\" when\n \"center\" is \"True\". See \"torch.nn.functional.pad()\" for all\n available options. Default is \"\"reflect\"\".\n * If \"onesided\" is \"True\" (default for real input), only values for\n \\omega in \\left[0, 1, 2, \\dots, \\left\\lfloor\n \\frac{\\text{n_fft}}{2} \\right\\rfloor + 1\\right] are returned\n because the real-to-complex Fourier transform satisfies the\n conjugate symmetry, i.e., X[m, \\omega] = X[m, \\text{n_fft} -\n \\omega]^*. Note if the input or window tensors are complex, then\n \"onesided\" output is not possible.\n * If \"normalized\" is \"True\" (default is \"False\"), the function", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"}
{"text": "returns the normalized STFT results, i.e., multiplied by\n (\\text{frame_length})^{-0.5}.\n * If \"return_complex\" is \"True\" (default if input is complex), the\n return is a \"input.dim() + 1\" dimensional complex tensor. If\n \"False\", the output is a \"input.dim() + 2\" dimensional real\n tensor where the last dimension represents the real and imaginary\n components.\n Returns either a complex tensor of size ( \\times N \\times T) if\n \"return_complex\" is true, or a real tensor of size ( \\times N\n \\times T \\times 2). Where * is the optional batch size of \"input\",\n N is the number of frequencies where STFT is applied and T is the\n total number of frames used.\n Warning:\n This function changed signature at version 0.4.1. Calling with\n the previous signature may cause error or return incorrect\n result.\n Parameters:\n * input (Tensor) -- the input tensor\n * n_fft (int) -- size of Fourier transform", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"}
{"text": "* hop_length (int, optional) -- the distance between\n neighboring sliding window frames. Default: \"None\" (treated as\n equal to \"floor(n_fft / 4)\")\n * win_length (int, optional) -- the size of window\n frame and STFT filter. Default: \"None\" (treated as equal to\n \"n_fft\")\n * window (Tensor, optional) -- the optional window\n function. Default: \"None\" (treated as window of all 1 s)\n * center (bool, optional) -- whether to pad \"input\" on\n both sides so that the t-th frame is centered at time t \\times\n \\text{hop_length}. Default: \"True\"\n * pad_mode (str, optional) -- controls the padding\n method used when \"center\" is \"True\". Default: \"\"reflect\"\"\n * normalized (bool, optional) -- controls whether to\n return the normalized STFT results. Default: \"False\"\n * onesided (bool, optional) -- controls whether to", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"}
{"text": "return half of results to avoid redundancy for real inputs.\n Default: \"True\" for real \"input\" and \"window\", \"False\"\n otherwise.\n * return_complex (bool, optional) --\n whether to return a complex tensor, or a real tensor with an\n extra last dimension for the real and imaginary components.\n Changed in version 2.0: \"return_complex\" is now a required\n argument for real inputs, as the default is being transitioned\n to \"True\".\n Deprecated since version 2.0: \"return_complex=False\" is\n deprecated, instead use \"return_complex=True\" Note that\n calling \"torch.view_as_real()\" on the output will recover the\n deprecated output format.\n Returns:\n A tensor containing the STFT result with shape described above\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"}
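The parameters above can be exercised with a short analysis/resynthesis round trip. This is a sketch, not from the page itself; it assumes the companion \"torch.istft()\" is called with the same n_fft, hop_length, and window, and that the window satisfies the usual overlap condition (a Hann window with 75% overlap does).

```python
import torch

# Analyze a short signal, then invert the transform. center=True
# (the default) pads the input so frames are centered, which makes
# the round trip exact for a well-chosen window.
signal = torch.sin(torch.linspace(0, 20, 400))
window = torch.hann_window(100)
spec = torch.stft(signal, n_fft=100, hop_length=25, window=window,
                  return_complex=True)
recovered = torch.istft(spec, n_fft=100, hop_length=25, window=window,
                        length=signal.numel())
```

With onesided output (the default for real input), `spec` has `n_fft // 2 + 1 = 51` frequency rows.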
{"text": "upsample_nearestclass torch.ao.nn.quantized.functional.upsample_nearest(input, size=None, scale_factor=None)\n Upsamples the input, using nearest neighbours' pixel values.\n Warning:\n This function is deprecated in favor of\n \"torch.nn.quantized.functional.interpolate()\". This is equivalent\n with \"nn.quantized.functional.interpolate(..., mode='nearest')\".\n Note:\n The input quantization parameters propagate to the output.\n Note:\n Only 2D inputs are supported\n Parameters:\n * input (Tensor) -- quantized input\n * size (int or Tuple[int, int] or\n Tuple[int, int, int]) -- output spatial size.\n * scale_factor (int) -- multiplier for spatial size. Has\n to be an integer.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample_nearest.html", "category": "pytorch docs"}
{"text": "torch.addmmtorch.addmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) -> Tensor\n Performs a matrix multiplication of the matrices \"mat1\" and \"mat2\".\n The matrix \"input\" is added to the final result.\n If \"mat1\" is a (n \\times m) tensor, \"mat2\" is a (m \\times p)\n tensor, then \"input\" must be broadcastable with a (n \\times p)\n tensor and \"out\" will be a (n \\times p) tensor.\n \"alpha\" and \"beta\" are scaling factors on the matrix-matrix product\n between \"mat1\" and \"mat2\" and the added matrix \"input\"\n respectively.\n \\text{out} = \\beta\\ \\text{input} + \\alpha\\ (\\text{mat1}\n \\mathbin{@} \\text{mat2})\n If \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\n For inputs of type FloatTensor or DoubleTensor, arguments\n \"beta\" and \"alpha\" must be real numbers, otherwise they should be\n integers.\n This operation has support for arguments with sparse layouts. If", "source": "https://pytorch.org/docs/stable/generated/torch.addmm.html", "category": "pytorch docs"}
{"text": "\"input\" is sparse the result will have the same layout and if \"out\"\n is provided it must have the same layout as \"input\".\n Warning:\n Sparse support is a beta feature and some layout(s)/dtype/device\n combinations may not be supported, or may not have autograd\n support. If you notice missing functionality please open a\n feature request.\n This operator supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n Parameters:\n * input (Tensor) -- matrix to be added\n * mat1 (Tensor) -- the first matrix to be matrix\n multiplied\n * mat2 (Tensor) -- the second matrix to be matrix\n multiplied\n Keyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * alpha (Number, optional) -- multiplier for mat1 @\n mat2 (\\alpha)\n * out (Tensor, optional) -- the output tensor.\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.addmm.html", "category": "pytorch docs"}
{"text": "Example:\n >>> M = torch.randn(2, 3)\n >>> mat1 = torch.randn(2, 3)\n >>> mat2 = torch.randn(3, 3)\n >>> torch.addmm(M, mat1, mat2)\n tensor([[-4.8716, 1.4671, -1.3746],\n [ 0.7573, -3.9555, -2.8681]])", "source": "https://pytorch.org/docs/stable/generated/torch.addmm.html", "category": "pytorch docs"}
{"text": "torch.Tensor.subtract_Tensor.subtract_(other, *, alpha=1) -> Tensor\n In-place version of \"subtract()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.subtract_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arcsinTensor.arcsin() -> Tensor\n See \"torch.arcsin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arcsin.html", "category": "pytorch docs"}
{"text": "torch.quantized_max_pool2dtorch.quantized_max_pool2d(input, kernel_size, stride=[], padding=0, dilation=1, ceil_mode=False) -> Tensor\n Applies a 2D max pooling over an input quantized tensor composed of\n several input planes.\n Parameters:\n * input (Tensor) -- quantized tensor\n * kernel_size (\"list of int\") -- the size of the sliding\n window\n * stride (\"list of int\", optional) -- the stride of the\n sliding window\n * padding (\"list of int\", optional) -- padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2\n * dilation (\"list of int\", optional) -- The stride between\n elements within a sliding window, must be > 0. Default 1\n * ceil_mode (bool, optional) -- If True, will use ceil\n instead of floor to compute the output shape. Defaults to\n False.\n Returns:\n A quantized tensor with max_pool2d applied.\n Return type:\n Tensor\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_max_pool2d.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n Example:\n >>> qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)\n >>> torch.quantized_max_pool2d(qx, [2,2])\n tensor([[[[1.5000]],\n [[1.5000]]],\n [[[0.0000]],\n [[0.0000]]]], size=(2, 2, 1, 1), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=1.5, zero_point=3)", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_max_pool2d.html", "category": "pytorch docs"}
{"text": "ReLUclass torch.nn.ReLU(inplace=False)\n Applies the rectified linear unit function element-wise:\n \\text{ReLU}(x) = (x)^+ = \\max(0, x)\n Parameters:\n inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.ReLU()\n >>> input = torch.randn(2)\n >>> output = m(input)\n An implementation of CReLU - https://arxiv.org/abs/1603.05201\n >>> m = nn.ReLU()\n >>> input = torch.randn(2).unsqueeze(0)\n >>> output = torch.cat((m(input), m(-input)))", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html", "category": "pytorch docs"}
{"text": "hardtanhclass torch.ao.nn.quantized.functional.hardtanh(input, min_val=- 1.0, max_val=1.0, inplace=False)\n This is the quantized version of \"hardtanh()\".\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardtanh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.put_Tensor.put_(index, source, accumulate=False) -> Tensor\n Copies the elements from \"source\" into the positions specified by\n \"index\". For the purpose of indexing, the \"self\" tensor is treated\n as if it were a 1-D tensor.\n \"index\" and \"source\" need to have the same number of elements, but\n not necessarily the same shape.\n If \"accumulate\" is \"True\", the elements in \"source\" are added to\n \"self\". If accumulate is \"False\", the behavior is undefined if\n \"index\" contain duplicate elements.\n Parameters:\n * index (LongTensor) -- the indices into self\n * source (Tensor) -- the tensor containing values to copy\n from\n * accumulate (bool) -- whether to accumulate into self\n Example:\n >>> src = torch.tensor([[4, 3, 5],\n ... [6, 7, 8]])\n >>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10]))\n tensor([[ 4, 9, 5],\n [ 10, 7, 8]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.put_.html", "category": "pytorch docs"}
{"text": "torch._foreach_trunctorch._foreach_trunc(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.trunc()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_trunc.html", "category": "pytorch docs"}
{"text": "torch.Tensor.acoshTensor.acosh() -> Tensor\n See \"torch.acosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.acosh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.backwardTensor.backward(gradient=None, retain_graph=None, create_graph=False, inputs=None)\n Computes the gradient of current tensor w.r.t. graph leaves.\n The graph is differentiated using the chain rule. If the tensor is\n non-scalar (i.e. its data has more than one element) and requires\n gradient, the function additionally requires specifying \"gradient\".\n It should be a tensor of matching type and location, that contains\n the gradient of the differentiated function w.r.t. \"self\".\n This function accumulates gradients in the leaves - you might need\n to zero \".grad\" attributes or set them to \"None\" before calling it.\n See Default gradient layouts for details on the memory layout of\n accumulated gradients.\n Note:\n If you run any forward ops, create \"gradient\", and/or call\n \"backward\" in a user-specified CUDA stream context, see Stream\n semantics of backward passes.\n Note:\n When \"inputs\" are provided and a given input is not a leaf, the", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html", "category": "pytorch docs"}
{"text": "current implementation will call its grad_fn (though it is not\n strictly needed to get the gradients). It is an implementation\n detail on which the user should not rely. See\n https://github.com/pytorch/pytorch/pull/60521#issuecomment-867061780\n for more details.\n Parameters:\n * gradient (Tensor or None) -- Gradient w.r.t. the\n tensor. If it is a tensor, it will be automatically converted\n to a Tensor that does not require grad unless \"create_graph\"\n is True. None values can be specified for scalar Tensors or\n ones that don't require grad. If a None value would be\n acceptable then this argument is optional.\n * retain_graph (bool, optional) -- If \"False\", the\n graph used to compute the grads will be freed. Note that in\n nearly all cases setting this option to True is not needed and\n often can be worked around in a much more efficient way.\n Defaults to the value of \"create_graph\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html", "category": "pytorch docs"}
{"text": "Defaults to the value of \"create_graph\".\n * create_graph (bool, optional) -- If \"True\", graph of\n the derivative will be constructed, allowing to compute higher\n order derivative products. Defaults to \"False\".\n * inputs (sequence of Tensor) -- Inputs w.r.t. which the\n gradient will be accumulated into \".grad\". All other Tensors\n will be ignored. If not provided, the gradient is accumulated\n into all the leaf Tensors that were used to compute the\n attr::tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html", "category": "pytorch docs"}
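The \"gradient\" requirement for non-scalar tensors can be shown in a few lines (a minimal sketch; the values are illustrative). Passing `torch.ones_like(y)` makes `backward()` compute the same gradients as `y.sum().backward()`.

```python
import torch

# For a non-scalar output, backward() requires an explicit
# `gradient` tensor: the vector in the vector-Jacobian product.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * x
y.backward(gradient=torch.ones_like(y))  # accumulates d(sum(y))/dx = 2x into x.grad
```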
{"text": "torch.pinversetorch.pinverse(input, rcond=1e-15) -> Tensor\n Alias for \"torch.linalg.pinv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.pinverse.html", "category": "pytorch docs"}
{"text": "torch.Tensor.reshape_asTensor.reshape_as(other) -> Tensor\n Returns this tensor as the same shape as \"other\".\n \"self.reshape_as(other)\" is equivalent to\n \"self.reshape(other.sizes())\". This method returns a view if\n \"other.sizes()\" is compatible with the current shape. See\n \"torch.Tensor.view()\" on when it is possible to return a view.\n Please see \"reshape()\" for more information about \"reshape\".\n Parameters:\n other (\"torch.Tensor\") -- The result tensor has the same\n shape as \"other\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.reshape_as.html", "category": "pytorch docs"}
{"text": "torch.nn.modules.module.register_module_forward_pre_hooktorch.nn.modules.module.register_module_forward_pre_hook(hook)\n Registers a forward pre-hook common to all modules.\n Warning:\n This adds global state to the nn.module module and it is only\n intended for debugging/profiling purposes.\n The hook will be called every time before \"forward()\" is invoked.\n It should have the following signature:\n hook(module, input) -> None or modified input\n The input contains only the positional arguments given to the\n module. Keyword arguments won't be passed to the hooks and only to\n the \"forward\". The hook can modify the input. User can either\n return a tuple or a single modified value in the hook. We will wrap\n the value into a tuple if a single value is returned(unless that\n value is already a tuple).\n This hook has precedence over the specific module hooks registered\n with \"register_forward_pre_hook\".\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html", "category": "pytorch docs"}
{"text": "with \"register_forward_pre_hook\".\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html", "category": "pytorch docs"}
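A small sketch of the registration/removal cycle described above (the hook body is illustrative): a global pre-hook fires before every module's \"forward()\", and returning None leaves the inputs unchanged.

```python
import torch
import torch.nn as nn

calls = []

def log_module(module, inputs):
    # Record which module is about to run; returning None
    # passes the positional inputs through unmodified.
    calls.append(type(module).__name__)

handle = torch.nn.modules.module.register_module_forward_pre_hook(log_module)
out = nn.Linear(3, 2)(torch.randn(1, 3))
handle.remove()  # global hooks add global state; remove when done
```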
{"text": "torch.nn.functional.prelutorch.nn.functional.prelu(input, weight) -> Tensor\n Applies element-wise the function \\text{PReLU}(x) = \\max(0,x) +\n \\text{weight} * \\min(0,x) where weight is a learnable parameter.\n Note:\n weight is expected to be a scalar or 1-D tensor. If weight is\n 1-D, its size must match the number of input channels, determined\n by input.size(1) when input.dim() >= 2, otherwise 1. In the\n 1-D case, note that when input has dim > 2, weight can be\n expanded to the shape of input in a way that is not possible\n using normal broadcasting semantics.\n See \"PReLU\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.prelu.html", "category": "pytorch docs"}
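The per-channel weight semantics can be checked with a tiny example (values are illustrative, not from the page): with a 2-D input, channels are taken along `input.size(1)`, so a 1-D weight of matching size gives each channel its own negative slope.

```python
import torch
import torch.nn.functional as F

# Input of shape (N=2, C=2); one learnable slope per channel.
x = torch.tensor([[-1.0, 2.0],
                  [ 3.0, -4.0]])
w = torch.tensor([0.1, 0.5])

# Positive entries pass through; negative entries are scaled by
# the slope of their channel (column here).
y = F.prelu(x, w)  # [[-0.1, 2.0], [3.0, -2.0]]
```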
{"text": "default_per_channel_weight_fake_quanttorch.quantization.fake_quantize.default_per_channel_weight_fake_quant\n alias of functools.partial(<class 'torch.ao.quantization.fake_quantize.FakeQuantize'>,\n observer=<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>,\n quant_min=-128, quant_max=127,\n dtype=torch.qint8, qscheme=torch.per_channel_symmetric,\n reduce_range=False, ch_axis=0){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_per_channel_weight_fake_quant.html", "category": "pytorch docs"}
{"text": "torch.fft.irffttorch.fft.irfft(input, n=None, dim=-1, norm=None, *, out=None) -> Tensor\n Computes the inverse of \"rfft()\".\n \"input\" is interpreted as a one-sided Hermitian signal in the\n Fourier domain, as produced by \"rfft()\". By the Hermitian property,\n the output will be real-valued.\n Note:\n Some input frequencies must be real-valued to satisfy the\n Hermitian property. In these cases the imaginary component will\n be ignored. For example, any imaginary component in the zero-\n frequency term cannot be represented in a real output and so will\n always be ignored.\n Note:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"n\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n odd signals will not round-trip properly. So, it is recommended\n to always pass the signal length \"n\".\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft.html", "category": "pytorch docs"}
{"text": "Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension. With default arguments,\n the size of the transformed dimension should be (2^n + 1), since\n the argument n defaults to the even output size 2 *\n (transformed_dim_size - 1).\n Parameters:\n * input (Tensor) -- the input tensor representing a half-\n Hermitian signal\n * n (int, optional) -- Output signal length. This\n determines the length of the output signal. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the real IFFT. Defaults to even output:\n \"n=2*(input.size(dim) - 1)\".\n * dim (int, optional) -- The dimension along which to\n take the one dimensional real IFFT.\n * norm (str, optional) --\n Normalization mode. For the backward transform (\"irfft()\"),\n these correspond to:", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft.html", "category": "pytorch docs"}
{"text": "these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real IFFT\n orthonormal)\n Calling the forward transform (\"rfft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"irfft()\" the exact inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n >>> t = torch.linspace(0, 1, 5)\n >>> t\n tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])\n >>> T = torch.fft.rfft(t)\n >>> T\n tensor([ 2.5000+0.0000j, -0.6250+0.8602j, -0.6250+0.2031j])\n Without specifying the output length to \"irfft()\", the output will\n not round-trip properly because the input is odd-length:\n >>> torch.fft.irfft(T)\n tensor([0.1562, 0.3511, 0.7812, 1.2114])", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft.html", "category": "pytorch docs"}
{"text": "tensor([0.1562, 0.3511, 0.7812, 1.2114])\n So, it is recommended to always pass the signal length \"n\":\n\n\n\nroundtrip = torch.fft.irfft(T, t.numel())\ntorch.testing.assert_close(roundtrip, t, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft.html", "category": "pytorch docs"}
{"text": "torch.hamming_windowtorch.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Hamming window function.\n w[n] = \\alpha - \\beta\\ \\cos \\left( \\frac{2 \\pi n}{N - 1}\n \\right),\n where N is the full window size.\n The input \"window_length\" is a positive integer controlling the\n returned window size. \"periodic\" flag determines whether the\n returned window trims off the last duplicate value from the\n symmetric window and is ready to be used as a periodic window with\n functions like \"torch.stft()\". Therefore, if \"periodic\" is true,\n the N in above formula is in fact \\text{window_length} + 1. Also,\n we always have \"torch.hamming_window(L, periodic=True)\" equal to\n \"torch.hamming_window(L + 1, periodic=False)[:-1])\".\n Note:\n If \"window_length\" =1, the returned window contains a single\n value 1.\n Note:\n This is a generalized version of \"torch.hann_window()\".", "source": "https://pytorch.org/docs/stable/generated/torch.hamming_window.html", "category": "pytorch docs"}
{"text": "Parameters:\n * window_length (int) -- the size of returned window\n * periodic (bool, optional) -- If True, returns a\n window to be used as periodic function. If False, return a\n symmetric window.\n * alpha (float, optional) -- The coefficient \\alpha in\n the equation above\n * beta (float, optional) -- The coefficient \\beta in\n the equation above\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). Only floating point\n types are supported.\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device", "source": "https://pytorch.org/docs/stable/generated/torch.hamming_window.html", "category": "pytorch docs"}
{"text": "for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Returns:\n A 1-D tensor of size (\\text{window_length},) containing the\n window.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.hamming_window.html", "category": "pytorch docs"}
{"text": "torch.Tensor.matrix_powerTensor.matrix_power(n) -> Tensor\n Note:\n \"matrix_power()\" is deprecated, use \"torch.linalg.matrix_power()\"\n instead.\n Alias for \"torch.linalg.matrix_power()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.matrix_power.html", "category": "pytorch docs"}
{"text": "BackendConfigclass torch.ao.quantization.backend_config.BackendConfig(name='')\n Config that defines the set of patterns that can be quantized on a\n given backend, and how reference quantized models can be produced\n from these patterns.\n A pattern in this context refers to a module, a functional, an\n operator, or a directed acyclic graph of the above. Each pattern\n supported on the target backend can be individually configured\n through \"BackendPatternConfig\" in terms of:\n 1. The supported input/output activation, weight, and bias data\n types\n 2. How observers and quant/dequant ops are inserted in order to\n construct the reference pattern, and\n 3. (Optionally) Fusion, QAT, and reference module mappings.\n The format of the patterns is described in: https://github.com/pyt\n orch/pytorch/blob/master/torch/ao/quantization/backend_config/READ\n ME.md\n Example usage:\n import torch\n from torch.ao.quantization.backend_config import (\n BackendConfig,", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"}
{"text": "BackendConfig,\n BackendPatternConfig,\n DTypeConfig,\n ObservationType,\n )\n weighted_int8_dtype_config = DTypeConfig(\n input_dtype=torch.quint8,\n output_dtype=torch.quint8,\n weight_dtype=torch.qint8,\n bias_dtype=torch.float)\n def fuse_conv2d_relu(is_qat, conv, relu):\n return torch.ao.nn.intrinsic.ConvReLU2d(conv, relu)\n # For quantizing Linear\n linear_config = BackendPatternConfig(torch.nn.Linear) .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) .add_dtype_config(weighted_int8_dtype_config) .set_root_module(torch.nn.Linear) .set_qat_module(torch.ao.nn.qat.Linear) .set_reference_quantized_module(torch.ao.nn.quantized.reference.Linear)\n # For fusing Conv2d + ReLU into ConvReLU2d", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"}
{"text": "For fusing Conv2d + ReLU into ConvReLU2d\n conv_relu_config = BackendPatternConfig((torch.nn.Conv2d, torch.nn.ReLU)) .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) .add_dtype_config(weighted_int8_dtype_config) .set_fused_module(torch.ao.nn.intrinsic.ConvReLU2d) .set_fuser_method(fuse_conv2d_relu)\n # For quantizing ConvReLU2d\n fused_conv_relu_config = BackendPatternConfig(torch.ao.nn.intrinsic.ConvReLU2d) .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) .add_dtype_config(weighted_int8_dtype_config) .set_root_module(torch.nn.Conv2d) .set_qat_module(torch.ao.nn.intrinsic.qat.ConvReLU2d) .set_reference_quantized_module(torch.ao.nn.quantized.reference.Conv2d)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"}
{"text": "backend_config = BackendConfig(\"my_backend\") .set_backend_pattern_config(linear_config) .set_backend_pattern_config(conv_relu_config) .set_backend_pattern_config(fused_conv_relu_config)\n property configs: List[BackendPatternConfig]\n Return a copy of the list of configs set in this\n BackendConfig.\n classmethod from_dict(backend_config_dict)\n Create a \"BackendConfig\" from a dictionary with the following\n items:\n \"name\": the name of the target backend\n \"configs\": a list of dictionaries that each represents a\n BackendPatternConfig\n Return type:\n BackendConfig\n set_backend_pattern_config(config)\n Set the config for an pattern that can be run on the target\n backend. This overrides any existing config for the given\n pattern.\n Return type:\n BackendConfig\n set_backend_pattern_configs(configs)\n Set the configs for patterns that can be run on the target", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"}
{"text": "backend. This overrides any existing config for a given pattern\n if it was previously registered already.\n Return type:\n BackendConfig\n set_name(name)\n Set the name of the target backend.\n Return type:\n BackendConfig\n to_dict()\n Convert this \"BackendConfig\" to a dictionary with the items\n described in \"from_dict()\".\n Return type:\n Dict[str, Any]", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"}
{"text": "torch.Tensor.as_subclassTensor.as_subclass(cls) -> Tensor\n Makes a \"cls\" instance with the same data pointer as \"self\".\n Changes in the output mirror changes in \"self\", and the output\n stays attached to the autograd graph. \"cls\" must be a subclass of\n \"Tensor\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.as_subclass.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cumprod_Tensor.cumprod_(dim, dtype=None) -> Tensor\n In-place version of \"cumprod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cumprod_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.flipudTensor.flipud() -> Tensor\n See \"torch.flipud()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.flipud.html", "category": "pytorch docs"}
{"text": "torch.zerostorch.zeros(size, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Returns a tensor filled with the scalar value 0, with the shape\n defined by the variable argument \"size\".\n Parameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\n Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see", "source": "https://pytorch.org/docs/stable/generated/torch.zeros.html", "category": "pytorch docs"}
{"text": "for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Example:\n >>> torch.zeros(2, 3)\n tensor([[ 0., 0., 0.],\n [ 0., 0., 0.]])\n >>> torch.zeros(5)\n tensor([ 0., 0., 0., 0., 0.])", "source": "https://pytorch.org/docs/stable/generated/torch.zeros.html", "category": "pytorch docs"}
{"text": "torch.Tensor.swapaxesTensor.swapaxes(axis0, axis1) -> Tensor\n See \"torch.swapaxes()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.swapaxes.html", "category": "pytorch docs"}
{"text": "torch.jit.savetorch.jit.save(m, f, _extra_files=None)\n Save an offline version of this module for use in a separate\n process. The saved module serializes all of the methods,\n submodules, parameters, and attributes of this module. It can be\n loaded into the C++ API using \"torch::jit::load(filename)\" or into\n the Python API with \"torch.jit.load\".\n To be able to save a module, it must not make any calls to native\n Python functions. This means that all submodules must be\n subclasses of \"ScriptModule\" as well.\n Danger:\n All modules, no matter their device, are always loaded onto the\n CPU during loading. This is different from \"torch.load()\"'s\n semantics and may change in the future.\n Parameters:\n * m -- A \"ScriptModule\" to save.\n * f -- A file-like object (has to implement write and flush)\n or a string containing a file name.\n * _extra_files -- Map from filename to contents which will\n be stored as part of f.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.save.html", "category": "pytorch docs"}
{"text": "be stored as part of f.\n Note:\n torch.jit.save attempts to preserve the behavior of some\n operators across versions. For example, dividing two integer\n tensors in PyTorch 1.5 performed floor division, and if the\n module containing that code is saved in PyTorch 1.5 and loaded in\n PyTorch 1.6 its division behavior will be preserved. The same\n module saved in PyTorch 1.6 will fail to load in PyTorch 1.5,\n however, since the behavior of division changed in 1.6, and 1.5\n does not know how to replicate the 1.6 behavior.\n Example:\n import torch\n import io\n class MyModule(torch.nn.Module):\n def forward(self, x):\n return x + 10\n m = torch.jit.script(MyModule())\n # Save to file\n torch.jit.save(m, 'scriptmodule.pt')\n # This line is equivalent to the previous\n m.save(\"scriptmodule.pt\")\n # Save to io.BytesIO buffer\n buffer = io.BytesIO()\n torch.jit.save(m, buffer)\n # Save with extra files", "source": "https://pytorch.org/docs/stable/generated/torch.jit.save.html", "category": "pytorch docs"}
{"text": "Save with extra files\n extra_files = {'foo.txt': b'bar'}\n torch.jit.save(m, 'scriptmodule.pt', _extra_files=extra_files)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.save.html", "category": "pytorch docs"}
{"text": "Sequentialclass torch.nn.Sequential(*args: Module)\nclass torch.nn.Sequential(arg: OrderedDict[str, Module])\n A sequential container. Modules will be added to it in the order\n they are passed in the constructor. Alternatively, an \"OrderedDict\"\n of modules can be passed in. The \"forward()\" method of \"Sequential\"\n accepts any input and forwards it to the first module it contains.\n It then \"chains\" outputs to inputs sequentially for each subsequent\n module, finally returning the output of the last module.\n The value a \"Sequential\" provides over manually calling a sequence\n of modules is that it allows treating the whole container as a\n single module, such that performing a transformation on the\n \"Sequential\" applies to each of the modules it stores (which are\n each a registered submodule of the \"Sequential\").\n What's the difference between a \"Sequential\" and a\n \"torch.nn.ModuleList\"? A \"ModuleList\" is exactly what it sounds", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html", "category": "pytorch docs"}
{"text": "like--a list for storing \"Module\" s! On the other hand, the layers\n in a \"Sequential\" are connected in a cascading way.\n Example:\n # Using Sequential to create a small model. When model is run,\n # input will first be passed to Conv2d(1,20,5). The output of\n # Conv2d(1,20,5) will be used as the input to the first\n # ReLU; the output of the first ReLU will become the input\n # for Conv2d(20,64,5). Finally, the output of\n # Conv2d(20,64,5) will be used as input to the second ReLU\n model = nn.Sequential(\n nn.Conv2d(1,20,5),\n nn.ReLU(),\n nn.Conv2d(20,64,5),\n nn.ReLU()\n )\n # Using Sequential with OrderedDict. This is functionally the\n # same as the above code\n model = nn.Sequential(OrderedDict([\n ('conv1', nn.Conv2d(1,20,5)),\n ('relu1', nn.ReLU()),\n ('conv2', nn.Conv2d(20,64,5)),\n ('relu2', nn.ReLU())", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html", "category": "pytorch docs"}
{"text": "('relu2', nn.ReLU())\n ]))\n append(module)\n Appends a given module to the end.\n Parameters:\n module (nn.Module) -- module to append\n Return type:\n Sequential", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html", "category": "pytorch docs"}
{"text": "torch.onestorch.ones(size, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Returns a tensor filled with the scalar value 1, with the shape\n defined by the variable argument \"size\".\n Parameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\n Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see", "source": "https://pytorch.org/docs/stable/generated/torch.ones.html", "category": "pytorch docs"}
{"text": "for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Example:\n >>> torch.ones(2, 3)\n tensor([[ 1., 1., 1.],\n [ 1., 1., 1.]])\n >>> torch.ones(5)\n tensor([ 1., 1., 1., 1., 1.])", "source": "https://pytorch.org/docs/stable/generated/torch.ones.html", "category": "pytorch docs"}
{"text": "torch.arcsintorch.arcsin(input, *, out=None) -> Tensor\n Alias for \"torch.asin()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arcsin.html", "category": "pytorch docs"}
{"text": "torch.meantorch.mean(input, , dtype=None) -> Tensor\n Returns the mean value of all elements in the \"input\" tensor.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Example:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 0.2294, -0.5481, 1.3288]])\n >>> torch.mean(a)\n tensor(0.3367)\n torch.mean(input, dim, keepdim=False, , dtype=None, out=None) -> Tensor\n Returns the mean value of each row of the \"input\" tensor in the\n given dimension \"dim\". If \"dim\" is a list of dimensions, reduce\n over all of them.\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.", "source": "https://pytorch.org/docs/stable/generated/torch.mean.html", "category": "pytorch docs"}
{"text": "Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints) -- the dimension or\n dimensions to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n * out (Tensor, optional) -- the output tensor.\n See also:\n \"torch.nanmean()\" computes the mean value of non-NaN elements.\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[-0.3841, 0.6320, 0.4254, -0.7384],\n [-0.9644, 1.0131, -0.6549, -1.4279],", "source": "https://pytorch.org/docs/stable/generated/torch.mean.html", "category": "pytorch docs"}
{"text": "[-0.2951, -1.3350, -0.7694, 0.5600],\n [ 1.0842, -0.9580, 0.3623, 0.2343]])\n >>> torch.mean(a, 1)\n tensor([-0.0163, -0.5085, -0.4599, 0.1807])\n >>> torch.mean(a, 1, True)\n tensor([[-0.0163],\n [-0.5085],\n [-0.4599],\n [ 0.1807]])", "source": "https://pytorch.org/docs/stable/generated/torch.mean.html", "category": "pytorch docs"}
{"text": "torch.fft.ffttorch.fft.fft(input, n=None, dim=- 1, norm=None, , out=None) -> Tensor\n Computes the one dimensional discrete Fourier transform of \"input\".\n Note:\n The Fourier domain representation of any real signal satisfies\n the Hermitian property: X[i] = conj(X[-i]). This function\n always returns both the positive and negative frequency terms\n even though, for real inputs, the negative frequencies are\n redundant. \"rfft()\" returns the more compact one-sided\n representation where only the positive frequencies are returned.\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension.\n Parameters:\n * input (Tensor) -- the input tensor\n * n (int, optional*) -- Signal length. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the FFT.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft.html", "category": "pytorch docs"}
{"text": "before computing the FFT.\n * dim (int, optional) -- The dimension along which to\n take the one dimensional FFT.\n * norm (str, optional) --\n Normalization mode. For the forward transform (\"fft()\"), these\n correspond to:\n * \"\"forward\"\" - normalize by \"1/n\"\n * \"\"backward\"\" - no normalization\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the FFT\n orthonormal)\n Calling the backward transform (\"ifft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"ifft()\" the exact inverse.\n Default is \"\"backward\"\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nt = torch.arange(4)\nt\n tensor([0, 1, 2, 3])\ntorch.fft.fft(t)\n tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\nt = torch.tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.fft.fft(t)\n tensor([12.+16.j, -8.+0.j, -4.-4.j, 0.-8.j])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft.html", "category": "pytorch docs"}
{"text": "torch.Tensor.varTensor.var(dim=None, *, correction=1, keepdim=False) -> Tensor\n See \"torch.var()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.var.html", "category": "pytorch docs"}
{"text": "torch.erfctorch.erfc(input, *, out=None) -> Tensor\n Alias for \"torch.special.erfc()\".", "source": "https://pytorch.org/docs/stable/generated/torch.erfc.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.one_hottorch.nn.functional.one_hot(tensor, num_classes=- 1) -> LongTensor\n Takes LongTensor with index values of shape \"()\" and returns a\n tensor of shape \"(, num_classes)\" that have zeros everywhere\n except where the index of last dimension matches the corresponding\n value of the input tensor, in which case it will be 1.\n See also One-hot on Wikipedia .\n Parameters:\n * tensor (LongTensor) -- class values of any shape.\n * num_classes (int) -- Total number of classes. If set to\n -1, the number of classes will be inferred as one greater than\n the largest class value in the input tensor.\n Returns:\n LongTensor that has one more dimension with 1 values at the\n index of last dimension indicated by the input, and 0 everywhere\n else.\n -[ Examples ]-\n\n\n\nF.one_hot(torch.arange(0, 5) % 3)\n tensor([[1, 0, 0],\n [0, 1, 0],\n [0, 0, 1],\n [1, 0, 0],\n [0, 1, 0]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html", "category": "pytorch docs"}
{"text": "[1, 0, 0],\n [0, 1, 0]])\n\n\n\nF.one_hot(torch.arange(0, 5) % 3, num_classes=5)\n tensor([[1, 0, 0, 0, 0],\n [0, 1, 0, 0, 0],\n [0, 0, 1, 0, 0],\n [1, 0, 0, 0, 0],\n [0, 1, 0, 0, 0]])\nF.one_hot(torch.arange(0, 6).view(3,2) % 3)\n tensor([[[1, 0, 0],\n [0, 1, 0]],\n [[0, 0, 1],\n [1, 0, 0]],\n [[0, 1, 0],\n [0, 0, 1]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html", "category": "pytorch docs"}
{"text": "torch.Tensor.tileTensor.tile(*reps) -> Tensor\n See \"torch.tile()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tile.html", "category": "pytorch docs"}
{"text": "torch.Tensor.log2Tensor.log2() -> Tensor\n See \"torch.log2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log2.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lcm_Tensor.lcm_(other) -> Tensor\n In-place version of \"lcm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lcm_.html", "category": "pytorch docs"}
{"text": "torch.cholesky_solvetorch.cholesky_solve(input, input2, upper=False, , out=None) -> Tensor\n Solves a linear system of equations with a positive semidefinite\n matrix to be inverted given its Cholesky factor matrix u.\n If \"upper\" is \"False\", u is and lower triangular and c is\n returned such that:\n c = (u u^T)^{{-1}} b\n If \"upper\" is \"True\" or not provided, u is upper triangular and c\n is returned such that:\n c = (u^T u)^{{-1}} b\n torch.cholesky_solve(b, u) can take in 2D inputs b, u or inputs\n that are batches of 2D matrices. If the inputs are batches, then\n returns batched outputs c\n Supports real-valued and complex-valued inputs. For the complex-\n valued inputs the transpose operator above is the conjugate\n transpose.\n Parameters:\n * input (Tensor) -- input matrix b of size (, m, k),\n where * is zero or more batch dimensions\n * input2 (Tensor) -- input matrix u of size (*, m, m),", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html", "category": "pytorch docs"}
{"text": "where * is zero of more batch dimensions composed of upper or\n lower triangular Cholesky factor\n * upper (bool, optional) -- whether to consider the\n Cholesky factor as a lower or upper triangular matrix.\n Default: \"False\".\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor for c\n Example:\n >>> a = torch.randn(3, 3)\n >>> a = torch.mm(a, a.t()) # make symmetric positive definite\n >>> u = torch.linalg.cholesky(a)\n >>> a\n tensor([[ 0.7747, -1.9549, 1.3086],\n [-1.9549, 6.7546, -5.4114],\n [ 1.3086, -5.4114, 4.8733]])\n >>> b = torch.randn(3, 2)\n >>> b\n tensor([[-0.6355, 0.9891],\n [ 0.1974, 1.4706],\n [-0.4115, -0.6225]])\n >>> torch.cholesky_solve(b, u)\n tensor([[ -8.1625, 19.6097],\n [ -5.8398, 14.2387],\n [ -4.3771, 10.4173]])\n >>> torch.mm(a.inverse(), b)\n tensor([[ -8.1626, 19.6097],", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html", "category": "pytorch docs"}
{"text": "tensor([[ -8.1626, 19.6097],\n [ -5.8398, 14.2387],\n [ -4.3771, 10.4173]])", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html", "category": "pytorch docs"}
{"text": "torch.tensor_splittorch.tensor_split(input, indices_or_sections, dim=0) -> List of Tensors\n Splits a tensor into multiple sub-tensors, all of which are views\n of \"input\", along dimension \"dim\" according to the indices or\n number of sections specified by \"indices_or_sections\". This\n function is based on NumPy's \"numpy.array_split()\".\n Parameters:\n * input (Tensor) -- the tensor to split\n * indices_or_sections (Tensor, int or list or\n tuple of ints) --\n If \"indices_or_sections\" is an integer \"n\" or a zero\n dimensional long tensor with value \"n\", \"input\" is split into\n \"n\" sections along dimension \"dim\". If \"input\" is divisible by\n \"n\" along dimension \"dim\", each section will be of equal size,\n \"input.size(dim) / n\". If \"input\" is not divisible by \"n\", the\n sizes of the first \"int(input.size(dim) % n)\" sections will\n have size \"int(input.size(dim) / n) + 1\", and the rest will", "source": "https://pytorch.org/docs/stable/generated/torch.tensor_split.html", "category": "pytorch docs"}
{"text": "have size \"int(input.size(dim) / n)\".\n If \"indices_or_sections\" is a list or tuple of ints, or a one-\n dimensional long tensor, then \"input\" is split along dimension\n \"dim\" at each of the indices in the list, tuple or tensor. For\n instance, \"indices_or_sections=[2, 3]\" and \"dim=0\" would\n result in the tensors \"input[:2]\", \"input[2:3]\", and\n \"input[3:]\".\n If \"indices_or_sections\" is a tensor, it must be a zero-\n dimensional or one-dimensional long tensor on the CPU.\n * dim (int, optional) -- dimension along which to\n split the tensor. Default: \"0\"\n Example:\n >>> x = torch.arange(8)\n >>> torch.tensor_split(x, 3)\n (tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6, 7]))\n >>> x = torch.arange(7)\n >>> torch.tensor_split(x, 3)\n (tensor([0, 1, 2]), tensor([3, 4]), tensor([5, 6]))\n >>> torch.tensor_split(x, (1, 6))\n (tensor([0]), tensor([1, 2, 3, 4, 5]), tensor([6]))", "source": "https://pytorch.org/docs/stable/generated/torch.tensor_split.html", "category": "pytorch docs"}
{"text": "\n\n\nx = torch.arange(14).reshape(2, 7)\n >>> x\n tensor([[ 0, 1, 2, 3, 4, 5, 6],\n [ 7, 8, 9, 10, 11, 12, 13]])\n >>> torch.tensor_split(x, 3, dim=1)\n (tensor([[0, 1, 2],\n [7, 8, 9]]),\n tensor([[ 3, 4],\n [10, 11]]),\n tensor([[ 5, 6],\n [12, 13]]))\n >>> torch.tensor_split(x, (1, 6), dim=1)\n (tensor([[0],\n [7]]),\n tensor([[ 1, 2, 3, 4, 5],\n [ 8, 9, 10, 11, 12]]),\n tensor([[ 6],\n [13]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.tensor_split.html", "category": "pytorch docs"}
{"text": "torch.fft.rfftntorch.fft.rfftn(input, s=None, dim=None, norm=None, , out=None) -> Tensor\n Computes the N-dimensional discrete Fourier transform of real\n \"input\".\n The FFT of a real signal is Hermitian-symmetric, \"X[i_1, ..., i_n]\n = conj(X[-i_1, ..., -i_n])\" so the full \"fftn()\" output contains\n redundant information. \"rfftn()\" instead omits the negative\n frequencies in the last dimension.\n Note:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimensions.\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], *optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Default: \"s =", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html", "category": "pytorch docs"}
{"text": "[input.size(d) for d in dim]\"\n * dim (Tuple[int], optional) -- Dimensions to be\n transformed. Default: all dimensions, or the last \"len(s)\"\n dimensions if \"s\" is given.\n * norm (str, optional) --\n Normalization mode. For the forward transform (\"rfftn()\"),\n these correspond to:\n * \"\"forward\"\" - normalize by \"1/n\"\n * \"\"backward\"\" - no normalization\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real FFT\n orthonormal)\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"irfftn()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. This is required to make \"irfftn()\" the exact\n inverse.\n Default is \"\"backward\"\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nt = torch.rand(10, 10)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html", "category": "pytorch docs"}
{"text": "-[ Example ]-\n\n\n\nt = torch.rand(10, 10)\nrfftn = torch.fft.rfftn(t)\nrfftn.size()\n torch.Size([10, 6])\n Compared against the full output from \"fftn()\", we have all\n elements up to the Nyquist frequency.\nfftn = torch.fft.fftn(t)\ntorch.testing.assert_close(fftn[..., :6], rfftn, check_stride=False)\n The discrete Fourier transform is separable, so \"rfftn()\" here is\n equivalent to a combination of \"fft()\" and \"rfft()\":\ntwo_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)\ntorch.testing.assert_close(rfftn, two_ffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html", "category": "pytorch docs"}
{"text": "torch.randpermtorch.randperm(n, , generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor\n Returns a random permutation of integers from \"0\" to \"n - 1\".\n Parameters:\n n (int) -- the upper bound (exclusive)\n Keyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: \"torch.int64\".\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device* (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU", "source": "https://pytorch.org/docs/stable/generated/torch.randperm.html", "category": "pytorch docs"}
{"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> torch.randperm(4)\n tensor([2, 1, 0, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.randperm.html", "category": "pytorch docs"}
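The generator keyword argument described above makes the permutation reproducible; a minimal sketch (the seed value 0 is arbitrary, chosen for illustration):

```python
import torch

# A seeded Generator makes the permutation reproducible across runs.
g = torch.Generator().manual_seed(0)
perm = torch.randperm(4, generator=g)

# The result is always some rearrangement of 0..n-1.
assert sorted(perm.tolist()) == [0, 1, 2, 3]

# Re-seeding reproduces the identical permutation.
g2 = torch.Generator().manual_seed(0)
assert torch.equal(perm, torch.randperm(4, generator=g2))
```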
{"text": "torch.nn.functional.tanhshrinktorch.nn.functional.tanhshrink(input) -> Tensor\n Applies element-wise, \\text{Tanhshrink}(x) = x - \\text{Tanh}(x)\n See \"Tanhshrink\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.tanhshrink.html", "category": "pytorch docs"}
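As a quick sanity check of the element-wise identity above (a minimal sketch):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 2.5])
# Tanhshrink(x) = x - tanh(x), applied element-wise.
assert torch.allclose(F.tanhshrink(x), x - torch.tanh(x))
# tanh(0) = 0, so tanhshrink(0) = 0.
assert F.tanhshrink(torch.zeros(1)).item() == 0.0
```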
{"text": "torch.func.replace_all_batch_norm_modules_torch.func.replace_all_batch_norm_modules_(root)\n In place updates \"root\" by setting the \"running_mean\" and\n \"running_var\" to be None and setting track_running_stats to be\n False for any nn.BatchNorm module in \"root\"\n Return type:\n Module", "source": "https://pytorch.org/docs/stable/generated/torch.func.replace_all_batch_norm_modules_.html", "category": "pytorch docs"}
{"text": "hardswishclass torch.ao.nn.quantized.functional.hardswish(input, scale, zero_point)\n This is the quantized version of \"hardswish()\".\n Parameters:\n * input (Tensor) -- quantized input\n * scale (float) -- quantization scale of the output tensor\n * zero_point (int) -- quantization zero point of the\n output tensor\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardswish.html", "category": "pytorch docs"}
{"text": "Transformerclass torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, activation=, custom_encoder=None, custom_decoder=None, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)\n A transformer model. User is able to modify the attributes as\n needed. The architecture is based on the paper \"Attention Is All\n You Need\". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob\n Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia\n Polosukhin. 2017. Attention is all you need. In Advances in Neural\n Information Processing Systems, pages 6000-6010.\n Parameters:\n * d_model (int) -- the number of expected features in the\n encoder/decoder inputs (default=512).\n * nhead (int) -- the number of heads in the\n multiheadattention models (default=8).\n * num_encoder_layers (int) -- the number of sub-encoder-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"}
{"text": "layers in the encoder (default=6).\n * num_decoder_layers (int) -- the number of sub-decoder-\n layers in the decoder (default=6).\n * dim_feedforward (int) -- the dimension of the\n feedforward network model (default=2048).\n * dropout (float) -- the dropout value (default=0.1).\n * activation (Union[str,\n Callable[[Tensor], Tensor]]) -- the\n activation function of encoder/decoder intermediate layer, can\n be a string (\"relu\" or \"gelu\") or a unary callable. Default:\n relu\n * custom_encoder (Optional[Any]) -- custom encoder\n (default=None).\n * custom_decoder (Optional[Any]) -- custom decoder\n (default=None).\n * layer_norm_eps (float) -- the eps value in layer\n normalization components (default=1e-5).\n * batch_first (bool) -- If \"True\", then the input and\n output tensors are provided as (batch, seq, feature). Default:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"}
{"text": "\"False\" (seq, batch, feature).\n * norm_first (bool) -- if \"True\", encoder and decoder\n layers will perform LayerNorms before other attention and\n feedforward operations, otherwise after. Default: \"False\"\n (after).\n Examples::\n >>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)\n >>> src = torch.rand((10, 32, 512))\n >>> tgt = torch.rand((20, 32, 512))\n >>> out = transformer_model(src, tgt)\n Note: A full example to apply nn.Transformer module for the word\n language model is available in\n https://github.com/pytorch/examples/tree/master/word_language_model\n forward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)\n Take in and process masked source/target sequences.\n Parameters:\n * src (Tensor) -- the sequence to the encoder\n (required).\n * tgt (Tensor) -- the sequence to the decoder", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"}
{"text": "(required).\n * src_mask (Optional[Tensor]) -- the additive\n mask for the src sequence (optional).\n * tgt_mask (Optional[Tensor]) -- the additive\n mask for the tgt sequence (optional).\n * memory_mask (Optional[Tensor]) -- the additive\n mask for the encoder output (optional).\n * src_key_padding_mask (Optional[Tensor]) -- the\n ByteTensor mask for src keys per batch (optional).\n * tgt_key_padding_mask (Optional[Tensor]) -- the\n ByteTensor mask for tgt keys per batch (optional).\n * memory_key_padding_mask (Optional[Tensor]) --\n the ByteTensor mask for memory keys per batch (optional).\n Return type:\n Tensor\n Shape:\n * src: (S, E) for unbatched input, (S, N, E) if\n batch_first=False or (N, S, E) if batch_first=True.\n * tgt: (T, E) for unbatched input, (T, N, E) if", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"}
{"text": "batch_first=False or (N, T, E) if batch_first=True.\n * src_mask: (S, S) or (N\\cdot\\text{num_heads}, S, S).\n * tgt_mask: (T, T) or (N\\cdot\\text{num_heads}, T, T).\n * memory_mask: (T, S).\n * src_key_padding_mask: (S) for unbatched input otherwise (N,\n S).\n * tgt_key_padding_mask: (T) for unbatched input otherwise (N,\n T).\n * memory_key_padding_mask: (S) for unbatched input otherwise\n (N, S).\n Note: [src/tgt/memory]_mask ensures that position i is\n allowed to attend the unmasked positions. If a ByteTensor is\n provided, the non-zero positions are not allowed to attend\n while the zero positions will be unchanged. If a BoolTensor\n is provided, positions with \"True\" are not allowed to attend\n while \"False\" values will be unchanged. If a FloatTensor is\n provided, it will be added to the attention weight.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"}
{"text": "[src/tgt/memory]_key_padding_mask provides specified elements\n in the key to be ignored by the attention. If a ByteTensor is\n provided, the non-zero positions will be ignored while the\n zero positions will be unchanged. If a BoolTensor is\n provided, the positions with the value of \"True\" will be\n ignored while the position with the value of \"False\" will be\n unchanged.\n * output: (T, E) for unbatched input, (T, N, E) if\n batch_first=False or (N, T, E) if batch_first=True.\n Note: Due to the multi-head attention architecture in the\n transformer model, the output sequence length of a\n transformer is same as the input sequence (i.e. target)\n length of the decoder.\n where S is the source sequence length, T is the target\n sequence length, N is the batch size, E is the feature number\n -[ Examples ]-\n >>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"}
{"text": "static generate_square_subsequent_mask(sz, device='cpu')\n Generate a square mask for the sequence. The masked positions\n are filled with float('-inf'). Unmasked positions are filled\n with float(0.0).\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"}
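To make the fill pattern concrete, a minimal sketch for a 4-token sequence (position i may attend only to positions <= i):

```python
import torch
import torch.nn as nn

# Causal mask: upper triangle (future positions) is -inf, rest is 0.0.
mask = nn.Transformer.generate_square_subsequent_mask(4)
assert mask.shape == (4, 4)
# Future positions are masked with -inf...
assert mask[0, 1].item() == float('-inf')
# ...while past and current positions are left at 0.0.
assert mask[3, 0].item() == 0.0
assert mask[2, 2].item() == 0.0
```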
{"text": "default_placeholder_observertorch.quantization.observer.default_placeholder_observer\n alias of \"PlaceholderObserver\"", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_placeholder_observer.html", "category": "pytorch docs"}
{"text": "torch.sparse_csr_tensortorch.sparse_csr_tensor(crow_indices, col_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\n Constructs a sparse tensor in CSR (Compressed Sparse Row) with\n specified values at the given \"crow_indices\" and \"col_indices\".\n Sparse matrix multiplication operations in CSR format are typically\n faster than that for sparse tensors in COO format. Make sure you\n have a look at the note on the data type of the indices.\n Note:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n Parameters:\n * crow_indices (array_like) -- (B+1)-dimensional array of\n size \"(batchsize, nrows + 1)\". The last element of each", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html", "category": "pytorch docs"}
{"text": "batch is the number of non-zeros. This tensor encodes the\n index in values and col_indices depending on where the given\n row starts. Each successive number in the tensor subtracted by\n the number before it denotes the number of elements in a given\n row.\n * col_indices (array_like) -- Column co-ordinates of each\n element in values. (B+1)-dimensional tensor with the same\n length as values.\n * values (array_list) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other types\n that represents a (1+K)-dimensional tensor where \"K\" is the\n number of dense dimensions.\n * size (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(batchsize, nrows, ncols, densesize)\". If\n not provided, the size will be inferred as the minimum size\n big enough to hold all non-zero elements.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * check_invariants (bool, optional) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n Example::\n >>> crow_indices = [0, 2, 4]\n >>> col_indices = [0, 1, 0, 1]\n >>> values = [1, 2, 3, 4]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html", "category": "pytorch docs"}
{"text": ">>> torch.sparse_csr_tensor(torch.tensor(crow_indices, dtype=torch.int64),\n ... torch.tensor(col_indices, dtype=torch.int64),\n ... torch.tensor(values), dtype=torch.double)\n tensor(crow_indices=tensor([0, 2, 4]),\n col_indices=tensor([0, 1, 0, 1]),\n values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,\n dtype=torch.float64, layout=torch.sparse_csr)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html", "category": "pytorch docs"}
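A sketch of how the example above decodes: row i holds values[crow_indices[i]:crow_indices[i+1]], placed at the matching col_indices, which can be verified via to_dense():

```python
import torch

crow_indices = torch.tensor([0, 2, 4])   # row 0 -> values[0:2], row 1 -> values[2:4]
col_indices = torch.tensor([0, 1, 0, 1])
values = torch.tensor([1., 2., 3., 4.])
csr = torch.sparse_csr_tensor(crow_indices, col_indices, values)

# The inferred size is (2, 2); densifying recovers the full matrix.
dense = torch.tensor([[1., 2.],
                      [3., 4.]])
assert torch.equal(csr.to_dense(), dense)
assert csr.layout == torch.sparse_csr
```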
{"text": "torch.column_stacktorch.column_stack(tensors, *, out=None) -> Tensor\n Creates a new tensor by horizontally stacking the tensors in\n \"tensors\".\n Equivalent to \"torch.hstack(tensors)\", except each zero or one\n dimensional tensor \"t\" in \"tensors\" is first reshaped into a\n \"(t.numel(), 1)\" column before being stacked horizontally.\n Parameters:\n tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([1, 2, 3])\n >>> b = torch.tensor([4, 5, 6])\n >>> torch.column_stack((a, b))\n tensor([[1, 4],\n [2, 5],\n [3, 6]])\n >>> a = torch.arange(5)\n >>> b = torch.arange(10).reshape(5, 2)\n >>> torch.column_stack((a, b, b))\n tensor([[0, 0, 1, 0, 1],\n [1, 2, 3, 2, 3],\n [2, 4, 5, 4, 5],\n [3, 6, 7, 6, 7],\n [4, 8, 9, 8, 9]])", "source": "https://pytorch.org/docs/stable/generated/torch.column_stack.html", "category": "pytorch docs"}
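The hstack equivalence described above can be checked directly (a minimal sketch):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# For 1-D inputs, column_stack first reshapes each tensor to a
# (numel, 1) column, then stacks horizontally.
manual = torch.hstack((a.reshape(-1, 1), b.reshape(-1, 1)))
assert torch.equal(torch.column_stack((a, b)), manual)
```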
{"text": "torch.index_reducetorch.index_reduce(input, dim, index, source, reduce, *, include_self=True, out=None) -> Tensor\n See \"index_reduce_()\" for function description.", "source": "https://pytorch.org/docs/stable/generated/torch.index_reduce.html", "category": "pytorch docs"}
{"text": "torch.Tensor.negTensor.neg() -> Tensor\n See \"torch.neg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.neg.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_complexTensor.is_complex() -> bool\n Returns True if the data type of \"self\" is a complex data type.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_complex.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.parametrizations.orthogonaltorch.nn.utils.parametrizations.orthogonal(module, name='weight', orthogonal_map=None, *, use_trivialization=True)\n Applies an orthogonal or unitary parametrization to a matrix or a\n batch of matrices.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the parametrized\n matrix Q \\in \\mathbb{K}^{m \\times n} is orthogonal as\n \\begin{align} Q^{\\text{H}}Q &= \\mathrm{I}_n\n \\mathrlap{\\qquad \\text{if }m \\geq n}\\ QQ^{\\text{H}} &=\n \\mathrm{I}_m \\mathrlap{\\qquad \\text{if }m < n} \\end{align}\n where Q^{\\text{H}} is the conjugate transpose when Q is complex and\n the transpose when Q is real-valued, and \\mathrm{I}_n is the\n n-dimensional identity matrix. In plain words, Q will have\n orthonormal columns whenever m \\geq n and orthonormal rows\n otherwise.\n If the tensor has more than two dimensions, we consider it as a\n batch of matrices of shape (..., m, n).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html", "category": "pytorch docs"}
{"text": "The matrix Q may be parametrized via three different\n \"orthogonal_map\" in terms of the original tensor:\n * \"\"matrix_exp\"\"/\"\"cayley\"\": the \"matrix_exp()\" Q = \\exp(A) and the\n Cayley map Q = (\\mathrm{I}_n + A/2)(\\mathrm{I}_n - A/2)^{-1} are\n applied to a skew-symmetric A to give an orthogonal matrix.\n * \"\"householder\"\": computes a product of Householder reflectors\n (\"householder_product()\").\n \"\"matrix_exp\"\"/\"\"cayley\"\" often make the parametrized weight\n converge faster than \"\"householder\"\", but they are slower to\n compute for very thin or very wide matrices.\n If \"use_trivialization=True\" (default), the parametrization\n implements the \"Dynamic Trivialization Framework\", where an extra\n matrix B \\in \\mathbb{K}^{n \\times n} is stored under\n \"module.parametrizations.weight[0].base\". This helps the\n convergence of the parametrized layer at the expense of some extra\n memory use. See Trivializations for Gradient-Based Optimization on", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html", "category": "pytorch docs"}
{"text": "Manifolds .\n Initial value of Q: If the original tensor is not parametrized and\n \"use_trivialization=True\" (default), the initial value of Q is that\n of the original tensor if it is orthogonal (or unitary in the\n complex case) and it is orthogonalized via the QR decomposition\n otherwise (see \"torch.linalg.qr()\"). Same happens when it is not\n parametrized and \"orthogonal_map=\"householder\"\" even when\n \"use_trivialization=False\". Otherwise, the initial value is the\n result of the composition of all the registered parametrizations\n applied to the original tensor.\n Note:\n This function is implemented using the parametrization\n functionality in \"register_parametrization()\".\n Parameters:\n * module (nn.Module) -- module on which to register the\n parametrization.\n * name (str, optional) -- name of the tensor to make\n orthogonal. Default: \"\"weight\"\".\n * orthogonal_map (str, optional) -- One of the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html", "category": "pytorch docs"}
{"text": "following: \"\"matrix_exp\"\", \"\"cayley\"\", \"\"householder\"\".\n Default: \"\"matrix_exp\"\" if the matrix is square or complex,\n \"\"householder\"\" otherwise.\n * use_trivialization (bool, optional) -- whether to\n use the dynamic trivialization framework. Default: \"True\".\n Returns:\n The original module with an orthogonal parametrization\n registered to the specified weight\n Return type:\n Module\n Example:\n >>> orth_linear = orthogonal(nn.Linear(20, 40))\n >>> orth_linear\n ParametrizedLinear(\n in_features=20, out_features=40, bias=True\n (parametrizations): ModuleDict(\n (weight): ParametrizationList(\n (0): _Orthogonal()\n )\n )\n )\n >>> Q = orth_linear.weight\n >>> torch.dist(Q.T @ Q, torch.eye(20))\n tensor(4.9332e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html", "category": "pytorch docs"}
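A runnable variant of the example above; the orthogonality constraint Q^H Q = I_n holds by construction, whatever the random initialization of the underlying Linear layer:

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal

# m = 40 >= n = 20, so the columns of the effective weight are orthonormal.
orth_linear = orthogonal(nn.Linear(20, 40))
Q = orth_linear.weight
err = torch.dist(Q.T @ Q, torch.eye(20))
assert err.item() < 1e-4
```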
{"text": "torch.Tensor.anyTensor.any(dim=None, keepdim=False) -> Tensor\n See \"torch.any()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.any.html", "category": "pytorch docs"}
{"text": "torch.Tensor.distTensor.dist(other, p=2) -> Tensor\n See \"torch.dist()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dist.html", "category": "pytorch docs"}
{"text": "torch.cuda.device_counttorch.cuda.device_count()\n Returns the number of GPUs available.\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.device_count.html", "category": "pytorch docs"}
{"text": "SyncBatchNormclass torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None, device=None, dtype=None)\n Applies Batch Normalization over a N-Dimensional input (a mini-\n batch of [N-2]D inputs with additional channel dimension) as\n described in the paper Batch Normalization: Accelerating Deep\n Network Training by Reducing Internal Covariate Shift .\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The mean and standard-deviation are calculated per-dimension over\n all mini-batches of the same process groups. \\gamma and \\beta are\n learnable parameter vectors of size C (where C is the input\n size). By default, the elements of \\gamma are sampled from\n \\mathcal{U}(0, 1) and the elements of \\beta are set to 0. The\n standard-deviation is calculated via the biased estimator,\n equivalent to torch.var(input, unbiased=False).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"}
{"text": "Also by default, during training this layer keeps running estimates\n of its computed mean and variance, which are then used for\n normalization during evaluation. The running estimates are kept\n with a default \"momentum\" of 0.1.\n If \"track_running_stats\" is set to \"False\", this layer then does\n not keep running estimates, and batch statistics are instead used\n during evaluation time as well.\n Note:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n Because the Batch Normalization is done for each channel in the \"C\"\n dimension, computing statistics on \"(N, +)\" slices, it's common\n terminology to call this Volumetric Batch Normalization or Spatio-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"}
{"text": "temporal Batch Normalization.\n Currently \"SyncBatchNorm\" only supports \"DistributedDataParallel\"\n (DDP) with single GPU per process. Use\n \"torch.nn.SyncBatchNorm.convert_sync_batchnorm()\" to convert\n \"BatchNormD\" layer to \"SyncBatchNorm\" before wrapping Network with\n DDP.\n Parameters:\n * num_features (int) -- C from an expected input of size\n (N, C, +)\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: \"1e-5\"\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"}
{"text": "variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics in both\n training and eval modes. Default: \"True\"\n * process_group (Optional[Any]) -- synchronization\n of stats happen within each process group individually.\n Default behavior is synchronization across the whole world\n Shape:\n * Input: (N, C, +)\n * Output: (N, C, +) (same shape as input)\n Note:\n Synchronization of batchnorm statistics occurs only while\n training, i.e. synchronization is disabled when \"model.eval()\" is\n set or if \"self.training\" is otherwise \"False\".\n Examples:\n >>> # With Learnable Parameters\n >>> m = nn.SyncBatchNorm(100)\n >>> # creating process group (optional)\n >>> # ranks is a list of int identifying rank ids.\n >>> ranks = list(range(8))", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"}
{"text": ">>> r1, r2 = ranks[:4], ranks[4:]\n >>> # Note: every rank calls into new_group for every\n >>> # process group created, even if that rank is not\n >>> # part of the group.\n >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]\n >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]\n >>> # Without Learnable Parameters\n >>> m = nn.SyncBatchNorm(100, affine=False, process_group=process_group)\n >>> input = torch.randn(20, 100, 35, 45, 10)\n >>> output = m(input)\n >>> # network is nn.BatchNorm layer\n >>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group)\n >>> # only single gpu per process is currently supported\n >>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel(\n >>> sync_bn_network,\n >>> device_ids=[args.local_rank],", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"}
{"text": "\n\n\n output_device=args.local_rank)\n\nclassmethod convert_sync_batchnorm(module, process_group=None)\n Helper function to convert all \"BatchNormD\" layers in the model\n to \"torch.nn.SyncBatchNorm\" layers.\n Parameters:\n * module (nn.Module) -- module containing one or more\n \"BatchNormD\" layers\n * process_group (optional) -- process group to scope\n synchronization, default is the whole world\n Returns:\n The original \"module\" with the converted\n \"torch.nn.SyncBatchNorm\" layers. If the original \"module\" is\n a \"BatchNorm*D\" layer, a new \"torch.nn.SyncBatchNorm\" layer\n object will be returned instead.\n Example:\n >>> # Network with nn.BatchNorm layer\n >>> module = torch.nn.Sequential(\n >>> torch.nn.Linear(20, 100),\n >>> torch.nn.BatchNorm1d(100),\n >>> ).cuda()\n >>> # creating process group (optional)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"}
{"text": ">>> # ranks is a list of int identifying rank ids.\n >>> ranks = list(range(8))\n >>> r1, r2 = ranks[:4], ranks[4:]\n >>> # Note: every rank calls into new_group for every\n >>> # process group created, even if that rank is not\n >>> # part of the group.\n >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]\n >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]\n >>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"}
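The conversion step alone needs no initialized process group, so it can be sketched on CPU (forward passes under DDP still require the distributed setup shown above):

```python
import torch
import torch.nn as nn

module = nn.Sequential(nn.Linear(20, 100), nn.BatchNorm1d(100))
# Swaps every BatchNorm*D layer for SyncBatchNorm; other layers pass through.
converted = nn.SyncBatchNorm.convert_sync_batchnorm(module)
assert isinstance(converted[1], nn.SyncBatchNorm)
assert isinstance(converted[0], nn.Linear)
```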
{"text": "LambdaLRclass torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)\n Sets the learning rate of each parameter group to the initial lr\n times a given function. When last_epoch=-1, sets initial lr as lr.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * lr_lambda (function or list) -- A function which\n computes a multiplicative factor given an integer parameter\n epoch, or a list of such functions, one for each group in\n optimizer.param_groups.\n * last_epoch (int) -- The index of last epoch. Default:\n -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-\n\n\n\nAssuming optimizer has two groups.\nlambda1 = lambda epoch: epoch // 30\nlambda2 = lambda epoch: 0.95 ** epoch\nscheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])\nfor epoch in range(100):\n train(...)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html", "category": "pytorch docs"}
{"text": "validate(...)\nscheduler.step()\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the scheduler's state.\n When saving or loading the scheduler, please make sure to also\n save or load the state of the optimizer.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer. The learning rate lambda functions will\n only be saved if they are callable objects and not if they are\n functions or lambdas.\n When saving or loading the scheduler, please make sure to also\n save or load the state of the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html", "category": "pytorch docs"}
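A runnable sketch of the multiplicative behaviour described above: with lr_lambda = 0.5 ** epoch (an illustrative schedule, not from the docs), the rate halves each epoch starting from the initial lr:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

param = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.SGD([param], lr=0.1)
scheduler = LambdaLR(opt, lr_lambda=lambda epoch: 0.5 ** epoch)

lrs = []
for _ in range(3):
    lrs.append(opt.param_groups[0]['lr'])
    opt.step()          # optimizer step first, then scheduler step
    scheduler.step()

# lr per epoch: initial_lr * lr_lambda(epoch) = 0.1, 0.05, 0.025
assert [round(lr, 4) for lr in lrs] == [0.1, 0.05, 0.025]
```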
{"text": "torch.eyetorch.eye(n, m=None, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Returns a 2-D tensor with ones on the diagonal and zeros elsewhere.\n Parameters:\n * n (int) -- the number of rows\n * m (int, optional) -- the number of columns with\n default being \"n\"\n Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU", "source": "https://pytorch.org/docs/stable/generated/torch.eye.html", "category": "pytorch docs"}
{"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Returns:\n A 2-D tensor with ones on the diagonal and zeros elsewhere\n Return type:\n Tensor\n Example:\n >>> torch.eye(3)\n tensor([[ 1., 0., 0.],\n [ 0., 1., 0.],\n [ 0., 0., 1.]])", "source": "https://pytorch.org/docs/stable/generated/torch.eye.html", "category": "pytorch docs"}
{"text": "Adagradclass torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10, foreach=None, *, maximize=False, differentiable=False)\n Implements Adagrad algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt} \\\\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\: \\theta_0 \\text{ (params)}, \\: f(\\theta) \\text{ (objective)}, \\: \\lambda \\text{ (weight decay)}, \\\\ &\\hspace{12mm} \\tau \\text{ (initial accumulator value)}, \\: \\eta\\text{ (lr decay)} \\\\ &\\textbf{initialize} : state\\_sum_0 \\leftarrow 0 \\\\[-1.ex] &\\rule{110mm}{0.4pt} \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\: \\textbf{do} \\\\ &\\hspace{5mm}g_t \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\\ &\\hspace{5mm} \\tilde{\\gamma} \\leftarrow \\gamma / (1 +(t-1) \\eta) \\\\ &\\hspace{5mm} \\textbf{if} \\:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"}
{"text": "\\lambda \\neq 0 \\\\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda \\theta_{t-1} \\\\ &\\hspace{5mm}state\\_sum_t \\leftarrow state\\_sum_{t-1} + g^2_t \\\\ &\\hspace{5mm}\\theta_t \\leftarrow \\theta_{t-1}- \\tilde{\\gamma} \\frac{g_t}{\\sqrt{state\\_sum_t}+\\epsilon} \\\\ &\\rule{110mm}{0.4pt} \\\\[-1.ex] &\\bf{return} \\: \\theta_t \\\\[-1.ex] &\\rule{110mm}{0.4pt} \\\\[-1.ex] \\end{aligned}\n For further details regarding the algorithm we refer to Adaptive\n Subgradient Methods for Online Learning and Stochastic\n Optimization.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 1e-2)\n * lr_decay (float, optional) -- learning rate decay\n (default: 0)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"}
{"text": "(default: 0)\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * eps (float, optional) -- term added to the\n denominator to improve numerical stability (default: 1e-10)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n * differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"}
{"text": "False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"}
{"text": "Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"}
{"text": "\"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"}
{"text": "\".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.tanhtorch.nn.functional.tanh(input) -> Tensor\n Applies element-wise, \\text{Tanh}(x) = \\tanh(x) = \\frac{\\exp(x) -\n \\exp(-x)}{\\exp(x) + \\exp(-x)}\n See \"Tanh\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.tanh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cholesky_inverseTensor.cholesky_inverse(upper=False) -> Tensor\n See \"torch.cholesky_inverse()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky_inverse.html", "category": "pytorch docs"}
{"text": "torch.Tensor.new_emptyTensor.new_empty(size, , dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\n Returns a Tensor of size \"size\" filled with uninitialized data. By\n default, the returned Tensor has the same \"torch.dtype\" and\n \"torch.device\" as this tensor.\n Parameters:\n size (int...*) -- a list, tuple, or \"torch.Size\" of\n integers defining the shape of the output tensor.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n * requires_grad (*bool, optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * layout** (\"torch.layout\", optional) -- the desired layout of", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_empty.html", "category": "pytorch docs"}
{"text": "returned Tensor. Default: \"torch.strided\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> tensor = torch.ones(())\n >>> tensor.new_empty((2, 3))\n tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30],\n [ 3.0949e-41, 4.4842e-44, 0.0000e+00]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_empty.html", "category": "pytorch docs"}
{"text": "default_debug_observertorch.quantization.observer.default_debug_observer\n alias of \"RecordingObserver\"", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_debug_observer.html", "category": "pytorch docs"}
{"text": "default_per_channel_weight_observertorch.quantization.observer.default_per_channel_weight_observer\n alias of functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_channel_symmetric){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_per_channel_weight_observer.html", "category": "pytorch docs"}
{"text": "torch.Tensor.transposeTensor.transpose(dim0, dim1) -> Tensor\n See \"torch.transpose()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.transpose.html", "category": "pytorch docs"}
{"text": "InstanceNorm2dclass torch.ao.nn.quantized.InstanceNorm2d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\n This is the quantized version of \"InstanceNorm2d\".\n Additional args:\n * scale - quantization scale of the output, type: double.\n * zero_point - quantization zero point of the output, type:\n long.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.InstanceNorm2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lerpTensor.lerp(end, weight) -> Tensor\n See \"torch.lerp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lerp.html", "category": "pytorch docs"}
{"text": "torch.Tensor.div_Tensor.div_(value, *, rounding_mode=None) -> Tensor\n In-place version of \"div()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.div_.html", "category": "pytorch docs"}
{"text": "torch.diagtorch.diag(input, diagonal=0, , out=None) -> Tensor\n * If \"input\" is a vector (1-D tensor), then returns a 2-D square\n tensor with the elements of \"input\" as the diagonal.\n * If \"input\" is a matrix (2-D tensor), then returns a 1-D tensor\n with the diagonal elements of \"input\".\n The argument \"diagonal\" controls which diagonal to consider:\n * If \"diagonal\" = 0, it is the main diagonal.\n * If \"diagonal\" > 0, it is above the main diagonal.\n * If \"diagonal\" < 0, it is below the main diagonal.\n Parameters:\n * input (Tensor) -- the input tensor.\n * diagonal (int, optional) -- the diagonal to consider\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n See also:\n \"torch.diagonal()\" always returns the diagonal of its input.\n \"torch.diagflat()\" always constructs a tensor with diagonal\n elements specified by the input.\n Examples:\n Get the square matrix where the input vector is the diagonal:", "source": "https://pytorch.org/docs/stable/generated/torch.diag.html", "category": "pytorch docs"}
{"text": "\n\n\na = torch.randn(3)\n >>> a\n tensor([ 0.5950,-0.0872, 2.3298])\n >>> torch.diag(a)\n tensor([[ 0.5950, 0.0000, 0.0000],\n [ 0.0000,-0.0872, 0.0000],\n [ 0.0000, 0.0000, 2.3298]])\n >>> torch.diag(a, 1)\n tensor([[ 0.0000, 0.5950, 0.0000, 0.0000],\n [ 0.0000, 0.0000,-0.0872, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 2.3298],\n [ 0.0000, 0.0000, 0.0000, 0.0000]])\n Get the k-th diagonal of a given matrix:\n >>> a = torch.randn(3, 3)\n >>> a\n tensor([[-0.4264, 0.0255,-0.1064],\n [ 0.8795,-0.2429, 0.1374],\n [ 0.1029,-0.6482,-1.6300]])\n >>> torch.diag(a, 0)\n tensor([-0.4264,-0.2429,-1.6300])\n >>> torch.diag(a, 1)\n tensor([ 0.0255, 0.1374])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.diag.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.multilabel_soft_margin_losstorch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=None, reduce=None, reduction='mean') -> Tensor\n See \"MultiLabelSoftMarginLoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.multilabel_soft_margin_loss.html", "category": "pytorch docs"}
{"text": "ConvBnReLU1dclass torch.ao.nn.intrinsic.ConvBnReLU1d(conv, bn, relu)\n This is a sequential container which calls the Conv 1d, Batch Norm\n 1d, and ReLU modules. During quantization this will be replaced\n with the corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.iscloseTensor.isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor\n See \"torch.isclose()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isclose.html", "category": "pytorch docs"}
{"text": "torch.fft.hfft2torch.fft.hfft2(input, s=None, dim=(- 2, - 1), norm=None, , out=None) -> Tensor\n Computes the 2-dimensional discrete Fourier transform of a\n Hermitian symmetric \"input\" signal. Equivalent to \"hfftn()\" but\n only transforms the last two dimensions by default.\n \"input\" is interpreted as a one-sided Hermitian signal in the time\n domain. By the Hermitian property, the Fourier transform will be\n real-valued.\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions. With default arguments,\n the size of last dimension should be (2^n + 1) as argument s\n defaults to even output size = 2 * (last_dim_size - 1)\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], *optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html", "category": "pytorch docs"}
{"text": "either be zero-padded or trimmed to the length \"s[i]\" before\n computing the Hermitian FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Defaults to even output in\n the last dimension: \"s[-1] = 2(input.size(dim[-1]) - 1)\".\n * dim (Tuple[int], optional*) -- Dimensions to be\n transformed. The last dimension must be the half-Hermitian\n compressed dimension. Default: last two dimensions.\n * norm (*str, *optional) --\n Normalization mode. For the forward transform (\"hfft2()\"),\n these correspond to:\n * \"\"forward\"\" - normalize by \"1/n\"\n * \"\"backward\"\" - no normalization\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n FFT orthonormal)\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"ihfft2()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html", "category": "pytorch docs"}
{"text": "two transforms. This is required to make \"ihfft2()\" the exact\n inverse.\n Default is \"\"backward\"\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n Starting from a real frequency-space signal, we can generate a\n Hermitian-symmetric time-domain signal: >>> T = torch.rand(10, 9)\n\n\n\nt = torch.fft.ihfft2(T)\n Without specifying the output length to \"hfftn()\", the output will\n not round-trip properly because the input is odd-length in the last\n dimension:\ntorch.fft.hfft2(t).size()\n torch.Size([10, 10])\n So, it is recommended to always pass the signal shape \"s\".\nroundtrip = torch.fft.hfft2(t, T.size())\nroundtrip.size()\n torch.Size([10, 9])\ntorch.allclose(roundtrip, T)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_andTensor.bitwise_and() -> Tensor\n See \"torch.bitwise_and()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_and.html", "category": "pytorch docs"}
{"text": "torch.topktorch.topk(input, k, dim=None, largest=True, sorted=True, , out=None)\n Returns the \"k\" largest elements of the given \"input\" tensor along\n a given dimension.\n If \"dim\" is not given, the last dimension of the input is chosen.\n If \"largest\" is \"False\" then the k smallest elements are\n returned.\n A namedtuple of (values, indices) is returned with the values\n and indices of the largest k elements of each row of the\n input tensor in the given dimension dim.\n The boolean option \"sorted\" if \"True\", will make sure that the\n returned k elements are themselves sorted\n Parameters:\n * input (Tensor) -- the input tensor.\n * k (int) -- the k in \"top-k\"\n * dim (int, optional) -- the dimension to sort along\n * largest (bool, optional) -- controls whether to\n return largest or smallest elements\n * sorted (bool, optional*) -- controls whether to\n return the elements in sorted order", "source": "https://pytorch.org/docs/stable/generated/torch.topk.html", "category": "pytorch docs"}
{"text": "return the elements in sorted order\n Keyword Arguments:\n out (tuple, optional) -- the output tuple of (Tensor,\n LongTensor) that can be optionally given to be used as output\n buffers\n Example:\n >>> x = torch.arange(1., 6.)\n >>> x\n tensor([ 1., 2., 3., 4., 5.])\n >>> torch.topk(x, 3)\n torch.return_types.topk(values=tensor([5., 4., 3.]), indices=tensor([4, 3, 2]))", "source": "https://pytorch.org/docs/stable/generated/torch.topk.html", "category": "pytorch docs"}
{"text": "SmoothL1Lossclass torch.nn.SmoothL1Loss(size_average=None, reduce=None, reduction='mean', beta=1.0)\n Creates a criterion that uses a squared term if the absolute\n element-wise error falls below beta and an L1 term otherwise. It is\n less sensitive to outliers than \"torch.nn.MSELoss\" and in some\n cases prevents exploding gradients (e.g. see the paper Fast R-CNN\n by Ross Girshick).\n For a batch of size N, the unreduced loss can be described as:\n \\ell(x, y) = L = {l_1, ..., l_N}^T\n with\n l_n = \\begin{cases} 0.5 (x_n - y_n)^2 / beta, & \\text{if } |x_n\n - y_n| < beta \\ |x_n - y_n| - 0.5 * beta, & \\text{otherwise }\n \\end{cases}\n If reduction is not none, then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{sum'.}\n \\end{cases}\n Note:\n Smooth L1 loss can be seen as exactly \"L1Loss\", but with the |x -", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html", "category": "pytorch docs"}
{"text": "y| < beta portion replaced with a quadratic function such that\n its slope is 1 at |x - y| = beta. The quadratic segment smooths\n the L1 loss near |x - y| = 0.\n Note:\n Smooth L1 loss is closely related to \"HuberLoss\", being\n equivalent to huber(x, y) / beta (note that Smooth L1's beta\n hyper-parameter is also known as delta for Huber). This leads to\n the following differences:\n * As beta -> 0, Smooth L1 loss converges to \"L1Loss\", while\n \"HuberLoss\" converges to a constant 0 loss. When beta is 0,\n Smooth L1 loss is equivalent to L1 loss.\n * As beta -> +\\infty, Smooth L1 loss converges to a constant 0\n loss, while \"HuberLoss\" converges to \"MSELoss\".\n * For Smooth L1 loss, as beta varies, the L1 segment of the loss\n has a constant slope of 1. For \"HuberLoss\", the slope of the L1\n segment is beta.\n Parameters:\n * size_average (bool, optional) -- Deprecated (see", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html", "category": "pytorch docs"}
{"text": "\"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html", "category": "pytorch docs"}
{"text": "\"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n * beta (float, optional) -- Specifies the threshold at\n which to change between L1 and L2 loss. The value must be non-\n negative. Default: 1.0\n Shape:\n * Input: (), where * means any number of dimensions.\n * Target: (), same shape as the input.\n * Output: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as the input.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html", "category": "pytorch docs"}
{"text": "AdaptiveAvgPool2dclass torch.nn.AdaptiveAvgPool2d(output_size)\n Applies a 2D adaptive average pooling over an input signal composed\n of several input planes.\n The output is of size H x W, for any input size. The number of\n output features is equal to the number of input planes.\n Parameters:\n output_size (Union[int, None,\n Tuple[Optional[int], Optional[int]]])\n -- the target output size of the image of the form H x W. Can be\n a tuple (H, W) or a single H for a square image H x H. H and W\n can be either a \"int\", or \"None\" which means the size will be\n the same as that of the input.\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, S_{0}, S_{1}) or (C, S_{0}, S_{1}), where\n S=\\text{output_size}.\n -[ Examples ]-\n\n\n\ntarget output size of 5x7\nm = nn.AdaptiveAvgPool2d((5, 7))\ninput = torch.randn(1, 64, 8, 9)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool2d.html", "category": "pytorch docs"}
{"text": "\n\n\noutput = m(input)\ntarget output size of 7x7 (square)\nm = nn.AdaptiveAvgPool2d(7)\ninput = torch.randn(1, 64, 10, 9)\noutput = m(input)\ntarget output size of 10x7\nm = nn.AdaptiveAvgPool2d((None, 7))\ninput = torch.randn(1, 64, 10, 9)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool2d.html", "category": "pytorch docs"}
{"text": "torch.quantized_max_pool1dtorch.quantized_max_pool1d(input, kernel_size, stride=[], padding=0, dilation=1, ceil_mode=False) -> Tensor\n Applies a 1D max pooling over an input quantized tensor composed of\n several input planes.\n Parameters:\n * input (Tensor) -- quantized tensor\n * kernel_size (list of python:int) -- the size of the\n sliding window\n * stride (\"list of int\", optional) -- the stride of the\n sliding window\n * padding (\"list of int\", optional) -- padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2\n * dilation (\"list of int\", optional) -- The stride between\n elements within a sliding window, must be > 0. Default 1\n * ceil_mode (bool, optional) -- If True, will use ceil\n instead of floor to compute the output shape. Defaults to\n False.\n Returns:\n A quantized tensor with max_pool1d applied.\n Return type:\n Tensor\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_max_pool1d.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n Example:\n >>> qx = torch.quantize_per_tensor(torch.rand(2, 2), 1.5, 3, torch.quint8)\n >>> torch.quantized_max_pool1d(qx, [2])\n tensor([[0.0000],\n [1.5000]], size=(2, 1), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=1.5, zero_point=3)", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_max_pool1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.rsqrt_Tensor.rsqrt_() -> Tensor\n In-place version of \"rsqrt()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.rsqrt_.html", "category": "pytorch docs"}
{"text": "torch.sorttorch.sort(input, dim=- 1, descending=False, stable=False, , out=None)\n Sorts the elements of the \"input\" tensor along a given dimension in\n ascending order by value.\n If \"dim\" is not given, the last dimension of the input is chosen.\n If \"descending\" is \"True\" then the elements are sorted in\n descending order by value.\n If \"stable\" is \"True\" then the sorting routine becomes stable,\n preserving the order of equivalent elements.\n A namedtuple of (values, indices) is returned, where the values\n are the sorted values and indices are the indices of the elements\n in the original input tensor.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int, optional) -- the dimension to sort along\n * descending (bool, optional) -- controls the sorting\n order (ascending or descending)\n * stable (bool, optional*) -- makes the sorting routine", "source": "https://pytorch.org/docs/stable/generated/torch.sort.html", "category": "pytorch docs"}
{"text": "stable, which guarantees that the order of equivalent elements\n is preserved.\n Keyword Arguments:\n out (tuple, optional) -- the output tuple of\n (Tensor, LongTensor) that can be optionally given to be used\n as output buffers\n Example:\n >>> x = torch.randn(3, 4)\n >>> sorted, indices = torch.sort(x)\n >>> sorted\n tensor([[-0.2162, 0.0608, 0.6719, 2.3332],\n [-0.5793, 0.0061, 0.6058, 0.9497],\n [-0.5071, 0.3343, 0.9553, 1.0960]])\n >>> indices\n tensor([[ 1, 0, 2, 3],\n [ 3, 1, 0, 2],\n [ 0, 3, 1, 2]])\n >>> sorted, indices = torch.sort(x, 0)\n >>> sorted\n tensor([[-0.5071, -0.2162, 0.6719, -0.5793],\n [ 0.0608, 0.0061, 0.9497, 0.3343],\n [ 0.6058, 0.9553, 1.0960, 2.3332]])\n >>> indices\n tensor([[ 2, 0, 0, 1],\n [ 0, 1, 1, 2],\n [ 1, 2, 2, 0]])\n >>> x = torch.tensor([0, 1] * 9)", "source": "https://pytorch.org/docs/stable/generated/torch.sort.html", "category": "pytorch docs"}
{"text": "\n\n\nx = torch.tensor([0, 1] * 9)\n >>> x.sort()\n torch.return_types.sort(\n values=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]),\n indices=tensor([ 2, 16, 4, 6, 14, 8, 0, 10, 12, 9, 17, 15, 13, 11, 7, 5, 3, 1]))\n >>> x.sort(stable=True)\n torch.return_types.sort(\n values=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]),\n indices=tensor([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 1, 3, 5, 7, 9, 11, 13, 15, 17]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sort.html", "category": "pytorch docs"}
{"text": "torch.sparse.addmmtorch.sparse.addmm(mat, mat1, mat2, , beta=1., alpha=1.) -> Tensor\n This function does exact same thing as \"torch.addmm()\" in the\n forward, except that it supports backward for sparse COO matrix\n \"mat1\". When \"mat1\" is a COO tensor it must have sparse_dim = 2.\n When inputs are COO tensors, this function also supports backward\n for both inputs.\n Supports both CSR and COO storage formats.\n Note:\n This function doesn't support computing derivaties with respect\n to CSR matrices.\n Parameters:\n * mat (Tensor) -- a dense matrix to be added\n * mat1 (Tensor) -- a sparse matrix to be multiplied\n * mat2 (Tensor) -- a dense matrix to be multiplied\n * beta (Number, optional) -- multiplier for \"mat\"\n (\\beta)\n * alpha (Number, optional*) -- multiplier for mat1 @\n mat2 (\\alpha)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.addmm.html", "category": "pytorch docs"}
{"text": "torch.is_nonzerotorch.is_nonzero(input)\n Returns True if the \"input\" is a single element tensor which is not\n equal to zero after type conversions. i.e. not equal to\n \"torch.tensor([0.])\" or \"torch.tensor([0])\" or\n \"torch.tensor([False])\". Throws a \"RuntimeError\" if \"torch.numel()\n != 1\" (even in case of sparse tensors).\n Parameters:\n input (Tensor) -- the input tensor.\n Examples:\n >>> torch.is_nonzero(torch.tensor([0.]))\n False\n >>> torch.is_nonzero(torch.tensor([1.5]))\n True\n >>> torch.is_nonzero(torch.tensor([False]))\n False\n >>> torch.is_nonzero(torch.tensor([3]))\n True\n >>> torch.is_nonzero(torch.tensor([1, 3, 5]))\n Traceback (most recent call last):\n ...\n RuntimeError: bool value of Tensor with more than one value is ambiguous\n >>> torch.is_nonzero(torch.tensor([]))\n Traceback (most recent call last):\n ...\n RuntimeError: bool value of Tensor with no values is ambiguous", "source": "https://pytorch.org/docs/stable/generated/torch.is_nonzero.html", "category": "pytorch docs"}
{"text": "torch.signbittorch.signbit(input, , out=None) -> Tensor\n Tests if each element of \"input\" has its sign bit set or not.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.tensor([0.7, -1.2, 0., 2.3])\n >>> torch.signbit(a)\n tensor([ False, True, False, False])\n >>> a = torch.tensor([-0.0, 0.0])\n >>> torch.signbit(a)\n tensor([ True, False])\n Note:\n signbit handles signed zeros, so negative zero (-0) returns True.", "source": "https://pytorch.org/docs/stable/generated/torch.signbit.html", "category": "pytorch docs"}
{"text": "torch.kaiser_windowtorch.kaiser_window(window_length, periodic=True, beta=12.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Computes the Kaiser window with window length \"window_length\" and\n shape parameter \"beta\".\n Let I_0 be the zeroth order modified Bessel function of the first\n kind (see \"torch.i0()\") and \"N = L - 1\" if \"periodic\" is False and\n \"L\" if \"periodic\" is True, where \"L\" is the \"window_length\". This\n function computes:\n out_i = I_0 \\left( \\beta \\sqrt{1 - \\left( {\\frac{i - N/2}{N/2}}\n \\right) ^2 } \\right) / I_0( \\beta )\n Calling \"torch.kaiser_window(L, B, periodic=True)\" is equivalent to\n calling \"torch.kaiser_window(L + 1, B, periodic=False)[:-1])\". The\n \"periodic\" argument is intended as a helpful shorthand to produce a\n periodic window as input to functions like \"torch.stft()\".\n Note:\n If \"window_length\" is one, then the returned window is a single\n element tensor containing a one.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.kaiser_window.html", "category": "pytorch docs"}
{"text": "Parameters:\n * window_length (int) -- length of the window.\n * periodic (bool, optional) -- If True, returns a\n periodic window suitable for use in spectral analysis. If\n False, returns a symmetric window suitable for use in filter\n design.\n * beta (float, optional) -- shape parameter for the\n window.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU", "source": "https://pytorch.org/docs/stable/generated/torch.kaiser_window.html", "category": "pytorch docs"}
{"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".", "source": "https://pytorch.org/docs/stable/generated/torch.kaiser_window.html", "category": "pytorch docs"}
{"text": "torch.prodtorch.prod(input, , dtype=None) -> Tensor\n Returns the product of all elements in the \"input\" tensor.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Example:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[-0.8020, 0.5428, -1.5854]])\n >>> torch.prod(a)\n tensor(0.6902)\n torch.prod(input, dim, keepdim=False, , dtype=None) -> Tensor\n Returns the product of each row of the \"input\" tensor in the given\n dimension \"dim\".\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in", "source": "https://pytorch.org/docs/stable/generated/torch.prod.html", "category": "pytorch docs"}
{"text": "the output tensor having 1 fewer dimension than \"input\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Example:\n >>> a = torch.randn(4, 2)\n >>> a\n tensor([[ 0.5261, -0.3837],\n [ 1.1857, -0.2498],\n [-1.1646, 0.0705],\n [ 1.1131, -1.0629]])\n >>> torch.prod(a, 1)\n tensor([-0.2018, -0.2962, -0.0821, -1.1831])", "source": "https://pytorch.org/docs/stable/generated/torch.prod.html", "category": "pytorch docs"}
{"text": "torch.Tensor.stftTensor.stft(n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None)\n See \"torch.stft()\"\n Warning:\n This function changed signature at version 0.4.1. Calling with\n the previous signature may cause error or return incorrect\n result.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.stft.html", "category": "pytorch docs"}
{"text": "torch.fft.hfftntorch.fft.hfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\n Computes the n-dimensional discrete Fourier transform of a\n Hermitian symmetric \"input\" signal.\n \"input\" is interpreted as a one-sided Hermitian signal in the time\n domain. By the Hermitian property, the Fourier transform will be\n real-valued.\n Note:\n \"hfftn()\"/\"ihfftn()\" are analogous to \"rfftn()\"/\"irfftn()\". The\n real FFT expects a real signal in the time-domain and gives\n Hermitian symmetry in the frequency-domain. The Hermitian FFT is\n the opposite; Hermitian symmetric in the time-domain and real-\n valued in the frequency-domain. For this reason, special care\n needs to be taken with the shape argument \"s\", in the same way as\n with \"irfftn()\".\n Note:\n Some input frequencies must be real-valued to satisfy the\n Hermitian property. In these cases the imaginary component will\n be ignored. For example, any imaginary component in the zero-", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html", "category": "pytorch docs"}
{"text": "frequency term cannot be represented in a real output and so will\n always be ignored.\n Note:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"s\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n odd signals will not round-trip properly. It is recommended to\n always pass the signal shape \"s\".\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions. With default arguments,\n the size of last dimension should be (2^n + 1) as argument s\n defaults to even output size = 2 * (last_dim_size - 1)\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], optional) -- Signal size in the", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html", "category": "pytorch docs"}
{"text": "transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Defaults to even output in\n the last dimension: \"s[-1] = 2(input.size(dim[-1]) - 1)\".\n * dim (Tuple[int], optional*) -- Dimensions to be\n transformed. The last dimension must be the half-Hermitian\n compressed dimension. Default: all dimensions, or the last\n \"len(s)\" dimensions if \"s\" is given.\n * norm (*str, *optional) --\n Normalization mode. For the forward transform (\"hfftn()\"),\n these correspond to:\n * \"\"forward\"\" - normalize by \"1/n\"\n * \"\"backward\"\" - no normalization\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n FFT orthonormal)\n Where \"n = prod(s)\" is the logical FFT size. Calling the", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html", "category": "pytorch docs"}
{"text": "backward transform (\"ihfftn()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. This is required to make \"ihfftn()\" the exact\n inverse.\n Default is \"\"backward\"\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n Starting from a real frequency-space signal, we can generate a\n Hermitian-symmetric time-domain signal: >>> T = torch.rand(10, 9)\n\n\n\nt = torch.fft.ihfftn(T)\n Without specifying the output length to \"hfftn()\", the output will\n not round-trip properly because the input is odd-length in the last\n dimension:\ntorch.fft.hfftn(t).size()\n torch.Size([10, 10])\n So, it is recommended to always pass the signal shape \"s\".\nroundtrip = torch.fft.hfftn(t, T.size())\nroundtrip.size()\n torch.Size([10, 9])\ntorch.allclose(roundtrip, T)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html", "category": "pytorch docs"}
{"text": "MultiMarginLossclass torch.nn.MultiMarginLoss(p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean')\n Creates a criterion that optimizes a multi-class classification\n hinge loss (margin-based loss) between input x (a 2D mini-batch\n Tensor) and output y (which is a 1D tensor of target class\n indices, 0 \\leq y \\leq \\text{x.size}(1)-1):\n For each mini-batch sample, the loss in terms of the 1D input x and\n scalar output y is:\n \\text{loss}(x, y) = \\frac{\\sum_i \\max(0, \\text{margin} - x[y] +\n x[i])^p}{\\text{x.size}(0)}\n where i \\in \\left{0, \\; \\cdots , \\; \\text{x.size}(0) - 1\\right}\n and i \\neq y.\n Optionally, you can give non-equal weighting on the classes by\n passing a 1D \"weight\" tensor into the constructor.\n The loss function then becomes:\n \\text{loss}(x, y) = \\frac{\\sum_i \\max(0, w[y] * (\\text{margin} -\n x[y] + x[i]))^p}{\\text{x.size}(0)}\n Parameters:\n * p (int, optional) -- Has a default value of 1. 1 and", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html", "category": "pytorch docs"}
{"text": "2 are the only supported values.\n * margin (float, optional) -- Has a default value of\n 1.\n * weight (Tensor, optional) -- a manual rescaling\n weight given to each class. If given, it has to be a Tensor of\n size C. Otherwise, it is treated as if having all ones.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html", "category": "pytorch docs"}
{"text": "batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (N, C) or (C), where N is the batch size and C is the\n number of classes.\n * Target: (N) or (), where each value is 0 \\leq\n \\text{targets}[i] \\leq C-1.\n * Output: scalar. If \"reduction\" is \"'none'\", then same shape as\n the target.\n Examples:\n >>> loss = nn.MultiMarginLoss()\n >>> x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html", "category": "pytorch docs"}
{"text": "\n\n\nx = torch.tensor([[0.1, 0.2, 0.4, 0.8]])\n >>> y = torch.tensor([3])\n >>> # 0.25 * ((1-(0.8-0.1)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))\n >>> loss(x, y)\n tensor(0.32...)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html", "category": "pytorch docs"}
{"text": "torch.slice_scattertorch.slice_scatter(input, src, dim=0, start=None, end=None, step=1) -> Tensor\n Embeds the values of the \"src\" tensor into \"input\" at the given\n dimension. This function returns a tensor with fresh storage; it\n does not create a view.\n Parameters:\n * input (Tensor) -- the input tensor.\n * src (Tensor) -- The tensor to embed into \"input\"\n * dim (int) -- the dimension to insert the slice into\n * start (Optional[int]) -- the start index of where\n to insert the slice\n * end (Optional[int]) -- the end index of where to\n insert the slice\n * step (int) -- the how many elements to skip in\n Example:\n >>> a = torch.zeros(8, 8)\n >>> b = torch.ones(8)\n >>> a.slice_scatter(b, start=6)\n tensor([[0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],", "source": "https://pytorch.org/docs/stable/generated/torch.slice_scatter.html", "category": "pytorch docs"}
{"text": "[0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n [1., 1., 1., 1., 1., 1., 1., 1.],\n [1., 1., 1., 1., 1., 1., 1., 1.]])\n >>> b = torch.ones(2)\n >>> a.slice_scatter(b, dim=1, start=2, end=6, step=2)\n tensor([[0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.]])", "source": "https://pytorch.org/docs/stable/generated/torch.slice_scatter.html", "category": "pytorch docs"}
{"text": "torch.dettorch.det(input) -> Tensor\n Alias for \"torch.linalg.det()\"", "source": "https://pytorch.org/docs/stable/generated/torch.det.html", "category": "pytorch docs"}
{"text": "torch.linalg.matrix_powertorch.linalg.matrix_power(A, n, , out=None) -> Tensor\n Computes the n-th power of a square matrix for an integer n.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n If \"n\"= 0, it returns the identity matrix (or batch) of the same\n shape as \"A\". If \"n\" is negative, it returns the inverse of each\n matrix (if invertible) raised to the power of abs(n).\n Note:\n Consider using \"torch.linalg.solve()\" if possible for multiplying\n a matrix on the left by a negative power as, if \"n\"> 0:\n matrix_power(torch.linalg.solve(A, B), n) == matrix_power(A, -n) @ B\n It is always preferred to use \"solve()\" when possible, as it is\n faster and more numerically stable than computing A^{-n}\n explicitly.\n See also:\n \"torch.linalg.solve()\" computes \"A\".inverse() @ *\"B\" with a", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html", "category": "pytorch docs"}
{"text": "numerically stable algorithm.\n Parameters:\n * A (Tensor) -- tensor of shape (, m, m) where *** is\n zero or more batch dimensions.\n * n (int) -- the exponent.\n Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Raises:\n RuntimeError -- if \"n\"< 0* and the matrix \"A\" or any matrix\n in the batch of matrices \"A\" is not invertible.\n Examples:\n >>> A = torch.randn(3, 3)\n >>> torch.linalg.matrix_power(A, 0)\n tensor([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]])\n >>> torch.linalg.matrix_power(A, 3)\n tensor([[ 1.0756, 0.4980, 0.0100],\n [-1.6617, 1.4994, -1.9980],\n [-0.4509, 0.2731, 0.8001]])\n >>> torch.linalg.matrix_power(A.expand(2, -1, -1), -2)\n tensor([[[ 0.2640, 0.4571, -0.5511],\n [-1.0163, 0.3491, -1.5292],\n [-0.4899, 0.0822, 0.2773]],", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html", "category": "pytorch docs"}
{"text": "[-0.4899, 0.0822, 0.2773]],\n [[ 0.2640, 0.4571, -0.5511],\n [-1.0163, 0.3491, -1.5292],\n [-0.4899, 0.0822, 0.2773]]])", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html", "category": "pytorch docs"}
{"text": "Adadeltaclass torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0, foreach=None, *, maximize=False, differentiable=False)\n Implements Adadelta algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\: \\theta_0\n \\text{ (params)}, \\: f(\\theta) \\text{ (objective)}, \\:\n \\rho \\text{ (decay)}, \\: \\lambda \\text{ (weight decay)}\n \\ &\\textbf{initialize} : v_0 \\leftarrow 0 \\: \\text{\n (square avg)}, \\: u_0 \\leftarrow 0 \\: \\text{\n (accumulate variables)} \\[-1.ex]\n &\\rule{110mm}{0.4pt}\n \\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\n &\\hspace{5mm}if \\: \\lambda \\neq 0\n \\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda\n \\theta_{t-1} \\ &\\hspace{5mm}", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"}
{"text": "v_t \\leftarrow v_{t-1} \\rho + g^2_t (1 - \\rho)\n \\ &\\hspace{5mm}\\Delta x_t \\leftarrow\n \\frac{\\sqrt{u_{t-1} + \\epsilon }}{ \\sqrt{v_t +\n \\epsilon} }g_t \\hspace{21mm} \\\n &\\hspace{5mm} u_t \\leftarrow u_{t-1} \\rho + \\Delta\n x^2_t (1 - \\rho)\n \\ &\\hspace{5mm}\\theta_t \\leftarrow \\theta_{t-1} -\n \\gamma \\Delta x_t \\ &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\n For further details regarding the algorithm we refer to ADADELTA:\n An Adaptive Learning Rate Method.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * rho (float, optional) -- coefficient used for\n computing a running average of squared gradients (default:\n 0.9)\n * eps (float, optional) -- term added to the", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"}
{"text": "denominator to improve numerical stability (default: 1e-6)\n * lr (float, optional) -- coefficient that scale delta\n before it is applied to the parameters (default: 1.0)\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n * differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"}
{"text": "False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"}
{"text": "Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"}
{"text": "\"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"}
{"text": "\".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"}
{"text": "torch.autograd.function.FunctionCtx.mark_non_differentiableFunctionCtx.mark_non_differentiable(args)\n Marks outputs as non-differentiable.\n This should be called at most once, only from inside the\n \"forward()\" method, and all arguments should be tensor outputs.*\n This will mark outputs as not requiring gradients, increasing the\n efficiency of backward computation. You still need to accept a\n gradient for each output in \"backward()\", but it's always going to\n be a zero tensor with the same shape as the shape of a\n corresponding output.\n This is used e.g. for indices returned from a sort. See example::\n >>> class Func(Function):\n >>> @staticmethod\n >>> def forward(ctx, x):\n >>> sorted, idx = x.sort()\n >>> ctx.mark_non_differentiable(idx)\n >>> ctx.save_for_backward(x, idx)\n >>> return sorted, idx\n >>>\n >>> @staticmethod", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_non_differentiable.html", "category": "pytorch docs"}
{"text": "\n\n\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, g1, g2): # still need to accept g2\n >>> x, idx = ctx.saved_tensors\n >>> grad_input = torch.zeros_like(x)\n >>> grad_input.index_add_(0, idx, g1)\n >>> return grad_input\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_non_differentiable.html", "category": "pytorch docs"}
{"text": "torch.sparse.mmtorch.sparse.mm()\n Performs a matrix multiplication of the sparse matrix \"mat1\" and\n the (sparse or strided) matrix \"mat2\". Similar to \"torch.mm()\",\n if \"mat1\" is a (n \\times m) tensor, \"mat2\" is a (m \\times p)\n tensor, out will be a (n \\times p) tensor. When \"mat1\" is a COO\n tensor it must have sparse_dim = 2. When inputs are COO\n tensors, this function also supports backward for both inputs.\n Supports both CSR and COO storage formats.\n Note:\n This function doesn't support computing derivaties with respect\n to CSR matrices.\n Args:\n mat1 (Tensor): the first sparse matrix to be multiplied mat2\n (Tensor): the second matrix to be multiplied, which could be\n sparse or dense\n Shape:\n The format of the output tensor of this function follows: -\n sparse x sparse -> sparse - sparse x dense -> dense\n Example:\n >>> a = torch.randn(2, 3).to_sparse().requires_grad_(True)\n >>> a", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.mm.html", "category": "pytorch docs"}
{"text": "\n\n\na\n tensor(indices=tensor([[0, 0, 0, 1, 1, 1],\n [0, 1, 2, 0, 1, 2]]),\n values=tensor([ 1.5901, 0.0183, -0.6146, 1.8061, -0.0112, 0.6302]),\n size=(2, 3), nnz=6, layout=torch.sparse_coo, requires_grad=True)\n >>> b = torch.randn(3, 2, requires_grad=True)\n >>> b\n tensor([[-0.6479, 0.7874],\n [-1.2056, 0.5641],\n [-1.1716, -0.9923]], requires_grad=True)\n >>> y = torch.sparse.mm(a, b)\n >>> y\n tensor([[-0.3323, 1.8723],\n [-1.8951, 0.7904]], grad_fn=)\n >>> y.sum().backward()\n >>> a.grad\n tensor(indices=tensor([[0, 0, 0, 1, 1, 1],\n [0, 1, 2, 0, 1, 2]]),\n values=tensor([ 0.1394, -0.6415, -2.1639, 0.1394, -0.6415, -2.1639]),\n size=(2, 3), nnz=6, layout=torch.sparse_coo)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.mm.html", "category": "pytorch docs"}
{"text": "torch.Tensor.get_deviceTensor.get_device() -> Device ordinal (Integer)\n For CUDA tensors, this function returns the device ordinal of the\n GPU on which the tensor resides. For CPU tensors, this function\n returns -1.\n Example:\n >>> x = torch.randn(3, 4, 5, device='cuda:0')\n >>> x.get_device()\n 0\n >>> x.cpu().get_device()\n -1", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.get_device.html", "category": "pytorch docs"}
{"text": "torch.cuda.set_per_process_memory_fractiontorch.cuda.set_per_process_memory_fraction(fraction, device=None)\n Set memory fraction for a process. The fraction is used to limit an\n caching allocator to allocated memory on a CUDA device. The allowed\n value equals the total visible memory multiplied fraction. If\n trying to allocate more than the allowed value in a process, will\n raise an out of memory error in allocator.\n Parameters:\n * fraction (float) -- Range: 0~1. Allowed memory equals\n total_memory * fraction.\n * device (torch.device or int, optional) --\n selected device. If it is \"None\" the default CUDA device is\n used.\n Note:\n In general, the total available free memory is less than the\n total capacity.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_per_process_memory_fraction.html", "category": "pytorch docs"}
{"text": "ConvBn2dclass torch.ao.nn.intrinsic.ConvBn2d(conv, bn)\n This is a sequential container which calls the Conv 2d and Batch\n Norm 2d modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBn2d.html", "category": "pytorch docs"}
{"text": "torch.func.gradtorch.func.grad(func, argnums=0, has_aux=False)\n \"grad\" operator helps computing gradients of \"func\" with respect to\n the input(s) specified by \"argnums\". This operator can be nested to\n compute higher-order gradients.\n Parameters:\n * func (Callable) -- A Python function that takes one or\n more arguments. Must return a single-element Tensor. If\n specified \"has_aux\" equals \"True\", function can return a tuple\n of single-element Tensor and other auxiliary objects:\n \"(output, aux)\".\n * argnums (int or Tuple[int]) -- Specifies\n arguments to compute gradients with respect to. \"argnums\" can\n be single integer or tuple of integers. Default: 0.\n * has_aux (bool) -- Flag indicating that \"func\" returns a\n tensor and other auxiliary objects: \"(output, aux)\". Default:\n False.\n Returns:\n Function to compute gradients with respect to its inputs. By", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad.html", "category": "pytorch docs"}
{"text": "default, the output of the function is the gradient tensor(s)\n with respect to the first argument. If specified \"has_aux\"\n equals \"True\", tuple of gradients and output auxiliary objects\n is returned. If \"argnums\" is a tuple of integers, a tuple of\n output gradients with respect to each \"argnums\" value is\n returned.\n Return type:\n Callable\n Example of using \"grad\":\n\n\n\nfrom torch.func import grad\nx = torch.randn([])\ncos_x = grad(lambda x: torch.sin(x))(x)\nassert torch.allclose(cos_x, x.cos())\nSecond-order gradients\nneg_sin_x = grad(grad(lambda x: torch.sin(x)))(x)\nassert torch.allclose(neg_sin_x, -x.sin())\n When composed with \"vmap\", \"grad\" can be used to compute per-\n sample-gradients:\nfrom torch.func import grad, vmap\nbatch_size, feature_size = 3, 5\ndef model(weights, feature_vec):\n # Very simple linear model with activation\n assert feature_vec.dim() == 1\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad.html", "category": "pytorch docs"}
{"text": "\n\n\nassert feature_vec.dim() == 1\nreturn feature_vec.dot(weights).relu()\n\ndef compute_loss(weights, example, target):\n y = model(weights, example)\n return ((y - target) ** 2).mean() # MSELoss\nweights = torch.randn(feature_size, requires_grad=True)\nexamples = torch.randn(batch_size, feature_size)\ntargets = torch.randn(batch_size)\ninputs = (weights, examples, targets)\ngrad_weight_per_example = vmap(grad(compute_loss), in_dims=(None, 0, 0))(*inputs)\n Example of using \"grad\" with \"has_aux\" and \"argnums\":\nfrom torch.func import grad\ndef my_loss_func(y, y_pred):\n loss_per_sample = (0.5 * y_pred - y) ** 2\n loss = loss_per_sample.mean()\n return loss, (y_pred, loss_per_sample)\nfn = grad(my_loss_func, argnums=(0, 1), has_aux=True)\ny_true = torch.rand(4)\ny_preds = torch.rand(4, requires_grad=True)\nout = fn(y_true, y_preds)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad.html", "category": "pytorch docs"}
{"text": "\n\n\nout = fn(y_true, y_preds)\n> output is ((grads w.r.t y_true, grads w.r.t y_preds), (y_pred, loss_per_sample))\nNote:\n Using PyTorch \"torch.no_grad\" together with \"grad\".Case 1: Using\n \"torch.no_grad\" inside a function:\n >>> def f(x):\n >>> with torch.no_grad():\n >>> c = x ** 2\n >>> return x - c\n In this case, \"grad(f)(x)\" will respect the inner\n \"torch.no_grad\".Case 2: Using \"grad\" inside \"torch.no_grad\"\n context manager:\n >>> with torch.no_grad():\n >>> grad(f)(x)\n In this case, \"grad\" will respect the inner \"torch.no_grad\", but\n not the outer one. This is because \"grad\" is a \"function\n transform\": its result should not depend on the result of a\n context manager outside of \"f\".\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad.html", "category": "pytorch docs"}
{"text": "torch.sparse_coo_tensortorch.sparse_coo_tensor(indices, values, size=None, , dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\n Constructs a sparse tensor in COO(rdinate) format with specified\n values at the given \"indices\".\n Note:\n This function returns an uncoalesced tensor.\n Note:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n Parameters:\n * indices (array_like*) -- Initial data for the tensor. Can\n be a list, tuple, NumPy \"ndarray\", scalar, and other types.\n Will be cast to a \"torch.LongTensor\" internally. The indices\n are the coordinates of the non-zero values in the matrix, and\n thus should be two-dimensional where the first dimension is", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"}
{"text": "the number of tensor dimensions and the second dimension is\n the number of non-zero values.\n * values (array_like) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other\n types.\n * size (list, tuple, or \"torch.Size\", optional) -- Size of\n the sparse tensor. If not provided the size will be inferred\n as the minimum size big enough to hold all non-zero elements.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device for\n the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"}
{"text": "tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * check_invariants (bool, optional) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n Example:\n >>> i = torch.tensor([[0, 1, 1],\n ... [2, 0, 2]])\n >>> v = torch.tensor([3, 4, 5], dtype=torch.float32)\n >>> torch.sparse_coo_tensor(i, v, [2, 4])\n tensor(indices=tensor([[0, 1, 1],\n [2, 0, 2]]),\n values=tensor([3., 4., 5.]),\n size=(2, 4), nnz=3, layout=torch.sparse_coo)\n >>> torch.sparse_coo_tensor(i, v) # Shape inference\n tensor(indices=tensor([[0, 1, 1],\n [2, 0, 2]]),\n values=tensor([3., 4., 5.]),\n size=(2, 3), nnz=3, layout=torch.sparse_coo)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.sparse_coo_tensor(i, v, [2, 4],\n ... dtype=torch.float64,\n ... device=torch.device('cuda:0'))\n tensor(indices=tensor([[0, 1, 1],\n [2, 0, 2]]),\n values=tensor([3., 4., 5.]),\n device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64,\n layout=torch.sparse_coo)\n # Create an empty sparse tensor with the following invariants:\n # 1. sparse_dim + dense_dim = len(SparseTensor.shape)\n # 2. SparseTensor._indices().shape = (sparse_dim, nnz)\n # 3. SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:])\n #\n # For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and\n # sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0))\n >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1])\n tensor(indices=tensor([], size=(1, 0)),\n values=tensor([], size=(0,)),\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"}
{"text": "values=tensor([], size=(0,)),\n size=(1,), nnz=0, layout=torch.sparse_coo)\n # and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and\n # sparse_dim = 1\n >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2])\n tensor(indices=tensor([], size=(1, 0)),\n values=tensor([], size=(0, 2)),\n size=(1, 2), nnz=0, layout=torch.sparse_coo)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"}
{"text": "torch._foreach_roundtorch._foreach_round(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.round()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_round.html", "category": "pytorch docs"}
{"text": "LeakyReLUclass torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)\n Applies the element-wise function:\n \\text{LeakyReLU}(x) = \\max(0, x) + \\text{negative_slope} *\n \\min(0, x)\n or\n \\text{LeakyReLU}(x) = \\begin{cases} x, & \\text{ if } x \\geq 0 \\\n \\text{negative_slope} \\times x, & \\text{ otherwise }\n \\end{cases}\n Parameters:\n * negative_slope (float) -- Controls the angle of the\n negative slope. Default: 1e-2\n * inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: () where *** means, any number of additional\n dimensions\n * Output: (), same shape as the input\n [image]\n Examples:\n >>> m = nn.LeakyReLU(0.1)\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html", "category": "pytorch docs"}
{"text": "torch.Tensor.isrealTensor.isreal() -> Tensor\n See \"torch.isreal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isreal.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.leaky_relutorch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) -> Tensor\n Applies element-wise, \\text{LeakyReLU}(x) = \\max(0, x) +\n \\text{negative_slope} * \\min(0, x)\n See \"LeakyReLU\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.leaky_relu.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fill_diagonal_Tensor.fill_diagonal_(fill_value, wrap=False) -> Tensor\n Fill the main diagonal of a tensor that has at least 2-dimensions.\n When dims>2, all dimensions of input must be of equal length. This\n function modifies the input tensor in-place, and returns the input\n tensor.\n Parameters:\n * fill_value (Scalar) -- the fill value\n * wrap (bool) -- the diagonal 'wrapped' after N columns\n for tall matrices.\n Example:\n >>> a = torch.zeros(3, 3)\n >>> a.fill_diagonal_(5)\n tensor([[5., 0., 0.],\n [0., 5., 0.],\n [0., 0., 5.]])\n >>> b = torch.zeros(7, 3)\n >>> b.fill_diagonal_(5)\n tensor([[5., 0., 0.],\n [0., 5., 0.],\n [0., 0., 5.],\n [0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]])\n >>> c = torch.zeros(7, 3)\n >>> c.fill_diagonal_(5, wrap=True)\n tensor([[5., 0., 0.],", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fill_diagonal_.html", "category": "pytorch docs"}
{"text": "tensor([[5., 0., 0.],\n [0., 5., 0.],\n [0., 0., 5.],\n [0., 0., 0.],\n [5., 0., 0.],\n [0., 5., 0.],\n [0., 0., 5.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fill_diagonal_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.acosTensor.acos() -> Tensor\n See \"torch.acos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.acos.html", "category": "pytorch docs"}
{"text": "ConstantLRclass torch.optim.lr_scheduler.ConstantLR(optimizer, factor=0.3333333333333333, total_iters=5, last_epoch=- 1, verbose=False)\n Decays the learning rate of each parameter group by a small\n constant factor until the number of epochs reaches a pre-defined\n milestone: total_iters. Notice that such decay can happen\n simultaneously with other changes to the learning rate from outside\n this scheduler. When last_epoch=-1, sets initial lr as lr.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * factor (float) -- The number we multiply the learning rate\n by until the milestone. Default: 1./3.\n * total_iters (int) -- The number of steps that the\n scheduler decays the learning rate. Default: 5.\n * last_epoch (int) -- The index of the last epoch.\n Default: -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ConstantLR.html", "category": "pytorch docs"}
{"text": "-[ Example ]-\n >>> # Assuming optimizer uses lr = 0.05 for all groups\n >>> # lr = 0.025   if epoch == 0\n >>> # lr = 0.025   if epoch == 1\n >>> # lr = 0.025   if epoch == 2\n >>> # lr = 0.025   if epoch == 3\n >>> # lr = 0.05    if epoch >= 4\n >>> scheduler = ConstantLR(self.opt, factor=0.5, total_iters=4)\n >>> for epoch in range(100):\n >>>     train(...)\n >>>     validate(...)\n >>>     scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the scheduler's state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ConstantLR.html", "category": "pytorch docs"}
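The schedule in the ConstantLR example above is simple enough to compute by hand; a torch-free sketch of the rule (multiply the base lr by `factor` until `total_iters` epochs have elapsed, then restore it):

```python
# Plain-Python sketch of the ConstantLR schedule rule.
def constant_lr(base_lr, epoch, factor=1.0 / 3, total_iters=5):
    # lr is scaled by `factor` for epochs [0, total_iters), then restored
    return base_lr * factor if epoch < total_iters else base_lr

# Reproduces the docstring example (base lr 0.05, factor 0.5, total_iters 4):
lrs = [constant_lr(0.05, e, factor=0.5, total_iters=4) for e in range(6)]
print(lrs)  # 0.025 for epochs 0-3, then 0.05 from epoch 4 on
```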
{"text": "BNReLU2dclass torch.ao.nn.intrinsic.quantized.BNReLU2d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)\n A BNReLU2d module is a fused module of BatchNorm2d and ReLU\n We adopt the same interface as \"torch.ao.nn.quantized.BatchNorm2d\".\n Variables:\n torch.ao.nn.quantized.BatchNorm2d (Same as) --", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.BNReLU2d.html", "category": "pytorch docs"}
{"text": "freeze_bn_statsclass torch.ao.nn.intrinsic.qat.freeze_bn_stats(mod)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.freeze_bn_stats.html", "category": "pytorch docs"}
{"text": "ConvertCustomConfigclass torch.ao.quantization.fx.custom_config.ConvertCustomConfig\n Custom configuration for \"convert_fx()\".\n Example usage:\n convert_custom_config = ConvertCustomConfig() .set_observed_to_quantized_mapping(ObservedCustomModule, QuantizedCustomModule) .set_preserved_attributes([\"attr1\", \"attr2\"])\n classmethod from_dict(convert_custom_config_dict)\n Create a \"ConvertCustomConfig\" from a dictionary with the\n following items:\n \"observed_to_quantized_custom_module_class\": a nested\n dictionary mapping from quantization mode to an inner mapping\n from observed module classes to quantized module classes,\n e.g.:: { \"static\": {FloatCustomModule: ObservedCustomModule},\n \"dynamic\": {FloatCustomModule: ObservedCustomModule},\n \"weight_only\": {FloatCustomModule: ObservedCustomModule} }\n \"preserved_attributes\": a list of attributes that persist", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.ConvertCustomConfig.html", "category": "pytorch docs"}
{"text": "even if they are not used in \"forward\"\n This function is primarily for backward compatibility and may be\n removed in the future.\n Return type:\n ConvertCustomConfig\n set_observed_to_quantized_mapping(observed_class, quantized_class, quant_type=QuantType.STATIC)\n Set the mapping from a custom observed module class to a custom\n quantized module class.\n The quantized module class must have a \"from_observed\" class\n method that converts the observed module class to the quantized\n module class.\n Return type:\n ConvertCustomConfig\n set_preserved_attributes(attributes)\n Set the names of the attributes that will persist in the graph\n module even if they are not used in the model's \"forward\"\n method.\n Return type:\n ConvertCustomConfig\n to_dict()\n Convert this \"ConvertCustomConfig\" to a dictionary with the\n items described in \"from_dict()\".\n Return type:\n Dict[str, Any]", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.ConvertCustomConfig.html", "category": "pytorch docs"}
{"text": "torch.Tensor.angleTensor.angle() -> Tensor\n See \"torch.angle()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.angle.html", "category": "pytorch docs"}
{"text": "torch.set_default_tensor_typetorch.set_default_tensor_type(t)\n Sets the default \"torch.Tensor\" type to floating point tensor type\n \"t\". This type will also be used as default floating point type for\n type inference in \"torch.tensor()\".\n The default floating point tensor type is initially\n \"torch.FloatTensor\".\n Parameters:\n t (type or string) -- the floating point tensor type\n or its name\n Example:\n >>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32\n torch.float32\n >>> torch.set_default_tensor_type(torch.DoubleTensor)\n >>> torch.tensor([1.2, 3]).dtype # a new floating point tensor\n torch.float64", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_tensor_type.html", "category": "pytorch docs"}
{"text": "PairwiseDistanceclass torch.nn.PairwiseDistance(p=2.0, eps=1e-06, keepdim=False)\n Computes the pairwise distance between input vectors, or between\n columns of input matrices.\n Distances are computed using \"p\"-norm, with constant \"eps\" added to\n avoid division by zero if \"p\" is negative, i.e.:\n \\mathrm{dist}\\left(x, y\\right) = \\left\\Vert x-y + \\epsilon e\n \\right\\Vert_p,\n where e is the vector of ones and the \"p\"-norm is given by:\n \\Vert x \\Vert_p = \\left( \\sum_{i=1}^n \\vert x_i \\vert ^ p\n \\right) ^ {1/p}.\n Parameters:\n * p (real, optional) -- the norm degree. Can be\n negative. Default: 2\n * eps (float, optional) -- Small value to avoid\n division by zero. Default: 1e-6\n * keepdim (bool, optional) -- Determines whether or\n not to keep the vector dimension. Default: False\n Shape:\n * Input1: (N, D) or (D) where N = batch dimension and D =\n vector dimension", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PairwiseDistance.html", "category": "pytorch docs"}
{"text": "vector dimension\n * Input2: (N, D) or (D), same shape as Input1\n * Output: (N) or () based on input dimension. If \"keepdim\" is\n \"True\", then (N, 1) or (1) based on input dimension.\n Examples::\n >>> pdist = nn.PairwiseDistance(p=2)\n >>> input1 = torch.randn(100, 128)\n >>> input2 = torch.randn(100, 128)\n >>> output = pdist(input1, input2)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PairwiseDistance.html", "category": "pytorch docs"}
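For intuition, the PairwiseDistance formula above can be written out for plain Python lists — a sketch of the math with the `eps` shift, not the batched torch implementation:

```python
# dist(x, y) = || x - y + eps * e ||_p, with e the all-ones vector.
def pairwise_distance(x, y, p=2.0, eps=1e-6):
    return sum(abs(a - b + eps) ** p for a, b in zip(x, y)) ** (1.0 / p)

d = pairwise_distance([0.0, 0.0], [3.0, 4.0])
print(d)  # close to 5.0, shifted slightly by eps
```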
{"text": "torch.fft.ifftntorch.fft.ifftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\n Computes the N dimensional inverse discrete Fourier transform of\n \"input\".\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension.\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the IFFT. If a length \"-1\" is specified, no padding\n is done in that dimension. Default: \"s = [input.size(d) for d\n in dim]\"\n * dim (Tuple[int], optional) -- Dimensions to be\n transformed. Default: all dimensions, or the last \"len(s)\"\n dimensions if \"s\" is given.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html", "category": "pytorch docs"}
{"text": "dimensions if \"s\" is given.\n * norm (str, optional) --\n Normalization mode. For the backward transform (\"ifftn()\"),\n these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the IFFT\n orthonormal)\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"fftn()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"ifftn()\" the exact\n inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n >>> x = torch.rand(10, 10, dtype=torch.complex64)\n >>> ifftn = torch.fft.ifftn(x)\n The discrete Fourier transform is separable, so \"ifftn()\" here is\n equivalent to two one-dimensional \"ifft()\" calls:", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html", "category": "pytorch docs"}
{"text": ">>> two_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)\n >>> torch.testing.assert_close(ifftn, two_iffts, check_stride=False)", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html", "category": "pytorch docs"}
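The separability property the ifftn example relies on holds for the plain textbook DFT as well; a naive torch-free check (O(n^2) inverse DFT, suitable for small inputs only, helper names illustrative):

```python
import cmath

def idft(x):
    # Textbook 1D inverse DFT, normalized by 1/n ("backward" mode).
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def idft2_separable(mat):
    # Inverse-transform dim 0 (columns), then dim 1 (rows).
    cols = [idft(list(c)) for c in zip(*mat)]
    return [idft(list(r)) for r in zip(*cols)]

def idft2_direct(mat):
    # The full 2D double sum, for comparison.
    R, C = len(mat), len(mat[0])
    return [[sum(mat[k][l] * cmath.exp(2j * cmath.pi * (k * m / R + l * n / C))
                 for k in range(R) for l in range(C)) / (R * C)
             for n in range(C)] for m in range(R)]

mat = [[1.0, 2.0], [3.0, 4.0]]
a, b = idft2_separable(mat), idft2_direct(mat)
ok = all(abs(a[i][j] - b[i][j]) < 1e-9 for i in range(2) for j in range(2))
print(ok)  # the two computations agree
```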
{"text": "torch.Tensor.ldexpTensor.ldexp(other) -> Tensor\n See \"torch.ldexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ldexp.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.lp_pool1dtorch.nn.functional.lp_pool1d(input, norm_type, kernel_size, stride=None, ceil_mode=False)\n Applies a 1D power-average pooling over an input signal composed of\n several input planes. If the sum of all inputs to the power of p\n is zero, the gradient is set to zero as well.\n See \"LPPool1d\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.lp_pool1d.html", "category": "pytorch docs"}
{"text": "torch.frexptorch.frexp(input, *, out=None) -> (Tensor mantissa, Tensor exponent)\n Decomposes \"input\" into mantissa and exponent tensors such that\n \\text{input} = \\text{mantissa} \\times 2^{\\text{exponent}}.\n The range of mantissa is the open interval (-1, 1).\n Supports float inputs.\n Parameters:\n input (Tensor) -- the input tensor\n Keyword Arguments:\n out (tuple, optional) -- the output tensors\n Example:\n >>> x = torch.arange(9.)\n >>> mantissa, exponent = torch.frexp(x)\n >>> mantissa\n tensor([0.0000, 0.5000, 0.5000, 0.7500, 0.5000, 0.6250, 0.7500, 0.8750, 0.5000])\n >>> exponent\n tensor([0, 1, 2, 2, 3, 3, 3, 3, 4], dtype=torch.int32)\n >>> torch.ldexp(mantissa, exponent)\n tensor([0., 1., 2., 3., 4., 5., 6., 7., 8.])", "source": "https://pytorch.org/docs/stable/generated/torch.frexp.html", "category": "pytorch docs"}
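Python's standard library exposes the same scalar decomposition; torch.frexp is the tensor version. A quick stdlib check of the identity input == mantissa * 2**exponent:

```python
import math

for x in [1.0, 2.0, 3.0, 8.0]:
    m, e = math.frexp(x)
    # |m| lies in [0.5, 1) for nonzero x, and ldexp inverts frexp exactly
    assert x == math.ldexp(m, e)
    print(x, m, e)
```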
{"text": "torch.vsplittorch.vsplit(input, indices_or_sections) -> List of Tensors\n Splits \"input\", a tensor with two or more dimensions, into multiple\n tensors vertically according to \"indices_or_sections\". Each split\n is a view of \"input\".\n This is equivalent to calling torch.tensor_split(input,\n indices_or_sections, dim=0) (the split dimension is 0), except that\n if \"indices_or_sections\" is an integer it must evenly divide the\n split dimension or a runtime error will be thrown.\n This function is based on NumPy's \"numpy.vsplit()\".\n Parameters:\n * input (Tensor) -- tensor to split.\n * indices_or_sections (int or list or tuple of\n ints) -- See argument in \"torch.tensor_split()\".\n Example::\n >>> t = torch.arange(16.0).reshape(4,4)\n >>> t\n tensor([[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.],\n [ 8., 9., 10., 11.],\n [12., 13., 14., 15.]])\n >>> torch.vsplit(t, 2)\n (tensor([[0., 1., 2., 3.],", "source": "https://pytorch.org/docs/stable/generated/torch.vsplit.html", "category": "pytorch docs"}
{"text": "(tensor([[0., 1., 2., 3.],\n [4., 5., 6., 7.]]),\n tensor([[ 8., 9., 10., 11.],\n [12., 13., 14., 15.]]))\n >>> torch.vsplit(t, [3, 6])\n (tensor([[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.],\n [ 8., 9., 10., 11.]]),\n tensor([[12., 13., 14., 15.]]),\n tensor([], size=(0, 4)))", "source": "https://pytorch.org/docs/stable/generated/torch.vsplit.html", "category": "pytorch docs"}
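The vsplit rule above (an int must divide dim 0 evenly; a list of indices gives split points) can be sketched torch-free on a list of rows. The helper is an illustration, not the torch implementation:

```python
# Plain-Python sketch of torch.vsplit semantics along dim 0.
def vsplit(rows, indices_or_sections):
    if isinstance(indices_or_sections, int):
        n = indices_or_sections
        if len(rows) % n != 0:
            raise ValueError("sections must evenly divide the split dimension")
        size = len(rows) // n
        return [rows[i * size:(i + 1) * size] for i in range(n)]
    # a list of indices gives the boundaries of each split
    bounds = [0, *indices_or_sections, len(rows)]
    return [rows[a:b] for a, b in zip(bounds, bounds[1:])]

t = [[0, 1], [2, 3], [4, 5], [6, 7]]
print(vsplit(t, 2))    # two halves of two rows each
print(vsplit(t, [3]))  # first three rows, then the remainder
```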
{"text": "no_gradclass torch.no_grad\n Context-manager that disables gradient calculation.\n Disabling gradient calculation is useful for inference, when you\n are sure that you will not call \"Tensor.backward()\". It will reduce\n memory consumption for computations that would otherwise have\n requires_grad=True.\n In this mode, the result of every computation will have\n requires_grad=False, even when the inputs have\n requires_grad=True.\n This context manager is thread local; it will not affect\n computation in other threads.\n Also functions as a decorator. (Make sure to instantiate with\n parentheses.)\n Note:\n No-grad is one of several mechanisms that can enable or disable\n gradients locally; see Locally disabling gradient computation for\n more information on how they compare.\n Note:\n This API does not apply to forward-mode AD. If you want to\n disable forward AD for a computation, you can unpack your dual\n tensors.\n Example::", "source": "https://pytorch.org/docs/stable/generated/torch.no_grad.html", "category": "pytorch docs"}
{"text": "tensors.\n Example::\n >>> x = torch.tensor([1.], requires_grad=True)\n >>> with torch.no_grad():\n ... y = x * 2\n >>> y.requires_grad\n False\n >>> @torch.no_grad()\n ... def doubler(x):\n ... return x * 2\n >>> z = doubler(x)\n >>> z.requires_grad\n False", "source": "https://pytorch.org/docs/stable/generated/torch.no_grad.html", "category": "pytorch docs"}
{"text": "ConvReLU1dclass torch.ao.nn.intrinsic.quantized.ConvReLU1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n A ConvReLU1d module is a fused module of Conv1d and ReLU\n We adopt the same interface as \"torch.ao.nn.quantized.Conv1d\".\n Variables:\n torch.ao.nn.quantized.Conv1d (Same as) --", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.ConvReLU1d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.rrelutorch.nn.functional.rrelu(input, lower=1. / 8, upper=1. / 3, training=False, inplace=False) -> Tensor\n Randomized leaky ReLU.\n See \"RReLU\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.rrelu.html", "category": "pytorch docs"}
{"text": "InstanceNorm1dclass torch.ao.nn.quantized.InstanceNorm1d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\n This is the quantized version of \"InstanceNorm1d\".\n Additional args:\n * scale - quantization scale of the output, type: double.\n * zero_point - quantization zero point of the output, type:\n long.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.InstanceNorm1d.html", "category": "pytorch docs"}
{"text": "torch.cuda.max_memory_reservedtorch.cuda.max_memory_reserved(device=None)\n Returns the maximum GPU memory managed by the caching allocator in\n bytes for a given device.\n By default, this returns the peak cached memory since the beginning\n of this program. \"reset_peak_memory_stats()\" can be used to reset\n the starting point in tracking this metric. For example, these two\n functions can measure the peak cached memory amount of each\n iteration in a training loop.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n int\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_reserved.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.affine_gridtorch.nn.functional.affine_grid(theta, size, align_corners=None)\n Generates a 2D or 3D flow field (sampling grid), given a batch of\n affine matrices \"theta\".\n Note:\n This function is often used in conjunction with \"grid_sample()\"\n to build Spatial Transformer Networks .\n Parameters:\n * theta (Tensor) -- input batch of affine matrices with\n shape (N \\times 2 \\times 3) for 2D or (N \\times 3 \\times 4)\n for 3D\n * size (torch.Size) -- the target output image size. (N\n \\times C \\times H \\times W for 2D or N \\times C \\times D\n \\times H \\times W for 3D) Example: torch.Size((32, 3, 24, 24))\n * align_corners (bool, optional) -- if \"True\",\n consider \"-1\" and \"1\" to refer to the centers of the corner\n pixels rather than the image corners. Refer to \"grid_sample()\"\n for a more complete description. A grid generated by", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html", "category": "pytorch docs"}
{"text": "\"affine_grid()\" should be passed to \"grid_sample()\" with the\n same setting for this option. Default: \"False\"\n Returns:\n output Tensor of size (N \\times H \\times W \\times 2)\n Return type:\n output (Tensor)\n Warning:\n When \"align_corners = True\", the grid positions depend on the\n pixel size relative to the input image size, and so the locations\n sampled by \"grid_sample()\" will differ for the same input given\n at different resolutions (that is, after being upsampled or\n downsampled). The default behavior up to version 1.2.0 was\n \"align_corners = True\". Since then, the default behavior has been\n changed to \"align_corners = False\", in order to bring it in line\n with the default for \"interpolate()\".\n Warning:\n When \"align_corners = True\", 2D affine transforms on 1D data and\n 3D affine transforms on 2D data (that is, when one of the spatial\n dimensions has unit size) are ill-defined, and not an intended", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html", "category": "pytorch docs"}
{"text": "use case. This is not a problem when \"align_corners = False\". Up\n to version 1.2.0, all grid points along a unit dimension were\n considered arbitrarily to be at \"-1\". From version 1.3.0, under\n \"align_corners = True\" all grid points along a unit dimension are\n considered to be at \"0\" (the center of the input image).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html", "category": "pytorch docs"}
{"text": "torch.Tensor.to_sparse_cooTensor.to_sparse_coo()\n Convert a tensor to coordinate format.\n Examples:\n >>> dense = torch.randn(5, 5)\n >>> sparse = dense.to_sparse_coo()\n >>> sparse._nnz()\n 25", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_coo.html", "category": "pytorch docs"}
{"text": "torch.Tensor.negative_Tensor.negative_() -> Tensor\n In-place version of \"negative()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.negative_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.expand_asTensor.expand_as(other) -> Tensor\n Expand this tensor to the same size as \"other\".\n \"self.expand_as(other)\" is equivalent to\n \"self.expand(other.size())\".\n Please see \"expand()\" for more information about \"expand\".\n Parameters:\n other (\"torch.Tensor\") -- The result tensor has the same\n size as \"other\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expand_as.html", "category": "pytorch docs"}
{"text": "torch.erftorch.erf(input, *, out=None) -> Tensor\n Alias for \"torch.special.erf()\".", "source": "https://pytorch.org/docs/stable/generated/torch.erf.html", "category": "pytorch docs"}
{"text": "torch.cuda.get_rng_statetorch.cuda.get_rng_state(device='cuda')\n Returns the random number generator state of the specified GPU as a\n ByteTensor.\n Parameters:\n device (torch.device or int, optional) -- The\n device to return the RNG state of. Default: \"'cuda'\" (i.e.,\n \"torch.device('cuda')\", the current CUDA device).\n Return type:\n Tensor\n Warning:\n This function eagerly initializes CUDA.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_rng_state.html", "category": "pytorch docs"}
{"text": "torch.Tensor.diffTensor.diff(n=1, dim=- 1, prepend=None, append=None) -> Tensor\n See \"torch.diff()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diff.html", "category": "pytorch docs"}
{"text": "torch.rangetorch.range(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Returns a 1-D tensor of size \\left\\lfloor \\frac{\\text{end} -\n \\text{start}}{\\text{step}} \\right\\rfloor + 1 with values from\n \"start\" to \"end\" with step \"step\". Step is the gap between two\n values in the tensor.\n \\text{out}_{i+1} = \\text{out}_i + \\text{step}.\n Warning:\n This function is deprecated and will be removed in a future\n release because its behavior is inconsistent with Python's range\n builtin. Instead, use \"torch.arange()\", which produces values in\n [start, end).\n Parameters:\n * start (float) -- the starting value for the set of\n points. Default: \"0\".\n * end (float) -- the ending value for the set of points\n * step (float) -- the gap between each pair of adjacent\n points. Default: \"1\".\n Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.range.html", "category": "pytorch docs"}
{"text": "* dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). If dtype is not\n given, infer the data type from the other input arguments. If\n any of start, end, or step are floating-point, the\n dtype is inferred to be the default dtype, see\n \"get_default_dtype()\". Otherwise, the dtype is inferred to\n be torch.int64.\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should", "source": "https://pytorch.org/docs/stable/generated/torch.range.html", "category": "pytorch docs"}
{"text": "record operations on the returned tensor. Default: \"False\".\n Example:\n >>> torch.range(1, 4)\n tensor([ 1., 2., 3., 4.])\n >>> torch.range(1, 4, 0.5)\n tensor([ 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000])", "source": "https://pytorch.org/docs/stable/generated/torch.range.html", "category": "pytorch docs"}
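The deprecation note above comes down to an off-by-one in the endpoint; a torch-free sketch of the two conventions (helper names illustrative):

```python
def range_like(start, end, step=1):
    # torch.range convention: `end` is INCLUDED
    n = int((end - start) / step) + 1
    return [start + i * step for i in range(n)]

def arange_like(start, end, step=1):
    # torch.arange / Python range convention: `end` is EXCLUDED
    out, v = [], start
    while v < end:
        out.append(v)
        v += step
    return out

print(range_like(1, 4))   # [1, 2, 3, 4]
print(arange_like(1, 4))  # [1, 2, 3]
```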
{"text": "ConvBnReLU1dclass torch.ao.nn.intrinsic.qat.ConvBnReLU1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\n A ConvBnReLU1d module is a module fused from Conv1d, BatchNorm1d\n and ReLU, attached with FakeQuantize modules for weight, used in\n quantization aware training.\n We combined the interface of \"torch.nn.Conv1d\" and\n \"torch.nn.BatchNorm1d\" and \"torch.nn.ReLU\".\n Similar to torch.nn.Conv1d, with FakeQuantize modules initialized\n to default.\n Variables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBnReLU1d.html", "category": "pytorch docs"}
{"text": "torch.rand_liketorch.rand_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\n Returns a tensor with the same size as \"input\" that is filled with\n random numbers from a uniform distribution on the interval [0, 1).\n \"torch.rand_like(input)\" is equivalent to \"torch.rand(input.size(),\n dtype=input.dtype, layout=input.layout, device=input.device)\".\n Parameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n * device (\"torch.device\", optional) -- the desired device of", "source": "https://pytorch.org/docs/stable/generated/torch.rand_like.html", "category": "pytorch docs"}
{"text": "returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.rand_like.html", "category": "pytorch docs"}
{"text": "torch.orgqrtorch.orgqr(input, tau) -> Tensor\n Alias for \"torch.linalg.householder_product()\".", "source": "https://pytorch.org/docs/stable/generated/torch.orgqr.html", "category": "pytorch docs"}
{"text": "torch.Tensor.detachTensor.detach()\n Returns a new Tensor, detached from the current graph.\n The result will never require gradient.\n This method also affects forward mode AD gradients and the result\n will never have forward mode AD gradients.\n Note:\n Returned Tensor shares the same storage with the original one.\n In-place modifications on either of them will be seen, and may\n trigger errors in correctness checks. IMPORTANT NOTE: Previously,\n in-place size / stride / storage changes (such as resize_ /\n resize_as_ / set_ / transpose_) to the returned tensor also\n update the original tensor. Now, these in-place changes will not\n update the original tensor anymore, and will instead trigger an\n error. For sparse tensors: In-place indices / values changes\n (such as zero_ / copy_ / add_) to the returned tensor will\n not update the original tensor anymore, and will instead trigger\n an error.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.detach.html", "category": "pytorch docs"}
{"text": "torch.cliptorch.clip(input, min=None, max=None, *, out=None) -> Tensor\n Alias for \"torch.clamp()\".", "source": "https://pytorch.org/docs/stable/generated/torch.clip.html", "category": "pytorch docs"}
{"text": "torch.Tensor.histogramTensor.histogram(input, bins, *, range=None, weight=None, density=False)\n See \"torch.histogram()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.histogram.html", "category": "pytorch docs"}
{"text": "CUDAGraphclass torch.cuda.CUDAGraph\n Wrapper around a CUDA graph.\n Warning:\n This API is in beta and may change in future releases.\n capture_begin(pool=None)\n Begins capturing CUDA work on the current stream.\n Typically, you shouldn't call \"capture_begin\" yourself. Use\n \"graph\" or \"make_graphed_callables()\", which call\n \"capture_begin\" internally.\n Parameters:\n pool (optional) -- Token (returned by\n \"graph_pool_handle()\" or \"other_Graph_instance.pool()\") that\n hints this graph may share memory with the indicated pool.\n See Graph memory management.\n capture_end()\n Ends CUDA graph capture on the current stream. After\n \"capture_end\", \"replay\" may be called on this instance.\n Typically, you shouldn't call \"capture_end\" yourself. Use\n \"graph\" or \"make_graphed_callables()\", which call \"capture_end\"\n internally.\n debug_dump(debug_path)\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.CUDAGraph.html", "category": "pytorch docs"}
{"text": "debug_dump(debug_path)\n Parameters:\n debug_path (required) -- Path to dump the graph to.\n Calls a debugging function to dump the graph if the debugging is\n enabled via CUDAGraph.enable_debug_mode()\n enable_debug_mode()\n Enables debugging mode for CUDAGraph.debug_dump.\n pool()\n Returns an opaque token representing the id of this graph's\n memory pool. This id can optionally be passed to another graph's\n \"capture_begin\", which hints the other graph may share the same\n memory pool.\n replay()\n Replays the CUDA work captured by this graph.\n reset()\n Deletes the graph currently held by this instance.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.CUDAGraph.html", "category": "pytorch docs"}
{"text": "torch.are_deterministic_algorithms_enabledtorch.are_deterministic_algorithms_enabled()\n Returns True if the global deterministic flag is turned on. Refer\n to \"torch.use_deterministic_algorithms()\" documentation for more\n details.", "source": "https://pytorch.org/docs/stable/generated/torch.are_deterministic_algorithms_enabled.html", "category": "pytorch docs"}
{"text": "torch.Tensor.squeezeTensor.squeeze(dim=None) -> Tensor\n See \"torch.squeeze()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.squeeze.html", "category": "pytorch docs"}
{"text": "Softmax2dclass torch.nn.Softmax2d\n Applies SoftMax over features to each spatial location.\n When given an image of \"Channels x Height x Width\", it will apply\n Softmax to each location (Channels, h_i, w_j)\n Shape:\n * Input: (N, C, H, W) or (C, H, W).\n * Output: (N, C, H, W) or (C, H, W) (same shape as input)\n Returns:\n a Tensor of the same dimension and shape as the input with\n values in the range [0, 1]\n Return type:\n None\n Examples:\n >>> m = nn.Softmax2d()\n >>> # you softmax over the 2nd dimension\n >>> input = torch.randn(2, 3, 12, 13)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softmax2d.html", "category": "pytorch docs"}
{"text": "torch.chunktorch.chunk(input, chunks, dim=0) -> List of Tensors\n Attempts to split a tensor into the specified number of chunks.\n Each chunk is a view of the input tensor.\n Note:\n This function may return fewer than the specified number of\n chunks!\n See also:\n \"torch.tensor_split()\" a function that always returns exactly the\n specified number of chunks\n If the tensor size along the given dimension \"dim\" is divisible by\n \"chunks\", all returned chunks will be the same size. If the tensor\n size along the given dimension \"dim\" is not divisible by \"chunks\",\n all returned chunks will be the same size, except the last one. If\n such division is not possible, this function may return fewer than\n the specified number of chunks.\n Parameters:\n * input (Tensor) -- the tensor to split\n * chunks (int) -- number of chunks to return\n * dim (int) -- dimension along which to split the tensor\n -[ Example ]-\n >>> torch.arange(11).chunk(6)", "source": "https://pytorch.org/docs/stable/generated/torch.chunk.html", "category": "pytorch docs"}
{"text": "-[ Example ]-\n >>> torch.arange(11).chunk(6)\n (tensor([0, 1]),\n tensor([2, 3]),\n tensor([4, 5]),\n tensor([6, 7]),\n tensor([8, 9]),\n tensor([10]))\n >>> torch.arange(12).chunk(6)\n (tensor([0, 1]),\n tensor([2, 3]),\n tensor([4, 5]),\n tensor([6, 7]),\n tensor([8, 9]),\n tensor([10, 11]))\n >>> torch.arange(13).chunk(6)\n (tensor([0, 1, 2]),\n tensor([3, 4, 5]),\n tensor([6, 7, 8]),\n tensor([ 9, 10, 11]),\n tensor([12]))", "source": "https://pytorch.org/docs/stable/generated/torch.chunk.html", "category": "pytorch docs"}
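The sizing rule behind the chunk examples above is ceiling division; a torch-free sketch that reproduces the chunk counts shown (including the 13-in-6 case that yields only five chunks):

```python
# Each chunk has size ceil(n / chunks); the tail chunk absorbs the remainder.
def chunk_sizes(n, chunks):
    size = -(-n // chunks)  # ceiling division
    sizes, left = [], n
    while left > 0:
        sizes.append(min(size, left))
        left -= size
    return sizes

print(chunk_sizes(11, 6))  # [2, 2, 2, 2, 2, 1] -> six chunks
print(chunk_sizes(12, 6))  # [2, 2, 2, 2, 2, 2] -> six chunks
print(chunk_sizes(13, 6))  # [3, 3, 3, 3, 1]    -> only five chunks
```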
{"text": "torch.nn.functional.group_normtorch.nn.functional.group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)\n Applies Group Normalization for last certain number of dimensions.\n See \"GroupNorm\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.group_norm.html", "category": "pytorch docs"}
{"text": "SGDclass torch.optim.SGD(params, lr=, momentum=0, dampening=0, weight_decay=0, nesterov=False, *, maximize=False, foreach=None, differentiable=False)\n Implements stochastic gradient descent (optionally with momentum).\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\: \\theta_0\n \\text{ (params)}, \\: f(\\theta) \\text{ (objective)}, \\:\n \\lambda \\text{ (weight decay)}, \\\n &\\hspace{13mm} \\:\\mu \\text{ (momentum)}, \\:\\tau \\text{\n (dampening)}, \\:\\textit{ nesterov,}\\:\\textit{ maximize}\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\n &\\hspace{5mm}\\textbf{if} \\: \\lambda \\neq 0\n \\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda\n \\theta_{t-1} \\", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"}
{"text": "\\theta_{t-1} \\\n &\\hspace{5mm}\\textbf{if} \\: \\mu \\neq 0\n \\ &\\hspace{10mm}\\textbf{if} \\: t > 1\n \\ &\\hspace{15mm} \\textbf{b}t \\leftarrow \\mu\n \\textbf{b} + (1-\\tau) g_t \\\n &\\hspace{10mm}\\textbf{else}\n \\ &\\hspace{15mm} \\textbf{b}t \\leftarrow g_t\n \\ &\\hspace{10mm}\\textbf{if} \\: \\textit{nesterov}\n \\ &\\hspace{15mm} g_t \\leftarrow g + \\mu \\textbf{b}t\n \\ &\\hspace{10mm}\\textbf{else}\n \\[-1.ex] &\\hspace{15mm} g_t \\leftarrow \\textbf{b}_t\n \\ &\\hspace{5mm}\\textbf{if} \\: \\textit{maximize}\n \\ &\\hspace{10mm}\\theta_t \\leftarrow \\theta + \\gamma\n g_t \\[-1.ex] &\\hspace{5mm}\\textbf{else}\n \\[-1.ex] &\\hspace{10mm}\\theta_t \\leftarrow \\theta_{t-1} -\n \\gamma g_t \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"}
{"text": "\\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\n Nesterov momentum is based on the formula from On the importance of\n initialization and momentum in deep learning.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float) -- learning rate\n * momentum (float, optional) -- momentum factor\n (default: 0)\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * dampening (float, optional) -- dampening for\n momentum (default: 0)\n * nesterov (bool, optional) -- enables Nesterov\n momentum (default: False)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used (default: None)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"}
{"text": "\ndifferentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n -[ Example ]-\n\n\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\noptimizer.zero_grad()\nloss_fn(model(input), target).backward()\noptimizer.step()\n Note:\n The implementation of SGD with Momentum/Nesterov subtly differs\n from Sutskever et. al. and implementations in some other\n frameworks.Considering the specific case of Momentum, the update\n can be written as\n \\begin{aligned} v_{t+1} & = \\mu * v_{t} + g_{t+1}, \\\n p_{t+1} & = p_{t} - \\text{lr} * v_{t+1}, \\end{aligned}\n where p, g, v and \\mu denote the parameters, gradient, velocity,\n\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"}
{"text": "and momentum respectively.This is in contrast to Sutskever et.\n al. and other frameworks which employ an update of the form\n \\begin{aligned} v_{t+1} & = \\mu * v_{t} + \\text{lr} *\n g_{t+1}, \\ p_{t+1} & = p_{t} - v_{t+1}. \\end{aligned}\n The Nesterov version is analogously modified.Moreover, the\n initial value of the momentum buffer is set to the gradient value\n at the first step. This is in contrast to some other frameworks\n that initialize it to all zeros.\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"}
{"text": "Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"}
{"text": "The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"}
{"text": "Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"}
{"text": "torch.stdtorch.std(input, dim=None, , correction=1, keepdim=False, out=None) -> Tensor\n Calculates the standard deviation over the dimensions specified by\n \"dim\". \"dim\" can be a single dimension, list of dimensions, or\n \"None\" to reduce over all dimensions.\n The standard deviation (\\sigma) is calculated as\n \\sigma = \\sqrt{\\frac{1}{N - \\delta\n N}\\sum_{i=0}^{N-1}(x_i-\\bar{x})^2}\n where x is the sample set of elements, \\bar{x} is the sample mean,\n N is the number of samples and \\delta N is the \"correction\".\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints*) -- the dimension or\n dimensions to reduce.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.std.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n * correction (int) --\n difference between the sample size and sample degrees of\n freedom. Defaults to Bessel's correction, \"correction=1\".\n Changed in version 2.0: Previously this argument was called\n \"unbiased\" and was a boolean with \"True\" corresponding to\n \"correction=1\" and \"False\" being \"correction=0\".\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n * out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\na = torch.tensor(\n ... [[ 0.2035, 1.2959, 1.8101, -0.4644],\n ... [ 1.5027, -0.3270, 0.5905, 0.6538],\n ... [-1.5745, 1.3330, -0.5596, -0.6548],\n ... [ 0.1264, -0.5080, 1.6420, 0.1992]])\ntorch.std(a, dim=1, keepdim=True)\n tensor([[1.0311],\n [0.7477],\n [1.2204],\n [0.9087]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.std.html", "category": "pytorch docs"}
{"text": "torch.linalg.invtorch.linalg.inv(A, , out=None) -> Tensor\n Computes the inverse of a square matrix if it exists. Throws a\n RuntimeError if the matrix is not invertible.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, for a matrix A \\in\n \\mathbb{K}^{n \\times n}, its inverse matrix A^{-1} \\in\n \\mathbb{K}^{n \\times n} (if it exists) is defined as\n A^{-1}A = AA^{-1} = \\mathrm{I}_n\n where \\mathrm{I}_n is the n*-dimensional identity matrix.\n The inverse matrix exists if and only if A is invertible. In this\n case, the inverse is unique.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n Note:\n Consider using \"torch.linalg.solve()\" if possible for multiplying\n a matrix on the left by the inverse, as:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv.html", "category": "pytorch docs"}
{"text": "a matrix on the left by the inverse, as:\n linalg.solve(A, B) == linalg.inv(A) @ B # When B is a matrix\n It is always preferred to use \"solve()\" when possible, as it is\n faster and more numerically stable than computing the inverse\n explicitly.\n See also:\n \"torch.linalg.pinv()\" computes the pseudoinverse (Moore-Penrose\n inverse) of matrices of any shape.\n \"torch.linalg.solve()\" computes \"A\".inv() @ \"B\" with a\n numerically stable algorithm.\n Parameters:\n A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions consisting of invertible matrices.\n Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Raises:\n RuntimeError* -- if the matrix \"A\" or any matrix in the batch\n of matrices \"A\" is not invertible.\n Examples:\n >>> A = torch.randn(4, 4)\n >>> Ainv = torch.linalg.inv(A)\n >>> torch.dist(A @ Ainv, torch.eye(4))", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.dist(A @ Ainv, torch.eye(4))\n tensor(1.1921e-07)\n >>> A = torch.randn(2, 3, 4, 4) # Batch of matrices\n >>> Ainv = torch.linalg.inv(A)\n >>> torch.dist(A @ Ainv, torch.eye(4))\n tensor(1.9073e-06)\n >>> A = torch.randn(4, 4, dtype=torch.complex128) # Complex matrix\n >>> Ainv = torch.linalg.inv(A)\n >>> torch.dist(A @ Ainv, torch.eye(4))\n tensor(7.5107e-16, dtype=torch.float64)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv.html", "category": "pytorch docs"}
{"text": "ChannelShuffleclass torch.nn.ChannelShuffle(groups)\n Divide the channels in a tensor of shape (, C , H, W) into g\n groups and rearrange them as (, C \\frac g, g, H, W), while keeping\n the original tensor shape.\n Parameters:\n groups (int) -- number of groups to divide channels in.\n Examples:\n >>> channel_shuffle = nn.ChannelShuffle(2)\n >>> input = torch.randn(1, 4, 2, 2)\n >>> print(input)\n [[[[1, 2],\n [3, 4]],\n [[5, 6],\n [7, 8]],\n [[9, 10],\n [11, 12]],\n [[13, 14],\n [15, 16]],\n ]]\n >>> output = channel_shuffle(input)\n >>> print(output)\n [[[[1, 2],\n [3, 4]],\n [[9, 10],\n [11, 12]],\n [[5, 6],\n [7, 8]],\n [[13, 14],\n [15, 16]],\n ]]", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html", "category": "pytorch docs"}
{"text": "torch.gttorch.gt(input, other, , out=None) -> Tensor\n Computes \\text{input} > \\text{other} element-wise.\n The second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\n Parameters:\n * input (Tensor) -- the tensor to compare\n * other (Tensor or float) -- the tensor or value to\n compare\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Returns:\n A boolean tensor that is True where \"input\" is greater than\n \"other\" and False elsewhere\n Example:\n >>> torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[False, True], [False, False]])", "source": "https://pytorch.org/docs/stable/generated/torch.gt.html", "category": "pytorch docs"}
{"text": "Bilinearclass torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True, device=None, dtype=None)\n Applies a bilinear transformation to the incoming data: y = x_1^T A\n x_2 + b\n Parameters:\n * in1_features (int) -- size of each first input sample\n * in2_features (int) -- size of each second input sample\n * out_features (int) -- size of each output sample\n * bias (bool) -- If set to False, the layer will not learn\n an additive bias. Default: \"True\"\n Shape:\n * Input1: (, H_{in1}) where H_{in1}=\\text{in1_features} and *\n means any number of additional dimensions including none. All\n but the last dimension of the inputs should be the same.\n * Input2: (, H_{in2}) where H_{in2}=\\text{in2_features}.\n * Output: (*, H_{out}) where H_{out}=\\text{out_features} and\n all but the last dimension are the same shape as the input.\n Variables:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Bilinear.html", "category": "pytorch docs"}
{"text": "Variables:\n * weight (torch.Tensor) -- the learnable weights of the\n module of shape (\\text{out_features}, \\text{in1_features},\n \\text{in2_features}). The values are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}), where k =\n \\frac{1}{\\text{in1_features}}\n * bias -- the learnable bias of the module of shape\n (\\text{out_features}). If \"bias\" is \"True\", the values are\n initialized from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}), where k =\n \\frac{1}{\\text{in1_features}}\n Examples:\n >>> m = nn.Bilinear(20, 30, 40)\n >>> input1 = torch.randn(128, 20)\n >>> input2 = torch.randn(128, 30)\n >>> output = m(input1, input2)\n >>> print(output.size())\n torch.Size([128, 40])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Bilinear.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logical_and_Tensor.logical_and_() -> Tensor\n In-place version of \"logical_and()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_and_.html", "category": "pytorch docs"}
{"text": "torch.arangetorch.arange(start=0, end, step=1, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Returns a 1-D tensor of size \\left\\lceil \\frac{\\text{end} -\n \\text{start}}{\\text{step}} \\right\\rceil with values from the\n interval \"[start, end)\" taken with common difference \"step\"\n beginning from start.\n Note that non-integer \"step\" is subject to floating point rounding\n errors when comparing against \"end\"; to avoid inconsistency, we\n advise adding a small epsilon to \"end\" in such cases.\n \\text{out}{{i+1}} = \\text{out} + \\text{step}\n Parameters:\n * start (Number) -- the starting value for the set of\n points. Default: \"0\".\n * end (Number) -- the ending value for the set of points\n * step (Number) -- the gap between each pair of adjacent\n points. Default: \"1\".\n Keyword Arguments:\n * out (Tensor, optional*) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.arange.html", "category": "pytorch docs"}
{"text": "\ndtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). If dtype is not\n given, infer the data type from the other input arguments. If\n any of start, end, or stop are floating-point, the\n dtype is inferred to be the default dtype, see\n \"get_default_dtype()\". Otherwise, the dtype is inferred to\n be torch.int64.\nlayout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\ndevice (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\nrequires_grad (bool, optional) -- If autograd should\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.arange.html", "category": "pytorch docs"}
{"text": "record operations on the returned tensor. Default: \"False\".\n Example:\n >>> torch.arange(5)\n tensor([ 0, 1, 2, 3, 4])\n >>> torch.arange(1, 4)\n tensor([ 1, 2, 3])\n >>> torch.arange(1, 2.5, 0.5)\n tensor([ 1.0000, 1.5000, 2.0000])", "source": "https://pytorch.org/docs/stable/generated/torch.arange.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.hanntorch.signal.windows.hann(M, , sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes the Hann window.\n The Hann window is defined as follows:\n w_n = \\frac{1}{2}\\ \\left[1 - \\cos \\left( \\frac{2 \\pi n}{M - 1}\n \\right)\\right] = \\sin^2 \\left( \\frac{\\pi n}{M - 1} \\right)\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * dtype* (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hann.html", "category": "pytorch docs"}
{"text": "(see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric Hann window.\n >>> torch.signal.windows.hann(10)\n tensor([0.0000, 0.1170, 0.4132, 0.7500, 0.9698, 0.9698, 0.7500, 0.4132, 0.1170, 0.0000])\n >>> # Generates a periodic Hann window.\n >>> torch.signal.windows.hann(10, sym=False)", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hann.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.signal.windows.hann(10, sym=False)\n tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hann.html", "category": "pytorch docs"}
{"text": "torch.maximumtorch.maximum(input, other, , out=None) -> Tensor\n Computes the element-wise maximum of \"input\" and \"other\".\n Note:\n If one of the elements being compared is a NaN, then that element\n is returned. \"maximum()\" is not supported for tensors with\n complex dtypes.\n Parameters:\n * input (Tensor) -- the input tensor.\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.tensor((1, 2, -1))\n >>> b = torch.tensor((3, 0, 4))\n >>> torch.maximum(a, b)\n tensor([3, 2, 4])", "source": "https://pytorch.org/docs/stable/generated/torch.maximum.html", "category": "pytorch docs"}
{"text": "strict_fusionclass torch.jit.strict_fusion\n This class errors if not all nodes have been fused in inference, or\n symbolically differentiated in training.\n Example:\n Forcing fusion of additions.\n @torch.jit.script\n def foo(x):\n with torch.jit.strict_fusion():\n return x + x + x", "source": "https://pytorch.org/docs/stable/generated/torch.jit.strict_fusion.html", "category": "pytorch docs"}
{"text": "Linearclass torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None)\n Applies a linear transformation to the incoming data: y = xA^T + b\n This module supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n Parameters:\n * in_features (int) -- size of each input sample\n * out_features (int) -- size of each output sample\n * bias (bool) -- If set to \"False\", the layer will not\n learn an additive bias. Default: \"True\"\n Shape:\n * Input: (, H_{in}) where * means any number of dimensions\n including none and H_{in} = \\text{in_features}.\n * Output: (, H_{out}) where all but the last dimension are the\n same shape as the input and H_{out} = \\text{out_features}.\n Variables:\n * weight (torch.Tensor) -- the learnable weights of the\n module of shape (\\text{out_features}, \\text{in_features}).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Linear.html", "category": "pytorch docs"}
{"text": "The values are initialized from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}), where k = \\frac{1}{\\text{in_features}}\n * bias -- the learnable bias of the module of shape\n (\\text{out_features}). If \"bias\" is \"True\", the values are\n initialized from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{in_features}}\n Examples:\n >>> m = nn.Linear(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Linear.html", "category": "pytorch docs"}
{"text": "torch.cumulative_trapezoidtorch.cumulative_trapezoid(y, x=None, , dx=None, dim=- 1) -> Tensor\n Cumulatively computes the trapezoidal rule along \"dim\". By default\n the spacing between elements is assumed to be 1, but \"dx\" can be\n used to specify a different constant spacing, and \"x\" can be used\n to specify arbitrary spacing along \"dim\".\n For more details, please read \"torch.trapezoid()\". The difference\n between \"torch.trapezoid()\" and this function is that,\n \"torch.trapezoid()\" returns a value for each integration, where as\n this function returns a cumulative value for every spacing within\n the integration. This is analogous to how .sum returns a value\n and .cumsum returns a cumulative sum.\n Parameters:\n * y (Tensor) -- Values to use when computing the\n trapezoidal rule.\n * x (Tensor*) -- If specified, defines spacing between\n values as specified above.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n * dx (float) -- constant spacing between values. If\n neither \"x\" or \"dx\" are specified then this defaults to 1.\n Effectively multiplies the result by its value.\n * dim (int) -- The dimension along which to compute the\n trapezoidal rule. The last (inner-most) dimension by default.\n Examples:\n >>> # Cumulatively computes the trapezoidal rule in 1D, spacing is implicitly 1.\n >>> y = torch.tensor([1, 5, 10])\n >>> torch.cumulative_trapezoid(y)\n tensor([3., 10.5])\n >>> # Computes the same trapezoidal rule directly up to each element to verify\n >>> (1 + 5) / 2\n 3.0\n >>> (1 + 10 + 10) / 2\n 10.5\n >>> # Cumulatively computes the trapezoidal rule in 1D with constant spacing of 2\n >>> # NOTE: the result is the same as before, but multiplied by 2\n >>> torch.cumulative_trapezoid(y, dx=2)\n tensor([6., 21.])\n >>> # Cumulatively computes the trapezoidal rule in 1D with arbitrary spacing", "source": "https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html", "category": "pytorch docs"}
{"text": "\n\n\nx = torch.tensor([1, 3, 6])\n >>> torch.cumulative_trapezoid(y, x)\n tensor([6., 28.5])\n >>> # Computes the same trapezoidal rule directly up to each element to verify\n >>> ((3 - 1) * (1 + 5)) / 2\n 6.0\n >>> ((3 - 1) * (1 + 5) + (6 - 3) * (5 + 10)) / 2\n 28.5\n >>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 matrix\n >>> y = torch.arange(9).reshape(3, 3)\n tensor([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]])\n >>> torch.cumulative_trapezoid(y)\n tensor([[ 0.5, 2.],\n [ 3.5, 8.],\n [ 6.5, 14.]])\n >>> # Cumulatively computes the trapezoidal rule for each column of the matrix\n >>> torch.cumulative_trapezoid(y, dim=0)\n tensor([[ 1.5, 2.5, 3.5],\n [ 6.0, 8.0, 10.0]])\n >>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 ones matrix\n >>> # with the same arbitrary spacing\n >>> y = torch.ones(3, 3)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html", "category": "pytorch docs"}
{"text": "\n\n\ny = torch.ones(3, 3)\n >>> x = torch.tensor([1, 3, 6])\n >>> torch.cumulative_trapezoid(y, x)\n tensor([[2., 5.],\n [2., 5.],\n [2., 5.]])\n >>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 ones matrix\n >>> # with different arbitrary spacing per row\n >>> y = torch.ones(3, 3)\n >>> x = torch.tensor([[1, 2, 3], [1, 3, 5], [1, 4, 7]])\n >>> torch.cumulative_trapezoid(y, x)\n tensor([[1., 2.],\n [2., 4.],\n [3., 6.]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html", "category": "pytorch docs"}
{"text": "BatchNorm2dclass torch.ao.nn.quantized.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)\n This is the quantized version of \"BatchNorm2d\".", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.BatchNorm2d.html", "category": "pytorch docs"}
{"text": "ParameterDictclass torch.nn.ParameterDict(parameters=None)\n Holds parameters in a dictionary.\n ParameterDict can be indexed like a regular Python dictionary, but\n Parameters it contains are properly registered, and will be visible\n by all Module methods. Other objects are treated as would be done\n by a regular Python dictionary\n \"ParameterDict\" is an ordered dictionary. \"update()\" with other\n unordered mapping types (e.g., Python's plain \"dict\") does not\n preserve the order of the merged mapping. On the other hand,\n \"OrderedDict\" or another \"ParameterDict\" will preserve their\n ordering.\n Note that the constructor, assigning an element of the dictionary\n and the \"update()\" method will convert any \"Tensor\" into\n \"Parameter\".\n Parameters:\n values (iterable, optional) -- a mapping (dictionary)\n of (string : Any) or an iterable of key-value pairs of type\n (string, Any)\n Example:\n class MyModule(nn.Module):\n def init(self):", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html", "category": "pytorch docs"}
{"text": "def init(self):\n super(MyModule, self).init()\n self.params = nn.ParameterDict({\n 'left': nn.Parameter(torch.randn(5, 10)),\n 'right': nn.Parameter(torch.randn(5, 10))\n })\n def forward(self, x, choice):\n x = self.params[choice].mm(x)\n return x\n clear()\n Remove all items from the ParameterDict.\n copy()\n Returns a copy of this \"ParameterDict\" instance.\n Return type:\n ParameterDict\n fromkeys(keys, default=None)\n Return a new ParameterDict with the keys provided\n Parameters:\n * keys (iterable, string) -- keys to make the new\n ParameterDict from\n * default (Parameter, optional) -- value to set for\n all keys\n Return type:\n ParameterDict\n get(key, default=None)\n Return the parameter associated with key if present. Otherwise\n return default if provided, None if not.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html", "category": "pytorch docs"}
{"text": "return default if provided, None if not.\n Parameters:\n * key (str) -- key to get from the ParameterDict\n * default (Parameter, optional) -- value to return\n if key not present\n Return type:\n Any\n items()\n Return an iterable of the ParameterDict key/value pairs.\n Return type:\n Iterable[Tuple[str, Any]]\n keys()\n Return an iterable of the ParameterDict keys.\n Return type:\n Iterable[str]\n pop(key)\n Remove key from the ParameterDict and return its parameter.\n Parameters:\n key (str) -- key to pop from the ParameterDict\n Return type:\n Any\n popitem()\n Remove and return the last inserted (key, parameter) pair from\n the ParameterDict\n Return type:\n Tuple[str, Any]\n setdefault(key, default=None)\n If key is in the ParameterDict, return its value. If not, insert", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html", "category": "pytorch docs"}
{"text": "key with a parameter default and return default. default\n defaults to None.\n Parameters:\n * key (str) -- key to set default for\n * default (Any) -- the parameter set to the key\n Return type:\n Any\n update(parameters)\n Update the \"ParameterDict\" with the key-value pairs from a\n mapping or an iterable, overwriting existing keys.\n Note:\n If \"parameters\" is an \"OrderedDict\", a \"ParameterDict\", or an\n iterable of key-value pairs, the order of new elements in it\n is preserved.\n Parameters:\n parameters (iterable) -- a mapping (dictionary) from\n string to \"Parameter\", or an iterable of key-value pairs of\n type (string, \"Parameter\")\n values()\n Return an iterable of the ParameterDict values.\n Return type:\n Iterable[Any]", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html", "category": "pytorch docs"}
{"text": "torch.bitwise_andtorch.bitwise_and(input, other, , out=None) -> Tensor\n Computes the bitwise AND of \"input\" and \"other\". The input tensor\n must be of integral or Boolean types. For bool tensors, it computes\n the logical AND.\n Parameters:\n * input -- the first input tensor\n * other -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> torch.bitwise_and(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([1, 0, 3], dtype=torch.int8)\n >>> torch.bitwise_and(torch.tensor([True, True, False]), torch.tensor([False, True, False]))\n tensor([ False, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_and.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.selutorch.nn.functional.selu(input, inplace=False) -> Tensor\n Applies element-wise, \\text{SELU}(x) = scale * (\\max(0,x) + \\min(0,\n \\alpha * (\\exp(x) - 1))), with\n \\alpha=1.6732632423543772848170429916717 and\n scale=1.0507009873554804934193349852946.\n See \"SELU\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.selu.html", "category": "pytorch docs"}
{"text": "torch.view_as_realtorch.view_as_real(input) -> Tensor\n Returns a view of \"input\" as a real tensor. For an input complex\n tensor of \"size\" m1, m2, \\dots, mi, this function returns a new\n real tensor of size m1, m2, \\dots, mi, 2, where the last dimension\n of size 2 represents the real and imaginary components of complex\n numbers.\n Warning:\n \"view_as_real()\" is only supported for tensors with \"complex\n dtypes\".\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.4737-0.3839j), (-0.2098-0.6699j), (0.3470-0.9451j), (-0.5174-1.3136j)])\n >>> torch.view_as_real(x)\n tensor([[ 0.4737, -0.3839],\n [-0.2098, -0.6699],\n [ 0.3470, -0.9451],\n [-0.5174, -1.3136]])", "source": "https://pytorch.org/docs/stable/generated/torch.view_as_real.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sspaddmmTensor.sspaddmm(mat1, mat2, *, beta=1, alpha=1) -> Tensor\n See \"torch.sspaddmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sspaddmm.html", "category": "pytorch docs"}
{"text": "torch.lesstorch.less(input, other, *, out=None) -> Tensor\n Alias for \"torch.lt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.less.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_right_shiftTensor.bitwise_right_shift(other) -> Tensor\n See \"torch.bitwise_right_shift()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_right_shift.html", "category": "pytorch docs"}
{"text": "StreamContextclass torch.cuda.StreamContext(stream)\n Context-manager that selects a given stream.\n All CUDA kernels queued within its context will be enqueued on a\n selected stream.\n Parameters:\n Stream (Stream) -- selected stream. This manager is a no-\n op if it's \"None\".\n Note:\n Streams are per-device.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.StreamContext.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sgnTensor.sgn() -> Tensor\n See \"torch.sgn()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sgn.html", "category": "pytorch docs"}
{"text": "torch.fft.iffttorch.fft.ifft(input, n=None, dim=- 1, norm=None, , out=None) -> Tensor\n Computes the one dimensional inverse discrete Fourier transform of\n \"input\".\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension.\n Parameters:\n * input (Tensor) -- the input tensor\n * n (int, optional) -- Signal length. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the IFFT.\n * dim (int, optional) -- The dimension along which to\n take the one dimensional IFFT.\n * norm (str, optional*) --\n Normalization mode. For the backward transform (\"ifft()\"),\n these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the IFFT", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft.html", "category": "pytorch docs"}
{"text": "orthonormal)\n Calling the forward transform (\"fft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"ifft()\" the exact inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nt = torch.tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\ntorch.fft.ifft(t)\n tensor([0.+0.j, 1.+0.j, 2.+0.j, 3.+0.j])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft.html", "category": "pytorch docs"}
{"text": "torch.Tensor.frac_Tensor.frac_() -> Tensor\n In-place version of \"frac()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.frac_.html", "category": "pytorch docs"}
{"text": "InstanceNorm1dclass torch.nn.InstanceNorm1d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\n Applies Instance Normalization over a 2D (unbatched) or 3D\n (batched) input as described in the paper Instance Normalization:\n The Missing Ingredient for Fast Stylization.\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The mean and standard-deviation are calculated per-dimension\n separately for each object in a mini-batch. \\gamma and \\beta are\n learnable parameter vectors of size C (where C is the number of\n features or channels of the input) if \"affine\" is \"True\". The\n standard-deviation is calculated via the biased estimator,\n equivalent to torch.var(input, unbiased=False).\n By default, this layer uses instance statistics computed from input\n data in both training and evaluation modes.\n If \"track_running_stats\" is set to \"True\", during training this", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm1d.html", "category": "pytorch docs"}
{"text": "layer keeps running estimates of its computed mean and variance,\n which are then used for normalization during evaluation. The\n running estimates are kept with a default \"momentum\" of 0.1.\n Note:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n Note:\n \"InstanceNorm1d\" and \"LayerNorm\" are very similar, but have some\n subtle differences. \"InstanceNorm1d\" is applied on each channel\n of channeled data like multidimensional time series, but\n \"LayerNorm\" is usually applied on entire sample and often in NLP\n tasks. Additionally, \"LayerNorm\" applies elementwise affine\n transform, while \"InstanceNorm1d\" usually don't apply affine\n transform.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm1d.html", "category": "pytorch docs"}
{"text": "transform.\n Parameters:\n * num_features (int) -- number of features or channels C\n of the input\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,\n initialized the same way as done for batch normalization.\n Default: \"False\".\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n Shape:\n * Input: (N, C, L) or (C, L)\n * Output: (N, C, L) or (C, L) (same shape as input)\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm1d.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm1d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm1d(100, affine=True)\n >>> input = torch.randn(20, 100, 40)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm1d.html", "category": "pytorch docs"}
{"text": "TransformerEncoderclass torch.nn.TransformerEncoder(encoder_layer, num_layers, norm=None, enable_nested_tensor=True, mask_check=True)\n TransformerEncoder is a stack of N encoder layers. Users can build\n the BERT(https://arxiv.org/abs/1810.04805) model with corresponding\n parameters.\n Parameters:\n * encoder_layer -- an instance of the\n TransformerEncoderLayer() class (required).\n * num_layers -- the number of sub-encoder-layers in the\n encoder (required).\n * norm -- the layer normalization component (optional).\n * enable_nested_tensor -- if True, input will automatically\n convert to nested tensor (and convert back on output). This\n will improve the overall performance of TransformerEncoder\n when padding rate is high. Default: \"True\" (enabled).\n Examples::\n >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)\n >>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html", "category": "pytorch docs"}
{"text": "\n\n\nsrc = torch.rand(10, 32, 512)\n >>> out = transformer_encoder(src)\n forward(src, mask=None, src_key_padding_mask=None, is_causal=None)\n Pass the input through the encoder layers in turn.\n Parameters:\n * src (Tensor) -- the sequence to the encoder\n (required).\n * mask (Optional[Tensor]) -- the mask for the src\n sequence (optional).\n * is_causal (Optional[bool]) -- If specified,\n applies a causal mask as mask (optional) and ignores\n attn_mask for computing scaled dot product attention.\n Default: \"False\".\n * src_key_padding_mask (Optional[Tensor]) -- the\n mask for the src keys per batch (optional).\n Return type:\n Tensor\n Shape:\n see the docs in Transformer class.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html", "category": "pytorch docs"}
{"text": "torch.atantorch.atan(input, , out=None) -> Tensor\n Returns a new tensor with the arctangent of the elements of\n \"input\".\n \\text{out}{i} = \\tan^{-1}(\\text{input})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.2341, 0.2539, -0.6256, -0.6448])\n >>> torch.atan(a)\n tensor([ 0.2299, 0.2487, -0.5591, -0.5727])", "source": "https://pytorch.org/docs/stable/generated/torch.atan.html", "category": "pytorch docs"}
{"text": "LayerNormclass torch.ao.nn.quantized.LayerNorm(normalized_shape, weight, bias, scale, zero_point, eps=1e-05, elementwise_affine=True, device=None, dtype=None)\n This is the quantized version of \"LayerNorm\".\n Additional args:\n * scale - quantization scale of the output, type: double.\n * zero_point - quantization zero point of the output, type:\n long.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.LayerNorm.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.embedding_bagtorch.nn.functional.embedding_bag(input, weight, offsets=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False, per_sample_weights=None, include_last_offset=False, padding_idx=None)\n Computes sums, means or maxes of bags of embeddings, without\n instantiating the intermediate embeddings.\n See \"torch.nn.EmbeddingBag\" for more details.\n Note:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n Parameters:\n * input (LongTensor) -- Tensor containing bags of indices\n into the embedding matrix\n * weight (Tensor) -- The embedding matrix with number of\n rows equal to the maximum possible index + 1, and number of\n columns equal to the embedding size\n * offsets (LongTensor, optional) -- Only used when", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"}
{"text": "\"input\" is 1D. \"offsets\" determines the starting index\n position of each bag (sequence) in \"input\".\n * max_norm (float, optional) -- If given, each\n embedding vector with norm larger than \"max_norm\" is\n renormalized to have norm \"max_norm\". Note: this will modify\n \"weight\" in-place.\n * norm_type (float, optional) -- The \"p\" in the\n \"p\"-norm to compute for the \"max_norm\" option. Default \"2\".\n * scale_grad_by_freq (bool, optional) -- if given,\n this will scale gradients by the inverse of frequency of the\n words in the mini-batch. Default \"False\". Note: this option is\n not supported when \"mode=\"max\"\".\n * mode (str, optional) -- \"\"sum\"\", \"\"mean\"\" or\n \"\"max\"\". Specifies the way to reduce the bag. Default:\n \"\"mean\"\"\n * sparse (bool, optional) -- if \"True\", gradient\n w.r.t. \"weight\" will be a sparse tensor. See Notes under", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"}
{"text": "\"torch.nn.Embedding\" for more details regarding sparse\n gradients. Note: this option is not supported when\n \"mode=\"max\"\".\n * per_sample_weights (Tensor, optional) -- a tensor of\n float / double weights, or None to indicate all weights should\n be taken to be 1. If specified, \"per_sample_weights\" must have\n exactly the same shape as input and is treated as having the\n same \"offsets\", if those are not None.\n * include_last_offset (bool, optional) -- if \"True\",\n the size of offsets is equal to the number of bags + 1. The\n last element is the size of the input, or the ending index\n position of the last bag (sequence).\n * padding_idx (int, optional) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not\n updated during training, i.e. it remains as a fixed \"pad\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"}
{"text": "Note that the embedding vector at \"padding_idx\" is excluded\n from the reduction.\n Return type:\n Tensor\n Shape:\n * \"input\" (LongTensor) and \"offsets\" (LongTensor, optional)\n * If \"input\" is 2D of shape (B, N), it will be treated as\n \"B\" bags (sequences) each of fixed length \"N\", and this will\n return \"B\" values aggregated in a way depending on the\n \"mode\". \"offsets\" is ignored and required to be \"None\" in\n this case.\n * If \"input\" is 1D of shape (N), it will be treated as a\n concatenation of multiple bags (sequences). \"offsets\" is\n required to be a 1D tensor containing the starting index\n positions of each bag in \"input\". Therefore, for \"offsets\"\n of shape (B), \"input\" will be viewed as having \"B\" bags.\n Empty bags (i.e., having 0-length) will have returned\n vectors filled by zeros.\n * \"weight\" (Tensor): the learnable weights of the module of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"}
{"text": "shape (num_embeddings, embedding_dim)\n * \"per_sample_weights\" (Tensor, optional). Has the same shape as\n \"input\".\n * \"output\": aggregated embedding values of shape (B,\n embedding_dim)\n Examples:\n >>> # an Embedding module containing 10 tensors of size 3\n >>> embedding_matrix = torch.rand(10, 3)\n >>> # a batch of 2 samples of 4 indices each\n >>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])\n >>> offsets = torch.tensor([0, 4])\n >>> F.embedding_bag(input, embedding_matrix, offsets)\n tensor([[ 0.3397, 0.3552, 0.5545],\n [ 0.5893, 0.4386, 0.5882]])\n >>> # example with padding_idx\n >>> embedding_matrix = torch.rand(10, 3)\n >>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9])\n >>> offsets = torch.tensor([0, 4])\n >>> F.embedding_bag(input, embedding_matrix, offsets, padding_idx=2, mode='sum')\n tensor([[ 0.0000, 0.0000, 0.0000],\n [-0.7082, 3.2145, -2.6251]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cos_Tensor.cos_() -> Tensor\n In-place version of \"cos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cos_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logaddexp2Tensor.logaddexp2(other) -> Tensor\n See \"torch.logaddexp2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logaddexp2.html", "category": "pytorch docs"}
{"text": "Identityclass torch.nn.Identity(args, kwargs)\n A placeholder identity operator that is argument-insensitive.\n Parameters:\n * args (Any) -- any argument (unused)\n * kwargs (Any) -- any keyword argument (unused)\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 20])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Identity.html", "category": "pytorch docs"}
{"text": "torch.Tensor.positiveTensor.positive() -> Tensor\n See \"torch.positive()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.positive.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logit_Tensor.logit_() -> Tensor\n In-place version of \"logit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logit_.html", "category": "pytorch docs"}
{"text": "torch.jit.script_if_tracingtorch.jit.script_if_tracing(fn)\n Compiles \"fn\" when it is first called during tracing.\n \"torch.jit.script\" has a non-negligible start up time when it is\n first called due to lazy-initializations of many compiler builtins.\n Therefore you should not use it in library code. However, you may\n want to have parts of your library work in tracing even if they use\n control flow. In these cases, you should use\n \"@torch.jit.script_if_tracing\" to substitute for\n \"torch.jit.script\".\n Parameters:\n fn -- A function to compile.\n Returns:\n If called during tracing, a \"ScriptFunction\" created by\n torch.jit.script is returned. Otherwise, the original function\n fn is returned.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script_if_tracing.html", "category": "pytorch docs"}
{"text": "torch._foreach_sigmoidtorch._foreach_sigmoid(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.sigmoid()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sigmoid.html", "category": "pytorch docs"}
{"text": "torch.Tensor.eqTensor.eq(other) -> Tensor\n See \"torch.eq()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.eq.html", "category": "pytorch docs"}
{"text": "torch.Tensor.zero_Tensor.zero_() -> Tensor\n Fills \"self\" tensor with zeros.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.zero_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.splitTensor.split(split_size, dim=0)\n See \"torch.split()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.split.html", "category": "pytorch docs"}
{"text": "Dropout3dclass torch.nn.Dropout3d(p=0.5, inplace=False)\n Randomly zero out entire channels (a channel is a 3D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 3D tensor \\text{input}[i, j]). Each channel will be zeroed out\n independently on every forward call with probability \"p\" using\n samples from a Bernoulli distribution.\n Usually the input comes from \"nn.Conv3d\" modules.\n As described in the paper Efficient Object Localization Using\n Convolutional Networks , if adjacent pixels within feature maps are\n strongly correlated (as is normally the case in early convolution\n layers) then i.i.d. dropout will not regularize the activations and\n will otherwise just result in an effective learning rate decrease.\n In this case, \"nn.Dropout3d()\" will help promote independence\n between feature maps and should be used instead.\n Parameters:\n * p (float, optional) -- probability of an element to\n be zeroed.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout3d.html", "category": "pytorch docs"}
{"text": "be zeroed.\n * inplace (bool, optional) -- If set to \"True\", will\n do this operation in-place\n Shape:\n * Input: (N, C, D, H, W) or (C, D, H, W).\n * Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input).\n Examples:\n >>> m = nn.Dropout3d(p=0.2)\n >>> input = torch.randn(20, 16, 4, 32, 32)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout3d.html", "category": "pytorch docs"}
{"text": "ASGDclass torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0, foreach=None, maximize=False, differentiable=False)\n Implements Averaged Stochastic Gradient Descent.\n It has been proposed in Acceleration of stochastic approximation by\n averaging.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 1e-2)\n * lambd (float, optional) -- decay term (default:\n 1e-4)\n * alpha (float, optional) -- power for eta update\n (default: 0.75)\n * t0 (float, optional) -- point at which to start\n averaging (default: 1e6)\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used. If unspecified by the", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"}
{"text": "user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n * differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"}
{"text": "as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"}
{"text": "register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"}
{"text": "differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"}
{"text": "it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"}
{"text": "torch.squeezetorch.squeeze(input, dim=None) -> Tensor\n Returns a tensor with all specified dimensions of \"input\" of size\n 1 removed.\n For example, if input is of shape: (A \\times 1 \\times B \\times C\n \\times 1 \\times D) then the input.squeeze() will be of shape: (A\n \\times B \\times C \\times D).\n When \"dim\" is given, a squeeze operation is done only in the given\n dimension(s). If input is of shape: (A \\times 1 \\times B),\n \"squeeze(input, 0)\" leaves the tensor unchanged, but\n \"squeeze(input, 1)\" will squeeze the tensor to the shape (A \\times\n B).\n Note:\n The returned tensor shares the storage with the input tensor, so\n changing the contents of one will change the contents of the\n other.\n Warning:\n If the tensor has a batch dimension of size 1, then\n squeeze(input) will also remove the batch dimension, which can\n lead to unexpected errors. Consider specifying only the dims you\n wish to be squeezed.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.squeeze.html", "category": "pytorch docs"}
{"text": "wish to be squeezed.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional) --\n if given, the input will be squeezed\n only in the specified dimensions.\n Changed in version 2.0: \"dim\" now accepts tuples of\n dimensions.\n Example:\n >>> x = torch.zeros(2, 1, 2, 1, 2)\n >>> x.size()\n torch.Size([2, 1, 2, 1, 2])\n >>> y = torch.squeeze(x)\n >>> y.size()\n torch.Size([2, 2, 2])\n >>> y = torch.squeeze(x, 0)\n >>> y.size()\n torch.Size([2, 1, 2, 1, 2])\n >>> y = torch.squeeze(x, 1)\n >>> y.size()\n torch.Size([2, 2, 1, 2])\n >>> y = torch.squeeze(x, (1, 2, 3))\n torch.Size([2, 2, 2])", "source": "https://pytorch.org/docs/stable/generated/torch.squeeze.html", "category": "pytorch docs"}
{"text": "torch.cuda.empty_cachetorch.cuda.empty_cache()\n Releases all unoccupied cached memory currently held by the caching\n allocator so that those can be used in other GPU application and\n visible in nvidia-smi.\n Note:\n \"empty_cache()\" doesn't increase the amount of GPU memory\n available for PyTorch. However, it may help reduce fragmentation\n of GPU memory in certain cases. See Memory management for more\n details about GPU memory management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.empty_cache.html", "category": "pytorch docs"}
{"text": "torch.is_deterministic_algorithms_warn_only_enabledtorch.is_deterministic_algorithms_warn_only_enabled()\n Returns True if the global deterministic flag is set to warn only.\n Refer to \"torch.use_deterministic_algorithms()\" documentation for\n more details.", "source": "https://pytorch.org/docs/stable/generated/torch.is_deterministic_algorithms_warn_only_enabled.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sub_Tensor.sub_(other, *, alpha=1) -> Tensor\n In-place version of \"sub()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sub_.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.rnn.pack_sequencetorch.nn.utils.rnn.pack_sequence(sequences, enforce_sorted=True)\n Packs a list of variable length Tensors\n Consecutive call of the next functions: \"pad_sequence\",\n \"pack_padded_sequence\".\n \"sequences\" should be a list of Tensors of size \"L x \", where L\n is the length of a sequence and *** is any number of trailing\n dimensions, including zero.\n For unsorted sequences, use enforce_sorted = False*. If\n \"enforce_sorted\" is \"True\", the sequences should be sorted in the\n order of decreasing length. \"enforce_sorted = True\" is only\n necessary for ONNX export.\n -[ Example ]-\n\n\n\nfrom torch.nn.utils.rnn import pack_sequence\na = torch.tensor([1, 2, 3])\nb = torch.tensor([4, 5])\nc = torch.tensor([6])\npack_sequence([a, b, c])\n PackedSequence(data=tensor([1, 4, 6, 2, 5, 3]), batch_sizes=tensor([3, 2, 1]), sorted_indices=None, unsorted_indices=None)\n Parameters:\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_sequence.html", "category": "pytorch docs"}
{"text": "Parameters:\n * sequences (list[Tensor]) -- A list of sequences of\n decreasing length.\n * enforce_sorted (bool, optional) -- if \"True\", checks\n that the input contains sequences sorted by length in a\n decreasing order. If \"False\", this condition is not checked.\n Default: \"True\".\n Returns:\n a \"PackedSequence\" object\n Return type:\n PackedSequence", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_sequence.html", "category": "pytorch docs"}
{"text": "Sigmoidclass torch.ao.nn.quantized.Sigmoid(output_scale, output_zero_point)\n This is the quantized equivalent of \"Sigmoid\".\n Parameters:\n * scale -- quantization scale of the output tensor\n * zero_point -- quantization zero point of the output tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Sigmoid.html", "category": "pytorch docs"}
{"text": "torch.Tensor.moveaxisTensor.moveaxis(source, destination) -> Tensor\n See \"torch.moveaxis()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.moveaxis.html", "category": "pytorch docs"}
{"text": "torch.Tensor.gcd_Tensor.gcd_(other) -> Tensor\n In-place version of \"gcd()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gcd_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.meanTensor.mean(dim=None, keepdim=False, *, dtype=None) -> Tensor\n See \"torch.mean()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mean.html", "category": "pytorch docs"}
{"text": "torch.Tensor.resize_as_Tensor.resize_as_(tensor, memory_format=torch.contiguous_format) -> Tensor\n Resizes the \"self\" tensor to be the same size as the specified\n \"tensor\". This is equivalent to \"self.resize_(tensor.size())\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of Tensor. Default:\n \"torch.contiguous_format\". Note that memory format of \"self\" is\n going to be unaffected if \"self.size()\" matches \"tensor.size()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resize_as_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.roundTensor.round(decimals=0) -> Tensor\n See \"torch.round()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.round.html", "category": "pytorch docs"}
{"text": "torch.empty_stridedtorch.empty_strided(size, stride, , dtype=None, layout=None, device=None, requires_grad=False, pin_memory=False) -> Tensor\n Creates a tensor with the specified \"size\" and \"stride\" and filled\n with undefined data.\n Warning:\n If the constructed tensor is \"overlapped\" (with multiple indices\n referring to the same element in memory) its behavior is\n undefined.\n Parameters:\n * size (tuple of python:int) -- the shape of the output\n tensor\n * stride (tuple of python:int) -- the strides of the\n output tensor\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device* (\"torch.device\", optional) -- the desired device of", "source": "https://pytorch.org/docs/stable/generated/torch.empty_strided.html", "category": "pytorch docs"}
{"text": "returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> a = torch.empty_strided((2, 3), (1, 2))\n >>> a\n tensor([[8.9683e-44, 4.4842e-44, 5.1239e+07],\n [0.0000e+00, 0.0000e+00, 3.0705e-41]])\n >>> a.stride()\n (1, 2)\n >>> a.size()\n torch.Size([2, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.empty_strided.html", "category": "pytorch docs"}
{"text": "torch.Tensor.absoluteTensor.absolute() -> Tensor\n Alias for \"abs()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.absolute.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.ctc_losstorch.nn.functional.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False)\n The Connectionist Temporal Classification loss.\n See \"CTCLoss\" for details.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Note:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n Parameters:\n * log_probs (Tensor) -- (T, N, C) or (T, C) where C =\n number of characters in alphabet including blank, *T = input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.ctc_loss.html", "category": "pytorch docs"}
{"text": "length, and N = batch size. The logarithmized probabilities\n of the outputs (e.g. obtained with\n \"torch.nn.functional.log_softmax()\").\n * targets (Tensor) -- (N, S) or (sum(target_lengths)).\n Targets cannot be blank. In the second form, the targets are\n assumed to be concatenated.\n * input_lengths (Tensor) -- (N) or (). Lengths of the\n inputs (must each be \\leq T)\n * target_lengths (Tensor) -- (N) or (). Lengths of the\n targets\n * blank (int, optional) -- Blank label. Default 0.\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the output\n losses will be divided by the target lengths and then the mean\n over the batch is taken, \"'sum'\": the output will be summed.\n Default: \"'mean'\"\n * zero_infinity (bool, optional*) -- Whether to zero", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.ctc_loss.html", "category": "pytorch docs"}
{"text": "infinite losses and the associated gradients. Default: \"False\"\n Infinite losses mainly occur when the inputs are too short to\n be aligned to the targets.\n Return type:\n Tensor\n Example:\n >>> log_probs = torch.randn(50, 16, 20).log_softmax(2).detach().requires_grad_()\n >>> targets = torch.randint(1, 20, (16, 30), dtype=torch.long)\n >>> input_lengths = torch.full((16,), 50, dtype=torch.long)\n >>> target_lengths = torch.randint(10, 30, (16,), dtype=torch.long)\n >>> loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)\n >>> loss.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.ctc_loss.html", "category": "pytorch docs"}
{"text": "torch.mmtorch.mm(input, mat2, *, out=None) -> Tensor\n Performs a matrix multiplication of the matrices \"input\" and\n \"mat2\".\n If \"input\" is a (n \\times m) tensor, \"mat2\" is a (m \\times p)\n tensor, \"out\" will be a (n \\times p) tensor.\n Note:\n This function does not broadcast. For broadcasting matrix\n products, see \"torch.matmul()\".\n Supports strided and sparse 2-D tensors as inputs, autograd with\n respect to strided inputs.\n This operation has support for arguments with sparse layouts. If\n \"out\" is provided it's layout will be used. Otherwise, the result\n layout will be deduced from that of \"input\".\n Warning:\n Sparse support is a beta feature and some layout(s)/dtype/device\n combinations may not be supported, or may not have autograd\n support. If you notice missing functionality please open a\n feature request.\n This operator supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will", "source": "https://pytorch.org/docs/stable/generated/torch.mm.html", "category": "pytorch docs"}
{"text": "use different precision for backward.\n Parameters:\n * input (Tensor) -- the first matrix to be matrix\n multiplied\n * mat2 (Tensor) -- the second matrix to be matrix\n multiplied\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> mat1 = torch.randn(2, 3)\n >>> mat2 = torch.randn(3, 3)\n >>> torch.mm(mat1, mat2)\n tensor([[ 0.4851, 0.5037, -0.3633],\n [-0.0760, -3.6705, 2.4784]])", "source": "https://pytorch.org/docs/stable/generated/torch.mm.html", "category": "pytorch docs"}
{"text": "torch.letorch.le(input, other, , out=None) -> Tensor\n Computes \\text{input} \\leq \\text{other} element-wise.\n The second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\n Parameters:\n * input (Tensor) -- the tensor to compare\n * other (Tensor or Scalar) -- the tensor or value to\n compare\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Returns:\n A boolean tensor that is True where \"input\" is less than or\n equal to \"other\" and False elsewhere\n Example:\n >>> torch.le(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[True, False], [True, True]])", "source": "https://pytorch.org/docs/stable/generated/torch.le.html", "category": "pytorch docs"}
{"text": "torch.Tensor.imagTensor.imag\n Returns a new tensor containing imaginary values of the \"self\"\n tensor. The returned tensor and \"self\" share the same underlying\n storage.\n Warning:\n \"imag()\" is only supported for tensors with complex dtypes.\n Example::\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])\n >>> x.imag\n tensor([ 0.3553, -0.7896, -0.0633, -0.8119])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.imag.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.prune.identitytorch.nn.utils.prune.identity(module, name)\n Applies pruning reparametrization to the tensor corresponding to\n the parameter called \"name\" in \"module\" without actually pruning\n any units. Modifies module in place (and also return the modified\n module) by:\n 1. adding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n 2. replacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n Note:\n The mask is a tensor of ones.\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune.\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n Returns:\n modified (i.e. pruned) version of the input module\n Return type:\n module (nn.Module)\n -[ Examples ]-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.identity.html", "category": "pytorch docs"}
{"text": "module (nn.Module)\n -[ Examples ]-\n\n\n\nm = prune.identity(nn.Linear(2, 3), 'bias')\nprint(m.bias_mask)\n tensor([1., 1., 1.])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.identity.html", "category": "pytorch docs"}
{"text": "torch.Tensor.not_equalTensor.not_equal(other) -> Tensor\n See \"torch.not_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.not_equal.html", "category": "pytorch docs"}
{"text": "Mishclass torch.nn.Mish(inplace=False)\n Applies the Mish function, element-wise. Mish: A Self Regularized\n Non-Monotonic Neural Activation Function.\n \\text{Mish}(x) = x * \\text{Tanh}(\\text{Softplus}(x))\n Note:\n See Mish: A Self Regularized Non-Monotonic Neural Activation\n Function\n Shape:\n * Input: (), where * means any number of dimensions.\n * Output: (), same shape as the input.\n [image]\n Examples:\n >>> m = nn.Mish()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Mish.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.elu_torch.nn.functional.elu_(input, alpha=1.) -> Tensor\n In-place version of \"elu()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.elu_.html", "category": "pytorch docs"}
{"text": "torch.costorch.cos(input, , out=None) -> Tensor\n Returns a new tensor with the cosine of the elements of \"input\".\n \\text{out}{i} = \\cos(\\text{input})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 1.4309, 1.2706, -0.8562, 0.9796])\n >>> torch.cos(a)\n tensor([ 0.1395, 0.2957, 0.6553, 0.5574])", "source": "https://pytorch.org/docs/stable/generated/torch.cos.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addr_Tensor.addr_(vec1, vec2, *, beta=1, alpha=1) -> Tensor\n In-place version of \"addr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addr_.html", "category": "pytorch docs"}
{"text": "torch.logittorch.logit(input, eps=None, *, out=None) -> Tensor\n Alias for \"torch.special.logit()\".", "source": "https://pytorch.org/docs/stable/generated/torch.logit.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ne_Tensor.ne_(other) -> Tensor\n In-place version of \"ne()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ne_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.renormTensor.renorm(p, dim, maxnorm) -> Tensor\n See \"torch.renorm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.renorm.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.l1_losstorch.nn.functional.l1_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor\n Function that takes the mean element-wise absolute value\n difference.\n See \"L1Loss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.l1_loss.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.celutorch.nn.functional.celu(input, alpha=1., inplace=False) -> Tensor\n Applies element-wise, \\text{CELU}(x) = \\max(0,x) + \\min(0, \\alpha *\n (\\exp(x/\\alpha) - 1)).\n See \"CELU\" for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.celu.html", "category": "pytorch docs"}
{"text": "torch.cuda.jiterator._create_multi_output_jit_fntorch.cuda.jiterator._create_multi_output_jit_fn(code_string, num_outputs, kwargs)\n Create a jiterator-generated cuda kernel for an elementwise op that\n supports returning one or more outputs.\n Parameters:\n * code_string (str) -- CUDA code string to be compiled by\n jiterator. The entry functor must return value by reference.\n * num_outputs (int) -- number of outputs return by the\n kernel\n * kwargs (*Dict, optional) -- Keyword arguments for\n generated function\n Return type:\n Callable*\n Example:\n code_string = \"template void my_kernel(T x, T y, T alpha, T& out) { out = -x + alpha * y; }\"\n jitted_fn = create_jit_fn(code_string, alpha=1.0)\n a = torch.rand(3, device='cuda')\n b = torch.rand(3, device='cuda')\n # invoke jitted function like a regular python function\n result = jitted_fn(a, b, alpha=3.14)", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_multi_output_jit_fn.html", "category": "pytorch docs"}
{"text": "result = jitted_fn(a, b, alpha=3.14)\n Warning:\n This API is in beta and may change in future releases.\n Warning:\n This API only supports up to 8 inputs and 8 outputs", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_multi_output_jit_fn.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sinTensor.sin() -> Tensor\n See \"torch.sin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sin.html", "category": "pytorch docs"}
{"text": "LazyConv3dclass torch.nn.LazyConv3d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n A \"torch.nn.Conv3d\" module with lazy initialization of the\n \"in_channels\" argument of the \"Conv3d\" that is inferred from the\n \"input.size(1)\". The attributes that will be lazily initialized are\n weight and bias.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- Zero-padding\n added to both sides of the input. Default: 0\n * padding_mode (str, optional) -- \"'zeros'\",", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv3d.html", "category": "pytorch docs"}
{"text": "\"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n See also:\n \"torch.nn.Conv3d\" and \"torch.nn.modules.lazy.LazyModuleMixin\"\n cls_to_become\n alias of \"Conv3d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv3d.html", "category": "pytorch docs"}
{"text": "torch.gertorch.ger(input, vec2, *, out=None) -> Tensor\n Alias of \"torch.outer()\".\n Warning:\n This function is deprecated and will be removed in a future\n PyTorch release. Use \"torch.outer()\" instead.", "source": "https://pytorch.org/docs/stable/generated/torch.ger.html", "category": "pytorch docs"}
{"text": "torch.Tensor.expm1Tensor.expm1() -> Tensor\n See \"torch.expm1()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expm1.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.conv_transpose2dtorch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor\n Applies a 2D transposed convolution operator over an input image\n composed of several input planes, sometimes also called\n \"deconvolution\".\n This operator supports TensorFloat32.\n See \"ConvTranspose2d\" for details and output shape.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Parameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose2d.html", "category": "pytorch docs"}
{"text": "\\text{in_channels} , iH , iW)\n * weight -- filters of shape (\\text{in_channels} ,\n \\frac{\\text{out_channels}}{\\text{groups}} , kH , kW)\n * bias -- optional bias of shape (\\text{out_channels}).\n Default: None\n * stride -- the stride of the convolving kernel. Can be a\n single number or a tuple \"(sH, sW)\". Default: 1\n * padding -- \"dilation * (kernel_size - 1) - padding\" zero-\n padding will be added to both sides of each dimension in the\n input. Can be a single number or a tuple \"(padH, padW)\".\n Default: 0\n * output_padding -- additional size added to one side of\n each dimension in the output shape. Can be a single number or\n a tuple \"(out_padH, out_padW)\". Default: 0\n * groups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n * dilation -- the spacing between kernel elements. Can be a", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose2d.html", "category": "pytorch docs"}
{"text": "single number or a tuple \"(dH, dW)\". Default: 1\n Examples:\n >>> # With square kernels and equal stride\n >>> inputs = torch.randn(1, 4, 5, 5)\n >>> weights = torch.randn(4, 8, 3, 3)\n >>> F.conv_transpose2d(inputs, weights, padding=1)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose2d.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.parametrize.remove_parametrizationstorch.nn.utils.parametrize.remove_parametrizations(module, tensor_name, leave_parametrized=True)\n Removes the parametrizations on a tensor in a module.\n * If \"leave_parametrized=True\", \"module[tensor_name]\" will be set\n to its current output. In this case, the parametrization shall\n not change the \"dtype\" of the tensor.\n * If \"leave_parametrized=False\", \"module[tensor_name]\" will be set\n to the unparametrised tensor in\n \"module.parametrizations[tensor_name].original\". This is only\n possible when the parametrization depends on just one tensor.\n Parameters:\n * module (nn.Module) -- module from which remove the\n parametrization\n * tensor_name (str) -- name of the parametrization to be\n removed\n * leave_parametrized (bool, optional) -- leave the\n attribute \"tensor_name\" parametrized. Default: \"True\"\n Returns:\n module", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.remove_parametrizations.html", "category": "pytorch docs"}
{"text": "Returns:\n module\n Return type:\n Module\n Raises:\n * ValueError -- if \"module[tensor_name]\" is not parametrized\n * ValueError -- if \"leave_parametrized=False\" and the\n parametrization depends on several tensors", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.remove_parametrizations.html", "category": "pytorch docs"}
{"text": "torch.Tensor.q_per_channel_axisTensor.q_per_channel_axis() -> int\n Given a Tensor quantized by linear (affine) per-channel\n quantization, returns the index of dimension on which per-channel\n quantization is applied.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_per_channel_axis.html", "category": "pytorch docs"}
{"text": "torch.Tensor.triu_Tensor.triu_(diagonal=0) -> Tensor\n In-place version of \"triu()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.triu_.html", "category": "pytorch docs"}
{"text": "RandomUnstructuredclass torch.nn.utils.prune.RandomUnstructured(amount)\n Prune (currently unpruned) units in a tensor at random.\n Parameters:\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * amount (int or float) -- quantity of parameters to\n prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n classmethod apply(module, name, amount)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * amount (int or float) -- quantity of parameters", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomUnstructured.html", "category": "pytorch docs"}
{"text": "to prune. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\n apply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n Parameters:\n module (nn.Module) -- module containing the tensor to\n prune\n Returns:\n pruned version of the input tensor\n Return type:\n pruned_tensor (torch.Tensor)\n prune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * t (torch.Tensor) -- tensor to prune (of same\n dimensions as \"default_mask\").", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomUnstructured.html", "category": "pytorch docs"}
{"text": "dimensions as \"default_mask\").\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * default_mask (torch.Tensor, optional) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n Returns:\n pruned version of tensor \"t\".\n remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomUnstructured.html", "category": "pytorch docs"}
{"text": "list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n Pruning itself is NOT undone or reversed!", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomUnstructured.html", "category": "pytorch docs"}
{"text": "RNNCellclass torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', device=None, dtype=None)\n An Elman RNN cell with tanh or ReLU non-linearity.\n h' = \\tanh(W_{ih} x + b_{ih} + W_{hh} h + b_{hh})\n If \"nonlinearity\" is 'relu', then ReLU is used in place of tanh.\n Parameters:\n * input_size (int) -- The number of expected features in\n the input x\n * hidden_size (int) -- The number of features in the\n hidden state h\n * bias (bool) -- If \"False\", then the layer does not use\n bias weights b_ih and b_hh. Default: \"True\"\n * nonlinearity (str) -- The non-linearity to use. Can be\n either \"'tanh'\" or \"'relu'\". Default: \"'tanh'\"\n Inputs: input, hidden\n * input: tensor containing input features\n * hidden: tensor containing the initial hidden state\n Defaults to zero if not provided.\n Outputs: h'\n * h' of shape (batch, hidden_size): tensor containing the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNNCell.html", "category": "pytorch docs"}
{"text": "next hidden state for each element in the batch\n Shape:\n * input: (N, H_{in}) or (H_{in}) tensor containing input\n features where H_{in} = input_size.\n * hidden: (N, H_{out}) or (H_{out}) tensor containing the\n initial hidden state where H_{out} = hidden_size. Defaults\n to zero if not provided.\n * output: (N, H_{out}) or (H_{out}) tensor containing the next\n hidden state.\n Variables:\n * weight_ih (torch.Tensor) -- the learnable input-hidden\n weights, of shape (hidden_size, input_size)\n * weight_hh (torch.Tensor) -- the learnable hidden-hidden\n weights, of shape (hidden_size, hidden_size)\n * bias_ih -- the learnable input-hidden bias, of shape\n (hidden_size)\n * bias_hh -- the learnable hidden-hidden bias, of shape\n (hidden_size)\n Note:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden_size}}", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNNCell.html", "category": "pytorch docs"}
{"text": "\\frac{1}{\\text{hidden_size}}\n Examples:\n >>> rnn = nn.RNNCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx = rnn(input[i], hx)\n ... output.append(hx)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNNCell.html", "category": "pytorch docs"}
{"text": "torch.Tensor.rsqrtTensor.rsqrt() -> Tensor\n See \"torch.rsqrt()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.rsqrt.html", "category": "pytorch docs"}
{"text": "torch.diagonal_scattertorch.diagonal_scatter(input, src, offset=0, dim1=0, dim2=1) -> Tensor\n Embeds the values of the \"src\" tensor into \"input\" along the\n diagonal elements of \"input\", with respect to \"dim1\" and \"dim2\".\n This function returns a tensor with fresh storage; it does not\n return a view.\n The argument \"offset\" controls which diagonal to consider:\n * If \"offset\" = 0, it is the main diagonal.\n * If \"offset\" > 0, it is above the main diagonal.\n * If \"offset\" < 0, it is below the main diagonal.\n Parameters:\n * input (Tensor) -- the input tensor. Must be at least\n 2-dimensional.\n * src (Tensor) -- the tensor to embed into \"input\".\n * offset (int, optional) -- which diagonal to\n consider. Default: 0 (main diagonal).\n * dim1 (int, optional) -- first dimension with respect\n to which to take diagonal. Default: 0.\n * dim2 (int, optional) -- second dimension with", "source": "https://pytorch.org/docs/stable/generated/torch.diagonal_scatter.html", "category": "pytorch docs"}
{"text": "respect to which to take diagonal. Default: 1.\n Note:\n \"src\" must be of the proper size in order to be embedded into\n \"input\". Specifically, it should have the same shape as\n \"torch.diagonal(input, offset, dim1, dim2)\"\n Examples:\n >>> a = torch.zeros(3, 3)\n >>> a\n tensor([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]])\n >>> torch.diagonal_scatter(a, torch.ones(3), 0)\n tensor([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]])\n >>> torch.diagonal_scatter(a, torch.ones(2), 1)\n tensor([[0., 1., 0.],\n [0., 0., 1.],\n [0., 0., 0.]])", "source": "https://pytorch.org/docs/stable/generated/torch.diagonal_scatter.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.prune.l1_unstructuredtorch.nn.utils.prune.l1_unstructured(module, name, amount, importance_scores=None)\n Prunes tensor corresponding to parameter called \"name\" in \"module\"\n by removing the specified amount of (currently unpruned) units\n with the lowest L1-norm. Modifies module in place (and also return\n the modified module) by:\n 1. adding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n 2. replacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * amount (int or float) -- quantity of parameters to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.l1_unstructured.html", "category": "pytorch docs"}
{"text": "prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * importance_scores (torch.Tensor) -- tensor of importance\n scores (of same shape as module parameter) used to compute\n mask for pruning. The values in this tensor indicate the\n importance of the corresponding elements in the parameter\n being pruned. If unspecified or None, the module parameter\n will be used in its place.\n Returns:\n modified (i.e. pruned) version of the input module\n Return type:\n module (nn.Module)\n -[ Examples ]-\n\n\n\nm = prune.l1_unstructured(nn.Linear(2, 3), 'weight', amount=0.2)\nm.state_dict().keys()\n odict_keys(['bias', 'weight_orig', 'weight_mask'])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.l1_unstructured.html", "category": "pytorch docs"}
{"text": "torch.Tensor.matrix_expTensor.matrix_exp() -> Tensor\n See \"torch.matrix_exp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.matrix_exp.html", "category": "pytorch docs"}
{"text": "torch.lgammatorch.lgamma(input, , out=None) -> Tensor\n Computes the natural logarithm of the absolute value of the gamma\n function on \"input\".\n \\text{out}{i} = \\ln \\Gamma(|\\text{input}|)\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.arange(0.5, 2, 0.5)\n >>> torch.lgamma(a)\n tensor([ 0.5724, 0.0000, -0.1208])", "source": "https://pytorch.org/docs/stable/generated/torch.lgamma.html", "category": "pytorch docs"}
{"text": "torch.gathertorch.gather(input, dim, index, , sparse_grad=False, out=None) -> Tensor\n Gathers values along an axis specified by dim.\n For a 3-D tensor the output is specified by:\n out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0\n out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1\n out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2\n \"input\" and \"index\" must have the same number of dimensions. It is\n also required that \"index.size(d) <= input.size(d)\" for all\n dimensions \"d != dim\". \"out\" will have the same shape as \"index\".\n Note that \"input\" and \"index\" do not broadcast against each other.\n Parameters:\n * input (Tensor) -- the source tensor\n * dim (int) -- the axis along which to index\n * index (LongTensor) -- the indices of elements to gather\n Keyword Arguments:\n * sparse_grad (bool, optional*) -- If \"True\", gradient\n w.r.t. \"input\" will be a sparse tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.gather.html", "category": "pytorch docs"}
{"text": "w.r.t. \"input\" will be a sparse tensor.\n * out (Tensor, optional) -- the destination tensor\n Example:\n >>> t = torch.tensor([[1, 2], [3, 4]])\n >>> torch.gather(t, 1, torch.tensor([[0, 0], [1, 0]]))\n tensor([[ 1, 1],\n [ 4, 3]])", "source": "https://pytorch.org/docs/stable/generated/torch.gather.html", "category": "pytorch docs"}
{"text": "torch.quantize_per_channeltorch.quantize_per_channel(input, scales, zero_points, axis, dtype) -> Tensor\n Converts a float tensor to a per-channel quantized tensor with\n given scales and zero points.\n Parameters:\n * input (Tensor) -- float tensor to quantize\n * scales (Tensor) -- float 1D tensor of scales to use,\n size should match \"input.size(axis)\"\n * zero_points (int) -- integer 1D tensor of offset to use,\n size should match \"input.size(axis)\"\n * axis (int) -- dimension on which apply per-channel\n quantization\n * dtype (\"torch.dtype\") -- the desired data type of returned\n tensor. Has to be one of the quantized dtypes: \"torch.quint8\",\n \"torch.qint8\", \"torch.qint32\"\n Returns:\n A newly quantized tensor\n Return type:\n Tensor\n Example:\n >>> x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]])", "source": "https://pytorch.org/docs/stable/generated/torch.quantize_per_channel.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8)\n tensor([[-1., 0.],\n [ 1., 2.]], size=(2, 2), dtype=torch.quint8,\n quantization_scheme=torch.per_channel_affine,\n scale=tensor([0.1000, 0.0100], dtype=torch.float64),\n zero_point=tensor([10, 0]), axis=0)\n >>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8).int_repr()\n tensor([[ 0, 10],\n [100, 200]], dtype=torch.uint8)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantize_per_channel.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.blackmantorch.signal.windows.blackman(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes the Blackman window.\n The Blackman window is defined as follows:\n w_n = 0.42 - 0.5 \\cos \\left( \\frac{2 \\pi n}{M - 1} \\right) +\n 0.08 \\cos \\left( \\frac{4 \\pi n}{M - 1} \\right)\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html", "category": "pytorch docs"}
{"text": "(see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric Blackman window.\n >>> torch.signal.windows.blackman(5)\n tensor([-1.4901e-08, 3.4000e-01, 1.0000e+00, 3.4000e-01, -1.4901e-08])\n >>> # Generates a periodic Blackman window.\n >>> torch.signal.windows.blackman(5, sym=False)", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html", "category": "pytorch docs"}
{"text": "tensor([-1.4901e-08, 2.0077e-01, 8.4923e-01, 8.4923e-01, 2.0077e-01])", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html", "category": "pytorch docs"}
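The symmetric-window values in the blackman example follow directly from the formula quoted above, w_n = 0.42 - 0.5 cos(2πn/(M-1)) + 0.08 cos(4πn/(M-1)). A pure-Python sketch of just that formula (the real torch.signal.windows.blackman additionally handles dtype, device, and the periodic sym=False variant):

```python
import math

def blackman(M):
    """Symmetric Blackman window of length M, evaluated from the formula."""
    return [0.42 - 0.5 * math.cos(2 * math.pi * n / (M - 1))
            + 0.08 * math.cos(4 * math.pi * n / (M - 1))
            for n in range(M)]

w = blackman(5)
# center point is 1.0; endpoints are ~0 (the -1.4901e-08 in the docs example
# is the same value under float32 rounding)
print(w)
```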
{"text": "torch.nn.functional.softsigntorch.nn.functional.softsign(input) -> Tensor\n Applies element-wise, the function \\text{SoftSign}(x) = \\frac{x}{1\n + |x|}\n See \"Softsign\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softsign.html", "category": "pytorch docs"}
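The SoftSign formula above, x / (1 + |x|), is simple enough to sketch in one line of pure Python (elementwise; the torch version applies it over whole tensors):

```python
def softsign(x):
    """SoftSign(x) = x / (1 + |x|): squashes inputs smoothly into (-1, 1)."""
    return x / (1.0 + abs(x))

print(softsign(3.0))   # 0.75
print(softsign(-1.0))  # -0.5
```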
{"text": "Eventclass torch.cuda.Event(enable_timing=False, blocking=False, interprocess=False)\n Wrapper around a CUDA event.\n CUDA events are synchronization markers that can be used to monitor\n the device's progress, to accurately measure timing, and to\n synchronize CUDA streams.\n The underlying CUDA events are lazily initialized when the event is\n first recorded or exported to another process. After creation, only\n streams on the same device may record the event. However, streams\n on any device can wait on the event.\n Parameters:\n * enable_timing (bool, optional) -- indicates if the\n event should measure time (default: \"False\")\n * blocking (bool, optional) -- if \"True\", \"wait()\"\n will be blocking (default: \"False\")\n * interprocess (bool) -- if \"True\", the event can be\n shared between processes (default: \"False\")\n elapsed_time(end_event)\n Returns the time elapsed in milliseconds after the event was", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Event.html", "category": "pytorch docs"}
{"text": "recorded and before the end_event was recorded.\n classmethod from_ipc_handle(device, handle)\n Reconstruct an event from an IPC handle on the given device.\n ipc_handle()\n Returns an IPC handle of this event. If not recorded yet, the\n event will use the current device.\n query()\n Checks if all work currently captured by event has completed.\n Returns:\n A boolean indicating if all work currently captured by event\n has completed.\n record(stream=None)\n Records the event in a given stream.\n Uses \"torch.cuda.current_stream()\" if no stream is specified.\n The stream's device must match the event's device.\n synchronize()\n Waits for the event to complete.\n Waits until the completion of all work currently captured in\n this event. This prevents the CPU thread from proceeding until\n the event completes.\n Note:\n This is a wrapper around \"cudaEventSynchronize()\": see CUDA\n Event documentation for more info.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Event.html", "category": "pytorch docs"}
{"text": "Event documentation for more info.\n wait(stream=None)\n Makes all future work submitted to the given stream wait for\n this event.\n Use \"torch.cuda.current_stream()\" if no stream is specified.\n Note:\n This is a wrapper around \"cudaStreamWaitEvent()\": see CUDA\n Event documentation for more info.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Event.html", "category": "pytorch docs"}
{"text": "torch.argsorttorch.argsort(input, dim=-1, descending=False, stable=False) -> Tensor\n Returns the indices that sort a tensor along a given dimension in\n ascending order by value.\n This is the second value returned by \"torch.sort()\". See its\n documentation for the exact semantics of this method.\n If \"stable\" is \"True\" then the sorting routine becomes stable,\n preserving the order of equivalent elements. If \"False\", the\n relative order of values which compare equal is not guaranteed.\n \"True\" is slower.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int, optional) -- the dimension to sort along\n * descending (bool, optional) -- controls the sorting\n order (ascending or descending)\n * stable (bool, optional) -- controls the relative\n order of equivalent elements\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.0785, 1.5267, -0.8521, 0.4065],", "source": "https://pytorch.org/docs/stable/generated/torch.argsort.html", "category": "pytorch docs"}
{"text": "[ 0.1598, 0.0788, -0.0745, -1.2700],\n [ 1.2208, 1.0722, -0.7064, 1.2564],\n [ 0.0669, -0.2318, -0.8229, -0.9280]])\n >>> torch.argsort(a, dim=1)\n tensor([[2, 0, 3, 1],\n [3, 2, 1, 0],\n [2, 1, 0, 3],\n [3, 2, 1, 0]])", "source": "https://pytorch.org/docs/stable/generated/torch.argsort.html", "category": "pytorch docs"}
{"text": "torch.is_grad_enabledtorch.is_grad_enabled()\n Returns True if grad mode is currently enabled.", "source": "https://pytorch.org/docs/stable/generated/torch.is_grad_enabled.html", "category": "pytorch docs"}
{"text": "CosineEmbeddingLossclass torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')\n Creates a criterion that measures the loss given input tensors x_1,\n x_2 and a Tensor label y with values 1 or -1. This is used for\n measuring whether two inputs are similar or dissimilar, using the\n cosine similarity, and is typically used for learning nonlinear\n embeddings or semi-supervised learning.\n The loss function for each sample is:\n \\text{loss}(x, y) = \\begin{cases} 1 - \\cos(x_1, x_2), & \\text{if\n } y = 1 \\\\ \\max(0, \\cos(x_1, x_2) - \\text{margin}), & \\text{if }\n y = -1 \\end{cases}\n Parameters:\n * margin (float, optional) -- Should be a number from\n -1 to 1, 0 to 0.5 is suggested. If \"margin\" is missing, the\n default value is 0.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html", "category": "pytorch docs"}
{"text": "loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html", "category": "pytorch docs"}
{"text": "deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input1: (N, D) or (D), where N is the batch size and D is\n the embedding dimension.\n * Input2: (N, D) or (D), same shape as Input1.\n * Target: (N) or ().\n * Output: If \"reduction\" is \"'none'\", then (N), otherwise\n scalar.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html", "category": "pytorch docs"}
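The per-sample case split above can be sketched in pure Python for 1-D inputs (the real CosineEmbeddingLoss additionally handles batching and the `reduction` modes):

```python
import math

def cosine(x1, x2):
    """Cosine similarity of two 1-D vectors."""
    dot = sum(a * b for a, b in zip(x1, x2))
    n1 = math.sqrt(sum(a * a for a in x1))
    n2 = math.sqrt(sum(b * b for b in x2))
    return dot / (n1 * n2)

def cosine_embedding_loss(x1, x2, y, margin=0.0):
    """loss = 1 - cos(x1, x2) if y == 1, else max(0, cos(x1, x2) - margin)."""
    c = cosine(x1, x2)
    return 1.0 - c if y == 1 else max(0.0, c - margin)

# identical vectors labeled similar (y=1) incur zero loss;
# the same pair labeled dissimilar (y=-1) incurs 1 - margin
print(cosine_embedding_loss([1.0, 0.0], [1.0, 0.0], 1))   # 0.0
print(cosine_embedding_loss([1.0, 0.0], [1.0, 0.0], -1))  # 1.0
```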
{"text": "torch.arccoshtorch.arccosh(input, *, out=None) -> Tensor\n Alias for \"torch.acosh()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arccosh.html", "category": "pytorch docs"}
{"text": "ConvReLU2dclass torch.ao.nn.intrinsic.qat.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)\n A ConvReLU2d module is a fused module of Conv2d and ReLU, attached\n with FakeQuantize modules for weight for quantization aware\n training.\n We combined the interface of \"Conv2d\" and \"ReLU\".\n Variables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvReLU2d.html", "category": "pytorch docs"}
{"text": "torch.vstacktorch.vstack(tensors, *, out=None) -> Tensor\n Stack tensors in sequence vertically (row wise).\n This is equivalent to concatenation along the first axis after all\n 1-D tensors have been reshaped by \"torch.atleast_2d()\".\n Parameters:\n tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([1, 2, 3])\n >>> b = torch.tensor([4, 5, 6])\n >>> torch.vstack((a,b))\n tensor([[1, 2, 3],\n [4, 5, 6]])\n >>> a = torch.tensor([[1],[2],[3]])\n >>> b = torch.tensor([[4],[5],[6]])\n >>> torch.vstack((a,b))\n tensor([[1],\n [2],\n [3],\n [4],\n [5],\n [6]])", "source": "https://pytorch.org/docs/stable/generated/torch.vstack.html", "category": "pytorch docs"}
{"text": "torch.fliptorch.flip(input, dims) -> Tensor\n Reverse the order of an n-D tensor along given axis in dims.\n Note:\n torch.flip makes a copy of \"input\"'s data. This is different\n from NumPy's np.flip, which returns a view in constant time.\n Since copying a tensor's data is more work than viewing that\n data, torch.flip is expected to be slower than np.flip.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dims (a list or tuple) -- axis to flip on\n Example:\n >>> x = torch.arange(8).view(2, 2, 2)\n >>> x\n tensor([[[ 0, 1],\n [ 2, 3]],\n [[ 4, 5],\n [ 6, 7]]])\n >>> torch.flip(x, [0, 1])\n tensor([[[ 6, 7],\n [ 4, 5]],\n [[ 2, 3],\n [ 0, 1]]])", "source": "https://pytorch.org/docs/stable/generated/torch.flip.html", "category": "pytorch docs"}
{"text": "torch.Tensor.frexpTensor.frexp(input) -> (Tensor mantissa, Tensor exponent)\n See \"torch.frexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.frexp.html", "category": "pytorch docs"}
{"text": "torch.mediantorch.median(input) -> Tensor\n Returns the median of the values in \"input\".\n Note:\n The median is not unique for \"input\" tensors with an even number\n of elements. In this case the lower of the two medians is\n returned. To compute the mean of both medians, use\n \"torch.quantile()\" with \"q=0.5\" instead.\n Warning:\n This function produces deterministic (sub)gradients unlike\n \"median(dim=0)\"\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 1.5219, -1.5212, 0.2202]])\n >>> torch.median(a)\n tensor(0.2202)\n torch.median(input, dim=-1, keepdim=False, *, out=None)\n Returns a namedtuple \"(values, indices)\" where \"values\" contains\n the median of each row of \"input\" in the dimension \"dim\", and\n \"indices\" contains the index of the median values found in the\n dimension \"dim\".\n By default, \"dim\" is the last dimension of the \"input\" tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.median.html", "category": "pytorch docs"}
{"text": "If \"keepdim\" is \"True\", the output tensors are of the same size as\n \"input\" except in the dimension \"dim\" where they are of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the outputs tensor having 1 fewer dimension than \"input\".\n Note:\n The median is not unique for \"input\" tensors with an even number\n of elements in the dimension \"dim\". In this case the lower of the\n two medians is returned. To compute the mean of both medians in\n \"input\", use \"torch.quantile()\" with \"q=0.5\" instead.\n Warning:\n \"indices\" does not necessarily contain the first occurrence of\n each median value found, unless it is unique. The exact\n implementation details are device-specific. Do not expect the\n same result when run on CPU and GPU in general. For the same\n reason do not expect the gradients to be deterministic.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce.", "source": "https://pytorch.org/docs/stable/generated/torch.median.html", "category": "pytorch docs"}
{"text": "\nkeepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n out ((Tensor, Tensor), optional) -- The first\n tensor will be populated with the median values and the second\n tensor, which must have dtype long, with their indices in the\n dimension \"dim\" of \"input\".\n Example:\n >>> a = torch.randn(4, 5)\n >>> a\n tensor([[ 0.2505, -0.3982, -0.9948, 0.3518, -1.3131],\n [ 0.3180, -0.6993, 1.0436, 0.0438, 0.2270],\n [-0.2751, 0.7303, 0.2192, 0.3321, 0.2488],\n [ 1.0778, -1.9510, 0.7048, 0.4742, -0.7125]])\n >>> torch.median(a, 1)\n torch.return_types.median(values=tensor([-0.3982, 0.2270, 0.2488, 0.4742]), indices=tensor([1, 4, 4, 3]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.median.html", "category": "pytorch docs"}
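The "lower of the two medians" behavior noted above is easy to sketch in pure Python: for an even-length input torch.median returns the smaller middle value, unlike `statistics.median`, which averages the two.

```python
def lower_median(values):
    """Lower median: the smaller of the two middle values for even lengths."""
    s = sorted(values)
    return s[(len(s) - 1) // 2]

print(lower_median([1.0, 2.0, 3.0, 4.0]))  # 2.0 (statistics.median would give 2.5)
print(lower_median([1.0, 3.0, 2.0]))       # 2.0
```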
{"text": "ConstantPad1dclass torch.nn.ConstantPad1d(padding, value)\n Pads the input tensor boundaries with a constant value.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in both boundaries. If a 2-tuple,\n uses (\\text{padding_left}, \\text{padding_right})\n Shape:\n * Input: (C, W_{in}) or (N, C, W_{in}).\n * Output: (C, W_{out}) or (N, C, W_{out}), where\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ConstantPad1d(2, 3.5)\n >>> input = torch.randn(1, 2, 4)\n >>> input\n tensor([[[-1.0491, -0.7152, -0.0749, 0.8530],\n [-1.3287, 1.8966, 0.1466, -0.2771]]])\n >>> m(input)\n tensor([[[ 3.5000, 3.5000, -1.0491, -0.7152, -0.0749, 0.8530, 3.5000,\n 3.5000],\n [ 3.5000, 3.5000, -1.3287, 1.8966, 0.1466, -0.2771, 3.5000,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad1d.html", "category": "pytorch docs"}
{"text": "3.5000]]])\n >>> m = nn.ConstantPad1d(2, 3.5)\n >>> input = torch.randn(1, 2, 3)\n >>> input\n tensor([[[ 1.6616, 1.4523, -1.1255],\n [-3.6372, 0.1182, -1.8652]]])\n >>> m(input)\n tensor([[[ 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000, 3.5000],\n [ 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000, 3.5000]]])\n >>> # using different paddings for different sides\n >>> m = nn.ConstantPad1d((3, 1), 3.5)\n >>> m(input)\n tensor([[[ 3.5000, 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000],\n [ 3.5000, 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad1d.html", "category": "pytorch docs"}
{"text": "ZeroPad2dclass torch.nn.ZeroPad2d(padding)\n Pads the input tensor boundaries with zero.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 4-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom})\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n H_{out} = H_{in} + \\text{padding_top} +\n \\text{padding_bottom}\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ZeroPad2d(2)\n >>> input = torch.randn(1, 1, 3, 3)\n >>> input\n tensor([[[[-0.1678, -0.4418, 1.9466],\n [ 0.9604, -0.4219, -0.5241],\n [-0.9162, -0.5436, -0.6446]]]])\n >>> m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ZeroPad2d.html", "category": "pytorch docs"}
{"text": "\n\n\nm(input)\n tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, -0.1678, -0.4418, 1.9466, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.9604, -0.4219, -0.5241, 0.0000, 0.0000],\n [ 0.0000, 0.0000, -0.9162, -0.5436, -0.6446, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])\n >>> # using different paddings for different sides\n >>> m = nn.ZeroPad2d((1, 1, 2, 0))\n >>> m(input)\n tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, -0.1678, -0.4418, 1.9466, 0.0000],\n [ 0.0000, 0.9604, -0.4219, -0.5241, 0.0000],\n [ 0.0000, -0.9162, -0.5436, -0.6446, 0.0000]]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ZeroPad2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.copysignTensor.copysign(other) -> Tensor\n See \"torch.copysign()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.copysign.html", "category": "pytorch docs"}
{"text": "torch.true_dividetorch.true_divide(dividend, divisor, *, out) -> Tensor\n Alias for \"torch.div()\" with \"rounding_mode=None\".", "source": "https://pytorch.org/docs/stable/generated/torch.true_divide.html", "category": "pytorch docs"}
{"text": "torch.Tensor.scatter_reduceTensor.scatter_reduce(dim, index, src, reduce, *, include_self=True) -> Tensor\n Out-of-place version of \"torch.Tensor.scatter_reduce_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce.html", "category": "pytorch docs"}
{"text": "torch.linalg.ldl_solvetorch.linalg.ldl_solve(LD, pivots, B, *, hermitian=False, out=None) -> Tensor\n Computes the solution of a system of linear equations using the LDL\n factorization.\n \"LD\" and \"pivots\" are the compact representation of the LDL\n factorization and are expected to be computed by\n \"torch.linalg.ldl_factor_ex()\". \"hermitian\" argument to this\n function should be the same as the corresponding arguments in\n \"torch.linalg.ldl_factor_ex()\".\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n Warning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n Parameters:\n * LD (Tensor) -- the n times n matrix or the batch of\n such matrices of size (*, n, n) where \"*\" is one or more\n batch dimensions.\n * pivots (Tensor) -- the pivots corresponding to the LDL", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_solve.html", "category": "pytorch docs"}
{"text": "factorization of \"LD\".\n * B (Tensor) -- right-hand side tensor of shape (*, n,\n k).\n Keyword Arguments:\n * hermitian (bool, optional) -- whether to consider\n the decomposed matrix to be Hermitian or symmetric. For real-\n valued matrices, this switch has no effect. Default: False.\n * out (tuple, optional) -- output tensor. B may be\n passed as out and the result is computed in-place on B.\n Ignored if None. Default: None.\n Examples:\n >>> A = torch.randn(2, 3, 3)\n >>> A = A @ A.mT # make symmetric\n >>> LD, pivots, info = torch.linalg.ldl_factor_ex(A)\n >>> B = torch.randn(2, 3, 4)\n >>> X = torch.linalg.ldl_solve(LD, pivots, B)\n >>> torch.linalg.norm(A @ X - B)\n tensor(0.0001)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_solve.html", "category": "pytorch docs"}
{"text": "torch.tantorch.tan(input, *, out=None) -> Tensor\n Returns a new tensor with the tangent of the elements of \"input\".\n \\text{out}_{i} = \\tan(\\text{input}_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-1.2027, -1.7687, 0.4412, -1.3856])\n >>> torch.tan(a)\n tensor([-2.5930, 4.9859, 0.4722, -5.3366])", "source": "https://pytorch.org/docs/stable/generated/torch.tan.html", "category": "pytorch docs"}
{"text": "torch.Tensor.greater_equal_Tensor.greater_equal_(other) -> Tensor\n In-place version of \"greater_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.greater_equal_.html", "category": "pytorch docs"}
{"text": "default_fused_per_channel_wt_fake_quanttorch.quantization.fake_quantize.default_fused_per_channel_wt_fake_quant\n alias of functools.partial(, observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_channel_symmetric){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fused_per_channel_wt_fake_quant.html", "category": "pytorch docs"}
{"text": "torch.optim.Optimizer.state_dictOptimizer.state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where each\n parameter group is a dict", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.state_dict.html", "category": "pytorch docs"}
{"text": "leaky_reluclass torch.ao.nn.quantized.functional.leaky_relu(input, negative_slope=0.01, inplace=False, scale=None, zero_point=None)\n Quantized version of leaky_relu(input, negative_slope=0.01,\n inplace=False, scale, zero_point) -> Tensor\n Applies element-wise, \\text{LeakyReLU}(x) = \\max(0, x) +\n \\text{negative_slope} * \\min(0, x)\n Parameters:\n * input (Tensor) -- Quantized input\n * negative_slope (float) -- The slope of the negative\n input\n * inplace (bool) -- Inplace modification of the input\n tensor\n * scale (Optional[float]) -- Scale of the output\n tensor.\n * zero_point (Optional[int]) -- Zero point of the\n output tensor.\n See \"LeakyReLU\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.leaky_relu.html", "category": "pytorch docs"}
{"text": "torch.func.jacfwdtorch.func.jacfwd(func, argnums=0, has_aux=False, *, randomness='error')\n Computes the Jacobian of \"func\" with respect to the arg(s) at index\n \"argnums\" using forward-mode autodiff\n Parameters:\n * func (function) -- A Python function that takes one or\n more arguments, one of which must be a Tensor, and returns one\n or more Tensors\n * argnums (int or Tuple[int]) -- Optional,\n integer or tuple of integers, saying which arguments to get\n the Jacobian with respect to. Default: 0.\n * has_aux (bool) -- Flag indicating that \"func\" returns a\n \"(output, aux)\" tuple where the first element is the output of\n the function to be differentiated and the second element is\n auxiliary objects that will not be differentiated. Default:\n False.\n * randomness (str) -- Flag indicating what type of\n randomness to use. See \"vmap()\" for more detail. Allowed:", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html", "category": "pytorch docs"}
{"text": "\"different\", \"same\", \"error\". Default: \"error\"\n Returns:\n Returns a function that takes in the same inputs as \"func\" and\n returns the Jacobian of \"func\" with respect to the arg(s) at\n \"argnums\". If \"has_aux is True\", then the returned function\n instead returns a \"(jacobian, aux)\" tuple where \"jacobian\" is\n the Jacobian and \"aux\" is auxiliary objects returned by \"func\".\n Note:\n You may see this API error out with \"forward-mode AD not\n implemented for operator X\". If so, please file a bug report and\n we will prioritize it. An alternative is to use \"jacrev()\", which\n has better operator coverage.\n A basic usage with a pointwise, unary operation will give a\n diagonal array as the Jacobian\n >>> from torch.func import jacfwd\n >>> x = torch.randn(5)\n >>> jacobian = jacfwd(torch.sin)(x)\n >>> expected = torch.diag(torch.cos(x))\n >>> assert torch.allclose(jacobian, expected)\n \"jacfwd()\" can be composed with vmap to produce batched Jacobians:", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html", "category": "pytorch docs"}
{"text": ">>> from torch.func import jacfwd, vmap\n >>> x = torch.randn(64, 5)\n >>> jacobian = vmap(jacfwd(torch.sin))(x)\n >>> assert jacobian.shape == (64, 5, 5)\n If you would like to compute the output of the function as well as\n the jacobian of the function, use the \"has_aux\" flag to return the\n output as an auxiliary object:\n >>> from torch.func import jacfwd\n >>> x = torch.randn(5)\n >>> def f(x):\n ...     return x.sin()\n >>> def g(x):\n ...     result = f(x)\n ...     return result, result\n >>> jacobian_f, f_x = jacfwd(g, has_aux=True)(x)\n >>> assert torch.allclose(f_x, f(x))\n Additionally, \"jacfwd()\" can be composed with itself or \"jacrev()\"\n to produce Hessians\n >>> from torch.func import jacfwd, jacrev\n >>> def f(x):\n ...     return x.sin().sum()\n >>> x = torch.randn(5)\n >>> hessian = jacfwd(jacrev(f))(x)\n >>> assert torch.allclose(hessian, torch.diag(-x.sin()))\n By default, \"jacfwd()\" computes the Jacobian with respect to the", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html", "category": "pytorch docs"}
{"text": "first input. However, it can compute the Jacobian with respect to a\n different argument by using \"argnums\":\n >>> from torch.func import jacfwd\n >>> def f(x, y):\n ...     return x + y ** 2\n >>> x, y = torch.randn(5), torch.randn(5)\n >>> jacobian = jacfwd(f, argnums=1)(x, y)\n >>> expected = torch.diag(2 * y)\n >>> assert torch.allclose(jacobian, expected)\n Additionally, passing a tuple to \"argnums\" will compute the\n Jacobian with respect to multiple arguments\n >>> from torch.func import jacfwd\n >>> def f(x, y):\n ...     return x + y ** 2\n >>> x, y = torch.randn(5), torch.randn(5)\n >>> jacobian = jacfwd(f, argnums=(0, 1))(x, y)\n >>> expectedX = torch.diag(torch.ones_like(x))\n >>> expectedY = torch.diag(2 * y)\n >>> assert torch.allclose(jacobian[0], expectedX)\n >>> assert torch.allclose(jacobian[1], expectedY)", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html", "category": "pytorch docs"}
{"text": "torch._foreach_exptorch._foreach_exp(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.exp()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_exp.html", "category": "pytorch docs"}
{"text": "torch.linalg.solve_extorch.linalg.solve_ex(A, B, *, left=True, check_errors=False, out=None)\n A version of \"solve()\" that does not perform error checks unless\n \"check_errors\"= True. It also returns the \"info\" tensor returned\n by LAPACK's getrf.\n Note:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors\"= True.\n Warning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n Parameters:\n * A (Tensor) -- tensor of shape (*, n, n) where \"*\" is\n zero or more batch dimensions.\n * B (Tensor) -- right-hand side tensor, as in \"solve()\".\n Keyword Arguments:\n * left (bool, optional) -- whether to solve the system\n AX=B or XA = B. Default: True.\n * check_errors (bool, optional) -- controls whether to\n check the content of \"infos\" and raise an error if it is non-\n zero. Default: False.\n * out (tuple, optional) -- tuple of two tensors to", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_ex.html", "category": "pytorch docs"}
{"text": "write the output to. Ignored if None. Default: None.\n Returns:\n A named tuple (result, info).\n Examples:\n >>> A = torch.randn(3, 3)\n >>> Ainv, info = torch.linalg.solve_ex(A)\n >>> torch.dist(torch.linalg.inv(A), Ainv)\n tensor(0.)\n >>> info\n tensor(0, dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_ex.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.smooth_l1_losstorch.nn.functional.smooth_l1_loss(input, target, size_average=None, reduce=None, reduction='mean', beta=1.0)\n Function that uses a squared term if the absolute element-wise\n error falls below beta and an L1 term otherwise.\n See \"SmoothL1Loss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.smooth_l1_loss.html", "category": "pytorch docs"}
{"text": "MaxPool2dclass torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)\n Applies a 2D max pooling over an input signal composed of several\n input planes.\n In the simplest case, the output value of the layer with input size\n (N, C, H, W), output (N, C, H_{out}, W_{out}) and \"kernel_size\"\n (kH, kW) can be precisely described as:\n \\begin{aligned} out(N_i, C_j, h, w) ={} & \\max_{m=0, \\ldots,\n kH-1} \\max_{n=0, \\ldots, kW-1} \\ &\n \\text{input}(N_i, C_j, \\text{stride[0]} \\times h + m,\n \\text{stride[1]} \\times w + n) \\end{aligned}\n If \"padding\" is non-zero, then the input is implicitly padded with\n negative infinity on both sides for \"padding\" number of points.\n \"dilation\" controls the spacing between the kernel points. It is\n harder to describe, but this link has a nice visualization of what\n \"dilation\" does.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html", "category": "pytorch docs"}
{"text": "\"dilation\" does.\n Note:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n The parameters \"kernel_size\", \"stride\", \"padding\", \"dilation\" can\n either be:\n * a single \"int\" -- in which case the same value is used for the\n height and width dimension\n * a \"tuple\" of two ints -- in which case, the first int is\n used for the height dimension, and the second int for the\n width dimension\n Parameters:\n * kernel_size (Union[int, Tuple[int,\n int]]) -- the size of the window to take a max over\n * stride (Union[int, Tuple[int, int]])\n -- the stride of the window. Default value is \"kernel_size\"\n * padding (Union[int, Tuple[int,\n int]]) -- Implicit negative infinity padding to be\n added on both sides", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html", "category": "pytorch docs"}
{"text": "added on both sides\n * dilation (Union[int, Tuple[int,\n int]]) -- a parameter that controls the stride of\n elements in the window\n * return_indices (bool) -- if \"True\", will return the max\n indices along with the outputs. Useful for\n \"torch.nn.MaxUnpool2d\" later\n * ceil_mode (bool) -- when True, will use ceil instead\n of floor to compute the output shape\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in})\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 * \\text{padding[0]}\n - \\text{dilation[0]} \\times (\\text{kernel_size[0]} -\n 1) - 1}{\\text{stride[0]}} + 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 * \\text{padding[1]}\n - \\text{dilation[1]} \\times (\\text{kernel_size[1]} -\n 1) - 1}{\\text{stride[1]}} + 1\\right\\rfloor\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> # pool of square window of size=3, stride=2\n >>> m = nn.MaxPool2d(3, stride=2)\n >>> # pool of non-square window\n >>> m = nn.MaxPool2d((3, 2), stride=(2, 1))\n >>> input = torch.randn(20, 16, 50, 32)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html", "category": "pytorch docs"}
{"text": "torch.jit.forktorch.jit.fork(func, args, kwargs)\n Creates an asynchronous task executing func and a reference to\n the value of the result of this execution. fork will return\n immediately, so the return value of func may not have been\n computed yet. To force completion of the task and access the return\n value invoke torch.jit.wait on the Future. fork invoked with a\n func which returns T is typed as torch.jit.Future[T]. fork\n calls can be arbitrarily nested, and may be invoked with positional\n and keyword arguments. Asynchronous execution will only occur when\n run in TorchScript. If run in pure python, fork will not execute\n in parallel. fork will also not execute in parallel when invoked\n while tracing, however the fork and wait calls will be captured\n in the exported IR Graph.\n Warning:\n fork* tasks will execute non-deterministically. We recommend\n only spawning parallel fork tasks for pure functions that do not", "source": "https://pytorch.org/docs/stable/generated/torch.jit.fork.html", "category": "pytorch docs"}
{"text": "modify their inputs, module attributes, or global state.\n Parameters:\n * func (callable or torch.nn.Module) -- A Python\n function or torch.nn.Module that will be invoked. If\n executed in TorchScript, it will execute asynchronously,\n otherwise it will not. Traced invocations of fork will be\n captured in the IR.\n * args -- arguments to invoke func* with.\n * kwargs -- arguments to invoke func with.\n Returns:\n a reference to the execution of func. The value T can only\n be accessed by forcing completion of func through\n torch.jit.wait.\n Return type:\n torch.jit.Future[T]\n Example (fork a free function):\n import torch\n from torch import Tensor\n def foo(a : Tensor, b : int) -> Tensor:\n return a + b\n def bar(a):\n fut : torch.jit.Future[Tensor] = torch.jit.fork(foo, a, b=2)\n return torch.jit.wait(fut)\n script_bar = torch.jit.script(bar)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.fork.html", "category": "pytorch docs"}
{"text": "script_bar = torch.jit.script(bar)\n input = torch.tensor(2)\n # only the scripted version executes asynchronously\n assert script_bar(input) == bar(input)\n # trace is not run asynchronously, but fork is captured in IR\n graph = torch.jit.trace(bar, (input,)).graph\n assert \"fork\" in str(graph)\n Example (fork a module method):\n import torch\n from torch import Tensor\n class AddMod(torch.nn.Module):\n def forward(self, a: Tensor, b : int):\n return a + b\n class Mod(torch.nn.Module):\n def init(self):\n super(self).init()\n self.mod = AddMod()\n def forward(self, input):\n fut = torch.jit.fork(self.mod, a, b=2)\n return torch.jit.wait(fut)\n input = torch.tensor(2)\n mod = Mod()\n assert mod(input) == torch.jit.script(mod).forward(input)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.fork.html", "category": "pytorch docs"}
{"text": "torch.Tensor.conjTensor.conj() -> Tensor\n See \"torch.conj()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.conj.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.logsigmoidtorch.nn.functional.logsigmoid(input) -> Tensor\n Applies element-wise \\text{LogSigmoid}(x_i) = \\log \\left(\\frac{1}{1\n + \\exp(-x_i)}\\right)\n See \"LogSigmoid\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.logsigmoid.html", "category": "pytorch docs"}
{"text": "Parameterclass torch.nn.parameter.Parameter(data=None, requires_grad=True)\n A kind of Tensor that is to be considered a module parameter.\n Parameters are \"Tensor\" subclasses, that have a very special\n property when used with \"Module\" s - when they're assigned as\n Module attributes they are automatically added to the list of its\n parameters, and will appear e.g. in \"parameters()\" iterator.\n Assigning a Tensor doesn't have such effect. This is because one\n might want to cache some temporary state, like last hidden state of\n the RNN, in the model. If there was no such class as \"Parameter\",\n these temporaries would get registered too.\n Parameters:\n * data (Tensor) -- parameter tensor.\n * requires_grad (bool, optional) -- if the parameter\n requires gradient. See Locally disabling gradient computation\n for more details. Default: True", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html", "category": "pytorch docs"}
{"text": "torch.foreach_lgamma_torch._foreach_lgamma(self: List[Tensor]) -> None\n Apply \"torch.lgamma()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_lgamma_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.q_zero_pointTensor.q_zero_point() -> int\n Given a Tensor quantized by linear(affine) quantization, returns\n the zero_point of the underlying quantizer().", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_zero_point.html", "category": "pytorch docs"}
{"text": "torch.Tensor.dimTensor.dim() -> int\n Returns the number of dimensions of \"self\" tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dim.html", "category": "pytorch docs"}
{"text": "PlaceholderObserverclass torch.quantization.observer.PlaceholderObserver(dtype=torch.float32, custom_op_name='', compute_dtype=None, quant_min=None, quant_max=None, is_dynamic=False)\n Observer that doesn't do anything and just passes its configuration\n to the quantized module's \".from_float()\".\n Can be used for quantization to float16 which doesn't require\n determining ranges.\n Parameters:\n * dtype -- dtype argument to the quantize node needed to\n implement the reference model spec.\n * quant_min -- minimum value in quantized domain (TODO:\n align behavior with other observers)\n * quant_min -- maximum value in quantized domain\n * custom_op_name -- (temporary) specify this observer for an\n operator that doesn't require any observation (Can be used in\n Graph Mode Passes for special case ops).\n * compute_dtype (deprecated) -- if set, marks the future", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.PlaceholderObserver.html", "category": "pytorch docs"}
{"text": "quantize function to use dynamic quantization instead of\n static quantization. This field is deprecated, use\n is_dynamic=True instead.\n * is_dynamic -- if True, the quantize function in the\n reference model representation taking stats from this observer\n instance will use dynamic quantization.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.PlaceholderObserver.html", "category": "pytorch docs"}
{"text": "torch.Tensor.element_sizeTensor.element_size() -> int\n Returns the size in bytes of an individual element.\n Example:\n >>> torch.tensor([]).element_size()\n 4\n >>> torch.tensor([], dtype=torch.uint8).element_size()\n 1", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.element_size.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sin_Tensor.sin_() -> Tensor\n In-place version of \"sin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sin_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lcmTensor.lcm(other) -> Tensor\n See \"torch.lcm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lcm.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.parametrize.is_parametrizedtorch.nn.utils.parametrize.is_parametrized(module, tensor_name=None)\n Returns \"True\" if module has an active parametrization.\n If the argument \"tensor_name\" is specified, returns \"True\" if\n \"module[tensor_name]\" is parametrized.\n Parameters:\n * module (nn.Module) -- module to query\n * tensor_name (str, optional) -- attribute in the\n module to query Default: \"None\"\n Return type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.is_parametrized.html", "category": "pytorch docs"}
{"text": "torch.Tensor.scatter_reduce_Tensor.scatter_reduce_(dim, index, src, reduce, *, include_self=True) -> Tensor\n Reduces all values from the \"src\" tensor to the indices specified\n in the \"index\" tensor in the \"self\" tensor using the applied\n reduction defined via the \"reduce\" argument (\"\"sum\"\", \"\"prod\"\",\n \"\"mean\"\", \"\"amax\"\", \"\"amin\"\"). For each value in \"src\", it is\n reduced to an index in \"self\" which is specified by its index in\n \"src\" for \"dimension != dim\" and by the corresponding value in\n \"index\" for \"dimension = dim\". If \"include_self=\"True\"\", the values\n in the \"self\" tensor are included in the reduction.\n \"self\", \"index\" and \"src\" should all have the same number of\n dimensions. It is also required that \"index.size(d) <= src.size(d)\"\n for all dimensions \"d\", and that \"index.size(d) <= self.size(d)\"\n for all dimensions \"d != dim\". Note that \"index\" and \"src\" do not\n broadcast.\n For a 3-D tensor with \"reduce=\"sum\"\" and \"include_self=True\" the", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html", "category": "pytorch docs"}
{"text": "output is given as:\n self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0\n self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1\n self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2\n Note:\n This operation may behave nondeterministically when given tensors\n on a CUDA device. See Reproducibility for more information.\n Note:\n The backward pass is implemented only for \"src.shape ==\n index.shape\".\n Warning:\n This function is in beta and may change in the near future.\n Parameters:\n * dim (int) -- the axis along which to index\n * index (LongTensor) -- the indices of elements to scatter\n and reduce.\n * src (Tensor) -- the source elements to scatter and\n reduce\n * reduce (str) -- the reduction operation to apply for\n non-unique indices (\"\"sum\"\", \"\"prod\"\", \"\"mean\"\", \"\"amax\"\",\n \"\"amin\"\")\n * include_self (bool) -- whether elements from the \"self\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html", "category": "pytorch docs"}
{"text": "tensor are included in the reduction\n Example:\n >>> src = torch.tensor([1., 2., 3., 4., 5., 6.])\n >>> index = torch.tensor([0, 1, 0, 1, 2, 1])\n >>> input = torch.tensor([1., 2., 3., 4.])\n >>> input.scatter_reduce(0, index, src, reduce=\"sum\")\n tensor([5., 14., 8., 4.])\n >>> input.scatter_reduce(0, index, src, reduce=\"sum\", include_self=False)\n tensor([4., 12., 5., 4.])\n >>> input2 = torch.tensor([5., 4., 3., 2.])\n >>> input2.scatter_reduce(0, index, src, reduce=\"amax\")\n tensor([5., 6., 5., 2.])\n >>> input2.scatter_reduce(0, index, src, reduce=\"amax\", include_self=False)\n tensor([3., 6., 5., 2.])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html", "category": "pytorch docs"}
{"text": "torch._foreach_sinhtorch._foreach_sinh(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.sinh()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sinh.html", "category": "pytorch docs"}
{"text": "torch.negativetorch.negative(input, *, out=None) -> Tensor\n Alias for \"torch.neg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.negative.html", "category": "pytorch docs"}
{"text": "ReflectionPad3dclass torch.nn.ReflectionPad3d(padding)\n Pads the input tensor using the reflection of the input boundary.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 6-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom},\n \\text{padding_front}, \\text{padding_back})\n Shape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n D_{out} = D_{in} + \\text{padding_front} +\n \\text{padding_back}\n H_{out} = H_{in} + \\text{padding_top} +\n \\text{padding_bottom}\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ReflectionPad3d(1)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad3d.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> m = nn.ReflectionPad3d(1)\n >>> input = torch.arange(8, dtype=torch.float).reshape(1, 1, 2, 2, 2)\n >>> m(input)\n tensor([[[[[7., 6., 7., 6.],\n [5., 4., 5., 4.],\n [7., 6., 7., 6.],\n [5., 4., 5., 4.]],\n [[3., 2., 3., 2.],\n [1., 0., 1., 0.],\n [3., 2., 3., 2.],\n [1., 0., 1., 0.]],\n [[7., 6., 7., 6.],\n [5., 4., 5., 4.],\n [7., 6., 7., 6.],\n [5., 4., 5., 4.]],\n [[3., 2., 3., 2.],\n [1., 0., 1., 0.],\n [3., 2., 3., 2.],\n [1., 0., 1., 0.]]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad3d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.grid_sampletorch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None)\n Given an \"input\" and a flow-field \"grid\", computes the \"output\"\n using \"input\" values and pixel locations from \"grid\".\n Currently, only spatial (4-D) and volumetric (5-D) \"input\" are\n supported.\n In the spatial (4-D) case, for \"input\" with shape (N, C,\n H_\\text{in}, W_\\text{in}) and \"grid\" with shape (N, H_\\text{out},\n W_\\text{out}, 2), the output will have shape (N, C, H_\\text{out},\n W_\\text{out}).\n For each output location \"output[n, :, h, w]\", the size-2 vector\n \"grid[n, h, w]\" specifies \"input\" pixel locations \"x\" and \"y\",\n which are used to interpolate the output value \"output[n, :, h,\n w]\". In the case of 5D inputs, \"grid[n, d, h, w]\" specifies the\n \"x\", \"y\", \"z\" pixel locations for interpolating \"output[n, :, d, h,\n w]\". \"mode\" argument specifies \"nearest\" or \"bilinear\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"}
{"text": "interpolation method to sample the input pixels.\n \"grid\" specifies the sampling pixel locations normalized by the\n \"input\" spatial dimensions. Therefore, it should have most values\n in the range of \"[-1, 1]\". For example, values \"x = -1, y = -1\" is\n the left-top pixel of \"input\", and values \"x = 1, y = 1\" is the\n right-bottom pixel of \"input\".\n If \"grid\" has values outside the range of \"[-1, 1]\", the\n corresponding outputs are handled as defined by \"padding_mode\".\n Options are\n * \"padding_mode=\"zeros\"\": use \"0\" for out-of-bound grid\n locations,\n * \"padding_mode=\"border\"\": use border values for out-of-bound\n grid locations,\n * \"padding_mode=\"reflection\"\": use values at locations reflected\n by the border for out-of-bound grid locations. For location\n far away from the border, it will keep being reflected until\n becoming in bound, e.g., (normalized) pixel location \"x =\n -3.5\" reflects by border \"-1\" and becomes \"x' = 1.5\", then", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"}
{"text": "reflects by border \"1\" and becomes \"x'' = -0.5\".\n Note:\n This function is often used in conjunction with \"affine_grid()\"\n to build Spatial Transformer Networks .\n Note:\n When using the CUDA backend, this operation may induce\n nondeterministic behaviour in its backward pass that is not\n easily switched off. Please see the notes on Reproducibility for\n background.\n Note:\n NaN values in \"grid\" would be interpreted as \"-1\".\n Parameters:\n * input (Tensor) -- input of shape (N, C, H_\\text{in},\n W_\\text{in}) (4-D case) or (N, C, D_\\text{in}, H_\\text{in},\n W_\\text{in}) (5-D case)\n * grid (Tensor) -- flow-field of shape (N, H_\\text{out},\n W_\\text{out}, 2) (4-D case) or (N, D_\\text{out}, H_\\text{out},\n W_\\text{out}, 3) (5-D case)\n * mode (str) -- interpolation mode to calculate output\n values \"'bilinear'\" | \"'nearest'\" | \"'bicubic'\". Default:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"}
{"text": "\"'bilinear'\" Note: \"mode='bicubic'\" supports only 4-D input.\n When \"mode='bilinear'\" and the input is 5-D, the interpolation\n mode used internally will actually be trilinear. However, when\n the input is 4-D, the interpolation mode will legitimately be\n bilinear.\n * padding_mode (str) -- padding mode for outside grid\n values \"'zeros'\" | \"'border'\" | \"'reflection'\". Default:\n \"'zeros'\"\n * align_corners (bool, optional) -- Geometrically, we\n consider the pixels of the input as squares rather than\n points. If set to \"True\", the extrema (\"-1\" and \"1\") are\n considered as referring to the center points of the input's\n corner pixels. If set to \"False\", they are instead considered\n as referring to the corner points of the input's corner\n pixels, making the sampling more resolution agnostic. This\n option parallels the \"align_corners\" option in", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"}
{"text": "\"interpolate()\", and so whichever option is used here should\n also be used there to resize the input image before grid\n sampling. Default: \"False\"\n Returns:\n output Tensor\n Return type:\n output (Tensor)\n Warning:\n When \"align_corners = True\", the grid positions depend on the\n pixel size relative to the input image size, and so the locations\n sampled by \"grid_sample()\" will differ for the same input given\n at different resolutions (that is, after being upsampled or\n downsampled). The default behavior up to version 1.2.0 was\n \"align_corners = True\". Since then, the default behavior has been\n changed to \"align_corners = False\", in order to bring it in line\n with the default for \"interpolate()\".\n Note:\n \"mode='bicubic'\" is implemented using the cubic convolution\n algorithm with \\alpha=-0.75. The constant \\alpha might be\n different from packages to packages. For example, PIL and OpenCV", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"}
{"text": "use -0.5 and -0.75 respectively. This algorithm may \"overshoot\"\n the range of values it's interpolating. For example, it may\n produce negative values or values greater than 255 when\n interpolating input in [0, 255]. Clamp the results with :func:\n torch.clamp to ensure they are within the valid range.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"}
{"text": "torch.isintorch.isin(elements, test_elements, , assume_unique=False, invert=False) -> Tensor\n Tests if each element of \"elements\" is in \"test_elements\". Returns\n a boolean tensor of the same shape as \"elements\" that is True for\n elements in \"test_elements\" and False otherwise.\n Note:\n One of \"elements\" or \"test_elements\" can be a scalar, but not\n both.\n Parameters:\n * elements (Tensor or Scalar) -- Input elements\n * test_elements (Tensor or Scalar) -- Values against\n which to test for each input element\n * assume_unique (bool, optional) -- If True, assumes\n both \"elements\" and \"test_elements\" contain unique elements,\n which can speed up the calculation. Default: False\n * invert (bool, optional) -- If True, inverts the\n boolean return tensor, resulting in True values for elements\n not* in \"test_elements\". Default: False\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.isin.html", "category": "pytorch docs"}
{"text": "Returns:\n A boolean tensor of the same shape as \"elements\" that is True\n for elements in \"test_elements\" and False otherwise\n -[ Example ]-\n\n\n\ntorch.isin(torch.tensor([[1, 2], [3, 4]]), torch.tensor([2, 3]))\n tensor([[False, True],\n [ True, False]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.isin.html", "category": "pytorch docs"}
{"text": "BackendPatternConfigclass torch.ao.quantization.backend_config.BackendPatternConfig(pattern=None)\n Config object that specifies quantization behavior for a given\n operator pattern. For a detailed example usage, see\n \"BackendConfig\".\n add_dtype_config(dtype_config)\n Add a set of supported data types passed as arguments to\n quantize ops in the reference model spec.\n Return type:\n BackendPatternConfig\n classmethod from_dict(backend_pattern_config_dict)\n Create a \"BackendPatternConfig\" from a dictionary with the\n following items:\n \"pattern\": the pattern being configured \"observation_type\":\n the \"ObservationType\" that specifies how observers should be\n inserted for this pattern \"dtype_configs\": a list of\n dictionaries that represents \"DTypeConfig\" s \"root_module\": a\n \"torch.nn.Module\" that represents the root for this pattern\n \"qat_module\": a \"torch.nn.Module\" that represents the QAT", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"}
{"text": "implementation for this pattern \"reference_quantized_module\":\n a \"torch.nn.Module\" that represents the reference quantized\n implementation for this pattern's root module.\n \"fused_module\": a \"torch.nn.Module\" that represents the fused\n implementation for this pattern \"fuser_method\": a function\n that specifies how to fuse the pattern for this pattern\n \"pattern_complex_format\": the pattern specified in the\n reversed nested tuple format (deprecated)\n Return type:\n BackendPatternConfig\n set_dtype_configs(dtype_configs)\n Set the supported data types passed as arguments to quantize ops\n in the reference model spec, overriding all previously\n registered data types.\n Return type:\n BackendPatternConfig\n set_fused_module(fused_module)\n Set the module that represents the fused implementation for this\n pattern.\n Return type:\n BackendPatternConfig\n set_fuser_method(fuser_method)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"}
{"text": "set_fuser_method(fuser_method)\n Set the function that specifies how to fuse this\n BackendPatternConfig's pattern.\n The first argument of this function should be is_qat, and the\n rest of the arguments should be the items in the tuple pattern.\n The return value of this function should be the resulting fused\n module.\n For example, the fuser method for the pattern (torch.nn.Linear,\n torch.nn.ReLU) can be:\n def fuse_linear_relu(is_qat, linear, relu):\n return torch.ao.nn.intrinsic.LinearReLU(linear, relu)\n For a more complicated example, see https://gist.github.com/jer\n ryzh168/8bea7180a8ba3c279f2c9b050f2a69a6.\n Return type:\n BackendPatternConfig\n set_observation_type(observation_type)\n Set how observers should be inserted in the graph for this\n pattern.\n Observation type here refers to how observers (or quant-dequant\n ops) will be placed in the graph. This is used to produce the", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"}
{"text": "desired reference patterns understood by the backend. Weighted\n ops such as linear and conv require different observers (or\n quantization parameters passed to quantize ops in the reference\n model) for the input and the output.\n There are two observation types:\n OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT (default): the\n output observer instance will be different from the input.\n This is the most common observation type.\n OUTPUT_SHARE_OBSERVER_WITH_INPUT: the output observer\n instance will be the same as the input. This is useful for\n operators like cat.\n Note: This will be renamed in the near future, since we will\n soon insert QuantDeQuantStubs with observers (and fake\n quantizes) attached instead of observers themselves.\n Return type:\n BackendPatternConfig\n set_pattern(pattern)\n Set the pattern to configure.\n The pattern can be a float module, functional operator, pytorch", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"}
{"text": "operator, or a tuple combination of the above. Tuple patterns\n are treated as sequential patterns, and currently only tuples of\n 2 or 3 elements are supported.\n Return type:\n BackendPatternConfig\n set_qat_module(qat_module)\n Set the module that represents the QAT implementation for this\n pattern.\n Return type:\n BackendPatternConfig\n set_reference_quantized_module(reference_quantized_module)\n Set the module that represents the reference quantized\n implementation for this pattern's root module.\n For more detail, see \"set_root_module()\".\n Return type:\n BackendPatternConfig\n set_root_module(root_module)\n Set the module that represents the root for this pattern.\n When we construct the reference quantized model during the\n convert phase, the root modules (e.g. torch.nn.Linear for\n torch.ao.nn.intrinsic.LinearReLU) will be swapped to the\n corresponding reference quantized modules (e.g.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"}
{"text": "torch.ao.nn.reference.quantized.Linear). This allows custom\n backends to specify custom reference quantized module\n implementations to match the numerics of their lowered\n operators. Since this is a one-to-one mapping, both the root\n module and the reference quantized module must be specified in\n the same BackendPatternConfig in order for the conversion to\n take place.\n Return type:\n BackendPatternConfig\n to_dict()\n Convert this \"BackendPatternConfig\" to a dictionary with the\n items described in \"from_dict()\".\n Return type:\n Dict[str, Any]", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"}
{"text": "torch.randntorch.randn(size, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor\n Returns a tensor filled with random numbers from a normal\n distribution with mean 0 and variance 1 (also called the\n standard normal distribution).\n \\text{out}_{i} \\sim \\mathcal{N}(0, 1)\n The shape of the tensor is defined by the variable argument \"size\".\n Parameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\n Keyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * out (Tensor, optional) -- the output tensor.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").", "source": "https://pytorch.org/docs/stable/generated/torch.randn.html", "category": "pytorch docs"}
{"text": "(see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> torch.randn(4)\n tensor([-2.1436, 0.9966, 2.3426, -0.6366])\n >>> torch.randn(2, 3)\n tensor([[ 1.5954, 2.8929, -1.0923],", "source": "https://pytorch.org/docs/stable/generated/torch.randn.html", "category": "pytorch docs"}
{"text": "tensor([[ 1.5954, 2.8929, -1.0923],\n [ 1.1719, -0.4709, -0.1996]])", "source": "https://pytorch.org/docs/stable/generated/torch.randn.html", "category": "pytorch docs"}
{"text": "torch.linalg.lu_solvetorch.linalg.lu_solve(LU, pivots, B, , left=True, adjoint=False, out=None) -> Tensor\n Computes the solution of a square system of linear equations with a\n unique solution given an LU decomposition.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, this function\n computes the solution X \\in \\mathbb{K}^{n \\times k} of the linear\n system associated to A \\in \\mathbb{K}^{n \\times n}, B \\in\n \\mathbb{K}^{n \\times k}, which is defined as\n AX = B\n where A is given factorized as returned by \"lu_factor()\".\n If \"left\"= False, this function returns the matrix X \\in\n \\mathbb{K}^{n \\times k} that solves the system\n XA = B\\mathrlap{\\qquad A \\in \\mathbb{K}^{k \\times k}, B \\in\n \\mathbb{K}^{n \\times k}.}\n If \"adjoint\"= True (and \"left\"= True), given an LU\n factorization of :math:`A* this function function returns the X \\in\n \\mathbb{K}^{n \\times k} that solves the system", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html", "category": "pytorch docs"}
{"text": "\\mathbb{K}^{n \\times k} that solves the system\n A^{\\text{H}}X = B\\mathrlap{\\qquad A \\in \\mathbb{K}^{k \\times k},\n B \\in \\mathbb{K}^{n \\times k}.}\n where A^{\\text{H}} is the conjugate transpose when A is complex,\n and the transpose when A is real-valued. The \"left\"= False case\n is analogous.\n Supports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\n Parameters:\n * LU (Tensor) -- tensor of shape (, n, n) (or (, k,\n k) if \"left\"= True) where *** is zero or more batch\n dimensions as returned by \"lu_factor()\".\n * pivots (Tensor) -- tensor of shape (, n) (or (, k)\n if \"left\"= True) where *** is zero or more batch dimensions\n as returned by \"lu_factor()\".\n * B (Tensor) -- right-hand side tensor of shape (, n,\n k)*.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html", "category": "pytorch docs"}
{"text": "k).\n Keyword Arguments:\n * left (bool, optional) -- whether to solve the system\n AX=B or XA = B. Default: True.\n * adjoint (bool, optional) -- whether to solve the\n system AX=B or A^{\\text{H}}X = B. Default: False.\n * out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None*.\n Examples:\n >>> A = torch.randn(3, 3)\n >>> LU, pivots = torch.linalg.lu_factor(A)\n >>> B = torch.randn(3, 2)\n >>> X = torch.linalg.lu_solve(LU, pivots, B)\n >>> torch.allclose(A @ X, B)\n True\n >>> B = torch.randn(3, 3, 2) # Broadcasting rules apply: A is broadcasted\n >>> X = torch.linalg.lu_solve(LU, pivots, B)\n >>> torch.allclose(A @ X, B)\n True\n >>> B = torch.randn(3, 5, 3)\n >>> X = torch.linalg.lu_solve(LU, pivots, B, left=False)\n >>> torch.allclose(X @ A, B)\n True\n >>> B = torch.randn(3, 3, 4) # Now solve for A^T", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html", "category": "pytorch docs"}
{"text": "\n\n\nX = torch.linalg.lu_solve(LU, pivots, B, adjoint=True)\n >>> torch.allclose(A.mT @ X, B)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html", "category": "pytorch docs"}
{"text": "torch.sgntorch.sgn(input, *, out=None) -> Tensor\n This function is an extension of torch.sign() to complex tensors.\n It computes a new tensor whose elements have the same angles as the\n corresponding elements of \"input\" and absolute values (i.e.\n magnitudes) of one for complex tensors and is equivalent to\n torch.sign() for non-complex tensors.\n \\text{out}_{i} = \\begin{cases} 0 &\n |\\text{input}_i| == 0 \\\\\n \\frac{\\text{input}_i}{|\\text{input}_i|} &\n \\text{otherwise} \\end{cases}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> t = torch.tensor([3+4j, 7-24j, 0, 1+2j])\n >>> t.sgn()\n tensor([0.6000+0.8000j, 0.2800-0.9600j, 0.0000+0.0000j, 0.4472+0.8944j])", "source": "https://pytorch.org/docs/stable/generated/torch.sgn.html", "category": "pytorch docs"}
{"text": "torch.matrix_powertorch.matrix_power(input, n, *, out=None) -> Tensor\n Alias for \"torch.linalg.matrix_power()\"", "source": "https://pytorch.org/docs/stable/generated/torch.matrix_power.html", "category": "pytorch docs"}
{"text": "torch.Tensor.storage_typeTensor.storage_type() -> type\n Returns the type of the underlying storage.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.storage_type.html", "category": "pytorch docs"}
{"text": "torch.cuda.OutOfMemoryErrorexception torch.cuda.OutOfMemoryError\n Exception raised when CUDA is out of memory.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.OutOfMemoryError.html", "category": "pytorch docs"}
{"text": "torch.as_tensortorch.as_tensor(data, dtype=None, device=None) -> Tensor\n Converts \"data\" into a tensor, sharing data and preserving autograd\n history if possible.\n If \"data\" is already a tensor with the requested dtype and device\n then \"data\" itself is returned, but if \"data\" is a tensor with a\n different dtype or device then it's copied as if using\n data.to(dtype=dtype, device=device).\n If \"data\" is a NumPy array (an ndarray) with the same dtype and\n device then a tensor is constructed using \"torch.from_numpy()\".\n See also:\n \"torch.tensor()\" never shares its data and creates a new \"leaf\n tensor\" (see Autograd mechanics).\n Parameters:\n * data (array_like) -- Initial data for the tensor. Can be\n a list, tuple, NumPy \"ndarray\", scalar, and other types.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", infers data type from\n \"data\".", "source": "https://pytorch.org/docs/stable/generated/torch.as_tensor.html", "category": "pytorch docs"}
{"text": "\"data\".\n * device (\"torch.device\", optional) -- the device of the\n constructed tensor. If None and data is a tensor then the\n device of data is used. If None and data is not a tensor then\n the result tensor is constructed on the CPU.\n Example:\n >>> a = numpy.array([1, 2, 3])\n >>> t = torch.as_tensor(a)\n >>> t\n tensor([ 1, 2, 3])\n >>> t[0] = -1\n >>> a\n array([-1, 2, 3])\n >>> a = numpy.array([1, 2, 3])\n >>> t = torch.as_tensor(a, device=torch.device('cuda'))\n >>> t\n tensor([ 1, 2, 3])\n >>> t[0] = -1\n >>> a\n array([1, 2, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.as_tensor.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.softplustorch.nn.functional.softplus(input, beta=1, threshold=20) -> Tensor\n Applies element-wise, the function \\text{Softplus}(x) =\n \\frac{1}{\\beta} * \\log(1 + \\exp(\\beta * x)).\n For numerical stability the implementation reverts to the linear\n function when input \\times \\beta > threshold.\n See \"Softplus\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softplus.html", "category": "pytorch docs"}
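The softplus entry above is a one-line formula plus the linear revert for large inputs; a torch-free pure-Python sketch of that element-wise rule (the function name here is my own, not a torch API):

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    # Revert to the identity when input * beta > threshold, as the
    # docs describe, avoiding overflow in exp(beta * x).
    if x * beta > threshold:
        return x
    return math.log1p(math.exp(beta * x)) / beta
```

At x = 0 this gives log(2)/beta, and for large x it returns x exactly.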
{"text": "torch.tiletorch.tile(input, dims) -> Tensor\n Constructs a tensor by repeating the elements of \"input\". The\n \"dims\" argument specifies the number of repetitions in each\n dimension.\n If \"dims\" specifies fewer dimensions than \"input\" has, then ones\n are prepended to \"dims\" until all dimensions are specified. For\n example, if \"input\" has shape (8, 6, 4, 2) and \"dims\" is (2, 2),\n then \"dims\" is treated as (1, 1, 2, 2).\n Analogously, if \"input\" has fewer dimensions than \"dims\" specifies,\n then \"input\" is treated as if it were unsqueezed at dimension zero\n until it has as many dimensions as \"dims\" specifies. For example,\n if \"input\" has shape (4, 2) and \"dims\" is (3, 3, 2, 2), then\n \"input\" is treated as if it had the shape (1, 1, 4, 2).\n Note:\n This function is similar to NumPy's tile function.\n Parameters:\n * input (Tensor) -- the tensor whose elements to repeat.\n * dims (tuple) -- the number of repetitions per dimension.\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.tile.html", "category": "pytorch docs"}
{"text": "Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> x.tile((2,))\n tensor([1, 2, 3, 1, 2, 3])\n >>> y = torch.tensor([[1, 2], [3, 4]])\n >>> torch.tile(y, (2, 2))\n tensor([[1, 2, 1, 2],\n [3, 4, 3, 4],\n [1, 2, 1, 2],\n [3, 4, 3, 4]])", "source": "https://pytorch.org/docs/stable/generated/torch.tile.html", "category": "pytorch docs"}
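The two padding rules in the tile entry above (prepend ones to "dims", or treat "input" as unsqueezed at dimension zero) can be sketched in plain Python; `align_tile_args` is a hypothetical helper, not part of torch:

```python
def align_tile_args(shape, dims):
    """Pad the shorter of (shape, dims) with leading 1s so they match."""
    shape, dims = tuple(shape), tuple(dims)
    if len(dims) < len(shape):
        dims = (1,) * (len(shape) - len(dims)) + dims
    elif len(shape) < len(dims):
        shape = (1,) * (len(dims) - len(shape)) + shape
    return shape, dims
```

This reproduces both worked examples from the docs: dims (2, 2) against shape (8, 6, 4, 2) becomes (1, 1, 2, 2), and shape (4, 2) against dims (3, 3, 2, 2) becomes (1, 1, 4, 2).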
{"text": "Conv3dclass torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n Applies a 3D convolution over an input signal composed of several\n input planes.\n In the simplest case, the output value of the layer with input size\n (N, C_{in}, D, H, W) and output (N, C_{out}, D_{out}, H_{out},\n W_{out}) can be precisely described as:\n out(N_i, C_{out_j}) = bias(C_{out_j}) +\n \\sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \\star input(N_i,\n k)\n where \\star is the valid 3D cross-correlation operator\n This module supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n * \"stride\" controls the stride for the cross-correlation.\n * \"padding\" controls the amount of padding applied to the input. It\n can be either a string {'valid', 'same'} or a tuple of ints", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"}
{"text": "giving the amount of implicit padding applied on both sides.\n * \"dilation\" controls the spacing between the kernel points; also\n known as the à trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n * \"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". For example,\n * At groups=1, all inputs are convolved to all outputs.\n * At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n * At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\n The parameters \"kernel_size\", \"stride\", \"padding\", \"dilation\" can\n either be:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"}
{"text": "either be:\n * a single \"int\" -- in which case the same value is used for the\n depth, height and width dimension\n * a \"tuple\" of three ints -- in which case, the first int is\n used for the depth dimension, the second int for the height\n dimension and the third int for the width dimension\n Note:\n When groups == in_channels and out_channels == K *\n in_channels, where K is a positive integer, this operation is\n also known as a \"depthwise convolution\".In other words, for an\n input of size (N, C_{in}, L_{in}), a depthwise convolution with a\n depthwise multiplier K can be performed with the arguments\n (C_\\text{in}=C_\\text{in}, C_\\text{out}=C_\\text{in} \\times\n \\text{K}, ..., \\text{groups}=C_\\text{in}).\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"}
{"text": "can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Note:\n \"padding='valid'\" is the same as no padding. \"padding='same'\"\n pads the input so the output has the same shape as the input.\n However, this mode doesn't support any stride values other than 1.\n Note:\n This module supports complex data types i.e. \"complex32,\n complex64, complex128\".\n Parameters:\n * in_channels (int) -- Number of channels in the input\n image\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int, tuple or str, optional) --\n Padding added to all six sides of the input. Default: 0", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"}
{"text": "\n * padding_mode (str, optional) -- \"'zeros'\",\n \"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n Shape:\n * Input: (N, C_{in}, D_{in}, H_{in}, W_{in}) or (C_{in}, D_{in},\n H_{in}, W_{in})\n * Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) or (C_{out},\n D_{out}, H_{out}, W_{out}), where\n D_{out} = \\left\\lfloor\\frac{D_{in} + 2 \\times\n \\text{padding}[0] - \\text{dilation}[0] \\times\n (\\text{kernel_size}[0] - 1) - 1}{\\text{stride}[0]} +\n 1\\right\\rfloor\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"}
{"text": "\\text{padding}[1] - \\text{dilation}[1] \\times\n (\\text{kernel_size}[1] - 1) - 1}{\\text{stride}[1]} +\n 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[2] - \\text{dilation}[2] \\times\n (\\text{kernel_size}[2] - 1) - 1}{\\text{stride}[2]} +\n 1\\right\\rfloor\n Variables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{out_channels},\n \\frac{\\text{in_channels}}{\\text{groups}},\n \\text{kernel_size[0]}, \\text{kernel_size[1]},\n \\text{kernel_size[2]}). The values of these weights are\n sampled from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{groups}{C_\\text{in} *\n \\prod_{i=0}^{2}\\text{kernel_size}[i]}\n * bias (Tensor) -- the learnable bias of the module of\n shape (out_channels). If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"}
{"text": "\\sqrt{k}) where k = \\frac{groups}{C_\\text{in} *\n \\prod_{i=0}^{2}\\text{kernel_size}[i]}\n Examples:\n >>> # With square kernels and equal stride\n >>> m = nn.Conv3d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))\n >>> input = torch.randn(20, 16, 10, 50, 100)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"}
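The floor formulas for D_out, H_out and W_out in the Conv3d entry above can be checked with a small pure-Python helper (a sketch under my own naming, not a torch API), applied to the second example's arguments:

```python
def conv_out_size(size, padding, dilation, kernel, stride):
    """Apply the Conv3d output-size formula to one spatial dimension."""
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# Second example above: input (20, 16, 10, 50, 100), kernel (3, 5, 2),
# stride (2, 1, 1), padding (4, 2, 0), dilation 1 in every dimension.
spatial = tuple(
    conv_out_size(i, p, 1, k, s)
    for i, p, k, s in zip((10, 50, 100), (4, 2, 0), (3, 5, 2), (2, 1, 1))
)
```

So the example's output tensor has shape (20, 33, 8, 50, 99).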
{"text": "torch.Tensor.cudaTensor.cuda(device=None, non_blocking=False, memory_format=torch.preserve_format) -> Tensor\n Returns a copy of this object in CUDA memory.\n If this object is already in CUDA memory and on the correct device,\n then no copy is performed and the original object is returned.\n Parameters:\n * device (\"torch.device\") -- The destination GPU device.\n Defaults to the current CUDA device.\n * non_blocking (bool) -- If \"True\" and the source is in\n pinned memory, the copy will be asynchronous with respect to\n the host. Otherwise, the argument has no effect. Default:\n \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cuda.html", "category": "pytorch docs"}
{"text": "torch.Tensor.exponential_Tensor.exponential_(lambd=1, *, generator=None) -> Tensor\n Fills \"self\" tensor with elements drawn from the exponential\n distribution:\n f(x) = \\lambda e^{-\\lambda x}", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.exponential_.html", "category": "pytorch docs"}
{"text": "torch.randn_liketorch.randn_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\n Returns a tensor with the same size as \"input\" that is filled with\n random numbers from a normal distribution with mean 0 and variance\n 1. \"torch.randn_like(input)\" is equivalent to\n \"torch.randn(input.size(), dtype=input.dtype, layout=input.layout,\n device=input.device)\".\n Parameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n * device (\"torch.device\", optional) -- the desired device of", "source": "https://pytorch.org/docs/stable/generated/torch.randn_like.html", "category": "pytorch docs"}
{"text": "returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.randn_like.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.poisson_nll_losstorch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')\n Poisson negative log likelihood loss.\n See \"PoissonNLLLoss\" for details.\n Parameters:\n * input (Tensor) -- expectation of underlying Poisson\n distribution.\n * target (Tensor) -- random sample target \\sim\n \\text{Poisson}(input).\n * log_input (bool) -- if \"True\" the loss is computed as\n \\exp(\\text{input}) - \\text{target} * \\text{input}, if \"False\"\n then loss is \\text{input} - \\text{target} *\n \\log(\\text{input}+\\text{eps}). Default: \"True\"\n * full (bool) -- whether to compute full loss, i. e. to\n add the Stirling approximation term. Default: \"False\"\n \\text{target} * \\log(\\text{target}) - \\text{target} + 0.5 *\n \\log(2 * \\pi * \\text{target}).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.poisson_nll_loss.html", "category": "pytorch docs"}
{"text": "\\log(2 * \\pi * \\text{target}).\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there are\n multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * eps (float, optional) -- Small value to avoid\n evaluation of \\log(0) when \"log_input\"=\"False\". Default: 1e-8\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.poisson_nll_loss.html", "category": "pytorch docs"}
{"text": "to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.poisson_nll_loss.html", "category": "pytorch docs"}
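The two per-element branches of the Poisson NLL loss described above, written out as a torch-free sketch (reduction and the Stirling term omitted; the helper name is my own):

```python
import math

def poisson_nll(inp, tgt, log_input=True, eps=1e-8):
    # log_input=True:  loss = exp(input) - target * input
    # log_input=False: loss = input - target * log(input + eps)
    if log_input:
        return math.exp(inp) - tgt * inp
    return inp - tgt * math.log(inp + eps)
```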
{"text": "torch._foreach_log1p_torch._foreach_log1p_(self: List[Tensor]) -> None\n Apply \"torch.log1p()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log1p_.html", "category": "pytorch docs"}
{"text": "torch.maxtorch.max(input) -> Tensor\n Returns the maximum value of all elements in the \"input\" tensor.\n Warning:\n This function produces deterministic (sub)gradients unlike\n \"max(dim=0)\"\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 0.6763, 0.7445, -2.2369]])\n >>> torch.max(a)\n tensor(0.7445)\n torch.max(input, dim, keepdim=False, *, out=None)\n Returns a namedtuple \"(values, indices)\" where \"values\" is the\n maximum value of each row of the \"input\" tensor in the given\n dimension \"dim\". And \"indices\" is the index location of each\n maximum value found (argmax).\n If \"keepdim\" is \"True\", the output tensors are of the same size as\n \"input\" except in the dimension \"dim\" where they are of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensors having 1 fewer dimension than \"input\".\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.max.html", "category": "pytorch docs"}
{"text": "Note:\n If there are multiple maximal values in a reduced row then the\n indices of the first maximal value are returned.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not. Default: \"False\".\n Keyword Arguments:\n out (tuple, optional) -- the result tuple of two\n output tensors (max, max_indices)\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[-1.2360, -0.2942, -0.1222, 0.8475],\n [ 1.1949, -1.1127, -2.2379, -0.6702],\n [ 1.5717, -0.9207, 0.1297, -1.8768],\n [-0.6172, 1.0036, -0.6060, -0.2432]])\n >>> torch.max(a, 1)\n torch.return_types.max(values=tensor([0.8475, 1.1949, 1.5717, 1.0036]), indices=tensor([3, 0, 0, 1]))\n torch.max(input, other, *, out=None) -> Tensor\n See \"torch.maximum()\".", "source": "https://pytorch.org/docs/stable/generated/torch.max.html", "category": "pytorch docs"}
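The note above about ties (the indices of the first maximal value are returned) is easy to mirror in plain Python, since `list.index` already returns the first match:

```python
def row_max(rows):
    """Per-row (values, indices), taking the first argmax on ties."""
    values = [max(r) for r in rows]
    indices = [r.index(v) for r, v in zip(rows, values)]
    return values, indices
```

For the tied row [5, 5, 1] the reported index is 0, matching the documented tie-breaking.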
{"text": "torch.Tensor.storageTensor.storage() -> torch.TypedStorage\n Returns the underlying \"TypedStorage\".\n Warning:\n \"TypedStorage\" is deprecated. It will be removed in the future,\n and \"UntypedStorage\" will be the only storage class. To access\n the \"UntypedStorage\" directly, use \"Tensor.untyped_storage()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.storage.html", "category": "pytorch docs"}
{"text": "torch.Tensor.crossTensor.cross(other, dim=None) -> Tensor\n See \"torch.cross()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cross.html", "category": "pytorch docs"}
{"text": "torch.corrcoeftorch.corrcoef(input) -> Tensor\n Estimates the Pearson product-moment correlation coefficient matrix\n of the variables given by the \"input\" matrix, where rows are the\n variables and columns are the observations.\n Note:\n The correlation coefficient matrix R is computed using the\n covariance matrix C as given by R_{ij} = \\frac{ C_{ij} } { \\sqrt{\n C_{ii} * C_{jj} } }\n Note:\n Due to floating point rounding, the resulting array may not be\n Hermitian and its diagonal elements may not be 1. The real and\n imaginary values are clipped to the interval [-1, 1] in an\n attempt to improve this situation.\n Parameters:\n input (Tensor) -- A 2D matrix containing multiple\n variables and observations, or a Scalar or 1D vector\n representing a single variable.\n Returns:\n (Tensor) The correlation coefficient matrix of the variables.\n See also: \"torch.cov()\" covariance matrix.\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.corrcoef.html", "category": "pytorch docs"}
{"text": "Example:\n >>> x = torch.tensor([[0, 1, 2], [2, 1, 0]])\n >>> torch.corrcoef(x)\n tensor([[ 1., -1.],\n [-1., 1.]])\n >>> x = torch.randn(2, 4)\n >>> x\n tensor([[-0.2678, -0.0908, -0.3766, 0.2780],\n [-0.5812, 0.1535, 0.2387, 0.2350]])\n >>> torch.corrcoef(x)\n tensor([[1.0000, 0.3582],\n [0.3582, 1.0000]])\n >>> torch.corrcoef(x[0])\n tensor(1.)", "source": "https://pytorch.org/docs/stable/generated/torch.corrcoef.html", "category": "pytorch docs"}
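The R_{ij} formula in the corrcoef note above, specialized to two real-valued variables as a pure-Python sketch (my own helper, not a torch API):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length real sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

The first example above is exactly this case: rows [0, 1, 2] and [2, 1, 0] are perfectly anticorrelated, giving -1.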
{"text": "torch.bitwise_left_shifttorch.bitwise_left_shift(input, other, *, out=None) -> Tensor\n Computes the left arithmetic shift of \"input\" by \"other\" bits. The\n input tensor must be of integral type. This operator supports\n broadcasting to a common shape and type promotion.\n The operation applied is:\n \\text{out}_i = \\text{input}_i << \\text{other}_i\n Parameters:\n * input (Tensor or Scalar) -- the first input tensor\n * other (Tensor or Scalar) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.bitwise_left_shift(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([-2, -2, 24], dtype=torch.int8)", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_left_shift.html", "category": "pytorch docs"}
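The int8 results in the example above can be reproduced by shifting in plain Python and wrapping to two's-complement int8 (a hypothetical helper, not a torch API):

```python
def lshift_int8(a, b):
    """Left shift with two's-complement int8 wrap-around."""
    r = (a << b) & 0xFF
    return r - 256 if r >= 128 else r
```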
{"text": "torch.heavisidetorch.heaviside(input, values, *, out=None) -> Tensor\n Computes the Heaviside step function for each element in \"input\".\n The Heaviside step function is defined as:\n \\text{heaviside}(input, values) = \\begin{cases} 0, &\n \\text{if input < 0} \\\\ values, & \\text{if input == 0} \\\\\n 1, & \\text{if input > 0} \\end{cases}\n Parameters:\n * input (Tensor) -- the input tensor.\n * values (Tensor) -- The values to use where \"input\" is\n zero.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> input = torch.tensor([-1.5, 0, 2.0])\n >>> values = torch.tensor([0.5])\n >>> torch.heaviside(input, values)\n tensor([0.0000, 0.5000, 1.0000])\n >>> values = torch.tensor([1.2, -2.0, 3.5])\n >>> torch.heaviside(input, values)\n tensor([0., -2., 1.])", "source": "https://pytorch.org/docs/stable/generated/torch.heaviside.html", "category": "pytorch docs"}
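The three cases of the step function above, as a scalar pure-Python sketch (names are my own):

```python
def heaviside(x, value):
    # 0 below zero, `value` at exactly zero, 1 above zero.
    if x < 0:
        return 0.0
    if x == 0:
        return value
    return 1.0
```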
{"text": "float16_dynamic_qconfigtorch.quantization.qconfig.float16_dynamic_qconfig\n alias of QConfig(activation=functools.partial(,\n dtype=torch.float16, is_dynamic=True){},\n weight=functools.partial(,\n dtype=torch.float16){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float16_dynamic_qconfig.html", "category": "pytorch docs"}
{"text": "torch.cuda.get_allocator_backendtorch.cuda.get_allocator_backend()\n Returns a string describing the active allocator backend as set by\n \"PYTORCH_CUDA_ALLOC_CONF\". Currently available backends are\n \"native\" (PyTorch's native caching allocator) and\n \"cudaMallocAsync\" (CUDA's built-in asynchronous allocator).\n Note:\n See Memory management for details on choosing the allocator\n backend.\n Return type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_allocator_backend.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cholesky_solveTensor.cholesky_solve(input2, upper=False) -> Tensor\n See \"torch.cholesky_solve()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky_solve.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.upsampletorch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)\n Upsamples the input to either the given \"size\" or the given\n \"scale_factor\".\n Warning:\n This function is deprecated in favor of\n \"torch.nn.functional.interpolate()\". It is equivalent to\n \"nn.functional.interpolate(...)\".\n Note:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n The algorithm used for upsampling is determined by \"mode\".\n Currently temporal, spatial and volumetric upsampling are\n supported, i.e. expected inputs are 3-D, 4-D or 5-D in shape.\n The input dimensions are interpreted in the form: mini-batch x\n channels x [optional depth] x [optional height] x width.\n The modes available for upsampling are: nearest, linear (3D-\n only), bilinear, bicubic (4D-only), trilinear (5D-only)\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html", "category": "pytorch docs"}
{"text": "Parameters:\n * input (Tensor) -- the input tensor\n * size (int or Tuple[int] or Tuple[int,\n int] or Tuple[int, int, int]) -- output\n spatial size.\n * scale_factor (float or Tuple[float]) --\n multiplier for spatial size. Has to match input size if it is\n a tuple.\n * mode (str) -- algorithm used for upsampling: \"'nearest'\"\n | \"'linear'\" | \"'bilinear'\" | \"'bicubic'\" | \"'trilinear'\".\n Default: \"'nearest'\"\n * align_corners (bool, optional) -- Geometrically, we\n consider the pixels of the input and output as squares rather\n than points. If set to \"True\", the input and output tensors\n are aligned by the center points of their corner pixels,\n preserving the values at the corner pixels. If set to \"False\",\n the input and output tensors are aligned by the corner points", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html", "category": "pytorch docs"}
{"text": "of their corner pixels, and the interpolation uses edge value\n padding for out-of-boundary values, making this operation\n independent of input size when \"scale_factor\" is kept the\n same. This only has an effect when \"mode\" is \"'linear'\",\n \"'bilinear'\", \"'bicubic'\" or \"'trilinear'\". Default: \"False\"\n Note:\n With \"mode='bicubic'\", it's possible to cause overshoot, in other\n words it can produce negative values or values greater than 255\n for images. Explicitly call \"result.clamp(min=0, max=255)\" if you\n want to reduce the overshoot when displaying the image.\n Warning:\n With \"align_corners = True\", the linearly interpolating modes\n (linear, bilinear, and trilinear) don't proportionally\n align the output and input pixels, and thus the output values can\n depend on the input size. This was the default behavior for these\n modes up to version 0.3.1. Since then, the default behavior is", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html", "category": "pytorch docs"}
{"text": "\"align_corners = False\". See \"Upsample\" for concrete examples on\n how this affects the outputs.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.relu_torch.nn.functional.relu_(input) -> Tensor\n In-place version of \"relu()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.relu_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.storage_offsetTensor.storage_offset() -> int\n Returns \"self\" tensor's offset in the underlying storage in terms\n of number of storage elements (not bytes).\n Example:\n >>> x = torch.tensor([1, 2, 3, 4, 5])\n >>> x.storage_offset()\n 0\n >>> x[3:].storage_offset()\n 3", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.storage_offset.html", "category": "pytorch docs"}
{"text": "Hardswishclass torch.ao.nn.quantized.Hardswish(scale, zero_point)\n This is the quantized version of \"Hardswish\".\n Parameters:\n * scale -- quantization scale of the output tensor\n * zero_point -- quantization zero point of the output tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Hardswish.html", "category": "pytorch docs"}
{"text": "torch.linalg.vecdottorch.linalg.vecdot(x, y, *, dim=-1, out=None) -> Tensor\n Computes the dot product of two batches of vectors along a\n dimension.\n In symbols, this function computes\n \\sum_{i=1}^n \\overline{x_i}y_i.\n over the dimension \"dim\" where \\overline{x_i} denotes the conjugate\n for complex vectors, and it is the identity for real vectors.\n Supports input of half, bfloat16, float, double, cfloat, cdouble\n and integral dtypes. It also supports broadcasting.\n Parameters:\n * x (Tensor) -- first batch of vectors of shape (*, n).\n * y (Tensor) -- second batch of vectors of shape (*, n).\n Keyword Arguments:\n * dim (int) -- Dimension along which to compute the dot\n product. Default: -1.\n * out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Examples:\n >>> v1 = torch.randn(3, 2)\n >>> v2 = torch.randn(3, 2)\n >>> linalg.vecdot(v1, v2)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.vecdot.html", "category": "pytorch docs"}
{"text": "\n\n\nlinalg.vecdot(v1, v2)\n tensor([ 0.3223, 0.2815, -0.1944])\n >>> torch.vdot(v1[0], v2[0])\n tensor(0.3223)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.vecdot.html", "category": "pytorch docs"}
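The conjugated sum in the vecdot entry above is a one-liner in plain Python for a single pair of vectors; `complex.conjugate()` supplies the conjugation, and is the identity for real inputs:

```python
def vecdot(x, y):
    """Sum of conj(x_i) * y_i for one pair of same-length vectors."""
    return sum(complex(a).conjugate() * b for a, b in zip(x, y))
```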
{"text": "torch.Tensor.strideTensor.stride(dim) -> tuple or int\n Returns the stride of \"self\" tensor.\n Stride is the jump necessary to go from one element to the next one\n in the specified dimension \"dim\". A tuple of all strides is\n returned when no argument is passed in. Otherwise, an integer value\n is returned as the stride in the particular dimension \"dim\".\n Parameters:\n dim (int, optional) -- the desired dimension in which\n stride is required\n Example:\n >>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])\n >>> x.stride()\n (5, 1)\n >>> x.stride(0)\n 5\n >>> x.stride(-1)\n 1", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.stride.html", "category": "pytorch docs"}
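For a contiguous row-major tensor, the strides described above are just reversed cumulative products of the trailing sizes; a sketch (my own helper, not a torch API):

```python
def contiguous_strides(shape):
    """Row-major strides in elements, e.g. (2, 5) -> (5, 1)."""
    strides, step = [], 1
    for size in reversed(shape):
        strides.append(step)
        step *= size
    return tuple(reversed(strides))
```

This matches the example's x.stride() == (5, 1) for a (2, 5) tensor.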
{"text": "torch.Tensor.bitwise_left_shift_Tensor.bitwise_left_shift_(other) -> Tensor\n In-place version of \"bitwise_left_shift()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_left_shift_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logsumexpTensor.logsumexp(dim, keepdim=False) -> Tensor\n See \"torch.logsumexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logsumexp.html", "category": "pytorch docs"}
{"text": "ReplicationPad1dclass torch.nn.ReplicationPad1d(padding)\n Pads the input tensor using replication of the input boundary.\n For N-dimensional padding, use \"torch.nn.functional.pad()\".\n Parameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 2-tuple,\n uses (\\text{padding_left}, \\text{padding_right})\n Shape:\n * Input: (C, W_{in}) or (N, C, W_{in}).\n * Output: (C, W_{out}) or (N, C, W_{out}), where\n W_{out} = W_{in} + \\text{padding_left} +\n \\text{padding_right}\n Examples:\n >>> m = nn.ReplicationPad1d(2)\n >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)\n >>> input\n tensor([[[0., 1., 2., 3.],\n [4., 5., 6., 7.]]])\n >>> m(input)\n tensor([[[0., 0., 0., 1., 2., 3., 3., 3.],\n [4., 4., 4., 5., 6., 7., 7., 7.]]])\n >>> # using different paddings for different sides", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad1d.html", "category": "pytorch docs"}
{"text": ">>> m = nn.ReplicationPad1d((3, 1))\n >>> m(input)\n tensor([[[0., 0., 0., 0., 1., 2., 3., 3.],\n [4., 4., 4., 4., 5., 6., 7., 7.]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad1d.html", "category": "pytorch docs"}
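The padding rule itself is simple to state: repeat the boundary value on each side. A pure-Python sketch for a single 1-D sequence (an illustration; `nn.ReplicationPad1d` operates on batched tensors along the last dimension):

```python
# Replication padding of a 1-D sequence: repeat the first element
# pad_left times and the last element pad_right times.
def replication_pad1d(seq, pad_left, pad_right):
    return [seq[0]] * pad_left + list(seq) + [seq[-1]] * pad_right

print(replication_pad1d([0., 1., 2., 3.], 2, 2))  # matches the doc's first row
```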
{"text": "torch.linalg.qrtorch.linalg.qr(A, mode='reduced', *, out=None)\n Computes the QR decomposition of a matrix.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the full QR\n decomposition of a matrix A \\in \\mathbb{K}^{m \\times n} is\n defined as\n A = QR\\mathrlap{\\qquad Q \\in \\mathbb{K}^{m \\times m}, R \\in\n \\mathbb{K}^{m \\times n}}\n where Q is orthogonal in the real case and unitary in the complex\n case, and R is upper triangular with real diagonal (even in the\n complex case).\n When m > n (tall matrix), as R is upper triangular, its last m\n - n rows are zero. In this case, we can drop the last m - n\n columns of Q to form the reduced QR decomposition:\n A = QR\\mathrlap{\\qquad Q \\in \\mathbb{K}^{m \\times n}, R \\in\n \\mathbb{K}^{n \\times n}}\n The reduced QR decomposition agrees with the full QR decomposition\n when n >= m (wide matrix).\n Supports input of float, double, cfloat and cdouble dtypes. Also", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"}
{"text": "supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n The parameter \"mode\" chooses between the full and reduced QR\n decomposition. If \"A\" has shape (*, m, n), denoting k = min(m,\n n)\n * \"mode\"= 'reduced' (default): Returns (Q, R) of shapes (*, m,\n k), (*, k, n) respectively. It is always differentiable.\n * \"mode\"= 'complete': Returns (Q, R) of shapes (*, m, m),\n (*, m, n) respectively. It is differentiable for m <= n.\n * \"mode\"= 'r': Computes only the reduced R. Returns (Q, R)\n with Q empty and R of shape (*, k, n). It is never\n differentiable.\n Differences with numpy.linalg.qr:\n * \"mode\"= 'raw' is not implemented.\n * Unlike numpy.linalg.qr, this function always returns a tuple of\n two tensors. When \"mode\"= 'r', the Q tensor is an empty\n tensor.\n Warning:\n The elements in the diagonal of R are not necessarily positive.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"}
{"text": "As such, the returned QR decomposition is only unique up to the\n sign of the diagonal of R. Therefore, different platforms, like\n NumPy, or inputs on different devices, may produce different\n valid decompositions.\n Warning:\n The QR decomposition is only well-defined if the first k =\n min(m, n) columns of every matrix in \"A\" are linearly\n independent. If this condition is not met, no error will be\n thrown, but the QR produced may be incorrect and its autodiff may\n fail or produce incorrect results.\n Parameters:\n * A (Tensor) -- tensor of shape (*, m, n) where * is\n zero or more batch dimensions.\n * mode (str, optional) -- one of 'reduced',\n 'complete', 'r'. Controls the shape of the returned\n tensors. Default: 'reduced'.\n Keyword Arguments:\n out (tuple, optional) -- output tuple of two tensors.\n Ignored if None. Default: None.\n Returns:\n A named tuple (Q, R).\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"}
{"text": ">>> A = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])\n >>> Q, R = torch.linalg.qr(A)\n >>> Q\n tensor([[-0.8571, 0.3943, 0.3314],\n [-0.4286, -0.9029, -0.0343],\n [ 0.2857, -0.1714, 0.9429]])\n >>> R\n tensor([[ -14.0000, -21.0000, 14.0000],\n [ 0.0000, -175.0000, 70.0000],\n [ 0.0000, 0.0000, -35.0000]])\n >>> (Q @ R).round()\n tensor([[ 12., -51., 4.],\n [ 6., 167., -68.],\n [ -4., 24., -41.]])\n >>> (Q.T @ Q).round()\n tensor([[ 1., 0., 0.],\n [ 0., 1., -0.],\n [ 0., -0., 1.]])\n >>> Q2, R2 = torch.linalg.qr(A, mode='r')\n >>> Q2\n tensor([])\n >>> torch.equal(R, R2)\n True\n >>> A = torch.randn(3, 4, 5)\n >>> Q, R = torch.linalg.qr(A, mode='complete')\n >>> torch.dist(Q @ R, A)\n tensor(1.6099e-06)\n >>> torch.dist(Q.mT @ Q, torch.eye(4))", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"}
{"text": ">>> torch.dist(Q.mT @ Q, torch.eye(4))\n tensor(6.2158e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"}
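To make the factorization concrete, here is a reduced QR of the doc's classic 3x3 example via classical Gram-Schmidt, in pure Python. This is a sketch of the math only: torch/LAPACK use Householder reflections, so the signs of Q's columns and R's diagonal may differ (as the warning above notes, the decomposition is unique only up to those signs); Gram-Schmidt yields a positive diagonal.

```python
# Reduced QR of a small matrix via classical Gram-Schmidt.
# A is a list of rows; returns Q (orthonormal columns, row-major) and R.
def qr_gram_schmidt(A):
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]  # columns of A
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i, q in enumerate(Q):
            # r_ij = <q_i, a_j>; subtract the projection onto q_i
            R[i][j] = sum(qk * ak for qk, ak in zip(q, cols[j]))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = sum(vk * vk for vk in v) ** 0.5
        Q.append([vk / R[j][j] for vk in v])
    Qm = [[Q[j][i] for j in range(n)] for i in range(m)]  # columns -> rows
    return Qm, R

A = [[12., -51., 4.], [6., 167., -68.], [-4., 24., -41.]]
Q, R = qr_gram_schmidt(A)
# R's diagonal comes out as (14, 175, 35); torch's Householder-based
# result above has the opposite signs, but Q @ R recovers A either way.
```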
{"text": "torch.cuda.get_device_capabilitytorch.cuda.get_device_capability(device=None)\n Gets the CUDA capability of a device.\n Parameters:\n device (torch.device or int, optional) -- device\n for which to return the device capability. This function is a\n no-op if this argument is a negative integer. It uses the\n current device, given by \"current_device()\", if \"device\" is\n \"None\" (default).\n Returns:\n the major and minor CUDA capability of the device\n Return type:\n tuple(int, int)", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_device_capability.html", "category": "pytorch docs"}
{"text": "torch.Tensor.fliplrTensor.fliplr() -> Tensor\n See \"torch.fliplr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fliplr.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addmm_Tensor.addmm_(mat1, mat2, *, beta=1, alpha=1) -> Tensor\n In-place version of \"addmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addmm_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logical_or_Tensor.logical_or_() -> Tensor\n In-place version of \"logical_or()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_or_.html", "category": "pytorch docs"}
{"text": "torch.cuda.get_arch_listtorch.cuda.get_arch_list()\n Returns the list of CUDA architectures this library was compiled for.\n Return type:\n List[str]", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_arch_list.html", "category": "pytorch docs"}
{"text": "torch.Tensor.bitwise_right_shift_Tensor.bitwise_right_shift_(other) -> Tensor\n In-place version of \"bitwise_right_shift()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_right_shift_.html", "category": "pytorch docs"}
{"text": "torch._foreach_lgammatorch._foreach_lgamma(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.lgamma()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_lgamma.html", "category": "pytorch docs"}
{"text": "torch.is_complextorch.is_complex(input)\n Returns True if the data type of \"input\" is a complex data type,\n i.e., one of \"torch.complex64\" and \"torch.complex128\".\n Parameters:\n input (Tensor) -- the input tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.is_complex.html", "category": "pytorch docs"}
{"text": "torch._foreach_erfc_torch._foreach_erfc_(self: List[Tensor]) -> None\n Apply \"torch.erfc()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_erfc_.html", "category": "pytorch docs"}
{"text": "CosineAnnealingLRclass torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=-1, verbose=False)\n Set the learning rate of each parameter group using a cosine\n annealing schedule, where \\eta_{max} is set to the initial lr and\n T_{cur} is the number of epochs since the last restart in SGDR:\n \\begin{aligned} \\eta_t & = \\eta_{min} +\n \\frac{1}{2}(\\eta_{max} - \\eta_{min})\\left(1 +\n \\cos\\left(\\frac{T_{cur}}{T_{max}}\\pi\\right)\\right), &\n T_{cur} \\neq (2k+1)T_{max}; \\\\ \\eta_{t+1} & = \\eta_{t} +\n \\frac{1}{2}(\\eta_{max} - \\eta_{min}) \\left(1 -\n \\cos\\left(\\frac{1}{T_{max}}\\pi\\right)\\right), & T_{cur} =\n (2k+1)T_{max}. \\end{aligned}\n When last_epoch=-1, sets initial lr as lr. Notice that because the\n schedule is defined recursively, the learning rate can be\n simultaneously modified outside this scheduler by other operators.\n If the learning rate is set solely by this scheduler, the learning", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html", "category": "pytorch docs"}
{"text": "rate at each step becomes:\n \\eta_t = \\eta_{min} + \\frac{1}{2}(\\eta_{max} -\n \\eta_{min})\\left(1 +\n \\cos\\left(\\frac{T_{cur}}{T_{max}}\\pi\\right)\\right)\n It has been proposed in SGDR: Stochastic Gradient Descent with Warm\n Restarts. Note that this only implements the cosine annealing part\n of SGDR, and not the restarts.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * T_max (int) -- Maximum number of iterations.\n * eta_min (float) -- Minimum learning rate. Default: 0.\n * last_epoch (int) -- The index of last epoch. Default:\n -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the schedulers state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html", "category": "pytorch docs"}
{"text": "print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html", "category": "pytorch docs"}
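The closed-form schedule above (the one that holds when this scheduler is the sole thing setting the lr) is easy to evaluate directly. A pure-Python sketch, independent of torch, to build intuition for the curve:

```python
import math

# Closed-form cosine annealing without restarts:
# eta_t = eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * T_cur / T_max))
def cosine_annealing_lr(eta_max, eta_min, t_cur, t_max):
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_max))

# lr decays smoothly from 0.1 at step 0 down to 0.0 at step T_max=10
lrs = [cosine_annealing_lr(0.1, 0.0, t, 10) for t in range(11)]
```

At the halfway point (`t_cur = T_max / 2`) the cosine is zero, so the lr sits exactly midway between `eta_max` and `eta_min`.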
{"text": "torch._foreach_tan_torch._foreach_tan_(self: List[Tensor]) -> None\n Apply \"torch.tan()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_tan_.html", "category": "pytorch docs"}
{"text": "torch.is_floating_pointtorch.is_floating_point(input)\n Returns True if the data type of \"input\" is a floating point data\n type i.e., one of \"torch.float64\", \"torch.float32\",\n \"torch.float16\", and \"torch.bfloat16\".\n Parameters:\n input (Tensor) -- the input tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.is_floating_point.html", "category": "pytorch docs"}
{"text": "Conv1dclass torch.ao.nn.quantized.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n Applies a 1D convolution over a quantized input signal composed of\n several quantized input planes.\n For details on input arguments, parameters, and implementation see\n \"Conv1d\".\n Note:\n Only zeros is supported for the \"padding_mode\" argument.\n Note:\n Only torch.quint8 is supported for the input data type.\n Variables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * scale (Tensor) -- scalar for the output scale\n * zero_point (Tensor) -- scalar for the output zero point\n See \"Conv1d\" for other attributes.\n Examples:\n >>> m = nn.quantized.Conv1d(16, 33, 3, stride=2)\n >>> input = torch.randn(20, 16, 100)\n >>> # quantize input to quint8", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv1d.html", "category": "pytorch docs"}
{"text": ">>> # quantize input to quint8\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0,\n ... dtype=torch.quint8)\n >>> output = m(q_input)\n classmethod from_float(mod)\n Creates a quantized module from a float module or qparams_dict.\n Parameters:\n mod (Module) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv1d.html", "category": "pytorch docs"}
{"text": "disable_observerclass torch.quantization.fake_quantize.disable_observer(mod)\n Disable observation for this module, if applicable. Example usage:\n # model is any PyTorch model\n model.apply(torch.ao.quantization.disable_observer)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.disable_observer.html", "category": "pytorch docs"}
{"text": "torch.autograd.graph.Node.metadataabstract Node.metadata()\n Returns the metadata.\n Return type:\n dict", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.metadata.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arccosh_Tensor.arccosh_()\n acosh_() -> Tensor\n In-place version of \"arccosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arccosh_.html", "category": "pytorch docs"}
{"text": "DTypeConfigclass torch.ao.quantization.backend_config.DTypeConfig(input_dtype=None, output_dtype=None, weight_dtype=None, bias_dtype=None, is_dynamic=None)\n Config object that specifies the supported data types passed as\n arguments to quantize ops in the reference model spec, for input\n and output activations, weights, and biases.\n For example, consider the following reference model:\n quant1 - [dequant1 - fp32_linear - quant2] - dequant2\n The pattern in the square brackets refers to the reference pattern\n of statically quantized linear. Setting the input dtype as\n torch.quint8 in the DTypeConfig means we pass in torch.quint8\n as the dtype argument to the first quantize op (quant1). Similarly,\n setting the output dtype as torch.quint8 means we pass in\n torch.quint8 as the dtype argument to the second quantize op\n (quant2).\n Note that the dtype here does not refer to the interface dtypes of\n the op. For example, the \"input dtype\" here is not the dtype of the", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html", "category": "pytorch docs"}
{"text": "input tensor passed to the quantized linear op. Though it can still\n be the same as the interface dtype, this is not always the case,\n e.g. the interface dtype is fp32 in dynamic quantization but the\n \"input dtype\" specified in the DTypeConfig would still be quint8.\n The semantics of dtypes here are the same as the semantics of the\n dtypes specified in the observers.\n These dtypes are matched against the ones specified in the user's\n QConfig. If there is a match, and the QConfig satisfies the\n constraints specified in the DTypeConfig (if any), then we will\n quantize the given pattern using this DTypeConfig. Otherwise, the\n QConfig is ignored and the pattern will not be quantized.\n Example usage:\n >>> dtype_config1 = DTypeConfig(\n ... input_dtype=torch.quint8,\n ... output_dtype=torch.quint8,\n ... weight_dtype=torch.qint8,\n ... bias_dtype=torch.float)\n >>> dtype_config2 = DTypeConfig(\n ... input_dtype=DTypeWithConstraints(", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html", "category": "pytorch docs"}
{"text": "... input_dtype=DTypeWithConstraints(\n ... dtype=torch.quint8,\n ... quant_min_lower_bound=0,\n ... quant_max_upper_bound=255,\n ... ),\n ... output_dtype=DTypeWithConstraints(\n ... dtype=torch.quint8,\n ... quant_min_lower_bound=0,\n ... quant_max_upper_bound=255,\n ... ),\n ... weight_dtype=DTypeWithConstraints(\n ... dtype=torch.qint8,\n ... quant_min_lower_bound=-128,\n ... quant_max_upper_bound=127,\n ... ),\n ... bias_dtype=torch.float)\n >>> dtype_config1.input_dtype\n torch.quint8\n >>> dtype_config2.input_dtype\n torch.quint8\n >>> dtype_config2.input_dtype_with_constraints\n DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None)\n classmethod from_dict(dtype_config_dict)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html", "category": "pytorch docs"}
{"text": "classmethod from_dict(dtype_config_dict)\n Create a \"DTypeConfig\" from a dictionary with the following\n items (all optional):\n \"input_dtype\": torch.dtype or \"DTypeWithConstraints\"\n \"output_dtype\": torch.dtype or \"DTypeWithConstraints\"\n \"weight_dtype\": torch.dtype or \"DTypeWithConstraints\"\n \"bias_dtype\": torch.dtype\n \"is_dynamic\": bool\n Return type:\n DTypeConfig\n to_dict()\n Convert this \"DTypeConfig\" to a dictionary with the items\n described in \"from_dict()\".\n Return type:\n Dict[str, Any]", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html", "category": "pytorch docs"}
{"text": "BCEWithLogitsLossclass torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)\n This loss combines a Sigmoid layer and the BCELoss in one\n single class. This version is more numerically stable than using a\n plain Sigmoid followed by a BCELoss as, by combining the\n operations into one layer, we take advantage of the log-sum-exp\n trick for numerical stability.\n The unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = - w_n\n \\left[ y_n \\cdot \\log \\sigma(x_n) + (1 - y_n) \\cdot \\log (1 -\n \\sigma(x_n)) \\right],\n where N is the batch size. If \"reduction\" is not \"'none'\" (default\n \"'mean'\"), then\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{'mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{'sum'.}\n \\end{cases}", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"}
{"text": "\\end{cases}\n This is used for measuring the error of a reconstruction in, for\n example, an auto-encoder. Note that the targets t[i] should be\n numbers between 0 and 1.\n It's possible to trade off recall and precision by adding weights\n to positive examples. In the case of multi-label classification the\n loss can be described as:\n \\ell_c(x, y) = L_c = \\{l_{1,c},\\dots,l_{N,c}\\}^\\top, \\quad\n l_{n,c} = - w_{n,c} \\left[ p_c y_{n,c} \\cdot \\log\n \\sigma(x_{n,c}) + (1 - y_{n,c}) \\cdot \\log (1 - \\sigma(x_{n,c}))\n \\right],\n where c is the class number (c > 1 for multi-label binary\n classification, c = 1 for single-label binary classification), n is\n the number of the sample in the batch and p_c is the weight of the\n positive answer for the class c.\n p_c > 1 increases the recall, p_c < 1 increases the precision.\n For example, if a dataset contains 100 positive and 300 negative\n examples of a single class, then pos_weight for the class should", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"}
{"text": "be equal to \\frac{300}{100}=3. The loss would act as if the dataset\n contains 3\\times 100=300 positive examples.\n Examples:\n >>> target = torch.ones([10, 64], dtype=torch.float32) # 64 classes, batch size = 10\n >>> output = torch.full([10, 64], 1.5) # A prediction (logit)\n >>> pos_weight = torch.ones([64]) # All weights are equal to 1\n >>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)\n >>> criterion(output, target) # -log(sigmoid(1.5))\n tensor(0.20...)\n Parameters:\n * weight (Tensor, optional) -- a manual rescaling\n weight given to the loss of each batch element. If given, has\n to be a Tensor of size nbatch.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"}
{"text": "is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"}
{"text": "\npos_weight (Tensor, optional) -- a weight of\n positive examples. Must be a vector with length equal to the\n number of classes.\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n * Output: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as input.\n Examples:\n >>> loss = nn.BCEWithLogitsLoss()\n >>> input = torch.randn(3, requires_grad=True)\n >>> target = torch.empty(3).random_(2)\n >>> output = loss(input, target)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"}
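The per-element formula and the role of `pos_weight` can be checked by hand for a single (logit, target) pair. A pure-Python sketch of the stable evaluation (illustrative, not torch's kernel; the branch on the sign of x is the usual way to keep exp() from overflowing):

```python
import math

# BCE-with-logits for one scalar pair; pos_weight scales only the
# positive (y = 1) term, as in the multi-label formula above.
def bce_with_logits(x, y, pos_weight=1.0):
    # log(sigmoid(x)), computed stably for either sign of x
    log_sig = -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))
    # log(1 - sigmoid(x)) = log(sigmoid(x)) - x
    log_one_minus_sig = log_sig - x
    return -(pos_weight * y * log_sig + (1 - y) * log_one_minus_sig)

print(round(bce_with_logits(1.5, 1.0), 4))  # → 0.2014, the doc's -log(sigmoid(1.5))
```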
{"text": "torch.Tensor.nextafter_Tensor.nextafter_(other) -> Tensor\n In-place version of \"nextafter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nextafter_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.qschemeTensor.qscheme() -> torch.qscheme\n Returns the quantization scheme of a given QTensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.qscheme.html", "category": "pytorch docs"}
{"text": "torch.autograd.gradchecktorch.autograd.gradcheck(func, inputs, *, eps=1e-06, atol=1e-05, rtol=0.001, raise_exception=True, check_sparse_nnz=False, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False, check_batched_forward_grad=False, check_forward_ad=False, check_backward_ad=True, fast_mode=False)\n Check gradients computed via small finite differences against\n analytical gradients w.r.t. tensors in \"inputs\" that are of\n floating point or complex type and with \"requires_grad=True\".\n The check between numerical and analytical gradients uses\n \"allclose()\".\n For most of the complex functions we consider for optimization\n purposes, no notion of Jacobian exists. Instead, gradcheck verifies\n if the numerical and analytical values of the Wirtinger and\n Conjugate Wirtinger derivatives are consistent. Because the\n gradient computation is done under the assumption that the overall", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"}
{"text": "function has a real-valued output, we treat functions with complex\n output in a special way. For these functions, gradcheck is applied\n to two real-valued functions corresponding to taking the real\n components of the complex outputs for the first, and taking the\n imaginary components of the complex outputs for the second. For\n more details, check out Autograd for Complex Numbers.\n Note:\n The default values are designed for \"input\" of double precision.\n This check will likely fail if \"input\" is of less precision,\n e.g., \"FloatTensor\".\n Warning:\n If any checked tensor in \"input\" has overlapping memory, i.e.,\n different indices pointing to the same memory address (e.g., from\n \"torch.expand()\"), this check will likely fail because the\n numerical gradients computed by point perturbation at such\n indices will change values at all other indices that share the\n same memory address.\n Parameters:\n * func (function) -- a Python function that takes Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"}
{"text": "inputs and returns a Tensor or a tuple of Tensors\n * inputs (tuple of Tensor or Tensor) -- inputs to the\n function\n * eps (float, optional) -- perturbation for finite\n differences\n * atol (float, optional) -- absolute tolerance\n * rtol (float, optional) -- relative tolerance\n * raise_exception (bool, optional) -- indicating\n whether to raise an exception if the check fails. The\n exception gives more information about the exact nature of the\n failure. This is helpful when debugging gradchecks.\n * check_sparse_nnz (bool, optional) -- if True,\n gradcheck allows for SparseTensor input, and for any\n SparseTensor at input, gradcheck will perform check at nnz\n positions only.\n * nondet_tol (float, optional) -- tolerance for non-\n determinism. When running identical inputs through the", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"}
{"text": "differentiation, the results must either match exactly\n (default, 0.0) or be within this tolerance.\n * check_undefined_grad (bool, optional) -- if True,\n check if undefined output grads are supported and treated as\n zeros, for \"Tensor\" outputs.\n * check_batched_grad (bool, optional) -- if True,\n check if we can compute batched gradients using prototype vmap\n support. Defaults to False.\n * check_batched_forward_grad (bool, optional) -- if\n True, checks if we can compute batched forward gradients using\n forward ad and prototype vmap support. Defaults to False.\n * check_forward_ad (bool, optional) -- if True, check\n that the gradients computed with forward mode AD match the\n numerical ones. Defaults to False.\n * check_backward_ad (bool, optional) -- if False, do\n not perform any checks that rely on backward mode AD to be\n implemented. Defaults to True.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"}
{"text": "implemented. Defaults to True.\n * fast_mode (bool, optional) -- Fast mode for\n gradcheck and gradgradcheck is currently only implemented for\n R to R functions. If none of the inputs and outputs are\n complex a faster implementation of gradcheck that no longer\n computes the entire jacobian is run; otherwise, we fall back\n to the slow implementation.\n Returns:\n True if all differences satisfy allclose condition\n Return type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"}
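At its core, the check gradcheck performs is a comparison of an analytic derivative against a central finite difference under an allclose-style tolerance. A miniature pure-Python sketch of that idea for scalar functions (an illustration only; the real gradcheck handles tensors, Jacobians, complex inputs, and forward/backward AD):

```python
# Compare an analytic derivative df against a central finite difference
# of f at several points, using an atol + rtol * |analytic| tolerance.
def finite_diff_check(f, df, xs, eps=1e-6, atol=1e-5, rtol=1e-3):
    for x in xs:
        numeric = (f(x + eps) - f(x - eps)) / (2 * eps)
        analytic = df(x)
        if abs(numeric - analytic) > atol + rtol * abs(analytic):
            return False
    return True

# Correct derivative of x**3 passes; a wrong one is caught.
ok = finite_diff_check(lambda x: x ** 3, lambda x: 3 * x ** 2, [-1.0, 0.5, 2.0])
```

The note above about double precision applies here too: with single-precision arithmetic the `eps**2` truncation error of the central difference would swamp these tolerances.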
{"text": "MaxPool3dclass torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)\n Applies a 3D max pooling over an input signal composed of several\n input planes.\n In the simplest case, the output value of the layer with input size\n (N, C, D, H, W), output (N, C, D_{out}, H_{out}, W_{out}) and\n \"kernel_size\" (kD, kH, kW) can be precisely described as:\n \\begin{aligned} \\text{out}(N_i, C_j, d, h, w) ={} &\n \\max_{k=0, \\ldots, kD-1} \\max_{m=0, \\ldots, kH-1} \\max_{n=0,\n \\ldots, kW-1} \\\\ &\n \\text{input}(N_i, C_j, \\text{stride[0]} \\times d + k,\n \\text{stride[1]} \\times h + m, \\text{stride[2]} \\times w + n)\n \\end{aligned}\n If \"padding\" is non-zero, then the input is implicitly padded with\n negative infinity on both sides for \"padding\" number of points.\n \"dilation\" controls the spacing between the kernel points. It is", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html", "category": "pytorch docs"}
{"text": "harder to describe, but this link has a nice visualization of what\n \"dilation\" does.\n Note:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n The parameters \"kernel_size\", \"stride\", \"padding\", \"dilation\" can\n either be:\n * a single \"int\" -- in which case the same value is used for the\n depth, height and width dimension\n * a \"tuple\" of three ints -- in which case, the first int is\n used for the depth dimension, the second int for the height\n dimension and the third int for the width dimension\n Parameters:\n * kernel_size (Union[int, Tuple[int, int,\n int]]) -- the size of the window to take a max over\n * stride (Union[int, Tuple[int, int,\n int]]) -- the stride of the window. Default value is\n \"kernel_size\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html", "category": "pytorch docs"}
{"text": "\"kernel_size\"\n * padding (Union[int, Tuple[int, int,\n int]]) -- Implicit negative infinity padding to be\n added on all three sides\n * dilation (Union[int, Tuple[int, int,\n int]]) -- a parameter that controls the stride of\n elements in the window\n * return_indices (bool) -- if \"True\", will return the max\n indices along with the outputs. Useful for\n \"torch.nn.MaxUnpool3d\" later\n * ceil_mode (bool) -- when True, will use ceil instead\n of floor to compute the output shape\n Shape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n D_{out} = \\left\\lfloor\\frac{D_{in} + 2 \\times\n \\text{padding}[0] - \\text{dilation}[0] \\times\n (\\text{kernel_size}[0] - 1) - 1}{\\text{stride}[0]} +", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html", "category": "pytorch docs"}
{"text": "1\\right\\rfloor\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times\n \\text{padding}[1] - \\text{dilation}[1] \\times\n (\\text{kernel_size}[1] - 1) - 1}{\\text{stride}[1]} +\n 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[2] - \\text{dilation}[2] \\times\n (\\text{kernel_size}[2] - 1) - 1}{\\text{stride}[2]} +\n 1\\right\\rfloor\n Examples:\n >>> # pool of square window of size=3, stride=2\n >>> m = nn.MaxPool3d(3, stride=2)\n >>> # pool of non-square window\n >>> m = nn.MaxPool3d((3, 2, 2), stride=(2, 1, 2))\n >>> input = torch.randn(20, 16, 50, 44, 31)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.index_reduceTensor.index_reduce()", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce.html", "category": "pytorch docs"}
{"text": "torch.hspmmtorch.hspmm(mat1, mat2, *, out=None) -> Tensor\n Performs a matrix multiplication of a sparse COO matrix \"mat1\" and\n a strided matrix \"mat2\". The result is a (1 + 1)-dimensional hybrid\n COO matrix.\n Parameters:\n * mat1 (Tensor) -- the first sparse matrix to be matrix\n multiplied\n * mat2 (Tensor) -- the second strided matrix to be matrix\n multiplied\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.hspmm.html", "category": "pytorch docs"}
{"text": "torch.sparse.sampled_addmmtorch.sparse.sampled_addmm(input, mat1, mat2, *, beta=1., alpha=1., out=None) -> Tensor\n Performs a matrix multiplication of the dense matrices \"mat1\" and\n \"mat2\" at the locations specified by the sparsity pattern of\n \"input\". The matrix \"input\" is added to the final result.\n Mathematically this performs the following operation:\n \\text{out} = \\alpha\\ (\\text{mat1} \\mathbin{@}\n \\text{mat2})*\\text{spy}(\\text{input}) + \\beta\\ \\text{input}\n where \\text{spy}(\\text{input}) is the sparsity pattern matrix of\n \"input\", \"alpha\" and \"beta\" are the scaling factors.\n \\text{spy}(\\text{input}) has value 1 at the positions where \"input\"\n has non-zero values, and 0 elsewhere.\n Note:\n \"input\" must be a sparse CSR tensor. \"mat1\" and \"mat2\" must be\n dense tensors.\n Parameters:\n * input (Tensor) -- a sparse CSR matrix of shape (m, n)\n to be added and used to compute the sampled matrix\n multiplication", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sampled_addmm.html", "category": "pytorch docs"}
{"text": "multiplication\n * mat1 (Tensor) -- a dense matrix of shape (m, k) to be\n multiplied\n * mat2 (Tensor) -- a dense matrix of shape (k, n) to be\n multiplied\n Keyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * alpha (Number, optional) -- multiplier for mat1 @\n mat2 (\\alpha)\n * out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\n Examples:\n >>> input = torch.eye(3, device='cuda').to_sparse_csr()\n >>> mat1 = torch.randn(3, 5, device='cuda')\n >>> mat2 = torch.randn(5, 3, device='cuda')\n >>> torch.sparse.sampled_addmm(input, mat1, mat2)\n tensor(crow_indices=tensor([0, 1, 2, 3]),\n col_indices=tensor([0, 1, 2]),\n values=tensor([ 0.2847, -0.7805, -0.1900]), device='cuda:0',\n size=(3, 3), nnz=3, layout=torch.sparse_csr)\n >>> torch.sparse.sampled_addmm(input, mat1, mat2).to_dense()", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sampled_addmm.html", "category": "pytorch docs"}
{"text": "tensor([[ 0.2847, 0.0000, 0.0000],\n [ 0.0000, -0.7805, 0.0000],\n [ 0.0000, 0.0000, -0.1900]], device='cuda:0')\n >>> torch.sparse.sampled_addmm(input, mat1, mat2, beta=0.5, alpha=0.5)\n tensor(crow_indices=tensor([0, 1, 2, 3]),\n col_indices=tensor([0, 1, 2]),\n values=tensor([ 0.1423, -0.3903, -0.0950]), device='cuda:0',\n size=(3, 3), nnz=3, layout=torch.sparse_csr)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sampled_addmm.html", "category": "pytorch docs"}
{"text": "torch.taketorch.take(input, index) -> Tensor\n Returns a new tensor with the elements of \"input\" at the given\n indices. The input tensor is treated as if it were viewed as a 1-D\n tensor. The result takes the same shape as the indices.\n Parameters:\n * input (Tensor) -- the input tensor.\n * index (LongTensor) -- the indices into tensor\n Example:\n >>> src = torch.tensor([[4, 3, 5],\n ... [6, 7, 8]])\n >>> torch.take(src, torch.tensor([0, 2, 5]))\n tensor([ 4, 5, 8])", "source": "https://pytorch.org/docs/stable/generated/torch.take.html", "category": "pytorch docs"}
{"text": "torch.Tensor.equalTensor.equal(other) -> bool\n See \"torch.equal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.equal.html", "category": "pytorch docs"}
{"text": "default_weight_only_qconfigtorch.quantization.qconfig.default_weight_only_qconfig\n alias of QConfig(activation=,\n weight=functools.partial(,\n observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric, reduce_range=False){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_weight_only_qconfig.html", "category": "pytorch docs"}
{"text": "torch.vandertorch.vander(x, N=None, increasing=False) -> Tensor\n Generates a Vandermonde matrix.\n The columns of the output matrix are elementwise powers of the\n input vector x^{(N-1)}, x^{(N-2)}, ..., x^0. If increasing is True,\n the order of the columns is reversed x^0, x^1, ..., x^{(N-1)}. Such\n a matrix with a geometric progression in each row is named for\n Alexandre-Theophile Vandermonde.\n Parameters:\n * x (Tensor) -- 1-D input tensor.\n * N (int, optional) -- Number of columns in the\n output. If N is not specified, a square array is returned (N =\n len(x)).\n * increasing (bool, optional) -- Order of the powers\n of the columns. If True, the powers increase from left to\n right, if False (the default) they are reversed.\n Returns:\n Vandermonde matrix. If increasing is False, the first column is\n x^{(N-1)}, the second x^{(N-2)} and so forth. If increasing is", "source": "https://pytorch.org/docs/stable/generated/torch.vander.html", "category": "pytorch docs"}
{"text": "True, the columns are x^0, x^1, ..., x^{(N-1)}.\n Return type:\n Tensor\n Example:\n >>> x = torch.tensor([1, 2, 3, 5])\n >>> torch.vander(x)\n tensor([[ 1, 1, 1, 1],\n [ 8, 4, 2, 1],\n [ 27, 9, 3, 1],\n [125, 25, 5, 1]])\n >>> torch.vander(x, N=3)\n tensor([[ 1, 1, 1],\n [ 4, 2, 1],\n [ 9, 3, 1],\n [25, 5, 1]])\n >>> torch.vander(x, N=3, increasing=True)\n tensor([[ 1, 1, 1],\n [ 1, 2, 4],\n [ 1, 3, 9],\n [ 1, 5, 25]])", "source": "https://pytorch.org/docs/stable/generated/torch.vander.html", "category": "pytorch docs"}
{"text": "NLLLossclass torch.nn.NLLLoss(weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean')\n The negative log likelihood loss. It is useful to train a\n classification problem with C classes.\n If provided, the optional argument \"weight\" should be a 1D Tensor\n assigning weight to each of the classes. This is particularly\n useful when you have an unbalanced training set.\n The input given through a forward call is expected to contain\n log-probabilities of each class. input has to be a Tensor of size\n either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K\n \\geq 1 for the K-dimensional case. The latter is useful for\n higher dimension inputs, such as computing NLL loss per-pixel for\n 2D images.\n Obtaining log-probabilities in a neural network is easily achieved\n by adding a LogSoftmax layer in the last layer of your network.\n You may use CrossEntropyLoss instead, if you prefer not to add an\n extra layer.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"}
{"text": "extra layer.\n The target that this loss expects should be a class index in the\n range [0, C-1] where C = number of classes; if ignore_index is\n specified, this loss also accepts this class index (this index may\n not necessarily be in the class range).\n The unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = {l_1,\\dots,l_N}^\\top, \\quad l_n = - w_{y_n}\n x_{n,y_n}, \\quad w_{c} = \\text{weight}[c] \\cdot \\mathbb{1}{c\n \\not= \\text{ignore_index}},\n where x is the input, y is the target, w is the weight, and N is\n the batch size. If \"reduction\" is not \"'none'\" (default \"'mean'\"),\n then\n \\ell(x, y) = \\begin{cases} \\sum_{n=1}^N\n \\frac{1}{\\sum_{n=1}^N w_{y_n}} l_n, & \\text{if reduction} =\n \\text{mean';}\\\\ \\sum_{n=1}^N l_n, & \\text{if\n reduction} = \\text{sum'.} \\end{cases}\n Parameters:\n * weight (Tensor, optional) -- a manual rescaling", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"}
{"text": "weight given to each class. If given, it has to be a Tensor of\n size C. Otherwise, it is treated as if having all ones.\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"None\"\n * ignore_index (int, optional) -- Specifies a target\n value that is ignored and does not contribute to the input\n gradient. When \"size_average\" is \"True\", the loss is averaged\n over non-ignored targets.\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"}
{"text": "\"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"None\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the weighted\n mean of the output is taken, \"'sum'\": the output will be\n summed. Note: \"size_average\" and \"reduce\" are in the process\n of being deprecated, and in the meantime, specifying either of\n those two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (N, C) or (C), where C = number of classes, or (N, C,\n d_1, d_2, ..., d_K) with K \\geq 1 in the case of\n K-dimensional loss.\n * Target: (N) or (), where each value is 0 \\leq\n \\text{targets}[i] \\leq C-1, or (N, d_1, d_2, ..., d_K) with K\n \\geq 1 in the case of K-dimensional loss.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"}
{"text": "\\geq 1 in the case of K-dimensional loss.\n * Output: If \"reduction\" is \"'none'\", shape (N) or (N, d_1, d_2,\n ..., d_K) with K \\geq 1 in the case of K-dimensional loss.\n Otherwise, scalar.\n Examples:\n >>> m = nn.LogSoftmax(dim=1)\n >>> loss = nn.NLLLoss()\n >>> # input is of size N x C = 3 x 5\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> # each element in target has to have 0 <= value < C\n >>> target = torch.tensor([1, 0, 4])\n >>> output = loss(m(input), target)\n >>> output.backward()\n >>>\n >>>\n >>> # 2D loss example (used, for example, with image inputs)\n >>> N, C = 5, 4\n >>> loss = nn.NLLLoss()\n >>> # input is of size N x C x height x width\n >>> data = torch.randn(N, 16, 10, 10)\n >>> conv = nn.Conv2d(16, C, (3, 3))\n >>> m = nn.LogSoftmax(dim=1)\n >>> # each element in target has to have 0 <= value < C\n >>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"}
{"text": "\n\n\noutput = loss(m(conv(data)), target)\n >>> output.backward()\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"}
{"text": "CUDAPluggableAllocatorclass torch.cuda.CUDAPluggableAllocator(path_to_so_file, alloc_fn_name, free_fn_name)\n CUDA memory allocator loaded from a so file.\n Memory allocators are compiled in .so files and loaded dynamically\n using ctypes. To change the active allocator use the\n \"torch.memory.cuda.change_current_allocator()\" function.\n Parameters:\n * path_to_so_file (str) -- Path in the filesystem to the\n .so file containing the allocator functions\n * alloc_fn_name (str) -- Name of the function to perform\n the memory allocation in the so file. The signature must be:\n void alloc_fn_name(ssize_t size, int device, cudaStream_t\n stream);\n * free_fn_name (str) -- Name of the function to perform\n the memory release in the so file. The signature must be: void\n free_fn_name(void ptr, size_t size, cudaStream_t stream);\n Warning:\n This is currently supported only in unix OSs\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.CUDAPluggableAllocator.html", "category": "pytorch docs"}
{"text": "Note:\n See Memory management for details on creating and using a custom\n allocator", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.CUDAPluggableAllocator.html", "category": "pytorch docs"}
{"text": "torch.set_deterministic_debug_modetorch.set_deterministic_debug_mode(debug_mode)\n Sets the debug mode for deterministic operations.\n Note:\n This is an alternative interface for\n \"torch.use_deterministic_algorithms()\". Refer to that function's\n documentation for details about affected operations.\n Parameters:\n debug_mode (str or int) -- If \"default\" or 0, don't\n error or warn on nondeterministic operations. If \"warn\" or 1,\n warn on nondeterministic operations. If \"error\" or 2, error on\n nondeterministic operations.", "source": "https://pytorch.org/docs/stable/generated/torch.set_deterministic_debug_mode.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_coalescedTensor.is_coalesced() -> bool\n Returns \"True\" if \"self\" is a sparse COO tensor that is coalesced,\n \"False\" otherwise.\n Warning:\n Throws an error if \"self\" is not a sparse COO tensor.\n See \"coalesce()\" and uncoalesced tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_coalesced.html", "category": "pytorch docs"}
{"text": "ReLU6class torch.ao.nn.quantized.ReLU6(inplace=False)\n Applies the element-wise function:\n \\text{ReLU6}(x) = \\min(\\max(x_0, x), q(6)), where x_0 is the\n zero_point, and q(6) is the quantized representation of number 6.\n Parameters:\n inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (N, ) where *** means, any number of additional\n dimensions\n * Output: (N, ), same shape as the input\n [image]\n Examples:\n >>> m = nn.quantized.ReLU6()\n >>> input = torch.randn(2)\n >>> input = torch.quantize_per_tensor(input, 1.0, 0, dtype=torch.qint32)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ReLU6.html", "category": "pytorch docs"}
{"text": "torch.cumsumtorch.cumsum(input, dim, , dtype=None, out=None) -> Tensor\n Returns the cumulative sum of elements of \"input\" in the dimension\n \"dim\".\n For example, if \"input\" is a vector of size N, the result will also\n be a vector of size N, with elements.\n y_i = x_1 + x_2 + x_3 + \\dots + x_i\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to do the operation over\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n * out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(10)\n >>> a\n tensor([-0.8286, -0.4890, 0.5155, 0.8443, 0.1865, -0.1752, -2.0595,\n 0.1850, -1.1571, -0.4243])\n >>> torch.cumsum(a, dim=0)", "source": "https://pytorch.org/docs/stable/generated/torch.cumsum.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.cumsum(a, dim=0)\n tensor([-0.8286, -1.3175, -0.8020, 0.0423, 0.2289, 0.0537, -2.0058,\n -1.8209, -2.9780, -3.4022])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumsum.html", "category": "pytorch docs"}
{"text": "torch.autograd.graph.Node.nameabstract Node.name()\n Returns the name.\n Example:\n >>> import torch\n >>> a = torch.tensor([0., 0., 0.], requires_grad=True)\n >>> b = a.clone()\n >>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)\n >>> print(b.grad_fn.name())\n CloneBackward0\n Return type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.name.html", "category": "pytorch docs"}
{"text": "set_multithreading_enabledclass torch.autograd.set_multithreading_enabled(mode)\n Context-manager that sets multithreaded backwards on or off.\n \"set_multithreading_enabled\" will enable or disable multithreaded\n backwards based on its argument \"mode\". It can be used as a\n context-manager or as a function.\n This context manager is thread local; it will not affect\n computation in other threads.\n Parameters:\n mode (bool) -- Flag whether to enable multithreaded\n backwards (\"True\"), or disable (\"False\").\n Note:\n This API does not apply to forward-mode AD.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.set_multithreading_enabled.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_pinnedTensor.is_pinned()\n Returns true if this tensor resides in pinned memory.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_pinned.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.gaussiantorch.signal.windows.gaussian(M, , std=1.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes a window with a gaussian waveform.\n The gaussian window is defined as follows:\n w_n = \\exp{\\left(-\\left(\\frac{n}{2\\sigma}\\right)^2\\right)}\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * std (float, optional) -- the standard deviation of\n the gaussian. It controls how narrow or wide the window is.\n Default: 1.0.\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True*.", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.gaussian.html", "category": "pytorch docs"}
{"text": "design. Default: True.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric gaussian window with a standard deviation of 1.0.\n >>> torch.signal.windows.gaussian(10)", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.gaussian.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.signal.windows.gaussian(10)\n tensor([4.0065e-05, 2.1875e-03, 4.3937e-02, 3.2465e-01, 8.8250e-01, 8.8250e-01, 3.2465e-01, 4.3937e-02, 2.1875e-03, 4.0065e-05])\n >>> # Generates a periodic gaussian window and standard deviation equal to 0.9.\n >>> torch.signal.windows.gaussian(10, sym=False,std=0.9)\n tensor([1.9858e-07, 5.1365e-05, 3.8659e-03, 8.4658e-02, 5.3941e-01, 1.0000e+00, 5.3941e-01, 8.4658e-02, 3.8659e-03, 5.1365e-05])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.gaussian.html", "category": "pytorch docs"}
{"text": "torch.Tensor.isposinfTensor.isposinf() -> Tensor\n See \"torch.isposinf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isposinf.html", "category": "pytorch docs"}
{"text": "torch.Tensor.gatherTensor.gather(dim, index) -> Tensor\n See \"torch.gather()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gather.html", "category": "pytorch docs"}
{"text": "torch.linalg.lu_factortorch.linalg.lu_factor(A, , bool pivot=True, out=None) -> (Tensor, Tensor)\n Computes a compact representation of the LU factorization with\n partial pivoting of a matrix.\n This function computes a compact representation of the\n decomposition given by \"torch.linalg.lu()\". If the matrix is\n square, this representation may be used in\n \"torch.linalg.lu_solve()\" to solve system of linear equations that\n share the matrix \"A\".\n The returned decomposition is represented as a named tuple (LU,\n pivots). The \"LU\" matrix has the same shape as the input matrix\n \"A\". Its upper and lower triangular parts encode the non-constant\n elements of \"L\" and \"U\" of the LU decomposition of \"A\".\n The returned permutation matrix is represented by a 1-indexed\n vector. pivots[i] == j represents that in the i-th step of the\n algorithm, the i-th row was permuted with the j-1-th row.\n On CUDA, one may use \"pivot\"= False*. In this case, this function", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html", "category": "pytorch docs"}
{"text": "returns the LU decomposition without pivoting if it exists.\n Supports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU. For a version of this function that does not\n synchronize, see \"torch.linalg.lu_factor_ex()\".\n Warning:\n The LU decomposition is almost never unique, as often there are\n different permutation matrices that can yield different LU\n decompositions. As such, different platforms, like SciPy, or\n inputs on different devices, may produce different valid\n decompositions.Gradient computations are only supported if the\n input matrix is full-rank. If this condition is not met, no error\n will be thrown, but the gradient may not be finite. This is\n because the LU decomposition with pivoting is not differentiable", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html", "category": "pytorch docs"}
{"text": "at these points.\n See also:\n \"torch.linalg.lu_solve()\" solves a system of linear equations\n given the output of this function provided the input matrix was\n square and invertible.\n \"torch.lu_unpack()\" unpacks the tensors returned by \"lu_factor()\"\n into the three matrices P, L, U that form the decomposition.\n \"torch.linalg.lu()\" computes the LU decomposition with partial\n pivoting of a possibly non-square matrix. It is a composition of\n \"lu_factor()\" and \"torch.lu_unpack()\".\n \"torch.linalg.solve()\" solves a system of linear equations. It is\n a composition of \"lu_factor()\" and \"lu_solve()\".\n Parameters:\n A (Tensor) -- tensor of shape (, m, n) where *** is\n zero or more batch dimensions.\n Keyword Arguments:\n * pivot (bool, optional) -- Whether to compute the LU\n decomposition with partial pivoting, or the regular LU\n decomposition. \"pivot\"= False not supported on CPU. Default:\n True*.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html", "category": "pytorch docs"}
{"text": "True.\n * out (tuple, optional) -- tuple of two tensors to\n write the output to. Ignored if None. Default: None.\n Returns:\n A named tuple (LU, pivots).\n Raises:\n RuntimeError -- if the \"A\" matrix is not invertible or any\n matrix in a batched \"A\" is not invertible.\n Examples:\n >>> A = torch.randn(2, 3, 3)\n >>> B1 = torch.randn(2, 3, 4)\n >>> B2 = torch.randn(2, 3, 7)\n >>> A_factor = torch.linalg.lu_factor(A)\n >>> X1 = torch.linalg.lu_solve(A_factor, B1)\n >>> X2 = torch.linalg.lu_solve(A_factor, B2)\n >>> torch.allclose(A @ X1, B1)\n True\n >>> torch.allclose(A @ X2, B2)\n True", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html", "category": "pytorch docs"}
{"text": "torch.logical_nottorch.logical_not(input, , out=None) -> Tensor\n Computes the element-wise logical NOT of the given input tensor. If\n not specified, the output tensor will have the bool dtype. If the\n input tensor is not a bool tensor, zeros are treated as \"False\" and\n non-zeros are treated as \"True\".\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> torch.logical_not(torch.tensor([True, False]))\n tensor([False, True])\n >>> torch.logical_not(torch.tensor([0, 1, -10], dtype=torch.int8))\n tensor([ True, False, False])\n >>> torch.logical_not(torch.tensor([0., 1.5, -10.], dtype=torch.double))\n tensor([ True, False, False])\n >>> torch.logical_not(torch.tensor([0., 1., -10.], dtype=torch.double), out=torch.empty(3, dtype=torch.int16))\n tensor([1, 0, 0], dtype=torch.int16)", "source": "https://pytorch.org/docs/stable/generated/torch.logical_not.html", "category": "pytorch docs"}
{"text": "LazyConvTranspose3dclass torch.nn.LazyConvTranspose3d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n A \"torch.nn.ConvTranspose3d\" module with lazy initialization of the\n \"in_channels\" argument of the \"ConvTranspose3d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight and bias.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose3d.html", "category": "pytorch docs"}
{"text": "both sides of each dimension in the input. Default: 0\n * output_padding (int or tuple, optional) --\n Additional size added to one side of each dimension in the\n output shape. Default: 0\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n See also:\n \"torch.nn.ConvTranspose3d\" and\n \"torch.nn.modules.lazy.LazyModuleMixin\"\n cls_to_become\n alias of \"ConvTranspose3d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logitTensor.logit() -> Tensor\n See \"torch.logit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logit.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.hardtanh_torch.nn.functional.hardtanh_(input, min_val=- 1., max_val=1.) -> Tensor\n In-place version of \"hardtanh()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh_.html", "category": "pytorch docs"}
{"text": "torch.cuda.reset_max_memory_allocatedtorch.cuda.reset_max_memory_allocated(device=None)\n Resets the starting point in tracking maximum GPU memory occupied\n by tensors for a given device.\n See \"max_memory_allocated()\" for details.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Warning:\n This function now calls \"reset_peak_memory_stats()\", which resets\n /all/ peak memory stats.\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.reset_max_memory_allocated.html", "category": "pytorch docs"}
{"text": "torch.Tensor.dense_dimTensor.dense_dim() -> int\n Return the number of dense dimensions in a sparse tensor \"self\".\n Note:\n Returns \"len(self.shape)\" if \"self\" is not a sparse tensor.\n See also \"Tensor.sparse_dim()\" and hybrid tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dense_dim.html", "category": "pytorch docs"}
{"text": "torch.Tensor.expm1_Tensor.expm1_() -> Tensor\n In-place version of \"expm1()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expm1_.html", "category": "pytorch docs"}
{"text": "torch.cuda.initial_seedtorch.cuda.initial_seed()\n Returns the current random seed of the current GPU.\n Warning:\n This function eagerly initializes CUDA.\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.initial_seed.html", "category": "pytorch docs"}
{"text": "torch.Tensor.pow_Tensor.pow_(exponent) -> Tensor\n In-place version of \"pow()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.pow_.html", "category": "pytorch docs"}
{"text": "PruningContainerclass torch.nn.utils.prune.PruningContainer(args)\n Container holding a sequence of pruning methods for iterative\n pruning. Keeps track of the order in which pruning methods are\n applied and handles combining successive pruning calls.\n Accepts as argument an instance of a BasePruningMethod or an\n iterable of them.\n add_pruning_method(method)\n Adds a child pruning \"method\" to the container.\n Parameters:\n method (subclass of BasePruningMethod) -- child pruning\n method to be added to the container.\n classmethod apply(module, name, args, importance_scores=None, kwargs)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"}
{"text": "pruning will act.\n * args -- arguments passed on to a subclass of\n \"BasePruningMethod\"\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as module parameter) used\n to compute mask for pruning. The values in this tensor\n indicate the importance of the corresponding elements in\n the parameter being pruned. If unspecified or None, the\n parameter will be used in its place.\n * kwargs -- keyword arguments passed on to a subclass of\n a \"BasePruningMethod\"\n apply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n Parameters:\n module (nn.Module) -- module containing the tensor to\n prune\n Returns:\n pruned version of the input tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"}
{"text": "pruned version of the input tensor\n Return type:\n pruned_tensor (torch.Tensor)\n compute_mask(t, default_mask)\n Applies the latest \"method\" by computing the new partial masks\n and returning its combination with the \"default_mask\". The new\n partial mask should be computed on the entries or channels that\n were not zeroed out by the \"default_mask\". Which portions of the\n tensor \"t\" the new mask will be calculated from depends on the\n \"PRUNING_TYPE\" (handled by the type handler):\n * for 'unstructured', the mask will be computed from the raveled\n list of nonmasked entries;\n * for 'structured', the mask will be computed from the nonmasked\n channels in the tensor;\n * for 'global', the mask will be computed across all entries.\n Parameters:\n * t (torch.Tensor) -- tensor representing the parameter\n to prune (of same dimensions as \"default_mask\").", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"}
{"text": "\ndefault_mask (torch.Tensor) -- mask from previous\n pruning iteration.\n Returns:\n new mask that combines the effects of the \"default_mask\" and\n the new mask from the current pruning \"method\" (of same\n dimensions as \"default_mask\" and \"t\").\n Return type:\n mask (torch.Tensor)\n prune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * t (torch.Tensor) -- tensor to prune (of same\n dimensions as \"default_mask\").\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"}
{"text": "\"t\" will be used in its place.\n * default_mask (torch.Tensor, optional) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n Returns:\n pruned version of tensor \"t\".\n remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n Pruning itself is NOT undone or reversed!", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"}
{"text": "torch.permutetorch.permute(input, dims) -> Tensor\n Returns a view of the original tensor \"input\" with its dimensions\n permuted.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dims (tuple of python:int) -- The desired ordering of\n dimensions\n -[ Example ]-\n\n\n\nx = torch.randn(2, 3, 5)\nx.size()\n torch.Size([2, 3, 5])\ntorch.permute(x, (2, 0, 1)).size()\n torch.Size([5, 2, 3])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.permute.html", "category": "pytorch docs"}
{"text": "torch.Tensor.le_Tensor.le_(other) -> Tensor\n In-place version of \"le()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.le_.html", "category": "pytorch docs"}
{"text": "torch.movedimtorch.movedim(input, source, destination) -> Tensor\n Moves the dimension(s) of \"input\" at the position(s) in \"source\" to\n the position(s) in \"destination\".\n Other dimensions of \"input\" that are not explicitly moved remain in\n their original order and appear at the positions not specified in\n \"destination\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * source (int or tuple of ints) -- Original positions\n of the dims to move. These must be unique.\n * destination (int or tuple of ints) -- Destination\n positions for each of the original dims. These must also be\n unique.\n Examples:\n >>> t = torch.randn(3,2,1)\n >>> t\n tensor([[[-0.3362],\n [-0.8437]],\n [[-0.9627],\n [ 0.1727]],\n [[ 0.5173],\n [-0.1398]]])\n >>> torch.movedim(t, 1, 0).shape\n torch.Size([2, 3, 1])\n >>> torch.movedim(t, 1, 0)\n tensor([[[-0.3362],", "source": "https://pytorch.org/docs/stable/generated/torch.movedim.html", "category": "pytorch docs"}
{"text": "tensor([[[-0.3362],\n [-0.9627],\n [ 0.5173]],\n [[-0.8437],\n [ 0.1727],\n [-0.1398]]])\n >>> torch.movedim(t, (1, 2), (0, 1)).shape\n torch.Size([2, 1, 3])\n >>> torch.movedim(t, (1, 2), (0, 1))\n tensor([[[-0.3362, -0.9627, 0.5173]],\n [[-0.8437, 0.1727, -0.1398]]])", "source": "https://pytorch.org/docs/stable/generated/torch.movedim.html", "category": "pytorch docs"}
{"text": "CustomFromMaskclass torch.nn.utils.prune.CustomFromMask(mask)\n classmethod apply(module, name, mask)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n apply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n Parameters:\n module (nn.Module) -- module containing the tensor to\n prune\n Returns:\n pruned version of the input tensor\n Return type:\n pruned_tensor (torch.Tensor)\n prune(t, default_mask=None, importance_scores=None)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.CustomFromMask.html", "category": "pytorch docs"}
{"text": "Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * t (torch.Tensor) -- tensor to prune (of same\n dimensions as \"default_mask\").\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * default_mask (torch.Tensor, optional) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n Returns:\n pruned version of tensor \"t\".\n remove(module)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.CustomFromMask.html", "category": "pytorch docs"}
{"text": "remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n Pruning itself is NOT undone or reversed!", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.CustomFromMask.html", "category": "pytorch docs"}
{"text": "torch.foreach_expm1_torch._foreach_expm1(self: List[Tensor]) -> None\n Apply \"torch.expm1()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_expm1_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.greaterTensor.greater(other) -> Tensor\n See \"torch.greater()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.greater.html", "category": "pytorch docs"}
{"text": "torch.linalg.eigvalshtorch.linalg.eigvalsh(A, UPLO='L', , out=None) -> Tensor\n Computes the eigenvalues of a complex Hermitian or real symmetric\n matrix.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the eigenvalues\n of a complex Hermitian or real symmetric matrix A \\in\n \\mathbb{K}^{n \\times n} are defined as the roots (counted with\n multiplicity) of the polynomial p of degree n given by\n p(\\lambda) = \\operatorname{det}(A - \\lambda\n \\mathrm{I}_n)\\mathrlap{\\qquad \\lambda \\in \\mathbb{R}}\n where \\mathrm{I}_n is the n*-dimensional identity matrix. The\n eigenvalues of a real symmetric or complex Hermitian matrix are\n always real.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n The eigenvalues are returned in ascending order.\n \"A\" is assumed to be Hermitian (resp. symmetric), but this is not", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvalsh.html", "category": "pytorch docs"}
{"text": "checked internally, instead:\n * If \"UPLO\"= 'L' (default), only the lower triangular part of the\n matrix is used in the computation.\n * If \"UPLO\"= 'U', only the upper triangular part of the matrix is\n used.\n Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n See also:\n \"torch.linalg.eigh()\" computes the full eigenvalue decomposition.\n Parameters:\n * A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions consisting of symmetric or\n Hermitian matrices.\n * UPLO ('L', 'U', optional) -- controls whether to\n use the upper or lower triangular part of \"A\" in the\n computations. Default: 'L'.\n Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None*.\n Returns:\n A real-valued tensor containing the eigenvalues even when \"A\" is", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvalsh.html", "category": "pytorch docs"}
{"text": "complex. The eigenvalues are returned in ascending order.\n Examples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A = A + A.T.conj() # creates a Hermitian matrix\n >>> A\n tensor([[2.9228+0.0000j, 0.2029-0.0862j],\n [0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)\n >>> torch.linalg.eigvalsh(A)\n tensor([0.3277, 2.9415], dtype=torch.float64)\n >>> A = torch.randn(3, 2, 2, dtype=torch.float64)\n >>> A = A + A.mT # creates a batch of symmetric matrices\n >>> torch.linalg.eigvalsh(A)\n tensor([[ 2.5797, 3.4629],\n [-4.1605, 1.3780],\n [-3.1113, 2.7381]], dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvalsh.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.adaptive_max_pool3dtorch.nn.functional.adaptive_max_pool3d(args, kwargs)\n Applies a 3D adaptive max pooling over an input signal composed of\n several input planes.\n See \"AdaptiveMaxPool3d\" for details and output shape.\n Parameters:\n * output_size -- the target output size (single integer or\n triple-integer tuple)\n * return_indices* -- whether to return pooling indices.\n Default: \"False\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_max_pool3d.html", "category": "pytorch docs"}
{"text": "torch.mvtorch.mv(input, vec, , out=None) -> Tensor\n Performs a matrix-vector product of the matrix \"input\" and the\n vector \"vec\".\n If \"input\" is a (n \\times m) tensor, \"vec\" is a 1-D tensor of size\n m, \"out\" will be 1-D of size n.\n Note:\n This function does not broadcast.\n Parameters:\n * input (Tensor) -- matrix to be multiplied\n * vec (Tensor) -- vector to be multiplied\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> mat = torch.randn(2, 3)\n >>> vec = torch.randn(3)\n >>> torch.mv(mat, vec)\n tensor([ 1.0404, -0.6361])", "source": "https://pytorch.org/docs/stable/generated/torch.mv.html", "category": "pytorch docs"}
{"text": "torch.Tensor.medianTensor.median(dim=None, keepdim=False)\n See \"torch.median()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.median.html", "category": "pytorch docs"}
{"text": "default_qat_qconfigtorch.quantization.qconfig.default_qat_qconfig\n alias of QConfig(activation=functools.partial(,\n observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8,\n qscheme=torch.per_tensor_affine, reduce_range=True){},\n weight=functools.partial(,\n observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric, reduce_range=False){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qat_qconfig.html", "category": "pytorch docs"}
{"text": "torch.Tensor.set_Tensor.set_(source=None, storage_offset=0, size=None, stride=None) -> Tensor\n Sets the underlying storage, size, and strides. If \"source\" is a\n tensor, \"self\" tensor will share the same storage and have the same\n size and strides as \"source\". Changes to elements in one tensor\n will be reflected in the other.\n If \"source\" is a \"Storage\", the method sets the underlying storage,\n offset, size, and stride.\n Parameters:\n * source (Tensor or Storage) -- the tensor or storage\n to use\n * storage_offset (int, optional) -- the offset in the\n storage\n * size (torch.Size, optional) -- the desired size.\n Defaults to the size of the source.\n * stride (tuple, optional) -- the desired stride.\n Defaults to C-contiguous strides.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.set_.html", "category": "pytorch docs"}
{"text": "torch.amaxtorch.amax(input, dim, keepdim=False, , out=None) -> Tensor\n Returns the maximum value of each slice of the \"input\" tensor in\n the given dimension(s) \"dim\".\n Note:\n The difference between \"max\"/\"min\" and \"amax\"/\"amin\" is:\n * \"amax\"/\"amin\" supports reducing on multiple dimensions,\n * \"amax\"/\"amin\" does not return indices,\n * \"amax\"/\"amin\" evenly distributes gradient between equal\n values, while \"max(dim)\"/\"min(dim)\" propagates gradient only\n to a single index in the source tensor.\n If \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints*) -- the dimension or\n dimensions to reduce.", "source": "https://pytorch.org/docs/stable/generated/torch.amax.html", "category": "pytorch docs"}
{"text": "dimensions to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.8177, 1.4878, -0.2491, 0.9130],\n [-0.7158, 1.1775, 2.0992, 0.4817],\n [-0.0053, 0.0164, -1.3738, -0.0507],\n [ 1.9700, 1.1106, -1.0318, -1.0816]])\n >>> torch.amax(a, 1)\n tensor([1.4878, 2.0992, 0.0164, 1.9700])", "source": "https://pytorch.org/docs/stable/generated/torch.amax.html", "category": "pytorch docs"}
{"text": "torch.cuda.manual_seedtorch.cuda.manual_seed(seed)\n Sets the seed for generating random numbers for the current GPU.\n It's safe to call this function if CUDA is not available; in that\n case, it is silently ignored.\n Parameters:\n seed (int) -- The desired seed.\n Warning:\n If you are working with a multi-GPU model, this function is\n insufficient to get determinism. To seed all GPUs, use\n \"manual_seed_all()\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.manual_seed.html", "category": "pytorch docs"}
{"text": "torch.lobpcgtorch.lobpcg(A, k=None, B=None, X=None, n=None, iK=None, niter=None, tol=None, largest=None, method=None, tracker=None, ortho_iparams=None, ortho_fparams=None, ortho_bparams=None)\n Find the k largest (or smallest) eigenvalues and the corresponding\n eigenvectors of a symmetric positive definite generalized\n eigenvalue problem using matrix-free LOBPCG methods.\n This function is a front-end to the following LOBPCG algorithms\n selectable via method argument:\n method=\"basic\" - the LOBPCG method introduced by Andrew\n Knyazev, see [Knyazev2001]. A less robust method, may fail when\n Cholesky is applied to singular input.\n method=\"ortho\" - the LOBPCG method with orthogonal basis\n selection [StathopoulosEtal2002]. A robust method.\n Supported inputs are dense, sparse, and batches of dense matrices.\n Note:\n In general, the basic method spends least time per iteration.\n However, the robust methods converge much faster and are more", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"}
{"text": "stable. So, the usage of the basic method is generally not\n recommended but there exist cases where the usage of the basic\n method may be preferred.\n Warning:\n The backward method does not support sparse and complex inputs.\n It works only when B is not provided (i.e. B == None). We are\n actively working on extensions, and the details of the algorithms\n are going to be published promptly.\n Warning:\n While it is assumed that A is symmetric, A.grad is not. To\n make sure that A.grad is symmetric, so that A - t * A.grad is\n symmetric in first-order optimization routines, prior to running\n lobpcg we do the following symmetrization map: A -> (A +\n A.t()) / 2. The map is performed only when the A requires\n gradients.\n Parameters:\n * A (Tensor) -- the input tensor of size (, m, m)\n * B (Tensor, optional) -- the input tensor of size (,\n m, m). When not specified, B is interpreted as identity\n matrix.", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"}
{"text": "matrix.\n * X (tensor, optional) -- the input tensor of size (,\n m, n) where k <= n <= m. When specified, it is used as\n initial approximation of eigenvectors. X must be a dense\n tensor.\n * iK (tensor, optional) -- the input tensor of size\n (, m, m). When specified, it will be used as preconditioner.\n * k (integer, optional) -- the number of requested\n eigenpairs. Default is the number of X columns (when\n specified) or 1.\n * n (integer, optional) -- if X is not specified then\n n specifies the size of the generated random approximation\n of eigenvectors. Default value for n is k. If X is\n specified, the value of n (when specified) must be the\n number of X columns.\n * tol (float, optional) -- residual tolerance for\n stopping criterion. Default is feps ** 0.5 where feps is\n smallest non-zero floating-point number of the given input", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"}
{"text": "tensor A data type.\n * largest (bool, optional) -- when True, solve the\n eigenproblem for the largest eigenvalues. Otherwise, solve the\n eigenproblem for smallest eigenvalues. Default is True.\n * method (str, optional) -- select LOBPCG method. See\n the description of the function above. Default is \"ortho\".\n * niter (int, optional) -- maximum number of\n iterations. When reached, the iteration process is hard-\n stopped and the current approximation of eigenpairs is\n returned. For infinite iteration but until convergence\n criteria is met, use -1.\n * tracker (callable, optional) --\n a function for tracing the iteration process. When specified,\n it is called at each iteration step with LOBPCG instance as an\n argument. The LOBPCG instance holds the full state of the\n iteration process in the following attributes:", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"}
{"text": "iparams, fparams, bparams - dictionaries of integer,\n float, and boolean valued input parameters, respectively\n ivars, fvars, bvars, tvars - dictionaries of\n integer, float, boolean, and Tensor valued iteration\n variables, respectively.\n A, B, iK - input Tensor arguments.\n E, X, S, R - iteration Tensor variables.\n For instance:\n ivars[\"istep\"] - the current iteration step X - the\n current approximation of eigenvectors E - the current\n approximation of eigenvalues R - the current residual\n ivars[\"converged_count\"] - the current number of\n converged eigenpairs tvars[\"rerr\"] - the current state of\n convergence criteria\n Note that when tracker stores Tensor objects from the LOBPCG\n instance, it must make copies of these.\n If tracker sets bvars[\"force_stop\"] = True, the iteration", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"}
{"text": "process will be hard-stopped.\n * ortho_iparams (dict, optional) -- various parameters\n to LOBPCG algorithm when using method=\"ortho\".\n * ortho_fparams (dict, optional) -- various parameters\n to LOBPCG algorithm when using method=\"ortho\".\n * ortho_bparams (dict, optional) -- various parameters\n to LOBPCG algorithm when using method=\"ortho\".\n Returns:\n tensor of eigenvalues of size (, k)\n X (Tensor): tensor of eigenvectors of size (, m, k)\n Return type:\n E (Tensor)\n -[ References ]-\n [Knyazev2001] Andrew V. Knyazev. (2001) Toward the Optimal\n Preconditioned Eigensolver: Locally Optimal Block Preconditioned\n Conjugate Gradient Method. SIAM J. Sci. Comput., 23(2), 517-541.\n (25 pages) https://epubs.siam.org/doi/abs/10.1137/S1064827500366124\n [StathopoulosEtal2002] Andreas Stathopoulos and Kesheng Wu. (2002)\n A Block Orthogonalization Procedure with Constant Synchronization", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"}
{"text": "Requirements. SIAM J. Sci. Comput., 23(6), 2165-2182. (18 pages)\n https://epubs.siam.org/doi/10.1137/S1064827500370883\n [DuerschEtal2018] Jed A. Duersch, Meiyue Shao, Chao Yang, Ming Gu.\n (2018) A Robust and Efficient Implementation of LOBPCG. SIAM J.\n Sci. Comput., 40(5), C655-C676. (22 pages)\n https://epubs.siam.org/doi/abs/10.1137/17M1129830", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"}
{"text": "torch.Tensor.movedimTensor.movedim(source, destination) -> Tensor\n See \"torch.movedim()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.movedim.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.general_hammingtorch.signal.windows.general_hamming(M, , alpha=0.54, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes the general Hamming window.\n The general Hamming window is defined as follows:\n w_n = \\alpha - (1 - \\alpha) \\cos{ \\left( \\frac{2 \\pi n}{M-1}\n \\right)}\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * alpha (float, optional) -- the window coefficient.\n Default: 0.54.\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True*.", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_hamming.html", "category": "pytorch docs"}
{"text": "design. Default: True.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric Hamming window with the general Hamming window.\n >>> torch.signal.windows.general_hamming(10, sym=True)", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_hamming.html", "category": "pytorch docs"}
{"text": "tensor([0.0800, 0.1876, 0.4601, 0.7700, 0.9723, 0.9723, 0.7700, 0.4601, 0.1876, 0.0800])\n >>> # Generates a periodic Hann window with the general Hamming window.\n >>> torch.signal.windows.general_hamming(10, alpha=0.5, sym=False)\n tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_hamming.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arctanhTensor.arctanh() -> Tensor\n See \"torch.arctanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctanh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.less_equal_Tensor.less_equal_(other) -> Tensor\n In-place version of \"less_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.less_equal_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lu_solveTensor.lu_solve(LU_data, LU_pivots) -> Tensor\n See \"torch.lu_solve()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lu_solve.html", "category": "pytorch docs"}
{"text": "torch.addcdivtorch.addcdiv(input, tensor1, tensor2, , value=1, out=None) -> Tensor\n Performs the element-wise division of \"tensor1\" by \"tensor2\",\n multiplies the result by the scalar \"value\" and adds it to \"input\".\n Warning:\n Integer division with addcdiv is no longer supported, and in a\n future release addcdiv will perform a true division of tensor1\n and tensor2. The historic addcdiv behavior can be implemented as\n (input + value * torch.trunc(tensor1 / tensor2)).to(input.dtype)\n for integer inputs and as (input + value * tensor1 / tensor2) for\n float inputs. The future addcdiv behavior is just the latter\n implementation: (input + value * tensor1 / tensor2), for all\n dtypes.\n \\text{out}_i = \\text{input}_i + \\text{value} \\times\n \\frac{\\text{tensor1}_i}{\\text{tensor2}_i}\n The shapes of \"input\", \"tensor1\", and \"tensor2\" must be\n broadcastable.\n For inputs of type FloatTensor or DoubleTensor*, \"value\" must be", "source": "https://pytorch.org/docs/stable/generated/torch.addcdiv.html", "category": "pytorch docs"}
{"text": "a real number, otherwise an integer.\n Parameters:\n * input (Tensor) -- the tensor to be added\n * tensor1 (Tensor) -- the numerator tensor\n * tensor2 (Tensor) -- the denominator tensor\n Keyword Arguments:\n * value (Number, optional) -- multiplier for\n \\text{tensor1} / \\text{tensor2}\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> t = torch.randn(1, 3)\n >>> t1 = torch.randn(3, 1)\n >>> t2 = torch.randn(1, 3)\n >>> torch.addcdiv(t, t1, t2, value=0.1)\n tensor([[-0.2312, -3.6496, 0.1312],\n [-1.0428, 3.4292, -0.1030],\n [-0.5369, -0.9829, 0.0430]])", "source": "https://pytorch.org/docs/stable/generated/torch.addcdiv.html", "category": "pytorch docs"}
{"text": "VerificationOptionsclass torch.onnx.verification.VerificationOptions(flatten=True, ignore_none=True, check_shape=True, check_dtype=True, backend=OnnxBackend.ONNX_RUNTIME_CPU, rtol=0.001, atol=1e-07, remained_onnx_input_idx=None, acceptable_error_percentage=None)\n Options for ONNX export verification.\n Variables:\n * flatten (bool) -- If True, unpack nested list/tuple/dict\n inputs into a flattened list of Tensors for ONNX. Set this to\n False if nested structures are to be preserved for ONNX, which\n is usually the case with exporting ScriptModules. Default\n True.\n * ignore_none (bool) -- Whether to ignore None type in\n torch output, which is usually the case with tracing. Set this\n to False, if torch output should keep None type, which is\n usually the case with exporting ScriptModules. Default to\n True.\n * check_shape (bool) -- Whether to check the shapes", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.VerificationOptions.html", "category": "pytorch docs"}
{"text": "between PyTorch and ONNX Runtime outputs are exactly the same.\n Set this to False to allow output shape broadcasting. Default\n to True.\n * check_dtype (bool) -- Whether to check the dtypes\n between PyTorch and ONNX Runtime outputs are consistent.\n Default to True.\n * backend (torch.onnx.verification.OnnxBackend) -- ONNX\n backend for verification. Default to\n OnnxBackend.ONNX_RUNTIME_CPU.\n * rtol (float) -- relative tolerance in comparison between\n ONNX and PyTorch outputs.\n * atol (float) -- absolute tolerance in comparison between\n ONNX and PyTorch outputs.\n * remained_onnx_input_idx\n (Optional[Sequence[int]]) -- If provided, only\n the specified inputs will be passed to the ONNX model. Supply\n a list when there are unused inputs in the model. Since unused\n inputs will be removed in the exported ONNX model, supplying", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.VerificationOptions.html", "category": "pytorch docs"}
{"text": "all inputs will cause an error on unexpected inputs. This\n parameter tells the verifier which inputs to pass into the\n ONNX model.\n * acceptable_error_percentage (Optional[float]) --\n acceptable percentage of element mismatches in comparison. It\n should be a float of value between 0.0 and 1.0.", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.VerificationOptions.html", "category": "pytorch docs"}
{"text": "torch.foreach_floor_torch._foreach_floor(self: List[Tensor]) -> None\n Apply \"torch.floor()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_floor_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.true_divideTensor.true_divide(value) -> Tensor\n See \"torch.true_divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.true_divide.html", "category": "pytorch docs"}
{"text": "torch.Tensor.isinfTensor.isinf() -> Tensor\n See \"torch.isinf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isinf.html", "category": "pytorch docs"}
{"text": "torch.sqrttorch.sqrt(input, , out=None) -> Tensor\n Returns a new tensor with the square-root of the elements of\n \"input\".\n \\text{out}{i} = \\sqrt{\\text{input}}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-2.0755, 1.0226, 0.0831, 0.4806])\n >>> torch.sqrt(a)\n tensor([ nan, 1.0112, 0.2883, 0.6933])", "source": "https://pytorch.org/docs/stable/generated/torch.sqrt.html", "category": "pytorch docs"}
{"text": "torch.func.stack_module_statetorch.func.stack_module_state(models) -> params, buffers\n Prepares a list of torch.nn.Modules for ensembling with \"vmap()\".\n Given a list of \"M\" \"nn.Modules\" of the same class, returns two\n dictionaries that stack all of their parameters and buffers\n together, indexed by name. The stacked parameters are optimizable\n (i.e. they are new leaf nodes in the autograd history that are\n unrelated to the original parameters and can be passed directly to\n an optimizer).\n Here's an example of how to ensemble over a very simple model:\n num_models = 5\n batch_size = 64\n in_features, out_features = 3, 3\n models = [torch.nn.Linear(in_features, out_features) for i in range(num_models)]\n data = torch.randn(batch_size, 3)\n def wrapper(params, buffers, data):\n return torch.func.functional_call(model[0], (params, buffers), data)\n params, buffers = stack_module_state(models)", "source": "https://pytorch.org/docs/stable/generated/torch.func.stack_module_state.html", "category": "pytorch docs"}
{"text": "params, buffers = stack_module_state(models)\n output = vmap(wrapper, (0, 0, None))(params, buffers, data)\n assert output.shape == (num_models, batch_size, out_features)\n When there's submodules, this follows state dict naming conventions\n import torch.nn as nn\n class Foo(nn.Module):\n def init(self, in_features, out_features):\n super().init()\n hidden = 4\n self.l1 = nn.Linear(in_features, hidden)\n self.l2 = nn.Linear(hidden, out_features)\n def forward(self, x):\n return self.l2(self.l1(x))\n num_models = 5\n in_features, out_features = 3, 3\n models = [Foo(in_features, out_features) for i in range(num_models)]\n params, buffers = stack_module_state(models)\n print(list(params.keys())) # \"l1.weight\", \"l1.bias\", \"l2.weight\", \"l2.bias\"\n Warning:\n All of the modules being stacked together must be the same", "source": "https://pytorch.org/docs/stable/generated/torch.func.stack_module_state.html", "category": "pytorch docs"}
{"text": "(except for the values of their parameters/buffers). For example,\n they should be in the same mode (training vs eval).\n Return type:\n Tuple[Dict[str, Any], Dict[str, Any]]", "source": "https://pytorch.org/docs/stable/generated/torch.func.stack_module_state.html", "category": "pytorch docs"}
{"text": "swap_moduleclass torch.quantization.swap_module(mod, mapping, custom_module_class_mapping)\n Swaps the module if it has a quantized counterpart and it has an\n observer attached.\n Parameters:\n * mod -- input module\n * mapping -- a dictionary that maps from nn module to nnq\n module\n Returns:\n The corresponding quantized module of mod", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.swap_module.html", "category": "pytorch docs"}
{"text": "ConvReLU2dclass torch.ao.nn.intrinsic.quantized.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n A ConvReLU2d module is a fused module of Conv2d and ReLU\n We adopt the same interface as \"torch.ao.nn.quantized.Conv2d\".\n Variables:\n torch.ao.nn.quantized.Conv2d (Same as) --", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.ConvReLU2d.html", "category": "pytorch docs"}
{"text": "torch.narrowtorch.narrow(input, dim, start, length) -> Tensor\n Returns a new tensor that is a narrowed version of \"input\" tensor.\n The dimension \"dim\" is input from \"start\" to \"start + length\". The\n returned tensor and \"input\" tensor share the same underlying\n storage.\n Parameters:\n * input (Tensor) -- the tensor to narrow\n * dim (int) -- the dimension along which to narrow\n * start (int or Tensor) -- index of the element to\n start the narrowed dimension from. Can be negative, which\n means indexing from the end of dim. If Tensor, it must be\n an 0-dim integral Tensor (bools not allowed)\n * length (int) -- length of the narrowed dimension, must\n be weakly positive\n Example:\n >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n >>> torch.narrow(x, 0, 0, 2)\n tensor([[ 1, 2, 3],\n [ 4, 5, 6]])\n >>> torch.narrow(x, 1, 1, 2)\n tensor([[ 2, 3],", "source": "https://pytorch.org/docs/stable/generated/torch.narrow.html", "category": "pytorch docs"}
{"text": "tensor([[ 2, 3],\n [ 5, 6],\n [ 8, 9]])\n >>> torch.narrow(x, -1, torch.tensor(-1), 1)\n tensor([[3],\n [6],\n [9]])", "source": "https://pytorch.org/docs/stable/generated/torch.narrow.html", "category": "pytorch docs"}
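The shared-storage behavior described in the entry above can be sketched as follows (an illustrative example, not part of the scraped docs): narrowing returns a view, so writes through the view are visible in the original tensor.

```python
import torch

# torch.narrow returns a view: it shares storage with the input tensor,
# so writes through the narrowed tensor are visible in the original.
x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
view = torch.narrow(x, 0, 0, 2)   # first two rows, no copy
view[0, 0] = 100                  # mutate through the view
print(x[0, 0].item())             # the original tensor changed as well
```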
{"text": "float16_static_qconfigtorch.quantization.qconfig.float16_static_qconfig\n alias of QConfig(activation=functools.partial(<class 'torch.ao.quantization.observer.PlaceholderObserver'>,\n dtype=torch.float16){}, weight=functools.partial(<class 'torch.ao.quantization.observer.PlaceholderObserver'>,\n dtype=torch.float16){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float16_static_qconfig.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.dropout2dtorch.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False)\n Randomly zero out entire channels (a channel is a 2D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 2D tensor \\text{input}[i, j]) of the input tensor. Each channel\n will be zeroed out independently on every forward call with\n probability \"p\" using samples from a Bernoulli distribution.\n See \"Dropout2d\" for details.\n Parameters:\n * p (float) -- probability of a channel to be zeroed.\n Default: 0.5\n * training (bool) -- apply dropout if \"True\". Default:\n \"True\"\n * inplace (bool) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout2d.html", "category": "pytorch docs"}
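A quick illustrative sketch (not from the scraped docs): with training=True each (H, W) channel is either zeroed entirely or rescaled by 1/(1-p), and with training=False the call is the identity.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(4, 8, 16, 16)                 # (N, C, H, W)
out = F.dropout2d(x, p=0.5, training=True)

# Each (H, W) channel is either zeroed entirely or elementwise equal
# to x / (1 - p): surviving channels are rescaled to keep expectations.
for n in range(4):
    for c in range(8):
        ch = out[n, c]
        assert bool((ch == 0).all()) or torch.allclose(ch, x[n, c] / 0.5)

# With training=False the call leaves the input unchanged.
assert torch.equal(F.dropout2d(x, p=0.5, training=False), x)
```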
{"text": "torch.Tensor.cummaxTensor.cummax(dim)\n See \"torch.cummax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cummax.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.upsample_bilineartorch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None)\n Upsamples the input, using bilinear upsampling.\n Warning:\n This function is deprecated in favor of\n \"torch.nn.functional.interpolate()\". This is equivalent to\n \"nn.functional.interpolate(..., mode='bilinear',\n align_corners=True)\".\n Expected inputs are spatial (4 dimensional). Use\n upsample_trilinear for volumetric (5 dimensional) inputs.\n Parameters:\n * input (Tensor) -- input\n * size (int or Tuple[int, int]) -- output\n spatial size.\n * scale_factor (int or Tuple[int, int]) --\n multiplier for spatial size\n Note:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample_bilinear.html", "category": "pytorch docs"}
{"text": "torch.reciprocaltorch.reciprocal(input, *, out=None) -> Tensor\n Returns a new tensor with the reciprocal of the elements of \"input\"\n \\text{out}_{i} = \\frac{1}{\\text{input}_{i}}\n Note:\n Unlike NumPy's reciprocal, torch.reciprocal supports integral\n inputs. Integral inputs to reciprocal are automatically promoted\n to the default scalar type.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.4595, -2.1219, -1.4314, 0.7298])\n >>> torch.reciprocal(a)\n tensor([-2.1763, -0.4713, -0.6986, 1.3702])", "source": "https://pytorch.org/docs/stable/generated/torch.reciprocal.html", "category": "pytorch docs"}
{"text": "torch.cuda.reset_max_memory_cachedtorch.cuda.reset_max_memory_cached(device=None)\n Resets the starting point in tracking maximum GPU memory managed by\n the caching allocator for a given device.\n See \"max_memory_cached()\" for details.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Warning:\n This function now calls \"reset_peak_memory_stats()\", which resets\n all peak memory stats.\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.reset_max_memory_cached.html", "category": "pytorch docs"}
{"text": "torch.Tensor.lgammaTensor.lgamma() -> Tensor\n See \"torch.lgamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lgamma.html", "category": "pytorch docs"}
{"text": "SparseAdamclass torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, maximize=False)\n Implements a lazy version of the Adam algorithm suitable for sparse\n tensors.\n In this variant, only moments that show up in the gradient get\n updated, and only those portions of the gradient get applied to the\n parameters.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 1e-3)\n * betas (Tuple[float, float], optional) --\n coefficients used for computing running averages of gradient\n and its square (default: (0.9, 0.999))\n * eps (float, optional) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n * maximize (bool, optional) -- maximize the params\n based on the objective, instead of minimizing (default: False)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"}
{"text": "add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"}
{"text": "registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"}
{"text": "It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where each\n parameter group is a dict\n step(closure=None)\n Performs a single optimization step.\n Parameters:\n closure (Callable, optional) -- A closure that\n reevaluates the model and returns the loss.\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"}
{"text": "differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"}
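A minimal usage sketch for the SparseAdam entry above (illustrative, not part of the scraped docs): SparseAdam expects parameters that receive sparse gradients, e.g. an nn.Embedding built with sparse=True; only the embedding rows touched by the batch are updated.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# SparseAdam only accepts parameters that receive sparse gradients,
# e.g. an nn.Embedding constructed with sparse=True.
emb = nn.Embedding(100, 16, sparse=True)
w0 = emb.weight.detach().clone()
opt = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)

ids = torch.tensor([3, 7, 7, 42])   # only these rows get gradients
loss = emb(ids).pow(2).sum()
opt.zero_grad()
loss.backward()                      # emb.weight.grad is a sparse tensor
assert emb.weight.grad.is_sparse
opt.step()                           # lazily updates only rows 3, 7 and 42
```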
{"text": "torch.Tensor.addbmmTensor.addbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor\n See \"torch.addbmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addbmm.html", "category": "pytorch docs"}
{"text": "torch.is_conjtorch.is_conj(input)\n Returns True if the \"input\" is a conjugated tensor, i.e. its\n conjugate bit is set to True.\n Parameters:\n input (Tensor) -- the input tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.is_conj.html", "category": "pytorch docs"}
{"text": "torch.logtorch.log(input, *, out=None) -> Tensor\n Returns a new tensor with the natural logarithm of the elements of\n \"input\".\n y_{i} = \\log_{e} (x_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.rand(5) * 5\n >>> a\n tensor([4.7767, 4.3234, 1.2156, 0.2411, 4.5739])\n >>> torch.log(a)\n tensor([ 1.5637, 1.4640, 0.1952, -1.4226, 1.5204])", "source": "https://pytorch.org/docs/stable/generated/torch.log.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.alpha_dropouttorch.nn.functional.alpha_dropout(input, p=0.5, training=False, inplace=False)\n Applies alpha dropout to the input.\n See \"AlphaDropout\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.alpha_dropout.html", "category": "pytorch docs"}
{"text": "torch.count_nonzerotorch.count_nonzero(input, dim=None) -> Tensor\n Counts the number of non-zero values in the tensor \"input\" along\n the given \"dim\". If no dim is specified then all non-zeros in the\n tensor are counted.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int or tuple of ints, optional) -- Dim or\n tuple of dims along which to count non-zeros.\n Example:\n >>> x = torch.zeros(3,3)\n >>> x[torch.randn(3,3) > 0.5] = 1\n >>> x\n tensor([[0., 1., 1.],\n [0., 0., 0.],\n [0., 0., 1.]])\n >>> torch.count_nonzero(x)\n tensor(3)\n >>> torch.count_nonzero(x, dim=0)\n tensor([0, 1, 2])", "source": "https://pytorch.org/docs/stable/generated/torch.count_nonzero.html", "category": "pytorch docs"}
{"text": "NAdamclass torch.optim.NAdam(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, momentum_decay=0.004, *, foreach=None, differentiable=False)\n Implements NAdam algorithm.\n \\begin{aligned}\n &\\textbf{input}: \\gamma_t \\text{ (lr)}, \\: \\beta_1, \\beta_2 \\text{ (betas)}, \\: \\theta_0 \\text{ (params)}, \\: f(\\theta) \\text{ (objective)}, \\\\\n &\\hspace{13mm} \\lambda \\text{ (weight decay)}, \\: \\psi \\text{ (momentum decay)} \\\\\n &\\textbf{initialize}: m_0 \\leftarrow 0 \\text{ (first moment)}, \\: v_0 \\leftarrow 0 \\text{ (second moment)} \\\\\n &\\textbf{for} \\: t = 1 \\: \\textbf{to} \\: \\ldots \\: \\textbf{do} \\\\\n &\\hspace{5mm} g_t \\leftarrow \\nabla_{\\theta} f_t(\\theta_{t-1}) \\\\\n &\\hspace{5mm} \\textbf{if} \\: \\lambda \\neq 0: \\: g_t \\leftarrow g_t + \\lambda \\theta_{t-1} \\\\", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch blogs"}
{"text": "&\\hspace{5mm} \\mu_t \\leftarrow \\beta_1 \\big(1 - \\tfrac{1}{2}\\, 0.96^{t\\psi}\\big) \\\\\n &\\hspace{5mm} \\mu_{t+1} \\leftarrow \\beta_1 \\big(1 - \\tfrac{1}{2}\\, 0.96^{(t+1)\\psi}\\big) \\\\\n &\\hspace{5mm} m_t \\leftarrow \\beta_1 m_{t-1} + (1 - \\beta_1) g_t \\\\\n &\\hspace{5mm} v_t \\leftarrow \\beta_2 v_{t-1} + (1 - \\beta_2) g_t^2 \\\\\n &\\hspace{5mm} \\widehat{m_t} \\leftarrow \\mu_{t+1} m_t / \\big(1 - \\prod_{i=1}^{t+1} \\mu_i\\big) + (1 - \\mu_t) g_t / \\big(1 - \\prod_{i=1}^{t} \\mu_i\\big) \\\\\n &\\hspace{5mm} \\widehat{v_t} \\leftarrow v_t / \\big(1 - \\beta_2^t\\big) \\\\\n &\\hspace{5mm} \\theta_t \\leftarrow \\theta_{t-1} - \\gamma \\, \\widehat{m_t} / \\big(\\sqrt{\\widehat{v_t}} + \\epsilon\\big) \\\\\n &\\textbf{return} \\: \\theta_t\n \\end{aligned}", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"}
{"text": "For further details regarding the algorithm we refer to\n Incorporating Nesterov Momentum into Adam.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 2e-3)\n * betas (Tuple[float, float], optional) --\n coefficients used for computing running averages of gradient\n and its square (default: (0.9, 0.999))\n * eps (float, optional) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n * weight_decay (float, optional) -- weight decay (L2\n penalty) (default: 0)\n * momentum_decay (float, optional) -- momentum decay\n (default: 4e-3)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"}
{"text": "momentum_decay (default: 4e-3)\n * foreach (bool, optional) -- whether the foreach\n implementation of the optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use the foreach\n implementation over the for-loop implementation on CUDA, since\n it is usually significantly more performant. (default: None)\n * differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"}
{"text": "as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n register_step_pre_hook(hook)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"}
{"text": "register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"}
{"text": "differs between optimizer classes.\n * param_groups - a list containing all parameter groups where each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"}
{"text": "it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"}
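A minimal usage sketch for the NAdam entry above (illustrative, not part of the scraped docs): NAdam minimizing a one-parameter quadratic. The lr of 0.1 is chosen for this toy problem, not the 2e-3 default.

```python
import torch

# Minimize f(theta) = (theta - 3)^2 with NAdam; the loss should decrease
# as theta moves toward the minimizer at theta = 3.
theta = torch.tensor([0.0], requires_grad=True)
opt = torch.optim.NAdam([theta], lr=0.1)

losses = []
for _ in range(200):
    opt.zero_grad()
    loss = (theta - 3.0).pow(2).sum()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```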
{"text": "torch.Tensor.bitwise_orTensor.bitwise_or() -> Tensor\n See \"torch.bitwise_or()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_or.html", "category": "pytorch docs"}
{"text": "torch.Tensor.floor_divideTensor.floor_divide(value) -> Tensor\n See \"torch.floor_divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.floor_divide.html", "category": "pytorch docs"}
{"text": "torch.Tensor.allTensor.all(dim=None, keepdim=False) -> Tensor\n See \"torch.all()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.all.html", "category": "pytorch docs"}
{"text": "torch.rad2degtorch.rad2deg(input, *, out=None) -> Tensor\n Returns a new tensor with each of the elements of \"input\" converted\n from angles in radians to degrees.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([[3.142, -3.142], [6.283, -6.283], [1.570, -1.570]])\n >>> torch.rad2deg(a)\n tensor([[ 180.0233, -180.0233],\n [ 359.9894, -359.9894],\n [ 89.9544, -89.9544]])", "source": "https://pytorch.org/docs/stable/generated/torch.rad2deg.html", "category": "pytorch docs"}
{"text": "LSTMclass torch.ao.nn.quantized.dynamic.LSTM(*args, **kwargs)\n A dynamic quantized LSTM module with floating point tensors as\n inputs and outputs. We adopt the same interface as \"torch.nn.LSTM\",\n please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM\n for documentation.\n Examples:\n >>> rnn = nn.LSTM(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> c0 = torch.randn(2, 3, 20)\n >>> output, (hn, cn) = rnn(input, (h0, c0))", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.LSTM.html", "category": "pytorch docs"}
{"text": "torch.can_casttorch.can_cast(from, to) -> bool\n Determines if a type conversion is allowed under PyTorch casting\n rules described in the type promotion documentation.\n Parameters:\n * from (dtype) -- The original \"torch.dtype\".\n * to (dtype) -- The target \"torch.dtype\".\n Example:\n >>> torch.can_cast(torch.double, torch.float)\n True\n >>> torch.can_cast(torch.float, torch.int)\n False", "source": "https://pytorch.org/docs/stable/generated/torch.can_cast.html", "category": "pytorch docs"}
{"text": "torch.autograd.function.FunctionCtx.set_materialize_gradsFunctionCtx.set_materialize_grads(value)\n Sets whether to materialize grad tensors. Default is \"True\".\n This should be called only from inside the \"forward()\"\n method\n If \"True\", undefined grad tensors will be expanded to tensors full\n of zeros prior to calling the \"backward()\" and \"jvp()\" methods.\n Example::\n >>> class SimpleFunc(Function):\n >>> @staticmethod\n >>> def forward(ctx, x):\n >>> return x.clone(), x.clone()\n >>>\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, g1, g2):\n >>> return g1 + g2 # No check for None necessary\n >>>\n >>> # We modify SimpleFunc to handle non-materialized grad outputs\n >>> class Func(Function):\n >>> @staticmethod\n >>> def forward(ctx, x):\n >>> ctx.set_materialize_grads(False)", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.set_materialize_grads.html", "category": "pytorch docs"}
{"text": ">>> ctx.set_materialize_grads(False)\n >>> ctx.save_for_backward(x)\n >>> return x.clone(), x.clone()\n >>>\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, g1, g2):\n >>> x, = ctx.saved_tensors\n >>> grad_input = torch.zeros_like(x)\n >>> if g1 is not None: # We must check for None now\n >>> grad_input += g1\n >>> if g2 is not None:\n >>> grad_input += g2\n >>> return grad_input\n >>>\n >>> a = torch.tensor(1., requires_grad=True)\n >>> b, _ = Func.apply(a) # induces g2 to be undefined", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.set_materialize_grads.html", "category": "pytorch docs"}
{"text": "torch.Tensor.unique_consecutiveTensor.unique_consecutive(return_inverse=False, return_counts=False, dim=None)\n Eliminates all but the first element from every consecutive group\n of equivalent elements.\n See \"torch.unique_consecutive()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unique_consecutive.html", "category": "pytorch docs"}
{"text": "torch._foreach_neg_torch._foreach_neg_(self: List[Tensor]) -> None\n Apply \"torch.neg()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_neg_.html", "category": "pytorch docs"}
{"text": "torch.bmmtorch.bmm(input, mat2, *, out=None) -> Tensor\n Performs a batch matrix-matrix product of matrices stored in\n \"input\" and \"mat2\".\n \"input\" and \"mat2\" must be 3-D tensors each containing the same\n number of matrices.\n If \"input\" is a (b \\times n \\times m) tensor, \"mat2\" is a (b \\times\n m \\times p) tensor, \"out\" will be a (b \\times n \\times p) tensor.\n \\text{out}_i = \\text{input}_i \\mathbin{@} \\text{mat2}_i\n This operator supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n Note:\n This function does not broadcast. For broadcasting matrix\n products, see \"torch.matmul()\".\n Parameters:\n * input (Tensor) -- the first batch of matrices to be\n multiplied\n * mat2 (Tensor) -- the second batch of matrices to be\n multiplied\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.bmm.html", "category": "pytorch docs"}
{"text": "Example:\n >>> input = torch.randn(10, 3, 4)\n >>> mat2 = torch.randn(10, 4, 5)\n >>> res = torch.bmm(input, mat2)\n >>> res.size()\n torch.Size([10, 3, 5])", "source": "https://pytorch.org/docs/stable/generated/torch.bmm.html", "category": "pytorch docs"}
{"text": "torch.cuda.memory_statstorch.cuda.memory_stats(device=None)\n Returns a dictionary of CUDA memory allocator statistics for a\n given device.\n The return value of this function is a dictionary of statistics,\n each of which is a non-negative integer.\n Core statistics:\n * \"allocated.{all,large_pool,small_pool}.{current,peak,allocated,freed}\": number of allocation requests received by the memory\n allocator.\n * \"allocated_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}\": amount of allocated memory.\n * \"segment.{all,large_pool,small_pool}.{current,peak,allocated,freed}\": number of reserved segments from \"cudaMalloc()\".\n * \"reserved_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}\": amount of reserved memory.\n * \"active.{all,large_pool,small_pool}.{current,peak,allocated,freed}\": number of active memory blocks.\n * \"active_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}\": amount of active memory.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html", "category": "pytorch docs"}
{"text": "* \"inactive_split.{all,large_pool,small_pool}.{current,peak,allocated,freed}\": number of inactive, non-releasable memory blocks.\n * \"inactive_split_bytes.{all,large_pool,small_pool}.{current,peak,allocated,freed}\": amount of inactive, non-releasable memory.\n For these core statistics, values are broken down as follows.\n Pool type:\n * \"all\": combined statistics across all memory pools.\n * \"large_pool\": statistics for the large allocation pool (as of\n October 2019, for size >= 1MB allocations).\n * \"small_pool\": statistics for the small allocation pool (as of\n October 2019, for size < 1MB allocations).\n Metric type:\n * \"current\": current value of this metric.\n * \"peak\": maximum value of this metric.\n * \"allocated\": historical total increase in this metric.\n * \"freed\": historical total decrease in this metric.\n In addition to the core statistics, we also provide some simple\n event counters:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html", "category": "pytorch docs"}
{"text": "event counters:\n * \"num_alloc_retries\": number of failed \"cudaMalloc\" calls that\n result in a cache flush and retry.\n * \"num_ooms\": number of out-of-memory errors thrown.\n The caching allocator can be configured via ENV to not split blocks\n larger than a defined size (see Memory Management section of the\n Cuda Semantics documentation). This helps avoid memory\n fragmentation but may have a performance penalty. Additional\n outputs to assist with tuning and evaluating impact:\n * \"max_split_size\": blocks above this size will not be split.\n * \"oversize_allocations.{current,peak,allocated,freed}\": number\n of over-size allocation requests received by the memory\n allocator.\n * \"oversize_segments.{current,peak,allocated,freed}\": number of\n over-size reserved segments from \"cudaMalloc()\".\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistics for the current device, given by", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html", "category": "pytorch docs"}
{"text": "\"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n Dict[str, Any]\n Note:\n See Memory management for more details about GPU memory\n management.\n Note:\n With backend:cudaMallocAsync, some stats are not meaningful, and\n are always reported as zero.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html", "category": "pytorch docs"}
{"text": "BatchNorm2dclass torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\n Applies Batch Normalization over a 4D input (a mini-batch of 2D\n inputs with additional channel dimension) as described in the paper\n Batch Normalization: Accelerating Deep Network Training by Reducing\n Internal Covariate Shift .\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The mean and standard-deviation are calculated per-dimension over\n the mini-batches and \\gamma and \\beta are learnable parameter\n vectors of size C (where C is the input size). By default, the\n elements of \\gamma are set to 1 and the elements of \\beta are set\n to 0. The standard-deviation is calculated via the biased\n estimator, equivalent to torch.var(input, unbiased=False).\n Also by default, during training this layer keeps running estimates\n of its computed mean and variance, which are then used for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html", "category": "pytorch docs"}
{"text": "normalization during evaluation. The running estimates are kept\n with a default \"momentum\" of 0.1.\n If \"track_running_stats\" is set to \"False\", this layer then does\n not keep running estimates, and batch statistics are instead used\n during evaluation time as well.\n Note:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n Because the Batch Normalization is done over the C dimension,\n computing statistics on (N, H, W) slices, it's common terminology\n to call this Spatial Batch Normalization.\n Parameters:\n * num_features (int) -- C from an expected input of size\n (N, C, H, W)\n * eps (float) -- a value added to the denominator for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html", "category": "pytorch docs"}
{"text": "numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics, in both\n training and eval modes. Default: \"True\"\n Shape:\n * Input: (N, C, H, W)\n * Output: (N, C, H, W) (same shape as input)\n Examples:\n >>> # With Learnable Parameters\n >>> m = nn.BatchNorm2d(100)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html", "category": "pytorch docs"}
{"text": ">>> m = nn.BatchNorm2d(100)\n >>> # Without Learnable Parameters\n >>> m = nn.BatchNorm2d(100, affine=False)\n >>> input = torch.randn(20, 100, 35, 45)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html", "category": "pytorch docs"}
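The momentum note in the BatchNorm2d entry above can be checked numerically (an illustrative sketch, not from the scraped docs): after one training-mode forward pass, running_mean equals (1 - momentum) * 0 + momentum * batch_mean.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.BatchNorm2d(3, momentum=0.1)   # running_mean starts at zeros
x = torch.randn(8, 3, 5, 5)

m.train()
_ = m(x)   # one forward pass updates the running statistics

# Update rule from the note: new = (1 - momentum) * old + momentum * x_t,
# where the batch mean is taken over the (N, H, W) slices of each channel.
batch_mean = x.mean(dim=(0, 2, 3))
expected = 0.9 * torch.zeros(3) + 0.1 * batch_mean
print(torch.allclose(m.running_mean, expected, atol=1e-5))
```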
{"text": "torch.triangular_solvetorch.triangular_solve(b, A, upper=True, transpose=False, unitriangular=False, *, out=None)\n Solves a system of equations with a square upper or lower\n triangular invertible matrix A and multiple right-hand sides b.\n In symbols, it solves AX = b and assumes A is square upper-\n triangular (or lower-triangular if \"upper\" = False) and does not\n have zeros on the diagonal.\n torch.triangular_solve(b, A) can take in 2D inputs b, A or\n inputs that are batches of 2D matrices. If the inputs are batches,\n then it returns batched outputs X.\n If the diagonal of \"A\" contains zeros or elements that are very\n close to zero and \"unitriangular\" = False (default) or if the\n input matrix is badly conditioned, the result may contain NaN\n values.\n Supports input of float, double, cfloat and cdouble data types.\n Warning:\n \"torch.triangular_solve()\" is deprecated in favor of\n \"torch.linalg.solve_triangular()\" and will be removed in a future", "source": "https://pytorch.org/docs/stable/generated/torch.triangular_solve.html", "category": "pytorch docs"}
{"text": "PyTorch release. \"torch.linalg.solve_triangular()\" has its\n arguments reversed and does not return a copy of one of the\n inputs.\"X = torch.triangular_solve(B, A).solution\" should be\n replaced with\n X = torch.linalg.solve_triangular(A, B)\n Parameters:\n * b (Tensor) -- multiple right-hand sides of size (, m,\n k) where * is zero of more batch dimensions\n * A (Tensor) -- the input triangular coefficient matrix of\n size (, m, m) where * is zero or more batch dimensions\n * upper (bool, optional) -- whether A is upper or\n lower triangular. Default: \"True\".\n * transpose (bool, optional) -- solves op(A)X = b\n where op(A) = A^T if this flag is \"True\", and op(A) = A if\n it is \"False\". Default: \"False\".\n * unitriangular (bool, optional) -- whether A is unit\n triangular. If True, the diagonal elements of A are assumed to\n be 1 and not referenced from A. Default: \"False\".", "source": "https://pytorch.org/docs/stable/generated/torch.triangular_solve.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out ((Tensor, Tensor), optional) -- tuple of\n two tensors to write the output to. Ignored if None. Default:\n None.\n Returns:\n A namedtuple (solution, cloned_coefficient) where\n cloned_coefficient is a clone of A and solution is the\n solution X to AX = b (or whatever variant of the system of\n equations, depending on the keyword arguments.)\n Examples:\n >>> A = torch.randn(2, 2).triu()\n >>> A\n tensor([[ 1.1527, -1.0753],\n [ 0.0000, 0.7986]])\n >>> b = torch.randn(2, 3)\n >>> b\n tensor([[-0.0210, 2.3513, -1.5492],\n [ 1.5429, 0.7403, -1.0243]])\n >>> torch.triangular_solve(b, A)\n torch.return_types.triangular_solve(\n solution=tensor([[ 1.7841, 2.9046, -2.5405],\n [ 1.9320, 0.9270, -1.2826]]),\n cloned_coefficient=tensor([[ 1.1527, -1.0753],\n [ 0.0000, 0.7986]]))", "source": "https://pytorch.org/docs/stable/generated/torch.triangular_solve.html", "category": "pytorch docs"}
{"text": "torch.frombuffertorch.frombuffer(buffer, *, dtype, count=- 1, offset=0, requires_grad=False) -> Tensor\n Creates a 1-dimensional \"Tensor\" from an object that implements the\n Python buffer protocol.\n Skips the first \"offset\" bytes in the buffer, and interprets the\n rest of the raw bytes as a 1-dimensional tensor of type \"dtype\"\n with \"count\" elements.\n Note that either of the following must be true:\n 1. \"count\" is a positive non-zero number, and the total number of\n bytes in the buffer is less than \"offset\" plus \"count\" times the\n size (in bytes) of \"dtype\".\n 2. \"count\" is negative, and the length (number of bytes) of the\n buffer subtracted by the \"offset\" is a multiple of the size (in\n bytes) of \"dtype\".\n The returned tensor and buffer share the same memory. Modifications\n to the tensor will be reflected in the buffer and vice versa. The\n returned tensor is not resizable.\n Note:\n This function increments the reference count for the object that", "source": "https://pytorch.org/docs/stable/generated/torch.frombuffer.html", "category": "pytorch docs"}
{"text": "owns the shared memory. Therefore, such memory will not be\n deallocated before the returned tensor goes out of scope.\n Warning:\n This function's behavior is undefined when passed an object\n implementing the buffer protocol whose data is not on the CPU.\n Doing so is likely to cause a segmentation fault.\n Warning:\n This function does not try to infer the \"dtype\" (hence, it is not\n optional). Passing a different \"dtype\" than its source may result\n in unexpected behavior.\n Parameters:\n buffer (object) -- a Python object that exposes the buffer\n interface.\n Keyword Arguments:\n * dtype (\"torch.dtype\") -- the desired data type of returned\n tensor.\n * count (int, optional) -- the number of desired\n elements to be read. If negative, all the elements (until the\n end of the buffer) will be read. Default: -1.\n * offset (int, optional) -- the number of bytes to", "source": "https://pytorch.org/docs/stable/generated/torch.frombuffer.html", "category": "pytorch docs"}
{"text": "skip at the start of the buffer. Default: 0.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Example:\n >>> import array\n >>> a = array.array('i', [1, 2, 3])\n >>> t = torch.frombuffer(a, dtype=torch.int32)\n >>> t\n tensor([ 1, 2, 3])\n >>> t[0] = -1\n >>> a\n array([-1, 2, 3])\n >>> # Interprets the signed char bytes as 32-bit integers.\n >>> # Each 4 signed char elements will be interpreted as\n >>> # 1 signed 32-bit integer.\n >>> import array\n >>> a = array.array('b', [-1, 0, 0, 0])\n >>> torch.frombuffer(a, dtype=torch.int32)\n tensor([255], dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.frombuffer.html", "category": "pytorch docs"}
{"text": "StandaloneModuleConfigEntryclass torch.ao.quantization.fx.custom_config.StandaloneModuleConfigEntry(qconfig_mapping: 'Optional[QConfigMapping]', example_inputs: 'Tuple[Any, ...]', prepare_custom_config: 'Optional[PrepareCustomConfig]', backend_config: 'Optional[BackendConfig]')", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.StandaloneModuleConfigEntry.html", "category": "pytorch docs"}
{"text": "torch.foreach_erf_torch._foreach_erf(self: List[Tensor]) -> None\n Apply \"torch.erf()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_erf_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.detTensor.det() -> Tensor\n See \"torch.det()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.det.html", "category": "pytorch docs"}
{"text": "torch.autograd.Function.forwardstatic Function.forward(ctx, args, kwargs)\n This function is to be overridden by all subclasses. There are two\n ways to define forward:\n Usage 1 (Combined forward and ctx):\n @staticmethod\n def forward(ctx: Any, args: Any, kwargs: Any) -> Any:\n pass\n * It must accept a context ctx as the first argument, followed by\n any number of arguments (tensors or other types).\n * See Combined or separate forward() and setup_context() for more\n details\n Usage 2 (Separate forward and ctx):\n @staticmethod\n def forward(*args: Any, kwargs: Any) -> Any:\n pass\n @staticmethod\n def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:\n pass\n * The forward no longer accepts a ctx argument.\n * Instead, you must also override the\n \"torch.autograd.Function.setup_context()\" staticmethod to handle\n setting up the \"ctx\" object. \"output\" is the output of the", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.forward.html", "category": "pytorch docs"}
{"text": "forward, \"inputs\" are a Tuple of inputs to the forward.\n * See Extending torch.autograd for more details\n The context can be used to store arbitrary data that can be then\n retrieved during the backward pass. Tensors should not be stored\n directly on ctx (though this is not currently enforced for\n backward compatibility). Instead, tensors should be saved either\n with \"ctx.save_for_backward()\" if they are intended to be used in\n \"backward\" (equivalently, \"vjp\") or \"ctx.save_for_forward()\" if\n they are intended to be used for in \"jvp\".\n Return type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.forward.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.max_unpool3dtorch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None)\n Computes a partial inverse of \"MaxPool3d\".\n See \"MaxUnpool3d\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool3d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.add_Tensor.add_(other, *, alpha=1) -> Tensor\n In-place version of \"add()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.add_.html", "category": "pytorch docs"}
{"text": "PrepareCustomConfigclass torch.ao.quantization.fx.custom_config.PrepareCustomConfig\n Custom configuration for \"prepare_fx()\" and \"prepare_qat_fx()\".\n Example usage:\n prepare_custom_config = PrepareCustomConfig() .set_standalone_module_name(\"module1\", qconfig_mapping, example_inputs, child_prepare_custom_config, backend_config) .set_standalone_module_class(MyStandaloneModule, qconfig_mapping, example_inputs, child_prepare_custom_config, backend_config) .set_float_to_observed_mapping(FloatCustomModule, ObservedCustomModule) .set_non_traceable_module_names([\"module2\", \"module3\"]) .set_non_traceable_module_classes([NonTraceableModule1, NonTraceableModule2]) .set_input_quantized_indexes([0]) .set_output_quantized_indexes([0]) .set_preserved_attributes([\"attr1\", \"attr2\"])\n classmethod from_dict(prepare_custom_config_dict)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"}
{"text": "Create a \"PrepareCustomConfig\" from a dictionary with the\n following items:\n \"standalone_module_name\": a list of (module_name,\n qconfig_mapping, example_inputs, child_prepare_custom_config,\n backend_config) tuples\n \"standalone_module_class\" a list of (module_class,\n qconfig_mapping, example_inputs, child_prepare_custom_config,\n backend_config) tuples\n \"float_to_observed_custom_module_class\": a nested dictionary\n mapping from quantization mode to an inner mapping from float\n module classes to observed module classes, e.g. {\"static\":\n {FloatCustomModule: ObservedCustomModule}}\n \"non_traceable_module_name\": a list of modules names that are\n not symbolically traceable \"non_traceable_module_class\": a\n list of module classes that are not symbolically traceable\n \"input_quantized_idxs\": a list of indexes of graph inputs\n that should be quantized \"output_quantized_idxs\": a list of", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"}
{"text": "indexes of graph outputs that should be quantized\n \"preserved_attributes\": a list of attributes that persist\n even if they are not used in \"forward\"\n This function is primarily for backward compatibility and may be\n removed in the future.\n Return type:\n PrepareCustomConfig\n set_float_to_observed_mapping(float_class, observed_class, quant_type=QuantType.STATIC)\n Set the mapping from a custom float module class to a custom\n observed module class.\n The observed module class must have a \"from_float\" class method\n that converts the float module class to the observed module\n class. This is currently only supported for static quantization.\n Return type:\n PrepareCustomConfig\n set_input_quantized_indexes(indexes)\n Set the indexes of the inputs of the graph that should be\n quantized. Inputs are otherwise assumed to be in fp32 by default\n instead.\n Return type:\n PrepareCustomConfig", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"}
{"text": "Return type:\n PrepareCustomConfig\n set_non_traceable_module_classes(module_classes)\n Set the modules that are not symbolically traceable, identified\n by class.\n Return type:\n PrepareCustomConfig\n set_non_traceable_module_names(module_names)\n Set the modules that are not symbolically traceable, identified\n by name.\n Return type:\n PrepareCustomConfig\n set_output_quantized_indexes(indexes)\n Set the indexes of the outputs of the graph that should be\n quantized. Outputs are otherwise assumed to be in fp32 by\n default instead.\n Return type:\n PrepareCustomConfig\n set_preserved_attributes(attributes)\n Set the names of the attributes that will persist in the graph\n module even if they are not used in the model's \"forward\"\n method.\n Return type:\n PrepareCustomConfig\n set_standalone_module_class(module_class, qconfig_mapping, example_inputs, prepare_custom_config, backend_config)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"}
{"text": "Set the configuration for running a standalone module identified\n by \"module_class\".\n If \"qconfig_mapping\" is None, the parent \"qconfig_mapping\" will\n be used instead. If \"prepare_custom_config\" is None, an empty\n \"PrepareCustomConfig\" will be used. If \"backend_config\" is None,\n the parent \"backend_config\" will be used instead.\n Return type:\n PrepareCustomConfig\n set_standalone_module_name(module_name, qconfig_mapping, example_inputs, prepare_custom_config, backend_config)\n Set the configuration for running a standalone module identified\n by \"module_name\".\n If \"qconfig_mapping\" is None, the parent \"qconfig_mapping\" will\n be used instead. If \"prepare_custom_config\" is None, an empty\n \"PrepareCustomConfig\" will be used. If \"backend_config\" is None,\n the parent \"backend_config\" will be used instead.\n Return type:\n PrepareCustomConfig\n to_dict()\n Convert this \"PrepareCustomConfig\" to a dictionary with the", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"}
{"text": "items described in \"from_dict()\".\n Return type:\n Dict[str, Any]", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"}
{"text": "torch.renormtorch.renorm(input, p, dim, maxnorm, , out=None) -> Tensor\n Returns a tensor where each sub-tensor of \"input\" along dimension\n \"dim\" is normalized such that the p-norm of the sub-tensor is\n lower than the value \"maxnorm\"\n Note:\n If the norm of a row is lower than maxnorm, the row is\n unchanged\n Parameters:\n * input (Tensor) -- the input tensor.\n * p (float) -- the power for the norm computation\n * dim (int) -- the dimension to slice over to get the sub-\n tensors\n * maxnorm (float) -- the maximum norm to keep each sub-\n tensor under\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> x = torch.ones(3, 3)\n >>> x[1].fill_(2)\n tensor([ 2., 2., 2.])\n >>> x[2].fill_(3)\n tensor([ 3., 3., 3.])\n >>> x\n tensor([[ 1., 1., 1.],\n [ 2., 2., 2.],\n [ 3., 3., 3.]])\n >>> torch.renorm(x, 1, 0, 5)", "source": "https://pytorch.org/docs/stable/generated/torch.renorm.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.renorm(x, 1, 0, 5)\n tensor([[ 1.0000, 1.0000, 1.0000],\n [ 1.6667, 1.6667, 1.6667],\n [ 1.6667, 1.6667, 1.6667]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.renorm.html", "category": "pytorch docs"}
{"text": "torch._foreach_costorch._foreach_cos(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.cos()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_cos.html", "category": "pytorch docs"}
{"text": "torch.Tensor.numelTensor.numel() -> int\n See \"torch.numel()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.numel.html", "category": "pytorch docs"}
{"text": "CosineSimilarityclass torch.nn.CosineSimilarity(dim=1, eps=1e-08)\n Returns cosine similarity between x_1 and x_2, computed along\n dim.\n \\text{similarity} = \\dfrac{x_1 \\cdot x_2}{\\max(\\Vert x_1 \\Vert\n _2 \\cdot \\Vert x_2 \\Vert _2, \\epsilon)}.\n Parameters:\n * dim (int, optional) -- Dimension where cosine\n similarity is computed. Default: 1\n * eps (float, optional) -- Small value to avoid\n division by zero. Default: 1e-8\n Shape:\n * Input1: (\\ast_1, D, \\ast_2) where D is at position dim\n * Input2: (\\ast_1, D, \\ast_2), same number of dimensions as x1,\n matching x1 size at dimension dim,\n and broadcastable with x1 at other dimensions.\n * Output: (\\ast_1, \\ast_2)\n Examples::\n >>> input1 = torch.randn(100, 128)\n >>> input2 = torch.randn(100, 128)\n >>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)\n >>> output = cos(input1, input2)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html", "category": "pytorch docs"}
{"text": "torch.Tensor.cosh_Tensor.cosh_() -> Tensor\n In-place version of \"cosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cosh_.html", "category": "pytorch docs"}
{"text": "torch.tensortorch.tensor(data, , dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor\n Constructs a tensor with no autograd history (also known as a \"leaf\n tensor\", see Autograd mechanics) by copying \"data\".\n Warning:\n When working with tensors prefer using \"torch.Tensor.clone()\",\n \"torch.Tensor.detach()\", and \"torch.Tensor.requires_grad_()\" for\n readability. Letting t be a tensor, \"torch.tensor(t)\" is\n equivalent to \"t.clone().detach()\", and \"torch.tensor(t,\n requires_grad=True)\" is equivalent to\n \"t.clone().detach().requires_grad_(True)\".\n See also:\n \"torch.as_tensor()\" preserves autograd history and avoids copies\n where possible. \"torch.from_numpy()\" creates a tensor that shares\n storage with a NumPy array.\n Parameters:\n data (array_like*) -- Initial data for the tensor. Can be a\n list, tuple, NumPy \"ndarray\", scalar, and other types.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.tensor.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", infers data type from\n \"data\".\n * device (\"torch.device\", optional) -- the device of the\n constructed tensor. If None and data is a tensor then the\n device of data is used. If None and data is not a tensor then\n the result tensor is constructed on the CPU.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * pin_memory (bool, optional) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n Example:\n >>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])\n tensor([[ 0.1000, 1.2000],\n [ 2.2000, 3.1000],\n [ 4.9000, 5.2000]])\n >>> torch.tensor([0, 1]) # Type inference on data\n tensor([ 0, 1])", "source": "https://pytorch.org/docs/stable/generated/torch.tensor.html", "category": "pytorch docs"}
{"text": "tensor([ 0, 1])\n >>> torch.tensor([[0.11111, 0.222222, 0.3333333]],\n ... dtype=torch.float64,\n ... device=torch.device('cuda:0')) # creates a double tensor on a CUDA device\n tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0')\n >>> torch.tensor(3.14159) # Create a zero-dimensional (scalar) tensor\n tensor(3.1416)\n >>> torch.tensor([]) # Create an empty tensor (of size (0,))\n tensor([])", "source": "https://pytorch.org/docs/stable/generated/torch.tensor.html", "category": "pytorch docs"}
{"text": "Foldclass torch.nn.Fold(output_size, kernel_size, dilation=1, padding=0, stride=1)\n Combines an array of sliding local blocks into a large containing\n tensor.\n Consider a batched \"input\" tensor containing sliding local blocks,\n e.g., patches of images, of shape (N, C \\times\n \\prod(\\text{kernel_size}), L), where N is batch dimension, C\n \\times \\prod(\\text{kernel_size}) is the number of values within a\n block (a block has \\prod(\\text{kernel_size}) spatial locations\n each containing a C-channeled vector), and L is the total number of\n blocks. (This is exactly the same specification as the output shape\n of \"Unfold\".) This operation combines these local blocks into the\n large \"output\" tensor of shape (N, C, \\text{output_size}[0],\n \\text{output_size}[1], \\dots) by summing the overlapping values.\n Similar to \"Unfold\", the arguments must satisfy\n L = \\prod_d \\left\\lfloor\\frac{\\text{output_size}[d] + 2 \\times\n \\text{padding}[d] % - \\text{dilation}[d] \\times", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"}
{"text": "(\\text{kernel_size}[d] - 1) - 1}{\\text{stride}[d]} +\n 1\\right\\rfloor,\n where d is over all spatial dimensions.\n * \"output_size\" describes the spatial shape of the large containing\n tensor of the sliding local blocks. It is useful to resolve the\n ambiguity when multiple input shapes map to same number of\n sliding blocks, e.g., with \"stride > 0\".\n The \"padding\", \"stride\" and \"dilation\" arguments specify how the\n sliding blocks are retrieved.\n * \"stride\" controls the stride for the sliding blocks.\n * \"padding\" controls the amount of implicit zero-paddings on both\n sides for \"padding\" number of points for each dimension before\n reshaping.\n * \"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n Parameters:\n * output_size (int or tuple) -- the shape of the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"}
{"text": "spatial dimensions of the output (i.e., \"output.sizes()[2:]\")\n * kernel_size (int or tuple) -- the size of the\n sliding blocks\n * dilation (int or tuple, optional) -- a parameter\n that controls the stride of elements within the neighborhood.\n Default: 1\n * padding (int or tuple, optional) -- implicit\n zero padding to be added on both sides of input. Default: 0\n * stride (int or tuple) -- the stride of the sliding\n blocks in the input spatial dimensions. Default: 1\n * If \"output_size\", \"kernel_size\", \"dilation\", \"padding\" or\n \"stride\" is an int or a tuple of length 1 then their values will\n be replicated across all spatial dimensions.\n * For the case of two output spatial dimensions this operation is\n sometimes called \"col2im\".\n Note:\n \"Fold\" calculates each combined value in the resulting large\n tensor by summing all values from all containing blocks. \"Unfold\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"}
{"text": "extracts the values in the local blocks by copying from the large\n tensor. So, if the blocks overlap, they are not inverses of each\n other.In general, folding and unfolding operations are related as\n follows. Consider \"Fold\" and \"Unfold\" instances created with the\n same parameters:\n >>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)\n >>> fold = nn.Fold(output_size=..., fold_params)\n >>> unfold = nn.Unfold(fold_params)\n Then for any (supported) \"input\" tensor the following equality\n holds:\n fold(unfold(input)) == divisor * input\n where \"divisor\" is a tensor that depends only on the shape and\n dtype of the \"input\":\n >>> input_ones = torch.ones(input.shape, dtype=input.dtype)\n >>> divisor = fold(unfold(input_ones))\n When the \"divisor\" tensor contains no zero elements, then \"fold\"\n and \"unfold\" operations are inverses of each other (up to\n constant divisor).\n Warning:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"}
{"text": "constant divisor).\n Warning:\n Currently, only unbatched (3D) or batched (4D) image-like output\n tensors are supported.\n Shape:\n * Input: (N, C \\times \\prod(\\text{kernel_size}), L) or (C\n \\times \\prod(\\text{kernel_size}), L)\n * Output: (N, C, \\text{output_size}[0], \\text{output_size}[1],\n \\dots) or (C, \\text{output_size}[0], \\text{output_size}[1],\n \\dots) as described above\n Examples:\n >>> fold = nn.Fold(output_size=(4, 5), kernel_size=(2, 2))\n >>> input = torch.randn(1, 3 * 2 * 2, 12)\n >>> output = fold(input)\n >>> output.size()\n torch.Size([1, 3, 4, 5])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"}
{"text": "torch.nanmediantorch.nanmedian(input) -> Tensor\n Returns the median of the values in \"input\", ignoring \"NaN\" values.\n This function is identical to \"torch.median()\" when there are no\n \"NaN\" values in \"input\". When \"input\" has one or more \"NaN\" values,\n \"torch.median()\" will always return \"NaN\", while this function will\n return the median of the non-\"NaN\" elements in \"input\". If all the\n elements in \"input\" are \"NaN\" it will also return \"NaN\".\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> a = torch.tensor([1, float('nan'), 3, 2])\n >>> a.median()\n tensor(nan)\n >>> a.nanmedian()\n tensor(2.)\n torch.nanmedian(input, dim=- 1, keepdim=False, *, out=None)\n Returns a namedtuple \"(values, indices)\" where \"values\" contains\n the median of each row of \"input\" in the dimension \"dim\", ignoring\n \"NaN\" values, and \"indices\" contains the index of the median values\n found in the dimension \"dim\".", "source": "https://pytorch.org/docs/stable/generated/torch.nanmedian.html", "category": "pytorch docs"}
{"text": "found in the dimension \"dim\".\n This function is identical to \"torch.median()\" when there are no\n \"NaN\" values in a reduced row. When a reduced row has one or more\n \"NaN\" values, \"torch.median()\" will always reduce it to \"NaN\",\n while this function will reduce it to the median of the non-\"NaN\"\n elements. If all the elements in a reduced row are \"NaN\" then it\n will be reduced to \"NaN\", too.\n Parameters:\n * input (Tensor) -- the input tensor.\n * dim (int) -- the dimension to reduce.\n * keepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n Keyword Arguments:\n out ((Tensor, Tensor), optional) -- The first\n tensor will be populated with the median values and the second\n tensor, which must have dtype long, with their indices in the\n dimension \"dim\" of \"input\".\n Example:\n >>> a = torch.tensor([[2, 3, 1], [float('nan'), 1, float('nan')]])\n >>> a\n tensor([[2., 3., 1.],", "source": "https://pytorch.org/docs/stable/generated/torch.nanmedian.html", "category": "pytorch docs"}
{"text": "\n\n\na\n tensor([[2., 3., 1.],\n [nan, 1., nan]])\n >>> a.median(0)\n torch.return_types.median(values=tensor([nan, 1., nan]), indices=tensor([1, 1, 1]))\n >>> a.nanmedian(0)\n torch.return_types.nanmedian(values=tensor([2., 1., 1.]), indices=tensor([0, 1, 0]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nanmedian.html", "category": "pytorch docs"}
{"text": "EmbeddingBagclass torch.ao.nn.quantized.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='sum', sparse=False, _weight=None, include_last_offset=False, dtype=torch.quint8)\n A quantized EmbeddingBag module with quantized packed weights as\n inputs. We adopt the same interface as torch.nn.EmbeddingBag,\n please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.EmbeddingBag for\n documentation.\n Similar to \"EmbeddingBag\", attributes will be randomly initialized\n at module creation time and will be overwritten later\n Variables:\n weight (Tensor) -- the non-learnable quantized weights of\n the module of shape (\\text{num_embeddings},\n \\text{embedding_dim}).\n Examples::\n >>> m = nn.quantized.EmbeddingBag(num_embeddings=10, embedding_dim=12, include_last_offset=True, mode='sum')", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "\n\n\nindices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8, 6, 6, 9, 1, 6, 8, 8, 3, 2, 3, 6, 3, 6, 5, 7, 0, 8, 4, 6, 5, 8, 2, 3])\n >>> offsets = torch.tensor([0, 19, 20, 28, 28, 32])\n >>> output = m(indices, offsets)\n >>> print(output.size())\n torch.Size([5, 12])\n classmethod from_float(mod)\n Create a quantized embedding_bag module from a float module\n Parameters:\n mod (Module) -- a float module, either produced by\n torch.ao.quantization utilities or provided by user\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.EmbeddingBag.html", "category": "pytorch docs"}
{"text": "torch.linalg.slogdettorch.linalg.slogdet(A, , out=None)\n Computes the sign and natural logarithm of the absolute value of\n the determinant of a square matrix.\n For complex \"A\", it returns the sign and the natural logarithm of\n the modulus of the determinant, that is, a logarithmic polar\n decomposition of the determinant.\n The determinant can be recovered as sign * exp(logabsdet). When a\n matrix has a determinant of zero, it returns (0, -inf).\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n See also:\n \"torch.linalg.det()\" computes the determinant of square matrices.\n Parameters:\n A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions.\n Keyword Arguments:\n out (tuple, optional) -- output tuple of two tensors.\n Ignored if None. Default: None.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.slogdet.html", "category": "pytorch docs"}
{"text": "Returns:\n A named tuple (sign, logabsdet).\n sign will have the same dtype as \"A\".\n logabsdet will always be real-valued, even when \"A\" is\n complex.\n Examples:\n >>> A = torch.randn(3, 3)\n >>> A\n tensor([[ 0.0032, -0.2239, -1.1219],\n [-0.6690, 0.1161, 0.4053],\n [-1.6218, -0.9273, -0.0082]])\n >>> torch.linalg.det(A)\n tensor(-0.7576)\n >>> torch.logdet(A)\n tensor(nan)\n >>> torch.linalg.slogdet(A)\n torch.return_types.linalg_slogdet(sign=tensor(-1.), logabsdet=tensor(-0.2776))", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.slogdet.html", "category": "pytorch docs"}
{"text": "torch.float_powertorch.float_power(input, exponent, , out=None) -> Tensor\n Raises \"input\" to the power of \"exponent\", elementwise, in double\n precision. If neither input is complex returns a \"torch.float64\"\n tensor, and if one or more inputs is complex returns a\n \"torch.complex128\" tensor.\n Note:\n This function always computes in double precision, unlike\n \"torch.pow()\", which implements more typical type promotion. This\n is useful when the computation needs to be performed in a wider\n or more precise dtype, or the results of the computation may\n contain fractional values not representable in the input dtypes,\n like when an integer base is raised to a negative integer\n exponent.\n Parameters:\n * input (Tensor or Number) -- the base value(s)\n * exponent (Tensor or Number) -- the exponent value(s)\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:", "source": "https://pytorch.org/docs/stable/generated/torch.float_power.html", "category": "pytorch docs"}
{"text": "Example:\n >>> a = torch.randint(10, (4,))\n >>> a\n tensor([6, 4, 7, 1])\n >>> torch.float_power(a, 2)\n tensor([36., 16., 49., 1.], dtype=torch.float64)\n >>> a = torch.arange(1, 5)\n >>> a\n tensor([ 1, 2, 3, 4])\n >>> exp = torch.tensor([2, -3, 4, -5])\n >>> exp\n tensor([ 2, -3, 4, -5])\n >>> torch.float_power(a, exp)\n tensor([1.0000e+00, 1.2500e-01, 8.1000e+01, 9.7656e-04], dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.float_power.html", "category": "pytorch docs"}
{"text": "ConvReLU2dclass torch.ao.nn.intrinsic.ConvReLU2d(conv, relu)\n This is a sequential container which calls the Conv2d and ReLU\n modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU2d.html", "category": "pytorch docs"}
{"text": "torch.istfttorch.istft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False) -> Tensor:\n Inverse short time Fourier Transform. This is expected to be the\n inverse of \"stft()\".\n It has the same parameters (+ additional optional parameter of\n \"length\") and it should return the least squares estimation of the\n original signal. The algorithm will check using the NOLA condition\n ( nonzero overlap).\n Important consideration in the parameters \"window\" and \"center\" so\n that the envelop created by the summation of all the windows is\n never zero at certain point in time. Specifically,\n \\sum_{t=-\\infty}^{\\infty} |w|^2[n-t\\times hop_length] \\cancel{=}\n 0.\n Since \"stft()\" discards elements at the end of the signal if they\n do not fit in a frame, \"istft\" may return a shorter signal than the\n original signal (can occur if \"center\" is False since the signal", "source": "https://pytorch.org/docs/stable/generated/torch.istft.html", "category": "pytorch docs"}
{"text": "isn't padded). If length is given in the arguments and is longer\n than expected, \"istft\" will pad zeros to the end of the returned\n signal.\n If \"center\" is \"True\", then there will be padding e.g.\n \"'constant'\", \"'reflect'\", etc. Left padding can be trimmed off\n exactly because they can be calculated but right padding cannot be\n calculated without additional information.\n Example: Suppose the last window is: \"[17, 18, 0, 0, 0]\" vs \"[18,\n 0, 0, 0, 0]\"\n The \"n_fft\", \"hop_length\", \"win_length\" are all the same which\n prevents the calculation of right padding. These additional values\n could be zeros or a reflection of the signal so providing \"length\"\n could be useful. If \"length\" is \"None\" then padding will be\n aggressively removed (some loss of signal).\n [1] D. W. Griffin and J. S. Lim, \"Signal estimation from modified\n short-time Fourier transform,\" IEEE Trans. ASSP, vol.32, no.2,\n pp.236-243, Apr. 1984.\n Parameters:\n * input (Tensor) --", "source": "https://pytorch.org/docs/stable/generated/torch.istft.html", "category": "pytorch docs"}
{"text": "Parameters:\n * input (Tensor) --\n The input tensor. Expected to be in the format of \"stft()\",\n output. That is a complex tensor of shape (\"channel\",\n \"fft_size\", \"n_frame\"), where the \"channel\" dimension is\n optional.\n Changed in version 2.0: Real datatype inputs are no longer\n supported. Input must now have a complex datatype, as returned\n by \"stft(..., return_complex=True)\".\n * n_fft (int) -- Size of Fourier transform\n * hop_length (Optional[int]) -- The distance between\n neighboring sliding window frames. (Default: \"n_fft // 4\")\n * win_length (Optional[int]) -- The size of window\n frame and STFT filter. (Default: \"n_fft\")\n * window (Optional[torch.Tensor]) -- The optional\n window function. (Default: \"torch.ones(win_length)\")\n * center (bool) -- Whether \"input\" was padded on both\n sides so that the t-th frame is centered at time t \\times", "source": "https://pytorch.org/docs/stable/generated/torch.istft.html", "category": "pytorch docs"}
{"text": "\\text{hop_length}. (Default: \"True\")\n * normalized (bool) -- Whether the STFT was normalized.\n (Default: \"False\")\n * onesided (Optional[bool]) -- Whether the STFT was\n onesided. (Default: \"True\" if \"n_fft != fft_size\" in the input\n size)\n * length (Optional[int]) -- The amount to trim the\n signal by (i.e. the original signal length). (Default: whole\n signal)\n * return_complex (Optional[bool]) -- Whether the\n output should be complex, or if the input should be assumed to\n derive from a real signal and window. Note that this is\n incompatible with \"onesided=True\". (Default: \"False\")\n Returns:\n Least squares estimation of the original signal of size (...,\n signal_length)\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.istft.html", "category": "pytorch docs"}
{"text": "Softmaxclass torch.nn.Softmax(dim=None)\n Applies the Softmax function to an n-dimensional input Tensor\n rescaling them so that the elements of the n-dimensional output\n Tensor lie in the range [0,1] and sum to 1.\n Softmax is defined as:\n \\text{Softmax}(x_{i}) = \\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}\n When the input Tensor is a sparse tensor then the unspecified\n values are treated as \"-inf\".\n Shape:\n * Input: () where *** means, any number of additional\n dimensions\n * Output: (), same shape as the input\n Returns:\n a Tensor of the same dimension and shape as the input with\n values in the range [0, 1]\n Parameters:\n dim (int) -- A dimension along which Softmax will be\n computed (so every slice along dim will sum to 1).\n Return type:\n None\n Note:\n This module doesn't work directly with NLLLoss, which expects the\n Log to be computed between the Softmax and itself. Use", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html", "category": "pytorch docs"}
{"text": "LogSoftmax instead (it's faster and has better numerical\n properties).\n Examples:\n >>> m = nn.Softmax(dim=1)\n >>> input = torch.randn(2, 3)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html", "category": "pytorch docs"}
{"text": "torch.empty_liketorch.empty_like(input, , dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\n Returns an uninitialized tensor with the same size as \"input\".\n \"torch.empty_like(input)\" is equivalent to\n \"torch.empty(input.size(), dtype=input.dtype, layout=input.layout,\n device=input.device)\".\n Parameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n * device* (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".", "source": "https://pytorch.org/docs/stable/generated/torch.empty_like.html", "category": "pytorch docs"}
{"text": "\"input\".\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n * memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n Example:\n >>> a=torch.empty((2,3), dtype=torch.int32, device = 'cuda')\n >>> torch.empty_like(a)\n tensor([[0, 0, 0],\n [0, 0, 0]], device='cuda:0', dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.empty_like.html", "category": "pytorch docs"}
{"text": "torch.linalg.matrix_exptorch.linalg.matrix_exp(A) -> Tensor\n Computes the matrix exponential of a square matrix.\n Letting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, this function\n computes the matrix exponential of A \\in \\mathbb{K}^{n \\times\n n}, which is defined as\n \\mathrm{matrix_exp}(A) = \\sum_{k=0}^\\infty \\frac{1}{k!}A^k \\in\n \\mathbb{K}^{n \\times n}.\n If the matrix A has eigenvalues \\lambda_i \\in \\mathbb{C}, the\n matrix \\mathrm{matrix_exp}(A) has eigenvalues e^{\\lambda_i} \\in\n \\mathbb{C}.\n Supports input of bfloat16, float, double, cfloat and cdouble\n dtypes. Also supports batches of matrices, and if \"A\" is a batch of\n matrices then the output has the same batch dimensions.\n Parameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions.\n Example:\n >>> A = torch.empty(2, 2, 2)\n >>> A[0, :, :] = torch.eye(2, 2)\n >>> A[1, :, :] = 2 * torch.eye(2, 2)\n >>> A", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_exp.html", "category": "pytorch docs"}
{"text": "\n\n\nA\n tensor([[[1., 0.],\n [0., 1.]],\n [[2., 0.],\n [0., 2.]]])\n >>> torch.linalg.matrix_exp(A)\n tensor([[[2.7183, 0.0000],\n [0.0000, 2.7183]],\n [[7.3891, 0.0000],\n [0.0000, 7.3891]]])\n >>> import math\n >>> A = torch.tensor([[0, math.pi/3], [-math.pi/3, 0]]) # A is skew-symmetric\n >>> torch.linalg.matrix_exp(A) # matrix_exp(A) = [[cos(pi/3), sin(pi/3)], [-sin(pi/3), cos(pi/3)]]\n tensor([[ 0.5000, 0.8660],\n [-0.8660, 0.5000]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_exp.html", "category": "pytorch docs"}
{"text": "torch.jit.unusedtorch.jit.unused(fn)\n This decorator indicates to the compiler that a function or method\n should be ignored and replaced with the raising of an exception.\n This allows you to leave code in your model that is not yet\n TorchScript compatible and still export your model.\n Example (using \"@torch.jit.unused\" on a method):\n import torch\n import torch.nn as nn\n class MyModule(nn.Module):\n def init(self, use_memory_efficient):\n super(MyModule, self).init()\n self.use_memory_efficient = use_memory_efficient\n @torch.jit.unused\n def memory_efficient(self, x):\n import pdb\n pdb.set_trace()\n return x + 10\n def forward(self, x):\n # Use not-yet-scriptable memory efficient mode\n if self.use_memory_efficient:\n return self.memory_efficient(x)\n else:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.unused.html", "category": "pytorch docs"}
{"text": "else:\n return x + 10\n m = torch.jit.script(MyModule(use_memory_efficient=False))\n m.save(\"m.pt\")\n m = torch.jit.script(MyModule(use_memory_efficient=True))\n # exception raised\n m(torch.rand(100))", "source": "https://pytorch.org/docs/stable/generated/torch.jit.unused.html", "category": "pytorch docs"}
{"text": "FXFloatFunctionalclass torch.ao.nn.quantized.FXFloatFunctional\n module to replace FloatFunctional module before FX graph mode\n quantization, since activation_post_process will be inserted in top\n level module directly\n Valid operation names:\n * add\n * cat\n * mul\n * add_relu\n * add_scalar\n * mul_scalar", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.FXFloatFunctional.html", "category": "pytorch docs"}
{"text": "fuse_modulesclass torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=, fuse_custom_config_dict=None)\n Fuses a list of modules into a single module\n Fuses only the following sequence of modules: conv, bn conv, bn,\n relu conv, relu linear, relu bn, relu All other sequences are left\n unchanged. For these sequences, replaces the first item in the list\n with the fused module, replacing the rest of the modules with\n identity.\n Parameters:\n * model -- Model containing the modules to be fused\n * modules_to_fuse -- list of list of module names to fuse.\n Can also be a list of strings if there is only a single list\n of modules to fuse.\n * inplace -- bool specifying if fusion happens in place on\n the model, by default a new model is returned\n * fuser_func -- Function that takes in a list of modules and\n outputs a list of fused modules of the same length. For", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html", "category": "pytorch docs"}
{"text": "example, fuser_func([convModule, BNModule]) returns the list\n [ConvBNModule, nn.Identity()] Defaults to\n torch.ao.quantization.fuse_known_modules\n * fuse_custom_config_dict -- custom configuration for fusion\n # Example of fuse_custom_config_dict\n fuse_custom_config_dict = {\n # Additional fuser_method mapping\n \"additional_fuser_method_mapping\": {\n (torch.nn.Conv2d, torch.nn.BatchNorm2d): fuse_conv_bn\n },\n }\n Returns:\n model with fused modules. A new copy is created if inplace=True.\n Examples:\n >>> m = M().eval()\n >>> # m is a module containing the sub-modules below\n >>> modules_to_fuse = [ ['conv1', 'bn1', 'relu1'], ['submodule.conv', 'submodule.relu']]\n >>> fused_m = torch.ao.quantization.fuse_modules(m, modules_to_fuse)\n >>> output = fused_m(input)\n >>> m = M().eval()\n >>> # Alternately provide a single list of modules to fuse\n >>> modules_to_fuse = ['conv1', 'bn1', 'relu1']", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html", "category": "pytorch docs"}
{"text": "\n\n\nfused_m = torch.ao.quantization.fuse_modules(m, modules_to_fuse)\n >>> output = fused_m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html", "category": "pytorch docs"}
{"text": "torch.roundtorch.round(input, , decimals=0, out=None) -> Tensor\n Rounds elements of \"input\" to the nearest integer.\n For integer inputs, follows the array-api convention of returning a\n copy of the input tensor.\n Note:\n This function implements the \"round half to even\" to break ties\n when a number is equidistant from two integers (e.g. round(2.5)\n is 2).When the :attr:decimals argument is specified the\n algorithm used is similar to NumPy's around. This algorithm is\n fast but inexact and it can easily overflow for low precision\n dtypes. Eg. round(tensor([10000], dtype=torch.float16),\n decimals=3) is inf.\n See also:\n \"torch.ceil()\", which rounds up. \"torch.floor()\", which rounds\n down. \"torch.trunc()\", which rounds towards zero.\n Parameters:\n * input (Tensor) -- the input tensor.\n * decimals (int*) -- Number of decimal places to round to\n (default: 0). If decimals is negative, it specifies the number", "source": "https://pytorch.org/docs/stable/generated/torch.round.html", "category": "pytorch docs"}
{"text": "of positions to the left of the decimal point.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.round(torch.tensor((4.7, -2.3, 9.1, -7.7)))\n tensor([ 5., -2., 9., -8.])\n >>> # Values equidistant from two integers are rounded towards the\n >>> # the nearest even value (zero is treated as even)\n >>> torch.round(torch.tensor([-0.5, 0.5, 1.5, 2.5]))\n tensor([-0., 0., 2., 2.])\n >>> # A positive decimals argument rounds to the to that decimal place\n >>> torch.round(torch.tensor([0.1234567]), decimals=3)\n tensor([0.1230])\n >>> # A negative decimals argument rounds to the left of the decimal\n >>> torch.round(torch.tensor([1200.1234567]), decimals=-3)\n tensor([1000.])", "source": "https://pytorch.org/docs/stable/generated/torch.round.html", "category": "pytorch docs"}
{"text": "torch.is_tensortorch.is_tensor(obj)\n Returns True if obj is a PyTorch tensor.\n Note that this function is simply doing \"isinstance(obj, Tensor)\".\n Using that \"isinstance\" check is better for typechecking with mypy,\n and more explicit - so it's recommended to use that instead of\n \"is_tensor\".\n Parameters:\n obj (Object) -- Object to test\n Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> torch.is_tensor(x)\n True", "source": "https://pytorch.org/docs/stable/generated/torch.is_tensor.html", "category": "pytorch docs"}
{"text": "torch._foreach_sintorch._foreach_sin(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.sin()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sin.html", "category": "pytorch docs"}
{"text": "FractionalMaxPool2dclass torch.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)\n Applies a 2D fractional max pooling over an input signal composed\n of several input planes.\n Fractional MaxPooling is described in detail in the paper\n Fractional MaxPooling by Ben Graham\n The max-pooling operation is applied in kH \\times kW regions by a\n stochastic step size determined by the target output size. The\n number of output features is equal to the number of input planes.\n Parameters:\n * kernel_size (Union[int, Tuple[int,\n int]]) -- the size of the window to take a max over.\n Can be a single number k (for a square kernel of k x k) or a\n tuple (kh, kw)\n * output_size (Union[int, Tuple[int,\n int]]) -- the target output size of the image of the\n form oH x oW. Can be a tuple (oH, oW) or a single number", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html", "category": "pytorch docs"}
{"text": "oH for a square image oH x oH\n * output_ratio (Union[float, Tuple[float,\n float]]) -- If one wants to have an output size as a\n ratio of the input size, this option can be given. This has to\n be a number or tuple in the range (0, 1)\n * return_indices (bool) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to\n \"nn.MaxUnpool2d()\". Default: \"False\"\n Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where (H_{out}, W_{out})=\\text{output_size} or (H_{out},\n W_{out})=\\text{output_ratio} \\times (H_{in}, W_{in}).\n -[ Examples ]-\n\n\n\npool of square window of size=3, and target output size 13x12\nm = nn.FractionalMaxPool2d(3, output_size=(13, 12))\npool of square window and target output size being half of input image size\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html", "category": "pytorch docs"}
{"text": "\n\n\nm = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5))\ninput = torch.randn(20, 16, 50, 32)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html", "category": "pytorch docs"}
{"text": "torch.repeat_interleavetorch.repeat_interleave(input, repeats, dim=None, , output_size=None) -> Tensor\n Repeat elements of a tensor.\n Warning:\n This is different from \"torch.Tensor.repeat()\" but similar to\n \"numpy.repeat\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * repeats (Tensor or int) -- The number of repetitions\n for each element. repeats is broadcasted to fit the shape of\n the given axis.\n * dim (int, optional) -- The dimension along which to\n repeat values. By default, use the flattened input array, and\n return a flat output array.\n Keyword Arguments:\n output_size (int, optional*) -- Total output size for\n the given axis ( e.g. sum of repeats). If given, it will avoid\n stream synchronization needed to calculate output shape of the\n tensor.\n Returns:\n Repeated tensor which has the same shape as input, except along\n the given axis.", "source": "https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html", "category": "pytorch docs"}
{"text": "the given axis.\n Return type:\n Tensor\n Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> x.repeat_interleave(2)\n tensor([1, 1, 2, 2, 3, 3])\n >>> y = torch.tensor([[1, 2], [3, 4]])\n >>> torch.repeat_interleave(y, 2)\n tensor([1, 1, 2, 2, 3, 3, 4, 4])\n >>> torch.repeat_interleave(y, 3, dim=1)\n tensor([[1, 1, 1, 2, 2, 2],\n [3, 3, 3, 4, 4, 4]])\n >>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0)\n tensor([[1, 2],\n [3, 4],\n [3, 4]])\n >>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0, output_size=3)\n tensor([[1, 2],\n [3, 4],\n [3, 4]])\n torch.repeat_interleave(repeats, , output_size=None) -> Tensor\n If the repeats is tensor([n1, n2, n3, ...]), then the output\n will be tensor([0, 0, ..., 1, 1, ..., 2, 2, ..., ...]) where 0\n appears n1 times, 1 appears n2 times, 2 appears n3* times,\n etc.", "source": "https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html", "category": "pytorch docs"}
{"text": "torch.cuda.is_availabletorch.cuda.is_available()\n Returns a bool indicating if CUDA is currently available.\n Return type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.is_available.html", "category": "pytorch docs"}
{"text": "torch.Tensor.normTensor.norm(p='fro', dim=None, keepdim=False, dtype=None)\n See \"torch.norm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.norm.html", "category": "pytorch docs"}
{"text": "torch.Tensor.arccoshTensor.arccosh()\n acosh() -> Tensor\n See \"torch.arccosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arccosh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.nelementTensor.nelement() -> int\n Alias for \"numel()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nelement.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.relutorch.nn.functional.relu(input, inplace=False) -> Tensor\n Applies the rectified linear unit function element-wise. See \"ReLU\"\n for more details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.relu.html", "category": "pytorch docs"}
{"text": "torch.sym_maxtorch.sym_max(a, b)\n SymInt-aware utility for max().", "source": "https://pytorch.org/docs/stable/generated/torch.sym_max.html", "category": "pytorch docs"}
{"text": "torch.clamptorch.clamp(input, min=None, max=None, , out=None) -> Tensor\n Clamps all elements in \"input\" into the range [ \"min\", \"max\" ].\n Letting min_value and max_value be \"min\" and \"max\", respectively,\n this returns:\n y_i = \\min(\\max(x_i, \\text{min_value}_i), \\text{max_value}_i)\n If \"min\" is \"None\", there is no lower bound. Or, if \"max\" is \"None\"\n there is no upper bound.\n Note:\n If \"min\" is greater than \"max\" \"torch.clamp(..., min, max)\" sets\n all elements in \"input\" to the value of \"max\".\n Parameters:\n * input (Tensor) -- the input tensor.\n * min (Number or Tensor, optional) -- lower-bound\n of the range to be clamped to\n * max (Number or Tensor, optional) -- upper-bound\n of the range to be clamped to\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([-1.7120, 0.1734, -0.0478, -0.0922])", "source": "https://pytorch.org/docs/stable/generated/torch.clamp.html", "category": "pytorch docs"}
{"text": "tensor([-1.7120, 0.1734, -0.0478, -0.0922])\n >>> torch.clamp(a, min=-0.5, max=0.5)\n tensor([-0.5000, 0.1734, -0.0478, -0.0922])\n >>> min = torch.linspace(-1, 1, steps=4)\n >>> torch.clamp(a, min=min)\n tensor([-1.0000, 0.1734, 0.3333, 1.0000])", "source": "https://pytorch.org/docs/stable/generated/torch.clamp.html", "category": "pytorch docs"}
{"text": "torch.Tensor.modeTensor.mode(dim=None, keepdim=False)\n See \"torch.mode()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mode.html", "category": "pytorch docs"}
{"text": "L1Unstructuredclass torch.nn.utils.prune.L1Unstructured(amount)\n Prune (currently unpruned) units in a tensor by zeroing out the\n ones with the lowest L1-norm.\n Parameters:\n amount (int or float) -- quantity of parameters to\n prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents the\n absolute number of parameters to prune.\n classmethod apply(module, name, amount, importance_scores=None)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * amount (int or float) -- quantity of parameters\n to prune. If \"float\", should be between 0.0 and 1.0 and", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html", "category": "pytorch docs"}
{"text": "represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as module parameter) used\n to compute mask for pruning. The values in this tensor\n indicate the importance of the corresponding elements in\n the parameter being pruned. If unspecified or None, the\n module parameter will be used in its place.\n apply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n Parameters:\n module (nn.Module) -- module containing the tensor to\n prune\n Returns:\n pruned version of the input tensor\n Return type:\n pruned_tensor (torch.Tensor)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html", "category": "pytorch docs"}
{"text": "pruned_tensor (torch.Tensor)\n prune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * t (torch.Tensor) -- tensor to prune (of same\n dimensions as \"default_mask\").\n * importance_scores (torch.Tensor) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * default_mask (torch.Tensor, optional) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html", "category": "pytorch docs"}
{"text": "Returns:\n pruned version of tensor \"t\".\n remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n Pruning itself is NOT undone or reversed!", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html", "category": "pytorch docs"}
{"text": "torch.cuda.set_devicetorch.cuda.set_device(device)\n Sets the current device.\n Usage of this function is discouraged in favor of \"device\". In most\n cases it's better to use \"CUDA_VISIBLE_DEVICES\" environmental\n variable.\n Parameters:\n device (torch.device or int) -- selected device. This\n function is a no-op if this argument is negative.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_device.html", "category": "pytorch docs"}
{"text": "torch.Tensor.i0Tensor.i0() -> Tensor\n See \"torch.i0()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.i0.html", "category": "pytorch docs"}
{"text": "torch.Tensor.orgqrTensor.orgqr(input2) -> Tensor\n See \"torch.orgqr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.orgqr.html", "category": "pytorch docs"}
{"text": "torch.Tensor.signbitTensor.signbit() -> Tensor\n See \"torch.signbit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.signbit.html", "category": "pytorch docs"}
{"text": "torch.Tensor.dequantizeTensor.dequantize() -> Tensor\n Given a quantized Tensor, dequantize it and return the dequantized\n float Tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dequantize.html", "category": "pytorch docs"}
{"text": "torch.fft.fft2torch.fft.fft2(input, s=None, dim=(- 2, - 1), norm=None, , out=None) -> Tensor\n Computes the 2 dimensional discrete Fourier transform of \"input\".\n Equivalent to \"fftn()\" but FFTs only the last two dimensions by\n default.\n Note:\n The Fourier domain representation of any real signal satisfies\n the Hermitian property: \"X[i, j] = conj(X[-i, -j])\". This\n function always returns all positive and negative frequency terms\n even though, for real inputs, half of these values are redundant.\n \"rfft2()\" returns the more compact one-sided representation where\n only the positive frequencies of the last dimension are returned.\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions.\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], *optional) -- Signal size in the", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft2.html", "category": "pytorch docs"}
{"text": "transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the FFT. If a length \"-1\" is specified, no padding\n is done in that dimension. Default: \"s = [input.size(d) for d\n in dim]\"\n * dim (Tuple[int], optional) -- Dimensions to be\n transformed. Default: last two dimensions.\n * norm (str, optional) --\n Normalization mode. For the forward transform (\"fft2()\"),\n these correspond to:\n * \"\"forward\"\" - normalize by \"1/n\"\n * \"\"backward\"\" - no normalization\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the FFT\n orthonormal)\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"ifft2()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. This is required to make \"ifft2()\" the exact\n inverse.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft2.html", "category": "pytorch docs"}
{"text": "inverse.\n Default is \"\"backward\"\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nx = torch.rand(10, 10, dtype=torch.complex64)\nfft2 = torch.fft.fft2(x)\n The discrete Fourier transform is separable, so \"fft2()\" here is\n equivalent to two one-dimensional \"fft()\" calls:\ntwo_ffts = torch.fft.fft(torch.fft.fft(x, dim=0), dim=1)\ntorch.testing.assert_close(fft2, two_ffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft2.html", "category": "pytorch docs"}
{"text": "LazyConvTranspose2dclass torch.nn.LazyConvTranspose2d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n A \"torch.nn.ConvTranspose2d\" module with lazy initialization of the\n \"in_channels\" argument of the \"ConvTranspose2d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight and bias.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose2d.html", "category": "pytorch docs"}
{"text": "both sides of each dimension in the input. Default: 0\n * output_padding (int or tuple, optional) --\n Additional size added to one side of each dimension in the\n output shape. Default: 0\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n See also:\n \"torch.nn.ConvTranspose2d\" and\n \"torch.nn.modules.lazy.LazyModuleMixin\"\n cls_to_become\n alias of \"ConvTranspose2d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ndimensionTensor.ndimension() -> int\n Alias for \"dim()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ndimension.html", "category": "pytorch docs"}
{"text": "torch.Tensor.reciprocal_Tensor.reciprocal_() -> Tensor\n In-place version of \"reciprocal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.reciprocal_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.minimumTensor.minimum(other) -> Tensor\n See \"torch.minimum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.minimum.html", "category": "pytorch docs"}
{"text": "torch._foreach_erftorch._foreach_erf(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.erf()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_erf.html", "category": "pytorch docs"}
{"text": "torch.jit.freezetorch.jit.freeze(mod, preserved_attrs=None, optimize_numerics=True)\n Freezing a \"ScriptModule\" will clone it and attempt to inline the\n cloned module's submodules, parameters, and attributes as constants\n in the TorchScript IR Graph. By default, forward will be\n preserved, as well as attributes & methods specified in\n preserved_attrs. Additionally, any attribute that is modified\n within a preserved method will be preserved.\n Freezing currently only accepts ScriptModules that are in eval\n mode.\n Freezing applies generic optimization that will speed up your model\n regardless of machine. To further optimize using server-specific\n settings, run optimize_for_inference after freezing.\n Parameters:\n * mod (\"ScriptModule\") -- a module to be frozen\n * preserved_attrs (Optional[List[str]]) -- a\n list of attributes to preserve in addition to the forward", "source": "https://pytorch.org/docs/stable/generated/torch.jit.freeze.html", "category": "pytorch docs"}
{"text": "method. Attributes modified in preserved methods will also be\n preserved.\n * optimize_numerics (bool) -- If \"True\", a set of\n optimization passes will be run that does not strictly\n preserve numerics. Full details of optimization can be found\n at torch.jit.run_frozen_optimizations.\n Returns:\n Frozen \"ScriptModule\".\n Example (Freezing a simple module with a Parameter):\n def forward(self, input):\n output = self.weight.mm(input)\n output = self.linear(output)\n return output\n scripted_module = torch.jit.script(MyModule(2, 3).eval())\n frozen_module = torch.jit.freeze(scripted_module)\n # parameters have been removed and inlined into the Graph as constants\n assert len(list(frozen_module.named_parameters())) == 0\n # See the compiled graph as Python code\n print(frozen_module.code)\n Example (Freezing a module with preserved attributes)\n def forward(self, input):", "source": "https://pytorch.org/docs/stable/generated/torch.jit.freeze.html", "category": "pytorch docs"}
{"text": "def forward(self, input):\n self.modified_tensor += 1\n return input + self.modified_tensor\n scripted_module = torch.jit.script(MyModule2().eval())\n frozen_module = torch.jit.freeze(scripted_module, preserved_attrs=[\"version\"])\n # we've manually preserved version, so it still exists on the frozen module and can be modified\n assert frozen_module.version == 1\n frozen_module.version = 2\n # modified_tensor is detected as being mutated in the forward, so freezing preserves\n # it to retain model semantics\n assert frozen_module(torch.tensor(1)) == torch.tensor(12)\n # now that we've run it once, the next result will be incremented by one\n assert frozen_module(torch.tensor(1)) == torch.tensor(13)\n Note:\n Freezing submodule attributes is also supported: frozen_module =\n torch.jit.freeze(scripted_module,\n preserved_attrs=[\"submodule.version\"])\n Note:\n If you're not sure why an attribute is not being inlined as a", "source": "https://pytorch.org/docs/stable/generated/torch.jit.freeze.html", "category": "pytorch docs"}
{"text": "constant, you can run dump_alias_db on\n frozen_module.forward.graph to see if freezing has detected the\n attribute is being modified.\n Note:\n Because freezing makes weights constants and removes module\n hierarchy, \"to\" and other nn.Module methods that manipulate device\n or dtype no longer work. As a workaround, you can remap devices\n by specifying map_location in torch.jit.load; however, device-\n specific logic may have been baked into the model.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.freeze.html", "category": "pytorch docs"}
{"text": "torch.cuda.comm.gathertorch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None)\n Gathers tensors from multiple GPU devices.\n Parameters:\n * tensors (Iterable[Tensor]) -- an iterable of\n tensors to gather. Tensor sizes in all dimensions other than\n \"dim\" have to match.\n * dim (int, optional) -- a dimension along which the\n tensors will be concatenated. Default: \"0\".\n * destination (torch.device, str, or int,\n optional) -- the output device. Can be CPU or CUDA.\n Default: the current CUDA device.\n * out (Tensor, optional, keyword-only) -- the\n tensor to store gather result. Its sizes must match those of\n \"tensors\", except for \"dim\", where the size must equal\n \"sum(tensor.size(dim) for tensor in tensors)\". Can be on CPU\n or CUDA.\n Note:\n \"destination\" must not be specified when \"out\" is specified.\n Returns:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.gather.html", "category": "pytorch docs"}
{"text": "Returns:\n * If \"destination\" is specified,\n a tensor located on \"destination\" device, that is a result\n of concatenating \"tensors\" along \"dim\".\n * If \"out\" is specified,\n the \"out\" tensor, now containing results of concatenating\n \"tensors\" along \"dim\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.gather.html", "category": "pytorch docs"}
{"text": "GraphInfoclass torch.onnx.verification.GraphInfo(graph, input_args, params_dict, export_options=, id='', _EXCLUDED_NODE_KINDS=frozenset({'aten::ScalarImplicit', 'prim::Constant', 'prim::ListConstruct'}))\n GraphInfo contains validation information of a TorchScript graph\n and its converted ONNX graph.\n all_mismatch_leaf_graph_info()\n Return a list of all leaf GraphInfo objects that have\n mismatch.\n Return type:\n List[GraphInfo]\n clear()\n Clear states and results of previous verification.\n essential_node_count()\n Return the number of nodes in the subgraph excluding those in\n _EXCLUDED_NODE_KINDS.\n Return type:\n int\n essential_node_kinds()\n Return the set of node kinds in the subgraph excluding those in\n _EXCLUDED_NODE_KINDS.\n Return type:\n Set[str]\n export_repro(repro_dir=None, name=None)\n Export the subgraph to ONNX along with the input/output data for\n repro.", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"}
{"text": "repro.\n The repro directory will contain the following files:\n dir\n ├── test_<name>\n │   ├── model.onnx\n │   └── test_data_set_0\n │       ├── input_0.pb\n │       ├── input_1.pb\n │       ├── output_0.pb\n │       └── output_1.pb\n Parameters:\n * repro_dir (Optional[str]) -- The directory to\n export the repro files to. Defaults to current working\n directory if None.\n * name (Optional[str]) -- An optional name for\n the test case folder: \"test_{name}\".\n Returns:\n The path to the exported repro directory.\n Return type:\n str\n find_mismatch(options=None)\n Find all mismatches between the TorchScript IR graph and the\n exported onnx model.\n Binary searches the model graph to find the minimal subgraph\n that exhibits the mismatch. A GraphInfo object is created for", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"}
{"text": "each subgraph, recording the test inputs and export options, as\n well as the validation results.\n Parameters:\n options (Optional[VerificationOptions]) -- The\n verification options.\n find_partition(id)\n Find the GraphInfo object with the given id.\n Return type:\n Optional[GraphInfo]\n has_mismatch()\n Return True if the subgraph has output mismatch between torch\n and ONNX.\n Return type:\n bool\n pretty_print_mismatch(graph=False)\n Pretty print details of the mismatch between torch and ONNX.\n Parameters:\n graph (bool) -- If True, print the ATen JIT graph and\n ONNX graph.\n pretty_print_tree()\n Pretty print GraphInfo tree.\n Each node represents a subgraph, showing the number of nodes in\n the subgraph and a check mark if the subgraph has output\n mismatch between torch and ONNX.\n The id of the subgraph is shown under the node. The GraphInfo", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"}
{"text": "object for any subgraph can be retrieved by calling\n graph_info.find_partition(id).\n Example:\n ==================================== Tree: =====================================\n 5 X __2 X __1 ✓\n id: | id: 0 | id: 00\n | |\n | |__1 X (aten::relu)\n | id: 01\n |\n |__3 X __1 ✓\n id: 1 | id: 10\n |\n |__2 X __1 X (aten::relu)\n id: 11 | id: 110\n |\n |__1 ✓\n id: 111\n =========================== Mismatch leaf subgraphs: ===========================\n ['01', '110']\n ============================= Mismatch node kinds: =============================\n {'aten::relu': 2}\n verify_export(options)\n Verify the export from TorchScript IR graph to ONNX.", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"}
{"text": "Export the TorchScript IR graph to ONNX, with the inputs,\n parameters and export options recorded in this object. Then\n verify the exported ONNX graph against the original TorchScript\n IR graph under the provided verification options.\n Parameters:\n options (VerificationOptions) -- The verification\n options.\n Returns:\n error: The AssertionError raised during the verification.\n Returns None if no error is raised.\n onnx_graph: The exported ONNX graph in TorchScript IR format.\n onnx_outs: The outputs from running the exported ONNX model\n under the onnx backend in options.\n pt_outs: The outputs from running the TorchScript IR graph.\n Return type:\n error", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"}
{"text": "Thresholdclass torch.nn.Threshold(threshold, value, inplace=False)\n Thresholds each element of the input Tensor.\n Threshold is defined as:\n y = \\begin{cases} x, &\\text{ if } x > \\text{threshold} \\\\\n \\text{value}, &\\text{ otherwise } \\end{cases}\n Parameters:\n * threshold (float) -- The value to threshold at\n * value (float) -- The value to replace with\n * inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.Threshold(0.1, 20)\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Threshold.html", "category": "pytorch docs"}
{"text": "torch.addmvtorch.addmv(input, mat, vec, *, beta=1, alpha=1, out=None) -> Tensor\n Performs a matrix-vector product of the matrix \"mat\" and the vector\n \"vec\". The vector \"input\" is added to the final result.\n If \"mat\" is a (n \\times m) tensor, \"vec\" is a 1-D tensor of size\n m, then \"input\" must be broadcastable with a 1-D tensor of size\n n and \"out\" will be a 1-D tensor of size n.\n \"alpha\" and \"beta\" are scaling factors on the matrix-vector product\n between \"mat\" and \"vec\" and the added tensor \"input\" respectively.\n \\text{out} = \\beta\\ \\text{input} + \\alpha\\ (\\text{mat}\n \\mathbin{@} \\text{vec})\n If \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\n For inputs of type FloatTensor or DoubleTensor, arguments\n \"beta\" and \"alpha\" must be real numbers, otherwise they should be\n integers.\n Parameters:\n * input (Tensor) -- vector to be added\n * mat (Tensor) -- matrix to be matrix multiplied", "source": "https://pytorch.org/docs/stable/generated/torch.addmv.html", "category": "pytorch docs"}
{"text": "* vec (Tensor) -- vector to be matrix multiplied\n Keyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * alpha (Number, optional) -- multiplier for mat @ vec\n (\\alpha)\n * out (Tensor, optional) -- the output tensor.\n Example:\n >>> M = torch.randn(2)\n >>> mat = torch.randn(2, 3)\n >>> vec = torch.randn(3)\n >>> torch.addmv(M, mat, vec)\n tensor([-0.3768, -5.5565])", "source": "https://pytorch.org/docs/stable/generated/torch.addmv.html", "category": "pytorch docs"}
{"text": "torch.lu_unpacktorch.lu_unpack(LU_data, LU_pivots, unpack_data=True, unpack_pivots=True, *, out=None)\n Unpacks the LU decomposition returned by \"lu_factor()\" into the P,\n L, U matrices.\n See also:\n \"lu()\" returns the matrices from the LU decomposition. Its\n gradient formula is more efficient than that of doing\n \"lu_factor()\" followed by \"lu_unpack()\".\n Parameters:\n * LU_data (Tensor) -- the packed LU factorization data\n * LU_pivots (Tensor) -- the packed LU factorization pivots\n * unpack_data (bool) -- flag indicating if the data should\n be unpacked. If \"False\", then the returned \"L\" and \"U\" are\n empty tensors. Default: \"True\"\n * unpack_pivots (bool) -- flag indicating if the pivots\n should be unpacked into a permutation matrix \"P\". If \"False\",\n then the returned \"P\" is an empty tensor. Default: \"True\"\n Keyword Arguments:\n out (tuple, optional) -- output tuple of three", "source": "https://pytorch.org/docs/stable/generated/torch.lu_unpack.html", "category": "pytorch docs"}
{"text": "tensors. Ignored if None.\n Returns:\n A namedtuple \"(P, L, U)\"\n Examples:\n >>> A = torch.randn(2, 3, 3)\n >>> LU, pivots = torch.linalg.lu_factor(A)\n >>> P, L, U = torch.lu_unpack(LU, pivots)\n >>> # We can recover A from the factorization\n >>> A_ = P @ L @ U\n >>> torch.allclose(A, A_)\n True\n >>> # LU factorization of a rectangular matrix:\n >>> A = torch.randn(2, 3, 2)\n >>> LU, pivots = torch.linalg.lu_factor(A)\n >>> P, L, U = torch.lu_unpack(LU, pivots)\n >>> # P, L, U are the same as returned by linalg.lu\n >>> P_, L_, U_ = torch.linalg.lu(A)\n >>> torch.allclose(P, P_) and torch.allclose(L, L_) and torch.allclose(U, U_)\n True", "source": "https://pytorch.org/docs/stable/generated/torch.lu_unpack.html", "category": "pytorch docs"}
{"text": "LayerNormclass torch.nn.LayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True, device=None, dtype=None)\n Applies Layer Normalization over a mini-batch of inputs as\n described in the paper Layer Normalization\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The mean and standard-deviation are calculated over the last D\n dimensions, where D is the dimension of \"normalized_shape\". For\n example, if \"normalized_shape\" is \"(3, 5)\" (a 2-dimensional shape),\n the mean and standard-deviation are computed over the last 2\n dimensions of the input (i.e. \"input.mean((-2, -1))\"). \\gamma and\n \\beta are learnable affine transform parameters of\n \"normalized_shape\" if \"elementwise_affine\" is \"True\". The standard-\n deviation is calculated via the biased estimator, equivalent to\n torch.var(input, unbiased=False).\n Note:\n Unlike Batch Normalization and Instance Normalization, which", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html", "category": "pytorch docs"}
{"text": "applies scalar scale and bias for each entire channel/plane with\n the \"affine\" option, Layer Normalization applies per-element\n scale and bias with \"elementwise_affine\".\n This layer uses statistics computed from input data in both\n training and evaluation modes.\n Parameters:\n * normalized_shape (int or list or torch.Size) --\n input shape from an expected input of size\n [* \\times \\text{normalized_shape}[0] \\times\n \\text{normalized_shape}[1] \\times \\ldots \\times\n \\text{normalized_shape}[-1]]\n If a single integer is used, it is treated as a singleton\n list, and this module will normalize over the last dimension\n which is expected to be of that specific size.\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * elementwise_affine (bool) -- a boolean value that when\n set to \"True\", this module has learnable per-element affine", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html", "category": "pytorch docs"}
{"text": "parameters initialized to ones (for weights) and zeros (for\n biases). Default: \"True\".\n Variables:\n * weight -- the learnable weights of the module of shape\n \\text{normalized_shape} when \"elementwise_affine\" is set to\n \"True\". The values are initialized to 1.\n * bias -- the learnable bias of the module of shape\n \\text{normalized_shape} when \"elementwise_affine\" is set to\n \"True\". The values are initialized to 0.\n Shape:\n * Input: (N, *)\n * Output: (N, *) (same shape as input)\n Examples:\n >>> # NLP Example\n >>> batch, sentence_length, embedding_dim = 20, 5, 10\n >>> embedding = torch.randn(batch, sentence_length, embedding_dim)\n >>> layer_norm = nn.LayerNorm(embedding_dim)\n >>> # Activate module\n >>> layer_norm(embedding)\n >>>\n >>> # Image Example\n >>> N, C, H, W = 20, 5, 10, 10\n >>> input = torch.randn(N, C, H, W)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html", "category": "pytorch docs"}
{"text": ">>> input = torch.randn(N, C, H, W)\n >>> # Normalize over the last three dimensions (i.e. the channel and spatial dimensions)\n >>> # as shown in the image below\n >>> layer_norm = nn.LayerNorm([C, H, W])\n >>> output = layer_norm(input)\n [image]", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html", "category": "pytorch docs"}
{"text": "torch.autograd.forward_ad.make_dualtorch.autograd.forward_ad.make_dual(tensor, tangent, *, level=None)\n Associates a tensor value with a forward gradient, the tangent, to\n create a \"dual tensor\", which is used to compute forward AD\n gradients. The result is a new tensor aliased to \"tensor\" with\n \"tangent\" embedded as an attribute as-is if it has the same storage\n layout or copied otherwise. The tangent attribute can be recovered\n with \"unpack_dual()\".\n This function is backward differentiable.\n Given a function f whose jacobian is J, it allows one to\n compute the Jacobian-vector product (jvp) between J and a given\n vector v as follows.\n Example:\n >>> with dual_level():\n ... inp = make_dual(x, v)\n ... out = f(inp)\n ... y, jvp = unpack_dual(out)\n Please see the forward-mode AD tutorial for detailed steps on how\n to use this API.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.make_dual.html", "category": "pytorch docs"}
{"text": "AdaptiveAvgPool1dclass torch.nn.AdaptiveAvgPool1d(output_size)\n Applies a 1D adaptive average pooling over an input signal composed\n of several input planes.\n The output size is L_{out}, for any input size. The number of\n output features is equal to the number of input planes.\n Parameters:\n output_size (Union[int, Tuple[int]]) --\n the target output size L_{out}.\n Shape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n L_{out}=\\text{output_size}.\n -[ Examples ]-\n >>> # target output size of 5\n >>> m = nn.AdaptiveAvgPool1d(5)\n >>> input = torch.randn(1, 64, 8)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.atan_Tensor.atan_() -> Tensor\n In-place version of \"atan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atan_.html", "category": "pytorch docs"}
{"text": "torch.fft.ihfft2torch.fft.ihfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor\n Computes the 2-dimensional inverse discrete Fourier transform of\n real \"input\". Equivalent to \"ihfftn()\" but transforms only the two\n last dimensions by default.\n Note:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimensions.\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the Hermitian IFFT. If a length \"-1\" is specified,\n no padding is done in that dimension. Default: \"s =\n [input.size(d) for d in dim]\"\n * dim (Tuple[int], optional) -- Dimensions to be\n transformed. Default: last two dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html", "category": "pytorch docs"}
{"text": "transformed. Default: last two dimensions.\n * norm (str, optional) --\n Normalization mode. For the backward transform (\"ihfft2()\"),\n these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n IFFT orthonormal)\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"hfft2()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"ihfft2()\" the exact\n inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n >>> T = torch.rand(10, 10)\n >>> t = torch.fft.ihfft2(T)\n >>> t.size()\n torch.Size([10, 6])\n Compared against the full output from \"ifft2()\", the Hermitian", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html", "category": "pytorch docs"}
{"text": "time-space signal takes up only half the space.\n >>> fftn = torch.fft.ifft2(T)\n >>> torch.allclose(fftn[..., :6], t)\n True\n The discrete Fourier transform is separable, so \"ihfft2()\" here is\n equivalent to a combination of \"ifft()\" and \"ihfft()\":\n >>> two_ffts = torch.fft.ifft(torch.fft.ihfft(T, dim=1), dim=0)\n >>> torch.allclose(t, two_ffts)\n True", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html", "category": "pytorch docs"}
{"text": "BNReLU3dclass torch.ao.nn.intrinsic.BNReLU3d(batch_norm, relu)\n This is a sequential container which calls the BatchNorm 3d and\n ReLU modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.BNReLU3d.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.rrelu_torch.nn.functional.rrelu_(input, lower=1. / 8, upper=1. / 3, training=False) -> Tensor\n In-place version of \"rrelu()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.rrelu_.html", "category": "pytorch docs"}
{"text": "torch.arcsinhtorch.arcsinh(input, *, out=None) -> Tensor\n Alias for \"torch.asinh()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arcsinh.html", "category": "pytorch docs"}
{"text": "LazyConv1dclass torch.nn.LazyConv1d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\n A \"torch.nn.Conv1d\" module with lazy initialization of the\n \"in_channels\" argument of the \"Conv1d\" that is inferred from the\n \"input.size(1)\". The attributes that will be lazily initialized are\n weight and bias.\n Check the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\n Parameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- Zero-padding\n added to both sides of the input. Default: 0\n * padding_mode (str, optional) -- \"'zeros'\",", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv1d.html", "category": "pytorch docs"}
{"text": "\"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * dilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n * bias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n See also:\n \"torch.nn.Conv1d\" and \"torch.nn.modules.lazy.LazyModuleMixin\"\n cls_to_become\n alias of \"Conv1d\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv1d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.permuteTensor.permute(*dims) -> Tensor\n See \"torch.permute()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.permute.html", "category": "pytorch docs"}
{"text": "torch.sparse.softmaxtorch.sparse.softmax(input, dim, *, dtype=None) -> Tensor\n Applies a softmax function.\n Softmax is defined as:\n \\text{Softmax}(x_{i}) = \\frac{exp(x_i)}{\\sum_j exp(x_j)}\n where i, j run over sparse tensor indices and unspecified entries\n are ignored. This is equivalent to defining unspecified entries as\n negative infinity so that exp(x_k) = 0 when the entry with index k\n has not been specified.\n It is applied to all slices along dim, and will re-scale them so\n that the elements lie in the range [0, 1] and sum to 1.\n Parameters:\n * input (Tensor) -- input\n * dim (int) -- A dimension along which softmax will be\n computed.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.softmax.html", "category": "pytorch docs"}
{"text": "L1Lossclass torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')\n Creates a criterion that measures the mean absolute error (MAE)\n between each element in the input x and target y.\n The unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = \\left| x_n\n - y_n \\right|,\n where N is the batch size. If \"reduction\" is not \"'none'\" (default\n \"'mean'\"), then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{'mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{'sum'.}\n \\end{cases}\n x and y are tensors of arbitrary shapes with a total of n elements\n each.\n The sum operation still operates over all the elements, and divides\n by n.\n The division by n can be avoided if one sets \"reduction = 'sum'\".\n Supports real-valued and complex-valued inputs.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html", "category": "pytorch docs"}
{"text": "Parameters:\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html", "category": "pytorch docs"}
{"text": "the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n * Output: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as the input.\n Examples:\n >>> loss = nn.L1Loss()\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randn(3, 5)\n >>> output = loss(input, target)\n >>> output.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html", "category": "pytorch docs"}
{"text": "torch.cuda.mem_get_infotorch.cuda.mem_get_info(device=None)\n Returns the global free and total GPU memory occupied for a given\n device using cudaMemGetInfo.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Return type:\n Tuple[int, int]\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.mem_get_info.html", "category": "pytorch docs"}
{"text": "torch.Tensor.as_stridedTensor.as_strided(size, stride, storage_offset=None) -> Tensor\n See \"torch.as_strided()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.as_strided.html", "category": "pytorch docs"}
{"text": "torch.isneginftorch.isneginf(input, *, out=None) -> Tensor\n Tests if each element of \"input\" is negative infinity or not.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.tensor([-float('inf'), float('inf'), 1.2])\n >>> torch.isneginf(a)\n tensor([ True, False, False])", "source": "https://pytorch.org/docs/stable/generated/torch.isneginf.html", "category": "pytorch docs"}
{"text": "torch.dividetorch.divide(input, other, *, rounding_mode=None, out=None) -> Tensor\n Alias for \"torch.div()\".", "source": "https://pytorch.org/docs/stable/generated/torch.divide.html", "category": "pytorch docs"}
{"text": "MaxPool1dclass torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)\n Applies a 1D max pooling over an input signal composed of several\n input planes.\n In the simplest case, the output value of the layer with input size\n (N, C, L) and output (N, C, L_{out}) can be precisely described as:\n out(N_i, C_j, k) = \\max_{m=0, \\ldots, \\text{kernel_size} - 1}\n input(N_i, C_j, stride \\times k + m)\n If \"padding\" is non-zero, then the input is implicitly padded with\n negative infinity on both sides for \"padding\" number of points.\n \"dilation\" is the stride between the elements within the sliding\n window. This link has a nice visualization of the pooling\n parameters.\n Note:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html", "category": "pytorch docs"}
{"text": "Parameters:\n * kernel_size (Union[int, Tuple[int]]) --\n The size of the sliding window, must be > 0.\n * stride (Union[int, Tuple[int]]) -- The\n stride of the sliding window, must be > 0. Default value is\n \"kernel_size\".\n * padding (Union[int, Tuple[int]]) --\n Implicit negative infinity padding to be added on both sides,\n must be >= 0 and <= kernel_size / 2.\n * dilation (Union[int, Tuple[int]]) -- The\n stride between elements within a sliding window, must be > 0.\n * return_indices (bool) -- If \"True\", will return the\n argmax along with the max values. Useful for\n \"torch.nn.MaxUnpool1d\" later\n * ceil_mode (bool) -- If \"True\", will use ceil instead\n of floor to compute the output shape. This ensures that\n every element in the input tensor is covered by a sliding\n window.\n Shape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html", "category": "pytorch docs"}
{"text": "window.\n Shape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n L_{out} = \\left\\lfloor \\frac{L_{in} + 2 \\times\n \\text{padding} - \\text{dilation} \\times\n (\\text{kernel_size} - 1) - 1}{\\text{stride}} +\n 1\\right\\rfloor\n Examples:\n >>> # pool of size=3, stride=2\n >>> m = nn.MaxPool1d(3, stride=2)\n >>> input = torch.randn(20, 16, 50)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html", "category": "pytorch docs"}
{"text": "torch.atan2torch.atan2(input, other, *, out=None) -> Tensor\n Element-wise arctangent of \\text{input}_{i} / \\text{other}_{i} with\n consideration of the quadrant. Returns a new tensor with the signed\n angles in radians between vector (\\text{other}_{i},\n \\text{input}_{i}) and vector (1, 0). (Note that \\text{other}_{i},\n the second parameter, is the x-coordinate, while \\text{input}_{i},\n the first parameter, is the y-coordinate.)\n The shapes of \"input\" and \"other\" must be broadcastable.\n Parameters:\n * input (Tensor) -- the first input tensor\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.9041, 0.0196, -0.3108, -2.4423])\n >>> torch.atan2(a, torch.randn(4))\n tensor([ 0.9833, 0.0811, -1.9743, -1.4151])", "source": "https://pytorch.org/docs/stable/generated/torch.atan2.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.multilabel_margin_losstorch.nn.functional.multilabel_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor\n See \"MultiLabelMarginLoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.multilabel_margin_loss.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_inferenceTensor.is_inference() -> bool\n See \"torch.is_inference()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_inference.html", "category": "pytorch docs"}
{"text": "torch.Tensor.sumTensor.sum(dim=None, keepdim=False, dtype=None) -> Tensor\n See \"torch.sum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sum.html", "category": "pytorch docs"}
{"text": "default_fused_act_fake_quanttorch.quantization.fake_quantize.default_fused_act_fake_quant\n alias of functools.partial(, observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fused_act_fake_quant.html", "category": "pytorch docs"}
{"text": "torch.autograd.Function.vmapstatic Function.vmap(info, in_dims, args)\n Defines a rule for the behavior of this autograd.Function\n underneath \"torch.vmap()\". For a \"torch.autograd.Function()\" to\n support \"torch.vmap()\", you must either override this staticmethod,\n or set \"generate_vmap_rule\" to \"True\" (you may not do both).\n If you choose to override this staticmethod: it must accept\n * an \"info\" object as the first argument. \"info.batch_size\"\n specifies the size of the dimension being vmapped over, while\n \"info.randomness\" is the randomness option passed to\n \"torch.vmap()\".\n * an \"in_dims\" tuple as the second argument. For each arg in\n \"args\", \"in_dims\" has a corresponding \"Optional[int]\". It is\n \"None\" if the arg is not a Tensor or if the arg is not being\n vmapped over, otherwise, it is an integer specifying what\n dimension of the Tensor is being vmapped over.\n * \"args\", which is the same as the args to \"forward()\".", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.vmap.html", "category": "pytorch docs"}
{"text": "The return of the vmap staticmethod is a tuple of \"(output,\n out_dims)\". Similar to \"in_dims\", \"out_dims\" should be of the same\n structure as \"output\" and contain one \"out_dim\" per output that\n specifies if the output has the vmapped dimension and what index it\n is in.\n Please see Extending torch.func with autograd.Function for more\n details.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.vmap.html", "category": "pytorch docs"}
{"text": "TripletMarginLossclass torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')\n Creates a criterion that measures the triplet loss given input\n tensors x1, x2, x3 and a margin with a value greater than 0. This\n is used for measuring a relative similarity between samples. A\n triplet is composed of a, p and n (i.e., anchor, positive\n examples and negative examples respectively). The shapes of all\n input tensors should be (N, D).\n The distance swap is described in detail in the paper Learning\n shallow convolutional feature descriptors with triplet losses by V.\n Balntas, E. Riba et al.\n The loss function for each sample in the mini-batch is:\n L(a, p, n) = \\max \\{d(a_i, p_i) - d(a_i, n_i) + {\\rm margin},\n 0\\}\n where\n d(x_i, y_i) = \\left\\lVert {\\bf x}_i - {\\bf y}_i \\right\\rVert_p\n See also \"TripletMarginWithDistanceLoss\", which computes the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html", "category": "pytorch docs"}
{"text": "triplet margin loss for input tensors using a custom distance\n function.\n Parameters:\n * margin (float, optional) -- Default: 1.\n * p (int, optional) -- The norm degree for pairwise\n distance. Default: 2.\n * swap (bool, optional) -- The distance swap is\n described in detail in the paper Learning shallow\n convolutional feature descriptors with triplet losses by V.\n Balntas, E. Riba et al. Default: \"False\".\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html", "category": "pytorch docs"}
{"text": "\"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n Shape:\n * Input: (N, D) or (D) where D is the vector dimension.\n * Output: A Tensor of shape (N) if \"reduction\" is \"'none'\" and\n input shape is (N, D); a scalar otherwise.\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)\n >>> anchor = torch.randn(100, 128, requires_grad=True)\n >>> positive = torch.randn(100, 128, requires_grad=True)\n >>> negative = torch.randn(100, 128, requires_grad=True)\n >>> output = triplet_loss(anchor, positive, negative)\n >>> output.backward()", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html", "category": "pytorch docs"}
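The per-sample triplet formula above can also be evaluated directly in plain Python. `triplet_margin_loss` below is a hypothetical re-implementation for illustration, not the torch kernel:

```python
def triplet_margin_loss(a, p, n, margin=1.0, p_norm=2.0):
    # d(x, y) is the p-norm distance from the formula above.
    def dist(x, y):
        return sum(abs(xi - yi) ** p_norm for xi, yi in zip(x, y)) ** (1.0 / p_norm)
    # L(a, p, n) = max(d(a, p) - d(a, n) + margin, 0)
    return max(dist(a, p) - dist(a, n) + margin, 0.0)

# Anchor far closer to the positive than the negative -> zero loss.
print(triplet_margin_loss([0.0, 0.0], [0.1, 0.0], [5.0, 0.0]))  # 0.0
```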
{"text": "torch.Tensor.index_addTensor.index_add(dim, index, source, *, alpha=1) -> Tensor\n Out-of-place version of \"torch.Tensor.index_add_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_add.html", "category": "pytorch docs"}
{"text": "torch.broadcast_shapestorch.broadcast_shapes(*shapes) -> Size\n Similar to \"broadcast_tensors()\" but for shapes.\n This is equivalent to \"torch.broadcast_tensors(*map(torch.empty,\n shapes))[0].shape\" but avoids the need to create intermediate\n tensors. This is useful for broadcasting tensors of common batch\n shape but different rightmost shape, e.g. to broadcast mean vectors\n with covariance matrices.\n Example:\n >>> torch.broadcast_shapes((2,), (3, 1), (1, 1, 1))\n torch.Size([1, 3, 2])\n Parameters:\n shapes (torch.Size) -- Shapes of tensors.\n Returns:\n A shape compatible with all input shapes.\n Return type:\n shape (torch.Size)\n Raises:\n RuntimeError -- If shapes are incompatible.", "source": "https://pytorch.org/docs/stable/generated/torch.broadcast_shapes.html", "category": "pytorch docs"}
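The broadcasting rule behind torch.broadcast_shapes can be sketched in a few lines of plain Python (a minimal re-implementation for illustration, assuming the standard right-aligned rule; not the torch code):

```python
def broadcast_shapes(*shapes):
    # Align shapes on the right; in each position, sizes must agree or be 1.
    ndim = max(len(s) for s in shapes)
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise RuntimeError(f"incompatible shapes: {shapes}")
        out.append(sizes.pop() if sizes else 1)
    return tuple(out)

# The docs' example: (2,), (3, 1), (1, 1, 1) broadcast to (1, 3, 2).
print(broadcast_shapes((2,), (3, 1), (1, 1, 1)))  # (1, 3, 2)
```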
{"text": "torch.nn.functional.conv_transpose1dtorch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor\n Applies a 1D transposed convolution operator over an input signal\n composed of several input planes, sometimes also called\n \"deconvolution\".\n This operator supports TensorFloat32.\n See \"ConvTranspose1d\" for details and output shape.\n Note:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n Parameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iW)\n * weight -- filters of shape (\\text{in_channels} ,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html", "category": "pytorch docs"}
{"text": "\\frac{\\text{out_channels}}{\\text{groups}} , kW)\n * bias -- optional bias of shape (\\text{out_channels}).\n Default: None\n * stride -- the stride of the convolving kernel. Can be a\n single number or a tuple \"(sW,)\". Default: 1\n * padding -- \"dilation * (kernel_size - 1) - padding\" zero-\n padding will be added to both sides of each dimension in the\n input. Can be a single number or a tuple \"(padW,)\". Default: 0\n * output_padding -- additional size added to one side of\n each dimension in the output shape. Can be a single number or\n a tuple \"(out_padW)\". Default: 0\n * groups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n * dilation -- the spacing between kernel elements. Can be a\n single number or a tuple \"(dW,)\". Default: 1\n Examples:\n >>> inputs = torch.randn(20, 16, 50)\n >>> weights = torch.randn(16, 33, 5)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html", "category": "pytorch docs"}
{"text": ">>> weights = torch.randn(16, 33, 5)\n >>> F.conv_transpose1d(inputs, weights)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html", "category": "pytorch docs"}
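The output length in the conv_transpose1d example follows the standard transposed-convolution formula given in the ConvTranspose1d docs; `conv_transpose1d_out_len` is a hypothetical helper that transcribes it:

```python
def conv_transpose1d_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    # L_out = (L_in - 1)*stride - 2*padding
    #         + dilation*(kernel_size - 1) + output_padding + 1
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# The example above: input length 50, kernel 5, all defaults.
print(conv_transpose1d_out_len(50, kernel_size=5))  # 54
```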
{"text": "Hardshrinkclass torch.nn.Hardshrink(lambd=0.5)\n Applies the Hard Shrinkage (Hardshrink) function element-wise.\n Hardshrink is defined as:\n \\text{HardShrink}(x) = \\begin{cases} x, & \\text{ if } x >\n \\lambda \\\\ x, & \\text{ if } x < -\\lambda \\\\ 0, & \\text{\n otherwise } \\end{cases}\n Parameters:\n lambd (float) -- the \\lambda value for the Hardshrink\n formulation. Default: 0.5\n Shape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n Examples:\n >>> m = nn.Hardshrink()\n >>> input = torch.randn(2)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardshrink.html", "category": "pytorch docs"}
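The Hardshrink piecewise definition is easy to verify scalar-by-scalar; this is a direct plain-Python transcription, not the torch kernel:

```python
def hardshrink(x, lambd=0.5):
    # x passes through when |x| exceeds lambda; otherwise it is shrunk to 0.
    if x > lambd or x < -lambd:
        return x
    return 0.0

print(hardshrink(0.7))   # 0.7  (above lambda)
print(hardshrink(-0.7))  # -0.7 (below -lambda)
print(hardshrink(0.3))   # 0.0  (inside the dead zone)
```

Note the inequalities are strict, so values exactly at +/-lambda are zeroed.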
{"text": "torch.dottorch.dot(input, other, *, out=None) -> Tensor\n Computes the dot product of two 1D tensors.\n Note:\n Unlike NumPy's dot, torch.dot intentionally only supports\n computing the dot product of two 1D tensors with the same number\n of elements.\n Parameters:\n * input (Tensor) -- first tensor in the dot product, must\n be 1D.\n * other (Tensor) -- second tensor in the dot product, must\n be 1D.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.dot(torch.tensor([2, 3]), torch.tensor([2, 1]))\n tensor(7)", "source": "https://pytorch.org/docs/stable/generated/torch.dot.html", "category": "pytorch docs"}
{"text": "torch.cuda.current_devicetorch.cuda.current_device()\n Returns the index of a currently selected device.\n Return type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.current_device.html", "category": "pytorch docs"}
{"text": "AdaptiveMaxPool1dclass torch.nn.AdaptiveMaxPool1d(output_size, return_indices=False)\n Applies a 1D adaptive max pooling over an input signal composed of\n several input planes.\n The output size is L_{out}, for any input size. The number of\n output features is equal to the number of input planes.\n Parameters:\n * output_size (Union[int, Tuple[int]]) --\n the target output size L_{out}.\n * return_indices (bool) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to\n nn.MaxUnpool1d. Default: \"False\"\n Shape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n L_{out}=\\text{output_size}.\n -[ Examples ]-\n >>> # target output size of 5\n >>> m = nn.AdaptiveMaxPool1d(5)\n >>> input = torch.randn(1, 64, 8)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool1d.html", "category": "pytorch docs"}
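Adaptive pooling picks its own window per output position so that any L_in maps to the requested L_out. The sketch below uses the commonly described start/end formula (floor(i*L_in/L_out) to ceil((i+1)*L_in/L_out)); the exact windowing is an implementation detail of torch, so treat this as an assumption for illustration:

```python
import math

def adaptive_max_pool1d(xs, output_size):
    # Each output bin i covers input positions [start, end).
    l_in = len(xs)
    out = []
    for i in range(output_size):
        start = (i * l_in) // output_size
        end = math.ceil((i + 1) * l_in / output_size)
        out.append(max(xs[start:end]))
    return out

# Length-8 input squeezed to 5 outputs; windows overlap where needed.
print(adaptive_max_pool1d([1, 3, 2, 5, 4, 0, 7, 6], 5))  # [3, 5, 5, 7, 7]
```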
{"text": "torch.Tensor.isnanTensor.isnan() -> Tensor\n See \"torch.isnan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isnan.html", "category": "pytorch docs"}
{"text": "default_per_channel_qconfigtorch.quantization.qconfig.default_per_channel_qconfig\n alias of QConfig(activation=functools.partial(, quant_min=0,\n quant_max=127){}, weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_channel_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_per_channel_qconfig.html", "category": "pytorch docs"}
{"text": "ScriptFunctionclass torch.jit.ScriptFunction\n Functionally equivalent to a \"ScriptModule\", but represents a\n single function and does not have any attributes or Parameters.\n get_debug_state(self: torch._C.ScriptFunction) -> torch._C.GraphExecutorState\n save(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] = {}) -> None\n save_to_buffer(self: torch._C.ScriptFunction, _extra_files: Dict[str, str] = {}) -> bytes", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptFunction.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.parametrize.cachedtorch.nn.utils.parametrize.cached()\n Context manager that enables the caching system within\n parametrizations registered with \"register_parametrization()\".\n The value of the parametrized objects is computed and cached the\n first time they are required when this context manager is active.\n The cached values are discarded when leaving the context manager.\n This is useful when using a parametrized parameter more than once\n in the forward pass. An example of this is when parametrizing the\n recurrent kernel of an RNN or when sharing weights.\n The simplest way to activate the cache is by wrapping the forward\n pass of the neural network\n import torch.nn.utils.parametrize as P\n ...\n with P.cached():\n output = model(inputs)\n in training and evaluation. One may also wrap just the parts of\n the module that use the parametrized tensors several times. For", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.cached.html", "category": "pytorch docs"}
{"text": "example, the loop of an RNN with a parametrized recurrent kernel:\n with P.cached():\n for x in xs:\n out_rnn = self.rnn_cell(x, out_rnn)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.cached.html", "category": "pytorch docs"}
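The caching behavior described above (compute once on first use, reuse inside the context, discard on exit) can be illustrated with a toy stand-in. `CachedValue` is purely hypothetical and is not how torch implements `parametrize.cached()`:

```python
from contextlib import contextmanager

class CachedValue:
    # Toy model of the caching idea: inside the context manager the
    # (possibly expensive) computation runs once and is reused; the
    # cache is discarded on exit.
    def __init__(self):
        self.calls = 0
        self._cache = None
        self._caching = False

    def value(self):
        if self._caching and self._cache is not None:
            return self._cache
        self.calls += 1                 # count actual recomputations
        result = "computed-weight"      # placeholder for the real tensor
        if self._caching:
            self._cache = result
        return result

    @contextmanager
    def cached(self):
        self._caching = True
        try:
            yield
        finally:
            self._caching, self._cache = False, None  # drop cached values

p = CachedValue()
with p.cached():
    for _ in range(3):
        p.value()
print(p.calls)  # 1: computed once, reused twice
```

This mirrors why wrapping an RNN loop in `P.cached()` pays off: the parametrized kernel is used at every timestep but only computed once.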
{"text": "LeakyReLUclass torch.ao.nn.quantized.LeakyReLU(scale, zero_point, negative_slope=0.01, inplace=False, device=None, dtype=None)\n This is the quantized equivalent of \"LeakyReLU\".\n Parameters:\n * scale (float) -- quantization scale of the output tensor\n * zero_point (int) -- quantization zero point of the\n output tensor\n * negative_slope (float) -- Controls the angle of the\n negative slope. Default: 1e-2", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.LeakyReLU.html", "category": "pytorch docs"}
{"text": "torch.reshapetorch.reshape(input, shape) -> Tensor\n Returns a tensor with the same data and number of elements as\n \"input\", but with the specified shape. When possible, the returned\n tensor will be a view of \"input\". Otherwise, it will be a copy.\n Contiguous inputs and inputs with compatible strides can be\n reshaped without copying, but you should not depend on the copying\n vs. viewing behavior.\n See \"torch.Tensor.view()\" on when it is possible to return a view.\n A single dimension may be -1, in which case it's inferred from the\n remaining dimensions and the number of elements in \"input\".\n Parameters:\n * input (Tensor) -- the tensor to be reshaped\n * shape (tuple of python:int) -- the new shape\n Example:\n >>> a = torch.arange(4.)\n >>> torch.reshape(a, (2, 2))\n tensor([[ 0., 1.],\n [ 2., 3.]])\n >>> b = torch.tensor([[0, 1], [2, 3]])\n >>> torch.reshape(b, (-1,))\n tensor([ 0, 1, 2, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.reshape.html", "category": "pytorch docs"}
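The -1 inference described in the reshape chunk is simple arithmetic: the missing size is the element count divided by the product of the known sizes. A plain-Python sketch (`infer_reshape` is a hypothetical helper):

```python
def infer_reshape(numel, shape):
    # Resolve a single -1 in the target shape, as torch.reshape does.
    known = 1
    for s in shape:
        if s != -1:
            known *= s
    return tuple(numel // known if s == -1 else s for s in shape)

# 12 elements reshaped to (3, -1): the -1 must be 12 / 3 = 4.
print(infer_reshape(12, (3, -1)))  # (3, 4)
```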
{"text": "torch.get_num_interop_threadstorch.get_num_interop_threads() -> int\n Returns the number of threads used for inter-op parallelism on CPU\n (e.g. in JIT interpreter)", "source": "https://pytorch.org/docs/stable/generated/torch.get_num_interop_threads.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.mse_losstorch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor\n Measures the element-wise mean squared error.\n See \"MSELoss\" for details.\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.mse_loss.html", "category": "pytorch docs"}
{"text": "torch.Tensor.less_equalTensor.less_equal(other) -> Tensor\n See \"torch.less_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.less_equal.html", "category": "pytorch docs"}
{"text": "torch.acostorch.acos(input, *, out=None) -> Tensor\n Computes the inverse cosine of each element in \"input\".\n \\text{out}_{i} = \\cos^{-1}(\\text{input}_{i})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.3348, -0.5889, 0.2005, -0.1584])\n >>> torch.acos(a)\n tensor([ 1.2294, 2.2004, 1.3690, 1.7298])", "source": "https://pytorch.org/docs/stable/generated/torch.acos.html", "category": "pytorch docs"}
{"text": "torch.Tensor.diag_embedTensor.diag_embed(offset=0, dim1=-2, dim2=-1) -> Tensor\n See \"torch.diag_embed()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diag_embed.html", "category": "pytorch docs"}
{"text": "torch.resolve_conjtorch.resolve_conj(input) -> Tensor\n Returns a new tensor with materialized conjugation if \"input\"'s\n conjugate bit is set to True, else returns \"input\". The output\n tensor will always have its conjugate bit set to False.\n Parameters:\n input (Tensor) -- the input tensor.\n Example:\n >>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])\n >>> y = x.conj()\n >>> y.is_conj()\n True\n >>> z = y.resolve_conj()\n >>> z\n tensor([-1 - 1j, -2 - 2j, 3 + 3j])\n >>> z.is_conj()\n False", "source": "https://pytorch.org/docs/stable/generated/torch.resolve_conj.html", "category": "pytorch docs"}
{"text": "torch.Tensor.log10_Tensor.log10_() -> Tensor\n In-place version of \"log10()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log10_.html", "category": "pytorch docs"}
{"text": "Dropout1dclass torch.nn.Dropout1d(p=0.5, inplace=False)\n Randomly zero out entire channels (a channel is a 1D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 1D tensor \\text{input}[i, j]). Each channel will be zeroed out\n independently on every forward call with probability \"p\" using\n samples from a Bernoulli distribution.\n Usually the input comes from \"nn.Conv1d\" modules.\n As described in the paper Efficient Object Localization Using\n Convolutional Networks , if adjacent pixels within feature maps are\n strongly correlated (as is normally the case in early convolution\n layers) then i.i.d. dropout will not regularize the activations and\n will otherwise just result in an effective learning rate decrease.\n In this case, \"nn.Dropout1d()\" will help promote independence\n between feature maps and should be used instead.\n Parameters:\n * p (float, optional) -- probability of an element to\n be zero-ed.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout1d.html", "category": "pytorch docs"}
{"text": "be zero-ed.\n * inplace (bool, optional) -- If set to \"True\", will\n do this operation in-place\n Shape:\n * Input: (N, C, L) or (C, L).\n * Output: (N, C, L) or (C, L) (same shape as input).\n Examples:\n >>> m = nn.Dropout1d(p=0.2)\n >>> input = torch.randn(20, 16, 32)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout1d.html", "category": "pytorch docs"}
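The channel-wise behavior of Dropout1d (whole feature maps zeroed, not individual elements) can be sketched in plain Python. The 1/(1-p) rescaling of surviving channels is the standard inverted-dropout convention used during training; this toy is for illustration only:

```python
def dropout1d(channels, p=0.5, rng=None):
    # Each channel (a whole 1D feature map) is dropped independently with
    # probability p; survivors are scaled by 1/(1-p) to keep the expected
    # activation unchanged.
    import random
    rng = rng or random.random
    out = []
    for ch in channels:
        if rng() < p:
            out.append([0.0] * len(ch))
        else:
            out.append([v / (1 - p) for v in ch])
    return out

# Deterministic demo: force the first channel to drop, the second to survive.
draws = iter([0.1, 0.9])
print(dropout1d([[1.0, 2.0], [3.0, 4.0]], p=0.5, rng=lambda: next(draws)))
# [[0.0, 0.0], [6.0, 8.0]]
```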
{"text": "torch.Tensor.asinh_Tensor.asinh_() -> Tensor\n In-place version of \"asinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.asinh_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.ormqrTensor.ormqr(input2, input3, left=True, transpose=False) -> Tensor\n See \"torch.ormqr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ormqr.html", "category": "pytorch docs"}
{"text": "MultiplicativeLRclass torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=-1, verbose=False)\n Multiply the learning rate of each parameter group by the factor\n given in the specified function. When last_epoch=-1, sets initial\n lr as lr.\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * lr_lambda (function or list) -- A function which\n computes a multiplicative factor given an integer parameter\n epoch, or a list of such functions, one for each group in\n optimizer.param_groups.\n * last_epoch (int) -- The index of last epoch. Default:\n -1.\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-\n >>> lmbda = lambda epoch: 0.95\n >>> scheduler = MultiplicativeLR(optimizer, lr_lambda=lmbda)\n >>> for epoch in range(100):\n >>>     train(...)\n >>>     validate(...)\n >>>     scheduler.step()\n get_last_lr()", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiplicativeLR.html", "category": "pytorch docs"}
{"text": ">>> scheduler.step()\n get_last_lr()\n Return last computed learning rate by current scheduler.\n load_state_dict(state_dict)\n Loads the scheduler's state.\n Parameters:\n state_dict (dict) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n state_dict()\n Returns the state of the scheduler as a \"dict\".\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer. The learning rate lambda functions will\n only be saved if they are callable objects and not if they are\n functions or lambdas.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiplicativeLR.html", "category": "pytorch docs"}
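The update rule MultiplicativeLR applies is just repeated multiplication of the current lr by lr_lambda(epoch). A plain-Python sketch of that arithmetic (`multiplicative_lr` is a hypothetical helper, not the scheduler itself):

```python
def multiplicative_lr(base_lr, lr_lambda, epochs):
    # Each scheduler.step() multiplies the current lr by lr_lambda(epoch).
    lr = base_lr
    history = []
    for epoch in range(1, epochs + 1):
        lr *= lr_lambda(epoch)
        history.append(lr)
    return history

# A constant factor of 0.5 halves the lr every epoch from a base of 0.1.
print(multiplicative_lr(0.1, lambda epoch: 0.5, 3))  # [0.05, 0.025, 0.0125]
```

Contrast this with LambdaLR, where the lambda's value is applied to the *initial* lr rather than compounded on the current one.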
{"text": "torch.igammatorch.igamma(input, other, *, out=None) -> Tensor\n Alias for \"torch.special.gammainc()\".", "source": "https://pytorch.org/docs/stable/generated/torch.igamma.html", "category": "pytorch docs"}
{"text": "torch.Tensor.divTensor.div(value, *, rounding_mode=None) -> Tensor\n See \"torch.div()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.div.html", "category": "pytorch docs"}
{"text": "torch.cuda.reset_peak_memory_statstorch.cuda.reset_peak_memory_stats(device=None)\n Resets the \"peak\" stats tracked by the CUDA memory allocator.\n See \"memory_stats()\" for details. Peak stats correspond to the\n \"peak\" key in each individual stat dict.\n Parameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\n Note:\n See Memory management for more details about GPU memory\n management.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.reset_peak_memory_stats.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.embeddingtorch.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)\n A simple lookup table that looks up embeddings in a fixed\n dictionary and size.\n This module is often used to retrieve word embeddings using\n indices. The input to the module is a list of indices, and the\n embedding matrix, and the output is the corresponding word\n embeddings.\n See \"torch.nn.Embedding\" for more details.\n Parameters:\n * input (LongTensor) -- Tensor containing indices into the\n embedding matrix\n * weight (Tensor) -- The embedding matrix with number of\n rows equal to the maximum possible index + 1, and number of\n columns equal to the embedding size\n * padding_idx (int, optional) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html", "category": "pytorch docs"}
{"text": "updated during training, i.e. it remains as a fixed \"pad\".\n * max_norm (float, optional) -- If given, each\n embedding vector with norm larger than \"max_norm\" is\n renormalized to have norm \"max_norm\". Note: this will modify\n \"weight\" in-place.\n * norm_type (float, optional) -- The p of the p-norm\n to compute for the \"max_norm\" option. Default \"2\".\n * scale_grad_by_freq (bool, optional) -- If given,\n this will scale gradients by the inverse of frequency of the\n words in the mini-batch. Default \"False\".\n * sparse (bool, optional) -- If \"True\", gradient\n w.r.t. \"weight\" will be a sparse tensor. See Notes under\n \"torch.nn.Embedding\" for more details regarding sparse\n gradients.\n Return type:\n Tensor\n Shape:\n * Input: LongTensor of arbitrary shape containing the indices to\n extract\n * Weight: Embedding matrix of floating point type with shape", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html", "category": "pytorch docs"}
{"text": "(V, embedding_dim), where V = maximum index + 1 and\n embedding_dim = the embedding size\n * Output: (*, embedding_dim), where * is the input shape\n Examples:\n >>> # a batch of 2 samples of 4 indices each\n >>> input = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])\n >>> # an embedding matrix containing 10 tensors of size 3\n >>> embedding_matrix = torch.rand(10, 3)\n >>> F.embedding(input, embedding_matrix)\n tensor([[[ 0.8490, 0.9625, 0.6753],\n [ 0.9666, 0.7761, 0.6108],\n [ 0.6246, 0.9751, 0.3618],\n [ 0.4161, 0.2419, 0.7383]],\n [[ 0.6246, 0.9751, 0.3618],\n [ 0.0237, 0.7794, 0.0528],\n [ 0.9666, 0.7761, 0.6108],\n [ 0.3385, 0.8612, 0.1867]]])\n >>> # example with padding_idx\n >>> weights = torch.rand(10, 3)\n >>> weights[0, :].zero_()\n >>> embedding_matrix = weights\n >>> input = torch.tensor([[0, 2, 0, 5]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html", "category": "pytorch docs"}
{"text": ">>> input = torch.tensor([[0, 2, 0, 5]])\n >>> F.embedding(input, embedding_matrix, padding_idx=0)\n tensor([[[ 0.0000, 0.0000, 0.0000],\n [ 0.5609, 0.5384, 0.8720],\n [ 0.0000, 0.0000, 0.0000],\n [ 0.6262, 0.2438, 0.7471]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html", "category": "pytorch docs"}
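At its core, the embedding lookup described above is row selection: each index picks one row of the weight matrix. A plain-Python sketch (`embedding_lookup` is a hypothetical helper; note that in torch, padding_idx affects gradients only, and the row itself is still returned):

```python
def embedding_lookup(indices, weight):
    # Each index in the (nested) input selects a row of the weight matrix.
    return [[weight[i] for i in row] for row in indices]

weight = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
print(embedding_lookup([[1, 2], [2, 0]], weight))
# [[[1.0, 1.0], [2.0, 2.0]], [[2.0, 2.0], [0.0, 0.0]]]
```

This also shows why the output shape is the input shape with embedding_dim appended.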
{"text": "torch.fft.hffttorch.fft.hfft(input, n=None, dim=-1, norm=None, *, out=None) -> Tensor\n Computes the one dimensional discrete Fourier transform of a\n Hermitian symmetric \"input\" signal.\n Note:\n \"hfft()\"/\"ihfft()\" are analogous to \"rfft()\"/\"irfft()\". The real\n FFT expects a real signal in the time-domain and gives a\n Hermitian symmetry in the frequency-domain. The Hermitian FFT is\n the opposite; Hermitian symmetric in the time-domain and real-\n valued in the frequency-domain. For this reason, special care\n needs to be taken with the length argument \"n\", in the same way\n as with \"irfft()\".\n Note:\n Because the signal is Hermitian in the time-domain, the result\n will be real in the frequency domain. Note that some input\n frequencies must be real-valued to satisfy the Hermitian\n property. In these cases the imaginary component will be ignored.\n For example, any imaginary component in \"input[0]\" would result", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft.html", "category": "pytorch docs"}
{"text": "in one or more complex frequency terms which cannot be\n represented in a real output and so will always be ignored.\n Note:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"n\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n odd signals will not round-trip properly. So, it is recommended\n to always pass the signal length \"n\".\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension. With default arguments,\n size of the transformed dimension should be (2^n + 1) as argument\n n defaults to even output size = 2 * (transformed_dim_size - 1)\n Parameters:\n * input (Tensor) -- the input tensor representing a half-\n Hermitian signal", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft.html", "category": "pytorch docs"}
{"text": "Hermitian signal\n * n (int, optional) -- Output signal length. This\n determines the length of the real output. If given, the input\n will either be zero-padded or trimmed to this length before\n computing the Hermitian FFT. Defaults to even output:\n \"n=2*(input.size(dim) - 1)\".\n * dim (int, optional) -- The dimension along which to\n take the one dimensional Hermitian FFT.\n * norm (str, optional) --\n Normalization mode. For the forward transform (\"hfft()\"),\n these correspond to:\n * \"\"forward\"\" - normalize by \"1/n\"\n * \"\"backward\"\" - no normalization\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n FFT orthonormal)\n Calling the backward transform (\"ihfft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"ihfft()\" the exact inverse.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft.html", "category": "pytorch docs"}
{"text": "\"ihfft()\" the exact inverse.\n Default is \"\"backward\"\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n Taking a real-valued frequency signal and bringing it into the time\n domain gives Hermitian symmetric output:\n >>> t = torch.linspace(0, 1, 5)\n >>> t\n tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])\n >>> T = torch.fft.ifft(t)\n >>> T\n tensor([ 0.5000-0.0000j, -0.1250-0.1720j, -0.1250-0.0406j, -0.1250+0.0406j,\n -0.1250+0.1720j])\n Note that \"T[1] == T[-1].conj()\" and \"T[2] == T[-2].conj()\" is\n redundant. We can thus compute the forward transform without\n considering negative frequencies:\n >>> torch.fft.hfft(T[:3], n=5)\n tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])\n Like with \"irfft()\", the output length must be given in order to\n recover an even length output:\n >>> torch.fft.hfft(T[:3])\n tensor([0.1250, 0.2809, 0.6250, 0.9691])", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft.html", "category": "pytorch docs"}
{"text": "UpsamplingNearest2dclass torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)\n Applies a 2D nearest neighbor upsampling to an input signal\n composed of several input channels.\n To specify the scale, it takes either the \"size\" or the\n \"scale_factor\" as its constructor argument.\n When \"size\" is given, it is the output size of the image (h, w).\n Parameters:\n * size (int or Tuple[int, int],\n optional) -- output spatial sizes\n * scale_factor (float or Tuple[float,\n float], optional) -- multiplier for spatial size.\n Warning:\n This class is deprecated in favor of \"interpolate()\".\n Shape:\n * Input: (N, C, H_{in}, W_{in})\n * Output: (N, C, H_{out}, W_{out}) where\n H_{out} = \\left\\lfloor H_{in} \\times \\text{scale_factor}\n \\right\\rfloor\n W_{out} = \\left\\lfloor W_{in} \\times \\text{scale_factor}\n \\right\\rfloor\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingNearest2d.html", "category": "pytorch docs"}
{"text": "\\right\\rfloor\n Examples:\n >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)\n >>> input\n tensor([[[[1., 2.],\n [3., 4.]]]])\n >>> m = nn.UpsamplingNearest2d(scale_factor=2)\n >>> m(input)\n tensor([[[[1., 1., 2., 2.],\n [1., 1., 2., 2.],\n [3., 3., 4., 4.],\n [3., 3., 4., 4.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingNearest2d.html", "category": "pytorch docs"}
{"text": "torch.Tensor.abs_Tensor.abs_() -> Tensor\n In-place version of \"abs()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.abs_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.asinhTensor.asinh() -> Tensor\n See \"torch.asinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.asinh.html", "category": "pytorch docs"}
{"text": "torch.subtracttorch.subtract(input, other, *, alpha=1, out=None) -> Tensor\n Alias for \"torch.sub()\".", "source": "https://pytorch.org/docs/stable/generated/torch.subtract.html", "category": "pytorch docs"}
{"text": "quantize_dynamicclass torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8, mapping=None, inplace=False)\n Converts a float model to dynamic (i.e. weights-only) quantized\n model.\n Replaces specified modules with dynamic weight-only quantized\n versions and output the quantized model.\n For simplest usage provide dtype argument that can be float16 or\n qint8. Weight-only quantization by default is performed for layers\n with large weights size - i.e. Linear and RNN variants.\n Fine grained control is possible with qconfig and mapping that\n act similarly to quantize(). If qconfig is provided, the\n dtype argument is ignored.\n Parameters:\n * model -- input model\n * qconfig_spec --\n Either:\n * A dictionary that maps from name or type of submodule to\n quantization configuration, qconfig applies to all\n submodules of a given module unless qconfig for the", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_dynamic.html", "category": "pytorch docs"}
{"text": "submodules are specified (when the submodule already has\n qconfig attribute). Entries in the dictionary need to be\n QConfig instances.\n * A set of types and/or submodule names to apply dynamic\n quantization to, in which case the dtype argument is used\n to specify the bit-width\n * inplace -- carry out model transformations in-place, the\n original module is mutated\n * mapping -- maps type of a submodule to a type of\n corresponding dynamically quantized version with which the\n submodule needs to be replaced", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_dynamic.html", "category": "pytorch docs"}
{"text": "torch.Tensor.istftTensor.istft(n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False)\n See \"torch.istft()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.istft.html", "category": "pytorch docs"}
{"text": "torch.concatenatetorch.concatenate(tensors, axis=0, out=None) -> Tensor\n Alias of \"torch.cat()\".", "source": "https://pytorch.org/docs/stable/generated/torch.concatenate.html", "category": "pytorch docs"}
{"text": "ConvTranspose1dclass torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\n Applies a 1D transposed convolution operator over an input image\n composed of several input planes.\n This module can be seen as the gradient of Conv1d with respect to\n its input. It is also known as a fractionally-strided convolution\n or a deconvolution (although it is not an actual deconvolution\n operation as it does not compute a true inverse of convolution).\n For more information, see the visualizations here and the\n Deconvolutional Networks paper.\n This module supports TensorFloat32.\n On certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n * \"stride\" controls the stride for the cross-correlation.\n * \"padding\" controls the amount of implicit zero padding on both", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"}
{"text": "sides for \"dilation * (kernel_size - 1) - padding\" number of\n points. See note below for details.\n * \"output_padding\" controls the additional size added to one side\n of the output shape. See note below for details.\n * \"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but the\n link here has a nice visualization of what \"dilation\" does.\n * \"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". For example,\n * At groups=1, all inputs are convolved to all outputs.\n * At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n * At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"}
{"text": "with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\n Note:\n The \"padding\" argument effectively adds \"dilation * (kernel_size\n - 1) - padding\" amount of zero padding to both sizes of the\n input. This is set so that when a \"Conv1d\" and a\n \"ConvTranspose1d\" are initialized with same parameters, they are\n inverses of each other in regard to the input and output shapes.\n However, when \"stride > 1\", \"Conv1d\" maps multiple input shapes\n to the same output shape. \"output_padding\" is provided to resolve\n this ambiguity by effectively increasing the calculated output\n shape on one side. Note that \"output_padding\" is only used to\n find output shape, but does not actually add zero-padding to\n output.\n Note:\n In some circumstances when using the CUDA backend with CuDNN,\n this operator may select a nondeterministic algorithm to increase\n performance. If this is undesirable, you can try to make the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"}
{"text": "operation deterministic (potentially at a performance cost) by\n setting \"torch.backends.cudnn.deterministic = True\". Please see\n the notes on Reproducibility for background.\n Parameters:\n * in_channels (int) -- Number of channels in the input\n image\n * out_channels (int) -- Number of channels produced by the\n convolution\n * kernel_size (int or tuple) -- Size of the convolving\n kernel\n * stride (int or tuple, optional) -- Stride of the\n convolution. Default: 1\n * padding (int or tuple, optional) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to\n both sides of the input. Default: 0\n * output_padding (int or tuple, optional) --\n Additional size added to one side of the output shape.\n Default: 0\n * groups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"}
{"text": "\nbias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\ndilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n Shape:\nInput: (N, C_{in}, L_{in}) or (C_{in}, L_{in})\nOutput: (N, C_{out}, L_{out}) or (C_{out}, L_{out}), where\n L_{out} = (L_{in} - 1) \\times \\text{stride} - 2 \\times\n \\text{padding} + \\text{dilation} \\times\n (\\text{kernel_size} - 1) + \\text{output_padding} + 1\n Variables:\nweight (Tensor) -- the learnable weights of the module\n of shape (\\text{in_channels},\n \\frac{\\text{out_channels}}{\\text{groups}},\n \\text{kernel_size}). The values of these weights are sampled\n from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{groups}{C_\\text{out} * \\text{kernel_size}}\nbias (Tensor) -- the learnable bias of the module of\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"}
{"text": "shape (out_channels). If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{out} *\n \\text{kernel_size}}", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"}
{"text": "torch.hypottorch.hypot(input, other, , out=None) -> Tensor\n Given the legs of a right triangle, return its hypotenuse.\n \\text{out}{i} = \\sqrt{\\text{input}^{2} +\n \\text{other}_{i}^{2}}\n The shapes of \"input\" and \"other\" must be broadcastable.\n Parameters:\n * input (Tensor) -- the first input tensor\n * other (Tensor) -- the second input tensor\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.hypot(torch.tensor([4.0]), torch.tensor([3.0, 4.0, 5.0]))\n tensor([5.0000, 5.6569, 6.4031])", "source": "https://pytorch.org/docs/stable/generated/torch.hypot.html", "category": "pytorch docs"}
{"text": "torch.Tensor.asinTensor.asin() -> Tensor\n See \"torch.asin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.asin.html", "category": "pytorch docs"}
{"text": "torch.Tensor.floor_divide_Tensor.floor_divide_(value) -> Tensor\n In-place version of \"floor_divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.floor_divide_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.to_sparse_bscTensor.to_sparse_bsc(blocksize, dense_dim) -> Tensor\n Convert a tensor to a block sparse column (BSC) storage format of\n given blocksize. If the \"self\" is strided, then the number of\n dense dimensions could be specified, and a hybrid BSC tensor will\n be created, with dense_dim dense dimensions and self.dim() - 2 -\n dense_dim batch dimension.\n Parameters:\n * blocksize (list, tuple, \"torch.Size\", optional) -- Block\n size of the resulting BSC tensor. A block size must be a tuple\n of length two such that its items evenly divide the two sparse\n dimensions.\n * dense_dim (int, optional) -- Number of dense\n dimensions of the resulting BSC tensor. This argument should\n be used only if \"self\" is a strided tensor, and must be a\n value between 0 and dimension of \"self\" tensor minus two.\n Example:\n >>> dense = torch.randn(10, 10)\n >>> sparse = dense.to_sparse_csr()", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsc.html", "category": "pytorch docs"}
{"text": "\n\n\nsparse = dense.to_sparse_csr()\n >>> sparse_bsc = sparse.to_sparse_bsc((5, 5))\n >>> sparse_bsc.row_indices()\n tensor([0, 1, 0, 1])\n >>> dense = torch.zeros(4, 3, 1)\n >>> dense[0:2, 0] = dense[0:2, 2] = dense[2:4, 1] = 1\n >>> dense.to_sparse_bsc((2, 1), 1)\n tensor(ccol_indices=tensor([0, 1, 2, 3]),\n row_indices=tensor([0, 1, 0]),\n values=tensor([[[[1.]],\n [[1.]]],\n [[[1.]],\n [[1.]]],\n [[[1.]],\n [[1.]]]]), size=(4, 3, 1), nnz=3,\n layout=torch.sparse_bsc)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsc.html", "category": "pytorch docs"}
{"text": "torch.bartlett_windowtorch.bartlett_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Bartlett window function.\n w[n] = 1 - \\left| \\frac{2n}{N-1} - 1 \\right| = \\begin{cases}\n \\frac{2n}{N - 1} & \\text{if } 0 \\leq n \\leq \\frac{N - 1}{2} \\\n 2 - \\frac{2n}{N - 1} & \\text{if } \\frac{N - 1}{2} < n < N \\\n \\end{cases},\n where N is the full window size.\n The input \"window_length\" is a positive integer controlling the\n returned window size. \"periodic\" flag determines whether the\n returned window trims off the last duplicate value from the\n symmetric window and is ready to be used as a periodic window with\n functions like \"torch.stft()\". Therefore, if \"periodic\" is true,\n the N in above formula is in fact \\text{window_length} + 1. Also,\n we always have \"torch.bartlett_window(L, periodic=True)\" equal to\n \"torch.bartlett_window(L + 1, periodic=False)[:-1])\".\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.bartlett_window.html", "category": "pytorch docs"}
{"text": "Note:\n If \"window_length\" =1, the returned window contains a single\n value 1.\n Parameters:\n * window_length (int) -- the size of returned window\n * periodic (bool, optional) -- If True, returns a\n window to be used as periodic function. If False, return a\n symmetric window.\n Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). Only floating point\n types are supported.\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU", "source": "https://pytorch.org/docs/stable/generated/torch.bartlett_window.html", "category": "pytorch docs"}
{"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Returns:\n A 1-D tensor of size (\\text{window_length},) containing the\n window\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.bartlett_window.html", "category": "pytorch docs"}
{"text": "torch.cuda.set_streamtorch.cuda.set_stream(stream)\n Sets the current stream.This is a wrapper API to set the stream.\n Usage of this function is discouraged in favor of the \"stream\"\n context manager.\n Parameters:\n stream (Stream) -- selected stream. This function is a no-\n op if this argument is \"None\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_stream.html", "category": "pytorch docs"}
{"text": "torch.equaltorch.equal(input, other) -> bool\n \"True\" if two tensors have the same size and elements, \"False\"\n otherwise.\n Example:\n >>> torch.equal(torch.tensor([1, 2]), torch.tensor([1, 2]))\n True", "source": "https://pytorch.org/docs/stable/generated/torch.equal.html", "category": "pytorch docs"}
{"text": "torch.i0torch.i0(input, *, out=None) -> Tensor\n Alias for \"torch.special.i0()\".", "source": "https://pytorch.org/docs/stable/generated/torch.i0.html", "category": "pytorch docs"}
{"text": "BasePruningMethodclass torch.nn.utils.prune.BasePruningMethod\n Abstract base class for creation of new pruning techniques.\n Provides a skeleton for customization requiring the overriding of\n methods such as \"compute_mask()\" and \"apply()\".\n classmethod apply(module, name, args, importance_scores=None, kwargs)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * args -- arguments passed on to a subclass of\n \"BasePruningMethod\"\n * importance_scores (torch.Tensor*) -- tensor of\n importance scores (of same shape as module parameter) used\n to compute mask for pruning. The values in this tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html", "category": "pytorch docs"}
{"text": "indicate the importance of the corresponding elements in\n the parameter being pruned. If unspecified or None, the\n parameter will be used in its place.\n * kwargs -- keyword arguments passed on to a subclass of\n a \"BasePruningMethod\"\n apply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n Parameters:\n module (nn.Module) -- module containing the tensor to\n prune\n Returns:\n pruned version of the input tensor\n Return type:\n pruned_tensor (torch.Tensor)\n abstract compute_mask(t, default_mask)\n Computes and returns a mask for the input tensor \"t\". Starting\n from a base \"default_mask\" (which should be a mask of ones if\n the tensor has not been pruned yet), generate a random mask to", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html", "category": "pytorch docs"}
{"text": "apply on top of the \"default_mask\" according to the specific\n pruning method recipe.\n Parameters:\n * t (torch.Tensor) -- tensor representing the\n importance scores of the\n * prune. (parameter to) --\n * default_mask (torch.Tensor) -- Base mask from\n previous pruning\n * iterations --\n * is (that need to be respected after the new mask) --\n * t. (applied. Same dims as) --\n Returns:\n mask to apply to \"t\", of same dims as \"t\"\n Return type:\n mask (torch.Tensor)\n prune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * t (torch.Tensor) -- tensor to prune (of same\n dimensions as \"default_mask\").\n * importance_scores (torch.Tensor) -- tensor of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html", "category": "pytorch docs"}
{"text": "importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * default_mask (torch.Tensor, optional) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n Returns:\n pruned version of tensor \"t\".\n remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n Pruning itself is NOT undone or reversed!", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html", "category": "pytorch docs"}
{"text": "torch.Tensor.triangular_solveTensor.triangular_solve(A, upper=True, transpose=False, unitriangular=False)\n See \"torch.triangular_solve()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.triangular_solve.html", "category": "pytorch docs"}
{"text": "torch.Tensor.addcdiv_Tensor.addcdiv_(tensor1, tensor2, *, value=1) -> Tensor\n In-place version of \"addcdiv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addcdiv_.html", "category": "pytorch docs"}
{"text": "LinearReLUclass torch.ao.nn.intrinsic.LinearReLU(linear, relu)\n This is a sequential container which calls the Linear and ReLU\n modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.LinearReLU.html", "category": "pytorch docs"}
{"text": "torch.logspacetorch.logspace(start, end, steps, base=10.0, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\n Creates a one-dimensional tensor of size \"steps\" whose values are\n evenly spaced from {{\\text{{base}}}}^{{\\text{{start}}}} to\n {{\\text{{base}}}}^{{\\text{{end}}}}, inclusive, on a logarithmic\n scale with base \"base\". That is, the values are:\n (\\text{base}^{\\text{start}}, \\text{base}^{(\\text{start} +\n \\frac{\\text{end} - \\text{start}}{ \\text{steps} - 1})}, \\ldots,\n \\text{base}^{(\\text{start} + (\\text{steps} - 2) *\n \\frac{\\text{end} - \\text{start}}{ \\text{steps} - 1})},\n \\text{base}^{\\text{end}})\n From PyTorch 1.11 logspace requires the steps argument. Use\n steps=100 to restore the previous behavior.\n Parameters:\n * start (float) -- the starting value for the set of\n points\n * end (float*) -- the ending value for the set of points", "source": "https://pytorch.org/docs/stable/generated/torch.logspace.html", "category": "pytorch docs"}
{"text": "\nsteps (int) -- size of the constructed tensor\nbase (float, optional) -- base of the logarithm\n function. Default: \"10.0\".\n Keyword Arguments:\nout (Tensor, optional) -- the output tensor.\ndtype (torch.dtype, optional) -- the data type to\n perform the computation in. Default: if None, uses the global\n default dtype (see torch.get_default_dtype()) when both\n \"start\" and \"end\" are real, and corresponding complex dtype\n when either is complex.\nlayout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\ndevice (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.logspace.html", "category": "pytorch docs"}
{"text": "tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Example:\n >>> torch.logspace(start=-10, end=10, steps=5)\n tensor([ 1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])\n >>> torch.logspace(start=0.1, end=1.0, steps=5)\n tensor([ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000])\n >>> torch.logspace(start=0.1, end=1.0, steps=1)\n tensor([1.2589])\n >>> torch.logspace(start=2, end=2, steps=1, base=2)\n tensor([4.0])", "source": "https://pytorch.org/docs/stable/generated/torch.logspace.html", "category": "pytorch docs"}
{"text": "max_pool2dclass torch.ao.nn.quantized.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\n Applies a 2D max pooling over a quantized input signal composed of\n several quantized input planes.\n Note:\n The input quantization parameters are propagated to the output.\n See \"MaxPool2d\" for details.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.max_pool2d.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.hammingtorch.signal.windows.hamming(M, , sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes the Hamming window.\n The Hamming window is defined as follows:\n w_n = \\alpha - \\beta\\ \\cos \\left( \\frac{2 \\pi n}{M - 1} \\right)\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * alpha (float, optional) -- The coefficient \\alpha in\n the equation above.\n * beta (float, optional*) -- The coefficient \\beta in\n the equation above.", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hamming.html", "category": "pytorch docs"}
{"text": "the equation above.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric Hamming window.\n >>> torch.signal.windows.hamming(10)", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hamming.html", "category": "pytorch docs"}
{"text": "\n\n\ntorch.signal.windows.hamming(10)\n tensor([0.0800, 0.1876, 0.4601, 0.7700, 0.9723, 0.9723, 0.7700, 0.4601, 0.1876, 0.0800])\n >>> # Generates a periodic Hamming window.\n >>> torch.signal.windows.hamming(10, sym=False)\n tensor([0.0800, 0.1679, 0.3979, 0.6821, 0.9121, 1.0000, 0.9121, 0.6821, 0.3979, 0.1679])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hamming.html", "category": "pytorch docs"}
{"text": "torch.fft.fftntorch.fft.fftn(input, s=None, dim=None, norm=None, , out=None) -> Tensor\n Computes the N dimensional discrete Fourier transform of \"input\".\n Note:\n The Fourier domain representation of any real signal satisfies\n the Hermitian property: \"X[i_1, ..., i_n] = conj(X[-i_1, ...,\n -i_n])\". This function always returns all positive and negative\n frequency terms even though, for real inputs, half of these\n values are redundant. \"rfftn()\" returns the more compact one-\n sided representation where only the positive frequencies of the\n last dimension are returned.\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions.\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], *optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftn.html", "category": "pytorch docs"}
{"text": "either be zero-padded or trimmed to the length \"s[i]\" before\n computing the FFT. If a length \"-1\" is specified, no padding\n is done in that dimension. Default: \"s = [input.size(d) for d\n in dim]\"\n * dim (Tuple[int], optional) -- Dimensions to be\n transformed. Default: all dimensions, or the last \"len(s)\"\n dimensions if \"s\" is given.\n * norm (str, optional) --\n Normalization mode. For the forward transform (\"fftn()\"),\n these correspond to:\n * \"\"forward\"\" - normalize by \"1/n\"\n * \"\"backward\"\" - no normalization\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the FFT\n orthonormal)\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"ifftn()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. This is required to make \"ifftn()\" the exact\n inverse.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftn.html", "category": "pytorch docs"}
{"text": "inverse.\n Default is \"\"backward\"\" (no normalization).\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nx = torch.rand(10, 10, dtype=torch.complex64)\nfftn = torch.fft.fftn(x)\n The discrete Fourier transform is separable, so \"fftn()\" here is\n equivalent to two one-dimensional \"fft()\" calls:\ntwo_ffts = torch.fft.fft(torch.fft.fft(x, dim=0), dim=1)\ntorch.testing.assert_close(fftn, two_ffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftn.html", "category": "pytorch docs"}
{"text": "torch.atanhtorch.atanh(input, , out=None) -> Tensor\n Returns a new tensor with the inverse hyperbolic tangent of the\n elements of \"input\".\n Note:\n The domain of the inverse hyperbolic tangent is (-1, 1) and\n values outside this range will be mapped to \"NaN\", except for the\n values 1 and -1 for which the output is mapped to +/-INF\n respectively.\n \\text{out}{i} = \\tanh^{-1}(\\text{input})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4).uniform_(-1, 1)\n >>> a\n tensor([ -0.9385, 0.2968, -0.8591, -0.1871 ])\n >>> torch.atanh(a)\n tensor([ -1.7253, 0.3060, -1.2899, -0.1893 ])", "source": "https://pytorch.org/docs/stable/generated/torch.atanh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.mul_Tensor.mul_(value) -> Tensor\n In-place version of \"mul()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mul_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logical_orTensor.logical_or() -> Tensor\n See \"torch.logical_or()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_or.html", "category": "pytorch docs"}
{"text": "MinMaxObserverclass torch.quantization.observer.MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07)\n Observer module for computing the quantization parameters based on\n the running min and max values.\n This observer uses the tensor min/max statistics to compute the\n quantization parameters. The module records the running minimum and\n maximum of incoming tensors, and uses this statistic to compute the\n quantization parameters.\n Parameters:\n * dtype -- dtype argument to the quantize node needed to\n implement the reference model spec.\n * qscheme -- Quantization scheme to be used\n * reduce_range -- Reduces the range of the quantized data\n type by 1 bit\n * quant_min -- Minimum quantization value. If unspecified,\n it will follow the 8-bit setup.\n * quant_max -- Maximum quantization value. If unspecified,", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MinMaxObserver.html", "category": "pytorch docs"}
{"text": "it will follow the 8-bit setup.\n * eps (Tensor) -- Epsilon value for float32, Defaults to\n torch.finfo(torch.float32).eps.\n Given running min/max as x_\\text{min} and x_\\text{max}, scale s and\n zero point z are computed as:\n The running minimum/maximum x_\\text{min/max} is computed as:\n \\begin{array}{ll} x_\\text{min} &= \\begin{cases} \\min(X) &\n \\text{if~}x_\\text{min} = \\text{None} \\\n \\min\\left(x_\\text{min}, \\min(X)\\right) & \\text{otherwise}\n \\end{cases}\\ x_\\text{max} &= \\begin{cases} \\max(X) &\n \\text{if~}x_\\text{max} = \\text{None} \\\n \\max\\left(x_\\text{max}, \\max(X)\\right) & \\text{otherwise}\n \\end{cases}\\ \\end{array}\n where X is the observed tensor.\n The scale s and zero point z are then computed as:\n \\begin{aligned} \\text{if Symmetric:}&\\ &s = 2\n \\max(|x_\\text{min}|, x_\\text{max}) / \\left( Q_\\text{max}\n - Q_\\text{min} \\right) \\ &z = \\begin{cases} 0 &", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MinMaxObserver.html", "category": "pytorch docs"}
{"text": "\\text{if dtype is qint8} \\ 128 & \\text{otherwise}\n \\end{cases}\\ \\text{Otherwise:}&\\ &s = \\left(\n x_\\text{max} - x_\\text{min} \\right ) / \\left(\n Q_\\text{max} - Q_\\text{min} \\right ) \\ &z =\n Q_\\text{min} - \\text{round}(x_\\text{min} / s) \\end{aligned}\n where Q_\\text{min} and Q_\\text{max} are the minimum and maximum of\n the quantized data type.\n Warning:\n \"dtype\" can only take \"torch.qint8\" or \"torch.quint8\".\n Note:\n If the running minimum equals to the running maximum, the scale\n and zero_point are set to 1.0 and 0.\n calculate_qparams()\n Calculates the quantization parameters.\n forward(x_orig)\n Records the running minimum and maximum of \"x\".\n reset_min_max_vals()\n Resets the min/max values.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MinMaxObserver.html", "category": "pytorch docs"}
{"text": "torch.Tensor.is_floating_pointTensor.is_floating_point() -> bool\n Returns True if the data type of \"self\" is a floating point data\n type.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_floating_point.html", "category": "pytorch docs"}
{"text": "torch.coshtorch.cosh(input, , out=None) -> Tensor\n Returns a new tensor with the hyperbolic cosine of the elements of\n \"input\".\n \\text{out}{i} = \\cosh(\\text{input})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.1632, 1.1835, -0.6979, -0.7325])\n >>> torch.cosh(a)\n tensor([ 1.0133, 1.7860, 1.2536, 1.2805])\n Note:\n When \"input\" is on the CPU, the implementation of torch.cosh may\n use the Sleef library, which rounds very large results to\n infinity or negative infinity. See here for details.", "source": "https://pytorch.org/docs/stable/generated/torch.cosh.html", "category": "pytorch docs"}
{"text": "torch.Tensor.log2_Tensor.log2_() -> Tensor\n In-place version of \"log2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log2_.html", "category": "pytorch docs"}
{"text": "torch.msorttorch.msort(input, , out=None) -> Tensor\n Sorts the elements of the \"input\" tensor along its first dimension\n in ascending order by value.\n Note:\n torch.msort(t) is equivalent to torch.sort(t, dim=0)[0]. See\n also \"torch.sort()\".\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> t = torch.randn(3, 4)\n >>> t\n tensor([[-0.1321, 0.4370, -1.2631, -1.1289],\n [-2.0527, -1.1250, 0.2275, 0.3077],\n [-0.0881, -0.1259, -0.5495, 1.0284]])\n >>> torch.msort(t)\n tensor([[-2.0527, -1.1250, -1.2631, -1.1289],\n [-0.1321, -0.1259, -0.5495, 0.3077],\n [-0.0881, 0.4370, 0.2275, 1.0284]])", "source": "https://pytorch.org/docs/stable/generated/torch.msort.html", "category": "pytorch docs"}
{"text": "GroupNormclass torch.ao.nn.quantized.GroupNorm(num_groups, num_channels, weight, bias, scale, zero_point, eps=1e-05, affine=True, device=None, dtype=None)\n This is the quantized version of \"GroupNorm\".\n Additional args:\n * scale - quantization scale of the output, type: double.\n * zero_point - quantization zero point of the output, type:\n long.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.GroupNorm.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.cosine_similaritytorch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-8) -> Tensor\n Returns cosine similarity between \"x1\" and \"x2\", computed along\n dim. \"x1\" and \"x2\" must be broadcastable to a common shape. \"dim\"\n refers to the dimension in this common shape. Dimension \"dim\" of\n the output is squeezed (see \"torch.squeeze()\"), resulting in the\n output tensor having 1 fewer dimension.\n \\text{similarity} = \\dfrac{x_1 \\cdot x_2}{\\max(\\Vert x_1 \\Vert\n _2 \\cdot \\Vert x_2 \\Vert _2, \\epsilon)}\n Supports type promotion.\n Parameters:\n * x1 (Tensor) -- First input.\n * x2 (Tensor) -- Second input.\n * dim (int, optional) -- Dimension along which cosine\n similarity is computed. Default: 1\n * eps (float, optional) -- Small value to avoid\n division by zero. Default: 1e-8\n Example:\n >>> input1 = torch.randn(100, 128)\n >>> input2 = torch.randn(100, 128)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cosine_similarity.html", "category": "pytorch docs"}
{"text": "\n\n\ninput2 = torch.randn(100, 128)\n >>> output = F.cosine_similarity(input1, input2)\n >>> print(output)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cosine_similarity.html", "category": "pytorch docs"}
{"text": "torch._foreach_log2torch._foreach_log2(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.log2()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log2.html", "category": "pytorch docs"}
{"text": "torch.Tensor.copysign_Tensor.copysign_(other) -> Tensor\n In-place version of \"copysign()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.copysign_.html", "category": "pytorch docs"}
{"text": "torch._foreach_reciprocaltorch._foreach_reciprocal(self: List[Tensor]) -> List[Tensor]\n Apply \"torch.reciprocal()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_reciprocal.html", "category": "pytorch docs"}
{"text": "torch.Tensor.divideTensor.divide(value, *, rounding_mode=None) -> Tensor\n See \"torch.divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.divide.html", "category": "pytorch docs"}
{"text": "torch.signal.windows.general_cosinetorch.signal.windows.general_cosine(M, , a, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\n Computes the general cosine window.\n The general cosine window is defined as follows:\n w_n = \\sum^{M-1}_{i=0} (-1)^i a_i \\cos{ \\left( \\frac{2 \\pi i\n n}{M - 1}\\right)}\n The window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\n Parameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\n Keyword Arguments:\n * a (Iterable) -- the coefficients associated to each of\n the cosine functions.\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True*.", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_cosine.html", "category": "pytorch docs"}
{"text": "design. Default: True.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * layout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n * device (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * requires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n Return type:\n Tensor\n Examples:\n >>> # Generates a symmetric general cosine window with 3 coefficients.\n >>> torch.signal.windows.general_cosine(10, a=[0.46, 0.23, 0.31], sym=True)", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_cosine.html", "category": "pytorch docs"}
{"text": "tensor([0.5400, 0.3376, 0.1288, 0.4200, 0.9136, 0.9136, 0.4200, 0.1288, 0.3376, 0.5400])\n >>> # Generates a periodic general cosine window wit 2 coefficients.\n >>> torch.signal.windows.general_cosine(10, a=[0.5, 1 - 0.5], sym=False)\n tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_cosine.html", "category": "pytorch docs"}
{"text": "ConvReLU3dclass torch.ao.nn.intrinsic.ConvReLU3d(conv, relu)\n This is a sequential container which calls the Conv3d and ReLU\n modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU3d.html", "category": "pytorch docs"}
{"text": "torch.block_diagtorch.block_diag(tensors)\n Create a block diagonal matrix from provided tensors.\n Parameters:\n tensors -- One or more tensors with 0, 1, or 2 dimensions.\n Returns:\n A 2 dimensional tensor with all the input tensors arranged in\n order such that their upper left and lower right corners are\n diagonally adjacent. All other elements are set to 0.\n Return type:\n Tensor\n Example:\n >>> import torch\n >>> A = torch.tensor([[0, 1], [1, 0]])\n >>> B = torch.tensor([[3, 4, 5], [6, 7, 8]])\n >>> C = torch.tensor(7)\n >>> D = torch.tensor([1, 2, 3])\n >>> E = torch.tensor([[4], [5], [6]])\n >>> torch.block_diag(A, B, C, D, E)\n tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 3, 4, 5, 0, 0, 0, 0, 0],\n [0, 0, 6, 7, 8, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 7, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 1, 2, 3, 0],", "source": "https://pytorch.org/docs/stable/generated/torch.block_diag.html", "category": "pytorch docs"}
{"text": "[0, 0, 0, 0, 0, 0, 1, 2, 3, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 4],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 5],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 6]])", "source": "https://pytorch.org/docs/stable/generated/torch.block_diag.html", "category": "pytorch docs"}
{"text": "torch.Tensor.uniqueTensor.unique(sorted=True, return_inverse=False, return_counts=False, dim=None)\n Returns the unique elements of the input tensor.\n See \"torch.unique()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unique.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.max_pool3dtorch.nn.functional.max_pool3d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\n Applies a 3D max pooling over an input signal composed of several\n input planes.\n Note:\n The order of \"ceil_mode\" and \"return_indices\" is different from\n what seen in \"MaxPool3d\", and will change in a future release.\n See \"MaxPool3d\" for details.\n Parameters:\n * input -- input tensor (\\text{minibatch} ,\n \\text{in_channels} , iD, iH , iW), minibatch dim optional.\n * kernel_size -- size of the pooling region. Can be a single\n number or a tuple (kT, kH, kW)\n * stride -- stride of the pooling operation. Can be a single\n number or a tuple (sT, sH, sW). Default: \"kernel_size\"\n * padding -- Implicit negative infinity padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool3d.html", "category": "pytorch docs"}
{"text": "\ndilation -- The stride between elements within a sliding\n window, must be > 0.\nceil_mode -- If \"True\", will use ceil instead of floor\n to compute the output shape. This ensures that every element\n in the input tensor is covered by a sliding window.\nreturn_indices -- If \"True\", will return the argmax along\n with the max values. Useful for\n \"torch.nn.functional.max_unpool3d\" later\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool3d.html", "category": "pytorch docs"}
{"text": "enable_gradclass torch.enable_grad\n Context-manager that enables gradient calculation.\n Enables gradient calculation, if it has been disabled via \"no_grad\"\n or \"set_grad_enabled\".\n This context manager is thread local; it will not affect\n computation in other threads.\n Also functions as a decorator. (Make sure to instantiate with\n parenthesis.)\n Note:\n enable_grad is one of several mechanisms that can enable or\n disable gradients locally see Locally disabling gradient\n computation for more information on how they compare.\n Note:\n This API does not apply to forward-mode AD.\n Example::\n >>> x = torch.tensor([1.], requires_grad=True)\n >>> with torch.no_grad():\n ... with torch.enable_grad():\n ... y = x * 2\n >>> y.requires_grad\n True\n >>> y.backward()\n >>> x.grad\n tensor([2.])\n >>> @torch.enable_grad()\n ... def doubler(x):\n ... return x * 2\n >>> with torch.no_grad():", "source": "https://pytorch.org/docs/stable/generated/torch.enable_grad.html", "category": "pytorch docs"}
{"text": "\n\n\nwith torch.no_grad():\n ... z = doubler(x)\n >>> z.requires_grad\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.enable_grad.html", "category": "pytorch docs"}
{"text": "InstanceNorm2dclass torch.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\n Applies Instance Normalization over a 4D input (a mini-batch of 2D\n inputs with additional channel dimension) as described in the paper\n Instance Normalization: The Missing Ingredient for Fast\n Stylization.\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n The mean and standard-deviation are calculated per-dimension\n separately for each object in a mini-batch. \\gamma and \\beta are\n learnable parameter vectors of size C (where C is the input\n size) if \"affine\" is \"True\". The standard-deviation is calculated\n via the biased estimator, equivalent to torch.var(input,\n unbiased=False).\n By default, this layer uses instance statistics computed from input\n data in both training and evaluation modes.\n If \"track_running_stats\" is set to \"True\", during training this", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html", "category": "pytorch docs"}
{"text": "layer keeps running estimates of its computed mean and variance,\n which are then used for normalization during evaluation. The\n running estimates are kept with a default \"momentum\" of 0.1.\n Note:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n Note:\n \"InstanceNorm2d\" and \"LayerNorm\" are very similar, but have some\n subtle differences. \"InstanceNorm2d\" is applied on each channel\n of channeled data like RGB images, but \"LayerNorm\" is usually\n applied on entire sample and often in NLP tasks. Additionally,\n \"LayerNorm\" applies elementwise affine transform, while\n \"InstanceNorm2d\" usually don't apply affine transform.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html", "category": "pytorch docs"}
{"text": "Parameters:\n * num_features (int) -- C from an expected input of size\n (N, C, H, W) or (C, H, W)\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * momentum (float) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n * affine (bool) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,\n initialized the same way as done for batch normalization.\n Default: \"False\".\n * track_running_stats (bool) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n Shape:\n * Input: (N, C, H, W) or (C, H, W)\n * Output: (N, C, H, W) or (C, H, W) (same shape as input)\n Examples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html", "category": "pytorch docs"}
{"text": "Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm2d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm2d(100, affine=True)\n >>> input = torch.randn(20, 100, 35, 45)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html", "category": "pytorch docs"}
{"text": "torch.foreach_log_torch._foreach_log(self: List[Tensor]) -> None\n Apply \"torch.log()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log_.html", "category": "pytorch docs"}
{"text": "torch.choleskytorch.cholesky(input, upper=False, *, out=None) -> Tensor\n Computes the Cholesky decomposition of a symmetric positive-\n definite matrix A or for batches of symmetric positive-definite\n matrices.\n If \"upper\" is \"True\", the returned matrix \"U\" is upper-triangular,\n and the decomposition has the form:\n A = U^TU\n If \"upper\" is \"False\", the returned matrix \"L\" is lower-triangular,\n and the decomposition has the form:\n A = LL^T\n If \"upper\" is \"True\", and A is a batch of symmetric positive-\n definite matrices, then the returned tensor will be composed of\n upper-triangular Cholesky factors of each of the individual\n matrices. Similarly, when \"upper\" is \"False\", the returned tensor\n will be composed of lower-triangular Cholesky factors of each of\n the individual matrices.\n Warning:\n \"torch.cholesky()\" is deprecated in favor of\n \"torch.linalg.cholesky()\" and will be removed in a future PyTorch", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky.html", "category": "pytorch docs"}
{"text": "release.\"L = torch.cholesky(A)\" should be replaced with\n L = torch.linalg.cholesky(A)\n \"U = torch.cholesky(A, upper=True)\" should be replaced with\n U = torch.linalg.cholesky(A).mH\n This transform will produce equivalent results for all valid\n (symmetric positive definite) inputs.\n Parameters:\n * input (Tensor) -- the input tensor A of size (, n, n)\n where *** is zero or more batch dimensions consisting of\n symmetric positive-definite matrices.\n * upper (bool, optional) -- flag that indicates\n whether to return a upper or lower triangular matrix. Default:\n \"False\"\n Keyword Arguments:\n out (Tensor, optional*) -- the output matrix\n Example:\n >>> a = torch.randn(3, 3)\n >>> a = a @ a.mT + 1e-3 # make symmetric positive-definite\n >>> l = torch.cholesky(a)\n >>> a\n tensor([[ 2.4112, -0.7486, 1.4551],\n [-0.7486, 1.3544, 0.1294],\n [ 1.4551, 0.1294, 1.6724]])", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky.html", "category": "pytorch docs"}
{"text": "[ 1.4551, 0.1294, 1.6724]])\n >>> l\n tensor([[ 1.5528, 0.0000, 0.0000],\n [-0.4821, 1.0592, 0.0000],\n [ 0.9371, 0.5487, 0.7023]])\n >>> l @ l.mT\n tensor([[ 2.4112, -0.7486, 1.4551],\n [-0.7486, 1.3544, 0.1294],\n [ 1.4551, 0.1294, 1.6724]])\n >>> a = torch.randn(3, 2, 2) # Example for batched input\n >>> a = a @ a.mT + 1e-03 # make symmetric positive-definite\n >>> l = torch.cholesky(a)\n >>> z = l @ l.mT\n >>> torch.dist(z, a)\n tensor(2.3842e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky.html", "category": "pytorch docs"}
{"text": "torch.narrow_copytorch.narrow_copy(input, dim, start, length, , out=None) -> Tensor\n Same as \"Tensor.narrow()\" except this returns a copy rather than\n shared storage. This is primarily for sparse tensors, which do not\n have a shared-storage narrow method.\n Parameters:\n * input (Tensor) -- the tensor to narrow\n * dim (int) -- the dimension along which to narrow\n * start (int) -- index of the element to start the\n narrowed dimension from. Can be negative, which means indexing\n from the end of dim\n * length (int) -- length of the narrowed dimension, must\n be weakly positive\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n >>> torch.narrow_copy(x, 0, 0, 2)\n tensor([[ 1, 2, 3],\n [ 4, 5, 6]])\n >>> torch.narrow_copy(x, 1, 1, 2)\n tensor([[ 2, 3],\n [ 5, 6],", "source": "https://pytorch.org/docs/stable/generated/torch.narrow_copy.html", "category": "pytorch docs"}
{"text": "tensor([[ 2, 3],\n [ 5, 6],\n [ 8, 9]])\n >>> s = torch.arange(16).reshape(2, 2, 2, 2).to_sparse(2)\n >>> torch.narrow_copy(s, 0, 0, 1)\n tensor(indices=tensor([[0, 0],\n [0, 1]]),\n values=tensor([[[0, 1],\n [2, 3]],\n [[4, 5],\n [6, 7]]]),\n size=(1, 2, 2, 2), nnz=2, layout=torch.sparse_coo)\n See also: \"torch.narrow()\" for a non copy variant", "source": "https://pytorch.org/docs/stable/generated/torch.narrow_copy.html", "category": "pytorch docs"}
{"text": "torch.Tensor.logical_notTensor.logical_not() -> Tensor\n See \"torch.logical_not()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_not.html", "category": "pytorch docs"}
{"text": "torch.nn.utils.parametrizations.spectral_normtorch.nn.utils.parametrizations.spectral_norm(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None)\n Applies spectral normalization to a parameter in the given module.\n \\mathbf{W}{SN} = \\dfrac{\\mathbf{W}}{\\sigma(\\mathbf{W})},\n \\sigma(\\mathbf{W}) = \\max: \\mathbf{h} \\ne 0}\n \\dfrac{|\\mathbf{W} \\mathbf{h}|2}{|\\mathbf{h}|_2}\n When applied on a vector, it simplifies to\n \\mathbf{x} = \\dfrac{\\mathbf{x}}{|\\mathbf{x}|_2}\n Spectral normalization stabilizes the training of discriminators\n (critics) in Generative Adversarial Networks (GANs) by reducing the\n Lipschitz constant of the model. \\sigma is approximated performing\n one iteration of the power method every time the weight is\n accessed. If the dimension of the weight tensor is greater than 2,\n it is reshaped to 2D in power iteration method to get spectral\n norm.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.spectral_norm.html", "category": "pytorch docs"}
{"text": "norm.\n See Spectral Normalization for Generative Adversarial Networks .\n Note:\n This function is implemented using the parametrization\n functionality in \"register_parametrization()\". It is a\n reimplementation of \"torch.nn.utils.spectral_norm()\".\n Note:\n When this constraint is registered, the singular vectors\n associated to the largest singular value are estimated rather\n than sampled at random. These are then updated performing\n \"n_power_iterations\" of the power method whenever the tensor is\n accessed with the module on training mode.\n Note:\n If the _SpectralNorm module, i.e.,\n module.parametrization.weight[idx], is in training mode on\n removal, it will perform another power iteration. If you'd like\n to avoid this iteration, set the module to eval mode before its\n removal.\n Parameters:\n * module (nn.Module) -- containing module\n * name (str, optional) -- name of weight parameter.\n Default: \"\"weight\"\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.spectral_norm.html", "category": "pytorch docs"}
{"text": "Default: \"\"weight\"\".\n * n_power_iterations (int, optional) -- number of\n power iterations to calculate spectral norm. Default: \"1\".\n * eps (float, optional) -- epsilon for numerical\n stability in calculating norms. Default: \"1e-12\".\n * dim (int, optional) -- dimension corresponding to\n number of outputs. Default: \"0\", except for modules that are\n instances of ConvTranspose{1,2,3}d, when it is \"1\"\n Returns:\n The original module with a new parametrization registered to the\n specified weight\n Return type:\n Module\n Example:\n >>> snm = spectral_norm(nn.Linear(20, 40))\n >>> snm\n ParametrizedLinear(\n in_features=20, out_features=40, bias=True\n (parametrizations): ModuleDict(\n (weight): ParametrizationList(\n (0): _SpectralNorm()\n )\n )\n )\n >>> torch.linalg.matrix_norm(snm.weight, 2)\n tensor(1.0081, grad_fn=)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.spectral_norm.html", "category": "pytorch docs"}
{"text": "torch.sparse.spdiagstorch.sparse.spdiags(diagonals, offsets, shape, layout=None) -> Tensor\n Creates a sparse 2D tensor by placing the values from rows of\n \"diagonals\" along specified diagonals of the output\n The \"offsets\" tensor controls which diagonals are set.\n * If \"offsets[i]\" = 0, it is the main diagonal\n * If \"offsets[i]\" < 0, it is below the main diagonal\n * If \"offsets[i]\" > 0, it is above the main diagonal\n The number of rows in \"diagonals\" must match the length of\n \"offsets\", and an offset may not be repeated.\n Parameters:\n * diagonals (Tensor) -- Matrix storing diagonals row-wise\n * offsets (Tensor) -- The diagonals to be set, stored as a\n vector\n * shape (2-tuple of ints) -- The desired shape of the\n result\n Keyword Arguments:\n layout (\"torch.layout\", optional) -- The desired layout of\n the returned tensor. \"torch.sparse_coo\", \"torch.sparse_csc\" and", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html", "category": "pytorch docs"}
{"text": "\"torch.sparse_csr\" are supported. Default: \"torch.sparse_coo\"\n Examples:\n Set the main and first two lower diagonals of a matrix:\n >>> diags = torch.arange(9).reshape(3, 3)\n >>> diags\n tensor([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]])\n >>> s = torch.sparse.spdiags(diags, torch.tensor([0, -1, -2]), (3, 3))\n >>> s\n tensor(indices=tensor([[0, 1, 2, 1, 2, 2],\n [0, 1, 2, 0, 1, 0]]),\n values=tensor([0, 1, 2, 3, 4, 6]),\n size=(3, 3), nnz=6, layout=torch.sparse_coo)\n >>> s.to_dense()\n tensor([[0, 0, 0],\n [3, 1, 0],\n [6, 4, 2]])\n Change the output layout:\n >>> diags = torch.arange(9).reshape(3, 3)\n >>> diags\n tensor([[0, 1, 2],[3, 4, 5], [6, 7, 8])\n >>> s = torch.sparse.spdiags(diags, torch.tensor([0, -1, -2]), (3, 3), layout=torch.sparse_csr)\n >>> s\n tensor(crow_indices=tensor([0, 1, 3, 6]),", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html", "category": "pytorch docs"}
{"text": "tensor(crow_indices=tensor([0, 1, 3, 6]),\n col_indices=tensor([0, 0, 1, 0, 1, 2]),\n values=tensor([0, 3, 1, 6, 4, 2]), size=(3, 3), nnz=6,\n layout=torch.sparse_csr)\n >>> s.to_dense()\n tensor([[0, 0, 0],\n [3, 1, 0],\n [6, 4, 2]])\n Set partial diagonals of a large output:\n >>> diags = torch.tensor([[1, 2], [3, 4]])\n >>> offsets = torch.tensor([0, -1])\n >>> torch.sparse.spdiags(diags, offsets, (5, 5)).to_dense()\n tensor([[1, 0, 0, 0, 0],\n [3, 2, 0, 0, 0],\n [0, 4, 0, 0, 0],\n [0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0]])\n Note:\n When setting the values along a given diagonal the index into the\n diagonal and the index into the row of \"diagonals\" is taken as\n the column index in the output. This has the effect that when\n setting a diagonal with a positive offset k the first value\n along that diagonal will be the value in position k of the row", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html", "category": "pytorch docs"}
{"text": "of \"diagonals\"\n Specifying a positive offset:\n >>> diags = torch.tensor([[1, 2, 3], [1, 2, 3], [1, 2, 3]])\n >>> torch.sparse.spdiags(diags, torch.tensor([0, 1, 2]), (5, 5)).to_dense()\n tensor([[1, 2, 3, 0, 0],\n [0, 2, 3, 0, 0],\n [0, 0, 3, 0, 0],\n [0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0]])", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html", "category": "pytorch docs"}
{"text": "torch.Tensor.float_power_Tensor.float_power_(exponent) -> Tensor\n In-place version of \"float_power()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.float_power_.html", "category": "pytorch docs"}
{"text": "torch.Tensor.igammaTensor.igamma(other) -> Tensor\n See \"torch.igamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.igamma.html", "category": "pytorch docs"}
{"text": "torch.compiled_with_cxx11_abitorch.compiled_with_cxx11_abi()\n Returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1", "source": "https://pytorch.org/docs/stable/generated/torch.compiled_with_cxx11_abi.html", "category": "pytorch docs"}
{"text": "ExternalStreamclass torch.cuda.ExternalStream(stream_ptr, device=None, kwargs)\n Wrapper around an externally allocated CUDA stream.\n This class is used to wrap streams allocated in other libraries in\n order to facilitate data exchange and multi-library interactions.\n Note:\n This class doesn't manage the stream life-cycle, it is the user\n responsibility to keep the referenced stream alive while this\n class is being used.\n Parameters:\n * stream_ptr (int) -- Integer representation of the\n cudaStream_t value. allocated externally.\n * device (*torch.device or int, *optional) -- the\n device where the stream was originally allocated. if device is\n specified incorrectly, subsequent launches using this stream\n may fail.\n query()\n Checks if all the work submitted has been completed.\n Returns:\n A boolean indicating if all kernels in this stream are\n completed.\n record_event(event=None)", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.ExternalStream.html", "category": "pytorch docs"}
{"text": "completed.\n record_event(event=None)\n Records an event.\n Parameters:\n event (torch.cuda.Event, optional) -- event to\n record. If not given, a new one will be allocated.\n Returns:\n Recorded event.\n synchronize()\n Wait for all the kernels in this stream to complete.\n Note:\n This is a wrapper around \"cudaStreamSynchronize()\": see CUDA\n Stream documentation for more info.\n wait_event(event)\n Makes all future work submitted to the stream wait for an event.\n Parameters:\n event (torch.cuda.Event) -- an event to wait for.\n Note:\n This is a wrapper around \"cudaStreamWaitEvent()\": see CUDA\n Stream documentation for more info.This function returns\n without waiting for \"event\": only future operations are\n affected.\n wait_stream(stream)\n Synchronizes with another stream.\n All future work submitted to this stream will wait until all", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.ExternalStream.html", "category": "pytorch docs"}
{"text": "kernels submitted to a given stream at the time of call\n complete.\n Parameters:\n stream (Stream) -- a stream to synchronize.\n Note:\n This function returns without waiting for currently enqueued\n kernels in \"stream\": only future operations are affected.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.ExternalStream.html", "category": "pytorch docs"}
{"text": "torch.tanhtorch.tanh(input, , out=None) -> Tensor\n Returns a new tensor with the hyperbolic tangent of the elements of\n \"input\".\n \\text{out}{i} = \\tanh(\\text{input})\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.8986, -0.7279, 1.1745, 0.2611])\n >>> torch.tanh(a)\n tensor([ 0.7156, -0.6218, 0.8257, 0.2553])", "source": "https://pytorch.org/docs/stable/generated/torch.tanh.html", "category": "pytorch docs"}
{"text": "torch.exptorch.exp(input, , out=None) -> Tensor\n Returns a new tensor with the exponential of the elements of the\n input tensor \"input\".\n y_{i} = e^{x_{i}}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example:\n >>> torch.exp(torch.tensor([0, math.log(2.)]))\n tensor([ 1., 2.])", "source": "https://pytorch.org/docs/stable/generated/torch.exp.html", "category": "pytorch docs"}
{"text": "Rpropclass torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50), *, foreach=None, maximize=False, differentiable=False)\n Implements the resilient backpropagation algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\ &\\textbf{input} : \\theta_0 \\in \\mathbf{R}^d \\text{\n (params)},f(\\theta) \\text{ (objective)},\n \\ &\\hspace{13mm} \\eta_{+/-} \\text{ (etaplus,\n etaminus)}, \\Gamma_{max/min} \\text{ (step sizes)}\n \\ &\\textbf{initialize} : g^0_{prev} \\leftarrow 0,\n \\: \\eta_0 \\leftarrow \\text{lr (learning rate)}\n \\ &\\rule{110mm}{0.4pt}\n \\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\n &\\hspace{5mm} \\textbf{for} \\text{ } i = 0, 1, \\ldots, d-1 \\:\n \\mathbf{do} \\ &\\hspace{10mm} \\textbf{if} \\:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"}
{"text": "g^i_{prev} g^i_t > 0 \\\n &\\hspace{15mm} \\eta^i_t \\leftarrow \\mathrm{min}(\\eta^i_{t-1}\n \\eta_{+}, \\Gamma_{max})\n \\ &\\hspace{10mm} \\textbf{else if} \\: g^i_{prev} g^i_t <\n 0 \\ &\\hspace{15mm} \\eta^i_t\n \\leftarrow \\mathrm{max}(\\eta^i_{t-1} \\eta_{-},\n \\Gamma_{min})\n \\ &\\hspace{15mm} g^i_t \\leftarrow 0\n \\ &\\hspace{10mm} \\textbf{else} \\:\n \\ &\\hspace{15mm} \\eta^i_t \\leftarrow \\eta^i_{t-1}\n \\ &\\hspace{5mm}\\theta_t \\leftarrow \\theta_{t-1}- \\eta_t\n \\mathrm{sign}(g_t) \\ &\\hspace{5mm}g_{prev}\n \\leftarrow g_t\n \\ &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\n For further details regarding the algorithm we refer to the paper A\n Direct Adaptive Method for Faster Backpropagation Learning: The\n RPROP Algorithm.\n Parameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"}
{"text": "RPROP Algorithm.\n Parameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * lr (float, optional) -- learning rate (default:\n 1e-2)\n * etas (Tuple[float, float], optional) --\n pair of (etaminus, etaplus), that are multiplicative increase\n and decrease factors (default: (0.5, 1.2))\n * step_sizes (Tuple[float, float], optional)\n -- a pair of minimal and maximal allowed step sizes (default:\n (1e-6, 50))\n * foreach (bool, optional) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * maximize (bool, optional) -- maximize the params", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"}
{"text": "based on the objective, instead of minimizing (default: False)\n * differentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n add_param_group(param_group)\n Add a param group to the \"Optimizer\" s param_groups.\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n load_state_dict(state_dict)\n Loads the optimizer state.\n Parameters:\n state_dict (dict) -- optimizer state. Should be an", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"}
{"text": "object returned from a call to \"state_dict()\".\n register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"}
{"text": "transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n state_dict()\n Returns the state of the optimizer as a \"dict\".\n It contains two entries:\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n zero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"}
{"text": "footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"}
{"text": "torch.fft.irfftntorch.fft.irfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\n Computes the inverse of \"rfftn()\".\n \"input\" is interpreted as a one-sided Hermitian signal in the\n Fourier domain, as produced by \"rfftn()\". By the Hermitian\n property, the output will be real-valued.\n Note:\n Some input frequencies must be real-valued to satisfy the\n Hermitian property. In these cases the imaginary component will\n be ignored. For example, any imaginary component in the zero-\n frequency term cannot be represented in a real output and so will\n always be ignored.\n Note:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"s\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n odd signals will not round-trip properly. So, it is recommended\n to always pass the signal shape \"s\".", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html", "category": "pytorch docs"}
{"text": "to always pass the signal shape \"s\".\n Note:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions. With default arguments,\n the size of last dimension should be (2^n + 1) as argument s\n defaults to even output size = 2 * (last_dim_size - 1)\n Parameters:\n * input (Tensor) -- the input tensor\n * s (Tuple[int], optional) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Defaults to even output in\n the last dimension: \"s[-1] = 2(input.size(dim[-1]) - 1)\".\n * dim (Tuple[int], *optional) -- Dimensions to be\n transformed. The last dimension must be the half-Hermitian", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html", "category": "pytorch docs"}
{"text": "compressed dimension. Default: all dimensions, or the last\n \"len(s)\" dimensions if \"s\" is given.\n * norm (str, optional) --\n Normalization mode. For the backward transform (\"irfftn()\"),\n these correspond to:\n * \"\"forward\"\" - no normalization\n * \"\"backward\"\" - normalize by \"1/n\"\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real IFFT\n orthonormal)\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"rfftn()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"irfftn()\" the exact\n inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n -[ Example ]-\n\n\n\nt = torch.rand(10, 9)\nT = torch.fft.rfftn(t)\n Without specifying the output length to \"irfft()\", the output will\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html", "category": "pytorch docs"}
{"text": "not round-trip properly because the input is odd-length in the last\n dimension:\n\n\n\ntorch.fft.irfftn(T).size()\n torch.Size([10, 8])\n So, it is recommended to always pass the signal shape \"s\".\nroundtrip = torch.fft.irfftn(T, t.size())\nroundtrip.size()\n torch.Size([10, 9])\ntorch.testing.assert_close(roundtrip, t, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html", "category": "pytorch docs"}
{"text": "torch.Tensor.charTensor.char(memory_format=torch.preserve_format) -> Tensor\n \"self.char()\" is equivalent to \"self.to(torch.int8)\". See \"to()\".\n Parameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.char.html", "category": "pytorch docs"}
{"text": "torch.linalg.inv_extorch.linalg.inv_ex(A, *, check_errors=False, out=None)\n Computes the inverse of a square matrix if it is invertible.\n Returns a namedtuple \"(inverse, info)\". \"inverse\" contains the\n result of inverting \"A\" and \"info\" stores the LAPACK error codes.\n If \"A\" is not an invertible matrix, or if it's a batch of matrices\n and one or more of them is not an invertible matrix, then \"info\"\n stores a positive integer for the corresponding matrix. The\n positive integer indicates the diagonal element of the LU\n decomposition of the input matrix that is exactly zero. \"info\"\n filled with zeros indicates that the inversion was successful. If\n \"check_errors=True\" and \"info\" contains positive integers, then a\n RuntimeError is thrown.\n Supports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n Note:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv_ex.html", "category": "pytorch docs"}
{"text": "Note:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors\"= True.\n Warning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n See also:\n \"torch.linalg.inv()\" is a NumPy compatible variant that always\n checks for errors.\n Parameters:\n * A (Tensor) -- tensor of shape (, n, n) where *** is\n zero or more batch dimensions consisting of square matrices.\n * check_errors (bool, optional) -- controls whether to\n check the content of \"info\". Default: False.\n Keyword Arguments:\n out (tuple, optional) -- tuple of two tensors to write\n the output to. Ignored if None. Default: None*.\n Examples:\n >>> A = torch.randn(3, 3)\n >>> Ainv, info = torch.linalg.inv_ex(A)\n >>> torch.dist(torch.linalg.inv(A), Ainv)\n tensor(0.)\n >>> info\n tensor(0, dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv_ex.html", "category": "pytorch docs"}
{"text": "torch.nn.functional.binary_cross_entropy_with_logitstorch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)\n Function that measures Binary Cross Entropy between target and\n input logits.\n See \"BCEWithLogitsLoss\" for details.\n Parameters:\n * input (Tensor) -- Tensor of arbitrary shape as\n unnormalized scores (often referred to as logits).\n * target (Tensor) -- Tensor of the same shape as input\n with values between 0 and 1\n * weight (Tensor, optional) -- a manual rescaling\n weight if provided it's repeated to match input tensor shape\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n multiple elements per sample. If the field \"size_average\" is", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy_with_logits.html", "category": "pytorch docs"}
{"text": "set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * reduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy_with_logits.html", "category": "pytorch docs"}
{"text": "\npos_weight (Tensor, optional) -- a weight of\n positive examples. Must be a vector with length equal to the\n number of classes.\n Return type:\n Tensor\n Examples:\n >>> input = torch.randn(3, requires_grad=True)\n >>> target = torch.empty(3).random_(2)\n >>> loss = F.binary_cross_entropy_with_logits(input, target)\n >>> loss.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy_with_logits.html", "category": "pytorch docs"}
{"text": "CyclicLRclass torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=- 1, verbose=False)\n Sets the learning rate of each parameter group according to\n cyclical learning rate policy (CLR). The policy cycles the learning\n rate between two boundaries with a constant frequency, as detailed\n in the paper Cyclical Learning Rates for Training Neural Networks.\n The distance between the two boundaries can be scaled on a per-\n iteration or per-cycle basis.\n Cyclical learning rate policy changes the learning rate after every\n batch. step should be called after a batch has been used for\n training.\n This class has three built-in policies, as put forth in the paper:\n * \"triangular\": A basic triangular cycle without amplitude scaling.\n * \"triangular2\": A basic triangular cycle that scales initial", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"}
{"text": "amplitude by half each cycle.\n * \"exp_range\": A cycle that scales initial amplitude by\n \\text{gamma}^{\\text{cycle iterations}} at each cycle iteration.\n This implementation was adapted from the github repo:\n bckenstler/CLR\n Parameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * base_lr (float or list) -- Initial learning rate\n which is the lower boundary in the cycle for each parameter\n group.\n * max_lr (float or list) -- Upper learning rate\n boundaries in the cycle for each parameter group.\n Functionally, it defines the cycle amplitude (max_lr -\n base_lr). The lr at any cycle is the sum of base_lr and some\n scaling of the amplitude; therefore max_lr may not actually be\n reached depending on scaling function.\n * step_size_up (int) -- Number of training iterations in\n the increasing half of a cycle. Default: 2000", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"}
{"text": "\nstep_size_down (int) -- Number of training iterations in\n the decreasing half of a cycle. If step_size_down is None, it\n is set to step_size_up. Default: None\nmode (str) -- One of {triangular, triangular2,\n exp_range}. Values correspond to policies detailed above. If\n scale_fn is not None, this argument is ignored. Default:\n 'triangular'\ngamma (float) -- Constant in 'exp_range' scaling\n function: gamma**(cycle iterations) Default: 1.0\nscale_fn (function) -- Custom scaling policy defined by\n a single argument lambda function, where 0 <= scale_fn(x) <= 1\n for all x >= 0. If specified, then 'mode' is ignored. Default:\n None\nscale_mode (str) -- {'cycle', 'iterations'}. Defines\n whether scale_fn is evaluated on cycle number or cycle\n iterations (training iterations since start of cycle).\n Default: 'cycle'\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"}
{"text": "Default: 'cycle'\n * cycle_momentum (bool) -- If \"True\", momentum is cycled\n inversely to learning rate between 'base_momentum' and\n 'max_momentum'. Default: True\n * base_momentum (float or list) -- Lower momentum\n boundaries in the cycle for each parameter group. Note that\n momentum is cycled inversely to learning rate; at the peak of\n a cycle, momentum is 'base_momentum' and learning rate is\n 'max_lr'. Default: 0.8\n * max_momentum (float or list) -- Upper momentum\n boundaries in the cycle for each parameter group.\n Functionally, it defines the cycle amplitude (max_momentum -\n base_momentum). The momentum at any cycle is the difference of\n max_momentum and some scaling of the amplitude; therefore\n base_momentum may not actually be reached depending on scaling\n function. Note that momentum is cycled inversely to learning", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"}
{"text": "rate; at the start of a cycle, momentum is 'max_momentum' and\n learning rate is 'base_lr' Default: 0.9\n * last_epoch (int) -- The index of the last batch. This\n parameter is used when resuming a training job. Since step()\n should be invoked after each batch instead of after each\n epoch, this number represents the total number of batches\n computed, not the total number of epochs computed. When\n last_epoch=-1, the schedule is started from the beginning.\n Default: -1\n * verbose (bool) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n -[ Example ]-\n\n\n\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\nscheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)\ndata_loader = torch.utils.data.DataLoader(...)\nfor epoch in range(10):\n for batch in data_loader:\n train_batch(...)\n scheduler.step()\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"}
{"text": "\n\n\n scheduler.step()\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n get_lr()\n Calculates the learning rate at batch index. This function\n treats self.last_epoch as the last batch index.\n If self.cycle_momentum is \"True\", this function has a side\n effect of updating the optimizer's momentum.\n print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"}
{"text": "torch.onnx diagnostics Overview\n* Diagnostic Rules\n* API Reference\nOverview\n========\nNOTE: This feature is underdevelopment and is subject to change.\nThe goal is to improve the diagnostics to help users debug and improve\ntheir model export to ONNX.\n* The diagnostics are emitted in machine parsable Static Analysis\n Results Interchange Format (SARIF).\n* A new clearer, structured way to add new and keep track of\n diagnostic rules.\n* Serve as foundation for more future improvements consuming the\n diagnostics.\nDiagnostic Rules\n================\n* POE0001:node-missing-onnx-shape-inference\n* POE0002:missing-custom-symbolic-function\n* POE0003:missing-standard-symbolic-function\n* POE0004:operator-supported-in-newer-opset-version\nAPI Reference\n=============\nclass torch.onnx._internal.diagnostics.ExportDiagnostic(args, **kwargs)\n Base class for all export diagnostics.\n This class is used to represent all export diagnostics. It is a", "source": "https://pytorch.org/docs/stable/onnx_diagnostics.html", "category": "pytorch docs"}
{"text": "subclass of infra.Diagnostic, and adds additional methods to add\n more information to the diagnostic.\n record_cpp_call_stack(frames_to_skip)\n Records the current C++ call stack in the diagnostic.\n record_python_call_stack(frames_to_skip)\n Records the current Python call stack in the diagnostic.\nclass torch.onnx._internal.diagnostics.infra.DiagnosticEngine\n A generic diagnostic engine based on SARIF.\n This class is the main interface for diagnostics. It manages the\n creation of diagnostic contexts. A DiagnosticContext provides the\n entry point for recording Diagnostics. See infra.DiagnosticContext\n for more details.\n -[ Examples ]-\n Step 1: Create a set of rules. >>> rules =\n infra.RuleCollection.custom_collection_from_list( ...\n \"CustomRuleCollection\", ... [ ... infra.Rule( ...\n id=\"r1\", ... name=\"rule-1\", ...\n message_default_template=\"Mising xxx\", ... ), ... ],\n ... )", "source": "https://pytorch.org/docs/stable/onnx_diagnostics.html", "category": "pytorch docs"}
{"text": "... )\n Step 2: Create a diagnostic engine. >>> engine = DiagnosticEngine()\n Step 3: Start a new diagnostic context. >>> with\n engine.create_diagnostic_context(\"torch.onnx.export\",\n version=\"1.0\") as context: ... ...\n Step 4: Add diagnostics in your code. ...\n context.diagnose(rules.rule1, infra.Level.ERROR)\n Step 5: Afterwards, get the SARIF log. >>> sarif_log =\n engine.sarif_log()\n clear()\n Clears all diagnostic contexts.\n create_diagnostic_context(name, version, options=None, diagnostic_type=)\n Creates a new diagnostic context.\n Parameters:\n * name (str) -- The subject name for the diagnostic\n context.\n * version (str) -- The subject version for the\n diagnostic context.\n * options (Optional[DiagnosticOptions]) -- The\n options for the diagnostic context.\n Returns:\n A new diagnostic context.\n Return type:", "source": "https://pytorch.org/docs/stable/onnx_diagnostics.html", "category": "pytorch docs"}
{"text": "Return type:\n DiagnosticContext\n pretty_print(verbose=False, level=Level.ERROR)\n Pretty prints all diagnostics in the diagnostic contexts.\n Parameters:\n * verbose (bool) -- Whether to print the diagnostics in\n verbose mode. See Diagnostic.pretty_print.\n * level (Level) -- The minimum level of diagnostics to\n print.", "source": "https://pytorch.org/docs/stable/onnx_diagnostics.html", "category": "pytorch docs"}
{"text": "Benchmark Utils - torch.utils.benchmarkclass torch.utils.benchmark.Timer(stmt='pass', setup='pass', global_setup='', timer=, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=Language.PYTHON)\n Helper class for measuring execution time of PyTorch statements.\n For a full tutorial on how to use this class, see:\n https://pytorch.org/tutorials/recipes/recipes/benchmark.html\n The PyTorch Timer is based on timeit.Timer (and in fact uses\n timeit.Timer internally), but with several key differences:\n 1. Runtime aware:\n Timer will perform warmups (important as some elements of\n PyTorch are lazily initialized), set threadpool size so that\n comparisons are apples-to-apples, and synchronize\n asynchronous CUDA functions when necessary.\n 2. Focus on replicates:\n When measuring code, and particularly complex kernels /", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "models, run-to-run variation is a significant confounding\n factor. It is expected that all measurements should include\n replicates to quantify noise and allow median computation,\n which is more robust than mean. To that effect, this class\n deviates from the timeit API by conceptually merging\n timeit.Timer.repeat and timeit.Timer.autorange. (Exact\n algorithms are discussed in method docstrings.) The timeit\n method is replicated for cases where an adaptive strategy is\n not desired.\n 3. Optional metadata:\n When defining a Timer, one can optionally specify label,\n sub_label, description, and env. (Defined later) These\n fields are included in the representation of result object\n and by the Compare class to group and display results for\n comparison.\n 4. Instruction counts\n In addition to wall times, Timer can run a statement under", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "Callgrind and report instructions executed.\n Directly analogous to timeit.Timer constructor arguments:\n stmt, setup, timer, globals\n PyTorch Timer specific constructor arguments:\n label, sub_label, description, env, num_threads\n Parameters:\n * stmt (str) -- Code snippet to be run in a loop and\n timed.\n * setup (str) -- Optional setup code. Used to define\n variables used in stmt\n * global_setup (str) -- (C++ only) Code which is placed at\n the top level of the file for things like #include\n statements.\n * timer (Callable[[], float]) -- Callable\n which returns the current time. If PyTorch was built without\n CUDA or there is no GPU present, this defaults to\n timeit.default_timer; otherwise it will synchronize CUDA\n before measuring the time.\n * globals (Optional[Dict[str, Any]]) -- A", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "dict which defines the global variables when stmt is being\n executed. This is the other method for providing variables\n which stmt needs.\n * label (Optional[str]) -- String which summarizes\n stmt. For instance, if stmt is\n \"torch.nn.functional.relu(torch.add(x, 1, out=out))\" one might\n set label to \"ReLU(x + 1)\" to improve readability.\n * sub_label (Optional[str]) --\n Provide supplemental information to disambiguate measurements\n with identical stmt or label. For instance, in our example\n above sub_label might be \"float\" or \"int\", so that it is easy\n to differentiate: \"ReLU(x + 1): (float)\"\n \"ReLU(x + 1): (int)\" when printing Measurements or summarizing\n using Compare.\n * description (Optional[str]) --\n String to distinguish measurements with identical label and\n sub_label. The principal use of description is to signal to", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "Compare the columns of data. For instance one might set it\n based on the input size to create a table of the form:\n | n=1 | n=4 | ...\n ------------- ...\n ReLU(x + 1): (float) | ... | ... | ...\n ReLU(x + 1): (int) | ... | ... | ...\n using Compare. It is also included when printing a\n Measurement.\n * env (Optional[str]) -- This tag indicates that\n otherwise identical tasks were run in different environments,\n and are therefore not equivalent, for instance when A/B\n testing a change to a kernel. Compare will treat\n Measurements with different env specification as distinct\n when merging replicate runs.\n * num_threads (int) -- The size of the PyTorch threadpool\n when executing stmt. Single threaded performance is\n important as both a key inference workload and a good", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "indicator of intrinsic algorithmic efficiency, so the default\n is set to one. This is in contrast to the default PyTorch\n threadpool size which tries to utilize all cores.\n blocked_autorange(callback=None, min_run_time=0.2)\n Measure many replicates while keeping timer overhead to a\n minimum.\n At a high level, blocked_autorange executes the following\n pseudo-code:\n setup\n total_time = 0\n while total_time < min_run_time\n start = timer()\n for _ in range(block_size):\n stmt\n total_time += (timer() - start)\n Note the variable block_size in the inner loop. The choice of\n block size is important to measurement quality, and must balance\n two competing objectives:\n 1. A small block size results in more replicates and\n generally better statistics.\n 2. A large block size better amortizes the cost of timer", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "invocation, and results in a less biased measurement. This\n is important because CUDA synchronization time is non-\n trivial (order single to low double digit microseconds)\n and would otherwise bias the measurement.\n blocked_autorange sets block_size by running a warmup period,\n increasing block size until timer overhead is less than 0.1% of\n the overall computation. This value is then used for the main\n measurement loop.\n Returns:\n A Measurement object that contains measured runtimes and\n repetition counts, and can be used to compute statistics.\n (mean, median, etc.)\n Return type:\n Measurement\n collect_callgrind(number: int, , repeats: None, collect_baseline: bool, retain_out_file: bool) -> CallgrindStats\n collect_callgrind(number: int, , repeats: int, collect_baseline: bool, retain_out_file: bool) -> Tuple[CallgrindStats, ...]\n Collect instruction counts using Callgrind.", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "Collect instruction counts using Callgrind.\n Unlike wall times, instruction counts are deterministic (modulo\n non-determinism in the program itself and small amounts of\n jitter from the Python interpreter.) This makes them ideal for\n detailed performance analysis. This method runs stmt in a\n separate process so that Valgrind can instrument the program.\n Performance is severely degraded due to the instrumentation,\n however this is ameliorated by the fact that a small number of\n iterations is generally sufficient to obtain good measurements.\n In order to to use this method valgrind, callgrind_control,\n and callgrind_annotate must be installed.\n Because there is a process boundary between the caller (this\n process) and the stmt execution, globals cannot contain\n arbitrary in-memory data structures. (Unlike timing methods)\n Instead, globals are restricted to builtins, nn.Modules's, and", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "TorchScripted functions/modules to reduce the surprise factor\n from serialization and subsequent deserialization. The\n GlobalsBridge class provides more detail on this subject. Take\n particular care with nn.Modules: they rely on pickle and you may\n need to add an import to setup for them to transfer properly.\n By default, a profile for an empty statement will be collected\n and cached to indicate how many instructions are from the Python\n loop which drives stmt.\n Returns:\n A CallgrindStats object which provides instruction counts\n and some basic facilities for analyzing and manipulating\n results.\n timeit(number=1000000)\n Mirrors the semantics of timeit.Timer.timeit().\n Execute the main statement (stmt) number times. https://doc\n s.python.org/3/library/timeit.html#timeit.Timer.timeit\n Return type:\n Measurement", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "Return type:\n Measurement\nclass torch.utils.benchmark.Measurement(number_per_run, raw_times, task_spec, metadata=None)\n The result of a Timer measurement.\n This class stores one or more measurements of a given statement. It\n is serializable and provides several convenience methods (including\n a detailed repr) for downstream consumers.\n static merge(measurements)\n Convenience method for merging replicates.\n Merge will extrapolate times to number_per_run=1 and will not\n transfer any metadata. (Since it might differ between\n replicates)\n Return type:\n List[Measurement]\n property significant_figures: int\n Approximate significant figure estimate.\n This property is intended to give a convenient way to estimate\n the precision of a measurement. It only uses the interquartile\n region to estimate statistics to try to mitigate skew from the\n tails, and uses a static z value of 1.645 since it is not", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "expected to be used for small values of n, so z can\n approximate t.\n The significant figure estimation is used in conjunction with\n the trim_sigfig method to provide a more human-interpretable\n data summary. repr does not use this method; it simply displays\n raw values. Significant figure estimation is intended for\n Compare.\nclass torch.utils.benchmark.CallgrindStats(task_spec, number_per_run, built_with_debug_symbols, baseline_inclusive_stats, baseline_exclusive_stats, stmt_inclusive_stats, stmt_exclusive_stats, stmt_callgrind_out)\n Top level container for Callgrind results collected by Timer.\n Manipulation is generally done using the FunctionCounts class,\n which is obtained by calling CallgrindStats.stats(...). Several\n convenience methods are provided as well; the most significant is\n CallgrindStats.as_standardized().\n as_standardized()\n Strip library names and some prefixes from function strings.", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "When comparing two different sets of instruction counts, one\n stumbling block can be path prefixes. Callgrind includes the\n full filepath when reporting a function (as it should). However,\n this can cause issues when diffing profiles. If a key component\n such as Python or PyTorch was built in separate locations in the\n two profiles, this can result in something resembling:\n 23234231 /tmp/first_build_dir/thing.c:foo(...)\n 9823794 /tmp/first_build_dir/thing.c:bar(...)\n ...\n 53453 .../aten/src/Aten/...:function_that_actually_changed(...)\n ...\n -9823794 /tmp/second_build_dir/thing.c:bar(...)\n -23234231 /tmp/second_build_dir/thing.c:foo(...)\n Stripping prefixes can ameliorate this issue by regularizing the\n strings and causing better cancellation of equivalent call sites\n when diffing.\n Return type:\n CallgrindStats\n counts(*, denoise=False)", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "counts(*, denoise=False)\n Returns the total number of instructions executed.\n See FunctionCounts.denoise() for an explanation of the\n denoise arg.\n Return type:\n int\n delta(other, inclusive=False)\n Diff two sets of counts.\n One common reason to collect instruction counts is to determine\n the effect that a particular change will have on the number\n of instructions needed to perform some unit of work. If a change\n increases that number, the next logical question is \"why\". This\n generally involves looking at what part of the code increased in\n instruction count. This function automates that process so that\n one can easily diff counts on both an inclusive and exclusive\n basis.\n Return type:\n FunctionCounts\n stats(inclusive=False)\n Returns detailed function counts.\n Conceptually, the FunctionCounts returned can be thought of as a\n tuple of (count, path_and_function_name) tuples.", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "inclusive matches the semantics of callgrind. If True, the\n counts include instructions executed by children.\n inclusive=True is useful for identifying hot spots in code;\n inclusive=False is useful for reducing noise when diffing\n counts from two different runs. (See CallgrindStats.delta(...)\n for more details)\n Return type:\n FunctionCounts\nclass torch.utils.benchmark.FunctionCounts(_data, inclusive, truncate_rows=True, _linewidth=None)\n Container for manipulating Callgrind results.\n It supports:\n 1. Addition and subtraction to combine or diff results.\n 2. Tuple-like indexing.\n 3. A denoise function which strips CPython calls which are\n known to be non-deterministic and quite noisy.\n 4. Two higher order methods (filter and transform) for\n custom manipulation.\n denoise()\n Remove known noisy instructions.\n Several instructions in the CPython interpreter are rather", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "noisy. These instructions involve unicode to dictionary lookups\n which Python uses to map variable names. FunctionCounts is\n generally a content agnostic container, however this is\n sufficiently important for obtaining reliable results to warrant\n an exception.\n Return type:\n FunctionCounts\n filter(filter_fn)\n Keep only the elements where filter_fn applied to function\n name returns True.\n Return type:\n FunctionCounts\n transform(map_fn)\n Apply map_fn to all of the function names.\n This can be used to regularize function names (e.g. stripping\n irrelevant parts of the file path), coalesce entries by mapping\n multiple functions to the same name (in which case the counts\n are added together), etc.\n Return type:\n FunctionCounts", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"}
{"text": "CUDA Stream Sanitizer\nNote:\n This is a prototype feature, which means it is at an early stage for\n feedback and testing, and its components are subject to change.\nOverview\n========\nThis module introduces CUDA Sanitizer, a tool for detecting\nsynchronization errors between kernels run on different streams. It\nstores information on accesses to tensors to determine if they are\nsynchronized or not. When enabled in a Python program and a possible\ndata race is detected, a detailed warning will be printed and the\nprogram will exit.\nIt can be enabled either by importing this module and calling\n\"enable_cuda_sanitizer()\" or by exporting the \"TORCH_CUDA_SANITIZER\"\nenvironment variable.\nUsage\n=====\nHere is an example of a simple synchronization error in PyTorch:\n import torch\n a = torch.rand(4, 2, device=\"cuda\")\n with torch.cuda.stream(torch.cuda.Stream()):\n torch.mul(a, 5, out=a)\nThe \"a\" tensor is initialized on the default stream and, without any", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"}
{"text": "synchronization methods, modified on a new stream. The two kernels\nwill run concurrently on the same tensor, which might cause the second\nkernel to read uninitialized data before the first one was able to\nwrite it, or the first kernel might overwrite part of the result of\nthe second. When this script is run on the commandline with:\n TORCH_CUDA_SANITIZER=1 python example_error.py\nthe following output is printed by CSAN:\n ============================\n CSAN detected a possible data race on tensor with data pointer 139719969079296\n Access by stream 94646435460352 during kernel:\n aten::mul.out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)\n writing to argument(s) self, out, and to the output\n With stack trace:\n File \"example_error.py\", line 6, in \n torch.mul(a, 5, out=a)\n ...\n File \"pytorch/torch/cuda/_sanitizer.py\", line 364, in _handle_kernel_launch\n stack_trace = traceback.StackSummary.extract(\n Previous access by stream 0 during kernel:", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"}
{"text": "Previous access by stream 0 during kernel:\n aten::rand(int[] size, *, int? dtype=None, Device? device=None) -> Tensor\n writing to the output\n With stack trace:\n File \"example_error.py\", line 3, in \n a = torch.rand(10000, device=\"cuda\")\n ...\n File \"pytorch/torch/cuda/_sanitizer.py\", line 364, in _handle_kernel_launch\n stack_trace = traceback.StackSummary.extract(\n Tensor was allocated with stack trace:\n File \"example_error.py\", line 3, in \n a = torch.rand(10000, device=\"cuda\")\n ...\n File \"pytorch/torch/cuda/_sanitizer.py\", line 420, in _handle_memory_allocation\n traceback.StackSummary.extract(\nThis gives extensive insight into the origin of the error:\n* A tensor was incorrectly accessed from streams with ids: 0 (default\n stream) and 94646435460352 (new stream)\n* The tensor was allocated by invoking \"a = torch.rand(10000,\n device=\"cuda\")\"\n* The faulty accesses were caused by operators", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"}
{"text": "\nThe faulty accesses were caused by operators\n\"a = torch.rand(10000, device=\"cuda\")\" on stream 0\n\"torch.mul(a, 5, out=a)\" on stream 94646435460352\n\n\nThe error message also displays the schemas of the invoked\n operators, along with a note showing which arguments of the\n operators correspond to the affected tensor.\nIn the example, it can be seen that tensor \"a\" corresponds to\n arguments \"self\", \"out\" and the \"output\" value of the invoked\n operator \"torch.mul\".\nSee also:\n The list of supported torch operators and their schemas can be\n viewed here.\nThe bug can be fixed by forcing the new stream to wait for the default\nstream:\n with torch.cuda.stream(torch.cuda.Stream()):\n torch.cuda.current_stream().wait_stream(torch.cuda.default_stream())\n torch.mul(a, 5, out=a)\nWhen the script is run again, there are no errors reported.\nAPI Reference\n=============\ntorch.cuda._sanitizer.enable_cuda_sanitizer()\n Enables CUDA Sanitizer.\n", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"}
{"text": "Enables CUDA Sanitizer.\n The sanitizer will begin to analyze low-level CUDA calls invoked by\n torch functions for synchronization errors. All data races found\n will be printed to the standard error output along with stack\n traces of suspected causes. For best results, the sanitizer should\n be enabled at the very beginning of the program.", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"}
{"text": "torch::deploy has been moved to pytorch/multipy\n\"torch::deploy\" has been moved to its new home at\nhttps://github.com/pytorch/multipy.", "source": "https://pytorch.org/docs/stable/deploy.html", "category": "pytorch docs"}
{"text": "Complex Numbers\nNote:\n When using complex numbers, use PyTorch with CUDA 11.6 downloaded\n via pip wheel as described in Get Started and select the CUDA 11.6\n pip package.\nComplex numbers are numbers that can be expressed in the form a + bj,\nwhere a and b are real numbers, and j is called the imaginary unit,\nwhich satisfies the equation j^2 = -1. Complex numbers frequently\noccur in mathematics and engineering, especially in topics like signal\nprocessing. Traditionally many users and libraries (e.g., TorchAudio)\nhave handled complex numbers by representing the data in float tensors\nwith shape (..., 2) where the last dimension contains the real and\nimaginary values.\nTensors of complex dtypes provide a more natural user experience while\nworking with complex numbers. Operations on complex tensors (e.g.,\n\"torch.mv()\", \"torch.matmul()\") are likely to be faster and more\nmemory efficient than operations on float tensors mimicking them.", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"}
{"text": "Operations involving complex numbers in PyTorch are optimized to use\nvectorized assembly instructions and specialized kernels (e.g. LAPACK,\ncuBLAS).\nNote:\n Spectral operations in the torch.fft module support native complex\n tensors.\nWarning:\n Complex tensors are a beta feature and subject to change.\nCreating Complex Tensors\n========================\nWe support two complex dtypes: torch.cfloat and torch.cdouble\n>>> x = torch.randn(2, 2, dtype=torch.cfloat)\n>>> x\n tensor([[-0.4621-0.0303j, -0.2438-0.5874j],\n [ 0.7706+0.1421j, 1.2110+0.1918j]])\nNote:\n The default dtype for complex tensors is determined by the default\n floating point dtype. If the default floating point dtype is\n torch.float64 then complex numbers are inferred to have a dtype of\n torch.complex128, otherwise they are assumed to have a dtype of\n torch.complex64.\nAll factory functions apart from \"torch.linspace()\",\n\"torch.logspace()\", and \"torch.arange()\" are supported for complex\ntensors.", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"}
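The dtype-inference rule in the note above can be checked directly; a minimal sketch, assuming `torch` is installed:

```python
import torch

# With the usual float32 default, complex tensors come out as complex64.
x = torch.randn(2, 2, dtype=torch.cfloat)
assert x.dtype == torch.complex64

# Switching the default float dtype to float64 pairs complex inference
# with complex128 instead.
torch.set_default_dtype(torch.float64)
y = torch.tensor(1 + 2j)
assert y.dtype == torch.complex128

torch.set_default_dtype(torch.float32)  # restore the default
```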
{"text": "tensors.\nTransition from the old representation\n======================================\nUsers who previously worked around the lack of complex tensors with\nreal tensors of shape (..., 2) can easily switch to using complex\ntensors in their code with \"torch.view_as_complex()\" and\n\"torch.view_as_real()\". Note that these functions don't perform any\ncopy and return a view of the input tensor.\n>>> x = torch.randn(3, 2)\n>>> x\n tensor([[ 0.6125, -0.1681],\n [-0.3773, 1.3487],\n [-0.0861, -0.7981]])\n>>> y = torch.view_as_complex(x)\n>>> y\n tensor([ 0.6125-0.1681j, -0.3773+1.3487j, -0.0861-0.7981j])\n>>> torch.view_as_real(y)\n tensor([[ 0.6125, -0.1681],\n [-0.3773, 1.3487],\n [-0.0861, -0.7981]])\nAccessing real and imag\n=======================\nThe real and imaginary values of a complex tensor can be accessed\nusing the \"real\" and \"imag\" attributes.\nNote:\n Accessing real and imag attributes doesn't allocate any memory,", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"}
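Because \"torch.view_as_complex()\" returns a view rather than a copy, writes through either alias are visible through the other. A small sketch, assuming `torch` is installed:

```python
import torch

x = torch.randn(3, 2)           # old-style (..., 2) real representation
y = torch.view_as_complex(x)    # complex view over the same storage

# Mutating the real tensor is reflected in the complex view (no copy).
x[0, 0] = 42.0
assert y[0].real.item() == 42.0

# Round-tripping back recovers the original real layout, still aliased.
assert torch.equal(torch.view_as_real(y), x)
```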
{"text": "and in-place updates on the real and imag tensors will update\n the original complex tensor. Also, the returned real and imag\n tensors are not contiguous.\n>>> y.real\n tensor([ 0.6125, -0.3773, -0.0861])\n>>> y.imag\n tensor([-0.1681, 1.3487, -0.7981])\n>>> y.real.mul_(2)\n tensor([ 1.2250, -0.7546, -0.1722])\n>>> y\n tensor([ 1.2250-0.1681j, -0.7546+1.3487j, -0.1722-0.7981j])\n>>> y.real.stride()\n (2,)\nAngle and abs\n=============\nThe angle and absolute values of a complex tensor can be computed\nusing \"torch.angle()\" and \"torch.abs()\".\n>>> x1 = torch.tensor([3j, 4+4j])\n>>> x1.abs()\n tensor([3.0000, 5.6569])\n>>> x1.angle()\n tensor([1.5708, 0.7854])\nLinear Algebra\n==============\nMany linear algebra operations, like \"torch.matmul()\", \"torch.svd()\",\n\"torch.solve()\" etc., support complex numbers. If you'd like to\nrequest an operation we don't currently support, please search if an\nissue has already been filed and if not, file one.\nSerialization\n=============", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"}
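\"abs()\" and \"angle()\" give the polar form of a complex tensor, and \"torch.polar()\" inverts it. A quick sketch of the round trip, assuming `torch` is installed (the loose `atol` accounts for float32 rounding near pi/2):

```python
import torch

z = torch.tensor([3j, 4 + 4j])
r, theta = z.abs(), z.angle()   # polar form: magnitude and phase

# torch.polar(abs, angle) rebuilds r * (cos(theta) + j*sin(theta)).
z2 = torch.polar(r, theta)
assert torch.allclose(z, z2, atol=1e-6)
```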
{"text": "Serialization\nComplex tensors can be serialized, allowing data to be saved as\ncomplex values.\n>>> torch.save(y, 'complex_tensor.pt')\n>>> torch.load('complex_tensor.pt')\n tensor([ 0.6125-0.1681j, -0.3773+1.3487j, -0.0861-0.7981j])\nAutograd\n========\nPyTorch supports autograd for complex tensors. The gradient computed\nis the Conjugate Wirtinger derivative, the negative of which is\nprecisely the direction of steepest descent used in the gradient\ndescent algorithm. Thus, all the existing optimizers work out of the\nbox with complex parameters. For more details, check out the note\nAutograd for Complex Numbers.\nWe do not fully support the following subsystems:\n* Quantization\n* JIT\n* Sparse Tensors\n* Distributed\nIf any of these would help your use case, please search if an issue\nhas already been filed and if not, file one.", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"}
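The "optimizers work out of the box" claim above can be sketched end to end; a minimal example, assuming `torch` is installed, with a real-valued loss of a complex parameter:

```python
import torch

z = torch.randn(3, dtype=torch.cfloat, requires_grad=True)

# A real-valued loss of a complex parameter: L = sum(|z|^2).
loss = z.abs().pow(2).sum()
loss.backward()

# The gradient stored on z is complex (the Conjugate Wirtinger
# derivative described above) and can be consumed by any optimizer.
assert z.grad is not None and z.grad.is_complex()

opt = torch.optim.SGD([z], lr=0.1)
opt.step()  # takes a descent step on the complex parameter
```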
{"text": "FullyShardedDataParallel\nclass torch.distributed.fsdp.FullyShardedDataParallel(module, process_group=None, sharding_strategy=None, cpu_offload=None, auto_wrap_policy=None, backward_prefetch=BackwardPrefetch.BACKWARD_PRE, mixed_precision=None, ignored_modules=None, param_init_fn=None, device_id=None, sync_module_states=False, forward_prefetch=False, limit_all_gathers=False, use_orig_params=False, ignored_parameters=None)\n A wrapper for sharding Module parameters across data parallel\n workers. This is inspired by Xu et al. as well as the ZeRO Stage 3\n from DeepSpeed. FullyShardedDataParallel is commonly shortened to\n FSDP.\n Example:\n >>> import torch\n >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP\n >>> torch.cuda.set_device(device_id)\n >>> sharded_module = FSDP(my_module)\n >>> optim = torch.optim.Adam(sharded_module.parameters(), lr=0.0001)\n >>> x = sharded_module(x, y=3, z=torch.Tensor([1]))\n >>> loss = x.sum()", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "\n\n\nloss = x.sum()\n >>> loss.backward()\n >>> optim.step()\n Warning:\n The optimizer must be initialized after the module has been\n wrapped, since FSDP will shard parameters in-place and this will\n break any previously initialized optimizers.\n Warning:\n If the destination CUDA device has ID \"dev_id\", either (1)\n \"module\" should already be placed on that device, (2) the device\n should be set using \"torch.cuda.set_device(dev_id)\", or (3)\n \"dev_id\" should be passed into the \"device_id\" constructor\n argument. This FSDP instance's compute device will be that\n destination device. For (1) and (3), the FSDP initialization\n always occurs on GPU. For (2), the FSDP initialization happens on\n \"module\" 's current device, which may be CPU.\n Warning:\n FSDP currently does not support gradient accumulation outside\n \"no_sync()\" when using CPU offloading. Trying to do so yields\n incorrect results since FSDP will use the newly-reduced gradient\n\n\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "instead of accumulating with any existing gradient.\n Warning:\n Changing the original parameter variable names after construction\n will lead to undefined behavior.\n Warning:\n Passing in sync_module_states=True flag requires module to be\n put on GPU, or to use \"device_id\" argument to specify a CUDA\n device that FSDP will move module to. This is because\n \"sync_module_states=True\" requires GPU communication.\n Warning:\n As of PyTorch 1.12, FSDP only offers limited support for shared\n parameters (for example, setting one \"Linear\" layer's weight to\n another's). In particular, modules that share parameters must be\n wrapped as part of the same FSDP unit. If enhanced shared\n parameter support is needed for your use case, please ping\n https://github.com/pytorch/pytorch/issues/77724\n Note:\n Inputs into FSDP \"forward\" function will be moved to compute\n device (same device FSDP module is on) before running \"forward\",", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "so user does not have to manually move inputs from CPU -> GPU.\n Parameters:\n * module (nn.Module) -- This is the module to be wrapped\n with FSDP.\n * process_group (Optional[Union[ProcessGroup,\n Tuple[ProcessGroup, ProcessGroup]]]) --\n Optional[Union[ProcessGroup, Tuple[ProcessGroup,\n ProcessGroup]]] This is the process group used for collective\n communications and the one over which the model is sharded.\n For hybrid sharding strategies such as\n \"ShardingStrategy.HYBRID_SHARD\" users can pass in a tuple of\n process groups representing the groups to shard and replicate\n across, respectively.\n * sharding_strategy (Optional[ShardingStrategy]) --\n This configures the sharding strategy used by FSDP, which may\n trade off memory saving and communication overhead. See\n \"ShardingStrategy\" for details. (Default: \"FULL_SHARD\")", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "\ncpu_offload (Optional[CPUOffload]) -- This\n configures CPU offloading. If this is set to \"None\", then no\n CPU offloading happens. See \"CPUOffload\" for details.\n (Default: \"None\")\nauto_wrap_policy\n (Optional[Union[Callable[[nn.Module,\n bool, int], bool], _FSDPPolicy]]) --\n This is either \"None\", an \"_FSDPPolicy\", or a callable of a\n fixed signature. If it is \"None\", then \"module\" is wrapped\n with only a top-level FSDP instance without any nested\n wrapping. If it is an \"_FSDPPolicy\", then the wrapping follows\n the given policy. \"ModuleWrapPolicy\" in\n \"torch.distributed.fsdp.wrap.py\" is an example. If it is a\n callable, then it should take in three arguments \"module:\n nn.Module\", \"recurse: bool\", and \"nonwrapped_numel: int\" and\n should return a \"bool\" specifying whether the passed-in\n\n\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "\"module\" should be wrapped if \"recurse=False\" or if the\n traversal should continue down the subtree if \"recurse=True\".\n Additional custom arguments may be added to the callable. The\n \"size_based_auto_wrap_policy\" in\n \"torch.distributed.fsdp.wrap.py\" gives an example callable\n that wraps a module if the parameters in its subtree exceed\n 100M numel. A good practice is to print the model after\n wrapping and adjust as needed.\n Example:\n >>> def custom_auto_wrap_policy(\n >>> module: nn.Module,\n >>> recurse: bool,\n >>> nonwrapped_numel: int,\n >>> # Additional custom arguments\n >>> min_num_params: int = int(1e8),\n >>> ) -> bool:\n >>> return nonwrapped_numel >= min_num_params\n >>> # Configure a custom min_num_params\n >>> my_auto_wrap_policy = functools.partial(custom_auto_wrap_policy, min_num_params=int(1e5))", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "\nbackward_prefetch (Optional[BackwardPrefetch]) --\n This configures explicit backward prefetching of all-gathers.\n See \"BackwardPrefetch\" for details. (Default: \"BACKWARD_PRE\")\nmixed_precision (Optional[MixedPrecision]) -- This\n configures native mixed precision for FSDP. If this is set to\n \"None\", then no mixed precision is used. Otherwise, parameter,\n buffer, and gradient reduction dtypes can be set. See\n \"MixedPrecision\" for details. (Default: \"None\")\nignored_modules\n (Optional[Iterable[torch.nn.Module]]) -- Modules\n whose own parameters and child modules' parameters and buffers\n are ignored by this instance. None of the modules directly in\n \"ignored_modules\" should be \"FullyShardedDataParallel\"\n instances, and any child modules that are already-constructed\n \"FullyShardedDataParallel\" instances will not be ignored if\n\n\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "they are nested under this instance. This argument may be used\n to avoid sharding specific parameters at module granularity\n when using an \"auto_wrap_policy\" or if parameters' sharding is\n not managed by FSDP. (Default: \"None\")\n * param_init_fn\n (Optional[Callable[[nn.Module], None]])\n --\n A \"Callable[torch.nn.Module] -> None\" that specifies how\n modules that are currently on the meta device should be\n initialized onto an actual device. Note that as of v1.12, we\n detect modules on the meta device via \"is_meta\" check and\n apply a default initialization that calls \"reset_parameters\"\n method on the passed in \"nn.Module\" if \"param_init_fn\" is not\n specified, otherwise we run \"param_init_fn\" to initialize the\n passed in \"nn.Module\". In particular, this means that if\n \"is_meta=True\" for any module parameters for modules that will", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "be wrapped with FSDP and \"param_init_fn\" is not specified, we\n assume your module properly implements a \"reset_parameters()\"\n and will throw errors if not. Note that additionally, we offer\n support for modules initialized with torchdistX's\n (https://github.com/pytorch/torchdistX) \"deferred_init\" API.\n In this case, deferred modules would be initialized by a\n default initialization function that calls torchdistX's\n \"materialize_module\", or the passed in \"param_init_fn\", if it\n is not \"None\". The same \"Callable\" is applied to initialize\n all meta modules. Note that this initialization function is\n applied before doing any FSDP sharding logic.\n Example:\n >>> module = MyModule(device=\"meta\")\n >>> def my_init_fn(module):\n >>> # responsible for initializing a module, such as with reset_parameters\n >>> ...", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "\n\n\n...\n >>> fsdp_model = FSDP(module, param_init_fn=my_init_fn, auto_wrap_policy=size_based_auto_wrap_policy)\n >>> print(next(fsdp_model.parameters()).device) # current CUDA device\n >>> # With torchdistX\n >>> module = deferred_init.deferred_init(MyModule, device=\"cuda\")\n >>> # Will initialize via deferred_init.materialize_module().\n >>> fsdp_model = FSDP(module, auto_wrap_policy=size_based_auto_wrap_policy)\n * device_id (Optional[Union[int,\n torch.device]]) -- An \"int\" or \"torch.device\"\n describing the CUDA device the FSDP module should be moved to,\n determining where initialization such as sharding takes place.\n If this argument is not specified and \"module\" is on CPU, we\n issue a warning mentioning that this argument can be specified\n for faster initialization. If specified, resulting FSDP\n instances will reside on this device, including moving ignored", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "modules' parameters if needed. Note that if \"device_id\" is\n specified but \"module\" is already on a different CUDA device,\n an error will be thrown. (Default: \"None\")\n * sync_module_states (bool) -- If \"True\", each\n individually wrapped FSDP unit will broadcast module\n parameters from rank 0 to ensure they are the same across all\n ranks after initialization. This helps ensure model parameters\n are the same across ranks before starting training, but adds\n communication overhead to \"__init__\", as at least one\n broadcast is triggered per individually wrapped FSDP unit.\n This can also help load checkpoints taken by \"state_dict\" and\n to be loaded by \"load_state_dict\" in a memory efficient way.\n See documentation for \"FullStateDictConfig\" for an example of\n this. (Default: \"False\")\n * forward_prefetch (bool) -- If \"True\", then FSDP\n explicitly prefetches the next upcoming all-gather while", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "executing in the forward pass. This may improve communication\n and computation overlap for CPU bound workloads. This should\n only be used for static graph models since the forward order\n is fixed based on the first iteration's execution. (Default:\n \"False\")\n * limit_all_gathers (bool) -- If \"False\", then FSDP allows\n the CPU thread to schedule all-gathers without any extra\n synchronization. If \"True\", then FSDP explicitly synchronizes\n the CPU thread to prevent too many in-flight all-gathers. This\n \"bool\" only affects the sharded strategies that schedule all-\n gathers. Enabling this can help lower the number of CUDA\n malloc retries.\n * ignored_parameters\n (Optional[Iterable[torch.nn.Parameter]]) --\n Ignored parameters will not be managed by this FSDP instance,\n that means these parameters will not be flattened and sharded", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "by FSDP, their gradients will not be synchronized as well.\n With this newly added argument, \"ignored_modules\" could be\n deprecated soon. For backward compatibility, both\n \"ignored_parameters\" and \"ignored_modules\" are kept for now,\n but FSDP only allows one of them to be specified as not\n \"None\".\n apply(fn)\n Applies \"fn\" recursively to every submodule (as returned by\n \".children()\") as well as self. Typical use includes\n initializing the parameters of a model (see also torch.nn.init).\n Compared to \"torch.nn.Module.apply\", this version additionally\n gathers the full parameters before applying \"fn\". It should not\n be called from within another \"summon_full_params\" context.\n Parameters:\n fn (\"Module\" -> None) -- function to be applied to each\n submodule\n Returns:\n self\n Return type:\n Module\n clip_grad_norm_(max_norm, norm_type=2.0)", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "clip_grad_norm_(max_norm, norm_type=2.0)\n Clips the gradient norm of all parameters. The norm is computed\n over all parameters' gradients as viewed as a single vector, and\n the gradients are modified in-place.\n Parameters:\n * max_norm (float or int) -- max norm of the\n gradients\n * norm_type (float or int) -- type of the used\n p-norm. Can be \"'inf'\" for infinity norm.\n Returns:\n Total norm of the parameters (viewed as a single vector).\n Return type:\n Tensor\n Note:\n If every FSDP instance uses \"NO_SHARD\", meaning that no\n gradients are sharded across ranks, then you may directly use\n \"torch.nn.utils.clip_grad_norm_()\".\n Note:\n If at least some FSDP instance uses a sharded strategy (i.e.\n one other than \"NO_SHARD\"), then you should use this method\n instead of \"torch.nn.utils.clip_grad_norm_()\" since this", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "method handles the fact that gradients are sharded across\n ranks.\n Note:\n The total norm returned will have the \"largest\" dtype across\n all parameters/gradients as defined by PyTorch's type\n promotion semantics. For example, if all\n parameters/gradients use a low precision dtype, then the\n returned norm's dtype will be that low precision dtype, but if\n there exists at least one parameter/ gradient using FP32, then\n the returned norm's dtype will be FP32.\n Warning:\n This needs to be called on all ranks since it uses collective\n communications.\n static flatten_sharded_optim_state_dict(sharded_optim_state_dict, model, optim)\n The API is similar to \"shard_full_optim_state_dict()\". The only\n difference is that the input \"sharded_optim_state_dict\" should\n be returned from \"sharded_optim_state_dict()\". Therefore, there\n will be all-gather calls on each rank to gather \"ShardedTensor\"\n s.", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "s.\n Parameters:\n * sharded_optim_state_dict (Dict[str, Any])\n -- Optimizer state dict corresponding to the unflattened\n parameters and holding the sharded optimizer state.\n * model (torch.nn.Module) -- Refer to\n \"shard_full_optim_state_dict()\".\n * optim (torch.optim.Optimizer) -- Optimizer for\n \"model\" 's parameters.\n Returns:\n Refer to \"shard_full_optim_state_dict()\".\n Return type:\n Dict[str, Any]\n forward(*args, **kwargs)\n Runs the forward pass for the wrapped module, inserting FSDP-\n specific pre- and post-forward sharding logic.\n Return type:\n Any\n static fsdp_modules(module, root_only=False)\n Returns all nested FSDP instances, possibly including \"module\"\n itself and only including FSDP root modules if \"root_only=True\".\n Parameters:\n * module (torch.nn.Module) -- Root module, which may or", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "may not be an \"FSDP\" module.\n * root_only (bool) -- Whether to return only FSDP root\n modules. (Default: \"False\")\n Returns:\n FSDP modules that are nested in the input \"module\".\n Return type:\n List[FullyShardedDataParallel]\n static full_optim_state_dict(model, optim, optim_input=None, rank0_only=True, group=None)\n Consolidates the full optimizer state on rank 0 and returns it\n as a \"dict\" following the convention of\n \"torch.optim.Optimizer.state_dict()\", i.e. with keys \"\"state\"\"\n and \"\"param_groups\"\". The flattened parameters in \"FSDP\" modules\n contained in \"model\" are mapped back to their unflattened\n parameters.\n Warning:\n This needs to be called on all ranks since it uses collective\n communications. However, if \"rank0_only=True\", then the state\n dict is only populated on rank 0, and all other ranks return\n an empty \"dict\".\n Warning:", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "an empty \"dict\".\n Warning:\n Unlike \"torch.optim.Optimizer.state_dict()\", this method uses\n full parameter names as keys instead of parameter IDs.\n Note:\n Like in \"torch.optim.Optimizer.state_dict()\", the tensors\n contained in the optimizer state dict are not cloned, so there\n may be aliasing surprises. For best practices, consider saving\n the returned optimizer state dict immediately, e.g. using\n \"torch.save()\".\n Parameters:\n * model (torch.nn.Module) -- Root module (which may or\n may not be a \"FullyShardedDataParallel\" instance) whose\n parameters were passed into the optimizer \"optim\".\n * optim (torch.optim.Optimizer) -- Optimizer for\n \"model\" 's parameters.\n * optim_input\n (Optional[Union[List[Dict[str,\n Any]], Iterable[torch.nn.Parameter]]])\n -- Input passed into the optimizer \"optim\" representing", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "either a \"list\" of parameter groups or an iterable of\n parameters; if \"None\", then this method assumes the input\n was \"model.parameters()\". This argument is deprecated, and\n there is no need to pass it in anymore. (Default: \"None\")\n * rank0_only (bool) -- If \"True\", saves the populated\n \"dict\" only on rank 0; if \"False\", saves it on all ranks.\n (Default: \"True\")\n * group (dist.ProcessGroup) -- Model's process group or\n \"None\" if using the default process group. (Default:\n \"None\")\n Returns:\n A \"dict\" containing the optimizer state for \"model\" 's\n original unflattened parameters and including keys \"state\"\n and \"param_groups\" following the convention of\n \"torch.optim.Optimizer.state_dict()\". If \"rank0_only=True\",\n then nonzero ranks return an empty \"dict\".\n Return type:\n Dict[str, Any]\n property module: Module", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
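The record above says the consolidated dict follows the `torch.optim.Optimizer.state_dict()` convention but uses full parameter names as keys instead of parameter IDs. As a rough hand-written illustration (this is a schematic, not output produced by torch; the parameter names and hyperparameter values are invented), such a dict might look like:

```python
# Schematic shape of a consolidated full optimizer state dict, following the
# torch.optim.Optimizer.state_dict() convention ("state" and "param_groups"
# keys) but keyed by full parameter *names*, as FSDP.full_optim_state_dict
# returns. All concrete values below are invented for illustration.
full_osd = {
    "state": {
        "net.0.weight": {"step": 10, "exp_avg": [0.1, 0.2], "exp_avg_sq": [0.01, 0.04]},
        "net.0.bias": {"step": 10, "exp_avg": [0.0], "exp_avg_sq": [0.0]},
    },
    "param_groups": [
        {"lr": 1e-3, "betas": (0.9, 0.999), "params": ["net.0.weight", "net.0.bias"]},
    ],
}

# The two top-level keys mirror torch.optim.Optimizer.state_dict().
assert set(full_osd) == {"state", "param_groups"}
# Unlike a plain optimizer state dict, whose "state" keys would be integer
# parameter IDs (0, 1, ...), the keys here are full parameter names.
assert all(isinstance(k, str) for k in full_osd["state"])
```

This naming convention is what makes the dict loadable across different wrappings of the same model, since names survive rewrapping while integer IDs do not.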
{"text": "Dict[str, Any]\n property module: Module\n Returns the wrapped module (like \"DistributedDataParallel\").\n named_buffers(*args, **kwargs)\n Overrides \"named_buffers()\" to intercept buffer names and remove\n all occurrences of the FSDP-specific flattened buffer prefix\n when inside the \"summon_full_params()\" context manager.\n Return type:\n Iterator[Tuple[str, Tensor]]\n named_parameters(*args, **kwargs)\n Overrides \"named_parameters()\" to intercept parameter names and\n remove all occurrences of the FSDP-specific flattened parameter\n prefix when inside the \"summon_full_params()\" context manager.\n Return type:\n Iterator[Tuple[str, Parameter]]\n no_sync()\n A context manager to disable gradient synchronizations across\n FSDP instances. Within this context, gradients will be\n accumulated in module variables, which will later be\n synchronized in the first forward-backward pass after exiting", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "the context. This should only be used on the root FSDP instance\n and will recursively apply to all children FSDP instances.\n Note:\n This likely results in higher memory usage because FSDP will\n accumulate the full model gradients (instead of gradient\n shards) until the eventual sync.\n Note:\n When used with CPU offloading, the gradients will not be\n offloaded to CPU when inside the context manager. Instead,\n they will only be offloaded right after the eventual sync.\n Return type:\n Generator\n register_comm_hook(state, hook)\n Registers a communication hook which is an enhancement that\n provides a flexible hook to users where they can specify how\n FSDP aggregates gradients across multiple workers. This hook can\n be used to implement several algorithms like GossipGrad and\n gradient compression which involve different communication\n strategies for parameter syncs while training with", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "\"FullyShardedDataParallel\".\n Warning:\n FSDP communication hook should be registered before running an\n initial forward pass and only once.\n Parameters:\n * state (object) --\n Passed to the hook to maintain any state information during\n the training process. Examples include error feedback in\n gradient compression, peers to communicate with next in\n GossipGrad, etc. It is locally stored by each worker and\n shared by all the gradient tensors on the worker.\n * hook (Callable) -- Callable, which has one of the\n following signatures: 1) \"hook: Callable[torch.Tensor] ->\n None\": This function takes in a Python tensor, which\n represents the full, flattened, unsharded gradient with\n respect to all variables corresponding to the model this\n FSDP unit is wrapping (that are not wrapped by other FSDP\n sub-units). It then performs all necessary processing and", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "returns \"None\"; 2) \"hook: Callable[torch.Tensor,\n torch.Tensor] -> None\": This function takes in two Python\n tensors, the first one represents the full, flattened,\n unsharded gradient with respect to all variables\n corresponding to the model this FSDP unit is wrapping (that\n are not wrapped by other FSDP sub-units). The latter\n represents a pre-sized tensor to store a chunk of a sharded\n gradient after reduction. In both cases, callable performs\n all necessary processing and returns \"None\". Callables with\n signature 1 are expected to handle gradient communication\n for a NO_SHARD case. Callables with signature 2 are\n expected to handle gradient communication for sharded\n cases.\n static rekey_optim_state_dict(optim_state_dict, optim_state_key_type, model, optim_input=None, optim=None)\n Re-keys the optimizer state dict \"optim_state_dict\" to use the", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "key type \"optim_state_key_type\". This can be used to achieve\n compatibility between optimizer state dicts from models with\n FSDP instances and ones without.\n To re-key an FSDP full optimizer state dict (i.e. from\n \"full_optim_state_dict()\") to use parameter IDs and be loadable\n to a non-wrapped model:\n >>> wrapped_model, wrapped_optim = ...\n >>> full_osd = FSDP.full_optim_state_dict(wrapped_model, wrapped_optim)\n >>> nonwrapped_model, nonwrapped_optim = ...\n >>> rekeyed_osd = FSDP.rekey_optim_state_dict(full_osd, OptimStateKeyType.PARAM_ID, nonwrapped_model)\n >>> nonwrapped_optim.load_state_dict(rekeyed_osd)\n To re-key a normal optimizer state dict from a non-wrapped model\n to be loadable to a wrapped model:\n >>> nonwrapped_model, nonwrapped_optim = ...\n >>> osd = nonwrapped_optim.state_dict()\n >>> rekeyed_osd = FSDP.rekey_optim_state_dict(osd, OptimStateKeyType.PARAM_NAME, nonwrapped_model)", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": ">>> wrapped_model, wrapped_optim = ...\n >>> sharded_osd = FSDP.shard_full_optim_state_dict(rekeyed_osd, wrapped_model)\n >>> wrapped_optim.load_state_dict(sharded_osd)\n Returns:\n The optimizer state dict re-keyed using the parameter keys\n specified by \"optim_state_key_type\".\n Return type:\n Dict[str, Any]\n static scatter_full_optim_state_dict(full_optim_state_dict, model, optim_input=None, optim=None, group=None)\n Scatters the full optimizer state dict from rank 0 to all other\n ranks, returning the sharded optimizer state dict on each rank.\n The return value is the same as \"shard_full_optim_state_dict()\",\n and on rank 0, the first argument should be the return value of\n \"full_optim_state_dict()\".\n Example:\n >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP\n >>> model, optim = ...\n >>> full_osd = FSDP.full_optim_state_dict(model, optim) # only non-empty on rank 0", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
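For intuition, the name-to-ID re-keying that `rekey_optim_state_dict` performs can be sketched on plain dicts. Everything below is a toy stand-in: the `name_to_id` mapping is hypothetical (in practice it would come from the order of `model.named_parameters()`), and `rekey_to_param_ids` is not FSDP code, just the re-keying idea:

```python
def rekey_to_param_ids(osd, name_to_id):
    """Toy analogue of FSDP.rekey_optim_state_dict(..., OptimStateKeyType.PARAM_ID, ...):
    re-key the "state" entries and "params" lists from parameter names
    to integer parameter IDs so a non-wrapped optimizer can load them."""
    return {
        "state": {name_to_id[name]: s for name, s in osd["state"].items()},
        "param_groups": [
            {**g, "params": sorted(name_to_id[n] for n in g["params"])}
            for g in osd["param_groups"]
        ],
    }

# Hypothetical mapping, as if taken from enumerate(model.named_parameters()).
name_to_id = {"net.0.weight": 0, "net.0.bias": 1}
osd = {
    "state": {"net.0.weight": {"step": 1}, "net.0.bias": {"step": 1}},
    "param_groups": [{"lr": 1e-3, "params": ["net.0.weight", "net.0.bias"]}],
}

rekeyed = rekey_to_param_ids(osd, name_to_id)
assert set(rekeyed["state"]) == {0, 1}
assert rekeyed["param_groups"][0]["params"] == [0, 1]
```

The real API additionally handles flattened FSDP parameters and the reverse direction (`OptimStateKeyType.PARAM_NAME`); this sketch only shows why a stable mapping between names and IDs makes the two conventions interchangeable.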
{"text": ">>> # Define new model with possibly different world size\n >>> new_model, new_optim, new_group = ...\n >>> sharded_osd = FSDP.scatter_full_optim_state_dict(full_osd, new_model, group=new_group)\n >>> new_optim.load_state_dict(sharded_osd)\n Note:\n Both \"shard_full_optim_state_dict()\" and\n \"scatter_full_optim_state_dict()\" may be used to get the\n sharded optimizer state dict to load. Assuming that the full\n optimizer state dict resides in CPU memory, the former\n requires each rank to have the full dict in CPU memory, where\n each rank individually shards the dict without any\n communication, while the latter requires only rank 0 to have\n the full dict in CPU memory, where rank 0 moves each shard to\n GPU memory (for NCCL) and communicates it to ranks\n appropriately. Hence, the former has higher aggregate CPU\n memory cost, while the latter has higher communication cost.\n Parameters:", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "Parameters:\n * full_optim_state_dict (Optional[Dict[str,\n Any]]) -- Optimizer state dict corresponding to the\n unflattened parameters and holding the full non-sharded\n optimizer state if on rank 0; the argument is ignored on\n nonzero ranks.\n * model (torch.nn.Module) -- Root module (which may or\n may not be a \"FullyShardedDataParallel\" instance) whose\n parameters correspond to the optimizer state in\n \"full_optim_state_dict\".\n * optim_input\n (Optional[Union[List[Dict[str,\n Any]], Iterable[torch.nn.Parameter]]])\n -- Input passed into the optimizer representing either a\n \"list\" of parameter groups or an iterable of parameters; if\n \"None\", then this method assumes the input was\n \"model.parameters()\". This argument is deprecated, and", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "there is no need to pass it in anymore. (Default: \"None\")\n * optim (Optional[torch.optim.Optimizer]) --\n Optimizer that will load the state dict returned by this\n method. This is the preferred argument to use over\n \"optim_input\". (Default: \"None\")\n * group (dist.ProcessGroup) -- Model's process group or\n \"None\" if using the default process group. (Default:\n \"None\")\n Returns:\n The full optimizer state dict now remapped to flattened\n parameters instead of unflattened parameters and restricted\n to only include this rank's part of the optimizer state.\n Return type:\n Dict[str, Any]\n static set_state_dict_type(module, state_dict_type, state_dict_config=None)\n Set the \"state_dict_type\" and the corresponding (optional)\n configurations of all the descendant FSDP modules of the target\n module. The target module does not have to be a FSDP module. If", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "the target module is a FSDP module, its \"state_dict_type\" will\n also be changed.\n Note:\n This API should be called for only the top-level (root)\n module.\n Note:\n This API enables users to transparently use the conventional\n \"state_dict\" API to take model checkpoints in cases where the\n root FSDP module is wrapped by another \"nn.Module\". For\n example, the following will ensure \"state_dict\" is called on\n all non-FSDP instances, while dispatching into\n sharded_state_dict implementation for FSDP:\n Example:\n >>> model = DDP(FSDP(...))\n >>> FSDP.set_state_dict_type(\n >>> model,\n >>> StateDictType.SHARDED_STATE_DICT,\n >>> ShardedStateDictConfig(offload_to_cpu=True),\n >>> )\n >>> checkpoint = model.state_dict()\n Parameters:\n * module (torch.nn.Module) -- Root module.\n * state_dict_type (StateDictType) -- the desired", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "\"state_dict_type\" to set.\n * state_dict_config (Optional[StateDictConfig])\n -- the configuration for the target \"state_dict_type\".\n Return type:\n Tuple[StateDictType, StateDictConfig]\n static shard_full_optim_state_dict(full_optim_state_dict, model, optim_input=None, optim=None)\n Shards the full optimizer state dict \"full_optim_state_dict\" by\n remapping the state to flattened parameters instead of\n unflattened parameters and restricting to only this rank's part\n of the optimizer state. The first argument should be the return\n value of \"full_optim_state_dict()\".\n Example:\n >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP\n >>> model, optim = ...\n >>> full_osd = FSDP.full_optim_state_dict(model, optim)\n >>> torch.save(full_osd, PATH)\n >>> # Define new model with possibly different world size\n >>> new_model, new_optim = ...", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": ">>> new_model, new_optim = ...\n >>> full_osd = torch.load(PATH)\n >>> sharded_osd = FSDP.shard_full_optim_state_dict(full_osd, new_model)\n >>> new_optim.load_state_dict(sharded_osd)\n Note:\n Both \"shard_full_optim_state_dict()\" and\n \"scatter_full_optim_state_dict()\" may be used to get the\n sharded optimizer state dict to load. Assuming that the full\n optimizer state dict resides in CPU memory, the former\n requires each rank to have the full dict in CPU memory, where\n each rank individually shards the dict without any\n communication, while the latter requires only rank 0 to have\n the full dict in CPU memory, where rank 0 moves each shard to\n GPU memory (for NCCL) and communicates it to ranks\n appropriately. Hence, the former has higher aggregate CPU\n memory cost, while the latter has higher communication cost.\n Parameters:\n * full_optim_state_dict (Dict[str, Any]) --", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "Optimizer state dict corresponding to the unflattened\n parameters and holding the full non-sharded optimizer\n state.\n * model (torch.nn.Module) -- Root module (which may or\n may not be a \"FullyShardedDataParallel\" instance) whose\n parameters correspond to the optimizer state in\n \"full_optim_state_dict\".\n * optim_input\n (Optional[Union[List[Dict[str,\n Any]], Iterable[torch.nn.Parameter]]])\n -- Input passed into the optimizer representing either a\n \"list\" of parameter groups or an iterable of parameters; if\n \"None\", then this method assumes the input was\n \"model.parameters()\". This argument is deprecated, and\n there is no need to pass it in anymore. (Default: \"None\")\n * optim (Optional[torch.optim.Optimizer]) --\n Optimizer that will load the state dict returned by this", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "method. This is the preferred argument to use over\n \"optim_input\". (Default: \"None\")\n Returns:\n The full optimizer state dict now remapped to flattened\n parameters instead of unflattened parameters and restricted\n to only include this rank's part of the optimizer state.\n Return type:\n Dict[str, Any]\n static sharded_optim_state_dict(model, optim, group=None)\n The API is similar to \"full_optim_state_dict()\" but this API\n chunks all non-zero-dimension states to \"ShardedTensor\" to save\n memory. This API should only be used when the model \"state_dict\"\n is derived with the context manager \"with\n state_dict_type(SHARDED_STATE_DICT):\".\n For the detailed usage, refer to \"full_optim_state_dict()\".\n Warning:\n The returned state dict contains \"ShardedTensor\" and cannot be\n directly used by the regular \"optim.load_state_dict\".\n Return type:\n Dict[str, Any]", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
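The records above repeatedly describe "restricting to only this rank's part of the optimizer state". The ownership logic can be sketched without torch: split a flattened state evenly across `world_size` ranks, with the first `remainder` ranks taking one extra element. This is a toy model of the idea, not FSDP's actual sharding code:

```python
def shard_for_rank(flat_state, rank, world_size):
    """Toy version of restricting a full, flattened optimizer state to
    one rank's shard: split into world_size nearly-equal contiguous
    chunks and return the chunk owned by `rank`."""
    n = len(flat_state)
    base, rem = divmod(n, world_size)
    start = rank * base + min(rank, rem)       # earlier ranks absorb the remainder
    length = base + (1 if rank < rem else 0)
    return flat_state[start:start + length]

full = list(range(10))  # stand-in for a flattened optimizer state
shards = [shard_for_rank(full, r, 4) for r in range(4)]
assert [len(s) for s in shards] == [3, 3, 2, 2]
# Concatenating every rank's shard recovers the full state.
assert [x for s in shards for x in s] == full
```

This also makes the CPU-memory note above concrete: with `shard_full_optim_state_dict` every rank holds `full` and slices locally, while with `scatter_full_optim_state_dict` only rank 0 holds `full` and sends each slice to its owner.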
{"text": "Return type:\n Dict[str, Any]\n static state_dict_type(module, state_dict_type, state_dict_config=None)\n A context manager to set the \"state_dict_type\" of all the\n descendant FSDP modules of the target module. This context\n manager has the same functions as \"set_state_dict_type()\". Read\n the document of \"set_state_dict_type()\" for the detail.\n Example:\n >>> model = DDP(FSDP(...))\n >>> with FSDP.state_dict_type(\n >>> model,\n >>> StateDictType.SHARDED_STATE_DICT,\n >>> ):\n >>> checkpoint = model.state_dict()\n Parameters:\n * module (torch.nn.Module) -- Root module.\n * state_dict_type (StateDictType) -- the desired\n \"state_dict_type\" to set.\n * state_dict_config (Optional[StateDictConfig])\n -- the configuration for the target \"state_dict_type\".\n Return type:\n Generator", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "Return type:\n Generator\n static summon_full_params(module, recurse=True, writeback=True, rank0_only=False, offload_to_cpu=False, with_grads=False)\n A context manager to expose full params for FSDP instances. Can\n be useful after forward/backward for a model to get the params\n for additional processing or checking. It can take a non-FSDP\n module and will summon full params for all contained FSDP\n modules as well as their children, depending on the \"recurse\"\n argument.\n Note:\n This can be used on inner FSDPs.\n Note:\n This can not be used within a forward or backward pass. Nor\n can forward and backward be started from within this context.\n Note:\n Parameters will revert to their local shards after the context\n manager exits, storage behavior is the same as forward.\n Note:\n The full parameters can be modified, but only the portion\n corresponding to the local param shard will persist after the", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "context manager exits (unless \"writeback=False\", in which case\n changes will be discarded). In the case where FSDP does not\n shard the parameters, currently only when \"world_size == 1\",\n or \"NO_SHARD\" config, the modification is persisted regardless\n of \"writeback\".\n Note:\n This method works on modules which are not FSDP themselves but\n may contain multiple independent FSDP units. In that case, the\n given arguments will apply to all contained FSDP units.\n Warning:\n Note that \"rank0_only=True\" in conjunction with\n \"writeback=True\" is not currently supported and will raise an\n error. This is because model parameter shapes would be\n different across ranks within the context, and writing to them\n can lead to inconsistency across ranks when the context is\n exited.\n Warning:\n Note that \"offload_to_cpu\" and \"rank0_only=False\" will result", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "in full parameters being redundantly copied to CPU memory for\n GPUs that reside on the same machine, which may incur the risk\n of CPU OOM. It is recommended to use \"offload_to_cpu\" with\n \"rank0_only=True\".\n Parameters:\n * recurse (bool, Optional) -- recursively summon\n all params for nested FSDP instances (default: True).\n * writeback (bool, Optional) -- if \"False\",\n modifications to params are discarded after the context\n manager exits; disabling this can be slightly more\n efficient (default: True)\n * rank0_only (bool, Optional) -- if \"True\", full\n parameters are materialized on only global rank 0. This\n means that within the context, only rank 0 will have full\n parameters and the other ranks will have sharded\n parameters. Note that setting \"rank0_only=True\" with\n \"writeback=True\" is not supported, as model parameter", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "shapes will be different across ranks within the context,\n and writing to them can lead to inconsistency across ranks\n when the context is exited.\n * offload_to_cpu (bool, Optional) -- If \"True\",\n full parameters are offloaded to CPU. Note that this\n offloading currently only occurs if the parameter is\n sharded (which is only not the case for world_size = 1 or\n \"NO_SHARD\" config). It is recommended to use\n \"offload_to_cpu\" with \"rank0_only=True\" to avoid redundant\n copies of model parameters being offloaded to the same CPU\n memory.\n * with_grads (bool, Optional) -- If \"True\",\n gradients are also unsharded with the parameters.\n Currently, this is only supported when passing\n \"use_orig_params=True\" to the FSDP constructor and\n \"offload_to_cpu=False\" to this method. (Default: \"False\")\n Return type:\n Generator", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
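The persistence contract of `summon_full_params` (full view inside the context, only the local shard persisting on exit unless `writeback=False`) can be mimicked with a plain context manager over Python lists. This is purely illustrative, not FSDP code:

```python
from contextlib import contextmanager

@contextmanager
def summon_full_values(shards, writeback=True):
    """Toy analogue of summon_full_params' writeback semantics: expose a
    concatenated "full" view of per-rank shards; on exit, either copy the
    (possibly modified) view back into the shards (writeback=True) or
    discard the changes (writeback=False)."""
    sizes = [len(s) for s in shards]
    full = [x for s in shards for x in s]   # materialize the full view
    try:
        yield full
    finally:
        if writeback:
            i = 0
            for s, n in zip(shards, sizes):
                s[:] = full[i:i + n]        # persist each shard's slice
                i += n
        # writeback=False: modifications to `full` are simply dropped

shards = [[1, 2], [3, 4]]
with summon_full_values(shards) as full:
    full[0] = 99
assert shards == [[99, 2], [3, 4]]          # change persisted into shard 0

shards = [[1, 2], [3, 4]]
with summon_full_values(shards, writeback=False) as full:
    full[0] = 99
assert shards == [[1, 2], [3, 4]]           # change discarded
```

In real FSDP the "full view" is an all-gathered parameter and only the slice corresponding to the local shard is written back, which is why `rank0_only=True` with `writeback=True` is disallowed: the ranks would disagree on what to write.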
{"text": "Return type:\n Generator\nclass torch.distributed.fsdp.BackwardPrefetch(value)\n This configures explicit backward prefetching, which can improve\n throughput but may slightly increase peak memory usage.\n For NCCL backend, any collectives, even if issued in different\n streams, contend for the same per-device NCCL stream, which is why\n the relative order in which the collectives are issued matters for\n overlapping. The different backward prefetching settings correspond\n to different orderings.\n * \"BACKWARD_PRE\": This prefetches the next set of parameters before\n the current set of parameter's gradient computation. This\n improves backward pass throughput by overlapping communication\n (next all-gather) and computation (current gradient computation).\n * \"BACKWARD_POST\": This prefetches the next set of parameters after\n the current set of parameter's gradient computation. This may\n improve backward pass throughput by overlapping communication", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "(current reduce-scatter) and computation (next gradient\n computation). Specifically, the next all-gather is reordered to\n be before the current reduce-scatter.\n Note:\n If the increase in peak memory usage from prefetching is an\n issue, you may consider passing \"limit_all_gathers=True\" to the\n FSDP constructor, which may help reduce peak memory usage in some\n cases.\nclass torch.distributed.fsdp.ShardingStrategy(value)\n This specifies the sharding strategy to be used for distributed\n training by \"FullyShardedDataParallel\".\n * \"FULL_SHARD\": Parameters, gradients, and optimizer states are\n sharded. For the parameters, this strategy unshards (via all-\n gather) before the forward, reshards after the forward, unshards\n before the backward computation, and reshards after the backward\n computation. For gradients, it synchronizes and shards them (via\n reduce-scatter) after the backward computation. The sharded\n optimizer states are updated locally per rank.", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "* \"SHARD_GRAD_OP\": Gradients and optimizer states are sharded\n during computation, and additionally, parameters are sharded\n outside computation. For the parameters, this strategy unshards\n before the forward, does not reshard them after the forward, and\n only reshards them after the backward computation. The sharded\n optimizer states are updated locally per rank. Inside\n \"no_sync()\", the parameters are not resharded after the backward\n computation.\n * \"NO_SHARD\": Parameters, gradients, and optimizer states are not\n sharded but instead replicated across ranks similar to PyTorch's\n \"DistributedDataParallel\" API. For gradients, this strategy\n synchronizes them (via all-reduce) after the backward\n computation. The unsharded optimizer states are updated locally\n per rank.\n * \"HYBRID_SHARD\": Apply \"FULL_SHARD\" within a node, and replicate\n parameters across nodes. This results in reduced communication volume as", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "expensive all-gathers and reduce-scatters are only done within\n a node, which can be more performant for medium-sized models.\n * \"_HYBRID_SHARD_ZERO2\": Apply \"SHARD_GRAD_OP\" within a node, and\n replicate parameters across nodes. This is like \"HYBRID_SHARD\", except this may provide\n even higher throughput since the unsharded parameters are not\n freed after the forward pass, saving the all-gathers in the\n pre-backward.\nclass torch.distributed.fsdp.MixedPrecision(param_dtype=None, reduce_dtype=None, buffer_dtype=None, keep_low_precision_grads=False, cast_forward_inputs=False, cast_root_forward_inputs=True)\n This configures FSDP-native mixed precision training.\n Variables:\n * param_dtype (torch.dtype) -- This specifies the dtype\n for model parameters, inputs (when \"cast_forward_inputs\" or\n \"cast_root_forward_inputs\" is set to \"True\"), and therefore\n the dtype for computation. However, outside the forward and", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "backward passes, parameters are in full precision. Model\n checkpointing always happens in full precision.\n * reduce_dtype (torch.dtype) -- This specifies the dtype\n for gradient reduction, which is permitted to differ from\n \"param_dtype\".\n * buffer_dtype (torch.dtype) -- This specifies the dtype\n for buffers. FSDP does not shard buffers, casts them to\n \"buffer_dtype\" in the first forward pass, and keeps them in\n that dtype thereafter. Model checkpointing always happens in\n full precision.\n * keep_low_precision_grads (bool) -- This specifies\n whether to upcast gradients back to the full parameter\n precision after the backward pass. This may be set to \"False\"\n to save memory if using custom optimizers that can perform the\n optimizer step in \"reduce_dtype\". (Default: \"False\")\n * cast_forward_inputs (bool) -- Cast floating point", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "tensors in the forward arguments and keyword arguments to\n \"param_dtype\". (Default: \"False\")\n * cast_root_forward_inputs (bool) -- Cast floating point\n tensors in the forward arguments and keyword arguments to\n \"param_dtype\" for the root FSDP instance. It takes precedence\n over \"cast_forward_inputs\" for the root FSDP instance.\n (Default: \"True\")\n Note:\n This API is experimental and subject to change.\n Note:\n Only floating point tensors are cast to their specified dtypes.\n Note:\n In \"summon_full_params\", parameters are forced to full precision,\n but buffers are not.\n Note:\n \"state_dict\" checkpoints parameters and buffers in full\n precision. For buffers, this is only supported for\n \"StateDictType.FULL_STATE_DICT\".\n Note:\n Each low precision dtype must be specified explicitly. For\n example, \"MixedPrecision(reduce_dtype=torch.float16)\" only\n specifies the reduction dtype to be low precision, and FSDP will", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "not cast parameters or buffers.\n Note:\n If a \"reduce_dtype\" is not specified, then gradient reduction\n happens in \"param_dtype\" if specified or the original parameter\n dtype otherwise.\n Note:\n If the user passes a model with \"BatchNorm\" modules and an\n \"auto_wrap_policy\" to the FSDP constructor, then FSDP will\n disable mixed precision for \"BatchNorm\" modules by wrapping them\n separately in their own FSDP instance with mixed precision\n disabled. This is due to some missing low precision \"BatchNorm\"\n kernels. If the user does not use an \"auto_wrap_policy\", then the\n user must take care to not use mixed precision for FSDP instances\n containing \"BatchNorm\" modules.\n Note:\n \"MixedPrecision\" has \"cast_root_forward_inputs=True\" and\n \"cast_forward_inputs=False\" by default. For the root FSDP\n instance, its \"cast_root_forward_inputs\" takes precedence over\n its \"cast_forward_inputs\". For non-root FSDP instances, their", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "\"cast_root_forward_inputs\" values are ignored. The default\n setting is sufficient for the typical case where each FSDP\n instance has the same \"MixedPrecision\" configuration and only\n needs to cast inputs to the \"param_dtype\" at the beginning of the\n model's forward pass.\n Note:\n For nested FSDP instances with different \"MixedPrecision\"\n configurations, we recommend setting individual\n \"cast_forward_inputs\" values to configure casting inputs or not\n before each instance's forward. In such a case, since the casts\n happen before each FSDP instance's forward, a parent FSDP\n instance should have its non-FSDP submodules run before its FSDP\n submodules to avoid the activation dtype being changed due to a\n different \"MixedPrecision\" configuration.Example:\n >>> model = nn.Sequential(nn.Linear(3, 3), nn.Linear(3, 3))\n >>> model[1] = FSDP(\n >>> model[1],", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": ">>> model[1],\n >>> mixed_precision=MixedPrecision(param_dtype=torch.float16, cast_forward_inputs=True),\n >>> )\n >>> model = FSDP(\n >>> model,\n >>> mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, cast_forward_inputs=True),\n >>> )\n The above shows a working example. On the other hand, if\n \"model[1]\" were replaced with \"model[0]\", meaning that the\n submodule using different \"MixedPrecision\" ran its forward first,\n then \"model[1]\" would incorrectly see \"float16\" activations\n instead of \"bfloat16\" ones.\nclass torch.distributed.fsdp.CPUOffload(offload_params=False)\n This configures CPU offloading.\n Variables:\n offload_params (bool) -- This specifies whether to offload\n parameters to CPU when not involved in computation. If enabled,\n this implicitly offloads gradients to CPU as well. This is to\n support the optimizer step, which requires parameters and", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
{"text": "gradients to be on the same device.", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"}
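The `MixedPrecision` ordering pitfall above (a submodule with its own `cast_forward_inputs` running before a sibling) can be traced with a small dtype bookkeeping sketch. No tensors or FSDP are involved; module names and dtype strings are invented, and each "module" is just a (name, cast_to) pair where a non-None `cast_to` plays the role of `cast_forward_inputs=True`:

```python
def run_sequential(modules, root_cast="bfloat16"):
    """Trace the activation dtype seen by each module in a sequence.
    The root "FSDP instance" casts the model's inputs to root_cast;
    a module whose cast_to is not None re-casts before computing.
    Returns {module_name: dtype_it_saw}. Illustration only."""
    seen = {}
    dtype = root_cast                    # root casts the forward inputs
    for name, cast_to in modules:
        if cast_to is not None:          # this instance casts its own inputs
            dtype = cast_to
        seen[name] = dtype               # downstream modules inherit it
    return seen

# Wrapping model[1] in float16 (the docs' working example): the float16
# submodule runs *after* the bfloat16 one, so both see the intended dtype.
ok = run_sequential([("linear0", None), ("linear1", "float16")])
assert ok == {"linear0": "bfloat16", "linear1": "float16"}

# Wrapping model[0] instead: linear1 now sees float16 activations,
# not the bfloat16 the root configured.
bad = run_sequential([("linear0", "float16"), ("linear1", None)])
assert bad == {"linear0": "float16", "linear1": "float16"}
```

This is why the docs recommend that a parent FSDP instance run its non-FSDP submodules before FSDP submodules with different `MixedPrecision` configurations: the cast is sticky for everything downstream.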
{"text": "torch.utils.cpp_extension\ntorch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs)\n Creates a \"setuptools.Extension\" for C++.\n Convenience method that creates a \"setuptools.Extension\" with the\n bare minimum (but often sufficient) arguments to build a C++\n extension.\n All arguments are forwarded to the \"setuptools.Extension\"\n constructor.\n -[ Example ]-\n >>> from setuptools import setup\n >>> from torch.utils.cpp_extension import BuildExtension, CppExtension\n >>> setup(\n ... name='extension',\n ... ext_modules=[\n ... CppExtension(\n ... name='extension',\n ... sources=['extension.cpp'],\n ... extra_compile_args=['-g']),\n ... ],\n ... cmdclass={\n ... 'build_ext': BuildExtension\n ... })\ntorch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs)\n Creates a \"setuptools.Extension\" for CUDA/C++.", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "Creates a \"setuptools.Extension\" for CUDA/C++.\n Convenience method that creates a \"setuptools.Extension\" with the\n bare minimum (but often sufficient) arguments to build a CUDA/C++\n extension. This includes the CUDA include path, library path and\n runtime library.\n All arguments are forwarded to the \"setuptools.Extension\"\n constructor.\n -[ Example ]-\n >>> from setuptools import setup\n >>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension\n >>> setup(\n ... name='cuda_extension',\n ... ext_modules=[\n ... CUDAExtension(\n ... name='cuda_extension',\n ... sources=['extension.cpp', 'extension_kernel.cu'],\n ... extra_compile_args={'cxx': ['-g'],\n ... 'nvcc': ['-O2']})\n ... ],\n ... cmdclass={\n ... 'build_ext': BuildExtension\n ... })\n Compute capabilities:\n By default the extension will be compiled to run on all archs of", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "the cards visible during the building process of the extension,\n plus PTX. If down the road a new card is installed the extension\n may need to be recompiled. If a visible card has a compute\n capability (CC) that's newer than the newest version for which your\n nvcc can build fully-compiled binaries, Pytorch will make nvcc fall\n back to building kernels with the newest version of PTX your nvcc\n does support (see below for details on PTX).\n You can override the default behavior using TORCH_CUDA_ARCH_LIST\n to explicitly specify which CCs you want the extension to support:\n TORCH_CUDA_ARCH_LIST=\"6.1 8.6\" python build_my_extension.py\n TORCH_CUDA_ARCH_LIST=\"5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX\" python\n build_my_extension.py\n The +PTX option causes extension kernel binaries to include PTX\n instructions for the specified CC. PTX is an intermediate\n representation that allows kernels to runtime-compile for any CC >=\n the specified CC (for example, 8.6+PTX generates PTX that can", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "runtime-compile for any GPU with CC >= 8.6). This improves your\n binary's forward compatibility. However, relying on older PTX to\n provide forward compat by runtime-compiling for newer CCs can\n modestly reduce performance on those newer CCs. If you know the exact\n CC(s) of the GPUs you want to target, you're always better off\n specifying them individually. For example, if you want your\n extension to run on 8.0 and 8.6, \"8.0+PTX\" would work functionally\n because it includes PTX that can runtime-compile for 8.6, but \"8.0\n 8.6\" would be better.\n Note that while it's possible to include all supported archs, the\n more archs get included the slower the building process will be, as\n it will build a separate kernel image for each arch.\n Note that CUDA-11.5 nvcc will hit an internal compiler error while\n parsing torch/extension.h on Windows. To work around the issue, move\n the python binding logic to a pure C++ file.\n Example use:\n #include <ATen/ATen.h>\n at::Tensor", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "include <ATen/ATen.h>\n at::Tensor\n SigmoidAlphaBlendForwardCuda(....)\n\nInstead of:\n #include <torch/extension.h>\n torch::Tensor\n SigmoidAlphaBlendForwardCuda(...)\n Currently open issue for the nvcc bug:\n https://github.com/pytorch/pytorch/issues/69460 Complete workaround\n code example: https://github.com/facebookresearch/pytorch3d/commit\n /cb170ac024a949f1f9614ffe6af1c38d972f7d48\n Relocatable device code linking:\n If you want to reference device symbols across compilation units\n (across object files), the object files need to be built with\n relocatable device code (-rdc=true or -dc). An exception to this\n rule is \"dynamic parallelism\" (nested kernel launches), which is\n not used a lot anymore. Relocatable device code is less optimized,\n so it should be used only on object files that need it. Using\n -dlto (Device Link Time Optimization) at the device code\n compilation step and dlink step helps reduce the potential perf", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "degradation of -rdc. Note that it needs to be used at both steps\n to be useful.\n If you have rdc objects you need to have an extra -dlink\n (device linking) step before the CPU symbol linking step. There is\n also a case where -dlink is used without -rdc: when an\n extension is linked against a static lib containing rdc-compiled\n objects like the NVSHMEM\n library.\n Note: Ninja is required to build a CUDA Extension with RDC linking.\n -[ Example ]-\n\n\n\nCUDAExtension(\n ... name='cuda_extension',\n ... sources=['extension.cpp', 'extension_kernel.cu'],\n ... dlink=True,\n ... dlink_libraries=[\"dlink_lib\"],\n ... extra_compile_args={'cxx': ['-g'],\n ... 'nvcc': ['-O2', '-rdc=true']})\ntorch.utils.cpp_extension.BuildExtension(args, *kwargs)\n A custom \"setuptools\" build extension .\n This \"setuptools.build_ext\" subclass takes care of passing the\n\n\n", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "minimum required compiler flags (e.g. \"-std=c++17\") as well as\n mixed C++/CUDA compilation (and support for CUDA files in general).\n When using \"BuildExtension\", you may supply a dictionary\n for \"extra_compile_args\" (rather than the usual list) that maps\n from languages (\"cxx\" or \"nvcc\") to a list of additional compiler\n flags to supply to the compiler. This makes it possible to supply\n different flags to the C++ and CUDA compiler during mixed\n compilation.\n \"use_ninja\" (bool): If \"use_ninja\" is \"True\" (default), then we\n attempt to build using the Ninja backend. Ninja greatly speeds up\n compilation compared to the standard \"setuptools.build_ext\".\n Falls back to the standard distutils backend if Ninja is not\n available.\n Note:\n By default, the Ninja backend uses #CPUS + 2 workers to build the\n extension. This may use up too many resources on some systems.\n One can control the number of workers by setting the MAX_JOBS", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "environment variable to a non-negative number.\ntorch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, is_standalone=False, keep_intermediates=True)\n Loads a PyTorch C++ extension just-in-time (JIT).\n To load an extension, a Ninja build file is emitted, which is used\n to compile the given sources into a dynamic library. This library\n is subsequently loaded into the current Python process as a module\n and returned from this function, ready for use.\n By default, the directory to which the build file is emitted and\n the resulting library compiled to is\n \"<tmp>/torch_extensions/<name>\", where \"<tmp>\" is the temporary\n folder on the current platform and \"<name>\" the name of the\n extension. This location can be overridden in two ways. First, if\n the \"TORCH_EXTENSIONS_DIR\" environment variable is set, it replaces", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "\"<tmp>/torch_extensions\" and all extensions will be compiled into\n subfolders of this directory. Second, if the \"build_directory\"\n argument to this function is supplied, it overrides the entire\n path, i.e. the library will be compiled into that folder directly.\n To compile the sources, the default system compiler (\"c++\") is\n used, which can be overridden by setting the \"CXX\" environment\n variable. To pass additional arguments to the compilation process,\n \"extra_cflags\" or \"extra_ldflags\" can be provided. For example, to\n compile your extension with optimizations, pass\n \"extra_cflags=['-O3']\". You can also use \"extra_cflags\" to pass\n further include directories.\n CUDA support with mixed compilation is provided. Simply pass CUDA\n source files (\".cu\" or \".cuh\") along with other sources. Such files\n will be detected and compiled with nvcc rather than the C++\n compiler. This includes passing the CUDA lib64 directory as a", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "library directory, and linking \"cudart\". You can pass additional\n flags to nvcc via \"extra_cuda_cflags\", just like with\n \"extra_cflags\" for C++. Various heuristics for finding the CUDA\n install directory are used, which usually work fine. If not,\n setting the \"CUDA_HOME\" environment variable is the safest option.\n Parameters:\n * name -- The name of the extension to build. This MUST be\n the same as the name of the pybind11 module!\n * sources (Union[str, List[str]]) -- A\n list of relative or absolute paths to C++ source files.\n * extra_cflags -- optional list of compiler flags to forward\n to the build.\n * extra_cuda_cflags -- optional list of compiler flags to\n forward to nvcc when building CUDA sources.\n * extra_ldflags -- optional list of linker flags to forward\n to the build.\n * extra_include_paths -- optional list of include\n directories to forward to the build.", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "directories to forward to the build.\n * build_directory -- optional path to use as build\n workspace.\n * verbose -- If \"True\", turns on verbose logging of load\n steps.\n * with_cuda (Optional[bool]) -- Determines whether\n CUDA headers and libraries are added to the build. If set to\n \"None\" (default), this value is automatically determined based\n on the existence of \".cu\" or \".cuh\" in \"sources\". Set it to\n \"True\" to force CUDA headers and libraries to be included.\n * is_python_module -- If \"True\" (default), imports the\n produced shared library as a Python module. If \"False\",\n behavior depends on \"is_standalone\".\n * is_standalone -- If \"False\" (default) loads the\n constructed extension into the process as a plain dynamic\n library. If \"True\", builds a standalone executable.\n Returns:\n Returns the loaded PyTorch extension as a Python module.", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "If \"is_python_module\" is \"False\" and \"is_standalone\" is \"False\":\n Returns nothing. (The shared library is loaded into the\n process as a side effect.)\n If \"is_standalone\" is \"True\".\n Return the path to the executable. (On Windows,\n TORCH_LIB_PATH is added to the PATH environment variable as a\n side effect.)\n Return type:\n If \"is_python_module\" is \"True\"\n -[ Example ]-\n\n\n\nfrom torch.utils.cpp_extension import load\nmodule = load(\n ... name='extension',\n ... sources=['extension.cpp', 'extension_kernel.cu'],\n ... extra_cflags=['-O2'],\n ... verbose=True)\ntorch.utils.cpp_extension.load_inline(name, cpp_sources, cuda_sources=None, functions=None, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, with_pytorch_error_handling=True, keep_intermediates=True)\n\n\n", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "Loads a PyTorch C++ extension just-in-time (JIT) from string\n sources.\n This function behaves exactly like \"load()\", but takes its sources\n as strings rather than filenames. These strings are stored to files\n in the build directory, after which the behavior of \"load_inline()\"\n is identical to \"load()\".\n See the tests for good examples of using this function.\n Sources may omit two required parts of a typical non-inline C++\n extension: the necessary header includes, as well as the (pybind11)\n binding code. More precisely, strings passed to \"cpp_sources\" are\n first concatenated into a single \".cpp\" file. This file is then\n prepended with \"#include <torch/extension.h>\".\n Furthermore, if the \"functions\" argument is supplied, bindings will\n be automatically generated for each function specified. \"functions\"\n can either be a list of function names, or a dictionary mapping\n from function names to docstrings. If a list is given, the name of\n each function is used as its docstring.", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
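The list-vs-dictionary handling of "functions" described above can be sketched in plain Python. The `normalize_functions` helper below is hypothetical, for illustration only; it is not the actual torch.utils.cpp_extension internals.

```python
def normalize_functions(functions):
    """Sketch: map a `functions` argument to a name -> docstring dict."""
    if isinstance(functions, list):
        # For a list, each function's name doubles as its docstring.
        return {name: name for name in functions}
    if isinstance(functions, dict):
        # A dict already maps names to docstrings; copy it defensively.
        return dict(functions)
    raise TypeError("functions must be a list or dict of function names")

print(normalize_functions(["sin_add"]))                # {'sin_add': 'sin_add'}
print(normalize_functions({"sin_add": "adds sines"}))  # {'sin_add': 'adds sines'}
```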
{"text": "each function is used as its docstring.\n The sources in \"cuda_sources\" are concatenated into a separate\n \".cu\" file and prepended with \"torch/types.h\", \"cuda.h\" and\n \"cuda_runtime.h\" includes. The \".cpp\" and \".cu\" files are compiled\n separately, but ultimately linked into a single library. Note that\n no bindings are generated for functions in \"cuda_sources\" per se.\n To bind to a CUDA kernel, you must create a C++ function that calls\n it, and either declare or define this C++ function in one of the\n \"cpp_sources\" (and include its name in \"functions\").\n See \"load()\" for a description of arguments omitted below.\n Parameters:\n * cpp_sources -- A string, or list of strings, containing\n C++ source code.\n * cuda_sources -- A string, or list of strings, containing\n CUDA source code.\n * functions -- A list of function names for which to\n generate function bindings. If a dictionary is given, it", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "should map function names to docstrings (which are otherwise\n just the function names).\n * with_cuda -- Determines whether CUDA headers and libraries\n are added to the build. If set to \"None\" (default), this value\n is automatically determined based on whether \"cuda_sources\" is\n provided. Set it to \"True\" to force CUDA headers and libraries\n to be included.\n * with_pytorch_error_handling -- Determines whether pytorch\n error and warning macros are handled by pytorch instead of\n pybind. To do this, each function \"foo\" is called via an\n intermediary \"_safe_foo\" function. This redirection might\n cause issues in obscure cases of cpp. This flag should be set\n to \"False\" when this redirect causes issues.\n -[ Example ]-\n\n\n\nfrom torch.utils.cpp_extension import load_inline\nsource = \"\"\"\n at::Tensor sin_add(at::Tensor x, at::Tensor y) {\n return x.sin() + y.sin();\n }\n \"\"\"\n\n\n", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "return x.sin() + y.sin();\n }\n \"\"\"\n\n\n\nmodule = load_inline(name='inline_extension',\n ... cpp_sources=[source],\n ... functions=['sin_add'])\n Note:\n By default, the Ninja backend uses #CPUS + 2 workers to build the\n extension. This may use up too many resources on some systems.\n One can control the number of workers by setting the MAX_JOBS\n environment variable to a non-negative number.\ntorch.utils.cpp_extension.include_paths(cuda=False)\n Get the include paths required to build a C++ or CUDA extension.\n Parameters:\n cuda (bool) -- If True, includes CUDA-specific include\n paths.\n Returns:\n A list of include path strings.\n Return type:\n List[str]\ntorch.utils.cpp_extension.get_compiler_abi_compatibility_and_version(compiler)\n Determine if the given compiler is ABI-compatible with PyTorch\n alongside its version.\n Parameters:\n compiler (str) -- The compiler executable name to check\n\n\n", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
{"text": "(e.g. \"g++\"). Must be executable in a shell process.\n Returns:\n A tuple that contains a boolean that defines if the compiler is\n (likely) ABI-incompatible with PyTorch, followed by a\n TorchVersion string that contains the compiler version\n separated by dots.\n Return type:\n Tuple[bool, TorchVersion]\ntorch.utils.cpp_extension.verify_ninja_availability()\n Raises \"RuntimeError\" if ninja build system is not available on the\n system, does nothing otherwise.\ntorch.utils.cpp_extension.is_ninja_available()\n Returns \"True\" if the ninja build system is available on the\n system, \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"}
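A probe equivalent in spirit to "is_ninja_available" / "verify_ninja_availability" can be written with the standard library alone; this is a sketch of the behavior described above, not the actual PyTorch implementation.

```python
import subprocess

def ninja_available() -> bool:
    # Probe for ninja by asking it for its version; any failure means "not available".
    try:
        subprocess.run(["ninja", "--version"], capture_output=True, check=True)
        return True
    except (OSError, subprocess.CalledProcessError):
        return False

def require_ninja() -> None:
    # Mirror of verify_ninja_availability: raise RuntimeError when ninja is missing.
    if not ninja_available():
        raise RuntimeError("Ninja is required to load C++ extensions")
```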
{"text": "Installing TorchDynamo\n\nThis section describes how to install TorchDynamo. TorchDynamo is\nincluded in the nightly binaries of PyTorch. For more information, see\nGetting Started.\nRequirements\n============\nYou must have the following prerequisites to use TorchDynamo:\n* A Linux or macOS environment\n* Python 3.8 (recommended). Python 3.7 through 3.10 are supported and\n tested. Make sure to have a development version of Python installed\n locally as well.\nGPU/CUDA Requirements\n\nTo use GPU back ends, and in particular Triton, make sure that the\nCUDA that you have installed locally matches the PyTorch version you\nare running.\nThe following command installs GPU PyTorch + TorchDynamo along with\nGPU TorchDynamo dependencies (for CUDA 11.7):\n pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117\nCPU requirements\n\nThere are no additional requirements for CPU TorchDynamo. CPU", "source": "https://pytorch.org/docs/stable/dynamo/installation.html", "category": "pytorch docs"}
{"text": "TorchDynamo is included in the nightly versions of PyTorch. To\ninstall, run the following command:\n pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu\nInstall from Local Source\n\nAlternatively, you can build PyTorch from source, which has\nTorchDynamo included.\nTo install GPU TorchDynamo dependencies, run \"make triton\" in the\nPyTorch repo root directory.\nVerify Installation\n\nIf you built PyTorch from source, then you can run the following\ncommands (from the PyTorch repo root directory) to check that\nTorchDynamo is installed correctly:\n cd tools/dynamo\n python verify_dynamo.py\nIf you do not have the PyTorch source locally, you can alternatively\ncopy the script (\"tools/dynamo/verify_dynamo.py\") from the PyTorch\nrepository and run it locally.\nDocker Installation\n===================\nWe also provide all the required dependencies in the PyTorch nightly\nbinaries which you can download with the following command:", "source": "https://pytorch.org/docs/stable/dynamo/installation.html", "category": "pytorch docs"}
{"text": "docker pull ghcr.io/pytorch/pytorch-nightly\nAnd for ad hoc experiments just make sure that your container has\naccess to all your GPUs:\n docker run --gpus all -it ghcr.io/pytorch/pytorch-nightly:latest /bin/bash", "source": "https://pytorch.org/docs/stable/dynamo/installation.html", "category": "pytorch docs"}
{"text": "TorchDynamo Overview\n\nTorchDynamo is a Python-level JIT compiler designed to make\nunmodified PyTorch programs faster. TorchDynamo hooks into the frame\nevaluation API in CPython (PEP 523) to dynamically modify Python\nbytecode right before it is executed. It rewrites Python bytecode in\norder to extract sequences of PyTorch operations into an FX Graph\nwhich is then just-in-time compiled with a customizable backend. It\ncreates this FX Graph through bytecode analysis and is designed to mix\nPython execution with compiled backends to get the best of both worlds\n\u2014 usability and performance.\nTorchDynamo makes it easy to experiment with different compiler\nbackends to make PyTorch code faster with a single line decorator\n\"torch._dynamo.optimize()\".\n[image]\nTorchInductor is one of the backends supported by TorchDynamo; it\nlowers the FX Graph into Triton for GPUs or C++/OpenMP for CPUs. We\nhave a training performance dashboard that provides performance\ncomparison for", "source": "https://pytorch.org/docs/stable/dynamo/index.html", "category": "pytorch docs"}
{"text": "different training backends. You can read more in the TorchInductor\npost on PyTorch dev-discuss.\nSee also:\n * TorchDynamo deep-dive video\n * dev-discuss topics", "source": "https://pytorch.org/docs/stable/dynamo/index.html", "category": "pytorch docs"}
{"text": "Guards Overview\n\nFrom a UX perspective, TorchDynamo is very easy to use. The user\ninvokes \"torchdynamo.optimize\" as an annotation:\n @torchdynamo.optimize(my_compiler)\n def fn_foo(bar):\nWhere a complete example looks like this:\n from typing import List\n import torch\n import torchdynamo\n def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):\n print(\"my_compiler() called with FX graph:\")\n gm.graph.print_tabular()\n return gm.forward # return a python callable\n @torchdynamo.optimize(my_compiler)\n def toy_example(a, b):\n x = a / (torch.abs(a) + 1)\n if b.sum() < 0:\n b = b * -1\n return x * b\n for _ in range(100):\n toy_example(torch.randn(10), torch.randn(10))\nThis allows TorchDynamo to capture the interpreted Python frames, grab\nany and all relevant information, and speed things up wherever it can.\nThe speedup comes from a few places, and can be rather dependent on", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "the backend (my_compiler in the example above) provided, but the one\nspeedup that is important in this section is caching. Caching\nitself is not a direct speedup but a critical enablement that prevents\nrecompilation. We dig a hole with dynamo, and caching allows us to get\nout. It enables us to hold perf neutrality while then enabling\nbackends - the true source of our speedups.\nWith even a pass-through no-op backend provided:\n def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):\n return gm.forward\nWe can see TorchDynamo speeding up Python execution even on regular\nPython, not just PyTorch.\nCaching and Guards Overview\n===========================\nTorchDynamo operates through caching transformed (by TorchDynamo) user\nbytecode. When TorchDynamo receives a frame for evaluation, it checks\nif the objects referenced in the frame have changed in certain\nways, and if not, TorchDynamo reads the previously transformed user", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "bytecode to evaluate it. In this section, we will focus on how we can\nidentify whether or not the objects referenced in the frame have\nchanged. This is a critical piece of functionality in TorchDynamo,\nbecause it drives the entire invalidation lifecycle. This\nfunctionality is called guards.\nAt a very high level, the flow can be summarized like this:\n1. TorchDynamo receives a Python frame.\n2. It converts the frame from (1), passing it through instruction\n translation.\n3. For the objects captured in (2), TorchDynamo creates tracking\n objects that are:\n * tracked on an output graph, which is an internal\n specialization of a torch.fx.Tracer\n * guards\n4. TorchDynamo processes the guard objects created in (3), turning\n them into a generated Python function, check_fn, associated with\n a piece of code.\n5. The check_fn is evaluated whenever we encounter this code a\n subsequent time - if a check_fn passes and evaluates to True,\n TorchDynamo identifies the code in the cache and the code", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "encountered here as same, and can be safely used. If it fails and\n evaluates to False, TorchDynamo identifies the code in the cache\n as not valid; it can be thrown out in favor of a new entry,\n through recompilation or a graph break.\nPython Frame Evaluation and PEP 523\n===================================\nThe functionality of TorchDynamo is based on PEP 523.\nTorchDynamo installs a frame evaluation function on Python by using\n_PyInterpreterState_SetEvalFrameFunc. TorchDynamo has a hook where\nPython can hand control back to us during evaluation.\nThe function we have installed is \"convert_frame\" or\n\"convert_frame_assert\" in the \"nopython=True\" case, but glossing over\nthat nuance for now, let's take a look at \"convert_frame_assert\", as\n\"convert_frame\" proxies to it.\nWe can find it on line 20 of convert_frame.py, with a signature as\nfollows:\n def convert_frame_assert(compiler_fn: Callable, one_graph=True):\nThis function wraps the entry point of where Python invokes\nTorchDynamo with a frame:", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "TorchDynamo with a frame:\n def _convert_frame_assert(frame: types.FrameType, cache_size: int):\nHere is what this function does:\n1. Checks if it has seen this \"code\" (see: f_code here) before and\n exits early if it has.\n2. Checks if the code is an unsupported case.\n3. Checks if the \"cache_size\" (second arg above) crosses the limit\n defined in the config, \"cache_size_limit\". If it has, the function\n drops the frame and logs warnings. This helps to avoid constant\n recompilation of a frame as it generally means that the frame is\n hot in an unexpected way and caching it produces needless overhead,\n as it is likely to get evicted the next time it is encountered.\n4. Passes the frame, alongside a function that creates an\n \"InstructionTranslator\" through bytecode transformation, via\n \"transform_code_object\". A few crucial things happen under the hood\n here:\n 1. New code is produced through \"transform_code_object\".\n 2. An FX tracer named \"output\" is produced through", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "\"InstructionTranslator\".\n This can be a bit confusing, as \"InstructionTranslator\" is not\n an fx tracer, but it's stored in a variable named tracer, and\n its output is an fx tracer.\n 3. The function produces guards and stores them on \"output\" above.\n 4. The function produces \"output_instructions\" and stores them on\n \"output\" above.\n 5. The function maps the newly produced transformed code to the\n initial code it read off the frame. This mapping is worth\n remembering; we will refer to it much later on below where we\n cover guard failures.\n5. Using the transformed code from 4.1 and the guards from 4.3, the\n function produces a GuardedCode.\nNow that we have learned about frame evaluation, let's review\n\"InstructionTranslator\", and see how it turns the frame we handed it\nover into TorchDynamo internal types.\nInstructionTranslator\n=====================\nInstructionTranslator does a lot! We won't cover the details of", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "everything it does, but most importantly for this document, it\nproduces \"symbolic_locals\", a mapping from\nthe frame's \"f_locals\" to TorchDynamo internal Variable objects (more\non these in a moment). \"symbolic_locals\" is filled via traversing the\nframe's locals:\n self.symbolic_locals = collections.OrderedDict(\n (k, VariableBuilder(self, LocalSource(k))(f_locals[k]))\n for k in vars\n if k in f_locals\n )\nThe important component here is the invocation of a call into\n\"VariableBuilder\". \"VariableBuilder\"'s call implementation proxies\ninto a function called \"_wrap\", which in turn both constructs\ninstances of \"VariableTracker\" and calls \"make_guards\" on them. More\non that later.\nThis mapping, in turn, is critical as each Variable has associated\nguards, which are then passed to \"self.output\", the instance of\n\"OutputGraph\", an fx tracer, mentioned in 4.2 of the section above. If\nyou recall, this \"OutputGraph\", stored in a variable called \"output\"", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "is where our guards are stored before being passed on to become\n\"GuardedCode\".\nHow does \"InstructionTranslator\" do this? At the heart of it, there is\na loop that is pumped, which drives a function \"step\".\n\"step\" is just that - a single processing step, taking exactly one\ninstruction and doing something with it.\nNote:\n These are real instructions processed by TorchDynamo's\n \"transform_code_object\", and it is pretty cool.\nNote:\n This section purposely skips the details of dis.get_instructions.\nFor the example above, here is a snippet of what a few\n\"Instruction\"s may look like:\n Instruction(opcode=124, opname='LOAD_FAST', arg=0, argval='b', offset=32, starts_line=8, is_jump_target=True, target=None)\n Instruction(opcode=100, opname='LOAD_CONST', arg=3, argval=-1, offset=34, starts_line=None, is_jump_target=False, target=None)\n Instruction(opcode=20, opname='BINARY_MULTIPLY', arg=None, argval=None, offset=36, starts_line=None, is_jump_target=False, target=None)", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
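Instruction tuples like the ones above come straight from CPython's dis module; you can reproduce similar output for a toy_example-style function yourself with only the standard library (a minimal sketch, with plain floats standing in for tensors):

```python
import dis

def toy_example(a, b):
    # Same shape as the earlier example, with abs() standing in for torch.abs.
    x = a / (abs(a) + 1)
    if b < 0:
        b = b * -1
    return x * b

# Each entry is a dis.Instruction with opname/argval fields like those shown above.
instructions = list(dis.get_instructions(toy_example))
for inst in instructions[:5]:
    print(inst.opname, inst.argval)

opnames = {inst.opname for inst in instructions}
```

Note that exact opcodes vary across Python versions (for example, BINARY_MULTIPLY was folded into BINARY_OP in 3.11), but LOAD_FAST and LOAD_CONST appear in all of them.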
{"text": "This is the core functionality of this function. Take a look at the\n\"opname\", and then take a look at this little snippet from inside\n\"step\":\n if not hasattr(self, inst.opname):\n unimplemented(f\"missing: {inst.opname}\")\n getattr(self, inst.opname)(inst)\nAs we can see, the function checks if the current class, the\n\"InstructionTranslator\", has an attribute set matching the operator\nname (for example, \"LOAD_CONST\"). If it does, the function invokes it,\npassing the whole instruction object in. If it does not, the function\ndrops the frame as unimplemented.\nFor the \"LOAD_CONST\" example, we can see that we do indeed support it,\nwith a relatively straightforward definition:\n def LOAD_CONST(self, inst):\n self.push(ConstantVariable(value=inst.argval))\nWe can see that this function creates a new instance of the class\n\"ConstantVariable\", with a value, in our example case, -1, and then\npushes it onto the stack.\nThere are dozens of such methods - see \"symbolic_convert.py\" for all", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "of them. Generally, we implement as many matching methods to Python\nbytecode instructions as possible.\nAcross both the logic downstream of \"step\" and the logic from invoking\n\"VariableBuilder\" - we now have a lot of \"VariableTracker\"s and of\ncourse, we've spoken about creating guards quite a bit. Let's dig into\nwhat Variables are, and get a little closer to understanding guards.\nVariables\n=========\nA \"ConstantVariable\" is an instance of \"VariableTracker\".\n\"VariableTracker\" represents a tracked Python local or stack value.\nWhen it comes to representing an object inside TorchDynamo, a\n\"VariableTracker\" does exactly what it says - it tracks a given\nvariable. It is an extremely flexible class, but there are a few\npoints to keep in mind:\n* It manages the \"guard\" relationship around the underlying object\n through:\n * \"make_guard\"\n * \"replace_guards\"\n * \"add_guard(s)\"\n * \"propagate\" - \"propagate(*vars: List[List[\"VariableTracker\"]])\" -\n Perhaps the most important of all, in that it combines guards from", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "all the provided \"VariableTracker\" instances passed in. It visits\n the guards and combines the guards from these onto itself.\n* It acts as a proxy on behalf of the underlying object, implementing\n methods for the rest of TorchDynamo to get information about the\n tracked object:\n * \"call_method\"\n * \"call_function\"\n * \"python_type\"\n * \"as_proxy\"\n * \"is/as_python_proxy\"\n* It stores the variable \"source\" of type \"Source\", from\n \"torchdynamo/source.py\". This source type is a relatively\n self-contained class that helps us organize and bookkeep where the\n original source came from, and helps provide convenience methods for\n things like getting the name, and importantly for us, producing\n guards.\nAnd this class (\"VariableTracker\") is built around subclassing,\nsomewhere between a full Abstract Base Class and a fully fleshed out\nclass - it leaves many methods raising \"NotImplementedError\", relying\non subclasses. See \"torchdynamo/variables/\" for all", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "subclasses to fulfill contracts and custom behaviors.\nKnowing what we know now, we can see an example of how an instruction\nfrom \"dis\", \"BUILD_TUPLE\", is handled:\n \"BUILD_TUPLE(count)\" Creates a tuple consuming count items from the\n stack, and pushes the resulting tuple onto the stack.\nIn our case, our signature will be a little different due to the way\nwe create \"Instruction\" objects, but the gist of it will be the same.\nInstead of passing in \"count\", we pass in an object with a little\nextra bookkeeping, and of course, we deal with turning regular old\npython objects into TorchDynamo notions:\n def BUILD_TUPLE(self, inst):\n items = self.popn(inst.argval)\n options = VariableTracker.propagate(items)\n self.push(TupleVariable(items, **options))\nHere is what this code does:\n1. The function reads \"argval\", which in this case, is analogous to\n \"counts\" in the pydoc for the equivalent instruction.\n2. The function calls \"popn\" to pop the items; in this case, the\n signature is \"def", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "popn(self, n: int) -> List[TensorVariable]:\" this hints at an\n underlying contract - we are returning \"TensorVariables\". If we\n take a closer look at \"symbolic_convert.py\" and\n \"InstructionTranslatorBase\"/\"InstructionTranslator\", we see that the\n only thing pushed onto and popped from our stack are\n \"VariableTracker\"s.\n3. The function calls \"VariableTracker.propagate\". This takes the\n guards from every single item popped off the stack in 2, and\n recursively traverses it and combines all the guards into\n \"options\":\n return {\n \"guards\": guards,\n }\n4. The function then makes a new instance of a \"VariableTracker\",\n \"TupleVariable\", out of the \"items\" and \"options\". This then allows\n us to install all the appropriate guards from the \"items\" that make\n up the new \"TupleVariable\".\nNote:\n Where did the first guards come from? Propagation is a good\n technique, but we need something created before it can be\n propagated. \"VariableBuilder\" calls \"make_guards\" as it creates", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "\"VariableTracker\" instances, from \"f_locals\". This in turn calls\n into the \"source\", to have it create guards.\nAfter all this, bytecode translation is done and we are one step\ncloser to producing \"GuardedCode\". We now understand how locals become\n\"VariableTracker\"s, how instructions are handled, and where guards are\ncalled on for creation. Before we can go into seeing how code and\nguards are combined into a GuardedCode object, we need to dig a little\nbit into those \"make_guard\" and \"source.make_guard\" calls above. We\ncan then understand what was going on when we made guards alongside,\nand on, \"VariableTracker\" instances.\nMaking Guards\n=============\nGuards are just Python objects, of the class \"Guard\". Let's look at\nthem in more detail.\nLooking at the definition of the dataclass (and therefore, ctor\nsignature), we see that it has a name, a source, and a create\nfunction.\n @dataclasses.dataclass\n class Guard:\n name: str\n source: GuardSource\n create_fn: Callable", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
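The Guard dataclass can be exercised in isolation with a runnable toy version. Here the string source and the equals-style create_fn are simplifications standing in for the real GuardSource enum and GuardBuilder machinery, purely for illustration:

```python
import dataclasses
from typing import Callable

@dataclasses.dataclass
class Guard:
    name: str          # name of the guarded variable
    source: str        # stands in for the GuardSource enum
    create_fn: Callable

def equals_match(expected):
    # Toy EQUALS_MATCH-style check: passes while the tracked value is unchanged.
    def check(f_locals):
        return f_locals.get("x") == expected
    return check

g = Guard(name="x", source="LOCAL", create_fn=equals_match(3))
print(g.create_fn({"x": 3}))  # True  -> cached code can be reused
print(g.create_fn({"x": 4}))  # False -> recompile or graph break
```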
{"text": "The name should be the name of the variable.\nThe source here is an enum indicating what kind of source the guard\nbelongs to.\nNote:\n Not to be confused with \"Source\" and the other types in \"source.py\",\n as stored on \"VariableTracker\".\n\"create_fn\" provides the main functionality to transition from a\nsimple dataclass to actually producing valid Python code to be invoked\nto know whether or not things have changed in between invocations, and\nwhether we can safely read from the code cache or not.\nThe most common code path for getting an instance of a guard is\nthrough \"make_guards\" on \"VariableTracker\":\n\"make_guards\" -> \"source.make_guard\" -> \"return Guard(self.name(),\nself.guard_source(), fn)\"\nOr, in a concrete example:\n ...\n elif istype(value, range):\n guards = self.make_guards(GuardBuilder.EQUALS_MATCH)\n return RangeVariable(value=value, guards=guards)\nSince \"source\" was set at the construction time of this", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "\"VariableTracker\", all that was needed here was to provide the \"fn\",\n\"GuardBuilder.EQUALS_MATCH\", to the \"create_fn\" field.\nThis \"create_fn\" must be a method on \"GuardBuilder\". The reason for\nthis becomes apparent in our next step. Once we have all the guards\ncreated for a frame, we move on to \"CheckFunctionManager\" and\n\"compile_check_fn\".\nBefore the \"convert_frame\" function can produce a \"GuardedCode\", it\nneeds to run the \"CheckFunctionManager\", with all the guards, to\nproduce a \"check_fn\" which will then, in turn, get passed in alongside\nthe code into \"GuardedCode\". This is the same \"check_fn\" that we store\nin our cache entry, and the same one we run to know whether or not to\nretrieve the code stored alongside. For reference, here is that code:\n static CacheEntry *create_cache_entry(CacheEntry *next,\n PyObject *guarded_code) {\n CacheEntry *e = (CacheEntry *)malloc(sizeof(CacheEntry));\n DEBUG_NULL_CHECK(e);", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": " e->check_fn = PyObject_GetAttrString(guarded_code, \"check_fn\");\n NULL_CHECK(e->check_fn);\n e->code = (PyCodeObject *)PyObject_GetAttrString(guarded_code, \"code\");\n NULL_CHECK(e->code);\n e->next = next;\n return e;\n }\nWe now know how a \"check_fn\" function is used, who makes it, and what\nit is composed of, but what we do not yet know is how. How does a list\nof \"Guard\" objects become a function we can run later on?\nFirst, we iterate over these guards:\n for guard in sorted(guards or [], key=Guard.sort_key):\n if not config.guard_nn_modules and guard.is_nn_module():\n continue\n guard.create(local_builder, global_builder)\nCalling \"guard.create\" runs that \"create_fn\" we set on the \"Guard\"\nclass above (don't confuse it with the \"check_fn\" we are working on\nproducing; the names are similar, so it can get a little confusing).\nIn our example above, our \"create_fn\" is \"GuardBuilder.EQUALS_MATCH\".", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "So we now invoke it, passing in \"self\" and the guard itself.\nThe signature is: \"def EQUALS_MATCH(self, guard: Guard):\"\nAnd internally to that function, we can use the \"name\" on the guard to\nget back our original object, querying it for data and type\ninformation, which in turn gets us to the most important bit:\nappending code.\nAt its simplest, \"EQUALS_MATCH\" appends just one line of code:\n\"self.code.append(f\"{ref} == {val!r}\")\", where \"ref\" is the name of\nthe variable and \"val\" is the value. It might produce code like this:\n y == 2\nThis is a basic example. But if we append a few other kinds of\n\"GuardBuilder\" functions and then combine them all with \"and\" in\nbetween each statement (as we do), we might get something like this:\n guardedcode.valid and _check_type_id(y, 94367738391392) and y == 2 and ___check_tensors(x)\nHere is what this code performs:\n1. A check for \".valid\"\n2. A type ID check\n3. A value check\n4. A tensor check", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
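As a rough, self-contained sketch of this idea, guard-code strings like the one above can be joined with "and" and compiled into a single callable. The helper names and the eval-based compilation here are illustrative assumptions, not the actual torch._dynamo implementation:

```python
# Toy model of how Dynamo-style guard code strings can be combined
# into a single callable "check_fn". Names are illustrative only.

def compile_check_fn(code_parts, closure_vars):
    # Join the individual guard expressions with "and", exactly as
    # described above, and compile the result into one function that
    # takes the frame's locals as a mapping L.
    guard_body = " and ".join(code_parts) if code_parts else "True"
    return eval(f"lambda L: {guard_body}", dict(closure_vars))

# Guards for: "y is an int equal to 2".
code_parts = [
    "_check_type_id(L['y'], int)",  # simplified type ID check
    "L['y'] == 2",                  # value check
]
closure = {"_check_type_id": lambda v, t: type(v) is t}
check_fn = compile_check_fn(code_parts, closure)

print(check_fn({"y": 2}))  # True: cached code may be reused
print(check_fn({"y": 3}))  # False: recompile
```

A guard failure (the `False` case) is what would send execution down the recompilation path described below.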
{"text": "This becomes the heart of our \"check_fn\", which in turn is evaluated\nthe next time we encounter this code. It will then check:\n1. Is this code still valid?\n2. If (1), does \"y\" still have a type ID of \"94367738391392\"?\n3. If (2), is \"y\" still 2?\n4. If (3), let's check whether tensor \"x\" changed in some specific\n ways.\nIf all of these are still true, then we can use the code cached\nalongside this \"check_fn\".\nNote:\n For a deeper dive into how and where this happens, you can read\n \"static PyCodeObject *lookup(CacheEntry *e, PyObject *f_locals) {\"\n in \"_eval_frame.c\".\nIf not, then we can move on to recompiling the code anew, storing that\nin the cache alongside this code, and a whole new \"check_fn\", again to\nbe checked on yet another subsequent frame.\nThere are lots of other such functions on \"GuardBuilder\" which get\ncoalesced into, at times massive, strings which then get evaluated as\nPython code and stored into \"check_fn\". The example above illustrates\n", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
{"text": "a simple case. To understand this functionality better, read the\nother functions on \"GuardBuilder\", or better yet, dump the \"code\"\nvariable in \"compile_check_fn\" to see what is getting produced,\nespecially on larger, real models.\nSummary\n=======\nIn this section, we have reviewed:\n* The role of \".valid\" and invalidation around weak references (and\n potentially soon-to-be NN Module invalidations).\n* How the C++ side of guard functions (\"check_type_id\",\n \"_check_tensors\", etc.) operate.\n* What happens when guards fail.\n* What happens if we produce invalid guard code.\nWe covered how user-provided code wrapped in a TorchDynamo context\ngoes on to get traced and tracked internally, organized into\n\"VariableTracker\"s, \"Source\"s, and subsequently \"Guard\"s, and how\nthose \"Guard\"s in turn guide cache entry selection and invalidation\nwhen handling Python code.", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"}
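To recap the guard-creation flow, here is a compressed toy model of the pieces described above. The class shapes mirror this document (a Guard with a name, source, and create_fn; a builder that accumulates code lines), but the implementation is a greatly simplified assumption, not Dynamo's real code:

```python
# Hypothetical, compressed model of the Guard flow: a Guard carries a
# name, a source tag, and a create_fn; a GuardBuilder method turns it
# into one line of checkable guard code.
import dataclasses
from enum import Enum
from typing import Callable

class GuardSource(Enum):
    LOCAL = 0
    GLOBAL = 1

@dataclasses.dataclass
class Guard:
    name: str
    source: GuardSource
    create_fn: Callable  # e.g. GuardBuilder.EQUALS_MATCH

    def create(self, builder):
        # Dispatch to the GuardBuilder method stored in create_fn.
        return self.create_fn(builder, self)

class GuardBuilder:
    def __init__(self, scope):
        self.scope = scope  # e.g. the frame's f_locals
        self.code = []      # accumulated guard expressions

    def EQUALS_MATCH(self, guard):
        val = self.scope[guard.name]
        # Append one line of guard code, as in the doc's example.
        self.code.append(f"{guard.name} == {val!r}")

builder = GuardBuilder({"y": 2})
Guard("y", GuardSource.LOCAL, GuardBuilder.EQUALS_MATCH).create(builder)
print(builder.code)  # ['y == 2']
```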
{"text": "DDP Communication Hooks\nThe DDP communication hook is a generic interface to control how\ngradients are communicated across workers, by overriding the vanilla\nallreduce in DistributedDataParallel. A few built-in communication\nhooks are provided, and users can easily apply any of these hooks to\noptimize communication. In addition, the hook interface can also\nsupport user-defined communication strategies for more advanced use\ncases.\nHow to Use a Communication Hook?\n================================\nTo use a communication hook, the user just needs to let the DDP model\nregister the hook before the training loop, as below:\n\"torch.nn.parallel.DistributedDataParallel.register_comm_hook()\"\nWhat Does a Communication Hook Operate On?\n==========================================\nA communication hook provides a flexible way to allreduce gradients.\nTherefore, it mainly operates on the gradients on each replica before\nallreduce, which are bucketized to increase the overlap between", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "communication and computation. In particular,\n\"torch.distributed.GradBucket\" represents a bucket of gradient tensors\nto be allreduced.\nclass torch.distributed.GradBucket\n This class mainly passes a flattened gradient tensor (returned by\n \"buffer()\") to the DDP communication hook. This tensor can be\n further decomposed into a list of per-parameter tensors within this\n bucket (returned by \"get_per_parameter_tensors()\") to apply\n layer-wise operations.\ntorch.distributed.GradBucket.index(self: torch._C._distributed_c10d.GradBucket) -> int\n Warning:\n Since the buckets are rebuilt after the first iteration, one\n should not rely on the indices at the beginning of training.\n Returns:\n The index of a bucket that stores gradients of a few contiguous\n layers. All the gradients are bucketized.\ntorch.distributed.GradBucket.buffer(self: torch._C._distributed_c10d.GradBucket) -> torch.Tensor\n Returns:\n A flattened 1D \"torch.Tensor\" buffer, which can be further", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "decomposed into a list of per-parameter tensors within this\n bucket.\ntorch.distributed.GradBucket.gradients(self: torch._C._distributed_c10d.GradBucket) -> List[torch.Tensor]\n Returns:\n A list of \"torch.Tensor\". Each tensor in the list corresponds to\n a gradient.\ntorch.distributed.GradBucket.is_last(self: torch._C._distributed_c10d.GradBucket) -> bool\n Returns:\n Whether this bucket is the last bucket to allreduce in an\n iteration. This also means that this bucket corresponds to the\n first few layers in the forward pass.\ntorch.distributed.GradBucket.set_buffer(self: torch._C._distributed_c10d.GradBucket, buffer: torch.Tensor) -> None\n Replaces the tensor in the bucket with the input tensor buffer.\ntorch.distributed.GradBucket.parameters(self: torch._C._distributed_c10d.GradBucket) -> List[torch.Tensor]\n Returns:\n A list of \"torch.Tensor\". Each tensor in the list corresponds to\n a model parameter.\nDefault Communication Hooks\n===========================", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "Default communication hooks are simple stateless hooks, so the\ninput state in \"register_comm_hook\" is either a process group or\n\"None\". The input \"bucket\" is a \"torch.distributed.GradBucket\" object.\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.allreduce_hook(process_group, bucket)\n This DDP communication hook just calls \"allreduce\" using\n \"GradBucket\" tensors. Once gradient tensors are aggregated across\n all workers, its \"then\" callback takes the mean and returns the\n result. If a user registers this hook, DDP results are expected to\n be the same as the case where no hook was registered. Hence, this\n won't change the behavior of DDP, and a user can use this as a\n reference, or modify this hook to log useful information or for any\n other purposes, without affecting DDP behavior.\n Example::\n >>> ddp_model.register_comm_hook(process_group, allreduce_hook)\n Return type:\n Future[Tensor]", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
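The hook contract above (take a state and a bucket, return a `Future[torch.Tensor]` resolving to the averaged gradients) can be sketched without a real process group. The `FakeBucket` class and the simulated allreduce below are assumptions for illustration only, standing in for `torch.distributed.GradBucket` and `dist.all_reduce`:

```python
# Minimal sketch of the DDP comm-hook contract. A real hook receives a
# torch.distributed.GradBucket and would launch dist.all_reduce; here a
# stand-in bucket and a pre-summed buffer let the shape of the API be
# exercised in a single process.
import torch

class FakeBucket:
    def __init__(self, grads):
        # DDP flattens per-parameter gradients into one 1D buffer.
        self._buffer = torch.cat([g.flatten() for g in grads])
    def buffer(self):
        return self._buffer

def mean_hook(state, bucket):
    world_size = state["world_size"]
    fut = torch.futures.Future()
    # Pretend the cross-rank sum already happened; take the mean, as
    # allreduce_hook's "then" callback does.
    fut.set_result(bucket.buffer() / world_size)
    return fut

bucket = FakeBucket([torch.tensor([2.0, 4.0]), torch.tensor([6.0])])
out = mean_hook({"world_size": 2}, bucket).wait()
print(out)  # tensor([1., 2., 3.])
```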
{"text": "torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook(process_group, bucket)\n This DDP communication hook implements a simple gradient\n compression approach that casts \"GradBucket\" tensor to half-\n precision floating-point format (\"torch.float16\") and then divides\n it by the process group size. It allreduces those \"float16\"\n gradient tensors. Once compressed gradient tensors are allreduced,\n the chained callback \"decompress\" casts it back to the input data\n type (such as \"float32\").\n Example::\n >>> ddp_model.register_comm_hook(process_group, fp16_compress_hook)\n Return type:\n Future[Tensor]\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.bf16_compress_hook(process_group, bucket)\n Warning: This API is experimental, and it requires NCCL version\n later than 2.9.6.\n This DDP communication hook implements a simple gradient\n compression approach that casts \"GradBucket\" tensor to half-", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "precision Brain floating point format (\"torch.bfloat16\") and then\n divides it by the process group size. It allreduces those\n \"bfloat16\" gradient tensors. Once compressed gradient tensors are\n allreduced, the chained callback \"decompress\" casts it back to the\n input data type (such as \"float32\").\n Example::\n >>> ddp_model.register_comm_hook(process_group, bf16_compress_hook)\n Return type:\n Future[Tensor]\nAdditionally, wrappers are provided so that the compression in\n\"fp16_compress_hook()\" or \"bf16_compress_hook()\" can be combined with\nother communication hooks.\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_wrapper(hook)\n This wrapper casts the input gradient tensor of a given DDP\n communication hook to half-precision floating point format\n (\"torch.float16\"), and casts the resulting tensor of the given hook\n back to the input data type, such as \"float32\".\n Therefore, \"fp16_compress_hook\" is equivalent to", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "\"fp16_compress_wrapper(allreduce_hook)\".\n Example::\n >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)\n >>> ddp_model.register_comm_hook(state, fp16_compress_wrapper(powerSGD_hook))\n Return type:\n Callable[[Any, GradBucket], Future[Tensor]]\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.bf16_compress_wrapper(hook)\n Warning: This API is experimental, and it requires NCCL version\n later than 2.9.6.\n This wrapper casts the input gradient tensor of a given DDP\n communication hook to half-precision Brain floating point format\n (\"torch.bfloat16\", see\n https://en.wikipedia.org/wiki/Bfloat16_floating-point_format), and\n casts the resulting tensor of the given hook back to the input data\n type, such as \"float32\".\n Therefore, \"bf16_compress_hook\" is equivalent to\n \"bf16_compress_wrapper(allreduce_hook)\".\n Example::", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "Example::\n >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)\n >>> ddp_model.register_comm_hook(state, bf16_compress_wrapper(powerSGD_hook))\n Return type:\n Callable[[Any, GradBucket], Future[Tensor]]\nPowerSGD Communication Hook\n===========================\nPowerSGD (Vogels et al., NeurIPS 2019) is a gradient compression\nalgorithm, which can provide very high compression rates and\naccelerate bandwidth-bound distributed training. This algorithm needs\nto maintain both some hyperparameters and the internal state.\nTherefore, PowerSGD communication hook is a stateful hook, and the\nuser needs to provide a state object defined as below.\nPowerSGD State\n", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "class torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState(process_group, matrix_approximation_rank=1, start_powerSGD_iter=1000, min_compression_rate=2, use_error_feedback=True, warm_start=True, orthogonalization_epsilon=0, random_seed=0, compression_stats_logging_frequency=10000, batch_tensors_with_same_shape=False)\n Stores both the algorithm's hyperparameters and the internal state\n for all the gradients during the training. In particular,\n \"matrix_approximation_rank\" and \"start_powerSGD_iter\" are the main\n hyperparameters that should be tuned by the user. For performance,\n we suggest keeping the binary hyperparameters \"use_error_feedback\"\n and \"warm_start\" on.\n 1. \"matrix_approximation_rank\" controls the size of compressed low-\n rank tensors, which determines the compression rate. The lower\n the rank, the stronger the compression.\n 1.1. If \"matrix_approximation_rank\" is too low, the full", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "model quality may take more training steps to reach, or may\n never be reached, resulting in a loss in accuracy.\n 1.2. Increasing \"matrix_approximation_rank\" can substantially\n increase the computation costs of the compression, and the\n accuracy may not improve further beyond a certain\n \"matrix_approximation_rank\" threshold.\n To tune \"matrix_approximation_rank\", we suggest starting from 1 and\n increasing by factors of 2 (like an exponential grid search: 1, 2,\n 4, ...), until a satisfactory accuracy is reached. Typically only a\n small value 1-4 is used. For some NLP tasks (as shown in Appendix D\n of the original paper), this value has been increased to 32.\n 2. \"start_powerSGD_iter\" defers PowerSGD compression until step\n \"start_powerSGD_iter\", and vanilla allreduce runs prior to step\n \"start_powerSGD_iter\". This hybrid scheme of vanilla allreduce\n + PowerSGD can effectively improve the accuracy, even when a", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "relatively small \"matrix_approximation_rank\" is used. This is\n because the beginning of the training phase is usually very\n sensitive to inaccurate gradients, and compressing gradients too\n early may make the training quickly take a suboptimal\n trajectory, which can result in an irrecoverable impact on the\n accuracy.\n To tune \"start_powerSGD_iter\", we suggest starting with 10% of\n total training steps, and increasing it until a satisfactory\n accuracy is reached. If there is a warm-up stage in the training,\n \"start_powerSGD_iter\" typically should be no less than the number\n of warm-up steps.\n 3. \"min_compression_rate\" is the minimum compression rate required\n when a layer is compressed. Due to the computation overheads\n incurred by the compression, a tensor is worth compressing only\n if there can be sufficient saving in bandwidth, where \"(num_rows\n + num_cols) * matrix_approximation_rank * min_compression_rate <", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "num_rows * num_cols\". If the specified compression rate\n threshold cannot be satisfied, the tensor will be directly\n allreduced without compression.\n Compression statistics are logged every\n \"compression_stats_logging_frequency\" iterations once PowerSGD\n compression starts.\n 4. \"orthogonalization_epsilon\" can be a very small value (e.g.,\n 1e-8) added to every normalized matrix column in the\n orthogonalization step, to prevent div-by-zero errors if any\n column has all 0s. If this can already be prevented (e.g., by\n batch normalization), an epsilon of 0 is recommended for\n accuracy.\n 5. \"batch_tensors_with_same_shape\" controls whether to compress and\n decompress tensors with the same shape in a batched operation to\n achieve higher parallelism. Note that you should also increase\n the bucket size (i.e., the \"bucket_cap_mb\" arg in the DDP\n constructor) to make more same-shaped tensors appear in the same\n bucket,", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "however this may reduce the overlap between computation and\n communication, and increase the memory footprint due to stacking\n the tensors of the same shape. Set to \"True\" if the compression\n / decompression computation is a bottleneck.\n Warning:\n If error feedback or warm-up is enabled, the minimum value of\n \"start_powerSGD_iter\" allowed in DDP is 2. This is because there\n is another internal optimization that rebuilds buckets at\n iteration 1 in DDP, and this can conflict with any tensor\n memorized before the rebuild process.\nPowerSGD Hooks\n\nWarning:\n PowerSGD typically requires extra memory of the same size as the\n model's gradients to enable error feedback, which can compensate for\n biased compressed communication and improve accuracy.\nWarning:\n PowerSGD hooks may conflict with Apex automatic mixed precision\n package. Please use PyTorch native automatic mixed precision package\n instead.", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "instead.\ntorch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook(state, bucket)\n This DDP communication hook implements the PowerSGD gradient\n compression algorithm described in the paper. Once gradient tensors\n are aggregated across all workers, this hook applies compression as\n follows:\n 1. Views the input flattened 1D gradient tensor as a list of per-\n parameter tensors, and divides all the tensors into two groups:\n 1.1. The tensors that should be compressed before allreduce,\n because the compression can give enough saving in bandwidth.\n 1.2. The rest of the tensors will be directly allreduced\n without compression, including all the vector tensors (for\n biases).\n 2. Handles uncompressed tensors:\n 2.1. Allocates contiguous memory for those uncompressed\n tensors, and allreduces all the uncompressed tensors as a\n batch, without compression;\n 2.2. Copies the individual uncompressed tensors from the", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "contiguous memory back to the input tensor.\n 3. Handles the tensors that should be compressed by PowerSGD\n compression:\n 3.1. For each tensor M, creates two low-rank tensors P and Q\n for decomposing M, such that M = PQ^T, where Q is initialized\n from a standard normal distribution and orthogonalized;\n 3.2. Computes each P in Ps, which is equal to MQ;\n 3.3. Allreduces Ps as a batch;\n 3.4. Orthogonalizes each P in Ps;\n 3.5. Computes each Q in Qs, which is approximately equal to\n M^TP;\n 3.6. Allreduces Qs as a batch;\n 3.7. Computes each M among all the compressed tensors, which\n is approximately equal to PQ^T.\n Note that this communication hook enforces vanilla allreduce for\n the first \"state.start_powerSGD_iter\" iterations. This not only\n gives the user more control over the tradeoff between speedup and\n accuracy, but also helps abstract away some complexity of the", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "internal optimization of DDP for future communication hook\n developers.\n Parameters:\n * state (PowerSGDState) -- State information to configure\n the compression rate and support error feedback, warm start,\n etc. To tune the compression configs, one mainly needs to tune\n \"matrix_approximation_rank\", \"start_powerSGD_iter\" and\n \"min_compression_rate\".\n * bucket (dist.GradBucket) -- Bucket that stores a 1D\n flattened gradient tensor that batches multiple per-variable\n tensors. Note that since DDP comm hooks only support single\n process single device mode, only exactly one tensor is stored\n in this bucket.\n Returns:\n Future handler of the communication, which updates the gradients\n in place.\n Return type:\n Future[Tensor]\n Example::\n >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1,\n start_powerSGD_iter=10, min_compression_rate=0.5)", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": ">>> ddp_model.register_comm_hook(state, powerSGD_hook)\ntorch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.batched_powerSGD_hook(state, bucket)\n This DDP communication hook implements a simplified PowerSGD\n gradient compression algorithm described in the paper. This variant\n does not compress the gradients layer by layer, but instead\n compresses the flattened input tensor that batches all the\n gradients. Therefore, it is faster than \"powerSGD_hook()\", but\n usually results in a much lower accuracy, unless\n \"matrix_approximation_rank\" is 1.\n Warning:\n Increasing \"matrix_approximation_rank\" here may not necessarily\n increase the accuracy, because batching per-parameter tensors\n without column/row alignment can destroy low-rank structure.\n Therefore, the user should always consider \"powerSGD_hook()\"\n first, and only consider this variant when a satisfactory\n accuracy can be achieved when \"matrix_approximation_rank\" is 1.", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
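Both hooks build on the same low-rank approximation M ≈ PQ^T. The single-worker sketch below illustrates one compression/decompression round in plain tensor math, with the allreduce steps omitted; using `torch.linalg.qr` for orthogonalization is an assumption here, standing in for the hook's internal routine:

```python
# One round of the PowerSGD low-rank approximation M ~= P Q^T, single
# worker (the two allreduce steps are omitted). QR is a stand-in for
# the hook's internal orthogonalization.
import torch

torch.manual_seed(0)
rank = 1  # matrix_approximation_rank
# A genuinely rank-1 "gradient" matrix: compression should be
# near-lossless in this case.
M = torch.outer(torch.arange(1.0, 5.0), torch.arange(1.0, 4.0))  # 4x3

Q = torch.linalg.qr(torch.randn(M.shape[1], rank)).Q  # init + orthogonalize
P = M @ Q                    # compute P (would be allreduced)
P = torch.linalg.qr(P).Q     # orthogonalize P
Q = M.T @ P                  # compute Q (would be allreduced)
M_hat = P @ Q.T              # decompress

print(torch.allclose(M, M_hat, atol=1e-4))  # True for a rank-1 M
```

For a higher-rank M the reconstruction would only be approximate, which is why `matrix_approximation_rank` trades accuracy for bandwidth.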
{"text": "Once gradient tensors are aggregated across all workers, this hook\n applies compression as follows:\n 1. Views the input flattened 1D gradient tensor as a square-shaped\n tensor M with 0 paddings;\n 2. Creates two low-rank tensors P and Q for decomposing M, such\n that M = PQ^T, where Q is initialized from a standard normal\n distribution and orthogonalized;\n 3. Computes P, which is equal to MQ;\n 4. Allreduces P;\n 5. Orthogonalizes P;\n 6. Computes Q, which is approximately equal to M^TP;\n 7. Allreduces Q;\n 8. Computes M, which is approximately equal to PQ^T.\n 9. Truncates the input tensor to the original length.\n Note that this communication hook enforces vanilla allreduce for\n the first \"state.start_powerSGD_iter\" iterations. This not only\n gives the user more control over the tradeoff between speedup and\n accuracy, but also helps abstract away some complexity of the\n internal optimization of DDP for future communication hook\n developers.\n Parameters:", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "* state (PowerSGDState) -- State information to configure\n the compression rate and support error feedback, warm start,\n etc. To tune the compression configs, one mainly needs to tune\n \"matrix_approximation_rank\" and \"start_powerSGD_iter\".\n * bucket (dist.GradBucket) -- Bucket that stores a 1D\n flattened gradient tensor that batches multiple per-variable\n tensors. Note that since DDP comm hooks only support single\n process single device mode, only exactly one tensor is stored\n in this bucket.\n Returns:\n Future handler of the communication, which updates the gradients\n in place.\n Return type:\n Future[Tensor]\n Example::\n >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1)\n >>> ddp_model.register_comm_hook(state, batched_powerSGD_hook)\nDebugging Communication Hooks\n=============================", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "As the name implies, debugging communication hooks are only used\nfor debugging and performance optimization purposes.\nWarning:\n Debugging communication hooks do not necessarily output the correct\n results.\ntorch.distributed.algorithms.ddp_comm_hooks.debugging_hooks.noop_hook(_, bucket)\n This DDP communication hook returns a future that wraps the input,\n so it is a noop that does not incur any communication overhead.\n This hook should only be used for headroom analysis of\n allreduce optimization, instead of the normal gradient\n synchronization. For example, if less than 10% speedup in training\n time can be observed after this hook is registered, it usually\n implies that allreduce is not a performance bottleneck for this\n case. Such instrumentation can be particularly useful if GPU traces\n cannot be easily retrieved or the trace analysis is complicated by\n some factors such as the overlap between allreduce and", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "computation, or the desynchronization across ranks.\n Example::\n >>> ddp_model.register_comm_hook(None, noop_hook)\n Return type:\n Future[Tensor]\nCheckpointing of Communication Hooks\n====================================\nA stateful communication hook can be saved as a part of model\ncheckpointing to enable trainer restarts. To make a hook serializable,\n\"__setstate__\" and \"__getstate__\" should be defined.\nWarning:\n \"__getstate__\" should exclude non-serializable attributes from the\n returned dictionary.\nWarning:\n \"__setstate__\" should properly initialize non-serializable\n attributes, excluded from the provided \"state\".\n\"PowerSGDState\" has \"__setstate__\" and \"__getstate__\" implemented and\ncan be used as a reference.\nclass torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState\n __getstate__()\n Returns a \"Dict[str, Any]\" which will be pickled and saved.\n \"process_group\" is not serializable and excluded from the\n returned state.\n __setstate__(state)", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "Takes a provided \"state\" and retrieves \"PowerSGDState\".\n \"process_group\" is set to the default.\nHere is a simple, end-to-end example of saving and reloading PowerSGD\nstate and hook.\n import os\n import sys\n import tempfile\n import torch\n import torch.distributed as dist\n import torch.multiprocessing as mp\n import torch.nn as nn\n import torch.optim as optim\n from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD\n from torch.nn.parallel import DistributedDataParallel\n class SimpleModel(nn.Module):\n def __init__(self):\n super(SimpleModel, self).__init__()\n self.fc1 = nn.Linear(24, 24)\n self.relu = nn.ReLU()\n self.fc2 = nn.Linear(24, 12)\n def forward(self, x):\n return self.fc2(self.relu(self.fc1(x)))\n def setup(rank, world_size):\n os.environ['MASTER_ADDR'] = 'localhost'\n os.environ['MASTER_PORT'] = '12355'\n # initialize the process group\n dist.init_process_group(\"nccl\", rank=rank, world_size=world_size)\n def cleanup():", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "dist.destroy_process_group()\n def run_demo(demo_fn, world_size):\n mp.spawn(\n demo_fn,\n args=(world_size,),\n nprocs=world_size,\n join=True)\n def demo_serialization(rank, world_size):\n setup(rank, world_size)\n CHECKPOINT = tempfile.gettempdir() + \"/checkpoint.pt\"\n model = SimpleModel().to(rank)\n ddp_model = DistributedDataParallel(model, device_ids=[rank])\n powersgd_hook = powerSGD.powerSGD_hook\n powersgd_state = powerSGD.PowerSGDState(process_group=None)\n optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)\n ddp_model.register_comm_hook(powersgd_state, powersgd_hook)\n state = {\n 'state_dict': ddp_model.state_dict(),\n 'comm_hook': powersgd_hook,\n 'comm_hook_state': powersgd_state}\n if rank == 0:\n torch.save(state, CHECKPOINT)\n dist.barrier()\n map_location = {'cuda:%d' % 0: 'cuda:%d' % rank}", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "checkpoint = torch.load(CHECKPOINT, map_location=map_location)\n ddp_model.load_state_dict(checkpoint['state_dict'])\n powersgd_hook = checkpoint['comm_hook']\n powersgd_state = checkpoint['comm_hook_state']\n ddp_model.register_comm_hook(powersgd_state, powersgd_hook)\n if rank == 0:\n os.remove(CHECKPOINT)\n cleanup()\n if __name__ == \"__main__\":\n n_gpus = torch.cuda.device_count()\n assert n_gpus >= 2, f\"Requires at least 2 GPUs to run, but got {n_gpus}\"\n world_size = n_gpus\n run_demo(demo_serialization, world_size)\nAcknowledgements\n================\nMany thanks to PowerSGD paper author Thijs Vogels for the code review\non the PowerSGD communication hook, as well as the comparison\nexperiments, which show that the performance of the PowerSGD\ncommunication hook is on par with the implementation in the original\npaper.", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"}
{"text": "Pipeline Parallelism\nPipeline parallelism was originally introduced in the GPipe paper and\nis an efficient technique to train large models on multiple GPUs.\nWarning:\n Pipeline Parallelism is experimental and subject to change.\nModel Parallelism using multiple GPUs\n=====================================\nTypically for large models which don't fit on a single GPU, model\nparallelism is employed, where certain parts of the model are placed\non different GPUs. However, if this is done naively for sequential\nmodels, the training process suffers from GPU under-utilization, since\nonly one GPU is active at a time, as shown in the figure below:\n [image]The figure represents a model with 4 layers placed on 4\n different GPUs (vertical axis). The horizontal axis represents\n training this model through time demonstrating that only 1 GPU is\n utilized at a time (image source).\nPipelined Execution\n===================\nTo alleviate this problem, pipeline parallelism splits the input", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "minibatch into multiple microbatches and pipelines the execution of\nthese microbatches across multiple GPUs. This is outlined in the\nfigure below:\n [image]The figure represents a model with 4 layers placed on 4\n different GPUs (vertical axis). The horizontal axis represents\n training this model through time demonstrating that the GPUs are\n utilized much more efficiently. However, there still exists a\n bubble (as demonstrated in the figure) where certain GPUs are not\n utilized. (image source).\nPipe APIs in PyTorch\n====================\nclass torch.distributed.pipeline.sync.Pipe(module, chunks=1, checkpoint='except_last', deferred_batch_norm=False)\n Wraps an arbitrary \"nn.Sequential\" module to train on using\n synchronous pipeline parallelism. If the module requires lots of\n memory and doesn't fit on a single GPU, pipeline parallelism is a\n useful technique to employ for training.\n The implementation is based on the torchgpipe paper.", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "Pipe combines pipeline parallelism with checkpointing to reduce\n peak memory required to train while minimizing device under-\n utilization.\n You should place all the modules on the appropriate devices and\n wrap them into an \"nn.Sequential\" module defining the desired order\n of execution. If a module does not contain any parameters/buffers,\n it is assumed this module should be executed on CPU and appropriate\n input tensors to the module are moved to CPU before execution. This\n behavior can be overridden by the \"WithDevice\" wrapper which can be\n used to explicitly specify which device a module should run on.\n Parameters:\n * module (\"nn.Sequential\") -- sequential module to be\n parallelized using pipelining. Each module in the sequence has\n to have all of its parameters on a single device. Each module\n in the sequence has to either be an nn.Module or\n \"nn.Sequential\" (to combine multiple sequential modules on a\n single device)", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "single device)\n * chunks (int) -- number of micro-batches (default: \"1\")\n * checkpoint (str) -- when to enable checkpointing, one of\n \"'always'\", \"'except_last'\", or \"'never'\" (default:\n \"'except_last'\"). \"'never'\" disables checkpointing completely,\n \"'except_last'\" enables checkpointing for all micro-batches\n except the last one and \"'always'\" enables checkpointing for\n all micro-batches.\n * deferred_batch_norm (bool) -- whether to use deferred\n \"BatchNorm\" moving statistics (default: \"False\"). If set to\n \"True\", we track statistics across multiple micro-batches to\n update the running statistics per mini-batch.\n Raises:\n * TypeError -- the module is not a \"nn.Sequential\".\n * ValueError -- invalid arguments\n Example::\n Pipeline of two FC layers across GPUs 0 and 1.\n >>> # Need to initialize RPC framework first.\n >>> os.environ['MASTER_ADDR'] = 'localhost'", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": ">>> os.environ['MASTER_ADDR'] = 'localhost'\n >>> os.environ['MASTER_PORT'] = '29500'\n >>> torch.distributed.rpc.init_rpc('worker', rank=0, world_size=1)\n >>>\n >>> # Build pipe.\n >>> fc1 = nn.Linear(16, 8).cuda(0)\n >>> fc2 = nn.Linear(8, 4).cuda(1)\n >>> model = nn.Sequential(fc1, fc2)\n >>> model = Pipe(model, chunks=8)\n >>> input = torch.rand(16, 16).cuda(0)\n >>> output_rref = model(input)\n Note:\n You can wrap a \"Pipe\" model with\n \"torch.nn.parallel.DistributedDataParallel\" only when the\n checkpoint parameter of \"Pipe\" is \"'never'\".\n Note:\n \"Pipe\" only supports intra-node pipelining currently, but will be\n expanded to support inter-node pipelining in the future. The\n forward function returns an \"RRef\" to allow for inter-node\n pipelining in the future, where the output might be on a remote\n host. For intra-node pipelining you can use \"local_value()\" to\n retrieve the output locally.\n Warning:", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "retrieve the output locally.\n Warning:\n \"Pipe\" is experimental and subject to change.\n forward(inputs)\n Processes a single input mini-batch through the pipe and returns\n an \"RRef\" pointing to the output. \"Pipe\" is a fairly transparent\n module wrapper. It doesn't modify the input and output signature\n of the underlying module. There is, however, a type restriction:\n input and output have to contain at least one tensor. This restriction\n is applied at partition boundaries too.\n The sequence of inputs is fed into the first stage of the\n pipeline as \"inputs\". As a result the positional args for this\n function should match the positional args for the first stage of\n the pipeline. The same condition applies for the output of one stage\n of the pipeline, which is the input for the next stage.\n The input tensor is split into multiple micro-batches based on\n the \"chunks\" parameter used to initialize \"Pipe\". The batch size", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "is assumed to be the first dimension of the tensor and if the\n batch size is less than \"chunks\", the number of micro-batches is\n equal to the batch size.\n Only tensors are split into multiple micro-batches; non-Tensor\n inputs are simply replicated as-is in each micro-batch. Non-\n Tensor outputs in the last stage of the pipeline are\n aggregated as a \"List\" and returned to the user. For example, if\n you have 2 micro-batches returning the integer 5, the user would\n receive the consolidated output of [5, 5].\n All the input tensors need to be on the same device as the first\n partition of the pipeline.\n If a tensor is wrapped with the \"NoChunk\" wrapper, the tensor is\n not split across micro-batches and is replicated as-is, similar\n to non-tensors.\n Parameters:\n inputs -- input mini-batch\n Returns:\n \"RRef\" to the output of the mini-batch\n Raises:", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
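The splitting rule above (split along the first dimension, cap the number of micro-batches at the batch size) can be sketched with plain `torch.chunk`. This is an illustrative sketch of the semantics only, not `Pipe`'s actual implementation:

```python
import torch

def split_into_microbatches(batch, chunks):
    # Split along the first (batch) dimension; torch.chunk returns
    # fewer than `chunks` pieces when the batch size is smaller.
    return list(torch.chunk(batch, chunks, dim=0))

micro = split_into_microbatches(torch.rand(16, 8), chunks=8)
print(len(micro), micro[0].shape)   # 8 micro-batches of shape (2, 8)

# Batch size (3) smaller than chunks (8): one micro-batch per sample.
print(len(split_into_microbatches(torch.rand(3, 8), chunks=8)))  # 3
```

Non-tensor inputs would simply be replicated into every micro-batch rather than split.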
{"text": "Raises:\n TypeError -- input doesn't contain at least one tensor\n Return type:\n RRef\nSkip connections\n\nCertain models like ResNeXt are not completely sequential and have\nskip connections between layers. Implementing these naively as part of\npipeline parallelism would imply copying outputs for\ncertain layers across multiple GPUs until we eventually reach the GPU\nwhere the layer for the skip connection resides. To avoid this copy\noverhead, we provide APIs below to stash and pop Tensors in different\nlayers of the model.\ntorch.distributed.pipeline.sync.skip.skippable.skippable(stash=(), pop=())\n The decorator to define a \"nn.Module\" with skip connections.\n Decorated modules are called \"skippable\". This functionality works\n perfectly fine even when the module is not wrapped by \"Pipe\".\n Each skip tensor is managed by its name. Before manipulating skip\n tensors, a skippable module must statically declare the names for", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "skip tensors by stash and/or pop parameters. Skip tensors with\n pre-declared name can be stashed by \"yield stash(name, tensor)\" or\n popped by \"tensor = yield pop(name)\".\n Here is an example with three layers. A skip tensor named \"1to3\" is\n stashed and popped at the first and last layer, respectively:\n @skippable(stash=['1to3'])\n class Layer1(nn.Module):\n def forward(self, input):\n yield stash('1to3', input)\n return f1(input)\n class Layer2(nn.Module):\n def forward(self, input):\n return f2(input)\n @skippable(pop=['1to3'])\n class Layer3(nn.Module):\n def forward(self, input):\n skip_1to3 = yield pop('1to3')\n return f3(input) + skip_1to3\n model = nn.Sequential(Layer1(), Layer2(), Layer3())\n One skippable module can stash or pop multiple skip tensors:\n @skippable(stash=['alice', 'bob'], pop=['carol'])\n class StashStashPop(nn.Module):\n def forward(self, input):", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "def forward(self, input):\n yield stash('alice', f_alice(input))\n yield stash('bob', f_bob(input))\n carol = yield pop('carol')\n return input + carol\n Every skip tensor must be associated with exactly one pair of\n stash and pop. \"Pipe\" checks this restriction automatically\n when wrapping a module. You can also check the restriction by\n \"verify_skippables()\" without \"Pipe\".\n Return type:\n Callable[[Type[Module]], Type[Skippable]]\nclass torch.distributed.pipeline.sync.skip.skippable.stash(name, tensor)\n The command to stash a skip tensor.\n def forward(self, input):\n yield stash('name', input)\n return f(input)\n Parameters:\n * name (str) -- name of skip tensor\n * input (torch.Tensor or None) -- tensor to pass to\n the skip connection\nclass torch.distributed.pipeline.sync.skip.skippable.pop(name)\n The command to pop a skip tensor.\n def forward(self, input):", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "def forward(self, input):\n skip = yield pop('name')\n return f(input) + skip\n Parameters:\n name (str) -- name of skip tensor\n Returns:\n the skip tensor previously stashed by another layer under the\n same name\n Return type:\n None\ntorch.distributed.pipeline.sync.skip.skippable.verify_skippables(module)\n Verifies if the underlying skippable modules satisfy integrity.\n Every skip tensor must have only one pair of stash and pop. If\n there are one or more unmatched pairs, it will raise \"TypeError\"\n with the detailed messages.\n Here are a few failure cases. \"verify_skippables()\" will report\n failure for these cases:\n # Layer1 stashes \"1to3\".\n # Layer3 pops \"1to3\".\n nn.Sequential(Layer1(), Layer2())\n # \u2514\u2500\u2500\u2500\u2500 ?\n nn.Sequential(Layer2(), Layer3())\n # ? \u2500\u2500\u2500\u2500\u2518\n nn.Sequential(Layer1(), Layer2(), Layer3(), Layer3())", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 ^^^^^^\n nn.Sequential(Layer1(), Layer1(), Layer2(), Layer3())\n # ^^^^^^ \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nTo use the same name for multiple skip tensors, they must be\n isolated by different namespaces. See \"isolate()\".\n Raises:\n TypeError -- one or more pairs of stash and pop are not\n matched.\nTutorials\n=========\nThe following tutorials give a good overview of how to use the \"Pipe\"\nAPI to train your models with the rest of the components that PyTorch\nprovides:\n* Training Transformer models using Pipeline Parallelism\n* Training Transformer models using Distributed Data Parallel and\n Pipeline Parallelism\nAcknowledgements\n================\nThe implementation for pipeline parallelism is based on fairscale's\npipe implementation and torchgpipe. We would like to thank both teams\nfor their contributions and guidance towards bringing pipeline", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "parallelism into PyTorch.", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"}
{"text": "Distributed Checkpoint\n", "source": "https://pytorch.org/docs/stable/distributed.checkpoint.html", "category": "pytorch docs"}
{"text": "torch.backendstorch.backends controls the behavior of various backends that\nPyTorch supports.\nThese backends include:\n* \"torch.backends.cuda\"\n* \"torch.backends.cudnn\"\n* \"torch.backends.mps\"\n* \"torch.backends.mkl\"\n* \"torch.backends.mkldnn\"\n* \"torch.backends.openmp\"\n* \"torch.backends.opt_einsum\"\n* \"torch.backends.xeon\"\ntorch.backends.cuda\n===================\ntorch.backends.cuda.is_built()\n Returns whether PyTorch is built with CUDA support. Note that this\n doesn't necessarily mean CUDA is available; just that if this\n PyTorch binary were run on a machine with working CUDA drivers and\n devices, we would be able to use it.\ntorch.backends.cuda.matmul.allow_tf32\n A \"bool\" that controls whether TensorFloat-32 tensor cores may be\n used in matrix multiplications on Ampere or newer GPUs. See\n TensorFloat-32(TF32) on Ampere devices.\ntorch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction\n A \"bool\" that controls whether reduced precision reductions (e.g.,", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
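As a minimal usage sketch for the TF32 flag described above: the flag can be read and set on any build, but it only changes numerical behavior on Ampere-or-newer GPUs.

```python
import torch

# Save, set, and restore the TF32 matmul flag. Enabling TF32 trades a
# little matmul precision for substantial speed on supporting GPUs.
prev = torch.backends.cuda.matmul.allow_tf32
torch.backends.cuda.matmul.allow_tf32 = True
print(torch.backends.cuda.matmul.allow_tf32)  # True
torch.backends.cuda.matmul.allow_tf32 = prev  # restore the previous value
```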
{"text": "with fp16 accumulation type) are allowed with fp16 GEMMs.\ntorch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction\n A \"bool\" that controls whether reduced precision reductions are\n allowed with bf16 GEMMs.\ntorch.backends.cuda.cufft_plan_cache\n \"cufft_plan_cache\" caches the cuFFT plans\n size\n A readonly \"int\" that shows the number of plans currently in the\n cuFFT plan cache.\n torch.backends.cuda.max_size\n An \"int\" that controls the capacity of the cuFFT plan cache.\n torch.backends.cuda.clear()\n Clears the cuFFT plan cache.\ntorch.backends.cuda.preferred_linalg_library(backend=None)\n Warning:\n This flag is experimental and subject to change.\n When PyTorch runs a CUDA linear algebra operation it often uses the\n cuSOLVER or MAGMA libraries, and if both are available it decides\n which to use with a heuristic. This flag (a \"str\") allows\n overriding those heuristics.\n * If \"cusolver\" is set then cuSOLVER will be used wherever\n possible.", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
{"text": "possible.\n * If \"magma\" is set then MAGMA will be used wherever possible.\n * If \"default\" (the default) is set then heuristics will be used\n to pick between cuSOLVER and MAGMA if both are available.\n * When no input is given, this function returns the currently\n preferred library.\n Note: When a library is preferred other libraries may still be used\n if the preferred library doesn't implement the operation(s) called.\n This flag may achieve better performance if PyTorch's heuristic\n library selection is incorrect for your application's inputs.\n Currently supported linalg operators:\n * \"torch.linalg.inv()\"\n * \"torch.linalg.inv_ex()\"\n * \"torch.linalg.cholesky()\"\n * \"torch.linalg.cholesky_ex()\"\n * \"torch.cholesky_solve()\"\n * \"torch.cholesky_inverse()\"\n * \"torch.linalg.lu_factor()\"\n * \"torch.linalg.lu()\"\n * \"torch.linalg.lu_solve()\"\n * \"torch.linalg.qr()\"\n * \"torch.linalg.eigh()\"\n * \"torch.linalg.eigvalsh()\"\n * \"torch.linalg.svd()\"", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
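A brief usage sketch of the flag described above. Setting a preference is a no-op on machines without the corresponding library; it only changes which backend is chosen when one is available:

```python
import torch

# With no argument, the call returns the current preference.
current = torch.backends.cuda.preferred_linalg_library()
print(current)

# Prefer cuSOLVER wherever possible, then restore the heuristic default.
torch.backends.cuda.preferred_linalg_library("cusolver")
torch.backends.cuda.preferred_linalg_library("default")
```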
{"text": "\n\"torch.linalg.svd()\"\n\"torch.linalg.svdvals()\"\n Return type:\n _LinalgBackend\nclass torch.backends.cuda.SDPBackend(value)\n Enum class for the scaled dot product attention backends.\n Warning:\n This flag is experimental and subject to change.\n This class needs to stay in sync with the enum defined in:\n pytorch/aten/src/ATen/native/transformers/sdp_utils_cpp.h\ntorch.backends.cuda.flash_sdp_enabled()\n Warning:\n This flag is experimental and subject to change.\n Returns whether flash sdp is enabled or not.\ntorch.backends.cuda.enable_mem_efficient_sdp(enabled)\n Warning:\n This flag is experimental and subject to change.\n Enables or disables memory efficient sdp.\ntorch.backends.cuda.mem_efficient_sdp_enabled()\n Warning:\n This flag is experimental and subject to change.\n Returns whether memory efficient sdp is enabled or not.\ntorch.backends.cuda.enable_flash_sdp(enabled)\n Warning:\n This flag is experimental and subject to change.\n", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
{"text": "Enables or disables flash sdp.\ntorch.backends.cuda.math_sdp_enabled()\n Warning:\n This flag is experimental and subject to change.\n Returns whether math sdp is enabled or not.\ntorch.backends.cuda.enable_math_sdp(enabled)\n Warning:\n This flag is experimental and subject to change.\n Enables or disables math sdp.\ntorch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=True)\n Warning:\n This flag is experimental and subject to change.\n This context manager can be used to temporarily enable or disable\n flash/memory efficient sdp and math sdp. Upon exiting the context\n manager, the previous state of the flags will be restored.\ntorch.backends.cudnn\n====================\ntorch.backends.cudnn.version()\n Returns the version of cuDNN\ntorch.backends.cudnn.is_available()\n Returns a bool indicating if CUDNN is currently available.\ntorch.backends.cudnn.enabled\n A \"bool\" that controls whether cuDNN is enabled.\ntorch.backends.cudnn.allow_tf32", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
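A small sketch of toggling these flags around a call to `torch.nn.functional.scaled_dot_product_attention`. On CPU the math kernel is the one that actually executes, so this runs without a GPU:

```python
import torch
import torch.nn.functional as F

prev = torch.backends.cuda.math_sdp_enabled()
torch.backends.cuda.enable_math_sdp(True)   # make sure the math fallback is on

q = k = v = torch.rand(1, 4, 8, 16)         # (batch, heads, seq_len, head_dim)
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)                             # torch.Size([1, 4, 8, 16])

torch.backends.cuda.enable_math_sdp(prev)    # restore the previous setting
```

For a change scoped to a block of code, the `sdp_kernel` context manager documented above restores the previous flag state automatically on exit.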
{"text": "torch.backends.cudnn.allow_tf32\n A \"bool\" that controls whether TensorFloat-32 tensor cores may be\n used in cuDNN convolutions on Ampere or newer GPUs. See\n TensorFloat-32(TF32) on Ampere devices.\ntorch.backends.cudnn.deterministic\n A \"bool\" that, if True, causes cuDNN to only use deterministic\n convolution algorithms. See also\n \"torch.are_deterministic_algorithms_enabled()\" and\n \"torch.use_deterministic_algorithms()\".\ntorch.backends.cudnn.benchmark\n A \"bool\" that, if True, causes cuDNN to benchmark multiple\n convolution algorithms and select the fastest.\ntorch.backends.cudnn.benchmark_limit\n An \"int\" that specifies the maximum number of cuDNN convolution\n algorithms to try when torch.backends.cudnn.benchmark is True.\n Set benchmark_limit to zero to try every available algorithm.\n Note that this setting only affects convolutions dispatched via the\n cuDNN v8 API.\ntorch.backends.mps\n==================\ntorch.backends.mps.is_available()", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
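The two flags most commonly combined are `deterministic` and `benchmark`; a typical sketch of the two configurations:

```python
import torch

# Reproducibility: deterministic convolution algorithms only, no
# autotuning. Slower, but runs are repeatable.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Throughput with fixed input shapes: let cuDNN benchmark the available
# convolution algorithms once and cache the fastest.
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True
```

`benchmark=True` mainly pays off when input shapes are static; with highly variable shapes the repeated benchmarking can slow things down instead.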
{"text": "torch.backends.mps.is_available()\n Returns a bool indicating if MPS is currently available.\n Return type:\n bool\ntorch.backends.mps.is_built()\n Returns whether PyTorch is built with MPS support. Note that this\n doesn't necessarily mean MPS is available; just that if this\n PyTorch binary were run on a machine with working MPS drivers and\n devices, we would be able to use it.\n Return type:\n bool\ntorch.backends.mkl\n==================\ntorch.backends.mkl.is_available()\n Returns whether PyTorch is built with MKL support.\nclass torch.backends.mkl.verbose(enable)\n On-demand oneMKL verbosing functionality. To make it easier to debug\n performance issues, oneMKL can dump verbose messages containing\n execution information like duration while executing the kernel. The\n verbosing functionality can be invoked via an environment variable\n named MKL_VERBOSE. However, that approach dumps messages for\n all steps, which produces a large volume of output. Moreover,", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
{"text": "for investigating performance issues, verbose messages from one\n single iteration are generally enough. This on-demand\n verbosing functionality makes it possible to control the scope of\n verbose message dumping. In the following example, verbose messages\n will be dumped out for the second inference only.\n import torch\n model(data)\n with torch.backends.mkl.verbose(torch.backends.mkl.VERBOSE_ON):\n model(data)\n Parameters:\n level -- Verbose level - \"VERBOSE_OFF\": Disable verbosing -\n \"VERBOSE_ON\": Enable verbosing\ntorch.backends.mkldnn\n=====================\ntorch.backends.mkldnn.is_available()\n Returns whether PyTorch is built with MKL-DNN support.\nclass torch.backends.mkldnn.verbose(level)\n On-demand oneDNN (former MKL-DNN) verbosing functionality. To make\n it easier to debug performance issues, oneDNN can dump verbose\n messages containing information like kernel size, input data size", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
{"text": "and execution duration while executing the kernel. The verbosing\n functionality can be invoked via an environment variable named\n DNNL_VERBOSE. However, that approach dumps messages for all\n steps, which produces a large volume of output. Moreover, for\n investigating performance issues, verbose messages from one\n single iteration are generally enough. This on-demand\n verbosing functionality makes it possible to control the scope of\n verbose message dumping. In the following example, verbose messages\n will be dumped out for the second inference only.\n import torch\n model(data)\n with torch.backends.mkldnn.verbose(torch.backends.mkldnn.VERBOSE_ON):\n model(data)\n Parameters:\n level -- Verbose level - \"VERBOSE_OFF\": Disable verbosing -\n \"VERBOSE_ON\": Enable verbosing - \"VERBOSE_ON_CREATION\": Enable\n verbosing, including oneDNN kernel creation\ntorch.backends.openmp\n=====================\ntorch.backends.openmp.is_available()", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
{"text": "torch.backends.openmp.is_available()\n Returns whether PyTorch is built with OpenMP support.\ntorch.backends.opt_einsum\n=========================\ntorch.backends.opt_einsum.is_available()\n Returns a bool indicating if opt_einsum is currently available.\n Return type:\n bool\ntorch.backends.opt_einsum.get_opt_einsum()\n Returns the opt_einsum package if opt_einsum is currently\n available, else None.\n Return type:\n Any\ntorch.backends.opt_einsum.enabled\n A \"bool\" that controls whether opt_einsum is enabled (\"True\"\n by default). If so, torch.einsum will use opt_einsum\n (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html) if\n available to calculate an optimal path of contraction for faster\n performance.\n If opt_einsum is not available, torch.einsum will fall back to the\n default contraction path of left to right.\ntorch.backends.opt_einsum.strategy\n A \"str\" that specifies which strategies to try when", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
{"text": "\"torch.backends.opt_einsum.enabled\" is \"True\". By default,\n torch.einsum will try the \"auto\" strategy, but the \"greedy\" and\n \"optimal\" strategies are also supported. Note that the \"optimal\"\n strategy is factorial in the number of inputs as it tries all\n possible paths. See more details in opt_einsum's docs\n (https://optimized-einsum.readthedocs.io/en/stable/path_finding.html).\ntorch.backends.xeon\n===================", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"}
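Strategy selection only changes how the contraction path is searched, never the numerical result; a short sketch:

```python
import torch

if torch.backends.opt_einsum.is_available():
    torch.backends.opt_einsum.strategy = "greedy"   # cheap path search

a, b, c = torch.rand(4, 5), torch.rand(5, 6), torch.rand(6, 3)
out = torch.einsum("ij,jk,kl->il", a, b, c)
print(out.shape)            # torch.Size([4, 3])

# The result matches a plain chain of matmuls regardless of strategy.
assert torch.allclose(out, a @ b @ c, atol=1e-5)
```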
{"text": "torch.utils.dlpacktorch.utils.dlpack.from_dlpack(ext_tensor) -> Tensor\n Converts a tensor from an external library into a \"torch.Tensor\".\n The returned PyTorch tensor will share the memory with the input\n tensor (which may have come from another library). Note that in-\n place operations will therefore also affect the data of the input\n tensor. This may lead to unexpected issues (e.g., other libraries\n may have read-only flags or immutable data structures), so the user\n should only do this if they know for sure that this is fine.\n Parameters:\n ext_tensor (object with \"__dlpack__\" attribute, or a DLPack\n capsule) --\n The tensor or DLPack capsule to convert.\n If \"ext_tensor\" is a tensor (or ndarray) object, it must support\n the \"__dlpack__\" protocol (i.e., have an \"ext_tensor.__dlpack__\"\n method). Otherwise \"ext_tensor\" may be a DLPack capsule, which\n is an opaque \"PyCapsule\" instance, typically produced by a", "source": "https://pytorch.org/docs/stable/dlpack.html", "category": "pytorch docs"}
{"text": "\"to_dlpack\" function or method.\n Return type:\n Tensor\n Examples:\n >>> import torch.utils.dlpack\n >>> t = torch.arange(4)\n # Convert a tensor directly (supported in PyTorch >= 1.10)\n >>> t2 = torch.from_dlpack(t)\n >>> t2[:2] = -1 # show that memory is shared\n >>> t2\n tensor([-1, -1, 2, 3])\n >>> t\n tensor([-1, -1, 2, 3])\n # The old-style DLPack usage, with an intermediate capsule object\n >>> capsule = torch.utils.dlpack.to_dlpack(t)\n >>> capsule\n \n >>> t3 = torch.from_dlpack(capsule)\n >>> t3\n tensor([-1, -1, 2, 3])\n >>> t3[0] = -9 # now we're sharing memory between 3 tensors\n >>> t3\n tensor([-9, -1, 2, 3])\n >>> t2\n tensor([-9, -1, 2, 3])\n >>> t\n tensor([-9, -1, 2, 3])\ntorch.utils.dlpack.to_dlpack(tensor) -> PyCapsule\n Returns an opaque object (a \"DLPack capsule\") representing the\n tensor.\n Note:", "source": "https://pytorch.org/docs/stable/dlpack.html", "category": "pytorch docs"}
{"text": "tensor.\n Note:\n \"to_dlpack\" is a legacy DLPack interface. The capsule it returns\n cannot be used for anything in Python other than being used as input\n to \"from_dlpack\". The more idiomatic use of DLPack is to call\n \"from_dlpack\" directly on the tensor object - this works when\n that object has a \"__dlpack__\" method, which PyTorch and most\n other libraries indeed have now.\n Warning:\n Only call \"from_dlpack\" once per capsule produced with\n \"to_dlpack\". Behavior when a capsule is consumed multiple times\n is undefined.\n Parameters:\n tensor -- a tensor to be exported\n The DLPack capsule shares the tensor's memory.", "source": "https://pytorch.org/docs/stable/dlpack.html", "category": "pytorch docs"}
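To make the memory-sharing behavior concrete, here is an interop sketch with NumPy. It assumes NumPy >= 1.22, which is when ndarray gained `__dlpack__` support:

```python
import numpy as np
import torch

# Idiomatic exchange: call from_dlpack directly on the other library's
# array object; no intermediate capsule is needed.
a = np.arange(4)
t = torch.from_dlpack(a)    # zero-copy: t and a share memory
t[0] = -1
print(a)                    # [-1  1  2  3]

# The reverse direction works the same way.
b = np.from_dlpack(torch.arange(3))
print(b)                    # [0 1 2]
```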
{"text": "PyTorch Governance | Build + CIHow to Add a New Maintainer\nFor a person to become a maintainer, they need to:\n* Land at least six commits to the related part of the PyTorch\n repository\n* At least one of these commits must be submitted in the last six\n months\nTo add a qualified person to the maintainers' list, please create a PR\nthat adds the person to the persons of interest page and merge_rules\nfiles. Current maintainers will cast their votes of support. Decision\ncriteria for approving the PR: * At least two business days have\npassed before merging (ensuring the majority of the contributors have\nseen it) * PR has the correct label (module: ci) * There are no\nobjections from the current maintainers * There are at least three net\nthumbs up from current maintainers (or all maintainers vote thumbs\nup when the module has fewer than 3 maintainers).", "source": "https://pytorch.org/docs/stable/community/build_ci_governance.html", "category": "pytorch docs"}
{"text": "Probability distributions - torch.distributionsThe \"distributions\" package contains parameterizable probability\ndistributions and sampling functions. This allows the construction of\nstochastic computation graphs and stochastic gradient estimators for\noptimization. This package generally follows the design of the\nTensorFlow Distributions package.\nIt is not possible to directly backpropagate through random samples.\nHowever, there are two main methods for creating surrogate functions\nthat can be backpropagated through. These are the score function\nestimator/likelihood ratio estimator/REINFORCE and the pathwise\nderivative estimator. REINFORCE is commonly seen as the basis for\npolicy gradient methods in reinforcement learning, and the pathwise\nderivative estimator is commonly seen in the reparameterization trick\nin variational autoencoders. Whilst the score function only requires\nthe value of samples f(x), the pathwise derivative requires the", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "derivative f'(x). The next sections discuss these two in a\nreinforcement learning example. For more details see Gradient\nEstimation Using Stochastic Computation Graphs .\nScore function\n==============\nWhen the probability density function is differentiable with respect\nto its parameters, we only need \"sample()\" and \"log_prob()\" to\nimplement REINFORCE:\n \\Delta\\theta = \\alpha r \\frac{\\partial\\log\n p(a|\\pi^\\theta(s))}{\\partial\\theta}\nwhere \\theta are the parameters, \\alpha is the learning rate, r is the\nreward and p(a|\\pi^\\theta(s)) is the probability of taking action a in\nstate s given policy \\pi^\\theta.\nIn practice we would sample an action from the output of a network,\napply this action in an environment, and then use \"log_prob\" to\nconstruct an equivalent loss function. Note that we use a negative\nbecause optimizers use gradient descent, whilst the rule above assumes\ngradient ascent. With a categorical policy, the code for implementing\nREINFORCE would be as follows:\n probs = policy_network(state)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "probs = policy_network(state)\n # Note that this is equivalent to what used to be called multinomial\n m = Categorical(probs)\n action = m.sample()\n next_state, reward = env.step(action)\n loss = -m.log_prob(action) * reward\n loss.backward()\nPathwise derivative\n===================\nThe other way to implement these stochastic/policy gradients would be\nto use the reparameterization trick from the \"rsample()\" method, where\nthe parameterized random variable can be constructed via a\nparameterized deterministic function of a parameter-free random\nvariable. The reparameterized sample therefore becomes differentiable.\nThe code for implementing the pathwise derivative would be as follows:\n params = policy_network(state)\n m = Normal(*params)\n # Any distribution with .has_rsample == True could work based on the application\n action = m.rsample()\n next_state, reward = env.step(action) # Assuming that reward is differentiable\n loss = -reward\n loss.backward()\nDistribution\n============", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
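The two snippets above rely on an external `policy_network` and `env`; the following self-contained sketch (with hypothetical stand-ins for both) shows the same two estimators end to end:

```python
import torch
from torch.distributions import Categorical, Normal

torch.manual_seed(0)

# Hypothetical stand-ins for the policy_network and env.step used above.
state = torch.rand(4)
policy_network = torch.nn.Linear(4, 3)

# Score-function (REINFORCE) estimator: sample, then weight log_prob.
probs = torch.softmax(policy_network(state), dim=-1)
m = Categorical(probs)
action = m.sample()
reward = torch.tensor(1.0)               # pretend env.step returned this
loss = -m.log_prob(action) * reward
loss.backward()                           # gradients flow into the policy

# Pathwise estimator: rsample() makes the sample itself differentiable.
mu = torch.zeros(1, requires_grad=True)
dist = Normal(mu, torch.ones(1))
sample = dist.rsample()                   # sample = mu + sigma * eps
(-sample).sum().backward()                # d(-sample)/d(mu) == -1
print(mu.grad)                            # tensor([-1.])
```

Note that `sample()` would break the second gradient path: only `rsample()` keeps the reparameterized sample connected to `mu` in the autograd graph.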
{"text": "loss.backward()\nDistribution\n============\nclass torch.distributions.distribution.Distribution(batch_shape=torch.Size([]), event_shape=torch.Size([]), validate_args=None)\n Bases: \"object\"\n Distribution is the abstract base class for probability\n distributions.\n property arg_constraints: Dict[str, Constraint]\n Returns a dictionary from argument names to \"Constraint\" objects\n that should be satisfied by each argument of this distribution.\n Args that are not tensors need not appear in this dict.\n property batch_shape: Size\n Returns the shape over which parameters are batched.\n cdf(value)\n Returns the cumulative density/mass function evaluated at\n value.\n Parameters:\n value (Tensor) --\n Return type:\n Tensor\n entropy()\n Returns entropy of distribution, batched over batch_shape.\n Returns:\n Tensor of shape batch_shape.\n Return type:\n Tensor\n enumerate_support(expand=True)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "enumerate_support(expand=True)\n Returns tensor containing all values supported by a discrete\n distribution. The result will enumerate over dimension 0, so the\n shape of the result will be (cardinality,) + batch_shape +\n event_shape (where event_shape = () for univariate\n distributions).\n Note that this enumerates over all batched tensors in lock-step\n [[0, 0], [1, 1], ...]. With expand=False, enumeration\n happens along dim 0, but with the remaining batch dimensions\n being singleton dimensions, [[0], [1], ...\n To iterate over the full Cartesian product use\n itertools.product(m.enumerate_support()).\n Parameters:\n expand (bool) -- whether to expand the support over the\n batch dims to match the distribution's batch_shape.\n Returns:\n Tensor iterating over dimension 0.\n Return type:\n Tensor\n property event_shape: Size\n Returns the shape of a single sample (without batching).", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "expand(batch_shape, _instance=None)\n Returns a new distribution instance (or populates an existing\n instance provided by a derived class) with batch dimensions\n expanded to batch_shape. This method calls \"expand\" on the\n distribution's parameters. As such, this does not allocate new\n memory for the expanded distribution instance. Additionally,\n this does not repeat any args checking or parameter broadcasting\n in __init__.py, when an instance is first created.\n Parameters:\n * batch_shape (torch.Size) -- the desired expanded\n size.\n * _instance -- new instance provided by subclasses that\n need to override .expand.\n Returns:\n New distribution instance with batch dimensions expanded to\n batch_shape.\n icdf(value)\n Returns the inverse cumulative density/mass function evaluated\n at value.\n Parameters:\n value (Tensor) --\n Return type:\n Tensor", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Return type:\n Tensor\n log_prob(value)\n Returns the log of the probability density/mass function\n evaluated at value.\n Parameters:\n value (Tensor) --\n Return type:\n Tensor\n property mean: Tensor\n Returns the mean of the distribution.\n property mode: Tensor\n Returns the mode of the distribution.\n perplexity()\n Returns perplexity of distribution, batched over batch_shape.\n Returns:\n Tensor of shape batch_shape.\n Return type:\n Tensor\n rsample(sample_shape=torch.Size([]))\n Generates a sample_shape shaped reparameterized sample or\n sample_shape shaped batch of reparameterized samples if the\n distribution parameters are batched.\n Return type:\n Tensor\n sample(sample_shape=torch.Size([]))\n Generates a sample_shape shaped sample or sample_shape shaped\n batch of samples if the distribution parameters are batched.\n Return type:\n Tensor\n sample_n(n)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Tensor\n sample_n(n)\n Generates n samples or n batches of samples if the distribution\n parameters are batched.\n Return type:\n Tensor\n static set_default_validate_args(value)\n Sets whether validation is enabled or disabled.\n The default behavior mimics Python's \"assert\" statement:\n validation is on by default, but is disabled if Python is run in\n optimized mode (via \"python -O\"). Validation may be expensive,\n so you may want to disable it once a model is working.\n Parameters:\n value (bool) -- Whether to enable validation.\n property stddev: Tensor\n Returns the standard deviation of the distribution.\n property support: Optional[Any]\n Returns a \"Constraint\" object representing this distribution's\n support.\n property variance: Tensor\n Returns the variance of the distribution.\nExponentialFamily\n=================", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
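The sample()/rsample() distinction above is easiest to see with a concrete subclass; this sketch uses Normal (not part of this excerpt) to show that rsample() produces a reparameterized draw that participates in autograd, while sample() is detached:

```python
import torch
from torch.distributions import Normal

loc = torch.tensor(0.0, requires_grad=True)
d = Normal(loc, torch.tensor(1.0))

r = d.rsample()  # reparameterized: loc + eps * scale, gradients flow to loc
s = d.sample()   # detached draw: no autograd graph is recorded

print(r.requires_grad)  # True
print(s.requires_grad)  # False

r.backward()            # d(r)/d(loc) = 1 for the reparameterized draw
print(loc.grad)         # tensor(1.)
```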
{"text": "ExponentialFamily\nclass torch.distributions.exp_family.ExponentialFamily(batch_shape=torch.Size([]), event_shape=torch.Size([]), validate_args=None)\n Bases: \"Distribution\"\n ExponentialFamily is the abstract base class for probability\n distributions belonging to an exponential family, whose probability\n mass/density function has the form defined below\n p_{F}(x; \\theta) = \\exp(\\langle t(x), \\theta\\rangle - F(\\theta)\n + k(x))\n where \\theta denotes the natural parameters, t(x) denotes the\n sufficient statistic, F(\\theta) is the log normalizer function for\n a given family and k(x) is the carrier measure.\n Note:\n This class is an intermediary between the Distribution class\n and distributions which belong to an exponential family mainly to\n check the correctness of the .entropy() and analytic KL\n divergence methods. We use this class to compute the entropy and\n KL divergence using the AD framework and Bregman divergences", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "(courtesy of: Frank Nielsen and Richard Nock, Entropies and\n Cross-entropies of Exponential Families).\n entropy()\n Method to compute the entropy using Bregman divergence of the\n log normalizer.\nBernoulli\n=========\nclass torch.distributions.bernoulli.Bernoulli(probs=None, logits=None, validate_args=None)\n Bases: \"ExponentialFamily\"\n Creates a Bernoulli distribution parameterized by \"probs\" or\n \"logits\" (but not both).\n Samples are binary (0 or 1). They take the value 1 with\n probability p and 0 with probability 1 - p.\n Example:\n >>> m = Bernoulli(torch.tensor([0.3]))\n >>> m.sample() # 30% chance 1; 70% chance 0\n tensor([ 0.])\n Parameters:\n * probs (Number, Tensor) -- the probability of\n sampling 1\n * logits (Number, Tensor) -- the log-odds of sampling\n 1\n arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\n entropy()\n enumerate_support(expand=True)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
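A short sketch of Bernoulli's log_prob and entropy against their closed forms, log p for a 1 observation, log(1-p) for a 0, and -(p log p + (1-p) log(1-p)) for the entropy (the value p=0.3 is illustrative):

```python
import torch
from torch.distributions import Bernoulli

p = 0.3
m = Bernoulli(torch.tensor([p]))

lp1 = m.log_prob(torch.tensor([1.0]))  # log(p)
lp0 = m.log_prob(torch.tensor([0.0]))  # log(1 - p)

# Bernoulli entropy: -(p * log p + (1 - p) * log(1 - p)) ~= 0.6109 for p = 0.3
ent = m.entropy()
```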
{"text": "entropy()\n enumerate_support(expand=True)\n expand(batch_shape, _instance=None)\n has_enumerate_support = True\n log_prob(value)\n property logits\n property mean\n property mode\n property param_shape\n property probs\n sample(sample_shape=torch.Size([]))\n support = Boolean()\n property variance\nBeta\n====\nclass torch.distributions.beta.Beta(concentration1, concentration0, validate_args=None)\n Bases: \"ExponentialFamily\"\n Beta distribution parameterized by \"concentration1\" and\n \"concentration0\".\n Example:\n >>> m = Beta(torch.tensor([0.5]), torch.tensor([0.5]))\n >>> m.sample() # Beta distributed with concentration concentration1 and concentration0\n tensor([ 0.1046])\n Parameters:\n * concentration1 (float or Tensor) -- 1st\n concentration parameter of the distribution (often referred to\n as alpha)\n * concentration0 (float or Tensor) -- 2nd\n concentration parameter of the distribution (often referred to\n as beta)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "as beta)\n arg_constraints = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}\n property concentration0\n property concentration1\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=())\n support = Interval(lower_bound=0.0, upper_bound=1.0)\n property variance\nBinomial\n========\nclass torch.distributions.binomial.Binomial(total_count=1, probs=None, logits=None, validate_args=None)\n Bases: \"Distribution\"\n Creates a Binomial distribution parameterized by \"total_count\" and\n either \"probs\" or \"logits\" (but not both). \"total_count\" must be\n broadcastable with \"probs\"/\"logits\".\n Example:\n >>> m = Binomial(100, torch.tensor([0 , .2, .8, 1]))\n >>> x = m.sample()\n tensor([ 0., 22., 71., 100.])\n >>> m = Binomial(torch.tensor([[5.], [10.]]), torch.tensor([0.5, 0.8]))\n >>> x = m.sample()\n tensor([[ 4., 5.],", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": ">>> x = m.sample()\n tensor([[ 4., 5.],\n [ 7., 6.]])\n Parameters:\n * total_count (int or Tensor) -- number of Bernoulli\n trials\n * probs (Tensor) -- Event probabilities\n * logits (Tensor) -- Event log-odds\n arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0), 'total_count': IntegerGreaterThan(lower_bound=0)}\n entropy()\n enumerate_support(expand=True)\n expand(batch_shape, _instance=None)\n has_enumerate_support = True\n log_prob(value)\n property logits\n property mean\n property mode\n property param_shape\n property probs\n sample(sample_shape=torch.Size([]))\n property support\n property variance\nCategorical\n===========\nclass torch.distributions.categorical.Categorical(probs=None, logits=None, validate_args=None)\n Bases: \"Distribution\"\n Creates a categorical distribution parameterized by either \"probs\"\n or \"logits\" (but not both).\n Note:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "or \"logits\" (but not both).\n Note:\n It is equivalent to the distribution that \"torch.multinomial()\"\n samples from.\n Samples are integers from {0, \\ldots, K-1} where K is\n \"probs.size(-1)\".\n If probs is 1-dimensional with length-K, each element is the\n relative probability of sampling the class at that index.\n If probs is N-dimensional, the first N-1 dimensions are treated\n as a batch of relative probability vectors.\n Note:\n The probs argument must be non-negative, finite and have a non-\n zero sum, and it will be normalized to sum to 1 along the last\n dimension. \"probs\" will return this normalized value. The\n logits argument will be interpreted as unnormalized log\n probabilities and can therefore be any real number. It will\n likewise be normalized so that the resulting probabilities sum to\n 1 along the last dimension. \"logits\" will return this normalized\n value.\n See also: \"torch.multinomial()\"\n Example:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
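The normalization behavior in the note above can be checked directly: unnormalized probs are rescaled to sum to 1 along the last dimension, and the probs/logits constructors agree (the weights here are illustrative):

```python
import torch
from torch.distributions import Categorical

# probs need not sum to 1; they are normalized along the last dim
m = Categorical(probs=torch.tensor([1.0, 1.0, 2.0]))
print(m.probs)  # tensor([0.2500, 0.2500, 0.5000])

# logits are unnormalized log-probabilities; both routes give the same probs
m2 = Categorical(logits=torch.log(torch.tensor([1.0, 1.0, 2.0])))
```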
{"text": "See also: \"torch.multinomial()\"\n Example:\n >>> m = Categorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ]))\n >>> m.sample() # equal probability of 0, 1, 2, 3\n tensor(3)\n Parameters:\n * probs (Tensor) -- event probabilities\n * logits (Tensor) -- event log probabilities\n (unnormalized)\n arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}\n entropy()\n enumerate_support(expand=True)\n expand(batch_shape, _instance=None)\n has_enumerate_support = True\n log_prob(value)\n property logits\n property mean\n property mode\n property param_shape\n property probs\n sample(sample_shape=torch.Size([]))\n property support\n property variance\nCauchy\n======\nclass torch.distributions.cauchy.Cauchy(loc, scale, validate_args=None)\n Bases: \"Distribution\"\n Samples from a Cauchy (Lorentz) distribution. The distribution of\n the ratio of independent normally distributed random variables with", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "means 0 follows a Cauchy distribution.\n Example:\n >>> m = Cauchy(torch.tensor([0.0]), torch.tensor([1.0]))\n >>> m.sample() # sample from a Cauchy distribution with loc=0 and scale=1\n tensor([ 2.3214])\n Parameters:\n * loc (float or Tensor) -- mode or median of the\n distribution.\n * scale (float or Tensor) -- half width at half\n maximum.\n arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n icdf(value)\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n support = Real()\n property variance\nChi2\n====\nclass torch.distributions.chi2.Chi2(df, validate_args=None)\n Bases: \"Gamma\"\n Creates a Chi-squared distribution parameterized by shape parameter\n \"df\". This is exactly equivalent to \"Gamma(alpha=0.5*df, beta=0.5)\"\n Example:\n >>> m = Chi2(torch.tensor([1.0]))", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
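The stated equivalence Chi2(df) == Gamma(alpha=0.5*df, beta=0.5) can be verified by comparing log-densities at a few points (df=3 and the evaluation points are illustrative):

```python
import torch
from torch.distributions import Chi2, Gamma

df = torch.tensor([3.0])
chi2 = Chi2(df)
gamma = Gamma(concentration=0.5 * df, rate=torch.tensor([0.5]))

x = torch.tensor([0.5, 1.0, 4.0])
print(torch.allclose(chi2.log_prob(x), gamma.log_prob(x)))  # True
```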
{"text": ">>> m = Chi2(torch.tensor([1.0]))\n >>> m.sample() # Chi2 distributed with shape df=1\n tensor([ 0.1046])\n Parameters:\n df (float or Tensor) -- shape parameter of the\n distribution\n arg_constraints = {'df': GreaterThan(lower_bound=0.0)}\n property df\n expand(batch_shape, _instance=None)\nContinuousBernoulli\n===================\nclass torch.distributions.continuous_bernoulli.ContinuousBernoulli(probs=None, logits=None, lims=(0.499, 0.501), validate_args=None)\n Bases: \"ExponentialFamily\"\n Creates a continuous Bernoulli distribution parameterized by\n \"probs\" or \"logits\" (but not both).\n The distribution is supported in [0, 1] and parameterized by\n 'probs' (in (0,1)) or 'logits' (real-valued). Note that, unlike the\n Bernoulli, 'probs' does not correspond to a probability and\n 'logits' does not correspond to log-odds, but the same names are\n used due to the similarity with the Bernoulli. See [1] for more\n details.\n Example:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "details.\n Example:\n >>> m = ContinuousBernoulli(torch.tensor([0.3]))\n >>> m.sample()\n tensor([ 0.2538])\n Parameters:\n * probs (Number, Tensor) -- (0,1) valued parameters\n * logits (Number, Tensor) -- real valued parameters\n whose sigmoid matches 'probs'\n [1] The continuous Bernoulli: fixing a pervasive error in\n variational autoencoders, Loaiza-Ganem G and Cunningham JP, NeurIPS\n 2019. https://arxiv.org/abs/1907.06845\n arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n icdf(value)\n log_prob(value)\n property logits\n property mean\n property param_shape\n property probs\n rsample(sample_shape=torch.Size([]))\n sample(sample_shape=torch.Size([]))\n property stddev\n support = Interval(lower_bound=0.0, upper_bound=1.0)\n property variance\nDirichlet\n=========", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "property variance\nDirichlet\n=========\nclass torch.distributions.dirichlet.Dirichlet(concentration, validate_args=None)\n Bases: \"ExponentialFamily\"\n Creates a Dirichlet distribution parameterized by concentration\n \"concentration\".\n Example:\n >>> m = Dirichlet(torch.tensor([0.5, 0.5]))\n >>> m.sample() # Dirichlet distributed with concentration [0.5, 0.5]\n tensor([ 0.1046, 0.8954])\n Parameters:\n concentration (Tensor) -- concentration parameter of the\n distribution (often referred to as alpha)\n arg_constraints = {'concentration': IndependentConstraint(GreaterThan(lower_bound=0.0), 1)}\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=())\n support = Simplex()\n property variance\nExponential\n===========\nclass torch.distributions.exponential.Exponential(rate, validate_args=None)\n Bases: \"ExponentialFamily\"", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
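A quick sketch showing that Dirichlet draws lie on the probability simplex, i.e. each sample is non-negative and sums to 1 along the event dimension (the concentration values are illustrative):

```python
import torch
from torch.distributions import Dirichlet

torch.manual_seed(0)
m = Dirichlet(torch.tensor([0.5, 0.5, 2.0]))
s = m.sample(torch.Size([4]))  # 4 draws from the 3-simplex

print(s.shape)    # torch.Size([4, 3])
print(s.sum(-1))  # each row sums to 1
```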
{"text": "Bases: \"ExponentialFamily\"\n Creates an Exponential distribution parameterized by \"rate\".\n Example:\n >>> m = Exponential(torch.tensor([1.0]))\n >>> m.sample() # Exponential distributed with rate=1\n tensor([ 0.1046])\n Parameters:\n rate (float or Tensor) -- rate = 1 / scale of the\n distribution\n arg_constraints = {'rate': GreaterThan(lower_bound=0.0)}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n icdf(value)\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n property stddev\n support = GreaterThanEq(lower_bound=0.0)\n property variance\nFisherSnedecor\n==============\nclass torch.distributions.fishersnedecor.FisherSnedecor(df1, df2, validate_args=None)\n Bases: \"Distribution\"\n Creates a Fisher-Snedecor distribution parameterized by \"df1\" and\n \"df2\".\n Example:\n >>> m = FisherSnedecor(torch.tensor([1.0]), torch.tensor([2.0]))", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": ">>> m.sample() # Fisher-Snedecor-distributed with df1=1 and df2=2\n tensor([ 0.2453])\n Parameters:\n * df1 (float or Tensor) -- degrees of freedom\n parameter 1\n * df2 (float or Tensor) -- degrees of freedom\n parameter 2\n arg_constraints = {'df1': GreaterThan(lower_bound=0.0), 'df2': GreaterThan(lower_bound=0.0)}\n expand(batch_shape, _instance=None)\n has_rsample = True\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n support = GreaterThan(lower_bound=0.0)\n property variance\nGamma\n=====\nclass torch.distributions.gamma.Gamma(concentration, rate, validate_args=None)\n Bases: \"ExponentialFamily\"\n Creates a Gamma distribution parameterized by shape \"concentration\"\n and \"rate\".\n Example:\n >>> m = Gamma(torch.tensor([1.0]), torch.tensor([1.0]))\n >>> m.sample() # Gamma distributed with concentration=1 and rate=1\n tensor([ 0.1046])\n Parameters:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "tensor([ 0.1046])\n Parameters:\n * concentration (float or Tensor) -- shape parameter\n of the distribution (often referred to as alpha)\n * rate (float or Tensor) -- rate = 1 / scale of the\n distribution (often referred to as beta)\n arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'rate': GreaterThan(lower_bound=0.0)}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n support = GreaterThanEq(lower_bound=0.0)\n property variance\nGeometric\n=========\nclass torch.distributions.geometric.Geometric(probs=None, logits=None, validate_args=None)\n Bases: \"Distribution\"\n Creates a Geometric distribution parameterized by \"probs\", where\n \"probs\" is the probability of success of Bernoulli trials. It\n represents the probability that in k + 1 Bernoulli trials, the\n first k trials failed, before seeing a success.", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "first k trials failed, before seeing a success.\n Samples are non-negative integers [0, \\infty).\n Example:\n >>> m = Geometric(torch.tensor([0.3]))\n >>> m.sample() # underlying Bernoulli has 30% chance 1; 70% chance 0\n tensor([ 2.])\n Parameters:\n * probs (Number, Tensor) -- the probability of\n sampling 1. Must be in range (0, 1]\n * logits (Number, Tensor) -- the log-odds of sampling\n 1.\n arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\n entropy()\n expand(batch_shape, _instance=None)\n log_prob(value)\n property logits\n property mean\n property mode\n property probs\n sample(sample_shape=torch.Size([]))\n support = IntegerGreaterThan(lower_bound=0)\n property variance\nGumbel\n======\nclass torch.distributions.gumbel.Gumbel(loc, scale, validate_args=None)\n Bases: \"TransformedDistribution\"\n Samples from a Gumbel Distribution.\n Examples:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
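The "k failures before the first success" reading above corresponds to the pmf P(K = k) = (1 - p)^k * p, which log_prob reproduces (p=0.3, k=2 are illustrative):

```python
import torch
from torch.distributions import Geometric

p = 0.3
m = Geometric(torch.tensor([p]))

# P(K = 2) = (1 - p)**2 * p = 0.7**2 * 0.3 = 0.147
k = torch.tensor([2.0])
print(m.log_prob(k).exp())  # tensor([0.1470])
```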
{"text": "Examples:\n >>> m = Gumbel(torch.tensor([1.0]), torch.tensor([2.0]))\n >>> m.sample() # sample from Gumbel distribution with loc=1, scale=2\n tensor([ 1.0124])\n Parameters:\n * loc (float or Tensor) -- Location parameter of the\n distribution\n * scale (float or Tensor) -- Scale parameter of the\n distribution\n arg_constraints: Dict[str, constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\n entropy()\n expand(batch_shape, _instance=None)\n log_prob(value)\n property mean\n property mode\n property stddev\n support = Real()\n property variance\nHalfCauchy\n==========\nclass torch.distributions.half_cauchy.HalfCauchy(scale, validate_args=None)\n Bases: \"TransformedDistribution\"\n Creates a half-Cauchy distribution parameterized by scale where:\n X ~ Cauchy(0, scale)\n Y = |X| ~ HalfCauchy(scale)\n Example:\n >>> m = HalfCauchy(torch.tensor([1.0]))", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": ">>> m = HalfCauchy(torch.tensor([1.0]))\n >>> m.sample() # half-cauchy distributed with scale=1\n tensor([ 2.3214])\n Parameters:\n scale (float or Tensor) -- scale of the full Cauchy\n distribution\n arg_constraints: Dict[str, constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n icdf(prob)\n log_prob(value)\n property mean\n property mode\n property scale\n support = GreaterThanEq(lower_bound=0.0)\n property variance\nHalfNormal\n==========\nclass torch.distributions.half_normal.HalfNormal(scale, validate_args=None)\n Bases: \"TransformedDistribution\"\n Creates a half-normal distribution parameterized by scale where:\n X ~ Normal(0, scale)\n Y = |X| ~ HalfNormal(scale)\n Example:\n >>> m = HalfNormal(torch.tensor([1.0]))\n >>> m.sample() # half-normal distributed with scale=1\n tensor([ 0.1046])\n Parameters:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "tensor([ 0.1046])\n Parameters:\n scale (float or Tensor) -- scale of the full Normal\n distribution\n arg_constraints: Dict[str, constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n icdf(prob)\n log_prob(value)\n property mean\n property mode\n property scale\n support = GreaterThanEq(lower_bound=0.0)\n property variance\nIndependent\n===========\nclass torch.distributions.independent.Independent(base_distribution, reinterpreted_batch_ndims, validate_args=None)\n Bases: \"Distribution\"\n Reinterprets some of the batch dims of a distribution as event\n dims.\n This is mainly useful for changing the shape of the result of\n \"log_prob()\". For example to create a diagonal Normal distribution\n with the same shape as a Multivariate Normal distribution (so they\n are interchangeable), you can:\n >>> from torch.distributions.multivariate_normal import MultivariateNormal", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": ">>> from torch.distributions.normal import Normal\n >>> loc = torch.zeros(3)\n >>> scale = torch.ones(3)\n >>> mvn = MultivariateNormal(loc, scale_tril=torch.diag(scale))\n >>> [mvn.batch_shape, mvn.event_shape]\n [torch.Size([]), torch.Size([3])]\n >>> normal = Normal(loc, scale)\n >>> [normal.batch_shape, normal.event_shape]\n [torch.Size([3]), torch.Size([])]\n >>> diagn = Independent(normal, 1)\n >>> [diagn.batch_shape, diagn.event_shape]\n [torch.Size([]), torch.Size([3])]\n Parameters:\n * base_distribution\n (torch.distributions.distribution.Distribution) -- a base\n distribution\n * reinterpreted_batch_ndims (int) -- the number of batch\n dims to reinterpret as event dims\n arg_constraints: Dict[str, Constraint] = {}\n entropy()\n enumerate_support(expand=True)\n expand(batch_shape, _instance=None)\n property has_enumerate_support\n property has_rsample\n log_prob(value)\n property mean\n property mode", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
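Following the diagonal-Normal example above, a sketch of what the reinterpretation actually does to log_prob: Independent sums the base distribution's log-probabilities over the reinterpreted event dims (the evaluation point is illustrative):

```python
import torch
from torch.distributions import Normal, Independent

loc, scale = torch.zeros(3), torch.ones(3)
normal = Normal(loc, scale)     # batch_shape (3,), event_shape ()
diagn = Independent(normal, 1)  # batch_shape (),  event_shape (3,)

x = torch.tensor([0.1, -0.2, 0.3])
# Independent's log_prob is the sum of the base log_probs over event dims
print(torch.allclose(diagn.log_prob(x), normal.log_prob(x).sum(-1)))  # True
```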
{"text": "property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n sample(sample_shape=torch.Size([]))\n property support\n property variance\nKumaraswamy\n===========\nclass torch.distributions.kumaraswamy.Kumaraswamy(concentration1, concentration0, validate_args=None)\n Bases: \"TransformedDistribution\"\n Samples from a Kumaraswamy distribution.\n Example:\n >>> m = Kumaraswamy(torch.tensor([1.0]), torch.tensor([1.0]))\n >>> m.sample() # sample from a Kumaraswamy distribution with concentration alpha=1 and beta=1\n tensor([ 0.1729])\n Parameters:\n * concentration1 (float or Tensor) -- 1st\n concentration parameter of the distribution (often referred to\n as alpha)\n * concentration0 (float or Tensor) -- 2nd\n concentration parameter of the distribution (often referred to\n as beta)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "as beta)\n arg_constraints: Dict[str, constraints.Constraint] = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n property mean\n property mode\n support = Interval(lower_bound=0.0, upper_bound=1.0)\n property variance\nLKJCholesky\n===========\nclass torch.distributions.lkj_cholesky.LKJCholesky(dim, concentration=1.0, validate_args=None)\n Bases: \"Distribution\"\n LKJ distribution for lower Cholesky factor of correlation matrices.\n The distribution is controlled by \"concentration\" parameter \\eta to\n make the probability of the correlation matrix M generated from a\n Cholesky factor proportional to \\det(M)^{\\eta - 1}. Because of\n that, when \"concentration == 1\", we have a uniform distribution\n over Cholesky factors of correlation matrices:\n L ~ LKJCholesky(dim, concentration)\n X = L @ L' ~ LKJCorr(dim, concentration)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "X = L @ L' ~ LKJCorr(dim, concentration)\n Note that this distribution samples the Cholesky factor of\n correlation matrices and not the correlation matrices themselves\n and thereby differs slightly from the derivations in [1] for the\n LKJCorr distribution. For sampling, this uses the Onion method\n from [1] Section 3.\n Example:\n >>> l = LKJCholesky(3, 0.5)\n >>> l.sample() # l @ l.T is a sample of a correlation 3x3 matrix\n tensor([[ 1.0000, 0.0000, 0.0000],\n [ 0.3516, 0.9361, 0.0000],\n [-0.1899, 0.4748, 0.8593]])\n Parameters:\n * dim (int) -- dimension of the matrices\n * concentration (float or Tensor) --\n concentration/shape parameter of the distribution (often\n referred to as eta)\n References\n [1] Generating random correlation matrices based on vines and\n extended onion method (2009), Daniel Lewandowski, Dorota\n Kurowicka, Harry Joe. Journal of Multivariate Analysis. 100.", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "10.1016/j.jmva.2009.04.008\n arg_constraints = {'concentration': GreaterThan(lower_bound=0.0)}\n expand(batch_shape, _instance=None)\n log_prob(value)\n sample(sample_shape=torch.Size([]))\n support = CorrCholesky()\nLaplace\n=======\nclass torch.distributions.laplace.Laplace(loc, scale, validate_args=None)\n Bases: \"Distribution\"\n Creates a Laplace distribution parameterized by \"loc\" and \"scale\".\n Example:\n >>> m = Laplace(torch.tensor([0.0]), torch.tensor([1.0]))\n >>> m.sample() # Laplace distributed with loc=0, scale=1\n tensor([ 0.1046])\n Parameters:\n * loc (float or Tensor) -- mean of the distribution\n * scale (float or Tensor) -- scale of the distribution\n arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n icdf(value)\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n property stddev", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
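A small sketch of Laplace's cdf/icdf pair: they are exact inverses on (0, 1), and the cdf at loc is 0.5 since loc is the median (the quantiles chosen are illustrative):

```python
import torch
from torch.distributions import Laplace

m = Laplace(torch.tensor([0.0]), torch.tensor([1.0]))

# cdf and icdf round-trip on the open interval (0, 1)
q = torch.tensor([0.1, 0.5, 0.9])
x = m.icdf(q)
print(torch.allclose(m.cdf(x), q))  # True

# loc is the median: cdf(loc) == 0.5
print(m.cdf(torch.tensor([0.0])))   # tensor([0.5000])
```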
{"text": "property stddev\n support = Real()\n property variance\nLogNormal\n=========\nclass torch.distributions.log_normal.LogNormal(loc, scale, validate_args=None)\n Bases: \"TransformedDistribution\"\n Creates a log-normal distribution parameterized by \"loc\" and\n \"scale\" where:\n X ~ Normal(loc, scale)\n Y = exp(X) ~ LogNormal(loc, scale)\n Example:\n >>> m = LogNormal(torch.tensor([0.0]), torch.tensor([1.0]))\n >>> m.sample() # log-normal distributed with mean=0 and stddev=1\n tensor([ 0.1046])\n Parameters:\n * loc (float or Tensor) -- mean of log of distribution\n * scale (float or Tensor) -- standard deviation of log\n of the distribution\n arg_constraints: Dict[str, constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n property loc\n property mean\n property mode\n property scale\n support = GreaterThan(lower_bound=0.0)\n property variance", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "property variance\nLowRankMultivariateNormal\n=========================\nclass torch.distributions.lowrank_multivariate_normal.LowRankMultivariateNormal(loc, cov_factor, cov_diag, validate_args=None)\n Bases: \"Distribution\"\n Creates a multivariate normal distribution with covariance matrix\n having a low-rank form parameterized by \"cov_factor\" and\n \"cov_diag\":\n covariance_matrix = cov_factor @ cov_factor.T + cov_diag\n -[ Example ]-\n >>> m = LowRankMultivariateNormal(torch.zeros(2), torch.tensor([[1.], [0.]]), torch.ones(2))\n >>> m.sample() # normally distributed with mean=[0,0], cov_factor=[[1],[0]], cov_diag=[1,1]\n tensor([-0.2102, -0.5429])\n Parameters:\n * loc (Tensor) -- mean of the distribution with shape\n batch_shape + event_shape\n * cov_factor (Tensor) -- factor part of low-rank form of\n covariance matrix with shape batch_shape + event_shape +\n (rank,)\n * cov_diag (Tensor) -- diagonal part of low-rank form of", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "covariance matrix with shape batch_shape + event_shape\n Note:\n The computation for determinant and inverse of covariance matrix\n is avoided when cov_factor.shape[1] << cov_factor.shape[0]\n thanks to Woodbury matrix identity and matrix determinant lemma.\n Thanks to these formulas, we just need to compute the determinant\n and inverse of the small size \"capacitance\" matrix:\n capacitance = I + cov_factor.T @ inv(cov_diag) @ cov_factor\n arg_constraints = {'cov_diag': IndependentConstraint(GreaterThan(lower_bound=0.0), 1), 'cov_factor': IndependentConstraint(Real(), 2), 'loc': IndependentConstraint(Real(), 1)}\n property covariance_matrix\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n log_prob(value)\n property mean\n property mode\n property precision_matrix\n rsample(sample_shape=torch.Size([]))\n property scale_tril\n support = IndependentConstraint(Real(), 1)\n property variance\nMixtureSameFamily\n=================", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
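The low-rank-plus-diagonal structure described above can be checked against the dense covariance_matrix property (the parameters mirror the docs' own example):

```python
import torch
from torch.distributions import LowRankMultivariateNormal

loc = torch.zeros(2)
cov_factor = torch.tensor([[1.0], [0.0]])
cov_diag = torch.ones(2)
m = LowRankMultivariateNormal(loc, cov_factor, cov_diag)

# covariance_matrix == cov_factor @ cov_factor.T + diag(cov_diag)
expected = cov_factor @ cov_factor.T + torch.diag(cov_diag)
print(torch.allclose(m.covariance_matrix, expected))  # True
```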
{"text": "MixtureSameFamily\nclass torch.distributions.mixture_same_family.MixtureSameFamily(mixture_distribution, component_distribution, validate_args=None)\n Bases: \"Distribution\"\n The MixtureSameFamily distribution implements a (batch of)\n mixture distribution where all components are from different\n parameterizations of the same distribution type. It is\n parameterized by a Categorical \"selecting distribution\" (over k\n components) and a component distribution, i.e., a Distribution\n with a rightmost batch shape (equal to [k]) which indexes each\n (batch of) component.\n Examples:\n >>> # Construct Gaussian Mixture Model in 1D consisting of 5 equally\n >>> # weighted normal distributions\n >>> mix = D.Categorical(torch.ones(5,))\n >>> comp = D.Normal(torch.randn(5,), torch.rand(5,))\n >>> gmm = MixtureSameFamily(mix, comp)\n >>> # Construct Gaussian Mixture Model in 2D consisting of 5 equally\n >>> # weighted bivariate normal distributions", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": ">>> mix = D.Categorical(torch.ones(5,))\n >>> comp = D.Independent(D.Normal(\n ... torch.randn(5,2), torch.rand(5,2)), 1)\n >>> gmm = MixtureSameFamily(mix, comp)\n >>> # Construct a batch of 3 Gaussian Mixture Models in 2D each\n >>> # consisting of 5 random weighted bivariate normal distributions\n >>> mix = D.Categorical(torch.rand(3,5))\n >>> comp = D.Independent(D.Normal(\n ... torch.randn(3,5,2), torch.rand(3,5,2)), 1)\n >>> gmm = MixtureSameFamily(mix, comp)\n Parameters:\n * mixture_distribution --\n torch.distributions.Categorical-like instance. Manages the\n probability of selecting component. The number of categories\n must match the rightmost batch dimension of the\n component_distribution. Must have either scalar\n batch_shape or batch_shape matching\n component_distribution.batch_shape[:-1]\n * component_distribution --", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
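Beyond construction, the mixture density itself can be cross-checked by hand: log p(x) is a logsumexp over components of log-weight plus component log-density (the 1D two-component parameters here are illustrative):

```python
import torch
import torch.distributions as D

mix = D.Categorical(torch.tensor([0.3, 0.7]))
comp = D.Normal(torch.tensor([-1.0, 1.0]), torch.tensor([0.5, 0.5]))
gmm = D.MixtureSameFamily(mix, comp)

x = torch.tensor(0.25)
# mixture log-density: logsumexp_k( log w_k + log N(x; mu_k, sigma_k) )
manual = torch.logsumexp(mix.logits + comp.log_prob(x), dim=-1)
print(torch.allclose(gmm.log_prob(x), manual))  # True
```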
{"text": "\ncomponent_distribution --\n torch.distributions.Distribution-like instance. Right-most\n batch dimension indexes component.\n arg_constraints: Dict[str, Constraint] = {}\n cdf(x)\n property component_distribution\n expand(batch_shape, _instance=None)\n has_rsample = False\n log_prob(x)\n property mean\n property mixture_distribution\n sample(sample_shape=torch.Size([]))\n property support\n property variance\nMultinomial\n===========\nclass torch.distributions.multinomial.Multinomial(total_count=1, probs=None, logits=None, validate_args=None)\n Bases: \"Distribution\"\n Creates a Multinomial distribution parameterized by \"total_count\"\n and either \"probs\" or \"logits\" (but not both). The innermost\n dimension of \"probs\" indexes over categories. All other dimensions\n index over batches.\n Note that \"total_count\" need not be specified if only \"log_prob()\"\n is called (see example below)\n Note:\n The probs argument must be non-negative, finite and have a non-\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "zero sum, and it will be normalized to sum to 1 along the last\n dimension. \"probs\" will return this normalized value. The\n logits argument will be interpreted as unnormalized log\n probabilities and can therefore be any real number. It will\n likewise be normalized so that the resulting probabilities sum to\n 1 along the last dimension. \"logits\" will return this normalized\n value.\n * \"sample()\" requires a single shared total_count for all\n parameters and samples.\n * \"log_prob()\" allows different total_count for each parameter\n and sample.\n Example:\n >>> m = Multinomial(100, torch.tensor([ 1., 1., 1., 1.]))\n >>> x = m.sample() # equal probability of 0, 1, 2, 3\n tensor([ 21., 24., 30., 25.])\n >>> Multinomial(probs=torch.tensor([1., 1., 1., 1.])).log_prob(x)\n tensor([-4.1338])\n Parameters:\n * total_count (int) -- number of trials\n * probs (Tensor) -- event probabilities", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "\nlogits (Tensor) -- event log probabilities\n (unnormalized)\n arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}\n entropy()\n expand(batch_shape, _instance=None)\n log_prob(value)\n property logits\n property mean\n property param_shape\n property probs\n sample(sample_shape=torch.Size([]))\n property support\n total_count: int\n property variance\nMultivariateNormal\n==================\nclass torch.distributions.multivariate_normal.MultivariateNormal(loc, covariance_matrix=None, precision_matrix=None, scale_tril=None, validate_args=None)\n Bases: \"Distribution\"\n Creates a multivariate normal (also called Gaussian) distribution\n parameterized by a mean vector and a covariance matrix.\n The multivariate normal distribution can be parameterized either in\n terms of a positive definite covariance matrix \\mathbf{\\Sigma} or a\n positive definite precision matrix \\mathbf{\\Sigma}^{-1} or a lower-\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "triangular matrix \\mathbf{L} with positive-valued diagonal entries,\n such that \\mathbf{\\Sigma} = \\mathbf{L}\\mathbf{L}^\\top. This\n triangular matrix can be obtained via e.g. Cholesky decomposition\n of the covariance.\n -[ Example ]-\n\n\n\nm = MultivariateNormal(torch.zeros(2), torch.eye(2))\nm.sample() # normally distributed with mean=[0,0] and covariance_matrix=I\n tensor([-0.2102, -0.5429])\n Parameters:\n * loc (Tensor) -- mean of the distribution\n * covariance_matrix (Tensor) -- positive-definite\n covariance matrix\n * precision_matrix (Tensor) -- positive-definite precision\n matrix\n * scale_tril (Tensor) -- lower-triangular factor of\n covariance, with positive-valued diagonal\n Note:\n Only one of \"covariance_matrix\" or \"precision_matrix\" or\n \"scale_tril\" can be specified.Using \"scale_tril\" will be more\n efficient: all computations internally are based on \"scale_tril\".\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "If \"covariance_matrix\" or \"precision_matrix\" is passed instead,\n it is only used to compute the corresponding lower triangular\n matrices using a Cholesky decomposition.\n arg_constraints = {'covariance_matrix': PositiveDefinite(), 'loc': IndependentConstraint(Real(), 1), 'precision_matrix': PositiveDefinite(), 'scale_tril': LowerCholesky()}\n property covariance_matrix\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n log_prob(value)\n property mean\n property mode\n property precision_matrix\n rsample(sample_shape=torch.Size([]))\n property scale_tril\n support = IndependentConstraint(Real(), 1)\n property variance\nNegativeBinomial\n================\nclass torch.distributions.negative_binomial.NegativeBinomial(total_count, probs=None, logits=None, validate_args=None)\n Bases: \"Distribution\"\n Creates a Negative Binomial distribution, i.e. distribution of the\n number of successful independent and identical Bernoulli trials", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "before \"total_count\" failures are achieved. The probability of\n success of each Bernoulli trial is \"probs\".\n Parameters:\n * total_count (float or Tensor) -- non-negative number\n of negative Bernoulli trials to stop, although the\n distribution is still valid for real valued count\n * probs (Tensor) -- Event probabilities of success in the\n half open interval [0, 1)\n * logits (Tensor) -- Event log-odds for probabilities of\n success\n arg_constraints = {'logits': Real(), 'probs': HalfOpenInterval(lower_bound=0.0, upper_bound=1.0), 'total_count': GreaterThanEq(lower_bound=0)}\n expand(batch_shape, _instance=None)\n log_prob(value)\n property logits\n property mean\n property mode\n property param_shape\n property probs\n sample(sample_shape=torch.Size([]))\n support = IntegerGreaterThan(lower_bound=0)\n property variance\nNormal\n======\nclass torch.distributions.normal.Normal(loc, scale, validate_args=None)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Bases: \"ExponentialFamily\"\n Creates a normal (also called Gaussian) distribution parameterized\n by \"loc\" and \"scale\".\n Example:\n >>> m = Normal(torch.tensor([0.0]), torch.tensor([1.0]))\n >>> m.sample() # normally distributed with loc=0 and scale=1\n tensor([ 0.1046])\n Parameters:\n * loc (float or Tensor) -- mean of the distribution\n (often referred to as mu)\n * scale (float or Tensor) -- standard deviation of the\n distribution (often referred to as sigma)\n arg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n icdf(value)\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n sample(sample_shape=torch.Size([]))\n property stddev\n support = Real()\n property variance\nOneHotCategorical\n=================", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "OneHotCategorical\nclass torch.distributions.one_hot_categorical.OneHotCategorical(probs=None, logits=None, validate_args=None)\n Bases: \"Distribution\"\n Creates a one-hot categorical distribution parameterized by \"probs\"\n or \"logits\".\n Samples are one-hot coded vectors of size \"probs.size(-1)\".\n Note:\n The probs argument must be non-negative, finite and have a non-\n zero sum, and it will be normalized to sum to 1 along the last\n dimension. \"probs\" will return this normalized value. The\n logits argument will be interpreted as unnormalized log\n probabilities and can therefore be any real number. It will\n likewise be normalized so that the resulting probabilities sum to\n 1 along the last dimension. \"logits\" will return this normalized\n value.\n See also: \"torch.distributions.Categorical()\" for specifications of\n \"probs\" and \"logits\".\n Example:\n >>> m = OneHotCategorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ]))", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "\n\n\nm.sample() # equal probability of 0, 1, 2, 3\n tensor([ 0., 0., 0., 1.])\n Parameters:\n * probs (Tensor) -- event probabilities\n * logits (Tensor) -- event log probabilities\n (unnormalized)\n arg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}\n entropy()\n enumerate_support(expand=True)\n expand(batch_shape, _instance=None)\n has_enumerate_support = True\n log_prob(value)\n property logits\n property mean\n property mode\n property param_shape\n property probs\n sample(sample_shape=torch.Size([]))\n support = OneHot()\n property variance\nPareto\n======\nclass torch.distributions.pareto.Pareto(scale, alpha, validate_args=None)\n Bases: \"TransformedDistribution\"\n Samples from a Pareto Type 1 distribution.\n Example:\n >>> m = Pareto(torch.tensor([1.0]), torch.tensor([1.0]))\n >>> m.sample() # sample from a Pareto distribution with scale=1 and alpha=1\n tensor([ 1.5623])\n Parameters:\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "tensor([ 1.5623])\n Parameters:\n * scale (float or Tensor) -- Scale parameter of the\n distribution\n * alpha (float or Tensor) -- Shape parameter of the\n distribution\n arg_constraints: Dict[str, constraints.Constraint] = {'alpha': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}\n entropy()\n expand(batch_shape, _instance=None)\n property mean\n property mode\n property support\n property variance\nPoisson\n=======\nclass torch.distributions.poisson.Poisson(rate, validate_args=None)\n Bases: \"ExponentialFamily\"\n Creates a Poisson distribution parameterized by \"rate\", the rate\n parameter.\n Samples are nonnegative integers, with a pmf given by\n \\mathrm{rate}^k \\frac{e^{-\\mathrm{rate}}}{k!}\n Example:\n >>> m = Poisson(torch.tensor([4]))\n >>> m.sample()\n tensor([ 3.])\n Parameters:\n rate (Number, Tensor) -- the rate parameter\n arg_constraints = {'rate': GreaterThanEq(lower_bound=0.0)}", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "expand(batch_shape, _instance=None)\n log_prob(value)\n property mean\n property mode\n sample(sample_shape=torch.Size([]))\n support = IntegerGreaterThan(lower_bound=0)\n property variance\nRelaxedBernoulli\n================\nclass torch.distributions.relaxed_bernoulli.RelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None)\n Bases: \"TransformedDistribution\"\n Creates a RelaxedBernoulli distribution, parametrized by\n \"temperature\", and either \"probs\" or \"logits\" (but not both). This\n is a relaxed version of the Bernoulli distribution, so the values\n are in (0, 1), and has reparametrizable samples.\n Example:\n >>> m = RelaxedBernoulli(torch.tensor([2.2]),\n ... torch.tensor([0.1, 0.2, 0.3, 0.99]))\n >>> m.sample()\n tensor([ 0.2951, 0.3442, 0.8918, 0.9021])\n Parameters:\n * temperature (Tensor) -- relaxation temperature\n * probs (Number, Tensor) -- the probability of\n sampling 1", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "sampling 1\n * logits (Number, Tensor) -- the log-odds of sampling\n 1\n arg_constraints: Dict[str, constraints.Constraint] = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\n expand(batch_shape, _instance=None)\n has_rsample = True\n property logits\n property probs\n support = Interval(lower_bound=0.0, upper_bound=1.0)\n property temperature\nLogitRelaxedBernoulli\n=====================\nclass torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None)\n Bases: \"Distribution\"\n Creates a LogitRelaxedBernoulli distribution parameterized by\n \"probs\" or \"logits\" (but not both), which is the logit of a\n RelaxedBernoulli distribution.\n Samples are logits of values in (0, 1). See [1] for more details.\n Parameters:\n * temperature (Tensor) -- relaxation temperature\n * probs (Number, Tensor) -- the probability of\n sampling 1", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "sampling 1\n * logits (Number, Tensor) -- the log-odds of sampling\n 1\n [1] The Concrete Distribution: A Continuous Relaxation of Discrete\n Random Variables (Maddison et al, 2017)\n [2] Categorical Reparametrization with Gumbel-Softmax (Jang et al,\n 2017)\n arg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\n expand(batch_shape, _instance=None)\n log_prob(value)\n property logits\n property param_shape\n property probs\n rsample(sample_shape=torch.Size([]))\n support = Real()\nRelaxedOneHotCategorical\n========================\nclass torch.distributions.relaxed_categorical.RelaxedOneHotCategorical(temperature, probs=None, logits=None, validate_args=None)\n Bases: \"TransformedDistribution\"\n Creates a RelaxedOneHotCategorical distribution parametrized by\n \"temperature\", and either \"probs\" or \"logits\". This is a relaxed\n version of the \"OneHotCategorical\" distribution, so its samples are", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "on simplex, and are reparametrizable.\n Example:\n >>> m = RelaxedOneHotCategorical(torch.tensor([2.2]),\n ... torch.tensor([0.1, 0.2, 0.3, 0.4]))\n >>> m.sample()\n tensor([ 0.1294, 0.2324, 0.3859, 0.2523])\n Parameters:\n * temperature (Tensor) -- relaxation temperature\n * probs (Tensor) -- event probabilities\n * logits (Tensor) -- unnormalized log probability for each\n event\n arg_constraints: Dict[str, constraints.Constraint] = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}\n expand(batch_shape, _instance=None)\n has_rsample = True\n property logits\n property probs\n support = Simplex()\n property temperature\nStudentT\n========\nclass torch.distributions.studentT.StudentT(df, loc=0.0, scale=1.0, validate_args=None)\n Bases: \"Distribution\"\n Creates a Student's t-distribution parameterized by degree of\n freedom \"df\", mean \"loc\" and scale \"scale\".\n Example:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Example:\n >>> m = StudentT(torch.tensor([2.0]))\n >>> m.sample() # Student's t-distributed with degrees of freedom=2\n tensor([ 0.1046])\n Parameters:\n * df (float or Tensor) -- degrees of freedom\n * loc (float or Tensor) -- mean of the distribution\n * scale (float or Tensor) -- scale of the distribution\n arg_constraints = {'df': GreaterThan(lower_bound=0.0), 'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n support = Real()\n property variance\nTransformedDistribution\n=======================\nclass torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms, validate_args=None)\n Bases: \"Distribution\"\n Extension of the Distribution class, which applies a sequence of\n Transforms to a base distribution. Let f be the composition of", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "transforms applied:\n X ~ BaseDistribution\n Y = f(X) ~ TransformedDistribution(BaseDistribution, f)\n log p(Y) = log p(X) + log |det (dX/dY)|\n Note that the \".event_shape\" of a \"TransformedDistribution\" is the\n maximum shape of its base distribution and its transforms, since\n transforms can introduce correlations among events.\n An example for the usage of \"TransformedDistribution\" would be:\n # Building a Logistic Distribution\n # X ~ Uniform(0, 1)\n # f = a + b * logit(X)\n # Y ~ f(X) ~ Logistic(a, b)\n base_distribution = Uniform(0, 1)\n transforms = [SigmoidTransform().inv, AffineTransform(loc=a, scale=b)]\n logistic = TransformedDistribution(base_distribution, transforms)\n For more examples, please look at the implementations of \"Gumbel\",\n \"HalfCauchy\", \"HalfNormal\", \"LogNormal\", \"Pareto\", \"Weibull\",\n \"RelaxedBernoulli\" and \"RelaxedOneHotCategorical\"\n arg_constraints: Dict[str, Constraint] = {}\n cdf(value)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "cdf(value)\n Computes the cumulative distribution function by inverting the\n transform(s) and computing the score of the base distribution.\n expand(batch_shape, _instance=None)\n property has_rsample\n icdf(value)\n Computes the inverse cumulative distribution function using\n transform(s) and computing the score of the base distribution.\n log_prob(value)\n Scores the sample by inverting the transform(s) and computing\n the score using the score of the base distribution and the log\n abs det jacobian.\n rsample(sample_shape=torch.Size([]))\n Generates a sample_shape shaped reparameterized sample or\n sample_shape shaped batch of reparameterized samples if the\n distribution parameters are batched. Samples first from base\n distribution and applies transform() for every transform in\n the list.\n sample(sample_shape=torch.Size([]))\n Generates a sample_shape shaped sample or sample_shape shaped", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "batch of samples if the distribution parameters are batched.\n Samples first from base distribution and applies transform()\n for every transform in the list.\n property support\nUniform\n=======\nclass torch.distributions.uniform.Uniform(low, high, validate_args=None)\n Bases: \"Distribution\"\n Generates uniformly distributed random samples from the half-open\n interval \"[low, high)\".\n Example:\n >>> m = Uniform(torch.tensor([0.0]), torch.tensor([5.0]))\n >>> m.sample() # uniformly distributed in the range [0.0, 5.0)\n tensor([ 2.3418])\n Parameters:\n * low (float or Tensor) -- lower range (inclusive).\n * high (float or Tensor) -- upper range (exclusive).\n arg_constraints = {'high': Dependent(), 'low': Dependent()}\n cdf(value)\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n icdf(value)\n log_prob(value)\n property mean\n property mode\n rsample(sample_shape=torch.Size([]))\n property stddev\n property support", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "property stddev\n property support\n property variance\nVonMises\n========\nclass torch.distributions.von_mises.VonMises(loc, concentration, validate_args=None)\n Bases: \"Distribution\"\n A circular von Mises distribution.\n This implementation uses polar coordinates. The \"loc\" and \"value\"\n args can be any real number (to facilitate unconstrained\n optimization), but are interpreted as angles modulo 2 pi.\n Example::\n >>> m = VonMises(torch.tensor([1.0]), torch.tensor([1.0]))\n >>> m.sample() # von Mises distributed with loc=1 and concentration=1\n tensor([1.9777])\n Parameters:\n * loc (torch.Tensor) -- an angle in radians.\n * concentration (torch.Tensor) -- concentration parameter\n arg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'loc': Real()}\n expand(batch_shape)\n has_rsample = False\n log_prob(value)\n property mean\n The provided mean is the circular one.\n property mode\n sample(sample_shape=torch.Size([]))", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "sample(sample_shape=torch.Size([]))\n The sampling algorithm for the von Mises distribution is based\n on the following paper: Best, D. J., and Nicholas I. Fisher.\n \"Efficient simulation of the von Mises distribution.\" Applied\n Statistics (1979): 152-157.\n support = Real()\n property variance\n The provided variance is the circular one.\nWeibull\n=======\nclass torch.distributions.weibull.Weibull(scale, concentration, validate_args=None)\n Bases: \"TransformedDistribution\"\n Samples from a two-parameter Weibull distribution.\n -[ Example ]-\n\n\n\nm = Weibull(torch.tensor([1.0]), torch.tensor([1.0]))\nm.sample() # sample from a Weibull distribution with scale=1, concentration=1\n tensor([ 0.4784])\n Parameters:\n * scale (float or Tensor) -- Scale parameter of\n distribution (lambda).\n * concentration (float or Tensor) -- Concentration\n parameter of distribution (k/shape).\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "parameter of distribution (k/shape).\n arg_constraints: Dict[str, constraints.Constraint] = {'concentration': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}\n entropy()\n expand(batch_shape, _instance=None)\n property mean\n property mode\n support = GreaterThan(lower_bound=0.0)\n property variance\nWishart\n=======\nclass torch.distributions.wishart.Wishart(df, covariance_matrix=None, precision_matrix=None, scale_tril=None, validate_args=None)\n Bases: \"ExponentialFamily\"\n Creates a Wishart distribution parameterized by a symmetric\n positive definite matrix \\Sigma, or its Cholesky decomposition\n \\mathbf{\\Sigma} = \\mathbf{L}\\mathbf{L}^\\top\n -[ Example ]-\n\n\n\nm = Wishart(torch.eye(2), torch.Tensor([2]))\nm.sample() # Wishart distributed with mean=df * I and\n # variance(x_ij)=df for i != j and variance(x_ij)=2 * df for i == j\n Parameters:\n * covariance_matrix (Tensor) -- positive-definite\n covariance matrix\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "covariance matrix\n * precision_matrix (Tensor) -- positive-definite precision\n matrix\n * scale_tril (Tensor) -- lower-triangular factor of\n covariance, with positive-valued diagonal\n * df (float or Tensor) -- real-valued parameter larger\n than the (dimension of Square matrix) - 1\n Note:\n Only one of \"covariance_matrix\" or \"precision_matrix\" or\n \"scale_tril\" can be specified. Using \"scale_tril\" will be more\n efficient: all computations internally are based on \"scale_tril\".\n If \"covariance_matrix\" or \"precision_matrix\" is passed instead,\n it is only used to compute the corresponding lower triangular\n matrices using a Cholesky decomposition.\n 'torch.distributions.LKJCholesky' is a restricted Wishart\n distribution.[1]\n References\n [1] Wang, Z., Wu, Y. and Chu, H., 2018. On equivalence of the LKJ\n distribution and the restricted Wishart distribution. [2] Sawyer,", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "S., 2007. Wishart Distributions and Inverse-Wishart Sampling. [3]\n Anderson, T. W., 2003. An Introduction to Multivariate Statistical\n Analysis (3rd ed.). [4] Odell, P. L. & Feiveson, A. H., 1966. A\n Numerical Procedure to Generate a SampleCovariance Matrix. JASA,\n 61(313):199-203. [5] Ku, Y.-C. & Bloomfield, P., 2010. Generating\n Random Wishart Matrices with Fractional Degrees of Freedom in OX.\n arg_constraints = {'covariance_matrix': PositiveDefinite(), 'df': GreaterThan(lower_bound=0), 'precision_matrix': PositiveDefinite(), 'scale_tril': LowerCholesky()}\n property covariance_matrix\n entropy()\n expand(batch_shape, _instance=None)\n has_rsample = True\n log_prob(value)\n property mean\n property mode\n property precision_matrix\n rsample(sample_shape=torch.Size([]), max_try_correction=None)\n Warning:\n In some cases, sampling algorithm based on Bartlett\n decomposition may return singular matrix samples. Several", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "tries to correct singular samples are performed by default,\n but it may end up returning singular matrix samples. Singular\n samples may return -inf values in .log_prob(). In those\n cases, the user should validate the samples and either fix the\n value of df or adjust max_try_correction value for\n argument in .rsample accordingly.\n property scale_tril\n support = PositiveDefinite()\n property variance\nKL Divergence\n===============\ntorch.distributions.kl.kl_divergence(p, q)\n Compute Kullback-Leibler divergence KL(p | q) between two\n distributions.\n KL(p | q) = \\int p(x) \\log\\frac {p(x)} {q(x)} \\,dx\n Parameters:\n * p (Distribution) -- A \"Distribution\" object.\n * q (Distribution) -- A \"Distribution\" object.\n Returns:\n A batch of KL divergences of shape batch_shape.\n Return type:\n Tensor\n Raises:\n NotImplementedError -- If the distribution types have not", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "been registered via \"register_kl()\".\n KL divergence is currently implemented for the following\n distribution pairs:\n * \"Bernoulli\" and \"Bernoulli\"\n * \"Bernoulli\" and \"Poisson\"\n * \"Beta\" and \"Beta\"\n * \"Beta\" and \"ContinuousBernoulli\"\n * \"Beta\" and \"Exponential\"\n * \"Beta\" and \"Gamma\"\n * \"Beta\" and \"Normal\"\n * \"Beta\" and \"Pareto\"\n * \"Beta\" and \"Uniform\"\n * \"Binomial\" and \"Binomial\"\n * \"Categorical\" and \"Categorical\"\n * \"Cauchy\" and \"Cauchy\"\n * \"ContinuousBernoulli\" and \"ContinuousBernoulli\"\n * \"ContinuousBernoulli\" and \"Exponential\"\n * \"ContinuousBernoulli\" and \"Normal\"\n * \"ContinuousBernoulli\" and \"Pareto\"\n * \"ContinuousBernoulli\" and \"Uniform\"\n * \"Dirichlet\" and \"Dirichlet\"\n * \"Exponential\" and \"Beta\"\n * \"Exponential\" and \"ContinuousBernoulli\"\n * \"Exponential\" and \"Exponential\"\n * \"Exponential\" and \"Gamma\"\n * \"Exponential\" and \"Gumbel\"\n * \"Exponential\" and \"Normal\"", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "\n\"Exponential\" and \"Normal\"\n\"Exponential\" and \"Pareto\"\n\"Exponential\" and \"Uniform\"\n\"ExponentialFamily\" and \"ExponentialFamily\"\n\"Gamma\" and \"Beta\"\n\"Gamma\" and \"ContinuousBernoulli\"\n\"Gamma\" and \"Exponential\"\n\"Gamma\" and \"Gamma\"\n\"Gamma\" and \"Gumbel\"\n\"Gamma\" and \"Normal\"\n\"Gamma\" and \"Pareto\"\n\"Gamma\" and \"Uniform\"\n\"Geometric\" and \"Geometric\"\n\"Gumbel\" and \"Beta\"\n\"Gumbel\" and \"ContinuousBernoulli\"\n\"Gumbel\" and \"Exponential\"\n\"Gumbel\" and \"Gamma\"\n\"Gumbel\" and \"Gumbel\"\n\"Gumbel\" and \"Normal\"\n\"Gumbel\" and \"Pareto\"\n\"Gumbel\" and \"Uniform\"\n\"HalfNormal\" and \"HalfNormal\"\n\"Independent\" and \"Independent\"\n\"Laplace\" and \"Beta\"\n\"Laplace\" and \"ContinuousBernoulli\"\n\"Laplace\" and \"Exponential\"\n\"Laplace\" and \"Gamma\"\n\"Laplace\" and \"Laplace\"\n\"Laplace\" and \"Normal\"\n\"Laplace\" and \"Pareto\"\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "\n\"Laplace\" and \"Pareto\"\n\"Laplace\" and \"Uniform\"\n\"LowRankMultivariateNormal\" and \"LowRankMultivariateNormal\"\n\"LowRankMultivariateNormal\" and \"MultivariateNormal\"\n\"MultivariateNormal\" and \"LowRankMultivariateNormal\"\n\"MultivariateNormal\" and \"MultivariateNormal\"\n\"Normal\" and \"Beta\"\n\"Normal\" and \"ContinuousBernoulli\"\n\"Normal\" and \"Exponential\"\n\"Normal\" and \"Gamma\"\n\"Normal\" and \"Gumbel\"\n\"Normal\" and \"Laplace\"\n\"Normal\" and \"Normal\"\n\"Normal\" and \"Pareto\"\n\"Normal\" and \"Uniform\"\n\"OneHotCategorical\" and \"OneHotCategorical\"\n\"Pareto\" and \"Beta\"\n\"Pareto\" and \"ContinuousBernoulli\"\n\"Pareto\" and \"Exponential\"\n\"Pareto\" and \"Gamma\"\n\"Pareto\" and \"Normal\"\n\"Pareto\" and \"Pareto\"\n\"Pareto\" and \"Uniform\"\n\"Poisson\" and \"Bernoulli\"\n\"Poisson\" and \"Binomial\"\n\"Poisson\" and \"Poisson\"\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "\n\"Poisson\" and \"Poisson\"\n\"TransformedDistribution\" and \"TransformedDistribution\"\n\"Uniform\" and \"Beta\"\n\"Uniform\" and \"ContinuousBernoulli\"\n\"Uniform\" and \"Exponential\"\n\"Uniform\" and \"Gamma\"\n\"Uniform\" and \"Gumbel\"\n\"Uniform\" and \"Normal\"\n\"Uniform\" and \"Pareto\"\n\"Uniform\" and \"Uniform\"\ntorch.distributions.kl.register_kl(type_p, type_q)\n Decorator to register a pairwise function with \"kl_divergence()\".\n Usage:\n @register_kl(Normal, Normal)\n def kl_normal_normal(p, q):\n # insert implementation here\n Lookup returns the most specific (type,type) match ordered by\n subclass. If the match is ambiguous, a RuntimeWarning is raised.\n For example to resolve the ambiguous situation:\n @register_kl(BaseP, DerivedQ)\n def kl_version1(p, q): ...\n @register_kl(DerivedP, BaseQ)\n def kl_version2(p, q): ...\n you should register a third most-specific implementation, e.g.:\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "register_kl(DerivedP, DerivedQ)(kl_version1) # Break the tie.\n Parameters:\n * type_p (type) -- A subclass of \"Distribution\".\n * type_q (type) -- A subclass of \"Distribution\".\nTransforms\n============\nclass torch.distributions.transforms.AbsTransform(cache_size=0)\n Transform via the mapping y = |x|.\nclass torch.distributions.transforms.AffineTransform(loc, scale, event_dim=0, cache_size=0)\n Transform via the pointwise affine mapping y = \\text{loc} +\n \\text{scale} \\times x.\n Parameters:\n * loc (Tensor or float) -- Location parameter.\n * scale (Tensor or float) -- Scale parameter.\n * event_dim (int) -- Optional size of event_shape. This\n should be zero for univariate random variables, 1 for\n distributions over vectors, 2 for distributions over matrices,\n etc.\nclass torch.distributions.transforms.CatTransform(tseq, dim=0, lengths=None, cache_size=0)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Transform functor that applies a sequence of transforms tseq\n component-wise to each submatrix at dim, of length\n lengths[dim], in a way compatible with \"torch.cat()\".\n Example:\n x0 = torch.cat([torch.range(1, 10), torch.range(1, 10)], dim=0)\n x = torch.cat([x0, x0], dim=0)\n t0 = CatTransform([ExpTransform(), identity_transform], dim=0, lengths=[10, 10])\n t = CatTransform([t0, t0], dim=0, lengths=[20, 20])\n y = t(x)\nclass torch.distributions.transforms.ComposeTransform(parts, cache_size=0)\n Composes multiple transforms in a chain. The transforms being\n composed are responsible for caching.\n Parameters:\n * parts (list of \"Transform\") -- A list of transforms to\n compose.\n * cache_size (int) -- Size of cache. If zero, no caching\n is done. If one, the latest single value is cached. Only 0 and\n 1 are supported.\nclass torch.distributions.transforms.CorrCholeskyTransform(cache_size=0)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Transforms an uncontrained real vector x with length D(D-1)/2 into\n the Cholesky factor of a D-dimension correlation matrix. This\n Cholesky factor is a lower triangular matrix with positive\n diagonals and unit Euclidean norm for each row. The transform is\n processed as follows:\n 1. First we convert x into a lower triangular matrix in row\n order.\n 2. For each row X_i of the lower triangular part, we apply a\n signed* version of class \"StickBreakingTransform\" to\n transform X_i into a unit Euclidean length vector using the\n following steps: - Scales into the interval (-1, 1) domain:\n r_i = \\tanh(X_i). - Transforms into an unsigned domain: z_i =\n r_i^2. - Applies s_i = StickBreakingTransform(z_i). -\n Transforms back into signed domain: y_i = sign(r_i) *\n \\sqrt{s_i}.\nclass torch.distributions.transforms.CumulativeDistributionTransform(distribution, cache_size=0)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Transform via the cumulative distribution function of a probability\n distribution.\n Parameters:\n distribution (Distribution) -- Distribution whose\n cumulative distribution function to use for the transformation.\n Example:\n # Construct a Gaussian copula from a multivariate normal.\n base_dist = MultivariateNormal(\n loc=torch.zeros(2),\n scale_tril=LKJCholesky(2).sample(),\n )\n transform = CumulativeDistributionTransform(Normal(0, 1))\n copula = TransformedDistribution(base_dist, [transform])\nclass torch.distributions.transforms.ExpTransform(cache_size=0)\n Transform via the mapping y = \\exp(x).\nclass torch.distributions.transforms.IndependentTransform(base_transform, reinterpreted_batch_ndims, cache_size=0)\n Wrapper around another transform to treat\n \"reinterpreted_batch_ndims\"-many extra of the right most dimensions\n as dependent. This has no effect on the forward or backward", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "transforms, but does sum out \"reinterpreted_batch_ndims\"-many of\n the rightmost dimensions in \"log_abs_det_jacobian()\".\n Parameters:\n * base_transform (\"Transform\") -- A base transform.\n * reinterpreted_batch_ndims (int) -- The number of extra\n rightmost dimensions to treat as dependent.\nclass torch.distributions.transforms.LowerCholeskyTransform(cache_size=0)\n Transform from unconstrained matrices to lower-triangular matrices\n with nonnegative diagonal entries.\n This is useful for parameterizing positive definite matrices in\n terms of their Cholesky factorization.\nclass torch.distributions.transforms.PositiveDefiniteTransform(cache_size=0)\n Transform from unconstrained matrices to positive-definite\n matrices.\nclass torch.distributions.transforms.PowerTransform(exponent, cache_size=0)\n Transform via the mapping y = x^{\\text{exponent}}.\nclass torch.distributions.transforms.ReshapeTransform(in_shape, out_shape, cache_size=0)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Unit Jacobian transform to reshape the rightmost part of a tensor.\n Note that \"in_shape\" and \"out_shape\" must have the same number of\n elements, just as for \"torch.Tensor.reshape()\".\n Parameters:\n * in_shape (torch.Size) -- The input event shape.\n * out_shape (torch.Size) -- The output event shape.\nclass torch.distributions.transforms.SigmoidTransform(cache_size=0)\n Transform via the mapping y = \\frac{1}{1 + \\exp(-x)} and x =\n \\text{logit}(y).\nclass torch.distributions.transforms.SoftplusTransform(cache_size=0)\n Transform via the mapping \\text{Softplus}(x) = \\log(1 + \\exp(x)).\n The implementation reverts to the linear function when x > 20.\nclass torch.distributions.transforms.TanhTransform(cache_size=0)\n Transform via the mapping y = \\tanh(x).\n It is equivalent to \"ComposeTransform([AffineTransform(0., 2.),\n SigmoidTransform(), AffineTransform(-1., 2.)])\" However this\n might not be numerically stable, thus it is recommended to use\n TanhTransform instead.", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "TanhTransform instead.\n Note that one should use cache_size=1 when it comes to NaN/Inf\n values.\nclass torch.distributions.transforms.SoftmaxTransform(cache_size=0)\n Transform from unconstrained space to the simplex via y = \\exp(x)\n then normalizing.\n This is not bijective and cannot be used for HMC. However this acts\n mostly coordinate-wise (except for the final normalization), and\n thus is appropriate for coordinate-wise optimization algorithms.\nclass torch.distributions.transforms.StackTransform(tseq, dim=0, cache_size=0)\n Transform functor that applies a sequence of transforms tseq\n component-wise to each submatrix at dim in a way compatible with\n \"torch.stack()\".\n Example:\n x = torch.stack([torch.range(1, 10), torch.range(1, 10)], dim=1)\n t = StackTransform([ExpTransform(), identity_transform], dim=1)\n y = t(x)\nclass torch.distributions.transforms.StickBreakingTransform(cache_size=0)\n Transform from unconstrained space to the simplex of one additional", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "dimension via a stick-breaking process.\n This transform arises as an iterated sigmoid transform in a stick-\n breaking construction of the Dirichlet distribution: the first\n logit is transformed via sigmoid to the first probability and the\n probability of everything else, and then the process recurses.\n This is bijective and appropriate for use in HMC; however it mixes\n coordinates together and is less appropriate for optimization.\nclass torch.distributions.transforms.Transform(cache_size=0)\n Abstract class for invertable transformations with computable log\n det jacobians. They are primarily used in\n \"torch.distributions.TransformedDistribution\".\n Caching is useful for transforms whose inverses are either\n expensive or numerically unstable. Note that care must be taken\n with memoized values since the autograd graph may be reversed. For\n example while the following works with or without caching:\n y = t(x)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "y = t(x)\n t.log_abs_det_jacobian(x, y).backward() # x will receive gradients.\n However the following will error when caching due to dependency\n reversal:\n y = t(x)\n z = t.inv(y)\n grad(z.sum(), [y]) # error because z is x\n Derived classes should implement one or both of \"_call()\" or\n \"_inverse()\". Derived classes that set bijective=True should also\n implement \"log_abs_det_jacobian()\".\n Parameters:\n cache_size (int) -- Size of cache. If zero, no caching is\n done. If one, the latest single value is cached. Only 0 and 1\n are supported.\n Variables:\n * domain (\"Constraint\") -- The constraint representing valid\n inputs to this transform.\n * codomain (\"Constraint\") -- The constraint representing\n valid outputs to this transform which are inputs to the\n inverse transform.\n * bijective (bool) -- Whether this transform is bijective.\n A transform \"t\" is bijective iff \"t.inv(t(x)) == x\" and", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "\"t(t.inv(y)) == y\" for every \"x\" in the domain and \"y\" in the\n codomain. Transforms that are not bijective should at least\n maintain the weaker pseudoinverse properties \"t(t.inv(t(x)) ==\n t(x)\" and \"t.inv(t(t.inv(y))) == t.inv(y)\".\n * sign (int or Tensor) -- For bijective univariate\n transforms, this should be +1 or -1 depending on whether\n transform is monotone increasing or decreasing.\n property inv\n Returns the inverse \"Transform\" of this transform. This should\n satisfy \"t.inv.inv is t\".\n property sign\n Returns the sign of the determinant of the Jacobian, if\n applicable. In general this only makes sense for bijective\n transforms.\n log_abs_det_jacobian(x, y)\n Computes the log det jacobian log |dy/dx| given input and\n output.\n forward_shape(shape)\n Infers the shape of the forward computation, given the input\n shape. Defaults to preserving shape.\n inverse_shape(shape)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "inverse_shape(shape)\n Infers the shapes of the inverse computation, given the output\n shape. Defaults to preserving shape.\nConstraints\n=============\nThe following constraints are implemented:\n* \"constraints.boolean\"\n* \"constraints.cat\"\n* \"constraints.corr_cholesky\"\n* \"constraints.dependent\"\n* \"constraints.greater_than(lower_bound)\"\n* \"constraints.greater_than_eq(lower_bound)\"\n* \"constraints.independent(constraint, reinterpreted_batch_ndims)\"\n* \"constraints.integer_interval(lower_bound, upper_bound)\"\n* \"constraints.interval(lower_bound, upper_bound)\"\n* \"constraints.less_than(upper_bound)\"\n* \"constraints.lower_cholesky\"\n* \"constraints.lower_triangular\"\n* \"constraints.multinomial\"\n* \"constraints.nonnegative_integer\"\n* \"constraints.one_hot\"\n* \"constraints.positive_integer\"\n* \"constraints.positive\"\n* \"constraints.positive_semidefinite\"\n* \"constraints.positive_definite\"\n* \"constraints.real_vector\"\n* \"constraints.real\"\n* \"constraints.simplex\"\n* \"constraints.symmetric\"\n* \"constraints.stack\"", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "\n\"constraints.symmetric\"\n\"constraints.stack\"\n\"constraints.square\"\n\"constraints.symmetric\"\n\"constraints.unit_interval\"\nclass torch.distributions.constraints.Constraint\n Abstract base class for constraints.\n A constraint object represents a region over which a variable is\n valid, e.g. within which a variable can be optimized.\n Variables:\nis_discrete (bool) -- Whether constrained space is\n discrete. Defaults to False.\nevent_dim (int) -- Number of rightmost dimensions that\n together define an event. The \"check()\" method will remove\n this many dimensions when computing validity.\n check(value)\n Returns a byte tensor of \"sample_shape + batch_shape\" indicating\n whether each event in value satisfies this constraint.\ntorch.distributions.constraints.cat\n alias of \"_Cat\"\ntorch.distributions.constraints.dependent_property\n alias of \"_DependentProperty\"\ntorch.distributions.constraints.greater_than\n alias of \"_GreaterThan\"\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "alias of \"_GreaterThan\"\ntorch.distributions.constraints.greater_than_eq\n alias of \"_GreaterThanEq\"\ntorch.distributions.constraints.independent\n alias of \"_IndependentConstraint\"\ntorch.distributions.constraints.integer_interval\n alias of \"_IntegerInterval\"\ntorch.distributions.constraints.interval\n alias of \"_Interval\"\ntorch.distributions.constraints.half_open_interval\n alias of \"_HalfOpenInterval\"\ntorch.distributions.constraints.less_than\n alias of \"_LessThan\"\ntorch.distributions.constraints.multinomial\n alias of \"_Multinomial\"\ntorch.distributions.constraints.stack\n alias of \"_Stack\"\nConstraint Registry\n=====================\nPyTorch provides two global \"ConstraintRegistry\" objects that link\n\"Constraint\" objects to \"Transform\" objects. These objects both input\nconstraints and return transforms, but they have different guarantees\non bijectivity.\n1. \"biject_to(constraint)\" looks up a bijective \"Transform\" from\n \"constraints.real\" to the given \"constraint\". The returned", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "transform is guaranteed to have \".bijective = True\" and should\n implement \".log_abs_det_jacobian()\".\n2. \"transform_to(constraint)\" looks up a not-necessarily bijective\n \"Transform\" from \"constraints.real\" to the given \"constraint\". The\n returned transform is not guaranteed to implement\n \".log_abs_det_jacobian()\".\nThe \"transform_to()\" registry is useful for performing unconstrained\noptimization on constrained parameters of probability distributions,\nwhich are indicated by each distribution's \".arg_constraints\" dict.\nThese transforms often overparameterize a space in order to avoid\nrotation; they are thus more suitable for coordinate-wise optimization\nalgorithms like Adam:\n loc = torch.zeros(100, requires_grad=True)\n unconstrained = torch.zeros(100, requires_grad=True)\n scale = transform_to(Normal.arg_constraints['scale'])(unconstrained)\n loss = -Normal(loc, scale).log_prob(data).sum()\nThe \"biject_to()\" registry is useful for Hamiltonian Monte Carlo,", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "where samples from a probability distribution with constrained\n\".support\" are propagated in an unconstrained space, and algorithms\nare typically rotation invariant.:\n dist = Exponential(rate)\n unconstrained = torch.zeros(100, requires_grad=True)\n sample = biject_to(dist.support)(unconstrained)\n potential_energy = -dist.log_prob(sample).sum()\nNote:\n An example where \"transform_to\" and \"biject_to\" differ is\n \"constraints.simplex\": \"transform_to(constraints.simplex)\" returns a\n \"SoftmaxTransform\" that simply exponentiates and normalizes its\n inputs; this is a cheap and mostly coordinate-wise operation\n appropriate for algorithms like SVI. In contrast,\n \"biject_to(constraints.simplex)\" returns a \"StickBreakingTransform\"\n that bijects its input down to a one-fewer-dimensional space; this a\n more expensive less numerically stable transform but is needed for\n algorithms like HMC.\nThe \"biject_to\" and \"transform_to\" objects can be extended by user-", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "defined constraints and transforms using their \".register()\" method\neither as a function on singleton constraints:\n transform_to.register(my_constraint, my_transform)\nor as a decorator on parameterized constraints:\n @transform_to.register(MyConstraintClass)\n def my_factory(constraint):\n assert isinstance(constraint, MyConstraintClass)\n return MyTransform(constraint.param1, constraint.param2)\nYou can create your own registry by creating a new\n\"ConstraintRegistry\" object.\nclass torch.distributions.constraint_registry.ConstraintRegistry\n Registry to link constraints to transforms.\n register(constraint, factory=None)\n Registers a \"Constraint\" subclass in this registry. Usage:\n @my_registry.register(MyConstraintClass)\n def construct_transform(constraint):\n assert isinstance(constraint, MyConstraint)\n return MyTransform(constraint.arg_constraints)\n Parameters:\n * constraint (subclass of \"Constraint\") -- A subclass of", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "\"Constraint\", or a singleton object of the desired class.\n * factory (Callable) -- A callable that inputs a\n constraint object and returns a \"Transform\" object.", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"}
{"text": "Named Tensors operator coveragePlease read Named Tensors first for an introduction to named tensors.\nThis document is a reference for name inference, a process that\ndefines how named tensors:\n1. use names to provide additional automatic runtime correctness\n checks\n2. propagate names from input tensors to output tensors\nBelow is a list of all operations that are supported with named\ntensors and their associated name inference rules.\nIf you don't see an operation listed here, but it would help your use\ncase, please search if an issue has already been filed and if not,\nfile one.\nWarning:\n The named tensor API is experimental and subject to change.\nSupported Operations\n^^^^^^^^^^^^^^^^^^^^\n+----------------------+----------------------+\n| API | Name inference rule |\n|======================|======================|\n| \"Tensor.abs()\", | Keeps input names |\n| \"torch.abs()\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.abs_()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.acos()\", | Keeps input names |\n| \"torch.acos()\" | |\n+----------------------+----------------------+\n| \"Tensor.acos_()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.add()\", | Unifies names from |\n| \"torch.add()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.add_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.addmm()\", | Contracts away dims |\n| \"torch.addmm()\" | |\n+----------------------+----------------------+\n| \"Tensor.addmm_()\" | Contracts away dims |\n+----------------------+----------------------+\n| \"Tensor.addmv()\", | Contracts away dims |\n| \"torch.addmv()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"torch.addmv()\" | |\n+----------------------+----------------------+\n| \"Tensor.addmv_()\" | Contracts away dims |\n+----------------------+----------------------+\n| \"Tensor.align_as()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.align_to()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.all()\", | None |\n| \"torch.all()\" | |\n+----------------------+----------------------+\n| \"Tensor.any()\", | None |\n| \"torch.any()\" | |\n+----------------------+----------------------+\n| \"Tensor.asin()\", | Keeps input names |\n| \"torch.asin()\" | |\n+----------------------+----------------------+\n| \"Tensor.asin_()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.atan()\", | Keeps input names |\n| \"torch.atan()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"torch.atan()\" | |\n+----------------------+----------------------+\n| \"Tensor.atan2()\", | Unifies names from |\n| \"torch.atan2()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.atan2_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.atan_()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.bernoulli() | Keeps input names |\n| \", | |\n| \"torch.bernoulli()\" | |\n+----------------------+----------------------+\n| \"Tensor.bernoulli_( | None |\n| )\" | |\n+----------------------+----------------------+\n| \"Tensor.bfloat16()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.bitwise_not | Keeps input names |\n| ()\", \"torch.bitwise | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| ()\", \"torch.bitwise | |\n| not()\" | |\n+----------------------+----------------------+\n| \"Tensor.bitwise_not | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.bmm()\", | Contracts away dims |\n| \"torch.bmm()\" | |\n+----------------------+----------------------+\n| \"Tensor.bool()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.byte()\" | Keeps input names |\n+----------------------+----------------------+\n| \"torch.cat()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.cauchy_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.ceil()\", | Keeps input names |\n| \"torch.ceil()\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.ceil_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.char()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.chunk()\", | Keeps input names |\n| \"torch.chunk()\" | |\n+----------------------+----------------------+\n| \"Tensor.clamp()\", | Keeps input names |\n| \"torch.clamp()\" | |\n+----------------------+----------------------+\n| \"Tensor.clamp_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.copy_()\" | out function and in- |\n| | place variants |\n+----------------------+----------------------+\n| \"Tensor.cos()\", | Keeps input names |\n| \"torch.cos()\" | |\n+----------------------+----------------------+\n| \"Tensor.cos_()\" | None |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.cosh()\", | Keeps input names |\n| \"torch.cosh()\" | |\n+----------------------+----------------------+\n| \"Tensor.cosh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.acosh()\", | Keeps input names |\n| \"torch.acosh()\" | |\n+----------------------+----------------------+\n| \"Tensor.acosh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.cpu()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.cuda()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.cumprod()\", | Keeps input names |\n| \"torch.cumprod()\" | |\n+----------------------+----------------------+\n| \"Tensor.cumsum()\", | Keeps input names |\n| \"torch.cumsum()\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.data_ptr()\" | None |\n+----------------------+----------------------+\n| \"Tensor.deg2rad()\", | Keeps input names |\n| \"torch.deg2rad()\" | |\n+----------------------+----------------------+\n| \"Tensor.deg2rad_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.detach()\", | Keeps input names |\n| \"torch.detach()\" | |\n+----------------------+----------------------+\n| \"Tensor.detach_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.device\", | None |\n| \"torch.device()\" | |\n+----------------------+----------------------+\n| \"Tensor.digamma()\", | Keeps input names |\n| \"torch.digamma()\" | |\n+----------------------+----------------------+\n| \"Tensor.digamma_()\" | None |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.dim()\" | None |\n+----------------------+----------------------+\n| \"Tensor.div()\", | Unifies names from |\n| \"torch.div()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.div_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.dot()\", | None |\n| \"torch.dot()\" | |\n+----------------------+----------------------+\n| \"Tensor.double()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.element_siz | None |\n| e()\" | |\n+----------------------+----------------------+\n| \"torch.empty()\" | Factory functions |\n+----------------------+----------------------+\n| \"torch.empty_like()\" | Factory functions |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.eq()\", | Unifies names from |\n| \"torch.eq()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.erf()\", | Keeps input names |\n| \"torch.erf()\" | |\n+----------------------+----------------------+\n| \"Tensor.erf_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.erfc()\", | Keeps input names |\n| \"torch.erfc()\" | |\n+----------------------+----------------------+\n| \"Tensor.erfc_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.erfinv()\", | Keeps input names |\n| \"torch.erfinv()\" | |\n+----------------------+----------------------+\n| \"Tensor.erfinv_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.exp()\", | Keeps input names |\n| \"torch.exp()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"torch.exp()\" | |\n+----------------------+----------------------+\n| \"Tensor.exp_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.expand()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.expm1()\", | Keeps input names |\n| \"torch.expm1()\" | |\n+----------------------+----------------------+\n| \"Tensor.expm1_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.exponential | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.fill()\" | None |\n+----------------------+----------------------+\n| \"Tensor.flatten()\", | See documentation |\n| \"torch.flatten()\" | |\n+----------------------+----------------------+\n| \"Tensor.float()\" | Keeps input names |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.floor()\", | Keeps input names |\n| \"torch.floor()\" | |\n+----------------------+----------------------+\n| \"Tensor.floor_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.frac()\", | Keeps input names |\n| \"torch.frac()\" | |\n+----------------------+----------------------+\n| \"Tensor.frac_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.ge()\", | Unifies names from |\n| \"torch.ge()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.get_device( | None |\n| )\", | |\n| \"torch.get_device()\" | |\n+----------------------+----------------------+\n| \"Tensor.grad\" | None |\n+----------------------+----------------------+\n| \"Tensor.gt()\", | Unifies names from |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"Tensor.gt()\", | Unifies names from |\n| \"torch.gt()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.half()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.has_names()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.index_fill( | Keeps input names |\n| )\", | |\n| \"torch.index_fill()\" | |\n+----------------------+----------------------+\n| \"Tensor.index_fill_ | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.int()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.is_contiguo | None |\n| us()\" | |\n+----------------------+----------------------+\n| \"Tensor.is_cuda\" | None |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.is_floating | None |\n| _point()\", \"torch.i | |\n| s_floating_point()\" | |\n+----------------------+----------------------+\n| \"Tensor.is_leaf\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_pinned()\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_shared()\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_signed() | None |\n| \", | |\n| \"torch.is_signed()\" | |\n+----------------------+----------------------+\n| \"Tensor.is_sparse\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_sparse_c | None |\n| sr\" | |\n+----------------------+----------------------+\n| \"torch.is_tensor()\" | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"torch.is_tensor()\" | None |\n+----------------------+----------------------+\n| \"Tensor.item()\" | None |\n+----------------------+----------------------+\n| \"Tensor.kthvalue()\", | Removes dimensions |\n| \"torch.kthvalue()\" | |\n+----------------------+----------------------+\n| \"Tensor.le()\", | Unifies names from |\n| \"torch.le()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.log()\", | Keeps input names |\n| \"torch.log()\" | |\n+----------------------+----------------------+\n| \"Tensor.log10()\", | Keeps input names |\n| \"torch.log10()\" | |\n+----------------------+----------------------+\n| \"Tensor.log10_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.log1p()\", | Keeps input names |\n| \"torch.log1p()\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.log1p_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.log2()\", | Keeps input names |\n| \"torch.log2()\" | |\n+----------------------+----------------------+\n| \"Tensor.log2_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.log_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.log_normal_ | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.logical_not | Keeps input names |\n| ()\", \"torch.logical | |\n| not()\" | |\n+----------------------+----------------------+\n| \"Tensor.logical_not | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.logsumexp() | Removes dimensions |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"Tensor.logsumexp() | Removes dimensions |\n| \", | |\n| \"torch.logsumexp()\" | |\n+----------------------+----------------------+\n| \"Tensor.long()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.lt()\", | Unifies names from |\n| \"torch.lt()\" | inputs |\n+----------------------+----------------------+\n| \"torch.manual_seed( | None |\n| )\" | |\n+----------------------+----------------------+\n| \"Tensor.masked_fill | Keeps input names |\n| ()\", \"torch.masked_ | |\n| fill()\" | |\n+----------------------+----------------------+\n| \"Tensor.masked_fill | None |\n| _()\" | |\n+----------------------+----------------------+\n| \"Tensor.masked_sele | Aligns mask up to |\n| ct()\", \"torch.maske | input and then unif |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| ct()\", \"torch.maske | input and then unif |\n| d_select()\" | ies_names_from_inpu |\n| | t_tensors |\n+----------------------+----------------------+\n| \"Tensor.matmul()\", | Contracts away dims |\n| \"torch.matmul()\" | |\n+----------------------+----------------------+\n| \"Tensor.mean()\", | Removes dimensions |\n| \"torch.mean()\" | |\n+----------------------+----------------------+\n| \"Tensor.median()\", | Removes dimensions |\n| \"torch.median()\" | |\n+----------------------+----------------------+\n| \"Tensor.nanmedian() | Removes dimensions |\n| \", | |\n| \"torch.nanmedian()\" | |\n+----------------------+----------------------+\n| \"Tensor.mm()\", | Contracts away dims |\n| \"torch.mm()\" | |\n+----------------------+----------------------+\n| \"Tensor.mode()\", | Removes dimensions |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"Tensor.mode()\", | Removes dimensions |\n| \"torch.mode()\" | |\n+----------------------+----------------------+\n| \"Tensor.mul()\", | Unifies names from |\n| \"torch.mul()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.mul_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.mv()\", | Contracts away dims |\n| \"torch.mv()\" | |\n+----------------------+----------------------+\n| \"Tensor.names\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.narrow()\", | Keeps input names |\n| \"torch.narrow()\" | |\n+----------------------+----------------------+\n| \"Tensor.ndim\" | None |\n+----------------------+----------------------+\n| \"Tensor.ndimension( | None |\n| )\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| )\" | |\n+----------------------+----------------------+\n| \"Tensor.ne()\", | Unifies names from |\n| \"torch.ne()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.neg()\", | Keeps input names |\n| \"torch.neg()\" | |\n+----------------------+----------------------+\n| \"Tensor.neg_()\" | None |\n+----------------------+----------------------+\n| \"torch.normal()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.normal_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.numel()\", | None |\n| \"torch.numel()\" | |\n+----------------------+----------------------+\n| \"torch.ones()\" | Factory functions |\n+----------------------+----------------------+\n| \"Tensor.pow()\", | Unifies names from |\n| \"torch.pow()\" | inputs |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"torch.pow()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.pow_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.prod()\", | Removes dimensions |\n| \"torch.prod()\" | |\n+----------------------+----------------------+\n| \"Tensor.rad2deg()\", | Keeps input names |\n| \"torch.rad2deg()\" | |\n+----------------------+----------------------+\n| \"Tensor.rad2deg_()\" | None |\n+----------------------+----------------------+\n| \"torch.rand()\" | Factory functions |\n+----------------------+----------------------+\n| \"torch.rand()\" | Factory functions |\n+----------------------+----------------------+\n| \"torch.randn()\" | Factory functions |\n+----------------------+----------------------+\n| \"torch.randn()\" | Factory functions |\n+----------------------+----------------------+\n| \"Tensor.random_()\" | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"Tensor.random_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.reciprocal( | Keeps input names |\n| )\", | |\n| \"torch.reciprocal()\" | |\n+----------------------+----------------------+\n| \"Tensor.reciprocal_ | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.refine_name | See documentation |\n| s()\" | |\n+----------------------+----------------------+\n| \"Tensor.register_ho | None |\n| ok()\" | |\n+----------------------+----------------------+\n| \"Tensor.rename()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.rename_()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.requires_gr | None |\n| ad\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| ad\" | |\n+----------------------+----------------------+\n| \"Tensor.requires_gr | None |\n| ad_()\" | |\n+----------------------+----------------------+\n| \"Tensor.resize_()\" | Only allow resizes |\n| | that do not change |\n| | shape |\n+----------------------+----------------------+\n| \"Tensor.resize_as_( | Only allow resizes |\n| )\" | that do not change |\n| | shape |\n+----------------------+----------------------+\n| \"Tensor.round()\", | Keeps input names |\n| \"torch.round()\" | |\n+----------------------+----------------------+\n| \"Tensor.round_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.rsqrt()\", | Keeps input names |\n| \"torch.rsqrt()\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.rsqrt_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.select()\", | Removes dimensions |\n| \"torch.select()\" | |\n+----------------------+----------------------+\n| \"Tensor.short()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.sigmoid()\", | Keeps input names |\n| \"torch.sigmoid()\" | |\n+----------------------+----------------------+\n| \"Tensor.sigmoid_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sign()\", | Keeps input names |\n| \"torch.sign()\" | |\n+----------------------+----------------------+\n| \"Tensor.sign_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sgn()\", | Keeps input names |\n| \"torch.sgn()\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.sgn_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sin()\", | Keeps input names |\n| \"torch.sin()\" | |\n+----------------------+----------------------+\n| \"Tensor.sin_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sinh()\", | Keeps input names |\n| \"torch.sinh()\" | |\n+----------------------+----------------------+\n| \"Tensor.sinh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.asinh()\", | Keeps input names |\n| \"torch.asinh()\" | |\n+----------------------+----------------------+\n| \"Tensor.asinh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.size()\" | None |\n+----------------------+----------------------+\n| \"Tensor.softmax()\", | Keeps input names |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"Tensor.softmax()\", | Keeps input names |\n| \"torch.softmax()\" | |\n+----------------------+----------------------+\n| \"Tensor.split()\", | Keeps input names |\n| \"torch.split()\" | |\n+----------------------+----------------------+\n| \"Tensor.sqrt()\", | Keeps input names |\n| \"torch.sqrt()\" | |\n+----------------------+----------------------+\n| \"Tensor.sqrt_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.squeeze()\", | Removes dimensions |\n| \"torch.squeeze()\" | |\n+----------------------+----------------------+\n| \"Tensor.std()\", | Removes dimensions |\n| \"torch.std()\" | |\n+----------------------+----------------------+\n| \"torch.std_mean()\" | Removes dimensions |\n+----------------------+----------------------+\n| \"Tensor.stride()\" | None |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.sub()\", | Unifies names from |\n| \"torch.sub()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.sub_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.sum()\", | Removes dimensions |\n| \"torch.sum()\" | |\n+----------------------+----------------------+\n| \"Tensor.tan()\", | Keeps input names |\n| \"torch.tan()\" | |\n+----------------------+----------------------+\n| \"Tensor.tan_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.tanh()\", | Keeps input names |\n| \"torch.tanh()\" | |\n+----------------------+----------------------+\n| \"Tensor.tanh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.atanh()\", | Keeps input names |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "| \"Tensor.atanh()\", | Keeps input names |\n| \"torch.atanh()\" | |\n+----------------------+----------------------+\n| \"Tensor.atanh_()\" | None |\n+----------------------+----------------------+\n| \"torch.tensor()\" | Factory functions |\n+----------------------+----------------------+\n| \"Tensor.to()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.topk()\", | Removes dimensions |\n| \"torch.topk()\" | |\n+----------------------+----------------------+\n| \"Tensor.transpose() | Permutes dimensions |\n| \", | |\n| \"torch.transpose()\" | |\n+----------------------+----------------------+\n| \"Tensor.trunc()\", | Keeps input names |\n| \"torch.trunc()\" | |\n+----------------------+----------------------+\n| \"Tensor.trunc_()\" | None |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\n| \"Tensor.type()\" | None |\n+----------------------+----------------------+\n| \"Tensor.type_as()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.unbind()\", | Removes dimensions |\n| \"torch.unbind()\" | |\n+----------------------+----------------------+\n| \"Tensor.unflatten()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.uniform_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.var()\", | Removes dimensions |\n| \"torch.var()\" | |\n+----------------------+----------------------+\n| \"torch.var_mean()\" | Removes dimensions |\n+----------------------+----------------------+\n| \"Tensor.zero_()\" | None |\n+----------------------+----------------------+\n| \"torch.zeros()\" | Factory functions |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "+----------------------+----------------------+\nKeeps input names\n=================\nAll pointwise unary functions follow this rule as well as some other\nunary functions.\n* Check names: None\n* Propagate names: input tensor's names are propagated to the output.\n\n\n\nx = torch.randn(3, 3, names=('N', 'C'))\nx.abs().names\n ('N', 'C')\nRemoves dimensions\n==================\nAll reduction ops like \"sum()\" remove dimensions by reducing over the\ndesired dimensions. Other operations like \"select()\" and \"squeeze()\"\nremove dimensions.\nWherever one can pass an integer dimension index to an operator, one\ncan also pass a dimension name. Functions that take lists of dimension\nindices can also take in a list of dimension names.\n* Check names: If \"dim\" or \"dims\" is passed in as a list of names,\n check that those names exist in \"self\".\n* Propagate names: If the dimensions of the input tensor specified by\n \"dim\" or \"dims\" are not present in the output tensor, then the\n\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "corresponding names of those dimensions do not appear in\n \"output.names\".\n\n\n\nx = torch.randn(1, 3, 3, 3, names=('N', 'C', 'H', 'W'))\nx.squeeze('N').names\n ('C', 'H', 'W')\nx = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))\nx.sum(['N', 'C']).names\n ('H', 'W')\n # Reduction ops with keepdim=True don't actually remove dimensions.\nx = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))\nx.sum(['N', 'C'], keepdim=True).names\n ('N', 'C', 'H', 'W')\nUnifies names from inputs\n=========================\nAll binary arithmetic ops follow this rule. Operations that broadcast\nstill broadcast positionally from the right to preserve compatibility\nwith unnamed tensors. To perform explicit broadcasting by names, use\n\"Tensor.align_as()\".\n* Check names: All names must match positionally from the right. i.e.,\n in \"tensor + other\", \"match(tensor.names[i], other.names[i])\" must\n be true for all \"i\" in \"(-min(tensor.dim(), other.dim()) + 1, -1]\".\n\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "\nCheck names: Furthermore, all named dimensions must be aligned from\n the right. During matching, if we match a named dimension \"A\" with\n an unnamed dimension \"None\", then \"A\" must not appear in the tensor\n with the unnamed dimension.\nPropagate names: unify pairs of names from the right from both\n tensors to produce output names.\nFor example,\n # tensor: Tensor[ N, None]\n # other: Tensor[None, C]\n\n\ntensor = torch.randn(3, 3, names=('N', None))\nother = torch.randn(3, 3, names=(None, 'C'))\n(tensor + other).names\n ('N', 'C')\nCheck names:\n\n\n\n\n\"match(tensor.names[-1], other.names[-1])\" is \"True\"\n\"match(tensor.names[-2], other.names[-2])\" is \"True\"\nBecause we matched \"None\" in \"tensor\" with \"'C'\", check to make sure\n \"'C'\" doesn't exist in \"tensor\" (it does not).\nCheck to make sure \"'N'\" doesn't exist in \"other\" (it does not).\nFinally, the output names are computed with \"[unify('N', None),\nunify(None, 'C')] = ['N', 'C']\"\nMore examples:\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "unify(None, 'C')] = ['N', 'C']\"\nMore examples:\n # Dimensions don't match from the right:\n # tensor: Tensor[N, C]\n # other: Tensor[ N]\n\n\n\ntensor = torch.randn(3, 3, names=('N', 'C'))\nother = torch.randn(3, names=('N',))\n(tensor + other).names\n RuntimeError: Error when attempting to broadcast dims ['N', 'C'] and dims\n ['N']: dim 'C' and dim 'N' are at the same position from the right but do\n not match.\n # Dimensions aren't aligned when matching tensor.names[-1] and other.names[-1]:\n # tensor: Tensor[N, None]\n # other: Tensor[ N]\ntensor = torch.randn(3, 3, names=('N', None))\nother = torch.randn(3, names=('N',))\n(tensor + other).names\n RuntimeError: Misaligned dims when attempting to broadcast dims ['N'] and\n dims ['N', None]: dim 'N' appears in a different position from the right\n across both lists.\nNote:\n In both of the last examples, it is possible to align the tensors by\n names and then perform the addition. Use \"Tensor.align_as()\" to\n\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "align tensors by name or \"Tensor.align_to()\" to align tensors to a\n custom dimension ordering.\nPermutes dimensions\n===================\nSome operations, like \"Tensor.t()\", permute the order of dimensions.\nDimension names are attached to individual dimensions so they get\npermuted as well.\nIf the operator takes in positional index \"dim\", it is also able to\ntake a dimension name as \"dim\".\n* Check names: If \"dim\" is passed as a name, check that it exists in\n the tensor.\n* Propagate names: Permute dimension names in the same way as the\n dimensions that are being permuted.\n\n\n\nx = torch.randn(3, 3, names=('N', 'C'))\nx.transpose('N', 'C').names\n ('C', 'N')\nContracts away dims\n===================\nMatrix multiply functions follow some variant of this. Let's go\nthrough \"torch.mm()\" first and then generalize the rule for batch\nmatrix multiplication.\nFor \"torch.mm(tensor, other)\":\n* Check names: None\n* Propagate names: result names are \"(tensor.names[-2],\n other.names[-1])\".\n\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
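The "torch.mm()" propagation rule above can be sketched in plain Python. This is a torch-free illustration of the rule, not PyTorch's implementation; `mm_names` is a hypothetical helper operating on name tuples only:

```python
def mm_names(tensor_names, other_names):
    # torch.mm checks nothing; the result keeps the row name of the
    # first argument and the column name of the second, while the
    # contracted inner dimensions disappear from the output.
    return (tensor_names[-2], other_names[-1])

print(mm_names(('N', 'D'), ('in', 'out')))  # ('N', 'out')
```

This reproduces the `x.mm(y)` example that follows: the `'D'` and `'in'` dimensions are contracted away.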
{"text": "other.names[-1])\".\n\n\n\nx = torch.randn(3, 3, names=('N', 'D'))\ny = torch.randn(3, 3, names=('in', 'out'))\nx.mm(y).names\n ('N', 'out')\nInherently, a matrix multiplication performs a dot product over two\ndimensions, collapsing them. When two tensors are matrix-multiplied,\nthe contracted dimensions disappear and do not show up in the output\ntensor.\n\"torch.mv()\", \"torch.dot()\" work in a similar way: name inference does\nnot check input names and removes the dimensions that are involved in\nthe dot product:\nx = torch.randn(3, 3, names=('N', 'D'))\ny = torch.randn(3, names=('something',))\nx.mv(y).names\n ('N',)\nNow, let's take a look at \"torch.matmul(tensor, other)\". Assume that\n\"tensor.dim() >= 2\" and \"other.dim() >= 2\".\n* Check names: Check that the batch dimensions of the inputs are\n aligned and broadcastable. See Unifies names from inputs for what it\n means for the inputs to be aligned.\n* Propagate names: result names are obtained by unifying the batch\n\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "dimensions and removing the contracted dimensions:\n \"unify(tensor.names[:-2], other.names[:-2]) + (tensor.names[-2],\n other.names[-1])\".\nExamples:\n # Batch matrix multiply of matrices Tensor['C', 'D'] and Tensor['E', 'F'].\n # 'A', 'B' are batch dimensions.\n\n\n\nx = torch.randn(3, 3, 3, 3, names=('A', 'B', 'C', 'D'))\ny = torch.randn(3, 3, 3, names=('B', 'E', 'F'))\ntorch.matmul(x, y).names\n ('A', 'B', 'C', 'F')\nFinally, there are fused \"add\" versions of many matmul functions.\ni.e., \"addmm()\" and \"addmv()\". These are treated as composing name\ninference for i.e. \"mm()\" and name inference for \"add()\".\nFactory functions\n=================\nFactory functions now take a new \"names\" argument that associates a\nname with each dimension.\ntorch.zeros(2, 3, names=('N', 'C'))\n tensor([[0., 0., 0.],\n [0., 0., 0.]], names=('N', 'C'))\nout function and in-place variants\n==================================\nA tensor specified as an \"out=\" tensor has the following behavior:\n\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
{"text": "\nIf it has no named dimensions, then the names computed from the\n operation get propagated to it.\nIf it has any named dimensions, then the names computed from the\n operation must be exactly equal to the existing names. Otherwise,\n the operation errors.\nAll in-place methods modify inputs to have names equal to the names\ncomputed from name inference. For example:\n\n\nx = torch.randn(3, 3)\ny = torch.randn(3, 3, names=('N', 'C'))\nx.names\n (None, None)\nx += y\nx.names\n ('N', 'C')\n\n\n\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"}
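The "out=" rule above can be sketched without tensors (a hedged, torch-free illustration; `propagate_to_out` is a hypothetical helper, not a PyTorch API):

```python
def propagate_to_out(computed, out_names):
    # If the out= tensor is fully unnamed, it adopts the computed
    # names; otherwise its names must equal the computed names
    # exactly, or the operation errors.
    if all(n is None for n in out_names):
        return tuple(computed)
    if tuple(out_names) != tuple(computed):
        raise RuntimeError(
            f"Expected out names {tuple(computed)}, got {tuple(out_names)}")
    return tuple(out_names)

print(propagate_to_out(('N', 'C'), (None, None)))  # ('N', 'C')
print(propagate_to_out(('N', 'C'), ('N', 'C')))    # ('N', 'C')
```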
{"text": "Tensor Parallelism\n", "source": "https://pytorch.org/docs/stable/distributed.tensor.parallel.html", "category": "pytorch docs"}
{"text": "torch.library\nThe Python operator registration API provides capabilities for extending\nPyTorch's core library of operators with user defined operators.\nCurrently, this can be done in two ways:\n1. Creating new libraries\n * Lets you register new operators and kernels for various\n backends and functionalities by specifying the appropriate\n dispatch keys. For example,\n * Consider registering a new operator \"add\" in your newly\n created namespace \"foo\". You can access this operator using\n the \"torch.ops\" API and call into it by calling\n \"torch.ops.foo.add\". You can also access specific registered\n overloads by calling \"torch.ops.foo.add.{overload_name}\".\n * If you registered a new kernel for the \"CUDA\" dispatch key\n for this operator, then your custom defined function will be\n called for CUDA tensor inputs.\n * This can be done by creating Library class objects of \"DEF\"\n kind.", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"}
{"text": "kind.\n2. Extending existing C++ libraries (e.g., aten)\n * Lets you register kernels for existing operators\n corresponding to various backends and functionalities by\n specifying the appropriate dispatch keys.\n * This may come in handy to fill in spotty operator support for a\n feature implemented through a dispatch key. For example,\n * You can add operator support for Meta Tensors (by\n registering a function to the \"Meta\" dispatch key).\n * This can be done by creating Library class objects of \"IMPL\"\n kind.\nA tutorial that walks you through some examples on how to use this API\nis available on Google Colab.\nWarning:\n Dispatcher is a complicated PyTorch concept and having a sound\n understanding of Dispatcher is crucial to be able to do anything\n advanced with this API. This blog post is a good starting point to\n learn about Dispatcher.\nclass torch.library.Library(ns, kind, dispatch_key='')\n A class to create libraries that can be used to register new", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"}
{"text": "operators or override operators in existing libraries from Python.\n A user can optionally pass in a dispatch key name if they only want\n to register kernels corresponding to one specific dispatch\n key.\n To create a library to override operators in an existing library\n (with name ns), set the kind to \"IMPL\". To create a new library\n (with name ns) to register new operators, set the kind to \"DEF\".\n Parameters:\n * ns -- library name\n * kind -- \"DEF\" or \"IMPL\" (default: \"IMPL\")\n * dispatch_key -- PyTorch dispatch key (default: \"\")\n define(schema, alias_analysis='')\n Defines a new operator and its semantics in the ns namespace.\n Parameters:\n * schema -- function schema to define a new operator.\n * alias_analysis (optional) -- Indicates if the\n aliasing properties of the operator arguments can be\n inferred from the schema (default behavior) or not\n (\"CONSERVATIVE\").\n Returns:", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"}
{"text": "(\"CONSERVATIVE\").\n Returns:\n name of the operator as inferred from the schema.\n Example::\n >>> my_lib = Library(\"foo\", \"DEF\")\n >>> my_lib.define(\"sum(Tensor self) -> Tensor\")\n impl(op_name, fn, dispatch_key='')\n Registers the function implementation for an operator defined in\n the library.\n Parameters:\n * op_name -- operator name (along with the overload) or\n OpOverload object.\n * fn -- function that's the operator implementation for\n the input dispatch key.\n * dispatch_key -- dispatch key that the input function\n should be registered for. By default, it uses the dispatch\n key that the library was created with.\n Example::\n >>> my_lib = Library(\"aten\", \"IMPL\")\n >>> def div_cpu(self, other):\n >>> return self * (1 / other)\n >>> my_lib.impl(\"div.Tensor\", div_cpu, \"CPU\")\nWe have also added some function decorators to make it convenient to", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"}
{"text": "register functions for operators:\n* \"torch.library.impl()\"\n* \"torch.library.define()\"", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"}
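As a rough mental model of the DEF/IMPL split described above, here is a self-contained toy registry. It is only a sketch of the concept, not PyTorch's dispatcher; `ToyLibrary` and everything in it are made-up names:

```python
class ToyLibrary:
    # Maps "ns::op_name" -> {dispatch_key: kernel_function}.
    _ops = {}

    def __init__(self, ns, kind, dispatch_key=''):
        self.ns, self.kind, self.dispatch_key = ns, kind, dispatch_key

    def define(self, schema):
        # Only DEF libraries declare new operator schemas.
        assert self.kind == "DEF"
        name = schema.split("(")[0]
        ToyLibrary._ops.setdefault(f"{self.ns}::{name}", {})
        return name

    def impl(self, op_name, fn, dispatch_key=''):
        # Attach a kernel for the operator under a dispatch key.
        key = dispatch_key or self.dispatch_key or "CPU"
        ToyLibrary._ops.setdefault(f"{self.ns}::{op_name}", {})[key] = fn

my_lib = ToyLibrary("foo", "DEF")
my_lib.define("add(Tensor self, Tensor other) -> Tensor")
impl_lib = ToyLibrary("foo", "IMPL")
impl_lib.impl("add", lambda a, b: a + b, "CPU")
print(ToyLibrary._ops["foo::add"]["CPU"](1, 2))  # 3
```

The real dispatcher additionally handles overloads, schemas, and per-backend fallthroughs; the sketch only shows the declare-then-register flow.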
{"text": "Named Tensors\nNamed Tensors allow users to give explicit names to tensor dimensions.\nIn most cases, operations that take dimension parameters will accept\ndimension names, avoiding the need to track dimensions by position. In\naddition, named tensors use names to automatically check that APIs are\nbeing used correctly at runtime, providing extra safety. Names can\nalso be used to rearrange dimensions, for example, to support\n\"broadcasting by name\" rather than \"broadcasting by position\".\nWarning:\n The named tensor API is a prototype feature and subject to change.\nCreating named tensors\n======================\nFactory functions now take a new \"names\" argument that associates a\nname with each dimension.\n\n\n\ntorch.zeros(2, 3, names=('N', 'C'))\n tensor([[0., 0., 0.],\n [0., 0., 0.]], names=('N', 'C'))\nNamed dimensions, like regular Tensor dimensions, are ordered.\n\"tensor.names[i]\" is the name of dimension \"i\" of \"tensor\".\nThe following factory functions support named tensors:\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "\n\"torch.empty()\"\n\"torch.rand()\"\n\"torch.randn()\"\n\"torch.ones()\"\n\"torch.tensor()\"\n\"torch.zeros()\"\nNamed dimensions\n================\nSee \"names\" for restrictions on tensor names.\nUse \"names\" to access the dimension names of a tensor and \"rename()\"\nto rename named dimensions.\n\n\nimgs = torch.randn(1, 2, 2, 3, names=('N', 'C', 'H', 'W'))\nimgs.names\n ('N', 'C', 'H', 'W')\nrenamed_imgs = imgs.rename(H='height', W='width')\nrenamed_imgs.names\n ('N', 'C', 'height', 'width')\nNamed tensors can coexist with unnamed tensors; named tensors are\ninstances of \"torch.Tensor\". Unnamed tensors have \"None\"-named\ndimensions. Named tensors do not require all dimensions to be named.\nimgs = torch.randn(1, 2, 2, 3, names=(None, 'C', 'H', 'W'))\nimgs.names\n (None, 'C', 'H', 'W')\nName propagation semantics\n==========================\nNamed tensors use names to automatically check that APIs are being\ncalled correctly at runtime. This occurs in a process called name\n\n\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "inference. More formally, name inference consists of the following\ntwo steps:\n* Check names: an operator may perform automatic checks at runtime\n that check that certain dimension names must match.\n* Propagate names: name inference propagates names to output\n tensors.\nAll operations that support named tensors propagate names.\n\n\n\nx = torch.randn(3, 3, names=('N', 'C'))\nx.abs().names\n ('N', 'C')\nmatch semantics\n\n\n\n\nTwo names match if they are equal (string equality) or if at least\none is \"None\". Nones are essentially a special \"wildcard\" name.\n\"unify(A, B)\" determines which of the names \"A\" and \"B\" to propagate\nto the outputs. It returns the more specific of the two names, if\nthey match. If the names do not match, then it errors.\nNote:\n In practice, when working with named tensors, one should avoid\n having unnamed dimensions because their handling can be complicated.\n It is recommended to lift all unnamed dimensions to be named\n dimensions by using \"refine_names()\".", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
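The match and unify semantics above translate almost directly into plain Python. This torch-free sketch is an illustration of the rules, not PyTorch's implementation:

```python
def match(a, b):
    # Two names match if they are equal, or if at least one is None,
    # the special "wildcard" name.
    return a is None or b is None or a == b

def unify(a, b):
    # Propagate the more specific of two matching names; error on a
    # genuine mismatch.
    if not match(a, b):
        raise RuntimeError(f"dim {a!r} and dim {b!r} do not match")
    return a if a is not None else b

print(unify('X', None))   # 'X'
print(unify(None, 'C'))   # 'C'
print(unify(None, None))  # None
```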
{"text": "dimensions by using \"refine_names()\".\nBasic name inference rules\n\nLet's see how \"match\" and \"unify\" are used in name inference in the\ncase of adding two one-dim tensors with no broadcasting.\n x = torch.randn(3, names=('X',))\n y = torch.randn(3)\n z = torch.randn(3, names=('Z',))\nCheck names: check that the names of the two tensors match.\nFor the following examples:\n\n\n\nx + y # match('X', None) is True\nx + z # match('X', 'Z') is False\nx + x # match('X', 'X') is True\nx + z\n Error when attempting to broadcast dims ['X'] and dims ['Z']: dim 'X' and dim 'Z' are at the same position from the right but do not match.\nPropagate names: unify the names to select which one to\npropagate. In the case of \"x + y\", \"unify('X', None) = 'X'\" because\n\"'X'\" is more specific than \"None\".\n(x + y).names\n ('X',)\n(x + x).names\n ('X',)\nFor a comprehensive list of name inference rules, see Named Tensors\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "operator coverage. Here are two common operations that may be useful\nto go over:\n* Binary arithmetic ops: Unifies names from inputs\n* Matrix multiplication ops: Contracts away dims\nExplicit alignment by names\n===========================\nUse \"align_as()\" or \"align_to()\" to align tensor dimensions by name to\na specified ordering. This is useful for performing \"broadcasting by\nnames\".\n # This function is agnostic to the dimension ordering of input,\n # as long as it has a C dimension somewhere.\n def scale_channels(input, scale):\n     scale = scale.refine_names('C')\n     return input * scale.align_as(input)\n\n\n\nnum_channels = 3\nscale = torch.randn(num_channels, names=('C',))\nimgs = torch.rand(3, 3, 3, num_channels, names=('N', 'H', 'W', 'C'))\nmore_imgs = torch.rand(3, num_channels, 3, 3, names=('N', 'C', 'H', 'W'))\nvideos = torch.randn(3, num_channels, 3, 3, 3, names=('N', 'C', 'H', 'W', 'D'))\nscale_channels(imgs, scale)\nscale_channels(more_imgs, scale)\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "\n\n\nscale_channels(more_imgs, scale)\nscale_channels(videos, scale)\nManipulating dimensions\n=======================\nUse \"align_to()\" to permute large numbers of dimensions without\nmentioning all of them as required by \"permute()\".\ntensor = torch.randn(2, 2, 2, 2, 2, 2)\nnamed_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')\n # Move the F (dim 5) and E dimension (dim 4) to the front while keeping\n # the rest in the same order\ntensor.permute(5, 4, 0, 1, 2, 3)\nnamed_tensor.align_to('F', 'E', ...)\nUse \"flatten()\" and \"unflatten()\" to flatten and unflatten dimensions,\nrespectively. These methods are more verbose than \"view()\" and\n\"reshape()\", but have more semantic meaning to someone reading the\ncode.\nimgs = torch.randn(32, 3, 128, 128)\nnamed_imgs = imgs.refine_names('N', 'C', 'H', 'W')\nflat_imgs = imgs.view(32, -1)\nnamed_flat_imgs = named_imgs.flatten(['C', 'H', 'W'], 'features')\nnamed_flat_imgs.names\n ('N', 'features')\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "\n\n\nnamed_flat_imgs.names\n ('N', 'features')\nunflattened_imgs = imgs.view(32, 3, 128, 128)\nunflattened_named_imgs = named_flat_imgs.unflatten(\n 'features', [('C', 3), ('H', 128), ('W', 128)])\nAutograd support\n================\nAutograd currently supports named tensors in a limited manner:\nautograd ignores names on all tensors. Gradient computation is still\ncorrect but we lose the safety that names give us.\nx = torch.randn(3, names=('D',))\nweight = torch.randn(3, names=('D',), requires_grad=True)\nloss = (x - weight).abs()\ngrad_loss = torch.randn(3)\nloss.backward(grad_loss)\nweight.grad # Unnamed for now. Will be named in the future\n tensor([-1.8107, -0.6357, 0.0783])\nweight.grad.zero_()\ngrad_loss = grad_loss.refine_names('C')\nloss = (x - weight).abs()\n # Ideally we'd check that the names of loss and grad_loss match but we don't yet.\nloss.backward(grad_loss)\nweight.grad\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "\n\n\nloss.backward(grad_loss)\nweight.grad\n tensor([-1.8107, -0.6357, 0.0783])\nCurrently supported operations and subsystems\n=============================================\nOperators\n\n\n\n\nSee Named Tensors operator coverage for a full list of the supported\ntorch and tensor operations. We do not yet support the following that\nis not covered by the link:\n* indexing, advanced indexing.\nFor \"torch.nn.functional\" operators, we support the following:\n* \"torch.nn.functional.relu()\"\n* \"torch.nn.functional.softmax()\"\n* \"torch.nn.functional.log_softmax()\"\n* \"torch.nn.functional.tanh()\"\n* \"torch.nn.functional.sigmoid()\"\n* \"torch.nn.functional.dropout()\"\nSubsystems\n\nAutograd is supported, see Autograd support. Because gradients are\ncurrently unnamed, optimizers may work but are untested.\nNN modules are currently unsupported. This can lead to the following\nwhen calling modules with named tensor inputs:\n* NN module parameters are unnamed, so outputs may be partially named.", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "\nNN module forward passes have code that doesn't support named tensors\n and will error out appropriately.\nWe also do not support the following subsystems, though some may work\nout of the box:\n* distributions\n* serialization (\"torch.load()\", \"torch.save()\")\n* multiprocessing\n* JIT\n* distributed\n* ONNX\nIf any of these would help your use case, please search if an issue\nhas already been filed and if not, file one.\nNamed tensor API reference\n==========================\nIn this section please find the documentation for named tensor\nspecific APIs. For a comprehensive reference for how names are\npropagated through other PyTorch operators, see Named Tensors operator\ncoverage.\nclass torch.Tensor\n names\n Stores names for each of this tensor's dimensions.\n \"names[idx]\" corresponds to the name of tensor dimension \"idx\".\n Names are either a string if the dimension is named or \"None\" if\n the dimension is unnamed.\n Dimension names may contain characters or underscore.\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "Furthermore, a dimension name must be a valid Python variable\n name (i.e., does not start with underscore).\n Tensors may not have two named dimensions with the same name.\n Warning:\n The named tensor API is experimental and subject to change.\n rename(names, rename_map)\n Renames dimension names of \"self\".\n There are two main usages:\n \"self.rename(rename_map)\" returns a view on tensor that has\n dims renamed as specified in the mapping \"rename_map\".\n \"self.rename(names)\" returns a view on tensor, renaming all\n dimensions positionally using \"names\". Use \"self.rename(None)\"\n to drop names on a tensor.\n One cannot specify both positional args \"names\" and keyword args\n \"rename_map\".\n Examples:\n >>> imgs = torch.rand(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))\n >>> renamed_imgs = imgs.rename(N='batch', C='channels')\n >>> renamed_imgs.names\n ('batch', 'channels', 'H', 'W')", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "('batch', 'channels', 'H', 'W')\n >>> renamed_imgs = imgs.rename(None)\n >>> renamed_imgs.names\n (None, None, None, None)\n >>> renamed_imgs = imgs.rename('batch', 'channel', 'height', 'width')\n >>> renamed_imgs.names\n ('batch', 'channel', 'height', 'width')\n Warning:\n The named tensor API is experimental and subject to change.\n rename_(names, rename_map)\n In-place version of \"rename()\".\n refine_names(names)\n Refines the dimension names of \"self\" according to \"names\".\n Refining is a special case of renaming that \"lifts\" unnamed\n dimensions. A \"None\" dim can be refined to have any name; a\n named dim can only be refined to have the same name.\n Because named tensors can coexist with unnamed tensors, refining\n names gives a nice way to write named-tensor-aware code that\n works with both named and unnamed tensors.\n \"names\" may contain up to one Ellipsis (\"...\"). The Ellipsis is", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "expanded greedily; it is expanded in-place to fill \"names\" to\n the same length as \"self.dim()\" using names from the\n corresponding indices of \"self.names\".\n Python 2 does not support Ellipsis but one may use a string\n literal instead (\"'...'\").\n Parameters:\n names (iterable of str) -- The desired names of the\n output tensor. May contain up to one Ellipsis.\n Examples:\n >>> imgs = torch.randn(32, 3, 128, 128)\n >>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W')\n >>> named_imgs.names\n ('N', 'C', 'H', 'W')\n >>> tensor = torch.randn(2, 3, 5, 7, 11)\n >>> tensor = tensor.refine_names('A', ..., 'B', 'C')\n >>> tensor.names\n ('A', None, None, 'B', 'C')\n Warning:\n The named tensor API is experimental and subject to change.\n align_as(other) -> Tensor\n Permutes the dimensions of the \"self\" tensor to match the\n dimension order in the \"other\" tensor, adding size-one dims for", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "any new names.\n This operation is useful for explicit broadcasting by names (see\n examples).\n All of the dims of \"self\" must be named in order to use this\n method. The resulting tensor is a view on the original tensor.\n All dimension names of \"self\" must be present in \"other.names\".\n \"other\" may contain named dimensions that are not in\n \"self.names\"; the output tensor has a size-one dimension for\n each of those new names.\n To align a tensor to a specific order, use \"align_to()\".\n Examples:\n # Example 1: Applying a mask\n >>> mask = torch.randint(2, [127, 128], dtype=torch.bool).refine_names('W', 'H')\n >>> imgs = torch.randn(32, 128, 127, 3, names=('N', 'H', 'W', 'C'))\n >>> imgs.masked_fill_(mask.align_as(imgs), 0)\n # Example 2: Applying a per-channel-scale\n >>> def scale_channels(input, scale):\n >>> scale = scale.refine_names('C')\n >>> return input * scale.align_as(input)", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "\n\n\nnum_channels = 3\n >>> scale = torch.randn(num_channels, names=('C',))\n >>> imgs = torch.rand(32, 128, 128, num_channels, names=('N', 'H', 'W', 'C'))\n >>> more_imgs = torch.rand(32, num_channels, 128, 128, names=('N', 'C', 'H', 'W'))\n >>> videos = torch.randn(3, num_channels, 128, 128, 128, names=('N', 'C', 'H', 'W', 'D'))\n # scale_channels is agnostic to the dimension order of the input\n >>> scale_channels(imgs, scale)\n >>> scale_channels(more_imgs, scale)\n >>> scale_channels(videos, scale)\n Warning:\n The named tensor API is experimental and subject to change.\n align_to(*names)\n Permutes the dimensions of the \"self\" tensor to match the order\n specified in \"names\", adding size-one dims for any new names.\n All of the dims of \"self\" must be named in order to use this\n method. The resulting tensor is a view on the original tensor.\n All dimension names of \"self\" must be present in \"names\".\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "\"names\" may contain additional names that are not in\n \"self.names\"; the output tensor has a size-one dimension for\n each of those new names.\n \"names\" may contain up to one Ellipsis (\"...\"). The Ellipsis is\n expanded to be equal to all dimension names of \"self\" that are\n not mentioned in \"names\", in the order that they appear in\n \"self\".\n Python 2 does not support Ellipsis but one may use a string\n literal instead (\"'...'\").\n Parameters:\n names (iterable of str) -- The desired dimension\n ordering of the output tensor. May contain up to one Ellipsis\n that is expanded to all unmentioned dim names of \"self\".\n Examples:\n >>> tensor = torch.randn(2, 2, 2, 2, 2, 2)\n >>> named_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')\n # Move the F and E dims to the front while keeping the rest in order\n >>> named_tensor.align_to('F', 'E', ...)\n Warning:", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "Warning:\n The named tensor API is experimental and subject to change.\n flatten(dims, out_dim) -> Tensor\n Flattens \"dims\" into a single dimension with name \"out_dim\".\n All of dims must be consecutive in order in the \"self\" tensor,\n but not necessary contiguous in memory.\n Examples:\n >>> imgs = torch.randn(32, 3, 128, 128, names=('N', 'C', 'H', 'W'))\n >>> flat_imgs = imgs.flatten(['C', 'H', 'W'], 'features')\n >>> flat_imgs.names, flat_imgs.shape\n (('N', 'features'), torch.Size([32, 49152]))\n Warning:\n The named tensor API is experimental and subject to change.", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"}
{"text": "torch.futuresThis package provides a \"Future\" type that encapsulates an\nasynchronous execution and a set of utility functions to simplify\noperations on \"Future\" objects. Currently, the \"Future\" type is\nprimarily used by the Distributed RPC Framework.\nclass torch.futures.Future(*, devices=None)\n Wrapper around a \"torch._C.Future\" which encapsulates an\n asynchronous execution of a callable, e.g. \"rpc_async()\". It also\n exposes a set of APIs to add callback functions and set results.\n Warning:\n GPU support is a beta feature, subject to changes.\n add_done_callback(callback)\n Append the given callback function to this \"Future\", which will\n be run when the \"Future\" is completed. Multiple callbacks can\n be added to the same \"Future\", but the order in which they will\n be executed cannot be guaranteed. The callback must take one\n argument, which is the reference to this \"Future\". The callback\n function can use the \"value()\" method to get the value. Note", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "that if this \"Future\" is already completed, the given callback\n will be run inline.\n We recommend that you use the \"then()\" method as it provides a\n way to synchronize after your callback has completed.\n \"add_done_callback\" can be cheaper if your callback does not\n return anything. But both \"then()\" and \"add_done_callback\" use\n the same callback registration API under the hood.\n With respect to GPU tensors, this method behaves in the same way\n as \"then()\".\n Parameters:\n callback (\"Future\") -- a \"Callable\" that takes in one\n argument, which is the reference to this \"Future\".\n Note:\n Note that if the callback function throws, either through the\n original future being completed with an exception and calling\n \"fut.wait()\", or through other code in the callback, error\n handling must be carefully taken care of. For example, if this\n callback later completes additional futures, those futures are", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "not marked as completed with an error and the user is\n responsible for handling completion/waiting on those futures\n independently.\n Example::\n >>> def callback(fut):\n ... print(\"This will run after the future has finished.\")\n ... print(fut.wait())\n >>> fut = torch.futures.Future()\n >>> fut.add_done_callback(callback)\n >>> fut.set_result(5)\n This will run after the future has finished.\n 5\n done()\n Return \"True\" if this \"Future\" is done. A \"Future\" is done if it\n has a result or an exception.\n If the value contains tensors that reside on GPUs,\n \"Future.done()\" will return \"True\" even if the asynchronous\n kernels that are populating those tensors haven't yet completed\n running on the device, because at such stage the result is\n already usable, provided one performs the appropriate\n synchronizations (see \"wait()\").\n Return type:\n bool\n set_exception(result)", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "bool\n set_exception(result)\n Set an exception for this \"Future\", which will mark this\n \"Future\" as completed with an error and trigger all attached\n callbacks. Note that when calling wait()/value() on this\n \"Future\", the exception set here will be raised inline.\n Parameters:\n result (BaseException) -- the exception for this\n \"Future\".\n Example::\n >>> fut = torch.futures.Future()\n >>> fut.set_exception(ValueError(\"foo\"))\n >>> fut.wait()\n Traceback (most recent call last):\n ...\n ValueError: foo\n set_result(result)\n Set the result for this \"Future\", which will mark this \"Future\"\n as completed and trigger all attached callbacks. Note that a\n \"Future\" cannot be marked completed twice.\n If the result contains tensors that reside on GPUs, this method\n can be called even if the asynchronous kernels that are\n populating those tensors haven't yet completed running on the", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "device, provided that the streams on which those kernels were\n enqueued are set as the current ones when this method is called.\n Put simply, it's safe to call this method immediately after\n launching those kernels, without any additional synchronization,\n as long as one doesn't change streams in between. This method\n will record events on all the relevant current streams and will\n use them to ensure proper scheduling for all the consumers of\n this \"Future\".\n Parameters:\n result (object) -- the result object of this \"Future\".\n Example::\n >>> import threading\n >>> import time\n >>> def slow_set_future(fut, value):\n ... time.sleep(0.5)\n ... fut.set_result(value)\n >>> fut = torch.futures.Future()\n >>> t = threading.Thread(\n ... target=slow_set_future,\n ... args=(fut, torch.ones(2) * 3)\n ... )\n >>> t.start()\n >>> print(fut.wait())", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "\n\n\nprint(fut.wait())\n tensor([3., 3.])\n >>> t.join()\n then(callback)\n Append the given callback function to this \"Future\", which will\n be run when the \"Future\" is completed. Multiple callbacks can\n be added to the same \"Future\", but the order in which they will\n be executed cannot be guaranteed (to enforce a certain order\n consider chaining: \"fut.then(cb1).then(cb2)\"). The callback must\n take one argument, which is the reference to this \"Future\". The\n callback function can use the \"value()\" method to get the value.\n Note that if this \"Future\" is already completed, the given\n callback will be run immediately inline.\n If the \"Future\"'s value contains tensors that reside on GPUs,\n the callback might be invoked while the async kernels that are\n populating those tensors haven't yet finished executing on the\n device. However, the callback will be invoked with some\n\n\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "dedicated streams set as current (fetched from a global pool)\n which will be synchronized with those kernels. Hence any\n operation performed by the callback on these tensors will be\n scheduled on the device after the kernels complete. In other\n words, as long as the callback doesn't switch streams, it can\n safely manipulate the result without any additional\n synchronization. This is similar to the non-blocking behavior of\n \"wait()\".\n Similarly, if the callback returns a value that contains tensors\n that reside on a GPU, it can do so even if the kernels that are\n producing these tensors are still running on the device, as long\n as the callback didn't change streams during its execution. If\n one wants to change streams, one must be careful to re-\n synchronize them with the original streams, that is, those that\n were current when the callback was invoked.\n Parameters:", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "Parameters:\n callback (\"Callable\") -- a \"Callable\" that takes this\n \"Future\" as the only argument.\n Returns:\n A new \"Future\" object that holds the return value of the\n \"callback\" and will be marked as completed when the given\n \"callback\" finishes.\n Return type:\n Future[S]\n Note:\n Note that if the callback function throws, either through the\n original future being completed with an exception and calling\n \"fut.wait()\", or through other code in the callback, the\n future returned by \"then\" will be marked appropriately with\n the encountered error. However, if this callback later\n completes additional futures, those futures are not marked as\n completed with an error and the user is responsible for\n handling completion/waiting on those futures independently.\n Example::\n >>> def callback(fut):\n ... print(f\"RPC return value is {fut.wait()}.\")", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "\n\n\nfut = torch.futures.Future()\n >>> # The inserted callback will print the return value when\n >>> # receiving the response from \"worker1\"\n >>> cb_fut = fut.then(callback)\n >>> chain_cb_fut = cb_fut.then(\n ... lambda x : print(f\"Chained cb done. {x.wait()}\")\n ... )\n >>> fut.set_result(5)\n RPC return value is 5.\n Chained cb done. None\n value()\n Obtain the value of an already-completed future.\n This method should only be called after a call to \"wait()\" has\n completed, or inside a callback function passed to \"then()\". In\n other cases this \"Future\" may not yet hold a value and calling\n \"value()\" could fail.\n If the value contains tensors that reside on GPUs, then this\n method will not perform any additional synchronization. This\n should be done beforehand, separately, through a call to\n \"wait()\" (except within callbacks, for which it's already being\n\n\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "taken care of by \"then()\").\n Returns:\n The value held by this \"Future\". If the function (callback or\n RPC) creating the value has thrown an error, this \"value()\"\n method will also throw an error.\n Return type:\n T\n wait()\n Block until the value of this \"Future\" is ready.\n If the value contains tensors that reside on GPUs, then an\n additional synchronization is performed with the kernels\n (executing on the device) which may be asynchronously populating\n those tensors. Such sync is non-blocking, which means that\n \"wait()\" will insert the necessary instructions in the current\n streams to ensure that further operations enqueued on those\n streams will be properly scheduled after the async kernels but,\n once that is done, \"wait()\" will return, even if those kernels\n are still running. No further synchronization is required when\n accessing and using the values, as long as one doesn't change\n streams.", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "streams.\n Returns:\n The value held by this \"Future\". If the function (callback or\n RPC) creating the value has thrown an error, this \"wait\"\n method will also throw an error.\n Return type:\n T\ntorch.futures.collect_all(futures)\n Collects the provided \"Future\" objects into a single combined\n \"Future\" that is completed when all of the sub-futures are\n completed.\n Parameters:\n futures (list) -- a list of \"Future\" objects.\n Returns:\n Returns a \"Future\" object to a list of the passed in Futures.\n Return type:\n Future[List[Future]]\n Example::\n >>> fut0 = torch.futures.Future()\n >>> fut1 = torch.futures.Future()\n >>> fut = torch.futures.collect_all([fut0, fut1])\n >>> fut0.set_result(0)\n >>> fut1.set_result(1)\n >>> fut_list = fut.wait()\n >>> print(f\"fut0 result = {fut_list[0].wait()}\")\n fut0 result = 0\n >>> print(f\"fut1 result = {fut_list[1].wait()}\")\n fut1 result = 1", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "fut1 result = 1\ntorch.futures.wait_all(futures)\n Waits for all provided futures to be complete, and returns the list\n of completed values. If any of the futures encounters an error, the\n method will exit early and report the error not waiting for other\n futures to complete.\n Parameters:\n futures (list) -- a list of \"Future\" object.\n Returns:\n A list of the completed \"Future\" results. This method will throw\n an error if \"wait\" on any \"Future\" throws.\n Return type:\n List", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"}
{"text": "torch.config__torch.__config.show()\n Return a human-readable string with descriptions of the\n configuration of PyTorch.\ntorch.config.parallel_info()\n Returns detailed string with parallelization settings", "source": "https://pytorch.org/docs/stable/config_mod.html", "category": "pytorch docs"}
{"text": "torch.profilerOverview\nPyTorch Profiler is a tool that allows the collection of performance\nmetrics during training and inference. Profiler's context manager API\ncan be used to better understand what model operators are the most\nexpensive, examine their input shapes and stack traces, study device\nkernel activity and visualize the execution trace.\nNote:\n An earlier version of the API in \"torch.autograd\" module is\n considered legacy and will be deprecated.\nAPI Reference\n=============\nclass torch.profiler._KinetoProfile(, activities=None, record_shapes=False, profile_memory=False, with_stack=False, with_flops=False, with_modules=False, experimental_config=None)\n Low-level profiler wrap the autograd profile\n Parameters:\n * activities (iterable*) -- list of activity groups (CPU,\n CUDA) to use in profiling, supported values:\n \"torch.profiler.ProfilerActivity.CPU\",\n \"torch.profiler.ProfilerActivity.CUDA\". Default value:", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "ProfilerActivity.CPU and (when available)\n ProfilerActivity.CUDA.\n * record_shapes (bool) -- save information about\n operator's input shapes.\n * profile_memory (bool) -- track tensor memory\n allocation/deallocation.\n * with_stack (bool) -- record source information (file and\n line number) for the ops.\n * with_flops (bool) -- use formula to estimate the FLOPS\n of specific operators (matrix multiplication and 2D\n convolution).\n * with_modules (bool) -- record module hierarchy\n (including function names) corresponding to the callstack of\n the op. e.g. If module A's forward call's module B's forward\n which contains an aten::add op, then aten::add's module\n hierarchy is A.B Note that this support exist, at the moment,\n only for TorchScript models and not eager mode models.\n * experimental_config (_ExperimentalConfig) -- A set of", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "experimental options used by profiler libraries like Kineto.\n Note, backward compatibility is not guaranteed.\n Note:\n This API is experimental and subject to change in the\n future.Enabling shape and stack tracing results in additional\n overhead. When record_shapes=True is specified, profiler will\n temporarily hold references to the tensors; that may further\n prevent certain optimizations that depend on the reference count\n and introduce extra tensor copies.\n add_metadata(key, value)\n Adds a user defined metadata with a string key and a string\n value into the trace file\n add_metadata_json(key, value)\n Adds a user defined metadata with a string key and a valid json\n value into the trace file\n events()\n Returns the list of unaggregated profiler events, to be used in\n the trace callback or after the profiling is finished\n export_chrome_trace(path)\n Exports the collected trace in Chrome JSON format.", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "export_stacks(path, metric='self_cpu_time_total')\n Save stack traces in a file in a format suitable for\n visualization.\n Parameters:\n * path (str) -- save stacks file to this location;\n * metric (str) -- metric to use: \"self_cpu_time_total\"\n or \"self_cuda_time_total\"\n Note:\n Example of using FlameGraph tool:\n * git clone https://github.com/brendangregg/FlameGraph\n * cd FlameGraph\n * ./flamegraph.pl --title \"CPU time\" --countname \"us.\"\n profiler.stacks > perf_viz.svg\n key_averages(group_by_input_shape=False, group_by_stack_n=0)\n Averages events, grouping them by operator name and (optionally)\n input shapes and stack.\n Note:\n To use shape/stack functionality make sure to set\n record_shapes/with_stack when creating profiler context\n manager.", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "manager.\nclass torch.profiler.profile(, activities=None, schedule=None, on_trace_ready=None, record_shapes=False, profile_memory=False, with_stack=False, with_flops=False, with_modules=False, experimental_config=None, use_cuda=None)\n Profiler context manager.\n Parameters:\n * activities (iterable) -- list of activity groups (CPU,\n CUDA) to use in profiling, supported values:\n \"torch.profiler.ProfilerActivity.CPU\",\n \"torch.profiler.ProfilerActivity.CUDA\". Default value:\n ProfilerActivity.CPU and (when available)\n ProfilerActivity.CUDA.\n * schedule (Callable) -- callable that takes step (int) as\n a single parameter and returns \"ProfilerAction\" value that\n specifies the profiler action to perform at each step.\n * on_trace_ready (Callable*) -- callable that is called at\n each step when \"schedule\" returns\n \"ProfilerAction.RECORD_AND_SAVE\" during the profiling.", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "\nrecord_shapes (bool) -- save information about\n operator's input shapes.\nprofile_memory (bool) -- track tensor memory\n allocation/deallocation.\nwith_stack (bool) -- record source information (file and\n line number) for the ops.\nwith_flops (bool) -- use formula to estimate the FLOPs\n (floating point operations) of specific operators (matrix\n multiplication and 2D convolution).\nwith_modules (bool) -- record module hierarchy\n (including function names) corresponding to the callstack of\n the op. e.g. If module A's forward call's module B's forward\n which contains an aten::add op, then aten::add's module\n hierarchy is A.B Note that this support exist, at the moment,\n only for TorchScript models and not eager mode models.\nexperimental_config (_ExperimentalConfig) -- A set of\n experimental options used for Kineto library features. Note,\n\n\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "backward compatibility is not guaranteed.\n * use_cuda (bool) --\n Deprecated since version 1.8.1: use \"activities\" instead.\n Note:\n Use \"schedule()\" to generate the callable schedule. Non-default\n schedules are useful when profiling long training jobs and allow\n the user to obtain multiple traces at the different iterations of\n the training process. The default schedule simply records all the\n events continuously for the duration of the context manager.\n Note:\n Use \"tensorboard_trace_handler()\" to generate result files for T\n ensorBoard:\"on_trace_ready=torch.profiler.tensorboard_trace_hand\n ler(dir_name)\"After profiling, result files can be found in the\n specified directory. Use the command:\"tensorboard --logdir\n dir_name\"to see the results in TensorBoard. For more information,\n see PyTorch Profiler TensorBoard Plugin\n Note:\n Enabling shape and stack tracing results in additional overhead.", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "When record_shapes=True is specified, profiler will temporarily\n hold references to the tensors; that may further prevent certain\n optimizations that depend on the reference count and introduce\n extra tensor copies.\n Examples:\n with torch.profiler.profile(\n activities=[\n torch.profiler.ProfilerActivity.CPU,\n torch.profiler.ProfilerActivity.CUDA,\n ]\n ) as p:\n code_to_profile()\n print(p.key_averages().table(\n sort_by=\"self_cuda_time_total\", row_limit=-1))\n Using the profiler's \"schedule\", \"on_trace_ready\" and \"step\"\n functions:\n # Non-default profiler schedule allows user to turn profiler on and off\n # on different iterations of the training loop;\n # trace_handler is called every time a new trace becomes available\n def trace_handler(prof):\n print(prof.key_averages().table(\n sort_by=\"self_cuda_time_total\", row_limit=-1))", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "prof.export_chrome_trace(\"/tmp/test_trace_\" + str(prof.step_num) + \".json\")\n with torch.profiler.profile(\n activities=[\n torch.profiler.ProfilerActivity.CPU,\n torch.profiler.ProfilerActivity.CUDA,\n ],\n # In this example with wait=1, warmup=1, active=2,\n # profiler will skip the first step/iteration,\n # start warming up on the second, record\n # the third and the forth iterations,\n # after which the trace will become available\n # and on_trace_ready (when set) is called;\n # the cycle repeats starting with the next step\n schedule=torch.profiler.schedule(\n wait=1,\n warmup=1,\n active=2),\n on_trace_ready=trace_handler\n # on_trace_ready=torch.profiler.tensorboard_trace_handler('./log')\n # used when outputting for tensorboard\n ) as p:\n for iter in range(N):\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "for iter in range(N):\n code_iteration_to_profile(iter)\n # send a signal to the profiler that the next iteration has started\n p.step()\n step()\n Signals the profiler that the next profiling step has started.\nclass torch.profiler.ProfilerAction(value)\n Profiler actions that can be taken at the specified intervals\nclass torch.profiler.ProfilerActivity\n Members:\n CPU\n CUDA\n property name\ntorch.profiler.schedule(*, wait, warmup, active, repeat=0, skip_first=0)\n Returns a callable that can be used as profiler \"schedule\"\n argument. The profiler will skip the first \"skip_first\" steps, then\n wait for \"wait\" steps, then do the warmup for the next \"warmup\"\n steps, then do the active recording for the next \"active\" steps and\n then repeat the cycle starting with \"wait\" steps. The optional\n number of cycles is specified with the \"repeat\" parameter, the zero\n value means that the cycles will continue until the profiling is", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "finished.\n Return type:\n Callable\ntorch.profiler.tensorboard_trace_handler(dir_name, worker_name=None, use_gzip=False)\n Outputs tracing files to directory of \"dir_name\", then that\n directory can be directly delivered to tensorboard as logdir.\n \"worker_name\" should be unique for each worker in distributed\n scenario, it will be set to '[hostname]_[pid]' by default.\nIntel Instrumentation and Tracing Technology APIs\n=================================================\ntorch.profiler.itt.is_available()\n Check if ITT feature is available or not\ntorch.profiler.itt.mark(msg)\n Describe an instantaneous event that occurred at some point.\n Parameters:\n msg (str) -- ASCII message to associate with the event.\ntorch.profiler.itt.range_push(msg)\n Pushes a range onto a stack of nested range span. Returns zero-\n based depth of the range that is started.\n Parameters:\n msg (str) -- ASCII message to associate with range\ntorch.profiler.itt.range_pop()", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "torch.profiler.itt.range_pop()\n Pops a range off of a stack of nested range spans. Returns the\n zero-based depth of the range that is ended.", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"}
{"text": "Distributed RPC FrameworkThe distributed RPC framework provides mechanisms for multi-machine\nmodel training through a set of primitives to allow for remote\ncommunication, and a higher-level API to automatically differentiate\nmodels split across several machines.\nWarning:\n APIs in the RPC package are stable. There are multiple ongoing work\n items to improve performance and error handling, which will ship in\n future releases.\nWarning:\n CUDA support was introduced in PyTorch 1.9 and is still a beta\n feature. Not all features of the RPC package are yet compatible with\n CUDA support and thus their use is discouraged. These unsupported\n features include: RRefs, JIT compatibility, dist autograd and dist\n optimizer, and profiling. These shortcomings will be addressed in\n future releases.\nNote:\n Please refer to PyTorch Distributed Overview for a brief\n introduction to all features related to distributed training.\nBasics\n======", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "Basics\nThe distributed RPC framework makes it easy to run functions remotely,\nsupports referencing remote objects without copying the real data\naround, and provides autograd and optimizer APIs to transparently run\nbackward and update parameters across RPC boundaries. These features\ncan be categorized into four sets of APIs.\n1. Remote Procedure Call (RPC) supports running a function on the\n specified destination worker with the given arguments and getting\n the return value back or creating a reference to the return value.\n There are three main RPC APIs: \"rpc_sync()\" (synchronous),\n \"rpc_async()\" (asynchronous), and \"remote()\" (asynchronous and\n returns a reference to the remote return value). Use the\n synchronous API if the user code cannot proceed without the return\n value. Otherwise, use the asynchronous API to get a future, and\n wait on the future when the return value is needed on the caller.\n The \"remote()\" API is useful when the requirement is to create", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "something remotely but never need to fetch it to the caller.\n Imagine the case that a driver process is setting up a parameter\n server and a trainer. The driver can create an embedding table on\n the parameter server and then share the reference to the embedding\n table with the trainer, but itself will never use the embedding\n table locally. In this case, \"rpc_sync()\" and \"rpc_async()\" are no\n longer appropriate, as they always imply that the return value will\n be returned to the caller immediately or in the future.\n2. Remote Reference (RRef) serves as a distributed shared pointer\n to a local or remote object. It can be shared with other workers\n and reference counting will be handled transparently. Each RRef\n only has one owner and the object only lives on that owner. Non-\n owner workers holding RRefs can get copies of the object from the\n owner by explicitly requesting it. This is useful when a worker\n needs to access some data object, but itself is neither the creator", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "(the caller of \"remote()\") or the owner of the object. The\n distributed optimizer, as we will discuss below, is one example of\n such use cases.\n3. Distributed Autograd stitches together local autograd engines\n on all the workers involved in the forward pass, and automatically\n reach out to them during the backward pass to compute gradients.\n This is especially helpful if the forward pass needs to span\n multiple machines when conducting, e.g., distributed model parallel\n training, parameter-server training, etc. With this feature, user\n code no longer needs to worry about how to send gradients across\n RPC boundaries and in which order should the local autograd engines\n be launched, which can become quite complicated where there are\n nested and inter-dependent RPC calls in the forward pass.\n4. Distributed Optimizer's constructor takes a \"Optimizer()\"\n (e.g., \"SGD()\", \"Adagrad()\", etc.) and a list of parameter RRefs,", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "creates an \"Optimizer()\" instance on each distinct RRef owner, and\n updates parameters accordingly when running \"step()\". When you have\n distributed forward and backward passes, parameters and gradients\n will be scattered across multiple workers, and hence it requires an\n optimizer on each of the involved workers. Distributed Optimizer\n wraps all those local optimizers into one, and provides a concise\n constructor and \"step()\" API.\nRPC\n===\nBefore using RPC and distributed autograd primitives, initialization\nmust take place. To initialize the RPC framework we need to use\n\"init_rpc()\" which would initialize the RPC framework, RRef framework\nand distributed autograd.\ntorch.distributed.rpc.init_rpc(name, backend=None, rank=- 1, world_size=None, rpc_backend_options=None)\n Initializes RPC primitives such as the local RPC agent and\n distributed autograd, which immediately makes the current process\n ready to send and receive RPCs.\n Parameters:", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "ready to send and receive RPCs.\n Parameters:\n * name (str) -- a globally unique name of this node.\n (e.g., \"Trainer3\", \"ParameterServer2\", \"Master\", \"Worker1\")\n The name may only contain numbers, letters, underscores,\n colons, and/or dashes, and must be shorter than 128 characters.\n * backend (BackendType, optional) -- The type of RPC\n backend implementation. The currently supported value is\n \"BackendType.TENSORPIPE\" (the default). See Backends for more\n information.\n * rank (int) -- a globally unique id/rank of this node.\n * world_size (int) -- The number of workers in the group.\n * rpc_backend_options (RpcBackendOptions, optional) --\n The options passed to the RpcAgent constructor. It must be an\n agent-specific subclass of \"RpcBackendOptions\" and contain\n agent-specific initialization configurations. By default, for\n all agents, it sets the default timeout to 60 seconds and", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "performs the rendezvous with an underlying process group\n initialized using \"init_method = \"env://\"\", meaning that\n environment variables \"MASTER_ADDR\" and \"MASTER_PORT\" need to\n be set properly. See Backends for more information and find\n which options are available.\nThe following APIs allow users to remotely execute functions as well\nas create references (RRefs) to remote data objects. In these APIs,\nwhen passing a \"Tensor\" as an argument or a return value, the\ndestination worker will try to create a \"Tensor\" with the same meta\n(i.e., shape, stride, etc.). We intentionally disallow transmitting\nCUDA tensors because it might crash if the device lists on source and\ndestination workers do not match. In such cases, applications can\nalways explicitly move the input tensors to CPU on the caller and move\nit to the desired devices on the callee if necessary.\nWarning:\n TorchScript support in RPC is a prototype feature and subject to", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "change. Since v1.5.0, \"torch.distributed.rpc\" supports calling\n TorchScript functions as RPC target functions, and this will help\n improve parallelism on the callee side as executing TorchScript\n functions does not require GIL.\ntorch.distributed.rpc.rpc_sync(to, func, args=None, kwargs=None, timeout=-1.0)\n Make a blocking RPC call to run function \"func\" on worker \"to\". RPC\n messages are sent and received in parallel to execution of Python\n code. This method is thread-safe.\n Parameters:\n * to (str or WorkerInfo or int) --\n name/rank/\"WorkerInfo\" of the destination worker.\n * func (Callable) -- a callable function, such as Python\n callables, builtin operators (e.g. \"add()\") and annotated\n TorchScript functions.\n * args (tuple) -- the argument tuple for the \"func\"\n invocation.\n * kwargs (dict) -- is a dictionary of keyword arguments\n for the \"func\" invocation.", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "for the \"func\" invocation.\n * timeout (float, optional) -- timeout in seconds to\n use for this RPC. If the RPC does not complete in this amount\n of time, an exception indicating it has timed out will be\n raised. A value of 0 indicates an infinite timeout, i.e. a\n timeout error will never be raised. If not provided, the\n default value set during initialization or with\n \"_set_rpc_timeout\" is used.\n Returns:\n Returns the result of running \"func\" with \"args\" and \"kwargs\".\n Example::\n Make sure that \"MASTER_ADDR\" and \"MASTER_PORT\" are set properly\n on both workers. Refer to \"init_process_group()\" API for more\n details. For example,\n export MASTER_ADDR=localhost export MASTER_PORT=5678\n Then run the following code in two different processes:\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> ret = rpc.rpc_sync(\"worker1\", torch.add, args=(torch.ones(2), 3))\n >>> rpc.shutdown()\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n Below is an example of running a TorchScript function using RPC.\n >>> # On both workers:\n >>> @torch.jit.script\n >>> def my_script_add(t1, t2):\n >>> return torch.add(t1, t2)\n >>> # On worker 0:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> ret = rpc.rpc_sync(\"worker1\", my_script_add, args=(torch.ones(2), 3))\n >>> rpc.shutdown()\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\ntorch.distributed.rpc.rpc_async(to, func, args=None, kwargs=None, timeout=-1.0)\n Make a non-blocking RPC call to run function \"func\" on worker \"to\".", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
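A note later in this document (under Backends) stresses that the RPC framework never retries failed "rpc_sync()", "rpc_async()" or "remote()" calls; retrying with reasonable backoff is the application's responsibility. A minimal sketch of that advice — `retry_with_backoff` is a hypothetical helper, not a PyTorch API, and is only safe when the wrapped call is idempotent:

```python
import time

def retry_with_backoff(fn, attempts=3, base_delay=0.1, retry_on=(RuntimeError,)):
    """Call fn(); on a listed exception, sleep and retry with a doubled delay.

    Only appropriate when fn is idempotent -- the RPC framework cannot know
    whether a call is safe to retry, so the application must decide.
    """
    delay = base_delay
    for i in range(attempts):
        try:
            return fn()
        except retry_on:
            if i == attempts - 1:
                raise          # out of attempts: surface the last error
            time.sleep(delay)  # back off so the network is not overwhelmed
            delay *= 2
```

Assuming an initialized RPC framework, a caller might then write `retry_with_backoff(lambda: rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 3)))`.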
{"text": "RPC messages are sent and received in parallel to execution of\n Python code. This method is thread-safe. This method will\n immediately return a \"Future\" that can be awaited on.\n Parameters:\n * to (str or WorkerInfo or int) --\n name/rank/\"WorkerInfo\" of the destination worker.\n * func (Callable) -- a callable function, such as Python\n callables, builtin operators (e.g. \"add()\") and annotated\n TorchScript functions.\n * args (tuple) -- the argument tuple for the \"func\"\n invocation.\n * kwargs (dict) -- is a dictionary of keyword arguments\n for the \"func\" invocation.\n * timeout (float, optional) -- timeout in seconds to\n use for this RPC. If the RPC does not complete in this amount\n of time, an exception indicating it has timed out will be\n raised. A value of 0 indicates an infinite timeout, i.e. a\n timeout error will never be raised. If not provided, the", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "default value set during initialization or with\n \"_set_rpc_timeout\" is used.\n Returns:\n Returns a \"Future\" object that can be waited on. When completed,\n the return value of \"func\" on \"args\" and \"kwargs\" can be\n retrieved from the \"Future\" object.\n Warning:\n Using GPU tensors as arguments or return values of \"func\" is not\n supported since we don't support sending GPU tensors over the\n wire. You need to explicitly copy GPU tensors to CPU before using\n them as arguments or return values of \"func\".\n Warning:\n The \"rpc_async\" API does not copy storages of argument tensors\n until sending them over the wire, which could be done by a\n different thread depending on the RPC backend type. The caller\n should make sure that the contents of those tensors stay intact\n until the returned \"Future\" completes.\n Example::\n Make sure that \"MASTER_ADDR\" and \"MASTER_PORT\" are set properly", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "on both workers. Refer to \"init_process_group()\" API for more\n details. For example,\n export MASTER_ADDR=localhost export MASTER_PORT=5678\n Then run the following code in two different processes:\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> fut1 = rpc.rpc_async(\"worker1\", torch.add, args=(torch.ones(2), 3))\n >>> fut2 = rpc.rpc_async(\"worker1\", min, args=(1, 2))\n >>> result = fut1.wait() + fut2.wait()\n >>> rpc.shutdown()\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n Below is an example of running a TorchScript function using RPC.\n >>> # On both workers:\n >>> @torch.jit.script\n >>> def my_script_add(t1, t2):\n >>> return torch.add(t1, t2)\n >>> # On worker 0:\n >>> import torch.distributed.rpc as rpc", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> fut = rpc.rpc_async(\"worker1\", my_script_add, args=(torch.ones(2), 3))\n >>> ret = fut.wait()\n >>> rpc.shutdown()\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\ntorch.distributed.rpc.remote(to, func, args=None, kwargs=None, timeout=-1.0)\n Make a remote call to run \"func\" on worker \"to\" and return an\n \"RRef\" to the result value immediately. Worker \"to\" will be the\n owner of the returned \"RRef\", and the worker calling \"remote\" is a\n user. The owner manages the global reference count of its \"RRef\",\n and the owner \"RRef\" is only destructed when globally there are no\n living references to it.\n Parameters:\n * to (str or WorkerInfo or int) --\n name/rank/\"WorkerInfo\" of the destination worker.", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "* func (Callable) -- a callable function, such as Python\n callables, builtin operators (e.g. \"add()\") and annotated\n TorchScript functions.\n * args (tuple) -- the argument tuple for the \"func\"\n invocation.\n * kwargs (dict) -- is a dictionary of keyword arguments\n for the \"func\" invocation.\n * timeout (float, optional) -- timeout in seconds for\n this remote call. If the creation of this \"RRef\" on worker\n \"to\" is not successfully processed on this worker within this\n timeout, then the next time there is an attempt to use the\n RRef (such as \"to_here()\"), a timeout will be raised\n indicating this failure. A value of 0 indicates an infinite\n timeout, i.e. a timeout error will never be raised. If not\n provided, the default value set during initialization or with\n \"_set_rpc_timeout\" is used.\n Returns:\n A user \"RRef\" instance to the result value. Use the blocking API", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "\"torch.distributed.rpc.RRef.to_here()\" to retrieve the result\n value locally.\n Warning:\n The \"remote\" API does not copy storages of argument tensors until\n sending them over the wire, which could be done by a different\n thread depending on the RPC backend type. The caller should make\n sure that the contents of those tensors stay intact until the\n returned RRef is confirmed by the owner, which can be checked\n using the \"torch.distributed.rpc.RRef.confirmed_by_owner()\" API.\n Warning:\n Errors such as timeouts for the \"remote\" API are handled on a\n best-effort basis. This means that when remote calls initiated by\n \"remote\" fail, such as with a timeout error, we take a best-\n effort approach to error handling. This means that errors are\n handled and set on the resulting RRef on an asynchronous basis.\n If the RRef has not been used by the application before this\n handling (such as \"to_here\" or fork call), then future uses of", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "the \"RRef\" will appropriately raise errors. However, it is\n possible that the user application will use the \"RRef\" before the\n errors are handled. In this case, errors may not be raised as\n they have not yet been handled.\n Example:\n Make sure that \"MASTER_ADDR\" and \"MASTER_PORT\" are set properly\n on both workers. Refer to \"init_process_group()\" API for more\n details. For example,\n export MASTER_ADDR=localhost\n export MASTER_PORT=5678\n Then run the following code in two different processes:\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> rref1 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 3))\n >>> rref2 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 1))\n >>> x = rref1.to_here() + rref2.to_here()\n >>> rpc.shutdown()\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n Below is an example of running a TorchScript function using RPC.\n >>> # On both workers:\n >>> @torch.jit.script\n >>> def my_script_add(t1, t2):\n >>> return torch.add(t1, t2)\n >>> # On worker 0:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> rref = rpc.remote(\"worker1\", my_script_add, args=(torch.ones(2), 3))\n >>> rref.to_here()\n >>> rpc.shutdown()\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\ntorch.distributed.rpc.get_worker_info(worker_name=None)\n Get \"WorkerInfo\" of a given worker name. Use this \"WorkerInfo\" to\n avoid passing an expensive string on every invocation.\n Parameters:\n worker_name (str) -- the string name of a worker. If", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "\"None\", return the id of the current worker. (default\n \"None\")\n Returns:\n \"WorkerInfo\" instance for the given \"worker_name\" or\n \"WorkerInfo\" of the current worker if \"worker_name\" is \"None\".\ntorch.distributed.rpc.shutdown(graceful=True, timeout=0)\n Perform a shutdown of the RPC agent, and then destroy the RPC\n agent. This stops the local agent from accepting outstanding\n requests, and shuts down the RPC framework by terminating all RPC\n threads. If \"graceful=True\", this will block until all local and\n remote RPC processes reach this method and wait for all outstanding\n work to complete. Otherwise, if \"graceful=False\", this is a local\n shutdown, and it does not wait for other RPC processes to reach\n this method.\n Warning:\n For \"Future\" objects returned by \"rpc_async()\", \"future.wait()\"\n should not be called after \"shutdown()\".\n Parameters:\n graceful (bool) -- Whether to do a graceful shutdown or", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "not. If True, this will 1) wait until there are no pending system\n messages for \"UserRRefs\" and delete them; 2) block until all\n local and remote RPC processes have reached this method and wait\n for all outstanding work to complete.\n Example::\n Make sure that \"MASTER_ADDR\" and \"MASTER_PORT\" are set properly\n on both workers. Refer to \"init_process_group()\" API for more\n details. For example,\n export MASTER_ADDR=localhost\n export MASTER_PORT=5678\n Then run the following code in two different processes:\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> # do some work\n >>> result = rpc.rpc_sync(\"worker1\", torch.add, args=(torch.ones(1), 1))\n >>> # ready to shutdown\n >>> rpc.shutdown()\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> # wait for worker 0 to finish work, and then shutdown.\n >>> rpc.shutdown()\nclass torch.distributed.rpc.WorkerInfo\n A structure that encapsulates information of a worker in the\n system. Contains the name and ID of the worker. This class is not\n meant to be constructed directly, rather, an instance can be\n retrieved through \"get_worker_info()\" and the result can be passed\n in to functions such as \"rpc_sync()\", \"rpc_async()\", \"remote()\" to\n avoid copying a string on every invocation.\n property id\n Globally unique id to identify the worker.\n property name\n The name of the worker.\nThe RPC package also provides decorators which allow applications to\nspecify how a given function should be treated on the callee side.\ntorch.distributed.rpc.functions.async_execution(fn)\n A decorator for a function indicating that the return value of the\n function is guaranteed to be a \"Future\" object and this function\n can run asynchronously on the RPC callee. More specifically, the", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "callee extracts the \"Future\" returned by the wrapped function and\n installs subsequent processing steps as a callback to that\n \"Future\". The installed callback will read the value from the\n \"Future\" when completed and send the value back as the RPC\n response. That also means the returned \"Future\" only exists on the\n callee side and is never sent through RPC. This decorator is useful\n when the wrapped function's (\"fn\") execution needs to pause and\n resume due to, e.g., containing \"rpc_async()\" or waiting for other\n signals.\n Note:\n To enable asynchronous execution, applications must pass the\n function object returned by this decorator to RPC APIs. If RPC\n detects attributes installed by this decorator, it knows that\n this function returns a \"Future\" object and will handle that\n accordingly. However, this does not mean this decorator has to be\n the outermost one when defining a function. For example, when combined\n with \"@staticmethod\" or \"@classmethod\",", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "with \"@staticmethod\" or \"@classmethod\",\n \"@rpc.functions.async_execution\" needs to be the inner decorator\n to allow the target function to be recognized as a static or class\n function. This target function can still execute asynchronously\n because, when accessed, the static or class method preserves\n attributes installed by \"@rpc.functions.async_execution\".\n Example::\n The returned \"Future\" object can come from \"rpc_async()\",\n \"then()\", or \"Future\" constructor. The example below shows\n directly using the \"Future\" returned by \"then()\".\n >>> from torch.distributed import rpc\n >>>\n >>> # omitting setup and shutdown RPC\n >>>\n >>> # On all workers\n >>> @rpc.functions.async_execution\n >>> def async_add_chained(to, x, y, z):\n >>> # This function runs on \"worker1\" and returns immediately when\n >>> # the callback is installed through the then(cb) API. In the", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> # mean time, the `rpc_async` to \"worker2\" can run concurrently.\n >>> # When the return value of that `rpc_async` arrives at\n >>> # \"worker1\", \"worker1\" will run the lambda function accordingly\n >>> # and set the value for the previously returned `Future`, which\n >>> # will then trigger RPC to send the result back to \"worker0\".\n >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n >>> lambda fut: fut.wait() + z\n >>> )\n >>>\n >>> # On worker0\n >>> ret = rpc.rpc_sync(\n >>> \"worker1\",\n >>> async_add_chained,\n >>> args=(\"worker2\", torch.ones(2), 1, 1)\n >>> )\n >>> print(ret) # prints tensor([3., 3.])\n When combined with TorchScript decorators, this decorator must\n be the outermost one.\n >>> from torch import Tensor\n >>> from torch.futures import Future\n >>> from torch.distributed import rpc\n >>>\n >>> # omitting setup and shutdown RPC\n >>>", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> # On all workers\n >>> @torch.jit.script\n >>> def script_add(x: Tensor, y: Tensor) -> Tensor:\n >>> return x + y\n >>>\n >>> @rpc.functions.async_execution\n >>> @torch.jit.script\n >>> def async_add(to: str, x: Tensor, y: Tensor) -> Future[Tensor]:\n >>> return rpc.rpc_async(to, script_add, (x, y))\n >>>\n >>> # On worker0\n >>> ret = rpc.rpc_sync(\n >>> \"worker1\",\n >>> async_add,\n >>> args=(\"worker2\", torch.ones(2), 1)\n >>> )\n >>> print(ret) # prints tensor([2., 2.])\n When combined with a static or class method, this decorator must\n be the inner one.\n >>> from torch.distributed import rpc\n >>>\n >>> # omitting setup and shutdown RPC\n >>>\n >>> # On all workers\n >>> class AsyncExecutionClass:\n >>>\n >>> @staticmethod\n >>> @rpc.functions.async_execution\n >>> def static_async_add(to, x, y, z):", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> def static_async_add(to, x, y, z):\n >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n >>> lambda fut: fut.wait() + z\n >>> )\n >>>\n >>> @classmethod\n >>> @rpc.functions.async_execution\n >>> def class_async_add(cls, to, x, y, z):\n >>> ret_fut = torch.futures.Future()\n >>> rpc.rpc_async(to, torch.add, args=(x, y)).then(\n >>> lambda fut: ret_fut.set_result(fut.wait() + z)\n >>> )\n >>> return ret_fut\n >>>\n >>> @rpc.functions.async_execution\n >>> def bound_async_add(self, to, x, y, z):\n >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n >>> lambda fut: fut.wait() + z\n >>> )\n >>>\n >>> # On worker0\n >>> ret = rpc.rpc_sync(\n >>> \"worker1\",\n >>> AsyncExecutionClass.static_async_add,\n >>> args=(\"worker2\", torch.ones(2), 1, 2)\n >>> )", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> )\n >>> print(ret) # prints tensor([4., 4.])\n >>>\n >>> ret = rpc.rpc_sync(\n >>> \"worker1\",\n >>> AsyncExecutionClass.class_async_add,\n >>> args=(\"worker2\", torch.ones(2), 1, 2)\n >>> )\n >>> print(ret) # prints tensor([4., 4.])\n This decorator also works with RRef helpers, i.e.,\n \"torch.distributed.rpc.RRef.rpc_sync()\",\n \"torch.distributed.rpc.RRef.rpc_async()\", and\n \"torch.distributed.rpc.RRef.remote()\".\n >>> from torch.distributed import rpc\n >>>\n >>> # reuse the AsyncExecutionClass class above\n >>> rref = rpc.remote(\"worker1\", AsyncExecutionClass)\n >>> ret = rref.rpc_sync().static_async_add(\"worker2\", torch.ones(2), 1, 2)\n >>> print(ret) # prints tensor([4., 4.])\n >>>\n >>> rref = rpc.remote(\"worker1\", AsyncExecutionClass)\n >>> ret = rref.rpc_async().static_async_add(\"worker2\", torch.ones(2), 1, 2).wait()\n >>> print(ret) # prints tensor([4., 4.])\n >>>", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> rref = rpc.remote(\"worker1\", AsyncExecutionClass)\n >>> ret = rref.remote().static_async_add(\"worker2\", torch.ones(2), 1, 2).to_here()\n >>> print(ret) # prints tensor([4., 4.])\nBackends\n========\nThe RPC module can leverage different backends to perform the\ncommunication between the nodes. The backend to be used can be\nspecified in the \"init_rpc()\" function, by passing a certain value of\nthe \"BackendType\" enum. Regardless of what backend is used, the rest\nof the RPC API won't change. Each backend also defines its own\nsubclass of the \"RpcBackendOptions\" class, an instance of which can\nalso be passed to \"init_rpc()\" to configure the backend's behavior.\nclass torch.distributed.rpc.BackendType(value)\n An enum class of available backends.\n PyTorch ships with a builtin \"BackendType.TENSORPIPE\" backend.\n Additional ones can be registered using the \"register_backend()\"\n function.\nclass torch.distributed.rpc.RpcBackendOptions", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "class torch.distributed.rpc.RpcBackendOptions\n An abstract structure encapsulating the options passed into the RPC\n backend. An instance of this class can be passed in to \"init_rpc()\"\n in order to initialize RPC with specific configurations, such as\n the RPC timeout and \"init_method\" to be used.\n property init_method\n URL specifying how to initialize the process group. Default is\n \"env://\"\n property rpc_timeout\n A float indicating the timeout to use for all RPCs. If an RPC\n does not complete in this timeframe, it will complete with an\n exception indicating that it has timed out.\nTensorPipe Backend\n~~~~~~~~~~~~~~~~~~\nThe TensorPipe agent, which is the default, leverages the TensorPipe\nlibrary, which provides a natively point-to-point communication\nprimitive specifically suited for machine learning that fundamentally\naddresses some of the limitations of Gloo. Compared to Gloo, it has\nthe advantage of being asynchronous, which allows a large number of", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "transfers to occur simultaneously, each at their own speed, without\nblocking each other. It will only open pipes between pairs of nodes\nwhen needed, on demand, and when one node fails only its incident\npipes will be closed, while all other ones will keep working as\nnormal. In addition, it is able to support multiple different\ntransports (TCP, of course, but also shared memory, NVLink,\nInfiniBand, ...) and can automatically detect their availability and\nnegotiate the best transport to use for each pipe.\nThe TensorPipe backend has been introduced in PyTorch v1.6 and is\nbeing actively developed. At the moment, it only supports CPU tensors,\nwith GPU support coming soon. It comes with a TCP-based transport,\njust like Gloo. It is also able to automatically chunk and multiplex\nlarge tensors over multiple sockets and threads in order to achieve\nvery high bandwidths. The agent will be able to pick the best\ntransport on its own, with no intervention required.\nExample:\n >>> import os", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "Example:\n >>> import os\n >>> from torch.distributed import rpc\n >>> os.environ['MASTER_ADDR'] = 'localhost'\n >>> os.environ['MASTER_PORT'] = '29500'\n >>>\n >>> rpc.init_rpc(\n >>> \"worker1\",\n >>> rank=0,\n >>> world_size=2,\n >>> rpc_backend_options=rpc.TensorPipeRpcBackendOptions(\n >>> num_worker_threads=8,\n >>> rpc_timeout=20 # 20 second timeout\n >>> )\n >>> )\n >>>\n >>> # omitting init_rpc invocation on worker2\nclass torch.distributed.rpc.TensorPipeRpcBackendOptions(*, num_worker_threads=16, rpc_timeout=60.0, init_method='env://', device_maps=None, devices=None, _transports=None, _channels=None)\n The backend options for \"TensorPipeAgent\", derived from\n \"RpcBackendOptions\".\n Parameters:\n * num_worker_threads (int, optional) -- The number of\n threads in the thread-pool used by \"TensorPipeAgent\" to\n execute requests (default: 16).\n * rpc_timeout (float, optional) -- The default", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "timeout, in seconds, for RPC requests (default: 60 seconds).\n If the RPC has not completed in this timeframe, an exception\n indicating so will be raised. Callers can override this\n timeout for individual RPCs in \"rpc_sync()\" and \"rpc_async()\"\n if necessary.\n * init_method (str, optional) -- The URL to initialize\n the distributed store used for rendezvous. It takes any value\n accepted for the same argument of \"init_process_group()\"\n (default: \"env://\").\n * device_maps (Dict[str, Dict], optional) --\n Device placement mappings from this worker to the callee. Key\n is the callee worker name and value the dictionary (\"Dict\" of\n \"int\", \"str\", or \"torch.device\") that maps this worker's\n devices to the callee worker's devices. (default: \"None\")\n * devices (List[int, str, or \"torch.device\"], optional) --\n all local CUDA devices used by RPC agent. By Default, it will", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "be initialized to all local devices from its own \"device_maps\"\n and corresponding devices from its peers' \"device_maps\". When\n processing CUDA RPC requests, the agent will properly\n synchronize CUDA streams for all devices in this \"List\".\n property device_maps\n The device map locations.\n property devices\n All devices used by the local agent.\n property init_method\n URL specifying how to initialize the process group. Default is\n \"env://\"\n property num_worker_threads\n The number of threads in the thread-pool used by\n \"TensorPipeAgent\" to execute requests.\n property rpc_timeout\n A float indicating the timeout to use for all RPCs. If an RPC\n does not complete in this timeframe, it will complete with an\n exception indicating that it has timed out.\n set_device_map(to, device_map)\n Set device mapping between each RPC caller and callee pair. This\n function can be called multiple times to incrementally add", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "device placement configurations.\n Parameters:\n * to (str) -- Callee name.\n * device_map (Dict of python:int, str, or\n torch.device) -- Device placement mappings from this\n worker to the callee. This map must be invertible.\n -[ Example ]-\n >>> # both workers\n >>> def add(x, y):\n >>> print(x) # tensor([1., 1.], device='cuda:1')\n >>> return x + y, (x + y).to(2)\n >>>\n >>> # on worker 0\n >>> options = TensorPipeRpcBackendOptions(\n >>> num_worker_threads=8,\n >>> device_maps={\"worker1\": {0: 1}}\n >>> # maps worker0's cuda:0 to worker1's cuda:1\n >>> )\n >>> options.set_device_map(\"worker1\", {1: 2})\n >>> # maps worker0's cuda:1 to worker1's cuda:2\n >>>\n >>> rpc.init_rpc(\n >>> \"worker0\",\n >>> rank=0,\n >>> world_size=2,\n >>> backend=rpc.BackendType.TENSORPIPE,\n >>> rpc_backend_options=options\n >>> )", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> )\n >>>\n >>> x = torch.ones(2)\n >>> rets = rpc.rpc_sync(\"worker1\", add, args=(x.to(0), 1))\n >>> # The first argument will be moved to cuda:1 on worker1. When\n >>> # sending the return value back, it will follow the inverse of\n >>> # the device map, and hence will be moved back to cuda:0 and\n >>> # cuda:1 on worker0\n >>> print(rets[0]) # tensor([2., 2.], device='cuda:0')\n >>> print(rets[1]) # tensor([2., 2.], device='cuda:1')\n set_devices(devices)\n Set local devices used by the TensorPipe RPC agent. When\n processing CUDA RPC requests, the TensorPipe RPC agent will\n properly synchronize CUDA streams for all devices in this\n \"List\".\n Parameters:\n devices (List of python:int, str, or\n torch.device) -- local devices used by the TensorPipe RPC\n agent.\nNote:\n The RPC framework does not automatically retry any \"rpc_sync()\",\n \"rpc_async()\" and \"remote()\" calls. The reason is that there is", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "no way the RPC framework can determine whether an operation is\n idempotent or not and whether it is safe to retry. As a result, it\n is the application's responsibility to deal with failures and retry\n if necessary. RPC communication is based on TCP and as a result\n failures could happen due to network failures or intermittent\n network connectivity issues. In such scenarios, the application\n needs to retry appropriately with reasonable backoffs to ensure the\n network isn't overwhelmed by aggressive retries.\nRRef\n====\nWarning:\n RRefs are not currently supported when using CUDA tensors\nAn \"RRef\" (Remote REFerence) is a reference to a value of some type\n\"T\" (e.g. \"Tensor\") on a remote worker. This handle keeps the\nreferenced remote value alive on the owner, but there is no\nimplication that the value will be transferred to the local worker in\nthe future. RRefs can be used in multi-machine training by holding\nreferences to nn.Modules that exist on other workers, and calling the", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "appropriate functions to retrieve or modify their parameters during\ntraining. See Remote Reference Protocol for more details.\nclass torch.distributed.rpc.RRef\nMore Information about RRef\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n* Remote Reference Protocol\n * Background\n * Assumptions\n * RRef Lifetime\n * Design Reasoning\n * Implementation\n * Protocol Scenarios\n * User Share RRef with Owner as Return Value\n * User Share RRef with Owner as Argument\n * Owner Share RRef with User\n * User Share RRef with User\nRemoteModule\n============\nWarning:\n RemoteModule is not currently supported when using CUDA tensors\n\"RemoteModule\" is an easy way to create an nn.Module remotely on a\ndifferent process. The actual module resides on a remote host, but the\nlocal host has a handle to this module and can invoke it much like a\nregular nn.Module. Each invocation, however, incurs RPC calls to the\nremote end and can be performed asynchronously if needed via\nadditional APIs supported by RemoteModule.", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
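The "forward"/"forward_async" pairing that "RemoteModule" generates (described in the class docs below) can be illustrated without any RPC machinery. The sketch below is not RemoteModule — it performs no remote calls — but it mimics the shape of the generated API with a local thread pool, which can be useful when reasoning about code written against that interface; `LocalProxyModule` is a hypothetical name used only for illustration:

```python
# Illustration only: mimics the forward / forward_async API pair that
# RemoteModule generates, using a local thread pool instead of RPC.
from concurrent.futures import Future, ThreadPoolExecutor


class LocalProxyModule:
    def __init__(self, module):
        self._module = module  # any callable standing in for a module
        self._pool = ThreadPoolExecutor(max_workers=1)

    def forward(self, *args, **kwargs):
        # Synchronous path: blocks until the wrapped callable returns,
        # like RemoteModule.forward blocking on the remote result.
        return self._module(*args, **kwargs)

    def forward_async(self, *args, **kwargs) -> Future:
        # Asynchronous path: returns a Future immediately, mirroring
        # RemoteModule.forward_async.
        return self._pool.submit(self._module, *args, **kwargs)
```

With the real RemoteModule, "forward" executes on the remote node instead; only the call-site shape is the same.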
{"text": "additional APIs supported by RemoteModule.\nclass torch.distributed.nn.api.remote_module.RemoteModule(*args, **kwargs)\n A RemoteModule instance can only be created after RPC\n initialization. It creates a user-specified module on a\n specified remote node. It behaves like a regular \"nn.Module\"\n except that the \"forward\" method is executed on the remote node.\n It takes care of autograd recording to ensure the backward pass\n propagates gradients back to the corresponding remote module.\n It generates two methods \"forward_async\" and \"forward\" based on\n the signature of the \"forward\" method of \"module_cls\".\n \"forward_async\" runs asynchronously and returns a Future. The\n arguments of \"forward_async\" and \"forward\" are the same as those\n of the \"forward\" method of the module returned by \"module_cls\".\n For example, if \"module_cls\" returns an instance of \"nn.Linear\",\n whose \"forward\" method has the signature \"def forward(input: Tensor)", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "-> Tensor:\", the generated \"RemoteModule\" will have 2 methods\n with the signatures:\n \"def forward(input: Tensor) -> Tensor:\"\n \"def forward_async(input: Tensor) -> Future[Tensor]:\"\n Parameters:\n * remote_device (str) -- Device on the destination worker\n where we'd like to place this module. The format should be\n \"<workername>/<device>\", where the device field can be parsed\n as torch.device type. E.g., \"trainer0/cpu\", \"trainer0\",\n \"ps0/cuda:0\". In addition, the device field can be optional\n and the default value is \"cpu\".\n * module_cls (nn.Module) --\n Class for the module to be created remotely. For example,\n >>> class MyModule(nn.Module):\n >>> def forward(self, input):\n >>> return input + 1\n >>>\n >>> module_cls = MyModule\n * args (Sequence, optional) -- args to be passed to\n \"module_cls\".\n * kwargs (Dict, optional) -- kwargs to be passed to", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "\"module_cls\".\n Returns:\n A remote module instance which wraps the \"Module\" created by the\n user-provided \"module_cls\", it has a blocking \"forward\" method\n and an asynchronous \"forward_async\" method that returns a future\n of the \"forward\" call on the user-provided module on the remote\n side.\n Example::\n Run the following code in two different processes:\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> from torch import nn, Tensor\n >>> from torch.distributed.nn.api.remote_module import RemoteModule\n >>>\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> remote_linear_module = RemoteModule(\n >>> \"worker1/cpu\", nn.Linear, args=(20, 30),\n >>> )\n >>> input = torch.randn(128, 20)\n >>> ret_fut = remote_linear_module.forward_async(input)\n >>> ret = ret_fut.wait()\n >>> rpc.shutdown()\n >>> # On worker 1:\n >>> import torch", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> # On worker 1:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>>\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n Furthermore, a more practical example that is combined with\n DistributedDataParallel (DDP) can be found in this tutorial.\n\nget_module_rref()\n Returns an \"RRef\" (\"RRef[nn.Module]\") pointing to the remote\n module.\n Return type:\n RRef[Module]\n remote_parameters(recurse=True)\n Returns a list of \"RRef\" pointing to the remote module's\n parameters. This can typically be used in conjunction with\n \"DistributedOptimizer\".\n Parameters:\n recurse (bool) -- if True, then returns parameters of\n the remote module and all submodules of the remote module.\n Otherwise, returns only parameters that are direct members of\n the remote module.\n Returns:\n A list of \"RRef\" (\"List[RRef[nn.Parameter]]\") to remote\n module's parameters.\n\n\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
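The paired "forward" / "forward_async" methods that RemoteModule generates can be sketched in plain Python, independent of torch: a wrapper that derives a blocking method and a Future-returning method from the wrapped object's forward. The names here (ToyRemoteModule, MyModule) are illustrative stand-ins, not part of the RPC API, and a thread pool stands in for the RPC transport.

```python
from concurrent.futures import Future, ThreadPoolExecutor


class ToyRemoteModule:
    """Toy stand-in for the RemoteModule pattern: 'forward' blocks,
    'forward_async' returns a Future, and both share one signature."""

    def __init__(self, module_cls, *args, **kwargs):
        self._module = module_cls(*args, **kwargs)           # would live on the remote worker
        self._executor = ThreadPoolExecutor(max_workers=1)   # stands in for the RPC transport

    def forward(self, *args, **kwargs):
        return self._module.forward(*args, **kwargs)

    def forward_async(self, *args, **kwargs) -> Future:
        return self._executor.submit(self._module.forward, *args, **kwargs)


class MyModule:
    def forward(self, x):
        return x + 1
```

With this sketch, `ToyRemoteModule(MyModule).forward(1)` returns immediately, while `forward_async(1)` returns a Future whose `.result()` blocks until the call completes, mirroring the two generated methods described above.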
{"text": "module's parameters.\n Return type:\n List[RRef[Parameter]]\nDistributed Autograd Framework\n==============================\nWarning:\n Distributed autograd is not currently supported when using CUDA\n tensors\nThis module provides an RPC-based distributed autograd framework that\ncan be used for applications such as model parallel training. In\nshort, applications may send and receive gradient recording tensors\nover RPC. In the forward pass, we record when gradient recording\ntensors are sent over RPC and during the backward pass we use this\ninformation to perform a distributed backward pass using RPC. For more\ndetails see Distributed Autograd Design.\ntorch.distributed.autograd.backward(context_id: int, roots: List[Tensor], retain_graph=False) -> None\n Kicks off the distributed backward pass using the provided roots.\n This currently implements the FAST mode algorithm which assumes all\n RPC messages sent in the same distributed autograd context across", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "workers would be part of the autograd graph during the backward\n pass.\n We use the provided roots to discover the autograd graph and\n compute appropriate dependencies. This method blocks until the\n entire autograd computation is done.\n We accumulate the gradients in the appropriate\n \"torch.distributed.autograd.context\" on each of the nodes. The\n autograd context to be used is looked up given the \"context_id\"\n that is passed in when \"torch.distributed.autograd.backward()\" is\n called. If there is no valid autograd context corresponding to the\n given ID, we throw an error. You can retrieve the accumulated\n gradients using the \"get_gradients()\" API.\n Parameters:\n * context_id (int) -- The autograd context id for which we\n should retrieve the gradients.\n * roots (list) -- Tensors which represent the roots of the\n autograd computation. All the tensors should be scalars.\n * retain_graph (bool, optional) -- If False, the graph", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "used to compute the grad will be freed. Note that in nearly\n all cases setting this option to True is not needed and often\n can be worked around in a much more efficient way. Usually,\n you need to set this to True to run backward multiple times.\n Example::\n >>> import torch.distributed.autograd as dist_autograd\n >>> with dist_autograd.context() as context_id:\n >>> pred = model.forward()\n >>> loss = loss_func(pred, target)\n >>> dist_autograd.backward(context_id, [loss])\nclass torch.distributed.autograd.context\n Context object to wrap forward and backward passes when using\n distributed autograd. The \"context_id\" generated in the \"with\"\n statement is required to uniquely identify a distributed backward\n pass on all workers. Each worker stores metadata associated with\n this \"context_id\", which is required to correctly execute a\n distributed autograd pass.\n Example::\n >>> import torch.distributed.autograd as dist_autograd", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> with dist_autograd.context() as context_id:\n >>> t1 = torch.rand((3, 3), requires_grad=True)\n >>> t2 = torch.rand((3, 3), requires_grad=True)\n >>> loss = rpc.rpc_sync(\"worker1\", torch.add, args=(t1, t2)).sum()\n >>> dist_autograd.backward(context_id, [loss])\ntorch.distributed.autograd.get_gradients(context_id: int) -> Dict[Tensor, Tensor]\n Retrieves a map from Tensor to the appropriate gradient for that\n Tensor accumulated in the provided context corresponding to the\n given \"context_id\" as part of the distributed autograd backward\n pass.\n Parameters:\n context_id (int) -- The autograd context id for which we\n should retrieve the gradients.\n Returns:\n A map where the key is the Tensor and the value is the\n associated gradient for that Tensor.\n Example::\n >>> import torch.distributed.autograd as dist_autograd\n >>> with dist_autograd.context() as context_id:\n >>> t1 = torch.rand((3, 3), requires_grad=True)\n\n\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": ">>> t2 = torch.rand((3, 3), requires_grad=True)\n >>> loss = t1 + t2\n >>> dist_autograd.backward(context_id, [loss.sum()])\n >>> grads = dist_autograd.get_gradients(context_id)\n >>> print(grads[t1])\n >>> print(grads[t2])\n\nMore Information about RPC Autograd\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n* Distributed Autograd Design\n * Background\n * Autograd recording during the forward pass\n * Distributed Autograd Context\n * Distributed Backward Pass\n * Computing dependencies\n * FAST mode algorithm\n * SMART mode algorithm\n * Distributed Optimizer\n * Simple end to end example\nDistributed Optimizer\n=====================\nSee the torch.distributed.optim page for documentation on distributed\noptimizers.\nDesign Notes\n============\nThe distributed autograd design note covers the design of the RPC-\nbased distributed autograd framework that is useful for applications\nsuch as model parallel training.\n* Distributed Autograd Design\n\n\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "\nDistributed Autograd Design\nThe RRef design note covers the design of the RRef (Remote REFerence)\nprotocol used to refer to values on remote workers by the framework.\nRemote Reference Protocol\nTutorials\n=========\nThe RPC tutorials introduce users to the RPC framework, provide\nseveral example applications using torch.distributed.rpc APIs, and\ndemonstrate how to use the profiler to profile RPC-based workloads.\nGetting started with Distributed RPC Framework\nImplementing a Parameter Server using Distributed RPC Framework\nCombining Distributed DataParallel with Distributed RPC Framework\n (covers RemoteModule as well)\nProfiling RPC-based Workloads\nImplementing batch RPC processing\nDistributed Pipeline Parallel\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"}
{"text": "torch.special\nThe torch.special module, modeled after SciPy's special module.\nFunctions\n=========\ntorch.special.airy_ai(input, *, out=None) -> Tensor\n Airy function \\text{Ai}\\left(\\text{input}\\right).\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.bessel_j0(input, *, out=None) -> Tensor\n Bessel function of the first kind of order 0.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.bessel_j1(input, *, out=None) -> Tensor\n Bessel function of the first kind of order 1.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.digamma(input, *, out=None) -> Tensor\n Computes the logarithmic derivative of the gamma function on\n input.", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "input.\n \\digamma(x) = \\frac{d}{dx} \\ln\\left(\\Gamma\\left(x\\right)\\right)\n = \\frac{\\Gamma'(x)}{\\Gamma(x)}\n Parameters:\n input (Tensor) -- the tensor to compute the digamma\n function on\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Note:\n This function is similar to SciPy's scipy.special.digamma.\n Note:\n From PyTorch 1.8 onwards, the digamma function returns -Inf for\n 0. Previously it returned NaN for 0.\n Example:\n >>> a = torch.tensor([1, 0.5])\n >>> torch.special.digamma(a)\n tensor([-0.5772, -1.9635])\ntorch.special.entr(input, *, out=None) -> Tensor\n Computes the entropy on \"input\" (as defined below), elementwise.\n \\begin{align} \\text{entr(x)} = \\begin{cases} -x * \\ln(x) &\n x > 0 \\\\ 0 & x = 0.0 \\\\ -\\infty & x < 0 \\end{cases}\n \\end{align}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::\n >>> a = torch.arange(-0.5, 1, 0.5)\n >>> a\n tensor([-0.5000, 0.0000, 0.5000])\n >>> torch.special.entr(a)\n tensor([ -inf, 0.0000, 0.3466])\ntorch.special.erf(input, *, out=None) -> Tensor\n Computes the error function of \"input\". The error function is\n defined as follows:\n \\mathrm{erf}(x) = \\frac{2}{\\sqrt{\\pi}} \\int_{0}^{x} e^{-t^2} dt\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.special.erf(torch.tensor([0, -1., 10.]))\n tensor([ 0.0000, -0.8427, 1.0000])\ntorch.special.erfc(input, *, out=None) -> Tensor\n Computes the complementary error function of \"input\". The\n complementary error function is defined as follows:\n \\mathrm{erfc}(x) = 1 - \\frac{2}{\\sqrt{\\pi}} \\int_{0}^{x}\n e^{-t^2} dt\n Parameters:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
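The three cases in the entr definition above are easy to mirror for a single value in plain Python with the standard math module (an illustrative scalar sketch, not torch's elementwise implementation):

```python
import math


def entr(x: float) -> float:
    """-x * ln(x) for x > 0, 0 at x == 0, -inf for x < 0 (cf. torch.special.entr)."""
    if x > 0:
        return -x * math.log(x)
    if x == 0.0:
        return 0.0
    return float("-inf")
```

For example, `entr(0.5)` gives about 0.3466, matching the tensor example above.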
{"text": "e^{-t^2} dt\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.special.erfc(torch.tensor([0, -1., 10.]))\n tensor([ 1.0000, 1.8427, 0.0000])\ntorch.special.erfcx(input, *, out=None) -> Tensor\n Computes the scaled complementary error function for each element\n of \"input\". The scaled complementary error function is defined as\n follows:\n \\mathrm{erfcx}(x) = e^{x^2} \\mathrm{erfc}(x)\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.special.erfcx(torch.tensor([0, -1., 10.]))\n tensor([ 1.0000, 5.0090, 0.0561])\ntorch.special.erfinv(input, *, out=None) -> Tensor\n Computes the inverse error function of \"input\". The inverse error\n function is defined in the range (-1, 1) as:\n \\mathrm{erfinv}(\\mathrm{erf}(x)) = x\n Parameters:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.special.erfinv(torch.tensor([0, 0.5, -1.]))\n tensor([ 0.0000, 0.4769, -inf])\ntorch.special.exp2(input, *, out=None) -> Tensor\n Computes the base two exponential function of \"input\".\n y_{i} = 2^{x_{i}}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.special.exp2(torch.tensor([0, math.log2(2.), 3, 4]))\n tensor([ 1., 2., 8., 16.])\ntorch.special.expit(input, *, out=None) -> Tensor\n Computes the expit (also known as the logistic sigmoid function) of\n the elements of \"input\".\n \\text{out}_{i} = \\frac{1}{1 + e^{-\\text{input}_{i}}}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
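The expit formula above is just the logistic sigmoid. A scalar sketch with the standard math module (illustrative only; branching on the sign avoids overflow in exp for large-magnitude negative inputs):

```python
import math


def expit(x: float) -> float:
    """Logistic sigmoid 1 / (1 + exp(-x)), written in the numerically
    safe two-branch form so exp() is only ever called on non-positive values."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)
```

`expit(0.0)` is 0.5, and `expit(0.9213)` gives about 0.7153, matching the tensor example below.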
{"text": "Example:\n >>> t = torch.randn(4)\n >>> t\n tensor([ 0.9213, 1.0887, -0.8858, -1.7683])\n >>> torch.special.expit(t)\n tensor([ 0.7153, 0.7481, 0.2920, 0.1458])\ntorch.special.expm1(input, *, out=None) -> Tensor\n Computes the exponential of the elements minus 1 of \"input\".\n y_{i} = e^{x_{i}} - 1\n Note:\n This function provides greater precision than exp(x) - 1 for\n small values of x.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.special.expm1(torch.tensor([0, math.log(2.)]))\n tensor([ 0., 1.])\ntorch.special.gammainc(input, other, *, out=None) -> Tensor\n Computes the regularized lower incomplete gamma function:\n \\text{out}_{i} = \\frac{1}{\\Gamma(\\text{input}_i)}\n \\int_0^{\\text{other}_i} t^{\\text{input}_i-1} e^{-t} dt\n where both \\text{input}_i and \\text{other}_i are weakly positive", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "and at least one is strictly positive. If both are zero or either\n is negative then \\text{out}_i=\\text{nan}. \\Gamma(\\cdot) in the\n equation above is the gamma function,\n \\Gamma(\\text{input}_i) = \\int_0^\\infty t^{(\\text{input}_i-1)}\n e^{-t} dt.\n See \"torch.special.gammaincc()\" and \"torch.special.gammaln()\" for\n related functions.\n Supports broadcasting to a common shape and float inputs.\n Note:\n The backward pass with respect to \"input\" is not yet supported.\n Please open an issue on PyTorch's Github to request it.\n Parameters:\n * input (Tensor) -- the first non-negative input tensor\n * other (Tensor) -- the second non-negative input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a1 = torch.tensor([4.0])\n >>> a2 = torch.tensor([3.0, 4.0, 5.0])\n >>> a = torch.special.gammainc(a1, a2)\n tensor([0.3528, 0.5665, 0.7350])", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "tensor([0.3528, 0.5665, 0.7350])\n >>> b = torch.special.gammainc(a1, a2) + torch.special.gammaincc(a1, a2)\n tensor([1., 1., 1.])\ntorch.special.gammaincc(input, other, *, out=None) -> Tensor\n Computes the regularized upper incomplete gamma function:\n \\text{out}_{i} = \\frac{1}{\\Gamma(\\text{input}_i)}\n \\int_{\\text{other}_i}^{\\infty} t^{\\text{input}_i-1} e^{-t} dt\n where both \\text{input}_i and \\text{other}_i are weakly positive\n and at least one is strictly positive. If both are zero or either\n is negative then \\text{out}_i=\\text{nan}. \\Gamma(\\cdot) in the\n equation above is the gamma function,\n \\Gamma(\\text{input}_i) = \\int_0^\\infty t^{(\\text{input}_i-1)}\n e^{-t} dt.\n See \"torch.special.gammainc()\" and \"torch.special.gammaln()\" for\n related functions.\n Supports broadcasting to a common shape and float inputs.\n Note:\n The backward pass with respect to \"input\" is not yet supported.\n Please open an issue on PyTorch's Github to request it.\n Parameters:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "Parameters:\n * input (Tensor) -- the first non-negative input tensor\n * other (Tensor) -- the second non-negative input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a1 = torch.tensor([4.0])\n >>> a2 = torch.tensor([3.0, 4.0, 5.0])\n >>> a = torch.special.gammaincc(a1, a2)\n tensor([0.6472, 0.4335, 0.2650])\n >>> b = torch.special.gammainc(a1, a2) + torch.special.gammaincc(a1, a2)\n tensor([1., 1., 1.])\ntorch.special.gammaln(input, *, out=None) -> Tensor\n Computes the natural logarithm of the absolute value of the gamma\n function on \"input\".\n \\text{out}_{i} = \\ln \\Gamma(|\\text{input}_{i}|)\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.arange(0.5, 2, 0.5)\n >>> torch.special.gammaln(a)\n tensor([ 0.5724, 0.0000, -0.1208])", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
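For intuition, the regularized lower incomplete gamma function defined above can be evaluated with its standard power series, gamma(s, x) = x^s e^{-x} * sum over k >= 0 of x^k / (s(s+1)...(s+k)), divided by Gamma(s). A rough numerical sketch (not torch's implementation; the fixed term count is an assumption and is only adequate for moderate s and x):

```python
import math


def gammainc(s: float, x: float, terms: int = 200) -> float:
    """Regularized lower incomplete gamma P(s, x) via the power series.
    Illustrative only: a fixed term count, no error control."""
    if x <= 0.0:
        return 0.0
    term = 1.0 / s          # k = 0 term of the sum x^k / (s (s+1) ... (s+k))
    total = 0.0
    for k in range(1, terms + 1):
        total += term
        term *= x / (s + k)
    # Prefactor x^s e^{-x} / Gamma(s), computed in log space for stability.
    return total * math.exp(s * math.log(x) - x - math.lgamma(s))
```

This reproduces the example values above: `gammainc(4.0, 3.0)` is about 0.3528 and `gammainc(4.0, 5.0)` about 0.7350, and by construction the lower and upper regularized functions sum to 1.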
{"text": "tensor([ 0.5724, 0.0000, -0.1208])\ntorch.special.i0(input, *, out=None) -> Tensor\n Computes the zeroth order modified Bessel function of the first\n kind for each element of \"input\".\n \\text{out}_{i} = I_0(\\text{input}_{i}) = \\sum_{k=0}^{\\infty}\n \\frac{(\\text{input}_{i}^2/4)^k}{(k!)^2}\n Parameters:\n input (Tensor) -- the input tensor\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> torch.i0(torch.arange(5, dtype=torch.float32))\n tensor([ 1.0000, 1.2661, 2.2796, 4.8808, 11.3019])\ntorch.special.i0e(input, *, out=None) -> Tensor\n Computes the exponentially scaled zeroth order modified Bessel\n function of the first kind (as defined below) for each element of\n \"input\".\n \\text{out}_{i} = \\exp(-|x|) * i0(x) = \\exp(-|x|) *\n \\sum_{k=0}^{\\infty} \\frac{(\\text{input}_{i}^2/4)^k}{(k!)^2}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::\n >>> torch.special.i0e(torch.arange(5, dtype=torch.float32))\n tensor([1.0000, 0.4658, 0.3085, 0.2430, 0.2070])\ntorch.special.i1(input, *, out=None) -> Tensor\n Computes the first order modified Bessel function of the first kind\n (as defined below) for each element of \"input\".\n \\text{out}_{i} = \\frac{(\\text{input}_{i})}{2} *\n \\sum_{k=0}^{\\infty} \\frac{(\\text{input}_{i}^2/4)^k}{(k!) *\n (k+1)!}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::\n >>> torch.special.i1(torch.arange(5, dtype=torch.float32))\n tensor([0.0000, 0.5652, 1.5906, 3.9534, 9.7595])\ntorch.special.i1e(input, *, out=None) -> Tensor\n Computes the exponentially scaled first order modified Bessel\n function of the first kind (as defined below) for each element of\n \"input\".", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "\"input\".\n \\text{out}_{i} = \\exp(-|x|) * i1(x) = \\exp(-|x|) *\n \\frac{(\\text{input}_{i})}{2} * \\sum_{k=0}^{\\infty}\n \\frac{(\\text{input}_{i}^2/4)^k}{(k!) * (k+1)!}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::\n >>> torch.special.i1e(torch.arange(5, dtype=torch.float32))\n tensor([0.0000, 0.2079, 0.2153, 0.1968, 0.1788])\ntorch.special.log1p(input, *, out=None) -> Tensor\n Alias for \"torch.log1p()\".\ntorch.special.log_ndtr(input, *, out=None) -> Tensor\n Computes the log of the area under the standard Gaussian\n probability density function, integrated from minus infinity to\n \"input\", elementwise.\n \\text{log_ndtr}(x) = \\log\\left(\\frac{1}{\\sqrt{2\n \\pi}}\\int_{-\\infty}^{x} e^{-\\frac{1}{2}t^2} dt \\right)\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "Example::\n >>> torch.special.log_ndtr(torch.tensor([-3., -2, -1, 0, 1, 2, 3]))\n tensor([-6.6077, -3.7832, -1.8410, -0.6931, -0.1728, -0.0230, -0.0014])\ntorch.special.log_softmax(input, dim, *, dtype=None) -> Tensor\n Computes softmax followed by a logarithm.\n While mathematically equivalent to log(softmax(x)), doing these two\n operations separately is slower and numerically unstable. This\n function is computed as:\n \\text{log_softmax}(x_{i}) = \\log\\left(\\frac{\\exp(x_i) }{ \\sum_j\n \\exp(x_j)} \\right)\n Parameters:\n * input (Tensor) -- input\n * dim (int) -- A dimension along which log_softmax will be\n computed.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Example::\n >>> t = torch.ones(2, 2)\n >>> torch.special.log_softmax(t, 0)", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
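The note above about computing log(softmax(x)) in one step is about numerical stability: the standard trick is to subtract the maximum before exponentiating, so no exp can overflow. A plain-Python sketch over a list of floats (illustrative, not torch's kernel):

```python
import math


def log_softmax(xs):
    """Stable log-softmax: x_i - (m + log(sum(exp(x_j - m)))), with m = max(x)."""
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))  # log-sum-exp
    return [x - lse for x in xs]
```

`log_softmax([1.0, 1.0])` gives [-0.6931, -0.6931], matching the example below, and `log_softmax([1000.0, 1000.0])` does not overflow even though exp(1000) would.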
{"text": ">>> torch.special.log_softmax(t, 0)\n tensor([[-0.6931, -0.6931],\n [-0.6931, -0.6931]])\ntorch.special.logit(input, eps=None, *, out=None) -> Tensor\n Returns a new tensor with the logit of the elements of \"input\".\n \"input\" is clamped to [eps, 1 - eps] when eps is not None. When eps\n is None and \"input\" < 0 or \"input\" > 1, the function yields\n NaN.\n \\begin{align} y_{i} &= \\ln(\\frac{z_{i}}{1 - z_{i}}) \\\\ z_{i} &=\n \\begin{cases} x_{i} & \\text{if eps is None} \\\\\n \\text{eps} & \\text{if } x_{i} < \\text{eps} \\\\ x_{i} &\n \\text{if } \\text{eps} \\leq x_{i} \\leq 1 - \\text{eps} \\\\ 1 -\n \\text{eps} & \\text{if } x_{i} > 1 - \\text{eps} \\end{cases}\n \\end{align}\n Parameters:\n * input (Tensor) -- the input tensor.\n * eps (float, optional) -- the epsilon for input clamp\n bound. Default: \"None\"\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> a = torch.rand(5)\n\n\n", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "Example:\n >>> a = torch.rand(5)\n >>> a\n tensor([0.2796, 0.9331, 0.6486, 0.1523, 0.6516])\n >>> torch.special.logit(a, eps=1e-6)\n tensor([-0.9466, 2.6352, 0.6131, -1.7169, 0.6261])\ntorch.special.logsumexp(input, dim, keepdim=False, *, out=None)\n Alias for \"torch.logsumexp()\".\ntorch.special.multigammaln(input, p, *, out=None) -> Tensor\n Computes the multivariate log-gamma function with dimension p\n element-wise, given by\n \\log(\\Gamma_{p}(a)) = C + \\displaystyle \\sum_{i=1}^{p}\n \\log\\left(\\Gamma\\left(a - \\frac{i - 1}{2}\\right)\\right)\n where C = \\log(\\pi) \\cdot \\frac{p (p - 1)}{4} and \\Gamma(\\cdot) is\n the Gamma function.\n All elements must be greater than \\frac{p - 1}{2}, otherwise the\n behavior is undefined.\n Parameters:\n * input (Tensor) -- the tensor to compute the multivariate\n log-gamma function\n * p (int) -- the number of dimensions\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
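The clamping behaviour of logit described above can be mirrored in a scalar sketch with the standard math module (illustrative only; torch operates elementwise on tensors):

```python
import math


def logit(x, eps=None):
    """ln(x / (1 - x)); clamps x to [eps, 1 - eps] when eps is given,
    otherwise yields NaN outside [0, 1] and -inf/inf at the endpoints."""
    if eps is not None:
        x = min(max(x, eps), 1.0 - eps)
    if x < 0.0 or x > 1.0:
        return float("nan")
    if x == 0.0:
        return float("-inf")
    if x == 1.0:
        return float("inf")
    return math.log(x / (1.0 - x))
```

For instance `logit(0.0)` is -inf, while `logit(0.0, eps=1e-6)` is the finite value ln(1e-6 / (1 - 1e-6)), about -13.8, because the input is clamped to eps first.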
{"text": "Example:\n >>> a = torch.empty(2, 3).uniform_(1, 2)\n >>> a\n tensor([[1.6835, 1.8474, 1.1929],\n [1.0475, 1.7162, 1.4180]])\n >>> torch.special.multigammaln(a, 2)\n tensor([[0.3928, 0.4007, 0.7586],\n [1.0311, 0.3901, 0.5049]])\ntorch.special.ndtr(input, *, out=None) -> Tensor\n Computes the area under the standard Gaussian probability density\n function, integrated from minus infinity to \"input\", elementwise.\n \\text{ndtr}(x) = \\frac{1}{\\sqrt{2 \\pi}}\\int_{-\\infty}^{x}\n e^{-\\frac{1}{2}t^2} dt\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::\n >>> torch.special.ndtr(torch.tensor([-3., -2, -1, 0, 1, 2, 3]))\n tensor([0.0013, 0.0228, 0.1587, 0.5000, 0.8413, 0.9772, 0.9987])\ntorch.special.ndtri(input, *, out=None) -> Tensor\n Computes the argument, x, for which the area under the Gaussian", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "probability density function (integrated from minus infinity to x)\n is equal to \"input\", elementwise.\n \\text{ndtri}(p) = \\sqrt{2}\\text{erf}^{-1}(2p - 1)\n Note:\n Also known as quantile function for Normal Distribution.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::\n >>> torch.special.ndtri(torch.tensor([0, 0.25, 0.5, 0.75, 1]))\n tensor([ -inf, -0.6745, 0.0000, 0.6745, inf])\ntorch.special.polygamma(n, input, *, out=None) -> Tensor\n Computes the n^{th} derivative of the digamma function on \"input\".\n n \\geq 0 is called the order of the polygamma function.\n \\psi^{(n)}(x) = \\frac{d^{(n)}}{dx^{(n)}} \\psi(x)\n Note:\n This function is implemented only for nonnegative integers n \\geq\n 0.\n Parameters:\n * n (int) -- the order of the polygamma function\n * input (Tensor) -- the input tensor.\n Keyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
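ndtr relates to the error function by ndtr(x) = (1 + erf(x / sqrt(2))) / 2, so a scalar sketch needs only math.erf from the standard library (ndtri would additionally need an inverse erf, which the standard library lacks):

```python
import math


def ndtr(x: float) -> float:
    """Standard normal CDF via the identity ndtr(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

This reproduces the ndtr example values above, e.g. `ndtr(-1.0)` is about 0.1587 and `ndtr(0.0)` is exactly 0.5.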
{"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::\n >>> a = torch.tensor([1, 0.5])\n >>> torch.special.polygamma(1, a)\n tensor([1.64493, 4.9348])\n >>> torch.special.polygamma(2, a)\n tensor([ -2.4041, -16.8288])\n >>> torch.special.polygamma(3, a)\n tensor([ 6.4939, 97.4091])\n >>> torch.special.polygamma(4, a)\n tensor([ -24.8863, -771.4742])\ntorch.special.psi(input, *, out=None) -> Tensor\n Alias for \"torch.special.digamma()\".\ntorch.special.round(input, *, out=None) -> Tensor\n Alias for \"torch.round()\".\ntorch.special.scaled_modified_bessel_k0(input, *, out=None) -> Tensor\n Scaled modified Bessel function of the second kind of order 0.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.scaled_modified_bessel_k1(input, *, out=None) -> Tensor\n Scaled modified Bessel function of the second kind of order 1.", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.sinc(input, *, out=None) -> Tensor\n Computes the normalized sinc of \"input\".\n \\text{out}_{i} = \\begin{cases} 1, & \\text{if}\\\n \\text{input}_{i}=0 \\\\ \\sin(\\pi \\text{input}_{i}) / (\\pi\n \\text{input}_{i}), & \\text{otherwise} \\end{cases}\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example::\n >>> t = torch.randn(4)\n >>> t\n tensor([ 0.2252, -0.2948, 1.0267, -1.1566])\n >>> torch.special.sinc(t)\n tensor([ 0.9186, 0.8631, -0.0259, -0.1300])\ntorch.special.softmax(input, dim, *, dtype=None) -> Tensor\n Computes the softmax function.\n Softmax is defined as:\n \\text{Softmax}(x_{i}) = \\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}\n It is applied to all slices along dim, and will re-scale them so", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "that the elements lie in the range [0, 1] and sum to 1.\n Parameters:\n * input (Tensor) -- input\n * dim (int) -- A dimension along which softmax will be\n computed.\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n Examples::\n >>> t = torch.ones(2, 2)\n >>> torch.special.softmax(t, 0)\n tensor([[0.5000, 0.5000],\n [0.5000, 0.5000]])\ntorch.special.spherical_bessel_j0(input, *, out=None) -> Tensor\n Spherical Bessel function of the first kind of order 0.\n Parameters:\n input (Tensor) -- the input tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.xlog1py(input, other, *, out=None) -> Tensor\n Computes \"input * log1p(other)\" with the following cases.", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "\\text{out}_{i} = \\begin{cases} \\text{NaN} & \\text{if }\n \\text{other}_{i} = \\text{NaN} \\\\ 0 & \\text{if }\n \\text{input}_{i} = 0.0 \\text{ and } \\text{other}_{i} \\neq\n \\text{NaN} \\\\ \\text{input}_{i} *\n \\text{log1p}(\\text{other}_{i}) & \\text{otherwise} \\end{cases}\n Similar to SciPy's scipy.special.xlog1py.\n Parameters:\n * input (Number or Tensor) -- Multiplier\n * other (Number or Tensor) -- Argument\n Note:\n At least one of \"input\" or \"other\" must be a tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> x = torch.zeros(5,)\n >>> y = torch.tensor([-1, 0, 1, float('inf'), float('nan')])\n >>> torch.special.xlog1py(x, y)\n tensor([0., 0., 0., 0., nan])\n >>> x = torch.tensor([1, 2, 3])\n >>> y = torch.tensor([3, 2, 1])\n >>> torch.special.xlog1py(x, y)\n tensor([1.3863, 2.1972, 2.0794])\n >>> torch.special.xlog1py(x, 4)", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": ">>> torch.special.xlog1py(x, 4)\n tensor([1.6094, 3.2189, 4.8283])\n >>> torch.special.xlog1py(2, y)\n tensor([2.7726, 2.1972, 1.3863])\ntorch.special.xlogy(input, other, *, out=None) -> Tensor\n Computes \"input * log(other)\" with the following cases.\n \\text{out}_{i} = \\begin{cases} \\text{NaN} & \\text{if }\n \\text{other}_{i} = \\text{NaN} \\\\ 0 & \\text{if }\n \\text{input}_{i} = 0.0 \\\\ \\text{input}_{i} *\n \\log{(\\text{other}_{i})} & \\text{otherwise} \\end{cases}\n Similar to SciPy's scipy.special.xlogy.\n Parameters:\n * input (Number or Tensor) -- Multiplier\n * other (Number or Tensor) -- Argument\n Note:\n At least one of \"input\" or \"other\" must be a tensor.\n Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\n Example:\n >>> x = torch.zeros(5,)\n >>> y = torch.tensor([-1, 0, 1, float('inf'), float('nan')])\n >>> torch.special.xlogy(x, y)\n tensor([0., 0., 0., 0., nan])\n\n\n", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
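The case analysis for xlogy above (a zero multiplier gives zero even where log alone would be undefined, and NaN propagates) can be sketched for scalars with the standard math module (illustrative only; torch broadcasts over tensors):

```python
import math


def xlogy(x: float, y: float) -> float:
    """x * log(y) with the convention that 0 * log(y) == 0 unless y is NaN."""
    if math.isnan(y):
        return float("nan")
    if x == 0.0:
        return 0.0                 # even for y <= 0, where log(y) alone would fail
    if y < 0.0:
        return float("nan")        # nonzero multiplier, log of a negative number
    if y == 0.0:
        return float("-inf") if x > 0 else float("inf")
    return x * math.log(y)
```

For example `xlogy(0.0, -1.0)` is 0 while `xlogy(1.0, 3.0)` is ln(3) ≈ 1.0986, matching the tensor examples.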
{"text": "tensor([0., 0., 0., 0., nan])\n >>> x = torch.tensor([1, 2, 3])\n >>> y = torch.tensor([3, 2, 1])\n >>> torch.special.xlogy(x, y)\n tensor([1.0986, 1.3863, 0.0000])\n >>> torch.special.xlogy(x, 4)\n tensor([1.3863, 2.7726, 4.1589])\n >>> torch.special.xlogy(2, y)\n tensor([2.1972, 1.3863, 0.0000])\ntorch.special.zeta(input, other, , out=None) -> Tensor\n Computes the Hurwitz zeta function, elementwise.\n \\zeta(x, q) = \\sum_{k=0}^{\\infty} \\frac{1}{(k + q)^x}\n Parameters:\n * input (Tensor) -- the input tensor corresponding to x.\n * other (Tensor) -- the input tensor corresponding to q.\n Note:\n The Riemann zeta function corresponds to the case when q = 1\n Keyword Arguments:\n out (Tensor, optional*) -- the output tensor.\n Example::\n >>> x = torch.tensor([2., 4.])\n >>> torch.special.zeta(x, 1)\n tensor([1.6449, 1.0823])\n >>> torch.special.zeta(x, torch.tensor([1., 2.]))\n tensor([1.6449, 0.0823])", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "tensor([1.6449, 0.0823])\n >>> torch.special.zeta(2, torch.tensor([1., 2.]))\n tensor([1.6449, 0.6449])", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"}
{"text": "torch.utils.bottlenecktorch.utils.bottleneck is a tool that can be used as an initial step\nfor debugging bottlenecks in your program. It summarizes runs of your\nscript with the Python profiler and PyTorch's autograd profiler.\nRun it on the command line with\n python -m torch.utils.bottleneck /path/to/source/script.py [args]\nwhere [args] are any number of arguments to script.py, or run\n\"python -m torch.utils.bottleneck -h\" for more usage instructions.\nWarning:\n Because your script will be profiled, please ensure that it exits in\n a finite amount of time.\nWarning:\n Due to the asynchronous nature of CUDA kernels, when running against\n CUDA code, the cProfile output and CPU-mode autograd profilers may\n not show correct timings: the reported CPU time reports the amount\n of time used to launch the kernels but does not include the time the\n kernel spent executing on a GPU unless the operation does a\n synchronize. Ops that do synchronize appear to be extremely", "source": "https://pytorch.org/docs/stable/bottleneck.html", "category": "pytorch docs"}
{"text": "expensive under regular CPU-mode profilers. In these case where\n timings are incorrect, the CUDA-mode autograd profiler may be\n helpful.\nNote:\n To decide which (CPU-only-mode or CUDA-mode) autograd profiler\n output to look at, you should first check if your script is CPU-\n bound (\"CPU total time is much greater than CUDA total time\"). If it\n is CPU-bound, looking at the results of the CPU-mode autograd\n profiler will help. If on the other hand your script spends most of\n its time executing on the GPU, then it makes sense to start looking\n for responsible CUDA operators in the output of the CUDA-mode\n autograd profiler.Of course the reality is much more complicated and\n your script might not be in one of those two extremes depending on\n the part of the model you're evaluating. If the profiler outputs\n don't help, you could try looking at the result of\n \"torch.autograd.profiler.emit_nvtx()\" with \"nvprof\". However, please\n take into account that the NVTX overhead is very high and often", "source": "https://pytorch.org/docs/stable/bottleneck.html", "category": "pytorch docs"}
{"text": "gives a heavily skewed timeline. Similarly, \"Intel\u00c2\u00ae VTune\u00e2\u0084\u00a2 Profiler\"\n helps to analyze performance on Intel platforms further with\n \"torch.autograd.profiler.emit_itt()\".\nWarning:\n If you are profiling CUDA code, the first profiler that \"bottleneck\"\n runs (cProfile) will include the CUDA startup time (CUDA buffer\n allocation cost) in its time reporting. This should not matter if\n your bottlenecks result in code much slower than the CUDA startup\n time.\nFor more complicated uses of the profilers (like in a multi-GPU case),\nplease see https://docs.python.org/3/library/profile.html or\n\"torch.autograd.profiler.profile()\" for more information.", "source": "https://pytorch.org/docs/stable/bottleneck.html", "category": "pytorch docs"}
{"text": "Frequently Asked QuestionsMy model reports \"cuda runtime error(2): out of memory\"\nAs the error message suggests, you have run out of memory on your GPU.\nSince we often deal with large amounts of data in PyTorch, small\nmistakes can rapidly cause your program to use up all of your GPU;\nfortunately, the fixes in these cases are often simple. Here are a few\ncommon things to check:\nDon't accumulate history across your training loop. By default,\ncomputations involving variables that require gradients will keep\nhistory. This means that you should avoid using such variables in\ncomputations which will live beyond your training loops, e.g., when\ntracking statistics. Instead, you should detach the variable or access\nits underlying data.\nSometimes, it can be non-obvious when differentiable variables can\noccur. Consider the following training loop (abridged from source):\n total_loss = 0\n for i in range(10000):", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"}
{"text": "total_loss = 0\n for i in range(10000):\n optimizer.zero_grad()\n output = model(input)\n loss = criterion(output)\n loss.backward()\n optimizer.step()\n total_loss += loss\nHere, \"total_loss\" is accumulating history across your training loop,\nsince \"loss\" is a differentiable variable with autograd history. You\ncan fix this by writing total_loss += float(loss) instead.\nOther instances of this problem: 1.\nDon't hold onto tensors and variables you don't need. If you\nassign a Tensor or Variable to a local, Python will not deallocate\nuntil the local goes out of scope. You can free this reference by\nusing \"del x\". Similarly, if you assign a Tensor or Variable to a\nmember variable of an object, it will not deallocate until the object\ngoes out of scope. You will get the best memory usage if you don't\nhold onto temporaries you don't need.\nThe scopes of locals can be larger than you expect. For example:\n for i in range(5):\n intermediate = f(input[i])", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"}
{"text": "intermediate = f(input[i])\n result += g(intermediate)\n output = h(result)\n return output\nHere, \"intermediate\" remains live even while \"h\" is executing, because\nits scope extrudes past the end of the loop. To free it earlier, you\nshould \"del intermediate\" when you are done with it.\nAvoid running RNNs on sequences that are too large. The amount of\nmemory required to backpropagate through an RNN scales linearly with\nthe length of the RNN input; thus, you will run out of memory if you\ntry to feed an RNN a sequence that is too long.\nThe technical term for this phenomenon is backpropagation through\ntime, and there are plenty of references for how to implement\ntruncated BPTT, including in the word language model example;\ntruncation is handled by the \"repackage\" function as described in this\nforum post.\nDon't use linear layers that are too large. A linear layer\n\"nn.Linear(m, n)\" uses O(nm) memory: that is to say, the memory\nrequirements of the weights scales quadratically with the number of", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"}
{"text": "features. It is very easy to blow through your memory this way (and\nremember that you will need at least twice the size of the weights,\nsince you also need to store the gradients.)\nConsider checkpointing. You can trade-off memory for compute by\nusing checkpoint.\nMy GPU memory isn't freed properly\n==================================\nPyTorch uses a caching memory allocator to speed up memory\nallocations. As a result, the values shown in \"nvidia-smi\" usually\ndon't reflect the true memory usage. See Memory management for more\ndetails about GPU memory management.\nIf your GPU memory isn't freed even after Python quits, it is very\nlikely that some Python subprocesses are still alive. You may find\nthem via \"ps -elf | grep python\" and manually kill them with \"kill -9\n[pid]\".\nMy out of memory exception handler can't allocate memory\n========================================================\nYou may have some code that tries to recover from out of memory\nerrors.\n try:\n run_model(batch_size)", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"}
{"text": "errors.\n try:\n run_model(batch_size)\n except RuntimeError: # Out of memory\n for _ in range(batch_size):\n run_model(1)\nBut find that when you do run out of memory, your recovery code can't\nallocate either. That's because the python exception object holds a\nreference to the stack frame where the error was raised. Which\nprevents the original tensor objects from being freed. The solution is\nto move you OOM recovery code outside of the \"except\" clause.\n oom = False\n try:\n run_model(batch_size)\n except RuntimeError: # Out of memory\n oom = True\n if oom:\n for _ in range(batch_size):\n run_model(1)\nMy data loader workers return identical random numbers\n======================================================\nYou are likely using other libraries to generate random numbers in the\ndataset and worker subprocesses are started via \"fork\". See\n\"torch.utils.data.DataLoader\"'s documentation for how to properly set", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"}
{"text": "up random seeds in workers with its \"worker_init_fn\" option.\nMy recurrent network doesn't work with data parallelism\n=======================================================\nThere is a subtlety in using the \"pack sequence -> recurrent network\n-> unpack sequence\" pattern in a \"Module\" with \"DataParallel\" or\n\"data_parallel()\". Input to each the \"forward()\" on each device will\nonly be part of the entire input. Because the unpack operation\n\"torch.nn.utils.rnn.pad_packed_sequence()\" by default only pads up to\nthe longest input it sees, i.e., the longest on that particular\ndevice, size mismatches will happen when results are gathered\ntogether. Therefore, you can instead take advantage of the\n\"total_length\" argument of \"pad_packed_sequence()\" to make sure that\nthe \"forward()\" calls return sequences of same length. For example,\nyou can write:\n from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence\n class MyModule(nn.Module):\n # ... init, other methods, etc.", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"}
{"text": "... init, other methods, etc.\n # padded_input is of shape [B x T x *] (batch_first mode) and contains\n # the sequences sorted by lengths\n # B is the batch size\n # T is max sequence length\n def forward(self, padded_input, input_lengths):\n total_length = padded_input.size(1) # get the max sequence length\n packed_input = pack_padded_sequence(padded_input, input_lengths,\n batch_first=True)\n packed_output, _ = self.my_lstm(packed_input)\n output, _ = pad_packed_sequence(packed_output, batch_first=True,\n total_length=total_length)\n return output\n\nm = MyModule().cuda()\n dp_m = nn.DataParallel(m)\nAdditionally, extra care needs to be taken when batch dimension is dim\n\"1\" (i.e., \"batch_first=False\") with data parallelism. In this case,\nthe first argument of pack_padded_sequence \"padding_input\" will be of", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"}
{"text": "shape \"[T x B x *]\" and should be scattered along dim \"1\", but the\nsecond argument \"input_lengths\" will be of shape \"[B]\" and should be\nscattered along dim \"0\". Extra code to manipulate the tensor shapes\nwill be needed.", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"}
{"text": "MPS backend\"mps\" device enables high-performance training on GPU for MacOS\ndevices with Metal programming framework. It introduces a new device\nto map Machine Learning computational graphs and primitives on highly\nefficient Metal Performance Shaders Graph framework and tuned kernels\nprovided by Metal Performance Shaders framework respectively.\nThe new MPS backend extends the PyTorch ecosystem and provides\nexisting scripts capabilities to setup and run operations on GPU.\nTo get started, simply move your Tensor and Module to the \"mps\"\ndevice:\n # Check that MPS is available\n if not torch.backends.mps.is_available():\n if not torch.backends.mps.is_built():\n print(\"MPS not available because the current PyTorch install was not \"\n \"built with MPS enabled.\")\n else:\n print(\"MPS not available because the current MacOS version is not 12.3+ \"\n \"and/or you do not have an MPS-enabled device on this machine.\")\n else:", "source": "https://pytorch.org/docs/stable/notes/mps.html", "category": "pytorch docs"}
{"text": "else:\n mps_device = torch.device(\"mps\")\n # Create a Tensor directly on the mps device\n x = torch.ones(5, device=mps_device)\n # Or\n x = torch.ones(5, device=\"mps\")\n # Any operation happens on the GPU\n y = x * 2\n # Move your model to mps just like any other device\n model = YourFavoriteNet()\n model.to(mps_device)\n # Now every call runs on the GPU\n pred = model(x)", "source": "https://pytorch.org/docs/stable/notes/mps.html", "category": "pytorch docs"}
{"text": "Distributed Data ParallelWarning:\n The implementation of \"torch.nn.parallel.DistributedDataParallel\"\n evolves over time. This design note is written based on the state as\n of v1.4.\n\"torch.nn.parallel.DistributedDataParallel\" (DDP) transparently\nperforms distributed data parallel training. This page describes how\nit works and reveals implementation details.\nExample\n=======\nLet us start with a simple \"torch.nn.parallel.DistributedDataParallel\"\nexample. This example uses a \"torch.nn.Linear\" as the local model,\nwraps it with DDP, and then runs one forward pass, one backward pass,\nand an optimizer step on the DDP model. After that, parameters on the\nlocal model will be updated, and all models on different processes\nshould be exactly the same.\n import torch\n import torch.distributed as dist\n import torch.multiprocessing as mp\n import torch.nn as nn\n import torch.optim as optim\n from torch.nn.parallel import DistributedDataParallel as DDP\n def example(rank, world_size):", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "def example(rank, world_size):\n # create default process group\n dist.init_process_group(\"gloo\", rank=rank, world_size=world_size)\n # create local model\n model = nn.Linear(10, 10).to(rank)\n # construct DDP model\n ddp_model = DDP(model, device_ids=[rank])\n # define loss function and optimizer\n loss_fn = nn.MSELoss()\n optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)\n # forward pass\n outputs = ddp_model(torch.randn(20, 10).to(rank))\n labels = torch.randn(20, 10).to(rank)\n # backward pass\n loss_fn(outputs, labels).backward()\n # update parameters\n optimizer.step()\n def main():\n world_size = 2\n mp.spawn(example,\n args=(world_size,),\n nprocs=world_size,\n join=True)\n if name==\"main\":\n # Environment variables which need to be\n # set when using c10d's default \"env\"\n # initialization mode.\n os.environ[\"MASTER_ADDR\"] = \"localhost\"", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "os.environ[\"MASTER_ADDR\"] = \"localhost\"\n os.environ[\"MASTER_PORT\"] = \"29500\"\n main()\nDDP works with TorchDynamo. When used with TorchDynamo, apply the DDP\nmodel wrapper before compiling the model, such that torchdynamo can\napply \"DDPOptimizer\" (graph-break optimizations) based on DDP bucket\nsizes. (See TorchDynamo DDPOptimizer for more information.)\nTorchDynamo support for DDP currently requires setting\nstatic_graph=False, due to interactions between the graph tracing\nprocess and DDP's mechanism for observing operations happening on its\nmodule, but this should be fixed ultimately.\n ddp_model = DDP(model, device_ids=[rank])\n ddp_model = torch.compile(ddp_model)\nInternal Design\n===============\nThis section reveals how it works under the hood of\n\"torch.nn.parallel.DistributedDataParallel\" by diving into details of\nevery step in one iteration.\n* Prerequisite: DDP relies on c10d \"ProcessGroup\" for\n communications. Hence, applications must create \"ProcessGroup\"", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "instances before constructing DDP.\n* Construction: The DDP constructor takes a reference to the local\n module, and broadcasts \"state_dict()\" from the process with rank 0\n to all other processes in the group to make sure that all model\n replicas start from the exact same state. Then, each DDP process\n creates a local \"Reducer\", which later will take care of the\n gradients synchronization during the backward pass. To improve\n communication efficiency, the \"Reducer\" organizes parameter\n gradients into buckets, and reduces one bucket at a time. Bucket\n size can be configured by setting the bucket_cap_mb argument in\n DDP constructor. The mapping from parameter gradients to buckets is\n determined at the construction time, based on the bucket size limit\n and parameter sizes. Model parameters are allocated into buckets in\n (roughly) the reverse order of \"Model.parameters()\" from the given\n model. The reason for using the reverse order is because DDP expects", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "gradients to become ready during the backward pass in approximately\n that order. The figure below shows an example. Note that, the\n \"grad0\" and \"grad1\" are in \"bucket1\", and the other two gradients\n are in \"bucket0\". Of course, this assumption might not always be\n true, and when that happens it could hurt DDP backward speed as the\n \"Reducer\" cannot kick off the communication at the earliest possible\n time. Besides bucketing, the \"Reducer\" also registers autograd hooks\n during construction, one hook per parameter. These hooks will be\n triggered during the backward pass when the gradient becomes ready.\n* Forward Pass: The DDP takes the input and passes it to the local\n model, and then analyzes the output from the local model if\n \"find_unused_parameters\" is set to \"True\". This mode allows running\n backward on a subgraph of the model, and DDP finds out which\n parameters are involved in the backward pass by traversing the\n autograd graph from the model output and marking all unused", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "parameters as ready for reduction. During the backward pass, the\n \"Reducer\" would only wait for unready parameters, but it would still\n reduce all buckets. Marking a parameter gradient as ready does not\n help DDP skip buckets as for now, but it will prevent DDP from\n waiting for absent gradients forever during the backward pass. Note\n that traversing the autograd graph introduces extra overheads, so\n applications should only set \"find_unused_parameters\" to \"True\" when\n necessary.\n* Backward Pass: The \"backward()\" function is directly invoked on\n the loss \"Tensor\", which is out of DDP's control, and DDP uses\n autograd hooks registered at construction time to trigger gradients\n synchronizations. When one gradient becomes ready, its corresponding\n DDP hook on that grad accumulator will fire, and DDP will then mark\n that parameter gradient as ready for reduction. When gradients in\n one bucket are all ready, the \"Reducer\" kicks off an asynchronous", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "\"allreduce\" on that bucket to calculate mean of gradients across all\n processes. When all buckets are ready, the \"Reducer\" will block\n waiting for all \"allreduce\" operations to finish. When this is done,\n averaged gradients are written to the \"param.grad\" field of all\n parameters. So after the backward pass, the grad field on the same\n corresponding parameter across different DDP processes should be the\n same.\n* Optimizer Step: From the optimizer's perspective, it is\n optimizing a local model. Model replicas on all DDP processes can\n keep in sync because they all start from the same state and they\n have the same averaged gradients in every iteration.\n[image: ddp_grad_sync.png][image]\nNote:\n DDP requires \"Reducer\" instances on all processes to invoke\n \"allreduce\" in exactly the same order, which is done by always\n running \"allreduce\" in the bucket index order instead of actual\n bucket ready order. Mismatched \"allreduce\" order across processes", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "can lead to wrong results or DDP backward hang.\nImplementation\n==============\nBelow are pointers to the DDP implementation components. The stacked\ngraph shows the structure of the code.\nProcessGroup\n\n\nProcessGroup.hpp: contains the abstract API of all process group\n implementations. The \"c10d\" library provides 3 implementations out\n of the box, namely, ProcessGroupGloo, ProcessGroupNCCL, and\n ProcessGroupMPI. \"DistributedDataParallel\" uses\n \"ProcessGroup::broadcast()\" to send model states from the process\n with rank 0 to others during initialization and\n \"ProcessGroup::allreduce()\" to sum gradients.\nStore.hpp: assists the rendezvous service for process group\n instances to find each other.\nDistributedDataParallel\n\n\n\ndistributed.py: is the Python entry point for DDP. It implements the\n initialization steps and the \"forward\" function for the\n \"nn.parallel.DistributedDataParallel\" module which call into C++\n", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "libraries. Its \"_sync_param\" function performs intra-process\n parameter synchronization when one DDP process works on multiple\n devices, and it also broadcasts model buffers from the process with\n rank 0 to all other processes. The inter-process parameter\n synchronization happens in \"Reducer.cpp\".\n* comm.h: implements the coalesced broadcast helper function which is\n invoked to broadcast model states during initialization and\n synchronize model buffers before the forward pass.\n* reducer.h: provides the core implementation for gradient\n synchronization in the backward pass. It has three entry point\n functions:\n * \"Reducer\": The constructor is called in \"distributed.py\" which\n registers \"Reducer::autograd_hook()\" to gradient accumulators.\n * \"autograd_hook()\" function will be invoked by the autograd engine\n when a gradient becomes ready.\n * \"prepare_for_backward()\" is called at the end of DDP forward pass\n in \"distributed.py\". It traverses the autograd graph to find", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "unused parameters when \"find_unused_parameters\" is set to \"True\"\n in DDP constructor.\n[image: ddp_code.png][image]\nTorchDynamo DDPOptimizer\n\nDDP's performance advantage comes from overlapping allreduce\ncollectives with computations during backwards. AotAutograd prevents\nthis overlap when used with TorchDynamo for compiling a whole forward\nand whole backward graph, because allreduce ops are launched by\nautograd hooks after the whole optimized backwards computation\nfinishes.\nTorchDynamo's DDPOptimizer helps by breaking the forward graph at the\nlogical boundaries of DDP's allreduce buckets during backwards. Note:\nthe goal is to break the graph during backwards, and the simplest\nimplementation is to break the forward graphs and then call\nAotAutograd and compilation on each section. This allows DDP's\nallreduce hooks to fire in-between sections of backwards, and schedule\ncommunications to overlap with compute.\nSee this blog post for a more in-depth explanation and experimental", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "results, or read the docs and code at\ntorch/_dynamo/optimizations/distributed.py\nTo Debug DDPOptimizer, set torch._dynamo.config.log_level to DEBUG\n(for full graph dumps) or INFO (for basic info about bucket\nboundaries). To disable DDPOptimizer, set\ntorch._dynamo.config.optimize_ddp=False. DDP and TorchDynamo should\nstill work correctly without DDPOptimizer, but with performance\ndegradation.", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"}
{"text": "Features for large-scale deployments* Fleet-wide operator profiling\n* API usage logging\n* Attaching metadata to saved TorchScript models\n* Build environment considerations\n* Common extension points\nThis note talks about several extension points and tricks that might\nbe useful when running PyTorch within a larger system or operating\nmultiple systems using PyTorch in a larger organization.\nIt doesn't cover topics of deploying models to production. Check\n\"torch.jit\" or one of the corresponding tutorials.\nThe note assumes that you either build PyTorch from source in your\norganization or have an ability to statically link additional code to\nbe loaded when PyTorch is used. Therefore, many of the hooks are\nexposed as C++ APIs that can be triggered once in a centralized place,\ne.g. in static initialization code.\nFleet-wide operator profiling\n=============================\nPyTorch comes with \"torch.autograd.profiler\" capable of measuring time", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"}
{"text": "taken by individual operators on demand. One can use the same\nmechanism to do \"always ON\" measurements for any process running\nPyTorch. It might be useful for gathering information about PyTorch\nworkloads running in a given process or across the entire set of\nmachines.\nNew callbacks for any operator invocation can be added with\n\"torch::addGlobalCallback\". Hooks will be called with\n\"torch::RecordFunction\" struct that describes invocation context (e.g.\nname). If enabled, \"RecordFunction::inputs()\" contains arguments of\nthe function represented as \"torch::IValue\" variant type. Note, that\ninputs logging is relatively expensive and thus has to be enabled\nexplicitly.\nThe operator callbacks also have access to\n\"c10::ThreadLocalDebugInfo::get()\" interface that returns a pointer to\nthe struct holding the debug information. This debug information can\nbe set earlier by using \"at::DebugInfoGuard\" object. Debug information\nis propagated through the forward (including async \"fork\" tasks) and", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"}
{"text": "backward passes and can be useful for passing some extra information\nabout execution environment (e.g. model id) from the higher layers of\nthe application down to the operator callbacks.\nInvoking callbacks adds some overhead, so usually it's useful to just\nrandomly sample operator invocations. This can be enabled on per-\ncallback basis with an optional sampling rate passed into\n\"torch::addGlobalCallback\".\nNote, that \"addGlobalCallback\" is not thread-safe and can be called\nonly when no PyTorch operator is running. Usually, it's a good idea to\ncall them once during initialization.\nHere's an example:\n // Called somewhere in the program beginning\n void init() {\n // Sample one in a hundred operator runs randomly\n addGlobalCallback(\n RecordFunctionCallback(\n &onFunctionEnter,\n &onFunctionExit)\n .needsInputs(true)\n .samplingProb(0.01)\n );\n // Note, to enable observers in the model calling thread,", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"}
{"text": "// call enableRecordFunction() in the thread before running a model\n }\n void onFunctionEnter(const RecordFunction& fn) {\n std::cerr << \"Before function \" << fn.name()\n << \" with \" << fn.inputs().size() << \" inputs\" << std::endl;\n }\n void onFunctionExit(const RecordFunction& fn) {\n std::cerr << \"After function \" << fn.name();\n }\nAPI usage logging\n=================\nWhen running in a broader ecosystem, for example in managed job\nscheduler, it's often useful to track which binaries invoke particular\nPyTorch APIs. There exists simple instrumentation injected at several\nimportant API points that triggers a given callback. Because usually\nPyTorch is invoked in one-off python scripts, the callback fires only\nonce for a given process for each of the APIs.\n\"c10::SetAPIUsageHandler\" can be used to register API usage\ninstrumentation handler. Passed argument is going to be an \"api key\"\nidentifying used point, for example \"python.import\" for PyTorch", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"}
{"text": "extension import or \"torch.script.compile\" if TorchScript compilation\nwas triggered.\n SetAPIUsageLogger( {\n std::cerr << \"API was used: \" << event_name << std::endl;\n });\nNote for developers: new API trigger points can be added in code with\n\"C10_LOG_API_USAGE_ONCE(\"my_api\")\" in C++ or\n\"torch._C._log_api_usage_once(\"my.api\")\" in Python.\nAttaching metadata to saved TorchScript models\n==============================================\nTorchScript modules can be saved as an archive file that bundles\nserialized parameters and module code as TorchScript (see\n\"torch.jit.save()\"). It's often convenient to bundle additional\ninformation together with the model, for example, description of model\nproducer or auxiliary artifacts.\nIt can be achieved by passing the \"_extra_files\" argument to\n\"torch.jit.save()\" and \"torch::jit::load\" to store and retrieve\narbitrary binary blobs during saving process. Since TorchScript files", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"}
{"text": "are regular ZIP archives, extra information gets stored as regular\nfiles inside archive's \"extra/\" directory.\nThere's also a global hook allowing to attach extra files to any\nTorchScript archive produced in the current process. It might be\nuseful to tag models with producer metadata, akin to JPEG metadata\nproduced by digital cameras. Example usage might look like:\n SetExportModuleExtraFilesHook( {\n ExtraFilesMap files;\n files[\"producer_info.json\"] = \"{\\\"user\\\": \\\"\" + getenv(\"USER\") + \"\\\"}\";\n return files;\n });\nBuild environment considerations\n================================\nTorchScript's compilation needs to have access to the original python\nfiles as it uses python's \"inspect.getsource\" call. In certain\nproduction environments it might require explicitly deploying \".py\"\nfiles along with precompiled \".pyc\".\nCommon extension points\n=======================\nPyTorch APIs are generally loosely coupled and it's easy to replace a", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"}
{"text": "component with specialized version. Common extension points include:\n* Custom operators implemented in C++ - see tutorial for more details.\n* Custom data reading can be often integrated directly by invoking\n corresponding python library. Existing functionality of\n \"torch.utils.data\" can be utilized by extending \"Dataset\" or\n \"IterableDataset\".", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"}
{"text": "Numerical accuracyIn modern computers, floating point numbers are represented using IEEE\n754 standard. For more details on floating point arithmetics and IEEE\n754 standard, please see Floating point arithmetic In particular, note\nthat floating point provides limited accuracy (about 7 decimal digits\nfor single precision floating point numbers, about 16 decimal digits\nfor double precision floating point numbers) and that floating point\naddition and multiplication are not associative, so the order of the\noperations affects the results. Because of this, PyTorch is not\nguaranteed to produce bitwise identical results for floating point\ncomputations that are mathematically identical. Similarly, bitwise\nidentical results are not guaranteed across PyTorch releases,\nindividual commits, or different platforms. In particular, CPU and GPU\nresults can be different even for bitwise-identical inputs and even\nafter controlling for the sources of randomness.\nBatched computations or slice computations", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "Batched computations or slice computations\nMany operations in PyTorch support batched computation, where the same\noperation is performed for the elements of the batches of inputs. An\nexample of this is \"torch.mm()\" and \"torch.bmm()\". It is possible to\nimplement batched computation as a loop over batch elements, and apply\nthe necessary math operations to the individual batch elements, for\nefficiency reasons we are not doing that, and typically perform\ncomputation for the whole batch. The mathematical libraries that we\nare calling, and PyTorch internal implementations of operations can\nproduces slightly different results in this case, compared to non-\nbatched computations. In particular, let \"A\" and \"B\" be 3D tensors\nwith the dimensions suitable for batched matrix multiplication. Then\n\"(A@B)[0]\" (the first element of the batched result) is not guaranteed\nto be bitwise identical to \"A[0]@B[0]\" (the matrix product of the", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "first elements of the input batches) even though mathematically it's\nan identical computation.\nSimilarly, an operation applied to a tensor slice is not guaranteed to\nproduce results that are identical to the slice of the result of the\nsame operation applied to the full tensor. E.g. let \"A\" be a\n2-dimensional tensor. \"A.sum(-1)[0]\" is not guaranteed to be bitwise\nequal to \"A[:,0].sum()\".\nExtremal values\n===============\nWhen inputs contain large values such that intermediate results may\noverflow the range of the used datatype, the end result may overflow\ntoo, even though it is representable in the original datatype. E.g.:\n import torch\n a=torch.tensor([1e20, 1e20]) # fp32 type by default\n a.norm() # produces tensor(inf)\n a.double().norm() # produces tensor(1.4142e+20, dtype=torch.float64), representable in fp32\nLinear algebra (\"torch.linalg\")\n===============================\nNon-finite values\n\nThe external libraries (backends) that \"torch.linalg\" uses provide no", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "guarantees on their behaviour when the inputs have non-finite values\nlike \"inf\" or \"NaN\". As such, neither does PyTorch. The operations may\nreturn a tensor with non-finite values, or raise an exception, or even\nsegfault.\nConsider using \"torch.isfinite()\" before calling these functions to\ndetect this situation.\nExtremal values in linalg\n\nFunctions within \"torch.linalg\" have more Extremal Values than other\nPyTorch functions.\nSolvers and Inverses assume that the input matrix \"A\" is invertible.\nIf it is close to being non-invertible (for example, if it has a very\nsmall singular value), then these algorithms may silently return\nincorrect results. These matrices are said to be ill-conditioned. If\nprovided with ill-conditioned inputs, the result of these functions\nthey may vary when using the same inputs on different devices or when\nusing different backends via the keyword \"driver\".\nSpectral operations like \"svd\", \"eig\", and \"eigh\" may also return", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "incorrect results (and their gradients may be infinite) when their\ninputs have singular values that are close to each other. This is\nbecause the algorithms used to compute these decompositions struggle\nto converge for these inputs.\nRunning the computation in \"float64\" (as NumPy does by default) often\nhelps, but it does not solve these issues in all cases. Analyzing the\nspectrum of the inputs via \"torch.linalg.svdvals()\" or their condition\nnumber via \"torch.linalg.cond()\" may help to detect these issues.\nTensorFloat-32(TF32) on Nvidia Ampere devices\n=============================================\nOn Ampere Nvidia GPUs, PyTorch can use TensorFloat32 (TF32) to speed\nup mathematically intensive operations, in particular matrix\nmultiplications and convolutions. When an operation is performed using\nTF32 tensor cores, only the first 10 bits of the input mantissa are\nread. This may reduce accuracy and produce surprising results (e.g.,\nmultiplying a matrix by the identity matrix may produce results that", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "are different from the input). By default, TF32 tensor cores are\ndisabled for matrix multiplications and enabled for convolutions,\nalthough most neural network workloads have the same convergence\nbehavior when using TF32 as they have with fp32. We recommend enabling\nTF32 tensor cores for matrix multiplications with\n\"torch.backends.cuda.matmul.allow_tf32 = True\" if your network does\nnot need full float32 precision. If your network needs full float32\nprecision for both matrix multiplications and convolutions, then TF32\ntensor cores can also be disabled for convolutions with\n\"torch.backends.cudnn.allow_tf32 = False\".\nFor more information see TensorFloat32.\nReduced Precision Reduction for FP16 and BF16 GEMMs\n====================================================\nHalf-precision GEMM operations are typically done with intermediate\naccumulations (reduction) in single-precision for numerical accuracy\nand improved resilience to overflow. For performance, certain GPU", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "architectures, especially more recent ones, allow a few truncations of\nthe intermediate accumulation results to the reduced precision (e.g.,\nhalf-precision). This change is often benign from the perspective of\nmodel convergence, though it may lead to unexpected results (e.g.,\n\"inf\" values when the final result should be be representable in half-\nprecision). If reduced-precision reductions are problematic, they can\nbe turned off with\n\"torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction =\nFalse\"\nA similar flag exists for BF16 GEMM operations and is turned off by\ndefault. If BF16 reduced-precision reductions are problematic, they\ncan be turned off with\n\"torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction =\nFalse\"\nFor more information see allow_fp16_reduced_precision_reduction and\nallow_bf16_reduced_precision_reduction\nReduced Precision FP16 and BF16 GEMMs and Convolutions on AMD Instinct MI200 devices\n====================================================================================", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "On AMD Instinct MI200 GPUs, the FP16 and BF16 V_DOT2 and MFMA matrix\ninstructions flush input and output denormal values to zero. FP32 and\nFP64 MFMA matrix instructions do not flush input and output denormal\nvalues to zero. The affected instructions are only used by rocBLAS\n(GEMM) and MIOpen (convolution) kernels; all other PyTorch operations\nwill not encounter this behavior. All other supported AMD GPUs will\nnot encounter this behavior.\nrocBLAS and MIOpen provide alternate implementations for affected FP16\noperations. Alternate implementations for BF16 operations are not\nprovided; BF16 numbers have a larger dynamic range than FP16 numbers\nand are less likely to encounter denormal values. For the FP16\nalternate implementations, FP16 input values are cast to an\nintermediate BF16 value and then cast back to FP16 output after the\naccumulate FP32 operations. In this way, the input and output types\nare unchanged.\nWhen training using FP16 precision, some models may fail to converge", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "with FP16 denorms flushed to zero. Denormal values more frequently\noccur in the backward pass of training during gradient calculation.\nPyTorch by default will use the rocBLAS and MIOpen alternate\nimplementations during the backward pass. The default behavior can be\noverridden using environment variables, ROCBLAS_INTERNAL_FP16_ALT_IMPL\nand MIOPEN_DEBUG_CONVOLUTION_ATTRIB_FP16_ALT_IMPL. The behavior of\nthese environment variables is as follows:\n+-----------------+-------------+-------------+\n| | forward | backward |\n|=================|=============|=============|\n| Env unset | original | alternate |\n+-----------------+-------------+-------------+\n| Env set to 1 | alternate | alternate |\n+-----------------+-------------+-------------+\n| Env set to 0 | original | original |\n+-----------------+-------------+-------------+\nThe following is the list of operations where rocBLAS may be used:\n* torch.addbmm\n* torch.addmm\n* torch.baddbmm\n* torch.bmm\n* torch.mm", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "\ntorch.baddbmm\ntorch.bmm\ntorch.mm\ntorch.nn.GRUCell\ntorch.nn.LSTMCell\ntorch.nn.Linear\ntorch.sparse.addmm\nthe following torch._C._ConvBackend implementations:\nslowNd\nslowNd_transposed\nslowNd_dilated\nslowNd_dilated_transposed\nThe following is the list of operations where MIOpen may be used:\ntorch.nn.Conv[Transpose]Nd\nthe following torch._C._ConvBackend implementations:\nConvBackend::Miopen\nConvBackend::MiopenDepthwise\nConvBackend::MiopenTranspose\n", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"}
{"text": "Broadcasting semanticsMany PyTorch operations support NumPy's broadcasting semantics. See\nhttps://numpy.org/doc/stable/user/basics.broadcasting.html for\ndetails.\nIn short, if a PyTorch operation supports broadcast, then its Tensor\narguments can be automatically expanded to be of equal sizes (without\nmaking copies of the data).\nGeneral semantics\n=================\nTwo tensors are \"broadcastable\" if the following rules hold:\n* Each tensor has at least one dimension.\n* When iterating over the dimension sizes, starting at the trailing\n dimension, the dimension sizes must either be equal, one of them is\n 1, or one of them does not exist.\nFor Example:\n\n\n\nx=torch.empty(5,7,3)\ny=torch.empty(5,7,3)\n # same shapes are always broadcastable (i.e. the above rules always hold)\nx=torch.empty((0,))\ny=torch.empty(2,2)\n # x and y are not broadcastable, because x does not have at least 1 dimension\n # can line up trailing dimensions\nx=torch.empty(5,3,4,1)\n\n\n", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"}
{"text": "\n\n\nx=torch.empty(5,3,4,1)\ny=torch.empty( 3,1,1)\n # x and y are broadcastable.\n # 1st trailing dimension: both have size 1\n # 2nd trailing dimension: y has size 1\n # 3rd trailing dimension: x size == y size\n # 4th trailing dimension: y dimension doesn't exist\n # but:\nx=torch.empty(5,2,4,1)\ny=torch.empty( 3,1,1)\n # x and y are not broadcastable, because in the 3rd trailing dimension 2 != 3\nIf two tensors \"x\", \"y\" are \"broadcastable\", the resulting tensor size\nis calculated as follows:\n* If the number of dimensions of \"x\" and \"y\" are not equal, prepend 1\n to the dimensions of the tensor with fewer dimensions to make them\n equal length.\n* Then, for each dimension size, the resulting dimension size is the\n max of the sizes of \"x\" and \"y\" along that dimension.\nFor Example:\n # can line up trailing dimensions to make reading easier\nx=torch.empty(5,1,4,1)\ny=torch.empty( 3,1,1)\n(x+y).size()\n torch.Size([5, 3, 4, 1])\n # but not necessary:\n\n\n", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"}
{"text": "but not necessary:\n\n\n\nx=torch.empty(1)\ny=torch.empty(3,1,7)\n(x+y).size()\n torch.Size([3, 1, 7])\nx=torch.empty(5,2,4,1)\ny=torch.empty(3,1,1)\n(x+y).size()\n RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1\nIn-place semantics\n==================\nOne complication is that in-place operations do not allow the in-place\ntensor to change shape as a result of the broadcast.\nFor Example:\nx=torch.empty(5,3,4,1)\ny=torch.empty(3,1,1)\n(x.add_(y)).size()\n torch.Size([5, 3, 4, 1])\n # but:\nx=torch.empty(1,3,1)\ny=torch.empty(3,1,7)\n(x.add_(y)).size()\n RuntimeError: The expanded size of the tensor (1) must match the existing size (7) at non-singleton dimension 2.\nBackwards compatibility\n=======================\nPrior versions of PyTorch allowed certain pointwise functions to\nexecute on tensors with different shapes, as long as the number of\n\n\n", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"}
{"text": "elements in each tensor was equal. The pointwise operation would then\nbe carried out by viewing each tensor as 1-dimensional. PyTorch now\nsupports broadcasting and the \"1-dimensional\" pointwise behavior is\nconsidered deprecated and will generate a Python warning in cases\nwhere tensors are not broadcastable, but have the same number of\nelements.\nNote that the introduction of broadcasting can cause backwards\nincompatible changes in the case where two tensors do not have the\nsame shape, but are broadcastable and have the same number of\nelements. For Example:\n\n\n\ntorch.add(torch.ones(4,1), torch.randn(4))\nwould previously produce a Tensor with size: torch.Size([4,1]), but\nnow produces a Tensor with size: torch.Size([4,4]). In order to help\nidentify cases in your code where backwards incompatibilities\nintroduced by broadcasting may exist, you may set\ntorch.utils.backcompat.broadcast_warning.enabled to True, which\nwill generate a python warning in such cases.\nFor Example:\n\n\n", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"}
{"text": "For Example:\n\n\n\ntorch.utils.backcompat.broadcast_warning.enabled=True\ntorch.add(torch.ones(4,1), torch.ones(4))\n main:1: UserWarning: self and other do not have the same shape, but are broadcastable, and have the same number of elements.\n Changing behavior in a backwards incompatible manner to broadcasting rather than viewing as 1-dimensional.\n\n\n", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"}
{"text": "HIP (ROCm) semanticsROCm\u00e2\u0084\u00a2 is AMD\u00e2\u0080\u0099s open source software platform for GPU-accelerated high\nperformance computing and machine learning. HIP is ROCm's C++ dialect\ndesigned to ease conversion of CUDA applications to portable C++ code.\nHIP is used when converting existing CUDA applications like PyTorch to\nportable C++ and for new projects that require portability between AMD\nand NVIDIA.\nHIP Interfaces Reuse the CUDA Interfaces\n========================================\nPyTorch for HIP intentionally reuses the existing \"torch.cuda\"\ninterfaces. This helps to accelerate the porting of existing PyTorch\ncode and models because very few code changes are necessary, if any.\nThe example from CUDA semantics will work exactly the same for HIP:\n cuda = torch.device('cuda') # Default HIP device\n cuda0 = torch.device('cuda:0') # 'rocm' or 'hip' are not valid, use 'cuda'\n cuda2 = torch.device('cuda:2') # GPU 2 (these are 0-indexed)\n x = torch.tensor([1., 2.], device=cuda0)", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"}
{"text": "x = torch.tensor([1., 2.], device=cuda0)\n # x.device is device(type='cuda', index=0)\n y = torch.tensor([1., 2.]).cuda()\n # y.device is device(type='cuda', index=0)\n with torch.cuda.device(1):\n # allocates a tensor on GPU 1\n a = torch.tensor([1., 2.], device=cuda)\n # transfers a tensor from CPU to GPU 1\n b = torch.tensor([1., 2.]).cuda()\n # a.device and b.device are device(type='cuda', index=1)\n # You can also use Tensor.to to transfer a tensor:\n b2 = torch.tensor([1., 2.]).to(device=cuda)\n # b.device and b2.device are device(type='cuda', index=1)\n c = a + b\n # c.device is device(type='cuda', index=1)\n z = x + y\n # z.device is device(type='cuda', index=0)\n # even within a context, you can specify the device\n # (or give a GPU index to the .cuda call)\n d = torch.randn(2, device=cuda2)\n e = torch.randn(2).to(cuda2)\n f = torch.randn(2).cuda(cuda2)", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"}
{"text": "f = torch.randn(2).cuda(cuda2)\n # d.device, e.device, and f.device are all device(type='cuda', index=2)\nChecking for HIP\n================\nWhether you are using PyTorch for CUDA or HIP, the result of calling\n\"is_available()\" will be the same. If you are using a PyTorch that has\nbeen built with GPU support, it will return True. If you must check\nwhich version of PyTorch you are using, refer to this example below:\n if torch.cuda.is_available() and torch.version.hip:\n # do something specific for HIP\n elif torch.cuda.is_available() and torch.version.cuda:\n # do something specific for CUDA\nTensorFloat-32(TF32) on ROCm\n============================\nTF32 is not supported on ROCm.\nMemory management\n=================\nPyTorch uses a caching memory allocator to speed up memory\nallocations. This allows fast memory deallocation without device\nsynchronizations. However, the unused memory managed by the allocator\nwill still show as if used in \"rocm-smi\". You can use", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"}
{"text": "\"memory_allocated()\" and \"max_memory_allocated()\" to monitor memory\noccupied by tensors, and use \"memory_reserved()\" and\n\"max_memory_reserved()\" to monitor the total amount of memory managed\nby the caching allocator. Calling \"empty_cache()\" releases all\nunused cached memory from PyTorch so that those can be used by\nother GPU applications. However, the occupied GPU memory by tensors\nwill not be freed so it can not increase the amount of GPU memory\navailable for PyTorch.\nFor more advanced users, we offer more comprehensive memory\nbenchmarking via \"memory_stats()\". We also offer the capability to\ncapture a complete snapshot of the memory allocator state via\n\"memory_snapshot()\", which can help you understand the underlying\nallocation patterns produced by your code.\nTo debug memory errors, set \"PYTORCH_NO_CUDA_MEMORY_CACHING=1\" in your\nenvironment to disable caching.\nhipFFT/rocFFT plan cache\n========================\nSetting the size of the cache for hipFFT/rocFFT plans is not\nsupported.", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"}
{"text": "supported.\ntorch.distributed backends\n==========================\nCurrently, only the \"nccl\" and \"gloo\" backends for torch.distributed\nare supported on ROCm.\nCUDA API to HIP API mappings in C++\n===================================\nPlease refer: https://rocmdocs.amd.com/en/latest/Programming_Guides/H\nIP_API_Guide.html\nNOTE: The CUDA_VERSION macro, cudaRuntimeGetVersion and\ncudaDriverGetVersion APIs do not semantically map to the same values\nas HIP_VERSION macro, hipRuntimeGetVersion and hipDriverGetVersion\nAPIs. Please do not use them interchangeably when doing version\nchecks.\nFor example: Instead of using\n\"#if defined(CUDA_VERSION) && CUDA_VERSION >= 11000\" to implicitly\nexclude ROCm/HIP,\nuse the following to not take the code path for ROCm/HIP:\n\"#if defined(CUDA_VERSION) && CUDA_VERSION >= 11000 &&\n!defined(USE_ROCM)\"\nAlternatively, if it is desired to take the code path for ROCm/HIP:\n\"#if (defined(CUDA_VERSION) && CUDA_VERSION >= 11000) ||\ndefined(USE_ROCM)\"", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"}
{"text": "defined(USE_ROCM)\"\nOr if it is desired to take the code path for ROCm/HIP only for\nspecific HIP versions:\n\"#if (defined(CUDA_VERSION) && CUDA_VERSION >= 11000) ||\n(defined(USE_ROCM) && ROCM_VERSION >= 40300)\"\nRefer to CUDA Semantics doc\n===========================\nFor any sections not listed here, please refer to the CUDA semantics\ndoc: CUDA semantics\nEnabling kernel asserts\n=======================\nKernel asserts are supported on ROCm, but they are disabled due to\nperformance overhead. It can be enabled by recompiling the PyTorch\nfrom source.\nPlease add below line as an argument to cmake command parameters:\n -DROCM_FORCE_ENABLE_GPU_ASSERTS:BOOL=ON", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"}
{"text": "Generic Join Context ManagerThe generic join context manager facilitates distributed training on\nuneven inputs. This page outlines the API of the relevant classes:\n\"Join\", \"Joinable\", and \"JoinHook\". For a tutorial, see Distributed\nTraining with Uneven Inputs Using the Join Context Manager.\nclass torch.distributed.algorithms.Join(joinables, enable=True, throw_on_early_termination=False, **kwargs)\n This class defines the generic join context manager, which allows\n custom hooks to be called after a process joins. These hooks should\n shadow the collective communications of non-joined processes to\n prevent hanging and erroring and to ensure algorithmic correctness.\n Refer to \"JoinHook\" for details about the hook definition.\n Warning:\n The context manager requires each participating \"Joinable\" to\n call the method \"notify_join_context()\" before its own per-\n iteration collective communications to ensure correctness.\n Warning:", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"}
{"text": "Warning:\n The context manager requires that all \"process_group\" attributes\n in the \"JoinHook\" objects are the same. If there are multiple\n \"JoinHook\" objects, then the \"device\" of the first is used. The\n process group and device information is used for checking for\n non- joined processes and for notifying processes to throw an\n exception if \"throw_on_early_termination\" is enabled, both of\n which using an all- reduce.\n Parameters:\n * joinables (List[Joinable]) -- a list of the\n participating \"Joinable\" s; their hooks are iterated over in\n the given order.\n * enable (bool) -- a flag enabling uneven input detection;\n setting to \"False\" disables the context manager's\n functionality and should only be set when the user knows the\n inputs will not be uneven (default: \"True\").\n * throw_on_early_termination (bool) -- a flag controlling\n whether to throw an exception upon detecting uneven inputs", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"}
{"text": "(default: \"False\").\n Example:\n >>> import os\n >>> import torch\n >>> import torch.distributed as dist\n >>> import torch.multiprocessing as mp\n >>> import torch.nn.parallel.DistributedDataParallel as DDP\n >>> import torch.distributed.optim.ZeroRedundancyOptimizer as ZeRO\n >>> from torch.distributed.algorithms.join import Join\n >>>\n >>> # On each spawned worker\n >>> def worker(rank):\n >>> dist.init_process_group(\"nccl\", rank=rank, world_size=2)\n >>> model = DDP(torch.nn.Linear(1, 1).to(rank), device_ids=[rank])\n >>> optim = ZeRO(model.parameters(), torch.optim.Adam, lr=0.01)\n >>> # Rank 1 gets one more input than rank 0\n >>> inputs = [torch.tensor([1.]).to(rank) for _ in range(10 + rank)]\n >>> with Join([model, optim]):\n >>> for input in inputs:\n >>> loss = model(input).sum()\n >>> loss.backward()\n >>> optim.step()", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"}
{"text": "\n\n\n optim.step()\n >>> # All ranks reach here without hanging/erroring\n\nstatic notify_join_context(joinable)\n Notifies the join context manager that the calling process has\n not yet joined; then, if \"throw_on_early_termination=True\",\n checks if uneven inputs have been detected (i.e. if one process\n has already joined) and throws an exception if so.\n This method should be called from a \"Joinable\" object before its\n per-iteration collective communications. For example, this\n should be called at the beginning of the forward pass in\n \"DistributedDataParallel\".\n Only the first \"Joinable\" object passed into the context manager\n performs the collective communications in this method, and for\n the others, this method is vacuous.\n Parameters:\n joinable (Joinable) -- the \"Joinable\" object calling\n this method.\n Returns:\n An async work handle for the all-reduce meant to notify the\n\n\n", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"}
{"text": "context manager that the process has not yet joined if\n \"joinable\" is the first one passed into the context manager;\n \"None\" otherwise.\nclass torch.distributed.algorithms.Joinable\n This defines an abstract base class for joinable classes. A\n joinable class (inheriting from \"Joinable\") should implement\n \"join_hook()\", which returns a \"JoinHook\" instance, in addition to\n \"join_device()\" and \"join_process_group()\" that return device and\n process group information, respectively.\n abstract property join_device: device\n Returns the device from which to perform collective\n communications needed by the join context manager implementation\n itself.\n abstract join_hook(kwargs)\n Returns a \"JoinHook\" instance for the given \"Joinable\".\n Parameters:\n kwargs (dict) -- a \"dict\" containing any keyword\n arguments to modify the behavior of the join hook at run\n time; all \"Joinable\" instances sharing the same join context", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"}
{"text": "manager are forwarded the same value for \"kwargs\".\n Return type:\n JoinHook\n abstract property join_process_group: Any\n Returns the process group for the collective communications\n needed by the join context manager itself.\nclass torch.distributed.algorithms.JoinHook\n This defines a join hook, which provides two entry points in the\n join context manager: a main hook, which is called repeatedly while\n there exists a non-joined process, and a post-hook, which is called\n once all processes have joined.\n To implement a join hook for the generic join context manager,\n define a class that inherits from \"JoinHook\" and override\n \"main_hook()\" and \"post_hook()\" as appropriate.\n main_hook()\n This hook is called repeatedly while there exists a non-joined\n process to shadow collective communications in one training\n iteration (i.e. in one forward pass, backward pass, and\n optimizer step).\n post_hook(is_last_joiner)", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"}
{"text": "post_hook(is_last_joiner)\n This hook is called after all processes have joined. It is\n passed an additional \"bool\" argument \"is_last_joiner\", which\n indicates if the rank is one of the last to join.\n Parameters:\n is_last_joiner (bool) -- \"True\" if the rank is one of\n the last to join; \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"}
{"text": "torch.utils.tensorboardBefore going further, more details on TensorBoard can be found at\nhttps://www.tensorflow.org/tensorboard/\nOnce you've installed TensorBoard, these utilities let you log PyTorch\nmodels and metrics into a directory for visualization within the\nTensorBoard UI. Scalars, images, histograms, graphs, and embedding\nvisualizations are all supported for PyTorch models and tensors as\nwell as Caffe2 nets and blobs.\nThe SummaryWriter class is your main entry to log data for consumption\nand visualization by TensorBoard. For example:\n import torch\n import torchvision\n from torch.utils.tensorboard import SummaryWriter\n from torchvision import datasets, transforms\n # Writer will output to ./runs/ directory by default\n writer = SummaryWriter()\n transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])\n trainset = datasets.MNIST('mnist_train', train=True, download=True, transform=transform)", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n model = torchvision.models.resnet50(False)\n # Have ResNet model take in grayscale rather than RGB\n model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)\n images, labels = next(iter(trainloader))\n grid = torchvision.utils.make_grid(images)\n writer.add_image('images', grid, 0)\n writer.add_graph(model, images)\n writer.close()\nThis can then be visualized with TensorBoard, which should be\ninstallable and runnable with:\n pip install tensorboard\n tensorboard --logdir=runs\nLots of information can be logged for one experiment. To avoid\ncluttering the UI and have better result clustering, we can group\nplots by naming them hierarchically. For example, \"Loss/train\" and\n\"Loss/test\" will be grouped together, while \"Accuracy/train\" and\n\"Accuracy/test\" will be grouped separately in the TensorBoard\ninterface.\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "import numpy as np\n writer = SummaryWriter()\n for n_iter in range(100):\n writer.add_scalar('Loss/train', np.random.random(), n_iter)\n writer.add_scalar('Loss/test', np.random.random(), n_iter)\n writer.add_scalar('Accuracy/train', np.random.random(), n_iter)\n writer.add_scalar('Accuracy/test', np.random.random(), n_iter)\nExpected result:\n[image]\nclass torch.utils.tensorboard.writer.SummaryWriter(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')\n Writes entries directly to event files in the log_dir to be\n consumed by TensorBoard.\n The SummaryWriter class provides a high-level API to create an\n event file in a given directory and add summaries and events to it.\n The class updates the file contents asynchronously. This allows a\n training program to call methods to add data to the file directly\n from the training loop, without slowing down training.", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "init(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')\n Creates a SummaryWriter that will write out events and\n summaries to the event file.\n Parameters:\n * log_dir (str) -- Save directory location. Default is\n runs/CURRENT_DATETIME_HOSTNAME, which changes after\n each run. Use hierarchical folder structure to compare\n between runs easily. e.g. pass in 'runs/exp1', 'runs/exp2',\n etc. for each new experiment to compare across them.\n * comment (str) -- Comment log_dir suffix appended to\n the default \"log_dir\". If \"log_dir\" is assigned, this\n argument has no effect.\n * purge_step (int) -- When logging crashes at step T+X\n and restarts at step T, any events whose global_step larger\n or equal to T will be purged and hidden from TensorBoard.\n Note that crashed and resumed experiments should have the", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "same \"log_dir\".\n * max_queue (int) -- Size of the queue for pending\n events and summaries before one of the 'add' calls forces a\n flush to disk. Default is ten items.\n * flush_secs (int) -- How often, in seconds, to flush\n the pending events and summaries to disk. Default is every\n two minutes.\n * filename_suffix (str) -- Suffix added to all event\n filenames in the log_dir directory. More details on\n filename construction in tensorboard.summary.writer.event_\n file_writer.EventFileWriter.\n Examples:\n from torch.utils.tensorboard import SummaryWriter\n # create a summary writer with automatically generated folder name.\n writer = SummaryWriter()\n # folder location: runs/May04_22-14-54_s-MacBook-Pro.local/\n # create a summary writer using the specified folder name.\n writer = SummaryWriter(\"my_experiment\")", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "writer = SummaryWriter(\"my_experiment\")\n # folder location: my_experiment\n # create a summary writer with comment appended.\n writer = SummaryWriter(comment=\"LR_0.1_BATCH_16\")\n # folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/\n add_scalar(tag, scalar_value, global_step=None, walltime=None, new_style=False, double_precision=False)\n Add scalar data to summary.\n Parameters:\n * tag (str) -- Data identifier\n * scalar_value (float or string/blobname) -- Value\n to save\n * global_step (int) -- Global step value to record\n * walltime (float) -- Optional override default\n walltime (time.time()) with seconds after epoch of event\n * new_style (boolean) -- Whether to use new style\n (tensor field) or old style (simple_value field). New style\n could lead to faster data loading.\n Examples:", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "Examples:\n from torch.utils.tensorboard import SummaryWriter\n writer = SummaryWriter()\n x = range(100)\n for i in x:\n writer.add_scalar('y=2x', i * 2, i)\n writer.close()\n Expected result:\n [image]\n add_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None)\n Adds many scalar data to summary.\n Parameters:\n * main_tag (str) -- The parent name for the tags\n * tag_scalar_dict (dict) -- Key-value pair storing the\n tag and corresponding values\n * global_step (int) -- Global step value to record\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n Examples:\n from torch.utils.tensorboard import SummaryWriter\n writer = SummaryWriter()\n r = 5\n for i in range(100):\n writer.add_scalars('run_14h', {'xsinx':i*np.sin(i/r),", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "'xcosx':i*np.cos(i/r),\n 'tanx': np.tan(i/r)}, i)\n writer.close()\n # This call adds three values to the same scalar plot with the tag\n # 'run_14h' in TensorBoard's scalar section.\n Expected result:\n [image]\n add_histogram(tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None)\n Add histogram to summary.\n Parameters:\n * tag (str) -- Data identifier\n * values (torch.Tensor, numpy.ndarray, or\n string/blobname) -- Values to build histogram\n * global_step (int) -- Global step value to record\n * bins (str) -- One of {'tensorflow','auto', 'fd',\n ...}. This determines how the bins are made. You can find\n other options in:\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html\n * walltime (float) -- Optional override default", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "walltime (time.time()) seconds after epoch of event\n Examples:\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np\n writer = SummaryWriter()\n for i in range(10):\n x = np.random.random(1000)\n writer.add_histogram('distribution centers', x + i, i)\n writer.close()\n Expected result:\n [image]\n add_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW')\n Add image data to summary.\n Note that this requires the \"pillow\" package.\n Parameters:\n * tag (str) -- Data identifier\n * img_tensor (torch.Tensor, numpy.ndarray, or\n string/blobname) -- Image data\n * global_step (int) -- Global step value to record\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n * dataformats (str) -- Image data format specification", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "of the form CHW, HWC, HW, WH, etc.\n Shape:\n img_tensor: Default is (3, H, W). You can use\n \"torchvision.utils.make_grid()\" to convert a batch of tensor\n into 3xHxW format or call \"add_images\" and let us do the job.\n Tensor with (1, H, W), (H, W), (H, W, 3) is also suitable as\n long as corresponding \"dataformats\" argument is passed, e.g.\n \"CHW\", \"HWC\", \"HW\".\n Examples:\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np\n img = np.zeros((3, 100, 100))\n img[0] = np.arange(0, 10000).reshape(100, 100) / 10000\n img[1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000\n img_HWC = np.zeros((100, 100, 3))\n img_HWC[:, :, 0] = np.arange(0, 10000).reshape(100, 100) / 10000\n img_HWC[:, :, 1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000\n writer = SummaryWriter()\n writer.add_image('my_image', img, 0)", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "writer.add_image('my_image', img, 0)\n # If you have non-default dimension setting, set the dataformats argument.\n writer.add_image('my_image_HWC', img_HWC, 0, dataformats='HWC')\n writer.close()\n Expected result:\n [image]\n add_images(tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW')\n Add batched image data to summary.\n Note that this requires the \"pillow\" package.\n Parameters:\n * tag (str) -- Data identifier\n * img_tensor (torch.Tensor, numpy.ndarray, or\n string/blobname) -- Image data\n * global_step (int) -- Global step value to record\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n * dataformats (str) -- Image data format specification\n of the form NCHW, NHWC, CHW, HWC, HW, WH, etc.\n Shape:\n img_tensor: Default is (N, 3, H, W). If \"dataformats\" is", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "specified, other shape will be accepted. e.g. NCHW or NHWC.\n Examples:\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np\n img_batch = np.zeros((16, 3, 100, 100))\n for i in range(16):\n img_batch[i, 0] = np.arange(0, 10000).reshape(100, 100) / 10000 / 16 * i\n img_batch[i, 1] = (1 - np.arange(0, 10000).reshape(100, 100) / 10000) / 16 * i\n writer = SummaryWriter()\n writer.add_images('my_image_batch', img_batch, 0)\n writer.close()\n Expected result:\n [image]\n add_figure(tag, figure, global_step=None, close=True, walltime=None)\n Render matplotlib figure into an image and add it to summary.\n Note that this requires the \"matplotlib\" package.\n Parameters:\n * tag (str) -- Data identifier\n * figure (matplotlib.pyplot.figure) -- Figure or a list\n of figures\n * global_step (int) -- Global step value to record", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "* close (bool) -- Flag to automatically close the\n figure\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n add_video(tag, vid_tensor, global_step=None, fps=4, walltime=None)\n Add video data to summary.\n Note that this requires the \"moviepy\" package.\n Parameters:\n * tag (str) -- Data identifier\n * vid_tensor (torch.Tensor) -- Video data\n * global_step (int) -- Global step value to record\n * fps (float or int) -- Frames per second\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n Shape:\n vid_tensor: (N, T, C, H, W). The values should lie in [0,\n 255] for type uint8 or [0, 1] for type float.\n add_audio(tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None)\n Add audio data to summary.\n Parameters:\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "Add audio data to summary.\n Parameters:\n * tag (str) -- Data identifier\n * snd_tensor (torch.Tensor) -- Sound data\n * global_step (int) -- Global step value to record\n * sample_rate (int) -- sample rate in Hz\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n Shape:\n snd_tensor: (1, L). The values should lie between [-1, 1].\n add_text(tag, text_string, global_step=None, walltime=None)\n Add text data to summary.\n Parameters:\n * tag (str) -- Data identifier\n * text_string (str) -- String to save\n * global_step (int) -- Global step value to record\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n Examples:\n writer.add_text('lstm', 'This is an lstm', 0)\n writer.add_text('rnn', 'This is an rnn', 10)", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
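The (1, L) shape and [-1, 1] value range documented for add_audio above can be exercised with a minimal sketch. The waveform here (a one-second 440 Hz sine at the default 44100 Hz sample rate) is an illustrative choice, not taken from the docs, and the SummaryWriter call is guarded so the sketch still runs where torch/tensorboard are not installed:

```python
import numpy as np

# Build a one-second 440 Hz sine wave at the default 44100 Hz sample rate.
sample_rate = 44100
t = np.linspace(0.0, 1.0, sample_rate, endpoint=False)
wave = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
snd = wave.reshape(1, -1)  # shape (1, L); sine values already lie in [-1, 1]

try:
    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter()
    writer.add_audio("sine_440hz", torch.from_numpy(snd), global_step=0,
                     sample_rate=sample_rate)
    writer.close()
except ImportError:
    # torch/tensorboard not installed; the array above still demonstrates
    # the expected (1, L) shape and [-1, 1] value range.
    pass
```

The tag name "sine_440hz" is arbitrary; any string identifier works, and repeated calls with increasing global_step produce a slider over clips in the TensorBoard audio dashboard.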
{"text": "add_graph(model, input_to_model=None, verbose=False, use_strict_trace=True)\n Add graph data to summary.\n Parameters:\n * model (torch.nn.Module) -- Model to draw.\n * input_to_model (torch.Tensor or list of\n torch.Tensor) -- A variable or a tuple of variables to be\n fed.\n * verbose (bool) -- Whether to print graph structure in\n console.\n * use_strict_trace (bool) -- Whether to pass keyword\n argument strict to torch.jit.trace. Pass False when you\n want the tracer to record your mutable container types\n (list, dict)\n add_embedding(mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None)\n Add embedding projector data to summary.\n Parameters:\n * mat (torch.Tensor or numpy.ndarray) -- A matrix in\n which each row is the feature vector of the data point", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "* metadata (list) -- A list of labels; each element\n will be converted to a string\n * label_img (torch.Tensor) -- Images corresponding to each\n data point\n * global_step (int) -- Global step value to record\n * tag (str) -- Name for the embedding\n Shape:\n mat: (N, D), where N is number of data and D is feature\n dimension\n label_img: (N, C, H, W)\n Examples:\n import keyword\n import torch\n meta = []\n while len(meta)<100:\n meta = meta+keyword.kwlist # get some strings\n meta = meta[:100]\n for i, v in enumerate(meta):\n meta[i] = v+str(i)\n label_img = torch.rand(100, 3, 10, 32)\n for i in range(100):\n label_img[i]*=i/100.0\n writer.add_embedding(torch.randn(100, 5), metadata=meta, label_img=label_img)\n writer.add_embedding(torch.randn(100, 5), label_img=label_img)\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "writer.add_embedding(torch.randn(100, 5), metadata=meta)\n add_pr_curve(tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None)\n Adds a precision-recall curve. Plotting a precision-recall curve\n lets you understand your model's performance under different\n threshold settings. With this function, you provide the ground\n truth labeling (T/F) and prediction confidence (usually the\n output of your model) for each target. The TensorBoard UI will\n let you choose the threshold interactively.\n Parameters:\n * tag (str) -- Data identifier\n * labels (torch.Tensor, numpy.ndarray, or\n string/blobname) -- Ground truth data. Binary label for\n each element.\n * predictions (torch.Tensor, numpy.ndarray, or\n string/blobname) -- The probability that an element is\n classified as true. Values should be in [0, 1]", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "* global_step (int) -- Global step value to record\n * num_thresholds (int) -- Number of thresholds used to\n draw the curve.\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n Examples:\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np\n labels = np.random.randint(2, size=100) # binary label\n predictions = np.random.rand(100)\n writer = SummaryWriter()\n writer.add_pr_curve('pr_curve', labels, predictions, 0)\n writer.close()\n add_custom_scalars(layout)\n Create a special chart by collecting chart tags in 'scalars'.\n Note that this function can only be called once for each\n SummaryWriter() object. Because it only provides metadata to\n tensorboard, the function can be called before or after the\n training loop.\n Parameters:\n layout (dict) -- {categoryName: charts}, where\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "charts is also a dictionary {chartName:\n ListOfProperties}. The first element in ListOfProperties\n is the chart's type (one of Multiline or Margin) and\n the second element should be a list containing the tags you\n have used in add_scalar function, which will be collected\n into the new chart.\n Examples:\n layout = {'Taiwan':{'twse':['Multiline',['twse/0050', 'twse/2330']]},\n 'USA':{ 'dow':['Margin', ['dow/aaa', 'dow/bbb', 'dow/ccc']],\n 'nasdaq':['Margin', ['nasdaq/aaa', 'nasdaq/bbb', 'nasdaq/ccc']]}}\n writer.add_custom_scalars(layout)\n add_mesh(tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None)\n Add meshes or 3D point clouds to TensorBoard. The visualization\n is based on Three.js, so it allows users to interact with the\n rendered object. Besides the basic definitions such as vertices,", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "faces, users can further provide camera parameters, lighting\n conditions, etc. Please check\n https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene\n for advanced usage.\n Parameters:\n * tag (str) -- Data identifier\n * vertices (torch.Tensor) -- List of the 3D coordinates\n of vertices.\n * colors (torch.Tensor) -- Colors for each vertex\n * faces (torch.Tensor) -- Indices of vertices within\n each triangle. (Optional)\n * config_dict -- Dictionary with ThreeJS classes names\n and configuration.\n * global_step (int) -- Global step value to record\n * walltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n Shape:\n vertices: (B, N, 3). (batch, number_of_vertices, channels)\n colors: (B, N, 3). The values should lie in [0, 255] for type\n uint8 or [0, 1] for type float.", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "uint8 or [0, 1] for type float.\n faces: (B, N, 3). The values should lie in [0,\n number_of_vertices] for type uint8.\n Examples:\n from torch.utils.tensorboard import SummaryWriter\n vertices_tensor = torch.as_tensor([\n [1, 1, 1],\n [-1, -1, 1],\n [1, -1, -1],\n [-1, 1, -1],\n ], dtype=torch.float).unsqueeze(0)\n colors_tensor = torch.as_tensor([\n [255, 0, 0],\n [0, 255, 0],\n [0, 0, 255],\n [255, 0, 255],\n ], dtype=torch.int).unsqueeze(0)\n faces_tensor = torch.as_tensor([\n [0, 2, 3],\n [0, 3, 1],\n [0, 1, 2],\n [1, 3, 2],\n ], dtype=torch.int).unsqueeze(0)\n writer = SummaryWriter()\n writer.add_mesh('my_mesh', vertices=vertices_tensor, colors=colors_tensor, faces=faces_tensor)\n writer.close()", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "writer.close()\n add_hparams(hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None)\n Add a set of hyperparameters to be compared in TensorBoard.\n Parameters:\n * hparam_dict (dict) -- Each key-value pair in the\n dictionary is the name of the hyperparameter and its\n corresponding value. The type of the value can be one of\n bool, string, float, int, or None.\n * metric_dict (dict) -- Each key-value pair in the\n dictionary is the name of the metric and its corresponding\n value. Note that the key used here should be unique in the\n tensorboard record. Otherwise the value you added by\n \"add_scalar\" will be displayed in the hparam plugin. In most\n cases, this is unwanted.\n * hparam_domain_discrete -- (Optional[Dict[str,\n List[Any]]]) A dictionary that contains names of the\n hyperparameters and all discrete values they can hold", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
{"text": "* run_name (str) -- Name of the run, to be included as\n part of the logdir. If unspecified, will use current\n timestamp.\n Examples:\n from torch.utils.tensorboard import SummaryWriter\n with SummaryWriter() as w:\n for i in range(5):\n w.add_hparams({'lr': 0.1*i, 'bsize': i},\n {'hparam/accuracy': 10*i, 'hparam/loss': 10*i})\n Expected result:\n [image]\n flush()\n Flushes the event file to disk. Call this method to make sure\n that all pending events have been written to disk.\n close()\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}
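Because the writer updates the event file asynchronously, a common shutdown pattern is to use SummaryWriter as a context manager, which flushes pending events and closes the file on exit even when an exception is raised. This is a minimal sketch; the log directory name "runs/demo_flush" is illustrative, and the import is guarded so the sketch also runs where torch is not installed:

```python
try:
    from torch.utils.tensorboard import SummaryWriter

    # Exiting the with-block calls close(), which flushes pending events
    # to disk and closes the event file.
    with SummaryWriter("runs/demo_flush") as writer:
        for step in range(3):
            writer.add_scalar("demo/value", float(step), step)
        writer.flush()  # optional explicit mid-run flush
    available = True
except ImportError:
    # torch/tensorboard not installed; nothing is written in that case.
    available = False
```

Calling flush() mid-run is only needed when another process (e.g. a live TensorBoard instance) must see the latest events before the writer is closed.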