It’s only really straightforward to access the outputs of top-level submodules, and dealing with nested submodules rapidly becomes complicated. We have to be careful not to miss any important operations in between the input and the output, and we introduce the potential for errors in transcribing the exact functionality of the original module to the new module. Overall, this method and the last both have the complication of tying feature extraction to the model’s source code itself. Indeed, if we examine the source code for TorchVision models, we might suspect that some of the design choices were influenced by the desire to use them in this way for downstream tasks.

Use hooks

Hooks move us away from the paradigm of writing source code, towards one of specifying outputs. Considering our toy CNN example above, and the goal of getting feature maps for each layer, we could use hooks like this:
```python
model = CNN(3, 4, 10)
feature_maps = []  # This will be a list of Tensors, each representing a feature map

def hook_feat_map(mod, inp, out):
    feature_maps.append(out)

for block in model.blocks:
    block.register_forward_hook(hook_feat_map)

out = model(torch.zeros(1, 3, 32, 32))  # This will be the final logits over classes
```

Now we have full flexibility in terms of accessing nested submodules, and we free ourselves of the responsibility of fiddling with the source code. But this approach comes with its own downsides:

- We can only apply hooks to modules. If we have functional operations (reshape, view, functional non-linearities, etc.) for which we want the outputs, hooks won’t work directly on them.
- We have not modified anything about the source code, so the whole forward pass is executed regardless of the hooks. If we only need to access early features without any need for the final output, this could result in a lot of useless computation.
- Hooks are not TorchScript friendly.
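As a practical aside (a general PyTorch pattern, not something specific to this post): register_forward_hook returns a removable handle, so the hooks can be detached once the features have been collected. A minimal sketch, reusing the names above:

```python
# register_forward_hook returns a torch.utils.hooks.RemovableHandle;
# keeping the handles lets us detach the hooks after feature extraction.
handles = [block.register_forward_hook(hook_feat_map) for block in model.blocks]
out = model(torch.zeros(1, 3, 32, 32))
for handle in handles:
    handle.remove()  # the model is now hook-free again
```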
Here’s a summary of the different methods and their pros/cons:

| | Can use source code as is without any modifications or rewriting | Full flexibility in accessing features | Drops unnecessary computational steps | TorchScript friendly |
|---|:---:|:---:|:---:|:---:|
| Modify forward method | NO | Technically yes. Depends on how much code you’re willing to write. So in practice, NO. | YES | YES |
| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you’re willing to write. So in practice, NO. | YES | YES |
| Hooks | YES | Mostly YES. Only outputs of submodules | NO | NO |

Table 1: The pros (or cons) of some of the existing methods for feature extraction with PyTorch

In the next section of this article, let’s see how we can get YES across the board.
FX to The Rescue

The natural question for some new-starters in Python and coding at this point might be: “Can’t we just point to a line of code and tell Python or PyTorch that we want the result of that line?” For those who have spent more time coding, the reason this can’t be done is clear: multiple operations can happen in one line of code, whether they are explicitly written there or implicit as sub-operations. Just take this simple module as an example:

```python
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.submodule = MySubModule()

    def forward(self, x):
        return self.submodule(x + self.param).clamp(min=0.0, max=1.0)
```

The forward method has a single line of code, which we can unravel as:
1. Add self.param to x
2. Pass x through self.submodule. Here we would need to consider the steps happening in that submodule. I’m just going to use dummy operation names for illustration:
   I. submodule.op_1
   II. submodule.op_2
3. Apply the clamp operation

So even if we point at this one line, the question then is: “For which step do we want to extract the output?”

FX is a core PyTorch toolkit that (oversimplifying) does the unravelling I just mentioned. It does something called “symbolic tracing”, which means the Python code is interpreted and stepped through, operation-by-operation, using a dummy proxy in place of a real input. Introducing some nomenclature, each step as described above is considered a “node”, and consecutive nodes are connected to one another to form a “graph” (not unlike the common mathematical notion of a graph). Here are the “steps” above translated to this concept of a graph.
Figure 3: Graphical representation of the result of symbolically tracing our example of a simple forward method. Note that we call this a graph, and not just a set of steps, because it’s possible for the graph to branch off and recombine. Think of the skip connection in a residual block. This would look something like:
Figure 4: Graphical representation of a residual skip connection. The middle node is like the main branch of a residual block, and the final node represents the sum of the input and output of the main branch.

Now, TorchVision’s get_graph_node_names function applies FX as described above, and in the process of doing so, tags each node with a human-readable name. Let’s try this with our toy CNN model from the previous section:

```python
model = CNN(3, 4, 10)
from torchvision.models.feature_extraction import get_graph_node_names
nodes, _ = get_graph_node_names(model)
print(nodes)
```

which will result in:
```python
['x', 'blocks.0.convs.0.0', 'blocks.0.convs.0.1', 'blocks.0.convs.1.0',
 'blocks.0.convs.1.1', 'blocks.0.downsample', 'blocks.1.convs.0.0',
 'blocks.1.convs.0.1', 'blocks.1.convs.1.0', 'blocks.1.convs.1.1',
 'blocks.1.convs.2.0', 'blocks.1.convs.2.1', 'blocks.1.downsample',
 'blocks.2.convs.0.0', 'blocks.2.convs.0.1', 'blocks.2.convs.1.0',
 'blocks.2.convs.1.1', 'blocks.2.convs.2.0', 'blocks.2.convs.2.1',
 'blocks.2.downsample', 'blocks.3.convs.0.0', 'blocks.3.convs.0.1',
 'blocks.3.convs.1.0', 'blocks.3.convs.1.1', 'blocks.3.convs.2.0',
 'blocks.3.convs.2.1', 'blocks.3.downsample', 'global_pool', 'flatten', 'cls']
```

We can read these node names as hierarchically organised “addresses” for the operations of interest. For example, 'blocks.1.downsample' refers to the MaxPool2d layer in the second ConvBlock.
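Under the hood, get_graph_node_names relies on the symbolic tracing described above. If you're curious, you can inspect the raw FX graph yourself with core FX; a minimal sketch, assuming the toy CNN class from earlier:

```python
import torch

# Symbolically trace the model and inspect the raw FX graph that
# get_graph_node_names derives its names from.
traced = torch.fx.symbolic_trace(CNN(3, 4, 10))
print(traced.graph)           # one node per traced operation
traced.graph.print_tabular()  # opcode / name / target / args (requires `tabulate`)
```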
create_feature_extractor, which is where all the magic happens, goes a few steps further than get_graph_node_names. It takes desired node names as one of its input arguments, and then uses more FX core functionality to:

1. Assign the desired nodes as outputs.
2. Prune unnecessary downstream nodes and their associated parameters.
3. Translate the resulting graph back into Python code.
4. Return another PyTorch Module to the user, with the Python code from step 3 as its forward method.

As a demonstration, here’s how we would apply create_feature_extractor to get the 4 feature maps from our toy CNN model:

```python
from torchvision.models.feature_extraction import create_feature_extractor

# Confused about the node specification here? We are allowed to provide
# truncated node names, and create_feature_extractor will choose the last
# node with that prefix.
feature_extractor = create_feature_extractor(
    model, return_nodes=['blocks.0', 'blocks.1', 'blocks.2', 'blocks.3'])

# `out` will be a dict of Tensors, each representing a feature map
out = feature_extractor(torch.zeros(1, 3, 32, 32))
```

It’s as simple as that. When it comes down to it, FX feature extraction is just a way of making it possible to do what some of us would have naively hoped for when we first started programming: *“just give me the output of this code (points finger at screen)”*. FX feature extraction:

- … does not require us to fiddle with source code.
- … provides full flexibility in terms of accessing any intermediate transformation of our inputs, whether they are the results of a module or a functional operation.
- … does drop unnecessary computation steps once features have been extracted.
- … and I didn’t mention this before, but it’s also TorchScript friendly!
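One more note on the node specification: return_nodes also accepts a dict mapping node names to user-chosen output names, in case you want friendlier keys in the output dict. A small sketch (the output names on the right are arbitrary, chosen for illustration):

```python
# return_nodes as a dict: keys are node names, values are the names the
# corresponding outputs will have in the returned dict.
feature_extractor = create_feature_extractor(
    model, return_nodes={'blocks.0': 'feat0', 'blocks.3': 'feat3'})
out = feature_extractor(torch.zeros(1, 3, 32, 32))
print(out['feat0'].shape, out['feat3'].shape)
```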
Here’s that table again, with another row added for FX feature extraction:

| | Can use source code as is without any modifications or rewriting | Full flexibility in accessing features | Drops unnecessary computational steps | TorchScript friendly |
|---|:---:|:---:|:---:|:---:|
| Modify forward method | NO | Technically yes. Depends on how much code you’re willing to write. So in practice, NO. | YES | YES |
| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you’re willing to write. So in practice, NO. | YES | YES |
| Hooks | YES | Mostly YES. Only outputs of submodules | NO | NO |
| FX | YES | YES | YES | YES |

Table 2: A copy of Table 1 with an added row for FX feature extraction. FX feature extraction gets YES across the board!

Current FX Limitations

Although I would have loved to end the post there, FX does have some of its own limitations, which boil down to:

- There may be some Python code that isn’t yet handled by FX when it comes to the step of interpretation and translation into a graph.
- Dynamic control flow can’t be represented in terms of a static graph.
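As a quick illustration of the second point, here is a hand-made sketch of the kind of code symbolic tracing cannot handle; the branch depends on tensor values, which the dummy proxy input cannot resolve:

```python
import torch

class DynamicModule(torch.nn.Module):
    def forward(self, x):
        # The branch taken depends on the *values* in x, so a static graph
        # cannot represent it; symbolic tracing raises a TraceError here.
        if x.sum() > 0:
            return x.relu()
        return x.sigmoid()

# torch.fx.symbolic_trace(DynamicModule())  # raises torch.fx.proxy.TraceError
```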
The easiest thing to do when these problems crop up is to bundle the underlying code into a “leaf node”. Recall the example graph from Figure 3? Conceptually, we may agree that the submodule should be treated as a node in itself rather than a set of nodes representing the underlying operations. If we do so, we can redraw the graph as:

Figure 5: The individual operations within `submodule` (left, within the red box) may be consolidated into one node (right, node #2) if we consider `submodule` to be a "leaf" node.
We would want to do so if there is some problematic code within the submodule, but we don’t need to extract any intermediate transformations from within it. In practice, this is easily achievable by providing a keyword argument to create_feature_extractor or get_graph_node_names:

```python
model = CNN(3, 4, 10)
nodes, _ = get_graph_node_names(model, tracer_kwargs={'leaf_modules': [ConvBlock]})
print(nodes)
```

for which the output will be:

```python
['x', 'blocks.0', 'blocks.1', 'blocks.2', 'blocks.3', 'global_pool', 'flatten', 'cls']
```

Notice how, as compared to previously, all the nodes for any given ConvBlock are consolidated into a single node.

We could do something similar with functions. For example, Python’s built-in len needs to be wrapped so that the result is treated as a leaf node. Here’s how you can do that with core FX functionality:

```python
torch.fx.wrap('len')

class MyModule(nn.Module):
    def forward(self, x):
        x += 1
        len(x)

model = MyModule()
feature_extractor = create_feature_extractor(model, return_nodes=['add'])
```

For functions you define, you may instead use another keyword argument to create_feature_extractor (minor detail: here’s [why you might want to do it this way instead](https://github.com/pytorch/pytorch/issues/62021#issue-950458396)):

```python
def myfunc(x):
    return len(x)

class MyModule(nn.Module):
    def forward(self, x):
        x += 1
        myfunc(x)

model = MyModule()
feature_extractor = create_feature_extractor(
    model, return_nodes=['add'], tracer_kwargs={'autowrap_functions': [myfunc]})
```

Notice that none of the fixes above involved modifying source code.
Of course, there may be times when the very intermediate transformation one is trying to access is within the same forward method or function that is causing problems. Here, we can’t just treat that module or function as a leaf node, because then we couldn’t access the intermediate transformations within. In these cases, some rewriting of the source code will be needed. Here are some examples (not exhaustive):

- FX will raise an error when trying to trace through code with an assert statement. In this case you may need to remove that assertion or switch it with torch._assert (this is not a public function, so consider it a band-aid and use with caution).
- Symbolically tracing in-place changes to slices of tensors is not supported. You will need to make a new variable for the slice, apply the operation, then reconstruct the original tensor using concatenation or stacking (a sketch of this rewrite follows after this list).
- Representing dynamic control flow in a static graph is just not logically possible. See if you can distill the coded logic down to something that is not dynamic (see the FX documentation for tips).

In general, you may consult the FX documentation for more detail on the limitations of symbolic tracing and the possible workarounds.
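Here is a minimal sketch of the slice-rewrite workaround from the second example above; the function names are mine, chosen only for illustration:

```python
import torch

# Problematic pattern: in-place assignment to a slice of x.
def forward_inplace(x):
    x[:, 0] = torch.relu(x[:, 0])
    return x

# Traceable rewrite: make a new variable for the slice, apply the
# operation, then reconstruct the tensor with concatenation.
def forward_traceable(x):
    first = torch.relu(x[:, :1])
    rest = x[:, 1:]
    return torch.cat([first, rest], dim=1)

traced = torch.fx.symbolic_trace(forward_traceable)
```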
Conclusion

We did a quick recap on feature extraction and why one might want to do it. Although there are existing methods for doing feature extraction in PyTorch, they all have rather significant shortcomings. We learned how TorchVision’s FX feature extraction utility works and what makes it so versatile compared to the existing methods. While there are still some minor kinks to iron out, we understand its limitations and can trade them off against the limitations of other methods depending on our use case. Hopefully, by adding this new utility to your PyTorch toolkit, you’re now equipped to handle the vast majority of feature extraction requirements you may come across. Happy coding!
layout: blog_detail
title: 'PyTorch Ecosystem Day 2021 Recap and New Contributor Resources'
author: Team PyTorch

Thank you to our incredible community for making the first-ever PyTorch Ecosystem Day a success! The day was filled with discussions on new developments, trends and challenges, showcased through 71 posters, 32 breakout sessions and 6 keynote speakers.

Special thanks to our keynote speakers: Piotr Bialecki, Ritchie Ng, Miquel Farré, Joe Spisak, Geeta Chauhan, and Suraj Subramanian, who shared updates from the latest release of PyTorch, exciting work being done with partners, a use case example from Disney, the growth and development of the PyTorch community in Asia Pacific, and the latest contributor highlights. If you missed the opening talks, you can rewatch them here:

* Morning/EMEA Opening Talks
* Evening/APAC Opening Talks

In addition to the talks, we had 71 posters covering various topics such as multimodal, NLP, compiler, distributed training, researcher productivity tools, AI accelerators, and more. From the event, it was clear that an underlying thread tying all of these different projects together is the cross-collaboration of the PyTorch community. Thank you for continuing to push the state of the art with PyTorch!

To view the full catalogue of posters, please visit the PyTorch Ecosystem Day 2021 Event Page.

New Contributor Resources

Today, we are also sharing new contributor resources that we are trying out to give you the most access to up-to-date news, networking opportunities and more.
- Contributor Newsletter - Includes curated news such as RFCs, feature roadmaps, notable PRs, editorials from developers, and more, to help you keep track of everything that’s happening in our community.
- Contributors Discussion Forum - Designed for contributors to learn and collaborate on the latest development across PyTorch.
- PyTorch Developer Podcast (Beta) - Edward Yang, PyTorch Research Scientist at Facebook AI, shares bite-sized (10 to 20 min) podcast episodes discussing all sorts of internal development topics in PyTorch.

Thank you,

Team PyTorch
layout: blog_detail title: "Geospatial deep learning with TorchGeo" author: Adam Stewart (University of Illinois at Urbana-Champaign), Caleb Robinson (Microsoft AI for Good Research Lab), Isaac Corley (University of Texas at San Antonio) featured-img: 'assets/images/torchgeo-hurricane.jpg' TorchGeo is a PyTorch domain library providing datasets, samplers, transforms, and pre-trained models specific to geospatial data. https://github.com/microsoft/torchgeo
https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/
pytorch blogs
For decades, Earth observation satellites, aircraft, and more recently UAV platforms have been collecting increasing amounts of imagery of the Earth’s surface. With information about seasonal and long-term trends, remotely sensed imagery can be invaluable for solving some of the greatest challenges to humanity, including climate change adaptation, natural disaster monitoring, water resource management, and food security for a growing global population. From a computer vision perspective, this includes applications like land cover mapping (semantic segmentation), deforestation and flood monitoring (change detection), glacial flow (pixel tracking), hurricane tracking and intensity estimation (regression), and building and road detection (object detection, instance segmentation). By leveraging recent advancements in deep learning architectures, cheaper and more powerful GPUs, and petabytes of freely available satellite imagery datasets, we can come closer to solving these important problems.
National Oceanic and Atmospheric Administration satellite image of Hurricane Katrina, taken on August 28, 2005 (source). Geospatial machine learning libraries like TorchGeo can be used to detect, track, and predict future trajectories of hurricanes and other natural disasters. The challenges
In traditional computer vision datasets, such as ImageNet, the image files themselves tend to be rather simple and easy to work with. Most images have 3 spectral bands (RGB), are stored in common file formats like PNG or JPEG, and can be easily loaded with popular software libraries like PIL or OpenCV. Each image in these datasets is usually small enough to pass directly into a neural network. Furthermore, most of these datasets contain a finite number of well-curated images that are assumed to be independent and identically distributed, making train-val-test splits straightforward. As a result of this relative homogeneity, the same pre-trained models (e.g., CNNs pretrained on ImageNet) have been shown to be effective across a wide range of vision tasks using transfer learning methods. Existing libraries, such as torchvision, handle these simple cases well, and have been used to make large advances in vision tasks over the past decade.
Remote sensing imagery is not so uniform. Instead of simple RGB images, satellites tend to capture images that are multispectral (Landsat 8 has 11 spectral bands) or even hyperspectral (Hyperion has 242 spectral bands). These images capture information at a wider range of wavelengths (400 nm–15 µm), far outside of the visible spectrum. Different satellites also have very different spatial resolutions—GOES has a resolution of 4 km/px, Maxar imagery is 30 cm/px, and drone imagery resolution can be as high as 7 mm/px. These datasets almost always have a temporal component, with satellite revisits that are daily, weekly, or biweekly. Images often overlap with other images in the dataset, and need to be stitched together based on geographic metadata. These images tend to be very large (e.g., 10K x 10K pixels), so it isn't possible to pass an entire image through a neural network. This data is distributed in hundreds of different raster and vector file formats like GeoTIFF and ESRI Shapefile, requiring specialty libraries like GDAL to load.
From left to right: Mercator, Albers Equal Area, and Interrupted Goode Homolosine projections (source). Geospatial data is associated with one of many different types of reference systems that project the 3D Earth onto a 2D representation. Combining data from different sources often involves re-projecting to a common reference system in order to ensure that all layers are aligned.
Although each image is 2D, the Earth itself is 3D. In order to stitch together images, they first need to be projected onto a 2D representation of the Earth, called a coordinate reference system (CRS). Most people are familiar with equal angle representations like Mercator that distort the size of regions (Greenland looks larger than Africa even though Africa is 15x larger), but there are many other CRSs that are commonly used. Each dataset may use a different CRS, and each image within a single dataset may also be in a unique CRS. In order to use data from multiple layers, they must all share a common CRS, otherwise the data won't be properly aligned. For those who aren't familiar with remote sensing data, this can be a daunting task.
Even if you correctly georeference images during indexing, if you don't project them to a common CRS, you'll end up with rotated images with nodata values around them, and the images won't be pixel-aligned.

The solution

At the moment, it can be quite challenging to work with both deep learning models and geospatial data without having expertise in both of these very different fields. To address these challenges, we've built TorchGeo, a PyTorch domain library for working with geospatial data. TorchGeo is designed to make it simple for machine learning experts to work with geospatial data, and for remote sensing experts to explore machine learning solutions.
TorchGeo is not just a research project, but a production-quality library that uses continuous integration to test every commit with a range of Python versions on a range of platforms (Linux, macOS, Windows). It can be easily installed with any of your favorite package managers, including pip, conda, and spack:

```
$ pip install torchgeo
```

TorchGeo is designed to have the same API as other PyTorch domain libraries like torchvision, torchtext, and torchaudio. If you already use torchvision in your workflow for computer vision datasets, you can switch to TorchGeo by changing only a few lines of code. All TorchGeo datasets and samplers are compatible with the PyTorch DataLoader class, meaning that you can take advantage of wrapper libraries like PyTorch Lightning for distributed training. In the following sections, we'll explore possible use cases for TorchGeo to show how simple it is to use.

Geospatial datasets and samplers
Example application in which we combine A) a scene from Landsat 8 and B) Cropland Data Layer labels, even though these files are in different EPSG projections. We want to sample patches C) and D) from these datasets using a geospatial bounding box as an index. Many remote sensing applications involve working with geospatial datasets —datasets with geographic metadata. In TorchGeo, we define a GeoDataset class to represent these kinds of datasets. Instead of being indexed by an integer, each GeoDataset is indexed by a spatiotemporal bounding box, meaning that two or more datasets covering a different geographic extent can be intelligently combined.
In this example, we show how easy it is to work with geospatial data and to sample small image patches from a combination of Landsat and Cropland Data Layer (CDL) data using TorchGeo. First, we assume that the user has Landsat 7 and 8 imagery downloaded. Since Landsat 8 has more spectral bands than Landsat 7, we'll only use the bands that both satellites have in common. We'll create a single dataset including all images from both Landsat 7 and 8 data by taking the union between these two datasets.

```python
from torch.utils.data import DataLoader
from torchgeo.datasets import CDL, Landsat7, Landsat8, stack_samples
from torchgeo.samplers import RandomGeoSampler

landsat7 = Landsat7(root="...")
landsat8 = Landsat8(root="...", bands=Landsat8.all_bands[1:-2])
landsat = landsat7 | landsat8
```
Next, we take the intersection between this dataset and the CDL dataset. We want to take the intersection instead of the union to ensure that we only sample from regions where we have both Landsat and CDL data. Note that we can automatically download and checksum CDL data. Also note that each of these datasets may contain files in different CRSs or resolutions, but TorchGeo automatically ensures that a matching CRS and resolution are used.

```python
cdl = CDL(root="...", download=True, checksum=True)
dataset = landsat & cdl
```
This dataset can now be used with a PyTorch data loader. Unlike benchmark datasets, geospatial datasets often include very large images. For example, the CDL dataset consists of a single image covering the entire contiguous United States. In order to sample from these datasets using geospatial coordinates, TorchGeo defines a number of [*samplers*](https://torchgeo.readthedocs.io/en/latest/api/samplers.html). In this example, we'll use a random sampler that returns 256 x 256 pixel images and 10,000 samples per epoch. We'll also use a custom collation function to combine each sample dictionary into a mini-batch of samples.

```python
sampler = RandomGeoSampler(dataset, size=256, length=10000)
dataloader = DataLoader(dataset, batch_size=128, sampler=sampler, collate_fn=stack_samples)
```

This data loader can now be used in your normal training/evaluation pipeline.

```python
for batch in dataloader:
    image = batch["image"]
    mask = batch["mask"]

    # train a model, or make predictions using a pre-trained model
```

Many applications involve intelligently composing datasets based on geospatial metadata like this. For example, users may want to:

- Combine datasets for multiple image sources and treat them as equivalent (e.g., Landsat 7 and 8)
- Combine datasets for disparate geospatial locations (e.g., Chesapeake NY and PA)

These combinations require that all queries are present in at least one dataset, and can be created using a UnionDataset. Similarly, users may want to:

- Combine image and target labels and sample from both simultaneously (e.g., Landsat and CDL)
- Combine datasets for multiple image sources for multimodal learning or data fusion (e.g., Landsat and Sentinel)
These combinations require that all queries are present in both datasets, and can be created using an IntersectionDataset. TorchGeo automatically composes these datasets for you when you use the intersection (&) and union (|) operators.
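For reference, the & and | operators are shorthand for constructing these classes directly; a small sketch, equivalent to the compositions used above:

```python
from torchgeo.datasets import IntersectionDataset, UnionDataset

# Equivalent to `landsat7 | landsat8` and `landsat & cdl` above.
landsat = UnionDataset(landsat7, landsat8)
dataset = IntersectionDataset(landsat, cdl)
```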
Multispectral and geospatial transforms

In deep learning, it's common to augment and transform the data so that models are robust to variations in the input space. Geospatial data can have variations such as seasonal changes and warping effects, as well as image processing and capture issues like cloud cover and atmospheric distortion. TorchGeo utilizes augmentations and transforms from the Kornia library, which supports GPU acceleration and multispectral imagery with more than 3 channels.

Traditional geospatial analyses compute and visualize spectral indices, which are combinations of multispectral bands. Spectral indices are designed to highlight areas of interest in a multispectral image relevant to some application, such as vegetation health, areas of man-made change or increasing urbanization, or snow cover. TorchGeo supports numerous transforms, which can compute common spectral indices and append them as additional bands to a multispectral image tensor. Below, we show a simple example where we compute the Normalized Difference Vegetation Index (NDVI) on a Sentinel-2 image. NDVI measures the presence of vegetation and vegetation health, and is computed as the normalized difference between the red and near-infrared (NIR) spectral bands. Spectral index transforms operate on sample dictionaries returned from TorchGeo datasets and append the resulting spectral index to the image channel dimension.
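For reference, NDVI is computed per pixel as (NIR - red) / (NIR + red). Here is a tiny sketch of the raw arithmetic on a made-up tensor, using the same band layout assumed in the example below:

```python
import torch

# Hypothetical multispectral image of shape (bands, height, width),
# with red at index 0 and NIR at index 3 (matching the Sentinel-2 example).
image = torch.rand(4, 64, 64)
red, nir = image[0], image[3]
ndvi = (nir - red) / (nir + red + 1e-8)  # epsilon guards against division by zero
```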
First, we instantiate a Sentinel-2 dataset and load a sample image. Then, we plot the true color (RGB) representation of this data to see the region we are looking at.

```python
import matplotlib.pyplot as plt
from torchgeo.datasets import Sentinel2
from torchgeo.transforms import AppendNDVI

dataset = Sentinel2(root="...")
sample = dataset[...]
fig = dataset.plot(sample)
plt.show()
```

Next, we instantiate and compute an NDVI transform, appending this new channel to the end of the image. Sentinel-2 imagery uses index 0 for its red band and index 3 for its NIR band. In order to visualize the data, we also normalize the image. NDVI values can range from -1 to 1, but we want to use the range 0 to 1 for plotting.

```python
transform = AppendNDVI(index_red=0, index_nir=3)
sample = transform(sample)
sample["image"][-1] = (sample["image"][-1] + 1) / 2
plt.imshow(sample["image"][-1], cmap="RdYlGn_r")
plt.show()
```
True color (left) and NDVI (right) of the Texas Hill Region, taken on November 16, 2018 by the Sentinel-2 satellite. In the NDVI image, red indicates water bodies, yellow indicates barren soil, light green indicates unhealthy vegetation, and dark green indicates healthy vegetation. Benchmark datasets One of the driving factors behind progress in computer vision is the existence of standardized benchmark datasets like ImageNet and MNIST. Using these datasets, researchers can directly compare the performance of different models and training procedures to determine which perform the best. In the remote sensing domain, there are many such datasets, but due to the aforementioned difficulties of working with this data and the lack of existing libraries for loading these datasets, many researchers opt to use their own custom datasets.
One of the goals of TorchGeo is to provide easy-to-use data loaders for these existing datasets. TorchGeo includes a number of benchmark datasets—datasets that include both input images and target labels. This includes datasets for tasks like image classification, regression, semantic segmentation, object detection, instance segmentation, change detection, and more.

If you've used torchvision before, these types of datasets should be familiar. In this example, we'll create a dataset for the Northwestern Polytechnical University (NWPU) very-high-resolution ten-class (VHR-10) geospatial object detection dataset. This dataset can be automatically downloaded, checksummed, and extracted, just like with torchvision.

```python
from torch.utils.data import DataLoader
from torchgeo.datasets import VHR10

dataset = VHR10(root="...", download=True, checksum=True)
dataloader = DataLoader(dataset, batch_size=128, shuffle=True, num_workers=4)

for batch in dataloader:
    image = batch["image"]
    label = batch["label"]

    # train a model, or make predictions using a pre-trained model
```

All TorchGeo datasets are compatible with PyTorch data loaders, making them easy to integrate into existing training workflows. The only difference between a benchmark dataset in TorchGeo and a similar dataset in torchvision is that each dataset returns a dictionary with keys for each PyTorch Tensor.

Example predictions from a Mask R-CNN model trained on the NWPU VHR-10 dataset. The model predicts sharp bounding boxes and masks for all objects with high confidence scores.
Reproducibility with PyTorch Lightning Another key goal of TorchGeo is reproducibility. For many of these benchmark datasets, there is no predefined train-val-test split, or the predefined split has issues with class imbalance or geographic distribution. As a result, the performance metrics reported in the literature either can't be reproduced, or aren't indicative of how well a pre-trained model would work in a different geographic location.
In order to facilitate direct comparisons between results published in the literature and further reduce the boilerplate code needed to run experiments with datasets in TorchGeo, we have created PyTorch Lightning datamodules with well-defined train-val-test splits, and trainers for various tasks like classification, regression, and semantic segmentation. These datamodules show how to incorporate augmentations from the Kornia library, include preprocessing transforms (with pre-calculated channel statistics), and let users easily experiment with hyperparameters related to the data itself (as opposed to the modeling process). Training a semantic segmentation model on the Inria Aerial Image Labeling dataset is as easy as a few imports and four lines of code.

```python
from pytorch_lightning import Trainer
from torchgeo.datamodules import InriaAerialImageLabelingDataModule
from torchgeo.trainers import SemanticSegmentationTask

datamodule = InriaAerialImageLabelingDataModule(root_dir="...", batch_size=64, num_workers=6)
task = SemanticSegmentationTask(segmentation_model="unet", encoder_weights="imagenet", learning_rate=0.1)
trainer = Trainer(gpus=1, default_root_dir="...")

trainer.fit(model=task, datamodule=datamodule)
```

Building segmentations produced by a U-Net model trained on the Inria Aerial Image Labeling dataset. Reproducing these results is as simple as a few imports and four lines of code, making comparison of different models and training techniques simple and easy.
In our preprint we show a set of results that use the aforementioned datamodules and trainers to benchmark simple modeling approaches for several of the datasets in TorchGeo. For example, we find that a simple ResNet-50 can achieve state-of-the-art performance on the So2Sat dataset. These types of baseline results are important for evaluating the contribution of different modeling choices when tackling problems with remotely sensed data. Future work and contributing There is still a lot of remaining work to be done in order to make TorchGeo as easy to use as possible, especially for users without prior deep learning experience. One of the ways in which we plan to achieve this is by expanding our tutorials to include subjects like "writing a custom dataset" and "transfer learning", or tasks like "land cover mapping" and "object detection".
Another important project we are working on is pre-training models. Most remote sensing researchers work with very small labeled datasets, and could benefit from pre-trained models and transfer learning approaches. TorchGeo is the first deep learning library to provide models pre-trained on multispectral imagery. Our goal is to provide models for different image modalities (optical, SAR, multispectral) and specific platforms (Landsat, Sentinel, MODIS) as well as benchmark results showing their performance with different amounts of training data. Self-supervised learning is a promising method for training such models. Satellite imagery datasets often contain petabytes of imagery, but accurately labeled datasets are much harder to come by. Self-supervised learning methods will allow us to train directly on the raw imagery without needing large labeled datasets.
Aside from these larger projects, we're always looking to add new datasets, data augmentation transforms, and sampling strategies. If you're Python savvy and interested in contributing to TorchGeo, we would love to see contributions! TorchGeo is open source under an MIT license, so you can use it in almost any project.

External links:

- Homepage: https://github.com/microsoft/torchgeo
- Documentation: https://torchgeo.readthedocs.io/
- PyPI: https://pypi.org/project/torchgeo/
- Paper: https://arxiv.org/abs/2111.08872

If you like TorchGeo, give us a star on GitHub! And if you use TorchGeo in your work, please cite our paper.
Acknowledgments

We would like to thank all TorchGeo contributors for their efforts in creating the library, the Microsoft AI for Good program for support, and the PyTorch Team for their guidance. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993), the State of Illinois, and, as of December 2019, the National Geospatial-Intelligence Agency. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. The research was supported in part by NSF grants IIS-1908104, OAC-1934634, and DBI-2021898.
layout: blog_detail
title: 'What’s New in PyTorch Profiler 1.9?'
author: Sabrina Smai, Program Manager on the AI Framework team at Microsoft

PyTorch Profiler v1.9 has been released! The goal of this new release (previous PyTorch Profiler release) is to provide you with new state-of-the-art tools to help diagnose and fix machine learning performance issues, regardless of whether you are working on one machine or many. The objective is to target the execution steps that are the most costly in time and/or memory, and visualize the workload distribution between GPUs and CPUs.

Here is a summary of the five major features being released:
1. Distributed Training View: This helps you understand how much time and memory is consumed in your distributed training job. Many issues occur when you take a training model and split the load across worker nodes to run in parallel, as it can be a black box. The overall goal is to speed up model training, and this distributed training view will help you diagnose and debug issues within individual nodes.
2. Memory View: This view allows you to understand your memory usage better. This tool will help you avoid the famously pesky Out of Memory error by showing active memory allocations at various points of your program run.
3. GPU Utilization Visualization: This tool helps you make sure that your GPU is being fully utilized.
4. Cloud Storage Support: The Tensorboard plugin can now read profiling data from Azure Blob Storage, Amazon S3, and Google Cloud Platform.
5. Jump to Source Code: This feature allows you to visualize stack tracing information and jump directly into the source code. This helps you quickly optimize and iterate on your code based on your profiling results.

Getting Started with PyTorch Profiling Tool

PyTorch includes a profiling functionality called “PyTorch Profiler”. The PyTorch Profiler tutorial can be found here.

To instrument your PyTorch code for profiling, you must:

```
$ pip install torch-tb-profiler
```

```python
import torch.profiler as profiler

with profiler.profile(XXXX)
```

Comments:

• For CUDA and CPU profiling, see below:

```python
with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA],
)
```

• with profiler.record_function("$NAME"): allows putting a decorator (a tag associated with a name) around a block of code.
• The profile_memory=True parameter under profiler.profile allows you to profile CPU and GPU memory footprint.
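Putting these pieces together, here is a minimal sketch of a profiling run; the model, input, and log directory are placeholders chosen for illustration:

```python
import torch
import torchvision.models as models

model = models.resnet18()
inputs = torch.randn(5, 3, 224, 224)

with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA],
    profile_memory=True,
    with_stack=True,  # needed later for "Jump to Source Code"
    on_trace_ready=torch.profiler.tensorboard_trace_handler("./log"),
) as prof:
    with torch.profiler.record_function("forward_pass"):
        model(inputs)

# Print a summary table of the most expensive ops on CPU.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```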
Visualizing PyTorch Model Performance using PyTorch Profiler

Distributed Training

Recent advances in deep learning argue for the value of large datasets and large models, which requires you to scale out model training to more computational resources. Distributed Data Parallel (DDP) and the NVIDIA Collective Communications Library (NCCL) are the widely adopted paradigms in PyTorch for accelerating deep learning training. In this release of PyTorch Profiler, DDP with the NCCL backend is now supported.
Computation/Communication Overview

In the Computation/Communication overview under the Distributed Training view, you can observe the computation-to-communication ratio of each worker and the [load balance](https://en.wikipedia.org/wiki/Load_balancing_(computing)) between workers, as measured by granularity.

Scenario 1: If the computation and overlapping time of one worker is much larger than that of the others, this may suggest an issue in the workload balance, or that one worker is a straggler. Computation is the sum of kernel time on GPU minus the overlapping time. The overlapping time is the time saved by interleaving communications with computation. More overlapping time represents better parallelism between computation and communication. Ideally, the computation and communication completely overlap with each other. Communication is the total communication time minus the overlapping time. The example image below displays how this scenario appears on Tensorboard.
Figure: A straggler example

Scenario 2: If there is a small batch size (i.e., less computation on each worker) or the data to be transferred is large, the computation-to-communication ratio may also be small, visible in the profiler as low GPU utilization and long waiting times. This computation/communication view allows you to diagnose your code to reduce communication, either by adopting gradient accumulation or by decreasing the communication proportion through a larger batch size. DDP communication time depends on model size, and batch size has no relationship with model size, so increasing batch size could make computation time longer and the computation-to-communication ratio bigger.
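To make the quantities above concrete, here is the rough decomposition in pseudo-arithmetic, with made-up numbers rather than real profiler output:

```python
# Illustrative decomposition of one worker's step (values are made up, in ms).
kernel_time = 60.0   # total GPU kernel time
total_comm = 30.0    # total communication time
overlap = 20.0       # communication interleaved with computation

computation = kernel_time - overlap    # "Computation" in the view
communication = total_comm - overlap   # "Communication" in the view
ratio = computation / communication    # computation-to-communication ratio
# More overlap means better parallelism between computation and communication.
```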
Synchronizing/Communication Overview

In the Synchronizing/Communication view, you can observe the efficiency of communication. This is computed as the step time minus the computation and communication time. Synchronizing time is the part of the total communication time spent waiting for and synchronizing with other workers. The Synchronizing/Communication view includes initialization, data loader, CPU computation, and so on.

From this view, you can draw insights such as how much of the total communication time is really used for exchanging data, and how much is idle time spent waiting for data from other workers. For example, if there is an inefficient workload balance or a straggler issue, you’ll be able to identify it in this Synchronizing/Communication view: it will show several workers’ waiting times being longer than others’.
The table view above allows you to see the detailed statistics of all communication ops in each node. This allows you to see what operation types are being called, how many times each op is called, what the size of the data being transferred by each op is, and so on.

Memory View

This memory view tool helps you understand the hardware resource consumption of the operators in your model. Understanding the time and memory consumption at the operator level allows you to resolve performance bottlenecks and, in turn, allows your model to execute faster. Given limited GPU memory size, optimizing memory usage can:

- Allow a bigger model, which can potentially generalize better on end-level tasks.
- Allow a bigger batch size. Bigger batch sizes increase training speed.
The profiler records all memory allocations during the profiler interval. Selecting “Device” will allow you to see each operator’s memory usage on the GPU side or host side. You must enable profile_memory=True to generate the memory data, as shown here:

```python
with torch.profiler.profile(
    profile_memory=True  # note: profiling with memory enabled will take 1–2 minutes longer to complete
)
```

Important Definitions:

- “Size Increase” displays the sum of all allocation bytes minus all the memory release bytes.
- “Allocation Size” shows the sum of all allocation bytes without considering memory releases.
- “Self” means the allocated memory is not from any child operators, but allocated by the operator itself.
GPU Metric on Timeline

This feature will help you debug performance issues when one or more GPUs are underutilized. Ideally, your program should have high GPU utilization (aiming for 100% GPU utilization), minimal CPU-to-GPU communication, and no overhead.

Overview: The overview page highlights the results of three important GPU usage metrics at different levels (i.e., GPU Utilization, Est. SM Efficiency, and Est. Achieved Occupancy). Essentially, each GPU has a number of SMs, and each SM has a number of warps that can execute many threads concurrently; how many threads a warp executes depends on the GPU. At a high level, this GPU Metric on Timeline tool allows you to see the whole stack, which is useful.

If the GPU utilization result is low, this suggests a potential bottleneck is present in your model. Common reasons:

- Insufficient parallelism in kernels (i.e., low batch size)
- Small kernels called in a loop, meaning the launch overheads are not amortized
- CPU or I/O bottlenecks that lead to the GPU not receiving enough work to keep busy

The performance recommendation section of the overview page is where you’ll find potential suggestions on how to increase GPU utilization. In this example, GPU utilization is low, so the performance recommendation was to increase batch size. Increasing the batch size from 4 to 32, as per the performance recommendation, increased the GPU utilization by 60.68%.
GPU Utilization: the fraction of the step interval time in the profiler during which a GPU engine was executing a workload. The higher the utilization %, the better. The drawback of using GPU utilization alone to diagnose performance bottlenecks is that it is too high-level and coarse: it won’t be able to tell you how many Streaming Multiprocessors are in use. Note that while this metric is useful for detecting periods of idleness, a high value does not indicate efficient use of the GPU, only that it is doing anything at all. For instance, a kernel with a single thread running continuously will get a GPU utilization of 100%.
Estimated Stream Multiprocessor Efficiency (Est. SM Efficiency) is a finer-grained metric; it indicates what percentage of SMs are in use at any point in the trace. This metric reports the percentage of time where there is at least one active warp on an SM, including warps that are stalled (NVIDIA doc). Est. SM Efficiency also has its limitations. For instance, a kernel with only one thread per block can’t fully use each SM. SM Efficiency does not tell us how busy each SM is, only that each is doing anything at all, which can include stalling while waiting on the result of a memory load. To keep an SM busy, it is necessary to have a sufficient number of ready warps that can be run whenever a stall occurs.
Estimated Achieved Occupancy (Est. Achieved Occupancy) is a layer deeper than Est. SM Efficiency and GPU Utilization for diagnosing performance issues. Estimated Achieved Occupancy indicates how many warps can be active at once per SM. Having a sufficient number of active warps is usually key to achieving good throughput. Unlike GPU Utilization and SM Efficiency, it is not a goal to make this value as high as possible. As a rule of thumb, good throughput gains can be had by improving this metric to 15% and above, but at some point you will hit diminishing returns. If the value is already at 30%, for example, further gains will be uncertain. This metric reports the average value across all warp schedulers for the kernel execution period (NVIDIA doc). Within those limits, the larger the Est. Achieved Occupancy value, the better.
Overview details: Resnet50_batchsize4
Overview details: Resnet50_batchsize32

Kernel View

The kernel view shows “Blocks per SM” and “Est. Achieved Occupancy”, which make it a great tool for comparing model runs.

Mean Blocks per SM: Blocks per SM = Blocks of this kernel / SM number of this GPU. If this number is less than 1, it indicates the GPU multiprocessors are not fully utilized. “Mean Blocks per SM” is a weighted average of all runs of this kernel name, using each run’s duration as the weight.
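As a sketch of the arithmetic behind both kernel-view means (this one and the occupancy mean defined next), the duration-weighted average looks like this; the helper is hypothetical, written only for illustration:

```python
# Hypothetical helper: duration-weighted average across all runs of a kernel.
def duration_weighted_mean(values, durations):
    total = sum(durations)
    return sum(v * d for v, d in zip(values, durations)) / total

# e.g. three runs of the same kernel name:
blocks_per_sm = [0.5, 2.0, 1.5]
durations_us = [100, 300, 50]
print(duration_weighted_mean(blocks_per_sm, durations_us))  # ≈ 1.61
```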
Mean Est. Achieved Occupancy: Est. Achieved Occupancy is defined as above in the overview. “Mean Est. Achieved Occupancy” is a weighted average of all runs of this kernel name, using each run’s duration as the weight.

Trace View

The trace view displays a timeline showing the duration of the operators in your model and which system executed the operation. This view can help you identify whether high consumption and long execution times are caused by input or by model training. Currently, this trace view shows GPU Utilization and Est. SM Efficiency on a timeline.
GPU utilization is calculated independently and divided into multiple 10-millisecond buckets. The buckets’ GPU utilization values are drawn alongside the timeline in the range 0–100%. In the above example, the “ProfilerStep5” GPU utilization during thread 28022’s busy time is higher than that of the following step, “Optimizer.step”. This is where you can zoom in to investigate why.

From the above, we can see that the former’s kernels are longer than the latter’s. The latter’s kernels are too short in execution, which results in lower GPU utilization.

Est. SM Efficiency: Each kernel has a calculated Est. SM Efficiency between 0–100%. For example, the kernel below has only 64 blocks, while the number of SMs in this GPU is 80, so its “Est. SM Efficiency” is 64/80, which is 0.8.
Cloud Storage Support

After running pip install tensorboard, to have data read through these cloud providers, you can now run:

```
pip install torch-tb-profiler[blob]
pip install torch-tb-profiler[gs]
pip install torch-tb-profiler[s3]
```

to have data read through Azure Blob Storage, Google Cloud Storage, or Amazon S3, respectively. For more information, please refer to this README.
Jump to Source Code

One of the great benefits of having both TensorBoard and the PyTorch Profiler integrated directly into Visual Studio Code (VS Code) is the ability to jump directly to the source code (file and line) from the profiler stack traces. The VS Code Python Extension now supports TensorBoard integration.

Jump to source is ONLY available when TensorBoard is launched within VS Code. Stack tracing will appear on the plugin UI if the profiling was run with with_stack=True. When you click on a stack trace from the PyTorch Profiler, VS Code will automatically open the corresponding file side by side and jump directly to the line of code of interest for you to debug. This allows you to quickly make actionable optimizations and changes to your code based on the profiling results and suggestions.
GIF: Jump to Source using the Visual Studio Code plugin UI

For how to optimize batch size performance, check out the step-by-step tutorial here. PyTorch Profiler is also integrated with PyTorch Lightning, and you can simply launch your Lightning training jobs with the --trainer.profiler=pytorch flag to generate the traces. Check out an example here.

What’s Next for the PyTorch Profiler?

You just saw how PyTorch Profiler can help optimize a model. You can now try the Profiler by running pip install torch-tb-profiler to optimize your PyTorch model.
Look out for an advanced version of this tutorial in the future. We are also thrilled to continue bringing state-of-the-art tools to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue here.

For new and exciting features coming up with PyTorch Profiler, follow @PyTorch on Twitter and check us out on pytorch.org.

Acknowledgements

The author would like to thank the contributions of the following individuals to this piece. From the Facebook side: Geeta Chauhan, Gisle Dankel, Woo Kim, Sam Farahzad, and Mark Saroufim. On the Microsoft side: AI Framework engineers (Teng Gao, Mike Guo, and Yang Gu), Guoliang Hua, and Thuy Nguyen.
layout: blog_detail title: 'Announcing the Winners of the 2020 Global PyTorch Summer Hackathon' author: Team PyTorch More than 2,500 participants in this year’s Global PyTorch Summer Hackathon pushed the envelope to create unique new tools and applications for PyTorch developers and researchers. Notice: None of the projects submitted to the hackathon are associated with or offered by Facebook, Inc. This year’s projects fell into three categories: PyTorch Developer Tools: a tool or library for improving productivity and efficiency for PyTorch researchers and developers. Web/Mobile Applications Powered by PyTorch: a web or mobile interface and/or an embedded device built using PyTorch.
PyTorch Responsible AI Development Tools: a tool, library, or web/mobile app to support researchers and developers in creating responsible AI that factors in fairness, security, privacy, and more throughout its entire development process.
The virtual hackathon ran from June 22 to August 25, with more than 2,500 registered participants representing 114 countries (from the Republic of Azerbaijan to Zimbabwe to Japan), who submitted a total of 106 projects. Entrants were judged on their idea's quality, originality, potential impact, and how well they implemented it.
Meet the winners of each category below.
PyTorch Developer Tools
1st place - DeMask
DeMask is an end-to-end model for enhancing speech spoken through face masks, a clear benefit at a time when face masks are mandatory in many spaces and for workers who wear them on the job. Built with Asteroid, a PyTorch-based audio source separation toolkit, DeMask is trained to recognize the distortions that a mask's muffling creates in speech and to adjust the speech to make it sound clearer.
This submission stood out in particular because it represents both a high-quality idea and an implementation that can be reproduced by other researchers. Here is an example of how to train a speech separation model in less than 20 lines:
```python
from torch import optim
from pytorch_lightning import Trainer

from asteroid import ConvTasNet
from asteroid.losses import PITLossWrapper
from asteroid.data import LibriMix
from asteroid.engine import System

train_loader, val_loader = LibriMix.loaders_from_mini(task='sep_clean', batch_size=4)
model = ConvTasNet(n_src=2)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss = PITLossWrapper(
    lambda x, y: (x - y).pow(2).mean(-1),  # MSE
    pit_from="pw_pt",  # Point in the pairwise matrix.
)
system = System(model, optimizer, loss, train_loader, val_loader)
trainer = Trainer(fast_dev_run=True)
trainer.fit(system)
```
2nd place - carefree-learn
A PyTorch-based automated machine learning (AutoML) solution, carefree-learn provides high-level APIs to make training models on tabular data sets simpler. It features an interface similar to scikit-learn and functions as an end-to-end pipeline for tabular data sets. It automatically detects feature column types and redundant feature columns, imputes missing values, encodes string and categorical columns, and preprocesses numerical columns, among other features.
3rd Place - TorchExpo
TorchExpo is a collection of models and extensions that simplifies taking PyTorch from research to production on mobile devices. This project is more than a web and mobile application; it also comes with a Python library, available via pip install, that helps researchers convert a state-of-the-art model to TorchScript and ONNX format in just one line. Detailed docs are available here.
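For context, here is what such a conversion looks like with core PyTorch APIs (a sketch of the plain-PyTorch equivalent, not TorchExpo's own one-liner; the model and file names are placeholders):
```python
import torch

model = torch.nn.Linear(8, 2).eval()
example = torch.randn(1, 8)

# TorchScript export: trace the model with an example input, then save it.
scripted = torch.jit.trace(model, example)
scripted.save("model.pt")

# ONNX export of the same model.
torch.onnx.export(model, example, "model.onnx")
```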
Web/Mobile Applications Powered by PyTorch
1st place - Q&Aid
Q&Aid is a conceptual health-care chatbot aimed at making health-care diagnoses and facilitating communication between patients and doctors. It relies on a series of machine learning models to filter, label, and answer medical questions based on a medical image and/or text questions provided by a patient. The transcripts from the chat app can then be forwarded to local hospitals, and the patient will be contacted by one of them to make an appointment and determine proper diagnosis and care. The team hopes that this concept application helps hospitals work with patients more efficiently and provide proper care.
2nd place - Rasoee
Rasoee is an application that takes an image as input and outputs the name of the dish in it. It also lists the ingredients and recipe, along with a link to the original recipe online. Additionally, users can choose a cuisine from a drop-down menu and describe the taste and/or method of preparation in text; the application will then return matching dishes from its list of 308 identifiable dishes. The team put a significant amount of effort into gathering and cleaning various datasets to build more accurate and comprehensive models. You can check out the application here.
3rd place - Rexana the Robot — PyTorch
Rexana is an AI voice assistant meant to lay the foundation for a physical robot that can complete basic tasks around the house. The system is capable of autonomous navigation (knowing its position around the house relative to landmarks), voice-command recognition, and object detection and recognition, meaning it can be commanded to perform various household tasks (e.g., "Rexana, water the potted plant in the lounge room."). Rexana can be controlled remotely via a mobile device, and the robot itself features customizable hands (magnets, grippers, etc.) for taking on different jobs.
PyTorch Responsible AI Development Tools
1st place: FairTorch
FairTorch is a fairness library for PyTorch. It lets developers add constraints to their models to equalize metrics across subgroups by simply adding a few lines of code. Model builders can choose a metric definition of fairness for their context, and enforce it at time of training. The library offers a suite of metrics that measure an AI system’s performance among subgroups, and can apply to high-stakes examples where decision-making algorithms are deployed, such as hiring, school admissions, and banking. 2nd place: Fluence
Fluence is a PyTorch-based deep learning library for language research. It specifically addresses the large compute demands of natural language processing (NLP) research. Fluence aims to provide low-resource and computationally efficient algorithms for NLP, giving researchers algorithms that can enhance current NLP methods or help discover where current methods fall short. 3rd place: Causing: CAUSal INterpretation using Graphs
Causing (CAUSal INterpretation using Graphs) is a multivariate graphical analysis tool for bringing transparency to neural networks. It explains causality and helps researchers and developers interpret the causal effects of a given equation system to ensure fairness. Developers can input data and a model describing the dependencies between the variables within the data set, and Causing will output a colored graph of the quantified effects acting between the model's variables. It also allows developers to estimate these effects to validate whether the data fits the model.
Thank you,
The PyTorch team
https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/
pytorch blogs
layout: blog_detail
title: "PyTorch’s Tracing Based Selective Build"
author: Dhruv Matani, Suraj Subramanian
featured-img: "/assets/images/pytorchs-tracing-based-selective-build_Figure_4.png"
Introduction
TL;DR: It can be challenging to run PyTorch on mobile devices, SBCs (Single Board Computers), and IoT devices. When compiled, the PyTorch library is huge and includes dependencies that might not be needed for the on-device use case. To run a specific set of models on-device, we actually require only a small subset of the features in the PyTorch library. We found that using a PyTorch runtime generated by selective build can achieve up to a 90% reduction in binary size (for the CPU and QuantizedCPU backends of an x86-64 build on Linux). In this blog, we share our experience of generating model-specific minimal runtimes using selective build and show you how to do the same.
Why is this important for app developers?
Using a PyTorch runtime generated by selective build can reduce the size of AI-powered apps by 30+ MB, a significant reduction for a typical mobile app! Making mobile applications more lightweight has many benefits: they are runnable on a wider variety of devices, consume less cellular data, and can be downloaded and updated faster on users' devices.
What does the Developer Experience look like?
This method can work seamlessly with any existing PyTorch Mobile deployment workflow. All you need to do is replace the general PyTorch runtime library with a runtime customized for the specific models you wish to use in your application. The general steps in this process are:
1. Build the PyTorch runtime in instrumentation mode (this is called an instrumentation build of PyTorch). This records the operators, kernels, and features that are used.
2. Run your models through this instrumentation build by using the provided model_tracer binary. This generates a single YAML file that records all the features used by your model. These features will be preserved in the minimal runtime.
3. Build PyTorch using this YAML file as input. This is the selective build technique, and it greatly reduces the size of the final PyTorch binary.
4. Use this selectively-built PyTorch library to reduce the size of your mobile application!
Building the PyTorch runtime in a special "instrumentation" mode (by passing the TRACING_BASED=1 build option) generates an instrumentation build of PyTorch, along with a model_tracer binary. Running a model with this build allows us to trace the parts of PyTorch used by the model.
Figure 1: Instrumentation build of PyTorch
```bash
# Clone the PyTorch repo
git clone https://github.com/pytorch/pytorch.git
cd pytorch

# Build the model_tracer
USE_NUMPY=0 USE_DISTRIBUTED=0 USE_CUDA=0 TRACING_BASED=1 \
  python setup.py develop
```
Now this instrumentation build is used to run a model inference with representative inputs. The model_tracer binary observes the parts of the instrumentation build that were activated during the inference run, and dumps them to a YAML file.
Figure 2: YAML file generated by running model(s) on an instrumentation build
```bash
# Generate YAML file
./build/bin/model_tracer \
  --model_input_path /tmp/path_to_model.ptl \
  --build_yaml_path /tmp/selected_ops.yaml
```
Now we build the PyTorch runtime again, but this time using the YAML file generated by the tracer. The runtime now only includes those parts that are needed for this model. This is called the **"Selectively built PyTorch runtime"** in the diagram below.
```bash
# Clean out cached configuration
make clean

# Build PyTorch using Selected Operators (from the YAML file)
# using the host toolchain, and use this generated library
BUILD_PYTORCH_MOBILE_WITH_HOST_TOOLCHAIN=1 \
USE_LIGHTWEIGHT_DISPATCH=0 \
BUILD_LITE_INTERPRETER=1 \
SELECTED_OP_LIST=/tmp/selected_ops.yaml \
TRACING_BASED=1 \
  ./scripts/build_mobile.sh
```
Figure 3: Selective Build of PyTorch and model execution on a selectively built PyTorch runtime
Show me the code!
We've put together a notebook to illustrate what the process above looks like in code, using a simple PyTorch model. For a more hands-on walkthrough of deploying this on Android/iOS, this tutorial should be helpful.
Technical FAQs
Why is Tracing needed for a Selective Build of PyTorch?
In PyTorch, CPU kernels can call other operators via the PyTorch Dispatcher. Simply including the set of root operators called directly by the model is not sufficient, as there might be many more operators called transitively under the hood. Running the model on representative inputs and observing the actual list of operators called (aka "tracing") is the most accurate way of determining which parts of PyTorch are used.
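As a toy illustration of this point (a hedged sketch using core PyTorch APIs, not the model_tracer itself), tracing reveals the operators actually reached at runtime, not just the ones named in the model's code:
```python
import torch

# The source code only mentions nn.Linear, but at runtime the dispatcher
# routes through lower-level operators (e.g. addmm and friends) that a
# purely static reading of the model would miss.
model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

traced = torch.jit.trace(model, x)
print(traced.graph)  # prints the operator calls reached for this input
```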
Additionally, factors such as which dtypes a kernel should handle are also runtime features that depend on the actual inputs provided to the model. Hence, the tracing mechanism is extremely well suited for this purpose.
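For example (a small illustration, not part of the selective build tooling), the same logical operator exercises different dtype-specialized code paths depending on its inputs:
```python
import torch

# The add kernel's Float path vs. its Long path: which one a build must
# retain depends entirely on the dtypes seen at runtime.
a = torch.ones(3, dtype=torch.float32)
b = torch.ones(3, dtype=torch.int64)
print((a + a).dtype)  # torch.float32
print((b + b).dtype)  # torch.int64
```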
Which features can be selected (in or out) by using Tracing Based Selective Build?
The following features can be selected for the PyTorch runtime during the tracing-based selective build process:
1. CPU/QuantizedCPU kernels for PyTorch's ATen operators: if a PyTorch operator is not needed by a model targeted at a selectively built runtime, then the registration of that CPU kernel is omitted from the runtime. This is controlled via Torchgen code-gen.
2. Primary operators: this is controlled by a macro named TORCH_SELECTIVE_SCHEMA (via templated selective build) that either selects or de-selects a primary operator based on information in a generated header file.
3. Code that handles specific dtypes in CPU kernels: this is performed by generating exception throws in specific case statements of the switch statement generated by the macro AT_PRIVATE_CHECK_SELECTIVE_BUILD.
4. Registration of Custom C++ Classes that extend PyTorch: this is controlled by the macro TORCH_SELECTIVE_CLASS, which can be used when registering Custom C++ Classes. The torch::selective_class_<> helper is to be used in conjunction with the macro TORCH_SELECTIVE_CLASS.
What is the structure of the YAML file used during the build?
The YAML file generated after tracing looks like the example below. It encodes all the elements of the "selectable" build features as specified above.
```yaml
include_all_non_op_selectives: false
build_features: []
operators:
    aten::add.Tensor:
        is_used_for_training: false
        is_root_operator: true
        include_all_overloads: false
    aten::len.t:
        is_used_for_training: false
        is_root_operator: true
        include_all_overloads: false
kernel_metadata:
    local_scalar_dense_cpu:
    - Float
    add_stub:
    - Float
    copy:
    - Bool
    - Byte
    mul_cpu:
    - Float
custom_classes: []
```
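If you want to inspect such a file programmatically, a few lines of Python are enough (a sketch; the field names follow the example above, and the file path is a placeholder):
```python
import yaml  # requires PyYAML

with open("/tmp/selected_ops.yaml") as f:
    selected = yaml.safe_load(f)

# Root operators are the ones the model calls directly; everything else
# was pulled in transitively by the tracer.
for op, info in selected["operators"].items():
    kind = "root" if info["is_root_operator"] else "transitive"
    print(f"{op}: {kind}")

# The dtypes each traced kernel tag must keep supporting.
for tag, dtypes in selected.get("kernel_metadata", {}).items():
    print(f"{tag}: {dtypes}")
```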
How exactly is code eliminated from the generated binary?
Depending on the specific scenario, there are two main techniques used to hint the compiler and linker about unused and unreachable code, which is then cleaned up by the compiler or linker as unreachable.
[1] Unreferenced functions removed by the Linker
When a function that isn't transitively referenced from any visible function is present in the compiled object files being linked together, the linker will remove it (if the right build flags are provided). This is leveraged in two scenarios by the selective build system.
Kernel Registration in the Dispatcher
If an operator's kernel isn't needed, then it isn't registered with the dispatcher. An unregistered kernel means that the function is unreachable, and it will be removed by the linker.
Templated Selective Build
The general idea here is that a class template specialization is used to select a class that either captures a reference to a function or not (depending on whether it's used), and the linker can then clean out the unreferenced function.
For example, in the code below there's no reference to the function fn2, so it will be cleaned up by the linker since it isn't referenced anywhere.
```cpp
#include <cstdio>
#include <vector>

template <typename T, bool enabled>
struct FunctionSelector {
    T fn_;
    FunctionSelector(T fn): fn_(fn) {}
    T get() { return this->fn_; }
};

// The "false" specialization of this class does NOT retain the argument passed
// to the class constructor, which means that the function pointer passed in
// is considered to be unreferenced in the program (unless it is referenced
// elsewhere).
template <typename T>
struct FunctionSelector<T, false> {
    FunctionSelector(T) {}
};

template <typename T>
FunctionSelector<T, true> make_function_selector_true(T fn) {
    return FunctionSelector<T, true>(fn);
}

template <typename T>
FunctionSelector<T, false> make_function_selector_false(T fn) {
    return FunctionSelector<T, false>(fn);
}

typedef void(*fn_ptr_type)();

std::vector<fn_ptr_type> fns;

template <typename T>
void add_fn(FunctionSelector<T, true> fs) {
    fns.push_back(fs.get());
}

template <typename T>
void add_fn(FunctionSelector<T, false>) {
    // Do nothing.
}

// fn1 will be kept by the linker since it is added to the vector "fns" at
// runtime.
void fn1() {
    printf("fn1\n");
}

// fn2 will be removed by the linker since it isn't referenced at all.
void fn2() {
    printf("fn2\n");
}

int main() {
    add_fn(make_function_selector_true(fn1));
    add_fn(make_function_selector_false(fn2));
}
```
[2] Dead Code Eliminated by the Compiler
C++ compilers can detect dead (unreachable) code by analyzing the code's control flow statically. For example, if there's a code path that comes after an unconditional exception throw, then all the code after it is marked as dead code and not converted to object code by the compiler. Typically, compilers require the use of the -fdce flag to eliminate dead code.
In the example below, you can see that the C++ code on the left (in the red boxes) doesn't have any corresponding generated object code on the right.
Figure 4: Dead Code Elimination by C++ Compilers
This property is leveraged in the bodies of PyTorch kernel implementations, which contain a lot of repeated code to handle multiple dtypes of a Tensor. A dtype is the underlying data type that the Tensor stores elements of; it can be one of float, double, int64, bool, int8, etc. Almost every PyTorch CPU kernel uses a macro of the form AT_DISPATCH_ALL_TYPES* that is used to substitute in code specialized for every dtype that the kernel needs to handle. For example:
```cpp
AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3(
    kBool, kHalf, kBFloat16, dtype, "copy_kernel", [&] {
      cpu_kernel_vec(
          iter,
          [=](scalar_t a) -> scalar_t { return a; },
          [=](Vectorized<scalar_t> a) -> Vectorized<scalar_t> { return a; });
    });
```
The macro AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3 internally contains a switch-case statement like the one in Figure 4 above. The tracing process records the dtypes triggered for the kernel tag "copy_kernel", and the build process then inserts throw statements into every case statement that handles a dtype not required for this kernel tag. This is how dtype selectivity is implemented in PyTorch's Tracing Based Selective Build.
Conclusion
Tracing Based Selective Build is a practical and scalable approach to retaining only the parts of PyTorch that an application actually uses, including code whose use static analysis cannot detect because it is highly data- and input-dependent. This article provides detailed insight into how Tracing Based Selective Build works under the hood and the technical details of its implementation. These techniques can also be applied to other applications and situations that can benefit from reduced binary size.
https://pytorch.org/blog/pytorchs-tracing-based-selective-build/
pytorch blogs
layout: blog_detail title: "PyTorch & OpenXLA: The Path Forward" author: Milad Mohammadi, Jack Cao, Shauheen Zahirazami, Joe Spisak, and Jiewen Tan As we celebrate the release of OpenXLA, PyTorch 2.0, and PyTorch/XLA 2.0, it’s worth taking a step back and sharing where we see it all going in the short to medium term. With PyTorch adoption leading in the AI space and XLA supporting best-in-class compiler features, PyTorch/XLA is well positioned to provide a cutting edge development stack for both model training and inference. To achieve this, we see investments in three main areas:
https://pytorch.org/blog/pytorch-2.0-xla-path-forward/
pytorch blogs