| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/sentence-transformers
| 635
|
sbert.net is down. Where can I view the list of pretrained models?
|
https://github.com/huggingface/sentence-transformers/issues/635
|
closed
|
[] | 2020-12-19T12:16:46Z
| 2020-12-19T14:10:36Z
| null |
mani-rai
|
|
pytorch/vision
| 3,188
|
Cannot Build With FFmpeg Support
|
## ❓ Questions and Help
### Cannot Build With FFmpeg Support
Hi.
While trying to build `torchvision` from source, I've seen this output:
```
+ python3 setup.py build
Building wheel torchvision-0.8.2
PNG found: True
libpng version: 1.6.37
Building torchvision with PNG image support
libpng include path: /usr/include/libpng16
Running build on conda-build: False
Running build on conda: False
JPEG found: True
Building torchvision with JPEG image support
FFmpeg found: False
running build
running build_py
creating build
(omitted)
```
It showed that **`FFmpeg found: False`**. I tried `apt install ffmpeg` and built again, but it still showed FFmpeg as not found.
Then I tried:
```shell
apt update
apt install ffmpeg \
libavformat-dev libavcodec-dev libavdevice-dev \
libavutil-dev libswscale-dev libavresample-dev libavfilter-dev
# deps of python package av
pip3 install ffmpeg av
```
But it showed `FFmpeg found: False` once again.
I could not find any instructions in the [README](../blob/master/README.rst) about installing the `ffmpeg` dependencies needed to build `torchvision`, so how can I do that, or where can I find it?
Thanks.
cc @bjuncek
|
https://github.com/pytorch/vision/issues/3188
|
closed
|
[
"question",
"topic: build",
"module: video"
] | 2020-12-18T15:41:06Z
| 2021-11-16T07:26:28Z
| null |
KumaTea
|
huggingface/datasets
| 1,600
|
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
|
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
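For reference, a minimal sketch of one possible workaround (an assumption about the intent: split the single `train` split that `load_dataset` puts inside the returned `DatasetDict`, since `train_test_split` is defined on `Dataset`, not `DatasetDict`):
```python
from datasets import load_dataset

# load_dataset returns a DatasetDict; call train_test_split on one of its splits
dataset = load_dataset('csv', data_files='data.txt')
split = dataset['train'].train_test_split(test_size=0.1)
print(split)  # DatasetDict with new 'train' and 'test' splits
```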
|
https://github.com/huggingface/datasets/issues/1600
|
closed
|
[
"question"
] | 2020-12-18T05:37:10Z
| 2023-05-03T04:22:55Z
| null |
david-waterworth
|
pytorch/vision
| 3,184
|
Are these 2 lines of code necessary?
|
Hi,
https://github.com/pytorch/vision/blob/master/references/video_classification/train.py#L134
https://github.com/pytorch/vision/blob/master/references/video_classification/train.py#L169
I wonder if these two lines are necessary.
Why do we need to assign the transforms to the dataset after loading it from the cache, when the transforms were already set when it was saved?
I removed them and the code still seems to work.
Thanks.
|
https://github.com/pytorch/vision/issues/3184
|
closed
|
[
"question"
] | 2020-12-17T16:40:06Z
| 2021-01-21T13:10:18Z
| null |
jc-hou
|
pytorch/serve
| 917
|
Implement one of the TODOs: Pass request id while loading model in model_loader.py
|
<!--
Thank you for suggesting an idea to improve torchserve model serving experience.
Please fill in as much of the template below as you're able.
-->
**TODO**
https://github.com/pytorch/serve/blob/6c078d6cd1f91c1614c18abf2f94d3571be1b659/ts/model_loader.py#L71
```python
class TsModelLoader(ModelLoader):
    """
    TorchServe 1.0 Model Loader
    """

    def load(self, model_name, model_dir, handler, gpu_id, batch_size, envelope=None):
        """
        Load TorchServe 1.0 model from file.

        :param model_name:
        :param model_dir:
        :param handler:
        :param gpu_id:
        :param batch_size:
        :param envelope:
        :return:
        """
        logging.debug("Loading model - working dir: %s", os.getcwd())
        # TODO: Request ID is not given. UUID is a temp UUID.
        metrics = MetricsStore(uuid.uuid4(), model_name)
        manifest_file = os.path.join(model_dir, "MAR-INF/MANIFEST.json")
        manifest = None
        if os.path.exists(manifest_file):
            with open(manifest_file) as f:
                manifest = json.load(f)
```
## Is your feature request related to a problem? Please describe.
<!-- Please describe the problem you are trying to solve. -->
The main aim is to connect the request maker (frontend) to the request processor (backend) using the request id. One use case is debugging when there is an error: it is much easier if we have the request id instead of a random UUID.
## Describe the solution
<!-- Please describe the desired behavior. -->
When encoding the `ModelLoadModelRequest` into the buffer, also send the request id that was used to create that particular request.
## Describe alternative solutions
<!-- Please describe alternative solutions or features you have considered. -->
|
https://github.com/pytorch/serve/issues/917
|
closed
|
[
"help wanted",
"question"
] | 2020-12-17T04:06:19Z
| 2021-11-16T02:42:09Z
| null |
rishabh1212
|
pytorch/vision
| 3,175
|
error: ‘constexpr’ call flows off the end of the function
|
### envs
libtorch==1.7.1
vision == 0.8.2
### install
```bash
cmake -DWITH_CUDA=on ..
make
```
### errors
```
libtorch-cxx11-abi-shared-with-deps-1.7.1/libtorch/include/ATen/core/op_registration/infer_schema.h:120:16: error: ‘constexpr’ call flows off the end of the function
constexpr auto returns = createReturns<ReturnType>::call();
               ^~~~~~~
make[2]: *** [CMakeFiles/torchvision.dir/build.make:518: CMakeFiles/torchvision.dir/torchvision/csrc/ops/cuda/deform_conv2d_kernel.cu.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/torchvision.dir/all] Error 2
```
I have tested with C++11, C++14, and C++17; C++14 and C++17 give the same error.
cc @seemethere
|
https://github.com/pytorch/vision/issues/3175
|
closed
|
[
"question",
"module: c++ frontend"
] | 2020-12-16T05:18:51Z
| 2020-12-17T15:16:04Z
| null |
onism26
|
pytorch/pytorch
| 49,445
|
[doc] how to prevent pytorch-nightly from being replaced by a released version on pip install
|
## 📚 Documentation
I found an issue with pytorch-nightly and pip install of some packages depending on pytorch.
If a user installs pytorch-nightly using:
```
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U
```
which allows pre-released versions as prescribed on https://pytorch.org/get-started/locally/, then installing some other packages that include `torch` in their requirements with:
```
pip install package1 package2
```
will wipe out the nightly build and install the latest release instead.
I'm not 100% sure yet when this happens. I think it might be the case for python pip packages that don't have a binary wheel and need to be built from source and perhaps depend on pytorch to build.
For example this happens with `fairscale` (no binary wheel provided) but doesn't happen with `fairseq` which provides a binary wheel on pypi. It happened before with other packages - I will try to identify the correct group.
The solution in such circumstances is to pass the same `--pre -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html` arguments used to install the nightly to `pip install package-depending-on-pytorch`, so that the pre-released version stays installed, e.g.:
```
pip install fairscale --pre -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html
```
I have no idea where this could be documented.
cc @ezyang @seemethere @malfet @walterddr @jlin27 @mruberry
|
https://github.com/pytorch/pytorch/issues/49445
|
open
|
[
"module: binaries",
"module: docs",
"oncall: releng",
"triaged"
] | 2020-12-16T02:42:27Z
| 2021-05-31T17:06:32Z
| null |
stas00
|
pytorch/vision
| 3,169
|
Width Calculation For Bounding Boxes in torchvision\models\detection\_utils.py
|
In the function encode_boxes (line 79 of torchvision\models\detection\_utils.py), the widths and heights of the matched ground-truth proposals are computed as
```
ex_widths = proposals_x2 - proposals_x1
ex_heights = proposals_y2 - proposals_y1
```
But for a bounding box from MS COCO such as [368, 413, 368, 417], where x_min equals x_max, this yields a width of 0. I guess it is a matter of opinion whether this is a "valid" bounding box, but it seems to me that x_min = x_max is valid for a box that is 1 pixel wide and y_max - y_min pixels high. In any case it causes targets_dw or targets_dh to take torch.log of 0, giving float(-inf), which can of course be easily fixed by adding +1 to the width and height:
```
ex_widths = proposals_x2 - proposals_x1 + 1
ex_heights = proposals_y2 - proposals_y1 + 1
```
Either that, or I could just filter out boxes with x_min = x_max or y_min = y_max.
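As a small illustration (a hedged sketch, not the library's actual fix): a degenerate box with x_min equal to x_max yields a zero width and hence log(0) = -inf, and filtering such boxes before encoding avoids it:
```python
import torch

boxes = torch.tensor([[368., 413., 368., 417.],   # degenerate: x_min == x_max
                      [100., 100., 150., 180.]])
widths = boxes[:, 2] - boxes[:, 0]
heights = boxes[:, 3] - boxes[:, 1]
print(torch.log(widths))  # tensor([-inf, 3.9120])

# one possible workaround: drop boxes with zero width or height before encoding
keep = (widths > 0) & (heights > 0)
boxes = boxes[keep]
```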
|
https://github.com/pytorch/vision/issues/3169
|
closed
|
[
"question",
"module: ops"
] | 2020-12-14T08:54:46Z
| 2020-12-14T14:57:04Z
| null |
JamesMcCullochDickens
|
pytorch/pytorch
| 49,304
|
How to save model with half precision?
|
## ❓ Questions and Help
My model includes 5 ResNet-18 networks; saved at the default precision (float32), they occupy about 220 MB of disk space.
My idea is to reduce the storage to 110 MB, so I used model.half() to convert the weights to 16-bit precision.
I then used torch.save(model.state_dict(), 'model.pt') to save the model, but it still takes 220 MB of storage.
Does anyone know how to deal with this? Thanks very much.
|
https://github.com/pytorch/pytorch/issues/49304
|
closed
|
[] | 2020-12-14T02:07:34Z
| 2020-12-14T06:54:39Z
| null |
xinfangliu
|
pytorch/pytorch
| 49,298
|
[question] How hard would it be to implement 4-bit precision training?
|
I came across the paper [Ultra-Low Precision 4-bit Training of Deep Neural Networks](https://proceedings.neurips.cc/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf) on NeurIPS 2020. I think it would be cool to implement support for it in PyTorch. I think it can be done quite efficiently on CPU using the AVX2 instruction set, as all the multiplication/addition operations can be stored in a fast cache. The operations would just make a lookup in this table.
I had a look at how things are implemented in the library. If I am correct, there is enough abstraction to make this doable: I would need to implement a kernel and add it to ATen's DispatchStub, or something like that. If I copy the implementation in `pytorch/aten/src/ATen/quantized/` and make it work with a custom `fp4` type, that should work for end-to-end training, right? To start playing around with this, like training my own MNIST, it should be enough to just implement addition and multiplication for something like an MLP with ReLUs: all computation consists only of affine operations, so + and * should be enough.
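To make the lookup-table idea concrete, here is a tiny illustrative sketch (my own toy assumption of a software-emulated `fp4` multiply, not the format from the paper and not an ATen kernel): with only 16 representable values, every pairwise product fits in a 16x16 table, so a "multiplication" becomes an index lookup plus re-quantization.
```python
import numpy as np

# Toy 4-bit format: 16 hand-picked representable values (illustration only).
FP4_VALUES = np.array([0.0, 0.0625, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0,
                       -0.0625, -0.125, -0.25, -0.5, -1.0, -2.0, -4.0, -8.0])

def quantize(x):
    """Index of the representable fp4 value nearest to x."""
    return int(np.argmin(np.abs(FP4_VALUES - x)))

# Precompute all 16x16 products once; a "multiply" is then a table lookup.
MUL_TABLE = np.array([[quantize(a * b) for b in FP4_VALUES] for a in FP4_VALUES],
                     dtype=np.uint8)

a, b = quantize(0.4), quantize(-1.7)
product = FP4_VALUES[MUL_TABLE[a, b]]
print(FP4_VALUES[a], FP4_VALUES[b], product)  # 0.5 -2.0 -1.0
```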
I would appreciate high-level guidance / help links on this. Thank you!
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang
|
https://github.com/pytorch/pytorch/issues/49298
|
open
|
[
"module: internals",
"triaged"
] | 2020-12-13T18:04:02Z
| 2024-05-29T19:02:17Z
| null |
michalsustr
|
pytorch/vision
| 3,168
|
Getting Error: NotADirectoryError: [WinError 267] The directory name is invalid. File and folder both are valid
|
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
I am getting the following error
Getting Error: NotADirectoryError: [WinError 267] The directory name is invalid. File and folder both are valid
I am using the following code:
```python
# Load all image data
data_dir = os.getcwd()
folder_name = "train"
image_folders = os.path.join(data_dir, folder_name)
transform = transforms.Compose([transforms.Resize((512,512)), transforms.ToTensor()])
images = []
for file in os.listdir(image_folders):
    # print("1-->"+file)
    images.append(ImageFolder(os.path.join(image_folders, file), transform=transform))
datasets = torch.utils.data.ConcatDataset(images)
```
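For context, a minimal sketch of the directory layout `ImageFolder` expects (assuming that is the cause here; the full traceback is not shown): it wants a root directory whose subdirectories are the class folders, rather than being handed individual image files.
```python
import os
import torch
from torchvision import transforms
from torchvision.datasets import ImageFolder

# ImageFolder wants: root/<class_name>/<image>.JPG
# e.g. D:\MS_Program\DR\Code\train\class_a\img1.JPG
data_dir = os.path.join(os.getcwd(), "train")
transform = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor()])

dataset = ImageFolder(data_dir, transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=8)
```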
## To Reproduce
I have placed the files in D:\MS_Program\DR\Code\train
The file extension is .JPG
Steps to reproduce the behavior:
1. Run the piece of code
1.
1.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): 1.7.1
- OS (e.g., Linux): Windows
- How you installed PyTorch / torchvision (`conda`, `pip`, source): Conda
- Build command you used (if compiling from source):
- Python version: Python 3.7.6
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
cc @pmeier
|
https://github.com/pytorch/vision/issues/3168
|
closed
|
[
"question",
"module: datasets"
] | 2020-12-13T15:33:38Z
| 2021-02-21T16:12:31Z
| null |
manojrustagi79
|
huggingface/datasets
| 1,514
|
how to get all the options of a property in datasets
|
Hi,
Could you tell me how I can get all the unique options of a property of a dataset?
For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without fetching all the training labels and then forming a set? Thanks.
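A minimal sketch of two ways this usually works (assuming the column is called `label` and is a `ClassLabel` feature; adjust the names for the actual dataset):
```python
from datasets import load_dataset

dataset = load_dataset("super_glue", "boolq", split="train")

# If the column is a ClassLabel feature, the possible values live in the schema:
print(dataset.features["label"].names)

# Otherwise, Dataset.unique scans the column without building a set by hand:
print(dataset.unique("label"))
```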
|
https://github.com/huggingface/datasets/issues/1514
|
closed
|
[
"question"
] | 2020-12-12T16:24:08Z
| 2022-05-25T16:27:29Z
| null |
rabeehk
|
pytorch/tutorials
| 1,277
|
cannot import name 'extract_archive', when run seq-to-seq model in the google colab.
|
Why does running the Seq-to-Seq model example from the PyTorch tutorials in Google Colab produce this problem, and how can it be solved?
The example code is the following:
```python
import io
import torch
from torchtext.utils import download_from_url, extract_archive
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

url = 'https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip'
test_filepath, valid_filepath, train_filepath = extract_archive(download_from_url(url))
tokenizer = get_tokenizer('basic_english')
vocab = build_vocab_from_iterator(map(tokenizer,
                                      iter(io.open(train_filepath,
                                                   encoding="utf8"))))

def data_process(raw_text_iter):
    data = [torch.tensor([vocab[token] for token in tokenizer(item)],
                         dtype=torch.long) for item in raw_text_iter]
    return torch.cat(tuple(filter(lambda t: t.numel() > 0, data)))

train_data = data_process(iter(io.open(train_filepath, encoding="utf8")))
val_data = data_process(iter(io.open(valid_filepath, encoding="utf8")))
test_data = data_process(iter(io.open(test_filepath, encoding="utf8")))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
|
https://github.com/pytorch/tutorials/issues/1277
|
closed
|
[] | 2020-12-11T02:55:52Z
| 2021-07-27T15:12:26Z
| 4
|
funny000
|
pytorch/vision
| 3,149
|
How can I install torchvision on Apple M1?
|
How can I install torchvision on Apple M1?
|
https://github.com/pytorch/vision/issues/3149
|
closed
|
[
"help wanted",
"question",
"topic: build"
] | 2020-12-10T09:21:41Z
| 2021-06-06T05:59:32Z
| null |
huwei1024
|
pytorch/TensorRT
| 248
|
failed build trtorch
|
Hi,
when running `bazel build //:libtrtorch -c opt` I got the following error:
`no such package '@platforms//os': The repository '@platforms' could not be resolved and referenced by '//:windows'`
|
https://github.com/pytorch/TensorRT/issues/248
|
closed
|
[
"question"
] | 2020-12-09T08:29:33Z
| 2020-12-10T05:11:34Z
| null |
pribadihcr
|
pytorch/tutorials
| 1,272
|
AssertionError: Not equal to tolerance rtol=0.001, atol=1e-05
|
Recently I have been converting a PyTorch segmentation model to an ONNX model. I can export the ONNX model, pass onnx.checker.check_model(), and use onnxruntime to do inference. But when I use np.testing.assert_allclose(to_numpy(torch_out), ort_outs[0], rtol=1e-03, atol=1e-05) to compare the ONNX Runtime and PyTorch results, there is an AssertionError like the following:
```
AssertionError:
Not equal to tolerance rtol=0.001, atol=1e-05
Mismatched elements: 20827169 / 20971520 (99.3%)
Max absolute difference: 1.8859415
Max relative difference: 1008390.8
x: array([[[[ 1.165803e+01, 1.163278e+01, 1.160753e+01, ...,
             1.179392e+01, 1.176985e+01, 1.174578e+01],
           [ 1.167064e+01, 1.164517e+01, 1.161970e+01, ...,...
y: array([[[[11.636896, 11.6166  , 11.596304, ..., 12.943967, 12.909642,
             12.875318],
           [11.656967, 11.636346, 11.615723, ..., 12.954525, 12.920053,...
```
The code snippet to export the model is as follows:
```python
model.eval()
batch_size = 1
input_shape = (3, 512, 512)
# x = torch.autograd.Variable(torch.randn(batch_size, *input_shape))
x = torch.rand(batch_size, 3, 512, 512, requires_grad=True)
torch.onnx.export(model, x, model_file_name + '.onnx', export_params=True, opset_version=11, verbose=False)
```
This tutorial, https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html, says that if the results do not match then there is an issue in the ONNX exporter, but I don't know where the mistake is.
cc @BowenBao @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/1272
|
closed
|
[
"onnx",
"medium",
"docathon-h2-2023"
] | 2020-12-09T06:50:04Z
| 2023-11-07T00:44:48Z
| 7
|
GeneralJing
|
pytorch/examples
| 855
|
cannot find dcgan-sample-10.png
|
Hello, I have recently been learning from the code at https://github.com/pytorch/examples/tree/master/cpp/dcgan. But when I want to run
`python display_samples.py -i dcgan-sample-10.png`
I cannot find dcgan-sample-10.png.
Can you tell me how to obtain the image correctly?
Also, when I run ./dcgan to train, I get this warning:
`[W Resize.cpp:19] Warning: An output with one or more elements was resized since it had shape [64, 1, 1, 1], which does not match the required output shape [64, 1, 1, 64].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (function resize_output)`
I don't know how to fix it; could you help me?
|
https://github.com/pytorch/examples/issues/855
|
closed
|
[] | 2020-12-09T02:47:37Z
| 2022-03-09T20:42:06Z
| 1
|
liubamboo
|
pytorch/pytorch
| 48,995
|
How to do polymorphism on torch::nn::ModuleHolder?
|
The C++ frontend tutorial https://pytorch.org/tutorials/advanced/cpp_frontend.html recommends using ModuleHolder to create our own modules, but the inheritance relation does not seem to translate to ModuleHolder. So I am wondering if there is a way to keep the benefits of ModuleHolder while having polymorphism among my customized modules.
cc @yf225 @glaringlee @albanD @mruberry
|
https://github.com/pytorch/pytorch/issues/48995
|
closed
|
[
"module: cpp",
"module: nn",
"triaged"
] | 2020-12-08T03:26:39Z
| 2020-12-22T20:23:06Z
| null |
thisisi3
|
pytorch/pytorch
| 48,928
|
When multiple GPUs run multiple processes, any process not running on GPU 0 also allocates some memory (such as 200 MB) on GPU 0. What is the cause of this?
|
Hello everyone. When multiple GPUs run multiple processes, we find that a process running on GPU 0 only occupies 1000 MB of memory; however, a process running on GPU 1 occupies 1000 MB on GPU 1 and also 200 MB on GPU 0. GPU 2 and GPU 3 behave the same. We found that any process not running on GPU 0 uses an extra 200 MB on GPU 0. What is the cause of this? Thank you!
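A common cause (stated here as an assumption, not a confirmed diagnosis) is that each process creates a CUDA context on GPU 0 before switching to its own device, e.g. via a default `.cuda()` call or `torch.load` without `map_location`. A minimal sketch of pinning each process to its own device:
```python
import torch

def setup(device_id: int):
    # Alternatively, launch each worker with CUDA_VISIBLE_DEVICES=<id> so the
    # process physically cannot touch GPU 0 at all.
    if torch.cuda.is_available() and device_id < torch.cuda.device_count():
        torch.cuda.set_device(device_id)  # select the device before any CUDA work
        return torch.device(f"cuda:{device_id}")
    return torch.device("cpu")

device = setup(1)
model = torch.nn.Linear(10, 10).to(device)
# load checkpoints directly onto the target device instead of the default GPU 0
# state = torch.load("ckpt.pt", map_location=device)
```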
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd
|
https://github.com/pytorch/pytorch/issues/48928
|
open
|
[
"oncall: distributed"
] | 2020-12-07T11:28:55Z
| 2021-01-21T06:46:15Z
| null |
zoufangyu1987
|
pytorch/pytorch
| 48,927
|
how to train a "mask keypoint r-cnn"
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
**Question**
To have a detection model predict bboxes, masks, and keypoints simultaneously, I wrote a script for a "mask keypoint R-CNN", based on torchvision's built-in implementations of Mask R-CNN and Keypoint R-CNN.
To test the baseline I use a dataset with 10 images of pedestrians, labeled with keypoints and masks. In training I sum the losses of each module and optimize that. But the result is unsatisfactory even after 200 epochs: neither the predicted bboxes nor the keypoints and masks look fine, yet the loss seems to have converged already and is no longer decreasing.
My expectation is that, with so few samples, it should be easy for the model to overfit the dataset.
I tried ignoring either the mask loss or the keypoint loss, and then the model trains well as expected, becoming a good Keypoint R-CNN or Mask R-CNN. I think this proves my implementation didn't go wrong.
The question: is there any advice or experience for training keypoints and masks together? Thanks in advance :)
**Appendix**
My implementation of mask keypoint r-cnn:
```
import torch
from torchvision.models.utils import load_state_dict_from_url
from torchvision.ops import MultiScaleRoIAlign
from torchvision.models.detection.faster_rcnn import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
from torchvision.models.detection.mask_rcnn import MaskRCNNHeads, MaskRCNNPredictor
from torchvision.models.detection.keypoint_rcnn import KeypointRCNNHeads, KeypointRCNNPredictor
import time
class MaskKeypointRCNN(FasterRCNN):
def __init__(self, backbone, num_classes=None,
# transform parameters
min_size=800, max_size=1333,
image_mean=None, image_std=None,
# RPN parameters
rpn_anchor_generator=None, rpn_head=None,
rpn_pre_nms_top_n_train=2000, rpn_pre_nms_top_n_test=1000,
rpn_post_nms_top_n_train=2000, rpn_post_nms_top_n_test=1000,
rpn_nms_thresh=0.7,
rpn_fg_iou_thresh=0.7, rpn_bg_iou_thresh=0.3,
rpn_batch_size_per_image=256, rpn_positive_fraction=0.5,
# Box parameters
box_roi_pool=None, box_head=None, box_predictor=None,
box_score_thresh=0.05, box_nms_thresh=0.5, box_detections_per_img=100,
box_fg_iou_thresh=0.5, box_bg_iou_thresh=0.5,
box_batch_size_per_image=512, box_positive_fraction=0.25,
bbox_reg_weights=None,
# Mask parameters
mask_roi_pool=None, mask_head=None, mask_predictor=None,
# keypoint parameters
keypoint_roi_pool = None, keypoint_head = None, keypoint_predictor = None,
num_keypoints = 17):
out_channels = backbone.out_channels
# mask predictor initialization
assert isinstance(mask_roi_pool, (MultiScaleRoIAlign, type(None)))
if num_classes is not None:
if mask_predictor is not None:
raise ValueError("num_classes should be None when mask_predictor is specified")
if mask_roi_pool is None:
mask_roi_pool = MultiScaleRoIAlign(
featmap_names=['0', '1', '2', '3'],
output_size=14,
sampling_ratio=2)
if mask_head is None:
mask_layers = (256, 256, 256, 256)
mask_dilation = 1
mask_head = MaskRCNNHeads(out_channels, mask_layers, mask_dilation)
if mask_predictor is None:
mask_predictor_in_channels = 256 # == mask_layers[-1]
mask_dim_reduced = 256
mask_predictor = MaskRCNNPredictor(mask_predictor_in_channels,
mask_dim_reduced, num_classes)
# keypoint predictor initialization
assert isinstance(keypoint_roi_pool, (MultiScaleRoIAlign, type(None)))
if min_size is None:
min_size = (640, 672, 704, 736, 768, 800)
if num_classes is not None:
if keypoint_predictor is not None:
raise ValueError("num_classes should be None when keypoint_predictor is specified")
if keypoint_roi_pool is None:
keypoint_roi_pool = MultiScaleRoIAlign(
featmap_names=['0', '1', '2', '3'],
output_size=14,
sampling_ratio=2)
if keypoint_head is None:
keypoint_layers = tuple(512 for _ in range(8))
keypoint_head = KeypointRCNNHeads(out_channels, keypoint_layers)
if keypoint_predi
|
https://github.com/pytorch/pytorch/issues/48927
|
closed
|
[] | 2020-12-07T09:04:08Z
| 2020-12-08T01:38:01Z
| null |
feiyangsuo
|
pytorch/examples
| 854
|
why multiply the token embedding by math.sqrt(self.ninp)?
|
Dear author,
I am wondering why you multiply the token embedding by math.sqrt(self.ninp) in [model.py](https://github.com/pytorch/examples/blob/a3f28a26851867b314f4471ec6ca1c2c048217f1/word_language_model/model.py#L148) from the word_language_model example.
Best
|
https://github.com/pytorch/examples/issues/854
|
closed
|
[] | 2020-12-07T07:11:14Z
| 2022-03-09T21:05:40Z
| 1
|
KK666-AI
|
huggingface/datasets
| 1,167
|
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
|
Hi there,
I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern.
I guess the solution would entail wrapping a dataset into a Pytorch dataset.
As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html)
```python
import torch

class SquadDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        # instead of doing this beforehand, I'd like to do tokenization on the fly
        self.encodings = encodings

    def __getitem__(self, idx):
        return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings.input_ids)

train_dataset = SquadDataset(train_encodings)
```
How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers?
----
Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant
```python
class CustomPytorchDataset(Dataset):
    def __init__(self):
        self.dataset = some_hf_dataset(...)
        self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

    def __getitem__(self, batch_idx):
        instance = self.dataset[text_col][batch_idx]
        tokenized_text = self.tokenizer(instance, truncation=True, padding=True)
        return tokenized_text

    def __len__(self):
        return len(self.dataset)

    @staticmethod
    def collate_fn(batch):
        # batch is a list, however it will always contain 1 item because we should not use the
        # batch_size argument as batch_size is controlled by the sampler
        return {k: torch.tensor(v) for k, v in batch[0].items()}

torch_ds = CustomPytorchDataset()

# NOTE: batch_sampler returns list of integers and since here we have SequentialSampler
# it returns: [1, 2, 3], [4, 5, 6], etc. - check calling `list(batch_sampler)`
batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True)

# NOTE: no `batch_size` as now it is controlled by the sampler!
dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn)
```
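An alternative sketch that avoids the custom sampler, assuming the installed `datasets` version provides `set_transform` (which applies a function lazily whenever rows are accessed):
```python
from datasets import load_dataset
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
dataset = load_dataset("imdb", split="train")

def tokenize(batch):
    # runs every time rows are accessed, so nothing is tokenized up front
    return tokenizer(batch["text"], truncation=True, padding=True)

dataset.set_transform(tokenize)
print(dataset[0:2]["input_ids"])  # tokenized on the fly for just these two rows
```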
|
https://github.com/huggingface/datasets/issues/1167
|
closed
|
[
"question",
"generic discussion"
] | 2020-12-05T17:02:56Z
| 2023-07-20T15:49:42Z
| null |
pietrolesci
|
pytorch/tutorials
| 1,267
|
Weird results in the AUTOMATIC MIXED PRECISION tutorial.
|
I followed the [amp tutorial](https://pytorch.org/tutorials/recipes/recipes/amp_recipe.html#automatic-mixed-precision) (authored by @mcarilli). It's succinct and perspicuous. But the results show that mixed precision takes more memory than default precision. Can someone explain?
More details about the settings and results of my experiment are [here](https://discuss.pytorch.org/t/automatic-mixed-precision-increases-max-memory-used-by-tensors/104875).
|
https://github.com/pytorch/tutorials/issues/1267
|
closed
|
[
"question",
"amp"
] | 2020-12-04T04:24:26Z
| 2023-03-14T18:26:59Z
| null |
qimingyudaowenti
|
pytorch/pytorch
| 48,770
|
How can I find a function to calculate a correlation coefficient matrix like numpy.corrcoef() in pytorch?
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
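For reference, a minimal sketch of computing a `numpy.corrcoef`-style matrix with basic tensor ops (an illustration, not an official PyTorch function as of this issue):
```python
import torch

def corrcoef(x):
    # x: (variables, observations), same layout as numpy.corrcoef
    x = x - x.mean(dim=1, keepdim=True)
    cov = x @ x.t() / (x.shape[1] - 1)
    std = torch.sqrt(torch.diag(cov))
    return cov / (std.unsqueeze(0) * std.unsqueeze(1))

data = torch.randn(3, 100)
print(corrcoef(data))  # 3x3 matrix with ones on the diagonal
```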
|
https://github.com/pytorch/pytorch/issues/48770
|
closed
|
[] | 2020-12-03T05:40:21Z
| 2020-12-03T16:49:24Z
| null |
jiangzhiwei2018
|
pytorch/serve
| 822
|
How to fix this problem
|
When I run the official example, I get this problem. Does anyone have the same problem as me? How can I solve it? Thank you!
|
https://github.com/pytorch/serve/issues/822
|
closed
|
[] | 2020-12-02T12:22:07Z
| 2020-12-02T15:37:40Z
| null |
shyoulala
|
pytorch/vision
| 3,093
|
VOCSegmentation transforms.ToTensor() not working
|
Hi,
I want to use the VOCSegmentation dataset but I always get this error:
```
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'PIL.PngImagePlugin.PngImageFile'>
```
This is a code snippet to recreate the error
```python
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor()
])
voc_train = VOCSegmentation(os.getcwd(), year='2012', image_set='train', transform=transform)
train_loader = DataLoader(voc_train, batch_size=64)
train_iter = iter(train_loader)
next(train_iter)
```
When I use the MNIST or CIFAR10 data set the code works as expected.
Is there something special about the `VOCSegmentation` data set?
Thanks
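A minimal sketch of one possible workaround (assuming the issue is that `transform` only touches the input image, so the segmentation mask stays a PIL image and trips up `default_collate`): pass a `target_transform` as well, or use the joint `transforms` argument.
```python
import os
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import VOCSegmentation

img_transform = transforms.Compose([transforms.Resize((256, 256)),
                                    transforms.ToTensor()])
# convert the mask too so the batch contains only tensors; for real training you
# would resize masks with nearest-neighbour interpolation and keep integer labels
target_transform = transforms.Compose([transforms.Resize((256, 256)),
                                       transforms.ToTensor()])

voc_train = VOCSegmentation(os.getcwd(), year='2012', image_set='train',
                            transform=img_transform,
                            target_transform=target_transform)
train_loader = DataLoader(voc_train, batch_size=64)
```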
cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3093
|
closed
|
[
"question",
"topic: semantic segmentation"
] | 2020-12-02T10:16:12Z
| 2024-01-08T06:45:57Z
| null |
sirtris
|
pytorch/vision
| 3,090
|
about retrain shufflenetv2 question
|
First of all, thanks for your perfect projects.
## Environments
python: 3.7
pytorch: 1.7+cpu
torchvision: 0.8.1+cpu
system-os: ubuntu18.04
## Hyperparameters
lr: 0.001
momentum: 0.9
weights_decay: 0.0001
batch_size: 16
## Question introduction
Recently, I have been studying the source code you provide in torchvision for ShuffleNetV2.
But when fine-tuning the network (only training the fc layer), I found that convergence is very slow, like this:
```
[epoch 0] accuracy: 0.246
[epoch 1] accuracy: 0.253
[epoch 2] accuracy: 0.28
[epoch 3] accuracy: 0.305
[epoch 4] accuracy: 0.338
[epoch 5] accuracy: 0.353
```
I have read this document [https://pytorch.org/docs/stable/torchvision/models.html#classification](https://pytorch.org/docs/stable/torchvision/models.html#classification)
According to this document, I downloaded the weights [https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth](https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth), and use same preprocessing method.
```python
data_transform = {
    "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                 transforms.RandomHorizontalFlip(),
                                 transforms.ToTensor(),
                                 transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
    "val": transforms.Compose([transforms.Resize(256),
                               transforms.CenterCrop(224),
                               transforms.ToTensor(),
                               transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])}
```
But with all other conditions unchanged, if I just replace the model with the resnet34 you provide in torchvision, I get great results, like this:
```
[epoch 0] accuracy: 0.968
```
Strangely, when fine-tuning shufflenetv2, if I change the learning rate from 0.001 to 0.1 I get the following results:
```
[epoch 0] accuracy: 0.85
[epoch 1] accuracy: 0.848
.....
[epoch 29] accuracy: 0.899
```
Does fine-tuning the shufflenet network really need such a large learning rate?
I guess the problem is not the preprocessing, because if I use the mobilenetv2 network I can get better results under the same conditions. Could you help me find out what's wrong? Thank you very much.
## Code
[https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/blob/master/pytorch_classification/Test7_shufflenet/train.py](https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/blob/master/pytorch_classification/Test7_shufflenet/train.py)
|
https://github.com/pytorch/vision/issues/3090
|
open
|
[
"question"
] | 2020-12-02T02:35:51Z
| 2021-01-25T00:55:45Z
| null |
WZMIAOMIAO
|
pytorch/xla
| 2,657
|
Using iterative datasets with pytorch XLA is very slow on TPU, how to use it correctly
|
## Environment info
- Platform: TPU
- Python version: 3.7
## Information
I am running the following code on TPU and GPU, and on TPU it is very slow. I am not sure whether the way I define the dataloader for iterable datasets is correct. Here is how I define the dataloader: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.py#L496
I shard the data per TPU core here: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L326
Since I am not using distributed data samplers and shard the data per core myself, could you point out how I can do distributed training properly? Thanks.
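For reference, a minimal sketch of the usual per-core pattern with `torch_xla` (hedged: the sampler approach below assumes a map-style dataset; a true iterable dataset would instead have to skip examples itself based on `xm.get_ordinal()` and `xm.xrt_world_size()`):
```python
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
from torch.utils.data import DataLoader, DistributedSampler

def run(index, dataset):
    device = xm.xla_device()
    sampler = DistributedSampler(dataset,
                                 num_replicas=xm.xrt_world_size(),
                                 rank=xm.get_ordinal(),
                                 shuffle=True)
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)
    device_loader = pl.MpDeviceLoader(loader, device)  # feeds batches to this TPU core
    for batch in device_loader:
        pass  # training step here
```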
## To reproduce
```
git clone git@github.com:google-research/ruse.git
go to iter branch
pip install -r requirements.txt
python setup.py develop
cd seq2seq
python xla_spawn.py finetune_t5_trainer.py configs/mrpc_adapter_tpu.json
```
|
https://github.com/pytorch/xla/issues/2657
|
closed
|
[] | 2020-12-02T01:01:14Z
| 2020-12-06T00:00:11Z
| null |
rabeehkarimimahabadi
|
pytorch/vision
| 3,083
|
Getting an error when modifying the faster_rcnn model to add inception_v3 backbone model
|
I was following this tutorial [Modifying the model to add a different backbone](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#modifying-the-model-to-add-a-different-backbone). When I replace the mobilenet_v2 model with inception_v3, the code does not work and gives the following error:
```
File "/home/gpu-user/projects/building-outline-detection/src/models/faster_rcnn/vision/engine.py", line 46, in train_one_epoch
loss_dict = model(images, targets)
File "/home/gpu-user/miniconda3/envs/faster_rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/gpu-user/miniconda3/envs/faster_rcnn/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py", line 99, in forward
proposals, proposal_losses = self.rpn(images, features, targets)
File "/home/gpu-user/miniconda3/envs/faster_rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/gpu-user/miniconda3/envs/faster_rcnn/lib/python3.8/site-packages/torchvision/models/detection/rpn.py", line 330, in forward
features = list(features.values())
AttributeError: 'InceptionOutputs' object has no attribute 'values'
```
I am using the following environment:
* Ubuntu 18.04.4 LTS
* CUDA Version: 10.2
* Python: 3.8.6
* Pytorch: 1.7.0
It will be great if someone can help me in resolving this issue.
Thanks
|
https://github.com/pytorch/vision/issues/3083
|
open
|
[
"question"
] | 2020-12-01T20:19:44Z
| 2020-12-02T12:10:21Z
| null |
js-kalsi
|
pytorch/vision
| 3,068
|
torchvison.ops.nms uses too much gpu memory
|
Hi there, I have a question about the nms operator.
If I use torchvision.ops.nms to filter bboxes, about 900 MB of GPU memory is used when the input boxes and scores are placed on the GPU. There is no problem if the boxes and scores are on the CPU. Meanwhile, the time cost is 0.0007 s on the GPU and 0.0018 s on the CPU.
I don't understand why this operator uses so much GPU memory. Or is there any configuration for nms to save GPU memory?
My torchvision version is 0.4.0. Thanks~
|
https://github.com/pytorch/vision/issues/3068
|
closed
|
[
"question"
] | 2020-12-01T07:44:41Z
| 2021-03-23T15:48:01Z
| null |
ThomsonW
|
pytorch/vision
| 3,064
|
I cannot reach the original accuracy when training ResNeXt-50 on ImageNet.
|
I used the ['PyTorch ImageNet Training' example](https://github.com/pytorch/examples/tree/master/imagenet) and the ['models'](https://github.com/pytorch/vision/tree/master/torchvision/models) of TorchVision 0.4.2 to train ResNeXt-50 twice, but got 23.52% and 23.57% top-1 error on the ImageNet val set, which does not reach the original error (22.2%). Besides, I find that the hyperparameter settings of the ['PyTorch ImageNet Training' example](https://github.com/pytorch/examples/tree/master/imagenet) are the same as in the [original paper](https://arxiv.org/abs/1611.05431). Can you give me some advice on training to reach the original error?
The val acc alongside training is shown below:
<details>
<summary>logs</summary>
Top-1
16.894
32.488
36.116
40.272
45.394
46.328
50.126
50.336
52.242
54.414
53.096
54.438
55.662
54.972
55.902
57.204
54.932
57.068
55.586
56.9
58.018
56.67
58.564
57.272
58.224
57.736
57.816
58.292
57.618
56.664
70.7
71.502
72.04
72.452
72.69
72.754
73.03
72.996
72.504
72.812
72.318
72.294
72.584
72.318
72.42
72.528
72.238
72.14
71.76
71.91
72.282
72.508
72.156
71.424
72.3
72.48
72.42
72.61
72.61
72.178
75.62
75.86
76.16
76.184
76.26
76.252
76.376
76.3
76.404
76.48
76.326
76.368
76.304
76.386
76.462
76.36
76.452
76.396
76.258
76.308
76.334
76.228
76.252
76.304
76.15
76.298
76.362
76.15
76.17
76.058
</details>
EDIT: (vfdev-5) I updated the message and put the training logs into summary/details block.
|
https://github.com/pytorch/vision/issues/3064
|
closed
|
[
"question",
"module: models"
] | 2020-11-30T19:53:15Z
| 2021-04-25T16:12:11Z
| null |
PoonKinWang
|
pytorch/pytorch
| 48,576
|
how to avoid the precision loss(float32) caused by the gradient accumulation of Ring Allreduce in the case of ddp
|
## ❓ Questions and Help
### how to avoid the precision loss(float32) caused by the gradient accumulation of Ring Allreduce in the case of ddp.
When I run the model on a single GPU twice, the weights are always the same; when I run the model with DDP twice, the weights differ after the gradient update.
I suspect that gradient error accumulates in the ring all-reduce.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd
|
https://github.com/pytorch/pytorch/issues/48576
|
closed
|
[
"oncall: distributed",
"triaged"
] | 2020-11-30T09:24:09Z
| 2020-12-07T01:42:03Z
| null |
lezasantaizi
|
pytorch/vision
| 3,058
|
How to solve this error? RuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
I'm a beginner in ML and trying to use a solution based on PyTorch (called detectron2).
When the solution runs inference on an image, I always get the error below:
RuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. 'torchvision::nms' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, Tracer, Autocast, Batched, VmapMode].
I don't understand this error and couldn't find anything about it on Google.
Is there anybody who knows how to handle this?
Info:
I installed the CUDA v11.1 from https://developer.nvidia.com/cuda-downloads
torch version: 1.7.0
torchvision version: 0.8.0
|
https://github.com/pytorch/vision/issues/3058
|
open
|
[
"needs reproduction",
"module: ops"
] | 2020-11-30T08:17:04Z
| 2024-01-18T01:41:05Z
| null |
manmani3
|
pytorch/vision
| 3,056
|
torchvision.roi_align does not support TPU
|
Hello.
We are using TPUs on GCP.
We are currently modifying the code so that Detectron2 can run on the TPU.
However, we get an error that roi_align in torchvision is not supported on TPU.
Please see the error below. Can you help us solve it?
```
File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torchvision/ops/roi_align.py", line 51, in roi_align
    return torch.ops.torchvision.roi_align(input, rois, spatial_scale, output_size[0], output_size[1], sampling_ratio, aligned)
RuntimeError: Could not run 'torchvision::roi_align' with arguments from the 'XLA' backend. 'torchvision::roi_align' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
```
|
https://github.com/pytorch/vision/issues/3056
|
open
|
[
"question"
] | 2020-11-28T12:16:32Z
| 2020-11-30T10:06:01Z
| null |
CheonJiEun
|
pytorch/pytorch
| 48,528
|
I am unable to install it. How do I install it?
|
## ❓ Questions and Help
### I am unable to install PyTorch the way the instructions describe
Here are screenshots describing the error in more detail.
#### Website
<img width="960" alt="chrome page" src="https://user-images.githubusercontent.com/71920621/100496065-07ffb580-3177-11eb-9e8d-7445d613e97f.PNG">
#### Command Prompt
<img width="614" alt="cmd" src="https://user-images.githubusercontent.com/71920621/100496068-1221b400-3177-11eb-8402-856ac1d037d7.PNG">
#### System Info
<img width="900" alt="sys1" src="https://user-images.githubusercontent.com/71920621/100496070-18179500-3177-11eb-8a80-6c0d1638f557.PNG">
<img width="900" alt="sys2" src="https://user-images.githubusercontent.com/71920621/100496073-1cdc4900-3177-11eb-97d7-e57f1cbf6659.PNG">
cc @ezyang @seemethere @malfet @walterddr @peterjc123 @maxluk @nbcsm @guyang3532 @gunandrose4u @mszhanyi @skyline75489
|
https://github.com/pytorch/pytorch/issues/48528
|
closed
|
[
"module: binaries",
"module: windows",
"triaged"
] | 2020-11-28T07:13:38Z
| 2020-11-30T15:44:32Z
| null |
ghost
|
pytorch/vision
| 3,049
|
In function `ROIPool_forward(at::Tensor const&, at::Tensor const&, double, long, long)':
|
I tried the suggestion in https://github.com/pytorch/vision/issues/1849, but it does not work. Please help me, thanks a lot.
In function `ROIPool_forward(at::Tensor const&, at::Tensor const&, double, long, long)':
undefined reference to `ROIPool_forward_cuda(at::Tensor const&, at::Tensor const&, float, int, int)'
|
https://github.com/pytorch/vision/issues/3049
|
closed
|
[
"question"
] | 2020-11-26T10:34:31Z
| 2020-12-24T08:58:28Z
| null |
wj1017090777
|
pytorch/TensorRT
| 242
|
Failure when add aten::gt converter
|
I was trying to add a new converter for aten::gt.Scalar(Tensor self, Scalar other) -> Tensor, but it fails in the test case.
In core/conversion/converters/impl/element_wise.cpp, I added this:
```
.pattern({"aten::gt.Scalar(Tensor self, Scalar other) -> (Tensor)",
[](ConversionCtx* ctx, const torch::jit::Node* n, args& args) -> bool {
// TODO: Remove with functionalization
auto self = args[0].ITensorOrFreeze(ctx);
auto otherScalar = args[1].unwrapToScalar().to<float>();
auto other = tensor_to_const(ctx, torch::tensor({otherScalar}));
auto gt =
add_elementwise(ctx, nvinfer1::ElementWiseOperation::kGREATER, self, other, util::node_info(n));
TRTORCH_CHECK(gt, "Unable to create Greater layer from node: " << *n);
gt->setName(util::node_info(n).c_str());
auto out = ctx->AssociateValueAndTensor(n->outputs()[0], gt->getOutput(0));
LOG_DEBUG("Output tensor shape: " << out->getDimensions());
return true;
}})
```
In tests/core/converters/test_element_wise.cpp, I added this:
```
TEST(Converters, ATenGtWithScalarConvertsCorrectly) {
  const auto graph = R"IR(
    graph(%0 : Tensor):
      %scalar : float = prim::Constant[value=0.5]()
      %1 : Tensor = aten::gt(%0, %scalar)
      return (%1))IR";
  pointwise_test_helper(graph, true);
}
```
And I use following command to build and test
```
bazel build //:libtrtorch --compilation_mode opt --distdir third_party/dist_dir/x86_64-linux-gnu
bazel build //tests/core/converters:test_converters --compilation_mode opt --distdir third_party/dist_dir/x86_64-linux-gnu
bazel run //tests/core/converters:test_element_wise
```
And get this error message
```
[ RUN ] Converters.ATenGtWithScalarConvertsCorrectly
DEBUG: [TRTorch - Debug Build] - Running JIT version
DEBUG: [TRTorch - Debug Build] - Running TRT version
DEBUG: [TRTorch - Debug Build] - Settings requested for TensorRT engine:
Operating Precision: Float32
Make Refittable Engine: 0
Debuggable Engine: 0
Strict Types: 0
GPU ID: 0
Allow GPU Fallback (if running on DLA): 0
Min Timing Iterations: 2
Avg Timing Iterations: 1
Max Workspace Size: 1048576
Max Batch Size: Not set
Device Type: GPU
GPU ID: 0
Engine Capability: Default
Calibrator Created: 0
INFO: [TRTorch Conversion Context] - Converting Block
INFO: [TRTorch Conversion Context] - Adding Input 0 named input_0 in engine (conversion.AddInputs)
DEBUG: [TRTorch Conversion Context] - Input shape set to [5]
DEBUG: [TRTorch Conversion Context] - Evaluating %1 : float = prim::Constant[value=0.5]()
DEBUG: [TRTorch Conversion Context] - Found the value to be: 0.5
INFO: [TRTorch Conversion Context] - Adding Layer %2 : Tensor = aten::gt(%0, %1) (ctx.AddLayer)
DEBUG: [TRTorch Conversion Context] - Node input is an already converted tensor
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
DEBUG: [TRTorch - Debug Build] - Frozen tensor shape: [5]
DEBUG: [TRTorch - Debug Build] - Weights: [1]
Number of input maps: 1
Number of output maps: 1
Element shape: [1]
DEBUG: [TRTorch Conversion Context] - Freezing tensor 0x662238f0 as an IConstantLayer
DEBUG: [TRTorch - Debug Build] - Output tensor shape: [5]
INFO: [TRTorch Conversion Context] - Marking Output 2 named output_0 in engine (ctx.MarkOutput)
DEBUG: [TRTorch Conversion Context] - Applying generic optimizations to the graph for inference.
DEBUG: [TRTorch Conversion Context] - Original: 2 layers
DEBUG: [TRTorch Conversion Context] - After dead-layer removal: 2 layers
DEBUG: [TRTorch Conversion Context] - After Myelin optimization: 2 layers
DEBUG: [TRTorch Conversion Context] - After scale fusion: 2 layers
DEBUG: [TRTorch Conversion Context] - After vertical fusions: 2 layers
DEBUG: [TRTorch Conversion Context] - After final dead-layer removal: 1 layers
DEBUG: [TRTorch Conversion Context] - After tensor merging: 1 layers
DEBUG: [TRTorch Conversion Context] - After concat removal: 1 layers
DEBUG: [TRTorch Conversion Context] - Graph construction and optimization completed in 0.000104867 seconds.
DEBUG: [TRTorch Conversion Context] - Constructing optimization profile number 0 out of 1
*************** Autotuning format combination: Float(1) -> Bool(1) ***************
DEBUG: [TRTorch Conversion Context] - --------------- Timing Runner: {%2 : Tensor = aten::gt(%0, %1)} (Myelin)
DEBUG: [TRTorch Conversion Context] - Tactic: 0 is the only option, timing skipped
DEBUG: [TRTorch Conversion Context] - Fastest Tactic: 0 Time: 0
DEBUG: [TRTorch Conversion Context] - Formats and tactics selection completed in 0.0941442 seconds.
DEBUG: [TRTorch Conversion Context] - After reformat layers: 1 layers
```
|
https://github.com/pytorch/TensorRT/issues/242
|
closed
|
[
"question",
"No Activity"
] | 2020-11-26T06:45:23Z
| 2021-01-22T00:34:40Z
| null |
inocsin
|
pytorch/pytorch
| 48,444
|
How to export to onnx with nms?
|
Hi, I am trying to add NMS to a PyTorch detection model and export it to ONNX so that I can convert the ONNX model to TensorRT 7.1. How can I export a model with NMS? Are there any examples or documents?
Thanks.
|
https://github.com/pytorch/pytorch/issues/48444
|
closed
|
[] | 2020-11-25T08:32:56Z
| 2020-11-26T00:58:23Z
| null |
Edwardmark
|
pytorch/TensorRT
| 240
|
❓ [Question] How to solve aten::floor converter not found?
|
## ❓ Question
How to solve aten::floor converter not found?
## What you have already tried
I am trying to convert a jit trace of a Fast SCNN network into TensorRT. I've confirmed that the trace was created in python3.6 using PyTorch 1.6.0. When printing the trace graph I do not even see the aten::floor operator. I also cannot locate a torch.floor operator in the original PyTorch model structure so I'm not sure what is even calling this operator? Here is the resulting error:
```
RuntimeError: [enforce fail at core/conversion/conversion.cpp:112] Expected converter to be true but got false
Unable to convert node: %376 : Tensor = aten::floor(%324) # /home/nmonhollen/tensorrt/venv/lib/python3.6/site-packages/torch/nn/functional.py:3010:0 (conversion.AddLayer)
Schema: aten::floor.int(int a) -> (int)
Converter for aten::floor requested, but no such converter was found.
If you need a converter for this operator, you can try implementing one yourself
or request a converter: https://www.github.com/NVIDIA/TRTorch/issues
```
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.6.0
- CPU Architecture: x86-64
- OS (e.g., Linux): Ubuntu 18.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):
- Are you using local sources or building from archives: local sources (cuDNN=7.6.5, TensorRT=7.0.0.11)
- Python version: 3.6.9
- CUDA version: 10.2
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/240
|
closed
|
[
"feature request",
"question"
] | 2020-11-24T16:13:34Z
| 2021-04-22T00:54:15Z
| null |
nmonhollen
|
huggingface/datasets
| 883
|
Downloading/caching only a part of a datasets' dataset.
|
Hi,
I want to use the validation data *only* (of Natural Questions).
I don't want the whole dataset cached on my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir
|
https://github.com/huggingface/datasets/issues/883
|
open
|
[
"enhancement",
"question"
] | 2020-11-24T14:25:18Z
| 2020-11-27T13:51:55Z
| null |
SapirWeissbuch
|
pytorch/pytorch
| 48,390
|
what is the difference between https://download.pytorch.org/whl/torch_stable.html and the tag
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/48390
|
closed
|
[] | 2020-11-23T12:40:06Z
| 2020-11-26T01:06:09Z
| null |
jihuacao
|
huggingface/datasets
| 878
|
Loading Data From S3 Path in Sagemaker
|
In SageMaker I'm trying to load the dataset from an S3 path as follows:
```python
train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'

data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path

extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)
```
I am getting this error:
```
algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv
```
But when I try with pandas, it is able to load from S3.
Does the datasets library support loading from an S3 path?
|
https://github.com/huggingface/datasets/issues/878
|
open
|
[
"enhancement",
"question"
] | 2020-11-23T09:17:22Z
| 2020-12-23T09:53:08Z
| null |
mahesh1amour
|
pytorch/TensorRT
| 235
|
❓[Question] Dynamic shape for ResNet-50
|
## ❓ Question
Hi, I try to convert ResNet-50 with dynamic shape:
```
{
"min": (1, 3, 224, 224),
"opt": (1, 3, 224, 224),
"max": (3, 3, 224, 224)
}
```
, but i get this error:
```
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
ERROR: [TRTorch Conversion Context] - %x.21 : Tensor = aten::flatten(%x.19, %3, %81) # /root/.cache/torch/hub/pytorch_vision_v0.6.0/torchvision/models/resnet.py:214:12: at most one dimension may be inferred
Segmentation fault (core dumped)
```
Code:
```
import torch
import trtorch

torch_model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet50', pretrained=False)
script_model = torch.jit.script(torch_model.eval().cuda())
trt_model = trtorch.compile(script_model, {
    "input_shapes": [{
        "min": (1, 3, 224, 224),
        "opt": (1, 3, 224, 224),
        "max": (3, 3, 224, 224)
    }],
    "op_precision": torch.float32,
})
```
## What you have already tried
I ran this [code](https://github.com/NVIDIA/TRTorch/issues/193#issuecomment-718162687). It works correctly.
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- docker image: nvcr.io/nvidia/tensorrt :20.03-py3
- PyTorch Version (e.g., 1.0): 1.6.0, installed with pip
- CPU Architecture: x86
- OS (e.g., Linux): Ubuntu 18.04
- How installed TRTorch: pip install https://github.com/NVIDIA/TRTorch/releases/download/v0.1.0/trtorch-0.1.0-cp36-cp36m-linux_x86_64.whl
- Python version: 3.6.9
- CUDA version: 10.2
- GPU models and configuration: RTX 2060 SUPER
|
https://github.com/pytorch/TensorRT/issues/235
|
closed
|
[
"question"
] | 2020-11-23T05:02:13Z
| 2021-02-23T23:25:09Z
| null |
gavrin-s
|
pytorch/vision
| 3,040
|
I am not able to obtain results with custom backbone
|
_I am following the tutorial about FasterRCNN and I would like to test my own network as the backbone:
UCapsNet returns 512 feature maps.
I am training on Pascal VOC 2007._
```python
FRCN_model = FasterRCNN(backbone_model.Ucapsnet, 21, rpn_anchor_generator=backbone_model.anchor_generator, box_roi_pool=backbone_model.roi_pooler)
FRCN_model = FRCN_model.to(device)
params = [p for p in FRCN_model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.02, momentum=0.9, weight_decay=1e-4)
pbar = tqdm(range(n_epochs))
for epoch in pbar:
    train_one_epoch(FRCN_model, optimizer, dataloaders['train'], device, epoch, print_freq=10)
    evaluate(FRCN_model, dataloaders['val'], device=device)
```
**I got**:
Averaged stats: model_time: 1605886336.0000 (1605886304.8101) evaluator_time: 0.0275 (0.0285)
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
In training, the loss drops slowly to 1.15, but in evaluation I do not get anything.
Please help me understand.
cc @fmassa
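One thing worth double-checking when plugging a custom backbone into FasterRCNN is that it exposes an `out_channels` attribute and that the anchor generator and RoI pooler are set up for a single feature map. A rough sketch, modeled on the torchvision detection fine-tuning tutorial (the `backbone_model.Ucapsnet` name and the 512 channels follow the description above and are not verified here):

```python
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

backbone = backbone_model.Ucapsnet
backbone.out_channels = 512  # FasterRCNN reads this attribute from the backbone

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = MultiScaleRoIAlign(featmap_names=['0'], output_size=7, sampling_ratio=2)

FRCN_model = FasterRCNN(backbone, num_classes=21,
                        rpn_anchor_generator=anchor_generator,
                        box_roi_pool=roi_pooler)
```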
|
https://github.com/pytorch/vision/issues/3040
|
open
|
[
"question",
"module: documentation"
] | 2020-11-20T15:45:08Z
| 2020-11-24T08:08:56Z
| null |
Riretta
|
pytorch/vision
| 3,036
|
Faster R-CNN raises errors when the input tensor has requires_grad=True
|
## 🐛 Bug
## To Reproduce
I am using the pretrained Faster R-CNN model in torchvision as a sub-model in my own image-generating model. In fact, I need the Faster R-CNN to backpropagate properly when training my whole model.
But I found that when I feed the torchvision Faster R-CNN model an input with requires_grad=True, it raises the following errors.
```
import torch
import torchvision
if __name__ == '__main__':
    Faster_RCNN_ins = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True, progress=True,
                                                                            num_classes=91, pretrained_backbone=True)
    Faster_RCNN_ins.eval()
    Faster_RCNN_ins(torch.zeros(2, 3, 256, 256, requires_grad=True))
```
OR
```
import torch
import torch.nn as nn  # needed for nn.Conv2d below
import torchvision

if __name__ == '__main__':
    Faster_RCNN_ins = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True, progress=True,
                                                                            num_classes=91, pretrained_backbone=True)
    Faster_RCNN_ins.eval()
    out_tep = nn.Conv2d(3, 3, 3, stride=1, padding=1)(torch.zeros(2, 3, 256, 256))
    Faster_RCNN_ins(out_tep)
```
Both code blocks raise this error:
```
File "/data/gaoyan/style_transfer/scripts/tep.py", line 23, in <module>
Faster_RCNN_ins(torch.zeros(2,3,256,256,requires_grad=True))
File "/data/gaoyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/gaoyan/.local/lib/python3.6/site-packages/torchvision/models/detection/generalized_rcnn.py", line 80, in forward
images, targets = self.transform(images, targets)
File "/data/gaoyan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/gaoyan/.local/lib/python3.6/site-packages/torchvision/models/detection/transform.py", line 111, in forward
images = self.batch_images(images)
File "/data/gaoyan/.local/lib/python3.6/site-packages/torchvision/models/detection/transform.py", line 211, in batch_images
pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
RuntimeError: A view was created in no_grad mode and its base or another view of its base has been modified inplace with grad mode enabled. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
```
## Expected behavior
It can forward and backward without errors.
## Environment
output of the environment script
```
PyTorch version: 1.7.0
Is debug build: True
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6)
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.6 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 440.33.01
cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] pytorch-model-summary==0.1.2
[pip3] torch==1.7.0
[pip3] torchstat==0.0.7
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.8.1
[conda] Could not collect
```
## Additional context
I think the bug may lie in torchvision/models/detection/transform.py:
```
def batch_images(self, images, size_divisible=32):
    # type: (List[Tensor], int) -> Tensor
    if torchvision._is_tracing():
        # batch_images() does not export well to ONNX
        # call _onnx_batch_images() instead
        return self._onnx_batch_images(images, size_divisible)

    max_size = self.max_by_axis([list(img.shape) for img in images])
    stride = float(size_divisible)
    max_size = list(max_size)
    max_size[1] = int(math.ceil(float(max_size[1]) / stride) * stride)
    max_size[2] = int(math.ceil(float(max_size[2]) / stride) * stride)

    batch_shape = [len(images)] + max_size
    batched_imgs = images[0].new_full(batch_shape, 0)
    for img, pad_img in zip(images, batched_imgs):
        pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
    return batched_imgs
```
This function cannot work when `images` is a list of tensors with requires_grad=True.
|
https://github.com/pytorch/vision/issues/3036
|
closed
|
[
"question",
"wontfix",
"module: models",
"topic: object detection"
] | 2020-11-20T06:38:35Z
| 2020-11-20T09:38:14Z
| null |
EZ4NO1
|
pytorch/TensorRT
| 232
|
How do you activate trtorch::CompileGraph with multi inputs?
|
## ❓ Question
Can you provide an example of using more than one input please?
## What you have already tried
For example I tried to do the following:
```
auto Input1 = torch::randn({ 4, 24, 64, 64 }, { torch::kCUDA });
auto Input2 = torch::randn({ 1, 24, 1, 1 }, { torch::kCUDA });
std::vector<trtorch::CompileSpec::InputRange> inputRanges;
inputRanges.push_back(Input1.sizes());
inputRanges.push_back(Input2.sizes());
auto trt_mod = trtorch::CompileGraph(module, inputRanges);
```
A std::out_of_range exception was raised.
I can't be sure that the exception's root cause is related to the multiple inputs that I used, but for now I have no other suspects.
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.6
- CPU Architecture: Jetson Xavier AGX
- OS (e.g., Linux): JetPack 4.4
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip3
- Build command you used (if compiling from source):
- Are you using local sources or building from archives: Local
- Python version: 3.6.9
- CUDA version: 10.2
- GPU models and configuration: Jetson Xavier AGX
- Any other relevant information: JetPack 4.4
## Additional context
|
https://github.com/pytorch/TensorRT/issues/232
|
closed
|
[
"question"
] | 2020-11-19T14:35:22Z
| 2020-11-24T15:56:11Z
| null |
OronG13
|
pytorch/vision
| 3,030
|
RandomRotation by some chance
|
```
def mapper(dataset_dict):
    dataset_dict = copy.deepcopy(dataset_dict)  # it will be modified by code below
    image = utils.read_image(dataset_dict["file_name"], format="BGR")
    transform_list = [
        T.ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'),
        T.RandomRotation([10, 15])
    ]
    image, transforms = T.apply_transform_gens(transform_list, image)
    dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
    # print('image_shape->', image.shape, image.shape[:2])
    annos = [
        utils.transform_instance_annotations(obj, transforms, image.shape[:2])
        for obj in dataset_dict.pop("annotations")
        if obj.get("iscrowd", 0) == 0
    ]
    instances = utils.annotations_to_instances(annos, image.shape[:2])
    dataset_dict["instances"] = instances
    # dataset_dict["instances"] = utils.filter_empty_instances(instances)
    return dataset_dict
```
This is my mapper for augmentation.
Does T.RandomRotation([10,15]) happen for every image, or only with some chance?
If it applies to every image, how should I apply it only some percentage of the time?
cc @vfdev-5
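As far as I can tell, a transform gen placed in `transform_list` is applied to every image (the randomness in `T.RandomRotation` is only over the angle). One simple way to apply it only some percentage of the time is to build the list conditionally. A rough sketch in plain Python that reuses the `T` alias from the mapper above; `rotation_prob` and the helper name are illustrative:

```python
import random

def build_transform_list(rotation_prob=0.3):
    # append the rotation only with probability `rotation_prob`,
    # so roughly 30% of the images get rotated
    transform_list = [
        T.ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800),
                             max_size=1333, sample_style='choice'),
    ]
    if random.random() < rotation_prob:
        transform_list.append(T.RandomRotation([10, 15]))
    return transform_list
```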
|
https://github.com/pytorch/vision/issues/3030
|
open
|
[
"question",
"module: transforms"
] | 2020-11-19T14:22:27Z
| 2020-11-20T09:39:28Z
| null |
SlowMonk
|
pytorch/TensorRT
| 231
|
How to solve "Unable to get schema" issue
|
I was trying to compile a TorchScript model, and the log says "Unable to get schema for Node". What should I do to fix this problem?
```
%2 : int = prim::Constant[value=2]()
%3 : int = prim::Constant[value=6]()
%4 : bool = prim::Constant[value=0]()
%5 : None = prim::Constant()
%6 : int[] = prim::Constant[value=[2]]()
%7 : bool = prim::Constant[value=1]()
%8 : int = prim::Constant[value=1]()
%9 : Tensor = prim::Constant[value={255}]()
%10 : Tensor = prim::Constant[value={0.447}]()
%11 : Tensor = prim::Constant[value={0.226}]()
%12 : Float(32:27, 3:9, 3:3, 3:1) = prim::Constant[value=<Tensor>]()
%13 : int[] = prim::Constant[value=[2, 2]]()
........
DEBUG: Unable to get schema for Node %323 : Tensor = aten::mean(%3, %6, %7, %5) # tasks/moco_simclr/export/export.py:21:0 (NodeConverterRegistry.Convertable)
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/conversion/conversion.cpp:392] Expected schema to be true but got false
Unable to get schema for Node %323 : Tensor = aten::mean(%3, %6, %7, %5) # tasks/moco_simclr/export/export.py:21:0 (conversion.VerifyCoverterSupportForBlock)
```
|
https://github.com/pytorch/TensorRT/issues/231
|
closed
|
[
"question",
"No Activity"
] | 2020-11-19T09:53:50Z
| 2020-12-26T00:11:00Z
| null |
inocsin
|
pytorch/pytorch
| 48,241
|
How to use torch.onnx.export with a custom input datatype, like SparseTensor?
|
## ❓ Questions and Help
In this repo [torchsparse](https://github.com/mit-han-lab/torchsparse), there is a custom datatype [SparseTensor](https://github.com/mit-han-lab/torchsparse/blob/d2a5817c1b30565ffdfcd191b171a0957db408a8/torchsparse/sparse_tensor.py#L6).
```python
class SparseTensor:
    def __init__(self, feats, coords, cur_tensor_stride=1):
        self.F = feats
        self.C = coords
        self.s = cur_tensor_stride
        self.coord_maps = {}
        self.kernel_maps = {}

    def check(self):
        if self.s not in self.coord_maps:
            self.coord_maps[self.s] = self.C

    def cuda(self):
        assert type(self.F) == torch.Tensor
        assert type(self.C) == torch.Tensor
        self.F = self.F.cuda()
        self.C = self.C.cuda()
        return self

    def detach(self):
        assert type(self.F) == torch.Tensor
        assert type(self.C) == torch.Tensor
        self.F = self.F.detach()
        self.C = self.C.detach()
        return self

    def to(self, device, non_blocking=True):
        assert type(self.F) == torch.Tensor
        assert type(self.C) == torch.Tensor
        self.F = self.F.to(device, non_blocking=non_blocking)
        self.C = self.C.to(device, non_blocking=non_blocking)
        return self

    def __add__(self, other):
        tensor = SparseTensor(self.F + other.F, self.C, self.s)
        tensor.coord_maps = self.coord_maps
        tensor.kernel_maps = self.kernel_maps
        return tensor
```
And I want to export to ONNX model, but when I ran `torch.onnx.export`, I got this ERROR:
```
RuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs.
Dictionaries and strings are also accepted but their usage is not recommended.
But got unsupported type SparseTensor
```
This problem probably applies to other custom data types as well.
I also noticed this line in [torch.onnx.__init__.py](https://github.com/pytorch/pytorch/blob/6da26fe79b7045fac743c81ca8d38c5340de17ab/torch/onnx/__init__.py#L45)
What do you mean by this ?
> Any non-Tensor arguments (including None) will be hard-coded into the exported model
Thanks in advance for any help!
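`torch.onnx.export` only accepts tensors (and lists/tuples/dicts of tensors) at the graph boundary, so one common workaround is to pass the raw feats/coords tensors and rebuild the SparseTensor inside `forward`. A rough sketch, where `ExportWrapper` is a hypothetical helper; note that the sparse convolution ops themselves would still need ONNX symbolics or custom operators to actually export:

```python
import torch

class ExportWrapper(torch.nn.Module):
    """Hypothetical wrapper so the ONNX exporter only sees plain tensors."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, feats, coords):
        x = SparseTensor(feats, coords)  # rebuild the custom type inside the graph
        out = self.model(x)
        return out.F                     # return plain tensors, not a SparseTensor

# torch.onnx.export(ExportWrapper(model), (feats, coords), "model.onnx")
```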
|
https://github.com/pytorch/pytorch/issues/48241
|
closed
|
[] | 2020-11-19T07:05:42Z
| 2020-11-19T22:25:24Z
| null |
zeng-hello-world
|
pytorch/tutorials
| 1,247
|
Training with batch size > 1 for adversarial example generation
|
The tutorial notebook on [Adversarial Training](https://github.com/pytorch/tutorials/blob/master/beginner_source/fgsm_tutorial.py) uses a batch size of 1. What code changes are needed if we want to train with a batch size of, say, 16? My understanding is that we only need to change the logic of:
```
final_pred = output.max(1, keepdim=True)[1]  # get the index of the max log-probability
# Now we have batch size > 1
final_pred.squeeze_()
indexes = final_pred == target
correct += torch.sum(indexes).item()
```
Is anything else needed? With this change, I get values that are very similar to the batch_size=1 case, although not identical. Any help would be appreciated.
|
https://github.com/pytorch/tutorials/issues/1247
|
closed
|
[
"question"
] | 2020-11-18T16:35:29Z
| 2023-03-14T21:02:30Z
| null |
chinmay5
|
pytorch/TensorRT
| 230
|
❓ [Question] We don't have an op for aten::addmm
|
## ❓ Question
I'm trying to convert a modified version of Yolov3 to TensorRT. I have the model scripted to TorchScript and I'm trying to run trtorchexec on it.
I'm getting an error
```
Checking operator support
terminate called after throwing an instance of 'c10::Error'
what(): 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":461, please report a bug to PyTorch. We don't have an op for aten::addmm but it isn't a special case. Argument types: Tensor, int[], int[], int[], int[], bool,
Exception raised from analyzeImpl at ../torch/csrc/jit/ir/alias_analysis.cpp:461 (most recent call first):
```
I'm using the `pytorch_update` branch (since I need to use pytorch 1.7.0 & cuda 11.1), and I've merge master into it to get the latest updates (https://github.com/lablabla/TRTorch/tree/pytorch_update)
## What you have already tried
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.7.0
- CPU Architecture:
- OS (e.g., Linux): Ubuntu 16.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): Built TRTorch from sources, Bazel downloads prebuilt 1.7.0
- Build command you used (if compiling from source):
- Are you using local sources or building from archives: local sources
- Python version: 3.8.5
- CUDA version: 11.1
- GPU models and configuration: GeForce GTX 980
- Any other relevant information:
## Additional context
I saw this commit https://github.com/NVIDIA/TRTorch/commit/c5b6202 so I figured `aten::addmm` should be supported, but I guess I'm missing something.
|
https://github.com/pytorch/TensorRT/issues/230
|
closed
|
[
"question",
"No Activity"
] | 2020-11-18T15:41:45Z
| 2021-04-20T00:02:56Z
| null |
lablabla
|
pytorch/vision
| 3,022
|
MaskRCNN Training on Images with no Annotations
|
Hi all,
I am working on a little MaskRCNN training program and ran into an issue. I know it is common practice to remove any images that lack annotations when initializing the dataset, which I am doing. However, I am running a series of transforms from albumentations on my image and my mask, and one of these transforms is a random crop, so sometimes the resulting mask no longer contains any instances. I was trying to find a way to pass in an empty tensor of some kind, without much success. Would it be common practice just to remove the sample from the batch? If so, what happens with a batch size of 1, or with an image that has only one annotation and the chance that the random crop includes it is really low? I was able to create an empty tensor and pass it in, but then received this error.
`RuntimeError: cannot reshape tensor of 0 elements into shape [0, -1] because the unspecified dimension size -1 can be any value and is ambiguous`
This is because my box tensor had a shape of (0, 4), which is what I want since there are no instances. I read some of the other issue reports; they talked about creating a background class, making a small bounding box, and using an empty segmentation mask, but this seems a little hacky, and I was wondering if there is a better solution for my specific use case.
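For reference, recent torchvision releases (0.6+, if I remember correctly) accept images with no annotations as long as the target tensors are empty with the right shapes and dtypes. A rough sketch, where `H`, `W` and `idx` are placeholders for the crop size and the sample index:

```python
import torch

# hypothetical empty target for an image whose crop contains no instances
target = {
    "boxes": torch.zeros((0, 4), dtype=torch.float32),
    "labels": torch.zeros((0,), dtype=torch.int64),
    "masks": torch.zeros((0, H, W), dtype=torch.uint8),
    "area": torch.zeros((0,), dtype=torch.float32),
    "iscrowd": torch.zeros((0,), dtype=torch.int64),
    "image_id": torch.tensor([idx]),
}
```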
|
https://github.com/pytorch/vision/issues/3022
|
open
|
[
"question",
"awaiting response",
"topic: object detection"
] | 2020-11-18T15:23:52Z
| 2020-11-30T10:42:01Z
| null |
gatordevin
|
pytorch/TensorRT
| 229
|
Build trtorch failed in ubuntu
|
I tried to build the project with bazel but it failed.
my environment:
gcc: 7.5.0
g++: 7.5.0
cuda: 10.2
cudnn: 7.6.5
tensorRT: 7.0.0.11
error log:
[log.txt](https://github.com/NVIDIA/TRTorch/files/5559949/log.txt)
$ bazel build //:libtrtorch --compilation_mode opt
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:libtrtorch (39 packages loaded, 2546 targets configured).
INFO: Found 1 target...
ERROR: /home/vincent/Projects/TRTorch/cpp/trtorchc/BUILD:10:10: Linking of rule '//cpp/trtorchc:trtorchc' failed (Exit 1) gcc failed: error executing command /usr/bin/gcc @bazel-out/k8-opt/bin/cpp/trtorchc/trtorchc-2.params
Use --sandbox_debug to see verbose messages from the sandbox
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(helpers.o):helpers.cpp:function nvinfer1::getNvrtcMajorVersion(): error: undefined reference to 'nvrtcVersion'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainSyncUserReleasing_impl_init_v3: error: undefined reference to 'dlopen'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainSyncUserReleasing_impl_init_v3: error: undefined reference to 'dlsym'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainSyncUserReleasing_impl_init_v3: error: undefined reference to 'dlclose'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainResourceDestroy_impl_init_v3: error: undefined reference to 'dlopen'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainResourceDestroy_impl_init_v3: error: undefined reference to 'dlsym'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainResourceDestroy_impl_init_v3: error: undefined reference to 'dlclose'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainDestroy_impl_init_v3: error: undefined reference to 'dlopen'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainDestroy_impl_init_v3: error: undefined reference to 'dlsym'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxDomainDestroy_impl_init_v3: error: undefined reference to 'dlclose'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxMarkA_impl_init_v3: error: undefined reference to 'dlopen'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxMarkA_impl_init_v3: error: undefined reference to 'dlsym'
external/tensorrt/lib/x86_64-linux-gnu/libnvinfer_static.a(profile.o):profile.cpp:function nvtxMarkA_impl_init_v3: error: undefined reference to 'dlclose'
Could you help solve this problem? Thanks a lot.
@narendasan
|
https://github.com/pytorch/TensorRT/issues/229
|
closed
|
[
"question"
] | 2020-11-18T12:23:02Z
| 2020-11-20T02:23:10Z
| null |
inocsin
|
huggingface/datasets
| 861
|
Possible Bug: Small training/dataset file creates gigantic output
|
Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely.
I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?
I've used the following CMD:
`python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
|
https://github.com/huggingface/datasets/issues/861
|
closed
|
[
"enhancement",
"question"
] | 2020-11-17T13:48:59Z
| 2021-03-30T14:04:04Z
| null |
NebelAI
|
pytorch/TensorRT
| 226
|
How to build from sources on Windows
|
## ❓ Question
How shall I edit the WORKSPACE file in order to build tag 0.1.0 from sources on Windows?
## What you have already tried
1. I successfully did the build-from-sources process for Jetson Xavier AGX, see:
https://github.com/NVIDIA/TRTorch/issues/222
1. Based on the material that I already had from the Jetson process, I tried to do the same on Windows by editing the WORKSPACE for my Windows setup.
I changed all required new_local_repository arguments for cuda, torch, cudnn and tensorrt based on my Windows installations.
1. Ran the following command:
bazel build //:libtrtorch
The following error report was generated:
INFO: Repository rules_python instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule git_repository defined at:
C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>
ERROR: An error occurred during the fetch of repository 'rules_python':
Traceback (most recent call last):
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl", line 177
_clone_or_update(ctx)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl", line 36, in _clone_or_update
git_repo(ctx, directory)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 91, in git_repo
_update(ctx, git_repo)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 103, in _update
fetch(ctx, git_repo)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 129, in fetch
_git_maybe_shallow(ctx, <5 more arguments>)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 171, in _git_maybe_shallow
_error(ctx.name, <2 more arguments>)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 181, in _error
fail(<1 more arguments>)
error running 'git fetch origin refs/heads/*:refs/remotes/origin/* refs/tags/*:refs/tags/*' while working with @rules_python:
BUG: run-command.c:519: disabling cancellation: Invalid argument
ERROR: no such package '@rules_python//python': Traceback (most recent call last):
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl", line 177
_clone_or_update(ctx)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git.bzl", line 36, in _clone_or_update
git_repo(ctx, directory)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 91, in git_repo
_update(ctx, git_repo)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 103, in _update
fetch(ctx, git_repo)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 129, in fetch
_git_maybe_shallow(ctx, <5 more arguments>)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 171, in _git_maybe_shallow
_error(ctx.name, <2 more arguments>)
File "C:/users/General/_bazel_General/zs4npqzu/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 181, in _error
fail(<1 more arguments>)
error running 'git fetch origin refs/heads/*:refs/remotes/origin/* refs/tags/*:refs/tags/*' while working with @rules_python:
BUG: run-command.c:519: disabling cancellation: Invalid argument
INFO: Elapsed time: 1.097s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.6
- CPU Architecture: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 2592 Mhz, 4 Core(s), 8 Logical Processor(s)
- OS (e.g., Linux): Windows
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip3
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version: 3.6.8
- CUDA version: 11.0
- GPU models and configuration: Quadro M2000M
- Any other relevant information: TensorRT 7.2.1, CuDNN 8.0.1
## Additional context
I have good experience with TensorRT development on my Windows setup, so I know that from the NVIDIA libraries setup point of view everything should be fine.
|
https://github.com/pytorch/TensorRT/issues/226
|
closed
|
[
"question",
"channel: windows"
] | 2020-11-17T11:57:18Z
| 2022-09-02T18:12:18Z
| null |
OronG13
|
pytorch/pytorch
| 48,075
|
How to convert syncbn to batchnormND?
|
I want to run a model with SyncBatchNorm on CPU, so I have to convert the SyncBatchNorm layers back to BatchNormNd. How can I do that?
I only found a way to convert from BatchNorm to SyncBatchNorm, but how do I do the opposite? Thanks in advance.
[convert2syncbn](https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html?highlight=sync#torch.nn.SyncBatchNorm.convert_sync_batchnorm)
cc @albanD @mruberry
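As far as I know there is no built-in inverse of `convert_sync_batchnorm`, but a manual swap is straightforward. A rough sketch, assuming the SyncBatchNorm layers normalize 4-D (N, C, H, W) activations so BatchNorm2d is the right replacement (use BatchNorm1d/3d otherwise):

```python
import torch.nn as nn

def revert_sync_batchnorm(module):
    # recursively replace every SyncBatchNorm with a BatchNorm2d that
    # carries over the learned affine parameters and running statistics
    module_output = module
    if isinstance(module, nn.SyncBatchNorm):
        module_output = nn.BatchNorm2d(module.num_features, module.eps,
                                       module.momentum, module.affine,
                                       module.track_running_stats)
        if module.affine:
            module_output.weight = module.weight
            module_output.bias = module.bias
        module_output.running_mean = module.running_mean
        module_output.running_var = module.running_var
        module_output.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        module_output.add_module(name, revert_sync_batchnorm(child))
    return module_output
```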
|
https://github.com/pytorch/pytorch/issues/48075
|
closed
|
[
"module: nn",
"triaged",
"enhancement"
] | 2020-11-17T02:41:22Z
| 2020-11-18T02:42:34Z
| null |
Edwardmark
|
pytorch/pytorch
| 48,074
|
How to use libtorch (or another API) to implement "contiguous", "view", "permute", "transpose" in C++?
|
## How to use libtorch to implement "contiguous", "view", "permute", "transpose" in C++?
Hello~ I need to port Python code to C++, and I don't know how to implement "contiguous", "view", "permute" in C++. I found that libtorch can help me, but I have not found all the tensor operations I need, such as "contiguous", "view", "permute", "transpose".
The Python code is shown below:
```
def process_input_bmm(self, x):
    bsz = x.size(0)  # 18 # x.shape()=[18,192]
    # [B x N] --> [B x g x N/g]
    x = x.contiguous().view(bsz, self.n_groups, -1)  # [18, 2, 96]
    # [B x g x N/g] --> [g x B x N/g]
    x = x.transpose(0, 1)  # transpose so that group is first # [2,18,96]
    # [g x B x N/g] x [g x N/g x M/g] --> [g x B x M/g]
    x = torch.bmm(x, self.weights)  # multiply with Weights #[2,18,96]
    # add bias
    if self.use_bias:
        x = torch.add(x, self.bias)
    if self.feature_shuffle:
        # [g x B x M/g] --> [B x M/g x g]
        # [2,18,96] --> [18,96,2]
        x = x.permute(1, 2, 0)  # permute: the order of the dimensions is changed
        # [B x M/g x g] --> [B x g x M/g]
        # [18, 96, 2] --> [18,2,96]
        x = x.contiguous().view(bsz, self.n_groups, -1)
    else:
        # [g x B x M/g] --> [B x g x M/g]
        x = x.transpose(0, 1)  # transpose so that batch is first
    # feature map normalization
    if self.normalization_fn is not None:
        x = self.normalization_fn(x)
    # feature map activation (or thresholding)
    if self.act_fn is not None:  # self.act_fn: swish
        # print("act_fun in glt: ", self.act_fn)  # Swish((sigmoid): Sigmoid())
        x = self.act_fn(x)
    return x

def forward(self, x):
    """
    :param x: Input of shape [T x B x N] (should work with [B x T x N]
    :return:
    """
    if x.dim() == 2:
        x = self.process_input_bmm(x)
    elif x.dim() == 3:
        T, B, N = x.size()  # [18,1,192]
        x = x.contiguous().view(B * T, -1)  # [1*18,192]
        x = self.process_input_bmm(x)
        x = x.contiguous().view(T, B, -1)
    else:
        raise NotImplementedError
    # dropout
    if self.use_dropout:
        x = self.drop_layer(x)
    return x
```
The code I need help writing in C++ is as below:
```
x = x.contiguous().view(bsz, self.n_groups, -1)
x = x.transpose(0, 1)
x = x.permute(1, 2, 0)
```
If you can help me, please answer with C++ code that works with libtorch, or the method to implement "contiguous", "view", "permute", "transpose" with libtorch (or another API)!
Thank you very much!!!
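For reference, all four operations exist on `torch::Tensor` in the libtorch C++ API with essentially the same names; the main difference is that size lists are passed as brace-enclosed `IntArrayRef`s. A minimal sketch (the shapes and `n_groups` are only illustrative):

```cpp
#include <torch/torch.h>

// minimal sketch of the equivalent libtorch calls
torch::Tensor process(torch::Tensor x, int64_t n_groups) {
    int64_t bsz = x.size(0);
    x = x.contiguous().view({bsz, n_groups, -1});  // x.contiguous().view(bsz, g, -1)
    x = x.transpose(0, 1);                          // x.transpose(0, 1)
    x = x.permute({1, 2, 0});                       // x.permute(1, 2, 0)
    x = x.contiguous().view({bsz, n_groups, -1});
    return x;
}
```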
|
https://github.com/pytorch/pytorch/issues/48074
|
closed
|
[] | 2020-11-17T02:16:24Z
| 2020-11-17T15:44:11Z
| null |
wxyhv
|
huggingface/datasets
| 853
|
concatenate_datasets support axis=0 or 1 ?
|
I want to achieve the following result

|
https://github.com/huggingface/datasets/issues/853
|
closed
|
[
"enhancement",
"help wanted",
"question"
] | 2020-11-16T02:46:23Z
| 2021-04-19T16:07:18Z
| null |
renqingcolin
|
pytorch/pytorch
| 47,980
|
How to avoid `torch.onnx.export` use INT64?
|
In order to do inference in browser/JavaScript, I used `torch.onnx.export()` to get the onnx model.
However, the exported model used INT64, which is invalid in the JavaScript environment. I tried to change the data type in the ONNX model manually, but it brings more errors.
May I know how to force `torch.onnx.export` to use INT32? Or is there any way to deal with the INT64 before getting the ONNX model?
Thank you!
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof
|
https://github.com/pytorch/pytorch/issues/47980
|
closed
|
[
"module: onnx",
"triaged"
] | 2020-11-15T04:11:06Z
| 2021-04-21T11:23:35Z
| null |
waittim
|
pytorch/tutorials
| 1,237
|
DistributedDataParallel tutorial should use actual data
|
The current DistributedDataParallel tutorial feeds in data randomly generated on the spot. This is useful to a point but, since all real world applications will use a dataloader, it would be good to have a complete example with even MNIST that implements DistributedDataParallel. "https://pytorch.org/tutorials/intermediate/dist_tuto.html" implements the code required to use real data but also isn't using DistributedDataParallel thus leaving it up to the reader to determine which pieces they need to implement themselves and which pieces are included with DistributedDataParallel. Using real data and DistributedDataParallel would answer that question right away. One key question this would answer is how does the partitioning happen. Is the partitioning fully left to the user or is it handled by DistributedDataParallel like it is with DataParallel? I'm assuming the first one but it would be nice to have a clear example of it.
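For what it is worth, DistributedDataParallel itself does not split the data (unlike DataParallel); the partitioning is left to the user, typically via DistributedSampler. A rough sketch with MNIST, assuming the process group is already initialized and that `rank`, `world_size` and `num_epochs` are placeholders supplied by the launcher or surrounding script:

```python
import torch
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torchvision import datasets, transforms

dataset = datasets.MNIST("./data", train=True, download=True,
                         transform=transforms.ToTensor())
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)          # different shuffle on every epoch
    for data, target in loader:
        pass                          # forward/backward through the DDP-wrapped model
```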
|
https://github.com/pytorch/tutorials/issues/1237
|
closed
|
[] | 2020-11-13T18:51:01Z
| 2023-03-14T21:05:55Z
| 1
|
rmcavoy
|
pytorch/vision
| 2,999
|
CMake build failed with error: 'class c10::OperatorHandle' has no member named 'typed'
|
## 🐛 Bug
## To Reproduce
Steps to reproduce the behavior:
1. Install PyTorch that was built myself, with build information:
```
#python3
Python 3.6.8 (default, Apr 20 2020, 14:49:33)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__config__.show())
PyTorch built with:
- GCC 6.3
- C++ Version: 201402
- Intel(R) MKL-DNN v1.2.0 (Git Hash 70f8b879ea7a0c38caedb3320b7c85e8497ff50d)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.0
- NVCC architecture flags: -gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75
- CuDNN 7.6.3
- Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS=-D_GLIBCXX_USE_CXX11_ABI=0 -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=0, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,
```
2. Resolve similar problem with the solution: https://github.com/pytorch/vision/issues/2001#issuecomment-611923412
> @bmanga tow-names works.
> just add these lines to the end of the CMakeLists.txt
> ```
> set_property(TARGET torch_cuda PROPERTY INTERFACE_COMPILE_OPTIONS "")
> set_property(TARGET torch_cpu PROPERTY INTERFACE_COMPILE_OPTIONS "")
> ```
3. Build vision with following commands:
```
source /opt/rh/devtoolset-6/enable
TORCH_DIR=/usr/local/lib64/python3.6/site-packages/torch
export CUDA_HOME=/usr/local/cuda
export CUDA_NVCC_EXECUTABLE=${CUDA_HOME}/bin/nvcc
export PATH=${CUDA_HOME}/bin/:$PATH
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5"
mkdir build
cd build
cmake .. -DCMAKE_PREFIX_PATH=${TORCH_DIR} -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
make -j
```
## Expected behavior
```
[ 88%] Building CXX object CMakeFiles/torchvision.dir/torchvision/csrc/cpu/nms_cpu.cpp.o
[ 94%] Building CXX object CMakeFiles/torchvision.dir/torchvision/csrc/vision.cpp.o
In file included from /home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/vision.cpp:14:0:
/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h: In function 'at::Tensor roi_align(const at::Tensor&, const at::Tensor&, double, int64_t, int64_t, int64_t, bool)':
/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h:29:25: error: 'class c10::OperatorHandle' has no member named 'typed'
.typed<decltype(roi_align)>();
^~~~~
/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h:29:31: error: expected primary-expression before 'decltype'
.typed<decltype(roi_align)>();
^~~~~~~~
/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h: In function 'at::Tensor _roi_align_backward(const at::Tensor&, const at::Tensor&, double, int64_t, int64_t, int64_t, int64_t, int6
4_t, int64_t, int64_t, bool)':
/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h:77:12: error: 'class c10::OperatorHandle' has no member named 'typed'
.typed<decltype(_roi_align_backward)>();
^~~~~
/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/ROIAlign.h:77:18: error: expected primary-expression before 'decltype'
.typed<decltype(_roi_align_backward)>();
^~~~~~~~
In file included from /home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/vision.cpp:17:0:
/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/nms.h: In function 'at::Tensor nms(const at::Tensor&, const at::Tensor&, double)':
/home/tianyou.gty/builds/blade2.0/vision_cpp/torchvision/csrc/nms.h:19:25: error: 'class c10::OperatorHandle' has no member named 'typed'
.typed<decltype(nms)>();
|
https://github.com/pytorch/vision/issues/2999
|
closed
|
[
"question",
"topic: build"
] | 2020-11-13T09:47:07Z
| 2020-11-13T13:18:19Z
| null |
tanyokwok
|
pytorch/vision
| 2,994
|
How to dynamically split tensor
|
## How to split a tensor dynamically by split_sizes, not by constant shape
Trying to convert mask-rcnn to onnx and run on onnxruntime.
The following code tries to split mask_pred by num_mask_roi_per_img.
However, when run in onnxruntime, num_mask_roi_per_img becomes a constant value, for instance (68,), which is the number of boxes at tracing time.
```
# split batch mask prediction back to each image
num_mask_roi_per_img = [ det_bbox.shape[0] for det_bbox in det_bboxes ]
mask_preds = mask_pred.split(num_mask_roi_per_img, 0)
```
I got this error while running in onnxruntime with another input image.
>
<class 'onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument'>", "[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running SplitToSequence node. Name:'SplitToSequence_1396' Status Message: split_size_sum (68) != split_dim_size (23)"
Could anyone help me with this?
Many thanks in advance.
cc @neginraoof
|
https://github.com/pytorch/vision/issues/2994
|
closed
|
[
"topic: object detection",
"module: onnx"
] | 2020-11-12T13:27:05Z
| 2022-07-21T09:11:35Z
| null |
RunningLeon
|
pytorch/pytorch
| 47,823
|
How to freeze weights in TorchScript IR?
|
Hi, I just added a pass in TorchScript IR to convert BertLayer to the FasterTransformer encoder; however, I find the model is slow after converting to TorchScript. I got the nvprof result and found a time-consuming activity:
```
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 57.50% 1.49484s 25200 59.319us 3.2000us 151.55us _ZN2at6native27unrolled_elementwise_kernelIZZZNS0_21copy_device_to_deviceERNS_14TensorIteratorEbENKUlvE0_clEvENKUlvE2_clEvEUlfE_NS_6detail5ArrayIPcLi2EEE16OffsetCalculatorILi1EjESC_NS0_6memory15LoadWithoutCastENSD_16StoreWithoutCastEEEviT_T0_T1_T2_T3_T4_
```
I inspected my final TorchScript IR, and I guess the reason is that each run performs aten::contiguous several times, like:
```
%1752 : Float(*, *, requires_grad=1, device=cuda:0) = aten::contiguous(%1153, %21)
```
aten::contiguous is needed for tensors which will be sent to the custom op because they are converted by .transpose(-1, -2) first, but aten::contiguous seems time-consuming. So is there any way I can convert the model weights to constants in TorchScript IR so that aten::contiguous(weights) is folded into a constant tensor, or can I do something to avoid aten::contiguous? Thank you very much!
cc @gmagogsfm
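One option worth noting, a minimal sketch assuming a PyTorch build where `torch.jit.freeze` is available (1.8+, if I recall correctly): freezing a scripted module in eval mode inlines parameters and attributes into the graph as constants, which is exactly the "weights to constant" step asked about and lets later passes fold ops on them:

```python
import torch

scripted = torch.jit.script(model.eval())  # must be scripted and in eval mode
frozen = torch.jit.freeze(scripted)        # weights/attributes become graph constants
print(frozen.graph)                        # parameters now appear as prim::Constant
```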
|
https://github.com/pytorch/pytorch/issues/47823
|
closed
|
[
"oncall: jit"
] | 2020-11-12T02:51:07Z
| 2020-11-12T07:54:27Z
| null |
Sun-Knight-Soral
|
pytorch/pytorch
| 47,681
|
How to install Pytorch on AIX7.2 without internet access?
|
I am trying to install Pytorch on AIX7.2 server without internet access. I have pytorch-1.0.2.tar.gz from PYPI website and run the PIP installation as ```python -m pip install Flask --no-build-isolation --no-index --find-links ./ $pkg``` where $pkg is pytorch-1.0.2.tar.gz. However, it has the following error. How to fix it? Is it possible to install pytorch on a server without internet access?
Thanks.
```
Looking in links: ./
Processing ./pytorch-1.0.2.tar.gz
Building wheels for collected packages: pytorch
Building wheel for pytorch (setup.py): started
Building wheel for pytorch (setup.py): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-84fugyap/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-84fugyap/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-640v38y9
cwd: /tmp/pip-req-build-84fugyap/
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-84fugyap/setup.py", line 15, in <module>
raise Exception(message)
Exception: You tried to install "pytorch". The package named for PyTorch is "torch"
----------------------------------------
ERROR: Failed building wheel for pytorch
Running setup.py clean for pytorch
Failed to build pytorch
Installing collected packages: pytorch
Running setup.py install for pytorch: started
Running setup.py install for pytorch: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-84fugyap/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-84fugyap/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-k2dyu_63/install-record.txt --single-version-externally-managed --compile --install-headers /opt/freeware/include/python3.7m/pytorch
cwd: /tmp/pip-req-build-84fugyap/
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-84fugyap/setup.py", line 11, in <module>
raise Exception(message)
Exception: You tried to install "pytorch". The package named for PyTorch is "torch"
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-84fugyap/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-84fugyap/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-k2dyu_63/install-record.txt --single-version-externally-managed --compile --install-headers /opt/freeware/include/python3.7m/pytorch Check the logs for full command output.
```
cc @malfet @seemethere @walterddr
|
https://github.com/pytorch/pytorch/issues/47681
|
open
|
[
"module: build",
"triaged"
] | 2020-11-10T17:02:24Z
| 2020-11-11T02:05:42Z
| null |
bergen288
|
pytorch/serve
| 779
|
Hi, any suggestion on how to serve yolov5 on torchserve ?
|
## Is your feature request related to a problem? Please describe.
I'd like to serve yolov5 model, but there is no template in the example.
## Describe the solution
serve model from https://github.com/ultralytics/yolov5/
## Describe alternatives solution
|
https://github.com/pytorch/serve/issues/779
|
closed
|
[
"triaged_wait"
] | 2020-11-10T03:18:54Z
| 2023-07-31T17:53:42Z
| null |
yuanyuangoo
|
pytorch/tutorials
| 1,227
|
Yolov5 quantization : problem with FloatFunctional()
|
I'm trying to quantize [Yolov5 (object detection)](https://github.com/ultralytics/yolov5), and I'm following [this tutorial](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html) to do static quantization. As per the tutorial, I'm changing all torch.add calls to torch.nn.quantized.FloatFunctional() like this.
`return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))` to
`return torch.nn.quantized.FloatFunctional().add(x , self.cv2(self.cv1(x))) if self.add else self.cv2(self.cv1(x))`
When the model is calibrating it works fine, but when it comes to evaluating the quantized model, I get this error.
`RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPU' backend. 'aten::add.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Met 'aten::add.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, Named, Autograd, Profiler, Tracer].`
Then I changed FloatFunctional() to QFunctional() hoping to get a result, but I got an error during the calibration stage.
Can someone help me? Thanks in advance.
cc @jerryzh168 @jianyuh
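For reference, a likely culprit is that `FloatFunctional()` is instantiated inside `forward`: it has to be a registered submodule created in `__init__` so that `prepare()` can attach an observer to it and `convert()` can swap it for `QFunctional`. A rough sketch of the pattern (not the actual yolov5 module; `cv1`/`cv2` are stand-ins for the conv blocks):

```python
import torch.nn as nn

class BottleneckSketch(nn.Module):
    def __init__(self, cv1, cv2, add=True):
        super().__init__()
        self.cv1, self.cv2, self.add = cv1, cv2, add
        self.skip_add = nn.quantized.FloatFunctional()  # created once, registered

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        # FloatFunctional.add is observed during calibration and becomes a
        # quantized add after convert(), unlike a plain `+` on quantized tensors
        return self.skip_add.add(x, y) if self.add else y
```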
|
https://github.com/pytorch/tutorials/issues/1227
|
closed
|
[
"question",
"module: quantization"
] | 2020-11-09T16:38:33Z
| 2023-03-16T22:31:13Z
| null |
bingiflash
|
pytorch/tutorials
| 1,225
|
Seq2Seq Transformer Tutorial
|
I'm having difficulty understanding a few aspects of the Seq2Seq transformer tutorial (https://pytorch.org/tutorials/beginner/transformer_tutorial.html)
1. The tutorial says that it implements the architecture from Attention Is All You Need, but I don't see a TransformerDecoder used anywhere. It instead looks like only a TransformerEncoder is used. How does this example work without the decoder?
2. The tutorial says that it uses a softmax to output probabilities over the dictionary, but I only see a linear output layer. Where is the softmax applied?
3. Is this model learning to predict one word ahead (e.g. [hi how are you] -> [how are you doing])? I can't find the actual task described anywhere, only the inputs and targets in terms of an alphabet
Appreciate any help.
cc @pytorch/team-text-core @Nayef211
|
https://github.com/pytorch/tutorials/issues/1225
|
closed
|
[
"module: torchtext",
"docathon-h1-2023",
"easy"
] | 2020-11-08T20:39:19Z
| 2023-06-09T16:32:37Z
| 5
|
mmwebster
|
pytorch/pytorch
| 47,577
|
How to implement an iterable dataset with multiple workers
|
Hi
I have a TFDS dataset, which I convert to an iterable dataset in PyTorch. It is not clear to me how to make it work with multiple workers. Here is minimal code to show what I mean; could you please help me complete it for multiple workers, and show how I can implement worker_init_fn(worker_id) for this case? I also need to implement a distributed sampler for this dataset class, and I would appreciate your help with that as well. Thanks.
```
from torch.utils.data import Dataset, DataLoader
import torch
import tensorflow_datasets as tfds
import tensorflow as tf
import itertools
from itertools import cycle, islice


def get_dummy_dataset():
    inputs = ["input 1",
              "input 2",
              "input 3",
              "input 4"]
    target = ["target 1",
              "target 2",
              "target 3",
              "target 4"]
    features = {"inputs": inputs, "targets": target}

    def my_fn(features):
        ret = {}
        for k, v in features.items():
            ret[f'{k}_plaintext'] = v
        return ret

    dataset = tf.data.Dataset.from_tensor_slices(features)
    dataset = dataset.map(my_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    return dataset


class WMTDataset(torch.utils.data.IterableDataset):
    def __init__(self, batch_size):
        super(WMTDataset).__init__()
        dataset = get_dummy_dataset()
        self.dataset_size = 4
        self.batch_size = batch_size
        self.dataset = self.create_dataset(dataset)

    def __len__(self):
        return self.dataset_size

    def __iter__(self):
        return self.dataset

    def create_dataset(self, dataset):
        dataset = dataset.batch(self.batch_size, drop_remainder=False)
        return itertools.cycle(dataset)


iterable_dataset = WMTDataset(batch_size=2)
loader = DataLoader(iterable_dataset, batch_size=None)
for batch in islice(loader, 2):
    print("#########batch ", batch)
```
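For multiple workers, each DataLoader worker gets its own copy of the IterableDataset, so without sharding every worker would yield the same batches. A rough sketch of per-worker sharding inside `__iter__` using `torch.utils.data.get_worker_info()` (a `worker_init_fn` is not strictly required for this pattern):

```python
from torch.utils.data import get_worker_info

def __iter__(self):
    info = get_worker_info()
    if info is None:                      # single-process data loading
        worker_id, num_workers = 0, 1
    else:
        worker_id, num_workers = info.id, info.num_workers
    # round-robin shard: each worker keeps every num_workers-th batch
    for i, batch in enumerate(self.dataset):
        if i % num_workers == worker_id:
            yield batch
```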
|
https://github.com/pytorch/pytorch/issues/47577
|
closed
|
[] | 2020-11-08T13:01:52Z
| 2020-11-09T16:03:07Z
| null |
rabeehkarimimahabadi
|
pytorch/pytorch
| 47,574
|
How to add custom CUDA function as torchScript Node?
|
Hi, i want to add my CUDA function as a torchScript Node, but i can't use torchScript extention op as i can't let other people to use so file. It's there a way? Thank you very much!
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/47574
|
closed
|
[
"oncall: jit"
] | 2020-11-08T10:28:25Z
| 2021-02-27T07:58:59Z
| null |
Sun-Knight-Soral
|
pytorch/pytorch
| 47,573
|
How to add custom CUDA function as torchScript Node?
|
Hi, i want to add my CUDA function as a torchScript Node, but i can't use torchScript extention op as i can't let other people to use so file. It's there a way? Thank you very much!
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/47573
|
closed
|
[
"oncall: jit"
] | 2020-11-08T10:28:06Z
| 2020-11-08T17:15:02Z
| null |
Sun-Knight-Soral
|
pytorch/pytorch
| 47,572
|
How to add custom CUDA function as torchScript node?
|
Hi, i want to add my CUDA function to torchScript as a Node, but i don't want to use torchScript extention op as i can't let other people to load so file, is there any way? Thankyou very much!
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/47572
|
closed
|
[
"oncall: jit"
] | 2020-11-08T10:25:02Z
| 2020-11-08T17:15:35Z
| null |
Sun-Knight-Soral
|
pytorch/pytorch
| 47,548
|
how to extract more than two variables using default_collate from torch.utils.data.dataloader?
|
Kindly help as how to extract more than just two variables (x,y) using default_collate from torch.utils.data.dataloader.
|
https://github.com/pytorch/pytorch/issues/47548
|
closed
|
[] | 2020-11-07T05:52:49Z
| 2020-11-09T16:00:31Z
| null |
Jayashree-Pougajendy
|
pytorch/xla
| 2,613
|
How to get function return from xmp.spawn distributed processes
|
Wonder how do we get value returned from spawned functions.
For example, if accuracy is calculated on each core and I want it to be returned to the main function:
```
def _mp_fn():
    # some training and evaluation code here
    return accuracy
```
```
accuracy = xmp.spawn(_mp_fn, nprocs=8)
```
In the multiprocessing library we can do something like
```
if __name__ == '__main__':
    p = Pool(processes=20)
    data = p.map(job, [i for i in range(20)])
    p.close()
    print(data)
```
How do we do it with xmp.spawn?
|
https://github.com/pytorch/xla/issues/2613
|
closed
|
[] | 2020-11-06T19:58:07Z
| 2020-11-11T14:51:43Z
| null |
8key
|
pytorch/pytorch
| 47,491
|
How to get averaged loss in multi-gpu training ?
|
Hi,
I am using multi-gpu training, following the tutorial:
https://pytorch.org/docs/stable/notes/ddp.html
I am trying to construct the curves of training and validation losses for visualization, but it seems I can only access the loss of one GPU.
I know that the multi-GPU losses are averaged before backpropagation, so how do I get the averaged loss?
Thank you !
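For logging purposes, a common approach, sketched below assuming the default process group is initialized (as it is under DDP) and that `loss` is the per-rank loss from the training step, is to all-reduce a detached copy; this does not change the backward pass, it only averages the value for reporting:

```python
import torch.distributed as dist

loss_for_log = loss.detach().clone()
dist.all_reduce(loss_for_log, op=dist.ReduceOp.SUM)
loss_for_log /= dist.get_world_size()
if dist.get_rank() == 0:
    print(f"averaged loss: {loss_for_log.item():.4f}")
```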
|
https://github.com/pytorch/pytorch/issues/47491
|
closed
|
[] | 2020-11-06T05:44:59Z
| 2020-11-06T05:58:38Z
| null |
shuuchen
|
pytorch/text
| 1,071
|
How to get the translation results from tensor in seq2seq model
|
## ❓ Questions and Help
**Description**
I am trying to implement my own MT engine; I am following the steps in https://github.com/bentrevett/pytorch-seq2seq/blob/master/1%20-%20Sequence%20to%20Sequence%20Learning%20with%20Neural%20Networks.ipynb
I also posted a question at https://stackoverflow.com/questions/64694786/pytorch-build-seq2seq-mt-model-but-how-to-get-the-translation-results-from-the
```
SRC = Field(tokenize=tokenize_en,
init_token='<sos>',
eos_token='<eos>',
lower=True)
TRG = Field(tokenize=tokenize_de,
init_token='<sos>',
eos_token='<eos>',
lower=True)
```
After training the model, the link only shares a way to batch-evaluate, but I want to try a single string and get the translation results. For example, I want my model to translate the input "Boys" and get the German translation.
```
savedfilemodelpath='./pretrained_model/2020-09-27en-de.pth'
model.load_state_dict(torch.load(savedfilemodelpath))
model.eval()
inputstring = 'Boys'
processed=SRC.process([SRC.preprocess(inputstring)]).to(device)
output=model(processed,processed)
output_dim = output.shape[-1]
outputs = output[1:].view(-1, output_dim)
for item in outputs:
print('item shape is {} and item.argmax is {}, and words is {}'.format(item.shape,item.argmax(),TRG.vocab.itos[item.argmax()]))
```
So my question is: is it right to get the translation results as follows?
First: convert the string to a tensor
```
inputstring = 'Boys'
processed=SRC.process([SRC.preprocess(inputstring)]).to(device)
```
Second: send the tensor to the model. As the model has a TRG parameter, I have to give it a tensor; can I avoid giving the TRG tensor?
```
output=model(processed,processed)
output_dim = output.shape[-1]
outputs = output[1:].view(-1, output_dim)
```
Third: from the returned tensor, I use the argmax to get the translation results? Is that right?
Or how else can I get the right translation results?
```
for item in outputs:
print('item shape is {} and item.argmax is {}, and words is {}'.format(item.shape,item.argmax(),TRG.vocab.itos[item.argmax()+1]))
```
Thanks a lot.
|
https://github.com/pytorch/text/issues/1071
|
closed
|
[] | 2020-11-06T02:12:34Z
| 2020-11-06T06:05:17Z
| null |
Oscarjia
|
pytorch/pytorch
| 47,483
|
Update how to build PyTorch with CUDA Windows instructions
|
PyTorch currently could not be build using recommended `14.11.25503` minimal toolchain, see:
https://github.com/pytorch/pytorch/blame/b4b0fa637178baf9147416b550c7db70de6a5fa3/README.md#L258
But if one tries to following this instructions using PyTorch-1.7 or newer it will fail with as shown in:
https://github.com/pytorch/pytorch/issues/46208#issuecomment-707352250
cc @malfet @seemethere @walterddr @jlin27 @mruberry @peterjc123 @maxluk @nbcsm @guyang3532 @gunandrose4u @smartcat2010 @mszhanyi
|
https://github.com/pytorch/pytorch/issues/47483
|
closed
|
[
"module: build",
"module: windows",
"module: docs",
"triaged",
"windows-triaged"
] | 2020-11-06T01:27:58Z
| 2020-11-16T16:16:00Z
| null |
malfet
|
pytorch/xla
| 2,606
|
how to make sure pytorch xla is doing data parallelism
|
Hi
When I call xm.spawn to distribute work over multiple TPU cores, how can I make sure this is actually working and making use of all cores? Thanks.
|
https://github.com/pytorch/xla/issues/2606
|
closed
|
[] | 2020-11-05T16:30:15Z
| 2020-11-30T18:18:00Z
| null |
rabeehkarimimahabadi
|
pytorch/pytorch
| 47,439
|
how to use torch.utils.checkpoint + gru with variable length sequence?
|
I just want to use torch.utils.checkpoint on GRU to save gpu memory.
```py
def check(self, packed):
    out, _ = self.rnn(packed)
    padded = pad_packed_sequence(out, batch_first=True)
    return padded

def forward(self, x, lengths):
    """Handles variable size captions
    """
    x = self.embed(x)
    packed = pack_padded_sequence(x, lengths, batch_first=True)
    padded = checkpoint(self.check, packed)
```
My code is shown above.
I got a warning:
**UserWarning: None of the inputs have requires_grad=True. Gradients will be None**
because packed is a PackedSequence and has no requires_grad attribute.
Then I tried another way to do it:
```py
def check(self, x, lengths):
    packed = pack_padded_sequence(x, lengths, batch_first=True)
    out, _ = self.rnn(packed)
    padded = pad_packed_sequence(out, batch_first=True)
    return padded

def forward(self, x, lengths):
    """Handles variable size captions
    """
    x = self.embed(x)
    padded = checkpoint(self.check, x, lengths)
```
Then I got an error:
```
Traceback (most recent call last):
  File "D:\安装程序\PyCharm 2019.2.3\helpers\pydev\pydevd.py", line 2073, in <module>
    main()
  File "D:\安装程序\PyCharm 2019.2.3\helpers\pydev\pydevd.py", line 2067, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "D:\安装程序\PyCharm 2019.2.3\helpers\pydev\pydevd.py", line 1418, in run
    return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
  File "D:\安装程序\PyCharm 2019.2.3\helpers\pydev\pydevd.py", line 1425, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\安装程序\PyCharm 2019.2.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "D:/study/workspace/Python/xxxx/train.py", line 300, in <module>
    main()
  File "D:/study/workspace/Python/xxxx/train.py", line 144, in main
    train(opt, train_loader, model, epoch, val_loader)
  File "D:/study/workspace/Python/xxxx/train.py", line 181, in train
    model.train_emb(*train_data)
  File "D:\study\workspace\Python\SCAN\model.py", line 632, in train_emb
    loss.backward()
  File "D:\Environment\Anaconda\envs\PyTorch\lib\site-packages\torch\tensor.py", line 185, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "D:\Environment\Anaconda\envs\PyTorch\lib\site-packages\torch\autograd\__init__.py", line 127, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: element 1 of tensors does not require grad and does not have a grad_fn
```
So I want to know how I can use torch.utils.checkpoint on a GRU with variable-length sequences.
Thank you.
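A minimal sketch of one possible workaround (the module and sizes below are hypothetical): the second error occurs because `pad_packed_sequence` returns an `(output, lengths)` tuple and the integer `lengths` tensor has no `grad_fn`, so the checkpointed function here returns only the padded output and discards the lengths inside the checkpoint.
```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class Encoder(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def _run_rnn(self, x, lengths):
        packed = pack_padded_sequence(x, lengths, batch_first=True)
        out, _ = self.rnn(packed)
        # pad_packed_sequence returns (padded, lengths); the int lengths tensor
        # has no grad_fn, which is what raised the backward error, so only the
        # padded output is returned from the checkpointed function.
        padded, _ = pad_packed_sequence(out, batch_first=True)
        return padded

    def forward(self, x, lengths):
        x = self.embed(x)  # requires grad, so the checkpoint warning goes away
        return checkpoint(self._run_rnn, x, lengths)
```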
cc @zou3519
|
https://github.com/pytorch/pytorch/issues/47439
|
open
|
[
"module: rnn",
"triaged"
] | 2020-11-05T13:25:06Z
| 2023-11-02T13:26:34Z
| null |
liuyyy111
|
pytorch/vision
| 2,963
|
detector as feature extractor
|
Hello,
I am using Mask R-CNN for detection, so basically fine-tuning. However, I want to extract a feature for each object that is detected,
possibly the feature vector just before the last layer. How can I do that? Forward hooks?
I was also looking into https://github.com/pytorch/vision/blob/master/torchvision/models/_utils.py but could not get it working.
Also, how can I use JIT for the same?
Any leads would be helpful. @fmassa
Cheers!
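A rough sketch of the forward-hook approach, not an official API: the hook below captures the output of `roi_heads.box_head`, i.e. the per-RoI vectors right before the class/box predictor. Note these correspond to all post-RPN proposals rather than one-to-one to the final detections, so they still need to be matched to the kept boxes.
```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()
captured = {}

def save_box_features(module, inputs, output):
    # For the default TwoMLPHead this is a [num_proposals, 1024] tensor, i.e.
    # the per-RoI representation right before the class/box predictor.
    captured["box_features"] = output.detach()

handle = model.roi_heads.box_head.register_forward_hook(save_box_features)

with torch.no_grad():
    detections = model([torch.rand(3, 480, 640)])

handle.remove()
print(captured["box_features"].shape, len(detections[0]["boxes"]))
```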
|
https://github.com/pytorch/vision/issues/2963
|
open
|
[
"question"
] | 2020-11-04T22:49:19Z
| 2020-11-10T12:39:51Z
| null |
gaussiangit
|
pytorch/vision
| 2,959
|
Allow torchvision.io to pass through ToTensor()
|
## 🚀 Ensure torchvision.io is a drop-in replacement with current workflows
The following snippet will fail.
```
img = torchvision.io.read_image()
img = torchvision.transforms.ToTensor()(img)
```
## Pitch
Consider making native io compatible with existing transform workflows by allowing the tensor type to pass through `ToTensor()`. This would still scale down tensor values to the range 0-1 and not impact downstream transformations.
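For reference, a small sketch of the current workaround rather than a proposed API (the file name is hypothetical): since `read_image` already returns a `uint8` tensor, scaling it manually reproduces the 0-1 range that `ToTensor()` would give.
```python
import torch
import torchvision

img = torchvision.io.read_image("example.jpg")  # uint8 tensor, shape [C, H, W]
img = img.to(torch.float32) / 255.0             # same 0-1 range ToTensor produces
```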
|
https://github.com/pytorch/vision/issues/2959
|
closed
|
[
"question",
"needs discussion"
] | 2020-11-04T03:56:27Z
| 2020-11-20T09:46:26Z
| null |
jgbradley1
|
pytorch/vision
| 2,955
|
[RFC] How to handle BC breaking changes on Model weights or hyper-parameters
|
## 🚀 Feature
In order to fix bugs we are sometimes forced to introduce BC breaking changes. While the process of such introductions is clear when it comes to code changes, it's not when it comes to model weights or hyper-parameters. Thus we should define when, why and how to introduce BC-breaking changes when it comes to model weights or model hyper-parameters.
## Motivation
We have recently bumped into a few issues that motivate this. Here are a few examples:
- On #2326 we discovered a bug in the initialization of some weights of all detection models. If we fix the bug on code, we should probably retrain the models. What happens if their accuracy improves? How do we make them available to our users?
- How do we handle cases such as #2599 where in order to fix a bug we need to update the hyper-parameters of the model?
## Approaches
There are quite a few different approaches for this:
1. Replace the old parameters and Inform the community about the BC breaking changes. Example: #2942
- Reasonable approach when the accuracy improvement is substantial or the effect on the model behaviour is negligible.
- Keeps the code-base clean from workarounds and minimizes the number of weights we provide.
- Can potentially cause issues to users who use transfer learning.
2. Write code/workarounds to minimize the effect of the changes on existing models. Example: #2940
- Reasonable approach when the changes lead to slight decrease in accuracy.
- Minimizes the effects on users who used pre-trained models.
- Introduces ugly workarounds on the code and increases the number of weights we provide.
3. Introduce versioning on model weights:
- Appropriate when introducing significant changes on the models.
- Keeps the code-base clean from workarounds.
- Forces us to maintain multiple versions of weights and model config.
It's worth discussing whether we want to adapt our approach depending on the characteristics of the problem or if we want to go with one approach for all cases. Moreover it's worth investigating whether we need to handle differently changes on weights vs changes on hyper-parameters used on inference.
cc @fmassa @cpuhrsch @vfdev-5 @mthrok
|
https://github.com/pytorch/vision/issues/2955
|
open
|
[
"needs discussion",
"version incompatibility"
] | 2020-11-03T12:10:36Z
| 2021-09-04T16:37:54Z
| null |
datumbox
|
pytorch/vision
| 2,951
|
Imagenet Pre-trained model for other Depth Multiplier
|
In the MNASNet model under the mnasnet.py file, links to ImageNet pretrained weights are provided for only two depth multipliers, as shown in the code below:
```python
_MODEL_URLS = {
    "mnasnet0_5":
    "https://download.pytorch.org/models/mnasnet0.5_top1_67.823-3ffadce67e.pth",
    "mnasnet0_75": None,
    "mnasnet1_0":
    "https://download.pytorch.org/models/mnasnet1.0_top1_73.512-f206786ef8.pth",
    "mnasnet1_3": None
}
```
Can you provide links to ImageNet pretrained models for mnasnet0_75 and mnasnet1_3?
|
https://github.com/pytorch/vision/issues/2951
|
open
|
[
"question",
"module: models"
] | 2020-11-03T06:58:39Z
| 2020-11-06T00:56:35Z
| null |
NaifahNurya
|
pytorch/vision
| 2,943
|
the divide mistake of positive and negative samples
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
When dividing positive and negative samples, anchors whose matched gt_boxes index is 0 will be mistaken for negative samples:
```python
for matched_idxs_per_image in matched_idxs:
    positive = torch.nonzero(matched_idxs_per_image >= 1).squeeze(1)
    negative = torch.nonzero(matched_idxs_per_image == 0).squeeze(1)
```
|
https://github.com/pytorch/vision/issues/2943
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2020-10-31T07:58:29Z
| 2020-11-06T10:29:01Z
| null |
ghost
|
pytorch/pytorch
| 47,147
|
libtorch 1.6.0: How to make the data of each batch have different sizes
|
libtorch 1.6.0, Win10 x64.
I wrote a dataset for an OCR model. Each word is encoded with a different length as the label input. How can I make the data in each batch have different sizes?
Example:
data: 123.png, data size: [batchsize, 3, 180, 32], label: 123, label size: [batchsize, 3]
data: 3234.png, data size: [batchsize, 3, 180, 32], label: 3234, label size: [batchsize, 4]
[batchsize, 3] != [batchsize, 4]
The dataset in Pytorch supports different sizes, but libtorch does not.
|
https://github.com/pytorch/pytorch/issues/47147
|
closed
|
[] | 2020-10-31T04:30:47Z
| 2020-11-01T03:52:41Z
| null |
williamlzw
|
pytorch/pytorch
| 47,118
|
How to specify the instances for batches
|
I am trying to solve a multi-task learning problem where I want to implement a homogeneous epoch sampling strategy (i.e., in a single batch, instances from only one task are present, and such batches are shuffled).
For example, B_ij means that the i-th batch during training belongs to the j-th task.
Let's assume the tasks are A, B, C:
B1A, B2B, B3A, B4C, B5B, ....
So a batch contains instances of one task only.
How can this be achieved?
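A minimal sketch of one way to do it (the `task_ids` list is a hypothetical per-sample task label, not an existing PyTorch API): a custom batch sampler that builds per-task batches and then shuffles the batch order, passed to `DataLoader` via `batch_sampler`.
```python
import random
from collections import defaultdict
from torch.utils.data import Sampler

class HomogeneousBatchSampler(Sampler):
    """Yields batches whose indices all come from one task; batch order is shuffled."""

    def __init__(self, task_ids, batch_size, shuffle=True):
        self.task_ids = list(task_ids)   # task label for every dataset index
        self.batch_size = batch_size
        self.shuffle = shuffle

    def _build_batches(self):
        per_task = defaultdict(list)
        for idx, task in enumerate(self.task_ids):
            per_task[task].append(idx)
        batches = []
        for indices in per_task.values():
            if self.shuffle:
                random.shuffle(indices)
            for i in range(0, len(indices), self.batch_size):
                batches.append(indices[i:i + self.batch_size])
        if self.shuffle:
            random.shuffle(batches)  # interleave tasks across the epoch
        return batches

    def __iter__(self):
        return iter(self._build_batches())

    def __len__(self):
        return len(self._build_batches())

# Hypothetical usage, where dataset[i] belongs to task task_ids[i]:
# loader = DataLoader(dataset, batch_sampler=HomogeneousBatchSampler(task_ids, 32))
```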
|
https://github.com/pytorch/pytorch/issues/47118
|
closed
|
[] | 2020-10-30T15:13:54Z
| 2020-10-30T16:38:44Z
| null |
nrjvarshney
|
huggingface/pytorch-image-models
| 261
|
What is different with paper for mobilenet v3 and efficientNet
|
Thanks for your great work.
The results with your code show much higher accuracy than the reported accuracy (MobileNet V3 and EfficientNet).
I want to know what the main differences from the papers are.
|
https://github.com/huggingface/pytorch-image-models/issues/261
|
closed
|
[] | 2020-10-29T13:34:35Z
| 2020-10-30T01:15:38Z
| null |
gksruf
|
pytorch/vision
| 2,919
|
How to change the num_classes from 1000 in vgg?
|
I use
```python
model = vgg.vgg16(pretrained=True, progress=True, num_classes=10)
```
with the pretrained model 'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth'.
Then this error happened:
```
RuntimeError: Error(s) in loading state_dict for VGG:
size mismatch for classifier.6.weight: copying a param with shape torch.Size([1000, 4096]) from checkpoint, the shape in current model is torch.Size([10, 4096]).
size mismatch for classifier.6.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([10]).
```
When I use `model = vgg.vgg16(pretrained=True, progress=True, num_classes=1000)`, the error above does not occur, but after running for a long time CUDA runs out of memory.
So how can I fix this?
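A common workaround, sketched here under the assumption that 10 output classes are wanted: load the pretrained 1000-class model first, then replace only the final classifier layer. The CUDA out-of-memory error is a separate issue, most likely a batch-size problem.
```python
import torch.nn as nn
import torchvision

# Load the pretrained 1000-class VGG-16, then swap the last classifier layer.
model = torchvision.models.vgg16(pretrained=True)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 10)
```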
|
https://github.com/pytorch/vision/issues/2919
|
closed
|
[
"question"
] | 2020-10-28T07:43:00Z
| 2020-10-28T14:53:37Z
| null |
SunJJ1996
|
pytorch/pytorch
| 46,902
|
How to use clang as a cuda compiler instead of nvcc?
|
I want to ask whether we can use clang as a CUDA compiler instead of nvcc, with options such as 'TF_CUDA_CLANG' and 'CLANG_CUDA_COMPILER_PATH' similar to tensorflow/third_party/gpus/cuda_configure.bzl?
cc @malfet @seemethere @walterddr
|
https://github.com/pytorch/pytorch/issues/46902
|
open
|
[
"module: build",
"triaged",
"enhancement"
] | 2020-10-27T05:52:23Z
| 2020-11-10T03:48:41Z
| null |
HangJie720
|
pytorch/vision
| 2,894
|
Activation function for object proposals in RoI (test time)
|
## 🚀 Feature
Replace softmax with sigmoid in `postprocess_detections `method in `roi_heads`:
https://github.com/pytorch/vision/blob/5cb77a20c3c65ca6199fdf1c1bc642af7447d311/torchvision/models/detection/roi_heads.py#L677
## Motivation
In the current implementation, score is class-dependent (softmax), but NMS is class-independent. So the question is, can/should one RoI output more than 1 prediction.
## Pitch
If there are `C` classes, each RoI outputs `C` score and bounding box predictions (two tensors, of size `(1, C)` and `(4, C)` respectively) at the test stage (`postprocess_detections` method).
`pred_scores = F.softmax(class_logits, -1)`
So if there are two positive classes, pred_scores vector will be, e.g. [0.9, 0.1], and at some point both of these scores will be compared to `box_score_thresh`. Obviously one of them is very likely to be rejected. Therefore, I don’t quite understand this implementation. It should be either:
```
pred_scores = torch.sigmoid(class_logits)
preds = torch.nonzero(pred_scores > box_score_thresh)
```
to compute the scores independently, or
```
preds = class_logits.max(-1)
preds.values[preds.indices>0].sigmoid()>box_score_thresh
```
to extract the best prediction from every RoI. Then the predictions will be independent. I think it needs to be re-implemented or at least added as an argument to choose from. Mask predictions are done independently in this way:
https://github.com/pytorch/vision/blob/5cb77a20c3c65ca6199fdf1c1bc642af7447d311/torchvision/models/detection/roi_heads.py#L73
|
https://github.com/pytorch/vision/issues/2894
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2020-10-26T11:34:58Z
| 2020-10-26T12:51:20Z
| null |
AlexTS1980
|
pytorch/examples
| 837
|
License of the fast-neural-style models?
|
Are the fast-neural-style models that are downloadable through
https://github.com/pytorch/examples/blob/0f0c9131ca5c79d1332dce1f4c06fe942fbdc665/fast_neural_style/download_saved_models.py#L27
also licensed under the [BSD-3-Clause license](https://github.com/pytorch/examples/blob/master/LICENSE)?
|
https://github.com/pytorch/examples/issues/837
|
closed
|
[] | 2020-10-26T06:59:51Z
| 2021-03-04T06:12:57Z
| 3
|
pmeier
|
pytorch/vision
| 2,884
|
GroupedBatchSampler related bug in vision/references/detection/train.py
|
I strongly suspect that there is a bug in the detection trainer code that uses `GroupedBatchSampler` to group images by aspect ratio.
```
if args.distributed:
train_sampler = torch.utils.data.distributed.DistributedSampler(dataset)
test_sampler = torch.utils.data.distributed.DistributedSampler(dataset_test)
else:
train_sampler = torch.utils.data.RandomSampler(dataset)
test_sampler = torch.utils.data.SequentialSampler(dataset_test)
if args.aspect_ratio_group_factor >= 0:
group_ids = create_aspect_ratio_groups(dataset, k=args.aspect_ratio_group_factor)
train_batch_sampler = GroupedBatchSampler(train_sampler, group_ids, args.batch_size)
```
https://github.com/pytorch/vision/blob/cffac640d703196ea9a369166fa8ae587cb5e64d/references/detection/train.py#L80
Due to the random shuffle done by `DistributedSampler` and `RandomSampler`, there is an inconsistency between `train_sampler` and `group_ids`. Specifically: `group_ids` is with respect to the original dataset order (as dictated by `dataset`), but `GroupedBatchSampler` will index into `group_ids` using the indices output by `train_sampler`, eg:
```
def __iter__(self):
buffer_per_group = defaultdict(list)
samples_per_group = defaultdict(list)
num_batches = 0
for idx in self.sampler:
group_id = self.group_ids[idx]
```
https://github.com/pytorch/vision/blob/cffac640d703196ea9a369166fa8ae587cb5e64d/references/detection/group_by_aspect_ratio.py#L53
The impact is: `GroupedBatchSampler` will retrieve the wrong aspect ratios when attempting to batch images with the same aspect ratio together, resulting in batches that are sub-optimally balanced by aspect ratio.
If my understanding is correct, then: to fix this, we'd need to change the `train.py` to ensure that the `train_sampler` and `group_ids` are consistent.
I haven't yet had the time to write a small, contained test case that demonstrates the bug, but just in case I'll create this issue while it's on my mind.
|
https://github.com/pytorch/vision/issues/2884
|
closed
|
[
"question",
"module: reference scripts",
"topic: object detection"
] | 2020-10-24T07:48:48Z
| 2020-10-26T12:36:38Z
| null |
erickim555
|
pytorch/tutorials
| 1,203
|
Put some more better practices in custom operator tutorial
|
Given our experience with internal users of custom operator registration API, there are some more important things the tutorial should cover:
* Handling non-contiguous inputs
* How to use TensorIterator for easy pointwise operators
* (FB only) The rest of the scaffolding you need for fbcode
cc @dzhulgakov
|
https://github.com/pytorch/tutorials/issues/1203
|
open
|
[
"C++",
"torchscript"
] | 2020-10-23T21:15:48Z
| 2021-07-27T22:04:45Z
| 0
|
ezyang
|
pytorch/vision
| 2,878
|
Hello, I found a mismatch between the torchvision.models.detection.backbone_utils.resnet_fpn_backbone() implemented on GitHub and what we get by installing via pip: the one on GitHub has returned_layer and extra_blocks as parameters, but the one we get by installing doesn't have any of these parameters.
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/vision/issues/2878
|
closed
|
[
"question"
] | 2020-10-23T10:18:55Z
| 2020-10-23T10:38:52Z
| null |
akashprakas
|
pytorch/pytorch
| 46,760
|
How to define a new data type in native_functions.yaml?
|
How to define a new data type in native_functions.yaml?
For example, there is an existing data type "int[]", but I want a data type "float[]". What should I do?
Looking forward to your advice, I will be very grateful!
cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @ailzhang
|
https://github.com/pytorch/pytorch/issues/46760
|
open
|
[
"module: internals",
"triaged"
] | 2020-10-23T09:04:27Z
| 2020-10-26T15:16:34Z
| null |
max-niu
|
pytorch/TensorRT
| 193
|
❓ [Question] How does max_batch_size work?
|
## ❓ Question
How should one use the `max_batch_size` compilation option? There isn't much said about it in the [documentation](https://nvidia.github.io/TRTorch/py_api/trtorch.html) apart from the fact that it should be greater than 0.
## What you have already tried
Here's the toy example I'm playing with:
```
import torch
import trtorch
torch.manual_seed(0)
size = (1, 1)
torch_model = torch.nn.Linear(*size)
script_model = torch.jit.script(torch_model.eval().cuda())
trt_model = trtorch.compile(script_model, {
"input_shapes": [size],
"op_precision": torch.half,
"max_batch_size": 2
})
print("Single value:")
x1 = torch.rand(size).cuda()
print(torch_model(x1).tolist(), trt_model(x1.half()).tolist())
print("Batch:")
x2 = torch.rand((2, 1)).cuda()
print(torch_model(x2).tolist(), trt_model(x2.half()).tolist())
```
I'm expecting the output to be the same for both PyTorch and TRTorch models for both `x1` and `x2`. Here's the output I'm getting (notice the error message and missing second value from TRTorch model on the last line):
```
$ python test.py
Single value:
[[0.53578120470047]] [[0.5357810258865356]]
Batch:
ERROR: [__torch__.torch.nn.modules.linear.Linear_trt_engine] - Parameter check failed at: engine.cpp::setBindingDimensions::948, condition: profileMaxDims.d[i] >= dimensions.d[i]
[[0.5354551076889038], [0.5341419577598572]] [[0.5354547500610352]]
```
I expected that setting `max_batch_size=2` would do the trick but apparently it does not.
## Environment
- x86 CPU Architecture, 1660Ti GPU, Linux OS;
- Python 3.8.6;
- CUDA 10.1;
- PyTorch 1.5.1, installed using pip;
- TRTorch 0.0.3, installed using `pip install https://github.com/NVIDIA/TRTorch/releases/download/v0.0.3/trtorch-0.0.3-cp38-cp38-linux_x86_64.whl`
|
https://github.com/pytorch/TensorRT/issues/193
|
closed
|
[
"question"
] | 2020-10-21T15:49:13Z
| 2020-10-30T08:22:33Z
| null |
ateraz
|
pytorch/vision
| 2,853
|
How to get corresponding feature regions of final detections from feature map of backbone?
|
Hi,
For every output detection [x1, y1, x2, y2], I would like to extract its corresponding region in the feature map output of the backbone of Faster-RCNN. Similarly, I want to extract the corresponding region in the feature map for the target (groundtruth) bounding boxes.
Can you point me to how this should be done?
Thank you.
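A rough sketch of one way to do it (the feature-map shape, boxes, and scale below are hypothetical): `torchvision.ops.roi_align` can crop the backbone feature map at the detection or ground-truth boxes, with `spatial_scale` mapping image coordinates onto the downsampled map.
```python
import torch
from torchvision.ops import roi_align

features = torch.rand(1, 256, 50, 50)                 # backbone output for one image
boxes = torch.tensor([[30.0, 40.0, 120.0, 200.0]])    # detections or GT, in image coords
# If the image was 800x800, a 50x50 feature map implies a 1/16 spatial scale.
regions = roi_align(features, [boxes], output_size=(7, 7), spatial_scale=1.0 / 16)
print(regions.shape)  # [num_boxes, 256, 7, 7]
```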
|
https://github.com/pytorch/vision/issues/2853
|
open
|
[
"question",
"topic: object detection"
] | 2020-10-21T13:24:56Z
| 2020-10-27T12:11:54Z
| null |
igygi
|
pytorch/vision
| 2,850
|
Can pretrained resnet-50 extract feature from a higher resolution picture?
|
Can a pre-trained ResNet-50 extract features from a higher-resolution picture?
Typically, when we use ResNet to extract features, we need to crop the image to 224 x 224 and then pass it to ResNet.
If we want a larger image (e.g. 720 x 720) to be processed, do we have to modify and re-train the network? Can we directly use the original pre-trained network? Is the quality of the extracted features guaranteed?
Thanks!
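A quick check, sketched under the assumption that the torchvision implementation is used: its ResNet-50 ends in adaptive average pooling, so a larger input runs through the unmodified pretrained network; whether the extracted features are as good at that resolution is a separate question.
```python
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True).eval()
with torch.no_grad():
    # 720x720 input works because of the AdaptiveAvgPool2d layer before the fc.
    out = model(torch.rand(1, 3, 720, 720))
print(out.shape)  # torch.Size([1, 1000])
```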
|
https://github.com/pytorch/vision/issues/2850
|
closed
|
[
"question"
] | 2020-10-21T04:45:24Z
| 2020-10-26T12:58:27Z
| null |
Frank-Dz
|
pytorch/vision
| 2,832
|
About the segmentation.
|
In the reference scripts, I replaced the cross entropy loss in the semantic segmentation module with weighted cross entropy, and the result was worse. The weights are calculated from the training set. If the cross entropy is replaced by focal loss, the results are also poor. Why is this? Is plain cross entropy still the best loss function for semantic segmentation?
I sincerely need your help!
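For context, a small sketch of how per-class weights are typically formed and passed (the frequencies below are hypothetical, and the log-based weighting is only one common choice); if the weights come from raw inverse frequencies they can over-weight rare classes and hurt results.
```python
import torch
import torch.nn as nn

# Hypothetical per-class pixel frequencies measured on the training set.
class_freqs = torch.tensor([0.70, 0.20, 0.10])
# One common choice: dampen the weights with a log instead of raw 1/freq.
weights = 1.0 / torch.log(1.02 + class_freqs)
criterion = nn.CrossEntropyLoss(weight=weights, ignore_index=255)
```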
|
https://github.com/pytorch/vision/issues/2832
|
closed
|
[
"question"
] | 2020-10-19T01:53:05Z
| 2020-10-19T07:26:20Z
| null |
ghost
|
pytorch/text
| 1,045
|
How to get the original sentences from train_iter object?
|
## ❓ Questions and Help
**Description**
Is there an easy way to print out the original input sentences instead of tensor objects? For example:
```
def eval(data_iter, model, args):
model.eval()
corrects, avg_loss = 0, 0
for batch in data_iter:
feature, target = batch.text, batch.label
print (feature.original_sentence)
```
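A minimal sketch of one way to recover the sentences, assuming the legacy torchtext `Field` used to build the iterator is called `TEXT` and that `batch.text` has shape `[seq_len, batch_size]`: map the indices back through the field vocabulary.
```python
def decode_batch(feature, TEXT):
    """Turn a [seq_len, batch_size] index tensor back into whitespace-joined tokens."""
    pad_idx = TEXT.vocab.stoi[TEXT.pad_token]
    sentences = []
    for col in feature.t():                       # one column per example
        tokens = [TEXT.vocab.itos[idx] for idx in col.tolist() if idx != pad_idx]
        sentences.append(" ".join(tokens))
    return sentences
```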
|
https://github.com/pytorch/text/issues/1045
|
closed
|
[] | 2020-10-16T18:30:17Z
| 2020-10-16T19:25:24Z
| null |
sunyangfu
|
pytorch/pytorch
| 46,450
|
How to use GPU Tensors in different GPU streams with multiple threads
|
The thread task code is as follows:
```cpp
void* task_routine3(void** arg)
{
    struct timeval time_cur;  // note: printed below without being initialized first
    auto options = torch::TensorOptions().device(torch::kCUDA, 0);
    torch::Device device(torch::kCUDA, 0);
    pthread_t tid = pthread_self();
    std::cout << tid << " Start time:" << time_cur.tv_sec << ":" << time_cur.tv_usec << std::endl;
    at::cuda::CUDAStream mystream = at::cuda::getStreamFromPool();
    at::cuda::setCurrentCUDAStream(mystream);
    {
        at::cuda::CUDAStreamGuard guard(mystream);
        std::cout << "Stream ID: " << mystream.id() << std::endl;
        torch::Tensor* pt_base_feature_cpu = (torch::Tensor*) arg[0];
        torch::Tensor* pt_match_feature_cpu = (torch::Tensor*) arg[1];
        for (int i = 0; i < 10; i++)
        {
            torch::Tensor base_feature = (pt_base_feature_cpu->slice(0, i * 50000, (i + 1) * 50000, 1)).to(device);
            torch::Tensor match_feature = (*pt_match_feature_cpu).to(device);
            torch::Tensor tensor_tmp;
            torch::Tensor tensor_sum;
            std::tuple<torch::Tensor, torch::Tensor> sort_ret;
            tensor_tmp = torch::sub(base_feature, match_feature);
            tensor_tmp = torch::pow(tensor_tmp, 2);
            tensor_sum = torch::sum(tensor_tmp, 1);
            sort_ret = torch::topk(tensor_sum, 1);
        }
    }
    return nullptr;  // missing return added for the void* signature
}
```
I use a thread pool to run the thread function on multiple threads. I found that the running time with a single thread seems to be the same as with multiple threads. I want to use multiple threads to save running time.
How can I do it? Can anyone help me?
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/46450
|
closed
|
[] | 2020-10-16T06:44:57Z
| 2020-10-16T22:33:05Z
| null |
litttl
|