| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch
| 54,212
|
How to update a Wiki page?
|
## ❓ Questions and Help
The `-k` option for filtering tests by a string can no longer be used with `python`; it should be used with `pytest` now.
Pull requests can't be submitted for the Wiki, so I couldn't suggest an update to https://github.com/pytorch/pytorch/wiki/Writing-tests-in-PyTorch-1.8.
Please update the Wiki page with this detail. Thank you!
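For illustration, a minimal sketch of the same `-k` filter driven through pytest (the test file and test name here are hypothetical):
```python
# hypothetical file/test names; the point is that the -k filter now goes through pytest
import pytest

pytest.main(["test_torch.py", "-k", "test_add"])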
cc @brianjo @mruberry @VitalyFedyunin @walterddr
|
https://github.com/pytorch/pytorch/issues/54212
|
closed
|
[
"module: docs",
"module: tests",
"triaged"
] | 2021-03-17T21:31:48Z
| 2021-03-18T15:10:54Z
| null |
imaginary-person
|
pytorch/FBGEMM
| 553
|
Is it possible to speed up matrix multiplication by adjusting the values of the Packing parameters under the same hardware environment?
|
Hi! I am reading the source code of FBGEMM and am interested in the CPU optimization part. I found that FBGEMM sets Packing parameters for each ISA separately. I am curious whether the values of these parameters are determined empirically or by some algorithm. Is it possible to speed up matrix multiplication by adjusting the values of the Packing parameters under the same hardware environment? Is it possible to run FBGEMM on more ISAs by appropriately setting the values of the Packing parameters? I will be very grateful for your help.
|
https://github.com/pytorch/FBGEMM/issues/553
|
closed
|
[
"question"
] | 2021-03-17T05:03:16Z
| 2021-03-25T07:39:09Z
| null |
umiswing
|
pytorch/pytorch
| 53,993
|
How to set the amp to all fp16 training?
|
Hello, I would like to ask how to set up amp training to run entirely in fp16, similar to Apex's O1/O2/O3 modes? Thank you very much!
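For context, a minimal sketch of the usual distinction (my reading, not an official mapping to the Apex modes): native amp autocasts per-op, while an all-fp16 run casts the model and inputs manually:
```python
import torch

model = torch.nn.Linear(4, 4).cuda()
x = torch.randn(2, 4, device="cuda")

# native amp: per-op mixed precision (roughly Apex O1)
with torch.cuda.amp.autocast():
    out = model(x)

# everything in fp16 (roughly Apex O3): cast the model and the inputs yourself
out_fp16 = model.half()(x.half())
```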
cc @mcarilli @ptrblck
|
https://github.com/pytorch/pytorch/issues/53993
|
closed
|
[
"triaged",
"module: amp (automated mixed precision)"
] | 2021-03-15T08:02:23Z
| 2021-03-16T03:04:16Z
| null |
sky-fly97
|
pytorch/pytorch
| 53,957
|
Is pytorch 1.8.0 incompatible with cuda 11.2 or what is the reason for this error?
|
I have spent all day trying to upgrade cuda to 11.2 and get it working with pytorch. At the moment I believe I should have a fully working version of Cuda 11.2, yet I still get the following error when I try to run my pytorch code, which normally works without issues.
```
Traceback (most recent call last):
File "/snap/pycharm-community/226/plugins/python-ce/helpers/pydev/pydevd.py", line 1477, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-community/226/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/tue/PycharmProjects/Pfold/run_1d_supervised.py", line 112, in <module>
losses = main()
File "/home/tue/PycharmProjects/Pfold/supervised/main.py", line 73, in main
net = train(net, optimizer, dl_train, loss_fnc, dl_test=dl_test, scheduler=lr_scheduler,ite=ite_start, loss_reg_fnc=loss_reg_fnc, loss_reg_min_sep_fnc=loss_reg_min_sep_fnc)
File "/home/tue/PycharmProjects/Pfold/supervised/optimization.py", line 75, in train
dists_pred, coords_pred = net(features,mask)
File "/home/tue/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/tue/PycharmProjects/Pfold/supervised/network_vnet.py", line 508, in forward
dists += (tr2DistSmall(x[:,i*3:(i+1)*3,:]),)
File "/home/tue/PycharmProjects/Pfold/supervised/network_transformer.py", line 155, in tr2DistSmall
D = torch.sum(Z**2, dim=1).unsqueeze(1) + torch.sum(Z**2, dim=1).unsqueeze(2) - 2*Z.transpose(1,2) @ Z
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`
python-BaseException
Backend TkAgg is interactive backend. Turning interactive mode on.
Process finished with exit code 130 (interrupted by signal 2: SIGINT)
```
I have checked that CUDA/cuDNN seem to work; at least I was able to compile and run a hello_world script with nvcc. Additional information:
```
tue@tue-laptop:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_21:12:58_PST_2021
Cuda compilation tools, release 11.2, V11.2.152
Build cuda_11.2.r11.2/compiler.29618528_0
```
```
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0] on linux
>>> import torch
>>> torch.version.cuda
'11.1'
>>> torch.version
<module 'torch.version' from '/home/tue/.local/lib/python3.8/site-packages/torch/version.py'>
>>> torch.version.__version__
'1.8.0+cu111'
```
Searching on the error PyTorch is giving hasn't really led me to any understanding of what the problem could be, so I'm hoping for some insight here and perhaps a solution.
cc @ngimel
|
https://github.com/pytorch/pytorch/issues/53957
|
open
|
[
"module: cuda",
"triaged"
] | 2021-03-13T06:02:04Z
| 2021-03-24T14:13:31Z
| null |
tueboesen
|
pytorch/pytorch
| 53,888
|
How to shift columns (or rows) in a tensor with different offsets?
|
The `torch.roll` function is only able to shift columns (or rows) by the same offset. But I want to shift columns with different offsets. Suppose the input tensor is
```
[[1,2,3],
[4,5,6],
[7,8,9]]
```
Say, to shift with offset `i` for the i-th column, the expected output is
```
[[1,8,6],
[4,2,9],
[7,5,3]]
```
An option is to separately shift every column using `torch.roll` and stack them. But for efficiency and code compactness, I don't want to introduce a loop. Is there a better way?
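For reference, a minimal loop-free sketch using `torch.gather`, where the index arithmetic reproduces the per-column roll:
```python
import torch

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
n_rows, n_cols = x.shape
shifts = torch.arange(n_cols)               # offset i for the i-th column
rows = torch.arange(n_rows).unsqueeze(1)    # shape (n_rows, 1)
index = (rows - shifts) % n_rows            # source row for every (row, col)
out = x.gather(0, index)
print(out)  # tensor([[1, 8, 6], [4, 2, 9], [7, 5, 3]])
```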
|
https://github.com/pytorch/pytorch/issues/53888
|
closed
|
[
"triaged",
"module: advanced indexing"
] | 2021-03-12T10:11:21Z
| 2021-03-13T05:05:51Z
| null |
changmenseng
|
pytorch/FBGEMM
| 540
|
Is it possible to generate SPMDM kernels with asmjit?
|
Hi all,
Thanks for sharing such a high-performance GEMM library.
After reading through the source code, I found that only U8S8S32AC* kernels are generated with asmjit.
Is it possible to port the SpMDM code to asmjit? I'm trying to optimize SpMDM myself.
Thanks!
Yang
|
https://github.com/pytorch/FBGEMM/issues/540
|
closed
|
[
"question"
] | 2021-03-12T03:08:40Z
| 2021-03-17T16:38:56Z
| null |
YangWang92
|
pytorch/vision
| 3,547
|
How to train a classifier with a custom class count while also wanting pretrained=True?
|
It gives an error:
```
size mismatch for fc.weight: copying a param with shape torch.Size([1000, 1024]) from checkpoint, the shape in current model is torch.Size([42, 1024]).
```
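For reference, a minimal sketch of the usual workaround (the model and class count here are illustrative): load the ImageNet weights first, then replace only the classification head:
```python
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)  # loads the 1000-class checkpoint cleanly
num_classes = 42
# swap the final layer; the backbone keeps its pretrained weights
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
```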
|
https://github.com/pytorch/vision/issues/3547
|
closed
|
[
"question",
"module: models"
] | 2021-03-11T09:35:42Z
| 2021-03-19T18:06:32Z
| null |
lucasjinreal
|
pytorch/pytorch
| 53,693
|
How to use torch.distributions.Normal/log_prob in libtorch?
|
I can't find a class like torch.distributions in libtorch, so is there any way to get the log_prob of a tensor?
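A hedged note: libtorch does not appear to ship the distributions module, but for a Normal the log-density is a closed form you can build from plain tensor ops (shown in Python here; the same ops exist in the C++ API):
```python
import math
import torch

def normal_log_prob(x, mu, sigma):
    # log N(x; mu, sigma) = -(x - mu)^2 / (2 sigma^2) - log(sigma) - 0.5 log(2 pi)
    return (-((x - mu) ** 2) / (2 * sigma ** 2)
            - torch.log(sigma) - 0.5 * math.log(2 * math.pi))

x = torch.randn(3)
print(normal_log_prob(x, torch.zeros(3), torch.ones(3)))
print(torch.distributions.Normal(0.0, 1.0).log_prob(x))  # matches, for checking
```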
cc @yf225 @glaringlee @fritzo @neerajprad @alicanb @vishwakftw @nikitaved
|
https://github.com/pytorch/pytorch/issues/53693
|
closed
|
[
"module: distributions",
"module: cpp",
"triaged"
] | 2021-03-10T07:23:51Z
| 2021-03-10T15:33:47Z
| null |
scirocc
|
pytorch/pytorch
| 53,678
|
[FX] Regression from 1.8: FX can no longer trace functions where the first element of an int list is a Proxy
|
```
import torch
import torch.fx as fx
def f(x):
    return torch.reshape(x, (x.shape[0], -1))
mod = fx.symbolic_trace(f)
print(mod.code)
```
In 1.8 this worked, but it was broken by this PR, which fails since it verifies that the first element of the list is an integer (while it's actually a Proxy): https://github.com/pytorch/pytorch/pull/51350
cc @ezyang
|
https://github.com/pytorch/pytorch/issues/53678
|
open
|
[
"triaged",
"module: fx"
] | 2021-03-10T02:13:32Z
| 2022-07-20T21:23:30Z
| null |
Chillee
|
pytorch/pytorch
| 53,676
|
How to concatenate a variable number of tensors
|
## ❓ Questions and Help
How can I concatenate a variable number of tensors using `torch.cat()`? For example, I have three layers and I need to concatenate the output of these layers as below:
```
for layer in self.layers:
    src = layer(src, src_mask)
# I have three layers and I expect 3 vectors
src = torch.cat([src], 1)
```
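For reference, a minimal sketch of the usual pattern (the layer shapes here are illustrative): collect each layer's output in a Python list, then call `torch.cat` once over the whole list:
```python
import torch

layers = [torch.nn.Linear(4, 4) for _ in range(3)]  # stand-ins for self.layers
src = torch.randn(2, 4)
outputs = []
for layer in layers:
    src = layer(src)
    outputs.append(src)            # keep every layer's output
result = torch.cat(outputs, 1)     # shape (2, 12)
```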
Kind regards,
Aiman Solyman
|
https://github.com/pytorch/pytorch/issues/53676
|
closed
|
[] | 2021-03-10T01:36:15Z
| 2021-03-10T08:34:26Z
| null |
aimanmutasem
|
pytorch/TensorRT
| 391
|
❓ [Question] PyTorch 1.8 Support
|
## ❓ Question
PyTorch 1.8 (stable) was released recently.
When will TRTorch be compatible with PyTorch 1.8?
|
https://github.com/pytorch/TensorRT/issues/391
|
closed
|
[
"question"
] | 2021-03-09T05:57:53Z
| 2021-03-22T21:50:54Z
| null |
developer0hye
|
pytorch/pytorch
| 53,584
|
How to delete Module from GPU? (libtorch C++)
|
All the demos only show how to load model files. But how do I unload the model from the GPU and free up the GPU memory?
I tried this, but it doesn't work.
```cpp
model.~Module();
c10::cuda::CUDACachingAllocator::emptyCache();
```
cc @yf225 @glaringlee
|
https://github.com/pytorch/pytorch/issues/53584
|
open
|
[
"module: cpp-extensions",
"module: cpp",
"triaged"
] | 2021-03-09T02:55:03Z
| 2021-03-11T03:11:09Z
| null |
ZhiZe-ZG
|
pytorch/pytorch
| 53,580
|
How to use logging in libtorch C++? Any example? Many thanks
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
cc @yf225 @glaringlee
|
https://github.com/pytorch/pytorch/issues/53580
|
closed
|
[
"module: cpp",
"triaged"
] | 2021-03-09T02:34:56Z
| 2021-03-10T02:58:13Z
| null |
yulinhuyang
|
pytorch/serve
| 1,001
|
How to deploy a sentence transformer from UKPLab on the cloud
|
Hi community,
How could I practically deploy a pre-trained sentence transformer from UKPLab on the cloud?
I saw issue #681 and the customisation proposed, but didn't know whether it was intended for the cloud.
Secondly, once deployed on the cloud, how does one configure it at scale?
Thanks !
|
https://github.com/pytorch/serve/issues/1001
|
closed
|
[
"triaged_wait"
] | 2021-03-08T20:24:40Z
| 2021-05-13T16:51:01Z
| null |
mattvan83
|
pytorch/tutorials
| 1,401
|
Dynamic Quantization for GPT2 model from huggingface.
|
Hi,
Reproducibility required: PyTorch version 1.4.0
I am trying to use the ```torch.quantization.quantize_dynamic``` function to quantize the ```pre_trained``` DistilGPT2 model from Hugging-face.
As most transformer blocks in this model are made up of ```nn.Conv1d``` modules, a problem occurs while performing the quantization.
I understand that, because the function ```torch.quantization.quantize_dynamic``` does not define a way to quantize the ```nn.Conv1d``` layer (see the snippet below), these layers all just go **un-quantized**:
```
if qconfig_spec is None:
    if dtype == torch.qint8:
        qconfig_spec = {
            nn.Linear : default_dynamic_qconfig,
            nn.LSTM : default_dynamic_qconfig
        }
```
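For reference, a minimal sketch of the call with an explicit `qconfig_spec` (the `transformers` import is an assumption; note this still leaves the Conv1d blocks un-quantized, which is exactly the gap described above):
```python
import torch
from transformers import GPT2LMHeadModel  # assumes the transformers package is installed

model = GPT2LMHeadModel.from_pretrained("distilgpt2")
# only nn.Linear modules get dynamically quantized here
quantized = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8
)
```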
Please suggest a solution.
cc @jerryzh168 @jianyuh
|
https://github.com/pytorch/tutorials/issues/1401
|
open
|
[
"question",
"module: quantization"
] | 2021-03-08T15:06:23Z
| 2023-03-09T19:37:48Z
| null |
mriganktiwari
|
pytorch/pytorch
| 53,395
|
How to solve dist.init_process_group from hanging (or deadlocks) with DGX A100?
|
## 🐛 Bug
DDP deadlocks on a new DGX A100 machine with 8 GPUs.
## To Reproduce
Run this self-contained code:
```
"""
For code used in distributed training.
"""
from typing import Tuple
import torch
import torch.distributed as dist
import os
from torch import Tensor
import torch.multiprocessing as mp
def set_sharing_strategy(new_strategy=None):
"""
https://pytorch.org/docs/stable/multiprocessing.html
https://discuss.pytorch.org/t/how-does-one-setp-up-the-set-sharing-strategy-strategy-for-multiprocessing/113302
https://stackoverflow.com/questions/66426199/how-does-one-setup-the-set-sharing-strategy-strategy-for-multiprocessing-in-pyto
"""
from sys import platform
if new_strategy is not None:
mp.set_sharing_strategy(new_strategy=new_strategy)
else:
if platform == 'darwin': # OS X
# only sharing strategy available at OS X
mp.set_sharing_strategy('file_system')
else:
# ulimit -n 32767 or ulimit -n unlimited (perhaps later do try catch to execute this increase fd limit)
mp.set_sharing_strategy('file_descriptor')
def use_file_system_sharing_strategy():
"""
when to many file descriptor error happens
https://discuss.pytorch.org/t/how-does-one-setp-up-the-set-sharing-strategy-strategy-for-multiprocessing/113302
"""
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
def find_free_port():
""" https://stackoverflow.com/questions/1365265/on-localhost-how-do-i-pick-a-free-port-number """
import socket
from contextlib import closing
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(('', 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return str(s.getsockname()[1])
def setup_process(rank, world_size, backend='gloo'):
"""
Initialize the distributed environment (for each process).
gloo: is a collective communications library (https://github.com/facebookincubator/gloo). My understanding is that
it's a library/API for process to communicate/coordinate with each other/master. It's a backend library.
export NCCL_SOCKET_IFNAME=eth0
export NCCL_IB_DISABLE=1
https://stackoverflow.com/questions/61075390/about-pytorch-nccl-error-unhandled-system-error-nccl-version-2-4-8
https://pytorch.org/docs/stable/distributed.html#common-environment-variables
"""
import torch.distributed as dist
import os
import torch
if rank != -1: # -1 rank indicates serial code
print(f'setting up rank={rank} (with world_size={world_size})')
# MASTER_ADDR = 'localhost'
MASTER_ADDR = '127.0.0.1'
MASTER_PORT = find_free_port()
# set up the master's ip address so this child process can coordinate
os.environ['MASTER_ADDR'] = MASTER_ADDR
print(f"{MASTER_ADDR=}")
os.environ['MASTER_PORT'] = MASTER_PORT
print(f"{MASTER_PORT}")
# - use NCCL if you are using gpus: https://pytorch.org/tutorials/intermediate/dist_tuto.html#communication-backends
if torch.cuda.is_available():
# unsure if this is really needed
# os.environ['NCCL_SOCKET_IFNAME'] = 'eth0'
# os.environ['NCCL_IB_DISABLE'] = '1'
backend = 'nccl'
print(f'{backend=}')
# Initializes the default distributed process group, and this will also initialize the distributed package.
dist.init_process_group(backend, rank=rank, world_size=world_size)
# dist.init_process_group(backend, rank=rank, world_size=world_size)
# dist.init_process_group(backend='nccl', init_method='env://', world_size=world_size, rank=rank)
print(f'--> done setting up rank={rank}')
def cleanup(rank):
""" Destroy a given process group, and deinitialize the distributed package """
# only destroy the process distributed group if the code is not running serially
if rank != -1: # -1 rank indicates serial code
dist.destroy_process_group()
def get_batch(batch: Tuple[Tensor, Tensor], rank) -> Tuple[Tensor, Tensor]:
x, y = batch
if torch.cuda.is_available():
x, y = x.to(rank), y.to(rank)
else:
# I don't think this is needed...
# x, y = x.share_memory_(), y.share_memory_()
pass
return x, y
def test_setup():
print('test_setup')
world_size = 4
mp.spawn(setup_process, args=(world_size,), nprocs=4)
dist.destroy_process_group()
print('successful test_setup!')
if __name__ == '__main__':
test_setup()
```
Error message:
```
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/miranda9/miniconda3/envs/metalearning/lib/python3.8/mu
|
https://github.com/pytorch/pytorch/issues/53395
|
closed
|
[
"oncall: distributed"
] | 2021-03-05T19:14:08Z
| 2023-06-08T10:36:24Z
| null |
brando90
|
pytorch/pytorch
| 53,348
|
How to obtain the gradient of a tensor when an in-place operation is included?
|
## ❓ How to obtain the gradient of a tensor when an in-place operation is included?
For simplicity, here is code describing the question: when using `res = ma @ mb` in PyTorch, we can easily obtain the gradient of ma by calling a backward function, e.g. `(res**2).sum().backward(); print(ma.grad)`. But when this multiplication is implemented in a for-loop manner, how can we get the gradient of tensor ma or mb?
```python
import torch
ma = torch.randn(2,3,3,4).requires_grad_(True)
mb = torch.randn(2,3,4,5).requires_grad_(True)
B,C,H,W = ma.shape
B,C,W,K = mb.shape
res_torch = torch.zeros((B,C,H,K), requires_grad=True)
for b in range(B):
    for c in range(C):
        for h in range(H):
            for k in range(K):
                for r in range(W):
                    res_torch[b][c][h][k] = res_torch[b][c][h][k] + ma[b][c][h][r] * mb[b][c][r][k]
res_torch.sum().backward()
print(ma.grad)
```
A runtime error is raised for the above code: `RuntimeError: leaf variable has been moved into the graph interior`.
However, this version does not yield the expected result either:
```python
ma = torch.randn(2,3,3,4).requires_grad_(True)
mb = torch.randn(2,3,4,5).requires_grad_(True)
B,C,H,W = ma.shape
B,C,W,K = mb.shape
res_torch = torch.zeros((B,C,H,K), requires_grad=True)
for b in range(B):
    for c in range(C):
        for h in range(H):
            for k in range(K):
                res = 0
                for r in range(W):
                    res = res + ma[b][c][h][r] * mb[b][c][r][k]
                res_torch[b][c][h][k].data.fill_(res)
res_torch.sum().backward()
print(ma.grad)
```
the output was `None`.
Any hints for solving this problem?
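For reference, a minimal sketch of one workaround: build every block out-of-place and assemble with `torch.stack`, so no leaf tensor is ever written in-place:
```python
import torch

ma = torch.randn(2, 3, 3, 4).requires_grad_(True)
mb = torch.randn(2, 3, 4, 5).requires_grad_(True)
B, C, H, W = ma.shape
K = mb.shape[-1]
blocks = []
for b in range(B):
    for c in range(C):
        blocks.append(ma[b, c] @ mb[b, c])   # (H, K) block, built out-of-place
res = torch.stack(blocks).reshape(B, C, H, K)
res.sum().backward()
print(ma.grad)                               # now populated
```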
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer
|
https://github.com/pytorch/pytorch/issues/53348
|
closed
|
[
"module: autograd",
"triaged"
] | 2021-03-05T09:33:53Z
| 2021-03-06T02:53:54Z
| null |
Leiwx52
|
pytorch/vision
| 3,509
|
simple API discussion about the AutoAugment
|
## ❓ Questions and Help
question about the user interface API
[transforms/autoaugment.py](https://github.com/pytorch/vision/blob/7b9d30eb7c4d92490d9ac038a140398e0a690db6/torchvision/transforms/autoaugment.py)
The current usage would be `AutoAugment(AutoAugmentPolicy('cifar10'))`, but since the policy is just an `Enum`, I wonder whether it would be more convenient as `AutoAugment('cifar10')`. Is there any future advantage to using the policy?
cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3509
|
closed
|
[
"question",
"module: transforms"
] | 2021-03-05T06:19:31Z
| 2021-03-07T02:58:29Z
| null |
ain-soph
|
pytorch/examples
| 889
|
Low training accuracy using pre-trained model
|
Hello,
I am trying to evaluate a pre-trained mobilenetv2 model from torchvision on the ImageNet training dataset using this script.
To do so, I modify lines 235-237 to perform validation on the train loader instead of the val loader:
```
if args.evaluate:
    validate(train_loader, model, criterion, args)
    return
```
Everything else is left untouched. The command I use to run is:
`python imagenet_train_example.py -a mobilenet_v2 -j 16 -b 1024 -e --pretrained /data/ImageNet`
However, the results are lower than expected:
`Acc@1 2.926 Acc@5 15.079 Loss 11.795791`
|
https://github.com/pytorch/examples/issues/889
|
open
|
[
"help wanted",
"vision"
] | 2021-03-04T15:15:11Z
| 2022-03-09T21:10:33Z
| 2
|
AndreiXYZ
|
pytorch/pytorch
| 53,264
|
How to convert a trained .torch model to .mlmodel
|
Hi, I need help converting a .torch model to .mlmodel; while doing it I faced an error. After researching, I found no solution and am posting here for help.
The error:
<img width="1009" alt="Screenshot 2021-03-01 at 10 24 01 PM" src="https://user-images.githubusercontent.com/35099512/109978249-b2154d80-7d23-11eb-8e2f-39497d77051d.png">
The code used:
<img width="843" alt="Screenshot 2021-03-02 at 10 46 58 PM" src="https://user-images.githubusercontent.com/35099512/109978278-b93c5b80-7d23-11eb-8286-43ac2cab72e3.png">
cc @mruberry
|
https://github.com/pytorch/pytorch/issues/53264
|
open
|
[
"oncall: mobile"
] | 2021-03-04T14:27:11Z
| 2021-03-12T05:28:21Z
| null |
NaveenTg
|
pytorch/serve
| 989
|
How to get the URL parameters within the custom inference handler?
|
Hi guys, recently I've been writing a custom service handler for yolov5. However, I have no idea how to get the URL parameters in my inference handler.
For example:
```
curl -XPOST http://localhost:8080/predictions/yolo?my_parameter=123 -T@sample.jpg
```
How can I get the value of ``my_parameter`` in my custom service handler?
I know that I could pass the parameters within the multipart/form-data or JSON body to my service handler. But I can't, because the API signature is fixed by design; passing the parameters in the URL is my only choice.
Any suggestions would be appreciated!
|
https://github.com/pytorch/serve/issues/989
|
open
|
[
"triaged_wait"
] | 2021-03-03T09:48:50Z
| 2023-11-07T12:42:08Z
| null |
neoragex2002
|
huggingface/datasets
| 1,973
|
Question: what gets stored in the datasets cache and why is it so huge?
|
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G, which seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before; it seems to be related to the new version of the datasets library. Any insight? Thank you!
|
https://github.com/huggingface/datasets/issues/1973
|
closed
|
[] | 2021-03-02T14:35:53Z
| 2021-03-30T14:03:59Z
| null |
ioana-blue
|
pytorch/pytorch
| 53,101
|
How to compile torch/lib/c10d/ProcessGroupNCCL.cpp
|
I want to modify `ProcessGroupNCCL.cpp` to add some print statements, but I don't know how to recompile this file.
It is located at [https://github.com/pytorch/pytorch/tree/v1.7.1/torch/lib/c10d](https://github.com/pytorch/pytorch/tree/v1.7.1/torch/lib/c10d).
I'm using pytorch 1.7.1 installed by anaconda.
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @agolynski @SciPioneer @H-Huang @mrzzd @cbalioglu
|
https://github.com/pytorch/pytorch/issues/53101
|
closed
|
[
"oncall: distributed"
] | 2021-03-02T09:06:49Z
| 2021-03-04T03:14:39Z
| null |
1013801464
|
pytorch/text
| 1,218
|
How to load data using TabularDataset and the new nightly torchtext experimental dataloader
|
The `torchtext.data.TabularDataset` returns an iterable of objects that cannot be further split into batches or (x, y) sets of values, making it impossible to use the new `torchtext.vocab.Vocab` to build a vocab using `Counter`.
**my use-case code:**
```python
tokenize = lambda x: x.split(" ")
konkani = Field(sequential=True, tokenize=tokenize, init_token='<sos>', eos_token='<eos>')
hindi = Field(sequential=True, tokenize=tokenize, init_token='<sos>', eos_token='<eos>')
fields = [("word_token_konkani", konkani), ('word_token_hindi', hindi)]
train_data, test_data = TabularDataset.splits(path="translation/", train="train.csv",
                                              test="test.csv", format="csv", fields=fields)
```
I was trying to refer to the migration tutorial here: [link](https://github.com/pytorch/text/blob/master/examples/legacy_tutorial/migration_tutorial.ipynb)
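For reference, a minimal sketch of building the new-style vocab from a `Counter` (the token rows are stand-ins, and this assumes the torchtext 0.9-era `Vocab` constructor):
```python
from collections import Counter
from torchtext.vocab import Vocab

counter = Counter()
for line in ["ek don tin", "ek don"]:   # stand-in for your tokenized rows
    counter.update(line.split(" "))
vocab = Vocab(counter, specials=['<unk>', '<pad>', '<sos>', '<eos>'])
print(vocab.stoi['ek'])
```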
|
https://github.com/pytorch/text/issues/1218
|
closed
|
[] | 2021-02-26T08:08:49Z
| 2021-02-26T16:47:50Z
| null |
StephennFernandes
|
pytorch/pytorch
| 52,850
|
How to skip the images in a custom dataset and deal with None values?
|
I have an object detection dataset with RGB images and annotations in Json. I use a custom DataLoader class to read the images and the labels. One issue that I’m facing is that I would like to skip images when training my model if/when labels don’t contain certain objects.
For example, if one image doesn't contain any target labels belonging to the class ‘Cars’, I would like to skip it. When parsing my JSON annotation, I tried checking for labels that don’t contain the class ‘Cars’ and returned None. Subsequently, I used a collate function to filter out the None values, but unfortunately it is not working.
```
import torch
from torch.utils.data.dataset import Dataset
import json
import os
from PIL import Image
from torchvision import transforms
#import cv2
import numpy as np

general_classes = {
    # Cars
    "Toyota Corolla" : 0,
    "VW Golf" : 0,
    "VW Beetle" : 0,
    # Motor-cycles
    "Harley Davidson" : 1,
    "Yamaha YZF-R6" : 1,
}

car_classes={
    "Toyota Corolla" : 0,
    "VW Golf" : 0,
    "VW Beetle" : 0
}

def get_transform(train):
    transforms = []
    # converts the image, a PIL image, into a PyTorch Tensor
    transforms.append(T.ToTensor())
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)

def my_collate(batch):
    batch = list(filter(lambda x: x is not None, batch))
    return torch.utils.data.dataloader.default_collate(batch)

class FilteredDataset(Dataset):
    # The dataloader will skip the image and corresponding labels based on the dictionary 'car_classes'
    def __init__(self, data_dir, transforms):
        self.data_dir = data_dir
        img_folder_list = os.listdir(self.data_dir)
        self.transforms = transforms
        imgs_list = []
        json_list = []
        self.filter_count=0
        self.filtered_label_list=[]
        for img_path in img_folder_list:
            #img_full_path = self.data_dir + img_path
            img_full_path=os.path.join(self.data_dir,img_path)
            json_file = os.path.join(img_full_path, 'annotations-of-my-images.json')
            img_file = os.path.join(img_full_path, 'Image-Name.png')
            json_list.append(json_file)
            imgs_list.append(img_file)
        self.imgs = imgs_list
        self.annotations = json_list
        total_count=0
        for one_annotation in self.annotations:
            filtered_obj_id=[]
            with open(one_annotation) as f:
                img_annotations = json.load(f)
            parts_list = img_annotations['regions']
            for part in parts_list:
                current_obj_id = part['tags'][0] # bbox label
                check_obj_id = general_classes[current_obj_id]
                if(check_obj_id==0):
                    subclass_id=car_classes[current_obj_id]
                    filtered_obj_id.append(subclass_id)
                    total_count=total_count+1
            if(len(filtered_obj_id)>0):
                self.filter_count=self.filter_count+1
                self.filtered_label_list.append(one_annotation)
        print("The total number of the objects in all images: ",total_count)

    # get one image and the bboxes,img_id, labels of parts, etc in the image as target.
    def __getitem__(self, idx):
        img_path = self.imgs[idx]
        image_id = torch.tensor([idx])
        with open(self.annotations[idx]) as f:
            img_annotations = json.load(f)
        parts_list = img_annotations['regions']
        obj_ids = []
        boxes = []
        for part in parts_list:
            obj_id = part['tags'][0]
            check_obj_id = general_classes[obj_id]
            if(check_obj_id==0):
                obj_id=car_classes[obj_id]
                obj_ids.append(obj_id)
        #print("---------------------------------------------------")
        if(len(obj_ids)>0):
            img = Image.open(img_path).convert("RGB")
            labels = torch.as_tensor(obj_ids, dtype = torch.int64)
            target = {}
            target['labels'] = labels
            if self.transforms is not None:
                img, target = self.transforms(img, target)
            return img, target
        else:
            return None

    def __len__(self):
        return len(self.filtered_label_list)

train_data_path = "path-to-my-annotation"
# Generators
train_dataset = FilteredDataset(train_data_path,get_transform(train=True))
print("Total files in the train_dataset: ",len(train_dataset))
#print("The first instance in the train dataset : ",train_dataset[0])
#training_generator = torch.utils.data.DataLoader(train_dataset)
training_generator = torch.utils.data.DataLoader(train_dataset,collate_fn=my_collate)
print("\n\n Iterat
|
https://github.com/pytorch/pytorch/issues/52850
|
open
|
[
"module: dataloader",
"triaged"
] | 2021-02-25T18:04:33Z
| 2021-02-25T22:04:56Z
| null |
srinivasgln
|
pytorch/vision
| 3,451
|
Can't compile master: requires nightly PyTorch?
|
I have installed torch 1.7.1 and g++ 7.5.0. Do I need a nightly PyTorch version to compile nightly torchvision 0.9.0?
`pip install git+https://github.com/pytorch/vision --no-dependencies`: [log.txt](https://github.com/pytorch/vision/files/6037409/log.txt)
|
https://github.com/pytorch/vision/issues/3451
|
closed
|
[
"question"
] | 2021-02-24T16:28:35Z
| 2021-02-24T18:18:30Z
| null |
vadimkantorov
|
pytorch/vision
| 3,436
|
Windows CPU build missing on PyPI?
|
## 🐛 Bug
Is there a reason the CPU build of `torchvision` is not pushed to PyPI anymore?
## To Reproduce
Steps to reproduce the behavior:
1. `pip install torch==1.7.1 torchvision==0.8.2 torchaudio==0.7.1`
Output:
```
Collecting torch==1.7.1
Downloading torch-1.7.1-cp38-cp38-win_amd64.whl (184.0 MB)
|████████████████████████████████| 184.0 MB 201 kB/s
ERROR: Could not find a version that satisfies the requirement torchvision==0.8.2
ERROR: No matching distribution found for torchvision==0.8.2
```
## Expected behavior
CPU build of `torchvision` is installed.
## Environment
- OS: Windows
- Python version: 3.8.6
## Additional context
`torchvision` used to be pushed to PyPI ([up until v0.5.0](https://pypi.org/project/torchvision/0.5.0/#files)) and I'm wondering why this isn't the case anymore. I'm aware the standard/recommended way of installing is through [the pytorch.org index](https://download.pytorch.org/whl/torch_stable.html). However, the main `torch` package (CPU only) is being pushed to PyPI, so I'm wondering whether it is intended that both `torchvision` and `torchaudio` are not, or if it's just a bug?
I could not find any helpful recent information on this, only some discussions around PyPI binary size constraints (mainly [this](https://github.com/pytorch/vision/issues/1774) and [this](https://github.com/pytorch/pytorch/issues/24310#)). I understand this is a problem for the CUDA builds, but for the CPU build I really do not see any issue (e.g. `torchvision` v0.5.0 is 1.2 MB).
Does anybody have some insight as to why this is happening?
cc @peterjc123 @nbcsm @guyang3532 @maxluk @gunandrose4u @smartcat2010 @mszhanyi
|
https://github.com/pytorch/vision/issues/3436
|
closed
|
[
"question",
"windows",
"topic: binaries"
] | 2021-02-23T11:54:25Z
| 2021-03-09T11:25:53Z
| null |
1enn0
|
pytorch/audio
| 1,298
|
How to compute log filter bank energy in torchaudio to match python_speech_features?
|
## ❓ I want to reproduce the log-filterbank energy computed by the python_speech_features lib using torchaudio.
This is my code, and I see that the results differ:
```
# load audio data by librosa
path_audio = "audio_a.wav"
y, sr = librosa.load(path_audio, sr=16000, offset=0.5, duration=0.4)
# load audio data by torch audio
audio_ft, sr = torchaudio.load(path_audio)
audio_ft = audio_ft.squeeze(0)
y_torch = audio_ft[int(0.5*16000):int(0.9*16000)]
# the loaded audio is the same; then I compute the log filterbank energy
ft_f_bank = python_speech_features.logfbank(y, samplerate=16000, winlen=0.025, winstep=0.01, nfilt=64,nfft=512)
print(ft_f_bank.shape) # result: (39, 64)
ft_f_bank_by_torch = torchaudio.compliance.kaldi.fbank(y_torch, sample_frequency=16000.0, frame_length=25.0, frame_shift=10.0, use_log_fbank=True, use_energy=True, num_mel_bins=64)
print(ft_f_bank_by_torch.shape) # result: (38, 65)
```
How can I make the result returned by torchaudio the same as python_speech_features? I don't have a deep understanding of speech features, so the question may be weird, sorry.
Thank you
|
https://github.com/pytorch/audio/issues/1298
|
closed
|
[] | 2021-02-23T10:20:25Z
| 2021-02-23T16:34:42Z
| null |
trangtv57
|
pytorch/vision
| 3,429
|
Inconsistency between the pretrained models and labels
|
I notice that for the pretrained models that are provided, the labels are not consistent.
For example, vgg16 class 1 is different from Resnet50 class 1.
Can you let us know where we can find the corresponding labels for each model?
For VGG I notice that the one that looks like this:
```
{
  "0": [
    "n01440764",
    "tench"
  ],
  "1": [
    "n01443537",
    "goldfish"
  ],
  "2": [
    "n01484850",
    "great_white_shark"
  ],
  "3": [
    "n01491361",
    "tiger_shark"
  ],
  "4": [
    "n01494475",
    "hammerhead"
  ],
  "5": [
    "n01496331",
    "electric_ray"
  ],
  "6": [
    "n01498041",
    "stingray"
  ],
  "7": [
    "n01514668",
    "cock"
  ],
  "8": [
    "n01514859",
    "hen"
  ]
}
```
works, but it is not the one that we should use for ResNets. Please let us know what we should do.
Thanks
|
https://github.com/pytorch/vision/issues/3429
|
closed
|
[
"question",
"module: models",
"module: reference scripts"
] | 2021-02-22T22:50:42Z
| 2021-03-31T08:46:32Z
| null |
seyeeet
|
pytorch/text
| 1,193
|
Looking for an example on how to use BucketIterator with a transformer model?
|
I would appreciate an end-to-end example. The examples that I found stop with the BucketIterator. It is unclear what to do with it.
|
https://github.com/pytorch/text/issues/1193
|
closed
|
[
"legacy"
] | 2021-02-20T02:41:12Z
| 2024-07-12T11:58:25Z
| null |
sorenwacker
|
pytorch/vision
| 3,421
|
error making: python-torchvision-cuda
|
I can't build the AUR package `python-torchvision-cuda` on Arch Linux.
```sh
=========================================================================================== short test summary info ===========================================================================================
FAILED test/test_functional_tensor.py::Tester::test_adjust_brightness - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_adjust_contrast - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_adjust_gamma - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_adjust_hue - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_adjust_saturation - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_affine - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_center_crop - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_crop - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_five_crop - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_gaussian_blur - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_hflip - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_hsv2rgb - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_pad - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_perspective - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_resize - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_resized_crop - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_rgb2hsv - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_rgb_to_grayscale - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_rotate - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_ten_crop - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_functional_tensor.py::Tester::test_vflip - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_image.py::ImageTester::test_decode_image - AssertionError: False is not true
FAILED test/test_image.py::ImageTester::test_decode_jpeg - AssertionError: False is not true
FAILED test/test_image.py::ImageTester::test_encode_jpeg - AssertionError: False is not true
FAILED test/test_image.py::ImageTester::test_write_jpeg - AssertionError: b'\xf[2208 chars]e6\xa6\x87\xc2\x0c\xaa\xcc\xd9\xe4\xfd\xe3\x82[170942 chars]\xd9' != b'\xf[2208 chars]e6\xa7\x0f\xf0\x83*\xb36y?x...
FAILED test/test_models.py::ModelTester::test_fasterrcnn_resnet50_fpn_cpu - TypeError: Object of type 'NoneType' is not an instance of 'function'
FAILED test/test_models.py::ModelTester::test_googlenet_eval - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_models.py::ModelTester::test_keypointrcnn_resnet50_fpn_cpu - RuntimeError: class '__torch__.torchvision.models.detection._utils.BoxCoder' already defined.
FAILED test/test_models.py::ModelTester::test_maskrcnn_resnet50_fpn_cpu - RuntimeError: class '__torch__.torchvision.models.detection._utils.BoxCoder' already defined.
FAILED test/test_models.py::ModelTester::test_retinanet_resnet50_fpn_cpu - RuntimeError: class '__torch__.torchvision.models.detection._utils.BoxCoder' already defined.
FAILED test/test_ops.py::RoIPoolTester::test_backward_cpu_contiguous - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_ops.py::RoIPoolTester::test_backward_cpu_non_contiguous - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_ops.py::PSRoIPoolTester::test_backward_cpu_contiguous - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_ops.py::PSRoIPoolTester::test_backward_cpu_non_contiguous - TypeError: Object of type 'module' is not an instance of 'function'
FAILED test/test_ops.py::RoIAlignTester::test_backwar
|
https://github.com/pytorch/vision/issues/3421
|
closed
|
[
"question",
"topic: build"
] | 2021-02-19T19:53:46Z
| 2021-02-21T23:01:29Z
| null |
chiboreache
|
pytorch/cpuinfo
| 53
|
Cpuinfo in sparc
|
I was able to compile pytorch on Debian 10 with a SPARC processor. However, when it runs, it gives an error that it does not recognize the cpuinfo information and uses only one of the 32 existing processors. I would like to know if I can modify something so that it uses at least one 16-core socket. On several occasions I was able to modify the code so that it picks up the correct information. Thanks in advance.
|
https://github.com/pytorch/cpuinfo/issues/53
|
open
|
[
"question"
] | 2021-02-19T17:48:14Z
| 2024-01-11T00:57:03Z
| null |
alerenato
|
pytorch/TensorRT
| 344
|
[Question ][Error ] at least 4 dimensions are required for input
|
## ❓ Question
Hi, I managed to compile TRTorch but it gives me very weird results when I apply it to a simple Conv2d model.
The model is as follows :
```
class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)

    def forward(self, x):
        return torch.mean(self.conv(x))

md = DummyModel().to(DEVICE)
input_ = torch.ones((1, 3, 1024, 1024)).to(DEVICE)
with torch.no_grad():
    traced_model = torch.jit.trace(md, input_)
torch.jit.save(traced_model, "net.pth")
```
Running
`bazel run //cpp/trtorchexec -- net.pth "(1,3,1024,1024)"`
Gives :
```
DEBUG: [TRTorch - Debug Build] - stride: [1, 1]
DEBUG: [TRTorch - Debug Build] - padding: [0, 0]
DEBUG: [TRTorch - Debug Build] - dilation: [1, 1]
DEBUG: [TRTorch - Debug Build] - out_padding: [0, 0]
DEBUG: [TRTorch - Debug Build] - groups: 1
DEBUG: [TRTorch - Debug Build] - Weights: [10]
Number of input maps: 10
Number of output maps: 10
Element shape: [1]
ERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.
DEBUG: [TRTorch - Debug Build] - Output tensor shape: []
INFO: [TRTorch Conversion Context] - Adding Layer %11 : Tensor = aten::mean(%10, %7) # <ipython-input-76-8dff675398f2>:6:0 (ctx.AddLayer)
DEBUG: [TRTorch Conversion Context] - Node input is an already converted tensor
DEBUG: [TRTorch Conversion Context] - Node input is a result of a previously evaluated value
ERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.
DEBUG: [TRTorch - Debug Build] - Frozen tensor shape: []
ERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.
WARNING: [TRTorch - Debug Build] - Mean Converter disregards dtype
ERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.
DEBUG: [TRTorch - Debug Build] - Output shape: []
INFO: [TRTorch Conversion Context] - Marking Output 11 named output_0 in engine (ctx.MarkOutput)
ERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.
ERROR: [TRTorch Conversion Context] - %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0: at least 4 dimensions are required for input.
ERROR: [TRTorch Conversion Context] - Layer %10 : Tensor = aten::_convolution(%input.1, %self.conv.weight, %self.conv.bias, %3, %2, %3, %5, %2, %6, %5, %5, %4) # /home/matthieu/anaconda3/envs/gym/lib/python3.7/site-packages/torch/nn/modules/conv.py:416:0 failed validation
ERROR: [TRTorch Conversion Context] - Network validation failed.
```
Is there another way to specify the input size?
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):1.7.1
- CPU Architecture:
- OS (e.g., Linux):Ubuntu 18.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source):bazel build //:libtrtorch --compilation_mode opt
- Are you using local sources or building from archives:local sources
- Python version:3.7.9
- CUDA version:11.0
- GPU models and configuration:2080 TI
- Any other relevant information:Nvidia-driver : 450.51.05
|
https://github.com/pytorch/TensorRT/issues/344
|
closed
|
[
"question"
] | 2021-02-17T14:59:17Z
| 2021-02-17T17:36:29Z
| null |
MatthieuToulemont
|
pytorch/vision
| 3,406
|
RetinaNet: TypeError: __init__() got an unexpected keyword argument 'trainable_backbone_layers'
|
## 🐛 Bug
`retinanet_resnet50_fpn` throws an error while passing `trainable_backbone_layers` as an argument.
## To Reproduce
Steps to reproduce the behavior:
```python
import torchvision
model = torchvision.models.detection.retinanet_resnet50_fpn(trainable_backbone_layers=2)
```
```
~/gridai/venv/lib/python3.8/site-packages/torchvision/models/detection/retinanet.py in retinanet_resnet50_fpn(pretrained, progress, num_classes, pretrained_backbone, **kwargs)
620 backbone = resnet_fpn_backbone('resnet50', pretrained_backbone,
621 returned_layers=[2, 3, 4], extra_blocks=LastLevelP6P7(256, 256))
--> 622 model = RetinaNet(backbone, num_classes, **kwargs)
623 if pretrained:
624 state_dict = load_state_dict_from_url(model_urls['retinanet_resnet50_fpn_coco'],
TypeError: __init__() got an unexpected keyword argument 'trainable_backbone_layers'
```
|
https://github.com/pytorch/vision/issues/3406
|
closed
|
[
"question"
] | 2021-02-16T05:20:25Z
| 2021-02-27T17:22:53Z
| null |
kaushikb11
|
pytorch/vision
| 3,397
|
Bug Report: No module named 'torchvision.models.mobilenetv2'
|
## ❓ Questions and Help
Hi there, I encountered a bug when running the following lines:
>>> import torch
>>> res = torch.hub.load('pytorch/vision', 'resnet50')
the error is:
-------------------------------------begin of error info---------------------------------
Using cache found in /root/.cache/torch/hub/pytorch_vision_master
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-21-55b890d7b167> in <module>()
1 import torch
----> 2 res = torch.hub.load('pytorch/vision', 'resnet50')
3 print(res)
5 frames
/root/.cache/torch/hub/pytorch_vision_master/hubconf.py in <module>()
12 from torchvision.models.googlenet import googlenet
13 from torchvision.models.shufflenetv2 import shufflenet_v2_x0_5, shufflenet_v2_x1_0
---> 14 from torchvision.models.mobilenetv2 import mobilenet_v2
15 from torchvision.models.mobilenetv3 import mobilenet_v3_large, mobilenet_v3_small
16 from torchvision.models.mnasnet import mnasnet0_5, mnasnet0_75, mnasnet1_0, \
ModuleNotFoundError: No module named 'torchvision.models.mobilenetv2'
-----------------end of error info------------------------------------------------
BTW, my environment is torch-1.7.1, torchvision-0.8.2. I also tried
`pip install torchvision.models.mobilenetv2`,
but it turned out useless.
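A hedged note: `torch.hub.load` defaults to the master branch of pytorch/vision, whose hubconf imports modules newer than torchvision 0.8.2; pinning the matching tag is the usual fix (a sketch):
```python
import torch

# pin the hub repo to the tag matching the installed torchvision
res = torch.hub.load('pytorch/vision:v0.8.2', 'resnet50', pretrained=True)
```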
Grateful to hear any suggestions!
|
https://github.com/pytorch/vision/issues/3397
|
closed
|
[
"question"
] | 2021-02-15T12:24:50Z
| 2021-02-15T14:35:27Z
| null |
DemonsHunter
|
pytorch/vision
| 3,392
|
How to compile arbitrary nn modules with PyTorch JIT? (RuntimeError: builtin cannot be used as a value, with a dict)
|
## 🐛 Bug
Similar to https://github.com/pytorch/vision/issues/1675.
Simple, I compare my value to a dict and it throws an error.
```
"""
if type(json_data) is dict:
~~~~ <--- HERE
```
## To Reproduce
Simple, any code that has a comparison with a dict:
```
class Node(object):
    def __init__(self):
        pass

    @classmethod
    def from_json(cls, json_data):
        if type(json_data) is dict:
            node_data = next(iter(json_data))
            assert type(json_data[node_data]) is list
            node_children = [cls.from_json(child) for child in json_data[node_data]]
            return Node(node_data, node_children)
        else:
            return Node(json_data)
```
## Expected behavior
Jit makes my checkpoint.
## Environment
- PyTorch / torchvision Version (e.g., 1.0 / 0.4.0): 1.7.1
- OS (e.g., Linux): mac os x
- How you installed PyTorch / torchvision (`conda`, `pip`, source): conda
- Build command you used (if compiling from source): conda
- Python version: 3.8
- CUDA/cuDNN version: CPU
- GPU models and configuration: CPU
- Any other relevant information: CPU
## Additional context
Compiling arbitrary custom nn modules to jit
error:
```
/Users/brando/anaconda3/envs/coq_gym/bin/python /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py --cmd-line --multiproc --qt-support=auto --client 127.0.0.1 --port 59213 --file /Users/brando/ML4Coq/playground/running_pytorch_ocaml/treenn2jit_ckpt.py
Connected to pydev debugger (build 203.7148.72)
1.7.1
Traceback (most recent call last):
File "/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/_recursive.py", line 680, in compile_unbound_method
create_methods_and_properties_from_stubs(concrete_type, (stub,), ())
File "/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/_recursive.py", line 304, in create_methods_and_properties_from_stubs
concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
File "/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/annotations.py", line 330, in try_ann_to_type
torch.jit._script._recursive_compile_class(ann, loc)
File "/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/_script.py", line 1056, in _recursive_compile_class
_compile_and_register_class(obj, rcb, _qual_name)
File "/Users/brando/anaconda3/envs/coq_gym/lib/python3.7/site-packages/torch/jit/_script.py", line 64, in _compile_and_register_class
torch._C._jit_script_class_compile(qualified_name, ast, defaults, rcb)
RuntimeError:
builtin cannot be used as a value:
File "/Users/brando/ML4Coq/ml4coq-proj/embeddings_zoo/extract_tactic_from_lasse_data.py", line 56
term = string
"""
if type(json_data) is dict:
~~~~ <--- HERE
node_data = next(iter(json_data))
assert type(json_data[node_data]) is list
'Node.from_json' is being compiled since it was called from '__torch__.embeddings_zoo.extract_tactic_from_lasse_data.Node'
```
https://stackoverflow.com/questions/66179121/how-to-fix-the-runtimeerror-builtin-cannot-be-used-as-a-value-with-a-dict-whe
|
https://github.com/pytorch/vision/issues/3392
|
closed
|
[
"invalid"
] | 2021-02-12T21:11:15Z
| 2021-02-17T16:29:07Z
| null |
brando90
|
huggingface/sentence-transformers
| 753
|
What is 'sentence_embedding' of a Sentence Transformer Model?
|
Hey, I'm trying to understand where this comes from. It is just mentioned here: [link](https://github.com/UKPLab/sentence-transformers/blob/9932965c92a06835eda255dac7eacd53f48c5cd7/sentence_transformers/SentenceTransformer.py#L144)
But it seems not to be used anywhere else, even though this feature is used in losses like OnlineContrastive. I don't think it comes from the huggingface model?
Which forward is this [here](https://github.com/UKPLab/sentence-transformers/blob/9932965c92a06835eda255dac7eacd53f48c5cd7/sentence_transformers/SentenceTransformer.py#L181) referring to?
I also wonder what this _modules is, like [here](https://github.com/UKPLab/sentence-transformers/blob/9932965c92a06835eda255dac7eacd53f48c5cd7/sentence_transformers/SentenceTransformer.py#L338).
Why is this not in the init?
Thanks. :-)
|
https://github.com/huggingface/sentence-transformers/issues/753
|
open
|
[] | 2021-02-11T20:48:07Z
| 2021-02-12T14:03:59Z
| null |
PaulForInvent
|
pytorch/pytorch
| 52,147
|
Pointer passed where number is expected for PYTORCH_CUDA_FUSER_JIT_OPT_LEVEL leading to crash
|
## 🐛 Bug
The CUDA API expects a `void**` for option values for functions like `cuModuleLoadDataEx`. The documentation is unclear about what that should be, but according to other sources (see below) the value should simply be cast to a `void*`, not passed as a pointer to the value.
Hence the code at https://github.com/pytorch/pytorch/blob/7763c127cd5630ba4123ad89fc5243c28e91aa4a/torch/csrc/jit/codegen/cuda/executor_utils.cpp#L320 is wrong and may lead to failed executions or wrong optimization levels.
I've seen this in one of the PyTorch tests (see below) where I get:
```
======================================================================
ERROR: test_unary_ops (test_jit_cuda_fuser.TestCudaFuser)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/install_pt/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 827, in wrapper
method(*args, **kwargs)
File "/dev/shm/s3248973-EasyBuild/PyTorch/1.7.1/fosscuda-2020b/pytorch-1.7.1/test/test_jit_cuda_fuser.py", line 369, in test_unary_ops
self._unary_test_helper(op)
File "/dev/shm/s3248973-EasyBuild/PyTorch/1.7.1/fosscuda-2020b/pytorch-1.7.1/test/test_jit_cuda_fuser.py", line 328, in _unary_test_helper
jit_o = t_jit(x, 2.0)
File "/tmp/install_pt/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 126, in prof_func_call
return prof_callable(func_call, *args, **kwargs)
File "/tmp/install_pt/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 123, in prof_callable
return callable(*args, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter2.
Traceback of TorchScript (most recent call last):
RuntimeError: CUDA driver error: a PTX JIT compilation failed
```
And to verify I added the following code to torch/csrc/jit/codegen/cuda/executor_utils.cpp above the call to `cuModuleLoadDataEx`:
```
options.push_back(CU_JIT_ERROR_LOG_BUFFER);
options.push_back(CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES);
std::string errors(8000, '\0');
option_vals.push_back((void*) errors.data());
option_vals.push_back((void*) errors.size());
```
When printing this string on failure I got:
> ptxas fatal : 32-bit integer value (3849789140) out of range
This is exactly the pointer to `jit_opt_level` which confirms the above.
PS: It is likely a good idea to include the JIT error buffer in PyTorch and report it on failure.
References:
- https://stackoverflow.com/a/17070844/1930508
- https://github.com/HongjianLi/cuda/blob/dd52fd563558667315de3fecea3559ac6ba2a89a/vectorAdd/vectorAdd.cpp#L74
- https://github.com/MentorEmbedded/nvptx-tools/blob/59e0b755e3ab085a3a348bd001bad4f010fd9c00/nvptx-run.c#L77-L88
## To Reproduce
Steps to reproduce the behavior:
1. `python test_jit_cuda_fuser_legacy.py -k test_unary_ops`
## Environment
- PyTorch Version (e.g., 1.0): 1.7.1, master
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/52147
|
open
|
[
"oncall: jit"
] | 2021-02-11T17:04:53Z
| 2021-02-11T17:44:14Z
| null |
Flamefire
|
pytorch/TensorRT
| 338
|
❓ [Question] What is the correct way to create a trtorch::CompileSpec for a single input?
|
## ❓ Question
My network has a single input of the following shape: [1, 3, 224, 224]. I am trying to create the trtorch::CompileSpec as follows:
`auto compile_settings = trtorch::CompileSpec({1, 3, 224, 224});` however I am getting the following output:
````
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/conversion/conversion.cpp:135] Expected input_tensors.size() == input_dims.size() to be true but got false
Expected dimension specifications for all input tensors, but found 1 input tensors and 4 dimension specs (conversion.AddInputs)
````
I am wondering whether the constructor takes a vector of input shapes? If so, doing
````
std::vector<std::vector<int64_t>> input_dims = {{1, 3, 224, 224}};
auto compile_settings = trtorch::CompileSpec(input_dims);
````
gives the following error
````
ERROR: [TRTorch] - Requested converter for aten::adaptive_max_pool2d, but no such converter was found
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/conversion/conversion.cpp:108] Expected converter to be true but got false
Unable to convert node: %512 : Tensor, %513 : Tensor = aten::adaptive_max_pool2d(%511, %7) # /home/federico/.local/lib/python3.8/site-packages/torch/nn/functional.py:844:0 (conversion.AddLayer)
Schema: aten::adaptive_max_pool2d(Tensor self, int[2] output_size) -> (Tensor, Tensor)
Converter for aten::adaptive_max_pool2d requested, but no such converter was found.
If you need a converter for this operator, you can try implementing one yourself
or request a converter: https://www.github.com/NVIDIA/TRTorch/issues
````
So my question is: which approach is the correct one? If the second is, I can try to implement the converter myself, but I want to be sure what has to be passed to create a correct `CompileSpec`.
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.7.1
- CPU Architecture: amd64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): `LD_LIBRARY_PATH=$(pwd)/bazel-TRTorch/external/libtorch/lib/:$(pwd)/bazel-TRTorch/external/cudnn/lib64/:$(pwd)/bazel-TRTorch/external/tensorrt/lib/:/usr/local/cuda/lib64/:$LD_LIBRARY_PATH bazel run //adv_test:adv_trtorch -c opt --jobs=3 --distdir third_party/dist_dir/x86_64-linux-gnu/`
- Are you using local sources or building from archives: archives
- Python version: 3.8
- CUDA version: 11.0
- GPU models and configuration: GTX 1050
- Any other relevant information:
|
https://github.com/pytorch/TensorRT/issues/338
|
closed
|
[
"question"
] | 2021-02-10T14:23:19Z
| 2021-02-11T08:04:42Z
| null |
federicohml
|
pytorch/tutorials
| 1,354
|
Tensors tutorial broken?
|
It looks like a lot of content is missing from this tutorial: https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py.
|
https://github.com/pytorch/tutorials/issues/1354
|
closed
|
[] | 2021-02-10T09:58:54Z
| 2021-02-12T07:20:38Z
| 2
|
Attila94
|
pytorch/TensorRT
| 337
|
❓ [Question] Why bazel is not able to find libcudart-xxxxxxx.so.11.0?
|
## ❓ Question
I cloned the TRTorch repo and tried to play with it using a sample program. I created a folder for this playground in the root path (next to WORKSPACE) and added the corresponding `BUILD` and `cpp` files. However, when executing `bazel build //adv_test:adv_torchscript --distdir third_party/dist_dir/x86_64-linux-gnu/` I get the following error: `execroot/TRTorch/bazel-out/k8-fastbuild/bin/adv_test/adv_torchscript: error while loading shared libraries: libcudart-3f3c6934.so.11.0: cannot open shared object file: No such file or directory`
The BUILD file looks like
```
cc_binary(
name = "adv_torchscript",
srcs = ["adv_torchscript.cc"],
deps = [
"@cuda",
"@libtorch",
"@libtorch//:caffe2",
],
)
```
The cpp file looks like
````
#include <torch/script.h>
// #include <trtorch/trtorch.h>
// #include <chrono>
#include <iostream>
#include <string>
// https://gist.github.com/zeryx/526dbc05479e166ca7d512a670e6b82d
// https://github.com/pytorch/vision/issues/2691
int main(int argc, char** argv) {
const std::string model_file = "./my_net_torch_script.pt";
const std::string img_file = "./test_img.jpg";
const float num_iterations = 1000.F;
bool use_gpu = false;
if (argc == 2) {
use_gpu = std::atoi(argv[1]) ? true : false;
}
std::cout << "Device set to " << ((use_gpu) ? "GPU" : "CPU") << std::endl;
std::cout << "Loading TorchScript Model";
torch::jit::script::Module ts_module;
if (use_gpu) {
ts_module = torch::jit::load(model_file, torch::kCUDA);
} else {
ts_module = torch::jit::load(model_file);
}
std::cout << " ... OK" << std::endl;
return 0;
}
````
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0):
- CPU Architecture: amd64
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source): bazel build //adv_test:adv_torchscript --distdir third_party/dist_dir/x86_64-linux-gnu/
- Are you using local sources or building from archives: archives
- Python version: 3.8
- CUDA version: 11.2
- GPU models and configuration: GTX 1050
- Any other relevant information: I updated the relevant parts of WORKSPACE to use the latest and greatest of CUDNN and TensorRT, i.e. URL and sha256sum.
|
https://github.com/pytorch/TensorRT/issues/337
|
closed
|
[
"question"
] | 2021-02-09T16:33:25Z
| 2021-02-09T21:34:10Z
| null |
federicohml
|
pytorch/TensorRT
| 335
|
❓ [Question] Typo in "/py/README.md"
|
## ❓ Question
There is a typo in the example in "/py/README.md".
## Example Usage
``` python
import torch
import torchvision
import trtorch
# Get a model
model = torchvision.models.alexnet(pretrained=True).eval().cuda()
# Create some example data
data = torch.randn((1, 3, 224, 224)).to("cuda")
# Trace the module with example data
traced_model = torch.jit.trace(model, [data])
# Compile module
compiled_trt_model = trtorch.compile(model, {
"input_shapes": [data.shape],
"op_precision": torch.half, # Run in FP16
})
results = compiled_trt_model(data.half())
```
```
# Compile module
compiled_trt_model = trtorch.compile(model, {
"input_shapes": [data.shape],
"op_precision": torch.half, # Run in FP16
})
```
The code above should be fixed as shown below.
```
# Compile module
compiled_trt_model = trtorch.compile(traced_model, {
"input_shapes": [data.shape],
"op_precision": torch.half, # Run in FP16
})
```
## What you have already tried
I fixed the typo and opened a pull request.
|
https://github.com/pytorch/TensorRT/issues/335
|
closed
|
[
"question"
] | 2021-02-09T07:43:17Z
| 2021-02-09T23:57:31Z
| null |
developer0hye
|
pytorch/TensorRT
| 334
|
❓ [Question] Typo in "core/conversion/conversionctx/ConversionCtx.cpp "
|
## ❓ Question
There are typo in "core/conversion/conversionctx/ConversionCtx.cpp "
https://github.com/NVIDIA/TRTorch/blob/6442fce997e1506d859fab789527fe1e282f683f/core/conversion/conversionctx/ConversionCtx.cpp#L57-L62
This is a typo, right?
## What you have already tried
I opened a [pull request](https://github.com/NVIDIA/TRTorch/pull/333).
|
https://github.com/pytorch/TensorRT/issues/334
|
closed
|
[
"question"
] | 2021-02-09T07:36:32Z
| 2021-02-09T23:57:40Z
| null |
developer0hye
|
pytorch/pytorch
| 51,859
|
Need help when using torch jit with an thread pool. (how to use at::set_num_threads correctly)
|
Hi, I'm trying to use a thread pool of size N to manage N torch::jit::Module instances, and I want to assign one thread to each individual torch::jit::Module. I'm currently wrapping each torch::jit::Module in a wrapper class, and in the constructor I call at::set_num_threads(1) and at::set_num_interop_threads(1), but it is not behaving as expected (there is only one working thread doing inference at any time, not N threads). How should I call at::set_num_threads and at::set_num_interop_threads in my program? Thanks for your attention.
In short, how can I restrict one torch::jit::Module to doing inference with only one working thread, while controlling the concurrency of different inferences with an existing thread pool?
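For reference, a minimal Python sketch of the pattern I am after (assuming a hypothetical `model.pt`; `torch.set_num_threads(1)` is the Python counterpart of `at::set_num_threads(1)` and, as far as I can tell, a process-wide setting):
```python
import concurrent.futures
import torch

torch.set_num_threads(1)  # one intra-op thread per operator (global setting)

N = 4
modules = [torch.jit.load("model.pt") for _ in range(N)]  # hypothetical file

def infer(module, x):
    with torch.no_grad():
        return module(x)

with concurrent.futures.ThreadPoolExecutor(max_workers=N) as pool:
    futures = [pool.submit(infer, m, torch.randn(1, 3)) for m in modules]
    results = [f.result() for f in futures]
```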
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/51859
|
closed
|
[
"oncall: jit"
] | 2021-02-07T13:09:51Z
| 2021-02-12T08:23:52Z
| null |
w1d2s
|
pytorch/TensorRT
| 326
|
❓ [Question] Is there a way to do multithreaded half-precision compilation?
|
## ❓ Question
I want to compile a TorchScript module in a thread other than the main thread of a C++ program. However, doing so with half precision for large networks results in a segmentation fault.
Here's a program that extracts what I want to do:
https://github.com/SakodaShintaro/trtorch-test/blob/master/main.cpp
```cpp
#include <torch/script.h>
#include <trtorch/trtorch.h>
using namespace std;
void compile(bool fp16) {
constexpr int64_t INPUT_CHANNEL_NUM = 256;
constexpr int64_t WIDTH = 32;
torch::jit::Module module = torch::jit::load("model.ts");
if (fp16) {
module.to(torch::kCUDA, torch::kHalf);
} else {
module.to(torch::kCUDA);
}
module.eval();
std::vector<int64_t> in_sizes = {1, INPUT_CHANNEL_NUM, WIDTH, WIDTH};
trtorch::CompileSpec::InputRange range(in_sizes);
trtorch::CompileSpec info({range});
if (fp16) {
info.op_precision = torch::kHalf;
}
module = trtorch::CompileGraph(module, info);
}
int main() {
// fp32, this thread -> OK
compile(false);
cout << "fp32, this thread -> finish" << endl;
// fp32, another thread -> OK
std::thread thread0([]() { compile(false); });
thread0.join();
cout << "fp32, another thread -> finish" << endl;
// fp16, this thread -> OK
compile(true);
cout << "fp16, this thread -> finish" << endl;
// fp16, another thread -> NG
std::thread thread1([]() { compile(true); });
thread1.join();
cout << "fp16, another thread -> finish" << endl;
}
```
result
```
fp32, this thread -> finish
fp32, another thread -> finish
fp16, this thread -> finish
Segmentation fault (core dumped)
```
Is there anything wrong with my code?
## Environment
I used a Dockerfile I made.
https://github.com/SakodaShintaro/trtorch-test/blob/master/docker/Dockerfile
If I create a container with this image and execute `./Test`, a Segmentation fault will occur on the 4th line.
In `trtorch-test/docker`,
```
docker build -t trtorch_test_image .
docker run --gpus all -it --name trtorch_test_container trtorch_test_image:latest bash
./Test
```
Compilation sometimes succeeds, so try it a few times if you want to reproduce the crash.
- PyTorch Version (e.g., 1.0): 1.7
- CPU Architecture: x86_64
- OS (e.g., Linux): Ubuntu 20.04 (on Docker)
- Build command you used (if compiling from source): bazel build //:libtrtorch --compilation_mode opt
- CUDA version: 11.0
- GPU models and configuration: RTX 2080ti
- Nvidia driver version : 460
|
https://github.com/pytorch/TensorRT/issues/326
|
closed
|
[
"bug",
"question",
"bug: triaged [verified]"
] | 2021-02-05T08:50:21Z
| 2021-02-26T02:18:13Z
| null |
SakodaShintaro
|
pytorch/examples
| 885
|
DDP on GPUs invalid ordinal
|
There is a node with 8 GPUs, and I can't train my model on an arbitrary set of 4 of them; it only works when the GPU ids are 0,1,2,3.
How can I use any permutation or combination of the 8 GPUs? Thanks.
```
-- Process 2 terminated with the following error:
Traceback (most recent call last):
  File "/home/lab-chen.qi/anaconda3/envs/torch17/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/home/lab-chen.qi/sc/resweightv1/tiny_imagenet_multi.py", line 223, in main_worker
    torch.cuda.set_device(gpu)
  File "/home/lab-chen.qi/anaconda3/envs/torch17/lib/python3.7/site-packages/torch/cuda/__init__.py", line 263, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
```
Some of my code:
```
import torch
import torch.nn as nn
import torch.distributed as dist
import torch.utils.data.distributed
import torch.multiprocessing as mp
import argparse
import os

parser = argparse.ArgumentParser(description='multi process')
parser.add_argument('--gpu-id', type=str, default='0,1,2,4')
parser.add_argument('--world-size', default=1, type=int,
                    help='number of nodes for distributed training')
parser.add_argument('--rank', default=0, type=int,
                    help='node rank for distributed training')
parser.add_argument('--dist-url', default='tcp://localhost:23456', type=str,
                    help='url used to set up distributed training')
parser.add_argument('--dist-backend', default='nccl', type=str,
                    help='distributed backend')
args = parser.parse_args()

def main():
    global args
    os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_id
    # args.gpu = list(map(int, args.gpu_id.split(',')))
    # state = {k: v for k, v in args._get_kwargs()}
    # ngpus_per_node = torch.cuda.device_count()  # len(args.gpu)
    ngpus_per_node = args.gpu_id.split(',').__len__()
    # print(os.environ['CUDA_VISIBLE_DEVICES'])
    # print('visible GPUs', ngpus_per_node)
    args.nprocs = ngpus_per_node
    args.world_size = ngpus_per_node * args.world_size
    mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
    # Random seed
    # best_acc = 0  # best test accuracy

def main_worker(local_rank, ngpus_per_node, args):
    # global best_acc
    # start from epoch 0 or last checkpoint epoch
    # if not os.path.isdir(args.checkpoint):
    #     mkdir_p(args.checkpoint)
    # import pdb
    # pdb.set_trace()
    gpus = os.environ['CUDA_VISIBLE_DEVICES'].split(',')
    gpu = int(gpus[local_rank])
    args.gpu = gpu
    best_acc = 0
    # print(best_acc)
    args.rank = args.rank * ngpus_per_node + local_rank  # args.gpu[gpu]
    print('rank: {} / {}'.format(args.rank, args.world_size))
    dist.init_process_group(backend=args.dist_backend,
                            init_method=args.dist_url,
                            world_size=args.world_size,
                            rank=args.rank)
    torch.cuda.set_device(gpu)

if __name__ == '__main__':
    main()
```
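For what it's worth, here is a sketch of what I believe the fix should look like (assumption: once CUDA_VISIBLE_DEVICES is set, the visible devices are renumbered from 0, so the worker must index by local rank rather than the physical ordinal):
```python
import torch
import torch.distributed as dist

def main_worker(local_rank, ngpus_per_node, args):
    # After CUDA_VISIBLE_DEVICES='0,1,2,4' the four visible GPUs are 0..3,
    # so use local_rank directly instead of the original physical id.
    args.rank = args.rank * ngpus_per_node + local_rank
    dist.init_process_group(backend=args.dist_backend,
                            init_method=args.dist_url,
                            world_size=args.world_size,
                            rank=args.rank)
    torch.cuda.set_device(local_rank)
```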
I tried this, but it doesn't work: [https://github.com/PyTorchLightning/pytorch-lightning/issues/3791](https://github.com/PyTorchLightning/pytorch-lightning/issues/3791)
|
https://github.com/pytorch/examples/issues/885
|
open
|
[
"distributed"
] | 2021-02-05T02:40:06Z
| 2023-03-31T08:30:25Z
| 1
|
ccijunk
|
pytorch/serve
| 965
|
How to change loadedAtStartup to be true while registering a model?
|
## 📚 Documentation
When a model is registered, loadedAtStartup is false by default. Is this option related to model pre-loading? Is the model supposed to stay loaded at all times if it is set to true? And how exactly do we change it while registering a model? Thank you in advance.
|
https://github.com/pytorch/serve/issues/965
|
closed
|
[
"triaged_wait"
] | 2021-02-05T01:58:37Z
| 2021-05-13T17:41:39Z
| null |
wangs0007
|
pytorch/pytorch
| 51,712
|
UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()`
|
|
https://github.com/pytorch/pytorch/issues/51712
|
closed
|
[] | 2021-02-04T08:03:59Z
| 2021-02-04T15:52:25Z
| null |
vkl-git
|
huggingface/transformers
| 9,961
|
What is the correct way to use Adafactor?
|
Hi, from the papers I've seen, Adafactor is typically used with no external learning rate (as in the Pegasus paper). However, when I try to execute run_seq2seq.py or seq2seq/finetune_trainer.py from your examples and set the --adafactor parameter without specifying a learning rate, it uses the default 3e-05. Is there a way to use Adafactor without a learning rate?
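For context, here is a minimal sketch of what I mean by "no learning rate" (an assumption based on the transformers Adafactor signature: with `relative_step=True`, `lr` must be None and the step size is computed internally):
```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(10, 10)  # stand-in for a real seq2seq model
optimizer = Adafactor(
    model.parameters(),
    lr=None,              # no external learning rate
    relative_step=True,   # time-dependent step size from the paper
    scale_parameter=True,
    warmup_init=True,
)
```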
|
https://github.com/huggingface/transformers/issues/9961
|
closed
|
[
"wontfix"
] | 2021-02-02T15:42:08Z
| 2021-03-06T00:12:07Z
| null |
avacaondata
|
huggingface/datasets
| 1,808
|
writing Datasets in a human readable format
|
Hi,
I see there is a save_to_disk function to save data, but its output is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format, such as a JSON file? Thanks @lhoestq
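For example, something like this is what I am after (a sketch assuming a datasets version that exposes `Dataset.to_json` / `Dataset.to_csv`):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
ds.to_json("data.json")  # one JSON object per line, human readable
ds.to_csv("data.csv")
```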
|
https://github.com/huggingface/datasets/issues/1808
|
closed
|
[
"enhancement",
"question"
] | 2021-02-02T02:55:40Z
| 2022-06-01T15:38:13Z
| null |
ghost
|
pytorch/pytorch
| 51,431
|
torch.where dtype inference is not smart
|
## 🐛 Bug
If we call `torch.where(mask, float_py_scalar, int_py_scalar)`, the dtype inference raises an error, but it should promote to a floating type.
```py
In [198]: torch.__version__
Out[198]: '1.7.0'
In [199]: x = torch.randn(3)
In [200]: x
Out[200]: tensor([0.1649, 2.0497, 1.2026])
In [201]: torch.where(x > 1, 1.0, 0.0)
Out[201]: tensor([0., 1., 1.])
In [202]: torch.where(x > 1, 1.0, 0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-202-d99e0dfc5858> in <module>
----> 1 torch.where(x > 1, 1.0, 0)
RuntimeError: expected scalar type float but found long long
In [203]: torch.where(x > 1, 1, 0)
Out[203]: tensor([0, 1, 1])
```
While one may argue for this error because `int64` and `float32` are not fully compatible, we also support
1. `float32_tensor.add(1)`
2.
```py
In [211]: torch.where(x > 0, 1.0, 0.0)
Out[211]: tensor([1., 1., 1.])
In [212]: torch.where(x > 0, 1.0, 0.0).dtype
Out[212]: torch.float32
```
Note how we don't use float64 either.
So I don't think this should be a problem.
Similarly, these errors are also quite annoying
```py
In [204]: torch.where(x > 1, x, 0)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-204-c1551b46bfbc> in <module>
----> 1 torch.where(x > 1, x, 0)
RuntimeError: expected scalar type float but found long long
In [205]: torch.where(x > 1, x, 0.)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-205-b52b9d3df92f> in <module>
----> 1 torch.where(x > 1, x, 0.)
RuntimeError: expected scalar type float but found double
```
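For completeness, the workaround I use in the meantime (a sketch: wrapping the scalar in a tensor removes the ambiguous scalar-type inference):
```python
import torch

x = torch.randn(3)
out = torch.where(x > 1, x, torch.tensor(0.0))  # both branches are float32
out2 = torch.where(x > 1, x, x.new_zeros(()))   # dtype copied from x
```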
cc @heitorschueroff
|
https://github.com/pytorch/pytorch/issues/51431
|
closed
|
[
"triaged",
"module: sorting and selection",
"function request"
] | 2021-01-31T17:00:30Z
| 2021-02-03T17:33:05Z
| null |
ssnl
|
pytorch/examples
| 880
|
How to run
|
https://github.com/pytorch/examples/issues/880
|
closed
|
[] | 2021-01-31T08:26:07Z
| 2022-03-09T19:59:23Z
| null |
1158481739
|
|
pytorch/TensorRT
| 305
|
aten::view error
|
## ❓ Question
During conversion, I seem to have found incomplete support for the torch.view function. The error is as follows:
`at most one dimension may be inferred`
The function it is trying to convert is this:
`out.view(out.shape[0], -1, 4)`
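As a possible workaround (a sketch with a stand-in tensor), computing the inferred dimension up front avoids relying on -1 inference inside aten::view:
```python
import torch

out = torch.randn(2, 8, 4)             # stand-in for the detection head output
n = out.numel() // (out.shape[0] * 4)  # compute the middle dimension explicitly
out = out.view(out.shape[0], n, 4)     # no -1 left for aten::view to infer
```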
|
https://github.com/pytorch/TensorRT/issues/305
|
closed
|
[
"question",
"No Activity"
] | 2021-01-29T21:31:49Z
| 2021-05-11T00:06:59Z
| null |
rafale77
|
pytorch/pytorch
| 51,345
|
how to convert torch::conv2d return value(tensor) to cv::mat
|
I ran the following program: read a 3-channel picture and feed it through torch::nn::conv2d(3,3,3) with padding 1 and stride 1. Then I got these results:

code:
```
cv::Mat img = cv::imread("babyx2.png", 1);
torch::Tensor img_tensor = torch::from_blob(img.data, { img.rows, img.cols, 3 }, torch::kByte);
img_tensor = img_tensor.permute({ 2, 0, 1 });
img_tensor = img_tensor.unsqueeze(0);
img_tensor = img_tensor.to(kFloat32);
torch::Tensor result = C1(img_tensor); //C1: torch::nn::Conv2d(torch::nn::Conv2dOptions(3, 3, 5).padding(1))
// ... then get the result using the following method
auto ToCvImage(at::Tensor tensor)
{
int width = tensor.sizes()[0];
int height = tensor.sizes()[1];
//auto sizes = tensor.sizes();
try
{
cv::Mat output_mat(cv::Size{ height, width }, CV_8UC3, tensor.data_ptr<uchar>());
return output_mat.clone();
}
catch (const c10::Error& e)
{
std::cout << "an error has occured : " << e.msg() << std::endl;
}
return cv::Mat(height, width, CV_8UC3);
}
```
What is happening here?
|
https://github.com/pytorch/pytorch/issues/51345
|
closed
|
[] | 2021-01-29T08:58:10Z
| 2021-01-29T16:20:09Z
| null |
yzqxmu
|
pytorch/pytorch
| 51,339
|
gcc 4.8.5 -std=c++11: how to build pytorch 1.7
|
## ❓ Questions and Help
I want to build the PyTorch 1.7 code with gcc 4.8.5. What should I do for torch 1.7? With torch 1.2, gcc 4.8.5 works fine, but with torch 1.7 it fails.
|
https://github.com/pytorch/pytorch/issues/51339
|
closed
|
[] | 2021-01-29T07:55:01Z
| 2021-01-30T03:39:58Z
| null |
joinhe
|
pytorch/vision
| 3,322
|
a question about segmentation model loading
|
## ❓ Questions and Help
Why are they different?

cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3322
|
closed
|
[
"question",
"module: models",
"topic: semantic segmentation"
] | 2021-01-29T03:19:01Z
| 2021-01-29T13:45:39Z
| null |
njzyxiong
|
pytorch/pytorch
| 51,320
|
Pytorch not working properly (I don't know how to summarize it, see below)
|
When I have a pytorch model, I sometimes would like to extract the features before the final softmax layers or such. Here, I have a model trained and loaded from a pickle:
```
def build_model():
model = resnet18(pretrained=True)
n_features = model.fc.in_features
n_hidden = 100
model.fc = torch.nn.Sequential(
torch.nn.Linear(n_features, n_hidden),
torch.nn.ReLU(),
torch.nn.Linear(n_hidden, 2)
)
model.to(device)
return model
model = build_model()
model.load_state_dict(torch.load('./model.pickle'))
model.eval()
```
Then, I would suppose that the model can be rebuilt from its children:
```
modules = list(model.children())
encoder = nn.Sequential(*modules)
```
However, given a test tensor:
```
>>> x_test.shape
torch.Size([100, 3, 128, 128])
```
model(x_test) produces an output normally, but encoder(x_test) gives RuntimeError: mat1 dim 1 must match mat2 dim 0. I don't have any idea how to investigate it further. The error messages are quite poor. The documentation is also EXTREMELY poor and doesn't specify the interface of the torchvision models at all (for example, the `.children` method came from a forum, because it doesn't appear anywhere in the documentation, which is insane).
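After some digging, I believe the cause is the following (a runnable sketch): ResNet.forward calls torch.flatten(x, 1) between avgpool and fc, and that functional call is not a child module, so nn.Sequential(*model.children()) silently drops it. Re-inserting an nn.Flatten() makes the rebuilt model behave:
```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18().eval()
modules = list(model.children())
# Re-insert the flatten that ResNet.forward performs functionally.
encoder = nn.Sequential(*modules[:-1], nn.Flatten(), modules[-1])

x = torch.randn(2, 3, 128, 128)
assert encoder(x).shape == model(x).shape
```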
cc @albanD @mruberry @jbschlosser
|
https://github.com/pytorch/pytorch/issues/51320
|
open
|
[
"module: nn",
"triaged"
] | 2021-01-29T00:17:31Z
| 2021-02-08T23:52:23Z
| null |
ghost
|
huggingface/transformers
| 9,867
|
where is position_embedding_type used
|
When I was using the PyTorch Electra model, I read its source code but couldn't find where position_embedding_type is used.
Did I miss something?
|
https://github.com/huggingface/transformers/issues/9867
|
closed
|
[] | 2021-01-28T08:29:08Z
| 2021-01-29T02:00:07Z
| null |
awdrgyjilplij
|
huggingface/datasets
| 1,786
|
How to use split dataset
|

Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like Penn Treebank), but I am not able to achieve this. What I am doing is executing the lambada.py file in my project, but it's not giving the desired results. Any help will be appreciated!
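For reference, this is roughly what I am trying to do (a sketch assuming the hub copy of lambada exposes a `text` column and the usual train/validation/test splits):
```python
from datasets import load_dataset

ds = load_dataset("lambada")
for split in ds:  # e.g. "train", "validation", "test"
    with open(f"{split}.txt", "w") as f:
        for row in ds[split]:
            f.write(row["text"].replace("\n", " ") + "\n")
```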
|
https://github.com/huggingface/datasets/issues/1786
|
closed
|
[
"question"
] | 2021-01-27T21:37:47Z
| 2021-04-23T15:17:39Z
| null |
kkhan188
|
pytorch/xla
| 2,756
|
How to sync XLA GPU Tensor between torch and torch_xla
|
I'm new to torch_xla and am trying to enable it for distributed PyTorch training on multi-node GPU.
However, it seems torch_xla doesn't support this scenario well, for the following reasons:
1. torch_xla only supports single-node multi-processing training via [xmp.spawn](https://pytorch.org/xla/release/1.7/index.html#running-on-multiple-xla-devices-with-multiprocessing)
2. a torch_xla GPU aten::Tensor is not compatible with a CUDA aten::Tensor (since they are different devices)
To work around the issue, I tried to sync the XLA tensor gradients and move them to CUDA aten::Tensors manually before all-reduce, and found something weird:
1. Each XLA tensor sync creates a SyncTensorGraph, and the compilation slows things down a lot
2. Converting an XLA aten::Tensor to a CUDA aten::Tensor actually performs a copy
## ❓ Questions and Help
1. Is there any function or API that supports zero-copy between an aten::cuda::Tensor and an XLA_GPU aten::Tensor?
2. Does each SyncTensor trigger a full-subgraph XLA compilation?
3. Are there any best practices or good suggestions for PyTorch multi-node distributed training?
|
https://github.com/pytorch/xla/issues/2756
|
closed
|
[
"stale"
] | 2021-01-27T02:21:37Z
| 2021-06-26T02:22:41Z
| null |
tanyokwok
|
pytorch/TensorRT
| 294
|
Python Library error after painful compilation.
|
## ❓ Question
After very painfully building the repo from source, due to a lot of strangely hardcoded paths to libraries and includes which had me modify both setup.py and the WORKSPACE, I successfully completed the compilation using bazel. However, when I try to use the Python extension, I get the following error upon importing the library:
```
import trtorch
File "/home/user/.local/lib/python3.8/site-packages/trtorch/__init__.py", line 11, in <module>
from trtorch._compiler import *
File "/home/user/.local/lib/python3.8/site-packages/trtorch/_compiler.py", line 5, in <module>
import trtorch._C
ImportError: /home/anhman/.local/lib/python3.8/site-packages/trtorch/lib/libtrtorch.so: undefined symbol: _ZN2at11show_configB5cxx11Ev
```
## What you have already tried
The last time I saw something similar, it was due to running a compiled binary under a different version of PyTorch than the one it was compiled with. That's not the case here, as I compiled with 1.7.1.
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.7.1-cu110
- CPU Architecture: x64
- OS (e.g., Linux): Ubuntu20.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): bazel build //:libtrtorch --compilation_mode opt and then python3 setup.py install.
- Are you using local sources or building from archives: source
- Python version: 3.8.7
- CUDA version: 11.2
- GPU models and configuration: RTX 3070
- Any other relevant information: TensorRT 7.2.2.3 and cudnn 8.1
## Additional context
|
https://github.com/pytorch/TensorRT/issues/294
|
closed
|
[
"question"
] | 2021-01-27T01:57:24Z
| 2021-02-15T02:41:41Z
| null |
rafale77
|
pytorch/pytorch
| 51,114
|
How to find the module dependency?
|
## ❓ There are many operations in a Model
If we run these codes below:
```
import torch
import torchvision
model = torchvision.models.resnet18()
inp = torch.zeros([64, 3, 7, 7])
for temp in model.children():
print(temp)
```
We can get several modules:
```
Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
ReLU(inplace=True)
MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
........
```
## My questions are:
1. We can only see the constructed modules, but cannot see their input/output dependencies: in resnet18, the inputs of the second Sequential module come from both the first Sequential and MaxPool2d. Is there any way we can figure out the dependencies among different modules (maybe in the Python client)? See the sketch below.
2. Modules are related to high-level operations; can we see the related operations and their dependencies in the Python client (the outputs of torch.jit._get_trace_graph are too low-level)?
3. How can we find back-propagation dependencies in the Python client?
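Here is a sketch of the kind of output I am hoping for (assuming a PyTorch version that ships torch.fx; symbolic tracing records which node feeds which at the module-call level):
```python
import torch.fx
import torchvision

model = torchvision.models.resnet18()
traced = torch.fx.symbolic_trace(model)
for node in traced.graph.nodes:
    if node.op == "call_module":  # one entry per submodule invocation
        print(node.target, "<-", [str(a) for a in node.args])
```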
|
https://github.com/pytorch/pytorch/issues/51114
|
closed
|
[] | 2021-01-26T17:28:57Z
| 2021-01-26T21:51:08Z
| null |
Xuyuanjia2014
|
pytorch/vision
| 3,294
|
Using torchvision roi_align in libtorch c++ jit modules
|
## 🐛 Bug
Hi, I’m trying to use libtorch 1.7.1 to load a jit model that is created with pytorch 1.5.1 and torchvision 0.6.1.
This model is using torchvision::roi_align operator.
When running the model I get this error:
**Could not find any similar ops to torchvision::roi_align. This op may not exist or may not be currently supported in TorchScript.**
loading the model in pytorch is working fine.
Any idea why it's not loading?
Do I need to install another package in my C++ environment to be able to load this model?
## Expected behavior
load and forward the model successfully in libtorch
## Environment
libtorch version: 1.7.1
Collecting environment information...
PyTorch version: 1.5.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.2 LTS (x86_64)
GCC version: (Ubuntu 6.4.0-17ubuntu1) 6.4.0 20180424
Clang version: Could not collect
CMake version: version 3.18.0
Python version: 3.6 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: Quadro P5000
Nvidia driver version: 418.87.01
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.17.2
[pip3] numpy-indexed==0.3.5
[pip3] numpy-quaternion==2019.10.3.10.26.21
[pip3] numpydoc==0.9.1
[pip3] pytorch3d==0.2.0
[pip3] torch==1.5.1
[pip3] torchvision==0.6.1
[conda] Could not collect
Thanks
|
https://github.com/pytorch/vision/issues/3294
|
closed
|
[
"question",
"module: ops",
"topic: object detection",
"module: c++ frontend"
] | 2021-01-26T07:23:02Z
| 2022-11-28T05:56:59Z
| null |
natangold85
|
pytorch/vision
| 3,293
|
Affine Transform: why is translate a list[int] when the code suggests it could be floating point?
|
https://github.com/pytorch/vision/blob/f16322b596c7dc9e9d67d3b40907694f29e16357/torchvision/transforms/functional.py#L956
cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3293
|
open
|
[
"question",
"module: transforms"
] | 2021-01-26T07:14:08Z
| 2021-01-26T15:41:51Z
| null |
varung
|
pytorch/TensorRT
| 291
|
Questions about Value_Tensor_map and Evaluated_Value_map? (Not an issue, just try to understand them...)
|
I have just gone through TRTorch's 2020 GTC talk/slides/documentation, focusing mainly on the graph conversion implementation. I have some conceptual confusion and questions:
1. What's the relationship between `torch::jit::Value` and `torch::jit::IValue`? Are they the same thing? I noticed they are used interchangeably in some situations and refer to different classes in others.
2. Why do we need to record the Value -> ITensor map and the Value -> IValue map? What is the main use of these two maps?
Could someone help me? Thanks in advance!
|
https://github.com/pytorch/TensorRT/issues/291
|
closed
|
[
"question"
] | 2021-01-25T12:40:35Z
| 2021-01-25T19:19:48Z
| null |
maxyanghu
|
pytorch/elastic
| 140
|
Torch Elastic - How to make sure all nodes are in the same AZ?
|
## ❓ Questions and Help
### Question
Hi, when using TorchElastic + AWS EKS, how can we ensure that multi-node training jobs have all of the nodes located in the same AZ? This is critical for multi-node training jobs, in terms of speed of data transfer and data transfer costs.
One naive way would be to just specify 1 subnet when creating the EKS cluster, but is there a way we can create an EKS cluster with multiple subnets, and when TorchElastic attempts to launch multiple nodes for a training job, it will try to launch them such that all of the nodes are located within 1 subnet/AZ (where that subnet would be one of the subnets that the EKS cluster has)? And is this possible to do with spot instances?
Thanks!
|
https://github.com/pytorch/elastic/issues/140
|
closed
|
[] | 2021-01-25T00:14:10Z
| 2021-05-17T15:47:49Z
| null |
thecooltechguy
|
pytorch/vision
| 3,283
|
How to install torchvision to use video_reader backend?
|
I simply installed torchvision from conda (as advertised on pytorch.org). But `torchvision.set_video_backend('video_reader')` prints `video_reader video backend is not available. Please compile torchvision from source and try again`. This should be mentioned in https://pytorch.org/docs/stable/torchvision/index.html#torchvision.set_video_backend and in torchvision README (including if the `video_reader` is temporarily not supported)
cc @bjuncek
|
https://github.com/pytorch/vision/issues/3283
|
closed
|
[
"enhancement",
"module: documentation",
"module: video"
] | 2021-01-24T03:09:56Z
| 2022-08-16T10:58:31Z
| null |
vadimkantorov
|
pytorch/vision
| 3,281
|
Can we use DeeplabV3 in Salient Object Detection ?
|
Recently, I started doing more deep learning work in semantic segmentation. I can't figure out whether DeepLabV3 can be applied to salient object detection.
|
https://github.com/pytorch/vision/issues/3281
|
closed
|
[
"question"
] | 2021-01-24T01:32:09Z
| 2021-04-12T07:40:18Z
| null |
duynguyen51
|
pytorch/xla
| 2,750
|
How to change torch tpu v3 baseline into torch tpu pod v2?
|
I was trying to run this working Torch TPU v3 baseline on a Torch TPU v2 pod: https://www.kaggle.com/mobassir/faster-pytorch-tpu-baseline-for-cld-cv-0-9
I changed the hardware accelerator from TPU v3-8 to TPU v2 pod in Kaggle, set the batch size to 1, and used
```
def _mp_fn(rank, flags):
global acc_list
torch.set_default_tensor_type('torch.FloatTensor')
res = train_model()
FLAGS={}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=32//8, start_method='fork')
```
but I get an error saying "process 0 terminated with exit code 1".
I can't find any resource or tutorial for converting a TPU v3 notebook to a TPU v2 pod in PyTorch XLA, so I wanted to give it a try myself. @taylanbil, I need your help.
|
https://github.com/pytorch/xla/issues/2750
|
closed
|
[] | 2021-01-23T07:48:39Z
| 2021-01-25T21:29:29Z
| null |
mobassir94
|
pytorch/vision
| 3,274
|
Different ENODATA code on macOS
|
## 🐛 Bug
It seems the macOS ENODATA code (96) is different from the Linux one (61). The Linux code is currently hard-coded in `Video.cpp`, which results in an (unnecessary?) error being shown when using the video decoder on macOS:
https://github.com/pytorch/vision/blob/7d831a2f9b3ebab9eb8e5c899cf70b103ad6908a/torchvision/csrc/io/video/Video.cpp#L314-L318
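The platform difference is easy to confirm (a small sketch; errno values are platform constants exposed by the C library):
```python
import errno

# 61 on Linux, 96 on macOS; the check in Video.cpp should use the symbolic
# constant rather than the hard-coded Linux value.
print(errno.ENODATA)
```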
cc @bjuncek
|
https://github.com/pytorch/vision/issues/3274
|
closed
|
[
"question",
"module: video"
] | 2021-01-22T12:05:45Z
| 2021-01-22T17:29:52Z
| null |
stefanwayon
|
pytorch/serve
| 943
|
how to return Chinese characters with UTF-8 code
|
1. When I use TorchServe, I return a list in the **postprocess function** of the handler. Each element of the list is a Python dictionary, and the dictionary values are Chinese characters. TorchServe directly returns JSON with Unicode escapes like "\u59d3". Can I control the response to use UTF-8?
2. In addition, is there a corresponding document for "model-server.jar"? What is its relationship with TorchServe?
We look forward to your reply. Thanks a lot.
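For question 1, the workaround I am considering (a sketch; I am assuming the handler may serialize its own strings) is to JSON-encode the dictionaries with `ensure_ascii=False`:
```python
import json

def postprocess(data):
    results = [{"姓名": "张三"}]  # example payload shape
    return [json.dumps(r, ensure_ascii=False) for r in results]

print(postprocess(None))  # keeps UTF-8 characters instead of \uXXXX escapes
```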
|
https://github.com/pytorch/serve/issues/943
|
open
|
[
"triaged_wait",
"language"
] | 2021-01-22T09:19:00Z
| 2021-05-27T04:36:56Z
| null |
aixuedegege
|
pytorch/vision
| 3,273
|
What is expected Kinetics400 dataset directory structure?
|
Given that the dataset does not come with official downloader scripts and that most roll their own or hack some third-party scripts, it would be much clearer if https://pytorch.org/docs/stable/torchvision/datasets.html#kinetics-400 explained what directory structure is expected by `torchvision.datasets.Kinetics400`
What is the expected dataset size, and what are the video file extensions?
Thanks!
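For comparison, this is the layout I am assuming (a sketch; ImageFolder-style, one folder per class holding .avi/.mp4 clips; please confirm):
```python
# Assumed layout:
#   kinetics400/train/abseiling/clip001.mp4
#   kinetics400/train/air_drumming/clip002.avi
#   ...
from torchvision.datasets import Kinetics400

ds = Kinetics400("kinetics400/train", frames_per_clip=16)
```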
cc @pmeier
|
https://github.com/pytorch/vision/issues/3273
|
closed
|
[
"enhancement",
"module: datasets",
"module: documentation"
] | 2021-01-22T01:02:24Z
| 2021-03-01T10:18:21Z
| null |
vadimkantorov
|
pytorch/vision
| 3,267
|
get v0.8.1 branch compile out torchvision==0.9.0a0+7b9d30e
|
I cloned the v0.8.1 branch and compiled it with PyTorch 1.7.0, but the compiled version ends up as 0.9.0. Is anything wrong?
|
https://github.com/pytorch/vision/issues/3267
|
closed
|
[
"question"
] | 2021-01-20T09:52:05Z
| 2021-01-20T10:29:08Z
| null |
helloyan
|
pytorch/pytorch
| 50,709
|
conv3d in r3d_18: How to maintain the dimension?
|
## How to maintain the dimension in conv3d(r3d_18)?
### Padding in the conv3d convolution
1. the input is (1, 3, 5, 112, 112)
2. the model is `models.video.r3d_18(pretrained=True, progress=False)`
3. the model summary
```
VideoResNet(
(stem): BasicStem(
(0): Conv3d(3, 64, kernel_size=(3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
(1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Sequential(
(0): Conv3DSimple(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
(1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv3DSimple(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
(1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(relu): ReLU(inplace=True)
)
(1): BasicBlock(
(conv1): Sequential(
(0): Conv3DSimple(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
(1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv3DSimple(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
(1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(relu): ReLU(inplace=True)
)
)
.............
```
4. input through the first layer in model
```
input = torch.zeros(1, 3, 5, 112, 112)
output = model.stem(input)
>>> torch.Size([1, 64, 5, 56, 56])
```
5. My question is: why is the output 1 × 64 × 5 × 56 × 56? How does the padding work in PyTorch? (See the size arithmetic after the diagram.)
This is my schematic diagram:

|
https://github.com/pytorch/pytorch/issues/50709
|
closed
|
[] | 2021-01-19T03:07:16Z
| 2021-01-20T14:08:55Z
| null |
u0251077
|
pytorch/vision
| 3,261
|
ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory
|
## 🐛 Bug
## To Reproduce
Steps to reproduce the behavior:
1. from torchvision import _C
```
>>> from torchvision import _C
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory
```
## Environment
python collect_env.py
```
Collecting environment information...
PyTorch version: 1.1.0
Is debug build: False
CUDA used to build PyTorch: 10.0.130
ROCM used to build PyTorch: N/A
OS: Ubuntu 16.04.5 LTS (x86_64)
GCC version: (Ubuntu 8.4.0-1ubuntu1~16.04.1) 8.4.0
Clang version: Could not collect
CMake version: version 3.14.4
Python version: 3.6 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti
GPU 4: GeForce GTX 1080 Ti
GPU 5: GeForce GTX 1080 Ti
GPU 6: GeForce GTX 1080 Ti
GPU 7: GeForce GTX 1080 Ti
Nvidia driver version: 418.39
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.0
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.5.1.10
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.1.0
[pip3] torchvision==0.4.2
[conda] cudatoolkit 10.0.130 hf841e97_6 conda-forge
[conda] mkl 2020.2 256
[conda] numpy 1.19.5 py36h2aa4a07_1 conda-forge
[conda] pytorch 1.1.0 py3.6_cuda10.0.130_cudnn7.5.1_0 pytorch
[conda] torchvision 0.3.0 py36_cu10.0.130_1 pytorch
```
## Additional context
I was using the Faster R-CNN object detector in torchvision; while doing `keep = nms(boxes_for_nms, scores, iou_threshold)` it gives this error. An easy way to reproduce the error is to run
> from torchvision import _C
Please help.
cc @fmassa @vfdev-5
|
https://github.com/pytorch/vision/issues/3261
|
closed
|
[
"question",
"topic: binaries"
] | 2021-01-17T17:08:27Z
| 2021-06-16T15:08:15Z
| null |
IISCAditayTripathi
|
pytorch/pytorch
| 50,657
|
How to maximize inference speed of models implemented with C++ API ? (not using torchscript or jit)
|
I'm currently implementing a seq2seq model with the LibTorch C++ API (built from torch::nn::Modules, not using JIT). Are there any special techniques to optimize the inference speed? Thanks.
cc @yf225 @glaringlee @VitalyFedyunin @ngimel @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/50657
|
closed
|
[
"module: performance",
"module: cpp",
"triaged"
] | 2021-01-17T02:55:51Z
| 2024-06-27T07:58:38Z
| null |
w1d2s
|
huggingface/sentence-transformers
| 693
|
What is 'Spearman’s rank correlation between the cosine-similarity of the sentence embeddings and the gold labels.' ?
|
In your paper, you mention this in **section 4.1**:
> we compute the Spearman’s rank correlation between the cosine-similarity of the sentence embeddings and the gold labels.
Here is my question: what do the `gold labels` mean, and can you provide an example to explain how the Spearman’s rank correlation in your paper is calculated? Any help will be appreciated!
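To make my question concrete, here is my current understanding as a sketch (assumption: the "gold labels" are the human-annotated similarity scores shipped with STS, and Spearman compares their ranking against the ranking of the cosine similarities):
```python
import numpy as np
from scipy.stats import spearmanr

cosine_sims = np.array([0.91, 0.15, 0.62])  # model's cosine similarities
gold_labels = np.array([4.8, 1.0, 3.2])     # annotated similarity scores
rho, _ = spearmanr(cosine_sims, gold_labels)
print(rho)  # 1.0 here: the two rankings agree perfectly
```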
|
https://github.com/huggingface/sentence-transformers/issues/693
|
closed
|
[] | 2021-01-15T08:46:57Z
| 2021-01-15T09:55:00Z
| null |
Gpwner
|
pytorch/xla
| 2,733
|
How to install Torch_XLA in my own laptop?
|
## ❓ Questions and Help
I want to build an environment for Torch_XLA on my own laptop with Anaconda3, but I cannot find any information about this. Is it difficult to install Torch_XLA with Anaconda3 or pip?
|
https://github.com/pytorch/xla/issues/2733
|
closed
|
[] | 2021-01-15T02:38:39Z
| 2021-04-09T04:54:46Z
| null |
TianshengSun
|
pytorch/examples
| 870
|
Permissions to contribute
|
Hi there, I thought I could contribute a few notebooks with a really low barrier to entry for concepts like regression using tensors and for loops, and small, highly documented shallow nets to illustrate concepts, etc. I tried to push a notebook today to a branch I checked out for a PR but don't have permissions. How can I request them?
|
https://github.com/pytorch/examples/issues/870
|
closed
|
[] | 2021-01-13T13:26:02Z
| 2022-03-09T20:16:51Z
| 1
|
rbownes
|
huggingface/datasets
| 1,733
|
connection issue with glue, what is the data url for glue?
|
Hi,
my code sometimes fails due to a connection issue with GLUE. Could you tell me the URL the datasets library reads GLUE from, so I can test whether the machines I am working on have a connection issue on my side or not?
Thanks
|
https://github.com/huggingface/datasets/issues/1733
|
closed
|
[] | 2021-01-13T08:37:40Z
| 2021-08-04T18:13:55Z
| null |
ghost
|
pytorch/vision
| 3,246
|
assert error len(grid_sizes) == len(strides) == len(cell_anchors)
|
It looks like a bug. When I do not set AnchorGenerator() in FasterRCNN, the default anchor_sizes in **detection/faster_rcnn.py** line **182** is `anchor_sizes = ((32,), (64,), (128,), (256,), (512,))`, which makes len(cell_anchors) == 5. And I found that in **detection/faster_rcnn.py** line **120** the anchor_sizes are set to `((32, 64, 128, 256, 512),)`, so len(cell_anchors) == 1.
|
https://github.com/pytorch/vision/issues/3246
|
closed
|
[
"question"
] | 2021-01-13T03:30:16Z
| 2021-01-20T11:06:09Z
| null |
ghost
|
huggingface/transformers
| 9,556
|
Where is convert_bert_original_tf_checkpoint_to_pytorch.py?
|
Hi,
I am getting the following error when implementing entity extraction with BERT: OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index']
I am very new to using BERT, and noticed that [issue 2110](https://github.com/huggingface/transformers/issues/2110) had a similar problem. Issue 2110 referred to the convert_bert_original_tf_checkpoint_to_pytorch.py file. However, the current link isn't working. Could you point me to its current location?
V/r,
L
|
https://github.com/huggingface/transformers/issues/9556
|
closed
|
[
"wontfix",
"Migration"
] | 2021-01-13T02:49:48Z
| 2021-03-06T00:13:15Z
| null |
sednaasil
|
pytorch/pytorch
| 50,426
|
How to do gathering on a tensor with two-dim indexing
|
### Question
Hi,
I want to add a symbolic function to a custom PyTorch op and export it to ONNX using existing ONNX ops. There is a two-dimensional indexing operation. I have tried `index_select`, but it does not work. Could anyone take a look and help me with this?
### Further information
Sample code
```
def my_custom_op(data, x_indices, y_indices):
    # suppose this op is written in C++
    return data[x_indices, y_indices]

class MyCustomOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, data, x_indices, y_indices):
        return my_custom_op(data, x_indices, y_indices)

    @staticmethod
    def symbolic(g, data, x_indices, y_indices):
        from torch.onnx.symbolic_opset9 import index_select, transpose
        data_xs = index_select(g, data, 0, x_indices)
        # don't know how to do this because index_select does not work here
        # data_xs = transpose(g, data_xs, 0, 1)
        # data_ys = index_select(g, data_xs, 0, y_indices)
        return out  # `out` is what still needs to be constructed
```
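One workaround I am considering (a sketch): flattening the two-dimensional gather into a one-dimensional one, which maps onto a single Gather in ONNX; `result[i] == data[x[i], y[i]]`:
```python
import torch

data = torch.arange(12.).view(3, 4)
x = torch.tensor([0, 2])
y = torch.tensor([1, 3])

flat = data.reshape(-1)
out = flat[x * data.size(1) + y]  # row-major flat index
assert torch.equal(out, data[x, y])
```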
Thanks in advance.
|
https://github.com/pytorch/pytorch/issues/50426
|
closed
|
[] | 2021-01-12T10:21:14Z
| 2021-01-12T22:15:39Z
| null |
RunningLeon
|
pytorch/pytorch
| 50,346
|
how to save weights when using RPC framework
|
Hi,
I am using the RPC framework to split the model across different processes/ranks. However, I notice that calling torch.save will only save the weights of the part of the model on a single rank. I am wondering if there is a way to save the weights of all the model parts into one file?
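The best idea I have so far is a sketch like the following (`worker_names` and the per-worker `model` are assumptions standing in for my actual setup):
```python
import torch
import torch.distributed.rpc as rpc

def get_local_state_dict():
    return model.state_dict()  # `model` is the shard living on that worker

def save_full_model(worker_names, path):
    # Pull every shard's weights to this rank, then write a single file.
    shards = {name: rpc.rpc_sync(name, get_local_state_dict)
              for name in worker_names}
    torch.save(shards, path)
```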
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @jjlilley @osalpekar @jiayisuse @mrzzd @agolynski @SciPioneer @H-Huang @cbalioglu
|
https://github.com/pytorch/pytorch/issues/50346
|
open
|
[
"oncall: distributed",
"triaged",
"module: rpc"
] | 2021-01-10T08:26:37Z
| 2024-11-18T17:04:45Z
| null |
FrankLeeeee
|
pytorch/TensorRT
| 267
|
prim::ListUnpack unable to get schema
|
When I try to compile a model, I get this error:
```
DEBUG: Unable to get schema for Node %b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15) (NodeConverterRegistry.Convertable)
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/conversion/conversion.cpp:392] Expected schema to be true but got false
Unable to get schema for Node %b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15) (conversion.VerifyCoverterSupportForBlock)
```
and the related graph definition is this
```
%15 : int[] = aten::size(%images.1) # <string>:7:9
%b.1 : int, %nframe.1 : int, %c : int, %h.1 : int, %w.1 : int = prim::ListUnpack(%15)
```
Input shape is (1,1,3,672,672)
The detailed log is here:
[listunpack.txt](https://github.com/NVIDIA/TRTorch/files/5786336/listunpack.txt)
GDB backtrace
```
#0 0x00007fff63987438 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007fff6398903a in __GI_abort () at abort.c:89
#2 0x00007ffff7a8ddde in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3 0x00007ffff7a99896 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007ffff7a99901 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5 0x00007ffff7a99b55 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6 0x000000000047b116 in trtorch::core::conversion::GetUnsupportedOpsInBlock[abi:cxx11](torch::jit::Block const*) (b=0x5d4b9d50) at core/conversion/conversion.cpp:390
#7 0x000000000047b3a7 in trtorch::core::conversion::VerifyConverterSupportForBlock (b=0x5d4b9d50) at core/conversion/conversion.cpp:406
#8 0x000000000045d784 in trtorch::core::CheckMethodOperatorSupport (mod=..., method_name="forward") at core/compiler.cpp:136
#9 0x000000000045ac55 in trtorch::CheckMethodOperatorSupport (module=..., method_name="forward") at cpp/api/src/trtorch.cpp:14
#10 0x000000000042178d in main (argc=5, argv=0x7fffffffdf68) at cpp/trtorchc/main.cpp:371
```
In the official PyTorch source code, I find this:
```
%16 : Tensor[] = aten::chunk(%gates, %7, %8)
%ingate.1 : Tensor, %forgetgate.1 : Tensor, %cellgate.1 : Tensor, %outgate.1 : Tensor = prim::ListUnpack(%16)
```
Does this mean that aten::size is an operator rather than an evaluator?
In trtorch aten.cpp, we have
```
.evaluator({c10::Symbol::fromQualString("aten::size"),
[](const torch::jit::Node* n, kwargs& args) -> c10::optional<torch::jit::IValue> {
LOG_WARNING("There may be undefined behavior using dynamic shape and aten::size");
auto tensor_var = args.at(n->input(0));
if (n->inputs().size() == 1) {
if (tensor_var.isITensor()) {
auto tensor = tensor_var.ITensor();
return util::toVec(tensor->getDimensions());
} else {
auto tensor = tensor_var.unwrapToTensor();
return tensor.sizes();
}
} else {
auto dim = args.at(n->input(1)).unwrapToInt();
if (tensor_var.isITensor()) {
auto tensor = tensor_var.ITensor();
return util::toVec(tensor->getDimensions())[dim];
} else {
auto tensor = tensor_var.unwrapToTensor();
return tensor.sizes()[dim];
}
}
},
EvalOptions().validSchemas(
{"aten::size(Tensor self) -> (int[])", "aten::size.int(Tensor self, int dim) -> (int)"})})
.evaluator({c10::Symbol::fromQualString("aten::__getitem__"),
```
In another graph, compiling hits the same issue:
```
%46 : Tensor[] = aten::split(%45, %6, %7) # /opt/tiger/conda/lib/python3.7/site-packages/torch/tensor.py:375:0
%47 : Tensor, %48 : Tensor = prim::ListUnpack(%46)
DEBUG: Unable to get schema for Node %47 : Tensor, %48 : Tensor = prim::ListUnpack(%46) (NodeConverterRegistry.Convertable)
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/conversion/conversion.cpp:392] Expected schema to be true but got false
Unable to get schema for Node %47 : Tensor, %48 : Tensor = prim::ListUnpack(%46) (conversion.VerifyCoverterSupportForBlock)
```
|
https://github.com/pytorch/TensorRT/issues/267
|
closed
|
[
"question"
] | 2021-01-08T09:28:32Z
| 2021-01-22T19:51:16Z
| null |
inocsin
|
pytorch/vision
| 3,233
|
Which paper is torchvision.ops.deform_conv2d from?
|
## 📚 Documentation
I want to know which paper [torchvision.ops.deform_conv2d](https://pytorch.org/docs/stable/torchvision/ops.html#torchvision.ops.deform_conv2d) comes from: is it DCNv1 or DCNv2?
|
https://github.com/pytorch/vision/issues/3233
|
closed
|
[
"question",
"module: documentation"
] | 2021-01-08T09:17:08Z
| 2021-01-08T10:11:11Z
| null |
songyuc
|
pytorch/pytorch
| 50,139
|
How to correctly nest datasets and dataloaders?
|
## ❓ Questions and Help
Hi, I am asking here because it seemed like the right place; if it isn't, please tell me where to ask.
Consider a stream of tabular data.
```
import pandas as pd
import numpy as np
def data_stream():
for _ in range(1000):
df = pd.DataFrame({
'a': np.arange(10000),
'b': (np.arange(10000) + 10000)
})
yield df
```
Please assume the dataframes will be large (and different).
I want to create a dataloader for data that is arranged as I stated above.
Batches should consist of X rows of the current dataframe, until it is exhausted (with shuffling flexibility etc.). The last batch can be thrown away if it is not full.
Then, go on to the next dataframe, until StopIteration.
If it were a single dataframe, I would simply use the good old torch.utils.data.Dataset with a standard dataloader, with a small configuration for the number of df rows per sample, and be done.
If it were a stream of single sample per stream item, I would use torch.utils.data.IterableDataset exactly like the doc states.
However, I have both.
If I use a torch.utils.data.IterableDataset, I have to define a DataLoader for it, and I then lose the power of the DataLoader that would operate on the df itself. The same problem would arise in the other direction.
___
What's the correct way of handling data that is arranged like this?
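To make the question concrete, here is my best attempt so far (a sketch reusing `data_stream` from above): an IterableDataset that walks the stream and, per dataframe, shuffles the rows and yields fixed-size row batches, dropping the last partial one:
```python
import numpy as np
import torch
from torch.utils.data import DataLoader, IterableDataset

class StreamedFrameBatches(IterableDataset):
    def __init__(self, frame_iter_fn, rows_per_batch):
        self.frame_iter_fn = frame_iter_fn
        self.rows_per_batch = rows_per_batch

    def __iter__(self):
        for df in self.frame_iter_fn():
            values = df.to_numpy()
            perm = np.random.permutation(len(values))
            for i in range(0, len(values) - self.rows_per_batch + 1,
                           self.rows_per_batch):
                yield torch.as_tensor(values[perm[i:i + self.rows_per_batch]])

# batch_size=None passes the pre-built batches through unchanged.
loader = DataLoader(StreamedFrameBatches(data_stream, 128), batch_size=None)
```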
|
https://github.com/pytorch/pytorch/issues/50139
|
closed
|
[] | 2021-01-06T11:44:07Z
| 2021-01-07T00:46:10Z
| null |
noamzilo
|
pytorch/tutorials
| 1,304
|
NLP FROM SCRATCH: TRANSLATION WITH A SEQUENCE TO SEQUENCE NETWORK AND ATTENTION
|
Hi,
I'm exhausted... how do I save and load the model for future use?
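In case it helps to be concrete, this is the kind of sketch I am looking for (stand-in GRUs in place of the tutorial's EncoderRNN/AttnDecoderRNN):
```python
import torch
import torch.nn as nn

encoder = nn.GRU(10, 16)  # stand-ins for the tutorial's encoder/decoder
decoder = nn.GRU(16, 10)

torch.save({"encoder": encoder.state_dict(),
            "decoder": decoder.state_dict()}, "seq2seq.pt")

ckpt = torch.load("seq2seq.pt")
encoder.load_state_dict(ckpt["encoder"])
decoder.load_state_dict(ckpt["decoder"])
```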
|
https://github.com/pytorch/tutorials/issues/1304
|
closed
|
[] | 2021-01-06T10:45:46Z
| 2021-06-02T19:39:35Z
| 1
|
aloska
|
pytorch/TensorRT
| 266
|
How to convert model from double to float
|
When I try to compile a TorchScript model, I get this log:
```
DEBUG: [TRTorch Conversion Context] - Found IValue containing object of type Double(requires_grad=0, device=cpu)
terminate called after throwing an instance of 'trtorch::Error'
what(): [enforce fail at core/util/trt_util.cpp:293] Expected aten_trt_type_map.find(t) != aten_trt_type_map.end() to be true but got false
Unsupported Aten datatype
```
So I tried to convert the model to float using this:
```
script_model = torch.jit.load(path)
script_model = script_model.eval()
script_model = script_model.float()
script_model.save(new_path)
```
And it still throws this error.
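One workaround I am considering (a sketch, under the assumption that the Double values are attributes baked into the graph rather than parameters, which `.float()` would not touch): cast the eager model and re-trace it before saving:
```python
import torch
import torch.nn as nn

class MyModel(nn.Module):  # stand-in for the real network
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)

    def forward(self, x):
        return self.conv(x)

model = MyModel().eval().float()
scripted = torch.jit.trace(model, torch.randn(1, 3, 224, 224))
scripted.save("model_fp32.ts")
```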
|
https://github.com/pytorch/TensorRT/issues/266
|
closed
|
[
"question",
"component: core"
] | 2021-01-06T09:59:10Z
| 2022-08-12T21:10:14Z
| null |
inocsin
|
pytorch/pytorch
| 50,118
|
torch.where scalar/tensor documentation is unclear and not formatted
|
## 📚 Documentation
See:
```
Currently valid scalar and tensor combination are 1. Scalar of floating dtype and torch.double 2. Scalar of integral dtype and torch.long 3. Scalar of complex dtype and torch.complex128
```
I believe these are supposed to be on separate lines. Also, this message comes before the type information, and it's not clear what "scalar and tensor combination" refers to. It should at least mention that it's talking about `x` and `y`, not `condition`.
cc @jlin27 @mruberry @heitorschueroff
|
https://github.com/pytorch/pytorch/issues/50118
|
open
|
[
"module: docs",
"triaged",
"module: sorting and selection"
] | 2021-01-05T22:52:49Z
| 2021-01-07T17:14:35Z
| null |
gchanan
|
pytorch/pytorch
| 50,112
|
need a clear guide for when and how to use torch.cuda.set_device()
|
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
I find myself quite unclear about `torch.cuda.set_device()`. The current documentation is unsatisfactory, ambiguous, and confusing; e.g., the first 3 lines of the code sample at https://pytorch.org/docs/stable/notes/cuda.html#cuda-semantics:
```
cuda = torch.device('cuda') # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2') # GPU 2 (these are 0-indexed)
```
it's very ambiguous and doesn't tell me anything. What is the default device in that example?
How come `torch.cuda.set_device()` is not used here, given that it's what is supposed to set the default device?
If possible I would like to ask for a clarification of what @ngimel shared here: https://github.com/pytorch/pytorch/issues/49961#issuecomment-754319348 quote:
> Default device is the device you are setting with torch.cuda.set_device(). It's possible to set device to 1 and then operate on the tensors on device 0, but for every function internally pytorch would be calling cudaSetDevice(0) - launch function kernel - cudaSetDevice(1) as part of setting device guards, and this is generally less efficient then setting device to 0 in the first place.
She suggested that unless I explicitly set `torch.cuda.set_device()` when switching to a different device (say 0->1) the code could incur a performance hit, because it'll first switch to device 0 and then 1 on every pytorch op if the default device was somehow 0 at that point.
So, say, if I'm setting up a DDP in the program. Do I have to call `torch.cuda.set_device(local_rank)` at some point after `torch.distributed.init_process_group()` since otherwise the default device will be `cpu` and the whole program will be slower because of that.
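For concreteness, the pattern I believe is being recommended (a sketch, assuming one process per GPU with `local_rank` supplied by the launcher):
```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# local_rank comes from the launcher (e.g. torch.distributed.launch)
dist.init_process_group(backend='nccl')
torch.cuda.set_device(local_rank)           # default device for this process
device = torch.device('cuda', local_rank)

model = model.to(device)
model = DDP(model, device_ids=[local_rank])
```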
Should pytorch flag to users when the default device isn't matching the device the op is run on?
And say, I'm doing model parallelism as explained in this [tutorial](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html#apply-model-parallel-to-existing-modules) - why doesn't it do `torch.cuda.set_device()` when switching devices?
Would it be possible to write clear documentation on when to use `torch.cuda.set_device()`? Currently, it seems to be used more as a band-aid when device-switching bugs are encountered, since most of the time most code seems to work just fine without it, yet we unknowingly take a performance hit.
Thank you!
cc @ngimel @jlin27 @mruberry
|
https://github.com/pytorch/pytorch/issues/50112
|
open
|
[
"module: docs",
"module: cuda",
"triaged",
"needs design"
] | 2021-01-05T22:11:26Z
| 2025-12-26T12:57:46Z
| null |
stas00
|
pytorch/examples
| 866
|
Structure of train_loader
|
Hi, and thanks in advance for your help! I would like to upload my own set of images and train the variational autoencoder model on my training set. I don't understand the structure of your train_loader. I see you use torch.utils.data.DataLoader on datasets.MNIST to obtain train_loader, but I can't tell whether train_loader is a list of images represented as numpy arrays or something else.
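For what it's worth, `train_loader` yields batches of tensors, not a list of numpy arrays; a sketch with your own images (the path is a placeholder, and one subfolder per class is assumed):
```python
import torch
from torchvision import datasets, transforms

dataset = datasets.ImageFolder(
    'path/to/images',                 # placeholder; one subfolder per class
    transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset, batch_size=128,
                                           shuffle=True)

for images, labels in train_loader:
    # images: FloatTensor of shape (batch, channels, height, width)
    # labels: LongTensor of shape (batch,)
    ...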
|
https://github.com/pytorch/examples/issues/866
|
closed
|
[] | 2021-01-04T15:57:34Z
| 2022-03-09T21:17:33Z
| 1
|
Silvia-Sciva
|
pytorch/pytorch
| 50,030
|
How to realize Cross Validation using torchtext?
|
I want to implement cross-validation using torchtext. Here is what I have done:
1. First, I use `TabularDataset` to define a dataset from the JSON file.
2. Then, I use `train_exs_arr = np.array(train_data.examples)` and `d_train = train_exs_arr[train_idx].tolist()`.
3. Then, I use `Dataset` to define a sub-dataset from the examples in `d_train`.
4. Finally, I use `BucketIterator`. However, I cannot access the data from the `BucketIterator` (see the sketch below).
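To make steps 2-4 concrete, here is a sketch of the whole flow (assuming the legacy `torchtext.data` API; the `text` field name and the split count are assumptions):
```python
import numpy as np
from sklearn.model_selection import KFold
from torchtext import data

examples = np.array(train_data.examples)   # train_data is the TabularDataset
fields = train_data.fields

for train_idx, val_idx in KFold(n_splits=5).split(examples):
    train_ds = data.Dataset(examples[train_idx].tolist(), fields)
    val_ds = data.Dataset(examples[val_idx].tolist(), fields)
    train_it, val_it = data.BucketIterator.splits(
        (train_ds, val_ds), batch_size=32,
        sort_key=lambda ex: len(ex.text))  # assumes a 'text' field
    for batch in train_it:
        ...                                # batch.text, batch.label, etc.
```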
|
https://github.com/pytorch/pytorch/issues/50030
|
closed
|
[] | 2021-01-04T03:08:29Z
| 2021-01-04T07:09:40Z
| null |
yipliu
|
huggingface/transformers
| 9,387
|
Where is the impact when output_attentions=True?
|
Is there any impact regarding performance (training/fine-tuning time, GPU memory, batch size, etc.) when `output_attentions=True`?
```python
self.bert_encoder = BertModel.from_pretrained(
    hparams.architecture,  # "bert-base-uncased"
    output_attentions=True)
```
|
https://github.com/huggingface/transformers/issues/9387
|
closed
|
[
"wontfix"
] | 2021-01-02T23:16:57Z
| 2021-03-06T00:13:32Z
| null |
celsofranssa
|
pytorch/xla
| 2,707
|
How to write pure Python function which can be ran on TPUs while using PyTorch-XLA?
|
I have existing code to train EfficientNet using PyTorch, which contains custom augmentations like CutMix, MixUp, etc. in the training loop. This runs perfectly on GPU. Now I want to change my code so that it can run on TPUs.
I've made the required changes to run my code on 8 TPU cores using PyTorch XLA, but it runs very slowly when I use custom augmentations in the training loop (even slower than on GPU). When I remove them it runs significantly faster, so I think I have to change my augmentation functions as well.
Here is my training loop.
```python
def train():
    for batch in train_loader:
        X, y = batch[0].to(device), batch[1].to(device)  # device is xla
        cutmixup_prob = random.random()
        if cutmixup_prob > 0.4:
            X, y, y_shuffled, lam = cutmix(X, y, 0.4)
        # forward pass
        # calc. loss
        # backward pass
        xm.optimizer_step(optimizer)
        # calc. and return accuracy
```
And here is my complete `cutmix` function, which causes issues:
```python
# https://www.kaggle.com/c/bengaliai-cv19/discussion/126504
def rand_bbox(size, lam):
    W = size[2]
    H = size[3]
    cut_rat = np.sqrt(1. - lam)
    cut_w = np.int(W * cut_rat)
    cut_h = np.int(H * cut_rat)
    # uniform
    cx = np.random.randint(W)
    cy = np.random.randint(H)
    bbx1 = np.clip(cx - cut_w // 2, 0, W)
    bby1 = np.clip(cy - cut_h // 2, 0, H)
    bbx2 = np.clip(cx + cut_w // 2, 0, W)
    bby2 = np.clip(cy + cut_h // 2, 0, H)
    return bbx1, bby1, bbx2, bby2

def cutmix(images, targets, alpha):
    device = images.device
    indices = torch.randperm(images.size(0)).to(device)
    shuffled_targets = targets[indices].to(device)
    lam = np.random.beta(alpha, alpha)
    bbx1, bby1, bbx2, bby2 = rand_bbox(images.size(), lam)
    # Cutmix
    images[:, :, bbx1:bbx2, bby1:bby2] = images[indices, :, bbx1:bbx2, bby1:bby2]
    # adjust lambda to exactly match pixel ratio
    lam = 1 - ((bbx2 - bbx1) * (bby2 - bby1) / (images.size()[-1] * images.size()[-2]))
    return images, targets, shuffled_targets, lam
```
Whenever I create tensors, I move them to the xla device, but the training loop is still slow on TPUs.
So my question is: how can I write pure Python functions (here `cutmix` is a pure Python function that just does some processing on image tensors) that run efficiently on TPUs? What changes should I make here? Am I supposed to create all new variables on the "xla" device?
EDIT: I tried converting everything to tensors (with xla device) in `cutmix` function, but still no speed gain.
Thanks.
|
https://github.com/pytorch/xla/issues/2707
|
closed
|
[] | 2020-12-31T14:25:56Z
| 2021-01-08T17:34:16Z
| null |
Kaushal28
|
pytorch/examples
| 862
|
Why not move images onto gpu?
|
https://github.com/pytorch/examples/blob/792d336019a28a679e29cf174e10cee80ead8722/imagenet/main.py#L284
I'm trying to train VGG on ImageNet with single-node DataParallel and no multiprocessing. But I find that `images.device` before computation is `cpu`, while `target.device` is `cuda:0`. I'm not sure why these four lines of code move `images` to GPU only when I choose a single GPU (`args.gpu is not None`) but move `target` to GPU even with `args.gpu=None`.
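My current understanding, sketched as an assumption rather than a definitive answer: `nn.DataParallel` scatters its inputs to every replica's GPU inside `forward`, so `images` may stay on the CPU, while `target` must be on `cuda:0` because that is where the gathered `output` and the loss live:
```python
import torch.nn as nn

model = nn.DataParallel(model).cuda()     # replicas on all visible GPUs
criterion = nn.CrossEntropyLoss().cuda()  # runs on cuda:0

for images, target in train_loader:
    output = model(images)                # inputs scattered by DataParallel,
                                          # outputs gathered on cuda:0
    loss = criterion(output, target.cuda(0, non_blocking=True))
```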
I would appreciate it if someone could help me understand it.
|
https://github.com/pytorch/examples/issues/862
|
closed
|
[
"good first issue"
] | 2020-12-29T13:52:36Z
| 2022-04-28T14:55:08Z
| 3
|
I-Doctor
|
pytorch/pytorch
| 49,888
|
How to apply functions to nested modules?
|
## ❓ Questions and Help
Hi, all,
I understand that when we want to apply a certain function to the layers in a model, we can call self.apply(_function), for instance to apply weight norm to all convolutional layers. I checked the documentation of module.apply(), which says the function will be applied to all children.
My question is, if the model is complicated, say
```python
Block1 = nn.Sequential(nn.Linear(10, 10), nn.Linear(10, 10))
Block2 = nn.Sequential(nn.Linear(10, 10), nn.Linear(10, 10))
Model = nn.Sequential(nn.Linear(2, 10), Block1, Block2)  # Sequential takes modules directly, not a list
```
Now, if I want to apply a certain function to all linear layers (say a certain weight initialization), I cannot directly call Model.apply(_function), right? Is there an elegant way to do this when nested modules are present?
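For context, the pattern I have seen suggested elsewhere, sketched below: type-check inside the function passed to `apply`, since `apply` visits every submodule recursively, nested `Sequential`s included:
```python
import torch.nn as nn

def init_linear(m):
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

Model.apply(init_linear)  # recurses into Block1 and Block2 as well
```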
Thanks a lot!
cc @albanD @mruberry @jbschlosser
|
https://github.com/pytorch/pytorch/issues/49888
|
closed
|
[
"module: nn",
"triaged"
] | 2020-12-28T12:34:25Z
| 2020-12-28T17:34:15Z
| null |
121898
|
pytorch/pytorch
| 49,862
|
How to transform the adjacency matrix into the incidence matrix?
|
## ❓ Questions and Help
How can I transform an adjacency matrix into an incidence matrix using the PyTorch functions provided? It's easy to implement with for loops, but that's inefficient.
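For reference, a vectorized sketch (assuming an undirected, unweighted graph; edge order follows the upper triangle):
```python
import torch

def adjacency_to_incidence(A):
    # A: (n, n) 0/1 adjacency matrix of an undirected graph.
    # Returns the (n, m) incidence matrix, one column per edge.
    src, dst = torch.triu(A, diagonal=1).nonzero(as_tuple=True)
    m = src.numel()
    inc = A.new_zeros(A.size(0), m)
    edge_idx = torch.arange(m)
    inc[src, edge_idx] = 1
    inc[dst, edge_idx] = 1
    return inc
```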
|
https://github.com/pytorch/pytorch/issues/49862
|
closed
|
[] | 2020-12-26T02:34:08Z
| 2020-12-26T03:31:19Z
| null |
zlpure
|
pytorch/pytorch
| 49,855
|
NN.CTCloss may be something wrong?How to decode CTC results?
|
PyTorch 1.7.0, Windows, Python 3.7.5.
I tried to train the OCR recognition model with this code, which uses nn.CTCLoss: https://github.com/WenmuZhou/PytorchOCR/tree/master/tools/rec_train.py
Loss went down to 0.02 and accuracy reached 0.99. Then I tried to run inference with the model using https://github.com/WenmuZhou/PytorchOCR/tree/master/tools/rec_infer.py, but the results are all wrong, inconsistent with the training accuracy.
Can you write an example of text recognition decoding based on nn.CTCLoss?
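In the meantime, here is my attempt at greedy (best-path) decoding, in case the problem is the decoding step (a sketch: argmax per step, collapse repeats, drop blanks):
```python
import torch

def ctc_greedy_decode(log_probs, blank=0):
    # log_probs: (T, N, C), the same tensor fed to nn.CTCLoss.
    best = log_probs.argmax(dim=-1)   # (T, N)
    results = []
    for seq in best.t():              # one sequence per batch item
        prev, out = blank, []
        for idx in seq.tolist():
            if idx != prev and idx != blank:
                out.append(idx)
            prev = idx
        results.append(out)           # label indices; map back to characters
    return results
```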
|
https://github.com/pytorch/pytorch/issues/49855
|
closed
|
[] | 2020-12-25T15:18:50Z
| 2020-12-29T20:43:02Z
| null |
williamlzw
|
pytorch/vision
| 3,198
|
Boxes with negative scores in NMS input?
|
Hi, I found that NMS in `RegionProposalNetwork` can receive boxes with negative scores as input. I noticed this when running MaskRCNN in the v0.8 release.
https://github.com/pytorch/vision/blob/90645ccd0e774ad76200245e32222a23d09f2312/torchvision/models/detection/rpn.py#L261
In other use of NMS in `ROIHeads`, scores are thresholded to keep only boxes with positive scores:
https://github.com/pytorch/vision/blob/90645ccd0e774ad76200245e32222a23d09f2312/torchvision/models/detection/roi_heads.py#L703
I'm wondering whether the lack of score thresholding in the RPN is intentional... In TVM, we expect NMS inputs with negative scores to be invalid. Since NMS in PyTorch doesn't have a score threshold parameter, we didn't realize that there could be boxes with negative scores.
I proposed to fix TVM's NMS conversion in https://github.com/apache/tvm/pull/7137, but since it would have a big performance implication and I heard that negative boxes don't matter in the final output anyway, I'm now inclined not to fix this in TVM side.
cc @fmassa @t-vi
|
https://github.com/pytorch/vision/issues/3198
|
closed
|
[
"question",
"topic: object detection"
] | 2020-12-21T22:53:14Z
| 2021-01-06T13:57:38Z
| null |
masahi
|