| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/vision
| 4,104
|
What's the difference in this ```try....except``` code?
|
https://github.com/pytorch/vision/blob/d1ab583d0d2df73208e2fc9c4d3a84e969c69b70/torchvision/_internally_replaced_utils.py#L13

The except branch also uses torch.hub, so it looks the same as the try branch.
Why is it written like this?
|
https://github.com/pytorch/vision/issues/4104
|
closed
|
[
"question"
] | 2021-06-24T06:37:49Z
| 2021-06-24T11:46:28Z
| null |
jaffe-fly
|
pytorch/pytorch
| 60,625
|
How to checkout 1.8.1 release
|
I am trying to compile the pytorch 1.8.1 release from source but am not sure which branch to check out, as there is no 1.8.1 branch and the 1.8.0 branches seem to be rc1 or rc2.
So, for example,
```
git checkout -b remotes/origin/lts/release/1.8
git describe --tags
```
returns
v1.8.0-rc1-4570-g80f40b172f
So how do I get 1.8.1? I know the release tarballs don't work.
|
https://github.com/pytorch/pytorch/issues/60625
|
closed
|
[] | 2021-06-24T04:17:58Z
| 2021-06-24T19:06:58Z
| null |
beew
|
pytorch/serve
| 1,135
|
How to serve a model converted from sklearn by hummingbird?
|
I have an sklearn model and use hummingbird (https://github.com/microsoft/hummingbird) to convert it into a PyTorch model, so the model structure comes from hummingbird rather than my own code, e.g.:
hummingbird.ml.containers.sklearn.pytorch_containers.PyTorchSklearnContainerClassification
That means I don't have a model.py.
How can I use TorchServe to serve this model?
I tried the following:
torch-model-archiver --model-name aa --version 1.0 --handler text_classifier --serialized-file torch_hm_model_cuda.pth
torchserve --start --ncs --model-store model_store --models tt=aa.mar
It just shows:
Removing orphan pid file.
java.lang.NoSuchMethodError: java.nio.file.Files.readString(Ljava/nio/file/Path;)Ljava/lang/String;
and no other messages. Thanks in advance.
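For reference, a hedged, minimal sketch of the hummingbird conversion step described above, assuming scikit-learn and hummingbird-ml are installed; the estimator and file names are illustrative (not taken from the issue), and serving the resulting container would likely still need a custom handler rather than the built-in text_classifier one.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

# Train a small sklearn model on random data (illustrative only).
X = np.random.rand(100, 20).astype(np.float32)
y = np.random.randint(0, 2, 100)
skl_model = RandomForestClassifier(n_estimators=10).fit(X, y)

# Convert to a PyTorch-backed container; there is no user-written model.py here.
hb_model = convert(skl_model, "pytorch")
hb_model.save("torch_hm_model")  # saved container that a custom TorchServe handler could load
```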
|
https://github.com/pytorch/serve/issues/1135
|
closed
|
[] | 2021-06-23T07:25:56Z
| 2021-06-24T10:03:57Z
| null |
aohan237
|
pytorch/pytorch
| 60,433
|
Libtorch JIT: Does enabling profiling mode increase CPU memory usage? How to disable profiling mode properly?
|
Hi, I am trying to deploy an attention-based encoder-decoder (AED) model with the libtorch C++ frontend. When the model's decoder loops over the output sequence (the decoder JIT module's forward method is called repeatedly at each label time step), CPU memory usage is very high (~20 GB), which seems far too high compared to what it should be (at each decoder step the internal state tensors should occupy less than about 400 MB in total, and state tensors from previous steps are released correctly through smart pointers).
I call torch::jit::getProfilingMode() at the beginning of inference and it returns true; I tried setting it to false, but the memory usage is still high.
I would like to know:
1) whether the high CPU memory usage is related to torch JIT's profiling mode;
2) whether there is any other way to profile CPU memory usage.
The libtorch version used is 1.9.0
Thanks a lot.
- [Discussion Forum](https://discuss.pytorch.org/)
cc @gmagogsfm
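For what it's worth, a hedged sketch of the Python-side switches corresponding to the C++ flag mentioned above; these underscore-prefixed calls exist in PyTorch 1.9 but are internal rather than a stable API, and on the C++ side the analogous knobs are torch::jit::getProfilingMode() and torch::jit::getExecutorMode().
```python
import torch

# Disable the profiling graph executor and profiling mode (internal 1.9 API).
torch._C._jit_set_profiling_executor(False)
torch._C._jit_set_profiling_mode(False)
```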
|
https://github.com/pytorch/pytorch/issues/60433
|
closed
|
[
"oncall: jit"
] | 2021-06-22T04:13:05Z
| 2021-06-28T03:36:08Z
| null |
w1d2s
|
pytorch/vision
| 4,091
|
Unnecessary call .clone() in box_convert function
|
https://github.com/pytorch/vision/blob/d391a0e992a35d7fb01e11110e2ccf8e445ad8a0/torchvision/ops/boxes.py#L183-L184
We can just return boxes without .clone().
What's the purpose?
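A tiny, generic illustration (not torchvision code) of why a defensive .clone() can matter: without it, the returned tensor may alias the caller's input, so in-place edits on the output would silently mutate the original boxes.
```python
import torch

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0]])
returned = boxes        # what returning without .clone() amounts to when in_fmt == out_fmt
returned[0, 0] = 5.0
print(boxes[0, 0])      # tensor(5.) -- the caller's tensor changed too
```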
|
https://github.com/pytorch/vision/issues/4091
|
closed
|
[
"question"
] | 2021-06-22T02:03:37Z
| 2021-06-22T14:14:56Z
| null |
developer0hye
|
pytorch/android-demo-app
| 156
|
How to add Model Inference Time to yolov5 demo when using live function? Like the iOS demo?
|
Dear developer, I watched this repository's (Android) yolov5 application test video and compared it with another repository's (iOS) yolov5 application test video.
I found that the Android application does not show the "Model Inference Time" for real-time detection. Could you please add it? If not, could you please tell me how to add it? Thank you.


|
https://github.com/pytorch/android-demo-app/issues/156
|
closed
|
[] | 2021-06-20T18:43:18Z
| 2022-05-08T15:41:52Z
| null |
zxsitu
|
pytorch/android-demo-app
| 154
|
where is yolov5 model
|
Does anyone know how to download yolov5s.torchscript.ptl, I don't have this file
|
https://github.com/pytorch/android-demo-app/issues/154
|
open
|
[] | 2021-06-19T14:38:56Z
| 2021-06-19T15:18:46Z
| null |
GuoQuanhao
|
pytorch/pytorch
| 60,266
|
UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler.
|
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1.
1.
1.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
- PyTorch Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/pytorch/issues/60266
|
closed
|
[] | 2021-06-18T13:09:33Z
| 2021-06-18T15:53:56Z
| null |
wanyne-yyds
|
pytorch/pytorch
| 60,253
|
How to export SPP-NET to onnx ?
|
Here's my code:
```python
import torch
import torch.nn as nn
import math
import torch.nn.functional as F

class Spp(nn.Module):
    def __init__(self, level, pooling_type="max_pool"):
        super().__init__()
        self.num = level
        self.pooling_type = pooling_type

    def forward(self, x):
        h, w = x.shape[2:]
        kernel_size = (math.ceil(h / self.num), math.ceil(w / self.num))
        stride = kernel_size
        pooling = (math.ceil((kernel_size[0] * self.num - h) / 2), math.ceil((kernel_size[1] * self.num - w) / 2))
        if self.pooling_type == 'max_pool' or self.pooling_type == "max":
            tensor = F.max_pool2d(x, kernel_size=kernel_size, stride=stride, padding=pooling)
        else:
            tensor = F.avg_pool2d(x, kernel_size=kernel_size, stride=stride, padding=pooling)
        return tensor

class SppNet(nn.Module):
    def __init__(self, pooling_type="max_pool", level=1):
        super(SppNet, self).__init__()
        self.spps = []
        for i in range(level):
            self.spps.append(Spp(pooling_type=pooling_type, level=i + 1))

    def forward(self, x):
        n, c = input.shape[0:2]
        out = []
        for spp in self.spps:
            y = spp(x).reshape(n, c, -1)
            out.append(y)
        out = torch.cat(out, dim=2)
        return out

if __name__ == '__main__':
    input = torch.randn(3, 45, 100, 120)
    sppNet = SppNet(level=7)
    y0 = sppNet(input)
    print(y0.shape)
    sppNet.eval()
    torch.onnx.export(sppNet,               # model being run
                      input,                # model input (or a tuple for multiple inputs)
                      'spp-net.onnx',       # where to save the model (can be a file or file-like object)
                      export_params=True,   # store the trained parameter weights inside the model file
                      opset_version=11,     # the ONNX version to export the model to
                      do_constant_folding=True,  # whether to execute constant folding for optimization
                      input_names=["input"],     # the model's input names
                      output_names=["output"],   # the model's output names
                      dynamic_axes={
                          "input": {0: "batch_size", 1: "channel", 2: "height", 3: "width"},
                          "output": {0: "batch_size", 1: "channel", 2: "length"}
                      },
                      enable_onnx_checker=True)
```
In the exported model, the kernel_size and padding parameters are fixed, but I don't want them to be fixed.
That is to say, the kernel should be calculated automatically according to the input size and other parameters. How can I do that?
Any advice is appreciated, thank you!
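As a side note, a small sketch of why the exported kernel_size ends up baked in: torch.onnx.export traces the model, so Python-level arithmetic such as math.ceil(h / self.num) runs once on the example input's concrete shape and its result is recorded as a constant in the graph.
```python
import math
import torch

x = torch.randn(3, 45, 100, 120)
h = x.shape[2]               # a plain Python int (100) at trace time
kernel_h = math.ceil(h / 7)  # evaluated eagerly -> the constant 15 ends up in the ONNX graph
print(kernel_h)
```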
cc @garymm @BowenBao @neginraoof
|
https://github.com/pytorch/pytorch/issues/60253
|
closed
|
[
"module: onnx",
"triaged",
"onnx-triaged"
] | 2021-06-18T06:56:59Z
| 2022-11-01T22:16:36Z
| null |
yongxin3344520
|
pytorch/TensorRT
| 502
|
❓ [Question] failed to build docker image
|
## ❓ Question
failed to build docker image
## What you have already tried
`docker build -t trtorch -f notebooks/Dockerfile.notebook .`
## Additional context
```
Step 13/21 : WORKDIR /workspace/TRTorch
---> Running in 6043f6a80286
Removing intermediate container 6043f6a80286
---> 18eaa4134512
Step 14/21 : RUN bazel build //:libtrtorch --compilation_mode opt
---> Running in e5ae54ec3c1e
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
Loading:
Loading: 0 packages loaded
Analyzing: target //:libtrtorch (1 packages loaded, 0 targets configured)
Analyzing: target //:libtrtorch (39 packages loaded, 155 targets configured)
INFO: Analyzed target //:libtrtorch (42 packages loaded, 2697 targets configured).
INFO: Found 1 target...
[0 / 112] [Prepa] BazelWorkspaceStatusAction stable-status.txt ... (6 actions, 0 running)
ERROR: /workspace/TRTorch/cpp/api/BUILD:3:11: C++ compilation of rule '//cpp/api:trtorch' failed (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 61 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 61 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
In file included from cpp/api/src/compile_spec.cpp:6:0:
bazel-out/k8-opt/bin/cpp/api/_virtual_includes/trtorch/trtorch/trtorch.h:26:25: error: different underlying type in enum 'enum class c10::DeviceType'
enum class DeviceType : int8_t;
^~~~~~
In file included from bazel-out/k8-opt/bin/external/libtorch/_virtual_includes/c10_cuda/c10/core/Device.h:3:0,
from bazel-out/k8-opt/bin/external/libtorch/_virtual_includes/ATen/ATen/core/TensorBody.h:3,
from bazel-out/k8-opt/bin/external/libtorch/_virtual_includes/ATen/ATen/Tensor.h:3,
from external/libtorch/include/torch/csrc/autograd/function_hook.h:5,
from external/libtorch/include/torch/csrc/autograd/variable.h:7,
from external/libtorch/include/torch/csrc/jit/api/module.h:3,
from cpp/api/src/compile_spec.cpp:1:
bazel-out/k8-opt/bin/external/libtorch/_virtual_includes/c10_cuda/c10/core/DeviceType.h:15:12: note: previous definition here
enum class DeviceType : int16_t {
^~~~~~~~~~
Target //:libtrtorch failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 490.394s, Critical Path: 28.98s
INFO: 1030 processes: 1024 internal, 6 processwrapper-sandbox.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully
```
|
https://github.com/pytorch/TensorRT/issues/502
|
closed
|
[
"question",
"No Activity"
] | 2021-06-17T10:22:25Z
| 2021-09-27T00:01:13Z
| null |
chrjxj
|
pytorch/pytorch
| 60,122
|
what version of python is suggested with pytorch 1.9
|
I know PyTorch supports a variety of Python versions, but I wonder which version is recommended: Python 3.6.6? 3.7.7? etc.
Thanks
|
https://github.com/pytorch/pytorch/issues/60122
|
closed
|
[] | 2021-06-16T19:05:52Z
| 2021-06-16T20:54:15Z
| null |
seyeeet
|
pytorch/pytorch
| 60,115
|
How to install torchaudio on Mac M1 ARM?
|
`torchaudio` doesn't seem to be available for Mac M1.
If I run `conda install pytorch torchvision torchaudio -c pytorch` (as described on pytorch's main page) I get this error message:
```
PackagesNotFoundError: The following packages are not available from current channels:
- torchaudio
```
If I run the command without `torchaudio` everything installs fine.
How can I fix this and install torchaudio too?
If it isn't available (yet) – do you have any plans to release it too?
Thanks in advance for your help! And I apologize in advance if I don't see the forest for the trees and overlooked sth. obvious.
|
https://github.com/pytorch/pytorch/issues/60115
|
closed
|
[] | 2021-06-16T18:24:04Z
| 2021-06-16T20:42:03Z
| null |
suissemaxx
|
pytorch/pytorch
| 59,933
|
If I only have the model of PyTorch and don't know the dimension of the input, how to convert it to onnx?
|
## ❓ If I only have the model of PyTorch and don't know the dimension of the input, how to convert it to onnx?
### Question
I have a series of PyTorch trained models, such as "model.pth", but I don't know the input dimensions of the model.
For instance, in the following function: torch.onnx.export(model, args, f, export_params=True, verbose=False, training=False, input_names=None, output_names=None).
I don't know the "args" of the function. How do I define it by just having the model file such as "model.pth"?
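A hedged sketch of one way to guess the expected input dimensions from a bare "model.pth", assuming the file holds a state dict (possibly nested under a "state_dict" key): inspect the first weight tensor. This only recovers channel/feature counts, not spatial sizes.
```python
import torch

state = torch.load("model.pth", map_location="cpu")
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]               # some checkpoints nest the weights
name, weight = next(iter(state.items()))
print(name, tuple(weight.shape))              # e.g. conv1.weight (64, 3, 7, 7) -> 3 input channels
```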
|
https://github.com/pytorch/pytorch/issues/59933
|
closed
|
[] | 2021-06-14T08:38:29Z
| 2021-06-14T15:31:26Z
| null |
Wendy-liu17
|
pytorch/pytorch
| 59,870
|
How to export a model with nn.Module in for loop to onnx?
|
Below is a demo code:
```
import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self, hidden_size, max_span_len):
        super().__init__()
        self.max_span_len = max_span_len
        self.fc = nn.Linear(hidden_size * 2, hidden_size)

    def forward(self, seq_hiddens):
        '''
        seq_hiddens: (batch_size, seq_len, hidden_size)
        '''
        seq_len = seq_hiddens.size()[1]
        hiddens_list = []
        for ind in range(seq_len):
            hidden_each_step = seq_hiddens[:, ind, :]
            a = seq_hiddens[:, ind:ind + self.max_span_len, :]
            b = hidden_each_step[:, None, :].repeat(1, a.shape[1], 1)
            tmp = torch.cat([a, b], dim=-1)
            tmp = torch.tanh(self.fc(tmp))
            hiddens_list.append(tmp)
        output = torch.cat(hiddens_list, dim=1)
        return output
```
How do I export it to onnx? I need the fc layer inside the for loop. Scripting the model doesn't seem to work.
Thanks!!!
cc @garymm @BowenBao @neginraoof
|
https://github.com/pytorch/pytorch/issues/59870
|
closed
|
[
"module: onnx"
] | 2021-06-11T12:25:15Z
| 2021-06-15T18:34:50Z
| null |
JaheimLee
|
huggingface/transformers
| 12,105
|
What is the correct way to pass labels to DetrForSegmentation?
|
The [current documentation](https://huggingface.co/transformers/master/model_doc/detr.html#transformers.DetrForSegmentation.forward) for `DetrModelForSegmentation.forward` says the following about `labels` kwarg:
> The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,), the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4) and the **masks a torch.FloatTensor of shape (number of bounding boxes in the image, 4).**
But when I looked at the tests, it seems the shape of `masks` is `torch.rand(self.n_targets, self.min_size, self.max_size)` .
https://github.com/huggingface/transformers/blob/d2753dcbec7123500c1a84a7c2143a79e74df48f/tests/test_modeling_detr.py#L87-L103
---
I'm guessing this is a documentation mixup!
Anyways, it would be super helpful to include a snippet in the DETR docs that shows how to correctly pass masks/other labels + get the loss/loss dict. 😄
CC: @NielsRogge
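A hedged sketch of the per-image label format as I understand it from the DETR implementation and the linked test (the mask shape follows the test rather than the docstring); the key names and shapes here are illustrative assumptions, not confirmed documentation.
```python
import torch

labels = [{
    "class_labels": torch.tensor([17, 42]),   # (num_boxes,)
    "boxes": torch.rand(2, 4),                # (num_boxes, 4), normalized (cx, cy, w, h)
    "masks": torch.rand(2, 200, 300),         # (num_boxes, H, W), not (num_boxes, 4)
}]
# outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask, labels=labels)
# outputs.loss and outputs.loss_dict should then be populated.
```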
|
https://github.com/huggingface/transformers/issues/12105
|
closed
|
[] | 2021-06-10T22:15:23Z
| 2021-06-17T14:37:54Z
| null |
nateraw
|
pytorch/vision
| 4,001
|
Unable to build torchvision on Windows (installed torch from source and it is running)
|
## ❓ Questions and Help
I have installed torch successfully on my PC from source, but I am facing this issue while installing torchvision. I don't think I can install torchvision via pip, as that re-downloads torch.
Please help me to install it.
TIA
I used `python setup.py install`:
```
Building wheel torchvision-0.9.0a0+01dfa8e
PNG found: True
Running build on conda-build: False
Running build on conda: True
JPEG found: True
Building torchvision with JPEG image support
FFmpeg found: True
Traceback (most recent call last):
File "C:\Users\dhawals\repos\build_binaries\vision\setup.py", line 472, in <module>
ext_modules=get_extensions(),
File "C:\Users\dhawals\repos\build_binaries\vision\setup.py", line 352, in get_extensions
platform_tag = subprocess.run(
File "C:\Users\dhawals\miniconda3\lib\subprocess.py", line 501, in run
with Popen(*popenargs, **kwargs) as process:
File "C:\Users\dhawals\miniconda3\lib\subprocess.py", line 947, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\dhawals\miniconda3\lib\subprocess.py", line 1356, in _execute_child
args = list2cmdline(args)
File "C:\Users\dhawals\miniconda3\lib\subprocess.py", line 561, in list2cmdline
for arg in map(os.fsdecode, seq):
File "C:\Users\dhawals\miniconda3\lib\os.py", line 822, in fsdecode
filename = fspath(filename) # Does type-checking of `filename`.
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
|
https://github.com/pytorch/vision/issues/4001
|
closed
|
[
"question"
] | 2021-06-08T09:48:25Z
| 2021-06-14T11:01:21Z
| null |
dhawals1939
|
pytorch/pytorch
| 59,607
|
Where is libtorch archive???
|
Where is the libtorch archive?
I can't find libtorch 1.6.0.
|
https://github.com/pytorch/pytorch/issues/59607
|
closed
|
[] | 2021-06-08T01:03:23Z
| 2023-04-07T13:29:34Z
| null |
hi-one-gg
|
pytorch/xla
| 2,981
|
Where is torch_xla/csrc/XLANativeFunctions.h?
|
## 🐛 Bug
Trying to compile master, I found that there is no https://github.com/pytorch/xla/blob/master/torch_xla/csrc/XLANativeFunctions.h after updating to the latest master.
How is this file generated? (i.e., which step am I missing?)
```
$ time pip install -e . --verbose
...............
[23/101] clang++-8 -MMD -MF /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/tyoc213/Documents/github/pytorch/xla -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/tyoc213/Documents/github/pytorch -I/home/tyoc213/Documents/github/pytorch/torch/csrc -I/home/tyoc213/Documents/github/pytorch/torch/lib/tmp_install/include -I/home/tyoc213/Documents/github/pytorch/torch/include -I/home/tyoc213/Documents/github/pytorch/torch/include/torch/csrc/api/include -I/home/tyoc213/Documents/github/pytorch/torch/include/TH -I/home/tyoc213/Documents/github/pytorch/torch/include/THC -I/home/tyoc213/miniconda3/envs/xla/include/python3.8 -c -c /home/tyoc213/Documents/github/pytorch/xla/torch_xla/csrc/init_python_bindings.cpp -o /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o -std=c++14 -Wno-sign-compare -Wno-deprecated-declarations -Wno-return-type -Wno-macro-redefined -Wno-return-std-move -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_XLAC -D_GLIBCXX_USE_CXX11_ABI=1
FAILED: /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o
clang++-8 -MMD -MF /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/tyoc213/Documents/github/pytorch/xla -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/tyoc213/Documents/github/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/tyoc213/Documents/github/pytorch -I/home/tyoc213/Documents/github/pytorch/torch/csrc -I/home/tyoc213/Documents/github/pytorch/torch/lib/tmp_install/include -I/home/tyoc213/Documents/github/pytorch/torch/include -I/home/tyoc213/Documents/github/pytorch/torch/include/torch/csrc/api/include -I/home/tyoc213/Documents/github/pytorch/torch/include/TH -I/home/tyoc213/Documents/github/pytorch/torch/include/THC -I/home/tyoc213/miniconda3/envs/xla/include/python3.8 -c -c /home/tyoc213/Documents/github/pytorch/xla/torch_xla/csrc/init_python_bindings.cpp -o /home/tyoc213/Documents/github/pytorch/xla/build/temp.linux-x86_64-3.8/torch_xla/csrc/init_python_bindings.o -std=c++14 -Wno-sign-compare -Wno-deprecated-declarations -Wno-return-type -Wno-macro-redefined -Wno-return-std-move -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_XLAC -D_GLIBCXX_USE_CXX11_ABI=1
/home/tyoc213/Documents/github/pytorch/xla/torch_xla/csrc/init_python_bindings.cpp:36:10: fatal error: 'torch_xla/csrc/XLANativeFunctions.h' file not found
#include "torch_xla/csrc/XLANativeFunctions.h"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
## Environment
- Installing from source on Linux/CUDA:
- torch_xla version: master
|
https://github.com/pytorch/xla/issues/2981
|
closed
|
[
"stale"
] | 2021-06-08T00:42:14Z
| 2021-07-21T13:22:46Z
| null |
tyoc213
|
pytorch/TensorRT
| 495
|
❓ [Question] How does the compiler use the optimal input shape?
|
When compiling the model we have to specify an optimal input shape as well as a minimal and maximal one.
I tested various optimal sizes to evaluate the impact of this parameter but found little to no difference for the inference time.
How is this parameter used by the compiler ?
Thank you for your time and consideration,
|
https://github.com/pytorch/TensorRT/issues/495
|
closed
|
[
"question"
] | 2021-06-07T13:21:20Z
| 2021-06-09T09:52:53Z
| null |
MatthieuToulemont
|
pytorch/text
| 1,323
|
How to use pretrained embeddings (`Vectors`) in the new API?
|
From what I see in the `experimental` module, we pass a vocab object, which transforms each token into a unique integer.
https://github.com/pytorch/text/blob/e189c260e959ab966b1eaa986177549a6445858c/torchtext/experimental/datasets/text_classification.py#L50-L55
Thus something like `['hello', 'world']` might turn into `[42, 43]`; this can then be fed into an `nn.Embedding` layer to get the corresponding embedding vectors and so on.
What I don't understand is how I use
https://github.com/pytorch/text/blob/e189c260e959ab966b1eaa986177549a6445858c/torchtext/vocab.py#L475-L487
`GloVe` is a `Vectors`, but it transforms `['hello', 'world']` directly into its corresponding embedding tensor representation, which doesn't allow me to pad the sentences beforehand.
Also, it's weird that now I don't need a `Vocab` object, yet in most of the modules I see that a `Vocab` is built if it's set to `None`.
https://github.com/pytorch/text/blob/e189c260e959ab966b1eaa986177549a6445858c/torchtext/experimental/datasets/text_classification.py#L85-L89
I don't really understand how I am supposed to interpret `Vocab` and `Vectors` and where I should use them: in my `nn.Module`, i.e. my model, or in my `data.Dataset`, i.e. my dataset? What if I want to fine-tune the pretrained embeddings as well?
Should both of them be used, or just one?
I couldn't even find good examples in https://github.com/pytorch/text/tree/master/examples/text_classification
I'm coming from the traditional torchvision side, so kudos for dumping the old legacy-style torchtext, I really hated it; the new APIs seem promising, but are just a little confusing as of now.
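A hedged sketch of one common way to combine the two: use the `Vocab` for token-to-index mapping in the dataset pipeline, and copy the GloVe vectors into an `nn.Embedding` inside the model (freeze=False allows fine-tuning them). The tiny itos list below is illustrative.
```python
import torch
import torch.nn as nn
from torchtext.vocab import GloVe

vectors = GloVe(name="6B", dim=100)
itos = ["<pad>", "<unk>", "hello", "world"]                    # illustrative index -> token order
weights = torch.stack([vectors[token] for token in itos])      # (len(itos), 100); zeros for OOV tokens
embedding = nn.Embedding.from_pretrained(weights, freeze=False, padding_idx=0)
```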
|
https://github.com/pytorch/text/issues/1323
|
open
|
[] | 2021-06-05T10:58:38Z
| 2021-07-01T03:26:20Z
| null |
satyajitghana
|
huggingface/transformers
| 12,005
|
where is the code for DetrFeatureExtractor, DetrForObjectDetection
|
Hello my dear friend.
I am looking for the model at https://huggingface.co/facebook/detr-resnet-50,
but I cannot find its code in transformers==4.7.0.dev0 or 4.6.1. Please help me; appreciated.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
|
https://github.com/huggingface/transformers/issues/12005
|
closed
|
[] | 2021-06-03T09:28:27Z
| 2021-06-10T07:06:59Z
| null |
zhangbo2008
|
pytorch/pytorch
| 59,368
|
How to remap RNNs hidden tensor to other device in torch.jit.load?
|
Model: CRNN (used in OCR)
1. When I trace the model on the cpu device and then use torch.jit.load(f, map_location="cuda:0"), I get the error below:
Input and hidden tensor are not at same device, found input tensor at cuda:0 and hidden tensor at cpu.
2. When I trace the model on the cuda:0 device and then use torch.jit.load(f, map_location="cuda:1"), I get the error below:
Input and hidden tensor are not at same device, found input tensor at cuda:1 and hidden tensor at cuda:0.
Is there a way to remap the RNN's hidden tensor to another device in a module loaded by JIT?
PyTorch Version: 1.8.1
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/59368
|
closed
|
[
"oncall: jit"
] | 2021-06-03T09:24:23Z
| 2021-10-21T06:19:02Z
| null |
shihaoyin
|
pytorch/vision
| 3,949
|
Meaning of Assertion of infer_scale function in torchvision/ops/poolers.py
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
I want to know the meaning of the assert in the infer_scale function in torchvision/ops/poolers.py.
It raises an assertion error:
File "/home/ubuntu/.jupyter/engine.py", line 199, in evaluate_one_image
output = model(loader)
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/models/detection/generalized_rcnn.py", line 98, in forward
detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/models/detection/roi_heads.py", line 752, in forward
box_features = self.box_roi_pool(features, proposals, image_shapes)
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/ops/poolers.py", line 221, in forward
self.setup_scales(x_filtered, image_shapes)
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/ops/poolers.py", line 182, in setup_scales
scales = [self.infer_scale(feat, original_input_shape) for feat in features]
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/ops/poolers.py", line 182, in <listcomp>
scales = [self.infer_scale(feat, original_input_shape) for feat in features]
File "/home/ubuntu/.venv/jupyter/lib/python3.6/site-packages/torchvision/ops/poolers.py", line 166, in infer_scale
assert possible_scales[0] == possible_scales[1]
AssertionError
like this, and without the assertion it produces correct results.
What is the meaning of that assertion?
|
https://github.com/pytorch/vision/issues/3949
|
closed
|
[
"question",
"module: ops"
] | 2021-06-03T04:38:03Z
| 2021-06-09T11:59:52Z
| null |
teang1995
|
pytorch/pytorch
| 59,231
|
How to solve the AssertionError: Torch not compiled with CUDA enabled
|
For the usage of a repo based on PyTorch (Person_reID_baseline_pytorch), I followed the guidance in its readme.md. However, I got an error at the training step below (I used --gpu_ids -1 since I use the CPU-only option on my macOS):
`python train.py --gpu_ids -1 --name ft_ResNet50 --train_all --batchsize 32 --data_dir /Users/455832/Person_reID_baseline_pytorch/Market-1501-v15.09.15/pytorch`
The error I got is below:
```
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /Users/455832/.cache/torch/checkpoints/resnet50-19c8e357.pth
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 102502400/102502400 [00:14<00:00, 7210518.23it/s]
ft_net(
(model): ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): Bottleneck(
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(1): Bottleneck(
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace)
)
(2): Bottleneck
.......
.......
)
Traceback (most recent call last):
File "train.py", line 386, in <module>
model = model.cuda()
File "/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 265, in cuda
return self._apply(lambda t: t.cuda(device))
File "/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
module._apply(fn)
File "/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 193, in _apply
module._apply(fn)
File "/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 199, in _apply
param.data = fn(param.data)
File "/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 265, in <lambda>
return self._apply(lambda t: t.cuda(device))
File "/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 162, in _lazy_init
_check_driver()
File "/Users/455832/opt/anaconda3/envs/reid_conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 75, in _check_driver
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
As suggested in its readme.md, I installed the requirements pytorch==1.1.0, torchvision==0.3.0 and numpy==1.13.1 into my virtual environment (Python 3.6.12), following the instructions on the official PyTorch website (https://pytorch.org/get-started/previous-versions/#wheel-10):
`conda install pytorch==1.1.0 torchvision==0.3.0 -c pytorch`
Can you please guide me to solve this issue?
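As a generic, hedged aside (not the repo's own fix): the traceback shows train.py calling model.cuda() unconditionally, and the usual device-agnostic pattern that avoids this assertion on CPU-only installs looks like the sketch below.
```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder for the actual ft_net model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # instead of an unconditional model.cuda()
```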
|
https://github.com/pytorch/pytorch/issues/59231
|
closed
|
[] | 2021-05-31T20:45:33Z
| 2023-06-04T06:22:56Z
| null |
aktaseren
|
pytorch/TensorRT
| 493
|
❓ [Question] How to set three input tensor shape in input_shape?
|
## ❓ Question
<!-- How to set three input tensor shape in input_shape?-->
I have three input tensors: src_tokens, dummy_embeded_x, dummy_encoder_embedding.
In this case, I don't know how to set the shapes in compile_settings's "input_shapes".
Who can help me? Thank you!
`encoder_out = model.forward_encoder([src_tokens, dummy_embeded_x, dummy_encoder_embedding])`
`...`
`script_encoder = torch.jit.script(encoder)...`
`compile_settings = {
"input_shapes": [[2, 16]],
"op_precision": torch.float32
}`
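A hedged sketch of how this might look, assuming the compiler accepts one entry per input in "input_shapes" (ordered like the forward arguments); the second and third shapes below are placeholders, since the real embedding sizes are not given above.
```python
import torch

compile_settings = {
    "input_shapes": [
        [2, 16],        # src_tokens
        [2, 16, 512],   # dummy_embeded_x (placeholder dims)
        [2, 16, 512],   # dummy_encoder_embedding (placeholder dims)
    ],
    "op_precision": torch.float32,
}
```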
|
https://github.com/pytorch/TensorRT/issues/493
|
closed
|
[
"question"
] | 2021-05-31T02:48:53Z
| 2021-06-23T19:56:33Z
| null |
wxyhv
|
pytorch/vision
| 3,938
|
Batch size of the training recipes on multiple GPUs
|
## ❓ Questions and Help
In the README file that describes the recipes of training the classification models, under the references directory, it is stated that the models are trained with batch-size=32 on 8 GPUs.
Does it mean that:
- the whole batch-size is 32 and each GPU gets only 4 images to process at a time?
- OR each GPU gets 32 images to process at a time, meaning that the global batch-size is actually 256?
Thanks.
|
https://github.com/pytorch/vision/issues/3938
|
closed
|
[
"question"
] | 2021-05-30T13:16:07Z
| 2021-05-30T14:52:38Z
| null |
talcs
|
pytorch/pytorch
| 59,186
|
Document on how to use ATEN_CPU_CAPABILITY
|
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
It would be great if ATEN_CPU_CAPABILITY would be documented with an example on how to use it.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
I am currently trying to build PyTorch without AVX instructions, because I am deploying my docker image to a lot of different systems. While trying to understand how to remove AVX instructions I found ATEN_CPU_CAPABILITY. It is not clear how to use it.
My unanswered questions are: Does it work at runtime? Do I have to build PyTorch myself and set ATEN_CPU_CAPABILITY before building? Can I pass ATEN_CPU_CAPABILITY to setup.py? How do I know if I set it the right way? Are there any wheels without AVX instructions available?
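For what it's worth, a hedged sketch of how I understand the flag: ATEN_CPU_CAPABILITY appears to be an environment variable read at runtime by ATen's CPU kernel dispatcher, so it can be set without rebuilding, and "default" should steer dispatch away from the AVX/AVX2 kernels (the vectorized code stays in the wheel, just unused). Treat this as an assumption to verify, not documented behavior.
```python
import os
os.environ["ATEN_CPU_CAPABILITY"] = "default"  # set before any CPU op runs
import torch

x = torch.randn(8, 8)
print((x @ x).sum())  # should now go through the non-AVX kernel path
```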
|
https://github.com/pytorch/pytorch/issues/59186
|
closed
|
[] | 2021-05-30T11:40:07Z
| 2021-05-30T21:31:30Z
| null |
derneuere
|
pytorch/serve
| 1,103
|
Can two workflows share the same model with each other?
|
Continuing my previous post: [How i do models chain processing and batch processing for analyzing text data?](https://github.com/pytorch/serve/issues/1055)
Can I create two workflows using the same RoBERTa base model to perform two different tasks, let's say the classifier_model and summarizer_model? I would like to be able to share the base model with two workflows.
I am trying to register two workflows: wf_classifier.war and wf_summarizer.war. The first one is registered and the second one is not.
[log.log](https://github.com/pytorch/serve/files/6562664/log.log)
**wf_classifier.war**
```
models:
min-workers: 1
max-workers: 1
batch-size: 1
max-batch-delay: 1000
retry-attempts: 5
timeout-ms: 300000
roberta:
url: roberta_base.mar
classifier:
url: classifier.mar
dag:
roberta: [classifier]
```
**wf_summarizer.war**
```
models:
min-workers: 1
max-workers: 1
batch-size: 1
max-batch-delay: 1000
retry-attempts: 5
timeout-ms: 300000
roberta:
url: roberta_base.mar
summarizer:
url: summarizer.mar
dag:
roberta_base: [summarizer]
```
|
https://github.com/pytorch/serve/issues/1103
|
open
|
[
"question",
"triaged_wait",
"workflowx"
] | 2021-05-28T18:04:57Z
| 2022-09-08T12:27:30Z
| null |
yurkoff-mv
|
pytorch/TensorRT
| 490
|
❓ [Question] How could I integrate TensorRT's Group Normalization plugin into a TRTorch model ?
|
## ❓ Question
What would be the steps to be able to use TensorRT's Group Normalization plugin into a TRTorch model ?
The plugin is defined [here](https://github.com/NVIDIA/TensorRT/tree/master/plugin/groupNormalizationPlugin)
## Context
Being new to this, the Readme from core/conversion/converters didn't really clarify the steps I should follow to make the converter for a TensorRt plugin
## Environment
As an environment I use the `docker/Dockerfile.20.10 -t trtorch:pytorch1.7-cuda11.1-trt7.2.1` from the commit 6bb9fbf561c9cc3f0f1c4c7dde3d61c88e687efc
Thank you for your time and consideration
|
https://github.com/pytorch/TensorRT/issues/490
|
closed
|
[
"question"
] | 2021-05-27T12:53:09Z
| 2021-06-09T09:53:11Z
| null |
MatthieuToulemont
|
pytorch/cpuinfo
| 55
|
Compilation for freeRTOS
|
Hi all,
We are starting to look into using cpuinfo in a freeRTOS / ZedBoard setup.
Do you know of any previous attempts to port this code to freeRTOS?
If not, do you have any tips / advice on how to start this porting?
Thanks,
Pablo.
|
https://github.com/pytorch/cpuinfo/issues/55
|
open
|
[
"question"
] | 2021-05-26T09:57:18Z
| 2024-01-11T00:56:44Z
| null |
pablogh-2000
|
huggingface/notebooks
| 42
|
what is the ' token classification head'?
|
https://github.com/huggingface/notebooks/issues/42
|
closed
|
[] | 2021-05-25T09:17:49Z
| 2021-05-29T11:36:11Z
| null |
zingxy
|
|
pytorch/pytorch
| 58,894
|
ease use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose. warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/58894
|
closed
|
[] | 2021-05-25T02:41:52Z
| 2021-05-25T21:30:41Z
| null |
umie0128
|
pytorch/tutorials
| 1,539
|
(Libtorch)How to use packed_accessor64 to access tensor elements in CUDA?
|
The [tutorial](https://pytorch.org/cppdocs/notes/tensor_basics.html#cuda-accessors) gives the following example of using _packed_accessor64_ to access tensor elements efficiently. However, I still do not know how to use _packed_accessor64_. Can anyone give me a more specific example? Thanks.
```
__global__ void packed_accessor_kernel(
    PackedTensorAccessor64<float, 2> foo,
    float* trace) {
  int i = threadIdx.x;
  gpuAtomicAdd(trace, foo[i][i]);
}

torch::Tensor foo = torch::rand({12, 12});
// assert foo is 2-dimensional and holds floats.
auto foo_a = foo.packed_accessor64<float, 2>();
float trace = 0;
packed_accessor_kernel<<<1, 12>>>(foo_a, &trace);
```
cc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen
|
https://github.com/pytorch/tutorials/issues/1539
|
open
|
[
"CUDA",
"medium",
"docathon-h2-2023"
] | 2021-05-24T15:56:26Z
| 2023-11-14T06:41:03Z
| null |
tangyipeng100
|
pytorch/text
| 1,316
|
How to load AG_NEWS data from local files
|
## How to load AG_NEWS data from local files
I can't get ag news data with `train_iter, test_iter = AG_NEWS(split=('train', 'test'))` online because of my bad connection. So I download the the `train.csv` and `test.csv` manually to my local folder `AG_NEWS` from url `'train': "https://raw.githubusercontent.com/mhjabreel/CharCnn_Keras/master/data/ag_news_csv/train.csv",
'test': "https://raw.githubusercontent.com/mhjabreel/CharCnn_Keras/master/data/ag_news_csv/test.csv"`
After that I tried to load ag news data with `train_iter, test_iter = AG_NEWS(root = './AG_NEWS', split=('train', 'test'))`, throw a exception `RuntimeError: The hash of /myfolder/AG_NEWS/train.csv does not match. Delete the file manually and retry.`
My file content is
```
myfolder
│
└───AG_NEWS
│ └─── train.csv
│ └─── test.csv
```
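A hedged workaround sketch: skip the built-in downloader (and its hash check) and build an equivalent (label, text) iterator directly from the local CSV files, whose columns are label, title, description.
```python
import csv

def ag_news_iter(path):
    # Yield (label, text) pairs like the built-in AG_NEWS iterator does.
    with open(path, encoding="utf-8") as f:
        for label, title, description in csv.reader(f):
            yield int(label), f"{title} {description}"

train_iter = ag_news_iter("./AG_NEWS/train.csv")
test_iter = ag_news_iter("./AG_NEWS/test.csv")
```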
|
https://github.com/pytorch/text/issues/1316
|
open
|
[] | 2021-05-24T06:23:55Z
| 2021-05-24T14:54:04Z
| null |
robbenplus
|
pytorch/tutorials
| 1,534
|
Why does libtorch tensor value assignment take so much time?
|
I just assign 10000 values to a tensor:
```
clock_t start = clock();
torch::Tensor transform_tensor = torch::zeros({ 10000 });
for (size_t m = 0; m < 10000; m++)
transform_tensor[m] = int(m);
clock_t finish = clock();
```
And it takes 0.317 s. If I assign 10,000 values to an array or a vector, the time cost is much lower.
Why does the tensor take so much time? Can the time cost be decreased?
|
https://github.com/pytorch/tutorials/issues/1534
|
open
|
[
"question",
"Tensors"
] | 2021-05-24T01:59:42Z
| 2023-03-08T16:31:16Z
| null |
tangyipeng100
|
pytorch/pytorch
| 58,554
|
How to install pytorch1.8.1 with cuda 11.3?
|
How to install pytorch1.8.1 with cuda 11.3?
|
https://github.com/pytorch/pytorch/issues/58554
|
closed
|
[] | 2021-05-19T13:24:42Z
| 2021-05-20T03:42:17Z
| null |
Bonsen
|
pytorch/xla
| 2,957
|
How to compile xla_ltc_plugin
|
I was following https://github.com/pytorch/xla/tree/asuhan/xla_ltc_plugin to build ltc-based torch/xla. I compiled ltc successfully but encountered errors when compiling xla. I guess I must have missed something here. Help is greatly appreciated :) cc @asuhan
<details>
<summary>Error log</summary>
```
[1/14] clang++-8 -MMD -MF /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/version.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/ubuntu/pytorch/xla -I/home/ubuntu/pytorch/xla/../lazy_tensor_core -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/ubuntu/pytorch -I/home/ubuntu/pytorch/torch/csrc -I/home/ubuntu/pytorch/torch/lib/tmp_install/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/TH -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/THC -I/home/ubuntu/anaconda3/envs/torch-dev/include/python3.7m -c -c /home/ubuntu/pytorch/xla/lazy_xla/csrc/version.cpp -o /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/version.o -std=c++14 -Wno-sign-compare -Wno-unknown-pragmas -Wno-deprecated-declarations -Wno-return-type -Wno-macro-redefined -Wno-return-std-move -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_LAZYXLAC -D_GLIBCXX_USE_CXX11_ABI=1
[2/14] clang++-8 -MMD -MF /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/compiler/data_ops.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/ubuntu/pytorch/xla -I/home/ubuntu/pytorch/xla/../lazy_tensor_core -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/ubuntu/pytorch -I/home/ubuntu/pytorch/torch/csrc -I/home/ubuntu/pytorch/torch/lib/tmp_install/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/TH -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include/THC -I/home/ubuntu/anaconda3/envs/torch-dev/include/python3.7m -c -c /home/ubuntu/pytorch/xla/lazy_xla/csrc/compiler/data_ops.cpp -o /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/compiler/data_ops.o -std=c++14 -Wno-sign-compare -Wno-unknown-pragmas -Wno-deprecated-declarations -Wno-return-type -Wno-macro-redefined -Wno-return-std-move -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_LAZYXLAC -D_GLIBCXX_USE_CXX11_ABI=1
FAILED: /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/compiler/data_ops.o
clang++-8 -MMD -MF /home/ubuntu/pytorch/xla/build/temp.linux-x86_64-3.7/lazy_xla/csrc/compiler/data_ops.o.d -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/ubuntu/pytorch/xla -I/home/ubuntu/pytorch/xla/../lazy_tensor_core -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-bin -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/home/ubuntu/pytorch/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/home/ubuntu/pytorch -I/home/ubuntu/pytorch/torch/csrc -I/home/ubuntu/pytorch/torch/lib/tmp_install/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/torch-dev/lib/python3.7/site-packages/torch/incl
|
https://github.com/pytorch/xla/issues/2957
|
closed
|
[
"stale"
] | 2021-05-19T09:31:12Z
| 2021-07-08T09:11:02Z
| null |
hzfan
|
pytorch/pytorch
| 58,530
|
How to remove layer use parent name
|
Hi, I am a new user of pytorch. I load a trained model and want to remove the last layer, named 'fc':
```
model = models.alexnet()
model.fc = nn.Linear(4096, 4)
ckpt = torch.load('net_epoch_24.pth')
model.load_state_dict(ckpt)
model.classifier = nn.Sequential(nn.Linear(9216, 1024),
nn.ReLU(),
nn.Dropout(0.5),
nn.Linear(1024, 8),
nn.LogSoftmax(dim=1))
print(model)
```
print out :
```
AlexNet(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(11, 11), stride=(4, 4), padding=(2, 2))
(1): ReLU(inplace=True)
(2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(64, 192, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
(4): ReLU(inplace=True)
(5): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(7): ReLU(inplace=True)
(8): Conv2d(384, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(9): ReLU(inplace=True)
(10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(11): ReLU(inplace=True)
(12): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=(6, 6))
(classifier): Sequential(
(0): Linear(in_features=9216, out_features=1024, bias=True)
(1): ReLU()
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=1024, out_features=8, bias=True)
(4): LogSoftmax(dim=1)
)
(fc): Linear(in_features=4096, out_features=4, bias=True)
)
```
Is there any simple way to remove the last layer ('fc')?
Thanks
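Continuing the snippet above, a hedged sketch of two common ways to drop a trailing layer; note that torchvision's AlexNet.forward never calls self.fc (it only uses features, avgpool and classifier), so the attribute added above is unused anyway.
```python
import torch.nn as nn

model.fc = nn.Identity()  # keep the attribute but make it a no-op
# or remove the attribute entirely:
del model.fc
```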
|
https://github.com/pytorch/pytorch/issues/58530
|
closed
|
[] | 2021-05-19T03:54:29Z
| 2021-05-20T05:29:28Z
| null |
ramdhan1989
|
pytorch/pytorch
| 58,460
|
how to convert scriptmodel to onnx?
|
how to convert scriptmodel to onnx?
D:\Python\Python37\lib\site-packages\torch\onnx\utils.py:348: UserWarning: Model has no forward function
warnings.warn("Model has no forward function")
Exception occurred when processing textline: 1
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity
|
https://github.com/pytorch/pytorch/issues/58460
|
closed
|
[
"module: onnx",
"triaged"
] | 2021-05-18T03:15:29Z
| 2022-02-24T08:22:22Z
| null |
williamlzw
|
pytorch/TensorRT
| 473
|
❓ Is it possible to use TRTorch with batchedNMSPlugin for TensorRT?
|
## ❓ Question
<!-- Your question -->
## What you have already tried
Hi, I am trying to convert detectron2 traced keypoint-rcnn model that contains ops from torchvision like torchvision::nms. I get the following error:
>
> terminate called after throwing an instance of 'torch::jit::ErrorReport'
> what():
> Unknown builtin op: torchvision::nms.
> Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
> :
> File "/usr/local/lib/python3.7/dist-packages/torchvision/ops/boxes.py", line 36
> """
> _assert_has_ops()
> return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
> ~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
> Serialized File "code/__torch__/torchvision/ops/boxes.py", line 26
> _8 = __torch__.torchvision.extension._assert_has_ops
> _9 = _8()
> _10 = ops.torchvision.nms(boxes, scores, iou_threshold)
> ~~~~~~~~~~~~~~~~~~~ <--- HERE
> return _10
> 'nms' is being compiled since it was called from 'batched_nms'
> File "/usr/local/lib/python3.7/dist-packages/torchvision/ops/boxes.py", line 75
> offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))
> boxes_for_nms = boxes + offsets[:, None]
> keep = nms(boxes_for_nms, scores, iou_threshold)
> ~~~ <--- HERE
> return keep
> Serialized File "code/__torch__/torchvision/ops/boxes.py", line 18
> _7 = torch.slice(offsets, 0, 0, 9223372036854775807, 1)
> boxes_for_nms = torch.add(boxes, torch.unsqueeze(_7, 1), alpha=1)
> keep = __torch__.torchvision.ops.boxes.nms(boxes_for_nms, scores, iou_threshold, )
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
> _0 = keep
> return _0
> 'batched_nms' is being compiled since it was called from 'RPN.forward'
> Serialized File "code/__torch__/detectron2/modeling/proposal_generator/rpn.py", line 19
> argument_9: Tensor,
> image_size: Tensor) -> Tensor:
> _0 = __torch__.torchvision.ops.boxes.batched_nms
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
> _1 = self.rpn_head
> _2 = (self.anchor_generator).forward(argument_1, argument_2, argument_3, argument_4, argument_5, argument_6, argument_7, argument_8, )
>
<!-- A clear and concise description of what you have already done. -->
## Environment
- PyTorch Version: 1.8.0
- CPU Architecture: arm64
- OS (e.g., Linux): Ubuntu 18.04
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): .whl file for jetson
- Build command you used (if compiling from source): `bazel build //:libtrtorch`
- Are you using local sources or building from archives: Local
- Python version: 3.7
- CUDA version: 10.2
- GPU models and configuration: Nvidia Jetson Xavier nx
- Any other relevant information: torchvision C++ API compiled locally
## Additional context
I know that there is [batchedNMSPlugin](https://www.ccoderun.ca/programming/doxygen/tensorrt/md_TensorRT_plugin_batchedNMSPlugin_README.html) for TensorRT, but I have no idea how to include it for conversion. I'd appreciate any advice.
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/473
|
closed
|
[
"question"
] | 2021-05-16T11:28:14Z
| 2022-08-20T07:31:37Z
| null |
VRSEN
|
pytorch/extension-cpp
| 72
|
How does the layer of C++ extensions translate to TorchScript or onnx?
|
https://github.com/pytorch/extension-cpp/issues/72
|
open
|
[] | 2021-05-14T09:50:12Z
| 2025-08-26T03:36:50Z
| null |
yanglinxiabuaaa
|
|
pytorch/vision
| 3,832
|
Error converting to onnx: forward function contains for loop
|
Hello, there is a for loop in my forward function. When I converted the model to onnx, the following error occurred:
`[ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Split node. Name:'Split_ 1277' Status Message: Cannot split using values in 'split' attribute. Axis=0 Input shape={59} NumOutputs=17 Num entries in 'split' (must equal number of outputs) was 17 Sum of sizes in 'split' (must equal size of selected axis) was 17`
Part of my forward code:
```
y, ey, x, ex = pad(boxes, w, h)
if len(boxes) > 0:
    im_data = []
    indx_y = torch.where(ey > y-1)[0]
    for ind in indx_y:
        img_k = imgs[image_inds[ind], :, (y[ind] - 1).type(torch.int64):ey[ind].type(torch.int64), (x[ind]-1).type(torch.int64):ex[ind].type(torch.int64)].unsqueeze(0)
        im_data.append(imresample(img_k, (24, 24)))
    im_data = torch.cat(im_data, dim=0)
return im_data
```
I found that during the ONNX export the for loop was traced with 17 iterations, but when I tested the model the loop needed 59 iterations, so there was an error. In the forward function, indx_y is dynamic, so the number of loop iterations is also dynamic. Is there any way to solve this problem?
cc @neginraoof
|
https://github.com/pytorch/vision/issues/3832
|
open
|
[
"question",
"awaiting response",
"module: onnx"
] | 2021-05-14T03:52:58Z
| 2021-05-18T09:42:32Z
| null |
wytcsuch
|
pytorch/vision
| 3,825
|
Why does RandomErasing transform aspect ratio use log scale
|
As of https://github.com/pytorch/vision/commit/06a5858b3b73d62351456886f0a9f725fddbb3fe, the aspect ratio is chosen randomly on a log scale.
I didn't see this in the original paper, nor in the reference implementation:
https://github.com/zhunzhong07/Random-Erasing/blob/c699ae481219334755de93e9c870151f256013e4/transforms.py#L38
cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3825
|
closed
|
[
"question",
"module: transforms"
] | 2021-05-13T11:43:04Z
| 2021-05-13T12:05:11Z
| null |
jxu
|
pytorch/vision
| 3,822
|
torchvision C++ compiling
|
1. Question:
When I try to compile torchvision from source for C++, the terminal throws errors:
In file included from /home/pc/anaconda3/include/python3.8/pytime.h:6:0,
from /home/pc/anaconda3/include/python3.8/Python.h:85,
from /media/pc/data/software/vision-0.9.0/torchvision/csrc/vision.cpp:4:
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray ‘\343’ in program
const std::vector<IValue>& slots() const {
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray ‘\200’ in program
const std::vector<IValue>& slots() const {
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray ‘\200’ in program
const std::vector<IValue>& slots() const {
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray ‘\343’ in program
const std::vector<IValue>& slots() const {
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray ‘\200’ in program
const std::vector<IValue>& slots() const {
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:647:30: error: stray ‘\200’ in program
const std::vector<IValue>& slots() const {
^
In file included from /media/pc/data/software/libtorch/include/c10/core/DispatchKey.h:6:0,
from /media/pc/data/software/libtorch/include/torch/library.h:61,
from /media/pc/data/software/vision-0.9.0/torchvision/csrc/vision.cpp:6:
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray ‘\343’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray ‘\200’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray ‘\200’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray ‘\343’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray ‘\200’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:835:12: error: stray ‘\200’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray ‘\343’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray ‘\200’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray ‘\200’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray ‘\343’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray ‘\200’ in program
obj->slots().size() == 1,
^
/media/pc/data/software/libtorch/include/ATen/core/ivalue_inl.h:853:12: error: stray ‘\200’ in program
obj->slots().size() == 1,
^
2. Environment:
libtorch: 1.8.1
vision: 0.9.1
cmake: 3.19.6
gcc: 7.5.0
python: 3.8.5
system: Ubuntu 18.04
3. Compile command
cmake -DCMAKE_PREFIX_PATH=/media/pc/data/software/libtorch -DCMAKE_INSTALL_PREFIX=/media/pc/data/software/torchvision/install -DCMAKE_BUILD_TYPE=Release -DWITH_CUDA=ON ..
Thanks~!
|
https://github.com/pytorch/vision/issues/3822
|
closed
|
[
"question"
] | 2021-05-13T10:29:13Z
| 2021-05-13T12:02:06Z
| null |
swordnosword
|
pytorch/text
| 1,305
|
On Vocab Factory functions behavior
|
Related discussion #1016
Related PRs #1304, #1302
---------
torchtext provides several factory functions to construct [Vocab class](https://github.com/pytorch/text/blob/f7a6fbd3a910c4066b9a748545df388ae5933a6a/torchtext/vocab.py#L19) object. The primary ways to construct vocabulary are:
1. Reading raw text from file followed by tokenization to get token entries.
2. Reading token entries directly from file
3. Through iterators that yield lists of tokens
4. Through a user-supplied ordered dictionary that maps tokens to their corresponding occurrence frequencies
Typically a vocabulary not only serves the purpose of numericalizing supplied tokens, but also provides indices for special occasions, for example when the queried token is out of vocabulary (OOV) or when we need indices for special positions like padding, masking, sentence beginning and end, etc.
As NLP is fast evolving, the research and applied communities alike will find novel and creative ways to push the frontiers of the field. Hence, as a platform provider for NLP research and applications, it is best not to make assumptions about special symbols, including the unknown token. We shall provide the aforementioned factory functions with minimal API requirements. We would expect the user to set the special symbols and fallback index through the low-level APIs of the Vocab class.
Below are the examples of few scenarios and use cases:
Note that querying an OOV token through a Vocab object without setting the default index raises a RuntimeError. Hence it is necessary to set this explicitly through the API, unless the user wants to handle the runtime error as and when it happens. In the examples below we set the default index to be the same as the index of the `<unk>` token.
Example 1: Creating Vocab through text file and explicitly handling special symbols and fallback scenario
```
from torchtext.vocab import build_vocab_from_text_file
vocab = build_vocab_from_text_file("path/to/raw_text.txt", min_freq = 1)
special_symbols = {'<unk>':0,'<pad>':1,'<s>':2,'</s>':3}
default_index = special_symbols['<unk>']
for token, index in special_symbols.items():
if token in vocab:
vocab.reassign_token(token, index)
else:
vocab.insert_token(token, index)
vocab.set_default_index(default_index)
```
Example 2: Reading vocab directly from file with all the special symbols and setting fallback index to unknown token
```
from torchtext.vocab import build_vocab_from_file
unk_token = '<unk>'
vocab = build_vocab_from_text_file("path/to/tokens.txt", min_freq = 1)
assert unk_token in vocab
vocab.set_default_index(vocab[unk_token])
```
Example 3: Building Vocab using Iterators and explicitly adding special symbols and fallback index
```
from torchtext.vocab import build_vocab_from_iterator
special_symbols = {'<unk>':0,'<pad>':1,'<s>':2,'</s>':3}
vocab = build_vocab_from_iterator(iter_obj, min_freq = 1)
for token, index in special_symbols.items():
if token in vocab:
vocab.reassign_token(token, index)
else:
vocab.insert_token(token, index)
vocab.set_default_index(vocab['<unk>'])
```
Example 4: Creating vocab through user supplied ordered dictionary that also contains all the special symbols
```
from torchtext.vocab import vocab as vocab_factory
unk_token = '<unk>'
vocab = vocab_factory(ordered_dict, min_freq = 1)
assert unk_token in vocab
vocab.set_default_index(vocab[unk_token])
```
Furthermore, legacy [Vocab class constructor](https://github.com/pytorch/text/blob/f7a6fbd3a910c4066b9a748545df388ae5933a6a/torchtext/legacy/vocab.py#L28) provide additional arguments to build Vocab using [Counters](https://docs.python.org/3/library/collections.html#collections.Counter). Here it provide support to add special symbols directly through input arguments rather than calling any low-level API.
We would love to hear from our users and community whether the factory functions above are a good trade-off between flexibility and abstraction, or whether users would like to handle special symbols and the default index through API arguments instead of explicitly calling the low-level APIs of the Vocab class.
with @cpuhrsch
cc: @hudeven, @snisarg, @dongreenberg
|
https://github.com/pytorch/text/issues/1305
|
open
|
[
"enhancement",
"question",
"need discussions"
] | 2021-05-13T02:52:19Z
| 2021-05-13T04:07:13Z
| null |
parmeet
|
pytorch/functorch
| 23
|
Figure out how to transform over optimizers
|
One way to transform over training loops (e.g. to do model ensembling or the inner step of a MAML) is to use a function that represents the optimizer step instead of an actual PyTorch optimizer. Right now I think we have the following requirements
- There should be a function version of each optimizer (e.g. `F.sgd`)
- The function should have an option to not mutate (e.g. `F.sgd(..., inplace=False)`)
- The function should be differentiable
PyTorch already has some here (in Prototype stage): https://github.com/pytorch/pytorch/blob/master/torch/optim/_functional.py, so we should check if these fit the requirements, and, if not, decide if we should influence the design
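A minimal sketch of what such a functional, optionally non-mutating, differentiable step could look like (my own illustration for this issue, not the existing `torch.optim._functional` code):
```python
import torch

def functional_sgd(params, grads, lr=0.1, inplace=False):
    # With inplace=False the update is pure and stays differentiable,
    # so it can be unrolled (e.g. for MAML's inner loop).
    if inplace:
        # In normal optimizer usage this branch would run under torch.no_grad().
        for p, g in zip(params, grads):
            p.sub_(lr * g)
        return params
    return [p - lr * g for p, g in zip(params, grads)]

# Unrolled inner step that we can differentiate through:
w = torch.randn(3, requires_grad=True)
inner_loss = (w ** 2).sum()
grads = torch.autograd.grad(inner_loss, [w], create_graph=True)
new_w = functional_sgd([w], grads, lr=0.5)[0]
outer_loss = new_w.sum()
outer_loss.backward()  # gradient flows back to the original w
```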
|
https://github.com/pytorch/functorch/issues/23
|
open
|
[] | 2021-05-11T13:13:39Z
| 2021-05-11T13:13:39Z
| null |
zou3519
|
pytorch/vision
| 3,811
|
Mask-rcnn training - all AP and Recall scores in “IoU Metric: segm” remain 0
|
I am trying to train torchvision's pre-trained Mask R-CNN model on a custom dataset prepared in COCO format.
Using torch/vision/detection/engine’s `train_one_epoch` and `evaluate` methods for training and evaluation, respectively.
The loss_mask metric is reducing as can be seen here:
```
Epoch: [5] [ 0/20] eta: 0:00:54 lr: 0.005000 loss: 0.5001 (0.5001) loss_classifier: 0.2200 (0.2200) loss_box_reg: 0.2616 (0.2616) loss_mask: 0.0014 (0.0014) loss_objectness: 0.0051 (0.0051) loss_rpn_box_reg: 0.0120 (0.0120) time: 2.7308 data: 1.2866 max mem: 9887
Epoch: [5] [10/20] eta: 0:00:26 lr: 0.005000 loss: 0.4734 (0.4982) loss_classifier: 0.2055 (0.2208) loss_box_reg: 0.2515 (0.2595) loss_mask: 0.0012 (0.0013) loss_objectness: 0.0038 (0.0054) loss_rpn_box_reg: 0.0094 (0.0113) time: 2.6218 data: 1.1780 max mem: 9887
Epoch: [5] [19/20] eta: 0:00:02 lr: 0.005000 loss: 0.5162 (0.5406) loss_classifier: 0.2200 (0.2384) loss_box_reg: 0.2616 (0.2820) loss_mask: 0.0014 (0.0013) loss_objectness: 0.0051 (0.0062) loss_rpn_box_reg: 0.0120 (0.0127) time: 2.6099 data: 1.1755 max mem: 9887
```
But the `evaluate` output shows absolutely no improvement from zero for IoU segm metric:
```
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.653
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.843
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.723
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.788
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.325
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.701
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.738
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.739
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.832
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.456
IoU metric: segm
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
```
The segm metrics don’t improve even after training 500 epochs.
Also, if I visualize the masks that I get as output after training for 100 or 500 epochs, they show only a couple of dots here and there.
With the same dataset and annotations JSON, I was able to train an instance segmentation model on detectron2, and the segmentation IoU metrics clearly improved with each epoch.
Please suggest what needs to be done. Posting here as there was no response on the discuss.pytorch forum for 5 days.
cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3811
|
open
|
[
"question",
"topic: semantic segmentation"
] | 2021-05-11T12:09:41Z
| 2023-03-02T19:34:03Z
| null |
hemasunder
|
pytorch/TensorRT
| 449
|
error: ‘tryTypeMetaToScalarType’ is not a member of ‘c10’
|
## ❓ CMake building error using this [repo](https://github.com/JosephChenHub/TRTorch)
How do I build TRTorch, or use the release packages of TRTorch, on Ubuntu 18.04?
## What you have already tried
Tried building TRTorch with the CMakeLists.txt provided by [this](https://github.com/NVIDIA/TRTorch/issues/263).
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- OS (e.g., Linux):Ubuntu18.04
- Build command you used (if compiling from source):cmake.. and make
- Are you using local sources or building from archives:Yes
- CUDA version:11.1
- TensorRT version:7.2.3.4
- Make error:
A bunch of warnings and then the error:
```
/home/SENSETIME/dongchunyu/dongchunyu/depends/TensorRT-7.2.3.4/include/NvInfer.h:3250:22: note: declared here
class TRT_DEPRECATED IRNNv2Layer : public ILayer
^~~~~~~~~~~
/home/SENSETIME/dongchunyu/dongchunyu/depends/TensorRT-7.2.3.4/include/NvInfer.h:5662:85: warning: ‘IPluginLayer’ is deprecated [-Wdeprecated-declarations]
ITensor* const* inputs, int32_t nbInputs, IPluginExt& plugin) TRTNOEXCEPT = 0;
^
/home/SENSETIME/dongchunyu/dongchunyu/depends/TensorRT-7.2.3.4/include/NvInfer.h:3454:22: note: declared here
class TRT_DEPRECATED IPluginLayer : public ILayer
^~~~~~~~~~~~
/home/SENSETIME/dongchunyu/dongchunyu/codes/c++/tmp/TRTorch/core/util/trt_util.cpp: In function ‘c10::optional<nvinfer1::DataType> trtorch::core::util::toTRTDataType(caffe2::TypeMeta)’:
/home/SENSETIME/dongchunyu/dongchunyu/codes/c++/tmp/TRTorch/core/util/trt_util.cpp:270:21: error: ‘tryTypeMetaToScalarType’ is not a member of ‘c10’
if (auto t = c10::tryTypeMetaToScalarType(dtype)) {
^~~~~~~~~~~~~~~~~~~~~~~
/home/SENSETIME/dongchunyu/dongchunyu/codes/c++/tmp/TRTorch/core/util/trt_util.cpp:270:21: note: suggested alternative: ‘optTypeMetaToScalarType’
if (auto t = c10::tryTypeMetaToScalarType(dtype)) {
^~~~~~~~~~~~~~~~~~~~~~~
optTypeMetaToScalarType
CMakeFiles/util.dir/build.make:110: recipe for target 'CMakeFiles/util.dir/core/util/trt_util.cpp.o' failed
make[2]: *** [CMakeFiles/util.dir/core/util/trt_util.cpp.o] Error 1
CMakeFiles/Makefile2:219: recipe for target 'CMakeFiles/util.dir/all' failed
make[1]: *** [CMakeFiles/util.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
```
## Additional context
Wishing for official cmake tool!!!
|
https://github.com/pytorch/TensorRT/issues/449
|
closed
|
[
"question",
"No Activity"
] | 2021-05-10T06:45:33Z
| 2021-11-01T00:01:56Z
| null |
AllentDan
|
pytorch/vision
| 3,801
|
Unable to train the keypointrcnn_resnet50_fpn model
|
## ❓ The Predictions after training the model are empty.
### I'm trying to train the model for keypoint and bounding-box detection using the script in the references section; during training, loss_keypoint is always 0.0000, and using those weights for prediction gives no predictions at all.
I'm running this on a Windows 2016 server (EC2 instance on AWS), with a single GPU (Instance Type: p2.xlarge)
My Dataset is in COCO format, but I'm using only 14 keypoints per person, so I had defined the model in the train.py file as below:
```
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(
pretrained=False, progress=True, num_classes=1, num_keypoints=14,
pretrained_backbone=True, trainable_backbone_layers=None)
```
I've also made the appropriate changes in coco_utils.py for the keypoint flip.
**Training**
Command:
```
python train.py --dataset coco_kp2 --model keypointrcnn_resnet50_fpn --epochs 1 --lr-steps 36 43
--aspect-ratio-group-factor 3
```
Output:
```
Not using distributed mode
Namespace(aspect_ratio_group_factor=3, batch_size=2, data_augmentation='hflip', data_path='/datasets01/COCO/022719/', dataset='coco_kp2', device='cuda', dist_url='env://', distributed=False, epochs=1, lr=0.02, lr_gamma=0.1, lr_step_size=8, lr_steps=[36, 43], model='keypointrcnn_resnet50_fpn', momentum=0.9, output_dir='.', pretrained=False, print_freq=20, resume='', rpn_score_thresh=None, start_epoch=0, test_only=False, trainable_backbone_layers=None, weight_decay=0.0001, workers=4, world_size=1)
Loading data
loading annotations into memory...
Done (t=0.02s)
creating index...
index created!
loading annotations into memory...
Done (t=0.02s)
creating index...
index created!
Creating data loaders
Using [0, 0.5, 0.6299605249474366, 0.7937005259840997, 1.0, 1.2599210498948732, 1.5874010519681994, 2.0, inf] as bins for aspect ratio quantization
Count of instances per bin: [180]
Creating model
Start training
Epoch: [0] [ 0/90] eta: 0:11:55 lr: 0.000244 loss: 0.7178 (0.7178) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.6962 (0.6962) loss_rpn_box_reg: 0.0216 (0.0216) time: 7.9505 data: 5.2040 max mem: 2618
Epoch: [0] [20/90] eta: 0:02:04 lr: 0.004734 loss: 0.6764 (0.6253) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.6526 (0.6053) loss_rpn_box_reg: 0.0186 (0.0200) time: 1.4630 data: 0.0062 max mem: 2951
Epoch: [0] [40/90] eta: 0:01:22 lr: 0.009224 loss: 0.0664 (0.3587) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.0488 (0.3400) loss_rpn_box_reg: 0.0147 (0.0186) time: 1.5132 data: 0.0061 max mem: 2951
Epoch: [0] [60/90] eta: 0:00:47 lr: 0.013714 loss: 0.0196 (0.2480) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.0072 (0.2316) loss_rpn_box_reg: 0.0118 (0.0164) time: 1.4801 data: 0.0065 max mem: 2951
Epoch: [0] [80/90] eta: 0:00:15 lr: 0.018204 loss: 0.0192 (0.1919) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.0067 (0.1761) loss_rpn_box_reg: 0.0121 (0.0158) time: 1.4868 data: 0.0062 max mem: 2951
Epoch: [0] [89/90] eta: 0:00:01 lr: 0.020000 loss: 0.0182 (0.1745) loss_classifier: 0.0000 (0.0000) loss_box_reg: 0.0000 (0.0000) loss_keypoint: 0.0000 (0.0000) loss_objectness: 0.0067 (0.1591) loss_rpn_box_reg: 0.0107 (0.0153) time: 1.4933 data: 0.0053 max mem: 2951
Epoch: [0] Total time: 0:02:20 (1.5584 s / it)
Test: [ 0/90] eta: 0:08:32 model_time: 0.4447 (0.4447) evaluator_time: 0.0010 (0.0010) time: 5.6930 data: 5.2317 max mem: 2951
Test: [89/90] eta: 0:00:00 model_time: 0.3594 (0.3613) evaluator_time: 0.0010 (0.0011) time: 0.3689 data: 0.0033 max mem: 2951
Test: Total time: 0:00:38 (0.4315 s / it)
Averaged stats: model_time: 0.3594 (0.3613) evaluator_time: 0.0010 (0.0011)
Accumulating evaluation results...
DONE (t=0.02s).
Accumulating evaluation results...
DONE (t=0.00s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | ar
|
https://github.com/pytorch/vision/issues/3801
|
closed
|
[
"question",
"module: reference scripts",
"topic: object detection"
] | 2021-05-09T16:28:57Z
| 2021-05-10T15:54:04Z
| null |
d1nz-g33k
|
pytorch/serve
| 1,055
|
How do I do model chain processing and batch processing for analyzing text data?
|
Hello, I wanted to thank you for creating such a convenient and easily deployable model service.
I have several questions / suggestions (maybe they have already been implemented).
The first thing I would like to know / get is how to launch a chain of models... Example: I have a basic model, let's say BERT, which I would like to use to get embeddings for solving further tasks, such as classification, text summarization, question answering, etc. That is, I would like to transfer data from the base model (BERT) to other models that solve particular problems (QA_model, classifier_model, summarizer_model). I would like to be able to dynamically change the output, for example via a config like the one below (a rough handler sketch follows it):
```
[
{
"modelName": "BERT",
"modelVersion": "1.0",
}
{
"modelName": "QA_model",
"modelVersion": "1.0",
}
{
"modelName": "classifier_model",
"modelVersion": "1.0",
}
{
"modelName": "summarizer_model",
"modelVersion": "1.0",
}
]
```
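To make the chaining idea concrete, below is a rough sketch of how I imagine it inside a single custom handler. This is purely illustrative: the extra TorchScript files and the way they would ship in the .mar archive are my assumptions, not an existing TorchServe feature.
```python
import torch
from ts.torch_handler.base_handler import BaseHandler

class ChainedHandler(BaseHandler):
    def initialize(self, context):
        super().initialize(context)  # loads the base (BERT) model as self.model
        # Hypothetical downstream heads packaged as extra files in the archive.
        self.heads = {
            "qa": torch.jit.load("QA_model.pt", map_location=self.device),
            "classifier": torch.jit.load("classifier_model.pt", map_location=self.device),
            "summarizer": torch.jit.load("summarizer_model.pt", map_location=self.device),
        }

    def inference(self, data, *args, **kwargs):
        # data: already-tokenized batch produced by preprocess()
        with torch.no_grad():
            embeddings = self.model(data)  # shared forward pass through BERT
            # Run every downstream head on the shared embeddings.
            return {name: head(embeddings) for name, head in self.heads.items()}
```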
The second question is how to perform batch processing of text data so that several sentences are executed at once. And what are the restrictions on the batch size?
|
https://github.com/pytorch/serve/issues/1055
|
closed
|
[
"question"
] | 2021-05-08T10:42:46Z
| 2021-05-17T09:19:23Z
| null |
yurkoff-mv
|
pytorch/vision
| 3,784
|
Could T.Lambda be nn.Module?
|
It would allow it to be placed in nn.ModuleList for passing to RandomApply (for scriptability)
https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.RandomApply
cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3784
|
open
|
[
"question",
"module: transforms"
] | 2021-05-06T12:22:19Z
| 2021-05-07T14:13:30Z
| null |
vadimkantorov
|
pytorch/vision
| 3,783
|
[docs] Unclear if to_pil_image / to_tensor copy or zero-copy for CPU<->CPU
|
It currently uses vague language ("convert"). It's not clear whether the "conversion" incurs a copy or not.
cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3783
|
open
|
[
"question",
"module: transforms"
] | 2021-05-06T11:52:59Z
| 2021-05-12T11:53:46Z
| null |
vadimkantorov
|
pytorch/vision
| 3,782
|
ToTensor confuse me with the way it takes input
|
So the `ToTensor` class (or the `to_tensor` function) takes input with dimensions (H, W), while PIL images have dimensions (W, H).
Why this transpose?
cc @vfdev-5
|
https://github.com/pytorch/vision/issues/3782
|
closed
|
[
"question",
"module: transforms"
] | 2021-05-06T10:17:52Z
| 2021-05-07T06:49:38Z
| null |
MohamedAliRashad
|
pytorch/vision
| 3,772
|
Unable to get segmented mask output image
|
.
|
https://github.com/pytorch/vision/issues/3772
|
closed
|
[
"question",
"awaiting response",
"topic: semantic segmentation"
] | 2021-05-05T08:31:26Z
| 2021-06-01T06:16:06Z
| null |
shubhamkotal
|
pytorch/tutorials
| 1,506
|
Seq2seq Transformer Tutorial best model saving
|
In [this tutorial](https://pytorch.org/tutorials/beginner/transformer_tutorial.html#load-and-batch-data), it says "Save the model if the validation loss is the best we've seen so far.", followed by this code (also [here](https://github.com/pytorch/tutorials/blob/master/beginner_source/transformer_tutorial.py#L324-L326)):
```
if val_loss < best_val_loss:
best_val_loss = val_loss
best_model = model
```
However, my understanding is that this kind of checkpointing won't work, as `best_model` will contain a pointer to the same set of parameters as `model` (which will be updated). I tried to verify this by checking that `next(model.parameters())` and `next(best_model.parameters())` are identical, and it seemed like that was the case (although admittedly I did not check that the last model was indeed not the best one).
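For reference, a minimal sketch of the workaround I would expect to be correct (my own code, not the tutorial's; it assumes `model`, `val_loss`, and `best_val_loss` as defined in the tutorial): either deep-copy the module or snapshot its state_dict at checkpoint time.
```python
import copy

if val_loss < best_val_loss:
    best_val_loss = val_loss
    # Deep copy detaches the checkpoint from the live parameters,
    # so later optimizer updates to `model` no longer affect it.
    best_model = copy.deepcopy(model)
    # Alternatively, keep only a snapshot of the weights:
    best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
```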
cc @pytorch/team-text-core @Nayef211
|
https://github.com/pytorch/tutorials/issues/1506
|
closed
|
[
"question",
"Text",
"module: torchtext"
] | 2021-05-05T05:04:47Z
| 2023-03-08T20:55:00Z
| null |
micahcarroll
|
pytorch/xla
| 2,927
|
How to install torch_xla with python version 3.9.2
|
## ❓ Questions and Help
I have to use 3.9.2 for another dependency. Given that my Python version must be 3.9.2, how do I install torch_xla?
I tried these 2 method shown in tutorial
1)
`!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8.1-cp37-cp37m-linux_x86_64.whl`
This (and the cp38/cp39 variants of it) does not work.
<img width="682" alt="Screen Shot 2021-05-03 at 11 18 17 PM" src="https://user-images.githubusercontent.com/14815380/116957477-ec3c9600-ac65-11eb-8e2c-ba5c7050af25.png">
2)
```
VERSION = "20200516" #@param ["1.5" , "20200516", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
```
Running these gives
<img width="706" alt="Screen Shot 2021-05-03 at 11 20 02 PM" src="https://user-images.githubusercontent.com/14815380/116957546-18f0ad80-ac66-11eb-97c2-4d134f50f90e.png">
## Question
How do I use torch_xla with Python 3.9.2?
Must I have Python 3.7.x in order to use torch_xla?
|
https://github.com/pytorch/xla/issues/2927
|
closed
|
[
"stale"
] | 2021-05-04T03:21:02Z
| 2021-06-22T17:43:47Z
| null |
sirgarfieldc
|
pytorch/vision
| 3,767
|
Failing to load the pre-trained weights on multi-gpus.
|
## 🐛 Bug
Downloading the pre-trained weights for the following models (AlexNet, ResNet-152, ResNet-18, SqueezeNet, VGG11) and trying to load them on any GPU other than cuda:0 throws an error.
## To Reproduce
wget https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth
```
import torch
from torchvision.models.alexnet import AlexNet
class ImageClassifier(AlexNet):
    def __init__(self):
        super(ImageClassifier, self).__init__()
device1='cuda:0'
device2='cuda:2'
model = ImageClassifier()
state_dict = torch.load("alexnet-owt-4df8aa71.pth", map_location=device2)
model.load_state_dict(state_dict)
model = model.to(device2)
```
## Error
_File "test_device.py", line 16, in
state_dict = torch.load("alexnet-owt-4df8aa71.pth", map_location=device2)........_
_RuntimeError: Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cuda:2". This is no longer allowed; the devices must match_
## Expected behavior
Be able to load the state_dict on any cuda device using map_location.
## Environment
- PyTorch / torchvision Version (e.g., 1.0 / 0.4.0):1.7.1, 1.8.0,1.8.1
- OS (e.g., Linux): ubuntu 18.04
- How you installed PyTorch / torchvision (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.7
- CUDA/cuDNN version:10.2
- GPU models and configuration: Nvidia Tesla k80
- Any other relevant information:
## Additional context
These models are used in TorchServe examples and are failing to be loaded on a different CUDA device in a multi-GPU setting. As a workaround in TorchServe, state dicts are loaded first on cuda:0 and the model is then moved to another device/cuda id, which creates this [issue](https://github.com/pytorch/serve/issues/1037) where it results in duplicated processes on two GPUs, adding to the memory footprint.
|
https://github.com/pytorch/vision/issues/3767
|
closed
|
[
"question"
] | 2021-05-03T21:33:58Z
| 2023-08-22T16:03:22Z
| null |
HamidShojanazeri
|
pytorch/serve
| 1,045
|
How to add a new handler guide
|
The goal is to support new use cases easily.
The base handler is also quite general in its capabilities, so we want to showcase a bit more of what can be done.
|
https://github.com/pytorch/serve/issues/1045
|
closed
|
[
"documentation",
"enhancement"
] | 2021-04-28T20:43:10Z
| 2021-05-05T19:17:39Z
| null |
msaroufim
|
pytorch/pytorch
| 57,118
|
How to view VLOG information
|
How do I use VLOG? This is the same as specifying the TF_CPP_MIN_VLOG_LEVEL variable in TensorFlow.
|
https://github.com/pytorch/pytorch/issues/57118
|
open
|
[
"module: logging",
"triaged"
] | 2021-04-28T11:16:12Z
| 2024-09-04T19:25:04Z
| null |
HangJie720
|
pytorch/vision
| 3,746
|
Details on pre-training of torchvision models
|
I realize there is a closed issue on this topic here: https://github.com/pytorch/vision/issues/666
The issue has been opened in 2018. I have not found any documentation on how the models of torchvision are pre-trained, therefore I am opening another issue. Is the above answer still valid? Are the models still trained according to https://github.com/pytorch/examples/tree/master/imagenet ?
Specifically, I would like to know the details on the image size and data transformation used.
Thanks!
|
https://github.com/pytorch/vision/issues/3746
|
closed
|
[
"question",
"module: models"
] | 2021-04-28T11:13:01Z
| 2021-04-28T11:44:38Z
| null |
spurra
|
pytorch/text
| 1,295
|
How to train data with the similar number of tokens in a batch using distributed training?
|
My code needs two functions:
1. Bucket iterator;
2. In each batch, the number of tokens is similar. (This means the batch size is not the same for every batch.)
I think I could fulfill function 2 with a custom sampler that inherits from torch.utils.data.Sampler, but as seen in the tutorial, the bucket iterator inherits from torch.utils.data.Dataset, and for distributed training torch.utils.data.distributed.DistributedSampler should be used. A custom sampler and the DistributedSampler can't both be used in torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False).
So, how can I sample data (sentences) into batches with a similar number of tokens for distributed training? A rough sketch of what I have in mind is below.
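This is an untested sketch of the kind of batch sampler I mean (it assumes a precomputed list of per-example token lengths, and it shards whole batches across ranks instead of individual examples):
```python
import torch
from torch.utils.data import Sampler

class DistributedTokenBatchSampler(Sampler):
    def __init__(self, lengths, max_tokens, rank, world_size, shuffle=True, seed=0):
        self.lengths = lengths            # tokens per example
        self.max_tokens = max_tokens      # token budget per batch
        self.rank = rank
        self.world_size = world_size
        self.shuffle = shuffle
        self.seed = seed
        self.epoch = 0

    def set_epoch(self, epoch):
        self.epoch = epoch

    def _make_batches(self):
        order = list(range(len(self.lengths)))
        if self.shuffle:
            g = torch.Generator().manual_seed(self.seed + self.epoch)
            order = torch.randperm(len(order), generator=g).tolist()
        # Stable sort by length: the shuffle above only reorders examples
        # of equal length, so similar-length sentences end up together.
        order.sort(key=lambda i: self.lengths[i])
        batches, batch, n_tokens = [], [], 0
        for i in order:
            if batch and n_tokens + self.lengths[i] > self.max_tokens:
                batches.append(batch)
                batch, n_tokens = [], 0
            batch.append(i)
            n_tokens += self.lengths[i]
        if batch:
            batches.append(batch)
        return batches

    def __iter__(self):
        # Batch j goes to rank j % world_size; ranks may end up with slightly
        # different numbers of batches (padding/truncation omitted here).
        return iter(self._make_batches()[self.rank::self.world_size])

    def __len__(self):
        return len(self._make_batches()[self.rank::self.world_size])

# Usage (rank/world_size from torch.distributed):
# loader = torch.utils.data.DataLoader(dataset, batch_sampler=sampler, collate_fn=collate)
```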
Thanks a lot.
|
https://github.com/pytorch/text/issues/1295
|
open
|
[] | 2021-04-27T09:36:11Z
| 2021-07-06T16:22:55Z
| null |
sandthou
|
pytorch/vision
| 3,729
|
Evaluation Method does not work for detection
|
## 🐛 Bug
After the training process, when running the evaluation function available here (https://github.com/pytorch/vision/blob/dc42f933f3343c76727dbfba6e4242f1bcb8e1a0/references/detection/engine.py), the process gets stuck without any error. I left the evaluation method running for two days, but there is no error or any indication of what the problem could be when interrupting the process.
## Expected behaviour
The result of this function is supposed to be the IoU metrics (the image is taken from the tutorial available on PyTorch)

## Environment
- PyTorch version: 1.7.0a0+7036e91
- OS: Ubuntu 18.04
- How you installed PyTorch / torchvision: pip
- Build command you used (if compiling from source): //
- Python version: 3.6
- CUDA/cuDNN version: //
- GPU models and configuration: Tesla T4
|
https://github.com/pytorch/vision/issues/3729
|
closed
|
[
"question",
"awaiting response",
"module: reference scripts"
] | 2021-04-26T08:54:08Z
| 2025-01-02T07:26:34Z
| null |
aliceinland
|
pytorch/pytorch
| 56,898
|
How do I convert the quantized model to ONNX or ncnn?
|
## ❓ How do I convert the quantized model to ONNX or ncnn?
### How to convert an int8 model in PyTorch to ONNX
I trained a model with quantization-aware training in PyTorch; now I need to export the quantized model to ONNX. I have tried, but the normal export code does not work. Can anybody help me? Thanks a lot.
@eklitzke @dreiss @huitseeker @jfsantos bug for guidance
|
https://github.com/pytorch/pytorch/issues/56898
|
closed
|
[] | 2021-04-26T03:07:06Z
| 2021-04-27T22:51:13Z
| null |
fucker007
|
pytorch/serve
| 1,041
|
How to debug slow serve models
|
## 📚 Documentation
Many issues are essentially people confused about the overhead that torch serve introduces so a good solution would be to have the below in a guide before opening a perf issue.
1. Point to existing benchmarks so people can get a baseline estimate
2. Running model without serve
3. Commands to get serve overhead
4. Expectations around how serve will scale horizontally and vertically
|
https://github.com/pytorch/serve/issues/1041
|
closed
|
[
"documentation",
"enhancement",
"help wanted"
] | 2021-04-22T15:09:57Z
| 2021-05-13T16:21:35Z
| null |
msaroufim
|
pytorch/pytorch
| 56,634
|
[package] Module name reported in error message does not always match what is needed to extern/mock it
|
## 🐛 Bug
The module name as printed in the packaging error messages is not always the name with which it can be successfully externed or mocked.
## To Reproduce
```
import torch
import io
model = torch.hub.load('nicolalandro/ntsnet-cub200', 'ntsnet', pretrained=True, **{'topN': 6, 'device':'cpu', 'num_classes': 200})
model.eval()
with torch.package.PackageExporter(io.BytesIO()) as exp:
exp.extern([
"sys",
"io",
"PIL.**",
"_queue",
"urllib3.**",
])
exp.save_pickle("ntsnet", "model.pkl", model)
```
This code produces the following error:
```
ValueError: cannot save source for module "mklinit" because its source file "/home/meghanl/local/miniconda3/envs/tutorial/lib/python3.8/site-packages/mkl/_mklinit.cpython-38-x86_64-linux-gnu.so" could not be found. See the dependency graph for more info:
```
## Expected Outcome
`exp.extern("mklinit")` externs this module.
## Actual Outcome
`exp.extern("mklinit")` does not extern this module; the same error is produced. `exp.extern("mkl.**")` does extern this module.
|
https://github.com/pytorch/pytorch/issues/56634
|
open
|
[
"triaged"
] | 2021-04-21T21:41:20Z
| 2021-04-21T21:43:22Z
| null |
SplitInfinity
|
huggingface/pytorch-image-models
| 572
|
What is EfficientNetV2s? What is its relationship with EfficientNetV2?
|
https://github.com/huggingface/pytorch-image-models/issues/572
|
closed
|
[
"enhancement"
] | 2021-04-21T07:24:51Z
| 2021-04-21T15:51:02Z
| null |
chenyang9799
|
|
pytorch/pytorch
| 56,473
|
How to restore a model's weights from a jit.traced model file?
|
Hi guys,
I have a traced .pt model file, and now I need to use it to restore a net instance, like below:
```py
traced_model = torch.jit.load('traced.pt')
state_dict = extract_state_dict(traced_model) #need to implement
model = construct_model(args)
model.load_state_dict(state_dict)
```
extract_state_dict is the function I want to know how to implement, thanks.
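For what it's worth, my current guess, which I haven't fully verified: I believe the loaded ScriptModule is still an nn.Module and exposes state_dict() directly, though I'm not sure the key names always line up with the eager model's.
```python
import torch

traced_model = torch.jit.load('traced.pt')
state_dict = traced_model.state_dict()        # ScriptModule behaves like an nn.Module
model = construct_model(args)                 # construct_model/args as in the snippet above
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(missing, unexpected)                    # check whether the key names actually match
```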
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/56473
|
closed
|
[
"oncall: jit"
] | 2021-04-20T12:54:26Z
| 2021-04-21T03:51:31Z
| null |
fortuneko
|
pytorch/examples
| 901
|
Pytorch C++ Frontend: generating networks at runtime?
|
closing and moving to pytorch repo
|
https://github.com/pytorch/examples/issues/901
|
closed
|
[] | 2021-04-20T04:25:48Z
| 2021-04-20T04:30:13Z
| 0
|
r2dliu
|
huggingface/sentence-transformers
| 875
|
Where is the saved model after the training?
|
model.fit(train_objectives=[(train_dataloader, train_loss)], output_path=dir, epochs=1, warmup_steps=100)
I have specified the output_path for the model output, but I didn't see any files there after training.
thank you.
|
https://github.com/huggingface/sentence-transformers/issues/875
|
open
|
[] | 2021-04-17T00:45:41Z
| 2021-04-17T09:54:52Z
| null |
Bulando
|
pytorch/vision
| 3,678
|
Deformable convolution best practice?
|
## ❓ Questions and Help
Would appreciate it if anyone has some insight on how to use deformable convolution correctly.
Deformable convolution is tricky as even the official implementation is different from what's described in the paper. The paper claims to use 2N offset size instead of 2 x ks x ks.
Anyway, we're using the 2 x ks x ks offset here, but I always got poor performance. Accuracy drops in CIFAR10 and YOLACT. Anything wrong with my usage?
```
from torchvision.ops import DeformConv2d
class DConv(nn.Module):
    def __init__(self, inplanes, planes, kernel_size=3, stride=1, padding=1, bias=False):
        super(DConv, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, 2 * kernel_size * kernel_size, kernel_size=kernel_size,
                               stride=stride, padding=padding, bias=bias)
        self.conv2 = DeformConv2d(inplanes, planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=bias)

    def forward(self, x):
        out = self.conv1(x)
        out = self.conv2(x, out)
        return out
```
|
https://github.com/pytorch/vision/issues/3678
|
open
|
[
"question",
"module: ops"
] | 2021-04-16T07:20:24Z
| 2021-04-21T12:56:47Z
| null |
liyy201912
|
pytorch/pytorch
| 56,149
|
how to RegisterPass for torch.jit.trace()
|
## ❓ Questions and Help
Is there a new way to create a custom transformation pass in TorchScript with torch 1.8.0?
Just like here: pytorch_compiler_tutorial/register.cpp at master · bwasti/pytorch_compiler_tutorial · GitHub
torch.jit.trace() doesn't call the RegisterPass pass anymore.
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/56149
|
closed
|
[
"oncall: jit"
] | 2021-04-15T15:25:42Z
| 2021-06-04T16:57:02Z
| null |
Andrechang
|
pytorch/vision
| 3,673
|
The document of torchvision.ops.deform_conv2d is not clear
|
## 📚 Documentation
From the documentation, I cannot work out the exact meaning of the 18 (i.e., 2*3*3) channels of the offset in a deformable convolution.
I want to visualize the offsets of a deformable convolution with kernel size 3*3.
So it's essential for me to know the exact meaning of these channels.
I write down something possible here:
```python
upper-left: ul
upper-right: ur
bottom-left: bl
bottom-right: br
up: u
bottom: b
right: r
left: l
center: c
possible offset layout (maybe not correct):
delta_ul_x, delta_ul_y, delta_u_x, delta_u_y, delta_ur_x, delta_ur_y;
delta_l_x, delta_l_y, delta_c_x, delta_c_y, delta_r_x, delta_r_y;
delta_bl_x, delta_bl_y, delta_b_x, delta_b_y, delta_br_x, delta_br_y;
```
|
https://github.com/pytorch/vision/issues/3673
|
open
|
[
"question"
] | 2021-04-15T06:43:49Z
| 2022-05-18T04:57:34Z
| null |
Zhaoyi-Yan
|
pytorch/xla
| 2,883
|
How to dump HLO IR
|
## ❓ Questions and Help
Hi,
I want to extract the HLO PROTO/TEXT of a function/module that I wrote in PyTorch.
Something similar to what jax is doing [here](https://jax.readthedocs.io/en/latest/jax.html#jax.xla_computation):
```
def f(x):
return jax.numpy.sin(jax.numpy.cos(x))
c = jax.xla_computation(f)(3.)
hlo_proto = c. as_serialized_hlo_module_proto()
hlo_txt = c. as_hlo_text()
```
Is there something similar that I can do in torch_xla?
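For reference, the closest thing I have found so far; I am not sure whether these are public or stable APIs, so please treat the calls below as an assumption on my part:
```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm

device = xm.xla_device()
x = torch.ones(3, device=device)
y = torch.sin(torch.cos(x))

# Dump the HLO of the pending computation rooted at the given output tensors.
hlo_text = torch_xla._XLAC._get_xla_tensors_hlo([y])
print(hlo_text)

# Setting XLA_SAVE_TENSORS_FILE (and XLA_SAVE_TENSORS_FMT=hlo) before running
# is supposed to dump the graphs to a file as well, if I understand correctly.
```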
|
https://github.com/pytorch/xla/issues/2883
|
closed
|
[
"stale"
] | 2021-04-15T02:42:09Z
| 2021-06-22T17:43:37Z
| null |
KatiaSN602
|
pytorch/pytorch
| 55,914
|
how to convert libtorch trained model to torch script model
|
## ❓ Questions and Help
|
https://github.com/pytorch/pytorch/issues/55914
|
closed
|
[] | 2021-04-13T15:57:00Z
| 2021-04-13T16:28:26Z
| null |
WuLoing
|
pytorch/vision
| 3,658
|
Failed to compile torchvision for ROCm as documented in pytorch.org
|
## 🐛 Bug
Failed to compile torchvision for ROCm as documented in pytorch.org/get-started
## To Reproduce
Steps to reproduce the behavior:
as in: https://pytorch.org/get-started/locally/
1. python -m venv ptamd; source ptamd/bin/activate
1. pip install torch -f https://download.pytorch.org/whl/rocm4.0.1/torch_stable.html
1. pip install ninja && pip install 'git+https://github.com/pytorch/vision.git@v0.9.1'
same with v0.9.0
## Error
ptamd/lib/python3.8/site-packages/torch/include/c10/util/complex.h:9:10: fatal error: 'thrust/complex.h' file not found
#include <thrust/complex.h>
^~~~~~~~~~~~~~~~~~
1 error generated when compiling for gfx803.
## Environment
```
PyTorch version: 1.8.1+rocm4.0.1
Is debug build: False
ROCM used to build PyTorch: 4.0.20496-4f163c68
OS: CentOS Linux 8 (x86_64)
GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5) # same on GCC 10
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
GPU models and configuration: Vega 20
HIP runtime version: 3.21.2
MIOpen runtime version: 2.9.0
Versions of relevant libraries:
[pip3] numpy==1.20.2
[pip3] torch==1.8.1+rocm4.0.1
```
|
https://github.com/pytorch/vision/issues/3658
|
open
|
[
"question",
"topic: build",
"topic: binaries"
] | 2021-04-11T12:32:10Z
| 2021-07-03T13:15:58Z
| null |
henrique
|
huggingface/datasets
| 2,196
|
`load_dataset` caches two arrow files?
|
Hi,
I am using datasets to load a large JSON file of 587 GB.
I checked the cache folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`?
|
https://github.com/huggingface/datasets/issues/2196
|
closed
|
[
"question"
] | 2021-04-09T03:49:19Z
| 2021-04-12T05:25:29Z
| null |
hwijeen
|
huggingface/datasets
| 2,193
|
Filtering/mapping on one column is very slow
|
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API.
I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.
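For context, this is roughly the call I would like to be fast; a sketch with placeholder length bounds, assuming the `num_tokens` column added in the earlier `map` step:
```python
# Only the `num_tokens` column should need to be loaded here,
# but currently whole rows seem to get materialized anyway.
filtered = tokenized_dataset.filter(
    lambda num_tokens: 100 <= num_tokens <= 4096,
    input_columns=["num_tokens"],
)
```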
PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
|
https://github.com/huggingface/datasets/issues/2193
|
closed
|
[
"question"
] | 2021-04-08T18:16:14Z
| 2021-04-26T16:13:59Z
| null |
norabelrose
|
pytorch/TensorRT
| 429
|
❓ [Question] Does TRTorch support autograd in inference?
|
## ❓ Question
Some models can contain autograd as part of their inference pass; a simple example, which does compile to TorchScript, would be:
```python
import torch
class M(torch.nn.Module):
def forward(self, x):
x.requires_grad_(True)
y = x**2
return torch.autograd.grad([y.sum()], [x])[0]
m = M()
m(3*torch.ones(3)) # => tensor([6., 6., 6.])
ms = torch.jit.script(m)
ms(3*torch.ones(3)) # => tensor([6., 6., 6.])
```
I know `autograd.grad` isn't in the list of supported operations, but I'm curious whether something like this would be possible in TRTorch, or if it is fundamentally incompatible with the TensorRT design.
Thanks!
|
https://github.com/pytorch/TensorRT/issues/429
|
closed
|
[
"question"
] | 2021-04-08T05:12:50Z
| 2021-04-12T14:03:51Z
| null |
Linux-cpp-lisp
|
huggingface/datasets
| 2,187
|
Question (potential issue?) related to datasets caching
|
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
    # disable caching in datasets
    set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```
Can you please let me know what this "Reusing dataset csv" means? I wouldn't expect any reuse with datasets caching disabled. Thank you!
|
https://github.com/huggingface/datasets/issues/2187
|
open
|
[
"question"
] | 2021-04-08T00:16:28Z
| 2023-01-03T18:30:38Z
| null |
ioana-blue
|
pytorch/pytorch
| 55,452
|
How to access model embedded functions?
|
## ❓ Questions and Help
I am working on C# .NET with Visual Studio 2019, over Windows Server 2019 Standard.
I aim to export a Python model to run inference on C# with onnxruntime.
I am using [Resemble-ai voice encoder](https://github.com/resemble-ai/Resemblyzer/blob/master/resemblyzer/voice_encoder.py) as ONNX, using:
```
import torch
import torch.onnx

x = torch.randn(1, 3, 40, requires_grad=True)
torch_out = encoder(x)
torch.onnx.export(encoder,
                  x,
                  "resemblyzer.onnx",
                  opset_version=13,
                  input_names=['input'],
                  output_names=['output'])
```
The export takes place without warnings or errors. The graph and input/outputs of the onnx model seem all right.
But I can't figure out how to use the model's embedded functions "embed_utterance" and "embed_speaker". Is that even possible? I mean, does the ONNX model include those functions, or just the parameters of the trained model?
If the functions are inside the ONNX model, a snippet on how to access them would be great.
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity
|
https://github.com/pytorch/pytorch/issues/55452
|
closed
|
[
"module: onnx",
"triaged"
] | 2021-04-07T10:22:44Z
| 2021-04-21T08:56:02Z
| null |
ADD-eNavarro
|
huggingface/transformers
| 11,057
|
Difference in tokenizer output depending on where `add_prefix_space` is set.
|
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
I am using the `roberta-base` tokenizer. The tokenization output changes depending on whether `add_prefix_space` is passed into the `from_pretrained` factory as a keyword argument or set as a property after constructing the tokenizer.
## To reproduce
Steps to reproduce the behavior:
``` python
from transformers import RobertaTokenizerFast
tokenizer_1 = RobertaTokenizerFast.from_pretrained('roberta-base', add_prefix_space=True)
tokenizer_2 = RobertaTokenizerFast.from_pretrained('roberta-base')
tokenizer_2.add_prefix_space = True
pre_tokenized_inputs = ["Is", "this", "tokenization", "correct"]
tokenizer_1(pre_tokenized_inputs, is_split_into_words=True)
# {'input_ids': [0, 1534, 42, 19233, 1938, 4577, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
tokenizer_2(pre_tokenized_inputs, is_split_into_words=True)
# {'input_ids': [0, 6209, 9226, 46657, 1938, 36064, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
```
## Expected behavior
The addition of the prefix space is not working for `tokenizer_2`. Either setting the property should add a prefix space to each token before splitting into sub-words, or we shouldn't allow it to be set to `True` after object creation (raise an exception).
|
https://github.com/huggingface/transformers/issues/11057
|
closed
|
[] | 2021-04-05T10:30:25Z
| 2021-06-07T15:18:36Z
| null |
sai-prasanna
|
pytorch/pytorch
| 55,223
|
How to use PyTorch with ROCm (radeon gpu)? How to transfer data to gpu?
|
Hey,
So far I didn't see any documentation or similar that gives a hint on how to use PyTorch with GPUs other than NVIDIA (when the new ROCm package is installed). How can I choose my Radeon GPU as the device and use it for training? Very glad for any advice.
Best
cc @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport
|
https://github.com/pytorch/pytorch/issues/55223
|
closed
|
[
"module: rocm",
"triaged"
] | 2021-04-02T08:07:42Z
| 2023-08-22T22:02:51Z
| null |
oconnor127
|
pytorch/TensorRT
| 420
|
❓ [Question] How can I pull TRTorch docker image?
|
## ❓ Question
I use this command to pull the TRTorch docker image:
```
sudo docker pull docker.pkg.github.com/nvidia/trtorch/docgen:0.3.0
```
I get the response "unauthorized: Your request could not be authenticated by the GitHub Packages service. Please ensure your access token is valid and has the appropriate scopes configured."
I can't find any way to access the token.
|
https://github.com/pytorch/TensorRT/issues/420
|
closed
|
[
"question"
] | 2021-04-01T12:18:50Z
| 2021-04-02T15:57:07Z
| null |
Tshzzz
|
pytorch/text
| 1,265
|
How to split `_RawTextIterableDataset`
|
## ❓ Questions and Help
I am trying to move from using `legacy` and use new provided features, i was doing this:
```
from torchtext import legacy
TEXT = legacy.data.Field(lower=True, batch_first=True)
LABEL = legacy.data.LabelField(dtype=torch.float)
train_data, test_data = legacy.datasets.IMDB.splits(TEXT, LABEL, root='/tmp/imdb/')
train_data, valid_data = train_data.split(split_ratio=0.8, random_state=random.seed(SEED))
```
But now I want to split train_data; how can I do that?
```
from torchtext.datasets import IMDB
train_iter, test_iter = IMDB(split=('train', 'test'))
# I need to split train_iter into train_iter and valid_iter
```
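The kind of thing I have in mind, as a sketch (materializing the iterable dataset into a list, which assumes it fits in memory; I'm not sure this is the recommended way, and SEED is the same constant as in my legacy snippet above):
```python
import torch
from torchtext.datasets import IMDB

train_iter, test_iter = IMDB(split=('train', 'test'))

train_list = list(train_iter)                 # materialize so it can be split
n_train = int(0.8 * len(train_list))
train_data, valid_data = torch.utils.data.random_split(
    train_list, [n_train, len(train_list) - n_train],
    generator=torch.Generator().manual_seed(SEED))
```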
And I think providing more examples than just this [one](https://github.com/pytorch/text/blob/master/examples/legacy_tutorial/migration_tutorial.ipynb) would help a lot, thanks!
|
https://github.com/pytorch/text/issues/1265
|
open
|
[
"feature request"
] | 2021-03-30T15:34:40Z
| 2023-07-30T03:13:25Z
| null |
KickItLikeShika
|
huggingface/transformers
| 10,960
|
What is the score of trainer.predict()?
|
I want to know the meaning of the output of trainer.predict().
example:
`PredictionOutput(predictions=array([[-2.2704859, 2.442343 ]], dtype=float32), label_ids=array([1]), metrics={'eval_loss': 0.008939245715737343, 'eval_runtime': 0.0215, 'eval_samples_per_second': 46.56})`
What is this score? -> predictions=array([[-2.2704859, 2.442343 ]]
I use it for Sequence Classification.
|
https://github.com/huggingface/transformers/issues/10960
|
closed
|
[] | 2021-03-30T07:53:13Z
| 2021-03-30T23:41:38Z
| null |
Yuukp
|
pytorch/text
| 1,264
|
How to use fasttext emebddings in the torchtext Nightly Vocab
|
I have a custom-trained Facebook fastText embedding which I want to use in my RNN.
I use the nightly version of torchtext, so the Vocab is kinda new.
How do I use the fastText embedding there? A simple, clear example would be great.
|
https://github.com/pytorch/text/issues/1264
|
open
|
[] | 2021-03-27T12:48:11Z
| 2021-03-29T01:44:16Z
| null |
StephennFernandes
|
pytorch/pytorch
| 54,790
|
tools/git-clang-format: The downloaded binary is not what was expected!
|
`tools/git-clang-format` seems to test the hash of the clang-format binary, but if it mismatches it just says "The downloaded binary is not what was expected!" with no instructions on how to remediate. I rm -rf'ed .clang-format-bin; that might help.
|
https://github.com/pytorch/pytorch/issues/54790
|
closed
|
[
"module: lint",
"triaged"
] | 2021-03-26T18:58:06Z
| 2021-04-07T00:19:01Z
| null |
ezyang
|
pytorch/pytorch
| 54,758
|
How to release unnecessary tensor which occupys memory when executing inference at test phrase?
|
## ❓ Questions and Help
I have a memory-cost operation, I put this operation into a function like this:
```
class xxx(nn.Module):
def forward(xxx):
xxx = self.cost_memory_function(xxx)
... # OOM error occurs here rather than at the above function.
return xxx
def cost_memory_function(xxx):
...
```
But if the tensors generated by "cost_memory_function" were released, the next part should run successfully. So I guess the tensors from "cost_memory_function" still occupy memory even though the function has exited.
So I want to know how to release tensors which are unnecessary. I have set torch.set_grad_enabled to False. A rough sketch of what I have tried is below.
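This is roughly my current workaround attempt; I'm not sure it is the right approach, since empty_cache() only returns cached blocks and cannot free tensors that are still referenced:
```python
import torch

def cost_memory_function(x):
    big = x.repeat(1, 1000)      # stand-in for the memory-hungry intermediate
    return big.sum(dim=-1)       # only the small result is kept alive

@torch.no_grad()
def forward(x):
    small = cost_memory_function(x)   # `big` should be freed once the call returns
    torch.cuda.empty_cache()          # return cached blocks to the GPU (no-op on CPU)
    return small * 2                  # the next stage only sees the small tensor
```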
|
https://github.com/pytorch/pytorch/issues/54758
|
closed
|
[] | 2021-03-26T06:14:24Z
| 2021-03-26T16:08:40Z
| null |
shoutOutYangJie
|
pytorch/TensorRT
| 411
|
How to compile on Windows?
|
https://github.com/pytorch/TensorRT/issues/411
|
closed
|
[
"help wanted",
"No Activity"
] | 2021-03-25T22:51:05Z
| 2021-07-28T00:01:06Z
| null |
statham123
|
|
huggingface/datasets
| 2,108
|
Is there a way to use a GPU only when training an index in the process of add_faiss_index?
|
Motivation - Some FAISS indexes like IVF include a training step that clusters the dataset into a given number of cells. It would be nice if we could use a GPU to do the training step and then convert the index back to CPU, as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6). A rough sketch of that flow is below.
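For reference, this is the kind of flow I mean when done directly with faiss, outside of datasets (a sketch based on the linked example; the dimensions, vectors, and file name are placeholders):
```python
import faiss
import numpy as np

embeddings = np.random.rand(100_000, 768).astype("float32")  # placeholder vectors

index = faiss.index_factory(768, "IVF4096,Flat")
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, index)   # train the coarse quantizer on GPU
gpu_index.train(embeddings)
gpu_index.add(embeddings)
cpu_index = faiss.index_gpu_to_cpu(gpu_index)        # move back to CPU for saving/serving
faiss.write_index(cpu_index, "ivf.index")
```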
|
https://github.com/huggingface/datasets/issues/2108
|
open
|
[
"question"
] | 2021-03-24T21:32:16Z
| 2021-03-25T06:31:43Z
| null |
shamanez
|
pytorch/vision
| 3,602
|
Imagenet dataloader error: RuntimeError: The archive ILSVRC2012_devkit_t12.tar.gz is not present in the root directory or is corrupted.
|
## 🐛 Bug
I am using pytorch 1.8.0 and torchvision 0.9.
I am trying to use the pretrained models from pytorch and evaluate them on imagenet val data. That should be fairly straightforward, but I am getting stuck on the dataloader.
I downloaded the imagenet and the folder structure that I have is like this:
```
/media/SSD2/ILSVRC/
|----Annotation
|----ImageSets
|----Data
|----CLS-LOC
|----test
|----train
|----val
|----ILSVRC2012_val_00000009.JPEG
|----ILSVRC2012_val_00000010.JPEG
|----...
```
I tried `datasets.ImageNet`, based on [pytorch](https://pytorch.org/vision/stable/datasets.html#imagenet) where it says to use the following
```
imagenet_data = torchvision.datasets.ImageNet('path/to/imagenet_root/')
data_loader = torch.utils.data.DataLoader(imagenet_data,
batch_size=4,
shuffle=True,
num_workers=args.nThreads)
```
I changed the path to the ImageNet root to `/media/SSD2/ILSVRC/`, like this:
`torchvision.datasets.ImageNet('/media/SSD2/ILSVRC/',split='val',download=False)`
but I get this error:
```
RuntimeError: The archive ILSVRC2012_devkit_t12.tar.gz is not present in the root directory or is corrupted. You need to download it externally and place it in /media/SSD2/ILSVRC/.
```
Is it a bug, or am I doing something wrong?
cc @pmeier
|
https://github.com/pytorch/vision/issues/3602
|
closed
|
[
"question",
"module: datasets"
] | 2021-03-24T19:15:28Z
| 2021-03-25T17:01:31Z
| null |
seyeeet
|
pytorch/pytorch
| 54,583
|
How to specify a per-op qconfig in the "prepare_jit" qconfig_dict
|
## ❓ Questions and Help
pytorch1.7/torchvision0.8.0
I want to use "prepare_jit" and "convert_jit" to quantize Resnet18. But I can't specific 'layer1.0.conv1' to different qconfig.
my code:
model = models.__dict__['resnet18'](pretrained=True)
model = torch.jit.script(model.eval())
qconfig1 = torch.quantization.QConfig(
activation=torch.quantization.HistogramObserver.with_args(
reduce_range=False),
weight=torch.quantization.default_per_channel_weight_observer)
torch.quantization.prepare_jit(model, {'layer1.0.conv1':qconfig1}, True)
model(torch.randn(1, 3, 224, 224))
torch.quantization.convert_jit(model, True, False)
But it will fail as below message:
File "/home/xxx/python3.7/site-packages/torch/quantization/quantize_jit.py", line 58, in _prepare_jit
quant_type)
RuntimeError: __torch__.torch.nn.modules.conv.___torch_mangle_67.Conv2d (of Python compilation unit at: 0x56088f811c00) is not compatible with the type __torch__.torch.nn.modules.conv.___torch_mangle_66.Conv2d (of Python compilation unit at: 0x56088f811c00) for the field 'conv1'
It seems the key 'layer1.0.conv1' is not correct.
What should I do?
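For comparison, a hedged eager-mode sketch; this is not the graph-mode `prepare_jit`/`convert_jit` path asked about above, but it shows how a per-module qconfig override is normally expressed with `torch.quantization.prepare`/`convert` and torchvision's quantizable ResNet. The choice of fbgemm as the default backend is an assumption.

```python
import torch
from torchvision.models.quantization import resnet18

# Eager-mode sketch: per-module overrides are plain `qconfig` attributes on the child.
model = resnet18(pretrained=True, quantize=False).eval()
model.fuse_model()

model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
model.layer1[0].conv1.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.HistogramObserver.with_args(reduce_range=False),
    weight=torch.quantization.default_per_channel_weight_observer,
)

torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 224, 224))      # calibration pass
torch.quantization.convert(model, inplace=True)
```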
cc @gmagogsfm
|
https://github.com/pytorch/pytorch/issues/54583
|
closed
|
[
"oncall: jit"
] | 2021-03-24T09:57:19Z
| 2021-03-25T19:22:22Z
| null |
PenghuiCheng
|
pytorch/tutorials
| 1,439
|
Question about pytorch mobile
|
Hello, I'm using PyTorch Mobile to deploy a model to a phone via Android Studio.
I followed the official directions to convert the model to '.pt' and loaded it in Android Studio, but it doesn't seem to give the right predictions after the conversion to '.pt': it always predicts the same label, no matter which image I feed in.
The second question is: how can I avoid the normalization in the function TensorImageUtils.bitmapToFloat32Tensor and just convert the bitmap into a Tensor?
|
https://github.com/pytorch/tutorials/issues/1439
|
closed
|
[
"question",
"Mobile"
] | 2021-03-24T07:14:40Z
| 2023-03-10T17:22:49Z
| null |
stillbetter
|
pytorch/tutorials
| 1,432
|
Reinforcement Tutorial (DQN)
|
Hey,
I am trying to reproduce the [PyTorch Reinforcement Learning Tutorial (DQN)](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html#training).
In each time step, the state of the environment needs to be evaluated in the function ```def get_screen()```.
The line
```
screen = env.render(mode='rgb_array').transpose((2, 0, 1))
```
throws an error both in Google Colab and on my local machine. The error is related to the gym environment
```
env = gym.make('CartPole-v0').unwrapped
```
Does anyone have an idea how to solve this problem and make this tutorial reproducible again?
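One common workaround, assuming the failure is the usual headless-rendering error on Colab or a machine without a display, is to start a virtual display before calling `env.render`. This is only a sketch; the package installation step and display size are assumptions.

```python
# Assumed Colab setup: !apt-get install -y xvfb && pip install pyvirtualdisplay gym
from pyvirtualdisplay import Display

display = Display(visible=0, size=(1400, 900))
display.start()

import gym

env = gym.make("CartPole-v0").unwrapped
env.reset()
# With the virtual display running, rgb_array rendering returns an image as expected.
screen = env.render(mode="rgb_array").transpose((2, 0, 1))
```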
|
https://github.com/pytorch/tutorials/issues/1432
|
closed
|
[
"Reinforcement Learning"
] | 2021-03-21T23:36:44Z
| 2022-09-06T17:44:22Z
| 2
|
sambaPython24
|
pytorch/pytorch
| 54,390
|
UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose. warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
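The warning in the title comes from passing an explicit epoch to `scheduler.step(epoch)`. A minimal sketch of the recommended call pattern follows; the model, optimizer, and scheduler choices here are only illustrations.

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):
    # ... run the training loop for this epoch, calling optimizer.step() per batch ...
    optimizer.step()   # placeholder for the per-batch optimizer steps
    scheduler.step()   # no epoch argument, so the deprecation warning is not triggered
```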
|
https://github.com/pytorch/pytorch/issues/54390
|
closed
|
[] | 2021-03-21T14:17:10Z
| 2021-03-22T15:26:58Z
| null |
ZengcanXUE
|
pytorch/TensorRT
| 408
|
🐛 [Bug] Tests are not being linked properly, fail with 'symbol lookup error'
|
## Bug Description
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. bazel test //tests --compilation_mode=dbg --test_output=errors --jobs=4 --runs_per_test=5
You will see all the tests fail. I am using stock 1.7.1 PyTorch.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
```
boris@snikolaev-DGXStation:~/git/TRTorch$ /home/boris/.cache/bazel/_bazel_boris/c6ee020343103959b26b654eb14e89ac/execroot/TRTorch/bazel-out/k8-dbg/bin/tests/core/conversion/converters/test_linear.runfiles/TRTorch/tests/core/conversion/converters/test_linear
/home/boris/.cache/bazel/_bazel_boris/c6ee020343103959b26b654eb14e89ac/execroot/TRTorch/bazel-out/k8-dbg/bin/tests/core/conversion/converters/test_linear: symbol lookup error: /home/boris/.cache/bazel/_bazel_boris/c6ee020343103959b26b654eb14e89ac/execroot/TRTorch/bazel-out/k8-dbg/bin/tests/core/conversion/converters/../../../../_solib_k8/libcore_Sutil_Slibtrt_Uutil.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
boris@snikolaev-DGXStation:~/git/TRTorch$ nm /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so | grep _ZN3c105ErrorC1ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
boris@snikolaev-DGXStation:~/git/TRTorch$ nm /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so | grep SourceLocation
000000000004f130 T _ZN3c1014WarningHandler7processERKNS_14SourceLocationERKSsb
0000000000051870 T _ZN3c105ErrorC1ENS_14SourceLocationESs
0000000000051870 T _ZN3c105ErrorC2ENS_14SourceLocationESs
000000000004f210 T _ZN3c107Warning4warnENS_14SourceLocationERKSsb
00000000000527c0 t _ZN3c10lsERSoRKNS_14SourceLocationE
```
## Expected behavior
Tests run (or at least start up) successfully.
<!-- A clear and concise description of what you expected to happen. -->
## Environment
> Build information about the TRTorch compiler can be found by turning on debug messages
- PyTorch Version (e.g., 1.0): 1.7.1
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source): pip
- Build command you used (if compiling from source): bazel test //tests --compilation_mode=dbg --test_output=errors --jobs=4 --runs_per_test=5
- Are you using local sources or building from archives: local
- Python version: 3.6
- CUDA version: 11
- GPU models and configuration:
- Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
|
https://github.com/pytorch/TensorRT/issues/408
|
closed
|
[
"question"
] | 2021-03-20T04:06:57Z
| 2021-04-07T01:42:26Z
| null |
borisfom
|
pytorch/examples
| 895
|
Video classification example
|
Hi,
As we all know, video representation learning is a hot topic in the computer vision community (thanks to recent advances in self-supervised learning), so is it time to add a toy example for video classification? The code could be as simple as the image classification examples. For instance, we could add an example of video classification using I3D on UCF-101/HMDB-51.
|
https://github.com/pytorch/examples/issues/895
|
open
|
[
"good first issue"
] | 2021-03-18T19:08:06Z
| 2022-03-09T20:44:51Z
| 1
|
avijit9
|
pytorch/xla
| 2,831
|
RuntimeError: Cannot access data pointer of Tensor that doesn't have storage, how to resolve it?
|
## Issue description
Currently I am trying to solve an object detection problem using a FastRCNN model with the help of the PyTorch XLA module.
But while training I am getting a **RuntimeError: Cannot access data pointer of Tensor that doesn't have storage**.
It was working fine when I trained the model in a GPU kernel, but it started giving this error when I switched to TPU.
## Code example
Here's the link to my notebook --> [Object Detection Kernel](https://www.kaggle.com/mesparky/vunbigdata-chest-xray-object-detection?scriptVersionId=57113528)

## System Info
I am using Kaggle TPU kernel for training my model.
**PLEASE HELP ME RESOLVE THIS ISSUE**
|
https://github.com/pytorch/xla/issues/2831
|
closed
|
[
"stale"
] | 2021-03-18T17:03:32Z
| 2021-06-26T02:22:49Z
| null |
IamSparky
|
pytorch/tutorials
| 1,421
|
Chatbot tutorial - RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
|
https://github.com/pytorch/tutorials/blob/master/beginner_source/chatbot_tutorial.py
Tried running this chatbot tutorial. Training goes well, but when actually using the model by uncommenting the final line of code (as specified in the comments) it returns the following error:
```
Iteration: 4000; Percent complete: 100.0%; Average loss: 2.4559
> hello?
Traceback (most recent call last):
  File "C:/Users/user/PycharmProjects/pytorch-tests/main.py", line 1377, in <module>
    evaluateInput(encoder, decoder, searcher, voc)
  File "C:/Users/user/PycharmProjects/pytorch-tests/main.py", line 1242, in evaluateInput
    output_words = evaluate(encoder, decoder, searcher, voc, input_sentence)
  File "C:/Users/user/PycharmProjects/pytorch-tests/main.py", line 1225, in evaluate
    tokens, scores = searcher(input_batch, lengths, max_length)
  File "C:\Users\user\.conda\envs\pytorch-tests\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:/Users/user/PycharmProjects/pytorch-tests/main.py", line 1160, in forward
    encoder_outputs, encoder_hidden = self.encoder(input_seq, input_length)
  File "C:\Users\user\.conda\envs\pytorch-tests\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:/Users/user/PycharmProjects/pytorch-tests/main.py", line 693, in forward
    packed = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths)
  File "C:\Users\user\.conda\envs\pytorch-tests\lib\site-packages\torch\nn\utils\rnn.py", line 245, in pack_padded_sequence
    _VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor

Process finished with exit code 1
```
I'm unfamiliar with PyTorch, so I have no idea what the cause is or how to solve it, but it looks like something to do with tensor types.
Packages in environment:
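For reference, a minimal runnable sketch of what the error is about: in recent PyTorch releases `pack_padded_sequence` requires its `lengths` argument to be a 1D CPU int64 tensor, even when the padded batch itself lives on the GPU. The dummy tensors below are assumptions standing in for the tutorial's variables; in the tutorial, the usual fix is to stop moving the lengths tensor to the GPU (or to call `.cpu()` on it before packing).

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Dummy padded batch: (max_seq_len, batch_size, hidden_size), sorted by length.
embedded = torch.randn(10, 2, 8, device=device)
lengths = torch.tensor([10, 7])  # keep this on the CPU

# .cpu() is a no-op if the tensor is already on the CPU, so this is always safe.
packed = nn.utils.rnn.pack_padded_sequence(embedded, lengths.cpu())
print(packed.data.shape)
```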
|
https://github.com/pytorch/tutorials/issues/1421
|
closed
|
[
"Text"
] | 2021-03-18T11:59:07Z
| 2023-03-09T19:06:58Z
| 1
|
0xVavaldi
|
pytorch/serve
| 1,013
|
How to debug handlers?
|
Since the handler's logic is copied into every `.mar` file, breakpoints in the original handler `.py` file have no effect. Can you please suggest how we can debug our handler modules?
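One hedged sketch of a workaround is to import the handler module directly and drive it from a plain Python script or test, so ordinary breakpoints and `pdb` work before the code is packaged into a `.mar`. The handler module/class name below is hypothetical, and the stand-in context is an assumption rather than the real TorchServe context object, so its attributes should be checked against the installed TorchServe version.

```python
# debug_handler.py -- run with `python -m pdb debug_handler.py` (sketch only)
from types import SimpleNamespace

from my_handler import MyHandler  # hypothetical handler module and class name

# Minimal stand-in for the TorchServe context; the attribute names mirror what a
# BaseHandler-style initialize() typically reads, but may differ in your version.
context = SimpleNamespace(
    system_properties={"model_dir": "model_store/unpacked", "gpu_id": None},
    manifest={"model": {"serializedFile": "model.pt"}},
)

handler = MyHandler()
handler.initialize(context)

sample = [{"data": b"hello world"}]  # request shape depends on your input format
print(handler.handle(sample, context))
```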
|
https://github.com/pytorch/serve/issues/1013
|
closed
|
[
"triaged_wait"
] | 2021-03-17T21:52:46Z
| 2021-04-09T18:47:43Z
| null |
duklin
|