| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch
| 33,343
|
How to convert the model to onnx in libtorch?
|
```cpp
#include <torch/torch.h>

struct Net : torch::nn::Module {
  Net()
      : conv1(torch::nn::Conv2dOptions(1, 20, /*kernel_size=*/5).stride(1)),
        conv2(torch::nn::Conv2dOptions(20, 40, /*kernel_size=*/5)),
        fc1(640, 120),
        fc2(120, 10) {
    register_module("conv1", conv1);
    register_module("conv2", conv2);
    register_module("conv2_drop", conv2_drop);
    register_module("fc1", fc1);
    register_module("fc2", fc2);
  }
  torch::Tensor forward(torch::Tensor x) {
    x = torch::relu(torch::max_pool2d(conv1->forward(x), 2));  // (28-5)+1=24 -> pooled 12 x 12 x 20
    x = torch::relu(torch::max_pool2d(conv2_drop->forward(conv2->forward(x)), 2));  // (12-5)+1=8 -> pooled 4 x 4 x 40
    // x = torch::relu(torch::avg_pool2d(conv2_drop->forward(conv2->forward(x)), 2));
    x = x.view({ -1, 640 });
    x = torch::relu(fc1->forward(x));
    x = torch::dropout(x, /*p=*/0.5, /*training=*/is_training());
    x = fc2->forward(x);
    return torch::log_softmax(x, /*dim=*/1);
  }
  torch::nn::Conv2d conv1;
  torch::nn::Conv2d conv2;
  torch::nn::Dropout2d conv2_drop;
  torch::nn::Linear fc1;
  torch::nn::Linear fc2;
};
```
cc @yf225 @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof
|
https://github.com/pytorch/pytorch/issues/33343
|
closed
|
[
"module: onnx",
"module: cpp",
"triaged"
] | 2020-02-14T13:14:23Z
| 2021-11-08T22:01:30Z
| null |
bjliuzp
|
pytorch/pytorch
| 33,341
|
how-to-adjust-learning-rate using libtorch
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/33341
|
open
|
[
"triaged"
] | 2020-02-14T11:25:57Z
| 2020-02-14T17:57:33Z
| null |
w1005444804
|
pytorch/examples
| 715
|
C++ tutorial on sentence classification
|
@soumith
Currently, all the examples in C++ are related to image classification/ GAN. There are not many examples on text/nlp. I would like to include a starter example on sentence classification in c++. Can I go ahead and work on this??
|
https://github.com/pytorch/examples/issues/715
|
open
|
[
"c++"
] | 2020-02-13T17:05:24Z
| 2024-03-16T23:09:13Z
| 4
|
avinashsai
|
pytorch/vision
| 1,883
|
Torchvision NMS description
|
I think this should be `boxes with IoU >= iou_threshold`. Is this only a documentation typo, and is the CUDA function called here actually implemented correctly?
https://github.com/pytorch/vision/blob/bf8595798eaccbaffb6c04db11406426eb1b3800/torchvision/ops/boxes.py#L22
|
https://github.com/pytorch/vision/issues/1883
|
closed
|
[
"question",
"module: documentation"
] | 2020-02-13T14:53:30Z
| 2020-02-13T18:03:20Z
| null |
sharifza
|
pytorch/vision
| 1,882
|
How to modify the loss function of models in torchvision?
|
Excuse me if this question is a bit basic; I only recently got into this field and could not find the answer after some research.
I used the pretrained Mask R-CNN model in torchvision, but its output was not ideal. So I wonder whether I can modify the loss function to improve its performance without rewriting the whole framework?
Thanks a lot for any advice.
|
https://github.com/pytorch/vision/issues/1882
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2020-02-13T13:23:31Z
| 2023-06-28T15:01:18Z
| null |
Michael-J98
|
pytorch/tutorials
| 850
|
Why is the pytorch sphinx theme included as a submodule?
|
I'm not an expert in sphinx, but after a lot of testing and headache while trying to improve a tutorial I really wonder why the sphinx theme under `./src` is included at all (as a submodule on github).
If you clone the repo with `git clone ...` it doesn't get downloaded.
The theme gets downloaded with `pip install -e git+git://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme` as defined in `requirements.txt`. If the dir `src/pytorch-sphinx-theme` already exists, you get asked whether you want to wipe it, no matter whether it is empty or not.
And if you cloned the repo with `--recurse-submodules` you'd download an old version of the theme.
So why not drop the submodule and just include an empty `src` dir where the theme will be installed w/o error messages during installation from `requirements.txt`?
|
https://github.com/pytorch/tutorials/issues/850
|
closed
|
[
"build issue"
] | 2020-02-13T13:02:16Z
| 2024-09-06T21:25:48Z
| 1
|
wAuner
|
pytorch/vision
| 1,878
|
So, what is the meaning of DeepLabHead in deeplabv3.py?
|
Hi guys,
I am implementing DeepLabV3+, imitating the pattern of deeplabv3.py,
but I don't quite understand the purpose of DeepLabHead,
so do I need to put the upsampling operations inside DeepLabHead?
Any answer and idea will be appreciated!
|
https://github.com/pytorch/vision/issues/1878
|
closed
|
[
"question",
"module: models",
"topic: semantic segmentation"
] | 2020-02-12T09:45:51Z
| 2020-02-14T05:57:44Z
| null |
songyuc
|
pytorch/vision
| 1,875
|
[Bug?] roialign operation returning incorrect numerics
|
torchvision.ops.roi_align is returning incorrect results for a simple test case:
```
# x: tensor of size (1,1,3,3)
x= torch.tensor([[[[1,2,3],[4,5,6],[7,8,9]]]], dtype=torch.float)
boxes = torch.tensor(([[0, 0, 2, 2, 0]]), dtype=torch.float)
z = torchvision.ops.roi_align(x, boxes, (2,2),sampling_ratio=1)
```
returns z as -
```
tensor([[[[7.5000, 8.5000],
[7.5000, 8.5000]]]])
```
shouldn't this be
```
tensor([[[[3.0000, 4.0000],
[6.0000, 7.0000]]]])
```
|
https://github.com/pytorch/vision/issues/1875
|
closed
|
[
"question",
"module: ops"
] | 2020-02-11T21:06:04Z
| 2020-02-14T13:24:48Z
| null |
coderAddy
|
pytorch/vision
| 1,872
|
Shouldn't there be a `+1` in the NMS implementation for the box width/height computation?
|
The standard is to have a bounding box defined as quoted [here](https://github.com/facebookresearch/Detectron/blob/master/detectron/utils/boxes.py#L23).
But in the NMS [source code](https://github.com/pytorch/vision/blob/e2a8b4185e2b668b50039c91cdcf81eb4175d765/torchvision/csrc/cpu/nms_cpu.cpp), there is no `+1` when computing the areas and intersection values. This also leaves a bug in the case of getting `union = 0`, raising a `NaN` error when computing the `iou`.
If the code is correct, what am I missing? Shouldn't the [documentation](https://pytorch.org/docs/stable/torchvision/ops.html#torchvision.ops.nms) explain this better?
Thanks.
|
https://github.com/pytorch/vision/issues/1872
|
closed
|
[
"question",
"module: ops"
] | 2020-02-11T15:11:17Z
| 2020-02-14T13:59:38Z
| null |
viniciusarruda
|
pytorch/vision
| 1,870
|
Unexpected behavior of torchvision.ops.nms
|
Following the example below and looking at the nms [source code](https://github.com/pytorch/vision/blob/e2a8b4185e2b668b50039c91cdcf81eb4175d765/torchvision/csrc/cpu/nms_cpu.cpp), I expected a `NaN` error, as the intersection and union will be zero.
import torchvision # torchvision==0.5.0+cpu
import torch # torch==1.4.0+cpu
boxes = [[0.0, 0.0, 1.0, 1.0],
[2.0, 1.0, 1.0, 2.0]]
boxes = torch.tensor(boxes)
scores = torch.tensor([1., 0.5])
keep = torchvision.ops.nms(boxes, scores, 0.7)
If this same example is used with [this](https://github.com/rbgirshick/fast-rcnn/blob/master/lib/utils/nms.py) nms implementation (removing the +1 from the source code to be equivalent to the torchvision implementation), it raises a `NaN` error as expected.
Am I missing something ?
Thanks.
|
https://github.com/pytorch/vision/issues/1870
|
closed
|
[
"question",
"module: ops"
] | 2020-02-11T12:09:02Z
| 2020-02-27T19:57:35Z
| null |
viniciusarruda
|
pytorch/vision
| 1,869
|
It seems there are no upsampling operations in the implementation of DeepLabV3?
|
Hi, guys,
I am studying the implementation of DeepLabV3 today,
and it seems there are no upsampling operations in deeplabv3.py,
so where are the upsampling operations of the DeepLabV3 model?
Any answer or idea will be appreciated!
|
https://github.com/pytorch/vision/issues/1869
|
closed
|
[
"question",
"module: models",
"topic: semantic segmentation"
] | 2020-02-11T11:12:13Z
| 2020-02-13T18:23:26Z
| null |
songyuc
|
pytorch/vision
| 1,860
|
Is there a backbone implementation of Xception?
|
Hi, guys,
I want to know if there is a backbone implementation of Xception?
Any answer or idea will be appreciated!
|
https://github.com/pytorch/vision/issues/1860
|
closed
|
[
"question",
"module: models",
"topic: classification"
] | 2020-02-10T10:06:27Z
| 2020-02-10T13:46:21Z
| null |
songyuc
|
pytorch/vision
| 1,859
|
Is there an implementation of Deeplabv3+?
|
Hi, guys,
I want to know if there is an implementation of Deeplabv3+?
Any answer will be appreciated!
|
https://github.com/pytorch/vision/issues/1859
|
closed
|
[
"question",
"module: models",
"topic: semantic segmentation"
] | 2020-02-10T07:24:51Z
| 2020-02-10T14:10:28Z
| null |
songyuc
|
pytorch/vision
| 1,856
|
FasterRCNN ground truth boxes reference system
|
Hi,
I'm trying to train a FasterRCNN on a custom dataset.
I have the ground truth bounding boxes in the [x1, y1, x2, y2] format, where:
- 0 <= x1 <= x2 <= H
- 0 <= y1 <= y2 <= W
- `H, W = img.shape` with img being loaded with cv2
With numpy, if I extract `img[x1:x2, y1:y2]`, it's the correct portion of the image.
Now, this seems to me the right way of formatting the boxes, since the documentation says:
> boxes (``FloatTensor[N, 4]``): the ground-truth boxes in ``[x1, y1, x2, y2]`` format, with values
between ``0`` and ``H`` and ``0`` and ``W``
However, the network doesn't seem to be learning anything during training.
Instead, if I switch x1 with y1, x2 with y2, the network starts working properly.
It seems to be a reference system problem.
What am I missing? It feels like there is an easy explanation to this problem.
Thanks in advance!
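A quick check of the numpy indexing convention (not from the original post) that may explain the discrepancy: with cv2/numpy the array shape is (H, W, C) and the row index is y, so a box given as [x1, y1, x2, y2] is cropped with `img[y1:y2, x1:x2]`, not `img[x1:x2, y1:y2]`.
```python
import numpy as np

# img.shape is (H, W, C): rows are indexed by y first, columns by x.
img = np.zeros((480, 640, 3), dtype=np.uint8)  # H=480, W=640

# A box in [x1, y1, x2, y2] format, with x measured along the width axis.
x1, y1, x2, y2 = 100, 50, 300, 200
crop = img[y1:y2, x1:x2]
print(crop.shape)  # (150, 200, 3) -> (y2 - y1, x2 - x1, channels)
```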
|
https://github.com/pytorch/vision/issues/1856
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2020-02-07T12:55:44Z
| 2020-02-11T07:54:48Z
| null |
Robylyon93
|
pytorch/vision
| 1,854
|
Clarify the quantization bits in the pretrained models?
|
Thanks for the great work; quantized pretrained models have been added in torchvision 0.5.
https://github.com/pytorch/vision/releases
>Quantized models
torchvision now provides quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2, as well as reference scripts for quantizing your own model in references/classification/train_quantization.py (https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py).
However, I am confused about which bit width these quantized models use.
Is it FP16 or INT8? I think this should be clarified to avoid confusion.
|
https://github.com/pytorch/vision/issues/1854
|
closed
|
[
"question",
"module: documentation",
"module: models.quantization"
] | 2020-02-07T04:50:29Z
| 2020-03-10T10:39:08Z
| null |
kentaroy47
|
pytorch/pytorch
| 33,022
|
How do you convert Torch output iOS NSNumber to UIImage
|
I recently trained a model in PyTorch and created the .pt model file. I was able to use the model file in iOS with https://pytorch.org/mobile/ios/ to get an output.
But the output is an array of NSNumber.
How can I convert that to UIImage?
Here's how I'm loading the model:
```
private lazy var module: TorchModule = {
if let filePath = Bundle.main.path(forResource: "face", ofType: "pt"),
let module = TorchModule(fileAtPath: filePath) {
print("Loaded Model")
return module
} else {
print(Bundle.main.path(forResource: "face", ofType: "pt"))
fatalError("Can't find the model file!")
}
}()
```
Here's how I'm passing an image and getting the NSNumber output:
```
let image = imageView.image!
let resizedImage = image.resized(to: CGSize(width: 256, height: 256))
guard var pixelBuffer = resizedImage.normalized() else {
return
}
guard let outputs = module.predict(image: UnsafeMutableRawPointer(&pixelBuffer)) else {
return
}
```
And here are the numbers I'm getting back:
```
1000 elements
- 0 : 0.9556794
- 1 : 0.959437
- 2 : 0.9545235
- 3 : 0.9602792
- 4 : 0.9626616
- 5 : 0.9451413
- 6 : 0.9630886
- 7 : 0.9649493
- 8 : 0.96794
- 9 : 0.9451433
- 10 : 0.9606364
- 11 : 0.9666034
- 12 : 0.9719177
- 13 : 0.9503573
- 14 : 0.9689084
- 15 : 0.9644295
- 16 : 0.9715278
- 17 : 0.9545213
- 18 : 0.9695826
- 19 : 0.9616866
- 20 : 0.9709251
- 21 : 0.9504414
- 22 : 0.9684582
- 23 : 0.9636042
- 24 : 0.9707479
- 25 : 0.9474098
- 26 : 0.9687761
- 27 : 0.962492
- 28 : 0.9722843
- 29 : 0.9512891
- 30 : 0.9713559
- 31 : 0.9646252
- 32 : 0.9709271
- 33 : 0.9450958
- 34 : 0.9687521
- 35 : 0.9592332
- 36 : 0.9614322
- 37 : 0.9501442
- 38 : 0.9671555
- 39 : 0.9576904
- 40 : 0.966316
- 41 : 0.9518282
- 42 : 0.9691417
- 43 : 0.9573505
- 44 : 0.9599486
- 45 : 0.9461015
- 46 : 0.9679283
- 47 : 0.9560247
- 48 : 0.9592899
- 49 : 0.9511722
- 50 : 0.9696479
- 51 : 0.9560531
- 52 : 0.9652212
- 53 : 0.9524947
- 54 : 0.9737433
- 55 : 0.960919
- 56 : 0.968053
- 57 : 0.9475061
- 58 : 0.9700636
- 59 : 0.9567729
- 60 : 0.9692516
- 61 : 0.9438604
- 62 : 0.9666854
- 63 : 0.9534383
- 64 : 0.9692665
- 65 : 0.940613
- 66 : 0.9655256
- 67 : 0.9560776
- 68 : 0.9666242
- 69 : 0.9394323
- 70 : 0.968111
- 71 : 0.95995
- 72 : 0.965363
- 73 : 0.9503852
- 74 : 0.9690766
- 75 : 0.9677175
- 76 : 0.9689373
- 77 : 0.958289
- 78 : 0.9717255
- 79 : 0.9717532
- 80 : 0.9726413
- 81 : 0.9699872
- 82 : 0.9718522
- 83 : 0.970526
- 84 : 0.9766954
- 85 : 0.969599
- 86 : 0.9727935
- 87 : 0.9729283
- 88 : 0.976265
- 89 : 0.9681603
- 90 : 0.9752769
- 91 : 0.9746329
- 92 : 0.9779454
- 93 : 0.9716548
- 94 : 0.9771305
- 95 : 0.9763421
- 96 : 0.9785836
- 97 : 0.972732
- 98 : 0.9775047
- 99 : 0.972182
- 100 : 0.9754875
- 101 : 0.9716605
- 102 : 0.9703948
- 103 : 0.9705175
- 104 : 0.9728737
- 105 : 0.9674641
- 106 : 0.9717978
- 107 : 0.9679852
- 108 : 0.9708558
- 109 : 0.9624084
- 110 : 0.971324
- 111 : 0.9681918
- 112 : 0.9727319
- 113 : 0.9670874
- 114 : 0.974831
- 115 : 0.9708152
- 116 : 0.9764423
- 117 : 0.9653759
- 118 : 0.9755697
- 119 : 0.9701872
- 120 : 0.9722598
- 121 : 0.9629219
- 122 : 0.9759187
- 123 : 0.9682656
- 124 : 0.9722873
- 125 : 0.9610798
- 126 : 0.9722118
- 127 : 0.9668668
- 128 : 0.9654322
- 129 : 0.9550279
- 130 : 0.9650962
- 131 : 0.9669107
- 132 : 0.9664246
- 133 : 0.9492099
- 134 : 0.968359
- 135 : 0.961526
- 136 : 0.9675772
- 137 : 0.9473796
- 138 : 0.9685749
- 139 : 0.9654633
- 140 : 0.9687688
- 141 : 0.9504932
- 142 : 0.9691511
- 143 : 0.9665062
- 144 : 0.9718524
- 145 : 0.9436379
- 146 : 0.9687477
- 147 : 0.9655094
- 148 : 0.9710371
- 149 : 0.9442329
- 150 : 0.9679898
- 151 : 0.9687661
- 152 : 0.9667206
- 153 : 0.9499748
- 154 : 0.9711047
- 155 : 0.9650826
- 156 : 0.9675245
- 157 : 0.9424814
- 158 : 0.9717015
- 159 : 0.961861
- 160 : 0.9632423
- 161 : 0.95027
- 162 : 0.9681548
- 163 : 0.95991
- 164 : 0.9622825
- 165 : 0.9419831
- 166 : 0.9676843
- 167 : 0.9502627
- 168 : 0.9604739
- 169 : 0.9390262
- 170 : 0.9632315
- 171 : 0.9489474
- 172 : 0.9538567
- 173 : 0.9387113
- 174 : 0.9685857
- 175 : 0.9537058
- 176 : 0.9516653
- 177 : 0.9406225
- 178 : 0.9654861
- 179 : 0.9563531
- 180 : 0.9503596
- 181 : 0.9421797
- 182 : 0.9610486
- 183 : 0.9516525
- 184 : 0.9575865
- 185 : 0.9422593
- 186 : 0.9571754
- 187
```
|
https://github.com/pytorch/pytorch/issues/33022
|
closed
|
[
"oncall: mobile",
"module: ios"
] | 2020-02-05T21:41:29Z
| 2020-02-07T19:12:04Z
| null |
rooseveltrp
|
pytorch/vision
| 1,848
|
training FCN and DeepLab for segmentation
|
Does PyTorch provide steps on how to use DeepLab or FCN for training a segmentation task?
If such steps already exist, where can I find them?
|
https://github.com/pytorch/vision/issues/1848
|
closed
|
[
"question",
"module: reference scripts",
"topic: semantic segmentation"
] | 2020-02-04T19:34:28Z
| 2020-02-13T17:50:09Z
| null |
isalirezag
|
huggingface/sentence-transformers
| 120
|
What is the expected number of epochs for training sentenceBERT
|
Hi,
Given a model in {BERT, XLM, XLNet, ...}, do you have a dictionary of the estimated best number of epochs for training your Siamese network on the NLI dataset?
Otherwise, what would be your suggestion on this? (Other than just trying different epoch settings, since that takes a lot of computational time 😞)
That would be very useful for other users as well I think.
Cheers and great job! :D
|
https://github.com/huggingface/sentence-transformers/issues/120
|
open
|
[] | 2020-02-04T14:17:22Z
| 2020-06-08T19:48:20Z
| null |
MastafaF
|
pytorch/vision
| 1,847
|
Required range is confusing in torchvision.utils.save_image
|
https://discuss.pytorch.org/t/float-vs-int-in-torchvision-utils-save-image/68596
|
https://github.com/pytorch/vision/issues/1847
|
closed
|
[
"question",
"module: transforms"
] | 2020-02-04T07:47:28Z
| 2025-01-23T10:55:55Z
| null |
chinglamchoi
|
huggingface/transformers
| 2,705
|
What is the input for TFBertForSequenceClassification?
|
# ❓ Questions & Help
What is the input for TFBertForSequenceClassification?
## Details
I have a simple multiclass text data on which I want to train the BERT model.
From docs I have found the input format of data:
```a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])```
In my understanding:
`input_ids`- tokenized sentences, generated from BERT tokenizer.
`attention_mask`- As name suggests it is attention mask. I should use it to mask out padding tokens. Please correct me if I am wrong.
Now, what is `token_type_ids`? Is it necessary?
When I tried to print the output_shape of the model, I got:
`AttributeError: The layer has never been called and thus has no defined output shape.`
So, let's say my dataset has 5 classes. Does this model expect one-hot encoded vector of shape [BATCH_SIZE, CLASSES] for .fit() method?
Also if I don't use .from_pretrained() method, will it load an untrained model?
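A minimal sketch of one way to feed the model (not from the original question; it assumes a recent transformers release with TensorFlow 2). Labels are plain integer class ids rather than one-hot vectors when paired with a sparse categorical loss:
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)

texts = ["first example", "second example"]
labels = tf.constant([0, 3])  # integer class ids in [0, num_labels)

# input_ids, attention_mask and token_type_ids are produced by the tokenizer.
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dict(enc), labels, epochs=1, batch_size=2)
```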
|
https://github.com/huggingface/transformers/issues/2705
|
closed
|
[] | 2020-02-01T10:20:29Z
| 2020-03-12T08:41:25Z
| null |
sainimohit23
|
pytorch/pytorch
| 32,690
|
How to customize build torchscript model to be used in end devices codebase
|
## 🚀 Feature
I want to compile my model to be executed in a Python/C script running on our customers' computers/end devices, without the need to load the entire torch/libtorch package, but only what is needed based on the model's operations.
## Motivation
Currently, the size of my ResNet model (for example) is ~100MB but it needs torch/libtorch, which requires ~1.5GB of space.
End devices (smart cameras, robots, etc.) are low on resources. R&D efforts for deployment on end devices include a large effort to optimize the model and reduce its size to a minimum. Having my model accompanied by torch/libtorch is a difficult restriction. I am aware that the mobile community is leading the push for similar features. However, considering modern smartphones' resources, there is an even greater need for such a solution on other end devices.
## Current status
Currently I am running this series of commands:
```python
model = torchvision.models.resnet50(pretrained=True)
model.eval()
example = torch.ones(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example)
ops = torch.jit.export_opnames(model)
traced_model.save('traced_model.pt')
with open('model_ops.yaml', 'w') as output:
    yaml.dump(ops, output)
```
The request is to enable building a model I can use in another Python/C script without the need to load the entire torch or libtorch packages, but only what is needed based on the model's operations.
## Alternatives
I am not aware of such alternatives. Will be happy to hear about them, if there are any.
cc @suo
|
https://github.com/pytorch/pytorch/issues/32690
|
open
|
[
"oncall: jit",
"triaged",
"oncall: mobile"
] | 2020-01-28T10:11:07Z
| 2020-02-28T18:54:55Z
| null |
danmalowany-allegro
|
pytorch/tutorials
| 833
|
Using encoder output in attention model
|
I am studying this [NLP from scratch](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html) tutorial. The encoder's output shape is `(seq_len, batch, hidden_size)`.
Why does the author save only the `[0, 0]` part (which is later needed for the attention weights) and not `[0]`?
https://github.com/pytorch/tutorials/blob/8244bffa52641fab0c37d35c6843faa1beaba06b/intermediate_source/seq2seq_translation_tutorial.py#L563
Is there a mistake?
|
https://github.com/pytorch/tutorials/issues/833
|
closed
|
[] | 2020-01-25T18:06:14Z
| 2020-01-29T19:03:51Z
| 0
|
kenenbek
|
pytorch/pytorch
| 32,485
|
How to specify pytorch as a package requirement on Windows?
|
## ❓ Questions and Help
I have a python package which depends on pytorch and which I’d like windows users to be able to install via pip (the specific package is: https://github.com/mindsdb/lightwood, but I don’t think this is very relevant to my question).
What are the best practices for going about this ?
Are there some project I could use as examples ?
It seems like the pypi hosted version of torch & torchvision aren’t windows compatible and the “getting started” section suggests installing from the custom pytorch repository, but beyond that I’m not sure what the ideal solution would be to incorporate this as part of a setup script.
|
https://github.com/pytorch/pytorch/issues/32485
|
closed
|
[] | 2020-01-22T09:31:44Z
| 2020-01-22T10:27:03Z
| null |
George3d6
|
huggingface/transformers
| 2,591
|
What is the f1 score of Squad v2.0 on bert-base? I only got f1 score 74.78.
|
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello, I am doing some experiment of squad v2.0 on bert-base (NOT bert-large).
According to the BERT paper, bert-large achieves f1 score 81.9 with squad v2.0.
Since I couldn't find the official result for bert-base, I am not sure if I am getting the right f1 score.
Has anyone tried running squad v2.0 on bert base?
I got an F1 score of **74.78** for SQuAD v2.0 on bert-base, using the command below:
```bash
sudo python3 ../../../run_squad.py \
  --model_type bert \
  --model_name_or_path bert-base-cased \
  --do_train \
  --do_eval \
  --train_file $SQUAD2_DIR/train-v2.0.json \
  --predict_file $SQUAD2_DIR/dev-v2.0.json \
  --per_gpu_train_batch_size 4 \
  --learning_rate 4e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --version_2_with_negative \
  --overwrite_output_dir \
  --output_dir ../../../bert_base/$TASK_NAME/
```
|
https://github.com/huggingface/transformers/issues/2591
|
closed
|
[] | 2020-01-20T09:03:45Z
| 2020-01-22T05:03:12Z
| null |
YJYJLee
|
pytorch/tutorials
| 828
|
Multiple input tutorial
|
I am currently trying to build a model that takes two different inputs into account, trying to generalize the interaction between both from their properties.
However, I cannot find any resource on how to build a dataset that allows multiple inputs, while building the neural net itself seems quite simple. It would be great to address this in the PyTorch documentation or provide a tutorial for it.
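A small sketch of one way to do it (the class and variable names here are made up for illustration): a custom `Dataset` can simply return a tuple with both inputs and the label, and the default `DataLoader` collation stacks each element into a batch:
```python
import torch
from torch.utils.data import Dataset, DataLoader

class TwoInputDataset(Dataset):
    def __init__(self, x1, x2, y):
        self.x1, self.x2, self.y = x1, x2, y

    def __len__(self):
        return len(self.y)

    def __getitem__(self, idx):
        # Return both inputs plus the label; the DataLoader batches each one.
        return self.x1[idx], self.x2[idx], self.y[idx]

x1 = torch.randn(100, 8)          # properties of the first entity
x2 = torch.randn(100, 8)          # properties of the second entity
y = torch.randint(0, 2, (100,))   # interaction label

loader = DataLoader(TwoInputDataset(x1, x2, y), batch_size=16, shuffle=True)
for a, b, target in loader:
    pass  # a two-input model would be called as model(a, b)
```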
|
https://github.com/pytorch/tutorials/issues/828
|
closed
|
[] | 2020-01-20T08:21:57Z
| 2021-06-09T21:14:17Z
| 6
|
THinnerichs
|
pytorch/pytorch
| 32,418
|
how to install pytorch on AMD GPU
|
I found that PyTorch offers a download option that does not require CUDA, and I followed the instructions.
I choose the pytorch 1.4.
My OS is Windows.
Pip is used to install.
My version of python is python 3.6
CUDA None
and I run the command pip3 install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
However, I get two errors:
ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.4.0+cpu
Why? Thanks a lot for help
|
https://github.com/pytorch/pytorch/issues/32418
|
closed
|
[] | 2020-01-20T06:19:18Z
| 2023-04-10T18:58:46Z
| null |
PIPIKAI-Sung
|
pytorch/pytorch
| 32,403
|
How to accelerate the compiling of pytorch
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
I modified some files in ATen, and when I compile the PyTorch project it takes a lot of time, almost 5 minutes on my computer...
`python setup.py install` costs a lot of time. Can anybody help me speed up the compilation of PyTorch? Thanks a lot.
|
https://github.com/pytorch/pytorch/issues/32403
|
open
|
[
"module: build",
"triaged"
] | 2020-01-19T13:42:14Z
| 2020-01-21T23:25:36Z
| null |
daydayfun
|
pytorch/java-demo
| 3
|
How and where is it best to install the LibTorch library locally for the project?
|
How and where is it best to install the LibTorch library locally for the project on Linux (Ubuntu)?
While building the project, IntelliJ IDEA reports the error: "A problem occurred evaluating root project 'java-demo'. > LIBTORCH_HOME not present in environment."
|
https://github.com/pytorch/java-demo/issues/3
|
closed
|
[] | 2020-01-18T18:03:04Z
| 2020-04-29T02:53:34Z
| null |
vit1967
|
pytorch/pytorch
| 32,282
|
How to convert layer_norm layer to ONNX?
|
I'm trying to convert my model to ONNX format for further deployment in TensorRT. Here is sample code to illustrate my problem with layer_norm.
``` python
import torch
from torch import nn
class ExportModel(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
# n, c, h, w = x.shape
# y = nn.functional.layer_norm(x, [c, h, w]) # not working
# y = nn.functional.layer_norm(x, x.size()[1:]) # not working
y = nn.functional.layer_norm(x, [16, 32, 128])
return y
def main():
model = ExportModel()
dummy_input = torch.randn(64, 16, 32, 128)
input_names = [ "input" ]
output_names = [ "output" ]
with torch.no_grad():
torch.onnx.export(
model, dummy_input, "sample.onnx", verbose=True,
input_names=input_names, output_names=output_names
)
return
if __name__ == '__main__':
main()
```
It could only work when the parameter of layer_norm is constant number. If not, the following error will occur.
``` shell
Traceback (most recent call last):
File "sample.py", line 31, in <module>
main()
File "sample.py", line 26, in main
verbose=True, input_names=input_names, output_names=output_names
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/__init__.py", line 148, in export
strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 66, in export
dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 409, in _export
fixed_batch_size=fixed_batch_size)
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 289, in _model_to_graph
fixed_batch_size=fixed_batch_size)
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 132, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/__init__.py", line 179, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py", line 647, in _run_symbolic_function
return op_fn(g, *inputs, **attrs)
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py", line 128, in wrapper
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py", line 128, in <listcomp>
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/opt/conda/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py", line 81, in _parse_arg
"', since it's not constant, please try to make "
RuntimeError: Failed to export an ONNX attribute 'onnx::Gather', since it's not constant, please try to make things (e.g., kernel size) static if possible
```
I have a few code blocks in my model that use the layer_norm op. It would turn into ugly code if I explicitly marked all parameters as constant numbers. Is there any "best practice" for using dynamic shapes in this kind of use case?
Also, I have posted the same issue on the [forum](https://discuss.pytorch.org/t/how-to-convert-layer-norm-layer-to-onnx/66841). I'm not sure which is the better place for this kind of question, so I duplicate the issue here.
Thanks in advance.
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof
|
https://github.com/pytorch/pytorch/issues/32282
|
closed
|
[
"module: onnx",
"triaged"
] | 2020-01-16T10:53:52Z
| 2020-03-23T08:24:02Z
| null |
rtrobin
|
pytorch/vision
| 1,757
|
Torchvision Resnet 50 accuracy
|
Hey, Pytorch’s (torchvision) Resnet 50 accuracy is declared to be 76.15.
But when I’m using the training script from PyTorch’s repo, which is mentioned in the official torchvision website(https://pytorch.org/docs/stable/torchvision/models.html#classification):
[https://github.com/pytorch/examples/blob/master/imagenet/main.py]
and the Resnet50 from torchvision:
[https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py]
When training it, after one epoch I'm getting an accuracy of 76.6. How can it be? Isn't the model fully trained?
Thanks!
|
https://github.com/pytorch/vision/issues/1757
|
closed
|
[
"question",
"module: models"
] | 2020-01-16T09:43:54Z
| 2021-06-30T15:08:29Z
| null |
Esaada
|
pytorch/vision
| 1,751
|
module 'torchvision' has no attribute 'ops'
|
torchvision.ops implements operators that are specific to computer vision. Those operators currently do not support TorchScript. Performs non-maximum suppression (NMS) on the boxes according to their intersection-over-union (IoU).
output[image_i] = pred[torchvision.ops.boxes.batched_nms(pred[:, :4], pred[:, 4], c, iou_thres)]
AttributeError: module 'torchvision' has no attribute 'ops'
[https://github.com/ultralytics/yolov3/blob/master/utils/utils.py](url)
Can anyone please help me get past this problem?
|
https://github.com/pytorch/vision/issues/1751
|
closed
|
[
"question",
"module: ops"
] | 2020-01-15T15:01:54Z
| 2020-01-15T18:45:32Z
| null |
omizonly
|
huggingface/tokenizers
| 73
|
Decoding to string
|
Hi, thanks for this awesome library!
I want to decode BPE back to *actual* text, so that I can calculate BLEU scores. When I use the tokenizer.decoder, I get a string without any whitespace. I understand I can use a `pre_tokenizer` to get whitespaces, but in that case the decoded output would be `i can feel the mag i c , can you ?` (or something similar, depending on the BPE model). How do I get the actual text through decoding, so that I can calculate BLEU scores like I normally would?
```
from tokenizers import Tokenizer, models, pre_tokenizers, decoders
# Load a BPE Model
vocab = "./scripts/vocab.json"
merges = "./path/to/merges.txt"
bpe = models.BPE.from_files(vocab, merges)
# Initialize a tokenizer
tokenizer = Tokenizer(bpe)
# Customize pre-tokenization and decoding
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel.new(add_prefix_space=True)
tokenizer.decoder = decoders.ByteLevel.new()
# And then encode:
encoded = tokenizer.encode("i can feel the magic, can you?")
decoded = tokenizer.decode(encoded.ids)
print(encoded)
print(decoded)
>>> ['i', 'can', 'feel', 'the', 'mag', 'i', 'c', ',', 'can', 'you', '?']
>>> icanfeelthemagic,canyou?
```
|
https://github.com/huggingface/tokenizers/issues/73
|
closed
|
[
"question",
"python"
] | 2020-01-15T12:58:44Z
| 2020-01-20T15:38:29Z
| null |
davidstap
|
pytorch/vision
| 1,737
|
Pyramid layer
|
I want to extract the third layer of the feature pyramid from
`features = self.backbone(images.tensors)` in generalized_rcnn.py.
Any help, please?
|
https://github.com/pytorch/vision/issues/1737
|
open
|
[
"question",
"module: models",
"topic: object detection"
] | 2020-01-10T15:44:20Z
| 2020-01-10T16:29:44Z
| null |
MitraTj
|
pytorch/pytorch
| 32,041
|
How to export L2-normalization to onnx
|
## 🚀 Feature
Support export for LpNormalization from PyTorch to ONNX, thus it could be used in TensorRT model.
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof
|
https://github.com/pytorch/pytorch/issues/32041
|
closed
|
[
"module: onnx",
"triaged",
"enhancement",
"onnx-needs-info"
] | 2020-01-10T14:37:38Z
| 2022-10-24T18:08:40Z
| null |
stoneyang
|
pytorch/vision
| 1,732
|
How to use ResNet to handle one-channel input through torch.hub?
|
I did this to load the Resnet model, and since my input contains only one channel, the model does not work.
`model = torch.hub.load('pytorch/vision:v0.4.2', 'resnet18', pretrained=True)`
I know how to modify the 'resnet.py' file to meet my needs, but that means I must include the modified 'resnet.py' file in my project, which may be unnecessary. It would be a lot better if the model could be loaded simply from PyTorch.
Does anyone have a solution? Thanks a lot.
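One possible workaround, sketched here rather than an official recipe: load the model from the hub and then replace the first convolution so it accepts a single channel (its pretrained first-layer weights are re-initialized):
```python
import torch

model = torch.hub.load('pytorch/vision:v0.4.2', 'resnet18', pretrained=True)
# Swap the stem conv for a 1-channel version; the remaining weights stay pretrained.
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

x = torch.randn(1, 1, 224, 224)
print(model(x).shape)  # torch.Size([1, 1000])
```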
|
https://github.com/pytorch/vision/issues/1732
|
closed
|
[
"question",
"module: models",
"topic: classification"
] | 2020-01-09T09:22:50Z
| 2020-01-09T20:22:18Z
| null |
PhilWallace
|
pytorch/pytorch
| 31,984
|
Question about how to predict the derivative of the output?
|
I expect a neural network to predict a value and the derivative of that value. Is the following code the correct way?
```python
import torch
from torch import nn
from torch.autograd import grad
class net(nn.Module):
def __init__(self):
super(net, self).__init__()
self.lin1 = nn.Linear(3, 30)
self.lin2 = nn.Linear(30, 1)
def forward(self, p):
x = self.lin1(p)
x = nn.ReLU()(x)
return self.lin2(x)
x = torch.randn(1000, 3)
y = (5 * torch.sin(x) + 3 * torch.cos(x)).sum(dim=-1).unsqueeze(-1)
z = (5 * torch.cos(x) - 3 * torch.sin(x)).sum(dim=-1).unsqueeze(-1)
model = net()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
for epoch in range(10000):
model.train()
x.requires_grad = True
optimizer.zero_grad()
output = model(x)
grad_x = grad(output.sum(), x, retain_graph=True)[0]
loss_y = nn.MSELoss()(output, y)
loss_z = nn.MSELoss()(grad_x.sum(dim=-1).unsqueeze(-1), z)
loss = loss_y + loss_z
loss.backward(retain_graph=True)
optimizer.step()
print('Loss_y = {:.4f} | Loss_z = {:.4f}.'.format(loss_y.item(), loss_z.item()))
```
I checked the grad_fn of the variable ```loss_z``` and found ```loss_y.grad_fn = <MseLossBackward object at 0x0000024F2AB8DF98>```, but ```loss_z.grad_fn = None```. So although ```loss_z``` decreases, the loss on the derivative of the output doesn't participate in the gradient descent. Maybe the model just predicts ```y``` very well, so it can predict ```z``` well. If the dataset is not as easy as this one, loss_z doesn't even decrease.
Then I try to predict only z without predicting y, like the following code:
```python
import torch
from torch import nn
from torch.autograd import grad
class net(nn.Module):
def __init__(self):
super(net, self).__init__()
self.lin1 = nn.Linear(3, 30)
self.lin2 = nn.Linear(30, 1)
def forward(self, p):
x = self.lin1(p)
x = nn.ReLU()(x)
return self.lin2(x)
x = torch.randn(100, 3)
y = (5 * torch.sin(x) + 3 * torch.cos(x)).sum(dim=-1).unsqueeze(-1)
z = (5 * torch.cos(x) - 3 * torch.sin(x)).sum(dim=-1).unsqueeze(-1)
model = net()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
for epoch in range(1000):
model.train()
x.requires_grad = True
optimizer.zero_grad()
output = model(x)
grad_x = grad(output.sum(), x, retain_graph=True)[0]
loss_z = nn.MSELoss()(grad_x.sum(dim=-1).unsqueeze(-1), z)
print(loss_z.grad_fn) # None
loss_z.backward()
optimizer.step()
print('Loss_z = {:.4f}.'.format(loss_z.item()))
```
This code can't run,with the error:
```python
Traceback (most recent call last):
File "c:/Users/wz/Desktop/test.py", line 33, in <module>
loss_z.backward()
File "C:\Users\wz\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\tensor.py", line 118, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\wz\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\autograd\__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
I print ```loss_z.grad_fn``` and find it's None, but I don't know how to fix it. So how do I predict the derivative of the output correctly?
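A minimal sketch (not from the original post) of the usual pattern for this: calling `torch.autograd.grad` with `create_graph=True` builds a graph for the gradient itself, so `grad_x` gets a `grad_fn` and `loss_z` can backpropagate into the model parameters:
```python
import torch
from torch import nn
from torch.autograd import grad

x = torch.randn(100, 3)
z = (5 * torch.cos(x) - 3 * torch.sin(x)).sum(dim=-1).unsqueeze(-1)
x.requires_grad_(True)

model = nn.Sequential(nn.Linear(3, 30), nn.ReLU(), nn.Linear(30, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)

for epoch in range(1000):
    optimizer.zero_grad()
    output = model(x)
    # create_graph=True keeps the graph of this gradient computation,
    # so grad_x has a grad_fn and loss_z is differentiable w.r.t. the weights.
    grad_x = grad(output.sum(), x, create_graph=True)[0]
    loss_z = nn.MSELoss()(grad_x.sum(dim=-1).unsqueeze(-1), z)
    loss_z.backward()
    optimizer.step()
```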
|
https://github.com/pytorch/pytorch/issues/31984
|
closed
|
[] | 2020-01-09T07:31:25Z
| 2020-01-09T18:57:24Z
| null |
thu-wangz17
|
pytorch/vision
| 1,723
|
torchvision fails to use GPU.
|
While I am using [detectron2](https://github.com/facebookresearch/detectron2), I ran into the problem that some functions in torchvision can't use the GPU.
The details are here: https://github.com/facebookresearch/detectron2/issues/469
It seems to be an install problem. Directly using conda to install torchvision should be OK in most situations, but I am not sure whether this will lead to a CUDA usage error.
Could you give some suggestions to fix this problem? : )
|
https://github.com/pytorch/vision/issues/1723
|
closed
|
[
"question",
"topic: build"
] | 2020-01-07T09:23:49Z
| 2020-05-11T12:18:51Z
| null |
dihuangdh
|
huggingface/transformers
| 2,411
|
What is the difference between T5Model, T5WithLMHeadModel, T5PreTrainedModel?
|
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I notice that for the T5 model there are more choices (T5Model, T5WithLMHeadModel, T5PreTrainedModel) than for BERT or GPT. What is the difference between these three? I think all three are pre-trained models. We do not use T5PreTrainedModel in our downstream task code. Besides, the difference between T5Model and T5WithLMHeadModel is that the latter contains one more linear layer at the end. Am I right about these?
|
https://github.com/huggingface/transformers/issues/2411
|
closed
|
[
"wontfix"
] | 2020-01-06T07:01:32Z
| 2020-03-13T08:09:42Z
| null |
g-jing
|
pytorch/vision
| 1,720
|
Enquiry on Implementation of RandomHorizontalFlip (in transforms.py from references folder)
|
I am a bit confused by the implementation of RandomHorizontalFlip defined [here](https://github.com/pytorch/vision/blob/master/references/detection/transforms.py). Note the following extracted snippet:
```
class RandomHorizontalFlip(object):
def __init__(self, prob):
self.prob = prob
def __call__(self, image, target):
if random.random() < self.prob:
height, width = image.shape[-2:]
image = image.flip(-1)
bbox = target["boxes"]
bbox[:, [0, 2]] = width - bbox[:, [2, 0]]
target["boxes"] = bbox
```
should ```bbox[:, [0, 2]] = width - bbox[:, [2, 0]]``` be ```bbox[:, [1, 3]] = width - bbox[:, [3, 1]]``` instead?
Let original bounding box be ```[xmin, ymin, xmax, ymax]``` and image have size ```(height, width)```. After horizontal flip, the bounding box location should be ```[xmin, width - ymax, xmax, width - ymin]```.
(Please correct me if I have something wrong)
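A quick numeric check (not from the original issue) of what the existing line does: indices 0 and 2 are the x-coordinates, and a horizontal flip only mirrors x around the image width while leaving y untouched:
```python
import torch

width = 100
bbox = torch.tensor([[10., 20., 30., 40.]])   # [xmin, ymin, xmax, ymax]

bbox[:, [0, 2]] = width - bbox[:, [2, 0]]     # line from the snippet above
print(bbox)  # tensor([[70., 20., 90., 40.]]) -> x mirrored, y unchanged
```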
|
https://github.com/pytorch/vision/issues/1720
|
closed
|
[
"question",
"module: transforms",
"module: reference scripts"
] | 2020-01-05T11:04:12Z
| 2020-01-08T10:28:44Z
| null |
riven314
|
pytorch/pytorch
| 31,869
|
How to save int value in ctx.save_for_backward
|
I want to define a new memory op, and first implement a new memory function (torch.autograd.Function), but forward and backward are static methods,
and the inputs include some int values for configuration (like stride in the conv function). ctx.save_for_backward can't save int values; how do I fix this problem?
Also, I wanted to follow the torch.nn.conv1d example, but I can't find any source for the F.conv1d function.
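A toy sketch of the common pattern (not specific to the op in question): `save_for_backward` is only for tensors, while plain Python values such as ints can be stashed directly as attributes on `ctx`:
```python
import torch

class StridedScale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, stride):
        ctx.save_for_backward(x)   # tensors go through save_for_backward
        ctx.stride = stride        # non-tensor config (an int) lives on ctx directly
        return x[::stride] * 2.0

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        grad_x = torch.zeros_like(x)
        grad_x[::ctx.stride] = grad_out * 2.0
        return grad_x, None        # no gradient for the int argument

x = torch.randn(8, requires_grad=True)
y = StridedScale.apply(x, 2)
y.sum().backward()
print(x.grad)
```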
|
https://github.com/pytorch/pytorch/issues/31869
|
closed
|
[] | 2020-01-05T07:13:11Z
| 2020-01-06T05:22:12Z
| null |
kuramawzw1
|
pytorch/pytorch
| 31,865
|
how to install pytorch 0.4.1
|
For some reason I have to install 0.4.1. I tried many times, including installing from source; I tried to install 0.4.1 under CUDA 9.0 and CUDA 9.2, but it failed. My card is a 2080 Ti. Please help and tell me if there is a way to solve the problem, thanks!
|
https://github.com/pytorch/pytorch/issues/31865
|
closed
|
[] | 2020-01-05T03:25:46Z
| 2020-01-06T05:24:02Z
| null |
lapetite123
|
pytorch/pytorch
| 31,853
|
How to modify the internal calculation process of LSTM in pytorch-v1.1.0?
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
I want to modify the calculation process inside the LSTM. However, when I queried the _VF.lstm() method, no corresponding python implementation was found. Then I found the C++ implementation at this address (i.e., https://github.com/pytorch/pytorch/tree/master/aten/src/ATen/native/RNN.cpp) on GitHub. My question is which files need to be modified under the local PyTorch directory.
|
https://github.com/pytorch/pytorch/issues/31853
|
closed
|
[] | 2020-01-04T03:14:06Z
| 2020-01-06T05:24:17Z
| null |
zwd2016
|
pytorch/pytorch
| 31,823
|
How to set quantization aware training scaling factors?
|
## ❓ Questions and Help
When I use quantization-aware training, the weight tensor scaling factor is a standard floating-point number.
I want to deploy my model as 8-bit on an FPGA, so the weight tensor scaling factor must be a power-of-two value with an integer exponent. Is there such an option? What should I do?
|
https://github.com/pytorch/pytorch/issues/31823
|
closed
|
[] | 2020-01-03T10:53:36Z
| 2020-01-06T05:24:37Z
| null |
sunkr1995
|
pytorch/pytorch
| 31,821
|
How to convert model with a new QConv to onnx?
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
I wrapped a new conv class to support quantization. When I convert this model to ONNX, I want each conv in the ONNX model to have quantization parameters such as the number of quantization bits. Could you tell me how to convert this model to ONNX?
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a
|
https://github.com/pytorch/pytorch/issues/31821
|
closed
|
[
"module: onnx",
"oncall: quantization",
"triaged"
] | 2020-01-03T07:56:58Z
| 2021-12-16T00:16:35Z
| null |
Wuqiman
|
pytorch/pytorch
| 31,818
|
How to distinguish different layers in hook?
|
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
A way to distinguish different layers in each module itself
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
I'd like to store some intermediate data, such as the output data of all conv layers, and I want to use hooks. It is easy to judge which class the module is inside the hook function, e.g. "if isinstance(module, nn.Conv2d):", but if I want to store the data, I need a name that can be obtained inside the hook function to use as the file name, so that data from different layers is saved in different files, e.g. "save(filename, output)". How can I get this name?
Even if I collect all output data in a list and save it outside the hook function, I still don't know to which layer each data belongs.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
There is currently no way to identify each layer by a unique name or id.
```
def hook(module, input, output):
    name = get_unique_name(module)
    save(name+'.h5', output)
for n,m in model.named_modules():
    m.register_forward_hook(hook)
```
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered if any. -->
Because we can only get names from parent modules using `named_modules`, it would also work if I could pass arguments to the hook function.
```
def hook(module, input, output, n):
    save(n+'.h5', output)
for n,m in model.named_modules():
    m.register_forward_hook(hook, n)
```
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
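A workaround that is possible today (a sketch related to the Alternatives section above, not the proposed API): bind the layer name into the hook with `functools.partial`, so each registered hook already knows which module it belongs to:
```python
from functools import partial

import torch
from torch import nn

def hook(module, inputs, output, name):
    # "name" is bound via functools.partial below, so each layer's output
    # can be saved under its own identifier, e.g. torch.save(output, f"{name}.pt").
    print(name, tuple(output.shape))

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3))
for n, m in model.named_modules():
    if isinstance(m, nn.Conv2d):
        m.register_forward_hook(partial(hook, name=n))

model(torch.randn(1, 3, 32, 32))
```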
|
https://github.com/pytorch/pytorch/issues/31818
|
open
|
[
"module: nn",
"triaged"
] | 2020-01-03T03:48:13Z
| 2022-09-22T22:55:48Z
| null |
I-Doctor
|
pytorch/examples
| 689
|
DDP training multi nodes nccl error
|
pytorch:1.3.1
python:3.6
system:ubuntu 16
cuda:10.0
When I run the imagenet main.py on multiple nodes, there is an error like the one below (a single node can run):
Use GPU: 1 for training
Use GPU: 0 for training
=> creating model 'resnet50'
=> creating model 'resnet50'
id-d3:714:714 [0] misc/ibvwrap.cu:63 NCCL WARN Failed to open libibverbs.so[.1]
NCCL version 2.4.2+cuda9.0
id-d3:715:715 [1] misc/ibvwrap.cu:63 NCCL WARN Failed to open libibverbs.so[.1]
id-d3:715:790 [1] include/socket.h:382 NCCL WARN Connect to 172.18.0.1<49273> failed : Connection refused
Traceback (most recent call last):
File "dis_train.py", line 455, in <module>
main()
File "dis_train.py", line 120, in main
mp.spawn(main_worker, nprocs=ngpus_per_node, args=(ngpus_per_node, args))
File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 167, in spawn
while not spawn_context.join():
File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 114, in join
raise Exception(msg)
Exception:
-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/mnt/sdc/zhangwg/cv/image_review/src/dis_train.py", line 197, in main_worker
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 286, in __init__
self.broadcast_bucket_size)
File "/usr/local/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 410, in _dist_broadcast_coalesced
dist._dist_broadcast_coalesced(self.process_group, tensors, buffer_size, False)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:272, unhandled system error
Does somebody know how to fix it?
Thanks a lot.
|
https://github.com/pytorch/examples/issues/689
|
open
|
[
"distributed"
] | 2020-01-02T03:56:27Z
| 2024-09-27T05:43:31Z
| 1
|
ciel-zhang
|
pytorch/vision
| 1,710
|
finetuning inception_v3
|
finetuning resnet18 as
train: `models.resnet18(pretrained=True)`
val: `models.resnet18()`
But while finetuning inception_v3 as above, I got poor results. The evaluation must be
val: `models.inception_v3(pretrained=True)`
I spent a lot of time stuck here.
|
https://github.com/pytorch/vision/issues/1710
|
closed
|
[
"question",
"module: models"
] | 2020-01-01T14:45:26Z
| 2020-01-08T10:54:17Z
| null |
stormchasingg
|
huggingface/transformers
| 2,372
|
What is the "could not find answer" warning in squad.py
|
Hello,
I am trying to run run_squad.py for BERT (italian-cased) with an italian version of squad.
During the creation of features from dataset, I got some answer skipped like in the following:
<img width="478" alt="Screenshot 2019-12-30 at 23 30 19" src="https://user-images.githubusercontent.com/26765504/71603304-81081e80-2b5c-11ea-8333-73608e3141a7.png">
Can you tell me why this is happening and whether it influences the overall accuracy of the training?
|
https://github.com/huggingface/transformers/issues/2372
|
closed
|
[
"wontfix"
] | 2019-12-30T22:31:58Z
| 2020-08-29T15:05:37Z
| null |
cppntn
|
pytorch/vision
| 1,707
|
'loss_dict' error from 'train_one_epoch'
|
Navigating through the code in 'train_one_epoch', running this line:
`loss_dict = model(image,targets)`
gives the error:
> 397 # RPN uses all feature maps that are available
--> 398 features = list(features.values())
399 objectness, pred_bbox_deltas = self.head(features)
400 anchors = self.anchor_generator(images, features)
AttributeError: 'tuple' object has no attribute 'values'
Can anyone help?
|
https://github.com/pytorch/vision/issues/1707
|
closed
|
[
"question",
"module: reference scripts"
] | 2019-12-30T10:32:15Z
| 2020-10-10T09:43:24Z
| null |
madiltalay
|
pytorch/pytorch
| 31,699
|
How to implement multiple different kernel shapes in 2D convolution?
|
Hello. I'm currently working on a spherical convolutional network topic. Right now I'm trying to develop a new kind of kernel for the convolutional layer.
The usual kernel is a 3x3 matrix. But for spherical images, after being projected onto a plane using an equirectangular projection, there will be distortion. So I want to define the kernel as a spherical cap and project it onto the plane according to its position.
For example, the kernel at different positions of the sphere, seen from the perspective of the panorama picture, would look like this:
(image of the projected kernel shapes not included)
Is there any way to determine the shape of the kernels in these ways? I already have the full coordinates of the points in every case. I would very much appreciate any help and information.
Thank you guys very much!
cc @csarofeen @ptrblck
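One possible building block, sketched under the assumption of torchvision >= 0.6 and not a full spherical solution: `torchvision.ops.deform_conv2d` lets every output position sample at its own offsets, so position-dependent, distortion-aware kernel shapes can be encoded in an offset tensor computed from the projection geometry:
```python
import torch
from torchvision.ops import deform_conv2d

n, c_in, c_out, k = 1, 3, 8, 3
x = torch.randn(n, c_in, 16, 32)
weight = torch.randn(c_out, c_in, k, k)

# Offsets have shape (N, 2*k*k, H_out, W_out); with stride=1 and padding=1 the
# output size matches the input. The zeros here would be replaced by the
# per-position displacements derived from the equirectangular projection.
offset = torch.zeros(n, 2 * k * k, 16, 32)
out = deform_conv2d(x, offset, weight, stride=1, padding=1)
print(out.shape)  # torch.Size([1, 8, 16, 32])
```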
|
https://github.com/pytorch/pytorch/issues/31699
|
closed
|
[
"feature",
"module: nn",
"triaged",
"needs research"
] | 2019-12-30T08:59:59Z
| 2020-01-07T15:14:06Z
| null |
vhchuong
|
pytorch/pytorch
| 31,696
|
how to set the CUDA stream when calling ATen functions
|
```cpp
at::Tensor a = at::ones({16, 32}, opts);
at::Tensor b = at::randn({32, 64}, opts);
at::Tensor b1 = at::randn({32, 64}, opts);
auto c = at::matmul(a, b);
auto c1 = at::matmul(a, b1);
```
I want to call matmul with different CUDA streams attached:
call at::matmul(a, b) using stream1, and call at::matmul(a, b1) using stream2.
How do I do it? Thanks
cc @ngimel
|
https://github.com/pytorch/pytorch/issues/31696
|
closed
|
[
"module: cuda",
"triaged"
] | 2019-12-30T05:44:55Z
| 2019-12-31T06:48:42Z
| null |
kuramawzw1
|
pytorch/pytorch
| 31,685
|
What is the significance of torchvision._is_tracing()?
|
## What is the significance of torchvision._is_tracing()? ❓
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
cc @fmassa
|
https://github.com/pytorch/pytorch/issues/31685
|
open
|
[
"triaged",
"module: vision"
] | 2019-12-29T04:07:08Z
| 2019-12-30T21:50:08Z
| null |
AyanKumarBhunia
|
pytorch/tutorials
| 799
|
Should I rewrite the "dcgan_faces_tutorial" notebook so students are able to run it on Colab with that 1GB dataset?
|
OK, I see it sets "data root = "/home/ubuntu/facebook/datasets/celeba..."". This is definitely not for Colab, and some students' computers do not have a GPU. I have a solution: I have rewritten it so we can just download the zip file from Google Drive and unzip it. However, this requires uploading the 1GB dataset to the student's own Google Drive, or someone can tell me where I can upload that 1GB dataset so it can be downloaded with a link ending in .zip.
Thus, should I rewrite it so the student can run it on Colab with a GPU instead of on their local computer?
|
https://github.com/pytorch/tutorials/issues/799
|
closed
|
[] | 2019-12-27T14:44:39Z
| 2019-12-29T12:07:31Z
| 0
|
AliceSum
|
pytorch/vision
| 1,701
|
Errors with COCO targets
|
I am using the COCO dataset for training with annotations available at the COCO website.
I use this dataloader:
`train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=1, shuffle=True, num_workers=4, collate_fn=collate_fn)
`
Running one iteration:
`image, target = next(iter(train_dataloader))`
gives 'image' and 'target' of type 'tuple'
To convert the 'target' into the desired type (list of dicts), I use:
`target = [[{k: v for k, v in obj.items()} for obj in t] for t in target]`
Now when I run:
`loss_dict = model(image,target)`
It gives:
> /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py in resize(self, image, target)
73 return image, target
74
---> 75 bbox = target["boxes"]
76 bbox = resize_boxes(bbox, (h, w), image.shape[-2:])
77 target["boxes"] = bbox
TypeError: list indices must be integers or slices, not str
I try to play around:
```
new_target = {}
new_target['boxes'] = [t['bbox'] for t in target[0]]
new_target['labels'] = [t['category_id'] for t in target[0]]
new_target = [new_target]
```
And it gives another error:
> /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py in resize_boxes(boxes, original_size, new_size)
135 ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(new_size, original_size))
136 ratio_height, ratio_width = ratios
--> 137 xmin, ymin, xmax, ymax = boxes.unbind(1)
138 xmin = xmin * ratio_width
139 xmax = xmax * ratio_width
AttributeError: 'list' object has no attribute 'unbind'
Can anyone please help?
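For reference, a hypothetical helper (not from the issue) sketching the target format the torchvision detection models expect: one dict per image with `boxes` as a FloatTensor[N, 4] in [x1, y1, x2, y2] format and `labels` as an Int64Tensor[N], which also means converting COCO's [x, y, w, h] boxes:
```python
import torch

def coco_anns_to_target(anns):
    """Convert the list of COCO annotation dicts for one image into the single
    per-image dict expected by torchvision detection models (hypothetical helper)."""
    boxes, labels = [], []
    for obj in anns:
        x, y, w, h = obj["bbox"]          # COCO stores [x, y, width, height]
        boxes.append([x, y, x + w, y + h])
        labels.append(obj["category_id"])
    return {
        "boxes": torch.as_tensor(boxes, dtype=torch.float32),
        "labels": torch.as_tensor(labels, dtype=torch.int64),
    }

# The model would then be called with one such dict per image:
# loss_dict = model(images, [coco_anns_to_target(anns) for anns in raw_targets])
```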
|
https://github.com/pytorch/vision/issues/1701
|
closed
|
[
"question",
"module: reference scripts"
] | 2019-12-27T07:17:14Z
| 2020-01-08T10:44:53Z
| null |
madiltalay
|
pytorch/pytorch
| 31,643
|
how to know the input_shape of a pretrained model?
|
Hi,
I just want to know the model's input_shape,
but I couldn't find it.
So could you help me?
Thanks
|
https://github.com/pytorch/pytorch/issues/31643
|
closed
|
[] | 2019-12-27T01:12:54Z
| 2019-12-27T01:49:43Z
| null |
ucasiggcas
|
pytorch/vision
| 1,699
|
'train_one_epoch' gives error while using COCO annotations
|
I am using the COCO dataset for training with annotations available at the COCO website.
While using the code from: [https://github.com/pytorch/vision/blob/master/references/detection/engine.py](url), I get an error:
> AttributeError: 'list' object has no attribute 'items'
for the code snippet:
`targets = [{k: v.to(device) for k, v in t.items()} for t in targets]`
Further digging into the issue, I find that the 'targets' I receive from the 'for loop':
`for images, targets in metric_logger.log_every(data_loader, print_freq, header):`
are in tuple format, with length equal to the batch_size.
Moreover, each item in this tuple is a list, and each list consists of seven dictionaries containing the annotation information.
When I apply this code to an individual object, it works fine:
```
target = targets[0]
obj_1 = target[0]
dict_1 = [{k: v for k, v in obj_1.items()}]
```
So I suppose the code might be written as follows:
`targets = [[{k: v for k, v in obj.items()} for obj in target] for target in targets]`
Can you guys please confirm this and provide help in this regard?
|
https://github.com/pytorch/vision/issues/1699
|
closed
|
[
"question",
"module: reference scripts"
] | 2019-12-25T10:15:51Z
| 2022-10-07T16:13:55Z
| null |
madiltalay
|
huggingface/transformers
| 2,278
|
Where is the script for the second step of knowledge distillation on SQuAD 1.0?
|
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
In the Distil part, there is a paragraph that says: "distilbert-base-uncased-distilled-squad: A finetuned version of distilbert-base-uncased finetuned using (a second step of) knowledge distillation on SQuAD 1.0. This model reaches a F1 score of 86.9 on the dev set (for comparison, Bert bert-base-uncased version reaches a 88.5 F1 score)."
So where is the script for "a second step of knowledge distillation on SQuAD 1.0" mentioned above?
Thanks a lot, it will be very helpful to me!
|
https://github.com/huggingface/transformers/issues/2278
|
closed
|
[
"wontfix"
] | 2019-12-23T09:13:26Z
| 2020-05-08T15:29:08Z
| null |
c0derm4n
|
huggingface/pytorch-image-models
| 63
|
What is the value range of magnitude in auto-augment when MAX_LEVEL is set to 10?
|
Dear @rwightman, I have read the code for auto-augmentation and random-augmentation, and I noticed that MAX_LEVEL is set to 10, the same as in Google's implementation. Also, in the Google implementation they say an optimal magnitude is often in [5, 30]. But in your implementation you clip the input magnitude to be at most MAX_LEVEL (`magnitude = min(_MAX_LEVEL, max(0, magnitude)) # clip to valid range`).
Could you give me some hints about why MAX_LEVEL is set to 10 while the recommended input magnitude range is [5, 30]? Really thanks!
|
https://github.com/huggingface/pytorch-image-models/issues/63
|
closed
|
[] | 2019-12-23T08:49:19Z
| 2019-12-26T23:40:49Z
| null |
cddlyf
|
pytorch/text
| 669
|
How to use datasets for distributed training?
|
## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
I built a dataset from my corpus and used each line as an Example.
It worked fine until I tried to use it for distributed training.
It seems that torch.nn.parallel.DistributedDataParallel has to use a DistributedSampler, but that is not compatible with torchtext datasets.
Is there any way to use torchtext datasets for distributed training?
Thanks!
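For context, this is the workaround I have been sketching: wrapping the already-numericalized examples in a plain `torch.utils.data.Dataset` so that `DistributedSampler` applies. The toy examples below are placeholders for my real data.
```python
import torch
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.distributed import DistributedSampler

class WrappedTextDataset(Dataset):
    """Wrap already-numericalized examples so the standard DataLoader /
    DistributedSampler machinery can be used for distributed training."""
    def __init__(self, examples):
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

# toy numericalized examples: (token_id_tensor, label) pairs
examples = [(torch.randint(0, 100, (12,)), i % 2) for i in range(100)]
dataset = WrappedTextDataset(examples)

# inside each process, after torch.distributed.init_process_group(...):
# sampler = DistributedSampler(dataset)
# loader = DataLoader(dataset, batch_size=8, sampler=sampler)
```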
|
https://github.com/pytorch/text/issues/669
|
open
|
[] | 2019-12-22T03:20:56Z
| 2020-01-02T17:56:48Z
| null |
styxjedi
|
pytorch/pytorch
| 31,543
|
how to install torch by python3.8?
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/31543
|
closed
|
[] | 2019-12-21T03:15:45Z
| 2019-12-21T05:43:47Z
| null |
Fenghuixueha
|
pytorch/android-demo-app
| 46
|
How to create custom model for the PyTorchDemoApplication?Thanks
|
Hi, I want to learn how to use a PyTorch model on the Android platform, and this android-demo-app is very useful to me.
The PyTorchDemoApp has already been deployed on my Android phone and runs successfully.
But I want to know how to create a custom model with my own image data.
When I copy the model.pt from HelloWorldApp, the PyTorchDemoApp crashes and tells me "Sorry, there is an error".
Can anyone tell me how to create a custom model?
Thanks very much.
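For reference, this is roughly the export script I tried, adapted from the HelloWorld instructions; it is only a sketch, and my classes/labels differ from the demo app's, which may be why it crashes:
```python
import torch
import torchvision

# trace a model with an example input and save the TorchScript file
# that the Android app loads as model.pt
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

example = torch.rand(1, 3, 224, 224)   # the input shape the app will feed in
traced = torch.jit.trace(model, example)
traced.save("model.pt")
```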
|
https://github.com/pytorch/android-demo-app/issues/46
|
open
|
[] | 2019-12-20T08:55:31Z
| 2021-06-27T18:52:02Z
| null |
btdan
|
pytorch/xla
| 1,490
|
pytorch/xla vs TF
|
## ❓ Questions and Help
Hi, is training a model with pytorch xla slower than training a model with tf? Are there any other limitations to using pytorch/xla compared to TF?
|
https://github.com/pytorch/xla/issues/1490
|
closed
|
[
"question"
] | 2019-12-19T21:03:11Z
| 2019-12-19T22:01:41Z
| null |
bilal2vec
|
huggingface/transformers
| 2,230
|
what is the most efficient way to store all hidden layers' weights?
|
Hi,
I am following this [post](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) for getting all 12 hidden layers' weights for every token in a sentence.
Consider I have a short text with 2 sentences: `He stole money today. He is fishing on the Mississippi riverbank.`
For these 5 + 8 = 13 tokens I want to store all 12 hidden layers' states, where each tensor has size 768. So I will have 13 x 12 = 156 tensors.
I want to save all these vectors to a file and I am wondering if I should use `pickle` or `HDF5` format (I am working with long text documents). I am planning to separate two sentences with a blank line; please suggest better ways to do this if there are any.
Thanks!
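For concreteness, this is the kind of extraction code I am using (a sketch; the exact output layout may differ slightly between transformers versions):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

text = "He stole money today. He is fishing on the Mississippi riverbank."
input_ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(input_ids)
hidden_states = outputs[-1]               # embeddings + 12 layers, each (1, seq_len, 768)

# stack the 12 transformer layers and dump them to disk in one go
stacked = torch.stack(hidden_states[1:])  # shape (12, 1, seq_len, 768)
torch.save(stacked, "hidden_states.pt")
```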
|
https://github.com/huggingface/transformers/issues/2230
|
closed
|
[
"wontfix"
] | 2019-12-19T19:41:00Z
| 2020-02-24T20:38:46Z
| null |
vr25
|
pytorch/pytorch
| 31,466
|
how to pass trained weight to neural network module
|
Suppose I trained a `conv1d` on my own data; how can we pass the weights to `conv1d` in C++ the way PyTorch does in Python?
Looking at the implementation of `conv1d` in PyTorch, we can set parameters like `in_channels`, `out_channels`, etc. in the `__init__` function. If we want to set the `weights` and `bias` from a pretrained model, we could rewrite `Conv1d`, which may not be so difficult.
```
class Conv1d(_ConvNd):
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, dilation=1, groups=1,
bias=True, padding_mode='zeros'):
kernel_size = _single(kernel_size)
stride = _single(stride)
padding = _single(padding)
dilation = _single(dilation)
super(Conv1d, self).__init__(
in_channels, out_channels, kernel_size, stride, padding, dilation,
False, _single(0), groups, bias, padding_mode)
def forward(self, input):
if self.padding_mode == 'circular':
expanded_padding = ((self.padding[0] + 1) // 2, self.padding[0] // 2)
return F.conv1d(F.pad(input, expanded_padding, mode='circular'),
self.weight, self.bias, self.stride,
_single(0), self.dilation, self.groups)
return F.conv1d(input, self.weight, self.bias, self.stride,
self.padding, self.dilation, self.groups)
```
Looking at the `conv1d` implementation in libtorch, however, I noticed that
```
namespace nn {
Conv1dImpl::Conv1dImpl(
Conv1dOptions options_)
: ConvNdImpl(
detail::ConvNdOptions<1>(
/*in_channels=*/options_.in_channels(),
/*out_channels=*/options_.out_channels(),
/*kernel_size=*/options_.kernel_size())
.stride(options_.stride())
.padding(options_.padding())
.dilation(options_.dilation())
.transposed(false)
.output_padding(0)
.groups(options_.groups())
.bias(options_.bias())
.padding_mode(options_.padding_mode())) {}
Tensor Conv1dImpl::forward(const Tensor& input) {
if (c10::get_if<enumtype::kCircular>(&options.padding_mode())) {
std::vector<int64_t> expanded_padding = {((*options.padding())[0] + 1) / 2, (*options.padding())[0] / 2};
return F::detail::conv1d(
F::detail::pad(input, expanded_padding, torch::kCircular, 0),
weight, bias,
options.stride(),
/*padding=*/0,
options.dilation(),
options.groups());
}
return F::detail::conv1d(
input,
weight,
bias,
options.stride(),
options.padding(),
options.dilation(),
options.groups());
}
```
So how could we pass the weights in the C++ version?
|
https://github.com/pytorch/pytorch/issues/31466
|
closed
|
[] | 2019-12-19T10:18:14Z
| 2019-12-19T14:53:57Z
| null |
OswaldoBornemann
|
pytorch/examples
| 682
|
"EOFError: Ran out of input“ occurred in example mnist_hogwild
|
Hi, when I ran example **mnist_hogwild** on cuda, errors occurred as below:
```
File "main.py", line 66, in <module>
p.start()
File "D:\Python3.7.3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "D:\Python3.7.3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Python3.7.3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Python3.7.3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "D:\Python3.7.3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "D:\Python3.7.3\lib\site-packages\torch\multiprocessing\reductions.py", line 232, in reduce_tensor
event_sync_required) = storage._share_cuda_()
RuntimeError: cuda runtime error (71) : operation not supported at C:\w\1\s\windows\pytorch\torch/csrc/generic/StorageSharing.cpp:245
C:\Users\audrey\Desktop\test>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\Python3.7.3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\Python3.7.3\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
```
My system: **Windows10**
device: GeForce RTX 2080 Ti
PyTorch version: 1.2.0
How to fix this? Thanks!
|
https://github.com/pytorch/examples/issues/682
|
open
|
[
"distributed",
"pickle"
] | 2019-12-19T05:06:30Z
| 2023-10-11T06:19:14Z
| 2
|
audreycs
|
pytorch/examples
| 681
|
SNLI: The examples doesn't work
|
Help, I tried to run the SNLI task in examples and got the following error:
Traceback (most recent call last):
File "C:/Users/syk/Desktop/git/examples/snli/train.py", line 35, in <module>
inputs.vocab.load_vectors(wv_dir=args.data_cache, wv_type=args.word_vectors, wv_dim=args.d_embed)
TypeError: load_vectors() missing 1 required positional argument: 'vectors'
It seems that vocab.load_vectors needs a vectors argument, according to the definition of this function.
Does anyone know how to solve this?
I'm not sure if it's my problem. Thank you very much!
|
https://github.com/pytorch/examples/issues/681
|
closed
|
[] | 2019-12-18T12:50:50Z
| 2020-09-13T13:50:53Z
| 0
|
Youarerare
|
huggingface/pytorch-image-models
| 61
|
where is your MixNet code? I can't find it.
|
https://github.com/huggingface/pytorch-image-models/issues/61
|
closed
|
[] | 2019-12-17T02:49:04Z
| 2019-12-17T05:30:46Z
| null |
xiebinghua
|
|
pytorch/tutorials
| 793
|
Explain how we can use the same dataset for training and non-training
|
In the [Training a Classifier tutorial](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py), explain how we can use the same dataset for training and non-training. Is it because we shuffle to randomize and use a subset?
|
https://github.com/pytorch/tutorials/issues/793
|
closed
|
[
"60_min_blitz"
] | 2019-12-16T23:24:55Z
| 2020-05-18T17:58:46Z
| 1
|
jlin27
|
pytorch/tutorials
| 790
|
Clarify why there are 6 output channels
|
In the [Define the network section of the Neural Network tutorial](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py), clarify why there are 6 outputs. Is it the bias?

|
https://github.com/pytorch/tutorials/issues/790
|
closed
|
[
"60_min_blitz"
] | 2019-12-16T22:58:37Z
| 2020-05-18T17:59:34Z
| 4
|
jlin27
|
pytorch/vision
| 1,669
|
Question regarding only bbox
|
https://github.com/pytorch/vision/blob/bce17fddd4da744e23512b8e224d085818e6d921/references/detection/coco_utils.py#L231
What if there are only bbox annotations and no segmentation available at all?!
|
https://github.com/pytorch/vision/issues/1669
|
closed
|
[
"question",
"module: reference scripts",
"topic: object detection"
] | 2019-12-16T14:24:54Z
| 2019-12-16T14:54:44Z
| null |
gaussiangit
|
pytorch/tutorials
| 772
|
Text classification dataset
|
Where can I find the dataset for the text classification tutorial? I mean this one:
https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html
|
https://github.com/pytorch/tutorials/issues/772
|
closed
|
[] | 2019-12-15T17:21:34Z
| 2021-06-10T21:18:29Z
| 1
|
mahmoodn
|
pytorch/tutorials
| 771
|
Using CUDA for deep learning
|
For the deep learning [tutorial](https://pytorch.org/tutorials/beginner/nlp/deep_learning_tutorial.html), I have added the device command at the top to offload the work to the GPU.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.device("cuda:0")
```
However, no process will go to the GPU. I see only CPU usage.
How can I fix that?
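For reference, here is the minimal pattern I expected to need (my own sketch, assuming a CUDA device is actually visible); constructing a `torch.device` alone does not move anything, so the module and tensors have to be moved explicitly:
```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)        # move the parameters to the GPU
x = torch.randn(4, 10, device=device)      # create the inputs on the GPU as well
out = model(x)
print(out.device)                          # cuda:0 when a GPU is available
```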
|
https://github.com/pytorch/tutorials/issues/771
|
closed
|
[] | 2019-12-15T17:06:10Z
| 2021-07-30T21:55:36Z
| 1
|
mahmoodn
|
pytorch/vision
| 1,665
|
Automatic Background Removal technology
|
I am looking for a deep learning library/sdk which can be used to remove the background from any image automatically (with quality as good as www.remove.bg).
I tried some image segmentation SDKs with pre-trained models such as Tensorflow Lite & Fritz AI, but the accuracy of the cutout mask was very low, amongst other issues.
Criteria :-
1) Background Removal rather than just Human/Portrait Segmentation
If the foreground consists of person holding a balloon, sittting on a chair, with a pet on his side, then I want all of this to get extracted. Not just the human cutout. The segmentation SDKs I tried are only extracting humans (the chair gets vanished), that too with a very low quality mask (hair gets cut, parts of ear gets cut, etc).
2) Mask quality should be Super-Accurate
I want even the finer details like the hair, delicate clothes, etc to be extracted perfectly.
3) Fast & Lightweight (for mobile phone)
I want to use this technology on mobile phones (in an Android app) which should ideally work even in an offline environment. If this option is difficult to achieve, then plan B would be install the technoloy on our server.
4) Technology
What technology should I be exploring to achieve this? Is it called image segmentation or the better term would be image matting? (e.g. http://alphamatting.com/eval_25.php)
I have been reading a lot and I am currently lost in the sea of various technologies out there (OpenCV, Deep Matting, Mask RCNN, Instance Segmentation, Detectron2, Tensorflow, Pytorch, etc). I wonder what magic is happening behind the curtains of www.remove.bg
Would your library help me to achieve what I am looking for? Any help you could provide would be awesome.
Thanks a ton!
|
https://github.com/pytorch/vision/issues/1665
|
closed
|
[
"question",
"module: models"
] | 2019-12-15T06:53:21Z
| 2020-03-24T15:44:36Z
| null |
InternetMaster1
|
pytorch/pytorch
| 31,246
|
How to do independent random number generation in a multiprocessing dataloader
|
When I use num_workers > 0 in DataLoader and generate a random number in the __getitem__ function,
I find that all workers generate the same random number...
For example, I set num_workers=8 and I want to get a random number to define my scale augmentation.
I will get
0.9 0.9 0.9 0.9 0.9 0.9 0.9 0.9
the same 0.9 eight times!
So I want to know how to implement independent random number generation in a multiprocessing dataloader.
Thanks...
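For reference, this is the workaround I am currently testing (a self-contained sketch): reseeding numpy and random per worker with worker_init_fn, since torch itself already seeds each worker differently.
```python
import random
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class AugDataset(Dataset):
    def __len__(self):
        return 16

    def __getitem__(self, idx):
        scale = np.random.uniform(0.5, 1.5)   # random scale augmentation factor
        return idx, scale

def worker_init_fn(worker_id):
    # torch gives every worker a distinct seed; propagate it to numpy/random,
    # whose state would otherwise be copied identically into each worker
    seed = torch.initial_seed() % 2**32
    np.random.seed(seed)
    random.seed(seed)

loader = DataLoader(AugDataset(), batch_size=4, num_workers=8,
                    worker_init_fn=worker_init_fn)
for idx, scale in loader:
    print(scale)   # the scales now differ across workers
```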
cc @SsnL
|
https://github.com/pytorch/pytorch/issues/31246
|
closed
|
[
"module: dataloader",
"triaged"
] | 2019-12-13T08:34:29Z
| 2019-12-16T17:29:43Z
| null |
EricKani
|
pytorch/text
| 666
|
How to use torchtext for tasks involving image/tabular data like image captioning?
|
## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
Hi, thanks for the great library. I am wondering whether there is a way to use a torchtext Dataset for multi-modal data. An example task would be image captioning, where we need to generate some text based on the input image, or generating text from tabular data, for example table summarization.
|
https://github.com/pytorch/text/issues/666
|
open
|
[] | 2019-12-13T05:24:33Z
| 2020-04-11T07:55:54Z
| null |
Hans0124SG
|
pytorch/pytorch
| 31,098
|
How to install pytorch for CUDA 10.2?
|
Hello everyone. I have installed CUDA 10.2 and I tried to install PyTorch on Windows.
But I caught an error like this:
FAILED: build.ninja
C:\Users\TensorFlow\.conda\envs\torch\Library\bin\cmake.exe -SF:\Git\pytorch -BF:\Git\pytorch\build
ninja: error: rebuilding 'build.ninja': subcommand failed
Traceback (most recent call last):
File "setup.py", line 755, in <module>
build_deps()
File "setup.py", line 316, in build_deps
cmake=cmake)
File "F:\Git\pytorch\tools\build_pytorch_libs.py", line 62, in build_caffe2
cmake.build(my_env)
File "F:\Git\pytorch\tools\setup_helpers\cmake.py", line 337, in build
self.run(build_args, my_env)
File "F:\Git\pytorch\tools\setup_helpers\cmake.py", line 141, in run
check_call(command, cwd=self.build_dir, env=env)
File "C:\Users\TensorFlow\.conda\envs\torch\lib\subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 1.
Please help me. How can I fix this bug?
|
https://github.com/pytorch/pytorch/issues/31098
|
closed
|
[] | 2019-12-11T06:56:22Z
| 2019-12-11T17:01:39Z
| null |
tensor2flow
|
pytorch/text
| 665
|
How to load downloaded dataset?
|
I downloaded SogouNews and tried to use it like this:
`train_dataset, test_dataset = datasets.SogouNews(root='data',ngrams=3)`
but it didn't work; it still auto-downloads the dataset.
|
https://github.com/pytorch/text/issues/665
|
closed
|
[] | 2019-12-11T01:03:17Z
| 2022-06-24T00:20:48Z
| null |
LotusQing
|
huggingface/transformers
| 2,127
|
Where is extract_features.py and run_classifier.py ?
|
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello! I couldn't find the extract_features.py and run_classifier.py. Have they been renamed ?
|
https://github.com/huggingface/transformers/issues/2127
|
closed
|
[] | 2019-12-10T17:14:27Z
| 2019-12-13T15:09:01Z
| null |
JiangYanting
|
pytorch/pytorch
| 31,041
|
How to load PyTorch model using C++ api
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/31041
|
closed
|
[] | 2019-12-10T09:57:21Z
| 2019-12-10T10:30:15Z
| null |
henbucuoshanghai
|
pytorch/pytorch
| 30,962
|
How can I add masks to parameters
|
Hi,
Can I use a hook to add a parameter masking function to Conv2d? Specifically, I'd like to add a binary mask buffer to each Conv2d module; during each training step, I need to update the mask buffer and then use it to mask the weight.
Or is there any other method to add masks and apply them to the Conv2d layers in a given model?
Thanks!
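For discussion, here is one simple approach I sketched (my own code, not a library feature): register a mask buffer on each Conv2d and zero the masked weights in a forward pre-hook before every forward pass.
```python
import torch
import torch.nn as nn

def add_weight_mask(conv):
    # binary mask buffer, same shape as the weight (1 = keep, 0 = prune)
    conv.register_buffer("weight_mask", torch.ones_like(conv.weight))

    def apply_mask(module, inputs):
        # zero out masked entries right before the convolution runs;
        # update the mask elsewhere with module.weight_mask.copy_(new_mask)
        module.weight.data.mul_(module.weight_mask)

    conv.register_forward_pre_hook(apply_mask)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        add_weight_mask(m)
```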
|
https://github.com/pytorch/pytorch/issues/30962
|
open
|
[
"module: nn",
"triaged"
] | 2019-12-09T12:50:11Z
| 2019-12-11T07:37:43Z
| null |
tzm1003306213
|
pytorch/tutorials
| 761
|
RuntimeError: CUDA error: out of memory
|
I'm trying to run the code below:
if torch.cuda.is_available():
device = torch.device("cuda") # a CUDA device object
y = torch.ones_like(x, device=device) # directly create a tensor on GPU
x = x.to(device) # or just use strings ``.to("cuda")``
z = x + y
print(z)
print(z.to("cpu", torch.double)) # ``.to`` can also change dtype together!_
but I always get the error:
**y = torch.ones_like(x, device=device) # directly create a tensor on GPU
RuntimeError: CUDA error: out of memory**
I'm running this on CUDA version 10.1.243 and torch version 1.3.1.
Does anyone know what the problem is?!
the source of the code: https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#cuda-tensors
|
https://github.com/pytorch/tutorials/issues/761
|
closed
|
[] | 2019-12-09T10:03:49Z
| 2021-07-30T22:15:11Z
| 3
|
Ala770
|
pytorch/examples
| 676
|
Reading my own dataset
|
Hi, I want to read/load my own dataset and build my models using it. But I did not understand how I can read/load my own dataset. All the examples use PyTorch's built-in datasets, which does not help me. Can you help me with this problem?
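What I have pieced together so far is sketched below; the file paths and labels are made-up placeholders. Is subclassing torch.utils.data.Dataset like this the intended way to load my own data?
```python
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset

class MyImageDataset(Dataset):
    """Minimal custom dataset built from a list of (image_path, label) pairs."""
    def __init__(self, samples, transform=None):
        self.samples = samples            # e.g. [("img/cat1.jpg", 0), ("img/dog3.jpg", 1)]
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, label

# loader = DataLoader(MyImageDataset(samples), batch_size=32, shuffle=True)
```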
|
https://github.com/pytorch/examples/issues/676
|
closed
|
[] | 2019-12-08T08:49:16Z
| 2019-12-09T14:48:38Z
| 2
|
gozeloglu
|
pytorch/vision
| 1,646
|
What is the meta.bin file used by the ImageNet dataset?
|
[Comment from @kanonjz in #1457](https://github.com/pytorch/vision/pull/1457#issuecomment-562807954)
> I downloaded imagenet myself and used `parse_val_archive` to prepare the folders, but got an error below. What is the `meta.bin`? I didn't find it in the imagenet.
>
> `The meta file meta.bin is not present in the root directory or is corrupted. " "This file is automatically created by the ImageNet dataset.`
|
https://github.com/pytorch/vision/issues/1646
|
closed
|
[
"module: datasets"
] | 2019-12-07T12:30:20Z
| 2019-12-10T13:07:42Z
| null |
pmeier
|
pytorch/pytorch
| 30,929
|
How can I configure the build to avoid the libtorch_cpu.so and libmkl_*.so dependencies?
|
```
linux-vdso.so.1 (0x00007fffa4bfc000)
libtorch_cpu.so => /home/xxxxx/workfiles/work/pytorch/torch/lib/./libtorch_cpu.so (0x00007f63d4f6c000)
librt.so.1 => /lib64/librt.so.1 (0x00007f63d4d52000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f63d4b3c000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f63d4938000)
libmkl_intel_lp64.so => /lib/libmkl_intel_lp64.so (0x00007f63d3e06000)
libmkl_gnu_thread.so => /lib/libmkl_gnu_thread.so (0x00007f63d25cd000)
libmkl_core.so => /lib/libmkl_core.so (0x00007f63ce494000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f63ce275000)
libm.so.6 => /lib64/libm.so.6 (0x00007f63cdf73000)
libc10.so => /home/xxxxx/workfiles/work/pytorch/torch/lib/./libc10.so (0x00007f63cdd31000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f63cd9ae000)
libgomp.so.1 => /lib64/libgomp.so.1 (0x00007f63cd788000)
libc.so.6 => /lib64/libc.so.6 (0x00007f63cd3dc000)
/lib64/ld-linux-x86-64.so.2 (0x000055795895f000)
```
|
https://github.com/pytorch/pytorch/issues/30929
|
open
|
[
"module: build",
"triaged",
"module: mkl"
] | 2019-12-07T04:08:13Z
| 2020-05-01T18:47:25Z
| null |
LinGeLin
|
pytorch/examples
| 675
|
what do parameters 'ndf' and 'ngf' mean?
|
Thanks for your code. However, I was wondering if you could tell me what 'ndf' and 'ngf' mean? I do know how these two parameters are used, but I do not know why they are called 'ndf' and 'ngf', respectively. Looking forward to your reply.
|
https://github.com/pytorch/examples/issues/675
|
closed
|
[] | 2019-12-06T21:29:40Z
| 2022-03-09T21:52:39Z
| 1
|
jianzhuwang
|
pytorch/pytorch
| 30,869
|
How to specify the install path when building libtorch, without using cmake-gui?
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/30869
|
closed
|
[] | 2019-12-06T12:28:59Z
| 2019-12-06T13:39:39Z
| null |
LinGeLin
|
pytorch/pytorch
| 30,796
|
How to build PyTorch with a local protobuf rather than third_party/protobuf?
|
## ❓ Questions and Help
I want to build PyTorch with my own OS-installed protobuf library rather than third_party/protobuf. Which prefix should I change? Can anyone help me?
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/30796
|
closed
|
[] | 2019-12-05T06:11:52Z
| 2019-12-06T17:31:11Z
| null |
Raneee
|
pytorch/text
| 660
|
How to prefetch data?
|
Currently, the bottleneck of my model training is the data loading part. Is there any example of how to prefetch data, like the `pin_memory` and `num_workers` arguments of `torch.utils.data.DataLoader`?
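What I am doing at the moment is sketched below (a self-contained toy version of my pipeline): wrapping the numericalized examples in a plain list and letting DataLoader workers prepare batches in the background.
```python
import torch
from torch.utils.data import DataLoader

# toy "numericalized" examples: (token_id_tensor, label) pairs of varying length
examples = [(torch.randint(0, 100, (5 + i % 15,)), i % 2) for i in range(1000)]

def pad_collate(batch):
    seqs, labels = zip(*batch)
    padded = torch.nn.utils.rnn.pad_sequence(seqs, batch_first=True)
    return padded, torch.tensor(labels)

loader = DataLoader(examples, batch_size=64, shuffle=True,
                    num_workers=4,      # workers prefetch batches in the background
                    pin_memory=True,    # page-locked memory speeds up GPU transfer
                    collate_fn=pad_collate)
```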
|
https://github.com/pytorch/text/issues/660
|
closed
|
[] | 2019-12-04T14:04:20Z
| 2022-06-24T00:39:44Z
| null |
speedcell4
|
pytorch/vision
| 1,633
|
how can I use ROI align in torch version 1.0
|
https://github.com/pytorch/vision/issues/1633
|
closed
|
[
"question",
"module: ops"
] | 2019-12-04T13:25:24Z
| 2019-12-04T14:51:40Z
| null |
scut-salmon
|
|
pytorch/pytorch
| 30,720
|
what is tensor's storage C++ pointer?
|
Recently I have been looking into the PyTorch source code. A tensor's impl object is created when a tensor is created, but I can't figure out where the tensor's storage is or how to get its pointer.
Could anyone give me some help? 😊
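From the Python side, the closest I have gotten is the small experiment below, shown just for context of what I am trying to map back to the C++ code:
```python
import torch

t = torch.arange(6).reshape(2, 3)
print(t.data_ptr())                # address of the first element this tensor sees
print(t.storage().data_ptr())      # address of the underlying storage buffer

s = t[:, 1:]                       # a sliced view shares the same storage
print(s.storage().data_ptr() == t.storage().data_ptr())   # True
print(s.data_ptr() == t.data_ptr())                       # False: different offset
```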
|
https://github.com/pytorch/pytorch/issues/30720
|
closed
|
[] | 2019-12-04T08:38:09Z
| 2019-12-04T16:22:54Z
| null |
alanzhai219
|
pytorch/xla
| 1,448
|
PyTorch on XLA for CPU/GPU?
|
IIUC, with the same HLO, XLA is able to run on GPU and TPU.
I wonder if this project allows running PyTorch on top of XLA for CPU/GPU and future AI chips (as soon as they support XLA)?
Thanks,
Tiezhen
|
https://github.com/pytorch/xla/issues/1448
|
closed
|
[
"question",
"stale"
] | 2019-12-04T06:51:32Z
| 2020-01-26T17:08:48Z
| null |
wangtz
|
pytorch/examples
| 672
|
I faced a build error with the libtorch mnist.cpp example on Ubuntu 18.04
|
(1) Issue
I faced a build error with one of the libtorch examples, mnist.cpp, on Ubuntu 18.04.
Please tell me how to solve the build error.

(2) Environment
OS: Ubuntu 18.04 LTS
libtorch: I downloaded https://download.pytorch.org/libtorch/cu101/libtorch-shared-with-deps-1.3.1.zip
cmake version 3.16.0
CUDA:10.1
(3) Steps to reproduce the error
1.$mkdir OnPre and cd OnPre
2.I downloaded libtorch-shared-with-deps-1.3.1.zip and $unzip libtorch-shared-with-deps-1.3.1.zip.
3.the folder "libtorch" was made and $ cd libtorch.
4.$mkdir mnist and $cd mnist
5.I copied CMakeLists.txt and mnist.cpp from https://github.com/pytorch/examples/tree/master/cpp/mnist
6.$mkdir build and cd build
7.$ cmake -DCMAKE_PREFIX_PATH=/home/yoshiki/OnPre/libtorch ..
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found version "10.1")
-- Caffe2: CUDA detected: 10.1
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 10.1
-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so
-- Found cuDNN: v7.6.5 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s): 5.2
-- Added CUDA NVCC flags for: -gencode;arch=compute_52,code=sm_52
-- Found torch: /home/yoshiki/OnPre/libtorch/lib/libtorch.so
-- Downloading MNIST dataset
-- Configuring done
-- Generating done
-- Build files have been written to: /home/yoshiki/OnPre/libtorch/mnist/build
8.$ make
Scanning dependencies of target mnist
[ 50%] Building CXX object CMakeFiles/mnist.dir/mnist.cpp.o
/home/yoshiki/OnPre/libtorch/mnist/mnist.cpp: In function ‘void test(Net&, c10::Device, DataLoader&, size_t)’:
/home/yoshiki/OnPre/libtorch/mnist/mnist.cpp:102:26: error: ‘at::Reduction’ has not been declared
at::Reduction::Sum)
^~~~~~~~~
CMakeFiles/mnist.dir/build.make:62: recipe for target 'CMakeFiles/mnist.dir/mnist.cpp.o' failed
make[2]: *** [CMakeFiles/mnist.dir/mnist.cpp.o] Error 1
CMakeFiles/Makefile2:75: recipe for target 'CMakeFiles/mnist.dir/all' failed
make[1]: *** [CMakeFiles/mnist.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
9. The build error appeared.
|
https://github.com/pytorch/examples/issues/672
|
closed
|
[] | 2019-12-04T02:13:45Z
| 2019-12-04T07:35:11Z
| 1
|
yoshihingis
|
pytorch/vision
| 1,630
|
GeneralizedRCNNTransform doesn't work with four-channel inputs
|
When I modify the input channel of FasterRCNN from 3 to 4, GeneralizedRCNNTransform doesn't work.
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py", line 47, in forward
images, targets = self.transform(images, targets)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py", line 40, in forward
image = self.normalize(image)
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py", line 55, in normalize
return (image - mean[:, None, None]) / std[:, None, None]
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0
```
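For reference, the workaround I am experimenting with is sketched below; the fourth mean/std values (0.5 and 0.25) are arbitrary placeholders for my extra channel, and the backbone's first conv is swapped so it accepts 4 channels:
```python
import torch
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# backbone whose first conv takes 4 input channels instead of 3
backbone = resnet_fpn_backbone('resnet50', pretrained=False)
backbone.body.conv1 = torch.nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                      padding=3, bias=False)

# GeneralizedRCNNTransform normalizes with image_mean/image_std, so these
# also need one entry per channel
model = FasterRCNN(backbone, num_classes=2,
                   image_mean=[0.485, 0.456, 0.406, 0.5],
                   image_std=[0.229, 0.224, 0.225, 0.25])
```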
|
https://github.com/pytorch/vision/issues/1630
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2019-12-04T00:53:20Z
| 2019-12-04T12:58:30Z
| null |
ZhiangChen
|
pytorch/xla
| 1,447
|
How to use a specific commit of pytorch-xla in Colab?
|
## ❓ Questions and Help
Hi,
I'm eager to use a specific commit (or the latest) in Colab. My current setup is this cell:
```bash
XRT_VERSION = "nightly"
DIST_BUCKET = "gs://tpu-pytorch/wheels"
TORCH_WHEEL = "torch-{}-cp36-cp36m-linux_x86_64.whl".format(XRT_VERSION)
TORCH_XLA_WHEEL = "torch_xla-{}-cp36-cp36m-linux_x86_64.whl".format(XRT_VERSION)
TORCHVISION_WHEEL = "torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl"
# Update TPU XRT version
import os
import requests
import threading
def update_server_xrt():
print("Updating server-side XRT...")
url = 'http://{TPU_ADDRESS}:8475/requestversion/{XRT_VERSION}'.format(
TPU_ADDRESS=os.environ['COLAB_TPU_ADDR'].split(':')[0],
XRT_VERSION=XRT_VERSION,
)
print("Done updating server-side XRT: {}".format(requests.post(url)))
update = threading.Thread(target=update_server_xrt)
update.start()
# Install Colab TPU compat PyTorch/TPU wheels and dependencies
!pip uninstall -y torch torchvision
!gsutil cp "$DIST_BUCKET/$TORCH_WHEEL" .
!gsutil cp "$DIST_BUCKET/$TORCH_XLA_WHEEL" .
!gsutil cp "$DIST_BUCKET/$TORCHVISION_WHEEL" .
!pip install "$TORCH_WHEEL"
!pip install "$TORCH_XLA_WHEEL"
!pip install "$TORCHVISION_WHEEL"
!sudo apt-get install libomp5
update.join()
```
But that only gets the nightly version. Is there some way to name a specific commit?
|
https://github.com/pytorch/xla/issues/1447
|
closed
|
[
"question"
] | 2019-12-03T20:04:55Z
| 2020-02-12T17:36:30Z
| null |
hrbigelow
|
pytorch/vision
| 1,629
|
Reference detection script image sizes help
|
Hi @fmassa ,
Somehow the reference detection script does not handle big images of size > 3000.
It always throws a CUDA out-of-memory error.
Any suggestions on that?
|
https://github.com/pytorch/vision/issues/1629
|
closed
|
[
"question",
"module: models",
"module: reference scripts",
"topic: object detection"
] | 2019-12-03T12:11:21Z
| 2019-12-03T12:30:55Z
| null |
gaussiangit
|
pytorch/pytorch
| 30,655
|
How to convert Tensor back to BitMap or any image format in Android?
|
I have converted a PyTorch model for Android mobile. The purpose of the model is to achieve super resolution. The problem I am facing is that the model gives output in the form of a Tensor, whereas I want to convert that tensor into some image format, but I haven't been able to find a method to achieve this.
I cannot find anything suitable in the PyTorch Java documentation for this task. Please advise regarding this issue.
|
https://github.com/pytorch/pytorch/issues/30655
|
closed
|
[
"module: android",
"oncall: mobile"
] | 2019-12-03T09:32:11Z
| 2023-09-29T16:39:11Z
| null |
nauyan
|
pytorch/pytorch
| 30,654
|
What is the difference between nn.functional.conv2d and nn.Conv2d? It seems a bit redundant
|
## ❓ Questions and Help
Hi, I have just started learning PyTorch recently. In the official website tutorials, I often see both nn.Conv2d and nn.functional.conv2d. I don't understand the difference between the two ways of writing it; it seems that one of the two should be enough.
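Here is a small experiment I ran while reading the tutorials, pasted in case it makes the question clearer; the two produce the same result when given the same weights:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# module version: owns its weight and bias as learnable parameters
conv = nn.Conv2d(3, 5, kernel_size=3, padding=1)
y1 = conv(x)

# functional version: the weight and bias are passed in explicitly on every call
y2 = F.conv2d(x, conv.weight, conv.bias, padding=1)

print(torch.allclose(y1, y2))   # True
```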
|
https://github.com/pytorch/pytorch/issues/30654
|
closed
|
[] | 2019-12-03T08:27:21Z
| 2019-12-04T01:09:44Z
| null |
wulongjian
|
pytorch/xla
| 1,442
|
Out of memory error?
|
Is the following an out-of-memory error from the TPU?:

The text just keeps scrolling with similar messages.
It's surprising I get this error, because all I wanted to do is have a batch of 512 for 224x224 images, which I thought the TPU could handle.
|
https://github.com/pytorch/xla/issues/1442
|
closed
|
[
"question"
] | 2019-12-03T04:26:23Z
| 2019-12-10T18:57:22Z
| null |
tmabraham
|
pytorch/examples
| 671
|
nn.Transformer tutorial uses nn.TransformerEncoder only
|
Hello,
when I search for an nn.Transformer usage example, I only find an example which uses nn.TransformerEncoder. Is there an example that uses nn.Transformer?
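In the meantime, the smallest usage sketch I could put together for nn.Transformer itself is below (dimensions chosen arbitrarily), in case someone can point me to a fuller example:
```python
import torch
import torch.nn as nn

model = nn.Transformer(d_model=32, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2)

src = torch.randn(10, 8, 32)    # (source_len, batch, d_model)
tgt = torch.randn(7, 8, 32)     # (target_len, batch, d_model)
tgt_mask = model.generate_square_subsequent_mask(7)

out = model(src, tgt, tgt_mask=tgt_mask)
print(out.shape)                # torch.Size([7, 8, 32])
```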
|
https://github.com/pytorch/examples/issues/671
|
closed
|
[
"question"
] | 2019-12-02T12:49:33Z
| 2022-03-10T04:46:18Z
| null |
vainaixr
|
huggingface/transformers
| 2,013
|
What is the real parameters to weight the triple loss (L_{ce}, L_{mlm}, L_{cos}) in DistilBert?
|
Hello! Thanks for your great work DistilBert. I want to ask what is the real parameters "alpha" you used in DistilBert to weight the triple loss (L_{ce}, L_{mlm}, L_{cos})?
You did not mention this detail in your NIPS workshop paper (http://arxiv.org/abs/1910.01108). In the [README](https://github.com/huggingface/transformers/blob/master/examples/distillation/README.md) file, you listed two different setups: `--alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0` for single GPU training and `--alpha_ce 0.33 --alpha_mlm 0.33 --alpha_cos 0.33 --alpha_clm 0.0` for distributed training. Can you tell me what is the best setting?
Actually, I have tried to reproduce your DistilBert results. I trained DistilBert with the corpus used by BERT, but the GLUE performance seemed to fall slightly behind your pre-trained `distilbert-base-uncased` by 2 points. I would appreciate it if you could tell me the parameters for reproducibility. Thanks!
|
https://github.com/huggingface/transformers/issues/2013
|
closed
|
[] | 2019-12-01T16:49:05Z
| 2019-12-02T15:37:37Z
| null |
voidism
|