| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/vision
| 1,625
|
Why does the rpn use the L1_Loss?
|
https://github.com/pytorch/vision/blob/master/torchvision/models/detection/rpn.py#L426
The code in rpn.py, line 426, is as follows:
```python
box_loss = F.l1_loss(
    pred_bbox_deltas[sampled_pos_inds],
    regression_targets[sampled_pos_inds],
    reduction="sum",
) / (sampled_inds.numel())
```
However, as stated in the Faster R-CNN paper, the loss function used in the RPN training stage is smooth L1 loss.
I also found that when computing the **rcnn_box_loss**, the loss function used in torchvision is **smooth L1 loss**:
https://github.com/pytorch/vision/blob/master/torchvision/models/detection/roi_heads.py#L47
Why not use **smooth L1 loss** in both places?
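For reference, the piecewise definition of smooth L1 from the paper can be sketched in plain Python (an illustration only, not the torchvision implementation; `beta` follows the common parameterization):

```python
def smooth_l1(x, beta=1.0):
    """Elementwise smooth L1: quadratic for |x| < beta, linear beyond."""
    ax = abs(x)
    if ax < beta:
        return 0.5 * x * x / beta
    return ax - 0.5 * beta
```

Plain L1, by contrast, is simply `abs(x)` everywhere; the two only differ for residuals smaller than `beta`.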
|
https://github.com/pytorch/vision/issues/1625
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2019-12-01T12:54:15Z
| 2019-12-02T12:14:14Z
| null |
TeeyoHuang
|
pytorch/vision
| 1,618
|
Is Faster R-CNN scriptable? I tried, but failed.
|
https://github.com/pytorch/vision/issues/1618
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2019-11-27T06:32:41Z
| 2019-11-30T15:24:03Z
| null |
dao-kun
|
|
pytorch/vision
| 1,617
|
Question about converting custom dataset to coco api
|
https://github.com/pytorch/vision/blob/a44d55d87ba3628ac79292fdcaead7fb98fc130b/references/detection/coco_utils.py#L163
If the box is [3, 10, 6, 20] (xyxy format), the converted box should be [3, 10, 4, 11]. I think this code should add 1, because there are 4 pixels between 3 and 6 and 11 pixels between 10 and 20; it actually counts the pixels on the grid.
Maybe the original computation of the area needs to do this as well, such as in this tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
`area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])`
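Concretely, the two conventions give different areas for the same box; a plain-Python sketch (`inclusive=True` is the +1 pixel-counting convention argued for above):

```python
def box_area_xyxy(box, inclusive=False):
    """Area of a box given as [x1, y1, x2, y2]."""
    x1, y1, x2, y2 = box
    if inclusive:
        # count pixels on the grid: both endpoints included
        return (x2 - x1 + 1) * (y2 - y1 + 1)
    # continuous convention, as in the tutorial's formula
    return (x2 - x1) * (y2 - y1)
```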
|
https://github.com/pytorch/vision/issues/1617
|
closed
|
[
"question",
"module: reference scripts"
] | 2019-11-27T03:22:53Z
| 2019-12-02T12:26:12Z
| null |
kangkang59812
|
pytorch/tutorials
| 735
|
Dataloader with SAMPLER tutorial missing.
|
Original discussion thread: https://discuss.pytorch.org/t/feedback-on-pytorch-for-kaggle-competitions/2252
Previously closed issue: https://github.com/pytorch/tutorials/issues/78
Related PR Merged: https://github.com/pytorch/tutorials/pull/96
Posting a new issue again because the previous issue was closed and the PR merged without providing the complete and thorough tutorial that the initial discussion felt was required.
tldr; how to properly implement
> torch.utils.data.Sampler
Specifically, for my current use case, I have a deep metric loss model that implements an online hard-mining strategy (the probability of selecting some samples per epoch is higher than the rest, based on certain metrics).
It didn't feel correct putting the logic in the transforms, and I currently do the mining in the "run" function:
- Pull the current minibatch1 from the dataloader
- Apply hard mining logic to find samples to train on from current batch :
- dry forward run without back-prop
- get all misclassified samples as 'hard samples' for current batch
- calculate probability ranking of this subset based on certain heuristics ( Wrongly classified sample of higher similarity will have higher probability)
- based on sample rankings again create a dataset on the fly for these samples, wherein `__getitem__` : chooses a minibatch2 as subset of these hard samples (might have repeated samples which have a higher probability ranking)
- run forward and backward pass for samples in minibatch2
For reference, the size of minibatch1 is ~10x that of minibatch2.
The strategy works pretty well in training, though one can imagine the code sanity and running time :disappointed:
I understand if the dataloader class was not intended for online sampling, which requires a forward pass;
but can we at least have a *complete* tutorial on the data.sampler et al. methods showing different offline sampling techniques, i.e., choosing samples from the current batch based on some set heuristics.
Or did I completely misunderstand the use of the Samplers?
@soumith @chsasank @apaszke
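For the offline part of the question, the core of weighted sampling (the idea behind `torch.utils.data.WeightedRandomSampler`) can be sketched torch-free; the function name here is hypothetical:

```python
import random

def weighted_sample_indices(weights, k, seed=None):
    """Draw k dataset indices with replacement, proportional to weights."""
    rng = random.Random(seed)
    return rng.choices(range(len(weights)), weights=weights, k=k)
```

A Sampler subclass would yield these indices from `__iter__`, and the DataLoader would then fetch the corresponding samples; the online (forward-pass-dependent) part indeed falls outside what Sampler was designed for.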
|
https://github.com/pytorch/tutorials/issues/735
|
closed
|
[] | 2019-11-27T00:28:36Z
| 2021-07-30T22:19:49Z
| 3
|
crazysal
|
pytorch/text
| 652
|
How to add a special token in torchtext.data.Field()?
|
Hello,
I defined my text Field as below:
```python
TEXT_openbookQA = Field(tokenize = "spacy",
init_token = '<sos>',
eos_token = '<eos>',
unk_token = '<unk>',
pad_token = '<pad>',
tokenizer_language = 'en',
lower = True)
```
However, in the `openbookQA` text there is a special token named `<mcoption>`. How can I make the text Field recognize this special token?
Thank you,
|
https://github.com/pytorch/text/issues/652
|
closed
|
[] | 2019-11-26T12:50:00Z
| 2019-11-26T13:40:24Z
| null |
h56cho
|
pytorch/pytorch
| 30,408
|
Where is the code for the synchronization of gradients during the backward pass for DDP?
|
## ❓ Questions and Help
Hi, I know the synchronization of gradients happens during the backward pass for DDP, but I didn't find the corresponding code. Where can I find it?
|
https://github.com/pytorch/pytorch/issues/30408
|
closed
|
[] | 2019-11-25T17:15:45Z
| 2019-11-26T00:49:21Z
| null |
meiluzhu
|
huggingface/neuralcoref
| 228
|
Integration of different word embeddings for prediction
|
HI,
I am using SciSpacy with neuralcoref (by adding `ENTITY` to `ACCEPTED_ENTS`) and would also like to use the SciSpacy word vectors if possible.
I already have switched the `self.static_vectors` and `self.tuned_vectors` to point to the `self.vocab.vectors` in the `NeuralCoref` constructor. I also changed `SIZE_EMBEDDING` constant to 300 dims (the dimensions of the SciSpacy vectors).
After these changes I am running into shape conflicts within the `thinc` module.
This said I have three questions:
- Given that I am working with biomedical text, do you think using domain-specific vectors would improve performance, since I would only be using them during prediction rather than training?
- Is there a better way to integrate these embeddings than what I am currently doing?
- If I am on the right path of integrating these embeddings, could you perhaps point me to a resource or give me an idea of how to adjust sizes in the ```# A BUNCH OF SIZES #``` section to accept my embeddings with 300 dimension?
Please let me know if I can provide any more information.
Thanks in advance and for making this very awesome tool :)
|
https://github.com/huggingface/neuralcoref/issues/228
|
closed
|
[
"question",
"wontfix",
"usage"
] | 2019-11-25T17:01:15Z
| 2022-01-09T04:06:41Z
| null |
masonedmison
|
pytorch/vision
| 1,610
|
code for visualization in the object detection tutorial
|
At the end of the [object detection tutorial ](https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#torchvision-object-detection-finetuning-tutorial) it visualizes the masks.
Can you please provide the code for that task, or guidance on how to do it?
|
https://github.com/pytorch/vision/issues/1610
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2019-11-25T15:41:21Z
| 2020-07-07T21:21:26Z
| null |
isalirezag
|
pytorch/vision
| 1,608
|
What's the input format of the fasterrcnn_resnet50_fpn? I mean RGB or BGR.
|
### pytorch>=1.1
I notice that both RGB and BGR input of `[n,c,h,w]` can get a good result (BGR is slightly higher).
```
model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
## RGB
img1 = Image.open('image1.jpg')
## BGR
img2 = np.array(img1)[:, :, [2, 1, 0]].copy()
x1= [transforms.ToTensor()(img1)]
x2= [transforms.ToTensor()(img2)]
predictions1 = model(x1)
predictions2 = model(x2)
```
It seems that `predictions2` is better. So, should I use the BGR format for fine-tuning and eval? I can't find this information in the code, and I only know the size is `[n,c,h,w]`. In the config of Facebook's detectron2, it says:
```
# Values to be used for image normalization (BGR order).
# To train on images of different number of channels, just set different mean & std.
# Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675]
_C.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675]
```
So BGR is the one we should choose?
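For reference, the numpy fancy-indexing above (`[:, :, [2, 1, 0]]`) just reverses the channel axis per pixel; a torch/numpy-free sketch of the same flip (function name hypothetical):

```python
def flip_channels(img):
    """img: nested lists of shape H x W x C; return a copy with channels reversed per pixel."""
    return [[px[::-1] for px in row] for row in img]
```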
|
https://github.com/pytorch/vision/issues/1608
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2019-11-25T12:20:25Z
| 2019-11-25T12:52:15Z
| null |
kangkang59812
|
huggingface/neuralcoref
| 227
|
What is the performance on CoNLL-2012 test set?
|
Hi,
Thank you for your excellent work. I am looking for an off-the-shelf tool to do some coref text processing. I am wondering about the model performance of this repo on the CoNLL-2012 test set, such as the avg. F1 score.
Would you please post it here or in the readme file? Thanks a lot.
|
https://github.com/huggingface/neuralcoref/issues/227
|
closed
|
[
"question",
"perf / accuracy"
] | 2019-11-25T09:26:30Z
| 2019-12-06T21:57:04Z
| null |
magic282
|
pytorch/text
| 649
|
How to perform common sense reasoning task with GPT-2?
|
Hello,
I am new to NLP so I have lots of questions.
I am interested in carrying out a common sense reasoning task with GPT-2, for example, with the Winograd Schema Challenge dataset.
Q1. How should I tokenize the Winograd Schema Challenge dataset to process it with GPT-2 (with the double heads model, for instance)? Can someone please give me an example?
Q2. Can GPT2DoubleHeadsModel be used to conduct common sense reasoning task with Winograd Schema Challenge dataset?
Thank you,
|
https://github.com/pytorch/text/issues/649
|
closed
|
[] | 2019-11-22T12:52:44Z
| 2019-11-23T14:38:47Z
| null |
h56cho
|
pytorch/xla
| 1,399
|
Why does printing progress every step slow things down?
|
## ❓ Questions and Help
@dlibenzi You mentioned the ParallelLoader background sender and its ability somehow to overlap communication between TPU and CPU without interrupting the flow of TPU computations. But, you also mentioned that printing the values of summary statistics (which ultimately requires calling `loss.item()` and so forth) triggers "an exit from the tensor world to CPU world". I'm wondering why this would be the case? Couldn't there be some sort of asynchronous process in which the tensor world does the quick `item()` calculation, sends the value to the CPU in the "background" and resumes its cycle, while the CPU goes to work printing the result?
Thanks very much,
Henry
|
https://github.com/pytorch/xla/issues/1399
|
closed
|
[
"question"
] | 2019-11-21T22:29:43Z
| 2019-11-22T17:29:28Z
| null |
hrbigelow
|
pytorch/xla
| 1,398
|
Should CPU constants be ported to tensors to prevent IR recompilation?
|
## ❓ Questions and Help
I have various constructs in my code like:
```python
rec_loss = - log_pred_target.mean()
ze_norm = (self.bottleneck.ze ** 2).sum(dim=1).sqrt()
norm_loss = self.norm_gamma * torch.abs(ze_norm - 1.0).mean()
total_loss = rec_loss + norm_loss
```
Would moving the `2` and `1.0` constants from CPU to scalar TPU tensors improve anything or will this be cached efficiently in the IR graph?
|
https://github.com/pytorch/xla/issues/1398
|
closed
|
[
"good first issue",
"question",
"stale"
] | 2019-11-21T22:18:27Z
| 2019-12-28T23:23:21Z
| null |
hrbigelow
|
pytorch/vision
| 1,599
|
ResNet identity (line 55) must not be mutable
|
The identity variable in line 55 is mutable:
```python
def forward(self, x):
    identity = x
```
It must be immutable, as follows:
```python
def forward(self, x):
    identity = 1 * x
```
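For context, the concern above is about Python aliasing rather than immutability: `identity = x` binds a second name to the same object, so only a later *in-place* mutation of `x` would be visible through `identity`, while `1 * x` creates a new object. A torch-free sketch:

```python
class Box:
    """Stand-in for a tensor: a mutable holder of one value."""
    def __init__(self, v):
        self.v = v

x = Box(1)
alias = x                # same object: in-place changes to x show through alias
detached = Box(1 * x.v)  # new object, analogous to `identity = 1*x`

x.v = 2                  # in-place mutation of the shared object
```

Whether this matters in ResNet's forward depends on whether `x` is mutated in place afterwards.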
|
https://github.com/pytorch/vision/issues/1599
|
closed
|
[
"question",
"module: models"
] | 2019-11-20T12:39:36Z
| 2019-11-21T13:53:04Z
| null |
Abolfazl-Mehranian
|
pytorch/vision
| 1,598
|
How to feed negative samples during Faster R-CNN training
|
Hi all,
I have lots of non-annotated images in my training set, where there is no object of interest but there are a couple of other objects that should be interpreted as part of the background. Is there any way I can provide background (negative) samples explicitly in my dataloader?
I tried to set a single fake bounding box with label zero for those non-annotated images, set my num_classes to 3 (i.e., I have 2 objects plus background), and then performed transfer learning:
```python
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    pretrained=True, pretrained_backbone=False)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
```
But I received a crash at `/torchvision/models/detection/roi_heads.py", line 34, in fastrcnn_loss`
`sampled_pos_inds_subset = torch.nonzero(labels > 0).squeeze(1)`
I think this is happening because I have fed some images with only label zero, i.e., with no positive bbox.
Is there any workaround for that purpose?
|
https://github.com/pytorch/vision/issues/1598
|
closed
|
[
"enhancement",
"help wanted",
"module: models",
"topic: object detection"
] | 2019-11-20T12:15:54Z
| 2023-03-29T16:37:30Z
| null |
kkirtac
|
huggingface/transformers
| 1,866
|
BertForTokenClassification for NER . what is the conclusion of this output ?
|
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
I'm trying to perform NER using BertForTokenClassification. I saw this sample code on the transformers GitHub page:
```python
import torch
from transformers import BertTokenizer, BertForTokenClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForTokenClassification.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0)  # Batch size 1
print(labels)
outputs = model(input_ids, labels=labels)
loss, scores = outputs[:2]
```
Output loss:
```
tensor(0.5975, grad_fn=<NllLossBackward>)
```
Output scores:
```
tensor([[[-0.1622,  0.1824],
         [-0.1552, -0.0534],
         [-0.3032, -0.1166],
         [-0.2453, -0.1182],
         [-0.4388, -0.1898],
         [-0.3159, -0.1067]]], grad_fn=<AddBackward0>)
```
1. When I printed the loss and scores, I got the values above. How should I interpret this output? What do these values represent for NER? What should I do to get the NER tags for the sentence "Hello, my dog is cute"?
2. I referred to a few NER implementations on GitHub using BERT, and they have a humongous amount of code for performing NER. Is there a simpler way to perform NER using BERT, like how the Flair library has a very simple method for the NER task?
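As a hedged illustration of how such scores are usually turned into tags: take the argmax over the label dimension for each token (plain Python; note that a head loaded straight from `bert-base-uncased` is not fine-tuned for NER, so these indices are not meaningful entity tags yet):

```python
def predict_tags(scores):
    """scores: one list of per-label logits per token -> predicted label index per token."""
    return [max(range(len(tok)), key=lambda i: tok[i]) for tok in scores]
```

Mapping label indices to tag names (e.g., B-PER, O) requires the label map of a model fine-tuned on an NER dataset.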
|
https://github.com/huggingface/transformers/issues/1866
|
closed
|
[
"wontfix"
] | 2019-11-19T09:23:23Z
| 2020-02-04T21:23:21Z
| null |
AjitAntony
|
pytorch/xla
| 1,385
|
How does original PyTorch call XLA's ops?
|
## ❓ Questions and Help
Recently I have been looking into the pytorch/xla code, but I am confused about some things.
- How does original PyTorch call XLA's ops? Is there a PyTorch-XLA internal mechanism?
Any reply will be much appreciated. Thanks!
|
https://github.com/pytorch/xla/issues/1385
|
closed
|
[
"question",
"stale"
] | 2019-11-19T07:45:18Z
| 2019-12-28T16:29:15Z
| null |
alanzhai219
|
pytorch/examples
| 666
|
Distributed training resnet50 using 4 nodes 32 TeslaV100
|
I checked a lot of literature, but I didn't find the results. My questions are as follows:
How many hours does it take to converge? (distributed training of resnet50 using 4 nodes with 32 Tesla V100 cards)
Do you have internal test results that can be shared, to better understand the performance of your distributed training?
|
https://github.com/pytorch/examples/issues/666
|
open
|
[
"distributed"
] | 2019-11-19T06:01:31Z
| 2022-03-09T20:52:45Z
| 0
|
gentelyang
|
pytorch/FBGEMM
| 199
|
[Question] 8bit integers and negative numbers
|
Hey,
I have been reading the code for sparse 8bit gemm: https://github.com/pytorch/FBGEMM/blob/master/test/SpMMI8Test.cc and I have a few questions.
I noticed that `getRandomSparseVector` will only generate positive numbers. Is this because you rely on the `maddubs` instruction? Does it mean that the A matrix can only contain positive numbers?
I noticed this bit in the code:
```c++
for (int i = 0; i < m * k; ++i) {
aptr[i] &= 0x7F;
}
```
You avoid large numbers to avoid saturation. Does this mean there is no handling of saturation when it happens?
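For reference, `& 0x7F` clears the top bit of each byte, forcing every value into [0, 127]; a plain-Python equivalent of that masking (an illustration, not the FBGEMM code):

```python
def mask_low7(values):
    # keep only the low 7 bits of each element -> non-negative, at most 127
    return [v & 0x7F for v in values]
```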
Thanks,
Nick
|
https://github.com/pytorch/FBGEMM/issues/199
|
closed
|
[
"question"
] | 2019-11-18T17:09:26Z
| 2019-11-20T18:08:28Z
| null |
XapaJIaMnu
|
pytorch/vision
| 1,592
|
Unable to load the inception model, or any architecture other than alexnet
|
```python
import torchvision
import torchvision.models as torchMd  # alias inferred from the usage below

# works fine
arch = torchMd.alexnet(pretrained=True)

# gives an error; also tried vgg, densenet
arch = torchMd.inception(pretrained=True)
```
```
AttributeError                            Traceback (most recent call last)
<ipython-input-43-3882461a2f37> in <module>
----> 1 print(torchvision.__version__)
AttributeError: module 'torchvision' has no attribute '__version__'
```
|
https://github.com/pytorch/vision/issues/1592
|
closed
|
[
"question",
"module: models"
] | 2019-11-18T07:30:01Z
| 2019-11-19T10:44:02Z
| null |
richesh09
|
pytorch/xla
| 1,379
|
Successive frames growing, but why?
|
## ❓ Questions and Help
In the attached report below, I see successive frames growing by ~30 lines at each. The relevant code is below. The approach I used was to load all of the training data (about 300 mb) into memory into two tensors (`data_source.snd_data` and `data_source.mel_data`) and then at each training step, fill the batch with a different slice of those tensors. I thought the varying slices at each iteration were causing graph recompilation. But, in the code below, I replace that step with the same hard-coded slice, and the problem remains.
Would anyone have any insights into this problem?
Any help would be greatly appreciated!
```python
def set(self, b, sample_slice, data_source):
ss = sample_slice
# self.voice_index[b] = ss.voice_index
wo = ss.wav_offset
mo = ss.mel_offset
dws = ss.dec_wav_slice
mis = ss.mel_in_slice
self.lcond_slice[b] = ss.lcond_slice
self.loss_wav_slice[b] = ss.loss_wav_slice
# self.wav_input[b,...] = data_source.snd_data[wo + dws[0]:wo + dws[1]]
# self.mel_input[b,...] = data_source.mel_data[mo + mis[0]:mo +
# mis[1],:].transpose(1, 0)
self.wav_input[b,...] = data_source.snd_data[3184397:3186543]
self.mel_input[b,...] = \
        data_source.mel_data[19855:19899,:].transpose(1, 0)
```
[xla.report.618294e.txt](https://github.com/pytorch/xla/files/3855904/xla.report.618294e.txt)
[xla_metrics.618294e.txt](https://github.com/pytorch/xla/files/3855905/xla_metrics.618294e.txt)
|
https://github.com/pytorch/xla/issues/1379
|
closed
|
[
"question",
"stale"
] | 2019-11-17T18:04:30Z
| 2019-12-29T18:57:28Z
| null |
hrbigelow
|
pytorch/vision
| 1,591
|
Training data set for pretrained resnet18
|
Does anybody know what the training data set of the pretrained resnet18 is?
I cannot find official information on the training data set used for the pretrained models in torchvision.models.
|
https://github.com/pytorch/vision/issues/1591
|
closed
|
[
"question",
"module: reference scripts",
"topic: classification"
] | 2019-11-17T09:01:12Z
| 2019-11-18T14:37:55Z
| null |
pantheon5100
|
pytorch/vision
| 1,588
|
pretrained model
|
Does anybody know how to train a pretrained model (e.g., MobileNet v2 in PySOT)?
|
https://github.com/pytorch/vision/issues/1588
|
closed
|
[
"question",
"module: reference scripts",
"topic: classification"
] | 2019-11-16T08:30:07Z
| 2019-11-26T01:58:11Z
| null |
zhu2014yi
|
pytorch/text
| 643
|
How to skip last batch that has a different batch size?
|
## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
Sorry if this is a newbie question.
In `torch.utils.data.DataLoader` we can drop the last batch by specifying `drop_last=True`.
Do we have something equivalent for our `Iterator`? Currently I continue the training loop if I see the current `batch_size` is different from my preset `batch_size`. Is there something built-in?
Thank you very much!
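As a torch-free sketch of the workaround (the function name is hypothetical): wrap any batch iterator and skip batches that are not the expected size, mimicking `drop_last=True`:

```python
def drop_incomplete(batches, batch_size, size_of=len):
    """Yield only batches whose size equals batch_size."""
    for batch in batches:
        if size_of(batch) == batch_size:
            yield batch
```

For a torchtext `Iterator`, `size_of` would need to read the batch's size attribute instead of `len` (an assumption about the batch object).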
|
https://github.com/pytorch/text/issues/643
|
closed
|
[] | 2019-11-16T04:08:41Z
| 2019-11-18T15:54:07Z
| null |
Hans0124SG
|
pytorch/tutorials
| 725
|
transfer_learning_tutorial get a warning under pytorch1.3
|
>`/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:100: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)`
Hello, I'm new to pytorch. I ran the tutorial [transfer_learning_tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html) in Google Colab and got this warning. How do I fix it?
|
https://github.com/pytorch/tutorials/issues/725
|
closed
|
[] | 2019-11-15T08:21:45Z
| 2019-11-15T08:32:13Z
| 1
|
neo0801
|
pytorch/xla
| 1,368
|
How to tell if a graph recompilation is happening?
|
## 📚 Documentation
Thanks so much for the great library! I'm running my Pytorch model on Google Colab with TPU. Following the tips in TROUBLESHOOTING.md, I see the following in my XLA_METRICS_FILE:
```
Metric: CompileTime
TotalSamples: 12
Accumulator: 44s280ms699.409us
ValueRate: 952ms609.347us / second
Rate: 0.25789 / second
Percentiles: 1%=230ms554.170us; 5%=230ms554.170us; 10%=253ms547.395us; 20%=256ms764.061us; 50%=304ms288.564us; 80%=12s512ms567.169us; 90%=12s778ms277.508us; 95%=18s450ms18.269us; 99%=18s450ms18.269us
...
Metric: CompileTime
TotalSamples: 14
Accumulator: 01m03s282ms217.172us
ValueRate: 924ms148.456us / second
Rate: 0.20445 / second
Percentiles: 1%=026ms136.061us; 5%=026ms136.061us; 10%=230ms554.170us; 20%=253ms547.395us; 50%=304ms288.564us; 80%=12s778ms277.508us; 90%=18s450ms18.269us; 95%=19s976ms381.702us; 99%=19s976ms381.702us
[more to follow]
```
There is one of these sections produced per SGD iteration. Does the fact that the ValueRate value is about the same in each one, mean that the graph is being compiled each time? If so, how do I tell what is causing it? I have studied the output of XLA_SAVE_TENSORS_FILE, and I can't find any place where the tensor dimensions are different.
However, I do also see lots of occurrences of `aten::permute`, `aten::view`, `aten::squeeze`, `aten::relu`, etc.
I also find that the code runs quite slow compared to GPU.
Thanks again,
Henry
<!-- A clear and concise description of what content is an issue. -->
|
https://github.com/pytorch/xla/issues/1368
|
closed
|
[
"question"
] | 2019-11-15T03:13:04Z
| 2019-12-03T02:37:56Z
| null |
hrbigelow
|
pytorch/vision
| 1,578
|
PIL Image converted to tensor, then converted back to PIL Image, is not the same as the original
|
I convert a PIL Image to a tensor and then convert it back to a PIL Image. Saving the result and comparing it to the original PIL Image I loaded, they are not the same.
Why is this so?
|
https://github.com/pytorch/vision/issues/1578
|
closed
|
[
"question",
"module: transforms"
] | 2019-11-14T21:37:00Z
| 2019-11-26T12:43:44Z
| null |
Yumin-Sun-00
|
huggingface/transformers
| 1,834
|
Where is Model2Model PreTrainedEncoderDecoder in run_summerization_finetune
|
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
|
https://github.com/huggingface/transformers/issues/1834
|
closed
|
[
"wontfix"
] | 2019-11-14T18:09:24Z
| 2020-03-09T03:39:51Z
| null |
yeliu918
|
pytorch/pytorch
| 29,802
|
How to release gpu memory of intermediate result tensor
|
In the example below, after calling torch.matmul, the GPU memory usage increases by 181,796,864 bytes, which is almost the sum of the sizes of c and b.transpose(2,3). So I guess the unreferenced intermediate result b.transpose(2,3) is stored in GPU memory. How can I release the GPU memory allocated to this intermediate result?
```python
import torch
from torch.autograd import Variable

a = Variable(torch.rand(32, 8, 151, 1024), requires_grad=True).cuda()
b = Variable(torch.rand(32, 8, 151, 1024), requires_grad=True).cuda()
torch.cuda.memory_allocated(0)  # 316669952
c = torch.matmul(a, b.transpose(2, 3))
torch.cuda.memory_allocated(0)  # 498466816, increased by 181796864
c.element_size() * c.nelement()  # 23348224
b.transpose(2, 3).element_size() * b.transpose(2, 3).nelement()  # 158334976
```
## Environment
- PyTorch Version (e.g., 1.0): 1.0.1
- OS (e.g., Linux): centos
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.6.9
- CUDA/cuDNN version: cuda9.2/cudnn7.4.2
- GPU models and configuration:NVIDIA 1080TI
- Any other relevant information:
cc @ngimel
|
https://github.com/pytorch/pytorch/issues/29802
|
closed
|
[
"module: cuda",
"module: memory usage",
"triaged"
] | 2019-11-14T11:36:21Z
| 2019-11-15T15:49:09Z
| null |
akikaaa
|
pytorch/android-demo-app
| 31
|
How to add built AAR libraries to a project
|
Hi,
I've faced an issue. On the PyTorch website there's an intro on how to build and deploy pytorch-mobile from source (https://pytorch.org/mobile/android/#building-pytorch-android-from-source), but the part with Gradle doesn't work for me.
I've successfully built the AAR files, then edited `HelloWorldApp/app/gradle.build` as described in the intro, and added the AAR files to `HelloWorldApp/app/libs/`.
Then I ran `./gradlew installDebug --stacktrace`:
```
> Task :app:javaPreCompileDebug FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:javaPreCompileDebug'.
> Could not resolve all files for configuration ':app:debugCompileClasspath'.
> Failed to transform artifact 'pytorch_android-release.aar (:pytorch_android-release:)' to match attributes {artifactType=android-classes, org.gradle.usage=java-api}.
> Execution failed for JetifyTransform: /root/android-demo-app/HelloWorldApp/app/libs/pytorch_android-release.aar.
> Java heap space
* Try:
Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:javaPreCompileDebug'.
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:38)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:73)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:49)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:43)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:322)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:134)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:129)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:202)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:193)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:129)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
Caused by: org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$ArtifactResolveException: Could not resolve all files for configuration ':app:debugCompileClasspath'.
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.rethrowFailure(DefaultConfiguration.java:1195)
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.access$2100(DefaultConfiguration.java:138)
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.getFiles(DefaultConfiguration.java:1170)
at org.gradle.api.internal.file.AbstractFileCollection.iterator(AbstractFileCollection.java:72)
```
|
https://github.com/pytorch/android-demo-app/issues/31
|
closed
|
[] | 2019-11-14T10:41:10Z
| 2022-08-13T17:06:38Z
| null |
zetyquickly
|
pytorch/examples
| 663
|
how do we pass multiple indices as input to generate multiple outputs in word_language model
|
The current codebase of [`word_language_model/generate.py`](https://github.com/pytorch/examples/blob/master/word_language_model/generate.py) uses a single (randomly sampled) index as `input` and generates a text based on this.
Now, I'd like to extend this a bit and would like to pass a set of indices (i.e. > 1) as `input` and be able to generate a set of texts as output. I tried it with a simple loop based approach of iteratively querying the model but it's taking hours to do this task, since it has to be done sequentially.
Any ideas about how to pass in a list of indices as input, particularly in the line: [`word_language_model/generate.py#L56`](https://github.com/pytorch/examples/blob/master/word_language_model/generate.py#L56) ? This can be called as *batchified generate* function!
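As a torch-free sketch of the "batchified" idea (helper name hypothetical): instead of looping over seed indices one at a time, group them into batches, so each batch can be fed to the model as one input with a leading batch dimension:

```python
def batchify_indices(indices, batch_size):
    """Split a flat list of seed word indices into batches."""
    return [indices[i:i + batch_size] for i in range(0, len(indices), batch_size)]
```

In generate.py this would correspond to making `input` a `(batch, 1)` tensor rather than `(1, 1)`, under the assumption that the model's forward handles a batch dimension.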
|
https://github.com/pytorch/examples/issues/663
|
open
|
[
"nlp"
] | 2019-11-14T03:55:33Z
| 2022-03-09T23:42:32Z
| null |
kmario23
|
pytorch/pytorch
| 29,745
|
How to add PyTorch to requirements.txt
|
I'm trying to include PyTorch in a requirements.txt file to be installed in a Docker container, but can't seem to get it to work. I've tried adding the following with no luck:
```
torch==1.3.1
> ERROR: Could not find a version that satisfies the requirement torch==1.3.1 (from -r /requirements/./base.txt (line 28))
```
```
torch==1.2.0+cpu
> Could not find a version that satisfies the requirement torch==1.2.0+cpu (from -r /requirements/./base.txt (line 28)) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
```
How do you add PyTorch to requirements.txt?
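One commonly used workaround is to point pip at PyTorch's wheel index from inside the requirements file itself; a sketch (the index URL is PyTorch's stable wheel page, and the `+cpu` pin is illustrative of the pattern):

```
# requirements.txt -- illustrative sketch
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.3.1+cpu
```

pip reads per-line options such as `--find-links` from requirements files, so the Docker build needs no extra flags.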
|
https://github.com/pytorch/pytorch/issues/29745
|
closed
|
[] | 2019-11-13T20:12:58Z
| 2021-01-19T13:35:28Z
| null |
econti
|
pytorch/xla
| 1,348
|
How to downgrade torch version?
|
Hey guys, I'm trying to train my image classification model on multiple cores. I'm using the PyTorch nightly version, but the problem is that the torch version is 1.4.0a0+be75795, which isn't compatible with my torchvision version (0.3.0). It gives the following error:
`AttributeError: module 'torch' has no attribute 'gels'`
The `gels` attribute is defined in previous torch versions, so how can I downgrade only the torch version to 1.2.0 once I'm inside the container?
Thanks
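One option inside the container is to uninstall the nightly wheel and pin an older release. A hedged sketch (the torch/torchvision pairing is my assumption from the release history, and note that downgrading torch will most likely break the `torch_xla` integration, which is built against the nightly):

```
# inside the container: replace the nightly torch with a pinned release
pip uninstall -y torch
pip install torch==1.2.0
# if torchvision 0.3.0 then complains, 0.4.0 is the release paired with torch 1.2.0
pip install torchvision==0.4.0
```

The less invasive alternative is to upgrade torchvision to match the nightly torch, or replace the removed `torch.gels` call with `torch.lstsq` in your own code.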
|
https://github.com/pytorch/xla/issues/1348
|
closed
|
[
"bug"
] | 2019-11-13T06:17:52Z
| 2019-11-14T00:21:42Z
| null |
ajay960singh
|
pytorch/examples
| 660
|
how to run resnet on Single node, multiple GPUs
|
Can I use `CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python main.py -a resnet50 .......`?
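Yes: when you pass neither `--gpu` nor the distributed flags, main.py falls back to wrapping the model in `nn.DataParallel`, which splits each batch across all visible GPUs. A minimal sketch of that pattern (a toy model stands in for resnet50; it also runs on CPU):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10)  # stand-in for torchvision.models.resnet50()
if torch.cuda.device_count() > 1:
    # replicate the model on every GPU visible via CUDA_VISIBLE_DEVICES
    model = nn.DataParallel(model)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

out = model(torch.randn(4, 10).to(device))  # the batch is scattered across devices
print(out.shape)
```

For best throughput the README recommends the multi-process `DistributedDataParallel` path instead (`--multiprocessing-distributed`), since `DataParallel` is single-process and bottlenecks on the GIL.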
|
https://github.com/pytorch/examples/issues/660
|
closed
|
[] | 2019-11-12T03:42:46Z
| 2019-11-12T03:43:53Z
| null |
gentelyang
|
pytorch/examples
| 659
|
Do we need average_gradient when we do mutiprocess distributed training?
|
In the tutorial, it is said that we need to write `average_gradients` to get the average gradient across processes before calling `optimizer.step()`. However, in the ImageNet example, `average_gradients` is not there. Does this mean we no longer need this function in newer versions of PyTorch for multiprocess distributed training? (I am using torch 1.3.0)
|
https://github.com/pytorch/examples/issues/659
|
closed
|
[] | 2019-11-12T00:11:38Z
| 2019-11-12T03:43:36Z
| 1
|
dzk9528
|
pytorch/pytorch
| 29,521
|
How to perform multi-task regression with pytorch?
|
```
import torch
from torch import nn
import torch.nn.functional as F
class mynet(nn.Module):
    def __init__(self):
        super(mynet, self).__init__()
        self.lin1 = nn.Linear(5, 10)
        self.lin2 = nn.Linear(10, 3)
        self.lin3 = nn.Linear(10, 4)

    def forward(self, x):
        x = self.lin1(x)
        x1 = self.lin2(x)
        x2 = self.lin3(x)
        return x1, x2

if __name__ == '__main__':
    x = torch.randn(1000, 5)
    y1 = torch.randn(1000, 3)
    y2 = torch.randn(1000, 4)
    model = mynet()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-4)
    for epoch in range(100):
        model.train()
        optimizer.zero_grad()
        out1, out2 = model(x)
        loss = 0.2 * F.mse_loss(out1, y1) + 0.8 * F.mse_loss(out2, y2)
        loss.backward()
        optimizer.step()
```
Although the code above runs, I have a question: if I set loss = 0.2*loss1 + 0.8*loss2, how is the loss divided between the two parts in proportion during backpropagation?
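For intuition: autograd handles the split for you. `backward()` applies the chain rule, so each branch's parameters receive that branch's gradient scaled by its coefficient. A tiny scalar demo (toy values of my own choosing):

```python
import torch

w = torch.tensor(2.0, requires_grad=True)
loss1 = w * 3.0      # d(loss1)/dw = 3
loss2 = w * w        # d(loss2)/dw = 2w = 4 at w = 2
loss = 0.2 * loss1 + 0.8 * loss2
loss.backward()
# the chain rule scales each branch by its weight: 0.2 * 3 + 0.8 * 4 = 3.8
print(round(w.grad.item(), 4))
```

In the network above, `lin2` only feeds `loss1` and `lin3` only feeds `loss2`, so their gradients are scaled by 0.2 and 0.8 respectively, while the shared `lin1` receives the weighted sum of both contributions.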
|
https://github.com/pytorch/pytorch/issues/29521
|
closed
|
[] | 2019-11-10T11:35:47Z
| 2019-11-11T03:50:40Z
| null |
thu-wangz17
|
pytorch/pytorch
| 29,517
|
Where is the source code for mathematical operations like specifically torch.mean()?
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
|
https://github.com/pytorch/pytorch/issues/29517
|
closed
|
[] | 2019-11-10T07:08:09Z
| 2019-11-10T08:56:45Z
| null |
C-Weed28
|
pytorch/pytorch
| 29,441
|
error when export to onnx:Auto nesting doesn't know how to process an input object of type maskrcnn_benchmark.structures.image_list.ImageList. Accepted types: Tensors, or lists/tuples of them
|
## ❓ Questions and Help
pytorch:1.0.0
cuda:10.0
torchvision:0.2.1
ubuntu:16.04
i clone the [facebook/maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark), and want to export the model to onnx:
```
x = torch.ones(1, 3, 224, 224, requires_grad=True)
torch.onnx.export(model, x, "faster.onnx", export_params=True)
```
but it get the error:
```
......
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 487, in __call__
result = self._slow_forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in _slow_forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 357, in forward
return self.module(*inputs[0], **kwargs[0])
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 487, in __call__
result = self._slow_forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in _slow_forward
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py", line 50, in forward
proposals, proposal_losses = self.rpn(images, features, targets)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 487, in __call__
result = self._slow_forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 464, in _slow_forward
input_vars = tuple(torch.autograd.function._iter_tensors(input))
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/function.py", line 284, in _iter
for var in _iter(o):
File "/opt/conda/lib/python3.6/site-packages/torch/autograd/function.py", line 293, in _iter
if condition_msg else ""))
ValueError: Auto nesting doesn't know how to process an input object of type maskrcnn_benchmark.structures.image_list.ImageList. Accepted types: Tensors, or lists/tuples of the
```
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof
|
https://github.com/pytorch/pytorch/issues/29441
|
closed
|
[
"module: onnx",
"triaged"
] | 2019-11-08T06:14:52Z
| 2021-12-23T01:43:59Z
| null |
zsk423200
|
pytorch/pytorch
| 29,434
|
How to know which whl version can be selected?
|
@svenstaro @eklitzke @jfsantos I want to use pip to install torch with CUDA 10. I know I can use a command like:
`pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.1.post2-cp36-cp36m-linux_x86_64.whl`
When I choose the Python version cp37, an error is reported:
`ERROR: torch-1.1.0-cp37-cp37m-linux_x86_64.whl is not a supported wheel on this platform.`
So I want to know which whl versions can be selected.
cc @ezyang
|
https://github.com/pytorch/pytorch/issues/29434
|
closed
|
[
"module: binaries",
"triaged"
] | 2019-11-08T02:52:04Z
| 2019-11-09T05:54:40Z
| null |
moyans
|
pytorch/pytorch
| 29,422
|
How to run inference with the nn.TransformerDecoder layer
|
I am using a customized Transformer with an nn.TransformerDecoder layer. It seems that nn.TransformerDecoder does not directly support the inference process (generation/testing), i.e. feeding token ids one at a time with a fixed memory generated by an nn.TransformerEncoder layer. Is there a tutorial I can refer to? I didn't find one in the official documents. Thank you in advance for your help!
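As far as I know there is no built-in generate loop; the common pattern is a greedy decoding loop that re-runs the decoder on all tokens generated so far against the fixed encoder memory, with a causal mask. A minimal sketch (tiny random weights; using token id 0 as BOS is an assumption):

```python
import torch
import torch.nn as nn

d_model, nhead, vocab = 16, 4, 30
embed = nn.Embedding(vocab, d_model)
layer = nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward=32)
decoder = nn.TransformerDecoder(layer, num_layers=2).eval()
proj = nn.Linear(d_model, vocab)

memory = torch.randn(5, 1, d_model)       # fixed encoder output: (src_len, batch, d_model)
ys = torch.zeros(1, 1, dtype=torch.long)  # start sequence: a single BOS id
with torch.no_grad():
    for _ in range(4):                    # generate 4 tokens greedily
        sz = ys.size(0)
        # causal mask: -inf strictly above the diagonal, 0 elsewhere
        tgt_mask = torch.triu(torch.full((sz, sz), float("-inf")), diagonal=1)
        out = decoder(embed(ys), memory, tgt_mask=tgt_mask)  # (sz, 1, d_model)
        next_tok = proj(out[-1]).argmax(dim=-1)              # (1,)
        ys = torch.cat([ys, next_tok.unsqueeze(0)], dim=0)
print(ys.shape)
```

This recomputes earlier positions every step; production decoders cache them, but the simple loop above is the usual starting point.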
|
https://github.com/pytorch/pytorch/issues/29422
|
closed
|
[] | 2019-11-07T23:41:50Z
| 2019-11-08T21:47:20Z
| null |
xdwang0726
|
pytorch/vision
| 1,557
|
Can KeypointRCNN also detect objects that do not need to be predicted with keypoints?
|
As far as I understand, keypoints are computed for all the box classes (apart from background) in Keypoint R-CNN. I need to do object detection and keypoint prediction at the same time, but keypoints should only be predicted for one class. Does the current version support this?
If not, I would need to modify some of the code in FasterRCNN and KeypointRCNN as far as I understand but in principle it is possible, right?
|
https://github.com/pytorch/vision/issues/1557
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2019-11-06T00:26:08Z
| 2019-11-06T09:56:00Z
| null |
anuar12
|
pytorch/examples
| 656
|
About DCGAN datasets
|
May I know what is the dataset URL for fake input in DCGAN example?
|
https://github.com/pytorch/examples/issues/656
|
closed
|
[] | 2019-11-05T18:51:11Z
| 2022-03-09T23:28:56Z
| 1
|
mahmoodn
|
pytorch/pytorch
| 29,190
|
How to run two different JIT models on two GPUs in one script?
|
I have an encoder-decoder model. After converting the encoder and decoder into JIT models, I want to load the encoder on GPU:0; the encoder outputs **Keys** and **Values**. Then I move the **Keys** and **Values** to GPU:1, since the decoder is loaded on GPU:1.
```python
encoder = torch.jit.load(feat_model).cuda(0)
gru_decoder = torch.jit.load(gru_model, map_location=torch.device("cpu")).cuda(1)
loader = getLoader(data_path, batch_size)
for data in loader:
    audio, label = data
    batch_size = audio.size(0)
    k, v = encoder(audio.type("torch.FloatTensor").cuda(0))
    k = k.cuda(1)
    v = v.cuda(1)
    hidden = torch.zeros(1, batch_size, 512).type("torch.FloatTensor").cuda(1)
    target = torch.tensor(sos_id).repeat(batch_size).cuda(1)
    for step in range(k.size(1)):
        probs, hidden = gru_decoder(target, hidden, k, v)
        target = torch.argmax(probs, dim=-1)
```
I have checked that target, hidden, k, v are on GPU:1. However, an error occurs:
```
arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorIndex.cu:519:
operation failed in interpreter:
op_version_set = 0
def forward(self,
    target: Tensor,
    hx: Tensor,
    keys: Tensor,
    values: Tensor) -> Tuple[Tensor, Tensor]:
  input_1 = torch.to(target, dtype=4, layout=0, device=torch.device("cuda"), non_blocking=False, copy=False)
  _0 = torch.embedding(self.classifier.embedding.weight, input_1, -1, False, False)
       ~~~~~~~~~~~~~~~ <--- HERE
  input_2 = torch.unsqueeze(_0, 1)
  _1 = [self.classifier.rnn.weight_ih_l0, self.classifier.rnn.weight_hh_l0, self.classifier.rnn.bias_ih_l0, self.classifier.rnn.bias_hh_l0]
  querys, _2 = torch.gru(input_2, hx, _1, True, 1, 0., False, False, True)
  _3 = torch.matmul(keys, torch.permute(querys, [0, 2, 1]))
  input_3 = torch.div(_3, CONSTANTS.c0)
  attn_w = torch.softmax(input_3, 1)
  sums = torch.matmul(torch.permute(attn_w, [0, 2, 1]), values)
  input_4 = torch.view(torch.add(querys, sums, alpha=1), [-1, 512])
  input = torch.addmm(self.classifier.h.bias, input_4, torch.t(self.classifier.h.weight), beta=1, alpha=1)
```
It seems that the embedding layer in decoder is on GPU:0. But I have already set decoder on CUDA:1.
Does anyone have any solutions or ideas? Thanks a lot.
cc @suo
|
https://github.com/pytorch/pytorch/issues/29190
|
open
|
[
"oncall: jit",
"triaged"
] | 2019-11-05T10:09:34Z
| 2020-03-19T06:14:43Z
| null |
lzj9072
|
pytorch/vision
| 1,553
|
Trained Mask RCNN without ground truth bounding boxes
|
Hi all,
Is it acceptable to train Mask R-CNN without bounding boxes? I want to generate only negative samples after the RPN in order to lower false positive cases.
|
https://github.com/pytorch/vision/issues/1553
|
closed
|
[
"question",
"module: models",
"topic: object detection"
] | 2019-11-05T03:08:39Z
| 2019-11-05T10:48:14Z
| null |
ghost
|
pytorch/vision
| 1,552
|
Best practice to run Mask R-CNN in parallel
|
What is the best practice to run Mask R-CNN in parallel?
@fmassa wrote in #1255
> The current code assumes that you are using 1 GPU per process, with DistributedDataParallel.
Is this information up-to-date?
|
https://github.com/pytorch/vision/issues/1552
|
closed
|
[
"question",
"module: reference scripts",
"topic: object detection"
] | 2019-11-04T15:56:36Z
| 2019-11-05T10:42:50Z
| null |
maxfrei750
|
pytorch/QNNPACK
| 68
|
How to build dependencies separately
|
I'm trying to add a package for QNNPACK to the [Spack package manager](https://spack.io). I see that QNNPACK downloads its own dependencies, and that this can be avoided by setting `*_SOURCE_DIR` via cmake. Is there a way to point to an existing external installation instead of a source directory so that Spack doesn't need to rebuild all of these dependencies? Spack is designed to work on air-gapped supercomputers that don't have internet access, so I can't have it download anything at build time.
|
https://github.com/pytorch/QNNPACK/issues/68
|
open
|
[] | 2019-11-01T22:08:50Z
| 2019-11-01T22:08:50Z
| null |
adamjstewart
|
pytorch/examples
| 653
|
What is the meaning of transforms.Normalize((0.1307,), (0.3081,)) in mnist
|
In mnist/main.py, when reading the dataset using DataLoader, there is a line:
`transforms.Normalize((0.1307,), (0.3081,))`
Can anyone explain its meaning? I know it normalizes the data, but why are there two parameters, and where do 0.1307 and 0.3081 come from?
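Normalize takes one mean and one std per channel, and MNIST images have a single channel, hence one value in each tuple. The values 0.1307 and 0.3081 are the mean and standard deviation of the MNIST training-set pixels after ToTensor scales them to [0, 1]. What the transform computes, as a sketch:

```python
import torch

# transforms.Normalize((0.1307,), (0.3081,)) applies (x - mean) / std per channel
mean, std = 0.1307, 0.3081
img = torch.rand(1, 28, 28)          # a fake one-channel "image" in [0, 1]
normalized = (img - mean) / std      # what Normalize does to each pixel
restored = normalized * std + mean   # the transform is invertible
print(torch.allclose(restored, img, atol=1e-6))
```

After this shift and scale, the training pixels have roughly zero mean and unit variance, which tends to make optimization better conditioned.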
|
https://github.com/pytorch/examples/issues/653
|
closed
|
[] | 2019-11-01T15:41:13Z
| 2024-07-30T12:09:26Z
| null |
copyrightly
|
pytorch/examples
| 652
|
Why not divide by batch size?
|
https://github.com/pytorch/examples/blob/4e00723456160d910092aae567a0b8daf66c49ec/vae/main.py#L82
I think the final loss should be **(BCE + KLD) / batch_size**. Is that right?
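For intuition, dividing the summed loss by a constant only rescales every gradient by that constant, which is equivalent to shrinking the learning rate by the same factor; the optimum is unchanged. A tiny sketch with toy scalars of my own choosing:

```python
import torch

w1 = torch.tensor(1.0, requires_grad=True)
w2 = torch.tensor(1.0, requires_grad=True)
(w1 * 6.0).backward()        # gradient 6.0
((w2 * 6.0) / 3).backward()  # dividing the loss by N scales the gradient by 1/N
print(w1.grad.item(), w2.grad.item())  # 6.0 2.0
```

So summing vs. averaging mostly changes the effective learning rate; the one substantive point is that both BCE and KLD should use the same convention so their relative weighting is preserved.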
|
https://github.com/pytorch/examples/issues/652
|
closed
|
[] | 2019-11-01T08:56:53Z
| 2022-03-09T23:26:30Z
| 2
|
Johnson-yue
|
pytorch/xla
| 1,280
|
machine translation validation fails with multi-process
|
## ❓ Questions and Help
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behavior:
1. create an instance using the latest torch-xla
```bash
export PROJECT_NAME=xxx
gcloud config set project ${PROJECT_NAME}
gcloud compute --project=${PROJECT_NAME} instances create instance-1 \
--zone=europe-west4-a \
--machine-type=n1-standard-8 \
--image=debian-9-torch-xla-v20191026 \
--image-project=ml-images \
--boot-disk-size=200GB
```
2. conda activate `torch-xla-nightly`
3. run machine translation scirpt following https://cloud.google.com/tpu/docs/tutorials/transformer-pytorch in tpu branch of fairseq-tpu (https://github.com/pytorch-tpu/fairseq/tree/tpu) as
```bash
gcloud compute tpus create transformer-pytorch-tutorial \
--zone=europe-west4-a \
--network=default \
--range=10.2.3.0 \
--version=pytorch-nightly \
--accelerator-type=v3-8
export TPU_IP_ADDRESS=ip-address; \
export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470";
python train.py \
$HOME/pytorch-tutorial-data/wmt18_en_de_bpej32k \
--save-interval=1 \
--arch=transformer_vaswani_wmt_en_de_big \
--max-target-positions=64 \
--attention-dropout=0.1 \
--no-progress-bar \
--criterion=label_smoothed_cross_entropy \
--source-lang=en \
--lr-scheduler=inverse_sqrt \
--min-lr 1e-09 \
--skip-invalid-size-inputs-valid-test \
--target-lang=de \
--label-smoothing=0.1 \
--update-freq=1 \
--optimizer adam \
--adam-betas '(0.9, 0.98)' \
--warmup-init-lr 1e-07 \
--lr 0.0005 \
--warmup-updates 4000 \
--share-all-embeddings \
--dropout 0.3 \
--weight-decay 0.0 \
--valid-subset=valid \
--max-epoch=25 \
--input_shapes 128x64 \
--num_cores=8 \
--metrics_debug \
--log_steps=100
```
After the first epoch during validation, it reports
`/anaconda3/envs/torch-xla-nightly/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))` and then crashes. There is no checkpoint saved, either.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
It crashes with a SIGKILL from multiprocessing:
```base
Traceback (most recent call last):
File "train.py", line 632, in <module>
cli_main()
File "train.py", line 623, in cli_main
xmp.spawn(_mp_fn, args=(args,), nprocs=args.num_cores)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 154, in spawn
_start_fn, args=(fn, args), nprocs=nprocs, join=join, daemon=daemon)
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 107, in join
(error_index, name)
Exception: process 0 terminated with signal SIGKILL
```
<!-- A clear and concise description of what you expected to happen. -->
## Environment
- reproducible on XLA backend [CPU/TPU]: TPU
- torch_xla version: torch-xla-nightly (v1026)
- Any other relevant information:
|
https://github.com/pytorch/xla/issues/1280
|
closed
|
[
"question"
] | 2019-10-31T20:24:20Z
| 2021-05-22T04:59:05Z
| null |
sIncerass
|
pytorch/vision
| 1,538
|
Problem train/finetuning segmentation (fcn_resnet101) on voc data
|
Hi
Thanks for a great api.
I am trying to finetune fcn_resnet101 (pretrained on the COCO dataset), but after one epoch it is much worse on the VOC data than before.
If I test the pretrained fcn_resnet101 on the VOC data, I get mean IoU 73.3.
Then I train fcn_resnet101 on the VOC data, and after one epoch I get mean IoU 3.8.
Why is it so much worse after training for one epoch?
It seems like I am not using the pretrained network for training.
The command line i am using for finetuning the network is:
python3 -m torch.distributed.launch --use_env train.py --lr 0.02 --dataset voc -b 2 --model fcn_resnet101 --aux-loss --pretrained
And for testing i use:
python3 -m torch.distributed.launch --use_env train.py --lr 0.02 --dataset voc -b 2 --model fcn_resnet101 --aux-loss --pretrained --test-only
Has anybody experienced the same?
I hope somebody can help, because after this I want to train on my own dataset.
|
https://github.com/pytorch/vision/issues/1538
|
closed
|
[
"question",
"module: reference scripts",
"topic: semantic segmentation"
] | 2019-10-30T15:10:32Z
| 2019-11-01T08:37:40Z
| null |
Denlar2
|
pytorch/examples
| 650
|
The fast-neural-style example takes too long; how can I speed it up?
|
1. 32346.619 ms
2. 12375.127ms
|
https://github.com/pytorch/examples/issues/650
|
closed
|
[] | 2019-10-30T10:56:34Z
| 2022-03-09T23:24:07Z
| null |
tcxia
|
pytorch/xla
| 1,262
|
How to share weights memory while running big models
|
## ❓ Questions and Help
Hello, I use pytorch-xla multiprocessing approach to train my gpt2 model from `huggingface-transformers`. When training from pretrained weights, the model is however loaded multiple times, which increase the need for host memory. While for GPT2-small it's not a problem. GPT2-large can fill up to 80GB of ram when loaded by all processes. What is the suggested way to share host memory for model weights while running multiprocessing? Is this possible at all?
The part of code, run by each thread that causes the problem:
```
model_class = GPT2LMHeadModel
model = model_class.from_pretrained("gpt2")
model.to(device).train()
```
|
https://github.com/pytorch/xla/issues/1262
|
closed
|
[
"question"
] | 2019-10-30T09:59:28Z
| 2020-02-27T18:44:54Z
| null |
Glorf
|
pytorch/pytorch
| 28,868
|
How to build caffe2 with ONNX opset version greater than 9?
|
## ❓ Questions and Help
Hello,
I'm currently working with the freshly merged feature pytorch/vision#1401 and wasn't able to find a way to make Caffe2 work with ONNX opset 10.
Is there a way to build Caffe2 from source with this opset?
|
https://github.com/pytorch/pytorch/issues/28868
|
closed
|
[] | 2019-10-30T09:51:52Z
| 2019-10-31T00:50:02Z
| null |
zetyquickly
|
pytorch/vision
| 1,534
|
The output of `features` is 512*7*7, so why do we still need AdaptiveAvgPool2d here to produce a 7*7 output?
|
https://github.com/pytorch/vision/blob/13b35ffaa5167f3713ea7a53c43395d90b3a7cbc/torchvision/models/vgg.py#L44
|
https://github.com/pytorch/vision/issues/1534
|
closed
|
[
"question",
"module: models",
"topic: classification"
] | 2019-10-30T02:44:27Z
| 2019-10-30T10:04:25Z
| null |
shenlinyao
|
pytorch/xla
| 1,260
|
Is tensorboard visualization of computation graphs supported?
|
Hi. I would like to know whether it is possible to dump a TensorBoard visualization of the structure of the computation graph and the TPU compatibility graph for debugging purposes.
[reference](https://cloud.google.com/tpu/docs/cloud-tpu-tools#profile_tab) This can be done in TF by setting the "model_dir" attribute of tf.estimator API.
|
https://github.com/pytorch/xla/issues/1260
|
closed
|
[
"question",
"stale"
] | 2019-10-30T02:18:09Z
| 2019-12-13T07:44:20Z
| null |
20171130
|
pytorch/android-demo-app
| 24
|
Hello, I use Java to load my trained model, but the program gets stuck at the module.forward() step and never returns. What should I do?
|
https://github.com/pytorch/android-demo-app/issues/24
|
closed
|
[] | 2019-10-28T10:23:04Z
| 2019-11-20T23:35:59Z
| null |
niushaoda
|
|
pytorch/pytorch
| 28,778
|
Android quantized model (MobileNetV2): why is the first forward pass very slow while later forwards are faster, and how can I fix it?
|
## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a
|
https://github.com/pytorch/pytorch/issues/28778
|
closed
|
[
"oncall: quantization",
"triaged"
] | 2019-10-28T04:42:36Z
| 2019-10-29T05:53:17Z
| null |
hexiangquan
|
pytorch/pytorch
| 28,776
|
How to use torch.quantization.get_observer_dict(mod, target_dict, prefix='')
|
## ❓ How to use torch.quantization.get_observer_dict(mod, target_dict, prefix='') to get the observer dict
Can you provide an example for this usage? Thanks a lot!
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a
|
https://github.com/pytorch/pytorch/issues/28776
|
closed
|
[
"oncall: quantization",
"triaged"
] | 2019-10-28T04:13:45Z
| 2019-10-29T01:40:28Z
| null |
vippeterhou
|
pytorch/pytorch
| 28,771
|
Why is the output dtype quint8 in static quantization? What does static quantization do under the hood?
|
Here is an example of static quantization. My Python version is 3.7 and torch is 1.3:
```python
import torch
import torch.nn as nn
m = nn.quantized.Linear(20, 30)
input = torch.randn(128, 20)
input = torch.quantize_per_tensor(input, 1.0, 0, torch.quint8)
output = m(input)
print(output.dtype)
```
I am confused about why the output dtype is quint8 rather than float or int32 in static quantization. Won't this cause accuracy loss when the model 'm' is followed by a softmax layer at the end? What does static quantization do under the hood?
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @pytorch/quantization
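The quint8 output is intentional: in a statically quantized model each op consumes and produces quantized tensors (int8 storage plus a scale and zero-point), so layers can be chained without converting back to float at every step. Internally the matmul accumulates in int32 and is requantized to the output's scale. Before a float-only op such as softmax you dequantize explicitly. A minimal sketch (random inputs and the module's default output scale, so the numbers are illustrative only):

```python
import torch
import torch.nn as nn

m = nn.quantized.Linear(20, 30)
x = torch.quantize_per_tensor(torch.randn(4, 20), 1.0, 0, torch.quint8)
out = m(x)
# the output stays quantized; convert back to float before a softmax layer
print(out.dtype, out.dequantize().dtype)
```

The accuracy cost of the final requantization is usually small because the output scale/zero-point are calibrated to the observed activation range during the prepare/calibrate step.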
|
https://github.com/pytorch/pytorch/issues/28771
|
closed
|
[
"oncall: quantization",
"triaged"
] | 2019-10-28T02:11:47Z
| 2020-04-15T01:02:31Z
| null |
litaozijin
|
pytorch/examples
| 648
|
C++ MNIST without CUDA
|
Hi
Following instructions for MNIST in C++ I get this after make:
```
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
CUDA_TOOLKIT_ROOT_DIR not found or specified
-- Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)
CMake Error at (my path to)/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:90 (message):
Your installed Caffe2 version uses CUDA but I cannot find the CUDA
libraries. Please set the proper CUDA prefixes and / or install CUDA.
```
I was wondering if there is a way to run this without CUDA.
Thanks
|
https://github.com/pytorch/examples/issues/648
|
open
|
[
"c++"
] | 2019-10-27T22:44:15Z
| 2022-03-09T20:49:35Z
| 0
|
maziar840
|
pytorch/text
| 629
|
How to use custom-built Torchtext vocabulary with the HuggingFace TransfoXLLMHeadModel?
|
Hello,
I am trying to use my custom built vocabulary which I defined using Torchtext functions with the HuggingFace TransfoXLLMHeadModel, and I am having some troubles with it.
I defined my text field as below:
```python
# Import packages
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import TransfoXLConfig, TransfoXLTokenizer, TransfoXLLMHeadModel
from transformers import AdamW, WarmupLinearSchedule
import spacy
import torchtext
from torchtext.data.utils import get_tokenizer
from torchtext.data import Field, BPTTIterator, TabularDataset
import tensorflow as tf
#import lineflow as lf
#import lineflow.datasets as lfds
import math
import random
import numpy as np
import pandas as pd
import time
# define tokenizer
en = spacy.load('en')
def Sp_Tokenizer(text):
return [tok.text for tok in en.tokenizer(text)]
# define the English text field
TEXT = Field(tokenize = Sp_Tokenizer,
init_token='<sos>',
                     eos_token='<eos>',
                     unk_token='<unk>',
tokenizer_language='en',
lower=True)
# load WikiText-2 dataset and split it into train and test set
train_Wiki2, val_Wiki2, test_Wiki2 = torchtext.datasets.WikiText2.splits(TEXT)
train_Wiki103, val_Wiki103, test_Wiki103 = torchtext.datasets.WikiText103.splits(TEXT)
train_Penn, val_Penn, test_Penn = torchtext.datasets.PennTreebank.splits(TEXT)
# build custom vocabulary based on the field that we just defined.
TEXT.build_vocab(train_Wiki2, val_Wiki2, test_Wiki2,
train_Wiki103, val_Wiki103, test_Wiki103,
train_Penn, val_Penn, test_Penn)
```
and then I defined the HuggingFace transformer's configuration as below:
```python
# set hyperparameter ntokens
ntokens = len(TEXT.vocab.stoi)
# define transformer-XL configuration.
transfoXLconfig = TransfoXLConfig(vocab_size_or_config_json_file = ntokens,
cutoffs = [20000, 40000, 200000],
d_model = 64,
d_embed = 64,
n_head = 16,
d_head = 64,
n_layer = 5,
attn_type = 0,
dropout = 0.1,
output_hidden_states = True,
output_attentions = True)
# define the transformer-XL model based on the specified configuration.
model = TransfoXLLMHeadModel(transfoXLconfig)
# add new tokens to the embeddings of our model
model.resize_token_embeddings(ntokens)
```
and then I want to somehow specify that I want to use my `TEXT.vocab` that I defined earlier via Torchtext for my vocabulary along with the TransfoXLLMHeadModel, but I am not sure how to do this. Can someone help me on this? Thank you!
|
https://github.com/pytorch/text/issues/629
|
closed
|
[] | 2019-10-27T09:02:13Z
| 2019-11-01T15:21:23Z
| null |
h56cho
|
pytorch/android-demo-app
| 23
|
Does a "pth" model need to be converted to "pt", and how do I convert it?
|
https://github.com/pytorch/android-demo-app/issues/23
|
open
|
[] | 2019-10-25T09:52:27Z
| 2020-08-25T03:50:08Z
| null |
niushaoda
|
|
pytorch/vision
| 1,523
|
Unable to pass `extensions` when creating custom `Kinetics400` Video Dataset
|
Thank you for the video support!
When imported using `from torchvision.datasets.kinetics import *`, the `Kinetics400` class doesn't accept an `extensions` argument:
```python
data = Kinetics400(root=data_path, frames_per_clip=32, extensions=('.mp4',))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-904c8992e847> in <module>
----> 1 data = Kinetics400(root=data_path, frames_per_clip=32, extensions=('.mp4',))
TypeError: __init__() got an unexpected keyword argument 'extensions'
```
However, if I copy-paste the code from `kinetics.py` in my script/notebook, I have the option to pass in an `extensions` argument and it works fine for a dataset with, say, `.mp4` videos.
Why does this happen?
I'm using `python 3.7+`, torch `1.3.0` and `torchvision 0.4.1`
|
https://github.com/pytorch/vision/issues/1523
|
closed
|
[
"question",
"module: datasets",
"module: video"
] | 2019-10-25T07:28:37Z
| 2019-10-25T09:48:03Z
| null |
rsomani95
|
huggingface/transformers
| 1,626
|
What is currently the best way to add a custom dictionary to a neural machine translator that uses the transformer architecture?
|
## ❓ Questions & Help
It's common to add a custom dictionary to a machine translator to ensure that terminology from a specific domain is correctly translated. For example, the term server should be translated differently when the document is about data centers, vs when the document is about restaurants.
With a transformer model, this is not very obvious to do, since words are not aligned 1:1. I've seen a couple of papers on this topic, but I'm not sure which would be the best one to use. What are the best practices for this problem?
One paper I found that seem to describe what I'm looking for is [here](aclweb.org/anthology/W18-6318.pdf ) - I have a bunch of questions regarding the paper, which I'm happy to discuss here as well. I'm also wondering if there are other approaches.
|
https://github.com/huggingface/transformers/issues/1626
|
closed
|
[
"wontfix"
] | 2019-10-24T17:48:10Z
| 2020-01-04T09:41:58Z
| null |
moyid
|
pytorch/examples
| 645
|
Add Siamese Network example
|
Hi, I want to add an example for Siamese network, since it is one of the popular use cases in ML. I am thinking of implementing it in a way similar to other examples viz. command line arguments to choose which dataset to train, hyperparameters etc.
Is there something I need to keep in mind specifically apart from these:
- Use torchvision's Dataset class and PyTorch's DataLoader class to handle data.
- Implement a simple CNN as a nn.Module subclass
- Implement triplet loss
- Create train and test functions and a main function that calls those 2 methods at each epoch.
- Report final loss and accuracy
Is this something that is worth adding to the repository?
|
https://github.com/pytorch/examples/issues/645
|
open
|
[
"good first issue"
] | 2019-10-24T11:08:50Z
| 2022-05-13T18:17:30Z
| 4
|
piyush01123
|
pytorch/vision
| 1,521
|
per class mAP in coco_eval script?
|
Hi,
I was looking around the eval code and did not find a function to calculate **per-class mAP**. Is there an easy workaround to include that? Thanks. @fmassa
|
https://github.com/pytorch/vision/issues/1521
|
closed
|
[
"question",
"module: reference scripts",
"topic: object detection"
] | 2019-10-23T15:58:28Z
| 2019-10-25T14:43:13Z
| null |
manoja328
|
pytorch/vision
| 1,520
|
DeepLabV3: segment only person
|
How can I segment only the person class and skip the other classes when using DeepLabV3?
|
https://github.com/pytorch/vision/issues/1520
|
closed
|
[
"question",
"topic: semantic segmentation"
] | 2019-10-23T10:22:46Z
| 2020-01-13T17:37:27Z
| null |
muna-cs
|
pytorch/pytorch
| 28,478
|
How to train a torch::jit::script::Module?
|
Existing documentation / tutorials show only how to train a `torch::nn::Module` https://pytorch.org/cppdocs/frontend.html#end-to-end-example
I have attempted to make a training loop in the following manner
```
#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>
#include <vector>
// custom loader code
#include "nets/nets.h"
#include "util/runfiles.h"
int main(int argc, char** argv) {
  std::cout << "Nets example" << std::endl;
  // Custom code that loads the module on CUDA
  auto runfiles = MakeRunfiles(argv[0]);
  torch::jit::script::Module script_module = LoadSegnetBackbone(*runfiles);
  script_module.train();
  std::cout << "Loaded script module" << std::endl;
  // Pull parameters out of the script module so we can push them into the
  // optimizer.
  std::vector<at::Tensor> parameters;
  for (const auto& parameter : script_module.get_parameters()) {
    parameters.push_back(parameter.value().toTensor());
  }
  torch::optim::SGD optimizer(std::move(parameters), /*lr=*/0.01);
  constexpr int kBatchSize = 1;
  for (int epoch = 1; epoch <= 1000; ++epoch) {
    optimizer.zero_grad();
    // The input is a (kBatchSize,3,300,300) tensor filled with ones
    at::Tensor input = torch::ones({kBatchSize, /*channels (rgb) =*/3,
                                    /*height=*/300, /*width=*/300})
                           .to(at::kFloat)
                           .to(at::kCUDA);
    // Push the input through the script module
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(input);
    at::Tensor script_module_forward = script_module.forward(inputs).toTensor();
    // The result is an output tensor of size (kBatchSize, 32, 300, 300)
    // ground truth is a (kBatchSize, 300, 300) tensor filled with ones
    at::Tensor ground_truth =
        torch::ones({kBatchSize, /*height=*/300, /*width=*/300})
            .to(at::kLong)
            .to(at::kCUDA);
    at::Tensor loss = torch::nll_loss2d(
        torch::log_softmax(script_module_forward, /*dim=*/1), ground_truth);
    loss.backward();
    optimizer.step();
    if (epoch % 50 == 0) {
      std::cout << "Loss was " << loss.item<float>() << std::endl;
    }
  }
}
```
but the loss never changes. I have also posted about this on the pytorch forums. https://discuss.pytorch.org/t/jit-module-parameters-are-not-updating-when-training/58945
cc @suo @yf225
|
https://github.com/pytorch/pytorch/issues/28478
|
closed
|
[
"oncall: jit",
"module: cpp",
"triaged"
] | 2019-10-22T23:36:22Z
| 2022-01-20T22:41:16Z
| null |
markisus
|
pytorch/text
| 622
|
How to integrate HuggingFace transformers with Torchtext BPTTIterator?
|
## ❓ Questions and Help
Hello,
I am trying to use the pretrained tokenizer from the HuggingFace Transformer-XL when training my custom Transformer-XL model on WikiText-2, and I am having trouble making the BPTTIterator from torchtext work.
Below is my code:
```python
# Import packages
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AdamW, WarmupLinearSchedule
from transformers import TransfoXLConfig, TransfoXLTokenizer, TransfoXLModel, TransfoXLLMHeadModel
import torchtext
import torchtext.data.utils
from torchtext.data import Field, BPTTIterator
import lineflow as lf
import lineflow.datasets as lfds
import math
import random
import numpy as np
import pandas as pd
import time
# set hyperparameters for this experiment
bptt = 30
batch_size = 64
lr = 0.01 # learning rate
# load the pretrained tokenizer
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103', do_lower_case=True)
# for huggingface - torchtext integration
tokenizer.mask_token = 'maskTok'
tokenizer.pad_token = '<pad>'
tokenizer.eos_token = '<eos>'
tokenizer.unk_token = '<unk>'
tokenizer.bos_token = '<sos>'
pad_index = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
eos_index = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)
unk_index = tokenizer.convert_tokens_to_ids(tokenizer.unk_token)
mask_index = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
bos_index = tokenizer.convert_tokens_to_ids(tokenizer.bos_token)
# load WikiText-2 dataset and split it into train and test set
train_Wiki2, val_Wiki2, test_Wiki2 = torchtext.datasets.WikiText2.splits(TEXT)
# extract total number of tokens in the vocabulary
ntokens = tokenizer.vocab_size
# define transformer-XL configuration.
transfoXLconfig = TransfoXLConfig(
    vocab_size_or_config_json_file=ntokens,
    cutoffs=[20000, 40000, 200000],
    d_model=1024,
    d_embed=1024,
    n_head=16,
    d_head=64,
    n_layer=5,
    dropout=0.1,
    attn_type=0,
    output_hidden_states=True,
    output_attentions=True,
)
model = TransfoXLLMHeadModel(config = transfoXLconfig)
model.resize_token_embeddings(len(tokenizer))
train_iter, test_iter = BPTTIterator.splits(
(train_Wiki2, test_Wiki2),
batch_size = batch_size,
bptt_len= bptt,
shuffle = False,
repeat=False)
# error occurs here; the error message is:
# File "/Users/jin-dominique/anaconda3/lib/python3.7/site-packages/torchtext/data/field.py",
# line 359, in numericalize
# var = torch.tensor(arr, dtype=self.dtype, device=device)
# "TypeError: an integer is required (got type str)"
train = next(iter(train_iter))
test = next(iter(test_iter))
```
How can I fix this error?
Thank you,
|
https://github.com/pytorch/text/issues/622
|
open
|
[] | 2019-10-21T17:27:46Z
| 2020-07-18T19:13:42Z
| null |
h56cho
|
pytorch/ios-demo-app
| 3
|
Add example of how to optimize model for mobile inference
|
This demo is great and works fine, although it would be great to have an example of how to prepare a model for mobile inference, because it's non-trivial. For example, you could add the recipe for how you prepared the `mobilenet_quantized.pt`.
(Personally, I've tried converting my model to `float16` (it didn't work: the model didn't load on mobile), and I've also tried `torch.quantization.quantize`, which didn't work either.)
Tnx!
|
https://github.com/pytorch/ios-demo-app/issues/3
|
closed
|
[] | 2019-10-19T14:53:57Z
| 2020-03-11T17:59:13Z
| null |
mirth
|
pytorch/pytorch
| 28,331
|
How to save quantized model in PyTorch1.3 with quantization information
|
## ❓ How to save the quantized model in PyTorch1.3 with quantization information
Is there any way to save a quantized model in PyTorch 1.3 that keeps the original information?
I have known that I can save it after tracing it by:
```python
# Save
torch.jit.save(torch.jit.script(self.model_q), "quant_model.pth")
# Load
mq = torch.jit.load("quant_model.pth")
```
Although `mq` gives the right result, it loses the quantization information, such as module (layer) names, zero points, scales, etc.
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100
|
https://github.com/pytorch/pytorch/issues/28331
|
closed
|
[
"oncall: quantization",
"triaged"
] | 2019-10-19T07:55:01Z
| 2019-10-23T17:08:14Z
| null |
vippeterhou
|
pytorch/examples
| 643
|
How to run dcgan example?
|
I want to run the `dcgan` example; however, the readme is not very clear.
I have downloaded the classroom data from LSUN, as shown below:
```
$ ls classroom_train_lmdb -lh
total 3.5G
-rw-r--r-- 1 mahmood mahmood 3.5G May 1 2015 data.mdb
-rw-r--r-- 1 mahmood mahmood 63K May 1 2015 lock.mdb
$ ls classroom_val_lmdb -lh
total 6.5M
-rw-r--r-- 1 mahmood mahmood 6.4M May 1 2015 data.mdb
-rw-r--r-- 1 mahmood mahmood 63K May 1 2015 lock.mdb
```
Now, the command is `python main.py --dataset lsun --dataroot XXX`.
What is XXX exactly? Is it the root folder that contains `classroom_val_lmdb/` and `classroom_train_lmdb/`?
|
https://github.com/pytorch/examples/issues/643
|
closed
|
[] | 2019-10-18T07:24:45Z
| 2022-03-09T23:35:07Z
| null |
mahmoodn
|
pytorch/tutorials
| 705
|
Where is the demo dataset and model files in (EXPERIMENTAL) STATIC QUANTIZATION WITH EAGER MODE IN PYTORCH
|
I'm trying to run the codes in [(EXPERIMENTAL) STATIC QUANTIZATION WITH EAGER MODE IN PYTORCH](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#experimental-static-quantization-with-eager-mode-in-pytorch), but there are no dataset and model files available, such as **imagenet_1k, mobilenet_quantization.pth** and so on.
So can anyone provide the location of the necessary files and datasets for this tutorial?
|
https://github.com/pytorch/tutorials/issues/705
|
closed
|
[] | 2019-10-18T01:38:06Z
| 2019-10-27T08:14:15Z
| null |
Aspirinkb
|
huggingface/neuralcoref
| 219
|
Pre-trained english model
|
Hi,
Is the pre-trained english model shipped with coref a model trained on the CoNLL and Ontonotes datasets?
Thanks!
|
https://github.com/huggingface/neuralcoref/issues/219
|
closed
|
[
"question",
"training"
] | 2019-10-17T18:49:51Z
| 2019-10-17T20:06:00Z
| null |
masonedmison
|
huggingface/neuralcoref
| 218
|
State-of-the-art benchmark
|
Hi,
You are claiming neuralCoref to be state-of-the-art for coreference resolution. Do you have any benchmark supporting the claim? I would like to include it in my paper. Also can it be cited yet?
|
https://github.com/huggingface/neuralcoref/issues/218
|
closed
|
[
"question",
"perf / accuracy"
] | 2019-10-17T15:30:16Z
| 2019-10-21T13:59:12Z
| null |
Masum06
|
huggingface/neuralcoref
| 217
|
train conll with BERT
|
Hi
I would like to train on the CoNLL-2012 data with BERT. The common approach is to first convert the data to NLI format and then use the NLI BERT on it. I was wondering if you could assist and add the BERT-based code to this repo. I really appreciate your help.
thanks a lot
Best
Julia
|
https://github.com/huggingface/neuralcoref/issues/217
|
closed
|
[
"question"
] | 2019-10-17T09:25:01Z
| 2019-10-17T15:33:22Z
| null |
ghost
|
huggingface/transformers
| 1,543
|
Where is pytorch-pretrained-BERT?
|
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
As the title shows, where is pytorch-pretrained-BERT? Please tell me the path, THX.
|
https://github.com/huggingface/transformers/issues/1543
|
closed
|
[] | 2019-10-17T07:46:13Z
| 2019-12-05T10:27:31Z
| null |
Foehnc
|
pytorch/pytorch
| 28,202
|
How to quantize resnet in pytorch 1.3?
|
I tried to quantize resnet18 following https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html
but I got this error
```
>>> from torchvision.models import resnet18
>>> net= resnet18()
>>> from torch.quantization import quantize_dynamic
>>> qnet = quantize_dynamic(net,{nn.Conv2d,nn.Linear},dtype=torch.qint8)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "E:\Program Files\Anaconda3\envs\torch\lib\site-packages\torch\quantization\quantize.py", line 241, in quantize_dynamic
convert(model, mapping, inplace=True)
File "E:\Program Files\Anaconda3\envs\torch\lib\site-packages\torch\quantization\quantize.py", line 294, in convert
reassign[name] = swap_module(mod, mapping)
File "E:\Program Files\Anaconda3\envs\torch\lib\site-packages\torch\quantization\quantize.py", line 316, in swap_module
new_mod = mapping[type(mod)].from_float(mod)
File "E:\Program Files\Anaconda3\envs\torch\lib\site-packages\torch\nn\quantized\dynamic\modules\linear.py", line 70, in from_float
qlinear = Linear(mod.in_features, mod.out_features)
File "E:\Program Files\Anaconda3\envs\torch\lib\site-packages\torch\nn\quantized\dynamic\modules\linear.py", line 33, in __init__
super(Linear, self).__init__(in_features, out_features, bias_)
File "E:\Program Files\Anaconda3\envs\torch\lib\site-packages\torch\nn\quantized\modules\linear.py", line 119, in __init__
self.set_weight_bias(qweight, bias)
File "E:\Program Files\Anaconda3\envs\torch\lib\site-packages\torch\nn\quantized\modules\linear.py", line 208, in set_weight_bias
self._packed_params = torch.ops.quantized.linear_prepack(w, b)
RuntimeError: Didn't find engine for operation quantized::linear_prepack NoQEngine (operator () at ..\aten\src\ATen\native\quantized\cpu\qlinear_prepack.cpp:202)
(no backtrace available)
```
How can I solve it?
my environment:
torch1.3.0+cpu
windows 7
python3.6.6
|
https://github.com/pytorch/pytorch/issues/28202
|
closed
|
[] | 2019-10-17T04:03:02Z
| 2020-06-23T14:10:10Z
| null |
Arctanxy
|
pytorch/pytorch
| 28,066
|
How to speed up installing pytorch1.3?
|
I am installing pytorch1.3 using pip. The command from the official site is `pip3 install torch===1.3.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html`.

My pip is using a mirror source, which is fast for me. But the `-f https://download.pytorch.org/whl/torch_stable.html` part of the command forces pip to download from the official site, which is slow for me.
**So my question is how to replace the `-f site` part to speed it up using mirror sites?**
Thanks for help!
|
https://github.com/pytorch/pytorch/issues/28066
|
closed
|
[
"triaged"
] | 2019-10-16T07:19:30Z
| 2019-10-17T23:13:46Z
| null |
gaopinghai
|
pytorch/pytorch.github.io
| 287
|
How to replace the website in the install command after -f ?
|
I am installing PyTorch on Windows 7 using pip. I got the command from the official website, as the picture shows.

The command is `pip3 install torch===1.3.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html`.
But it is too slow because of the `-f https://download.pytorch.org/whl/torch_stable.html`.
**How can I replace this?** My pip is already using a different source, but it is of no use. Thanks for the help.
|
https://github.com/pytorch/pytorch.github.io/issues/287
|
open
|
[] | 2019-10-16T06:12:43Z
| 2019-10-16T06:13:18Z
| null |
gaopinghai
|
pytorch/text
| 619
|
How to use torchtext for sequence labelling with wordpiece tokeniers
|
## ❓ Questions and Help
**Description**
<!-- Please send questions or ask for help here. -->
Hi,
In a previous issue (#609), I asked how to use the tokenizer from the [Transformers](https://github.com/huggingface/transformers) library with torch text.
I now would like to be able to use this tokenizer and torchtext to load sequence labelling datasets. The issue I am facing is that the tokenizer introduces wordpiece tokens, which ends up breaking the alignment between tokens and labels.
Ignoring labels, I am able to load a sequence labelling dataset with a Transformer tokenizer like so,
```python
from torchtext import data
from torchtext import datasets
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
def preprocessor(batch):
    return tokenizer.encode(batch, add_special_tokens=True)

TEXT = data.Field(
    use_vocab=False,
    batch_first=True,
    pad_token=tokenizer.pad_token_id,
    preprocessing=preprocessor,
)
# LABEL = data.LabelField()

fields = [('text', TEXT), ('unused_col_1', None), ('unused_col_2', None), ('label', None)]

train, valid, test = datasets.SequenceTaggingDataset.splits(
    path='/Users/johngiorgi/Downloads/bert_data/BC5CDR/chem',
    train='train.tsv',
    validation='devel.tsv',
    test='test.tsv',
    fields=fields
)

train_iter, valid_iter, test_iter = data.BucketIterator.splits(
    (train, valid, test), batch_sizes=(16, 256, 256)
)
# LABEL.build_vocab(train)
```
The data comes from [here](https://github.com/ncbi-nlp/BLUE_Benchmark/releases/download/0.1/bert_data.zip) and is a tab-separated file with four columns. The first column contains words, the last contains labels, and sentences are separated by a newline, e.g.
```
Naloxone 227508 0 B
reverses - 9 O
the - 18 O
antihypertensive - 22 O
effect - 39 O
of - 46 O
clonidine - 49 B
. - 58 O
In 227508 60 O
.
.
.
```
But when I try to load the labels, e.g.
```python
from torchtext import data
from torchtext import datasets
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
def preprocessor(batch):
    return tokenizer.encode(batch, add_special_tokens=True)

TEXT = data.Field(
    use_vocab=False,
    batch_first=True,
    pad_token=tokenizer.pad_token_id,
    preprocessing=preprocessor,
)
LABEL = data.LabelField()

fields = [('text', TEXT), ('unused_col_1', None), ('unused_col_2', None), ('label', LABEL)]

train, valid, test = datasets.SequenceTaggingDataset.splits(
    path='/Users/johngiorgi/Downloads/bert_data/BC5CDR/chem',
    train='train.tsv',
    validation='devel.tsv',
    test='test.tsv',
    fields=fields
)

train_iter, valid_iter, test_iter = data.BucketIterator.splits(
    (train, valid, test), batch_sizes=(16, 256, 256)
)

LABEL.build_vocab(train)
```
I get issues when trying to access the batch
```python
batch = next(iter(train_iter))
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-39-9919119fad82> in <module>
----> 1 batch = next(iter(train_iter))
~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/iterator.py in __iter__(self)
154 else:
155 minibatch.sort(key=self.sort_key, reverse=True)
--> 156 yield Batch(minibatch, self.dataset, self.device)
157 if not self.repeat:
158 return
~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/batch.py in __init__(self, data, dataset, device)
32 if field is not None:
33 batch = [getattr(x, name) for x in data]
---> 34 setattr(self, name, field.process(batch, device=device))
35
36 @classmethod
~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/field.py in process(self, batch, device)
235 """
236 padded = self.pad(batch)
--> 237 tensor = self.numericalize(padded, device=device)
238 return tensor
239
~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/field.py in numericalize(self, arr, device)
336 arr = [[self.vocab.stoi[x] for x in ex] for ex in arr]
337 else:
--> 338 arr = [self.vocab.stoi[x] for x in arr]
339
340 if self.postprocessing is not None:
~/miniconda3/envs/ml4h/lib/python3.7/site-packages/torchtext/data/field.py in <listcomp>(.0)
336 arr = [[self.vocab.stoi[x] for x in ex] for ex in arr]
337 else:
--> 338 arr = [self.vocab.stoi[x] for x in arr]
339
340 if self.postprocessing is not None:
TypeError: unhashable type: 'list'
```
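Aside from the `TypeError` (the label field receives lists where it expects hashable tokens), the underlying alignment problem is usually solved by expanding the labels to the wordpieces before numericalizing: the first piece of each word keeps the word's label, and continuation pieces get a padding label that the loss ignores. A toy sketch (`split_word` below is a stand-in splitter for illustration, not the Transformers tokenizer):

```python
IGNORE = "X"  # label for continuation pieces, masked out of the loss

def split_word(word):
    # Stand-in wordpiece splitter: chop words longer than 4 characters.
    if len(word) <= 4:
        return [word]
    return [word[:4]] + ["##" + word[i:i + 4] for i in range(4, len(word), 4)]

def align(words, labels):
    """Expand word-level labels to piece-level labels, one per wordpiece."""
    pieces, piece_labels = [], []
    for word, label in zip(words, labels):
        parts = split_word(word)
        pieces.extend(parts)
        piece_labels.extend([label] + [IGNORE] * (len(parts) - 1))
    return pieces, piece_labels

pieces, labs = align(["Naloxone", "reverses", "the"], ["B", "O", "O"])
print(pieces)  # ['Nalo', '##xone', 'reve', '##rses', 'the']
print(labs)    # ['B', 'X', 'O', 'X', 'O']
```

With the labels expanded this way, the text and label sequences stay the same length after wordpiece tokenization, so they can be batched together.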
|
https://github.com/pytorch/text/issues/619
|
closed
|
[] | 2019-10-15T14:42:09Z
| 2020-02-22T03:22:23Z
| null |
JohnGiorgi
|
pytorch/pytorch
| 27,958
|
how to use libtorch library in cuda file with nvcc compiler(c++)?
|
## ❓ Questions and Help
# Motivation
i want to implement nms in parallel processing with libtorch library.
i use this cuda code(https://github.com/gdlg/pytorch_nms)
# Environment
PyTorch version : 1.2.0
CUDA (nvcc compiler ) : 10.0
libtorch version : 1.2.0
system : win10
# Operation
the command :`i use nvcc -c nms_kernel.cu -L -lcudart -I D:\Code-software\NNF\libtorch\libtorch\include -I D:\Code-software\NNF\libtorch\libtorch\include\torch\csrc\api\include` to compiled it
# ERROR
`D:/Code-software/NNF/libtorch/libtorch/include\torch/csrc/jit/argument_spec.h(181): error: member "torch::jit::ArgumentSpecCreator::DEPTH_LIMIT" may not be initialized 1 error detected in the compilation of "C:/Users/Cason/AppData/Local/Temp/tmpxft_00001b28_00000000-10_nms_kernel.cpp1.ii"`
as long as i add `#include <torch/extension.h>` or `#include <torch/script.h>` in cuda files,It makes this kind of mistake.
cc @yf225
|
https://github.com/pytorch/pytorch/issues/27958
|
open
|
[
"module: cpp",
"triaged"
] | 2019-10-15T03:35:07Z
| 2020-05-08T08:30:40Z
| null |
CasonTsai
|
pytorch/pytorch
| 27,827
|
How to hide latency in libtorch with multithreading? A problem with double-stream pipeline execution.
|
Hello, I want to hide the latency between the data loader and inference. I tried to implement it with OpenMP as a simple double-stream pipeline. However, the code `auto t = model->forward({tensor.to(at::kCUDA)}).toTensor();` doesn't support multithreading (OpenMP).
Is there any solution?
My idea is just like Fig. 6 on this website: https://software.intel.com/en-us/articles/heterogeneous-computing-pipelining
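A common way to get the Fig. 6-style overlap is a bounded queue between a loader thread and an inference thread, rather than OpenMP around the `forward` call. A minimal pure-Python sketch of the pipeline shape (`load` and `infer` below are placeholders for the real data loading and `model->forward`):

```python
import queue
import threading

def load(i):
    return i  # placeholder for reading + preprocessing a batch

def infer(x):
    return x * 2  # placeholder for the model forward pass

def run_pipeline(n_batches, depth=2):
    q = queue.Queue(maxsize=depth)  # bounded: the loader stays only `depth` batches ahead
    results = []

    def producer():
        for i in range(n_batches):
            q.put(load(i))  # blocks when the queue is full, bounding memory
        q.put(None)         # sentinel: no more batches

    t = threading.Thread(target=producer)
    t.start()
    while True:
        batch = q.get()
        if batch is None:
            break
        results.append(infer(batch))  # inference overlaps the next load
    t.join()
    return results

print(run_pipeline(5))  # [0, 2, 4, 6, 8]
```

In C++ the same shape works with `std::thread` and a mutex/condition-variable queue; the key point is that only one thread touches the module's `forward`, so no concurrent-forward support is needed.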
|
https://github.com/pytorch/pytorch/issues/27827
|
closed
|
[] | 2019-10-14T02:50:44Z
| 2019-10-14T08:20:37Z
| null |
xiaoLiuxiaoLiuxiaoLiu
|
pytorch/examples
| 640
|
Do we still need to divide sample by ourselves when using a single GPU per process?
|
In https://github.com/pytorch/examples/blob/ee964a2eeb41e1712fe719b83645c79bcbd0ba1a/imagenet/main.py#L149, args.batch_size is manually divided by the number of processes.
However, when I checked https://pytorch.org/docs/stable/_modules/torch/utils/data/distributed.html#DistributedSampler, I found that DistributedSampler already subsampled the batch.
Is it a bug in the imagenet example, or have I missed anything?
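For reference, `DistributedSampler` hands each rank a strided slice of a padded index list, so each process already sees roughly `len(dataset) / world_size` samples per epoch. A minimal pure-Python sketch of that subsampling (simplified: no shuffling, wrap-around padding as in the real sampler):

```python
import math

def distributed_indices(dataset_len, world_size, rank):
    # Pad so every rank gets the same number of samples (as DistributedSampler does).
    num_samples = math.ceil(dataset_len / world_size)
    total = num_samples * world_size
    indices = list(range(dataset_len))
    indices += indices[: total - dataset_len]  # wrap-around padding
    # Each rank takes a strided slice: rank, rank + world_size, ...
    return indices[rank:total:world_size]

parts = [distributed_indices(10, 4, r) for r in range(4)]
print(parts)  # each rank gets 3 of the 12 padded indices
```

If I read the imagenet example correctly, dividing `args.batch_size` by the number of processes is not double-subsampling on top of the sampler; it sets the per-GPU batch size so that the *global* batch size stays at `args.batch_size`.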
|
https://github.com/pytorch/examples/issues/640
|
closed
|
[] | 2019-10-14T01:58:19Z
| 2020-02-14T10:24:15Z
| 2
|
taroxd
|
pytorch/examples
| 638
|
missing indent in def train(...) in `imagenet`
|
https://github.com/pytorch/examples/blob/ee964a2eeb41e1712fe719b83645c79bcbd0ba1a/imagenet/main.py#L284
There seems to be a missing indent in the imagenet train(...) function.
`/example/imagenet/main.py`, line 282 to 284.
```python
if args.gpu is not None:
images = images.cuda(args.gpu, non_blocking=True)
target = target.cuda(args.gpu, non_blocking=True)
```
The default value of `args.gpu` is None.
When `args.gpu` is not specified (defaulting to None), the `images` tensor is not moved to cuda, which is reasonable. But why is the `target` tensor still moved to cuda? Is there a missing tab indent?
In this example, the `model` is always moved to cuda, so `outputs` is on cuda. Always moving the `target` tensor to cuda avoids an error in the following `loss = criterion(outputs, targets)`. If that is the consideration, then why is the `images` tensor kept on cpu?
|
https://github.com/pytorch/examples/issues/638
|
closed
|
[] | 2019-10-12T05:07:56Z
| 2019-10-22T21:53:05Z
| 1
|
HearyShen
|
huggingface/transformers
| 1,503
|
What is the best way to handle sequences > max_len for tasks like abstract summarization?
|
What is the best way to handle situations where a sequence in your dataset exceeds the max length defined for a model?
For example, if I'm working on an abstract summarization task with a Bert model having a `max_position_embeddings=512` and tokenizer with `max_len=512`, how should I handle documents where the tokens to evaluate exceed 512?
Is there a recommended practice for this situation?
Thanks
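There is no single recommended practice, but common options are truncation, sliding windows over the token ids, or hierarchical/two-stage models. A minimal sketch of the sliding-window option (the `max_len` and `stride` values are illustrative, and real usage would also reserve room for special tokens):

```python
def sliding_windows(token_ids, max_len=512, stride=256):
    """Split a long token-id sequence into overlapping chunks of at most max_len."""
    if len(token_ids) <= max_len:
        return [token_ids]
    windows = []
    start = 0
    while start < len(token_ids):
        windows.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break  # last window already reaches the end
        start += stride
    return windows

chunks = sliding_windows(list(range(1000)), max_len=512, stride=256)
print([len(c) for c in chunks])  # [512, 512, 488]
```

Each chunk can then be encoded separately and the per-chunk representations pooled or summarized in a second stage.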
|
https://github.com/huggingface/transformers/issues/1503
|
closed
|
[
"wontfix"
] | 2019-10-12T00:40:50Z
| 2020-02-17T13:26:11Z
| null |
ohmeow
|
pytorch/tutorials
| 694
|
net visualization image (https://pytorch.org/tutorials/_images/mnist.png) has the wrong dimensions
|
In the tutorial: beginner_source/blitz/neural_networks_tutorial.py,
The explanation for the first linear layer dimensions is unclear:
self.fc1 = nn.Linear(16 * 6 * 6, 120) # 6*6 from image dimension
The input image dimension expected is 32 x 32.
The visualization of the net shows a dimension of 5x5 after the last max pool layer.
Where is the extra 1 x 1 coming from?
The layer-dimension calculation can be a hurdle for beginners; it is for me, anyway.
It's confusing because the dimension sizes depend in a complicated way on the input image size, which appears nowhere in the initialization parameters.
The paper linked in the docs is helpful: https://arxiv.org/pdf/1603.07285.pdf
So, after printing the dimensions before and after each step in the net, I see that the net visualization image (https://pytorch.org/tutorials/_images/mnist.png) has the wrong dimensions listed. The actual sizes after each step in the net are:
torch.Size([1, 1, 32, 32]) # input size
torch.Size([1, 6, 30, 30]) # after conv1
torch.Size([1, 6, 30, 30]) # after relu1
torch.Size([1, 6, 15, 15]) # after maxpool1
torch.Size([1, 16, 13, 13]) # after conv2
torch.Size([1, 16, 13, 13]) # after relu2
torch.Size([1, 16, 6, 6]) # after maxpool2
torch.Size([1, 576]) # after flattening
torch.Size([1, 120]) # after fully connected layer 1
torch.Size([1, 84]) # after fully connected layer 2
torch.Size([1, 10]) # after fully connected layer 3
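For what it's worth, the sizes in this trace follow the standard output-size formula `floor((n + 2p - k) / s) + 1`. A small helper that reproduces the trace above (assuming 3x3 convs with no padding and 2x2 max pools, as the printed shapes imply):

```python
def conv_out(n, k, s=1, p=0):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

n = 32
n = conv_out(n, k=3)       # conv1:    32 -> 30
n = conv_out(n, k=2, s=2)  # maxpool1: 30 -> 15
n = conv_out(n, k=3)       # conv2:    15 -> 13
n = conv_out(n, k=2, s=2)  # maxpool2: 13 -> 6
print(n, 16 * n * n)  # 6 576
```

The final `16 * 6 * 6 = 576` matches the flattened size in the trace and the `nn.Linear(16 * 6 * 6, 120)` in the tutorial, which confirms the 5x5 shown in the visualization image is wrong for a 32x32 input.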
|
https://github.com/pytorch/tutorials/issues/694
|
closed
|
[] | 2019-10-11T15:30:46Z
| 2021-04-26T20:14:34Z
| 1
|
tinku99
|
pytorch/pytorch
| 27,479
|
[JIT] Figure out how to easily investigate memory usage issues
|
e.g. https://github.com/pytorch/pytorch/issues/25267
And other internal reports
cc @suo
|
https://github.com/pytorch/pytorch/issues/27479
|
open
|
[
"oncall: jit",
"triaged"
] | 2019-10-07T18:24:08Z
| 2020-02-28T18:54:51Z
| null |
jamesr66a
|
pytorch/vision
| 1,395
|
How to crop a single image before calling torchvision.utils.save_image; if I use the PIL `Image.crop(...)` method, the image quality degrades.
|
```python
vutils.save_image(fixed_fake.data, outputpath, normalize=True)
print("output path", outputpath)
img = Image.open(outputpath)
noOfRow = 5
noOfColumn = 8
x1 = 2
y1 = 2
x2 = 130
y2 = 130
folder = file_batch
for i in range(0, noOfColumn):
    dest_dir = file_batch[i].split("/")[7]
    if not os.path.exists(outf + "/" + dest_dir):
        os.mkdir(outf + "/" + dest_dir)
    for j in range(1, noOfRow + 1):
        area = (x1, y1, x2, y2)
        cropped_img = img.crop(area)
        imgName = "{}{}".format(i, j)
        cropped_img.save(os.path.join(outf + dest_dir, filename))
        y1 = y1 + 130
        y2 = y2 + 130
    x1 = x1 + 130
    x2 = x2 + 130
    y1 = 2
    y2 = 130
```
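A side note on the loop: each cell's crop box can be computed directly from its row/column index instead of mutating `x1..y2` in place. A sketch of the coordinate arithmetic, assuming the 130-pixel grid and 2-pixel offset used above:

```python
CELL, OFFSET, SIZE = 130, 2, 128  # grid cell stride, border offset, crop size

def crop_box(row, col):
    """PIL-style (left, upper, right, lower) box for grid cell (row, col)."""
    x1 = OFFSET + col * CELL
    y1 = OFFSET + row * CELL
    return (x1, y1, x1 + SIZE, y1 + SIZE)

print(crop_box(0, 0))  # (2, 2, 130, 130)
print(crop_box(1, 0))  # (2, 132, 130, 260)
```

Also, `Image.crop` itself does not resample, so it should not degrade quality on its own; if the crops are saved to a lossy format like JPEG, the recompression does. Saving as PNG may avoid the degradation.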
|
https://github.com/pytorch/vision/issues/1395
|
open
|
[
"module: utils"
] | 2019-09-30T20:35:12Z
| 2021-02-21T15:56:52Z
| null |
praveenkumarchandaliya
|
pytorch/pytorch
| 27,070
|
How to share a submodule without copying its parameters in the computing graph?
|
Hi,
I am trying to feed a list of input images to a model that incorporates a number of the same submodule. The model is like following:
```
class SubModule(nn.Module):
    def __init__(self):
        super(SubModule, self).__init__()
        self.embedding = nn.Linear(1000, 20)

    def forward(self, input):
        return self.embedding(input)


class Model(nn.Module):
    def __init__(self, subnet, n):
        super(Model, self).__init__()
        self.subnet = subnet
        self.fc = nn.Linear(n * 20, 2)
        self.n = n

    def forward(self, x_list):
        # x_list is a list of n input images
        out = []
        for i in range(self.n):
            h = self.subnet(x_list[i])  # h: shape [batch_size, feature_length(20)]
            out.append(h.unsqueeze_(1))
        out = torch.cat(out, dim=1)  # out: shape [batch_size, n, feature_length(20)]
        out = out.view(out.shape[0], -1)
        out = self.fc(out)
        return out


subnet = SubModule()
m = Model(subnet, 12)
```
Both "subnet" and "m" will be trained by back propagation at some point. I found that "m" actually creates n copies of "subnet". I want the parameters of "subnet" to be shared during training; i.e. every input image is fed through the same submodule. However, I don't want to create a computing graph forwarding multiple submodules at the same time, especially when n is large. Is there anyway to do so? Is there something similar to how RNN's are handled in pytorch for my case?
|
https://github.com/pytorch/pytorch/issues/27070
|
closed
|
[] | 2019-09-30T16:02:24Z
| 2020-03-19T06:06:45Z
| null |
ukaneverin
|
pytorch/pytorch
| 27,033
|
How to increase numerical accuracy of Pytorch model?
|
I write this line in my script:
`print(self.netG(self.real_A) - self.netG(self.real_A))`
I think I should get an all-zero tensor, but no:
```
tensor([[ [[-0.0032, 0.0089, -0.0085, ..., -0.0027, 0.0004, -0.0022],
[-0.0019, -0.0022, 0.0775, ..., 0.0236, -0.0277, -0.0125],
[ 0.0049, 0.0159, 0.0203, ..., -0.0212, 0.0010, -0.0069],
...,
[ 0.0042, 0.0081, -0.0127, ..., -0.0097, 0.0136, -0.0002],
[-0.0010, 0.0020, -0.0066, ..., 0.0260, 0.0433, 0.0088],
[-0.0023, 0.0095, 0.0125, ..., 0.0005, 0.0090, 0.0029]]]],
device='cuda:0', grad_fn=<SubBackward0>)
```
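This is expected on GPU: some kernels are nondeterministic (e.g. atomic adds), so the floating-point accumulation order can differ between the two forward passes, and float addition is not associative. A pure-Python illustration of the underlying effect:

```python
# Float addition is not associative: the same numbers summed in a
# different order give (slightly) different results.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)      # False
print(abs(left - right))  # tiny, but nonzero
```

Setting `torch.backends.cudnn.deterministic = True` (and `torch.backends.cudnn.benchmark = False`) reduces, though does not always eliminate, this run-to-run variation.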
|
https://github.com/pytorch/pytorch/issues/27033
|
closed
|
[] | 2019-09-29T13:11:06Z
| 2019-10-02T12:56:36Z
| null |
gentlezr
|
pytorch/vision
| 1,384
|
How to test my trained model on my data set
|
https://github.com/pytorch/vision/issues/1384
|
closed
|
[
"question"
] | 2019-09-29T09:52:21Z
| 2019-09-30T12:35:10Z
| null |
PL-96
|
|
pytorch/pytorch
| 26,880
|
In TracedModel, how to get model parameters like convolution stride info.
|
## ❓ Questions and Help
I use traced_model._modules[‘conv1’] to access conv module.
But how can I find ‘stride’ info in tracedModel object?
Is there any document to describe tracedModel API and structure?
Thanks,
8086
|
https://github.com/pytorch/pytorch/issues/26880
|
closed
|
[] | 2019-09-26T08:31:45Z
| 2019-09-26T20:34:53Z
| null |
joe8086
|
pytorch/pytorch
| 26,803
|
install pytorch1.2 where the environment is cuda9.0?
|
Can you tell me how to install pytorch 1.2 in an environment where cuda is 9.0?
I don't have root privileges, so I can't upgrade cuda.
|
https://github.com/pytorch/pytorch/issues/26803
|
closed
|
[
"module: build",
"triaged"
] | 2019-09-25T14:29:19Z
| 2019-09-25T22:17:47Z
| null |
zyxdSTU
|
pytorch/pytorch
| 26,717
|
How to use RandomSampler?
|
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
class RandomSampler in torch/utils/data/sampler.py
```python
def __iter__(self):
    n = len(self.data_source)
    if self.replacement:
        return iter(torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64).tolist())
    return iter(torch.randperm(n).tolist())
```
problem:
`return iter(torch.randperm(n).tolist())`
If you want to get a random int in [0, n) when `__iter__(self)` is called, you only need `random.randint(0, n-1)`. This code `return iter(torch.randperm(n).tolist())` uses a lot of resources but only generates a random number when called.
I think in this function you want to generate a list like `torch.randperm(n)` and return the next number when called. Furthermore, we should shuffle the list again when we reach the end of the list. To implement this idea, we can modify the code like this:
- add `self.iter = iter(torch.randperm(len(self.data_source)).tolist())` in `__init__(self, ...)`
- add
  ```python
  def __next__(self):
      try:
          return next(self.iter)
      except StopIteration:
          self.iter = iter(torch.randperm(len(self.data_source)).tolist())
          return next(self.iter)
  ```
- add `return self` in `__iter__(self)`
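The proposal above amounts to an iterator that draws a fresh permutation whenever the current one is exhausted. A pure-Python sketch of that idea (using `random.Random` in place of `torch.randperm`):

```python
import random

class ReshufflingSampler:
    """Yields indices from a fresh permutation, reshuffling when exhausted."""

    def __init__(self, n, seed=None):
        self.n = n
        self.rng = random.Random(seed)
        self._it = iter(self.rng.sample(range(n), n))

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return next(self._it)
        except StopIteration:
            # Permutation exhausted: draw a new one and continue.
            self._it = iter(self.rng.sample(range(self.n), self.n))
            return next(self._it)

s = ReshufflingSampler(5, seed=0)
first_epoch = [next(s) for _ in range(5)]
second_epoch = [next(s) for _ in range(5)]
print(sorted(first_epoch), sorted(second_epoch))  # both [0, 1, 2, 3, 4]
```

Note that a `DataLoader` calls `__iter__` once per epoch, so the existing implementation already yields a fresh permutation each epoch; a class like this is only needed when you want a single never-ending stream of indices.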
|
https://github.com/pytorch/pytorch/issues/26717
|
closed
|
[] | 2019-09-24T15:13:42Z
| 2019-09-24T15:31:55Z
| null |
sp2823
|
pytorch/pytorch
| 26,707
|
How to build pytorch for android
|
## ❓ How to build pytorch for android
When I run this command,
```
export ANDROID_NDK=~/android-ndk-r20
set USE_NCCL=OFF
set USE_CUDA=OFF
bash scripts/build_android.sh
```
I got the following errors
```
@ error/constitute.c/WriteImage/1028.
' @ error/constitute.c/WriteImage/1028.
: not foundL/ly/software/pytorch/pytorch/cmake/../aten/src/ATen/gen.py: 3: /media/zw/DL/ly/software/pytorch/pytorch/cmake/../aten/src/ATen/gen.py:
' @ error/constitute.c/WriteImage/1028.
from: can't read /var/mail/collections
```
My env:
- pytorch-1.1.0
- cmake-3.15.1
- android-ndk-r20
|
https://github.com/pytorch/pytorch/issues/26707
|
closed
|
[
"module: build",
"triaged",
"oncall: mobile"
] | 2019-09-24T03:02:44Z
| 2019-09-24T15:09:23Z
| null |
blackxer
|
huggingface/neuralcoref
| 203
|
training new language(French)
|
How can I get data in the same form as the English data (is there any tool to do that)?
|
https://github.com/huggingface/neuralcoref/issues/203
|
closed
|
[
"question",
"training"
] | 2019-09-23T13:16:42Z
| 2019-10-14T07:48:00Z
| null |
Berrougui
|
pytorch/extension-cpp
| 44
|
How to write cuda code of the multilayer units
|
This tutorial helped me write a single-layer unit in CUDA code.
But how do I write CUDA code for multilayer units, like torch/nn/_functions/rnn.py line 281?
```
output, hy, cy, reserve, new_weight_buf = torch._cudnn_rnn(
    input, weight_arr, weight_stride0,
    flat_weight,
    hx, cx,
    mode, hidden_size, num_layers,
    batch_first, dropout, train, bool(bidirectional),
    list(batch_sizes.data) if variable_length else (),
    dropout_ts)
```
I have achieved the same results by using the template of AutogradRNN, i.e., torch/nn/_functions/rnn.py 212.
```
def AutogradRNN(mode, input_size, hidden_size, num_layers=1, batch_first=False,
                dropout=0, train=True, bidirectional=False, variable_length=False,
                dropout_state=None, flat_weight=None):
```
But GPU utilization was too low and the speed was too slow, perhaps because each single-layer unit is called individually, each call involving the launch of a CUDA kernel. So I want to rewrite the multilayer units in CUDA and fuse particular groups of single layers. Can you provide a boilerplate?
|
https://github.com/pytorch/extension-cpp/issues/44
|
open
|
[] | 2019-09-23T03:37:04Z
| 2019-09-24T14:54:37Z
| null |
haoyz
|
pytorch/pytorch
| 26,630
|
How to script a model using c++ extension? I met this error
|
## ❓ How to script a model using c++ extension? I met this error
```
RuntimeError:
Could not export Python function call '_DCNv2'. Remove calls to Python functions before export. Did you forget add @script or @script_method annotation? If this is a nn.ModuleList
```
|
https://github.com/pytorch/pytorch/issues/26630
|
closed
|
[] | 2019-09-22T11:30:34Z
| 2019-09-24T14:42:22Z
| null |
yinnhao
|
huggingface/transformers
| 1,299
|
What is the best CPU inference acceleration solution for BERT now?
|
Thank you very much.
|
https://github.com/huggingface/transformers/issues/1299
|
closed
|
[
"wontfix"
] | 2019-09-20T02:50:55Z
| 2019-11-20T01:42:25Z
| null |
guotong1988
|