st100600
You would use padding in the CNN, then take the output and pass it to pack_padded_sequence for the RNN.
st100601
I just wanted to clarify: so PyTorch does not support variable-length sequences in a batch and masking is mandatory? I hoped it would be possible to avoid it.
st100602
Masking is mandatory but the PyTorch RNNs natively support variable length sequences (created by pack_padded_sequence) and will correctly avoid processing the padding tokens, even for the reverse direction of a bidirectional RNN.
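For illustration, a minimal sketch of this (my own example, assuming a batch of three feature sequences of lengths 5, 3 and 2):

```python
# Pad a batch of variable-length sequences, then let the RNN skip the padding
# via pack_padded_sequence (lengths are given in decreasing order).
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

seqs = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(2, 8)]  # lengths 5, 3, 2
lengths = torch.tensor([5, 3, 2])

padded = pad_sequence(seqs, batch_first=True)                      # (batch, max_len, feat)
packed = pack_padded_sequence(padded, lengths, batch_first=True)

rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)
packed_out, h_n = rnn(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(out.shape)  # torch.Size([3, 5, 32])
```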
st100603
Could you please point to an example where variable-length sequences are passed in a batch?
st100604
I do not understand what is really meant by “masking” - could anyone explain please?
st100605
However, when the sequence data is very sparse, i.e. the maximum length is very large while each individual sequence may be very short, padding will waste a lot of memory. Is there any other way to resolve this?
st100606
Hello. I’ve read the code in data_parallel.py and the implementation of optimizer.step, but I can’t find the code that accumulates the gradients from multiple GPUs onto a single one… Can anyone give a hint about the implementation? Thanks!
st100607
It is the backward of Broadcast that calculates the gradient: https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/_functions.py#L8-L30. Gradients are automatically accumulated into the .grad attribute by the autograd engine.
st100608
Thanks. Does it mean that copying a variable from one GPU to another is still recorded in the computation graph? Then during backward, the gradients on the different GPUs will flow back.
st100609
Yes, both for DataParallel and for plain copies like x.cuda(1) or x.to(device). The backward passes for these two scenarios are implemented differently, but they all work.
st100610
Thanks. I think with these facts, maybe I can do parallel training with batches that have a different configuration on each GPU…
st100611
Hello, I was hoping to find a solution to this error: RuntimeError: /pytorch/torch/csrc/jit/tracer.h:117: getTracingState: Assertion var_state == state failed. Could anyone tell me which branch is the development branch for the 1.0 release? Thanks
st100612
Regarding "RuntimeError: /pytorch/torch/csrc/jit/tracer.h:117: getTracingState: Assertion var_state == state failed": please open an issue on GitHub. As for which branch is the development branch for the 1.0 release: master.
st100613
Hi, I am trying to generate batches for both input and output data, with shuffling, using DataLoader. I wonder if the only way to do that is to concatenate input and output, feed the result into DataLoader, and then split each batch back into input/output afterwards, or whether there are other ways to do it? Thank you!
st100614
Solved by TheShadow29 in post #2.
st100615
You don’t need to concatenate them. You can simply have more than one outputs in the dataset __getitem__, data loader returns those in batches.
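A minimal sketch of what that looks like (the dataset and tensor shapes here are made up for illustration):

```python
# __getitem__ returns a tuple; the DataLoader batches each element separately.
import torch
from torch.utils.data import Dataset, DataLoader

class PairDataset(Dataset):
    def __init__(self, inputs, targets):
        self.inputs = inputs      # e.g. a tensor of shape (N, d_in)
        self.targets = targets    # e.g. a tensor of shape (N, d_out)

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, idx):
        return self.inputs[idx], self.targets[idx]

ds = PairDataset(torch.randn(100, 10), torch.randn(100, 2))
loader = DataLoader(ds, batch_size=16, shuffle=True)
x_batch, y_batch = next(iter(loader))
print(x_batch.shape, y_batch.shape)  # torch.Size([16, 10]) torch.Size([16, 2])
```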
st100616
Hi all, how do I set ‘Inf’ values in a Tensor to 0? I don’t wish to use numpy, since that would require implementing the backward manually when used inside a network. Thanks, Qinqing
st100617
x = torch.Tensor([1, float("Inf"), 2, float("Inf")])
x[x == float("Inf")] = 0
x  # should be 1, 0, 2, 0 now
st100618
Thanks for the answer! May I know if the masked entry will affect the gradient? For example, if I have a model whose intermediate layer gives an output of: out = [inf, -3.4, inf, -5.5] where the inf entries should be masked out, can I do it by: out[out == float('inf')] = 0 Thanks!
st100619
I have a supervised model that has 2 outputs: one is a float and the other is multi-binary (an array of 0/1). I have a loss function for each of the outputs, and during training I see that the loss for the float decreases as it should, but the loss of the multi-binary output stays high. The loss function I use for it is binary cross entropy; does anyone have a suggestion for other loss functions that could fit here?
st100620
I have a weird issue with tensor.cuda(); it seems as if it doesn’t always work:
self.returns = torch.zeros(self.step, 1, 1)
self.returns = self.returns.cuda()
rewards = rewards.cuda()
masks = masks.cuda()
print(self.returns.type(), rewards.type(), masks.type())
# output: torch.FloatTensor torch.cuda.FloatTensor torch.cuda.FloatTensor
Does anyone know what might be the issue?
st100621
It was even more stupid than that: instead of doing self.returns = self.returns.cuda() I just did self.returns.cuda() and didn’t understand why nothing had changed. Thanks.
st100622
Hi, I want to perturb the parameters of a network in the forward pass only, without the op taking part in backprop. For example, add noise to every parameter of every layer in the net, but have no effect on backprop. How can I achieve this? Also, if I try to modify self.conv.weight.data, I run into CUDA-related errors. I’m very new to PyTorch, so sorry if it’s a silly question.
class net(nn.Module):
    def __init__(self):
        self.conv = nn.Conv2d(...)
    def forward(self, x):
        self.conv.weight = self.conv.weight + normal(mean=0, stddev=0.2)
        return self.conv(x)
st100623
I want to deploy semantic segmentation model. Currently onnx does not support upsampling or convolutional transpose. What are the alternatives for deploying the semantic segmentation?
st100624
Is there any tutorial on the web for using the Functional API? It would be great if it contains codes for some popular architectures.
st100625
Hi PyTorch, are there best practices for avoiding out-of-memory errors when using DataParallel? For instance, I have 8 cards, but I think the gradients are all accumulated on one card (card 0 by default), so would it make sense to use 7 cards and set the 8th one as the output device? Or would that be a waste? I’ve tried experimenting with it, but the overhead is so high I’m not sure I’d get there quickly.
st100626
Using a card solely to accumulate gradients doesn’t sound so good. What does your model look like? In many models the activations use more memory than the weights and gradients, so try decreasing your batch size if you’re running out of memory.
st100627
Hello everyone, I’m getting a Bus error (core dumped) when using distUtils.DistributedSampler with a larger dataset. It works fine once I reduce the data size or don’t use distUtils.DistributedSampler. Any thoughts on what may be causing this or how I can fix it? Thx. My code is the following:
Trainer.py
corpus_dataset = CorpusDataset(h5py_path, self.word2Vec, self.args.maxL_input, self.args.maxL_output)
train_sampler = None
if self.args.distributed:
    dist.init_process_group(backend=self.args.distBackend, init_method=self.args.distUrl,
                            world_size=self.args.worldSize, rank=self.args.rank)
    train_sampler = distUtils.DistributedSampler(corpus_dataset, self.args.worldSize, self.args.rank)
custom_loader = Data.DataLoader(
    dataset=corpus_dataset,
    batch_size=self.args.batchSize,
    shuffle=(train_sampler is None),
    drop_last=(train_sampler is not None),
    num_workers=1,
    collate_fn=collate_fn,
    sampler=train_sampler
)
for epoch in range(self.args.numEpochs):
    for posts, p_lens, responses, r_lens, labels in custom_loader:
        pass
Dataset and collate_fn
class CorpusDataset(Data.Dataset):
    def __init__(self, h5_path, word2Vec, maxL_input, maxL_output):
        self.h5f = h5py.File(h5_path, 'r')
        self.word2Vec = word2Vec
        self.pad_id = word2Vec.word2id(TAG_TOKEN_PAD)
        self.input_boundary = maxL_input
        self.output_boundary = maxL_output
        self.datasize = self.h5f['posts'].shape[0]
        # When the variable (times) is greater than 3, it does not work on my device and I get the Bus error (core dumped).
        self.len = self.datasize + (self.datasize * times)

    def __getitem__(self, index):
        question_index = index if index < self.datasize else index % self.datasize
        answer_index = index if index < self.datasize else get_random(question_index)
        raw_post = self.h5f['posts'][question_index].split()
        raw_response = self.h5f['responses'][answer_index].split()
        label = 1 if index < self.datasize else 0
        post = raw_post[:self.input_boundary]
        response = raw_response[:self.output_boundary]
        post = self.word2Vec.sentence2id(post, True)
        response = self.word2Vec.sentence2id(response, True)
        return post, response, label, self.pad_id  # pad_id returned so collate_fn can unpack it

    def __len__(self):
        return self.len

def collate_fn(batch):
    pairs = sorted(batch, key=lambda p: len(p[0]), reverse=True)
    inputs_batch, targets_batch, labels, pad_id = zip(*pairs)
    pad_id = pad_id[0]
    p_lens, posts = count_len_and_add_pad(inputs_batch, pad_id)
    r_lens, responses = count_len_and_add_pad(targets_batch, pad_id)
    posts = torch.LongTensor(posts)
    responses = torch.LongTensor(responses)
    labels = torch.FloatTensor(labels).unsqueeze(1)
    return posts, p_lens, responses, r_lens, labels
st100628
Hi, I have met a similar error, and setting num_workers=0 solved my problem. Note the warning for DistributedDataParallel in the documentation: "If you plan on using this module with a nccl backend or a gloo backend (that uses Infiniband), together with a DataLoader that uses multiple workers, please change the multiprocessing start method to forkserver (Python 3 only) or spawn. Unfortunately Gloo (that uses Infiniband) and NCCL2 are not fork safe, and you will likely experience deadlocks if you don’t change this setting." I think using multiprocessing.set_start_method('forkserver') will also work; I will try this method later.
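A sketch of where that suggested call would go (Python 3 only; it must run before the DataLoader workers or the process group are created):

```python
# Set the start method once, at the very top of the training script.
import multiprocessing

if __name__ == '__main__':
    multiprocessing.set_start_method('forkserver')  # or 'spawn'
    # ... then build the dataset, DataLoader(num_workers=...), init_process_group, and train
```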
st100629
You are right, setting num_workers=0 works. BTW, do you have any thoughts on what may be causing this? This problem has been bothering me for several days. Anyway, thank you very much.
st100630
When loading weights from a file with model.load_state_dict(torch.load(model_file)), an exception is raised:
THCudaCheck FAIL file=/data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.c line=79 error=2 : out of memory
Segmentation fault (core dumped)
Previously this ran with no problem; actually two training processes are still running (on another two GPUs). However, it breaks when I want to start an additional training process.
st100631
No, the target GPU is idle, and there are still 22 GB of memory available on this GPU.
st100632
OK, I think I’ve got where the problem arises: the model weights saved with torch.save(model.state_dict(), file) contain device info, and torch.load(model_file) will load the weights directly onto the device recorded in that saved info rather than onto the CPU. So, if the previously used device is short of memory, this loading process will crash.
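A minimal sketch of the usual workaround (assuming a reasonably recent PyTorch that accepts a string for map_location; model_file and the target device below are placeholders):

```python
# Force the checkpoint onto the CPU first, then move the model to whichever
# device actually has free memory.
import torch

state_dict = torch.load(model_file, map_location='cpu')
model.load_state_dict(state_dict)
model.to('cuda:1')  # hypothetical target device
```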
st100633
I’ve tried this:
def map_loc(storage, loc):
    if loc.startswith('cuda'):
        return storage.cuda(device)
    else:
        return storage

print('model weights loading...')
model.load_state_dict(torch.load(model_file, map_location=map_loc))
print('model weights loaded')
And still, an exception is raised:
model weights loading...
THCudaCheck FAIL file=/data/users/soumith/builder/wheel/pytorch-src/torch/csrc/generic/serialization.cpp line=145 error=2 : out of memory
Traceback (most recent call last):
  File "PTR_evaluation_pytorch.py", line 197, in <module>
    model.load_state_dict(torch.load(model_file,map_location=map_loc))
  File "/home/David/App/anaconda3/lib/python3.5/site-packages/torch/serialization.py", line 222, in load
    return _load(f, map_location, pickle_module)
  File "/home/David/App/anaconda3/lib/python3.5/site-packages/torch/serialization.py", line 377, in _load
    deserialized_objects[key]._set_from_file(f, offset)
RuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/builder/wheel/pytorch-src/torch/csrc/generic/serialization.cpp:145
The target device is idle with over 20 GB of memory free.
st100634
There was a bug in the serialization where remapping devices still used the device memory. This is fixed in master. I am working on binaries of version 0.1.11, which will have this fix.
st100635
@david-leon hello, have you solved your problem? I met the same one. I want to load a model on a different device, and I get the following error:
THCudaCheck FAIL file=/py/conda-bld/pytorch_1493676237139/work/torch/lib/THC/generic/THCStorage.c line=79 error=2 : out of memory
[1] 2276 segmentation fault (core dumped) python guitar_rnnsearch_ia.py
st100636
Did you specify the map_location while loading? What kind of error message did you get?
st100637
Thanks! I have solved this problem by specifying map_location and using torch.cuda.set_device() to set target GPU manually.
st100638
Thanks! It works. From the docs for map_location: "a function, torch.device, string or a dict specifying how to remap storage locations."
if torch.cuda.is_available() and cfg.use_gpu is not None:
    device = torch.device(use_gpu)
else:
    device = torch.device("cpu")

checkpoint_data = torch.load(checkpoint_path, map_location=device)
model.load_state_dict(checkpoint_data['model'])
optimizer.load_state_dict(checkpoint_data['optimizer'])

checkpoint_data = dict(
    optimizer=optimizer.state_dict(),
    model=model.state_dict(),
)
st100639
Here it is:
the_device = torch.device("gpu:2")
myModel.to(the_device)
GPU 2 is supposed to be used solely, but GPU 0 and GPU 2 both come into play. I am not sure what is wrong with my code. Am I missing something? Thanks in advance for your help.
st100640
Maybe you should replace "gpu:2" with "cuda:2"? Moreover, you can also select the GPU with torch.cuda.device() or torch.cuda.set_device(). You can refer to this to set up and run CUDA operations.
st100641
Did you use torch.cuda.device() or torch.cuda.set_device()? It works for me. If you already used torch.cuda.device() but it did not work, check this first.
st100642
You are right, torch.cuda.set_device works. BTW, what is the recommended way to set up the GPU? From the examples given by PyTorch, it seems .to(device) is used. Anyway, thank you very much.
st100643
This is a bug. If your pytorch version is 0.4.1, please let me know your model definition and I can add it to this tracking issue: https://github.com/pytorch/pytorch/issues/10832
st100644
I have a one-layer network defined by nn.ConvTranspose2d(). Now I need to change the parameters of this network (weight, bias) and re-evaluate the output w.r.t. the original input, but I still need to maintain the original network. All I need is the output w.r.t. the new parameters; then I continue working on the original network with the old parameters. Is it possible to still use nn.ConvTranspose2d() for this? If not, what other function can I use?
st100645
Solved by SimonW in post #2: Use the functional interface: torch.nn.functional.conv_transpose2d
st100646
Is there any tutorial about how to use it? If the module is nn.ConvTranspose2d(d, 1, 8, stride=2), I get the weight and bias of this network and plan to apply them to an input. Is torch.nn.functional.conv_transpose2d(input, weight, bias) right? Or do I need to specify some parameters? Thanks
st100647
Yes, you need to specify the stride as well. The documentation is pretty clear: https://pytorch.org/docs/master/nn.html#conv-transpose2d
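A hedged sketch of what that call could look like for the module mentioned above (the value of d, the input shape, and the way the weight is modified are my own placeholders):

```python
# Reuse the weights of an existing ConvTranspose2d module with the functional
# interface, passing the same stride (and padding, if any).
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 16
deconv = nn.ConvTranspose2d(d, 1, 8, stride=2)
x = torch.randn(4, d, 10, 10)

new_weight = deconv.weight + 0.1 * torch.randn_like(deconv.weight)  # modified parameters
out = F.conv_transpose2d(x, new_weight, deconv.bias, stride=2)
print(out.shape)  # same shape as deconv(x)
```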
st100648
I happened to find the loss function nn.CosineEmbeddingLoss, whose idea is quite similar to the contrastive loss used in siamese networks. Is this loss a better and more stable version of contrastive loss, or is there a paper proposing this loss?
st100649
Can anyone explain the inputs of Conv3d? What is D_in, which is used as one of its inputs? How does it do the convolution? Any additional explanation to understand how it works is appreciated. In addition, is there any way to have input of the form (N, C_in, D_in, H_in, W_in) and output of the form (N, C_out, H_out, W_out), so that D_out is just one? Thanks
st100650
It is just one of the three dimensions the convolution op moves the kernel along. Whether it is time, depth, etc. depends on the user’s interpretation.
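A small shape check of my own (not part of the answer above): if the kernel depth equals D_in, then D_out collapses to 1 and can be squeezed away, which gives the (N, C_out, H_out, W_out) form asked about.

```python
import torch
import torch.nn as nn

x = torch.randn(2, 3, 5, 32, 32)               # (N, C_in, D_in, H_in, W_in)
conv = nn.Conv3d(3, 8, kernel_size=(5, 3, 3))  # kernel depth == D_in
y = conv(x)
print(y.shape)             # torch.Size([2, 8, 1, 30, 30])
print(y.squeeze(2).shape)  # torch.Size([2, 8, 30, 30]) -> (N, C_out, H_out, W_out)
```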
st100651
Hello, I was working on quantization problems. I was wondering if the Caffe2 8-bit runtime will be supported in the future?
st100652
When will ARM Compute Library’s GPU acceleration for Android devices be available in Caffe2?
st100653
I have a tensor like this (image attached: 3 rows of values), and I want to do a convolution operation such that my kernel height is 3 and its width is also 3, but the kernel moves only in one direction (along the width). I tried to use 2D convolution with stride 0 for the height, but it throws an error since the stride should be greater than zero. Any idea how to achieve this in PyTorch? One way could be to do a 1D convolution for each row (1, 2, 3) and put the results of the per-row convolutions back together in a tensor. Is there an easier way to achieve the same?
st100654
I think what you are looking for is a 2D convolution with a 3x3 kernel size and without padding in the dimension you don’t want to traverse.
st100655
Yes, with 2D convolution in PyTorch, it does what’s called “valid padding” by default; that is, it won’t go over the edges, and in this case it won’t move vertically (up or down). The output you expect to get here, from a 3x9 input with a 3x3 kernel with stride 1, is a 1x7 output.
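A quick way to confirm those shapes (my own snippet):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 3, 9)                       # (N, C, H=3, W=9)
conv = nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=0)
print(conv(x).shape)                              # torch.Size([1, 1, 1, 7])
```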
st100656
I have 2 classes (1 representing the positive class and 0 representing the negative class). I calculate weights for my BCE loss as follows:
Weight for zero = 1 / (number of zeros in the entire dataset)
Weight for one = 1 / (number of ones in the entire dataset)
but the problem is that the weights for zeros and ones become very small, like 1e-6 and 1e-4 respectively. I then changed it to:
Weight for zero = 1 - (number of zeros in the entire dataset / total number of pixels in the dataset)
Weight for one = 1 - (number of ones in the entire dataset / total number of pixels in the dataset)
This way I can get the weights in the range [0, 1], with the zero weight close to zero and the one weight close to 1. I would like to know if this is the correct way to do it, or whether there is a suggested way? I saw many related discussions, but they focus on classification and not on segmenting pixels.
st100657
It seems your dataset is skewed towards 1. What about the weights as below: Weight for 0 : Avg number of 1s / Avg number of 0s Weight for 1 : 1 The intuition here is to give a higher weight to 0.
st100658
Do we need to give a larger weight to the zero class, which has more occurrences in the mask, or the other way round? Because in this reference ("Loss weighting for imbalanced classes") it says we need to give more weight to the classes which occur less: you should calculate the weight distribution from your training set, and the question there was whether, for a distribution of 1: 10, 2: 10, 3: 10, 4: 50, 5: 20, the weight vector should be [1/10, 1/10, 1/10, 1/50, 1/20] or [10, 10, 10, 50, 20].
st100659
Oh yes, you are right, it’s the other way around:
Weight for 0: 1
Weight for 1: avg number of 0s / avg number of 1s
The intuition here is to give a higher weight to 1.
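One common way to express this weighting for pixel-wise binary segmentation is the pos_weight argument of BCEWithLogitsLoss (available in recent PyTorch versions). The counts below are placeholders, so this is only a sketch; compute them over your own training masks.

```python
import torch
import torch.nn as nn

num_zeros = 9000000.0   # hypothetical pixel counts over the dataset
num_ones = 1000000.0
pos_weight = torch.tensor([num_zeros / num_ones])   # weight for class 1; class 0 keeps weight 1

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
logits = torch.randn(4, 1, 64, 64)                  # raw model outputs (no sigmoid)
target = torch.randint(0, 2, (4, 1, 64, 64)).float()
loss = criterion(logits, target)
```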
st100660
I’m trying to train a network using multiple CPUs, but I am not sure if the Adam optimizer can be used instead of SGD. Should the optimizer be shared?
st100661
Hi! I’ve found this strange behavior which I think is unexpected. Running the following code, the last 2 printed outputs are different, though I would expect the 2 operations to be equivalent:
import torch
x = torch.randn(3, 3, 1, 2)
print(x[:, 2, ...])
print(torch.tanh(x[...]))
print(torch.tanh(x[:, 2, ...]))
print(torch.tanh(x[:, 2, ...].contiguous()))
It seems that torch.tanh(x[:, 2, ...]) gives the wrong result. Is this supposed to happen? If so, why? Thanks in advance. PyTorch version: 0.4.1, Python version: 3.6
st100662
What .contiguous() does is make the tensor contiguous in memory. What is happening here is that torch.tanh takes the input x[:, 2, ...] and starts with the first element; in memory, the elements that follow are not the ones you would expect for x[:, 2, ...] but rather those of x[...], so it takes those inputs instead. This is resolved by calling .contiguous(), which lays the elements out in the order you expect. Hope this helps.
st100663
Hi! Thank you for your answer! OK, this seems to explain the behavior. However, why doesn’t torch.tanh use the strides to access the right values? This still seems to me quite unexpected behaviour, and it implies I need to call .contiguous() every time I use torch.tanh. I need to apply a few different non-linearities to different subsets of the channels in a CNN: calling .contiguous() on each subset of the channels will end up doing a complete copy of the input tensor at every forward pass, which seems a waste of resources. Is there a way to avoid copying the whole input tensor? Do you plan to support tensor strides? Thanks in advance
st100664
I am sorry, but I am not a PyTorch dev, so I can’t really help with the support question. I usually use .contiguous() whenever the data points are sliced or permuted and hence not consecutive. torch.tanh won’t cause a problem as long as the data points are consecutive. One possible solution is to permute the axes such that the values which get the same non-linearity are consecutive. In your case it would be something like:
x1 = x.permute(0, 2, 3, 1).contiguous()
print(torch.tanh(x1))
print(torch.tanh(x1[:, :, :, -1]))
print(torch.tanh(x1[:, :, :, -1].contiguous()))
The last two lines should return the same result.
st100665
This doesn’t seem to be right. The workaround of @TheShadow29 might work, but it still seems to be weird. Thanks for reporting it!
st100666
I saw the fixed code. It seems related to non-contiguous storage. I am a bit confused about why it is related to UnaryOps. What’s the main difference between UnaryOps and non-UnaryOps?
st100667
this one: http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
st100668
I agree it was a cool tutorial, but I also think it could have been better: (1) Both the encoder and the decoder use a loop over the number of layers, which basically forces the network to share weights across layers. This is not usual, and if it was intended, it could have been made explicit in the text. (2) It could have dealt with attention better. It basically assumes that all sentences have the same length, and if they don’t, then not all attention weights are used. This basically means that the attention weights used may not sum to 1, as they are supposed to.
st100669
"Both the encoder and the decoder use a loop over the number of layers, which basically forces the network to share weights across layers. This is not usual, and if it was intended, it could have been made explicit in the text."
The number of layers is set to 1, so the loop could simply be removed and it would still work properly.
"It could have dealt with attention better. It basically assumes that all sentences have the same length, and if they don’t, then not all attention weights are used. This basically means that the attention weights used may not sum to 1, as they are supposed to."
All the sentences do have the same lengths after pre-processing the data. But we are free to extend this tutorial with variable lengths, in order to explain the pack_padding trick, using an RNN cell for attention (instead of the linear layer). That would be great!
st100670
Check the updated https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb for solutions to both those problems. Sharing weights was a mistake at first, I kept it because it actually converges faster (not sure if there are papers on this but someone should write one) but it should have been explained. The updated version does the normal thing, using the n_layers argument of the nn.GRU constructor. The attention implementation was definitely poor. The updated version covers multiple “correct” versions of attention as seen in Effective Approaches to Attention-based Neural Machine Translation which do not suffer from the fixed length problem. (In fact that paper also touches upon a model called “global location” attention which is closer to the aforementioned implementation, and is shown to perform poorly.) I keep meaning to merge this into the official tutorials repo…
st100671
"The number of layers is set to 1, so the loop could simply be removed and it would still work properly."
Agreed, but just because n_layers = 1; otherwise it would be weird.
"All the sentences do have the same lengths after pre-processing the data."
True, but the shorter sentences were padded, so part of the attention falls on the padding, which is not what you want.
st100672
Thanks @spro! Just one question, is there a good way to mask the attention? Even using the attention from the paper from Manning you mentioned, if the sentences in a batch are of different lengths, you will need to mask some positions. Right now I am directly filling the attention matrix (before applying softmax) with -float(“inf”), but this seems a bit hacky.
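For reference, a minimal sketch of the masking being described (variable names and shapes are mine): fill the padded positions with -inf before the softmax so they receive zero attention.

```python
import torch
import torch.nn.functional as F

scores = torch.randn(2, 5)                                   # (batch, src_len) raw attention scores
lengths = torch.tensor([5, 3])                               # true source lengths per example
mask = torch.arange(5).unsqueeze(0) < lengths.unsqueeze(1)   # nonzero where the position is valid

scores = scores.masked_fill(mask == 0, float('-inf'))
attn = F.softmax(scores, dim=1)                              # padded positions get weight 0
```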
st100673
Just noticed: this doesn’t share the embedding across encoder input / decoder input / output? Per https://arxiv.org/abs/1608.05859 it might be interesting to do that? Also, this is used by the ‘Attention Is All You Need’ paper, https://arxiv.org/abs/1706.03762. Thoughts on how this would be implemented in a PyTorch-idiomatic way? (I can put this into a separate/new thread/topic/question perhaps?)
st100674
It is certainly quite an interesting tutorial. @spro one thing that I think can be added is running the layers on cuda when cuda is available (USE_CUDA = True). In the current version, when cuda is available, the Variables run on cuda but the model/layers don’t.
st100675
AFAIK weight sharing doesn’t make sense for translation, because the embeddings on the encoder and decoder side are totally different (two different languages)… for cases like CharRNN or WordRNN it makes sense and is very (almost too) easy to implement; see https://github.com/pytorch/examples/blob/f2a771a8a2f3a38ec15b11f6f19ac38c8bbaa900/word_language_model/model.py#L28-L31 Edit: it does make sense for the decoder-side inputs & outputs.
st100676
Well, per the paper above, French and English share enough words that it worked OK for them. As for copying the weights, I think that means the second embedding has already been allocated and then we throw it away. That seems a bit ‘unclean’ to me. What if the embedding is huge? I would prefer to be able to create the embedding once, then re-use it, in idiomatic PyTorch.
st100677
Couldn’t we use shared embeddings just like that?
class Seq2Seq(nn.Module):
    def __init__(self):
        super(Seq2Seq, self).__init__()
        self.embed_seq2seq = nn.Embedding(len(vocab), 75, padding_idx=vocab("<pad>"))
        self.lstm_enc = nn.LSTM(75, 150, 2, batch_first=True, bidirectional=False)
        self.lstm_dec = nn.LSTM(75, 150, 2, batch_first=True, bidirectional=False)
        self.linear = nn.Linear(150, 1)
And then in the forward function use self.embed_seq2seq for the encoder and decoder.
st100678
Yes, maybe. Somewhat related question: how to get the output of the encoder and decoder in a PyTorch-idiomatic way, and handle teacher forcing etc.? Something like:
def forward(self, encoder_input, decoder_input, state):
    if encoder_input is not None:
        enc_input_emb = self.embed_seq2seq(encoder_input)
        enc_out, state = self.lstm_enc(enc_input_emb, state)
    if decoder_input is not None:
        decoder_input_emb = self.embed_seq2seq(decoder_input)
        dec_out, state = self.lstm_dec(decoder_input_emb, state)
        embedding_size = self.embed_seq2seq.size()[1]
        batch_size = decoder_input.size()[1]
        seq_len = decoder_input.size()[0]
        dec_out_unemb = dec_out.view(-1, embedding_size) @ self.embed_seq2seq.weight.transpose(0, 1)
        dec_out_unemb = dec_out_unemb.view(seq_len, batch_size, -1)
    return enc_out, dec_out, dec_out_unemb, state
?
st100679
Here you go, a PyTorch implementation of the Transformer model in “Attention Is All You Need”: https://github.com/jadore801120/attention-is-all-you-need-pytorch
st100680
Thanks for the reply. Can you also explain why attn_combine is a cat of attn_applied and embedded? I did it with attn_applied and the previous hidden state instead of embedded, and it works quite similarly.
st100681
Hi, thank you for your recommendation. This is a good tutorial, but I am confused at the moment about the training process, especially the attention. I used TensorFlow before and I am new to PyTorch. I know the attention should be implemented manually instead of via a wrapper. I am wondering how to train the model. In the tutorial, and probably in this link, https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation-batched.ipynb, @spro, it seems the RNN sequence should also be iterated manually, as it shows something like “for t in range(max_target_length):”. So how about the batch iteration in an epoch? Is it here: “for iter in range(1, n_iters + 1):”? Basically, from my understanding, training seq2seq in PyTorch might involve two loops: 1) the batch loop in an epoch, 2) the sequence loop within one batch, feeding word by word until the end of a sequence (maybe max_time). Is that right? I am new to PyTorch and a little bit confused. I tried an LSTM for MNIST classification as a beginner program, and it seems that the LSTM can be run on a whole sequence at once instead of feeding word by word. That is why I got stuck here. Could you help me with it? @hughperkins @spro
st100682
Curious about something: it seems odd that the attention weights are calculated without looking at encoder_output. Relevant line is: attn_weights = F.softmax(self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1) This seems weird – do you have an intuition on why this would work? I thought you had to look at the encoder when calculating weights (your newer tutorial versions do use the encoder_outputs as I would expect, btw)
st100683
In a typical neural network, you have a first layer of input neurons, then a layer of connections to the next layer (synapses) with trainable weight parameters, a layer of neurons with an activation function, followed by another layer of connections with trainable weights, etc. But often the synapses are considered part of the neuron itself. My question is: does, for example, torch.nn.ReLU() include this synapse layer? Or is it just the neuron proper (activation function), with the synapse layer (connection weights) separated out into, for example, torch.nn.Linear()? Will this network be missing synapses between the hidden layers (no trainable weights)?
net1 = torch.nn.Sequential(
    torch.nn.Linear(In_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.ReLU(),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Out_Dimension),
)
In contrast with this:
net2 = torch.nn.Sequential(
    torch.nn.Linear(In_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Out_Dimension),
)
st100684
No, it doesn’t, so net1 is bogus and net2 is what you want. And, if I may add, I have seen the activation attached to the synapses in your terminology (e.g. Keras does that), but the other way round looks unusual to me. Best regards, Thomas
st100685
Issue description: The official support for batched inversion has not been released, so I have coded a snippet for the operation. Is this implementation using LU decomposition right? And any solution for optimizing this code? Hope it could help anyone who needs this.
Code: Below is my implementation, and one can also refer to the gist.
def inv(A, eps=1e-10):
    assert len(A.shape) == 3 and A.shape[1] == A.shape[2]
    n = A.shape[1]
    U = A.clone().data
    L = A.new_zeros(A.shape).data
    L[:, range(n), range(n)] = 1
    I = L.clone()
    # A = LU
    # [A I] = [LU I] -> [U L^{-1}]
    L_inv = I
    for i in range(n - 1):
        L[:, i+1:, i:i+1] = U[:, i+1:, i:i+1] / (U[:, i:i+1, i:i+1] + eps)
        L_inv[:, i+1:, :] = L_inv[:, i+1:, :] - L[:, i+1:, i:i+1].matmul(L_inv[:, i:i+1, :])
        U[:, i+1:, :] = U[:, i+1:, :] - L[:, i+1:, i:i+1].matmul(U[:, i:i+1, :])
    # [U L^{-1}] -> [I U^{-1}L^{-1}] = [I (LU)^{-1}]
    A_inv = L_inv
    for i in range(n - 1, -1, -1):
        A_inv[:, i:i+1, :] = A_inv[:, i:i+1, :] / (U[:, i:i+1, i:i+1] + eps)
        U[:, i:i+1, :] = U[:, i:i+1, :] / (U[:, i:i+1, i:i+1] + eps)
        if i > 0:
            A_inv[:, :i, :] = A_inv[:, :i, :] - U[:, :i, i:i+1].matmul(A_inv[:, i:i+1, :])
            U[:, :i, :] = U[:, :i, :] - U[:, :i, i:i+1].matmul(U[:, i:i+1, :])
    A_inv_grad = - A_inv.matmul(A).matmul(A_inv)
    return A_inv + A_inv_grad - A_inv_grad.data
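A small sanity check one could run against the snippet above (my own addition, comparing with torch.inverse applied matrix by matrix):

```python
import torch

A = torch.randn(4, 3, 3)
A = A @ A.transpose(1, 2) + 3 * torch.eye(3)     # make the matrices well-conditioned
A_inv = inv(A)                                    # the function defined above
ref = torch.stack([torch.inverse(A[i]) for i in range(A.shape[0])])
print((A_inv - ref).abs().max())                  # should be small (eps adds a tiny bias)
```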
st100686
PyTorch has a few BLAS/LAPACK style operations implemented. torch.btrifact, for instance, could serve as a faster (and probably numerically more stable) alternative.
st100687
I was wondering if it is possible to spawn multiple threads sharing the same model memory (weight sharing) to run asynchronous inference on the CPU. Ideally, I would like to launch a new thread in a server to fulfill a request ASAP, and launch a second one to serve a second request that arrives in the meantime. Is this possible in PyTorch? (i.e. is PyTorch thread-safe?) Cheers, Miguel
st100688
Hi, my dataset consists of 3000 files, 1000 files from each of 3 classes. I use the PyTorch data module data.Dataset:
class passengerShipDataset(data.Dataset):
    def __init__(self, root_dir):
        self.list_of_data_files = glob.glob(root_dir)
        self.len = len(self.list_of_data_files)
    def __getitem__(self, index):
        single_track = torch.from_numpy(np.transpose(genfromtxt(self.list_of_data_files[index], delimiter=','))[:, 1:])
        target = np.zeros(single_track.size(1))
        target[:] = 1
        target = torch.from_numpy(target)  # passenger
        return (single_track, target)
    def __len__(self):
        return self.len
I have three of these datasets and then combine them into a single dataset using:
allShips = data.ConcatDataset([cargoShips, passengerShips, fishingShips])
train_set_all = data.DataLoader(dataset=allShips, batch_size=1, shuffle=True)
This will load one track at a time. Each track is of a different length, and one track has 3 channels: [lat, lon, time]. I then want to run the track through a CNN to extract features and then feed these features to an RNN to classify the track. However, I want to feed only one timestep at a time to my network, not the entire sequence. The idea is that at timestep 1 the network will classify uniformly, like 33% on each class, and then at each consecutive timestep it becomes better and better at classifying the track. When I train my network I see the loss fall throughout the first few 100-1000 timesteps of a track and then stay constant. At the next track, the loss resets back to a high value and then again falls over the first 100-1000 timesteps. This continues forever through all tracks, as if the network learns nothing. Then when I test my network it is only good at classifying one class (the last one trained on) and I get 30% accuracy. I want the loss to fall over several tracks, not within each individual track. Is there a way to run a track through the network one step at a time and only calculate the loss at the end of the track? Right now it is as if my network receives "track length" samples of one class and then becomes very good at classifying that class. But I want my network to see each track as one sample and not as "track length" samples. I think the problem is that the data is not really shuffled, because the network sees one track as many samples of a single class. I can't seem to find any solution to this, because one track is one class, and I want to find features within a track, so I cannot simply feed one timestep from one track and then one timestep from another track.
st100689
I'm still stuck with my loss problem. This is the loss over time when I train one timestep at a time. Each track is 3000 timesteps long. The first plot shows the loss over 50 tracks (50 * 3000 timesteps); the constant low loss between 100000-130000 is because the same class was chosen again due to poor shuffling. The second plot shows the loss for just two consecutive tracks: pretty quickly the loss falls to acceptable values. The loss is NLL, so 0.45 corresponds to roughly 65% confidence in the correct class. Any help regarding how to update my model and correctly calculate the loss is highly appreciated. Right now I input a timestep, then calculate the loss, do the backpropagation, update the weights and biases, then input a new timestep, etc. Unfortunately it is as if my network forgets everything at the next track.
st100690
Hi, guys. I am trying to train a ResNet model on a single node with 8 GPUs using DistributedDataParallel. Everything is OK during the first epoch. However, the script shuts down without any error report when the second epoch starts. I have tried to track this down and find that the code stops at:
for batch_idx, (data, label) in enumerate(train_loader, 0):
I found a similar topic here which implies that this problem may be caused by DistributedSampler. Meanwhile, I created a small version of the dataset with 34 classes, and the error is gone. Here is the code:
torch.cuda.set_device(opt.local_rank)
dist.init_process_group(backend='nccl', init_method='env://', world_size=8)

train_dir = os.path.join(opt.data, 'train')
train_dataset = datasets.ImageFolder(
    train_dir,
    transforms.Compose([transforms.ToTensor()])
)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=opt.batch_s, num_workers=opt.workers,
    pin_memory=True, shuffle=False, sampler=train_sampler
)

input_size = (opt.batch_s, 3, 128, 128)
num_classes = 9092
model = models.se_resnet34_v3(input_size, opt.grid_s, num_classes)
model.cuda()
model = torch.nn.parallel.DistributedDataParallel(model,
    device_ids=[opt.local_rank], output_device=opt.local_rank)

optimizer = optim.SGD([
    {'params': get_parameters(model, bias=False)},
    {'params': get_parameters(model, bias=True), 'lr': opt.lr * 2, 'weight_decay': 0},
    {'params': get_parameters(model, bn=True), 'lr': opt.lr * 1.00001001358, 'weight_decay': 0}
], lr=opt.lr, momentum=opt.momentum, weight_decay=opt.weight_decay)

if opt.resume:
    if os.path.isfile(opt.resume):
        checkpoint = torch.load(opt.resume)
        optimizer.load_state_dict(checkpoint['optimizer'])

scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[8, 15, 24], gamma=0.5)

def train(epoch):
    for batch_idx, (data, label) in enumerate(train_loader, 0):
        optimizer.zero_grad()
        data, label = data.cuda(), label.cuda()
        output, grid = model(data)
        nll_loss = F.nll_loss(output, label)
        de_loss = deformation_constraint_loss(grid, opt.grid_s)
        loss = nll_loss + de_loss
        loss.backward()
        optimizer.step()

for epoch in range(opt.start_epoch, opt.epoch + 1):
    train_sampler.set_epoch(epoch)
    scheduler.step()
    model.train()
    train(epoch)
Command line:
python -m torch.distributed.launch --nproc_per_node=8 train.py
Any help will be appreciated.
st100691
def forward(self, input):
    if self.count % self.k == 0:
        self.mask, rnn_x = self.com_mask(torch.randn())
        return F.conv2d(input, self.weight * self.mask, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
    else:
        return F.conv2d(input, self.weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
Will this code change the conv's original parameters (the weight/bias tensors)? For example, if the mask tensor is full of 0s, will the gradient in the next backpropagation be added to a 0 weight?
st100692
Hi, guys! I'm trying to use DistributedDataParallel to train a ResNet model on the VGGFace2 dataset with 8 GPUs on a single node. Everything is OK during the first epoch. However, the script shuts down without any error report when the second epoch starts. I have tracked the code and find that it stops at
for batch_idx, (data, label) in enumerate(train_loader):
in the second epoch. I have also tried a smaller version of VGGFace2 consisting of 34 classes, and found that this error is gone. Here are the scripts and the output.
Training code:
import argparse, os, time
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.distributed as dist
import torch.utils.data
import torch.utils.data.distributed
import torchvision
from torchvision import datasets, transforms
import numpy as np
import models
from util import *

parser = argparse.ArgumentParser()
parser.add_argument('--start_epoch', type=int, default=1, help='start epoch number')
parser.add_argument('--epoch', type=int, default=25, help='number of epochs to train for')
parser.add_argument('--lr', type=float, default=0.1, help='learning rate, default=0.1')
parser.add_argument('--momentum', type=float, default=0.9, help='momentum, default=0.9')
parser.add_argument('--weight_decay', type=float, default=0.0002, help='weight_decay, default=0.0002')
parser.add_argument('--batch_s', type=int, default=64, help='input batch size')
parser.add_argument('--grid_s', type=int, default=8, help='grid size')
parser.add_argument('--data', type=str, default='../vgg2data', help='data directory')
parser.add_argument('--workers', type=int, default=4, help='number of data loading workers')
parser.add_argument('--output_dir', type=str, default='./output/', help='model_saving directory')
parser.add_argument('--resume', type=str, default='', help='resume')
parser.add_argument("--display_interval", type=int, default=50)
parser.add_argument("--local_rank", type=int)
opt = parser.parse_args()

torch.cuda.set_device(opt.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')

train_dir = os.path.join(opt.data, 'train')
train_dataset = datasets.ImageFolder(
    train_dir,
    transforms.Compose([transforms.ToTensor()])
)
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=opt.batch_s, num_workers=opt.workers,
    pin_memory=True, shuffle=False, sampler=train_sampler
)

input_size = (opt.batch_s, 3, 128, 128)
num_classes = 9092
# num_classes = 34
model = models.se_resnet34_v3(input_size, opt.grid_s, num_classes)
if opt.resume:
    if os.path.isfile(opt.resume):
        print("=> loading checkpoint '{}'".format(opt.resume))
        checkpoint = torch.load(opt.resume)
        model.load_state_dict(checkpoint['state_dict'])
        print("=> loaded checkpoint '{}' (epoch {})".format(opt.resume, checkpoint['epoch']))
model.cuda()
model = torch.nn.parallel.DistributedDataParallel(model,
    device_ids=[opt.local_rank], output_device=opt.local_rank)

optimizer = optim.SGD([
    {'params': get_parameters(model, bias=False)},
    {'params': get_parameters(model, bias=True), 'lr': opt.lr * 2, 'weight_decay': 0},
    {'params': get_parameters(model, bn=True), 'lr': opt.lr * 1.00001001358, 'weight_decay': 0}
], lr=opt.lr, momentum=opt.momentum, weight_decay=opt.weight_decay)

if opt.resume:
    if os.path.isfile(opt.resume):
        checkpoint = torch.load(opt.resume)
        optimizer.load_state_dict(checkpoint['optimizer'])

scheduler = optim.lr_scheduler.MultiStepLR(optimizer,
    milestones=[8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], gamma=0.5)

def train(epoch):
    batch_time = AverageMeter()
    data_time = AverageMeter()
    losses = AverageMeter()
    nll_losses = AverageMeter()
    de_losses = AverageMeter()
    end = time.time()
    for batch_idx, (data, label) in enumerate(train_loader):
        data_time.update(time.time() - end)
        optimizer.zero_grad()
        data, label = data.cuda(), label.cuda()
        output, grid = model(data)
        nll_loss = F.nll_loss(output, label)
        de_loss = deformation_constraint_loss(grid, opt.grid_s)
        loss = nll_loss + de_loss
        losses.update(loss.item(), data.size(0))
        nll_losses.update(nll_loss.item(), data.size(0))
        de_losses.update(de_loss.item(), data.size(0))
        loss.backward()
        optimizer.step()
        batch_time.update(time.time() - end)
        end = time.time()
        if opt.local_rank == 0 and batch_idx % opt.display_interval == 0:
            total_time = int(((opt.epoch - epoch + 1) * len(train_loader) - batch_idx) * batch_time.avg)
            print('Epoch: [{0}][{1}/{2}]\t'
                  'Time {batch_time.val:.3f}\t'
                  'Data {data_time.val:.3f}\t'
                  'RemainTime [{3}:{4}]\t'
                  'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
                  'NLL_Loss {nll_loss.val:.3f} ({nll_loss.avg:.3f})\t'
                  'DE_Loss {de_loss.val:.3f} ({de_loss.avg:.3f})'.format(
                      epoch, batch_idx, len(train_loader),
                      total_time / 3600, total_time % 3600 / 60,
                      batch_time=batch_time, data_time=data_time,
                      loss=losses, nll_loss=nll_losses, de_loss=de_losses))

for epoch in range(opt.start_epoch, opt.epoch + 1):
    train_sampler.set_epoch(epoch)
    scheduler.step()
    model.train()
    train(epoch)
    if opt.local_rank == 0:
        print("=> saving checkpoint '{}'".format('checkpoint' + '_' + str(epoch + 1) + '.pth.tar'))
        save_checkpoint({
            'epoch': epoch + 1,
            'arch': 'resnet34',
            'state_dict': model.module.state_dict(),
            'optimizer': optimizer.state_dict()
        }, 'checkpoint' + '_' + str(epoch + 1) + '.pth.tar')

torch.save(model.module.state_dict(), opt.output_dir + 'grid_face.pt')
print('Done.')
Command line:
python -m torch.distributed.launch --nproc_per_node=8 train.py
Log:
Epoch: [1][0/6215]    Time 12.832  Data 9.403  RemainTime [553:49]  Loss 9.2308 (9.2308)    NLL_Loss 9.157 (9.157)  DE_Loss 0.074 (0.074)
Epoch: [1][50/6215]   Time 0.271   Data 0.000  RemainTime [22:45]   Loss 13.9143 (19.6498)  NLL_Loss 9.073 (9.098)  DE_Loss 4.841 (10.552)
Epoch: [1][100/6215]  Time 0.265   Data 0.000  RemainTime [17:22]   Loss 13.1845 (16.5875)  NLL_Loss 8.963 (9.031)  DE_Loss 4.221 (7.556)
Epoch: [1][150/6215]  Time 0.256   Data 0.000  RemainTime [15:31]   Loss 12.5163 (15.3281)  NLL_Loss 8.679 (8.945)  DE_Loss 3.837 (6.383)
...
Epoch: [1][6150/6215] Time 0.283   Data 0.000  RemainTime [11:36]   Loss 5.2405 (7.9289)    NLL_Loss 3.552 (5.624)  DE_Loss 1.688 (2.305)
Epoch: [1][6200/6215] Time 0.285   Data 0.000  RemainTime [11:36]   Loss 5.6334 (7.9077)    NLL_Loss 3.953 (5.607)  DE_Loss 1.680 (2.300)
=> saving checkpoint 'checkpoint_2.pth.tar'
Does anyone have an idea about this? Thanks.
st100693
I have a 2D tensor which I want to standardize. Each row contains an instance, and each instance is an array of 400 floats. I want to efficiently use the mean/std functions to get the means/stds of all those instances separately, and then use them to standardize my data. So far I was able (I think) to get the means and stds of all instances with this:
means = train_input_data.mean(dim=1)
stds = train_input_data.std(dim=1)
But I don't know how to apply the subtraction and division to all instances. I can do it on one:
train_input_patches[0] = (train_input_patches[0] - means[0]) / stds[0]
but looping through all instances doesn't seem to be the optimal way.
st100694
EDIT: Sorry about the last question; PyTorch supports broadcasting like NumPy, you just have to keep the dimension:
means = train_data.mean(dim=1, keepdim=True)
stds = train_data.std(dim=1, keepdim=True)
normalized_data = (train_data - means) / stds
st100695
Hi, in tensorflow, we have tf.segment_max (https://www.tensorflow.org/api_docs/python/tf/segment_max) but in pytorch, what is the equivalent way to do this? thanks
st100696
Hi, I’m experimenting with torch.einsum and torch.as_strided to implement a convolution. Right now, my implementation uses approximately 6 times more memory than F.conv2d. I was wondering if the added memory consumption comes from torch.as_strided copying data, or simply because my implementation is not as optimized as the CUDA kernel behind F.conv2d. Also, I could not find documentation for as_strided; is it available somewhere? Thanks, Lucas
st100697
Here’s my implementation, just in case:
class einsum_conv(nn.Module):
    def __init__(self, kernel_size):
        super(einsum_conv, self).__init__()
        self.ks = kernel_size

    def forward(self, x, kernel):
        if len(x.size()) == 3:
            x = x.unsqueeze(0)
        assert len(x.size()) == 4, 'need bs x c x h x w format'
        bs, in_c, h, w = x.size()
        ks = self.ks
        strided_x = x.as_strided((bs, in_c, h - ks + 1, w - ks + 1, ks, ks),
                                 (h * w * in_c, h * w, w, 1, w, 1))
        out = torch.einsum('bihwkl,oikl->bohw', (strided_x, kernel))
        return out
st100698
Hi, as_strided does not copy any data. The difference in memory usage might come from the fact that more intermediate results are used. Special care was taken in the conv operation to reduce the number of intermediate results as much as possible.
st100699
It’s probably einsum that’s using that much memory here. Really if you want to implement a padding-less conv, you can use as_strided + matmul (gemm). It should be a lot faster.
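A rough sketch of that as_strided + matmul approach (my own code under the same stride-1, no-padding assumptions; not a drop-in replacement for F.conv2d):

```python
# Flatten each receptive-field patch and compute the convolution as one big GEMM.
import torch
import torch.nn.functional as F

def strided_matmul_conv(x, kernel):
    # x: (bs, in_c, h, w) and contiguous; kernel: (out_c, in_c, ks, ks); stride 1, no padding
    bs, in_c, h, w = x.shape
    out_c, _, ks, _ = kernel.shape
    oh, ow = h - ks + 1, w - ks + 1
    patches = x.as_strided((bs, oh, ow, in_c, ks, ks),
                           (in_c * h * w, w, 1, h * w, w, 1))
    patches = patches.reshape(bs, oh * ow, in_c * ks * ks)   # gather patches (this copies)
    weight = kernel.reshape(out_c, in_c * ks * ks).t()       # (in_c*ks*ks, out_c)
    out = patches.matmul(weight)                             # (bs, oh*ow, out_c)
    return out.permute(0, 2, 1).reshape(bs, out_c, oh, ow)

x = torch.randn(2, 3, 8, 8)
k = torch.randn(5, 3, 3, 3)
print((strided_matmul_conv(x, k) - F.conv2d(x, k)).abs().max())  # should be tiny (float error)
```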