RuntimeError: expected scalar type Float but found Double
My code is as follows:

net = nn.Linear(54, 7)
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0)
logloss = torch.nn.CrossEntropyLoss()
for i in range(niter):
    optimizer.zero_grad()
    y_2 = torch.from_numpy(np.array(y, dtype='float64'))
    X_2 = torch.from_numpy(np.array(X, dtype='float64'))
    outputs = net(X_2)
    loss = logloss(outputs, y_2)
    print(loss)
    loss.backward()
    optimizer.step()

And I got the following error message:

---> 57 outputs = net(X_2)
     58 print(np.shape(outputs))
     59 loss = logloss(outputs, y_2)

~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
    94
    95     def forward(self, input: Tensor) -> Tensor:
--> 96         return F.linear(input, self.weight, self.bias)
    97
    98     def extra_repr(self) -> str:

~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
   1845     if has_torch_function_variadic(input, weight):
   1846         return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1847     return torch._C._nn.linear(input, weight, bias)
   1848
   1849

RuntimeError: expected scalar type Float but found Double

Can you tell me what my problem is? Thank you. I expected that converting the arrays through torch.from_numpy(np.array(y, dtype='float64')) would give me floats, but it does not work.
You need to cast your tensors to float32, either by creating the arrays with dtype='float32' or by calling .float() on your input tensors. nn.Linear's parameters are float32 by default, while dtype='float64' produces double-precision tensors, hence the mismatch. Note also that CrossEntropyLoss expects the targets y_2 to be integer class indices (torch.long), not floats.
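A minimal sketch of both fixes, with random stand-in data in place of the asker's X and y:

import numpy as np
import torch
from torch import nn

net = nn.Linear(54, 7)                       # weights are float32 by default

X = np.random.rand(100, 54)                  # stand-in inputs
y = np.random.randint(0, 7, size=100)        # stand-in class labels

X_2 = torch.from_numpy(X.astype('float32'))  # option 1: cast the numpy array
# X_2 = torch.from_numpy(X).float()          # option 2: cast the tensor

y_2 = torch.from_numpy(y).long()             # CrossEntropyLoss wants long targets

outputs = net(X_2)                           # no dtype mismatch anymore
loss = nn.CrossEntropyLoss()(outputs, y_2)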
https://stackoverflow.com/questions/69516366/
StopIteration Error occurs during training while running the train.py file
I am trying to run code from GitHub. The file is called train.py and is supposed to train a neural network on a dataset. However, I get the following error:

(QGN) ubuntu@ip-172-31-13-114:~/QGN$ python train.py
Input arguments: id ade20k arch_encoder resnet50 arch_decoder QGN_dense_resnet34 weights_encoder weights_decoder fc_dim 2048 list_train ./data/train_ade20k.odgt list_val ./data/validation_ade20k.odgt root_dataset ./data/ num_gpus 0 batch_size_per_gpu 2 num_epoch 20 start_epoch 1 epoch_iters 5000 optim SGD lr_encoder 0.02 lr_decoder 0.02 lr_pow 0.9 beta1 0.9 weight_decay 0.0001 deep_sup_scale 1.0 prop_weight 2.0 enhance_weight 2.0 fix_bn 0 num_val 500 num_class 150 transform_dict None workers 40 imgSize [300, 375, 450, 525, 600] imgMaxSize 1000 cropSize 0 padding_constant 32 random_flip True seed 1337 ckpt ./ckpt disp_iter 20 visualize False result ./result gpu_id 0
Model ID: ade20k-resnet50-QGN_dense_resnet34-batchSize0-LR_encoder0.02-LR_decoder0.02-epoch20-lossScale1.0-classScale2.0
# samples: 20210
1 Epoch = 5000 iters
Starting Training!
Traceback (most recent call last):
  File "train.py", line 355, in <module>
    main(args)
  File "train.py", line 217, in main
    train(segmentation_module, iterator_train, optimizers, history, epoch, args)
  File "train.py", line 33, in train
    batch_data = next(iterator)
  File "/home/ubuntu/QGN/lib/utils/data/dataloader.py", line 274, in __next__
    raise StopIteration
StopIteration
Segmentation fault (core dumped)

The code from train.py (lines 211 to 231) is as follows:

# Main loop
history = {'train': {'epoch': [], 'loss': [], 'acc': []}}
print('Starting Training!')
for epoch in range(args.start_epoch, args.num_epoch + 1):
    train(segmentation_module, iterator_train, optimizers, history, epoch, args)
    # checkpointing
    checkpoint(nets, history, args, epoch)
    # evaluation
    args.weights_encoder = os.path.join(args.ckpt, 'encoder_epoch_' + str(epoch) + '.pth')
    args.weights_decoder = os.path.join(args.ckpt, 'decoder_epoch_' + str(epoch) + '.pth')
    iou = eval_train(args)
    # adaptive class weighting
    adjust_crit_weights(segmentation_module, iou, args)
print('Training Done!')

I am not sure if I have shared all the required information; I would appreciate it if any help could be provided to resolve this issue. Just to inform, I have tried the try/except method shared on GitHub at https://github.com/amdegroot/ssd.pytorch/issues/214, but the error still persists. The code from line 30 in train.py is as follows:

# main loop
tic = time.time()
for i in range(args.epoch_iters):
    batch_data = next(iterator)
    data_time.update(time.time() - tic)
    segmentation_module.zero_grad()

I amended the above code as follows:

# main loop
loader_train = torchdata.DataLoader(
    dataset_train,
    batch_size=args.num_gpus,  # we have modified data_parallel
    shuffle=False,  # we do not use this param
    collate_fn=user_scattered_collate,
    num_workers=int(args.workers),
    drop_last=True,
    pin_memory=True)
tic = time.time()
for i in range(args.epoch_iters):
    try:
        batch_data = next(iterator)
    except StopIteration:
        iterator = iter(loader_train)
        batch_data = next(iterator)
    data_time.update(time.time() - tic)
    segmentation_module.zero_grad()

But still no joy. The error still remains.
TL;DR: Your args.epoch_iters is larger than the number of batches in loader_train. Python raises a StopIteration exception when you ask for more batches than there actually are. When you iterate over a Python collection of elements (e.g., a list, a tuple, a DataLoader...), Python needs to know when it reaches the end of that collection; this is signalled by raising the StopIteration exception. A for loop explicitly listens for this exception and uses it to know when to stop. Alas, your code does not use a for loop over loader_train, but rather loops over range(args.epoch_iters) and calls next(iterator) to fetch the batches, so the exception propagates up to you.
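As a sketch of the two usual remedies, reusing the names from the question (and assuming loader_train wraps a map-style dataset, so len() works on it):

# Option 1: never ask for more batches than one pass over the data provides.
num_iters = min(args.epoch_iters, len(loader_train))
iterator = iter(loader_train)
for i in range(num_iters):
    batch_data = next(iterator)
    # ... training step ...

# Option 2: let the for loop drive the DataLoader directly; it consumes
# StopIteration internally and simply stops when the data runs out.
for i, batch_data in enumerate(loader_train):
    if i >= args.epoch_iters:
        break
    # ... training step ...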
https://stackoverflow.com/questions/69520913/
When to put pytorch tensor on GPU?
I'm experimenting with running a neural network on the GPU using PyTorch, and since my data has a somewhat unusual shape I use Dataset and DataLoader to generate batches. My code runs fine on the CPU, but I'm a little confused about when the right time is to put the data on the GPU:

My data is small enough to fit on the GPU all at once. Should I put all the data on the GPU before fitting, so that all DataLoader and Dataset operations take place on the GPU and I get optimal execution speed? The other possibility is to leave all data on the CPU, which could be useful when the data grows larger; in that case, should I call batch.to("cuda") for each batch generated by the DataLoader?

Should I also put the model on the GPU before training? It is small enough to fit.

My raw data are numpy arrays, so I have the freedom to write a Dataset whose __getitem__() method returns numpy arrays, or to convert the numpy arrays to PyTorch tensors and have __getitem__() return tensors. Is one method preferred over the other?
Let me clear one thing up: at the time of passing the data through the model, both your model and the data (that specific batch) have to be on the same device. To make your code work on both GPU and non-GPU environments you can use this line:

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

So if you are looking to use the GPU for training, you must put the model on the GPU before training. Personally, I prefer moving the model to the GPU right when I create the model object: [Answer 3]

model = MyModel().to(device)

Then you also need to put your data on the GPU. One obvious option is putting all the data there at once, but I would advise against that: however small your dataset is, you will get better results by moving one batch at a time to the GPU rather than the whole dataset at once. I know what you are thinking: there is a time delay for copying each batch from CPU to GPU. You are right! But moving one batch at a time leaves GPU memory free for a larger batch size, and a larger batch size generally wins in performance compared to smaller batch sizes with all the data loaded at once. [Answers 1 & 2]

for x_data, y_data in train_dataloader:
    x_data, y_data = x_data.to(device), y_data.to(device)

Finally, about writing __getitem__: the numpy-array-to-tensor conversion is handled by the DataLoader's default collate function, so your __getitem__ can return numpy arrays. But I feel good when I see the conversion written explicitly in my code; it gives me the sense of a complete and easy-to-understand pipeline. [Answer 4]
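Putting the pieces together, a minimal sketch of a device-agnostic training step; MyModel, train_dataloader, and the loss choice are stand-ins for whatever you actually use:

import torch

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

model = MyModel().to(device)               # hypothetical model class
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

for x_data, y_data in train_dataloader:    # the DataLoader yields CPU tensors
    x_data = x_data.to(device)             # move one batch at a time
    y_data = y_data.to(device)
    optimizer.zero_grad()
    loss = criterion(model(x_data), y_data)
    loss.backward()
    optimizer.step()

If the copies ever become a bottleneck, passing pin_memory=True to the DataLoader and using .to(device, non_blocking=True) lets the host-to-device transfer overlap with computation.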
https://stackoverflow.com/questions/69545355/
No CUDA GPUs are available
I get this error from the method below during the model training process. I am using Google Colab to run the code, and the Colab instance does not have a GPU. Is there a way to make the code run on the CPU, without requiring CUDA? How can I fix this error?

def train_model(model, train_loader, val_loader, epoch, loss_function, optimizer, path, early_stop):
    # GPU
    #device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # device = torch.device("cpu")
    device = torch.device("cuda")
    model = model.to(device)
    patience, eval_loss = 0, 0

    # train
    for i in range(epoch):
        total_loss, count = 0, 0
        y_pred = list()
        y_true = list()
        for idx, (x, y) in tqdm(enumerate(train_loader), total=len(train_loader)):
            x, y = x.to(device), y.to(device)
            u, m = model(x)
            predict = torch.sigmoid(torch.sum(u*m, 1))
            y_pred.extend(predict.cpu().detach().numpy())
            y_true.extend(y.cpu().detach().numpy())
            loss = loss_function(predict, y.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total_loss += float(loss)
            count += 1
        train_auc = roc_auc_score(np.array(y_true), np.array(y_pred))
        torch.save(model, path.format(i+1))
        print("Epoch %d train loss is %.3f and train auc is %.3f" % (i+1, total_loss / count, train_auc))

        # verify
        total_eval_loss = 0
        model.eval()
        count_eval = 0
        val_y_pred = list()
        val_true = list()
        for idx, (x, y) in tqdm(enumerate(val_loader), total=len(val_loader)):
            x, y = x.to(device), y.to(device)
            u, m = model(x)
            predict = torch.sigmoid(torch.sum(u*m, 1))
            val_y_pred.extend(predict.cpu().detach().numpy())
            val_true.extend(y.cpu().detach().numpy())
            loss = loss_function(predict, y.float())
            total_eval_loss += float(loss)
            count_eval += 1
        val_auc = roc_auc_score(np.array(y_true), np.array(y_pred))
        print("Epoch %d val loss is %.3f and train auc is %.3f" % (i+1, total_eval_loss / count_eval, val_auc))
The problem is the hard-coded device = torch.device("cuda") line: it overrides the conditional device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') two lines above it, so the model and the data are transferred to a GPU device even when none is available. That is the reason for the error. Remove the hard-coded line and keep the conditional one, and the same code will run on the CPU when no GPU is present. If you will only ever run on the CPU you can also drop the .to(device) calls entirely, since they are no-ops there. One caveat: you still need .detach() before .numpy() on tensors that are part of the autograd graph, so keep predict.cpu().detach().numpy() (or at least predict.detach().numpy() on the CPU); a bare predict.numpy() would raise an error because predict requires grad.
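A sketch of the corrected opening of train_model, reusing the signature from the question:

def train_model(model, train_loader, val_loader, epoch, loss_function,
                optimizer, path, early_stop):
    # picks the GPU when one is present and falls back to the CPU otherwise
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = model.to(device)
    # ... rest of the function unchanged: x, y = x.to(device), y.to(device) ...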
https://stackoverflow.com/questions/69549094/
What does a single "||" mean in pytorch-geometric documents?
For example the "||" (\Vert) in https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GATConv
That documentation page includes a link to an arXiv paper (the graph attention networks paper) that includes the following at the bottom of page three: "where ᵀ represents transposition and || is the concatenation operation." So yes, || is the concatenation operator.
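In tensor terms it is just torch.cat along the feature dimension; a tiny sketch:

import torch

a = torch.tensor([1., 2.])
b = torch.tensor([3., 4.])
print(torch.cat([a, b]))   # tensor([1., 2., 3., 4.]), i.e. "a || b"

This is also why GATConv with concat=True (the default) produces heads * out_channels output features: the per-head results are concatenated rather than averaged.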
https://stackoverflow.com/questions/69549292/
Get the value at a specific index in PyTorch
I have a ground-truth label array of size 5:

y = tensor([958, 85, 244, 182, 294])

I have an output scores array of shape [5, 1000]:

scores = tensor([[ 1.0406,  1.1808,  4.4227,  ...,  4.6864,  8.0145,  5.2128],
        [ 6.9101,  4.6083,  6.9259,  ...,  9.7415,  9.6305,  9.3974],
        [ 7.6097,  4.0396,  4.4560,  ...,  3.4892, 11.6411,  2],
        [ 1.0693,  4.6295,  5.3638,  ..., 10.9041, 10.8380,  9.2077],
        [ 1.7085,  1.4938,  8.6876,  ..., 15.1423,  9.6055,  9.8920]],
       grad_fn=<ViewBackward>)

I want the values from the scores array at the column indices given by y. So for y[0], which is 958, I want the corresponding value from scores[0] at position 958. Is there a direct PyTorch function I can use?
Yes, you can do it by using your y array as an index: scores[torch.arange(5), y]
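A self-contained sketch, with random numbers standing in for the real scores:

import torch

y = torch.tensor([958, 85, 244, 182, 294])
scores = torch.randn(5, 1000)                 # stand-in for the real scores

picked = scores[torch.arange(len(y)), y]      # picked[i] == scores[i, y[i]]

# an equivalent using torch.gather:
picked2 = scores.gather(1, y.unsqueeze(1)).squeeze(1)

assert torch.equal(picked, picked2)

This pattern is commonly used to pull out the per-sample logit of the true class from a (batch, num_classes) score matrix.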
https://stackoverflow.com/questions/69549324/
torchaudio: Error opening '_sample_data\\steam.mp3': File contains data in an unknown format
I'm new to torchaudio and I'm following this tutorial step by step. I'm having a problem loading an mp3 file with torchaudio.info(path). Here is my code:

metadata = torchaudio.info(SAMPLE_MP3_PATH)
print(metadata)

Here is the error that I'm getting:

..
RuntimeError: Error opening '_sample_data\\steam.mp3': File contains data in an unknown format.

torch: v1.9.1+cpu
torchaudio: v0.9.1
torchaudio.info calls into its audio backend to read the metadata. If you are on Windows and the backend is SoundFile, this problem occurs because SoundFile does not support the mp3 format. You can use the code below to see the formats it does support:

import soundfile as sf
sf.available_formats()
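A sketch of the usual diagnostics and a workaround; the conversion step assumes ffmpeg is installed on your machine:

import soundfile as sf
import torchaudio

print(torchaudio.list_audio_backends())  # backends usable on this platform
print(sf.available_formats())            # formats the soundfile backend handles

# Workaround: convert the file to wav first, e.g. on the command line:
#   ffmpeg -i _sample_data/steam.mp3 _sample_data/steam.wav
metadata = torchaudio.info("_sample_data/steam.wav")
print(metadata)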
https://stackoverflow.com/questions/69553112/
Distributing CUDA runtime to customers but it's too big
At my company, we are building software that we need to push to customers whenever we update it (it is deployed to custom hardware). The GPU on that custom hardware is fixed, but sometimes we need to upgrade the CUDA and cuDNN runtimes when we upgrade parts of our software (such as libtorch). The problem is that because of this we have to ship CUDA and cuDNN together, which bloats the binaries to over 2 GB, while the actual size of our executable is only 100 MB. Is there any smart way around this?
https://pytorch.org doesn't advertise it, but there is a static version of libtorch available (replace 'shared' with 'static' in the download URL). Link against those libraries instead. Your binary will be a bit bigger (depending on how much of the library your code uses), but on the plus side you save about 1.2 GB because you no longer have to ship the shared libraries. CUDA and cuDNN should also have static versions available, although they may be missing from some redistributions (such as Anaconda's).
https://stackoverflow.com/questions/69560957/
What is the difference between edge_weight and edge_attr in PyTorch Geometric?
I want to handle weighted undirected graphs in PyTorch Geometric, where the node features are 50-dimensional. I found that node features are handled by the x attribute of the torch_geometric.data.Data class. The edge weights are scalar values, and I found that edge_attr and edge_weight are the attributes for handling edges. I think I should probably use edge_weight; is this correct? Also, what is the difference between edge_attr and edge_weight? I'm not very good at English, so I apologize for that. I hope I can get a good answer. Thank you.
The difference between edge_weight and edge_attr is that edge_weight is always one-dimensional (one scalar value per edge), while edge_attr can be multi-dimensional (a feature vector per edge). You can check the GNN cheatsheet in the PyTorch Geometric documentation to see which models support which of the two.
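A small sketch of a weighted undirected graph with 50-dimensional node features, as in the question (each undirected edge is stored in both directions):

import torch
from torch_geometric.data import Data

x = torch.randn(3, 50)                             # 3 nodes, 50-dim features
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])          # edges 0-1 and 1-2, both directions
edge_weight = torch.tensor([0.5, 0.5, 2.0, 2.0])   # one scalar per directed edge

data = Data(x=x, edge_index=edge_index, edge_weight=edge_weight)

Layers that support weights, such as GCNConv, then take them in the forward call: conv(data.x, data.edge_index, data.edge_weight).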
https://stackoverflow.com/questions/69565890/
Number of neurons in a CNN architecture
I am using a certain CNN architecture; however, I am not sure how to calculate the exact number of neurons I have in it.

self.conv1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(7, 7), padding=(1, 1), stride=(2, 2))
self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(7, 7), padding=(1, 1), stride=(2, 2))
self.conv3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3), padding=(2, 2), stride=(2, 2))
self.fc1 = nn.Linear(64 * 1 * 1, 8)

There is also 2D max-pooling after each convolution layer, with a stride of 2. I can get the number of parameters and GMACs of my network, but I am not sure how to get the number of neurons. Is there a certain way to calculate them? Thanks.
One quick way to get the total parameter count is to: fetch all parameters with nn.Module.parameters; convert the generator to a flattened tensor with torch.nn.utils.parameters_to_vector; and find the total number of elements with torch.Tensor.numel. Having imported parameters_to_vector from torch.nn.utils as p2v, this corresponds to:

>>> p2v(model.parameters()).numel()
44936

If you want to count the parameters yourself:

Convolutions: count kernels and biases. Given the number of input channels in_c, output channels out_c, and kernel size k:
conv = lambda in_c, out_c, k: k*k*in_c*out_c + out_c

Fully-connected layers: just a two-dimensional weight matrix plus biases:
fc = lambda in_c, out_c: in_c*out_c + out_c

Max-pool layers are non-parametrized layers: 0 parameters.

All in all, this gives you:

>>> conv(1, 16, 7) + conv(16, 32, 7) + conv(32, 64, 3) + fc(64, 8)
44936

The word "neurons" is just an abstraction. If you take it to mean the output dimension of each layer, then for convolution layers it depends on the spatial size of the input: given spatial dimension x, kernel size k, padding p, and stride s, the output spatial size is
conv = lambda x, k, p, s: math.floor((x + 2*p - k)/s + 1)
For fully-connected layers, it is simply the number of output features.
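A self-contained check of the 44936 figure (the max-pool layers are omitted since they contribute no parameters):

import torch
from torch import nn
from torch.nn.utils import parameters_to_vector

layers = nn.ModuleList([
    nn.Conv2d(1, 16, kernel_size=7, padding=1, stride=2),
    nn.Conv2d(16, 32, kernel_size=7, padding=1, stride=2),
    nn.Conv2d(32, 64, kernel_size=3, padding=2, stride=2),
    nn.Linear(64 * 1 * 1, 8),
])
print(parameters_to_vector(layers.parameters()).numel())  # 44936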
https://stackoverflow.com/questions/69572575/
How to Convert a Linear Model to Conv1d in PyTorch?
I am new to PyTorch and I cannot translate the Keras models I have in mind into it. I have a really simple linear model in PyTorch as follows:

class linear_model(nn.Module):
    def __init__(self, output, activation=nn.ReLU):
        super(linear_model, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(26, 26), activation(),
            nn.Linear(26, 26), activation(),
            nn.Linear(26, 26), activation(),
            nn.Linear(26, 26), activation(),
            nn.Linear(26, 26), activation(),
            nn.Linear(26, 26), activation(),
            nn.Linear(26, 6)
        )

    def forward(self, x):
        out = self.net(x)
        return out

I want to use Conv1d instead of a linear layer. Something like this:

class resnet(nn.Module):
    def __init__(self, output, activation=nn.ReLU):
        super(resnet, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(26, 64), activation(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), activation(),
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(64, 32, 3, padding=1, stride=2), activation(),
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Linear(32, 6)
        )

    def forward(self, x):
        out = self.net(x)
        return out

The first Conv1d needs a [64, 64, 6] input, and I provide it a [10, 64] input (the batch size is 10). To summarize, I have an array of shape [1, 26] that I want to feed into the network, increase to [1, 64], and then reduce to [1, 6] at the end. I'm getting the concepts of Keras and PyTorch mixed up. How should I correct the problem?

Edit: I have changed the model and now it's working, though I still have no clue what's going on there. Here is the new one:

class custom_model(nn.Module):
    def __init__(self, output, activation=nn.ReLU):
        super(custom_model, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64), nn.Dropout(p=0.5), nn.ReLU(),
            nn.Conv1d(1, 64, kernel_size=5, padding=2), activation(),
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(64, 32, 3, padding=1, stride=2), activation(),
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(32, 6, 1), activation(),
            #nn.Conv1d(6, 6, 1)
            nn.Linear(32, output), nn.Dropout(p=0.5), nn.ReLU(),
            nn.Linear(6, 1),
        )

    def forward(self, x):
        x = torch.unsqueeze(x, 1)
        out = self.net(x)
        return out

And I changed the way I calculate the loss to make it compatible:

outputs = model(inputs)
outputs_squeezed = torch.squeeze(outputs)
loss_value = loss(outputs_squeezed, y_train)

The model I devised is not necessarily correct (I mean, the concept behind it); I just cannot figure out what is happening. To make it clear: I cannot understand what the input and output of Conv1d are and how I can connect them correctly.
The data you are passing is single-point data: one feature vector per sample. The difference between 1D-sequence tensors of shape [B, N, L] and single-point tensors of shape [B, N] is critical for applying N-D convolutions (here B is the batch size, L the sequence length, and N the feature/channel depth). To solve this in your case, just add a dimension to your data so that N = 1, because at the moment you do not have a third dimension. (From your question I am unsure whether you are missing the batch (first) dimension or the feature (second) one, since you mention providing both, but not simultaneously.) In any case, this can be amended by:

input = input[:, None, :]

or

input = torch.unsqueeze(input, 1)
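A quick sketch of how the shapes line up after the unsqueeze, matching the working edit in the question:

import torch
from torch import nn

x = torch.randn(10, 26)     # a batch of 10 feature vectors: shape [B, N] = [10, 26]
x = x.unsqueeze(1)          # -> [10, 1, 26]: one channel, sequence length 26

conv = nn.Conv1d(in_channels=1, out_channels=64, kernel_size=5, padding=2)
out = conv(x)
print(out.shape)            # torch.Size([10, 64, 26])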
https://stackoverflow.com/questions/69589971/
(with CPU) PyTorch: IndexError: index out of range in self. (with CUDA) Assertion `srcIndex < srcSelectDimSize` failed. How to solve?
Today I get the following error when I use BERT with PyTorch and CUDA:

/pytorch/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [234,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Epoch [1/100]
Iter: 0, Train Loss: 1.1, Train Acc: 39.06%, Val Loss: 1.0, Val Acc: 51.90%, Time: 0:00:04 *
Iter: 10, Train Loss: 0.99, Train Acc: 57.81%, Val Loss: 1.0, Val Acc: 52.01%, Time: 0:00:11 *
Iter: 20, Train Loss: 1.0, Train Acc: 42.19%, Val Loss: 0.99, Val Acc: 52.01%, Time: 0:00:17 *
Iter: 30, Train Loss: 1.0, Train Acc: 40.62%, Val Loss: 0.99, Val Acc: 52.12%, Time: 0:00:23 *
Iter: 40, Train Loss: 1.0, Train Acc: 50.00%, Val Loss: 0.98, Val Acc: 52.12%, Time: 0:00:29 *
Iter: 50, Train Loss: 1.1, Train Acc: 43.75%, Val Loss: 0.98, Val Acc: 52.12%, Time: 0:00:35 *
Traceback (most recent call last):
  File "/content/drive/MyDrive/Prediction/run.py", line 38, in <module>
    train(config, model, train_iter, dev_iter, test_iter)
  File "/content/drive/MyDrive/Prediction/train_eval.py", line 50, in train
    outputs = model(trains)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/Prediction/models/BERT+Covid.py", line 68, in forward
    output = self.bert(context, attention_mask=mask)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py", line 1005, in forward
    return_dict=return_dict,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py", line 589, in forward
    output_attentions,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py", line 475, in forward
    past_key_value=self_attn_past_key_value,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py", line 408, in forward
    output_attentions,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py", line 323, in forward
    attention_scores = attention_scores / math.sqrt(self.attention_head_size)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [234,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
#......I SKIPPED SEVERAL LINES DUE TO THE CHARACTER LIMITATION
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [235,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed

To find where exactly things went wrong, I ran my code again on the CPU and got this error: IndexError: index out of range in self.

Traceback (most recent call last):
  File "/content/drive/MyDrive/Prediction/run.py", line 37, in <module>
    train(config, model, train_iter, dev_iter, test_iter)
  File "/content/drive/MyDrive/Prediction/train_eval.py", line 49, in train
    outputs = model(trains)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/drive/MyDrive/Prediction/models/BERT+Covid.py", line 66, in forward
    output = self.bert(context, attention_mask=mask, )
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py", line 993, in forward
    past_key_values_length=past_key_values_length,
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py", line 215, in forward
    inputs_embeds = self.word_embeddings(input_ids)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py", line 160, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 2043, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self

According to the guidance I found online, I have checked the following:

The input length does not exceed the maximum length accepted by the model (the pad size I set is 98, and I printed the shape of the input right before the line that fails; it was indeed (batch_size, pad_size)).
len(tokenizer) == model.config.vocab_size, so this is not the problem either.

I have no idea now what the problem could be. Could anybody help me? My model structure is:

class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.modelConfig = BertConfig.from_pretrained('./bert_pretrain/config.json')
        self.bert = BertModel.from_pretrained(config.bert_path, config=self.modelConfig)
        for param in self.bert.parameters():
            param.requires_grad = False
        self.cls_fc_layer = FCLayer(config.hidden_size, config.word_size, config.dropout_rate)
        self.label_classifier = FCLayer(
            config.word_size + config.numerical_size,
            config.num_classes,
            config.dropout_rate,
            use_activation=False,
        )

    def forward(self, x):
        context = x[0]    # input token ids
        mask = x[2]       # mask
        numerical = x[3]  # size (batch_size, 18)
        output = self.bert(context, attention_mask=mask)
        pooled_output = output[1]                         # size (batch_size, 768)
        pooled_output = self.cls_fc_layer(pooled_output)  # size (batch_size, 18)
        concat_h = torch.cat([pooled_output, numerical], dim=-1)  # size (batch_size, 36)
        logits = self.label_classifier(concat_h)
        return logits
I have resolved it! By printing out the maximum input_ids of each batch:

for i, (trains, labels) in enumerate(train_iter):
    print("train max input:", torch.max(trains[0]))
    print("train min input:", torch.min(trains[0]))
    print("train max label:", torch.max(labels))
    print("train min label:", torch.min(labels))

I got the following output. The max input_id is 21128, while the length of my tokenizer is also 21128, which means the maximum valid input_id should be 21127: this is where the index went out of range!

train max input: tensor(21128, device='cuda:0')
train min input: tensor(0, device='cuda:0')
train max label: tensor(2, device='cuda:0')
train min label: tensor(0, device='cuda:0')

The reason this error occurred is probably that I had manually changed the vocab.txt file of the BERT model (sorry, I was new to that...), and I resolved the problem by reloading the original BERT model, vocab, and config.
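As a reusable sanity check along the same lines (assuming, as in the question, that model.bert is the underlying BertModel and trains[0] holds the token ids):

vocab_size = model.bert.config.vocab_size
for i, (trains, labels) in enumerate(train_iter):
    max_id = torch.max(trains[0]).item()
    # every id must be strictly less than the embedding table size
    assert max_id < vocab_size, f"token id {max_id} out of range for vocab size {vocab_size}"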
https://stackoverflow.com/questions/69596496/
PyTorch requires_grad=False does not freeze network parameters when running on GPU
I'm trying to freeze a layer of a toy model while training with PyTorch. In the following code, when I run it on the CPU, the layer isn't updated (please see the line print("%.8f" % np.max(np.abs(before - after)))). However, when I run the code on the GPU, the layer is updated. What is wrong with my implementation?

import torch
from torch import nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
import numpy as np

def toNP(x):
    return x.detach().to('cpu').numpy()

# toy feed-forward net
class Sub_Net(nn.Module):
    def __init__(self):
        super(Sub_Net, self).__init__()
        self.fc1 = nn.Linear(10, 3)
        self.fc2 = nn.Linear(3, 3)
        self.fc3 = nn.Linear(3, 3)
        self.fc4 = nn.Linear(3, 3)
        self.fc5 = nn.Linear(3, 1)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        x = self.fc4(x)
        x = self.fc5(x)
        return x

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.Sub_Net = Sub_Net()
        self.fc1 = nn.Linear(10, 3)
        self.fc2 = nn.Linear(3, 3)
        self.fc3 = nn.Linear(3, 3)
        self.fc4 = nn.Linear(3, 3)
        self.fc5 = nn.Linear(3, 1)

    def forward(self, x):
        y = self.Sub_Net(x)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        x = self.fc4(x)
        x = self.fc5(x)
        return x + y

def generator_step(net, optimizer, criterion, input, target):
    output = net(input)
    loss = criterion(output, target)
    net.zero_grad()
    loss.backward()
    optimizer.step()

def discrimination_step(net, optimizer, criterion, input, target):
    for param in net.Sub_Net.parameters():
        param.requires_grad = False
    before = toNP(net.Sub_Net.fc2.weight)
    output = net(input)
    loss = criterion(output, target)
    net.zero_grad()
    loss.backward()
    optimizer.step()
    after = toNP(net.Sub_Net.fc2.weight)
    print("%.8f" % np.max(np.abs(before - after)))

# Run model on GPU
# net = Net().type(torch.cuda.FloatTensor)
# random_input = Variable(torch.randn(10, )).cuda()
# random_target = Variable(torch.randn(1, )).cuda()

# Run model on CPU
net = Net()
random_input = Variable(torch.randn(10, ))
random_target = Variable(torch.randn(1, ))

# loss
criterion = nn.MSELoss()
optimizer = optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=0.1)

for epoch in range(1, 10):
    generator_step(net, optimizer, criterion, random_input, random_target)
    discrimination_step(net, optimizer, criterion, random_input, random_target)

Results when running on CPU:

0.00000000
0.00000000
0.00000000
0.00000000
0.00000000
0.00000000
0.00000000
0.00000000
0.00000000

Results when running on GPU:

0.06700575
0.04242781
0.03090768
0.02379489
0.01885229
0.01519108
0.01237211
0.01014686
0.00836059
It is not the CPU and GPU behaving differently with respect to freezing; it is the .to('cpu') call inside your toNP function. If the given tensor is on the GPU it returns a copy on the CPU, while it returns the original object when the tensor is already on the CPU (see the documentation of Tensor.to). To clarify, I added print calls to your discrimination_step as follows:

def discrimination_step(net, optimizer, criterion, input, target):
    for param in net.Sub_Net.parameters():
        param.requires_grad = False
    before = toNP(net.Sub_Net.fc2.weight)
    print(f'Before:\n{before}')
    output = net(input)
    loss = criterion(output, target)
    net.zero_grad()
    loss.backward()
    optimizer.step()
    print(f'Before:\n{before}')
    after = toNP(net.Sub_Net.fc2.weight)
    print(f'After:\n{after}')
    print("diff: %.8f" % np.max(np.abs(before - after)))

The code then yields these results on the CPU (for 1 epoch):

Before:
[[-0.0222426   0.06449176  0.41833472]
 [-0.3276776  -0.22486973  0.38021228]
 [-0.37726757  0.26268137 -0.05000275]]
Before:
[[ 0.04476321  0.13149747  0.48534054]
 [-0.2606718  -0.15786391  0.44721812]
 [-0.31026173  0.32968715  0.01700307]]
After:
[[ 0.04476321  0.13149747  0.48534054]
 [-0.2606718  -0.15786391  0.44721812]
 [-0.31026173  0.32968715  0.01700307]]
diff: 0.00000000

and on the GPU:

Before:
[[-0.06808002  0.39740798  0.55723506]
 [-0.17421165 -0.36702433 -0.4208245 ]
 [-0.37865937 -0.52346057 -0.15856335]]
Before:
[[-0.06808002  0.39740798  0.55723506]
 [-0.17421165 -0.36702433 -0.4208245 ]
 [-0.37865937 -0.52346057 -0.15856335]]
After:
[[-0.13508584  0.4644138   0.6242409 ]
 [-0.24121748 -0.30001852 -0.35381868]
 [-0.31165355 -0.5904664  -0.22556916]]
diff: 0.06700583

On the CPU the before values change across the optimizer step, because the returned before array shares its storage with net.Sub_Net.fc2.weight; your diff therefore compares the weight with itself and always prints zero. The layers are in fact updated on both devices, since their parameters were already registered in the Adam optimizer's parameter groups before you set requires_grad = False.
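One way to make the comparison meaningful on both devices is to snapshot the weights explicitly; a small sketch:

def toNP(x):
    # .copy() forces a snapshot even when the tensor already lives on the CPU,
    # where .to('cpu') would hand back the original storage instead of a copy
    return x.detach().to('cpu').numpy().copy()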
https://stackoverflow.com/questions/69624788/
Exporting PyTorch Lightning model to ONNX format not working
am using Jupyter Lab to run. It has pre-installed tf2.3_py3.6 kernel installed in it. It has 2 GPUS in it. PyTorch Lightning Version (e.g., 1.3.0): '1.4.6' PyTorch Version (e.g., 1.8): '1.6.0+cu101' Python version: 3.6 OS (e.g., Linux): system='Linux' CUDA/cuDNN version: 11.2 How you installed PyTorch (conda, pip, source): pip I am saving the best model in checkpoint. I am doing multi-label classification using Hugging face model. After training the model I want to export the model using ONNX format. The input is attention mask, input ids. Here is the DataModule Class N_EPOCHS = 30 BATCH_SIZE = 10 class SRDataModule(pl.LightningDataModule): def __init__(self, X_train,y_train, X_test,y_test, tokenizer, batch_size=8, max_token_len=512): super().__init__() self.batch_size = batch_size self.train_df = X_train self.test_df = X_test self.train_lab = y_train self.test_lab = y_test self.tokenizer = tokenizer self.max_token_len = max_token_len def setup(self, stage=None): self.train_dataset = SRDataset( self.train_df, self.train_lab, self.tokenizer, self.max_token_len ) self.test_dataset = SRDataset( self.test_df, self.test_lab, self.tokenizer, self.max_token_len ) def train_dataloader(self): return DataLoader( self.train_dataset, batch_size=self.batch_size, shuffle=True, num_workers=10 ) def val_dataloader(self): return DataLoader( self.test_dataset, batch_size=self.batch_size, num_workers=10 ) def test_dataloader(self): return DataLoader( self.test_dataset, batch_size=self.batch_size, num_workers=10 ) Here is the model class: class SRTagger(pl.LightningModule): def __init__(self, n_classes: int, n_training_steps=None, n_warmup_steps=None): super().__init__() self.save_hyperparameters() self.bert = BertModel.from_pretrained(BERT_MODEL_NAME, return_dict=True) self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes) self.n_training_steps = n_training_steps self.n_warmup_steps = n_warmup_steps self.criterion = nn.BCELoss() def forward(self, input_ids, attention_mask, labels=None): output = self.bert(input_ids, attention_mask=attention_mask) output = self.classifier(output.pooler_output) output = torch.sigmoid(output) loss = 0 if labels is not None: loss = self.criterion(output, labels) return loss, output def training_step(self, batch, batch_idx): input_ids = batch[&quot;input_ids&quot;] attention_mask = batch[&quot;attention_mask&quot;] labels = batch[&quot;labels&quot;] loss, outputs = self(input_ids, attention_mask, labels) self.log(&quot;train_loss&quot;, loss, prog_bar=True, logger=True) return {&quot;loss&quot;: loss, &quot;predictions&quot;: outputs, &quot;labels&quot;: labels} def validation_step(self, batch, batch_idx): input_ids = batch[&quot;input_ids&quot;] attention_mask = batch[&quot;attention_mask&quot;] labels = batch[&quot;labels&quot;] loss, outputs = self(input_ids, attention_mask, labels) self.log(&quot;val_loss&quot;, loss, prog_bar=True, logger=True) return loss def test_step(self, batch, batch_idx): input_ids = batch[&quot;input_ids&quot;] attention_mask = batch[&quot;attention_mask&quot;] labels = batch[&quot;labels&quot;] loss, outputs = self(input_ids, attention_mask, labels) self.log(&quot;test_loss&quot;, loss, prog_bar=True, logger=True) return loss def training_epoch_end(self, outputs): labels = [] predictions = [] for output in outputs: for out_labels in output[&quot;labels&quot;].detach().cpu(): labels.append(out_labels) for out_predictions in output[&quot;predictions&quot;].detach().cpu(): predictions.append(out_predictions) labels = 
torch.stack(labels).int() predictions = torch.stack(predictions) for i, name in enumerate(LABEL_COLUMNS): class_roc_auc = auroc(predictions[:, i], labels[:, i]) self.logger.experiment.add_scalar(f&quot;{name}_roc_auc/Train&quot;, class_roc_auc, self.current_epoch) def configure_optimizers(self): optimizer = optim.RAdam(self.parameters(), lr=2e-4) scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=self.n_warmup_steps, num_training_steps=self.n_training_steps ) return dict( optimizer=optimizer, lr_scheduler=dict( scheduler=scheduler, interval='step' ) ) Sample Data sample_batch = next(iter(DataLoader(train_dataset, batch_size=10, num_workers=2))) sample_batch[&quot;input_ids&quot;].shape, sample_batch[&quot;attention_mask&quot;].shape (torch.Size([10, 512]), torch.Size([10, 512])) sample_batch.keys() dict_keys(['text_data', 'input_ids', 'attention_mask', 'labels']) Model model = SRTagger( n_classes=100, n_warmup_steps=warmup_steps, n_training_steps=total_training_steps ) ONNX code # # Export the model torch.onnx.export(model, # model being run ##since model is in the cuda mode, input also need to be (sample_batch[&quot;input_ids&quot;],sample_batch[&quot;attention_mask&quot;]), # model input (or a tuple for multiple inputs) &quot;model_torch_export.onnx&quot;, # where to save the model (can be a file or file-like object) export_params=True, # store the trained parameter weights inside the model file opset_version=10, # the ONNX version to export the model to do_constant_folding=True, # whether to execute constant folding for optimization input_names = ['input'], # the model's input names output_names = ['output'], # the model's output names dynamic_axes={'input' : {0 : 'batch_size'}, # variable lenght axes 'output' : {0 : 'batch_size'}}) Error RuntimeError: output 1 (0 [ CPULongType{} ]) of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer.
That is because at ONNX conversion time the model outputs must be tensors (or tuples/lists of tensors), not arbitrary Python values or dicts. Check your model's forward: when labels is None it returns loss = 0, a plain Python constant with no dependence on the trace inputs, which is exactly what the "did not have observable data dependence with trace inputs" error is complaining about.
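A sketch of one way around it, wrapping the question's SRTagger so the export sees only the prediction tensor (names are taken from the question; the assumption is that you only want the sigmoid outputs in the ONNX graph):

import torch

class ExportWrapper(torch.nn.Module):
    """Returns only the prediction tensor; drops the constant loss=0."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        loss, output = self.model(input_ids, attention_mask)
        return output

wrapped = ExportWrapper(model).eval()
torch.onnx.export(
    wrapped,
    (sample_batch["input_ids"], sample_batch["attention_mask"]),
    "model_torch_export.onnx",
    export_params=True,
    opset_version=10,
    do_constant_folding=True,
    input_names=["input_ids", "attention_mask"],
    output_names=["output"],
    dynamic_axes={"input_ids": {0: "batch_size"},
                  "attention_mask": {0: "batch_size"},
                  "output": {0: "batch_size"}},
)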
https://stackoverflow.com/questions/69648338/
Challenge in replacing SelfAttention with ImageLinearAttention in Vision Transformer
When I am replacing ImageLinearAttention with SelfAttention in Vision Transformer, with the code as follows, I get a RuntimeError. The code for ImageLinearAttention is from https://github.com/lucidrains/linear-attention-transformer/blob/master/linear_attention_transformer/images.py except I removed number of channels as you see in commented code. class ImageLinearAttention(nn.Module): def __init__(self, chan, chan_out = None, kernel_size = 1, padding = 0, stride = 1, key_dim = 64, value_dim = 64, heads = 8, norm_queries = True): super().__init__() self.chan = chan chan_out = chan if chan_out is None else chan_out self.key_dim = key_dim self.value_dim = value_dim self.heads = heads self.norm_queries = norm_queries conv_kwargs = {'padding': padding, 'stride': stride} self.to_q = nn.Conv2d(chan, key_dim * heads, kernel_size, **conv_kwargs) self.to_k = nn.Conv2d(chan, key_dim * heads, kernel_size, **conv_kwargs) self.to_v = nn.Conv2d(chan, value_dim * heads, kernel_size, **conv_kwargs) print('value dim: ', value_dim) print('chan out: ', chan_out) print('kernel_size: ', kernel_size) out_conv_kwargs = {'padding': padding} print('out_conv_kwargs: ', out_conv_kwargs) print('in_chan: ', value_dim * heads) self.to_out = nn.Conv2d(value_dim * heads, chan_out, kernel_size, **out_conv_kwargs) def forward(self, x, context = None): print('x.shape: ', x.shape) print('*x.shape is: ', *x.shape) print('heads: ', self.heads) #b, c, h, w, k_dim, heads = *x.shape, self.key_dim, self.heads b, h, w, k_dim, heads = *x.shape, self.key_dim, self.heads q, k, v = (self.to_q(x), self.to_k(x), self.to_v(x)) q, k, v = map(lambda t: t.reshape(b, heads, -1, h * w), (q, k, v)) q, k = map(lambda x: x * (self.key_dim ** -0.25), (q, k)) if context is not None: #context = context.reshape(b, c, 1, -1) context = context.reshape(b, 1, -1) ck, cv = self.to_k(context), self.to_v(context) ck, cv = map(lambda t: t.reshape(b, heads, k_dim, -1), (ck, cv)) k = torch.cat((k, ck), dim=3) v = torch.cat((v, cv), dim=3) k = k.softmax(dim=-1) if self.norm_queries: q = q.softmax(dim=-2) context = torch.einsum('bhdn,bhen-&gt;bhde', k, v) out = torch.einsum('bhdn,bhde-&gt;bhen', q, context) out = out.reshape(b, -1, h, w) out = self.to_out(out) return out Error is: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [384, 512, 1, 1], but got 3-dimensional input of size [1, 1984, 512] instead Also, my data fed to transformer is of size torch.Size([1983, 512]) and my batch size is 1. Full log is: $ bash scripts/train.sh train: True test: False cam: False preparing datasets and dataloaders...... total_train_num: 176 creating models...... n_class: 2 in_dim: 512 value dim: 64 chan out: 512 kernel_size: 1 out_conv_kwargs: {'padding': 0} in_chan: 768 in_dim: 512 value dim: 64 chan out: 512 kernel_size: 1 out_conv_kwargs: {'padding': 0} in_chan: 768 =&gt;Epoches 1, learning rate = 0.0010000, previous best = 0.0000 torch.Size([1983, 512]) features size: torch.Size([1983, 512]) /SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:129: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn(&quot;Detected call of `lr_scheduler.step()` before `optimizer.step()`. 
&quot; /SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:154: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose. warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning) max_feature_num: 1983 batch feature size: torch.Size([1, 1983, 512]) x.shape: torch.Size([1, 1984, 512]) *x.shape is: 1 1984 512 heads: 12 Traceback (most recent call last): File &quot;main.py&quot;, line 148, in &lt;module&gt; preds,labels,loss = trainer.train(sample_batched, model) File &quot;/SeaExp/mona/research/code/cc/helper.py&quot;, line 71, in train pred,labels,loss = model.forward(feats, labels, masks) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py&quot;, line 166, in forward return self.module(*inputs[0], **kwargs[0]) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/SeaExp/mona/research/code/cc/models/Transformer.py&quot;, line 31, in forward out = self.transformer(X) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/SeaExp/mona/research/code/cc/models/linear_att_ViT.py&quot;, line 262, in forward feat = self.transformer(emb) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/SeaExp/mona/research/code/cc/models/linear_att_ViT.py&quot;, line 206, in forward out = layer(out) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/SeaExp/mona/research/code/cc/models/linear_att_ViT.py&quot;, line 174, in forward out = self.attn(out) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/SeaExp/mona/research/code/cc/models/linear_att_ViT.py&quot;, line 92, in forward q, k, v = (self.to_q(x), self.to_k(x), self.to_v(x)) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/conv.py&quot;, line 443, in forward return self._conv_forward(input, self.weight, self.bias) File &quot;/SeaExp/mona/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/conv.py&quot;, line 439, in _conv_forward return F.conv2d(input, weight, bias, self.stride, RuntimeError: Expected 4-dimensional input for 4-dimensional weight [384, 512, 1, 1], but got 3-dimensional input of size [1, 1984, 512] instead The original SelfAttention code is: class SelfAttention(nn.Module): def __init__(self, in_dim, heads=8, dropout_rate=0.1): super(SelfAttention, self).__init__() self.heads = heads self.head_dim = in_dim // heads self.scale = self.head_dim ** 0.5 self.query = LinearGeneral((in_dim,), (self.heads, self.head_dim)) self.key = LinearGeneral((in_dim,), 
(self.heads, self.head_dim)) self.value = LinearGeneral((in_dim,), (self.heads, self.head_dim)) self.out = LinearGeneral((self.heads, self.head_dim), (in_dim,)) if dropout_rate &gt; 0: self.dropout = nn.Dropout(dropout_rate) else: self.dropout = None def forward(self, x): b, n, _ = x.shape q = self.query(x, dims=([2], [0])) k = self.key(x, dims=([2], [0])) v = self.value(x, dims=([2], [0])) q = q.permute(0, 2, 1, 3) k = k.permute(0, 2, 1, 3) v = v.permute(0, 2, 1, 3) attn_weights = torch.matmul(q, k.transpose(-2, -1)) / self.scale attn_weights = F.softmax(attn_weights, dim=-1) out = torch.matmul(attn_weights, v) out = out.permute(0, 2, 1, 3) out = self.out(out, dims=([2, 3], [0, 1])) return out How can I fix this error? I am calling the ImageSelfAttention as following in the Encoder block of the Vision Transformer: class EncoderBlock(nn.Module): def __init__(self, in_dim, mlp_dim, num_heads, dropout_rate=0.1, attn_dropout_rate=0.1): super(EncoderBlock, self).__init__() self.norm1 = nn.LayerNorm(in_dim) #self.attn = SelfAttention(in_dim, heads=num_heads, dropout_rate=attn_dropout_rate) ## note Mona: not sure if I am correctly passing the params # what about attn_dropout_rate=0.1 ## I don't know print('in_dim: ', in_dim) self.attn = ImageLinearAttention(chan=in_dim, heads=num_heads, key_dim=32) if dropout_rate &gt; 0: self.dropout = nn.Dropout(dropout_rate) else: self.dropout = None self.norm2 = nn.LayerNorm(in_dim) self.mlp = MlpBlock(in_dim, mlp_dim, in_dim, dropout_rate) def forward(self, x): residual = x out = self.norm1(x) out = self.attn(out) if self.dropout: out = self.dropout(out) out += residual residual = out out = self.norm2(out) out = self.mlp(out) out += residual return out The code for SelfAttention and how to use it in encoder is mostly from https://github.com/asyml/vision-transformer-pytorch/blob/main/src/model.py
It looks like ImageLinearAttention works on 4-dimensional inputs of shape (batch, dim, height, width), suited for images, while SelfAttention works on 3-dimensional inputs of shape (batch, sequence length, dim), suited for NLP tasks. The input probably has to be reshaped before being fed to the image-style attention.
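One possible adaptation, treating the token sequence as a one-pixel-tall image so the module's Conv2d projections accept it; this is a sketch assuming the original 4-D forward, not necessarily what the module's authors intended:

import torch

x_seq = torch.randn(1, 1984, 512)            # (batch, seq_len, dim) as in the log

# ImageLinearAttention's convolutions expect (batch, channels, height, width),
# so move the feature dim into the channel slot and add a dummy height of 1:
x_img = x_seq.permute(0, 2, 1).unsqueeze(2)  # -> (1, 512, 1, 1984)
print(x_img.shape)

# after the attention layer, the inverse reshape recovers the sequence layout:
# out_seq = out_img.squeeze(2).permute(0, 2, 1)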
https://stackoverflow.com/questions/69653756/
Batching irregularities with data loader
I have some data in .txt files and an instance formed by two lines which both have 100 elements in them. First line defines the problem and the second line defines the solution. Even though it is not a great idea I tried to use a supervised setting among the data. However, I am facing problems with batching. I have added the code for both the data loader and the main for loop that does the job. The problem I get is that if I set the batch_size to 5 and preds array has the correct form. However, labels array has one more dimension and instead of having 5 integers in it, it has 5 complete problem solutions. I believe the problem is in the data loader but couldn't solve it. I am kinda new to the concept, I have been trying to find this for over a week but nothing has settled so far. Data Loader: import os import torch import torch.nn as nn import torch.nn.functional as F import pdb import numpy as np from torch.utils.data import Dataset class load_dataset(Dataset): def __init__(self, data_file='data.txt', transform=None): super().__init__() data = np.loadtxt(data_file) data = torch.Tensor(data) self.data = data[::2] self.targets = data[1::2] def __len__(self): return len(self.targets) def __getitem__(self, index): adj, target = self.data[index], self.targets[index] return adj, target Main Loop: for inputs, labels in loaders[&quot;train&quot;]: inputs, labels = inputs.view([batch_size, 100]), labels.data scores = mps(inputs) _, preds = torch.max(scores, 1) print(&quot;preds: &quot;) print(preds) print(&quot;labels: &quot;) print(labels) Output: preds: tensor([0, 0, 0, 0, 0]) labels: tensor([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 
0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]])
You haven't shown how you defined your dataloader, but I assume you are wrapping load_dataset with a torch.utils.data.DataLoader and setting batch_size=5. With a batch size of 5 you get 5 "problems" and the corresponding 5 "solutions" in a single batch, each having 100 components. This means inputs and labels will both be tensors shaped (batch_size=5, 100). That is why labels looks one dimension bigger than preds: each label is a full 100-element solution vector, while your preds come from torch.max(scores, 1), which collapses each sample to a single index.
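A quick way to confirm this, reusing the loader from the question:

for inputs, labels in loaders["train"]:
    print(inputs.shape)   # expected: torch.Size([5, 100])
    print(labels.shape)   # expected: torch.Size([5, 100])
    break

# torch.max(scores, 1) returns one index per sample, shape (5,),
# so comparing it directly against the (5, 100) solution vectors won't line up.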
https://stackoverflow.com/questions/69665818/
clang: error: unsupported option '-fopenmp' (SparseConvNet build error)
Thank you first for your help and time. I am trying to run and build SparseConvNet (https://github.com/facebookresearch/SparseConvNet)on my Mac, however, I get the following error after running bash develop.sh on my terminal: running develop running egg_info creating sparseconvnet.egg-info writing sparseconvnet.egg-info/PKG-INFO writing dependency_links to sparseconvnet.egg-info/dependency_links.txt writing top-level names to sparseconvnet.egg-info/top_level.txt writing manifest file 'sparseconvnet.egg-info/SOURCES.txt' package init file 'sparseconvnet/SCN/__init__.py' not found (or not a regular file) reading manifest file 'sparseconvnet.egg-info/SOURCES.txt' writing manifest file 'sparseconvnet.egg-info/SOURCES.txt' running build_ext building 'sparseconvnet.SCN' extension creating build creating build/temp.macosx-10.6-x86_64-3.5 creating build/temp.macosx-10.6-x86_64-3.5/sparseconvnet creating build/temp.macosx-10.6-x86_64-3.5/sparseconvnet/SCN /usr/bin/clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/sahar/opt/anaconda3/envs/mypython3/include -arch x86_64 -I/Users/sahar/opt/anaconda3/envs/mypython3/include -arch x86_64 -I/Users/sahar/Documents/test_sparse/SparseConvNet/sparseconvnet/SCN/ -I/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include -I/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include -I/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/TH -I/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/THC -I/Users/sahar/opt/anaconda3/envs/mypython3/include/python3.5m -c sparseconvnet/SCN/pybind.cpp -o build/temp.macosx-10.6-x86_64-3.5/sparseconvnet/SCN/pybind.o -std=c++14 -fopenmp /usr/local/bin/g++-11 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=SCN -D_GLIBCXX_USE_CXX11_ABI=0 clang: warning: /usr/local/bin/g++-11: 'linker' input unused [-Wunused-command-line-argument] clang: error: unsupported option '-fopenmp' error: command '/usr/bin/clang' failed with exit status 1 after some search I figured out that the problem may be with the clang so I installed gcc as follow: brew install gcc and I added the path in my bash_profile as follow: export CC=/usr/local/bin/gcc-11 export CXX=/usr/local/bin/g++-11 I also modified the following line in setub.py from extra = {'cxx': ['-std=c++14', '-fopenmp','-O3'], 'nvcc': ['-std=c++14', '-Xcompiler', '-fopenmp', '-O3']} to extra = {'cxx': ['-std=c++14', '-fopenmp', &quot;/usr/local/bin/g++-11&quot;], 'nvcc': ['-std=c++14', '-Xcompiler', '-fopenmp', '-O3']} After doing the aforementioned steps I get the following error /usr/local/bin/gcc-11 -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/sahar/opt/anaconda3/envs/mypython3/include -arch x86_64 -I/Users/sahar/opt/anaconda3/envs/mypython3/include -arch x86_64 -I/Users/sahar/Documents/test_sparse/SparseConvNet/sparseconvnet/SCN/ -I/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include -I/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include -I/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/TH -I/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/THC -I/Users/sahar/opt/anaconda3/envs/mypython3/include/python3.5m -c sparseconvnet/SCN/sparseconvnet_cpu.cpp -o 
build/temp.macosx-10.6-x86_64-3.5/sparseconvnet/SCN/sparseconvnet_cpu.o -std=c++14 -fopenmp /usr/local/bin/g++-11 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=SCN -D_GLIBCXX_USE_CXX11_ABI=0 cc1plus: warning: command-line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++ In file included from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/core/Device.h:5, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/core/Allocator.h:6, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/ATen.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/extension.h:4, from sparseconvnet/SCN/sparseconvnet_cpu.cpp:12: /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/jit/attributes.h: In function 'const char* torch::jit::toString(torch::jit::AttributeKind)': /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/jit/attributes.h:21:42: warning: expression does not compute the number of elements in this array; element type is 'const char*', not 'torch::jit::AttributeKind' -Wsizeof-array-div] 21 | AT_ASSERT(size_t(kind) &lt; sizeof(names) / sizeof(AttributeKind)); | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/util/Exception.h:148:39: note: in definition of macro 'C10_EXPAND_MSVC_WORKAROUND' 148 | #define C10_EXPAND_MSVC_WORKAROUND(x) x | ^ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/util/Exception.h:167:34: note: in expansion of macro 'C10_UNLIKELY' 167 | #define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e) | ^~~~~~~~~~~~ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/util/Exception.h:204:7: note: in expansion of macro 'C10_UNLIKELY_OR_CONST' 204 | if (C10_UNLIKELY_OR_CONST(!(cond))) { \ | ^~~~~~~~~~~~~~~~~~~~~ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/util/Exception.h:360:32: note: in expansion of macro 'TORCH_INTERNAL_ASSERT' 360 | C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \ | ^~~~~~~~~~~~~~~~~~~~~ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/jit/attributes.h:21:3: note: in expansion of macro 'AT_ASSERT' 21 | AT_ASSERT(size_t(kind) &lt; sizeof(names) / sizeof(AttributeKind)); | ^~~~~~~~~ 
/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/jit/attributes.h:21:44: note: add parentheses around 'sizeof (torch::jit::AttributeKind)' to silence this warning 21 | AT_ASSERT(size_t(kind) &lt; sizeof(names) / sizeof(AttributeKind)); | ^~~~~~~~~~~~~~~~~~~~~ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/util/Exception.h:148:39: note: in definition of macro 'C10_EXPAND_MSVC_WORKAROUND' 148 | #define C10_EXPAND_MSVC_WORKAROUND(x) x | ^ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/util/Exception.h:167:34: note: in expansion of macro 'C10_UNLIKELY' 167 | #define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e) | ^~~~~~~~~~~~ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/util/Exception.h:204:7: note: in expansion of macro 'C10_UNLIKELY_OR_CONST' 204 | if (C10_UNLIKELY_OR_CONST(!(cond))) { \ | ^~~~~~~~~~~~~~~~~~~~~ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/c10/util/Exception.h:360:32: note: in expansion of macro 'TORCH_INTERNAL_ASSERT' 360 | C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \ | ^~~~~~~~~~~~~~~~~~~~~ /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/jit/attributes.h:21:3: note: in expansion of macro 'AT_ASSERT' 21 | AT_ASSERT(size_t(kind) &lt; sizeof(names) / sizeof(AttributeKind)); | ^~~~~~~~~ In file included from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/jit/ir.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/autograd/generated/variable_factories.h:12, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/types.h:7, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/extension.h:4, from sparseconvnet/SCN/sparseconvnet_cpu.cpp:12: /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/jit/attributes.h:19:22: note: array 'names' declared here 19 | static const char* names[] = { | ^~~~~ In file included from sparseconvnet/SCN/sparseconvnet_cpu.cpp:33: sparseconvnet/SCN/CPU/SparseToDense.cpp: In instantiation of 'void cpu_SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with T = float; int Dimension = 1]': sparseconvnet/SCN/sparseconvnet_cpu.cpp:387:51: required from 'void SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, 
long int) [with int Dimension = 1]' sparseconvnet/SCN/sparseconvnet_cpu.cpp:566:1: required from here sparseconvnet/SCN/CPU/SparseToDense.cpp:48:29: error: cannot convert 'std::array&lt;long int, 3&gt;' to 'c10::IntArrayRef' {aka 'c10::ArrayRef&lt;long long int&gt;'} 48 | output_features.resize_(sz); | ^~ | | | std::array&lt;long int, 3&gt; In file included from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Tensor.h:12, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Context.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/ATen.h:5, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/extension.h:4, from sparseconvnet/SCN/sparseconvnet_cpu.cpp:12: /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/core/TensorMethods.h:961:45: note: initializing argument 1 of 'at::Tensor&amp; at::Tensor::resize_(c10::IntArrayRef) const' 961 | inline Tensor &amp; Tensor::resize_(IntArrayRef size) const { | ~~~~~~~~~~~~^~~~ In file included from sparseconvnet/SCN/sparseconvnet_cpu.cpp:33: sparseconvnet/SCN/CPU/SparseToDense.cpp: In instantiation of 'void cpu_SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with T = float; int Dimension = 2]': sparseconvnet/SCN/sparseconvnet_cpu.cpp:387:51: required from 'void SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with int Dimension = 2]' sparseconvnet/SCN/sparseconvnet_cpu.cpp:569:1: required from here sparseconvnet/SCN/CPU/SparseToDense.cpp:48:29: error: cannot convert 'std::array&lt;long int, 4&gt;' to 'c10::IntArrayRef' {aka 'c10::ArrayRef&lt;long long int&gt;'} 48 | output_features.resize_(sz); | ^~ | | | std::array&lt;long int, 4&gt; In file included from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Tensor.h:12, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Context.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/ATen.h:5, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from 
/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/extension.h:4, from sparseconvnet/SCN/sparseconvnet_cpu.cpp:12: /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/core/TensorMethods.h:961:45: note: initializing argument 1 of 'at::Tensor&amp; at::Tensor::resize_(c10::IntArrayRef) const' 961 | inline Tensor &amp; Tensor::resize_(IntArrayRef size) const { | ~~~~~~~~~~~~^~~~ In file included from sparseconvnet/SCN/sparseconvnet_cpu.cpp:33: sparseconvnet/SCN/CPU/SparseToDense.cpp: In instantiation of 'void cpu_SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with T = float; int Dimension = 3]': sparseconvnet/SCN/sparseconvnet_cpu.cpp:387:51: required from 'void SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with int Dimension = 3]' sparseconvnet/SCN/sparseconvnet_cpu.cpp:572:1: required from here sparseconvnet/SCN/CPU/SparseToDense.cpp:48:29: error: cannot convert 'std::array&lt;long int, 5&gt;' to 'c10::IntArrayRef' {aka 'c10::ArrayRef&lt;long long int&gt;'} 48 | output_features.resize_(sz); | ^~ | | | std::array&lt;long int, 5&gt; In file included from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Tensor.h:12, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Context.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/ATen.h:5, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/extension.h:4, from sparseconvnet/SCN/sparseconvnet_cpu.cpp:12: /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/core/TensorMethods.h:961:45: note: initializing 
argument 1 of 'at::Tensor&amp; at::Tensor::resize_(c10::IntArrayRef) const' 961 | inline Tensor &amp; Tensor::resize_(IntArrayRef size) const { | ~~~~~~~~~~~~^~~~ In file included from sparseconvnet/SCN/sparseconvnet_cpu.cpp:33: sparseconvnet/SCN/CPU/SparseToDense.cpp: In instantiation of 'void cpu_SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with T = float; int Dimension = 4]': sparseconvnet/SCN/sparseconvnet_cpu.cpp:387:51: required from 'void SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with int Dimension = 4]' sparseconvnet/SCN/sparseconvnet_cpu.cpp:575:1: required from here sparseconvnet/SCN/CPU/SparseToDense.cpp:48:29: error: cannot convert 'std::array&lt;long int, 6&gt;' to 'c10::IntArrayRef' {aka 'c10::ArrayRef&lt;long long int&gt;'} 48 | output_features.resize_(sz); | ^~ | | | std::array&lt;long int, 6&gt; In file included from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Tensor.h:12, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Context.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/ATen.h:5, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/extension.h:4, from sparseconvnet/SCN/sparseconvnet_cpu.cpp:12: /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/core/TensorMethods.h:961:45: note: initializing argument 1 of 'at::Tensor&amp; at::Tensor::resize_(c10::IntArrayRef) const' 961 | inline Tensor &amp; Tensor::resize_(IntArrayRef size) const { | ~~~~~~~~~~~~^~~~ In file included from sparseconvnet/SCN/sparseconvnet_cpu.cpp:33: sparseconvnet/SCN/CPU/SparseToDense.cpp: In instantiation of 'void cpu_SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with T = float; int Dimension = 5]': sparseconvnet/SCN/sparseconvnet_cpu.cpp:387:51: required from 'void SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with int Dimension = 5]' sparseconvnet/SCN/sparseconvnet_cpu.cpp:578:1: required from here sparseconvnet/SCN/CPU/SparseToDense.cpp:48:29: error: cannot convert 'std::array&lt;long int, 7&gt;' to 'c10::IntArrayRef' {aka 'c10::ArrayRef&lt;long long int&gt;'} 48 | output_features.resize_(sz); | ^~ | | | std::array&lt;long int, 7&gt; In 
file included from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Tensor.h:12, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Context.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/ATen.h:5, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/extension.h:4, from sparseconvnet/SCN/sparseconvnet_cpu.cpp:12: /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/core/TensorMethods.h:961:45: note: initializing argument 1 of 'at::Tensor&amp; at::Tensor::resize_(c10::IntArrayRef) const' 961 | inline Tensor &amp; Tensor::resize_(IntArrayRef size) const { | ~~~~~~~~~~~~^~~~ In file included from sparseconvnet/SCN/sparseconvnet_cpu.cpp:33: sparseconvnet/SCN/CPU/SparseToDense.cpp: In instantiation of 'void cpu_SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with T = float; int Dimension = 6]': sparseconvnet/SCN/sparseconvnet_cpu.cpp:387:51: required from 'void SparseToDense_updateOutput(at::Tensor&amp;, Metadata&lt;Dimension&gt;&amp;, at::Tensor&amp;, at::Tensor&amp;, long int) [with int Dimension = 6]' sparseconvnet/SCN/sparseconvnet_cpu.cpp:581:1: required from here sparseconvnet/SCN/CPU/SparseToDense.cpp:48:29: error: cannot convert 'std::array&lt;long int, 8&gt;' to 'c10::IntArrayRef' {aka 'c10::ArrayRef&lt;long long int&gt;'} 48 | output_features.resize_(sz); | ^~ | | | std::array&lt;long int, 8&gt; In file included from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Tensor.h:12, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/Context.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/ATen.h:5, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3, from 
/Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/csrc/api/include/torch/all.h:4, from /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/torch/extension.h:4, from sparseconvnet/SCN/sparseconvnet_cpu.cpp:12: /Users/sahar/opt/anaconda3/envs/mypython3/lib/python3.5/site-packages/torch/include/ATen/core/TensorMethods.h:961:45: note: initializing argument 1 of 'at::Tensor&amp; at::Tensor::resize_(c10::IntArrayRef) const' 961 | inline Tensor &amp; Tensor::resize_(IntArrayRef size) const { | ~~~~~~~~~~~~^~~~ error: command '/usr/local/bin/gcc-11' failed with exit status 1 I tested my code both by poetry and anaconda and in both, I get the same error.
I'm unsure about the error you received after adjusting the setup.py file (it may be due to changing the compiler version being called), but your original error seems to relate to the macOS version of clang not natively supporting -fopenmp. A solution for this was posted here: Enable OpenMP support in clang in Mac OS X (sierra & Mojave)
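For reference, a hedged sketch of the usual workaround on macOS: Apple's bundled clang ships without the OpenMP runtime, so you typically install it with brew install libomp and then compile with flags along the lines of clang -Xpreprocessor -fopenmp -lomp -I"$(brew --prefix libomp)/include" -L"$(brew --prefix libomp)/lib" (exact paths depend on your Homebrew prefix). Alternatively, keep using a Homebrew GCC end to end, making sure CC/CXX and the compiler flags in setup.py all point at the same toolchain rather than mixing a g++ path into the flag list.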
https://stackoverflow.com/questions/69689502/
Input to nn.Linear(in_features=16*4*4, out_features=100)
I am performing a CNN on the MNIST dataset with the following model: class ConvNet(nn.Module): def __init__(self, mode): super(ConvNet, self).__init__() # Define various layers here, such as in the tutorial example # self.conv1 = nn.Conv2D(...) #First Convolution Kayer #input size (28,28), output size = (24,24) self.conv1 = nn.Conv2d(1,6,5) self.reLU1 = nn.ReLU(inplace=True) self.MaxPool1 = nn.MaxPool2d(kernel_size=2) #Second Convolution Layer #input size (12,12), output_size = (8,8) self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5) self.reLU2 = nn.ReLU(inplace=True) self.MaxPool2 = nn.MaxPool2d(kernel_size=2) #Affine operations self.fc1 = nn.Linear(in_features = 16*4*4, out_features = 100) self.sig = torch.nn.Sigmoid() self.fc2 = nn.Linear(in_features=100, out_features=10) My forward pass is defined below. def forward_pass(self, X): #Conv Layer #1 X = self.conv1(X) X = self.reLU1(X) X = self.MaxPool1(X) #Conv Layer #2 X = self.conv2(X) X = self.reLU2(X) X = self.MaxPool2(X) print(Tensor.size(X)) #X = X.view() X = self.fc1(X) X = self.sig(X) X = self.fc2(X) return X I get an error when trying to pass the Tensor into the fully connected layer #1 (fc1). This is due to a mismatch in the in_features from my last Convolutional Layers. When I print out the size of the tensor X before my fully connected layer I get tensor.Size([10,16,4,4]). Can anyone explain to me what would be the proper way to calculate the input to the first fully connected layer?
Your classifier's input is shaped (10, 16, 4, 4); discarding the first dimension, which corresponds to the batch size, you end up with 16*4*4 elements. So the in_features value is correct, but the shape isn't: you need to flatten the spatial dimensions before feeding the tensor to fc1. You can do so using nn.Flatten: class ConvNet(nn.Module): def __init__(self, mode): super(ConvNet, self).__init__() ## layer definitions self.flatten = nn.Flatten() def forward(self, X): ## inference on CNN X = self.flatten(X) ## inference on fully-connected layers Here is an inference example (MNIST images are 28x28, which is what yields the 4x4 feature maps): >>> model = ConvNet(mode=None) >>> model(torch.rand(10, 1, 28, 28)).shape torch.Size([10, 10]) Side note: please name your function forward instead of forward_pass, as that is standard practice.
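For reference, a quick sanity check of where 16*4*4 comes from (a sketch assuming the standard 28x28 MNIST input):
# conv1 (kernel 5): 28 - 5 + 1 = 24  ->  24x24
# maxpool (2):      24 // 2    = 12  ->  12x12
# conv2 (kernel 5): 12 - 5 + 1 = 8   ->  8x8
# maxpool (2):      8 // 2     = 4   ->  4x4
print(16 * 4 * 4)  # 256 flattened features, matching nn.Linear(in_features=16*4*4, ...)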
https://stackoverflow.com/questions/69692406/
Error: Some NCCL operations have failed or timed out
While running a distributed training on 4 A6000 GPUs, I get the following error: [E ProcessGroupNCCL.cpp:630] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=1800000) ran for 1803710 milliseconds before timing out. [E ProcessGroupNCCL.cpp:390] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down. terminate called after throwing an instance of 'std::runtime_error' what(): [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=1800000) ran for 1804406 milliseconds before timing out. [E ProcessGroupNCCL.cpp:390] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down. I use the standard NVIDIA PyTorch Docker image. The interesting thing is that training works fine for small datasets, but for bigger datasets I get this error, so I can confirm that the training code is correct and does work. There is no actual runtime error or any other information from which to get an actual error message anywhere.
The following two changes solved the issue: Increase the default SHM (shared memory) for CUDA to 10g (I think 1g would have worked as well). You can do this in the docker run command by passing --shm-size=10g. I also pass --ulimit memlock=-1. export NCCL_P2P_LEVEL=NVL. Debugging tips: To check the current SHM: df -h # see the row for shm. To see NCCL debug messages: export NCCL_DEBUG=INFO. To run the peer-to-peer bandwidth test for the GPU-to-GPU communication link: cd /usr/local/cuda/samples/1_Utilities/p2pBandwidthLatencyTest sudo make ./p2pBandwidthLatencyTest For a 4-GPU A6000 box this prints a matrix showing the bandwidth between each pair of GPUs; with P2P enabled, the numbers should be high.
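For reference, a hedged sketch of the full docker invocation implied above (the image tag is a placeholder; substitute the NVIDIA PyTorch image you actually use): docker run --gpus all --shm-size=10g --ulimit memlock=-1 -e NCCL_P2P_LEVEL=NVL -it <nvidia-pytorch-image>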
https://stackoverflow.com/questions/69693950/
GPU is not available for Pytorch
I installed Anaconda, CUDA, and PyTorch today, and I can't access my GPU (RTX 2070) in torch. I followed all of the installation steps and PyTorch works fine otherwise, but when I try to access the GPU either in a shell or in a script I get >>> import torch >>> torch.cuda.is_available() False >>> torch.cuda.device_count() 0 >>> print(torch.version.cuda) None Running conda list shows this as my installed package cudatoolkit 11.3.1 h59b6b97_2 and running numba -s in the conda environment shows __CUDA Information__ CUDA Device Initialized : True CUDA Driver Version : 11030 CUDA Detect Output: Found 1 CUDA devices id 0 b'NVIDIA GeForce RTX 2070' [SUPPORTED] compute capability: 7.5 pci device id: 0 pci bus id: 1 Summary: 1/1 devices are supported and all of the tests pass with ok. CUDA 11.3 is one of the compute platforms supported both by PyTorch and by my GPU, and that is the version that I installed. I already tried reinstalling CUDA. I am on Windows 10, and nvcc --version shows that CUDA is installed: Build cuda_11.3.r11.3/compiler.29745058_0 Any suggestions would be helpful. Edit: I am using PyTorch 1.10 installed from the generated command on their website, with Python 3.9.7. I also installed PyTorch again in a fresh conda environment and got the same problem.
If you use conda, try updating conda itself first. Doing so worked for me when installing PyTorch 1.10 with CUDA 10.2.
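For reference, a hedged sketch of the steps (the install line is the standard one from the PyTorch site; adjust the CUDA version to your setup): first run conda update -n base -c defaults conda, then conda install pytorch torchvision cudatoolkit=10.2 -c pytorch, and verify in Python with import torch; print(torch.cuda.is_available()).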
https://stackoverflow.com/questions/69694093/
How to split folder with images into train, val and test?
I am using Colab and I have a folder with images. How can I split them into three folders of images with random splitting? I want 0.8 of them in train, 0.1 in val and 0.1 in test. I tried the splitfolders library: splitfolders.ratio("content/data", output="output", seed=1337, ratio=(.8, .1, .1), group_prefix=None) but no new folder appeared, and it's not clear what the folder names would be. How can I do that? Are there any other solutions? PyTorch solutions are very welcome.
Try this on your computer; I'm using this code in my project: import splitfolders # input dataset that you want to split input_folder = 'D:/Raw_DS' output_folder = 'D:/Splitted_DS' splitfolders.ratio(input_folder, output=output_folder, seed=1337, ratio=(0.8, 0.1, 0.1))
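A hedged, Colab-adapted sketch of the same call (this assumes your images actually live under /content/data; note that the question's path content/data is missing the leading slash, so it resolves relative to the current working directory, which may be why no output folder appeared):
import splitfolders

splitfolders.ratio('/content/data', output='/content/output', seed=1337, ratio=(0.8, 0.1, 0.1))
# /content/output should then contain train/, val/ and test/ subfolders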
https://stackoverflow.com/questions/69701114/
test_train_split ValueError: Found input variables with inconsistent numbers of samples: [200000, 6]
I've looked at a couple of other posts with this issue, and I cannot figure out what I'm getting wrong here. I have X_data, and Y_data, and they both have the shape (200000,6). Sample data output from them looks like this: X_data: (200000, 6) [[ 0.00237987 0.00237987 -0.00075756 -0.00221595 -0.00368199 0.00019625] [ 0.00171481 0.00171481 0.00176989 0.00125255 0.00275689 -0.00111833] [ 0.00190234 0.00190234 0.00333571 0.00127516 0.00146631 -0.00240469] ... [ 0.00211437 0.00211437 0.00221987 0.0002214 0.00273094 -0.00114419] [ 0.00185682 0.00185682 0.00352099 0.00064055 -0.00051575 0.00335213] [ 0.00155133 0.00155133 -0.00368774 -0.00200935 0.00225988 -0.00161371]] Y_data: (200000, 6) [[1. 0.14713856 0.04063819 0.03123633 0.00239176 0.01674091] [1. 0.35532772 0.09834969 0.19631962 0.0153588 0.10071312] [1. 0.17015225 0.04700213 0.04208244 0.00322773 0.02244747] ... [1. 0.14534398 0.04014234 0.03046259 0.0023313 0.01633189] [1. 0.18606737 0.05138638 0.0368341 0.00281708 0.01979553] [1. 0.31199003 0.0863072 0.14879644 0.01157114 0.07705023]] As soon as I do test_train_split, as follows: ts1 = 0.2 rs1 = 42 X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data[0], test_size = ts1, random_state = rs1) My code crashes with the value error. I have no idea where I'm going wrong.
It seems like the first column of your Y_data matrix is the label for your X data (I'm not sure what the other 5 columns in your Y_data represent). Y_data[0] selects the first row, which isn't correct: it has 6 elements, whereas you want one y-value for each of the 200000 x inputs, hence the error about inconsistent sample counts [200000, 6]. So the code I think you want is X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data[:, 0], test_size = ts1, random_state = rs1)
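A minimal runnable sketch of the fix (random placeholder data stands in for your real arrays):
import numpy as np
from sklearn.model_selection import train_test_split

X_data = np.random.rand(200000, 6)
Y_data = np.random.rand(200000, 6)
X_train, X_test, Y_train, Y_test = train_test_split(
    X_data, Y_data[:, 0], test_size=0.2, random_state=42)
print(X_train.shape, Y_train.shape)  # (160000, 6) (160000,)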
https://stackoverflow.com/questions/69702046/
Torch not compiled with CUDA enabled - reinstalling pytorch is not working
My code which I'm trying to run gives me an error: AssertionError: Torch not compiled with CUDA enabled. I was trying to search for an solution to this problem and I found a lot of solutions saying the same. Just use code: conda install pytorch torchvision cudatoolkit=10.2 -c pytorch And then it should work, since previously pytorch was installed without CUDA enabled. However, my code still returns me exactly same error and I don't know why. I've also tried to change cudatoolkit=10.2 to cudatoolkit=10.1 but result is identical. For more specific info I upload some specifics obtained with nvidia-smi command: Additionally I'm uploading list of my packages produced by conda list # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 4.5 1_gnu _pytorch_select 0.1 cpu_0 absl-py 0.15.0 pyhd8ed1ab_0 conda-forge aiohttp 3.7.4.post0 py39h3811e60_0 conda-forge async-timeout 3.0.1 py_1000 conda-forge attrs 21.2.0 pyhd8ed1ab_0 conda-forge blas 1.0 mkl blinker 1.4 py_1 conda-forge brotlipy 0.7.0 py39h3811e60_1001 conda-forge bzip2 1.0.8 h7b6447c_0 c-ares 1.17.1 h27cfd23_0 ca-certificates 2021.9.30 h06a4308_1 cachetools 4.2.4 pyhd8ed1ab_0 conda-forge certifi 2021.10.8 py39h06a4308_0 cffi 1.14.6 py39h400218f_0 chardet 4.0.0 py39hf3d152e_1 conda-forge charset-normalizer 2.0.0 pyhd8ed1ab_0 conda-forge click 8.0.3 py39hf3d152e_0 conda-forge colorama 0.4.4 pyh9f0ad1d_0 conda-forge cpuonly 2.0 0 pytorch cryptography 35.0.0 py39hbca0aa6_0 conda-forge cudatoolkit 10.2.89 hfd86e86_1 dataclasses 0.8 pyhc8e2a94_3 conda-forge ffmpeg 4.3 hf484d3e_0 pytorch freetype 2.10.4 h5ab3b9f_0 fsspec 2021.10.1 pyhd8ed1ab_0 conda-forge future 0.18.2 py39hf3d152e_3 conda-forge giflib 5.2.1 h7b6447c_0 gmp 6.2.1 h2531618_2 gnutls 3.6.15 he1e5248_0 google-auth 1.35.0 pyh6c4a22f_0 conda-forge google-auth-oauthlib 0.4.6 pyhd8ed1ab_0 conda-forge grpcio 1.37.1 py39hff7568b_0 conda-forge idna 3.1 pyhd3deb0d_0 conda-forge importlib-metadata 4.8.1 py39hf3d152e_0 conda-forge intel-openmp 2019.4 243 jpeg 9d h7f8727e_0 lame 3.100 h7b6447c_0 lcms2 2.12 h3be6417_0 ld_impl_linux-64 2.35.1 h7274673_9 libffi 3.3 he6710b0_2 libgcc-ng 9.3.0 h5101ec6_17 libgfortran-ng 7.5.0 ha8ba4b0_17 libgfortran4 7.5.0 ha8ba4b0_17 libgomp 9.3.0 h5101ec6_17 libiconv 1.15 h63c8f33_5 libidn2 2.3.2 h7f8727e_0 libmklml 2019.0.5 0 libpng 1.6.37 hbc83047_0 libprotobuf 3.15.8 h780b84a_0 conda-forge libstdcxx-ng 9.3.0 hd4cf53a_17 libtasn1 4.16.0 h27cfd23_0 libtiff 4.2.0 h85742a9_0 libunistring 0.9.10 h27cfd23_0 libuv 1.40.0 h7b6447c_0 libwebp 1.2.0 h89dd481_0 libwebp-base 1.2.0 h27cfd23_0 lz4-c 1.9.3 h295c915_1 markdown 3.3.4 pyhd8ed1ab_0 conda-forge mkl 2020.2 256 mkl-service 2.3.0 py39he8ac12f_0 mkl_fft 1.3.0 py39h54f3939_0 mkl_random 1.0.2 py39h63df603_0 multidict 5.1.0 py39h27cfd23_2 ncurses 6.2 he6710b0_1 nettle 3.7.3 hbbd107a_1 ninja 1.10.2 hff7bd54_1 numpy 1.19.2 py39h89c1606_0 numpy-base 1.19.2 py39h2ae0177_0 oauthlib 3.1.1 pyhd8ed1ab_0 conda-forge olefile 0.46 pyhd3eb1b0_0 openh264 2.1.0 hd408876_0 openssl 1.1.1l h7f8727e_0 packaging 21.0 pyhd8ed1ab_0 conda-forge pillow 8.4.0 py39h5aabda8_0 pip 21.2.4 py39h06a4308_0 protobuf 3.15.8 py39he80948d_0 conda-forge pyasn1 0.4.8 py_0 conda-forge pyasn1-modules 0.2.7 py_0 conda-forge pycparser 2.20 py_2 pydeprecate 0.3.1 pyhd8ed1ab_0 conda-forge pyjwt 2.3.0 pyhd8ed1ab_0 conda-forge pyopenssl 21.0.0 pyhd8ed1ab_0 conda-forge pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge pysocks 1.7.1 py39hf3d152e_3 conda-forge python 3.9.7 h12debd9_1 python_abi 3.9 2_cp39 conda-forge pytorch 1.10.0 py3.9_cpu_0 pytorch pytorch-lightning 
1.4.9 pyhd8ed1ab_0 conda-forge pytorch-mutex 1.0 cpu pytorch pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge pyyaml 5.4.1 py39h3811e60_0 conda-forge readline 8.1 h27cfd23_0 requests 2.26.0 pyhd8ed1ab_0 conda-forge requests-oauthlib 1.3.0 pyh9f0ad1d_0 conda-forge rsa 4.7.2 pyh44b312d_0 conda-forge scipy 1.6.2 py39h91f5cce_0 setuptools 58.0.4 py39h06a4308_0 six 1.16.0 pyhd3eb1b0_0 sqlite 3.36.0 hc218d9a_0 tensorboard 2.6.0 pyhd8ed1ab_1 conda-forge tensorboard-data-server 0.6.0 py39h3da14fd_0 conda-forge tensorboard-plugin-wit 1.8.0 pyh44b312d_0 conda-forge tk 8.6.11 h1ccaba5_0 torchmetrics 0.5.1 pyhd8ed1ab_0 conda-forge torchvision 0.11.1 py39_cpu [cpuonly] pytorch tqdm 4.62.3 pyhd8ed1ab_0 conda-forge typing-extensions 3.10.0.2 hd3eb1b0_0 typing_extensions 3.10.0.2 pyh06a4308_0 tzdata 2021a h5d7bf9c_0 urllib3 1.26.7 pyhd8ed1ab_0 conda-forge werkzeug 2.0.1 pyhd8ed1ab_0 conda-forge wheel 0.37.0 pyhd3eb1b0_1 xz 5.2.5 h7b6447c_0 yaml 0.2.5 h516909a_0 conda-forge yarl 1.6.3 py39h3811e60_2 conda-forge zipp 3.6.0 pyhd8ed1ab_0 conda-forge zlib 1.2.11 h7b6447c_3 zstd 1.4.9 haebb681_0 Could you please help me with resolving the issue?
It seems you have the wrong combination of PyTorch, CUDA, and Python versions: you have installed PyTorch with the build string py3.9_cpu_0 (note also the cpuonly and pytorch-mutex cpu entries in your conda list), which indicates it is the CPU-only version, not the GPU one. You have installed PyTorch 1.10.0, and as far as I know the Python 3.9 builds of it ship with CUDA 11 support only. See the list of available (compiled) versions for CUDA 10.2 and CUDA 11.3.
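A hedged sketch of the fix (the install line matches the command the PyTorch site generated for 1.10 with CUDA 11.3 at the time; double-check against the current selector): first remove the CPU-only packages with conda uninstall pytorch torchvision cpuonly, then reinstall with conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch, and verify with python -c "import torch; print(torch.cuda.is_available())".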
https://stackoverflow.com/questions/69735619/
DCGAN understanding generator update step
Here is some DCGAN example in Pytorch: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#training I wonder why we don't zero discriminator gradients before we update generator? (I added line in original code # netD.zero_grad() # Q: why we don't do this?) Is it because gradients accumulate in some separate variable and don't affect gradients calculation in generator and only can affect update of weights, but on generator update step we only update generator weights so it's fine? Here is code section from training loop: ############################ # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z))) ########################### ## Train with all-real batch netD.zero_grad() # Format batch real_cpu = data[0].to(device) b_size = real_cpu.size(0) label = torch.full((b_size,), real_label, dtype=torch.float, device=device) # Forward pass real batch through D output = netD(real_cpu).view(-1) # Calculate loss on all-real batch errD_real = criterion(output, label) # Calculate gradients for D in backward pass errD_real.backward() D_x = output.mean().item() ## Train with all-fake batch # Generate batch of latent vectors noise = torch.randn(b_size, nz, 1, 1, device=device) # Generate fake image batch with G fake = netG(noise) label.fill_(fake_label) # Classify all fake batch with D output = netD(fake.detach()).view(-1) # Calculate D's loss on the all-fake batch errD_fake = criterion(output, label) # Calculate the gradients for this batch, accumulated (summed) with previous gradients errD_fake.backward() D_G_z1 = output.mean().item() # Compute error of D as sum over the fake and the real batches errD = errD_real + errD_fake # Update D optimizerD.step() ############################ # (2) Update G network: maximize log(D(G(z))) ########################### netG.zero_grad() # netD.zero_grad() # Q: why we don't do this? label.fill_(real_label) # fake labels are real for generator cost # Since we just updated D, perform another forward pass of all-fake batch through D output = netD(fake).view(-1) # Calculate G's loss based on this output errG = criterion(output, label) # Calculate gradients for G errG.backward() D_G_z2 = output.mean().item() # Update G optimizerG.step()
.. gradients accumulate in some separate variable .. Yes. That is correct. There are two storages (read "variables") - one for the parameters and another for their gradients. .. don't affect gradients calculation in generator .. Also correct. Let's look at the computation graph for the relevant part (x) -----> netG ----> (fake) -> netD -> (output) ^ ^ (G_params) --| | | (D_params) ------| Here, things inside () are tensors and the rest are "pure computation units". I have explicitly depicted the parameters as inputs to their own computation units. When you hit .backward(), the following computations take place (read backwards) (GRAD_x) <----- netG <---- (GRAD_fake) <- netD <- (GRAD_output) | ^ (GRAD_G_params)<--| | | (D_params) ----------| I have replaced each variable with its gradient (i.e. GRAD_*) in the backward pass. You see, there is no GRAD_D_params because it's just not required to go further into netG, since netG has no role in producing D_params (they are standalone entities associated only with netD). Simply stated, just because the flow is "going through" netD does not mean it is required to compute/use the gradient w.r.t. its parameters. Computation of GRAD_fake does not require GRAD_D_params (however it requires D_params), which is why there is no reason to clear up the old GRAD_D_params. But technically, there is no harm either in clearing up GRAD_D_params. Question in comment: How is GRAD_fake obtained from GRAD_output and D_params? It's the simple chain rule. Let's rename the variables fake -> x netD -> f() output -> y D_params -> p GRAD_output -> del_L/del_y Then the forward computation simplifies to y = f(x, p). It doesn't matter what the form of f() is; computing the gradient of some loss L w.r.t. x requires two things: 1. del_L/del_y, which is basically GRAD_output; 2. del_y/del_x (or equivalently del_f/del_x), which is basically a complicated function involving (x, p) (not their derivatives).
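A minimal runnable sketch illustrating the point, with toy linear layers standing in for the DCGAN models:
import torch
import torch.nn as nn

netG = nn.Linear(2, 2)                      # stand-in for the generator
netD = nn.Linear(2, 1)                      # stand-in for the discriminator
optimizerG = torch.optim.SGD(netG.parameters(), lr=0.1)

loss = netD(netG(torch.randn(4, 2))).mean()
loss.backward()                             # fills gradients of BOTH netG and netD
print(netD.weight.grad is not None)         # True: D's grads did accumulate
d_weight_before = netD.weight.detach().clone()
optimizerG.step()                           # but only G's parameters are updated
print(torch.equal(netD.weight, d_weight_before))  # True: D's weights are untouched
In the tutorial's loop, the stale D gradients are then wiped anyway by netD.zero_grad() at the start of the next discriminator update.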
https://stackoverflow.com/questions/69739482/
Issue plotting a simple tensor with Torch
Just beginning to use PyTorch, and I am trying to plot a very simple 1-D tensor as a histogram with Matplotlib. torch.manual_seed(8436) a = torch.Tensor(1000) a.normal_(0, 2.) # This will fill our array with a normal distribution plt.hist(a); However, the result is strange, and just consists of a bunch of vertical, multicolored lines. The result I am supposed to get, which I do get when entering plt.hist(a.numpy()), is the normal histogram. Thanks in advance for any help!
OK, it seems to me that here is the reason: cbook._reshape_2D is used to preprocess the data coming into plt.hist. In Matplotlib 3.4.3 it returns a list of arrays with only one element each, which obviously produces the wrong image above. In 3.2.2, however, it returns a list with one 1D array, basically a NumPy version of the tensor we provided, and this one is plotted as expected. I downgraded the package and it worked. I would be interested to hear other solutions.
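As a version-independent alternative (and what the question already found to work), a minimal sketch is to hand Matplotlib a plain NumPy array directly:
plt.hist(a.numpy())
# or, if the tensor might require grad or live on the GPU:
plt.hist(a.detach().cpu().numpy())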
https://stackoverflow.com/questions/69741863/
RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Int': Pytorch
So, I was trying to code a chatbot using Pytorch following this tutorial. Code: (Minimal, Reproducible one) tags = [] for intent in intents['intents']: tag = intent['tag'] tags.append(tag) tags = sorted(set(tags)) X_train = [] X_train = np.array(X_train) class ChatDataset(Dataset): def __init__(self): self.n_sample = len(X_train) self.x_data = X_train #Hyperparameter batch_size = 8 hidden_size = 47 output_size = len(tags) input_size = len(X_train[0]) learning_rate = 0.001 num_epochs = 1000 dataset = ChatDataset() train_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True, num_workers=0) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # using gpu model = NeuralNet(input_size, hidden_size, output_size).to(device) # loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) for epoch in range(num_epochs): for (words, labels) in train_loader: words = words.to(device) labels = labels.to(device) #forward outputs = model(words) loss = criterion(outputs, labels) #the line where it is showing the problem #backward and optimizer step optimizer.zero_grad() loss.backward() optimizer.step() if (epoch +1) % 100 == 0: print(f'epoch {epoch+1}/{num_epochs}, loss={loss.item():.4f}') print(f'final loss, loss={loss.item():.4f}') Full Code (if needed) I am getting this error while trying to get the loss function. RuntimeError: &quot;nll_loss_forward_reduce_cuda_kernel_2d_index&quot; not implemented for 'Int' Traceback: Traceback (most recent call last): File &quot;train.py&quot;, line 91, in &lt;module&gt; loss = criterion(outputs, labels) File &quot;C:\Users\PC\anaconda3\lib\site-packages\torch\nn\modules\module.py&quot;, line 1102, in _call_impl return forward_call(*input, **kwargs) File &quot;C:\Users\PC\anaconda3\lib\site-packages\torch\nn\modules\loss.py&quot;, line 1150, in forward return F.cross_entropy(input, target, weight=self.weight, File &quot;C:\Users\PC\anaconda3\lib\site-packages\torch\nn\functional.py&quot;, line 2846, in cross_entropy return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) RuntimeError: &quot;nll_loss_forward_reduce_cuda_kernel_2d_index&quot; not implemented for 'Int' But looking into the tutorial, it seems to work perfectly there whereas it is not in my case. What to do now? Thanks.
In my case, I solved this problem by converting the targets to torch.LongTensor before moving the data to the GPU, as follows: for inputs, targets in data_loader: targets = targets.type(torch.LongTensor) # casting to long inputs, targets = inputs.to(device), targets.to(device) ... ... loss = self.criterion(output, targets)
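For context (a hedged note, not from the original answer): nn.CrossEntropyLoss expects the target tensor to hold int64 class indices, and NumPy integer arrays default to int32 on Windows, which is the likely source of the 'Int' in the error. An equivalent one-liner cast, assuming your label array is named y_train:
y_train = torch.from_numpy(np.array(y_train)).long()  # int64 class indices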
https://stackoverflow.com/questions/69742930/
How to apply torchvision transformations to zip file
I was trying to download the CelebA dataset and apply transformations to it via this code: from torchvision import transforms from torchvision.datasets import CelebA celeba_transforms = transforms.Compose([ transforms.CenterCrop(140), transforms.Resize([64, 64]), transforms.ToTensor() ]) CelebA(root='path', split='train', download=True, transform=celeba_transforms) However, I obtained the error: BadZipFile: File is not a zip file. I did some research about this error and it seems that it's quite popular and not easy to solve (the problem is with Google Drive, which has a daily maximum quota for any file, and that quota seems to be exceeded for the CelebA files). My solution then was to simply download the file from Kaggle as a zip. However, then I have plain, non-transformed CelebA data. Is there any possibility to still apply celeba_transforms to this data? EDIT With CelebA(root='archive_celeba.zip', split='train', download=False, transform=celeba_transforms) I obtained the error: RuntimeError: Dataset not found or corrupted. You can use download=True to download it. Do you know what I'm doing wrong?
According to the PyTorch documentation: download (bool, optional) – If true, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again. So you can set download to False and the transformations will still be applied to the local dataset. Note that root must point to the directory containing the extracted celeba folder with the files the dataset class expects, not to the zip archive itself, which explains the "Dataset not found or corrupted" error in your edit.
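For illustration, a hedged sketch of the layout torchvision's CelebA class expects after you extract the Kaggle archive (file names per the torchvision source; verify against your download):
# ./data/celeba/img_align_celeba/*.jpg
# ./data/celeba/list_attr_celeba.txt, list_eval_partition.txt, identity_CelebA.txt, ...
dataset = CelebA(root='./data', split='train', download=False, transform=celeba_transforms)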
https://stackoverflow.com/questions/69753849/
How to design a joint loss function with two component with the aim of minimizing the first loss but maximizing the second loss?
I'm trying to do an experiment where there are two subtasks, and the aim is to reduce the error rate of the first task while increasing the error rate of the second task at the same time. This setting may be similar to that of multi-task learning or adversarial learning. My designed loss function is as follows: total_loss = loss1 - alpha * loss2 where I added a weight alpha to make sure the second loss won't completely dominate the influence of loss1. The result shows that, after training for a few epochs, the total loss becomes negative and decreases very quickly. I assume that is because loss1 is already close to 0 but loss2 keeps growing (increasing the error rate is much easier than reducing it). I have never read a paper where a negative loss term is added to the original loss function, so I am wondering whether it is appropriate to use such a loss function, or whether there is a better design for my experiment setting. And is there any paper with a similar optimization aim?
First, let me explain why your loss will not work and will sharply drop into the negatives. total_loss = loss1 - alpha*loss2 You want to minimize loss1 and maximize loss2, and you have combined the two opposing objectives into a single total_loss. You are then most likely training your model/system by minimizing total_loss. As it stands, it doesn't matter what alpha you use. The theoretical absolute minimum for a typical loss (cross-entropy, MSE) is 0. But because of the negative term in your loss, total_loss can be minimized toward negative infinity, so you can't stop it from exploding in the negative direction. Now that we have the explanation, we can think of potential solutions. Since the problem is that your loss tends toward negative infinity, we have to find some operation whose output decreases as its input increases while remaining bounded. If we keep it simple, we could just try using an inverse: total_loss = loss_1 + 1 / (loss_2 + epsilon) The above objective tries to maximize loss_2 so as to make 1/loss_2 approach 0. Another option is tanh, which is bounded in (-1, 1); sigmoid can be used as well: total_loss = loss_1 + 1 - tanh(loss2) total_loss = loss_1 + 1 - sigmoid(loss2) There are probably other, better ways to do this too. Lastly, you need to revisit some questions: Any learning problem has an end goal. What is your theoretical optimum? Is it that system 1's loss is minimized to ~0 and system 2's loss is maximized to infinity (or some large value)? Does system 2 start from an optimal position in the first place? I believe you should also review your approach. Look into adversarial learning approaches, like GANs.
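A minimal PyTorch sketch of the bounded variants above (assuming loss1 and loss2 are scalar tensors produced by standard non-negative criteria):
import torch

def total_loss_inverse(loss1, loss2, epsilon=1e-8):
    # maximizing loss2 drives the second term toward 0
    return loss1 + 1.0 / (loss2 + epsilon)

def total_loss_tanh(loss1, loss2):
    # for loss2 >= 0 the second term lies in (0, 1], so the sum cannot run off to -inf
    return loss1 + 1.0 - torch.tanh(loss2)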
https://stackoverflow.com/questions/69763161/
Linear Regression in PyTorch
It's a simple regression problem. But no matter how much I try, I can't get the answer I want. I'm guessing the output for [4, 8] should be 32 (4 * 8), but the code returns 25. Why is that? This is my full source code: import torch import torch.nn as nn import torch.optim as op X = torch.FloatTensor([[1., 2.],[2., 4.],[3., 6.]]) Y = torch.FloatTensor([[2.],[8.],[18.]]) class TEST(nn.Module): def __init__(self): super(TEST,self).__init__() self.l1 = nn.Linear(2,1) def forward(self, input): x = self.l1(input) return x epochs = 2000 lr = 0.001 model = TEST() loss_func = nn.MSELoss() optimizer = op.SGD(model.parameters(), lr=lr) for epoch in range(epochs): optimizer.zero_grad() output = model(X) loss = loss_func(output, Y) loss.backward() optimizer.step() if epoch%10 == 0: print('loss[{}] : {}'.format(epoch, loss)) XX = torch.FloatTensor([[4., 8.]]) print(model(XX)) This is the output of the code: loss[1920] : 0.8891088366508484 loss[1930] : 0.8890921473503113 loss[1940] : 0.8890781402587891 loss[1950] : 0.8890655636787415 loss[1960] : 0.8890505433082581 loss[1970] : 0.8890388011932373 loss[1980] : 0.889029324054718 loss[1990] : 0.8890181183815002 tensor([[25.3124]], grad_fn=<AddmmBackward>)
You are trying to approximate y = x1*x2 but are using a single linear layer, i.e. a purely linear model. Ultimately, what happens is you are learning weights a and b such that y = a*x1 + b*x2. However, such a model cannot represent the mapping x1, x2 -> x1*x2, which is why the prediction for [4, 8] is off.
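A hedged sketch of one way to make the target learnable while keeping nn.Linear: feed the product x1*x2 in as an extra input feature, so the network only has to learn a weight of 1.0 on that column:
X_aug = torch.cat([X, X[:, :1] * X[:, 1:]], dim=1)  # columns: x1, x2, x1*x2
model = nn.Linear(3, 1)  # y = a*x1 + b*x2 + c*(x1*x2) is now representable
Alternatively, add a hidden layer with a nonlinearity so the network can approximate the product itself.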
https://stackoverflow.com/questions/69769799/
Runtime Error: mat1 and mat2 shapes cannot be multiplied in pytorch
I'm new to deep learning and I have created a model using the code below for the prediction of plant disease class CNN_Model(nn.Module): def __init__(self): super(CNN_Model, self).__init__() self.cnn_model = nn.Sequential( nn.Conv2d(3, 16, 3), nn.ReLU(), nn.MaxPool2d(2, 2), nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2, 2), ) self.fc_model = nn.Sequential( nn.Flatten(), nn.Linear(800, 300), nn.ReLU(), nn.Linear(300, 38), nn.Softmax(dim=1) ) def forward(self, x): x = self.cnn_model(x) x = self.fc_model(x) return x model = CNN_Model() out = model(imgs) out When I'm trying to run the above code, I'm getting the error mat1 and mat2 cannot be multiplied. I have tried the answers posted for questions similar to this, but still, my issue is not solved. RuntimeError Traceback (most recent call last) /tmp/ipykernel_66/1768380315.py in &lt;module&gt; ----&gt; 1 out = model(imgs) 2 out /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /tmp/ipykernel_66/1577403502.py in forward(self, x) 26 def forward(self, x): 27 x = self.cnn_model(x) ---&gt; 28 x = self.fc_model(x) 29 30 return x /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py in forward(self, input) 137 def forward(self, input): 138 for module in self: --&gt; 139 input = module(input) 140 return input 141 /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -&gt; 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input) 94 95 def forward(self, input: Tensor) -&gt; Tensor: ---&gt; 96 return F.linear(input, self.weight, self.bias) 97 98 def extra_repr(self) -&gt; str: /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias) 1845 if has_torch_function_variadic(input, weight): 1846 return handle_torch_function(linear, (input, weight), input, weight, bias=bias) -&gt; 1847 return torch._C._nn.linear(input, weight, bias) 1848 1849 RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x119072 and 800x300) Please someone help me to solve these errors.
The size mismatch error shows 32x119072 and 800x300. The first shape refers to the input tensor, while the second is the parameter matrix of the layer. If you look at your model definition you will see that it matches the first fully connected layer, the one following the flatten: nn.Linear(800, 300) was expecting 800-feature tensors but got 119072-feature tensors instead. You need to modify this linear layer to match the flattened shape of the incoming tensor. But notice that this value depends on the size of the image fed to the CNN, since that ultimately dictates the size of the tensor reaching the classifier. The general way to solve this is to use adaptive layers such as nn.AdaptiveMaxPool2d, which always produces the same output shape regardless of the input's spatial size.
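A hedged sketch of that fix applied to the model above (the 5x5 target is chosen so that 32 channels * 5 * 5 = 800 matches the existing nn.Linear(800, 300)):
self.cnn_model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.AdaptiveMaxPool2d((5, 5)),  # forces a 32x5x5 feature map for any input size
)
self.fc_model = nn.Sequential(
    nn.Flatten(), nn.Linear(800, 300), nn.ReLU(),
    nn.Linear(300, 38), nn.Softmax(dim=1),
)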
https://stackoverflow.com/questions/69778174/
Huggingface giving pytorch index error on sentiment analysis task
I am trying to run sentiment analysis on a dataset of millions of tweets on the server. I am calling a API prediction function that takes a list of 100 tweets and iterate over the test of each tweet to return the huggingface sentiment value, and writes that sentiment to a solr database. However, after the process of few hundred tweets, I get the below error, any suggestions? API code: from transformers import pipeline model = pipeline(task = 'sentiment-analysis',model=&quot;finiteautomata/bertweet-base-sentiment-analysis&quot;) # huggingface sentiment analyser def huggingface_sent(sentence): text=preprocess(sentence) if (len(text)&gt;0): predicted_dic = {'NEG': 'Negative','NEU':'Neutral', 'POS':'Positive'} return predicted_dic[model(text)[0]['label']] else: return 'Neutral' def predict_list(tweets): print('Data Processing\n') predictions={} for t_id in tweets.keys(): if(tweets[t_id]['language']=='en'): predictions[t_id] = huggingface_sent(str(tweets[t_id]['full_text'])) else: predictions[t_id]='NoneEnglish' print('processed ', len(tweets.keys())) print('\n first element is ', predictions[t_id]) return predictions print('Running analyser ....\n') Error log: Token indices sequence length is longer than the specified maximum sequence length for this model (211 &gt; 128). Running this sequence through the model will result in indexing errors [2021-11-01 12:24:20,649] ERROR in app: Exception on /api/predict [POST] Traceback (most recent call last): File &quot;/myusername/anaconda3/lib/python3.8/site-packages/flask/app.py&quot;, line 2447, in wsgi_app response = self.full_dispatch_request() File &quot;/myusername/anaconda3/lib/python3.8/site-packages/flask/app.py&quot;, line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File &quot;/myusername/anaconda3/lib/python3.8/site-packages/flask/app.py&quot;, line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File &quot;/myusername/anaconda3/lib/python3.8/site-packages/flask/_compat.py&quot;, line 39, in reraise raise value File &quot;/myusername/anaconda3/lib/python3.8/site-packages/flask/app.py&quot;, line 1950, in full_dispatch_request rv = self.dispatch_request() File &quot;/myusername/anaconda3/lib/python3.8/site-packages/flask/app.py&quot;, line 1936, in dispatch_request return self.view_functionsrule.endpoint File &quot;/mnt/raid1/diil/sentiment_api/analyser_main.py&quot;, line 11, in api_predict_list predictions = predict_list(tweets) File &quot;/mnt/raid1/diil/sentiment_api/analyser_core.py&quot;, line 84, in predict_list predictions[t_id] = huggingface_sent(str(tweets[t_id]['full_text'])) File &quot;/mnt/raid1/diil/sentiment_api/analyser_core.py&quot;, line 70, in huggingface_sent if model(text): File &quot;/myusername/anaconda3/lib/python3.8/site-packages/transformers/pipelines/text_classification.py&quot;, line 126, in call return super().call(*args, **kwargs) File &quot;/myusername/anaconda3/lib/python3.8/site-packages/transformers/pipelines/base.py&quot;, line 915, in call return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File &quot;/myusername/anaconda3/lib/python3.8/site-packages/transformers/pipelines/text_classification.py&quot;, line 172, in run_single return [super().run_single(inputs, preprocess_params, forward_params, postprocess_params)] File &quot;/myusername/anaconda3/lib/python3.8/site-packages/transformers/pipelines/base.py&quot;, line 922, in run_single model_outputs = self.forward(model_inputs, **forward_params) File 
"/myusername/anaconda3/lib/python3.8/site-packages/transformers/pipelines/base.py", line 871, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/myusername/anaconda3/lib/python3.8/site-packages/transformers/pipelines/text_classification.py", line 133, in _forward return self.model(**model_inputs) File "/myusername/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/myusername/anaconda3/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 1198, in forward outputs = self.roberta( File "/myusername/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/myusername/anaconda3/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 841, in forward embedding_output = self.embeddings( File "/myusername/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/myusername/anaconda3/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 136, in forward position_embeddings = self.position_embeddings(position_ids) File "/myusername/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/myusername/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py", line 2043, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self
As @Quang Hoang mentioned in the comment, it seems the problem is due to the length of your input tweet. Fortunately, you are able to control the behavior of the tokenizer in the pipeline class and truncate longer tweets explicitly. In addition, it's possible to set any other argument for the pipeline's components. MODEL_CHECKPOINT = "finiteautomata/bertweet-base-sentiment-analysis" sentiment_pipeline = pipeline(task="sentiment-analysis", tokenizer=(MODEL_CHECKPOINT, {'model_max_length': 128}), model="finiteautomata/bertweet-base-sentiment-analysis") As a side note, I recommend using the approach presented in this answer to accelerate the entire process.
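If your transformers version does not honor model_max_length there, a hedged fallback is to truncate each tweet manually with the tokenizer before calling the pipeline:
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("finiteautomata/bertweet-base-sentiment-analysis")
ids = tok(text, truncation=True, max_length=128)["input_ids"]
text = tok.decode(ids, skip_special_tokens=True)  # roughly caps the tweet at the 128-token limit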
https://stackoverflow.com/questions/69796828/
Is there any way to create a tensor with a specific pattern in Pytorch?
I'm working with a linear transformation of the form Y = Q(X+A), where X is the input tensor, Y is the output, and Q and A are two tensors to be learned. Q is an arbitrary tensor, so I can use nn.Linear for it. But A is a (differentiable) tensor that has to follow a specific pattern; as a short example, A = [[a0,a1,a2,a2,a2], [a1,a0,a1,a2,a2], [a2,a1,a0,a1,a2], [a2,a2,a1,a0,a1], [a2,a2,a2,a1,a0]]. Such a pattern cannot be expressed with nn.Linear. Is there any way to define such a tensor in PyTorch?
This looks like a Toeplitz matrix. A possible implementation in PyTorch is: def toeplitz(c, r): vals = torch.cat((r, c[1:].flip(0))) shape = len(c), len(r) i, j = torch.ones(*shape).nonzero().T return vals[j-i].reshape(*shape) In your case with a0 as 0, a1 as 1 and a2 as 2: >>> toeplitz(torch.tensor([0,1,2,2,2]), torch.tensor([0,1,2,2,2])) tensor([[0, 1, 2, 2, 2], [1, 0, 1, 2, 2], [2, 1, 0, 1, 2], [2, 2, 1, 0, 1], [2, 2, 2, 1, 0]]) For a more detailed explanation refer to my other answer here.
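Since the construction only uses cat, flip, indexing and reshape, gradients flow through it, so the entries can be learned. A minimal sketch (ToeplitzBias is a hypothetical wrapper name) that learns just the three scalars a0, a1, a2 from the question:

import torch
import torch.nn as nn

def toeplitz(c, r):
    vals = torch.cat((r, c[1:].flip(0)))
    shape = len(c), len(r)
    i, j = torch.ones(*shape).nonzero().T
    return vals[j - i].reshape(*shape)

class ToeplitzBias(nn.Module):
    def __init__(self, n=5, n_params=3):
        super().__init__()
        self.n = n
        self.p = nn.Parameter(torch.randn(n_params))   # a0, a1, a2

    def forward(self, x):
        idx = torch.arange(self.n).clamp(max=len(self.p) - 1)
        v = self.p[idx]                 # [a0, a1, a2, a2, a2]
        A = toeplitz(v, v)              # exactly the banded pattern above
        return x + A                    # feed this into nn.Linear for Q(X+A)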
https://stackoverflow.com/questions/69809789/
Pip install from source without building a wheel
I have a Python package that includes large PyTorch model checkpoints. I try including those in my setup.py as package_data = {'mypackage': ['model_weights/*', 'model_weights/sequential_models*']}, Now the problem is whenever I try to install from source via pip install mypackage/ --no-cache-dir I get a MemoryError. I tried debugging with --verbose and realized that this happens at creating '/tmp/pip-wheel-bs29bp6a/tmpp0itbxn1/mypackage-1.0-py3-none-any.whl' and adding 'build/bdist.linux-x86_64/wheel' to it adding 'mypackage/model_weights/distilled_model.pt' adding 'mypackage-1.0.dist-info/RECORD' Traceback (most recent call last): ... File "/zhome/1d/8/153438/miniconda3/envs/testenv/lib/python3.9/zipfile.py", line 1127, in write data = self._compressor.compress(data) MemoryError Building wheel for mypackage (PEP 517) ... error ERROR: Failed building wheel for mypackage I really only want the installation to copy over the files in model_weights/ to the installation directory. Including them in the wheel appears to be impossible. Is there a way to suppress this step when running pip install? The package will only be distributed as source, never on PyPI, as the model_weights files are far too large anyway.
You can run $ pip install mypackage/ --no-cache-dir --no-binary=mypackage to skip wheel building (assuming mypackage is actually your distribution name - this is what you pass as name to setup() function).
https://stackoverflow.com/questions/69810109/
Is there a way to compute a circulant matrix in Pytorch?
I want a similar function as in https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.circulant.html to create a circulant matrix using PyTorch. I need this as a part of my Deep Learning model (in order to reduce over-parametrization in some of my Fully Connected layers as suggested in https://arxiv.org/abs/1907.08448 (Fig.3)) The input of the function shall be a 1D torch tensor, and the output should be the 2D circulant matrix.
You can make use of unfold to extract sliding windows. But to get the correct order you need to flip (later unflip) the tensors, and first concatenate the flipped tensor to itself. circ=lambda v:torch.cat([f:=v.flip(0),f[:-1]]).unfold(0,len(v),1).flip(0)
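Unpacked, with the walrus operator (:=, Python 3.8+) replaced by an explicit assignment, and a quick check against scipy's layout:

import torch

def circulant(v):
    f = v.flip(0)                                # reversed vector
    buf = torch.cat([f, f[:-1]])                 # wrap-around buffer
    return buf.unfold(0, len(v), 1).flip(0)      # sliding windows, reordered

print(circulant(torch.tensor([1, 2, 3])))
# tensor([[1, 3, 2],
#         [2, 1, 3],
#         [3, 2, 1]])   <- same layout as scipy.linalg.circulant([1, 2, 3])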
https://stackoverflow.com/questions/69820726/
what is the equivalent of torch.load() in tensorflow?
I want to know if there is any way to see the parameters of models in tensorflow. there is a command in pytorch i.e. torch.load('/filepath').
Provided that you already have a model saved at MODEL_PATH, this should do the trick: model = tf.keras.models.load_model(MODEL_PATH) model.summary() Check this out for more info on saving and loading models.
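If the goal is specifically to inspect the parameters (the closest analogue of iterating a PyTorch state_dict), a small sketch; the path is a placeholder:

import tensorflow as tf

model = tf.keras.models.load_model("path/to/saved_model")
for w in model.weights:           # tf.Variable objects with name and shape
    print(w.name, w.shape)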
https://stackoverflow.com/questions/69825651/
Can't install Pytorch in Pycharm terminal, Python 3.10 .win 10
I went to the PyTorch site and took this command: pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio===0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html I have Windows 10, Python version 3.10, and CUDA version 11.5. And I get this error: ERROR: Could not find a version that satisfies the requirement torch==1.10.0+cu113 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch==1.10.0+cu113 I have really struggled to solve this. Please help.
Check out the official issue on Pytorch's Github repository. I've tried your exact command on python 3.9.5 and it works. I believe the issue is that PyTorch is not supported by python 3.10 yet. Downgrading to any 3.9 version of Python should solve your problem.
https://stackoverflow.com/questions/69826153/
How can I read pytorch model file via cv2.dnn.readNetFromTorch()?
I am able to save a custom PyTorch model (this works with any PyTorch version above 1.0). However, I am not able to read the saved model. I am trying to read it via cv2.dnn.readNetFromTorch() so as to use the model in the OpenCV framework (4.1.0). I saved the PyTorch model with the different methods below, to see whether the saving method affects the reading function of cv2.dnn. torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.pt') torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.t7') torch.save(model, '/home/aktaseren/people-opencv/pidxx.t7') torch.save(model, '/home/aktaseren/people-opencv/pidxx.pth') None of these saved files can be read via cv2.dnn.readNetFromTorch(). The error I am getting is always the same, shown below. cv2.error: OpenCV(4.1.0) ../modules/dnn/src/torch/torch_importer.cpp:1022: error: (-213:The function/feature is not implemented) Unsupported Lua type in function 'readObject' Do you have any idea how to solve this issue?
The OpenCV documentation states that readNetFromTorch can only read models in the Torch7 framework format. There is no mention of .pt or .pth files saved by PyTorch. This post mentions that PyTorch does not save as .t7: .t7 was used in Torch7 and is not used in PyTorch. If I'm not mistaken, the file extension does not change the behavior of torch.save. An alternative method is to export the model as ONNX, then read the model in OpenCV using readNetFromONNX.
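A minimal sketch of that ONNX route; the Sequential network is a stand-in for your own model, and the dummy input shape should match your real one:

import torch
import torch.nn as nn
import cv2

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())   # stand-in model
model.eval()
dummy = torch.randn(1, 3, 224, 224)                    # example input shape
torch.onnx.export(model, dummy, "model.onnx", opset_version=11)

net = cv2.dnn.readNetFromONNX("model.onnx")            # works in OpenCV 4.1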
https://stackoverflow.com/questions/69838994/
libtorch throws c10::error after build on Windows 10 (VS2019)
I've tried to build libtorch on Windows 10 using VS 2019, without CUDA and Python. Regardless of whether I compile it with or without MKL, a simple test program crashes directly after start. After building the debug version, libtorch throws a c10::Error in a function called torchCheckFail. The function seems to complain about ATen/core/jit_type.h. The problem is part of torch_cpu.dll. The problem disappears when I'm using the precompiled binaries for Windows. Here's the function: void torchCheckFail( const char* func, const char* file, uint32_t line, const std::string& msg) { throw ::c10::Error({func, file, line}, msg); } And here's the call stack: (screenshot not reproduced here)
I encountered the exact same error with the same environment. A solution that worked for me was to use a release version of PyTorch rather than a non-release one (i.e. a release tag plus some extra commits). Hope it helps.
https://stackoverflow.com/questions/69839674/
Is it possible to train ONNX models developed in tensorflow and pytorch with C++?
I wonder if it's possible to take TensorFlow and PyTorch models converted to ONNX models and train them with the C++ API, like it is done in e.g. https://gist.github.com/asimshankar/5c96acd1280507940bad9083370fe8dc with a TensorFlow model. I have only found examples for inference with ONNX. The idea is to be able to prototype with TensorFlow and PyTorch in Python, convert to ONNX models, and have a unified API in C++ to do inference and training. Any information (or links) would help a lot.
ONNX's GitHub page suggests that it can be used for inference, but it doesn't seem reasonable to be able to train all models with it (from the development perspective). Currently we focus on the capabilities needed for inferencing (scoring). Although there are some difficulties, such as always writing backpropagation is more difficult than feedforwarding, and supporting it would double the framework size, which is not what ONNX is aiming for since there are already so many frameworks for this. To train you will need all the parameters, the derivatives of functions in the GPU and CPU(if its performance is lower than other frameworks, it will be a big problem since nobody will use it). And there are many other things that make a unified framework difficult(Supporting training on multiple GPUs over a network, for example).(So In our perspective, it's great, but in theirs it's so difficult) But we can see that some functionality for training has been added to the framework, in this case it can train transformer models Also, to training transformers in PyTorch you could see this link
https://stackoverflow.com/questions/69853476/
Finding out distance between output of two Convolutional Neural Network (CNN) i.e Siamese Network
I am trying to build a simple Siamese neural network for use in human re-identification. For that, I have used MTCNN (https://github.com/timesler/facenet-pytorch) for face detection and the official PyTorch implementation of the ArcFace algorithm (https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch) for the CNN. I have used a pretrained model (ms1mv3_arcface_r50_fp16) trained on ResNet-50 and a backbone CNN model from their repository. The CNN takes a 112x112 image and produces an array of 512x1, so I get two arrays as the result of the network. I have tried to compare the two arrays using cosine similarity, but it is not giving me the correct results all the time. So, do I need to change my model parameters, or do I need to use another metric for comparison? My code: https://gist.github.com/desertSniper87/26f5f45f4cece9d0f3008e89cea94be8
I've tried Elasticsearch with standard dlib face vectors (128x1) and was fully satisfied with it... ES can store, search and compare such vectors super fast and accurately. I used something like this to create an ES index: from elasticsearch import Elasticsearch es = Elasticsearch([{'host': ELASTIC_HOST, 'port': ELASTIC_PORT }], timeout=30, retry_on_timeout=True, max_retries=3, http_auth=(ELASTIC_NAME, ELASTIC_PSW) ) mapping = { "mappings": { "properties": { "face_vector":{ "type": "dense_vector", "dims": 128 }, "pic_file": { "type": "text"}, "face_loc": { "type": "integer"} } } } es.indices.create(index="fr_idx", body=mapping) and then s_body = {"size": ELASTIC_MAX_RESULTS, "min_score": tolerance, "query": { "script_score": { "query": { "match_all": {} }, "script": { "source": "1 / (1 + l2norm(params.query_vector, 'face_vector'))", "params": {"query_vector": UNKNOWN_FACE_ENCOD} } } } } res = es.search(index="fr_idx", body=s_body) # standard index for hit in res["hits"]["hits"]: ... to search for similar vectors. (For your 512-dimensional ArcFace embeddings, set "dims": 512 in the mapping.)
https://stackoverflow.com/questions/69895999/
Linear decay as learning rate scheduler (pytorch)
I have read about LinearLR and ConstantLR in the PyTorch docs, but I can't figure out how to get a linear decay of my learning rate. Say I have epochs = 10 and lr = 0.1; then I want to linearly reduce my learning rate from 0.1 to 0 (or any other number) in 10 steps, i.e. by 0.01 in each step.
The two constraints you have are: lr(step=0)=0.1 and lr(step=10)=0. So naturally, lr(step) = -0.1*step/10 + 0.1 = 0.1*(1 - step/10). This is known as the polynomial learning rate scheduler. Its general form is: def polynomial(base_lr, iter, max_iter, power): return base_lr * ((1 - float(iter) / max_iter) ** power) Which in your case would be called with polynomial(base_lr=0.1, max_iter=10, power=1).
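One way to wire this into PyTorch's built-in schedulers is LambdaLR, which multiplies the base learning rate by a user-defined factor; a sketch for the 10-epoch example (the Linear model is a stand-in):

import torch

model = torch.nn.Linear(10, 2)                       # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

epochs = 10
# factor goes 1.0, 0.9, ..., 0.1, 0.0 -> lr goes 0.1, 0.09, ..., 0.0
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: 1 - epoch / epochs)

for epoch in range(epochs):
    # ... train for one epoch ...
    optimizer.step()
    scheduler.step()
    print(epoch, scheduler.get_last_lr())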
https://stackoverflow.com/questions/69899602/
Does model evaluation during training affect final accuracy in PyTorch?
During a simple training loop for PyTorch a strange effect was observed. Whether or not the evaluation function is called seems to have an effect on the final performance of the model. We train on CIFAR10 using a very simple MLP model and Adam with 10 training epochs. We try two main loops: 1) after the end of each training epoch we measure the accuracy on the validation set; 2) we calculate the validation accuracy only once, at the end of all training. We show the difference in code below; note that in Main Loop 2 the evaluation block sits outside the epoch loop: Main Loop 1: # Main Loop 1 num_epochs = 10 print(f"num_epochs: {num_epochs}") for epoch in range(num_epochs): # loop over the dataset multiple times print(f"\nStart Epoch {epoch}") model.train() train_loss, train_accuracy = training_epoch(trainloader,optimizer,model,criterion) print(f"Training Loss: {train_loss:.3f} - Training Accuracy: {train_accuracy:.3f}") model.eval() with torch.no_grad(): val_loss, val_accuracy = val_epoch(testloader, model, criterion) print(f"Val Loss: {val_loss:.3f} - Val Accuracy: {val_accuracy:.3f}") print('Finished Training') Main Loop 2: # Main Loop 2 num_epochs = 10 print(f"num_epochs: {num_epochs}") for epoch in range(num_epochs): # loop over the dataset multiple times print(f"\nStart Epoch {epoch}") model.train() train_loss, train_accuracy = training_epoch(trainloader,optimizer,model,criterion) print(f"Training Loss: {train_loss:.3f} - Training Accuracy: {train_accuracy:.3f}") print('Finished Training') # evaluation happens once, after the epoch loop model.eval() with torch.no_grad(): val_loss, val_accuracy = val_epoch(testloader, model, criterion) print(f"Val Loss: {val_loss:.3f} - Val Accuracy: {val_accuracy:.3f}") Though there shouldn't be any change, the final performance of the model changes: Val Loss: 1.526 - Val Accuracy: 0.523 Val Loss: 1.501 - Val Accuracy: 0.528 Of course, for reproducibility, we set all seeds. Moreover, this effect can already be observed at the beginning of the second training epoch. I share the entire code as a Colab notebook: https://colab.research.google.com/drive/1BODeKHZmcT8lH3r2bxYVHNR2KOpT9O9Y?usp=sharing
The observed difference is likely due to variance from the stochasticity in the optimization algorithm. The evaluation you perform has no effect on the model's weights. Also, in the link you provided, you are re-initializing a SimpleMLP in both experiments. Since the module's weights get instantiated randomly, the runs will naturally yield different results.
https://stackoverflow.com/questions/69899685/
How to use django rest api to do inference?
I was trying to build a website with the Django REST API as the backend. When given a string, it gives a score from 1 to 10 for negativity. The frontend part of the website was built using Next.js. Previously I made the same app without the Django REST API by doing all inference in the views.py file. Now that I am using the REST API, I am confused about where I need to include the machine learning inference code. I have seen tutorials on the internet showing the inference code attached in the models.py file. Previously, when I included the inference code in views.py, the page used to reload whenever I ran the inference. I want to avoid that. What is the best practice for including inference code while using the Django REST API?
I do it the following way: from rest_framework import status from rest_framework.decorators import api_view from rest_framework.response import Response @api_view(['GET', 'POST']) def snippet_list(request): """ List all code snippets, or create a new snippet. """ if request.method == 'GET': pass # if you have GET request data, do stuff here; else remove this branch elif request.method == 'POST': data = request.data # run your inference code here and get the predictions context = { 'score': score } return Response(context, status=status.HTTP_200_OK)
https://stackoverflow.com/questions/69907801/
What is the proper way to use pytorch and matplotlib with MKL on Windows?
Is there any clear instruction on how to actually make this work? The error message I get is: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized. It seems there are only workarounds, such as opting out of MKL altogether (not to mention that nomkl is not available on Windows), or import os os.environ['KMP_DUPLICATE_LIB_OK']='True' which is nasty.
If you have conda and pip installed on your machine. Please try installing by creating a new conda environment. You could try the below steps (which I have validated from my end) to install matplotlib and pytorch (with mkl). conda create -n myenv conda activate myenv conda install pytorch torchvision torchaudio cpuonly -c pytorch C:\Users\{USERNAME}\.conda\envs\myenv\python.exe -m pip install matplotlib==2.2.5 Once installed I verified pytorch uses mkl with the below command &gt;&gt;&gt; import torch &gt;&gt;&gt; torch.__config__.show() 'PyTorch built with:\n - C++ Version: 199711\n - MSVC 192829337\n - Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications\n - Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)\n - OpenMP 2019\n - LAPACK is enabled (usually provided by MKL)\n - CPU capability usage: AVX2\n - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=C:/cb/pytorch_1000000000000/work/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/cb/pytorch_1000000000000/work/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=0, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, \n'
https://stackoverflow.com/questions/69912244/
Pytorch Conv1D gives different size to ConvTranspose1d
I am trying to build a basic/shallow CNN auto-encoder for 1D time series data in pytorch/pytorch-lightning. Currently, my encoding block is: class encodingBlock(nn.Module): def __init__(self): super().__init__() self.conv1d_1 = nn.Conv1d(1, 64, kernel_size=32) self.relu = nn.ReLU() self.batchnorm = nn.BatchNorm1d(64) self.maxpool = nn.MaxPool1d(kernel_size=2, stride=2, return_indices=True) self.fc = nn.Linear(64, 4) def forward(self, x): cnn_out1 = self.conv1d_1(x) norm_out1 = self.batchnorm(cnn_out1) relu_out1 = self.relu(norm_out1) maxpool_out, indices = self.maxpool(relu_out1) gap_out = torch.mean(maxpool_out, dim = 2) fc_out = self.relu(self.fc(gap_out)) return fc_out, indices And my decoding block is: class decodingBlock(nn.Module): def __init__(self): super().__init__() self.Tconv1d_1 = nn.ConvTranspose1d(64, 1, kernel_size=32, output_padding=1) self.relu = nn.ReLU() self.batchnorm = nn.BatchNorm1d(1) self.maxunpool = nn.MaxUnpool1d(kernel_size=2, stride=2) self.upsamp = nn.Upsample(size=59, mode='nearest') self.fc = nn.Linear(4, 64) def forward(self, x, indices): fc_out = self.fc(x) relu_out = self.relu(fc_out) relu_out = relu_out.unsqueeze(dim = 2) upsamp_out = self.upsamp(relu_out) maxpool_out = self.maxunpool(upsamp_out, indices) cnnT_out = self.Tconv1d_1(maxpool_out) norm_out = self.batchnorm(cnnT_out) relu_out = self.relu(norm_out) return relu_out However, looking at the outputs: Input size: torch.Size([1, 1, 150]) Conv1D out size: torch.Size([1, 64, 119]) Maxpool out size: torch.Size([1, 64, 59]) Global average pooling out size: torch.Size([1, 64]) Encoder dense out size: torch.Size([1, 4]) ... Decoder input: torch.Size([1, 4]) Decoder dense out size: torch.Size([1, 64]) Unsqueeze out size: torch.Size([1, 64, 1]) Upsample out size: torch.Size([1, 64, 59]) Decoder maxunpool out size: torch.Size([1, 64, 118]) Transpose Conv out size: torch.Size([1, 1, 149]) The outputs from the MaxUnpool1d and ConvTranspose1d layers are not the expected dimension. I have two questions that I was hoping to get some help on: Why are the dimensions wrong? Is there a better way to &quot;reverse&quot; the global average pooling than the upsampling procedure I have used?
1. Regarding input and output shapes: pytorch's doc has the explicit formula relating input and output sizes. For convolution: L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1) Similarly for pooling: L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1) For transposed convolution: L_out = (L_in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1 And for unpooling: L_out = (L_in - 1)*stride - 2*padding + kernel_size Make sure your padding and output_padding values add up to the proper output shape. 2. Is there a better way? Transposed convolution has its faults, as you already noticed. It also tends to produce "checkerboard artifacts". One solution is to use pixelshuffle: that is, predict for each low-res point twice the number of channels, and then split them into two points with the desired number of features. Alternatively, you can interpolate using a fixed method from the low resolution to the higher one, and apply regular convolutions to the upsampled vectors. If you choose this path, you might consider using ResizeRight instead of pytorch's interpolate - it has better handling of edge cases.
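To make the arithmetic concrete, a small sketch (plain Python; the formulas are transcribed from the PyTorch docs) reproducing the sizes in the question and showing where the off-by-one comes from:

import math

def conv1d_out(l, k, s=1, p=0, d=1):          # also valid for MaxPool1d
    return math.floor((l + 2*p - d*(k - 1) - 1) / s + 1)

def maxunpool1d_out(l, k, s):
    return (l - 1) * s + k

print(conv1d_out(150, k=32))          # 119, the Conv1d output above
print(conv1d_out(119, k=2, s=2))      # 59, the MaxPool1d output
print(maxunpool1d_out(59, k=2, s=2))  # 118, not 119: pooling an odd length
# floors one position away, so unpooling cannot recover it, and every layer
# downstream stays one element short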
https://stackoverflow.com/questions/69915792/
PyTorch with CUDA and Nvidia card: RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable, but torch.cuda.is_available() is True
Problem: I occasionally get the following CUDA error when running PyTorch scripts with CUDA on an Nvidia GPU, running on CentOS 7. If I run: python3 -c 'import torch; print(torch.cuda.is_available()); torch.randn(1).to("cuda")' I get the following output: True Traceback (most recent call last): File "<string>", line 1, in <module> RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. PyTorch seems to think the GPU is available, but I can't put anything onto its memory. When I restart the computer, the error goes away. I can't seem to get the error to come back consistently.
When I'm outside of Python and run nvidia-smi, it shows a process running on the GPU, despite the fact that I cancelled execution of the PyTorch script: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.27.04 Driver Version: 460.27.04 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100-PCIE... On | 00000000:00:06.0 Off | 0 | | N/A 29C P0 33W / 250W | 1215MiB / 32510MiB | 0% E. Process | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 18805 C python3 1211MiB | +-----------------------------------------------------------------------------+ If I kill the process (with PID=18805), by running kill -9 18805, the process no longer appears in nvidia-smi, and the error does not recur. Any insights on a better solution, or how to avoid this problem in the first place, are very welcome.
https://stackoverflow.com/questions/69919854/
How to split dataset into two considering fixed seed to ensure reproducibility in PyTorch?
I am working on one of my university assignments, and there is one sub-task which says: Split the data in two (train and validation) while using a fixed seed to ensure reproducibility. I have written some code that works fine, but I want to know whether it is the correct way. torch.manual_seed(0) mnist_train, mnist_val = torch.utils.data.random_split(mnist_rest, [54000,6000]) I am working on the MNIST dataset.
According to PyTorch's docs: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. However, there are some steps you can take to limit the number of sources of nondeterministic behavior for a specific platform, device, and PyTorch release. First, you can control sources of randomness that can cause multiple executions of your application to behave differently. Second, you can configure PyTorch to avoid using nondeterministic algorithms for some operations, so that multiple calls to those operations, given the same inputs, will produce the same result. To control randomness, it's recommended you to use followings to reproduce the result: In PyTorch: You can use torch.manual_seed() to seed the RNG for all devices (both CPU and CUDA): import torch torch.manual_seed(0) In Python: For custom operators, you might need to set python seed as well: import random random.seed(0) If you or any of the libraries you are using rely on NumPy, you can seed the global NumPy RNG with: import numpy as np np.random.seed(0)
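For the split itself, recent PyTorch versions also accept a dedicated generator in random_split, which pins down the split without relying on the global RNG state at call time; a sketch with a stand-in dataset:

import torch
from torch.utils.data import TensorDataset, random_split

dataset = TensorDataset(torch.arange(60000))        # stand-in for mnist_rest
generator = torch.Generator().manual_seed(0)        # seeds only this split
train_set, val_set = random_split(dataset, [54000, 6000], generator=generator)
print(len(train_set), len(val_set))                 # 54000 6000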
https://stackoverflow.com/questions/69919918/
PyTorch: module 'torch' has no attribute 'gradient'
PyTorch seems to have a serious bug leading to the error message AttributeError: module 'torch' has no attribute [some torch function] In my case, I try to use torch.gradient link. I am using Python version 3.8.5 and tried the PyTorch Versions 1.6.0, 1.7.0, 1.7.1 , 1.8, 1.9.0 for CPU. (The newest version has another bug for gradient torch.gradient edge order). There are several answers suggesting that I should install torch via pip, I should install torchvision, nothing worked. There is also the suggestion, that the wrong torch folder is used as a namespace which does not seem to be the case, since there is an initfile when I run print(torch.__path__) So my question is: How to finally solve this problem? I tried to install it with the recommended pytorch.org version for conda, with anaconda.org/pytorch/pytorch and with pypi.org/project/torch – nothing worked. The conda list torch element gives pytorch 1.7.1 py3.8_cpu_0 [cpuonly] pytorch pytorch-mutex 1.0 cpu pytorch torchaudio 0.7.2 py38 pytorch torchvision 0.8.2 py38_cpu [cpuonly] pytorch
The same happened to me. What I did was create a new conda environment and reinstall PyTorch according to https://pytorch.org/. Note also that torch.gradient first appeared in PyTorch 1.9.0, so on the earlier versions listed (1.6-1.8) the AttributeError is expected behavior rather than a bug.
https://stackoverflow.com/questions/69919938/
Why VAE loss doesn’t converge to zero?
I'm using a Variational Autoencoder and this is my implementation for the loss function: class VariationalAutoencoder(nn.Module): # ...some functions... def gaussian_likelihood(self, x_hat, logscale, x): scale = torch.exp(logscale) mean = x_hat dist = torch.distributions.Normal(mean, scale) # measure prob of seeing image under p(x|z) log_pxz = dist.log_prob(x) return log_pxz.sum(dim=(1, 2, 3)) def forward(self, input): mu, logvar = self.encode(input) z = self.reparameterise(mu, logvar) return self.decoder(z), mu, logvar, z def loss_function(self, x_hat, x, mu, logvar, β=1): std = torch.exp(logvar / 2) q = torch.distributions.Normal(mu, std) z = q.rsample() # reconstruction loss recon_loss = self.gaussian_likelihood(x_hat, self.log_scale, x) # kl kl = self.kl_divergence(z, mu, std) # elbo elbo = (kl - recon_loss) elbo = elbo.mean() return elbo def kl_divergence(self, z, mu, std): # -------------------------- # Monte carlo KL divergence # -------------------------- # 1. define the first two probabilities (in this case Normal for both) p = torch.distributions.Normal(torch.zeros_like(mu), torch.ones_like(std)) q = torch.distributions.Normal(mu, std) # 2. get the probabilities from the equation log_qzx = q.log_prob(z) log_pz = p.log_prob(z) # kl kl = (log_qzx - log_pz) kl = kl.sum(-1) return kl I use the Sigmoid() function at the output of the decoder. I train the model in this way: for epoch in range(0, epochs + 1): if epoch > 0: # test untrained net first model.train() train_loss = 0 loop = tqdm(train_loader) optimizer = model.setOptimizer(model) for x in loop: x = x.to(device) x_hat, mu, logvar, features = model(x) loss = model.loss_function(x_hat, x, mu, logvar) train_loss += loss.item() optimizer.zero_grad() loss.backward() optimizer.step() loop.set_postfix(loss=loss) train_loss /= len(train_loader.dataset) print(f'====> Epoch: {epoch} Average loss: {train_loss:.4f}') The loss doesn't settle at zero but becomes negative (around -2). If I remove train_loss /= len(train_loader.dataset), it diverges completely. How can I make the loss converge to zero?
The loss of a VAE can be negative: it contains a log-likelihood term, and a log-likelihood over a continuous density can be positive (densities can exceed 1), so the negative ELBO can drop below zero. There is nothing wrong with that.
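To see concretely why the log-likelihood term can push the loss below zero, a quick check:

import torch

d = torch.distributions.Normal(0.0, 0.1)    # small scale -> peaked density
print(d.log_prob(torch.tensor(0.0)))        # ~1.38 > 0, so -log_prob < 0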
https://stackoverflow.com/questions/69926777/
How to randomly set a variable number of elements in each row of a tensor in PyTorch
I want to create a zero-one matrix of dimension (n, n). The ones should be placed randomly, with a cap on the number of ones in each row. Let us say I have a list of length n that has the value of cap for each of the n rows. How can I do this in PyTorch? My question is similar to this previous question. The only change I am looking for is, there should be n values of k, corresponding to n rows.
As explained by @Marcel in the comments above, you can first set the first m values to value k then index by permuted indices in order to get a shuffle tensor: &gt;&gt;&gt; n = 10; m = 3; k = 1 &gt;&gt;&gt; x = torch.zeros(n, n) &gt;&gt;&gt; x[:, :m] = k tensor([[1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0.]]) Use torch.randperm to get row-wise column permutations: &gt;&gt;&gt; perm = torch.stack([torch.randperm(10) for _ in range(len(x))]) tensor([[8, 0, 3, 2, 1, 6, 9, 4, 5, 7], [5, 7, 1, 4, 8, 0, 6, 9, 2, 3], [2, 1, 9, 7, 0, 8, 6, 3, 5, 4], [1, 3, 5, 8, 7, 6, 9, 4, 2, 0], [7, 6, 0, 5, 2, 9, 1, 8, 4, 3], [5, 0, 6, 8, 1, 9, 2, 4, 3, 7], [4, 0, 6, 5, 8, 1, 3, 7, 2, 9], [5, 3, 4, 9, 0, 1, 7, 6, 8, 2], [5, 7, 9, 3, 2, 6, 8, 0, 4, 1], [2, 7, 4, 6, 3, 0, 9, 8, 5, 1]]) Then use torch.gather to index the tensor x with perm: &gt;&gt;&gt; x.gather(dim=0, index=perm) tensor([[0., 1., 0., 1., 1., 0., 0., 0., 0., 0.], [0., 0., 1., 0., 0., 1., 0., 0., 1., 0.], [1., 1., 0., 0., 1., 0., 0., 0., 0., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 1., 1.], [0., 0., 1., 0., 1., 0., 1., 0., 0., 0.], [0., 1., 0., 0., 1., 0., 1., 0., 0., 0.], [0., 1., 0., 0., 0., 1., 0., 0., 1., 0.], [0., 0., 0., 0., 1., 1., 0., 0., 0., 1.], [0., 0., 0., 0., 1., 0., 0., 1., 0., 1.], [1., 0., 0., 0., 0., 1., 0., 0., 0., 1.]]) Alternatively you can use torch.scatter straight way with the value keyword argument: &gt;&gt;&gt; torch.zeros(n, n).scatter(dim=0, index=perm, value=1) tensor([[0., 1., 0., 1., 1., 0., 0., 0., 0., 0.], [0., 0., 1., 0., 0., 1., 0., 0., 1., 0.], [1., 1., 0., 0., 1., 0., 0., 0., 0., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 1., 1.], [0., 0., 1., 0., 1., 0., 1., 0., 0., 0.], [0., 1., 0., 0., 1., 0., 1., 0., 0., 0.], [0., 1., 0., 0., 0., 1., 0., 0., 1., 0.], [0., 0., 0., 0., 1., 1., 0., 0., 0., 1.], [0., 0., 0., 0., 1., 0., 0., 1., 0., 1.], [1., 0., 0., 0., 0., 1., 0., 0., 0., 1.]]) If m is a tensor itself, you can find a workaround using a combination of torch.arange and torch.where: First encode the positions: &gt;&gt;&gt; d = torch.arange(n)[None].repeat(n,1) &gt;&gt;&gt; x = torch.where(d+m&gt;n, 0, 1) tensor([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]) Construct the permutation as before: &gt;&gt;&gt; perm = torch.stack([torch.randperm(10) for _ in range(n)]) tensor([[2, 5, 7, 0, 4, 1, 3, 6, 8, 9], [7, 4, 9, 5, 6, 0, 3, 1, 2, 8], [5, 1, 4, 9, 0, 3, 2, 6, 7, 8], [9, 6, 0, 2, 3, 1, 7, 5, 4, 8], [3, 5, 4, 6, 0, 7, 9, 8, 2, 1], [5, 7, 8, 6, 9, 2, 0, 4, 3, 1], [8, 3, 9, 0, 6, 2, 5, 7, 4, 1], [2, 9, 4, 3, 7, 8, 1, 0, 6, 5], [5, 4, 8, 3, 2, 9, 7, 1, 6, 0], [8, 7, 3, 6, 5, 4, 2, 0, 9, 1]]) Then scatter on x: &gt;&gt;&gt; x.scatter(dim=0, index=perm, value=1) tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 1], [1, 1, 1, 0, 0, 1, 1, 1, 0, 1], [1, 1, 1, 1, 1, 1, 1, 0, 1, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 0, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 0], [1, 
1, 1, 0, 1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])
https://stackoverflow.com/questions/69931610/
Implementation of multitask "nested" neural network
I am trying to implement a multitask neural network used by a paper but am quite unsure how I should code the multitask network because the authors did not provide code for that part. The network architecture looks like (paper): To make it simpler, the network architecture could be generalized as (For demo I changed their more complicated operation for the pair of individual embeddings to concatenation): The authors are summing the loss from the individual tasks and the pairwise tasks, and using the total loss to optimize the parameters for the three networks (encoder, MLP-1, MLP-2) in each batch, but I am kind of at sea as to how different types of data are combined in a single batch to feed into two different networks that share an initial encoder. I tried to search for other networks with similar structure but did not find any sources. Would appreciate any thoughts!
This is actually a common pattern. It would be solved by code like the following. class Network(nn.Module): def __init__(self, ...): self.encoder = DrugTargetInteractiongNetwork() self.mlp1 = ClassificationMLP() self.mlp2 = PairwiseMLP() def forward(self, data_a, data_b): a_encoded = self.encoder(data_a) b_encoded = self.encoder(data_b) a_classified = self.mlp1(a_encoded) b_classified = self.mlp1(b_encoded) # let me assume data_a and data_b are of shape # [batch_size, n_molecules, n_features]. # and that those n_molecules are not necessarily # equal. # This can be generalized to more dimensions. a_broadcast, b_broadcast = torch.broadcast_tensors( a_encoded[:, None, :, :], b_encoded[:, :, None, :], ) # this will work if your mlp2 accepts an arbitrary number of # learding dimensions and just broadcasts over them. That's true # for example if it uses just Linear and pointwise # operations, but may fail if it makes some specific assumptions # about the number of dimensions of the inputs pairwise_classified = self.mlp2(a_broadcast, b_broadcast) # if that is a problem, you have to reshape it such that it # works. Most torch models accept at least a leading batch dimension # for vectorization, so we can &quot;fold&quot; the pairwise dimension # into the batch dimension, presenting it as # [batch*n_mol_1*n_mol_2, n_features] # to mlp2 and then recover it back B, N1, N_feat = a_broadcast.shape _B, N2, _N_feat = b_broadcast.shape a_batched = a_broadcast.reshape(B*N1*N2, N_feat) b_batched = b_broadcast.reshape(B*N1*N2, N_feat) # above, -1 would suffice instead of B*N1*N2, just being explicit batch_output = self.mlp2(a_batched, b_batched) # this should be exactly the same as `pairwise_classified` alternative_classified = batch_output.reshape(B, N1, N2, -1) return a_classified, b_classified, pairwise_classified
https://stackoverflow.com/questions/69935341/
RuntimeError: CUDA error: initialization error when calling torch.distributed.init_process_group using torch multiprocessing
I created a pytest fixture using decorator to create multiple processes (using torch multiprocessing) for running model parallel distributed unit tests using pytorch distributed. I randomly encountered the below CUDA initialization error all of a sudden (when I was trying to fix some unit tests logic). Since then, all my unit tests have been failing and I traced the failure back to my pytest fixture which calls torch.distributed.init_process_group(…). Error traceback: Process Process-1: Traceback (most recent call last): File &quot;/usr/lib64/python3.7/multiprocessing/process.py&quot;, line 297, in _bootstrap self.run() File &quot;/usr/lib64/python3.7/multiprocessing/process.py&quot;, line 99, in run self._target(*self._args, **self._kwargs) File &quot;/fsx-dev/FSxLustre20201016T182138Z/prraman/home/workspace/ws_M5_meg/src/M5ModelParallelism/test_script/commons_debug.py&quot;, line 34, in dist_init torch.distributed.init_process_group(backend, rank=rank, world_size=world_size, init_method=init_method) File &quot;/usr/local/lib64/python3.7/site-packages/torch/distributed/distributed_c10d.py&quot;, line 480, in init_process_group barrier() File &quot;/usr/local/lib64/python3.7/site-packages/torch/distributed/distributed_c10d.py&quot;, line 2186, in barrier work = _default_pg.barrier() RuntimeError: CUDA error: initialization error Below is the pytest fixture I created: import os import time import torch import torch.distributed as dist from torch.multiprocessing import Process, set_start_method import pytest # Worker timeout *after* the first worker has completed. WORKER_TIMEOUT = 120 def distributed_test_debug(world_size=2, backend='nccl'): &quot;&quot;&quot;A decorator for executing a function (e.g., a unit test) in a distributed manner. This decorator manages the spawning and joining of processes, initialization of torch.distributed, and catching of errors. Usage example: @distributed_test_debug(worker_size=[2,3]) def my_test(): rank = dist.get_rank() world_size = dist.get_world_size() assert(rank &lt; world_size) Arguments: world_size (int or list): number of ranks to spawn. Can be a list to spawn multiple tests. &quot;&quot;&quot; def dist_wrap(run_func): &quot;&quot;&quot;Second-level decorator for dist_test. This actually wraps the function. &quot;&quot;&quot; def dist_init(local_rank, num_procs, *func_args, **func_kwargs): &quot;&quot;&quot;Initialize torch.distributed and execute the user function. &quot;&quot;&quot; os.environ['MASTER_ADDR'] = '127.0.0.1' os.environ['MASTER_PORT'] = '29503' os.environ['LOCAL_RANK'] = str(local_rank) # NOTE: unit tests don't support multi-node so local_rank == global rank os.environ['RANK'] = str(local_rank) os.environ['WORLD_SIZE'] = str(num_procs) master_addr = os.environ['MASTER_ADDR'] master_port = os.environ['MASTER_PORT'] rank = local_rank # Initializes the default distributed process group, and this will also initialize the distributed package. 
init_method = &quot;tcp://&quot; init_method += master_addr + &quot;:&quot; + master_port print('inside dist_init, world_size: ', world_size) torch.distributed.init_process_group(backend, rank=rank, world_size=world_size, init_method=init_method) print(&quot;rank={} init complete&quot;.format(rank)) #torch.distributed.destroy_process_group() # print(&quot;rank={} destroy complete&quot;.format(rank)) if torch.distributed.get_rank() == 0: print('&gt; testing initialize_model_parallel with size {} ...'.format( 2)) if torch.cuda.is_available(): torch.cuda.set_device(local_rank) run_func(*func_args, **func_kwargs) def dist_launcher(num_procs, *func_args, **func_kwargs): &quot;&quot;&quot;Launch processes and gracefully handle failures. &quot;&quot;&quot; # Spawn all workers on subprocesses. #set_start_method('spawn') processes = [] for local_rank in range(num_procs): p = Process(target=dist_init, args=(local_rank, num_procs, *func_args), kwargs=func_kwargs) p.start() processes.append(p) # Now loop and wait for a test to complete. The spin-wait here isn't a big # deal because the number of processes will be O(#GPUs) &lt;&lt; O(#CPUs). any_done = False while not any_done: for p in processes: if not p.is_alive(): any_done = True break # Wait for all other processes to complete for p in processes: p.join(WORKER_TIMEOUT) failed = [(rank, p) for rank, p in enumerate(processes) if p.exitcode != 0] for rank, p in failed: # If it still hasn't terminated, kill it because it hung. if p.exitcode is None: p.terminate() pytest.fail(f'Worker {rank} hung.', pytrace=False) if p.exitcode &lt; 0: pytest.fail(f'Worker {rank} killed by signal {-p.exitcode}', pytrace=False) if p.exitcode &gt; 0: pytest.fail(f'Worker {rank} exited with code {p.exitcode}', pytrace=False) def run_func_decorator(*func_args, **func_kwargs): &quot;&quot;&quot;Entry point for @distributed_test(). &quot;&quot;&quot; if isinstance(world_size, int): dist_launcher(world_size, *func_args, **func_kwargs) elif isinstance(world_size, list): for procs in world_size: dist_launcher(procs, *func_args, **func_kwargs) time.sleep(0.5) else: raise TypeError(f'world_size must be an integer or a list of integers.') return run_func_decorator return dist_wrap Below is how I run it: @distributed_test_debug(world_size=2) def test_dummy(): assert 1 == 1 I have seen some issues raised in the past when torch multiprocessing and CUDA not working well together, not sure if this is related to that. Perhaps a different way I should be creating my multiple processes to avoid this problem? Any help is appreciated. I am using pytorch version: 1.8.0a0+ae5c2fe
Not sure how useful it is now, but we were getting this error when running the YOLOX training command (https://github.com/Megvii-BaseDetection/YOLOX): python -m yolox.tools.train -n yolox-s -d 8 -b 64 --fp16 -o [--cache] However, when we removed the --cache parameter, the error was resolved. Another issue we encountered was the script freezing, which was resolved with help from this post: https://github.com/NVIDIA/nccl/issues/631 Note this was performed with a 4 GPU setup on a single machine.
https://stackoverflow.com/questions/69935635/
How to fix RuntimeError: Bool type is not supported by dlpack
I have been using the following code to get the default Cora dataset provided by DGL, but the following error suddenly occurred today. The code was run in Colab (Python 3.7 with the PyTorch backend). I believe this is an error from a DGL update (since it had worked all the time before). However, I wonder if there is anything we can do on our end to fix this? Thanks.
It seems that the error comes from the torch update to 1.10.0. Reinstalling torch 1.9.1 works for me. You can reinstall torch in Colab as follows: !pip install dgl==0.6.1 !pip install torch==1.9.1 import dgl cora = dgl.data.CoraGraphDataset()
https://stackoverflow.com/questions/69937348/
What are the in_features and out_features supposed to be?
torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None) I have a dataset of shape [914, 19]. Should my in_features be 914? And I want to predict 5 different values, so should my out_features be 5? class NeuralNetwork(nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.linear1 = nn.Linear(914,512) self.linear2 = nn.Linear(512,512) self.linear3 = nn.Linear(512,512) self.linear4 = nn.Linear(512,5) def forward(self, x): x = F.relu(self.linear1(x)) x = F.relu(self.linear2(x)) x = F.relu(self.linear3(x)) x = self.linear4(x) return x NeuralNet = NeuralNetwork() print(NeuralNet)
Your input data is shaped (914, 19); assuming 914 refers to your batch size here, the in_features corresponds to 19. This can be read as a tensor containing 914 input vectors of 19 features each. In this case, the in_features of linear1 would be set to 19.
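A quick shape check of that configuration:

import torch
import torch.nn as nn

x = torch.randn(914, 19)      # 914 samples, 19 features each
layer = nn.Linear(in_features=19, out_features=512)
print(layer(x).shape)         # torch.Size([914, 512])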
https://stackoverflow.com/questions/69939180/
what does masked_fill in pytorch do here
I have a tensor named "k1" with shape (3, 1, 1, 9) and another tensor p1 with shape (3, 7, 9, 9), and I want to know what the line below does: p1 = p1.masked_fill(k1 == 0, float("-1e30"))
As the documentation page describes it: Tensor.masked_fill(mask, value) Fills elements of self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor. In your case it will place in p1 the value of float(&quot;-1e30&quot;) at the positions where k1 is equal to zero. Since k1 has singleton dimensions its shape will be broadcasted to the shape of p1.
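A quick sketch with the shapes from the question; the singleton dimensions of k1 are broadcast over dimensions 1 and 2, and the large negative fill value is the usual trick for making a later softmax assign near-zero probability to the masked positions:

import torch

p1 = torch.randn(3, 7, 9, 9)
k1 = torch.randint(0, 2, (3, 1, 1, 9))

out = p1.masked_fill(k1 == 0, float("-1e30"))
print(out.shape)              # torch.Size([3, 7, 9, 9])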
https://stackoverflow.com/questions/69956001/
The same output value whatever is the input value for a Pytorch LSTM regression model
My dataset looks like the following: on the left, my inputs, and on the right, the outputs. The inputs are tokenized and converted to a list of indices; for instance, the molecule input 'CC1(C)Oc2ccc(cc2C@HN3CCCC3=O)C#N' is converted to: [28, 28, 53, 69, 28, 70, 40, 2, 54, 2, 2, 2, 69, 2, 2, 54, 67, 28, 73, 33, 68, 69, 67, 28, 73, 73, 33, 68, 53, 40, 70, 39, 55, 28, 28, 28, 28, 55, 62, 40, 70, 28, 63, 39, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] I use the following list of chars as my map from strings to indices: cs = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z', 'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z', '0','1','2','3','4','5','6','7','8','9', '=','#',':','+','-','[',']','(',')','/','\\', '@','.','%'] Thus, for every char in the input string there is an index, and if the length of the input string is less than the max length of all inputs, which is 100, I pad with zeros (like in the example shown above). My model looks like this: class LSTM_regr(torch.nn.Module) : def __init__(self, vocab_size, embedding_dim, hidden_dim) : super().__init__() self.embeddings = nn.Embedding(vocab_size, embedding_dim, padding_idx=0) self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True) self.linear = nn.Linear(hidden_dim, 1) self.dropout = nn.Dropout(0.2) def forward(self, x, l): x = self.embeddings(x) x = self.dropout(x) lstm_out, (ht, ct) = self.lstm(x) return self.linear(ht[-1]) vocab_size = 76 model = LSTM_regr(vocab_size, 20, 256) My problem is that, after training, every input I give to the model to test it gives me the same output (i.e., 3.3318). Why is that? My training loop: def train_model_regr(model, epochs=10, lr=0.001): parameters = filter(lambda p: p.requires_grad, model.parameters()) optimizer = torch.optim.Adam(parameters, lr=lr) for i in range(epochs): model.train() sum_loss = 0.0 total = 0 for x, y, l in train_dl: x = x.long() y = y.float() y_pred = model(x, l) optimizer.zero_grad() loss = F.mse_loss(y_pred, y.unsqueeze(-1)) loss.backward() optimizer.step() sum_loss += loss.item()*y.shape[0] total += y.shape[0] EDIT: I figured it out; I reduced the learning rate from 0.01 to 0.0005 and reduced the batch size from 100 to 10, and it worked fine. I think this makes sense: the model was training on a large batch size, so it was learning to always output the mean, since that is what minimizes the loss function.
Your LSTM_regr returns the last hidden state regardless of the true sequence length. That is, if your true sequence is of length 3 and x is of length 100, the output is the last hidden state after processing 97 padding elements. You should compute the loss on the prediction that corresponds to the true length of each sequence.
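One common fix is to pack the padded batch so the LSTM stops at each sequence's true end, making ht[-1] the hidden state at the last real token. A sketch of a modified forward for the LSTM_regr above, assuming l holds the true lengths already passed in:

from torch.nn.utils.rnn import pack_padded_sequence

def forward(self, x, l):
    x = self.embeddings(x)
    x = self.dropout(x)
    # lengths must live on the CPU; enforce_sorted=False allows unsorted batches
    packed = pack_padded_sequence(x, l.cpu(), batch_first=True,
                                  enforce_sorted=False)
    _, (ht, ct) = self.lstm(packed)
    return self.linear(ht[-1])    # hidden state at each sequence's true end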
https://stackoverflow.com/questions/69964929/
RuntimeError: CUDA error: no kernel image is available for execution on the device after model.cuda()
I am working on this model: class Model(torch.nn.Module): def __init__(self, sizes, config): super(Model, self).__init__() self.lstm = [] for i in range(len(sizes) - 2): self.lstm.append(LSTM(sizes[i], sizes[i+1], num_layers=8)) self.lstm.append(torch.nn.Linear(sizes[-2], sizes[-1]).cuda()) self.lstm = torch.nn.ModuleList(self.lstm) self.config_mel = config.mel_features def forward(self, x): # convert to log-domain x = x.clip(min=1e-6).log10() for layer in self.lstm[:-1]: x, _ = layer(x) x = torch.relu(x) #x = torch_unpack_seq(x)[0] x = self.lstm[-1](x) mask = torch.sigmoid(x) return mask and then: model = Model(model_width, config) model.cuda() But I am getting this error: File &quot;main.py&quot;, line 29, in &lt;module&gt; Model.train(args) File &quot;.../src/model.py&quot;, line 57, in train model.cuda() File &quot;.../.local/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 637, in cuda return self._apply(lambda t: t.cuda(device)) File &quot;.../.local/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 530, in _apply module._apply(fn) File &quot;/.../.local/lib/python3.8/site-packages/torch/nn/modules/module.py&quot;, line 530, in _apply module._apply(fn) File &quot;.../.local/lib/python3.8/site-packages/torch/nn/modules/rnn.py&quot;, line 189, in _apply self.flatten_parameters() File &quot;.../.local/lib/python3.8/site-packages/torch/nn/modules/rnn.py&quot;, line 175, in flatten_parameters torch._cudnn_rnn_flatten_weight( RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. I have no idea why it is happening. I am trying to push model and the inputs in cuda, and I understand if the error was due to some models in CPU and some in GPU. But that is not the case here. I found some pip install solution here: Pytorch CUDA error: no kernel image is available for execution on the device on RTX 3090 with cuda 11.1 but I cannot use it as I am trying to do the work in a remote repo where I don't have access to pip install. Is there a way I can solve this?
I checked the latest torch and torchvision version with cuda from the given link. Stable versions list: https://download.pytorch.org/whl/cu113/torch_stable.html Below versions solved the error, pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html Reference: #49161
https://stackoverflow.com/questions/69968477/
About np.einsum
I don't understand how the following code realizes this transformation of dimensions. The shape of c is (2, 3, 3, 4). How can the following operation be implemented without the einsum function? import numpy as np a = np.random.randint(0, 10, (2,3,4)) b = np.random.randint(0, 10, (3, 6, 4)) c = np.einsum('bld,hid->blhd', a, b)
You can find more details about Einstein notation on Wikipedia. The notation means that you have indices b, l, h, i, d; these indices are iterated to cover all the inputs and build the output. I will use capital letters for the arrays here to distinguish them from the indices. C[b,l,h,d] += A[b,l,d] * B[h,i,d] The shape of the output can be determined as follows. You take the index of each output axis and look for the same index in the inputs. For instance, the first axis of C is indexed with b, which is also used to index the first axis of A, thus assert C.shape[0] == A.shape[0]. Repeating for the other axes we have assert C.shape[1] == A.shape[1], assert C.shape[2] == B.shape[0], and assert C.shape[3] == A.shape[2], also assert C.shape[3] == B.shape[2]. Notice that the index i does not affect where the term will be added; each element of the output can be written as C[b,l,h,d] = sum(A[b,l,d] * B[h,i,d] for i in range(B.shape[1])) Notice also that i is not used to index A, so this could also be written as C[b,l,h,d] = A[b,l,d] * B[h,:,d].sum() Or, as a vectorized operation, first expanding then reducing: C = (A[:,:,None,None,:] * B[None,None,:,:,:]).sum(-2) or reducing then expanding, possible because A does not use i: C = A[:,:,None,:] * B.sum(-2)[None,None,:,:]
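A quick numerical check that the einsum and both broadcasting variants agree:

import numpy as np

a = np.random.randint(0, 10, (2, 3, 4))
b = np.random.randint(0, 10, (3, 6, 4))

c1 = np.einsum('bld,hid->blhd', a, b)
c2 = a[:, :, None, :] * b.sum(-2)[None, None, :, :]              # reduce first
c3 = (a[:, :, None, None, :] * b[None, None, :, :, :]).sum(-2)   # expand first

assert np.array_equal(c1, c2) and np.array_equal(c1, c3)
print(c1.shape)   # (2, 3, 3, 4)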
https://stackoverflow.com/questions/69972030/
Why does torch.utils.save_image overwrite saved images in my folder?
I am trying an adversarial attack on 10 images and I need to save all the perturbed images in a folder. So, I used torch.utils.save_image in pytorch which works pretty fine. I expect all the images to be saved in the folder but instead, they are being overwritten and the last image seen is the only image saved. I have the following attack() function that takes a single image to perturb def attack(img, label, net, target=None, pixels=1, maxiter=75, popsize=400, verbose=False): # img: 1*3*W*H tensor # label: a number targeted_attack = target is not None target_calss = target if targeted_attack else label bounds = [(0,32), (0,32), (0,255), (0,255), (0,255)] * pixels popmul = max(1, popsize//len(bounds)) predict_fn = lambda xs: predict_classes( xs, img, target_calss, net, target is None) callback_fn = lambda x, convergence: attack_success( x, img, target_calss, net, targeted_attack, verbose) inits = np.zeros([popmul*len(bounds), len(bounds)]) count = 1 for init in inits: for i in range(pixels): init[i*5+0] = np.random.random()*32 init[i*5+1] = np.random.random()*32 init[i*5+2] = np.random.normal(128,127) init[i*5+3] = np.random.normal(128,127) init[i*5+4] = np.random.normal(128,127) attack_result = differential_evolution(predict_fn, bounds, maxiter=maxiter, popsize=popmul, recombination=1, atol=-1, callback=callback_fn, polish=False, init=inits) attack_image = perturb_image(attack_result.x, img) # attack_var = Variable(attack_image, volatile=True).cuda() with torch.no_grad(): attack_var = attack_image.to(device) predicted_probs = F.softmax(net(attack_var), dim=1).data.cpu().numpy()[0] predicted_class = np.argmax(predicted_probs) vutils.save_image(vutils.make_grid(attack_image, normalize=True, scale_each=True), 'result_img/adversarial' + str(count) + '.png') vutils.save_image(vutils.make_grid(img, normalize=True, scale_each=True), 'result_img/original' + str(count) + '.png') count = count + 1 if (not targeted_attack and predicted_class != label) or (targeted_attack and predicted_class == target_calss): return 1, attack_result.x.astype(int) return 0, [None] Below is the attack_all() function that perturbs batches of images (entire test set) which is 10 images in my case. def attack_all(net, loader, pixels=1, targeted=False, maxiter=75, popsize=400, verbose=False): correct = 0 success = 0 for batch_idx, (input, target) in enumerate(loader): # img_var = Variable(input, volatile=True).cuda() with torch.no_grad(): img_var = input.to(device) target = target prior_probs = F.softmax(net(img_var), dim=1) _, indices = torch.max(prior_probs, 1) if target[0] != indices.data.cpu()[0]: continue correct += 1 target = target.numpy() targets = [None] if not targeted else range(10) for target_calss in targets: if (targeted): if (target_calss == target[0]): continue flag, x = attack(input, target[0], net, target_calss, pixels=pixels, maxiter=maxiter, popsize=popsize, verbose=verbose) success += flag if (targeted): success_rate = float(success)/(9*correct) else: success_rate = float(success)/correct if flag == 1: print(&quot;success rate: %.4f (%d/%d) [(x,y) = (%d,%d) and (R,G,B)=(%d,%d,%d)]&quot;%( success_rate, success, correct, x[0],x[1],x[2],x[3],x[4])) if correct == args.samples: break return success_rate Below is the main() class where I am attacking the 10 images with attack_all(). I expect all the 10 images (both original and perturbed) to be saved but only the last seen image is saved. 
def main(): print (&quot;==&gt; Loading data and model...&quot;) transform_test = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ]) # test_set = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=tranfrom_test) test_set = Cifar10Dataset(csv_file='mydata/cifar10.csv', root_dir = 'mydata/cifar_selected_10', transform = transform_test) testloader = torch.utils.data.DataLoader(test_set, batch_size=1, shuffle=True, num_workers=2) class_names = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] assert os.path.isdir('checkpoint'), 'Error: no checkpoint directory found!' checkpoint = torch.load('./checkpoint/%s.t7'%args.model) net = checkpoint['net'] net.cuda() cudnn.benchmark = True print (&quot;==&gt; Starting attack...&quot;) results = attack_all(net, testloader, pixels=args.pixels, targeted=args.targeted, maxiter=args.maxiter, popsize=args.popsize, verbose=args.verbose) print (&quot;Final success rate: %.4f&quot;%results)
So I figured out how to solve it myself. I noticed that the variable count in attack() never increased, because it was re-initialized to 1 on every call. Instead, I set count = 1 outside attack() and added global count inside attack(). This way the value of count persists across calls, and it no longer stays the same every time attack_all() calls attack().
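For reference, the Python scoping rule involved, in a minimal runnable form (save_next is a hypothetical stand-in for attack()):

count = 1                       # module level, survives across calls

def save_next(stub):
    global count                # without this, count += 1 raises UnboundLocalError
    print(f'{stub}{count}.png')
    count += 1

save_next('adversarial')        # adversarial1.png
save_next('adversarial')        # adversarial2.png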
https://stackoverflow.com/questions/69973075/
How do I load a yolov5l model with custom weights into torch in python?
I have trained a yolov5l model for object detection and classification. I want to use the exported weights to identify images in a program I am creating. I am having trouble finding much of anything on how to use .pt weights in a python program. I believe I use the "torch.load" method from the pytorch library, but when I try: torch.load(path_to_weights) I get a ModuleNotFoundError for not having a module named 'models'. Any help is much appreciated. Thank you very much.
You should use the model's load_state_dict() method to load your trained parameters into the model, in addition to torch.load(). Also note an issue with your torch.load() call: the path parameter must be either a string or an os.PathLike object (this is stated in the docs). A simple code block to show the way: # Initializing the model model = Model() # Assuming your model's class is named Model model.load_state_dict(torch.load(path_to_weights))
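If load_state_dict() still trips over the missing 'models' module — which usually means the checkpoint pickled the whole model class from the YOLOv5 repo — the repo's own hub entry point is often the easier route. A minimal sketch, assuming you trained with ultralytics/yolov5 (the 'custom' entry point and path argument come from its hub docs, not from the question; file paths are placeholders):

import torch

# 'custom' tells the hub to load user-trained weights; the hub call also pulls in
# the repo code that defines the pickled model classes, avoiding the import error
model = torch.hub.load('ultralytics/yolov5', 'custom', path='path/to/your_weights.pt')
results = model('path/to/image.jpg')  # run inference on a single image
results.print()                       # print a summary of the detections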
https://stackoverflow.com/questions/69977082/
RuntimeError: PytorchStreamReader failed locating file data.pkl: file not found
I have been trying to train some data using a model that utilizes src+img. When running the training script, I'm running into an error, namely: RuntimeError: PytorchStreamReader failed locating file data.pkl: file not found The .pkl file here should have been the pickled .pt file. First I figured that maybe I did not specify the preprocessed training.pt file correctly, but it is actually correctly specified. data.pkl hasn't been dumped anywhere (or I failed to find it). I am guessing it has something to do with pickle, according to the docs: "This save/load process uses the most intuitive syntax and involves the least amount of code. Saving a model in this way will save the entire module using Python's pickle module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved. The reason for this is because pickle does not save the model class itself. Rather, it saves a path to the file containing the class, which is used during load time. Because of this, your code can break in various ways when used in other projects or after refactors." I've tried a multitude of things, like changing data.pkl in the script in order to see if there was a generated corrupted file, but this seems to not be the case. I would appreciate it if anyone is willing to help: full error: File "train_mm.py", line 448, in <module> main() File "train_mm.py", line 423, in main first_dataset = next(lazily_load_dataset("train")) File "train_mm.py", line 314, in lazily_load_dataset yield lazy_dataset_loader(pt, corpus_type) File "train_mm.py", line 305, in lazy_dataset_loader dataset = torch.load(pt_file) File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 607, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 878, in _load data_file = io.BytesIO(zip_file.get_record(pickle_file)) RuntimeError: PytorchStreamReader failed locating file data.pkl: file not found solved: I re-ran the preprocessing script and it generated new .pt files and the error was resolved
This issue has been resolved. The .pt file was heavily corrupted. After deleting the corrupt .pt file and re-running the preprocess script and consequently the training script, I did not get the error anymore.
https://stackoverflow.com/questions/69979034/
What is the difference between cuda.amp and model.half()?
According to https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/ We can use: with torch.cuda.amp.autocast(): loss = model(data) in order to cast operations to mixed precision. Another thing is that we can use model.half() to convert all the model weights to half precision. What is the difference between these 2 commands? If I want to take advantage of FP16 (in order to create larger models and shorten training time), what do I need? Do I need to use model.half(), or torch.cuda.amp (according to the link above)?
If you convert the entire model to fp16, there is a chance that some of the activation functions and batchnorm layers will cause the fp16 weights to underflow, i.e., become zero. So it is always recommended to use autocast, which internally keeps fp32 in the problematic layers. model.half() in the end will store the weights in fp16, whereas with autocast the weights stay in fp32. Training fully in fp16 will be faster than autocast, but with a higher chance of instability if you are not careful. While using autocast you also need to scale the loss during backpropagation so the fp16 gradients don't underflow. If the fp16 requirement is on the inference side, I recommend training with autocast and then converting to fp16 using ONNX and TensorRT.
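For reference, a minimal mixed-precision training loop sketch following the pattern from the PyTorch amp docs (model, optimizer, criterion and loader are placeholders for your own objects):

import torch

scaler = torch.cuda.amp.GradScaler()
for data, target in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        output = model(data)              # forward pass runs in mixed precision
        loss = criterion(output, target)
    scaler.scale(loss).backward()         # scale the loss so fp16 gradients don't underflow
    scaler.step(optimizer)                # unscales gradients, then takes the optimizer step
    scaler.update()                       # adapts the scale factor for the next iteration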
https://stackoverflow.com/questions/69994731/
How to save png images with OpenCV
I am trying to use OpenCV to convert the results of the model I trained into a png image. My output has 4 channels, and I am not sure how to convert these 4 channels to png. # Load the model model = CNNSEG() model.load_state_dict(torch.load(PATH)) model.eval() for iteration, sample in enumerate(test_data_loader): img = sample print(img.shape) plt.imshow(img[0,...].squeeze(), cmap='gray') #visualise all images in test set plt.pause(1) # output the results img_in = img.unsqueeze(1) output = model(img_in) # shape: [2, 4, 96, 96] As shown here, the shape of output is [2, 4, 96, 96], which is batch size, channels, height and width. So how can I convert it to a png image?
You would want to split your image into two essentially and then save them individually. import numpy as np import cv2 img = np.ones((2,4,96,96),dtype=np.uint8) #creating a random image img1 = img[0,:,:,:] #extracting the two separate images img2 = img[1,:,:,:] img1_reshaped = img1.transpose() #reshaping them to the desired form of (w,h,c) img2_reshaped = img2.transpose() cv2.imwrite("img1.png",img1_reshaped) #save the images as .png cv2.imwrite("img2.png",img2_reshaped)
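One caveat: output in the question is a float tensor still attached to the autograd graph, so before the snippet above can be applied to the real output you need to detach it and convert to 8-bit. A sketch, assuming a min-max scaling to 0-255 is acceptable for visualisation:

import numpy as np

arr = output.detach().cpu().numpy()                       # shape (2, 4, 96, 96), float
arr = (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)  # normalize to [0, 1]
img = (arr * 255).astype(np.uint8)                        # 8-bit array ready for cv2.imwrite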
https://stackoverflow.com/questions/69996609/
need help in modifying code to access YOLOv5 for CPU
I am trying to implement an object detection program with Pytorch, OpenCV and YOLOv5 that detects objects and the type of object from a YouTube video. However, while running, the output console shows that the YOLO version the program is trying to run is for CUDA. I wish to use YOLO for CPU to implement the project, as my computer is not set up for CUDA. I seek assistance on how I may go about modifying the program to utilize YOLO on the CPU. Thank you very much! Link to GitHub Gist https://gist.github.com/neitherjames/c3b91033eca3794f8170ee51ee5357d4
Your __init__ shows that you give the system the option to choose CUDA if it is available. You can force it to run on the CPU by setting self.device = 'cpu' on line 23. Then, when self.model.to(self.device) is called on line 48, the model is sent to the CPU.
https://stackoverflow.com/questions/69999605/
pytorch RNN loss does not decrease and validate accuracy remains unchanged
I'm training a model using a Pytorch GRU on a text-classification task (output dimension is 5). My network is implemented like the code below. class GRU(nn.Module): def __init__(self, model_param: ModelParam): super(GRU, self).__init__() self.embedding = nn.Embedding(model_param.vocab_size, model_param.embed_dim) # Build with pre-trained embedding vectors, if given. if model_param.vocab_embedding is not None: self.embedding.weight.data.copy_(model_param.vocab_embedding) self.embedding.weight.requires_grad = False self.rnn = nn.GRU(model_param.embed_dim, model_param.hidden_dim, num_layers=2, bias=True, batch_first=True, dropout=0.5, bidirectional=False) self.dropout = nn.Dropout(0.5) self.fc = nn.Sequential( nn.Linear(in_features=model_param.hidden_dim, out_features=128), nn.Linear(in_features=128, out_features=model_param.output_dim) ) def forward(self, x, labels=None): ''' :param x: torch.tensor, of shape [batch_size, max_seq_len]. :param labels: torch.tensor, of shape [batch_size]. Not used in this model. :return outputs: torch.tensor, of shape [batch_size, output_dim]. ''' # [batch_size, max_seq_len, embed_dim]. features = self.dropout(self.embedding(x)) # [batch_size, max_seq_len, hidden_dim]. outputs, _ = self.rnn(features) # [batch_size, hidden_dim]. outputs = outputs[:, -1, :] return self.fc(self.dropout(outputs)) I'm using nn.CrossEntropyLoss() for the loss function, and optim.SGD for the optimizer. The definition of the loss function and optimizer is given like this. # Loss function and optimizer. loss_func = nn.CrossEntropyLoss() optimizer = SGD(model.parameters(), lr=learning_rate, weight_decay=0.9) And my training procedure is roughly shown below. for batch in train_iter: optimizer.zero_grad() # The prediction of model, and its corresponding loss. prediction = model(batch.text.type(torch.LongTensor).to(device), batch.label.to(device)) loss = loss_func(prediction, batch.label.to(device)) loss.backward() optimizer.step() # Record total loss. epoch_losses.append(loss.item() / batch_size) When I'm training this model, validation accuracy and losses are reported like this. Epoch 1/300 valid acc: [0.839] (16668 in 19873), time spent 631.497 sec. Validate loss 1.506138. Best validate epoch is 1. Epoch 2/300 valid acc: [0.839] (16668 in 19873), time spent 627.631 sec. Validate loss 1.577007. Best validate epoch is 2. Epoch 3/300 valid acc: [0.839] (16668 in 19873), time spent 631.427 sec. Validate loss 1.580756. Best validate epoch is 3. Epoch 4/300 valid acc: [0.839] (16668 in 19873), time spent 605.352 sec. Validate loss 1.581306. Best validate epoch is 4. Epoch 5/300 valid acc: [0.839] (16668 in 19873), time spent 388.487 sec. Validate loss 1.581431. Best validate epoch is 5. Epoch 6/300 valid acc: [0.839] (16668 in 19873), time spent 360.344 sec. Validate loss 1.581464. Best validate epoch is 6. Epoch 7/300 valid acc: [0.839] (16668 in 19873), time spent 624.345 sec. Validate loss 1.581473. Best validate epoch is 7. Epoch 8/300 valid acc: [0.839] (16668 in 19873), time spent 622.059 sec. Validate loss 1.581477. Best validate epoch is 8. Epoch 9/300 valid acc: [0.839] (16668 in 19873), time spent 651.425 sec. Validate loss 1.581478. Best validate epoch is 9. Epoch 10/300 valid acc: [0.839] (16668 in 19873), time spent 697.475 sec. Validate loss 1.581478. Best validate epoch is 10. ...
It shows that the validation loss does not decrease after epoch 9, and the validation accuracy stays unchanged from the first epoch (note that in my dataset one of the labels accounts for 83%, so it can be inferred that my model tends to predict the same label for all sequences; but this also happens when I'm training on another dataset that is relatively less unbalanced). Has anybody met this situation before? I'm wondering if I have made a mistake in the model design or training procedure. Thanks for your help XD. Updated on Nov. 19th: I have added a figure which shows how the loss behaves during training. The figure shows that both the training loss and validation loss become constant after the 5th epoch. (Figure: training and validating loss in 20 epochs)
Now I found that the loss does not drop mainly because the weight decay set in the optimizer is too high. optimizer = SGD(model.parameters(), lr=learning_rate, weight_decay=0.9) So I fixed this and changed the weight decay to 5e-5. optimizer = SGD(model.parameters(), lr=learning_rate, weight_decay=5e-5) This time the loss of my network begins to decrease. However, there is no improvement in accuracy. Epoch 1/100 valid acc: [0.839] (16668 in 19873), time spent 398.154 sec. Validate loss 0.713456. Best validate epoch is 1. Epoch 2/100 valid acc: [0.839] (16668 in 19873), time spent 572.057 sec. Validate loss 0.631721. Best validate epoch is 2. Epoch 3/100 valid acc: [0.839] (16668 in 19873), time spent 580.867 sec. Validate loss 0.613186. Best validate epoch is 3. Epoch 4/100 valid acc: [0.839] (16668 in 19873), time spent 561.953 sec. Validate loss 0.601883. Best validate epoch is 4. Epoch 5/100 valid acc: [0.839] (16668 in 19873), time spent 564.913 sec. Validate loss 0.596573. Best validate epoch is 5. Epoch 6/100 valid acc: [0.839] (16668 in 19873), time spent 574.525 sec. Validate loss 0.592848. Best validate epoch is 6. Epoch 7/100 valid acc: [0.839] (16668 in 19873), time spent 580.885 sec. Validate loss 0.591074. Best validate epoch is 7. Epoch 8/100 valid acc: [0.839] (16668 in 19873), time spent 455.228 sec. Validate loss 0.589787. Best validate epoch is 8. Epoch 9/100 valid acc: [0.839] (16668 in 19873), time spent 582.756 sec. Validate loss 0.588691. Best validate epoch is 9. Epoch 10/100 valid acc: [0.839] (16668 in 19873), time spent 583.997 sec. Validate loss 0.588260. Best validate epoch is 10. Epoch 11/100 valid acc: [0.839] (16668 in 19873), time spent 599.630 sec. Validate loss 0.588224. Best validate epoch is 11. Epoch 12/100 valid acc: [0.839] (16668 in 19873), time spent 597.713 sec. Validate loss 0.586977. Best validate epoch is 12. Epoch 13/100 valid acc: [0.839] (16668 in 19873), time spent 605.038 sec. Validate loss 0.587937. Best validate epoch is 13. Epoch 14/100 valid acc: [0.839] (16668 in 19873), time spent 598.712 sec. Validate loss 0.587059. Best validate epoch is 14. Epoch 15/100 valid acc: [0.839] (16668 in 19873), time spent 409.344 sec. Validate loss 0.587293. Best validate epoch is 15. ... How the training loss behaves is shown in this figure. I'm wondering if a learning rate of 1e-3 and a weight decay of 5e-5 are reasonable settings. My batch size is 32.
https://stackoverflow.com/questions/70006954/
PyTorch and torch_scatter were compiled with different CUDA versions on Google Colab despite attempting to specify same version
I'm installing pytorch geometric on Google colab. I've done this lots of times before and had no issues but it has suddenly stopped working. I've not changed my code since it worked. Here is how I install it: !pip install torch==1.8.1 torchvision torchtext import torch; print(torch.__version__); print(torch.version.cuda) !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html !pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html !pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html !pip install torch-geometric The pytorch version should be 1.8.1+cu102, confirmed by the print statement above. I specify the version when installing with !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html. However, when I import torch_geometric I get the error: Detected that PyTorch and torch_scatter were compiled with different CUDA versions. PyTorch has CUDA version 10.2 and torch_scatter has CUDA version 11.1. Please reinstall the torch_scatter that matches your PyTorch install. Why is torch_scatter not compiling with CUDA version 10.2? Is there a way to force it to compile with this version?
You could try to specify the latest wheel version provided by the link you use: https://pytorch-geometric.com/whl/torch-1.8.1+cu102.html (for November 22nd 2021 it is 2.0.8): pip install torch-scatter==2.0.8 -f https://data.pyg.org/whl/torch-1.8.1+cu102.html It looks like the latest torch-scatter version in Google Colab is 2.0.9, which is newer than 2.0.8. Therefore, when you run your command it does not do anything, thinking that the latest version is already installed.
https://stackoverflow.com/questions/70008715/
How to use tensor cores in pytorch and tensorflow?
I am using an Nvidia RTX GPU with tensor cores, and I want to make sure pytorch/tensorflow is utilizing its tensor cores. I noticed in a few articles that the tensor cores are used to process float16, and by default pytorch/tensorflow uses float32. They have introduced some lib that does "mixed precision and distributed training". It is a somewhat old answer. I want to know if pytorch or tensorflow GPU now supports tensor core processing out of the box.
Mixed Precision is available in both libraries. For pytorch it is torch.cuda.amp, AUTOMATIC MIXED PRECISION PACKAGE. https://pytorch.org/docs/stable/amp.html https://pytorch.org/docs/stable/notes/amp_examples.html. Tensorflow has it here, https://www.tensorflow.org/guide/mixed_precision.
https://stackoverflow.com/questions/70013685/
tensorflow 2.4.1 requires six~=1.15.0, but you have six 1.16.0 which is incompatible
I am getting this error: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. tensorflow 2.4.1 requires six~=1.15.0, but you have six 1.16.0 which is incompatible. when I try to import the packages torch and datasets for huggingface summarization in my client's environment. pandas numpy torch datasets==1.14.0 transformers==4.11.3 rouge-score==0.0.4 nltk==3.6.5 pyarrow==6.0.0 beautifulsoup4==4.10.0 numpy pandas lxml requests==2.23.0 wikipedia==1.4.0 These are the packages that I am giving as requirements. Could it be that one of these is installing six==1.16 and uninstalling six==1.15? Here is further detail of the log: Downloading cnvrg-0.7.51-py3-none-any.whl (78 kB) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (1.15.0) Collecting six Downloading six-1.16.0-py2.py3-none-any.whl (11 kB) Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2.0dev,>=1.11.0->google-cloud-storage->cnvrg) (0.4.8) Requirement already satisfied: PyYAML in /usr/local/lib/python3.6/dist-packages (from pyaml->cnvrg) (5.4.1) Installing collected packages: six, urllib3, requests, azure-core, azure-storage-blob, cnvrg Attempting uninstall: six Found existing installation: six 1.15.0 Uninstalling six-1.15.0: Successfully uninstalled six-1.15.0 Eventually, this error is not allowing me to import both torch and datasets. Can anyone help in resolving this?
After installing all dependencies, install six 1.15.0: pip install -r requirements.txt then run pip install six~=1.15.0 or pip install six==1.15.0
https://stackoverflow.com/questions/70014599/
Best way to import CSV data in a python tensor for machine learning?
I want to import a csv data file in python in order to create a neural network down the road (with pytorch). The file has 4 columns and around 1000 rows, with the first row as headers. What is the best way to do this?
Just use pandas. In particular what you need is the read_csv function. import pandas as pd ... dataframe = pd.read_csv(&quot;/location/file.csv&quot;) Check out the pandas references for more details.
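From there, moving the data into a tensor is a one-liner per column group. A sketch, assuming the first three columns are features and the fourth is the target (adjust the slicing to your actual columns):

import pandas as pd
import torch

df = pd.read_csv("/location/file.csv")                               # first row becomes the header
features = torch.tensor(df.iloc[:, :3].values, dtype=torch.float32)  # ~1000 x 3 tensor
targets = torch.tensor(df.iloc[:, 3].values, dtype=torch.float32)    # ~1000 tensor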
https://stackoverflow.com/questions/70025566/
Pandas: Unexpected behavior for apply function with torch.tensor()
I am confused by the behavior of the pandas apply() function. I want to convert a column containing lists of ints to a torch.tensor. Here is some sample code showing the behavior: df_test = pd.DataFrame([3,3,3], columns=['value']) df_test.value = df_test.value.apply(lambda x: [y for y in range(x)]) print(df_test) # Output: # value # 0 [0, 1, 2] # 1 [0, 1, 2] # 2 [0, 1, 2] print(df_test.value.apply(lambda x: torch.tensor(x))) # Output: # value # 0 [tensor(0), tensor(1), tensor(2)] # 1 [tensor(0), tensor(1), tensor(2)] # 2 [tensor(0), tensor(1), tensor(2)] print(df_test.value.apply(lambda x: x + [12])) # Output: # 0 [0, 1, 2, 12] # 1 [0, 1, 2, 12] # 2 [0, 1, 2, 12] print(torch.tensor([1,2,3])) # Output: # tensor([1, 2, 3]) I would have expected one tensor with three elements per row, but instead apply creates a list of one-element tensors. For testing, I added an example that adds an element to the list, to ensure that x is the list itself. As you can see, it behaves as expected. Can anyone explain the behavior? Is there a workaround? I don't want to use torch.tensor(df.values), since I need to apply the tensor transformation to multiple columns and want to keep them in the dataframe. Thanks!
The reason is that apply implicitly converts the tensor back to a list, because the type of df_test.value[0] is a list. When you convert a tensor to a list, here is the result: print(df_test.value[0]) # list x = torch.tensor([1,2,3]) print(list(x)) # converting a tensor to a list [tensor(1), tensor(2), tensor(3)] You expected tensor([1, 2, 3]) to replace each list in df_test["value"]. But do not forget that the column type would then be tensor, which is not a valid type in pandas. The way to solve this is to convert the dataframe to a NumPy array and then to a tensor, do all your transformations, and then convert back through NumPy to pandas. If you try this code: df_test["new"] = torch.tensor([1,2,3]) type(df_test.new.dtype) # it is not tensor but NumPy, which shows the implicit conversion numpy.dtype[int64]
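A practical workaround, if the goal is one stacked tensor per column while keeping plain lists in the dataframe, is to build the tensor at use time instead of storing tensors in pandas. A sketch with the df_test from the question:

import torch

# each cell holds a plain Python list; stack them into one (n_rows, list_len) tensor on demand
values = torch.tensor(df_test["value"].tolist())
print(values[0])  # tensor([0, 1, 2]) instead of a list of 0-d tensors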
https://stackoverflow.com/questions/70031998/
How to convert from tensor to float
I have a list of tensors that I want to convert to floating points; how can I do it? I have tried using .item but it is not working. I am getting ValueError: only one element tensors can be converted to Python scalars. tensor([[12.1834, 4.9616, 7.7913], [ 8.9394, 8.5784, 9.3691], [ 9.4475, 8.9766, 9.8418], [11.8210, 6.0852, 8.2168], [ 8.2885, 6.2607, 9.8877]], grad_fn=<CloneBackward0>)
You just need to convert the tensor to a NumPy array; then you can access the values by index. Since the tensor shown carries a grad_fn, detach it first: result.detach().numpy()[0]
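If you want plain Python floats for the whole tensor rather than a NumPy array, .tolist() converts every element in one go. A small sketch with the tensor from the question:

floats = result.detach().tolist()  # nested Python lists of plain floats
print(floats[0][0])                # 12.1834... as a float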
https://stackoverflow.com/questions/70043645/
Pytorch - IndexError: index out of range in self
I am working on building an LSTM-based seq2seq sentence - slots solution. For instance: Input sentence: My name is James Bond Output Slot: O O O B-name I-name I'm unable to figure out the reason for the below error: IndexError: index out of range in self > <ipython-input-37-19283c592e18>(12)<module>() 10 set_trace() 11 inputs = torch.tensor(training_data[0][0]) ---> 12 tag_scores = model(inputs) 13 print(tag_scores) When I try to execute the following code - class LSTMTagger(nn.Module): def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size): super(LSTMTagger, self).__init__() self.hidden_dim = hidden_dim self.word_embeddings = nn.Embedding(vocab_size, embedding_dim) self.lstm = nn.LSTM(embedding_dim, hidden_dim) self.hidden2tag = nn.Linear(hidden_dim, tagset_size) def forward(self, sentence): embeds = self.word_embeddings(sentence) lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1)) tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1)) tag_scores = F.log_softmax(tag_space, dim=1) return tag_scores model = LSTMTagger( EMBEDDING_DIM, HIDDEN_DIM, len(vocab2sent), len(vocab2slot)) loss_function = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.1) with torch.no_grad(): inputs = torch.tensor(training_data[0][0]) tag_scores = model(inputs) print(tag_scores) for epoch in range(300): for sentence, tags in training_data: model.zero_grad() sentence_in = torch.tensor(sentence, dtype=torch.long) targets = torch.tensor(tags, dtype=torch.long) tag_scores = model(sentence_in) loss = loss_function(sentence_in, targets) loss.backward() optimizer.step() with torch.no_grad(): inputs = prepare_sequence(training_data[0][0], vocab2sent) tag_scores = model(inputs) print(tag_scores) My variable values: vocab2sent - dict with input sentences vocabulary ( word : unique number) vocab2slot - dict with output vocabulary (slot : unique number) inputs - tensor([ 229, 1056, 701, 330, 1093, 37, 166, 517, 1150, 1150, 1150, 1150, 1150, 1150, 1150, 1150, 1150, 1150, 1150, 1150, 1150]) Model value during runtime - LSTMTagger( (word_embeddings): Embedding(1148, 560) (lstm): LSTM(560, 560) (hidden2tag): Linear(in_features=560, out_features=28, bias=True) )
The vocabulary size for the Embedding layer is 1148: Embedding(1148, 560) but in the inputs you have index 1150. Maybe it is the source of your problem?
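A quick way to confirm this, sketched with the names from the question (nn.Embedding exposes its size as num_embeddings), is to compare the largest token index against the embedding size — note the padding index 1150 must fit too:

max_index = int(inputs.max())
print(max_index, model.word_embeddings.num_embeddings)  # 1150 vs 1148 -> IndexError
# a possible fix: construct the model with a vocab size that covers every index,
# padding included, e.g. LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, max_index + 1, len(vocab2slot))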
https://stackoverflow.com/questions/70052109/
How to build a neural network in this structure? With different nodes connecting to different numbers of nodes in the next layer
I only know how to use built-in networks like the RNN or LSTM in PyTorch. But in those, every node in the previous layer gives information to all nodes in the next layer. I want to do something different but don't know how to code it myself. Like in this figure: node a maps to all three nodes [d, e, f] in layer 2, while node b maps to [e, f] and node c only maps to [f]. As a result, node d will only contain information from a, while e will contain information from [a, b]. And f will contain information from all nodes in the previous layer. Does anyone know how to code this structure? Please give me some insight, I'll be very grateful :D (Figure: Structure)
When you have a layer that looks like a Fully-Connected layer but with custom connectivity, use a mask with the proper structure. Let's say x = [a, b, c] is your 3-dim input and W denotes the connectivity matrix. >> x tensor([[0.1825], [0.9598], [0.2871]]) >> W tensor([[0.7459, 0.4669, 0.9687], [0.9016, 0.4690, 0.0471], [0.5926, 0.9700, 0.5222]]) then W[i][j] points to the connecting weight between the jth input and ith output neuron. To build the structure similar to your toy example, we would make a mask like this >> mask tensor([[1., 0., 0.], [1., 1., 0.], [1., 1., 1.]]) Then you can simply mask the W >> (mask * W) @ x tensor([[0.1361], [0.6147], [1.1892]]) Note: @ is matrix multiplication and * is pointwise multiplication.
https://stackoverflow.com/questions/70055054/
loss.backward() no grad in pytorch NN
The code gives an error in loss.backward(). The error is: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn for epoch in range(N_EPOCHS): model.train() for i,(im1, im2, labels) in enumerate(train_dl): i1 = torch.flatten(im1,1) i2 = torch.flatten(im2,1) inp = torch.cat([i1,i2],1) b_x = Variable(inp) # batch x b_y = Variable(labels) # batch y y_ = model(b_x).squeeze() y_ = (y_>0.5).float() print(y_) print(l) loss = criterion(y_,b_y) print(loss.item()) loss.backward() optimizer.step()
With additional info given by OP in the comment, the correct approach here is just removing the line y_ = (y_>0.5).float()
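The comparison (y_ > 0.5) is non-differentiable and produces a tensor with no grad_fn, which is why autograd has nothing to backpropagate through. A sketch of the corrected core of the loop, assuming criterion is something like nn.BCEWithLogitsLoss that works on raw scores:

y_ = model(b_x).squeeze()           # keep the raw outputs so the graph is preserved
loss = criterion(y_, b_y.float())   # the loss handles the 0/1 decision implicitly
loss.backward()
optimizer.step()
# use (y_ > 0.5).float() only when computing accuracy metrics, never for the loss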
https://stackoverflow.com/questions/70056774/
Select and MSELoss for two torch tensors
I have two torch tensors: predictions = tensor([[33, 34, 7, 5, 5, 23, 22, 1, 3, 5, 23, 1], [14, 1, 22, 7, 5, 11, 7, 33, 3, 12, 25, 22], [33, 1, 14, 12, 23, 22, 12, 2, 3, 12, 23, 14], [23, 34, 34, 3, 5, 25, 12, 11, 2, 23, 23, 13]]) labels = tensor([[-100, -100, -100, -100, -100, 11, -100, -100, -100, -100, -100, -100], [-100, -100, -100, -100, -100, -100, -100, -100, 40, -100, -100, -100], [-100, 42, -100, 43, -100, -100, -100, -100, -100, -100, -100, -100], [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 32, -100]]) I'd like to calculate MSELoss(predictions, labels) only for values not equal to -100, but I don't know how to select the corresponding values. For this example the result should be calculated for the following values: [23] and [11] [3] and [40] [1,12] and [42, 43] [23] and [32] I tried this: nn.MSELoss(preds.squeeze(), labels.squeeze()) but I receive an error.
In Short # convert to float predictions = predictions.to(torch.float) labels = labels.to(torch.float) # pick the right entries reduced_predictions = predictions[labels != -100] reduced_labels = labels[labels != -100] # initialize loss_fn torch.nn.MSELoss()(reduced_predictions, reduced_labels) # Note: nn.MSELoss() is instantiated first, then called with the tensors in a second pair of parentheses In Detail First you need to filter the undesired entries; you can do so as follows: import torch predictions = torch.tensor([[33, 34, 7, 5, 5, 23, 22, 1, 3, 5, 23, 1], [14, 1, 22, 7, 5, 11, 7, 33, 3, 12, 25, 22], [33, 1, 14, 12, 23, 22, 12, 2, 3, 12, 23, 14], [23, 34, 34, 3, 5, 25, 12, 11, 2, 23, 23, 13]]) labels = torch.tensor([[-100, -100, -100, -100, -100, 11, -100, -100, -100, -100, -100, -100], [-100, -100, -100, -100, -100, -100, -100, -100, 40, -100, -100, -100], [-100, 42, -100, 43, -100, -100, -100, -100, -100, -100, -100, -100], [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 32, -100]]) # find the entries different than -100 indices = labels != -100 # pick the corresponding values from predictions and labels reduced_predictions = predictions[indices] reduced_labels = labels[indices].to(torch.float) The nn.MSELoss() receives floats, so you must convert your tensors to a suitable dtype. Then, you have to instantiate the loss function (create an instance of MSELoss by calling its constructor): loss_fn = torch.nn.MSELoss() And only then call it using your tensors: loss_fn(reduced_predictions, reduced_labels) The result I received: tensor(847.2000)
https://stackoverflow.com/questions/70058203/
Create an Image stack in PyTorch
I have a Bx3xHxW image tensor in PyTorch and wish to create a Bx3FxHxW image stack of this image where F=64. The image stack is formed by shifting the original image right. That is, if the original image is to be shifted by 2 pixels to the right, the two left-most columns in the new image will be 0 and the third-last column of the original image will become the last column of the new shifted image. The image stack is formed by shifting the original image right f times, where f varies from 0 to F-1. How to achieve this in PyTorch in the most efficient manner using no or a minimal number of for loops? A numpy solution will also work, as the two libraries are quite compatible.
Torch (as well as numpy) provides torch.roll function, by padding with zeros first, rolling and then slicing the result you can achieve your right shift. Here's a numpy version: import numpy as np X = np.random.rand(4,3,28,28) Z = np.zeros((4,3,28,28)) XZ = np.concatenate([X,Z],axis=-1) res = [] shift = 2 F = 28//shift for i in range(0,F,shift): res.append(np.roll(XZ,i,-1)[:,:,:,:28]) res = np.concatenate(res,axis=1) Here's a 1D example for better understanding: x = np.arange(10) z = np.zeros(10) xz = np.concatenate([x,z],axis=-1) for i in range(0,10,2): print(np.roll(xz,i,-1)[:10]) [0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] [0. 0. 0. 1. 2. 3. 4. 5. 6. 7.] [0. 0. 0. 0. 0. 1. 2. 3. 4. 5.] [0. 0. 0. 0. 0. 0. 0. 1. 2. 3.] [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.] I hope this is what you are looking for.
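Since the question asks for PyTorch, the same idea translates directly via torch.roll and torch.cat. A sketch mirroring the numpy version above (same shapes and shift step):

import torch

X = torch.rand(4, 3, 28, 28)
Z = torch.zeros_like(X)
XZ = torch.cat([X, Z], dim=-1)   # zero-pad on the right so rolled-in values are zeros
shift = 2
F = 28 // shift
res = torch.cat(
    [torch.roll(XZ, i, dims=-1)[..., :28] for i in range(0, F, shift)],
    dim=1,                       # concatenate the shifted copies along the channel axis
)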
https://stackoverflow.com/questions/70064110/
Understanding torch.nn.LayerNorm in nlp
I'm trying to understand how torch.nn.LayerNorm works in an nlp model. Assuming the input data is a batch of sequences of word embeddings: batch_size, seq_size, dim = 2, 3, 4 embedding = torch.randn(batch_size, seq_size, dim) print("x: ", embedding) layer_norm = torch.nn.LayerNorm(dim) print("y: ", layer_norm(embedding)) # outputs: """ x: tensor([[[ 0.5909, 0.1326, 0.8100, 0.7631], [ 0.5831, -1.7923, -0.1453, -0.6882], [ 1.1280, 1.6121, -1.2383, 0.2150]], [[-0.2128, -0.5246, -0.0511, 0.2798], [ 0.8254, 1.2262, -0.0252, -1.9972], [-0.6092, -0.4709, -0.8038, -1.2711]]]) y: tensor([[[ 0.0626, -1.6495, 0.8810, 0.7060], [ 1.2621, -1.4789, 0.4216, -0.2048], [ 0.6437, 1.0897, -1.5360, -0.1973]], [[-0.2950, -1.3698, 0.2621, 1.4027], [ 0.6585, 0.9811, -0.0262, -1.6134], [ 0.5934, 1.0505, -0.0497, -1.5942]]], grad_fn=<NativeLayerNormBackward0>) """ From the documentation's description, my understanding is that the mean and std are computed over all embedding values per sample. So I try to compute y[0, 0, :] manually: mean = torch.mean(embedding[0, :, :]) std = torch.std(embedding[0, :, :]) print((embedding[0, 0, :] - mean) / std) which gives tensor([ 0.4310, -0.0319, 0.6523, 0.6050]) and that's not the right output. I want to know what is the right way to compute y[0, 0, :]?
Pytorch layer norm states that mean and std are calculated over the last D dimensions. Based on this, for (batch_size, seq_size, embedding_dim) I would expect the calculation to be over (seq_size, embedding_dim), the last 2 dimensions excluding the batch dim. A similar question and answer with a layer norm implementation can be found here, layer Normalization in pytorch?. Some papers below show different layer norm applications in NLP. Explanation of Instance vs Layer vs Group Norm From the group norm paper, Layer Normalization (LN) operates along the channel dimension: LN computes µ and σ along the (C, H, W) axes for each sample. Different Application Example In the pytorch doc's NLP 3d tensor example, mean and std are instead calculated over only the last dim, embedding_dim. In this paper it shows, similar to the pytorch doc example, that almost all NLP tasks take variable length sequences as input, which is very suitable for LN that only calculates statistics in the channel dimension without involving the batch and sequence length dimension. As shown in another paper, LN normalizes across the channel/feature dimension as shown in Figure 1. Manual Layer Norm with only Embed Dim import torch batch_size, seq_size, dim = 2, 3, 4 last_dims = 4 embedding = torch.randn(batch_size, seq_size, dim) print("x: ", embedding) layer_norm = torch.nn.LayerNorm(last_dims, elementwise_affine = False) layer_norm_out = layer_norm(embedding) print("y: ", layer_norm_out) eps: float = 0.00001 mean = torch.mean(embedding[0, :, :], dim=(-1), keepdim=True) var = torch.square(embedding[0, :, :] - mean).mean(dim=(-1), keepdim=True) y_custom = (embedding[0, :, :] - mean) / torch.sqrt(var + eps) print("y_custom: ", y_custom) assert torch.allclose(layer_norm_out[0], y_custom), 'Tensors do not match.' eps: float = 0.00001 mean = torch.mean(embedding[1, :, :], dim=(-1), keepdim=True) var = torch.square(embedding[1, :, :] - mean).mean(dim=(-1), keepdim=True) y_custom = (embedding[1, :, :] - mean) / torch.sqrt(var + eps) print("y_custom: ", y_custom) assert torch.allclose(layer_norm_out[1], y_custom), 'Tensors do not match.'
Output x: tensor([[[-0.0594, -0.8702, -1.9837, 0.2914], [-0.4774, 1.0372, 0.6425, -1.1357], [ 0.3872, -0.9190, -0.5774, 0.3281]], [[-0.5548, 0.0815, 0.2333, 0.3569], [ 1.0380, -0.1756, -0.7417, 2.2930], [-0.0075, -0.3623, 1.9310, -0.7043]]]) y: tensor([[[ 0.6813, -0.2454, -1.5180, 1.0822], [-0.5700, 1.1774, 0.7220, -1.3295], [ 1.0285, -1.2779, -0.6747, 0.9241]], [[-1.6638, 0.1490, 0.5814, 0.9334], [ 0.3720, -0.6668, -1.1513, 1.4462], [-0.2171, -0.5644, 1.6809, -0.8994]]]) y_custom: tensor([[ 0.6813, -0.2454, -1.5180, 1.0822], [-0.5700, 1.1774, 0.7220, -1.3295], [ 1.0285, -1.2779, -0.6747, 0.9241]]) y_custom: tensor([[-1.6638, 0.1490, 0.5814, 0.9334], [ 0.3720, -0.6668, -1.1513, 1.4462], [-0.2171, -0.5644, 1.6809, -0.8994]]) Manual Layer Norm over 4D Tensor import torch batch_size, c, h, w = 2, 3, 2, 4 last_dims = [c, h, w] embedding = torch.randn(batch_size, c, h, w) print("x: ", embedding) layer_norm = torch.nn.LayerNorm(last_dims, elementwise_affine = False) layer_norm_out = layer_norm(embedding) print("y: ", layer_norm_out) eps: float = 0.00001 mean = torch.mean(embedding[0, :, :], dim=(-3, -2, -1), keepdim=True) var = torch.square(embedding[0, :, :] - mean).mean(dim=(-3, -2, -1), keepdim=True) y_custom = (embedding[0, :, :] - mean) / torch.sqrt(var + eps) print("y_custom: ", y_custom) assert torch.allclose(layer_norm_out[0], y_custom), 'Tensors do not match.' eps: float = 0.00001 mean = torch.mean(embedding[1, :, :], dim=(-3, -2, -1), keepdim=True) var = torch.square(embedding[1, :, :] - mean).mean(dim=(-3, -2, -1), keepdim=True) y_custom = (embedding[1, :, :] - mean) / torch.sqrt(var + eps) print("y_custom: ", y_custom) assert torch.allclose(layer_norm_out[1], y_custom), 'Tensors do not match.' Output x: tensor([[[[ 1.0902, -0.8648, 1.5785, 0.3087], [ 0.0249, -1.3477, -0.9565, -1.5024]], [[ 1.8024, -0.2894, 0.7284, 0.7822], [ 1.4385, -0.2848, -0.3114, 0.4633]], [[ 0.9061, 0.3066, 0.9916, 0.9284], [ 0.3356, 0.9162, -0.4579, 1.0669]]], [[[-0.8292, 0.9111, -0.7307, -1.1003], [ 0.3441, -1.9823, 0.1313, 0.2048]], [[-0.2838, 0.1147, -0.1605, -0.4637], [-2.1343, -0.4402, 1.6685, 0.4455]], [[ 0.6895, -2.7331, 1.1693, -0.6999], [-0.3497, -0.2942, -0.0028, -1.3541]]]]) y: tensor([[[[ 0.8653, -1.3279, 1.4131, -0.0114], [-0.3298, -1.8697, -1.4309, -2.0433]], [[ 1.6643, -0.6824, 0.4594, 0.5198], [ 1.2560, -0.6772, -0.7071, 0.1619]], [[ 0.6587, -0.0137, 0.7547, 0.6838], [ 0.0188, 0.6701, -0.8715, 0.8392]]], [[[-0.4938, 1.2220, -0.3967, -0.7610], [ 0.6629, -1.6306, 0.4531, 0.5256]], [[ 0.0439, 0.4368, 0.1655, -0.1335], [-1.7805, -0.1103, 1.9686, 0.7629]], [[ 1.0035, -2.3707, 1.4764, -0.3663], [-0.0211, 0.0337, 0.3210, -1.0112]]]]) y_custom: tensor([[[ 0.8653, -1.3279, 1.4131, -0.0114], [-0.3298, -1.8697, -1.4309, -2.0433]], [[ 1.6643, -0.6824, 0.4594, 0.5198], [ 1.2560, -0.6772, -0.7071, 0.1619]], [[ 0.6587, -0.0137, 0.7547, 0.6838], [ 0.0188, 0.6701, -0.8715, 0.8392]]]) y_custom: tensor([[[-0.4938, 1.2220, -0.3967, -0.7610], [ 0.6629, -1.6306, 0.4531, 0.5256]], [[ 0.0439, 0.4368, 0.1655, -0.1335], [-1.7805, -0.1103, 1.9686, 0.7629]], [[ 1.0035, -2.3707, 1.4764, -0.3663], [-0.0211, 0.0337, 0.3210, -1.0112]]]) Example of custom layer norm implementation from typing import Union, List import torch batch_size, seq_size, embed_dim = 2, 3, 4 embedding = torch.randn(batch_size, seq_size, embed_dim) print("x: ", embedding) print(embedding.shape) print() layer_norm = torch.nn.LayerNorm(embed_dim, elementwise_affine=False) layer_norm_output = layer_norm(embedding) print("y: ", layer_norm_output) print(layer_norm_output.shape) print() def custom_layer_norm( x: torch.Tensor, dim: Union[int, List[int]] = -1, eps: float = 0.00001 ) -> torch.Tensor: mean = torch.mean(x, dim=(dim,), keepdim=True) var = torch.square(x - mean).mean(dim=(dim,), keepdim=True) return (x - mean) / torch.sqrt(var + eps) custom_layer_norm_output = custom_layer_norm(embedding) print("y_custom: ", custom_layer_norm_output) print(custom_layer_norm_output.shape) assert torch.allclose(layer_norm_output, custom_layer_norm_output), 'Tensors do not match.' Output x: tensor([[[-0.4808, -0.1981, 0.4538, -1.2653], [ 0.3578, 0.6592, 0.2161, 0.3852], [ 1.2184, -0.4238, -0.3415, -0.3487]], [[ 0.9874, -1.7737, 0.1886, 0.0448], [-0.5162, 0.7872, -0.3433, -0.3266], [-0.5459, -0.0371, 1.2625, -1.6030]]]) torch.Size([2, 3, 4]) y: tensor([[[-0.1755, 0.2829, 1.3397, -1.4471], [-0.2916, 1.5871, -1.1747, -0.1208], [ 1.7301, -0.6528, -0.5334, -0.5439]], [[ 1.1142, -1.6189, 0.3235, 0.1812], [-0.8048, 1.7141, -0.4709, -0.4384], [-0.3057, 0.1880, 1.4489, -1.3312]]]) torch.Size([2, 3, 4]) y_custom: tensor([[[-0.1755, 0.2829, 1.3397, -1.4471], [-0.2916, 1.5871, -1.1747, -0.1208], [ 1.7301, -0.6528, -0.5334, -0.5439]], [[ 1.1142, -1.6189, 0.3235, 0.1812], [-0.8048, 1.7141, -0.4709, -0.4384], [-0.3057, 0.1880, 1.4489, -1.3312]]]) torch.Size([2, 3, 4])
https://stackoverflow.com/questions/70065235/
Best practice for controlling Pytorch's neural network layers' number and size?
I'm looking for best practices for controlling/adjusting the number of layers and also their sizes in Pytorch neural networks in general. I have a configuration file in which I specify values for particular experiment variables. Additionally, I'd like to have an option in this file to determine the number and size of the network's layers. Current solution: config.py ACTOR_LAYER_SIZES: (128, 256, 128) network.py # input_size: int # output_size: int # layer_sizes = ACTOR_LAYER_SIZES layers = [ nn.Linear(input_size, layer_sizes[0]), nn.ReLU(), BatchNorm1d(layer_sizes[0]), ] layers += list( chain.from_iterable( [ [ nn.Linear(n_size, next_n_size), nn.ReLU(), BatchNorm1d(next_n_size), ] for n_size, next_n_size in zip(layer_sizes, layer_sizes[1:]) ] ) ) layers += [(nn.Linear(layer_sizes[-1], output_size))] network = nn.Sequential(*layers) I wonder if using chain.from_iterable may be considered the best practice here in general. Furthermore, this code appears to be a little lengthy. Maybe there is a better way to do that?
I use the following snippet for this task: import torch.nn as nn num_inputs = 10 num_outputs = 5 hidden_layers = (128, 256, 128) activation = nn.ReLU layers = ( num_inputs, *hidden_layers, num_outputs ) network_architecture = [] for i in range(1, len(layers)): network_architecture.append(nn.Linear(layers[i - 1], layers[i])) if i < len(layers) - 1: # for regression tasks prevent the last layer from getting an activation function network_architecture.append(activation()) model = nn.Sequential(*network_architecture) The if statement prevents the output layer from getting an activation function. This is necessary when you do regression. If you want to do classification, however, you need some kind of activation function (e.g. softmax) there to output discrete classes. Using a for loop in conjunction with an if statement instead of chain.from_iterable has the advantage that it is universally and intuitively understood. Furthermore, by moving the activation function out of the loop, it is configurable. Adding the BatchNorm1d layer should be straightforward.
https://stackoverflow.com/questions/70065838/
Couldn't load the Pytorch optimised model in android
This is the error I'm getting in android while loading the model. java.lang.RuntimeException: Unable to start activity ComponentInfo{com.android.example.cataractdetectionapp/com.android.example.cataractdetectionapp.InferenceActivity}: com.facebook.jni.CppException: Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Vulkan, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode]. CPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel] Vulkan: registered at ../aten/src/ATen/native/vulkan/ops/Factory.cpp:47 [kernel] BackendSelect: registered at aten/src/ATen/RegisterBackendSelect.cpp:665 [kernel] Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback] Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback] Conjugate: fallthrough registered at ../aten/src/ATen/ConjugateFallback.cpp:22 [kernel] Negative: fallthrough registered at ../aten/src/ATen/native/NegateFallback.cpp:22 [kernel] ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback] AutogradOther: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:35 [backend fallback] AutogradCPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:39 [backend fallback] AutogradCUDA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:47 [backend fallback] AutogradXLA: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:51 [backend fallback] AutogradLazy: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:55 [backend fallback] AutogradXPU: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:43 [backend fallback] AutogradMLC: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:59 [backend fallback] UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback] Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback] Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback] VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback] I used the below code for loading the module: Module module = LiteModuleLoader.load(assetFilePath(getApplicationContext(), "Model.ptl")); The dependencies I'm using for Pytorch are as follows: implementation 'org.pytorch:pytorch_android_lite:1.10.0' implementation 'org.pytorch:pytorch_android_torchvision:1.10.0'
The solution to this error was simple. The error was because the model was saved with the Runtime being GPU. So, I loaded the native PyTorch model again in the CPU environment and saved it for Lite Interpreter and it is successfully loading in the android app.
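For reference, the re-export step can be sketched like this — the mobile_optimizer workflow is the one from the PyTorch mobile docs, and the example input shape is a placeholder for whatever your model expects:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model.eval()
model.to('cpu')                                    # make sure no CUDA tensors get serialized
example = torch.rand(1, 3, 224, 224)               # hypothetical input shape
traced = torch.jit.trace(model, example)
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("Model.ptl")  # the format LiteModuleLoader expects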
https://stackoverflow.com/questions/70081406/
Dicom data training failed by pytorch
I've got a problem training Pytorch models. I'm trying to train my Pytorch model using dicom data and nifti GT. However, the size of the weight file is ridiculously small because model training is not performed normally. I used the Unet++ network model. I think there is a problem with the data loader, but I can't fix it... I'd appreciate it if you could help me. The raw image file format is dicom and the GT image format is nifti. In my dataloader: def __getitem__(self, index): image_path = self.image_paths[index] image_GT_path = image_path[:8]+'_'+image_path[8:12]+'.nii' GT_path = self.GT_paths + image_GT_path ds = dcmread(self.root+image_path) image = ds.pixel_array.astype(np.float32) image = torch.from_numpy(image.transpose(0,1)/255) image = image.unsqueeze(0) GT = nib.load(GT_path) GT = GT.get_fdata(dtype=np.float32) print(GT.shape) GT = torch.from_numpy(GT.transpose(0,1)) GT = GT.unsqueeze(0) return image, GT, image_path And the train code is: for epoch in range(self.num_epochs): self.unet.train(True) epoch_loss = 0 for i, (images, GT,empty) in enumerate(tqdm(self.train_loader)): # GT : Ground Truth images = images.to(self.device) GT = GT.to(self.device) # SR : Segmentation Result SR = self.unet(images) SR_probs = torch.sigmoid(SR) SR_flat = SR_probs.view(SR_probs.size(0),-1) GT_flat = GT.view(GT.size(0),-1) loss =self.criterion(SR_flat,GT_flat) # self.criterion=DiceLoss() #BCE not use # loss = self.criterion(GT,SR_probs) epoch_loss += loss.item() train_losses.append(loss.item()) # Backprop + optimize self.reset_grad() loss.backward() self.optimizer.step()
Depending on what modality your images are, this might possibly be due to not converting the image data into the correct, clinically relevant, machine/vendor independent units prior to any ML training 0-1 normalization. Typically in dicom files, the actual raw data values aren't that - they need processing... For instance, if you're trying to train on CT data, then the units you should be training your model on are Hounsfield units (HU). (Do a google on that, CT and dicom, to get some background). However, raw CT dicom data could be little or big endian, likely needs a slope/intercept correction applied, and could also need look up tables applied to convert it into HU numbers. I.e. it can get complicated and messy. (Again, do a bit of googling... you should at least have a bit of background on this if you're trying to do anything with medical image formats). I'm not sure how to process nifti data; however, luckily for dicom files using pydicom this conversion can be done for you by the library, using (typically) a call to pydicom.pixel_data_handlers.util.apply_modality_lut: dcm = pydicom.dcmread(my_ct_dicom_file) data_in_HU = pydicom.pixel_data_handlers.util.apply_modality_lut( dcm.pixel_array, dcm )
https://stackoverflow.com/questions/70091655/
Does albumentations normalize mask?
What happens when I pass an image and a mask to albumentations.Normalize(mean, std)? How would I go about incorporating this? Should I just add it manually in the dataset? Grateful for any tips you have!
Edited: Normalization works for three-channel images. If your mask is a grayscale image, then you probably need to stack it (image = np.stack((img,)*3, axis=-1)) to make a three-channel image and then apply albumentations' Normalize function. The official function for A.Normalize() is as follows, and it deals with RGB images: def normalize(img, mean, std, max_pixel_value=255.0): mean = np.array(mean, dtype=np.float32) mean *= max_pixel_value std = np.array(std, dtype=np.float32) std *= max_pixel_value denominator = np.reciprocal(std, dtype=np.float32) img = img.astype(np.float32) img -= mean img *= denominator return img According to Albumentations' docs, you can make a composition of transforms and use it within a PyTorch dataset. import albumentations as A from albumentations.pytorch import ToTensorV2 train_transform = A.Compose( [ A.SmallestMaxSize(max_size=160), A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5), A.RandomCrop(height=128, width=128), A.RGBShift(r_shift_limit=15, g_shift_limit=15, b_shift_limit=15, p=0.5), A.RandomBrightnessContrast(p=0.5), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), ToTensorV2(), ] ) train_dataset = CatsVsDogsDataset(images_filepaths=train_images_filepaths, transform=train_transform) But I am not really sure whether normalizing the mask image is the right way.
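Note that when you pass the mask through the composed transform, albumentations applies only the spatial transforms to it — to my knowledge Normalize targets the image alone, so the mask values come through unchanged. A sketch of the usual call pattern:

# image: HxWx3 uint8 array, mask: HxW array of class ids
transformed = train_transform(image=image, mask=mask)
image_t = transformed["image"]  # normalized and converted to a tensor by ToTensorV2
mask_t = transformed["mask"]    # flipped/cropped along with the image, values untouched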
https://stackoverflow.com/questions/70094632/
NameError: name '_C' is not defined
When running a Python project using torch==1.4.0 I got the error in the title. I am using Linux and not using any kind of IDE or notebook. There are other questions about this, but they were all in Jupyter notebooks or on Windows.
What happened is that the version of libffi I was using was too new. It looks like libffi recently upgraded to version 8, but something (Torch?) required v7. v7 not being present caused some kind of import to fail silently rather than giving a clear error, resulting in the error in the title. I was able to fix this by installing libffi7 using my operating system's package manager.
https://stackoverflow.com/questions/70107055/
How can I change self attention layer numbers and multihead attention head numbers in my model with Pytorch?
I am working on a sarcasm dataset and my model is as below: I first tokenize my input text: PRETRAINED_MODEL_NAME = "roberta-base" from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(PRETRAINED_MODEL_NAME) import torch from torch.utils.data import Dataset, DataLoader MAX_LEN = 100 Then I defined a class for my dataset: class SentimentDataset (Dataset): def __init__(self,dataframe): self.dataframe = dataframe def __len__(self): return len(self.dataframe) def __getitem__(self, idx): df = self.dataframe.iloc[idx] text = [df["comment"]] label = [df["label"]] data_t = tokenizer(text,max_length = MAX_LEN, return_tensors="pt", padding="max_length", truncation=True) label_t = torch.LongTensor(label) return { "input_ids":data_t["input_ids"].squeeze().to(device), "label": label_t.squeeze().to(device), } Then I create an object from my class for the training set and set other parameters: train_dataset = SentimentDataset(train_df) BATCH_SIZE = 32 train_dataloader = DataLoader(train_dataset, batch_size = BATCH_SIZE) from transformers import AutoModelForSequenceClassification, AutoConfig # For loading model structure and pretrained weights: model = AutoModelForSequenceClassification.from_pretrained(PRETRAINED_MODEL_NAME).to(device) import transformers optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, weight_decay=1e-5) Then I use the dataloader to train on my data: train_dataloader = DataLoader(train_dataset, batch_size = BATCH_SIZE) EPOCHS = 5 for epoch in range(EPOCHS): print("\n******************\n epoch=",epoch) i = 0 logits_list = [] labels_list = [] for batch in train_dataloader: i += 1 optimizer.zero_grad() output_model = model(input_ids = batch["input_ids"], labels = batch["label"]) loss = output_model.loss logits = output_model.logits logits_list.append(logits.cpu().detach().numpy()) labels_list.append(batch["label"].cpu().detach().numpy()) loss.backward() optimizer.step() #scheduler.step() if i % 50 ==0 : print("training loss:",loss.item()) #print("validation loss:",loss.item()) logits_list = np.concatenate(logits_list, axis=0) labels_list = np.concatenate(labels_list, axis=0) logits_list = np.argmax(logits_list, axis =1) print(classification_report(labels_list, logits_list)) My question is: how can I change the number of self-attention layers and the number of heads in the multi-head attention in my model?
The short answer is: You can't. You are using a pretrained model: model = AutoModelForSequenceClassification.from_pretrained(PRETRAINED_MODEL_NAME).to(device) You can't easily change the pretrained model. It is possible to change pretrained models, but that is definitely not straightforward. You can download a different pretrained model, or you can train any model you like from scratch (which would probably take too much time and computational resources). The only thing you can easily change is the "depth" of the model - you can discard some of the transformer blocks.
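If you are willing to give up the pretrained weights entirely, the architecture itself is configurable. A sketch using the config route — num_hidden_layers and num_attention_heads are standard RoBERTa config fields, and the resulting model starts from random weights:

from transformers import AutoConfig, AutoModelForSequenceClassification

config = AutoConfig.from_pretrained("roberta-base")
config.num_hidden_layers = 6     # number of transformer (self-attention) layers
config.num_attention_heads = 8   # heads per layer; must divide hidden_size (768)
model = AutoModelForSequenceClassification.from_config(config)  # randomly initialized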
https://stackoverflow.com/questions/70112800/
Failed to install Pytorch Snippets and Librosa in VScode - Apple M1
I anticipate I'm not an expert in informatics. I used to run PyTorch snippets for some deep learning on my old MacBook Pro (2015), but now I have Apple's latest Pro, and I have trouble installing packages in VScode. Two of these packages give me trouble: PyTorch Snippets and Librosa. PyTorch Snippets can be installed from the "extensions" in the VScode menu; that is what I did, and used to do with my old Mac. This time, however, when I import torch within an .ipynb instance I get the error "No module named 'torch'". With Librosa, when I run in the terminal pip3 install librosa at some point I get this error: Using cached idna-3.3-py3-none-any.whl (61 kB) Building wheels for collected packages: numba, llvmlite Building wheel for numba (setup.py) ... error , then a long list of red things, then again: ERROR: Failed building wheel for llvmlite Running setup.py clean for llvmlite Failed to build numba llvmlite Installing collected packages: llvmlite, idna, charset-normalizer, certifi, threadpoolctl, scipy, requests, numba, joblib, appdirs, soundfile, scikit-learn, resampy, pooch, audioread, librosa Running setup.py install for llvmlite ... error ERROR: Command errored out with exit status 1: Have you experienced anything similar? Do you have some hint? Thanks a lot
There is a fix described in https://github.com/numba/llvmlite/issues/693#issuecomment-909501195 arch -arm64 brew install llvm@11 LLVM_CONFIG="/opt/homebrew/Cellar/llvm@11/11.1.0_2/bin/llvm-config" arch -arm64 pip install llvmlite
https://stackoverflow.com/questions/70117868/
why and when to use torch.cuda.Stream()
I found that torch.cuda.Stream() is manually defined in some open source code. self.input_stream = torch.cuda.Stream() self.model_stream = torch.cuda.Stream() self.output_stream = torch.cuda.Stream() The torch docs page says: You normally do not need to create one explicitly: by default, each device uses its own “default” stream. I'm trying to understand why they had to define these manually. From a quick google search, there are lots of examples of how to use cuda.Stream(), but nothing on why/when/best practice to use it.
Streams are sequences of cuda kernels. Operations in different streams may run in parallel. I don't believe they have to use them. They are just making the code more parallel and thus hopefully faster.
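A minimal sketch of the pattern — queue work on two streams, then synchronize before the results are read:

import torch

s1 = torch.cuda.Stream()
s2 = torch.cuda.Stream()
a = torch.randn(1000, 1000, device='cuda')
with torch.cuda.stream(s1):
    b = a @ a                 # kernel queued on s1
with torch.cuda.stream(s2):
    c = a + 1                 # queued on s2; may overlap with s1's work
torch.cuda.synchronize()      # wait for both streams before using b and c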
https://stackoverflow.com/questions/70128833/
How to concatenate a 0-D tensor into a 3-D tensor in PyTorch?
I have the following tensors: X = torch.randn(30,1,2) # [batch_size, dim_1, dim_2] t = torch.Tensor([0]) I am trying to concatenate the t tensor into the X tensor so that the result is a [30,1,3] tensor. However, I have tried a couple of methods, even torch.stack, and I still have not figured out how to do this properly. I tried both of the following and they gave errors. result = torch.cat((X,t), dim = -1) # first try result = torch.stack([X,t], dim = -1) # second try. Is there a way I can concatenate these tensors?
You can't concatenate the two described tensors: the shape of tensor X is [30, 1, 2], which means it has 30 positions in the first dimension, 1 position in the second dimension, and 2 positions in the last dimension, totalling 30*1*2 = 60 elements. A tensor of shape [30,1,3] has 90 elements, meaning you need to add 30 elements to get the desired result. You can do this by changing the code to: >>> X = torch.randn(30,1,2) >>> t = torch.zeros(30,1,1) >>> r = torch.cat((X,t), dim=-1) >>> r.shape torch.Size([30, 1, 3])
https://stackoverflow.com/questions/70131674/
PyTorch Lightning auto_scale_batch_size='power' does not show results
I am very new to deep learning and am converting an existing project into a PyTorch Lightning one, following this tutorial. I want to try the automatic batch size finder, so I added the requested flag to the Trainer: trainer = pl.Trainer(default_root_dir=model_dir, auto_scale_batch_size='power') and also added a batch_size parameter to the model's init method that is then stored with self.batch_size = batch_size: class MyModel(pl.LightningModule): def __init__(self, ..., batch_size): super().__init__() self.batch_size = batch_size In train_dataloader I also use this self.batch_size: train_loader = DataLoader(self.dataset, collate_fn=lambda batch: collate_synthesizer(batch, self.reduction_factor, self.hparams2), batch_size=self.batch_size, num_workers=self.num_workers if platform.system() != "Windows" else 0, shuffle=True, pin_memory=True) But the training process runs without showing anything special about its findings with regard to batch_size. I tried it on my dev laptop (CPU mode) and on Colab (GPU) without seeing anything. Should I wait till the end of the process, or did I miss something? Thank you very much
My bad, I missed something in the video that I later found in the docs: I was calling trainer.fit(model) instead of first calling trainer.tune(model). Now it is working great!
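For anyone landing here, a minimal sketch of the resulting flow (model_dir is the same placeholder as in the question):

trainer = pl.Trainer(default_root_dir=model_dir, auto_scale_batch_size='power')
trainer.tune(model)      # runs the batch size finder and stores the result in model.batch_size
print(model.batch_size)  # the scaled batch size found by the tuner
trainer.fit(model)       # train with it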
https://stackoverflow.com/questions/70135432/
Problem encountered in MMDetection. KeyError: 'mask_detectionDataset is not in the dataset registry'
I tried to train my model with MMDetection; however, an error like "KeyError: 'mask_detectionDataset is not in the dataset registry'" keeps appearing. I've added my dataset to __init__.py in \mmdetection\mmdet\datasets and used @DATASETS.register_module(), but the problem isn't solved. When I try to run __init__.py directly in \mmdetection\mmdet\datasets, it shows "attempted relative import with no known parent package", and I'm wondering why. Here is my code: # -*- coding: utf-8 -*- """ Created on Sat Nov 27 00:55:00 2021 @author: daish """ import mmcv from mmcv import Config from mmdet.apis import set_random_seed import os cfg = Config.fromfile('F:/Project/mmdetection/configs/swin/mask_rcnn_swin-t-p4-w7_fpn_1x_coco.py') # Modify dataset type and path cfg.dataset_type = 'mask_detectionDataset' cfg.data_root = 'F:/Project/dataset/' cfg.data.test.type = 'mask_detectionDataset' cfg.data.test.data_root = 'F:/Project/dataset/' cfg.data.test.ann_file = 'test.txt' cfg.data.test.img_prefix = 'images' cfg.data.train.type = 'mask_detectionDataset' cfg.data.train.data_root = 'F:/Project/dataset/' cfg.data.train.ann_file = 'train.txt' cfg.data.train.img_prefix = 'images' cfg.data.val.type = 'mask_detectionDataset' cfg.data.val.data_root = 'F:/Project/dataset/' cfg.data.val.ann_file = 'val.txt' cfg.data.val.img_prefix = 'images' # modify num classes of the model in box head cfg.model.roi_head.bbox_head.num_classes = 3 cfg.model.roi_head.mask_head.num_classes = 3 # We can still use the pre-trained Mask RCNN model though we do not need to # use the mask branch cfg.load_from = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth' # Set up working dir to save files and logs. cfg.work_dir = './swin/mask_rcnn_swin-t-p4-w7_fpn_1x' # The original learning rate (LR) is set for 8-GPU training. # We divide it by 8 since we only use one GPU. cfg.optimizer.lr = 0.02 / 8 cfg.lr_config.warmup = None cfg.log_config.interval = 10 # Change the evaluation metric since we use customized dataset.
cfg.evaluation.metric = 'mAP' # We can set the evaluation interval to reduce the evaluation times cfg.evaluation.interval = 12 # We can set the checkpoint saving interval to reduce the storage cost cfg.checkpoint_config.interval = 12 # Set seed thus the results are more reproducible cfg.seed = 0 set_random_seed(0, deterministic=False) cfg.gpu_ids = range(1) # We can initialize the logger for training and have a look # at the final config used for training print(f'Config:\n{cfg.pretty_text}') from mmdet.datasets import build_dataset from mmdet.models import build_detector from mmdet.apis import train_detector # Build dataset datasets = [build_dataset(cfg.data.train)] # Build the detector model = build_detector( cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg')) # Add an attribute for visualization convenience model.CLASSES = datasets[0].CLASSES # Create work_dir mmcv.mkdir_or_exist(os.path.abspath(cfg.work_dir)) train_detector(model, datasets, cfg, distributed=False, validate=True) Below is the error: Traceback (most recent call last): File "<string>", line 1, in <module> File "D:\anaconda3\envs\openmmlab\lib\multiprocessing\spawn.py", line 105, in spawn_main exitcode = _main(fd) File "D:\anaconda3\envs\openmmlab\lib\multiprocessing\spawn.py", line 114, in _main prepare(preparation_data) File "D:\anaconda3\envs\openmmlab\lib\multiprocessing\spawn.py", line 225, in prepare _fixup_main_from_path(data['init_main_from_path']) File "D:\anaconda3\envs\openmmlab\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path run_name="__mp_main__") File "D:\anaconda3\envs\openmmlab\lib\runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "D:\anaconda3\envs\openmmlab\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "D:\anaconda3\envs\openmmlab\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "F:\Project\config.py", line 70, in <module> datasets = [build_dataset(cfg.data.train)] File "D:\anaconda3\envs\openmmlab\lib\site-packages\mmdet\datasets\builder.py", line 80, in build_dataset dataset = build_from_cfg(cfg, DATASETS, default_args) File "D:\anaconda3\envs\openmmlab\lib\site-packages\mmcv\utils\registry.py", line 44, in build_from_cfg f'{obj_type} is not in the {registry.name} registry') KeyError: 'mask_detectionDataset is not in the dataset registry'
Maybe add a custom_imports key to your config: custom_imports = dict( imports=['mmdet.datasets.mask_detectionDataset'], allow_failed_imports=False)
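For completeness, a minimal sketch of what the custom dataset module itself usually looks like in mmdet 2.x; the file path, class names and body here are assumptions for illustration, not taken from the question:

# mmdet/datasets/mask_detection.py (hypothetical file)
from .builder import DATASETS
from .custom import CustomDataset

@DATASETS.register_module()
class mask_detectionDataset(CustomDataset):
    # assumed class names for a 3-class mask-detection task
    CLASSES = ('with_mask', 'without_mask', 'mask_worn_incorrectly')

    def load_annotations(self, ann_file):
        # parse ann_file (e.g. train.txt) and return a list of dicts with
        # at least 'filename', 'width', 'height' and an 'ann' entry
        ...

The class must then be imported in mmdet/datasets/__init__.py and listed in __all__, or pulled in via the custom_imports key above, so that the registration decorator actually runs.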
https://stackoverflow.com/questions/70136275/
Import error while launching PyTorch Lightning project on Colab TPU
I followed this guide to launch my PyTorch Lightning project on a Google Colab TPU. So I installed !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl then !pip install pytorch-lightning and then I ran !pip install torch torchvision torchaudio !pip install -r requirements.txt After installing the project requirements, I restarted the runtime as requested and re-ran the cloud-tpu-client install, the pytorch-lightning install, and both commands from above. It ran smoothly. But just after the TPU started with PyTorch version 1.9, I get the following import error: WARNING:root:TPU has started up successfully with version pytorch-1.9 Traceback (most recent call last): File "synthesizer_train.py", line 2, in <module> from synthesizer.train import train File "/content/Real-Time-Voice-Cloning/synthesizer/train.py", line 6, in <module> from synthesizer.models.tacotron import Tacotron File "/content/Real-Time-Voice-Cloning/synthesizer/models/tacotron.py", line 7, in <module> import pytorch_lightning as pl File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py", line 20, in <module> from pytorch_lightning.callbacks import Callback # noqa: E402 File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/callbacks/__init__.py", line 14, in <module> from pytorch_lightning.callbacks.base import Callback File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/callbacks/base.py", line 26, in <module> from pytorch_lightning.utilities.types import STEP_OUTPUT File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py", line 18, in <module> from pytorch_lightning.utilities.apply_func import move_data_to_device # noqa: F401 File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/apply_func.py", line 26, in <module> from pytorch_lightning.utilities.imports import _compare_version, _TORCHTEXT_AVAILABLE File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/imports.py", line 101, in <module> from pytorch_lightning.utilities.xla_device import XLADeviceUtils # noqa: E402 File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/xla_device.py", line 24, in <module> import torch_xla.core.xla_model as xm File "/usr/local/lib/python3.7/dist-packages/torch_xla/__init__.py", line 142, in <module> import _XLAC ImportError: /usr/local/lib/python3.7/dist-packages/_XLAC.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at13_foreach_erf_EN3c108ArrayRefINS_6TensorEEE The Trainer was launched with the flag TPU_cores=8. The model had run on CPU and GPU beforehand (i.e. in another session).
I tried to downgrade PyTorch to 1.9 (the same as the one shown when the TPU is starting) because Colab uses torch 1.10.0+cu111, and a different error appeared: WARNING:root:TPU has started up successfully with version pytorch-1.9 Traceback (most recent call last): File "synthesizer_train.py", line 2, in <module> from synthesizer.train import train File "/content/Real-Time-Voice-Cloning/synthesizer/train.py", line 6, in <module> from synthesizer.models.tacotron import Tacotron File "/content/Real-Time-Voice-Cloning/synthesizer/models/tacotron.py", line 7, in <module> import pytorch_lightning as pl File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py", line 20, in <module> from pytorch_lightning.callbacks import Callback # noqa: E402 File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/callbacks/__init__.py", line 14, in <module> from pytorch_lightning.callbacks.base import Callback File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/callbacks/base.py", line 26, in <module> from pytorch_lightning.utilities.types import STEP_OUTPUT File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py", line 18, in <module> from pytorch_lightning.utilities.apply_func import move_data_to_device # noqa: F401 File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/apply_func.py", line 29, in <module> if _compare_version("torchtext", operator.ge, "0.9.0"): File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/imports.py", line 54, in _compare_version pkg = importlib.import_module(package) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/usr/local/lib/python3.7/dist-packages/torchtext/__init__.py", line 5, in <module> from . import vocab File "/usr/local/lib/python3.7/dist-packages/torchtext/vocab/__init__.py", line 11, in <module> from .vocab_factory import ( File "/usr/local/lib/python3.7/dist-packages/torchtext/vocab/vocab_factory.py", line 4, in <module> from torchtext._torchtext import ( ImportError: /usr/local/lib/python3.7/dist-packages/torchtext/_torchtext.so: undefined symbol: _ZTVN5torch3jit6MethodE Is there anything I can do to train the model on the TPU? Thank you very much
Actually, the same problem has been described elsewhere, and the suggested solution worked for me: downgrade PyTorch to 1.9.0+cu111 (mind the +cu111) after installing torch_xla. Consequently, here are the steps I followed to launch my Lightning project on Google Colab with a TPU: !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl !pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchtext==0.10.0 -f https://download.pytorch.org/whl/cu111/torch_stable.html And then the project's pip installs: !pip install torch torchvision torchaudio pytorch-lightning !pip install -r requirements.txt And it worked, even though after this last step I had to restart the runtime.
https://stackoverflow.com/questions/70136356/
WSL2 Pytorch - RuntimeError: No CUDA GPUs are available with RTX3080
I have been struggling for days to make torch work on WSL2 using an RTX 3080. I installed the CUDA toolkit version 11.3. Running nvcc -V returns this: nvcc -V nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Sun_Mar_21_19:15:46_PDT_2021 Cuda compilation tools, release 11.3, V11.3.58 Build cuda_11.3.r11.3/compiler.29745058_0 nvidia-smi returns this: Mon Nov 29 00:38:26 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 510.00 Driver Version: 510.06 CUDA Version: 11.6 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... On | 00000000:01:00.0 On | N/A | | N/A 52C P5 17W / N/A | 1082MiB / 16384MiB | N/A Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ I verified the installation of the toolkit with BlackScholes: ./BlackScholes [./BlackScholes] - Starting... GPU Device 0: "Ampere" with compute capability 8.6 Initializing data... ...allocating CPU memory for options. ...allocating GPU memory for options. ...generating input data in CPU mem. ...copying input data to GPU mem. Data init done. Executing Black-Scholes GPU kernel (512 iterations)... Options count : 8000000 BlackScholesGPU() time : 0.242822 msec Effective memory bandwidth: 329.459087 GB/s Gigaoptions per second : 32.945909 BlackScholes, Throughput = 32.9459 GOptions/s, Time = 0.00024 s, Size = 8000000 options, NumDevsUsed = 1, Workgroup = 128 Reading back GPU results... Checking the results... ...running CPU calculations. Comparing the results... L1 norm: 1.741792E-07 Max absolute error: 1.192093E-05 Shutting down... ...releasing GPU memory. ...releasing CPU memory. Shutdown done. [BlackScholes] - Test Summary NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled. Test passed But when I try to use torch, it doesn't find any GPU. By the way, I had to install torch==1.10.0+cu113 to use torch with my RTX 3080, as the sm_ architectures in the plain 1.10.0 build are not compatible with the RTX 3080.
Running torch returns this: >>> import torch >>> torch.version <module 'torch.version' from '/home/snihar/miniconda3/envs/tscbasset/lib/python3.7/site-packages/torch/version.py'> >>> torch.version.cuda '11.3' >>> torch.cuda.get_arch_list() [] >>> torch.cuda.device_count() 0 >>> torch.cuda.current_device() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/snihar/miniconda3/envs/tscbasset/lib/python3.7/site-packages/torch/cuda/__init__.py", line 479, in current_device _lazy_init() File "/home/snihar/miniconda3/envs/tscbasset/lib/python3.7/site-packages/torch/cuda/__init__.py", line 214, in _lazy_init torch._C._cuda_init() RuntimeError: No CUDA GPUs are available At last, interestingly, I am completely able to run tensorflow-gpu on the same machine. I installed PyTorch like this: conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch Also, I managed to run PyTorch in a Docker container started from my WSL2 machine with this command: sudo docker run --gpus all -it --rm -v /home/...:/home/... nvcr.io/nvidia/pytorch:21.11-py3. When running PyTorch on the Windows machine I am running WSL from, it works too. Both return ['sm_37', 'sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'compute_37'], which says that the library is compatible with the RTX 3080.
In my case, I solved this issue by linking /usr/lib/wsl/lib/libcuda.so.1 to the libcuda.so in the WSL2 CUDA location; see https://github.com/microsoft/WSL/issues/5663. After a reboot, PyTorch could find the GPU. (I found the warning "/usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link" during the apt-get upgrade command; I'm not sure you can solve it the same way.) Downgrading to PyTorch 1.8.2 LTS can also work around the problem, but the computation speed is extremely low.
https://stackoverflow.com/questions/70148547/
Counting number of occurrences in PyTorch Tensor. (Tensor is too big for Numpy)
Is there any smart way to count the number of occurrences of each value in a very large PyTorch tensor? The tensor size is 11701*300 = 3,510,300 elements, and it may grow or shrink. torch.bincount, torch.unique and torch.unique_consecutive have not been useful so far. bincount returns a different number of elements every time, and unique is not useful either, as it returns only the unique values. This is what I meant when I said it returns a different number of elements every time: with 5 distinct values it may return an 8-element tensor, so how am I supposed to know which elements occur how many times? In my example output, 5 occurs 2 times, but what does the count for 0 mean? How do I read this output? The official documentation has limited content, and no other website explains it; it doesn't make any sense to me.
Actually, the problem is how you read the output. The output of torch.bincount is a tensor of size max(input)+1, which means it covers all bins of size 1 from zero to max(input). Therefore, reading the output tensor from the first element, you see how many 0s, 1s, 2s, ..., max(input)s there are in your non-negative integer array. For example: t1 = torch.randint(0,10, (20,)) print(t1) tensor([2, 5, 7, 3, 1, 2, 7, 8, 8, 0, 5, 6, 4, 4, 4, 6, 3, 0, 6, 6]) In this tensor the max value is 8 (9 happened not to appear), so it gives: print(torch.bincount(t1).size()) print(torch.bincount(t1)) torch.Size([9]) tensor([2, 1, 2, 2, 3, 2, 4, 2, 2]) That means in the tensor t1 there are two 0s, one 1, two 2s, two 3s, three 4s, ..., and two 8s.
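If the positional layout of bincount is still confusing, a short sketch using torch.unique with return_counts=True pairs each value with its count explicitly, and it also skips empty bins when the values are sparse:

vals, counts = torch.unique(t1, return_counts=True)
print(vals)    # tensor([0, 1, 2, 3, 4, 5, 6, 7, 8]) for the t1 above
print(counts)  # tensor([2, 1, 2, 2, 3, 2, 4, 2, 2])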
https://stackoverflow.com/questions/70156074/
How to clean garbage from CUDA in Pytorch?
I trained my neural nets and realized that even after torch.cuda.empty_cache() and gc.collect(), my CUDA device memory is still filled. In Colab notebooks we can see the current variables in memory, but even if I delete every variable and collect the garbage, the GPU memory stays busy. I heard it's because the Python garbage collector can't work on the CUDA device. Please explain: what should I do?
You can do this: import gc import torch gc.collect() torch.cuda.empty_cache()
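One caveat worth adding, as a sketch: empty_cache() only releases cached blocks that no live tensor still references, so you first have to drop every Python reference to the GPU tensors (the variable names below are hypothetical):

import gc
import torch

del model, optimizer, outputs          # hypothetical names still holding GPU tensors
gc.collect()                           # break reference cycles so the tensors are freed
torch.cuda.empty_cache()               # return the now-unused cached blocks to the driver
print(torch.cuda.memory_allocated())   # bytes still held by live tensors; should drop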
https://stackoverflow.com/questions/70174653/
How do you prevent the tensorboard logger in pytorch lightning from logging the current epoch?
When creating a new tensorboard logger in pytorch lightning, the two things that are logged by default are the current epoch and the hp_metric. I was able to disable the hp_metric logging by setting default_hp_metric=False but I can't find anything to disable the logging of the epoch. I've searched in the lightning.py, trainer.py and tensorboard.py files which have the code for the module, the trainer and the tensorboard logger and couldn't find a logging call for epoch anywhere. This behavior occurs even taking the barebones example from the pytorch lightning tutorial. Is there a way to disable this logging of epoch to prevent clutter in the tensorboard interface?
In short: you can disable the automatic writing of the epoch variable by overriding the TensorBoard logger. from pytorch_lightning import loggers from pytorch_lightning.utilities import rank_zero_only class TBLogger(loggers.TensorBoardLogger): @rank_zero_only def log_metrics(self, metrics, step): metrics.pop('epoch', None) return super().log_metrics(metrics, step) Full version: PyTorch Lightning automatically adds an epoch vs. global_step graph to each logger (you can see the description in here). There is no option to turn this behavior off, because it is hard-coded without any condition, like below (see the full source code in here): if step is None: # added metrics for convenience scalar_metrics.setdefault("epoch", self.trainer.current_epoch) step = self.trainer.global_step # log actual metrics self.trainer.logger.agg_and_log_metrics(scalar_metrics, step=step) To disable this, you should pop the epoch variable from the metrics dictionary in log_metrics(metrics, step), which is called from agg_and_log_metrics(scalar_metrics, step=step). The code is shown above; you can see the full, long version of the snippet in here.
https://stackoverflow.com/questions/70183125/
How can I get labels from data generator?
I'm trying to generate data for my conditional VAE and I need labels, but after generating the data, when I want to get the labels I get this error: def gen_batch(BATCH_SIZE): labels = torch.randint(0, 8, (BATCH_SIZE,)).long().to(device) theta = (np.pi/4) * labels.float().to(device) centers = torch.stack((torch.cos(theta), torch.sin(theta)), dim = -1) noise = torch.randn_like(centers) * 0.1 return centers + noise, labels def data_gen(BATCH_SIZE): #8 gaussians while 1: yield gen_batch(BATCH_SIZE) train_loader,train_labels = data_gen(args.batch_size) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-17-3bc469737639> in <module> ----> 1 train_loader,train_labels = data_gen(args.batch_size) ValueError: too many values to unpack (expected 2) How can I fix that?
The exception was raised, because the data_gen() is a generator function, and such a function returns only a single object called a generator object, which cannot be unpacked like you tried to do. A generator object is an iterator that generates and returns data on demand and for that iterators support a special method called __next__, that returns only a single element at a time. However, you do not usually call this method directly. You either pass an iterator to the next built-in function, or you use it, for example, in a for loop that will call next() behind the scenes. Also, the important thing to remember is that after you have exhausted an iterator, you cannot use it again, and if you want to do that, you simply need to create a new one. In your case, though, you cannot exhaust your iterator, because you have an infinite while loop in the data_gen function. Here's an example: def data_gen(BATCH_SIZE): #8 gaussians while 1: yield gen_batch(BATCH_SIZE) gen_obj = data_gen(args.batch_size) train_loader, train_labels = next(gen_obj) # or: gen_obj = data_gen(args.batch_size) for train_loader, train_labels in gen_obj: # this is only an example and the for loop # will never end, because in # your case the generator is infinite pass
https://stackoverflow.com/questions/70183245/
How to easily convert a PyTorch dataloader to tf.Dataset?
How can we convert a PyTorch DataLoader to a tf.data.Dataset? I found this snippet: def convert_pytorch_dataloader_to_tf_dataset(dataloader, batch_size, shuffle=True): dataset = tf.data.Dataset.from_generator( lambda: dataloader, output_types=(tf.float32, tf.float32), output_shapes=(tf.TensorShape([256, 512]), tf.TensorShape([2,])) ) if shuffle: dataset = dataset.shuffle(buffer_size=len(dataloader.dataset)) dataset = dataset.batch(batch_size) return dataset But it doesn't work at all. Is there a built-in option to export dataloaders to tf.Datasets easily? I have a very complex dataloader, so a simple solution should ensure things are bug-free :)
For your data in h5py format, you can use the script below. name_x is the features' name in your h5py file and name_y is your labels' name. This method is memory efficient and you can feed the data batch by batch. class Generator(object): def __init__(self,open_directory,batch_size,name_x,name_y): self.open_directory = open_directory data_f = h5py.File(open_directory, "r") self.x = data_f[name_x] self.y = data_f[name_y] if len(self.x.shape) == 4: self.shape_x = (None, self.x.shape[1], self.x.shape[2], self.x.shape[3]) if len(self.x.shape) == 3: self.shape_x = (None, self.x.shape[1], self.x.shape[2]) if len(self.y.shape) == 4: self.shape_y = (None, self.y.shape[1], self.y.shape[2], self.y.shape[3]) if len(self.y.shape) == 3: self.shape_y = (None, self.y.shape[1], self.y.shape[2]) self.num_samples = self.x.shape[0] self.batch_size = batch_size self.epoch_size = self.num_samples//self.batch_size+1*(self.num_samples % self.batch_size != 0) self.pointer = 0 self.sample_nums = np.arange(0, self.num_samples) np.random.shuffle(self.sample_nums) def data_generator(self): for batch_num in range(self.epoch_size): x = [] y = [] for elem_num in range(self.batch_size): sample_num = self.sample_nums[self.pointer] x += [self.x[sample_num]] y += [self.y[sample_num]] self.pointer += 1 if self.pointer == self.num_samples: self.pointer = 0 np.random.shuffle(self.sample_nums) break x = np.array(x, dtype=np.float32) y = np.array(y, dtype=np.float32) yield x, y def get_dataset(self): dataset = tf.data.Dataset.from_generator(self.data_generator, output_types=(tf.float32, tf.float32), output_shapes=(tf.TensorShape(self.shape_x), tf.TensorShape(self.shape_y))) dataset = dataset.prefetch(1) return dataset
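A sketch of how this generator would then be consumed; the file name and dataset keys here are hypothetical, not taken from the answer:

gen = Generator("data.h5", batch_size=32, name_x="features", name_y="labels")
dataset = gen.get_dataset()
for x_batch, y_batch in dataset.take(1):   # pull one batch to sanity-check shapes
    print(x_batch.shape, y_batch.shape)
# or feed it straight into Keras: model.fit(dataset, epochs=10)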
https://stackoverflow.com/questions/70189513/
PyTorch PPO implementation for Cartpole-v0 getting stuck in local optima
I have implemented PPO for the Cartpole-v0 environment. However, it does not converge in certain iterations of the game. Sometimes it gets stuck in local optima. I have implemented the algorithm using the TD-0 advantage, i.e. A(s_t) = R(t+1) + \gamma V(S_{t+1}) - V(S_t) Here is my code: def running_average(x, n): N = n kernel = np.ones(N) conv_len = x.shape[0]-N y = np.zeros(conv_len) for i in range(conv_len): y[i] = kernel @ x[i:i+N] # matrix multiplication operator (np.matmul) y[i] /= N return y class ActorNetwork(nn.Module): def __init__(self, state_dim, n_actions, learning_rate=0.0003, epsilon_clipping=0.3, update_epochs=10): super().__init__() self.n_actions = n_actions self.model = nn.Sequential( nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, n_actions), nn.Softmax(dim=-1) ).float() self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate) self.epsilon_clipping = epsilon_clipping self.update_epochs = update_epochs def forward(self, X): return self.model(X) def predict(self, state): if state.ndim < 2: action_probs = self.model(torch.FloatTensor(state).unsqueeze(0).float()) else: action_probs = self.model(torch.FloatTensor(state)) return action_probs.squeeze(0).data.numpy() def update(self, states, actions, deltas, old_prob): batch_size = len(states) state_batch = torch.Tensor(states) action_batch = torch.Tensor(actions) delta_batch = torch.Tensor(deltas) old_prob_batch = torch.Tensor(old_prob) for k in range(self.update_epochs): pred_batch = self.model(state_batch) prob_batch = pred_batch.gather(dim=1, index=action_batch.long().view(-1, 1)).squeeze() ratio = torch.exp(torch.log(prob_batch) - torch.log(old_prob_batch)) clipped = torch.clamp(ratio, 1 - self.epsilon_clipping, 1 + self.epsilon_clipping) * delta_batch loss_r = -torch.min(ratio*delta_batch, clipped) loss = torch.mean(loss_r) self.optimizer.zero_grad() loss.backward() self.optimizer.step() class CriticNetwork(nn.Module): def __init__(self, state_dim, learning_rate=0.001): super().__init__() self.model = nn.Sequential( nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1), ).float() self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate) def forward(self, X): return self.model(X) def predict(self, state): if state.ndim < 2: values = self.model(torch.FloatTensor(state).unsqueeze(0).float()) else: values = self.model(torch.FloatTensor(state)) return values.data.numpy() def update(self, states, targets): state_batch = torch.Tensor(states) target_batch = torch.Tensor(targets) pred_batch = self.model(state_batch) loss = torch.nn.functional.mse_loss(pred_batch, target_batch.unsqueeze(1)) self.optimizer.zero_grad() loss.backward() self.optimizer.step() def train_ppo_agent(env, episode_length, max_episodes, gamma, visualize_step, learning_rate_actor=0.0003, learning_rate_critic=0.001, epsilon_clipping=0.2, actor_update_epochs=10): model_actor = ActorNetwork(env.observation_space.shape[0], env.action_space.n, learning_rate=learning_rate_actor, epsilon_clipping=epsilon_clipping, update_epochs=actor_update_epochs) model_critic = CriticNetwork(env.observation_space.shape[0], learning_rate=learning_rate_critic) EPISODE_LENGTH = episode_length MAX_EPISODES = max_episodes GAMMA = gamma VISUALIZE_STEP = max(1, visualize_step) score = [] for episode in range(MAX_EPISODES): curr_state = env.reset() done = False all_episode_t = [] score_episode = 0 for t in range(EPISODE_LENGTH): act_prob = model_actor.predict(curr_state) action =
np.random.choice(np.array(list(range(env.action_space.n))), p=act_prob) value = model_critic.predict(curr_state) prev_state = curr_state curr_state, reward, done, info = env.step(action) score_episode += reward e_t = {'state': prev_state, 'action':action, 'action_prob':act_prob[action],'reward': reward, 'value': value} all_episode_t.append(e_t) if done: break score.append(score_episode) episode_values = [all_episode_t[t]['value'] for t in range(len(all_episode_t))] next_state_estimates = [episode_values[i].item() for i in range(1, len(episode_values))] next_state_estimates.append(0) boostrap_estimate = [] for t in range(len(all_episode_t)): G = all_episode_t[t]['reward'] + GAMMA * next_state_estimates[t] boostrap_estimate.append(G) episode_target = np.array(boostrap_estimate) episode_values = np.array(episode_values) # compute the advantage for each state in the episode: R_{t+1} + \gamma * V(S_{t+1}) - V_{t} adv_batch = episode_target-episode_values state_batch = np.array([all_episode_t[t]['state'] for t in range(len(all_episode_t))]) action_batch = np.array([all_episode_t[t]['action'] for t in range(len(all_episode_t))]) old_actor_prob = np.array([all_episode_t[t]['action_prob'] for t in range(len(all_episode_t))]) model_actor.update(state_batch, action_batch, adv_batch, old_actor_prob) model_critic.update(state_batch, episode_target) # print the status after every VISUALIZE_STEP episodes if episode % VISUALIZE_STEP == 0 and episode > 0: print('Episode {}\tAverage Score: {:.2f}'.format(episode, np.mean(score[-VISUALIZE_STEP:-1]))) # domain knowledge applied to stop training: if the average score across last 100 episodes is greater than 195, game is solved if np.mean(score[-100:-1]) > 195: break # Training plot: Episodic reward over Training Episodes score = np.array(score) avg_score = running_average(score, visualize_step) plt.figure(figsize=(15, 7)) plt.ylabel("Episodic Reward", fontsize=12) plt.xlabel("Training Episodes", fontsize=12) plt.plot(score, color='gray', linewidth=1) plt.plot(avg_score, color='blue', linewidth=3) plt.scatter(np.arange(score.shape[0]), score, color='green', linewidth=0.3) plt.savefig("temp/cartpole_ppo_training_plot.pdf") # return the trained models return model_actor, model_critic def main(): env = gym.make('CartPole-v0') episode_length = 300 n_episodes = 5000 gamma = 0.99 vis_steps = 100 learning_rate_actor = 0.0003 actor_update_epochs = 10 epsilon_clipping = 0.2 learning_rate_critic = 0.001 # train the PPO agent model_actor, model_critic = train_ppo_agent(env, episode_length, n_episodes, gamma, vis_steps, learning_rate_actor=learning_rate_actor, learning_rate_critic=learning_rate_critic, epsilon_clipping=epsilon_clipping, actor_update_epochs=actor_update_epochs) Am I missing something, or is this kind of behaviour expected if one uses simple TD-0 advantages for PPO, given the nature of the Cartpole environment?
If you remove the "-" (the negative sign) in the line: loss_r = -torch.min(ratio*delta_batch, clipped) the score will then start to steadily increase over time. Before this fix you had a negative loss which would increase over time. This is not how loss should work for neural networks: gradient descent works to minimize the loss, so you want a positive loss that the optimizer can minimize. I hope my answer is somewhat clear; sorry I cannot go into deeper detail. My run can be seen in the attached image.
https://stackoverflow.com/questions/70191012/
NotADirectoryError: [Errno 20] Not a directory when trying to load in Zip file on Google Colab
I'm trying to experiment on Google Colab with CNNs and GANs using the CelebA dataset. I'm trying to load in the data in PyTorch using ImageFolder like so: # Loading in data dataroot = "/content/drive/MyDrive/Colab_Notebooks/celeba/img_align_celeba.zip" dataset = dset.ImageFolder(root = dataroot, transform = transforms.Compose([ transforms.Resize(image_size), transforms.CenterCrop(image_size), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), ])) Bear in mind, I've fully mounted my drive and have the celeba.zip file on my G-Drive. However, when trying to execute the above, I get the following error: --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-13-ebc6bb8ab4bd> in <module>() 10 transforms.CenterCrop(image_size), 11 transforms.ToTensor(), ---> 12 transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), 13 ])) 3 frames /usr/local/lib/python3.7/dist-packages/torchvision/datasets/folder.py in find_classes(directory) 38 See :class:`DatasetFolder` for details. 39 """ ---> 40 classes = sorted(entry.name for entry in os.scandir(directory) if entry.is_dir()) 41 if not classes: 42 raise FileNotFoundError(f"Couldn't find any class folder in {directory}.") NotADirectoryError: [Errno 20] Not a directory: '/content/drive/MyDrive/Colab_Notebooks/celeba/img_align_celeba.zip' Unless I'm missing something, my dataroot variable is the correct directory, as it's the correct path to the zip containing the images. I've attempted to unzip the files by doing: !cp "{dataroot}" . !yes|unzip -q img_align_celeba.zip but I still get the same error. Am I missing something?
Zip files aren't folders, so unzip first. Note that in Colab you need the %cd magic rather than !cd, because a ! command runs in a subshell and the directory change does not persist: %cd /content/drive/MyDrive/Colab_Notebooks/celeba Then unzip: !yes|unzip -q img_align_celeba.zip -d img_align_celeba Then this should work: dataroot = '/content/drive/MyDrive/Colab_Notebooks/celeba/img_align_celeba' dataset = ImageFolder(root = dataroot, transform=transform)
https://stackoverflow.com/questions/70228305/
Validation losses increasing after a few epochs
I'm building a small CNN model to predict plant crop disease with the Plant Village Dataset. It consists of 39 classes of different species, with and without diseases. CNN model: class CropDetectCNN(nn.Module): # initialize the class and the parameters def __init__(self): super(CropDetectCNN, self).__init__() # convolutional layer 1 & max pool layer 1 self.layer1 = nn.Sequential( nn.Conv2d(3, 16, kernel_size=3), nn.MaxPool2d(kernel_size=2)) # convolutional layer 2 & max pool layer 2 self.layer2 = nn.Sequential( nn.Conv2d(16, 32, kernel_size=3, padding=1, stride=2), nn.MaxPool2d(kernel_size=2)) # Fully connected layer self.fc = nn.Linear(32*28*28, 39) # Feed forward the network def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = out.reshape(out.size(0), -1) out = self.fc(out) return out model = CropDetectCNN() Training: criterion = nn.CrossEntropyLoss() # this includes softmax + cross entropy loss optimizer = torch.optim.Adam(model.parameters()) def batch_gd(model, criterion, train_loader, validation_loader, epochs): train_losses = np.zeros(epochs) test_losses = np.zeros(epochs) validation_losses = np.zeros(epochs) for e in range(epochs): t0 = datetime.now() train_loss = [] model.train() for inputs, targets in train_loader: inputs, targets = inputs.to(device), targets.to(device) optimizer.zero_grad() output = model(inputs) loss = criterion(output, targets) train_loss.append(loss.item()) # torch to numpy world loss.backward() optimizer.step() train_loss = np.mean(train_loss) validation_loss = [] for inputs, targets in validation_loader: model.eval() inputs, targets = inputs.to(device), targets.to(device) output = model(inputs) loss = criterion(output, targets) validation_loss.append(loss.item()) # torch to numpy world validation_loss = np.mean(validation_loss) train_losses[e] = train_loss validation_losses[e] = validation_loss dt = datetime.now() - t0 print( f"Epoch : {e+1}/{epochs} Train_loss: {train_loss:.3f} Validation_loss: {validation_loss:.3f} Duration: {dt}" ) return train_losses, validation_losses # Running the function train_losses, validation_losses = batch_gd( model, criterion, train_loader, validation_loader, 5 ) # And these are the results: Epoch : 1/5 Train_loss: 1.164 Validation_loss: 0.861 Duration: 0:10:59.968168 Epoch : 2/5 Train_loss: 0.515 Validation_loss: 0.816 Duration: 0:10:49.199842 Epoch : 3/5 Train_loss: 0.241 Validation_loss: 1.007 Duration: 0:09:56.334155 Epoch : 4/5 Train_loss: 0.156 Validation_loss: 1.147 Duration: 0:10:12.625819 Epoch : 5/5 Train_loss: 0.135 Validation_loss: 1.603 Duration: 0:09:56.746308 Isn't the validation loss supposed to decrease with epochs? So why is it first decreasing and then increasing? How should I set the number of epochs, and why? Any help is really appreciated!
You are facing the phenomenon of "overfitting" when your validation loss goes up after decreasing. You should stop training at that point and try to use some tricks to avoid overfitting. Getting different predictions might happen when your gradients keep updating during inference, so try to explicitly stop them from updating with torch.no_grad().
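As a minimal sketch of both suggestions combined (early stopping plus gradient-free validation), reusing the model, criterion, loaders and device from the question; the patience value is an arbitrary choice:

best_val, patience, bad_epochs = float('inf'), 3, 0
for e in range(epochs):
    model.train()
    # ... training loop exactly as in the question ...
    model.eval()
    with torch.no_grad():   # no gradient tracking during validation
        validation_loss = np.mean([criterion(model(x.to(device)), y.to(device)).item()
                                   for x, y in validation_loader])
    if validation_loss < best_val:
        best_val, bad_epochs = validation_loss, 0
        torch.save(model.state_dict(), 'best_model.pt')   # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # validation stopped improving: stop training
            break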
https://stackoverflow.com/questions/70232816/
How can I define a mask function based on the values of a list in Pytorch
I want to mask a tensor based on its values. In the following function, if I pass a range (the second branch) it works, but I want to have a list of various values, prompt_ids (3, 8, 9, 30), and that doesn't work; it throws an error: RuntimeError: Boolean value of Tensor with more than one value is ambiguous The function: def get_prompt_token_fn(self): if self.prompt_ids: return lambda x: x in self.prompt_ids else: return lambda x: (x>=self.id_offset)&(x<self.id_offset+self.length) What's the problem and how can I resolve it?
In PyTorch 1.10 there is an isin function that returns a boolean array based on the condition that elements of the first array are in the second array. For versions lower than that, you can implement it as follows: def isin(ar1, ar2): return (ar1[..., None] == ar2).any(-1)
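A short usage sketch against the function above; the tensors here are made-up examples:

prompt_ids = torch.tensor([3, 8, 9, 30])
x = torch.arange(12)
mask = isin(x, prompt_ids)          # fallback implementation from above
# mask = torch.isin(x, prompt_ids)  # equivalent built-in on PyTorch >= 1.10
# so the method in the question could return, assuming prompt_ids is a tensor:
# lambda x: torch.isin(x, self.prompt_ids)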
https://stackoverflow.com/questions/70234204/
Simplify pytorch einsum
Consider the following PyTorch snippet: X = torch.einsum("rij, sij -> rs", A, A) Y = torch.einsum("rij, sij -> rs", B, B) Z = torch.einsum("rij, sij -> rs", C, C) torch.einsum("ij, ij, ij -> ", X, Y, Z) which performs the summation $\sum_{r,s} X_{rs} Y_{rs} Z_{rs}$, where $X_{rs} = \sum_{i,j} A_{rij} A_{sij}$ and similarly for Y and Z. Is it possible to formulate this in a more succinct (that is, more vectorised/optimal) way? (e.g. using the fact that X, Y, Z are symmetric matrices)
You can make it more succinct, but I don't see much room for actual performance optimisation: X = torch.einsum("rij, sij", A, A) Y = torch.einsum("rij, sij", B, B) Z = torch.einsum("rij, sij", C, C) torch.einsum("ij, ij, ij", X, Y, Z) (In einsum's implicit mode, omitting the "->" output sums over every index that appears more than once and keeps the rest in alphabetical order, so these are exactly equivalent to the explicit "-> rs" and "-> " versions.)
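For what it's worth, the whole computation also collapses into a single einsum call. This is mathematically equivalent as a sketch, but not necessarily faster, since the contraction order is then left to the backend:

result = torch.einsum("rij,sij,rkl,skl,rmn,smn->", A, A, B, B, C, C)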
https://stackoverflow.com/questions/70237535/