Pytorch NN and communication between classes
|
I am new to Python and PyTorch, and I have a problem understanding how it works.
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
class Net(nn.Module):
    def __init__(self):
        ..

    def forward(self, x):
        ..
        return x
net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
So, this is the code, and I drew what I understand from it in the picture.
I have some questions:
A) Why can't I use nn.CrossEntropyLoss directly in the code instead of assigning it to criterion first? What difference does it make to assign it to a variable? I get this error: bool value of Tensor with more than one value is ambiguous
B) When the class Net receives nn (I assumed that when 'as' is used, an object is created), why can Net simply use backward afterward? backward is supposed to be part of nn, not Net. Could you please make sense of it for me?
C) While optim is a different object, how can the parameters, which were optimized by optim, affect nn? I don't understand how they pass variables and update each other.
|
A) Assigning it to a variable in one spot makes it easier to change the loss function in one location, as opposed to having to type nn.MSELoss in many places as the code grows in size and complexity. Basically, you are less likely to make errors.
As for the bool error, one would need more information to answer: on what line does it occur, what are the inputs, etc. There is too little information to help there.
B) Net(nn.Module) means Net inherits from nn.Module, which provides backward support (via autograd) for all the operations you add to the class. See the docs for more info.
C) "net" is an object. net.parameters() is an iterator that iterates over all of the parameters in the net object. So they are passed by reference, as opposed to passing the parameters by value.
|
https://stackoverflow.com/questions/54063220/
|
How to customize number of multiple hidden layer units in pytorch LSTM?
|
In pytorch LSTM, RNN or GRU models, there is a parameter called "num_layers", which controls the number of hidden layers in an LSTM. I wonder why, since there are multiple layers in an LSTM, the parameter "hidden_size" is only one number instead of a list containing the number of hidden states for each layer, like [10, 20, 30].
I came across this when I worked on a regression project, in which I feed sequence data of shape (seq_len, batch, feature) to an LSTM, and I want to get the scalar output of every time step.
A helpful link to understand the pytorch LSTM framework, here.
I'd really appreciate it if anyone can answer this.
|
Seems that I've found a solution to this, which is to use LSTMCell instead. Helpful links: [1], [2]. But is there an easier way?
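One arguably easier alternative to LSTMCell (a sketch, with hypothetical sizes: input 9, hidden sizes [10, 20, 30]) is to stack several single-layer nn.LSTM modules, each with its own hidden_size, and feed each layer's output sequence into the next:
import torch
import torch.nn as nn

class StackedLSTM(nn.Module):
    def __init__(self, sizes=(9, 10, 20, 30)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)
        )

    def forward(self, x):
        for lstm in self.layers:
            x, _ = lstm(x)  # pass the full output sequence to the next layer
        return x

x = torch.randn(5, 2, 9)  # (seq_len, batch, feature)
print(StackedLSTM()(x).shape)  # torch.Size([5, 2, 30])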
|
https://stackoverflow.com/questions/54075230/
|
How to set the environment default python in anaconda?
|
It is driving me crazy. In Anaconda I created the environment with the default interpreter python3.4. Next I installed pytorch 0.4.1:
conda install pytorch=0.4.1 cuda80 -c pytorch
After this I found that pytorch was installed for python3.6!
And the environment's default interpreter was changed from python3.4 to python3.6.
I am very confused. What happened? How should I fix it back and change the default python to python3.4? I hope someone can help me.
The commands I typed in are as follows:
conda create -n pointgen python=3.4 ipykernel
source activate pointgen
conda install pytorch=0.4.1 cuda80 -c pytorch
That's all. What Novak said is right; the remaining question is how I could manually change the python version from 3.6 back to 3.4. Is there any config file I can edit?
|
As you can see here, there is no version of pytorch for python3.4... The default build of pytorch is for python3.6, and that is the version you installed. In the process, anaconda prompts you that it will have to upgrade/downgrade some package versions, and there is probably a line in which it says it will upgrade python to 3.6.
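As for manually pinning the interpreter back, the usual command is the one below, run inside the activated environment; but note that, per the above, conda would then have to remove pytorch 0.4.1 again, since no python3.4 build of it exists:
conda install python=3.4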
|
https://stackoverflow.com/questions/54075383/
|
Efficient PyTorch DataLoader collate_fn function for inputs of various dimensions
|
I'm having trouble writing a custom collate_fn function for the PyTorch DataLoader class. I need the custom function because my inputs have different dimensions.
I'm currently trying to write the baseline implementation of the Stanford MURA paper. The dataset has a set of labeled studies. A study may contain more than one image. I created a custom Dataset class that stacks these multiple images using torch.stack.
The stacked tensor is then provided as input to the model, and the list of outputs is averaged to obtain a single output. This implementation works fine with DataLoader when batch_size=1. However, when I try to set the batch_size to 8, as is the case in the original paper, the DataLoader fails, since it uses torch.stack to stack the batch and the inputs in my batch have variable dimensions (each study can have a different number of images).
In order to fix this, I tried to implement my custom collate_fn function.
def collate_fn(batch):
    imgs = [item['images'] for item in batch]
    targets = [item['label'] for item in batch]
    targets = torch.LongTensor(targets)
    return imgs, targets
Then in my training epoch loop, I loop through each batch like this:
for image, label in zip(*batch):
    label = label.type(torch.FloatTensor)
    # wrap them in Variable
    image = Variable(image).cuda()
    label = Variable(label).cuda()
    # forward
    output = model(image)
    output = torch.mean(output)
    loss = criterion(output, label, phase)
However, this does not give me any improved timings on the epoch and still takes as long as it did with a batch size of only 1. I've also tried setting the batch size to 32 and that does not improve the timings either.
Am I doing something wrong?
Is there a better approach to this?
|
Very interesting problem! If I understand you correctly (and also checking the abstract of the paper), you have 40,561 images from 14,863 studies, where each study is manually labeled by radiologists as either normal or abnormal.
I believe the reason you ran into the issue is this: say, for example, you created a stack for
study A - 12 images
study B - 13 images
study C - 7 images
study D - 1 image, etc.
And you try to use a batch size of 8 during training which would fail when it gets to study D.
Therefore, is there a reason why you want to average the list of outputs in a study to fit a single label? If not, I would simply collect all 40,561 images and assign the same label to all images from the same study (such that the list of outputs for study A is compared with a list of 12 labels).
This way, with a single dataloader, you can shuffle across studies (if desired) and use the desired batch size during training; a sketch of this flattening is below.
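A minimal sketch of that flattening idea (assuming a hypothetical list studies, where each study is a dict with 'images' and 'label'):
import torch
from torch.utils.data import Dataset

class FlatImageDataset(Dataset):
    def __init__(self, studies):
        # expand each study into (image, study_label) pairs
        self.samples = [(img, s['label']) for s in studies for img in s['images']]

    def __getitem__(self, index):
        image, label = self.samples[index]
        return image, torch.tensor(label)

    def __len__(self):
        return len(self.samples)

Every sample now has a fixed shape, so the default collate function works with any batch size.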
I see this question has been around for a while, I hope it helps someone in the future :)
|
https://stackoverflow.com/questions/54083349/
|
How to train a L2-SVM classifier on top of a flattened vector of representations as per DCGAN paper
|
In the original DCGAN paper, the GAN is partly evaluated by being used as a feature extractor to classify CIFAR-10, after having been trained on Imagenet.
From the paper:
To evaluate the quality of the representations learned by DCGANs for supervised tasks,
we train on Imagenet-1k and then use the discriminator’s convolutional features from all layers,
maxpooling each layers representation to produce a 4 × 4 spatial grid. These features are then
flattened and concatenated to form a 28672 dimensional vector and a regularized linear L2-SVM
classifier is trained on top of them.
I have tried to replicate this using PyTorch to train the official PyTorch DCGAN and then use scikit-learn to classify using their linear SVC, but find the wording of the paper confusing and am not sure where to go from here. I've been able to maxpool each layer and then concatenate them, but am stumped on how to proceed with the classification of CIFAR-10.
In e.g. sklearn, you use model.fit(x,y) to fit the model according to the given training data, and then model.predict([X]) to predict the class labels for the samples in X. In model.fit(x,y), x is the (2D) features (e.g. images) and y is the labels. But it feels like the above quote is saying to make this 28672 dimensional vector the x. But that's a 1D vector, and they use it to classify CIFAR-10, which has 50k images, and 50000 > 28672. Am I missing something obvious?
Do I use e.g. model.fit with x being the CIFAR-10 images (using e.g. torchvision.datasets.cifar10) (although how to make 50k Tensors of RGB images a 2D array is another story) and y being their labels, and then somehow predict using the 28672 dimensional vector?
Apologies if this is super obvious; unfortunately that’s all they say about it in the paper, and no one seems to have reproduced it (at least on GitHub etc.). Any help would be greatly appreciated!
|
The DCGAN discriminator gives you a 28672-dimensional feature vector for each image. Hence the shape of the extracted features would be (50000, 28672) for the complete CIFAR-10 training set.
You take this as the x input to your sklearn SVM, which, as you mentioned, expects 2D data.
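A hedged sketch of that classification step (assuming hypothetical arrays features of shape (50000, 28672) extracted from the CIFAR-10 training images, labels of shape (50000,), and test_features of shape (10000, 28672)):
from sklearn.svm import LinearSVC

svm = LinearSVC(C=1.0)                    # regularized linear L2-SVM
svm.fit(features, labels)                 # x: (n_samples, n_features), y: (n_samples,)
predictions = svm.predict(test_features)  # class labels for the test images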
|
https://stackoverflow.com/questions/54091864/
|
Image similarity using Tensorflow or PyTorch
|
I want to compare two images for similarity. Since my purpose is to match a given image against a massive collection of images, I want to run the comparisons on GPU.
I came across the tf.image.ssim and tf.image.psnr functions, but I am unable to find any working examples. Solutions in PyTorch are also appreciated. Since I don't have a good understanding of CUDA and the C language, I am hesitant to try kernels in PyCUDA.
Will it be helpful in terms of processing if I read the entire image collection and store as Tensorflow Records for future processing?
Any guidance or solution, greatly appreciated. Thank you.
Edit: I am matching images of the same size only. I don't want to do a mere histogram match; I want an SSIM or PSNR implementation for image similarity. So, I am assuming the matches would be similar in color, content, etc.
|
There is no implementation of PSNR or SSIM in PyTorch. You can either implement them yourself or use a third-party package, like piqa which I have developed.
Assuming you already have torch and torchvision installed, you can get it with
pip install piqa
Then, for the image comparison:
import torch
from torchvision import transforms
from PIL import Image
im1 = Image.open('path/to/im1.png')
im2 = Image.open('path/to/im2.png')
transform = transforms.ToTensor()
x = transform(im1).unsqueeze(0).cuda() # .cuda() for GPU
y = transform(im2).unsqueeze(0).cuda()
from piqa import PSNR, SSIM
psnr = PSNR()
ssim = SSIM().cuda()
print('PSNR:', psnr(x, y))
print('SSIM:', ssim(x, y))
|
https://stackoverflow.com/questions/54091984/
|
ModuleNotFoundError: No module named 'torch.utils.serialization'
|
When I run a project that uses PyTorch, I get this error:
Traceback (most recent call last):
File "train_drnet.py", line 10, in <module>
import utils
File "/home/muse/drnet-py/utils.py", line 18, in <module>
from data.kth import KTH
File "/home/muse/drnet-py/data/kth.py", line 7, in <module>
from torch.utils.serialization import load_lua
ModuleNotFoundError: No module named 'torch.utils.serialization'
How can I solve this, please?
|
I think it was removed from PyTorch about a year ago; you can try torchfile instead: https://github.com/bshillingford/python-torchfile
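A sketch of how that package can replace load_lua (assuming a hypothetical .t7 file; install with pip install torchfile):
import torchfile

data = torchfile.load('some_file.t7')  # returns numpy arrays / dict-like objects rather than torch tensors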
|
https://stackoverflow.com/questions/54107156/
|
Having issues with neural network training. Loss not decreasing
|
I'm largely following this project but am doing a pixel-wise classification. I have 8 classes and 9 band imagery. My images are gridded into 9x128x128. My loss is not reducing and training accuracy doesn't fluctuate much. I'm guessing I have something wrong with the model. Any advice is much appreciated! I get at least 91% accuracy using random forest.
My classes are extremely unbalanced so I attempted to adjust training weights based on the proportion of classes within the training data.
# get model
learning_rate = 0.0001
model = unet.UNetSmall(8)
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
# set up weights based on data proportion
weights = np.array([0.79594768, 0.07181202, 0.02347426, 0.0042031, 0.00366211, 0.00764327, 0.07003923, 0.02321833])
weights = (1 - weights)/7
print('Weights of training data based on proportion of the training labels. Not computed here')
print(weights)
print(sum(weights))
criterion = nn.CrossEntropyLoss(weight=torch.from_numpy(weights).float())  # class weights must be a float tensor
lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
Weights of training data based on proportion of the training labels. Not computed here
[0.02915033 0.13259828 0.13950368 0.1422567  0.14233398 0.14176525
 0.13285154 0.13954024]
1.0000000000000002
I've normalized the data using the transforms.functional.normalize function. I calculated the mean and standard deviation of the training data and added this augmentation to my data loader.
dataset_train = data_utils.SatIn(data_path, 'TrainValTest.csv', 'train', transform=transforms.Compose([aug.ToTensorTarget(), aug.NormalizeTarget(mean=popmean, std=popstd)]))
I augmented my training data in preprocessing by rotating and flipping the imagery. 1 image grid then became 8.
I checked that my training data matched my classes and everything checked out.
Since I'm using 8 classes I chose to use CrossEntropyLoss since it has Softmax built in.
Current model
class UNetSmall(nn.Module):
    """
    Main UNet architecture
    """
    def __init__(self, num_classes=1):
        super().__init__()
        # encoding
        self.conv1 = encoding_block(9, 32)
        self.maxpool1 = nn.MaxPool2d(kernel_size=2)
        self.conv2 = encoding_block(32, 64)
        self.maxpool2 = nn.MaxPool2d(kernel_size=2)
        self.conv3 = encoding_block(64, 128)
        self.maxpool3 = nn.MaxPool2d(kernel_size=2)
        self.conv4 = encoding_block(128, 256)
        self.maxpool4 = nn.MaxPool2d(kernel_size=2)
        # center
        self.center = encoding_block(256, 512)
        # decoding
        self.decode4 = decoding_block(512, 256)
        self.decode3 = decoding_block(256, 128)
        self.decode2 = decoding_block(128, 64)
        self.decode1 = decoding_block(64, 32)
        # final
        self.final = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, input):
        # encoding
        conv1 = self.conv1(input)
        maxpool1 = self.maxpool1(conv1)
        conv2 = self.conv2(maxpool1)
        maxpool2 = self.maxpool2(conv2)
        conv3 = self.conv3(maxpool2)
        maxpool3 = self.maxpool3(conv3)
        conv4 = self.conv4(maxpool3)
        maxpool4 = self.maxpool4(conv4)
        # center
        center = self.center(maxpool4)
        # decoding
        decode4 = self.decode4(conv4, center)
        decode3 = self.decode3(conv3, decode4)
        decode2 = self.decode2(conv2, decode3)
        decode1 = self.decode1(conv1, decode2)
        # final
        final = nn.functional.upsample(self.final(decode1), input.size()[2:], mode='bilinear')
        return final
Training method
def train(train_loader, model, criterion, optimizer, scheduler, epoch_num):
    correct = 0
    totalcount = 0
    scheduler.step()
    # iterate over data
    for idx, data in enumerate(tqdm(train_loader, desc="training")):
        # get the inputs and wrap in Variable
        if torch.cuda.is_available():
            inputs = Variable(data['sat_img'].cuda())
            labels = Variable(data['map_img'].cuda())
        else:
            inputs = Variable(data['sat_img'])
            labels = Variable(data['map_img'])
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels.long())
        loss.backward()
        optimizer.step()
        test = torch.max(outputs.data, 1)[1] == labels.long()
        correct += test.sum().item()
        totalcount += test.size()[0] * test.size()[1] * test.size()[2]
    print('Training Loss: {:.4f}, Accuracy: {:.2f}'.format(loss.data[0], correct/totalcount))
    return {'train_loss': loss.data[0], 'train_acc': correct/totalcount}
Training call in epoch loop
lr_scheduler.step()
train_metrics = train(train_dataloader, model, criterion, optimizer, lr_scheduler, epoch)
Some epoch iteration output
#### Epoch 0/19
---------- training: 100%|████████████████████████████████████████████████████████████████████████|
84/84 [00:17<00:00, 5.77it/s] Training Loss: 0.8901, Accuracy: 0.83
Current elapsed time 2m 6s
#### Epoch 1/19
---------- training: 100%|████████████████████████████████████████████████████████████████████████|
84/84 [00:17<00:00, 5.72it/s] Training Loss: 0.7922, Accuracy: 0.83
Current elapsed time 2m 24s
#### Epoch 2/19
---------- training: 100%|████████████████████████████████████████████████████████████████████████|
84/84 [00:18<00:00, 5.44it/s] Training Loss: 0.8753, Accuracy: 0.84
Current elapsed time 2m 42s
#### Epoch 3/19
---------- training: 100%|████████████████████████████████████████████████████████████████████████|
84/84 [00:18<00:00, 5.53it/s] Training Loss: 0.7741, Accuracy: 0.84
Current elapsed time 3m 1s
|
It's hard to debug your model with this information, but maybe some of these ideas will help you in some way:
Try to overfit your network on much smaller data first, without augmentation, say one or two batches for many epochs. If that doesn't work, then your model is not capable of modeling the relation between the data and the desired target, or you have an error somewhere. Furthermore, it's easier to debug it that way.
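A minimal sketch of that overfitting check, reusing the model, criterion and optimizer from the question (small_batch is one fixed batch taken from your loader):
small_batch = next(iter(train_dataloader))  # a single, fixed batch
inputs = small_batch['sat_img'].cuda()
labels = small_batch['map_img'].cuda().long()
for step in range(500):
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(step, loss.item())  # should approach zero if the model can fit at all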
I'm not sure about the weights idea; maybe try to upsample underrepresented classes in order to make the data more balanced (repeat some underrepresented examples in your dataset). Curious where this idea is from; I've never heard of it.
Have you tried running the model from the repo you provided before applying your own customisations? How well does it perform; were you able to replicate their findings? Why do you think this architecture would be a good fit for your, from what I understand, different case? The loss function in the link you provided is different, while the architecture is the same. I haven't read the paper, nor have I tried your model, but it seems a little strange.
The link inside the GitHub repo points to a blog post where bigger batches are advised, as they stabilize training; what is your batch size?
Maybe start with a smaller and easier model and work your way up from there?
And the most important point comes last: I don't think SO is the best place for such a question (especially as it is research oriented). I see you have already asked it on GitHub issues though; maybe try to contact the author directly?
If I were you, I would start with the last point and a thorough understanding of the operations and their effect on your goal. Good luck.
|
https://stackoverflow.com/questions/54116080/
|
How can I write the below equivalent code of Keras Neural Net in Pytorch?
|
How can I write the below equivalent code of Keras Neural Net in Pytorch?
actor = Sequential()
actor.add(Dense(20, input_dim=9, activation='relu', kernel_initializer='he_uniform'))
actor.add(Dense(20, activation='relu'))
actor.add(Dense(27, activation='softmax', kernel_initializer='he_uniform'))
actor.summary()
# See note regarding crossentropy in cartpole_reinforce.py
actor.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=self.actor_lr))
[Please find the image here.][1]
[1]: https://i.stack.imgur.com/gJviP.png
|
Similar questions have already been asked, but here it goes:
import torch
actor = torch.nn.Sequential(
    torch.nn.Linear(9, 20),   # input and output sizes both have to be specified
    torch.nn.ReLU(),
    torch.nn.Linear(20, 20),  # same goes over here
    torch.nn.ReLU(),
    torch.nn.Linear(20, 27),  # and here
    torch.nn.Softmax(),
)
print(actor)
Initialization: By default, from version 1.0 onward, linear layers will be initialized with Kaiming Uniform (see this post). If you want to initialize your weights differently, see most upvoted answer to this question.
You may also use Python's OrderedDict to name the layers so they are easier to access; see PyTorch's documentation. You should be able to proceed from there, as sketched below.
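A quick sketch of the OrderedDict variant (the same network, with named layers):
from collections import OrderedDict
import torch

actor = torch.nn.Sequential(OrderedDict([
    ('fc1', torch.nn.Linear(9, 20)),
    ('relu1', torch.nn.ReLU()),
    ('fc2', torch.nn.Linear(20, 20)),
    ('relu2', torch.nn.ReLU()),
    ('fc3', torch.nn.Linear(20, 27)),
    ('softmax', torch.nn.Softmax(dim=1)),
]))
print(actor.fc1)  # layers are now accessible by name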
|
https://stackoverflow.com/questions/54125698/
|
pytorch grad is None after .backward()
|
I just installed torch-1.0.0 on Python 3.7.2 (macOS), and while trying the tutorial, the following code:
import torch
x = torch.ones(2, 2, requires_grad=True)
y = x + 2
z = y * y * 3
out = z.mean()
out.backward()
print(out.grad)
prints None which is not what's expected.
What's the problem?
|
This is the expected result.
.backward() accumulates gradients only in the leaf nodes. out is not a leaf node, hence its grad is None.
autograd.backward also does the same thing.
autograd.grad can be used to find the gradient of any tensor w.r.t. any tensor. So if you do autograd.grad(out, out), you get (tensor(1.),) as output, which is as expected.
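To see actual gradient values, query the leaf x instead; for this example out = mean(3*(x+2)^2), so every entry of the gradient is 4.5:
print(x.grad)
# tensor([[4.5000, 4.5000],
#         [4.5000, 4.5000]])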
Ref:
Tensor.backward (https://pytorch.org/docs/stable/autograd.html#torch.Tensor.backward)
autograd.backward (https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward)
autograd.grad (https://pytorch.org/docs/stable/autograd.html#torch.autograd.grad)
|
https://stackoverflow.com/questions/54150684/
|
When does PyTorch automatically cast Tensor dtype?
|
When does PyTorch automatically cast Tensor dtype? Why does it sometimes do it automatically and other times throw an error?
For example this automatically casts c to be a float:
a = torch.tensor(5)
b = torch.tensor(5.)
c = a*b
a.dtype
>>> torch.int64
b.dtype
>>> torch.float32
c.dtype
>>> torch.float32
But this throws an error:
a = torch.ones(2, dtype=torch.float)
b = torch.ones(2, dtype=torch.long)
c = torch.matmul(a,b)
Traceback (most recent call last):
File "<ipython-input-128-fbff7a713ff0>", line 1, in <module>
torch.matmul(a,b)
RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'tensor'
I'm confused since Numpy seems to automatically cast all arrays as necessary e.g.
a = np.ones(2, dtype=np.long)
b = np.ones(2, dtype=np.float)
np.matmul(a,b)
>>> 2.0
a*b
>>> array([1., 1.])
|
It looks like the PyTorch team is working on those types of problems; see this issue. It seems like some basic upcasting is already implemented in 1.0.0 for your example (probably for the overloaded operators; I tried some others like // and addition and they work fine), although I did not find any proof of this (like a GitHub issue or info in the documentation). If someone finds it (implicit casting of torch.Tensor for various operations), please post a comment or another answer.
This issue is a proposal on type promotion; as you can see, all of those are still open.
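Until full type promotion lands, a simple workaround is to cast explicitly so both operands share a dtype:
import torch

a = torch.ones(2, dtype=torch.float)
b = torch.ones(2, dtype=torch.long)
c = torch.matmul(a, b.to(a.dtype))  # cast b to float explicitly
print(c)  # tensor(2.)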
|
https://stackoverflow.com/questions/54155275/
|
pytorch autoencoder model evaluation fail
|
I am literally a beginner of PyTorch.
I trained an autoencoder network so that I can plot the distribution of the latent vectors (the result of encoders).
This is the code that I used for network training.
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.utils import save_image
from torch.utils.data import Dataset
from PIL import Image
import os
import glob
dir_img_decoded = '/media/dohyeong/HDD/mouth_autoencoder/dc_img_2'
if not os.path.exists(dir_img_decoded):
    os.mkdir(dir_img_decoded)

dir_check_point = '/media/dohyeong/HDD/mouth_autoencoder/ckpt_2'
if not os.path.exists(dir_check_point):
    os.mkdir(dir_check_point)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
num_epochs = 200
batch_size = 150 # up -> GPU memory increase
learning_rate = 1e-3
dir_dataset = '/media/dohyeong/HDD/mouth_autoencoder/mouth_crop/dir_normalized_mouth_cropped_images'
images = glob.glob(os.path.join(dir_dataset, '*.png'))
train_images = images[:-113]
test_images = images[-113:]
train_images.sort()
test_images.sort()
class TrumpMouthDataset(Dataset):
    def __init__(self, images):
        super(TrumpMouthDataset, self).__init__()
        self.images = images
        self.transform = transforms.Compose([
            # transforms.Resize((28, 28)),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
        ])

    def __getitem__(self, index):
        image = Image.open(self.images[index])
        return self.transform(image)

    def __len__(self):
        return len(self.images)
train_dataset = TrumpMouthDataset(train_images)
test_dataset = TrumpMouthDataset(test_images)
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(60000, 60),
            nn.ReLU(True),
            nn.Linear(60, 3),
            nn.ReLU(True),
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, 60),
            nn.ReLU(True),
            nn.Linear(60, 60000),
            nn.Tanh()
        )

    def forward(self, x):
        x = x.view(x.size(0), -1)
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return encoded, decoded
model = Autoencoder().cuda()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(),
                             lr=learning_rate,
                             weight_decay=1e-5)
for epoch in range(num_epochs):
    total_loss = 0
    for index, imgs in enumerate(train_dataloader):
        imgs = imgs.to(device)
        # ===================forward=====================
        outputs = model(imgs)
        imgs_flatten = imgs.view(imgs.size(0), -1)
        loss = criterion(outputs, imgs_flatten)
        # ===================backward====================
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
        print('{} Epoch, [{}/{}] batch, loss: {:.4f}'.format(epoch, index + 1, len(train_dataloader), loss.item()))
    avg_loss = total_loss / len(train_dataset)
    print('{} Epoch, avg_loss: {:.4f}'.format(epoch, avg_loss))
    if epoch % 10 == 0:
        check_point_file = os.path.join(dir_check_point, str(epoch) + ".pth")
        torch.save(model.state_dict(), check_point_file)
After training, I tried to get encoded values using this code.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
check_point = '/media/dohyeong/HDD/mouth_autoencoder/290.pth'
model = torch.load(check_point)
for index, imgs in enumerate(train_dataloader):
    imgs = imgs.to(device)
    # ===================evaluate=====================
    encoded, _ = model(imgs)
It finished with this error message.
"TypeError: 'collections.OrderedDict' object is not callable"
May I get some help?
|
Hi and welcome to the PyTorch community :D
TL;DR
Change model = torch.load(check_point) to model.load_state_dict(torch.load(check_point)).
The only problem is with the line:
model = torch.load(check_point)
The way you saved the checkpoint was:
torch.save(model.state_dict(), check_point_file)
That is, you saved the model's state_dict (which is just a dictionary of the various parameters that together describe the current instance of the model) in check_point_file.
Now, in order to load it back, just reverse the process.
check_point_file contains just the state_dict.
It knows nothing about the internals of the model: what its architecture is, how it's supposed to work, etc.
So, load it back:
state_dict = torch.load(check_point)
This state_dict can now be copied onto your Model instance as follows:
model.load_state_dict(state_dict)
Or, more succinctly,
model.load_state_dict(torch.load(check_point))
You got the error because the torch.load(check_point) returned the state_dict which you assigned to model.
When you subsequently called model(imgs), model was an OrderedDict object (not callable).
Hence the error.
See the Serialization Semantics Notes for more details.
Apart from that, your code sure is thorough for a beginner. Great going!
P.S. Your device agnosticism is brilliant! Perhaps you'd want to take a look at:
the line model = Autoencoder().cuda()
the map_location argument of torch.load()
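A hedged sketch combining those two P.S. points, so the same script also runs on CPU-only machines (note that if a checkpoint was saved from an nn.DataParallel wrapper, its keys carry a module. prefix that may need stripping):
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = Autoencoder().to(device)  # instead of hard-coding .cuda()
model.load_state_dict(torch.load(check_point, map_location=device))
model.eval()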
|
https://stackoverflow.com/questions/54166865/
|
How to implement current pytorch activation functions with parameters?
|
I am looking for a simple way to use an activation function which exists in the pytorch library, but with some sort of parameter. For example:
Tanh(x/10)
The only way I came up with while looking for a solution was implementing the custom function completely from scratch. Is there any better/more elegant way to do this?
edit:
I am looking for some way to append to my model the function Tanh(x/10) rather than plain Tanh(x). Here is the relevant code block:
self.model = nn.Sequential()
for i in range(len(self.layers) - 1):
    self.model.add_module("linear_layer_" + str(i), nn.Linear(self.layers[i], self.layers[i + 1]))
    if activations == None:
        self.model.add_module("activation_" + str(i), nn.Tanh())
    else:
        if activations[i] == "T":
            self.model.add_module("activation_" + str(i), nn.Tanh())
        elif activations[i] == "R":
            self.model.add_module("activation_" + str(i), nn.ReLU())
        else:
            # no activation
            pass
|
Instead of defining it as a specific function, you could inline it in a custom layer.
For instance your solution could look like:
import torch
import torch.nn as nn
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(4, 10)
        self.fc2 = nn.Linear(10, 3)
        self.fc3 = nn.Softmax()

    def forward(self, x):
        return self.fc3(self.fc2(torch.tanh(self.fc1(x)/10)))
where torch.tanh(self.fc1(x)/10) is inlined in the forward function of your module.
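Since the edited question builds the model with nn.Sequential and add_module, a hedged alternative is a tiny wrapper module that can be registered like any other layer:
import torch
import torch.nn as nn

class ScaledTanh(nn.Module):
    def __init__(self, scale=10.0):
        super().__init__()
        self.scale = scale

    def forward(self, x):
        return torch.tanh(x / self.scale)

# drop-in inside the loop from the question:
# self.model.add_module("activation_" + str(i), ScaledTanh(10))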
|
https://stackoverflow.com/questions/54174054/
|
How to put datasets created by torchvision.datasets in GPU in one operation?
|
I'm dealing with CIFAR10 and I use torchvision.datasets to create it. I need the GPU to accelerate the calculation, but I can't find a way to put the whole dataset onto the GPU at one time. My model needs to use mini-batches, and it is really time-consuming to deal with each batch separately.
I've tried to put each mini-batch onto the GPU separately, but it seems really time-consuming.
|
TL;DR
You won't save time by moving the entire dataset at once.
I don't think you'd necessarily want to do that even if you have the GPU memory to handle the entire dataset (of course, CIFAR10 is tiny by today's standards).
I tried various batch sizes and timed the transfer to GPU as follows:
from time import time
from torch.utils.data import DataLoader

num_workers = 1  # Set this as needed

def time_gpu_cast(batch_size=1):
    start_time = time()
    for x, y in DataLoader(dataset, batch_size, num_workers=num_workers):  # `dataset` is the CIFAR10 dataset
        x.cuda(); y.cuda()
    return time() - start_time

# Try various batch sizes
cast_times = [(2 ** bs, time_gpu_cast(2 ** bs)) for bs in range(15)]
# Try the entire dataset like you want to do
cast_times.append((len(dataset), time_gpu_cast(len(dataset))))

plot(*zip(*cast_times))  # plot the time taken (e.g. matplotlib.pyplot.plot)
For num_workers = 1, this is what I got (plot omitted).
And if we try parallel loading (num_workers = 8), it becomes even clearer (plot omitted).
|
https://stackoverflow.com/questions/54174854/
|
How to iterate over layers in Pytorch
|
Let's say I have a network model object called m. Now I have no prior information about the number of layers this network has. How can I create a for loop to iterate over its layers?
I am looking for something like:
Weight = []
for layer in m._modules:
    Weight.append(layer.weight)
|
Let's say you have the following neural network.
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # define the forward function
        return x
Now, let's print the size of the weight parameters associated with each NN layer.
model = Net()
for name, param in model.named_parameters():
    print(name, param.size())
Output:
conv1.weight torch.Size([6, 1, 5, 5])
conv1.bias torch.Size([6])
conv2.weight torch.Size([16, 6, 5, 5])
conv2.bias torch.Size([16])
fc1.weight torch.Size([120, 400])
fc1.bias torch.Size([120])
fc2.weight torch.Size([84, 120])
fc2.bias torch.Size([84])
fc3.weight torch.Size([10, 84])
fc3.bias torch.Size([10])
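Closer to the loop sketched in the question, here is a hedged way to collect the weight tensor of every layer that has one:
weights = []
for module in model.modules():  # recursively visits every submodule
    if hasattr(module, 'weight') and module.weight is not None:
        weights.append(module.weight)
print(len(weights))  # 5 for the Net above: conv1, conv2, fc1, fc2, fc3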
I hope you can extend the example to fulfill your needs.
|
https://stackoverflow.com/questions/54203451/
|
Torch C++: API to check NAN
|
I am using libtorch C++. In the Python version we can easily check the value of a tensor by converting it to numpy, and in numpy we have np.isnan(). I was wondering if there is a built-in function in libtorch C++ to check whether a tensor has any NaN values.
Thanks,
Afshin
|
Adding on to Fábio's answer (my reputation is too low to comment):
If you actually want to use the information about NANs in an assert or if condition, you need to convert it from a torch::Tensor to a C++ bool, like so:
torch::Tensor myTensor;
// do something
auto tensorIsNan = at::isnan(myTensor).any().item<bool>(); // will be of type bool
|
https://stackoverflow.com/questions/54205116/
|
How to speed up the "ImageFolder" for ImageNet
|
I am at a university, and the whole file system is on a remote system; wherever I log in with my account, I can always access my home directory, even when I log into the GPU servers through SSH. This is the setup in which the GPU servers read the data.
Currently, I use PyTorch to train ResNet from scratch on ImageNet. My code only uses the GPUs in a single machine, and I found that setting up "torchvision.datasets.ImageFolder" takes almost two hours.
Could you please share some experience on how to speed up "torchvision.datasets.ImageFolder"? Thanks very much.
|
Why does it take so long?
Setting up an ImageFolder can take a long time, especially when the images are stored on a slow remote disk. The reason for this latency is that the __init__ function for the dataset goes over all files in the image folders and checks whether each file is an image file. For ImageNet that can take quite a while, as there are over 1 million files to check.
What can you do?
- As Kevin Sun already pointed out, copying the dataset to a local (and possibly much faster) storage can significantly speed things up.
- Alternatively, you can create a modified dataset class that does not scan all the files, but relies on a cached list of files: a cached list that you prepare only once in advance and reuse for all runs (a sketch follows below).
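A minimal sketch of that cached-list idea (assuming a hypothetical cache file samples.pkl holding (path, class_index) pairs prepared once, offline):
import pickle
from PIL import Image
from torch.utils.data import Dataset

class CachedImageList(Dataset):
    def __init__(self, cache_file, transform=None):
        with open(cache_file, 'rb') as f:
            self.samples = pickle.load(f)  # no directory scan at startup
        self.transform = transform

    def __getitem__(self, index):
        path, target = self.samples[index]
        img = Image.open(path).convert('RGB')
        return (self.transform(img) if self.transform else img), target

    def __len__(self):
        return len(self.samples)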
|
https://stackoverflow.com/questions/54207204/
|
How to use multiple GPUs in pytorch?
|
I use this command to use a GPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
But, I want to use two GPUs in jupyter, like this:
device = torch.device("cuda:0,1" if torch.cuda.is_available() else "cpu")
|
Assuming that you want to distribute the data across the available GPUs (if you have a batch size of 16 and 2 GPUs, you might be looking at providing 8 samples to each of the GPUs), and not to spread parts of the model across different GPUs, this can be done as follows:
If you want to use all the available GPUs:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CreateModel()
model= nn.DataParallel(model)
model.to(device)
If you want to use specific GPUs:
(For example, using 2 out of 4 GPUs)
device = torch.device("cuda:1,3" if torch.cuda.is_available() else "cpu") ## specify the GPU id's, GPU id's start from 0.
model = CreateModel()
model= nn.DataParallel(model,device_ids = [1, 3])
model.to(device)
To use the specific GPU's by setting OS environment variable:
Before executing the program, set CUDA_VISIBLE_DEVICES variable as follows:
export CUDA_VISIBLE_DEVICES=1,3 (Assuming you want to select 2nd and 4th GPU)
Then, within the program, you can just use DataParallel() as though you want to use all the GPUs (similar to the 1st case). Here the GPUs available to the program are restricted by the OS environment variable.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CreateModel()
model= nn.DataParallel(model)
model.to(device)
In all of these cases, the data has to be mapped to the device.
If X and y are the data (note that Tensor.to is not in-place, so reassign):
X = X.to(device)
y = y.to(device)
|
https://stackoverflow.com/questions/54216920/
|
Unable to optimize function using pytorch
|
I am trying to write an estimator for a Structural Equation Model. So basically I start with random parameters for the model: B, gamma, phi_diag, psi. Using these I compute the implied covariance matrix sigma, and my optimization objective f_ml is computed from sigma and the covariance matrix of the data, S. Here's my computation code:
device = torch.device('cpu')
dtype = torch.float
B_s = (4, 4)
gamma_s = (4, 1)
phi_s = (1, 1)
psi_s = (4, 4)
# Covariance matrix of data
S = torch.tensor(data.cov().values, dtype=dtype, device=device, requires_grad=False)
# Defining parameters of the model
B = torch.rand(*B_s, dtype=dtype, device=device, requires_grad=True)
B_lower = B.tril(diagonal=-1)
gamma = torch.rand(*gamma_s, dtype=dtype, device=device, requires_grad=True)
phi_diag = torch.rand(phi_s[0], dtype=dtype, device=device, requires_grad=True)
phi = torch.diag(phi_diag)
psi = torch.rand(*psi_s, dtype=dtype, device=device, requires_grad=True)
psi_sym = psi @ psi.t()
B_inv = (torch.eye(*B_s, dtype=dtype, device=device, requires_grad=False) - B_lower).inverse()
sigma_yy = B_inv @ (gamma @ phi @ gamma.t() + psi_sym) @ B_inv.t()
sigma_xy = phi @ gamma.t() @ B_inv.t()
sigma_yx = sigma_xy.t()
sigma_xx = phi
# Computing the covariance matrix from the parameters
sigma = torch.cat((torch.cat((sigma_yy, sigma_yx), 1), torch.cat((sigma_xy, sigma_xx), 1)), 0)
And I am trying to do the optimization as:
optim = torch.optim.Adam([B, gamma, phi_diag, psi], lr=0.01)
for t in range(5000):
    optim.zero_grad()
    f_ml = sigma.logdet() + (S @ sigma.inverse()).trace() - S.logdet() - (4 + 1)
    f_ml.backward(retain_graph=True)
    optim.step()
The problem I am facing is that the values of my parameters aren't updated during the optimization. I tried to debug the problem a bit and what I noticed is that in the first loop of optimization the gradients get calculated but the values of the parameters don't get updated. Here's an example using pdb (breakpoint set right after the for loop):
> <ipython-input-232-c6a6fda6610b>(14)<module>()
-> optim.zero_grad()
(Pdb) B
tensor([[ 6.0198e-01, 8.7188e-01, 5.4234e-01, 6.0800e-01],
[-4.9971e+03, 9.3324e-01, 8.1482e-01, 8.3517e-01],
[-1.4002e+04, 2.6706e+04, 2.6412e-01, 4.7804e-01],
[ 1.1382e+04, -2.1603e+04, -6.0834e+04, 1.2768e-01]],
requires_grad=True)
(Pdb) c
> <ipython-input-232-c6a6fda6610b>(13)<module>()
-> import pdb; pdb.set_trace()
(Pdb) B.grad
tensor([[ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 1.6332e+04, 0.0000e+00, 0.0000e+00, 0.0000e+00],
[ 4.6349e+04, -8.8694e+04, 0.0000e+00, 0.0000e+00],
[-3.7612e+04, 7.1684e+04, 2.0239e+05, 0.0000e+00]])
(Pdb) B
tensor([[ 6.0198e-01, 8.7188e-01, 5.4234e-01, 6.0800e-01],
[-4.9971e+03, 9.3324e-01, 8.1482e-01, 8.3517e-01],
[-1.4002e+04, 2.6706e+04, 2.6412e-01, 4.7804e-01],
[ 1.1382e+04, -2.1603e+04, -6.0834e+04, 1.2768e-01]],
requires_grad=True)
I can't figure out what I am doing wrong. Any ideas?
|
The problem is that sigma was computed only once, outside the loop, so it is never recomputed from the updated parameter values. Basically, the computation code needs to be moved into a function and called in every iteration, as sketched below.
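A minimal sketch of that fix, reusing the question's own definitions (the sigma construction simply moves inside a function that is called every step):
def compute_sigma():
    B_lower = B.tril(diagonal=-1)
    phi = torch.diag(phi_diag)
    psi_sym = psi @ psi.t()
    B_inv = (torch.eye(*B_s, dtype=dtype, device=device) - B_lower).inverse()
    sigma_yy = B_inv @ (gamma @ phi @ gamma.t() + psi_sym) @ B_inv.t()
    sigma_xy = phi @ gamma.t() @ B_inv.t()
    return torch.cat((torch.cat((sigma_yy, sigma_xy.t()), 1),
                      torch.cat((sigma_xy, phi), 1)), 0)

optim = torch.optim.Adam([B, gamma, phi_diag, psi], lr=0.01)
for t in range(5000):
    optim.zero_grad()
    sigma = compute_sigma()  # rebuild the graph from the current parameter values
    f_ml = sigma.logdet() + (S @ sigma.inverse()).trace() - S.logdet() - (4 + 1)
    f_ml.backward()          # retain_graph is no longer needed
    optim.step()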
|
https://stackoverflow.com/questions/54219691/
|
Saving/Loading models in AllenNLP package
|
I am trying to load AllenNLP model weights. I could not find any documentation on how to save/load a whole model, so I am playing with weights only.
from allennlp.nn import util
model_state = torch.load(filename_model, map_location=util.device_mapping(-1))
model.load_state_dict(model_state)
I modified my input corpus a bit, and I am guessing that because of this I am getting a corpus-size mismatch:
RuntimeError: Error(s) in loading state_dict for BasicTextFieldEmbedder:
size mismatch for token_embedder_tokens.weight:
copying a param with shape torch.Size([2117, 16]) from checkpoint,
the shape in current model is torch.Size([2129, 16]).
Seemingly there is no official way to save a model together with its corpus vocabulary. Any hacks around it?
|
There is functionality in AllenNLP for saving and loading a model.
Have you followed the steps outlined in AllenNLP's tutorial? Below I pasted a snippet from the tutorial that might be of interest to you:
# Here's how to save the model.
with open("/tmp/model.th", 'wb') as f:
    torch.save(model.state_dict(), f)
vocab.save_to_files("/tmp/vocabulary")

# And here's how to reload the model.
vocab2 = Vocabulary.from_files("/tmp/vocabulary")
model2 = LstmTagger(word_embeddings, lstm, vocab2)
with open("/tmp/model.th", 'rb') as f:
    model2.load_state_dict(torch.load(f))
If the above somehow doesn't work for you, you can check the allennlp.models.archival.archive_model helper function. Using this function, you should be able to archive your model's training configuration along with its weights and vocabulary to model.tar.gz.
Here you can find more information on the constraints of the two approaches that I discussed.
|
https://stackoverflow.com/questions/54227872/
|
Order of layers in hidden states in PyTorch GRU return
|
This is the API I am looking at, https://pytorch.org/docs/stable/nn.html#gru
It outputs:
output of shape (seq_len, batch, num_directions * hidden_size)
h_n of shape (num_layers * num_directions, batch, hidden_size)
For a GRU with more than one layer, I wonder how to fetch the hidden state of the last layer: should it be h_n[0] or h_n[-1]?
And if it's bidirectional, how do I slice to obtain the hidden states of the last layer in both directions?
|
The documentation of nn.GRU is clear about this. Here is an example to make it more explicit:
For the unidirectional GRU/LSTM (with more than one hidden layer):
output - would contain all the output features for all the timesteps t
h_n - would contain the hidden state at the last timestep for every layer.
To get the hidden states of the first and the last hidden layer at the last timestep, use:
first_hidden_layer_last_timestep = h_n[0]
last_hidden_layer_last_timestep = h_n[-1]
where the n in h_n refers to the last timestep, t = seq_len.
This is because description says:
num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results.
So, it is natural and intuitive to also return the results (i.e. hidden states) accordingly in the same order.
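For the bidirectional case, layers and directions are interleaved along the first dimension of h_n, so a common way to slice (a sketch with hypothetical sizes) is to reshape first:
import torch

num_layers, num_directions, hidden_size = 3, 2, 5
gru = torch.nn.GRU(input_size=4, hidden_size=hidden_size,
                   num_layers=num_layers, bidirectional=True)
output, h_n = gru(torch.randn(7, 2, 4))  # input is (seq_len, batch, input_size)
h_n = h_n.view(num_layers, num_directions, 2, hidden_size)  # (layers, directions, batch, hidden)
last_layer_forward = h_n[-1, 0]   # (batch, hidden_size)
last_layer_backward = h_n[-1, 1]  # (batch, hidden_size)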
|
https://stackoverflow.com/questions/54242123/
|
Pytorch - Can not slice torchvision MNIST dataset
|
In Pytorch, when using torchvision's MNIST dataset, we can get a digit as follows:
from torchvision import datasets, transforms
from torch.utils.data import DataLoader, Dataset, TensorDataset
tsfm = transforms.Compose([transforms.Resize((16, 16)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
mnist_ds = datasets.MNIST(root='../../../_data/mnist',train=True,download=True,
transform=tsfm)
digit_12 = mnist_ds[12]
Though it is possible to slice on many datasets, we cannot slice on this one:
>>> digit_12_to_14 = mnist_ds[12:15]
ValueError: Too many dimensions: 3 > 2.
This is due to an Image.fromarray() call in __getitem__().
Is it possible to use the MNIST dataset without using a DataLoader?
PS: The reason I would like to avoid using a DataLoader is that sending batches one at a time to the GPU slows down the training. I prefer to send the entire dataset to the GPU at once. For this I need to have access to the whole transformed dataset.
|
You can use torch.utils.data.Subset() to get an index-based slice of a torch Dataset, e.g.:
import torch
import torch.utils.data as data_utils

indices = torch.arange(12, 15)
mnist_12to14 = data_utils.Subset(mnist_ds, indices)
|
https://stackoverflow.com/questions/54251798/
|
Transferring pretrained pytorch model to onnx
|
I am trying to convert a pytorch model to ONNX, in order to use it later with TensorRT. I followed the following tutorial https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html, but my kernel dies all the time.
This is the code that I implemented.
# Some standard imports
import io
import numpy as np
import torch
from torch import nn
import torch.onnx
from deepformer.nets.quicknat import quickNAT

param = {
    'num_channels': 64,
    'num_filters': 64,
    'kernel_h': 5,
    'kernel_w': 5,
    'kernel_c': 1,
    'stride_conv': 1,
    'pool': 2,
    'stride_pool': 2,
    'num_classes': 1,
    'padding': 'reflection'
}

net = quickNAT(param)
checkpoint_path = 'checkpoint_epoch36_loss0.78.t7'
checkpoints = torch.load(checkpoint_path)
map_location = lambda storage, loc: storage
if torch.cuda.is_available():
    map_location = None
net.load_state_dict(checkpoints['net'])
net.train(False)

# Input to the model
x = torch.rand(1, 64, 256, 1600, requires_grad=True)

# Export the model
torch_out = torch.onnx._export(net,                # model being run
                               x,                  # model input (or a tuple for multiple inputs)
                               "quicknat.onnx",    # where to save the model (can be a file or file-like object)
                               export_params=True) # store the trained parameter weights inside the model file
|
What is the output you get? It seems SuperResolution is supported with the export operators in pytorch, as mentioned in the documentation.
Are you sure the input to your model is:
x = torch.rand(1, 64, 256, 1600, requires_grad=True)
That could be the variable that you used for training; since for deployment you run the network on one or multiple images, the dummy input to export to ONNX is usually:
dummy_input = torch.randn(1, 3, 720, 1280, device='cuda')
With 1 being the batch size, 3 being the channels of the image (RGB), and then the size of the image, in this case 720x1280. Check that input; I guess you don't have a 64-channel image as input, right?
Also, it'd be helpful if you posted the terminal output to see where it fails.
Good luck!
|
https://stackoverflow.com/questions/54254313/
|
Saving and Loading Pytorch Model Checkpoint for inference not working
|
I have a trained model using LSTM. The model is trained on GPU (On Google COLABORATORY).
I have to save the model for inference, which I will run on CPU.
Once trained, I saved the model checkpoint as follows:
torch.save({'model_state_dict': model.state_dict()},'lstmmodelgpu.tar')
And, for inference, I loaded the model as :
# model definition
vocab_size = len(vocab_to_int)+1
output_size = 1
embedding_dim = 300
hidden_dim = 256
n_layers = 2
model = SentimentLSTM(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
# loading model
device = torch.device('cpu')
checkpoint = torch.load('lstmmodelgpu.tar', map_location=device)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
But, it is raising the following error:
model.load_state_dict(checkpoint['model_state_dict'])
File "workspace/envs/envdeeplearning/lib/python3.5/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for SentimentLSTM:
Missing key(s) in state_dict: "embedding.weight".
Unexpected key(s) in state_dict: "encoder.weight".
Is there anything I missed while saving the checkpoint?
|
There are two things to be considered here.
You mentioned that you're training your model on GPU and using it for inference on CPU, so you need to add the map_location parameter in the load function, passing torch.device('cpu').
There is a mismatch of state_dict keys (indicated in your output message), which might be caused by missing keys, or by the state_dict you are loading having more keys than the model you are currently using. For that, you can add the parameter strict with value False to the load_state_dict function; this makes the method ignore the mismatched keys (see the sketch below).
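A hedged sketch of both points together. Keep in mind that strict=False silently skips the mismatched keys, so in this particular case, where encoder.weight in the checkpoint looks like a renamed embedding.weight, you may instead want to remap the key so the weights are actually loaded:
checkpoint = torch.load('lstmmodelgpu.tar', map_location=torch.device('cpu'))
state_dict = checkpoint['model_state_dict']

# option 1: ignore mismatched keys (the embedding then stays randomly initialized!)
model.load_state_dict(state_dict, strict=False)

# option 2 (hypothetical remap, assuming encoder.weight is the old name of embedding.weight):
state_dict['embedding.weight'] = state_dict.pop('encoder.weight')
model.load_state_dict(state_dict)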
Side note: try to use the .pt or .pth extension for checkpoint files, as that is the convention.
|
https://stackoverflow.com/questions/54261892/
|
ImportError: No module named 'torchvision.datasets.mnist'
|
Even after installing pytorch, this error comes up for the following line:
from torchvision import datasets
|
If you're using anaconda distribution, first install torchvision using:
$ conda install -c conda-forge torchvision
If the package is not installed, it will be installed. Otherwise, it will print the message:
# All requested packages already installed.
After this, try to import the torchvision.datasets as you mentioned.
In [1]: from torchvision import datasets
In [2]: dir(datasets)
Out[2]:
['CIFAR10',
'CIFAR100',
'CocoCaptions',
'CocoDetection',
'DatasetFolder',
'EMNIST',
'FakeData',
'FashionMNIST',
'ImageFolder',
'LSUN',
'LSUNClass',
'MNIST',
'Omniglot',
'PhotoTour',
'SEMEION',
'STL10',
'SVHN',
....,
....
]
As you can see from above listing of dir(datasets), the dataset class for MNIST is listed which will be the case when the torchvision package is installed correctly.
|
https://stackoverflow.com/questions/54274716/
|
Pytorch softmax along different masks without for loop
|
Say I have a vector a, with an index vector b of the same length. The indices are in the range 0~N-1, corresponding to N groups. How can I apply softmax to every group without a for loop?
I'm doing some sort of attention operation here. The group sizes are not the same, so I can't reshape a to a matrix and use dim in the standard Softmax() API.
Toy example:
a = torch.rand(10)
a: tensor([0.3376, 0.0557, 0.3016, 0.5550, 0.5814, 0.1306, 0.2697, 0.9989, 0.4917,
0.6306])
b = torch.randint(0,3,(1,10), dtype=torch.int64)
b: tensor([[1, 2, 0, 2, 2, 0, 1, 1, 1, 1]])
I want to do softmax like
for index in range(3):
    softmax(a[b == index])
but without the for loop to save time.
|
Maybe this answer will have to change slightly based on a potential response to my comment, but I'm just going ahead and throwing in my two cents about Softmax.
Generally, the formula for softmax is explained rather well in the PyTorch documentation, where we can see that it is the exponential of the current value, divided by the sum over all classes.
The reason for doing this is founded in probability theory, and is probably a little outside of my comfort zone, but essentially it helps you to maintain a rather simple backpropagation derivative when you use it in combination with a popular loss strategy called "Cross Entropy Loss" (CE) (see the corresponding function in PyTorch here).
Furthermore, you can also see in the description for CE that it automatically combines two functions, namely a (numerically stable) version of the softmax function, as well as the negative log likelihood loss (NLLL).
Now, to tie back to your original question, and hopefully resolving your issue:
For the sake of the question - and the way you asked it - it seems you are playing around with the popular MNIST handwritten digit dataset, in which we want to predict some values for your current input image.
I am also assuming that your output a will at some point be the output of a layer of a neural network. It does not matter whether this is squashed to a specific range or not (e.g., by applying some form of activation function), as the softmax will basically be a normalization. Specifically, it will give us, as discussed before, some form of distribution across all the predicted values, which sums to 1 across all classes. To do this, we can simply apply something like
soft_a = softmax(a, dim=0) # otherwise throws error if we don't specify axis
print(torch.sum(soft_a)) # should return "Tensor(1.)"
Now, if we assume that you want to do the "classical" MNIST example, you could then use the argmax() function to predict which value your system thinks is the correct answer, and calculate an error based off that, e.g., with the nn.NLLLoss() function.
If you are indeed predicting values for each position in a single output, you have to think slightly different about this.
First of all, softmax() ceases to make sense here, since you are computing a probability distribution across multiple outputs, and unless you are fairly certain that their distributions are dependent on one another in a very specific way, I would argue that this is not the case here.
Also, keep in mind that you are then looking to calculate a pairwise loss, i.e. something for every index of your output. The function that comes to mind for this specific purpose would be nn.BCELoss(), which calculates a binarized (element-wise) version of Cross-Entropy.
For this, you can then simply plug in your original prediction tensor a, as well as your ground truth tensor b. A minimal example for this would look like this:
bce = torch.nn.BCELoss(reduction="none") # to keep losses for each element separate
loss = bce(a,b) # returns tensor with respective pairwise loss
If you are interested in a single loss, you can obviously use BCELoss with a different argument for reduction, as described in the docs.
Let me know if I can clarify some parts of the answer for you.
EDIT: Something else to keep in mind here: The BCELoss() requires you to feed in values that can potentially be close to the value you want to predict. This is especially a problem if you feed in values to an activation function first (e.g., sigmoid or tanh), which can then never reach the value you want to predict, since they are bound by an interval!
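Coming back to the literal question of a group-wise softmax without a Python loop, here is a hedged sketch using scatter_add_ (assuming b is flattened to shape (10,), and using the global max for numerical stability):
import torch

def grouped_softmax(a, b, num_groups):
    exp = torch.exp(a - a.max())  # stabilized exponentials
    group_sums = torch.zeros(num_groups).scatter_add_(0, b, exp)  # per-group sums of exponentials
    return exp / group_sums[b]  # divide each element by its own group's sum

a = torch.rand(10)
b = torch.randint(0, 3, (10,), dtype=torch.int64)
print(grouped_softmax(a, b, 3))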
|
https://stackoverflow.com/questions/54284077/
|
batch_size not match in torchtext BucketIterator
|
I have set batch_size equal to 64, but when I print out the train_batch and val_batch, the size is not equal to 64.
The train data and val data are in the below format:
First, I define the TEXT and LABEL fields.
tokenize = lambda x: x.split()
TEXT = data.Field(sequential=True, tokenize=tokenize)
LABEL = data.Field(sequential=False)
And then, following tutorials, I wrote the code below:
train_data, valid_data = data.TabularDataset.splits(
    path='.',
    train='train_intent.csv', validation='val.csv',
    format='csv',
    fields={'sentences': ('text', TEXT),
            'labels': ('label', LABEL)}
)
test_data = data.TabularDataset(
    path='test.csv',
    format='csv',
    fields={'sentences': ('text', TEXT)}
)
TEXT.build_vocab(train_data)
LABEL.build_vocab(train_data)
BATCH_SIZE = 64
train_iter, val_iter = data.BucketIterator.splits(
    (train_data, valid_data),
    batch_sizes=(BATCH_SIZE, BATCH_SIZE),
    sort_key=lambda x: len(x.text),
    sort_within_batch=False,
    repeat=False,
    device=device
)
But when I check whether the iterator is fine, I find the strange output below:
train_batch = next(iter(train_iter))
print(train_batch.text.shape)
print(train_batch.label.shape)
[output]
torch.Size([15, 64])
torch.Size([64])
And the training process outputs the error ValueError: Expected input batch_size (15) to match target batch_size (64).:
def train(model, iterator, optimizer, criterion):
    epoch_loss = 0
    model.train()
    for batch in iterator:
        optimizer.zero_grad()
        predictions = model(batch.text)
        loss = criterion(predictions, batch.label)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    return epoch_loss / len(iterator)
Any hint would be highly appreciated. Thanks!
|
The returned batch size is not always equal to batch_size; the last batch can be smaller. For example: if you have 100 training examples and batch_size is 64, the returned batch sizes will be [64, 36]. (Note also that in your printout, torch.Size([15, 64]) is (seq_len, batch_size): torchtext batches are sequence-first unless the Field is created with batch_first=True, which may be what your model is tripping over.)
Code: https://github.com/pytorch/text/blob/1c2ae32d67f7f7854542212b229cd95c85cf4026/torchtext/data/iterator.py#L255-L271
|
https://stackoverflow.com/questions/54307824/
|
Modification to Caffe VGG 16 to handle 1 channel images on PyTorch
|
I am converting a VGG16 network to be a fully convolutional network and also modifying the input to accept a single-channel image. The complete code for reproducibility is given below.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
import torchvision.datasets as datasets
import copy
from torch.utils import model_zoo
from torchvision import models
from collections import OrderedDict
def convolutionalize(modules, input_size):
    """
    Recast `modules` into fully convolutional form
    """
    fully_conv_modules = []
    x = Variable(torch.zeros((1, ) + input_size))
    for m in modules:
        if isinstance(m, nn.Linear):
            n = nn.Conv2d(x.size(1), m.weight.size(0), kernel_size=(x.size(2), x.size(3)))
            n.weight.data.view(-1).copy_(m.weight.data.view(-1))
            n.bias.data.view(-1).copy_(m.bias.data.view(-1))
            m = n
        fully_conv_modules.append(m)
        x = m(x)
    return fully_conv_modules

def vgg16(is_caffe=True):
    """
    Load the VGG-16 net for use as a fully convolutional backbone.
    """
    vgg16 = models.vgg16(pretrained=True)
    # cast into fully convolutional form (as list of layers)
    vgg16 = convolutionalize(list(vgg16.features) + list(vgg16.classifier),
                             (3, 224, 224))
    # name layers like the original paper
    names = ['conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1',
             'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2',
             'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3', 'relu3_3', 'pool3',
             'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'conv4_3', 'relu4_3', 'pool4',
             'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'conv5_3', 'relu5_3', 'pool5',
             'fc6', 'relu6', 'drop6', 'fc7', 'relu7', 'drop7', 'fc8']
    vgg16 = nn.Sequential(OrderedDict(zip(names, vgg16)))
    if is_caffe:
        # substitute original Caffe weights for improved fine-tuning accuracy
        # see https://github.com/jcjohnson/pytorch-vgg
        caffe_params = model_zoo.load_url('https://s3-us-west-2.amazonaws.com/'
                                          'jcjohns-models/vgg16-00b39a1b.pth')
        for new_p, old_p in zip(vgg16.parameters(), caffe_params.values()):
            new_p.data.copy_(old_p.view_as(new_p))
    # surgery: decapitate final classifier
    del vgg16._modules['fc8']  # note: risky use of private interface
    # surgery: keep fuller spatial dims by including incomplete pooling regions
    for m in vgg16.modules():
        if isinstance(m, nn.MaxPool2d):
            m.ceil_mode = True
    return vgg16

class Learner(nn.Module):
    def __init__(self, num_classes, singleChannel=False):
        super().__init__()
        backbone = vgg16(is_caffe=True)
        for k in list(backbone._modules)[-6:]:
            del backbone._modules[k]
        supp_backbone = copy.deepcopy(backbone)
        # Modify conv1_1 of the conditioning branch to have 1 input channel.
        # Init the weights in the new channel to the channel-wise mean
        # of the pre-trained conv1_1 weights.
        if singleChannel == True:
            old_conv1 = backbone._modules['conv1_1'].weight.data
            mean_conv1 = torch.mean(old_conv1, dim=1, keepdim=True)
            new_conv1 = nn.Conv2d(1, old_conv1.size(0), kernel_size=old_conv1.size(2), stride=1, padding=1)
            new_conv1.weight.data = mean_conv1
            new_conv1.bias.data = backbone._modules['conv1_1'].bias.data
            backbone._modules['conv1_1'] = new_conv1
        self.encoder = copy.deepcopy(backbone)
        self.num_classes = num_classes

    def forward(self, im):
        # encode image
        supp_feats = self.encoder(im)
        return supp_feats

model = Learner(num_classes=2, singleChannel=True).cpu()
mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=None)
im2arr = np.array(mnist_trainset[1][0])
im2arr = im2arr[np.newaxis, :, :]  # shape (1, 28, 28)
model.train()
x = model(torch.from_numpy(im2arr))
I expected x to be a torch tensor output but get an error message
'ValueError: Expected 4D tensor as input, got 3D tensor instead.' on the last line
|
You need your input shape to be Batch-Channel-Height-Width, which is 4D. In your case, you only have one channel so you "squeezed out" this singleton dimension, but pytorch does not like it!
try
im2arr = im2arr[np.newaxis, np.newaxis, :, :] # add a singleton dimension for the channels as well
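For completeness, a minimal sketch of the full call (the cast to float is my addition: MNIST arrays are uint8, while the convolution layers expect floating-point input):
import numpy as np
import torch

im2arr = np.array(mnist_trainset[1][0])        # shape (28, 28), dtype uint8
im2arr = im2arr[np.newaxis, np.newaxis, :, :]  # shape (1, 1, 28, 28): B-C-H-W
x = model(torch.from_numpy(im2arr).float())    # cast to float before the forward pass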
|
https://stackoverflow.com/questions/54324844/
|
ML Model inputs and outputs analogy
|
As I learn more and more about ML (I am a mobile DEV) I'm starting to form an analogy in my head. I would like the communities opinion / validation.
As a front end DEV you have a backend and an API that you can make requests to. The standard format for the inputs and outputs to the API is JSON.
I'm running into a problem with ML Models that I am trying to use where I don't know how to read the expected input (API) and I don't know how to decode the expected output.
So far I my experience has been fragmented because some models say "Give me an image of [1,2,120,120]" or something like that.
To analogize, is there a unified way to define inputs and outputs for a ML model like JSON unifies the inputs and outputs for an backend API?
If so, what are some rules one must follow to encode and decode data into this format?
|
Assuming this "ML Model" is in the context of running an input through say a trained pytorch model's forward pass to get an output, the unified way to define inputs and outputs for an ML model are through Tensors. Tensors are essentially a multi-dimensional matrix containing elements of a single data type. Think multi-dimensional lists with a single data type.
Tensors:MLModels::JSON:WebAPI
An Example using an Object Detector
Model
Let's say your model example with the image is an object detector model that takes in an image as input and outputs either dog or cat
The input would usually be:
A tensor representation of an Image with the shape of [1, 2, 120, 120], where 1 represents the batch size, 2 is the channel dimension (a standard RGB image would have 3 channels), and 120x120 is the width and height of the image.
The output would usually be:
A normalized 2 dimensional tensor like [0.7, 0.3] where index 0 represents the probability of the image depicting a dog and index 1 represents the probability it's a cat.
Encoding and Decoding
Decoding the output to a string like "dog" or "cat" is obvious.
Encoding an image is slightly less obvious. At its heart, the format of an image is that of a tensor: a multi-dimensional matrix containing a single datatype. So it is still intuitive to encode an image in the form of a JPEG or PNG into a tensor representation through the rgb channel dimensions and the pixel values for each channel. Typically image files are loaded in using libraries and methods like the Python Imaging Library and pytorch's torchvision.transforms.ToTensor().
This example is very specific to an object detector type model, but most supervised ML models will output a tensor like the above or a one-hot label. Most ML models in general will always have data inputs and outputs that can be represented as Tensors.
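An illustrative sketch of both directions (the file name and the two-class label set here are assumptions, not from any specific model):
import torch
from PIL import Image
from torchvision import transforms

# encode: image file -> 4D tensor
img = Image.open('some_image.jpg')
x = transforms.ToTensor()(img).unsqueeze(0)    # shape [1, C, H, W]

# decode: output tensor -> label string
classes = ['dog', 'cat']
probs = torch.tensor([[0.7, 0.3]])             # stand-in for model(x)
label = classes[probs.argmax(dim=1).item()]    # -> 'dog'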
|
https://stackoverflow.com/questions/54338081/
|
Confusion in understand python commands for deep learning
|
I recently started learning the deep learning with pytorch using this tutorial.
I am having problem with these lines of code.
The parameter train=True means it will load the training data.
But how much data does it take for training: 50%?
How can we specify the amount of data used for training? Similarly, I couldn't understand batch_size and num_workers: what do they mean when loading the data? Is the batch_size parameter similar to the one used for training in deep learning?
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
|
If you don't split your data beforehand, the trainloader will use the entire training set. You can specify the amount of training data by splitting your data, see:
import numpy as np
import torch
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import datasets, transforms

batch_size = 4
num_workers = 2
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
                              download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
                             download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
                                           sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
                                           sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
                                          num_workers=num_workers)
The batch size is the number of samples processed in each iteration. For example, if your training set has 1000 examples and your batch_size is 10, then each epoch consists of 100 iterations.
The number of workers is used to preprocess the batches of data. More workers consume more memory, but they help speed up the input and output process.
num_workers = 0 means the data loading is done in the main process when needed,
num_workers > 0 means your data will be preprocessed in parallel by the number of workers you defined.
|
https://stackoverflow.com/questions/54340766/
|
NumPy is faster than PyTorch for larger cross or outer products
|
I'm computing huge outer products between vectors of size (50500,) and found out that NumPy is (much?) faster than PyTorch while doing so.
Here are the tests:
# NumPy
In [64]: a = np.arange(50500)
In [65]: b = a.copy()
In [67]: %timeit np.outer(a, b)
5.81 s ± 56.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
-------------
# PyTorch
In [73]: t1 = torch.arange(50500)
In [76]: t2 = t1.clone()
In [79]: %timeit torch.ger(t1, t2)
7.73 s ± 143 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I'd ideally like to have the computation done in PyTorch. So, how can I speed things up for computing outer product in PyTorch for such huge vectors?
Note: I tried to move the tensors to GPU but I was treated with MemoryError because it needs around 19 GiB of space. So, I eventually have to do it on the CPU.
|
Unfortunately there's really no way to specifically speed up torch's method of computing the outer product torch.ger() without a vast amount of effort.
Explanation and Options
The reason the numpy function np.outer() is so fast is that it's backed by C code, which you can see here: https://github.com/numpy/numpy/blob/7e3d558aeee5a8a5eae5ebb6aef03de892a92ebd/numpy/core/numeric.py#L1123
where the function uses operations from the umath C source code.
Pytorch's torch.ger() function is written in C++ here: https://github.com/pytorch/pytorch/blob/7ce634ebc2943ff11d2ec727b7db83ab9758a6e0/aten/src/ATen/native/LinearAlgebra.cpp#L142 which makes it ever so slightly slower as you can see in your example.
Your options to "speed up computing outer product in PyTorch" would be to add a C implementation for outer product in pytorch's native code, or make your own outer product function while interfacing with C using something like Cython if you really don't want to use numpy (which wouldn't make much sense).
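As a side note, not a speedup of torch.ger() itself: the same outer product can be written with broadcasting, which may be worth benchmarking on your machine (beware that the result alone takes about 10 GB in float32 at this size):
import torch

t1 = torch.arange(50500, dtype=torch.float32)
t2 = t1.clone()
# (n, 1) * (1, n) broadcasts to the (n, n) outer product
outer = t1.unsqueeze(1) * t2.unsqueeze(0)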
P.S.
Also just as an aside, using GPUs would only improve your parallel computation speed on the GPU which may not outweigh the cost of time required to transfer data between RAM and GPU memory.
|
https://stackoverflow.com/questions/54357836/
|
packed_padded_sequence gives error when used with GPU
|
I am trying to set up an RNN capable of utilizing a GPU but packed_padded_sequence gives me a
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor
Here is how I direct GPU computing:
parser = argparse.ArgumentParser(description='Trainer')
parser.add_argument('--disable-cuda', action='store_true',
help='Disable CUDA')
args = parser.parse_args()
args.device = None
if not args.disable_cuda and torch.cuda.is_available():
args.device = torch.device('cuda')
torch.set_default_tensor_type(torch.cuda.FloatTensor)
else:
args.device = torch.device('cpu')
Here is the relevant part of the code:
def Tensor_length(track):
"""Finds the length of the non zero tensor"""
return int(torch.nonzero(track).shape[0] / track.shape[1])
.
.
.
def forward(self, tracks, leptons):
self.rnn.flatten_parameters()
# list of event lengths
n_tracks = torch.tensor([Tensor_length(tracks[i])
for i in range(len(tracks))])
sorted_n, indices = torch.sort(n_tracks, descending=True)
sorted_tracks = tracks[indices].to(args.device)
sorted_leptons = leptons[indices].to(args.device)
# import pdb; pdb.set_trace()
output, hidden = self.rnn(pack_padded_sequence(sorted_tracks,
lengths=sorted_n.cpu().numpy(),
batch_first=True)) # this gives the error
combined_out = torch.cat((sorted_leptons, hidden[-1]), dim=1)
out = self.fc(combined_out) # add lepton data to the matrix
out = self.softmax(out)
return out, indices # passing indices for reorganizing truth
I have tried everything from casting sorted_n to a long tensor to having it be a list, but I always seem to get the same error.
I have not worked with pytorch on gpu before and any advice will be greatly appreciated.
Thanks!
|
I assume you are using a GPU, probably on Google Colab. Check your device:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
You may solve this error by downgrading the torch version; if you are using Colab, the following command will help you:
!pip install torch==1.6.0 torchvision==0.7.0
Once you downgrade torch, this padded-length error will go away.
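An alternative that avoids downgrading (a sketch based directly on the error message: recent torch versions require lengths to be a 1D int64 tensor on the CPU, and the torch.set_default_tensor_type(torch.cuda.FloatTensor) call in the question can make the numpy lengths end up on the GPU internally):
lengths = sorted_n.cpu().to(torch.int64)  # explicitly a 1D CPU int64 tensor
output, hidden = self.rnn(pack_padded_sequence(sorted_tracks,
                                               lengths=lengths,
                                               batch_first=True))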
|
https://stackoverflow.com/questions/54358280/
|
Implementing an “infinite loop” Dataset & DataLoader in PyTorch
|
I’d like to implement an infinite loop Dataset & DataLoader. Here’s what I tried:
class Infinite(Dataset):
def __len__(self):
return HPARAMS.batch_size
# return 1<<30 # This causes huge memory usage.
def __getitem__(self, idx):
"""Randomly generates one new example."""
return sample_func_to_be_parallelized()
infinite_loader = DataLoader(
dataset=Infinite(),
batch_size=HPARAMS.batch_size,
num_workers=16,
worker_init_fn=lambda worker_id: np.random.seed(worker_id),
)
while True:
for idx, data in enumerate(infinite_loader):
# forward + backward on "data"
As you can see, the main challenge here is the __len__() method. If I put a large enough number there, like 1<<30, the symptom is that memory usage will JUMP TO 10+GB on the first iteration of the train loop. After a while the workers are killed, presumably due to OOM.
If I put a small number there, like 1 or BATCH_SIZE, the sampled “data” in the train loop will be periodically duplicated. This is not what I want as I’d like new data to be generated & trained on at every iteration.
I’m guessing the culprit of the excessive memory usage is somewhere in the stack, a bunch of things are cached. Upon a casual look at the Python side of things I can’t pinpoint where.
Can someone advise what’s the best way to have what I want implemented? (Use DataLoader’s parallel loading, while simultaneously guaranteeing every batch loaded is entirely new.)
|
This seems to be working without periodically duplicating the data:
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
BATCH_SIZE = 2
class Infinite(Dataset):
def __len__(self):
return BATCH_SIZE
def __getitem__(self, idx):
return torch.randint(0, 10, (3,))
data_loader = DataLoader(Infinite(), batch_size=BATCH_SIZE, num_workers=16)
batch_count = 0
while True:
batch_count += 1
print(f'Batch {batch_count}:')
data = next(iter(data_loader))
print(data)
# forward + backward on "data"
if batch_count == 5:
break
Result:
Batch 1:
tensor([[4, 7, 7],
[0, 8, 0]])
Batch 2:
tensor([[6, 8, 6],
[2, 6, 7]])
Batch 3:
tensor([[6, 6, 2],
[8, 7, 0]])
Batch 4:
tensor([[9, 4, 8],
[2, 4, 1]])
Batch 5:
tensor([[9, 6, 1],
[2, 7, 5]])
So I think the problem is in your function sample_func_to_be_parallelized().
Edit: If instead of torch.randint(0, 10, (3,)) I use np.random.randint(10, size=3) in __getitem__ (as an example of the sample_func_to_be_parallelized()), then the data is indeed duplicated at each batch. See this issue.
So if you are using numpy's RNG somewhere in your sample_func_to_be_parallelized(), then the workaround is to use
worker_init_fn=lambda worker_id: np.random.seed(np.random.get_state()[1][0] + worker_id)
and to reset the seed by np.random.seed() before each call of data = next(iter(data_loader)).
|
https://stackoverflow.com/questions/54359243/
|
How to fix this strange error: "RuntimeError: CUDA error: out of memory"
|
I successfully trained the network but got this error during validation:
RuntimeError: CUDA error: out of memory
|
The error occurs because you ran out of memory on your GPU.
One way to solve it is to reduce the batch size until your code runs without this error.
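Since the error appears during validation specifically, another common culprit is keeping the autograd graph alive for every validation batch. A minimal sketch (model, val_loader, and device are placeholders, not from the question):
model.eval()
with torch.no_grad():  # no graph is built, so activations are freed immediately
    for images, labels in val_loader:
        outputs = model(images.to(device))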
|
https://stackoverflow.com/questions/54374935/
|
How do I extract only subset of classes from torchvision.datasets.CIFAR10?
|
How do I extract only 2 or 3 classes from torchvision.datasets.CIFAR10?
Standard way of loading all 10 classes
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
|
By inspecting the code of CIFAR10, you can see that the data is stored as a numpy array and the labels are stored as a list. You can therefore subclass this and filter the two arrays accordingly. An example is below:
import numpy as np
import torchvision

class SubLoader(torchvision.datasets.CIFAR10):
def __init__(self, *args, exclude_list=[], **kwargs):
super(SubLoader, self).__init__(*args, **kwargs)
if exclude_list == []:
return
if self.train:
labels = np.array(self.train_labels)
exclude = np.array(exclude_list).reshape(1, -1)
mask = ~(labels.reshape(-1, 1) == exclude).any(axis=1)
self.train_data = self.train_data[mask]
self.train_labels = labels[mask].tolist()
else:
labels = np.array(self.test_labels)
exclude = np.array(exclude_list).reshape(1, -1)
mask = ~(labels.reshape(-1, 1) == exclude).any(axis=1)
self.test_data = self.test_data[mask]
self.test_labels = labels[mask].tolist()
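A usage sketch (transform is assumed to be defined as in the question; here we keep only classes 0 and 1 by excluding the other eight):
trainset = SubLoader(root='./data', train=True, download=True,
                     transform=transform, exclude_list=[2, 3, 4, 5, 6, 7, 8, 9])
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)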
|
https://stackoverflow.com/questions/54380140/
|
Compute the number of epoch from iteration in training?
|
I have a Caffe prototxt as follows:
stepsize: 20000
iter_size: 4
batch_size: 10
gamma = 0.1
in which the dataset has 40,000 images. It means that after 20000 iters, the learning rate will be decreased by a factor of 10. In pytorch, I want to compute the number of epochs that gives the same behavior as in caffe (for the learning rate). How many epochs should I use to decrease the learning rate by a factor of 10 (note that we have iter_size=4 and batch_size=10)? Thanks
Ref: Epoch vs Iteration when training neural networks
My answer: Example: if you have 40000 training examples and the batch size is 10, then it will take 40000/10 = 4000 iterations to complete 1 epoch. Hence, 20000 iters to reduce the learning rate in caffe would be the same as 5 epochs in pytorch.
|
You did not take into account iter_size: 4: when a batch is too large to fit into memory, you can "split" it into several iterations.
In your example, the actual batch size is batch_size x iter_size = 10 * 4 = 40. Therefore, an epoch takes only 1,000 iterations, and you therefore need to decrease the learning rate after 20 epochs.
|
https://stackoverflow.com/questions/54385679/
|
Preprocessing in image recognition
|
I am a beginner in image recognition and need some help about preprocessing images.
I use the transfer-learning model resnet18 to do the recognition work, and I get:
In [3]: pretrainedmodels.pretrained_settings['resnet18']
Out[3]:
{'imagenet': {'url':
'https://download.pytorch.org/models/resnet18-
5c106cde.pth',
'input_space': 'RGB',
'input_size': [3, 224, 224],
'input_range': [0, 1],
'mean': [0.485, 0.456, 0.406],
'std': [0.229, 0.224, 0.225],
'num_classes': 1000}}
I find that this mean and std are quite different from my image dataset's.
How should I normalize my trainset? Should I use the mean and std above, or the mean and std I calculate myself?
I divided my dataset into train_set, valid_set and test_set.
I have two methods:
A. Calculate their mean and std and normalize them individually.
B. Calculate the whole dataset's mean and std and then do the normalization.
Which one is right?
3. When should I do normalization? Before data augmentation or after data augmentation?
|
If you are training a new model on your own dataset, even with pre-trained weights, you will need a new mean and std for your new dataset.
Basically you will need to repeat the process ImageNet went through. Make a script that calculates the overall [mean, std] values of your entire dataset, as sketched below.
But remember to keep an eye on your dataset distribution, as it will definitely affect model performance.
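A rough sketch of such a script (the dataset path is a placeholder, and it assumes 3-channel images loaded with ToTensor()):
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

dataset = datasets.ImageFolder('path/to/train', transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=64, num_workers=2)

n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in loader:
    # images: [B, 3, H, W]; accumulate over batch and spatial dimensions
    n_pixels += images.size(0) * images.size(2) * images.size(3)
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
print(mean, std)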
Then define a transformer individually for your train/validation sets. Usually we do not normalize the test set, since in a real-world scenario your model will take in data of all sorts. You should perform the normalization when building the dataset, together with the other augmentation techniques.
For instance, consider this toy example
"transformer": {
"train": transforms.Compose([
transforms.Resize(size=299),
transforms.RandomHorizontalFlip(p=0.2),
transforms.ToTensor(),
transforms.Normalize(new_mean, new_std)
]),
"valid": transforms.Compose([
transforms.Resize(size=299),
transforms.ToTensor(),
])
}
train_ds = CustomDataset(type="train", transformer=transformer["train"])
valid_ds = CustomDataset(type="valid", transformer=transformer["valid"])
Let me know if you have any more questions.
|
https://stackoverflow.com/questions/54395051/
|
How to (quickly) extract bilinear-interpolated patches from a 2d image at specific points?
|
Update: The original question formulation was a bit unclear. I am not just cropping the image but applying bilinear interpolation during the patch extraction process. (See the paper reference below.) That's why the algorithm is a bit more involved than just taking slices.
I am trying to train a deep learning model to predict face landmarks following this paper. I need to crop the parts of the image that contain the face into smaller patches around the facial landmarks. For example, if we have the image shown below:
The function should generate N=15 "patches", one patch per landmark:
I have the following naïve implementation built on top of torch tensors:
def generate_patch(x, y, w, h, image):
c = image.size(0)
patch = torch.zeros((c, h, w), dtype=image.dtype)
for q in range(h):
for p in range(w):
yq = y + q - (h - 1)/2
xp = x + p - (w - 1)/2
xd = 1 - (xp - math.floor(xp))
xu = 1 - (math.ceil(xp) - xp)
yd = 1 - (yq - math.floor(yq))
yu = 1 - (math.ceil(yq) - yq)
for idx in range(c):
patch[idx, q, p] = (
image[idx, math.floor(yq), math.floor(xp)]*yd*xd +
image[idx, math.floor(yq), math.ceil(xp)]*yd*xu +
image[idx, math.ceil(yq), math.floor(xp)]*yu*xd +
image[idx, math.ceil(yq), math.ceil(xp)]*yu*xu
).item()
return patch
def generate_patches(image, points, n=None, sz=31):
if n is None:
n = len(points)//2
patches = []
for i in range(n):
x_val, y_val = points[i], points[i + n]
patch = generate_patch(x_val, y_val, sz, sz, image)
patches.append(patch)
return patches
The code does its work, but far too slowly. I guess it's because of all these for-loops and the indexing of separate pixels. I would like to vectorize this code, or maybe find some C-based implementation that could do it faster.
I know there is the extract_patches_2d function from sklearn package that helps to pick random patches from the image. However, I would like to pick the patches from specific points instead of doing it randomly. I guess that I can somehow adapt the aforementioned function, or convert the implementation shown above into Cython/C code but probably someone has already done something like this before.
Could you please advise some alternative to the code shown above, or maybe a proposal on how to make it faster? (Except using several parallel workers).
|
1) Use numpy.
2) Select patches with index extraction. Example:
Patch = img[0:100, 0:100]
3) Create a 3-dimensional body where the 3rd dimension holds the patches: [15 x 15 x number of patches].
4) Do your bilinear interpolation with numpy for all patches at the same time (instead of one pixel, calculate with all pixels along the 3rd dimension).
That will speed up your processing beyond your imagination.
If you don't want to grow old waiting for your job to finish, forget the math module. It has no place in data science.
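To make point 4 concrete, here is a minimal sketch of fully vectorized bilinear patch extraction in numpy. It is not tested against the original implementation: it uses floor/floor+1 neighbors instead of floor/ceil, and assumes all sample coordinates stay at least one pixel inside the image (no boundary clipping):
import numpy as np

def generate_patches_vectorized(image, xs, ys, sz=31):
    """image: (C, H, W) array; xs, ys: landmark coordinates of length n."""
    offs = np.arange(sz) - (sz - 1) / 2
    xp = np.asarray(xs)[:, None] + offs[None, :]   # (n, sz) sample columns
    yq = np.asarray(ys)[:, None] + offs[None, :]   # (n, sz) sample rows
    x0 = np.floor(xp).astype(int); x1 = x0 + 1
    y0 = np.floor(yq).astype(int); y1 = y0 + 1
    wx = xp - x0                                   # (n, sz) horizontal weights
    wy = yq - y0                                   # (n, sz) vertical weights
    # broadcast row/column indices and weights to (n, sz, sz) grids
    Y0, Y1, WY = y0[:, :, None], y1[:, :, None], wy[:, :, None]
    X0, X1, WX = x0[:, None, :], x1[:, None, :], wx[:, None, :]
    out = (image[:, Y0, X0] * (1 - WY) * (1 - WX) +
           image[:, Y0, X1] * (1 - WY) * WX +
           image[:, Y1, X0] * WY * (1 - WX) +
           image[:, Y1, X1] * WY * WX)
    return out                                     # shape (C, n, sz, sz)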
|
https://stackoverflow.com/questions/54421321/
|
Using CNN for pixel classification,how?
|
I have already learned about some classification using CNNs, like for MNIST. But recently I received a dataset which consists of a set of vectors. A normal image dataset (mnist) is shaped like n x c x w x h. The one I received is (w*h) x 1 x c. The goal is to train a network to classify these pixels (as I understand it, a classification of pixels). The labels are given as a ground-truth picture.
I am a little confused about this task. As I understand it, for image processing we use a CNN with different receptive fields to perform the convolution operations so that features can be obtained to represent the image. But in this case, the image has already been expanded into a pixel set. Why is a convolutional neural network still suitable?
I am still not sure about the task, but I started to try. I used 1d convolutions instead of 2d in the network. After four Conv1d layers, the output is connected to a softmax layer and then fed to a cross-entropy loss function. It seems I have some problems with the output dimensions, so the network is not able to train.
I use pytorch to implement this. Below is the network I am trying to build. The dimensions do not match those needed for the cross-entropy loss function. 122500 was set to be the number of samples, so I think the convolution was performed along the 1-200 direction.
First, I want to know: is it right to implement this using conv1d when I want to classify the pixels?
If this thought is right, how can I continue to feed the features to the loss function?
If it is wrong, can I have some similar examples for this kind of work? I am new to python, so if there are some stupid mistakes, please point them out.
Thanks all.
class network(nn.Module):
    """
    Building network
    """
    def __init__(self):
        super(network, self).__init__()
        self.conv1 = nn.Conv1d(in_channels=1, out_channels=32, stride=1, kernel_size=3)
        self.conv2 = nn.Conv1d(in_channels=32, out_channels=64, stride=1, kernel_size=3)
        self.conv3 = nn.Conv1d(in_channels=64, out_channels=128, stride=1, kernel_size=3)
        self.conv4 = nn.Conv1d(in_channels=128, out_channels=256, stride=1, kernel_size=3)
        self.fc = nn.Linear(13, 2)
    def forward(self, s):
        s = self.conv1(s)
        s = F.relu(F.max_pool1d(s, 2))
        s = self.conv2(s)
        s = F.relu(F.max_pool1d(s, 2))
        s = self.conv3(s)
        s = F.relu(F.max_pool1d(s, 2))
        s = self.conv4(s)
        s = F.relu(F.max_pool1d(s, 2))
        s = self.fc(s)
        s = F.softmax(s, 1)
        return s  # the forward pass was missing a return

output = model(input)
loss = loss_fn(output, labels)
|
I guess what you're supposed to do is image segmentation, and in the shape of the labels you got, the last dimension of 200 corresponds to 200 possible categories of pixels (that sounds like a lot to me, but without more context I cannot judge). The problem of image segmentation is way too broad to explain in an SO answer, but I suggest you check resources such as this tutorial and check out the influential papers in this field.
|
https://stackoverflow.com/questions/54426268/
|
Accessing PyTorch GPU matrix from TensorFlow directly
|
I have a neural network written in PyTorch that outputs some Tensor a on the GPU. I would like to continue processing a with a highly efficient TensorFlow layer.
As far as I know, the only way to do this is to move a from GPU memory to CPU memory, convert to numpy, and then feed that into TensorFlow. A simplified example:
import torch
import tensorflow as tf
# output of some neural network written in PyTorch
a = torch.ones((10, 10), dtype=torch.float32).cuda()
# move to CPU / pinned memory
c = a.to('cpu', non_blocking=True)
# setup TensorFlow stuff (only needs to happen once)
sess = tf.Session()
c_ph = tf.placeholder(tf.float32, shape=c.shape)
c_mean = tf.reduce_mean(c_ph)
# run TensorFlow
print(sess.run(c_mean, feed_dict={c_ph: c.numpy()}))
This may be a bit far-fetched, but is there a way to make it so that either
a never leaves GPU memory, or
a goes from GPU memory to Pinned Memory to GPU memory.
I attempted 2. in the code snippet above using non_blocking=True, but I am not sure if it does what I expect (i.e. moves it to pinned memory).
Ideally, my TensorFlow graph would operate directly on the memory occupied by the PyTorch tensor, but I suppose that is not possible?
|
I am not familiar with tensorflow, but you may use pytorch to expose the "internals" of a tensor.
You can access the underlying storage of a tensor
a.storage()
Once you have the storage, you can get a pointer to the memory (either CPU or GPU):
a.storage().data_ptr()
You can check if it is pinned or not
a.storage().is_pinned()
And you can pin it
a.storage().pin_memory()
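Putting these together, a small sketch (note that pin_memory() returns a pinned copy of the storage rather than pinning it in place):
import torch

a = torch.ones(10, 10)       # a CPU tensor
s = a.storage()              # the underlying storage
print(s.data_ptr())          # raw pointer to the memory
print(s.is_pinned())         # False for ordinary pageable CPU memory
pinned = s.pin_memory()      # a pinned copy of the storage
print(pinned.is_pinned())    # True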
I am not familiar with the interfaces between pytorch and tensorflow, but I came across an example of a package (FAISS) directly accessing pytorch tensors on the GPU.
|
https://stackoverflow.com/questions/54444137/
|
conda list returning run-time error Path not Found after installing PyTortch
|
Recently I tried to install PyTorch using the command
conda install pytorch torchvision cuda100 -c pytorch
To verify the package installed correctly I ran conda list in the anaconda prompt and got the following error:
RuntimeError: Path not found: C:\Users\[name]\AppData\Local\Continuum\Anaconda3\Lib\site-packages\Sphinx-1.5.6-py3.6.egg\EGG-INFO
I'm currently running conda version 4.6.1 and python version 3.6.7 on windows 10, I'd appreciate any help in determining what caused this error and how it can be fixed so I can properly manage my anaconda packages.
Full stack trace:
Traceback (most recent call last):
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\exceptions.py", line 1001, in __call__
return func(*args, **kwargs)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\main.py", line 84, in _main
exit_code = do_call(args, p)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\conda_argparse.py", line 81, in do_call
exit_code = getattr(module, func_name)(args, parser)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\main_list.py", line 142, in execute
show_channel_urls=context.show_channel_urls)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\main_list.py", line 80, in print_packages
show_channel_urls=show_channel_urls)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\cli\main_list.py", line 45, in list_packages
installed = sorted(PrefixData(prefix, pip_interop_enabled=True).iter_records(),
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\core\prefix_data.py", line 116, in iter_records
return itervalues(self._prefix_records)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\core\prefix_data.py", line 145, in _prefix_records
return self.__prefix_records or self.load() or self.__prefix_records
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\core\prefix_data.py", line 69, in load
self._load_site_packages()
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\core\prefix_data.py", line 258, in _load_site_packages
python_record = read_python_record(self.prefix_path, af, python_pkg_record.version)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\gateways\disk\read.py", line 245, in read_python_record
pydist = PythonDistribution.init(prefix_path, anchor_file, python_version)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\common\pkg_formats\python.py", line 90, in init
return PythonEggInfoDistribution(anchor_full_path, python_version, sp_reference)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\common\pkg_formats\python.py", line 400, in __init__
super(PythonEggInfoDistribution, self).__init__(anchor_full_path, python_version)
File "C:\Users\[name]\AppData\Local\Continuum\Anaconda3\lib\site-packages\conda\common\pkg_formats\python.py", line 104, in __init__
raise RuntimeError("Path not found: %s" % anchor_full_path)
RuntimeError: Path not found: C:\Users\[name]\AppData\Local\Continuum\Anaconda3\Lib\site-packages\Sphinx-1.5.6-py3.6.egg\EGG-INFO
any help would be appreciated.
|
As @P.Antoniadis said, this is an ongoing issue, and removing the 'Sphinx-1.5.6-py3.6.egg' folder is the suggested workaround.
https://github.com/conda/conda/issues/8156#issuecomment-458777849
|
https://stackoverflow.com/questions/54445893/
|
How to use GPUs with Ray in Pytorch? Should I specify the num_gpus for the remote class?
|
When I use Ray with pytorch, I do not set any num_gpus flag for the remote class.
I get the following error:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False.
The main process is: I create a remote class and transfer a pytorch model state_dict() (created in the main function) to it. In the main function, torch.cuda.is_available() is True, but in the remote function, torch.cuda.is_available() is False. Thanks
I tried to set num_gpus=1 and got a new issue: the program just got stuck. Below is the minimal example code for reproducing this issue. Thanks.
import ray
@ray.remote(num_gpus=1)
class Worker(object):
def __init__(self, args):
self.args = args
self.gen_frames = 0
def set_gen_frames(self, value):
self.gen_frames = value
return self.gen_frames
def get_gen_num(self):
return self.gen_frames
class Parameters:
def __init__(self):
self.is_cuda = False
self.is_memory_cuda = True
self.pop_size = 10
if __name__ == "__main__":
ray.init()
args = Parameters()
workers = [Worker.remote(args) for _ in range(args.pop_size)]
get_num_ids = [worker.get_gen_num.remote() for worker in workers]
gen_nums = ray.get(get_num_ids)
print(gen_nums)
|
If you want to deploy the model on a GPU, you need to make sure that your actor or task actually has access to a GPU; with @ray.remote(num_gpus=1), torch.cuda.is_available() will be true in that remote function. If you want to deploy your model on a CPU, you need to specify that when loading the model; see for example https://github.com/pytorch/pytorch/issues/9139.
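For the CPU case, a minimal sketch of what that issue suggests (the checkpoint path and model are placeholders):
import torch

state_dict = torch.load('checkpoint.pth', map_location=torch.device('cpu'))
model.load_state_dict(state_dict)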
|
https://stackoverflow.com/questions/54451362/
|
AttributeError: module 'torch' has no attribute 'device'
|
In following the Pytorch tutorial at https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
I received the following error:
(pt_gpu) [martin@A08-R32-I196-3-FZ2LTP2 mlm]$ python pytorch-1.py
Traceback (most recent call last):
File "pytorch-1.py", line 39, in <module>
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
AttributeError: module 'torch' has no attribute 'device'
In my code below, I added this statement:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)
But this seems not right, or not enough. This is the first time I run Pytorch with a GPU on a Linux machine. What else should I do to get it running correctly?
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
print(transform)
trainSet = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainLoader = torch.utils.data.DataLoader(trainSet, batch_size=4, shuffle=True, num_workers=2)
testSet = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testLoader = torch.utils.data.DataLoader(testSet, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):
running_loss = 0.0
for i, data in enumerate(trainLoader, 0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % 2000 == 1999:
print('[%d, %5d] loss %.3f' % (epoch + 1, i + 1, running_loss / 2000))
print('Finished training!')
def imshow(img):
img = img / 2 + 0.5
npimg = img.numpy()
plt.imshow(numpy.transpose(npimg, (1, 2, 0)))
plt.show()
dataIter = iter(trainLoader)
images, labels = dataIter.next()
# imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
outputs = net(images)
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
dataIter = iter(testLoader)
images, labels = dataIter.next()
# imshow(torchvision.utils.make_grid(images))
correct = 0
total = 0
with torch.no_grad():
for data in testLoader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print("accuracy: %d %%", 100 * correct / total)
EDIT:
My conda version is the latest:
(pt_gpu) [martin@A08-R32-I196-3-FZ2LTP2 mlm]$ conda -V
conda 4.6.2
Then I installed pytorch-gpu with:
(pt_gpu) [martin@A08-R32-I196-3-FZ2LTP2 mlm]$ conda install -c anaconda pytorch-gpu
As you can see, the version 0.1.12 is installed:
Collecting package metadata: done
Solving environment: done
## Package Plan ##
environment location: /home/martin/anaconda3/envs/pt_gpu
added / updated specs:
- pytorch-gpu
The following packages will be downloaded:
package | build
---------------------------|-----------------
ca-certificates-2018.12.5 | 0 123 KB anaconda
certifi-2018.11.29 | py36_0 146 KB anaconda
pytorch-gpu-0.1.12 | py36_0 16.8 MB anaconda
------------------------------------------------------------
Total: 17.0 MB
The following packages will be UPDATED:
openssl pkgs/main::openssl-1.1.1a-h7b6447c_0 --> anaconda::openssl-1.1.1-h7b6447c_0
The following packages will be SUPERSEDED by a higher-priority channel:
ca-certificates pkgs/main --> anaconda
certifi pkgs/main --> anaconda
mkl pkgs/main::mkl-2017.0.4-h4c4d0af_0 --> anaconda::mkl-2017.0.1-0
pytorch-gpu pkgs/free --> anaconda
Proceed ([y]/n)? y
Downloading and Extracting Packages
certifi-2018.11.29 | 146 KB | ########################################################################################################################## | 100%
ca-certificates-2018 | 123 KB | ########################################################################################################################## | 100%
pytorch-gpu-0.1.12 | 16.8 MB | ########################################################################################################################## | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
To verify the version, I do:
(pt_gpu) [martin@A08-R32-I196-3-FZ2LTP2 mlm]$ python -c "import torch; print(torch.__version__)"
0.1.12
Why does it install such a low version?
|
Although this question is very old, I would recommend those who are facing this problem to visit pytorch.org and check the command to install pytorch from there; there is a section dedicated to this: an interactive selector that builds the install command for your OS, package manager, Python version, and CUDA version, including one for your Linux case.
As you can see there, the command you used to install pytorch is different from the official one, which is why you ended up with version 0.1.12. I have not tested it on Linux, but I used the command for Windows and it worked great for me on Anaconda. (Initially, I also got the same error; that was before following this.)
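For reference, at the time of writing the selector produced a command of this shape for Linux + conda + CUDA 10.0 (adjust the cudatoolkit version to your setup):
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch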
|
https://stackoverflow.com/questions/54466772/
|
GANs on color images
|
Most (PyTorch) open-source GANs work on the MNIST dataset, i.e. gray-level images.
Can I use a GAN on each channel of a color image, then combine the result?
|
You can just have your generator and discriminator generate and classify 3-channel images - speaking in terms of implementation, make them work on B x 3 x H x W tensors instead of B x 1 x H x W, as they do for MNIST.
You can't just use your GAN on each channel separately and concatenate at the end, because you would have no way to ensure that each channel corresponds to the same image. Say you're generating celebrity faces by first generating red channel, then green and finally blue. How would you make sure that you don't get a female sample for the red channel and a male for the green?
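As a minimal sketch of the channel change (DCGAN-style layers; nz, ngf, ndf are the usual hypothetical hyperparameter names, not from the question):
import torch.nn as nn

nz, ngf, ndf = 100, 64, 64
# discriminator input: read 3 channels instead of 1
disc_in = nn.Conv2d(3, ndf, kernel_size=4, stride=2, padding=1)
# generator output: write 3 channels instead of 1
gen_out = nn.ConvTranspose2d(ngf, 3, kernel_size=4, stride=2, padding=1)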
|
https://stackoverflow.com/questions/54484832/
|
How to return back to cpu from gpu in pytorch?
|
I have a text classifier in pytorch and I want to use GPUs to increase running speed.
I have used this part of code to check CUDA and use it:
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
my_rnn_model = nn.DataParallel(my_rnn_model)
if torch.cuda.is_available():
my_rnn_model.cuda()
Now I want to go back to using the CPU (instead of the GPU), so I removed this part of the code. But it doesn't work and I receive this error:
RuntimeError: cuda runtime error (8) : invalid device function at /opt/conda/conda-bld/pytorch_1503963423183/work/torch/lib/THC/THCTensorCopy.cu:204
Would you please guide me on how I can go back to running on the CPU?
|
You can set the device that you want to use with:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
And in your case you can return to the CPU using:
torch.device('cpu')
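Then move the model there; a minimal sketch (any input tensors must be moved to the same device as well):
cpu = torch.device('cpu')
my_rnn_model = my_rnn_model.to(cpu)
# inputs, labels = inputs.to(cpu), labels.to(cpu)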
|
https://stackoverflow.com/questions/54490351/
|
Problem with building PyTorch from source on Linux
|
❓ Problem with building PyTorch from source
Hello everyone,
I have a problem with building PyTorch from source. I followed the official build instructions. I use Anaconda Python 3.7.1 (version 2018.12, build py37_0). I installed all necessary dependencies using conda and issued the python setup.py install command to build it. It builds all files successfully but then fails at the installation step saying:
build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
I tried building with gcc/g++ versions 5,6,7 but it did not help. Can you help me resolve this problem?
Build output in terminal
This is the output of python setup.py install with the error I am getting:
[manjaro-pc pytorch]# python setup.py install
Building wheel torch-1.1.0a0+44809fd
-- Building version 1.1.0a0+44809fd
[0/1] Install the project...
-- Install configuration: "Release"
running install
running build
running build_py
copying torch/version.py -> build/lib.linux-x86_64-3.7/torch
copying caffe2/proto/caffe2_pb2.py -> build/lib.linux-x86_64-3.7/caffe2/proto
copying caffe2/proto/caffe2_legacy_pb2.py -> build/lib.linux-x86_64-3.7/caffe2/proto
copying caffe2/proto/predictor_consts_pb2.py -> build/lib.linux-x86_64-3.7/caffe2/proto
copying caffe2/proto/metanet_pb2.py -> build/lib.linux-x86_64-3.7/caffe2/proto
copying caffe2/proto/torch_pb2.py -> build/lib.linux-x86_64-3.7/caffe2/proto
copying caffe2/proto/prof_dag_pb2.py -> build/lib.linux-x86_64-3.7/caffe2/proto
copying caffe2/proto/hsm_pb2.py -> build/lib.linux-x86_64-3.7/caffe2/proto
running build_ext
-- Building with NumPy bindings
-- Not using cuDNN
-- Not using MIOpen
-- Not using CUDA
-- Not using MKLDNN
-- Not using NCCL
-- Building without distributed package
Copying extension caffe2.python.caffe2_pybind11_state
Copying caffe2.python.caffe2_pybind11_state from torch/lib/python3.7/site-packages/caffe2/python/caffe2_pybind11_state.cpython-37m-x86_64-linux-gnu.so to /home/manjaro/Downloads/pytorch/build/lib.linux-x86_64-3.7/caffe2/python/caffe2_pybind11_state.cpython-37m-x86_64-linux-gnu.so
building 'torch._C' extension
gcc -pthread -B /opt/anaconda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/opt/anaconda/include/python3.7m -c torch/csrc/stub.cpp -o build/temp.linux-x86_64-3.7/torch/csrc/stub.o -std=c++11 -Wall -Wextra -Wno-strict-overflow -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-deprecated-declarations -fno-strict-aliasing -Wno-missing-braces
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
g++ -pthread -shared -B /opt/anaconda/compiler_compat -L/opt/anaconda/lib -Wl,-rpath=/opt/anaconda/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.7/torch/csrc/stub.o -L/home/manjaro/Downloads/pytorch/torch/lib -lshm -ltorch_python -o build/lib.linux-x86_64-3.7/torch/_C.cpython-37m-x86_64-linux-gnu.so -Wl,-rpath,$ORIGIN/lib
/opt/anaconda/compiler_compat/ld: build/temp.linux-x86_64-3.7/torch/csrc/stub.o: unable to initialize decompress status for section .debug_info
/opt/anaconda/compiler_compat/ld: build/temp.linux-x86_64-3.7/torch/csrc/stub.o: unable to initialize decompress status for section .debug_info
/opt/anaconda/compiler_compat/ld: build/temp.linux-x86_64-3.7/torch/csrc/stub.o: unable to initialize decompress status for section .debug_info
/opt/anaconda/compiler_compat/ld: build/temp.linux-x86_64-3.7/torch/csrc/stub.o: unable to initialize decompress status for section .debug_info
build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1
From this output, I assume the error was caused by this line:
g++ -pthread -shared -B /opt/anaconda/compiler_compat -L/opt/anaconda/lib -Wl,-rpath=/opt/anaconda/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.7/torch/csrc/stub.o -L/home/manjaro/Downloads/pytorch/torch/lib -lshm -ltorch_python -o build/lib.linux-x86_64-3.7/torch/_C.cpython-37m-x86_64-linux-gnu.so -Wl,-rpath,$ORIGIN/lib
My environment
PyTorch version: 1.0
Is debug build: N/A
CUDA used to build PyTorch: 10
OS: Manjaro Linux
GCC version: (GCC) 6.4.1 20171002
CMake version: version 3.12.2
Python version: 3.7
Is CUDA available: N/A
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce GTX 650
Nvidia driver version: 415.27
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.7.4.1
/usr/lib/libcudnn_static_v7.a
Versions of relevant libraries:
[pip] Could not collect
[conda] blas 1.0 mkl
[conda] magma-cuda100 2.4.0 1 pytorch
[conda] mkl 2019.1 144
[conda] mkl-include 2019.1 144
[conda] mkl-service 1.1.2 py37he904b0f_5
[conda] mkl_fft 1.0.6 py37hd81dba3_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] mkldnn 0.16.1 0 mingfeima
My CMake configuration printed before building PyTorch
Building wheel torch-1.1.0a0+44809fd
-- Building version 1.1.0a0+44809fd
['cmake',
'/home/manjaro/Downloads/pytorch',
'-GNinja',
'-DBLAS=MKL',
'-DBUILDING_WITH_TORCH_LIBS=ON',
'-DBUILD_BINARY=False',
'-DBUILD_CAFFE2_OPS=False',
'-DBUILD_PYTHON=True',
'-DBUILD_SHARED_LIBS=ON',
'-DBUILD_TEST=False',
'-DBUILD_TORCH=ON',
'-DCAFFE2_STATIC_LINK_CUDA=False',
'-DCMAKE_BUILD_TYPE=Release',
'-DCMAKE_CXX_FLAGS= ',
'-DCMAKE_C_FLAGS= ',
'-DCMAKE_EXE_LINKER_FLAGS=',
'-DCMAKE_INSTALL_PREFIX=/home/manjaro/Downloads/pytorch/torch',
'-DCMAKE_PREFIX_PATH=/opt/anaconda/bin/../',
'-DCMAKE_SHARED_LINKER_FLAGS=',
'-DINSTALL_TEST=False',
'-DNCCL_EXTERNAL=False',
'-DNUMPY_INCLUDE_DIR=/opt/anaconda/lib/python3.7/site-packages/numpy/core/include',
'-DONNX_NAMESPACE=onnx_torch',
'-DPYTHON_EXECUTABLE=/opt/anaconda/bin/python',
'-DPYTHON_INCLUDE_DIR=/opt/anaconda/include/python3.7m',
'-DPYTHON_LIBRARY=/opt/anaconda/lib/libpython3.7m.so.1.0',
'-DTHD_SO_VERSION=1',
'-DTORCH_BUILD_VERSION=1.1.0a0+44809fd',
'-DUSE_CUDA=False',
'-DUSE_DISTRIBUTED=False',
'-DUSE_FBGEMM=False',
'-DUSE_FFMPEG=False',
'-DUSE_LEVELDB=False',
'-DUSE_LMDB=False',
'-DUSE_MKLDNN=False',
'-DUSE_NNPACK=False',
'-DUSE_NUMPY=True',
'-DUSE_OPENCV=False',
'-DUSE_QNNPACK=False',
'-DUSE_ROCM=False',
'-DUSE_SYSTEM_EIGEN_INSTALL=OFF',
'-DUSE_SYSTEM_NCCL=False',
'-DUSE_TENSORRT=False']
{'BLAS': 'MKL',
'BUILD_CAFFE2_OPS': '0',
'BUILD_TEST': '0',
'CMAKE_PREFIX_PATH': '/opt/anaconda/bin/../',
'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus',
'HG': '/usr/bin/hg',
'HOME': '/root',
'LANG': 'en_US.UTF-8',
'LC_ADDRESS': 'sk_SK.UTF-8',
'LC_IDENTIFICATION': 'sk_SK.UTF-8',
'LC_MEASUREMENT': 'sk_SK.UTF-8',
'LC_MONETARY': 'sk_SK.UTF-8',
'LC_NAME': 'sk_SK.UTF-8',
'LC_NUMERIC': 'sk_SK.UTF-8',
'LC_PAPER': 'sk_SK.UTF-8',
'LC_TELEPHONE': 'sk_SK.UTF-8',
'LC_TIME': 'sk_SK.UTF-8',
'LD_LIBRARY_PATH': ':/usr/local/lib64:/usr/local/lib:/opt/cuda/lib64',
'LOGNAME': 'manjaro',
'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.pdf=00;32:*.ps=00;32:*.txt=00;32:*.patch=00;32:*.diff=00;32:*.log=00;32:*.tex=00;32:*.doc=00;32:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:',
'MAIL': '/var/spool/mail/manjaro',
'MAX_JOBS': '4',
'MOZ_PLUGIN_PATH': '/usr/lib/mozilla/plugins',
'NO_CAFFE2_OPS': '1',
'NO_CUDA': '1',
'NO_CUDNN': '1',
'NO_DISTRIBUTED': '1',
'NO_FBGEMM': '1',
'NO_MIOPEN': '1',
'NO_MKLDNN': '1',
'NO_NNPACK': '1',
'NO_QNNPACK': '1',
'NO_TEST': '1',
'OLDPWD': '/home/manjaro/Downloads',
'PATH': '/opt/anaconda/bin:/opt/anaconda/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/cuda/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/usr/local/bin:/opt/cuda/bin:/usr/local/bin:/opt/cuda/bin',
'PWD': '/home/manjaro/Downloads/pytorch',
'PYTHONPATH': ':/usr/local/lib/python3.7/site-packages/:/usr/local/lib/python3.7/site-packages/',
'QT_LINUX_ACCESSIBILITY_ALWAYS_ON': '1',
'SHELL': '/bin/bash',
'SHLVL': '2',
'SSH_CLIENT': '62.197.243.30 45360 44444',
'SSH_CONNECTION': '62.197.243.30 45360 192.168.1.147 44444',
'SSH_TTY': '/dev/pts/4',
'TERM': 'xterm-256color',
'TORCH_CUDA_ARCH_LIST': '3.0',
'USER': 'manjaro',
'USE_CUDA': '0',
'USE_CUDNN': '0',
'USE_DISTRIBUTED': '0',
'USE_FBGEMM': '0',
'USE_MIOPEN': '0',
'USE_MKLDNN': '0',
'USE_NNPACK': '0',
'USE_QNNPACK': '0',
'XDG_DATA_DIRS': '/home/manjaro/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share',
'XDG_RUNTIME_DIR': '/run/user/1000',
'XDG_SESSION_ID': 'c8',
'YAOURT_COLORS': 'nb=1:pkg=1:ver=1;32:lver=1;45:installed=1;42:grp=1;34:od=1;41;5:votes=1;44:dsc=0:other=1;35',
'_': '/opt/anaconda/bin/python'}
-- The CXX compiler identification is GNU 5.4.1
-- The C compiler identification is GNU 5.4.1
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Success
-- Performing Test SUPPORT_GLIBCXX_USE_C99
-- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- Performing Test CAFFE2_IS_NUMA_AVAILABLE
-- Performing Test CAFFE2_IS_NUMA_AVAILABLE - Success
-- NUMA is available
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Success
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/home/manjaro/Downloads/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Checking for [mkl_intel_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
-- Library mkl_intel_lp64: /opt/anaconda/lib/libmkl_intel_lp64.so
-- Library mkl_gnu_thread: /opt/anaconda/lib/libmkl_gnu_thread.so
-- Library mkl_core: /opt/anaconda/lib/libmkl_core.so
-- Found OpenMP_C: -fopenmp (found version "4.0")
-- Found OpenMP_CXX: -fopenmp (found version "4.0")
-- Found OpenMP: TRUE (found version "4.0")
-- Library gomp: -fopenmp
-- Library pthread: /usr/lib/libpthread.so
-- Library m: /usr/lib/libm.so
-- Library dl: /usr/lib/libdl.so
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- MKL library found
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Caffe2: Found gflags with new-style gflags target.
-- Caffe2: Cannot find glog automatically. Using legacy find.
-- Found glog: /usr/include
-- Caffe2: Found glog (include: /usr/include, library: /usr/lib/libglog.so)
-- Found Numa: /usr/include
-- Found Numa (include: /usr/include, library: /usr/lib/libnuma.so)
-- Downloading PSimd to /home/manjaro/Downloads/pytorch/build/confu-srcs/psimd (define PSIMD_SOURCE_DIR to avoid it)
-- Configuring done
-- Generating done
-- Build files have been written to: /home/manjaro/Downloads/pytorch/build/confu-deps/psimd-download
[1/9] Creating directories for 'psimd'
[2/9] Performing download step (git clone) for 'psimd'
Cloning into 'psimd'...
Already on 'master'
Your branch is up to date with 'origin/master'.
[3/9] No patch step for 'psimd'
[4/9] Performing update step for 'psimd'
Current branch master is up to date.
[5/9] No configure step for 'psimd'
[6/9] No build step for 'psimd'
[7/9] No install step for 'psimd'
[8/9] No test step for 'psimd'
[9/9] Completed 'psimd'
-- Using third party subdirectory Eigen.
Python 3.7.1
-- Found PythonInterp: /opt/anaconda/bin/python (found suitable version "3.7.1", minimum required is "2.7")
-- Found PythonLibs: /opt/anaconda/lib/libpython3.7m.so.1.0 (found suitable version "3.7.1", minimum required is "2.7")
-- Found PythonInterp: /opt/anaconda/bin/python (found version "3.7.1")
-- Found PythonLibs: /opt/anaconda/lib/libpython3.7m.so.1.0
-- System pybind11 found
-- pybind11 include dirs: /usr/include;/opt/anaconda/include/python3.7m
CMake Warning at cmake/Dependencies.cmake:805 (message):
Not using CUDA, so disabling NCCL. Suppress this warning with
-DUSE_NCCL=OFF.
Call Stack (most recent call first):
CMakeLists.txt:219 (include)
CMake Warning at cmake/Dependencies.cmake:950 (message):
Metal is only used in ios builds.
Call Stack (most recent call first):
CMakeLists.txt:219 (include)
--
-- ******** Summary ********
-- CMake version : 3.12.2
-- CMake command : /opt/anaconda/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 5.4.1
-- CXX flags : -fvisibility-inlines-hidden -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL
-- CMAKE_PREFIX_PATH : /opt/anaconda/bin/../
-- CMAKE_INSTALL_PREFIX : /home/manjaro/Downloads/pytorch/torch
-- CMAKE_MODULE_PATH : /home/manjaro/Downloads/pytorch/cmake/Modules;/usr/share/cmake/pybind11
--
-- ONNX version : 1.4.1
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Found gcc >=5 and CUDA <= 7.5, adding workaround C++ flags
-- Could not find CUDA with FP16 support, compiling without torch.CudaHalfTensor
-- Removing -DNDEBUG from compile flags
-- Found OpenMP_C: -fopenmp (found version "4.0")
-- Found OpenMP_CXX: -fopenmp (found version "4.0")
-- Compiling with OpenMP support
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - found
-- Performing Test HAVE_GCC_GET_CPUID
-- Performing Test HAVE_GCC_GET_CPUID - Success
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Success
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Failed
-- Performing Test C_HAS_AVX_2
-- Performing Test C_HAS_AVX_2 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Failed
-- Performing Test C_HAS_AVX2_2
-- Performing Test C_HAS_AVX2_2 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Failed
-- Performing Test CXX_HAS_AVX_2
-- Performing Test CXX_HAS_AVX_2 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Failed
-- Performing Test CXX_HAS_AVX2_2
-- Performing Test CXX_HAS_AVX2_2 - Success
-- AVX compiler support found
-- AVX2 compiler support found
-- Performing Test HAS_C11_ATOMICS
-- Performing Test HAS_C11_ATOMICS - Failed
-- Performing Test HAS_MSC_ATOMICS
-- Performing Test HAS_MSC_ATOMICS - Failed
-- Performing Test HAS_GCC_ATOMICS
-- Performing Test HAS_GCC_ATOMICS - Success
-- Atomics: using GCC intrinsics
-- Performing Test BLAS_F2C_DOUBLE_WORKS
-- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
-- Performing Test BLAS_F2C_FLOAT_WORKS
-- Performing Test BLAS_F2C_FLOAT_WORKS - Success
-- Performing Test BLAS_USE_CBLAS_DOT
-- Performing Test BLAS_USE_CBLAS_DOT - Success
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API (mkl).
disabling CUDA because NOT USE_CUDA is set
-- CuDNN not found. Compiling without CuDNN support
disabling ROCM because NOT USE_ROCM is set
-- MIOpen not found. Compiling without MIOpen support
disabling MKLDNN because USE_MKLDNN is not set
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - found
-- Looking for mmap
-- Looking for mmap - found
-- Looking for shm_open
-- Looking for shm_open - found
-- Looking for shm_unlink
-- Looking for shm_unlink - found
-- Looking for malloc_usable_size
-- Looking for malloc_usable_size - found
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
-- GCC 5.4.1: Adding gcc and gcc_s libs to link line
-- NUMA paths:
-- /usr/include
-- /usr/lib/libnuma.so
disabling CUDA because USE_CUDA is set false
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Success
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_SVE
-- Performing Test COMPILER_SUPPORTS_SVE - Failed
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Failed
-- Found OpenMP_C: -fopenmp (found version "4.0")
-- Found OpenMP_CXX: -fopenmp (found version "4.0")
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Success
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Success
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success
-- Configuring build for SLEEF-v3.2
Target system: Linux-4.14.94-1-MANJARO
Target processor: x86_64
Host system: Linux-4.14.94-1-MANJARO
Host processor: x86_64
Detected C compiler: GNU @ /usr/bin/cc
-- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
-- Building shared libs : OFF
-- MPFR : /opt/anaconda/lib/libmpfr.so
-- MPFR header file in /opt/anaconda/include
-- GMP : /opt/anaconda/lib/libgmp.so
-- RUNNING_ON_TRAVIS : 0
-- COMPILER_SUPPORTS_OPENMP : 1
-- /usr/bin/c++ /home/manjaro/Downloads/pytorch/torch/abi-check.cpp -o /home/manjaro/Downloads/pytorch/build/abi-check
-- Determined _GLIBCXX_USE_CXX11_ABI=1
-- NCCL operators skipped due to no CUDA support
-- Excluding ideep operators as we are not using ideep
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support
-- Include Observer library
-- Using lib/python3.7/site-packages as python relative installation path
CMake Warning at CMakeLists.txt:416 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 3.12.2
-- CMake command : /opt/anaconda/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 5.4.1
-- BLAS : MKL
-- CXX flags : -fvisibility-inlines-hidden -D_FORCE_INLINES -D_MWAITXINTRIN_H_INCLUDED -D__STRICT_ANSI__ -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL;ONNX_NAMESPACE=onnx_torch;USE_GCC_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
-- CMAKE_PREFIX_PATH : /opt/anaconda/bin/../
-- CMAKE_INSTALL_PREFIX : /home/manjaro/Downloads/pytorch/torch
--
-- TORCH_VERSION : 1.1.0
-- CAFFE2_VERSION : 1.1.0
-- BUILD_ATEN_MOBILE : OFF
-- BUILD_ATEN_ONLY : OFF
-- BUILD_BINARY : False
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : True
-- Python version : 3.7.1
-- Python executable : /opt/anaconda/bin/python
-- Pythonlibs version : 3.7.1
-- Python library : /opt/anaconda/lib/libpython3.7m.so.1.0
-- Python includes : /opt/anaconda/include/python3.7m
-- Python site-packages: lib/python3.7/site-packages
-- BUILD_CAFFE2_OPS : False
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : False
-- USE_ASAN : OFF
-- USE_CUDA : False
-- USE_ROCM : False
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : OFF
-- USE_FFMPEG : False
-- USE_GFLAGS : ON
-- USE_GLOG : ON
-- USE_LEVELDB : False
-- USE_LITE_PROTO : OFF
-- USE_LMDB : False
-- USE_METAL : OFF
-- USE_MKL : ON
-- USE_MKLDNN : OFF
-- USE_NCCL : OFF
-- USE_NNPACK : False
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : False
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : False
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : False
-- Public Dependencies : Threads::Threads;caffe2::mkl;glog::glog
-- Private Dependencies : cpuinfo;/usr/lib/libnuma.so;fp16;onnxifi_loader;rt;gcc_s;gcc;dl
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
NCCL_EXTERNAL
THD_SO_VERSION
|
Problem solved.
I found what was wrong and fixed it. The whole problem lies in the fact that the Anaconda distribution comes with its own ld linker, located in /opt/anaconda/compiler_compat/, which shadows the system ld residing at /usr/bin.
To fix the error I ran python setup.py clean and then temporarily renamed Anaconda's ld linker to ld-old to make it invisible during the PyTorch installation.
|
https://stackoverflow.com/questions/54492378/
|
Running LSTM with multiple GPUs gets "Input and hidden tensors are not at the same device"
|
I am trying to train an LSTM layer in pytorch. I am using 4 GPUs. When initializing, I added the .cuda() function to move the hidden state to the GPU. But when I run the code with multiple GPUs I am getting this runtime error:
RuntimeError: Input and hidden tensors are not at the same device
I have tried to solve the problem by using .cuda() function in the forward function like below :
self.hidden = (self.hidden[0].type(torch.FloatTensor).cuda(), self.hidden[1].type(torch.FloatTensor).cuda())
This line seems to solve the problem, but it raises my concern about whether the updated hidden state is seen across the different GPUs. Should I move the vector back to the CPU at the end of the forward function for a batch, or is there another way to solve the problem?
|
When you call .cuda() on a tensor, Pytorch moves it to the current GPU device by default (GPU-0). So, due to data parallelism, your data lives on a different GPU while your model goes to another; this results in the runtime error you are facing.
The correct way to implement data parallelism for recurrent neural networks is as follows:
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
class MyModule(nn.Module):
# ... __init__, other methods, etc.
# padded_input is of shape [B x T x *] (batch_first mode) and contains
# the sequences sorted by lengths
# B is the batch size
# T is max sequence length
def forward(self, padded_input, input_lengths):
total_length = padded_input.size(1) # get the max sequence length
packed_input = pack_padded_sequence(padded_input, input_lengths,
batch_first=True)
packed_output, _ = self.my_lstm(packed_input)
output, _ = pad_packed_sequence(packed_output, batch_first=True,
total_length=total_length)
return output
m = MyModule().cuda()
dp_m = nn.DataParallel(m)
You also need to set the CUDA_VISIBLE_DEVICES environment variable accordingly for a multi GPU setup.
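For example, a quick sketch of setting it from Python before the first CUDA call (the device ids are placeholders for your setup):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"  # must be set before CUDA is initialized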
References:
Data Parallelism
Fast.ai Forums
RNNs and Data Parallelism
|
https://stackoverflow.com/questions/54511769/
|
How does PyTorch handle labels when loading image/mask files for image segmentation?
|
I am starting an image segmentation project using PyTorch. I have a reduced dataset in a folder and 2 subfolders - "image" to store the images and "mask" for the masked images. Images and masks are .png files with 3 channels and 256x256 pixels. Because it is image segmentation, the labelling has to be performed pixel by pixel. I am working only with 2 classes at the moment for simplicity. So far, I achieved the following:
I was able to load my files into classes "images" or "masks" by
root_dir="./images_masks"
train_ds_untransf = torchvision.datasets.ImageFolder(root=root_dir)
train_ds_untransf.classes
Out[621]:
['images', 'masks']
and transform the data into tensors
from torchvision import transforms
train_trans = transforms.Compose([transforms.ToTensor()])
train_dataset = torchvision.datasets.ImageFolder(root=root_dir,transform=train_trans)
Each tensor in this "train_dataset" has the following shape:
train_dataset[1][0].shape
torch.Size([3, 256, 256])
Now I need to feed the loaded data into the CNN model, and have explored the PyTorch DataLoader for this
train_dl = DataLoader(train_dataset, batch_size=2, shuffle=False, num_workers=4)
I use the following code to check the resulting tensor's shape
for x, y in train_dl:
print (x.shape)
print (y.shape)
print(y)
and get
torch.Size([2, 3, 256, 256])
torch.Size([2])
tensor([0, 0])
torch.Size([2, 3, 256, 256])
torch.Size([2])
tensor([0, 1])
.
.
.
Shapes seem correct. However, the first problem is that I got tensors from the same folder, indicated by some "y" tensors with the same value [0, 0]. I would expect them all to be [1, 0]: 1 representing image, 0 representing masks.
The second problem is that, although the documentation is clear when labels are entire images, it is not clear how to apply it for labeling at the pixel level, and I am certain the labels are not correct.
What would be an alternative to correctly label this dataset?
thank you
|
The class torchvision.datasets.ImageFolder is designed for image classification problems, not for segmentation; therefore, it expects a single integer label per image, and the label is determined by the subfolder in which the images are stored. So, as far as your dataloader is concerned, you have two classes of images, "images" and "masks", and your net tries to distinguish between them.
What you actually need is a different implementation of Dataset whose __getitem__ returns an image and the corresponding mask. You can see examples of such classes here.
Additionally, it is a bit weird that your binary pixel-wise labels are stored as 3 channel image. Segmentation masks are usually stored as a single channel image.
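For illustration, here is a minimal sketch of such a Dataset, assuming matching file names in the "image" and "mask" subfolders and collapsing the 3-channel masks to a single channel of class ids (these assumptions are mine, not from your setup):
import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class SegmentationDataset(Dataset):
    def __init__(self, root_dir):
        self.image_dir = os.path.join(root_dir, 'image')
        self.mask_dir = os.path.join(root_dir, 'mask')
        self.names = sorted(os.listdir(self.image_dir))
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        image = Image.open(os.path.join(self.image_dir, self.names[idx])).convert('RGB')
        mask = Image.open(os.path.join(self.mask_dir, self.names[idx])).convert('L')
        x = self.to_tensor(image)                              # [3, 256, 256], float in [0, 1]
        y = torch.as_tensor(np.array(mask), dtype=torch.long)  # [256, 256], per-pixel class ids
        return x, y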
|
https://stackoverflow.com/questions/54528338/
|
Pytorch - Efficient Elementwise Multiply?
|
I have a tensor of 3D Points of [100x3]
I have a vector of weights of [100x1], which needs to be element wise multiplied into the X,Y,Z coordinates.
Currently, I am creating a new vector W where I stack the [100x3] element with repetition into a [100x3] tensor, before i do an element wise multiply.
I need to do this many times, and this is way too slow and memory intensive. Is there a better way?
|
Standard multiplication (*) in PyTorch already is elementwise. Additionally, it broadcasts. So
import torch
xyz = torch.randn(100, 3)   # [100 x 3] points
w = torch.randn(100, 1)     # [100 x 1] weights
multiplied = xyz * w        # w broadcasts over the last dimension -> [100 x 3]
will just do the trick.
|
https://stackoverflow.com/questions/54543082/
|
How to Save pytorch tensor in append mode
|
How can I save several tensors appended together using torch.save()?
For example
for i in range(20):
......
loss = criterion(scores, labels)
torch.save(loss,'loss.pt')
How do I save all 20 losses?
|
It's probably not possible to append directly to the file; at least, I could not find documentation for it. In your example, however, a better approach is to append to a list and save at the end.
import torch
losses = []
for i in range(20):
# ......
loss = criterion(scores, labels)
losses.append(loss.item())
torch.save(losses, 'loss.pt')
|
https://stackoverflow.com/questions/54570525/
|
How to create a custom PyTorch dataset when the order and the total number of training samples is not known in advance?
|
I have a 42 GB jsonl file. Every element of this file is a json object. I create training samples from every json object. But the number of training samples from every json object that I extract can vary between 0 to 5 samples. What is the best way to create a custom PyTorch dataset without reading the entire jsonl file in memory?
This is the dataset I am talking about - Google Natural Questions.
|
You have a couple of options.
The simplest option, if having lots of small files is not a problem, is to preprocess each json object into its own file. Then you can just read each one depending on the index requested. E.g.:
class SingleFileDataset(Dataset):
def __init__(self, list_of_file_paths):
self.list_of_file_paths = list_of_file_paths
def __getitem__(self, index):
return np.load(self.list_of_file_paths[index]) # Or equivalent reading code for single file
You can also split the data into a constant number of files, and then calculate, given the index, which file the sample resides in. Then you need to load that file into memory and read the appropriate index. This gives a trade-off between disk access and memory usage. Assume you have n samples, and we split the samples evenly into c files during preprocessing, so that each file holds samples_per_file = n // c samples. Now, to read the sample at index i we would do
class SplitIntoFilesDataset(Dataset):
    def __init__(self, list_of_file_paths, samples_per_file):
        self.list_of_file_paths = list_of_file_paths
        self.samples_per_file = samples_per_file  # samples_per_file = n // c
    def __getitem__(self, index):
        # index // samples_per_file is the relevant file, and
        # index % samples_per_file is the index within that file
        file_to_load = self.list_of_file_paths[index // self.samples_per_file]
        # Load the file (assumed to be a .npy array of shape [samples_per_file, ...])
        file = np.load(file_to_load)
        datapoint = file[index % self.samples_per_file]
        return datapoint
Finally, you could use an HDF5 file, which allows access to rows on disk. This is possibly the best solution if you have a lot of data, since the data will be close together on disk. There's an implementation here, which I have copy-pasted below:
import h5py
import torch
import torch.utils.data as data
class H5Dataset(data.Dataset):
def __init__(self, file_path):
super(H5Dataset, self).__init__()
        h5_file = h5py.File(file_path, 'r')  # open read-only
self.data = h5_file.get('data')
self.target = h5_file.get('label')
def __getitem__(self, index):
return (torch.from_numpy(self.data[index,:,:,:]).float(),
torch.from_numpy(self.target[index,:,:,:]).float())
def __len__(self):
return self.data.shape[0]
|
https://stackoverflow.com/questions/54571377/
|
What does log_prob do?
|
In some (e.g. machine learning) libraries, we can find a log_prob function. What does it do, and how is it different from taking just the regular log?
For example, what is the purpose of this code:
dist = Normal(mean, std)
sample = dist.sample()
logprob = dist.log_prob(sample)
And subsequently, why would we first take a log and then exponentiate the resulting value instead of just evaluating it directly:
prob = torch.exp(dist.log_prob(sample))
|
As your own answer mentions, log_prob returns the logarithm of the density or probability. Here I will address the remaining points in your question:
How is that different from log? Distributions do not have a method log. If they did, the closest possible interpretation would indeed be something like log_prob, but it would not be a very precise name since it begs the question "log of what?" A distribution has multiple numeric properties (for example its mean, variance, etc.) and the probability or density is just one of them, so the name would be ambiguous.
The same does not apply to the Tensor.log() method (which may be what you had in mind) because Tensor is itself a mathematical quantity we can take the log of.
Why take the log of a probability only to exponentiate it later? You may not need to exponentiate it later. For example, if you have the logs of probabilities p and q, then you can directly compute log(p * q) as log(p) + log(q), avoiding intermediate exponentiations. This is more numerically stable (avoiding underflow) because probabilities may become very close to zero while their logs do not. Addition is also more efficient than multiplication in general, and its derivative is simpler. There is a good article about those topics at https://en.wikipedia.org/wiki/Log_probability.
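As a small illustration of that last point (a sketch with arbitrary numbers):
import torch
from torch.distributions import Normal

dist = Normal(torch.tensor(0.0), torch.tensor(1.0))
samples = dist.sample((3,))
log_p = dist.log_prob(samples)     # log density at each sample
joint_log_p = log_p.sum()          # log of the product of the densities
joint_p = torch.exp(joint_log_p)   # exponentiate only at the very end, if at all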
|
https://stackoverflow.com/questions/54635355/
|
How to load a checkpoint file in a pytorch model?
|
In my pytorch model, I'm initializing my model and optimizer like this.
model = MyModelClass(config, shape, x_tr_mean, x_tr,std)
optimizer = optim.SGD(model.parameters(), lr=config.learning_rate)
And here is the path to my checkpoint file.
checkpoint_file = os.path.join(config.save_dir, "checkpoint.pth")
To load this checkpoint file, I check and see if the checkpoint file exists and then I load it as well as the model and optimizer.
if os.path.exists(checkpoint_file):
if config.resume:
torch.load(checkpoint_file)
model.load_state_dict(torch.load(checkpoint_file))
optimizer.load_state_dict(torch.load(checkpoint_file))
Also, here's how I'm saving my model and optimizer.
torch.save({'model': model.state_dict(), 'optimizer': optimizer.state_dict(), 'iter_idx': iter_idx, 'best_va_acc': best_va_acc}, checkpoint_file)
For some reason I keep getting a strange error whenever I run this code.
model.load_state_dict(torch.load(checkpoint_file))
File "/home/Josh/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MyModelClass:
Missing key(s) in state_dict: "mean", "std", "attribute.weight", "attribute.bias".
Unexpected key(s) in state_dict: "model", "optimizer", "iter_idx", "best_va_acc"
Does anyone know why I'm getting this error?
|
You saved the model parameters in a dictionary. You're supposed to use the keys that you used while saving earlier to load the model checkpoint and state_dicts, like this:
if os.path.exists(checkpoint_file):
if config.resume:
checkpoint = torch.load(checkpoint_file)
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optimizer'])
You can check the official tutorial on PyTorch website for more info.
|
https://stackoverflow.com/questions/54677683/
|
Apply a PyTorch CrossEntropy method for multiclass segmentation
|
I am trying to implement a simple example of how to apply cross-entropy to what is supposed to be the output of my semantic segmentation CNN.
Using the pytorch format I would have something like this:
out = np.array([[
[
[1.,1, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]
],
[
[0, 0, 0],
[1, 1, 1],
[0, 0.,0],
[0, 0, 0]
],
[
[0, 0, 0],
[0, 0, 0],
[1, 1, 1],
[0, 0, 0]
],
[
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[1, 1, 1]
]
]])
out = torch.tensor(out)
So, my output here has dimensions (1, 4, 4, 3): a 1-element batch, 4 channels representing the 4 possible classes, and 4 by 3 data in each, storing the probability of that cell being from its class.
Now my target is like this:
target=[
[0, 0, 0],
[1, 1, 1],
[2, 2, 2],
[3, 3, 3]
]
Please notice how in the 'out' tensor each row has a 1.0 probability of being from that class, resulting in a perfect match with the target.
For example, the third channel (channel 2) has its whole 3rd row (row 2) with 1.0 probabilities of being from that channel, and zeros everywhere else; so it matches the 2's in the third row of the target as well.
With this example I expect a minimal loss value between the two tensors.
My question are:
What's the best way to use a cross-entropy loss method in PyTorch in order to reflect that this case has no difference between the target and its prediction?
What loss value should I expect from this?
This is what I got so far:
import torch
from torch.nn import CrossEntropyLoss
import numpy as np
out = torch.Tensor(np.array([[
[
[1.,1, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]
],
[
[0, 0, 0],
[1, 1, 1],
[0, 0.,0],
[0, 0, 0]
],
[
[0, 0, 0],
[0, 0, 0],
[1, 1, 1],
[0, 0, 0]
],
[
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[1, 1, 1]
]
]]))
target = torch.Tensor([[
[0, 0, 0],
[1, 1, 1],
[2, 2, 2],
[3, 3, 3]
]]).type('torch.LongTensor')
criterion = CrossEntropyLoss()
print(criterion(out, target))
And outputs: tensor(0.7437)
Shouldn't I expect a value closer to zero?
Thank you in advance
|
Look at the description of the nn.CrossEntropyLoss function: the predictions out you provide to nn.CrossEntropyLoss are not treated as class probabilities, but rather as logits. The loss function derives the class probabilities from out using softmax, and therefore nn.CrossEntropyLoss will never output exactly zero loss.
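You can verify the value you got by hand: for every pixel the correct class has logit 1 and the other three classes have logit 0, so
import math

p_correct = math.exp(1.0) / (math.exp(1.0) + 3 * math.exp(0.0))  # softmax prob of the true class, ~0.4754
print(-math.log(p_correct))                                      # ~0.7437, matching your result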
|
https://stackoverflow.com/questions/54680267/
|
Optional tensors in PyTorch c++ extension
|
I'm writing a C++ extension for pytorch, and using the C++ API to do so. To my forward function, I need to pass an optional tensor. Inside the function, I want to do different things based on whether this optional parameter was passed or not. In general, we use NULL for optional pointer arguments in C++ and check inside the function whether the pointer is NULL or not. I don't know how to do this for the at::Tensor type of Torch's C++ API.
void xyz_forward(
const at::Tensor xyz1,
const at::Tensor xyz2,
const at::Tensor optional_constraints = something)
{
if(optional_constraints){
//do something
}else{
//do something else
}
}
Note that, I can't do const at::Tensor optional_constraints = at::ones or something, because that parameter can take any real value and can be of varying size/shape. I can't assign it a numerical value as an optional argument. Is there a NULL equivalent for this?
|
One possibility could be to use std::optional as std::optional<at::Tensor> optional_constraints = std::nullopt. It is contextually convertible to bool, so you can check it with if (optional_constraints). Use the .value() method to get the tensor if you pass one, otherwise the default value will be std::nullopt.
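A minimal sketch of how the signature could look (this assumes a C++17 toolchain for std::optional; Torch's bundled c10::optional works the same way on older compilers):
#include <optional>
#include <ATen/ATen.h>

void xyz_forward(
    const at::Tensor xyz1,
    const at::Tensor xyz2,
    std::optional<at::Tensor> optional_constraints = std::nullopt)
{
    if (optional_constraints) {
        at::Tensor constraints = optional_constraints.value();
        // do something with constraints
    } else {
        // do something else
    }
}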
|
https://stackoverflow.com/questions/54685444/
|
Can anyone solve this?
|
torch.tensor(2,require_grad=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    torch.tensor(2,require_grad=True)
TypeError: tensor() got an unexpected keyword argument 'require_grad'
|
It's actually:
requires_grad
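So the call becomes (note that only floating-point tensors can require gradients, hence 2. rather than 2):
torch.tensor(2., requires_grad=True)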
|
https://stackoverflow.com/questions/54686300/
|
How does one determine when the CartPole environment has been solved?
|
I was going through this tutorial and saw the following piece of code:
# Calculate score to determine when the environment has been solved
scores.append(time)
mean_score = np.mean(scores[-100:])
if episode % 50 == 0:
print('Episode {}\tAverage length (last 100 episodes): {:.2f}'.format(
episode, mean_score))
if mean_score > env.spec.reward_threshold:
print("Solved after {} episodes! Running average is now {}. Last episode ran to {} time steps."
.format(episode, mean_score, time))
break
however, it didn't really make sense to me. How does one define when an "RL environment has been solved"? I'm not sure what that even means. I guess in classification it would make sense to define it as when the loss is zero. In regression, maybe when the total L2 loss is less than some value? Perhaps it would have made sense to define it as when the expected return (discounted rewards) is greater than some value.
But here it seems they are counting the # of time steps? This doesn't make any sense to me.
Note the original tutorial had this:
def main(episodes):
running_reward = 10
for episode in range(episodes):
state = env.reset() # Reset environment and record the starting state
done = False
for time in range(1000):
action = select_action(state)
# Step through environment using chosen action
state, reward, done, _ = env.step(action.data[0])
# Save reward
policy.reward_episode.append(reward)
if done:
break
# Used to determine when the environment is solved.
running_reward = (running_reward * 0.99) + (time * 0.01)
update_policy()
if episode % 50 == 0:
print('Episode {}\tLast length: {:5d}\tAverage length: {:.2f}'.format(episode, time, running_reward))
if running_reward > env.spec.reward_threshold:
print("Solved! Running reward is now {} and the last episode runs to {} time steps!".format(running_reward, time))
break
not sure if this makes much more sense...
is this only a particular quirk of this environment/task? How does the task end in general?
|
The time in the case of CartPole equals the reward of the episode: the agent receives +1 reward for every time step the pole stays balanced, so the longer you balance the pole, the higher the score, capped at some maximum episode length.
So the episode is considered solved when the running average of the last episodes is close enough to that maximum time (the env.spec.reward_threshold).
|
https://stackoverflow.com/questions/54737990/
|
Calling super's forward() method
|
What is the most appropriate way to call the forward() method of a parent Module? For example, if I subclass the nn.Linear module, I might do the following
class LinearWithOtherStuff(nn.Linear):
def forward(self, x):
y = super().forward(x)
z = do_other_stuff(y)
return z
However, the docs say not to call the forward() method directly:
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
which makes me think super().forward(x) could result in some unexpected errors. Is this true, or am I misunderstanding inheritance?
|
TLDR;
You can use super().forward(...) freely even with hooks and even with hooks registered in super() instance.
Explanation
As stated by this answer, __call__ is there so that the registered hooks (e.g. those added with register_forward_hook) will be run.
If you inherit and want to reuse base class's forward, e.g. this:
import torch
class Parent(torch.nn.Module):
def forward(self, tensor):
return tensor + 1
class Child(Parent):
def forward(self, tensor):
return super(Child, self).forward(tensor) + 1
module = Child()
# Increment output by 1 so we should get `4`
module.register_forward_hook(lambda module, input, output: output + 1)
print(module(torch.tensor(1))) # and it is 4 indeed
print(module.forward(torch.tensor(1))) # here it is 3 still
You are perfectly fine if you call the __call__ method; plain forward won't run the hook (so you get 3 as above).
It is unlikely that you would want to register_hook on the super() instance, but let's consider such an example:
def increment_by_one(module, input, output):
return output + 1
class Parent(torch.nn.Module):
def forward(self, tensor):
return tensor + 1
class Child(Parent):
def forward(self, tensor):
# Increment by `1` from Parent
super().register_forward_hook(increment_by_one)
return super().forward(tensor) + 1
module = Child()
# Increment output by 1 so we should get `5` in total
module.register_forward_hook(increment_by_one)
print(module(torch.tensor(1))) # and it is 5 indeed
print(module.forward(torch.tensor(1))) # here is 3
You are perfectly fine using super().forward(...) and even hooks will work correctly (and that is the main idea of using __call__ instead of forward).
BTW, calling super().__call__(...) would cause infinite recursion.
|
https://stackoverflow.com/questions/54752983/
|
How to work with large dataset in pytorch
|
I have a huge dataset that does not fit in memory (150G) and I'm looking for the best way to work with it in pytorch. The dataset is composed of several .npz files of 10k samples each. I tried to build a Dataset class
class MyDataset(Dataset):
def __init__(self, path):
self.path = path
self.files = os.listdir(self.path)
self.file_length = {}
for f in self.files:
# Load file in as a nmap
d = np.load(os.path.join(self.path, f), mmap_mode='r')
self.file_length[f] = len(d['y'])
def __len__(self):
raise NotImplementedException()
def __getitem__(self, idx):
# Find the file where idx belongs to
count = 0
f_key = ''
local_idx = 0
for k in self.file_length:
if count < idx < count + self.file_length[k]:
f_key = k
local_idx = idx - count
break
else:
count += self.file_length[k]
# Open file as numpy.memmap
d = np.load(os.path.join(self.path, f_key), mmap_mode='r')
# Actually fetch the data
X = np.expand_dims(d['X'][local_idx], axis=1)
y = np.expand_dims((d['y'][local_idx] == 2).astype(np.float32), axis=1)
return X, y
but when a sample is actually fetched, it takes more than 30s. It looks like the entire .npz is opened and loaded into RAM, and only then is the right index accessed.
How to be more efficient ?
EDIT
It appears to be a misunderstanding of .npz files (see this post), but is there a better approach?
SOLUTION PROPOSAL
As proposed by @covariantmonkey, lmdb can be a good choice. For now, as the problem comes from .npz files and not memmap, I remodelled my dataset by splitting the .npz package files into several .npy files. I can now use the same logic, where memmap makes complete sense and is really fast (several ms to load a sample).
|
How large are the individual .npz files? I was in a similar predicament a month ago. After various forum posts and Google searches, I went the lmdb route. Here is what I did:
Chunk the large dataset into files small enough to fit in GPU memory — each of them is essentially my minibatch. I did not optimize for load time at this stage, just memory.
create an lmdb index with key = filename and data = np.savez_compressed(stff)
lmdb takes care of the mmap for you and insanely fast to load.
Regards,
A
PS: savez_compressed requires a bytes object, so you can do something like
output = io.BytesIO()
np.savez_compressed(output, x=your_np_data)
#cache output in lmdb
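A rough sketch of the write side (the database path, map_size and key are placeholders for your own setup):
import io
import lmdb
import numpy as np

env = lmdb.open('dataset.lmdb', map_size=int(1e12))  # map_size must be big enough for all chunks
with env.begin(write=True) as txn:
    output = io.BytesIO()
    np.savez_compressed(output, x=your_np_data)      # your_np_data: one chunk / minibatch
    txn.put(b'chunk_0000', output.getvalue())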
|
https://stackoverflow.com/questions/54753720/
|
Espresso ANERuntimeEngine Program Inference overflow
|
I have two CoreML models. One works fine, and the other generates this error message:
[espresso] [Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=5 err=Error Domain=com.apple.appleneuralengine Code=5 "processRequest:qos:qIndex:error:: 0x3: Program Inference overflow" UserInfo={NSLocalizedDescription=processRequest:qos:qIndex:error:: 0x3: Program Inference overflow}
[espresso] [Espresso::overflow_error] /var/containers/Bundle/Application/E0DE5E08-D2C6-48AF-91B2-B42BA7877E7E/xxx demoapp.app/mpii-hg128.mlmodelc/model.espresso.net:0
Both models are very similar (Conv2D models). They are generated with the same scripts and versions of PyTorch, ONNX, and onnx-coreml. The model that works has 1036 layers, and the model that generates the error has 599 layers. They both use standard layers - Conv2D, BatchNorm, ReLU, MaxPool, and Upsample (no custom layers and no Functional or Numpy stuff). They both use relatively the same number of features per layer. They follow essentially the same structure, except the erroring model skips a maxpool layer at the start (hence the higher output resolution).
They both take a 256x256 color image as input, and output 16 channels at (working) 64x64 and (erroring) 128x128 pixels.
The app does not crash, but gives garbage results for the erroring model.
Both models train, evaluate, etc. fine in their native formats (PyTorch).
I have no idea what a Code=5 "processRequest:qos:qIndex:error:: 0x3: Program Inference overflow" error is, and Google searches are not yielding anything productive, as I gather "Espresso" and "ANERuntimeEngine" are both private Apple libraries.
What is this error message telling me? How can I fix it?
Can I avoid this error message by not running the model on the bionic chip but on the CPU/GPU?
Any help is appreciated, thanks.
|
That's a LOT of layers!
Espresso is the C++ library that runs the Core ML models. ANERuntimeEngine is used with the Apple Neural Engine chip.
By passing in an MLModelConfiguration with computeUnits set to .cpuAndGPU when you load the Core ML model, you can tell Core ML to not use the Neural Engine.
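A short sketch in Swift (MyModel stands in for whatever class Xcode generated from your .mlmodel):
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU          // skip the Neural Engine
let model = try MyModel(configuration: config)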
|
https://stackoverflow.com/questions/54773171/
|
Correctly converting a NumPy array to a PyTorch tensor running on the gpu
|
I have created a DataLoader that looks like this
class ToTensor(object):
def __call__(self, sample):
return torch.from_numpy(sample).to(device)
class MyDataset(Dataset):
def __init__(self, data, transform=None):
self.data = data
self.transform = transform
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
sample = self.data[idx, :]
if self.transform:
sample = self.transform(sample)
return sample
I am using this data loader like so
dataset = MLBDataset(
data=data,
transform=transforms.Compose([
ToTensor()
]))
dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
dataiter = iter(dataloader)
x = dataiter.next()
This fails with the message
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp line=55 error=3 : initialization error
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp line=55 error=3 : initialization error
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp line=55 error=3 : initialization error
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp line=55 error=3 : initialization error
...
torch._C._cuda_init()
RuntimeError: cuda runtime error (3) : initialization error at /opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp:55
For the return command inside ToTensor(), in fact any attempt to move the tensor to the GPU will fail inside that class. I have tried:
a = np.array([[[1, 2, 3, 4], [5, 6, 7, 8], [25, 26, 27, 28]],
[[11, 12, np.nan, 14], [15, 16, 17, 18], [35, 36, 37, 38]]])
print(torch.from_numpy(a).to(device))
inside the body of __call__ in ToTensor() and it fails with the same message, whereas it succeeds everywhere else.
Why is this error generated and how can I resolve this?
|
Try this one:
Code:
import numpy as np
import torch
import torch.nn as nn
torch.cuda.set_device(0)
X = np.ones((1, 10), dtype=np.float32)
print(type(X), X)
X = torch.from_numpy(X).cuda(0)
print(type(X), X)
model = nn.Linear(10, 10).cuda(0)
Y = model(X)
print(type(Y), Y)
Output:
<class 'numpy.ndarray'> [[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
<class 'torch.Tensor'> tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], device='cuda:0')
<class 'torch.Tensor'> tensor([[ 0.4867, -1.0050, 0.4872, -0.0260, -0.0788, 0.0161, 1.2210, -0.3957,
0.2097, 0.2296]], device='cuda:0', grad_fn=<AddmmBackward>)
|
https://stackoverflow.com/questions/54773293/
|
The strange loss fluctuation when loading previous trained model
|
I am using PyTorch for deep learning now.
I trained a model before and had the parameters saved. Loss values before the end of the training were about 0.003~0.006.
However, when I load the same model with the same training data, loss values at first fluctuate to around 0.5.
The loss values then decrease very quickly to around 0.01 in ~10 iterations and are now decreasing slowly.
Does anyone know why this situation keeps happening? Since I am loading the same model and training data, I was expecting the loss values to start at a similar level as at the end of the last training.
|
When resuming a training, you should not only load the network's weights but also the optimizer state. For that, you can use torch.save:
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
Then, for resuming the training:
model = TheModelClass(*args, **kwargs)
model.train()
optimizer = TheOptimizerClass(*args, **kwargs)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
If you don't save the optimizer state, you lose important information such as the current learning rate, momentum, etc. This is probably the cause of your problem.
Reference:
https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-a-general-checkpoint-for-inference-and-or-resuming-training
|
https://stackoverflow.com/questions/54806742/
|
PyTorch why does the forward function run multiple times and can I change the input shape?
|
import torch
import torch.nn as nn
import torchvision.datasets as dsets
from skimage import transform
import torchvision.transforms as transforms
from torch.autograd import Variable
import pandas as pd;
import numpy as np;
from torch.utils.data import Dataset, DataLoader
import statistics
import random
import math
class FashionMNISTDataset(Dataset):
'''Fashion MNIST Dataset'''
def __init__(self, csv_file, transform=None):
"""
Args:
csv_file (string): Path to the csv file
transform (callable): Optional transform to apply to sample
"""
data = pd.read_csv(csv_file)
self.X = np.array(data.iloc[:, 1:]).reshape(-1, 1, 28, 28)
self.Y = np.array(data.iloc[:, 0])
del data
self.transform = transform
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
item = self.X[idx]
label = self.Y[idx]
if self.transform:
item = self.transform(item)
return (item, label)
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.layer1 = nn.Sequential(
nn.Linear(616,300),
nn.ReLU())
self.layer2 = nn.Sequential(
nn.Linear(300,100),
nn.ReLU())
self.fc = nn.Linear(100, 10)
def forward(self, x):
print("x shape",x.shape)
out = self.layer1(x)
out = self.layer2(out)
out = self.fc(out)
return out
def run():
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
num_epochs = 15
batch_size = 100
learning_rate = 0.0001
train_dataset = FashionMNISTDataset(csv_file='fashion-mnist_train.csv')
test_dataset = FashionMNISTDataset(csv_file='fashion-mnist_test.csv')
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,batch_size=batch_size,shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,batch_size=batch_size,shuffle=True)
#instance of the Conv Net
cnn = CNN()
cnn.to(device)
#loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(cnn.parameters(), lr=learning_rate)
losses = []
for epoch in range(num_epochs):
l = 0
for i, (images, labels) in enumerate(train_loader):
images = Variable(images.float())
labels = Variable(labels)
#print(images[0])
images = images.to(device)
labels = labels.to(device)
print("img shape=",images.shape, "label shape=",labels.shape)
images = images.resize_((100,616))
print("img shape=",images.shape, "label shape=",labels.shape)
# Forward + Backward + Optimize
optimizer.zero_grad()
outputs = cnn(images)
loss = criterion(outputs, labels)
#print(loss)
loss.backward()
optimizer.step()
#print(loss.item())
losses.append(loss.item())
l = loss.item()
cnn.eval()
with torch.no_grad():
val_loss = []
for images, labels in test_loader:
images = Variable(images.float()).to(device)
labels = labels.to(device)
outputs = cnn.forward(images)
batch_loss = criterion(outputs, labels)
val_loss.append(batch_loss.item())
avgloss = statistics.mean(val_loss)
if avgloss < min(losses):
torch.save(cnn.state_dict(), 'model')
cnn.train()
if (i+1) % 100 == 0:
print ('Epoch : %d/%d, Iter : %d/%d, Loss: %.4f'
%(epoch+1, num_epochs, i+1, len(train_dataset)//batch_size, loss.item()))
print(l)
final_model = CNN()
final_model.load_state_dict(torch.load('model'))
final_model.eval()
correct = 0
total = 0
for images, labels in test_loader:
images = Variable(images.float()).to(device)
outputs = final_model(images).to(device)
labels.to(device)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print('Test Accuracy of the model on the 10000 test images: %.4f %%' % (100 * correct / total))
if __name__ == '__main__':
run()
I have enclosed all the code for testing purposes. But Here is the error I get
img shape= torch.Size([100, 1, 28, 28]) label shape= torch.Size([100])
img shape= torch.Size([100, 616]) label shape= torch.Size([100])
x shape torch.Size([100, 616])
x shape torch.Size([100, 1, 28, 28])
Traceback (most recent call last):
  File "test.py", line 145, in <module>
    run()
  File "test.py", line 115, in run
    outputs = cnn.forward(images)
  File "test.py", line 56, in forward
    out = self.layer1(x)
  File "/usr/share/anaconda3/envs/DL/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/share/anaconda3/envs/DL/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/usr/share/anaconda3/envs/DL/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/share/anaconda3/envs/DL/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward
    return F.linear(input, self.weight, self.bias)
  File "/usr/share/anaconda3/envs/DL/lib/python3.6/site-packages/torch/nn/functional.py", line 1354, in linear
    output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [2800 x 28], m2: [616 x 300] at /opt/conda/conda-bld/pytorch_1549630534704/work/aten/src/THC/generic/THCTensorMathBlas.cu:266
The problem here is that I want to feed all 616 pixels as input into the neural network, but I don't know how to do so. I tried to reshape the input to solve the problem, but it ran model.forward twice - once with the correct shape and then with the wrong shape.
|
You are calling forward twice in run:
Once for the training data
Once for the validation data
However, you do not appear to have applied the following transformation to your validation data:
images = images.resize_((100,616))
Maybe consider doing the resize in the forward function.
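A minimal sketch of that suggestion (this keeps your 616-feature truncation, which still discards part of each 784-pixel image; a Linear(784, 300) first layer would avoid that):
def forward(self, x):
    # flatten every incoming batch the same way, so training and
    # validation inputs are treated identically
    x = x.reshape(x.size(0), -1)[:, :616]  # [B, 1, 28, 28] -> [B, 616]
    out = self.layer1(x)
    out = self.layer2(out)
    out = self.fc(out)
    return out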
|
https://stackoverflow.com/questions/54840612/
|
Load Pre-trained model in Pytorch
|
I am currently working on GANs. I have downloaded the code and models from http://www.cs.columbia.edu/~vondrick/tinyvideo/, which I just need to run to get output. I have given all the paths correctly, and the code where I am receiving an error is shown below. I have written the lines as in the original, but I am still getting a syntax error. Please help me!
import torch
import torch.legacy.nn
import torchfile
from skimage import io, transform
import torch.nn as nn
import torch.backends.cudnn as cudnn
cudnn.enabled = True
opt = {
model = 'models/beach/iter63000_net.t7',
batchSize = 128,
gpu = 1,
cudnn = 1,
}
|
You are trying to initialize a Python dictionary. This is how you do it:
opt = {
'model': 'models/beach/iter63000_net.t7',
'batchSize': 128,
'gpu': 1,
'cudnn': 1,
}
|
https://stackoverflow.com/questions/54851182/
|
Pytorch: Set Block-Diagonal Matrix Efficiently?
|
I have a Tensor A of size [N x 3 x 3], and a Matrix B of size [N*3 x N*3]
I want to copy the contents of A -> B, so that the diagonal elements are filled up basically, and I want to do this efficiently:
It should fill up B to look something like this:
So each A[i] (a 3x3 block) fills the i-th 3x3 block along the diagonal of B.
How do I do this as efficiently as possible? This is for a real-time application. I could write a CUDA kernel to do this, but I would prefer to do it with some dedicated PyTorch function.
|
Use torch.block_diag():
# Setup
A = torch.ones(3,3,3, dtype=int)
# Unpack blocks and apply
B = torch.block_diag(*A)
>>> B
tensor([[1, 1, 1, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 1, 1, 1]])
|
https://stackoverflow.com/questions/54856333/
|
Input labels for semantic segmentation with U-Net as single layer
|
When doing semantic segmentation with the U-Net, for example, it seems to be common practice to provide the label data as one-hot-encoded tensors. In another SO question, a user pointed out that this is because the labels usually represent categorical values. Feeding them to the network as class labels within only one layer (as greyscale intensity values) would introduce difficulties.
In another blog post however, the author explains that the labels
"[...] sometimes [get] packaged as greyscale images, where the pixel intensity represents the class id [...]. This method can be the easiest to work with. It allows for a small file size for distribution and [...] One Hot Vector representations [use] up more memory than [the greyscale encoding format]."
My hardware is only very limited, and I am hoping that encoding the labels as 1-layered greyscale tensors, rather than n-layered (n being the number of classes to segment), will lead to lower memory usage. However, the author of the blog then also states:
"Even if the deep learning framework you use accepts the labels data as class ids, as in [the greyscale format], it will convert that data to one-hot encoding behind the scenes."
Does this mean there wouldn't be any savings memory-wise after all?
If it is worthwhile, how would I go about implementing this in the dataset reader? I also haven't encountered any implementation where greyscale labeling has in fact been used. I'd therefore also be thankful for any links to implementations that have used greyscale labels for semantic segmentation!
I am working with PyTorch and my code is based on this implementation, with the difference that I have 3 classes to segment.
Any suggestions/links are greatly appreciated!
|
This can help you save disk memory, as you will be able to store the labels (the ground truth) as a greyscale image of shape (width, height, 1) and not as a bigger 3D tensor of shape (width, height, n). But during the training process you will have to convert the greyscale ground-truth images to a 3D tensor to be able to train your network, so it won't help you reduce the RAM cost of the process.
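A minimal sketch of that conversion (label stands for a [256, 256] long tensor of class ids read from the greyscale file; 3 classes as in your case):
import torch

n_classes = 3
label = torch.randint(0, n_classes, (256, 256))   # stand-in for a loaded greyscale label map
one_hot = torch.zeros(n_classes, 256, 256)
one_hot.scatter_(0, label.unsqueeze(0), 1.0)      # [3, 256, 256] one-hot ground truth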
If you really need to reduce the RAM usage, you can decrease the training batch size, or the image sizes.
|
https://stackoverflow.com/questions/54869612/
|
RuntimeError: _thnn_mse_loss_forward is not implemented for type torch.cuda.LongTensor
|
I'm using PyTorch, but I get an error!
My erroring code is as follows:
for train_data in trainloader:
example_count += 1
if example_count == 100:
break
optimer.zero_grad()
image, label = train_data
image = image.cuda()
label = label.cuda()
out = model(image)
_, out = torch.max(out, 1)
# print(out.cpu().data.numpy())
# print(label.cpu().data.numpy())
# out = torch.zeros(4, 10).scatter_(1, out.cpu(), 1).cuda()
# label= torch.zeros(4, 10).scatter_(1, label.cpu(), 1).cuda()
l = loss(out, label)
l.backward()
optimer.step()
j += 1
count += label.size(0)
acc += (out == label).sum().item()
if j % 1000 == 0:
print('%d step: current accuracy is %f' % (j, acc / count))
the traceback:
Traceback (most recent call last):
File "VGG实现.py", line 178, in <module>
utils.train(testloader,model)
File "VGG实现.py", line 153, in train
l=loss(out,label)
File "/home/tang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/tang/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 435, in forward
return F.mse_loss(input, target, reduction=self.reduction)
File "/home/tang/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2156, in mse_loss
ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: _thnn_mse_loss_forward is not implemented for type torch.cuda.LongTensor
I found a related answer here:
Pytorch RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
But I still don't know how to solve the problem.
|
Look at the documentation of torch.max():
torch.max(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)
Returns the maximum value of each row of the input tensor in the given
dimension dim. The second return value is the index location of each
maximum value found (argmax).
Your line of code
_, out = torch.max(out, 1)
takes the float predictions of the model, out, and uses torch.max() to return the argmax - the long-int index of the maximal prediction.
The error message you get means that your loss function (MSE loss, according to your traceback) does not support a long-typed first argument.
Moreover, you cannot take a derivative through argmax - so I don't think converting out to float using .to(torch.float) is going to do you any good either.
If you use a classification loss such as nn.CrossEntropyLoss instead, the softmax inside the loss function takes care of the argmax for you.
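A minimal sketch of that change (assuming 10 classes and your existing variable names):
criterion = torch.nn.CrossEntropyLoss()  # applies log-softmax internally
out = model(image)                       # [batch, 10] raw float logits - no torch.max here
l = criterion(out, label)                # label: [batch] long class ids
l.backward()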
|
https://stackoverflow.com/questions/54878415/
|
RuntimeError: Expected hidden[0] size (2, 20, 256), got (2, 50, 256)
|
I get this error while trying to build a multiclass text classification network using an LSTM (RNN). The code runs fine for the training part but throws the error for the validation part. Below are the network architecture and training code. I appreciate any help here.
I took existing code that predicts sentiment using an RNN, replaced the final sigmoid with a softmax function, and changed the loss function from BCELoss to NLLLoss().
def forward(self, x, hidden):
"""
Perform a forward pass of our model on some input and hidden state.
"""
batch_size = x.size(0)
embeds = self.embedding(x)
lstm_out,hidden= self.lstm(embeds,hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# softmax function
soft_out = self.sof(out)
# reshape to be batch_size first
soft_out = soft_out.view(batch_size, -1)
# soft_out = soft_out[:, -1] # get last batch of labels
# return last sigmoid output and hidden state
return soft_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
# Instantiate the model w/ hyperparams
vocab_size = len(vocab_to_int)+1
output_size = 44
embedding_dim = 100
hidden_dim = 256
n_layers = 2
net = ClassificationRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net)
# loss and optimization functions
lr=0.001
criterion = nn.NLLLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
# training params
epochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing
counter = 0
print_every = 100
clip=5 # gradient clipping
# move model to GPU, if available
if(train_on_gpu):
net.cuda()
net.train()
# train for some number of epochs
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
# batch loop
for inputs, labels in train_loader:
counter += 1
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# print('output:',output.squeeze())
# print('labels:',labels.float())
# calculate the loss and perform backprop
loss = criterion(output, labels)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output, labels)
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses)))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-41-805ed880b453> in <module>()
58 inputs, labels = inputs.cuda(), labels.cuda()
59
---> 60 output, val_h = net(inputs, val_h)
61
62 val_loss = criterion(output, labels)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
<ipython-input-38-dbfb8d384231> in forward(self, x, hidden)
34 batch_size = x.size(0)
35 embeds = self.embedding(x)
---> 36 lstm_out,hidden= self.lstm(embeds,hidden)
37
38 # stack up lstm outputs
|
Try adding drop_last=True to the lines of code that load data using DataLoader,
for example, for loading training data from the data set train_data:
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size, drop_last=True)
Explanation:
The error may be caused by the size of your training data not being divisible by the batch size. Suppose your training data has 130 items and the batch size is 8; the last batch will then have only 2 items (the remainder of 130/8). By setting drop_last to True, those 2 items are ignored.
|
https://stackoverflow.com/questions/54878904/
|
Choosing a margin for contrastive loss in a siamese network
|
I'm building a siamese network for a metric-learning task, using a contrastive loss function, and I'm uncertain on how to set the 'margin' hyperparameter for the loss.
My inputs to the loss function are currently 1024-dimension dense embeddings from an RNN layer - Does the dimensionality of that input affect how I pick a margin? Should I use a dense layer to project it to a lower-dimensional space first? Any pointers on how to pick a specific margin value (or any relevant research) would be really appreciated! In case it matters, I'm using PyTorch.
|
You don't need to project it to a lower dimensional space.
The dependence of the margin with the dimensionality of the space depends on how the loss is formulated: If you don't normalize the embedding values and compute a global difference between vectors, the right margin will depend on the dimensionality. But if you compute a normalize difference, such as cosine distance, the margin values won't depend on the dimensionality of the embedding space.
Ranking (or contrastive) losses are explained here; it might be useful: https://gombru.github.io/2019/04/03/ranking_loss/
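For intuition, here is a minimal sketch (not from the original answer) of a contrastive loss built on a normalized, cosine-based distance; with this formulation a margin such as 0.5 is a reasonable starting point regardless of the embedding dimensionality:
import torch
import torch.nn.functional as F

def contrastive_loss(emb1, emb2, label, margin=0.5):
    # label = 1 for similar pairs, 0 for dissimilar pairs
    # cosine distance is bounded in [0, 2], so the margin scale is
    # independent of the embedding dimensionality
    dist = 1 - F.cosine_similarity(emb1, emb2, dim=1)
    loss_similar = label * dist.pow(2)
    loss_dissimilar = (1 - label) * F.relu(margin - dist).pow(2)
    return (loss_similar + loss_dissimilar).mean()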
|
https://stackoverflow.com/questions/54892607/
|
Mismatching Conda and Pycharm
|
I'm new to Python and I'm confused about the mismatch between the Conda package list and PyCharm. In a project I need to install PyTorch. Installing with PyCharm leads to an error, and when I install it through conda, it does not appear in PyCharm. Both lists are for the same env.
Thanks in advance.
pycharm list
Anaconda list
|
PyCharm shows you the list of packages installed with pip, while conda list shows both pip and conda packages. Meanwhile, you can switch between pip and conda with a dedicated button in PyCharm.
|
https://stackoverflow.com/questions/54909322/
|
Fastai learner not loading
|
So I'm trying to load a model using:
learn = create_cnn(data, models.resnet50, lin_ftrs=[2048], metrics=accuracy)
learn.clip_grad();
learn.load(f'{name}-stage-2.1')
But I get the following error
RuntimeError: Error(s) in loading state_dict for Sequential:
size mismatch for 1.8.weight: copying a param with shape torch.Size([5004, 2048]) from checkpoint, the shape in current model is torch.Size([4542, 2048]).
size mismatch for 1.8.bias: copying a param with shape torch.Size([5004]) from checkpoint, the shape in current model is torch.Size([4542]).
The only thing that is different is that I added a random validation split that wasn't there in the stage-2.1 model; when I remove the split and have no validation set, as stage-2.1 was trained, all goes well.
What's happening?
|
Use the cnn_learner method and the latest PyTorch with the latest fastai. There was a breaking change and discontinuity between versions, which is why you are running into this.
The fastai website has many examples such as this one.
learn = cnn_learner(data, models.resnet50, metrics=accuracy)
|
https://stackoverflow.com/questions/54914106/
|
How to convert a Label matrix to colour matrix for image segmentation?
|
I have a label matrix of 256*256, for example. And the classes are 0-11, so 12 classes. I want to convert the label matrix to a colour matrix. I tried to do it with code like this:
for i in range(256):
for j in range(256):
if x[i][j] == 0:
dummy[i][j] = [255,255,255]
if x[i][j] == 1:
dummy[i][j] = [144,0,0]
if x[i][j] == 2:
dummy[i][j] = [0,255,0]
if x[i][j] == 3:
dummy[i][j] = [0,0,255]
if x[i][j] == 4:
dummy[i][j] = [144,255,0]
if x[i][j] == 5:
dummy[i][j] = [144,0,255]
if x[i][j] == 6:
dummy[i][j] = [0,255,255]
if x[i][j] == 7:
dummy[i][j] = [122,0,0]
if x[i][j] == 8:
dummy[i][j] = [0,122,0]
if x[i][j] == 9:
dummy[i][j] = [0,0,122]
if x[i][j] == 10:
dummy[i][j] = [122,0,122]
if x[i][j] == 11:
dummy[i][j] = [122,122,0]
It is highly inefficient. PS: the shape of x is [256, 256] and dummy is [256, 256, 3]. Is there any better way to do it?
|
You are looking for indexed RGB images - an RGB image where you have a fixed "palette" of colors, and each pixel indexes one of the colors of the palette. See this page for more information.
from PIL import Image
img = Image.fromarray(x, mode="P")
img.putpalette([
255, 255, 255, # index 0
144, 0, 0, # index 1
0, 255, 0, # index 2
0, 0, 255, # index 3
# ... and so on, you can take it from here.
])
img.show()
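If you would rather stay in numpy (e.g. to keep feeding tensors to PyTorch), a vectorized lookup table achieves the same in a single indexing operation; a minimal sketch, assuming x holds integer labels 0-11:
import numpy as np

# one RGB row per class label (index 0..11); fill in the remaining colours
palette = np.array([
    [255, 255, 255],   # class 0
    [144,   0,   0],   # class 1
    [  0, 255,   0],   # class 2
    [  0,   0, 255],   # class 3
    # ... one row per remaining class
], dtype=np.uint8)

dummy = palette[x]   # fancy indexing: (256, 256) -> (256, 256, 3) in one step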
|
https://stackoverflow.com/questions/54928522/
|
Efficient way to implement matrix multiplication when one matrix is extremely wide?
|
I need to multiply 3 matrices, A: 3000x100, B: 100x100, C: 100x3.6MM. I currently am just using normal matrix multiplication in PyTorch
A_gpu = torch.from_numpy(A)
B_gpu = torch.from_numpy(B)
C_gpu = torch.from_numpy(C)
D_gpu = (A_gpu @ B_gpu @ C_gpu.t()).t()
C is very wide so the data reuse on gpu is limited but are there other ways to speed this up? I have a machine with 4x GPUs.
|
Since you have four GPUs, you can harness them to perform efficient matrix multiplication. Notice however that the result of the multiplication has size 3000x3600000, which takes up 40GB in single-precision floating point (fp32). Unless you have a large enough CPU RAM, you cannot store the result of this computation there.
A possible solution for this is to break up the large matrix C into four smaller chunks, perform the matrix multiplication of each chunk on a different GPU, and keep the result on the GPU. Provided that each GPU has at least 10GB of memory, you will have enough memory for this.
If you do have also enough CPU memory, you can then move the results of all four GPUs onto the CPU and concatenate them (in fact, in this case you could have used only a single GPU and transfer the results from GPU to CPU each time). Otherwise, you can keep the results chunked on the GPUs, and you need to remember and keep track that the four chunks are actually part of one matrix.
import numpy as np
import torch.nn as nn
import torch
number_of_gpus = 4
# create the three matrices
A = np.random.normal(size=(3000,100))
B = np.random.normal(size=(100,100))
C = np.random.normal(size=(100,3600000))
# convert them to pytorch fp32 tensors
A = torch.from_numpy(A).float()
B = torch.from_numpy(B).float()
C = torch.from_numpy(C).float()
# calculate `A@B`, which is easy
AB = A@B
# split the large matrix `C` into 4 smaller chunks along the second dimension.
# we assume here that the size of the second dimension of `C` is divisible by 4.
C_split = torch.split(C,C.shape[1]//number_of_gpus,dim=1)
# loop over the four GPUs, and perform the calculation on each using the corresponding chunk of `C`
D_split = []
for i in range(number_of_gpus):
device = 'cuda:{:d}'.format(i)
D_split.append( AB.to(device) @ C_split[i].to(device))
# DO THIS ONLY IF YOU HAVE ENOUGH CPU MEMORY!! :
D = torch.cat([d.cpu() for d in D_split],dim=1)
|
https://stackoverflow.com/questions/54932734/
|
CUDA out of memory with matrix multiply
|
I'm trying to multiply 3 matrices, but am running out of CUDA memory.
# A: 3000 x 100 (~2MB)
# B: 100 x 100 (~0.05MB)
# C: 100 x 3MM (~2GB)
A = np.random.randn(3000, 100)
B = np.random.randn(100, 100)
C = np.random.randn(100, 3e6)
A_gpu = torch.from_numpy(A).cuda()
B_gpu = torch.from_numpy(B).cuda()
C_gpu = torch.from_numpy(C).cuda()
R_gpu = (A_gpu @ B_gpu @ C_gpu)
Cuda is requesting about 90GB of memory for this operation. I don't understand why.
|
Multiplying these matrices, your output is going to be a 3,000 x 3,000,000 matrix! So despite A and B being relatively small, the output R is HUGE: 9G elements. Moreover, I suspect the dtype of your matrices is float64 and not float32 (because you used numpy to init them). Therefore, each of the 9G elements of R_gpu requires 8 bytes, bringing you to at least 72GB of GPU memory for R_gpu alone. I suspect intermediate results and some other stuff occupy a little more of your GPU memory.
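One quick way to halve the footprint (a sketch; the full result would still not fit on a single GPU) is to cast to float32 before moving the data over:
A_gpu = torch.from_numpy(A).float().cuda()   # float32: 4 bytes per element instead of 8
B_gpu = torch.from_numpy(B).float().cuda()
C_gpu = torch.from_numpy(C).float().cuda()
# R_gpu would still need 3000 * 3e6 * 4 bytes ~= 36 GB, so you would also have
# to chunk C and move partial results off the GPU as you go.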
|
https://stackoverflow.com/questions/54936628/
|
'NoneType' object has no attribute 'add_summary'
|
I'm having trouble with visualizing the weights and bias of my model using tensorboardX.
Here is my model (it's pretty simple anyway):
self.pipe = nn.Sequential(nn.Linear(9, 128),
nn.ReLU(),
nn.Linear(128, 256),
nn.ReLU(),
nn.Linear(256,2),
nn.Softmax()
)
def forward(self, x):
return self.pipe(x)
And here is where I get error in pytorch
for name, param in net.named_parameters():
writer.add_histogram(name, param, epoch_size, bins='auto')
and the error is
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-70-d060d2df4423> in <module>()
1 for name, param in net.named_parameters():
----> 2 writer.add_histogram(name, param, epoch_size, bins='auto')
~\Anaconda3\lib\site-packages\tensorboardX\writer.py in add_histogram(self, tag, values, global_step, bins, walltime)
403 if isinstance(bins, six.string_types) and bins == 'tensorflow':
404 bins = self.default_bins
--> 405 self.file_writer.add_summary(
406 histogram(tag, values, bins), global_step, walltime)
407
AttributeError: 'NoneType' object has no attribute 'add_summary'
but I really have to see the histogram where the weights stuck in suboptimal.
so I changed code little bit to proceed step by step
param = np.array(list(net.parameters()))
print(param[0].data)
writer.add_histogram('weight', param[0].data)
BOOM! still same error, maybe that doesn't change at all.
|
The posted code snippet is insufficient to root cause the issue.
The member variable file_writer is set to None when the close() method is invoked on writer. Please check if the close() method was invoked on writer. The close() method is also invoked when the writer object is used as a Context manager and the with block is exited.
with SummaryWriter() as writer:
    writer.add_scalar(...)   # fine: the writer is still open inside the block

writer.add_histogram(...)    # this will cause a crash: exiting the block closed the writer
|
https://stackoverflow.com/questions/54937532/
|
Data Augmentation with torchvision.transforms in pytorch
|
I found out that data augmentation can be done in PyTorch by using torchvision.transforms. I also read that transformations are applied at each epoch. So I'm wondering whether the effect of copying each sample multiple times and then applying random transformations to them is the same as using torchvision.transforms on the original data set (unique images) and just training for a longer time (more epochs).
Thanks in advance.
|
This is a broad question, but don't be misled: the torchvision transforms don't increase the size of your dataset. They apply random or deterministic transforms to your current data set at runtime (hence each sample can be unique in each epoch).
the effect of copying each sample multiple times and then applying random transformation to them is same as using torchvision.transforms on original data set(unique images) and just training it for a longer time(more epochs).
Answer:
To physically increase your dataset, you can copy samples, or use tooling such as PyTorch or WEKA to generate augmented copies. However, more epochs are a totally different concept: of course, the more epochs you use, the better the model will be, but only until the validation loss stops improving while the training loss keeps falling.
Hope this helps.
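For illustration, a minimal sketch of random transforms that are re-sampled every time an image is accessed, i.e. in every epoch (the specific transforms here are illustrative, not from the original answer):
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),   # re-sampled on every access
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

# The same image index can yield a different tensor in each epoch:
train_set = datasets.CIFAR10(root='data', train=True, download=True,
                             transform=train_transform)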
|
https://stackoverflow.com/questions/54972534/
|
"function 'AddDllDirectory' not found" while importing pytorch
|
I keep getting this error while trying to import pytorch in jupyter notebook
import torch
AttributeError: function 'AddDllDirectory' not found
This error occurs only when I try installing PyTorch in anaconda (I used both the pip and conda commands to install PyTorch; I also tried installing only the CPU version and nothing changed, I still get the same error while trying to import it).
I searched the internet and found nothing remotely close to my problem.
Is there a way I can fix this error?
Thanks in advance
|
I fixed the issue by installing the file (KB2533623) from
Here (for Windows 7 users). This file apparently contains the missing "AddDllDirectory" function.
Posting this answer in case someone runs into the same problem.
|
https://stackoverflow.com/questions/54985803/
|
Calling forward function without .forward()
|
While looking at some pytorch code on pose estimation AlphaPose I noticed some unfamiliar syntax:
Basically, we define a Darknet class which inherits nn.Module properties like so: class Darknet(nn.Module)
This re-constructs the neural net from some config file and also defines functions to load pre-trained weights and a forward pass
Now, forward pass takes the following parameters:
def forward(self, x, CUDA)
I should note that in class definition forward is the only method that has a CUDA attribute (this will become important later on)
In the forward pass we get the predictions:
for i in range(number_of_modules):
x = self.module[i](x)
where module[i] was constructed as:
module = nn.Sequential()
conv = nn.Conv2d(prev_fileters, filters, kernel_size, stride, pad, bias=bias)
module.add_module("conv_{0}".format(index), conv)
We then invoke this model and (I presume) a forward method like so:
self.det_model = Darknet("yolo/cfg/yolov3-spp.cfg")
self.det_model.load_weights('models/yolo/yolov3-spp.weights')
self.det_model.cpu()
self.det_model.eval()
image = image.cpu()
prediction = self.det_model(img, CUDA = False)
I assume that the last line is the calling of the forward pass but why not use the .forward? Is this a pytorch specific syntax or am I missing some basic python principles?
|
This is nothing torch-specific. When you call an object as class_object(args), Python invokes the __call__ method of its class.
If you dig into the code of torch, specifically nn.Module, you will see that __call__ internally invokes forward while taking care of the hooks and states that pytorch allows. So when you are calling self.det_model(img, cuda) you are still calling forward.
See the code for nn.module here.
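As a minimal illustration of the mechanism (plain Python, not the actual nn.Module source):
class Dummy:
    def __call__(self, *args, **kwargs):
        print("__call__ runs first (nn.Module handles hooks here)")
        return self.forward(*args, **kwargs)

    def forward(self, x):
        return x * 2

d = Dummy()
print(d(3))   # prints the message, then 6 -- d(3) dispatched to forward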
|
https://stackoverflow.com/questions/54989230/
|
how sort randomly images and its mask?
|
Before splitting the dataset I need to randomly shuffle the data and then do the splitting. This is the snippet for splitting the dataset, which is not random. I am wondering how I can do this for the images and the corresponding masks in folder_mask?
folder_data = glob.glob("D:\\Neda\\Pytorch\\U-net\\my_data\\imagesResized\\*.png")
folder_mask = glob.glob("D:\\Neda\\Pytorch\\U-net\\my_data\\labelsResized\\*.png")
# split these path using a certain percentage
len_data = len(folder_data)
print("count of dataset: ", len_data)
# count of dataset: 992
split_1 = int(0.6 * len(folder_data))
split_2 = int(0.8 * len(folder_data))
#folder_data.sort()
train_image_paths = folder_data[:split_1]
print("count of train images is: ", len(train_image_paths))
valid_image_paths = folder_data[split_1:split_2]
print("count of validation image is: ", len(valid_image_paths))
test_image_paths = folder_data[split_2:]
print("count of test images is: ", len(test_image_paths))
train_mask_paths = folder_mask[:split_1]
valid_mask_paths = folder_mask[split_1:split_2]
test_mask_paths = folder_mask[split_2:]
train_dataset = CustomDataset(train_image_paths, train_mask_paths)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1,
shuffle=True, num_workers=2)
valid_dataset = CustomDataset(valid_image_paths, valid_mask_paths)
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=1,
shuffle=True, num_workers=2)
test_dataset = CustomDataset(test_image_paths, test_mask_paths)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=1,
shuffle=False, num_workers=2)
dataLoaders = {
'train': train_loader,
'valid': valid_loader,
'test': test_loader,
}
|
As far as I understood, you want to randomize the order of the pictures, so that with each rerun there are different photos in the train and test set. Assuming you want to do this in more or less plain Python you can do the following.
The easiest way to shuffle a list of elements in Python is:
import random
random.shuffle(items)  # shuffles the list in place
So you have two lists and want to keep the link between data and masks. If you can accept a rather quick hack, I'd propose something like this.
import random
folder_data = glob.glob("D:\\Neda\\Pytorch\\U-net\\my_data\\imagesResized\\*.png")
folder_mask = glob.glob("D:\\Neda\\Pytorch\\U-net\\my_data\\labelsResized\\*.png")
assert len(folder_data) == len(folder_mask)  # everything else would be bad
indices = list(range(len(folder_data)))
random.shuffle(indices)
Now you have a list of indices you can split and then use the indices from the splitted list to access the other lists.
split_1 = int(0.6 * len(folder_data))
split_2 = int(0.8 * len(folder_data))
train_image_paths = [folder_data[i] for i in indices[:split_1]]
train_mask_paths = [folder_mask[i] for i in indices[:split_1]]
# and so on for the validation and test splits...
This would be the plain Python way. But there are functions to do this in packages like sklearn, so you might consider using those. They'll save you a lot of work. (Usually it's better to reuse code than to implement it yourself.)
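For reference, a sketch of the scikit-learn route (assuming scikit-learn is installed); train_test_split shuffles and keeps the paired lists aligned:
from sklearn.model_selection import train_test_split

# 60% train, then split the remaining 40% evenly into validation and test
train_imgs, rest_imgs, train_masks, rest_masks = train_test_split(
    folder_data, folder_mask, test_size=0.4, random_state=42)
valid_imgs, test_imgs, valid_masks, test_masks = train_test_split(
    rest_imgs, rest_masks, test_size=0.5, random_state=42)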
|
https://stackoverflow.com/questions/55011980/
|
shall I apply softmax before cross entropy?
|
The pytorch tutorial (https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py) trains a convolutional neural network (CNN) on a CIFAR dataset.
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
The network looks good except that the very last layer, fc3, predicts the probability of belonging to 10 classes without a softmax. Shouldn't we apply a softmax first to make sure the output of the fc layer is between 0 and 1 and sums to 1 before calculating the cross-entropy loss?
I tested this by applying the softmax and rerunning, but the accuracy dropped to around 35%. This seems counterintuitive. What is the explanation?
|
CrossEntropyLoss in PyTorch is already implemented with Softmax:
https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
The answer to the second part of your question is a little more complicated. There can be multiple causes for reduction in accuracy. Theoretically speaking, since the softmax layer you added can predict the correct answer in a reasonable accuracy, the following layer should be able to do the same by preserving the maximum value with identity between the last two layers. Although the softmax normalizes those bounded outputs (between 0 and 1) again, it may change the way those are distributed, but still can preserve the maximum and therefore the class that is predicted.
However, in practice, things are a little bit different. When you have a double softmax in the output layer, you basically change the output function in such a way that it changes the gradients that are propagated to your network. The softmax with cross entropy is a preferred loss function due to the gradients it produces. You can prove it to yourself by computing the gradients of the cost function, accounting for the fact that each "activation" (softmax) is bounded between 0 and 1. The additional softmax "behind" the original one just multiplies the gradients by values between 0 and 1, thus reducing their magnitude. This affects the updates to the weights. Maybe it can be fixed by changing the learning rate, but this is strongly discouraged. Just have one softmax and you're done.
See Michael Nielsen's book, chapter 3, for a more thorough explanation.
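To make this concrete, a small sketch showing that nn.CrossEntropyLoss expects raw logits; adding your own softmax first gives the "double softmax" described above:
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)                 # raw scores from the last Linear layer
target = torch.randint(0, 10, (4,))
criterion = nn.CrossEntropyLoss()

loss_correct = criterion(logits, target)                   # what the tutorial does
loss_double = criterion(F.softmax(logits, dim=1), target)  # extra softmax: shrunken gradients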
|
https://stackoverflow.com/questions/55030217/
|
CycleGAN with 1-channel tiffs both as input and as output
|
I am running CycleGAN with different types of tiffs in trainA and trainB. The tiffs are 256x256 pixels in size and have 1 channel per pixel. I am using tiffs to have a wide range of values.
I changed the code as suggested in the pytorch-CycleGAN-and-pix2pix
repo (https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/320 and similar), but what I got out during training in ./checkpoints are three-channel PNGs. Do you think it would be possible to change the code so that it goes from 1-channel tiff to 1-channel tiff with no information loss? As far as I understand, at present the code is converting the imported files to PNGs along the way. In other words: I would like my tensors to be [256*256*int_range,1]. Thanks for the help!
|
Have you tried adding the parameters --input_nc 1 --output_nc 1 during training? It will convert the number of channels from 3 to 1.
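For example (a sketch; the dataset path and experiment name below are placeholders):
python train.py --dataroot ./datasets/my_tiffs --name tiff_cyclegan \
    --model cycle_gan --input_nc 1 --output_nc 1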
|
https://stackoverflow.com/questions/55032389/
|
cuDNN error: CUDNN_STATUS_BAD_PARAM.Can someone explain why i am getting this error and how can i correct it?
|
I am trying to implement a character LSTM using PyTorch, but I am getting CUDNN_STATUS_BAD_PARAM errors. This is the training loop; I get the error on the line output = model(input_seq).
for epoch in tqdm(range(epochs)):
for i in range(len(seq)//batch_size):
sidx = i*batch_size
eidx = sidx + batch_size
x = seq[sidx:eidx]
x = torch.tensor(x).cuda()
input_seq =torch.nn.utils.rnn.pack_padded_sequence(x,seq_lengths,batch_first = True)
y = out_seq[sidx:eidx]
output = model(input_seq)
loss = criterion(output,y)
loss.backward()
optimizer.step()
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
180 else:
181 result = _impl(input, batch_sizes, hx, self._flat_weights, self.bias,
--> 182 self.num_layers, self.dropout, self.training, self.bidirectional)
183 output = result[0]
184 hidden = result[1:] if self.mode == 'LSTM' else result[1]
RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
|
I got the same error. If you switch to CPU, you'll get a much better description of the error. In my case the problem was the type of input I was giving to the network: I was sending long (integer) tensors, while the model needed float. I made the following change and the code worked. Basically, switching to CPU gives better error descriptions.
input_seq = input_seq.float().cuda()
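A sketch of the debugging step itself, reusing names from the question's loop (move both the model and one batch to CPU to get a readable error):
model_cpu = model.cpu()
x_cpu = torch.tensor(x).float()   # on CPU a dtype mismatch raises a readable error
packed = torch.nn.utils.rnn.pack_padded_sequence(x_cpu, seq_lengths, batch_first=True)
output = model_cpu(packed)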
|
https://stackoverflow.com/questions/55042931/
|
Torchtext AttributeError: 'Example' object has no attribute 'text_content'
|
I'm working with RNN and using Pytorch & Torchtext. I've got a problem with building vocab in my RNN. My code is as follows:
TEXT = Field(tokenize=tokenizer, lower=True)
LABEL = LabelField(dtype=torch.float)
trainds = TabularDataset(
path='drive/{}'.format(TRAIN_PATH), format='tsv',
fields=[
('label_start', LABEL),
('label_end', None),
('title', None),
('symbol', None),
('text_content', TEXT),
])
testds = TabularDataset(
path='drive/{}'.format(TEST_PATH), format='tsv',
fields=[
('text_content', TEXT),
])
TEXT.build_vocab(trainds, testds)
When I want to build vocab, I'm getting this annoying error:
AttributeError: 'Example' object has no attribute 'text_content'
I'm sure that there is no missing text_content attribute. I added a try-except to display this specific case:
try:
print(len(trainds[i]))
except:
print(trainds[i].text_content)
Surprisingly, I don't get any error and this specific print command shows:
['znana', 'okresie', 'masarni', 'walc', 'y', 'myśl', 'programie', 'sprawy', ...]
So it indicates, that there is text_content attr. When I perform this on a smaller dataset, it works like a charm. This problem occurs when I want to work with proper data. I ran out of ideas. Maybe someone had a similar case and can explain it.
My full traceback:
AttributeError Traceback (most recent call last)
<ipython-input-16-cf31866a07e7> in <module>()
155
156 if __name__ == "__main__":
--> 157 main()
158
<ipython-input-16-cf31866a07e7> in main()
117 break
118
--> 119 TEXT.build_vocab(trainds, testds)
120 print('zbudowano dla text')
121 LABEL.build_vocab(trainds)
/usr/local/lib/python3.6/dist-packages/torchtext/data/field.py in build_vocab(self, *args, **kwargs)
260 sources.append(arg)
261 for data in sources:
--> 262 for x in data:
263 if not self.sequential:
264 x = [x]
/usr/local/lib/python3.6/dist-packages/torchtext/data/dataset.py in __getattr__(self, attr)
152 if attr in self.fields:
153 for x in self.examples:
--> 154 yield getattr(x, attr)
155
156 @classmethod
AttributeError: 'Example' object has no attribute 'text_content'
|
This problem arises when the fields are not passed in the same order as the columns in the csv/tsv file. The order must be the same. Also check that you declare neither more nor fewer fields than there are columns in the csv/tsv file.
|
https://stackoverflow.com/questions/55060888/
|
convert cv2.umat to numpy array
|
The processed_image() function returns a cv2.UMat value which has to be reshaped from 3 dimensions (h, ch, w) to 4 dimensions (h, ch, w, 1), so I need it converted to a numpy array - or, if possible, help me reshape the cv2.UMat variable directly and convert it to a PyTorch tensor that can be assigned to reshaped_image_tensor.
img_w=640
img_h=640
img_ch=3
umat_img = cv2.UMat(img)
display_one(umat_img, "RESPONSE") #function created by me to display image
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
with torch.no_grad():
processed_img = preprocess_image(umat_img, model_image_size = (img_h, img_ch, img_w))
#___________write YOUR CODE here________
reshaped_images_tensor = torch.from_numpy(processed_img.reshape(img_h, img_ch, img_w, 1)).float().to(device) #images_tensor.reshape(img_h, img_ch, img_w, 1)
outputs = model(reshaped_images_tensor)
_, predicted = torch.max(outputs, 1)
c = predicted.squeeze()
output_probability(predicted, processed_img, umat_img)
if ord('q')==cv2.waitKey(10):
exit(0)
|
I didn't quite catch your question, but you can get the numpy data of an OpenCV UMat with its get() method, as sketched below, and you should probably permute your tensor before feeding it into your model.
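A minimal sketch, assuming umat_img holds an HxWxC image as in the question:
np_img = umat_img.get()                        # cv2.UMat -> numpy array of shape (H, W, C)
tensor = torch.from_numpy(np_img).float()
tensor = tensor.permute(2, 0, 1).unsqueeze(0)  # (H, W, C) -> (1, C, H, W)
tensor = tensor.to(device)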
|
https://stackoverflow.com/questions/55062886/
|
How to load images with multiple JSON annotation in PyTorch
|
I would like to know how I can use the PyTorch data loader with my custom file structure. I have gone through the PyTorch documentation, but all the examples use separate folders per class.
My folder structure consists of 2 folders(called training and validation), each with 2 subfolders(called images and json_annotations). Each image in the "images" folder has multiple objects(like cars, cycles, man etc) and each is annotated and have separate JSON files. Standard coco annotation is followed. My intention is to make a neural network which can do real-time classification from videos.
Edit 1:
I have done the coding as suggested by Fábio Perez.
class lDataSet(data.Dataset):
def __init__(self, path_to_imgs, path_to_json):
self.path_to_imgs = path_to_imgs
self.path_to_json = path_to_json
self.img_ids = os.listdir(path_to_imgs)
def __getitem__(self, idx):
img_id = self.img_ids[idx]
img_id = os.path.splitext(img_id)[0]
img = cv2.imread(os.path.join(self.path_to_imgs, img_id + ".jpg"))
load_json = json.load(open(os.path.join(self.path_to_json, img_id + ".json")))
#n = len(load_json)
#bboxes = load_json['annotation'][n]['segmentation']
return img, load_json
def __len__(self):
return len(self.img_ids)
When I try this
l_data = lDataSet(path_to_imgs = '/home/training/images', path_to_json = '/home/training/json_annotations')
I'm getting l_data with l_data[][0] - images and l_data with json. Now I'm confused. How will I use it with the finetuning example available in PyTorch? In that example, the dataset and dataloader are created as shown below.
https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
# Create training and validation datasets
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']}
# Create training and validation dataloaders
dataloaders_dict = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True, num_workers=4) for x in ['train', 'val']}
|
You should be able to implement your own dataset with data.Dataset. You just need to implement __len__ and __getitem__ methods.
In your case, you can iterate through all images in the image folder (then you can store the image ids in a list in your Dataset). Then, you use the index passed to __getitem__ to get the corresponding image id. With this image id, you can read the corresponding JSON file and return the target data that you need.
Something like this:
class YourDataLoader(data.Dataset):
    def __init__(self, path_to_imgs, path_to_json):
        self.path_to_imgs = path_to_imgs
        self.path_to_json = path_to_json
        self.image_ids = iterate_through_images(path_to_imgs)

    def __getitem__(self, idx):
        img_id = self.image_ids[idx]
        img = load_image(os.path.join(self.path_to_imgs, img_id))
        bboxes = load_bboxes(os.path.join(self.path_to_json, img_id))
        return img, bboxes

    def __len__(self):
        return len(self.image_ids)
In iterate_through_images you get all the ids (e.g. filenames) of images in a directory.
In load_bboxes you read the JSON and get the information you need.
I have a JSON loader implementation here if you want a reference.
|
https://stackoverflow.com/questions/55075715/
|
Display a tensor image in matplotlib
|
I'm doing a project for Udacity's AI with Python nanodegree.
I'm trying to display a torch.cuda.FloatTensor that I obtained from an image file path. Below that image will be a bar chart showing the top 5 most likely flower names with their associated probabilities.
plt.figure(figsize=(3,3))
path = 'flowers/test/1/image_06743.jpg'
top5_probs, top5_class_names = predict(path, model,5)
print(top5_probs)
print(top5_class_names)
flower_np_image = process_image(Image.open(path))
flower_tensor_image = torch.from_numpy(flower_np_image).type(torch.cuda.FloatTensor)
flower_tensor_image = flower_tensor_image.unsqueeze_(0)
axs = imshow(flower_tensor_image, ax = plt)
axs.axis('off')
axs.title(top5_class_names[0])
axs.show()
fig, ax = plt.subplots()
y_pos = np.arange(len(top5_class_names))
plt.barh(y_pos, list(reversed(top5_probs)))
plt.yticks(y_pos, list(reversed(top5_class_names)))
plt.ylabel('Flower Type')
plt.xlabel('Class Probability')
The imshow function was given to me as
def imshow(image, ax=None, title=None):
if ax is None:
fig, ax = plt.subplots()
# PyTorch tensors assume the color channel is the first dimension
# but matplotlib assumes is the third dimension
image = image.transpose((1, 2, 0))
# Undo preprocessing
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
# Image needs to be clipped between 0 and 1 or it looks like noise when displayed
image = np.clip(image, 0, 1)
ax.imshow(image)
return ax
But I get this output
[0.8310797810554504, 0.14590543508529663, 0.013837042264640331, 0.005048676859587431, 0.0027143193874508142]
['petunia', 'pink primrose', 'balloon flower', 'hibiscus', 'tree mallow']
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-17-f54be68feb7a> in <module>()
12 flower_tensor_image = flower_tensor_image.unsqueeze_(0)
13
---> 14 axs = imshow(flower_tensor_image, ax = plt)
15 axs.axis('off')
16 axs.title(top5_class_names[0])
<ipython-input-15-9c543acc89cc> in imshow(image, ax, title)
5 # PyTorch tensors assume the color channel is the first dimension
6 # but matplotlib assumes is the third dimension
----> 7 image = image.transpose((1, 2, 0))
8
9 # Undo preprocessing
TypeError: transpose(): argument 'dim0' (position 1) must be int, not tuple
<matplotlib.figure.Figure at 0x7f5855792160>
My predict function works, but the imshow just chokes with the call to transpose. Any ideas on how to fix this? I think it vaguely has something to do with converting back to a numpy array.
The notebook that I'm working on can be found at https://github.com/BozSteinkalt/ImageClassificationProject
Thanks!
|
You are trying to apply numpy.transpose to a torch.Tensor object, thus calling tensor.transpose instead.
You should convert flower_tensor_image to numpy first, using .numpy()
axs = imshow(flower_tensor_image.detach().cpu().numpy(), ax = plt)
|
https://stackoverflow.com/questions/55083571/
|
RuntimeError: Only tuples, lists and Variables supported as JIT inputs, but got NoneType
|
My code is
a=torch.randn(1,80,100,requires_grad=True)
torch.onnx.export(waveglow,a, "waveglow.onnx")
I am trying to export a PyTorch model to ONNX format so I can use it in TensorRT. While testing my model in PyTorch, the input tensor dimension is (1, 80, x), where x varies depending on the input text length (the model I am using is a TTS model named WaveGlow).
When I try to run the above code to export the model to ONNX, I always get this error:
RuntimeError: Only tuples, lists and Variables supported as JIT inputs, but got NoneType
Please help
|
Given that you got a NoneType, perhaps you should check whether there is an actual input: the error says you actually passed None somewhere.
Also, any reason to not use Variable? Variable converts your inputs to a tensor that can be accepted as an input for torch.onnx.export.
|
https://stackoverflow.com/questions/55085660/
|
Model returns a Nan value
|
I was trying to build a neural network with 4 input nodes/features and just one output feature (0/1). I wrote this code and it runs, but while training the model returns NaN. I debugged too, and the weights and biases are fine until they go through the model.
From what I've searched so far, this could be a problem with the way I am passing the data.
My input data is : tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00, 1.5340e+00],
[1.5000e+01, 1.0000e-01, 2.4210e+00, 3.0000e+01],
[3.0000e+00, 2.2000e-01, 2.2000e-01, 4.5000e+01],
...,
[1.0000e+00, 2.0000e-02, 2.0000e-02, 1.5000e+01],
[6.0000e+00, 2.0000e-01, 2.0000e-01, 1.5000e+01],
[1.7000e+01, 5.2400e-01, 5.2400e-01, 2.0000e+00]], dtype=torch.float64)
import torch
from torchvision import datasets, transforms
import pandas as pd
import numpy as np
from torch.autograd import Variable
# Import tensor dataset & data loader
from torch.utils.data import TensorDataset, DataLoader
from torch import nn, optim
import torch.nn.functional as F
file = pd.read_csv('ks-projects-201801.csv')
array = np.array(file.values)
result = np.empty(len(array))
input_data = np.empty((len(array), 4))
for i in range(len(array)):
input_data[i] = np.array([array[i][10], array[i][12]/1000, array[i][13]/1000, array[i][14]/1000])
if array[i][9] == 'successful':
result[i] = 1
else:
result[i] = 0
input_node = Variable(torch.from_numpy(input_data))
output = torch.from_numpy(result)
print(input_node)
print(output)
train_ds = TensorDataset(input_node.squeeze(), output.squeeze())
batch_size = 5
train_dl = DataLoader(train_ds, batch_size, shuffle=True)
This is the actual model and training
model = nn.Linear(4, 1)
print(model.weight)
print(model.bias)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.003)
epochs = 5
model = model.double()
for e in range(epochs):
running_loss = 0
for xb, yb in train_dl:
optimizer.zero_grad()
res = model(xb)
loss = criterion(res, yb)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"model : {loss}")
This prints out model: nan for every epoch and terminates. I am very new to pytorch and I'm not sure how to handle this problem.
|
If you see NaNs in the loss, try gradient clipping and data normalisation. Normalising the data is a must (i.e. normalise the input data so that each feature has mean 0 and variance 1).
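A minimal sketch, reusing the names from the question (the clipping threshold is an illustrative choice):
# Normalise each input feature to zero mean, unit variance
mean = input_data.mean(axis=0)
std = input_data.std(axis=0)
input_data = (input_data - mean) / (std + 1e-8)

# Inside the training loop, clip gradients between backward() and step():
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()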
|
https://stackoverflow.com/questions/55087458/
|
A Basic Python Question about Defining Function
|
I have a basic question regarding Python code.
For example,
import torch
import torch.nn as nn
loss = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
output = loss(input, target)
output.backward()
Why do I need to define the loss function at the first line? I can't replace loss() at the 4th line with nn.MSELoss().
|
As a few others have pointed out, nn.MSELoss is a class and not a function. In line 1 you are creating an object of type torch.nn.modules.loss.MSELoss. And because it inherits from nn.Module, you can call this object like you would call a function, like you do in line 4.
If you don't want to use the MSELoss class, you can also import torch.nn.functional as F and then use F.mse_loss(input, target) directly (this is what pytorch normally calls for you).
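To illustrate the two equivalent routes (a minimal sketch mirroring the question's snippet):
import torch
import torch.nn as nn
import torch.nn.functional as F

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)

loss1 = nn.MSELoss()(input, target)   # instantiate the class, then call the instance
loss2 = F.mse_loss(input, target)     # or call the functional form directly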
|
https://stackoverflow.com/questions/55092187/
|
How to handle large JSON file in Pytorch?
|
I am working on a time series problem. Different training time series are stored in a large JSON file of about 30GB. In TensorFlow I know how to use TFRecords. Is there a similar way in PyTorch?
|
I suppose IterableDataset (docs) is what you need, because:
you probably want to traverse files without random access;
the number of samples in the jsons is not pre-computed.
I've made a minimal usage example under the assumption that every line of the dataset file is a JSON object itself, but you can change the logic.
import json
from torch.utils.data import DataLoader, IterableDataset
class JsonDataset(IterableDataset):
def __init__(self, files):
self.files = files
def __iter__(self):
for json_file in self.files:
with open(json_file) as f:
for sample_line in f:
sample = json.loads(sample_line)
yield sample['x'], sample['time'], ...
...
dataset = JsonDataset(['data/1.json', 'data/2.json', ...])
dataloader = DataLoader(dataset, batch_size=32)
for batch in dataloader:
y = model(batch)
|
https://stackoverflow.com/questions/55109684/
|
How to visualize my training history in pytorch?
|
How do you visualize the training history of your PyTorch model, like in Keras here?
I have a trained PyTorch model and I want to see a graph of its training.
Can I do this using only matplotlib? If yes, can someone give me resources to follow.
|
You have to save the loss while training. A trained model won't have a history of its loss, so you need to train again.
Save the loss while training, then plot it against the epochs using matplotlib. In your training function, where the loss is being calculated, save it to a file (or a list) and visualize it later.
Also, you can use tensorboardX if you want to visualize in realtime.
This is a tutorial for tensorboardX: http://www.erogol.com/use-tensorboard-pytorch/
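A minimal sketch of the matplotlib route (the loop body is a placeholder for your own training step):
import matplotlib.pyplot as plt

losses = []
for epoch in range(num_epochs):
    # ... forward pass, loss = criterion(output, target), backward, step ...
    losses.append(loss.item())

plt.plot(losses)
plt.xlabel('Epoch')
plt.ylabel('Training loss')
plt.show()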
|
https://stackoverflow.com/questions/55112960/
|
How to calculate Batch Pairwise Distance in PyTorch efficiently
|
I have tensors X of shape BxNxD and Y of shape BxMxD.
I want to compute the pairwise distances for each element in the batch, i.e. obtain a BxMxN tensor.
How do I do this?
There is some discussion on this topic here: https://github.com/pytorch/pytorch/issues/9406, but I don't understand it as there are many implementation details while no actual solution is highlighted.
A naive approach would be to use the answer for non-batched pairwise distances as discussed here: https://discuss.pytorch.org/t/efficient-distance-matrix-computation/9065, i.e.
import torch
import numpy as np
B = 32
N = 128
M = 256
D = 3
X = torch.from_numpy(np.random.normal(size=(B, N, D)))
Y = torch.from_numpy(np.random.normal(size=(B, M, D)))
def pairwise_distances(x, y=None):
x_norm = (x**2).sum(1).view(-1, 1)
if y is not None:
y_t = torch.transpose(y, 0, 1)
y_norm = (y**2).sum(1).view(1, -1)
else:
y_t = torch.transpose(x, 0, 1)
y_norm = x_norm.view(1, -1)
dist = x_norm + y_norm - 2.0 * torch.mm(x, y_t)
return torch.clamp(dist, 0.0, np.inf)
out = []
for b in range(B):
out.append(pairwise_distances(X[b], Y[b]))
print(torch.stack(out).shape)
How can I do this without looping over B?
Thanks
|
I had a similar issue and spent some time to find the easiest and fastest solution. Now you can compute batched distance by using PyTorch cdist which will give you BxMxN tensor:
torch.cdist(Y, X)
Also, it works well if you just want to compute distances between each pair of rows of two matrices.
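Using the shapes from the question:
import torch

B, N, M, D = 32, 128, 256, 3
X = torch.randn(B, N, D)
Y = torch.randn(B, M, D)

dist = torch.cdist(Y, X)
print(dist.shape)   # torch.Size([32, 256, 128]) == (B, M, N)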
|
https://stackoverflow.com/questions/55126072/
|
.py not reading from the content folder
|
|-content
|-utils
|- parse_config.py
|-models.py
folder structure
This is my folder structure in google colab.
I have already installed Pytorch and all other requirments for the project.
Here models.py file is not able to access the files in utils folder.
In my models.py I'm importing utils.parse_config which is inside the utils folder but it shows the following error.
ImportErrorTraceback (most recent call last) <ipython-input-6-77f4a3369184> in <module>()
----> 1 from models import *
2 from utils import *
3
4 import os, sys, time, datetime, random
5 import torch
/content/models.py in <module>()
9 from PIL import Image
10
---> 11 from utils.parse_config import *
12 from utils.utils import build_targets
13 from collections import defaultdict
ImportError: No module named utils.parse_config
--------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the "Open Examples" button below.
---------------------------------------------------------------------------
Error
How do I make models.py get access to files in utils folder?
|
models.py does not seem to be in the same directory as the utils folder. Please recheck your file locations.
|
https://stackoverflow.com/questions/55129148/
|
GRU Language Model not Training Properly
|
I’ve tried reimplementing a simple GRU language model using just a GRU and a linear layer (the full code is also at https://www.kaggle.com/alvations/gru-language-model-not-training-properly):
class Generator(nn.Module):
def __init__(self, vocab_size, embedding_size, hidden_size, num_layers):
super(Generator, self).__init__()
# Initialize the embedding layer with the
# - size of input (i.e. no. of words in input vocab)
# - no. of hidden nodes in the embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_size, padding_idx=0)
# Initialize the GRU with the
# - size of the input (i.e. embedding layer)
# - size of the hidden layer
self.gru = nn.GRU(embedding_size, hidden_size, num_layers)
# Initialize the "classifier" layer to map the RNN outputs
# to the vocabulary. Remember we need to -1 because the
# vectorized sentence we left out one token for both x and y:
# - size of hidden_size of the GRU output.
# - size of vocabulary
self.classifier = nn.Linear(hidden_size, vocab_size)
def forward(self, inputs, use_softmax=False, hidden=None):
# Look up for the embeddings for the input word indices.
embedded = self.embedding(inputs)
# Put the embedded inputs into the GRU.
output, hidden = self.gru(embedded, hidden)
# Matrix manipulation magic.
batch_size, sequence_len, hidden_size = output.shape
# Technically, linear layer takes a 2-D matrix as input, so more manipulation...
output = output.contiguous().view(batch_size * sequence_len, hidden_size)
# Put it through the classifier
# And reshape it to [batch_size x sequence_len x vocab_size]
output = self.classifier(output).view(batch_size, sequence_len, -1)
return (F.softmax(output,dim=2), hidden) if use_softmax else (output, hidden)
def generate(self, max_len, temperature=1.0):
pass
And the training routine:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Set the hidden_size of the GRU
embed_size = 100
hidden_size = 100
num_layers = 1
# Setup the data.
batch_size=50
kilgariff_data = KilgariffDataset(tokenized_text)
dataloader = DataLoader(dataset=kilgariff_data, batch_size=batch_size, shuffle=True)
criterion = nn.CrossEntropyLoss(ignore_index=kilgariff_data.vocab.token2id['<pad>'], size_average=True)
model = Generator(len(kilgariff_data.vocab), embed_size, hidden_size, num_layers).to(device)
learning_rate = 0.003
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
#model = nn.DataParallel(model)
losses = []
def train(num_epochs, dataloader, model, criterion, optimizer):
plt.ion()
for _e in range(num_epochs):
for batch in tqdm(dataloader):
x = batch['x'].to(device)
x_len = batch['x_len'].to(device)
y = batch['y'].to(device)
# Zero gradient.
optimizer.zero_grad()
# Feed forward.
output, hidden = model(x, use_softmax=True)
# Compute loss:
# Shape of the `output` is [batch_size x sequence_len x vocab_size]
# Shape of `y` is [batch_size x sequence_len]
# CrossEntropyLoss expects `output` to be [batch_size x vocab_size x sequence_len]
_, prediction = torch.max(output, dim=2)
loss = criterion(output.permute(0, 2, 1), y)
loss.backward()
optimizer.step()
losses.append(loss.float().data)
clear_output(wait=True)
plt.plot(losses)
plt.pause(0.05)
train(50, dataloader, model, criterion, optimizer)
#learning_rate = 0.05
#optimizer = optim.SGD(model.parameters(), lr=learning_rate)
#train(4, dataloader, model, criterion, optimizer)
But when the model is predicting, we see that it’s only predicting “the” and comma “,”.
Anyone spot something wrong with my code? Or hyperparameters?
The full code:
# coding: utf-8
# In[1]:
# IPython candies...
from IPython.display import Image
from IPython.core.display import HTML
from IPython.display import clear_output
# In[2]:
import numpy as np
from tqdm import tqdm
import pandas as pd
from gensim.corpora import Dictionary
import torch
from torch import nn, optim, tensor, autograd
from torch.nn import functional as F
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# In[3]:
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
sns.set(rc={'figure.figsize':(12, 8)})
torch.manual_seed(42)
# In[4]:
try: # Use the default NLTK tokenizer.
from nltk import word_tokenize, sent_tokenize
# Testing whether it works.
# Sometimes it doesn't work on some machines because of setup issues.
word_tokenize(sent_tokenize("This is a foobar sentence. Yes it is.")[0])
except: # Use a naive sentence tokenizer and toktok.
import re
from nltk.tokenize import ToktokTokenizer
# See https://stackoverflow.com/a/25736515/610569
sent_tokenize = lambda x: re.split(r'(?<=[^A-Z].[.?]) +(?=[A-Z])', x)
# Use the toktok tokenizer that requires no dependencies.
toktok = ToktokTokenizer()
word_tokenize = word_tokenize = toktok.tokenize
# In[5]:
import os
import requests
import io #codecs
# Text version of https://kilgarriff.co.uk/Publications/2005-K-lineer.pdf
if os.path.isfile('language-never-random.txt'):
with io.open('language-never-random.txt', encoding='utf8') as fin:
text = fin.read()
else:
url = "https://gist.githubusercontent.com/alvations/53b01e4076573fea47c6057120bb017a/raw/b01ff96a5f76848450e648f35da6497ca9454e4a/language-never-random.txt"
text = requests.get(url).content.decode('utf8')
with io.open('language-never-random.txt', 'w', encoding='utf8') as fout:
fout.write(text)
# In[6]:
# Tokenize the text.
tokenized_text = [list(map(str.lower, word_tokenize(sent)))
for sent in sent_tokenize(text)]
# In[7]:
class KilgariffDataset(nn.Module):
def __init__(self, texts):
self.texts = texts
# Initialize the vocab
special_tokens = {'<pad>': 0, '<unk>':1, '<s>':2, '</s>':3}
self.vocab = Dictionary(texts)
self.vocab.patch_with_special_tokens(special_tokens)
# Keep track of the vocab size.
self.vocab_size = len(self.vocab)
# Keep track of how many data points.
self._len = len(texts)
# Find the longest text in the data.
self.max_len = max(len(txt) for txt in texts)
def __getitem__(self, index):
vectorized_sent = self.vectorize(self.texts[index])
x_len = len(vectorized_sent)
# To pad the sentence:
# Pad left = 0; Pad right = max_len - len of sent.
pad_dim = (0, self.max_len - len(vectorized_sent))
vectorized_sent = F.pad(vectorized_sent, pad_dim, 'constant')
return {'x':vectorized_sent[:-1],
'y':vectorized_sent[1:],
'x_len':x_len}
def __len__(self):
return self._len
def vectorize(self, tokens, start_idx=2, end_idx=3):
"""
:param tokens: Tokens that should be vectorized.
:type tokens: list(str)
"""
# See https://radimrehurek.com/gensim/corpora/dictionary.html#gensim.corpora.dictionary.Dictionary.doc2idx
# Lets just cast list of indices into torch tensors directly =)
vectorized_sent = [start_idx] + self.vocab.doc2idx(tokens) + [end_idx]
return torch.tensor(vectorized_sent)
def unvectorize(self, indices):
"""
:param indices: Converts the indices back to tokens.
:type tokens: list(int)
"""
return [self.vocab[i] for i in indices]
# In[8]:
kilgariff_data = KilgariffDataset(tokenized_text)
len(kilgariff_data.vocab)
# In[9]:
batch_size = 10
dataloader = DataLoader(dataset=kilgariff_data, batch_size=batch_size, shuffle=True)
for data_dict in dataloader:
# Sort indices of data in batch by lengths.
sorted_indices = np.array(data_dict['x_len']).argsort()[::-1].tolist()
data_batch = {name:_tensor[sorted_indices]
for name, _tensor in data_dict.items()}
print(data_batch)
break
# In[97]:
class Generator(nn.Module):
def __init__(self, vocab_size, embedding_size, hidden_size, num_layers):
super(Generator, self).__init__()
# Initialize the embedding layer with the
# - size of input (i.e. no. of words in input vocab)
# - no. of hidden nodes in the embedding layer
self.embedding = nn.Embedding(vocab_size, embedding_size, padding_idx=0)
# Initialize the GRU with the
# - size of the input (i.e. embedding layer)
# - size of the hidden layer
self.gru = nn.GRU(embedding_size, hidden_size, num_layers)
# Initialize the "classifier" layer to map the RNN outputs
# to the vocabulary. Remember we need to -1 because the
# vectorized sentence we left out one token for both x and y:
# - size of hidden_size of the GRU output.
# - size of vocabulary
self.classifier = nn.Linear(hidden_size, vocab_size)
def forward(self, inputs, use_softmax=False, hidden=None):
# Look up for the embeddings for the input word indices.
embedded = self.embedding(inputs)
# Put the embedded inputs into the GRU.
output, hidden = self.gru(embedded, hidden)
# Matrix manipulation magic.
batch_size, sequence_len, hidden_size = output.shape
# Technically, linear layer takes a 2-D matrix as input, so more manipulation...
output = output.contiguous().view(batch_size * sequence_len, hidden_size)
# Put it through the classifier
# And reshape it to [batch_size x sequence_len x vocab_size]
output = self.classifier(output).view(batch_size, sequence_len, -1)
return (F.softmax(output,dim=2), hidden) if use_softmax else (output, hidden)
def generate(self, max_len, temperature=1.0):
pass
# In[98]:
# Set the hidden_size of the GRU
embed_size = 12
hidden_size = 10
num_layers = 4
_encoder = Generator(len(kilgariff_data.vocab), embed_size, hidden_size, num_layers)
# In[99]:
# Take a batch.
_batch = next(iter(dataloader))
_inputs, _lengths = _batch['x'], _batch['x_len']
_targets = _batch['y']
max(_lengths)
# In[100]:
_output, _hidden = _encoder(_inputs)
print('Output sizes:\t', _output.shape)
print('Input sizes:\t', batch_size, kilgariff_data.max_len -1, len(kilgariff_data.vocab))
print('Target sizes:\t', _targets.shape)
# In[101]:
_, predicted_indices = torch.max(_output, dim=2)
print(predicted_indices.shape)
predicted_indices
# In[103]:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Set the hidden_size of the GRU
embed_size = 100
hidden_size = 100
num_layers = 1
# Setup the data.
batch_size=50
kilgariff_data = KilgariffDataset(tokenized_text)
dataloader = DataLoader(dataset=kilgariff_data, batch_size=batch_size, shuffle=True)
criterion = nn.CrossEntropyLoss(ignore_index=kilgariff_data.vocab.token2id['<pad>'], size_average=True)
model = Generator(len(kilgariff_data.vocab), embed_size, hidden_size, num_layers).to(device)
learning_rate = 0.003
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
#model = nn.DataParallel(model)
losses = []
def train(num_epochs, dataloader, model, criterion, optimizer):
plt.ion()
for _e in range(num_epochs):
for batch in tqdm(dataloader):
x = batch['x'].to(device)
x_len = batch['x_len'].to(device)
y = batch['y'].to(device)
# Zero gradient.
optimizer.zero_grad()
# Feed forward.
output, hidden = model(x, use_softmax=True)
# Compute loss:
# Shape of the `output` is [batch_size x sequence_len x vocab_size]
# Shape of `y` is [batch_size x sequence_len]
# CrossEntropyLoss expects `output` to be [batch_size x vocab_size x sequence_len]
_, prediction = torch.max(output, dim=2)
loss = criterion(output.permute(0, 2, 1), y)
loss.backward()
optimizer.step()
losses.append(loss.float().data)
clear_output(wait=True)
plt.plot(losses)
plt.pause(0.05)
train(50, dataloader, model, criterion, optimizer)
#learning_rate = 0.05
#optimizer = optim.SGD(model.parameters(), lr=learning_rate)
#train(4, dataloader, model, criterion, optimizer)
# In[ ]:
list(kilgariff_data.vocab.items())
# In[105]:
start_token = '<s>'
hidden_state = None
max_len = 20
temperature=0.8
i = 0
while start_token not in ['</s>', '<pad>'] and i < max_len:
i += 1
start_state = torch.tensor(kilgariff_data.vocab.token2id[start_token]).unsqueeze(0).unsqueeze(0).to(device)
model.embedding(start_state)
output, hidden_state = model.gru(model.embedding(start_state), hidden_state)
batch_size, sequence_len, hidden_size = output.shape
output = output.contiguous().view(batch_size * sequence_len, hidden_size)
output = model.classifier(output).view(batch_size, sequence_len, -1)
_, prediction = torch.max(F.softmax(output, dim=2), dim=2)
start_token = kilgariff_data.vocab[int(prediction.squeeze(0).squeeze(0))]
print(start_token, end=' ')
|
I'm by no means a PyTorch expert, but that snippet looks fishy to me:
# Put the embedded inputs into the GRU.
output, hidden = self.gru(embedded, hidden)
# Matrix manipulation magic.
batch_size, sequence_len, hidden_size = output.shape
# Technically, linear layer takes a 2-D matrix as input, so more manipulation...
output = output.contiguous().view(batch_size * sequence_len, hidden_size)
When GRU is not instantiated with batch_first=True, the output shape is (seq_len, batch, num_directions * hidden_size) -- note that seq_len and batch_size are flipped. For the view command it technically doesn't matter, but that's my main issue here.
view(batch_size * sequence_len, hidden_size) doesn't look right at all. Say you start with a batch of size 32; after that view, the first dimension has size 32*seq_len. Usually, only the output of the last step is used (or the average or the max over all steps).
Something like this should work:
# Put the embedded inputs into the GRU.
output, hidden = self.gru(embedded, hidden)
# Not needed, just to show the true output shape order
seq_len, batch_size, hidden_size = output.shape
# Given the shape of output, this is the last step
output = output[-1]
# output.shape = (batch_size, hidden_size) <-- What you want
Two personal words of warning:
view() is a dangerous command! PyTorch or any other framework only throws errors when the dimensions of the tensors do not match up. But just because the dimensions fit after view() does not mean the reshaping was done correctly, i.e., that the values are in the right spots of the output tensor. For example, if you have to flatten a shape (seq_len, batch_size, hidden_size) to (batch_size, seq_len*hidden_size), you cannot simply do view(batch_size, -1); you first have to do transpose(1,0) to get a shape of (batch_size, seq_len, hidden_size). With or without transpose(), view() will work and the dimensions will be correct. But only with transpose() are the values at the right positions after view().
Since this is such an easy mistake to make, I've seen many examples on GitHub and elsewhere where, in my opinion, it's not done correctly. The problem is that the network often still learns something anyway, which hides the bug. In short, I'm now much more careful when looking at and adopting code snippets, and the view() command is in my opinion the biggest trap.
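A tiny demonstration of the trap (shapes fit either way, but the values differ):
import torch

x = torch.arange(6).view(2, 3)   # think (seq_len=2, batch=3)
wrong = x.view(3, 2)             # dims fit, but rows mix different sequences
right = x.transpose(0, 1).contiguous().view(3, 2)
print(wrong)   # tensor([[0, 1], [2, 3], [4, 5]])
print(right)   # tensor([[0, 3], [1, 4], [2, 5]])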
If it helps, here's the forward method of a GRU classifier network:
def forward(self, batch, method='last_step'):
embeds = self.word_embeddings(batch)
x = torch.transpose(embeds, 0, 1)
x, self.hidden = self.gru(x, self.hidden)
if method == 'last_step':
x = x[-1]
elif method == 'average_pooling':
x = torch.sum(x, dim=0) / len(batch[0])
elif method == 'max_pooling':
x, _ = torch.max(x, dim=0)
else:
raise Exception('Unknown method.')
# A series of Linear layers with ReLU and Dropout
for l in self.linears:
x = l(x)
log_probs = F.log_softmax(x, dim=1)
return log_probs
|
https://stackoverflow.com/questions/55137631/
|
Curious on how to use some basic machine learning in a web application
|
A co-worker and I had an idea to create a little web game where a user enters a chunk of data about themselves and then the application would write for them to sound like them in certain structures. (Trying to leave the idea a little vague.) We are both new to ML and thought this could be a fun first dive.
We have a decent bit of background with PHP, JavaScript (FE and Node), Ruby, a little of some other languages, and an interest in learning Python for ML. Curious whether you can run a cost-efficient ML library for text well within a web app, given that most servers lack GPUs?
Perhaps you have to pay for one of the cloud-based systems, but I wanted to find the best entry point for this idea without racking up too much cost. (So far I have been reading about running PyTorch or TensorFlow, but it sounds like you lose a lot of efficiency running on CPUs.)
Thank you!
(My other thought is doing it via an iOS app and trying Apple's ML setup.)
|
It sounds like you are looking for something like TensorFlow.js.
|
https://stackoverflow.com/questions/55149138/
|
Implement dropout to fully connected layer in PyTorch
|
How to apply dropout to the following fully connected network in Pytorch:
class NetworkRelu(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.softmax(self.fc3(x), dim=1)
        return x
|
Since there is functional code in the forward method, you could use functional dropout; however, it is better to register an nn.Dropout module in __init__() so that dropout is automatically turned off when the model is set to evaluation mode with model.eval().
Here is the code to implement dropout:
class NetworkRelu(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)
        self.dropout = nn.Dropout(p=0.5)

    def forward(self, x):
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = F.softmax(self.fc3(x), dim=1)
        return x
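For completeness, the functional variant mentioned above would look roughly like this; you then have to pass self.training yourself so that dropout is disabled in eval mode:
def forward(self, x):
    x = F.dropout(F.relu(self.fc1(x)), p=0.5, training=self.training)
    x = F.dropout(F.relu(self.fc2(x)), p=0.5, training=self.training)
    x = F.softmax(self.fc3(x), dim=1)
    return x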
|
https://stackoverflow.com/questions/55157514/
|
Pytorch sum over a list of tensors along an axis
|
I have a list of tensors of the same shape.
I would like to sum the entire list of tensors along an axis.
Does torch.cumsum perform this op along a dim?
If so, does it require the list to be converted to a single tensor first and then summed over?
|
You don't need cumsum; sum is your friend.
And yes, you should first convert the list into a single tensor with stack or cat, depending on your needs. Something like this:
import torch
my_list = [torch.randn(3, 5), torch.randn(3, 5)]
# stack -> (2, 3, 5); the first sum collapses the list dim -> (3, 5); the second -> (5)
result = torch.stack(my_list, dim=0).sum(dim=0).sum(dim=0)
print(result.shape)  # torch.Size([5])
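If you only want to sum across the list dimension and keep each tensor's shape, a single sum after stack is enough:
summed = torch.stack(my_list, dim=0).sum(dim=0)
print(summed.shape)  # torch.Size([3, 5])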
|
https://stackoverflow.com/questions/55159955/
|
Faster pytorch dataset file
|
I have the following problem: I have many files of 3D volumes that I open to extract a bunch of numpy arrays.
I want to get those arrays randomly, i.e. in the worst case I open as many 3D volumes as there are numpy arrays I want to get, if all those arrays are in separate files.
The IO here isn't great: I open a big file only to get a small numpy array from it.
Any idea how I can store all these arrays so that the IO is better?
I can't pre-read all the arrays and save them all in one file because then that file would be too big to open for RAM.
I looked up LMDB but it all seems to be about Caffe.
Any idea how I can achieve this?
|
I iterated through my dataset, created an HDF5 file, and stored the elements in it. It turns out that when the HDF5 file is opened, it doesn't load all the data into RAM; it loads only the header.
The header is then used to fetch the data on request; that's how I solved my problem.
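A rough sketch of the idea with h5py (my_arrays and the file name are placeholders for your own data):
import h5py
# Write once: store every array in a single HDF5 file
with h5py.File('arrays.h5', 'w') as f:
    for i, arr in enumerate(my_arrays):
        f.create_dataset('arr_%d' % i, data=arr)
# Read lazily: opening the file only parses the header;
# each array is read from disk on access
f = h5py.File('arrays.h5', 'r')
sample = f['arr_42'][()]  # loads just this one array into memory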
Reference:
http://www.machinelearninguru.com/deep_learning/data_preparation/hdf5/hdf5.html
|
https://stackoverflow.com/questions/55166874/
|
Need very different learning rate for manual updates vs. using model
|
I am currently just trying to write some pedagogical material, in which I borrow from some common examples that have been reworked numerous times on the web.
I have a simple bit of code where I manually create tensors for layers, and update them within a loop. E.g.:
w1 = torch.randn(D_in, H, dtype=torch.float, requires_grad=True)
w2 = torch.randn(H, D_out, dtype=torch.float, requires_grad=True)
learning_rate = 1e-6
for t in range(501):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    loss.backward()
    with torch.no_grad():  # update the weights outside of autograd tracking
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()
This works great. Then I construct similar code using actual modules:
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(501):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    model.zero_grad()
    loss.backward()
    for param in model.parameters():
        param.data -= learning_rate * param.grad
This also works great.
BUT there is a difference here. If I use a 1e-4 LR in the manual case, the loss explodes: it becomes large, then inf, then nan. So that's no good. If I use a 1e-6 LR in the model case, the loss decreases far too slowly.
Basically, I'm just trying to understand why the learning rate means something very different in these two snippets, which are otherwise equivalent.
|
The crucial difference is the initialization of the weights. The weight matrix in an nn.Linear is initialized sensibly (uniform values scaled by the layer's fan-in) rather than drawn from a standard normal like your torch.randn tensors. I'm pretty sure that if you construct both models and copy the weight matrices from one to the other, you'll get consistent behavior.
Additionally, please note that the two models are not equivalent, as your handcrafted model lacks biases. Which matters.
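A sketch of how one might align the two (note that nn.Linear stores its weight as (out_features, in_features), hence the transpose; the biases are zeroed because the manual version has none):
with torch.no_grad():
    w1.copy_(model[0].weight.t())   # first Linear layer's weight, transposed
    w2.copy_(model[2].weight.t())   # second Linear layer's weight, transposed
    model[0].bias.zero_()           # the manual model has no biases
    model[2].bias.zero_()
With identical starting weights, the same learning rate should then behave comparably in both loops.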
|
https://stackoverflow.com/questions/55170175/
|
Convert a python list of python lists to pytorch tensor
|
What is the conventional way to convert python list of lists to PyTorch tensors?
a = [0,0]
b = [1,1]
c = [2]
c = [a, b, c]
I want c to be converted to a flattened Torch tensor as below:
tensor([0, 0, 1, 1, 2])
|
You can flatten your list first in Python:
flat_list = [item for sublist in c for item in sublist]
And create your Tensor:
flattened_tensor = torch.FloatTensor(flat_list)
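Alternatively, you can do the flattening in PyTorch itself, since torch.cat concatenates 1-D tensors even when the sublists have different lengths:
flattened_tensor = torch.cat([torch.tensor(sub, dtype=torch.float) for sub in c])
# tensor([0., 0., 1., 1., 2.])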
|
https://stackoverflow.com/questions/55193322/
|