instruction | input | output | source
---|---|---|---
load dataset using glob and how to see what comes out | I am trying to create a dataset that can be used for training an ML model (a CNN), but I am having trouble reading the files (have I used glob correctly?) and sorting them in a way that is useful for training. I am not sure what comes out of this when I run load_data. (The dataset and code come from Kaggle.) The code is written in a Jupyter notebook. I intend to use PyTorch on this dataset.
Here is the code I have got so far:
emotions={
    '01':'neutral',
    '02':'calm',
    '03':'happy',
    '04':'sad',
    '05':'angry',
    '06':'fearful',
    '07':'disgust',
    '08':'surprised'
}
#DataFlair - Emotions to observe
observed_emotions=['neutral', 'happy', 'sad', 'angry']
# os.listdir("../input")

def extract_feature(emo_file, mfcc):
    with soundfile.SoundFile(emo_file) as emo_file:
        X, sr = librosa.load(emo_file, sr=22050, mono=True, offset=1.0, duration=2.0)
        # X = sound_file.read(dtype="float32")
        # sample_rate=sound_file.samplerate
        mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sr, n_mfcc=40).T, axis=0)
        result = np.hstack((mfccs))
    return result

def load_data(test_size=0.2):  # hold back 20% as test-set
    x, y = [], []
    #for file in glob.glob('/Users/.../NeuralNetworks/SER_dataset'):
    for file in glob.glob('/Users/.../NeuralNetworks/SER_dataset/Actor*/*'):
        emo_file = os.path.basename(file)
        plt.figure(figsize=(18, 3))  # plot using librosa
        matplotlib.waveplot(y, sr=sr)
        plt.ylim([-0.1, 0.1])
        emotion = emotions[emo_file.split("-")[2]]  # load file based on label
        if emotion not in observed_emotions:
            continue
        feature = extract_feature(emo_file, mfcc=True)
        x.append(feature)
        y.append(emotion)
    return train_test_split(np.ndarray(x), y, test_size=test_size, random_state=9)
numpy_dataset = load_data
print(np.ndarray.shape(numpy_dataset))
Here's the error message that I get:
TypeError Traceback (most recent call last)
Input In [127], in <cell line: 2>()
1 numpy_dataset = load_data
----> 2 print(np.ndarray.shape(numpy_dataset))
TypeError: 'getset_descriptor' object is not callable
If you could also give me some advice on how to make this dataset trainable with PyTorch (turn it into a tensor), it would be amazing.
Here's my suggestion:
emo_dataset = torch.from_numpy(numpy_dataset)
| numpy_dataset = load_data
print(np.ndarray.shape(numpy_dataset))
Here's the error message that I get:
... 'getset_descriptor' object is not callable
Yup, that makes sense.
You assigned a function's definition to the dataset,
rather than assigning the function's result.
You want to actually call that function
numpy_dataset = load_data()
before trying any .shape() queries.
Asking for the shape of the function's bytecode
won't elicit anything useful.
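Concretely, a minimal sketch of the fix (assuming load_data returns the four train_test_split arrays, as in your code):
x_train, x_test, y_train, y_test = load_data()   # note the parentheses
print(x_train.shape)                             # .shape is an attribute, not a method
Your torch.from_numpy suggestion then applies to the individual arrays, e.g. torch.from_numpy(x_train), not to the whole returned tuple.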
| https://stackoverflow.com/questions/73261119/ |
Pytorch DL model, updates normally with converging losses during training, but SAME OUTPUT values for all data (regression) | I've only used neural networks for CNNs and RNNs, but this is my first time using one for a regression task.
There are 30000 sets of data.
Each data has 50 input features, and I must predict 14 output features for each.
So, my goal is to make a prediction for about 30000 datasets,
so that would make my task : IN - 30000 data X 50 features -> OUT - 30000 predictions X 14 features
these are my hyperparameters :
input_size = 50
hidden_size = 40
num_epochs = 7
learning_rate =1.00E-03
output_size=15
batch_size=30
My code worked fine and the losses converged at every iteration/epoch.
However, for some reason,
I noticed that my output, which I expected to be 30000 predictions (rows) X 14 features (columns),
was just the same 1 X 14 tensor repeated 30000 times.
Like this :
[[ 1.3311, 1.0411, 0.9971, 13.6349, 31.4082, 16.5008, 3.2034,
-26.2985, -26.3108, -22.4322, 24.3007, -26.2376, -26.2337, -26.2369],
[ 1.3311, 1.0411, 0.9971, 13.6349, 31.4082, 16.5008, 3.2034,
-26.2985, -26.3108, -22.4322, 24.3007, -26.2376, -26.2337, -26.2369],
[ 1.3311, 1.0411, 0.9971, 13.6349, 31.4082, 16.5008, 3.2034,
-26.2985, -26.3108, -22.4322, 24.3007, -26.2376, -26.2337, -26.2369]]
Just imagine this outcome, only with 3000 rows. (I don't even understand how I reached converging loss)
example of same output values for all data
I tried to track where this problem started, and it seems it's been happening during training as well.
# 5. Training loop
n_total_steps = len(DS)
n_iterations = -(-n_total_steps // batch_size)  # ceiling division
training_loss = []
loss_fn = nn.MSELoss()
trainloader = torch.utils.data.DataLoader(
    DS,
    batch_size=batch_size, shuffle=True)
testloader = torch.utils.data.DataLoader(
    TS,
    batch_size=batch_size)
for epoch in range(num_epochs):
    print('\n')
    for i, (data, target) in enumerate(trainloader):
        data, target = data.to(device), target.to(device)
        outputs = model(data)
        loss = torch.sqrt(loss_fn(outputs, target))
        training_loss.append(loss.item())
        # 5.5 Backward pass
        opt.zero_grad()  # 5.6 Empty the values in the gradient attribute, or model.zero_grad()
        loss.backward()  # 5.7 Backprop
        opt.step()  # 5.8 Update params
        # 5.9 Print loss
        if (i+1) % 100 == 0:
            print(f'Epoch {epoch+1}/{num_epochs}, Iteration {i+1}/{n_iterations}, Loss={loss.item():.4f} ')
Epoch 1/7, Iteration 100/1321, Loss=1.5157
tensor([[ 1.4186, 1.1157, 1.0471, 13.5818, 31.3844, 16.5334, 3.1015,
-26.3141, -26.2974, -22.4117, 24.3678, -26.2477, -26.2577, -26.2387],
[ 1.4186, 1.1157, 1.0471, 13.5818, 31.3844, 16.5334, 3.1015,
-26.3141, -26.2974, -22.4117, 24.3678, -26.2477, -26.2577, -26.2387],
[ 1.4186, 1.1157, 1.0471, 13.5818, 31.3844, 16.5334, 3.1015,
-26.3141, -26.2974, -22.4117, 24.3678, -26.2477, -26.2577, -26.2387],
[ 1.4186, 1.1157, 1.0471, 13.5818, 31.3844, 16.5334, 3.1015,
-26.3141, -26.2974, -22.4117, 24.3678, -26.2477, -26.2577, -26.2387],
[ 1.4186, 1.1157, 1.0471, 13.5818, 31.3844, 16.5334, 3.1015,
-26.3141, -26.2974, -22.4117, 24.3678, -26.2477, -26.2577, -26.2387],
....
Epoch 1/7, Iteration 300/1321, Loss=0.9697
tensor([[ 1.3142, 1.0427, 0.9661, 13.6267, 31.2973, 16.5265, 3.1028,
-26.2207, -26.2468, -22.3516, 24.4410, -26.1698, -26.1708, -26.1715],
[ 1.3142, 1.0427, 0.9661, 13.6267, 31.2973, 16.5265, 3.1028,
-26.2207, -26.2468, -22.3516, 24.4410, -26.1698, -26.1708, -26.1715],
[ 1.3142, 1.0427, 0.9661, 13.6267, 31.2973, 16.5265, 3.1028,
-26.2207, -26.2468, -22.3516, 24.4410, -26.1698, -26.1708, -26.1715],
[ 1.3142, 1.0427, 0.9661, 13.6267, 31.2973, 16.5265, 3.1028,
-26.2207, -26.2468, -22.3516, 24.4410, -26.1698, -26.1708, -26.1715],
[ 1.3142, 1.0427, 0.9661, 13.6267, 31.2973, 16.5265, 3.1028,
-26.2207, -26.2468, -22.3516, 24.4410, -26.1698, -26.1708, -26.1715],
[ 1.3142, 1.0427, 0.9661, 13.6267, 31.2973, 16.5265, 3.1028,
-26.2207, -26.2468, -22.3516, 24.4410, -26.1698, -26.1708, -26.1715],
To elaborate: my prediction model's output returns the same values for every different input row. The rows all update together as well, not each row separately!
I don't understand how it is reaching low loss and convergence.
NN code:
class MyDataset(Dataset):
    def __init__(self, file_name):
        train_df = pd.read_csv(file_name)
        x = train_df.filter(regex='X')  # Input : X Feature
        y = train_df.filter(regex='Y')  # Output : Y Feature
        self.train_x = torch.tensor(x.values, dtype=torch.float32)
        self.train_y = torch.tensor(y.values, dtype=torch.float32)

    def __len__(self):
        return len(self.train_y)

    def __getitem__(self, idx):
        return self.train_x[idx], self.train_y[idx]

class LGNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.layer1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.Tanh()
        self.layer2 = nn.Linear(hidden_size, hidden_size)
        self.layer3 = nn.Linear(hidden_size, hidden_size)
        self.layer4 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.layer1(x)
        out = self.relu(out)
        out = self.layer2(out)
        out = self.relu(out)
        out = self.layer3(out)
        out = self.relu(out)
        out = self.layer4(out)
        return out

# 4.1 Create NN model instance
model = LGNN(input_size, hidden_size, output_size).to(device)  # .to(device) moves the model to the GPU
model.apply(reset_weights)
# 4.2 Loss and Optimiser
opt = optim.Adam(model.parameters(), lr=learning_rate)
loss_fn = nn.MSELoss()
Also, the model has gone through k-fold validation. No overfitting or other problems were found :(
| Why don't you change your learning rate? Check out this post: https://discuss.pytorch.org/t/why-am-i-getting-same-output-values-for-every-single-data-in-my-ann-model-for-multi-class-classification/57760/7
Otherwise, check whether the weights of your model's layers are actually being updated during training, and whether the input data is being fed to the model properly.
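For the weight-update check, a minimal sketch you could wrap around a single training step (names follow the question's loop; take a snapshot, step, then compare):
before = {n: p.detach().clone() for n, p in model.named_parameters()}
# ... one forward pass, loss.backward(), opt.step() as in the loop above ...
for n, p in model.named_parameters():
    print(n, (p.detach() - before[n]).abs().max().item())  # ~0 means that layer is not updating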
| https://stackoverflow.com/questions/73264096/ |
TypeError: __init__() got an unexpected keyword argument 'progress_bar_refresh_rate' | I have been trying to resolve this issue for two days but am still facing it; any leads would be helpful.
Please refer to the link for a screenshot.
| Please try deleting the progress_bar_refresh_rate=5 argument, since this keyword argument is no longer supported by the latest version (1.7.0) of the pytorch_lightning Trainer module. Check this screenshot for clarity.
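If you still want to control the refresh rate, a hedged sketch of the replacement API in recent versions is the TQDMProgressBar callback:
import pytorch_lightning as pl
from pytorch_lightning.callbacks import TQDMProgressBar
trainer = pl.Trainer(callbacks=[TQDMProgressBar(refresh_rate=5)])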
| https://stackoverflow.com/questions/73264813/ |
using list in creating pytorch NN module | This code runs fine and creates a simple feed-forward neural network. The layer (torch.nn.Linear) is assigned to a class attribute using self.
class MultipleRegression3L(torch.nn.Module):
    def __init__(self, num_features):
        super(MultipleRegression3L, self).__init__()
        self.layer_1 = torch.nn.Linear(num_features, 16)
        ## more layers
        self.relu = torch.nn.ReLU()

    def forward(self, inputs):
        x = self.relu(self.layer_1(inputs))
        x = self.relu(self.layer_2(x))
        x = self.relu(self.layer_3(x))
        x = self.layer_out(x)
        return (x)

    def predict(self, test_inputs):
        return self.forward(test_inputs)
However, when I tried to store the layer using the list:
class MultipleRegression(torch.nn.Module):
    def __init__(self, num_features, params):
        super(MultipleRegression, self).__init__()
        number_of_layers = 3 if not 'number_of_layers' in params else params['number_of_layers']
        number_of_neurons_in_each_layer = [16, 32, 16] if not 'number_of_neurons_in_each_layer' in params else params['number_of_neurons_in_each_layer']
        activation_function = "relu" if not 'activation_function' in params else params['activation_function']
        self.layers = []
        v1 = num_features
        for i in range(0, number_of_layers):
            v2 = number_of_neurons_in_each_layer[i]
            self.layers.append(torch.nn.Linear(v1, v2))
            v1 = v2
        self.layer_out = torch.nn.Linear(v2, 1)
        if activation_function == "relu":
            self.act_func = torch.nn.ReLU()
        else:
            raise Exception("Activation function %s is not supported" % (activation_function))

    def forward(self, inputs):
        x = self.act_func(self.layers[0](inputs))
        for i in range(1, len(self.layers)):
            x = self.act_func(self.layers[i](x))
        x = self.layer_out(x)
        return (x)
The two models do not behave the same way. What can be wrong here?
| PyTorch needs to register the layers as submodules of the model so that their parameters are tracked, and storing them in a plain Python list does not do that. Using self.layers = torch.nn.ModuleList() fixed the problem.
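For reference, a minimal sketch of that fix applied to the constructor above (only the container changes):
self.layers = torch.nn.ModuleList()
v1 = num_features
for i in range(0, number_of_layers):
    v2 = number_of_neurons_in_each_layer[i]
    self.layers.append(torch.nn.Linear(v1, v2))  # now registered as submodules
    v1 = v2
With ModuleList, the linear layers show up in model.parameters() and move with model.to(device), so the two models behave the same way.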
| https://stackoverflow.com/questions/73268576/ |
PyTorch Autograd for Regression | another PyTorch newbie here trying to understand their computational graph and autograd.
I'm learning the following model on potential energy and corresponding force.
model = nn.Sequential(
    nn.Linear(1, 32),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1)
)
optimizer = torch.optim.Adam(model.parameters())
loss = nn.MSELoss()

# generate data
r = torch.linspace(0.95, 3, 50, requires_grad=True).view(-1, 1)
E = 1 / r
F = -grad(E.sum(), r)[0]
inputs = r

for epoch in range(10**3):
    E_pred = model.forward(inputs)
    F_pred = -grad(E_pred.sum(), r, create_graph=True, retain_graph=True)[0]
    optimizer.zero_grad()
    error = loss(E_pred, E.data) + loss(F_pred, F.data)
    error.backward()
    optimizer.step()
However, if I change the inputs = r to inputs = 1*r, the training loop breaks and gives the following error
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
Could you please explain why this happens?
| This error occurs when backward is executed after a previous backward (without the graph being retained). Here is example code.
output = model.forward(x)
loss = criterion(label, output)
optimizer.zero_grad()
loss.backward()
loss2 = criterion(loss, output2)
loss2.backward()
optimizer.step()
And as you can see in the following code, if you just assign r to inputs, you get a shallow copy (the same tensor). Therefore, when the value of r changes, the value of inputs also changes. However, if it is multiplied by 1, it becomes a deep copy (a new tensor), and its value does not change even if r is changed.
r = torch.linspace(0.95, 3, 50).view(-1, 1)
inputs_1 = r
inputs_2 = 1 * r
r[0] = 100
print(inputs_1)
print(inputs_2)
Also, requires_grad of E.data is False. Therefore, you can conclude that the error is caused by inputs. Note too that optimizer.zero_grad resets only the gradients of the model's parameters; it does not reset the gradients of E or inputs.
print(E.data.requires_grad) # False
# You want to update only the parameters of the model......
optimizer = torch.optim.Adam(model.parameters())
As I said before, if inputs = r is used, a shallow copy occurs, and if inputs = 1 * r is used, a deep copy occurs, so the following difference arises.
In the shallow-copy case, inputs is r itself (a leaf tensor), so the gradient just accumulates and no error occurs.
However, since 1 * r is a calculated value (an intermediate node whose graph is freed after backward), an error occurs if backward is called through it several times.
I think it would be good to set r's requires_grad to false. If requires_grad is set to True, the value is changed through the gradient. This should only be used for parameters. However, the input does not need to change its value. Check it out with the code below.
Code:
# generate data
r = torch.linspace(0.95, 3, 50, requires_grad=False).view(-1, 1)
E = 1 / r
inputs = 1 * r
for epoch in range(10**3):
    E_pred = model.forward(inputs)
    optimizer.zero_grad()
    error = loss(E_pred, E.data)
    error.backward()
    optimizer.step()
print(model.forward(inputs))
If you want only r to have requires_grad=True, use the following code:
# generate data
r = torch.linspace(0.95, 3, 50, requires_grad=True).view(-1, 1)
with torch.no_grad():
    E = 1 / r
    inputs = 1 * r
for epoch in range(10**3):
    E_pred = model.forward(inputs)
    optimizer.zero_grad()
    error = loss(E_pred, E.data)
    error.backward()
    optimizer.step()
print(model.forward(inputs))
| https://stackoverflow.com/questions/73284709/ |
Is there a way to compute the matrix logarithm of a Pytorch tensor? | I am trying to compute matrix logarithms in Pytorch but I need to keep tensors because I then apply gradients which means I can't use numpy arrays.
Basically I'm trying to do the equivalent of https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.logm.html but with Pytorch tensors.
Thank you.
| Unfortunately the matrix logarithm (unlike the matrix exponential) is not implemented in PyTorch yet, but matrix powers are. This means that, in the meantime, you can approximate the matrix logarithm by using the power series expansion and just truncating it once you reach sufficient accuracy.
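For example, a minimal sketch of such a truncated series, using log(A) = sum_{k>=1} (-1)^(k+1) (A - I)^k / k (note this only converges when the spectral radius of A - I is below 1):
import torch
def logm_series(A, num_terms=30):
    n = A.size(0)
    B = A - torch.eye(n, dtype=A.dtype, device=A.device)
    term, result = B.clone(), torch.zeros_like(A)
    for k in range(1, num_terms + 1):
        result = result + ((-1) ** (k + 1)) / k * term
        term = term @ B  # next power of (A - I)
    return result
Since this uses only differentiable tensor ops, gradients flow through it.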
Alternatively Lezcano proposes a (slow) solution of a differentiable matrix logarithm via adjoint here. I'll cite their suggested solution:
import scipy.linalg
import torch

def adjoint(A, E, f):
    A_H = A.T.conj().to(E.dtype)
    n = A.size(0)
    M = torch.zeros(2*n, 2*n, dtype=E.dtype, device=E.device)
    M[:n, :n] = A_H
    M[n:, n:] = A_H
    M[:n, n:] = E
    return f(M)[:n, n:].to(A.dtype)

def logm_scipy(A):
    return torch.from_numpy(scipy.linalg.logm(A.cpu(), disp=False)[0]).to(A.device)

class Logm(torch.autograd.Function):
    @staticmethod
    def forward(ctx, A):
        assert A.ndim == 2 and A.size(0) == A.size(1)  # Square matrix
        assert A.dtype in (torch.float32, torch.float64, torch.complex64, torch.complex128)
        ctx.save_for_backward(A)
        return logm_scipy(A)

    @staticmethod
    def backward(ctx, G):
        A, = ctx.saved_tensors
        return adjoint(A, G, logm_scipy)

logm = Logm.apply
| https://stackoverflow.com/questions/73288332/ |
The model did not return a loss from the inputs - LabSE error | I want to fine-tune LaBSE for question answering using the SQuAD dataset, and I got this error:
ValueError: The model did not return a loss from the inputs, only the following keys: last_hidden_state,pooler_output. For reference, the inputs it received are input_ids,token_type_ids,attention_mask.
I am trying to fine-tune the model using PyTorch. I tried using a smaller batch size and took just 10% of the training dataset because I had problems with memory allocation.
Once the memory-allocation problems are gone, this error happens.
To be honest, I'm stuck with it. Do you have any hints?
I'm following the Hugging Face tutorial, but I want to use a different evaluation (I want to do it myself), so I skipped the part that uses the evaluation split of the dataset.
from datasets import load_dataset

raw_datasets = load_dataset("squad", split='train')

from transformers import BertTokenizerFast, BertModel
from transformers import AutoTokenizer

model_checkpoint = "setu4993/LaBSE"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = BertModel.from_pretrained(model_checkpoint)

max_length = 384
stride = 128

def preprocess_training_examples(examples):
    questions = [q.strip() for q in examples["question"]]
    inputs = tokenizer(
        questions,
        examples["context"],
        max_length=max_length,
        truncation="only_second",
        stride=stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    offset_mapping = inputs.pop("offset_mapping")
    sample_map = inputs.pop("overflow_to_sample_mapping")
    answers = examples["answers"]
    start_positions = []
    end_positions = []
    for i, offset in enumerate(offset_mapping):
        sample_idx = sample_map[i]
        answer = answers[sample_idx]
        start_char = answer["answer_start"][0]
        end_char = answer["answer_start"][0] + len(answer["text"][0])
        sequence_ids = inputs.sequence_ids(i)
        # Find the start and end of the context
        idx = 0
        while sequence_ids[idx] != 1:
            idx += 1
        context_start = idx
        while sequence_ids[idx] == 1:
            idx += 1
        context_end = idx - 1
        # If the answer is not fully inside the context, label is (0, 0)
        if offset[context_start][0] > start_char or offset[context_end][1] < end_char:
            start_positions.append(0)
            end_positions.append(0)
        else:
            # Otherwise it's the start and end token positions
            idx = context_start
            while idx <= context_end and offset[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)
            idx = context_end
            while idx >= context_start and offset[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)
    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    return inputs

train_dataset = raw_datasets.map(
    preprocess_training_examples,
    batched=True,
    remove_columns=raw_datasets.column_names,
)
len(raw_datasets), len(train_dataset)

from transformers import TrainingArguments

args = TrainingArguments(
    "bert-finetuned-squad",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.01,
)

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
| Hi,
Please make sure you have checked the following:
You may need to pass the label_names argument to TrainingArguments with the label column or key you are providing; otherwise, you need to know which forward arguments the model of your choice accepts by default.
For example, with the BertForQuestionAnswering model (see the Hugging Face GitHub),
we can see that we need start_positions and end_positions as the keys/column names, which is what the model accepts during the forward pass.
Also, from the same link, you need to verify the shape required for your labels/target(s) by the Trainer (this can be different from the logits shape), and provide them accordingly.
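Since your snippet loads the bare encoder (BertModel), which has no QA head and therefore never computes a loss, a hedged sketch of the swap (using the generic auto class) would be:
from transformers import AutoModelForQuestionAnswering
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
With this model, the start_positions/end_positions columns produced by your preprocessing are consumed in forward and a loss is returned.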
Let me know if you or someone is able to resolve the error with the mentioned fix!
Thanks!
| https://stackoverflow.com/questions/73290491/ |
Understanding the role of num_workers in Pytorch's Dataloader | In PyTorch's Dataloader suppose:
I) Batch size=8 and num_workers=8
II) Batch size=1 and num_workers=8
III) Batch size=1 and num_workers=1
with exact same get_item() function.
So,
in case I), will 1 worker be assigned to each batch, and in case II), will only 1 worker be used with 7 idle?
Or is it that even in case II) all 8 workers will be used for loading that single batch?
Or is it that 1 worker will be used to load the batch for each iteration? I mean, say I am on iteration x: irrespective of batch size, will batches for future iterations be pre-loaded since I am using multiple workers?
Finally, will the speed of training my CNN be greater in case II or case III, or will it be the same?
| Every worker process is always responsible for loading a whole batch, so the batch size and number of workers are not really related.
in case I) will 1 worker be assigned for each batch and in case II) only 1 worker will be used and 7 idle.
All 8 workers will load batches and deliver them whenever required. So as soon as they are done loading their number of batches (defined by prefetch_factor) they just queue up to deliver the data.
Or is it that even in case II) all 8 workers will be used for that loading that single batch
No, there is always just one worker responsible per batch.
Or is it that 1 worker will be used to load batch for each iteration. I mean say I am on iteration x, and irrespective of batch size batches for future iterations will be pre-loaded as I am using multiple workers?
Every batch is loaded by one worker, so if you only load one batch per iteration, only one worker is active in that iteration.
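For completeness, a hedged sketch of where these knobs live (ds is a placeholder dataset; with num_workers=8 and the default prefetch_factor=2, up to 16 batches can be prepared ahead of time):
from torch.utils.data import DataLoader
loader = DataLoader(ds, batch_size=8, shuffle=True, num_workers=8, prefetch_factor=2)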
| https://stackoverflow.com/questions/73290826/ |
Pytorch: How to generate random vectors with length in a certain range? | I want a k by 3 by n tensor representing k batches of n random 3d vectors, each vector has a magnitude (Euclidean norm) between a and b. Other than rescaling the entries of a random kx3xn tensor to n random lengths in a for loop, is there a better/more idiomatic way to do this?
| Assuming a < b, you now have a constraint on the third random number due to the norm, i.e. sqrt(a^2 - x^2 - y^2) < z < sqrt(b^2 - x^2 - y^2).
Now a^2 - x^2 - y^2 > 0 implies that x^2 + y^2 < a^2.
So we need to generate two sets of numbers such that x^2 + y^2 < a^2.
import numpy as np

def rand_generator(a, b, n, k):
    req_array = np.zeros((n, k, 3))
    # first generate random numbers for x, i.e. 0 < x < a
    req_array[:, :, 0] = np.random.rand(n, k) * a
    # now generate random numbers for y such that 0 < y < sqrt(a^2 - x^2)
    req_array[:, :, 1] = np.random.rand(n, k) * np.sqrt(a**2 - req_array[:, :, 0]**2)
    norm_temp = np.linalg.norm(req_array, axis=2)
    a1 = np.sqrt(a**2 - norm_temp**2)
    b1 = np.sqrt(b**2 - norm_temp**2)
    # generate numbers for z such that they are in between a1 and b1
    req_array[:, :, 2] = a1 + np.random.rand(n, k) * (b1 - a1)
    return req_array

ll = rand_generator(2, 5, 10, 12)
lp = np.linalg.norm(ll, axis=2)
print(np.all(lp > 2) and np.all(lp < 5))
## output: True
You can also use spherical coordinates for this (which is exactly the same as above):
x = r*sin(theta)*cos(phi), y = r*sin(theta)*sin(phi), z = r*cos(theta), with a < r < b, 0 < theta < pi/2 and 0 < phi < pi/2.
import numpy as np

def rand_generator(a, b, n, k):
    req_array = np.zeros((n, k, 3))
    # first generate random numbers for r in [a,b)
    r = a + np.random.rand(n, k) * (b - a)
    # now generate random numbers for theta in [0,pi/2)
    theta = np.random.rand(n, k) * np.pi / 2
    # now generate random numbers for phi in [0,pi/2)
    phi = np.random.rand(n, k) * np.pi / 2
    req_array[:, :, 0] = r * np.sin(theta) * np.cos(phi)
    req_array[:, :, 1] = r * np.sin(theta) * np.sin(phi)
    req_array[:, :, 2] = r * np.cos(theta)
    return req_array

ll = rand_generator(2, 5, 10, 12)
lp = np.linalg.norm(ll, axis=2)
print(np.all(lp > 2) and np.all(lp < 5))
## output: True
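Since the question asks for a k by 3 by n tensor in PyTorch, here is a vectorized sketch without a for loop: draw Gaussian directions, normalize them, and rescale by radii drawn uniformly from [a, b) (an assumption, since the question doesn't specify a magnitude distribution):
import torch
def rand_vectors(k, n, a, b):
    v = torch.randn(k, 3, n)
    v = v / v.norm(dim=1, keepdim=True)    # unit directions
    r = a + (b - a) * torch.rand(k, 1, n)  # magnitudes in [a, b)
    return v * r
x = rand_vectors(4, 10, 2.0, 5.0)
print(x.shape, x.norm(dim=1).min().item(), x.norm(dim=1).max().item())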
| https://stackoverflow.com/questions/73294933/ |
understanding the torch.nn.functional.grid_sample op by concrete example | I am debugging a neural network which has a torch.nn.functional.grid_sample operator inside. Using the PyCharm IDE, I can watch the values during debugging. My grid is a 1*15*2 tensor; here are the values in the first batch.
My input is a 1*128*16*16 tensor; here are the values in the first channel of the first batch.
My output is a 1*128*1*15 tensor; here are the values in the first channel of the first batch.
align_corners = False, mode = 'bilinear', padding_mode = 'zero'.
For grid coordinate (-1,-1), I can understand that the value (-4.74179) is sampled from the 4 values at the top-left corner, with 3 of them being the padded '0's and 1 of them being the value '-18.96716' (-18.96716/4 = -4.74179).
But for other grid coordinates, I am confused. Taking the value '84.65594' for example, its corresponding grid coordinate is (-0.45302, 0.53659). I first convert them from (-1,1) to (0,15) by adding 1, then dividing by 2, and then multiplying by 15 (see the official implementation). The converted coordinate is then (4.10235, 11.524425), upon which I see the four values that should be sampled from:
(x)44.20010---0.10235---------(y)26.68777
| | |
| | |
0.524425---(a,b)--------------------
| | |
| | |
(w)102.18765---------------------(z)30.03996
Here are my hand-calculation steps. Let:
a = 0.10235
b = 0.524425
x = 44.20010
y = 26.68777
z = 30.03996
w = 102.18765
The interpolated value should then be:
output = a*b*z + (1-a)*(1-b)*x + (1-a)*b*w + (1-b)*a*y
       = 0.10235*0.524425*30.03996 + (1-0.10235)*(1-0.524425)*44.20010 + (1-0.10235)*0.524425*102.18765 + (1-0.524425)*0.10235*26.68777
       = 69.8852865171
which isn't 84.65594. I can't figure out how the value '84.65594' in the output is calculated, please help!
| I answered my own question: it turns out that the inconsistency is due to the 'align_corners' flag. My hand calculation actually corresponds to the case where 'align_corners' is true, while in the program this flag is set to false. For how to calculate the sample coordinates, please see this
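For reference, a sketch of the two conversions (W is the input size along the sampled axis, x a normalized coordinate in [-1, 1]; this mirrors how grid_sample unnormalizes coordinates, to the best of my understanding):
def unnormalize(x, W, align_corners):
    if align_corners:
        return (x + 1) / 2 * (W - 1)  # -1 and 1 map to corner-pixel centers
    else:
        return ((x + 1) * W - 1) / 2  # -1 and 1 map to corner-pixel edges
With W = 16 and x = -0.45302, align_corners=True gives 4.10235 (the hand calculation above), while align_corners=False gives 3.87584, which explains the different sampled value.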
| https://stackoverflow.com/questions/73300183/ |
PyTorch - Train imbalanced dataset (set weights) for object detection | I am quite new to PyTorch, and I am trying to use an object detection model to do transfer learning in order to learn how to detect objects in my new dataset.
Here is how I load the dataset:
train_dataset = MyDataset(train_data_path, 512, 512, train_labels_path, get_train_transform())
train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True, num_workers=4, collate_fn=collate_fn)
valid_dataset = MyDataset(test_data_path, 512, 512, test_labels_path, get_valid_transform())
valid_loader = DataLoader(valid_dataset, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn)
I define the model and optimizer as follows:
# load Faster RCNN pre-trained model
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="FasterRCNN_ResNet50_FPN_Weights.COCO_V1")
# get the number of input features
in_features = model.roi_heads.box_predictor.cls_score.in_features
# define a new head for the detector with the required number of classes
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
model = model.to(DEVICE)
# get the model parameters
params = [p for p in model.parameters() if p.requires_grad]
# define the optimizer
# We are using the SGD optimizer with a learning rate of 0.001 and momentum of 0.9.
optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9, weight_decay=0.0005)
I train the model as follows:
def train(train_data_loader, model, optimizer, train_loss_hist):
    global train_itr
    global train_loss_list
    prog_bar = tqdm(train_data_loader, total=len(train_data_loader), position=0, leave=True, ascii=True)
    # Then we have the for loop iterating over the batches.
    for i, data in enumerate(prog_bar):
        optimizer.zero_grad()
        images, targets = data
        images = list(image.to(DEVICE) for image in images)
        targets = [{k: v.to(DEVICE) for k, v in t.items()} for t in targets]
        # Forward pass
        loss_dict = model(images, targets)
        # Then we sum the losses and append the current iteration's loss value to the train_loss_list list.
        losses = sum(loss for loss in loss_dict.values())
        loss_value = losses.item()
        # We also send the current loss value to train_loss_hist of the Averager class.
        train_loss_list.append(loss_value)
        train_loss_hist.send(loss_value)
        # Then we backpropagate the gradients and update parameters.
        losses.backward()
        optimizer.step()
        train_itr += 1
    return train_loss_list
Considering that I adapted code that I found and I am not sure where the loss is defined (I have not defined any kind of loss in the code, so I believe it will use the default losses the original object detector was trained with), how can I train my network on such an imbalanced dataset, and how should I update my code?
| It seems that you have two questions.
How to deal with imbalanced dataset.
Note that Faster R-CNN is an anchor-based detector, which means the number of anchors containing an object is extremely small compared to the total number of anchors, so this kind of imbalance is already built into its training and you don't need to deal with the imbalanced dataset yourself. Or you can use RetinaNet, which proposed a loss function called focal loss to improve performance on imbalanced datasets.
Where is the loss function.
torchvision integrates the loss function inside the model object; you can step through your Python code inside the torchvision package with a debugger and see the implementation details.
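If you do want to experiment with focal loss on your own head, a minimal hedged sketch of the binary variant (alpha and gamma as in the RetinaNet paper) looks like this:
import torch
import torch.nn.functional as F
def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
    p_t = torch.exp(-bce)  # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
torchvision also ships a ready-made version of this (sigmoid focal loss) in its ops module.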
| https://stackoverflow.com/questions/73302325/ |
How to get gradient (dL/dw) during training in Pytorch? | class ConvNet(torch.nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 6, 5)
        self.pool = torch.nn.MaxPool2d(2, 2)
        self.conv2 = torch.nn.Conv2d(6, 16, 5)
        self.fc1 = torch.nn.Linear(16 * 4 * 4, 62)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = self.fc1(x)
        return x

for x, y in loader:
    x, y = x.to(device), y.to(device)
    optimizer.zero_grad()
    loss = torch.nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    print(Convnet().conv1.weight.grad)
    optimizer.step()
I tried Convnet().conv1.weight.grad, but it gives None as output. What are the other options to print gradients in PyTorch?
| Okay, a few things to note here:
I'm assuming you have already instantiated/initialized your ConvNet class with an object called model. (model = ConvNet())
The way you're accessing the model's weight gradients is correct, however, you're using the wrong object to access these weights. Specifically, you're supposed to use the instantiated running model to access these gradients, which is the model object you instantiated. When you use ConvNet().conv1.weight.grad, you're creating a new instance of the class ConvNet() on every call, and none of these instances were used to process your data x, hence they all give None for gradients.
Based on the above points, the correct way to access the gradients is to use your instaniated model which you've used to process your data, which is:
model.conv1.weight.grad
Side note; you might want to use torch's functional API to find the loss as it's more readable: loss = F.cross_entropy(model(x), y)
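And if you want to inspect the gradients of every layer after loss.backward(), a quick sketch:
for name, p in model.named_parameters():
    print(name, None if p.grad is None else p.grad.norm().item())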
| https://stackoverflow.com/questions/73311567/ |
optimizer got an empty parameter list | If I use optim.SGD(model_conv.fc.parameters(), ...) I'm getting an error:
optimizer got an empty parameter list
This error occurs when model_conv.fc is nn.Hardtanh(...) (and also when I try to use ReLU).
But with nn.Linear it works fine.
What could be the reason?
model_conv.fc = nn.Hardtanh(min_val=0.0, max_val=1.0) # not OK --> optimizer got an empty parameter list
#model_conv.fc = nn.ReLU() #also Not OK
# num_ftrs = model_conv.fc.in_features
# model_conv.fc = nn.Linear(num_ftrs, 1) #it works fine
model_conv = model_conv.to(config.device())
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=config.learning_rate, momentum=config.momentum) #error is here
| Hardtanh and ReLU are parameter-free layers (they have no learnable weights), but Linear has parameters.
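You can verify this directly:
import torch.nn as nn
print(list(nn.Hardtanh(min_val=0.0, max_val=1.0).parameters()))  # [] - nothing to optimize
print(list(nn.ReLU().parameters()))                              # []
print(len(list(nn.Linear(10, 1).parameters())))                  # 2 (weight and bias)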
| https://stackoverflow.com/questions/73312561/ |
Clustering Graphs using graph distance | What I'm currently doing is:
Train a GNN and see which graphs are labelled wrongly compared to the ground truth.
Use a GNN-explainer model to help explain which minimum sub-graph is responsible for the mislabeling by checking the wrongly label instances.
Use the graph_edit_distance from networkx to see how much these graphs differentiate from another.
See if I can find clusters that help explain why the GNN might label some graphs wrongly.
Does this seem reasonable?
How would I go around step 4? Would I use something like sklearn_extra.cluster.KMedoids?
All help is appreciated!
|
Use the graph_edit_distance from networkx to see how much these graphs
differentiate from another.
Guessing this gives you a single number for any pair of graphs.
The question is: on what direction is this number? How many dimensions ( directions ) are there? Suppose two graphs have the same distance from a third. Does this mean that the two graphs are close together, forming a cluster at a distance from the third graph?
If you have answers to the questions in the previous paragraph, then the KMeans algorithm can find clusters for as many dimensions as you might have. It is fast and easy to code, usually giving satisfactory results. https://en.wikipedia.org/wiki/K-means_clustering
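If you instead want to cluster directly on the pairwise edit distances (which don't come with coordinates), the KMedoids the question mentions accepts a precomputed distance matrix; a hedged sketch, assuming D holds your symmetric graph_edit_distance values:
import numpy as np
from sklearn_extra.cluster import KMedoids
D = np.random.rand(10, 10)  # placeholder; use your real distance matrix
D = (D + D.T) / 2
np.fill_diagonal(D, 0)
km = KMedoids(n_clusters=3, metric="precomputed", random_state=0).fit(D)
print(km.labels_)           # cluster assignment per graph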
| https://stackoverflow.com/questions/73320165/ |
Plotting the confuison matrix into wandb (pytorch) | I'm training a model and I'm trying to add a confusion matrix, which would be displayed in my wandb, but I got lost a bit. Basically, the matrix works; I can print it, but it's not loaded into wandb. Everything should be OK, except it's not. Can you please help me? I'm new to all this. Thanks a lot!
the code
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
    print('Epoch {}/{}'.format(epoch, num_epochs))
    print('-' * 10)
    for phase in ['train', 'val']:
        if phase == 'train':
            model.train()
        else:
            model.eval()
        running_loss = 0.0
        running_corrects = 0
        for inputs, labels in dataloaders[phase]:
            inputs = inputs.to(device)
            labels = labels.to(device)
            optimizer.zero_grad()
            with torch.set_grad_enabled(phase == 'train'):
                outputs = model(inputs)
                _, preds = torch.max(outputs, 1)
                loss = criterion(outputs, labels)
                if phase == 'train':
                    loss.backward()
                    optimizer.step()
            running_loss += loss.item() * inputs.size(0)
            running_corrects += torch.sum(preds == labels.data)
            from sklearn.metrics import f1_score
            f1_score = f1_score(labels.cpu().data, preds.cpu(), average=None)
            wandb.log({'F1 score': f1_score})
        nb_classes = 7
        confusion_matrix = torch.zeros(nb_classes, nb_classes)
        with torch.no_grad():
            for i, (inputs, classes) in enumerate(dataloaders['val']):
                inputs = inputs.to(device)
                classes = classes.to(device)
                outputs = model_ft(inputs)
                _, preds = torch.max(outputs, 1)
                for t, p in zip(classes.view(-1), preds.view(-1)):
                    confusion_matrix[t.long(), p.long()] += 1
        wandb.log({'matrix': confusion_matrix})
        if phase == 'train':
            scheduler.step()
        epoch_loss = running_loss / dataset_sizes[phase]
        epoch_acc = running_corrects.double() / dataset_sizes[phase]
        wandb.log({'epoch loss': epoch_loss,
                   'epoch acc': epoch_acc})
        data = [[i, random.random() + math.sin(i / 10)] for i in range(100)]
        table = wandb.Table(data=data, columns=["step", "height"])
        wandb.log({'line-plot1': wandb.plot.line(table, "step", "height")})
        print('{} Loss: {:.4f} Acc: {:.4f}'.format(
            phase, epoch_loss, epoch_acc, f1_score))
        if phase == 'val' and epoch_acc > best_acc:
            best_acc = epoch_acc
            best_model_wts = copy.deepcopy(model.state_dict())
    print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
    time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
print('f1_score: {}'.format(f1_score))
model.load_state_dict(best_model_wts)
return model
| Have you tried the wandb Confusion matrix that comes with wandb?
cm = wandb.plot.confusion_matrix(
    y_true=ground_truth,
    preds=predictions,
    class_names=class_names)
wandb.log({"conf_mat": cm})
| https://stackoverflow.com/questions/73320449/ |
How would you write the Multivariate Normal Distribution from scratch in Torch? | I would like to implement the Multivariate Normal Distribution in the Torch library from scratch. My implementation is not giving me the same output as the distribution at torch.distributions.MultivariateNormal. What part do I have wrong?
I tried implementing an equation of the Multivariate Normal Distribution I found on the internet but it doesn't match the output of the Torch MultivariateNormal distribution. I don't see an equation for it in the Torch Documentation.
My code
import torch
µ = torch.tensor([[-10.5, 2.0], [-0.5, 2.0]])
cov = torch.tensor([[12.0, 8.0], [12.0, 40.0]])
pos_def_cov = torch.matmul(cov, cov.T)
Σ = torch.linalg.cholesky(pos_def_cov)
x = torch.randn([1])
d = torch.tensor(x.shape[0])
(1 / torch.sqrt(2 * torch.pi**d) * torch.abs(Σ)) * torch.exp(-0.5 * (x - µ).T * Σ**-1 * (x - µ))
Typically the value in the upper right corner of the matrix is zero.
tensor([[ 0.2842, 0.0000],
[12.4068, 8.7792]])
The Torch distribution with the same matrices.
torch.distributions.MultivariateNormal(µ, pos_def_cov).sample()
The output doesn't have a constant zero value like my output does.
tensor([[-5.4596, 7.1297],
[ 0.8562, -7.6340]])
This is the equation I believe I have implemented correctly from scratch in Torch above. I think my problem may have something to do with my Cholesky Decomposition and making the covariant a positive definite matrix, if this is a fine equation to use and I implemented it correctly.
I have looked at the source code of torch.distributions.MultivariateNormal and I find it too abstract to get a foothold in.
| Σ should be a symmetric matrix by definition. In your provided example, the following code is not correct.
Σ = torch.linalg.cholesky(pos_def_cov)
Moreover, the pdf should return a scalar, not a matrix, so the following code is also wrong. You should use torch.det() rather than torch.abs():
(1 / torch.sqrt(2 * torch.pi**d) * torch.abs(Σ)) * torch.exp(-0.5 * (x - µ).T * Σ**-1 * (x - µ))
The problem is you are trying to compare a probability density function with a randomly generated sample.
A correct demo is the following code:
import torch
µ = torch.tensor([-10.5,2.0])
cov = torch.tensor([[12.0, 8.0], [8.0, 40.0]])
x = torch.randn(2)
d = torch.tensor(x.shape[0])
# your manual implementation of pdf
torch.log(1 / torch.sqrt((2 * torch.pi)**d * torch.det(cov)) * torch.exp(-0.5 * torch.sum((x - µ) * torch.mv(torch.inverse(cov), (x - µ)))))
# pdf from pytorch
torch.distributions.MultivariateNormal(µ, cov).log_prob(x)
| https://stackoverflow.com/questions/73326851/ |
Configuring a progress bar while training for Deep Learning | I have this tiny training function upcycled from a tutorial.
def train(epoch, tokenizer, model, device, loader, optimizer):
    model.train()
    with tqdm.tqdm(loader, unit="batch") as tepoch:
        for _, data in enumerate(loader, 0):
            y = data['target_ids'].to(device, dtype=torch.long)
            y_ids = y[:, :-1].contiguous()
            lm_labels = y[:, 1:].clone().detach()
            lm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100
            ids = data['source_ids'].to(device, dtype=torch.long)
            mask = data['source_mask'].to(device, dtype=torch.long)
            outputs = model(input_ids=ids, attention_mask=mask, decoder_input_ids=y_ids, labels=lm_labels)
            loss = outputs[0]
            tepoch.set_description(f"Epoch {epoch}")
            tepoch.set_postfix(loss=loss.item())
            if _ % 10 == 0:
                wandb.log({"Training Loss": loss.item()})
            if _ % 1000 == 0:
                print(f'Epoch: {epoch}, Loss: {loss.item()}')
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # xm.optimizer_step(optimizer)
            # xm.mark_step()
The function trains fine; the problem is that I can't seem to make the progress bar work correctly. I played around with it, but haven't found a configuration that correctly updates the loss and tells me how much time is left.
Does anyone have any pointers on what I might be doing wrong?
Thanks in advance!
| In case anyone else runs into the same issue: thanks to the previous response, I was able to configure the progress bar as I wanted with just a little tweak of what I was doing before:
def train(epoch, tokenizer, model, device, loader, optimizer):
    model.train()
    for _, data in tqdm(enumerate(loader, 0), unit="batch", total=len(loader)):
Everything else stays the same, and now I have a progress bar showing percentage and loss. I prefer this solution because it allows me to keep the other logging functions I had without further changes.
| https://stackoverflow.com/questions/73327697/ |
convert bounding box coordinates to x,y pairs | I have bounding box coordinates in this format: [x, y, width, height].
how can I get all the x and y pairs from it?
the result is going to be in this format [(x1,y1),(x2,y2),...,(xn,yn)]
thanks in advance!
| I'm not sure if I understand your data description correctly, but here's an example that might fit:
data = [
[1, 2, 100, 100],
[3, 4, 100, 100],
[5, 6, 200, 200],
]
result = [tuple(x[:2]) for x in data]
Result:
[(1, 2), (3, 4), (5, 6)]
| https://stackoverflow.com/questions/73339564/ |
How to copy a tensor with gradient information into another tensor? | I have a tensor A of shape (1, 768) with gradient and a tensor B of shape (2, 4, 768). I want to replace some values of tensor B with tensor A and have it pass back the gradient normally. However, direct assignment like B[batch][replace_ids].data = A seems to lose all gradients in A while B[batch][replace_ids] = A will get a RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation. Is there any feasible way?
Thanks in advance.
| Would be great if we could see an MWE, but I guess you can try cloning B first (so the write happens on a non-leaf copy) and then assigning into the clone:
B = B.clone()
B[batch, replace_ids] = A
| https://stackoverflow.com/questions/73349482/ |
Streamlit crashes while importing torch | I'm trying to load my model and make an app using Streamlit, but the app crashes while importing torch. Does anyone know the reason?
| As long as you add it to your requirements file as "torch" (not pytorch -- I've had my app crash when I've tried importing it as pytorch or listing it as pytorch in my requirements file), it should work. If your app is using a large amount of resources, that could be the cause of the crash. Here's a repo for a Streamlit app that imports torch that I've been able to deploy successfully.
| https://stackoverflow.com/questions/73355634/ |
Problem with executing python machine learning code I found on GitHub | I need some clear instructions on how to execute some code.
Context:
This is a python machine learning peptide binding script, but you don't need to know biology to help me.
I am trying to recreate this scientific paper to test its validity and if I can use it. I work in the biotech industry and am only somewhat familiar with C# and python.
The paper is linked to a GitHub page, and the GitHub page has some instructions on how to execute the code. But every time I try to execute the code as instructed, it gives me an error. I already installed its requirements (the latest PyTorch, NumPy, and scikit-learn); I also switched between GPU and CPU, but neither method worked. I don't know what to do at this point.
Paper Title:
"Prediction of Specific TCR-Peptide Binding From Large Dictionaries of TCR-Peptide Pairs" by Ido Springer, Hanan Besser. etc.
Paper's GitHub (found in the paper's abstract):
https://github.com/louzounlab/ERGO
These are the example codes I input in the terminal. The example code was found in a comment at the end of ERGO.py
GPU ver:
python ERGO.py train lstm mcpas specific cuda:0 --model_file=model.pt --train_data_file=train_data --test_data_file=test_data
GPU code results:
Traceback (most recent call last):
  File "D:\D Download\ERGO-master\ERGO.py", line 437, in <module>
    main(args)
  File "D:\D Download\ERGO-master\ERGO.py", line 141, in main
    model, best_auc, best_roc = lstm.train_model(train_batches, test_batches, args.device, arg, params)
  File "D:\D Download\ERGO-master\lstm_utils.py", line 163, in train_model
    model.to(device)
  File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 927, in to
    return self._apply(convert)
  File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
    param_applied = fn(param)
  File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 211, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
CPU code ver (only replaced specific cuda:0 with specific cpu):
python ERGO.py train lstm mcpas specific cpu --model_file=model.pt --train_data_file=train_data --test_data_file=test_data
CPU code results:
epoch: 1
C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py:1960: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
Traceback (most recent call last):
  File "D:\D Download\ERGO-master\ERGO.py", line 437, in <module>
    main(args)
  File "D:\D Download\ERGO-master\ERGO.py", line 141, in main
    model, best_auc, best_roc = lstm.train_model(train_batches, test_batches, args.device, arg, params)
  File "D:\D Download\ERGO-master\lstm_utils.py", line 173, in train_model
    loss = train_epoch(batches, model, loss_function, optimizer, device)
  File "D:\D Download\ERGO-master\lstm_utils.py", line 137, in train_epoch
    loss = loss_function(probs, batch_signs)
  File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\loss.py", line 613, in forward
    return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
  File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py", line 3074, in binary_cross_entropy
    raise ValueError(
ValueError: Using a target size (torch.Size([50])) that is different to the input size (torch.Size([50, 1])) is deprecated. Please ensure they have the same size.
| Looking at the ValueError, it seems that what you're trying to do is deprecated in pytorch, so you have a more recent version of the package than the one it was developed in. I suggest you try
pip install torch==1.4.0
in the command line.
I'm not familiar with PyTorch, but managing tensor shapes in TensorFlow is the biggest pain in the a** for me. What actually looks to be the problem is that the input has an extra dimension that it shouldn't, so you would have to manually reshape it.
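Concretely, given the sizes in the traceback ([50, 1] predictions vs. [50] targets), a hedged sketch of that reshape around the failing line in lstm_utils.py would be:
probs = probs.squeeze(-1)                # [50, 1] -> [50]
loss = loss_function(probs, batch_signs)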
| https://stackoverflow.com/questions/73356319/ |
Putting all the tensors on one device | I am using ViViT in my model. Although I moved the input and my whole model to CUDA, the training process shows an error at the line applying the position embedding:
class ViViTBackbone(nn.Module):
    """ Model-3 backbone of ViViT """

    def __init__(self, t, h, w, patch_t, patch_h, patch_w, num_classes, dim, depth, heads, mlp_dim, dim_head=3,
                 channels=3, mode='tubelet', emb_dropout=0., dropout=0., model=3):
        super().__init__()
        assert t % patch_t == 0 and h % patch_h == 0 and w % patch_w == 0, "Video dimensions should be divisible by " \
                                                                           "tubelet size "
        self.T = t
        self.H = h
        self.W = w
        self.channels = channels
        self.t = patch_t
        self.h = patch_h
        self.w = patch_w
        self.mode = mode
        self.nt = self.T // self.t
        self.nh = self.H // self.h
        self.nw = self.W // self.w
        tubelet_dim = self.t * self.h * self.w * channels
        self.to_tubelet_embedding = nn.Sequential(
            Rearrange('b c (t pt) (h ph) (w pw) -> b t (h w) (pt ph pw c)', pt=self.t, ph=self.h, pw=self.w),
            nn.Linear(tubelet_dim, dim)
        )
        # repeat same spatial position encoding temporally
        self.pos_embedding = nn.Parameter(torch.randn(1, 1, self.nh * self.nw, dim)).repeat(1, self.nt, 1, 1)
        self.dropout = nn.Dropout(emb_dropout)
        if model == 3:
            self.transformer = FSATransformerEncoder(dim, depth, heads, dim_head, mlp_dim,
                                                     self.nt, self.nh, self.nw, dropout)
        elif model == 4:
            assert heads % 2 == 0, "Number of heads should be even"
            self.transformer = FDATransformerEncoder(dim, depth, heads, dim_head, mlp_dim,
                                                     self.nt, self.nh, self.nw, dropout)
        self.to_latent = nn.Identity()
        self.mlp_head = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, num_classes)
        )

    def forward(self, x):
        """ x is a video: (b, C, T, H, W) """
        tokens = self.to_tubelet_embedding(x)
        tokens += self.pos_embedding  # The error is because of this line
        tokens = self.dropout(tokens)
        x = self.transformer(tokens)
        return x
This is the error:
I create the ViViT according to the following method inside my model class:
self.vivit_FSA_F_8 = ViViTBackbone(t=8, h=16, w=24, patch_t=1, patch_h=16, patch_w=24, num_classes=10, dim=128,
                                   depth=6, heads=10, mlp_dim=8, model=3)
How can I fix that?
| There are multiple ways:
Instead of creating plain attributes like:
self.T = t
register them as parameters, e.g.:
self.T = nn.Parameter(torch.tensor(t, dtype=torch.float))
Then model.to(device) will push all the parameters to the correct device too.
An alternative is to pass the device argument whenever you create a tensor:
some_tensor = torch.tensor(1.0, device=self.device)
or
some_tensor = torch.ones([3, 4], device=self.device)
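For the specific line that fails here, note that calling .repeat() on an nn.Parameter returns a plain tensor, so the result is no longer a registered parameter and model.to(device) will not move it. One hedged sketch of a fix is to keep the parameter un-repeated and repeat it in forward:
self.pos_embedding = nn.Parameter(torch.randn(1, 1, self.nh * self.nw, dim))
...
tokens += self.pos_embedding.repeat(1, self.nt, 1, 1)  # the parameter now moves with model.to(device)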
| https://stackoverflow.com/questions/73363204/ |
When retraining a model in pytorch should the optimizer be defined outside the train method? Will it get rid of previous weights if not? | So I am training a GNN in PyTorch and, after training it, I want to train it further on a separate dataset, starting from its saved weights. When retraining with the new dataset I don't want the weights to be reset; I want the weights to keep updating from my last training session. Currently, my training code looks like this:
def train(data, model):
    train_loader, val_loader, test_loader, feature_len = data
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()
    epoch = 17
    print('start training\n')
    evaluate(model, 'train', train_loader)
    evaluate(model, 'val', val_loader)
    evaluate(model, 'test', test_loader)
    for i in range(epoch):
        print('epoch %d:' % i)
        model.train()
        for graph1, graph2, target in train_loader:
            pred = torch.squeeze(model(graph1, graph2))
            loss = loss_fn(pred, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        evaluate(model, 'train', train_loader)
        evaluate(model, 'val', val_loader)
        evaluate(model, 'test', test_loader)
        print()
At the moment, I create my model object outside of the function and then train it using the code above (I also have an evaluate function, but it is left out to keep the question focused). My question is: if, after using this train method, I decide to train again on more data, does having the optimizer defined inside the method mean training will start from scratch again? If so, to avoid this, should I just define my optimizer outside of the train method? I'm slightly confused about retraining my model with saved weights; the PyTorch tutorials didn't help.
| You can define the optimizer and model wherever you want (both inside and outside the train() method) as long as you are loading the weights correctly before the training loop. What you are probably missing is loading the weights!
From the PyTorch tutorial:
Defining model and optimizer:
model = TheModelClass(*args, **kwargs)
optimizer = TheOptimizerClass(*args, **kwargs)
Loading the weights:
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
model.eval()
# - or -
model.train()
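The saving counterpart from the same tutorial, to run at the end of your first training session:
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
}, PATH)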
| https://stackoverflow.com/questions/73364288/ |
CNN model for RGB images giving 0% accuracy | I am trying to train a CNN model on the CelebA (RGB images) dataset, but when I train the model and check its accuracy, it is 0% or close to 0%. I think the issue is in the ConvNeuralNet function or the hyperparameters, but due to my limited knowledge I'm not sure what I'm missing here. Can someone please help? Thanks.
# Creating a simple network
class ConvNeuralNet(torch.nn.Module):
    def __init__(self, num_classes=10178):
        super(ConvNeuralNet, self).__init__()
        self.conv_layer1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
        self.conv_layer2 = nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3)
        self.max_pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.conv_layer3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3)
        self.conv_layer4 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3)
        self.max_pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc1 = nn.Linear(13312, 128)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        out = self.conv_layer1(x)
        out = self.conv_layer2(out)
        out = self.max_pool1(out)
        out = self.conv_layer3(out)
        out = self.conv_layer4(out)
        out = self.max_pool2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc1(out)
        out = self.relu1(out)
        out = self.fc2(out)
        return F.log_softmax(out, dim=-1)

def trainTorch(torch_model, train_loader, test_loader,
               nb_epochs=NB_EPOCHS, batch_size=BATCH_SIZE, train_end=-1, test_end=-1, learning_rate=LEARNING_RATE, optimizer=None):
    train_loss = []
    total = 0
    correct = 0
    step = 0
    for _epoch in range(nb_epochs):
        for xs, ys in train_loader:
            xs, ys = Variable(xs), Variable(ys)
            if torch.cuda.is_available():
                xs, ys = xs.cuda(), ys.cuda()
            optimizer.zero_grad()
            preds = torch_model(xs)
            preds = F.log_softmax(preds, dim=1)
            loss = F.cross_entropy(preds, ys)
            loss.backward()
            train_loss.append(loss.data.item())
            optimizer.step()  # update gradients
            preds_np = preds.cpu().detach().numpy()
            correct += (np.argmax(preds_np, axis=1) == ys.cpu().detach().numpy()).sum()
            total += train_loader.batch_size
            step += 1
            if total % 1000 == 0:
                acc = float(correct) / total
                print('[%s] Training accuracy: %.2f%%' % (step, acc * 100))
                total = 0
                correct = 0

nb_epochs = 8
image_size = 64
batch_size = 64
num_classes = 10178
learning_rate = 0.001
num_epochs = 8
# Device will determine whether to run the training on GPU or CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
trans = transforms.Compose([
    transforms.Resize(image_size),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
train_loader = torch.utils.data.DataLoader(
    datasets.CelebA('data', split='train', target_type='identity', transform=trans, download="True"),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.CelebA('data', split='test', target_type='identity', transform=trans),
    batch_size=batch_size)

#Training the model
print("Training Model")
# Set optimizer with optimizer
optimizer = torch.optim.SGD(model1.parameters(), lr=learning_rate, weight_decay=0.005, momentum=0.9)
total_step = len(train_loader)
trainTorch(model1, train_loader, test_loader, nb_epochs, batch_size, train_end, test_end, learning_rate, optimizer=optimizer)
**Update** I ran the code for a bit to see if it would start converging. One thing to note is that there are over 10,000 classes. With a batch size of 64, this means it will take more than 150 mini-batches before your model has seen every class in your dataset. You certainly shouldn't expect the model to start achieving accurate predictions within a few hundred steps.
When I printed the loss value, I noticed it was decreasing very slowly. I changed the learning rate to 0.01 and it started decreasing faster.
Also, your model is very shallow for a face recognition model. You're better off using something like a resnet variant (e.g. resnet-50 or resnet-101 from torchvision) rather than rolling your own custom model.
Primary changes include
Learning rate increased
Fix the loss function
Remove log_softmax from output of model
Add activation to the conv layers
IMO the comments about softmax are a bit misleading since you don't need to softmax the output of your model if you are using cross_entropy. You also don't need softmax to get the argmax of the prediction since both softmax and log_softmax don't change the relative ordering of the predictions (i.e. both softmax and log are strictly increasing functions).
IMO the comment about using average pooling to reduce the input size of the first fc layer is a good one and may improve performance, but you'll need to experiment with that one to find good parameters for it so I left it out of this answer.
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from torchvision import datasets, transforms
# Creating a simple network
class ConvNeuralNet(torch.nn.Module):
def __init__(self, num_classes=10178):
super(ConvNeuralNet, self).__init__()
self.conv_layer1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
self.conv_layer2 = nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3)
self.max_pool1 = nn.MaxPool2d(kernel_size = 2, stride = 2)
self.conv_layer3 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3)
self.conv_layer4 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3)
self.max_pool2 = nn.MaxPool2d(kernel_size = 2, stride = 2)
self.fc1 = nn.Linear(13312, 128)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(128, num_classes)
def forward(self, x):
# note the relu activations on the conv layers
out = F.relu(self.conv_layer1(x))
out = F.relu(self.conv_layer2(out))
out = self.max_pool1(out)
out = F.relu(self.conv_layer3(out))
out = F.relu(self.conv_layer4(out))
out = self.max_pool2(out)
# you may want an adaptive average pool 2d here to reduce size of feature map further
out = out.reshape(out.size(0), -1)
out = self.fc1(out)
out = self.relu1(out)
out = self.fc2(out)
# return raw logits, not log-softmax output
return out
def trainTorch(torch_model, train_loader, test_loader, nb_epochs, batch_size, learning_rate, optimizer):
train_loss = []
total = 0
correct = 0
step = 0
for _epoch in range(nb_epochs):
for xs, ys in train_loader:
# the Variable interface has been deprecated for years, it is effectively a no-op in modern pytorch
# see: https://pytorch.org/docs/stable/autograd.html#variable-deprecated
if torch.cuda.is_available():
xs, ys = xs.cuda(), ys.cuda()
optimizer.zero_grad()
logits = torch_model(xs)
# don't softmax or log-softmax the inputs to cross_entropy
loss = F.cross_entropy(logits, ys)
# The following is equivalent but less numerically stable
# loss = F.nll_loss(F.log_softmax(logits), ys)
loss.backward()
train_loss.append(loss.item())
optimizer.step() # update gradients
logits_np = logits.cpu().detach().numpy()
correct += (np.argmax(logits_np, axis=1) == ys.cpu().detach().numpy()).sum()
total += train_loader.batch_size
step += 1
if step % 200 == 0:
acc = float(correct) / total
avg_loss = sum(train_loss) / len(train_loss)
print(f'[{step}] Training accuracy: {acc*100:.2f}% Training loss: {avg_loss:.4f}')
total = 0
correct = 0
train_loss = []
nb_epochs = 8
image_size = 64
batch_size = 64
num_classes = 10178
# increased learning rate to 0.01
learning_rate = 0.01
num_epochs = 8
# Device will determine whether to run the training on GPU or CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
trans = transforms.Compose([
transforms.Resize(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
train_loader = torch.utils.data.DataLoader(
datasets.CelebA('data', split='train', target_type='identity', transform=trans, download=True),
batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.CelebA('data', split='test', target_type='identity', transform=trans),
batch_size=batch_size)
model = ConvNeuralNet(num_classes)
if torch.cuda.is_available():
model.cuda()
#Training the model
print("Training Model")
# Set up the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=0.005, momentum=0.9)
total_step = len(train_loader)
trainTorch(model, train_loader, test_loader, nb_epochs, batch_size, learning_rate, optimizer=optimizer)
Output
Training Model
[200] Training accuracy: 0.00% Training loss: 9.2286
[400] Training accuracy: 0.02% Training loss: 9.2286
[600] Training accuracy: 0.04% Training loss: 9.2265
[800] Training accuracy: 0.00% Training loss: 9.2253
[1000] Training accuracy: 0.00% Training loss: 9.2222
[1200] Training accuracy: 0.00% Training loss: 9.2105
[1400] Training accuracy: 0.02% Training loss: 9.1776
[1600] Training accuracy: 0.03% Training loss: 9.1329
[1800] Training accuracy: 0.02% Training loss: 9.1013
[2000] Training accuracy: 0.02% Training loss: 9.0830
[2200] Training accuracy: 0.02% Training loss: 9.0715
[2400] Training accuracy: 0.01% Training loss: 9.0622
[2600] Training accuracy: 0.02% Training loss: 9.0456
[2800] Training accuracy: 0.00% Training loss: 9.0301
[3000] Training accuracy: 0.00% Training loss: 9.0357
[3200] Training accuracy: 0.02% Training loss: 9.0402
[3400] Training accuracy: 0.02% Training loss: 9.0321
[3600] Training accuracy: 0.02% Training loss: 9.0217
[3800] Training accuracy: 0.02% Training loss: 8.9757
[4000] Training accuracy: 0.09% Training loss: 8.9059
[4200] Training accuracy: 0.09% Training loss: 8.8331
[4400] Training accuracy: 0.09% Training loss: 8.7601
[4600] Training accuracy: 0.09% Training loss: 8.7356
[4800] Training accuracy: 0.10% Training loss: 8.6717
[5000] Training accuracy: 0.12% Training loss: 8.6311
[5200] Training accuracy: 0.16% Training loss: 8.5515
[5400] Training accuracy: 0.16% Training loss: 8.4943
[5600] Training accuracy: 0.14% Training loss: 8.4345
[5800] Training accuracy: 0.14% Training loss: 8.4107
[6000] Training accuracy: 0.18% Training loss: 8.3317
[6200] Training accuracy: 0.22% Training loss: 8.2716
[6400] Training accuracy: 0.31% Training loss: 8.1934
[6600] Training accuracy: 0.30% Training loss: 8.1500
[6800] Training accuracy: 0.35% Training loss: 8.0979
[7000] Training accuracy: 0.21% Training loss: 8.0739
[7200] Training accuracy: 0.44% Training loss: 8.0220
[7400] Training accuracy: 0.29% Training loss: 7.9819
From the output we see the loss is decreasing and the accuracy is starting to increase. Its hard to predict how well this will work and when it will converge but this is a good start. You'll probably need to use a better model and a learning rate scheduler to get better performance.
For example, just switching to a resnet-50
import torchvision
model = torchvision.models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)
The model starts converging much faster
Training Model
[200] Training accuracy: 0.05% Training loss: 9.1942
[400] Training accuracy: 0.05% Training loss: 8.9244
[600] Training accuracy: 0.15% Training loss: 8.5936
[800] Training accuracy: 0.30% Training loss: 8.3147
[1000] Training accuracy: 0.39% Training loss: 8.0745
[1200] Training accuracy: 0.43% Training loss: 7.9146
[1400] Training accuracy: 0.45% Training loss: 7.7706
[1600] Training accuracy: 0.64% Training loss: 7.6551
[1800] Training accuracy: 0.68% Training loss: 7.5784
[2000] Training accuracy: 0.74% Training loss: 7.5327
[2200] Training accuracy: 0.72% Training loss: 7.4689
[2400] Training accuracy: 0.63% Training loss: 7.4378
[2600] Training accuracy: 0.83% Training loss: 7.3789
[2800] Training accuracy: 0.90% Training loss: 7.2812
[3000] Training accuracy: 0.84% Training loss: 7.2771
[3200] Training accuracy: 0.96% Training loss: 7.2536
[3400] Training accuracy: 1.00% Training loss: 7.2538
| https://stackoverflow.com/questions/73364701/ |
why is ray Tune with pytorch HPO error 'trials did not complete, incomplete trials'? | Could someone explain why this code (that I took from here):
## Standard libraries
import os
import json
import math
import numpy as np
import time
## Imports for plotting
import matplotlib.pyplot as plt
#%matplotlib inline
#from IPython.display import set_matplotlib_formats
#set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
sns.reset_orig()
sns.set()
import torch_geometric
import torch_geometric.nn as geom_nn
import torch_geometric.data as geom_data
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
## Progress bar
from tqdm.notebook import tqdm
## PyTorch
import torch
import torchmetrics
from torchmetrics.functional import precision_recall
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
# Torchvision
import torchvision
from torchvision.datasets import CIFAR10
from torchvision import transforms
# PyTorch Lightning
import pytorch_lightning as pl
from ray import tune
def __init__(self, config):
super(LightningMNISTClassifier, self).__init__()
self.layer_1_size = config["layer_1_size"]
self.layer_2_size = config["layer_2_size"]
self.lr = config["lr"]
self.batch_size = config["batch_size"]
from ray.tune.integration.pytorch_lightning import TuneReportCallback
callback = TuneReportCallback(
{
"loss": "val_loss",
"mean_accuracy": "val_accuracy"
},
on="validation_end")
def train_tune(config, epochs=10, gpus=0):
model = LightningMNISTClassifier(config)
trainer = pl.Trainer(
max_epochs=epochs,
gpus=gpus,
progress_bar_refresh_rate=0,
callbacks=[callback])
trainer.fit(model)
config = {
"layer_1_size": tune.choice([32, 64, 128]),
"layer_2_size": tune.choice([64, 128, 256]),
"lr": tune.loguniform(1e-4, 1e-1),
"batch_size": tune.choice([32, 64, 128])
}
def train_tune(config, epochs=10, gpus=0):
model = LightningMNISTClassifier(config)
trainer = pl.Trainer(
max_epochs=epochs,
gpus=gpus,
progress_bar_refresh_rate=0,
callbacks=[callback])
trainer.fit(model)
from functools import partial
tune.run(
partial(train_tune, epochs=10, gpus=0),
config=config,
num_samples=10)
generates this error:
Traceback (most recent call last):
File "example_hpo_working.py", line 89, in <module>
num_samples=10)
File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/tune.py", line 741, in run
raise TuneError("Trials did not complete", incomplete_trials)
ray.tune.error.TuneError: ('Trials did not complete', [train_tune_6f362_00000, train_tune_6f362_00001, train_tune_6f362_00002, train_tune_6f362_00003, train_tune_6f362_00004, train_tune_6f362_00005, train_tune_6f362_00006, train_tune_6f362_00007, train_tune_6f362_00008, train_tune_6f362_00009])
I can see a similar question was asked here but not answered (the ultimate aim is to use ray hyperparameter optimisation with a pytorch network).
This is the full trace from the code:
2022-08-16 15:44:08,204 WARNING function_runner.py:604 -- Function checkpointing is disabled. This may result in unexpected behavior when using checkpointing features or certain schedulers. To enable, set the train function arguments to be `func(config, checkpoint_dir=None)`.
2022-08-16 15:44:08,411 ERROR syncer.py:147 -- Log sync requires rsync to be installed.
== Status ==
Memory usage on this node: 16.8/86.4 GiB
Using FIFO scheduling algorithm.
Resources requested: 1.0/64 CPUs, 0/0 GPUs, 0.0/62.79 GiB heap, 0.0/9.31 GiB objects
Result logdir: /root/ray_results/train_tune_2022-08-16_15-44-08
Number of trials: 10/10 (9 PENDING, 1 RUNNING)
+------------------------+----------+------------------+--------------+----------------+----------------+-------------+
| Trial name | status | loc | batch_size | layer_1_size | layer_2_size | lr |
|------------------------+----------+------------------+--------------+----------------+----------------+-------------|
| train_tune_43fd5_00000 | RUNNING | 172.17.0.2:41684 | 64 | 64 | 256 | 0.00233834 |
| train_tune_43fd5_00001 | PENDING | | 64 | 64 | 256 | 0.00155955 |
| train_tune_43fd5_00002 | PENDING | | 128 | 128 | 64 | 0.00399358 |
| train_tune_43fd5_00003 | PENDING | | 128 | 128 | 64 | 0.000184477 |
...deleted a few similar lines here
...and then there's:
(func pid=41684) 2022-08-16 15:44:10,774 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41684) Traceback (most recent call last):
(func pid=41684) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41684) self._entrypoint()
(func pid=41684) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41684) self._status_reporter.get_checkpoint(),
(func pid=41684) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41684) return method(self, *_args, **_kwargs)
(func pid=41684) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41684) output = fn()
(func pid=41684) File "example_hpo_working.py", line 76, in train_tune
(func pid=41684) model = LightningMNISTClassifier(config)
(func pid=41684) NameError: name 'LightningMNISTClassifier' is not defined
2022-08-16 15:44:10,977 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00000: Error processing event.
NoneType: None
Result for train_tune_43fd5_00000:
date: 2022-08-16_15-44-10
experiment_id: c8977e85cbf84a9badff15fb2de6f516
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41684
timestamp: 1660664650
trial_id: 43fd5_00000
(func pid=41722) 2022-08-16 15:44:13,241 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41722) Traceback (most recent call last):
(func pid=41722) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41722) self._entrypoint()
(func pid=41722) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41722) self._status_reporter.get_checkpoint(),
(func pid=41722) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41722) return method(self, *_args, **_kwargs)
(func pid=41722) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41722) output = fn()
(func pid=41722) File "example_hpo_working.py", line 76, in train_tune
(func pid=41722) model = LightningMNISTClassifier(config)
(func pid=41722) NameError: name 'LightningMNISTClassifier' is not defined
(func pid=41720) 2022-08-16 15:44:13,253 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41720) Traceback (most recent call last):
(func pid=41720) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41720) self._entrypoint()
(func pid=41720) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41720) self._status_reporter.get_checkpoint(),
(func pid=41720) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41720) return method(self, *_args, **_kwargs)
(func pid=41720) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41720) output = fn()
(func pid=41720) File "example_hpo_working.py", line 76, in train_tune
(func pid=41720) model = LightningMNISTClassifier(config)
(func pid=41720) NameError: name 'LightningMNISTClassifier' is not defined
(func pid=41718) 2022-08-16 15:44:13,253 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41718) Traceback (most recent call last):
(func pid=41718) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41718) self._entrypoint()
(func pid=41718) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41718) self._status_reporter.get_checkpoint(),
(func pid=41718) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41718) return method(self, *_args, **_kwargs)
(func pid=41718) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41718) output = fn()
(func pid=41718) File "example_hpo_working.py", line 76, in train_tune
(func pid=41718) model = LightningMNISTClassifier(config)
(func pid=41718) NameError: name 'LightningMNISTClassifier' is not defined
(func pid=41734) 2022-08-16 15:44:13,340 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41734) Traceback (most recent call last):
(func pid=41734) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41734) self._entrypoint()
(func pid=41734) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41734) self._status_reporter.get_checkpoint(),
(func pid=41734) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41734) return method(self, *_args, **_kwargs)
(func pid=41734) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41734) output = fn()
(func pid=41734) File "example_hpo_working.py", line 76, in train_tune
(func pid=41734) model = LightningMNISTClassifier(config)
(func pid=41734) NameError: name 'LightningMNISTClassifier' is not defined
(func pid=41732) 2022-08-16 15:44:13,325 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41732) Traceback (most recent call last):
(func pid=41732) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41732) self._entrypoint()
(func pid=41732) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41732) self._status_reporter.get_checkpoint(),
(func pid=41732) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41732) return method(self, *_args, **_kwargs)
(func pid=41732) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41732) output = fn()
(func pid=41732) File "example_hpo_working.py", line 76, in train_tune
(func pid=41732) model = LightningMNISTClassifier(config)
(func pid=41732) NameError: name 'LightningMNISTClassifier' is not defined
(func pid=41728) 2022-08-16 15:44:13,309 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41728) Traceback (most recent call last):
(func pid=41728) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41728) self._entrypoint()
(func pid=41728) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41728) self._status_reporter.get_checkpoint(),
(func pid=41728) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41728) return method(self, *_args, **_kwargs)
(func pid=41728) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41728) output = fn()
(func pid=41728) File "example_hpo_working.py", line 76, in train_tune
(func pid=41728) model = LightningMNISTClassifier(config)
(func pid=41728) NameError: name 'LightningMNISTClassifier' is not defined
(func pid=41730) 2022-08-16 15:44:13,272 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41730) Traceback (most recent call last):
(func pid=41730) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41730) self._entrypoint()
(func pid=41730) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41730) self._status_reporter.get_checkpoint(),
(func pid=41730) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41730) return method(self, *_args, **_kwargs)
(func pid=41730) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41730) output = fn()
(func pid=41730) File "example_hpo_working.py", line 76, in train_tune
(func pid=41730) model = LightningMNISTClassifier(config)
(func pid=41730) NameError: name 'LightningMNISTClassifier' is not defined
2022-08-16 15:44:13,444 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00003: Error processing event.
NoneType: None
Result for train_tune_43fd5_00003:
date: 2022-08-16_15-44-13
experiment_id: 02204d81b72943e3bbfcc822d35f02a0
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41722
timestamp: 1660664653
trial_id: 43fd5_00003
(func pid=41724) 2022-08-16 15:44:13,457 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41724) Traceback (most recent call last):
(func pid=41724) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41724) self._entrypoint()
(func pid=41724) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41724) self._status_reporter.get_checkpoint(),
(func pid=41724) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41724) return method(self, *_args, **_kwargs)
(func pid=41724) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41724) output = fn()
(func pid=41724) File "example_hpo_working.py", line 76, in train_tune
(func pid=41724) model = LightningMNISTClassifier(config)
(func pid=41724) NameError: name 'LightningMNISTClassifier' is not defined
== Status ==
Current time: 2022-08-16 15:44:13 (running for 00:00:05.24)
Memory usage on this node: 17.6/86.4 GiB
Using FIFO scheduling algorithm.
Resources requested: 8.0/64 CPUs, 0/0 GPUs, 0.0/62.79 GiB heap, 0.0/9.31 GiB objects
Result logdir: /root/ray_results/train_tune_2022-08-16_15-44-08
Number of trials: 10/10 (2 ERROR, 8 RUNNING)
+------------------------+----------+------------------+--------------+----------------+----------------+-------------+
| Trial name | status | loc | batch_size | layer_1_size | layer_2_size | lr |
|------------------------+----------+------------------+--------------+----------------+----------------+-------------|
| train_tune_43fd5_00001 | RUNNING | 172.17.0.2:41718 | 64 | 64 | 256 | 0.00155955 |
| train_tune_43fd5_00002 | RUNNING | 172.17.0.2:41720 | 128 | 128 | 64 | 0.00399358 |
| train_tune_43fd5_00004 | RUNNING | 172.17.0.2:41724 | 128 | 64 | 128 | 0.0221855 |
| train_tune_43fd5_00005 | RUNNING | 172.17.0.2:41726 | 64 | 128 | 128 | 0.00041038 |
| train_tune_43fd5_00006 | RUNNING | 172.17.0.2:41728 | 64 | 64 | 256 | 0.0105243 |
| train_tune_43fd5_00007 | RUNNING | 172.17.0.2:41730 | 128 | 32 | 256 | 0.000929454 |
| train_tune_43fd5_00008 | RUNNING | 172.17.0.2:41732 | 64 | 64 | 128 | 0.00176483 |
| train_tune_43fd5_00009 | RUNNING | 172.17.0.2:41734 | 128 | 32 | 256 | 0.000113077 |
| train_tune_43fd5_00000 | ERROR | 172.17.0.2:41684 | 64 | 64 | 256 | 0.00233834 |
| train_tune_43fd5_00003 | ERROR | 172.17.0.2:41722 | 128 | 128 | 64 | 0.000184477 |
+------------------------+----------+------------------+--------------+----------------+----------------+-------------+
Number of errored trials: 2
+------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| train_tune_43fd5_00000 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00000_0_batch_size=64,layer_1_size=64,layer_2_size=256,lr=0.0023_2022-08-16_15-44-08/error.txt |
| train_tune_43fd5_00003 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00003_3_batch_size=128,layer_1_size=128,layer_2_size=64,lr=0.0002_2022-08-16_15-44-10/error.txt |
+------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2022-08-16 15:44:13,487 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00001: Error processing event.
NoneType: None
Result for train_tune_43fd5_00001:
date: 2022-08-16_15-44-13
experiment_id: e738348e77c64919931d70c916cbfaf8
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41718
timestamp: 1660664653
trial_id: 43fd5_00001
2022-08-16 15:44:13,490 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00007: Error processing event.
NoneType: None
Result for train_tune_43fd5_00007:
date: 2022-08-16_15-44-13
experiment_id: f79be7b9e98a43f1a41893071c4e1f6b
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41730
timestamp: 1660664653
trial_id: 43fd5_00007
2022-08-16 15:44:13,493 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00002: Error processing event.
NoneType: None
Result for train_tune_43fd5_00002:
date: 2022-08-16_15-44-13
experiment_id: 8e7422287e3e44f9b2e7b249a8ae18cd
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41720
timestamp: 1660664653
trial_id: 43fd5_00002
2022-08-16 15:44:13,512 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00006: Error processing event.
NoneType: None
Result for train_tune_43fd5_00006:
date: 2022-08-16_15-44-13
experiment_id: 2d56b152a6a34e1f9e26dad1aec25d00
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41728
timestamp: 1660664653
trial_id: 43fd5_00006
2022-08-16 15:44:13,527 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00008: Error processing event.
NoneType: None
Result for train_tune_43fd5_00008:
date: 2022-08-16_15-44-13
experiment_id: b2158026b3b947bfbb9c3da4e6f7b977
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41732
timestamp: 1660664653
trial_id: 43fd5_00008
2022-08-16 15:44:13,543 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00009: Error processing event.
NoneType: None
Result for train_tune_43fd5_00009:
date: 2022-08-16_15-44-13
experiment_id: 6b5a73f09241440085bd6c09f6f681e9
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41734
timestamp: 1660664653
trial_id: 43fd5_00009
(func pid=41726) 2022-08-16 15:44:13,484 ERROR function_runner.py:286 -- Runner Thread raised error.
(func pid=41726) Traceback (most recent call last):
(func pid=41726) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 277, in run
(func pid=41726) self._entrypoint()
(func pid=41726) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 352, in entrypoint
(func pid=41726) self._status_reporter.get_checkpoint(),
(func pid=41726) File "/root/miniconda3/lib/python3.7/site-packages/ray/util/tracing/tracing_helper.py", line 462, in _resume_span
(func pid=41726) return method(self, *_args, **_kwargs)
(func pid=41726) File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/function_runner.py", line 645, in _trainable_func
(func pid=41726) output = fn()
(func pid=41726) File "example_hpo_working.py", line 76, in train_tune
(func pid=41726) model = LightningMNISTClassifier(config)
(func pid=41726) NameError: name 'LightningMNISTClassifier' is not defined
2022-08-16 15:44:13,660 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00004: Error processing event.
NoneType: None
Result for train_tune_43fd5_00004:
date: 2022-08-16_15-44-13
experiment_id: 60f51e072c7942bdb5d9298e0e147555
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41724
timestamp: 1660664653
trial_id: 43fd5_00004
2022-08-16 15:44:13,687 ERROR trial_runner.py:886 -- Trial train_tune_43fd5_00005: Error processing event.
NoneType: None
Result for train_tune_43fd5_00005:
date: 2022-08-16_15-44-13
experiment_id: 79701d1c19ac4c55b5a73746c1872724
hostname: 0e26c6a24ffa
node_ip: 172.17.0.2
pid: 41726
timestamp: 1660664653
trial_id: 43fd5_00005
== Status ==
Current time: 2022-08-16 15:44:13 (running for 00:00:05.46)
Memory usage on this node: 16.4/86.4 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/64 CPUs, 0/0 GPUs, 0.0/62.79 GiB heap, 0.0/9.31 GiB objects
Result logdir: /root/ray_results/train_tune_2022-08-16_15-44-08
Number of trials: 10/10 (10 ERROR)
+------------------------+----------+------------------+--------------+----------------+----------------+-------------+
| Trial name | status | loc | batch_size | layer_1_size | layer_2_size | lr |
|------------------------+----------+------------------+--------------+----------------+----------------+-------------|
| train_tune_43fd5_00000 | ERROR | 172.17.0.2:41684 | 64 | 64 | 256 | 0.00233834 |
| train_tune_43fd5_00001 | ERROR | 172.17.0.2:41718 | 64 | 64 | 256 | 0.00155955 |
| train_tune_43fd5_00002 | ERROR | 172.17.0.2:41720 | 128 | 128 | 64 | 0.00399358 |
| train_tune_43fd5_00003 | ERROR | 172.17.0.2:41722 | 128 | 128 | 64 | 0.000184477 |
| train_tune_43fd5_00004 | ERROR | 172.17.0.2:41724 | 128 | 64 | 128 | 0.0221855 |
| train_tune_43fd5_00005 | ERROR | 172.17.0.2:41726 | 64 | 128 | 128 | 0.00041038 |
| train_tune_43fd5_00006 | ERROR | 172.17.0.2:41728 | 64 | 64 | 256 | 0.0105243 |
| train_tune_43fd5_00007 | ERROR | 172.17.0.2:41730 | 128 | 32 | 256 | 0.000929454 |
| train_tune_43fd5_00008 | ERROR | 172.17.0.2:41732 | 64 | 64 | 128 | 0.00176483 |
| train_tune_43fd5_00009 | ERROR | 172.17.0.2:41734 | 128 | 32 | 256 | 0.000113077 |
+------------------------+----------+------------------+--------------+----------------+----------------+-------------+
Number of errored trials: 10
+------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| train_tune_43fd5_00000 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00000_0_batch_size=64,layer_1_size=64,layer_2_size=256,lr=0.0023_2022-08-16_15-44-08/error.txt |
| train_tune_43fd5_00001 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00001_1_batch_size=64,layer_1_size=64,layer_2_size=256,lr=0.0016_2022-08-16_15-44-10/error.txt |
| train_tune_43fd5_00002 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00002_2_batch_size=128,layer_1_size=128,layer_2_size=64,lr=0.0040_2022-08-16_15-44-10/error.txt |
| train_tune_43fd5_00003 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00003_3_batch_size=128,layer_1_size=128,layer_2_size=64,lr=0.0002_2022-08-16_15-44-10/error.txt |
| train_tune_43fd5_00004 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00004_4_batch_size=128,layer_1_size=64,layer_2_size=128,lr=0.0222_2022-08-16_15-44-10/error.txt |
| train_tune_43fd5_00005 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00005_5_batch_size=64,layer_1_size=128,layer_2_size=128,lr=0.0004_2022-08-16_15-44-10/error.txt |
| train_tune_43fd5_00006 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00006_6_batch_size=64,layer_1_size=64,layer_2_size=256,lr=0.0105_2022-08-16_15-44-10/error.txt |
| train_tune_43fd5_00007 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00007_7_batch_size=128,layer_1_size=32,layer_2_size=256,lr=0.0009_2022-08-16_15-44-10/error.txt |
| train_tune_43fd5_00008 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00008_8_batch_size=64,layer_1_size=64,layer_2_size=128,lr=0.0018_2022-08-16_15-44-10/error.txt |
| train_tune_43fd5_00009 | 1 | /root/ray_results/train_tune_2022-08-16_15-44-08/train_tune_43fd5_00009_9_batch_size=128,layer_1_size=32,layer_2_size=256,lr=0.0001_2022-08-16_15-44-10/error.txt |
+------------------------+--------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Traceback (most recent call last):
File "example_hpo_working.py", line 89, in <module>
num_samples=10)
File "/root/miniconda3/lib/python3.7/site-packages/ray/tune/tune.py", line 741, in run
raise TuneError("Trials did not complete", incomplete_trials)
ray.tune.error.TuneError: ('Trials did not complete', [train_tune_43fd5_00000, train_tune_43fd5_00001, train_tune_43fd5_00002, train_tune_43fd5_00003, train_tune_43fd5_00004, train_tune_43fd5_00005, train_tune_43fd5_00006, train_tune_43fd5_00007, train_tune_43fd5_00008, train_tune_43fd5_00009])
| Is there a longer stack trace where the real error is printed?
Also, could you go to the result folder and see the error file?
Usually the result folder is under ~/ray_results.
For what it's worth, the trace you posted already shows the per-trial failure: NameError: name 'LightningMNISTClassifier' is not defined. The class definition never makes it into the script (note the def __init__(self, config) with no enclosing class statement), so every train_tune call fails and Tune reports all trials as incomplete.
| https://stackoverflow.com/questions/73374386/ |
How to convert image from folder into tensors using torch? | I'm trying to convert images in a folder to tensors, save them, and load them later, as shown below
transform = transforms.Compose([
transforms.ToTensor()])
dataset = datasets.ImageFolder(
r'imagedata', transform=transform)
torch.save(dataset, 'train_data.pt')
But I get a value error when trying to load the trained file as below:
train_codes = torch.Tensor(torch.load(os.path.join(self.data_dir, "train_data.pt")))
ValueError: only one element tensors can be converted to Python scalars
Any help or suggestion to fix this will be highly appreciated.
| You met this problem because train_data.pt was not saved as a Tensor: that variable holds the dataset read by ImageFolder (which inherits from DatasetFolder), so it should be loaded and used as a Torch Dataset. The example below uses a DataLoader, as in the documentation:
import os
import torch
from torchvision import transforms, datasets
# Saving part
transform = transforms.Compose([
transforms.ToTensor()
])
dataset = datasets.ImageFolder(r'imagedata', transform=transform)
torch.save(dataset,'train_data.pt')
# Loading part
data = torch.load(os.path.join(self.data_dir, "train_data.pt"))  # a Dataset, not a Tensor
loader = torch.utils.data.DataLoader(data, batch_size=32)
for image, label in loader:
# Processing....
| https://stackoverflow.com/questions/73378920/ |
Pytorch model function gives Error :'NoneType' object has no attribute 'size' | I'm trying to run the code below for my image segmentation problem without CUDA, as I don't have a GPU. I have trained my model on CPU using PyTorch, but at prediction time I'm getting
AttributeError: 'NoneType' object has no attribute 'size'
Here's the code:
idx = 20
model.load_state_dict(torch.load('/content/best_model.pt'))
image, mask = validset[idx]
image = image.unsqueeze_(0)
print(type(image))
# logits_mask = model(image.to(DEVICE).unsqueeze(0)) # (c,h,w) -> (1,c,h,w)
logits_mask = model(image) # (c,h,w) py-> (1,c,h,w)
The resulting error, from the output, is at line number 8:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-56-edf3f0fae49c> in <module>
6 print(type(image))
7 # logits_mask = model(image.to(DEVICE).unsqueeze(0)) # (c,h,w) -> (1,c,h,w)
----> 8 logits_mask = model(image) # (c,h,w) py-> (1,c,h,w)
9
10 pred_mask = torch.sigmoid(logits_mask)
3 frames
/usr/local/lib/python3.7/dist-packages/segmentation_models_pytorch/losses/dice.py in forward(self, y_pred, y_true)
57 def forward(self, y_pred: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
58
---> 59 assert y_true.size(0) == y_pred.size(0)
60
61 if self.from_logits:
AttributeError: 'NoneType' object has no attribute 'size'
| assert y_true.size(0) == y_pred.size(0) erroring signifies that either y_true or y_pred is None, so you can try checking the types of image, model(image), and mask respectively.
IMO, this might be the root cause: image = image.unsqueeze_(0)
unsqueeze_ is an in-place operator that mutates the tensor image directly; prefer the out-of-place image.unsqueeze(0) here.
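A minimal corrected version of the prediction snippet from the question:
image, mask = validset[idx]
image = image.unsqueeze(0)  # out-of-place: returns a new (1, c, h, w) tensor
logits_mask = model(image)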
| https://stackoverflow.com/questions/73379963/ |
BertModel weights are randomly initialized? | Recently, I've been trying to re-implement DiffCSE.
While refactoring the code that the authors uploaded on GitHub, I've run into some issues.
I have 2 questions
1.
If I set a seed like set_seed(30), I was under the impression that the model gets the same initialized weights, thus producing the same result when training. But it seems I was wrong.
for example,
config = AutoConfig.from_pretrained('bert-base-uncased')
a = BertModel(config)
b = BertModel(config)
a_query =a.encoder.layer[0].attention.self.query.weight
b_query =b.encoder.layer[0].attention.self.query.weight
a_query == b_query
# tensor([[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
...,
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False]])
print(a_query, b_query)
Parameter containing:
tensor([[ 0.0168, -0.0072, 0.0141, ..., 0.0060, -0.0098, -0.0361],
[ 0.0121, -0.0106, 0.0169, ..., -0.0512, 0.0154, -0.0251],
[ 0.0252, 0.0375, 0.0215, ..., -0.0097, -0.0009, -0.0102],
...,
[ 0.0038, 0.0120, -0.0205, ..., -0.0082, -0.0066, 0.0125],
[ 0.0032, -0.0330, 0.0073, ..., 0.0072, 0.0484, 0.0143],
[-0.0153, 0.0207, -0.0086, ..., -0.0087, -0.0032, 0.0022]],
requires_grad=True) Parameter containing:
tensor([[ 0.0239, 0.0236, 0.0181, ..., -0.0331, 0.0062, 0.0142],
[-0.0116, 0.0417, -0.0379, ..., 0.0059, 0.0207, 0.0155],
[ 0.0178, 0.0017, 0.0064, ..., -0.0007, 0.0405, -0.0170],
...,
[ 0.0115, 0.0039, -0.0508, ..., 0.0187, 0.0043, -0.0048],
[ 0.0025, -0.0079, -0.0132, ..., -0.0003, -0.0079, 0.0320],
[-0.0105, -0.0097, -0.0076, ..., 0.0214, -0.0068, 0.0016]],
requires_grad=True)
I can't understand why this happens. Also, every time I execute this code, the weights come out different.
2.
There are many models provided by Huggingface. When it comes to BERT, they have BertModel, BertForPreTraining, BertForMaskedLM, etc. As far as I know, the only difference between these Bert models is whether they have heads on the top layer or not.
Then, are the heads also pretrained? Or are they just randomly initialized weights provided for users' convenience?
| You have a small misunderstanding of how seeds work. The seed defines how the random values are sampled, it doesn't reset after each sample. This means that the sequences sampled will be the same when starting from the seed. For example, if you have a code like:
seed = 1
sample = sample_4_values()
You should always get the same four values because the seed defined this sequence.
In your case you define 2 BERT models without resetting the seed, so the starting point for each sample isn't the same!
In order to get the same weights, reset the seed before each initialization of BERT.
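For example (a minimal sketch, assuming the default torch RNG is the only source of randomness here):
import torch
from transformers import AutoConfig, BertModel

config = AutoConfig.from_pretrained('bert-base-uncased')
torch.manual_seed(30)
a = BertModel(config)
torch.manual_seed(30)  # reset to the same starting point before the second init
b = BertModel(config)
a_query = a.encoder.layer[0].attention.self.query.weight
b_query = b.encoder.layer[0].attention.self.query.weight
print(torch.equal(a_query, b_query))  # True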
## Edit
To better understand what the seed does, you need to think of it as a starting point. Imagine that setting the seed to 30 tells the computer to sample the following numbers: 1, 2, 3, 5, 6.
Calling the sample function once will return 1. Calling it again will return 2, and so on. What you are basically doing is sampling twice, but the second sample starts where the first one left off, so each initialization gets a different starting point.
| https://stackoverflow.com/questions/73382965/ |
pytorch tensor sort rows based on column | In a 2D tensor like so
tensor([[0.8771, 0.0976, 0.8186],
[0.7044, 0.4783, 0.0350],
[0.4239, 0.8341, 0.3693],
[0.5568, 0.9175, 0.0763],
[0.0876, 0.1651, 0.2776]])
How do you sort the rows based off the values in a column? For instance if we were to sort based off the last column, I would expect the rows to be such...
tensor([[0.7044, 0.4783, 0.0350],
[0.5568, 0.9175, 0.0763],
[0.0876, 0.1651, 0.2776],
[0.4239, 0.8341, 0.3693],
[0.8771, 0.0976, 0.8186]])
Values in the last column are now in ascending order.
| a = <your tensor>
ind = a[:,-1].argsort(dim=0)
a[ind]
argsort "Returns the indices that sort a tensor along a given dimension in ascending order by value." So, basically, you get sorting indices for the last column and reorder the rows according to these indices.
| https://stackoverflow.com/questions/73389603/ |
PyTorch Lightning - How to automatically reload last checkpoint when loss unexpectedly spikes? | I'm facing a problem where during training, my loss will unexpectedly spike, like so:
When this happens, I want to automatically reload the last checkpoint, reset the optimizer and resume training. How do I do this?
Edit: I tried training with fp64 precision and the unstable learning problem still occurred albeit later in training.
| You could write a callback that checks for the spike and loads the last checkpoint. Please let me know if this helps!
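A minimal sketch of such a callback. The class name, spike threshold, and checkpoint path are assumptions; it also assumes training_step returns a dict with a "loss" key, and the exact hook signature depends on your Lightning version:
import collections
import torch
import pytorch_lightning as pl

class SpikeRecovery(pl.Callback):
    def __init__(self, ckpt_path, spike_factor=5.0):
        self.ckpt_path = ckpt_path
        self.spike_factor = spike_factor
        self.best_loss = float("inf")

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        loss = float(outputs["loss"])
        if loss < self.best_loss:
            self.best_loss = loss
        elif loss > self.spike_factor * self.best_loss:
            # reload the last checkpoint's weights ...
            ckpt = torch.load(self.ckpt_path, map_location=pl_module.device)
            pl_module.load_state_dict(ckpt["state_dict"])
            # ... and reset the optimizer state (e.g. Adam moments)
            for opt in trainer.optimizers:
                opt.state = collections.defaultdict(dict)
You would pass it to the Trainer via callbacks=[SpikeRecovery(...)], typically alongside a ModelCheckpoint that produces the checkpoint file.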
| https://stackoverflow.com/questions/73395943/ |
ImportError about Detectron2 | I was trying to do semantic segmentation using Detectron2, but some tricky errors occurred when I ran my program. It seems there might be some problems in my environment.
Does anyone know how to fix it?
ImportError: cannot import name 'is_fx_tracing' from 'torch.fx._symbolic_trace' (/home/eric/anaconda3/envs/detectron_env/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py)
| This seems to be an issue with the latest commit of detectron2; you can use a previous commit of detectron2 while installing to avoid this error.
pip install 'git+https://github.com/facebookresearch/detectron2.git@5aeb252b194b93dc2879b4ac34bc51a31b5aee13'
Update: the issue has since been resolved in the latest commit of detectron2.
| https://stackoverflow.com/questions/73408083/ |
ValueError: The following `model_kwargs` are not used by the model: ['encoder_outputs'] (note: typos in the generate arguments will also show up | When I try to run my code for the Donut for DocVQA model, I get the following error
"""Test"""
from donut import DonutModel
from PIL import Image
import torch
model = DonutModel.from_pretrained(
"naver-clova-ix/donut-base-finetuned-cord-v2")
if torch.cuda.is_available():
model.half()
device = torch.device("cuda")
model.to(device)
else:
model.encoder.to(torch.bfloat16)
model.eval()
image = Image.open(
"./src/png-report-page_capone_v1_August_2017_GMS_Balance_Sheet_0.png").convert("RGB")
output = model.inference(image=image, prompt="<s_cord-v2>")
print(output)
The error:
ValueError: The following `model_kwargs` are not used by the model: ['encoder_outputs'] (note: typos in the generate arguments will also show up in this list)
| Check the transformers library version.
For me, reinstalling version 4.21.3 worked.
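For example:
pip install transformers==4.21.3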
| https://stackoverflow.com/questions/73413237/ |
PyTorch Tensor methods to nn.Modules? | I'm programming some callable custom modules in PyTorch and I wanted to know if I'm doing it correctly. Here's an example scenario where I want to construct a module that takes a torch.Tensor as input, performs a learnable linear operation and outputs a diagonal covariance matrix to use in a multivariate distribution downstream.
class Exp(nn.Module):
def forward(self, x):
return x.exp()
class Diag(nn.Module):
def forward(self, x):
return x.diag_embed()
def init_model(input_size, output_size):
    logvar_module = nn.Linear(input_size, output_size)
diag_covariance_module = nn.Sequential(logvar_module, Exp(), Diag())
return diag_covariance_module
model = init_model(5, 5)
cov = model(some_input_tensor)
dist = MultivariateNormal(some_mean, cov)
I know that this works, but is it the right design pattern? How is one recommended to approach these modules?
| This looks like the correct design pattern.
Ideally, you would also write your main network as an nn.Module:
class Model(nn.Sequential):
def __init__(self, input_size, output_size):
logvar_module = nn.Linear(input_size, output_size)
super().__init__(logvar_module, Exp(), Diag())
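Used the same way as before (a quick check, assuming torch is imported):
model = Model(5, 5)
cov = model(torch.randn(2, 5))  # shape (2, 5, 5): a batch of diagonal matrices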
| https://stackoverflow.com/questions/73417157/ |
Local maximums of sub-tensors by index tensor | I have a tensor x of shape (1,n), and another index tensor d of shape (1,k). I’m trying to find the maximums of k sub-tensors
x[0:d[0]], x[d[0]:d[1]], x[d[1]:d[2]], ..., x[d[-2]: d[-1]]
So the output is a tensor of shape (1,k) with k local maximums. I can implement a for loop, but that’s too slow. Can I do it in parallel in PyTorch (or Numpy)?
| I found the answer thanks to user7138814. There is a segment_csr function in torch_scatter that does the job:
from torch_scatter import segment_csr
src = torch.randn(10, 6, 64)
indptr = torch.tensor([0, 2, 5, 6])
indptr = indptr.view(1, -1) # Broadcasting in the first and last dim.
out = segment_csr(src, indptr, reduce="sum")
print(out.size())
output: torch.Size([10, 3, 64])
For the original question (finding local maximums), pass reduce="max" instead of "sum".
| https://stackoverflow.com/questions/73420220/ |
How pytorch loss connect to model parameters? | I know that in PyTorch the optimizer is connected to the model's parameters by
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
and inside the training loop we have to do the backward and update the gradient by execute this two lines
loss.backward()
optimizer.step()
But how does the loss actually connect to the model parameters? We only define the connection between the optimizer and the model, and never define a connection between the loss and the model.
And when we execute loss.backward(), how does PyTorch know that we will do backpropagation for our model?
I put the full code here for the context
import torch
import torch.nn as nn
X = torch.tensor([[1], [2], [3], [4]], dtype=torch.float32)
Y = torch.tensor([[2], [4], [6], [8]], dtype=torch.float32)
X_test = torch.tensor([[5]], dtype=torch.float32)
n_sample, n_feature = X.shape
input_size = n_feature
output_size = n_feature
model = nn.Linear(input_size, output_size)
# Training
learning_rate = 0.01
n_iters = 100
loss = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# print(model(X_test))
print(f"Prediction before training f(5) = {model(X_test).item():.3f}")
for epoch in range(n_iters):
y_pred = model(X)
# compute loss
l = loss(Y, y_pred)
# gradient
l.backward()
# update gradient
optimizer.step()
# zero gradient
optimizer.zero_grad()
if epoch % 10 == 0:
w, b = model.parameters()
# print(model.parameters())
print(f"Epoch {epoch + 1}, w = {w[0][0].item():.3f}, loss = {l:.5f}")
print(f"Prediction after training f(5) = {model(X_test).item():.3f}")
| Q: When we execute loss.backward(), how does PyTorch know that we will do backpropagation for our model?
In the line l = loss(Y, y_pred), the predictions are used to calculate the loss. This effectively connects the model parameters with the loss such that loss.backward() can do the backpropagation for the network to compute the parameter gradients. Note that the tensors in model() have requires_grad=True, while this is not the case for the labels which do not need gradients. Through l.backward(), each tensor value that went into the loss calculation and requires a gradient (in our case that's the model parameters) is assigned a gradient. See the documentation for the grad attribute.
Q: But how does the loss actually connect to the model parameters?
The statement optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) connects optimizer and model parameters.
Since the gradients computed through loss.backward() become attributes of the model parameters, they are accessible to the optimizer.
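You can see this directly by inspecting the grad attribute after calling backward (using the names from your script):
l = loss(Y, model(X))  # the graph runs through the model's parameters
l.backward()
for p in model.parameters():
    print(p.grad)      # the populated gradients that optimizer.step() will use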
| https://stackoverflow.com/questions/73423703/ |
Instance Norm: ValueError: Expected more than 1 spatial element when training, got input size torch.Size([128, 512, 1, 1]) | I have a ResNet-18 working well. Now, I want to use InstanceNorm as normalization layer instead of BatchNorm, so I changed all the batchnorm layers in this way:
resnet18.bn1 = nn.InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer1[0].bn1 = nn.InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer1[0].bn2 = nn.InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer1[1].bn1 = nn.InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer1[1].bn2 = nn.InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer2[0].bn1 = nn.InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer2[0].bn2 = nn.InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer2[1].bn1 = nn.InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer2[1].bn2 = nn.InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer2[0].downsample[1] = nn.InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer3[0].bn1 = nn.InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer3[0].bn2 = nn.InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer3[1].bn1 = nn.InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer3[1].bn2 = nn.InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer3[0].downsample[1] = nn.InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer4[0].bn1 = nn.InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer4[0].bn2 = nn.InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer4[1].bn1 = nn.InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer4[1].bn2 = nn.InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.layer4[0].downsample[1] = nn.InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
resnet18.fc = nn.Linear(in_features=512, out_features=10, bias=True)
All the num_features are equal to the BatchNorm2d ones; I just changed BatchNorm2d into InstanceNorm2d. So my ResNet-18 is this:
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
(bn1): InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): InstanceNorm2d(64, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): InstanceNorm2d(128, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): InstanceNorm2d(256, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(bn1): InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
)
)
(1): BasicBlock(
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): InstanceNorm2d(512, eps=1e-05, momentum=0.9, affine=True, track_running_stats=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Linear(in_features=512, out_features=10, bias=True)
)
I get the error in the title. Do you know how I can fix it?
| I was using CIFAR-10 with size 32x32. If I resize the images to 64x64, it works. This is because ResNet-18 reduces the feature maps down to 1x1, and as the title says, InstanceNorm wants spatial dimensions (H and W) > 1.
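For example, with torchvision transforms (a minimal sketch; adapt it to your existing CIFAR-10 pipeline):
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((64, 64)),  # keeps the spatial dims > 1 at the last block
    transforms.ToTensor(),
])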
| https://stackoverflow.com/questions/73426072/ |
Lightning Flash error with SemanticSegmentationData (NameError: name 'K' is not defined) | I'm trying to perform an image segmentation task with Colab and Lightning Flash.
I'm installing Flash with:
!pip install lightning-flash
I'm trying to instantiate a Lightning Flash SemanticSegmentationData using the from_folders method like this:
datamodule = SemanticSegmentationData.from_folders(
train_folder=x_train_dir,
train_target_folder=y_train_dir,
val_folder=x_valid_dir,
val_target_folder=y_valid_dir,
test_folder=x_test_dir,
test_target_folder=y_test_dir,
transform_kwargs=dict(image_size=(256, 256)),
num_classes=1,
batch_size=16,
)
But I'm getting this error:
/usr/local/lib/python3.7/dist-packages/flash/image/segmentation/input_transform.py in train_per_sample_transform(self)
49 [DataKeys.INPUT, DataKeys.TARGET],
50 KorniaParallelTransforms(
---> 51 K.geometry.Resize(self.image_size, interpolation="nearest"), K.augmentation.RandomHorizontalFlip(p=0.5)
52 ),
53 )
NameError: name 'K' is not defined
How can I solve this problem?
| Initially, I was trying to solve the problem this way:
!pip install kornia
import kornia as K
This didn't solve the problem. Then, I opened an issue on their GitHub: Issue #1423. With help from https://github.com/krshrimali, I discovered that just installing kornia would solve the problem (importing it in my own code wasn't enough; the package just needs to be installed so Flash can import it internally).
So, the solution is just:
!pip install kornia
| https://stackoverflow.com/questions/73427839/ |
Is it possible to perform step according to batch size in pytorch? | I am iterating over training samples in batches; however, the last batch always returns fewer samples.
Is it possible to specify step size in torch according to the current batch length?
For example, most batches are of size 64; the last batch has only 6 samples.
If I do the usual routine:
optimizer.zero_grad()
loss.backward()
optimizer.step()
It seems that the last 6 samples carry the same weight when updating the gradients as the 64-sample batches, but in fact they should only carry about 1/10 of the weight due to having fewer samples.
In MXNet I could specify the step size accordingly, but I don't know how to do it in torch.
| You can define a custom loss function and then e.g. reweight it based on batch size
import torch.nn as nn

def reweighted_cross_entropy(my_outputs, my_labels):
    # compute batch size
    my_batch_size = my_outputs.size()[0]
    original_loss = nn.CrossEntropyLoss()  # default reduction='mean'
    loss = original_loss(my_outputs, my_labels)
    # reweight accordingly: batch mean * batch size = per-sample sum
    return my_batch_size * loss
if you are using something like gradient descent then it is easy to see that
(1/10 * lr) * grad[loss] = lr * grad[(1/10) * loss]
so reweighting the loss will be equivalent to reweighting your learning rate. This won't be exactly true for more complex optimisers, but it can be good enough in practice.
| https://stackoverflow.com/questions/73434706/ |
How to apply random forests to the output produced by Bert? | I'm trying to get the output embeddings of a RoBERTa model, so I can train a random forests classifier on it for text classification (sentiment analysis). The original dataset this is based on is 500 news articles that each have a left/center/right bias rating. 80% of this dataset is training data, the other 20% is test data.
I run the following code for my training set:
# Tokenize sentences van trainingset
encoded_input = tokenizer(X_train, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output,encoded_input['attention_mask'])
## Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=3, dim=1)
start = True
numpy_emb = []
if not start:
np_emb = sentence_embeddings.cpu().detach().numpy()
numpy_emb = np.vstack([numpy_emb, np_emb])
else:
start = False
numpy_emb = np_emb = sentence_embeddings.cpu().detach().numpy()
This gives me numpy_emb, which I think contains the embeddings that the RoBERTa model outputs.
When I print it, it gives me:
tensor([[ 0.5329, -0.1224],
[ 0.5409, -0.0730],
[ 0.4594, -0.1282],
[ 0.5116, -0.0769],
[ 0.4861, -0.0212],
[ 0.5246, -0.0560],
[ 0.5555, -0.0962],
[ 0.4779, -0.0551],
[ 0.5428, -0.0904],
[ 0.5939, -0.0504],
[ 0.5219, -0.1342],
[ 0.4672, -0.0936],
[ 0.5051, -0.0518],
[ 0.5536, -0.1016],
[ 0.4761, -0.0736],
[ 0.4754, -0.0991],
[ 0.5613, -0.0541],
[ 0.5155, 0.0303],
[ 0.6053, 0.0214],
[ 0.4766, -0.1019],
[ 0.4262, -0.0869],
[ 0.3871, -0.0756],
[ 0.5048, -0.0067],
[ 0.5425, -0.1303],
[ 0.5020, -0.0715],
...
[ 0.5462, -0.0686],
[ 0.5476, -0.1465],
[ 0.4968, -0.0354],
[ 0.5586, -0.1234],
[ 0.5725, -0.0685]])
I then repeat this process for my test set as well, giving me another set of embeddings.
Then I try to train a random forests classifier using the embeddings given by the training set. But when I try to predict using the embeddings from my test set, I get very random results. Accuracy goes as low as 24% and as high as 58%. Is this because of the small amount of data that I have? Or is there something else I'm doing wrong?
I also have the suspicion that I can't properly link the output embedding to their respective label. Which would also explain the random results I get.
Code for random forests that I used:
from sklearn.ensemble import RandomForestClassifier
text_classifier = RandomForestClassifier(n_estimators=100, random_state=0)
text_classifier.fit(numpy_emb, y_train)
predictions = text_classifier.predict(numpy_emb_test)
#confusion matrix
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
print(confusion_matrix(y_test,predictions))
print(classification_report(y_test,predictions))
| This shape doesn't look like a proper embedding. For classification purposes, a usual approach with encoder-only models is just supplying the last hidden state (here, the first token's representation) as the embedding for the classifier, for example:
features = model_output[0][:,0,:].numpy()
text_classifier.fit(features, y_train)
| https://stackoverflow.com/questions/73439107/ |
How to add a channel layer for a 3D image for 3D CNN | I am working on a medical dataset of 3D images. image.shape gives me (512,512,241); I assume this is height, width, depth. But when I run a 3D CNN I get this error: Given groups=1, weight of size [32, 3, 3, 3, 3], expected input[1, 1, 241, 512, 512] to have 3 channels, but got 1 channels instead. What should be done? I am using PyTorch.
Thank you in advance.
| Your network expects each slice of the 3d volume to have three channels (RGB). You can simply convert grayscale (single channel) data to "fake" RGB by duplicating the single channel you have:
x_gray = ... # your tensor of shape batch-1-241-512-512
x_fake_rgb = x_gray.expand(-1, 3, -1, -1, -1)
See expand for more details.
| https://stackoverflow.com/questions/73441205/ |
AttributeError: module 'dill' has no attribute 'extend' | I have just installed Pytorch, using:
(base) C:\>pip3 install torch torchvision torchaudio
Requirement already satisfied: torch in c:\users\Emil\appdata\roaming\python\python38\site-packages (1.9.0)
Requirement already satisfied: torchvision in c:\users\Emil\appdata\roaming\python\python38\site-packages (0.10.0)
Requirement already satisfied: torchaudio in c:\users\Emil\anaconda3\lib\site-packages (0.12.1)
Requirement already satisfied: typing-extensions in c:\users\Emil\anaconda3\lib\site-packages (from torch) (3.7.4.3)
Requirement already satisfied: numpy in c:\users\Emil\anaconda3\lib\site-packages (from torchvision) (1.22.2)
Requirement already satisfied: pillow>=5.3.0 in c:\users\Emil\anaconda3\lib\site-packages (from torchvision) (8.0.1)
Then, I tried to import torch in Spyder but received the following error:
import torch
C:\Users\Emil\AppData\Roaming\Python\Python38\site-packages\torch\package\_mock_zipreader.py:17: UserWarning: Failed to initialize NumPy: numpy.core.multiarray failed to import (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:67.)
_dtype_to_storage = {data_type(0).dtype: data_type for data_type in _storages}
Traceback (most recent call last):
File "<ipython-input-1-eb42ca6e4af3>", line 1, in <module>
import torch
File "C:\Users\Emil\AppData\Roaming\Python\Python38\site-packages\torch\__init__.py", line 705, in <module>
import torch.utils.data
File "C:\Users\Emil\AppData\Roaming\Python\Python38\site-packages\torch\utils\data\__init__.py", line 28, in <module>
from torch.utils.data import datapipes
File "C:\Users\Emil\AppData\Roaming\Python\Python38\site-packages\torch\utils\data\datapipes\__init__.py", line 1, in <module>
from . import iter
File "C:\Users\Emil\AppData\Roaming\Python\Python38\site-packages\torch\utils\data\datapipes\iter\__init__.py", line 8, in <module>
from torch.utils.data.datapipes.iter.callable import \
File "C:\Users\Emil\AppData\Roaming\Python\Python38\site-packages\torch\utils\data\datapipes\iter\callable.py", line 13, in <module>
dill.extend(use_dill=False)
AttributeError: module 'dill' has no attribute 'extend'
What can I do to overcome this error?
| Based on Mike McKerns' comment, I successfully used:
pip install dill --upgrade
| https://stackoverflow.com/questions/73443382/ |
How to flush GPU memory using CUDA on WSL2 | I have interrupted the training of the model in PyTorch on CUDA, which I've run on Windows Subsystem for Linux 2 (WSL2). The dedicated GPU memory of NVIDIA GeForce RTX 3080Ti was not flushed.
What I have tried:
gc.collect() and torch.cuda.empty_cache() does not resolve the problem (reference)
When running numba.cuda.select_device(0) to potentially cuda.close(), the notebook hangs (reference)
After running nvidia-smi to potentially reset the GPU (reference), the command prompt hangs
Win + Ctrl + Shift + B to reset the graphics stack in Windows does not help (reference)
Restarting the notebook kernel as well as restarting the notebook server does not help
Physical reset is not available
UPDATE:
Running nvidia-smi in the command prompt on Windows (not on WSL2) yields the following (the screenshot of the nvidia-smi output is not included here):
| I don't know your actual environment, but suppose that you use an Anaconda virtual environment on Windows.
On cmd, nvidia-smi shows the running Python processes and their GPU memory usage.
Check the PID of the Python process by its name (e.g. envs\psychopy\python.exe).
Then, on cmd: taskkill /f /PID xxxx
This could help.
If you find doing this by hand annoying, you can run the script from the command prompt instead of a notebook; the GPU memory is then flushed automatically when the process exits.
| https://stackoverflow.com/questions/73447464/ |
When using torch.autocast, how do I force individual layers to float32 | I'm trying to train a model in mixed precision. However, I want a few of the layers to be in full precision for stability reasons. How do I force an individual layer to be float32 when using torch.autocast? In particular, I'd like for this to be onnx compileable.
Is it something like:
with torch.cuda.amp.autocast(enabled=False, dtype=torch.float32):
out = my_unstable_layer(inputs.float())
Edit:
Looks like this is indeed the official method. See the torch docs.
| I think the motivation of torch.autocast is to automate the reduction of precision (not the increase).
If you have functions that need a particular dtype, you should consider using custom_fwd:
import torch
@torch.cuda.amp.custom_fwd(cast_inputs=torch.complex128)
def get_custom(x):
print(' Decorated function received', x.dtype)
def regular_func(x):
print(' Regular function received', x.dtype)
get_custom(x)
x = torch.tensor(0.0, dtype=torch.half, device='cuda')
with torch.cuda.amp.autocast(False):
print('autocast disabled')
regular_func(x)
with torch.cuda.amp.autocast(True):
print('autocast enabled')
regular_func(x)
autocast disabled
Regular function received torch.float16
Decorated function received torch.float16
autocast enabled
Regular function received torch.float16
Decorated function received torch.complex128
Edit: Using torchscript
I am not sure how much you can rely on this, due to a comment in the documentation. However, the comment is apparently outdated.
Here is an example where I trace the model with autocast enabled, freeze it, and then use it; the value is indeed cast to the specified type:
class Cast(torch.nn.Module):
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float64)
def forward(self, x):
return x
x = torch.tensor(0.0, dtype=torch.half, device='cuda')
with torch.cuda.amp.autocast(True):
    model = torch.jit.trace(Cast().eval(), x)
    model = torch.jit.freeze(model)
print(model(x).dtype)
torch.float64
But I suggest you to validate this approach before using it for a serious application.
| https://stackoverflow.com/questions/73449288/ |
Cost of back-propagation for subset of DNN parameters | I am using PyTorch to evaluate gradients of a feed-forward network, but only for a subset of parameters, related to the first two layers.
Since backpropagation is carried out backwards layer by layer, I wonder: why is it computationally faster than evaluating gradients of the whole network?
| PyTorch builds a computation graph for backward propagation that only contains the minimum nodes and edges needed to get the accumulated gradient for the leaves that require a gradient. Even if the first two layers require gradients, there are many tensors (intermediate tensors or frozen parameter tensors) that are unused, and these are cut from the backward graph. In addition, the built-in AccumulateGrad function that stores the gradients in the .grad attribute is called fewer times, reducing the total computation time too.
As an example, consider an "AddBackward" node where, for instance, A is an intermediate tensor computed with the first two layers and B comes from the 3rd (constant) layer and can be ignored.
Another example: a matrix-matrix product (an "MmBackward" node) that uses an intermediate tensor that does not depend on the first 2 layers. In this case the tensor itself is required to compute the backprop, but the "previous" tensors that were used to compute it can be ignored in the graph.
To visualize the sub-graph that is actually computed (and compare when the model is unfrozen), you can use torchviz.
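As an illustrative sketch (not from the original answer): if you freeze everything except the first layer, the weight-gradient kernels for the frozen parameters are skipped, even though the backward pass still traverses the later layers to reach the earlier ones:
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 10))

# keep gradients only for the first linear layer, freeze the rest
for p in model.parameters():
    p.requires_grad = False
for p in model[0].parameters():
    p.requires_grad = True

loss = model(torch.randn(32, 128)).sum()
loss.backward()

print(model[0].weight.grad is not None)  # True, a gradient was accumulated
print(model[2].weight.grad is None)      # True, no weight gradient was computed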
| https://stackoverflow.com/questions/73464737/ |
PyTorch: How to create a Parameter without specifying the dimension | Say I want to define a module. In this module, the __init__() function will create a Parameter called self.weight without knowing the input_dim of the module. My question is: how can I expand self.weight and initialize it when I first call the forward() function?
For example, I want my module looks like this:
class MyModel(torch.nn.Module):
def __init__(self, out_dim):
super(MyModel, self).__init__()
# I don't know the input_dim yet
self.weight = torch.nn.Parameter(torch.FloatTensor(None, out_dim))
self.init_weight = False
def init_parameters(self, in_dim):
# what should I do in this function?
# Is this correct?
self.weight = self.weight.expand(in_dim, -1)
torch.nn.init.xavier_normal_(self.weight)
self.init_weight = True
def forward(self, X):
if not self.init_weight:
# first call, so now I can initialize the weight since I know the input_dim
self.init_parameters(X.shape[1])
# do some forward ops
return torch.sigmoid(torch.matmul(X, self.weight))
And my training code looks like this (The parameter self.weight is passed to the optimizer after I create the model):
def train(X_train, y_train):
model = MyModel(y_train.shape[1])
optimize = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()
for epoch in range(10000):
optimize.zero_grad()
prediction = model(X_train)
loss = loss_fn(prediction, y_train)
loss.backward()
optimize.step()
| After all, it works for me using the way I explained in the comments: allocating the weight parameter right in the init_parameters function.
import torch
class MyModel(torch.nn.Module):
def __init__(self, out_dim):
super(MyModel, self).__init__()
self.weight = torch.nn.Parameter(torch.FloatTensor([0.0]))
self.out_dim = out_dim
self.init_weight = False
def init_parameters(self, in_dim):
self.weight = torch.nn.Parameter(torch.FloatTensor(in_dim, self.out_dim), requires_grad=True)
torch.nn.init.xavier_normal_(self.weight)
self.init_weight = True
def forward(self, X):
if not self.init_weight:
# first call, so now I can initialize the weight since I know the input_dim
self.init_parameters(X.shape[1])
# do some forward ops
result = torch.sigmoid(torch.matmul(X, self.weight))
print(X.shape, result.shape)
return result
def train(X_train, y_train):
model = torch.nn.Sequential(MyModel(out_dim=100), MyModel(out_dim=20))
optimize = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()
for epoch in range(10000):
#print('.', end='')
optimize.zero_grad()
prediction = model(X_train)
loss = loss_fn(prediction, y_train)
loss.backward()
optimize.step()
batch_size, in_dim, out_dim = 100, 5, 20
X_train=torch.randn((batch_size, in_dim))
y_train=torch.randn((batch_size, out_dim))
train(X_train, y_train)
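As a side note (an addition, not part of the original answer): recent PyTorch versions also ship torch.nn.LazyLinear, which infers in_features automatically from the first forward pass and avoids this manual bookkeeping:
import torch

lazy_model = torch.nn.Sequential(
    torch.nn.LazyLinear(out_features=100),  # in_features inferred on first forward
    torch.nn.Sigmoid(),
)
x = torch.randn(4, 7)
print(lazy_model(x).shape)  # torch.Size([4, 100])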
| https://stackoverflow.com/questions/73468424/ |
torch suppress to kth largest values | I have the following function, which works, but just not for half-precision values (I get a NotImplementedError for kthvalue).
def suppress_small_probabilities(probabilities: torch.FloatTensor, k: int) -> torch.FloatTensor:
kth_largest, _ = (-probabilities).kthvalue(k, dim=-1, keepdim=True)
return probabilities * (probabilities >= -kth_largest)
How would you do the equivalent without using kthvalue? I'm guessing topk has something to do with it, but I want to suppress the smaller values. probabilities is of size batch_size x 1000.
| Implement your own topk, e.g.
def mytopk(xs: Tensor, k: int) -> Tensor:
mask = torch.zeros_like(xs)
batch_idx = torch.arange(0, len(xs))
for _ in range(k):
_, index = torch.where(mask == 0, xs, -1e4).max(-1)
mask[(batch_idx, index)] = 1
return mask
This will return a boolean mask tensor where the row-wise top-k elements will have value 1, rest 0.
Then use the mask to index your original tensor, e.g.
xs = torch.rand(3, 5, dtype=torch.float16)
# tensor([[0.0626, 0.9620, 0.5596, 0.4423, 0.1932],
# [0.5289, 0.0857, 0.7802, 0.7730, 0.4807],
# [0.8272, 0.5016, 0.1169, 0.4372, 0.1843]], dtype=torch.float16)
mask = mytopk(xs, 2)
# tensor([[0., 1., 1., 0., 0.],
# [0., 0., 1., 1., 0.],
# [1., 1., 0., 0., 0.]])
top_only = torch.where(mask == 1, xs, 0)
# tensor([[0.0000, 0.9620, 0.5596, 0.0000, 0.0000],
# [0.0000, 0.0000, 0.7802, 0.7730, 0.0000],
# [0.8271, 0.5016, 0.0000, 0.0000, 0.0000]], dtype=torch.float16)
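As a shorter alternative (a sketch, assuming your build supports topk and scatter for float16, which is typically the case on CUDA), you can reuse xs from above and skip the custom loop entirely:
vals, idx = xs.topk(2, dim=-1)
top_only = torch.zeros_like(xs).scatter(-1, idx, vals)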
| https://stackoverflow.com/questions/73486637/ |
Writing a pytorch neural net class that has functions for both model fitting and prediction | I want a PyTorch neural net to predict y using x where, for example,
x = torch.tensor([[6,2],[5,2],[1,3],[7,6]]).float()
y = torch.tensor([1,5,2,5]).float()
For this, I have written the following PyTorch class which can fit y using x. But, I am not able to extend the code to predict using new values of x, x_test. Can somebody please guide me in this regard?
import torch
import torch.nn as nn
from torch.optim import SGD
class MyNeuralNet(nn.Module):
def __init__(self):
super().__init__()
self.layer1 = nn.Linear(2,4,bias=True)
self.layer2 = nn.Linear(4,1,bias=True)
self.loss = nn.MSELoss()
def fit(self,x,y):
def forward(self,x):
x = self.layer1(x)
x = self.layer2(x)
return x.squeeze()
def compile(self,x):
forward(self,x)
opt = SGD(self.parameters(),lr=0.01) # parameters from first forward pass
compile(self,x)
losses = []
for _ in range(100):
opt.zero_grad() # flush previous epoch's gradient
res = forward(self,x)
loss_value = self.loss(res,y)
loss_value.backward() # compute gradient
opt.step() # Perform iteration using gradient above
losses.append(loss_value.item())
y_train_hat = forward(self,x)
return y_train_hat.detach().numpy()
def predict(self,x_test):
"""
need help in this part
"""
# y_test_hat = forward(self,x_test)
# return y_test_hat.detach().numpy()
I changed the code provided by Ivan. The following is the working version of the code:
class MyNeuralNet(nn.Module):
def __init__(self):
super().__init__()
self.layer1 = nn.Linear(2, 4, bias=True)
self.layer2 = nn.Linear(4, 1, bias=True)
self.loss = nn.MSELoss()
self.compile_()
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
return x.squeeze()
def fit(self, x, y):
losses = []
for epoch in range(100):
## Inference
res = self(x) #instead of self(self,x)
loss_value = self.loss(res,y)
## Backpropagation
self.opt.zero_grad() # flush previous epoch's gradient; this should be the first step, I think
loss_value.backward() # compute gradient
self.opt.step() # Perform iteration using gradient above
## Logging
losses.append(loss_value.item())
def compile_(self):
self.opt = SGD(self.parameters(), lr=0.01)
def predict(self,x_test):
self.eval()
y_test_hat = self(x_test) #instead of forward(self,x_test)
return y_test_hat.detach().numpy()
self.train()
x = torch.tensor([[6,2],[5,2],[1,3],[7,6]]).float()
y = torch.tensor([1,5,2,5]).float()
model = MyNeuralNet()
model.fit(x,y)
model.predict(x)
Is this correct?
| Here is a minimal example with some modifications carried out to your code:
class MyNeuralNet(nn.Module):
def __init__(self):
super().__init__()
self.layer1 = nn.Linear(2, 4, bias=True)
self.layer2 = nn.Linear(4, 1, bias=True)
self.loss = nn.MSELoss()
self.compile_()
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
return x.squeeze()
def fit(self, x, y):
losses = []
for epoch in range(100):
## Inference
res = self(self,x)
loss_value = self.loss(res,y)
## Backpropagation
loss_value.backward() # compute gradient
self.opt.zero_grad() # flush previous epoch's gradient
self.opt.step() # Perform iteration using gradient above
## Logging
losses.append(loss_value.item())
def compile_(self, x):
self.opt = SGD(self.parameters(), lr=0.01)
def predict(self, x_test):
self.eval()
y_test_hat = self(x_test)
return y_test_hat.detach().numpy()
self.train()
Which can be used like this for training:
>>> model = MyNeuralNet()
>>> model.fit(x, y) # fit on some data (x, y)
Then perform inference with some other data x_test:
>>> model.predict(x_test)
| https://stackoverflow.com/questions/73493198/ |
How to feed different pad IDs to a collate function? | I usually use a custom collate_fn and use it as an argument when defining my DataLoader. It usually looks something like:
def collate_fn(batch):
max_len = max([len(b['input_ids']) for b in batch])
    input_ids = [b['input_ids'] + [0] * (max_len - len(b['input_ids'])) for b in batch]
labels = [b['label'] for b in batch]
return input_ids
As you can see, I'm using 0 for my padding sequence. What I'm wondering is, since language models and their tokenizers use different IDs for padding tokens, is there a way that I can make the collate_fn flexible to take that into account?
| I was able to make a workaround by making a Trainer class and making the collate_fn a method. After that I was able to do something like self.pad_token_id = tokenizer.pad_token_id and modify the original collate_fn to use self.pad_token_id rather than a hardcoded value.
I'm still curious if there's any way to do this while keeping collate_fn a top-level function though. For example if there would be any way to pass an argument or something.
<Original>
def collate_fn(batch):
max_len = max([len(b['input_ids']) for b in batch])
input_ids = [b['input_ids'] + ([0] * (max_len - len(b['input_ids']))) for b in batch]
return input_ids
class Trainer():
def __init__(self, tokenizer, ...):
...
def train(self):
train_dataloader = DataLoader(features, collate_fn=collate_fn, ...)
...
<Workaround>
class Trainer():
def __init__(self, tokenizer, ...):
self.pad_token_id = tokenizer.pad_token_id
...
def collate_fn(self, batch):
max_len = max([len(b['input_ids']) for b in batch])
input_ids = [b['input_ids'] + ([self.pad_token_id] * (max_len - len(b['input_ids']))) for b in batch]
return input_ids
def train(self):
train_dataloader = DataLoader(features, collate_fn=self.collate_fn, ...)
...
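For completeness, one way to keep collate_fn a top-level function and still pass the pad ID as an argument is functools.partial (a sketch under the same assumptions as above):
from functools import partial

def collate_fn(batch, pad_token_id):
    max_len = max(len(b['input_ids']) for b in batch)
    return [b['input_ids'] + [pad_token_id] * (max_len - len(b['input_ids'])) for b in batch]

train_dataloader = DataLoader(features, collate_fn=partial(collate_fn, pad_token_id=tokenizer.pad_token_id), ...)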
| https://stackoverflow.com/questions/73494999/ |
Ensure that every column in a matrix has at least `e` non-zero elements | I would like to ensure that each column in a matrix has at least e non-zero elements, and for each column that does not, randomly replace zero-valued elements with the value y until the column contains e non-zero elements. Consider the following matrix, where columns have 0, 1 or 2 non-zero elements. After the operation, each column should have at least e non-zero elements (existing values kept, new ones set to y).
before
tensor([[0, 7, 0, 0],
[0, 0, 0, 0],
[0, 1, 0, 4]], dtype=torch.int32)
after, e = 2
tensor([[y, 7, 0, y],
[y, 0, y, 0],
[0, 1, y, 4]], dtype=torch.int32)
I have a very slow and naive loop-based solution that works:
def scatter_elements(x, e, y):
for i in range(x.shape[1]):
col = x.data[:, i]
num_connections = col.count_nonzero()
to_add = torch.clip(e - num_connections, 0, None)
indices = torch.where(col == 0)[0]
perm = torch.randperm(indices.shape[0])[:to_add]
col.data[indices[perm]] = y
Is it possible to do this without loops? I've thought about using torch.scatter and generate an index array first, but since the number of elements to be added varies per column, I see no straightforward way to use it. Any suggestions or hints would be greatly appreciated!
Edit: swapped indices and updated title and description based on comment.
| In the case where you care only that each column has at least e elements and not EXACTLY e elements, you can do it without a loop. The key is that in this case, we can create an array with every non-zero value replaced, and then sample e values from this array for each column.
For convenience let x.shape = [a,b]
1. Create an array replace with every value replaced (i.e. every 0 replaced with y).
2. Create a random array of the same size as x.
3. Use torch.topk to get the k largest random numbers per column. This is used to get k random indices for each column (in your case k = e). Provided that x is a non-negative integer tensor, you can add x to the random array before the topk operation to ensure that the existing non-zero elements are selected first; this ensures that no more than e connections are added.
4. Index x with these per-column indices and set the selected values to the corresponding values from replace.
def scatter_elements(x,e,y):
x = x.float()
# 1. replace has same shape as x and has all 0s replaced with y
replace = torch.where(x > 0 , x, torch.ones(x.shape)*y)
# 2-3. get random indices per column
randn = torch.rand(x.shape)
if True: # True if you don't want the modification to ever itself assign more than e elements in a column a non-zero value
randn += x # assumes x is non-negative integer
ind = torch.topk(randn,e,dim = 0)[1] # first return is values, second return is indices
# create a second index to indicate which column each index in ind corresponds to
col_ind = torch.arange(x.shape[1]).unsqueeze(0).expand(ind.shape)
# 4. Index x with ind and col_ind and set these values to the corresponding values in replace
ind = ind.reshape(-1) # flatten into 1D array so we can use to index row
col_ind = col_ind.reshape(-1) # flatten into 1D array so we can use to index column
x[ind,col_ind] = replace[ind,col_ind]
return x
In my limited timing tests, the vectorized solution was about 5-6x faster than the original looping solution.
| https://stackoverflow.com/questions/73501398/ |
Multiply each tensor with a value from a another tensor | I have two tensors:
import torch
a = torch.randn((2,3,5))
b = torch.tensor([[2.0, 1.0, 2.0],[0.5, 1.0, 1.0]])
And I want to multiply each element in the last dimension of a with the corresponding element in b. That means when a is:
tensor([[[ 1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]],
[[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]]])
the result should be:
tensor([[[ 2, 4, 6, 8, 10],
[1, 2, 3, 4, 5],
[ 2, 4, 6, 8, 10]],
[[0.5, 1.0, 1.5, 2.0, 2.5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]]])
How can I do that?
| All I needed to do was add a dimension:
a * b.unsqueeze(-1)
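A quick check of the shapes involved:
import torch

a = torch.ones(2, 3, 5)
b = torch.tensor([[2.0, 1.0, 2.0], [0.5, 1.0, 1.0]])
print(b.unsqueeze(-1).shape)        # torch.Size([2, 3, 1]), broadcasts over the last dim
print((a * b.unsqueeze(-1)).shape)  # torch.Size([2, 3, 5])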
| https://stackoverflow.com/questions/73502352/ |
Giving less weight to data coming from another dataset that is noisy | I have two datasets, one with clean data and one with dirty data. I train a Roberta model on the clean dataset and then get predictions for the dirty dataset. Those predictions with a probability greater than 0.9 go to the clean dataset. I then retrain the Roberta model with this new dataset (clean + dirty moving to clean).
For the retraining I am using the MAE loss function (more robust to noisy labels) and I use weights to give less value to the data that passes from the dirty to the clean dataset, as follows:
loss = torch.mean(torch.abs(y_true - y_pred) * weights)
Initially I am using an arbitrary weight of 0.5 for all the dirty data that gets passed into the clean dataset. However, I would like to assign them a weight in a more academic way, not so arbitrary.
How can I do that?
| One way to choose the weight is to base it on your confidence in the dirty data and assign the weight accordingly. For example, if you think that 90% of the dirty data is labeled correctly, then choosing 0.9 as the weight for the noisy data is a reasonable option.
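A minimal sketch of wiring such a confidence weight into your loss (the is_pseudo mask and the 0.9 value are illustrative assumptions):
import torch

y_true, y_pred = torch.randn(8), torch.randn(8)
is_pseudo = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1], dtype=torch.bool)  # samples moved from dirty to clean

# clean samples keep full weight, pseudo-labelled ones get the confidence weight
weights = torch.where(is_pseudo, torch.full_like(y_true, 0.9), torch.ones_like(y_true))
loss = torch.mean(torch.abs(y_true - y_pred) * weights)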
Additionally, there is a whole literature on learning from noisy labels, you can check this survey for more information: https://arxiv.org/abs/2007.08199
| https://stackoverflow.com/questions/73512467/ |
ValueError: too many values to unpack using sum return function | Attempting to implement this Deep Embedded Clustering GitHub algorithm.
def acc(y_pred, y_target):
D = max(y_pred.max(), y_target.max()) + 1
w = np.zeros((D, D), dtype=np.int64)
for i in range(y_pred.size):
w[y_pred[i], y_target[i]] += 1
ind = linear_assignment(w.max() - w)
return sum(w[i, j] for i, j in ind) * 1.0 / y_pred.size # <- Error Line
Just using the code within the repo, I am encountering this ValueError. I attempted to use the zip function to solve this, as well as assigning the output to a variable before returning.
| Solution if anyone else encounters a similar issue: the GitHub repo uses linear_assignment, which is deprecated and has been removed from updated scikit-learn packages.
I had used the solution accepted as the answer in this thread. However, as mentioned in that thread, scipy.optimize.linear_sum_assignment is not a perfect replacement for linear_assignment. This was causing my issue, as the output of the two functions is different.
You have two options: downgrade your scikit-learn package to a version which still supports linear_assignment, or comment out the import and use the old function definition as posted by InputBlackBoxOutput in the linked thread, and below.
def linear_assignment(cost_matrix):
try:
import lap
_, x, y = lap.lapjv(cost_matrix, extend_cost=True)
return np.array([[y[i], i] for i in x if i >= 0])
except ImportError:
from scipy.optimize import linear_sum_assignment
x, y = linear_sum_assignment(cost_matrix)
return np.array(list(zip(x, y)))
| https://stackoverflow.com/questions/73513994/ |
Defining my own gradient function for pytorch to use | I want to feed pytorch gradients manually. In my real problem, I have my own adjoint function that does not use tensors. Is there any way I can define my own gradient function for pytorch to use during optimization?
import numpy as np
import torch
# define rosenbrock function and gradient
x0 = np.array([0.1, 0.1])
a = 1
b = 5
def f(x):
return (a - x[0]) ** 2 + b * (x[1] - x[0] ** 2) ** 2
def jac(x):
dx1 = -2 * a + 4 * b * x[0] ** 3 - 4 * b * x[0] * x[1] + 2 * x[0]
dx2 = 2 * b * (x[1] - x[0] ** 2)
return np.array([dx1, dx2])
# create stochastic rosenbrock function and gradient
# (the crude analogy is that I have predefined stochastic
# forward and backward functions)
def f_rand(x):
return f(x) * np.random.uniform(0.5, 1.5)
def jac_rand(x): return jac(x) * np.random.uniform(0.5, 1.5)
x_tensor = torch.tensor(x0, requires_grad=False)
optimizer = torch.optim.Adam([x_tensor], lr=0.1)
# here, closure is fed f_rand to compute the gradient.
# I need to feed closure the gradient directly from jac_rand
def closure():
optimizer.zero_grad()
loss = f_rand(x_tensor)
loss.backward() # jac_rand(x)
return loss
for ii in range(200):
optimizer.step(closure)
print(x_tensor, f(x_tensor))
# tensor([1.0000, 1.0000], dtype=torch.float64, requires_grad=True) tensor(4.5799e-09, dtype=torch.float64, grad_fn=<AddBackward0>)
# ( this is the right answer, E[f(1, 1)] = 0 )
I've tried defining a custom function, but I can't get it to work. This is my best attempt so far:
import numpy as np
import torch
# define rosenbrock function and gradient
x0 = np.array([0.1, 0.1])
a = 1
b = 5
def f(x):
return (a - x[0]) ** 2 + b * (x[1] - x[0] ** 2) ** 2
def jac(x):
dx1 = -2 * a + 4 * b * x[0] ** 3 - 4 * b * x[0] * x[1] + 2 * x[0]
dx2 = 2 * b * (x[1] - x[0] ** 2)
return np.array([dx1, dx2])
# create stochastic rosenbrock function and gradient
def f_rand(x):
return f(x) * np.random.uniform(0.5, 1.5)
def jac_rand(x): return jac(x) * np.random.uniform(0.5, 1.5)
class custom_function(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
return f_rand(input)
@staticmethod
def backward(ctx, grad_output):
input, = ctx.saved_tensors
return grad_output * g_rand(input)
x_tensor = torch.tensor(x0, requires_grad=False)
optimizer = torch.optim.Adam([x_tensor], lr=0.1)
for ii in range(200):
print('x_tensor ', x_tensor)
optimizer.step(custom_function())
print(x_tensor, f(x_tensor))
It says:
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
| Not quite sure if this is exactly what you want but the method call loss.backward() computes gradients via pytorch's computational graph and stores the gradient values in the weight tensors themselves (in your case it's in x_tensor). And these gradients can be accessed via x_tensor.grad. However, if you don't want to use pytorch's gradient computing method using loss.backward(), then you can manually feed your gradients into your tensor's .grad attribute as follows:
with torch.no_grad():
    def closure():
        optimizer.zero_grad()
        loss = f_rand(x_tensor)
        # feed the externally computed gradient straight into the .grad attribute
        x_tensor.grad = torch.from_numpy(jac_rand(x_tensor.detach().numpy()))
        return loss
| https://stackoverflow.com/questions/73532345/ |
Where to find the docs of axis parameter of torch.sum? | I'm trying to read about the sum function of torch here. I noticed that the following works:
> print(torch.sum(torch.randint(0,2,(2,2)),axis=1))
tensor([1, 0])
But in the docs above I don't see explanation for axis.
In the signature of the function I see *:
torch.sum(input, *, dtype=None) → Tensor
Does it have something to do with this *? Where can I find the PyTorch docs that explain how to use axis? I came across the same thing with other methods (with other arguments). So, although I know how to use axis, I want to figure out how to actually read the docs for these "hidden" arguments.
| PyTorch emulates much of the basic functionality of Numpy (with additional GPU acceleration and autograd mechanics) but the API also differs in some small ways. For example, PyTorch generally uses the dim keyword argument to specify which dimension a function should operate on while Numpy uses the axis keyword argument to specify the same thing.
While undocumented it appears that the PyTorch devs have allowed users to use the axis keyword argument in place of dim for the sum function. This is probably to make the library more compatible with NumPy functions.
Therefore
torch.sum(x, dim=1)
is equivalent to
torch.sum(x, axis=1)
and both specify that you want to sum-reduce along dimension 1.
You can read about the NumPy version of sum here, which also provides additional examples of the axis argument.
| https://stackoverflow.com/questions/73541200/ |
Adapting MNIST designed network for a larger dataset | I am attempting to implement this deep clustering algorithm, which was designed to cluster the MNIST dataset (single-channel 28x28 images).
The images I am trying to use are 416x416 and 3-Channel RGB. The script is initialised with the following functions.
class CachedMNIST(Dataset):
def __init__(self, train, cuda, testing_mode=False):
img_transform = transforms.Compose([transforms.Lambda(self._transformation)])
# img_transform = transforms.Compose([transforms.Resize((28*28)), transforms.ToTensor(), transforms.Grayscale()])
self.ds = torchvision.datasets.ImageFolder(root=train, transform=img_transform)
self.cuda = cuda
self.testing_mode = testing_mode
self._cache = dict()
@staticmethod
def _transformation(img):
return (torch.ByteTensor(torch.ByteStorage.from_buffer(img.tobytes())).float()
* 0.02
)
If the images are left unaltered, the resulting tensor output from the _transformation function is of size torch.Size([256, 519168]), far too large for the AutoEncoder network to handle.
Error 1
RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x519168 and 784x500)
When I attempted to resize the images, the result is a 4D tensor, torch.Size([256,1,784,748]); even when reducing the batch size to minuscule amounts, CUDA will crash as there is not enough memory.
Error 2
RuntimeError: CUDA out of memory.
I'm hoping someone can point me in the right direction to tackle this problem as there must be a more efficient way to adapt the network.
AutoEnocder Model
StackedDenoisingAutoEncoder(
(encoder): Sequential(
(0): Sequential(
(linear): Linear(in_features=784, out_features=500, bias=True)
(activation): ReLU()
)
(1): Sequential(
(linear): Linear(in_features=500, out_features=500, bias=True)
(activation): ReLU()
)
(2): Sequential(
(linear): Linear(in_features=500, out_features=2000, bias=True)
(activation): ReLU()
)
(3): Sequential(
(linear): Linear(in_features=2000, out_features=10, bias=True)
)
)
(decoder): Sequential(
(0): Sequential(
(linear): Linear(in_features=10, out_features=2000, bias=True)
(activation): ReLU()
)
(1): Sequential(
(linear): Linear(in_features=2000, out_features=500, bias=True)
(activation): ReLU()
)
(2): Sequential(
(linear): Linear(in_features=500, out_features=500, bias=True)
(activation): ReLU()
)
(3): Sequential(
(linear): Linear(in_features=500, out_features=784, bias=True)
)
)
)
| Error 1 is happening because the first linear layer has in_features=784. That number comes from the 28x28 pixels in the 1-channel MNIST data. Your input data is 416x416x3 = 519168 (different if you resize your inputs). In order to resolve this error, you need to make the in_features in that first linear layer match the number of pixels (times the number of channels) of your input. You can do this by changing that number or resizing your input (or, likely both). Also, note that you will likely have to flatten your input so that it is a vector. Also, note that whatever the in_features becomes (to the encoder) you'll want to make the out_features of the decoder match (otherwise you'll be trying to compare two vectors of different sizes when training).
Error 2 (CUDA OOM) can happen for lots of reasons (small GPU, too large a network, too large a batch size, etc.). The network you have doesn't appear particularly large. But you could reduce its size by shrinking some of the internal layers (the numbers of in_features and out_features). Just be sure that if you adjust these, you maintain the property that the number of out_features from one layer matches the number of in_features of the next layer. And, in this example, the decoder is a nice mirror of the encoder (so if you adjust the encoder, make the corresponding mirror-adjustment in the decoder).
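A minimal sketch of the shape bookkeeping (416x416 RGB assumed, as in your description):
import torch

x = torch.randn(8, 3, 416, 416)        # a batch of RGB images
x_flat = x.view(x.size(0), -1)         # shape: (8, 519168)
in_features = x_flat.size(1)           # 3 * 416 * 416 = 519168

encoder_first = torch.nn.Linear(in_features, 500)  # must match the flattened input
decoder_last = torch.nn.Linear(500, in_features)   # mirror it on the decoder side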
| https://stackoverflow.com/questions/73556207/ |
Pytorch "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!" | SYSTEM: Ryzen 5800x, rx 6700xt, 32 gigs of RAM, Ubuntu 22.04.1
I'm attempting to install Stable-Diffusion by following https://youtu.be/d_CgaHyA_n4
When attempting to run the SD script, I get the "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!" error.
I believe this is caused by PyTorch not working as expected. When validating Pytorchs' installation with "The Master Test", I get the same error:
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Aborted (core dumped)
I believe that it is installed correctly, as the conda list command tells me that torch 1.12.0a0+git2a932eb and torchvision 0.13.0a0+f5afae5 are installed. Interestingly, when I change the command ever so slightly to torch.cuda.is_available (without the parentheses), I get the following output: <function is_available at 0x7f42278788b0>. Granted, I'm not sure what this is telling me. Following the "Verification" step resulted in the expected array of random numbers. However, it failed the GPU driver check.
Thank you in advance.
| Try running the following command:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
This made it work on my machine using an RX 6600 XT, with which I got the same error running it, before exporting the variable.
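To make it persistent across shell sessions, you can append it to your shell profile, e.g.:
echo 'export HSA_OVERRIDE_GFX_VERSION=10.3.0' >> ~/.bashrc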
| https://stackoverflow.com/questions/73575955/ |
IterableWrapper is not defined when using WikiText2 | I am trying to follow along with this tutorial: https://pytorch.org/tutorials/beginner/transformer_tutorial.html
I am getting the following error when calling this function.
----> 6 train_iter = WikiText2(split='train')
/usr/local/lib/python3.7/dist-packages/torchtext/datasets/wikitext2.py in WikiText2(root, split)
75 )
76
---> 77 url_dp = IterableWrapper([URL])
78 # cache data on-disk
79 cache_compressed_dp = url_dp.on_disk_cache(
NameError: name 'IterableWrapper' is not defined
Here is the code:
from torchtext.datasets import WikiText2
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
from torchdata.datapipes.iter import IterableWrapper
train_iter = WikiText2(split='train')
Let me know if you have any ideas. Thanks,
Antoine
| I tried running the snippet of code you provided. I don't see
NameError: name 'IterableWrapper' is not defined
but I have a different error which says,
No module named 'torchdata'
I don't have torchdata installed.
So in your case, I would make sure if the torchdata is installed correctly.
You can look at this official Git and see if it works out
https://github.com/pytorch/data
Best regards
| https://stackoverflow.com/questions/73590391/ |
Pytorch model output is not correct (torch.float32 and torch.float64) | I have created a DNN model with Pytorch (input_dim=6, output_dim=150). Normally, if I generate a random X_in=torch.randn(6000, 6), it will return me a model_out.shape=(6000, 150), and if I calculate the Rank of model_out, it should be 150 (since my model's weight and bias are also randomly initialised).
However, you can see this is NOT TRUE with the following code:
import torch
import torch.nn as nn
torch.manual_seed(923) # for reproducible result
class MyDNN(nn.Module):
def __init__(self):
super(MyDNN, self).__init__()
# layer 0:
self.linear_0 = nn.Linear(6, 150)
self.activ_0 = nn.Tanh()
# layer 1:
self.linear_1 = nn.Linear(150, 150)
self.activ_1 = nn.Tanh()
# layer 2:
self.linear_2 = nn.Linear(150, 150)
self.activ_2 = nn.Tanh()
# layer 3:
self.linear_3 = nn.Linear(150, 150)
self.activ_3 = nn.Tanh()
def forward(self, x):
out = self.activ_0(self.linear_0(x)) # output: layer 0
out = self.activ_1(self.linear_1(out)) # output: layer 1
out = self.activ_2(self.linear_2(out)) # output: layer 2
out = self.activ_3(self.linear_3(out)) # output: layer 3
return out
model = MyDNN()
X_in = torch.randn(6000, 6, dtype=torch.float32)
with torch.no_grad():
model_out = model(X_in)
print(f'model_out rank = {torch.linalg.matrix_rank(model_out)}')
model_out rank = 115. Apparently this is a WRONG output; there is no way that the output has so many linearly dependent columns when all the inputs, weights, and biases are randomly initialised!
This problem can be solved by changing the X_in dtype as well as the model dtype to float64 with the following code:
model_64 = MyDNN()
model_64.double()
X_in_64 = torch.randn(6000, 6, dtype=torch.float64)
with torch.no_grad():
model_64_out = model_64(X_in_64)
print(f'model_64_out rank = {torch.linalg.matrix_rank(model_64_out)}')
model_64_out rank = 150
Here is my question:
Why does this happen? Is this really a problem of data size? I mean float32 already has a good precision. Actually when I use my own training_data, even with mini_batch_size = 10 -> output.shape = (10, 150), my Rank(output) is less than 10.
Although this problem can be solved by using double precision, this slows down the whole training process a lot (and with Mac M1 pro GPU, it only supports float32 type). Is there any other solution?
| You have to realize that we are dealing with a numerical problem here: the rank of a matrix is a discrete value derived from, e.g., a singular value decomposition in the case of torch.matrix_rank. In this case we need to consider a threshold on the singular values: below what modulus tol do we consider a singular value as exactly zero?
Remember that we are dealing with floating point values, where all operations always come with truncation and rounding errors. In short, there is no sense in trying to compute an exact rank.
So instead you might reconsider what kind of tolerance you use; you could e.g. use torch.linalg.matrix_rank(..., tol=1e-6). The smaller the tolerance, the higher the expected rank.
But no matter what kind of floating point precision you use, I'd argue you will never be able to find a meaningful "exact" number for the rank; it will always be a trade-off! Therefore I'd reconsider whether you really need to compute the rank in the first place, or whether there is some other kind of criterion that is better suited for numerical considerations!
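To see the effect of the tolerance, here is a small sketch on a deliberately ill-conditioned matrix (depending on your PyTorch version, atol/rtol may be the preferred keyword names instead of tol):
import torch

torch.manual_seed(0)
u = torch.randn(6000, 150)
s = torch.logspace(0, -9, 150)      # column scales spanning 9 orders of magnitude
m = (u * s).to(torch.float32)       # ill-conditioned float32 matrix

print(torch.linalg.matrix_rank(m))             # default tolerance: small columns count as zero
print(torch.linalg.matrix_rank(m, tol=1e-12))  # tiny tolerance: the rank goes up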
| https://stackoverflow.com/questions/73594922/ |
How to implement Laplace Posteriori Approximation on BERT in PyTorch? | I'm trying to implement the Laplace Posteriori Approximation on the last layer for the classification results obtained by BERT model. I get an error regarding input size, and after I fix it by extracting just embeddings and class labels from BERT to feed them into Laplace, I get another bunch of errors regarding input dimensions that I don't know how to debug.
As this is something I didn't find on the internet, and includes relatively new libraries, I will post here just the first error I got, code that might help in debugging and useful links.
I will update post if needed.
Of course, if someone knows how to implement Laplace Posteriori Approximation with BERT in some other library like Scikit or Trax, it would be helpful. Also, some other Transformer classification model with some other confidence approximation will be useful for me. Any help is appreciated!
Code:
# Import
import pandas as pd
import torch
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
from torch import nn
from transformers import BertTokenizer
from transformers import BertModel
from transformers import BertForSequenceClassification
from sklearn.model_selection import train_test_split
import time
import os
#Toy Data
data_a_b_c = ["""category a. This is category a. In category a we talk about animals.
This category includes lions, fish, tigers, birds, elephants, mouses, dogs, cats, and all other animals."""] * 60 \
+ ["""category b. This is category b. In category b we talk about people. This category members are
Abraham Maslow, John Lennon, Drazen Petrovic, Nikola Tesla, Slavoljub Penkala, Nenad Bakic and Larry Page."""] * 60 \
+ ["""category c. This is category c. Category c is dedicated to car brands like Lamborgini, Rimac-Buggati, BMW, Mercedes,
Honda, Opel, Wolkswagen, and etc."""] * 60
label_0_1_2 = [0] * 60 + [1] * 60 + [2] * 60
d = {'text': data_a_b_c, 'labels': label_0_1_2}
df = pd.DataFrame(data=d)
print(df.head(3))
print(df.tail(3))
print(df.info())
# Parameters
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
batch_size = 2
learning_rate = 3e-4
epochs = 3
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
labels = pd.Series(df.labels.values).to_dict()
num_classes = 3
print(f'Tokenizer: {tokenizer}, Batch size:{batch_size}, Learning rate:{learning_rate}, Epochs:{epochs}')
print('Device: ', device)
print('Number of possible classes: ', num_classes)
# Model Architecture
class TransformerModel(nn.Module):
def __init__(self, num_classes, dropout=0.5):
super(TransformerModel, self).__init__()
self.bert = BertModel.from_pretrained('bert-base-multilingual-cased')
self.dropout = nn.Dropout(dropout)
self.linear = nn.Linear(768, num_classes)
self.relu = nn.ReLU()
def forward(self, input_id, mask):
_, pooled_output = self.bert(input_ids=input_id, attention_mask=mask, return_dict=False)
dropout_output = self.dropout(pooled_output)
linear_output = self.linear(dropout_output)
final_layer = self.relu(linear_output)
return final_layer
# Prepare Data Function
def prepare_data(data, labels):
texts = tokenizer(data, padding='max_length', max_length=512, truncation=True, return_tensors="pt")
input_ids = texts['input_ids']
attention_mask = texts['attention_mask']
train_dataset = TensorDataset(input_ids, attention_mask, torch.LongTensor(labels))
dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
return dataloader
#Run Training Function
def run_training(train_dataloader, val_dataloader, epochs=epochs, lr=learning_rate):
def train(dataloader):
model.train()
total_acc, total_count = 0, 0
log_interval = 128
start_time = time.time()
for idx, (input_id, mask, label) in enumerate(train_dataloader):
# print(idx)
mask = mask.to(device)
input_id = input_id.to(device)
label = label.type(torch.LongTensor).to(device)
output = model(input_id, mask)
optimizer.zero_grad()
loss = criterion(output, label)
loss.backward()
# torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
optimizer.step()
total_acc += (output.argmax(1) == label).sum().item()
total_count += label.size(0)
if idx % log_interval == 0 and idx > 0:
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches '
'| accuracy {:8.3f}'.format(epoch, idx, len(dataloader),
total_acc / total_count))
total_acc, total_count = 0, 0
start_time = time.time()
def evaluate(dataloader):
model.eval()
total_acc, total_count = 0, 0
with torch.no_grad():
for idx, (input_id, mask, label) in enumerate(dataloader):
mask = mask.to(device)
input_id = input_id.to(device)
label = label.to(device)
output = model(input_id, mask)
total_acc += (output.argmax(1) == label).sum().item()
total_count += label.size(0)
return total_acc / total_count
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.1)
cuda = torch.cuda.is_available()
device = torch.device("cuda" if cuda else "cpu")
device = 'cuda'
model.to(device)
total_accu = None
for epoch in range(1, epochs + 1):
epoch_start_time = time.time()
train(train_dataloader)
accu_val = evaluate(val_dataloader)
if total_accu is not None and total_accu > accu_val:
scheduler.step()
else:
total_accu = accu_val
print('-' * 59)
print('| end of epoch {:3d} | time: {:5.2f}s | '
'valid accuracy {:8.3f} '.format(epoch,
time.time() - epoch_start_time,
accu_val))
print('-' * 59)
# Data Split And Preparation
X_train, X_test, y_train, y_test = train_test_split(df.text.values.tolist(), df.labels.values.tolist(), test_size=0.2, random_state=2)
train_dataloader = prepare_data(X_train, y_train)
val_dataloader = prepare_data(X_test, y_test)
# Run The Model
model = TransformerModel(num_classes)
run_training(train_dataloader, val_dataloader)
print('finished')
# Save And Load The Model (if needed)
PATH = ".../Torch_BERT_model"
torch.save(model, os.path.join(PATH, "Toy_Data_BERT.pth"))
model = torch.load(os.path.join(PATH, "Toy_Data_BERT.pth"))
print(model)
# Laplace
from laplace import Laplace
la = Laplace(model, 'classification', subset_of_weights='last_layer', hessian_structure='full')
la.fit(train_dataloader)
Error I get:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_7144\3779742208.py in <cell line: 2>()
      1 la = Laplace(model, 'classification', subset_of_weights='last_layer', hessian_structure='full')
----> 2 la.fit(train_dataloader)

~\anaconda3\lib\site-packages\laplace\lllaplace.py in fit(self, train_loader, override)
     98
     99         if self.model.last_layer is None:
--> 100             X, _ = next(iter(train_loader))
    101             with torch.no_grad():
    102                 try:

ValueError: too many values to unpack (expected 2)
Useful link for Laplace implementation with examples:
https://aleximmer.github.io/Laplace/#full-example-optimization-of-the-marginal-likelihood-and-prediction
Code that might help in debugging:
for x in train_dataloader:
print("The length of batch is:", len(x))
print()
print("The batch looks like:", x)
print()
print("The length of the first element in the batch is:") #embedding
print(len(x[0]))
print("The length of the second element in the batch is:") #1 if place is filled with word, 0 if it's empty?
print(len(x[1]))
print("The length of the third element in the batch is:") #category
print(len(x[2]))
print()
print("The lengths of the first tensor and second tensor in the first element in the batch is:")
print(len(x[0][0]), len(x[0][1])) # = max_length (512)
print("The lengths of the first tensor and second tensor in the second element in the batch is:")
print(len(x[1][0]), len(x[1][1])) # = max_length (512)
print()
print()
| The laplace library expects that the dataloader returns two parameters (X,y) and that the model requires exactly one argument to make its prediction (code). But your model forward pass requires two arguments, namely input_id and mask, and your dataloader returns three arguments input_id, mask, and labels.
There are several ways to work around this limitation (e.g. return a dict with input_ids and attention_mask). The way that requires the least understanding of the internals of the laplace library is to generate the attention mask at runtime in the forward pass (not great for performance):
class TransformerModel(nn.Module):
def __init__(self, num_classes, pad_id, dropout=0.5):
super(TransformerModel, self).__init__()
self.bert = BertModel.from_pretrained('bert-base-multilingual-cased')
self.dropout = nn.Dropout(dropout)
self.linear = nn.Linear(768, num_classes)
self.relu = nn.ReLU()
self.pad_id = pad_id
    def forward(self, input_id):
        # rebuild the attention mask from the pad token id at runtime
        mask = (input_id != self.pad_id).type(input_id.dtype)
        _, pooled_output = self.bert(input_ids=input_id, attention_mask=mask, return_dict=False)
dropout_output = self.dropout(pooled_output)
linear_output = self.linear(dropout_output)
final_layer = self.relu(linear_output)
return final_layer
model = TransformerModel(num_classes, tokenizer.pad_token_id)
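Note that the train_dataloader passed to la.fit must then also yield (X, y) pairs rather than the (input_ids, mask, labels) triples built in prepare_data; a sketch of the adjusted dataset:
train_dataset = TensorDataset(input_ids, torch.LongTensor(labels))
dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)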
| https://stackoverflow.com/questions/73599356/ |
Gradient of X is NoneType in second iteration | I'm trying to make images which will fool the model, but I have a problem with this code: in the second iteration I get TypeError: unsupported operand type(s) for -: 'Tensor' and 'NoneType'
Why is the grad NoneType even though it works the first time?
X_fooling = X.clone()
X_fooling.requires_grad_()
loss_f = torch.nn.MSELoss()
for i in range(1000):
score = model(X_fooling)
y = torch.zeros(1000)
y[target_y] = 1
loss = loss_f(score, y)
print(loss)
loss.backward()
if target_y == torch.argmax(score):
break
X_fooling = X_fooling - X_fooling.grad
| I could not fool the network with a target size of 1000. But I was able to fool it with a target size of 64. Here is a minimal code snippet that runs without error:
import matplotlib.pyplot as plt
import torch
torch.manual_seed(0)
target_size = 64
model = torch.nn.Linear(10, target_size)
target_y = torch.randint(0, target_size, (1, ))
X = torch.rand(1, 10)
X_fooling = X.clone()
X_fooling.requires_grad_()
loss_f = torch.nn.MSELoss()
history = []
goal_achieved = False
for i in range(10_000):
score = model(X_fooling)
y = torch.zeros(1, target_size)
y[:, target_y] = 1
loss = loss_f(score, y)
history.append(loss.detach().cpu().item())
loss.backward()
if target_y == torch.argmax(score):
goal_achieved = True
break
X_fooling = (X_fooling - X_fooling.grad).detach()
X_fooling.requires_grad_()
model.zero_grad()
plt.title(f"Goal achieved: {goal_achieved}")
plt.plot(history)
plt.show()
Output: (a plot of the loss history; the title indicates whether the goal was achieved)
The thing is that you have to detach your input tensor from the graph after your backward pass.
| https://stackoverflow.com/questions/73624893/ |
What does self(variable) do in Python? | I'm trying to understand someone else's code in Python and I stumbled across a line I don't quite understand and which I can't find on the internet:
x=self(k)
with k being a torch-array.
I know what self.something does but I haven't seen self(something) before.
| self, for these purposes, is just a variable like any other, and when we call a variable with parentheses, it invokes the __call__ magic method. So
x = self(k)
is effectively a shortcut for
x = self.__call__(k)
Footnote: I say "effectively", because it's really more like
x = type(self).__call__(self, k)
due to the way magic methods work. This difference shouldn't matter unless you're doing funny things with singleton objects, though.
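A tiny self-contained illustration:
class Doubler:
    def __call__(self, k):
        return 2 * k

d = Doubler()
print(d(21))  # 42, because d(21) invokes Doubler.__call__(d, 21)
In the PyTorch context, nn.Module implements __call__ to run its hooks and then dispatch to forward, which is why self(k) inside a module ends up calling self.forward(k).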
| https://stackoverflow.com/questions/73625459/ |
Error Running Stable Diffusion from the command line in Windows | I installed Stable Diffusion v1.4 by following the instructions described in https://www.howtogeek.com/830179/how-to-run-stable-diffusion-on-your-pc-to-generate-ai-images/#autotoc_anchor_2
My machine heavily exceeds the min reqs to run Stable Diffusion:
Windows 11 Pro
11th Gen Intel i7 @ 2.30GHz
Latest NVIDIA GeForce GPU
16GB Memory
1TB SSD
Yet, I get an error when trying to run the test prompt
python scripts/txt2img.py --prompt "a close-up portrait of a cat by pablo picasso, vivid, abstract art, colorful, vibrant" --plms --n_iter 5 --n_samples 1
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Reading a post by Marco Ramos it seems like it relates to the number of workers in PyTorch
Strange Cuda out of Memory behavior in Pytorch
How do I change the number of workers while running Stable Diffusion? And why is it throwing this error if my machine still has lots of memory? Has anyone encountered this same issue while running Stable Diffusion?
| I had the same issue; it's because you're using a non-optimized version of Stable Diffusion. You have to download basujindal's branch of it, which allows it to use much less RAM by sacrificing precision; this is the branch: https://github.com/basujindal/stable-diffusion
Everything else in that guide stays the same; just clone from this version. It allows you to even push past the 512x512 default resolution; you can use 756x512 to get rectangular images, for example (but the results may vary since the model was trained on a 512-square set).
The new prompt becomes: python optimizedSD/optimized_txt2img.py --prompt "blue orange" --H 756 --W 512
One more note: as of a few days ago, an even faster and more optimized version was released by neonsecret (https://github.com/basujindal/stable-diffusion); however, I'm having issues installing it, so I can't really recommend it yet, but you can try it as well and see if it works for you.
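Independently of which fork you use, the error message itself suggests one more knob to try: setting the allocator's max split size before launching the script (the 128 below is just a starting value to experiment with):
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128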
| https://stackoverflow.com/questions/73629682/ |
How to input images in rllib | I recently came across the rllib library: https://docs.ray.io/en/latest/rllib/index.html.
It has amazing features for reinforcement learning, but unfortunately I couldn't find a way to input images as observations without flattening them (I basically want to use a convolutional neural network). Is there any way to input image observations into models using the rllib library?
| Rllib is compatible with openai's gym, you can create a custom env https://docs.ray.io/en/latest/rllib/rllib-env.html#configuring-environments and return a Box as an observation space like https://stackoverflow.com/a/69602365/4994352
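For illustration, a minimal image-observation env might look like this (a sketch; the class, shapes, and reward are hypothetical, and RLlib's default vision network handles image-shaped Box observations without flattening):
import gym
import numpy as np
from gym.spaces import Box, Discrete

class ImageEnv(gym.Env):
    def __init__(self, env_config=None):
        # 84x84 RGB observations: an image-shaped Box, not a flattened vector
        self.observation_space = Box(low=0, high=255, shape=(84, 84, 3), dtype=np.uint8)
        self.action_space = Discrete(4)
    def reset(self):
        return np.zeros((84, 84, 3), dtype=np.uint8)
    def step(self, action):
        obs = np.zeros((84, 84, 3), dtype=np.uint8)
        return obs, 0.0, True, {}   # obs, reward, done, info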
| https://stackoverflow.com/questions/73644488/ |
How to make vgg pytorch's ptl size smaller on android? | import torch
# import joblib
from torch.utils.mobile_optimizer import optimize_for_mobile
from torchvision.models.vgg import vgg16
import torch, torchvision.models
# lb = joblib.load('lb.pkl')
device = torch.device('cuda:0')
#device = torch.device('cpu')#'cuda:0')
torch.backends.cudnn.benchmark = True
model = vgg16().to(device)
# model = torchvision.models.vgg16()
path = 'model-22222.pth'
torch.save(model.state_dict(), path) # nothing else here
model.load_state_dict(torch.load(path))
#model.load_state_dict(torch.load('./model-76-0.7754.pth'))
scripted_module = torch.jit.script(model)
optimized_scripted_module = optimize_for_mobile(scripted_module)
optimized_scripted_module._save_for_lite_interpreter("model-76-0.7754.ptl")
optimize_for_mobile does not seem to make the .ptl file smaller; it's about 527 MB, which is too large for Android. How can I make it smaller?
| You may try using quantization:
https://pytorch.org/docs/stable/quantization.html
Usually it allows to reduce the size of the model 2x-4x times with little or no loss of accuracy.
Example:
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torchvision.models.vgg import vgg16
device = torch.device("cpu")
torch.backends.cudnn.benchmark = True
model = vgg16().to(device)
backend = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
model_static_quantized = torch.quantization.prepare(model, inplace=False)
model_static_quantized = torch.quantization.convert(
model_static_quantized, inplace=False
)
scripted_module = torch.jit.script(model_static_quantized)
optimized_scripted_module = optimize_for_mobile(scripted_module)
optimized_scripted_module._save_for_lite_interpreter("model_quantized.ptl")
Apart from that, I would also try architectures more suitable for mobile phones, because VGG is quite old and does not have a great size/accuracy ratio. You can distill your trained VGG model into a smaller one with knowledge distillation.
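A typical knowledge-distillation loss looks roughly like this (a sketch: student_logits, teacher_logits and labels come from your own models and data, and T/alpha are hyperparameters to tune):
import torch.nn.functional as F

T, alpha = 4.0, 0.7
kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
              F.softmax(teacher_logits / T, dim=1),
              reduction='batchmean') * T * T   # soft-target term, scaled by T^2
loss = alpha * kd + (1 - alpha) * F.cross_entropy(student_logits, labels)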
| https://stackoverflow.com/questions/73652307/ |
Best Practices for Distributed Training with PyTorch custom containers (BYOC) in SageMaker | What are the best practices for distributed training with PyTorch custom containers (BYOC) in Amazon Sagemaker? I understand that PyTorch framework supports native distributed training or using the Horovod library for PyTorch.
| The recommended approach on Amazon SageMaker is to use the SageMaker built in Data Parallel and Model Parallel Libraries. When you use the Pytorch Deep Learning container provided by SageMaker, the library is built in and you can follow the below examples to get started with examples.
https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training-notebook-examples.html
If you are bringing your own container, follow the below link to add SageMaker Distributed training support to your container
https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-use-api.html#data-parallel-bring-your-own-container
Apart from this SageMaker also natively supports Pytorch DDP within the native Deep Learning Container used in Pytorch Estimator.
https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/sagemaker.pytorch.html
| https://stackoverflow.com/questions/73676590/ |
PyTorch convert function for op 'pad' not implemented | When trying to convert model.ckpt to a Core ML model using coremltools, I got this error:
File "/Users/peterpan/miniforge3/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 86, in convert_nodes
raise RuntimeError(
RuntimeError: PyTorch convert function for op 'pad' not implemented.
Here is converting code:
model: torch.nn.Module = make_training_model(train_config)
state = torch.load(path, map_location=map_location)
model.load_state_dict(state['state_dict'], strict=strict)
model.on_load_checkpoint(state)
model.eval()
jit_model_wrapper = JITWrapper(model)
image = torch.rand(1, 3, 120, 120)
mask = torch.rand(1, 1, 120, 120)
output = jit_model_wrapper(image, mask)
device = torch.device("cpu")
image = image.to(device)
mask = mask.to(device)
traced_model = torch.jit.trace(jit_model_wrapper, (image, mask), strict=False).to(device)
model1 = ct.convert(
traced_model,
source='pytorch',
inputs=[ct.ImageType(name='image',shape=image.shape), ct.ImageType(name='mask',shape=mask.shape)]
)
model1.save("newmodel.mlmodel")
I'm a newbie in Python. What's wrong with my code?
| Upgrading to coremltools==6.0 fixed this problem for me
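For example, assuming a pip-based environment:
pip install coremltools==6.0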
| https://stackoverflow.com/questions/73676762/ |
Find euclidean distance between a tensor and each row tensor of a matrix efficiently in PyTorch | I have a tensor A of size torch.Size([3]) and another tensor B of size torch.Size([4,3]).
I want to find the distance between A and each of the 4 rows of B.
I'm new to Torch and I reckon a for loop for each of the rows wouldn't be efficient. I have looked into torch.linalg.norm and torch.cdist but I'm not sure if they solve my problem, unless I'm missing something.
| You are looking for:
torch.norm(A[None, :] - B, p=2, dim=1)
A[None, :] resizes the tensor to shape (1, 3)
A[None, :] - B broadcasts the tensor A four times to match the shape of B and performs the subtraction
torch.norm(..., p=2, dim=1) computes the Euclidean norm along dim 1, giving one distance per row of B.
Output shape: (4,)
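Since you mentioned torch.cdist: it computes the same distances (a sketch, using the shapes above):
dist = torch.cdist(A[None, :], B).squeeze(0)   # (1, 3) vs (4, 3) -> (1, 4) -> (4,)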
| https://stackoverflow.com/questions/73680705/ |
AttributeError: 'str' object has no attribute 'cuda' | I am trying to move my data to GPU by doing this
batch["img"] = [img.cuda() for img in batch["img"]]
batch["label"] = [label.cuda() for label in batch["label"]]
However, I get this error for the labels (this is for OCR):
AttributeError: 'str' object has no attribute 'cuda'
I also tried .to('cuda') and got a similar error.
More details are as follows. This is the pytorch Dataset class
class SynthDataset(Dataset):
def __init__(self, opt):
super(SynthDataset, self).__init__()
self.path = os.path.join(opt['path'], opt['imgdir'])
self.images = os.listdir(self.path)
self.nSamples = len(self.images)
f = lambda x: os.path.join(self.path, x)
self.imagepaths = list(map(f, self.images))
transform_list = [transforms.Grayscale(1),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))]
self.transform = transforms.Compose(transform_list)
self.collate_fn = SynthCollator()
def __len__(self):
return self.nSamples
def __getitem__(self, index):
assert index <= len(self), 'index range error'
imagepath = self.imagepaths[index]
imagefile = os.path.basename(imagepath)
img = Image.open(imagepath)
if self.transform is not None:
img = self.transform(img)
item = {'img': img, 'idx':index}
item['label'] = imagefile.split('_')[0]
return item
As you can see, the dataset outputs a dictionary with the image and label, where the label is the text contained in the image.
| Yo !
If I get your code right, you are building a custom dataset that outputs an image and its label. Your label comes from the first part of your image filename, so it is a string, as stated in the error message. Your object needs to be a torch tensor to be moved to the GPU with .cuda().
If you want to keep your code as is, you need to transform your label into a numerical form. I suspect your labels are strings like "cat", "dog", etc. The usual ways to transform labels into numerical form are one-hot encoding, or simply mapping each label to an integer. There are abundant resources about this on the web. Then you can turn your label into a tensor object and move it to the GPU.
However, I would highly recommend changing the way you build your tensor dataset. You are calling .cuda() on every image tensor in your list. When forwarding through a neural network, you batch your data into a tensor (for instance stacking your images along a new dimension), then call .cuda() on the whole batch tensor. Same thing for the labels. Check the PyTorch documentation, as there is probably the exact example you are looking for (a custom dataset with an image and its label).
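A minimal sketch of that idea (the label vocabulary here is hypothetical; the split follows your filename convention):
label_to_idx = {'cat': 0, 'dog': 1}   # hypothetical label vocabulary

# in __getitem__, store an integer tensor instead of the raw string:
item['label'] = torch.tensor(label_to_idx[imagefile.split('_')[0]])

# in the training loop, move whole batched tensors at once:
imgs = batch['img'].cuda()
labels = batch['label'].cuda()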
Good luck !
| https://stackoverflow.com/questions/73695519/ |
Why do we use the validation set (not the train or test set) for early stopping (DL / CNN)? | This is my first attempt at CNNs in PyTorch. I have gone through a few tutorials, but still need some clarification.
I have a theoretical question: I don't understand why early stopping is based on the validation set, not the train or test set.
Does it have something to do with the metrics we get from the validation set?
| The number of training epochs is one of the training hyper-parameters. Therefore, you MUST NOT use the test data to determine the value of this hyper-parameter.
Additionally, you cannot use the training set itself either: the training loss keeps decreasing as long as the model fits (and eventually overfits) the training data, so it cannot tell you when to stop. Therefore, you need to use the validation set for determining this value.
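A minimal sketch of the usual early-stopping pattern (train_one_epoch and evaluate are placeholders for your own loops, and the patience value is arbitrary):
best_val_loss, patience, bad_epochs = float('inf'), 5, 0
for epoch in range(max_epochs):
    train_one_epoch(model, train_loader)
    val_loss = evaluate(model, val_loader)   # validation set, never the test set
    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break   # stop early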
| https://stackoverflow.com/questions/73729351/ |
How to set backend to ‘gloo’ on windows in Pytorch | I am trying to use two gpus on my windows machine, but I keep getting
raise RuntimeError("Distributed package doesn't have NCCL " "built
in") RuntimeError: Distributed package doesn't have NCCL built in
I am still new to PyTorch and couldn't really find a way of setting the backend to 'gloo'. Is there any way to set backend='gloo' to run two GPUs on Windows?
| from torch import distributed as dist
Then in your init of the training logic:
dist.init_process_group("gloo", rank=rank, world_size=world_size)
Update:
You should use Python multiprocessing like this (the address/port values below are placeholders; adjust them to your setup):
import os
import torch
import torch.multiprocessing as mp

class Trainer:
    def __init__(self, rank, world_size):
        self.rank = rank
        self.world_size = world_size
        print('Initializing distributed')
        os.environ['MASTER_ADDR'] = 'localhost'   # address of the rank-0 process
        os.environ['MASTER_PORT'] = '12355'       # any free port
        dist.init_process_group("gloo", rank=rank, world_size=world_size)

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    # mp.spawn passes the process rank as the first argument to the target
    mp.spawn(
        Trainer,
        nprocs=world_size,
        args=(world_size,),
        join=True)
| https://stackoverflow.com/questions/73730819/ |
Pytorch BERT input gradient | I am trying to get the input gradients from a BERT model in pytorch. How can I do that?
Suppose, y' = BertModel(x). I am trying to find $d(loss(y,y'))/dx$
| One of the problems with BERT models is that the input mostly contains token IDs rather than token embeddings, which makes getting gradients difficult: the mapping from token IDs to token embeddings is discrete, so gradients cannot flow through it.
To solve this issue, you can work with token embeddings.
# get your batch data: token_id, mask and labels
token_ids, mask, labels = batch
# get your token embeddings
token_embeds=BertModel.bert.get_input_embeddings().weight[token_ids].clone()
# track gradient of token embeddings
token_embeds.requires_grad=True
# get model output that contains loss value
outs = BertModel(inputs_embeds=token_embeds, labels=labels)
loss=outs.loss
After getting the loss value, you can use torch.autograd.grad (as in this answer) or the backward function:
loss.backward()
grad=token_embeds.grad
| https://stackoverflow.com/questions/73743878/ |
What is the unit of time in the following code? | The following code measures the inference time of the network. However, I am not sure what the unit of time is. I read through the documentation here, but am still confused.
####This code is for measuring performance such as inference time, FLOPs, Parameters etc...
dummy_input = torch.randn(1, 3, 256, 256).cuda()
macs, params = profile(model, inputs=(dummy_input,), verbose=0)
macs, params = clever_format([macs, params], "%.3f")
name = "SegDepthWithTwoDecoders"
print("<" * 50, name)
print("Flops:", macs)
print("Parameters:", params)
starter, ender = torch.cuda.Event(enable_timing=True), torch.cuda.Event(
enable_timing=True
)
repetitions = 300
timings = np.zeros((repetitions, 1))
for _ in range(10):
_ = model(dummy_input)
# MEASURE PERFORMANCE
with torch.no_grad():
for rep in range(repetitions):
starter.record()
_ = model(dummy_input)
ender.record()
# WAIT FOR GPU SYNC
torch.cuda.synchronize()
curr_time = starter.elapsed_time(ender)
timings[rep] = curr_time
print("time :", np.average(timings))
###This is where the code ends for measuring the performance
This is what the output looks like
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< SegDepthWithTwoDecoders
Flops: 1.924G
Parameters: 3.986M
time : 12.67153577486674
| From the documentation of Event.elapsed_time()
https://pytorch.org/docs/stable/generated/torch.cuda.Event.html
Returns the time elapsed in milliseconds after the event was recorded
and before the end_event was recorded.
| https://stackoverflow.com/questions/73744730/ |
Spyder crashing when importing torch | I am using a MacBook Pro (MacOS: Monterey) and I'm using Spyder downloaded as the app for MacOS via this page: https://github.com/spyder-ide/spyder/releases. So it is from a standalone installer and I have installed conda via miniconda3.
Everything works fine until I'm trying to install Pytorch. I have installed the package in a virtual environment with the following code snippet: conda install pytorch torchvision -c pytorch.
The installation is successful but when I write import torch, I get the following error message and the kernel restarts:
/Applications/Spyder.app/Contents/Resources/lib/python3.9/spyder/plugins/ipythonconsole/scripts/conda-activate.sh: line 18: 98840 Abort trap: 6
$CONDA_ENV_PYTHON -m spyder_kernels.console -f $SPYDER_KERNEL_SPEC
Fatal Python error: Aborted
Main thread:
Current thread 0x0000000112f1f600 (most recent call first):
File "<frozen importlib._bootstrap>", line 241 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 1176 in create_module
File "<frozen importlib._bootstrap>", line 571 in module_from_spec
File "<frozen importlib._bootstrap>", line 674 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1006 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1027 in _find_and_load
File "/Users/andreasaspe/opt/miniconda3/envs/spyder-env/lib/python3.10/site-packages/torch/__init__.py", line 202 in <module>
File "<frozen importlib._bootstrap>", line 241 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 883 in exec_module
File "<frozen importlib._bootstrap>", line 688 in _load_unlocked
File "<frozen importlib._bootstrap>", line 1006 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1027 in _find_and_load
File "/var/folders/fk/q49x7w9j6t53t4bvkbj_nkdm0000gp/T/ipykernel_98840/4265195184.py", line 1 in <module>
Restarting kernel...
Note: If I activate the virtual environment in the terminal and run a python-script here, then pytorch works absolutely fine. And I have tried a few times that the python-script also suddenly starts running with no problems in the Spyder console. But when I close down Spyder and starts it again, then the issue starts all over and I cannot find a pattern for why it some times works. I don't know if I'm doing anything wrong regarding my virtual environment? I have changed my python interpreter inside of Spyder to be the one in my virtual environment.. Since it works in the terminal and not in the spyder-console I also suspect that it can be something with my spyder-kernel (as the error also suggest). But I can't really figure out how to fix the spyder-kernel.
I have tried to uninstall and install again, installing with pip instead of conda but nothing works. I have searched the internet and for other people it helped to update Spyder to the newest version and making sure that Pytorch is of the newest version as well. It seems like I have the newest editions of everything, though.
Information about Spyder (as standalone installer):
Spyder IDE: 5.3.3
Python 3.9.5 64-bit | Qt 5.15.2 | PyQt5 5.15.7 | Darwin 21.5.0
Information about Pytorch package:
Version 1.12.1
| At the time of writing, with the latest Anaconda update on macOS Monterey, it works after downgrading to pytorch==1.7.1 with Spyder 5.3.3:
$ conda install pytorch==1.7.1
| https://stackoverflow.com/questions/73745860/ |
RuntimeError: CUDA out of memory. How setting max_split_size_mb? | I found this problem running a neural network on Colab Pro+ (with the high RAM option).
RuntimeError: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 15.90 GiB total capacity; 12.04 GiB already allocated; 2.72 GiB free; 12.27 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I have already decreased the batch size to 2. I load the data using h5py.
At this point, I assume the only thing I can try is setting the max_split_size_mb.
I could not find anything about how I can implement the max_split_size_mb. The Pytorch documentation (https://pytorch.org/docs/stable/notes/cuda.html) was not clear to me.
Anyone can support me?
Thank you.
| The max_split_size_mb configuration value can be set as an environment variable.
The exact syntax is documented at https://pytorch.org/docs/stable/notes/cuda.html#memory-management, but in short:
The behavior of caching allocator can be controlled via environment variable PYTORCH_CUDA_ALLOC_CONF. The format is PYTORCH_CUDA_ALLOC_CONF=<option>:<value>,<option2>:<value2>...
Available options:
max_split_size_mb prevents the allocator from splitting blocks larger than this size (in MB). This can help prevent fragmentation and may allow some borderline workloads to complete without running out of memory. Performance cost can range from ‘zero’ to ‘substantial’ depending on allocation patterns. Default value is unlimited, i.e. all blocks can be split. The memory_stats() and memory_summary() methods are useful for tuning. This option should be used as a last resort for a workload that is aborting due to ‘out of memory’ and showing a large amount of inactive split blocks.
...
So, you should be able to set an environment variable in a manner similar to the following:
Windows: set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 (no quotes; cmd.exe would treat them as part of the value)
Linux: export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
This will depend on what OS you're using - in your case, for Google Colab, you might find Setting environment variables in Google Colab helpful.
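On Colab you can also set it from Python, as long as this happens before the first CUDA allocation (a sketch):
import os
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:512'
# only afterwards create CUDA tensors / load the model onto the GPU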
| https://stackoverflow.com/questions/73747731/ |
Approximating an exponential fit with a simple neural network | I've been trying to train a network to solve exponential fits of the form s(t) = s0 * e^(-t/decay_constant). As input, the net takes s and t, and as output it should return s0 and the decay_constant.
This seems like a sufficiently simple problem that I would expect a net would be able to satisfactorely approximate.
However, I cannot get it to work. The loss does go down, and the results don't look completely random, but they are definitely worse than what I simple log-linear fit could achieve.
My setup was (takes about 20s on CPU)
A dense net with ReLU activations
Loss based on log-linear LSQ
Train on random exponential decay examples
import torch
net = torch.nn.Sequential(
torch.nn.Linear(10, 64), torch.nn.ReLU(),
torch.nn.Linear(64, 64), torch.nn.ReLU(),
torch.nn.Linear(64, 32), torch.nn.ReLU(),
torch.nn.Linear(32, 2), torch.nn.ReLU(), # Want outputs to always be positive.
)
loss = torch.nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.005)
def signal_model(x, s0, decay):
return s0 * torch.exp(-x / decay)
# Generate some datapoints
batch_size = 4096
for episode in range(1000):
# Generate random time, decay_constant s0.
t = (torch.arange(5, 55, 10.) + (torch.rand(batch_size, 5) * 10)).T
decay_constant = torch.rand(batch_size) * 70 + 10
s0 = (torch.rand(batch_size) * 2 - 1) * 50 + 100
# Generate random input data
y = signal_model(t, s0, decay_constant)
data = torch.vstack((t, y)).T
# Make a prediction and calculate loss
coefficients = net(data)
l = loss(-torch.log(signal_model(t, *coefficients.T) + 1e-20), -torch.log(y + 1e-20))
optimizer.zero_grad(); l.backward(); optimizer.step()
To visualize results
import matplotlib.pyplot as plt
t_ = torch.tensor([5, 15, 25, 35, 45], dtype=torch.float32)
plotgrid = torch.arange(0, 50, 0.1)
s = signal_model(t_, 20, 30)
fig, ax = plt.subplots()
ax.plot(plotgrid, signal_model(plotgrid, 20, 30).detach().numpy())
ax.scatter(t_, s.detach().numpy(), label="True")
ax.scatter(
t_,
signal_model(t_, *net(torch.hstack((t_, s)).T)).detach().numpy(),
label="Predicted",
)
ax.legend()
I was wondering whether anyone has insights on why this doesn't work. Is there some fundamental limitation to approximating simple functions? Or is there something that jumps to the eye as inherently wrong? Would love any insights.
What didn't work:
tweaking learning rate
tweaking number of layers / units per layer
tweaking batch_size
changing loss to MSE on coefficients or on the signal directly without log
regularizing with L1loss
using always the same t as input
Somewhat related question: https://stats.stackexchange.com/questions/379884/why-cant-a-single-relu-learn-a-relu
| s0 and decay_constant are not fixed parameter values in your generated data. There is not a single true parameter vector for the model to converge to. You might think that giving it a variety of different examples of exponential outputs from different parameters means it is being trained to predict proper exponential fit coefficients from any given dataset, but that's wrong. Rather it is just being trained to reproduce a specific distribution of exponential fit coefficients (here, given how the training data is generated, decay_constant is uniform between 10 and 80 and s0 is uniform between 50 and 150). As a thought experiment, what if we fed in an example data set with parameters way way way far outside those ranges? The model couldn't be adequate at guessing that relationship given that that part of the (s0, decay_constant) space would be essentially missing entirely from the randomized training set. Think of the space of all possible exponential curves ...
The key thing you are missing is that you can't make a single model that predicts the coefficients for any possible dataset. Rather, you can train the model on any dataset to predict the coefficients for that dataset. If the training data is generated from a sample where those coefficients are fixed, then the model's loss optimization will cause the predicted coefficients to converge to the true coefficients for that dataset. You'd essentially be hijacking a NN optimizer to do what a more common optimizer would do with just the deterministic equation solving you'd normally do.
But if you hand a dataset with no fixed parameters, the outputs can at best represent the distribution of parameters according to however you defined it when making the synthetic training data. Because it's not possible to make a training set that adequately includes every possible exponential curve there can be, the model will perform poorly in general. It may perform OK for curve fitting where the parameters are close to the synthetic training distribution you made, but this will depend a lot of the variance and the quality of convergence.
I made a similar type of model for linear regression coefficient predictions. Note: not fitting a linear model, rather jumping straight to predicting the coefficients of a linear model from a dataset generated by a fixed, true, set of coefficients. It is very similar in spirit to what you're trying to do here, but note how it must be trained on the one single dataset you want the coefficients for. It cannot predict generic coefficients across all possible linear relationships. To get meaningful outputs, you don't just make single predictions - rather you would do something like average the predictions across a test set held out from the same data generated by the fixed, true parameters.
https://github.com/spearsem/nn4params
| https://stackoverflow.com/questions/73748438/ |
Is CUDA Toolkit release 11.7 compatible with PyTorch's CUDA 11.6 builds? | I have installed a recent version of the CUDA toolkit, 11.7, but while downloading PyTorch I see only builds for CUDA 11.6; are the two compatible?
| There is a table with CUDA compatibility:
https://pytorch.org/get-started/locally/
At this moment the latest supported CUDA version is 11.6.
| https://stackoverflow.com/questions/73768657/ |
Why is the training error greater than the test error? | I have trained a model in .train() mode (a PyTorch nn.Module).
I have saved the BCELoss (in .train() mode) and the accuracies on the test and train datasets (but computed in .eval() mode).
So, the results were:
(for example in epoch 153)
error classification (train, test): 0.5819954128440368 , 0.37209302325581395
and the loss was:
0.0032386823306408906
How was it possible?
Was I wrong to switch the two modes? Is this the problem?
| Test error is not always greater than train error, but it seems that your loss function or model structure has some problem, considering that you trained your model for 153 epochs.
Why don't you design a new layer structure, referring to the literature? In my experience, this problem usually occurs when the model is not well suited to the dataset, and it is not solved by simply increasing the depth of the model.
| https://stackoverflow.com/questions/73775573/ |
How to turn a numpy array (mic/loopback input) into a torchaudio waveform for a PyTorch classifier | I am currently working on training a classifier with PyTorch and torchaudio. For this purpose I followed the following tutorial: https://towardsdatascience.com/audio-deep-learning-made-simple-sound-classification-step-by-step-cebc936bbe5
This all works like a charm and my classifier is now able to successfully classify .wav files.
However I would like to turn this into a real-time classifier, that is able to also classify recordings from a microphone/loopback input.
For this I would hope to not have to save a recording into a .wav file to load it again but instead directly feed the classifier with an in memory recording.
The tutorial uses the .load function of torchaudio to load a .wav file and return a waveform and sample rate as follows:
sig, sr = torchaudio.load(audio_file)
Loopback is pretty much required, and since pyaudio apparently does not support loopback devices yet (except for a fork that is very likely outdated), I stumbled across soundcard:
https://soundcard.readthedocs.io/en/latest/
I found this code to yield a recording of my speaker loopback:
speakers = sc.all_speakers()
# get the current default speaker on your system:
default_speaker = sc.default_speaker()
# get a list of all microphones:
mics = sc.all_microphones(include_loopback=True)
# get the current default microphone on your system:
default_mic = mics[0]
with default_mic.recorder(samplerate=148000) as mic, \
default_speaker.player(samplerate=148000) as sp:
print("Recording...")
data = mic.record(numframes=1000000)
print("Done...Stop your sound so you can hear playback")
time.sleep(5)
sp.play(data)
However now of course I don't want to play that audio with the .play function but instead pass it onto to torchaudio/the classifier.
Since I am new to the world of audio processing I have no idea how to get this data into a suitable format similar to the one returned by torchaudio.
According to the docs of soundcard the data has the following format:
The data will be returned as a frames × channels float32 numpy array
As a last resort maybe saving it into an in memory .wav file and then reading it with torchaudio is possible?
Any help is appreciated. Thank you in advance!
| According to the doc, you will get a numpy array of shape frames × channels. For a stereo microphone this will be (N, 2), for a mono microphone (N, 1).
This is pretty much what the torch load function outputs: sig is a raw signal, and sr the sampling rate. You have specified your sample rate yourself
to your mic (so sr = 148000), and you just need to convert your numpy raw signal to a torch tensor with:
sig_mic = torch.tensor(data)
Just check that the dimensions match: torchaudio.load() returns tensors of shape (channels, frames), e.g. (2, N) for stereo, whereas soundcard gives (frames, channels); in that case, just transpose the tensor:
sig_mic = torch.tensor(data).T
| https://stackoverflow.com/questions/73787169/ |
How to vectorize a torch function? | When using numpy I can use np.vectorize to vectorize a function that contains if statements in order for the function to accept array arguments. How can I do the same with torch in order for a function to accept tensor arguments?
For example, the final print statement in the code below will fail. How can I make this work?
import numpy as np
import torch as tc
def numpy_func(x):
return x if x > 0. else 0.
numpy_func = np.vectorize(numpy_func)
print('numpy function (scalar):', numpy_func(-1.))
print('numpy function (array):', numpy_func(np.array([-1., 0., 1.])))
def torch_func(x):
return x if x > 0. else 0.
print('torch function (scalar):', torch_func(-1.))
print('torch function (tensor):', torch_func(tc.tensor([-1., 0., 1.])))
| You can use .apply_() for CPU tensors. For CUDA ones, the task is problematic: if statements aren't easy to SIMDify.
You may apply the same workaround for functorch.vmap as video drivers used to do for shaders: evaluate both branches of the condition and stick to arithmetic.
Otherwise, just use a for loop: that's what np.vectorize() mostly does anyway.
def torch_vectorize(f, inplace=False):
def wrapper(tensor):
out = tensor if inplace else tensor.clone()
view = out.flatten()
for i, x in enumerate(view):
view[i] = f(x)
return out
return wrapper
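For your particular example, the branch-free arithmetic version mentioned above could look like this (a sketch):
import torch

def relu_like(x):
    return torch.where(x > 0., x, torch.zeros_like(x))   # evaluates both branches, selects elementwise

print(relu_like(torch.tensor([-1., 0., 1.])))   # tensor([0., 0., 1.])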
| https://stackoverflow.com/questions/73791594/ |
ValueError: too many values to unpack (expected 2) while trying to load yolov5 model | I am trying to load a trained yolov5 model on a custom dataset using this:
# Model
model = torch.hub.load('/home/yolov5/runs/train/yolo_sign_det2/weights', 'best') # or yolov5n - yolov5x6, custom
but I am running into this error:
ValueError Traceback (most recent call last)
<ipython-input-3-c832ab8c1eab> in <module>
2
3 # Model
----> 4 model = torch.hub.load('/home/yolov5/runs/train/yolo_sign_det2/weights', 'best') # or yolov5n - yolov5x6, custom
5
6 # Images
~/.conda/envs/yolo/lib/python3.6/site-packages/torch/hub.py in load(repo_or_dir, model, source, force_reload, verbose, skip_validation, *args, **kwargs)
395
396 if source == 'github':
--> 397 repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose, skip_validation)
398
399 model = _load_local(repo_or_dir, model, *args, **kwargs)
~/.conda/envs/yolo/lib/python3.6/site-packages/torch/hub.py in _get_cache_or_reload(github, force_reload, verbose, skip_validation)
163 os.makedirs(hub_dir)
164 # Parse github repo information
--> 165 repo_owner, repo_name, branch = _parse_repo_info(github)
166 # Github allows branch name with slash '/',
167 # this causes confusion with path on both Linux and Windows.
~/.conda/envs/yolo/lib/python3.6/site-packages/torch/hub.py in _parse_repo_info(github)
110 else:
111 repo_info, branch = github, None
--> 112 repo_owner, repo_name = repo_info.split('/')
113
114 if branch is None:
ValueError: too many values to unpack (expected 2)
Can anybody please tell what I'm doing wrong?
| From the Pytorch documentation website, it seems that the source option is set to github by default so your line of code:
model = torch.hub.load('/home/yolov5/runs/train/yolo_sign_det2/weights', 'best') # or yolov5n - yolov5x6, custom
actually means:
model = torch.hub.load('/home/yolov5/runs/train/yolo_sign_det2/weights', 'best', source='github') # or yolov5n - yolov5x6, custom
Thus, you need to set source='local' as explained in the linked documentation; by keeping source='github', your program tries to find a GitHub repository (as can be seen in your error messages).
The fix is to use this line of code instead with the source set to local:
torch.hub.load('/home/yolov5/runs/train/yolo_sign_det2/weights', 'best', source='local')
| https://stackoverflow.com/questions/73796949/ |
How to convert pytorch model to being on GPU? | I want to run pytorch on a GPU.
I have this code:
import torch
import torch.nn as nn
device = torch.device("cuda:0")
n_input, n_hidden, n_out, batch_size, learning_rate = 10, 15, 1, 100, 0.01
data_x = torch.randn(batch_size, n_input)
data_y = (torch.rand(size=(batch_size, 1)) < 0.5).float()
print(data_x.size())
print(data_y.size())
model = nn.Sequential(nn.Linear(n_input, n_hidden),
nn.ReLU(),
nn.Linear(n_hidden, n_out),
nn.Sigmoid())
#model.to(device)
print(next(model.parameters()).is_cuda)
loss_function = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
losses = []
for epoch in range(5000):
pred_y = model(data_x)
loss = loss_function(pred_y, data_y)
losses.append(loss.item())
model.zero_grad()
loss.backward()
optimizer.step()
print(torch.cuda.get_device_name())
print(torch.__version__)
print(torch.version.cuda)
import matplotlib.pyplot as plt
plt.plot(losses)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.title("Learning rate %f"%(learning_rate))
plt.show()
When I run it, the output is:
torch.Size([100, 10])
torch.Size([100, 1])
False
Quadro P2000
1.12.1
11.3
when I uncomment the line model.to(device) and rerun it, I get:
Traceback (most recent call last):
File "basic_pytorch_with_gpu.py", line 26, in <module>
pred_y = model(data_x)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)
I can see other questions like this (e.g. here) but can't work out how I'm meant to convert to GPU.
For example, I change the data_x and data_y lines to:
data_x = torch.randn(batch_size, n_input).cuda()
data_y = (torch.rand(size=(batch_size, 1)) < 0.5).float().cuda()
But I get the same error - could someone explain how to run this code on a GPU?
| As suggested in the comments, you need to transfer both your model and your data to the same device. Below should work:
import torch
import torch.nn as nn
device = torch.device("cuda:0")
n_input, n_hidden, n_out, batch_size, learning_rate = 10, 15, 1, 100, 0.01
data_x = torch.randn(batch_size, n_input)
data_y = (torch.rand(size=(batch_size, 1)) < 0.5).float()
print(data_x.size())
print(data_y.size())
model = nn.Sequential(nn.Linear(n_input, n_hidden),
nn.ReLU(),
nn.Linear(n_hidden, n_out),
nn.Sigmoid())
# Transfer to device
model = model.to(device)
data_x = data_x.to(device)
data_y = data_y.to(device)
print(next(model.parameters()).is_cuda)
loss_function = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
losses = []
for epoch in range(5000):
pred_y = model(data_x)
loss = loss_function(pred_y, data_y)
losses.append(loss.item())
model.zero_grad()
loss.backward()
optimizer.step()
print(torch.cuda.get_device_name())
print(torch.__version__)
print(torch.version.cuda)
import matplotlib.pyplot as plt
plt.plot(losses)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.title("Learning rate %f"%(learning_rate))
plt.show()
| https://stackoverflow.com/questions/73798104/ |
python methods in comparison | Rock, Paper, Scissors variable bot has default value
The variable alex has values that are passed from main.py.
When I call the comparison method, I get an error.
The comparison method:
from secrets import choice
from variants import Variants
Player.py
class Player:
name = '',
choice = ''
def __init__(self, choise = 'ROCK', name = 'bot'):
self.name = name
self.choice = choice
def whoWins(self, bot, alex):
if bot.choice > alex.choice:
print('bot, winner')
if bot.choice < alex.choice:
print('Alex, winner')
if bot.choice == alex.choice:
print('draw')
main.py
from variants import Variants
from player import Player
bot = Player()
alex = Player(Variants.ROCK, "Alex")
print(bot.whoWins(bot, alex))
variants.py
from enum import Enum
class Variants(Enum):
ROCK = 1,
PAPER = 2,
SCISSORS = 3
| Answer:
I had to modify the comparison method. I did it like this.
main.py remained the same:
from variants import Variants
from player import Player

bot = Player()
alex = Player(Variants.ROCK, "Alex")
print(bot.whoWins(bot, alex))
variants.py remained the same
from enum import Enum
class Variants(Enum):
ROCK = 1
PAPER = 2
SCISSORS = 3
Player.py
from variants import Variants

class Player:
    def __init__(self, choice=Variants.ROCK, name='bot'):
        self.name = name
        self.choice = choice

    def whoWins(self, bot, alex):
        # rock beats scissors, scissors beat paper, paper beats rock
        if bot.choice == alex.choice:
            print('draw')
        elif (bot.choice == Variants.ROCK and alex.choice == Variants.SCISSORS) or \
             (bot.choice == Variants.SCISSORS and alex.choice == Variants.PAPER) or \
             (bot.choice == Variants.PAPER and alex.choice == Variants.ROCK):
            print('Bot, win')
        else:
            print('Alex, win!')
The original error (TypeError: '>' not supported between instances of 'method' and 'method') is gone.
All is OK!
PS C:\Users\user\2> python.exe main.py
Alex, win!
PS C:\Users\user\2>
| https://stackoverflow.com/questions/73817302/ |
Pytorch: Why does evaluating a string (of an optimizer) in a function break the function? | I have a pytorch lightning class that looks like this:
import torch.optim as optim
class GraphLevelGNN(pl.LightningModule):
def __init__(self,**model_kwargs):
super().__init__()
self.save_hyperparameters()
self.model = GraphGNNModel(**model_kwargs)
self.loss_module = nn.BCEWithLogitsLoss()
self.optimizer = eval('optim.SGD(self.parameters(),lr=0.1)')
def forward(self, data, mode="train"):
x, edge_index, batch_idx = data.x, data.edge_index, data.batch
x = self.model(x, edge_index, batch_idx)
x = x.squeeze(dim=-1)
if self.hparams.c_out == 1:
preds = (x > 0).float()
data.y = data.y.float()
else:
preds = x.argmax(dim=-1)
loss = self.loss_module(x, data.y)
acc = (preds == data.y).sum().float() / preds.shape[0]
return loss, acc,preds
def configure_optimizers(self):
optimizer = self.optimizer
return optimizer
def training_step(self, batch, batch_idx):
loss, acc, _ = self.forward(batch, mode="train")
self.log('train_loss', loss,on_epoch=True,logger=True,batch_size=64)
self.log('train_acc', acc,on_epoch=True,logger=True,batch_size=64)
def validation_step(self, batch, batch_idx):
loss, acc,_ = self.forward(batch, mode="val")
self.log('val_acc', acc,on_epoch=True,logger=True,batch_size=64)
self.log('val_loss', loss,on_epoch=True,logger=True,batch_size=64)
def test_step(self, batch, batch_idx):
loss,acc, preds = self.forward(batch, mode="test")
self.log('test_acc', acc,on_epoch=True,logger=True,batch_size=64)
I eventually want to put the optimizer into a ray tune object, so I want it to not be hard coded in this function.
Why is it that when I have:
self.optimizer = optim.SGD(self.parameters(),lr=0.1)
in the __init__ part, the script works, but when I change to eval('optim.SGD(self.parameters(),lr=0.1)'), then the function breaks with the error:
File "script.py", line 560, in __init__
self.optimizer = eval('optim.SGD(self.parameters(),lr=0.1)')
File "<string>", line 1, in <module>
NameError: name 'optim' is not defined
I also tried changing optim to torch.optim but it produces the same error.
Should the eval not change the string 'optim.SGD(self.parameters(),lr=0.1)' to optim.SGD(self.parameters(),lr=0.1)
| Even though using eval is not the correct approach to your problem, let me just explain why you are facing this error.
The python function eval does not import any modules or functions in the script you are running by default.
For example, you can think of eval as a fresh Python interpreter: when you first open the interpreter and have not imported any modules or defined any functions, you get an error when you try to use them; for instance, calling time.sleep() without importing time raises a NameError.
To overcome this, eval lets you supply the names it should see via its globals and locals arguments:
eval(expression[, globals[, locals]])
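For your case, a minimal sketch would be passing the module in explicitly (assuming optim is imported at the top of the file):
import torch.optim as optim
self.optimizer = eval('optim.SGD(params, lr=0.1)',
                      {'optim': optim, 'params': self.parameters()})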
You can read about it here -
https://docs.python.org/3/library/functions.html#eval
https://www.programiz.com/python-programming/methods/built-in/eval
| https://stackoverflow.com/questions/73817605/ |
Performance gap between `batch_size==32` and `batch_size==8, gradient_accumulation==4` | I tried to use gradient accumulation in my project. To my understanding, the gradient accumulation is the same as increasing the batch size by x times. I tried batch_size==32 and batch_size==8, gradient_accumulation==4 in my project, however the result varies even when I disabled shuffle in dataloader. The batch_size==8, accumulation==4 variant's result is significantly poorer.
I wonder why?
Here is my snippet:
loss = model(x)
epoch_loss += float(loss)
loss.backward()
# step starts from 1
if (step % accumulate_step == 0) or (step == len(dataloader)):
if clip_grad_norm > 0:
nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_grad_norm)
optimizer.step()
if scheduler:
scheduler.step()
optimizer.zero_grad()
| Assuming your loss is mean-reduced, then you need to scale the loss by 1/accumulate_step
The default behavior of most loss functions is to return the average loss value across each batch element. This is referred to as mean-reduction, and has the property that batch size does not affect the magnitude of the loss (or the magnitude of the gradients of loss). However, when implementing gradient accumulation, each time you call backward you are adding gradients to the existing gradients. Therefore, if you call backward four times on quarter-sized batches, you are actually producing gradients that are four-times larger than if you had called backward once on a full-sized batch. To account for this behavior you need to divide the gradients by accumulate_step, which can be accomplished by scaling the loss by 1/accumulate_step before back-propagation.
loss = model(x) / accumulate_step
loss.backward()
# step starts from 1
if (step % accumulate_step == 0) or (step == len(dataloader)):
if clip_grad_norm > 0:
nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip_grad_norm)
optimizer.step()
if scheduler:
scheduler.step()
optimizer.zero_grad()
| https://stackoverflow.com/questions/73844065/ |
A simple question about torch.einsum function | A = torch.randn(5,5)
B = torch.einsum("ii->i",A)
C = torch.einsum("ii",A)
As I show above, I know what B means: it extracts the diagonal elements.
print("before:",A)
print("after:",B)
print('Why:',C)
results
before: tensor([[-0.2339, 0.2501, -1.1814, 1.4392, -0.5461],
[ 1.4908, 0.0626, -0.6849, -1.3106, 0.1257],
[ 3.3362, -1.7438, 0.3027, 0.4346, 0.6830],
[-0.6183, 0.5965, 1.2653, 1.0319, -0.0670],
[-0.5531, -0.4245, -2.4869, 1.2972, 0.6732]])
after: tensor([-0.2339, 0.0626, 0.3027, 1.0319, 0.6732])
Why: tensor(1.8366)
So, why is tensor C 1.8366?
| Executing torch.einsum("ii", A) is equivalent to torch.einsum("ii->", A), which means the output has no indices. You can interpret the output as a rank-zero tensor (a scalar).
So this corresponds to computing the sum of the diagonal elements.
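Equivalently, without einsum:
C = torch.trace(A)   # same as A.diagonal().sum()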
| https://stackoverflow.com/questions/73867143/ |
How can I expand a tensor in Libtorch? (The C++ version of PyTorch) | How can I use LibTorch to expand a tensor of the shape 42, 358 into a shape of 10, 42, 358?
I know how to do this in PyTorch, (AKA Torch).
torch.ones(42, 358).expand(10, -1, -1).shape
returns
torch.Size([10, 42, 358])
In LibTorch I have a tensor of the same size I am trying to "expand" in the same way.
auto expanded_state_batch = state_batch.expand(10, -1, -1);
I get the following error...
error: no matching function for call to ‘at::Tensor::expand(int, int, int)’
335 | auto expanded_state_batch = state_batch.expand(10, -1, -1);
| ~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In file included from /home/iii/tor/m_gym/libtorch/include/ATen/core/Tensor.h:3,
from /home/iii/tor/m_gym/libtorch/include/ATen/Tensor.h:3,
from /home/iii/tor/m_gym/libtorch/include/torch/csrc/autograd/function_hook.h:3,
from /home/iii/tor/m_gym/libtorch/include/torch/csrc/autograd/cpp_hook.h:2,
from /home/iii/tor/m_gym/libtorch/include/torch/csrc/autograd/variable.h:6,
from /home/iii/tor/m_gym/libtorch/include/torch/csrc/autograd/autograd.h:3,
from /home/iii/tor/m_gym/libtorch/include/torch/csrc/api/include/torch/autograd.h:3,
from /home/iii/tor/m_gym/libtorch/include/torch/csrc/api/include/torch/all.h:7,
from /home/iii/tor/m_gym/libtorch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/iii/tor/m_gym/multiv_normal.cpp:2:
/home/iii/tor/m_gym/libtorch/include/ATen/core/TensorBody.h:2372:19: note: candidate: ‘at::Tensor at::Tensor::expand(at::IntArrayRef, bool) const’
2372 | inline at::Tensor Tensor::expand(at::IntArrayRef size, bool implicit) const {
| ^~~~~~
/home/iii/tor/m_gym/libtorch/include/ATen/core/TensorBody.h:2372:19: note: candidate expects 2 arguments, 3 provided
It says that .expand only takes two integers but three were given. I've tried a few combinations and I always get an error.
Exactly what I'm doing here is concatenating the 42, 385 tensor ten times into a new tensor. I could do this in a loop with torch::cat, but this would be uglier.
| As the compiler tells you, at::Tensor::expand expects an at::IntArrayRef. Hence you want to write something like:
auto expanded_state_batch = state_batch.expand({10, -1, -1});
| https://stackoverflow.com/questions/73889591/ |
What does the "affine" parameter do in PyTorch nn.BatchNorm2d? | What is the purpose of the affine argument and what does it do?
class DilConv(nn.Module):
def __init__(self, in_C, out_C, kernel_size, stride, padding, affine=True):
super(DilConv, self).__init__()
self.ops = nn.Sequential(
nn.ReLU(),
nn.Conv2d(in_C, in_C, kernel_size=kernel_size, stride=stride,
padding=padding, dilation=2, groups=in_C, bias=False),
nn.Conv2d(in_C, out_C, kernel_size=1, bias=False),
nn.BatchNorm2d(out_C, affine=affine))
def forward(self, x):
return self.ops(x)
| If you look at the documentation page for BatchNorm2d, you will read:
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
Checking the source code on the base class _NormBase, you will see parameters weight and bias are only defined if the argument affine is set to True. These parameters correspond to gamma and beta in the documentation formulae.
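You can verify this quickly:
import torch.nn as nn
bn = nn.BatchNorm2d(8, affine=False)
print(bn.weight, bn.bias)   # None None, i.e. no learnable gamma/beta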
| https://stackoverflow.com/questions/73891401/ |
How to use pytorch multi-head attention for classification task? | I have a dataset where x shape is (10000, 102, 300) such as ( samples, feature-length, dimension) and y (10000,) which is my binary label. I want to use multi-head attention using PyTorch. I saw the PyTorch documentation from here but there is no explanation of how to use it. How can I use my dataset for classification using multi-head attention?
| Here is a simple sketch for classification; this part is essentially the encoder layer of a Transformer, except that at the end you need a global-average-pooling step and a linear layer for classification (point_wise_neural_network is a placeholder for the position-wise feed-forward block):
# embed_dim=300 must be divisible by num_heads; batch_first=True matches your (batch, seq, dim) layout
attention_layer = nn.MultiheadAttention(embed_dim=300, num_heads=6, dropout=0.1, batch_first=True)
attn_out, _ = attention_layer(x, x, x)                          # self-attention over the 102 positions
x = nn.LayerNorm(300)(x + point_wise_neural_network(attn_out))  # residual connection + LayerNorm
pooled = x.mean(dim=1)                                          # global average pooling over the sequence
logits = nn.Linear(300, num_of_classes)(pooled)
| https://stackoverflow.com/questions/73911967/ |
Size of WebDataset in Pytorch | When it comes to the Pytorch Dataloader which takes a default dataset (e.g. datasets.ImageFolder), we can find the size of a dataset that is used by the dataloader with len(dataloader). However, what about WebDataset?
As WebDataset is a PyTorch Dataset, is it possible to get the size of a loader which takes a WebDataset?
https://webdataset.github.io/webdataset/
| WebDataset doesn't provide a __len__ method, as it conforms to the PyTorch IterableDataset interface. IterableDataset is designed for stream-like data, and considers it wrong to have a len().
If you have code that depends on len() to be available, you can set the length to some value using with_length():
>>> dataset = wds.WebDataset(url)
>>> len(dataset)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'WebDataset' has no len()
>>> dataset = dataset.with_length(10)
>>> len(dataset)
10
| https://stackoverflow.com/questions/73918904/ |
Batched Cosine Similarity in PyTorch | Inputs:
Tensor a of shape [batch_size, n, d]
Tensor b of shape [batch_size, m, d]
Output:
Tensor c of shape [batch_size, n, m] where c[i, j, k] is the cosine similarity between a[i, j] and b[i, k]
How to implement this efficiently in PyTorch (preferably without for loops)?
| try this:
c = torch.cosine_similarity(a.unsqueeze(2), b.unsqueeze(1), dim=-1)
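If memory is a concern, an equivalent formulation (a sketch) normalizes first and uses a batched matrix product, which avoids materializing the intermediate [batch_size, n, m, d] broadcast:
import torch.nn.functional as F
c = torch.bmm(F.normalize(a, dim=-1), F.normalize(b, dim=-1).transpose(1, 2))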
| https://stackoverflow.com/questions/73923751/ |
What's the difference between torch.mm, torch.matmul and torch.mul? | After reading the pytorch documentation, I still require help in understanding the difference between torch.mm, torch.matmul and torch.mul. As I do not fully understand them, I cannot concisely explain this.
B = torch.tensor([[ 1.1207],
[-0.3137],
[ 0.0700],
[ 0.8378]])
C = torch.tensor([[ 0.5146, 0.1216, -0.5244, 2.2382]])
print(torch.mul(B,C))
print(torch.matmul(B,C))
print(torch.mm(B,C))
All three produce the following output (i.e. they perform matrix multiplication):
tensor([[ 0.5767, 0.1363, -0.5877, 2.5084],
[-0.1614, -0.0381, 0.1645, -0.7021],
[ 0.0360, 0.0085, -0.0367, 0.1567],
[ 0.4311, 0.1019, -0.4393, 1.8752]])
A = torch.tensor([[1.8351,2.1536], [-0.8320,-1.4578]])
B = torch.tensor([[2.9355, 0.3450], [0.5708, 1.9957]])
print(torch.mul(A,B))
print(torch.matmul(A,B))
print(torch.mm(A,B))
Different outputs are produced. torch.mul broadcasts and performs element-wise multiplication instead, whilst the other two still perform matrix multiplication.
tensor([[ 5.3869, 0.7430],
[-0.4749, -2.9093]])
tensor([[ 6.6162, 4.9310],
[-3.2744, -3.1964]])
tensor([[ 6.6162, 4.9310],
[-3.2744, -3.1964]])
Inputs
tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4)
tensor1 =
tensor([[[-0.2267, 0.6311, -0.5689, 1.2712],
[-0.0241, -0.5362, 0.5481, -0.4534],
[-0.9773, -0.6842, 0.6927, 0.3363]],
[[-2.6759, 0.7817, 2.6821, 0.7037],
[ 0.1804, 0.3938, -1.2235, 0.8729],
[-1.9873, -0.5030, 0.0945, 0.2688]],
[[ 0.4244, 1.7350, 0.0558, -0.1861],
[-0.9063, -0.4737, -0.4284, -0.3883],
[ 0.4827, -0.2628, 1.0084, 0.2769]],
[[ 0.2939, 0.4604, 0.8014, -1.8760],
[ 1.8807, 0.1623, 0.2344, -0.6221],
[ 1.3964, 3.1637, 0.7889, 0.1195]],
[[-0.7202, 1.4250, 2.4302, 1.4811],
[-0.2301, 0.6280, 0.5379, 0.5178],
[-2.1073, -1.4399, -0.9451, 0.8534]],
[[ 2.8178, -0.4451, -0.7871, -0.5198],
[ 0.2825, 1.0692, 0.1559, 1.2945],
[-0.5828, -1.6287, -2.0661, -0.4107]],
[[ 0.5077, -0.6349, -0.0160, -0.4477],
[-0.8070, 0.3746, 1.1852, 0.0351],
[-0.6454, 1.5877, 0.8561, 1.1021]],
[[ 0.1191, 1.0116, 0.5807, 1.2105],
[-0.5403, 1.2404, 1.1532, 0.6537],
[ 1.4757, -1.3648, -1.7158, -1.0289]],
[[-0.1326, 0.3715, 0.2429, -0.0794],
[ 0.3224, -0.3064, 0.1963, 0.7276],
[ 0.9098, 1.5984, -1.4953, 0.0420]],
[[ 0.1511, 0.9691, -0.5204, 0.3858],
[ 0.4566, 1.5482, -0.3401, 0.5960],
[-0.9998, 0.7198, 0.9286, 0.4498]]])
tensor2 =
tensor([-1.6350, 1.0335, -0.9023, 0.0696])
print(torch.mul(tensor1,tensor2))
print(torch.matmul(tensor1,tensor2))
print(torch.mm(tensor1,tensor2))
Outputs are all different. I think torch.mul broadcasts and multiplies every 4 elements of the matrix by the vector, tensor2, i.e. [-0.2267, 0.6311, -0.5689, 1.2712] x tensor2 element-wise, [-0.0241, -0.5362, 0.5481, -0.4534] x tensor2 element-wise and so on. I do not understand what torch.matmul is doing. I think it is to do with the 5th bullet-point of the documentation (If both arguments...), but I am unable to make sense of this. https://pytorch.org/docs/stable/generated/torch.matmul.html
I think the reason torch.mm is unable to produce an output is the fact that it cannot broadcast (please correct me if I'm wrong).
tensor([[[ 3.7071e-01, 6.5221e-01, 5.1335e-01, 8.8437e-02],
[ 3.9400e-02, -5.5417e-01, -4.9460e-01, -3.1539e-02],
[ 1.5979e+00, -7.0715e-01, -6.2499e-01, 2.3398e-02]],
[[ 4.3752e+00, 8.0790e-01, -2.4201e+00, 4.8957e-02],
[-2.9503e-01, 4.0699e-01, 1.1040e+00, 6.0723e-02],
[ 3.2494e+00, -5.1981e-01, -8.5253e-02, 1.8701e-02]],
[[-6.9397e-01, 1.7931e+00, -5.0379e-02, -1.2945e-02],
[ 1.4818e+00, -4.8954e-01, 3.8657e-01, -2.7010e-02],
[-7.8920e-01, -2.7163e-01, -9.0992e-01, 1.9265e-02]],
[[-4.8055e-01, 4.7582e-01, -7.2309e-01, -1.3051e-01],
[-3.0750e+00, 1.6770e-01, -2.1146e-01, -4.3281e-02],
[-2.2832e+00, 3.2697e+00, -7.1183e-01, 8.3139e-03]],
[[ 1.1775e+00, 1.4727e+00, -2.1928e+00, 1.0304e-01],
[ 3.7617e-01, 6.4900e-01, -4.8534e-01, 3.6025e-02],
[ 3.4455e+00, -1.4882e+00, 8.5277e-01, 5.9369e-02]],
[[-4.6072e+00, -4.6005e-01, 7.1024e-01, -3.6160e-02],
[-4.6191e-01, 1.1051e+00, -1.4067e-01, 9.0053e-02],
[ 9.5283e-01, -1.6833e+00, 1.8643e+00, -2.8571e-02]],
[[-8.3005e-01, -6.5622e-01, 1.4461e-02, -3.1148e-02],
[ 1.3195e+00, 3.8716e-01, -1.0694e+00, 2.4421e-03],
[ 1.0553e+00, 1.6409e+00, -7.7250e-01, 7.6669e-02]],
[[-1.9477e-01, 1.0455e+00, -5.2398e-01, 8.4209e-02],
[ 8.8343e-01, 1.2820e+00, -1.0405e+00, 4.5478e-02],
[-2.4128e+00, -1.4106e+00, 1.5482e+00, -7.1578e-02]],
[[ 2.1675e-01, 3.8391e-01, -2.1914e-01, -5.5219e-03],
[-5.2707e-01, -3.1668e-01, -1.7711e-01, 5.0619e-02],
[-1.4876e+00, 1.6520e+00, 1.3493e+00, 2.9198e-03]],
[[-2.4706e-01, 1.0015e+00, 4.6955e-01, 2.6842e-02],
[-7.4663e-01, 1.6001e+00, 3.0685e-01, 4.1462e-02],
[ 1.6347e+00, 7.4395e-01, -8.3792e-01, 3.1291e-02]]])
tensor([[ 1.6247, -1.0409, 0.2891],
[ 2.8120, 1.2767, 2.6630],
[ 1.0358, 1.3518, -1.9515],
[-0.8583, -3.1620, 0.2830],
[ 0.5605, 0.5759, 2.8694],
[-4.3932, 0.5925, 1.1053],
[-1.5030, 0.6397, 2.0004],
[ 0.4109, 1.1704, -2.3467],
[ 0.3760, -0.9702, 1.5165],
[ 1.2509, 1.2018, 1.5720]])
| In short:
torch.mm - performs a matrix multiplication without broadcasting - (2D tensor) by (2D tensor)
torch.mul - performs a elementwise multiplication with broadcasting - (Tensor) by (Tensor or Number)
torch.matmul - matrix product with broadcasting - (Tensor) by (Tensor) with different behaviors depending on the tensor shapes (dot product, matrix product, batched matrix products).
Some details:
torch.mm - performs a matrix multiplication without broadcasting
It expects two 2D tensors so n×m * m×p = n×p
From the documentation https://pytorch.org/docs/stable/generated/torch.mm.html:
This function does not broadcast. For broadcasting matrix products, see torch.matmul().
torch.mul - performs a elementwise multiplication with broadcasting - (Tensor) by (Tensor or Number)
Docs: https://pytorch.org/docs/stable/generated/torch.mul.html
torch.mul does not perform a matrix multiplication. It broadcasts two tensors and performs an elementwise multiplication. So when you use it with tensors 1x4 * 4x1 it will work similarly to:
import torch
a = torch.FloatTensor([[1], [2], [3]])
b = torch.FloatTensor([[1, 10, 100]])
a, b = torch.broadcast_tensors(a, b)
print(a)
print(b)
print(a * b)
tensor([[1., 1., 1.],
[2., 2., 2.],
[3., 3., 3.]])
tensor([[ 1., 10., 100.],
[ 1., 10., 100.],
[ 1., 10., 100.]])
tensor([[ 1., 10., 100.],
[ 2., 20., 200.],
[ 3., 30., 300.]])
torch.matmul
It is better to check out the official documentation https://pytorch.org/docs/stable/generated/torch.matmul.html as it uses different modes depending on the input tensors. It may perform dot product, matrix-matrix product or batched matrix products with broadcasting.
As for your question regarding product of:
tensor1 = torch.randn(10, 3, 4)
tensor2 = torch.randn(4)
it is a batched version of a product. Please check this simple example for understanding:
import torch
# 3x1x3
a = torch.FloatTensor([[[1, 2, 3]], [[3, 4, 5]], [[6, 7, 8]]])
# 3
b = torch.FloatTensor([1, 10, 100])
r1 = torch.matmul(a, b)
r2 = torch.stack((
torch.matmul(a[0], b),
torch.matmul(a[1], b),
torch.matmul(a[2], b),
))
assert torch.allclose(r1, r2)
So it can be seen as multiple matmul operations stacked together across the batch dimension.
Also it may be useful to read about broadcasting:
https://pytorch.org/docs/stable/notes/broadcasting.html#broadcasting-semantics
| https://stackoverflow.com/questions/73924697/ |
Pytorch low gpu util after first epoch | Hi, I'm training my PyTorch model on a remote server.
All jobs are managed by Slurm.
My problem is that training is extremely slow after the first epoch.
I checked the GPU utilization.
During the first epoch, utilization looked like the image below.
I can see the GPU was being utilized.
But from the second epoch on, the utilization percentage is almost zero.
My dataloader code looks like this:
class img2selfie_dataset(Dataset):
def __init__(self, path, transform, csv_file, cap_vec):
self.path = path
self.transformer = transform
self.images = [path + item for item in list(csv_file['file_name'])]
self.smiles_list = cap_vec
def __getitem__(self, idx):
img = Image.open(self.images[idx])
img = self.transformer(img)
label = self.smiles_list[idx]
label = torch.Tensor(label)
return img, label.type(torch.LongTensor)
def __len__(self):
return len(self.images)
My dataloader is defined like this
train_data_set = img2selfie_dataset(train_path, preprocess, train_dataset, train_cap_vec)
train_loader = DataLoader(train_data_set, batch_size = 256, num_workers = 2, pin_memory = True)
val_data_set = img2selfie_dataset(train_path, preprocess, val_dataset, val_cap_vec)
val_loader = DataLoader(val_data_set, batch_size = 256, num_workers = 2, pin_memory = True)
My training step is defined like this:
train_loss = []
valid_loss = []
epochs = 20
best_loss = 1e5
for epoch in range(1, epochs + 1):
print('Epoch {}/{}'.format(epoch, epochs))
print('-' * 10)
epoch_train_loss, epoch_valid_loss = train(encoder_model, transformer_decoder, train_loader, val_loader, criterion, optimizer)
train_loss.append(epoch_train_loss)
valid_loss.append(epoch_valid_loss)
if len(valid_loss) > 1:
if valid_loss[-1] < best_loss:
print(f"valid loss on this {epoch} is better than previous one, saving model.....")
torch.save(encoder_model.state_dict(), 'model/encoder_model.pickle')
torch.save(transformer_decoder.state_dict(), 'model/decoder_model.pickle')
best_loss = valid_loss[-1]
print(best_loss)
print(f'Epoch : [{epoch}] Train Loss : [{train_loss[-1]:.5f}], Valid Loss : [{valid_loss[-1]:.5f}]')
In my opinion, if this problem came from my code, it wouldn't have hit 100% utilization in the first epoch.
| I fixed this issue by moving my training data onto the local drive.
My remote server's (school server's) policy was to store personal data on a NAS.
File I/O from the NAS provoked a heavy load on the network.
It was also affected by other users' file I/O on the NAS.
After I moved the training data off the NAS, everything was fine.
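If you hit the same problem, a minimal sketch of staging the data onto the node-local disk at job start (the paths are hypothetical; adjust them to your cluster):
import os
import shutil
nas_path = "/nas/username/train_images"   # hypothetical: slow network storage
local_path = "/tmp/train_images"          # hypothetical: node-local disk
if not os.path.exists(local_path):
    shutil.copytree(nas_path, local_path)  # one-time copy before training
# then point the dataset at the local copy, e.g.
# train_data_set = img2selfie_dataset(local_path + '/', preprocess, train_dataset, train_cap_vec)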
| https://stackoverflow.com/questions/73944743/ |
How to convert a PyTorch nn.Module into a HuggingFace PreTrainedModel object? | Given a simple neural net in Pytorch like:
import torch.nn as nn
net = nn.Sequential(
nn.Linear(3, 4),
nn.Sigmoid(),
nn.Linear(4, 1),
nn.Sigmoid()
).to(device)
How do I convert it into a Huggingface PreTrainedModel object?
The goal is to convert the Pytorch nn.Module object from nn.Sequential into the Huggingface PreTrainedModel object, then run something like:
import torch.nn as nn
from transformers.modeling_utils import PreTrainedModel
net = nn.Sequential(
nn.Linear(3, 4),
nn.Sigmoid(),
nn.Linear(4, 1),
nn.Sigmoid()
).to(device)
# Do something to convert the Pytorch nn.Module to the PreTrainedModel object.
shiny_model = do_some_magic(net, some_args, some_kwargs)
# Save the shiny model that is a `PreTrainedModel` object.
shiny_model.save_pretrained("shiny-model")
PreTrainedModel.from_pretrained("shiny-model")
And it seems that to build/convert any native PyTorch model into a Huggingface one, some configuration is needed: https://huggingface.co/docs/transformers/main_classes/configuration
There are many how-tos to train models "from scratch", e.g.
[Using BertLMHeadModel, not that scratch] https://www.kaggle.com/code/mojammel/train-model-from-scratch-with-huggingface/notebook (this is also fine-tuning from bert, not scratch)
[Not really scratch, using roberta as template] https://huggingface.co/blog/how-to-train (this is fine-tuning from roberta, not really training from scratch)
[Sort of uses some Config template] https://www.thepythoncode.com/article/pretraining-bert-huggingface-transformers-in-python (this is kinda from scratch but uses the template from BERT to generate the config, what if we want to change how the model works, how should the config look like?)
[Kinda defined a template but using RobertaForMaskedLM] https://skimai.com/roberta-language-model-for-spanish/ (this looks like it kinda defines a template but restricts it to RobertaForMaskedLM template)
Questions in parts:
If we have a much simpler PyTorch model like the one in the code snippet above, how do we create a PreTrainedModel from scratch in Huggingface?
How do we create the pretrained model config Huggingface needs, so that the conversion from a native PyTorch nn.Module works?
| You will need to define custom configuration and custom model classes. It is important to define attributes model_type and config_class inside those classes:
import torch.nn as nn
from transformers import PreTrainedModel, PretrainedConfig
from transformers import AutoModel, AutoConfig
class MyConfig(PretrainedConfig):
model_type = 'mymodel'
def __init__(self, important_param=42, **kwargs):
super().__init__(**kwargs)
self.important_param = important_param
class MyModel(PreTrainedModel):
config_class = MyConfig
def __init__(self, config):
super().__init__(config)
self.config = config
self.model = nn.Sequential(
nn.Linear(3, self.config.important_param),
nn.Sigmoid(),
nn.Linear(self.config.important_param, 1),
nn.Sigmoid()
)
def forward(self, input):
return self.model(input)
Now you can create (and obviously train) a new model, then save and load it locally:
config = MyConfig(4)
model = MyModel(config)
model.save_pretrained('./my_model_dir')
new_model = MyModel.from_pretrained('./my_model_dir')
new_model
If you wish to use AutoModel, you will have to register your classes:
AutoConfig.register("mymodel", MyConfig)
AutoModel.register(MyConfig, MyModel)
new_model = AutoModel.from_pretrained('./my_model_dir')
new_model
| https://stackoverflow.com/questions/73948214/ |
How to use CTC Loss Seq2Seq correctly? | I am trying to create an ASR model by myself and learn how to use CTC loss.
I ran a test and I see this:
ctc_loss = nn.CTCLoss(blank=95)
output: tensor([[63, 8, 1, 38, 29, 14, 41, 71, 14, 29, 45, 41, 3]]): torch.Size([1, 13]); output_size: tensor([13])
input1: torch.Size([167, 1, 96]); input1_size: tensor([167])
After applying the argmax on this input (= the predicted phonemes)
torch.argmax(input1, dim=2)
I get a series of symbols:
tensor([[63, 63, 63, 63, 63, 63, 95, 95, 63, 63, 95, 95, 8, 8, 8, 95, 8, 95,
8, 8, 95, 95, 95, 1, 1, 95, 1, 95, 1, 1, 95, 95, 38, 95, 95, 38,
38, 38, 38, 38, 29, 29, 29, 29, 29, 29, 29, 95, 29, 29, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 14, 95, 14, 95, 95, 95, 95, 14, 95, 14, 41, 41,
41, 95, 41, 41, 41, 41, 41, 41, 71, 71, 71, 95, 71, 71, 71, 71, 71, 95,
95, 14, 14, 95, 14, 14, 95, 14, 14, 95, 29, 29, 95, 29, 29, 29, 29, 29,
29, 29, 45, 95, 95, 45, 45, 95, 45, 45, 45, 45, 41, 95, 41, 41, 95, 95,
95, 41, 41, 41, 3, 3, 3, 3, 3, 95, 3, 3, 3, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95]])
and the following loss value.
ctc_loss(input1, output, input_size, output_size)
# Returns 222.8446
With a different input:
input2: torch.Size([167, 1, 96]) input2_size: tensor([167])
torch.argmax(input2, dim=2)
the prediction is just a sequence of blank symbols.
tensor([[95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95, 95,
95, 95, 95, 95, 95]])
However, the loss value with the same desired output is much lower.
ctc_loss(input2, output, input_size, output_size)
# Returns 3.7955
I don't understand why input1 looks better than input2, yet the loss of input1 is higher than that of input2. Can someone explain that?
| The CTC loss does not operate on the argmax predictions but on the entire output distribution. It is the negative log of the total likelihood of all possible alignments that produce the desired output. The output symbols might be interleaved with blank symbols, which leaves exponentially many possibilities. It is, in theory, possible that the total likelihood of the correct output is high (giving a low loss) while the single most probable frame-wise sequence is all blanks.
In practice, this is quite rare, so I guess there might be a problem somewhere else. CTCLoss as implemented in PyTorch requires log probabilities as input, which you get, e.g., by applying the log_softmax function. Other sorts of input might lead to strange results such as the one you observe.
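As a sanity check, here is a minimal sketch with the shapes from the question (the random tensors are just placeholders for real network outputs):
import torch
import torch.nn as nn
T, N, C = 167, 1, 96                   # time steps, batch size, classes (95 = blank)
logits = torch.randn(T, N, C)          # placeholder for raw network outputs
log_probs = logits.log_softmax(dim=2)  # CTCLoss expects log probabilities
targets = torch.randint(0, 95, (N, 13), dtype=torch.long)  # placeholder labels, blank excluded
input_lengths = torch.tensor([T])
target_lengths = torch.tensor([13])
ctc_loss = nn.CTCLoss(blank=95)
print(ctc_loss(log_probs, targets, input_lengths, target_lengths))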
| https://stackoverflow.com/questions/73956505/ |
Multiclass classification, IndexError: Target 2 is out of bounds | I am facing a multiclass classification problem related to the activity of some drugs, using a PyTorch neural net. I have three activity classes (0, 1 and 2); to tackle the problem I adopted the one-vs.-one approach, thus creating three binary classifiers: 0 vs. 1, 1 vs. 2 and 2 vs. 0. When I train the second classifier (class 1 vs. class 2) I get the following error:
IndexError: Target 2 is out of bounds.
Is there a method to solve it without reassigning labels? Thank you all!
This is my net; it is a Graph Isomorphism Network built with PyTorch Geometric:
class GIN1(torch.nn.Module):
def __init__(self, h):
super(GIN1, self).__init__()
dim_h_conv = h
dim_h_fc = dim_h_conv*5
# Convolutional layers
self.conv1 = GINConv(Sequential(Linear(14, dim_h_conv),
BatchNorm1d(dim_h_conv), ReLU(),
Linear(dim_h_conv, dim_h_conv), ReLU()))
self.conv2 = GINConv(Sequential(Linear(dim_h_conv, dim_h_conv),
BatchNorm1d(dim_h_conv), ReLU(),
Linear(dim_h_conv, dim_h_conv), ReLU()))
self.conv3 = GINConv(Sequential(Linear(dim_h_conv, dim_h_conv),
BatchNorm1d(dim_h_conv), ReLU(),
Linear(dim_h_conv, dim_h_conv), ReLU()))
self.conv4 = GINConv(Sequential(Linear(dim_h_conv, dim_h_conv),
BatchNorm1d(dim_h_conv), ReLU(),
Linear(dim_h_conv, dim_h_conv), ReLU()))
self.conv5 = GINConv(Sequential(Linear(dim_h_conv, dim_h_conv),
BatchNorm1d(dim_h_conv), ReLU(),
Linear(dim_h_conv, dim_h_conv), ReLU()))
# Fully connected layers
self.lin1 = Linear(dim_h_fc, dim_h_fc)
self.lin2 = Linear(dim_h_fc, 2)
self.initialize_w()
def forward(self, x, edge_index, batch):
h1 = self.conv1(x, edge_index)
h2 = self.conv2(h1, edge_index)
h3 = self.conv3(h2, edge_index)
h4 = self.conv4(h3, edge_index)
h5 = self.conv5(h4, edge_index)
# Graph level readout
h1 = global_add_pool(h1, batch)
h2 = global_add_pool(h2, batch)
h3 = global_add_pool(h3, batch)
h4 = global_add_pool(h4, batch)
h5 = global_add_pool(h5, batch)
# Concatenate graph embeddings
h = torch.cat((h1, h2, h3, h4, h5), dim=1)
# Classifier
h = self.lin1(h)
h = h.relu()
h = F.dropout(h, p=hp_gin1['p'], training=self.training)
h = self.lin2(h)
h = F.log_softmax(h, dim=1)
return h
def initialize_w(self):
for m in self.modules():
if isinstance(m, Linear):
torch.nn.init.kaiming_normal_(m.weight, mode='fan_in', nonlinearity='relu')
torch.nn.init.constant_(m.bias, 0)
if isinstance(m, BatchNorm1d):
torch.nn.init.constant_(m.weight, 1)
torch.nn.init.constant_(m.bias, 0)
And this is my training loop:
gin2 = GIN2(h=hp_gin2['h']) #40
optimizer = torch.optim.Adam(gin2.parameters(), lr=hp_gin2['lr'])
criterion = torch.nn.CrossEntropyLoss()
def train(train_loader):
gin2.train()
loss_all = 0
for data in train_loader:
output = gin2(data.x, data.edge_index, data.batch)
loss = criterion(output, data.y)
l2_lambda = hp_gin2['lambda']
l2_norm = sum(p.pow(2.0).sum()
for p in gin2.parameters())
loss = loss + l2_lambda * l2_norm
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_all += loss.item() * data.num_graphs
return loss_all / len(train_loader.dataset)
def test_loss(loader):
total_loss_val = 0
with torch.no_grad():
for data in loader:
output = gin2(data.x, data.edge_index, data.batch)
batch_loss = criterion(output, data.y)
total_loss_val += batch_loss.item() * data.num_graphs
return total_loss_val / len(loader.dataset)
def test(loader):
gin2.eval()
correct = 0
for data in loader:
output = gin2(data.x, data.edge_index, data.batch)
accuracy = Accuracy(average='macro', num_classes=2)
acc = accuracy(output, data.y)
return acc
| OP needed to match the output dimension of their model with the number of label classes (see discussion).
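A minimal sketch of the underlying rule: nn.CrossEntropyLoss expects targets in [0, num_classes), so a head with 2 logits cannot score label 2 (the random logits are purely illustrative):
import torch
import torch.nn as nn
criterion = nn.CrossEntropyLoss()
targets = torch.tensor([0, 2, 1, 2])
logits_2 = torch.randn(4, 2)
# criterion(logits_2, targets)  # raises IndexError: Target 2 is out of bounds
logits_3 = torch.randn(4, 3)    # one logit per possible label value
print(criterion(logits_3, targets))  # works
In the model above, that means widening the head, e.g. self.lin2 = Linear(dim_h_fc, 3), so the original labels can be kept.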
| https://stackoverflow.com/questions/73962097/ |
YOLOv7 RuntimeError: CUDA error: unknown error | Getting the following error with my current setup after following this guide when trying to train a custom model for YOLOv7: https://docs.nvidia.com/cuda/wsl-user-guide/index.html.
OS: Ubuntu (Windows 10 WSL)
Hardware: 16gb RAM, RTX 3070
Python Version: 3.8.10
Driver Version: 517.48
PyTorch Version:
>>> import torch
>>> print(torch.__version__)
1.10.0a0+3fd9dcf
>>> import torchvision
>>> print(torchvision.__version__)
0.11.0a0
Cuda Version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jul_14_19:41:19_PDT_2021
Cuda compilation tools, release 11.4, V11.4.100
Build cuda_11.4.r11.4/compiler.30188945_0
Some common fixes that have worked for other people, which I've already attempted:
Restarting the computer
apt-get install nvidia-modprobe (https://github.com/pytorch/pytorch/issues/49081)
Docker Container:
sudo docker run --name yolov7 --gpus all -it -v "/mnt/c/coco/":"/coco/" -v "/mnt/c/yolov7/":"/yolov7/" --shm-size=16gb nvcr.io/nvidia/pytorch:21.08-py3
The part that I don't understand is that torch appears to be operating correctly:
>>> import torch
>>> print(torch.cuda.current_device())
0
>>> torch.rand(1)
tensor([0.3052])
nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.76.02 Driver Version: 517.48 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:2B:00.0 On | N/A |
| 0% 46C P8 27W / 240W | 423MiB / 8192MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
From Within Docker Container:
python train.py --workers 1 --device 0 --batch-size 2 --data data/coco.yaml --img-size 1920 --cfg cfg/training/yolov7.yaml --weights 'yolov7_training.pt' --name yolov7 --hyp data/hyp.scratch.custom.yaml
The above call produces the following error:
Traceback (most recent call last):
File "train.py", line 616, in <module>
train(hyp, opt, device, tb_writer)
File "train.py", line 361, in train
pred = model(imgs) # forward
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1056, in _call_impl
return forward_call(*input, **kwargs)
File "/yolov7/models/yolo.py", line 599, in forward
return self.forward_once(x, profile) # single-scale inference, train
File "/yolov7/models/yolo.py", line 625, in forward_once
x = m(x) # run
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1056, in _call_impl
return forward_call(*input, **kwargs)
File "/yolov7/models/common.py", line 108, in forward
return self.act(self.bn(self.conv(x)))
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1056, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 395, in forward
return F.silu(input, inplace=self.inplace)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 1901, in silu
return torch._C._nn.silu(input)
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
| Even on WSL, your host is Windows and will have the same limitations.
On Windows there is a watchdog (WDDM TDR) that kills kernels that run for too long.
If your NVIDIA GPU is a secondary GPU, try to disable WDDM TDR.
If it's your primary GPU, you may experience screen freezes when you use PyTorch.
There is some information on how to disable WDDM in this post: https://stackoverflow.com/a/13185441/4866974
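For reference, the watchdog is controlled by the TDR registry values under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers. A sketch of a .reg file that turns timeout detection off (these are the documented Microsoft keys, but back up your registry, reboot afterwards, and re-enable it when you are done):
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrLevel"=dword:00000000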
| https://stackoverflow.com/questions/73967130/ |
filter out rows which satisfy a condition in each column | Suppose I have a tensor:
input: ([[-0.5535, 0.0000],
[ 0.0000, 0.0000],
[-1.1370, -0.2736],
[-1.2300, 0.9185]])
Output:([[-0.5535, 0.0000],
[-1.1370, -0.2736],
[-1.2300, 0.9185]])
I need to keep only the rows which have non-zero elements in all columns, and the index of the deleted row. For simplicity, I have limited the matrix to two columns, however in my case the number of columns and rows keeps changing in every iteration.
I have found solutions where the condition may satisfy any element in the matrix, or there may be separate conditions to satisfy per column, but I couldn't figure out how to solve this particular case.
Thank you.
| I will answer my own question since I found the solution in pytorch.
These one-liners return the row indices of the rows that contain at least one non-zero element, which can then be used to index the tensor (the torch.tensor(...) wrapper is unnecessary when x is already a tensor):
x[torch.nonzero(x, as_tuple=True)[0].unique()]
OR
x[torch.nonzero(x.sum(1), as_tuple=True)[0]]
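Putting it together with the tensor from the question (this keeps every row that has at least one non-zero element, matching the expected output; note that the sum-based variant can misfire when a row's entries cancel out to zero):
import torch
x = torch.tensor([[-0.5535, 0.0000],
                  [ 0.0000, 0.0000],
                  [-1.1370, -0.2736],
                  [-1.2300, 0.9185]])
keep = torch.nonzero(x, as_tuple=True)[0].unique()  # rows with any non-zero entry
all_rows = torch.arange(x.size(0))
dropped = all_rows[~torch.isin(all_rows, keep)]     # index of the deleted row(s)
print(x[keep])   # the three surviving rows
print(dropped)   # tensor([1])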
| https://stackoverflow.com/questions/73969621/ |
How to do multiple forward pass and one backward pass pytorch? | import torch
import torchvision.models as models
model = models.resnet18()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(1, 3, 224, 224)
y = torch.randn(1, 3, 224, 224)
#1st Approach
loss1 = model(x).mean()
loss2 = model(y).mean()
(loss1+loss2).backward()
optimizer.step()
I want to forward two batches of data and use their total loss for a single backward pass to update one model. Is this approach correct?
#2nd Approach
loss1 = model(x).mean()
loss1.backward()
loss2 = model(y).mean()
loss2.backward()
optimizer.step()
And what is the difference between the first and second approaches?
| Both of them are actually equivalent: the gradient gets accumulated additively during backpropagation (which is a convenient implementation for nodes that appear multiple times in the computation graph). So both of them are pretty much identical.
But to make the code readable and make it obvious what is happening, I would prefer the first approach. The second method (as described above) basically "abuses" the effect of accumulating gradients - it is not actually abuse, and it is quite common in practice, but as I said, in my opinion the first way is easier to read.
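A quick sketch that demonstrates the equivalence on a tiny model (random data, purely illustrative):
import torch
import torch.nn as nn
torch.manual_seed(0)
model = nn.Linear(3, 1)
x, y = torch.randn(4, 3), torch.randn(4, 3)
# 1st approach: sum the losses, one backward pass
model.zero_grad()
(model(x).mean() + model(y).mean()).backward()
grad1 = model.weight.grad.clone()
# 2nd approach: two backward passes, gradients accumulate
model.zero_grad()
model(x).mean().backward()
model(y).mean().backward()
grad2 = model.weight.grad.clone()
print(torch.allclose(grad1, grad2))  # True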
| https://stackoverflow.com/questions/73979121/ |
PyTorch 1.12 on Mac Monterey | I cannot use PyTorch 1.12.1 on macOS 12.6 Monterey with an M1 chip.
I tried to install and run it from Python 3.8, 3.9 and 3.10 with the same result.
I think PyTorch was working before I updated macOS to Monterey, and the Rust bindings, tch-rs, are still working.
Here is my install and the error messages I get when trying to run.
Install
brew install libtorch
python3.9 -m venv venv39
source venv39/bin/activate
pip3 install torch torchvision torchaudio
Error message
python
Python 3.9.14 (main, Sep 6 2022, 23:16:16)
[Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/Documents/install/Modern_Computer_Vision/venv39/lib/python3.9/site-packages/torch/__init__.py", line 202, in <module>
from torch._C import * # noqa: F403
ImportError: dlopen(~/Documents/install/Modern_Computer_Vision/venv39/lib/python3.9/site-packages/torch/_C.cpython-39-darwin.so, 0x0002): Symbol not found: (__ZN4c10d11debug_levelEv)
Referenced from: '@/Documents/install/Modern_Computer_Vision/venv39/lib/python3.9/site-packages/torch/lib/libtorch_python.dylib'
Expected in: '/opt/homebrew/Cellar/libtorch/1.12.1/lib/libtorch_cpu.dylib'
Tried using Miniconda
I had almost the same result.
conda create -n conda39 python=3.9 -y
conda activate conda39
conda install pytorch torchvision torchaudio -c pytorch
❯ python
Python 3.9.12 (main, Apr 5 2022, 01:52:34)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/sami/miniconda3/lib/python3.9/site-packages/torch/__init__.py", line 202, in <module>
from torch._C import * # noqa: F403
ImportError: dlopen(/Users/sami/miniconda3/lib/python3.9/site-packages/torch/_C.cpython-39-darwin.so, 0x0002): Symbol not found: (__ZN4c10d11debug_levelEv)
Referenced from: '/Users/sami/miniconda3/lib/python3.9/site-packages/torch/lib/libtorch_python.dylib'
Expected in: '/opt/homebrew/Cellar/libtorch/1.12.1/lib/libtorch_cpu.dylib'
| I recommend not touching your system Python installations for your own projects; instead, the recommended way is to use conda (see here). The reason is that each conda environment encapsulates a whole separate Python installation that does not interfere (and doesn't get interfered with) with any other programs. This is especially important for C/C++ libraries like the ones PyTorch is using.
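A sketch of that route, assuming nothing else on your machine needs the brew-installed libtorch (the traceback shows the import resolving symbols against /opt/homebrew/Cellar/libtorch):
brew uninstall libtorch   # assumption: no other tool depends on it
conda create -n torch39 python=3.9 -y
conda activate torch39
conda install pytorch torchvision torchaudio -c pytorch
python -c "import torch; print(torch.__version__)"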
| https://stackoverflow.com/questions/73986257/ |
Why am getting pylint import and no member errors when I didn't before? | Hi, I've been working on this code for months without any pylint errors, but now they have suddenly appeared. How do I fix this?
| You need pandas and torch installed in the same environment as the one you run pylint from.
From the documentation at https://pylint.pycqa.org/en/latest/user_guide/messages/error/no-member.html:
If you are getting the dreaded no-member error, there is a possibility that either:
pylint found a bug in your code
You're launching pylint without the dependencies installed in its environment.
pylint would need to lint a C extension module and is refraining to do so.
Linting C extension modules is not supported out of the box, especially since pylint has no way to get an AST object out of the extension module.
But pylint actually has a mechanism which you might use in case you want to analyze C extensions. Pylint has a flag, called extension-pkg-allow-list (formerly extension-pkg-whitelist), through which you can tell it to import that module and to build an AST from that imported module:
pylint --extension-pkg-allow-list=your_c_extension
Be aware though that using this flag means that extensions are loaded into the active Python interpreter and may run arbitrary code, which you may not want. This is the reason why we disable by default loading C extensions. In case you do not want the hassle of passing C extensions module with this flag all the time, you can enable unsafe-load-any-extension in your configuration file, which will build AST objects from all the C extensions that pylint encounters:
pylint --unsafe-load-any-extension=y
Alternatively, since pylint emits a separate error for attributes that cannot be found in C extensions, c-extension-no-member, you can disable this error for your project.
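For torch specifically, a common project-level variant of this is a .pylintrc along these lines (a sketch using documented pylint options):
[MASTER]
extension-pkg-allow-list=torch

[TYPECHECK]
generated-members=torch.*
The generated-members option suppresses no-member errors for attributes pylint cannot discover statically.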
| https://stackoverflow.com/questions/73998700/ |