Delete and reinitialize pretrained BERT weights / parameters
I tried to fine-tune BERT for a classification downstream task. Now I loaded the model again and I run into the following warning: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight'] This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [Screen Shot][1] [1]: https://i.stack.imgur.com/YJZVc.png I already deleted and reinstalled transformers==4.6.0 but nothing helped. I thought maybe through the parameter "force_download=True" it might get the original weights back but nothing helped. Shall I continue and ignore the warning? Is there a way to delete the model checkpoints such when the model is downloaded the weights are fixed again? Thanks in advance! Best, Alex
As long as you're fine-tuning a model for a downstream task, this warning can be ignored. The idea is that the pretraining-head weights (the cls.* parameters listed in the warning) aren't useful for downstream tasks and need to be fine-tuned anyway. Hugging Face simply does not load them because you're loading a bert-base-uncased checkpoint (saved with a pretraining head, i.e. a BertForPreTraining model) into a plain BertModel, which has no such head; any new task head you add starts from random initialization. The warning exists to make sure you understand the difference between using the pretrained model directly and fine-tuning it for a different task. On that note, if you plan on working on a classification task I'd recommend using their BertForSequenceClassification class instead. TL;DR: you can ignore it as long as you're fine-tuning.
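As a minimal, hedged sketch of that recommendation (assuming a two-label classification task on the same bert-base-uncased checkpoint; adapt num_labels to your case):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("an example sentence", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape [1, num_labels]

You will still see a similar warning, because the new classification head is randomly initialized; that is exactly the situation in which the warning is expected and harmless.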
https://stackoverflow.com/questions/67590284/
Computing the loss of a function of predictions with pytorch
I have a convolutional neural network that predicts 3 quantities: Ux, Uy, and P. These are the x velocity, y-velocity, and pressure field. They are all 2D arrays of size [100,60], and my batch size is 10. I want to compute the loss and update the network by calculating the CURL of the predicted velocity with the CURL of the target velocity. I have a function that does this: v = curl(Ux_pred, Uy_pred). Given the predicted Ux and Uy, I want to compute the loss by comparing it to ground truth targets that I have: true_curl = curl(Ux_true, Uy_true) - I've already computed the true curl and added it to my Y data, as the fourth channel. However, I want my network to only predict Ux, Uy, and P. I want my NN parameters to update based on the LOSS of the curls to improve the accuracy of Ux and Uy. The loss of the curl has to be in terms of Ux and Uy. I have been trying to do this using Pytorch autograd, and have already read many similar questions, but I just can't get it to work. This is my code so far: print("pred_Curl shape:", np.shape(pred_curl)) print("pred_Ux shape:", np.shape(pred[:,0,:,:])) print("pred_Uy shape:", np.shape(pred[:,1,:,:])) true_curl = torch.from_numpy(y[:,3,:,:]) # not sure where to use the true curl? pred_curl = Variable(pred_curl, requires_grad=True) pred_ux = pred[:,0,:,:] pred_uy = pred[:,1,:,:] pred_ux = Variable(pred_ux, requires_grad=True) pred_uy = Variable(pred_uy, requires_grad=True) grad_tensor = torch.autograd.grad(outputs=pred_curl, inputs=(pred_ux, pred_uy), grad_outputs=torch.ones_like(pred_curl), retain_graph=True,create_graph=True) loss = torch.sum(grad_tensor) optimizer.zero_grad() loss.backward() optimizer.step() This has the following output: pred_Curl shape: torch.Size([10, 100, 60]) pred_Ux shape: torch.Size([10, 100, 60]) pred_Uy shape: torch.Size([10, 100, 60]) RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior. Any help would be appreciated! Edit: Here is my curl function: def discrete_curl(self,x,y,new_arr): for m in range(100): for n in range(60): if n <= 58: if m <= 98: if x[m,n] != 0 and y[m,n] != 0: new_arr[m,n] = ((y[m+1,n] - y[m-1,n]) / 2*1) - ((x[m,n+1] - x[m,n-1]) / 2*1) return new_arr Where x and y are Ux ad Uy, and new_arr is the curl output.
You could try something like this:

def discrete_curl(pred):
    new_arr = torch.zeros((pred.shape[0], 100, 60))
    for pred_idx in range(pred.shape[0]):
        for m in range(100):
            for n in range(60):
                if n <= 58 and m <= 98:
                    if pred[pred_idx, 0, m, n] != 0 and pred[pred_idx, 1, m, n] != 0:
                        new_arr[pred_idx, m, n] = (
                            (pred[pred_idx, 1, m + 1, n] - pred[pred_idx, 1, m - 1, n]) / 2
                            - (pred[pred_idx, 0, m, n + 1] - pred[pred_idx, 0, m, n - 1]) / 2
                        )
    return new_arr

pred_curl = discrete_curl(pred)
true_curl = torch.from_numpy(y[:, 3, :, :])
loss = torch.nn.functional.mse_loss(pred_curl, true_curl)
optimizer.zero_grad()
loss.backward()
optimizer.step()

I think the curl computation can be optimized, but I tried to stick to your structure for the most part.
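If you later want to get rid of the Python loops, here is a hedged, vectorized sketch of the same central difference. It assumes pred has shape [batch, 2, 100, 60] as above, and it drops the != 0 guard and the wrap-around at index 0 that the loop version has, so double-check it matches your intent:

import torch

def discrete_curl_vectorized(pred):
    # curl = dUy/dx - dUx/dy via central differences on the interior points
    ux, uy = pred[:, 0], pred[:, 1]
    curl = torch.zeros_like(ux)
    curl[:, 1:-1, 1:-1] = (
        (uy[:, 2:, 1:-1] - uy[:, :-2, 1:-1]) / 2
        - (ux[:, 1:-1, 2:] - ux[:, 1:-1, :-2]) / 2
    )
    return curl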
https://stackoverflow.com/questions/67606907/
RuntimeError: No CUDA GPUs are available
I want to train a GPT-2 model on my laptop, which has a GPU and runs Windows, but I always get this error in Python:

torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

When I check GPU availability in the Python console, I get True:

import torch
torch.cuda.is_available()
Out[4]: True

But I can't get the version with nvcc:

nvcc version  # or nvcc --version
NameError: name 'nvcc' is not defined

I used this command to install CUDA:

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch

What can I do to make the GPU available for Python?
In my case the problem was that the CUDA version I was trying to install didn't support my GPU model. In your case, please check which CUDA version supports your GPU model; you are currently installing 10.2. For me, CUDA 11.0 and 11.2 supported my GPU but not 11.3, which I was trying to install. If you get the same error again after a while, which can happen on a cloud VM whose hardware can be updated automatically, here is how to solve it. Remove the NVIDIA drivers:

sudo apt-get remove --purge nvidia*

Then reinstall them as follows (note: in this case I have a Debian distro on an x64 system):

wget https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo add-apt-repository contrib
sudo apt-get update
sudo apt-get -y install cuda

Get the correct commands for your distro and system from: https://developer.nvidia.com/cuda-downloads?target_os=Linux Good luck!
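Before reinstalling anything, it can also help to confirm what the driver and PyTorch actually see; these are standard commands, nothing specific to this setup:

nvidia-smi  # shows the driver version and the highest CUDA version the driver supports
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import torch; print(torch.cuda.get_device_name(0))"

Note that the conda cudatoolkit package does not ship the nvcc compiler, so nvcc --version failing does not by itself mean CUDA is broken.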
https://stackoverflow.com/questions/67613855/
Binary classification - BCELoss and model output size not corresponding
I'm doing a binary classification, hence I used a binary cross entropy loss: criterion = torch.nn.BCELoss() However, I'm getting an error: Using a target size (torch.Size([64, 1])) that is different to the input size (torch.Size([64, 2])) is deprecated. Please ensure they have the same size. My model ends with: x = self.wave_block6(x) x = self.sigmoid(self.fc(x)) return x.squeeze() I tried removing the squeeze, but to no avail. My batch size is 64. It seems like I'm doing something simple wrong here. Is my model giving 1 output and BCE loss expecting 2 inputs? Which loss should I use then?
Binary Cross-Entropy Loss (BCELoss) is used for binary classification tasks. Therefore, if N is your batch size (64 here), your model output should be of shape [N, 1] and your labels of shape [N]. Just squeeze your output at the 2nd dimension and pass it to the loss function. Here is a minimal working example:

import torch

a = torch.randn((64, 1))
b = torch.randn((64))
loss = torch.nn.BCELoss()
b = torch.round(torch.sigmoid(b))  # just to create some labels
a = torch.sigmoid(a).squeeze(1)
l = loss(a, b)

Update - based on the conversation in the comments, focal loss can be defined as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F

class focalLoss(nn.Module):
    def __init__(self, alpha=0.25, gamma=3):
        super(focalLoss, self).__init__()
        self.alpha = alpha
        self.gamma = gamma

    def forward(self, pred_logits: torch.Tensor, target: torch.Tensor):
        batch_size = pred_logits.shape[0]
        pred_logits = pred_logits.view(batch_size, -1)
        target = target.view(batch_size, -1)
        pred = pred_logits.sigmoid()
        # binary_cross_entropy expects probabilities, so pass the sigmoid output here
        ce = F.binary_cross_entropy(pred, target, reduction='none')
        alpha = target * self.alpha + (1. - target) * (1. - self.alpha)
        pt = torch.where(target == 1, pred, 1 - pred)
        return alpha * (1. - pt) ** self.gamma * ce
https://stackoverflow.com/questions/67614640/
FastAI fastbook - what does it do and why do I need to setup a book?
I tried running this in my Google Colab notebook, as written in the fastai book, chapter 2:

!pip install -Uqq fastbook
import fastbook

But neither the book nor anywhere on Google is there an explanation of what this library is at all. Surprisingly, its page does not include any explanation of what fastbook does, only something about a course for deep learning. So, what does it do? Also, when I run:

fastbook.setup_book()

what does that do? In what way does it set up a book, and what kind of book is it? Thanks.
fastbook.setup_book() is a setup helper used specifically when you are working with the fastai library in Google Colab. It connects the Colab notebook to your Google Drive using an authentication token.
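If you only want the Drive-mounting part of that setup, a rough, hedged equivalent in a Colab cell would be the following (setup_book may also configure other notebook defaults, so this is not a full replacement):

from google.colab import drive
drive.mount('/content/gdrive')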
https://stackoverflow.com/questions/67615589/
AWS Sagemaker custom PyTorch model inference on raw image input
I am new to AWS Sagemaker. I have custom CV PyTorch model locally and deployed it to Sagemaker endpoint. I used custom inference.py code to define model_fn, input_fn, output_fn and predict_fn methods. So, I'm able to generate predictions on json input, which contains url to the image, the code is quite straigtforward: def input_fn(request_body, content_type='application/json'): logging.info('Deserializing the input data...') image_transform = transforms.Compose([ transforms.Resize(size=(224, 224)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) if content_type: if content_type == 'application/json': input_data = json.loads(request_body) url = input_data['url'] logging.info(f'Image url: {url}') image_data = Image.open(requests.get(url, stream=True).raw) return image_transform(image_data) raise Exception(f'Requested unsupported ContentType in content_type {content_type}') Then I am able to invoke endpoint with code: client = boto3.client('runtime.sagemaker') inp = {"url":url} inp = json.loads(json.dumps(inp)) response = client.invoke_endpoint(EndpointName='ENDPOINT_NAME', Body=json.dumps(inp), ContentType='application/json') The problem is, I see, that locally url request return slightly different image array comparing to the one on Sagemaker. Which is why on the same URL I obtain slightly different predictions. To check that at least model weights are the same I want to generate predictions on image itself, downloaded locally and to Sagemaker. But I fail trying to put image as input to endpoint. E.g.: def input_fn(request_body, content_type='application/json'): logging.info('Deserializing the input data...') image_transform = transforms.Compose([ transforms.Resize(size=(224, 224)), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) if content_type == 'application/x-image': image_data = request_body return image_transform(image_data) raise Exception(f'Requested unsupported ContentType in content_type {content_type}') Invoking endpoint I experience the error: ParamValidationError: Parameter validation failed: Invalid type for parameter Body, value: {'img': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=630x326 at 0x7F78A61461D0>}, type: <class 'dict'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object Does anybody know how to generate Sagemaker predictions by Pytorch model on images?
As always, after asking I found a solution. Actually, as the error suggested, I had to convert the input to bytes or a bytearray. For those who may need the solution:

from io import BytesIO

img = Image.open(open(PATH, 'rb'))
img_byte_arr = BytesIO()
img.save(img_byte_arr, format=img.format)
img_byte_arr = img_byte_arr.getvalue()

client = boto3.client('runtime.sagemaker')
response = client.invoke_endpoint(EndpointName='ENDPOINT_NAME',
                                  Body=img_byte_arr,
                                  ContentType='application/x-image')
response_body = response['Body']
print(response_body.read())
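For this to work end to end, the input_fn on the endpoint side has to turn those raw bytes back into an image. Here is a hedged sketch that reuses the transform from the question (the content type string must match what you send from the client):

import io
from PIL import Image
from torchvision import transforms

def input_fn(request_body, content_type='application/x-image'):
    if content_type == 'application/x-image':
        image_transform = transforms.Compose([
            transforms.Resize(size=(224, 224)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        image_data = Image.open(io.BytesIO(request_body)).convert('RGB')
        return image_transform(image_data)
    raise Exception(f'Requested unsupported ContentType in content_type {content_type}')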
https://stackoverflow.com/questions/67622080/
Can I reduce number of GPUs without terminating the training?
Let's say I am using multiple GPUs (0,1,2,3) on one machine and later someone else also needs to use GPUs on this machine. Is there a way for me to reduce the number of GPUs my training uses (i.e. only use 0 and 1) without terminating the training and starting over again? I don't want to waste the training I already did. This sounds like a common need in a team. Is that possible?
I do not think that this is possible. You should save checkpoints so that you can later continue training where you left off. This is possible with the Hugging Face API, for example:

training_args = Seq2SeqTrainingArguments(
    output_dir=model_directory,
    num_train_epochs=args.epochs,
    do_eval=True,
    evaluation_strategy='epoch',
    load_best_model_at_end=True,  # load the best model wrt metric_for_best_model at the end
    metric_for_best_model='eval_loss',
    greater_is_better=False,
    save_total_limit=args.epochs
)

save_total_limit is the maximum number of checkpoints kept on disk; in the above case a checkpoint is written after each epoch. You can adjust the number based on the disk space you have.
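Outside of the Hugging Face Trainer, the same idea in plain PyTorch is to save a checkpoint you can resume from later, possibly with a different device setup. A minimal sketch (model, optimizer and epoch stand for whatever you use in your own loop):

import torch

# at the end of an epoch
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'checkpoint.pt')

# later, to resume (the new run can use fewer GPUs)
ckpt = torch.load('checkpoint.pt', map_location='cpu')
model.load_state_dict(ckpt['model_state_dict'])
optimizer.load_state_dict(ckpt['optimizer_state_dict'])
start_epoch = ckpt['epoch'] + 1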
https://stackoverflow.com/questions/67644405/
How does one run PyTorch on an A40 GPU without errors (with DDP too)?
I tried running my pytorch code but got this error: A40 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37. If you want to use the A40 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/ warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name)) Using backend: pytorch /home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning: A40 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37. If you want to use the A40 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/ warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name)) Traceback (most recent call last): File "/home/miranda9/ML4Coq/ml4coq-proj-src/embeddings_zoo/tree_nns/main_brando.py", line 305, in <module> main_distributed() File "/home/miranda9/ML4Coq/ml4coq-proj-src/embeddings_zoo/tree_nns/main_brando.py", line 201, in main_distributed mp.spawn(fn=train, args=(opts,), nprocs=opts.world_size) File "/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 199, in spawn return start_processes(fn, args, nprocs, join, daemon, start_method='spawn') File "/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes while not context.join(): File "/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 118, in join raise Exception(msg) Exception: -- Process 0 terminated with the following error: Traceback (most recent call last): File "/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap fn(i, *args) File "/home/miranda9/ML4Coq/ml4coq-proj-src/embeddings_zoo/tree_nns/main_brando.py", line 210, in train setup_process(opts, rank, master_port=opts.master_port, world_size=opts.world_size) File "/home/miranda9/ultimate-utils/ultimate-utils-proj-src/uutils/torch/distributed.py", line 165, in setup_process dist.init_process_group(backend, rank=rank, world_size=world_size) File "/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 455, in init_process_group barrier() File "/home/miranda9/miniconda3/envs/metalearningpy1.7.1c10.2/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1960, in barrier work = _default_pg.barrier() RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1607369981906/work/torch/lib/c10d/ProcessGroupNCCL.cpp:31, unhandled cuda error, NCCL version 2.7.8 but then it sends me to download it for my mac...? which is weird. What version of pytorch, cuda, cudnn, nccl and other things do I need for a GPU A40? to see the code I ran and conda env info see this: https://github.com/pytorch/pytorch/issues/58794 related links https://github.com/pytorch/pytorch/issues/45021 https://github.com/pytorch/pytorch/issues/45028 https://github.com/pytorch/pytorch/issues/58794 How does one install pytorch 1.9 in an HPC that seems to refuse to cooperate?
My guess is the following: A40 gpus have CUDA capability of sm_86 and they are only compatible with CUDA >= 11.0. But CUDA >= 11.0 is only compatible with PyTorch >= 1.7.0 I believe. So do: conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch or conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch or conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch if you are in an HPC you might want to do: module load gcc/9.2.0 #module load cuda-toolkit/10.2 module load cuda-toolkit/11.1 this seemed to work: (metalearning) miranda9~/automl-meta-learning $ python -c "import uutils; uutils.torch_uu.gpu_test()" device name: A40 Success, no Cuda errors means it worked see: out=tensor([[2.3272], [5.6796]], device='cuda:0') (metalearning) miranda9~/automl-meta-learning $ (metalearning) miranda9~/automl-meta-learning $ (metalearning) miranda9~/automl-meta-learning $ (metalearning) miranda9~/automl-meta-learning $ conda list | grep torch _pytorch_select 0.1 cpu_0 pytorch 1.7.1 py3.9_cuda11.0.221_cudnn8.0.5_0 pytorch torchaudio 0.7.2 py39 pytorch torchmeta 1.7.0 pypi_0 pypi torchvision 0.8.2 cpu_py39ha229d99_0
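A quick, hedged way to confirm whether a given install actually covers the A40 (these are standard PyTorch calls) is to print the compiled architecture list; sm_86 has to appear in it:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_arch_list())"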
https://stackoverflow.com/questions/67645531/
AttributeError: module 'torch' has no attribute 'rfft' with PyTorch
I am getting an error using a code that should work according to the documentation. The goal is to calculate the Feature Similarity Index Measure (FSIM) using the piq Python library. Terminal Output: TiffPage 1: ByteCounts tag is missing Traceback (most recent call last): File "...\.venv\lib\site-packages\IPython\core\interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-3044cfc208ce>", line 1, in <module> runfile('.../stackoverflow.py', wdir='...') File "...\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "...\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File ".../stackoverflow.py", line 15, in <module> main() File "...\.venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File ".../stackoverflow.py", line 10, in main fsim_index: torch.Tensor = piq.fsim(x, y, data_range=1., reduction='none') File "...\.venv\lib\site-packages\piq\fsim.py", line 84, in fsim pc_x = _phase_congruency( File "...\.venv\lib\site-packages\piq\fsim.py", line 241, in _phase_congruency imagefft = torch.rfft(x, 2, onesided=False) AttributeError: module 'torch' has no attribute 'rfft' Code: from skimage import io import torch import piq @torch.no_grad() def main(): x = torch.tensor(io.imread('scikit_image\cover\cover_1.tiff')).permute(2, 0, 1)[None, ...] / 255. y = torch.tensor(io.imread('scikit_image\stego\stego_1.tiff')).permute(2, 0, 1)[None, ...] / 255. fsim_index: torch.Tensor = piq.fsim(x, y, data_range=1., reduction='none') print(fsim_index) if __name__ == "__main__": main()
The latest versions of PyTorch implement all fast Fourier functions in the torch.fft module, and the old torch.rfft was removed; apparently piq relies on an older version of PyTorch, so if you want to run piq, consider downgrading your PyTorch version, for example:

pip3 install torch==1.7.1 torchvision==0.8.2
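If downgrading is not an option, the removed call maps onto the new torch.fft API. As a hedged sketch, the old torch.rfft(x, 2, onesided=False) returned the real and imaginary parts stacked in a trailing dimension, which roughly corresponds to:

import torch

x = torch.randn(1, 3, 8, 8)
# approximate replacement for the removed torch.rfft(x, 2, onesided=False)
imagefft = torch.view_as_real(torch.fft.fft2(x))

Patching a library yourself is error-prone, though, so verify the numbers (or check whether a newer piq release already supports the new API) before relying on this.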
https://stackoverflow.com/questions/67647299/
How to fix an error with the quickstart tutorial for pytorch?
I am trying to follow the tutorial on pytorch HERE, but there seems to be a problem. I have created a custom dataloader named training_data that returns an object as required HERE which is a dictionary {"image": image, "label": label} where image is a tensor and label is a string. I then follow the tutorial and create a DataLoader as follows: train_dataloader = DataLoader(training_data, batch_size=batch_size) and use that DataLoader in the method train: def train(dataloader, model, loss_fn, optimizer): size = len(dataloader) for batch, (X, y) in enumerate(dataloader): X, y = X.to(device), y.to(device) # Compute prediction error pred = model(X) loss = loss_fn(pred, y) # Backpropagation optimizer.zero_grad() loss.backward() optimizer.step() if batch % 100 == 0: loss, current = loss.item(), batch * len(X) print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]") However, when I call the training method for a batch train(train_dataloader, model, loss_fn, optimizer) I get an error Traceback (most recent call last): File "train_network.py", line 110, in <module> train(train_dataloader, model, loss_fn, optimizer) File "train_network.py", line 76, in train X, y = X.to(device), y.to(device) AttributeError: 'str' object has no attribute 'to' as y is a string with the content label. What am I doing wrong?
Your labels y need to be torch tensors. Since you currently have strings, and assuming you are doing classification among n classes, you can simply map them using a list. For example, with three classes, inside the __init__ of your Dataset class:

self.label_names = ["class1", "class2", "class3"]

Then, in __getitem__, you could add:

label = torch.tensor(self.label_names.index(label))

where label previously stored a string.
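Putting both pieces together, here is a hedged sketch of such a Dataset; the class names come from the suggestion above, while the sample list and the tuple return are illustrative assumptions:

import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, samples):
        # samples: a list of (image_tensor, label_string) pairs -- illustrative only
        self.samples = samples
        self.label_names = ["class1", "class2", "class3"]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image, label = self.samples[idx]
        label = torch.tensor(self.label_names.index(label))
        # returning an (image, label) tuple also lets `for X, y in dataloader` unpack correctly
        return image, label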
https://stackoverflow.com/questions/67649060/
Pytorch List of all gradients in a model
I'm trying to clip my gradients in a simple deep network model (for RL). For that I want to fetch statistics of the gradients in each epoch, e.g. mean, max, etc., so I can determine the threshold value to clip my gradients to. One way to approach this would be to fetch all the calculated gradients as an array after the backward() step. How can I do this? Or is there any other way to determine this hyper-parameter?
You can iterate over the parameters to obtain their gradients. For example:

for param in model.parameters():
    print(param.grad)

The example above just prints the gradients, but you can adapt it to compute whatever statistics you need.
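Building on that, a small hedged sketch that gathers every gradient into one flat tensor right after loss.backward(), so you can look at summary statistics and then clip (model is your network from the question):

import torch

grads = [p.grad.detach().flatten() for p in model.parameters() if p.grad is not None]
all_grads = torch.cat(grads)
print("mean:", all_grads.mean().item(),
      "std:", all_grads.std().item(),
      "max abs:", all_grads.abs().max().item())

# once a threshold is chosen, clip before optimizer.step()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)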
https://stackoverflow.com/questions/67665126/
Running BERT SQUAD model on GPU
I am using the BERT Squad model to ask the same question on a collection of documents (>20,000). The model currently runs on my CPU and it takes around a minute to process a single document - which means that I'll need several days to complete the program. I was wondering if I could speed this up by running the model on a GPU. However, I am new to GPUs and I don't know how to send these inputs and the model to the device (Titan xp). The code is borrowed from Chris McChormick. import torch import tensorflow as tf from transformers import BertForQuestionAnswering from transformers import BertTokenizer model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad') tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad') 'question' and 'answer_text' are the question and the context string respectively. input_ids = tokenizer.encode(question, answer_text) # ======== Set Segment IDs ======== # Search the input_ids for the first instance of the `[SEP]` token. sep_index = input_ids.index(tokenizer.sep_token_id) if len(input_ids)>512: input_ids=input_ids[:512] num_seg_a = sep_index + 1 num_seg_b = len(input_ids) - num_seg_a # Construct the list of 0s and 1s. segment_ids = [0]*num_seg_a + [1]*num_seg_b # There should be a segment_id for every input token. assert len(segment_ids) == len(input_ids) # ======== Evaluate ======== # Run our example through the model. outputs = model(torch.tensor([input_ids]), # The tokens representing our input text. token_type_ids=torch.tensor([segment_ids]), # The segment IDs to differentiate question from answer_text return_dict=True) start_scores = outputs.start_logits end_scores = outputs.end_logits I know that I can send the model to the GPU using model.tocuda(). But how do I send the inputs, train the model, and the retreive output from the GPU?
It's been a while, but I'll answer anyway in the hope that it may help someone. You can copy each tensor to the GPU using the .to method. For example, if your batch contains 4 PyTorch tensors (input ids, attention masks, segment ids and labels):

device = torch.device("cuda")
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_seg_ids = batch[2].to(device)
b_labels = batch[3].to(device)

Then you can use .cpu() to transfer the logits and labels from the GPU back to the CPU. For example:

start_logits = start_logits.detach().cpu()
end_logits = end_logits.detach().cpu()

or, similarly to .to(device), you can use:

start_logits = start_logits.to('cpu')
end_logits = end_logits.to('cpu')

Note: if you want to work with these values outside of PyTorch afterwards, you will probably need to add .numpy() at the end to convert them to NumPy arrays. Source: https://discuss.pytorch.org/t/time-to-transform-gpu-to-cpu-with-cpu/18856
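Applied to the exact snippet in the question, a hedged sketch would be to move the model once and move each input tensor per call (model, input_ids and segment_ids are the objects already built in the question):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

with torch.no_grad():
    outputs = model(torch.tensor([input_ids]).to(device),
                    token_type_ids=torch.tensor([segment_ids]).to(device),
                    return_dict=True)

start_scores = outputs.start_logits.detach().cpu()
end_scores = outputs.end_logits.detach().cpu()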
https://stackoverflow.com/questions/67675458/
Loss is nan, stopping training when training Mask-RCNN multi-class segmentation
number of train data: 346 number of test data: 69 Epoch: [0] [0/346] eta: 0:35:20 lr: 0.000019 loss: -312.6024 (-312.6024) loss_classifier: 1.5789 (1.5789) loss_box_reg: 0.1299 (0.1299) loss_mask: -314.3485 (-314.3485) loss_objectness: 0.0266 (0.0266) loss_rpn_box_reg: 0.0106 (0.0106) time: 6.1275 data: 0.1599 max mem: 0 Loss is nan, stopping training {‘loss_classifier’: tensor (nan, grad_fn = ), ‘loss_box_reg’: tensor (nan, grad_fn = ), ‘loss_mask’: tensor (nan, grad_fn = ), ’ tensor (nan, grad_fn = ), ‘loss_rpn_box_reg’: tensor (nan, grad_fn = )} An exception has occurred, use% tb to see the full traceback. SystemExit : 1 And this is the dataset code: class maskrcnn_Dataset(torch.utils.data.Dataset): def __init__(self, root, transforms=None): self.root = root self.transforms = transforms # load all image files, sorting them to # ensure that they are aligned self.imgs = list(sorted(os.listdir(os.path.join(root, "images")))) self.masks = list(sorted(os.listdir(os.path.join(root, "masks")))) #self.class_masks = list(sorted(os.listdir(os.path.join(root, "SegmentationClass")))) def __getitem__(self, idx): # load images ad masks img_path = os.path.join(self.root, "images", self.imgs[idx]) x=self.imgs[idx].split('.') mask_path = os.path.join(self.root, "masks", self.masks[idx]) #class_mask_path = os.path.join(self.root, "SegmentationClass", self.class_masks[idx]) #read and convert image to RGB img = cv2.imread(img_path) mask_for_all=[] img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # note that we haven't converted the mask to RGB, # because each color corresponds to a different instance # with 0 being background # mask = Image.open(mask_path) mask_folder=os.path.join(self.root,"masks") source_mask = os.path.join(mask_folder, x[0]) #print(os.listdir(source_mask)) boxes = [] xx=trier(os.listdir(source_mask)) #print(xx) for file_name in xx: mask = Image.open(os.path.join(source_mask,file_name)) mask = np.array(mask) mask_for_all.append(mask) obj_ids = np.unique(mask) obj_ids = obj_ids[1:] masks = mask == obj_ids[:, None, None] num_objs = len(obj_ids) for i in range(num_objs): pos = np.where(masks[i]) xmin = np.min(pos[1]) xmax = np.max(pos[1]) ymin = np.min(pos[0]) ymax = np.max(pos[0]) boxes.append([xmin, ymin, xmax, ymax]) num_objs=len(boxes) boxes = torch.as_tensor(boxes, dtype=torch.float32) # there is only one class if(self.root.find("train")!=-1): #print("bisgltjf") labels =class_ids_train[class_ids_train_names.index(self.imgs[idx])] #print(labels) else: labels =class_ids_val[class_ids_val_names.index(self.imgs[idx])] #print('l3assba') #labels = np.array([]) #for i in range(masks.shape[0]): # labels = np.append(labels, (masks[i] * class_mask).max()) labels = torch.as_tensor(labels, dtype=torch.int64) #print(boxes,":",labels) masks = torch.as_tensor(mask_for_all, dtype=torch.uint8) #print(labels) #print(masks) #print(masks.shape) image_id = torch.tensor([idx]) area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0]) # suppose all instances are not crowd iscrowd = torch.zeros((num_objs,), dtype=torch.int64) #print(img.shape) #print(self.imgs[idx]) target = {} target["boxes"] = boxes #print(boxes) target["labels"] = labels #print(labels.shape) target["masks"] = masks #print(masks.shape) target["image_id"] = image_id #print(image_id.shape) target["area"] = area #print(area) target["iscrowd"] = iscrowd #print(iscrowd.shape) if self.transforms is not None: img, target = self.transforms(img, target) return img, target def __len__(self): return len(self.imgs)
There can be two issues: Check the box coordinates and make sure every [xmin, ymin, xmax, ymax] describes a box with positive width and height (xmax > xmin and ymax > ymin). Also make sure the number of masks is the same as the number of boxes.
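A small, hedged sanity check along those lines, placed just before returning the target in __getitem__ (boxes and masks are the tensors built there), could be:

# every box must have positive width and height
assert (boxes[:, 2] > boxes[:, 0]).all() and (boxes[:, 3] > boxes[:, 1]).all(), "degenerate box"
# one mask per box
assert masks.shape[0] == boxes.shape[0], "number of masks and boxes differ"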
https://stackoverflow.com/questions/67678922/
Data Loading in Pytorch for a dataset having all the classes in same folder
I am new to deep learning and PyTorch. I have a dataset of 6000 images with all four classes in a single folder. I used the following snippet to load my data:

torchvision.datasets.ImageFolder(root='/content/drive/My Drive/DFU/base_dir/train_dir', transform=None)

I read that for ImageFolder, the images should be organized into sub-folders based on class labels. However, my dataset has all four classes' images in a single folder. I have a .csv file that contains the one-hot-encoded class label for each image. How do I load my dataset into PyTorch?
The simplest solution would be to reorganise the images into class-named subfolders based on the csv file, and then load them as intended with ImageFolder:

import pandas as pd
from pathlib import Path
import torchvision

root = '/content/drive/My Drive/DFU/base_dir/train_dir'
my_csv_file = ...

# Load the csv as an {image: class, ...} mapping
df = pd.read_csv(my_csv_file).set_index('images')
class_dict = df.idxmax(axis="columns").to_dict()

# Move files to class-named subfolders (the subfolder must exist before renaming into it)
for path in Path(root).iterdir():
    if path.is_file() and path.name in class_dict.keys():
        (path.parent / class_dict[path.name]).mkdir(exist_ok=True)
        path.rename(Path(path.parent, class_dict[path.name], path.name))

# Load the dataset
dataset = torchvision.datasets.ImageFolder(root=root, transform=None)
https://stackoverflow.com/questions/67694644/
Is torch.empty_like() dependent on the input value as well as input size?
The description in the torch docs for torch.empty_like says: torch.empty_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) → Tensor Returns an uninitialized tensor with the same size as input. torch.empty_like(input) is equivalent to torch.empty(input.size(), dtype=input.dtype, layout=input.layout, device=input.device). Parameters input (Tensor) – the size of input will determine size of the output tensor. What I do is : >>> torch.empty(3,4) tensor([[-1.8597e+15, 4.5657e-41, -1.8597e+15, 4.5657e-41], [ 4.4842e-44, 0.0000e+00, 8.9683e-44, 0.0000e+00], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]]) >>> c1 tensor([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> torch.empty_like(c1) tensor([[139942262173040, 93851872482144, 1, 0], [ 0, 0, 93851872492496, 0], [ 0, 0, 0, 0]]) >>> d tensor([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]]) >>> torch.empty_like(d) tensor([[-8.6092e-25, 3.0620e-41, 0.0000e+00, 0.0000e+00], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00], [ 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00]]) It seems that the tensor returned by torch.empty_like depends on input value, contrary to the description in the docs. Can someone explain this?
The docs description is correct. I am not sure if you are confused by torch.empty_like returning different outputs on different calls, but you can see this is also the behaviour of torch.empty by calling e.g. torch.empty((2,3), dtype=torch.int64) multiple times. Note torch.empty_like does depend on the dtype of the input (but not its specific values).
https://stackoverflow.com/questions/67697347/
Load pytorch model from S3 bucket
I want to load a pytorch model (model.pt) from a S3 bucket. I wrote the following code: from smart_open import open as smart_open import io load_path = "s3://serial-no-images/yolo-models/model4/model.pt" with smart_open(load_path) as f: buffer = io.BytesIO(f.read()) model.load_state_dict(torch.load(buffer)) This results in the following error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte One solution would be to download the model locally, but I want to avoid this and load the model directly from S3. Unfortunately, I couldn't find a good solution for that online. Can someone help me out here?
According to the documentation, the following works:

from smart_open import open as smart_open
import io

load_path = "s3://serial-no-images/yolo-models/model4/model.pt"
with smart_open(load_path, 'rb') as f:
    buffer = io.BytesIO(f.read())
    model.load_state_dict(torch.load(buffer))

I had tried this before, but hadn't noticed that I have to pass 'rb' as an argument.
https://stackoverflow.com/questions/67706477/
issue in loading Model using PyTorch in google-collaboratory
I am trying to Load the Model in google_collaboratory to get evaluate it and generate all the statistics results. My trying import torch import torch.nn.functional as F import torch.optim as optim import torch.backends.cudnn as cudnn import torch.backends.cudnn as cudnn import numpy as np import torch.nn as nn import os def load_checkpoint(filepath): checkpoint = torch.load(filepath) model = fc_model.Network(checkpoint['input_size'], checkpoint['output_size'], checkpoint['hidden_layers']) model.load_state_dict(checkpoint['state_dict']) return model PATH = "/content/gdrive/MyDrive/best.pt" state_dict = load_checkpoint(PATH) The Error --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-24-0515f2edfa1a> in <module>() 18 19 PATH = "/content/gdrive/MyDrive/best.pt" ---> 20 state_dict = load_checkpoint(PATH) 2 frames /usr/local/lib/python3.7/dist-packages/torch/serialization.py in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args) 849 unpickler = pickle_module.Unpickler(data_file, **pickle_load_args) 850 unpickler.persistent_load = persistent_load --> 851 result = unpickler.load() 852 853 torch._utils._validate_loaded_sparse_tensors() ModuleNotFoundError: No module named 'models' --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. --------------------------------------------------------------------------- I tried to install some library but it gives me the same is there anyways to load models inside google collaboratory.
The problem is that when the weights were saved, torch.save(model) was used instead of torch.save(model.state_dict()). One way to solve this is to import models the same way you did when training. This matters because saving the whole model pickles the module path (here 'models') along with the weights, and unpickling tries to import it again. If models is a file, you may need to upload it to Colab; if it is a class you defined yourself, just put its definition in a cell before loading and it will work.
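For future checkpoints, the usual way to avoid this class of error is to save and load the state_dict rather than the whole pickled model. A minimal, hedged sketch (Network(...) stands in for whatever model class you actually use; it is not a real import):

import torch

# saving
torch.save(model.state_dict(), "best.pt")

# loading: rebuild the architecture in code, then load the weights
model = Network(...)  # hypothetical constructor -- replace with your own model class
model.load_state_dict(torch.load("best.pt", map_location="cpu"))
model.eval()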
https://stackoverflow.com/questions/67708073/
PyTorch does not make initial weights random
I created a Neural Network that takes two greyscale images 14x14 pixels portraying a digit (from MNIST database) and returns 1 if the first digit is less or equal to the second digit, returns 0 otherwise. The code runs, but every time the initial weights are the same. They should be random Forcing the initial weights to be random, by using the following line of code in the Net class, does not help. torch.nn.init.normal_(self.layer1.weight, mean=0.0, std=0.01) Here is the code of the "main.py" file: import os; os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE" import torch import torch.nn as nn from dlc_practical_prologue import * class Net(nn.Module): def __init__(self): super().__init__() self.layer1 = nn.Linear(2*14*14, 32) #torch.nn.init.normal_(self.layer1.weight, mean=0.0, std=0.01) #self.layer2 = nn.Linear(100, 100) #self.layer3 = nn.Linear(100, 100) self.layer2 = nn.Linear(32, 1) def forward(self, x): x = torch.relu(self.layer1(x)) #x = torch.relu(self.layer2(x)) #x = torch.relu(self.layer3(x)) x = torch.sigmoid(self.layer2(x)) return x if __name__ == '__main__': # Data initialization N = 1000 train_input, train_target, train_classes, _, _, _, = generate_pair_sets(N) _, _, _, test_input, test_target, test_classes = generate_pair_sets(N) train_input = train_input.view(-1, 2*14*14) test_input = test_input.view(-1, 2*14*14) train_target = train_target.view(-1, 1) test_target = test_target.view(-1, 1) # I convert the type to torch.float32 train_input, train_target, train_classes, test_input, test_target, test_classes = \ train_input.type(torch.float32), train_target.type(torch.float32), train_classes.type(torch.long), \ test_input.type(torch.float32), test_target.type(torch.float32), test_classes.type(torch.long) # Create the neural network net = Net() # Training learning_rate = 0.01 # Use MSELoss loss = nn.MSELoss() # Use Adam optimizer optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate) EPOCHS = 50 for param in net.parameters(): print(param) for epoch in range(EPOCHS): target_predicted = net(train_input) l = loss(train_target, target_predicted) #loss = nn.MSELoss() #l = loss(target_predicted, train_target) l.backward() optimizer.step() optimizer.zero_grad() #print(l) # Testing total = 1000 correct = 0 with torch.no_grad(): correct = ( test_target == net(test_input).round() ).sum() print("Accuracy %.2f%%" % (correct / total * 100)) Here is the code for "dlc_practical_monologue.py": import os; os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" import torch from torchvision import datasets import argparse import os import urllib ###################################################################### parser = argparse.ArgumentParser(description='DLC prologue file for practical sessions.') parser.add_argument('--full', action='store_true', default=False, help = 'Use the full set, can take ages (default False)') parser.add_argument('--tiny', action='store_true', default=False, help = 'Use a very small set for quick checks (default False)') parser.add_argument('--seed', type = int, default = 0, help = 'Random seed (default 0, < 0 is no seeding)') parser.add_argument('--cifar', action='store_true', default=False, help = 'Use the CIFAR data-set and not MNIST (default False)') parser.add_argument('--data_dir', type = str, default = None, help = 'Where are the PyTorch data located (default $PYTORCH_DATA_DIR or \'./data\')') # Timur's fix parser.add_argument('-f', '--file', help = 'quick hack for jupyter') args = parser.parse_args() if args.seed >= 0: torch.manual_seed(args.seed) 
###################################################################### # The data def convert_to_one_hot_labels(input, target): tmp = input.new_zeros(target.size(0), target.max() + 1) tmp.scatter_(1, target.view(-1, 1), 1.0) return tmp def load_data(cifar = None, one_hot_labels = False, normalize = False, flatten = True): if args.data_dir is not None: data_dir = args.data_dir else: data_dir = os.environ.get('PYTORCH_DATA_DIR') if data_dir is None: data_dir = './data' if args.cifar or (cifar is not None and cifar): print('* Using CIFAR') cifar_train_set = datasets.CIFAR10(data_dir + '/cifar10/', train = True, download = True) cifar_test_set = datasets.CIFAR10(data_dir + '/cifar10/', train = False, download = True) train_input = torch.from_numpy(cifar_train_set.data) train_input = train_input.transpose(3, 1).transpose(2, 3).float() train_target = torch.tensor(cifar_train_set.targets, dtype = torch.int64) test_input = torch.from_numpy(cifar_test_set.data).float() test_input = test_input.transpose(3, 1).transpose(2, 3).float() test_target = torch.tensor(cifar_test_set.targets, dtype = torch.int64) else: print('* Using MNIST') ###################################################################### # import torchvision # raw_folder = data_dir + '/mnist/raw/' # resources = [ # ("https://fleuret.org/dlc/data/train-images-idx3-ubyte.gz", "f68b3c2dcbeaaa9fbdd348bbdeb94873"), # ("https://fleuret.org/dlc/data/train-labels-idx1-ubyte.gz", "d53e105ee54ea40749a09fcbcd1e9432"), # ("https://fleuret.org/dlc/data/t10k-images-idx3-ubyte.gz", "9fb629c4189551a2d022fa330f9573f3"), # ("https://fleuret.org/dlc/data/t10k-labels-idx1-ubyte.gz", "ec29112dd5afa0611ce80d1b7f02629c") # ] # os.makedirs(raw_folder, exist_ok=True) # # download files # for url, md5 in resources: # filename = url.rpartition('/')[2] # torchvision.datasets.utils.download_and_extract_archive(url, download_root=raw_folder, filename=filename, md5=md5) ###################################################################### mnist_train_set = datasets.MNIST(data_dir + '/mnist/', train = True, download = True) mnist_test_set = datasets.MNIST(data_dir + '/mnist/', train = False, download = True) train_input = mnist_train_set.data.view(-1, 1, 28, 28).float() train_target = mnist_train_set.targets test_input = mnist_test_set.data.view(-1, 1, 28, 28).float() test_target = mnist_test_set.targets if flatten: train_input = train_input.clone().reshape(train_input.size(0), -1) test_input = test_input.clone().reshape(test_input.size(0), -1) if args.full: if args.tiny: raise ValueError('Cannot have both --full and --tiny') else: if args.tiny: print('** Reduce the data-set to the tiny setup') train_input = train_input.narrow(0, 0, 500) train_target = train_target.narrow(0, 0, 500) test_input = test_input.narrow(0, 0, 100) test_target = test_target.narrow(0, 0, 100) else: print('** Reduce the data-set (use --full for the full thing)') train_input = train_input.narrow(0, 0, 1000) train_target = train_target.narrow(0, 0, 1000) test_input = test_input.narrow(0, 0, 1000) test_target = test_target.narrow(0, 0, 1000) print('** Use {:d} train and {:d} test samples'.format(train_input.size(0), test_input.size(0))) if one_hot_labels: train_target = convert_to_one_hot_labels(train_input, train_target) test_target = convert_to_one_hot_labels(test_input, test_target) if normalize: mu, std = train_input.mean(), train_input.std() train_input.sub_(mu).div_(std) test_input.sub_(mu).div_(std) return train_input, train_target, test_input, test_target 
###################################################################### def mnist_to_pairs(nb, input, target): input = torch.functional.F.avg_pool2d(input, kernel_size = 2) a = torch.randperm(input.size(0)) a = a[:2 * nb].view(nb, 2) input = torch.cat((input[a[:, 0]], input[a[:, 1]]), 1) classes = target[a] target = (classes[:, 0] <= classes[:, 1]).long() return input, target, classes ###################################################################### def generate_pair_sets(nb): if args.data_dir is not None: data_dir = args.data_dir else: data_dir = os.environ.get('PYTORCH_DATA_DIR') if data_dir is None: data_dir = './data' train_set = datasets.MNIST(data_dir + '/mnist/', train = True, download = True) train_input = train_set.data.view(-1, 1, 28, 28).float() train_target = train_set.targets test_set = datasets.MNIST(data_dir + '/mnist/', train = False, download = True) test_input = test_set.data.view(-1, 1, 28, 28).float() test_target = test_set.targets return mnist_to_pairs(nb, train_input, train_target) + \ mnist_to_pairs(nb, test_input, test_target) ###################################################################### Note that I have to add the following line of code to run the code on Windows 10, while it is not necessary to run it on Linux. import os; os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE" Also on Linux I always get the same initial weights. Please, can you help me?
The weights are not truly random because importing the prologue file (dlc_practical_prologue.py) runs this:

if args.seed >= 0:
    torch.manual_seed(args.seed)

which fires whenever the seed is >= 0, and the default is 0. Seeding the global RNG makes every subsequent weight initialization deterministic, so all layers get the same weights on each run. Check if this is the case.
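If you want different weights on every run while still importing the prologue, one hedged option is to re-seed non-deterministically after the import (torch.seed() is a standard PyTorch call; alternatively, run the script with --seed -1 so the prologue skips seeding entirely):

import torch

torch.seed()   # re-seed the RNG with a non-deterministic value
net = Net()    # weights now differ between runs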
https://stackoverflow.com/questions/67709281/
Using weights in CrossEntropyLoss and BCELoss (PyTorch)
I am training a PyTorch model to perform binary classification. My minority class makes up about 10% of the data, so I want to use a weighted loss function. The docs for BCELoss and CrossEntropyLoss say that I can use a 'weight' for each sample. However, when I declare CE_loss = nn.BCELoss() or nn.CrossEntropyLoss() and then do CE_Loss(output, target, weight=batch_weights), where output, target, and batch_weights are Tensors of batch_size, I get the following error message: forward() got an unexpected keyword argument 'weight'
Another way you could accomplish your goal is to use reduction='none' when initializing the loss and then multiply the resulting per-sample losses by your weights before computing the mean, e.g.:

loss = torch.nn.BCELoss(reduction='none')
model = torch.sigmoid
weights = torch.rand(10, 1)
inputs = torch.rand(10, 1)
targets = torch.rand(10, 1)
intermediate_losses = loss(model(inputs), targets)
final_loss = torch.mean(weights * intermediate_losses)

Of course, for your scenario you would still need to calculate the weights tensor. But hopefully this helps!
https://stackoverflow.com/questions/67730325/
apply a function over all combination of tensor rows in pytorch
I want to make a function f1(arg_tensor) which gets a pytorch tensor as an argument. In this function I use another function: f2(tensor_row_1, tensor_row_2) which gets two pytorch's tensor rows as an arguments and outputs a scalar. f2(..) should be applied over all combinations of tensor's rows [1..n] (i.e. apply function f2(..) on tensor rows' indices: [0,1], [0,2], [0,3]...[0,n-1]...[n-1,0]..[n-1,n-1]). The output of f1(..) should be a tensor such that at element [0,0] there will the output value of f2(tensor_rows[0], tensor_rows[0]) and so on... Is there a way to perform it efficiently (and not with double for loop)?
Yes, one can do it with a simple broadcasting trick:

import torch

def f1(tensor):
    tensor = tensor.permute(1, 0)
    return torch.nn.functional.kl_div(
        tensor.unsqueeze(dim=2), tensor.unsqueeze(dim=1), reduction="none"
    ).mean(dim=0)

def manual_f1(tensor):
    result = []
    for row1 in tensor:
        for row2 in tensor:
            result.append(torch.nn.functional.kl_div(row1, row2))
    return torch.stack(result).reshape(tensor.shape[0], -1)

data = torch.randn(5, 7)
result = f1(data)
manual_result = manual_f1(data)
print(torch.all(result == manual_result).item())

Note that for more rows the results may differ slightly due to floating-point differences. You can either print the values and inspect them manually, or use torch.isclose to verify similarity. In the second case, the last print would become:

print(torch.all(torch.isclose(result, manual_result)).item())
https://stackoverflow.com/questions/67741628/
Constrain parameters to be -1, 0 or 1 in neural network in pytorch
I want to constrain the parameters of an intermediate layer in a neural network to prefer discrete values: -1, 0, or 1. The idea is to add a custom objective function that would increase the loss if the parameters take any other value. Note that, I want to constrain parameters of a particular layer, not all layers. How can I implement this in pytorch? I want to add this custom loss to the total loss in the training loop, something like this: custom_loss = constrain_parameters_to_be_discrete loss = other_loss + custom_loss May be using a Dirichlet prior might help, any pointer to this?
Extending upon @Shai's answer and mixing it with this answer, one could do it more simply via a custom layer into which you could pass your specific layer. First, the calculated derivative of torch.abs(x**2 - torch.abs(x)), taken from WolframAlpha (check here), is placed inside the regularize function. Now the Constrainer layer:

class Constrainer(torch.nn.Module):
    def __init__(self, module, weight_decay=1.0):
        super().__init__()
        self.module = module
        self.weight_decay = weight_decay
        # Backward hook is registered on the specified module
        self.hook = self.module.register_full_backward_hook(self._weight_decay_hook)

    # Not working with grad accumulation; check the original answer and pointers there
    # if that's needed
    def _weight_decay_hook(self, *_):
        for parameter in self.module.parameters():
            parameter.grad = self.regularize(parameter)

    def regularize(self, parameter):
        # Derivative of the regularization term created by @Shai
        sgn = torch.sign(parameter)
        return self.weight_decay * (
            (sgn - 2 * parameter) * torch.sign(1 - parameter * sgn)
        )

    def forward(self, *args, **kwargs):
        # Simply forward args and kwargs to the wrapped module
        return self.module(*args, **kwargs)

Usage is really simple (with your chosen weight_decay hyperparameter if you need more/less force on the params):

constrained_layer = Constrainer(torch.nn.Linear(20, 10), weight_decay=0.1)

Now you don't have to worry about different loss functions and can use your model normally.
https://stackoverflow.com/questions/67772546/
What is the difference between spacy.load('en_core_web_sm') and spacy.load('en')?
I have seen both of these written down in Colab Notebooks, Can someone please explain the difference between them? Thanks
In spaCy v2, it was possible to use shorthand to refer to a model in some circumstances, so "en" could be the same as "en_core_web_sm". The way this worked internally kind of relied on symlinks, which added file system state and caused issues on Windows. This caused troubleshooting problems and confusion, so it was decided the convenience of the short names wasn't worth it, and there are no short names in v3. So if you see code using spacy.load("en") it's using v2. There's no meaningful difference in how it works though.
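For v3 the explicit package name is the only option; a small example of standard spaCy usage. First download the package in a shell:

python -m spacy download en_core_web_sm

then load it in Python:

import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence.")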
https://stackoverflow.com/questions/67774456/
How do I use the fastai saved model?
I trained my model in google colab, and downloaded the .pkl file in my computer. Now, how do I use it? How do I load the .pkl file and do I need to install fastai for it to work?
How do I load the .pkl file?

Assuming you've saved your model using learner.save, you can use the complementary learner.load method.

Do I need to install fastai for it to work?

Yes, you need fastai if you saved it this way. You could also save the PyTorch model itself contained inside the learner via:

torch.save(learner.model, "/path/to/model.pt")  # or save its state_dict, a better option
model = torch.load("/path/to/model.pt")

Either way you need those libraries, as pickle stores only the data; the class definition and creation have to be provided in code.
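If the .pkl actually came out of learner.export (which is how the fastai book exports models for inference), the matching loader is load_learner rather than learner.load. A hedged sketch, where the file name and item are placeholders for whatever you downloaded and want to predict on:

from fastai.learner import load_learner

learn = load_learner("export.pkl")
pred, pred_idx, probs = learn.predict(item)  # item: an input of the same kind used in training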
https://stackoverflow.com/questions/67778201/
IndexError: too many indices for tensor of dimension 2
here is the dataset: class price_dataset(Dataset): def __init__(self, transform=None): xy = pd.read_csv('data_balanced_full.csv') self.n_samples = xy.shape[0] xy = xy.to_numpy() self.x_data = torch.from_numpy(xy[:, 7:].astype(np.float32)) self.y_data = torch.from_numpy(xy[:, 6].astype(np.float32)) self.transform = transform def __getitem__(self, index): x = self.x_data[index] y = self.y_data[index] sample = {'data': x, 'label': y} if self.transform: sample = self.transform(sample) return sample # we can call len(dataset) to return the size def __len__(self): return self.n_samples and I'm trying to split the dataset into testing and training: dataset_normalized = price_dataset(transform=transforms.ToTensor()) train_dataset, test_dataset = train_test_split(dataset_normalized['data'], dataset_normalized['label'], test_size=0.10, random_state=0) but I'm getting this error: IndexError: too many indices for tensor of dimension 2
'data' and 'label' are not indices but keys of a dictionary. That dictionary is what __getitem__ returns when you call dataset_normalized[idx] with idx an integer, so dataset_normalized['data'] is not valid. Moreover, you cannot invoke your transformation directly on a dictionary; you should call it on sample['data'] instead. I advise you to carefully read this example from the PyTorch documentation, which is very nice.
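If the end goal is just a train/test split of this Dataset, a hedged alternative that avoids indexing with strings altogether is torch.utils.data.random_split; the 10% test fraction mirrors the question and the seed is illustrative:

import torch
from torch.utils.data import random_split

dataset = price_dataset()
n_test = int(0.10 * len(dataset))
train_dataset, test_dataset = random_split(
    dataset, [len(dataset) - n_test, n_test],
    generator=torch.Generator().manual_seed(0))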
https://stackoverflow.com/questions/67779568/
Installation problem with PyTorch Geometric: "torch-scatter" produces an error with exit status 1
Could anyone who has used PyTorch Geometric before help me resolve this issue? I'm having trouble installing torch-scatter from PyTorch Geometric to deal with some tabular data for a question-answering task based on the TAPAS model. I presume there is a compile error at the source. I checked other forums and found no solution for this yet.

Procedure followed to produce the error:

pip3 install torch==1.8.1+cpu torchvision==0.9.1+cpu torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
pip3 install torch-scatter

Console output:

ERROR: Command errored out with exit status 1:

I also tried using the pip -f flag to pull specifically from the wheel index:

pip3 install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cpu.html

Following are my PyTorch and CUDA versions with the respective commands and console outputs:

python -c "import torch; print(torch.__version__)"
Output: 1.8.1+cpu

CUDA version:

python -c "import torch; print(torch.version.cuda)"
Output: None

Python version: Python 3.7.5

Thank you very much for your time and guidance.
I had this problem too, and it was solved by installing the C++ build tools. You can install them from vs_buildtools.exe, which is downloadable here.
https://stackoverflow.com/questions/67787392/
Extremely poor accuracy upon training network from scratch
I am trying to retrain resnet50 from scratch using a dataset that is similar to ImageNet. I wrote the following training loop: def train_network(epochs , train_loader , val_loader , optimizer , network): since = time.time ( ) train_acc_history = [] val_acc_history = [] best_model_weights = copy.deepcopy (network.state_dict ( )) best_accuracy = 0.0 for epoch in range (epochs): correct_train = 0 correct_val = 0 for x , t in train_loader: x = x.to (device) t = t.to (device) optimizer.zero_grad ( ) z = network (x) J = loss (z , t) J.backward ( ) optimizer.step ( ) _ , y = torch.max (z , 1) correct_train += torch.sum (y == t.data) with torch.no_grad ( ): network.eval ( ) for x_val , t_val in val_loader: x_val = x_val.to (device) t_val = t_val.to (device) z_val = network (x_val) _ , y_val = torch.max (z_val , 1) correct_val += torch.sum (y_val == t_val.data) network.train ( ) train_accuracy = correct_train.float ( ) / len (train_loader.dataset) val_accuracy = correct_val.float ( ) / len (val_loader.dataset) print ( F"Epoch: {epoch + 1} train_accuracy: {(train_accuracy.item ( ) * 100):.3f}% val_accuracy: {(val_accuracy.item ( ) * 100):.3f}%" , flush = True) # time_elapsed_epoch = time.time() - since # print ('Time taken for Epoch {} is {:.0f}m {:.0f}s'.format (epoch + 1, time_elapsed_epoch // 60 , time_elapsed_epoch % 60)) if val_accuracy > best_accuracy: best_accuracy = val_accuracy best_model_weights = copy.deepcopy (network.state_dict ( )) train_acc_history.append (train_accuracy) val_acc_history.append (val_accuracy) print ( ) time_elapsed = time.time ( ) - since print ('Training complete in {:.0f}m {:.0f}s'.format (time_elapsed // 60 , time_elapsed % 60)) print ('Best Validation Accuracy: {:3f}'.format (best_accuracy * 100)) network.load_state_dict (best_model_weights) return network , train_acc_history , val_acc_history But I am getting extremely poor training and validation accuracies as below: > Epoch: 1 train_accuracy: 3.573% val_accuracy: 3.481% > Epoch: 2 train_accuracy: 3.414% val_accuracy: 3.273% > Epoch: 3 train_accuracy: 3.515% val_accuracy: 4.039% > Epoch: 4 train_accuracy: 3.567% val_accuracy: 4.195% Upon googling, I found that the accuracies of training from scratch are usually not so poor (in fact they start off from around 40% - 50%). I am finding it difficult to understand where the glitch might be. It would be great if someone could help me figure out where I might be going wrong. Thanks
I tried your training loop without the weight checkpoint and got accuracy over 90% on fashionMNIST dataset using my own ResNet. So if you are using a good loss/optimizer I would suggest looking at the network architecture or creation of the data-loaders. def train_network(epochs , train_loader , val_loader , optimizer , network): #since = time.time ( ) train_acc_history = [] val_acc_history = [] loss = nn.CrossEntropyLoss() #best_model_weights = copy.deepcopy (network.state_dict ( )) #best_accuracy = 0.0 for epoch in range (epochs): correct_train = 0 correct_val = 0 network.train ( ) for x , t in train_loader: x = x.to (device) t = t.to (device) optimizer.zero_grad ( ) z = network (x) J = loss (z , t) J.backward ( ) optimizer.step ( ) _ , y = torch.max (z , 1) correct_train += torch.sum (y == t.data) with torch.no_grad ( ): network.eval ( ) for x_val , t_val in val_loader: x_val = x_val.to (device) t_val = t_val.to (device) z_val = network (x_val) _ , y_val = torch.max (z_val , 1) correct_val += torch.sum (y_val == t_val.data) network.train ( ) train_accuracy = correct_train.float ( ) / len (train_loader.dataset) val_accuracy = correct_val.float ( ) / len (val_loader.dataset) print ( F"Epoch: {epoch + 1} train_accuracy: {(train_accuracy.item ( ) * 100):.3f}% val_accuracy: {(val_accuracy.item ( ) * 100):.3f}%" , flush = True) ''' if val_accuracy > best_accuracy: best_accuracy = val_accuracy best_model_weights = copy.deepcopy (network.state_dict ( )) train_acc_history.append (train_accuracy) val_acc_history.append (val_accuracy) #time_elapsed = time.time ( ) - since #print ('Training complete in {:.0f}m {:.0f}s'.format (time_elapsed // 60 , time_elapsed % 60)) print ('Best Validation Accuracy: {:3f}'.format (best_accuracy * 100)) #network.load_state_dict (best_model_weights) ''' return network , train_acc_history , val_acc_history optimizer = optim.Adam(net.parameters(), lr = 0.01) train_network(10,trainloader, testloader, optimizer, net) Epoch: 1 train_accuracy: 83.703% val_accuracy: 86.820% Epoch: 2 train_accuracy: 88.893% val_accuracy: 89.400% Epoch: 3 train_accuracy: 90.297% val_accuracy: 89.700% Epoch: 4 train_accuracy: 91.272% val_accuracy: 90.640% Epoch: 5 train_accuracy: 91.948% val_accuracy: 91.250% ... So, if you tested with the training loop I used (yours with small mods) and it still doesn't work I would check data loader and play around with network architecture.
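Since a lot hinges on how the data-loaders are built, here is a minimal sketch of a typical ImageNet-style loader setup to compare against; the directory paths are hypothetical and the normalization statistics are the standard ImageNet values, so adapt both to your dataset. Forgetting the Normalize step, or feeding the training set in class-sorted order without shuffle=True, are common ways to end up near random-guess accuracy.

import torch
from torchvision import datasets, transforms

# Standard ImageNet statistics -- an assumption, recompute for your own dataset if needed
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])
val_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

# "data/train" and "data/val" are hypothetical ImageFolder-style directories
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
val_ds = datasets.ImageFolder("data/val", transform=val_tf)

train_loader = torch.utils.data.DataLoader(train_ds, batch_size=128, shuffle=True, num_workers=4)
val_loader = torch.utils.data.DataLoader(val_ds, batch_size=128, shuffle=False, num_workers=4)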
https://stackoverflow.com/questions/67788670/
Failed to Build Torch-Scatter in Pytorch Geometry
I am very new to the concept of Graph Neural Networks. To learn more I tried installing torch geometric, but it is giving a huge error(which I can't even paste here). My Versions: >>> import torch >>> torch.__version__ '1.8.1' >>> torch.version.cuda '10.1' The command I used to install torch geometric: pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu101.html The Error Trace: C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30037/include\xutility(4424): error: function "torch::OrderedDict<Key, Value>::Item::operator=(const torch::OrderedDict<std::string, at::Tensor>::Item &) [with Key=std::string, Value=at::Tensor]" (declared implicitly) cannot be referenced -- it is a deleted function detected during: instantiation of "_OutIt std::_Move_unchecked(_InIt, _InIt, _OutIt) [with _InIt=torch::OrderedDict<std::string, at::Tensor>::Item *, _OutIt=torch::OrderedDict<std::string, at::Tensor>::Item *]" C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30037/include\vector(1419): here instantiation of "std::vector<_Ty, _Alloc>::iterator std::vector<_Ty, _Alloc>::erase(std::vector<_Ty, _Alloc>::const_iterator) [with _Ty=torch::OrderedDict<std::string, at::Tensor>::Item, _Alloc=std::allocator<torch::OrderedDict<std::string, at::Tensor>::Item>]" C:/Users/Gopu/anaconda3/envs/torch_geometry/lib/site-packages/torch/include\torch/csrc/api/include/torch/ordered_dict.h(419): here instantiation of "void torch::OrderedDict<Key, Value>::erase(const Key &) [with Key=std::string, Value=at::Tensor]" C:/Users/Gopu/anaconda3/envs/torch_geometry/lib/site-packages/torch/include/torch/csrc/api/include\torch/nn/modules/container/parameterdict.h(51): here C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30037/include\xutility(4424): error: function "torch::OrderedDict<Key, Value>::Item::operator=(const torch::OrderedDict<std::string, std::shared_ptr<torch::nn::Module>>::Item &) [with Key=std::string, Value=std::shared_ptr<torch::nn::Module>]" (declared implicitly) cannot be referenced -- it is a deleted function detected during: instantiation of "_OutIt std::_Move_unchecked(_InIt, _InIt, _OutIt) [with _InIt=torch::OrderedDict<std::string, std::shared_ptr<torch::nn::Module>>::Item *, _OutIt=torch::OrderedDict<std::string, std::shared_ptr<torch::nn::Module>>::Item *]" C:/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Tools/MSVC/14.29.30037/include\vector(1419): here instantiation of "std::vector<_Ty, _Alloc>::iterator std::vector<_Ty, _Alloc>::erase(std::vector<_Ty, _Alloc>::const_iterator) [with _Ty=torch::OrderedDict<std::string, std::shared_ptr<torch::nn::Module>>::Item, _Alloc=std::allocator<torch::OrderedDict<std::string, std::shared_ptr<torch::nn::Module>>::Item>]" C:/Users/Gopu/anaconda3/envs/torch_geometry/lib/site-packages/torch/include\torch/csrc/api/include/torch/ordered_dict.h(419): here instantiation of "void torch::OrderedDict<Key, Value>::erase(const Key &) [with Key=std::string, Value=std::shared_ptr<torch::nn::Module>]" C:/Users/Gopu/anaconda3/envs/torch_geometry/lib/site-packages/torch/include/torch/csrc/api/include\torch/nn/modules/container/moduledict.h(196): here 2 errors detected in the compilation of "C:/Users/Gopu/AppData/Local/Temp/tmpxft_00001354_00000000-7_scatter_cuda.cpp1.ii". 
scatter_cuda.cu error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.1\\bin\\nvcc.exe' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\Users\Gopu\anaconda3\envs\torch_geometry\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Gopu\\AppData\\Local\\Temp\\pip-install-amy5gxh5\\torch-scatter_f0827337be9443d09c6b48e753621f6e\\setup.py'"'"'; __file__='"'"'C:\\Users\\Gopu\\AppData\\Local\\Temp\\pip-install-amy5gxh5\\torch-scatter_f0827337be9443d09c6b48e753621f6e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Gopu\AppData\Local\Temp\pip-record-h5amznql\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\Gopu\anaconda3\envs\torch_geometry\Include\torch-scatter' Check the logs for full command output.
Check your Python version: it should be less than 3.9, as the torch-scatter wheel for Python 3.9 has not been released yet. Create a new environment with Python 3.8, install the CUDA version of PyTorch, and then run: pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu101.html If that still doesn't work, try: pip install --upgrade torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu101.html Hope this helps
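As a concrete sketch of that suggestion (assuming conda is available and the CUDA 10.1 build of PyTorch 1.8.1 is the one you want; adjust the version tags otherwise), the sequence would look roughly like:

conda create -n pyg_env python=3.8
conda activate pyg_env
pip install torch==1.8.1+cu101 torchvision==0.9.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.8.1+cu101.html

The environment name pyg_env is arbitrary. The point is that pip can then pick up a prebuilt torch-scatter wheel instead of compiling it from source with MSVC/nvcc, which is where the errors in the question come from.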
https://stackoverflow.com/questions/67792006/
Weighted random sampler - oversample or undersample?
Problem I am training a deep learning model in PyTorch for binary classification, and I have a dataset containing unbalanced class proportions. My minority class makes up about 10% of the given observations. To avoid the model learning to just predict the majority class, I want to use the WeightedRandomSampler from torch.utils.data in my DataLoader. Let's say I have 1000 observations (900 in class 0, 100 in class 1), and a batch size of 100 for my dataloader. Without weighted random sampling, I would expect each training epoch to consist of 10 batches. Questions Will only 10 batches be sampled per epoch when using this sampler - and consequently, would the model 'miss' a large portion of the majority class during each epoch, since the minority class is now overrepresented in the training batches? Will using the sampler result in more than 10 batches being sampled per epoch (meaning the same minority class observations may appear many times, and also that training would slow down)?
A small snippet of code to use WeightedRandomSampler First, define the function: def make_weights_for_balanced_classes(images, nclasses): n_images = len(images) count_per_class = [0] * nclasses for _, image_class in images: count_per_class[image_class] += 1 weight_per_class = [0.] * nclasses for i in range(nclasses): weight_per_class[i] = float(n_images) / float(count_per_class[i]) weights = [0] * n_images for idx, (image, image_class) in enumerate(images): weights[idx] = weight_per_class[image_class] return weights And after this, use it in the following way: import torch dataset_train = datasets.ImageFolder(traindir) # For unbalanced dataset we create a weighted sampler weights = make_weights_for_balanced_classes(dataset_train.imgs, len(dataset_train.classes)) weights = torch.DoubleTensor(weights) sampler = torch.utils.data.sampler.WeightedRandomSampler(weights, len(weights)) train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=args.batch_size, sampler = sampler, num_workers=args.workers, pin_memory=True) Note that shuffle must not be passed together with sampler: DataLoader treats the two options as mutually exclusive and will raise an error if both are set.
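Regarding the two questions about epoch length: with num_samples=len(weights) as above, the sampler still draws exactly as many indices per epoch as there are samples (so in your 1000-sample example you still get 10 batches of 100), but it draws them with replacement according to the weights, so minority-class rows appear several times per epoch and part of the majority class is skipped in any given epoch (different rows get skipped in different epochs). A small self-contained sketch with made-up labels illustrating this:

import torch
from torch.utils.data import WeightedRandomSampler

# 900 samples of class 0, 100 samples of class 1, as in the question
labels = torch.cat([torch.zeros(900, dtype=torch.long), torch.ones(100, dtype=torch.long)])
class_counts = torch.bincount(labels)              # tensor([900, 100])
weights = 1.0 / class_counts[labels].double()      # one weight per sample

sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
idx = torch.tensor(list(sampler))                  # the indices drawn for one "epoch"

print(len(idx))                                    # 1000 -> still 10 batches of 100
print(labels[idx].float().mean())                  # roughly 0.5: the batches are balanced
print(len(idx.unique()))                           # < 1000: some majority rows were skipped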
https://stackoverflow.com/questions/67799246/
Pytorch - Use a UNet to perform Image Deblurring/Image Reconstruction
Currently, I'm working with a dataset where I have two kinds of images: "sharp version" of the image and "blurry version" of the same images, where a blur was added synthetically. My goal is to train a model that takes the blurry version of the images in and tries to deblur the image as much as it can so that the "deblurred image" is closer to the sharp version. In the literature, the UNet architecture seemed to be a model with good results. Additionally, I can use a pre-trained U-Net via Pytorch (https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/). My problem is now: When I train this pre-trained U-Net with my images and then try it on my test set, I get the following output: The original image: I know that this pre-trained model is usually used for biomedical image segmentation but I'm rather confused about how I have to modify the model to use it for an Image Deblurring/Reconstruction task. Does anyone have any advice on how to do this? I would appreciate any feedback :)
The U-Net you're using is for segmentation (classification of each pixel of the image), whereas you're trying to denoise the image (making it "sharper" / removing noise). That explains the results you got. To get what you want, as DerekG said, you first need to modify the number of channels of the output. Once you modify it, you can no longer load the whole pretrained model: you will have to copy the parameters layer by layer up to the last one. Since that last layer is initialized randomly, you can then retrain the model on your training set, freezing the pretrained parts or not. Also, I'm not sure what your new dataset is, but if it's really not related to biomedical images you should retrain your network from scratch (transfer learning shouldn't be done in these cases), and maybe even change the encoder-decoder network.
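As a rough sketch of the "change the output channels and reuse what you can" idea for the torch.hub model linked in the question -- the hub arguments follow its model page and the name of the final layer is an assumption, so print(model) and adapt if it differs:

import torch
import torch.nn as nn

# Load the pretrained brain-segmentation U-Net from the question's link
# (argument names taken from its hub page; treat them as assumptions)
model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
                       in_channels=3, out_channels=1, init_features=32, pretrained=True)

# Replace the final 1x1 convolution so the network outputs a 3-channel RGB image
# instead of a 1-channel segmentation mask. 'conv' is an assumed attribute name --
# inspect print(model) to find the real name of the last layer.
last = model.conv
model.conv = nn.Conv2d(last.in_channels, 3, kernel_size=1)

# Train with a reconstruction loss between the deblurred output and the sharp image,
# not a segmentation loss:
criterion = nn.L1Loss()   # or nn.MSELoss()

The freshly created last layer is randomly initialized, so the whole thing needs to be fine-tuned (or retrained from scratch if your images are far from biomedical data, as noted above).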
https://stackoverflow.com/questions/67807350/
Installing geffnet with pip
I used a google colab notebook to run a certain model. It required me to install geffnet like this. !pip -q install geffnet How can I install geffnet locally? I tried the line below but I get an error when trying to get efficientnet_b7. "RuntimeError: Unknown model (efficientnet_b7) pip3 install geffnet
Do your other Python install commands work properly? Try pinning a version, like this: pip install geffnet==0.9.0 If it still doesn't work, try running it locally with PyTorch installed instead of in Colab; sometimes the issue gets fixed that way
https://stackoverflow.com/questions/67812297/
Convert list of tensors into tensor pytorch
I have a list of embeddings. The list has N lists with M embedding (tensors) each. list_embd = [[M embeddings], [M embeddings], ...] (Each embedding is a tensor with size (1,512)) What I want to do is create a tensor size (N, M), where each "cell" is one embedding. Tried this for numpy array. array = np.zeros(n,m) for i in range(n): for j in range(m): array[i, j] = list_embd[i][j] But still got errors. In pytorch tried to concat all M embeddings into one tensor size (1, M), and then concat all rows. But when I concat along dim 1 two of those M embeddings, I get a tensor shaped (1, 1028) instead (1, 2). final = torch.tensor([]) for i in range(n): interm = torch.tensor([]) for j in range(m): interm = torch.cat((interm, list_embd[i][j]), 0) final = = torch.cat((final, interm), 1) Any ideas or suggestions? I need a matrix with the embeddings in each cell.
You can use torch.cat and torch.stack to create a final 3D tensor of shape (N, M, 512): final = torch.stack([torch.cat(sub_list, dim=0) for sub_list in list_embd], dim=0) First, you use torch.cat to create a list of N 2D tensors of shape (M, 512) from each list of M embeddings. Then torch.stack is used to stack these N 2D matrices into a single 3D tensor final.
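A quick self-contained check of that one-liner with dummy data shaped like the question describes (N lists of M embeddings, each of shape (1, 512)):

import torch

N, M = 4, 6
list_embd = [[torch.randn(1, 512) for _ in range(M)] for _ in range(N)]

final = torch.stack([torch.cat(sub_list, dim=0) for sub_list in list_embd], dim=0)
print(final.shape)                                             # torch.Size([4, 6, 512])
print(torch.equal(final[2, 3], list_embd[2][3].squeeze(0)))    # True: each "cell" is one embedding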
https://stackoverflow.com/questions/67814465/
Runtime error: CUDA out of memory by the end of training and doesn’t save model; pytorch
I'm not so experienced in Data Science and pytorch and I have problems with implementing at least anything here(currently I'm making a NN for segmentation tasks). There is some kind of memory problem, although it doesn't meen anything - every epoch takes a lot less memory than it is in the risen import torch from torch import nn from torch.autograd import Variable from torch.nn import Linear, ReLU6, CrossEntropyLoss, Sequential, Conv2d, MaxPool2d, Module, Softmax, Softplus ,BatchNorm2d, Dropout, ConvTranspose2d import torch.nn.functional as F from torch.nn import LeakyReLU,Tanh from torch.optim import Adam, SGD import numpy as np import cv2 as cv def train(epoch,model,criterion, x_train, y_train, loss_val): model.train() tr_loss = 0 # getting the training set x_train, y_train = Variable(x_train), Variable(y_train) # converting the data into GPU format # clearing the Gradients of the model parameters optimizer.zero_grad() # prediction for training and validation set output_train = model(x_train) # computing the training and validation loss loss_train = criterion(output_train, y_train) train_losses.append(loss_train) # computing the updated weights of all the model parameters loss_train.backward() optimizer.step() tr_loss = loss_train.item() return loss_train # printing the validation loss class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 96, (3,3), padding=1) self.conv11= nn.Conv2d(96, 96, (3,3), padding=1) self.conv12= nn.Conv2d(96, 96, (3,3), padding=1) self.pool = nn.MaxPool2d((2,2), 2) self.conv2 = nn.Conv2d(96, 192, (3,3), padding=1) self.conv21 = nn.Conv2d(192, 192, (3,3), padding=1) self.conv22 = nn.Conv2d(192, 192, (3,3), padding=1) self.b = BatchNorm2d(96) self.b1 = BatchNorm2d(192) self.b2 = BatchNorm2d(384) self.conv3 = nn.Conv2d(192,384,(3,3), padding=1) self.conv31= nn.Conv2d(384,384,(3,3), padding=1) self.conv32= nn.Conv2d(384,384,(3,3), padding=1) self.lin1 = nn.Linear(384*16*16, 256*2*2, 1) self.lin2 = nn.Linear(256*2*2, 16*16, 1) self.uppool = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False) self.upconv1= nn.ConvTranspose2d(385,192,(3,3), padding=1) self.upconv11=nn.ConvTranspose2d(192,32,(3,3), padding=1) self.upconv12=nn.ConvTranspose2d(32,1,(3,3), padding=1) self.upconv2= nn.ConvTranspose2d(193,96,(3,3), padding=1) self.upconv21= nn.ConvTranspose2d(96,16,(3,3), padding=1) self.upconv22= nn.ConvTranspose2d(16,1,(3,3), padding=1) self.upconv3= nn.ConvTranspose2d(97,16,(3,3), padding=1) self.upconv4= nn.ConvTranspose2d(16,8,(3,3), padding=1) self.upconv6= nn.ConvTranspose2d(8,1,(3,3), padding=1) def forward(self, x): m=Tanh() x1=self.b(m(self.conv12(m(self.conv11(m(self.conv1(x))))))) x = self.pool(x1) x2=self.b1(m(self.conv22(m(self.conv21(m(self.conv2(x))))))) x = self.pool(x2) x3=self.b2(m(self.conv32(m(self.conv31(m(self.conv3(x))))))) x=self.pool(x3) x = x.view(-1, 16*16*384) x = m(self.lin1(x)) x = m(self.lin2(x)) x = x.view(1, 1, 16, 16) x=torch.cat((x,self.pool(x3)),1) x = self.uppool(m(self.upconv12(m(self.upconv11(m(self.upconv1(x))))))) x=torch.cat((x,self.pool(x2)),1) x = self.uppool(m(self.upconv22(m(self.upconv21(m(self.upconv2(x))))))) x=torch.cat((x,self.pool(x1)),1) x = (self.uppool(m(self.upconv3(x)))) x = (m(self.upconv4(x))) l=Softplus() x= l(self.upconv6(x)) return x train_data=[] for path in range(1000): n="".join(["0" for i in range(5-len(str(path)))])+str(path) paths="00000\\"+n+".png" train_data.append(cv.imread(paths)) for path in range(2000,3000): n="".join(["0" for i in 
range(5-len(str(path)))])+str(path) paths="02000\\"+n+".png" train_data.append(cv.imread(paths)) train_output=[] for path in range(1,2001): n="outputs\\"+str(path)+".jpg" train_output.append(cv.imread(n)) data=torch.from_numpy((np.array(train_data,dtype=float).reshape(2000,3,128,128)/255)).reshape(2000,3,128,128) data_cuda=torch.tensor(data.to('cuda'), dtype=torch.float32) output=torch.from_numpy(np.array(train_output,dtype=float).reshape(2000,3,128,128))[:,2].view(2000,1,128,128)*2 output_cuda=torch.tensor(output.to('cuda'),dtype=torch.float32) model=Net() optimizer = Adam(model.parameters(), lr=0.1) criterion = nn.BCEWithLogitsLoss() if torch.cuda.is_available(): model = model.cuda() criterion = criterion.cuda() print(model) epochs=3 n_epochs = 1 train_losses = [] val_losses = [] for epoch in range(n_epochs): loss_train=0 for i in range(data.shape[0]): loss_train1=train(epoch,model,criterion,data_cuda[i].reshape(1,3,128,128),output_cuda[i].reshape(1,1,128,128),train_losses) loss_train+=loss_train1 print('Epoch : ',epoch+1, '\t', 'loss :', loss_train/data.shape[0]) with torch.no_grad(): torch.save(model.state_dict(), "C:\\Users\\jugof\\Desktop\\Python\\pytorch_models") a=np.array(model(data_cuda).to('cpu').numpy())*255 cv.imshow('',a.reshape(128,128)) cv.waitKey(0)""" Here is the error: PS C:\Users\jugof\Desktop\Python> & C:/Users/jugof/anaconda3/python.exe c:/Users/jugof/Desktop/Python/3d_visual_effect1.py c:/Users/jugof/Desktop/Python/3d_visual_effect1.py:98: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). data_cuda=torch.tensor(data.to('cuda'), dtype=torch.float32) c:/Users/jugof/Desktop/Python/3d_visual_effect1.py:101: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). output_cuda=torch.tensor(output.to('cuda'),dtype=torch.float32) Epoch : 1 loss : tensor(0.6933, device='cuda:0', grad_fn=) Traceback (most recent call last): File "c:/Users/jugof/Desktop/Python/3d_visual_effect1.py", line 120, in a=np.array(model(data_cuda).to('cpu').numpy())*255 File "C:\Users\jugof\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "c:/Users/jugof/Desktop/Python/3d_visual_effect1.py", line 62, in forward x1=self.b(m(self.conv12(m(self.conv11(m(self.conv1(x))))))) File "C:\Users\jugof\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl result = self.forward(input, **kwargs) File "C:\Users\jugof\anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward return self._conv_forward(input, self.weight, self.bias) File "C:\Users\jugof\anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 395, in _conv_forward return F.conv2d(input, weight, bias, self.stride, RuntimeError: CUDA out of memory. Tried to allocate 11.72 GiB (GPU 0; 6.00 GiB total capacity; 2.07 GiB already allocated; 1.55 GiB free; 2.62 GiB reserved in total by PyTorch) I feed a numpy array (an image) of 128128 shape and recieve another of the same shape, it's a segmentation model(again) I was using Flickr-Faces-HQ Dataset (FFHQ) and used downsampled 128*128 labels - I used 00000, 01000 and 02000 files and masks were recieved by opencv haarscascades_eye
The problem is your train_losses list (and the running loss_train sum in the epoch loop), which stores all losses from the beginning of your experiment. If the losses you kept were mere floats, that would not be an issue, but because you are not converting them to floats in the train function, you are actually storing loss tensors, with the whole computational graph embedded in them. Indeed, a tensor keeps pointers to all tensors that were involved in its computation, and as long as a pointer exists, the allocated memory cannot be freed. So basically you keep all tensors from all iterations and prevent pytorch from cleaning them; it's like a (deliberate) memory leak. You can very easily monitor this type of issue by running nvidia-smi -l 1 after having started your experiment: you will watch your memory usage grow linearly until your GPU runs out of memory (nvidia-smi is a good tool to use when doing stuff on your GPU). To prevent this from happening, simply store and return loss_train.item() instead of the tensor (i.e. replace the last line of the train function with return loss_train.item() and append the .item() value to train_losses), and the memory issue will vanish
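Concretely, the fix boils down to keeping plain Python floats instead of graph-carrying tensors; a minimal sketch of only the lines that change inside the question's train function:

# inside train(...):
loss_train = criterion(output_train, y_train)
train_losses.append(loss_train.item())   # store a float, not the tensor with its graph
loss_train.backward()
optimizer.step()
return loss_train.item()                 # return a float as well

# the accumulation in the epoch loop is then plain float addition and holds no graph:
# loss_train += loss_train1

Note also that the final model(data_cuda) call in the question pushes all 2000 images through the network at once, which is what the 11.72 GiB allocation in the traceback corresponds to; running that prediction in smaller batches (and inside torch.no_grad()) avoids it.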
https://stackoverflow.com/questions/67819077/
In Pytorch, quantity.backward() computes the gradient of quantity wrt which of the parameters?
The backward method computes the gradient wrt to which parameters? All of the params with requires_grad having True value? Interestingly, in Pytorch computing gradients and loading the optimizer that updates parameters based on gradients need different informations about the identity of parameters of interest to be able to work. The first one seem to know which parameters to compute the gradient for. The second one needs the parameters to be mentioned to it. See the code below. quantity.backward() optim = torch.SGD(model.parameters()) optim.step() How is that? Why backward does not need the model.parameters()? Would it not be more efficient to mention the specific subset of parameters?
Computing quantity requires constructing a 2-sorted graph with nodes being either tensors or differentiable operations on tensors (a so-called computational graph). Under the hood, pytorch keeps track of this graph for you. When you call quantity.backward(), you're asking pytorch to perform an inverse traversal of the graph, from the output to the inputs, using the derivative of each operation encountered rather than the operation itself. Leaf tensors that are flagged as requiring gradients accumulate the gradients computed by backward. An optimizer is a different story: it simply implements an optimization strategy on a set of parameters, hence it needs to know which parameters you want it to be optimizing. So quantity.backward() computes gradients, and optim.step() uses these gradients to perform an optimization step, updating the parameters contained in model. As for efficiency, I don't see any argument in favor of specifying parameters in the backward pass (what would the semantics of that be?). If what you want is to avoid traversal of parts of the graph in backward mode, pytorch will do it automagically for you if you remember that: (1) you can mark leaf tensors as not requiring grad; (2) a non-leaf tensor -- the output of some operation f(x1,...xN) -- requires grad if at least one of x1...xN requires grad; (3) a tensor that doesn't require grad blocks backward traversal, ensuring no unnecessary computation.
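A tiny sketch that makes the division of labour visible -- backward fills .grad on every leaf that requires grad, while the optimizer only updates the tensors it was explicitly given:

import torch

w = torch.randn(3, requires_grad=True)   # leaf, tracked
b = torch.randn(3, requires_grad=True)   # leaf, tracked
x = torch.randn(3)                       # leaf, not tracked

quantity = ((w * x + b) ** 2).sum()
quantity.backward()                      # inverse traversal of the graph

print(x.grad)                            # None: traversal stops at untracked leaves
print(w.grad, b.grad)                    # both populated

opt = torch.optim.SGD([w], lr=0.1)       # the optimizer only knows about w
opt.step()                               # updates w using w.grad; b stays untouched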
https://stackoverflow.com/questions/67826958/
How can I handle this datasets to create a datasetDict?
I'm trying to build a datasetDictionary object to train a QA model on PyTorch. I have these two different datasets: test_dataset Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: 21489 }) and train_dataset Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: 54159 }) In the dataset's documentation I didn't find anything. I'm quite a noob, thus the solution may be really easy. What I wish to obtain is something like this: dataset DatasetDict({ train: Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: 54159 }) test: Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: 21489 }) }) I really don't find how to use two datasets to create a dataserDict or how to set the keys. Moreover, I wish to "cut" the train set in two: train and validation sets, but also this passage is hard for me to handle. The final result should be something like this: dataset DatasetDict({ train: Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: 54159 - x }) validation: Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: x }) test: Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: 21489 }) }) Thank you in advance and pardon me for being a noob :)
To get the validation dataset, you can do it like this: train_dataset, validation_dataset = train_dataset.train_test_split(test_size=0.1).values() This call splits off 10% of the train dataset as the validation dataset. And to obtain a "DatasetDict", you can do it like this: import datasets dd = datasets.DatasetDict({"train": train_dataset, "test": test_dataset})
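Putting both steps together into the exact structure asked for (a minimal sketch, assuming train_dataset and test_dataset are the two Dataset objects shown in the question):

import datasets

train_dataset, validation_dataset = train_dataset.train_test_split(test_size=0.1).values()

dataset = datasets.DatasetDict({
    "train": train_dataset,
    "validation": validation_dataset,
    "test": test_dataset,
})
print(dataset)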
https://stackoverflow.com/questions/67852880/
i'm confused with CoordConv
i read a paper which written by uber lab https://medium.com/@Cambridge_Spark/coordconv-layer-deep-learning-e02d728c2311 they create a network named Coordconv,and in this coordconv they not only add two layer of meshgrid but also with a simple conv net. it said through this way they add positional info to every pixel points? 2.so that after conv the pixel points still remain in same place as in original image? and this is also working to add two layers of meshgrid to freature maps which draw from neural network? how could Meshgrid help add positional info to the image? Does this just simply added two layers which are the same size as the original image but is in[-1,1] meshgrid to original input image? a big THANKS in advance!
About CoordConv Here is the original paper which proposed the CoordConv layer: CoordConv paper. I will try to convey my instinctive undersanding of this operation. How AddCoords works The way the information is added is by stacking (concatenating, to be more accurate) two new 2D tensors to the data. Those two channels are not multiplied together, therefore there is no meshgrid involved in this process. Say we are at a specific layer of the network. The last convolution step produced 4 2D-tensors of shape 8x8, each of which is the result of the previous convolution by a filter (thus we had 4 kernels in the previous step). They are in reality stacked in a single tensor of size bs * 8 * 8 * 4 where bs is the batch size, but let's ignore the batch size from now. The AddCoords method will create two other 2D tensors: xx_channel: [[0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1], [2, 2, 2, 2, 2, 2, 2, 2], [3, 3, 3, 3, 3, 3, 3, 3], [4, 4, 4, 4, 4, 4, 4, 4], [5, 5, 5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 6, 6, 6], [7, 7, 7, 7, 7, 7, 7, 7]] and yy_channel: [[0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7], [0, 1, 2, 3, 4, 5, 6, 7]] Those are the results of the matmuls of the tf.range by the tf.ones. They will then be scaled to fit in the range [-1, 1] and casted to tensorflow.float32 type: xx_channel: [[-1. , -1. , -1. , -1. , -1. , -1. , -1. , -1. ], [-0.71428571, -0.71428571, -0.71428571, -0.71428571, -0.71428571, -0.71428571, -0.71428571, -0.71428571], [-0.42857143, -0.42857143, -0.42857143, -0.42857143, -0.42857143, -0.42857143, -0.42857143, -0.42857143], [-0.14285714, -0.14285714, -0.14285714, -0.14285714, -0.14285714, -0.14285714, -0.14285714, -0.14285714], [ 0.14285714, 0.14285714, 0.14285714, 0.14285714, 0.14285714, 0.14285714, 0.14285714, 0.14285714], [ 0.42857143, 0.42857143, 0.42857143, 0.42857143, 0.42857143, 0.42857143, 0.42857143, 0.42857143], [ 0.71428571, 0.71428571, 0.71428571, 0.71428571, 0.71428571, 0.71428571, 0.71428571, 0.71428571], [ 1. , 1. , 1. , 1. , 1. , 1. , 1. , 1. ]] yy_channel: [[-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.], [-1., -0.71428571, -0.42857143, -0.14285714, 0.14285714, 0.42857143, 0.71428571, 1.]] They will then be concatenated to the other 2D-tensors along the last dimension ("-1"), ending up with a 3D-tensor with shape 8 * 8 * 6(again, the dimension of the batch size is ignored in my explanation). Those two generated channels are what the authors in the paper call coordinate informations. The method literally adds the coordinates of each 2D position : the y-coord and the x-coord. In our example, let's consider the values of an input tensor at position [4, 5], meaning the values along the last dimension (size 4), which is accessible like this : input_tensor[4, 5, :]. 
It may return something like this : input_tensor[4, 5, :] # > [0.75261177, 0.62114716, 0.76845441, 0.44747785] After AddCoords, it becomes: ret[4, 5, :] # > [0.75261177, 0.62114716, 0.76845441, 0.44747785, 0.14285714, 0.42857143] ... where 0.14285714 is the scaled value of 4 ie its y-coord and 0.42857143 is the scaled value of 5 ie its x-coord. The information about coordinates is now contained inside the resulting tensor, which is returned by the AddCoords method. The CoordConv It's a designed layer that applies AddCoords to the input and feeds the resulting tensor to a classic Conv2D layer. As such, it can be added to a neural network, as you would do with a Conv2D layer. That's what the authors did, when experimenting with GANs for example, where they substitued Conv2D with CoordConv (which, again, includes a Conv2D). Let me know if that answers your questions and/or correct any misconceptions. What does it imply for the neural network ? More trainable parameters... Let's give a bit more context to our previous example. In our previous example, the last layer yielded a tensor with shape 8 x 8 x 4. Let's say we want the next convolution layer to yield 16 output filters, from a convolution window of 3 * 3. You can see this link to get what convolution does mathematically , chapter 2.1 . You can get a basic understanding of what the convolution operation yields thanks to this visualizer. Just keep in mind both links show a single kernel and a single channel input matrix. If we don't add the coordinate tensors, the convolution to come will have 16 kernels with shape 3 x 3 x 4 each. If we do apply AddCoords, we will feed a tensor with shape 8 x 8 x 6 instead, and our 16 kernels will each have the shape 3 x 3 x 6. You can think of those kernels as neurons. Each neuron has 3 x 3 x 4 == 36 weights (Conv2D) or 3 x 3 x 6 == 54 weights (AddCoords+Conv2D, or CoordConv). Their weights will be updated during the learning process. Knowing this, it should appear evident that the coordinates channels of CoordConv implies new and specific weights to each kernel of the convolution layer. That's how the neural network takes into consideration these coordinates. ... implied in similar training processes If you haven't been experimenting with Machine Learning, the supervised learning process of a neural network might be quite complex to comprehend, but it's more general and could be resumed (oversimplified) as: We calculate the error, which is a mathematical way to describe how far the prediction is from the ground truth. Then we update (add) each parameter (or weight) in the network, layer after layer from the output layer to the input one, by a value that represents its implication in this error and the direction it should take to decrease the error. This process is called "backpropagation of the error".
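If it helps to see it in code, here is a minimal PyTorch sketch of the AddCoords + Conv2d combination described above (channel-first layout, coordinates scaled to [-1, 1]); it follows the paper's description rather than any particular reference implementation:

import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # +2 input channels for the yy and xx coordinate maps
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        yy = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xx = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, yy, xx], dim=1))

layer = CoordConv2d(4, 16, kernel_size=3, padding=1)
out = layer(torch.randn(10, 4, 8, 8))    # the 8x8x4 feature map from the example above
print(out.shape)                         # torch.Size([10, 16, 8, 8])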
https://stackoverflow.com/questions/67857323/
pytorch isn't running on gpu while true
I want to train on my local gpu but it's only running on cpu while torch.cuda.is_available() is actually true and i can see my gpu but it runs only on cpu , so how to fix it my CNN model: import torch.nn as nn import torch.nn.functional as F from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True # define the CNN architecture class Net(nn.Module): ### TODO: choose an architecture, and complete the class def __init__(self): super(Net, self).__init__() ## Define layers of a CNN self.conv1 = nn.Conv2d(3, 16, 3, padding=1) # convolutional layer (sees 16x16x16 tensor) self.conv2 = nn.Conv2d(16, 32, 3, padding=1) # convolutional layer (sees 8x8x32 tensor) self.conv3 = nn.Conv2d(32, 64, 3, padding=1) # max pooling layer self.pool = nn.MaxPool2d(2, 2) # linear layer (64 * 4 * 4 -> 500) self.fc1 = nn.Linear(64 * 28 * 28, 500) # linear layer (500 -> 10) self.fc2 = nn.Linear(500, 133) # dropout layer (p=0.25) self.dropout = nn.Dropout(0.25) def forward(self, x): ## Define forward behavior x = self.pool(F.relu(self.conv1(x))) #print(x.shape) x = self.pool(F.relu(self.conv2(x))) #print(x.shape) x = self.pool(F.relu(self.conv3(x))) #print(x.shape) #print(x.shape) # flatten image input x = x.view(-1, 64 * 28 * 28) # add dropout layer x = self.dropout(x) # add 1st hidden layer, with relu activation function x = F.relu(self.fc1(x)) # add dropout layer x = self.dropout(x) # add 2nd hidden layer, with relu activation function x = self.fc2(x) return x #-#-# You so NOT have to modify the code below this line. #-#-# # instantiate the CNN model_scratch = Net() # move tensors to GPU if CUDA is available if use_cuda: print("TRUE") model_scratch = model_scratch.cuda() train function : def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path): """returns trained model""" # initialize tracker for minimum validation loss valid_loss_min = np.Inf loaders_scratch = {'train': train_loader,'valid': valid_loader,'test': test_loader} for epoch in range(1, n_epochs+1): # initialize variables to monitor training and validation loss train_loss = 0.0 valid_loss = 0.0 ################### # train the model # ################### model.train() for batch_idx, (data, target) in enumerate(loaders['train']): # move to GPU if use_cuda: data, target = data.cuda(), target.cuda() ## find the loss and update the model parameters accordingly ## record the average training loss, using something like ## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss)) # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update training loss train_loss += loss.item()*data.size(0) ###################### # validate the model # ###################### model.eval() for batch_idx, (data, target) in enumerate(loaders['valid']): # move to GPU if use_cuda: data, target = data.cuda(), target.cuda() ## update the average validation loss output = model(data) # calculate the batch loss loss = criterion(output, target) # update average validation loss valid_loss += loss.item()*data.size(0) # calculate average losses train_loss = train_loss/len(train_loader.dataset) valid_loss = valid_loss/len(valid_loader.dataset) # print training/validation statistics print('Epoch: {} \tTraining 
Loss: {:.6f} \tValidation Loss: {:.6f}'.format( epoch, train_loss, valid_loss )) ## TODO: save the model if validation loss has decreased if valid_loss <= valid_loss_min: print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format( valid_loss_min, valid_loss)) torch.save(model.state_dict(), save_path) valid_loss_min = valid_loss # return trained model return model # train the model loaders_scratch = {'train': train_loader,'valid': valid_loader,'test': test_loader} model_scratch = train(100, loaders_scratch, model_scratch, optimizer_scratch, criterion_scratch, use_cuda, 'model_scratch.pt') # load the model that got the best validation accuracy model_scratch.load_state_dict(torch.load('model_scratch.pt')) while i am getting "TRUE" in torch.cuda.is_available() but still not running on GPU i am only running on CPU the below picture shows that i am running on cpu with 62%
To utilize CUDA in pytorch you have to specify that you want to run your code on the GPU device. A line of code like: use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") will determine whether you have CUDA available and, if so, make it your device. Later in the code you have to move your model and tensors to this device: net = net.to(device) and do the same for the other tensors that need to go to the GPU, like the test and training values.
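A small sketch of that pattern, plus a quick way to verify where things actually live (model and train_loader stand in for your own objects):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = model.to(device)
print(next(model.parameters()).device)    # should print cuda:0

for x, t in train_loader:
    x, t = x.to(device), t.to(device)     # every batch has to be moved as well
    out = model(x)

Also note that Windows Task Manager's default view can under-report GPU compute; watching nvidia-smi (or switching one of the Task Manager GPU graphs to "Cuda") is a more reliable way to confirm the GPU is actually busy.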
https://stackoverflow.com/questions/67859185/
Understanding the order when reshaping a tensor
For a tensor: x = torch.tensor([ [ [[0.4495, 0.2356], [0.4069, 0.2361], [0.4224, 0.2362]], [[0.4357, 0.6762], [0.4370, 0.6779], [0.4406, 0.6663]] ], [ [[0.5796, 0.4047], [0.5655, 0.4080], [0.5431, 0.4035]], [[0.5338, 0.6255], [0.5335, 0.6266], [0.5204, 0.6396]] ] ]) Firstly would like to split it into 2 (x.shape[0]) tensors then concat them. Here, i dont really have to actually split it as long as i get the correct output, but it makes a lot more sense to me visually to split it then concat them back together. For example: # the shape of the splits are always the same split1 = torch.tensor([ [[0.4495, 0.2356], [0.4069, 0.2361], [0.4224, 0.2362]], [[0.4357, 0.6762], [0.4370, 0.6779], [0.4406, 0.6663]] ]) split2 = torch.tensor([ [[0.5796, 0.4047], [0.5655, 0.4080], [0.5431, 0.4035]], [[0.5338, 0.6255], [0.5335, 0.6266], [0.5204, 0.6396]] ]) split1 = torch.cat((split1[0], split1[1]), dim=1) split2 = torch.cat((split2[0], split2[1]), dim=1) what_i_want = torch.cat((split1, split2), dim=0).reshape(x.shape[0], split1.shape[0], split1.shape[1]) For the above result, i thought directly reshaping x.reshape([2, 3, 4]) would work, it resulted in the correct dimension but incorrect result. In general i am: not sure how to split the tensor into x.shape[0] tensors. confused about how reshape works. Most of the time i am able to get the dimension right, but the order of the numbers are always incorrect. Thank you
The order of the elements in memory in python, pytorch, numpy, c++ etc. is row-major ordering: [ first, second third, fourth ] While in matlab, fortran, etc. the order is column-major: [ first, third second, fourth ] For higher dimensional tensors, this means elements are ordered from the last dimension to the first. You can easily visualize it using torch.arange followed by .view: a = torch.arange(24).view(2,3,4) Results with tensor([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) As you can see the elements are ordered first by row (last dimension), then by column, and finally by the first dimension. When you reshape a tensor, you do not change the underlying order of the elements, only the shape of the tensor. However, if you permute a tensor, you change the order in which the elements are read out. Look at the difference between a.view(3,2,4) and a.permute(1,0,2) - the shape of the resulting two tensors is the same, but not the ordering of elements: In []: a.view(3,2,4) Out[]: tensor([[[ 0, 1, 2, 3], [ 4, 5, 6, 7]], [[ 8, 9, 10, 11], [12, 13, 14, 15]], [[16, 17, 18, 19], [20, 21, 22, 23]]]) In []: a.permute(1,0,2) Out[]: tensor([[[ 0, 1, 2, 3], [12, 13, 14, 15]], [[ 4, 5, 6, 7], [16, 17, 18, 19]], [[ 8, 9, 10, 11], [20, 21, 22, 23]]])
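Applied to the tensor in your question: the split-and-concat you describe is a permutation of dimensions, not a plain reshape, which is why x.reshape([2, 3, 4]) gives the right shape but the wrong ordering. A short sketch (using x, split1, split2 and what_i_want exactly as defined in the question) that should reproduce your result directly:

import torch

# swap the two middle dimensions first, then flatten the last two together
direct = x.permute(0, 2, 1, 3).reshape(x.shape[0], x.shape[2], x.shape[1] * x.shape[3])
print(torch.equal(direct, what_i_want))   # True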
https://stackoverflow.com/questions/67868450/
Can I add new training pictures to my object detection model without re-running the whole training again?
I used yolov5 to train an object detection model. is it possible to add more annotated images after i have already trained the original model or must i restart the whole training with the new set of images?
You are asking about continual learning - this is a very active field of research, and there is no single solution/method to tackle it. You'll have to do more research to find the right approach for your specific settings.
https://stackoverflow.com/questions/67898366/
Pytorch mixed precision learning, torch.cuda.amp running slower than normal
I am trying to infer results out of a normal resnet18 model present in torchvision.models attribute. The model is simply trained without any mixed precision learning, purely on FP32. However, I want to get faster results while inferencing, so I enabled torch.cuda.amp.autocast() function only while running a test inference case. The code for the same is given below - model = torchvision.models.resnet18() model = model.to(device) # Pushing to GPU # Train the model normally Without amp - tensor = torch.rand(1,3,32,32).to(device) # Random tensor for testing with torch.no_grad(): model.eval() start = torch.cuda.Event(enable_timing=True) end = torch.cuda.Event(enable_timing=True) model(tensor) # warmup model(tensor) # warmpup start.record() for i in range(20): # total time over 20 iterations model(tensor) end.record() torch.cuda.synchronize() print('execution time in milliseconds: {}'. format(start.elapsed_time(end)/20)) execution time in milliseconds: 5.264944076538086 With amp - tensor = torch.rand(1,3,32,32).to(device) with torch.no_grad(): model.eval() start = torch.cuda.Event(enable_timing=True) end = torch.cuda.Event(enable_timing=True) model(tensor) model(tensor) start.record() with torch.cuda.amp.autocast(): # autocast initialized for i in range(20): model(tensor) end.record() torch.cuda.synchronize() print('execution time in milliseconds: {}'. format(start.elapsed_time(end)/20)) execution time in milliseconds: 10.619884490966797 Clearly, the autocast() enabled code is taking double the time. Even, with larger models like resnet50, the timing variation is approximately the same. Can someone help me out regarding this ? I am running this example on Google Colab and below are the specifications of the GPU +-----------------------------------------------------------------------------+ | NVIDIA-SMI 465.27 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 | | N/A 43C P0 28W / 250W | 0MiB / 16280MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ torch.version.cuda == 10.1 torch.__version__ == 1.8.1+cu101
It's most likely because of the GPU you're using - P100, which has 3584 CUDA cores but 0 tensor cores -- the latter of which typically play the main role in mixed precision speedup. You may want to take a quick look at the "Hardware Comparison" section on this article. If you're stuck to using Colab, the only way I can foresee a possible speedup is if you get assigned a T4, which has tensor cores. Furthermore, it seems like you're using only a single image / a batch size of 1. If you get a T4, try re-running your benchmarks also using a larger batch size, like maybe 32-64-128-256 etc. You should be able to notice much more visible improvements when you parallelize over batches.
https://stackoverflow.com/questions/67904276/
How to convert this tensor flow code into pytorch code?
I am trying to implement an Image Denoising Gan which is written in tensorflow to pytorch and I am unable to understand what is tf.variable_scope and tf.Variable similar in pytorch. please help. def conv_layer(input_image, ksize, in_channels, out_channels, stride, scope_name, activation_function=lrelu, reuse=False): with tf.variable_scope(scope_name): filter = tf.Variable(tf.random_normal([ksize, ksize, in_channels, out_channels], stddev=0.03)) output = tf.nn.conv2d(input_image, filter, strides=[1, stride, stride, 1], padding='SAME') output = slim.batch_norm(output) if activation_function: output = activation_function(output) return output, filter def residual_layer(input_image, ksize, in_channels, out_channels, stride, scope_name): with tf.variable_scope(scope_name): output, filter = conv_layer(input_image, ksize, in_channels, out_channels, stride, scope_name+"_conv1") output, filter = conv_layer(output, ksize, out_channels, out_channels, stride, scope_name+"_conv2") output = tf.add(output, tf.identity(input_image)) return output, filter def transpose_deconvolution_layer(input_tensor, used_weights, new_shape, stride, scope_name): with tf.varaible_scope(scope_name): output = tf.nn.conv2d_transpose(input_tensor, used_weights, output_shape=new_shape, strides=[1, stride, stride, 1], padding='SAME') output = tf.nn.relu(output) return output def resize_deconvolution_layer(input_tensor, new_shape, scope_name): with tf.variable_scope(scope_name): output = tf.image.resize_images(input_tensor, (new_shape[1], new_shape[2]), method=1) output, unused_weights = conv_layer(output, 3, new_shape[3]*2, new_shape[3], 1, scope_name+"_deconv") return output
You can replace tf.Variable with torch.tensor (created with requires_grad=True); a torch tensor can hold gradients all the same. In PyTorch you also don't create a graph and then access things in it by name via some scope: you just create the tensor and can access it directly. The output variable there is simply accessible to you, to do with and reuse however you see fit. In fact, if your code isn't directly using this variable scope then you can likely just ignore it. Often the variable scopes are just there to give convenient names to things if you were ever to inspect the graph.
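As a rough PyTorch sketch of the conv_layer block from the question (an approximation rather than a drop-in equivalent: TF's 'SAME' padding, slim.batch_norm defaults, the channels-last layout and the LeakyReLU slope all differ slightly from the choices below):

import torch
import torch.nn as nn

class ConvLayer(nn.Module):
    def __init__(self, in_channels, out_channels, ksize, stride):
        super().__init__()
        # padding=ksize//2 approximates TF 'SAME' padding for odd kernels and stride 1
        self.conv = nn.Conv2d(in_channels, out_channels, ksize, stride=stride, padding=ksize // 2)
        nn.init.normal_(self.conv.weight, std=0.03)   # mirrors tf.random_normal(stddev=0.03)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.LeakyReLU(0.2)                  # slope is an assumption for lrelu

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

layer = ConvLayer(3, 64, ksize=3, stride=1)
print(layer(torch.randn(1, 3, 32, 32)).shape)         # torch.Size([1, 64, 32, 32])

The residual_layer then becomes two of these followed by out = out + x, and the deconvolutions map to nn.ConvTranspose2d / nn.Upsample, mirroring how the question's TF code composes them.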
https://stackoverflow.com/questions/67940962/
Simulating many agents in PyTorch using multiprocessing
I want to simulate multiple reinforcement learning agents that are coded using Pytorch. The agents do not share any data dynamically, so I expect that the task should be "embarassingly parallel". I need a lot of simulations (I want to see what is the distribution my agents converge to) so I hope to speed it up using multiprocessing. I have a model class that stores all the parameters of my agents (which are the same across agents) and the environment. I can simulate N agents over T periods using model.simulate(N = 10, T = 50) My class would then run simulation loops and store all networks and simulation histories. I am very new to parallel programming, and I (naively) try the following: import torch.multiprocessing as mp num_processes = 6 processes = [] for _ in range(num_processes): p = mp.Process(target=model.simulate(N = 10, T = 50), args= ()) p.start() processes.append(p) for p in processes: p.join() For now I do not even try to store results, I just want to see some speed-up. But the time it takes to run the code above is roughly the same as when I simply run a loop and do 6 simulations consequently: for _ in range(num_processes): model.simulate(N = 10, T = 50) I also tried to make processes for different instances of the model class, but it did not help.
It looks like your problem is in this line p = mp.Process(target=model.simulate(N = 10, T = 50), args= ()) The part model.simulate(N = 10, T = 50) is executed first, then the result (I'm assuming None if there is no return from this method) is passed to mp.Process as the target parameter. So you are doing all the computation sequentially, and not performing it on the new processes. What you need to do instead is to pass the simulate function (without executing it) and provide the args separately, i.e. something like... p = mp.Process(target=model.simulate, args=(10, 50)) Providing target=model.simulate will pass a reference to the function itself rather than executing it and passing the result. This way it will be executed on the new process and you should achieve the parallelism. See the official docs for an example.
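So the corrected version of the loop from the question would look roughly like this (a sketch; if simulate should receive keyword arguments, use kwargs instead of args):

import torch.multiprocessing as mp

if __name__ == "__main__":               # required with the spawn start method (e.g. on Windows)
    num_processes = 6
    processes = []
    for _ in range(num_processes):
        p = mp.Process(target=model.simulate, kwargs={"N": 10, "T": 50})
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

Keep in mind a Process does not hand its return value back to the parent; once you want the simulation histories back, look at multiprocessing.Pool / Queue, or have each worker write its results to disk.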
https://stackoverflow.com/questions/67956061/
Python - PyTorch: IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
I am working with PyTorch on a Text Classification problem with BERT. This is the PyTorch Dataset format I am using but when I try to access the inputs from the Dataset I get an error. PyTorch Dataset The Dataset Returns a Dictionary containing : ids, mask, token_type_ids, targets class JigsawDataset: def __init__(self, df, train_transforms = None): self.comment_text = df["comment_text"].values self.target = df["toxic"].values self.tokenizer = config.BERT_TOKENIZER self.max_len = config.MAX_LEN self.langs = df["lang"].values self.train_transforms = train_transforms def __len__(self): return len(self.comment_text) def __getitem__(self, item): comment_text = str(self.comment_text[item]) comment_text = " ".join(comment_text.split()) lang = self.langs[item] if self.train_transforms: comment_text, _ = self.train_transforms(data=(comment_text, lang))['data'] inputs = self.tokenizer.encode_plus( comment_text, None, add_special_tokens=True, max_length=self.max_len, pad_to_max_length=True, truncation=True, ) ids = inputs["input_ids"] mask = inputs["attention_mask"] token_type_ids = inputs["token_type_ids"] data_loader_dict = {} data_loader_dict["ids"] = torch.tensor(ids, dtype=torch.long) data_loader_dict["mask"] = torch.tensor(mask, dtype=torch.long) data_loader_dict["token_type_ids"] = torch.tensor(token_type_ids, dtype=torch.long) data_loader_dict["targets"] = torch.tensor(self.target[item], dtype=torch.float) return data_loader_dict Relevant Code which Gives Error In this case I am trying to load only 1 Sample and make it to the format of the PyTorch Dataset df = pd.read_csv("dataset.csv") df = df.head(1) # Trying with only 1 Sample dataset = JigsawDataset(df) ids = dataset["ids"] # Error occurs at this line mask = dataset["mask"] token_type_ids = ["token_type_ids"] Error --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-78-4608dd623cac> in <module> 3 dataset = JigsawDataset(df) 4 ----> 5 ids = dataset["ids"] # Error occurs at this line 6 mask = dataset["mask"] 7 token_type_ids = ["token_type_ids"] <ipython-input-40-121d8aa71516> in __getitem__(self, item) 13 14 def __getitem__(self, item): ---> 15 comment_text = str(self.comment_text[item]) 16 comment_text = " ".join(comment_text.split()) 17 lang = self.langs[item] IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices How to fix this?
I figured out the problem. Incorrect Code ids = dataset["ids"] mask = dataset["mask"] token_type_ids = ["token_type_ids"] Correct Code ids = dataset[0]["ids"] mask = dataset[0]["mask"] token_type_ids = dataset[0]["token_type_ids"] The problem was that "ids", "mask" and "token_type_ids" are dictionary keys: JigsawDataset returns a dictionary for each sample, so in order to access a sample's fields we need to specify the sample index ([0]) before specifying the key.
https://stackoverflow.com/questions/67956097/
Pytorch issue with loss and number of epochs
I'm building a neural network by adapting the code shown in curiosily's tutorial. Instead of using weather data, I'm feeding in my own data (all numerical) to solve a time-series regression problem. Under the Finding Good Parameters section, they calculate the loss (difference between calculated and actual output values). With my data (and using different optimizer, no. nodes, no. layers, etc.), the Train set - loss and Test set - loss values can decrease with the no. epochs, then the loss values increase again. The accuracy is always 0.0. I want to understand why this happens, what would be an ideal loss value (zero?), and how I can adjust my model parameters to avoid this issue. I'm basically using the same code in the tutorial, with a different neural network: class Net(nn.Module): def __init__(self, n_features): super(Net, self).__init__() # n_features = no. inputs n1 = 8 # no. nodes in layer 1 n2 = 5 n3 = 4 n4 = 5 n5 = 2 self.fc1 = nn.Linear(n_features,n1) self.fc2 = nn.Linear(n1,n2) self.fc3 = nn.Linear(n2,n3) self.fc4 = nn.Linear(n3,n4) self.fc5 = nn.Linear(n4,n5) self.fc6 = nn.Linear(n5,1) def forward(self, x): #x = F.relu(self.fc1(x)) x = torch.tanh(self.fc1(x)) # activation function in layer 1 x = torch.sigmoid(self.fc2(x)) x = torch.sigmoid(self.fc3(x)) x = torch.sigmoid(self.fc4(x)) x = torch.tanh(self.fc5(x)) return torch.sigmoid(self.fc6(x)) For the training/testing data, print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape) gives torch.Size([20, 8]) torch.Size([20]) torch.Size([6, 8]) torch.Size([6]) Here's some of my data: Price f1 f2 f3 f4 \ Date 2015-03-02 90.196107 1803.892 113.146970 12.643646 2125.656231 2015-03-09 64.135647 1800.734 107.968714 5.875968 2121.790735 2015-03-16 79.552756 1704.983 110.304459 12.003638 2009.193045 2015-03-23 82.191813 1607.716 107.720195 6.442494 2020.463010 2015-03-30 69.386627 1522.380 108.315439 13.252422 1979.088367 2016-03-07 66.651752 2084.698 113.987594 15.707330 2101.044023 2016-03-14 65.263433 2089.886 110.828986 10.185968 2126.727206 2016-03-21 67.420919 2152.666 111.177730 8.500986 2167.854746 2016-03-28 41.540860 2280.450 95.394193 11.750658 2103.708359 2017-03-06 45.244413 2383.778 110.464190 21.425014 2053.123167 2017-03-13 54.460675 2289.858 109.539569 10.345976 1982.583561 2017-03-20 41.063493 2185.491 106.347338 25.485176 1946.495832 2017-03-27 49.431981 2087.931 110.003395 10.732664 2032.264678 2018-03-05 73.660636 2204.947 108.703186 5.965236 2017.757273 2018-03-12 65.089474 2244.313 105.978320 11.164498 2102.231834 2018-03-19 61.284307 2240.600 106.864093 8.307786 2130.436459 2018-03-26 57.872814 2256.034 107.546072 16.750366 2153.384082 2019-03-04 173.318212 1826.327 113.837832 16.328690 2130.480772 2019-03-11 199.718808 1789.397 110.402293 6.385144 2038.025531 2019-03-18 206.258064 1809.019 109.644544 4.469384 1957.963904 2019-03-25 186.447336 1779.967 111.211074 17.378698 1948.683384 2020-03-02 63.820617 2586.044 113.275140 8.278228 2108.441593 2020-03-09 52.762931 2513.891 111.669942 12.933696 2087.767817 2020-03-16 72.150978 2467.322 109.775070 15.961352 2058.925025 2020-03-23 75.902965 2394.069 111.015771 18.886624 2023.038540 2020-03-30 51.715278 2298.855 95.129930 10.840378 2122.552675 f5 f6 year week Date 2015-03-02 321349.480 232757.674 2015 10 2015-03-09 319000.479 221875.266 2015 11 2015-03-16 329682.915 226521.004 2015 12 2015-03-23 323335.102 221358.104 2015 13 2015-03-30 335423.556 222942.088 2015 14 2016-03-07 324917.837 235534.038 2016 10 2016-03-14 318739.973 229351.230 2016 11 
2016-03-21 311516.881 231233.470 2016 12 2016-03-28 317998.580 198436.598 2016 13 2017-03-06 333304.312 227996.148 2017 10 2017-03-13 319538.063 225794.464 2017 11 2017-03-20 343361.214 219506.514 2017 12 2017-03-27 326703.683 227488.980 2017 13 2018-03-05 306569.458 225853.320 2018 10 2018-03-12 309483.605 219876.156 2018 11 2018-03-19 316931.421 221450.730 2018 12 2018-03-26 322248.386 224380.222 2018 13 2019-03-04 340449.937 235389.124 2019 10 2019-03-11 323107.510 227822.394 2019 11 2019-03-18 322681.705 226564.046 2019 12 2019-03-25 342102.164 229219.588 2019 13 2020-03-02 343116.127 234588.908 2020 10 2020-03-09 345827.356 230804.352 2020 11 2020-03-16 341559.653 226640.770 2020 12 2020-03-23 344563.904 229330.532 2020 13 2020-03-30 327042.742 196731.040 2020 14 I split the data into training/testing sets: # inputs cols0 = [i for i in cols if i != 'Price'] X = mydata[cols0] # output y = mydata[['Price']] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=RANDOM_SEED)
The original post is working with a binary classification problem, where the accuracy metric makes sense (note that the predicted floats are first converted to a boolean tensor: predicted = y_pred.ge(.5).view(-1)). On the other hand, your question stated that you are working with a regression problem, in which case accuracy doesn't really make sense. It's almost impossible to predict a float value exactly.
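Since accuracy isn't meaningful here, a regression metric such as MAE or RMSE is more informative. A minimal sketch (my own illustration, not part of the original answer), assuming y_pred holds the model outputs for X_test and y_test the true prices, both float tensors:
import torch
# mean absolute error and root mean squared error for a regression model
mae = torch.mean(torch.abs(y_pred.squeeze() - y_test))
rmse = torch.sqrt(torch.mean((y_pred.squeeze() - y_test) ** 2))
print(f"MAE: {mae.item():.4f}, RMSE: {rmse.item():.4f}")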
https://stackoverflow.com/questions/67977571/
TypeError: 'int' object is not callable when calling Bert methods for producing embeddings
I have the following code and I obtain 'TypeError: 'tuple' object is not callable'(in new_time) but I dont understand why. I wrote it based on this tutorial https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/ and https://github.com/getalp/Flaubert My code : #torch == 1.8.1 #numpy == 1.20.2 #pandas == 1.0.3 #transformers == 4.6.1 from transformers import logging logging.set_verbosity_warning() import numpy as np import torch from transformers import FlaubertModel, FlaubertTokenizer language_model_dir = 'flaubert/flaubert_small_cased' # version > 2.0.0 flaubert, info = FlaubertModel.from_pretrained(language_model_dir, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(language_model_dir) # f_verbatim is a " <class 'pandas.core.series.Series'>", table of sentences tokenized = f_verbatim.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, padding=True, truncation=True))) #print(tokenized) #Padding max_len = 0 for i in tokenized.values: if len(i) > max_len: max_len = len(i) padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized.values]) # set data to tensor format input_ids = torch.tensor(padded) print(type(input_ids)) #<class 'torch.Tensor'> attention_mask = np.where(padded != 0, 1, 0) print(type(attention_mask)) #<class 'numpy.ndarray'> # this line is causing the error hidden_state = flaubert(input_ids, attention_mask=attention_mask) Error : #Stacktrace ​ -------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-7-8d53b819c31a> in <module> 1 print(flaubert) ----> 2 hidden_state = flaubert(input_ids, attention_mask=attention_mask) ~\Anaconda3\envs\bert\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), ~\Anaconda3\envs\bert\lib\site-packages\transformers\models\flaubert\modeling_flaubert.py in forward(self, input_ids, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 195 196 # generate masks --> 197 mask, attn_mask = get_masks(slen, lengths, self.causal, padding_mask=attention_mask) 198 # if self.is_decoder and src_enc is not None: 199 # src_mask = torch.arange(src_len.max(), dtype=torch.long, device=lengths.device) < src_len[:, None] ~\Anaconda3\envs\bert\lib\site-packages\transformers\models\xlm\modeling_xlm.py in get_masks(slen, lengths, causal, padding_mask) 104 105 # sanity check --> 106 assert mask.size() == (bs, slen) 107 assert causal is False or attn_mask.size() == (bs, slen, slen) 108 TypeError: 'int' object is not callable As I understand, the problem is due to a missing comma but I cannot figure it out. 
printing "flaubert" fucntion give : (FlaubertModel( (position_embeddings): Embedding(512, 512) (embeddings): Embedding(68729, 512, padding_idx=2) (layer_norm_emb): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (attentions): ModuleList( (0): MultiHeadAttention( (q_lin): Linear(in_features=512, out_features=512, bias=True) (k_lin): Linear(in_features=512, out_features=512, bias=True) (v_lin): Linear(in_features=512, out_features=512, bias=True) (out_lin): Linear(in_features=512, out_features=512, bias=True) ) (1): MultiHeadAttention( (q_lin): Linear(in_features=512, out_features=512, bias=True) (k_lin): Linear(in_features=512, out_features=512, bias=True) (v_lin): Linear(in_features=512, out_features=512, bias=True) (out_lin): Linear(in_features=512, out_features=512, bias=True) ) (2): MultiHeadAttention( (q_lin): Linear(in_features=512, out_features=512, bias=True) (k_lin): Linear(in_features=512, out_features=512, bias=True) (v_lin): Linear(in_features=512, out_features=512, bias=True) (out_lin): Linear(in_features=512, out_features=512, bias=True) ) (3): MultiHeadAttention( (q_lin): Linear(in_features=512, out_features=512, bias=True) (k_lin): Linear(in_features=512, out_features=512, bias=True) (v_lin): Linear(in_features=512, out_features=512, bias=True) (out_lin): Linear(in_features=512, out_features=512, bias=True) ) (4): MultiHeadAttention( (q_lin): Linear(in_features=512, out_features=512, bias=True) (k_lin): Linear(in_features=512, out_features=512, bias=True) (v_lin): Linear(in_features=512, out_features=512, bias=True) (out_lin): Linear(in_features=512, out_features=512, bias=True) ) (5): MultiHeadAttention( (q_lin): Linear(in_features=512, out_features=512, bias=True) (k_lin): Linear(in_features=512, out_features=512, bias=True) (v_lin): Linear(in_features=512, out_features=512, bias=True) (out_lin): Linear(in_features=512, out_features=512, bias=True) ) ) (layer_norm1): ModuleList( (0): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (1): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (2): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (3): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (4): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (5): LayerNorm((512,), eps=1e-06, elementwise_affine=True) ) (ffns): ModuleList( (0): TransformerFFN( (lin1): Linear(in_features=512, out_features=2048, bias=True) (lin2): Linear(in_features=2048, out_features=512, bias=True) ) (1): TransformerFFN( (lin1): Linear(in_features=512, out_features=2048, bias=True) (lin2): Linear(in_features=2048, out_features=512, bias=True) ) (2): TransformerFFN( (lin1): Linear(in_features=512, out_features=2048, bias=True) (lin2): Linear(in_features=2048, out_features=512, bias=True) ) (3): TransformerFFN( (lin1): Linear(in_features=512, out_features=2048, bias=True) (lin2): Linear(in_features=2048, out_features=512, bias=True) ) (4): TransformerFFN( (lin1): Linear(in_features=512, out_features=2048, bias=True) (lin2): Linear(in_features=2048, out_features=512, bias=True) ) (5): TransformerFFN( (lin1): Linear(in_features=512, out_features=2048, bias=True) (lin2): Linear(in_features=2048, out_features=512, bias=True) ) ) (layer_norm2): ModuleList( (0): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (1): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (2): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (3): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (4): LayerNorm((512,), eps=1e-06, elementwise_affine=True) (5): LayerNorm((512,), eps=1e-06, 
elementwise_affine=True) ) ), {'missing_keys': [], 'unexpected_keys': ['pred_layer.proj.bias', 'pred_layer.proj.weight'], 'error_msgs': []}) f_verbatim look likes this : <class 'pandas.core.series.Series'> 0 Dans le cadre de l’ATEX, il y a certains types de départ moteur qu’on va mesurer la température de pot du moteur et en cas d’anomalie il faut absolument couper le moteur. 1 moi ce qui me dérange. C’est quand on a des enfants en bas âge. C’est dangereux, c’est trop facile 2 par rapport à une… enfin, à ce qui existe actuellement, si on parle du Tesys U… Enfin, sur Canopen, par exemple. 3 Je ne verrais pas ça pour une machine, on va dire, une application. Ce serait pour plusieurs machines. 4 Spécifique : Pas n’importe qui pourrait le prendre attention_mask look like this : tensor([[ 0, 156, 20, ..., 0, 0, 0], [ 0, 253, 45, ..., 0, 0, 0], [ 0, 38, 243, ..., 0, 0, 0], ..., [ 0, 141, 104, ..., 0, 0, 0], [ 0, 59, 178, ..., 0, 0, 0], [ 0, 141, 432, ..., 0, 0, 0]], dtype=torch.int32) input_ids like this : [[0 1 1 ... 0 0 0] [0 1 1 ... 0 0 0] [0 1 1 ... 0 0 0] ... [0 1 1 ... 0 0 0] [0 1 1 ... 0 0 0] [0 1 1 ... 0 0 0]]
This is because the from_pretrained function gives you a tuple of (model, loading-info dictionary) and you did not separate them. Modify your code like this (add another variable): flaubert, info = FlaubertModel.from_pretrained(language_model_dir, output_loading_info=True) You have set output_loading_info to True, so the call also returns a dictionary. If you don't unpack it into a second variable, a tuple (model, dictionary) is assigned to the flaubert variable, and since flaubert is then a tuple, you cannot call it. UPDATE: attention_mask is a numpy array, but your model expects a torch tensor. So, convert it to a torch tensor before passing it to your model. attention_mask = torch.from_numpy(attention_mask) hidden_state = flaubert(input_ids, attention_mask=attention_mask)
https://stackoverflow.com/questions/67982333/
RuntimeError: Expected 4-dimensional input for 4-dimensional weight
I have a network, in which there are 3 architectures that share the same classifier. class VGGBlock(nn.Module): def __init__(self, in_channels, out_channels,batch_norm=False): super(VGGBlock,self).__init__() conv2_params = {'kernel_size': (3, 3), 'stride' : (1, 1), 'padding' : 1 } noop = lambda x : x self._batch_norm = batch_norm self.conv1 = nn.Conv2d(in_channels=in_channels,out_channels=out_channels , **conv2_params) self.bn1 = nn.BatchNorm2d(out_channels) if batch_norm else noop self.conv2 = nn.Conv2d(in_channels=out_channels,out_channels=out_channels, **conv2_params) self.bn2 = nn.BatchNorm2d(out_channels) if batch_norm else noop self.max_pooling = nn.MaxPool2d(kernel_size=(2, 2), stride=(2, 2)) @property def batch_norm(self): return self._batch_norm def forward(self,x): x = self.conv1(x) x = self.bn1(x) x = F.relu(x) x = self.conv2(x) x = self.bn2(x) x = F.relu(x) x = self.max_pooling(x) return x class VGG16(nn.Module): def __init__(self, input_size, num_classes=1,batch_norm=False): super(VGG16, self).__init__() self.in_channels,self.in_width,self.in_height = input_size self.block_1 = VGGBlock(self.in_channels,64,batch_norm=batch_norm) self.block_2 = VGGBlock(64, 128,batch_norm=batch_norm) self.block_3 = VGGBlock(128, 256,batch_norm=batch_norm) self.block_4 = VGGBlock(256,512,batch_norm=batch_norm) @property def input_size(self): return self.in_channels,self.in_width,self.in_height def forward(self, x): x = self.block_1(x) x = self.block_2(x) x = self.block_3(x) x = self.block_4(x) return x class VGG16Classifier(nn.Module): def __init__(self, num_classes=1,classifier = None,batch_norm=False): super(VGG16Classifier, self).__init__() self._vgg_a = VGG16((1,32,32),batch_norm=True) self._vgg_b = VGG16((1,32,32),batch_norm=True) self._vgg_star = VGG16((1,32,32),batch_norm=True) self.classifier = classifier if (self.classifier is None): self.classifier = nn.Sequential( nn.Linear(2048, 2048), nn.ReLU(True), nn.Dropout(p=0.5), nn.Linear(2048, 512), nn.ReLU(True), nn.Dropout(p=0.5), nn.Linear(512, num_classes) ) def forward(self, x1,x2,x3): op1 = self._vgg_a(x1) op1 = torch.flatten(op1,1) op2 = self._vgg_b(x2) op2 = torch.flatten(op2,1) op3 = self._vgg_star(x3) op3 = torch.flatten(op3,1) x1 = self.classifier(op1) x2 = self.classifier(op2) x3 = self.classifier(op3) return x1,x2,x3 model1 = VGG16((1,32,32),batch_norm=True) model2 = VGG16((1,32,32),batch_norm=True) model_star = VGG16((1,32,32),batch_norm=True) model_combo = VGG16Classifier(model1,model2,model_star) I want to traing model_combo using the following loss function: class CombinedLoss(nn.Module): def __init__(self, loss_a, loss_b, loss_star, _lambda=1.0): super().__init__() self.loss_a = loss_a self.loss_b = loss_b self.loss_star = loss_star self.register_buffer('_lambda',torch.tensor(float(_lambda),dtype=torch.float32)) def forward(self,y_hat,y): return (self.loss_a(y_hat[0],y[0]) + self.loss_b(y_hat[1],y[1]) + self.loss_combo(y_hat[2],y[2]) + self._lambda * torch.sum(model_star.weight - torch.pow(torch.cdist(model1.weight+model2.weight), 2))) In the training function I pass loaders, that for simplicity are loaders_a, loaders_b and again loaders_a, where loaders_a is related to the first 50% of data of MNIST and loaders_b to the latter 50% of MNIST. 
def train(net, loaders, optimizer, criterion, epochs=20, dev=None, save_param=False, model_name="valerio"): loaders_a, loaders_b, loaders_star = loaders # try: net = net.to(dev) #print(net) #summary(net,[(net.in_channels,net.in_width,net.in_height)]*2) criterion.to(dev) # Initialize history history_loss = {"train": [], "val": [], "test": []} history_accuracy_a = {"train": [], "val": [], "test": []} history_accuracy_b = {"train": [], "val": [], "test": []} history_accuracy_star = {"train": [], "val": [], "test": []} # Store the best val accuracy best_val_accuracy = 0 # Process each epoch for epoch in range(epochs): # Initialize epoch variables sum_loss = {"train": 0, "val": 0, "test": 0} sum_accuracy_a = {"train": 0, "val": 0, "test": 0} sum_accuracy_b = {"train": 0, "val": 0, "test": 0} sum_accuracy_star = {"train": 0, "val": 0, "test": 0} progbar = None # Process each split for split in ["train", "val", "test"]: if split == "train": net.train() #widgets = [ #' [', pb.Timer(), '] ', #pb.Bar(), #' [', pb.ETA(), '] ', pb.Variable('ta','[Train Acc: {formatted_value}]')] #progbar = pb.ProgressBar(max_value=len(loaders_a[split]),widgets=widgets,redirect_stdout=True) else: net.eval() # Process each batch for j, ((input_a, labels_a), (input_b, labels_b), (input_s, labels_s)) in enumerate(zip(loaders_a[split], loaders_b[split], loaders_star[split])): labels_a = labels_a.unsqueeze(1).float() labels_b = labels_b.unsqueeze(1).float() labels_s = labels_s.unsqueeze(1).float() input_a = input_a.to(dev) labels_a = labels_a.to(dev) input_b = input_b.to(dev) labels_b = labels_b.to(dev) input_s = input_s.to(dev) labels_s = labels_s.to(dev) # Reset gradients optimizer.zero_grad() # Compute output pred = net(input_a,input_b, input_s) loss = criterion(pred, [labels_a, labels_b, labels_s]) # Update loss sum_loss[split] += loss.item() # Check parameter update if split == "train": # Compute gradients loss.backward() # Optimize optimizer.step() # Compute accuracy pred_labels = (pred[2] >= 0.0).long() # Binarize predictions to 0 and 1 pred_labels_a = (pred[0] >= 0.0).long() # Binarize predictions to 0 and 1 pred_labels_b = (pred[1] >= 0.0).long() # Binarize predictions to 0 and 1 batch_accuracy_star = (pred_labels == labels_s).sum().item() / len(labels_s) batch_accuracy_a = (pred_labels_a == labels_a).sum().item() / len(labels_a) batch_accuracy_b = (pred_labels_b == labels_b).sum().item() / len(labels_b) # Update accuracy sum_accuracy_star[split] += batch_accuracy_star sum_accuracy_a[split] += batch_accuracy_a sum_accuracy_b[split] += batch_accuracy_b #if (split=='train'): #progbar.update(j, ta=batch_accuracy) #progbar.update(j, ta=batch_accuracy_a) #progbar.update(j, ta=batch_accuracy_b) #if (progbar is not None): #progbar.finish() # Compute epoch loss/accuracy #for split in ["train", "val", "test"]: #epoch_loss = sum_loss[split] / (len(loaders_a[split])+len(loaders_b[split])) #epoch_accuracy_combo = {split: sum_accuracy_combo[split] / len(loaders[split]) for split in ["train", "val", "test"]} #epoch_accuracy_a = sum_accuracy_a[split] / len(loaders_a[split]) #epoch_accuracy_b = sum_accuracy_b[split] / len(loaders_b[split]) epoch_loss = sum_loss["train"] / (len(loaders_a["train"])+len(loaders_b["train"])+len(loaders_s["train"])) epoch_accuracy_a = sum_accuracy_a["train"] / len(loaders_a["train"]) epoch_accuracy_b = sum_accuracy_b["train"] / len(loaders_b["train"]) epoch_accuracy_star = sum_accuracy_star["train"] / len(loaders_s["train"]) epoch_loss_val = sum_loss["val"] / 
(len(loaders_a["val"])+len(loaders_b["val"])+len(loaders_s["val"])) epoch_accuracy_a_val = sum_accuracy_a["val"] / len(loaders_a["val"]) epoch_accuracy_b_val = sum_accuracy_b["val"] / len(loaders_b["val"]) epoch_accuracy_star_val = sum_accuracy_star["val"] / len(loaders_s["val"]) epoch_loss_test = sum_loss["test"] / (len(loaders_a["test"])+len(loaders_b["test"])+len(loaders_s["test"])) epoch_accuracy_a_test = sum_accuracy_a["test"] / len(loaders_a["test"]) epoch_accuracy_b_test = sum_accuracy_b["test"] / len(loaders_b["test"]) epoch_accuracy_star_test = sum_accuracy_star["test"] / len(loaders_s["test"]) # Store params at the best validation accuracy if save_param and epoch_accuracy["val"] > best_val_accuracy: # torch.save(net.state_dict(), f"{net.__class__.__name__}_best_val.pth") torch.save(net.state_dict(), f"{model_name}_best_val.pth") best_val_accuracy = epoch_accuracy["val"] # Update history for split in ["train", "val", "test"]: history_loss[split].append(epoch_loss) history_accuracy_a[split].append(epoch_accuracy_a) history_accuracy_b[split].append(epoch_accuracy_b) history_accuracy_star[split].append(epoch_accuracy_star) # Print info print(f"Epoch {epoch + 1}:", f"Training Loss = {epoch_loss:.4f},",) print(f"Epoch {epoch + 1}:", f"Training Accuracy for A = {epoch_accuracy_a:.4f},") print(f"Epoch {epoch + 1}:", f"Training Accuracy for B = {epoch_accuracy_b:.4f},") print(f"Epoch {epoch + 1}:", f"Training Accuracy for star = {epoch_accuracy_star:.4f},") print(f"Epoch {epoch + 1}:", f"Val Loss = {epoch_loss_val:.4f},",) print(f"Epoch {epoch + 1}:", f"Val Accuracy for A = {epoch_accuracy_a_val:.4f},") print(f"Epoch {epoch + 1}:", f"Val Accuracy for B = {epoch_accuracy_b_val:.4f},") print(f"Epoch {epoch + 1}:", f"Val Accuracy for star = {epoch_accuracy_star_val:.4f},") print(f"Epoch {epoch + 1}:", f"Test Loss = {epoch_loss_test:.4f},",) print(f"Epoch {epoch + 1}:", f"Test Accuracy for A = {epoch_accuracy_a_test:.4f},") print(f"Epoch {epoch + 1}:", f"Test Accuracy for B = {epoch_accuracy_b_test:.4f},") print(f"Epoch {epoch + 1}:", f"Test Accuracy for star = {epoch_accuracy_star_test:.4f},") print("\n") But I got this error: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 1, 3, 3], but got 2-dimensional input of size [128, 2048] instead
From your code & error, I guess you're passing a binary image (h, w, 1) to the network. The issue arises in the Conv2d layer, which expects a 4-dimensional input. To rephrase - a Conv2d layer expects a 4-dim tensor like: T = torch.randn(1,3,128,256) print(T.shape) out: torch.Size([1, 3, 128, 256]) Where: The first dimension (number 1) is the batch dimension, used to stack multiple tensors across this dim and perform batched operations. The second dimension (number 3) is in_channels for the convolution - basically the number of channels of the image; a standard RGB or BGR image has 3 channels. The third dimension (number 128) is the height and the fourth dimension (number 256) is the width. Binary images have 1 channel dimension: [128, 256, 1], i.e. [Height, Width, Channels], OR [128, 256], i.e. [Height, Width]. Take into consideration that a standard NumPy image array has [H, W, C] shape, whereas torch expects the channel dimension right after the batch dimension, so: [B, C, H, W]. I'm not sure where the channel clamping is happening, but the binary image becomes a 2-dim image, because there's no need for a channel dimension as long as it's just one. If you want to pass a 2-dim binary image to a Conv2d layer, you should unsqueeze it to a 4-dim tensor. Before: Input.shape = torch.Size([128, 2048]) Preprocess: Tensor = Input.view(1, 1, Input.shape[0], Input.shape[1]) Tensor.shape out: torch.Size([1, 1, 128, 2048]) The same could be done by just unsqueezing the zeroth dim two times: Tensor = Input.unsqueeze(0).unsqueeze(0) Tensor.shape out: torch.Size([1, 1, 128, 2048]) But that's messier - so I'd recommend the first option.
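A minimal sketch of the unsqueeze approach for single-channel images - my own illustration, not part of the original answer, assuming 32x32 grayscale inputs like the VGG16((1,32,32)) setup in the question:
import torch
x = torch.randn(128, 32, 32)   # batch of 2-dim grayscale images, no channel dim
x = x.unsqueeze(1)             # add the channel dimension -> [128, 1, 32, 32]
conv = torch.nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, padding=1)
out = conv(x)                  # now matches the expected 4-dim [B, C, H, W] input
print(out.shape)               # torch.Size([128, 64, 32, 32])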
https://stackoverflow.com/questions/68001067/
Both validation loss and accuracy are increasing using a pre-trained VGG-16
So, I'm doing a 4-label x-ray image classification on around 12600 images: Class1: 4000, Class2: 3616, Class3: 1345, Class4: 4000. I'm using the VGG-16 architecture pretrained on the ImageNet dataset with cross-entropy and SGD, a batch size of 32 and a learning rate of 1e-3, running on PyTorch. Here is the confusion matrix: [[749., 6., 50., 2.], [ 5., 707., 9., 1.], [ 56., 8., 752., 0.], [ 4., 1., 0., 243.]] I know that since both train loss/acc are roughly 0/1 the model is overfitting, but I'm surprised that the val acc is still around 0.9! How do I properly interpret that, what is causing it, and how do I prevent it? I suspect it's something like: because the accuracy is the argmax of the softmax, the actual predicted probabilities are getting lower and lower but the argmax always stays the same - but I'm really confused about it! I even let it train for 64+ epochs with the same results: flat accuracy while the loss increases gradually. PS. I have seen other questions with answers and didn't really get an explanation.
I think your question already says what is going on. Your model is overfitting, as you have also figured out. Now, as you train more, your model is slowly becoming more specialized to the train set and gradually losing the capability to generalize. So the softmax probabilities are getting flatter and flatter. But it still shows more or less the same accuracy on the validation set, because for now the correct class still has at least slightly more probability than the others. So in my opinion there can be some possible reasons for this: Your train set and validation set may not be from the same distribution. Your validation set doesn't cover all the cases that need to be evaluated; it probably contains similar types of images that do not differ too much from each other. So, when the model can identify one, it can identify many of them from the validation set. If you add more heterogeneous images to the validation set, you will no longer see such a large validation accuracy. Similarly, we can say your train set has images which are heterogeneous, i.e. they have a lot of variation, while the validation set covers only a few varieties; so as training goes on, those minorities get less priority, since the model still has many things to learn and generalize. This can happen if you augment your train set and the model finds the validation set relatively easy initially (until overfitting), but as training goes on it gets lost while learning the many augmented varieties in the train set. In this case don't make the augmentation too wild. Think about whether the augmented images are still realistic. Apply augmentation only as long as the images remain realistic, and make sure each type of image variation occupies enough representative examples in the train set. Don't include unnecessary situations in augmentation that will never occur in reality, as these unrealistic examples will just increase the burden on the model rather than helping.
https://stackoverflow.com/questions/68004619/
Unable to install Pytorch on Mac OS X from scratch due to Pytorch package conflicts with Conda - how to fix?
I have python 3.9 and I am trying to install pytorch current version (as of this writing 1.9). But when I do it I get the following error: (synthesis) miranda9@Brandos-MBP ~ % conda install pytorch torchvision torchaudio -c pytorch Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: - Found conflicts! Looking for incompatible packages. failed UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions Package pytorch conflicts for: torchaudio -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|1.9.0'] torchvision -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|1.9.0|>=1.1.0|>=1.0.0|>=0.4|>=0.3|>=0.2|1.7.1.*|1.3.1.*'] pytorch Package six conflicts for: torchvision -> six pytorch -> mkl-service[version='>=2,<3.0a0'] -> six I only had numpy installed so far...it's essentially a brand new env: (synthesis) miranda9@Brandos-MBP ~ % conda list # packages in environment at /Users/miranda9/.conda/envs/synthesis: # # Name Version Build Channel blas 1.0 mkl ca-certificates 2021.5.25 hecd8cb5_1 certifi 2021.5.30 py39hecd8cb5_0 intel-openmp 2021.2.0 hecd8cb5_564 libcxx 10.0.0 1 libffi 3.3 hb1e8313_2 mkl 2021.2.0 hecd8cb5_269 mkl-service 2.3.0 py39h9ed2024_1 mkl_fft 1.3.0 py39h4a7008c_2 mkl_random 1.2.1 py39hb2f4e1b_2 ncurses 6.2 h0a44026_1 numpy 1.20.2 py39h4b4dc7a_0 numpy-base 1.20.2 py39he0bd621_0 openssl 1.1.1k h9ed2024_0 pip 21.1.2 py39hecd8cb5_0 python 3.9.5 h88f2d9e_3 readline 8.1 h9ed2024_0 setuptools 52.0.0 py39hecd8cb5_0 six 1.16.0 pyhd3eb1b0_0 sqlite 3.35.4 hce871da_0 tk 8.6.10 hb0a8c7a_0 tzdata 2020f h52ac0ba_0 wheel 0.36.2 pyhd3eb1b0_0 xz 5.2.5 h1de35cc_0 zlib 1.2.11 h1de35cc_3 why is this happening and how do I fix this? related/crossposted: https://www.reddit.com/r/pytorch/comments/o1hwgv/installing_pytorch_fails_on_macos_with_brand_new/ SO: python - Unable to install Pytorch on Mac OS X from scratch due to Pytorch package conflicts with Conda - how to fix? - Stack Overflow pytorch forum: https://discuss.pytorch.org/t/installing-pytorch-fails-on-macos/109361/3 UnsatisfiableError: The following specifications were found to be incompatible with each other:
For me it seems that adding conda-forge to the channels works. My understanding of why that works is that the pytorch channel doesn't have all packages or something (details here: https://github.com/pytorch/pytorch/issues/59517). Do: conda install -y pytorch torchvision torchaudio -c pytorch -c conda-forge other example installations: conda install -y pytorch==1.7.1 torchvision torchaudio cudatoolkit=10.2 -c pytorch -c conda-forge Full output: (synthesis) miranda9@Brandos-MBP ~ % conda install pytorch torchvision torchaudio -c pytorch -c conda-forge Collecting package metadata (current_repodata.json): done Solving environment: done ## Package Plan ## environment location: /Users/miranda9/.conda/envs/synthesis added / updated specs: - pytorch - torchaudio - torchvision The following packages will be downloaded: package | build ---------------------------|----------------- bzip2-1.0.8 | h0d85af4_4 155 KB conda-forge ca-certificates-2021.5.30 | h033912b_0 136 KB conda-forge certifi-2021.5.30 | py39h6e9494a_0 141 KB conda-forge ffmpeg-4.3 | h0a44026_0 10.1 MB pytorch freetype-2.10.4 | h4cff582_1 890 KB conda-forge gettext-0.19.8.1 | h7937167_1005 3.3 MB conda-forge gmp-6.1.2 | h0a44026_1000 734 KB conda-forge gnutls-3.6.13 | hc269f14_0 2.1 MB conda-forge lame-3.100 | h35c211d_1001 521 KB conda-forge libiconv-1.16 | haf1e3a3_0 1.3 MB conda-forge libpng-1.6.37 | h7cec526_2 313 KB conda-forge libuv-1.41.0 | hbcf498f_0 421 KB conda-forge libwebp-base-1.2.0 | h0d85af4_2 700 KB conda-forge lz4-c-1.9.2 | h4a8c4bd_1 169 KB conda-forge nettle-3.4.1 | h3efe00b_1002 1.0 MB conda-forge ninja-1.10.2 | hf7b0b51_1 106 KB olefile-0.46 | pyh9f0ad1d_1 32 KB conda-forge openh264-2.1.1 | hd174df1_0 1.5 MB conda-forge openssl-1.1.1k | h0d85af4_0 1.9 MB conda-forge pillow-8.2.0 | py39h5270095_0 587 KB python_abi-3.9 | 1_cp39 4 KB conda-forge pytorch-1.9.0 | py3.9_0 79.0 MB pytorch torchaudio-0.9.0 | py39 4.0 MB pytorch torchvision-0.10.0 | py39_cpu 6.8 MB pytorch typing_extensions-3.10.0.0 | pyha770c72_0 28 KB conda-forge ------------------------------------------------------------ Total: 115.8 MB The following NEW packages will be INSTALLED: bzip2 conda-forge/osx-64::bzip2-1.0.8-h0d85af4_4 ffmpeg pytorch/osx-64::ffmpeg-4.3-h0a44026_0 freetype conda-forge/osx-64::freetype-2.10.4-h4cff582_1 gettext conda-forge/osx-64::gettext-0.19.8.1-h7937167_1005 gmp conda-forge/osx-64::gmp-6.1.2-h0a44026_1000 gnutls conda-forge/osx-64::gnutls-3.6.13-hc269f14_0 jpeg pkgs/main/osx-64::jpeg-9b-he5867d9_2 lame conda-forge/osx-64::lame-3.100-h35c211d_1001 lcms2 pkgs/main/osx-64::lcms2-2.12-hf1fd2bf_0 libiconv conda-forge/osx-64::libiconv-1.16-haf1e3a3_0 libpng conda-forge/osx-64::libpng-1.6.37-h7cec526_2 libtiff pkgs/main/osx-64::libtiff-4.2.0-h87d7836_0 libuv conda-forge/osx-64::libuv-1.41.0-hbcf498f_0 libwebp-base conda-forge/osx-64::libwebp-base-1.2.0-h0d85af4_2 lz4-c conda-forge/osx-64::lz4-c-1.9.2-h4a8c4bd_1 nettle conda-forge/osx-64::nettle-3.4.1-h3efe00b_1002 ninja pkgs/main/osx-64::ninja-1.10.2-hf7b0b51_1 olefile conda-forge/noarch::olefile-0.46-pyh9f0ad1d_1 openh264 conda-forge/osx-64::openh264-2.1.1-hd174df1_0 pillow pkgs/main/osx-64::pillow-8.2.0-py39h5270095_0 python_abi conda-forge/osx-64::python_abi-3.9-1_cp39 pytorch pytorch/osx-64::pytorch-1.9.0-py3.9_0 torchaudio pytorch/osx-64::torchaudio-0.9.0-py39 torchvision pytorch/osx-64::torchvision-0.10.0-py39_cpu typing_extensions conda-forge/noarch::typing_extensions-3.10.0.0-pyha770c72_0 zstd pkgs/main/osx-64::zstd-1.4.5-h41d2c2f_0 The following packages will be UPDATED: 
ca-certificates pkgs/main::ca-certificates-2021.5.25-~ --> conda-forge::ca-certificates-2021.5.30-h033912b_0 The following packages will be SUPERSEDED by a higher-priority channel: certifi pkgs/main::certifi-2021.5.30-py39hecd~ --> conda-forge::certifi-2021.5.30-py39h6e9494a_0 openssl pkgs/main::openssl-1.1.1k-h9ed2024_0 --> conda-forge::openssl-1.1.1k-h0d85af4_0 Proceed ([y]/n)? y Downloading and Extracting Packages freetype-2.10.4 | 890 KB | ############################################################################################################################################################################################################################################################################# | 100% openh264-2.1.1 | 1.5 MB | ############################################################################################################################################################################################################################################################################# | 100% openssl-1.1.1k | 1.9 MB | ############################################################################################################################################################################################################################################################################# | 100% gmp-6.1.2 | 734 KB | ############################################################################################################################################################################################################################################################################# | 100% gnutls-3.6.13 | 2.1 MB | ############################################################################################################################################################################################################################################################################# | 100% gettext-0.19.8.1 | 3.3 MB | ############################################################################################################################################################################################################################################################################# | 100% libuv-1.41.0 | 421 KB | ############################################################################################################################################################################################################################################################################# | 100% libpng-1.6.37 | 313 KB | ############################################################################################################################################################################################################################################################################# | 100% olefile-0.46 | 32 KB | ############################################################################################################################################################################################################################################################################# | 100% python_abi-3.9 | 4 KB | ############################################################################################################################################################################################################################################################################# | 100% certifi-2021.5.30 | 141 KB | 
############################################################################################################################################################################################################################################################################# | 100% pillow-8.2.0 | 587 KB | ############################################################################################################################################################################################################################################################################# | 100% torchaudio-0.9.0 | 4.0 MB | ############################################################################################################################################################################################################################################################################# | 100% lz4-c-1.9.2 | 169 KB | ############################################################################################################################################################################################################################################################################# | 100% pytorch-1.9.0 | 79.0 MB | ############################################################################################################################################################################################################################################################################# | 100% typing_extensions-3. | 28 KB | ############################################################################################################################################################################################################################################################################# | 100% ffmpeg-4.3 | 10.1 MB | ############################################################################################################################################################################################################################################################################# | 100% lame-3.100 | 521 KB | ############################################################################################################################################################################################################################################################################# | 100% torchvision-0.10.0 | 6.8 MB | ############################################################################################################################################################################################################################################################################# | 100% libwebp-base-1.2.0 | 700 KB | ############################################################################################################################################################################################################################################################################# | 100% ca-certificates-2021 | 136 KB | ############################################################################################################################################################################################################################################################################# | 100% bzip2-1.0.8 | 155 KB | 
############################################################################################################################################################################################################################################################################# | 100% nettle-3.4.1 | 1.0 MB | ############################################################################################################################################################################################################################################################################# | 100% ninja-1.10.2 | 106 KB | ############################################################################################################################################################################################################################################################################# | 100% libiconv-1.16 | 1.3 MB | ############################################################################################################################################################################################################################################################################# | 100% Preparing transaction: done Verifying transaction: done Executing transaction: done
https://stackoverflow.com/questions/68010933/
Why Resnet model in tensorflow and pytorch give different feature length?
I'm trying to extract features of images through Resnet models pretrained on imagenet dataset as for the network should give the length of 2048 features. When I experimented with TensorFlow it gave the same amount of feature-length but when I try PyTorch version Resnet it gives me the length of 1000. codes are as below for Tensorflow import numpy as np from numpy.linalg import norm import pickle from tqdm import tqdm, tqdm_notebook import os import random import time import math import tensorflow from tensorflow.keras.preprocessing import image from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input from tensorflow.keras.applications.vgg16 import VGG16 from tensorflow.keras.applications.vgg19 import VGG19 from tensorflow.keras.applications.mobilenet import MobileNet from tensorflow.keras.applications.inception_v3 import InceptionV3 from tensorflow.keras.models import Model from tensorflow.keras.layers import Input, Flatten, Dense, Dropout, GlobalAveragePooling2D def model_picker(name): if (name == 'vgg16'): model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3), pooling='max') elif (name == 'vgg19'): model = VGG19(weights='imagenet', include_top=False, input_shape=(224, 224, 3), pooling='max') elif (name == 'mobilenet'): model = MobileNet(weights='imagenet', include_top=False, input_shape=(224, 224, 3), pooling='max', depth_multiplier=1, alpha=1) elif (name == 'inception'): model = InceptionV3(weights='imagenet', include_top=False, input_shape=(224, 224, 3), pooling='max') elif (name == 'resnet'): model = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3), pooling='max') elif (name == 'xception'): model = Xception(weights='imagenet', include_top=False, input_shape=(224, 224, 3), pooling='max') else: print("Specified model not available") return model model_architecture = 'resnet' model = model_picker(model_architecture) def extract_features(img_path, model): input_shape = (224, 224, 3) img = image.load_img(img_path, target_size=(input_shape[0], input_shape[1])) img_array = image.img_to_array(img) expanded_img_array = np.expand_dims(img_array, axis=0) preprocessed_img = preprocess_input(expanded_img_array) features = model.predict(preprocessed_img) flattened_features = features.flatten() normalized_features = flattened_features / norm(flattened_features) return normalized_features features = extract_features('dog.jpg', model) print(len(features)) > 2048 As you can see it gives a length of 2048 features through the resnet50 model Below is the code for PyTorch from torchvision import models, transforms from PIL import Image from torch.autograd import Variable import torch res_model = models.resnet50(pretrained=True) def image_loader(image,model,use_gpu= False): transform = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor() ]) img = Image.open(image) img = transform(img) print(img.shape) x = Variable(torch.unsqueeze(img, dim = 0).float(), requires_grad = False) print(x.shape) if use_gpu: x = x.cuda() model = model.cuda() y = model(x).cpu() print(y.size()) y = torch.squeeze(y) y = y.data.numpy() print(y.shape) print(len(y)) np.savetxt('features.txt',y,delimiter=',') image_loader('dog.jpg',res_model) > torch.Size([3, 224, 224]) torch.Size([1, 3, 224, 224]) torch.Size([1, > 1000]) (1000,) 1000 As you can see it gives a length of 1000 for the feature extracted through the Resnet model with the PyTorch model why am I 
getting different lengths? Shouldn't I get the same length of 2048 according to the architecture, or am I doing something wrong?
Printing the layers of the pytorch resnet will yield: (fc): Linear(in_features=2048, out_features=1000, bias=True) as the last layer of the resnet in Pytorch, because the model is by default set up for use as a classifier on imagenet data (1000 classes). If you want 2048 features instead, you can simply delete this last layer. del model.fc and your resulting output will then be of the desired dimension. Edit: perhaps better is to simply overwrite model.fc with an identity function rather than deleting it so it doesn't cause errors when forward is called: model.fc = torch.nn.Identity()
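A quick sketch to verify the Identity trick - my own illustration, not part of the original answer, assuming a dummy 224x224 RGB input like the one the question preprocesses:
import torch
from torchvision import models
model = models.resnet50(pretrained=True)
model.fc = torch.nn.Identity()     # replace the 1000-way classifier head with a pass-through
model.eval()
x = torch.randn(1, 3, 224, 224)    # dummy preprocessed image batch
with torch.no_grad():
    features = model(x)
print(features.shape)              # torch.Size([1, 2048])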
https://stackoverflow.com/questions/68020735/
Error while creating train transform using torch vision
I am using torch vision to create the following train transformation. I do not understand what's wrong and how I can fix it? train_transform = torch.nn.Sequential( transforms.ToTensor(), transforms.RandomApply([ transforms.RandomApply([transforms.RandomRotation(15)], 0.6), transforms.RandomApply([transforms.RandomAffine(0, shear=0.2)], 0.3), transforms.RandomApply([transforms.RandomResizedCrop(size = 224, scale=(0.8, 1.4))], 0.7) ], p= 0.5) ) This is the error: TypeError Traceback (most recent call last) in () 5 transforms.RandomApply([transforms.RandomAffine(0, shear=0.2)], 0.3), 6 transforms.RandomApply([transforms.RandomResizedCrop(size = 224, scale=(0.8, 1.4))], 0.7) ----> 7 ], p= 0.5) 8 ) 1 frames /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in add_module(self, name, module) 371 if not isinstance(module, Module) and module is not None: 372 raise TypeError("{} is not a Module subclass".format( --> 373 torch.typename(module))) 374 elif not isinstance(name, torch._six.string_classes): 375 raise TypeError("module name should be a string. Got {}".format( TypeError: torchvision.transforms.transforms.ToTensor is not a Module subclass
torch.nn.Sequential scripts your transformations. You can only use scriptable transformations in torch.nn.Sequential and transforms.ToTensor() is not a scriptable transformation. A scriptable transformation only takes a Tensor as an input. This is why you cannot use transforms.ToTensor() in the torch.nn.Sequential function, as you take an some data as input and transform it to a tensor. You can find this information in the PyTorch documentation here under Scriptable Transformations. The transforms.RandomApply transformation is also not scriptable but it says here in the transforms.visions docu how to use it in a scriptable way anyway (see the definition of the class RandomApply). In the source code it says: " In order to script the transformation, please use torch.nn.ModuleList as input instead of list/tuple of transforms" You have to wrap all your random transformations with torch.nn.ModuleList. my_transforms = transforms.RandomApply(torch.nn.ModuleList([ transforms.RandomApply( torch.nn.ModuleList([ transforms.RandomRotation(15)]), p=0.6), transforms.RandomApply( torch.nn.ModuleList([ transforms.RandomAffine(0, shear=0.2)]), p=0.3), transforms.RandomApply( torch.nn.ModuleList([ transforms.RandomResizedCrop(size = 224, scale=(0.8, 1.4))]), p=0.7), ]), p=0.3) scripted_transforms = torch.jit.script(my_transforms) If you want to, you can also wrap these transformations with torch.nn.Sequential but it is not necessary. Then the code should be this: my_transforms = torch.nn.Sequential(transforms.RandomApply(torch.nn.ModuleList([ transforms.RandomApply( torch.nn.ModuleList([ transforms.RandomRotation(15)]), p=0.6), transforms.RandomApply( torch.nn.ModuleList([ transforms.RandomAffine(0, shear=0.2)]), p=0.3), transforms.RandomApply( torch.nn.ModuleList([ transforms.RandomResizedCrop(size = 224, scale=(0.8, 1.4))]), p=0.7), ]), p=0.3)) scripted_transforms = torch.jit.script(my_transforms) As mentioned, the tensor transformation cannot be added to this, since it is not scriptable. Another solution is to use the transforms.Compose instead of torch.nn.Sequential like this: from torchvision import transforms train_transform = transforms.Compose([ transforms.ToTensor(), transforms.RandomApply([ transforms.RandomApply([transforms.RandomRotation(15)], 0.6), transforms.RandomApply([transforms.RandomAffine(0, shear=0.2)], 0.3), transforms.RandomApply([transforms.RandomResizedCrop(size = 224, scale=(0.8, 1.4))], 0.7) ], p= 0.5) ]) In both cases the complete list of transformations will be randomly applied to your data, exactly in the order as you have specified the transformations in the list. You can see this from the source code of Random.Apply, where the forward pass looks like this def forward(self, img): if self.p < torch.rand(1): return img for t in self.transforms: img = t(img) return img The function loops over all transformations t in self.transforms exactly in the order you have specified in the list. So technically it is not necessary that you use torch.nn.Sequential.
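A small usage sketch for the scripted pipeline above - my own illustration, not part of the original answer; it assumes a float image tensor of shape [C, H, W], since the scripted transforms operate on tensors:
import torch
img = torch.rand(3, 256, 256)       # dummy float image tensor in [0, 1]
out = scripted_transforms(img)      # applies the whole random pipeline with probability p
print(out.shape)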
https://stackoverflow.com/questions/68024067/
Equivalent AdaptiveAvgPool2d API in cuDNN
Is there an API in cuDNN equivalent to AdaptiveAvgPool2d in PyTorch?
Yes, it's possible - you can create a pooling descriptor. Here is the official documentation for the API: https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnPoolingMode_t
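cuDNN only exposes fixed-window pooling, so for a known input size you compute the window and stride yourself and set them on the pooling descriptor (cudnnSetPooling2dDescriptor with an average-pooling mode). A minimal Python sketch of that mapping - my own illustration, assuming the input size divides evenly by the output size:
import torch
import torch.nn as nn
# derive the fixed window/stride that reproduce adaptive average pooling;
# these are the values you would put into the cuDNN pooling descriptor
in_size, out_size = 14, 7              # hypothetical spatial sizes
stride = in_size // out_size
kernel = in_size - (out_size - 1) * stride
x = torch.randn(1, 64, in_size, in_size)
adaptive = nn.AdaptiveAvgPool2d(out_size)(x)
fixed = nn.AvgPool2d(kernel_size=kernel, stride=stride)(x)
print(torch.allclose(adaptive, fixed))  # True when in_size divides evenly by out_size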
https://stackoverflow.com/questions/68029335/
RuntimeError: `lengths` array must be sorted in decreasing order when `enforce_sorted` is True. - Pytorch
It have been 5 hours sitting here getting the same error: RuntimeError: `lengths` array must be sorted in decreasing order when `enforce_sorted` is True. You can pass `enforce_sorted=False` to pack_padded_sequence and/or pack_sequence to sidestep this requirement if you do not need ONNX exportability. I'm working on this simple sentiment classification task using RNN in pytorch. I'm loading the my custom data using torchtext. I'm loading it from a json file which looks as follows: {"reviewText": "Da Silva takes the divine by ....", "overall": 4.0, "summary": "An amazing first novel"} I created my field as follows. And i created a pre-processing get_sentiment() function that convert overalls that are greater than 2 to 1 0 otherwise: get_sentiment = lambda x: 1 if x >=3 else 0 TEXT = data.Field(tokenize = 'spacy', tokenizer_language = 'en_core_web_sm', include_lengths=True ) LABEL = data.Field(sequential=False, use_vocab=False, preprocessing=get_sentiment) fields = { 'reviewText': ('review', TEXT), 'overall': ('sentiment', LABEL) } I loaded the data: train_data, test_data = data.TabularDataset.splits( path="/content/", train="Books_small_10000.json", test="Books_small.json", format="json", fields=fields ) I built the vocabularies: MAX_VOCAB_SIZE = 25_000 TEXT.build_vocab( train_data, max_size = MAX_VOCAB_SIZE, vectors = "glove.6B.100d", unk_init = torch.Tensor.normal_ ) LABEL.build_vocab(train_data) I created my iterators. BATCH_SIZE = 64 train_iterator, validation_iterator, test_iterator = data.BucketIterator.splits( (train_data, validation_data, test_data), device = device, batch_size = BATCH_SIZE, sort_key = lambda x: len(x.review), ) This is how my Model looks. class AmazonLSTMRNN(nn.Module): def __init__(self, vocab_size, embedding_size, hidden_size, output_size, num_layers , bidirectional, dropout, pad_idx): super(AmazonLSTMRNN, self).__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim=embedding_size, padding_idx=pad_idx) self.lstm = nn.LSTM(embedding_size, hidden_size=hidden_size, bidirectional=bidirectional, num_layers=num_layers, dropout=dropout) self.fc = nn.Linear(hidden_size * 2, out_features=output_size) self.dropout = nn.Dropout(dropout) def forward(self, text, text_lengths): embedded = self.dropout(self.embedding(text)) packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths.to('cpu')) packed_output, (h_0, c_0) = self.rnn(packed_embedded) output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output) h_0 = self.dropout(torch.cat((h_0[-2,:,:], h_0[-1,:,:]), dim = 1)) return self.fc(h_0) INPUT_DIM = len(TEXT.vocab) # # 25002 EMBEDDING_DIM = 100 HIDDEN_DIM = 256 OUTPUT_DIM = 1 N_LAYERS = 2 BIDIRECTIONAL = True DROPOUT = 0.5 PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token] # 0 amazon_model = AmazonLSTMRNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM, N_LAYERS, BIDIRECTIONAL, DROPOUT, PAD_IDX) criterion = nn.BCEWithLogitsLoss() optimizer = torch.optim.Adam(amazon_model.parameters()) amazon_model = amazon_model.to(device) criterion = criterion.to(device) ..... Training function def train(model, iterator, optimizer, criterion): epoch_loss = 0 epoch_acc = 0 model.train() for batch in iterator: optimizer.zero_grad() text, text_lengths = batch.review predictions = model(text, text_lengths).squeeze(1) loss = criterion(predictions, batch.sentiment) acc = accuracy(predictions, batch.sentiment) loss.backward() optimizer.step() epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) Training loop. 
N_EPOCHS = 5 best_valid_loss = float('inf') for epoch in range(N_EPOCHS): start_time = time.time() train_loss, train_acc = train(amazon_model, train_iterator, optimizer, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(amazon_model.state_dict(), 'best-model.pt') print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s') print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%') If someone knows where am i wrong please correct me. Any help input will be appreciated.
After a few more minutes I found the solution and was able to get an accuracy of approximately 93% after a single training epoch. I changed my LABEL field to: LABEL = data.LabelField(preprocessing=get_sentiment, dtype = torch.float) Then I changed the forward method of my AmazonLSTMRNN model by adding enforce_sorted=False to the pack_padded_sequence call. The forward method: def forward(self, text, text_lengths): embedded = self.dropout(self.embedding(text)) packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths.to('cpu'), enforce_sorted=False) packed_output, (h_0, c_0) = self.lstm(packed_embedded) output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output) h_0 = self.dropout(torch.cat((h_0[-2,:,:], h_0[-1,:,:]), dim = 1)) return self.fc(h_0)
https://stackoverflow.com/questions/68033951/
Albumentations in Pytorch: Inconsistent Augmentation for multi-target datasets
I'm using Pytorch and want to perform the data augmentation of my images with Albumentations. My dataset object has two different targets: 'blurry' and 'sharp'. Each instance of both targets needs to have identical changes. When I try to perform the data augmentation with a Dataset object like this: class ApplyTransform(Dataset): def __init__(self, dataset, transformation): self.dataset = dataset self.aug = transformation def __len__(self): return (len(self.dataset)) def __getitem__(self, idx): sample, target = self.dataset[idx]['blurry'], self.dataset[idx]['sharp'] transformedImgs = self.aug(image=sample, target_image=target) sample_aug, target_aug = transformedImgs["image"], transformedImgs["target_image"] return {'blurry': sample_aug, 'sharp': target_aug} Unfortunately, I receive two images with two different augmentations: When I try the same without a Dataset object, I receive two images with the identical application of augmentations. Does anybody know how to make it work with a dataset object? Here is my augmentation pipeline: augmentation_transform = A.Compose( [ A.Resize(1024,1024, p=1), A.HorizontalFlip(p=0.25), A.Rotate(limit=(-45, 65)), A.VerticalFlip(p=0.24), A.RandomContrast(limit=0.3, p=0.15), A.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), A.pytorch.transforms.ToTensorV2(always_apply=True, p=1.0) ], additional_targets={"target_image": "image"} )
You can stack your blurry and sharp images, apply your augmentation once to the stacked array, and then unstack them - that way both images go through exactly the same random transform.
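A minimal, hypothetical sketch of that idea inside __getitem__ - my own illustration, not part of the original answer. It assumes both images are HWC numpy arrays of the same size, that channel-wise transforms such as Normalize are configured for the stacked channel count, and that ToTensorV2 either stays out of the stacked call or the channel indexing is adjusted for the CHW tensor it returns:
import numpy as np
stacked = np.concatenate([sample, target], axis=2)   # e.g. (H, W, 6) for two RGB images
augmented = self.aug(image=stacked)["image"]         # one call -> one set of random parameters
sample_aug = augmented[:, :, :3]                     # first 3 channels -> blurry
target_aug = augmented[:, :, 3:]                     # last 3 channels -> sharp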
https://stackoverflow.com/questions/68040933/
Tensorboard in pytorch does not load anything in Browser
I am using TensorBoard to monitor the training progress of the model from this codebase. To open TensorBoard, I ran the command tensorboard --logdir=checkpoints/ as suggested in the codebase. I know that to open TensorBoard, I need to pass in --logdir the path of the directory where the events file is present, which I did. It does seem to start TensorBoard, since it returns the following in the terminal: I0620 12:52:16.737502 140647693104896 plugin.py:292] Monitor runs begin TensorBoard 2.5.0 at http://localhost:8088/ (Press CTRL+C to quit) But when I open the link in the browser, the TensorBoard loading screen appears and it loads forever, and doesn't open any stats/plots that I want to visualize. My TensorBoard version is 2.5.0 and PyTorch version is 1.8.1+cu102.
This issue got resolved once I uninstalled torch_tb_profiler and downgraded TensorBoard from 2.5.0 to 1.15.0, as suggested in this answer.
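For reference, the fix above roughly corresponds to commands like these (my assumption of the package names involved; the profiler plugin is published on PyPI as torch-tb-profiler):
pip uninstall torch-tb-profiler
pip install tensorboard==1.15.0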
https://stackoverflow.com/questions/68058295/
"AssertionError: Cannot handle batch sizes > 1 if no padding token is > defined" and pad_token = eos_token
I am trying to fine-tune a pre-trained GPT-2 model. When applying the respective tokenizer, I originally got the error message: Using pad_token, but it is not set yet. Thus, I changed my code to: GPT2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2") GPT2_tokenizer.pad_token = GPT2_tokenizer.eos_token When calling trainer.train() later, I end up with the following error: AssertionError: Cannot handle batch sizes > 1 if no padding token is defined. Since I specifically defined the pad_token above, I expect these errors (or rather my fix of the original error and this new error) to be related - although I could be wrong. Is this a known problem where eos_token and pad_token somehow interfere? Is there an easy work-around? Thanks a lot!
I've been running into a similar problem, producing the same error message you were receiving. I can't be sure if your problem and my problem were caused by the same issue, since I can't see your full stack trace, but I'll post my solution in case it can help you or someone else who comes along. You were totally correct to fix the first issue you described with your tokenizer by setting its pad token with the code provided. However, I also had to set the pad_token_id of my model's configuration to get my GPT2 model to function properly. I did this in the following way: # instantiate the configuration for your model, this can be imported from transformers configuration = GPT2Config() # set up your tokenizer, just like you described, and set the pad token GPT2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2") GPT2_tokenizer.pad_token = GPT2_tokenizer.eos_token # instantiate the model model = GPT2ForSequenceClassification(configuration).from_pretrained(model_name).to(device) # set the pad token of the model's configuration model.config.pad_token_id = model.config.eos_token_id I suppose this is because the tokenizer and the model function separately, and both need knowledge of the ID being used for the pad token. I can't tell if this will fix your problem (since this post is 6 months old, it may not matter anyway), but hopefully my answer may be able to help someone else.
https://stackoverflow.com/questions/68084302/
How to get the size of a Hugging Face pretrained model?
I keep getting a CUDA out of memory error when trying to fine-tune a Hugging Face pretrained XLM-RoBERTa model. So, the first thing I want to find out is the size of the pretrained model. model = XLMRobertaForCausalLM.from_pretrained('xlm-roberta-base', config=config) device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model.to(device) I have tried to get the size of the model with sys.getsizeof(model) and, unsurprisingly, I get an incorrect result: 56, which is the size of the Python object. But then I tried model.element_size(), and I get the error ModuleAttributeError: 'XLMRobertaForCausalLM' object has no attribute 'element_size' I have searched the Hugging Face documentation, but I have not found how to do it. Does anyone here know how to do it?
If you facing CUDA out of memory errors, the problem is mostly not the model, rather than the training data. You can reduce the batch_size (number of training examples used in parallel), so your gpu only need to handle a few examples each iteration and not a ton of. However, to your question: I would recommend you objsize. It is a library that calculates the "real" size (also known as "deep" size). So a straightforward solution would be: import objsize objsize.get_deep_size(model) However, the documentation says: Excluding non-exclusive objects. That is, objects that are also referenced from somewhere else in the program. This is true for calculating the object's deep size and for traversing its descendants. This shouldn't be a problem, but if it still gets a too small size for your model you can use Pympler, another Library that calculates the "deep" size via recursion. Another approach would be implementing a get_deep_size() function by yourself, e.g. from this article: import sys def get_size(obj, seen=None): """Recursively finds size of objects""" size = sys.getsizeof(obj) if seen is None: seen = set() obj_id = id(obj) if obj_id in seen: return 0 # Important mark as seen *before* entering recursion to gracefully handle # self-referential objects seen.add(obj_id) if isinstance(obj, dict): size += sum([get_size(v, seen) for v in obj.values()]) size += sum([get_size(k, seen) for k in obj.keys()]) elif hasattr(obj, '__dict__'): size += get_size(obj.__dict__, seen) elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)): size += sum([get_size(i, seen) for i in obj]) return size
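If what you actually care about is the memory footprint of the model's weights (rather than the deep size of the whole Python object), a small sketch that only uses PyTorch itself would be to sum over the parameters and buffers:
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
buffer_bytes = sum(b.numel() * b.element_size() for b in model.buffers())
print(f"weights + buffers: {(param_bytes + buffer_bytes) / 1024**2:.1f} MiB")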
https://stackoverflow.com/questions/68086929/
Pytorch slowing down after few iterations
I am trying to implement a model in PyTorch. The training procedure is quite complex and take a while, but what I have noticed is that the model is very fast on the first few batches, and then suddenly gets about 500. I guess it is due to some memory leak issue, as if python was not really letting free the memory of released huge tensors. At first I thought that the problem was linked to the storing gradient, but actually even with torch.no_grad() the same issue appears. Here is an example to replicate the problem. (Note I am not trying to train this specific network, but the problem looks the same). To make things simpler I am not using the gradient and I am iterating on the same batch. import torch import torch.nn as nn from torchvision.datasets import MNIST import torchvision.transforms as T dataset = MNIST(root='./MNIST', train=True, download=True, transform=T.Compose([T.ToTensor(), T.Lambda(lambda x: torch.flatten(x))])) data_loader = torch.utils.data.DataLoader(dataset, batch_size=500) X, _ = next(iter(data_loader)) X = X.to('cuda') in_features = 28*28 out_features = 10 width= 15000 #defining huge network NN = nn.Sequential( nn.Linear(in_features=28*28, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=width, bias=False), nn.ReLU(), nn.Linear(in_features=width, out_features=out_features, bias=False), ).to('cuda') import time iterations=100 X = X.to('cuda') with torch.no_grad(): for idx in range(iterations): print(f'Iteration {idx+1}') start = time.time() Y = NN(X) print(f'Time: {time.time() - start}') The output shows that everything is very fast up to almost the 50th iteration, then it suddenly slows down. Iteration 44 Time: 0.00035953521728515625 Iteration 45 Time: 0.00035309791564941406 Iteration 46 Time: 0.00035309791564941406 Iteration 47 Time: 0.048192501068115234 Iteration 48 Time: 0.1714644432067871 Iteration 49 Time: 0.16771984100341797 Iteration 50 Time: 0.1681973934173584 Iteration 51 Time: 0.16853046417236328 Iteration 52 Time: 0.16821908950805664 Why is there such a slow down? Is it possible to avoid it somehow?
Check out this page and scroll down to "Asynchronous execution". Basically, you are measuring the time to enqueue your operation into the GPU not the time it actually takes to execute your operations. This is because GPU calls are asynchronous as described in the link. I copied the relevant part below: By default, GPU operations are asynchronous. When you call a function that uses the GPU, the operations are enqueued to the particular device, but not necessarily executed until later. This allows us to execute more computations in parallel, including operations on CPU or other GPUs. In general, the effect of asynchronous computation is invisible to the caller, because (1) each device executes operations in the order they are queued, and (2) PyTorch automatically performs necessary synchronization when copying data between CPU and GPU or between two GPUs. Hence, computation will proceed as if every operation was executed synchronously. You can force synchronous computation by setting environment variable CUDA_LAUNCH_BLOCKING=1. This can be handy when an error occurs on the GPU. (With asynchronous execution, such an error isn’t reported until after the operation is actually executed, so the stack trace does not show where it was requested.) A consequence of the asynchronous computation is that time measurements without synchronizations are not accurate. To get precise measurements, one should either call torch.cuda.synchronize() before measuring, or use torch.cuda.Event to record times as following: start_event = torch.cuda.Event(enable_timing=True) end_event = torch.cuda.Event(enable_timing=True) start_event.record() # Run some things here end_event.record() torch.cuda.synchronize() # Wait for the events to be recorded! elapsed_time_ms = start_event.elapsed_time(end_event)
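Applied to the timing loop from the question, a minimal sketch using torch.cuda.synchronize() would look like this:
with torch.no_grad():
    for idx in range(iterations):
        torch.cuda.synchronize()                  # make sure previously queued GPU work is done
        start = time.time()
        Y = NN(X)
        torch.cuda.synchronize()                  # wait for this forward pass to actually finish
        print(f'Iteration {idx+1} time: {time.time() - start}')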
https://stackoverflow.com/questions/68087073/
How to move multiple tensors to the Cuda device concurrently?
policy_data, value_data, action_mask = policy_data.cuda(non_blocking=True), value_data.cuda(non_blocking=True), action_mask.cuda(non_blocking=True) rewards, regret_probs = rewards.cuda(non_blocking=True), regret_probs.cuda(non_blocking=True) return action_probs.cpu(), sample_probs.cpu(), sample_indices.cpu(), update I am doing some RL work and am wondering whether it would be possible to speed up fragments like the above by launching the data transfer to the GPU on different streams before waiting on them together. Does PyTorch have any functions that would make this easier? I'd rather ask here before I dive into the minutiae of optimizing data transfers.
Seems like one potential solution would be to pack all of the data into a single tensor (though of course you'd likely pay a small cost due to unused elements within this compacted representation.) An alternative would be to store this compact tensor as a sparse tensor (no additional data, but slightly more memory consumption per value). You'd have to test between these two to determine which was more efficient for your use case.
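A rough sketch of that packing idea (assuming all five tensors share the same dtype; otherwise you would need one packed buffer per dtype):
tensors = [policy_data, value_data, action_mask, rewards, regret_probs]
shapes = [t.shape for t in tensors]
sizes = [t.numel() for t in tensors]

packed = torch.cat([t.reshape(-1) for t in tensors])   # one contiguous CPU buffer
packed = packed.cuda(non_blocking=True)                # a single host-to-device transfer
                                                       # (non_blocking only overlaps if the CPU memory is pinned)
chunks = torch.split(packed, sizes)                    # views into the packed GPU buffer
policy_data, value_data, action_mask, rewards, regret_probs = (
    c.view(s) for c, s in zip(chunks, shapes))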
https://stackoverflow.com/questions/68087621/
PyTorch CUDA error: an illegal memory access was encountered
Relatively new to using CUDA. I keep getting the following error after a seemingly random period of time: RuntimeError: CUDA error: an illegal memory access was encountered I have seen people suggest things such as using cuda.set_device() rather than cuda.device(), setting torch.backends.cudnn.benchmark = False but I can't seem to get the error to go away. Here are some pieces of my code: torch.cuda.set_device(torch.device('cuda:0')) torch.backends.cudnn.benchmark = False class LSTM(nn.Module): def __init__(self, input_dim, hidden_dim, num_layers, output_dim): super(LSTM, self).__init__() self.hidden_dim = hidden_dim self.num_layers = num_layers self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True, dropout=0.2) self.fc = nn.Linear(hidden_dim, output_dim) def forward(self, x): h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_().cuda() c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_().cuda() out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach())) out = self.fc(out[:, -1, :]) return out def pred(self, x): return self(x) > 0 def train(model, loss_fn, optimizer, num_epochs, x_train, y_train, x_val, y_val, loss_stop=60): cur_best_loss = 999 loss_recur_count = 0 best_model = None for t in range(num_epochs): model.train() y_train_pred = model(x_train) train_loss = loss_fn(y_train_pred, y_train) tr_l = train_loss.item() optimizer.zero_grad() train_loss.backward() optimizer.step() model.eval() with torch.no_grad(): y_val_pred = model(x_val) val_loss = loss_fn(y_val_pred, y_val) va_l = val_loss.item() if va_l < cur_best_loss: cur_best_loss = va_l best_model = model loss_recur_count = 0 else: loss_recur_count += 1 if loss_recur_count == loss_stop: break if best_model is None: print("model is None.") return best_model def lstm_test(cols, df, test_percent, test_bal, initial_shares_test, max_price, last_sell_day): wdw = 20 x_train, y_train, x_test, y_test, x_val, y_val = load_data(df, wdw, test_percent, cols) x_train = torch.from_numpy(x_train).type(torch.Tensor).cuda() x_test = torch.from_numpy(x_test).type(torch.Tensor).cuda() x_val = torch.from_numpy(x_val).type(torch.Tensor).cuda() y_train = torch.from_numpy(y_train).type(torch.Tensor).cuda() y_test = torch.from_numpy(y_test).type(torch.Tensor).cuda() y_val = torch.from_numpy(y_val).type(torch.Tensor).cuda() input_dim = x_train.shape[-1] hidden_dim = 32 num_layers = 2 output_dim = 1 y_preds_dict = {} for i in range(11): model = LSTM(input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, num_layers=num_layers).cuda() r = (y_train.cpu().shape[0] - np.count_nonzero(y_train.cpu()))/np.count_nonzero(y_train.cpu())/2 pos_w = torch.tensor([r]).cuda() loss_fn = torch.nn.BCEWithLogitsLoss(pos_weight=pos_w).cuda() optimizer = torch.optim.AdamW(model.parameters(), lr=0.01) best_model = train(model, loss_fn, optimizer, 300, x_train, y_train, x_val, y_val) y_test_pred = get_predictions(best_model, x_test) y_preds_dict[i] = y_test_pred.cpu().detach().numpy().flatten() and here is the error msg: <ipython-input-5-c52edc2c0508> in train(model, loss_fn, optimizer, num_epochs, x_train, y_train, x_val, y_val, loss_stop) 19 model.eval() 20 with torch.no_grad(): ---> 21 y_val_pred = model(x_val) 22 23 val_loss = loss_fn(y_val_pred, y_val) ~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or 
_global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] <ipython-input-4-9da8c811c037> in forward(self, x) 10 11 def forward(self, x): ---> 12 h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_().cuda() 13 c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_().cuda() 14 RuntimeError: CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
It was partially said by the answer of the OP, but the problem under the hood with illegal memory access is that the GPU runs out of memory. In my case, when I run a script on Windows I get the error message: RuntimeError: CUDA out of memory. Tried to allocate 1.64 GiB (GPU 0; 4.00 GiB total capacity; 1.10 GiB already allocated; 1.27 GiB free; 1.12 GiB reserved in total by PyTorch) but when run on Linux I get: RuntimeError: CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Perhaps the message in Windows is more understandable :) References: https://forums.fast.ai/t/runtimeerror-cuda-error-an-illegal-memory-access-was-encountered/93899
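If you want to check whether your run is in the same situation, one hedged diagnostic is to print PyTorch's memory counters right before the call that crashes:
device = torch.device('cuda:0')
total = torch.cuda.get_device_properties(device).total_memory
allocated = torch.cuda.memory_allocated(device)    # bytes currently occupied by tensors
reserved = torch.cuda.memory_reserved(device)      # bytes held by the caching allocator
print(f"total {total / 1024**3:.2f} GiB | allocated {allocated / 1024**3:.2f} GiB | reserved {reserved / 1024**3:.2f} GiB")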
https://stackoverflow.com/questions/68106457/
Get file names and file path using PyTorch dataloader
I am using PyTorch 1.8 and Python 3.8 to read images from a folder using the following code: print(f"PyTorch version: {torch.__version__}") # PyTorch version: 1.8.1 # Device configuration- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print(f"currently available device: {device}") # currently available device: cpu # Define transformations for training and test sets- transform_train = transforms.Compose( [ # transforms.RandomCrop(32, padding = 4), # transforms.RandomHorizontalFlip(), transforms.ToTensor(), # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ] ) transform_test = transforms.Compose( [ transforms.ToTensor(), # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ] ) # Define directory containing images- data_dir = 'My_Datasets/Cat_Dog_data/' # Define datasets- train_data = datasets.ImageFolder(data_dir + '/train', transform = train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform = test_transforms) print(f"number of train images = {len(train_data)} & number of validation images = {len(test_data)}") # number of train images = 22500 & number of validation images = 2500 print(f"number of training classes = {len(train_data.classes)} & number of validation classes = {len(test_data.classes)}") # number of training classes = 2 & number of validation classes = 2 # Define data loaders- trainloader = torch.utils.data.DataLoader(train_data, batch_size = 32) testloader = torch.utils.data.DataLoader(test_data, batch_size = 32) len(trainloader), len(testloader) # (704, 79) # Sanity check- len(train_data) / 32, len(test_data) / 32 You can iterate through the train data using 'train_loader' as follows: for img, lab in train_loader: print(img.shape, lab.shape) pass However, I am interested in getting the file name along with the file path from which the file was read. How can I achieve this? Thanks!
The default ImageFolder Dataset holds the paths of all images in self.samples. All you need to do is modify __getitem__ to return the paths as well.
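A minimal sketch of such a subclass (the class name and usage below are placeholders, adapted to the loader code from the question):
import torch
import torchvision

class ImageFolderWithPaths(torchvision.datasets.ImageFolder):
    def __getitem__(self, index):
        img, label = super().__getitem__(index)   # transformed image and class label as usual
        path = self.samples[index][0]             # self.samples is a list of (path, class_index) tuples
        return img, label, path

train_data = ImageFolderWithPaths(data_dir + '/train', transform=transform_train)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=32)

for imgs, labels, paths in trainloader:           # paths comes back as a tuple of strings per batch
    print(paths[0])
    break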
https://stackoverflow.com/questions/68112479/
Good accuracy and loss on training vs bad accuracy on validation
I am learning pytorch and I have created binary classification algorithm. After having trained the model I have very low loss and quite good accuracy. However, on validation the accuracy is exactly 50%. I am wondering if I loaded samples incorrectly or the algorithm does not perform well. Here you can find the plot of Training loss and accuracy. Here is my training method: epochs = 15 itr = 1 p_itr = 100 model.train() total_loss = 0 loss_list = [] acc_list = [] for epoch in range(epochs): for samples, labels in train_loader: samples, labels = samples.to(device), labels.to(device) optimizer.zero_grad() output = model(samples) labels = labels.unsqueeze(-1) labels = labels.float() loss = criterion(output, labels) loss.backward() optimizer.step() total_loss += loss.item() scheduler.step() #if itr%p_itr == 0: pred = torch.round(output) correct = pred.eq(labels) acc = torch.mean(correct.float()) print('[Epoch {}/{}] Iteration {} -> Train Loss: {:.4f}, Accuracy: {:.3f}'.format(epoch+1, epochs, itr, total_loss/p_itr, acc)) loss_list.append(total_loss/p_itr) acc_list.append(acc) total_loss = 0 itr += 1 Here, I am loading data from the path: train_list_cats = glob.glob(os.path.join(train_cats_dir,'*.jpg')) train_list_dogs = glob.glob(os.path.join(train_dogs_dir,'*.jpg')) train_list = train_list_cats + train_list_dogs val_list_cats = glob.glob(os.path.join(validation_cats_dir,'*.jpg')) val_list_dogs = glob.glob(os.path.join(validation_dogs_dir,'*.jpg')) val_list = val_list_cats + val_list_dogs I am not attaching the model architecture, however I can add it if required. I think that my training method is correct, although, I am not sure about training/validation data processing. Edit: The network params are as follow: optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001) criterion = nn.BCELoss() scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[500,1000,1500], gamma=0.5) Activation function is sigmoid. The network architecture: self.layer1 = nn.Sequential( nn.Conv2d(3,16,kernel_size=3), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(p=0.2) ) self.layer2 = nn.Sequential( nn.Conv2d(16,32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(p=0.2) ) self.layer3 = nn.Sequential( nn.Conv2d(32,64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(p=0.2) ) self.fc1 = nn.Linear(17*17*64,512) self.fc2 = nn.Linear(512,1) self.relu = nn.ReLU() self.sigmoid = nn.Sigmoid() def forward(self,x): out = self.layer1(x) out = self.layer2(out) out = self.layer3(out) out = out.view(out.size(0),-1) out = self.relu(self.fc1(out)) out = self.fc2(out) return torch.sigmoid(out)
Going by your "Training loss and accuracy" plot your model is overfitting. Your train loss is near zero after 25 epochs and you continue training for 200+ epochs. This is wrong way to train a model. You should rather be doing early stopping based on the validation set. ie. Run one epoch of train and one epoch of eval and repeat. Stop when your train epoch is improving and the corresponding eval epoch is not improving.
https://stackoverflow.com/questions/68113134/
difference in code between using nn.RNN or not
hi im new to rnn's and I found RNN NLP FROM SCRATCH from pytorch official tutorials, and I think it's named "from scartch" because it didn't use the nn.RNN built in nn in pytorch some line like this self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True) in the def __init__(self, input_size, hidden_size, output_size): segment. so how to the code would have been evolved if the nn.RNN was been used? class RNN(nn.Module): # implement RNN from scratch rather than using nn.RNN def __init__(self, input_size, hidden_size, output_size): super(RNN, self).__init__() self.hidden_size = hidden_size self.i2h = nn.Linear(input_size + hidden_size, hidden_size) self.i2o = nn.Linear(input_size + hidden_size, output_size) self.softmax = nn.LogSoftmax(dim=1) def forward(self, input_tensor, hidden_tensor): combined = torch.cat((input_tensor, hidden_tensor), 1) hidden = self.i2h(combined) output = self.i2o(combined) output = self.softmax(output) return output, hidden def init_hidden(self): return torch.zeros(1, self.hidden_size) def train(line_tensor, category_tensor): hidden = rnn.init_hidden() for i in range(line_tensor.size()[0]): output, hidden = rnn(line_tensor[i], hidden) loss = criterion(output, category_tensor) optimizer.zero_grad() loss.backward() optimizer.step() return output, loss.item() another equivalent to this question is how to rewrite the code with using self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True) or if it's not possible how internal nn.RNN structure look like?
This model follows the from-scratch way of implementing an RNN rather than using the built-in recurrent module; it is a pure implementation of an RNN. In this example the hidden state and the gradients are handled entirely by the autograd graph. def init_hidden(self): return torch.zeros(1, self.hidden_size) The line above initializes the hidden state (which is all zeros at first). After the first step we get the output and the next hidden state, which is then fed into the next step. All of this is handled by the graph.
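As for how the code could look with nn.RNN: here is a rough, hedged sketch of an equivalent model (it assumes the tutorial's input shape of (seq_len, 1, n_letters) and a single recurrent layer; it is not the tutorial's code):
import torch
import torch.nn as nn

class RNNWithModule(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.rnn = nn.RNN(input_size, hidden_size)      # expects (seq_len, batch, input_size)
        self.h2o = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, line_tensor):
        h0 = torch.zeros(1, line_tensor.size(1), self.hidden_size)
        out, hidden = self.rnn(line_tensor, h0)          # nn.RNN loops over the sequence for you
        return self.softmax(self.h2o(out[-1]))           # classify from the last time step
With this version the training loop no longer needs the explicit per-character loop, since nn.RNN consumes the whole line_tensor at once.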
https://stackoverflow.com/questions/68116129/
What is the purpose of optimizer's state_dict in PyToch Big Graph's embedding dataset?
The documentation for PyTorch Big Graph (PBG) states that "An additional dataset may exist, optimizer/state_dict, which contains the binary blob (obtained through torch.save()) of the state dict of the model’s optimizer." When inspecting this dataset, it seems to be stored as an array of bytes. Could someone conceptually explain the point of state_dict and why it's stored as an array rather than a dictionary?
Could someone conceptually explain the point of state_dict If you know about Adam or SGD's momentum, you probably know that there are some parameters inside the optimizer that change at every step. When you resume training, loading these parameters on top of the model weights will make convergence faster. You can get away without them, it's just that sometimes training will behave almost as if you were starting from scratch. why it's stored as an array rather than a dictionary? If it's really obtained through torch.save(), then it is in fact stored as a dictionary, or at least a list of dictionaries. It's just that your way of "inspecting" it is wrong. Try print(torch.load('path_to_the_file'))
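A small sketch of why the saved state dict ends up looking like a blob of bytes (the file path is just a placeholder):
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

sd = optimizer.state_dict()            # a plain Python dict: per-parameter state + param_groups
print(sd.keys())                       # dict_keys(['state', 'param_groups'])

torch.save(sd, "optimizer_state.pt")   # torch.save() serializes that dict into binary bytes;
                                       # PBG stores those raw bytes in its optimizer/state_dict dataset
restored = torch.load("optimizer_state.pt")
optimizer.load_state_dict(restored)    # deserializing gives the dict back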
https://stackoverflow.com/questions/68118646/
Slice a list with two other lists in tensorflow / pytorch
How can I slice a list with two other lists? In another word, how can I do a vectorized slicing in tensorflow? indptr = [0 2 2 5 7] values = [2 4 3 2 1 1 5] values[indptr[:-1]:indptr[1:]] # --> throws exception expected output: [[2, 4], [], [3, 2, 1], [1, 5]] More specifically, I wanna vectorized the following loop: import numpy as np # sparse representation in CSR format indptr = [0, 2, 2, 5, 7] indices = [1, 3, 0, 1, 2, 2, 3] values = [2, 4, 3, 2, 1, 1, 5] m, n = 4, 4 out = np.zeros((m, n)) for i in range(m): out[i][indices[indptr[i]:indptr[i + 1]]] = values[indptr[i]:indptr[i + 1]] expected output: # dense representation [[0, 2, 0, 4], [0, 0, 0, 0], [3, 2, 1, 0], [0, 0, 1, 5]])
In tensorflow, tf.scatter_nd can be used for the purpose. @tf.function def csr_to_dense(indptr,indices,values,m,n): repeats=indptr[1:]-indptr[:-1] ind1=tf.repeat(tf.range(m),repeats) indices=tf.stack([ind1,indices],1) return tf.scatter_nd(indices,values,(m,n)) indptr=tf.constant([0,2,2,5,7]) indices=tf.constant([1,3,0,1,2,2,3]) values=tf.constant([2,4,3,2,1,1,5],dtype=tf.float32) m=tf.constant(4) n=tf.constant(4) print(csr_to_dense(indptr,indices,values,m,n)) #from scipy docs, "Duplicate entries are summed together" indptr=tf.constant([0,9,9,9,9,9]) indices=tf.constant([0,0,0,0,0,0,0,1,2]) values=tf.constant([2,4,3,2,1,-1,5,-1,-3],dtype=tf.float32) m=tf.constant(5) n=tf.constant(3) print(csr_to_dense(indptr,indices,values,m,n)) target=csr_matrix(np.random.randint(9,size=(5,4))) indptr=tf.constant(target.indptr) indices=tf.constant(target.indices) values=tf.constant(target.data,dtype=tf.float32) m=tf.constant(target.shape[0]) n=tf.constant(target.shape[1]) print(np.allclose(csr_to_dense(indptr,indices,values,m,n),target.toarray())) ''' tf.Tensor( [[0. 2. 0. 4.] [0. 0. 0. 0.] [3. 2. 1. 0.] [0. 0. 1. 5.]], shape=(4, 4), dtype=float32) tf.Tensor( [[16. -1. -3.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]], shape=(5, 3), dtype=float32) True '''
https://stackoverflow.com/questions/68122235/
pytorch's grid_sample return an incorrect value
I have a 3D matrix: img[i, j, k] = i+j+k. In my opinion, if I want the value of (1, 2, 3), the grid_sample should return 6. But it not. The code is: import torch from torch.nn import functional as F import numpy as np X, Y, Z = 10, 20, 30 img = np.zeros(shape=[X, Y, Z], dtype=np.float32) for i in range(X): for j in range(Y): for k in range(Z): img[i,j,k] = i+j+k inp = torch.from_numpy(img).unsqueeze(0).unsqueeze(0) grid = torch.from_numpy(np.array([[1, 2, 3]], dtype=np.float32)).unsqueeze(1).unsqueeze(1).unsqueeze(1) grid[..., 0] /= (X-1) grid[..., 1] /= (Y-1) grid[..., 2] /= (Z-1) grid = 2*grid - 1 outp = F.grid_sample(inp, grid=grid, mode='bilinear', align_corners=True) print(outp) The grid_sample return 6.15. Is there anything wrong with my code?
Finally, I find the solution. The reason why the above code return an incorrect value is that the torch.grid_sample accept (z, y, x) point. Thus, the correct code should be: import torch from torch.nn import functional as F import numpy as np X, Y, Z = 10, 20, 30 img = np.zeros(shape=[X, Y, Z], dtype=np.float32) for i in range(X): for j in range(Y): for k in range(Z): img[i,j,k] = i+j+k inp = torch.from_numpy(img).unsqueeze(0).unsqueeze(0) grid = torch.from_numpy(np.array([[1, 2, 3]], dtype=np.float32)).unsqueeze(1).unsqueeze(1).unsqueeze(1) grid[..., 0] /= (X-1) grid[..., 1] /= (Y-1) grid[..., 2] /= (Z-1) grid = 2*grid - 1 newgrid = grid.clone() newgrid[..., 0] = grid[..., 2] newgrid[..., 1] = grid[..., 1] newgrid[..., 2] = grid[..., 0] outp = F.grid_sample(inp, grid=newgrid, mode='bilinear', align_corners=True) print(outp)
https://stackoverflow.com/questions/68131325/
huggingface-hub 0.0.12 requires packaging>=20.9, but you'll have packaging 20.4 which is incompatible
huggingface-hub 0.0.12 requires packaging>=20.9, but you'll have packaging 20.4 which is incompatible
You will have to upgrade huggingface-hub with pip install --upgrade huggingface-hub
https://stackoverflow.com/questions/68140977/
I want to get feature value of an object with YOLOv5
I want to get the feature values of an object with YOLOv5. I'm guessing there is a hint in "detect.py" in the open-source repo. How can I get the feature values of the object used for inference? Please tell me how to resolve this.
From the variable 'det' inside def run in detect.py (line 181) you can get the xyxy box coordinates, the confidence score, and the class index of the object. Since 'det' is a tensor, you will need to convert its values. If you only want the class index of the object, you can easily get it by converting cls in detect.py (line 205), e.g. 'int(cls)'.
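A hedged sketch of reading those values out of det, following the per-detection loop structure used in detect.py (names is the class-name list that detect.py builds from the model; treat this as a sketch rather than the exact script):
# each row of 'det' after non-max suppression is (x1, y1, x2, y2, confidence, class)
for *xyxy, conf, cls in det:
    x1, y1, x2, y2 = (int(v) for v in xyxy)   # box corners in pixels
    score = float(conf)                       # detection confidence
    class_id = int(cls)                       # numeric class index
    label = names[class_id]                   # human-readable class name
    print(x1, y1, x2, y2, score, label)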
https://stackoverflow.com/questions/68157783/
convert pytorch model with multiple networks to onnx
I am trying to convert pytorch model with multiple networks to ONNX, and encounter some problem. The git repo: https://github.com/InterDigitalInc/HRFAE The Trainer Class: class Trainer(nn.Module): def __init__(self, config): super(Trainer, self).__init__() # Load Hyperparameters self.config = config # Networks self.enc = Encoder() self.dec = Decoder() self.mlp_style = Mod_Net() self.dis = Dis_PatchGAN() ... Here is how the trained model process image: def gen_encode(self, x_a, age_a, age_b=0, training=False, target_age=0): if target_age: self.target_age = target_age age_modif = self.target_age*torch.ones(age_a.size()).type_as(age_a) else: age_modif = self.random_age(age_a, diff_val=25) # Generate modified image self.content_code_a, skip_1, skip_2 = self.enc(x_a) style_params_a = self.mlp_style(age_a) style_params_b = self.mlp_style(age_modif) x_a_recon = self.dec(self.content_code_a, style_params_a, skip_1, skip_2) x_a_modif = self.dec(self.content_code_a, style_params_b, skip_1, skip_2) return x_a_recon, x_a_modif, age_modif And as following is how I did to convert to onnx: enc = Encoder() dec = Decoder() mlp = Mod_Net() layers = [enc, mlp, dec] model = torch.nn.Sequential(*layers) # here is my confusion: how do I specify the inputs of each layer?? # E.g. one of the outputs of 'enc' layer should be input of 'mlp' layer, # or the outputs of 'enc' layer should be part of inputs of 'dec' layer... params = torch.load('./logs/001/checkpoint') model[0].load_state_dict(params['enc_state_dict']) model[1].load_state_dict(params['mlp_style_state_dict']) model[2].load_state_dict(params['dec_state_dict']) torch.onnx.export(model, torch.randn([1, 3, 1024, 1024]), 'trained_hrfae.onnx', do_constant_folding=True) Maybe the convert-part code is in wrong way?? Could anyone help, many thanks! #20210629-11:52GMT Edit: I found there's constraint of using torch.nn.Sequential. The output of former layer in Sequential should be consistent with latter input. So my code shouldn't work at all because the output of 'enc' layer is not consistent with input of 'mlp' layer. Could anyone help how to convert this type of pytorch model to onnx? Many thanks, again :)
After research and try, I found a method which maybe in correct way: Convert each net(Encoder, Mod_Net, Decoder) to onnx model, and handle their input/output in latter logic-process or any further procedure (e.g convert to tflite model). I'm trying to port onto Android using this method. #Edit 20210705-03:52GMT# Another approach may be better: write a new net combines the three nets. I've prove the output is same as origin pytorch model. class HRFAE(nn.Module): def __init__(self): super(HRFAE, self).__init__() self.enc = Encoder() self.mlp_style = Mod_Net() self.dec = Decoder() def forward(self, x, age_modif): content_code_a, skip_1, skip_2 = self.enc(x) style_params_b = self.mlp_style(age_modif) x_a_modif = self.dec(content_code_a, style_params_b, skip_1, skip_2) return x_a_modif and then convert use following: net = HRFAE() params = torch.load('./logs/002/checkpoint') net.enc.load_state_dict(params['enc_state_dict']) net.mlp_style.load_state_dict(params['mlp_style_state_dict']) net.dec.load_state_dict(params['dec_state_dict']) net.eval() torch.onnx.export(net, (torch.randn([1, 3, 512, 512]), torch.randn([1]).type(torch.long)), 'test_hrfae.onnx') This should be the answer.
https://stackoverflow.com/questions/68177899/
Read data from numpy array into a pytorch tensor without creating a new tensor
Let's say I have a numpy array arr = np.array([1, 2, 3]) and a pytorch tensor tnsr = torch.zeros(3,) Is there a way to read the data contained in arr to the tensor tnsr, which already exists rather than simply creating a new tensor like tnsr1 = torch.tensor(arr). This is a simplified example of the problem, since I am using a dataset that contains nearly 17 million entries. EDIT: I know I can manually loop through each entry in the array. With 17 million entries, that would take quite a while I believe...
You can do that using torch.from_numpy(arr). Here is an example that shows that it's not being copied. import numpy as np import torch arr = np.random.randint(0,high=10**6,size=(10**4,10**4)) %timeit arr.copy() tells me that it took 492 ms ± 6.54 ms to copy the array of random integers. On the other hand %timeit torch.from_numpy(arr) tells me that it took 1.14 µs ± 131 ns to turn it into a tensor. So there is no way that the 100 mio integers could have been copied. Pytorch is still using the same data. Finally your version i.e. %timeit torch.tensor(arr) gives 201 ms ± 4.08 ms. Which is quite surprising to me. Since it should not be faster than numpy's copy in copying. But when it's not copying what takes it 1/5 or a second? Maybe it's doing a shallow copy. Maybe somebody else can tell us what's going on exactly.
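If you really do need to fill the tensor that already exists (rather than re-binding the name), a small hedged addition: torch.from_numpy() gives a tensor that shares the array's memory, and Tensor.copy_() writes it into the pre-existing storage without allocating a new tensor:
import numpy as np
import torch

arr = np.array([1, 2, 3], dtype=np.float32)
tnsr = torch.zeros(3)

view = torch.from_numpy(arr)   # no copy, shares memory with arr
tnsr.copy_(view)               # in-place copy into the existing tensor
print(tnsr)                    # tensor([1., 2., 3.])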
https://stackoverflow.com/questions/68183227/
Create a CNN that has a Kernel that is 1xD, where D is number of columns that slides vertically over a MxD matrix?
Create a CNN that has a Kernel that is 1xD, where D is number of columns that slides vertically over a MxD matrix? I'm trying to create a CNN in Pytorch that has a kernel that slides a 1xD kernel over a 2D image vertically so the output should be Mx1. As in the CNN convolves each row of the image then produces a single value for each row. Also having the ability to change from a 1xD to a NxD where N is some predefined number of rows would be nice as well. The input is purely just a matrix not a 3D matrix representing an image.
Kernels in nn.Conv2d do not have to be square, they can also be rectangular: class MyModel(nn.Module): def __init__(self, in_channels, out_channels, N, D): super(MyModel, self).__init__() self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=(N, D), padding=0, stride=1) def forward(self, x): return self.conv(x) Note that your input x has to be 4 dimensional: B-C-H-W. Where the number of channels C must match in_channels defined when constructing MyModel. If you have a single image with only one channel, then the input should have two leading singleton dimensions: that is, x should have the shape 1-1-M-D. See this answer for more information about why x should be 4D.
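A quick usage sketch for the exact case in the question (a 1xD kernel sliding vertically over an MxD matrix, using the MyModel class above; M and D below are placeholders):
import torch

M, D = 100, 32                                   # placeholder sizes
x = torch.randn(1, 1, M, D)                      # batch=1, channels=1, height=M, width=D
model = MyModel(in_channels=1, out_channels=1, N=1, D=D)
out = model(x)
print(out.shape)                                 # torch.Size([1, 1, 100, 1]) -> one value per row
out = out.squeeze(-1).squeeze(1)                 # shape (1, 100), i.e. M values for the matrix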
https://stackoverflow.com/questions/68187971/
"torch.relu_(input) unknown parameter type" from pytorch
I am trying to run this 3D pose estimation repo in Google Colab on a GPU, but after doing all of the steps and putting in my own left/right cam vids, I get this error in Colab: infering thread started 1 1 : cannot connect to X server Exception in thread Thread-1: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/content/Stereo-3D-Pose-Estimation/poseinferscheduler.py", line 59, in infer_pose_loop l_pose_t = infer_fast(self.net, l_img, height, self.stride, self.upsample_ratio, self.cpu) File "/content/Stereo-3D-Pose-Estimation/pose3dmodules.py", line 47, in infer_fast stages_output = net(tensor_img) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/content/Stereo-3D-Pose-Estimation/models/with_mobilenet.py", line 115, in forward backbone_features = self.model(x) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 139, in forward input = module(input) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 139, in forward input = module(input) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/activation.py", line 102, in forward return F.relu(input, inplace=self.inplace) File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 1296, in relu result = torch.relu_(input) RuntimeError: unknown parameter type I am a bit confused as to why I am seeing it, I have already installed all necessary prerequisites; also can't interpret what it means either.
Since the traceback happens in the pytorch library, I checked the code there on the pytorch github. What the error means is that you are calling an inplace activation function in torch.relu_ to some object called input. However, what is happening is that the type of input is not recognized by the torch backend which is why it is a runtime error. Therefore, I would suggest to print out input and also run type(input) to find out what object input represents and what that variable is. As a further reference, this is the particular script that Pytorch runs in the backend that leads it to throw an unknown parameter type error. From a quick look, it seems to be a switch statement that confirms if a value falls into a list of types. If it is not in the list of types, then it will run the default block which throws unknown parameter type error. https://github.com/pytorch/pytorch/blob/aacc722aeca3de1aedd35adb41e6f8149bd656cd/torch/csrc/utils/python_arg_parser.cpp#L518-L541 EDIT: If type(input) returns a torch.tensor then it is probably an issue with the version of python you are using. I know you said you have the prerequisites but I think it would be good to double check if you have python 3.6, and maybe but less preferably python 3.5 or 3.7. These are the python versions that work with the repo you just sent me. You can find the python version on your collab by typing !python --version on one of cells. Make sure that it returns a correct version supported by the software you are running. This error might come from the fact that instead of torch, python itself is expressing this error in its backend. I found this stackoverflow useful as it shows how some code was unable to recognize a built in type dictionary in python: "TypeError: Unknown parameter type: <class 'dict_values'>" The solution to this was to check python versions. Sarthak
https://stackoverflow.com/questions/68188278/
How to implement this equation in pytorch?
I'm trying to implement GNNs from a research paper and I have to code the following equations to get some sort of relatedness scores sj. Since I am new to pytorch, I'm having some difficulties implementing equations. Following are the set of equations that I want to code: I have the following inputs h(t) = input_a = hidden[:,0:8,:] hc(t) = input_b = hidden[:,8:13,:] The dimensions of 'hidden' is (1000, 13, 128) #(Batch size, inputs(context-0:8, and candidates 8:13), embedding dimension) Also, the subscripts i in h(t) belong to 0:8, and j in hc(t) belong to 8:13. So, hi(t) would be hidden[:,i,:]. To provide some context, Could someone please help me code the equations in the first picture ? I can even implement the function g() by myself, but I'm confused about the first two. Could someone help ?
All the operations you could do to pytorch tensors are documented over here: https://pytorch.org/docs/stable/torch.html I suggest command + F to search for the operation you need. For the 1st equations you gave: You can find torch.tanh() as shown here: https://pytorch.org/docs/stable/generated/torch.tanh.html#torch.tanh You can find torch.transpose() over here https://pytorch.org/docs/stable/generated/torch.transpose.html#torch.transpose (Also has 2 aliases torch.swapaxes and torch.swapdims that you can find in the documentation I linked above) For the second equation You can find the exponential with torch.exp() as shown here https://pytorch.org/docs/stable/generated/torch.exp.html#torch.exp The summation can be coded with a for loop or clever indexing in pytorch. There are many operations you can do to a torch.tensor object that are pretty useful for papers. As a rule of thumb, when implementing papers just use this pytorch doc as a way to find out many of those operations are. Additionally if you are less comfortable with pytorch, another suggestion can be to do the equations in numpy then convert that value to a tensor. Sarthak
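Since the equations themselves are only in the linked image, here is just a shape-level sketch of how those named operations could be combined on your tensors; this is not the paper's exact formula, only a demonstration of tanh, a batched matrix product (which covers the transpose) and an exp-based normalization:
import torch

hidden = torch.randn(1000, 13, 128)
h = hidden[:, 0:8, :]                 # context states h_i(t), shape (1000, 8, 128)
hc = hidden[:, 8:13, :]               # candidate states hc_j(t), shape (1000, 5, 128)

# a bilinear-style compatibility between every candidate j and context i
e = torch.tanh(torch.bmm(hc, h.transpose(1, 2)))              # (1000, 5, 8)

# exp + normalize over the context dimension (a softmax written out explicitly)
s = torch.exp(e) / torch.exp(e).sum(dim=-1, keepdim=True)     # (1000, 5, 8)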
https://stackoverflow.com/questions/68191076/
Why the method of log_prob in my Pytorch doesn't work
For example, I have a Beta distribution in Pytorch, and the parameter a=0.01 and b=1.4709. The density function is as below: Density function of the Beta distribution Then I sample an action from this distribution which is 1.1754943508222875e-38. Now, there is something happened, after I calculate the log_prob of this action, what I get is 81.83833312988281. We know that the sampled action is extremely small and the probability of this action should be very close to 1. However, the log_prob becomes very large and more than 0. Firstly, shouldn't the log_prob be between [-inf, 0] ??
log_prob returns the log of the probability density/mass function (pdf/pmf) evaluated at the given sample value. Probability density is not the same as probability, since for a continuous distribution like the Beta distribution the probability of any single value is actually 0. As such, there's no stipulation that the log-pdf evaluated at a given point should be between [-inf, 0]. In fact, for your linked example you can see that the pdf evaluated at 1.1754943508222875e-38 would be extremely large - hence the large positive log_prob value of 81.83833312988281.
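A small sketch reproducing the observation with the parameters from the question; exponentiating log_prob gives back the density itself, which for a continuous distribution can be far larger than 1:
import torch
from torch.distributions import Beta

dist = Beta(torch.tensor(0.01), torch.tensor(1.4709))
x = torch.tensor(1.1754943508222875e-38)

log_density = dist.log_prob(x)   # log of the pdf at x, can be any real number
density = log_density.exp()      # the pdf value itself, not a probability
print(log_density, density)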
https://stackoverflow.com/questions/68199047/
Graph Neural Network Regression
I am trying to implement a regression on a Graph Neural Network. Most of the examples that I see are that of classification in this area, none so far of regression. I saw one for classification as follows: from torch_geometric.nn import GCNConv class GCN(torch.nn.Module): def __init__(self, hidden_channels): super(GCN, self).__init__() torch.manual_seed(12345) self.conv1 = GCNConv(dataset.num_features, hidden_channels) self.conv2 = GCNConv(hidden_channels, dataset.num_classes) def forward(self, x, edge_index): x = self.conv1(x, edge_index) x = x.relu() x = F.dropout(x, p=0.5, training=self.training) x = self.conv2(x, edge_index) return x model = GCN(hidden_channels=16) print(model) I am trying to modify it for my task, which basically includes performing a regression on a network with 30 nodes, each having 3 features and the edge has one feature. If anyone could point me to examples to do the same, that would be very helpful.
Add a linear layer, and don't forget to use a regression loss function (e.g. MSE instead of cross-entropy). Note that the in_features of the final linear layer have to match the output size of the last GCNConv: class GCN(torch.nn.Module): def __init__(self, hidden_channels): super(GCN, self).__init__() torch.manual_seed(12345) self.conv1 = GCNConv(dataset.num_features, hidden_channels) self.conv2 = GCNConv(hidden_channels, hidden_channels) self.linear1 = torch.nn.Linear(hidden_channels, 1) def forward(self, x, edge_index): x = self.conv1(x, edge_index) x = x.relu() x = F.dropout(x, p=0.5, training=self.training) x = self.conv2(x, edge_index) x = self.linear1(x) return x
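A minimal training-step sketch to go with it (data.x, data.edge_index and data.y follow torch_geometric's Data naming; that your targets are one real value per node is an assumption here):
import torch

model = GCN(hidden_channels=16)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()                      # regression loss instead of cross-entropy

model.train()
optimizer.zero_grad()
out = model(data.x, data.edge_index)                # shape (num_nodes, 1)
loss = criterion(out.squeeze(-1), data.y.float())   # assumes one real-valued target per node
loss.backward()
optimizer.step()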
https://stackoverflow.com/questions/68202388/
What are the expected values in the input in Pytorch?
I am new to Pytorch, following the tutorials. I want to implement a regresor for a nonlinear function with 4 real inputs and 2 real outputs. I cannot find anywhere what is the supposed range for the inputs and outputs. They should go between -1 and 1? Between 0 and 1? Can it be anything? More details I have written the following simple parameterized model for experimenting: class Net(torch.nn.Module): def __init__(self, n_inputs: int, n_outputs: int, n_hidden_layers: int, n_nodes_per_hidden_layer: int): n_inputs = int(n_inputs) n_outputs = int(n_outputs) n_hidden_layers = int(n_hidden_layers) n_nodes_per_hidden_layer = int(n_nodes_per_hidden_layer) if any([i<=0 for i in [n_inputs,n_outputs,n_hidden_layers,n_nodes_per_hidden_layer]]): raise ValueError(f'All n_inputs, n_outputs, n_hidden_layers and n_nodes_per_hidden_layer must be greater than 0.') super().__init__() self.input_layer = torch.nn.Linear(n_inputs, n_nodes_per_hidden_layer) self.hidden_layers = [torch.nn.Linear(n_nodes_per_hidden_layer, n_nodes_per_hidden_layer) for i in range(n_hidden_layers)] self.output_layer = torch.nn.Linear(n_nodes_per_hidden_layer, n_outputs) def forward(self, x): x *= .1 activation_function = torch.nn.functional.relu x = activation_function(self.input_layer(x)) for idx,layer in enumerate(self.hidden_layers): x = activation_function(layer(x)) x = self.output_layer(x) return x and I instantiate it in this way: dnn = Net( n_inputs = 4, # Defined by the number of observables (charge of each channel now). n_outputs = 2, # x and y. n_hidden_layers = 3, # Choose yourself. n_nodes_per_hidden_layer = 66, # Choose yourself. ) My input x is data that distributes in a weird way from 0 to 1, my outputs are 2 values in the range 1e-2 +- 100e-6. I have tried x -= .5 and different scalings too, but cannot make it work. I don't get any error, just it does not seem to learn what it is supposed to learn. I know that this model should work because I have used it with similar data that distributes in a similar way but the inputs in the range 0-100e-12 using x *= 1e9 and it was performing reasonably well. I don't know why, however.
Data transformation They should go between -1 and 1? Between 0 and 1? Can it be anything? They can be any real valued numbers, but in general we standardize input values using mean and standard deviation (so the result has 0 mean and 1 variance) like this (for two dimensional data that you have, assuming samples are zeroth dimension and features are the first dimension): import torch samples, features = 128, 4 data = torch.randn(samples, features) std, mean = torch.std_mean(data, dim=0, keepdim=True) normalized = (data - mean) / std In general neural networks work best with normalized input with similar ranges. And this transformation could (and should as it will make it easier for nn.Linear layer output) also be applied to your regression target as it's reversible. Other things Use torch.nn.MSELoss for regression Use standard optimizer (like Adam) with default learning rates (you can fine tune it later) Make sure your pipeline works correctly
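To make the "reversible" remark concrete, a short sketch of standardizing the targets and undoing it on the predictions (dnn and normalized refer to the snippets above, the target range just mimics the one you described, and the rest of the names are placeholders):
y = torch.rand(samples, 2) * 200e-6 + (1e-2 - 100e-6)    # fake targets around 1e-2 +- 100e-6
y_std, y_mean = torch.std_mean(y, dim=0, keepdim=True)
y_norm = (y - y_mean) / y_std                            # train the network against y_norm

pred_norm = dnn(normalized)                              # network output in normalized units
pred = pred_norm * y_std + y_mean                        # back to the original physical units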
https://stackoverflow.com/questions/68212002/
Where can I get official PyTorch documentation in pdf form?
I want to learn PyTorch in great detail. I have read all the docs and tutorials on the main site. I learn better from paper. When I print from website pages, I am getting very small letters. This makes the printouts difficult to read them. The packages offer PDF documentation, but I cannot find a similar file for the main PyTorch site. Where can I find a similar resource for official PyTorch documentation?
I don't think there is an official pdf. The pytorch documentation uses sphinx to generate the web version of the documentation. But sphinx can also generate PDFs. So you could download the git repo of pytorch, install sphinx, and then generate the PDF yourself using sphinx. The instructions to build the HTML can be found here, and generating the PDF should be no different.
https://stackoverflow.com/questions/68220613/
Why transformations go into the dataset and not into the NN itself in Pytorch?
I am new to Pytorch and I am now following the tutorial on transforms. I see that the transformations are configured into the dataset object. I am wondering, however, why aren't they configured within the neural network itself. My naive point of view is that the transformations should be in any case the most external layers of the network, in the same way as the eye comes before the brain to transform light into signals for the brain, and you don't modify the world instead to adapt it to the brain. So, is there any technical reason for putting the transformations in the dataset instead of the net? Is it a good/bad practice to put the transformations within my neural network instead? Why?
These are some of the reasons that can explain why one would do this. We would like to use the same NN code for training as well as testing / inference. Typically during inference we don't want to apply any transformation, hence one might want to keep it out of the network. However, you may argue that one can simply use the model.training flag to skip the transformation. Most of the transformations happen on the CPU. Doing the transformations in the dataset makes it easy to use multi-processing and prefetching. The dataset code can prefetch the data, transform it, and keep it ready to be fed into the NN in a separate thread. If instead we do it inside the forward function, the GPU will idle during the transformations (as these happen on the CPU), likely leading to a longer training time.
https://stackoverflow.com/questions/68221863/
nvcc not found but cuda runs fine?
I was trying to run nvcc -V to check cuda version but I got the following error message. Command 'nvcc' not found, but can be installed with: sudo apt install nvidia-cuda-toolkit But gpu acceleration is working fine for training models on cuda. Is there another way to find out cuda compiler tools version. I know nvidia-smi doesn't give the right version. Is there a way to install or configure nvcc. So I don't have to install a whole new toolkit.
Most of the time, nvcc and other CUDA SDK binaries are not in the environment variable PATH. Check the installation path of CUDA; if it is installed under /usr/local/cuda, add its bin folder to the PATH variable in your ~/.bashrc: export CUDA_HOME=/usr/local/cuda export PATH=${CUDA_HOME}/bin:${PATH} export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH You can apply the changes with source ~/.bashrc, or the next time you log in, everything is set automatically.
https://stackoverflow.com/questions/68221962/
RuntimeError: Function AddmmBackward returned an invalid gradient
RuntimeError: Function AddmmBackward returned an invalid gradient at index 2 - got [100, 80] but expected shape compatible with [80, 80] And my NN :
It could be because the shape of a layer in your neural network is not compatible with the shape of the previous layer's output. Try changing your fc1 from nn.Linear(in_features=80, out_features=80) to nn.Linear(in_features=100, out_features=80)
https://stackoverflow.com/questions/68222763/
pytorch cifar10 dataset - cannot get first item
I have selected the CIFAR 10 dataset using the torchvision library: trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transforms.ToTensor()) Then I try to select the first item in the dataset, which as I understand implements the get_item method of the dataset class: trainset[0] and I get File "env\lib\site-packages\torchvision\transforms\functional.py", line 129, in to_tensor np.array(pic, mode_to_nptype.get(pic.mode, np.uint8), copy=True) TypeError: __array__() takes 1 positional argument but 2 were given Any ideas why I would get this error? Python 3.7.9, torch==1.9.0, torchvision==0.10.0
I was hitting this error too: def get_transformations(): return transforms.Compose([transforms.ToTensor()]) ... self.transforms = get_transformations() ... # Load the image + augment img = Image.open(img_path).convert("RGB") img = self.transforms(img) ... Original Traceback (most recent call last): File "c:\2021-mcm-master\src\PyTorch-RCNN\ui-prediction\env\lib\site-packages\torch\utils\data\_utils\worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "c:\2021-mcm-master\src\PyTorch-RCNN\ui-prediction\env\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "c:\2021-mcm-master\src\PyTorch-RCNN\ui-prediction\env\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "c:\2021-mcm-master\src\PyTorch-RCNN\ui-prediction\src\screenshot_dataset.py", line 112, in __getitem__ img = self.transforms(img) File "c:\2021-mcm-master\src\PyTorch-RCNN\ui-prediction\env\lib\site-packages\torchvision\transforms\transforms.py", line 60, in __call__ img = t(img) File "c:\2021-mcm-master\src\PyTorch-RCNN\ui-prediction\env\lib\site-packages\torchvision\transforms\transforms.py", line 97, in __call__ return F.to_tensor(pic) File "c:\2021-mcm-master\src\PyTorch-RCNN\ui-prediction\env\lib\site-packages\torchvision\transforms\functional.py", line 129, in to_tensor np.array(pic, mode_to_nptype.get(pic.mode, np.uint8), copy=True) TypeError: __array__() takes 1 positional argument but 2 were given As @Phil suggested, downgrading Pillow from 8.3.0 to 8.2.0 solved the issue: pip install pillow==8.2.0
https://stackoverflow.com/questions/68223871/
Almost non-existent training accuracy and low test accuracy
I am really new to Machine Learning and I am not so well versed in coding in general. However there is need to look through the customers feedback at our store, that average quite a lot each year, yet we cannot tell % of positive, negative and neutral. Currently I am trying to train a Bert Model to do simple multi labeled sentiment analysis. The input is our store's customers feedback. The customers feedback is not always so clearly defined since customers do tend to tell long and long about their experience and their sentiment is not always so clear. However we managed to get positive, negative and neutral, each set 2247 samples. But when I try to train it the training accuracy is around 0.4% which is super low. Validation score is around 60%. F1-score is around 60% for each of the label. I wonder what can be done to improve this training accuracy. I have been stuck for a while. Please take a look at my codes and help me out with this. I have tried changing learning rate (tried all learning rate Bert suggested and 1e-5),changing Max LEN, changing amount of EPOCH, changing drop out rate (0.1, 0.2, 0.3, 0.4, 0.5), but so far nothing yielded results. #read dataset df = pd.read_csv("data.csv",header=None, names=['content', 'sentiment'], sep='\;', lineterminator='\r',encoding = "ISO-8859-1",engine="python") from sklearn.utils import shuffle df = shuffle(df) df['sentiment'] = df['sentiment'].replace(to_replace = [-1, 0, 1], value = [0, 1, 2]) df.head() #Load pretrained FinBert model and get bert tokenizer from it PRE_TRAINED_MODEL_NAME = 'TurkuNLP/bert-base-finnish-cased-v1' tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME) #Choose sequence Length token_lens = [] for txt in df.content: tokens = tokenizer.encode(txt, max_length=512) token_lens.append(len(tokens)) sns.distplot(token_lens) plt.xlim([0, 256]); plt.xlabel('Token count'); MAX_LEN = 260 #Make a PyTorch dataset class FIDataset(Dataset): def __init__(self, texts, targets, tokenizer, max_len): self.texts = texts self.targets = targets self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.texts) def __getitem__(self, item): text = str(self.texts[item]) target = self.targets[item] encoding = self.tokenizer.encode_plus( text, add_special_tokens=True, max_length=self.max_len, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) return { 'text': text, 'input_ids': encoding['input_ids'].flatten(), 'attention_mask': encoding['attention_mask'].flatten(), 'targets': torch.tensor(target, dtype=torch.long) } #split test and train df_train, df_test = train_test_split( df, test_size=0.1, random_state=RANDOM_SEED ) df_val, df_test = train_test_split( df_test, test_size=0.5, random_state=RANDOM_SEED ) df_train.shape, df_val.shape, df_test.shape #data loader function def create_data_loader(df, tokenizer, max_len, batch_size): ds = FIDataset( texts=df.content.to_numpy(), targets=df.sentiment.to_numpy(), tokenizer=tokenizer, max_len=max_len ) return DataLoader( ds, batch_size=batch_size, num_workers=4 ) #Load data into train, test, val BATCH_SIZE = 16 train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE) val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE) test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE) # Sentiment Classifier based on Bert model just loaded class SentimentClassifier(nn.Module): def __init__(self, n_classes): super(SentimentClassifier, self).__init__() self.bert 
= BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME) self.drop = nn.Dropout(p=0.1) self.out = nn.Linear(self.bert.config.hidden_size, n_classes) def forward(self, input_ids, attention_mask): returned = self.bert( input_ids=input_ids, attention_mask=attention_mask ) pooled_output = returned["pooler_output"] output = self.drop(pooled_output) return self.out(output) #Create a Classifier instance and move to GPU model = SentimentClassifier(3) model = model.to(device) #Optimize with AdamW EPOCHS = 5 optimizer = AdamW(model.parameters(), lr= 2e-5, correct_bias=False) total_steps = len(train_data_loader) * EPOCHS scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=0, num_training_steps=total_steps ) loss_fn = nn.CrossEntropyLoss().to(device) #Train each Epoch function def train_epoch( model, data_loader, loss_fn, optimizer, device, scheduler, n_examples ): model = model.train() losses = [] correct_predictions = 0 for d in data_loader: input_ids = d["input_ids"].to(device) attention_mask = d["attention_mask"].to(device) targets = d["targets"].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = torch.max(outputs, dim=1) loss = loss_fn(outputs, targets) correct_predictions += torch.sum(preds == targets) losses.append(loss.item()) loss.backward() nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) optimizer.step() scheduler.step() optimizer.zero_grad() return correct_predictions.double() / n_examples, np.mean(losses) #Eval model function def eval_model(model, data_loader, loss_fn, device, n_examples): model = model.eval() losses = [] correct_predictions = 0 with torch.no_grad(): torch.cuda.empty_cache() for d in data_loader: input_ids = d["input_ids"].to(device) attention_mask = d["attention_mask"].to(device) targets = d["targets"].to(device) outputs = model( input_ids=input_ids, attention_mask=attention_mask ) _, preds = torch.max(outputs, dim=1) loss = loss_fn(outputs, targets) correct_predictions += torch.sum(preds == targets) losses.append(loss.item()) return correct_predictions.double() / n_examples, np.mean(losses) #training loop through each epochs import torch torch.cuda.empty_cache() history = defaultdict(list) best_accuracy = 0 if __name__ == '__main__': for epoch in range(EPOCHS): print(f'Epoch {epoch + 1}/{EPOCHS}') print('-' * 10) train_acc, train_loss = train_epoch( model, train_data_loader, loss_fn, optimizer, device, scheduler, len(df_train) ) print(f'Train loss {train_loss} accuracy {train_acc}') val_acc, val_loss = eval_model( model, val_data_loader, loss_fn, device, len(df_val) ) print(f'Val loss {val_loss} accuracy {val_acc}') print() history['train_acc'].append(train_acc) history['train_loss'].append(train_loss) history['val_acc'].append(val_acc) history['val_loss'].append(val_loss) if val_acc > best_accuracy: torch.save(model.state_dict(), 'best_model_state.bin') best_accuracy = val_acc -- Edit: I have printed out preds and targets as well as train and val accuracy
Here, in _, preds = torch.max(outputs, dim=1), you probably want argmax, not max? Print out preds and targets to see better what's going on. Edit, after preds and targets were printed out: for epochs 4 and 5, preds matches targets exactly, so the train accuracy should be 1. I think the issue is that the accuracy is divided by n_examples, which is the number of examples in the whole train dataset, while it should be divided by the number of examples actually seen in the epoch.
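As a hedged sketch of that last point, the epoch function could count the examples it actually iterates over instead of dividing by a precomputed dataset size (the names below follow the question's train_epoch; anything else is an assumption):

def train_epoch(model, data_loader, loss_fn, optimizer, device, scheduler):
    model.train()
    losses, correct_predictions, n_seen = [], 0, 0
    for d in data_loader:
        input_ids = d["input_ids"].to(device)
        attention_mask = d["attention_mask"].to(device)
        targets = d["targets"].to(device)
        outputs = model(input_ids=input_ids, attention_mask=attention_mask)
        preds = torch.argmax(outputs, dim=1)   # predicted class index per example
        loss = loss_fn(outputs, targets)
        correct_predictions += (preds == targets).sum().item()
        n_seen += targets.size(0)              # number of examples in this batch
        losses.append(loss.item())
        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
    return correct_predictions / n_seen, np.mean(losses)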
https://stackoverflow.com/questions/68225540/
Making predictions on new images using a CNN in pytorch
I'm new in pytorch, and i have been stuck for a while on this problem. I have trained a CNN for classifying X-ray images. The images can be found in this Kaggle page https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia/ . I managed to get good accuracy both on training and test data, but when i try to make predictions on new images i get the same (wrong class) output for every image. Here's my model in detail. import os import matplotlib.pyplot as plt import numpy as np import torch import glob import torch.nn.functional as F import torch.nn as nn from torchvision.transforms import transforms from torch.utils.data import DataLoader from torch.optim import Adam from torch.autograd import Variable import torchvision import pathlib from google.colab import drive drive.mount('/content/drive') epochs = 20 batch_size = 128 learning_rate = 0.001 #Data Transformation transformer = transforms.Compose([ transforms.Resize((224,224)), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5]) ]) #Load data with DataLoader train_path = '/content/drive/MyDrive/Chest X-ray (Covid-19 & Pneumonia)/Data/train' test_path = '/content/drive/MyDrive/Chest X-ray (Covid-19 & Pneumonia)/Data/test' train_loader = DataLoader(torchvision.datasets.ImageFolder(train_path,transform = transformer), batch_size= batch_size, shuffle= True) test_loader = DataLoader(torchvision.datasets.ImageFolder(test_path,transform = transformer), batch_size= batch_size, shuffle= False) root = pathlib.Path(train_path) classes = sorted([j.name.split('/')[-1] for j in root.iterdir()]) print(classes) train_count = len(glob.glob(train_path+'/**/*.jpg')) + len(glob.glob(train_path+'/**/*.png')) + len(glob.glob(train_path+'/**/*.jpeg')) test_count = len(glob.glob(test_path+'/**/*.jpg')) + len(glob.glob(test_path+'/**/*.png')) + len(glob.glob(test_path+'/**/*.jpeg')) print(train_count,test_count) #Create the CNN class CNN(nn.Module): def __init__(self): super(CNN,self).__init__() '''nout = [(width + 2*padding - kernel_size) / stride] + 1 ''' # [128,3,224,224] self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 12, kernel_size = 5) # [4,12,220,220] self.pool1 = nn.MaxPool2d(2,2) #reduces the images by a factor of 2 # [4,12,110,110] self.conv2 = nn.Conv2d(in_channels = 12, out_channels = 24, kernel_size = 5) # [4,24,106,106] self.pool2 = nn.MaxPool2d(2,2) # [4,24,53,53] which becomes the input of the fully connected layer self.fc1 = nn.Linear(in_features = (24 * 53 * 53), out_features = 120) self.fc2 = nn.Linear(in_features = 120, out_features = 84) self.fc3 = nn.Linear(in_features = 84, out_features = len(classes)) #final layer, output will be the number of classes def forward(self, x): x = self.pool1(F.relu(self.conv1(x))) x = self.pool2(F.relu(self.conv2(x))) x = x.view(-1, 24 * 53 * 53) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x # Training the model model = CNN() loss_function = nn.CrossEntropyLoss() #includes the softmax activation function optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate) n_total_steps = len(train_loader) for epoch in range(epochs): n_correct = 0 n_samples = 0 for i, (images, labels) in enumerate(train_loader): # Forward pass outputs = model(images) _, predicted = torch.max(outputs, 1) n_samples += labels.size(0) n_correct += (predicted == labels).sum().item() loss = loss_function(outputs, labels) # Backpropagation and optimization optimizer.zero_grad() #empty gradients loss.backward() optimizer.step() acc = 100.0 
* n_correct / n_samples print(f'Epoch [{epoch+1}/{epochs}], Step [{i+1}/{n_total_steps}], Accuracy: {round(acc,2)} %, Loss: {loss.item():.4f}') print('Done!!') # Testing the model with torch.no_grad(): n_correct = 0 n_samples = 0 n_class_correct = [0 for i in range(3)] n_class_samples = [0 for i in range(3)] for images, labels in test_loader: outputs = model(images) # max returns (value ,index) _, predicted = torch.max(outputs, 1) n_samples += labels.size(0) n_correct += (predicted == labels).sum().item() acc = 100.0 * n_correct / n_samples print(f'Accuracy of the network: {acc} %') torch.save(model.state_dict(),'/content/drive/MyDrive/Chest X-ray (Covid-19 & Pneumonia)/model.model') For loading the model and trying to make predictions on new images, the code is as follows: checkpoint = torch.load('/content/drive/MyDrive/Chest X-ray (Covid-19 & Pneumonia)/model.model') model = CNN() model.load_state_dict(checkpoint) model.eval() #Data Transformation transformer = transforms.Compose([ transforms.Resize((224,224)), transforms.ToTensor(), transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5]) ]) #Making preidctions on new data from PIL import Image def prediction(img_path,transformer): image = Image.open(img_path).convert('RGB') image_tensor = transformer(image) image_tensor = image_tensor.unsqueeze_(0) #so img is not treated as a batch input_img = Variable(image_tensor) output = model(input_img) #print(output) index = output.data.numpy().argmax() pred = classes[index] return pred pred_path = '/content/drive/MyDrive/Chest X-ray (Covid-19 & Pneumonia)/Test_images/Data/' test_imgs = glob.glob(pred_path+'/*') for i in test_imgs: print(prediction(i,transformer)) I'm guessing the problem must be in the way that i am preprocessing the data, although i cannot find my mistake. Any help will be deeply appreciated, since i have been stuck on this for a while now. p.s. i can share my notebook as well, if it is of any help
Regarding your problem, I have a really good way to debug this to target where the problem most likely will be and so it will be really easy to fix your issue. So, my debugging process would be based on the fact that your CNN performs well on the test set. Firstly set your test loader batch size to 1 temporarily. After that, One thing to do is in your test loop when you calculate the amount correct, you can run the following code: #Your code outputs = model(images) # Really only one image and 1 output. #Altered Code: correct = (predicted == labels).sum().item() # This will be either 1 or 0 since you have only one image per batch # My new code: if correct: # if value is 1 instead of 0 then turn value into a single image with no batch size single_correct_image = images.squeeze(0) # Then convert tensor image into PIL image pil_image = transforms.ToPILImage()(single_correct_image) # Save the pil image to any directory specified in quotes. pil_image = pil_image.save("/content") #Terminate testing process. Ignore Value Error if it says terminating process raise ValueError("terminating process") Now you have an image saved to disk that you know is correct in the test set. The next step would be to open such image and run it to your predict function. Couple of things can happen and thus give info about your situation If your model returns the wrong answer then there is something wrong with the different code you have within the prediction and testing code. One uses a torch.sum and torch.max the other uses np.argmax.Then you can use print statements to debug what is going on there. Perhaps some conversion error or your expectation of the output's format is different. If your code return the right answer then your model is just failing to predict on new images. I suggest running more trial cases with the above process. For additional reference, if you still get very stuck to the point where you feel like you can't solve it, then I suggest using this notebook to guide and give some suggestions on what code to atleast inspect. https://www.kaggle.com/salvation23/xray-cnn-pytorch Sarthak Jain
https://stackoverflow.com/questions/68239580/
How does pytorch.autograd.Function calculate dL_dy?
This code calculates the grad of y = x**2. In this code, dL_dy is [0., 2., 8.]. How is dL_dy calculated? Where did this tensor come from? import torch from torch.autograd import Function class Square(Function): @staticmethod def forward(ctx,input): ctx.save_for_backward(input) return torch.square(input) @staticmethod def backward(ctx,dL_dy): print('dL_dy',dL_dy) print('ctx.saved_tensors',ctx.saved_tensors) x, = ctx.saved_tensors return dL_dy * 2 * x square = Square.apply x = torch.arange(3).to(torch.float64).requires_grad_(True) y = square(x) L = torch.sum(y*y) L.backward() print(x.grad)
Let's say y = f(x) = x**2. Here x = torch.arange(3).to(torch.float64).requires_grad_(True) means x = [0, 1, 2], so we can compute y = x**2 = [0, 1, 4]. When you call L.backward() with L = torch.sum(y*y), autograd first computes the gradient of L with respect to y, which is dL/dy = 2*y, and passes that tensor into your backward() as dL_dy. That's why dL_dy = 2 * [0, 1, 4] = [0., 2., 8.]
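A small worked check (a sketch, not part of the original answer) that reproduces this tensor by asking autograd for dL/dy directly:

import torch

x = torch.arange(3, dtype=torch.float64, requires_grad=True)
y = x ** 2                      # y = [0., 1., 4.]
L = torch.sum(y * y)            # L = sum(y**2)

dL_dy = torch.autograd.grad(L, y)[0]
print(dL_dy)                    # tensor([0., 2., 8.], dtype=torch.float64)

# chain rule: dL/dx = dL/dy * dy/dx = [0., 2., 8.] * 2 * x = [0., 4., 32.],
# which matches the x.grad printed by the question's code.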
https://stackoverflow.com/questions/68241041/
Error in creating an offline PDF documentation for PyTorch
I wanted to make an offline PDF on my system for PyTorch documentation. After reading from several resources #1, #2, #3 git clone https://github.com/pytorch/pytorch cd pytorch/docs/ make latexpdf First two commands are working fine. Third command leads to the following error Traceback (most recent call last): File "source/scripts/build_activation_images.py", line 70, in <module> function = torch.nn.modules.activation.__dict__[function_name]() KeyError: 'SiLU' How to overcome this error and make a PDF document of PyTorch? 1.4.0 is the version of PyTorch in my system print(torch.__version__) 1.4.0 3.8.3 is the version of Python in my system python -V Python 3.8.3
The PyTorch version installed in your machine (1.4.0) is older than the one you cloned (most recent). Two ways to fix it: Checkout to the version you have installed (if you want the doc of 1.4 version): git clone https://github.com/pytorch/pytorch # move back to the 1.4 release, which you have installed in your machine cd pytorch git checkout release/1.4 cd docs make latexpdf Upgrade to the most-recent PyTorch version (if you want the most recent doc): # upgrade PyTorch to the nightly release (change it accordingly) python -m pip install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html git clone https://github.com/pytorch/pytorch cd pytorch/docs/ make latexpdf
https://stackoverflow.com/questions/68244269/
RuntimeError: expected scalar type Float but found Double (LSTM classifier)
I'm training my LSTM classifier. epoch_num = 30 train_log = [] test_log = [] set_seed(111) for epoch in range(1, epoch_num+1): running_loss = 0 train_loss = [] lstm_classifier.train() for (inputs, labels) in tqdm(train_loader, desc='Training epoch ' + str(epoch), leave=False): inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() outputs = lstm_classifier(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() train_loss.append(loss.item()) train_log.append(np.mean(train_loss)) running_loss = 0 test_loss = [] lstm_classifier.eval() with torch.no_grad(): for (inputs, labels) in tqdm(test_loader, desc='Test', leave=False): inputs, labels = inputs.to(device), labels.to(device) outputs = lstm_classifier(inputs) loss = criterion(outputs, labels) test_loss.append(loss.item()) test_log.append(np.mean(test_loss)) plt.plot(range(1, epoch+1), train_log, color='C0') plt.plot(range(1, epoch+1), test_log, color='C1') display.clear_output(wait=True) display.display(plt.gcf()) error is: RuntimeError Traceback (most recent call last) in () 23 print((labels.dtype)) 24 print(outputs[:,0].dtype) ---> 25 loss = criterion(outputs, labels) 26 loss.backward() 27 optimizer.step() 2 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2822 if size_average is not None or reduce is not None: 2823 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2824 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) 2825 2826 RuntimeError: expected scalar type Float but found Double How to fix it?
RuntimeError: expected scalar type Float but found Double The error at the line loss = criterion(outputs, labels) is quite clear in that it requires your datatype to be float rather than double, but it doesn't explicitly say whether outputs or labels is causing this. My guess is it's because of labels. Try converting it to float by doing labels.float()
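A minimal sketch of how to check and apply that fix (inputs, labels, lstm_classifier and criterion are the objects from the question's loop; everything else is an assumption):

print(inputs.dtype, labels.dtype)   # look for torch.float64 here
inputs = inputs.float()             # float64 -> float32; common when the data comes from numpy
labels = labels.float()             # the conversion suggested above
# (if criterion is nn.CrossEntropyLoss and labels hold class indices, use labels.long() instead)
outputs = lstm_classifier(inputs)
loss = criterion(outputs, labels)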
https://stackoverflow.com/questions/68250903/
Easy way to convert a tensor shape in PyTorch
Input I have a torch tensor as follows. The shape of this input_tensor is torch.Size([4,4]) input_tensor = tensor([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) I'm going to create a tensor that stacks up the blocks that come out of the above input_tensor by sliding a (2,2) window over it. output My desired output is as follows. The shape of this output_tensor is torch.Size([8,2]) output = tensor([[ 0, 1], [ 4, 5], [ 2, 3], [ 6, 7], [ 8, 9], [12, 13], [10, 11], [14, 15]]) My code is as follows. x = torch.chunk(input_tensor, chunks=2, dim=0) x = list(x) for i, t in enumerate(x): x[i] = torch.cat(torch.chunk(t, chunks=2 ,dim=1)) output_tensor = torch.cat(x) Is there a simpler or easier way to get the result I want?
You can use torch.split() together with torch.cat() as follows: output_tensor = torch.cat(torch.split(input_tensor, 2, dim=1)) The output will be: output = tensor([[ 0, 1], [ 4, 5], [ 8, 9], [12, 13], [ 2, 3], [ 6, 7], [10, 11], [14, 15]]) (note the row order differs slightly from the desired output in the question: the two column blocks are stacked whole rather than interleaved window by window).
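If the exact window order from the question is needed, here is a sketch of an alternative that uses unfold to slide a (2, 2) window over the (4, 4) input:

import torch

input_tensor = torch.arange(16).reshape(4, 4)
output = input_tensor.unfold(0, 2, 2).unfold(1, 2, 2).reshape(-1, 2)
# tensor([[ 0,  1],
#         [ 4,  5],
#         [ 2,  3],
#         [ 6,  7],
#         [ 8,  9],
#         [12, 13],
#         [10, 11],
#         [14, 15]])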
https://stackoverflow.com/questions/68254939/
About pytorch tensor calculation
I have a question about a torch calculation: if I only want to subtract from the elements on the diagonal of a matrix, without changing the elements in the remaining positions, is there any way to achieve that?
One way to do this is by getting the diagonal, doing the required operation on its elements and writing the result back over the original diagonal. Example code: x = torch.rand(3, 3) #get the original diagonal and, for example, subtract 3 replaced_diag = x.diagonal() - 3 #replace the original diagonal x.diagonal().copy_(replaced_diag) For reference look at this: Replace diagonal elements with vector in PyTorch
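Another hedged option (a sketch, not from the original answer) is to build the change as a diagonal matrix and subtract it, which leaves every off-diagonal entry untouched:

import torch

x = torch.rand(3, 3)
x = x - 3 * torch.eye(3)          # subtract a constant from the diagonal only

v = torch.tensor([1., 2., 3.])
x = x - torch.diag(v)             # subtract v[i] from x[i, i]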
https://stackoverflow.com/questions/68255093/
Pytorch - RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward
I was trying & experimenting something with PyTorch, where I created my own inputs & targets. I fed these inputs to the model (which is a basic ANN with 2 hidden layers, nothing wrong with that). But for some reason I am not being able to calculate the CrossEntropyLoss(). I am not being able to figure out why. I know some of the other questions on StakcOverflow have the same title of mine or have a similar problem. I have gone through that but nothing worked out for me. Alot of people had an issue with the dataset, which does not seem to be the problem with me. import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import numpy as np import pandas as pd import matplotlib.pyplot as plt class Net(nn.Module): def __init__(self) -> None: super(Net, self).__init__() self.layer1 = nn.Linear(2, 10) self.layer2 = nn.Linear(10, 1) def forward(self, x): x = F.relu(self.layer1(x)) x = self.layer2(x) return x device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = Net().to(device=device) loss_fn = nn.CrossEntropyLoss() learning_rate = 1e-3 epochs = 20 optimizer = optim.Adam(model.parameters(), lr=learning_rate) inputs = torch.Tensor([ [0,0], [0,1], [1,0], [1,1] ], ).to(device=device) targets = torch.Tensor([ 0, 1, 1, 0 ]).to(device=device) model.train() for epoch in range(epochs): pred_output = model(inputs) print(pred_output.dtype) print(targets.dtype) loss = loss_fn(pred_output, targets) optimizer.zero_grad() loss.backward() optimizer.step() print() break The error that I see is, torch.float32 torch.float32 Traceback (most recent call last): File ".\main.py", line 57, in <module> loss = loss_fn(pred_output, targets) File "C:\Users\user\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\user\anaconda3\lib\site-packages\torch\nn\modules\loss.py", line 1047, in forward return F.cross_entropy(input, target, weight=self.weight, File "C:\Users\user\anaconda3\lib\site-packages\torch\nn\functional.py", line 2693, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "C:\Users\user\anaconda3\lib\site-packages\torch\nn\functional.py", line 2388, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss_forward
I could replicate your error using this code. import torch.nn as nn loss = nn.CrossEntropyLoss() input = torch.randn(3, 5, requires_grad=True) target = torch.tensor([1., 2., 3.]) loss(input, target) Error: RuntimeError: expected scalar type Long but found Float Changing the datatype of target to target = torch.tensor([1., 2., 3.], dtype=torch.long) made everything work fine. I believe it is the target variable that requires the long datatype, because keeping the input explicitly float also only works once the target is long: #this will also work input = torch.randn(3, 5, requires_grad=True, dtype=torch.float) target = torch.tensor([1., 2., 3.], dtype=torch.long) loss(input, target) Note the documentation also uses the torch.long dtype in its example code. https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html #Edit 1 The reason it's not working is the way you defined the input/target tensors in your code. Use torch.tensor with a small 't' instead of torch.Tensor. For a detailed discussion see What is the difference between torch.tensor and torch.Tensor?. #this will work. Also notice the decimals, otherwise the values will be interpreted differently by pytorch inputs = torch.tensor([[0.,0.],[0.,1.],[1.,0.],[1.,1.]]).to(device=device) targets = torch.tensor([0.,1.,1.,0.], dtype=torch.long).to(device=device)
https://stackoverflow.com/questions/68256087/
Pytorch: gradient computation fails when in-place operation follows certain functions
Consider the following piece of code: import torch from torch import nn a = torch.tensor([1.], requires_grad=True) b = nn.Tanh()(a) # b = nn.Linear(1,1)(a) b *= 1 # b = b * 1 b.sum().backward() Running the code results in RuntimeError: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1]], which is output 0 of TanhBackward, is at version 1; expected version 0 instead. However, if I change Tanh() to Linear(1,1) or change b*=1 to b=b*1 (as in the commented lines), the code will run successfully and get the correct gradient. Why is that? My environment: Python 3.8.5 (anaconda) on Windows Pytorch 1.8.0, running on CPU
I just found some text on the official PyTorch site: In-place correctness checks Every tensor keeps a version counter, that is incremented every time it is marked dirty in any operation. When a Function saves any tensors for backward, a version counter of their containing Tensor is saved as well. Once you access self.saved_tensors it is checked, and if it is greater than the saved value an error is raised. This ensures that if you’re using in-place functions and not seeing any errors, you can be sure that the computed gradients are correct. Source: https://pytorch.org/docs/stable/notes/autograd.html#in-place-correctness-checks According to this, an in-place operation may or may not break the gradient computation, depending on the actual situation. Therefore, as long as the code contains in-place operations, it should not be a surprise when (as observed in the question) changing something in the context magically fixes the gradient computation. Also according to the same notes, in-place operations may disrupt the gradient calculation even if they do not break it, in which case the speed of the gradient calculation may be decreased. So it seems like a good strategy to avoid any in-place operation.
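A small illustration of that version counter (a sketch; _version is an internal attribute, so treat this as informational only):

import torch
from torch import nn

a = torch.tensor([1.], requires_grad=True)
b = nn.Tanh()(a)
print(b._version)   # 0 -- the version recorded when TanhBackward saved b
b *= 1              # the in-place multiply bumps the counter
print(b._version)   # 1 -- no longer matches the saved version, hence the error

Why Tanh breaks but Linear does not: TanhBackward saves its output, since d tanh(x)/dx = 1 - tanh(x)**2 is computed from the output, so mutating b in place invalidates a saved tensor; a Linear layer's backward only needs its input and weight, so modifying its output in place is harmless.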
https://stackoverflow.com/questions/68256550/
Installing pytorch with conda
I've just started dabbling in AI in the past few weeks. I've tried installing pytorch with conda and it all seems to work, but then I get the error: ImportError: /home/lp35791/.local/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so: cannot read file data I've been trawling through the web but can't seem to find the answer to this error. I've uninstalled and reinstalled anaconda, and when I made a new environment and installed numpy along with pytorch, numpy imported successfully but pytorch did not. I'm just wondering what the problem is. Any help would be greatly appreciated!
You seem to have installed PyTorch in your base environment, so you cannot use it from your other "pytorch" env. Either: directly create a new environment (let's call it pytorch_env) with PyTorch: conda create -n pytorch_env -c pytorch pytorch torchvision or switch to the pytorch environment you have already created with: source activate pytorch_env and then install PyTorch in it: conda install -c pytorch pytorch torchvision
https://stackoverflow.com/questions/68267305/
Poetry hangs when installing torch
I'm trying to add pytorch_pretrained_bert package, but it hangs on downloading torch. I've been waiting for almost 30 mins already. I'm running this command: poetry add pytorch_pretrained_bert -vvv and the output is as such: PS C:\Users\aaaa\Desktop\AI\nexus\ocr> poetry add pytorch_pretrained_bert -vvv Using virtualenv: C:\Users\aaaa\AppData\Local\pypoetry\Cache\virtualenvs\ocr-MxNkBiZL-py3.8 PyPI: 10 packages found for pytorch-pretrained-bert * Using version ^0.6.2 for pytorch-pretrained-bert Updating dependencies Resolving dependencies... 1: fact: ocr is 0.1.0 1: derived: ocr 1: fact: ocr depends on click (^8.0.1) 1: fact: ocr depends on pytesseract (^0.3.8) 1: fact: ocr depends on opencv-contrib-python (^4.5.2) 1: fact: ocr depends on numpy (^1.21.0) 1: fact: ocr depends on pdf2image (^1.16.0) 1: fact: ocr depends on poppler-utils (^0.1.0) 1: fact: ocr depends on deskew (^0.10.30) 1: fact: ocr depends on pytorch-pretrained-bert (^0.6.2) 1: fact: ocr depends on pytest (^5.2) 1: fact: ocr depends on pytest (^5.2) 1: selecting ocr (0.1.0) 1: derived: pytest (>=5.2,<6.0) 1: derived: pytorch-pretrained-bert (>=0.6.2,<0.7.0) 1: derived: deskew (>=0.10.30,<0.11.0) 1: derived: poppler-utils (>=0.1.0,<0.2.0) 1: derived: pdf2image (>=1.16.0,<2.0.0) 1: derived: numpy (>=1.21.0,<2.0.0) 1: derived: opencv-contrib-python (>=4.5.2,<5.0.0) 1: derived: pytesseract (>=0.3.8,<0.4.0) 1: derived: click (>=8.0.1,<9.0.0) 1: fact: pytest (5.4.3) depends on py (>=1.5.0) 1: fact: pytest (5.4.3) depends on packaging (*) 1: fact: pytest (5.4.3) depends on attrs (>=17.4.0) 1: fact: pytest (5.4.3) depends on more-itertools (>=4.0.0) 1: fact: pytest (5.4.3) depends on pluggy (>=0.12,<1.0) 1: fact: pytest (5.4.3) depends on wcwidth (*) 1: fact: pytest (5.4.3) depends on atomicwrites (>=1.0) 1: fact: pytest (5.4.3) depends on colorama (*) 1: selecting pytest (5.4.3) 1: derived: colorama 1: derived: atomicwrites (>=1.0) 1: derived: wcwidth 1: derived: pluggy (>=0.12,<1.0) 1: derived: more-itertools (>=4.0.0) 1: derived: attrs (>=17.4.0) 1: derived: packaging 1: derived: py (>=1.5.0) PyPI: 1 packages found for pytorch-pretrained-bert >=0.6.2,<0.7.0 1: fact: pytorch-pretrained-bert (0.6.2) depends on torch (>=0.4.1) 1: fact: pytorch-pretrained-bert (0.6.2) depends on numpy (*) 1: fact: pytorch-pretrained-bert (0.6.2) depends on boto3 (*) 1: fact: pytorch-pretrained-bert (0.6.2) depends on requests (*) 1: fact: pytorch-pretrained-bert (0.6.2) depends on tqdm (*) 1: fact: pytorch-pretrained-bert (0.6.2) depends on regex (*) 1: selecting pytorch-pretrained-bert (0.6.2) 1: derived: regex 1: derived: tqdm 1: derived: requests 1: derived: boto3 1: derived: torch (>=0.4.1) 1: fact: deskew (0.10.30) depends on numpy (*) 1: fact: deskew (0.10.30) depends on scikit-image (!=0.15.0) 1: selecting deskew (0.10.30) 1: derived: scikit-image (!=0.15.0) 1: fact: poppler-utils (0.1.0) depends on Click (>=7.0) 1: selecting poppler-utils (0.1.0) 1: fact: pdf2image (1.16.0) depends on pillow (*) 1: selecting pdf2image (1.16.0) 1: derived: pillow 1: selecting numpy (1.21.0) 1: fact: opencv-contrib-python (4.5.2.54) depends on numpy (>=1.13.3) 1: selecting opencv-contrib-python (4.5.2.54) 1: fact: pytesseract (0.3.8) depends on Pillow (*) 1: selecting pytesseract (0.3.8) 1: fact: click (8.0.1) depends on colorama (*) 1: selecting click (8.0.1) 1: selecting wcwidth (0.2.5) 1: selecting pluggy (0.13.1) 1: selecting more-itertools (8.8.0) 1: selecting attrs (21.2.0) 1: fact: packaging (21.0) depends on pyparsing (>=2.0.2) 1: selecting packaging 
(21.0) 1: derived: pyparsing (>=2.0.2) 1: selecting py (1.10.0) 1: selecting regex (2021.7.6) 1: fact: tqdm (4.61.2) depends on colorama (*) 1: selecting tqdm (4.61.2) 1: fact: requests (2.25.1) depends on chardet (>=3.0.2,<5) 1: fact: requests (2.25.1) depends on idna (>=2.5,<3) 1: fact: requests (2.25.1) depends on urllib3 (>=1.21.1,<1.27) 1: fact: requests (2.25.1) depends on certifi (>=2017.4.17) 1: selecting requests (2.25.1) 1: derived: certifi (>=2017.4.17) 1: derived: urllib3 (>=1.21.1,<1.27) 1: derived: idna (>=2.5,<3) 1: derived: chardet (>=3.0.2,<5) 1: fact: boto3 (1.17.105) depends on botocore (>=1.20.105,<1.21.0) 1: fact: boto3 (1.17.105) depends on jmespath (>=0.7.1,<1.0.0) 1: fact: boto3 (1.17.105) depends on s3transfer (>=0.4.0,<0.5.0) 1: selecting boto3 (1.17.105) 1: derived: s3transfer (>=0.4.0,<0.5.0) 1: derived: jmespath (>=0.7.1,<1.0.0) 1: derived: botocore (>=1.20.105,<1.21.0) 1: fact: torch (1.9.0) depends on typing-extensions (*) 1: selecting torch (1.9.0) 1: derived: typing-extensions 1: fact: scikit-image (0.18.2) depends on numpy (>=1.16.5) 1: fact: scikit-image (0.18.2) depends on scipy (>=1.0.1) 1: fact: scikit-image (0.18.2) depends on matplotlib (>=2.0.0,<3.0.0 || >3.0.0) 1: fact: scikit-image (0.18.2) depends on networkx (>=2.0) 1: fact: scikit-image (0.18.2) depends on pillow (>=4.3.0,<7.1.0 || >7.1.0,<7.1.1 || >7.1.1) 1: fact: scikit-image (0.18.2) depends on imageio (>=2.3.0) 1: fact: scikit-image (0.18.2) depends on tifffile (>=2019.7.26) 1: fact: scikit-image (0.18.2) depends on PyWavelets (>=1.1.1) 1: selecting scikit-image (0.18.2) 1: derived: PyWavelets (>=1.1.1) 1: derived: tifffile (>=2019.7.26) 1: derived: imageio (>=2.3.0) 1: derived: pillow (>=4.3.0,!=7.1.0,!=7.1.1) 1: derived: networkx (>=2.0) 1: derived: matplotlib (>=2.0.0,!=3.0.0) 1: derived: scipy (>=1.0.1) 1: selecting pillow (8.3.0) 1: selecting pyparsing (2.4.7) 1: selecting certifi (2021.5.30) 1: selecting urllib3 (1.26.6) 1: selecting idna (2.10) 1: selecting chardet (4.0.0) 1: fact: s3transfer (0.4.2) depends on botocore (>=1.12.36,<2.0a.0) 1: selecting s3transfer (0.4.2) 1: selecting jmespath (0.10.0) 1: fact: botocore (1.20.105) depends on jmespath (>=0.7.1,<1.0.0) 1: fact: botocore (1.20.105) depends on python-dateutil (>=2.1,<3.0.0) 1: fact: botocore (1.20.105) depends on urllib3 (>=1.25.4,<1.27) 1: selecting botocore (1.20.105) 1: derived: python-dateutil (>=2.1,<3.0.0) 1: selecting typing-extensions (3.10.0.0) 1: fact: pywavelets (1.1.1) depends on numpy (>=1.13.3) 1: selecting pywavelets (1.1.1) 1: fact: tifffile (2021.7.2) depends on numpy (>=1.15.1) 1: selecting tifffile (2021.7.2) 1: fact: imageio (2.9.0) depends on numpy (*) 1: fact: imageio (2.9.0) depends on pillow (*) 1: selecting imageio (2.9.0) 1: fact: networkx (2.5.1) depends on decorator (>=4.3,<5) 1: selecting networkx (2.5.1) 1: derived: decorator (>=4.3,<5) 1: fact: matplotlib (3.4.2) depends on cycler (>=0.10) 1: fact: matplotlib (3.4.2) depends on kiwisolver (>=1.0.1) 1: fact: matplotlib (3.4.2) depends on numpy (>=1.16) 1: fact: matplotlib (3.4.2) depends on pillow (>=6.2.0) 1: fact: matplotlib (3.4.2) depends on pyparsing (>=2.2.1) 1: fact: matplotlib (3.4.2) depends on python-dateutil (>=2.7) 1: selecting matplotlib (3.4.2) 1: derived: python-dateutil (>=2.7) 1: derived: kiwisolver (>=1.0.1) 1: derived: cycler (>=0.10) 1: fact: scipy (1.6.1) depends on numpy (>=1.16.5) 1: selecting scipy (1.6.1) 1: fact: python-dateutil (2.8.1) depends on six (>=1.5) 1: selecting python-dateutil (2.8.1) 1: derived: six 
(>=1.5) 1: selecting decorator (4.4.2) 1: selecting kiwisolver (1.3.1) 1: fact: cycler (0.10.0) depends on six (*) 1: selecting cycler (0.10.0) 1: selecting six (1.16.0) 1: selecting colorama (0.4.4) 1: selecting atomicwrites (1.4.0) 1: Version solving took 0.246 seconds. 1: Tried 1 solutions. Finding the necessary packages for the current system Package operations: 2 installs, 0 updates, 0 removals, 42 skipped • Installing six (1.16.0): Skipped for the following reason: Already installed • Installing jmespath (0.10.0): Skipped for the following reason: Already installed • Installing python-dateutil (2.8.1): Skipped for the following reason: Already installed • Installing urllib3 (1.26.6): Skipped for the following reason: Already installed • Installing botocore (1.20.105): Skipped for the following reason: Already installed • Installing cycler (0.10.0): Skipped for the following reason: Already installed • Installing kiwisolver (1.3.1): Skipped for the following reason: Already installed • Installing pillow (8.3.0): Skipped for the following reason: Already installed • Installing decorator (4.4.2): Skipped for the following reason: Already installed • Installing numpy (1.21.0): Skipped for the following reason: Already installed • Installing pyparsing (2.4.7): Skipped for the following reason: Already installed • Installing certifi (2021.5.30): Skipped for the following reason: Already installed • Installing chardet (4.0.0): Skipped for the following reason: Already installed • Installing imageio (2.9.0): Skipped for the following reason: Already installed • Installing networkx (2.5.1): Skipped for the following reason: Already installed • Installing colorama (0.4.4): Skipped for the following reason: Already installed • Installing scipy (1.6.1): Skipped for the following reason: Already installed • Installing pywavelets (1.1.1): Skipped for the following reason: Already installed • Installing matplotlib (3.4.2): Skipped for the following reason: Already installed • Installing tifffile (2021.7.2): Skipped for the following reason: Already installed • Installing typing-extensions (3.10.0.0): Skipped for the following reason: Already installed • Installing s3transfer (0.4.2): Skipped for the following reason: Already installed • Installing idna (2.10): Skipped for the following reason: Already installed • Installing atomicwrites (1.4.0): Skipped for the following reason: Already installed • Installing attrs (21.2.0): Skipped for the following reason: Already installed • Installing click (8.0.1): Skipped for the following reason: Already installed • Installing packaging (21.0): Skipped for the following reason: Already installed • Installing more-itertools (8.8.0): Skipped for the following reason: Already installed • Installing boto3 (1.17.105): Skipped for the following reason: Already installed • Installing py (1.10.0): Skipped for the following reason: Already installed • Installing requests (2.25.1): Skipped for the following reason: Already installed • Installing scikit-image (0.18.2): Skipped for the following reason: Already installed • Installing wcwidth (0.2.5): Skipped for the following reason: Already installed • Installing tqdm (4.61.2): Skipped for the following reason: Already installed • Installing torch (1.9.0) • Installing pluggy (0.13.1): Skipped for the following reason: Already installed • Installing regex (2021.7.6): Skipped for the following reason: Already installed As you can see installation had not finished. What could be a reason? 
I'm using Windows 10 operating system and running the command in Windows PowerShell.
TL;DR: This is easily verified by either a green colored version number and dot as shown in this screenshot or running poetry run <package> --version and having successful output. Short Answer When I've installed packages with Poetry, I've seen some packages, like pylint, only display the line Installing pylint (2.11.1) and appear to never complete. But if I run poetry run pylint --version it'll print out the installed version within the poetry venv: pylint 2.11.1 astroid 2.8.3 Python 3.9.7 (default, Sep 3 2021, ...) Longer Answer Not sure if this is a recent change with Poetry, but currently using the preview version of Poetry, 1.2.0a2, the output after running poetry add <package> -vvv looks something like this to indicate separately resolving dependencies and install progress: $ poetry add pylint -vvv Loading configuration file ... Using virtualenv: ... PyPI: 1 packages found for pylint Updating dependencies Resolving dependencies... 1: fact: pylint (2.11.1) depends on platformdirs (>=2.2.0) 1: fact: pylint (2.11.1) depends on astroid (>=2.8.0,<2.9) 1: fact: pylint (2.11.1) depends on isort (>=4.2.5,<6) 1: fact: pylint (2.11.1) depends on mccabe (>=0.6,<0.7) 1: fact: pylint (2.11.1) depends on toml (>=0.7.1) 1: fact: pylint (2.11.1) depends on typing-extensions (>=3.10.0) 1: fact: pylint (2.11.1) depends on colorama (*) 1: selecting pylint (2.11.1) 1: derived: typing-extensions (>=3.10.0) 1: derived: toml (>=0.7.1) 1: derived: mccabe (>=0.6,<0.7) 1: derived: isort (>=4.2.5,<6) 1: derived: astroid (>=2.8.0,<2.9) 1: derived: platformdirs (>=2.2.0) PyPI: 1 packages found for pylint ... Writing lock file Finding the necessary packages for the current system Package operations: 6 installs, 0 updates, 0 removals, 0 skipped • Installing pylint (2.11.1): Pending... • Installing pytest (2.11.1): Installing... • Installing pytest (2.11.1) # dot and version will be green to indicate package was installed Note that the output will initially start as Pending... and may update the line in place to Downloading X% and then Installing... or output on new lines. So if you see a line by itself, it is possible the package has been installed. This is easily verified by either a green colored version number and dot as shown in this screenshot or running poetry run <package> --version and having successful output. Hopefully, this extra output will clarify the install state of the package(s) you were installing.
https://stackoverflow.com/questions/68270223/
Conv2D padding in TensorFlow and PyTorch
I am trying to convert a TensorFlow model to PyTorch but am having trouble with padding. My code for the relevant platforms is as follows: TensorFlow conv1 = tf.layers.conv2d( inputs=input_layer, filters=32, kernel_size=[5, 5], padding="same", activation=tf.nn.relu, name = "conv1") PyTorch conv1 = nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2) I have a few questions: Are the above codes equivalent? How much padding is added on the left/right/top/bottom if we use same padding in TensorFlow? How much padding is added on the left/right/top/bottom if we use padding=2 in PyTorch? If the above two code snippets are not equivalent, then how can we make the same conv layer? Thanks in advance.
To answer your questions: The reason why PyTorch doesn't have padding = 'same' is, quite simply put, its dynamic computation graph, in comparison to TensorFlow's static graph. The two codes are not equivalent as different padding is used. 'Same' padding tries to pad evenly on the left and right, but if the number of columns to be added is odd, it will then add the extra column to the right. 'Padding = 2' in PyTorch applies 2 implicit paddings on either side. PyTorch 1.9 has added padding = 'same' for un-strided, i.e. stride = 1, convolutions, which will work for your use case. But for stride > 1, padding needs to be added manually. Here is a good implementation to perform 'same' padding:- https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/layers/padding.py#L28
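For the 5x5 kernel in the question, a hedged sketch of both framings (the 64x64 input is just a toy size for illustration; the string form needs PyTorch >= 1.9 and stride 1):

import torch
import torch.nn as nn

x = torch.randn(1, 1, 64, 64)

# manual 'same' padding for stride 1: pad = (kernel_size - 1) // 2 = 2
conv_manual = nn.Conv2d(1, 32, kernel_size=5, stride=1, padding=2)

# string form, stride 1 only
conv_same = nn.Conv2d(1, 32, kernel_size=5, stride=1, padding='same')

print(conv_manual(x).shape)   # torch.Size([1, 32, 64, 64])
print(conv_same(x).shape)     # torch.Size([1, 32, 64, 64])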
https://stackoverflow.com/questions/68271586/
Why does torch.scatter requires a smaller shape for indices than values?
A similar question was already asked here, but I think the solution is not suited for my case. I just wonder why it is not possible to do a torch.scatter operation, where my index tensor is bigger than my value tensor. In my case I have duplicate indices, e.g. the following value tensor a and the index tensor idx: a = torch.tensor([[0, 1, 0, 0], [0, 0, 1, 0]]) idx = torch.tensor([[1, 1, 2, 3, 3], [0, 0, 1, 2, 2]]) a.scatter(-1, idx, 1) returns: RuntimeError: Expected index [2, 5] to be smaller than self [2, 4] apart from dimension 1 and to be smaller size than src [2, 4] Is there another way to achieve this?
Not a solution, but a workaround: a = torch.tensor([[0, 1, 0, 0], [0, 0, 1, 0]]) idx = torch.tensor([[1, 1, 2, 3, 3], [0, 0, 1, 2, 2]]) rows = torch.arange(0, a.size(0))[:,None] n_col = idx.size(1) a[rows.repeat(1, n_col), idx] = 1 rows.repeat(1, n_col) pairs each column index in idx with its corresponding row index, so every (row, column) location listed in idx gets set to 1.
https://stackoverflow.com/questions/68274722/
ValueError: no gopen handler defined
I am new to using the webdataset library with PyTorch. I have created .tar files of a sample dataset present locally on my system using webdataset.TarWriter(). Creating the .tar files seems to be successful, as I could extract them separately on Windows and verify the same dataset files. Now, I create train_dataset = wds.Dataset(url) where url is the local file path of the .tar files. After this, I perform the following operations: train_loader = torch.utils.data.DataLoader(train_dataset, num_workers=0, batch_size=10) sample = next(iter(train_loader)) print(sample) This results in an error like this The same code works fine if I use a web URL, for example "http://storage.googleapis.com/nvdata-openimages/openimages-train-000000.tar" mentioned in the webdataset documentation: https://reposhub.com/python/deep-learning/tmbdev-webdataset.html I couldn't understand the error so far. Any idea on how to solve this problem?
I have had the same error since yesterday, I finally found the culprit. WebDataset/tarIterators.py makes use of WebDataset/gopen.py. In gopen.py urllib.parse.urlparse is called to parse the url to be opened, in your case the url is D:/PhD/.... gopen_schemes = dict( __default__=gopen_error, pipe=gopen_pipe, http=gopen_curl, https=gopen_curl, sftp=gopen_curl, ftps=gopen_curl, scp=gopen_curl) def gopen(url, mode="rb", bufsize=8192, **kw): """Open the URL. This uses the `gopen_schemes` dispatch table to dispatch based on scheme. Support for the following schemes is built-in: pipe, file, http, https, sftp, ftps, scp. When no scheme is given the url is treated as a file. You can use the OPEN_VERBOSE argument to get info about files being opened. :param url: the source URL :param mode: the mode ("rb", "r") :param bufsize: the buffer size """ global fallback_gopen verbose = int(os.environ.get("GOPEN_VERBOSE", 0)) if verbose: print("GOPEN", url, info, file=sys.stderr) assert mode in ["rb", "wb"], mode if url == "-": if mode == "rb": return sys.stdin.buffer elif mode == "wb": return sys.stdout.buffer else: raise ValueError(f"unknown mode {mode}") pr = urlparse(url) if pr.scheme == "": bufsize = int(os.environ.get("GOPEN_BUFFER", -1)) return open(url, mode, buffering=bufsize) if pr.scheme == "file": bufsize = int(os.environ.get("GOPEN_BUFFER", -1)) return open(pr.path, mode, buffering=bufsize) handler = gopen_schemes["__default__"] handler = gopen_schemes.get(pr.scheme, handler) return handler(url, mode, bufsize, **kw) As you can see in the dictionary the __default__ function is gopen_error. This is the function returning the error you are seeing. pr = urlparse(url) on your url will generate an urlparse where the scheme (pr.scheme) is 'd' because your disk is named D. However, it should be 'file' for the function to work as intended. Since it is not equal to 'file' or any of the other schemes in the dictionary (http, https, sftp, etc), the default function will be used, which returns the error. I circumvented this issue by adding d=gopen_file to the gopen_schemes dictionary. I hope this helps you further temporarily. I will address this issue on the WebDataset GitHub page as well and keep this page updated if I get a more practical update. Good luck!
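A quick way to see the root cause described above (a sketch; the D:/... path is just a stand-in for the asker's local path, and whether the file: form works end to end depends on the installed webdataset version):

from urllib.parse import urlparse

print(urlparse("D:/PhD/shards/train-000000.tar").scheme)        # 'd'  -> falls through to the default (error) handler
print(urlparse("file:D:/PhD/shards/train-000000.tar").scheme)   # 'file' -> handled by gopen's file branch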
https://stackoverflow.com/questions/68299665/
Convert tensor of integers to binary tensor with 1 only at that index
Is there a pain-free way to convert a tensor of integers to a binary tensor with a 1 only at each integer's index in pytorch? e.g. tensor([[1,3,2,6]]) would become tensor([[0,1,0,0,0,0,0], [0,0,0,1,0,0,0], [0,0,1,0,0,0,0], [0,0,0,0,0,0,1]])
t = tensor([[1,3,2,6]]) rows = t.shape[1] cols = t.max() + 1 output = torch.zeros(rows, cols) # initializes zeros array of desired dimensions output[list(range(rows)), t.tolist()] = 1 # sets cells to 1 To clarify the last operation, you can pass in a list of the row numbers and column numbers, and it will set all those elements to the value after the equal. So in our case we want to set the following locations to 1: (0,1), (1,3), (2,2), (3,6) Which we'd represent as: output[[0,1,2,3], [1,3,2,6]] = 1 And you'll see that those lists line up with a) an increasing list up to the total row count, and b) our original tensor
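An alternative sketch using the built-in one-hot helper, which does the same thing in one call (num_classes=7 matches the desired width):

import torch
import torch.nn.functional as F

t = torch.tensor([[1, 3, 2, 6]])
output = F.one_hot(t.squeeze(0), num_classes=7)
# tensor([[0, 1, 0, 0, 0, 0, 0],
#         [0, 0, 0, 1, 0, 0, 0],
#         [0, 0, 1, 0, 0, 0, 0],
#         [0, 0, 0, 0, 0, 0, 1]])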
https://stackoverflow.com/questions/68308241/
How to understand decoder_start_token_id and forced_bos_token_id in mbart?
When I want to use huggingface's pretrained models such as mbart to conduct multilingual experiments, the meaning of paramaters decoder_start_token_id and forced_bos_token_id confuse me. I find codes like: # While generating the target text set the decoder_start_token_id to the target language id. # The following example shows how to translate English to Romanian # using the facebook/mbart-large-en-ro model. from transformers import MBartForConditionalGeneration, MBartTokenizer tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX") article = "UN Chief Says There Is No Military Solution in Syria" inputs = tokenizer(article, return_tensors="pt") translated_tokens = model.generate(**inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"]) tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] and: # To generate using the mBART-50 multilingual translation models, # eos_token_id is used as the decoder_start_token_id and the target language id is forced as the first generated token. # To force the target language id as the first generated token, # pass the forced_bos_token_id parameter to the generate method. # The following example shows how to translate between Hindi to French and Arabic to English # using the facebook/mbart-50-large-many-to-many checkpoint. from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है" article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا." model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") # translate Hindi to French tokenizer.src_lang = "hi_IN" encoded_hi = tokenizer(article_hi, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria." # translate Arabic to English tokenizer.src_lang = "ar_AR" encoded_ar = tokenizer(article_ar, return_tensors="pt") generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "The Secretary-General of the United Nations says there is no military solution in Syria." While the annotation of this two paramaters are: decoder_start_token_id (:obj:`int`, `optional`): If an encoder-decoder model starts decoding with a different token than `bos`, the id of that token. forced_bos_token_id (:obj:`int`, `optional`): The id of the token to force as the first generated token after the :obj:`decoder_start_token_id`. Useful for multilingual models like :doc:`mBART <../model_doc/mbart>` where the first generated token needs to be the target language token. And for different varients of mbart, such as facebook/mbart-large-cc25 and facebook/mbart-large-50, which one should we specify to generate response of specific language?
In standard sequence-to-sequence models, decoding starts by providing the decoder with the [bos] symbol; it generates the word w1, which is provided as the input of the decoder in the next step, and the decoder generates the word w2. This continues until the [eos] (end-of-sentence) token is generated. [bos] w_1 w_2 w_3 ↓ ↓ ↓ ↓ ┌──────────────────┐ │ DECODER │ └──────────────────┘ ↓ ↓ ↓ ↓ w_1 w_2 w_3 [eos] With mBART, this is more tricky because you need to tell it what the target language and source language are. For the encoder and for training data, the tokenizer takes care of that and adds the language-specific tags at the end of the source sentence and at the beginning of the target sentence. The sentences are then in the format (given the source has 4 words and the target 3): source: v1 v2 v3 v4 [src_lng] target: [tgt_lng] w1 w2 w3 [eos] Unlike training, at inference time the target sentence is unknown and you want to generate it. But you still need to tell the decoder what it should use instead of the generic [bos] token. This is where the forced_bos_token_id comes into play. It is still the tokenizer that knows the IDs of the specific tokens. Different mBARTs have different tokenizers, so you should always use the language IDs from the tokenizer that matches the model. The attributes you mention seem to do the same thing, but I would stick to forced_bos_token_id mentioned in the mBART documentation. The method APIs in HuggingFace Transformers are very permissive and some of the attributes only apply to some models and get ignored by others. I would avoid using something that is not explicitly mentioned in the documentation of the particular model.
https://stackoverflow.com/questions/68313263/
Classification in LSTM returns same value for classification
This is my first time posting in stack overflow so forgive me if I do any sort of mistake. I have 10000 data, and each data has a label of 0 and 1. I want to perform classification using LSTM as this is time series data. input_dim = 1 hidden_dim = 32 num_layers = 2 output_dim = 1 # Here we define our model as a class class LSTM(nn.Module): def __init__(self, input_dim, hidden_dim, num_layers, output_dim): super(LSTM, self).__init__() self.hidden_dim = hidden_dim self.num_layers = num_layers self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True) self.fc = nn.Linear(hidden_dim, output_dim) def forward(self, x): #Initialize hidden layer and cell state h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_() c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_() # We need to detach as we are doing truncated backpropagation through time (BPTT) # If we don't, we'll backprop all the way to the start even after going through another batch out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach())) # Index hidden state of last time step # out.size() --> 100, 32, 100 # out[:, -1, :] --> 100, 100 --> just want last time step hidden states! out = self.fc(out[:, -1, :]) # For binomial Classification m = torch.sigmoid(out) return m model = LSTM(input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, num_layers=num_layers) loss = nn.BCELoss() optimiser = torch.optim.Adam(model.parameters(), lr=0.00001, weight_decay=0.00006) num_epochs = 100 # Number of steps to unroll seq_dim =look_back-1 for t in range(num_epochs): y_train_class = model(x_train) output = loss(y_train_class, y_train) # Zero out gradient, else they will accumulate between epochs optimiser.zero_grad(set_to_none=True) # Backward pass output.backward() # Update parameters optimiser.step() This is an example of what the result looks like This code is initially from kaggle, I edited them for classification. Please, can you tell me what I am doing wrong? EDIT 1: Add dataloader from torch.utils.data import DataLoader from torch.utils.data import TensorDataset x_train = torch.from_numpy(x_train).type(torch.Tensor) y_train = torch.from_numpy(y_train).type(torch.Tensor) x_test = torch.from_numpy(x_test).type(torch.Tensor) y_test = torch.from_numpy(y_test).type(torch.Tensor) train_dataloader = DataLoader(TensorDataset(x_train, y_train), batch_size=128, shuffle=True) test_dataloader = DataLoader(TensorDataset(x_test, y_test), batch_size=128, shuffle=True) I realized I had forgotten to inverse the transformation before checking the result. When I did that, I got different values from classification, however all values are in the scale of 0.001-0.009, so when I round them, the result is same. Label 0 for all classification.
A common phenomenon in NN training is that they will initially converge to a very naive solution to the problem where they output a constant prediction that minimizes the error on the training data. My guess is that in your training data, the ratio between 0 and 1 classes is close to 0.5423. Depending on whether your model is of sufficient complexity, it might learn to make more specific predictions based on the input when given more learning steps. While increasing the number of epochs could help, there is something better you can do with your current setup. Currently, you are only performing a single optimizer step per epoch. Typically, you would want a step per batch and loop over your data in (mini)batches of, say, 32 inputs for example. To do this, it would be best to use a DataLoader where you can define a batch size, and loop over the dataloader inside your epoch loop similar to this example.
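A hedged sketch of the per-batch version of the question's loop (names follow the question; the loss function is renamed loss_fn to avoid clashing with the per-batch loss value, and the .squeeze(-1) assumes the model output is [batch, 1] while the targets are [batch] -- adjust to your shapes):

loss_fn = nn.BCELoss()
for epoch in range(num_epochs):
    for x_batch, y_batch in train_dataloader:
        pred = model(x_batch)
        batch_loss = loss_fn(pred.squeeze(-1), y_batch)

        optimiser.zero_grad(set_to_none=True)
        batch_loss.backward()
        optimiser.step()            # one optimizer step per mini-batch, not per epoch
    print(f"epoch {epoch}: last batch loss {batch_loss.item():.4f}")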
https://stackoverflow.com/questions/68315278/
Hermetic / Non Hermetic Packages in Python
While going through the PyTorch documentation, I came across the term hermetic packages: torch.package adds support for creating hermetic packages containing arbitrary PyTorch code. These packages can be saved, shared, used to load and execute models at a later date or on a different machine, and can even be deployed to production using torch::deploy. I don't understand what "hermetic packages" means in this context. Can someone explain what makes packages hermetic? What would non-hermetic packages look like? From some searching over Stack Overflow [1][2], it seems this terminology is a generic term used in the software world. Any examples - even outside of the PyTorch/Python world - would help in solidifying my understanding. Thank you! [1] Creating Hermetic Maven Builds [2] Bazel: hermetic use of jar command?
In this context, hermetic means that regardless of the libraries and configuration already installed on the machine you are running on (a macOS laptop, a Windows desktop, etc.), PyTorch and its dependencies will be built in an identical way. The following link has a section on hermetic builds: https://www.google.com/search?q=what+is+hermatic+mean+in+software&oq=what+is+hermatic+mean+in+software&aqs=chrome..69i57j33l3.7998j0j7&sourceid=chrome&ie=UTF-8 "Our builds are hermetic, meaning that they are insensitive to the libraries and other software installed on the build machine. Instead, builds depend on known versions of build tools, such as compilers, and dependencies, such as libraries. The build process is self-contained and must not rely on services that are external to the build environment." This is also a good link to refer to: https://news.ycombinator.com/item?id=19610869
https://stackoverflow.com/questions/68321832/
Retrieving intermediate features from pytorch torch.hub.load
I have a Net object instantiated in pytorch via torch.hub.load: model = torch.hub.load('facebookresearch/pytorchvideo', 'slowfast_r50', pretrained=True) The final layer is a projection to a 400-dim vector. Is there a way to get the penultimate layer's output instead during a forward pass?
Yes, the easiest way is to swap the layer with torch.nn.Identity (which simply returns its inputs unchanged). The line below changes this submodule: (6): ResNetBasicHead( (dropout): Dropout(p=0.5, inplace=False) (proj): Linear(in_features=2304, out_features=400, bias=True) (output_pool): AdaptiveAvgPool3d(output_size=1) ) to Identity: model.blocks[6] = torch.nn.Identity() as you probably don't want to keep the Dropout anyway (you might only change proj or any other part of the network as needed).
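A hedged alternative that keeps the head intact is to grab the features with a forward hook on the final projection -- its input is the pre-projection (2304-dim) representation from the module printout above (video_batch is a stand-in for whatever input you normally pass to the model):

features = {}

def hook(module, inputs, output):
    features["penultimate"] = inputs[0].detach()   # what the final Linear receives

handle = model.blocks[6].proj.register_forward_hook(hook)
_ = model(video_batch)                             # an ordinary forward pass
penultimate = features["penultimate"]
handle.remove()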
https://stackoverflow.com/questions/68324172/
My Loss Function doesn't get smaller values during training
I am trying to predict the center of my palm The structure of my neural network consists of 2 cnn which both are followed by max-pooling and a linear layer that has 2 outputs, one for x and the other one for y. The input is a 720x720 image. class MyNeuralNetwork(torch.nn.Module): def __init__(self): super(MyNeuralNetwork, self).__init__() self.conv1 = torch.nn.Conv2d(4, 5, 5) self.conv2 = torch.nn.Conv2d(5, 5, 5) self.pool = torch.nn.MaxPool2d(3, 3) self.linear = torch.nn.Linear(5 * 78 * 78, 2) def forward(self, x): x = self.conv1(x) x = self.pool(x) x = self.conv2(x) x = self.pool(x) x = x.view(x.size(0), -1) x = self.linear(x) return x I have the pathnames of the images saved in a csv file. the x and y coordinates are saved in a different csv file. Here is the code for my Dataset. class MyHand(Dataset): """Creating the proper dataset to feed my neural network""" def __init__(self, name_path, root_dir, results_path, transform=None): self.names = pd.read_csv(name_path) self.rootdir = root_dir self.transform = transform self.results = pd.read_csv(results_path) def __len__(self): length = len(self.names.columns) return length def __getitem__(self, index): img_path = os.path.join(self.rootdir, self.names.columns[index]) image = pl.imread(img_path) x_top_left_corner = torch.tensor(self.results.iloc[index, 0]) y_top_left_corner = torch.tensor(self.results.iloc[index, 1]) width = torch.tensor(self.results.iloc[index, 2]) height = torch.tensor(self.results.iloc[index, 3]) # calculating the x and y center of my palm x_center = x_top_left_corner + width/2 y_center = y_top_left_corner - height/2 if self.transform: image = self.transform(image) return image, x_center, y_center and the code for training the network is dataset = MyHand(name_path='path to the names of the images csv', results_path='path to the results cvs', transform=torchvision.transforms.ToTensor( )) loader = DataLoader(dataset=dataset, batch_size=4) model = MyNeuralNetwork() criterion = torch.nn.MSELoss() EPOCHS = 5 LEARNING_RATE = 0.001 optimizer = optim.SGD(model.parameters(), LEARNING_RATE) for epoch in range(EPOCHS): print("epoch:", epoch) for data in dataset: pic, x, y = data model.zero_grad() outpout = model(pic[None, :, :, :]) loss1 = criterion(outpout[0, 0], x) loss2 = criterion(outpout[0, 1], y) loss = loss1 + loss2 loss.backward() print(loss) but as you can see below my loss function has exactly the same results at each epoch and it doesn't decrease at all. What can i do for that? I tried different values of learning rate but still the same.
Your loss values are extremely high, as you can see. I would propose that you normalize your outputs by using the sigmoid activation function. The coordinates are then in the range 0-1 and can later be translated back to the image by multiplying them by 720. To calculate the loss, you have to divide your target coordinates by 720 as well. Then you should get a nice and stable loss in the range 0-1 (see the sketch below). Also: either decay your learning rate or try a smaller one; scale your image down (I don't know what the images look like, but 720x720 is quite big); use three convolutions with smaller kernels; add a second linear layer.
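A hedged sketch of that normalisation, keeping the question's names. One extra observation from the posted code: the training loop never calls optimizer.step(), so the weights are never updated, which by itself would explain a loss that stays exactly the same each epoch -- the sketch includes the step.

def forward(self, x):
    x = self.pool(self.conv1(x))
    x = self.pool(self.conv2(x))
    x = x.view(x.size(0), -1)
    return torch.sigmoid(self.linear(x))        # predictions now live in [0, 1]

for epoch in range(EPOCHS):
    for pic, x_center, y_center in dataset:
        target = torch.stack([x_center, y_center]).float() / 720.0   # scale targets to [0, 1]
        output = model(pic[None, :, :, :])
        loss = criterion(output[0], target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                         # missing from the original loop
    print(loss.item())
# to map a prediction back to pixels: multiply by 720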
https://stackoverflow.com/questions/68335560/
How to calculate Gradient of the loss with respect to input?
I have a pre-trained PyTorch model. I need to calculate the gradient of the loss with respect to the network's inputs using this model (without training again, only using the pre-trained model). I wrote the following code, but I am not sure whether it is correct. test_X, test_y = load_data(mode='test') testset_original = MyDataset(test_X, test_y, transform=default_transform) testloader = DataLoader(testset_original, batch_size=32, shuffle=True) model = MyModel(device=device).to(device) checkpoint = torch.load('checkpoint.pt') model.load_state_dict(checkpoint['model_state_dict']) gradient_losses = [] for i, data in enumerate(testloader): inputs, labels = data inputs= inputs.to(device) labels = labels.to(device) inputs.requires_grad = True output = model(inputs) loss = loss_function(output) loss.backward() gradient_losses.append(inputs.grad) My question is, is this list gradient_losses actually storing what I wish to store? If not, what is the correct way to do that?
is this list gradient_losses actually storing what I wish to store? Yes, if you are looking to get the derivative of the loss with respect to the input, then that seems to be the correct way to do it. Here is a minimal example: take f(x) = a*x. Then df/dx = a. >>> x = torch.rand(10, requires_grad=True) >>> y = torch.rand(10) >>> a = torch.tensor([3.], requires_grad=True) >>> loss = a*x - y >>> loss.mean().backward() >>> x.grad tensor([0.3000, 0.3000, ..., 0.3000, 0.3000]) which, in this case, is equal to a / len(x) because of the mean over the batch. Do note that each gradient you extract with input.grad will be averaged over the whole batch, and won't be a gradient over each individual input. Also, you don't need to .clone() your input gradients as they are not part of the model and won't get zeroed by model.zero_grad().
https://stackoverflow.com/questions/68338357/