Dataset columns:
docs: stringclasses (4 values)
category: stringlengths (3 to 31)
thread: stringlengths (7 to 255)
href: stringlengths (42 to 278)
question: stringlengths (0 to 30.3k)
context: stringlengths (0 to 24.9k)
marked: int64 (0 to 1)
huggingface
Beginners
How to change BERT’s pre-training tasks?
https://discuss.huggingface.co/t/how-to-change-berts-pre-training-tasks/5177
Hi guys, First of all, I'm new to Hugging Face so I'm hoping my question won't sound foolish. I'm doing some research about the effects of pre-training tasks by changing BERT's. Thus, I need to pre-train BERT from scratch with different (besides NSP and MLM) multiple or singular pre-training tasks. So basically I will create a Transformer model with the same architectural properties as BERT and train it on different tasks. As far as I know, the examples are all about pre-training BERT using MLM (with/without NSP). Is it possible to use the BERT model in Hugging Face with different pre-training tasks such as PLM, DAE etc.? Also, I will evaluate those different models for BertForNextSentencePrediction, BertForMaskedLM, BertForSequenceClassification, BertForTokenClassification. Even if I use a different model class, can I still use it for BertFor… tasks? How can I train BERT with multiple tasks that are different from MLM + NSP, such as PLM + SOP etc.?
Is it possible to use the BERT model in Hugging Face with different pre-training tasks such as PLM, DAE etc.? Well, anything is possible. If the question is whether there is an example doing it already, the answer is no, so you would have to write the training scripts for those tasks yourself. Even if I use a different model class, can I still use it for BertFor… tasks? No, you can't use a BertForXxx if you use a different class (e.g. not a BertYyy) for your pretraining.
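For illustration only, here is a minimal sketch (not an official example) of what writing your own pre-training task could look like: build a BERT-sized encoder from a fresh BertConfig and attach a custom head, then supply your own loss and training loop. The head and the dummy batch are assumptions, not anything shipped with the library.

import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

config = BertConfig()          # defaults match the bert-base architecture
encoder = BertModel(config)    # randomly initialized, no pre-trained weights

class CustomPretrainingHead(nn.Module):
    """Hypothetical head for a sentence-order-prediction-style objective."""
    def __init__(self, hidden_size, num_labels=2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled_output):
        return self.classifier(pooled_output)

head = CustomPretrainingHead(config.hidden_size)

input_ids = torch.randint(0, config.vocab_size, (2, 16))   # dummy batch
outputs = encoder(input_ids)
logits = head(outputs.pooler_output)                        # shape (2, 2)
# loss = your task-specific loss over `logits`; sum several losses for multi-task pre-training

Because the backbone stays a plain BertModel, the trained encoder weights can later be loaded into the BertFor… classes for evaluation, which is consistent with the caveat above about keeping a Bert-family class for pretraining.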
0
huggingface
Beginners
Set TPU device in Trainer
https://discuss.huggingface.co/t/set-tpu-device-in-trainer/5109
Hi, I want to use the TPU provided by Kaggle in my project. I use PyTorch XLA to do that: import torch_xla import torch_xla.core.xla_model as xm device = xm.xla_device() Then I define a model model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base") And as I can see, the model is on the xla device, which is fine: model.device # device(type='xla', index=1) Then I define a Trainer instance with my model trainer = Trainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics, ) And train it trainer.train() But it seems to me that the trainer does not use the xla device, because the TPU device is idle in Kaggle… So, how do I use the TPU device in Kaggle with PyTorch XLA in my case?
No, the Trainer does not support training on TPUs inside a notebook. You have to use it in a script (like the example scripts) and launch training with our launcher (see here for the instructions).
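For reference, a hedged example of what such a launch looks like: the xla_spawn.py launcher ships in the transformers examples folder (the exact path and the run_mlm.py arguments below depend on your version and dataset, so treat them as placeholders).

python xla_spawn.py --num_cores 8 \
    run_mlm.py \
    --model_name_or_path xlm-roberta-base \
    --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --output_dir ./output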
0
huggingface
Beginners
How to create “Other/garbage” class for classifier (e.g. COVID-19 classifier)
https://discuss.huggingface.co/t/how-to-create-other-garbage-class-for-classifier-e-g-covid-19-classifier/4249
Hi there, I’m currently training a classifier to classify news sentences in 20 classes of policy measures against COVID-19 like “Lockdown”, “Curfew” etc. It will be part of a system that identifies new policy measure announcements in news articles. (An older development version of the model is here on the model hub) The issue: In reality, 99% of sentences in news on covid have nothing to do with new policy announcements. So my task is (1) to identify the 1% of sentences which announce a new policy and (2) to classify these sentences. Step (2) works fine, but step (1) is quite difficult. My current approach: train a classifier for 21 classes: 20 for the 20 policy types and 1 “Other/garbage” class. I’m creating the data for the “Other” class by extracting sentences from news which are semantically very different from the policy announcements. I get decent accuracy on the 20 policy types, but the big issue is the “Other” class, which either creates too many false positives or false negatives. My question: What are best practices for eliminating “Other” sentences which are not relevant for the classification task? I feel like this must be a very common problem for real-world classification tasks (e.g. same issue for sentiment classifiers, which are only trained to classify as positive/negative - but in reality 99% sentences in e.g. news are neutral). But I couldn’t find literature on the issue. Would be thankful for advice or hints for literature! Best, Moritz
It sounds like any of the algorithms for extractive summarization, like LexRank, would apply. You should be able to pick the most meaningful sentences of a news article that way and then only classify those, without a separate class for "other". Of course there is no guarantee that you won't exclude sentences that shouldn't be in the "other" class. Another thing I would try is to classify anything as "other" that doesn't have a very high probability for any of the remaining classes. So if your model outputs a low probability even for the highest-probability class (i.e. it's not confident in its prediction), just treat that as "other".
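A minimal sketch of the second idea, assuming a fine-tuned 20-class checkpoint; the checkpoint name and the threshold value are placeholders you would tune on held-out data containing real "other" sentences.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_ckpt = "path-to-your-20-class-policy-model"   # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)

THRESHOLD = 0.7   # assumption: calibrate on a validation set

def classify_with_other(sentence):
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = F.softmax(logits, dim=-1).squeeze(0)
    top_prob, top_idx = probs.max(dim=-1)
    if top_prob.item() < THRESHOLD:
        return "Other"                       # no class is confident enough
    return model.config.id2label[top_idx.item()]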
0
huggingface
Beginners
How to embed Hugging Face Pre-trained models in our own app
https://discuss.huggingface.co/t/how-to-embed-hugging-face-pre-trained-models-in-our-own-app/5112
Hello everyone, I have seen that Hugging Face has a lot of pre-trained models for different languages. How can I use these models for my own application with some recorded voices? I mean, I want to build an app in which we can press a button to record my voice and then the app gives back the text version of it. As far as I have seen to date, all the implementations have used the Hugging Face datasets and I don't know how to feed my own voice. Thank you
Hi rzamarefat, I don’t think the Huggingface models will work on recorded voices. So far as I know, they all expect text input. See the docs Quick tour — transformers 4.4.2 documentation
0
huggingface
Beginners
Fine-Tuning Pre-trained Models Issues and Gotchas
https://discuss.huggingface.co/t/fine-tuning-pre-trained-models-issues-and-gotchas/5095
I'm trying to improve the embeddings of an existing model (in my case the pre-trained checkpoint "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext") for my specific domain (pathology diagnostics) by fine-tuning the pre-trained model on my custom language corpus, thus getting a transfer of knowledge from the broad language to the specific language. The process seems to be conceptually straightforward, but I found a lot of evil in the details. Perhaps somebody from the community can point me in the right direction/resources to address these issues? To transfer learning of word embeddings from the broad language to the specific language, I need to continue training the word embedding model the same way as it was trained before, but with a different dataset, so I have to figure out how the model was trained before and how to change the dataset. How to find out how the model was trained before and how to continue with the same settings: How can I find out which task the model was trained on? Do I need to explicitly set up the same training task, or is there some way to continue the same training cycle from the model checkpoint data? Can I retrieve task information from the checkpoint? How do I find out the training params used to train that model? How can I use them to continue training the same way? Can I retrieve the Trainer configuration and trainer.args from the checkpoint? How to change the dataset for the pre-trained model: Since I need to load my own corpus for training, how do I find out how to prepare the csv file for loading into the dataset, so the tokenizer and model Trainer will accept it for training (column names, one sentence per line or one document per line, any cleanup, any pre-tokenization processing expected by the model Trainer)? Can I retrieve the dataset structure or a reference to it from the checkpoint? Since my language is different, my vocabulary is different too. I can add additional words to the vocabulary of the pre-trained model tokenizer [tokenizer.add_tokens(add_vocab)], but should I do something with the model to accept the updated vocabulary? I could not find answers in any fine-tuning references; perhaps someone from the community has more information? Many Thanks!
Hello! Thank you for your question. Regarding the first part of the question, I suggest reading this paper, as suggested here, and you will find a lot of details about how the model was trained (from parameters to task), or references to other papers where that info is contained. I suggest that you use the checkpoint-loading functionality of the framework they used to see whether they saved the hyperparameters with the checkpoint or not. For any specific details about training and data pre-processing that are missing from their paper and cannot be inferred from references, it's always worth contacting the authors! You will have to explore what learning parameters make sense for your task and data; you don't need to train with the parameters they trained with. So, in a way, their training configuration does not impact all that much what you do. You just need to find a good set of parameters that work well for your task. Depending on your task, you might want to use the Trainer and the existing data loaders or implement your own, so I can't give any answers there. If you add your tokens to the vocabulary, I would expect the tokenizer converts them to subword units the model has learned in training, so I don't expect there is any additional step you need to do in that regard.
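For the domain-adaptation step itself, here is a hedged sketch of continuing masked-language-model training from the PubMedBERT checkpoint on a custom corpus. The csv file name, its single "text" column, and the hyperparameters are assumptions, not anything prescribed by the original model's authors.

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

ckpt = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForMaskedLM.from_pretrained(ckpt)

# assumption: a csv with one "text" column, one document (or sentence) per row
ds = load_dataset("csv", data_files={"train": "pathology_corpus.csv"})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="pubmedbert-pathology",
                         num_train_epochs=3,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=ds["train"],
        data_collator=collator).train()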
0
huggingface
Beginners
Mask modelling on specific words
https://discuss.huggingface.co/t/mask-modelling-on-specific-words/4934
Hello, I would like to fine-tune a masked language model (based on CamemBERT) in order to predict some words in a text or a sentence. During the training procedure, I want to mask specific words in order to force the model to focus on them. Indeed, with the test data, the model will only have to predict these specific words and nothing else. My concern is that most of the specific words are unknown in the vocabulary and are therefore tokenized into subtokens. For instance with the sentence "je rentre bredouille", where the word to mask is "bredouille". When I tokenize this, it becomes: ['▁je', '▁rentre', '▁bre', 'd', 'ouille']. How should I handle this? Should I use the mask like this: ['▁je', '▁rentre', 'MASK', 'MASK', 'MASK']? If so, how will the model be able to predict 'bredouille' with a single token? I have a subsidiary question: if my issue can be solved, how can I use the final trained model in order to make word embeddings? Thank you very much,
To deal with the vocabulary change, I had to (1) get the vocab from the current model tokenizer with tokenizer.get_vocab(), (2) compare my custom vocab with the vocab of the model tokenizer, (3) add my tokens to the tokenizer vocab with tokenizer.add_tokens(add_vocab), and (4) resize the model for the updated vocab with model.resize_token_embeddings(len(tokenizer)), and cross my fingers that the Trainer still works (it would be very nice if the Trainer could auto-resize the model for an updated vocab, instead of crashing).
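A runnable sketch of those four steps applied to the CamemBERT case above; the list of custom words is an assumption, and the exact tokenization of the surrounding text may vary slightly by version.

from transformers import CamembertTokenizer, CamembertForMaskedLM

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base")

existing_vocab = tokenizer.get_vocab()                             # step 1
custom_words = ["bredouille"]                                      # assumption: your domain words
new_words = [w for w in custom_words if w not in existing_vocab]   # step 2
num_added = tokenizer.add_tokens(new_words)                        # step 3
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))                  # step 4

print(tokenizer.tokenize("je rentre bredouille"))   # the custom word is now kept as a single token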
0
huggingface
Beginners
Convert_graph_to_onnx.py “ImportError: attempted relative import with no known parent package”
https://discuss.huggingface.co/t/convert-graph-to-onnx-py-importerror-attempted-relative-import-with-no-known-parent-package/4941
Hi, I'm trying to run this script but keep getting a failure because of: Traceback (most recent call last): File "convert_graph_to_onnx.py", line 22, in from .file_utils import ModelOutput, is_tf_available, is_torch_available ImportError: attempted relative import with no known parent package Running python convert_graph_to_onnx.py --pipeline ner --model "KB/bert-base-swedish-cased-ner" --framework pt --tokenizer "KB/bert-base-swedish-cased-ner" --quantize kb-bert-cased-ner.onnx I tried installing transformers from source. I tried running from the root, src & src/transformers. Still not working.
Yes, it's a bug indeed. This PR should fix it.
0
huggingface
Beginners
Training on Domain specific Dataset
https://discuss.huggingface.co/t/training-on-domain-specific-dataset/4805
Hi, I want to train a sentiment analysis multi-label classifier and, in addition to training the final output layers, I'd like to train the hidden BERT layers as well. I want to understand how much improvement I can get in my metric (F1 score) by feeding it my domain-specific data. All the documents/references I have seen thus far only point to training the final output layer that generates the classification. Is there a way to train various hidden layers of BERT using (let's say) BERT Base? Thanks in advance, Devesh
My understanding is that if you don’t specifically freeze any of the layers you will always train the whole model. If you want to train only particular layers, you can add a condition to this code: model = BertForSequenceClassification.from_pretrained('bert-base-uncased') for param in model.bert.parameters(): param.requires_grad = False
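Building on that snippet, here is a hedged sketch of the "condition" idea: freeze the embeddings and the lower encoder layers while leaving the top layers and the classification head trainable. The number of labels and the split at layer 8 are assumptions to adapt to your setup.

from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)  # num_labels is an assumption

# Freeze the embeddings and the bottom 8 encoder layers; layers 8-11 and the
# classification head remain trainable.
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")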
0
huggingface
Beginners
File size/speech length limit for Wave2Vec2?
https://discuss.huggingface.co/t/file-size-speech-length-limit-for-wave2vec2/3636
Hi there. I’ve been trying out Hugging Face’s implementation of Wave2Vec2 for transcribing on Colab Pro, and got pretty good results from short speeches under 80 seconds. Anything beyond that just crashes the notebook, even when I set it to High RAM, or compress the audio file drastically. Is there a practical limit to the length of the audio clip that can/should be run on HF-Wav2Vec2? I tried looking for documentation on this, but might have missed it. Appreciate any pointers on this.
Answering my own question in case anyone stumbles on this and wants a quick solution: it seems to be a memory issue. I cobbled together a simple, if clumsy, way to transcribe the split-up clips one at a time. See the attached screen-grab or check out the notebooks in my repo for this project: GitHub - chuachinhon/wav2vec2_transformers: Transcribing audio files using Hugging Face's implementation of Wav2Vec2
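In the same spirit as the linked notebooks, a hedged sketch of chunked transcription with the current Wav2Vec2 API; the file name and the 60-second chunk length are assumptions, and overlapping chunks at word boundaries would likely give cleaner joins.

import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, sr = librosa.load("long_interview.wav", sr=16_000)   # hypothetical file
chunk_len = 60 * sr                                          # assumption: 60-second chunks

pieces = []
for start in range(0, len(speech), chunk_len):
    chunk = speech[start:start + chunk_len]
    inputs = processor(chunk, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs["input_values"]).logits
    ids = torch.argmax(logits, dim=-1)
    pieces.append(processor.batch_decode(ids)[0])

transcript = " ".join(pieces)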
0
huggingface
Beginners
Question answering bot: yes/no answers
https://discuss.huggingface.co/t/question-answering-bot-yes-no-answers/4496
Hello everybody. Following this guide I was able to fine-tune an already fine-tuned model for Question Answering (this one). Now I am wondering if it is possible for a QA bot to produce "yes"/"no" as an answer, and if this is the case, how would it be done? The guide clearly states that it is not possible for this kind of model to give an answer that is not explicitly in the text (because that would mean generating text). So my questions are: Can I adjust the code provided in the notebook to train a model that is able to produce a yes/no answer? Maybe I should use another model, or the same model but with different training? If I have to use another model, is it possible to join them in some way? How should data for training yes/no answers be provided? Could you point to some docs/examples? Thanks a lot!
Hey @Neuroinformatica, if you already have labelled data then my suggestion would be to frame the problem as an entailment one, i.e. given a (question, passage) pair, predict a boolean value for yes/no. This is the approach taken in the BoolQ paper and fine-tuning models on this is pretty straightforward, e.g. from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForSequenceClassification boolq = load_dataset("super_glue", "boolq") model_ckpt = ... tokenizer = AutoTokenizer.from_pretrained(model_ckpt) boolq_enc = boolq.map(lambda x : tokenizer(x['question'], x['passage'], truncation="only_second"), batched=True) model = AutoModelForSequenceClassification.from_pretrained(model_ckpt) # fine-tune with Trainer or whatever method ... I've fine-tuned a few BERT models this way on the Hub, e.g. here: lewtun/bert-large-uncased-wwm-finetuned-boolq · Hugging Face
0
huggingface
Beginners
Run_mlm.py: Why does eval_loss at the last epoch differ from the do_eval eval_loss?
https://discuss.huggingface.co/t/run-mlm-py-why-does-eval-loss-at-the-last-epoch-differ-from-the-do-eval-eval-loss/4634
I’m using the example command line supplied with run_mlm.py (v4.4.1), but I’ve enabled more frequent logging as well as evaluation at the end of each epoch: python run_mlm.py --model_name_or_path roberta-base \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --output_dir test-wikitest \ --do_train \ --do_eval \ --evaluation_strategy epoch \ --logging_steps 10 By default, this is configured to train over 3 epochs. When I run this to completion, I find that the eval_loss calculated at the end of the last epoch is reported as: [INFO|trainer.py:1775] 2021-03-18 18:16:19,116 >> ***** Running Evaluation ***** [INFO|trainer.py:1776] 2021-03-18 18:16:19,116 >> Num examples = 496 [INFO|trainer.py:1777] 2021-03-18 18:16:19,116 >> Batch size = 8 {'eval_loss': 1.2713477611541748, 'eval_runtime': 27.0394, 'eval_samples_per_second': 18.344, 'epoch': 3.0} Immediately following this, the final evaluation (because I specified --do_eval) is executed. But the eval_loss reported is different: [INFO|trainer.py:1775] 2021-03-18 18:16:47,957 >> ***** Running Evaluation ***** [INFO|trainer.py:1776] 2021-03-18 18:16:47,957 >> Num examples = 496 [INFO|trainer.py:1777] 2021-03-18 18:16:47,957 >> Batch size = 8 [INFO|trainer_pt_utils.py:650] 2021-03-18 18:17:15,018 >> ***** eval metrics ***** [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> epoch = 3.0 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> eval_loss = 1.2644 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> eval_mem_cpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> eval_mem_cpu_peaked_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> eval_mem_gpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> eval_mem_gpu_peaked_delta = 1584MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> eval_runtime = 26.8687 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> eval_samples = 496 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,018 >> eval_samples_per_second = 18.46 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:17:15,019 >> perplexity = 3.5409 I was expecting the two eval_loss values to be the same, since no training happens between the two back-to-back evaluations and since the validation set is the same. 
Further, when I invoke run_mlm.py with the trained model checkpoint and do only an evaluation (no training at all): python run_mlm.py --model_name_or_path test-wikitest \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --output_dir test-wikitest-evalonly \ --do_eval I get still another value for eval_loss: [INFO|trainer.py:1775] 2021-03-18 18:46:12,713 >> ***** Running Evaluation ***** [INFO|trainer.py:1776] 2021-03-18 18:46:12,713 >> Num examples = 496 [INFO|trainer.py:1777] 2021-03-18 18:46:12,714 >> Batch size = 8 [INFO|trainer_pt_utils.py:650] 2021-03-18 18:46:35,544 >> ***** eval metrics ***** [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> eval_loss = 1.287 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> eval_mem_cpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> eval_mem_cpu_peaked_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> eval_mem_gpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> eval_mem_gpu_peaked_delta = 1584MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> eval_runtime = 22.6793 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> eval_samples = 496 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> eval_samples_per_second = 21.87 [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> init_mem_cpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> init_mem_cpu_peaked_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> init_mem_gpu_alloc_delta = 476MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> init_mem_gpu_peaked_delta = 0MB [INFO|trainer_pt_utils.py:655] 2021-03-18 18:46:35,545 >> perplexity = 3.6218 Can anyone tell me what the source of the non-determinism is? Thanks in advance!
The masking is applied randomly each time evaluation runs, which is why you get slightly different results.
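A small illustration of that behaviour, assuming the standard DataCollatorForLanguageModeling used by run_mlm.py: two back-to-back calls over the same examples typically mask different positions, so the labels (and therefore eval_loss) differ slightly.

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

features = [tokenizer("The quick brown fox jumps over the lazy dog.")] * 4
batch_1 = collator(features)
batch_2 = collator(features)

# The <mask> positions usually differ between the two calls, even though the
# underlying validation examples are identical.
print((batch_1["input_ids"] != batch_2["input_ids"]).any().item())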
0
huggingface
Beginners
What is transfer learning and why is it needed?
https://discuss.huggingface.co/t/what-is-transfer-learning-and-why-is-it-needed/4431
I am using Hugging Face models for NLP tasks. I see a lot of online examples like the one below which freeze the bottom layers and train only a few top layers. The basic idea is to use transfer learning, without having to train the model from scratch, which makes sense to me. for layer in model.layers[:-2]: layer.trainable = False However, I find that the accuracy of my model improves significantly when I train it from scratch. Are there any downsides to training a transformer model from scratch? What is the right approach? When should you unfreeze or freeze layers? Please advise.
Transfer learning is using someone else’s trained model as your model’s initial weights. Training from scratch is using random numbers for your model’s initial weights. Training from scratch requires a lot of data and a lot of resources. You might need to train from scratch if your data is completely different from the standard data. For example, if it is in a different language, or chemical symbols. Otherwise, it will probably be better to use transfer learning, starting from the closest kind of data you can find. A lot of people do Intermediate training. That is where you use your data, but not your final downstream task. For example, you might choose to start with a pre-trained BERT, such as bert-base-uncased. Then you might do Masked Language Modelling using your text data. Finally, you might do Sequence Classification training, using your text and your labels. If your text is quite similar to the BERT corpus (wikipedia plus books), then you could probably get results by unfreezing only half the BERT layers. If your text is very different, you might get better results if you unfreeze all the layers. If your text is very similar to the BERT corpus, you might not need to do intermediate training, and you might not need to unfreeze any layers. If the results from using pre-trained BERT with your downstream task are “good enough”, then stop there. The more you unfreeze, the longer the training will take. I don’t know whether you should freeze the same layers for your downstream task training as for your intermediate training. Maybe you could try freezing half the layers for intermediate training, but freezing all the layers for your downstream task training.
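Since the question uses Keras-style freezing, here is a hedged sketch of "unfreezing only half the BERT layers" with the TensorFlow classes in transformers; the attribute path (model.bert.encoder.layer) follows the current TF implementation and may differ between versions, and the 6-layer split is just one choice.

from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Freeze the embeddings and the bottom 6 of 12 encoder layers; the top 6 layers
# and the classification head stay trainable.
model.bert.embeddings.trainable = False
for layer in model.bert.encoder.layer[:6]:
    layer.trainable = False

# Then compile and fit as usual; the more layers you unfreeze, the longer training takes.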
0
huggingface
Beginners
Inheriting from BartForConditionalGeneration into a new class - weight not initializing
https://discuss.huggingface.co/t/inheriting-from-bartforconditionalgeneration-into-a-new-class-weight-not-initializing/4403
Failing to load and initiate the pre-trained BART model when inheriting it from the BartForConditionalGeneration from transformers.models.bart.modeling_bart import BartForConditionalGeneration,BartPretrainedModel,BartConfig import torch.nn as nn import torch class BartExp(BartPretrainedModel): def __init__(self, config: BartConfig): super().__init__(config=config) self.bart = BartForConditionalGeneration(config) def forward( self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, encoder_outputs=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): output = self.model(input_ids=input_ids,attention_mask=attention_mask, labels=labels,decoder_input_ids=decoder_input_ids,encoder_outputs=encoder_outputs) return output model = BartExp.from_pretrained('facebook/bart-base') And then I get that endless warning : Some weights of the model checkpoint at facebook/bart-base were not used when initializing BartEXP: ['model.shared.weight', 'model.encoder.embed_tokens.weight', 'model.encoder.embed_positions.weight', 'model.encoder.layers.0.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.k_proj.bias', 'model.encoder.layers.0.self_attn.v_proj.weight', 'model.encoder.layers.0.self_attn.v_proj.bias', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.bias', 'model.encoder.layers.0.self_attn.out_proj.weight', 'model.encoder.layers.0.self_attn.out_proj.bias', 'model.encoder.layers.0.self_attn_layer_norm.weight', 'model.encoder.layers.0.self_attn_layer_norm.bias', 'model.encoder.layers.0.fc1.weight', 'model.encoder.layers.0.fc1.bias', 'model.encoder.layers.0.fc2.weight', 'model.encoder.layers.0.fc2.bias', 'model.encoder.layers.0.final_layer_norm.weight', 'model.encoder.layers.0.final_layer_norm.bias', 'model.encoder.layers.1.self_attn.k_proj.weight', 'model.encoder.layers.1.self_attn.k_proj.bias', 'model.encoder.layers.1.self_attn.v_proj.weight', 'model.encoder.layers.1.self_attn.v_proj.bias', 'model.encoder.layers.1.self_attn.q_proj.weight', 'model.encoder.layers.1.self_attn.q_proj.bias', 'model.encoder.layers.1.self_attn.out_proj.weight', 'model.encoder.layers.1.self_attn.out_proj.bias', 'model.encoder.layers.1.self_attn_layer_norm.weight', 'model.encoder.layers.1.self_attn_layer_norm.bias', 'model.encoder.layers.1.fc1.weight', 'model.encoder.layers.1.fc1.bias', 'model.encoder.layers.1.fc2.weight', 'model.encoder.layers.1.fc2.bias', 'model.encoder.layers.1.final_layer_norm.weight', 'model.encoder.layers.1.final_layer_norm.bias', 'model.encoder.layers.2.self_attn.k_proj.weight', 'model.encoder.layers.2.self_attn.k_proj.bias', 'model.encoder.layers.2.self_attn.v_proj.weight', 'model.encoder.layers.2.self_attn.v_proj.bias', 'model.encoder.layers.2.self_attn.q_proj.weight', 'model.encoder.layers.2.self_attn.q_proj.bias', 'model.encoder.layers.2.self_attn.out_proj.weight', 'model.encoder.layers.2.self_attn.out_proj.bias', 'model.encoder.layers.2.self_attn_layer_norm.weight', 'model.encoder.layers.2.self_attn_layer_norm.bias', 'model.encoder.layers.2.fc1.weight', 'model.encoder.layers.2.fc1.bias', 'model.encoder.layers.2.fc2.weight', 'model.encoder.layers.2.fc2.bias', 'model.encoder.layers.2.final_layer_norm.weight', 'model.encoder.layers.2.final_layer_norm.bias', 'model.encoder.layers.3.self_attn.k_proj.weight', 'model.encoder.layers.3.self_attn.k_proj.bias', 
'model.encoder.layers.3.self_attn.v_proj.weight', 'model.encoder.layers.3.self_attn.v_proj.bias', 'model.encoder.layers.3.self_attn.q_proj.weight', 'model.encoder.layers.3.self_attn.q_proj.bias', 'model.encoder.layers.3.self_attn.out_proj.weight', 'model.encoder.layers.3.self_attn.out_proj.bias', 'model.encoder.layers.3.self_attn_layer_norm.weight', 'model.encoder.layers.3.self_attn_layer_norm.bias', 'model.encoder.layers.3.fc1.weight', 'model.encoder.layers.3.fc1.bias', 'model.encoder.layers.3.fc2.weight', 'model.encoder.layers.3.fc2.bias', 'model.encoder.layers.3.final_layer_norm.weight', 'model.encoder.layers.3.final_layer_norm.bias', 'model.encoder.layers.4.self_attn.k_proj.weight', 'model.encoder.layers.4.self_attn.k_proj.bias', 'model.encoder.layers.4.self_attn.v_proj.weight', 'model.encoder.layers.4.self_attn.v_proj.bias', 'model.encoder.layers.4.self_attn.q_proj.weight', 'model.encoder.layers.4.self_attn.q_proj.bias', 'model.encoder.layers.4.self_attn.out_proj.weight', 'model.encoder.layers.4.self_attn.out_proj.bias', 'model.encoder.layers.4.self_attn_layer_norm.weight', 'model.encoder.layers.4.self_attn_layer_norm.bias', 'model.encoder.layers.4.fc1.weight', 'model.encoder.layers.4.fc1.bias', 'model.encoder.layers.4.fc2.weight', 'model.encoder.layers.4.fc2.bias', 'model.encoder.layers.4.final_layer_norm.weight', 'model.encoder.layers.4.final_layer_norm.bias', 'model.encoder.layers.5.self_attn.k_proj.weight', 'model.encoder.layers.5.self_attn.k_proj.bias', 'model.encoder.layers.5.self_attn.v_proj.weight', 'model.encoder.layers.5.self_attn.v_proj.bias', 'model.encoder.layers.5.self_attn.q_proj.weight', 'model.encoder.layers.5.self_attn.q_proj.bias', 'model.encoder.layers.5.self_attn.out_proj.weight', 'model.encoder.layers.5.self_attn.out_proj.bias', 'model.encoder.layers.5.self_attn_layer_norm.weight', 'model.encoder.layers.5.self_attn_layer_norm.bias', 'model.encoder.layers.5.fc1.weight', 'model.encoder.layers.5.fc1.bias', 'model.encoder.layers.5.fc2.weight', 'model.encoder.layers.5.fc2.bias', 'model.encoder.layers.5.final_layer_norm.weight', 'model.encoder.layers.5.final_layer_norm.bias', 'model.encoder.layernorm_embedding.weight', 'model.encoder.layernorm_embedding.bias', 'model.decoder.embed_tokens.weight', 'model.decoder.embed_positions.weight', 'model.decoder.layers.0.self_attn.k_proj.weight', 'model.decoder.layers.0.self_attn.k_proj.bias', 'model.decoder.layers.0.self_attn.v_proj.weight', 'model.decoder.layers.0.self_attn.v_proj.bias', 'model.decoder.layers.0.self_attn.q_proj.weight', 'model.decoder.layers.0.self_attn.q_proj.bias', 'model.decoder.layers.0.self_attn.out_proj.weight', 'model.decoder.layers.0.self_attn.out_proj.bias', 'model.decoder.layers.0.self_attn_layer_norm.weight', 'model.decoder.layers.0.self_attn_layer_norm.bias', 'model.decoder.layers.0.encoder_attn.k_proj.weight', 'model.decoder.layers.0.encoder_attn.k_proj.bias', 'model.decoder.layers.0.encoder_attn.v_proj.weight', 'model.decoder.layers.0.encoder_attn.v_proj.bias', 'model.decoder.layers.0.encoder_attn.q_proj.weight', 'model.decoder.layers.0.encoder_attn.q_proj.bias', 'model.decoder.layers.0.encoder_attn.out_proj.weight', 'model.decoder.layers.0.encoder_attn.out_proj.bias', 'model.decoder.layers.0.encoder_attn_layer_norm.weight', 'model.decoder.layers.0.encoder_attn_layer_norm.bias', 'model.decoder.layers.0.fc1.weight', 'model.decoder.layers.0.fc1.bias', 'model.decoder.layers.0.fc2.weight', 'model.decoder.layers.0.fc2.bias', 'model.decoder.layers.0.final_layer_norm.weight', 
'model.decoder.layers.0.final_layer_norm.bias', 'model.decoder.layers.1.self_attn.k_proj.weight', 'model.decoder.layers.1.self_attn.k_proj.bias', 'model.decoder.layers.1.self_attn.v_proj.weight', 'model.decoder.layers.1.self_attn.v_proj.bias', 'model.decoder.layers.1.self_attn.q_proj.weight', 'model.decoder.layers.1.self_attn.q_proj.bias', 'model.decoder.layers.1.self_attn.out_proj.weight', 'model.decoder.layers.1.self_attn.out_proj.bias', 'model.decoder.layers.1.self_attn_layer_norm.weight', 'model.decoder.layers.1.self_attn_layer_norm.bias', 'model.decoder.layers.1.encoder_attn.k_proj.weight', 'model.decoder.layers.1.encoder_attn.k_proj.bias', 'model.decoder.layers.1.encoder_attn.v_proj.weight', 'model.decoder.layers.1.encoder_attn.v_proj.bias', 'model.decoder.layers.1.encoder_attn.q_proj.weight', 'model.decoder.layers.1.encoder_attn.q_proj.bias', 'model.decoder.layers.1.encoder_attn.out_proj.weight', 'model.decoder.layers.1.encoder_attn.out_proj.bias', 'model.decoder.layers.1.encoder_attn_layer_norm.weight', 'model.decoder.layers.1.encoder_attn_layer_norm.bias', 'model.decoder.layers.1.fc1.weight', 'model.decoder.layers.1.fc1.bias', 'model.decoder.layers.1.fc2.weight', 'model.decoder.layers.1.fc2.bias', 'model.decoder.layers.1.final_layer_norm.weight', 'model.decoder.layers.1.final_layer_norm.bias', 'model.decoder.layers.2.self_attn.k_proj.weight', 'model.decoder.layers.2.self_attn.k_proj.bias', 'model.decoder.layers.2.self_attn.v_proj.weight', 'model.decoder.layers.2.self_attn.v_proj.bias', 'model.decoder.layers.2.self_attn.q_proj.weight', 'model.decoder.layers.2.self_attn.q_proj.bias', 'model.decoder.layers.2.self_attn.out_proj.weight', 'model.decoder.layers.2.self_attn.out_proj.bias', 'model.decoder.layers.2.self_attn_layer_norm.weight', 'model.decoder.layers.2.self_attn_layer_norm.bias', 'model.decoder.layers.2.encoder_attn.k_proj.weight', 'model.decoder.layers.2.encoder_attn.k_proj.bias', 'model.decoder.layers.2.encoder_attn.v_proj.weight', 'model.decoder.layers.2.encoder_attn.v_proj.bias', 'model.decoder.layers.2.encoder_attn.q_proj.weight', 'model.decoder.layers.2.encoder_attn.q_proj.bias', 'model.decoder.layers.2.encoder_attn.out_proj.weight', 'model.decoder.layers.2.encoder_attn.out_proj.bias', 'model.decoder.layers.2.encoder_attn_layer_norm.weight', 'model.decoder.layers.2.encoder_attn_layer_norm.bias', 'model.decoder.layers.2.fc1.weight', 'model.decoder.layers.2.fc1.bias', 'model.decoder.layers.2.fc2.weight', 'model.decoder.layers.2.fc2.bias', 'model.decoder.layers.2.final_layer_norm.weight', 'model.decoder.layers.2.final_layer_norm.bias', 'model.decoder.layers.3.self_attn.k_proj.weight', 'model.decoder.layers.3.self_attn.k_proj.bias', 'model.decoder.layers.3.self_attn.v_proj.weight', 'model.decoder.layers.3.self_attn.v_proj.bias', 'model.decoder.layers.3.self_attn.q_proj.weight', 'model.decoder.layers.3.self_attn.q_proj.bias', 'model.decoder.layers.3.self_attn.out_proj.weight', 'model.decoder.layers.3.self_attn.out_proj.bias', 'model.decoder.layers.3.self_attn_layer_norm.weight', 'model.decoder.layers.3.self_attn_layer_norm.bias', 'model.decoder.layers.3.encoder_attn.k_proj.weight', 'model.decoder.layers.3.encoder_attn.k_proj.bias', 'model.decoder.layers.3.encoder_attn.v_proj.weight', 'model.decoder.layers.3.encoder_attn.v_proj.bias', 'model.decoder.layers.3.encoder_attn.q_proj.weight', 'model.decoder.layers.3.encoder_attn.q_proj.bias', 'model.decoder.layers.3.encoder_attn.out_proj.weight', 'model.decoder.layers.3.encoder_attn.out_proj.bias', 
'model.decoder.layers.3.encoder_attn_layer_norm.weight', 'model.decoder.layers.3.encoder_attn_layer_norm.bias', 'model.decoder.layers.3.fc1.weight', 'model.decoder.layers.3.fc1.bias', 'model.decoder.layers.3.fc2.weight', 'model.decoder.layers.3.fc2.bias', 'model.decoder.layers.3.final_layer_norm.weight', 'model.decoder.layers.3.final_layer_norm.bias', 'model.decoder.layers.4.self_attn.k_proj.weight', 'model.decoder.layers.4.self_attn.k_proj.bias', 'model.decoder.layers.4.self_attn.v_proj.weight', 'model.decoder.layers.4.self_attn.v_proj.bias', 'model.decoder.layers.4.self_attn.q_proj.weight', 'model.decoder.layers.4.self_attn.q_proj.bias', 'model.decoder.layers.4.self_attn.out_proj.weight', 'model.decoder.layers.4.self_attn.out_proj.bias', 'model.decoder.layers.4.self_attn_layer_norm.weight', 'model.decoder.layers.4.self_attn_layer_norm.bias', 'model.decoder.layers.4.encoder_attn.k_proj.weight', 'model.decoder.layers.4.encoder_attn.k_proj.bias', 'model.decoder.layers.4.encoder_attn.v_proj.weight', 'model.decoder.layers.4.encoder_attn.v_proj.bias', 'model.decoder.layers.4.encoder_attn.q_proj.weight', 'model.decoder.layers.4.encoder_attn.q_proj.bias', 'model.decoder.layers.4.encoder_attn.out_proj.weight', 'model.decoder.layers.4.encoder_attn.out_proj.bias', 'model.decoder.layers.4.encoder_attn_layer_norm.weight', 'model.decoder.layers.4.encoder_attn_layer_norm.bias', 'model.decoder.layers.4.fc1.weight', 'model.decoder.layers.4.fc1.bias', 'model.decoder.layers.4.fc2.weight', 'model.decoder.layers.4.fc2.bias', 'model.decoder.layers.4.final_layer_norm.weight', 'model.decoder.layers.4.final_layer_norm.bias', 'model.decoder.layers.5.self_attn.k_proj.weight', 'model.decoder.layers.5.self_attn.k_proj.bias', 'model.decoder.layers.5.self_attn.v_proj.weight', 'model.decoder.layers.5.self_attn.v_proj.bias', 'model.decoder.layers.5.self_attn.q_proj.weight', 'model.decoder.layers.5.self_attn.q_proj.bias', 'model.decoder.layers.5.self_attn.out_proj.weight', 'model.decoder.layers.5.self_attn.out_proj.bias', 'model.decoder.layers.5.self_attn_layer_norm.weight', 'model.decoder.layers.5.self_attn_layer_norm.bias', 'model.decoder.layers.5.encoder_attn.k_proj.weight', 'model.decoder.layers.5.encoder_attn.k_proj.bias', 'model.decoder.layers.5.encoder_attn.v_proj.weight', 'model.decoder.layers.5.encoder_attn.v_proj.bias', 'model.decoder.layers.5.encoder_attn.q_proj.weight', 'model.decoder.layers.5.encoder_attn.q_proj.bias', 'model.decoder.layers.5.encoder_attn.out_proj.weight', 'model.decoder.layers.5.encoder_attn.out_proj.bias', 'model.decoder.layers.5.encoder_attn_layer_norm.weight', 'model.decoder.layers.5.encoder_attn_layer_norm.bias', 'model.decoder.layers.5.fc1.weight', 'model.decoder.layers.5.fc1.bias', 'model.decoder.layers.5.fc2.weight', 'model.decoder.layers.5.fc2.bias', 'model.decoder.layers.5.final_layer_norm.weight', 'model.decoder.layers.5.final_layer_norm.bias', 'model.decoder.layernorm_embedding.weight', 'model.decoder.layernorm_embedding.bias'] - This IS expected if you are initializing BartEXP from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BartEXP from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of BartEXP were not initialized from the model checkpoint at facebook/bart-base and are newly initialized: ['model.bart.final_logits_bias', 'model.bart.model.shared.weight', 'model.bart.model.encoder.embed_tokens.weight', 'model.bart.model.encoder.embed_positions.weight', 'model.bart.model.encoder.layers.0.self_attn.k_proj.weight', 'model.bart.model.encoder.layers.0.self_attn.k_proj.bias', 'model.bart.model.encoder.layers.0.self_attn.v_proj.weight', 'model.bart.model.encoder.layers.0.self_attn.v_proj.bias', 'model.bart.model.encoder.layers.0.self_attn.q_proj.weight', 'model.bart.model.encoder.layers.0.self_attn.q_proj.bias', 'model.bart.model.encoder.layers.0.self_attn.out_proj.weight', 'model.bart.model.encoder.layers.0.self_attn.out_proj.bias', 'model.bart.model.encoder.layers.0.self_attn_layer_norm.weight', 'model.bart.model.encoder.layers.0.self_attn_layer_norm.bias', 'model.bart.model.encoder.layers.0.fc1.weight', 'model.bart.model.encoder.layers.0.fc1.bias', 'model.bart.model.encoder.layers.0.fc2.weight', 'model.bart.model.encoder.layers.0.fc2.bias', 'model.bart.model.encoder.layers.0.final_layer_norm.weight', 'model.bart.model.encoder.layers.0.final_layer_norm.bias', 'model.bart.model.encoder.layers.1.self_attn.k_proj.weight', 'model.bart.model.encoder.layers.1.self_attn.k_proj.bias', 'model.bart.model.encoder.layers.1.self_attn.v_proj.weight', 'model.bart.model.encoder.layers.1.self_attn.v_proj.bias', 'model.bart.model.encoder.layers.1.self_attn.q_proj.weight', 'model.bart.model.encoder.layers.1.self_attn.q_proj.bias', 'model.bart.model.encoder.layers.1.self_attn.out_proj.weight', 'model.bart.model.encoder.layers.1.self_attn.out_proj.bias', 'model.bart.model.encoder.layers.1.self_attn_layer_norm.weight', 'model.bart.model.encoder.layers.1.self_attn_layer_norm.bias', 'model.bart.model.encoder.layers.1.fc1.weight', 'model.bart.model.encoder.layers.1.fc1.bias', 'model.bart.model.encoder.layers.1.fc2.weight', 'model.bart.model.encoder.layers.1.fc2.bias', 'model.bart.model.encoder.layers.1.final_layer_norm.weight', 'model.bart.model.encoder.layers.1.final_layer_norm.bias', 'model.bart.model.encoder.layers.2.self_attn.k_proj.weight', 'model.bart.model.encoder.layers.2.self_attn.k_proj.bias', 'model.bart.model.encoder.layers.2.self_attn.v_proj.weight', 'model.bart.model.encoder.layers.2.self_attn.v_proj.bias', 'model.bart.model.encoder.layers.2.self_attn.q_proj.weight', 'model.bart.model.encoder.layers.2.self_attn.q_proj.bias', 'model.bart.model.encoder.layers.2.self_attn.out_proj.weight', 'model.bart.model.encoder.layers.2.self_attn.out_proj.bias', 'model.bart.model.encoder.layers.2.self_attn_layer_norm.weight', 'model.bart.model.encoder.layers.2.self_attn_layer_norm.bias', 'model.bart.model.encoder.layers.2.fc1.weight', 'model.bart.model.encoder.layers.2.fc1.bias', 'model.bart.model.encoder.layers.2.fc2.weight', 'model.bart.model.encoder.layers.2.fc2.bias', 'model.bart.model.encoder.layers.2.final_layer_norm.weight', 'model.bart.model.encoder.layers.2.final_layer_norm.bias', 'model.bart.model.encoder.layers.3.self_attn.k_proj.weight', 'model.bart.model.encoder.layers.3.self_attn.k_proj.bias', 'model.bart.model.encoder.layers.3.self_attn.v_proj.weight', 'model.bart.model.encoder.layers.3.self_attn.v_proj.bias', 'model.bart.model.encoder.layers.3.self_attn.q_proj.weight', 'model.bart.model.encoder.layers.3.self_attn.q_proj.bias', 'model.bart.model.encoder.layers.3.self_attn.out_proj.weight', 'model.bart.model.encoder.layers.3.self_attn.out_proj.bias', 
'model.bart.model.encoder.layers.3.self_attn_layer_norm.weight', 'model.bart.model.encoder.layers.3.self_attn_layer_norm.bias', 'model.bart.model.encoder.layers.3.fc1.weight', 'model.bart.model.encoder.layers.3.fc1.bias', 'model.bart.model.encoder.layers.3.fc2.weight', 'model.bart.model.encoder.layers.3.fc2.bias', 'model.bart.model.encoder.layers.3.final_layer_norm.weight', 'model.bart.model.encoder.layers.3.final_layer_norm.bias', 'model.bart.model.encoder.layers.4.self_attn.k_proj.weight', 'model.bart.model.encoder.layers.4.self_attn.k_proj.bias', 'model.bart.model.encoder.layers.4.self_attn.v_proj.weight', 'model.bart.model.encoder.layers.4.self_attn.v_proj.bias', 'model.bart.model.encoder.layers.4.self_attn.q_proj.weight', 'model.bart.model.encoder.layers.4.self_attn.q_proj.bias', 'model.bart.model.encoder.layers.4.self_attn.out_proj.weight', 'model.bart.model.encoder.layers.4.self_attn.out_proj.bias', 'model.bart.model.encoder.layers.4.self_attn_layer_norm.weight', 'model.bart.model.encoder.layers.4.self_attn_layer_norm.bias', 'model.bart.model.encoder.layers.4.fc1.weight', 'model.bart.model.encoder.layers.4.fc1.bias', 'model.bart.model.encoder.layers.4.fc2.weight', 'model.bart.model.encoder.layers.4.fc2.bias', 'model.bart.model.encoder.layers.4.final_layer_norm.weight', 'model.bart.model.encoder.layers.4.final_layer_norm.bias', 'model.bart.model.encoder.layers.5.self_attn.k_proj.weight', 'model.bart.model.encoder.layers.5.self_attn.k_proj.bias', 'model.bart.model.encoder.layers.5.self_attn.v_proj.weight', 'model.bart.model.encoder.layers.5.self_attn.v_proj.bias', 'model.bart.model.encoder.layers.5.self_attn.q_proj.weight', 'model.bart.model.encoder.layers.5.self_attn.q_proj.bias', 'model.bart.model.encoder.layers.5.self_attn.out_proj.weight', 'model.bart.model.encoder.layers.5.self_attn.out_proj.bias', 'model.bart.model.encoder.layers.5.self_attn_layer_norm.weight', 'model.bart.model.encoder.layers.5.self_attn_layer_norm.bias', 'model.bart.model.encoder.layers.5.fc1.weight', 'model.bart.model.encoder.layers.5.fc1.bias', 'model.bart.model.encoder.layers.5.fc2.weight', 'model.bart.model.encoder.layers.5.fc2.bias', 'model.bart.model.encoder.layers.5.final_layer_norm.weight', 'model.bart.model.encoder.layers.5.final_layer_norm.bias', 'model.bart.model.encoder.layernorm_embedding.weight', 'model.bart.model.encoder.layernorm_embedding.bias', 'model.bart.model.decoder.embed_tokens.weight', 'model.bart.model.decoder.embed_positions.weight', 'model.bart.model.decoder.layers.0.self_attn.k_proj.weight', 'model.bart.model.decoder.layers.0.self_attn.k_proj.bias', 'model.bart.model.decoder.layers.0.self_attn.v_proj.weight', 'model.bart.model.decoder.layers.0.self_attn.v_proj.bias', 'model.bart.model.decoder.layers.0.self_attn.q_proj.weight', 'model.bart.model.decoder.layers.0.self_attn.q_proj.bias', 'model.bart.model.decoder.layers.0.self_attn.out_proj.weight', 'model.bart.model.decoder.layers.0.self_attn.out_proj.bias', 'model.bart.model.decoder.layers.0.self_attn_layer_norm.weight', 'model.bart.model.decoder.layers.0.self_attn_layer_norm.bias', 'model.bart.model.decoder.layers.0.encoder_attn.k_proj.weight', 'model.bart.model.decoder.layers.0.encoder_attn.k_proj.bias', 'model.bart.model.decoder.layers.0.encoder_attn.v_proj.weight', 'model.bart.model.decoder.layers.0.encoder_attn.v_proj.bias', 'model.bart.model.decoder.layers.0.encoder_attn.q_proj.weight', 'model.bart.model.decoder.layers.0.encoder_attn.q_proj.bias', 'model.bart.model.decoder.layers.0.encoder_attn.out_proj.weight', 
'model.bart.model.decoder.layers.0.encoder_attn.out_proj.bias', 'model.bart.model.decoder.layers.0.encoder_attn_layer_norm.weight', 'model.bart.model.decoder.layers.0.encoder_attn_layer_norm.bias', 'model.bart.model.decoder.layers.0.fc1.weight', 'model.bart.model.decoder.layers.0.fc1.bias', 'model.bart.model.decoder.layers.0.fc2.weight', 'model.bart.model.decoder.layers.0.fc2.bias', 'model.bart.model.decoder.layers.0.final_layer_norm.weight', 'model.bart.model.decoder.layers.0.final_layer_norm.bias', 'model.bart.model.decoder.layers.1.self_attn.k_proj.weight', 'model.bart.model.decoder.layers.1.self_attn.k_proj.bias', 'model.bart.model.decoder.layers.1.self_attn.v_proj.weight', 'model.bart.model.decoder.layers.1.self_attn.v_proj.bias', 'model.bart.model.decoder.layers.1.self_attn.q_proj.weight', 'model.bart.model.decoder.layers.1.self_attn.q_proj.bias', 'model.bart.model.decoder.layers.1.self_attn.out_proj.weight', 'model.bart.model.decoder.layers.1.self_attn.out_proj.bias', 'model.bart.model.decoder.layers.1.self_attn_layer_norm.weight', 'model.bart.model.decoder.layers.1.self_attn_layer_norm.bias', 'model.bart.model.decoder.layers.1.encoder_attn.k_proj.weight', 'model.bart.model.decoder.layers.1.encoder_attn.k_proj.bias', 'model.bart.model.decoder.layers.1.encoder_attn.v_proj.weight', 'model.bart.model.decoder.layers.1.encoder_attn.v_proj.bias', 'model.bart.model.decoder.layers.1.encoder_attn.q_proj.weight', 'model.bart.model.decoder.layers.1.encoder_attn.q_proj.bias', 'model.bart.model.decoder.layers.1.encoder_attn.out_proj.weight', 'model.bart.model.decoder.layers.1.encoder_attn.out_proj.bias', 'model.bart.model.decoder.layers.1.encoder_attn_layer_norm.weight', 'model.bart.model.decoder.layers.1.encoder_attn_layer_norm.bias', 'model.bart.model.decoder.layers.1.fc1.weight', 'model.bart.model.decoder.layers.1.fc1.bias', 'model.bart.model.decoder.layers.1.fc2.weight', 'model.bart.model.decoder.layers.1.fc2.bias', 'model.bart.model.decoder.layers.1.final_layer_norm.weight', 'model.bart.model.decoder.layers.1.final_layer_norm.bias', 'model.bart.model.decoder.layers.2.self_attn.k_proj.weight', 'model.bart.model.decoder.layers.2.self_attn.k_proj.bias', 'model.bart.model.decoder.layers.2.self_attn.v_proj.weight', 'model.bart.model.decoder.layers.2.self_attn.v_proj.bias', 'model.bart.model.decoder.layers.2.self_attn.q_proj.weight', 'model.bart.model.decoder.layers.2.self_attn.q_proj.bias', 'model.bart.model.decoder.layers.2.self_attn.out_proj.weight', 'model.bart.model.decoder.layers.2.self_attn.out_proj.bias', 'model.bart.model.decoder.layers.2.self_attn_layer_norm.weight', 'model.bart.model.decoder.layers.2.self_attn_layer_norm.bias', 'model.bart.model.decoder.layers.2.encoder_attn.k_proj.weight', 'model.bart.model.decoder.layers.2.encoder_attn.k_proj.bias', 'model.bart.model.decoder.layers.2.encoder_attn.v_proj.weight', 'model.bart.model.decoder.layers.2.encoder_attn.v_proj.bias', 'model.bart.model.decoder.layers.2.encoder_attn.q_proj.weight', 'model.bart.model.decoder.layers.2.encoder_attn.q_proj.bias', 'model.bart.model.decoder.layers.2.encoder_attn.out_proj.weight', 'model.bart.model.decoder.layers.2.encoder_attn.out_proj.bias', 'model.bart.model.decoder.layers.2.encoder_attn_layer_norm.weight', 'model.bart.model.decoder.layers.2.encoder_attn_layer_norm.bias', 'model.bart.model.decoder.layers.2.fc1.weight', 'model.bart.model.decoder.layers.2.fc1.bias', 'model.bart.model.decoder.layers.2.fc2.weight', 'model.bart.model.decoder.layers.2.fc2.bias', 
'model.bart.model.decoder.layers.2.final_layer_norm.weight', 'model.bart.model.decoder.layers.2.final_layer_norm.bias', 'model.bart.model.decoder.layers.3.self_attn.k_proj.weight', 'model.bart.model.decoder.layers.3.self_attn.k_proj.bias', 'model.bart.model.decoder.layers.3.self_attn.v_proj.weight', 'model.bart.model.decoder.layers.3.self_attn.v_proj.bias', 'model.bart.model.decoder.layers.3.self_attn.q_proj.weight', 'model.bart.model.decoder.layers.3.self_attn.q_proj.bias', 'model.bart.model.decoder.layers.3.self_attn.out_proj.weight', 'model.bart.model.decoder.layers.3.self_attn.out_proj.bias', 'model.bart.model.decoder.layers.3.self_attn_layer_norm.weight', 'model.bart.model.decoder.layers.3.self_attn_layer_norm.bias', 'model.bart.model.decoder.layers.3.encoder_attn.k_proj.weight', 'model.bart.model.decoder.layers.3.encoder_attn.k_proj.bias', 'model.bart.model.decoder.layers.3.encoder_attn.v_proj.weight', 'model.bart.model.decoder.layers.3.encoder_attn.v_proj.bias', 'model.bart.model.decoder.layers.3.encoder_attn.q_proj.weight', 'model.bart.model.decoder.layers.3.encoder_attn.q_proj.bias', 'model.bart.model.decoder.layers.3.encoder_attn.out_proj.weight', 'model.bart.model.decoder.layers.3.encoder_attn.out_proj.bias', 'model.bart.model.decoder.layers.3.encoder_attn_layer_norm.weight', 'model.bart.model.decoder.layers.3.encoder_attn_layer_norm.bias', 'model.bart.model.decoder.layers.3.fc1.weight', 'model.bart.model.decoder.layers.3.fc1.bias', 'model.bart.model.decoder.layers.3.fc2.weight', 'model.bart.model.decoder.layers.3.fc2.bias', 'model.bart.model.decoder.layers.3.final_layer_norm.weight', 'model.bart.model.decoder.layers.3.final_layer_norm.bias', 'model.bart.model.decoder.layers.4.self_attn.k_proj.weight', 'model.bart.model.decoder.layers.4.self_attn.k_proj.bias', 'model.bart.model.decoder.layers.4.self_attn.v_proj.weight', 'model.bart.model.decoder.layers.4.self_attn.v_proj.bias', 'model.bart.model.decoder.layers.4.self_attn.q_proj.weight', 'model.bart.model.decoder.layers.4.self_attn.q_proj.bias', 'model.bart.model.decoder.layers.4.self_attn.out_proj.weight', 'model.bart.model.decoder.layers.4.self_attn.out_proj.bias', 'model.bart.model.decoder.layers.4.self_attn_layer_norm.weight', 'model.bart.model.decoder.layers.4.self_attn_layer_norm.bias', 'model.bart.model.decoder.layers.4.encoder_attn.k_proj.weight', 'model.bart.model.decoder.layers.4.encoder_attn.k_proj.bias', 'model.bart.model.decoder.layers.4.encoder_attn.v_proj.weight', 'model.bart.model.decoder.layers.4.encoder_attn.v_proj.bias', 'model.bart.model.decoder.layers.4.encoder_attn.q_proj.weight', 'model.bart.model.decoder.layers.4.encoder_attn.q_proj.bias', 'model.bart.model.decoder.layers.4.encoder_attn.out_proj.weight', 'model.bart.model.decoder.layers.4.encoder_attn.out_proj.bias', 'model.bart.model.decoder.layers.4.encoder_attn_layer_norm.weight', 'model.bart.model.decoder.layers.4.encoder_attn_layer_norm.bias', 'model.bart.model.decoder.layers.4.fc1.weight', 'model.bart.model.decoder.layers.4.fc1.bias', 'model.bart.model.decoder.layers.4.fc2.weight', 'model.bart.model.decoder.layers.4.fc2.bias', 'model.bart.model.decoder.layers.4.final_layer_norm.weight', 'model.bart.model.decoder.layers.4.final_layer_norm.bias', 'model.bart.model.decoder.layers.5.self_attn.k_proj.weight', 'model.bart.model.decoder.layers.5.self_attn.k_proj.bias', 'model.bart.model.decoder.layers.5.self_attn.v_proj.weight', 'model.bart.model.decoder.layers.5.self_attn.v_proj.bias', 'model.bart.model.decoder.layers.5.self_attn.q_proj.weight', 
'model.bart.model.decoder.layers.5.self_attn.q_proj.bias', 'model.bart.model.decoder.layers.5.self_attn.out_proj.weight', 'model.bart.model.decoder.layers.5.self_attn.out_proj.bias', 'model.bart.model.decoder.layers.5.self_attn_layer_norm.weight', 'model.bart.model.decoder.layers.5.self_attn_layer_norm.bias', 'model.bart.model.decoder.layers.5.encoder_attn.k_proj.weight', 'model.bart.model.decoder.layers.5.encoder_attn.k_proj.bias', 'model.bart.model.decoder.layers.5.encoder_attn.v_proj.weight', 'model.bart.model.decoder.layers.5.encoder_attn.v_proj.bias', 'model.bart.model.decoder.layers.5.encoder_attn.q_proj.weight', 'model.bart.model.decoder.layers.5.encoder_attn.q_proj.bias', 'model.bart.model.decoder.layers.5.encoder_attn.out_proj.weight', 'model.bart.model.decoder.layers.5.encoder_attn.out_proj.bias', 'model.bart.model.decoder.layers.5.encoder_attn_layer_norm.weight', 'model.bart.model.decoder.layers.5.encoder_attn_layer_norm.bias', 'model.bart.model.decoder.layers.5.fc1.weight', 'model.bart.model.decoder.layers.5.fc1.bias', 'model.bart.model.decoder.layers.5.fc2.weight', 'model.bart.model.decoder.layers.5.fc2.bias', 'model.bart.model.decoder.layers.5.final_layer_norm.weight', 'model.bart.model.decoder.layers.5.final_layer_norm.bias', 'model.bart.model.decoder.layernorm_embedding.weight', 'model.bart.model.decoder.layernorm_embedding.bias', 'model.bart.lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. I am aiming to use BartEXP for some modification over the output (adding another head and using multiple losses) so I need it to be in that format… How do I need to construct the class so I will inherit BartForConditionalGeneration and use it’s forward / overwrite it?
Hi @latent, this won't work since it changes the module structure. You can init BartForConditionalGeneration inside BartExp by calling from_pretrained, or you can create a class like BartForConditionalGeneration with your custom layers inside that class. You can see how BartForConditionalGeneration is implemented here; you could modify that class easily.
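A hedged sketch of the first option: let the wrapper load the pre-trained weights itself via from_pretrained, so the state-dict keys match and no warning is emitted. The extra head and how it is wired to the encoder output are assumptions for the "additional head and multiple losses" goal, not a prescribed recipe.

import torch.nn as nn
from transformers import BartForConditionalGeneration

class BartExp(nn.Module):
    """Wrapper that loads pre-trained BART internally, then adds a custom head."""
    def __init__(self, checkpoint="facebook/bart-base", num_extra_labels=2):
        super().__init__()
        self.bart = BartForConditionalGeneration.from_pretrained(checkpoint)
        # hypothetical extra head; adapt to whatever additional loss you need
        self.extra_head = nn.Linear(self.bart.config.d_model, num_extra_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        outputs = self.bart(input_ids=input_ids,
                            attention_mask=attention_mask,
                            labels=labels,
                            output_hidden_states=True)
        # e.g. classify from the last encoder hidden state of the first token
        enc_last = outputs.encoder_hidden_states[-1][:, 0, :]
        extra_logits = self.extra_head(enc_last)
        return outputs.loss, outputs.logits, extra_logits

model = BartExp()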
0
huggingface
Beginners
TokenClassification vs SequenceClassification
https://discuss.huggingface.co/t/tokenclassification-vs-sequenceclassification/4416
Hi, can anyone simply explain that what is the difference between these two? and which one is better for Multi-Label classification tasks?
I really need to know. anyone please?
0
huggingface
Beginners
mBART for translation truncating result
https://discuss.huggingface.co/t/mbart-for-translation-truncating-result/4429
I am using mBART (specifically "mrm8488/mbart-large-finetuned-opus-es-en-translation") for translation and the model seems to be truncating the output. Below is the code and the result. Has anyone used this model successfully? Can you see an error in my code? Any suggestions on how I might get a better translation with this model? << Original Text: "Esta investigación presenta un análisis del gasto público federal asignado a los hogares en condición de marginación con enfoque asistencial. Se sustenta en que éste debe orientarse a facilitar la inversión y el impulso a los procesos de trabajo productivo, generadores de crecimiento y empleo. Para esto se presenta una propuesta de evaluación cuantitativa basada en el modelo de contabilidad social que formula el Sistema de Cuentas Nacionales, en su revisión de 1993 y actualizada con la misma perspectiva en 2008. Los resultados se analizan con el modelo de multiplicador keynesiano.">> model_name1 = "mrm8488/mbart-large-finetuned-opus-es-en-translation" tokenizer1 = AutoTokenizer.from_pretrained(model_name1) model1 = AutoModelForSeq2SeqLM.from_pretrained(model_name1) input_ids1 = tokenizer1(text, return_tensors="pt").input_ids outputs1 = model1.generate(input_ids1, num_return_sequences=4, num_beams=6, do_sample=True, early_stopping=True) print(tokenizer1.decode(outputs1[0])) <s> This Research presents a three-year review of the federal public expenditure model, the same as the nationally-allotment model,</s>
Hi @Buckeyes2019, looking at the docs for MBart, it seems that you need to prepare the data in a special format and define decoder_start_token_id. I'm not sure whether that will solve your truncation issue (I wonder if you can set max_length in model.generate to a larger value?), and looking at your example it seems like the model is "summarising" the input text instead of translating it, which seems odd to me …
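A hedged sketch of the max_length suggestion; the value of 512 is an assumption (pick anything comfortably larger than the expected translation), and dropping do_sample tends to give more faithful translations.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "mrm8488/mbart-large-finetuned-opus-es-en-translation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Esta investigación presenta un análisis del gasto público federal ..."  # shortened here
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(inputs.input_ids,
                         num_beams=6,
                         max_length=512,       # assumption: large enough for the full translation
                         early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))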
0
huggingface
Beginners
How to generate a samples of summaries with Pegasus?
https://discuss.huggingface.co/t/how-to-generate-a-samples-of-summaries-with-pegasus/4426
I am a beginner around here. I would like to generate some samples of abstractive text summaries using Pegasus. I used the code snippet from the Pegasus — transformers 4.3.0 documentation. However, I realize the summary is the same every time I decode. How should I generate distinct samples of summaries? Thank you in advance.
Here is an example of generating summaries with custom parameters. I am only using a small subset of the available tweaks. For more info, I find this blog post very helpful: How to generate text: using different decoding methods for language generation with Transformers 7. #! pip install transformers #! pip install datasets #! pip install sentencepiece from transformers import PegasusTokenizer, PegasusForConditionalGeneration import datasets model = PegasusForConditionalGeneration.from_pretrained("sshleifer/distill-pegasus-xsum-16-4") tokenizer = PegasusTokenizer.from_pretrained("sshleifer/distill-pegasus-xsum-16-4") # Download data samples data = datasets.load_dataset("xsum", split="validation[:10]") # Pick two examples text2summarize_1 = data["document"][0] text2summarize_2 = data["document"][3] #print(text2summarize_1) #print(text2summarize_2) def generate_for_sample(sample, **kwargs): """ Returns decoded summary (code snippets from the docs) kwargs are passed on to the model's generate function """ inputs = tokenizer(sample, truncation=True, max_length=1024, return_tensors='pt') summary_ids = model.generate(inputs['input_ids'], **kwargs) return [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids] print("Summaries generated with default parameters:") summary_1 = generate_for_sample(text2summarize_1) summary_2 = generate_for_sample(text2summarize_2) print("summary_1: {}".format(summary_1)) print("summary_2: {}".format(summary_2)) print("Some default parameter values: ", "num_beams={}, do_sample={}, top_k={}, top_p={}". format(model.config.num_beams, model.config.do_sample, model.config.top_k, model.config.top_p)) print("Summaries generated with custom parameter values:") summary_1 = generate_for_sample(text2summarize_1, num_beams=4) summary_2 = generate_for_sample(text2summarize_2, do_sample=True, top_k=10, top_p=0.8) print("summary_1: {}".format(summary_1)) print("summary_2: {}".format(summary_2)) Output: Summaries generated with default parameters: summary_1: [‘Apple has been accused of misleading customers in Australia over its new iPad.’] summary_2: [“The world’s first marine energy system has been installed in the North Sea.”] Some default parameter values: num_beams=8, do_sample=False, top_k=50, top_p=1.0 Summaries generated with custom parameter values: summary_1: [‘Apple is facing legal action in Australia over its new iPad with wi-fi and 4G.’] summary_2: [‘A marine energy system has been installed in the North Sea for the first time.’]
0
huggingface
Beginners
Is it possible to generate GPT2 output without an input prompt text
https://discuss.huggingface.co/t/is-it-possible-to-generate-gpt2-output-without-an-input-prompt-text/4293
Hi, So as the title says, I want to generate text without using any prompt text, just based on what the model learned from the training dataset. I tried by giving a single space as the input prompt but it did not work. So I tried below: prompt_text = ' ' encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt") output_sequences = model.generate( input_ids=encoded_prompt, max_length=50 + len(encoded_prompt[0]), temperature=0.7, top_k=0, top_p=0.9, repetition_penalty=1.0, do_sample=True, num_return_sequences=5, ) and got the error: RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0] because the unspecified dimension size -1 can be any value and is ambiguous Thanks
You can wrap your samples in special tokens e.g. <|startoftext|> and <|endoftext|>. Then you can prompt the model by feeding it <|startoftext|> and stop the generation at <|endoftext|>.
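A rough sketch of that idea (the wrapper token and the sampling settings below are assumptions; the model would also need to be fine-tuned on data wrapped the same way):
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Register the wrapper token; <|endoftext|> is already GPT-2's EOS token
tokenizer.add_special_tokens({"bos_token": "<|startoftext|>"})
model.resize_token_embeddings(len(tokenizer))

# Prompt with only the start-of-text token and stop at <|endoftext|>
input_ids = tokenizer("<|startoftext|>", return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_length=50,
    do_sample=True,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))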
0
huggingface
Beginners
Run_clm.py stops after some % with error
https://discuss.huggingface.co/t/run-clm-py-stops-after-some-with-error/4359
Hey i’m trying to fine tune a german model. Fine tuning worked previously with the gpt-medium and a small input.txt (around 100kb). Now i try to fine tune dbmdz/german with a dataset of ~3mb (Some fiction books i pasted in the .txt file). after some % i get a wall of text and the following error: “C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [8,0,0], thread: [88,0,0] Assertion srcIndex < srcSelectDimSize failed. C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [8,0,0], thread: [89,0,0] Assertion srcIndex < srcSelectDimSize failed. C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [8,0,0], thread: [90,0,0] Assertion srcIndex < srcSelectDimSize failed. C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [8,0,0], thread: [91,0,0] Assertion srcIndex < srcSelectDimSize failed. C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [8,0,0], thread: [92,0,0] Assertion srcIndex < srcSelectDimSize failed. C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [8,0,0], thread: [93,0,0] Assertion srcIndex < srcSelectDimSize failed. C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [8,0,0], thread: [94,0,0] Assertion srcIndex < srcSelectDimSize failed. C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:662: block: [8,0,0], thread: [95,0,0] Assertion srcIndex < srcSelectDimSize failed. Traceback (most recent call last): File “run_clm.py”, line 407, in main() File “run_clm.py”, line 376, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File “E:\anaconda\lib\site-packages\transformers\trainer.py”, line 940, in train tr_loss += self.training_step(model, inputs) File “E:\anaconda\lib\site-packages\transformers\trainer.py”, line 1302, in training_step loss = self.compute_loss(model, inputs) File “E:\anaconda\lib\site-packages\transformers\trainer.py”, line 1334, in compute_loss outputs = model(**inputs) File “E:\anaconda\lib\site-packages\torch\nn\modules\module.py”, line 889, in _call_impl result = self.forward(*input, **kwargs) File “E:\anaconda\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py”, line 899, in forward transformer_outputs = self.transformer( File “E:\anaconda\lib\site-packages\torch\nn\modules\module.py”, line 889, in _call_impl result = self.forward(*input, **kwargs) File “E:\anaconda\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py”, line 689, in forward inputs_embeds = self.wte(input_ids) File “E:\anaconda\lib\site-packages\torch\nn\modules\module.py”, line 889, in _call_impl result = self.forward(*input, **kwargs) File “E:\anaconda\lib\site-packages\torch\nn\modules\sparse.py”, line 145, in forward return F.embedding( File “E:\anaconda\lib\site-packages\torch\nn\functional.py”, line 1913, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: CUDA error: device-side assert triggered” I already tried altering --block_size and --per_device_train_batch_size 1 but nothing seems to help.
Ok, it seems to be a problem with the text in the train.txt file. If I swap the text, everything is fine. What can it be?
0
huggingface
Beginners
Preprocessing step for fine-tuning language model
https://discuss.huggingface.co/t/preprocessing-step-for-fine-tuning-language-model/4355
Hi, I want to fine-tune the language model of BERT/DistilBERT (and later add a sequence classification head; the task is a kind of sentiment analysis). I have a mix of tweets, other social media posts, and speeches. Now, I’m wondering what the necessary preprocessing steps are. I was thinking about removing urls, hashtags, and user mentions. Is this necessary? Or shall I replace them with a special token? Thanks, Max
Since BERT works with a WordPiece tokenizer, I wouldn’t do any of that and see what happens, before you put effort into pre-processing. Since your texts aren’t really domain-specific, you may see decent results for your “kind of sentiment analysis” without doing any pre-processing
0
huggingface
Beginners
Value error : sentencepiece
https://discuss.huggingface.co/t/value-error-sentencepiece/4313
Hi, I have value error: This tokenizer cannot be instantiated. Please make sure you have sentencepiece installed in order to use this tokenizer. I installed and updated sentencepiece (0.1.95) but still getting the same error. Can someone help? Thank you!
Hi @Katarina, what happens if you try installing transformers in a new environment with pip install transformers[sentencepiece] Does that solve the problem?
0
huggingface
Beginners
CUDA is out of memory
https://discuss.huggingface.co/t/cuda-is-out-of-memory/4324
Hi, I fine-tuned xlm-roberta-large according to this tutorial 9. I ran into a problem: during training on Colab, CUDA runs out of memory. RuntimeError: CUDA out of memory. Tried to allocate 978.00 MiB (GPU 0; 14.76 GiB total capacity; 12.62 GiB already allocated; 919.75 MiB free; 12.83 GiB reserved in total by PyTorch And this is with batch_size = 1. I tried the same with xlm-roberta-base; training lasts longer but eventually ends up with the same problem. I tried bert-base-uncased, which is in the tutorial, and it’s okay. But my data is multilingual! I want to understand whether this problem is simply due to the natural limits of Colab or whether it is my fault. Is it possible to fine-tune xlm-roberta-large in Colab? Thanks!
Hi @Constantin, it’s possible that you’re getting allocated one of the K80 GPUs on Colab which probably doesn’t have enough RAM to handle xlm-roberta-large. You can “cheat” your way to a better GPU (either Tesla T4 or P100) by selecting Runtime > Factory reset runtime in the settings (screenshot of the Colab Runtime menu). You can check what kind of GPU your notebook is running by executing the following in a code cell: !nvidia-smi
0
huggingface
Beginners
Matching Questions with Answers in Chat Text with multiple threads
https://discuss.huggingface.co/t/matching-questions-with-answers-in-chat-text-with-multiple-threads/4317
Hey everyone! amateur data scientist here, have worked with nltk/spacy/gensim insofar as to do soft cosine similarity of text documents in the past. I’m working on a new project, where I’m trying to identify questions and answers in a 6 month chat history where there are multiple participants and multiple threads of conversation (some linked some not). I’ve gotten question identification somewhat down, but am struggling to figure out a good approach to identifying answers. One approach I’ve thought about is through entity recognition clustering, though that seems a little too narrow/rigid. I saw this adversarial_qa · Datasets at Hugging Face 3 and was wondering if it might be applicable to my use case? If so, any suggestions on how to transfer learning train it on the domain of my corpus (which is mostly code and niche topic related text)? Thanks in advance all
Hi @ilemi, if you’ve already got a set of questions nailed down, couldn’t you use a retriever like TF-IDF / BM25 / DPR to return candidate answers (i.e. passages of text that are indexed in some fashion)? If yes, there’s a nice question-answering library called haystack that provides various retrievers for you to play with: https://haystack.deepset.ai/docs/latest/retrievermd 3
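As a very rough illustration of the retriever idea with plain TF-IDF (sklearn here rather than haystack; the chat messages are made up):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = ["How do I restart the ingestion job?"]           # detected questions
candidates = ["You can restart it with the job restart command.",  # later chat messages
              "Lunch at noon?",
              "The ingestion job is restarted from the admin page."]

vectorizer = TfidfVectorizer().fit(questions + candidates)
q_vecs = vectorizer.transform(questions)
c_vecs = vectorizer.transform(candidates)

# Rank candidate messages for each question by cosine similarity
scores = cosine_similarity(q_vecs, c_vecs)
best = scores[0].argsort()[::-1]
print([candidates[i] for i in best[:2]])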
0
huggingface
Beginners
Sentence Similarity or Sentence Classification Task?
https://discuss.huggingface.co/t/sentence-similarity-or-sentence-classification-task/4288
I need to codify medical conditions with diagnostic codes. For example “head injury” may be coded as "S02.0, S02.1 Fracture of skull ". I would like to use a model to find likely diagnosis code candidates for entered text. What is the best approach to solving this task? I can either try to find the closest semantic similarity between input sentence and list of diagnosis or I can try to do multi-label classification where diagnostic code is a class. Any ideas, suggestions? Thanks.
Either of your approaches could work. Do you have a corpus of documents that contains both medical conditions and codified medical conditions?
0
huggingface
Beginners
How to set minimum length of generated text in hosted API
https://discuss.huggingface.co/t/how-to-set-minimum-length-of-generated-text-in-hosted-api/3893
I’m using the hosted API to generate text from gpt2-xl, like this: curl -X POST https://api-inference.huggingface.co/models/gpt2-xl \ -H "Authorization: Bearer api_org_AAAABBBBCCCCDDDD" \ -H "Content-Type: application/json" \ -d '{ "inputs":"Once upon a time, there was a horrible witch who", "options":{"wait_for_model":true} }' …which returns something like this: [{"generated_text":"Once upon a time, there was a horrible witch who had a cat named Chunky. She tortured and killed her cats and ate their fur and meat with the help from a huge snake that her mother fed to her with a spoon. The witch named"}] Which is great. Now I’d like to use a longer prompt (~1000 characters) and ask for a longer body of generated text in response (original length + ~1000 characters of new text). But I don’t see any info in the docs 6 about how to ask for a longer body of generated text. And if I make my prompt longer, the amount of generated text appended to my prompt gets proportionally shorter, and the whole response is about the same size. Is this a fundamental limitation of the hosted APIs or is there some way to achieve this?
Hi @benjismith, sorry for the late reply. Currently the only way you can do that is by using "inputs": "Once upon a time, there was a horrible witch who", "options": {"wait_for_model": true}, "parameters": {"max_length": 10} but that IS an issue, because you need to know how many tokens your prompt is to be precise. We’re going to add a better parameter for this and document it.
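Putting that together, a sketch of the full request from Python (the token is a placeholder and 200 is an arbitrary value; per the note above, max_length counts the prompt tokens too):
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2-xl"
headers = {"Authorization": "Bearer api_org_XXXX"}  # placeholder token

payload = {
    "inputs": "Once upon a time, there was a horrible witch who",
    "parameters": {"max_length": 200},      # total length in tokens, including the prompt
    "options": {"wait_for_model": True},
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())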
0
huggingface
Beginners
Failed attempt to use new Automatic Speech Recognition
https://discuss.huggingface.co/t/failed-attempt-to-use-new-automatic-speech-recognition/3558
I got excited seeing a tweet 2 Automatic Speech Recognition is in transformers 4.3.0, so I had to try it. Unfortunately, I got an error. I started by recording a 14 second test file on Quicktime, and then used VLC to convert it from .m4a to .wav The first part ran fine: import librosa import soundfile as sf import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") speech, rate = sf.read("test1.wav") The next line caused an error – which I fixed (as below): speech = librosa.resample(speech, rate, 16000) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) ~/Documents/projects/misc-aiml/wav2vec.py in ----> 16 speech = librosa.resample(speech, rate, 16000) ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/librosa/core/audio.py in resample(y, orig_sr, target_sr, res_type, fix, scale, **kwargs) 582 y_hat = samplerate.resample(y.T, ratio, converter_type=res_type).T 583 else: --> 584 y_hat = resampy.resample(y, orig_sr, target_sr, filter=res_type, axis=-1) 585 586 if fix: ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/resampy/core.py in resample(x, sr_orig, sr_new, axis, filter, **kwargs) 95 96 if shape[axis] < 1: ---> 97 raise ValueError('Input signal length={} is too small to ' 98 'resample from {}->{}'.format(x.shape[axis], sr_orig, sr_new)) 99 ValueError: Input signal length=2 is too small to resample from 44100->16000 Based on the discussion here 4, I changed it to speech.T, which now seem to work so far speech = librosa.resample(speech.T, rate, 16000) input_values = tokenizer(speech, return_tensors = 'pt').input_values However, then I get a different error: logits = model(input_values).logits --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) ~/Documents/projects/misc-aiml/wav2vec.py in <module> ----> 1 model(input_values) ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, input_values, output_attentions, output_hidden_states, return_dict, labels) 793 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 794 --> 795 outputs = self.wav2vec2( 796 input_values, 797 output_attentions=output_attentions, ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, input_values, output_attentions, output_hidden_states, return_dict) 641 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 642 --> 643 hidden_states = self.feature_extractor(input_values) 644 hidden_states = self.feature_projection(hidden_states) 645 ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, 
**kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, input_values) 179 hidden_states = input_values[:, None] 180 for conv_layer in self.conv_layers: --> 181 hidden_states = conv_layer(hidden_states) 182 183 return hidden_states ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, hidden_states) 113 114 def forward(self, hidden_states): --> 115 hidden_states = self.conv(hidden_states) 116 hidden_states = self.dropout(hidden_states) 117 hidden_states = self.layer_norm(hidden_states) ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/miniconda3/envs/wav2vec/lib/python3.8/site-packages/torch/nn/modules/conv.py in forward(self, input) 256 self.weight, self.bias, self.stride, 257 _single(0), self.dilation, self.groups) --> 258 return F.conv1d(input, self.weight, self.bias, self.stride, 259 self.padding, self.dilation, self.groups) 260 RuntimeError: Expected 3-dimensional input for 3-dimensional weight [512, 1, 10], but got 4-dimensional input of size [1, 1, 2, 221173] instead Does anyone happen to know anything about the new model, and what I might be doing wrong? Thanks!
I solved it - I made one change: logits = model(input_values[0]).logits I ran the model on the first element of the tensor (adding in the [0]) and now it succeeded. Thanks!
0
huggingface
Beginners
How to handle “entities” during tokenization?
https://discuss.huggingface.co/t/how-to-handle-entities-during-tokenization/4279
Hi everyone, The text I’m wanting to perform my downstream tasks on contains a lot of domain-specific references. When I say references I mean citations, i.e. character sequences that identify entities such as other documents. Since those entities are domain-specific, none of the pre-trained models will understand them and they will probably be split into sub-tokens by BERT. Those citations are extremely valuable in terms of the meaning of the content: which citations a document includes can say a lot about the topics of the content. Naturally, I’d like to preserve these citations and ideally also train meaningful embeddings for them. How do I best go about doing that? From what I can gather, I could add the tokens to the tokenizer (via add_tokens()) and then use TFBertForPreTraining() to continue to train BERT with domain-specific content (which will include those citations). Is that the right way to handle this? I’m not sure if add_tokens() is actually meant to expand the vocab or is just for additional special tokens like [CLS]. As always, any pointers would be much appreciated. So far this forum has been invaluable!
I think what you suggested is a reasonable approach. Just don’t forget to do model.resize_token_embeddings(len(tokenizer)) after using tokenizer.add_tokens You can then, as you suggested, train your BERT model with MLM loss on your specific corpus to learn these new embeddings and proceed after that to fine-tune it on different tasks.
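A minimal sketch of that sequence (the citation strings and the bert-base-uncased checkpoint are placeholders, and I am using the PyTorch MLM class here rather than TFBertForPreTraining):
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Add the domain-specific citation tokens so they are not split into sub-tokens
new_tokens = ["[DOC-1234]", "[DOC-5678]"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix to cover the new vocabulary entries
model.resize_token_embeddings(len(tokenizer))

print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
# ...then continue MLM training on the domain corpus so the new embeddings are learned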
0
huggingface
Beginners
Index out of range layoutlm
https://discuss.huggingface.co/t/index-out-of-range-layoutlm/2516
I am trying to fine tune LayoutLm for SROIE receipt named entity extraction. I checked the github page of Layoutlm and used their run_seq_labelling.py and preprocess.py on this new dataset i prepared but i am receiving following error: Iteration: 4%|█████▉ | 21/577 [00:53<23:45, 2.56s/it] Epoch: 0%| | 0/100 [00:53<?, ?it/s] Traceback (most recent call last): File "run_seq_labeling.py", line 812, in <module> main() File "run_seq_labeling.py", line 705, in main args, train_dataset, model, tokenizer, labels, pad_token_label_id File "run_seq_labeling.py", line 220, in train outputs = model(**inputs) File "/home/ml3/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ml3/.local/lib/python3.6/site-packages/layoutlm/modeling/layoutlm.py", line 221, in forward head_mask=head_mask, File "/home/ml3/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ml3/.local/lib/python3.6/site-packages/layoutlm/modeling/layoutlm.py", line 171, in forward input_ids, bbox, position_ids=position_ids, token_type_ids=token_type_ids File "/home/ml3/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ml3/.local/lib/python3.6/site-packages/layoutlm/modeling/layoutlm.py", line 82, in forward bbox[:, :, 2] - bbox[:, :, 0] File "/home/ml3/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ml3/.local/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 126, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/ml3/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1814, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self I am using transformers 2.9 as the github page states as a requirement
I found the issue. Turns out that OCR detects vertical text and in that case width comes up as negative
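For anyone hitting the same thing, one way to guard against it in pre-processing (a sketch, not the original fix) is to re-order the coordinates so every box satisfies x1 >= x0 and y1 >= y0 before it reaches the model:
def normalize_box(box):
    """Ensure (x0, y0, x1, y1) ordering for boxes coming from OCR."""
    x0, y0, x1, y1 = box
    return (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))

print(normalize_box((120, 30, 80, 45)))  # -> (80, 30, 120, 45)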
0
huggingface
Beginners
Is attention_mask needed for training Bart?
https://discuss.huggingface.co/t/is-attention-mask-needed-for-training-bart/4272
Hi, I’m experimenting with fine-tuning Bart for a summarization task. I tried both “with attention_mask” and “without attention_mask”, and it seems both worked. Could someone explain when to use attention_mask and why? Thanks in advance.
This should be of help Glossary — transformers 4.3.0 documentation 9
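In short, the mask matters once padding is involved; a small sketch of where it comes from (facebook/bart-base is just an example checkpoint):
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
batch = tokenizer(
    ["A short article.",
     "A much longer article that will force the first one to be padded."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
# 1 = real token, 0 = padding; pass this to the model so padded positions are ignored
print(batch["attention_mask"])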
0
huggingface
Beginners
Different prediction tensors on single item vs a list of items
https://discuss.huggingface.co/t/different-prediction-tensors-on-single-item-vs-a-list-of-items/3607
I am doing sentiment analysis on tweets. I used the roberta-base model. I trained the model on a dataset containing around 90,000 entries. When I predict using the saved model, the results are different when predicting single sentence and when the same sentence is one of the items in a list and all sentences are looped through to predict. E.g., “hi there” will give a tensor value and a different tensor value when passed in a list like: [“hi there”, “let’s go out”, “how are you?”] The difference is so high that a sentence which is positive and correctly predicted as positive when predicted for the single string is predicted as negative when passed in a list. Is it something that is expected? Or is there anything I need to make sure to avoid this?
Hi @rashub can you post the code you’re using to generate the predictions (see here for general advice on getting help)? Even better would be a Google Colab notebook to be able to inspect the inputs / outputs
0
huggingface
Beginners
NER with electra
https://discuss.huggingface.co/t/ner-with-electra/4262
Hello Everyone, I am new to hugging face models. I would like to use electra (electra-large-discriminator-finetuned-conll03-english) for entity recognition. I was unable to find the code to do it. Pointing me in the right direction would be a great help. Thanks
Hello @swaraj, if I understand correctly you’d like to use Electra for inference right? If yes, then the simplest thing would be to use the pipeline abstraction: Summary of the tasks — transformers 4.3.0 documentation 35 You can specify the model name to electra-large-discriminator-finetuned-conll03-english which should download the model from the hub and load it in the pipeline for tagging.
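A minimal sketch along those lines (the dbmdz/ namespace for the checkpoint is an assumption; check the model page on the hub for the exact id):
from transformers import pipeline

ner = pipeline(
    "ner",
    model="dbmdz/electra-large-discriminator-finetuned-conll03-english",  # hub id assumed
)
print(ner("Hugging Face is based in New York City."))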
0
huggingface
Beginners
How To Output “test_generations.txt” with run_seq2seq.py?
https://discuss.huggingface.co/t/how-to-output-test-generations-txt-with-run-seq2seq-py/3825
@stas or @sgugger would likely be able to answer this easily – thanks again for your comments on my previous query in December. I was using the finetune_trainer.py script back in December, and found that running a script like this… python3 -m torch.distributed.launch --nproc_per_node=8 /workspace/rabbit-py/transformers/examples/seq2seq/finetune_trainer.py \ --learning_rate=1e-4 \ --do_train --do_eval --do_predict \ --evaluation_strategy steps \ --predict_with_generate \ --n_test 100 \ --fp16 \ --sortish_sampler \ --num_train_epochs 24 \ --data_dir "/workspace/rabbit-py/corpii/short_name_sequential_source" \ --model_name_or_path "google/pegasus-large" \ --output_dir "/workspace/rabbit-py/predictions/$RUN" \ --per_device_train_batch_size 2\ --per_device_eval_batch_size 2\ --logging_steps 768\ --gradient_accumulation_steps 32\ --task 'summarization'\ --max_target_length 12 \ --val_max_target_length 12 \ --test_max_target_length 12 \ --overwrite_output_dir \ --freeze_embeds \ --adafactor \ --run_name $RUN "$@" … would output checkpoint folders that looked like this: The test_generations.txt file was exactly 100 lines long, so I assume it corresponded to the --n_test 100 argument, although I can’t be sure, as I struggled for a while to understand the difference between predict and eval and test, and eventually gave up as the terminology was just too confusing for me to understand. That said, the test_generations.txt file was generated and it was very useful. I have now migrated to the new seq2seq script, run_seq2seq.py, from here: transformers/examples/seq2seq at master · huggingface/transformers · GitHub 8 I am successfully using this, with a script like this: PREFIX=$(basename $BASH_SOURCE) python3 /workspace/fw-py/transformers/examples/seq2seq/run_seq2seq.py \ --model_name_or_path '/workspace/fw-py/models_foreign/pegasus_large' \ --do_train \ --do_eval \ --do_predict \ --logging_steps 768 \ --evaluation_strategy steps \ --num_train_epochs 10 \ --task summarization \ --train_file "/workspace/fw-py/corpii/${PREFIX}/train.json" \ --validation_file "/workspace/fw-py/corpii/${PREFIX}/val.json" \ --test_file "/workspace/fw-py/corpii/${PREFIX}/test.json" \ --output_dir "/workspace/fw-py/predictions/${PREFIX}" \ --overwrite_output_dir \ --per_device_train_batch_size=2 \ --per_device_eval_batch_size=2 \ --predict_with_generate \ --text_column "question" \ --summary_column "known_answer" Again, I don’t understand the difference between do_eval, and do_predict, and I am not sure what predict_with_generate really means, and can’t find that documented clearly anywhere, so I am just using all of them. This script is working, and is generating checkpoint folders that look like this: … which is a great start, however, I am missing the critical file that I need, to see what my model outputs… this file I am missing is the test_generations.txt file. Does anyone know if it is still possible to generate these test generations? I did consult the --help command, and found … --do_predict [DO_PREDICT] Whether to run predictions on the test set. --predict_with_generate [PREDICT_WITH_GENERATE] Whether to use generate to calculate generative metrics (ROUGE, BLEU). Which, although I don’t really understand what this means, does seem like something that could help create the test_generations.txt, but that does not seem to be happening in my case. 
Also, FYI, I am running the script again, trying just 3 epochs, and here is the first output from the console, which I think should show all of my arguments: 02/22/2021 19:45:54 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False 02/22/2021 19:45:54 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='/workspace/fw-py/predictions/translated_one', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=2, per_device_eval_batch_size=2, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs/Feb22_19-45-54_43a398359e63', logging_first_step=False, logging_steps=768, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=768, dataloader_num_workers=0, past_index=-1, run_name='/workspace/fw-py/predictions/translated_one', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard', 'wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, sortish_sampler=False, predict_with_generate=True) Thanks!
OK, it looks like I can now answer my own question. I can’t find any documentation of this, but it seems that the predict_with_generate flag WILL generation a text file with predictions, but not for each checkpoint – instead, it happens at the end, after all the epochs are complete. The relevant code: transformers/run_seq2seq.py at f991daed185261085301d72c2cd634836df1044a · huggingface/transformers · GitHub 4 If anyone figures out a way to get run_seq2seq.py to generate these predictions at each checkpoint, my understanding was that this was the previous behavior, and it was certainly useful for me…
0
huggingface
Beginners
Pipeline’s Tokenizer vs training tokenizer
https://discuss.huggingface.co/t/pipelines-tokenizer-vs-training-tokenizer/4240
Hello, I am trying to create a pipeline from a trained model. From what I understand I need to provide a tokenizer so that my new input will be tokenised. I guess it should look like this: from transformers import pipeline, AutoModelForSequenceClassification, AutoTokenizer model_name = "TestModel" model = AutoModelForSequenceClassification.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer, return_all_scores=True) My question is where the other steps of the tokenisation process take place, like the padding and truncation. During training, my sequences were processed as follows: train_encodings = tokenizer(seq_train, truncation=True, padding=True, max_length=1024, return_tensors="pt") Is that no longer needed?
The pipeline does the tokenisation for you, that’s why you have to pass in a trained model and it’s tokeniser. Basically, as I understand it the pipeline implementations simply reduce the amount of code you have to write for common use cases.
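So the snippet in the question is essentially all you need; a sketch of using it (TestModel stands in for the path to your trained model):
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_name = "TestModel"  # path or hub id of the trained model
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer, return_all_scores=True)
# Padding and truncation happen inside the pipeline's preprocessing step
print(classifier("This product exceeded my expectations."))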
0
huggingface
Beginners
XML RoBERTa Multilanguage NER with OntoNotes 5 dataset
https://discuss.huggingface.co/t/xml-roberta-multilanguage-ner-with-ontonotes-5-dataset/4251
Hi, I would like to fine-tune XLM-RoBERTa for multilingual NER with the OntoNotes 5 dataset, but I really can’t understand how to do that. Honestly, I have read the paper and I know the theory behind this process, but I can’t understand how to do it with the transformers module! I did not find any relevant example for it. For now, I have my OntoNotes 5 data in the following form: ('لكن', 'O'), ('وزارة الداخلية الباكستانية', 'ORG'), ('وزارة', 'O'), ('الداخلية', 'O'), ('الباكستانية', 'O'), ('قالت', 'O'), ('ان', 'O'), ('11', 'CARDINAL'), ('11', 'O'), ('شخصا', 'O'), ('ً قتلوا', 'O'), and this model: xlm-roberta-large · Hugging Face
Hi @Constantin, there’s a detailed tutorial here on using transformers for NER: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb#scrollTo=545PP3o8IrJV 2 I’ve been able to use it with XLM-R without problems. In your case, the main work will be loading your dataset into a datasets.Dataset object (recommended for fast processing!). For that see the docs here 1 or look at how one of the NER datasets is implemented to understand how the features need to be defined, e.g. GermanNER 2
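For the dataset step, a rough sketch of getting token/tag pairs into a datasets.Dataset (a tiny illustrative example; a real setup would use the full OntoNotes label set, typically with B-/I- prefixes):
from datasets import ClassLabel, Dataset, Features, Sequence, Value

label_names = ["O", "ORG", "CARDINAL"]  # extend with the full label set
features = Features({
    "tokens": Sequence(Value("string")),
    "ner_tags": Sequence(ClassLabel(names=label_names)),
})

examples = {
    "tokens": [["لكن", "وزارة", "الداخلية", "الباكستانية", "قالت"]],
    "ner_tags": [[0, 1, 1, 1, 0]],  # indices into label_names
}
dataset = Dataset.from_dict(examples, features=features)
print(dataset.features["ner_tags"])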
0
huggingface
Beginners
Fine-tuned model for regression is missing output layer after saving to disk
https://discuss.huggingface.co/t/fine-tuned-model-for-regression-is-missing-output-layer-after-saving-to-disk/4188
So I have tried to fine-tune distilbert for regression task (using num_labels=1) and it seemed to work. But after saving it to disk (model.save_pretrained(f"checkpoints/model_epoch_{epoch}")) and loading it again and doing inference on a sample piece of text, it is outputting a 768-dimensional vector instead of a single number: text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) odict_values([tensor([[[-0.0013, 0.0024, 0.0388, ..., 0.0087, 0.0316, 0.0316], [ 0.0128, 0.0046, 0.0446, ..., 0.0043, 0.0132, 0.0331], [ 0.0124, 0.0069, 0.0430, ..., 0.0060, 0.0124, 0.0369], ..., [ 0.0167, 0.0159, 0.0357, ..., 0.0059, 0.0145, 0.0299], [ 0.0139, 0.0140, 0.0340, ..., 0.0076, 0.0157, 0.0298], [ 0.0144, 0.0284, 0.0265, ..., 0.0117, 0.0108, 0.0268]]], grad_fn=<NativeLayerNormBackward>)]) Not sure what I’m doing wrong here.
What command did you use to reload your model?
0
huggingface
Beginners
Predict if a word is an ending of sentence
https://discuss.huggingface.co/t/predict-if-a-word-is-an-ending-of-sentence/4198
Hi everyone, I want to predict if a word is an ending of a sentence or not in a paragraph; I mean if a word is an ending it should be predicted as 1 else 0. It’s a binary classification problem. Is there any model for that? Thanks, Kalyan.
Why do you need a model for that in the first place? You can just use a parser that does sentence segmentation (e.g. spacy or stanza) and then select the last word from it.
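A sketch of that with spaCy (the small English model is assumed, installed with python -m spacy download en_core_web_sm; treating the last non-punctuation token of each sentence as the "ending" is my own interpretation):
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("I want to label sentence endings. Each final word gets a 1. Everything else gets a 0.")

labels = []
for sent in doc.sents:
    words = [t for t in sent if not t.is_punct and not t.is_space]
    last_idx = words[-1].i if words else -1
    for token in sent:
        labels.append((token.text, 1 if token.i == last_idx else 0))
print(labels)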
0
huggingface
Beginners
I have trained a model, how do I load it up?
https://discuss.huggingface.co/t/i-have-trained-a-model-how-do-i-load-it-up/4209
Hi, I have managed to train a model using Trainer. I evaluated some results whilst the model was still on the disk using trainer.predict(). I then used trainer.save_model() and now want to load it up for usage again. By saving, I got three files in my drive: pytorch_model.bin config.json training_args.json I am assuming the model is pytorch_model.bin, but I am unsure how to load it up. Do I again call model = AutoModelForSequenceClassification.from_pretrained(filepath, num_labels=5, output_attentions=False, # output_hidden_states=False ) where filepath is the path to pytorch_model.bin? Do I then create a Trainer again, pass the model to it, and use trainer.predict() for new predictions?
You can do it in pytorch or tf like you would with any other model, or you can use Pipelines — transformers 4.3.0 documentation 5
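A minimal sketch of reloading from the saved directory (the directory path is whatever you passed to save_model; num_labels=5 as in the question, although it is already stored in the saved config):
from transformers import AutoModelForSequenceClassification, AutoTokenizer

save_dir = "path/to/output_dir"  # folder containing pytorch_model.bin and config.json
model = AutoModelForSequenceClassification.from_pretrained(save_dir, num_labels=5)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # or whichever tokenizer you trained with

inputs = tokenizer("Some new text to classify", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.argmax(dim=-1))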
0
huggingface
Beginners
Create a custom model that works with any pretrained transformer body
https://discuss.huggingface.co/t/create-a-custom-model-that-works-with-any-pretrained-transformer-body/4186
I would like to create a custom model (in this case for text classification) that works on top of an arbitrary pre-trained transformer model body. More specifically, I want to use some transformer model (together with its tokenizer) to get an embedding for the given text and then do whatever on top of this embedding. The code below was inspired by the DistilBertForSequenceClassification model and works for the checkpoint "distilbert-base-uncased", but fails already for "bert-base-uncased" since there the embedding dimensionality is stored in config.hidden_size instead of config.dim: import torch import torch.nn.functional as F from transformers import AutoTokenizer, AutoModel class TransformerSequenceClassifier(torch.nn.Module): def __init__(self, num_labels, pretrained_name, dropout=0.1): super().__init__() self.num_labels = num_labels # load pre-trained transformer self.transformer = AutoModel.from_pretrained(pretrained_name) # initialize other layers (head after the transformer body) self.pre_classifier = torch.nn.Linear(self.transformer.config.dim, self.transformer.config.dim) self.classifier = torch.nn.Linear(self.transformer.config.dim, num_labels) self.dropout = torch.nn.Dropout(dropout) def forward(self, input_ids=None, **kwargs): # get text representation from transformer transformer_output = self.transformer( input_ids=input_ids, **kwargs, ) hidden_state = transformer_output[0] # (bs, seq_len, dim) pooled_output = hidden_state[:, 0] # (bs, dim) # apply classification layers pooled_output = self.pre_classifier(pooled_output) # (bs, dim) pooled_output = F.relu(pooled_output) # (bs, dim) pooled_output = self.dropout(pooled_output) # (bs, dim) output = self.classifier(pooled_output) # (bs, num_labels) return output if __name__ == '__main__': # initialize model and corresponding tokenizer pretrained_name = "distilbert-base-uncased" model = TransformerSequenceClassifier(2, pretrained_name) tokenizer = AutoTokenizer.from_pretrained(pretrained_name) # apply model to some example sentences batch = tokenizer( ["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."], padding=True, truncation=True, return_tensors="pt" ) y_pred = model(**batch) For my code to work I would need a model-agnostic way to: prune task specific heads from the model (if it has any) compute a single embedding vector for a sequence of input tokens (for BERT-based models afaik this is the representation for the [CLS] token at the beginning of the sequence, but I’m not sure about the rest of the model zoo) know in advance what the dimensionality of this embedding will be Are there any suggestions on how to accomplish the above steps? I’m also happy about a solution that works only for BERT-based models, but I can’t believe that I already failed on point 3…
Note that hidden_size is a property on all configurations, so if you use it instead of dim in your example, you should be good. For pruning specific heads, I don’t think you will have one when using the AutoModel architecture since it’s supposed to be the bare model. So the main issue will be to get the representation. For this, I’m afraid there is nothing generic that will work accross the model zoo (which is why the XxxForSequenceClassification are implemented in separate files) since some models expect to use the first token, others the last, others the mean etc.
0
huggingface
Beginners
I’m making ROBERTA dumber, and I don’t know why
https://discuss.huggingface.co/t/im-making-roberta-dumber-and-i-dont-know-why/3283
Hi there, I’m further training from roberta-base using my domain-specific corpus (parsed text related to space systems) and the run-mlm.py script. Here is my code: output=os.system("python run_mlm.py " "--model_name_or_path=roberta-base " "--overwrite_output_dir " "--train_file='data/training.txt' " "--validation_file='data/testing.txt' " "--per_device_train_batch_size=8 " "--per_device_eval_batch_size=8 " "--do_train " "--do_eval " "--line_by_line " "--save_steps=53769 " "--num_train_epochs=40 " "--output_dir='./spaceROBERTA/' " "--logging_steps=4481") The training loss is decreasing (from around 2 to 1), and the perplexity over the evaluation set is a bit high but also decreasing (it starts at 10 and finishes around 7). So I thought all lights were green for the training, yeah! But when I fine-tune it on our labeled dataset for a Concept Recognition task, the performance is slightly worse than roberta-base, and it gets significantly worse as the number of training epochs increases. I’m basically making roberta-base dumber and dumber and I don’t know why… I appreciate it if anyone can point to a solution, thanks
Update: Increasing the batch size to 256, thanks to gradient-accumulation, improved the performance "--per_device_train_batch_size=16 " "--per_device_eval_batch_size=16 " "--gradient_accumulation_steps=16 "
0
huggingface
Beginners
Pretraining RoBERTa from scratch breaks down when using tokenizer with smaller vocabulary
https://discuss.huggingface.co/t/pretraining-roberta-from-scratch-breaks-down-when-using-tokenizer-with-smaller-vocabulary/304
I have pretrained two tokenizers. One has a vocabulary size of 15000 and the other is 30000. I use the same corpus and code except for the vocab_size parameter. from tokenizers import ByteLevelBPETokenizer tokenizer.train(files = ["samecorpus.txt"], vocab_size=..., min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) !mkdir folder tokenizer.save_model("folder") I intend to pre-train a RoBERTa from scratch using the code from Huggingface’s tutorial 11 with the following modification. Firstly, the configuration reflects the tokenizer size. from transformers import RobertaConfig config = RobertaConfig( vocab_size=..., max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) Second, I used a custom LineByLineTextDataset. Essentially the same thing with huggingface’s implementation, but BatchedFile read the file lazily due to file size. class BatchedFile(): def __init__(self, tokenizer): self.file = open("largecorpus.txt", encoding="utf-8") self.COUNT = 51476181 self.tokenizer = tokenizer self.result = [] def get(self): if len(self.result) == 0: self.spawn() return self.result.pop(0) def spawn(self): sentences = [] while len(sentences) < 10_000 and self.COUNT > 0: teks = self.file.readline().strip() if (len(teks) > 0 and not teks.isspace()): sentences.append(teks) self.COUNT -= 1 if self.COUNT == 0: self.COUNT = 51476181 self.file.close() self.file = open("largecorpus.txt", encoding="utf-8") batch_encoding = self.tokenizer(sentences, add_special_tokens=True, truncation=True, max_length=128) self.result = batch_encoding["input_ids"] class LineByLineTextDataset(Dataset): """ This will be superseded by a framework-agnostic approach soon. """ def __init__(self, tokenizer): self.items = BatchedFile(tokenizer) def __len__(self): return 51476181 def __getitem__(self, i) -> torch.Tensor: return torch.tensor(self.items.get(), dtype=torch.long) I successfully ran this code with the 30 thousand words tokenizer. However, the code doesn’t work with the 15 thousand word tokenizer, no matter the size of the vocab_size of the configuration. Attempting to run the code with CPU resulted this /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1722 # remove once script supports set_grad_enabled 1723 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1724 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1725 1726 IndexError: index out of range in self Attempting to run the code with GPU resulted this /opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias) 1610 ret = torch.addmm(bias, input, weight.t()) 1611 else: -> 1612 output = input.matmul(weight.t()) 1613 if bias is not None: 1614 output += bias RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` I’m wondering where I went wrong.
Hi, I am also facing this issue. Did you ever manage to solve it? Do you remember what you changed? Thanks.
0
huggingface
Beginners
Error/warning: Not all data has been set. Are you sure you passed all values?
https://discuss.huggingface.co/t/error-warning-not-all-data-has-been-set-are-you-sure-you-passed-all-values/4114
Hi all, while using Trainer to train a BERT model, I receive the following error/warning: “Not all data has been set. Are you sure you passed all values?” I’m not able to fix it and it seems to calculate the wrong metrics. I assume because of the missing data. About my setup: I want to train a BERT model with a custom head for multilabel classification. This is my code: import pandas as pd import numpy as np import datasets import json import torch from sklearn import metrics from sklearn.metrics import accuracy_score, precision_recall_fscore_support, f1_score from transformers import AutoModel, AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments from transformers import BertModel, BertTokenizer from sklearn.model_selection import train_test_split from datasets import Dataset from torch import cuda device = ‘cuda’ if cuda.is_available() else ‘cpu’ MODEL_NAME = ‘dbmdz/bert-base-german-uncased’ SEED = 321 def compute_metrics_multilables_b(eval_pred): predictions, labels = eval_pred predictions = torch.tensor(predictions) preds_full = torch.sigmoid(predictions).cpu().detach().numpy().tolist() preds_full = np.array(preds_full) >= 0.5 labels = np.array(labels) >= 0.5 accuracy = metrics.accuracy_score(labels, preds_full) f1_score_micro = metrics.f1_score(labels, preds_full, average='micro') f1_score_macro = metrics.f1_score(labels, preds_full, average='macro') metrics_result = { 'accuracy': accuracy, 'f1_micro': f1_score_micro, 'f1_macro': f1_score_macro, } return metrics_result class EmotionDataset(torch.utils.data.Dataset): def init(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) class CustomTrainer(Trainer): def compute_loss(self, model, inputs): labels = inputs.pop(“labels”) outputs = model(inputs[‘input_ids’], inputs[‘attention_mask’], inputs[‘token_type_ids’]) labels = labels.type_as(outputs) logits = outputs return torch.nn.BCEWithLogitsLoss()(logits, labels) class MultiLabelClassifier(torch.nn.Module): def init(self): super(MultiLabelClassifier, self).init() self.l1 = BertModel.from_pretrained(MODEL_NAME) self.l2 = torch.nn.Dropout(0.3) # output is a 8-dim vector self.l3 = torch.nn.Linear(768, 8) def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None): output_1 = self.l1(input_ids, attention_mask = attention_mask, token_type_ids = token_type_ids).pooler_output output_2 = self.l2(output_1) output = self.l3(output_2) return output dataset_train = Dataset.from_pandas(df_train) dataset_validation = Dataset.from_pandas(df_validation) dataset_test = Dataset.from_pandas(df_test) load model and tokenizer tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) model = BertModel.from_pretrained(MODEL_NAME) preprocess data field_text = “Text” field_label = “list” tokenize data train_encodings = tokenizer(dataset_train[field_text], truncation=True, padding=True) val_encodings = tokenizer(dataset_validation[field_text], truncation=True, padding=True) test_encodings = tokenizer(dataset_test[field_text], truncation=True, padding=True) train_dataset = EmotionDataset(train_encodings, dataset_train[field_label]) val_dataset = EmotionDataset(val_encodings, dataset_validation[field_label]) 
test_dataset = EmotionDataset(test_encodings, dataset_test[field_label]) model = MultiLabelClassifier() _ = model.to(device) training_args = TrainingArguments( output_dir=’./results’, # output directory num_train_epochs=1, # total # of training epochs per_device_train_batch_size=8, # batch size per device during training per_device_eval_batch_size=20, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir=’./logs’, # directory for storing logs ) trainer = CustomTrainer( model=model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=test_dataset, # evaluation dataset compute_metrics=compute_metrics_multilables_b ) _ = trainer.train() trainer.evaluate() The target/predicition is a binary 8-dim vector for each data record. The error/warning is thrown by trainer.evaluate(). Any idea what I did wrong? Since the code is hard to read here, here the link to the pastebin snippet: https://pastebin.com/MNf68rfn Thanks, Max
I believe this is the same problem as in this topic 36. Make sure your model outputs tuples if you want to use it with Trainer. And your compute_loss should have a return_outputs argument to work well with the last version of transformers.
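A sketch of what that signature looks like, reusing the multi-label loss from the question (adapted, not a drop-in fix for the full script):
import torch
from transformers import Trainer

class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        # If the model returns a tuple, the logits are its first element
        logits = outputs[0] if isinstance(outputs, tuple) else outputs
        loss = torch.nn.BCEWithLogitsLoss()(logits, labels.type_as(logits))
        return (loss, outputs) if return_outputs else loss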
0
huggingface
Beginners
Sentences in Abstractive Summarization
https://discuss.huggingface.co/t/sentences-in-abstractive-summarization/4073
Hi, I am trying to use the summarization pipeline to summarize some text. I use BART and Pegasus, and often end up with sentences abruptly cut in half in the summaries. Is there a way to prevent this? Also I would like to know if there is a way to set the ‘temperature’ in generating sentences. That is, how much the model sticks to its input instead of trying to generate original sentences. Any help is appreciated, thank you!
@demegire please view the discussion here Summarization on long documents - #30 by marcoabrate 11 I have not come across the concept of temperature on BART and Pegasus summarization yet, other experts perhaps have an opinion about it. If so I am also interested in it.
0
huggingface
Beginners
KeyError: ‘loss’ while training QnA
https://discuss.huggingface.co/t/keyerror-loss-while-training-qna/4111
I was finetuning BertForQuestionAnswering on nlp squad dateset with the following arguments training_args = TrainingArguments( "test-qa-squad", learning_rate=2e-5, weight_decay=0.01, label_names = ["start_positions", "end_positions"], num_train_epochs=5, load_best_model_at_end=True, evaluation_strategy='epoch' ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dl, eval_dataset=train_dl ) Then doing trainer.train() trains for some batches but then after a specific batch throws this error (one epoch isn’t complete yet) KeyError Traceback (most recent call last) <ipython-input-19-3435b262f1ae> in <module>() ----> 1 trainer.train() 3 frames /usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k) 1444 if isinstance(k, str): 1445 inner_dict = {k: v for (k, v) in self.items()} -> 1446 return inner_dict[k] 1447 else: 1448 return self.to_tuple()[k] KeyError: 'loss' Is this some issue in the dataset? Any help is much appreciated
You should double check your datasets has items that are dictionaries with the keys "start_positions", "end_positions" (that may be why the model is not returning the loss). Also, you seem to be passing dataloaders to the Trainer? It takes datasets. Lastly, for easy debug you can do the following: for batch in trainer.get_train_dataloader(): break batch = {k: v.cuda() for k, v in batch.items()} outputs = trainer.model(**batch) to easily inspect what’s in your batch and your outputs.
0
huggingface
Beginners
Finetuning BART using custom loss
https://discuss.huggingface.co/t/finetuning-bart-using-custom-loss/4060
Hi everyone, I want to fine-tune BART using a custom loss. What I want to do is take the output text generated by the BART model, feed it to a classifier, and update the weights of the BART model using the classification loss. Please note that I do not want to train the classifier; rather, I want to train the BART model using the classification loss on the generated text. Can someone give me pointers on how to do it? TIA
Hi @himanshu, the simplest way to implement custom loss functions is by subclassing the Trainer class and overriding the compute_loss function, e.g. from transformers import Trainer class BartTrainer(Trainer): def compute_loss(self, model, inputs): # implement custom logic here custom_loss = ... return custom_loss You can find more details in the docs here: Trainer — transformers 4.3.0 documentation 121
0
huggingface
Beginners
Getting better sentence embeddings with BERT - is it just pretraining, or it is pretraining + fine tuning?
https://discuss.huggingface.co/t/getting-better-sentence-embeddings-with-bert-is-it-just-pretraining-or-it-is-pretraining-fine-tuning/3692
I am hoping to confirm my understanding of some definitions in the context of BERT. (1) Pre-training means running a corpus through the BERT architecture where masked language modeling and next sentence prediction are used to derive weights. You can do this (a) from scratch with your own vocabulary and randomly initialized weights or (b) using the pre-trained BERT vocab/weights (so you are in effect “pre-training a pre-trained model.” (2) fine tuning means adding a layer to the BERT architecture for some downstream task, such as classification. Questions (A) Is there anything incorrect in my understanding above? (B) Suppose my goal is only to get better embeddings (e.g., for computing cosine similarity between sentences). Would I just want to pre-train the model on my corpus? Is fine tuning also used to get better embeddings - for example, if I fine tune the pretrained BERT model for some classification task, could I use the neurons in the 2nd to last hidden layer to derive sentence embeddings that could later be used to compare cosine similarity between sentences? I currently use the 2nd to last hidden layer of downloaded pretrained BERT models for my sentence embeddings. I’m trying to understand - if you wanted to do semantic similarity in the future, would you rather derive embeddings from your pre-trained BERT or your pre-trained AND fine tuned BERT?
@vintagedeek, your descriptions of (1) pre-training and (2) fine-tuning both look correct to me. For better embeddings and similarity, you may want to check this: github.com UKPLab/sentence-transformers (Sentence Embeddings with BERT & XLNet) and this: github.com JohnGiorgi/DeCLUTR (the code for the paper "DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations").
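A quick sketch of the sentence-transformers route (the checkpoint name is one of the library's standard pretrained models, picked here only as an example):
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-MiniLM-L6-v2")
sentences = ["The cat sits on the mat.", "A cat is resting on a rug."]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings
print(util.pytorch_cos_sim(embeddings[0], embeddings[1]))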
0
huggingface
Beginners
Fast CPU Inference On Pegasus-Large Finetuned Model – Currently Impossible?
https://discuss.huggingface.co/t/fast-cpu-inference-on-pegasus-large-finetuned-model-currently-impossible/4048
I have a Pegasus model, fine-tuned from Pegasus Large, which works great, but CPU inference with an input about 2000 characters in length takes between 5 and 12 seconds. My understanding is that I need either a smaller model or a “quantized” version of my current model to speed up CPU inference. I have two leads, which I want to put into this post clearly: Export to ONNX. As far as I understand, either this is not possible currently for Pegasus, or nobody has publicly documented a successful export. The closest thing I can find is this: Pegasus ONNX format? · Issue #10042 · huggingface/transformers · GitHub 5, leading to this StackOverflow post: python - how to convert HuggingFace's Seq2seq models to onnx format - Stack Overflow 10 Re-run my fine-tuning on one of the “distilled” or “student” models, shown here: Hugging Face – On a mission to solve NLP, one commit at a time. 7 … I tried this, and found that these models won’t accept my input text because it is too long. I don’t understand the specifics, but conceptually it makes sense to me that a “distilled” or “student” model might be made smaller by reducing the number of tokens it can accept.
Hi @the-pale-king, looking at the link 13 inside the SO link it seems that you need to split the Pegasus model into separate encoder / decoder blocks and then apply the graph optimizations from ONNX (their example is for T5, so presumably can be adapted to Pegasus without too much work). What model did you use for distillation? The choice of student will indeed determine the maximum sequence length you can work with, but with 2,000 tokens I’m not sure what you can use that would be faster than Pegasus. Have you tried dynamic quantization? In PyTorch 9 you can do this with one line of code as follows: import torch from torch import nn from transformers import AutoTokenizer, AutoModelForSequenceClassification from torch.quantization import quantize_dynamic model_ckpt = ... tokenizer = AutoTokenizer.from_pretrained(model_ckpt) model = (AutoModelForSequenceClassification .from_pretrained(model_ckpt).to("cpu")) model_quantized = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8) which can give you a 2-3x reduction in latency (depends on the hardware, model architecture etc). I’ve never tried it for a seq2seq model, but don’t see why it shouldn’t work “out of the box”
0
huggingface
Beginners
How to use lr_scheduler
https://discuss.huggingface.co/t/how-do-use-lr-scheduler/4046
How to use lr_scheduler in Trainer? It seems that whenever I pass the AdamW optimizer, it also needs the dictionary of params to tune. Since I am using just the plain Trainer (not being intimate with PyTorch), the parameters are not exposed to pass to AdamW, yielding an error. Does anyone have an idea of how I can do that?
Hi @Neel-Gupta, you’ll need to create a custom trainer by subclassing Trainer and overriding the create_optimizer_and_scheduler function (see here for the source code):

class MyAwesomeTrainer(Trainer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Add custom attributes here

    def create_optimizer_and_scheduler(self, num_training_steps):
        pass

Assuming that you’re trying to learn some custom parameters, the idea is to add a dict like

{"params": [p for n, p in self.model.named_parameters() if "name_of_custom_params" in n and p.requires_grad], "lr": self.args.custom_params_lr}

to the optimizer_grouped_parameters list you can see in the source code. Then you can add the remaining bits with something like the following:

def create_optimizer_and_scheduler(self, num_training_steps: int):
    no_decay = ["bias", "LayerNorm.weight"]
    # Add any new parameters to optimize for here as a new dict in the list of dicts
    optimizer_grouped_parameters = ...
    self.optimizer = AdamW(optimizer_grouped_parameters,
                           lr=self.args.learning_rate,
                           eps=self.args.adam_epsilon)
    self.lr_scheduler = get_linear_schedule_with_warmup(
        self.optimizer,
        num_warmup_steps=self.args.warmup_steps,
        num_training_steps=num_training_steps)

Does that make sense?
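For completeness, a minimal sketch of what the full override could look like. The two weight-decay groups mirror what the stock Trainer builds; the "custom_params" name filter and its learning rate of 1e-3 are hypothetical placeholders you would replace with your own parameter names and value:

from transformers import AdamW, Trainer, get_linear_schedule_with_warmup

class MyAwesomeTrainer(Trainer):
    def create_optimizer_and_scheduler(self, num_training_steps: int):
        no_decay = ["bias", "LayerNorm.weight"]
        optimizer_grouped_parameters = [
            # standard parameters, with and without weight decay
            {"params": [p for n, p in self.model.named_parameters()
                        if not any(nd in n for nd in no_decay) and "custom_params" not in n],
             "weight_decay": self.args.weight_decay},
            {"params": [p for n, p in self.model.named_parameters()
                        if any(nd in n for nd in no_decay) and "custom_params" not in n],
             "weight_decay": 0.0},
            # hypothetical custom parameters trained with their own learning rate
            {"params": [p for n, p in self.model.named_parameters()
                        if "custom_params" in n and p.requires_grad],
             "lr": 1e-3},
        ]
        self.optimizer = AdamW(optimizer_grouped_parameters,
                               lr=self.args.learning_rate,
                               eps=self.args.adam_epsilon)
        self.lr_scheduler = get_linear_schedule_with_warmup(
            self.optimizer,
            num_warmup_steps=self.args.warmup_steps,
            num_training_steps=num_training_steps)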
0
huggingface
Beginners
Generate function returns random words for BartForConditionalGeneration
https://discuss.huggingface.co/t/generate-function-returns-random-words-for-bartforconditionalgeneration/4044
Hello, I am trying to use BartForConditionalGeneration, but the generate function returns random words. The model I am using is facebook/bart-base and it has not been fine-tuned at all. Here is the code:

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model_name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
max_length = 120
model_cfg = BartConfig.from_pretrained(model_name)
model_cfg.max_length = max_length
# model_cfg.use_cuda = True
model_cfg.force_bos_token_to_be_generated = True
model = BartForConditionalGeneration(model_cfg).to(device)

Here is the generation code:

from transformers.models.bart.modeling_bart import shift_tokens_right

ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=max_length, return_tensors='pt', truncation=True).to(device)
summary_ids = model.generate(shift_tokens_right(inputs['input_ids'], tokenizer.bos_token_id, tokenizer.eos_token_id),
                             num_beams=4, max_length=40)
print([tokenizer.decode(g, skip_special_tokens=True) for g in summary_ids])

The output of this is: [' society society society Steve Steve Steve smoking smoking smoking contiguous contiguous contiguous Canucks Canucks Canucks concurrent concurrent concurrentRandRandRandLessLessLess Providence Providence ProvidenceumatumatumatCombatCombatCombat vicious vicious vicious Kre'] I have been trying to solve different issues with generate like this for nearly the past three and a half days, but have no idea… Any help is incredibly appreciated
Nvm. I realized I still needed to load the pretrained model and not use the config
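For anyone hitting the same issue: instantiating the model from a config alone gives randomly initialized weights, which is why generate produces gibberish. A minimal sketch of the fix, loading the pretrained weights and only overriding the generation settings you need:

import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "facebook/bart-base"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
# from_pretrained loads the trained weights; BartForConditionalGeneration(config) does not
model = BartForConditionalGeneration.from_pretrained(model_name).to(device)

inputs = tokenizer(["My friends are cool but they eat too many carbs."],
                   max_length=120, truncation=True, return_tensors="pt").to(device)
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=40)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))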
0
huggingface
Beginners
Reproduce results on CNN/DailyMail - PEGASUS
https://discuss.huggingface.co/t/reproduce-results-on-cnn-dailymail-pegasus/3981
I currently aim to finetune the implemented Pegasus on the CNN/DailyMail dataset from the ‘google/pegasus-large’ checkpoint. However, I was unable to achieve claimed numbers (Pegasus: replication and distillation results · Issue #6844 · huggingface/transformers · GitHub 9). My results are ROUGE-1: 43.7 and ROUGE-L: 40.6. I assume that I need to modify some hyperparameters. I would be grateful if you could give me any comment or advice. P.S: these are hyperparameters I have tried max_input_length=1024, max_output_length=128, freeze_encoder=False, freeze_embeds=True, learning_rate=1e-4 (1e-3), weight_decay=0.0, adam_epsilon=1e-8, warmup_steps=10000, gradient_accumulation_steps=8, fp_16=False, opt_level=‘O1’, max_grad_norm=1.0, num_train_epochs=20 (10), train_batch_size=4, eval_batch_size=16
I can help telling you that you should use Adafactor with PEGASUS and not Adam. For optimization, both pre-training and fine-tuning used Adafactor (Shazeer & Stern, 2018) with square root learning rate decay and dropout rate of 0.1.
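If it helps, here is a minimal sketch of swapping Adam for Adafactor using the implementation shipped with transformers. The hyperparameters below are illustrative, not the exact paper settings; the paper's square-root learning-rate decay would need to be added separately, and model is assumed to be your PegasusForConditionalGeneration:

from transformers.optimization import Adafactor

# Adafactor with an explicit learning rate; relative_step must be disabled when lr is set
optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)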
0
huggingface
Beginners
Pegasus Inference for production usecase
https://discuss.huggingface.co/t/pegasus-inference-for-production-usecase/2486
Hi, We have fine-tuned the distill-pegasus-cnn-16-4 summarization model on our own data and the results look good. However, when we want to deploy it for a real-time production use case, it takes a long time on an ml.c5.xlarge CPU (around 13 seconds per document in a sequence). We tried a g4dn.xlarge GPU for inference and it takes around 1.7 seconds per document in a sequence. Inference on a GPU is a costly affair. Could anyone suggest ideas to make this faster at lower cost? I have tried ONNX Runtime and TorchScript, but neither of them has support for the Pegasus model. Do we have any timelines for supporting Pegasus? I tried num_beams as well - the above numbers are after using it. Inference code:

import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

src_text = [
    """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = "sshleifer/distill-pegasus-cnn-16-4"
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
inputs = tokenizer.batch_encode_plus(src_text, truncation=True, padding="longest", return_tensors="pt").to(torch_device)
translated = model.generate(inputs["input_ids"], num_beams=2)
output = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in translated][0]
print(output)

Thanks in advance for your help. Apologies if I am missing something obvious. Regards, Karthik
Hi, Could someone help in this regard? Thanks in advance, Karthik
0
huggingface
Beginners
Training classifier with frozen DistilBERT embeddings
https://discuss.huggingface.co/t/training-classifier-with-frozen-distilbert-embeddings/4000
Hello everyone, I’m trying to do some sentiment analysis on the IMDB movie reviews dataset. First I trained a model based on GloVe embeddings followed by an LSTM layer, and then a fully connected feedforward layer. I implemented it with Pytorch and it works like a charm. Now I’m trying to replace the GloVe + LSTM by some transformer based model. I managed to do it and I chose DistilBERT as it is supposed to be lightweight (I’m training the model on my laptop which has no GPU). I kept the DistilBERT embedding frozen, in order, again, to minimize the computational cost. But the results are really bad. It basically looks like it is oscillating around random guess. Here are a few metrics after 1 epoch: Accuracy = 0.5201 F1 score binary = 0.6723 Recall score = 0.9898 Average precision score = 0.5090 Air under the ROC = 0.7421 Loss on validation set: 1040.9420166015625 Here are the same metrics after 5 epochs: Accuracy = 0.6855 F1 score binary = 0.6725 Recall score = 0.6493 Average precision score = 0.6974 Air under the ROC = 0.7497 Loss on validation set: 480.90106201171875 and here are the same metrics after 11 epochs (after that I stopped): Accuracy = 0.5939 F1 score binary = 0.3630 Recall score = 0.2327 Average precision score = 0.8251 Air under the ROC = 0.7330 Loss on validation set: 738.0289306640625 So not the magical bump in accuracy I was hoping for Here is also the model I used: class DistilBERTWithPoolingClf(nn.Module): """ Classifier based on HuggingFace Transformers implementation of DistillBERT, using a basic DistillBERT layer with maxpooling on top of it. """ __name__ = "DistilBERTbase" def __init__(self, keep_prob, seq_length): super(DistilBERTWithPoolingClf, self).__init__() self.DistilBERT = DistilBertModel.from_pretrained("distilbert-base-uncased") self.DistilBERT.requires_grad_(False) # Embeddings are frozen self.maxpool = nn.MaxPool1d(seq_length) self.dropout = nn.Dropout(1 - keep_prob) self.hidden2bin = nn.Linear(768, 2) # For Bi-LSTM def forward(self, ids, mask, token_type_ids): batch_size = ids.shape[0] # Unlike for BERT (Hugging Face implementation), the forward method returns # the embedding of every input token and there is no embedding of the CLS token # (as far as I know) hidden = self.DistilBERT(ids, attention_mask=mask, return_dict=False) hidden = hidden[0] hidden = hidden.permute(0, 2, 1) hidden = self.maxpool(hidden) hidden = self.dropout(hidden) logits = self.hidden2bin(hidden.view(batch_size, 768)) return logits I have a few questions: Am I naive trying to use these models without GPU? I was hoping that by freezing them, I’ll have only to train the feedforward layer and that this part would be accessible? Could it be that the results are that bad because the DistilBERT layer is frozen? Do you see some obvious mistake in my definition of the model? I am a complete beginner when it comes to Huggingface Transformers so I wouldn’t be surprised if there was any. Any other suggestion?
Hi @abercher, regarding your questions: You can certainly use Transformers as feature extractors on a CPU - since the weights are frozen, you only need the forward pass, which is relatively quick to compute. My experience has generally been that you can get significantly worse results when using the last hidden states as features vs fine-tuning end-to-end (in some cases > 20 F1 points!). But this varies depending on the dataset / task, so I am not sure if it’s also true for IMDB. One thing that seems a bit odd is the hidden.permute(0, 2, 1) part of your forward pass - why do you do this? You might get better results by using the average of the unmasked hidden states, e.g.:

input_ids = torch.tensor(batch["input_ids"]).to(device)
attention_mask = torch.tensor(batch["attention_mask"]).to(device)
with torch.no_grad():
    last_hidden_state = model(input_ids, attention_mask).last_hidden_state
last_hidden_state = last_hidden_state.cpu().numpy()
# Use average of unmasked hidden states for classification
lhs_shape = last_hidden_state.shape
boolean_mask = ~np.array(batch["attention_mask"]).astype(bool)
boolean_mask = np.repeat(boolean_mask, lhs_shape[-1], axis=-1)
boolean_mask = boolean_mask.reshape(lhs_shape)
masked_mean = np.ma.array(last_hidden_state, mask=boolean_mask).mean(axis=1)
batch["hidden_state"] = masked_mean.data

You could also see what the performance looks like with something simpler like logistic regression, e.g. first extract the features:

def forward_pass(batch):
    input_ids = torch.tensor(batch["input_ids"]).to(device)
    attention_mask = torch.tensor(batch["attention_mask"]).to(device)
    with torch.no_grad():
        last_hidden_state = model(input_ids, attention_mask, return_dict=True).last_hidden_state
    last_hidden_state = last_hidden_state.cpu().numpy()
    # Use average of unmasked hidden states for classification
    lhs_shape = last_hidden_state.shape
    boolean_mask = ~np.array(batch["attention_mask"]).astype(bool)
    boolean_mask = np.repeat(boolean_mask, lhs_shape[-1], axis=-1)
    boolean_mask = boolean_mask.reshape(lhs_shape)
    masked_mean = np.ma.array(last_hidden_state, mask=boolean_mask).mean(axis=1)
    batch["hidden_state"] = masked_mean.data
    return batch

and then extract the hidden states from your tokenized dataset, e.g. imdb_enc:

imdb_enc = imdb_enc.map(forward_pass, batched=True, batch_size=16)
X_train = np.array(imdb_enc["train"]["hidden_state"])
X_test = np.array(imdb_enc["test"]["hidden_state"])
y_train = np.array(imdb_enc["train"]["label"])
y_test = np.array(imdb_enc["test"]["label"])

and then train a classifier:

from sklearn.linear_model import LogisticRegression

lr_clf = LogisticRegression(n_jobs=-1, penalty="none")
lr_clf.fit(X_train, y_train)
lr_clf.score(X_test, y_test)

If the result is still bad then you might have to try something more elaborate like averaging over certain layers, as was done in the BERT paper. HTH!
0
huggingface
Beginners
MNLI Inference on a fine-tuned model from hub
https://discuss.huggingface.co/t/mnli-inference-on-a-fine-tuned-model-from-hub/3948
Hello, I tried to run various models such as huggingface/distilbert-base-uncased-finetuned-mnli microsoft/deberta-v2-xxlarge-mnli roberta-large-mnli squeezebert/squeezebert-mnli I can see the weights are loaded but the accuracy I get is about 7%. I use “transformers_version”: “4.3.2”, and the following arguments: python run_glue.py --model_name_or_path roberta-large-mnli --task_name mnli --do_eval --max_seq_length 128 --output_dir /tmp/mnli/ Any help would be appreciated, Thank you
Hi Ali93H, I don’t understand what you are trying to do. Are you wanting to do further fine-tuning (in which case you might want DistilBertForMaskedLM) or to classify your texts for example sentiment analysis (in which case you might want DistilBertForSequenceClassification). The transformers docs are here Quick tour — transformers 4.3.0 documentation 1 ,Summary of the models — transformers 4.3.0 documentation 1 , DistilBERT — transformers 4.3.0 documentation 1
0
huggingface
Beginners
How to improve F1 score in SQAUD2 Question Answering Task on Distilbert Pretarined Model
https://discuss.huggingface.co/t/how-to-improve-f1-score-in-sqaud2-question-answering-task-on-distilbert-pretarined-model/3995
While using Colab 4 with the inference code written I am getting the below results. { 'exact': 31.272635391223783, 'f1': 35.63616173418905, 'total': 11873, 'HasAns_exact': 59.83468286099865, 'HasAns_f1': 68.57424903340527, 'HasAns_total': 5928, 'NoAns_exact': 2.7922624053826746, 'NoAns_f1': 2.7922624053826746, 'NoAns_total': 5945, 'best_exact': 50.07159100480081, 'best_exact_thresh': 0.0, 'best_f1': 50.07159100480081, 'best_f1_thresh': 0.0} When we use Huggingface script for the evaluation script below we get better results. What things should I change in the colab code to move the EM and F1? python run_qa.py \ --model_name_or_path /path/to/distilbert-squad2 \ --dataset_name squad_v2 \ --version_2_with_negative \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ./tmp Eval Results: 02/25/2021 07:13:08 - INFO - __main__ - ***** Eval results ***** 02/25/2021 07:13:08 - INFO - __main__ - HasAns_exact = 71.54183535762483 02/25/2021 07:13:08 - INFO - __main__ - HasAns_f1 = 78.03088635740741 02/25/2021 07:13:08 - INFO - __main__ - HasAns_total = 5928 02/25/2021 07:13:08 - INFO - __main__ - NoAns_exact = 72.22876366694702 02/25/2021 07:13:08 - INFO - __main__ - NoAns_f1 = 72.22876366694702 02/25/2021 07:13:08 - INFO - __main__ - NoAns_total = 5945 02/25/2021 07:13:08 - INFO - __main__ - best_exact = 71.88579129116482 02/25/2021 07:13:08 - INFO - __main__ - best_exact_thresh = 0.0 02/25/2021 07:13:08 - INFO - __main__ - best_f1 = 75.12567121424334 02/25/2021 07:13:08 - INFO - __main__ - best_f1_thresh = 0.0 02/25/2021 07:13:08 - INFO - __main__ - exact = 71.88579129116482 02/25/2021 07:13:08 - INFO - __main__ - f1 = 75.12567121424338 02/25/2021 07:13:08 - INFO - __main__ - total = 11873 How to make this colab evaluation code generalized for other transformer-based question answering models?
Hi @bhadresh-savani, there’s a lot of tricky pre- and post-processing needed to get the question-answering working. For example, I think your implementation is missing the sliding window needed to chunk long documents into passages and the sorting of the predicted answers in the evaluation. Sylvain Gugger has a nice Colab tutorial with all these details here 9, so my suggestion would be to compare his implementation against yours to see what you need to add.
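As a pointer, the sliding-window chunking is handled by the (fast) tokenizer itself. A rough sketch of the preprocessing step, along the lines of the official example (the checkpoint and column names are placeholders for your own setup):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # placeholder checkpoint

def prepare_features(examples):
    # each long context is split into several overlapping 384-token windows
    return tokenizer(
        examples["question"],
        examples["context"],
        truncation="only_second",      # only truncate the context, never the question
        max_length=384,
        stride=128,                    # overlap between consecutive windows (doc_stride)
        return_overflowing_tokens=True,
        return_offsets_mapping=True,   # needed to map predictions back to character spans
        padding="max_length",
    )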
0
huggingface
Beginners
ASR inference time too long
https://discuss.huggingface.co/t/asr-inference-time-too-long/3191
I am trying to test the ASR 8 model. I uploaded sample file (10 second long .wav file) and clicked compute. The page just says compute loading and does not provide output. Am I mistaken with the type of audio file to be uploaded or should I check for whether the audio should be 8 or 16KHz file?
Rajaram1996: should I check for whether the audio should be 8 or 16KHz file? According to the ASR config, I believe we need to submit a 16 kHz file. Ref: https://zenodo.org/record/3957940#.YDdHjF0zbAN
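If your recording is at 8 kHz (or 44.1 kHz), you can resample it to 16 kHz before uploading. A minimal sketch with librosa and soundfile (file names are placeholders):

import librosa
import soundfile as sf

# load the audio and resample it to 16 kHz mono
speech, sample_rate = librosa.load("sample_8khz.wav", sr=16000, mono=True)
sf.write("sample_16khz.wav", speech, 16000)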
0
huggingface
Beginners
Importing TFDebertaModel
https://discuss.huggingface.co/t/importing-tfdebertamodel/3895
In the DeBERTa documentation, it mentions the possibility of using TFDebertaModel. However, when I try to import it using the line below, I get the following error: from transformers import TFDebertaModel Error: ImportError: cannot import name 'TFDebertaModel' from 'transformers' (unknown location) How can I properly import TFDebertaModel, if it is even available?
Looks like it isn’t available yet. See this DeBERTa in TF (TFAutoModel): unrecognized configuration class · Issue #9361 · huggingface/transformers · GitHub 23 which says that (in Dec 2020) DeBERTa was only available in pytorch, not tensorflow.
0
huggingface
Beginners
Issue with MBart50 translation
https://discuss.huggingface.co/t/issue-with-mbart50-translation/3959
Hi, I am having an issue with the new MBart50 - I was wondering if you could help me figure out what I am doing wrong. I am trying to copy code from here 1 – specifically, I tweaked it to translate a sentence from French into Persian. from transformers import MBartForConditionalGeneration, MBart50TokenizerFast article_fr = "Paris est toujours une bonne idee" model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") # translate Hindi to French tokenizer.src_lang = "fr_XX" encoded_hi = tokenizer(article_fr, return_tensors="pt") generated_tokens = model.generate( **encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fa_IR"] ) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) but it then outputs ['Paris is always a good idea'] (which is obviously in English – not in Persian) How can I get it to output in Persian? I tried using the "fa_IR" lang_code_to_id. Thanks
I have the following returned. However, longer strings yield FR results, not 100% sure why. I assume it is a lack of training sentence pairs. Good luck! سلام Returned from this snippet:

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

article_fr = "Bonjour"
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="fr_XX")
model_inputs = tokenizer(article_fr, return_tensors="pt")
generated_tokens = model.generate(
    **model_inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["fa_IR"]
)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
0
huggingface
Beginners
How to load dataset that exist in cache path
https://discuss.huggingface.co/t/how-to-load-dataset-that-exist-in-cache-path/3713
Hi, I try this code in a server with internet connection: from datasets import load_dataset wiki = load_dataset("wikipedia", "20200501.en", split="train") Then automatic downloading process began and there is a folder ~/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63/ which contains wikipedia-train.arrow and some other files. Now I’d like to use the dataset in a server without internet connection. What should I do? I tried it with from datasets import load_from_disk wiki = load_from_disk("~/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63" It showed state.json not found in that folder. Any advice?
solved it by wiki = load_dataset("~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63/wikipedia.py", "20200501.en", split="train")
0
huggingface
Beginners
Amharic BERT Training
https://discuss.huggingface.co/t/amharic-bert-training/3901
@yjernite Problem: while training an Amharic-language BERT on the OSCAR dataset, training fails with the error shown in the attached screenshot (image omitted). Colab Link
You need to remove the id column in the dataset: tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["id", "text"])
0
huggingface
Beginners
Load weight from local ckpt file
https://discuss.huggingface.co/t/load-weight-from-local-ckpt-file/3854
I have downloaded a standard BERT ckpt file, but how can I load these weights into the model? I have read the documentation (screenshot omitted) and tried some methods like:

config = BertConfig.from_json_file('./bert_model/bert_config.json')
model = TFBertModel.from_pretrained('bert_model/bert_model.ckpt', config=config)

or

config = BertConfig.from_json_file('./bert_model/bert_config.json')
model = TFBertModel(config).load_weights('bert_model/bert_model.ckpt')

but neither seems to work.
Hi @Sniper, I’m not very familiar with the TensorFlow API of transformers but I think the following should work: config = BertConfig.from_pretrained("path/to/your/bert/directory") model = TFBertModel.from_pretrained("path/to/your/bert/directory", config=config) If that doesn’t work, can you share the error that you get?
0
huggingface
Beginners
Organization Pricing
https://discuss.huggingface.co/t/organization-pricing/3777
Regarding: I’d like to understand the configuration of the Startup plan for Organizations. I have a language model fine-tuned on top of the pretrained GPT-2 Small. When I deployed the language model on AWS SageMaker, Google Colab, and Hugging Face, I observed the inference times shown in the attached table (screenshot omitted). From that table, can you please help me fill in the “?” values?
Hello @bala1802 . Do you mind sharing your testing scripts ? Inference time can widely vary depending on the input, and the actual parameters used to generate the text. 1/ Did you use use_gpu flag to actually use the GPU on the inference ? I’m seeing 6s inference on my test string. curl -X POST -d '{"inputs": "toto", "options": {"use_gpu": true, "use_cache": false}}' https://api-inference.huggingface.co/models/balawmt/LanguageModel_Trial_1 -H "Authorization: Bearer ${HF_API_TOKEN}" -D - 2/ first time vs second time, should not really make a difference , are you trying 2 different payloads ? 3/ The actual run time of a query on a text-generation pipeline can depend on the EOS token being generated randomly (otherwise it will simply generate max_tokens which seems to be set to 500 for your model). So when trying to test inference time, you need to make sure that you are generating the same number of tokens, and that EOS cannot be generated. Hope that helps.
0
huggingface
Beginners
Number of words
https://discuss.huggingface.co/t/number-of-words/3727
Hi, I am using the ‘Helsinki-NLP/opus-mt-en-sla’ model, but I cannot figure out how many words this model can translate at once. Can I see this setting somewhere, and how can I change the number of words? Thank you
Hi Katarina, this page might help config.json · Helsinki-NLP/opus-mt-en-sla at main I think this says that the maximum number of tokens, max_length, for this model is 512. 512 tokens might correspond to about 2500 characters (~letters), which might correspond to about 400 words. This is a very rough approximation, and different texts will have different conversion values. If you want to know more about tokens, there’s a nice introduction to BERT tokens by Chris McCormick BERT Word Embeddings Tutorial · Chris McCormick (I imagine that the Marian model uses something similar).
0
huggingface
Beginners
DistilBERT and CLS token
https://discuss.huggingface.co/t/distilbert-and-cls-token/3700
Hello, I’m completely new to Huggingface Transformers. I apologize if my questions are already answered somewhere else, and if it’s the case, I would be glad if you could point me to the given documentation. I’m trying to progress in NLP by training and testing different models to do sentiment analysis on the IMDB Movies Reviews data set. So I implement some custom sub classes of nn.Module. In the first one I used a BERTBase layer and used the embedding of the CLS token as the embedding of the sentence to classify it. The model was: class BERTBaseClassifier(nn.Module): """ Bi-LSTM on top of frozen embeddings initialized with GloVe vectors, followed by 1D max pooling on all the outputs of the Bi-LSTM layer. """ __name__ = "BERTbase" def __init__(self, keep_prob): super(BERTBaseClassifier, self).__init__() self.BERT = transformers.BertModel.from_pretrained("bert-base-uncased") self.BERT.requires_grad_(False) # Embeddings are frozen self.dropout = nn.Dropout(1 - keep_prob) self.hidden2bin = nn.Linear(768, 2) # For Bi-LSTM def forward(self, ids, mask, token_type_ids): batch_size = ids.shape[0] _, hidden = self.BERT(ids, attention_mask=mask, token_type_ids=token_type_ids) hidden = self.dropout(hidden) logits = self.hidden2bin(hidden.view(batch_size, 768)) return logits It was not working super well (I have to admit that I’m running it on the CPU of my laptop so it was taking around 8 hours for an epoch) and I thought it could be good to use the lighter version of the model: DistilBERT. More precisely, this one: DistilBertModel.from_pretrained("distilbert-base-uncased") But I was a bit surprised to see that its output is different from the output of BERT. Unless I missed it, what the forward method of DistilBERT outputs is the (final) embeddings of all the input tokens. I use input sequence of length 300 (by padding the sentences) and the output has length 300. So I guess that there is no additional embedding for a CLS token. I have three questions: Am I correct? Is there no way to get an embedding for this CLS token with this model? If yes, why is that so? Since what I’m looking for is an embedding of the sentence, am I correct to believe that the closest thing to a replacement in my model above of the BERTBase layer with something based on DistilBERT would be this sentence-transformers model: distilbert-base-nli-stsb-mean-tokens ? Thank you for your help
Hi abercher, it’s a few months since I used DistilBERT, but I’m sure I used a CLS token from it. When you run the tokenizer, have you set add_special_tokens=True?
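To add a bit of detail: DistilBertModel has no pooler output, but the tokenizer still prepends a [CLS] token by default, so its embedding is simply the first position of the last hidden state. A minimal sketch:

import torch
from transformers import DistilBertTokenizer, DistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("I love this movie!", return_tensors="pt")  # add_special_tokens=True is the default
with torch.no_grad():
    outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0]  # embedding of the [CLS] token, shape (1, 768)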
0
huggingface
Beginners
Ensemble with `trainer`
https://discuss.huggingface.co/t/ensemble-with-trainer/3763
Can anyone inform me whether we can use trainer for ensembling 2 HuggingFace models? I want to fine-tune them both but want them to work as an ensemble without going much deep in the code. Is that possible?
No, the Trainer is there to quickly train/fine-tune one model, so you can apply it to your two models separately (and have two trainers) but you will then need to do the ensembling manually. If you just want to average the predictions you can call the predict method of each trainer on your data and then average the results.
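A minimal sketch of that manual ensembling step, assuming two fine-tuned classification models wrapped in two trainers and a shared eval_dataset (all names are placeholders):

import numpy as np
from scipy.special import softmax

# each trainer has already fine-tuned its own model on the same task
logits_1 = trainer_1.predict(eval_dataset).predictions
logits_2 = trainer_2.predict(eval_dataset).predictions

# average the class probabilities of both models and take the argmax
probs = (softmax(logits_1, axis=-1) + softmax(logits_2, axis=-1)) / 2
ensemble_preds = probs.argmax(axis=-1)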
0
huggingface
Beginners
Can I add custom classifier to NER pipeline?
https://discuss.huggingface.co/t/can-i-add-custom-classifier-to-ner-pipeline/3744
I need to resolve ner chunks to codes, in my case map identified health problem in the medical text to a diagnosis code. I can train a custom classifier to map ner chunk, but can I add this classifier as a stage / later to the NLP NER pipeline? Thanks!
You will need to do the steps of the pipeline manually, probably: so call the tokenizer on your text, the pretained model on the outputs and then apply your custom classifier.
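A rough sketch of chaining the stages manually; the NER checkpoint is just an example from the hub, and my_code_classifier stands in for your own chunk-to-diagnosis-code classifier:

from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

checkpoint = "dslim/bert-base-NER"  # placeholder NER checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

# grouped_entities=True merges word pieces back into whole entity chunks
ner = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)

text = "Patient reports severe chest pain and shortness of breath."
entities = ner(text)  # list of dicts with "word", "entity_group", "score", ...

# second stage: map each detected chunk to a diagnosis code with your own classifier
codes = [my_code_classifier(entity["word"]) for entity in entities]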
0
huggingface
Beginners
How to utilize a summarization model
https://discuss.huggingface.co/t/how-to-utilize-a-summarization-model/3655
I want to summarize the T&Cs and privacy policies of various services. I’ve decided to do it via a hybrid approach where I initially pre-process the terms or policies and try to remove as many legalese/complex words as possible. Next, I would like to use a pre-trained model for the actual summarization where I would give the simplified text as an input. I wanna utilize either the second or the third most downloaded transformer( sshleifer / distilbart-cnn-12-6 or the google / pegasus-cnn_dailymail) whichever is easier for a beginner / explain for you. I already tried out the default pipeline. summarizer = pipeline(‘summarization’) and got back a summary for a paragraph of the T&C of Instagram. I tried using the Pegasus model following this tutorial 12 and got “RuntimeError: CUDA out of memory” where I ran out of memory on my GPU. Thank you for your valuable time and help
Do you have any concrete questions though? Where exactly are you stuck? Regarding the out of memory error - Have you tried decreasing the batch size or using a smaller model? I wouldn’t say any transformer is “easier” or harder. That’s what’s beautiful about huggingface, it gives you access to many models through one API. Different kinds of models may have different needs but I wouldn’t say there are easier and harder models, as a lot of the complexity is abstracted away by huggingface. One more thing I think I’d try is not to remove the legalese. Usually those are the important parts. Wouldn’t it be awesome if your model included readable summaries of that stuff? If you have examples of T&Cs and summaries, then you could fine tune any model designed for that task, or you could use an EncoderDecoderModel as explained here: Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models 2. If you don’t have any training data I’d still leave the legalese in and just see what the result looks like. It might still be okay. T&Cs are usually long though, are you currently just truncating the input (most models I’ve come across have a max input length of 512)? This is something I’m trying to solve myself right now. Disclaimer: I’m fairly new to this myself.
0
huggingface
Beginners
IndexError: string index out of range on TAPAS fine-tuning
https://discuss.huggingface.co/t/indexerror-string-index-out-of-range-on-tapas-fine-tuning/3676
I am trying to fine-tune “google/tapas-base” on ‘google/tapas-base-finetuned-wtq’ config. I get the following error. File "C:\Users\Kinjal\.conda\envs\nlp\lib\site- packages\transformers\models\tapas\tokenization_tapas.py", line 1668, in <listcomp> return [(coords[1], coords[0]) for coords in answer_coordinates_question] IndexError: string index out of range I think the error is because of the coordinates given in the “answer_coordinates” of the TSV sheet. I tried using (row,col) and (col,row) but in both the cases I get the same error. Also, the answer coordinates are within the range of the table i.e. (0 to nrow-1, 0 to ncol-1) Is there something I am missing here? Or is the error because of something else.
Hi, could it be that you’re providing the answer coordinates in the wrong way? You are probably providing the coordinates as a string, but they should be provided as Python tuples. Let me know if this helps.
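For reference, a minimal sketch of what the tokenizer expects — a toy table and purely illustrative coordinates, with one list of (row, column) tuples per query rather than strings like "(1, 1)":

import pandas as pd
from transformers import TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")

table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2.1M", "3.6M"]}).astype(str)
queries = ["What is the population of Berlin?"]

answer_coordinates = [[(1, 1)]]  # (row_index, column_index), 0-based, header excluded
answer_text = [["3.6M"]]

inputs = tokenizer(
    table=table,
    queries=queries,
    answer_coordinates=answer_coordinates,
    answer_text=answer_text,
    padding="max_length",
    return_tensors="pt",
)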
0
huggingface
Beginners
Unable to import faiss
https://discuss.huggingface.co/t/unable-to-import-faiss/3439
Hello, I am running Transformers on a Mac OS with Python 3.8 in a virtual environment. I have faiss-cpu 1.7.0 installed in the env. (venv) sergey_mkrtchyan transformers (master) $ python Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39) [Clang 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> >>> import faiss >>> faiss.__version__ '1.7.0' However transformers repo is doing an additional check of the version of the package using importlib_metadata in transformers/src/file_utils.py which ends up failing on me with “RagRetriever requires the faiss library but it was not found in your environment.” _faiss_available = importlib.util.find_spec("faiss") is not None try: _faiss_version = importlib_metadata.version("faiss") logger.debug(f"Successfully imported faiss version {_faiss_version}") except importlib_metadata.PackageNotFoundError: _faiss_available = False This is where it fails, mind you _faiss_available = importlib.util.find_spec("faiss") line above works just fine, but fails on the _faiss_version = importlib_metadata.version("faiss") line unable to find the faiss package. Not sure if it’s an issue in the repo or something wrong on my side. Any experience with this? Thank you!
I am facing the same issue. No version of the faiss library is being recognized as present in the environment I am running (installed through conda). I did solve this problem before, but I remember it being a pain; I built the library from source but had to deal with many issues along the way. Edit: I was just able to get it to be recognised. Solution: if you are able to import the library, start python from the terminal and print its location:

import faiss
print(faiss)
# /home/akinwilson/miniconda3/envs/envRag/lib/python3.7/site-packages/faiss/__init__.py

Take the path (/home/akinwilson/miniconda3/envs/envRag/lib/python3.7/site-packages/faiss), then open your bashrc file (nano ~/.bashrc) and add this line to the end of the file:

export PYTHONPATH=$HOME/miniconda3/lib/python3.8/site-packages/faiss

Note that I have changed the home directory to a variable. Save it. Then finally, source the bashrc file again to reload the paths (source ~/.bashrc), restart any virtual environment, and everything should work fine.
0
huggingface
Beginners
Load original T5 checkpoints
https://discuss.huggingface.co/t/load-original-t5-checkpoints/3680
I want to load one of the original T5 checkpoints. How can I do that? I found an answer referring to a convert_t5_original_tf_checkpoint_to_pytorch.py which does not seem to exist.
…they have been moved to transformers/src/transformers/models/t5. But the conversion script requires a config.json, which is not part of the original files (in gs://t5-data/pretrained_models/small).
0
huggingface
Beginners
Summarization taks, looking for clarifications before getting started
https://discuss.huggingface.co/t/summarization-taks-looking-for-clarifications-before-getting-started/3626
Hi everyone, I’m wanting to summarize pretty long texts (300 to 5000 words), I have about 30k examples to work with, and I have a few questions before I get started, to avoid heading off in the wrong direction. Now I understand there are models like BART and T5 that are specifically tailored to such tasks. However, there is also the option of using separate models (say BERT and GPT2) and stitching those together via EncoderDecoderModel(). Now, to me it looks like either of those approaches should work fine. Which one will work better can probably only be found out by trying. Is this assumption correct? Another thing I’m wondering about is which of these is going to be more costly to train in terms of computing resources. I’d assume that a single-model approach (i.e. T5 or BART) would use much less memory, but maybe I’m missing something? Is there theoretically a way to train BERT and GPT2 separately in the EncoderDecoderModel() approach? I know one can fine-tune these models on their own, but is it still gonna be feasible to use them as an EncoderDecoder model and expect good summaries after that? (I don’t think so but thought I’d ask). Any clarifications on those points would be greatly appreciated. Personally, I’m partial to the EncoderDecoderModel() approach, as there are BERT and GPT-2 models available in my target language, where there is only a small T5 model (not BART) available in the same language.
Hi @neuralpat, I believe you are right that you can fuse BERT with GPT-2 checkpoints with EncoderDecoderModel although I suspect the performance may not be great given this table from the paper that this class is based on (look at the BERT2GPT row): [screenshot of the paper's results table omitted] So you might be better off trying to find a RoBERTa model in your target language, and using that as the encoder instead of BERT. Now whether this will work better than fine-tuning BART or T5 is not obvious to me - as with most things in deep learning you probably have to determine it empirically
0
huggingface
Beginners
I have trained my classifier, now how do I do predictions?
https://discuss.huggingface.co/t/i-have-trained-my-classifier-now-how-do-i-do-predictions/3625
Hi everybody and thank you in advance for anyone who can help my out. I am not a total beginner when it comes to huggingface libraries (I have already built a well functioning sentiment analyzer) however I have mostly taken tutorials and integrated their content without going too much into details of who each line of code does. Trying to learn more I have put together a document classifier using a couple of tutorials I’ve found online. I have built the trainer and the validator and they work just fine. I started with a dataset that assigns 6 different labels to a text, with each text having 0, 1 or more than 1 label. I trained the model and saved it. My problem is: now what? I can’t understand exactly how to do the prediction part. Here is where I am: def validation(): model = torch.load(destination_folder+'model.pt') model.eval() with torch.no_grad(): for _, data in enumerate(testing_loader, 0): ids = data['ids'].to(device, dtype = torch.long) mask = data['mask'].to(device, dtype = torch.long) token_type_ids = data['token_type_ids'].to(device, dtype = torch.long) preds = model(ids, mask,token_type_ids) print(preds.argmax(1) + 1) This is a snippet of the output of the print command: tensor([1, 1, 1, 1]) tensor([6, 1, 1, 1]) tensor([1, 1, 1, 1]) tensor([1, 5, 2, 1]) I’ve done this using the validation data and by adapting the validation routine, while in reality I would need to do this for a single line of text, but regardless of the way the data is fed to the prediction function, how do I read the prediction data? How do I go from “This is the text of my document to be classified” to “This document is 75% label1, 15% label5, 2% label6”? Again, thank you in advance for any help!
Hi @Abe, if I understand correctly you’d like to go from an input string like “I love this movie!” to a set of predicted labels and their confidence scores (i.e. probabilities). The simplest way to achieve that would be to wrap your model and tokenizer in a TextClassificationPipeline with return_all_scores=True:

from transformers import TextClassificationPipeline

model = ...
tokenizer = ...
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
# outputs a list of dicts like [[{'label': 'NEGATIVE', 'score': 0.0001223755971295759}, {'label': 'POSITIVE', 'score': 0.9998776316642761}]]
pipe("I love this movie!")

The above also works for multiple inputs by feeding a list of examples instead of a single string:

pipe(["I love this movie!", "I hate this movie!"])

If you want to have human-readable labels like “positive” and “negative” you can configure the id2label and label2id attributes of your model’s config class: Change label names on inference API - #3 by lewtun HTH!
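For the human-readable labels mentioned above, a minimal sketch (the label names here are placeholders for your own six classes):

id2label = {0: "label_0", 1: "label_1", 2: "label_2"}
label2id = {v: k for k, v in id2label.items()}

model.config.id2label = id2label
model.config.label2id = label2id
# the pipeline will now report e.g. "label_2" instead of "LABEL_2"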
0
huggingface
Beginners
Feature extraction for regression/classification vs Fine Tuning
https://discuss.huggingface.co/t/feature-extraction-for-regression-classification-vs-fine-tuning/3605
This is sort of a general question, but I’ve been working on fine-tuning some models on a regression task using GPU instances on AWS. I can already tell the cost is going to be rather astronomical. So I’m wondering, as a (hopefully) cheaper option, whether I should just extract the pooled output and then run plain old regression models on a distributed architecture like Spark. Does anyone have experience comparing these two options in terms of cost and performance? Thanks!
Hi @thecity2, your dataset must be huge if you’re considering running Spark jobs on the model outputs . I’ve never done this exact comparison (Spark vs GPU), but can’t you get a rough estimate by doing the fine-tuning vs feature extraction comparison on a subset of the dataset? This would also give you an idea about whether the accuracy (or whatever metric you’re measuring) is good enough in the feature-based approach - in some cases, I’ve seen massive drops compared to fine-tuning. HTH!
0
huggingface
Beginners
Change label names on inference API
https://discuss.huggingface.co/t/change-label-names-on-inference-api/3063
Hi there, I recently uploaded my first model 2 to the model hub and I’m wondering how I can change the label names that are returned by the inference API. Right now, the API returns “LABEL_0”, “LABEL_1”, etc. with the predictions and I would like it to be something like “Economy”, “Welfare”, etc. I looked at the files of other hosted models and I saw that others changed the id2label and label2id in the config.json file, so I also did that here 3, but the inference API still returns “LABEL_0”. Do I need to change this somewhere else too? (Or maybe I just need to wait for a day or so until the model is refreshed on AWS?) Update: I looked more deeply into the docs here 4 and I didn’t find an explanation for how to change the label names. Maybe this could be added? Thanks for your advice, Moritz
Hey, does someone have advice on this? Would really like to change the output of my models on the model hub, but I don’t understand how to make it return something else than “LABEL_0”, “LABEL_1” etc. See above what I’ve tried.
0
huggingface
Beginners
Cross Entropy Weighted
https://discuss.huggingface.co/t/cross-entropy-weighted/3559
Hi all, I am using this Notebook created by @valhalla to fine-tune the T5 model on my own classification task. I would like to apply some kind of class weighting in my loss function, since I am dealing with highly imbalanced data. I have tried this so far:

def forward(
    self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, lm_labels=None
):
    # in lightning, forward defines the prediction/inference actions
    return self.model(
        input_ids,
        attention_mask=attention_mask,
        decoder_input_ids=decoder_input_ids,
        decoder_attention_mask=decoder_attention_mask,
        lm_labels=lm_labels
    )

def _step(self, batch):
    lm_labels = batch["target_ids"]
    lm_labels[lm_labels[:, :] == self.tokenizer.pad_token_id] = -100
    outputs = self(
        input_ids=batch["source_ids"],
        attention_mask=batch["source_mask"],
        lm_labels=lm_labels,
        decoder_attention_mask=batch['target_mask']
    )
    logits = outputs[1]
    ##### IMBALANCE LEARNING
    class_weights = torch.FloatTensor(self.hparams.class_weights).cuda()
    loss_fct = CrossEntropyLoss(ignore_index=-100, weight=class_weights)
    loss = loss_fct(logits, lm_labels)
    return loss

But it doesn’t work. I am passing a class_weights list of two elements (the number of classes) by parameter. I think I don’t fully understand how the loss is computed using the logits and the labels. I would appreciate any help, since I am pretty stuck. Best, Marcos
Which weights did you assign and what do you mean by “it does not work”? Do you get an error? If so, post the full error trace.
0
huggingface
Beginners
T5 user defined loss function
https://discuss.huggingface.co/t/t5-user-defined-loss-function/566
I am fine-tuning T5 for paraphrase generation and want to add a diversity measure for the generated sentences in the loss function. After reading the source code 26, I still have no clue how to add that.
I know I can generate multiple sentences using:

outs = model.generate(input_ids=batch['source_ids'].cuda(), attention_mask=batch['source_mask'].cuda(),
                      max_length=maxlen, do_sample=True, top_k=120, top_p=0.99, early_stopping=True,
                      num_return_sequences=num_return_seq)

and I know how to calculate my metrics based on this ‘outs’. However, I don’t know how to find these outputs in the return of the ‘forward’ function of T5ForConditionalGeneration. Also, I couldn’t find the definition of this ‘generate’ function.
0
huggingface
Beginners
Question and advice on how to fine tune distilbert for multilabel classification
https://discuss.huggingface.co/t/question-and-advice-on-how-to-fine-tune-distilbert-for-multilabel-classification/3562
I am currently fine-tuning DistilBERT for sequence classification on a multi-label task (specifically 3 labels for sentiment classification) on my own custom dataset, and I am getting quite high loss values of between 0.4 and 0.5. I have tried various methods, like learning rates from 3e-05 to 1e-05 and dropout rates of 0.3 to 0.4 for the embedding layer and 0.2 to 0.4 for the sequence classification layer. Are there other ways of reducing the loss?
Maybe you are not training long enough. Is your validation loss much higher than your training loss?
Maybe you do not have enough data.
Maybe your dataset is very imbalanced.
Maybe the problem is simply too hard and your labels are too similar.
That is just a small range of learning rates. Try starting from 1e-03 and decrease until you see that your train/validation loss curve looks promising.
Use an lr scheduler.
0
huggingface
Beginners
Distilbert-base-nli-stsb-mean-tokens OOM encoding sentences of 100K docs
https://discuss.huggingface.co/t/distilbert-base-nli-stsb-mean-tokens-oom-encoding-sentences-of-100k-docs/3555
Hi, I am using sentence-transformers/distilbert-base-nli-stsb-mean-tokens to embed sentences from a corpus of 100K academic articles. The model is defined as below:

self.model = 'sentence-transformers/distilbert-base-nli-stsb-mean-tokens'
self.word_embedding_model = models.BERT(self.model, max_seq_length=128, do_lower_case=True)
self.pooling_model = models.Pooling(self.word_embedding_model.get_word_embedding_dimension(),
                                    pooling_mode_mean_tokens=True,
                                    pooling_mode_cls_token=False,
                                    pooling_mode_max_tokens=False)
self.model = SentenceTransformer(modules=[self.word_embedding_model, self.pooling_model])
self.corpus_embeddings = self.model.encode(self.corpus)

Running with 64GB of RAM and a 3090 FE (24GB VRAM), the encoding task makes it ~50% of the way through before running out of memory. Most grateful for any guidance on how I might be able to handle encoding the entire corpus - chunking it up, reducing the model size (and the best approach to that). Many thanks
Do you want one vector for the whole corpus, one per sentence, or what exactly do you want? What is inside that corpus variable?
0
huggingface
Beginners
Split document into sentences for sentence embedding
https://discuss.huggingface.co/t/split-document-into-sentences-for-sentence-embedding/3553
Hi, I am wondering if there is a Hugging Face alternative to Gensim’s split_sentences method, to take a document and split it into sentences ready for model.encode()? (The reference I found was a kite.com page documenting Gensim’s split_sentences.) A first-timer says many thanks
So you want to split a text into sentences and then create a sentence embedding for each sentence? Just use a parser like stanza or spacy to tokenize/sentence segment your data. This is typically the first step in many NLP tasks.
0
huggingface
Beginners
Truncation strategy for long text documents
https://discuss.huggingface.co/t/truncation-strategy-for-long-text-documents/3415
Hello, My study partner and I are doing research on Twitter data for our Master’s Thesis. We have collected a dataset of tweets, aggregated on user-level. Each entry in the dataset corresponds to a user, and each user has a text document and a classification label. These text documents consist of several tweets from one user in one long string (not sentences or word-tokens, just one long string). We use BERTForSequenceClassification for this, but have a problem with truncation. The average number of tokens for these text documents is 28.000(!), and with a sequence length of 512, there are obviously a huge amount of tokens that are dropped. Our question is the truncation strategy. We set parameter truncation=True when initializing the BertTokenizer. Will the truncation just keep the 512 first tokens with this strategy, or will it keep the 512 tokens with the highest weights/WordPiece “score”. In other words, if the tokenizer strategy was e.g. TF-IDF, would the truncation process keep the top-512 TF-IDF scoring tokens or just the 512 first tokens. We don’t fully understand how WordPiece gives weights/scores to tokens and if these “scores” are used in truncation.
Hi @eirikdahlen, from the docs 7 one sees that truncation=True is equivalent to the longest_first strategy which just truncates all tokens beyond the maximum context size of the model (e.g. 512 for BERT-base). The other strategies are only_first and only_second which refer to whether one should apply the truncation exclusively on the first or second set of inputs, e.g. if you’re doing something like entailment 2 where the inputs are a premise and hypothesis. Since you’re dealing with long texts, you might want to check out the LongFormer model 12 - it can handle input sequences of 4096 tokens so should be able to capture more context in your use case
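In code, the default behaviour described above is simply this (the checkpoint is a placeholder and long_text stands in for one of your aggregated user documents):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

long_text = "first tweet text ... last tweet text"  # placeholder document

# keeps the first 512 tokens and silently drops everything after them
encoding = tokenizer(long_text, truncation=True, max_length=512, return_tensors="pt")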
0
huggingface
Beginners
Demo of Open Domain Long Form Question Answering
https://discuss.huggingface.co/t/demo-of-open-domain-long-form-question-answering/138
Hi, I just wanted to try the demo https://huggingface.co/qa/ 381, but it seems it is down. Could we have it again please? Thanks.
The demo is back on, thanks for letting us know!
0
huggingface
Beginners
Community Notebook Pull Request?
https://discuss.huggingface.co/t/community-notebook-pull-request/674
Hi all, I’ve noticed a few posts about people attempting to train models using their own custom objective functions. Some of which involve generation. I’d love make a notebook that does all of this (including some pitfalls) and attach it to the community notebooks, but I’m not uber confident with how to properly go about a PR. Could someone please talk me through the process?
Hi @chrisdoyleIE, this will be super useful. Once you are done creating the notebook , you can directly edit the https://github.com/huggingface/transformers/blob/master/notebooks/README.md 4 file from github and make a PR. Add relevant title, Description, Author (you) and colab link in the “Community notebooks” table. Or just let me know, and I will open it for you
0
huggingface
Beginners
Feature extraction pipeline Vs model hidden states
https://discuss.huggingface.co/t/feature-extraction-pipeline-vs-model-hidden-states/3515
Hi, This could be a very naive question but I’m not able to understand what features are extracted by the “feature-extraction” pipeline. I tried the following so far text = 'I will learn the embeddings for this sentence now' tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased') feature_extractor = pipeline ('feature-extraction', model='bert-base-multilingual-uncased', tokenizer=tokenizer) try: features = torch.tensor (feature_extractor (text)) print (features) except RuntimeError: print ("Error") which gives me the following output tensor([[[ 0.0069, 0.0085, 0.0350, ..., -0.0127, 0.0450, -0.0289], [ 0.1185, 0.3802, -0.0386, ..., -0.2473, 0.4393, -0.5417], [-0.1408, -0.2094, -0.1027, ..., -0.0744, 0.3208, -1.0260], ..., [-0.0517, 0.0047, -0.1229, ..., -0.0555, 0.4420, -0.2788], [-0.1698, 0.2366, -0.3831, ..., -0.0218, 0.3211, -0.3036], [-0.4897, 0.3905, -0.1925, ..., -0.0605, 0.2510, -0.8872]]]) However, I then tried to extract the hidden states from the model with the following code: class BertFeatureExtractor (object): def __init__ (self, model_name): self.tokenizer = BertTokenizer.from_pretrained (model_name) self.model = BertModel.from_pretrained (model_name) self.model.eval() def extract (self, text): try: encoded_input = self.tokenizer(text, return_tensors='pt') output = self.model (**encoded_input, output_hidden_states=True) except RuntimeError: output = None print (f'Model cannot learn embeddings for {text}') return encoded_input, output I then get the embeddings as: feat_extractor = BertFeatureExtractor ('bert-base-multilingual-uncased') with torch.no_grad (): encoded_input, output = feat_extractor.extract (text) None of the output['hidden_states'] or output['last_hidden_state'] match the output of the feature-extraction pipeline. Is that expected? Are the features calculated by taking some combination of the layers? If so, how? Or the feature extraction is from a different way altogether?
I realized I had made a mistake in the type of tokenizer that I was using in the different ways of getting the embeddings. The “feature-extraction” pipeline gives the last hidden state.
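For anyone comparing the two approaches, a small sanity check along these lines should show that the pipeline output and the manual forward pass agree once the same model and tokenizer are used:

import torch
from transformers import AutoTokenizer, AutoModel, pipeline

name = "bert-base-multilingual-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

text = "I will learn the embeddings for this sentence now"

# pipeline output: the last hidden state as nested lists, shape (1, seq_len, hidden_size)
pipe = pipeline("feature-extraction", model=model, tokenizer=tokenizer)
pipeline_features = torch.tensor(pipe(text))

# manual forward pass with the same tokenizer and model
with torch.no_grad():
    manual_features = model(**tokenizer(text, return_tensors="pt")).last_hidden_state

print(torch.allclose(pipeline_features, manual_features, atol=1e-4))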
0
huggingface
Beginners
Bart Large CNN summarization
https://discuss.huggingface.co/t/bart-large-cnn-summarization/3383
On facebook/bart-large-cnn · Hugging Face 25, an article can be pasted into the summarization tool. I am attempting to replicate this with the same model. By viewing the “use in transformers” button, the following code is able to be seen: from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn") model = AutoModel.from_pretrained("facebook/bart-large-cnn") Looking at the transformers/model_doc/bart documentation, the summarization example at the bottom of the page uses the bart-large model, and not the cnn. However, attempting to combine the two I come up with: from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn") model = AutoModel.from_pretrained("facebook/bart-large-cnn") ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs." inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='tf') # Generate Summary summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) The following warning and trace is returned: Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-fb149e69ea96> in <module> 7 inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='tf') 8 # Generate Summary ----> 9 summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True) 10 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) ~/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py in decorate_no_grad(*args, **kwargs) 47 def decorate_no_grad(*args, **kwargs): 48 with self: ---> 49 return func(*args, **kwargs) 50 return decorate_no_grad 51 ~/.local/lib/python3.6/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs) 821 # init `attention_mask` depending on `pad_token_id` 822 model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation( --> 823 input_ids, pad_token_id, eos_token_id 824 ) 825 ~/.local/lib/python3.6/site-packages/transformers/generation_utils.py in _prepare_attention_mask_for_generation(self, input_ids, pad_token_id, eos_token_id) 360 self, input_ids: torch.Tensor, pad_token_id: int, eos_token_id: int 361 ) -> torch.LongTensor: --> 362 is_pad_token_in_inputs_ids = (pad_token_id is not None) and (pad_token_id in input_ids) 363 is_pad_token_not_equal_to_eos_token_id = (eos_token_id is None) or ( 364 (eos_token_id is not None) and (pad_token_id != eos_token_id) 
~/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in __bool__(self) 990 991 def __bool__(self): --> 992 return bool(self._numpy()) 993 994 __nonzero__ = __bool__ ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() I have tried switching different hyperparameters, specifically the max_length with truncation=True and more. End goal is to reproduce locally the output from this article summary. Essentially my question boils down to what follows after the bart large cnn model instantiation in order to obtain the desired article summary from the link above? from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn") model = AutoModel.from_pretrained("facebook/bart-large-cnn") inputs = tokenizer(params?) model.generate(params?) transformers-cli env - `transformers` version: 4.2.2 - Platform: Linux-5.4.0-62-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: tried both - Using distributed or parallel set-up in script?: no I have also verified that python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))" has worked for me in my environment.
It looks like you are returning TensorFlow tensors from the tokenizer, but the model is a PyTorch model. In inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='tf'), changing return_tensors to 'pt' should fix the issue.
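A minimal sketch of what the corrected snippet could look like (my own variation, not from the thread: it also swaps AutoModel for AutoModelForSeq2SeqLM so the model has the language-modelling head that generate needs, and uses a larger max_length so the summary is not cut off at five tokens):
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."

# PyTorch tensors ("pt"), since the model is a PyTorch model
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, truncation=True, return_tensors="pt")

# beam search; max_length here bounds the length of the generated summary
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))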
0
huggingface
Beginners
TFLongformer Error : Trying to create optimizer slot variable under the scope for tf.distribute.Strategy
https://discuss.huggingface.co/t/tflongformer-error-trying-to-create-optimizer-slot-variable-under-the-scope-for-tf-distribute-strategy/3485
Hello everyone, I am facing an issue that I have been trying to solve for 1 week now. I try to train a tensorflow longformer but I have the following error : Traceback (most recent call last): File “/home/pfrod/architectures/prosenet.py”, line 106, in trainer.train() File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/transformers/trainer_tf.py”, line 549, in train self.distributed_training_steps(batch) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py”, line 828, in call result = self._call(*args, **kwds) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py”, line 871, in _call self._initialize(args, kwds, add_initializers_to=initializers) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py”, line 726, in _initialize *args, **kwds)) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/eager/function.py”, line 2969, in _get_concrete_function_internal_garbage_collected graph_function, _ = self._maybe_define_function(args, kwargs) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/eager/function.py”, line 3361, in _maybe_define_function graph_function = self._create_graph_function(args, kwargs) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/eager/function.py”, line 3206, in _create_graph_function capture_by_value=self._capture_by_value), File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py”, line 990, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py”, line 634, in wrapped_fn out = weak_wrapped_fn().wrapped(*args, **kwds) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/eager/function.py”, line 3887, in bound_method_wrapper return wrapped_fn(*args, **kwargs) File “/home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py”, line 977, in wrapper raise e.ag_error_metadata.to_exception(e) ValueError: in user code: /home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/transformers/trainer_tf.py:671 distributed_training_steps * self.args.strategy.run(self.apply_gradients, inputs) /home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/transformers/trainer_tf.py:662 apply_gradients * self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables))) /home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/transformers/optimization_tf.py:232 apply_gradients * return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs) /home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:604 apply_gradients ** self._create_all_weights(var_list) /home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:783 _create_all_weights self._create_slots(var_list) /home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/adam.py:127 _create_slots self.add_slot(var, ‘m’) /home/pfrod/anaconda3/envs/env_minus/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:844 
add_slot .format(strategy, var)) ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7f0f5c22fd50>), which is different from the scope used for the original variable (<tf.Variable ‘tf_longformer_for_sequence_classification/longformer/embeddings/word_embeddings/weight:0’ shape=(50265, 768) dtype=float32, numpy= array([[ 0.15307617, -0.03359985, 0.08703613, …, -0.02035522, 0.02037048, -0.00749207], [ 0.01556396, 0.00740433, -0.01169586, …, -0.00212097, 0.00801086, -0.01560974], [-0.04318237, -0.08050537, -0.02220154, …, 0.12414551, -0.01826477, -0.03604126], …, [ 0.03164673, 0.04992676, -0.03146362, …, 0.03674316, 0.00679016, 0.01078033], [ 0.06192017, -0.05645752, 0.02749634, …, -0.0916748 , 0.10888672, -0.0161438 ], [ 0.12585449, -0.01345062, 0.03518677, …, 0.01661682, 0.03457642, 0.01670837]], dtype=float32)>). Make sure the slot variables are created under the same strategy scope. This may happen if you’re restoring from a checkpoint outside the scope When running the following code : from transformers import TFLongformerForSequenceClassification, LongformerTokenizer, TFTrainer, TFTrainingArguments, LongformerForSequenceClassification, LongformerConfig, TFLongformerModel import numpy as np import tensorflow as tf from tensorflow.data import Dataset from pathlib import Path from tqdm import tqdm from sklearn.model_selection import train_test_split gpu_act = True if gpu_act : GPU = tf.config.list_physical_devices('GPU')[0] tf.config.experimental.set_virtual_device_configuration(GPU, [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=8192//2)]) tokenizer = LongformerTokenizer.from_pretrained('../storage/tokenizer', max_length = 2048) model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', gradient_checkpointing=True, attention_window = 512, return_dict = True) PATH = Path("../storage/treated_articles") iterd = PATH.iterdir() dat = [] labels = [] for label in iterd: for article in tqdm(label.iterdir()): dat.append(str(article)) labels.append(str(label)[-17 :] == '/RELEVANT_TREATED') files_train, files_test, y_train, y_test = train_test_split(dat, labels, test_size = 0.33, shuffle = True) x_train= {'input_ids' : [None]*len(files_train), 'attention_mask' : [None]*len(files_train)} for i, file in enumerate(files_train) : tok = tokenizer(open(file, 'r').read().replace('\n\n','. ').replace('..', '.').replace('\n', ''), padding = 'max_length', truncation = True, max_length = 2048, return_tensors = 'tf') x_train['input_ids'][i] = tok['input_ids'][0] x_train['attention_mask'][i] = tok['attention_mask'][0] x_test = {'input_ids' : [None]*len(files_test), 'attention_mask' : [None]*len(files_test)} for i, file in enumerate(files_test) : tok = tokenizer(open(file, 'r').read().replace('\n\n','. 
').replace('..', '.').replace('\n', ''), padding = 'max_length', truncation = True, max_length = 2048, return_tensors = 'tf') x_test['input_ids'][i] = tok['input_ids'][0] x_test['attention_mask'][i] = tok['attention_mask'][0] x_train['input_ids'] = tf.convert_to_tensor(x_train['input_ids']) x_train['attention_mask'] = tf.convert_to_tensor(x_train['attention_mask']) x_test['input_ids'] = tf.convert_to_tensor(x_test['input_ids']) x_test['attention_mask'] = tf.convert_to_tensor(x_test['attention_mask']) data_x_train = Dataset.from_tensor_slices(x_t) data_y_train = Dataset.from_tensor_slices(list(map(int, y_train))) data_train = Dataset.zip((data_x_train, data_y_train)) data_x_test = Dataset.from_tensor_slices(x_te) data_y_test = Dataset.from_tensor_slices(list(map(int, y_test))) data_test = Dataset.zip((data_x_test, data_y_test)) training_args = TFTrainingArguments( output_dir = '../results/interpretable_longformer', num_train_epochs = 8, gradient_accumulation_steps = 8, evaluation_strategy = "epoch", disable_tqdm = False, warmup_steps=150, weight_decay=0.01, logging_steps = 4, fp16 = True, logging_dir='../results/logging_interpretable_longformer', run_name = 'longformer-classification-updated-rtx3090_paper_replication_2_warm', ) trainer = TFTrainer(model=model, args=training_args, train_dataset=data_train, eval_dataset=data_test) trainer.train() I am not really used to posting my issues, so if I didn’t give enough information about my code, please let me know ! Thanks in advance !
pinging @patrickvonplaten , @jplu
0
huggingface
Beginners
How can I get advantage using multi-GPUs
https://discuss.huggingface.co/t/how-can-i-get-advantage-using-multi-gpus/3305
Hello. I try to train RoBERTa from scratch. I have several V100 GPUs. I already know that huggingface’s transformers automatically detect multi-gpu. And I checked it for myself in training log. But, there is something I couldn’t understand. There is no improvement performance between using single and multi GPUs. I experimented 3 cases, which are training same model with same batch size on single-GPU, 2-GPUs, 4-GPUs. There are code and training loss graph for comparing below. from transformers import RobertaConfig config = RobertaConfig( num_hidden_layers=4, hidden_size=512, hidden_dropout_prob=0.1, num_attention_heads=8, attention_probs_dropout_prob=0.1, intermediate_size=2048, vocab_size=34492, type_vocab_size=1, initializer_range=0.02, max_position_embeddings=512, position_embedding_type="absolute" ) from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained("tokenizer", max_len=512) from transformers import RobertaForMaskedLM model = RobertaForMaskedLM(config=config) from transformers import LineByLineTextDataset train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="train.txt", block_size=tokenizer.max_len_single_sentence ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments num_train_epochs = 4 max_steps = num_train_epochs * len(train_dataset) warmup_steps = int(max_steps*0.05) training_args = TrainingArguments( output_dir="output", overwrite_output_dir=True, do_train=True, max_steps=max_steps, warmup_steps=warmup_steps, num_train_epochs=num_train_epochs, per_device_train_batch_size=100, learning_rate=5e-5, weight_decay=0, max_grad_norm=1, adam_beta1=0.9, adam_beta2=0.98, adam_epsilon=1e-6, # disable_tqdm=True logging_dir="log", logging_first_step=True ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, ) trainer.train() KakaoTalk_20210120_2240377413269×1806 1.12 MB The dark blue line is using 4-GPUs, grey line is using 2-GPUs and sky blue line is using single-GPU. As the number of GPU increases, the number of steps(x-axis) are much smaller. I understand that the shape of the loss reduction is the same. However I couldn’t understand why multi-GPU’s training speed is more slower than single-GPU. If it is normal, How can I upgrade performance with multi-GPU in this code? Is there option I can tune?
Hi @HyeyeonKoo, for multi-GPU training you need to launch the script with torch.distributed.launch. Have a look at the "Distributed training and mixed precision" section of the examples docs in github.com/huggingface/transformers (master/examples).
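As a rough illustration (the exact flags depend on your setup and on run_mlm.py's current arguments), launching one process per GPU typically looks like this; the example scripts parse the --local_rank flag that the launcher injects, which is what makes the Trainer switch to DistributedDataParallel:
python -m torch.distributed.launch --nproc_per_node=4 run_mlm.py \
    --model_type roberta \
    --config_name path/to/your/config \
    --tokenizer_name path/to/your/tokenizer \
    --train_file train.txt \
    --do_train \
    --per_device_train_batch_size 100 \
    --output_dir output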
0
huggingface
Beginners
Fine-tuning Spanish BERT model
https://discuss.huggingface.co/t/fine-tunning-spanish-bert-model/3482
Hi, how can I fine-tune the Spanish BERT model: huggingface.co dccuchile/bert-base-spanish-wwm-cased · Hugging Face
Do you want to fine-tune it for a specific task or with more text data?
0
huggingface
Beginners
Using a dataset with already masked tokens
https://discuss.huggingface.co/t/using-a-dataset-with-already-masked-tokens/3436
I am trying to fine tune BERT for Masked Language Modeling and I would like to use a dataset that already contains masked tokens (I want to mask particular words rather than randomly chosen ones). How can I do this? I am following these https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb#scrollTo=KDBi0reX3l_g 4 instructions, but I am not sure which parts of the code I need to change for it to be compatible with a dataset that already has [MASK] tokens in it. Thanks!
The masking is done by the data collator DataCollatorForLanguageModeling. Just pass along mlm=False to that data collator to deactivate the random masking there.
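A minimal sketch of that setup (the note about labels is my own addition, not from the thread): with mlm=False the collator simply copies input_ids into labels, so for text that already contains [MASK] tokens you would supply the label column (the original, unmasked token ids) yourself during preprocessing.
from transformers import BertTokenizerFast, DataCollatorForLanguageModeling

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# mlm=False turns off the random 15% masking inside the collator;
# the [MASK] tokens already present in your text are left untouched
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)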
0
huggingface
Beginners
How to prime GPT-2 with input-output pairs
https://discuss.huggingface.co/t/how-to-prime-gpt-2-with-input-output-pairs/3490
Hi, first post here! Let me know if I’m in the wrong subforum. It looks like it’s possible to prime GPT-3 with an input and output (see, e.g. github.com/shreyashankar/gpt3-sandbox 3). I’m wondering how to do this for GPT-2. Further details: My use case is to try to replicate the results of this demo 3, whose author primes GPT-3 with the following text: gpt.add_example(Example('apple', 'slice, eat, mash, cook, bake, juice')) gpt.add_example(Example('book', 'read, open, close, write on')) gpt.add_example(Example('spoon', 'lift, grasp, scoop, slice')) gpt.add_example(Example('apple', 'pound, grasp, lift')) I only have access to GPT-2, via the Huggingface Transformer. How can I prime GPT-2 large on Huggingface to replicate the above examples? The issue is that, with the online Hugginface demo, one doesn’t get to prime with the input and corresponding output separately (as the author of the GPT-3 demo did above). Similarly, I can’t find anything in the Huggingface documentation describing how to prime with examples of input-output pairs, like Example('apple', 'slice, eat, mash, cook, bake, juice'). Does anyone know how to do this? Desired output: use GPT-2 to return something like, for input “potato”, output “peel, slice, cook, mash, bake” (as in the GPT-3 demo above). Obviously the exact list of output verbs won’t be the same as GPT-2 and GPT-3 are not identical models.
Hi @DGhose, I’ve found that using the following prompt format to be reasonably good at getting GPT-2 to complete the pattern for the last input_n: input_1 => output_1 \n input_2 => output_2 \n ... input_n => So for your use case, you could try feeding something like the following apple => slice, eat, mash, cook, bake, juice \n book => read, open, close, write on \n spoon => lift, grasp, scoop, slice \n banana => which in the HuggingFace inference API for gpt2-xl produces a semi-coherent output for “banana”: You’ll probably need more examples if you’re doing more complex mappings (e.g. language translation) and it takes a few tries to “cherry pick” the desired output because the text generation is not deterministic in the API (I think they use sampling)
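A rough sketch of how that prompt could be fed to GPT-2 locally (the model size and sampling settings are just one choice, not something prescribed in the thread):
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

# few-shot prompt: input => output pairs, one per line, ending with the new input
prompt = (
    "apple => slice, eat, mash, cook, bake, juice\n"
    "book => read, open, close, write on\n"
    "spoon => lift, grasp, scoop, slice\n"
    "banana =>"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_length=inputs["input_ids"].shape[1] + 15,  # allow roughly 15 new tokens
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))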
0
huggingface
Beginners
Error in fine-tuning BERT
https://discuss.huggingface.co/t/error-in-fine-tuning-bert/3469
As a follow-up from my previous question 5, I am trying to fine-tune a model, but I am getting an error: IndexError: tuple index out of range. I am trying to classify individual sentences with a binary classification. I am using transformers version 4.2.1 and datasets version 1.2.1 The dataset(s) are .csv files with two columns: “sentence” and “label”. The following is the code that led to the error - if anyone can help identify my error, please let me know import numpy as np from transformers import TrainingArguments, Trainer from transformers import BertTokenizer, BertForSequenceClassification from datasets import load_dataset, load_metric tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2) dataset = load_dataset('csv', data_files={'train': "train_data.csv", 'test': "test_data.csv"}) metric = load_metric('f1', 'accuracy') encoded_dataset = dataset.map(lambda x: tokenizer(x['sentence'], padding=True, truncation=True), batched=True,load_from_cache_file=False) batch_size = 16 args = TrainingArguments( "test_20210201_1200", evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=5, weight_decay=0.01, seed=18, label_names='label', load_best_model_at_end=True, metric_for_best_model='f1', ) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model, args, train_dataset=encoded_dataset['train'], eval_dataset=encoded_dataset['test'], tokenizer=tokenizer, compute_metrics=compute_metrics ) All of that runs with no problem. However, I get the following error next: trainer.train() --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-2-3435b262f1ae> in <module> ----> 1 trainer.train() /usr/local/bin/miniconda3/envs/tfhub/lib/python3.8/site-packages/transformers/trainer.py in train(self, model_path, trial) 933 934 self.control = self.callback_handler.on_epoch_end(self.args, self.state, self.control) --> 935 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) 936 937 if self.args.tpu_metrics_debug or self.args.debug: /usr/local/bin/miniconda3/envs/tfhub/lib/python3.8/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch) 1002 metrics = None 1003 if self.control.should_evaluate: -> 1004 metrics = self.evaluate() 1005 self._report_to_hp_search(trial, epoch, metrics) 1006 /usr/local/bin/miniconda3/envs/tfhub/lib/python3.8/site-packages/transformers/trainer.py in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix) 1440 start_time = time.time() 1441 -> 1442 output = self.prediction_loop( 1443 eval_dataloader, 1444 description="Evaluation", /usr/local/bin/miniconda3/envs/tfhub/lib/python3.8/site-packages/transformers/trainer.py in prediction_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix) 1569 losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0) 1570 if logits is not None: -> 1571 preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100) 1572 if labels is not None: 1573 labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100) 
/usr/local/bin/miniconda3/envs/tfhub/lib/python3.8/site-packages/transformers/trainer_pt_utils.py in nested_concat(tensors, new_tensors, padding_index) 83 ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}." 84 if isinstance(tensors, (list, tuple)): ---> 85 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) 86 elif isinstance(tensors, torch.Tensor): 87 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index) /usr/local/bin/miniconda3/envs/tfhub/lib/python3.8/site-packages/transformers/trainer_pt_utils.py in <genexpr>(.0) 83 ), f"Expected `tensors` and `new_tensors` to have the same type but found {type(tensors)} and {type(new_tensors)}." 84 if isinstance(tensors, (list, tuple)): ---> 85 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) 86 elif isinstance(tensors, torch.Tensor): 87 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index) /usr/local/bin/miniconda3/envs/tfhub/lib/python3.8/site-packages/transformers/trainer_pt_utils.py in nested_concat(tensors, new_tensors, padding_index) 85 return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) 86 elif isinstance(tensors, torch.Tensor): ---> 87 return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index) 88 elif isinstance(tensors, np.ndarray): 89 return numpy_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index) /usr/local/bin/miniconda3/envs/tfhub/lib/python3.8/site-packages/transformers/trainer_pt_utils.py in torch_pad_and_concatenate(tensor1, tensor2, padding_index) 46 def torch_pad_and_concatenate(tensor1, tensor2, padding_index=-100): 47 """Concatenates `tensor1` and `tensor2` on first axis, applying padding on the second if necessary.""" ---> 48 if len(tensor1.shape) == 1 or tensor1.shape[1] == tensor2.shape[1]: 49 return torch.cat((tensor1, tensor2), dim=0) 50 IndexError: tuple index out of range Thanks in advance!
Hi @AlanFeder, judging by the stack trace my first guess is that the problem comes from a conflict between padding in the dataset.map operation vs padding on-the-fly in the Trainer. As described in the Trainer docs 7, when you pass the tokenizer to the Trainer it will be used as follows: The tokenizer used to preprocess the data. If provided, will be used to automatically pad the inputs the maximum length when batching inputs, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model. So it seems that in your code, you’re doing padding twice: once in dataset.map and then again during training. Can you remove the padding=True argument from your tokenization step and see if that works?
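For example, the map step could become something like this (a sketch, assuming you keep passing the tokenizer to the Trainer so it pads each batch on the fly):
encoded_dataset = dataset.map(
    lambda x: tokenizer(x["sentence"], truncation=True),  # no padding here
    batched=True,
    load_from_cache_file=False,
)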
0
huggingface
Beginners
Gpt2 inference with onnx and quantize
https://discuss.huggingface.co/t/gpt2-inference-with-onnx-and-quantize/3459
Hey guys, I’ve managed to create a quantize version of gpt2 using onnxruntime but i don’t seem to be able to run it for some reason. anyone has a tutorial for it? also how does the “generate” method of the model will work with that any ideas?
Hi @yanagar25 when you say you cannot run the quantized version, what kind of error are you running into? Here’s a notebook that explains how to export a pretrained model to the ONNX format: transformers/04-onnx-export.ipynb at master · huggingface/transformers · GitHub 83 You can also find more details here: Exporting transformers models — transformers 4.2.0 documentation 28 I don’t see an obvious reason why the generate method should not work after quantization, so as with most things in deep learning the best advice is to just try and see if it does
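If it helps, a rough sketch of running an exported, quantized GPT-2 graph with onnxruntime might look like the following. The file path and the input names are assumptions (they depend on how you exported the model), and note that generate belongs to the PyTorch/TF model classes, so with a raw InferenceSession you would write the decoding loop (greedy, sampling, etc.) yourself:
import numpy as np
from onnxruntime import InferenceSession, SessionOptions, GraphOptimizationLevel
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

options = SessionOptions()
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
# hypothetical path to the quantized export
session = InferenceSession("onnx/gpt2-quantized.onnx", options)

inputs = tokenizer("My name is", return_tensors="np")
# input names depend on the export; here they are assumed to match the tokenizer keys
logits = session.run(None, {"input_ids": inputs["input_ids"],
                            "attention_mask": inputs["attention_mask"]})[0]

# one greedy step; repeat in a loop to generate longer text
next_token_id = int(np.argmax(logits[0, -1]))
print(tokenizer.decode([next_token_id]))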
0
huggingface
Beginners
Training a domain-specific roberta from roberta-base
https://discuss.huggingface.co/t/training-a-domain-specific-roberta-from-roberta-base/2324
Hey there, I apologize in advance if the question below is simple but I’m new to transformers and I want to make sure I get things right before wasting my GPU time training the “wrong” model. The goal: I want to train a domain-specific roberta model, building on the pre-trained roberta model, therefore starting from roberta-base’s weights rather than from scratch. The issue: I first followed your tutorial 20 , before realizing that the weights were not initialized on roberta-base’s before training. My question: What are the correct steps to train a domain-specific model on-top of roberta-base? Train a ByteLevelBPETokenizer on my data tokenizer = ByteLevelBPETokenizer() tokenizer.train(files=paths, vocab_size=50_000, min_frequency=2, special_tokens=["<s>", “<pad>”, “</s>”, “<unk>”, “<mask>”]) tokenizer.save_model(“mymodel”)’ and use it to preprocess my data, FYI I have 700,000 sentences stored in a txt file, one sentence per line. from transformers import LineByLineTextDataset training_dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=“data/training.txt”, block_size=128,) evalutation_dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=“data/testing.txt”, block_size=128,) Get the roberta config from transformers import RobertaConfig, RobertaForMaskedLM config = RobertaConfig( vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=12, type_vocab_size=1,) config.save_model(“mymodel”) model = RobertaForMaskedLM(config=config) Or should I instead use: from transformers import RobertaForMaskedLM model= RobertaForMaskedLM.from_pretrained(‘roberta-base’) Get data Collaor from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) Set up training arguments and train from transformers import Trainer, TrainingArguments, EvalPrediction What I currently have: training_args = TrainingArguments( output_dir="./mymodel", evaluation_strategy=“steps”, prediction_loss_only=True, per_device_train_batch_size= 32, per_device_eval_batch_size=32, eval_accumulation_steps = 200, weight_decay=0.01, adam_epsilon=1e-6, max_steps=200000, warmup_steps=1, save_steps=200, save_total_limit=5, eval_steps= 100, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=training_dataset, eval_dataset=evalutation_dataset, ) trainer.train() Should I instead use: python run_mlm.py –model_name_or_path roberta-base –train_file path_to_train_file –validation_file path_to_validation_file –do_train –do_eval –output_dir /tmp/test-mlm Thanks in advance for any help you can provide!
Hi aberquand, I don’t think you can use the pre-trained weights with your domain-specific vocabulary. [I am not an expert, and I’ve only used BERT not RoBERTA, and I didn’t use the Trainer, so I could be wrong.] If I understand it correctly, the way the weights learn is dependent on the particular vocabulary. I suggest you use the pre-trained vocabulary as well as the pre-trained weights. How different is your vocabulary from the original RoBERTA vocabulary? I would expect 700000 sentences would be enough to do fine-tuning, but probably not enough to train from scratch. Did you know, if you use Google Colaboratory you can get a limited amount of GPU-time for free. It generally maxes out after about 7 hours each day. Colab uses Jupyter notebooks. For more information on tokenizer vocabulary, I recommend Chris McCormick’s blogs, eg https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/ 5
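To make that suggestion concrete, continuing masked-language-model training on top of roberta-base while keeping its original tokenizer would start roughly like this (a sketch; the rest of the Trainer setup from the question can stay the same):
from transformers import RobertaTokenizerFast, RobertaForMaskedLM, DataCollatorForLanguageModeling

# reuse the original vocabulary and the pre-trained weights,
# then continue MLM training on the domain corpus
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)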
0
huggingface
Beginners
Sequence Classification – Fine Tune?
https://discuss.huggingface.co/t/sequence-classification-fine-tune/3410
Hi, I am new to Transformers/NLP. I am trying to use Transformers for text classification. If I am not classifying to one of the pre-made GLUE benchmarks (and using my own use-case classes & texts), do I have to “fine-tune” the model? If I have 35k texts, and 2 labels (imbalanced – 98% vs 2%) can I just use the AutoModelforSequenceClassification to throw a softmax on the end of the transformer, and train that softmax? Or do I fine-tune the whole thing, using this tutorial 9? Thanks! I am excited about better understanding the field and more effectively using the library.
Hi @AlanFeder, I’m not familiar with imbalanced datasets, but if I were you, I would try using examples/text-classification/run_glue.py 10. In this example, instead of assigning GLUE benchmarks task names, we can use our own train_file, validation_file, and test_file. python run_glue.py \ --model_name_or_path bert-base-cased \ --train_file train_file_name \ --validation_file validation_file_name \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /path/to/output/ You can use CSV or JSON. datasets.load_dataset is used in the script, so the document of datasets.load_dataset for local files 1 may help you. I remember that columns of the following format were easier to handle, but even if the names or order of the columns are different, it may work if the code is rewritten appropriately. (Please see transformers/run_glue.py at 5ed5a54684ef059fa4c9710858b8e03c61295914 · huggingface/transformers · GitHub 3 for the detail.) sentence1, sentence2, label or sentence, label Again, I’m not familiar with imbalanced datasets, so I don’t think I’ve answered your question “do I have to fine-tune?” Sorry. I am just one of the users who is learning about this library, so I hope you can get more useful advice from someone who knows more than I do.
0
huggingface
Beginners
Issue with Transformer notebook’s Getting Started Tokenizers
https://discuss.huggingface.co/t/issue-with-transformer-notebooks-getting-started-tokenizers/3444
In Documentation, I went to Transformer Notebooks and clicked on the collab for Getting Started Tokenizer. I executed each cell and when I got to the cell where: from tokenizers.trainers import BpeTrainer trainer = BpeTrainer(vocab_size=25000, show_progress=True, initial_alphabet=ByteLevel.alphabet()) tokenizer.train(trainer, [“big.txt”]) print(“Trained vocab size: {}”.format(tokenizer.get_vocab_size())) I ran the cell and got this: TypeError: Can’t convert <tokenizers.trainers.BpeTrainer object at 0x7f8641325570> to Sequence I am assuming these cells should work so something changed with the software but not updated in the notebook. I am trying to learn transformers on my own so where can I go to learn if Hugging Face Doc is not up to date? Any help will be appreciated.
Hi @krwin, this indeed seems to be a bug in the notebook 5, where the order of the arguments for tokenizer.train() in this cell from tokenizers.trainers import BpeTrainer # We initialize our trainer, giving him the details about the vocabulary we want to generate trainer = BpeTrainer(vocab_size=25000, show_progress=True, initial_alphabet=ByteLevel.alphabet()) tokenizer.train(trainer, ["big.txt"]) print("Trained vocab size: {}".format(tokenizer.get_vocab_size())) is back-to-front (see the docs 6). To fix the problem you can just specify the arguments explicitly: tokenizer.train(trainer=trainer, files=["big.txt"]) cc: @anthony
0
huggingface
Beginners
HF Datasets loading csv
https://discuss.huggingface.co/t/hf-datasets-loading-csv/3434
I am attempting to use the following code snippet to load my own custom csv file: from datasets import load_dataset dataset = load_dataset('csv', data_files=['my_file_1.csv', 'my_file_2.csv']) I am replacing ['my_file_1.csv', 'my_file_2.csv'] with one file name, 'my_file_1.csv'. However, I am getting the following error: TypeError: expected string or bytes-like object Any suggestions?
Hi ! Can you post the full stack trace please ?
0
huggingface
Beginners
BartTokenizer with vocab.json and merge.txt which were created by ByteLevelBPETokenizer encode <s> into 3 tokens
https://discuss.huggingface.co/t/barttokenizer-with-vocab-json-and-merge-txt-which-were-created-by-bytelevelbpetokenizer-encode-s-into-3-tokens/3393
Hi, I want to create vocab.json and merge.txt and use them with BartTokenizer. But somehow tokenizer encode <s> into [32, 87, 34] which was originally [0]. Could you show me how to create vocab.json and merge.txt correctly. or my way of loading vocab.json and merge.txt may be wrong. Anyway here is what I did. # in this notebook we'll only get one of the files (the Oscar one) for the sake of simplicity and performance # !wget -c https://cdn-datasets.huggingface.co/EsperBERTo/data/oscar.eo.txt # import from pathlib import Path from tokenizers import ByteLevelBPETokenizer paths = [str(x) for x in Path(".").glob("**/*.txt")] # Initialize a tokenizer tokenizer = ByteLevelBPETokenizer() # Customize training tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) # check a sentence. input1 = "Mi estas Julien." tokenizer.encode("Mi estas Julien.").tokens Output: ['Mi', 'Ġestas', 'ĠJuli', 'en', '.'] < looks good. tokenizer.encode("Mi estas Julien.").ids Output: [958, 316, 14540, 276, 18] < looks good # check <s> and </s> tokenizer.encode("<s>").ids, tokenizer.encode("</s>").ids Output: ([0], [2]) < looks good # save vocab and merge !mkdir output tokenizer.save_model("output","test") # now let's load vocab.json and merge.txt # import BartTokenizer from transformers import BartTokenizer tokenizer = BartTokenizer( vocab_file="output/test-vocab.json", merges_file="output/test-merges.txt", bos_token="<s>", eos_token="</s>", sep_token="</s>", cls_token="<s>", unk_token="<unk>", pad_token="<pad>", mask_token="<mask>", ) input1 = "Mi estas Julien." encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[ 958, 316, 14540, 276, 18]]) < looks good input1 = "<s>Mi estas Julien.</s>" encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[ 32, 87, 34, 958, 316, 14540, 276, 18, 918, 87, 34]]) < ? # <s> is now [32, 87, 34] ??? input1 = "<s>" encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[32, 87, 34]]) < ??? It seems encoding and decoding is working but only special_tokens is not working. Could you give me hint to fix this problem? My ultimate goal is to train Bart model with my language. Or is it okay that tokenizer encode <s> into 3 tokens? Or can I modify vocab.json and merge.txt manually to let BartTokenizer encode <s> into [0] ? Thanks in advance.
[UPDATED] I got a workaround. It seems like initializing BartTokenizer from vocab.json and merge.txt cause the problem. Even when I initialize BartTokenizer with vocab.json and merge.txt form Roberta’s pre-trained ones, same problem happened. Here’s my codes. # import BartTokenizer from transformers import BartTokenizer tokenizer = BartTokenizer( vocab_file="roberta/vocab.json", merges_file="roberta/merges.txt", bos_token="<s>", eos_token="</s>", sep_token="</s>", cls_token="<s>", unk_token="<unk>", pad_token="<pad>", mask_token="<mask>", ) vocab.json and merge.txt was downloaded from below. https://huggingface.co/roberta-base/resolve/main/vocab.json https://huggingface.co/roberta-base/resolve/main/merges.txt input1 = "This is a pen." encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[ 713, 16, 10, 7670, 4]]) input1 = "<s>This is a pen.</s>" encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[41552, 29, 15698, 713, 16, 10, 7670, 49803, 29, 15698]]) < ??? input1 = "<s>" encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[41552, 29, 15698]]) < ??? I got similar problem even with from_pretrained. tokenizer = BartTokenizer.from_pretrained('facebook/bart-base', add_prefix_space=True) input1 = "This is a pen." encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[ 152, 16, 10, 7670, 4]]) input1 = "<s> This is a pen.</s>" encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[1437, 0, 152, 16, 10, 7670, 4, 2]]) < ??? encoded = tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[1437, 0]]) <<< ??? But when I use AutoTokenizer, it works fine. from transformers import AutoTokenizer # tokenizer tokenizer = AutoTokenizer.from_pretrained( "facebook/bart-base", ) input1 = "This is a pen." encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[ 713, 16, 10, 7670, 4]]) input1 = "<s>This is a pen.</s>" encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[ 0, 713, 16, 10, 7670, 4, 2]]) encoded = tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[0]]) < looks good And I found a workaround. Saving pre-trained tokenizer model first and replacing vocab.json and merge.txt with the files created by ByteLevelBPETokenizer works. # save tokenizer model. tokenizer.save_pretrained("./saved_model") # replace vocab.json and merge.txt # load tokenizer model tokenizer = AutoTokenizer.from_pretrained('./saved_model/') input1 = "Mi estas Julien." encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[ 958, 316, 14540, 276, 18]]) input1 = "<s>Mi estas Julien.</s>" encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[ 0, 958, 316, 14540, 276, 18, 2]]) input1 = "<s>" encoded = tokenizer(input1, add_special_tokens=False, return_tensors="pt").input_ids Output: tensor([[0]]) < looks good!
0
huggingface
Beginners
Why `from_pretrained` method still works when model config is mismatched?
https://discuss.huggingface.co/t/why-from-pretrained-method-still-works-when-model-config-is-mismatched/3360
Hi, this might be a silly question, but I am trying to configure a customized BART and use the from_pretrained method to load weights. I expected some errors to be raised, since my config completely matches neither the config in config.json · facebook/bart-base at main nor config.json · facebook/bart-large at main, but no error occurred. from transformers import BartForConditionalGeneration, BartConfig myconfig = BartConfig(d_model=1024, max_position_embeddings=256, encoder_attention_heads=8, decoder_attention_heads=8, encoder_layers=10, decoder_layers=10) model = BartForConditionalGeneration(myconfig) model = model.from_pretrained('facebook/bart-base') model = BartForConditionalGeneration(myconfig) model = model.from_pretrained('facebook/bart-large') I just wonder why.
from_pretrained is a classmethod: calling it on your model ignores the instance and its custom config entirely, loads the checkpoint's own config.json, builds a fresh model from that, and returns it, which is why no error is raised.
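A small sketch of the difference (the commented-out line is my own addition: forcing the custom config onto the checkpoint should complain about mismatched weight shapes rather than pass silently):
from transformers import BartConfig, BartForConditionalGeneration

myconfig = BartConfig(d_model=1024, max_position_embeddings=256,
                      encoder_attention_heads=8, decoder_attention_heads=8,
                      encoder_layers=10, decoder_layers=10)

# randomly initialised model that really uses the custom config
model = BartForConditionalGeneration(myconfig)

# classmethod call: ignores `model` and `myconfig`, rebuilds from the checkpoint's own config.json
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# model = BartForConditionalGeneration.from_pretrained("facebook/bart-base", config=myconfig)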
0
huggingface
Beginners
The location of script for “Training an Abstractive Summarization Model”
https://discuss.huggingface.co/t/the-location-of-script-for-training-an-abstractive-summarization-model/3357
Hi, I have a silly question. I want to try the scripts on Training an Abstractive Summarization Model 5. But somehow I could not find out the github url for that. I mean I want to run the scripts below on the page. python main.py \ --mode abstractive \ --model_name_or_path bert-base-uncased \ --decoder_model_name_or_path bert-base-uncased \ --cache_file_path data \ --max_epochs 4 \ --do_train --do_test \ --batch_size 4 \ --weights_save_path model_weights \ --no_wandb_logger_log_model \ --accumulate_grad_batches 5 \ --use_scheduler linear \ --warmup_steps 8000 \ --gradient_clip_val 1.0 \ --custom_checkpoint_every_n 300 Could someone give me the github url? Thanks in advance.
Hi @kouohhashi, as far as I know the summarisation scripts have been migrated to the seq2seq examples here: transformers/examples/seq2seq at master · huggingface/transformers · GitHub 9 There you can find BART, T5, and Pegasus, although I suggest starting with Pegasus since it produces decent summaries and is relatively light at ~500M params. Note that the URL you link to is a different library to transformers: if you have questions about that library I suggest you open an issue in their repo / forum
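For a quick start, a summarization pipeline with a Pegasus checkpoint looks roughly like this (the xsum checkpoint is just one choice):
from transformers import pipeline

summarizer = pipeline("summarization", model="google/pegasus-xsum")
article = "Long article text goes here ..."
print(summarizer(article, max_length=60, min_length=10))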
0
huggingface
Beginners
How to train gpt-2 from scratch? (no fine-tuning)
https://discuss.huggingface.co/t/how-to-train-gpt-2-from-scratch-no-fine-tuning/3351
Hi, I would like to train GPT-2 from scratch. I don’t want to fine-tuning an existing model, but actually train it from scratch with my own tokenizer. How could I do it? Thanks.
Hi @iamnotapenguin, the place I would start is by adapting the following script for causal language modelling to your dataset: transformers/run_clm.py at master · huggingface/transformers · GitHub 129 This script allows you to specify both the tokenizer and the model architecture, plus you can do multi-gpu training which is advisable if you’re training from scratch. Hope that helps!
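As a rough example (flag names taken from the script; the config and tokenizer paths are placeholders), training from scratch means passing --model_type and --config_name instead of --model_name_or_path:
python run_clm.py \
    --model_type gpt2 \
    --config_name path/to/your/config \
    --tokenizer_name path/to/your/tokenizer \
    --train_file data/train.txt \
    --validation_file data/valid.txt \
    --do_train --do_eval \
    --per_device_train_batch_size 8 \
    --output_dir gpt2-from-scratch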
0
huggingface
Beginners
Multi class text classification tutorial: how does he get away with one out_feature on linear layer?
https://discuss.huggingface.co/t/multi-class-text-classification-tutorial-how-does-he-get-away-with-one-out-feature-on-linear-layer/630
I’ve read this tutorial: https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb 25 As I see it, the dataset uses 4 labels. I thought that that would imply having 4 out_features on your last linear layer. But when I check his model, he uses just 1. Please help me to fix my misunderstanding
Hi @dickdanieljr, are you referring to this class? class DistillBERTClass(torch.nn.Module): def __init__(self): super(DistillBERTClass, self).__init__() self.l1 = DistilBertModel.from_pretrained("distilbert-base-uncased") self.pre_classifier = torch.nn.Linear(768, 768) self.dropout = torch.nn.Dropout(0.3) self.classifier = torch.nn.Linear(768, 4) def forward(self, input_ids, attention_mask): output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask) hidden_state = output_1[0] pooler = hidden_state[:, 0] pooler = self.pre_classifier(pooler) pooler = torch.nn.ReLU()(pooler) pooler = self.dropout(pooler) output = self.classifier(pooler) return output Here you can see that the last layer is torch.nn.Linear of shape [hidden_dim, num_labels] - or am I missing something?
0
huggingface
Beginners
Trainer epoch does not go through all training data?
https://discuss.huggingface.co/t/trainer-epoch-does-not-go-through-all-training-data/3339
Hello I’m training a model with transformers Trainer but when I set the number of epoch to eg: 1000 then it seems the training just does 1000 steps however an epoch is normally the number of times the model goes through the entire dataset. Thus, how can we use the trainer such that each epoch goes through the full training dataset (and that we see the progression of these) Thanks!
Hi there! Please post the command/code you are executing as we can’t really help without that.
0
huggingface
Beginners
Not sure why padding isn’t working for me
https://discuss.huggingface.co/t/not-sure-why-padding-isnt-working-for-me/3343
There doesn’t seem to be any padding occurring here: train_dataset = dataset.shard(10, 1) train_dataset.set_format(columns=['text']) train_dataset.cleanup_cache_files() encoded_dataset = train_dataset.map(lambda examples: tokenizer(examples['text'], padding=True)) encoded_dataset[:1] {'attention_mask': [[1, 1, 1, 1, 1, 1, 1]], 'input_ids': [[101, 1714, 22233, 21365, 4515, 8618, 102]], 'text': ['free instagram followers '], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0]]} What am I missing?
You are using the strategy that pads to the length of the longest sample, while also passing your samples one by one to the tokenizer, so no padding happens. If you want to pass several samples at once, use batched=True in your call to map. If you want to pad to a specific max_length, pass max_length=xxx and padding="max_length" to your call to the tokenizer.
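Concretely, either of these would do what you expect (a small sketch reusing the names from the question):
# pad each batch to its longest sample
encoded_dataset = train_dataset.map(
    lambda examples: tokenizer(examples["text"], padding=True, truncation=True),
    batched=True,
)

# or pad every sample to a fixed length
encoded_dataset = train_dataset.map(
    lambda examples: tokenizer(examples["text"], padding="max_length", max_length=32, truncation=True)
)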
0
huggingface
Beginners
Predicting answers using DistilBertForQuestionAnswering
https://discuss.huggingface.co/t/predicting-answers-using-distilbertforquestionanswering/3307
I have fine-tuned a DistilBertForQuestionAnswering model using custom data, and I saved that model in Colab. I can load the model as well. I need to know how I can use that loaded model to input a question and get an answer (prediction).
You can find this info in the docs 24. Always nice to check docs before asking here
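For reference, a minimal inference sketch (the save directory, question and context are placeholders) could look like this:
import torch
from transformers import DistilBertTokenizerFast, DistilBertForQuestionAnswering

model_dir = "path/to/saved_model"  # wherever you saved the fine-tuned model
tokenizer = DistilBertTokenizerFast.from_pretrained(model_dir)
model = DistilBertForQuestionAnswering.from_pretrained(model_dir)
model.eval()

question = "Who wrote the report?"
context = "The annual report was written by the finance team in March."

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start/end token positions and decode the answer span
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))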
0