Dataset schema (from the dataset viewer): docs (string, 4 classes), category (string, 3-31 chars), thread (string, 7-255 chars), href (string, 42-278 chars), question (string, 0-30.3k chars), context (string, 0-24.9k chars), marked (int64, 0-1).
huggingface
Models
ImportError: cannot import name ‘TFLongformerForMaskedLM’
https://discuss.huggingface.co/t/importerror-cannot-import-name-tflongformerformaskedlm/1885
I am trying to follow the example mentioned in https://huggingface.co/transformers/model_doc/longformer.html#tflongformerformaskedlm: import tensorflow as tf; from transformers import LongformerTokenizer, TFLongformerForMaskedLM. I get the error below: ImportError Traceback (most recent call last) in 1 import tensorflow as tf ----> 2 from transformers import LongformerTokenizer, TFLongformerForMaskedLM ImportError: cannot import name ‘TFLongformerForMaskedLM’ How can I resolve this? Please let me know. Thanks!
chandrak: "from transformers import LongformerTokenizer, TFLongformerForMaskedLM". Maybe you should check that you have the latest version of Hugging Face Transformers (i.e. 3.4.0). I can import it without any problem. A quick way to check and upgrade is shown below.
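For reference, a minimal check of the installed version (the 3.4.0 threshold comes from the answer above; the upgrade command assumes a pip-based environment):

```python
# Print the installed transformers version; TFLongformerForMaskedLM only exists in recent releases.
import transformers
print(transformers.__version__)

# If it is older than 3.4.0, upgrade from a shell:
#   pip install --upgrade transformers
```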
0
huggingface
Models
Difference in memory efficiency in HF and fairseq
https://discuss.huggingface.co/t/difference-in-memory-efficiency-in-hf-and-fairseq/1715
Hello, I’ve been reading this paper on mBART (https://arxiv.org/pdf/2001.08210.pdf) and came across section 2.2, Optimization, where the authors claim to have a total batch size of 128K tokens per 32GB GPU. I got my hands on one of those, but I only managed to fit about 16k tokens (or 32k if they count generator tokens too); I had max_seq_len of 512, batch_size of 4 and grad_acc of 8, but that is still at least 4 times less. I am using fp16. So, my question is: what is the difference between HF optimization and fairseq optimization? Or what is the difference between the fairseq model and the HF model? Thanks. Actually, I have one more question while writing this: why are there 1024 pos_embeddings when the paper authors write about pre-training with 512? Are they randomly initialised, or is it something different? P.S. I’ve been using facebook/mbart-large-cc25.
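For reference, the token count described above works out as a simple product: 512 tokens per sequence × 4 sequences per batch × 8 gradient-accumulation steps = 16,384 ≈ 16k tokens per update (roughly 32k if target-side tokens are counted as well), versus the 128K tokens per 32GB GPU reported in the paper.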
@patrickvonplaten maybe you can help me understand this. Thanks!
0
huggingface
Models
Help with finetuning mBART on an unseen language
https://discuss.huggingface.co/t/help-with-finetuning-mbart-on-an-unseen-language/1507
Hi everyone, I wanted to know how we would fine-tune mBART on a summarization task in a language other than English. Also, how can we fine-tune mBART on a translation task where one of the languages is not present in the language code list that mBART has been trained on? Appreciate any help! Thank you.
Hi @LaibaMehnaz. DISCLAIMER: I haven’t tried this myself, and as Sam found in his experiments, mBART doesn’t always give good results. For mBART, the source sequence ends with the source language id and the target sequence starts with the target language id. So for summarization you can pass the same language id for both source and target and then fine-tune it the same way you fine-tune any other seq2seq model. For translation, if the language is not present, you can try without using any language id in the sequences. A tokenization sketch for the summarization case is shown below.
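A minimal sketch of the summarization setup described above, assuming a Hindi corpus with the hi_IN code; the exact target-side API depends on your transformers version:

```python
from transformers import MBartTokenizer

# Same language code for source and target, since this is monolingual summarization.
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
tokenizer.src_lang = "hi_IN"
tokenizer.tgt_lang = "hi_IN"

model_inputs = tokenizer("long article text ...", return_tensors="pt")
# Newer versions also accept: tokenizer(..., text_target="short summary ...")
with tokenizer.as_target_tokenizer():
    labels = tokenizer("short summary ...", return_tensors="pt").input_ids
```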
0
huggingface
Models
Optimizing models using ONNX
https://discuss.huggingface.co/t/optimizing-models-using-onnx/1665
I came to know that Hugging Face uses optimized ONNX models for inference on CPU. I tried to do something similar with a Keras VGG16 pretrained model using the keras-onnx package (see this GitHub issue) but couldn’t see any performance benefits. Can I know how exactly Hugging Face optimizes models under the hood?
These two blog posts might help: “Accelerate your NLP pipelines using Hugging Face Transformers and ONNX Runtime” (Medium, 17 Jun 2020, written by Morgan Funtowicz from Hugging Face and Tianlei Wu from Microsoft) and “Faster and smaller quantized NLP with Hugging Face and ONNX Runtime” (Medium, 31 Aug 2020), which shows that popular Hugging Face Transformer models (BERT, GPT-2, etc.) can be shrunk and accelerated with ONNX Runtime quantization without retraining. A sketch of the graph-optimization step is shown below.
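As a rough illustration of the kind of graph optimization ONNX Runtime applies to transformer models (the paths, head count, and hidden size here are assumptions, not values from the thread):

```python
from onnxruntime.transformers import optimizer

# Fuse attention/LayerNorm/GELU subgraphs in an exported BERT-style ONNX graph.
opt_model = optimizer.optimize_model(
    "onnx/model.onnx", model_type="bert", num_heads=12, hidden_size=768
)
opt_model.save_model_to_file("onnx/model-optimized.onnx")
```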
0
huggingface
Models
T5: Tips for finetuning on crossword clues (clue => answer)
https://discuss.huggingface.co/t/t5-tips-for-finetuning-on-crossword-clues-clue-answer/1514
As a baseline for a research project, I am trying to finetune T5 on a large crossword clue set (130,000 clues), where the source is a Clue, and the target is an Answer. I am using T5ForConditionalGeneration and the finetune.py script (examples/seq2seq). I started with T5-small. My source/target files have one pair per line (<Clue>\n, <Answer>\n). I started with from_pretrained(t5-small) for both model and tokenizer. I didn’t add any tokens to the vocabulary. The initial run gave me only gibberish (long strings of entirely non-English outputs), so I am trying an even simpler task: Can T5 learn to select the first word of the input sentence. I.e., I’ve modified inputs and outputs to be something like source: This is a clue with some normal language target: This Where again each entry is on its own line. I observed (under the same training regime as above, with T5-small) that, after 300 epochs, the model gives outputs that look like <first word> <long string of gibberish> I wonder if anyone has some ideas: Is there any issue with having only one word targets? I.e., Should I be using a different loss function than the default? At epoch 2, my loss was already down to 0.001. Rouge scores were around 1.5. I did not change the task name. The finetune.py script seems to default to adding task name summarization. Maybe I should remove or change the task name? How would T5 small vs T5 base or large change the results? I did not add separator tokens, but I think I am not required to given that the example (e.g. finetune_bart_tiny) does not add separator tokens My inputs and outputs generally do not have punctuation (i.e. the clues don’t end in a period, and the answers don’t end in a period). I wonder if this would help? I read the through https://discuss.huggingface.co/t/t5-finetuning-tips/684 1 but I’m not sure if those tweaks will change the results here. I’m mostly just slightly adapting the given finetune_t5.sh script. What’s a reasonable number of epochs of finetuning (using a 60% split of 130,000, so roughly 80k training examples) before I should expect the model to learn to output the first word? I also have an implementation question: Is there a way to get the finetune.py script to print validation results at every epoch so that I can see how the model is learning (qualitatively) over time?
I filed this bug 5 for the gibberish outputs I am observing
0
huggingface
Models
RAG Example and Word-Level contributions
https://discuss.huggingface.co/t/rag-example-and-word-level-contributions/1473
When RAG was presented, it did so along with this very nice post: “Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models” (ai.facebook.com). I was wondering if there is a way to obtain the same information shown in the graphs when using the HF RAG implementation, that is, the document weights, as well as the word-level contribution as referred to in the article, or the RAG-Token document posterior as in the paper. I am aware the document weights can be obtained when doing a forward pass; however, these are not obtainable when using the generate() method, which I think would be a “nice to have”. I guess for now they can be obtained with an extra forward pass before generating, or just by tweaking the generate method locally to return them. However, I am not sure how they obtain the posterior for each document. I am guessing it has to do with an average value over the tokens coming from each of the documents (so one would need to “split” the last hidden layer into the document chunks?). Does anyone know how these could be obtained at generation time in order to produce figures similar to those in the article? Thanks
There’s a demo by the awesome @yjernite that shows you can get the per-example and per-word contributions. There’s currently a PR to open-source the code of the demo, where you can check how it is done. Not sure we can get the RAG-Token posterior easily, though. For the document scores specifically, an extra pass of the question encoder plus retriever is sketched below.
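A sketch (unverified, based on how RagModel computes doc_scores internally) of getting the per-document scores with an extra pass before calling generate(); the dummy-dataset flag is only there so the snippet can run without the full wiki index:

```python
import torch
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
question_hidden = model.rag.question_encoder(inputs["input_ids"])[0]   # [batch, hidden]
docs = retriever(
    inputs["input_ids"].numpy(),
    question_hidden.detach().numpy(),
    return_tensors="pt",
)
# Same inner product RagModel uses to weight the retrieved documents.
doc_scores = torch.bmm(
    question_hidden.unsqueeze(1), docs["retrieved_doc_embeds"].float().transpose(1, 2)
).squeeze(1)
print(doc_scores)   # [batch, n_docs]
```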
0
huggingface
Models
What is the license of /nlptown/bert-base-multilingual-uncased-sentiment?
https://discuss.huggingface.co/t/what-is-the-license-of-nlptown-bert-base-multilingual-uncased-sentiment/1445
Hi, I would like to use this multilingual sentiment model (nice work by the way) in the backend of a Dataiku DSS plugin. What is the open-source license of this model? Cheers, Alex Combessie
Probably best to ask the authors. I don’t think they’re active on this forum. https://mobile.twitter.com/nlptown https://mobile.twitter.com/yvespeirsman
0
huggingface
Models
Out of Memory on very small custom transformer
https://discuss.huggingface.co/t/out-of-memory-on-very-small-custom-transformer/1228
Hi everyone, I am creating a custom transformer to classify protein sequences. These sequences range from 20 to 60000 in length in the form “ANTGGTANGT…”. Once tokenized with a custom BPE tokenizer of vocab 1000, the longest input_ids sequence is 12000. For the transformer I have tried Roberta and Longformer. I got out of memory with both, even with very small parameters and splitting long sequences. For example: Config: config = LongformerConfig( vocab_size=1000, max_position_embeddings=5000, num_attention_heads=8, num_hidden_layers=3, type_vocab_size=1, hidden_size=64, intermediate_size=128 ) Model: Net( (roberta): LongformerModel( (embeddings): RobertaEmbeddings( (word_embeddings): Embedding(1000, 64, padding_idx=1) (position_embeddings): Embedding(12000, 64, padding_idx=1) (token_type_embeddings): Embedding(1, 64) (LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): LongformerEncoder( (layer): ModuleList( (0): LongformerLayer( (attention): LongformerAttention( (self): LongformerSelfAttention( (query): Linear(in_features=64, out_features=64, bias=True) (key): Linear(in_features=64, out_features=64, bias=True) (value): Linear(in_features=64, out_features=64, bias=True) (query_global): Linear(in_features=64, out_features=64, bias=True) (key_global): Linear(in_features=64, out_features=64, bias=True) (value_global): Linear(in_features=64, out_features=64, bias=True) ) (output): BertSelfOutput( (dense): Linear(in_features=64, out_features=64, bias=True) (LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=64, out_features=128, bias=True) ) (output): BertOutput( (dense): Linear(in_features=128, out_features=64, bias=True) (LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (1): LongformerLayer( (attention): LongformerAttention( (self): LongformerSelfAttention( (query): Linear(in_features=64, out_features=64, bias=True) (key): Linear(in_features=64, out_features=64, bias=True) (value): Linear(in_features=64, out_features=64, bias=True) (query_global): Linear(in_features=64, out_features=64, bias=True) (key_global): Linear(in_features=64, out_features=64, bias=True) (value_global): Linear(in_features=64, out_features=64, bias=True) ) (output): BertSelfOutput( (dense): Linear(in_features=64, out_features=64, bias=True) (LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=64, out_features=128, bias=True) ) (output): BertOutput( (dense): Linear(in_features=128, out_features=64, bias=True) (LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (2): LongformerLayer( (attention): LongformerAttention( (self): LongformerSelfAttention( (query): Linear(in_features=64, out_features=64, bias=True) (key): Linear(in_features=64, out_features=64, bias=True) (value): Linear(in_features=64, out_features=64, bias=True) (query_global): Linear(in_features=64, out_features=64, bias=True) (key_global): Linear(in_features=64, out_features=64, bias=True) (value_global): Linear(in_features=64, out_features=64, bias=True) ) (output): BertSelfOutput( (dense): Linear(in_features=64, out_features=64, bias=True) (LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, 
inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=64, out_features=128, bias=True) ) (output): BertOutput( (dense): Linear(in_features=128, out_features=64, bias=True) (LayerNorm): LayerNorm((64,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=64, out_features=64, bias=True) (activation): Tanh() ) ) (cls): Linear(in_features=64, out_features=1314, bias=True) ) The out of memory error I do not get it when instanciating the model and moving it to a cuda device, I got the error always at the start of training, even with a batch size of 1. I got 8GB of cuda memory, and previously I have been able to train pretrained bert-base, roberta-base… and more models before. Is there something I am doing wrong? Do I need to change other parameters of the transformer architecture? Thank you very much
5000 max_position_embeddings is still too much for an 8GB GPU; you could try using fp16 (a minimal mixed-precision training sketch is below).
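A minimal mixed-precision sketch with torch.cuda.amp, assuming your model, dataloader, and optimizer already exist and that the model returns the loss first (if you train with the Trainer instead, fp16=True in TrainingArguments does the same thing):

```python
import torch

scaler = torch.cuda.amp.GradScaler()

for batch in dataloader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        outputs = model(
            input_ids=batch["input_ids"].cuda(),
            attention_mask=batch["attention_mask"].cuda(),
            labels=batch["labels"].cuda(),
        )
        loss = outputs[0]            # assumes the custom model returns (loss, logits)
    scaler.scale(loss).backward()    # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```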
0
huggingface
Models
SpanBert TACRED tokens
https://discuss.huggingface.co/t/spanbert-tacred-tokens/1376
For the SpanBert model fine-tuned on TACRED dataset (ie. https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred 8) the subject and object entities (ie. PERSON, ORGANIZATION, etc) are substituted by an unused token at the original code : replace the subject and object entities by their NER tags such as “[CLS][SUBJ-PER] was born in [OBJ-LOC] , Michigan, . . . ” as described in their paper: https://arxiv.org/pdf/2004.14855.pdf 2 And can also be found in their code: https://github.com/facebookresearch/SpanBERT/blob/master/code/run_tacred.py in lines 134 to 139: def get_special_token(w): if w not in special_tokens: special_tokens[w] = "[unused%d]" % (len(special_tokens) + 1) return special_tokens[w] ... SUBJECT_START = get_special_token("SUBJ_START") SUBJECT_END = get_special_token("SUBJ_END") OBJECT_START = get_special_token("OBJ_START") OBJECT_END = get_special_token("OBJ_END") SUBJECT_NER = get_special_token("SUBJ=%s" % example.ner1) OBJECT_NER = get_special_token("OBJ=%s" % example.ner2) The issue is that to use the pre-trained models one has to substitute those tokens before tokenizing, but there is no way to obtain the originally used ones without the original data (which is not freely available). Does anyone have access to the TACRED dataset and is able to obtain these tokens (or the special_tokens dict) using the original code and share it? And maybe add it somewhere in the repo so it can be easily accessed. Thanks!
Tagging model author @mrm8488
0
huggingface
Models
What’s the license of joeddav/xlm-roberta-large-xnli?
https://discuss.huggingface.co/t/whats-the-license-of-joeddav-xlm-roberta-large-xnli/1446
Hi again, I would like to use this multilingual model for zero-shot classification. Super useful work btw! Before I can embed it in my Dataiku DSS plugin, I need to check the license. Could you confirm what type of open source license this model is released with? Cheers, Alex Combessie
Hey, glad you’ve found it useful! The set of models it’s fine-tuned from (xlm-roberta) lists MIT as the license, so I’ve added the same to xlm-roberta-large-xnli. Also, here is the license for the XNLI dataset, which it was fine-tuned on, for reference.
0
huggingface
Models
RAG custom dataset
https://discuss.huggingface.co/t/rag-custom-dataset/1294
I just saw that Facebook AI released a blog post about RAG (Retrieval Augmented Generation: Streamlining the creation of intelligent natural language processing models) and that it is already incorporated in the Hugging Face API. I looked quickly, and I couldn’t see how to use a custom dataset with it. It seems like it will only pull down indexed datasets from Hugging Face’s AWS storage. I’m wondering if anyone can show me how to (1) create an indexed dataset (I’m assuming this is just a big collection of embeddings made by running documents through a model and taking the output embedding; I’m wondering which model(s) can be used, how many dimensions the embeddings are expected to be, and how to format all of these vectors) and (2) use that custom dataset with HF RAG models.
Indeed, it’s actually very simple to do with 🤗 Datasets and is explained on this page: https://huggingface.co/docs/datasets/faiss_and_ea.html. We will add an example script for this. A rough sketch of building the indexed dataset is below.
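A rough sketch of the approach from that docs page, using the DPR context encoder (which is what the pretrained RAG retriever expects, producing 768-dimensional embeddings); the CSV path and column names are placeholders:

```python
import torch
from datasets import load_dataset
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

# Expected columns: "title" and "text", one passage per row.
ds = load_dataset("csv", data_files="my_passages.csv", split="train")

def embed(batch):
    inputs = ctx_tokenizer(batch["title"], batch["text"],
                           truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        batch["embeddings"] = ctx_encoder(**inputs).pooler_output.numpy()
    return batch

ds = ds.map(embed, batched=True, batch_size=16)
ds.add_faiss_index(column="embeddings")   # FAISS index over the passage embeddings
```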
0
huggingface
Models
Why does RoBERTa behave differently if I provide a corpus that contains special tokens?
https://discuss.huggingface.co/t/why-does-roberta-behave-differently-if-i-provide-a-corpus-that-contains-special-tokens/1083
Hello all, Recently I’ve been working on training from scratch a RoBERTa model starting from the code in this tutorial 1. I am working with a specific corpus I prepared according to my own format <s> ID 10 <i> COUNTRY USA <i> CAPITAL Washington DC </s> and I noticed that one of the parameters that can be passed to the tokenizer.encoder_plus function is add_special_tokens. If add_special_tokens=True, the result of the encoding of the sentence <s> ID 10 <i> COUNTRY USA <i> CAPITAL Washington DC </s> becomes <s> <s> ID 10 <i> COUNTRY USA <i> CAPITAL Washington DC </s> </s> and the special_tokens_mask is 1 0 0 … 0 1 When I tried add_special_tokens=False on the same sentence <s> ID 10 <i> COUNTRY USA <i> CAPITAL Washington DC </s> the result of the encoding was correct: <s> ID 10 <i> COUNTRY USA <i> CAPITAL Washington DC </s> However, the special_tokens_mask remains 0 0 … 0 0. After testing both versions, the result I got from the first was very good, while the second failed. This raises a few issues that I wasn’t able to solve on my own: How can I access the special_tokens_mask to correct it to what it should be? Where does RoBERTa make use of that mask, if it does? Is there a method for setting the mask to something I want? e.g. the mask for <s> ID 10 <i> COUNTRY USA </s> should be 1 0 0 1 0 0 1 if <s>, </s> and <i> should all be treated as special tokens. If RoBERTa is not the correct model to do this, what model should I go for? Thanks a lot!
This actually feels more like a bug than a problem on your end. I suspect that the tokenization between the two is identical (i.e. <s> gets the same ID in both cases), but that means the special_tokens_mask should also be the same. Best to wait for others who might be more certain. In the meantime, you can always build the mask yourself from the encoded ids; a sketch is below.
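A small sketch of computing the mask directly from the ids, so that <s>, </s>, and a custom token like <i> are all flagged; this assumes <i> has actually been added to the tokenizer’s vocabulary:

```python
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
tokenizer.add_tokens(["<i>"])   # only needed if <i> is not already in the vocab

ids = tokenizer("<s> ID 10 <i> COUNTRY USA </s>", add_special_tokens=False)["input_ids"]
special_ids = set(tokenizer.all_special_ids) | set(tokenizer.convert_tokens_to_ids(["<i>"]))
special_tokens_mask = [1 if i in special_ids else 0 for i in ids]
```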
0
huggingface
Models
Any Model for NER on French
https://discuss.huggingface.co/t/any-model-for-ner-on-french/1125
Hi all, I have been looking for a model to run an NER task in French. I see there are CamemBERT and RoBERTa models for token classification, but these models are not fine-tuned for any NER task. Any suggestions on this? If there is no such model, is there any French dataset tagged for NER? Thank you, Sergul
I just asked Pedro (https://github.com/pjox); maybe he knows some good NER datasets for French that are publicly available for fine-tuning. If someone could give me access to the FTB dataset (see the CamemBERT paper), I could fine-tune a model and upload it to the model hub.
0
huggingface
Models
Does the default weight_decay of 0.0 in transformers.AdamW make sense?
https://discuss.huggingface.co/t/does-the-default-weight-decay-of-0-0-in-transformers-adamw-make-sense/1180
Hi, I have a question regarding the AdamW optimizer’s default weight_decay value. In the docs we can clearly see that the AdamW optimizer sets the default weight decay to 0.0. Given that the whole purpose of AdamW is to decouple the weight decay regularization, my understanding is that the results anyone gets with AdamW and Adam, if both are used with weight_decay=0.0 (that is, without weight decay), should be exactly the same. Therefore, wouldn’t it make more sense to have the default weight decay for AdamW > 0? Thank you so much!
Even though I agree about the default value (it should probably be 0.01, as in the PyTorch implementation), this probably should not be changed without warning because it breaks backwards compatibility. It was also implemented in transformers before it was available in PyTorch itself. I guess it is implemented this way because most of the time you decide at initialization which parameters you want to decay and which ones shouldn’t be decayed, such as here (huggingface/transformers, examples/contrib/run_openai_gpt.py#L230-L237):

```python
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
        "weight_decay": args.weight_decay,
    },
    {"params": [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], "weight_decay": 0.0},
]
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon)
```
0
huggingface
Models
Tips for Choosing Model: Automatic Labelling of Articles
https://discuss.huggingface.co/t/tips-for-choosing-model-automatic-labelling-of-articles/976
I am looking for advice on choosing the right model to perform automatic generation of labels for an article. Any model which does “title” generation for articles would also be helpful. Preferably I need a lightweight model that I can run on my laptop. Any suggestions are welcome!
You could try to use seq2seq models for this (BART, T5, etc.), where the input sequence would be the article and the output sequence the title. Both of these models are large, but T5 has a small variant, t5-small, which is around 230MB. A quick way to try it is sketched below.
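For a quick, lightweight experiment you could run t5-small through the summarization pipeline and treat the short generated summary as a candidate title; fine-tuning on (article, title) pairs would of course work better, and the generation lengths here are just guesses:

```python
from transformers import pipeline

titler = pipeline("summarization", model="t5-small", tokenizer="t5-small")
article = "Long article text goes here ..."
# Keep the output short so it reads more like a title than a summary.
print(titler(article, max_length=16, min_length=4, do_sample=False))
```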
0
huggingface
Models
BertModel.forward() output caveat removed?
https://discuss.huggingface.co/t/bertmodel-forward-output-caveat-removed/983
In Transformers 3.0.2 and prior, there used to be a small caveat accompanying the description of 3.0.2 BertModel.forward() 1: Returns pooler_output (torch.FloatTensor: of shape (batch_size, hidden_size)): Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pre-training. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. However, the caveat was removed in 3.1.0 (current master). The description of 3.1.0 BertModel.forward() now just says: Returns pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) - Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. Is there any deeply meaningful reason why that line was removed? Is there a new important secret to that [CLS] token that you’re not telling us?
That caveat was removed because this output is actually what is used for classification in the sequence classification model, and experiments showed it gives the same kind of results as the mean/average of the hidden states. A small comparison sketch is below.
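For anyone who wants to compare the two summaries of the input themselves, here is a small sketch (the model choice is arbitrary, and a recent transformers version with dict-style outputs is assumed):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("A short example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_pooled = outputs.pooler_output                    # tanh(W · h_[CLS]), shape [1, hidden]
mask = inputs["attention_mask"].unsqueeze(-1)         # ignore padding when averaging
mean_pooled = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
```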
0
huggingface
Models
How to use Huggingface model for continuous values directly?
https://discuss.huggingface.co/t/how-to-use-huggingface-model-for-continuous-values-directly/816
Hi, I have a dataset which contains continuous values of shape [batch_size, features]. The features look like this: [0.49221584, -0.021571456, -0.0920076, -0.14408934, -0.62306774]. I want to apply a transformer model to these values and pass it to a final layer, something like this: batch_data ==> Transformer ==> output_layer ==> classification. Currently, I am using hand-coded multi-head attention and norm with a feed-forward network to pass these values to the transformer block. I went through the Hugging Face models, but all the models accept tokens and sequences. Is there any way/hack by which I can use Hugging Face transformer models on continuous values directly? A quick TensorFlow/Keras template to start with continuous values would be helpful.
I think @patrickvonplaten’s suggestion in https://github.com/huggingface/transformers/issues/6608 to use inputs_embeds would be the way to go here (a sketch is below). Or you could also try simply bucketizing your continuous values and embedding the buckets, like tokens.
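A minimal PyTorch sketch of the inputs_embeds route (the shapes and the projection layer are assumptions for illustration, not part of the original suggestion):

```python
import torch
from torch import nn
from transformers import BertConfig, BertModel

# Project continuous features into the model's hidden size and feed them
# through `inputs_embeds` instead of token ids.
config = BertConfig(hidden_size=64, num_hidden_layers=3, num_attention_heads=8, intermediate_size=128)
encoder = BertModel(config)
proj = nn.Linear(5, config.hidden_size)        # 5 continuous features per position
classifier = nn.Linear(config.hidden_size, 2)  # binary classification head

batch = torch.randn(8, 10, 5)                  # [batch_size, seq_len, features]
hidden = encoder(inputs_embeds=proj(batch)).last_hidden_state
logits = classifier(hidden[:, 0])              # classify from the first position
```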
0
huggingface
Models
Info regarding sentence-transformers
https://discuss.huggingface.co/t/info-regarding-sentence-transformers/808
sentence-transformers were recently added to the model hub. I wanted to know the following: were all these models trained on NLI (which dataset? MNLI), and for how many epochs? If I want to use one for inference on an NLI task, do I need to train an additional linear layer? That is, freeze the AutoModel, train an nn.Linear for 1 epoch on the mean output, and then perform inference. Can you please confirm this usage?

```python
tfmr = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens")

class Model(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.encoder = model
        for param in self.encoder.parameters():
            param.requires_grad = False
        self.classifier = nn.Linear(768, 3)
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, inputs):
        # inputs come from a dataloader using default_data_collator
        labels = inputs.pop('labels')
        model_output = self.encoder(**inputs)
        sentence_embeddings = mean_pooling(model_output, inputs['attention_mask'])
        logits = self.classifier(sentence_embeddings)
        loss = self.criterion(logits, labels)
        return loss, logits

model = Model(tfmr)
```
I trained it on MNLI with the encoder frozen; it gets around 46% accuracy after epoch 1. It seems like it has not been trained on MNLI. Can anyone confirm?
0
huggingface
Models
Helsinki-NLP/opus-mt-it-en missing
https://discuss.huggingface.co/t/helsinki-nlp-opus-mt-it-en-missing/849
Have the Romance language models for opus-mt been removed? I tried it-en and ROMANCE-en, but it seems they are not online: { “error”: “Model name ‘Helsinki-NLP/opus-mt-ROMANCE-en’ was not found in tokenizers model name list (). We assumed ‘Helsinki-NLP/opus-mt-ROMANCE-en’ was a path or url to a directory containing vocabulary files named [‘source.spm’, ‘target.spm’, ‘vocab.json’, ‘tokenizer_config.json’], but couldn’t find such vocabulary files at this path or url.” }
Not that I know of, but try en-roa/roa-en. It works for me locally. Note that the inference API is out of disk space until tomorrow.
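Based on the suggestion above, a quick local check could look like this (the roa-en model id comes from that suggestion; the Italian example sentence is just a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-roa-en"   # multilingual Romance-to-English
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Questa è una prova."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```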
0
huggingface
🤗Transformers
How can I use class_weights when training?
https://discuss.huggingface.co/t/how-can-i-use-class-weights-when-training/1067
I have an unbalanced dataset. When training, I want to pass class_weights so the update for rare classes is weighted higher than for large classes. How is this possible in HF with PyTorch? Thanks, Philip
Answering my own question: subclass Trainer and override the compute_loss method (see the example in the docs; a minimal version is sketched below).
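A minimal version of that pattern, assuming a sequence classification model and a precomputed class_weights tensor (both are placeholders here):

```python
import torch
from torch import nn
from transformers import Trainer

class_weights = torch.tensor([1.0, 5.0, 10.0])   # e.g. inverse class frequencies

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Weighted cross-entropy so rare classes contribute more to the gradient.
        loss_fct = nn.CrossEntropyLoss(weight=class_weights.to(logits.device))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```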
0
huggingface
🤗Transformers
Generation Probabilities: How to compute probabilities of output scores for GPT2
https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175
Now that it is possible to return the logits generated at each step, one might wonder how to compute the probabilities for each generated sequence accordingly. The following code snippet showcases how to do so for generation with do_sample=True for GPT2:

```python
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

gpt2 = AutoModelForCausalLM.from_pretrained("gpt2", return_dict_in_generate=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

input_ids = tokenizer("Today is a nice day", return_tensors="pt").input_ids

generated_outputs = gpt2.generate(input_ids, do_sample=True, num_return_sequences=3, output_scores=True)

# only use id's that were generated
# gen_sequences has shape [3, 15]
gen_sequences = generated_outputs.sequences[:, input_ids.shape[-1]:]

# let's stack the logits generated at each step to a tensor and transform
# logits to probs
probs = torch.stack(generated_outputs.scores, dim=1).softmax(-1)  # -> shape [3, 15, vocab_size]

# now we need to collect the probability of the generated token
# we need to add a dummy dim in the end to make gather work
gen_probs = torch.gather(probs, 2, gen_sequences[:, :, None]).squeeze(-1)

# now we can do all kinds of things with the probs

# 1) the probs that exactly those sequences are generated again
# those are normally going to be very small
unique_prob_per_sequence = gen_probs.prod(-1)

# 2) normalize the probs over the three sequences
normed_gen_probs = gen_probs / gen_probs.sum(0)
assert normed_gen_probs[:, 0].sum() == 1.0, "probs should be normalized"

# 3) compare normalized probs to each other like in 1)
unique_normed_prob_per_sequence = normed_gen_probs.prod(-1)
```
Can I use this to generate sequences only over a probability threshold?
0
huggingface
🤗Transformers
License for models on huggingface
https://discuss.huggingface.co/t/license-for-models-on-huggingface/13373
Hi, wanted to use huggingface in production, but I know that some models don’t allow their use in production because of the license (example: GPL). Can I use all of the huggingface models while respecting the Apache license on the huggingface Github? Or should I check the main repo for that model to check the license to make sure I can use it? Thanks!
To add to the question above, what happens if a model doesn’t have a license on its repo or tags? Does it inherit Transformers’ Apache 2.0 license?
0
huggingface
🤗Transformers
How to create Wav2Vec2 With Language model
https://discuss.huggingface.co/t/how-to-create-wav2vec2-with-language-model/12703
Now that language model boosted decoding is possible for Wav2Vec2 (https://twitter.com/PatrickPlaten/status/1468999507488788480 11 and patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm · Hugging Face 8 it’s important to know How can one create a Wav2Vec2 + LM repo?. Let’s explain (hopefully this is simpler in the future): Install kenlm: The best guide to build kenlm is this github gist here: kenlm/BUILDING at master · kpu/kenlm · GitHub 13 IMO Create an ngram model: This is explained quite well here: GitHub - kpu/kenlm: KenLM: Faster and Smaller Language Model Queries 11 I wrote a short python script that allows to quickly create a ngram from a text corpus of common voice: #!/usr/bin/env python3 from datasets import load_dataset import os import argparse parser = argparse.ArgumentParser() parser.add_argument( "--language", default="polish", type=str, required=True, help="Language to run comparison on. Choose one of 'polish', 'portuguese', 'spanish' or add more to this script." ) parser.add_argument( "--path_to_ngram", type=str, required=True, help="Path to kenLM ngram" ) args = parser.parse_args() ds = load_dataset("multilingual_librispeech", f"{args.language}", split="train") with open("text.txt", "w") as f: f.write(" ".join(ds["text"])) os.system(f"./kenlm/build/bin/lmplz -o 5 <text.txt > {args.path_to_ngram}") ## VERY IMPORTANT!!!: # After the language model is created, one should open the file. one should add a `</s>` # The file should have a structure which looks more or less as follows: # \data\ # ngram 1=86586 # ngram 2=546387 # ngram 3=796581 # ngram 4=843999 # ngram 5=850874 # \1-grams: # -5.7532206 <unk> 0 # 0 <s> -0.06677356 # -3.4645514 drugi -0.2088903 # ... # Now it is very important also add a </s> token to the n-gram # so that it can be correctly loaded. You can simple copy the line: # 0 <s> -0.06677356 # and change <s> to </s>. When doing this you should also inclease `ngram` by 1. # The new ngram should look as follows: # \data\ # ngram 1=86587 # ngram 2=546387 # ngram 3=796581 # ngram 4=843999 # ngram 5=850874 # \1-grams: # -5.7532206 <unk> 0 # 0 <s> -0.06677356 # 0 </s> -0.06677356 # -3.4645514 drugi -0.2088903 # ... # Now the ngram can be correctly used with `pyctcdecode` See: Wav2Vec2_PyCTCDecode/create_ngram.py at main · patrickvonplaten/Wav2Vec2_PyCTCDecode · GitHub 14 Feel free to copy those lines of code. Multi-Lingual librispeech is already a very clean text corpus. You might want to pre-process other text corpora to not include any punctuation etc… As an example this step created the Spanish 5-gram here: kensho/5gram-spanish-kenLM · Hugging Face 8 Now we should load the language model in a PyCTCBeamSearchDecoder as this is the format we need. Here one should be very careful to choose exactly the same vocabulary as the Wav2Vec2’s tokenizer vocab. At first we should pick a fine-tuned Wav2Vec2 model that we would like to add a language model to. Let’s choose: jonatasgrosman/wav2vec2-large-xlsr-53-spanish · Hugging Face 2 Now we instantiate a BeamSearchDecoder and save it to a folder wav2vec2_with_lm. E.g. 
you can run this code:

```python
from transformers import AutoTokenizer
from pyctcdecode import build_ctcdecoder

tokenizer = AutoTokenizer.from_pretrained("jonatasgrosman/wav2vec2-large-xlsr-53-spanish")
vocab_dict = tokenizer.get_vocab()
sorted_dict = {k: v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}

decoder = build_ctcdecoder(
    list(sorted_dict.keys()),
    args.path_to_ngram,
)
decoder.save_to_dir("wav2vec2_with_lm")
```

Now we should have saved the following files in wav2vec2_with_lm: language_model, attrs.json, kenLM.arpa, unigrams.txt, alphabet.json. That’s it! Now all you need to do is upload this to your Wav2Vec2 model repo so that the directory structure looks like patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm on the Hub, and change your decoding as shown on that model card.
This is cool! Can you make the inference widget use the language model?
0
huggingface
🤗Transformers
Fine Tuning BERT model on custom dataset
https://discuss.huggingface.co/t/fine-tuning-bert-model-on-custom-dataset/14134
I am trying to fine-tune the BERT model on a custom dataset for an NER task with PyTorch. I could NOT get a dataset in EXACTLY the format accepted by the Transformers example training scripts. The conll2003 dataset comes in .txt files that are NOT accepted by the training scripts, and none of my CSV files work well. Can I get a dataset that I can download and use locally without referring to the Hub datasets? Why don’t datasets work when we use them locally?
Hi, the scripts are meant as examples; you can easily tweak them to make them work with your local dataset. For instance, you can load a HuggingFace Dataset with your local data (as explained in the docs), which could be CSV, JSON, txt, Parquet, etc.; a tiny loading sketch is below. Just make sure that you prepare your data in IOB format (as this is required for the token classification models). If you can provide me with a small portion, I can illustrate how to make a HuggingFace Dataset with it and make it work with the script.
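For illustration, loading local files into a Dataset looks roughly like this (the file names and the JSON-lines layout with "tokens" and "ner_tags" columns are assumptions):

```python
from datasets import load_dataset

# Each line: {"tokens": ["John", "lives", "in", "Paris"], "ner_tags": ["B-PER", "O", "O", "B-LOC"]}
dataset = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "validation": "dev.jsonl"},
)
print(dataset["train"][0])
```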
0
huggingface
🤗Transformers
Torchscript Example for BERT
https://discuss.huggingface.co/t/torchscript-example-for-bert/14158
I am looking at the example for torchscripting BERT-like models here: Exporting 🤗 Transformers Models. I have a basic question about the dummy inputs being passed for tracing, which don’t make obvious sense to me. The input passed is a list containing token_ids and segment_ids (or token_type_ids), which torchscript will unpack. Now, BertModel.forward() expects input_ids and attention_mask as the first and second arguments respectively. So why is segment_ids being passed as the second argument, both for tracing and later on for inference with the loaded torchscripted model? Does it somehow work because of the flag torchscript=True that’s passed when instantiating the model? If so, how does it work?
cc’ing @lewtun here
0
huggingface
🤗Transformers
Wav2vec with new LM causing CPU OOM
https://discuss.huggingface.co/t/wav2vec-with-new-lm-causing-cpu-oom/14069
Hi @patrickvonplaten and all other wav2vec LM users, thanks for the new LM addition. When using the following code (also without GPU), the CPU will run out of memory. Looks like something should be freed up but isn’t. This is not happening without the LM, so I guess this has something to do with pyctcdecode. Any ideas or am I missing a step? This code will trigger the OOM on a GPU. Maybe set the range higher if needed, but memory usage should increase anyways. from datasets import load_dataset from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC import torch dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm") model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda") print("* * * * * * loaded models") for i in range(200): audio_sample = dataset[i] print(" * * * * Sample: ", i) inputs = processor(audio_sample["audio"]["array"], sampling_rate=audio_sample["audio"]["sampling_rate"], return_tensors="pt").to("cuda") with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.cpu().numpy()).text print(transcription[0].lower()) And a CPU version, will just run slower OOM from datasets import load_dataset from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC import torch dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm") model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm") print("* * * * * * loaded models") for i in range(200): audio_sample = dataset[i] print(" * * * * Sample: ", i) inputs = processor(audio_sample["audio"]["array"], sampling_rate=audio_sample["audio"]["sampling_rate"], return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.numpy()).text print(transcription[0].lower())
And solved on GitHub: a missing pool.close() in the processor code.
1
huggingface
🤗Transformers
M2M model finetuning on multiple language pairs
https://discuss.huggingface.co/t/m2m-model-finetuning-on-multiple-language-pairs/13203
Hi all, can anyone please help by suggesting how to finetune m2m100 on more than one language pair? I am able to finetune for one language pair using the script below:

```
CUDA_VISIBLE_DEVICES=0,1,2,3,6 python -m torch.distributed.run --nproc_per_node=5 run_translation.py --model_name_or_path=m2m100_418M_new_token --do_train --do_eval --source_lang ja --target_lang en --fp16=True --evaluation_strategy epoch --output_dir bigfrall --per_device_train_batch_size=48 --per_device_eval_batch_size=48 --overwrite_output_dir --forced_bos_token "en" --train_file orig_manga/orig/train_exp_frame_50k.json --validation_file orig_manga/orig/valid_exp_frame_50k.json --tokenizer_name tokenizer_new_token --num_train_epochs 50 --save_total_limit=5 --save_strategy=epoch --load_best_model_at_end=True --predict_with_generate
```

But now I want to finetune it on the ja-en and ja-zh pairs. How do I pass both of these language pairs?
I’m also curious about this. @nikhiljais - did you ever work this out?
0
huggingface
🤗Transformers
CUDA out memory only when performing hyperparameter search
https://discuss.huggingface.co/t/cuda-out-memory-only-when-performing-hyperparameter-search/14038
I am working with a GTX3070, which only has 8GB of GPU RAM. When I am running using trainer.train(), I run fine with a maximum batch size of 7 (6 if running in Jupiter notebook). However, when I attempt to run in a hyperparameter search with ray, I get CUDA out of memory every single time. I am wondering why this could be the case. Here is my code. Sorry if it’s a little long. It’s based off the following Jupiter notebooks: notebooks/text_classification.ipynb at master · huggingface/notebooks · GitHub Transformers-Tutorials/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb at master · NielsRogge/Transformers-Tutorials · GitHub def model_init(): if args.pretrained_checkpoint: model = VisionEncoderDecoderModel.from_pretrained( args.pretrained_checkpoint ) else: model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( args.encoder_checkpoint, args.decoder_checkpoint ) # set special tokens used for creating the decoder_input_ids from the labels model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id # make sure vocab size is set correctly model.config.vocab_size = model.config.decoder.vocab_size # set beam search parameters model.config.eos_token_id = tokenizer.sep_token_id model.config.max_length = 64 model.config.early_stopping = True model.config.no_repeat_ngram_size = 3 model.config.length_penalty = 2.0 model.config.num_beams = 4 return model def compute_metrics(pred, verbose=args.verbose_inference) -> Dict[str, float]: labels_ids = pred.label_ids pred_ids = pred.predictions pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) labels_ids[labels_ids == -100] = tokenizer.pad_token_id label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True) if verbose: print(pred_str) # TODO: The following package from datasets load_metric # cer = cer_metric.compute(predictions=pred_str, references=label_str) cer = sum([editdistance.eval(a, b) for a, b in zip(pred_str, label_str)]) / sum( [len(b) for b in label_str] ) return {"cer": cer} training_args = Seq2SeqTrainingArguments( num_train_epochs=100, predict_with_generate=True, evaluation_strategy="epoch", per_device_train_batch_size=3, # 7 max for py, 6 max for ipynb per_device_eval_batch_size=3, fp16=True, # set to false if turning off gpu output_dir=args.logging_dir, save_strategy="epoch", save_total_limit=10, logging_steps=1000, learning_rate=1e-4, load_best_model_at_end=True, report_to="wandb", ) # instantiate trainer trainer = Seq2SeqTrainer( model_init=model_init, tokenizer=feature_extractor, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=default_data_collator, ) def hp_space(trial) -> Dict[str, float]: # backend for ray return { "learning_rate": tune.loguniform(1e-6, 1e-4), "num_train_epochs": tune.choice(list(range(1, 6))), "seed": tune.uniform(1, 40), "per_device_train_batch_size": 1, } if args.hyperparameter_search: trainer.hyperparameter_search( hp_space=hp_space, backend="ray", n_trials=10, # search_alg=HyperOptSearch(metric="objective", mode="max"), # scheduler=ASHAScheduler(metric="loss", mode="min"), # fail_fast=True, max_failures=-1, name="testing_run_hellobro", ) else: trainer.train()
I encountered the same problem, so +1 (posting only for more attention).
0
huggingface
🤗Transformers
How to force bos_token_id for each example individually in MBart?
https://discuss.huggingface.co/t/how-to-force-bos-token-id-for-each-example-individually-in-mbart/8712
Say I have a batch of examples with fields of input_ids of size m*n and bos_token_id of size n. Is there a way that I could specify the bos_token_id for each example during the evaluation step when using generate?
I’m also curious about this. @mralexis - did you ever work this out? It seems like a similar question was also asked here: M2M model finetuning on multiple language pairs which also had no reply.
0
huggingface
🤗Transformers
Questions about ONNX
https://discuss.huggingface.co/t/questions-about-onnx/4517
Hi community, I have tried the convert_graph_to_onnx.py script to convert one transformer model from PyTorch to ONNX format. I have a few questions: (1) I have installed onnxruntime-gpu. Will the model generated with the script work only with the GPU ONNX runtime, or will it also work with the CPU ONNX runtime? That is, do I have to generate one ONNX model per device? (2) Is the ONNX model dependent on the hardware it has been generated on, or do I have to generate the ONNX model on the target hardware where the inference will be run? (3) Are the outputs of the ONNX model identical whatever hardware the inference is run on? That is, can I use the embeddings generated from the ONNX model across different hardware platforms? (4) How can I apply quantization to an ONNX model for both CPU and GPU devices? It seems that the --quantize flag is deprecated, and I can’t manage to apply dynamic quantization to my ONNX model. Thanks!
Hi @Matthieu, here are a few tentative answers to your questions (I’m somewhat new to using ONNX): (1) If you are not running optimisations like dynamic quantisation, then the resulting ONNX model should work on both CPU and GPU. You can see in the source code that convert_graph_to_onnx.py has a convert function that relies on e.g. the native ONNX module in PyTorch; ONNX Runtime is only used in the optimize function. (2) Similar to question 1, if you are not applying any optimisations, then my understanding is that the resulting model should be hardware-independent (this is meant to be the whole benefit of having a universal format like ONNX :)). (3) Interesting question. I don’t know the answer, but my naive guess is that it depends on which hardware accelerator you’re using (e.g. ONNX Runtime vs something else). (4) As far as I know, quantisation for GPU is not supported (see the issue on GitHub), but it definitely is for CPU. What kind of trouble are you running into? One thing you can try is using ONNX Runtime directly with an exported ONNX model as follows:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

model_input = "onnx/model.onnx"
model_output = "onnx/model.quant.onnx"
quantize_dynamic(model_input, model_output, weight_type=QuantType.QInt8)
```

There are also some useful notebooks on the ONNX Runtime repo that I found helpful, e.g. PyTorch_Bert-Squad_OnnxRuntime_GPU.ipynb.
0
huggingface
🤗Transformers
Adding new features to Bert for NER
https://discuss.huggingface.co/t/adding-new-features-to-bert-for-ner/11369
Our goal is to train Bert for NER using a json dataset much like Conll2007 but with additional features: Distance from top of document (top) Distance from left edge of document (left) Note that I don’t want to do this with embeddings because I think using these values in training will get much better results. It seems like it should be easy to add the features but nothing seems to work. so this link 2 is not what I am looking for. I created a new object for BertForTokenClassification with a new forward(). The new forward() has 4 additional parameters (pos, chunk, top, left) and I also added these to the outputs call: class AeepBertForTokenClassification(BertForTokenClassification): def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, pos=None, chunk=None, top=None, left=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels - 1]``. """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.bert( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, pos=pos, chunk=chunk, top=top, left=left, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = outputs[0] sequence_output = self.dropout(sequence_output) logits = self.classifier(sequence_output) loss = None if labels is not None: loss_fct = CrossEntropyLoss() # Only keep active parts of the loss if attention_mask is not None: active_loss = attention_mask.view(-1) == 1 active_logits = logits.view(-1, self.num_labels) active_labels = torch.where( active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels) ) loss = loss_fct(active_logits, active_labels) else: loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + outputs[2:] return ((loss,) + output) if loss is not None else output return TokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) This results in the error message: Running tokenizer on prediction dataset: 100%|██████████| 26/26 [00:03<00:00, 6.96ba/s] [INFO|trainer.py:540] 2021-11-04 09:00:04,306 >> The following columns in the training set don't have a corresponding argument in `AeepBertForTokenClassification.forward` and have been ignored: pos_tags, id, tokens, ner_tags, chunk_tags. [INFO|trainer.py:1196] 2021-11-04 09:00:04,334 >> ***** Running training ***** [INFO|trainer.py:1197] 2021-11-04 09:00:04,334 >> Num examples = 1287 [INFO|trainer.py:1198] 2021-11-04 09:00:04,334 >> Num Epochs = 3 [INFO|trainer.py:1199] 2021-11-04 09:00:04,334 >> Instantaneous batch size per device = 8 [INFO|trainer.py:1200] 2021-11-04 09:00:04,334 >> Total train batch size (w. 
parallel, distributed & accumulation) = 8 [INFO|trainer.py:1201] 2021-11-04 09:00:04,334 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1202] 2021-11-04 09:00:04,334 >> Total optimization steps = 483 0%| | 0/483 [00:00<?, ?it/s]Traceback (most recent call last): File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1483, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/Users/cccc/PycharmProjects/aegis-ml/aegis-ml-ner-trainer/mainSingleThread2.py", line 702, in <module> main() File "/Users/cccc/PycharmProjects/aegis-ml/aegis-ml-ner-trainer/mainSingleThread2.py", line 610, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/Users/cccc/PycharmProjects/aegis-ml/venv-source/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train tr_loss_step = self.training_step(model, inputs) File "/Users/cccc/PycharmProjects/aegis-ml/venv-source/lib/python3.8/site-packages/transformers/trainer.py", line 1849, in training_step loss = self.compute_loss(model, inputs) File "/Users/cccc/PycharmProjects/aegis-ml/venv-source/lib/python3.8/site-packages/transformers/trainer.py", line 1881, in compute_loss outputs = model(**inputs) File "/Users/cccc/PycharmProjects/aegis-ml/venv-source/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/Users/cccc/PycharmProjects/aegis-ml/aegis-ml-ner-trainer/mainSingleThread2.py", line 87, in forward outputs = self.bert( File "/Users/cccc/PycharmProjects/aegis-ml/venv-source/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'pos' python-BaseException [ERROR|tokenization_utils_base.py:954] 2021-11-04 10:20:38,330 >> Using bos_token, but it is not set yet. [ERROR|tokenization_utils_base.py:964] 2021-11-04 10:20:38,338 >> Using eos_token, but it is not set yet. Self.bert() seems to be a pyTorch module object (module.py) which, of course, does not know that we have added features (pos, etc) but it seems to expect only kwargs and not expect specific parameters. So I don’t know what I need to do to pass these new parameters on through forward(). I can’t tell for sure what exactly it is calling when I trace it. Can someone help me with adding these features?
@g3casey, I also read your comments in this thread. I don’t have a solution for this and believe it is a very important issue for the Hugging Face experts to consider. Adding several extra features (in my case, semantic features) should ideally be feasible just by adjusting some parameters, especially for token classification tasks. @nielsr @sgugger One possible workaround is sketched below.
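One workaround that avoids passing the extra arguments into self.bert at all (this is a sketch of a common pattern, not the thread author’s code): keep the per-token features outside the encoder and concatenate them to the hidden states right before the token-classification head.

```python
import torch
from torch import nn
from transformers import BertModel, BertPreTrainedModel

class BertWithExtraFeatures(BertPreTrainedModel):
    def __init__(self, config, num_extra_features=2):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config, add_pooling_layer=False)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size + num_extra_features, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
                extra_features=None, labels=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        sequence_output = self.dropout(outputs[0])
        # extra_features: [batch, seq_len, num_extra_features], e.g. top/left distances per token
        combined = torch.cat([sequence_output, extra_features], dim=-1)
        logits = self.classifier(combined)
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return (loss, logits) if loss is not None else (logits,)
```

Note that the Trainer drops dataset columns whose names do not appear in forward, so the extra columns either need to match a forward argument (as extra_features does here) or remove_unused_columns must be set to False.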
0
huggingface
🤗Transformers
Script run_mlm.py line by line
https://discuss.huggingface.co/t/script-run-mlm-py-line-by-line/13980
Hello there, I am trying to run the run_mlm.py script to perform the training of a BERT model. Basically, the idea is to start from an already existing Italian BERT model, perform a second training on a specific topic (biomedical texts), and later fine-tune it on a QuestionAnswering downstream task. I was able to run the run_mlm.py script both with and without the --line_by_line parameter. I have a couple of questions, if you could kindly answer or point me to somewhere in the documentation: (1) The run with --line_by_line took about 10x longer than the one without; why? I have full access to the complete dataset, and I can organize it as I want, so which is the best format? (2) Is there a way to feed the model more files, if my corpus is split across several of them? (3) Does this script train the model for the NSP task as well? (4) If I evaluate the model, I get the perplexity score. Is there a way to get accuracy for the NSP task? (I think accuracy does not make sense for MLM, right?) Many thanks for your patience.
Ok so, regarding point 2 (load more files at the same time), I tried to tweak the code a bit (line 280 of the run_mlm.py original script). ORIGINAL CODE: else: data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file extension = data_args.train_file.split(".")[-1] if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file extension = data_args.validation_file.split(".")[-1] if extension == "txt": extension = "text" raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir) MY EDITS: else: data_files = {} if data_args.train_file is not None: files_full_path = [] for filepath in os.listdir(data_args.train_file): files_full_path.append(data_args.train_file+filepath) data_train_from_file = [] for el in files_full_path: data_train_from_file.append(el) data_files["train"] = data_train_from_file extension = files_full_path[0].split(".")[-1] if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file extension = data_args.validation_file.split(".")[-1] if extension == "txt": extension = "text" raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir) Do you think this approach can make sense? I am trying the code right now and it seems to work. My concern is that when I will load the entire dataset (several GBs of text data), the training will crash/be extremely long (several weeks). Thanks everyone
0
huggingface
🤗Transformers
Inference of finetuned wav2vec2-xls-r-300m model using the ASR pipeline does not remove special tokens
https://discuss.huggingface.co/t/inference-of-finetuned-wav2vec2-xls-r-300m-model-using-the-asr-pipeline-does-not-remove-special-tokens/13973
I have finetuned a wav2vec2-xls-r-300m model on the Hindi language using the mozilla-foundation/common_voice_7_0 dataset. But when I infer the model using the ASR pipeline, special tokens are not skipped, and even in the Hugging Face demo special characters are retained. Model: shivam/xls-r-hindi (shivam/xls-r-hindi · Hugging Face). Another example of the same issue in a different model: Harveenchadha/vakyansh-wav2vec2-hindi-him-4200 (Harveenchadha/vakyansh-wav2vec2-hindi-him-4200 · Hugging Face). The tokenizer of wav2vec2-xls-r-300m is “Wav2Vec2CTCTokenizer”, and in the ASR pipeline, if “CTC” is present in the tokenizer class name then skip_special_tokens is set to False (transformers/automatic_speech_recognition.py at v4.15.0 · huggingface/transformers · GitHub); because of this, special tokens are included in the output when using the ASR pipeline.
We need to not skip special tokens for CTC (wav2vec2 in particular) because of the [PAD] token. HELLO can only be transcribed because the CTC tokens are H, E, L, PAD, L, L, L, O, O, for instance. It seems here that maybe <s> got confused with <pad> and hence is not properly skipped during decoding. Could that be it? Also, I am unsure how it would behave on the Unicode scripts you are using (I hope that if we replace <s> by properly using the <pad> token, everything will work magically, but I can’t be sure).
0
huggingface
🤗Transformers
Finetuning T5 large for paraphrasing multiple time with the same parameters and data gives different results
https://discuss.huggingface.co/t/finetuning-t5-large-for-paraphrasing-multiple-time-with-the-same-parameters-and-data-gives-different-results/11845
I use this script to fine-tune T5-large for paraphrasing (Google Colab). I ran the fine-tuning twice from scratch, using the same data and the same parameters, and expected the results to be the same; however, the results are quite different. Is there a way to set, for example, a seed so that I get the same result if everything else is the same? I am not sure whether this snippet already guarantees identical results:

    def set_seed(seed):
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)

    set_seed(42)
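As a hedged note, the snippet above does not seed CUDA; the set_seed helper shipped with transformers does, which usually makes runs on the same hardware reproducible (cuDNN non-determinism can still cause tiny differences):

    from transformers import set_seed

    set_seed(42)  # seeds python, numpy, torch and torch.cuda in one call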
Hi @zokica, are you using Colab Pro for fine-tuning the t5-large model? In my case I am getting RAM issues. Is there a way to fine-tune it on a free Colab instance?
0
huggingface
🤗Transformers
AttributeError: ‘str’ object has no attribute ‘item’ - Bert Fine tuning
https://discuss.huggingface.co/t/attributeerror-str-object-has-no-attribute-item-bert-fine-tuning/13884
Hi, I am working on a intent classification problem, so I am fine tuning bert for it, here is my code:- import random import numpy as np # This training code is based on the `run_glue.py` script here: # https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128 # Set the seed value all over the place to make this reproducible. seed_val = 42 random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) # We'll store a number of quantities such as training and validation loss, # validation accuracy, and timings. training_stats = [] # Measure the total training time for the whole run. total_t0 = time.time() # For each epoch... for epoch_i in range(0, epochs): # ======================================== # Training # ======================================== # Perform one full pass over the training set. print("") print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs)) print('Training...') # Measure how long the training epoch takes. t0 = time.time() # Reset the total loss for this epoch. total_train_loss = 0 # Put the model into training mode. Don't be mislead--the call to # `train` just changes the *mode*, it doesn't *perform* the training. # `dropout` and `batchnorm` layers behave differently during training # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch) model.train() # For each batch of training data... for step, batch in enumerate(train_dataloader): # Progress update every 40 batches. if step % 40 == 0 and not step == 0: # Calculate elapsed time in minutes. elapsed = format_time(time.time() - t0) # Report progress. print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed)) # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using the # `to` method. # # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) # Always clear any previously calculated gradients before performing a # backward pass. PyTorch doesn't do this automatically because # accumulating the gradients is "convenient while training RNNs". # (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch) model.zero_grad() # Perform a forward pass (evaluate the model on this training batch). # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification # It returns different numbers of parameters depending on what arguments # arge given and what flags are set. For our useage here, it returns # the loss (because we provided labels) and the "logits"--the model # outputs prior to activation. loss, logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. `loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. print(loss) total_train_loss += loss.item() # Perform a backward pass to calculate the gradients. loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the "exploding gradients" problem. 
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the "update rule"--how the parameters are # modified based on their gradients, the learning rate, etc. optimizer.step() # Update the learning rate. scheduler.step() # Calculate the average loss over all of the batches. avg_train_loss = total_train_loss / len(train_dataloader) # Measure how long this epoch took. training_time = format_time(time.time() - t0) print("") print(" Average training loss: {0:.2f}".format(avg_train_loss)) print(" Training epcoh took: {:}".format(training_time)) # ======================================== # Validation # ======================================== # After the completion of each training epoch, measure our performance on # our validation set. print("") print("Running Validation...") t0 = time.time() # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() # Tracking variables total_eval_accuracy = 0 total_eval_loss = 0 nb_eval_steps = 0 # Evaluate data for one epoch for batch in validation_dataloader: # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using # the `to` method. # # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) # Tell pytorch not to bother with constructing the compute graph during # the forward pass, since this is only needed for backprop (training). with torch.no_grad(): # Forward pass, calculate logit predictions. # token_type_ids is the same as the "segment ids", which # differentiates sentence 1 and 2 in 2-sentence tasks. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification # Get the "logits" output by the model. The "logits" are the output # values prior to applying an activation function like the softmax. (loss, logits) = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) # Accumulate the validation loss. total_eval_loss += loss.item() # Move logits and labels to CPU logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() # Calculate the accuracy for this batch of test sentences, and # accumulate it over all batches. total_eval_accuracy += flat_accuracy(logits, label_ids) # Report the final accuracy for this validation run. avg_val_accuracy = total_eval_accuracy / len(validation_dataloader) print(" Accuracy: {0:.2f}".format(avg_val_accuracy)) # Calculate the average loss over all of the batches. avg_val_loss = total_eval_loss / len(validation_dataloader) # Measure how long the validation run took. validation_time = format_time(time.time() - t0) print(" Validation Loss: {0:.2f}".format(avg_val_loss)) print(" Validation took: {:}".format(validation_time)) # Record all statistics from this epoch. training_stats.append( { 'epoch': epoch_i + 1, 'Training Loss': avg_train_loss, 'Valid. Loss': avg_val_loss, 'Valid. Accur.': avg_val_accuracy, 'Training Time': training_time, 'Validation Time': validation_time } ) print("") print("Training complete!") print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0))) I am getting a error:- AttributeError Traceback (most recent call last) <ipython-input-27-a6f23d2754c8> in <module>() 92 # from the tensor. 
93 print(loss) ---> 94 total_train_loss += loss.item() 95 96 # Perform a backward pass to calculate the gradients. AttributeError: 'str' object has no attribute 'item'
Here is the collab:- Google Colab
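For reference, this kind of error usually appears when the forward pass returns a ModelOutput (a dict-like object) and it is tuple-unpacked, so loss ends up being the string 'loss' rather than a tensor. A minimal, hedged sketch of the usual fix, assuming a recent transformers version:

    outputs = model(b_input_ids,
                    token_type_ids=None,
                    attention_mask=b_input_mask,
                    labels=b_labels)

    loss = outputs.loss      # access fields by name instead of tuple-unpacking
    logits = outputs.logits

    # alternatively, ask the model for the old tuple behaviour:
    # loss, logits = model(b_input_ids, attention_mask=b_input_mask, labels=b_labels, return_dict=False)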
0
huggingface
🤗Transformers
Does starting training from a previous checkpoint reset the learning rate?
https://discuss.huggingface.co/t/does-starting-training-from-a-previous-checkpoint-reset-the-learning-rate/9247
Hi, I want to start training a new model by loading a model I previously trained. What happens to the learning rate in this case: does it start at the learning rate I set, or does it continue from the learning rate stored in the checkpoint?
Hi! As far as I have experienced, it continues from the last learning rate number saved in the checkpoint from where you resumed. It also starts from the last epoch number.
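A minimal, hedged sketch of the two options (the checkpoint path, dataset and argument names here are placeholders): resuming restores the optimizer and scheduler state, so the learning rate continues where it left off, while starting a fresh Trainer from the saved weights restarts the schedule at the configured learning rate.

    from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

    # Option 1: continue the previous run (optimizer, scheduler and epoch state are restored)
    trainer.train(resume_from_checkpoint="output_dir/checkpoint-500")

    # Option 2: treat the old weights as a fresh starting point (the LR schedule starts over)
    model = AutoModelForSequenceClassification.from_pretrained("output_dir/checkpoint-500")
    args = TrainingArguments("new_run", learning_rate=2e-5, num_train_epochs=3)
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()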
0
huggingface
🤗Transformers
T5 fp16 issue is fixed
https://discuss.huggingface.co/t/t5-fp16-issue-is-fixed/3139
We have just fixed the T5 fp16 issue for some of the T5 models! (Announcing it here, since lots of users were facing this issue and T5 is one of the most widely used models in the library.)

TL;DR: Previously, there was an issue when using T5 models in fp16: they produced nan loss and logits. Now, on master, this issue is fixed for the following T5 models and versions, so you should be able to train and run inference with these models in fp16 and see a decent speed-up!

T5v1: t5-small, t5-base, t5-large
T5v1_1: google/t5-v1_1-small, google/t5-v1_1-base
MT5: google/mt5-small, google/mt5-base

For those of you who are interested, here's a description of what was causing the nan loss and how it is fixed.

t5-small was the only T5 model that was working in fp16; the rest of the models produce nan loss/logits. For all the models and versions (v1, v1.1, mT5), at some point we get inf values in hidden_states after applying the final linear layer (wo) in T5DenseReluDense and T5DenseGatedGeluDense (see modeling_t5.py, lines 248-278: https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/t5/modeling_t5.py#L248-L278), which results in nan values in T5LayerNorm.

Also, for t5-large, t5-v1_1-base and t5-v1_1-large, there are inf values in the output of T5LayerSelfAttention and T5LayerCrossAttention, specifically where we add the attention output to the hidden_states (see https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/t5/modeling_t5.py#L548 and the same file at #L584).

This happens during both training and inference.

Fix: to avoid inf values we clamp the hidden_states to the max value of the current data type whenever inf values appear, i.e.

    if torch.isinf(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)

We need to add this after self-attention, cross-attention, and the feed-forward layer, which is where the inf values occur. This works for both apex and amp.

To verify this fix, we trained t5-base, t5-v1_1-base and t5-v1_1-small on cnn/dm for 10k steps (1.11 epochs). Here's the training command; to run it, navigate to the examples/seq2seq dir, follow the instructions in the readme to download the cnn_dm dataset, and then run:

    export M=google/t5-v1_1-base
    export OUT_DIR=t5-v1_1-base-cnn-fp16
    export DATA_DIR=cnn_dm

    python finetune_trainer.py \
        --model_name_or_path $M \
        --data_dir $DATA_DIR \
        --output_dir $OUT_DIR --overwrite_output_dir \
        --max_steps=10000 \
        --gradient_accumulation_steps=8 \
        --learning_rate=1e-4 \
        --per_device_train_batch_size=4 \
        --n_val 500 \
        --max_target_length=56 --val_max_target_length=128 \
        --fp16 --fp16_backend apex \
        --do_train --do_eval --evaluation_strategy steps \
        --logging_steps=100 --logging_first_step --eval_steps=2500 --save_steps=2500 --save_total_limit=2 \
        --sortish_sampler

and for evaluation:

    python run_eval.py \
        t5-v1_1-base-cnn-fp16 cnn_dm/test.source hypothesis.txt \
        --reference_path cnn_dm/test.target \
        --score_path metrics.json \
        --device cuda:0 \
        --prefix summarize: \
        --bs 16 \
        --fp16

This gave the following ROUGE-2 scores: 19.2804 for t5-base and 18.4316 for t5-v1.1-base (note that the score for t5-base is higher because it was already pre-trained on cnn/dm).

As a comparison, the pre-trained t5-base evaluated in both precisions gives fp16: 18.3681 and fp32: 18.394, so the results are close enough.

To verify the fix for t5-large, we evaluated the pre-trained t5-large in fp32 and fp16 (use the same command above to evaluate t5-large) and got fp16: 19.2734 and fp32: 19.2342. Surprisingly, ROUGE-2 is slightly better in fp16.

So with the above fix, the following model types now work in fp16 (opt level O1) and give a decent speed-up:
T5v1: t5-small, t5-base, t5-large
T5v1_1: google/t5-v1_1-small, google/t5-v1_1-base
MT5: google/mt5-small, google/mt5-base

One interesting observation for inference: t5-base fine-tuned with fp16 and evaluated in fp32 is faster (~1.31x) than pre-trained t5-base evaluated in fp16. See this colab.
Nice fix! The speed discrepancy might be because of different length generations.
0
huggingface
🤗Transformers
Can trainer.predict() return multiple generations for each sample?
https://discuss.huggingface.co/t/can-trainer-predict-return-multiple-generations-for-each-sample/6508
I am looking for something similar to model.generate(), which takes a num_return_sequences parameter that decides how many generations are returned for each sample. It is especially useful when using beam search and analyzing the effect of beam search on the metrics. Trainer.predict() does not seem to support this feature. Is there a workaround? I can use model.generate(), but last time that was very slow because I had to write a for loop iterating over batches, whereas trainer.predict handles the data loading and batching automatically.
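A hedged sketch of the batched workaround mentioned above (a plain DataLoader plus model.generate), assuming a tokenized test_dataset, a data_collator and a tokenizer are already defined:

    import torch
    from torch.utils.data import DataLoader

    loader = DataLoader(test_dataset, batch_size=16, collate_fn=data_collator)
    model.eval()

    all_generations = []
    for batch in loader:
        batch = {k: v.to(model.device) for k, v in batch.items() if k != "labels"}
        with torch.no_grad():
            out = model.generate(**batch, num_beams=4, num_return_sequences=4, max_length=64)
        # out has shape (batch_size * num_return_sequences, seq_len)
        all_generations.extend(tokenizer.batch_decode(out, skip_special_tokens=True))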
Hi @berkayberabi! I have a similar problem. Did you manage to solve this issue?
0
huggingface
🤗Transformers
Pool [CLS] token from DistilBERT
https://discuss.huggingface.co/t/pool-cls-token-from-distilbert/11238
Hey there everyone, I haven't used this amazing API for a while. Is the code below able to pool the [CLS] output embedding from DistilBERT?

    distilbert = DistilBertModel.from_pretrained('distilbert-base-uncased', return_dict=True)

    # more code ...
    # input_ids & attention_mask from a batch
    outputs = distilbert(input_ids, attention_mask=attention_mask)
    seq_embeddings = outputs[0]            # DistilBERT output embeddings
    cls_embedding = seq_embeddings[:,0,:]  # <--- [CLS] output embedding

I've taken this idea from the DPR code, which pools the first position of the sequence output in the same way: https://github.com/huggingface/transformers/blob/master/src/transformers/models/dpr/modeling_dpr.py#L204
Did you get a solution to your question, please?
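For reference, a self-contained sketch of the pooling described in the original question (assuming distilbert-base-uncased); DistilBERT has no pooler layer, so taking the first position of last_hidden_state is the usual way to get a [CLS] representation:

    import torch
    from transformers import AutoTokenizer, DistilBertModel

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = DistilBertModel.from_pretrained("distilbert-base-uncased")

    inputs = tokenizer(["a first sentence", "and a second one"], padding=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    cls_embedding = outputs.last_hidden_state[:, 0, :]   # shape: (batch_size, hidden_size)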
0
huggingface
🤗Transformers
Fixing the random seed in the Trainer does not produce the same results across runs
https://discuss.huggingface.co/t/fixing-the-random-seed-in-the-trainer-does-not-produce-the-same-results-across-runs/3442
Hi folks, I've noticed that fixing the seed in the Trainer does not produce the same results across multiple training runs. For example, suppose we fix the seed in the TrainingArguments and instantiate a model and trainer as follows:

    batch_size = 16
    model_checkpoint = "distilbert-base-uncased"
    model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)

    args = TrainingArguments(
        "test-glue",
        evaluation_strategy = "epoch",
        learning_rate=2e-5,
        per_device_train_batch_size=batch_size,
        per_device_eval_batch_size=batch_size,
        num_train_epochs=3,
        seed=123
    )

    trainer = Trainer(
        model,
        args,
        train_dataset=train_ds,
        eval_dataset=valid_ds,
        tokenizer=tokenizer,
        compute_metrics=compute_metrics
    )

If I fine-tune this on COLA, I get the following results:

    Epoch  Training Loss  Validation Loss  Matthews Correlation  Runtime   Samples Per Second
    1      0.517900       0.467562         0.455635              0.740500  1408.496000
    2      0.335300       0.500026         0.490934              0.686200  1519.946000
    3      0.232300       0.618692         0.493626              0.693100  1504.833000

Now suppose we re-instantiate the model and trainer, and fine-tune:

    model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)

    trainer = Trainer(
        model,
        args,
        train_dataset=train_ds,
        eval_dataset=valid_ds,
        tokenizer=tokenizer,
        compute_metrics=compute_metrics
    )

    trainer.train()

I would expect to get exactly the same results, but instead find small differences in the outputs:

    Epoch  Training Loss  Validation Loss  Matthews Correlation  Runtime   Samples Per Second
    1      0.519000       0.473168         0.458883              0.702900  1483.911000
    2      0.335800       0.502644         0.486363              0.701400  1487.070000
    3      0.229200       0.616023         0.497130              0.710800  1467.258000

My best guess is that the DataLoader is the source of the difference since it uses a RandomSampler. However, I would have thought that fixing the seed would also fix that. Although these differences are tiny, having reproducible runs really helps stay sane during debugging, and I'm wondering whether anyone here knows how to fix this? For context, I'm working in Jupyter notebooks and here is a small Colab notebook from which I produced the above numbers: https://colab.research.google.com/drive/15nv40o81JfKwubFBVOjZkqk8bRauXig8?usp=sharing
For full reproducibility you need to instantiate your model inside the Trainer by using the model_init argument (or setting a seed before instantiating your model). You have random weights in your model head and those are different in your two runs in the code you show.
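A minimal sketch of the model_init approach mentioned above; the Trainer then re-instantiates the model with the seed already set, so the randomly initialized head is identical across runs (variable names taken from the question):

    def model_init():
        return AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=2)

    trainer = Trainer(
        model_init=model_init,      # instead of model=...
        args=args,
        train_dataset=train_ds,
        eval_dataset=valid_ds,
        tokenizer=tokenizer,
        compute_metrics=compute_metrics,
    )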
0
huggingface
🤗Transformers
Metrics for Training Set in Trainer
https://discuss.huggingface.co/t/metrics-for-training-set-in-trainer/2461
Hey guys, I am currently using the Trainer 4 in order to train my DistilBertForSequenceClassification. My problem: I want to stepwise print/save the loss and accuracy of my training set by using the Trainer. Is there a way to do so? What I did so far: I have adjusted compute_metrics. But this function is only carried out on my evaluation set. I need the same for my training set. I also tried out the TrainerCallback 2. But I can’t access the current predictions of the model by using the predefined callbacks. Another idea would be to customize the Trainer using a custom train function. But firstly I want to ask you whether there is an easier way to do so. Thank you in advance!
There is no way to do this directly in the Trainer, it's just not built that way (because evaluation is often pretty slow). You should tweak the code in your own subclass of Trainer to add a self.evaluate(self.train_dataset) at the appropriate line and then handle the logging.
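A hedged sketch of such a subclass (note that it roughly doubles evaluation time, since the training set is scored as well):

    from transformers import Trainer

    class TrainEvalTrainer(Trainer):
        def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
            # log metrics on the training set first, with a "train_" prefix ...
            super().evaluate(self.train_dataset, ignore_keys=ignore_keys, metric_key_prefix="train")
            # ... then run the normal evaluation on the validation set
            return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)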
0
huggingface
🤗Transformers
Target {} is out of bounds
https://discuss.huggingface.co/t/target-is-out-of-bounds/13802
Hi, I am following this fantastic notebook to fine-tune a multi-class classifier. Context: I am using my own dataset. The dataset is a CSV file with two values, text and label. Labels are all numbers and I have 7 labels. When loading the pre-trained model, I set num_labels=7:

    from transformers import AutoModelForSequenceClassification
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=7)

When training, I receive this error:

    /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
       2844     if size_average is not None or reduce is not None:
       2845         reduction = _Reduction.legacy_get_string(size_average, reduce)
    -> 2846     return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
       2847
       2848
    IndexError: Target 7 is out of bounds.

I have tried changing the number of labels to 2 and 5 and that didn't solve the issue; I still get the out-of-bounds error. Training arguments:

    from transformers import TrainingArguments, Trainer

    training_args = TrainingArguments(
        output_dir="./results",
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        per_device_eval_batch_size=16,
        num_train_epochs=5,
        weight_decay=0.01,
    )

    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokinized_jobs["train"],
        eval_dataset=tokinized_jobs["test"],
        tokenizer=tokenizer,
        data_collator=data_collator,
    )

    trainer.train()

and here is what the tokenized data looks like:

    DatasetDict({
        train: Dataset({
            features: ['attention_mask', 'input_ids', 'label', 'text', 'token_type_ids'],
            num_rows: 1598
        })
        test: Dataset({
            features: ['attention_mask', 'input_ids', 'label', 'text', 'token_type_ids'],
            num_rows: 400
        })
    })

Sample:

    {
     'attention_mask': [1, 1, 1, 1, 1, 1, 1],
     'input_ids': [101, 1015, 1011, 2095, 3325, 6871, 102],
     'label': 2,
     'text': '1-year experience preferred',
     'token_type_ids': [0, 0, 0, 0, 0, 0, 0]
    }

I tried it on Colab with GPU and TPU. Any idea what the issue is?
I found the solution. It was an indexing issue with my labels. My labels were starting from 1 to 8, I changed them to 0…7 and that fixed the issue for me. Credit to this answer on Stackoverflow. Hope this will help someone in the future.
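For future readers, a hedged sketch of that remapping with the datasets library (assuming integer labels that currently start at 1, and reusing the tokinized_jobs variable from the question):

    # shift labels from 1..N to 0..N-1 so they match num_labels in the model config
    tokinized_jobs = tokinized_jobs.map(lambda example: {"label": example["label"] - 1})

    # sanity check: this should equal the num_labels passed to from_pretrained
    print(len(set(tokinized_jobs["train"]["label"])))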
1
huggingface
🤗Transformers
How to add a custom argument to TrainingArguments?
https://discuss.huggingface.co/t/how-to-add-a-custom-argument-to-trainingarguments/13742
I’m using my own loss function with the Trainer. I need to pass a custom criterion I wrote that will be used in the loss function to compute the loss. I have the following setup: from transformers import Trainer, TrainingArguments class MyTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): # I compute the loss here and I need my `criterion` return loss training_args = TrainingArguments(# the arguments... ) # model = my model... trainer = MyTrainer(model=model, args=training_args, # rest of the arguments... ) I wonder if there is any way I can pass my custom criterion object to the Trainer either through the Trainer or TrainingArguments? Or, what is the best way to use my criterion without changing the Trainer?
If I understand your scenario correctly you are creating your own child class, using the Trainer class as parent class, is that right? If so, you should be able to add any arguments you want to the child class, since it is yours to amend, no? Just like in this example: How do I add arguments to a subclass in Python 3 - Stack Overflow 3 Let me know if this is helpful at all or if I complete misunderstood your question Cheers Heiko
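If I may add a sketch: since the Trainer subclass is your own, the criterion can simply be accepted in its constructor. A minimal, hedged example (my_criterion, model, training_args and train_dataset are assumed to exist already):

    from transformers import Trainer

    class MyTrainer(Trainer):
        def __init__(self, *args, criterion=None, **kwargs):
            super().__init__(*args, **kwargs)
            self.criterion = criterion

        def compute_loss(self, model, inputs, return_outputs=False):
            labels = inputs.pop("labels")
            outputs = model(**inputs)
            loss = self.criterion(outputs.logits, labels)
            return (loss, outputs) if return_outputs else loss

    trainer = MyTrainer(model=model, args=training_args, train_dataset=train_dataset, criterion=my_criterion)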
0
huggingface
🤗Transformers
Problem with EarlyStoppingCallback
https://discuss.huggingface.co/t/problem-with-earlystoppingcallback/3390
I set the early stopping callback in my trainer as follows: trainer = MyTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, callbacks=[EarlyStoppingCallback(3, 0.0)] ) the values for this callback in the TrainingArguments are as follows: load_best_model_at_end=True, metric_for_best_model=eval_loss, greater_is_better=False What I expect is that the training will continue as long as the eval_loss metric continues to drop. While the training will stop only when the eval_loss has not dropped for more than 3 epochs and the best model will be loaded. During the training I get these values for the eval_loss: epoch1: 'eval_loss': 0.8832499384880066 epoch2: 'eval_loss': 0.6109879612922668 epoch3: 'eval_loss': 0.52149897813797 epoch4: 'eval_loss': 0.48024266958236694 therefore, as it always drops, I would expect the training to continue. Instead the training stopped after 4 epochs and during the evaluation it uploaded the model related to the first epoch, where the eval_loss had the greatest value, as you can see in the following: 01/26/2021 11:08:57 - INFO - __main__ - ***** Eval results ***** 01/26/2021 11:08:57 - INFO - __main__ - eval_loss = 0.8832499384880066 Am I wrong to set some parameters? Thanks! EDIT: to clarify, I also printed the TrainerState values at the end of the training: log_history=[ {'eval_loss': 0.837020993232727, 'eval_accuracy_score': 0.8039973127309372, 'eval_precision': 0.7904381747255738, 'eval_recall': 0.7808047316067748, 'eval_f1': 0.7855919213776935, 'eval_runtime': 8.375, 'eval_samples_per_second': 67.343, 'epoch': 1.0, 'step': 411}, {'loss': 1.5377, 'learning_rate': 4.6958980235865466e-05, 'epoch': 1.22, 'step': 500}, {'eval_loss': 0.6051444411277771, 'eval_accuracy_score': 0.8406953308700034, 'eval_precision': 0.8297104717236403, 'eval_recall': 0.8243570212384622, 'eval_f1': 0.8270250831610176, 'eval_runtime': 8.3919, 'eval_samples_per_second': 67.208, 'epoch': 2.0, 'step': 822}, {'loss': 0.6285, 'learning_rate': 4.3917595505563304e-05, 'epoch': 2.43, 'step': 1000}, {'eval_loss': 0.5184187889099121, 'eval_accuracy_score': 0.856567013772254, 'eval_precision': 0.8464932024849194, 'eval_recall': 0.8425486154673358, 'eval_f1': 0.8445163028833199, 'eval_runtime': 8.4159, 'eval_samples_per_second': 67.016, 'epoch': 3.0, 'step': 1233}, {'loss': 0.4561, 'learning_rate': 4.087621077526113e-05, 'epoch': 3.65, 'step': 1500}, {'eval_loss': 0.46523478627204895, 'eval_accuracy_score': 0.868743701713134, 'eval_precision': 0.8599369085173502, 'eval_recall': 0.8550049287570571, 'eval_f1': 0.8574638267277793, 'eval_runtime': 8.3682, 'eval_samples_per_second': 67.398, 'epoch': 4.0, 'step': 1644}, {'train_runtime': 1783.4323, 'train_samples_per_second': 4.609, 'epoch': 4.0, 'step': 1644} ], best_metric=0.837020993232727 as you can also see from here, the best_metric is the value of the val_loss of the first epoch and not the lowest among the epochs it has done (which are still few because the value is always decreasing and therefore the training should not even stop …).
I’m trying to reproduce your issue, but on my side, the best_metric is correct and decreasing. Could you check you are using the latest version of Transformers and post the way you are creating your TrainingArguments?
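For comparison, here is a hedged, minimal configuration that is known to work with the callback (assuming a transformers version recent enough to have save_strategy): early stopping needs an evaluation and a save every epoch, plus the metric name passed as a string.

    from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

    args = TrainingArguments(
        output_dir="out",
        evaluation_strategy="epoch",
        save_strategy="epoch",              # must match evaluation_strategy for load_best_model_at_end
        load_best_model_at_end=True,
        metric_for_best_model="eval_loss",  # note: a string, not a variable
        greater_is_better=False,
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        compute_metrics=compute_metrics,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )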
0
huggingface
🤗Transformers
How much data were used to pre-train facebook/wav2vec2-base
https://discuss.huggingface.co/t/how-much-data-were-used-to-pre-train-facebook-wav2vec2-base/13682
The original problem is here. I changed the pre-training model from facebook/wav2vec2-base-100k-voxpopuli to facebook/wav2vec2-base. It works, everything is ok. However, facebook/wav2vec2-base-100k-voxpopuli was pretrained on 100k hours of data, and I don't know how much data was used to pretrain facebook/wav2vec2-base. Maybe there is something wrong with facebook/wav2vec2-base-100k-voxpopuli?
Hey @zzuczy, we could indeed have explained it better on the model card. As you can see in the official paper (https://arxiv.org/pdf/2006.11477.pdf), the model was pretrained on the 960 hours of Librispeech.
0
huggingface
🤗Transformers
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/bert/models/bert
https://discuss.huggingface.co/t/requests-exceptions-httperror-404-client-error-not-found-for-url-https-huggingface-co-api-bert-models-bert/9728
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') above code raise error requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/bert/models/bert 9 the whole call stack is Traceback (most recent call last): File "key_gen_search.py", line 20, in <module> from reader import dataset_str File "/home/jack/prjs/TSMH/key_gen/reader.py", line 9, in <module> from utils import keyword_pos2sta_vec File "/home/jack/prjs/TSMH/key_gen/utils.py", line 18, in <module> bert_scorer = BERT_Scorer(config.bert_path) File "../bert/bert_scorer.py", line 11, in __init__ self.tokenizer = BertTokenizer.from_pretrained(pretrained) File "/home/jack/.local/share/virtualenvs/key_gen-eKA6qkca/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1649, in from_pretrained fast_tokenizer_file = get_fast_tokenizer_file( File "/home/jack/.local/share/virtualenvs/key_gen-eKA6qkca/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 3425, in get_fast_tokenizer_file all_files = get_list_of_files( File "/home/jack/.local/share/virtualenvs/key_gen-eKA6qkca/lib/python3.8/site-packages/transformers/file_utils.py", line 1730, in get_list_of_files model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info( File "/home/jack/.local/share/virtualenvs/key_gen-eKA6qkca/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 503, in model_info r.raise_for_status() File "/home/jack/.local/share/virtualenvs/key_gen-eKA6qkca/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/bert/models/bert
@jackliusr Did you ever resolve this issue? Facing same error
0
huggingface
🤗Transformers
Example of how to pretrain T5?
https://discuss.huggingface.co/t/example-of-how-to-pretrain-t5/4129
Is there any codebase in huggingface that could be used to pretrain T5 model? Looking into the examples dir in the repo there is nothing mentioned about T5. Thanks!
Still need help on this…
0
huggingface
🤗Transformers
How do I change the classification head of a model?
https://discuss.huggingface.co/t/how-do-i-change-the-classification-head-of-a-model/4720
Hello, I'd like to change the number of labels that a trained model has. I am loading a model that was trained on 17 classes and I'd like to adapt this model to my own task. Now if I simply change the number of labels like this:

    model_checkpoint = "vblagoje/bert-english-uncased-finetuned-pos"
    model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=2)

I get an error saying:

    RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
        size mismatch for classifier.weight: copying a param with shape torch.Size([17, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
        size mismatch for classifier.bias: copying a param with shape torch.Size([17]) from checkpoint, the shape in current model is torch.Size([2]).

My question is: how do I replace the classification head? Thanks a lot
The reason is that you are trying to reuse a model that has already been fine-tuned on a particular classification task, so its head has a fixed number of output classes. You have to remove (or replace) the last part of the model, the classification head. This is actually a kind of design limitation too. In practice, (BERT base uncased + classification head) = your new model. If you want to reuse it on a different task, either start again from BERT base uncased, or extract the encoder part from the fine-tuned model and attach a new head.
0
huggingface
🤗Transformers
Multilingual Finetuning XLS-R
https://discuss.huggingface.co/t/multilingual-finetuning-xls-r/13362
Hi there, I was searching for code references, blogs and any sort of resources that can help me in fine-tuning XLS-R on multiple languages all at once using huggingface. All help is highly appreciated @patrickvonplaten
We don’t have much content for this at the moment sadly! It does sound like a very cool project though. I think the closest thing that I can think of that might help is this paper: [2109.11680] Simple and Effective Zero-shot Cross-lingual Phoneme Recognition 1 (we have 2 of the models merged to the Hub)
0
huggingface
🤗Transformers
Trainer option to disable saving DeepSpeed checkpoints
https://discuss.huggingface.co/t/trainer-option-to-disable-saving-deepspeed-checkpoints/13262
I’d like to ask for opinions about adding a Trainer configuration option to disable saving of DeepSpeed checkpoints (potentially only keeping the model weights). Context: I’m finetuning gpt-j-6b for basic translation phrases on consumer hardware (128GB System RAM and Nvidia GPU with 24GB RAM). I use the DeepSpeed Zero optimizer, stages 2 and 3 so 99% of my system memory is fully allocated (I also have a huge swap file on an NVMe). Arguments for adding this flag: The DeepSpeed checkpoints are huge (60GB+) and take a long time to save. Because my system RAM is used by the DeepSpeed optimizer, I can’t use local storage (no system ram available for buffers) so I have to transfer over the network to a NAS. The DeepSpeed save_checkpoint code (as of v0.5.8) is very hungry for system RAM and may even have some memory leaks as after 10+ attempts I’ve never been able to save more than three checkpoints without the linux kernel deciding to OOM the finetuning python process. Out of frustration, I’ve modified the trainer code by commenting out all the calls to deepspeed.save_checkpoint(output_dir) and surprise: I’ve been able to finetune and have saved the model weights 15+ times without getting OOMed. The default state for this option would obviously be to save everything. I read the previous topic on the subject: Disable checkpointing in Trainer 1 which is nice but does not exactly cover my use case.
Well, if you start from weights-only you waste training resources, as your optimizer will take time to get back to the point where you stopped. So when you resume training you typically want to restore the optimizer states rather than start from scratch; that's the whole point of saving intermediary checkpoints. Shuffling data shouldn't make any difference to wanting ongoing optimizer states.

You don't need to change any DeepSpeed code, you just need to set stage3_gather_fp16_weights_on_model_save=false in the ds_config file and it won't gather and save the fp16 weights. You can then extract the full-precision fp32 weights at the end of your training using the zero_to_fp32.py script. Of course, do a very short training run first and try the zero_to_fp32.py script to make sure this is what you want.

So the proposal is this:
1. Don't gather any ZeRO-3 weights or use additional CPU memory to build a state_dict; just dump the intermediary states to disk really fast (should be ~0 CPU RAM overhead).
2. Resume training from checkpoint until you have finished your training, i.e. repeat this step as many times as you need to.
3. Extract the final fp32 weights with zero_to_fp32.py.

If something is unclear please don't hesitate to ask for further clarifications, @mihai
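As a hedged sketch, the relevant part of the config, here passed as a Python dict to TrainingArguments (a JSON file referenced by path works the same way; the rest of the DeepSpeed settings are assumed to stay as in your existing config):

    from transformers import TrainingArguments

    ds_config = {
        "zero_optimization": {
            "stage": 3,
            # do not gather the full fp16 weights on save; extract fp32 weights later with zero_to_fp32.py
            "stage3_gather_fp16_weights_on_model_save": False,
        },
        # keep the remainder of your existing DeepSpeed settings here
    }

    args = TrainingArguments(output_dir="out", deepspeed=ds_config, fp16=True)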
1
huggingface
🤗Transformers
How should I handle pre/post-processing with slow tokenizers for tasks like NER and question answering?
https://discuss.huggingface.co/t/how-should-i-handle-pre-post-processing-with-slow-tokenizers-for-tasks-like-ner-and-question-answering/13339
How should folks using slow tokenizers perform pre/post processing tasks for tasks like question answering and token classification … both of which, at least from the course, appear heavily dependent on the fast-tokenizer only methods word_ids() and sequence_ids(). Also, I’m curious to know why the slow tokenizers don’t have word_ids and sequence_ids methods … and if there is a way we can get at, or build, the equivalent of them for slow tokenizers? Thanks much!
There are no easy model-agnostic way to tackle those tasks for slow tokenizers, so you should really use a fast one for those tasks.
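A short sketch of what the fast-tokenizer route gives you (hedged; most architectures ship a fast tokenizer that AutoTokenizer picks up by default):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)
    encoding = tokenizer("HuggingFace is based in NYC", "Where is HuggingFace based?")

    print(encoding.word_ids())      # maps each token back to its word (None for special tokens)
    print(encoding.sequence_ids())  # 0 for the first text, 1 for the second, None for specials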
0
huggingface
🤗Transformers
Fine-Tuning AutoModelWithLMHead Model
https://discuss.huggingface.co/t/fine-tuning-automodelwithlmhead-model/13320
Hi everyone, I want to fine-tune the AutoModelWithLMHead model from this repository, which is a German GPT-2 model. I have prepocessed a bunch of text passages for the fine-tuning, but when beginning training, I receive the following error (copied with a little context): File "GPT\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "GPT\lib\site-packages\transformers\models\gpt2\modeling_gpt2.py", line 774, in forward raise ValueError("You have to specify either input_ids or inputs_embeds") ValueError: You have to specify either input_ids or inputs_embeds It’s asking for either input ids or embeddings, which I thought I provided by instantiating the trainer. Here’s my code for the preparation of the model: # Load data with open("Fine-Tuning Dataset/train.txt", "r", encoding="utf-8") as train_file: train_data = train_file.read().split("--") with open("Fine-Tuning Dataset/test.txt", "r", encoding="utf-8") as test_file: test_data = test_file.read().split("--") # Load pre-trained tokenizer and prepare input tokenizer = AutoTokenizer.from_pretrained('dbmdz/german-gpt2') tokenizer.pad_token = tokenizer.eos_token train_input = tokenizer(train_data, padding="longest") test_input = tokenizer(test_data, padding="longest") # Define model model = AutoModelWithLMHead.from_pretrained("dbmdz/german-gpt2") training_args = TrainingArguments("test_trainer") # Evaluation metric = load_metric("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = numpy.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) # Train trainer = Trainer( model=model, args=training_args, train_dataset=train_input, eval_dataset=test_input, compute_metrics=compute_metrics, ) trainer.train() trainer.evaluate() Does anyone know the cause for this? Any help is gladly appreciated! Thank you.
It looks like your train_input is not a dataset containing the "input_ids", as expected by the model. Look at train_input[0] for instance to see which keys it contains.
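A hedged sketch of one way to turn those tokenizer outputs into something the Trainer can consume (reusing the train_input and test_input names from the question; for causal LM fine-tuning the labels are usually a copy of the input ids):

    import torch

    class TextDataset(torch.utils.data.Dataset):
        def __init__(self, encodings):
            self.encodings = encodings

        def __len__(self):
            return len(self.encodings["input_ids"])

        def __getitem__(self, idx):
            item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
            item["labels"] = item["input_ids"].clone()  # GPT-2 shifts the labels internally
            return item

    train_dataset = TextDataset(train_input)
    eval_dataset = TextDataset(test_input)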
0
huggingface
🤗Transformers
Do trainer.save_model saves the best model?
https://discuss.huggingface.co/t/do-trainer-save-model-saves-the-best-model/13296
I have set load_best_model_at_end to True for the Trainer class. And I want to save the best model in a specified directory. Does the method save_model of Trainer saves the best model or the last model in the specified directory?
save_model itself does what it says on the tin: it saves the model, good, bad or best, it does not matter. It's the rotate-checkpoints method that keeps the best model from being deleted.
0
huggingface
🤗Transformers
Disable checkpointing in Trainer
https://discuss.huggingface.co/t/disable-checkpointing-in-trainer/2335
Hi folks, When I am running a lot of quick and dirty experiments, I do not want / need to save any models as I’m usually relying on the metrics from the Trainer 3 to guide my next decision. One thing that slows down my iteration speed is the fact that the Trainer will save a checkpoint after some number of steps, defined by the save_steps parameter in TrainingArguments 11. To disable checkpointing, what I currently do is set save_steps to some large number, but my question is whether there is a more elegant way to do this? For example, is there a Trainer argument I can set that will disable checkpointing altogether? Thanks!
There is none for now. We could definitely add a save_strategy like the evaluation_strategy that could take the values no/steps/epoch. If you want to tackle this in a PR that would be a welcome contribution!
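In more recent transformers releases this option now exists; a hedged one-liner:

    from transformers import TrainingArguments

    args = TrainingArguments("test-run", save_strategy="no")  # disables intermediate checkpoint saving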
0
huggingface
🤗Transformers
How to use only one bert to do generation task with ‘past_key_values’ mechanism?
https://discuss.huggingface.co/t/how-to-use-only-one-bert-to-do-generation-task-with-past-key-values-mechanism/4409
I really like the rich text generation APIs of this project, especially the past_key_values mechanism, which makes the generation process efficient. I use UniLM; sadly it's not implemented in huggingface, and I'm eager to implement UniLM with the past_key_values mechanism, but I have encountered a lot of difficulties. The structure of UniLM is virtually the same as BERT, except for the attention mask type, so first I tried BertForMaskedLM, but its forward function doesn't support past_key_values. Then I tried BertModel, but the shape of the past_key_values it returns is strange, so passing them into the generate function at the next decoding step causes:

    --> 930  past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
    IndexError: tuple index out of range

What's the simplest way to implement UniLM and use the rich text generation APIs, especially the past_key_values mechanism? Please help me, thank you very much!
Any updates?
0
huggingface
🤗Transformers
Logging & Experiment tracking with W&B
https://discuss.huggingface.co/t/logging-experiment-tracking-with-w-b/498
For people interested in tools for logging and comparing different models and training runs in general, Weights & Biases is directly integrated with Transformers. You just need to have wandb installed and logged in. It automatically logs losses, metrics, learning rate, computer resources, etc. (see the dashboard screenshot in the original post). Here is another cool example where I ran a sweep to fine-tune GPT-2 on tweets (sweep screenshot in the original post). Finally you can use your runs to create cool reports; see for example my huggingtweets report. See the documentation for more details or this colab. At the moment it is integrated with Trainer and TFTrainer. If you use Pytorch Lightning, you can use WandbLogger; see the Pytorch Lightning documentation. Let me know if you have any questions or ideas to make it better!
@boris I have a few questions for the HF Transformers integration: It looks like wandb is charting the loss, learning rate, and epoch for a given run of Trainer.train(). Are there other things that would be useful to have charted for a finetuning run? It also looks like wandb is using the logging_steps value in TrainerArguments. Is this right? Is it preferred to set wandb behavior through the environment variables or in the finetuning script directly?
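A minimal, hedged sketch of turning the integration on explicitly in recent versions (environment variables such as WANDB_PROJECT also work; the project and run names below are placeholders):

    import os
    from transformers import TrainingArguments

    os.environ["WANDB_PROJECT"] = "my-experiments"   # optional: group runs under one project

    args = TrainingArguments(
        output_dir="out",
        report_to="wandb",          # send logs to Weights & Biases
        logging_steps=50,           # how often training metrics are logged
        run_name="bert-baseline",   # name shown in the W&B dashboard
    )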
0
huggingface
🤗Transformers
Precision vs recall when using transformer models?
https://discuss.huggingface.co/t/precision-vs-recall-when-using-transformer-models/13448
Hi, I am using a fine-tuned BERT model for text classification and, of course, I am getting very high accuracy scores. However, this masks a more nuanced reality when I look at the precision/recall metrics. See, for instance:

                  precision  recall  f1-score  support
    0             0.962      0.976   0.969     75532
    1             0.516      0.395   0.448     4820
    accuracy                         0.941     80352
    macro avg     0.739      0.686   0.708     80352
    weighted avg  0.935      0.941   0.938     80352

My question is: for a given transformer architecture (say BERT), do we know what we can do to improve either precision or recall (I know there is a trade-off between the two)? For instance, is there evidence that oversampling the most infrequent class could improve precision? What can be done here, beyond having more training data? Thanks!
You can use class weights in your loss function if one label is more probable than the other or if your dataset is imbalanced. There’s plenty of tutorials on this so it shouldn’t be too hard to find.
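A hedged sketch of computing such weights from the support counts shown above; the resulting criterion can then replace the default unweighted loss in a custom compute_loss:

    import torch
    from torch import nn

    counts = torch.tensor([75532.0, 4820.0])           # class frequencies from the report above
    weights = counts.sum() / (len(counts) * counts)    # inverse-frequency weighting
    criterion = nn.CrossEntropyLoss(weight=weights)    # penalises mistakes on the rare class more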
0
huggingface
🤗Transformers
Replacing the decoder of an xxxEncoderDecoderModel
https://discuss.huggingface.co/t/replacing-the-decoder-of-an-xxxencoderdecodermodel/13367
A question someone had was how to replace the decoder of an existing VisionEncoderDecoderModel from the hub. Namely, the TrOCR 1 model currently only has checkpoints on the hub with an English-only language model (RoBERTa) as decoder - how to replace it with a multilingual XLMRoBERTa model? Here’s the answer: from transformers import VisionEncoderDecoderModel, RobertaForCausalLM model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") # replace decoder model.decoder = RobertaForCausalLM.from_pretrained("xlm-roberta-base", is_decoder=True, add_cross_attention=True) As you can see, we first initialize the entire model from the hub, after which we replace the decoder with a custom one. Also note that we are initializing a RobertaForCausalLM model, which includes the language modeling head on top (as opposed to RobertaModel). We also set the is_decoder and add_cross_attention attributes of the model’s configuration to make sure cross-attention is added between the encoder and decoder. A warning will be printed when we initialize the model, indicating that the weights of the cross-attention layers are randomly initialized. Preparing the data Also note that, in case you are going to prepare data for the model, one must use the appropriate tokenizer to create the labels for the model (in this case, one should use XLMRobertaTokenizer). Let’s define a TrOCRProcessor (which wraps a feature extractor and a tokenizer into a single object), by first using the one from the corresponding checkpoint, and then replace the tokenizer part: from transformers import TrOCRProcessor, XLMRobertaTokenizer processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") processor.tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base") One can then prepare a single (image, text) pair for the model as follows: from PIL import Image import torch image = Image.open("...").convert("RGB") text = "..." pixel_values = processor(image, return_tensors="pt").pixel_values # add labels (input_ids) by encoding the text labels = processor.tokenizer(text, padding="max_length", truncation=True).input_ids # important: make sure that PAD tokens are ignored by the loss function labels = [label if label != processor.tokenizer.pad_token_id else -100 for label in labels] encoding = {"pixel_values": pixel_values.squeeze(), "labels": torch.tensor(labels)}
I also tried to use the trocr-small-handwritten encoder and got this error when trying to run prediction with the model: 'VisionEncoderDecoderModel' object has no attribute 'enc_to_dec_proj'

    from transformers import RobertaForCausalLM, VisionEncoderDecoderModel

    model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-small-handwritten')
    model.decoder = RobertaForCausalLM.from_pretrained('xlm-roberta-base', is_decoder=True, add_cross_attention=True)

(A screenshot of the full traceback was attached to the original post.)
0
huggingface
🤗Transformers
Technical clarification on the validation data vs. the training data in the trainer API
https://discuss.huggingface.co/t/technical-clarification-on-the-validation-data-vs-the-training-data-in-the-trainer-api/13400
Hi, I am using the Trainer API to fine-tune my models, but I realized I wanted to clarify something about the training and evaluation datasets as they appear in:

    trainer = Trainer(
        model,
        args,
        train_dataset=tokenized_datasets['train'],
        eval_dataset=tokenized_datasets['test'],
        tokenizer=tokenizer,
        compute_metrics=compute_metrics
    )

My understanding is the following:
1. For each batch of data from the training data, the loss is computed ONLY for that batch. Then gradient descent (or another algorithm) tweaks the current parameters to make the loss smaller at the next iteration (batch), and we move to the next batch of the training data.
2. At the end of the epoch, the current model (with the new weights from step 1 repeated over the epoch) is applied to the full eval_dataset, predictions are computed, and accuracy metrics (say "accuracy" or "precision") are shown in the console. In other words, the eval_dataset is NEVER used for training; its only purpose is to provide (at the cost of "consuming" some of the data) a rough measure of the out-of-sample error rate.
3. Training only stops when the number of epochs has been consumed.

Is that 100% correct? Thanks!
of course, I know that usually the validation data is not used for training. I want to be sure that this is the case here as well. Using the Trainer API is a bit more opaque than using my own splits… Any clarification would be greatly welcome! Thanks
0
huggingface
🤗Transformers
Loading model from pytorch_pretrained_bert into transformers library
https://discuss.huggingface.co/t/loading-model-from-pytorch-pretrained-bert-into-transformers-library/6124
I have a pretrained BERT model for a classification task trained with the pytorch_pretrained_bert library. I would like to use the initial weights from this model for further training with the transformers library. When I try to load this model, I get the following runtime error. Does anyone know how to resolve this?

    model = BertClassification(weight_path=pretrained_weights_path, num_labels=num_labels)
    state_dict = torch.load(fine_tuned_weight_path, map_location='cuda:0')
    model.load_state_dict(state_dict)

    RuntimeError: Error(s) in loading state_dict for BertClassification:
        Missing key(s) in state_dict: "bert.embeddings.position_ids".

Thanks very much.
Hi. Please help with this. I am facing the same issue.
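As a hedged note, that particular missing key (bert.embeddings.position_ids) is a non-trainable buffer added in newer transformers versions, so loading with strict=False is usually safe here (variable names taken from the question):

    state_dict = torch.load(fine_tuned_weight_path, map_location="cuda:0")
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(missing, unexpected)  # verify that only position_ids (or similar buffers) are reported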
0
huggingface
🤗Transformers
How to encode 3d input with BERTModel
https://discuss.huggingface.co/t/how-to-encode-3d-input-with-bertmodel/8823
I use the model to encode 3D input of shape (3, 5, 64), where 3 is the batch size, 5 is the number of utterances in each sample (which can be considered a second batch size), and 64 is the sequence length. I encode them like this:

    model = BertModel.from_pretrained("bert-base-uncased")

    # batch is a dict with "input_ids" and "attention_mask", each of shape (3, 5, 64)
    input_ids = batch["input_ids"]
    attention_mask = batch["attention_mask"]

    stack = []
    for i in range(input_ids.shape[0]):
        bert_output = model(input_ids=input_ids[i], attention_mask=attention_mask[i])
        stack.append(bert_output)
    out = torch.stack(stack)

I wonder if there are other ways of doing this?
Hey, I am doing a similar task where I need a 3D array of (batch size, number of utterances, tokens). Since we cannot feed a 3D array directly, in your case I would do something like:

    input_ids = torch.reshape(input_ids, (15, 64))
    attention_mask = torch.reshape(attention_mask, (15, 64))
    bert_output = model(input_ids=input_ids, attention_mask=attention_mask)

    # now you can change the shape back to the original batch/utterance structure
    hidden = torch.reshape(bert_output.last_hidden_state, (3, 5, 64, 768))  # change 768 to 1024 if using a bert-large model
0
huggingface
🤗Transformers
New pipeline for zero-shot text classification
https://discuss.huggingface.co/t/new-pipeline-for-zero-shot-text-classification/681
The master branch of Transformers now includes a new pipeline for zero-shot text classification. You can play with it in this notebook: https://colab.research.google.com/drive/1jocViLorbwWIkTXKwxCOV9HLTaDDgCaw?usp=sharing 2.6k PR: https://github.com/huggingface/transformers/pull/5760 731 The pipeline can use any model trained on an NLI task, by default bart-large-mnli. It works by posing each candidate label as a “hypothesis” and the sequence which we want to classify as the “premise”. In the first example in the gif above, the model would be fed, <cls> Who are you voting for in 2020 ? <sep> This example is politics. <sep> and likewise for each candidate label. It’s therefore important to keep in mind that each candidate label requires its own forward pass. In the single-label case we take the scores for entailment as logits and put them through a softmax such that the candidate label scores add to 1. When multi_class=True is passed, we instead softmax the scores for entailment vs. contradiction for each candidate label independently. You can also change the hypothesis template. As shown in the formatted example above, the default template is This example is {}. This seems to work well in general, but you may be able to improve results by tailoring it to your specific setting (discussed in the example notebook 2.6k). Also feel free to check out the blog post 832 I wrote on zero shot a few months back, and our live demo 472 which uses this method for zero-shot topic classification. I hope you find it useful! Edit: FYI, you will get a big speedup by using this on GPU. You can do this by passing device=0 where 0 is the device number, to the pipeline factory: classifier = pipeline("zero-shot-classification", device=0) We should be updating this to automatically use GPU when available soon.
Thank you for providing this pipeline, blog and collab notebook. I read your blog twice (once during ACL :)) and the associated paper (relevant part) to refresh. I have a few questions: What happens before and after multi_class=True exactly ?. So let’s say I trained bert-base-uncased on MNLI. The last linear layer outputs a scaler tensor with three values (one for entailment, contradiction and neutral). How is the last linear layer being handled since number of candidate_labels are variable and its not being trained? In the paper its mentioned that once entailment is predicted, we can take it as a prediction and (you specify in the collab notebook) that label applies to the text. I didn’t catch this part properly. Could you please explain ? I’m not sure exactly what happens when entailment is predicted. Do you take standard softmax over all the labels ? When we pass multi_class=True, do you just output the confidence scores (from linear layer directly) ? What does hypothesis_template actually does ? Does it help in computing better hidden representation ?
0
huggingface
🤗Transformers
Sequences shorter than model’s input window size
https://discuss.huggingface.co/t/sequences-shorter-than-models-input-window-size/13323
Hi, I wanted to better understand (or find a reference on GitHub for) how the transformers library handles inputs that are shorter than the model's input window. For example, with dynamic batching one batch could have a max size of 32 tokens; how does the library turn such a sequence into model_input_window_size input tokens? Does it add the pad token to each sequence to complete it up to model_input_window_size and automatically mask those tokens with 0, so we don't have to do it manually? Thanks
You can use the padding=True flag within your Tokenizer. This ensures that for your batch, anything that is smaller than that amount is padded. (This is usually even smaller than the model input) Here is an example. If you drop the padding=True flag, you will get a ValueError. from transformers import AutoTokenizer checkpoint = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(checkpoint) inp = tokenizer(['This is a sentence', 'This is another'], padding=True, return_tensors='pt') inp
0
huggingface
🤗Transformers
Combine BertForSequenceClassificaion with Additional Features
https://discuss.huggingface.co/t/combine-bertforsequenceclassificaion-with-additional-features/6523
Hey, I’m using BertForSequenceClassification + Pytorch Lightning-Flash for a text classification task. I want to add additional features besides the text (e.g. categorical features). From what I understand, I need to override BertForSequenceClassification “forward” method and change the final classification layer (at least) to include the CLS vector + features vector. However, I didn’t understand how I adapt the data loading procedure to this task - the text part is represented as input ids, and the rest supposed to be represented differently. Is there a simple way to combine text+features for Bert classification task? Thank you!
See this response 158 where I explain how to modify BERT to add additional POS (part-of-speech) features to tokens to perform named-entity recognition.
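If you want a quick sketch of the general pattern (concatenating the [CLS] vector with an extra feature vector before the classification layer), something along these lines could work; all names here are hypothetical and this is not the exact approach from the linked response:
import torch
import torch.nn as nn
from transformers import BertModel

class BertWithFeatures(nn.Module):
    def __init__(self, num_extra_features, num_labels, checkpoint="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(checkpoint)
        hidden = self.bert.config.hidden_size
        self.classifier = nn.Linear(hidden + num_extra_features, num_labels)

    def forward(self, input_ids, attention_mask, extra_features, labels=None):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_vec = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        logits = self.classifier(torch.cat([cls_vec, extra_features], dim=-1))
        loss = nn.CrossEntropyLoss()(logits, labels) if labels is not None else None
        return (loss, logits) if loss is not None else logits
Your dataloader then just needs to yield extra_features alongside input_ids and attention_mask.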
0
huggingface
🤗Transformers
How to modify the internal layers of BERT
https://discuss.huggingface.co/t/how-to-modify-the-internal-layers-of-bert/1372
Hi everyone, I am new to Hugging Face. I have a new architecture that modifies the internal layers of the BERT encoder and decoder blocks. I could create the whole new model from scratch, but I want to reuse the already well-written BERT architecture from HF. How can I modify the layers in the BERT source code to suit my needs? Thanks a lot!
Hi @imflash217, could you provide more details about what changes you want to make? You can find the BERT implementation here. It's pretty easy to follow; you can take it and change it in any way you want.
0
huggingface
🤗Transformers
Re-Training with new number of classes
https://discuss.huggingface.co/t/re-training-with-new-number-of-classes/13302
Hi All. I already trained an NER (token classification) model on a custom training dataset with 19 classes. You can explore it here marefa-nlp/marefa-ner 1 The base model which I used to fine-tune my model was xlm-roberta-large I have now a new dataset and need to use the last trained model marefa-nlp/marefa-ner as the base model this time. The problem is that the last model was trained to predict the class out of 19 classes, while the new dataset is designed for just 6 classes. I tried to load the model and reset the configurations to the xlm-roberta-large configuration like this from transformers import AutoModelForTokenClassification, AutoConfig base_model = "xlm-roberta-large" ft_model = "marefa-nlp/marefa-ner" config = AutoConfig.from_pretrained(base_model) ner_model = AutoModelForTokenClassification.from_pretrained(ft_model, num_labels=19) ner_model.config = config # THEN using the ner_model to train with the new dataset but seems not working, as it still requires that the head size be 19 === Does anyone know how to solve this? Thanks
Hi, This can be done by passing the additional argument ignore_mismatched_sizes=True to the from_pretrained method.
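For reference, a minimal sketch of what that call could look like (the checkpoint name and head size are taken from the question; adjust them to your setup):
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "marefa-nlp/marefa-ner",
    num_labels=6,                  # head size for the new 6-class dataset
    ignore_mismatched_sizes=True,  # drop the old 19-class head and randomly initialize a new one
)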
1
huggingface
🤗Transformers
Convert tensorflow tokenclassifier checkpoint to pytorch
https://discuss.huggingface.co/t/convert-tensorflow-tokenclassifier-checkpoint-to-pytorch/13288
Hi, Is there a way to convert the checkpoint of a Bert model that was fine-tuned for token classification task (NER) using tensorflow Bert original script? I am trying to use “transformer-cli convert” but it doesn’t work except on pretrained models. I want to share my fine-tuned models on models hub but it can’t be converted. Thanks, Maaly
Hi, In HuggingFace, any model that has both an implementation in PyTorch and Tensorflow can be easily converted as follows: from transformers import BertForTokenClassification model = BertForTokenClassification.from_pretrained("name_of_your_directory", from_tf=True)
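If that load succeeds, you can then write the PyTorch weights out with the standard save_pretrained call so the model can be shared on the Hub; the directory name below is just a placeholder:
model.save_pretrained("name_of_your_directory_pt")  # writes pytorch_model.bin plus config.json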
0
huggingface
🤗Transformers
Find the eqivalent for word.index in BERT?
https://discuss.huggingface.co/t/find-the-eqivalent-for-word-index-in-bert/13170
Excuse me, I need to get something like a dictionary that maps each word to its index, like this: def word_for_id(integer, tokenizer): for word, index in tokenizer.word_index.items(): if index == integer: return word return None but when using BERT I couldn't find the equivalent, as I got 'berttokenizer' object has no attribute 'word_index'
BERT uses word-piece tokens, so what you get is a mapping between word-piece tokens and their IDs rather than whole words. You can access it through the tokenizer's vocabulary:
from transformers import AutoTokenizer
checkpoint = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.vocab  # dict mapping token -> id
ids = tokenizer.convert_tokens_to_ids(['hello', 'world'])
tokenizer.convert_ids_to_tokens(ids)  # goes back from ids to tokens
0
huggingface
🤗Transformers
TFLongformer Shape Error
https://discuss.huggingface.co/t/tflongformer-shape-error/12651
Hi, when trying to finetune the TFLongformer using the TFTrainer, I got this error InvalidArgumentError: 2 root error(s) found. (0) INVALID_ARGUMENT: Incompatible shapes: [2,1024,12,514] vs. [2,1024,12,513] [[node while/gradients/while/tf_longformer_for_sequence_classification/longformer/encoder/layer_._0/attention/self/SelectV2_4_grad/BroadcastGradientArgs_1 (defined at /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:633) ]] [[while/LoopCond/_568/_14]] (1) INVALID_ARGUMENT: Incompatible shapes: [2,1024,12,514] vs. [2,1024,12,513] [[node while/gradients/while/tf_longformer_for_sequence_classification/longformer/encoder/layer_._0/attention/self/SelectV2_4_grad/BroadcastGradientArgs_1 (defined at /usr/local/lib/python3.7/dist-packages/transformers/trainer_tf.py:633) This is my train configuration: training_args = TFTrainingArguments( output_dir='./results', num_train_epochs=3, per_device_train_batch_size=2, gradient_accumulation_steps=32, per_device_eval_batch_size=16, logging_steps=1, ) with training_args.strategy.scope(): model = TFLongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', num_labels=5, return_dict=True, problem_type = "single_label_classification") trainer = TFTrainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset) Someone else had the same error when using the Tensorflow version of the Longformer.
I believe the TFTrainer is deprecated, one can now easily train using Keras’ .fit() method. You can check out the official example notebook 1 or script for reference. cc @Rocketknight1
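For reference, a rough sketch of what training with Keras could look like; this assumes you have already built a tokenized tf.data.Dataset called train_set yielding (features, labels) pairs, and the learning rate and epoch count are illustrative:
import tensorflow as tf
from transformers import TFLongformerForSequenceClassification

model = TFLongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=5
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(train_set, epochs=3)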
0
huggingface
🤗Transformers
While device=0 means the pipeline is only using cudo = 0, is there a way to use all GPUs on an SSH server?
https://discuss.huggingface.co/t/while-device-0-so-pipeline-is-only-using-cudo-0-is-there-is-way-to-use-all-gpus-on-a-ssh-server/13254
tf.debugging.set_log_device_placement(True) gpus = tf.config.list_logical_devices('GPU') strategy = tf.distribute.MirroredStrategy(gpus) with strategy.scope(): # classifer classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli" , device=0) # run classifier on batches of dataframe def run_classifier(df_list): tqdm.pandas(desc='Processing Dataframe') for i in range(len(df_list)): df_list[i]['label'] = df_list[i]['Translation'].progress_apply(lambda x :(classifier(x, candidate_labels=labels.candidate_labels , multi_label= True ))) return df_list run_classifier(batch_df(df))
I am sorry, I meant CUDA.
0
huggingface
🤗Transformers
Wav2vec model train from scratch
https://discuss.huggingface.co/t/wav2vec-model-train-from-scratch/13176
Hi, I’m new to the field of automatic speech recognition. I have a research project where we try to make a speech to text translator for Romanian medics. I saw that there are many pre-trained models for different languages which people seem to fine-tune them. I wanted to know if it’s possible to train wav2vec for a specific language from scratch. If the answer is yes, could somebody give me an example for one language?
You can pretrain it using this script: transformers/run_wav2vec2_pretraining_no_trainer.py at master · huggingface/transformers · GitHub. Be aware, though, that pretraining from scratch can be really unstable, as noted in the README.
0
huggingface
🤗Transformers
How to use inputs_embeds in generate()?
https://discuss.huggingface.co/t/how-to-use-inputs-embeds-in-generate/713
I have fine-tuned a T5 model to accept a sequence of custom embeddings as input. That is, I input inputs_embeds instead of input_ids to the model’s forward method. However, I’m unable to use inputs_embeds with T5ForConditionalGeneration.generate(). It complains that bos_token_id has to be given if not inputting input_ids, but even if I provide a bos_token_id, it still doesn’t run. I considered running the encoder separately, but there is no way I can pass the encoder output to generate() either. It will be very useful if generate() can accept inputs_embeds or encoder output, so that we can use the decoding strategies provided in the GenerationMixin.
Not an expert, but I just follow the example from this:https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb 95, and it works for me.
0
huggingface
🤗Transformers
Summarization on long documents
https://discuss.huggingface.co/t/summarization-on-long-documents/920
Hi to all! I am facing a problem, how can someone summarize a very long text? I mean very long text that also always grows. It is a concatenation of many smaller texts. I see that many of the models have a limitation of maximum input, otherwise don’t work on the complete text or they don’t work at all. So, what is the correct way of using these models with long documents. A code snippet with an example of how to handle long documents with the existing models would be perfect to start with! Thank you! @sshleifer
You can try extractive summarisation followed by abstractive summarisation: in the extractive step you pick the top-k sentences, then keep as many of them as fit within the model's maximum input length before running the abstractive model. Another way is successive abstractive summarisation, where you summarise the text in chunks of the model's maximum length and then summarise the concatenated summaries again until you reach the length you want; this method will be quite expensive. You can also combine the two methods. A rough sketch of the chunked approach is shown below.
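This is only a sketch under assumptions: the checkpoint name (sshleifer/distilbart-cnn-12-6), the word-based chunking, and the length limits are illustrative, and splitting at roughly 700 words is just a crude way to stay under a typical 1024-token encoder limit:
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summarize_long(text, chunk_words=700):
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)]
    partial = [summarizer(c, max_length=150, min_length=40, truncation=True)[0]["summary_text"]
               for c in chunks]
    combined = " ".join(partial)
    # summarise the concatenated partial summaries once more
    return summarizer(combined, max_length=150, min_length=40, truncation=True)[0]["summary_text"]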
0
huggingface
🤗Transformers
Fine-Tune Xlm-roberta-large-xnli
https://discuss.huggingface.co/t/fine-tune-xlm-roberta-large-xnli/7502
Hi everyone, I am working on joeddav/xlm-roberta-large-xnli 10 model and fine-tuning it on turkish language for text classification. (Positive, Negative, Neutral) My problem is with fine-tuning on a really small dataset (20K finance text) I feel like even training 1 epoch destroys all the weights in model so it doesnt generate any meaningful result after fine-tuning. Is there a way to regulate the rate of update of the model ? image809×707 69.8 KB Here is my model: #IMPORT MODEL tokenizer = AutoTokenizer.from_pretrained("joeddav/xlm-roberta-large-xnli") model = AutoModelForSequenceClassification.from_pretrained("joeddav/xlm-roberta-large-xnli").cuda() Downloading: 100% 2.24G/2.24G [00:33<00:00, 54.1MB/s] Some weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight'] - This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). I’m not sure if this is expected for my fine-tune model or not. Another problem can be the batch_size since I am using it with batch_size=1 (beacuse the model is huge). Here is my training arguments: training_args = TrainingArguments("test_trainer", num_train_epochs=5, per_device_train_batch_size=1, per_device_eval_batch_size=1, evaluation_strategy="steps", seed=42, save_strategy="epoch", logging_strategy="steps", logging_steps=500, eval_steps=500)
Hi! I might be wrong, but this model has already been fine-tuned, and its model card says "This model is intended to be used for zero-shot text classification." That is, as far as I understand, you should fine-tune the base model, xlm-roberta-large, instead. Please keep us updated; I am interested in the outcome.
0
huggingface
🤗Transformers
Using Huggingface Trainer in Colab -> Disk Full
https://discuss.huggingface.co/t/using-huggingface-trainer-in-colab-disk-full/5951
Hello everyone! I thought I’d post this here first, as I am not sure if it is a bug or if I am doing something wrong. I’m using the huggingface library to train an XLM-R token classifier. I originally wrote the training routine myself, which worked quite well, but I wanted to switch to the trainer for more advanced features like early stopping and easier setting of training arguments. To prototype my code, I usually run it on a free google colab account. While the training process works, I’ve had the code crash several times, because the disk space of the Compute Environment runs out. This is NOT my google drive space, but a separate disk of around 60GB space. I have observed, that during training the used space keeps on growing, but I have no idea where or what exactly is writing data. Once the disk is full, this results in the code crashing: image1183×717 73.3 KB The following are my training parameters/callbacks defined: ## Define Callbacks class PrinterCallback(TrainerCallback): def on_train_begin(self, args, state, control, **kwargs): print('\033[1m'+ '=' * 25 + " Model Training " + '=' * 25 + '\033[0m') def on_epoch_begin(self, args, state, control, **kwargs): print('\n'+ '\033[1m'+ '=' * 25 +' Epoch {:} / {:} '.format(int(trainer.state.epoch) + 1, int(trainer.state.num_train_epochs)) + '=' * 25) ## Training parameters # training arguments training_args = TrainingArguments( output_dir='./checkpoints', # output directory num_train_epochs=5, # total # of training epochs per_device_train_batch_size=32, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_steps=0, # number of warmup steps for learning rate scheduler weight_decay=0, # strength of weight decay learning_rate=2e-5, #2e-5 logging_dir='./logs', # directory for storing logs evaluation_strategy= "epoch", #"steps", "epoch", or "no" #eval_steps=100, save_total_limit=1, load_best_model_at_end=False, #loads the model with the best evaluation score metric_for_best_model="weightedF1", greater_is_better=True ) ## Start training # initialize huggingface trainer trainer = Trainer( model=xlmr_model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, tokenizer=xlmr_tokenizer, compute_metrics=validate, callbacks=[PrinterCallback] ) trainer.train() Any idea what is going wrong here? Edit: Here is the Error as text from another run; apparently Torch is continuously writing something to disk, but why and what is it? 
--------------------------------------------------------------------------- OSError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization) 371 with _open_zipfile_writer(opened_file) as opened_zipfile: --> 372 _save(obj, opened_zipfile, pickle_module, pickle_protocol) 373 return 6 frames /usr/local/lib/python3.7/dist-packages/torch/serialization.py in _save(obj, zip_file, pickle_module, pickle_protocol) 490 num_bytes = storage.size() * storage.element_size() --> 491 zip_file.write_record(name, storage.data_ptr(), num_bytes) 492 OSError: [Errno 28] No space left on device During handling of the above exception, another exception occurred: RuntimeError Traceback (most recent call last) <ipython-input-36-3435b262f1ae> in <module>() ----> 1 trainer.train() /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 1170 self.control = self.callback_handler.on_step_end(self.args, self.state, self.control) 1171 -> 1172 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) 1173 1174 if self.control.should_epoch_stop or self.control.should_training_stop: /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch) 1267 1268 if self.control.should_save: -> 1269 self._save_checkpoint(model, trial, metrics=metrics) 1270 self.control = self.callback_handler.on_save(self.args, self.state, self.control) 1271 /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in _save_checkpoint(self, model, trial, metrics) 1317 elif self.is_world_process_zero() and not self.deepspeed: 1318 # deepspeed.save_checkpoint above saves model/optim/sched -> 1319 torch.save(self.optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt")) 1320 with warnings.catch_warnings(record=True) as caught_warnings: 1321 torch.save(self.lr_scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt")) /usr/local/lib/python3.7/dist-packages/torch/serialization.py in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization) 371 with _open_zipfile_writer(opened_file) as opened_zipfile: 372 _save(obj, opened_zipfile, pickle_module, pickle_protocol) --> 373 return 374 _legacy_save(obj, opened_file, pickle_module, pickle_protocol) 375 /usr/local/lib/python3.7/dist-packages/torch/serialization.py in __exit__(self, *args) 257 258 def __exit__(self, *args) -> None: --> 259 self.file_like.write_end_of_file() 260 self.buffer.flush() 261 RuntimeError: [enforce fail at inline_container.cc:274] . unexpected pos 2212230208 vs 2212230096```
There are three default arguments that are relevant here, but seeing that you set save_total_limit=1 I am not sure what else could be being saved… github.com huggingface/transformers/blob/fe82b1bfa07aa054ef70583a561cb7c3978c697f/src/transformers/training_args.py#L161-L172 4 save_strategy (:obj:`str` or :class:`~transformers.trainer_utils.IntervalStrategy`, `optional`, defaults to :obj:`"steps"`): The checkpoint save strategy to adopt during training. Possible values are: * :obj:`"no"`: No save is done during training. * :obj:`"epoch"`: Save is done at the end of each epoch. * :obj:`"steps"`: Save is done every :obj:`save_steps`. save_steps (:obj:`int`, `optional`, defaults to 500): Number of updates steps before two checkpoint saves if :obj:`save_strategy="steps"`. save_total_limit (:obj:`int`, `optional`): If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in :obj:`output_dir`. Can you see what’s actually on the disk?
0
huggingface
🤗Transformers
Modify beam search objective
https://discuss.huggingface.co/t/modify-beam-search-objective/10391
Hi, I would like to experiment with adding an additional penalty term to the beam search objective used when calling generate(). What is the best way to go about this?
Hello! Have you found any solution? I found this post via Google while searching for the same question : D
0
huggingface
🤗Transformers
How do we use LXMERT for inference?
https://discuss.huggingface.co/t/how-do-we-use-lxmert-for-inference/1762
I see that we can download the LXMERT pretrained model. Great - so how do we actually load images and start asking questions?
I was looking into LXMERT too and wondering whether it might be used for image captioning. Has anyone tried something like that? Are there tutorials on how to use it for inference?
0
huggingface
🤗Transformers
LXMERT pre-trained model
https://discuss.huggingface.co/t/lxmert-pre-trained-model/1195
Hello, congrats to all contributors for the awesome work with LXMERT! It is exciting to see multimodal transformers coming to hugginface/transformers. Of course, I immediately tried it out and to played with the demo. Question: Does the line lxmert_base = LxmertForPreTraining.from_pretrained("unc-nlp/lxmert-base-uncased") load an already pre-trained LXMERT model on the tasks enumerated in the original paper “(1) masked crossmodality language modeling, (2) masked object prediction via RoI-feature regression, (3) masked object prediction via detected-label classification, (4) cross-modality matching, and (5) image question answering.” (Tan & Bansal, 2019)?
Tagging our LXMERT specialist @lysandre
0
huggingface
🤗Transformers
When can we expect TPU Trainer?
https://discuss.huggingface.co/t/when-can-we-expect-tpu-trainer/10353
Hi, I wanted to know when we can expect the Trainer API to support TPUs. Can I implement it myself? Give me some tips on where to start. Let me know, kind regards
The Trainer API does support TPUs. For example, the language modeling examples 34 can be run on TPU. There’s one thing to take into account when training on TPUs: Note: On TPU, you should use the flag --pad_to_max_length in conjunction with the --line_by_line flag to make sure all your batches have the same length. You can take a look at the scripts for details.
1
huggingface
🤗Transformers
How to convert mT5 and ByT5 to ONNX format?
https://discuss.huggingface.co/t/how-to-convert-mt5-and-byt5-to-onnx-format/11509
Hi, ONNX makes it possible to compress transformers models and speed up inference time on CPU and GPU. Could someone share code / a notebook to convert mT5 and ByT5 models to ONNX format? There is the fastT5 library from @kira for T5 conversion (great!), but it has not been updated to the latest version of transformers and therefore does not accept mT5 and ByT5 models. Thanks. List of topics about this subject: Speeding up T5 inference Boost inference speed of T5 models up to 5X & reduce the model size by 3X Questions on distilling [from] T5 List of online documents about this subject: Exporting transformers models Notebook: onnx_t5.ipynb from @valhalla Notebook: onnx-export.ipynb
fastT5 now works with transformers>=4.12.5 (this is the latest version I have tested). It also has some tweaks to the quantization settings that should speed it up a bit more. I can confirm it works with byt5 now, and it should work with mt5 as well (except for needing to change the ConditionalGeneration class as noted above). However, byt5 is substantially slower due to both the larger encoder and needing one decoding step per character EDIT: I just opened a PR to explicitly support mt5 as well. I did some limited testing and it seems to work.
1
huggingface
🤗Transformers
Custom trainer does not work on multiple GPUs
https://discuss.huggingface.co/t/custom-trainer-does-not-work-on-multiple-gpus/4729
Hello, I am trying to incorporate knowledge distillation loss into the Seq2SeqTrainer. The training script that I use is similar to the run_summarization script 1. It works for cpu and 1 gpu but freezes when I try run on multiple GPUs (stuck at the first batch). Even when I set use_kd_loss to False (the loss is computed by the super call only), it still does not work on multiple GPUs. Below is the trainer that I am using, any help would be greatly appreciated! class Seq2SeqKDTrainer(Seq2SeqTrainer): """ """ def __init__(self, *args, use_kd_loss=False, teacher_model=None, temperature=2.0, normalize_hidden=False, alpha_data=1.0, alpha_logits=0.0, alpha_hidden=0.0, **kwargs): super().__init__(*args, **kwargs) self.use_kd_loss = use_kd_loss self.teacher_model = teacher_model # Get the configurations to compare sizes self.student_config_dict = self.model.config.to_diff_dict() self.teacher_config_dict = self.teacher_model.config.to_diff_dict() self.temperature = temperature self.normalize_hidden = normalize_hidden self.alpha_data = alpha_data self.alpha_logits = alpha_logits self.alpha_hidden = alpha_hidden def compute_loss(self, model, inputs, return_outputs=False): # Update inputs to output hidden states and in form of a dictionary inputs["output_hidden_states"] = self.use_kd_loss inputs["return_dict"] = True # Compute cross-entropy data loss, which is identical to the default loss of Seq2SeqTrainer data_loss, student_outputs = super().compute_loss(model, inputs, return_outputs=True) # Compute KD component losses # Initialize losses to all 0s and only update if we use knowledge-distillation loss enc_hidden_loss, dec_hidden_loss, logits_loss = 0.0, 0.0, 0.0 if self.use_kd_loss: # Set up variables input_ids, source_mask, labels = inputs["input_ids"], inputs["attention_mask"], inputs["labels"] pad_token_id = self.tokenizer.pad_token_id decoder_input_ids = shift_tokens_right(input_ids=labels, pad_token_id=pad_token_id, decoder_start_token_id=self.teacher_model.config.decoder_start_token_id) teacher_model = self.teacher_model.to(input_ids.device) teacher_outputs = teacher_model(input_ids=input_ids, attention_mask=source_mask, decoder_input_ids=decoder_input_ids, output_hidden_states=True, return_dict=True, use_cache=False) # Compute logits loss decoder_mask = decoder_input_ids.ne(pad_token_id) logits_loss = self._compute_logits_loss(student_logits=student_outputs.logits, teacher_logits=teacher_outputs.logits, mask=decoder_mask, temperature=self.temperature) # Only compute encoder's hidden loss if the student's encoder is smaller if self.student_config_dict["encoder_layers"] < self.teacher_config_dict["encoder_layers"]: enc_hidden_loss = self._compute_hidden_loss( student_hidden_states=student_outputs.encoder_hidden_states, teacher_hidden_states=teacher_outputs.encoder_hidden_states, attention_mask=source_mask, teacher_layer_indices=self.student_config_dict["encoder_layer_indices"], normalize=self.normalize_hidden ) # Only compute decoder's hidden loss if the student's decoder is smaller if self.student_config_dict["decoder_layers"] < self.teacher_config_dict["decoder_layers"]: dec_hidden_loss = self._compute_hidden_loss( student_hidden_states=student_outputs.decoder_hidden_states, teacher_hidden_states=teacher_outputs.decoder_hidden_states, attention_mask=decoder_mask, teacher_layer_indices=self.student_config_dict["decoder_layer_indices"], normalize=self.normalize_hidden ) total_loss = self.alpha_data * data_loss + \ self.alpha_logits * logits_loss + \ self.alpha_hidden * (enc_hidden_loss + 
dec_hidden_loss) return total_loss @staticmethod def _compute_logits_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor, mask: torch.Tensor, temperature: float = 2.0): sel_mask = mask[:, :, None].expand_as(student_logits) vocab_size = student_logits.size(-1) # Select logits based on mask student_logits_select = torch.masked_select(student_logits, sel_mask).view(-1, vocab_size) teacher_logits_select = torch.masked_select(teacher_logits, sel_mask).view(-1, vocab_size) assert ( student_logits_select.shape == teacher_logits_select.shape ), "Expected tensors of the same size. Got student: {}, teacher: {}".format(student_logits_select.shape, teacher_logits_select.shape) # Compute logits loss logits_loss_fct = nn.KLDivLoss(reduction="batchmean") logits_loss = ( logits_loss_fct( F.log_softmax(student_logits_select / temperature, dim=-1), F.log_softmax(teacher_logits_select / temperature, dim=-1) ) * temperature ** 2 ) return logits_loss @staticmethod def _compute_hidden_loss(student_hidden_states: Tuple[torch.Tensor], teacher_hidden_states: Tuple[torch.Tensor], attention_mask: torch.Tensor, teacher_layer_indices: list, normalize: bool = False ): mask = attention_mask.to(student_hidden_states[0]) # Type and/or device conversion valid_count = mask.sum() * student_hidden_states[0].size(-1) # Get valid count # Stack hidden states # Here we skip the first hidden state which is the output of the embeddings student_hidden_stack = torch.stack([state for state in student_hidden_states[1:]]) teacher_hidden_stack = torch.stack([teacher_hidden_states[i] for i in teacher_layer_indices]) assert ( student_hidden_stack.shape == teacher_hidden_stack.shape ), "Expected tensors of the same size. Got student: {}, teacher: {}".format(student_hidden_stack.shape, teacher_hidden_stack.shape) # Normalize if specified if normalize: student_hidden_stack = F.layer_norm(student_hidden_stack, student_hidden_stack.shape[1:]) teacher_hidden_stack = F.layer_norm(teacher_hidden_stack, teacher_hidden_stack.shape[1:]) # Compute MSE loss loss_fct = nn.MSELoss(reduction="none") mse_loss = loss_fct(student_hidden_stack, teacher_hidden_stack) masked_mse_loss = (mse_loss * mask.unsqueeze(dim=0).unsqueeze(dim=-1)).sum() / valid_count return masked_mse_loss
I'm not an expert in Hugging Face, but check self.teacher_model.to(input_ids.device): this explicitly moves the whole teacher model to a single device inside compute_loss (for example, plain 'cuda' would put it on gpu:0), which is not what you want when running on multiple GPUs. Let me know if removing it works.
0
huggingface
🤗Transformers
How to set up Trainer for a regression?
https://discuss.huggingface.co/t/how-to-set-up-trainer-for-a-regression/12994
Hello, I am aware that I can run a regression model using float target values and num_labels=1 in a classification head like below: model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english", num_labels=1, ignore_mismatched_sizes=True) The problem is that right now I am merely adapting the Trainer specs for classification and during training I can see an accuracy metric where rmse or r-squared would be more appropriate. See the accuracy score below on the validation data: from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer batch_size = 32 args = TrainingArguments( evaluation_strategy = "epoch", save_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=3, report_to="none", weight_decay=0.01, output_dir='/content/drive/MyDrive/kaggle/', metric_for_best_model='accuracy') def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model, args, train_dataset=tokenized_datasets['train'], eval_dataset=tokenized_datasets['test'], tokenizer=tokenizer, compute_metrics=compute_metrics ) which gives Epoch Training Loss Validation Loss Accuracy 1 0.507300 0.499625 0.503853 2 0.466000 0.495724 0.503853 Which arguments in trainer should I use to I get rmse or r-squared instead? I assume the loss (that is minimized) is already the mean squared error (maybe I am wrong?) Thanks!
Yes, you can see that in the source code here 1. Note that you can also set the problem_type of the model to “regression” (which is equivalent to setting num_labels=1).
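As a concrete illustration, here is a minimal sketch of a compute_metrics that reports RMSE for a single-output regression head; this is just one reasonable choice, and you would also set metric_for_best_model='rmse' together with greater_is_better=False in the TrainingArguments:
import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions.squeeze(-1)  # (batch, 1) -> (batch,)
    rmse = np.sqrt(np.mean((predictions - labels) ** 2))
    return {"rmse": float(rmse)}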
1
huggingface
🤗Transformers
Always getting RuntimeError: CUDA out of memory with Trainer
https://discuss.huggingface.co/t/always-getting-runtimeerror-cuda-out-of-memory-with-trainer/12948
Hello, I am using huggingface on my google colab pro+ instance, and I keep getting errors like RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 15.78 GiB total capacity; 13.92 GiB already allocated; 206.75 MiB free; 13.94 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF I dont understand why? My dataset is microscopic (40K sentences), and all I am doing is loading bert-large-uncased and follow along the text classification notebook from transformers import AutoTokenizer from transformers import AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained('bert-large-cased') from datasets import load_dataset, load_metric metric = load_metric('glue', 'sst2') model = AutoModelForSequenceClassification.from_pretrained("bert-large-cased", num_labels=2) my trainer args are super standard from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer batch_size = 16 args = TrainingArguments( evaluation_strategy = "epoch", save_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=5, report_to="none", weight_decay=0.01, output_dir='/content/drive/MyDrive/kaggle/', metric_for_best_model='accuracy') def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions[:, 0] return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model, args, train_dataset=tokenized_datasets['train'], eval_dataset=tokenized_datasets['test'], tokenizer=tokenizer, compute_metrics=compute_metrics ) Am I missing something? Should I change some of the options? Thanks!!
(Just posting this in case someone smarter doesn’t post a better idea) Colab’s performance varies a lot. I ran the same script (dataset in question had 1200 sentences) and sometimes I get out of memory error and sometimes not. My latest project has 270 sentences and ran fine on the first try.
0
huggingface
🤗Transformers
AttributeError: ‘Flaubert For Sequence Classification’ object has no attribute ‘predict’
https://discuss.huggingface.co/t/attributeerror-flaubert-for-sequence-classification-object-has-no-attribute-predict/12927
I am trying to classify new dataset from best model after loading but I get this error: best_model_from_training_testing = './checkpoint-900' best_model= FlaubertForSequenceClassification.from_pretrained(best_model_from_training_testing, num_labels=3) raw_pred, _, _ = best_model.predict(test_tokenized_dataset) predictedLabelOnCompanyData = np.argmax(raw_pred, axis=1) Traceback (most recent call last): File "/16uw/test/MODULE_FMC/scriptTraitements/classifying.py", line 408, in <module> pred = predict(emb, test_model) File "/16uw/test/MODULE_FMC/scriptTraitements/classifying.py", line 279, in predict raw_pred, _, _ = model.predict(emb_feature) File "/l16uw/.conda/envs/bert/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'FlaubertForSequenceClassification' object has no attribute 'predict'
This model has no predict method indeed. Were you trying to use Trainer.predict?
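If so, a minimal sketch of that route, assuming test_tokenized_dataset is a tokenized datasets.Dataset (the checkpoint path comes from the question):
import numpy as np
from transformers import FlaubertForSequenceClassification, Trainer

best_model = FlaubertForSequenceClassification.from_pretrained("./checkpoint-900", num_labels=3)
trainer = Trainer(model=best_model)
raw_pred = trainer.predict(test_tokenized_dataset).predictions
predicted_labels = np.argmax(raw_pred, axis=1)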
0
huggingface
🤗Transformers
How to define the compute_metrics() function in Trainer?
https://discuss.huggingface.co/t/how-to-define-the-compute-metrics-function-in-trainer/12953
Hello, Coming from tensorflow I am a bit confused as to how to properly define the compute_metrics() in Trainer. For instance, I see in the notebooks various possibilities def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions[:, 0] return metric.compute(predictions=predictions, references=labels) My question may seem stupid (maybe it is) but how can I know how to compute the metrics if I cannot see what eval_pred looks like in Trainer? It is as if I had to guess what the output will be before actually training the model. Am I missing something here? Thanks!
Just run trainer.predict on your eval/test dataset.
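If you just want to see what eval_pred will look like before writing the metric, you can inspect the output of trainer.predict directly; a sketch, assuming a trainer and a tokenized test split already exist and the model is a standard classifier:
output = trainer.predict(tokenized_datasets["test"])
print(type(output.predictions), output.predictions.shape)  # the raw model outputs (logits)
print(type(output.label_ids), output.label_ids.shape)      # the gold labels
# compute_metrics receives exactly this pair: (output.predictions, output.label_ids)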
1
huggingface
🤗Transformers
Translating multiple languages to English (Tensorflow) - repost
https://discuss.huggingface.co/t/translating-multiple-languages-to-english-tensorflow-repost/12623
Hello there! I am trying to use tensorflow to translate from many different languages to English but I am not able to adapt the little examples available in the docs (which are written in pytorch) See below: import sentencepiece import transformers from transformers import T5TokenizerFast, TFT5ForConditionalGeneration article_fr = "bonjour je voudrais un camembert." model = TFT5ForConditionalGeneration.from_pretrained("t5-small", return_dict = True) tokenizer = T5TokenizerFast.from_pretrained("t5-small") My two attempts at translating fail miserably encoded_hi = tokenizer("translate French to German: "+article_fr, return_tensors="tf") generated_tokens = model.generate(**encoded_hi) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) basically returns my input in French ['bonjour je voudrais un camembert.'] Same thing with a pipeline [{'translation_text': 'Bonjour je voudrais un camembert.'}] Do you know what the issue is? I am open to use any other model, my only requirements are to be able to translate both japanese and french to english in tensorflow Any help greatly appreciated! Thanks!!!
I would suggest you to use a model trained on that task instead, like OPUS models. !pip install transformers !pip install sentencepiece from transformers import pipeline translation = pipeline("translation", "Helsinki-NLP/opus-mt-fr-de") tr_text = translation("bonjour je voudrais un camembert.") tr_text[0]["translation_text"] ## Hallo, ich hätte gern einen Camembert.
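Since the goal in the question is English output for both French and Japanese sources, the corresponding English-target OPUS checkpoints would look like the sketch below; the exact model names should be verified on the Hub before relying on them. Note also that t5-small was only pretrained with the English-to-German/French/Romanian translation prefixes, which is likely why the French input came back unchanged.
from transformers import pipeline

fr_en = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")
ja_en = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")

print(fr_en("bonjour je voudrais un camembert.")[0]["translation_text"])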
0
huggingface
🤗Transformers
Do we need to load a model twice to get embeddings and probabilities?
https://discuss.huggingface.co/t/do-we-need-to-load-a-model-twice-to-get-embeddings-and-probabilities/9419
Hello the dream team! I have fine-tuned a bert model for sentence classification. Everything works correctly but I need to both classify a sentence and extract its embeddings (based on the CLS token). Right now I am doing something (likely) very inefficient which is to load the model twice: myinput = 'huggingface is great but I am learning every day' model_for_embeddings= TFAutoModel.from_pretrained(r"Z:\mymodel") #get the embeddings each of dimension 768 input_ids = tf.constant(tokenizer.encode(myinput))[None,:] outputs = model_for_embeddings(input_ids) outputs[0][0] And now I also load the same model to get a classification prediction model_for_classification = TFAutoModelForSequenceClassification.from_pretrained((r"Z:\mymodel") encoding = tokenizer([myinput], max_length=280, truncation=True, padding=True, return_tensors="tf") # forward pass outputs = model_for_classification(encoding) logits = outputs.logits # transform to array with probabilities probs = tf.nn.softmax(preds, axis=1).numpy() I think seems extremely inefficient. Can I load the model just once and do both tasks? Thanks!
Yes you can get both outputs with just a single forward pass. In HuggingFace Transformers, you can pass in output_hidden_states=True when performing a forward pass for a given model. This allows you to get both the logits (for classification) and the hidden states of all layers of the model. from transformers import AutoTokenizer, TFAutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained(r"Z:\mymodel") model = TFAutoModelForSequenceClassification.from_pretrained(r"Z:\mymodel") text = "hello world" encoding = tokenizer(text, return_tensors="tf") # forward pass outputs = model(encoding, output_hidden_states=True) # get the logits logits = outputs.logits # get the hidden states hidden_states = outputs.hidden_states Note that the hidden_states are a tuple of tensors. It contains the hidden states of all layers, as well as the embedding layer. This means that you can get the last hidden states as hidden_states[-1].
0
huggingface
🤗Transformers
Gradient_checkpointing = True results in error
https://discuss.huggingface.co/t/gradient-checkpointing-true-results-in-error/10744
Hi all, I'm trying to finetune a summarization model (bigbird-pegasus-large-bigpatent) on my own data. Of course, even with premium Colab I'm having memory issues, so I tried to set gradient_checkpointing = True in the Seq2SeqTrainingArguments, which is supposed to save some memory although increasing the computation time. The problem is that when starting the training this argument raises an error: AttributeError: module 'torch.utils' has no attribute 'checkpoint' Has anyone experienced this same error? I read in the GitHub discussions that the same error was reported before for other models ("Error in GPT2 while using gradient checkpointing" #9617, a ProphetNet issue, and an LED fine-tuning issue with the same traceback), and that it was supposed to be solved by the PR "Fix: torch.utils.checkpoint.checkpoint attribute error.", which adds the missing import torch.utils.checkpoint statements to the modeling files. Any help would be appreciated. Thanks
Hi! I am facing a similar issue. Have you been able to solve it? The code that is causing the problem here is the following: model_path = "facebook/s2t-small-librispeech-asr" # Initialize the model model = Speech2TextForConditionalGeneration.from_pretrained(model_path) model = model.eval() # Attach decoder model = SpeechRecognizer(model, labels=labels) # Apply quantization / script / optimize for mobile quantized_model = torch.quantization.quantize_dynamic(model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8) scripted_model = torch.jit.script(quantized_model) The SpeechRecognizer is just a simple torch.nn.Module wrapper.
0
huggingface
🤗Transformers
Precise meaning of `d_head` and `d_inner`
https://discuss.huggingface.co/t/precise-meaning-of-d-head-and-d-inner/12793
I am trying to train a Transformer-XL model from scratch, but I am struggling to understand the meaning of d_head and d_inner in the config. I understand d_head as being the dimension of the value vector after attention has been applied, but I have no clue what d_inner should be. The doc only states: d_inner (int, optional, defaults to 4096) — Inner dimension in FF. What does FF mean here?
It stands for FeedForward. d_inner is the dimensionality of the hidden layer of the feedforward neural network (FF, FFN, or also called MLP as it’s a multilayer perceptron) inside the layers of the Transformer-XL model.
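To make the relationship concrete, here is a minimal sketch; the numeric values below are only illustrative and not taken from the thread. d_head is the per-head dimension inside the attention sublayer, while d_inner is the hidden size of the position-wise feed-forward sublayer that follows it.

```python
from transformers import TransfoXLConfig, TransfoXLModel

# d_model is split across n_head attention heads, so d_head is usually d_model // n_head.
# d_inner is the width of the hidden layer inside each feed-forward (FF) block,
# commonly set to a multiple of d_model (e.g. 4x).
config = TransfoXLConfig(
    d_model=512,
    n_head=8,
    d_head=64,     # per-head dimension of queries/keys/values
    d_inner=2048,  # hidden size of the FF sublayer
    n_layer=6,
)
model = TransfoXLModel(config)
```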
1
huggingface
🤗Transformers
How to freeze a model?
https://discuss.huggingface.co/t/how-to-freez-a-model/12819
I use for p in model.parameters(): p.requires_grad = False to freeze a T5 model (t5-small), but when I print the parameters that still require grad, there is one parameter of size 32121x512 left. What is this? Is it the embedding matrix? Should I freeze it too? It seems backward gradients affect this one remaining parameter.
It turned out I had called model.resize_token_embeddings(len(tokenizer)) after freezing the parameters, which resets the new embedding matrix's requires_grad to True.
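In other words, the order matters: resize first, then freeze. A minimal sketch of the fix (untested here; t5-small as in the question):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# resize_token_embeddings() builds a fresh embedding matrix with requires_grad=True,
# so it has to happen before the freezing loop, not after it.
model.resize_token_embeddings(len(tokenizer))

for p in model.parameters():
    p.requires_grad = False

# Sanity check: no trainable parameters should remain.
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # expected: 0
```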
0
huggingface
🤗Transformers
How to make it so that GPT-2 generates the text to the end of the sentence and does not cut it off in the middle?
https://discuss.huggingface.co/t/how-to-make-it-so-that-gpt-2-generates-the-text-to-the-end-of-the-sentence-and-does-not-cut-it-off-in-the-middle/4755
Hello. I am using GPT-2 to generate headings from a short description, but I ran into a problem: some sentences are generated incomplete and are cut off in the middle. How can I fix this so that the model generates each sentence through to the end?
Was wondering about this too. Is there any way to ask GPT-2 to finish sentences?
0
huggingface
🤗Transformers
TrOCR, CER metric error
https://discuss.huggingface.co/t/trocr-cer-metric-error/12653
I am finetuning TrOCR and using Character Error Rate from jiwer as the metric. def compute_cer(pred_ids, label_ids, processor): pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True) label_ids[label_ids == -100] = processor.tokenizer.pad_token_id print(f"len of label_ids {len(label_ids)}") label_str = processor.batch_decode(label_ids, skip_special_tokens=True) print(f"len_pred_str={len(pred_str)}, len_label={len(label_str)}") cer = cer_metric.compute(predictions=pred_str, references=label_str) return cer Except for the print statements, the code is a direct copy from @nielsr tutorial 2. Despite len(pred_str) and len(label_str) being the same, I am getting ValueError: number of ground truth inputs (17) and hypothesis inputs (24) must match. I have attached a screenshot of the error. Please let me know if you have any clue what might be causing the issue.
I believe this was a bug that has been fixed, see Datasets.load_metric("cer") does not work 2
1
huggingface
🤗Transformers
How do I do inference using the GPT models on TPUs?
https://discuss.huggingface.co/t/how-do-i-do-inference-using-the-gpt-models-on-tpus/12644
I have tried the following, but it did not work: !pip uninstall -y torch !pip uninstall -y torchvision !pip uninstall -y torchtext !pip uninstall -y torchaudio !pip install transformers \ cloud-tpu-client==0.10 \ datasets \ torchvision \ torchaudio \ librosa \ jiwer \ parsivar \ num2fawords \ torch==1.9.0 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl !pip install datasets transformers[sentencepiece] import torch_xla import torch_xla.core.xla_model as xm import torch from torch import nn, optim from torchvision import transforms, datasets from torch.optim import Adam import torch.nn.functional as F import torch_xla import torch_xla.core.xla_model as xm import torch_xla.debug.metrics as met import torch_xla.distributed.parallel_loader as pl import torch_xla.distributed.xla_multiprocessing as xmp import torch_xla.utils.utils as xu device = xm.xla_device() from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel tokenizer = AutoTokenizer.from_pretrained('flax-community/gpt2-medium-persian') model = GPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-persian') model = model.to(device) generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100}) But it errors: out = generator('در مورد پدر هری پاتر شک هایی وجود دارد.') Setting `pad_token_id` to `eos_token_id`:5 for open-end generation. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-18-f6da4d1d2479> in <module>() ----> 1 get_ipython().magic("timeit out = generator('در مورد پدر هری پاتر شک هایی وجود دارد.')") 19 frames <decorator-gen-52> in timeit(self, line, cell) <magic-timeit> in inner(_it, _timer) /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2041 # remove once script supports set_grad_enabled 2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2044 2045 RuntimeError: torch_xla/csrc/aten_xla_bridge.cpp:69 : Check failed: xtensor *** Begin stack trace *** tensorflow::CurrentStackTrace() torch_xla::bridge::GetXlaTensor(at::Tensor const&) torch_xla::AtenXlaType::index_select(at::Tensor const&, long, at::Tensor const&) c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&, long, at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, long, at::Tensor const&> >, at::Tensor (at::Tensor const&, long, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) at::Tensor::index_select(long, at::Tensor const&) const at::native::embedding(at::Tensor const&, at::Tensor const&, long, bool, bool) torch_xla::AtenXlaType::embedding(at::Tensor const&, at::Tensor const&, long, bool, bool) c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&, at::Tensor const&, long, bool, bool), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&, long, bool, bool> >, at::Tensor (at::Tensor const&, at::Tensor const&, long, bool, bool)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&, long, bool, bool) at::embedding(at::Tensor const&, at::Tensor const&, long, bool, bool) _PyMethodDef_RawFastCallKeywords 
_PyCFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend _PyObject_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend _PyObject_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallDict _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend _PyObject_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallDict _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName PyEval_EvalCode _PyMethodDef_RawFastCallKeywords _PyCFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyObject_Call_Prepend PyObject_Call _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallDict _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallDict _PyEval_EvalFrameDefault _PyEval_EvalCodeWithName _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault _PyFunction_FastCallKeywords _PyEval_EvalFrameDefault *** End stack trace *** Input tensor is not an XLA tensor: torch.LongTensor
The pipeline function does not support TPUs; you will have to manually pass your batch through the model (after placing it on the right XLA device) and then post-process the outputs.
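For reference, a minimal sketch of that manual flow (untested; it reuses the model name from the question and assumes torch_xla is already installed, as in the original setup):

```python
import torch
import torch_xla.core.xla_model as xm
from transformers import AutoTokenizer, GPT2LMHeadModel

device = xm.xla_device()
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt2-medium-persian")
model = GPT2LMHeadModel.from_pretrained("flax-community/gpt2-medium-persian").to(device)

# Tokenize on CPU, then move every tensor in the batch to the XLA device.
inputs = tokenizer("در مورد پدر هری پاتر شک هایی وجود دارد.", return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=100)

# Post-process on CPU: decode the generated ids back to text.
print(tokenizer.decode(output_ids[0].cpu(), skip_special_tokens=True))
```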
0
huggingface
🤗Transformers
Using Accelerated Inference API to produce sentence embeddings
https://discuss.huggingface.co/t/using-accelerated-inference-api-to-produce-sentense-embeddings/6223
Is it possible to use Accelerated Inference API 7 to produce sentense embeddings as described here 4? from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1) sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9) return sum_embeddings / sum_mask #Sentences we want sentence embeddings for sentences = ['This framework generates embeddings for each input sentence', 'Sentences are passed as a list of string.', 'The quick brown fox jumps over the lazy dog.'] #Load AutoModel from huggingface model repository tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens") model = AutoModel.from_pretrained("sentence-transformers/bert-base-nli-mean-tokens") #Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt') #Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) #Perform pooling. In this case, mean pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
Hey @vitali, I believe integration with sentence-transformers in the inference API is currently in progress, so maybe @osanseviero can share some details (or whether it's currently possible).
0
huggingface
🤗Transformers
Using hyperparameter-search in Trainer
https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785
This branch 86 hasn't been merged yet, but I want to use optuna in my workflow. Although I have tried it, I want to confirm the usage. @sgugger (firstly, thanks for the PR) could you please provide instructions on what changes I need to make to get it working (like defining the search space, getting results for it, and finding the best hyperparameters)? I want to confirm that I'm using it in the right manner. Also, is the implementation complete?
Hi there! This is a work in progress so I'd hold on a tiny bit before starting to use it (I'll actually make some changes today). I'll add an example in the PR once I'm done (hopefully by end of day) so you (and others) can start playing with it and give us potential feedback, but be prepared for some slight changes in the API as we polish it (we want to support other hp-search platforms such as Ray).
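For readers landing here later: once the feature was merged, usage settled into roughly the following shape. This is only a sketch (the model name, datasets, and trial counts are placeholders), and the API in the unmerged branch discussed above may have differed slightly.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init():
    # The Trainer re-instantiates the model from scratch for every trial.
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def hp_space(trial):
    # Optuna-style search space: each trial samples one value per hyperparameter.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 2, 5),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [8, 16, 32]
        ),
    }

trainer = Trainer(
    model_init=model_init,  # note: model_init instead of model
    args=TrainingArguments(output_dir="hp_search", evaluation_strategy="epoch"),
    train_dataset=train_dataset,  # placeholder: your tokenized train set
    eval_dataset=eval_dataset,    # placeholder: your tokenized validation set
)

best_run = trainer.hyperparameter_search(
    hp_space=hp_space, backend="optuna", n_trials=10, direction="minimize"
)
print(best_run.hyperparameters)
```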
0
huggingface
🤗Transformers
NER pipeline aggregation for BILOU
https://discuss.huggingface.co/t/ner-pipeline-aggregation-for-bilou/12411
The simple aggregation strategy for an NER pipeline nlp = pipeline("ner", model=model_directory, aggregation_strategy="simple") aggregates correctly if we use BIO tags, but not if using BILOU style, is there a way to amend this easily? I can change nlp.model.config.id2label = {k: v.replace('L-', 'I-').replace('U-', 'B-') for k, v in nlp.model.config.id2label.items()} but is there an in-built way to handle such cases where we have non BIO format labels?
The simplest way I found is to adapt the config.json and change the 'id2label' dictionary so that it maps to IOB: "id2label": { "0": "O", "1": "B-DISORDER", "2": "B-DISORDER", "3": "I-DISORDER", "4": "I-DISORDER", "5": "B-FINDING", "6": "B-FINDING", "7": "I-FINDING", "8": "I-FINDING" }, Hope this helps, Herman
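Alternatively, the same remapping can be done in code before the pipeline is built, along the lines of the workaround in the question. A hedged sketch (model_directory is the path from the question; the label names above are only illustrative):

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained(model_directory)  # path as in the question
tokenizer = AutoTokenizer.from_pretrained(model_directory)

# BILOU -> BIO: L- (last) behaves like I- (inside), and U- (unit-length) behaves like B- (begin),
# which is exactly what the simple aggregation strategy expects.
model.config.id2label = {
    i: label.replace("L-", "I-").replace("U-", "B-")
    for i, label in model.config.id2label.items()
}

nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
```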
1
huggingface
🤗Transformers
Replace weights in TFBertModel
https://discuss.huggingface.co/t/replace-weights-in-tfbertmodel/12270
Hi everyone, I have a multilabel model built from the TFAutoModelForSequenceClassification in which I took the TFBertMainLayer (in the code below it is the bert = transformer_model.layers[0]) on top of which I added a Dropout and a Dense layer. After compiling and fitting the model I saved the model weights as an h5 file and saved the model architecture in a json file (using model.to_json() in Keras). bert = transformer_model.layers[0] input_ids = tf.keras.layers.Input(shape=(input_dim,), name='input_ids', dtype='int32') attention_mask = tf.keras.layers.Input(shape=(input_dim,), name='attention_mask', dtype='int32') inputs = {'input_ids': input_ids, 'attention_mask': attention_mask} # https://github.com/huggingface/transformers/issues/7540 bert_model = bert(input_ids, attention_mask)[1] X = tf.keras.layers.Dropout(transformer_model.config.hidden_dropout_prob, name='pooled_output', trainable=True)(bert_model) X = tf.keras.layers.Dense(units=num_labels, activation='sigmoid', name='dense', trainable=True)(X) model = tf.keras.Model(inputs=inputs, outputs=X) I want to visualize the attention weights of the model and came across https://github.com/jessevig/bertviz 1. However, it doesn't look like it works well with models not based on PyTorch objects. A possible solution I thought about includes the following steps: Use the TFBertModel: initialize the TFBertModel and replace the weights of the TFBertMainLayer with the weights of my trained model. Namely, I tried doing something like this tf_bert_model = TFBertModel.from_pretrained('bert-base-uncased') bert.layers[0]=model.layers[2] But it doesn't seem to work and I am not able to replace the weights. Then, if I can get step #1 to work, I thought to save the tf_bert_model using tf_bert_model.save_pretrained() and load it into the PyTorch class BertModel, which should then enable me to work with bertviz. Any ideas on how I can replace the weights to make step #1 work? Or another idea to get around the issue so I can get bertviz working with my Keras model? Any help will be greatly appreciated. Ayala Allon
Hi (@ayalaall) Ayala, I had a similar problem. (I had a model in TF and I wanted to use BertViz.) Like you, I thought it should be possible to copy the TF weights into a Pytorch framework. After all, they are just a bunch of numbers. However, I couldn't get it to work. I could be wrong, but I don't think there is an easy way to move a model from TF to Pytorch. My solution (which worked eventually) was to start again and train a new model using Pytorch. This made sense for me, as I wasn't particularly expert in TF/keras, and I thought it might be handy to learn Pytorch. Afterwards, I thought it would have been better to write a copy of BertViz that was designed to work with TF. (How hard can it be…?) If you are expert in TF and in Python then this might be a good solution for you. A third possibility would be to look at the internal structures of the way the weights are stored for TF and for Pytorch, and to force your model's numbers into a Pytorch-like structure. Since both the TF and the Pytorch models are implementations of the same attention-based encoder, I think this should be theoretically possible. It doesn't sound easy though. It is possible that somebody has written a TF-to-Pytorch converter program. I couldn't find one when I looked, but that was nearly two years ago, so there might be one now. If you ask another question on this forum with "TF to Pytorch Model Conversion" in the title, somebody might know (but don't hold your breath waiting). It is possible that somebody has written a visualisation tool for TF Bert models. Again, I couldn't find one two years ago. How much have you searched?
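For what it's worth, here is one hedged sketch of step #1 from the question (untested; it assumes model.layers[2] really is the TFBertMainLayer of the fine-tuned Keras model, as in the snippet above): copy the Keras weights into a fresh TFBertModel, save it with save_pretrained(), and then reload that checkpoint into the PyTorch BertModel via from_tf=True so BertViz can consume it.

```python
from transformers import TFBertModel, BertModel

# Fresh TF model; its main layer has the same architecture as the fine-tuned one.
tf_bert_model = TFBertModel.from_pretrained("bert-base-uncased")

# Assumption: model.layers[2] is the TFBertMainLayer of the trained Keras model.
tf_bert_model.bert.set_weights(model.layers[2].get_weights())

# Save in TF format, then load the same checkpoint into the PyTorch class.
tf_bert_model.save_pretrained("finetuned-bert-main-layer")
pt_model = BertModel.from_pretrained(
    "finetuned-bert-main-layer", from_tf=True, output_attentions=True
)
```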
0
huggingface
🤗Transformers
TextDataset can’t set max_seq_length?
https://discuss.huggingface.co/t/textdataset-cant-set-max-seq-length/12415
I’m trying to train bert from scratch, here is my code: import logging import sys import os from typing import Optional import code import datasets from dataclasses import dataclass, field import transformers logger = logging.getLogger(__name__) @dataclass class CustomArguments: train_file: Optional[str] = field(default=None) validation_file: Optional[str] = field(default=None) max_seq_length: Optional[int] = field(default=128) vocab_path: Optional[str] = field(default=None) model_conf_path: Optional[str] = field(default=None) def main(): parser = transformers.HfArgumentParser((CustomArguments, transformers.TrainingArguments)) custom_args, training_args = parser.parse_args_into_dataclasses() logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = training_args.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) # Set the verbosity to info of the Transformers logger (on main process only): logger.info(f"Training/evaluation parameters {training_args}") if custom_args.train_file is None: raise ValueError("train_file must be specified!") tokenizer = transformers.BertTokenizerFast( vocab_file=custom_args.vocab_path, do_lower_case=False, max_length=128) model_config = transformers.BertConfig.from_pretrained( custom_args.model_conf_path) model = transformers.BertForPreTraining(config=model_config) model.resize_token_embeddings(len(tokenizer)) train_dataset = transformers.TextDatasetForNextSentencePrediction( tokenizer=tokenizer, file_path=custom_args.train_file, block_size=128) eval_dataset = None if custom_args.validation_file is not None: eval_dataset = transformers.TextDatasetForNextSentencePrediction( tokenizer=tokenizer, file_path=custom_args.validation_file, block_size=128) data_collator = transformers.DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15) trainer = transformers.Trainer(model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset) trainer.train() if eval_dataset: trainer.evaluate() if __name__ == "__main__": main() but error occur: File "/usr/local/anaconda3/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 221, in forward embeddings += position_embeddings RuntimeError: The size of tensor a (2190) must match the size of tensor b (512) at non-singleton dimension 1 Then I print the input shape, found the seq length is 2190! It seem block_size not work? How can I set max_seq_length combine with TextDatasetForNextSentencePrediction. By the way, is there a way to get nsp dataset combine with load_dataset?
Is there any advice? @lhoestq
0
huggingface
🤗Transformers
[HELP] Model Evaluation for NER yields different results (sklearn vs metric.compute())
https://discuss.huggingface.co/t/help-model-evaluation-for-ner-yields-different-results-sklearn-vs-metric-compute/10528
I am using a model for evaluating the capacity of my Transformer → AutoModel → XLNetForTokenClassification. I am using the exact evaluation like in this case, the tutorial Sylvain Gugger created : Google Colab 18 I have a dilemma which is the following: metric = load_metric("seqeval") results = metric.compute(predictions=[true_predictions], references=[true_labels]) and classification_report(true_labels, true_predictions) (from sklearn.metrics) yield different scores. In essence, the classification report yields better Recall, Precision and F1-Score. For HuggingFace built-in metric: {'S': {'precision': 0.7408599678086917, 'recall': 0.794182893763865, 'f1': 0.7665952890792291, 'number': 4057}, 'overall_precision': 0.7408599678086917, 'overall_recall': 0.794182893763865, 'overall_f1': 0.7665952890792291, 'overall_accuracy': 0.9210345258944208} Sklearn classification_report: precision recall f1-score support label_1 0.80 0.85 0.83 4051 label_2 0.84 0.82 0.83 4056 label_3 0.96 0.95 0.95 23869 accuracy 0.92 31976 macro avg 0.87 0.87 0.87 31976 weighted avg 0.92 0.92 0.92 31976 Can anyone tell me where this difference comes from? Note that I pass exactly the same lists of prediction and labels in metric.compute() and classification_report(). I also manually went through every example of the validation set and predicted with my loaded PyTorch model (so not directly from Trainer), and created the classification report. The metrics are the same with the sklearn classification report above, which means that the trainer.predict() and basic PyTorch predict predictions do not vary at all.
I also faced the same issue and did the same analysis you did. I would like to know why this happens.
0
huggingface
🤗Transformers
Logging training accuracy using Trainer class
https://discuss.huggingface.co/t/logging-training-accuracy-using-trainer-class/5524
Hello, I am running BertForSequenceClassification and I would like to log the accuracy as well as other metrics that I have already defined for my training set. I saw in another issue that I have to add a self.evaluate(self.train_dataset) somewhere in the code, but I am a beginner when it comes to Python and deep learning in general, so I am not sure where exactly I have to include it. I was trying to replicate the evaluate() method of the Trainer class, taking the train_dataset as argument, but it did not work. It would really mean a lot if you could guide me as to where I should tweak the code! Thank you for your help!
Hey @dbejarano31, assuming that you want to log the training metrics during training, I think there are (at least) two options: (1) subclass TrainerCallback (docs 54) to create a custom callback that logs the training metrics by triggering an event with on_evaluate; (2) subclass Trainer and override the evaluate function (docs 56) to inject the additional evaluation code. Option 2 might be easier to implement since you can use the existing logic as a template.
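A minimal sketch of option 2 (untested; it assumes a compute_metrics function that returns accuracy has already been passed to the Trainer): override evaluate() so that every evaluation also runs over the training set, with a separate metric prefix so the logged keys don't collide.

```python
from transformers import Trainer

class TrainEvalTrainer(Trainer):
    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
        # Usual evaluation on the validation set.
        metrics = super().evaluate(
            eval_dataset=eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix
        )
        # Extra pass over the training set; the "train" prefix keeps the logged keys separate.
        train_metrics = super().evaluate(
            eval_dataset=self.train_dataset, ignore_keys=ignore_keys, metric_key_prefix="train"
        )
        metrics.update(train_metrics)
        return metrics
```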
0