docs | category | thread | href | question | context | marked
---|---|---|---|---|---|---|
huggingface | Beginners | [Beginner] fine-tune Bart with custom dataset in other language? | https://discuss.huggingface.co/t/beginner-fine-tune-bart-with-custom-dataset-in-other-language/3334 | @valhalla @sshleifer
Hi, I’m new to the seq2seq model. And I want to fine-tune Bart/T5 for the summarization task. There are some documents related to the fine-tuning procedure.
Such as:
github.com/huggingface/transformers (master/examples/seq2seq): 🤗Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow 2.0.
ohmeow.github.io: modeling.seq2seq.summarization | blurr. This module contains custom models, custom splitters, etc... summarization tasks.
And also thanks for the distilbart version.
But my custom dataset is in Japanese, so fine-tuning Bart directly might not work. Is it necessary to train a new BPE tokenizer with Japanese data? I don’t know how to do that.
The second way is to use an existing Japanese tokenizer like bert-japanese, but could I just use it with Bart? How would I modify it?
The third way is to use a multilingual model like MBart or MT5. I haven’t tested them. Could I just fine-tune them on the Japanese dataset?
Please forgive me if this is a stupid question. Thanks in advance.
| Hi @HeroadZ
Bart is trained on English, so I don’t think fine-tuning it directly will help. If you want to train a model from scratch in a new language then yes, you should train a new tokenizer. To train a new tokenizer, check out the tokenizers library.
And both MBart and MT5 support Japanese so that would be a good starting point.
Another option is to leverage a language-specific encoder-only BERT model (in your case bert-japanese) to create a seq2seq model using the EncoderDecoder framework. See this notebook to learn more about EncoderDecoder models:
Leverage BERT for Encoder-Decoder Summarization on CNN/Dailymail | 0 |
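A minimal sketch of the EncoderDecoder approach mentioned above, tying a Japanese BERT checkpoint into a seq2seq model. The checkpoint name cl-tohoku/bert-base-japanese is an assumption (it is one common bert-japanese model on the hub), and its tokenizer needs fugashi/ipadic installed:
from transformers import AutoTokenizer, EncoderDecoderModel

checkpoint = "cl-tohoku/bert-base-japanese"  # assumed bert-japanese checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Build an encoder-decoder model from two BERT checkpoints; the cross-attention
# layers in the decoder are randomly initialized and learned during fine-tuning.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)

# Tell the seq2seq wrapper which special tokens to use for generation and padding.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
After this setup, the model can be fine-tuned on (article, summary) pairs like any other encoder-decoder model.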
huggingface | Beginners | Streamlit App Faster Model Loading | https://discuss.huggingface.co/t/streamlit-app-faster-model-loading/3211 | An open question for anyone that has used Transformer models in a Streamlit app. I am using:::
pipeline("summarization", model="sshleifer/distilbart-cnn-6-6", tokenizer="sshleifer/distilbart-cnn-6-6", framework="pt")
::: to do summarization in the app. However, it takes about 55 seconds to create the summary, and it appears that 35 seconds or more of that time is spent downloading the model. Is there another way to access the model quicker? Perhaps by pre-loading the model to Streamlit Sharing (via the github repo the app sits in)?
Also, the summary generation part of the app appears to work once or twice, but if done any more times the app crashes. Has anyone else had this experience? | No experience with Streamlit itself, but you can always download the model locally. Usage is a bit different then: you need to provide a directory to the model argument instead of just the model name. So download all those files to a directory, and then use that directory as your argument. | 0 |
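A minimal sketch of that local-download approach (the directory name ./distilbart-cnn-6-6 is arbitrary): save the model and tokenizer once, then point the pipeline at the directory so the app never downloads at request time.
from transformers import pipeline

# One-time step: download and save the files next to the app.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-6-6",
                      tokenizer="sshleifer/distilbart-cnn-6-6", framework="pt")
summarizer.model.save_pretrained("./distilbart-cnn-6-6")
summarizer.tokenizer.save_pretrained("./distilbart-cnn-6-6")

# In the app: load from the local directory instead of re-downloading every run.
summarizer = pipeline("summarization", model="./distilbart-cnn-6-6",
                      tokenizer="./distilbart-cnn-6-6", framework="pt")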
huggingface | Beginners | Creating Trainer object is deleting my ‘labels’ feature | https://discuss.huggingface.co/t/creating-trainer-object-is-deleting-my-labels-feature/3325 | I have a dataset called tokenized_datasets:
>>> tokenized_datasets
Dataset({
features: ['attention_mask', 'input_ids', 'labels', 'token_type_ids'],
num_rows: 755988
})
>>> tokenized_datasets[0].keys()
dict_keys(['attention_mask', 'input_ids', 'labels', 'token_type_ids'])
But when I create a Trainer object, the labels key just disappears!
>>> training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total # of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
)
>>> trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=tokenized_datasets # training dataset
)
>>> tokenized_datasets[0].keys() # WHAT???
dict_keys(['attention_mask', 'input_ids', 'token_type_ids'])
This causes training to fail like so:
>>> trainer.train()
KeyError Traceback (most recent call last)
<ipython-input-108-21d21c7948cc> in <module>()
----> 1 trainer.train()
[SNIPPED]
/home/sbendl/.local/lib/python3.6/site-packages/transformers/file_utils.py in __getitem__(self, k)
1414 if isinstance(k, str):
1415 inner_dict = {k: v for (k, v) in self.items()}
-> 1416 return inner_dict[k]
1417 else:
1418 return self.to_tuple()[k]
KeyError: 'loss'
I’m at a loss here, and quite frustrated: why on earth is this happening? It doesn’t happen when I follow the (very similar) code here. All the code snippets are run sequentially in my notebook; there’s no “hidden” code. I have a dataset, I pass it to the trainer, and as a result my dataset is broken. | How did you create your model? If the key is dropped by the Trainer, it means the model signature does not accept it. You can also deactivate that behavior by passing remove_unused_columns=False in your TrainingArguments. | 0 |
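For reference, a sketch of the suggested flag, keeping a subset of the arguments from the question; only do this if the model’s forward() or your data collator can actually handle the extra keys:
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=3,
    per_device_train_batch_size=16,
    remove_unused_columns=False,   # keep columns the model signature does not list
)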
huggingface | Beginners | Masked language modeling perplexity | https://discuss.huggingface.co/t/masked-language-modeling-perplexity/3260 | Hello, in the RoBERTa article the authors refer to the model’s perplexity. However, I have yet to find a clear definition of what perplexity means in the context of a model trained on the Masked Language Modeling objective as opposed to the Causal Language Modeling task.
Could someone give me a clear definition? Thanks! | It’s the exponential of the cross-entropy loss, like for CLM. | 0 |
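In code that definition is a one-liner; the eval_loss value below is a placeholder standing in for something like trainer.evaluate()["eval_loss"] on an MLM model:
import math

eval_loss = 2.31                 # mean cross-entropy over the masked tokens
perplexity = math.exp(eval_loss)
print(perplexity)                # ~10.1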
huggingface | Beginners | Need of beginning and end of speech tokens for causal language modeling | https://discuss.huggingface.co/t/need-of-beginning-and-end-of-speech-tokens-for-causal-language-modeling/3320 | Hello everyone, I’m trying to learn more about language modeling using huggingface, and how some problems can be modeled as a language to train a model to predict the next token in an arbitrary sequence. In this huggingface tutorial, they mention the use of a [BOS token]. Why is this needed? Does a causal language model need tokens to denote when a sequence begins and ends? What might happen if tokens like this are not included in the training dataset? Will this significantly affect the ability of the model to generate sequences that begin and end properly? | Transformer models create embeddings based on context. If the model doesn’t know where a sentence or document begins and ends, it’s harder for it to determine context, and the resulting embeddings could be affected. If information from an irrelevant context bleeds into the embedding for a word, it can only be a bad thing. As for whether it’s significant, it’s a huge “it depends”. But, unless by some luck things turn out just right, the model will always be a little bit worse. | 0 |
huggingface | Beginners | “table-question-answering” is not an available task under pipeline | https://discuss.huggingface.co/t/table-question-answering-is-not-an-available-task-under-pipeline/3284 | I am trying to load the “table-question-answering” task using the pipeline but I keep getting the message that -
"Unknown task table-question-answering, available tasks are ['feature-extraction',
'sentiment-analysis',............
Below are the lines I run.
from transformers import pipeline
import pandas as pd
tqa = pipeline("table-question-answering") | You should check your version of transformers, it looks like it’s not up-to-date. | 0 |
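A quick way to check, as suggested; the table-question-answering pipeline (TAPAS) only exists in recent transformers releases, so upgrade if the printed version is older:
import transformers

print(transformers.__version__)
# If the task is still unknown, upgrade with:
#   pip install --upgrade transformers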
huggingface | Beginners | Create a transformer model from a pytorch model (model.bin)) | https://discuss.huggingface.co/t/create-a-transformer-model-from-a-pytorch-model-model-bin/3270 | How do I go from a model created from Bert, trained on a specific domain and stored as model.bin, to a transformer, tokenizer, model and optimizer?
Sorry if it is very easy, but I’m going crazy!!!
ck = torch.load('/content/drive/MyDrive/saved_models/pytorch_model.bin')
print(ck.keys())
odict_keys([‘bert.embeddings.word_embeddings.weight’, ‘bert.embeddings.position_embeddings.weight’, ‘bert.embeddings.token_type_embeddings.weight’, ‘bert.embeddings.LayerNorm.weight’, ‘bert.embeddings.LayerNorm.bias’, ‘bert.encoder.layer.0.attention.self.query.weight’, ‘bert.encoder.layer.0.attention.self.query.bias’, ‘bert.encoder.layer.0.attention.self.key.weight’, ‘bert.encoder.layer.0.attention.self.key.bias’, ‘bert.encoder.layer.0.attention.self.value.weight’, ‘bert.encoder.layer.0.attention.self.value.bias’, ‘bert.encoder.layer.0.attention.output.dense.weight’, ‘bert.encoder.layer.0.attention.output.dense.bias’, ‘bert.encoder.layer.0.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.0.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.0.intermediate.dense.weight’, ‘bert.encoder.layer.0.intermediate.dense.bias’, ‘bert.encoder.layer.0.output.dense.weight’, ‘bert.encoder.layer.0.output.dense.bias’, ‘bert.encoder.layer.0.output.LayerNorm.weight’, ‘bert.encoder.layer.0.output.LayerNorm.bias’, ‘bert.encoder.layer.1.attention.self.query.weight’, ‘bert.encoder.layer.1.attention.self.query.bias’, ‘bert.encoder.layer.1.attention.self.key.weight’, ‘bert.encoder.layer.1.attention.self.key.bias’, ‘bert.encoder.layer.1.attention.self.value.weight’, ‘bert.encoder.layer.1.attention.self.value.bias’, ‘bert.encoder.layer.1.attention.output.dense.weight’, ‘bert.encoder.layer.1.attention.output.dense.bias’, ‘bert.encoder.layer.1.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.1.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.1.intermediate.dense.weight’, ‘bert.encoder.layer.1.intermediate.dense.bias’, ‘bert.encoder.layer.1.output.dense.weight’, ‘bert.encoder.layer.1.output.dense.bias’, ‘bert.encoder.layer.1.output.LayerNorm.weight’, ‘bert.encoder.layer.1.output.LayerNorm.bias’, ‘bert.encoder.layer.2.attention.self.query.weight’, ‘bert.encoder.layer.2.attention.self.query.bias’, ‘bert.encoder.layer.2.attention.self.key.weight’, ‘bert.encoder.layer.2.attention.self.key.bias’, ‘bert.encoder.layer.2.attention.self.value.weight’, ‘bert.encoder.layer.2.attention.self.value.bias’, ‘bert.encoder.layer.2.attention.output.dense.weight’, ‘bert.encoder.layer.2.attention.output.dense.bias’, ‘bert.encoder.layer.2.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.2.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.2.intermediate.dense.weight’, ‘bert.encoder.layer.2.intermediate.dense.bias’, ‘bert.encoder.layer.2.output.dense.weight’, ‘bert.encoder.layer.2.output.dense.bias’, ‘bert.encoder.layer.2.output.LayerNorm.weight’, ‘bert.encoder.layer.2.output.LayerNorm.bias’, ‘bert.encoder.layer.3.attention.self.query.weight’, ‘bert.encoder.layer.3.attention.self.query.bias’, ‘bert.encoder.layer.3.attention.self.key.weight’, ‘bert.encoder.layer.3.attention.self.key.bias’, ‘bert.encoder.layer.3.attention.self.value.weight’, ‘bert.encoder.layer.3.attention.self.value.bias’, ‘bert.encoder.layer.3.attention.output.dense.weight’, ‘bert.encoder.layer.3.attention.output.dense.bias’, ‘bert.encoder.layer.3.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.3.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.3.intermediate.dense.weight’, ‘bert.encoder.layer.3.intermediate.dense.bias’, ‘bert.encoder.layer.3.output.dense.weight’, ‘bert.encoder.layer.3.output.dense.bias’, ‘bert.encoder.layer.3.output.LayerNorm.weight’, ‘bert.encoder.layer.3.output.LayerNorm.bias’, ‘bert.encoder.layer.4.attention.self.query.weight’, ‘bert.encoder.layer.4.attention.self.query.bias’, 
‘bert.encoder.layer.4.attention.self.key.weight’, ‘bert.encoder.layer.4.attention.self.key.bias’, ‘bert.encoder.layer.4.attention.self.value.weight’, ‘bert.encoder.layer.4.attention.self.value.bias’, ‘bert.encoder.layer.4.attention.output.dense.weight’, ‘bert.encoder.layer.4.attention.output.dense.bias’, ‘bert.encoder.layer.4.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.4.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.4.intermediate.dense.weight’, ‘bert.encoder.layer.4.intermediate.dense.bias’, ‘bert.encoder.layer.4.output.dense.weight’, ‘bert.encoder.layer.4.output.dense.bias’, ‘bert.encoder.layer.4.output.LayerNorm.weight’, ‘bert.encoder.layer.4.output.LayerNorm.bias’, ‘bert.encoder.layer.5.attention.self.query.weight’, ‘bert.encoder.layer.5.attention.self.query.bias’, ‘bert.encoder.layer.5.attention.self.key.weight’, ‘bert.encoder.layer.5.attention.self.key.bias’, ‘bert.encoder.layer.5.attention.self.value.weight’, ‘bert.encoder.layer.5.attention.self.value.bias’, ‘bert.encoder.layer.5.attention.output.dense.weight’, ‘bert.encoder.layer.5.attention.output.dense.bias’, ‘bert.encoder.layer.5.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.5.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.5.intermediate.dense.weight’, ‘bert.encoder.layer.5.intermediate.dense.bias’, ‘bert.encoder.layer.5.output.dense.weight’, ‘bert.encoder.layer.5.output.dense.bias’, ‘bert.encoder.layer.5.output.LayerNorm.weight’, ‘bert.encoder.layer.5.output.LayerNorm.bias’, ‘bert.encoder.layer.6.attention.self.query.weight’, ‘bert.encoder.layer.6.attention.self.query.bias’, ‘bert.encoder.layer.6.attention.self.key.weight’, ‘bert.encoder.layer.6.attention.self.key.bias’, ‘bert.encoder.layer.6.attention.self.value.weight’, ‘bert.encoder.layer.6.attention.self.value.bias’, ‘bert.encoder.layer.6.attention.output.dense.weight’, ‘bert.encoder.layer.6.attention.output.dense.bias’, ‘bert.encoder.layer.6.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.6.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.6.intermediate.dense.weight’, ‘bert.encoder.layer.6.intermediate.dense.bias’, ‘bert.encoder.layer.6.output.dense.weight’, ‘bert.encoder.layer.6.output.dense.bias’, ‘bert.encoder.layer.6.output.LayerNorm.weight’, ‘bert.encoder.layer.6.output.LayerNorm.bias’, ‘bert.encoder.layer.7.attention.self.query.weight’, ‘bert.encoder.layer.7.attention.self.query.bias’, ‘bert.encoder.layer.7.attention.self.key.weight’, ‘bert.encoder.layer.7.attention.self.key.bias’, ‘bert.encoder.layer.7.attention.self.value.weight’, ‘bert.encoder.layer.7.attention.self.value.bias’, ‘bert.encoder.layer.7.attention.output.dense.weight’, ‘bert.encoder.layer.7.attention.output.dense.bias’, ‘bert.encoder.layer.7.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.7.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.7.intermediate.dense.weight’, ‘bert.encoder.layer.7.intermediate.dense.bias’, ‘bert.encoder.layer.7.output.dense.weight’, ‘bert.encoder.layer.7.output.dense.bias’, ‘bert.encoder.layer.7.output.LayerNorm.weight’, ‘bert.encoder.layer.7.output.LayerNorm.bias’, ‘bert.encoder.layer.8.attention.self.query.weight’, ‘bert.encoder.layer.8.attention.self.query.bias’, ‘bert.encoder.layer.8.attention.self.key.weight’, ‘bert.encoder.layer.8.attention.self.key.bias’, ‘bert.encoder.layer.8.attention.self.value.weight’, ‘bert.encoder.layer.8.attention.self.value.bias’, ‘bert.encoder.layer.8.attention.output.dense.weight’, ‘bert.encoder.layer.8.attention.output.dense.bias’, 
‘bert.encoder.layer.8.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.8.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.8.intermediate.dense.weight’, ‘bert.encoder.layer.8.intermediate.dense.bias’, ‘bert.encoder.layer.8.output.dense.weight’, ‘bert.encoder.layer.8.output.dense.bias’, ‘bert.encoder.layer.8.output.LayerNorm.weight’, ‘bert.encoder.layer.8.output.LayerNorm.bias’, ‘bert.encoder.layer.9.attention.self.query.weight’, ‘bert.encoder.layer.9.attention.self.query.bias’, ‘bert.encoder.layer.9.attention.self.key.weight’, ‘bert.encoder.layer.9.attention.self.key.bias’, ‘bert.encoder.layer.9.attention.self.value.weight’, ‘bert.encoder.layer.9.attention.self.value.bias’, ‘bert.encoder.layer.9.attention.output.dense.weight’, ‘bert.encoder.layer.9.attention.output.dense.bias’, ‘bert.encoder.layer.9.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.9.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.9.intermediate.dense.weight’, ‘bert.encoder.layer.9.intermediate.dense.bias’, ‘bert.encoder.layer.9.output.dense.weight’, ‘bert.encoder.layer.9.output.dense.bias’, ‘bert.encoder.layer.9.output.LayerNorm.weight’, ‘bert.encoder.layer.9.output.LayerNorm.bias’, ‘bert.encoder.layer.10.attention.self.query.weight’, ‘bert.encoder.layer.10.attention.self.query.bias’, ‘bert.encoder.layer.10.attention.self.key.weight’, ‘bert.encoder.layer.10.attention.self.key.bias’, ‘bert.encoder.layer.10.attention.self.value.weight’, ‘bert.encoder.layer.10.attention.self.value.bias’, ‘bert.encoder.layer.10.attention.output.dense.weight’, ‘bert.encoder.layer.10.attention.output.dense.bias’, ‘bert.encoder.layer.10.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.10.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.10.intermediate.dense.weight’, ‘bert.encoder.layer.10.intermediate.dense.bias’, ‘bert.encoder.layer.10.output.dense.weight’, ‘bert.encoder.layer.10.output.dense.bias’, ‘bert.encoder.layer.10.output.LayerNorm.weight’, ‘bert.encoder.layer.10.output.LayerNorm.bias’, ‘bert.encoder.layer.11.attention.self.query.weight’, ‘bert.encoder.layer.11.attention.self.query.bias’, ‘bert.encoder.layer.11.attention.self.key.weight’, ‘bert.encoder.layer.11.attention.self.key.bias’, ‘bert.encoder.layer.11.attention.self.value.weight’, ‘bert.encoder.layer.11.attention.self.value.bias’, ‘bert.encoder.layer.11.attention.output.dense.weight’, ‘bert.encoder.layer.11.attention.output.dense.bias’, ‘bert.encoder.layer.11.attention.output.LayerNorm.weight’, ‘bert.encoder.layer.11.attention.output.LayerNorm.bias’, ‘bert.encoder.layer.11.intermediate.dense.weight’, ‘bert.encoder.layer.11.intermediate.dense.bias’, ‘bert.encoder.layer.11.output.dense.weight’, ‘bert.encoder.layer.11.output.dense.bias’, ‘bert.encoder.layer.11.output.LayerNorm.weight’, ‘bert.encoder.layer.11.output.LayerNorm.bias’, ‘bert.pooler.dense.weight’, ‘bert.pooler.dense.bias’, ‘cls.predictions.bias’, ‘cls.predictions.transform.dense.weight’, ‘cls.predictions.transform.dense.bias’, ‘cls.predictions.transform.LayerNorm.weight’, ‘cls.predictions.transform.LayerNorm.bias’, ‘cls.predictions.decoder.weight’, ‘cls.seq_relationship.weight’, ‘cls.seq_relationship.bias’])
Thanks a lot | Not sure what you mean, could you please rephrase the question?
What do you mean by
generate a transformer, tokenizer, model and optimizer | 0 |
huggingface | Beginners | Add data augmentation process during training every epoch | https://discuss.huggingface.co/t/add-data-augmentation-process-during-training-every-epoch/3274 | Hello,
I’d like to process my training dataset every epoch.
I want to add random processing as data augmentation, and I want to do it during training, not preprocessing.
I think I can do it with Trainer, DataCollator, or __getitem__ of datasets.arrow_dataset.Dataset, but where should I do it?
For the evaluation set and test set, I plan to do a preprocess using datasets.arrow_dataset.Dataset.map.
Thank you in advance. | The DataCollator can help if you have something randomized in the call that returns the batch. A getitem in your Dataset can also help, it all depends on what you are trying to do exactly.
The Trainer in itself has nothing implemented for data augmentation, so it won’t help you. | 0 |
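A rough sketch of the data-collator route: augmentation happens at batch time, so every epoch sees different random variants. The augment() function and the "text"/"label" column names are placeholders, and you would need remove_unused_columns=False in TrainingArguments so the raw text column actually reaches the collator:
import random
import torch
from dataclasses import dataclass
from transformers import PreTrainedTokenizerBase

def augment(text):
    # placeholder augmentation; replace with synonym swap, noise injection, etc.
    words = text.split()
    random.shuffle(words)
    return " ".join(words)

@dataclass
class AugmentingCollator:
    tokenizer: PreTrainedTokenizerBase

    def __call__(self, examples):
        # randomly augment half of the raw texts, then tokenize the batch
        texts = [augment(ex["text"]) if random.random() < 0.5 else ex["text"]
                 for ex in examples]
        batch = self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        batch["labels"] = torch.tensor([ex["label"] for ex in examples])
        return batch

# trainer = Trainer(..., data_collator=AugmentingCollator(tokenizer))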
huggingface | Beginners | Calculate F1 score in a NER task with BERT | https://discuss.huggingface.co/t/calculate-f1-score-in-a-ner-task-with-bert/3217 | Hi everyone,
I fine tuned a BERT model to perform a NER task using a BILUO scheme and I have to calculate F1 score.
However, in named-entity recognition, f1 score is calculated per entity, not token.
Moreover, there is the Word-Piece “problem” and the BILUO format, so I should:
aggregate the subwords in words
remove the prefixes “B-”, “I-”, “L-” from each entity
calculate the F1 score on the entity
Before I spend hours (if not days) to try to implement such code, I would like to know if an implemented solution already exists.
Thanks in advance | You should use the datasets metric seqeval that will do all of this for you. Check the new run_ner script 116 for an example. | 0 |
huggingface | Beginners | [Tensorflow Export] How to export a fine tuned GPT2 model to a tensorflow model file? | https://discuss.huggingface.co/t/tensorflow-export-how-to-export-a-fine-tuned-gpt2-model-to-a-tensorflow-model-file/3207 | Sorry if this is a silly question, but I can’t seem to find any proper solution to this.
I am using transformers==2.8.0 and have fine-tuned a gpt2 model with my own dataset. I know that during training it creates PyTorch checkpoints that can be used for text generation, but I want to save/load the model in TensorFlow.
I know that TFGPT2LMHeadModel exists and that it can be used, but I haven’t found an example online doing this.
Can someone help me please? How can I export a fine-tuned model into tensorflow, so that I can then generate text using that model?
Thanks | You can check this part of the docs 9 that shows how to reload a model trained with PyTorch into TensorFlow (and vice versa). | 0 |
huggingface | Beginners | Multilabel classification for text | https://discuss.huggingface.co/t/multilabel-classification-for-text/3068 | Hello,
Could you please point me in the right direction for doing multi-label text classification?
I need to assign multiple tags (genres) for texts describing movies/books. | Should it be done using class BertForMultipleChoice(BertPreTrainedModel)?
Found also transformers-tutorials/transformers_multi_label_classification.ipynb at master · abhimishra91/transformers-tutorials · GitHub 18 , but authors didn’t use BertForMultipleChoice there. | 0 |
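Not from the thread, but one common way to sketch multi-label classification is to keep BertForSequenceClassification and apply BCEWithLogitsLoss to its logits with a multi-hot label vector (the genre count and example labels below are made up):
import torch
from torch import nn
from transformers import BertForSequenceClassification, BertTokenizerFast

NUM_GENRES = 5   # hypothetical number of tags
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=NUM_GENRES)

texts = ["A detective hunts a serial killer in 1920s Berlin."]
labels = torch.tensor([[1.0, 0.0, 1.0, 0.0, 0.0]])   # multi-hot genre vector

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
logits = model(**inputs).logits                       # shape (batch, NUM_GENRES)

loss = nn.BCEWithLogitsLoss()(logits, labels)         # one sigmoid per tag instead of a softmax
loss.backward()
# at inference time: predicted_tags = torch.sigmoid(logits) > 0.5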
huggingface | Beginners | Loading a model from local with best checkpoint | https://discuss.huggingface.co/t/loading-a-model-from-local-with-best-checkpoint/1707 | Hi all,
I have trained a model and saved it, tokenizer as well. During the training I set the load_best_checkpoint_at_end to True and can see the test results, which are good
Now I have another file where I load the model and observe results on test data set. I want to be able to do this without training over and over again. But the test results in the second file where I load the model are worse than the ones right after training.
Is there a way to load the model with best validation checkpoint ?
This is how I save:
tokenizer.save_pretrained(model_directory)
trainer.save_model()
and this is how i load:
tokenizer = T5Tokenizer.from_pretrained(model_directory)
model = T5ForConditionalGeneration.from_pretrained(model_directory, return_dict=False) | I don’t understand the question. With load_best_model_at_end the model loaded at the end of training is the one that had the best performance on your validation set. So when you save that model, you have the best model on this validation set.
If it’s crap on another set, it means your validation set was not representative of the performance you wanted and there is nothing we can do on Trainer to fix that. | 1 |
huggingface | Beginners | Data shape needed for training TransformerXL from scratch | https://discuss.huggingface.co/t/data-shape-needed-for-training-transformerxl-from-scratch/3079 | Hello everyone,
I’m having trouble getting my data into the proper shape to train a TransformerXL model from scratch. I have a custom Pytorch Dataset that returns a dict from __getitem__ with the keys input_ids, and labels both assigned the same 1-D Tensor of ids for a particular sequence. When I pass this Dataset to the Trainer object with the default_collator I get a grad can be implicitly created only for scalar outputs error. What am I missing here? Am I missing a needed field for the TransfoXLLMHeadModel's forward method? I’ve tried just about everything and cannot figure it out. | Hi @jodiak
Would be hard to answer this without looking at the code, could you post a small code snippet ?
Also TransformerXL is a language model so the required inputs are input_ids and labels, which are of shape [batch_size, seq_len]. | 0 |
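For what it’s worth, one likely cause of the “grad can be implicitly created only for scalar outputs” error is that TransfoXLLMHeadModel (at least in some transformers versions) returns per-token losses rather than a single scalar. A hedged sketch of a Dataset plus a Trainer subclass that reduces the loss (the sequences variable is a placeholder for your pre-tokenized id lists):
import torch
from torch.utils.data import Dataset
from transformers import Trainer

class SequenceDataset(Dataset):
    def __init__(self, sequences):           # sequences: list of equal-length id lists
        self.sequences = sequences

    def __len__(self):
        return len(self.sequences)

    def __getitem__(self, idx):
        ids = torch.tensor(self.sequences[idx], dtype=torch.long)
        return {"input_ids": ids, "labels": ids}

class TransfoXLTrainer(Trainer):
    def compute_loss(self, model, inputs):
        outputs = model(**inputs)
        # reduce the per-token losses to the scalar the Trainer expects
        losses = outputs.losses if hasattr(outputs, "losses") else outputs[0]
        return losses.mean()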
huggingface | Beginners | What happens in the MT5 documentation example? | https://discuss.huggingface.co/t/what-happens-in-the-mt5-documentation-example/3117 | Hi,
I’m trying to understand the provided example to the MT5 model 11 but have some difficulties.
Here is the example:
from transformers import MT5Model, T5Tokenizer
model = MT5Model.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], tgt_texts=[summary], return_tensors="pt")
outputs = model(input_ids=batch.input_ids, decoder_input_ids=batch.labels)
hidden_states = outputs.last_hidden_state
So I understand that tokenizer.prepare_seq2seq_batch is to encode the input to provide to the model. It is a BatchEncoding containting the input_ids, labels and attention_mask.
However, I don’t understand what follows, what happens in : model(input_ids=batch.input_ids, decoder_input_ids=batch.labels) ? This does not train or fine tune the model but what does it do ?
Why do we provide it a source and target then ? What if we wanted the model to generate the target (summary) ?
Thanks ! | The example is just a general example of how to do a forward pass through the model, just like you can do in any model. In practice, you’d see something like this:
github.com/huggingface/transformers/blob/4f7022d68d4bae4b5e6a748b7a7323515c6fdcd3/examples/seq2seq/seq2seq_trainer.py#L162-L176
def _compute_loss(self, model, inputs, labels):
    if self.args.label_smoothing == 0:
        if self.data_args is not None and self.data_args.ignore_pad_token_for_loss:
            # force training to ignore pad token
            logits = model(**inputs, use_cache=False)[0]
            loss = self.loss_fn(logits.view(-1, logits.shape[-1]), labels.view(-1))
        else:
            # compute usual loss via models
            loss, logits = model(**inputs, labels=labels, use_cache=False)[:2]
    else:
        # compute label smoothed loss
        logits = model(**inputs, use_cache=False)[0]
        lprobs = torch.nn.functional.log_softmax(logits, dim=-1)
        loss, _ = self.loss_fn(lprobs, labels, self.args.label_smoothing, ignore_index=self.config.pad_token_id)
    return loss, logits | 0 |
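To answer the last part of the question: generating the summary (rather than teacher-forcing it with decoder_input_ids) needs the LM-head variant of the model and its generate() method. A small sketch; note that google/mt5-small is only pretrained, so the output will not be a usable summary until the model is fine-tuned:
from transformers import MT5ForConditionalGeneration, T5Tokenizer

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")

article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
inputs = tokenizer(article, return_tensors="pt")

# autoregressive decoding instead of a single teacher-forced forward pass
summary_ids = model.generate(inputs.input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))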
huggingface | Beginners | Can LayoutLM be used for images? | https://discuss.huggingface.co/t/can-layoutlm-be-used-for-images/1723 | Hi,
I am very new to transformers and found out about it when looking for a LayoutLM implementation.
Now from my understanding, LayoutLM can be used to extract information from a document based on the layout it guessed.
When browsing the documentation, I could only see examples using plain text and I don’t know where to begin to put an image instead.
If it would be possible to help a newbie like me, showing how to pass it an image and how to interpret the results, you would really make me an happy man!!
I really hope someone can help me.
Have a great day | Hi eveningkid,
transformer models are designed for text.
It might be possible to force the model to accept a numeric representation of an image (after all, it’s all ones and noughts), but it would be unlikely to do anything useful. | 0 |
huggingface | Beginners | Entity Relationship Modeling | https://discuss.huggingface.co/t/entity-relationship-modeling/3072 | Hi all,
I am new to Hugging Face and am looking to solve a problem of extracting entity relationships as a diagram/graph from company documents. For instance, Bob reports to Sally.
My first guess would be to use NER to label relationships, but is that the right track? All thoughts welcome.
Many thanks in advance,
Ari | I’m guessing you would need to combine NER with aspect analysis to understand reports to points to Sally and topic modelling/classification to understand how they go together. But I am not sure how you would go about combining the three. I believe a lot of the BERT question-answering models are able to understand and answer about these relationships, so it is happening somewhere inside those models.I guess you would need to look deeper into how those work.
Thanks & Regards,
Daryl | 0 |
huggingface | Beginners | How to change the linear classifier? | https://discuss.huggingface.co/t/how-to-change-the-linear-classifier/3101 | hi, i’m using HuggingFace for multi-label classification. i am curious if there is a way to change/customize the classifier head on top? and second question is, the default type of classifier is linear. i heard of classifiers like SVM, decision tree etc. can anyone explain their connection? between linear and those ones? | You can extend a pretrained model with your own layers as much as you want. Something like this can work:
import torch.nn as nn
from transformers import BertModel, BertPreTrainedModel

class BertCustomClassification(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)
        # Add classifier layers below
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.preclassifier = nn.Linear(config.hidden_size, config.hidden_size)
        self.act = nn.GELU()
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        # don't forget to init the weights for the new layers
        self.init_weights()
You should then also change the forward pass, of course.
Your question about other ML architectures like SVMs and decision trees is too broad for this forum and fall outside of the scope of HuggingFace Transformers. You can ask such question on a website like Stack Overflow (but search first because this question has been asked a billion times). | 0 |
huggingface | Beginners | How does trainer handle lists with None items? | https://discuss.huggingface.co/t/how-does-trainer-handle-lists-with-none-items/3107 | I am working through SquAD 2.0 QA with Bert and I am trying to figure out how to deal with data that contains None in it with PyTorch.
For QA tasks, for validation/test data only, we have data contained under the key: 'offset_mapping' and some of its data looks like:
# [[None, None, None, None, None, None, None, None, None, (0, 3), (4, 10), (10, 11), (12, 13), (13, 19), (19, 20), (21, 23), (23, 25), (25, 28), (28, 30), (30, 3]]
In this guide: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=s3cuvZsNleG2&line=4&uniqifier=1, data that looks like above is successfully fed into trainer.predict() as well as into torch data loaders. However, torch data loaders will not accept tensors with None in them. How is huggingface handling and getting around these None values?
I am having a hard time figuring that out from the source code here: transformers.trainer — transformers 4.1.1 documentation | The Trainer automatically ignores columns that do not match model arguments. In this case, “offset_mapping” is not an argument of the QA model, so it’s ignored inside the Trainer. That’s why it’s not a problem if it has None items. | 0 |
huggingface | Beginners | way to make inference Zero Shot pipeline faster? | https://discuss.huggingface.co/t/way-to-make-inference-zero-shot-pipeline-faster/1384 | Hi
Can you guys give me tips how to make Zero Shot pipeline inference faster?
My current approach right now is to reduce the model size/parameters
(trying to train a “base model” instead of a “large model”).
Is there another approach? | There’s some discussion in this topic 56 that you could check out.
Here are a few things you can do:
Try out one of the community-uploaded distilled models 56 on the hub (thx @valhalla) . I’ve found them to get pretty similar performance on zero shot classification and some of them are much smaller and faster. I’d start with valhalla/distilbart-mnli-12-3 (models can be specified by passing e.g. pipeline("zero-shot-classification", model="valhalla/distilbart-mnli-12-3") when you construct a model.
If you’re on GPU, make sure you’re passing device=0 to the pipeline factory in to utilize cuda.
If you’re on CPU, try running the pipeline with ONNX Runtime. You should get a boost. Here’s a project 81 (thx again @valhalla) that lets you use HF pipelines with ORT automatically.
If you have a lot of candidate labels, try to get clever about passing just the most likely ones to the pipeline. Passing a large # of labels for each sentence is really going to slow you down since each sentence/label pair has to be passed to the model together. If you have 100 possible labels but you can use some kind of heuristic or simpler model to narrow it down, that will help a lot.
Use mixed precision. This is pretty easy 70 if using PyTorch 1.6. | 0 |
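Putting the first two tips together, a minimal sketch (the example text and candidate labels are arbitrary):
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="valhalla/distilbart-mnli-12-3",
                      device=0)   # device=0 uses the first GPU; omit (or -1) for CPU

result = classifier("I love this new phone, the camera is amazing",
                    candidate_labels=["electronics", "politics", "sports"])
print(result["labels"][0], result["scores"][0])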
huggingface | Beginners | Tapas online API | https://discuss.huggingface.co/t/tapas-online-api/3031 | hi, I wanted to play with the TAPAS online API at https://huggingface.co/google/tapas-base-finetuned-wtq but it seems it does not work if any element of the table or the question is changed (I get the message “Model google/tapas-base-finetuned-wtq is currently loading” forever)… Any help on this please? Best. L | cc @lysandre | 0 |
huggingface | Beginners | Remove dataset downloaded by dataset library from local computer | https://discuss.huggingface.co/t/remove-dataset-downloaded-by-dataset-library-from-local-computer/3051 | Hi ,
Currently I have downloaded the “wmt19” dataset with the Python datasets library in my WSL2 Ubuntu distro, but now I want to delete it and cannot find where it is located.
Could somebody tell me where the dataset is located, or is there a function to remove a dataset locally?
Also, I accidentally deleted cache data in .cache/huggingface. I’m not sure whether there will be any consequence here? | I just found that it was a problem with WSL being unable to reclaim space after deleting files. | 0 |
huggingface | Beginners | Resources for model design (number of layers, attention heads, etc) | https://discuss.huggingface.co/t/resources-for-model-design-number-of-layers-attention-heads-etc/3004 | I’ve been using the transformers libraries for the last several months in creative generative text projects. I’m just a hobbyist – I mostly understand how everything works abstractly, but definitely don’t have a firm grasp on the underlying math. I started out by fine-tuning GPT-2, but lately I’ve been playing around with training models from scratch (e.g. arpabet, where “phonetic english” is represented as “fA0nE1tI0k I1GglI0S”), and I’m looking for resources/tips/pointers for how the various parameters will affect the models and their output.
I started my from-scratch model training by loading a GPT2LMHeadModel with the pre-trained gpt2 config, and that worked great. But I figured I could get better results if I went bigger. I tried training again using the gpt2-xl config, but the memory requirements were too high. So now I’m dialing in the parameters based on gpt2-large.
I’ve noticed that with all other values the same, I can fit the model into memory with the default 36 layers, 20 heads, but also something like 18 layers and 64 heads. I’d experiment with these (and other) values, but since training for 3 epochs through my data set would take weeks at a time, I was wondering if anyone could point me in the right direction as to what the tradeoffs are between number of layers, hidden, attention heads, etc.
Thanks! | This may not be exactly what you’re looking for, but this paper explores the performance impacts of different configurations of decoder, attention heads, hidden layers and intermediate layers.
https://arxiv.org/abs/2010.10499 | 0 |
huggingface | Beginners | Strange output using BioBERT for imputing MASK tokens | https://discuss.huggingface.co/t/strange-output-using-biobert-for-imputing-mask-tokens/3014 | I’m trying to use BioBERT (downloaded from the HuggingFace models repository at dmis-lab/biobert-v1.1) to fill in MASK tokens in text, and I’m getting some unexpected behavior with the suggested tokens.
I pasted a screenshot below comparing bert-base-uncased (which behaves as expected and has sensible most-likely tokens) with BioBERT:
[Screenshot: top-10 predicted tokens for the [MASK] position, bert-base-uncased vs. dmis-lab/biobert-v1.1]
Here’s the code to reproduce this:
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch
text = 'heart disease is [MASK] leading cause of death in the united states.'
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForMaskedLM.from_pretrained('bert-base-uncased')
tokenized = tokenizer(text, return_tensors='pt')
idx = tokenizer.convert_ids_to_tokens(tokenized.input_ids[0]).index(tokenizer.mask_token)
output = model(**tokenized, return_dict=True)
print(tokenizer.convert_ids_to_tokens(torch.topk(output.logits[0, idx, :], 10).indices))
tokenizer = AutoTokenizer.from_pretrained('dmis-lab/biobert-v1.1')
model = AutoModelForMaskedLM.from_pretrained('dmis-lab/biobert-v1.1')
tokenized = tokenizer(text, return_tensors='pt')
idx = tokenizer.convert_ids_to_tokens(tokenized.input_ids[0]).index(tokenizer.mask_token)
output = model(**tokenized, return_dict=True)
print(tokenizer.convert_ids_to_tokens(torch.topk(output.logits[0, idx, :], 10).indices))
And here’s my output from running transformers-cli env:
- `transformers` version: 4.1.1
- Platform: macOS-10.11.6-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
I also asked about similar issues with PubMedBERT as a Github issue 2 a while back, but haven’t gotten a response.
Do the pretrained weights for these models not contain the components necessary for doing masked language modeling/imputing MASK tokens? Is there any way to fix this issue? | Hi,
I am not an expert, but that is what it looks like to me.
Masked Language Modelling is usually used during pre-training and is often not needed during fine-tuning, so I guess the DMIS team didn’t think the MLM parameters would be required.
I notice that the DMIS team have provided 5 models. Do any of the other models have MLM parameters?
It should certainly be possible to copy the DIMS weights into your own model, where your own model does include an MLM head. I expect you would then need to train your model before it would give sensible answers, unless you could find a suitable MLM head to copy (probably not…). | 0 |
huggingface | Beginners | Transformers, am i only using a Encoder for Binary Classification? | https://discuss.huggingface.co/t/transformers-am-i-only-using-a-encoder-for-binary-classification/3045 | Hi guys, I’ve got a basic beginner’s question. If I’m doing something like binary classification (sentiment analysis) of text and I’m using Transformers (like Bert for example), am I using both the encoder and the decoder part of the Transformer network? If I understood the basics right, I’m just using an encoder for sequence-to-binary, and sequence-to-sequence would use both encoder and decoder?
Thanks in advance. | Hi @unknownTransformer,
BERT uses only the Encoder.
See this page of the docs: https://huggingface.co/transformers/model_summary.html 25
See also the Devlin paper: https://arxiv.org/abs/1810.04805 1
Also try Jay Alammar’s blogs, for example this one: alammar.github.io/illustrated-bert/ 4
When you do sentiment analysis, you are using the basic BERT to code your text as numbers, and then you tune a final layer to your task. The Huggingface models such as BertForSequenceClassification already include a “final” layer for you, which is randomly initialized. See this page: https://huggingface.co/transformers/model_doc/bert.html 15
You will probably want to Freeze most of the BERT layers while you fine-tune the last layer, at least initially. See this post and the reply by sgugger: How to freeze some layers of BertModel 9
Good luck with it all. | 0 |
huggingface | Beginners | Evaluating your model on more than one dataset | https://discuss.huggingface.co/t/evaluating-your-model-on-more-than-one-dataset/1544 | Hi,
Transformers’ Trainer and TrainingArguments classes allow only one dataset to be used for evaluation. Is there a simple way of adding another one? So that, after an epoch of training my model, I could evaluate it on both the training and development datasets and get metrics for both of them as one output? I know I could alter training_args.py or trainer.py, but I am pretty sure I would only mess things up… | I think the easiest way to do this is to use the new system of TrainerCallback and write a callback that performs a new evaluation on your other datasets during the on_evaluate event. | 0 |
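A hedged sketch of such a callback (extra_datasets is a dict of name to tokenized dataset that you provide; the guard is needed because trainer.evaluate() fires on_evaluate again):
from transformers import TrainerCallback

class ExtraEvalCallback(TrainerCallback):
    def __init__(self, trainer, extra_datasets):
        self.trainer = trainer
        self.extra_datasets = extra_datasets
        self._running = False

    def on_evaluate(self, args, state, control, **kwargs):
        if self._running:            # avoid infinite recursion
            return
        self._running = True
        for name, dataset in self.extra_datasets.items():
            metrics = self.trainer.evaluate(eval_dataset=dataset)
            print(name, metrics)
        self._running = False

# trainer.add_callback(ExtraEvalCallback(trainer, {"train": train_ds, "dev": dev_ds}))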
huggingface | Beginners | SSL3 errors from probable network issue | https://discuss.huggingface.co/t/ssl3-errors-from-probable-network-issue/3016 | I finally got the transformer library installed with CUDA support under WSL 2 Ubuntu. Yay.
- `transformers` version: 4.2.0dev0
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
However, something in the stack is giving me an unreliable network connection, which the model download reacts badly to.
This is what I’m trying to do (standard install test - I added the resume_download optional argument after reading a bug fix report. It doesn’t help). The question is, is there either a way to MANUALLY download models from HF into its cache, or is there a setting that does re-tries from the current file position in such cases?
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis',resume_download=True)('we love you'))"
A few megabytes into the download (it varies randomly, but never more than 60), I get the following exception:
("read error: Error([('SSL routines', 'ssl3_get_record', 'decryption failed or bad record mac')])",)
`("read error: Error([('SSL routines', 'ssl3_get_record', 'decryption failed or bad record mac')])",) | 6.65M/268M [00:02<01:12, 3.60MB/s]`
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py", line 313, in recv_into
return self.connection.recv_into(*args, **kwargs)
File "/usr/lib/python3/dist-packages/OpenSSL/SSL.py", line 1822, in recv_into
self._raise_ssl_error(self._ssl, result)
File "/usr/lib/python3/dist-packages/OpenSSL/SSL.py", line 1647, in _raise_ssl_error
_raise_current_error()
File "/usr/lib/python3/dist-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('SSL routines', 'ssl3_get_record', 'decryption failed or bad record mac')]
The exception handler for that apparently is subject to the same exception:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mjw/transformers/src/transformers/modeling_utils.py", line 1003, in from_pretrained
resolved_archive_file = cached_path(
File "/home/mjw/transformers/src/transformers/file_utils.py", line 1077, in cached_path
output_path = get_from_cache(
File "/home/mjw/transformers/src/transformers/file_utils.py", line 1303, in get_from_cache
http_get(url_to_download, temp_file, proxies=proxies, resume_size=resume_size, headers=headers)
File "/home/mjw/transformers/src/transformers/file_utils.py", line 1166, in http_get
for chunk in r.iter_content(chunk_size=1024):
File "/usr/lib/python3/dist-packages/requests/models.py", line 750, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/usr/lib/python3/dist-packages/urllib3/response.py", line 564, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/usr/lib/python3/dist-packages/urllib3/response.py", line 507, in read
data = self._fp.read(amt) if not fp_closed else b""
File "/usr/lib/python3.8/http/client.py", line 458, in read
n = self.readinto(b)
File "/usr/lib/python3.8/http/client.py", line 502, in readinto
n = self.fp.readinto(b)
File "/usr/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py", line 332, in recv_into
raise ssl.SSLError("read error: %r" % e)
ssl.SSLError: ("read error: Error([('SSL routines', 'ssl3_get_record', 'decryption failed or bad record mac')])",) | That extra resume-download argument turns out to be invalid, once the download succeeds randomly once. So ignore that | 0 |
huggingface | Beginners | Abnormal learn rate curve | https://discuss.huggingface.co/t/abnormal-learn-rate-curve/3023 | I am working on a school project to classify news headlines; it’s a binary classification task. I scraped the news headlines and used sklearn’s train_test_split to split them, then used ktrain with DistilBERT to classify them. There is a learning-rate finder function; I ran it and got an abnormal learning-rate curve, as shown in the image below:
[Screenshot: learning-rate finder plot of loss vs. learning rate]
while a normal learning-rate curve should be roughly U-shaped, falling gradually from a higher loss and then rising again.
What does that abnormal learn rate curve imply? Is it to do with overfitting or anything? I am really new to the transformer thing and there are not many resources on the internet so I try to ask here. Thanks. | Hi, not sure if I understand correctly
Which learning-rate finder function did you use? (Just curious, as I am familiar with the idea coming from the fast.ai team.)
The spike at 10^-1 looks plausible to me since that is much too big a learning rate.
Did you initialize your model from a checkpoint? If yes, maybe the model is already good and has a small loss at the beginning, so a small loss at LR=10^-7 is also plausible. | 0 |
huggingface | Beginners | More GPUs = lower performance? | https://discuss.huggingface.co/t/more-gpus-lower-performance/2995 | I’ve been prototyping my model training code on my local machine (2x RTX 3090 GPUs), and I’m now trying to migrate it over for a full training run on the university HPC cluster. What’s confusing me is that training on the cluster node (which has 4x RTX 8000s) is reporting completion times that are a lot longer than what I was seeing locally (same dataset and batch size).
On my local machine, one epoch is projected to take ~84 hours:
49/586086 [00:28<84:02:02, 1.94it/s]
On the HPC, it’s predicting 455 hours(!):
76/293043 [07:13<455:24:38, 5.60s/it]
(note the different units: it/s vs s/it)
I’ve checked with nvidia-smi and all four GPUs are at 100%. The dataset is being stored on a local disk in both cases. So I’m running out of ideas for what could be happening… | I’ve looked into this more and I think it’s a performance bug related to excessive GPU-GPU communication: https://github.com/huggingface/transformers/issues/9371 12 | 0 |
huggingface | Beginners | Is beam search always better than greedy search? | https://discuss.huggingface.co/t/is-beam-search-always-better-than-greedy-search/2943 | How to generate text 7 states:
Beam search will always find an output sequence with higher probability than greedy search
It’s not clear to me why that is the case. Consider this example, comparing greedy search with beam search with beam width 2:
[Diagram: probability tree comparing the greedy search path with the two paths kept by beam search (beam width 2)]
By the 3rd step, beam search with beam width 2 has found two sequences BFI and BGK, each with probability .49*.5*.51 = 0.12495, but greedy search found ADH, with probability .51*.4*.99 = 0.20196
Exploring more of the tree may cause beam search to get stuck on paths that seem more promising at first, but end up with lower probability than the greedy path. Is this right, or have I misunderstood how beam search works?
Thanks!
Robby | I think this is an interesting adversarial case for beam-search
So I agree with you that in this case Greedy found the more probable path. | 0 |
huggingface | Beginners | Weird errors run_squad.py | https://discuss.huggingface.co/t/weird-errors-run-squad-py/2973 | Hi all. I’m trying to get this script to run, but I’m getting errors. Any ideas? Here’s my output:
chmod +x run_squad.py
❯ ./run_squad.py
./run_squad.py: 16: Finetuning the library models for question-answering on SQuAD (DistilBERT, Bert, XLM, XLNet).: not found
import-im6.q16: attempt to perform an operation not allowed by the security policy PS' @ error/constitute.c/IsCoderAuthorized/408. import-im6.q16: attempt to perform an operation not allowed by the security policy PS’ @ error/constitute.c/IsCoderAuthorized/408.
import-im6.q16: attempt to perform an operation not allowed by the security policy PS' @ error/constitute.c/IsCoderAuthorized/408. import-im6.q16: attempt to perform an operation not allowed by the security policy PS’ @ error/constitute.c/IsCoderAuthorized/408.
import-im6.q16: attempt to perform an operation not allowed by the security policy PS' @ error/constitute.c/IsCoderAuthorized/408. import-im6.q16: attempt to perform an operation not allowed by the security policy PS’ @ error/constitute.c/IsCoderAuthorized/408.
import-im6.q16: unable to grab mouse ': No such file or directory @ error/xwindow.c/XSelectWindow/9187. import-im6.q16: unable to grab mouse ‘: No such file or directory @ error/xwindow.c/XSelectWindow/9187.
from: can’t read /var/mail/torch.utils.data
from: can’t read /var/mail/torch.utils.data.distributed
from: can’t read /var/mail/tqdm
import-im6.q16: unable to grab mouse `’: No such file or directory @ error/xwindow.c/XSelectWindow/9187.
./run_squad.py: 33: Syntax error: “(” unexpected
THanks, and Happy Holidays! | Never mind. I found the README | 0 |
huggingface | Beginners | Add_faiss_index usage example | https://discuss.huggingface.co/t/add-faiss-index-usage-example/2925 | Hi, I am trying to know how to use Rag/DPR, but first I want to get familiar with faiss usage.
I checked the official example in
huggingface.co
Main classes — datasets 1.1.3 documentation 6
But it seems the snippet code is not self-executable.
So I did some modification, aiming to retrieve similar examples in the sst2 dataset with query ‘I am happy’.
import datasets
from transformers import pipeline
embed = pipeline('sentiment-analysis', model="nlptown/bert-base-multilingual-uncased-sentiment")
ds = datasets.load_dataset('glue', 'sst2', split='test')
ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['sentence'])})
ds_with_embeddings.add_faiss_index(column='embeddings')
# query
scores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('I am happy.'), k=10)
# save index
ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')
ds = datasets.load_dataset('glue', 'sst2', split='test')
# load index
ds.load_faiss_index('embeddings', 'my_index.faiss')
# query
scores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('I am happy.'), k=10)
My problem is at ds_with_embeddings.add_faiss_index(column='embeddings')
I got error with there with " TypeError: float() argument must be a string or a number, not ‘dict’ "
If I changed it to
ds_with_embeddings_score = ds_with_embeddings.map(lambda example: {'embeddings_score': example['embeddings'][0]['score']})
ds_with_embeddings_score.add_faiss_index(column='embeddings_score')
Then I got the error "TypeError: len() of unsized object".
Any advice? Thanks. | I have little experience with pipelines, but I think the issue is that embed(example['sentence']) should return a vector representation for example['sentence']. However, calling a text classification pipeline returns a dict with labels and scores. Instead, you need to run a feature extraction pipeline which should return vectors. (You may need to unpack it though, as the return type is a nested list, presumably for batched processing.)
So (untested) you can try something like:
embed = pipeline('feature-extraction', model="nlptown/bert-base-multilingual-uncased-sentiment")
...
ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['sentence'])[0]})
You may need to test a bit with the [0]. Not sure whether it is necessary. | 0 |
huggingface | Beginners | Authorization header in inference API | https://discuss.huggingface.co/t/authorization-header-in-inference-api/2931 | Hi,
I am just wondering what is the purpose of the “Authorization” http header in the inference API. If I remove this header, the request is still working. Example:
curl 'https://api-inference.huggingface.co/models/bert-base-uncased' \
-H 'Connection: keep-alive' \
-H 'Content-Type: text/plain;charset=UTF-8' \
-H 'Accept: */*' \
-H 'Accept-Language: en,en-US;q=0.9,id;q=0.8,de;q=0.7,ms;q=0.6' \
--data-binary '{"inputs":"If I am hungry, I will make [MASK]."}' \
--compressed | @Narsil and @jeffboudier can chime in, but we offer a certain number of non-authed requests, with IP-based rate limiting.
For production workloads you’ll need a token. | 0 |
huggingface | Beginners | Pegasus finetuning, should we always start with pegasus-large? | https://discuss.huggingface.co/t/pegasus-finetuning-should-we-always-start-with-pegasus-large/2909 | I’m fine-tuning pegasus on my own data, which is about 15,000 examples.
I am finding, when fine-tuning Pegasus, using pegasus-large , that the RAM requirements for even just a batch size of 1 are so extreme, that a Nvidia card with 16GB of memory is required… just to run the batch size of 1 ! So at this point I am thinking that maybe my training will run better on the CPU, using a machine with a huge amount of ram… like 512GB of ram… as this seems to allow a much bigger batch size, like up to 64 or 128 .
My guess is that the RAM requirements are so extreme because I am using pegasus-large. I’m doing this based on my understanding of this page:
: https://huggingface.co/transformers/model_doc/pegasus.html#checkpoints 2
All the checkpoints 9 are fine-tuned for summarization, besides pegasus-large, whence the other checkpoints are fine-tuned
My understanding from this is that, if we, as the newbie user, have some data we want to use with Pegasus, we should do this:
Start with pegasus-large: https://huggingface.co/google/pegasus-large 4
Fine tune it on our own data
Use the pytorch_model.bin output from this fine tuning process to run inference on our own data.
Am I getting something wrong here? Given that I have 15,000 examples, have I made the correct determination that I should fine-tune pegasus-large, and that this will lead to the best results, even though the memory requirements are huge?
I looked for distilled model, here: https://huggingface.co/models?search=pegasus 9
… But my understanding (possibly wrong?) is that these distilled models are ALREADY fine-tuned, so they would not be appropriate to use, given that I have a lot of my OWN data to fine-tune with.
Thanks! | To answer your second question:
the student models are smaller versions of the fine-tuned Pegasus models, created by choosing alternating layers from the decoder. This method is described in Pre-trained Summarization Distillation. To use these models you should fine-tune them. | 0 |
huggingface | Beginners | Using MarianModel’s in pytorch is too slow to do back translation (not parallelised correctly) | https://discuss.huggingface.co/t/using-marianmodels-in-pytorch-is-too-slow-to-do-back-translation-not-parallelised-correctly/2887 | Hi.
I’m trying to use MarianModels for back translation as data augmentation. However, it’s too slow even when using multiple GPUs, and I also cannot use a batch size larger than 16 (with max length set to 300). Indeed, it takes one day to complete half an epoch.
following is the code I’m using
target_langs = ['fr,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,pt,gl,lad,an,mwl,it,co,nap,scn,vec,sc,ro,la']
def translate(texts, model, tokenizer, language="fr"):
with torch.no_grad():
template = lambda text: f"{text}" if language == "en" else f">>{language}<< {text}"
src_texts = [template(text) for text in texts]
encoded = tokenizer.prepare_seq2seq_batch(src_texts,
truncation=True,
max_length=300, return_tensors="pt").to(device)
translated = model.module.generate(**encoded).to(device)
translated_texts = tokenizer.batch_decode(translated, skip_special_tokens=True)
return translated_texts
def back_translate(texts, source_lang="en", target_lang="fr"):
# Translate from source to target language
fr_texts = translate(texts, target_model, target_tokenizer,
language=target_lang)
# Translate from target language back to source language
back_translated_texts = translate(fr_texts, en_model, en_tokenizer,
language=source_lang)
return back_translated_texts
target_model_name = 'Helsinki-NLP/opus-mt-en-de'
target_tokenizer = MarianTokenizer.from_pretrained(target_model_name)
target_model = MarianMTModel.from_pretrained(target_model_name)
en_model_name = 'Helsinki-NLP/opus-mt-de-en'
en_tokenizer = MarianTokenizer.from_pretrained(en_model_name)
en_model = MarianMTModel.from_pretrained(en_model_name)
target_model = nn.DataParallel(target_model)
target_model = target_model.to(device) # same performance if I add .half()
target_model.eval()
en_model = nn.DataParallel(en_model)
en_model = en_model.to(device)# same performance if I add .half()
en_model.eval()
## x1 and x2 are batches of strings.
bk_x1 = back_translate(x1, source_lang="en", target_lang=np.random.choice(target_langs))
bk_x2 = back_translate(x2, source_lang="en", target_lang=np.random.choice(target_langs))
Here are the GPU stats: utilization is low due to the small batch size of 16, but if I increase the batch size I get a CUDA out-of-memory error. Also, I can see that only one GPU is used for processing, so it might be that the Marian model is not being parallelized correctly. If so, what would be the solution?
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:1B:00.0 Off | N/A |
| 42% 78C P2 199W / 250W | 9777MiB / 11178MiB | 91% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:1C:00.0 Off | N/A |
| 29% 36C P8 10W / 250W | 2MiB / 11178MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 108... Off | 00000000:1D:00.0 Off | N/A |
| 31% 36C P8 9W / 250W | 2MiB / 11178MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX 108... Off | 00000000:1E:00.0 Off | N/A |
| 35% 41C P8 9W / 250W | 2MiB / 11178MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 GeForce GTX 108... Off | 00000000:3D:00.0 Off | N/A |
| 29% 34C P8 9W / 250W | 2MiB / 11178MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 GeForce GTX 108... Off | 00000000:3F:00.0 Off | N/A |
| 30% 31C P8 8W / 250W | 2MiB / 11178MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 6 GeForce GTX 108... Off | 00000000:40:00.0 Off | N/A |
| 31% 38C P8 9W / 250W | 2MiB / 11178MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 7 GeForce GTX 108... Off | 00000000:41:00.0 Off | N/A |
| 30% 37C P8 9W / 250W | 2MiB / 11178MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 58780 C python 10407MiB |
| 1 N/A N/A 58780 C python 0MiB |
| 2 N/A N/A 58780 C python 0MiB |
| 3 N/A N/A 58780 C python 0MiB |
| 4 N/A N/A 58780 C python 0MiB |
| 5 N/A N/A 58780 C python 0MiB |
| 6 N/A N/A 58780 C python 0MiB |
| 7 N/A N/A 58780 C python 0MiB |
+-----------------------------------------------------------------------------+
FYI: I’m using
pytorch 1.7.0
transformers 4.0.1
cuda 10.1 | The problem might be related to the tokenizer rather than the model. The MarianTokenizer does not have a Rust (fast) implementation, which may cause a bottleneck no matter how many GPUs you use. It might be a good idea to preprocess (tokenize) your dataset once and use the datasets library for fast on-the-fly access to that cached dataset. | 0 |
huggingface | Beginners | Trainer.hyperparameter_search doesn’t work for me | https://discuss.huggingface.co/t/trainer-hyperparameter-search-doesnt-work-for-me/2910 | Hi, as a beginner I am trying to follow a few examples. Looking at the notebook “Fine-tuning a model on a text classification task” (here - https://bit.ly/3mEBqTM 3), I tried to implement the trainer.hyperparameter_search() method with either “optuna” or “ray[tune]”.
!pip install optuna --> seems to work fine
When running the example shown I keep getting:
RuntimeError: At least one of optuna or ray should be installed. To install optuna run pip install optuna.To install ray run pip install ray[tune].
I tried installing both but neither works… any guidance would be highly appreciated…
thanks, -Ofer | There is some environment problem it seems, since transformers didn’t detect you have optuna installed. I’d try restarting your kernel since you seem to be in a notebook. | 0 |
huggingface | Beginners | How to deploy a fine tuned t5 model in production | https://discuss.huggingface.co/t/how-to-deploy-a-fine-tuned-t5-model-in-production/2914 | Hi All,
I am trying to deploy a fine-tuned T5 model in production. Deploying a PyTorch model in production is new to me. I went through the Hugging Face presentation on YouTube about how they deploy models, and some of the other blog posts.
HF mentions that they deploy the model in a Cython environment, as it gives a ~100x boost to inference. So, is it always advisable to run a model in production with Cython?
Does converting a model from PyTorch to TF help, and is it advisable or not?
What is the preferred container approach to adopt to run multiple models on a set of GPUs?
I know some of these questions would be basic, I apologize for it, but I want to make sure that I follow the correct guidelines to deploy a model in production.
Thank you
Amit | Hi @as-stevens,
I don’t know what blog post you’re referring to for using Cython to get 100x but I guess it really depends where the bottleneck is.
For T5 models (they are Seq2Seq models), I would recommend sticking to PyTorch and finding a way to optimize the hot path (the decoder path). TF could work, but transformers currently can’t use various graph optimizations in TF (we’re working on it).
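As a rough illustration only (the checkpoint name and generation settings below are placeholders, not a specific recommendation), a minimal sketch of plain PyTorch inference for a Seq2Seq model, keeping the decoder loop cheap with eval mode, no_grad and modest beam/length settings:
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = T5Tokenizer.from_pretrained("t5-base")  # placeholder: point this at your fine-tuned checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval().to(device)

texts = ["summarize: " + "some long document ..."]
with torch.no_grad():
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True).to(device)
    # modest num_beams / max_length keep the decoder hot path as short as possible
    generated = model.generate(**batch, max_length=64, num_beams=2)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))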
Or you can try to run it on our hosted inference API to alleviate the hassle of managing all the different layers: https://huggingface.co/pricing 33 (Some optimizations are only enabled for customers)
Hope that helps.
Cheers,
Nicolas | 0 |
huggingface | Beginners | What is best way to serve huggingface model with API? | https://discuss.huggingface.co/t/what-is-best-way-to-serve-huggingface-model-with-api/203 | Use TF or PyTorch?
For PyTorch, TorchServe or pipelines with something like Flask? | You have a few different options; here are some in increasing order of difficulty:
You can use the Hugging Face Inference API via Model Hub if you are just looking for a demo.
You can use a hosted model deployment platform: GCP AI predictions, SageMaker, https://modelzoo.dev/ 211. Full disclaimer, I am the developer behind Model Zoo, happy to give you some credits for experimentation.
You can roll your own model server with something like https://fastapi.tiangolo.com/ 252 and deploy it on a generic serving platform like AWS Elastic Beanstalk or Heroku. This is the most flexible option. | 0 |
huggingface | Beginners | XLNet conversion to onnx | https://discuss.huggingface.co/t/xlnet-conversion-to-onnx/2866 | I’m trying to run text classification inference on a XLNet model using onnx, but when trying to run the inference I’m getting the following error:
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:‘Add_26’ Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:475 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 6 by 48
Any clue about it?
Thanks in advance for your help! | Hi @uyjco0
Something seems to be running wrong when you’re trying to export to ONNX.
How did you extract your ONNX graph?
Are you sure about the tensors you’re feeding your session?
Without a little more information it’s hard to give you more feedback than what google will.
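For reference, a generic way to sanity-check both points above is to ask the session what inputs it expects and feed it matching numpy tensors from the tokenizer; the file name and text below are placeholders:
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
sess = ort.InferenceSession("xlnet_classifier.onnx")  # placeholder path to your exported graph

# check which inputs (and shapes) the exported graph actually expects
print([(inp.name, inp.shape) for inp in sess.get_inputs()])

enc = tokenizer("some text to classify", return_tensors="pt")
# the input names must match the ones used at export time
feed = {inp.name: enc[inp.name].cpu().numpy() for inp in sess.get_inputs()}
logits = sess.run(None, feed)[0]
print(logits)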
Cheers, | 0 |
huggingface | Beginners | PyTorch version | https://discuss.huggingface.co/t/pytorch-version/2826 | I’m having all sorts of issues training transformers on my 3090 - this card requires the cuda 11.1 which in turn requires torch 1.7.1
Is this supported? I’m using python 3.7 (I tried 3.9 but there’s no wheel for one of the dependencies for datasets and it wouldn’t build so I rolled back).
I installed pip install torch==1.7.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html
Training of RobertaForMaskedLM frequently crashes with CUDA exceptions. If this should work I’ll create specific issues. | OK, so I tracked down the crash. The problem was the position embedding: I had max_seq_length == max_position_embeddings, which results in a position index > max_position_embeddings for any sequence which is truncated.
This is because create_position_ids_from_input_ids in modeling_roberta.py (below) adds padding_idx to the cumsum; if there are no padding (masked) tokens in input_ids, the resulting position index will exceed max_position_embeddings:
mask = input_ids.ne(padding_idx).int()
incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
return incremental_indices.long() + padding_idx | 0 |
huggingface | Beginners | Applying Tapas/TableQuestionAnswering pipelines on a csv via Pandas? | https://discuss.huggingface.co/t/applying-tapas-tablequestionanswering-pipelines-on-a-csv-via-pandas/2872 | Hi guys!
Great work on Tapas and the v4.1.1 releaser!
Is there any guidance on how to apply this pipeline 6 to dataframes uploaded via pandas.read_csv?
Thanks,
Charly | Hello! Here’s how I would setup a pipeline with a pd.DataFrame
from transformers import pipeline
import pandas as pd
tqa_pipeline = pipeline("table-question-answering")
data = {
"Repository": ["Transformers", "Datasets", "Tokenizers"],
"Stars": ["36542", "4512", "3934"],
"Contributors": ["651", "77", "34"],
"Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
}
queries = "What repository has the largest number of stars?"
table = pd.DataFrame.from_dict(data)
output = tqa_pipeline(table, queries)
# {'answer': 'Transformers', 'coordinates': [(0, 0)], 'cells': ['Transformers']}
If you want to use a CSV file, you also can; here’s the previous example converted to CSV and saved in ~/pipeline.csv:
Repository,Stars,Contributors,Programming language
Transformers,36542,651,Python
Datasets,4512,77,Python
Tokenizers,3934,34,"Rust, Python and NodeJS"
Here’s how I would do (note the type conversion):
from transformers import pipeline
import pandas as pd
tqa_pipeline = pipeline("table-question-answering")
queries = "What repository has the largest number of stars?"
# Convert everything to a string, as the tokenizer can only handle strings
table = pd.read_csv("~/pipeline.csv").astype(str)
output = tqa_pipeline(table, queries)
# {'answer': 'Transformers', 'coordinates': [(0, 0)], 'cells': ['Transformers']}
Hope that helps! | 0 |
huggingface | Beginners | Summarization: Is finetune_trainer.py accepting length arguments correctly? | https://discuss.huggingface.co/t/summarization-is-finetune-trainer-py-accepting-length-arguments-correctly/2879 | Hi, thanks for this impressive library - I expect Huggingface to shortly take over the world. This is my first post.
I am using the most recent version of the library, cloned from master, as of 12-16-2020, specifically the code from here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq 4.
It looks like @stas, and @sgugger have most recently touched this code, and might be best positioned to tell me what stupid mistake I am making.
I am trying to do some summarization with finetune_trainer.py.
As a proof of concept, I first started with the xsum dataset, running this shell script:
RUN="xsum-1500-train"
python3 /workspace/rabbit-py/transformers/examples/seq2seq/finetune_trainer.py \
--learning_rate=3e-5 \
--fp16 \
--do_train --do_eval --do_predict \
--evaluation_strategy steps \
--predict_with_generate \
--n_train 1500 \
--n_val 300 \
--n_test 100 \
--num_train_epochs 1 \
--data_dir "/workspace/rabbit-py/corpii_foreign/xsum" \
--model_name_or_path "t5-small" \
--output_dir "/workspace/rabbit-py/predictions/$RUN" \
--per_device_train_batch_size 5 \
--per_device_eval_batch_size 8\
--task 'summarization' \
--overwrite_output_dir \
--run_name $RUN
"$@"
This works well, and in about two minutes (using 2x RTX 2070 Super), generates text in the test_generations.txt output file.
Here is the first line of output in the test_generations.txt output file:
the trio are up for best UK act and best album, as well as two nominations in the best song category. they have been nominated for their favourite album, Number One and Strong Again.
This is indeed a summary of the originating text, in the first line of test.source
The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!"
So far, so good.
Note that I am running only 1 epoch, and only using a small fraction of the data, because at this point I only want to do a proof of concept.
I now want to try a proof of concept on my own data.
My own training data uses shorter length text, both in input, and output.
For my training data I am trying to summarize text like:
HSH Solid Wood Bookshelf, 2 Tier Rustic Vintage Industrial Etagere Bookcase, Open Metal Farmhouse Book Shelf, Distressed Brown
…and end up with a summary like:
HSH Rustic Industrial
This example, as you can see, happens to fit the description of an “extractive” summarization, where all the text in the training target is included in the training source, but not all of my rows are like that – many of my rows might require something closer to an “abstractive” summarization. (Just FYI).
So, as a proof of concept, I now try a minimal modification of my script, just putting in my data directory, instead of xsum:
RUN="sn-vs-n-1-simple"
python3 /workspace/rabbit-py/transformers/examples/seq2seq/finetune_trainer.py \
--learning_rate=3e-5 \
--fp16 \
--do_train --do_eval --do_predict \
--evaluation_strategy steps \
--predict_with_generate \
--n_train 1500 \
--n_val 300 \
--n_test 100 \
--num_train_epochs 1 \
--data_dir "/workspace/rabbit-py/corpii/short_name_vs_name" \
--model_name_or_path "t5-small" \
--output_dir "/workspace/rabbit-py/predictions/$RUN" \
--per_device_train_batch_size 5 \
--per_device_eval_batch_size 8\
--task 'summarization' \
--overwrite_output_dir \
--run_name $RUN
"$@"
I run this, and… given this first line of test.source…
Bloggerlove Rain Jacket Women Lightweight Raincoat Waterproof Windbreaker Striped Climbing Outdoor Hooded Trench Coats S-Xxl
… the first line of test_generations.txt is:
Bloggerlove Rain Jacket Women Lightweight Raincoat Waterproof Windbreaker Striped Climbing Outdoor Hooded Trench Coats S-Xxl
… whereas the first line of test.target is:
Bloggerlove Hooded Trench Coat
… and the second line of test.source is:
Sony Portable Bluetooth Digital Turner AM/FM CD Player Mega Bass Reflex Stereo Sound System
… and the second line of test_generations.txt is:
Sony Portable Bluetooth Digital Turner AM/FM CD Player Mega Bass Reflex Stereo Sound System. Sony portable Bluetooth digital Turner MP/FM MP3 player Mega bass Reflex stereo sound system.
…whereas the second line of test.target is:
Sony Bluetooth
So clearly this is not working right!
At a most basic level, the summaries are too long… and actually, it seems that T5 is hallucinating “additional” text to add to my input text!
So, my first stop is to look at the console output:
2/18/2020 19:28:54 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 2, distributed training: False, 16-bits training: True
12/18/2020 19:28:54 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='/workspace/rabbit-py/predictions/sn-vs-n-1-simple', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, model_parallel=False, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=5, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Dec18_19-28-54_94c29ef5e746', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=True, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='sn-vs-n-1-simple', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, label_smoothing=0.0, sortish_sampler=False, predict_with_generate=True, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear')
[INFO|configuration_utils.py:422] 2020-12-18 19:28:54,234 >> loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /workspace/rabbit-py/models_foreign/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
[INFO|configuration_utils.py:458] 2020-12-18 19:28:54,236 >> Model config T5Config {
"architectures": [
"T5WithLMHeadModel"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 512,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "relu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"n_positions": 512,
"num_decoder_layers": 6,
"num_heads": 8,
"num_layers": 6,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
"use_cache": true,
"vocab_size": 32128
}
[INFO|configuration_utils.py:422] 2020-12-18 19:28:54,466 >> loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /workspace/rabbit-py/models_foreign/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985
[INFO|configuration_utils.py:458] 2020-12-18 19:28:54,467 >> Model config T5Config {
"architectures": [
"T5WithLMHeadModel"
],
"d_ff": 2048,
"d_kv": 64,
"d_model": 512,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "relu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "t5",
"n_positions": 512,
"num_decoder_layers": 6,
"num_heads": 8,
"num_layers": 6,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
"use_cache": true,
"vocab_size": 32128
}
[INFO|tokenization_utils_base.py:1793] 2020-12-18 19:28:54,944 >> loading file https://huggingface.co/t5-small/resolve/main/spiece.model from cache at /workspace/rabbit-py/models_foreign/65fc04e21f45f61430aea0c4fedffac16a4d20d78b8e6601d8d996ebefefecd2.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d
[INFO|tokenization_utils_base.py:1793] 2020-12-18 19:28:54,944 >> loading file https://huggingface.co/t5-small/resolve/main/tokenizer.json from cache at /workspace/rabbit-py/models_foreign/06779097c78e12f47ef67ecb728810c2ae757ee0a9efe9390c6419783d99382d.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529
[INFO|modeling_utils.py:1014] 2020-12-18 19:28:55,263 >> loading weights file https://huggingface.co/t5-small/resolve/main/pytorch_model.bin from cache at /workspace/rabbit-py/models_foreign/fee5a3a0ae379232608b6eed45d2d7a0d2966b9683728838412caccc41b4b0ed.ddacdc89ec88482db20c676f0861a336f3d0409f94748c209847b49529d73885
[WARNING|modeling_utils.py:1122] 2020-12-18 19:28:56,647 >> Some weights of the model checkpoint at t5-small were not used when initializing T5ForConditionalGeneration: ['decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight']
- This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[INFO|modeling_utils.py:1139] 2020-12-18 19:28:56,647 >> All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
12/18/2020 19:28:56 - INFO - utils - using task specific params for summarization: {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 200, 'min_length': 30, 'no_repeat_ngram_size': 3, 'num_beams': 4, 'prefix': 'summarize: '}
12/18/2020 19:28:58 - INFO - __main__ - *** Train ***
[INFO|trainer.py:668] 2020-12-18 19:28:58,881 >> ***** Running training *****
[INFO|trainer.py:669] 2020-12-18 19:28:58,881 >> Num examples = 1500
[INFO|trainer.py:670] 2020-12-18 19:28:58,881 >> Num Epochs = 1
[INFO|trainer.py:671] 2020-12-18 19:28:58,881 >> Instantaneous batch size per device = 5
[INFO|trainer.py:672] 2020-12-18 19:28:58,881 >> Total train batch size (w. parallel, distributed & accumulation) = 10
[INFO|trainer.py:673] 2020-12-18 19:28:58,881 >> Gradient Accumulation steps = 1
[INFO|trainer.py:674] 2020-12-18 19:28:58,881 >> Total optimization steps = 150
sn-vs-n-1-simple
[INFO|integrations.py:360] 2020-12-18 19:28:58,898 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
wandb: Currently logged in as: ghengis (use `wandb login --relogin` to force relogin)
wandb: Tracking run with wandb version 0.10.12
wandb: Syncing run sn-vs-n-1-simple
wandb: ⭐ View project at https://wandb.ai/---/huggingface
wandb: 🚀 View run at https://wandb.ai/----/huggingface/runs/tu1t9h5g
wandb: Run data is saved locally in /workspace/rabbit-py/src/learning/wandb/run-20201218_192859-tu1t9h5g
wandb: Run `wandb offline` to turn off syncing.
0%| | 0/150 [00:00<?, ?it/s]/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
100%|██████████| 150/150 [00:42<00:00, 3.53it/s][INFO|trainer.py:821] 2020-12-18 19:29:43,464 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'epoch': 1.0}
100%|██████████| 150/150 [00:42<00:00, 3.49it/s]
[INFO|trainer.py:1183] 2020-12-18 19:29:43,467 >> Saving model checkpoint to /workspace/rabbit-py/predictions/sn-vs-n-1-simple
[INFO|configuration_utils.py:289] 2020-12-18 19:29:43,471 >> Configuration saved in /workspace/rabbit-py/predictions/sn-vs-n-1-simple/config.json
[INFO|modeling_utils.py:814] 2020-12-18 19:29:43,893 >> Model weights saved in /workspace/rabbit-py/predictions/sn-vs-n-1-simple/pytorch_model.bin
12/18/2020 19:29:43 - INFO - __main__ - ***** train metrics *****
12/18/2020 19:29:43 - INFO - __main__ - train_samples_per_second = 33.64
12/18/2020 19:29:43 - INFO - __main__ - train_runtime = 44.5896
12/18/2020 19:29:43 - INFO - __main__ - train_n_ojbs = 1500
12/18/2020 19:29:43 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:1369] 2020-12-18 19:29:43,950 >> ***** Running Evaluation *****
[INFO|trainer.py:1370] 2020-12-18 19:29:43,950 >> Num examples = 300
[INFO|trainer.py:1371] 2020-12-18 19:29:43,951 >> Batch size = 16
100%|██████████| 19/19 [00:20<00:00, 1.06s/it]
12/18/2020 19:30:05 - INFO - __main__ - ***** val metrics *****
12/18/2020 19:30:05 - INFO - __main__ - val_loss = 2.1546
12/18/2020 19:30:05 - INFO - __main__ - val_rouge1 = 26.3976
12/18/2020 19:30:05 - INFO - __main__ - val_rouge2 = 13.6039
12/18/2020 19:30:05 - INFO - __main__ - val_rougeL = 26.1308
12/18/2020 19:30:05 - INFO - __main__ - val_rougeLsum = 26.18
12/18/2020 19:30:05 - INFO - __main__ - val_gen_len = 37.9
12/18/2020 19:30:05 - INFO - __main__ - epoch = 1.0
12/18/2020 19:30:05 - INFO - __main__ - val_samples_per_second = 14.091
12/18/2020 19:30:05 - INFO - __main__ - val_runtime = 21.29
12/18/2020 19:30:05 - INFO - __main__ - val_n_ojbs = 300
12/18/2020 19:30:05 - INFO - __main__ - *** Predict ***
[INFO|trainer.py:1369] 2020-12-18 19:30:05,241 >> ***** Running Prediction *****
[INFO|trainer.py:1370] 2020-12-18 19:30:05,241 >> Num examples = 100
[INFO|trainer.py:1371] 2020-12-18 19:30:05,241 >> Batch size = 16
100%|██████████| 7/7 [00:05<00:00, 1.23it/s]12/18/2020 19:30:12 - INFO - __main__ - ***** test metrics *****
12/18/2020 19:30:12 - INFO - __main__ - test_loss = 2.2199
12/18/2020 19:30:12 - INFO - __main__ - test_rouge1 = 27.7161
12/18/2020 19:30:12 - INFO - __main__ - test_rouge2 = 13.4332
12/18/2020 19:30:12 - INFO - __main__ - test_rougeL = 27.8038
12/18/2020 19:30:12 - INFO - __main__ - test_rougeLsum = 27.7593
12/18/2020 19:30:12 - INFO - __main__ - test_gen_len = 37.2
12/18/2020 19:30:12 - INFO - __main__ - test_samples_per_second = 13.715
12/18/2020 19:30:12 - INFO - __main__ - test_runtime = 7.2913
12/18/2020 19:30:12 - INFO - __main__ - test_n_ojbs = 100
100%|██████████| 7/7 [00:06<00:00, 1.14it/s]
… And something that jumps out at me is this:
12/18/2020 19:28:56 - INFO - utils - using task specific params for summarization: {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 200, 'min_length': 30, 'no_repeat_ngram_size': 3, 'num_beams': 4, 'prefix': 'summarize: '}
It seems I am using task-specific params, which are asking the model for a max_length of 200 tokens, right?
And then I see this Github comment:
github.com/huggingface/transformers, issue “Summarization pipeline max_length parameter seems to just cut the summary rather than generating a complete sentence within the max length” (opened and closed Apr 2, 2020, by Weilin37):
As @aychang95 suggested you have to play around with the generate method arguments to see what works best for your example. Especially take a look at num_beams, max_length, min_length, early_stopping and length_penalty.
So my idea is: I should shorten this max_length. My target summaries never go over 50 tokens, so I should tell this to T5!
I reference the --help in finetune_trainer.py:
--max_target_length MAX_TARGET_LENGTH
The maximum total sequence length for target text
after tokenization. Sequences longer than this will be
truncated, sequences shorter will be padded.
--val_max_target_length VAL_MAX_TARGET_LENGTH
The maximum total sequence length for validation
target text after tokenization. Sequences longer than
this will be truncated, sequences shorter will be
padded.
--test_max_target_length TEST_MAX_TARGET_LENGTH
The maximum total sequence length for test target text
after tokenization. Sequences longer than this will be
truncated, sequences shorter will be padded.
So it seems these arguments might do something. I can’t personally figure out why these values should be different, I mean, shouldn’t all these values match the maximum prediction length that I want? So I assume that is the case, for the time being.
So next, I run this script:
RUN="sn-vs-n-1-with-target-length"
python3 /workspace/rabbit-py/transformers/examples/seq2seq/finetune_trainer.py \
--learning_rate=3e-5 \
--fp16 \
--do_train --do_eval --do_predict \
--evaluation_strategy steps \
--predict_with_generate \
--n_train 1500 \
--n_val 300 \
--n_test 100 \
--num_train_epochs 1 \
--data_dir "/workspace/rabbit-py/corpii/short_name_vs_name" \
--model_name_or_path "t5-small" \
--output_dir "/workspace/rabbit-py/predictions/$RUN" \
--per_device_train_batch_size 5 \
--per_device_eval_batch_size 8 \
--max_target_length 50 \
--val_max_target_length 50 \
--test_max_target_length 50 \
--overwrite_output_dir \
--run_name $RUN
"$@"
The only change here are these added arguments:
--max_target_length 50 \
--val_max_target_length 50 \
--test_max_target_length 50 \
… this script finishes… and then I find that the newly generated test_generations.txt are exactly the same!
So, as far as I can tell, these three added arguments have had no effect…!
and… the console output contains the same thing:
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
...
12/18/2020 19:52:35 - INFO - utils - using task specific params for summarization: {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 200, 'min_length': 30, 'no_repeat_ngram_size': 3, 'num_beams': 4, 'prefix': 'summarize: '}
12/18/2020 19:52:38 - INFO - __main__ - *** Train ***
So, my rough guess here is that somehow these max_target_length arguments are being overridden. I re-run the script, this time, removing this line:
--task 'summarization' \
… But once again, I get the same “too long” summary, and in the console, I see the same using task specific params….
So my guess at this point is there might be either a bug or something lacking in the documentation, something that needs to be done to override task-specific params when using finetune_trainer.py?
Or (quite possibly) I’m doing something else wrong??
thanks! | From a quick look it appears that your diagnosis might be correct.
I can see how those length args are used to truncate the records in datasets, but model.config remains unmodified, so when it comes to generate it uses the task specific param defaults.
Most likely after use_task_specific_params() is run, model.config needs to be overridden again with the user’s overrides.
So something like:
--- a/examples/seq2seq/finetune_trainer.py
+++ b/examples/seq2seq/finetune_trainer.py
@@ -205,6 +205,10 @@ def main():
# use task specific params
use_task_specific_params(model, data_args.task)
+ if model.config.max_length is not None and data_args.max_target_length is not None:
+ print(f"before {model.config.max_length}")
+ model.config.max_length = data_args.max_target_length
+ print(f"after {model.config.max_length}")
# set num_beams for evaluation
if data_args.eval_beams is None:
So using your last command line (btw, I think it’s missing --task summarization)
2020-12-18 14:30:47 | INFO | utils | using task specific params for summarization: {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 200, 'min_length': 30, 'no_repeat_ngram_size': 3, 'num_beams': 4, 'prefix': 'summarize: '}
before 200
after 50
but then there are 3 of those target-length args.
but first please test if this one makes a difference.
I’m using cnn_dm db from README.md to test this:
./finetune_trainer.py --learning_rate=3e-5 --fp16 --do_train --do_eval --do_predict \
--evaluation_strategy steps --predict_with_generate --n_train 100 --n_val 100 --n_test 100 \
--num_train_epochs 1 --data_dir cnn_dm --model_name_or_path "t5-small" --output_dir output_dir \
--per_device_train_batch_size 5 --per_device_eval_batch_size 8 --max_target_length 50 \
--val_max_target_length 50 --test_max_target_length 50 --overwrite_output_dir --task summarization | 0 |
huggingface | Beginners | Loss is “nan” when fine-tuning NLI model (both RoBERTa/BART) | https://discuss.huggingface.co/t/loss-is-nan-when-fine-tuning-nli-model-both-roberta-bart/2839 | Hi,
I’m trying to fine-tune ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli on a dataset of around 276.000 hypothesis-premise pairs. I’m following the instructions from the docs here 2 and here 1. I have the impression that the fine-tuning works (it does the training and saves the checkpoints), but trainer.train() and trainer.evaluate() return “nan” for the loss.
What I’ve tried:
I tried using both ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli and facebook/bart-large-mnli to make sure that it’s not linked to a specific model, but I get the issue for both models
I tried following the advice in this related github issue 2, but adding num_labels=3 to the config file does not solve the issue. (I think my issue is different because the models are already fine-tuned on NLI in my case)
I tried changing the class XDataset(torch.utils.data.Dataset) (which I mostly copied from the docs), because I suspected that there could be an issue with my input data, but I also couldn’t solve it that way.
=> Does anyone know where this issue comes from? See my code below.
Thanks a lot in advance for any suggestion!
Here is my code:
### load model & tokenize
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
max_length = 256
hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
# also tried: hg_model_hub_name = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name)
model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name)
model.config
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
if device == "cuda":
model = model.half()
model.to(device)
model.train();
#... some data preprocessing
encodings_train = tokenizer(premise_train, hypothesis_train, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
encodings_val = tokenizer(premise_val, hypothesis_val, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
encodings_test = tokenizer(premise_test, hypothesis_test, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
### create pytorch dataset object
class XDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.as_tensor(val[idx]) for key, val in self.encodings.items()}
#item = {key: torch.as_tensor(val[idx]).to(device) for key, val in self.encodings.items()}
item['labels'] = torch.as_tensor(self.labels[idx])
#item['labels'] = self.labels[idx]
return item
def __len__(self):
return len(self.labels)
dataset_train = XDataset(encodings_train, label_train)
dataset_val = XDataset(encodings_val, label_val)
dataset_test = XDataset(encodings_test, label_test)
## training
from transformers import Trainer, TrainingArguments
# https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=100,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset_train, # training dataset
eval_dataset=dataset_val # evaluation dataset
)
trainer.train()
# output: TrainOutput(global_step=181, training_loss=nan)
trainer.evaluate()
# output: {'epoch': 1.0, 'eval_loss': nan} | Update: I spent several hours trying to solve this and I opened a github issue with a detailed description of the issue here: https://github.com/huggingface/transformers/issues/9160 427 | 0 |
huggingface | Beginners | GPT2 Generated Output Always the Same? | https://discuss.huggingface.co/t/gpt2-generated-output-always-the-same/2836 | I’m in the process of training a small GPT2 model on C source code. At the moment I’m trying to get a sense of what it has learned so far by getting it to generate some samples. However, every time I generate samples the output is exactly the same, even though I’m giving it a different seed (based on the current time) every time.
My code is:
#!/usr/bin/env python
import sys
import random
import numpy as np
import time
import torch
from transformers import GPT2Tokenizer
from transformers import GPT2Model, GPT2Config,GPT2LMHeadModel
from transformers.trainer_utils import set_seed
SEED = int(time.time())
set_seed(SEED)
print("Loading tokenizer...")
tokenizer = GPT2Tokenizer.from_pretrained("./csrc_vocab",
additional_special_tokens=["<s>","<pad>","</s>","<unk>","<mask>"],
pad_token='<pad>', max_len=512)
print("Loading model...")
model = GPT2LMHeadModel.from_pretrained(sys.argv[1],
pad_token_id=tokenizer.eos_token_id).to('cuda')
input_ids = tokenizer.encode("int ", return_tensors='pt').to('cuda')
print("Generating...")
gen_output = model.generate(
input_ids,
max_length=128,
temperature=1.1,
repetition_penalty=1.4,
early_stopping=True
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(gen_output[0], skip_special_tokens=True))
How do I properly seed the RNG so that I can get different outputs? I’ve also tried manually seeding with random.seed(), np.random.seed(), and torch.manual_seed(), but the output is always the same. | Hi @moyix!
I believe the set_seed() method being called is for the random processes that happen inside the Trainer class that is used for training and finetuning HF models. So, naively, I would say that calling set_seed() to generate different output from the nominal GPT2 won’t work.
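One more detail that may help: the generate() call in the question never enables sampling, so decoding is deterministic (greedy/beam search) and the seed has nothing to influence. A minimal sketch, reusing the model, tokenizer and input_ids from the question, of turning sampling on so that different seeds give different outputs:
gen_output = model.generate(
    input_ids,
    max_length=128,
    do_sample=True,        # sample instead of deterministic greedy/beam decoding
    top_k=50,              # consider only the 50 most likely next tokens
    top_p=0.95,            # nucleus sampling
    temperature=1.1,
    repetition_penalty=1.4,
)
print(tokenizer.decode(gen_output[0], skip_special_tokens=True))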
Unfortunately, I can’t think of a way to do this. Here 13 is an article by @patrickvonplaten about generating text with different decoder methods that might be useful. Otherwise, maybe @sgugger can provide some insight? | 0 |
huggingface | Beginners | Colab RAM crash error - Fine-tuning RoBERTa in Colab | https://discuss.huggingface.co/t/colab-ram-crash-error-fine-tuning-roberta-in-colab/2830 | Hi,
I’m trying to fine-tune my first NLI model with Transformers on Colab. I’m trying to fine-tune ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli on a dataset of around 276.000 hypothesis-premise pairs. I’m following the instructions from the docs here 5 and here 3.
The issue is that I get a memory error, when I run the code below on colab. My colab GPU seems to have around 12 GB RAM. The error occurs at the end during the training step, but I see in colab that already after the encoding step, 7~GB RAM is occupied. Then RAM usage shoots up at training and colab crashes.
I’m new to fine-tuning models. It would be great if someone could give some advice on how to reduce the RAM footprint in the code below.
What I’ve tried:
Use model.half() to reduce memory footprint
I changed per_device_train_batch_size and per_device_eval_batch_size from 32 to 8 to 2. (Not sure if a lower number here reduces the memory requirement? Or are higher numbers better for RAM?)
What else can/should be improved in the code below?
Thanks a lot for your help!
My code:
# ... some data preparation
### load model and tokenizer
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
max_length = 256
hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name)
model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name)
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
if device == "cuda":
model = model.half() # for half-precision training. reduces RAM requirement; decreases speed if on older GPU # https://huggingface.co/transformers/v1.1.0/examples.html
model.to(device)
model.train();
# ... some data preparation ...
encodings_train = tokenizer(premise_train, hypothesis_train, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=True, padding=True)
encodings_val = tokenizer(premise_val, hypothesis_val, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=True, padding=True)
encodings_test = tokenizer(premise_test, hypothesis_test, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=True, padding=True)
### create pytorch dataset object
import torch
class XDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
dataset_train = XDataset(encodings_train, label_train)
dataset_val = XDataset(encodings_val, label_val)
dataset_test = XDataset(encodings_test, label_test)
### training
from transformers import Trainer, TrainingArguments
# https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=2, # batch size per device during training
per_device_eval_batch_size=2, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset_train, # training dataset
eval_dataset=dataset_val # evaluation dataset
)
trainer.train() | Hi, you could try reducing max_length.
For a bert-base model, I found that I needed to keep maxlen x batchsize below about 8192. I think that limit would be even lower for a bert-large model.
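Concretely, an illustrative sketch reusing the variable names from the question (the numbers are only examples chosen to keep max_length x batch size small):
max_length = 128   # instead of 256
batch_size = 8     # 128 * 8 = 1024, well below the ~8192 budget mentioned above
encodings_train = tokenizer(premise_train, hypothesis_train, return_tensors="pt",
                            max_length=max_length, return_token_type_ids=True,
                            truncation=True, padding=True)
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
)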
Do you need roberta-large, or would roberta-base be sufficient?
(Or even distilroberta-base) | 0 |
huggingface | Beginners | ERROR: Could not find a version that satisfies the requirement torch==1.7.1+cpu | https://discuss.huggingface.co/t/error-could-not-find-a-version-that-satisfies-the-requirement-torch-1-7-1-cpu/2776 | I’m (an NLP newbie) trying to use the zero-shot models on a system without a GPU. None of the models seem to work. Can this work without a GPU?
example code:
from transformers import pipeline
classifier = pipeline(“zero-shot-classification”, model=‘joeddav/xlm-roberta-large-xnli’, device=-1)
sequence = “За кого вы голосуете в 2020 году?”
candidate_labels = [“Europe”, “public health”, “politics”]
classifier(sequence, candidate_labels)
output:
~/.local/lib/python3.8/site-packages/torch/cuda/init.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx 5 (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
Some weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: [‘roberta.pooler.dense.weight’, ‘roberta.pooler.dense.bias’]
This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). | Hi @waldenn
All the models and pipelines can work on CPU.
What you posted are warnings which you can safely ignore.
I think you have installed the torch CUDA version on a CPU machine, which is giving you the first warning. | 0 |
huggingface | Beginners | Tutorial on Pretraining BERT | https://discuss.huggingface.co/t/tutorial-on-pretraining-bert/2828 | I would like to pretrain a BERT model with custom data. For example, data in my native language. Is there any example script about pretraining a BERT model? I found the tutorial with the EsperBERTo model but this is not a BERT model as far as I understand. | You can have a look at the example scripts here: https://github.com/huggingface/transformers/tree/master/examples/language-modeling#robertabertdistilbert-and-masked-language-modeling 52 | 0 |
huggingface | Beginners | Question for Input of BERT | https://discuss.huggingface.co/t/question-for-input-of-bert/2792 | Hello, if I want to maintain two different dictionaries, one is BERT’s original dictionary and the other is a custom dictionary, and then the input is [CLS] BERT dictionary corpus [SEP] custom dictionary corpus [SEP] , how do I handle the input of the model and what part of the source code do I need to change? Thanks! | What are you trying to accomplish? The dictionary/vocabulary is an input to the tokenizer so you should be able to just switch it (if it conforms to how the tokenizer and and models wants to process it) but I don’t see how you could use two different vocabularies for the same model and get any meaningful results. | 0 |
huggingface | Beginners | Cannot Resume Training | https://discuss.huggingface.co/t/cannot-resume-training/2823 | I’m trying to resume training using a checkpoint with RobertaForMaskedLM
I’m using the same script I trained except at the last stage I call trainer.train("checkpoint-200000"), i.e. the model is created as usual from config and the tokenizer loaded from disk.
The trainer prints the message below, but then the loss isn’t consistent with the original training. At this stage in the original training the loss was around 0.6, on resume it drops to 0.0004 and then 7.85
This makes me not trust reloading the trained model as I have no confidence I’ve correctly trained and serialised it. What could I be doing wrong?
***** Running training *****
Num examples = 1061833
Num Epochs = 50
Instantaneous batch size per device = 256
Total train batch size (w. parallel, distributed & accumulation) = 256
Gradient Accumulation steps = 1
Total optimization steps = 207400
Continuing training from checkpoint, will skip to saved global_step
Continuing training from epoch 48
Continuing training from global step 200000
Will skip the first 896 batches in the first epoch
{‘loss’: 0.00040710282906264845, ‘learning_rate’: 1.7815814850530375e-06, ‘epoch’: 48.21841851494696}
{‘loss’: 7.853060150146485, ‘learning_rate’: 1.7791706846673097e-06, ‘epoch’: 48.22082931533269}
{‘loss’: 7.491885375976563, ‘learning_rate’: 1.7767598842815817e-06, ‘epoch’: 48.22324011571842} | The loss you were at isn’t saved so you can’t trust what it tells you when training resumes. First it’s not an average of all the losses since the beginning of the training, just since you restarted. Then the first time it’s logged, it’s divided by the wrong number (a number way too big) which is why you have that low loss.
I’ll look at this tomorrow and see if we can have the same losses printed in a full training and a resumed training. | 0 |
huggingface | Beginners | XLNet model applied to text classification | https://discuss.huggingface.co/t/xlnet-model-applied-to-text-classification/2701 | I’m a data science student, recently I reviewed the XLNet paper and I have a doubt about it:
Imagine that we have a dataset with categories, let’s say 200, and we have 20.000 instances to train/validate the model, for example:
text: about a specific objectA
category: objectA
I thought that having so many categories could be a problem when we classify, so I thought: OK, let’s give these categories a parent-child relation, like:
Jeans - jeansA, jeansB, jeansC, …
Shirts - shirtA, shirtB, shirtC, …
instead of: jeansA, jeansB, jeansC, shirtA, shirtB, shirtC, …
My intention here is to take advantage of hierarchical classification together with the XLNet model in order to improve accuracy. But here is where my doubt appeared:
In many examples I saw on some websites (for example Kaggle), people use XLNet directly (after some pre-processing), so I’m not sure about my idea; maybe the XLNet model alone is powerful enough to achieve a good classification. The question is: does what I am saying make sense, or did I not properly understand what XLNet does, given that I didn’t see anyone applying this approach with many categories? | It is a pretrained model, so there’s no way to do this proposal. | 0 |
huggingface | Beginners | Which CNN Summarization models to use? | https://discuss.huggingface.co/t/which-cnn-summarization-models-to-use/317 | Hi @sshleifer,
For what task were the following models fine-tuned / trained? Can they be used for text summarization?
sshleifer/student_cnn_12_6
sshleifer/student_cnn_6_6
Thank you. | These student models are created by copying layers from bart-large-cnn to reduce their size. These are un-fine-tuned checkpoints, so you’ll need to fine-tune them for summarization. More details can be found here:
GitHub
huggingface/transformers 6
🤗Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow 2.0. - huggingface/transformers | 0 |
huggingface | Beginners | NeuralCoreference and Spacy 3 | https://discuss.huggingface.co/t/neuralcoreference-and-spacy-3/2756 | Hello All, I am interested in NER and am employing spaCy together with huggingface neuralcoref plugin.
I am assuming that neuralcoref will help resolve some referential ambiguity with pronouns, so that I can identify what is being said (adjectives, nouns) further down my spaCy pipeline.
My question:
(1) It seems that neuralcoref is not as active a topic/library as other tools from HF. Is there a reason for this? Has this feature been integrated elsewhere?
(2) Has anyone been able to integrate neuralcoref with spaCy 3 (nightly)? I tried to install them together but was unable to because of dependency hell (i.e. thinc…) and conflicting required versions. I might need to hack neuralcoref into my spaCy 3 pipeline via REST HTTP calls.
Thanks for your help, and thanks to HF for their great tools. | hi @gonzobrandon
The neuralcoref package is available here https://github.com/huggingface/neuralcoref 111 | 0 |
huggingface | Beginners | Metrics mismatch between BertForSequenceClassification Class and my custom Bert Classification | https://discuss.huggingface.co/t/metrics-mismatch-between-bertforsequenceclassification-class-and-my-custom-bert-classification/2781 | Hi All,
I implemented my custom BERT binary classification model class by adding a classifier layer on top of the BERT model (attached below). However, the accuracy/metrics are significantly different when I train with the official BertForSequenceClassification model, which makes me wonder if I am missing something in my class.
A few doubts I have:
When loading the official BertForSequenceClassification with from_pretrained, are the classifier weights also initialized from the pretrained model, or are they randomly initialized? Because in my custom class they are randomly initialized.
class MyCustomBertClassification(nn.Module):
def __init__(self, num_labels, hidden_dropout_prob, encoder='bert-base-uncased'):
super(MyCustomBertClassification, self).__init__()
self.config = AutoConfig.from_pretrained(encoder)
self.encoder = AutoModel.from_pretrained(encoder, config=self.config)
self.dropout = nn.Dropout(hidden_dropout_prob)
self.classifier = nn.Linear(self.config.hidden_size, num_labels)
def forward(self, input_sent):
outputs = self.encoder(input_ids=input_sent['input_ids'],
attention_mask=input_sent['attention_mask'],
token_type_ids=input_sent['token_type_ids'],
return_dict=True)
pooled_output = self.dropout(outputs[1])
# for both tasks
logits = self.classifier(pooled_output)
return logits
| Hi adrshkm,
the weights of the SequenceClassification head are initialized randomly.
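A quick, illustrative way to see this: load the same checkpoint twice and compare; the encoder weights come from the checkpoint and match, while the classification head is freshly (randomly) initialized on each load:
import torch
from transformers import BertForSequenceClassification

m1 = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
m2 = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# encoder weights are loaded from the checkpoint, so they are identical across loads
print(torch.equal(m1.bert.embeddings.word_embeddings.weight,
                  m2.bert.embeddings.word_embeddings.weight))   # True
# the classification head is not in the checkpoint, so it is re-initialized each time
print(torch.equal(m1.classifier.weight, m2.classifier.weight))  # False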
See this page https://huggingface.co/transformers/training.html
which says
When we instantiate a model with from_pretrained() , the model configuration and pre-trained weights of the specified model are used to initialize the model. The library also includes a number of task-specific final layers or ‘heads’ whose weights are instantiated randomly when not present in the specified pre-trained model. For example, instantiating a model with BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2) will create a BERT model instance with encoder weights copied from the bert-base-uncased model and a randomly initialized sequence classification head on top of the encoder with an output size of 2. | 0 |
huggingface | Beginners | Token Classification with WNUT17 | https://discuss.huggingface.co/t/token-classification-with-wnut17/2696 | Hey guys, I’m following the steps described here: https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities 7 and fine-tune with native tensorflow ( see https://huggingface.co/transformers/custom_datasets.html#ft-native 1) .
During training I’d like to add a metric to see how the model is performing. Currently I have added SparseCategoricalAccuracy, but it gets stuck at an accuracy of around 0.2. Whether I set the DistilBert layer to trainable or not, it doesn’t make a difference.
What can I do now? What metric would you suggest and what performance can I expect?
Also, do you recommend setting all layers to trainable, or only the final Dense layer?
Thanks! Have a nice day! | I’d like to add a question
After playing around a little, it turns out that the model is predicting the majority class “O” all the time. Any best practice to handle class imbalance? I removed the samples with only a few occurrences of entities from the dataset, but that is not enough to solve the problem. | 0 |
huggingface | Beginners | XLMForSequenceClassification classifier layer? | https://discuss.huggingface.co/t/xlmforsequenceclassification-classifier-layer/1320 | I’m trying to probe a pretrained model of XLMForSequenceClassification. I want to freeze all layers but the last classifying layer. What layer is that for XLMForSequenceClassification? When I call .named_parameters(), the last layer seems to be:
sequence_summary.summary.weight
sequence_summary.summary.bias
This is unlike using a pretrained BERTForSequenceClassification since the last layer there is explicitly specified as a classifier in its name. What is sequence_summary? Can I assume this is the classifying layer?
Even if I leave this layer unfrozen, I still seem to get the error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn | model = BertForSequenceClassification.from_pretrained(‘bert-base-cased’)
When I print the named_parameters of this model, I get “classifier.weight” and “classifier.bias”.
When I just the print the model above, I see the last layer is:
(classifier): Linear(in_features=768, out_features=2, bias=True)
model = XLNetForSequenceClassification.from_pretrained(‘xlnet-base-cased’)
When I print the named_parameters of this model, I get “logits_proj.weight” and “logits_proj.bias”
When I just the print the model above, I see the last layer is:
(logits_proj): Linear(in_features=768, out_features=2, bias=True)
In the model above, the “sequence_summary” you mentioned is actually one level above the logits_proj parameter. Since you see the "sequence_summary’ as the last layer, can you show how you create your model? | 0 |
huggingface | Beginners | LongformerForQuestionAnswering - reaching TriviaQA leaderboard results | https://discuss.huggingface.co/t/longformerforquestionanswering-reaching-triviaqa-leaderboard-results/2749 | Hi everyone,
I’m trying to reach the reported leaderboard results of Longformer (from the paper), and I am struggling.
Steps that I took:
I downloaded TriviaQA’s original dev set.
I’m using LongformerForQuestionAnswering for evaluation.
I normalize the predicted answers and compare them to the gold-label answers to compute ExactMatch.
Am I missing something? Should any further processing be done before evaluating with LongformerForQuestionAnswering?
I already looked at the GitHub repo of Longformer; it doesn’t seem like they do any additional preprocessing of the dev data/context. | Maybe @beltagy can help | 0 |
huggingface | Beginners | Zero shot learning classification | https://discuss.huggingface.co/t/zero-shot-learning-classification/2707 | Is huggingface zero shot learning classification based on some pre-trained checkpoints? | When you select the tag zero-shot-classification in the model hub (see here 7) you can see all the models for the task. The default in the pipeline is facebook/bart-large-mnli (see source code here 2)
Hugging Face’s zero-shot classification is always based on models pre-trained on Natural Language Inference datasets (like mnli). So under the hood, it’s actually doing an NLI classification, it’s just abstracting that part away for you (see details and further links here 10) | 0 |
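For reference, a minimal sketch of the pipeline the answer describes (the example sentence and candidate labels below are mine, not from the thread):
from transformers import pipeline

# facebook/bart-large-mnli is the default checkpoint mentioned above
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The central bank raised interest rates by half a percent.",
    candidate_labels=["economics", "sports", "cooking"],
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label and its score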
huggingface | Beginners | What happens if the fine-tuning is done twice? | https://discuss.huggingface.co/t/what-happens-if-the-fine-tuning-is-done-twice/1124 | Apologies in advance if the question is silly, I’m trying to learn about huggingface and nlp in general.
My doubt is the following: let’s suppose that I want to do text-generation and I will work with the gtp2 pre-trained model. First of all I do finetuning with an astronomy dataset. I save the model as gtp2-astronomy. Then, I finetuned gtp2-astronomy with a physics dataset and saved it as a final-model.
My question is: will this final-model be good for text generation of astronomy and also for physics? Or by fine-tuning the second time, do I “eliminate” the ability of the model with astronomy subjects?
I ask this question because, as I understand it, when finetuning you are basically working with the last layer of the network, so, I don’t know if fine-tuning the second time will reset the last layer, which the first time learned about astronomy. | Apologies if the answer is silly, I’ve been using BERT and not GPT2.
I think your twice-trained model would probably remember at least some of the astronomy training, as well as the physics training.
If you had a really huge corpus of physics texts it might overwrite your astronomy training, but I think it is unlikely. Some researchers have shown that many transformers models have a lot more capacity than they need. Also, there is probably overlap of physics and astronomy vocab.
When you fine-tune, you can define whether you want the model layers to be altered or frozen. You could consider gradual unfreezing of layers. | 0 |
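A rough sketch (mine, not the poster's setup) of freezing the encoder so that a second fine-tuning pass only updates the task head; the checkpoint name is just an example:
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
for param in model.bert.parameters():
    param.requires_grad = False   # encoder keeps what it learned in the first fine-tuning
# model.classifier stays trainable, so only the head is updated in the second run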
huggingface | Beginners | Sentiment analysis for long sequences | https://discuss.huggingface.co/t/sentiment-analysis-for-long-sequences/516 | I’m trying to use the sentiment analysis pipeline,. currently using:
nlp = pipeline('sentiment-analysis')
nlp.tokenizer = transformers.DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
to classify a large corpus of textual data. Most of my data points are < 512 tokens, and the pipeline seems to be working well. However, some data points (~15% of the whole datasets) have more than 512 tokens. I’ve tried to split them into chunks of size 512 and aggregate the results, but this didn’t seem to work very well. Is there any principled/recommended approach for such situations? Perhaps using a different model/tokenizer? I’ve tried using XLNet but didn’t get very good results… | Hi @adamh, if your context is really long then you can consider using the longformer 16 model, it allows to use sequences with upto 4096 tokens. But you’ll need to fine-tune the model first. | 0 |
huggingface | Beginners | Top-k closest/similar words to the input word | https://discuss.huggingface.co/t/top-k-closest-similar-words-to-the-input-word/2657 | Hello,
Given the pre-trained model how can I retrieve top-k closest words to the given word?
In othe5r words, how can I see vector representation of the word and top-k closest vectors (corresponding words) from pre-trained model? | This is not a great fix, but what I use.
via @sgugger
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
import torch.nn.functional as F
# Load model and tokenizer
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# Input example
input_txt = "Hello, my name is Sylvain."
inputs = tokenizer(input_txt, return_tensors='pt')
outputs = model(**inputs)
# If you are not on a source install, replace outputs.logits by outputs[0]
predictions = F.softmax(outputs.logits, dim=-1)
thresh = 1e-2
vocab_size = predictions.shape[-1]
# Predictions has one sentence (index 0) and we look at the last token predicted (-1)
idxs = torch.arange(0, vocab_size)[predictions[0][-1] >= thresh]
print(tokenizer.convert_ids_to_tokens(idxs))
You’d have to an input like this. At the core of the United States’ mismanagement of the Coronavirus lies its distrust of science. At the core of the United States’ mismanagement of the Coronavirus lies its
You can also do the same thing, but by masking.
from transformers import RobertaTokenizer, RobertaForMaskedLM
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMaskedLM.from_pretrained('roberta-base')
sentence = """At the core of the United States' mismanagement of the Coronavirus lies its distrust of science. At the core of the United States' mismanagement of the Coronavirus lies its <mask> of science."""
token_ids = tokenizer.encode(sentence, return_tensors='pt')
# print(token_ids)
token_ids_tk = tokenizer.tokenize(sentence, return_tensors='pt')
print(token_ids_tk)
masked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero()
masked_pos = [mask.item() for mask in masked_position ]
print (masked_pos)
with torch.no_grad():
    output = model(token_ids)
    last_hidden_state = output[0].squeeze()

print("\n\n")
print("sentence : ", sentence)
print("\n")

list_of_list = []
for mask_index in masked_pos:
    mask_hidden_state = last_hidden_state[mask_index]
    idx = torch.topk(mask_hidden_state, k=100, dim=0)[1]
    words = [tokenizer.decode(i.item()).strip() for i in idx]
    list_of_list.append(words)
    print(words)

best_guess = ""
for j in list_of_list:
    best_guess = best_guess + " " + j[0] | 0 |
huggingface | Beginners | How to find the doc - and especially example code - for previous versions? | https://discuss.huggingface.co/t/how-to-find-the-doc-and-especially-example-code-for-previous-versions/2646 | Hi
Please let me explain the context of my question. I am a beginner and in the last 2 days have opened two issues that - after I got good answers from you - seem to involve the fact that I installed Transformers 3.5.1 but then tried to run examples from the 4.0.0 documentation.
I am trying to find a URL that would show me the 3.5.1 documentation, until you great gals and guys stabilize the 4.0.0 version, so I can copy-paste examples from the stable version.
For reference, here are the two issues I opened (but the question is a general one):
The question-answering example in the doc throws an AttributeError exception. Please help - Hugging Face Forums
Pipeline example in the doc throws an error (question-answering) - Beginners - Hugging Face Forums
Thanks. I really appreciate the fact that you answered those two questions so rapidly. It seems like a good community to join. | Try this https://huggingface.co/transformers/v3.5.1/ 1
(If you are in the 4.0 docs and you click the triangle just under the Hugging Face icon at the top left of the screen, you can pick any version from a list.) | 0 |
huggingface | Beginners | The question-answering example in the doc throws an AttributeError exception. Please help | https://discuss.huggingface.co/t/the-question-answering-example-in-the-doc-throws-an-attributeerror-exception-please-help/2611 | Hi.
Admittedly I am a beginner to HuggingFace, though I do have some Python experience and general programming experience.
I am using
transformers version: 3.5.1
Platform: Windows-10-10.0.18362-SP0
Python version: 3.6.12
PyTorch version (GPU?): 1.7.0 (False)
Tensorflow version (GPU?): not installed (NA)
Using GPU in script?: No
Using distributed or parallel set-up in script?: No
I copy-pasted the following code (see at the bottom of the post) from the Transformers doc on Summary of the tasks — transformers 4.0.0 documentation (huggingface.co) 1
However this code (I made no changes) returns with the following error:
Traceback (most recent call last):
File "c:/Workspace/py-conda-workspaces/py36-conda-speechDemo/text-question-answering.py", line 21, in <module>
answer_start_scores = outputs.start_logits
AttributeError: 'tuple' object has no attribute 'start_logits'```
Here is the code:
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
"How many pretrained models are available in 🤗 Transformers?",
"What does 🤗 Transformers provide?",
"🤗 Transformers provides interoperability between which frameworks?",
]
for question in questions:
    inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    outputs = model(**inputs)
    answer_start_scores = outputs.start_logits
    answer_end_scores = outputs.end_logits
    answer_start = torch.argmax(
        answer_start_scores
    )  # Get the most likely beginning of answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1  # Get the most likely end of answer with the argmax of the score
    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}")
Please help. What did I do wrong? | You’re using code aimed at transformers v4 with a previous version so it doesn’t work
You can either:
upgrade your installation
replace the line defining your model by this:
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad", return_dict=True) | 0 |
huggingface | Beginners | What’s the difference between bart-base tokenizer and bart-large tokenizer | https://discuss.huggingface.co/t/whats-the-difference-between-bart-base-tokenizer-and-bart-large-tokenizer/2607 | Hi,
tokenizer1 = BartTokenizer.from_pretrained('facebook/bart-base')
tokenizer2 = BartTokenizer.from_pretrained('facebook/bart-large')
What’s the difference conceptually? I can understand the diff in uncased and cased ones for bert.
But why this?
btw, bart base and large have the same “vocab_size”: 50265 in their config.
Thanks. | It is obviously related to more number of parameters used in the bart-large as mentioned in the description.
facebook/bart-large 24-layer, 1024-hidden, 16-heads, 406M parameters
facebook/bart-base 12-layer, 768-hidden, 16-heads, 139M parameters | 0 |
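A quick sanity check of my own (not from the thread): since both checkpoints report the same vocab_size, the two tokenizers should produce identical ids for the same text; the difference between the checkpoints is in the model weights, not the tokenizer.
from transformers import BartTokenizer

tok_base = BartTokenizer.from_pretrained("facebook/bart-base")
tok_large = BartTokenizer.from_pretrained("facebook/bart-large")
text = "Paris is the capital of France."
print(tok_base(text)["input_ids"] == tok_large(text)["input_ids"])  # expected: True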
huggingface | Beginners | Pipeline example in the doc throws an error (question-answering) | https://discuss.huggingface.co/t/pipeline-example-in-the-doc-throws-an-error-question-answering/2632 | Hi.
I am using
transformers version: 4
Platform: Windows-10-10.0.18362-SP0
Python version: 3.6.12
PyTorch version (GPU?): 1.7.0 (False)
Tensorflow version (GPU?): not installed (NA)
Using GPU in script?: No
Using distributed or parallel set-up in script?: No
I am running the pipeline example for question answering from the doc. It throws the following error:
Traceback (most recent call last):
File "c:/Workspace/py-conda-workspaces/py36-conda-speechDemo/text-question-answering.py", line 9, in <module>
result = nlp(question="What is extractive question answering?", context=context)
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\transformers\pipelines.py", line 1874, in __call__
start, end = self.model(**fw_args)[:2]
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\transformers\models\distilbert\modeling_distilbert.py", line 706, in forward
return_dict=return_dict,
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\transformers\models\distilbert\modeling_distilbert.py", line 480, in forward
inputs_embeds = self.embeddings(input_ids) # (bs, seq_length, dim)
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\transformers\models\distilbert\modeling_distilbert.py", line 107, in forward
word_embeddings = self.word_embeddings(input_ids) # (bs, max_seq_length, dim)
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\modules\sparse.py", line 126, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "C:\Users\Gilad\miniconda3\envs\speechDemoEnv_NMT\lib\site-packages\torch\nn\functional.py", line 1852, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)
Here is the code, it is copy-pasted from Summary of the tasks — transformers 4.0.0 documentation (huggingface.co) 1
from transformers import pipeline
nlp = pipeline("question-answering")
context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the examples/question-answering/run_squad.py script.
"""
result = nlp(question="What is extractive question answering?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
result = nlp(question="What is a good example of a question answering dataset?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
If it helps, I yesterday upgraded from v3.5.1 to v4, using pip install --upgrade transformers. Since this upgrade, the environment seems damaged also in other places, for example transformer-cli --help is broken (see a related issue at The question-answering example in the doc throws an AttributeError exception. Please help - Beginners - Hugging Face Forums )
Help will be appreciated. Thanks! | Edit: This is a Windows issue, we’ll try to find a fix soon! | 0 |
huggingface | Beginners | Data collator for training bart from scratch | https://discuss.huggingface.co/t/data-collator-for-training-bart-from-scratch/2047 | Hello,
I would like to train bart from scratch.
It seems the official example script is not available yet (if any, please tell me!).
So I am trying to create one by modifying the example scripts run_mlm.py and run_clm.py.
And I am not sure how to set the data collator part for BART.
In run_mlm.py , DataCollatorForLanguageModeling is used:
github.com
huggingface/transformers/blob/0c9bae09340dd8c6fdf6aa2ea5637e956efe0f7c/examples/language-modeling/run_mlm.py#L342 16
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
tokenized_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    num_proc=data_args.preprocessing_num_workers,
    load_from_cache_file=not data_args.overwrite_cache,
)
# Data collator
# This one will take care of randomly masking the tokens.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability)
# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"] if training_args.do_train else None,
    eval_dataset=tokenized_datasets["validation"] if training_args.do_eval else None,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
In run_clm.py , default_data_collator is used:
github.com
huggingface/transformers/blob/0c9bae09340dd8c6fdf6aa2ea5637e956efe0f7c/examples/language-modeling/run_clm.py#L310 7
)
# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_datasets["train"] if training_args.do_train else None,
    eval_dataset=lm_datasets["validation"] if training_args.do_eval else None,
    tokenizer=tokenizer,
    # Data collator will default to DataCollatorWithPadding, so we change it.
    data_collator=default_data_collator,
)
# Training
if training_args.do_train:
    trainer.train(
        model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
    )
    trainer.save_model()  # Saves the tokenizer too for easy upload
# Evaluation
Can someone give some advice about how to set the data collator for bart?
Thanks. | @zuujhyt Maybe try creating different Dataset classes using different DataCollators. You could then use PyTorch Lightning to create a Dataloader w/ multiple Dataset classes like this 68, and train Bart on that. I’ll try doing this.
Let me know if you have any updates on this. | 0 |
huggingface | Beginners | Fundamental newbie questions | https://discuss.huggingface.co/t/fundamental-newbie-questions/2626 | New to NLP / transformers - tried some examples and it is awesome. Love it! great work.
I am trying to create a Q&A system - to answer questions from a corpus of pdf documents in English.
questions where i need help to correct my understanding -
Is there any example of fine-tuning a pre-trained model on your own custom dataset built from PDF documents? My understanding is I need to use a pretrained model for QA and then fine-tune it with my own questions and answers from my corpus to increase the model accuracy.
I need more info on pipelines - especially text2text-generation. How do I see what models and parameters it uses behind the abstraction? In Python, how do I access the model metadata being used when I use pipelines? I love the abstraction but would also like control over tweaking the parameters being used by the pipeline.
What's the best way to save the models in the cloud so that they can be pointed to for inference instead of being downloaded?
again - awesome work! | For 1, your use case does seem a little specific, so there is no example of that exactly in our examples. Otherwise all our examples are in the examples folder of the repo and there is a tutorial 6 on how to fine-tune your model on a custom dataset in the documentation.
For 2, if you need more control, you should directly use the tokenizer/model and not the pipeline API. The task summary tutorial 3 shows examples of both ways on most tasks supported by the library.
For 3, you can upload models to our model hub. There is a (paying) inference API 1 to use them directly without downloads. | 0 |
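A small sketch for point 2: you can hand an explicit model and tokenizer to the pipeline and inspect their config, which removes most of the abstraction (the t5-small checkpoint below is just an example choice).
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM

name = "t5-small"
generator = pipeline("text2text-generation",
                     model=AutoModelForSeq2SeqLM.from_pretrained(name),
                     tokenizer=AutoTokenizer.from_pretrained(name))
print(generator.model.config)   # the exact model metadata behind the abstraction
print(generator("summarize: Long document text goes here ...", max_length=60))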
huggingface | Beginners | How to save model with .pt extension | https://discuss.huggingface.co/t/how-to-save-model-with-pt-extension/2433 | How to save the trained Roberta model with .pt extension instead of .bin extension? | Follow the steps mentioned here 14
or follow the code specified below.
Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt") | 0 |
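A self-contained version of the same idea, as a sketch (the RoBERTa checkpoint name is an example): torchscript=True makes the model traceable, and the traced module is what gets saved as a .pt file.
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base", torchscript=True)
model.eval()

inputs = tokenizer("An example sentence.", return_tensors="pt")
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
torch.jit.save(traced, "traced_roberta.pt")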
huggingface | Beginners | Finetuning Sequence-Pairs (GLUE) with higher sequence lengths seems to fail? | https://discuss.huggingface.co/t/finetuning-sequence-pairs-glue-with-higher-sequence-lengths-seems-to-fail/1656 | Questions & Help
Details
I have an issue where I tried to use the standard GLUE finetuning script for task STS-B with longer sequence lengths and the results are bad (see below). Correlation decreases massively when using longer sequence lengths, and when I instead use binary classification with two classes instead of regression, it is the same situation. For 128 and with some data (e. g. Yelp) 256 works well but longer sequence lengths then simply fail.
My assumption was that longer sequence lengths should result in similar or sometimes better results, and that for shorter input sequences padding is added but not incorporated into the embedding because of the attention mask (which marks where the real input is and where it is not).
Initially, I was using the Yelp Business Review dataset for sentiment prediction (which worked well for sequence lengths of 128, 256 and 512) but pairing same reviews sentiments for the same business should be similar to sequence pair classification (I know the task/data works) but it only gave good results for a sequence length of 128 and 256, but 400 or 512 just predicted zeros (as far as I observed). I then tried to just use this with the GLUE STS-B data with the same issue happening.
Background:
Before that, I was using GluonNLP (MXNet) and the BERT demo finetuning script (also GLUE STS-B like) with the same data and basically same framework/workflow (even hyperparameters) as here in PyTorch but there all sequence lengths worked, and longer sequence length even improved results (even with smaller batch sizes because of GPU RAM and longer training durations). As the input texts were smaller and longer (about a third of the data, I guess) this was not that surprising. I’m currently trying to switch to transformers because of the larger choice and support of models…
So, what am I doing wrong?
I tried using a constant learning rate schedule (using the default learning rate in the code) but it gave no improvements.
I tried different datasets also with almost similar end results. (even if input texts were longer than the maximum sequence length)
Can others reproduce this? (Just switch to seqlen 512 and batchsize 8 / seqlen 256 and batchsize 16)
Do I have to choose another padding strategy?
Results on GeForce RTX 2080 with transformers version 3.3.1 and CUDA 10.2:
# my script args (basically just changing the output dir and the sequence length (batch size for GPU memory reasons))
# transformers_copy being the cloned repo root folder
export GLUE_DIR=data/glue
export TASK_NAME=STS-B
python transformers_copy/examples/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --data_dir data/sentiment/yelp-pair-b/ --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output/glue_yelp_128_32
CUDA_VISIBLE_DEVICES=1 python transformers_copy/examples/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --data_dir data/glue/STS-B/ --max_seq_length 256 --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output/glue_STS-B_256_16 --save_steps 1000
CUDA_VISIBLE_DEVICES=1 python transformers_copy/examples/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --data_dir data/glue/STS-B/ --max_seq_length 512 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output/glue_STS-B_512_8 --save_steps 2000
# cat glue_STS-B_128_32/eval_results_sts-b.txt
# seqlen 128
eval_loss = 0.5857866220474243
eval_pearson = 0.8675888610991327
eval_spearmanr = 0.8641174656753431
eval_corr = 0.865853163387238
epoch = 3.0
total_flos = 1434655122529536
# cat glue_STS-B_256_16/eval_results_sts-b.txt
# seqlen 256
# this result should be bad, as far as I would think
eval_loss = 2.2562920122146606
eval_pearson = 0.22274851498729242
eval_spearmanr = 0.09065396938535858
eval_corr = 0.1567012421863255
epoch = 3.0
total_flos = 2869310245059072
# cat glue_STS-B_512_8/eval_results_sts-b.txt
# seqlen 512
eval_loss = 2.224635926246643
eval_pearson = 0.24041184048438544
eval_spearmanr = 0.08133980923357159
eval_corr = 0.1608758248589785
epoch = 3.0
total_flos = 5738620490118144
Yelp (sentiment, single sequence) with sequence length of 512
# cat yelp-sentiment-b_512_16_1/eval_results_sent-b.txt
eval_loss = 0.2301591751359403
eval_acc = 0.92832
eval_f1 = 0.945765994794504
eval_acc_and_f1 = 0.937042997397252
eval_pearson = 0.8404006160382227
eval_spearmanr = 0.8404006160382247
eval_corr = 0.8404006160382237
eval_class_report = {'not same': {'precision': 0.9099418011639767, 'recall': 0.8792393761957215, 'f1-score': 0.8943271612218422, 'support': 17249}, 'same': {'precision': 0.937509375093751, 'recall': 0.954169338340814, 'f1-score': 0.945765994794504, 'support': 32751}, 'accuracy': 0.92832, 'macro avg': {'precision': 0.9237255881288639, 'recall': 0.9167043572682677, 'f1-score': 0.920046578008173, 'support': 50000}, 'weighted avg': {'precision': 0.9279991134394574, 'recall': 0.92832, 'f1-score': 0.928020625988607, 'support': 50000}}
epoch = 0.08
total_flos = 26906733281280000
Yelp (sequence pairs) with 128, 256 and 512 (were 512 fails)
# cat yelp-pair-b_128_32_3/eval_results_same-b.txt
# seqlen 128
eval_loss = 0.4788903475597093
eval_acc = 0.8130612708878027
eval_f1 = 0.8137388152678672
eval_acc_and_f1 = 0.813400043077835
eval_pearson = 0.6262220422479998
eval_spearmanr = 0.6262220422479998
eval_corr = 0.6262220422479998
eval_class_report = {'not same': {'precision': 0.8189660129967221, 'recall': 0.8058966668552996, 'f1-score': 0.8123787792355962, 'support': 35342}, 'same': {'precision': 0.8072925445249733, 'recall': 0.8202888622481018, 'f1-score': 0.8137388152678672, 'support': 35034}, 'accuracy': 0.8130612708878027, 'macro avg': {'precision': 0.8131292787608477, 'recall': 0.8130927645517008, 'f1-score': 0.8130587972517317, 'support': 70376}, 'weighted avg': {'precision': 0.8131548231814548, 'recall': 0.8130612708878027, 'f1-score': 0.8130558211583339, 'support': 70376}}
epoch = 3.0
total_flos = 71009559802626048
# cat yelp-pair-b_256_16_1/eval_results_same-b.txt
# seqlen 256
eval_loss = 0.3369856428101318
eval_acc = 0.8494088893941116
eval_f1 = 0.8505977218901545
eval_acc_and_f1 = 0.850003305642133
eval_pearson = 0.6990572001217541
eval_spearmanr = 0.6990572001217481
eval_corr = 0.6990572001217511
eval_class_report = {'not same': {'precision': 0.8588791553054476, 'recall': 0.8377850715862147, 'f1-score': 0.8482009854474619, 'support': 35342}, 'same': {'precision': 0.840315302768648, 'recall': 0.8611348975281156, 'f1-score': 0.8505977218901545, 'support': 35034}, 'accuracy': 0.8494088893941116, 'macro avg': {'precision': 0.8495972290370477, 'recall': 0.8494599845571651, 'f1-score': 0.8493993536688083, 'support': 70376}, 'weighted avg': {'precision': 0.8496378513129752, 'recall': 0.8494088893941116, 'f1-score': 0.8493941090198912, 'support': 70376}}
epoch = 1.0
total_flos = 47339706535084032
# cat yelp-pair-b_512_8_3/eval_results_same-b.txt
# seqlen 512
# here it basically just predicts zeros all the time (as fas as I saw)
eval_loss = 0.6931421184073636
eval_acc = 0.5021882459929522
eval_f1 = 0.0
eval_acc_and_f1 = 0.2510941229964761
eval_pearson = nan
eval_spearmanr = nan
eval_corr = nan
eval_class_report = {'not same': {'precision': 0.5021882459929522, 'recall': 1.0, 'f1-score': 0.6686089407669461, 'support': 35342}, 'same': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 35034}, 'accuracy': 0.5021882459929522, 'macro avg': {'precision': 0.2510941229964761, 'recall': 0.5, 'f1-score': 0.33430447038347305, 'support': 70376}, 'weighted avg': {'precision': 0.25219303441347785, 'recall': 0.5021882459929522, 'f1-score': 0.3357675512189583, 'support': 70376}}
epoch = 3.0
total_flos = 284038239210504192
Side note:
I also ran Yelp with regression and it worked for 128 but for 512 the correlation was below 0.3 so it also failed again.
And I worked on another (private) dataset with similar results… | A short reply for now. More details will follow.
It seems to slightly depend on the batch size. Using gradient accumulation to augment the smaller batch sizes (of longer sequence length) corrects my issue where the model previously only predicted 1 or 0. So it might be the case that it just can’t generalize enough if the batch size is too small.
But this is still somehow connected to the learning rate and optimizer (I would assume), as with an older implementation of BERT finetuning in MXNet I could train with batch sizes of 2 and still be on the same level as with shorter sequence lengths or even better. The current finetuning code from GluonNLP (MXNet) seems to have a similar issue with the longer sequence lengths/smaller batch sizes, and gradient accumulation helped here, too. So finding the changes compared to the older code might help find the root cause. | 0 |
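The gradient-accumulation fix expressed in Trainer terms, for reference (the argument names are the standard TrainingArguments ones; the values are only illustrative):
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output/glue_STS-B_512_ga",
    per_device_train_batch_size=8,    # what fits in GPU memory at seq-len 512
    gradient_accumulation_steps=4,    # 8 * 4 = effective batch of 32, as with seq-len 128
    learning_rate=2e-5,
    num_train_epochs=3.0,
)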
huggingface | Beginners | Having fine-tunning as well pre-training together as multi-task | https://discuss.huggingface.co/t/having-fine-tunning-as-well-pre-training-together-as-multi-task/2390 | Hi
I am interested in having MLM task as well as classification task as single setup .
any leads ? | I think it would be simpler to do MLM first and then classification. Is there any reason why you need to define the model with both heads at once?
It is certainly possible (in native pytorch or native tensorflow) to define two different pathways through a model.
When you say “pre-training”, do you mean that you want to train a model from scratch, or are you going to start with a pre-trained model and then do multiple further training steps? | 0 |
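A rough sketch (my own) of the "two pathways" idea mentioned above: one BERT encoder shared by an MLM head and a classification head; during training the two losses can simply be summed. The simple linear MLM head is a simplification, not the exact head used in pretraining.
import torch.nn as nn
from transformers import BertModel

class BertMlmAndClassifier(nn.Module):
    def __init__(self, num_labels, vocab_size=30522, hidden_size=768):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.mlm_head = nn.Linear(hidden_size, vocab_size)      # predicts masked tokens
        self.classifier = nn.Linear(hidden_size, num_labels)    # sentence-level label

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids, attention_mask=attention_mask)
        sequence_output = outputs[0]   # (batch, seq_len, hidden)
        pooled_output = outputs[1]     # (batch, hidden)
        return self.mlm_head(sequence_output), self.classifier(pooled_output)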
huggingface | Beginners | Probing fine-tuned model | https://discuss.huggingface.co/t/probing-fine-tuned-model/2393 | Any tools to see how the model weighs the various words/pieces in the input for the predictions. | hi thak123,
If you are using BERT model in PyTorch, then BertViz by Jesse Vig is quite nice.
GitHub
jessevig/bertviz 21
Tool for visualizing attention in the Transformer model (BERT, GPT-2, Albert, XLNet, RoBERTa, CTRL, etc.) - jessevig/bertviz | 0 |
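If you want the raw attention weights yourself (the kind of thing BertViz visualizes), a minimal sketch looks like this; the sentence is just an example:
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
outputs = model(**inputs)
attentions = outputs[-1]   # tuple with one (batch, heads, seq_len, seq_len) tensor per layer
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])  # what each position corresponds to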
huggingface | Beginners | Using Electra model | https://discuss.huggingface.co/t/using-electra-model/2406 | Hi everyone,
I would to like to use an Electra model instead of a Bert model.
In order to do this, do I need to replace only the path to the model in my code?
So, for example, instead of “bert-base-uncased” I have to use " electra-base-uncased-discriminator". | Hi Sergio, there is an example in Electra manual that you can follow : https://huggingface.co/transformers/model_doc/electra.html#tfelectraforsequenceclassification 5 | 0 |
huggingface | Beginners | How to get translation with attention using MarianMT | https://discuss.huggingface.co/t/how-to-get-translation-with-attention-using-marianmt/2363 | Hi, I am trying to achieve this translation attention with MarianMT model as tf tutorial
TensorFlow
Neural machine translation with attention | TensorFlow Core 1
Basically, the goal is to tell which source word corresponds to each generated word.
I am not sure if I have used the correct field in the transformers output.
Here is the core code
from transformers import MarianMTModel, MarianTokenizer
import numpy as np

class MarianZH():
    def __init__(self):
        model_name = 'Helsinki-NLP/opus-mt-en-zh'
        self.tokenizer = MarianTokenizer.from_pretrained(model_name)
        print(self.tokenizer.supported_language_codes)
        self.model = MarianMTModel.from_pretrained(model_name)

    def input_format(self, en_text):
        if type(en_text) == list:
            # use batch
            src_text = []
            for i in en_text:
                src_text.append(">>cmn_Hans<< " + i)
        elif type(en_text) == str:
            src_text = [
                '>>cmn_Hans<< ' + en_text,
            ]
        else:
            raise TypeError("Unsupported type of {}".format(en_text))
        return src_text

    def get_attention_weight(self, en_text):
        src_text = self.input_format(en_text)
        batch = self.tokenizer.prepare_seq2seq_batch(src_text)
        tensor_output = self.model(batch['input_ids'], return_dict=True, output_attentions=True)
        attention_weights = tensor_output.cross_attentions[-1].detach()
        batch_size, attention_heads, input_seq_length, output_seq_length = attention_weights.shape
        translated = self.model.generate(**batch)
        for i in range(batch_size):
            attention_weight_i = attention_weights[i, :, :, :].reshape(attention_heads, input_seq_length, output_seq_length)
            cross_weight = np.sum(attention_weight_i.numpy(), axis=0)  # cross weight
            yield cross_weight

if __name__ == '__main__':
    src_text = [
        '>>cmn_Hans<< Thai food is delicious.',
    ]
    mdl = MarianZH()
    attention_weight = mdl.get_attention_weight(src_text)
btw. I am using transformers==3.5.1
Is this cross_weight the attention matrix corresponding to the translation attention? The output always seems to focus on the first or last columns. | I'm not sure whether this attention weight matrix is accessible in the MarianMT model, since the structure of MarianMT differs from the one in the TF tutorial. Could anyone tell me if this task is possible? | 0 |
huggingface | Beginners | Labels in language modeling: which tokens to set to -100? | https://discuss.huggingface.co/t/labels-in-language-modeling-which-tokens-to-set-to-100/2346 | I am confused on how we should use “labels” when doing non-masked language modeling tasks (for instance, the labels in OpenAIGPTDoubleHeadsModel).
I found this example on how to use OpenAI GPT for roc stories,
github.com
huggingface/transformers/blob/master/examples/contrib/run_openai_gpt.py 4
""" OpenAI GPT model fine-tuning script.
Adapted from https://github.com/huggingface/pytorch-openai-transformer-lm/blob/master/train.py
It self adapted from https://github.com/openai/finetune-transformer-lm/blob/master/train.py
This script with default values fine-tunes and evaluate a pretrained OpenAI GPT on the RocStories dataset:
And here it seems that the tokens in the continuation part are set to -100, and not the context (i.e., the other inputs). I also found this discussion here:
https://discuss.huggingface.co/t/gpt2-for-qa-pair-generation/759 6
Which seems to suggest that the context (the question) is what has to be set to -100, and not what has to be generated (the answer?).
So my question is, which component should be set to -100 when doing language modeling? The tokens that we want to predict or the tokens that are there for extra information (“the context”, “the question” for which the model needs to generate an answer) etc. | Hi @Kwiebes1995!
Let me try to clear some things up from that post. I think the title is a bit misleading as it says QA Pairs, but ultimately I was interested in question generation. Let’s assume for this discussion that we are working in question generation, i.e. I want GPT2 to generate a relevant question based off a context and answer.
I carried out the finetuning on this task as follows:
Create a finetuning set in the following format:
text_str = 'context: 42 is the answer to life, the universe and everything. answer: 42. question: What is the answer to life, universe and everything ?'
After encoding an example with a tokenizer, set the attention mask to 0 for all text after the question: What is the... text, since this is the text we want to predict.
We will want to calculate the loss on the question: What is the... text. To do this we need to set the label value for everything that comes before the question: What is the... text to -100. This will ensure that cross entropy ignores that part of the example.
Here is an explicit piece of code that should help with what has been described:
def qgen_data_collator(text_list: List[str]) -> dict:
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
    tokenizer.pad_token = tokenizer.eos_token
    q_id = tokenizer(' question', return_tensors='pt')['input_ids'][0][0]
    encoded_results = tokenizer(text_list, padding=True, truncation=True, return_tensors='pt',
                                return_attention_mask=True)
    q_idxs = (encoded_results['input_ids'] == q_id).nonzero()
    for idx, attn_mask in enumerate(encoded_results['attention_mask']):
        attn_mask[q_idxs[idx][1]:] = 0
    tmp_labels = []
    for idx, input_id in enumerate(encoded_results['input_ids']):
        label = input_id.detach().clone()
        label[:q_idxs[idx][1]] = -100
        tmp_labels.append(label)
    batch = {}
    batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']])
    batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']])
    batch['labels'] = torch.stack([result for result in tmp_labels])
    return batch
This worked for Transformers 3.0.2. To summarize, the attention_mask for the text you want to predict gets set to 0. The labels value for the text that is not being predicted gets set to -100.
Let me know if that clears things up. | 0 |
huggingface | Beginners | Doubt on Tokenization in Pegasus | https://discuss.huggingface.co/t/doubt-on-tokenization-in-pegasus/2365 | Hi, i created a 16-2 pegasus student with make_student.py then tried to use finetune.py on XSUM dataset. The script i run is:
python finetune.py --max_source_length 500 --data_dir xsum --freeze_encoder --freeze_embeds --learning_rate=1e-4 --do_train --do_predict --val_check_interval 0.1 --n_val 1000 --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 --model_name_or_path dpx_xsum_16_2 --train_batch_size=1 --eval_batch_size=1 --sortish_sampler --num_train_epochs=6 --warmup_steps 500 --output_dir distilpeg_xsum_sft_16_2 --gpus 0 --gradient_accumulation_steps 256 --adafactor --dropout 0.1 --attention_dropout 0.1 --overwrite_output_dir
The question is: is it normal that if I don't specify --max_source_length 500 I get an error during embedding? And if I leave it like that, is the fine-tuning still efficient?
Thanks in advance! | I noticed that in the script in the repo --max_source_length 512 is set, so I ran with that setting. But I notice that the starting ROUGE-2 (r2) score is 0.0 in metrics.json. Is this a problem? | 0 |
huggingface | Beginners | How to Add Validation Loss to run_squad.py? | https://discuss.huggingface.co/t/how-to-add-validation-loss-to-run-squad-py/2236 | I’m using the ‘bert-base-uncased’ as the model on SQuADv1.1 and have args.evaluate_during_training set to True. I tried adding "start_positions": batch[3], "end_positions": batch[4] into the evaluate method so that BertForQuestionAnswering returns total loss.
However, when I try to do that, I get cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion t >= 0 && t < n_classes failed.
What might be the problem? The only difference is that I'm trying to get the loss output from the model by passing in the start_positions and end_positions, similar to training, for evaluation on the dev dataset. | I noticed that the dev dataset contains multiple possible answers. Is there a way to account for that in terms of validation loss, or is there a better way to see if the model is overfitting? | 0 |
huggingface | Beginners | Can I get logits for each sequence I acquired from model.generate()? | https://discuss.huggingface.co/t/can-i-get-logits-for-each-sequence-i-acqired-from-model-generate/2239 | Hi, I'm currently stuck on getting the logits from model.generate. I'm wondering if it is possible to get the logits of each sequence returned by model.generate (like the logits for each token returned by model.logits). | Hi, one way to do that is to modify generation_utils
huggingface.co
transformers.generation_utils — transformers 3.5.0 documentation 37
For example, we can extract the logits of greedy search from the scores variable (there are several options for next-token search, e.g. greedy, beam, or sample search). | 0 |
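In more recent transformers versions no modification is needed: generate() can return the per-step scores directly. A sketch, assuming a version that supports these flags:
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
out = model.generate(**inputs, max_length=20, return_dict_in_generate=True, output_scores=True)
print(out.sequences)      # generated token ids
print(len(out.scores))    # one (batch, vocab) logits tensor per generated step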
huggingface | Beginners | How to convert TF Checkpoints to sentence embedings | https://discuss.huggingface.co/t/how-to-convert-tf-checkpoints-to-sentence-embedings/2004 | Hello
I trained the model with original bert code https://github.com/google-research/bert but I don’t know:
How to convert files to the model:
I tried to understand the documentation but I am definitely doing something wrong (I don't know what I have to do with the index and meta files, why the file model.ckpt-500000.data-00000-of-00001 is much bigger than a typical model.ckpt file for standard BERT, and why I don't have a model.ckpt file as output?)
https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel
How to convert a whole sentence to an embedding vector? (I know how to load the tokenizer from the vocab.txt file)
I'm afraid that I didn't understand some general concept of transformers models and this is the reason for my problems. Is there any online course for the transformers library?
Regards Peter | To convert a model trained with the original repository you should use the conversion script here: https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py 9 | 0 |
huggingface | Beginners | How to use an image tensor for caption generation with Transformer-XL or BERT? | https://discuss.huggingface.co/t/how-to-use-an-image-tensor-for-caption-generation-with-transformer-xl-or-bert/950 | I am fairly new to transformers and deep learning in general so please be kind,
I am currently working on a project that will caption images using either Transformer-XL or BERT. However, I am not sure how to pass the image tensor of shape [608, 608, 3] from my CNN to the transformer model for text generation. Can anyone help?
Please feel free to ask questions, I would be glad to assist in any way I can. | Guess I’m late. Although I’m not an expert, I can give you some idea. You can use some network like ResNet, DenseNet to ‘encode’ the image into a 1-D tensor, and then use this tensor to generate captions using a transformer. | 0 |
huggingface | Beginners | Token positions when using the Inference API | https://discuss.huggingface.co/t/token-positions-when-using-the-inference-api/2188 | Hi,
I am looking to run some NER models via the Inference API, but I am running into some issues. My problem is that the Inference API does not seem to return token positions. Consider this request:
curl -X POST https://api-inference.huggingface.co/models/dslim/bert-base-NER \
-H "Authorization: Bearer <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d "Hello Sarah Jessia Parker who lives in New York."
It returns:
[
{
"entity_group": "PER",
"score": 0.9959956109523773,
"word": "Sarah Jessia Parker"
},
{
"entity_group": "LOC",
"score": 0.9994343519210815,
"word": "New York"
}
]
So it finds the right tokens (and, nicely, returns the identified tokens grouped correctly.) However, there is no indication of where the tokens start in the input text.
Confusingly, the model page (https://huggingface.co/dslim/bert-base-NER?text=Hello+Sarah+Jessia+Parker+who+lives+in+New+York.) 1 highlights the right tokens, which would suggest you can get token positions. (unless it’s doing something terribly hacky like looking for the first occurence of a particular token).
Is there an option I’m missing? | Indeed, the page seems to just be highlighting the first occurence of a token. Note how it (probably) picks up the wrong “Jessica” in this example: https://huggingface.co/dslim/bert-base-NER?text=Hello+Sarah+Jessica+Parker+who+Jessica+lives+in+New+York.
Is there a way to extract the token position from the Inference API? | 0 |
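If you run the model locally instead of through the API, a fast tokenizer can give you character offsets for every token, which you can then tie back to the entities yourself. A sketch using the same checkpoint as the thread:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER", use_fast=True)
text = "Hello Sarah Jessica Parker who lives in New York."
enc = tokenizer(text, return_offsets_mapping=True)
for token, (start, end) in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"]), enc["offset_mapping"]):
    print(token, start, end, repr(text[start:end]))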
huggingface | Beginners | Transformers CLI tool: error: invalid choice: ‘repo’ | https://discuss.huggingface.co/t/transformers-cli-tool-error-invalid-choice-repo/2196 | Hi, this is first time for me to share pre-trained weights.
When I followed the steps in https://huggingface.co/transformers/model_sharing.html, I got error below.
$ transformers-cli repo create <my model name>
2020-11-25 06:52:26.478076: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-11-25 06:52:26.478102: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
PyTorch version 1.6.0 available.
TensorFlow version 2.3.0 available.
usage: transformers-cli <command> [<args>]
Transformers CLI tool: error: invalid choice: 'repo' (choose from 'convert', 'download', 'env', 'run', 'serve', 'login', 'whoami', 'logout', 's3', 'upload')
Is repo deprecated?
Thanks in advance. | I created a repo on the website and uploaded the model.
But it seems I cannot load the model I just uploaded.
OSError Traceback (most recent call last)
~/anaconda3/envs/fastaiv2/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
355 if resolved_config_file is None:
--> 356 raise EnvironmentError
357 config_dict = cls._dict_from_json_file(resolved_config_file)
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-13-1f125ff93000> in <module>
1 from transformers import AutoTokenizer, AutoModel
2
----> 3 tokenizer = AutoTokenizer.from_pretrained("kouohhashi/roberta_ja")
4
5 model = AutoModel.from_pretrained("kouohhashi/roberta_ja")
~/anaconda3/envs/fastaiv2/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
204 config = kwargs.pop("config", None)
205 if not isinstance(config, PretrainedConfig):
--> 206 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
207
208 if "bert-base-japanese" in str(pretrained_model_name_or_path):
~/anaconda3/envs/fastaiv2/lib/python3.7/site-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
297 {'foo': False}
298 """
--> 299 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
300
301 if "model_type" in config_dict:
~/anaconda3/envs/fastaiv2/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
363 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
364 )
--> 365 raise EnvironmentError(msg)
366
367 except json.JSONDecodeError:
OSError: Can't load config for 'kouohhashi/roberta_ja'. Make sure that:
- 'kouohhashi/roberta_ja' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'kouohhashi/roberta_ja' is the correct path to a directory containing a config.json file
I can find my mode on https://huggingface.co/models 1
Thanks in advance. | 0 |
huggingface | Beginners | BERT for token & sentence classification | https://discuss.huggingface.co/t/bert-for-token-sentence-classification/2108 | Hi everyone,
I’m trying to realize a Resume Parser through a NER task using BERT, so it would be a token level classification task.
Now, I have a problem with the Work Experience section of the resume.
I would like to extract (date, job title, company name, job description).
The problem is that while the first three are entities with few words, the last one is made up of many words, so I don’t think a token level classification is ok, because I should associate a description entity for each word of the description (I think it would be not efficient).
The ideal solution should be a sentence classification for the description, so for the entire description I would associate only a label.
But in this way, BERT should perform a token & sentence classification at the same time, and I don’t know if this feasible.
I don’t want use two BERT, one for token and the other one for sentence.
Is it possible to perform the two tasks at the same time with just one network?
Many thanks in advance | BERT produces a 768-dimension vector for each token, processed to take into account a small amount of information about each of the other tokens in the input text. Then it can be made to combine all of those vectors (for a single input text) into a single 768-dimension vector that can be considered as a representation of the whole input text.
I believe it should be possible for you to access both the token-vectors and the whole-text-vector, without having to delve into the model code.
With a bit more trouble, you could create your own vector that is a combination of only the tokens from the job description. The tricky bit is deciding how to combine the vectors. Some researchers suggest using the vectors from each of the last four layers.
What are you planning to do with the embeddings once BERT has created them? Do you want your “job description” embedding to include a small amount of context information from your date/title/name? If not, you would need to put them through a separate BERT.
Are you planning to use BERT just to produce embeddings, or are you planning to fine-tune BERT to your task?
Note that BERT will only accept a maximum of 512 tokens per text. | 0 |
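A small sketch of what "both the token-vectors and the whole-text-vector" looks like in code (the input sentence is just an example):
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("date, job title, company name and a longer job description ...", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
token_vectors = outputs[0]              # (1, seq_len, 768): one vector per word-piece
text_vector = outputs[0].mean(dim=1)    # or outputs[1], the pooled [CLS]-based vector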
huggingface | Beginners | Improving NER BERT performing POS tagging | https://discuss.huggingface.co/t/improving-ner-bert-performing-pos-tagging/2158 | Hi everyone,
I’m fine-tuning BERT to perform a NER task.
I’m wondering, if I fine-tune the same BERT model used for NER, to perform a POS tagging task, could the performance of NER task be improved?
To clarify the question:
class EntityModel(nn.Module):
    def __init__(self, num_tag, num_pos):
        super(EntityModel, self).__init__()
        self.num_tag = num_tag
        self.num_pos = num_pos
        self.bert = transformers.BertModel.from_pretrained(config.BASE_MODEL_PATH)
        self.bert_drop_1 = nn.Dropout(0.3)
        self.bert_drop_2 = nn.Dropout(0.3)
        self.out_tag = nn.Linear(768, self.num_tag)
        self.out_pos = nn.Linear(768, self.num_pos)

    def forward(self, ids, mask, token_type_ids, target_pos, target_tag):
        o1, _ = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
        bo_tag = self.bert_drop_1(o1)
        bo_pos = self.bert_drop_2(o1)
        tag = self.out_tag(bo_tag)
        pos = self.out_pos(bo_pos)
        loss_tag = loss_fn(tag, target_tag, mask, self.num_tag)
        loss_pos = loss_fn(pos, target_pos, mask, self.num_pos)
        loss = (loss_tag + loss_pos) / 2
        return tag, pos, loss
This is my model. It takes the target_pos, target_entities and the train_data (the sentences), which are the same for both tasks.
For example:
Sentence #,Word,POS,Tag
Sentence: 1:
Thousands,NNS,O
,of,IN,O
,demonstrators,NNS,O
,have,VBP,O
,marched,VBN,O
,through,IN,O
,London,NNP,B-geo
,to,TO,O
,protest,VB,O
,the,DT,O
,war,NN,O
,in,IN,O
,Iraq,NNP,B-geo
,and,CC,O
,demand,VB,O
,the,DT,O
,withdrawal,NN,O
,of,IN,O
,British,JJ,B-gpe
,troops,NNS,O
,from,IN,O
,that,DT,O
,country,NN,O
Two FeedForward layers are used for the POS and NER prediction respectively, but the embeddings come from the same BERT model.
Then, the total loss is calculated as the average loss computed individually from the two tasks.
So, my question is:
Could the NER accuracy/Precision/Recall etc… be improved performing the POS task too?
Many thanks in advance | It’s a reasonable assumption. I remember a paper form Sebastian Ruder that showed multitask learners have better performance on the downstream tasks so I would expect this to give better results.
You need to experiment to be sure though | 0 |
huggingface | Beginners | ALBERT Pretraining example (Tensorflow) | https://discuss.huggingface.co/t/albert-pretraining-example-tensorflow/1427 | Hello everyone,
Is there any guide / example of how to do pretraining of ALBERT (in Tensorflow) using code only from huggingface?
I would really appreciate any help.
Thanks | @sgugger any help here would be very appreciated | 0 |
huggingface | Beginners | Way to train a basic Transformer | https://discuss.huggingface.co/t/way-to-train-a-basic-transformer/2145 | Is there any way to train a basic Transformer (the transformer architecture presented in the original Attention Is All You Need paper). I’m trying to set up a translation baseline on my custom dataset (previously trained models do not exist for the language). If anyone can point out a ready-made implementation of just the basic Transformer it would be very helpful.
Thanks! | I think OpenNMT is the better framework to do something like that. It allows to train a model that you describe easily. transformers is more directed towards specialized, specific transformer architectures. | 0 |
huggingface | Beginners | Request: Mask-LM Training Google Colab | https://discuss.huggingface.co/t/request-mask-lm-training-google-colab/2119 | Does anyone have a Google Colab notebook they can share? (If possible, one that has been adapted to the new model upload) | Hi, not exactly what you requested (ie. not the latest Huggingface version),
but it should give the main idea with a bonus on working with TPU:
Kaggle notebook on implementation of MLM finetuning on TPUs with a custom training loop in TensorFlow 2.
https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm 11 | 0 |
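In case a minimal, non-Colab starting point is also useful, masked-LM fine-tuning with the Trainer API can be sketched roughly like this (the model name, file path and hyperparameters are placeholders, and LineByLineTextDataset may be deprecated in newer versions):
from transformers import (AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling,
                          LineByLineTextDataset, Trainer, TrainingArguments)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")
# my_corpus.txt is a placeholder: one training sentence per line.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="my_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="mlm-out", num_train_epochs=1, per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset)
trainer.train()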
huggingface | Beginners | Got KeyError(‘inputs’) | https://discuss.huggingface.co/t/got-keyerror-inputs/1094 | Hi,
For some reason I get KeyError(‘inputs’) on any request to QA hosted inference like:
https://huggingface.co/deepset/bert-base-cased-squad2 6
Thanks for checking | This is for @mfuntowicz | 0 |
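For anyone landing here with the same error: the hosted inference API expects the request body to carry an "inputs" key, so a request along these lines (the token is a placeholder) should avoid the KeyError:
import requests
API_URL = "https://api-inference.huggingface.co/models/deepset/bert-base-cased-squad2"
headers = {"Authorization": "Bearer <YOUR_API_TOKEN>"}  # placeholder token
payload = {
    "inputs": {
        "question": "Where do I live?",
        "context": "My name is Sarah and I live in London.",
    }
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())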
huggingface | Beginners | Difference between calling model() and using Trainer()? | https://discuss.huggingface.co/t/difference-between-calling-model-and-using-trainer/1696 | Hi All,
I was wondering if there is any tangible difference between calling model() and feeding data in manually, or using the Trainer() object?
If I run a program to batch my data and feed it manually, will the training results be the same as using Trainer()?
I only ask because I am getting some errors while applying the language modelling protocol (from: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=YZ9HSQxAAbme) with the Trainer() module, but doing it manually seems to work fine. | This might be related, as it discusses a recently fixed bug with a Colab notebook.
Using hyperparameter-search in Trainer 🤗Transformers
Hi @sgugger, in case you’re not aware of it, it seems the latest commit on master broke the Colab notebook you shared on Twitter
Trying to run that notebook, I hit the following error when trying to run
best_run = trainer.hyperparameter_search(n_trials=10, direction="maximize")
with the optuna backend.
Stack trace:
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py:900: RuntimeWarning:
invalid value encountered in double_scalars
[W 2020-10-22 14:58:41,815] Trial 0 f…
But to answer your question concerning the results: the results when using the Trainer or your own trainer should be the same as long as you use the same loss function and hyperparameters. | 0 |
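As a rough illustration of what the Trainer does under the hood, a bare-bones manual loop boils down to something like this (a sketch only; matching the Trainer's results exactly also requires its learning-rate scheduler, gradient clipping, seeding, etc.):
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
def manual_train(model, dataset, epochs=1, batch_size=8, lr=5e-5, device="cpu"):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = AdamW(model.parameters(), lr=lr)
    model.to(device)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            outputs = model(**batch)  # the loss is returned when labels are in the batch
            loss = outputs[0]
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()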
huggingface | Beginners | Image Features as Model Input | https://discuss.huggingface.co/t/image-features-as-model-input/782 | Hello All. I apologize for the basic question but for some reason I am having difficulty using image features as input to a huggingface model.
My data comes in the form of an image feature numpy array extracted by a 2D CNN, but all the models seem to be built for text-based input.
If anybody could point me in the right direction of an example or a code snippet I would greatly appreciate it! | Hi CMPE-PL,
the transformers models are designed for Natural Language Processing, i.e. text. What makes you think they would be good for image features?
I expect you could bypass the tokenizers and input numbers directly, but I'm not sure it would do anything useful. If you did want to do that, you would need to ensure that your numbers were in the right format. For example, a BERT model expects input that is a vector of 768 real numbers for each word-token, or rather a matrix of 768 x N real numbers, where N is the number of word-tokens in a text.
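Concretely, that "bypass the tokenizers" idea corresponds to the inputs_embeds argument; a hedged sketch, assuming your CNN features are already projected to 768 dimensions:
import torch
from transformers import BertModel
model = BertModel.from_pretrained("bert-base-uncased")
# Stand-in for your image features: batch of 1, N = 10 "tokens", 768 dims each.
image_features = torch.randn(1, 10, 768)
outputs = model(inputs_embeds=image_features)
print(outputs[0].shape)  # (1, 10, 768) contextualised representations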
What size is your image feature array?
The main trick of transformers is to use Attention mechanisms. It is certainly possible to use Attention in image recognition models, but without using transformers. See this article for an example https://arxiv.org/abs/2004.13621 3 | 0 |
huggingface | Beginners | Funcom Dataset for summarization | https://discuss.huggingface.co/t/funcom-dataset-for-summarization/2066 | Hi everybody!
I've just started with NLP and I'm working on my degree thesis, which involves experimenting with some datasets. I found the funcom dataset 5, which is made of pieces of Java code and their Javadocs. My question is: has anybody ever tested SOTA summarization models on this dataset? Would it give good results? Or does the pretraining of such models not provide any knowledge of source code?
Thanks in advance | Hi airnicco8,
I’m not an expert, but that looks a bit tricky. What would you intend to do with the funcom data? Would you be trying to build a seq-2-seq model that could translate from java code to comment string?
If you are supposed to be doing NLP, then java code might not be appropriate, as java is not a Natural Language.
A big advantage of the huggingface library is that it includes many pre-trained models, that you can fine-tune to your own data. I don’t think there are any models pre-trained on java code. See this page for the list of models available in huggingface https://huggingface.co/transformers/pretrained_models.html 3
I suggest you start with something simpler. | 0 |
huggingface | Beginners | TransformerXL run_clm.py grad can be implicitly created only for scalar outputs | https://discuss.huggingface.co/t/transformerxl-run-clm-py-grad-can-be-implicitly-created-only-for-scalar-outputs/2038 | Hello,
I am trying to execute run_clm.py for TransformerXL (transfo-xl-wt103) but get the following error:
0% 0/10170 [00:00<?, ?it/s]/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py:445: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
indices_i = mask_i.nonzero().squeeze()
Traceback (most recent call last):
File "language-modeling/run_clm.py", line 352, in <module>
main()
File "language-modeling/run_clm.py", line 321, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 775, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1126, in training_step
loss.backward()
File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 126, in backward
grad_tensors_ = _make_grads(tensors, grad_tensors_)
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 50, in _make_grads
raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
0% 0/10170 [00:00<?, ?it/s]
I haven’t changed the original (run_clm.py) code. I am using:
!python language-modeling/run_clm.py
--output_dir='/content/drive/My Drive/XL-result'
--model_type=transfo-xl-wt103
--model_name_or_path=transfo-xl-wt103
--save_total_limit=2
--num_train_epochs=2
--do_train
--train_file='/content/drive/My Drive/train.txt'
--do_eval
--validation_file='/content/drive/My Drive/test.txt'
--per_device_train_batch_size=4
--per_device_train_batch_size=4
--learning_rate 5e-5
--seed 42
--overwrite_output_dir
--block_size 125
Any help would be much appreciated! | Have you read this?
PyTorch Forums – 11 Jan 18
Loss.backward() raises error 'grad can be implicitly created only for scalar... 49
Hi, loss.backward() do not go through while training and throws an error when on multiple GPUs using torch.nn.DataParallel grad can be implicitly created only for scalar outputs But, the same thing trains fine when I give only deviced_ids=[0] to...
If you haven’t changed the run_clm code, something else must be different. What versions of python, pytorch and huggingface_transformers are you using?
Are you using CPU/GPU/TPU? Are you using DataParallel (whatever that is)? | 0 |
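A follow-up note: if it turns out the model returns an unreduced (per-token) loss, or the loss comes back as a vector because of DataParallel, one hedged workaround is to reduce it to a scalar yourself by subclassing the Trainer (a sketch; the compute_loss signature differs slightly between transformers versions):
from transformers import Trainer
class ScalarLossTrainer(Trainer):
    # Reduce a possibly non-scalar loss (per-token or per-GPU) to a scalar
    # before backward() is called.
    def compute_loss(self, model, inputs):
        outputs = model(**inputs)
        loss = outputs[0]
        return loss.mean()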
huggingface | Beginners | Multiple-Token Input for Text Generations and PPLM? | https://discuss.huggingface.co/t/multiple-token-input-for-text-generations-and-pplm/1953 | Hello. I am trying to integrate the results of an LDA topic model, which are usually a set of keywords, with controlled text generation in order to generate readable sentences. I have read some relevant papers and tried the code at 'transformers/examples/text-generation/pplm' and 'run_generation', but I am still struggling to understand how to pass a list of strings as input instead of the single string the demo presents. Thank you! | This may help:
https://towardsdatascience.com/data-to-text-generation-with-t5-building-a-simple-yet-advanced-nlg-model-b5cce5a6df45 40 | 0 |
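Not PPLM-specific, but as a rough sketch of feeding a list of strings (for example one keyword set per LDA topic) to plain GPT-2 generation in a batch (left padding is needed when batching a decoder-only model):
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.padding_side = "left"            # pad on the left for decoder-only generation
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")
prompts = ["economy market growth", "health vaccine hospital"]  # placeholder keyword sets
batch = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    pad_token_id=tokenizer.eos_token_id,
    max_length=40,
    do_sample=True,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))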
huggingface | Beginners | The reason `prepare_seq2seq_batch` for ProphetNet is not existed | https://discuss.huggingface.co/t/the-reason-prepare-seq2seq-batch-for-prophetnet-is-not-existed/1758 | Hi,
I tried to use ProphetNet with Seq2SeqTrainer, but it failed.
The error message tells me: this is because the collator I implemented uses prepare_seq2seq_batch() in _encode(), but prepare_seq2seq_batch() is not implemented for the ProphetNet tokenizer.
Is there any reason ProphetNet cannot have prepare_seq2seq_batch() in its tokenizer?
My understanding may be insufficient, but it seems that a function that assigns special tokens in a unique way is implemented for the tokenizer. Is that the cause?
If it were implemented in the same way as the other Seq2SeqLM tokenizers, would ProphetNet's original performance not be achieved?
Thank you in advance.
yusukemori | I don’t know about prophetnet special tokens, but it would be useful to implement that method!
cc @patrickvonplaten @valhalla | 0 |
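In the meantime, a hedged workaround is to build the seq2seq batch by hand inside the collator, roughly like this (the model name and texts are just examples; padding and label handling may need adjusting for your setup):
from transformers import ProphetNetTokenizer
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
src_texts = ["the quick brown fox jumps over the lazy dog"]
tgt_texts = ["a fox jumps over a dog"]
# Encode sources and targets separately, then attach the target ids as labels.
batch = tokenizer(src_texts, padding=True, truncation=True, return_tensors="pt")
labels = tokenizer(tgt_texts, padding=True, truncation=True, return_tensors="pt")["input_ids"]
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss
batch["labels"] = labels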
huggingface | Beginners | How to mask multiple tokens in BartForConditionalGeneration? | https://discuss.huggingface.co/t/how-to-mask-multiple-tokens-in-bartforconditionalgeneration/2052 | Hi!
How can I use BartForConditionalGeneration to predict multiple tokens instead of one? Or is the text infilling task not (yet) supported? | I can't remember their name, but someone wrote this code allowing one to mask several tokens. If you replace Roberta with Bart, I expect it'll work.
from transformers import RobertaTokenizer, RobertaForMaskedLM
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMaskedLM.from_pretrained('roberta-base')
sentence = """My name is <mask> and I enjoy <mask>."""
token_ids = tokenizer.encode(sentence, return_tensors='pt')
# print(token_ids)
token_ids_tk = tokenizer.tokenize(sentence, return_tensors='pt')
print(token_ids_tk)
masked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero()
masked_pos = [mask.item() for mask in masked_position]
print(masked_pos)
with torch.no_grad():
    output = model(token_ids)
last_hidden_state = output[0].squeeze()
print("\n\n")
print("sentence : ", sentence)
print("\n")
list_of_list = []
for mask_index in masked_pos:
    mask_hidden_state = last_hidden_state[mask_index]
    idx = torch.topk(mask_hidden_state, k=100, dim=0)[1]
    words = [tokenizer.decode(i.item()).strip() for i in idx]
    list_of_list.append(words)
    print(words)
best_guess = ""
for j in list_of_list:
    best_guess = best_guess + " " + j[0] | 0 |
huggingface | Beginners | Download models for local loading | https://discuss.huggingface.co/t/download-models-for-local-loading/1963 | Hi,
Because of some dastardly security block, I’m unable to download a model (specifically distilbert-base-uncased) through my IDE. Specifically, I’m using simpletransformers (built on top of huggingface, or at least uses its models). I tried the from_pretrained method when using huggingface directly, also, but the error is the same:
OSError: Can’t load weights for ‘distilbert-base-uncased’
From where can I download this pretrained model so that I can load it locally?
Thanks very much,
Mark | You can now (since the switch to git models explained here: [Announcement] Model Versioning: Upcoming changes to the model hub 74) just git clone a model to your laptop and make from_pretrained point to it:
# In a google colab install git-lfs
!sudo apt-get install git-lfs
!git lfs install
# Then
!git clone https://huggingface.co/ORGANIZATION_OR_USER/MODEL_NAME
from transformers import AutoModel
model = AutoModel.from_pretrained('./MODEL_NAME')
For instance:
# In a google colab install git-lfs
!sudo apt-get install git-lfs
!git lfs install
# Then
!git clone https://huggingface.co/facebook/bart-base
from transformers import AutoModel
model = AutoModel.from_pretrained('./bart-base')
cc @julien-c for confirmation | 0 |
huggingface | Beginners | Message “Some layers from the model were not used” | https://discuss.huggingface.co/t/message-some-layers-from-the-model-were-not-used/1972 | Hi,
I have a local Python 3.8 conda environment with tensorflow and transformers installed with pip (because conda does not install transformers with Python 3.8)
But I keep getting warning messages like “Some layers from the model checkpoint at (model-name) were not used when initializing (…)”
Even running the first simple example from the quick tour page generates 2 of these warnings (although slightly different), as shown below.
Code:
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
Output:
Downloading: 100% 629/629 [00:11<00:00, 52.5B/s]
Downloading: 100% 268M/268M [00:11<00:00, 23.9MB/s]
Some layers from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english were not used when initializing TFDistilBertModel: ['pre_classifier', 'classifier', 'dropout_19']
- This IS expected if you are initializing TFDistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFDistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFDistilBertModel were initialized from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertModel for predictions without further training.
Downloading: 100% 232k/232k [00:02<00:00, 111kB/s]
Downloading: 100% 230/230 [00:01<00:00, 226B/s]
Some layers from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english were not used when initializing TFDistilBertForSequenceClassification: ['dropout_19']
- This IS expected if you are initializing TFDistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFDistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFDistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased-finetuned-sst-2-english and are newly initialized: ['dropout_38']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
My configuration (‘transformers-cli env’ output):
2020-11-10 21:32:33.799767: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-11-10 21:32:33.804571: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:From c:\users\basvdw\miniconda3\envs\lm38\lib\site-packages\transformers\commands\env.py:36: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2020-11-10 21:32:37.029143: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-10 21:32:37.049021: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x154dca447d0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-10 21:32:37.055558: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-11-10 21:32:37.061622: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2020-11-10 21:32:37.065536: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2020-11-10 21:32:37.074543: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: S825
2020-11-10 21:32:37.080321: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: S825
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 3.5.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.8.6
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: No:
- Using distributed or parallel set-up in script?: No
Does anyone know what causes these messages and how I could fix this? I do not really understand the warning, because I thought I was using a pre-trained model which doesn't need any more training…
Any help would be appreciated ! | Tagging @jplu so he’s aware, this looks like a bug (but you can proceed safely, the bug is that there is a warning when there should be nothing). | 0 |
huggingface | Beginners | Summarization - Pegasus - min_length | https://discuss.huggingface.co/t/summarization-pegasus-min-length/1947 | Hello.
I am playing around with the minimum number of tokens in the output generated by "google/pegasus-cnn_dailymail". Basically I follow the documentation, so my code looks like this:
batch = tokPeg.prepare_seq2seq_batch(src_texts=[s]).to(torch_device)
gen = modelPeg.generate(**batch,
num_beams=int(8), min_lenght=100)
summary: List[str] = tokPeg.batch_decode(gen, skip_special_tokens=True)
However, when I count the number of tokens in the output text with len(tokPeg.tokenize(summary[0])), the output contains fewer tokens than specified in min_length. Is there anything I am missing? | This might be a red herring, but your code snippet shows "min_lenght" where it should be "min_length" | 0 |
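In other words, using the same batch and modelPeg as above, the corrected call would be something like:
gen = modelPeg.generate(
    **batch,
    num_beams=8,
    min_length=100,  # note the spelling: min_length, not min_lenght
)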
huggingface | Beginners | Restarting gpt-2 finetuning after power failure | https://discuss.huggingface.co/t/restarting-gpt-2-finetuning-after-power-failure/1935 | Hi All,
I was several days in on finetuning the medium model when my machine shut down due to a power failure. How do I restart from the last saved checkpoint? I was using the following command:
python run_language_modeling.py --output_dir=output --model_type=gpt2 --model_name_or_path=D:\Development\models\gpt2-medium --do_train --train_data_file=./twitter_train.txt --do_eval --eval_data_file=./twitter_test.txt --per_device_train_batch_size=1 --overwrite_output_dir
Thanks! | [I am assuming that gpt-2 saving works in the same way as BERT saving. I am not an expert.]
Hi @pgfeldman
did you save the optimizer state-dictionary?
In order to restart a previous training run, you need to have both the saved model state and the state of the optimizer’s parameters. (These take up a surprisingly large amount of memory - about half the size of the model).
If you haven't got the optimizer state-dict, then you can still load the saved model from the model checkpoint, but you will need to start a new training run. You will probably need to estimate how far along the first run was, and what learning rate it might have got up to.
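If the optimizer and scheduler files were saved alongside a checkpoint, resuming can be sketched roughly like this in Python (paths, block size and the checkpoint number are placeholders; the resume argument is called model_path in older transformers versions and resume_from_checkpoint in newer ones):
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("output/checkpoint-50000")  # last saved checkpoint (placeholder)
dataset = TextDataset(tokenizer=tokenizer, file_path="twitter_train.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
args = TrainingArguments(output_dir="output", per_device_train_batch_size=1)
trainer = Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset)
# Restores optimizer.pt / scheduler.pt from the checkpoint folder if they exist.
trainer.train(resume_from_checkpoint="output/checkpoint-50000")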
This thread might help:
Loading finetuned model to generate text 🤗Transformers
This is a basic question. Suppose I finetune a pretrained GPT2 model on my particular corpus. How do I go about loading that new model and generating some text from that new model? | 0 |
huggingface | Beginners | Looking for ways to extract custom tokens from text | https://discuss.huggingface.co/t/looking-for-ways-to-extract-custom-tokens-from-text/1950 | Hello community,
I am working on a project that requires extraction of a specific value from a text. Here is an example:
“This job offers a salary of $60000 and additional benefits like equity, health insurance and a private apartment”.
I want to be able to train a model that is able to recognize that $60000 is the salary of the job, but also be able to get the additional information that is related to the benefits like the equity and health insurance.
I have already solved this with a large corpus of regular expressions and manual text extraction, but as you are aware, there is always that one example that breaks the system. Therefore, I am hoping that I can use something to train my model to recognize these "tokens".
So in my internal language, the “$60000” is a token of the type “salary_value” and “equity”, “health insurance” and “private apartment” are tokens of the type “benefits”. There are a couple of other token types, but for the example let’s stay with this.
I have a lot of training data where these are annotated, so the text area that has the token and what token is expected.
Can I use any of the hugging face libraries to build something similar? I have looked at the existing models, but they focus a lot on NER like “location”, “name”, “company”, etc.
I guess a good summary is that I am looking for some guidance on what to use best here.
Thanks!
Alex | This really looks like something called knowledge graph extraction. I remember seeing something similar there; LSTMs and convnets were used, and transformers would probably perform well there too:
Medium – 25 Oct 19
Knowledge extraction from unstructured texts 6
There is an unreasonable amount of information that can be extracted from what people publicly say on the internet. Learn how to do it.
You should really read up on this topic.
Some more links:
https://towardsdatascience.com/auto-generated-knowledge-graphs-92ca99a81121 1
Programmer Backpack – 1 Feb 20
Python NLP Tutorial: Building A Knowledge Graph using Python and SpaCy 6
NLP tutorial for Information Extraction and building a Knowledge Graph in Python and spaCy.
Mining Knowledge Graphs from Text
Mining Knowledge Graphs from Text 4
A Tutorial | 0 |
huggingface | Beginners | How can I go about building Grammarly for my local language? | https://discuss.huggingface.co/t/how-can-i-go-about-building-grammarly-for-my-local-language/1923 | Hello Hugging Face forum members,
my name is Mislav, I'm a recent computer science graduate from Croatia. I had the idea of building a Grammarly-for-Croatian Google Chrome plugin. I want to pursue that idea, but before I do, I want to have a roadmap of what I would have to do in order to build it.
I am completely new to NLP and don't have any hands-on experience with it, but I do have some basic experience with machine learning.
Here are the features that I want to build (in order). There will probably be others later on:
Spell checker
Tone detector - tells the user how the text sounds - confident, formal, informal, etc.
Improvement suggestor - suggests how the user can improve his/her text so that it better resembles a certain tone (i.e. how to make his/her message seem more formal)
I don’t know how Grammarly works - what does it use as a technology to detect grammar errors? What does it use to predict sentiment? What does it use to measure how engaging is the text? How does it judge the delivery of the message intended by the text? Essentially, if you had to reverse-engineer Grammarly and use that knowledge to help me build it, I’d be grateful.
I’m doing this project to both learn about NLP and ship the product, so ideally I could have a balance of learning & speed while working on this project. Maybe I could reuse some code from the Hugging Face repo? I intend to make the product a freemium product, so I’m not sure if that’s aligned with the code license.
Any suggestions as to how to go about building this would be appreciated. Again, I am new to this, so a detailed description would go a long way.
Best,
Mislav | Hi Mislav
Although I respect your dedication, I urge you to be realistic. The road from “knowing nothing about NLP and just the basics of ML” to “building and shipping a whole NLP and deep learning focused product” is long. On top of that, this “Beginners” category is not a good fit for this kind of question. “Research” might be better suited.
I can’t give you a detailed description because “multiple roads lead to Rome”, in other words there are different ways to go about this. I’ll list some things that I think are important for you to get familiar with before even starting with this project. Don’t try to do too much at once. What you want to do takes years of expertise to get even the background right.
Read papers about readability. This is probably the most important field that you need to get a theoretical understanding of if you want to do things right. Grammarly does not only do a “right or wrong” prediction, but it also suggests improvements. Those improvements increase readability, which is quite a big field in (psycho)linguistics. This paper 4 is a good (recent) starting point, code available here 5. However, I’d also suggest to read up on the theoretical work that has been done way back concerning readability formulas to get a better understanding of the problem.
For spell checking you need to look into (grammatical) error correction. A lot of work has been done and lately it has been influenced by many neighbouring fields such as machine translation and quality estimation. From a programming perspective, here you may want to look into seq2seq and denoising auto-encoders, although I have to admit that I do not know what the SOTA is these days.
Sentiment analysis will be another large chunk of what you want to do, as its principles can be applied to style as well. You will find a lot information about this. The difficult part is not necessarily detecting which style a text has, but highlighting in that text which spans are confidently the points of interest that contribute to this sentiment. The problem, as so often with neural systems these days, is ensuring that there is some linguistic validation there and that the predicted spans are actually meaningful. (For instance, you may want to exclude prepositions and proper nouns from the result.)
Perhaps the hardest part is the “improvement suggestor” that you mention. You may want to look into the task of paraphrasing which also has gained some attention over the last few years. Style transfer also seems to be useful here.
As a first step, you should read up on all subfields above. Second, you can start learning how to implement these kinds of things, and lastly you can try to bring them all together into a single system with an interface.
As is hopefully clear to you now, this is not a one day, not even a one year, job. If it interests you, go for it! But be aware that if you want to do this right and actually understand what you are doing, that it will take time.
Good luck!
Bram | 0 |
huggingface | Beginners | Clarifying multi-GPU memory usage | https://discuss.huggingface.co/t/clarifying-multi-gpu-memory-usage/1905 | Am I reading this thread (Training using multiple GPUs 50) correctly? I interpret that to mean:
Training a model with batch size 16 on one GPU is equivalent to running a model with batch size 4 on 4 GPUs
Is that correct? And does it differ between DataParallel and DistributedDataParallel modes? | It is correct. The difference between DataParallel and DistributedDataParallel is (in your current example):
in DataParallel mode you have to set the batch size to 16 for your data loaders.
in DistributedDataParallel you have to set the batch size to 4 for your data loaders. | 0 |
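Put differently, just to make the arithmetic explicit:
# The effective (global) batch size is what matters for the training dynamics.
per_device_batch_size = 4
num_gpus = 4
gradient_accumulation_steps = 1
effective_batch_size = per_device_batch_size * num_gpus * gradient_accumulation_steps
print(effective_batch_size)  # 16 -> matches batch size 16 on a single GPU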
huggingface | Beginners | T5 training from scratch | https://discuss.huggingface.co/t/t5-training-from-scratch/1898 | Hi all,
I would like to train a T5 model (the t5-base version) without loading the pretrained weights. If I write the following:
from transformers import T5Config, T5Model
config = T5Config.from_pretrained('t5-base')
model = T5Model(config)
will it produce a t5-base-sized T5 model without loading the checkpoint weights?
Thanks | sarapapi:
from transformers import T5Config, T5Model
config = T5Config.from_pretrained('t5-base')
model = T5Model(config)
Yes, that’s correct. You will definitely see the “pretrained weights” download progress bar in the other case | 0 |