Dataset schema: docs (string, 4 classes), category (string, 3–31 chars), thread (string, 7–255 chars), href (string, 42–278 chars), question (string, 0–30.3k chars), context (string, 0–24.9k chars), marked (int64, 0–1).
huggingface
Beginners
Custom DistilBERT does not use CUDA for prediction
https://discuss.huggingface.co/t/custom-distilbert-does-not-use-cuda-for-predition/8699
Hi, I am using the model classifier = pipeline("text-classification", model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True) for emotion classification of my dataset of 1.2M client feedbacks. I am not training the model, I am just doing prediction. I noticed that the model uses the CPU and does not use CUDA (I have an RTX 5000), and the prediction takes ages to compute. Can you explain why this is the case? Is there a way to use CUDA for this model's predictions? Thank you, Krzysztof
I found a solution: add the parameter device=0. But I have to classify in small batches as GPU RAM is a serious limit, even though my GPU has 16 GB.
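A minimal sketch of that fix (the model name is the one from the question; the chunk size is an arbitrary assumption, tune it to your GPU memory):

from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
    return_all_scores=True,
    device=0,  # -1 (default) runs on CPU, 0 selects the first CUDA GPU
)

def classify_in_chunks(texts, chunk_size=64):
    # feed the 1.2M feedbacks in small chunks so GPU RAM is not exhausted
    results = []
    for i in range(0, len(texts), chunk_size):
        results.extend(classifier(texts[i:i + chunk_size]))
    return results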
0
huggingface
Beginners
Trying to Build Datasets, Random Items Get Added
https://discuss.huggingface.co/t/trying-to-build-datasets-random-items-get-added/8724
Hi all, I’m currently trying to load in fastai’s version of the IMDB dataset, to learn how to build a Dataset from a folder of .txt's. I’m preparing my data with the following: # Downloading the dataset from fastai.data.external import untar_data, URLs from fastai.data.transforms import get_files path = untar_data(URLs.IMDB, dest='./IMDB') From there I can get all the training .txt's with: texts = get_files(path/'train', extensions='.txt') texts = [str(t) for t in texts] In turn, this is a list of 25,000 text files. However, when I use the load_dataset api to bring this in, suddenly my dataset has 25,682 items! Can anyone help me figure out why? This is an issue as I need to use add_column to add in a label, and there is a mismatch between the number of actual training items vs the ones Datasets picked up. Here is how I’m building the dataset: dset = load_dataset('text', data_files={'train':texts}) TIA!
I think it's because the "text" loader creates a new sample for each "\n" it sees, so the texts that contain some of those are then split into several samples. @lhoestq or @albertvillanova, could you confirm? PS: it would be easier to just do dsets = load_dataset('imdb') :-p
0
huggingface
Beginners
Evaluation became slower and slower during Trainer.train()
https://discuss.huggingface.co/t/evaluation-became-slower-and-slower-during-trainer-train/8682
When I used Trainer.train() to fine-tune BartBase, I found something weird that the speed shown in progress bar became slower and slower (from 6 item/s to 0.29 item/s. Please help me, I’m new to transformers. Here are my codes. training_args = TrainingArguments( output_dir="Model/BartBase", overwrite_output_dir=True, per_device_train_batch_size=8, per_device_eval_batch_size=16, learning_rate=1e-5, num_train_epochs=20, lr_scheduler_type='linear', label_smoothing_factor=0, # logging_dir='runs', logging_strategy='steps', # log according to log_steps logging_steps=1, save_strategy='steps', # log according to save_steps save_steps=4000, save_total_limit=10, # limit the total amount of checkpoints evaluation_strategy="steps", # log according to eval_steps eval_steps=1, # I set eval_steps=1 to debug eval_accumulation_steps=1, seed=42, load_best_model_at_end=True, # load best model according to metric_for_best_model metric_for_best_model='f1' # the string should be ) from datasets import load_metric import numpy as np def compute_metrics(eval_pred): f1_metric = load_metric('f1') accuracy_metric = load_metric('accuracy') pred, label = eval_pred pred = np.argmax(pred, axis=-1) f1_score = f1_metric.compute(predictions=pred, references=label, average='micro') accuracy = accuracy_metric.compute(predictions=pred, references=label) return f1_socre.update(accuracy) from transformers import Trainer trainer = Trainer( model=model, tokenizer=tokenizer, args=training_args, data_collator=collator, # if tokenizer is provided, no need to provide it explicitly train_dataset=train_dataset, # torch.utils.data.dataset.Dataset eval_dataset=eval_dataset, compute_metrics=compute_metrics ) trainer.train()
After debugging step by step, I found that if I remove compute_metrics=compute_metrics from the Trainer, the evaluation goes well. Even if I use a quite simple compute_metrics, the evaluation becomes slow and eventually stops (without finishing the progress bar): def compute_metrics(eval_pred): return {'f1': 1} Please give me some help. Thanks a lot!!!
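For reference, a corrected version of the metrics function from the question might look like the sketch below: it fixes the f1_socre typo, merges the two metric dicts properly (dict.update returns None, so it cannot be returned directly), and loads the metrics once instead of on every call. Whether this alone removes the slowdown is not guaranteed; evaluating every step (eval_steps=1) and gathering all logits for compute_metrics remain expensive.

import numpy as np
from datasets import load_metric

f1_metric = load_metric("f1")
accuracy_metric = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    metrics = f1_metric.compute(predictions=preds, references=labels, average="micro")
    metrics.update(accuracy_metric.compute(predictions=preds, references=labels))
    return metrics  # e.g. {"f1": 0.87, "accuracy": 0.87}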
0
huggingface
Beginners
Passing additional tensors into metric evaluation function
https://discuss.huggingface.co/t/passing-additional-tensors-into-metric-evaluation-function/8618
Hi, I was wondering if it is possible to pass additional labels into the metric evaluation function. In more detail, I want to calculate metrics using not all predicted tokens but only a small subset, and to do this I need to pass a tensor of indices of those tokens into the metric evaluation function. But I don't need this tensor for anything else, e.g. for calculating the loss. I tried to add this tensor to the evaluation dataset, using the label_names and remove_unused_columns = False options. But in this case the model (T5 encoder-decoder) sees unrecognized keywords and fails. The default remove_unused_columns = True just deletes the tensor from the batch. I am using Transformers version 4.7.0. Any help is much appreciated! Thanks.
If you leave it in your dataset, you will have an error since your model can’t consume it. You will need to write a manual evaluation loop for this use case.
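A rough sketch of such a manual loop, assuming a column called token_indices for the extra tensor and a standard token-level model; all names here are placeholders rather than part of the original thread:

import torch
from torch.utils.data import DataLoader

eval_loader = DataLoader(eval_dataset, batch_size=16, collate_fn=data_collator)

model.eval()
all_preds, all_refs = [], []
with torch.no_grad():
    for batch in eval_loader:
        keep_ids = batch.pop("token_indices")   # extra tensor the model must not see
        labels = batch.pop("labels")
        batch = {k: v.to(model.device) for k, v in batch.items()}
        preds = model(**batch).logits.argmax(dim=-1)
        # keep only the selected token positions for the metric
        all_preds.append(torch.gather(preds.cpu(), 1, keep_ids))
        all_refs.append(torch.gather(labels, 1, keep_ids))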
0
huggingface
Beginners
Error while training a custom pretrained model
https://discuss.huggingface.co/t/error-while-training-a-custom-pretrained-model/8588
Hi, I trained a model as follows: checkpoint = "bert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=5) After 3 epochs I wanted to fine-tune this model again with the following code (I tried all three versions): #1 model = BertForSequenceClassification.from_pretrained('C:/Users/THINK/Dysk Google/_Priv/_Courses/Huggingface/Fine_tuned_models/rmp_eval_bert_base_uncased/') #2 model = BertModel.from_pretrained('C:/Users/THINK/Dysk Google/_Priv/_Courses/Huggingface/Fine_tuned_models/rmp_eval_bert_base_uncased/') #3 config = BertConfig.from_json_file('C:/Users/THINK/Dysk Google/_Priv/_Courses/Huggingface/Fine_tuned_models/rmp_eval_bert_base_uncased/config.json') model = BertModel.from_pretrained('C:/Users/THINK/Dysk Google/_Priv/_Courses/Huggingface/Fine_tuned_models/rmp_eval_bert_base_uncased/', config=config) But in each case I got the error: TypeError: forward() got an unexpected keyword argument 'labels' The input format is the same as before. The model was saved as: output_dir = 'C:/Users/THINK/Dysk Google/_Priv/_Courses/Huggingface/Fine_tuned_models/rmp_diff_bert_base_uncased' model.save_pretrained(output_dir) Can you help me with what code I should use to load a custom pretrained bert_base_uncased model? Thank you.
It's logical that you got that error for the last two versions, since BertModel does not accept a labels argument. I don't think you got that exact error for the first one, as BertForSequenceClassification does accept a labels argument.
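A sketch of how the reload could look (the path is shortened here; since the model was saved from a sequence-classification head, its config already stores num_labels=5, so passing it again is optional):

from transformers import AutoTokenizer, BertForSequenceClassification

output_dir = "Fine_tuned_models/rmp_eval_bert_base_uncased"  # shortened placeholder path
model = BertForSequenceClassification.from_pretrained(output_dir, num_labels=5)
# the tokenizer was not changed during fine-tuning, so the original checkpoint's tokenizer still applies
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")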
0
huggingface
Beginners
How can I do word classification?
https://discuss.huggingface.co/t/how-can-i-do-word-classification/8639
I am very new to NLP so sorry for the question. I am wondering if there is a name for this task and if I can do it with deep learning or even in Hugging Face. Let's say I have some sentences, for example: s1: I have a dog s2: I am new to this. I convert my sentences to embedded features and now I have inputs with shape 1x2048x4 for s1 and 1x2048x5 for s2. Now I want to classify each word into a category out of 10 categories, so the output will be like: o1: 1x4x10 and o2: 1x5x10. The dependency between the words in each sentence is important for the classification. It is kind of like token classification, but the labels do not represent the token; instead, they represent the feeling that the word implies for the sentence. So the length of the input may change and the length of the output will change as well, but the classes are the same. Can I do it with transformers and self-attention models? Should I look for any specific model or task? Sorry for the boring question, NLP newbie here with a lot of passion.
I'm not sure I understood your question entirely, but classifying words as you describe would be close to a Named Entity Recognition task, I think. BERT-like models can produce word embeddings for input sentences, which means a separate embedding vector for each word. Maybe you could use such a model for your task.
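A minimal sketch of per-token classification with 10 categories (checkpoint name and label count are placeholders; note that the tokenizer splits into subword tokens plus special tokens, so the sequence length will not exactly equal the word count):

from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=10)

inputs = tokenizer("I have a dog", return_tensors="pt")
logits = model(**inputs).logits            # shape: (1, num_tokens, 10)
predicted_classes = logits.argmax(dim=-1)  # one of the 10 categories per token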
0
huggingface
Beginners
How to specify different batch sizes for different GPUs when training with run_mlm.py?
https://discuss.huggingface.co/t/how-to-specify-different-batch-sizes-for-different-gpus-when-training-with-rum-mlm-py/8573
Hi, I’m using run_mlm.py to train a custom BERT model. I have two GPUs available. One has 24GB of memory and the other has 11 GB of memory. I want to use the batch size of 64 for the larger GPU and the batch size of 16 for the smaller GPU. How can I do so? The --per_device_train_batch_size parameter only takes one number. Or can I just give the combined batch size (80) and let the script figure out how to split the data between GPUs? Thanks!
This use case is not supported by the Trainer API, it would require custom scripts (one per GPU) to work, and even then, I’m not sure you will see any speed gain by training on two different GPUs that are not of the same type.
0
huggingface
Beginners
β€œcompute_loss” function
https://discuss.huggingface.co/t/compute-loss-function/8649
Could someone give some insight to the β€œmodel.compute_loss” function which is used when fine-tuning the models without the trainer API (e.g- Keras native training). How does it work (i.e.- for different tasks how it’s adapted.)? It says that it takes the first element of the output, how is the loss calculated then?
As shown here in the Hugging Face documentation: Fine-tuning with custom datasets β€” transformers 4.7.0 documentation
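To make the "first element of the output" point concrete, here is a small sketch (the checkpoint name is just an example): when labels are passed to a transformers model, the task-appropriate loss (here sparse categorical cross-entropy over the classification logits) is computed inside the model and returned as the first element of the output.

import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

inputs = tokenizer("great movie", return_tensors="tf")
outputs = model(inputs, labels=tf.constant([1]))
loss = outputs[0]   # first element of the output = the loss computed from the labels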
0
huggingface
Beginners
BERT: AttributeError: β€˜RobertaForMaskedLM’ object has no attribute β€˜bert’
https://discuss.huggingface.co/t/bert-attributeerror-robertaformaskedlm-object-has-no-attribute-bert/8362
I am trying to freeze some layers of my masked language model using the following code: for param in model.bert.parameters(): param.requires_grad = False However, when I execute the code above, I get this error: AttributeError: 'RobertaForMaskedLM' object has no attribute 'bert' In my code, I have the following imports for my masked language model, but I am unsure what is causing the error above: from transformers import AutoModelForMaskedLM model = AutoModelForMaskedLM.from_pretrained(model_checkpoint) So far, I have tried to replace bert with model in my code, but that did not work. Any help would be good. Thanks.
The name of the body of the model is roberta for RoBERTa models, not bert. So you should loop over for param in model.roberta.parameters(). In general, the attribute that is model-agnostic is base_model, so for param in model.base_model.parameters() should work anywhere.
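A short sketch of the model-agnostic version, as suggested above:

from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)

# works for BERT, RoBERTa, DistilBERT, ... because base_model resolves
# to the encoder body regardless of the architecture-specific attribute name
for param in model.base_model.parameters():
    param.requires_grad = False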
0
huggingface
Beginners
Accessing uncontextualized BERT word embeddings
https://discuss.huggingface.co/t/accessing-uncontextualized-bert-word-embeddings/1812
Hi there! Once I’ve imported a BERT model from HuggingFace, is there a way to convert a sequence of encoded tokens into BERT’s raw embeddings without contextualizing them using self-attention, or otherwise extract the raw embedding for a given token?
Try this. I think the APIs change a bit between models, so take a look before you copy-paste: model = DistilBertForTokenClassification.from_pretrained( "distilbert-base-cased", num_labels=self.num_labels ) word_embeddings = model.distilbert.embeddings.word_embeddings(token_id_tensor) word_embeddings_with_positions = model.distilbert.embeddings(token_id_tensor)
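Filling in that placeholder, a self-contained sketch might look like this (the checkpoint name is only an example):

import torch
from transformers import DistilBertTokenizerFast, DistilBertModel

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-cased")
model = DistilBertModel.from_pretrained("distilbert-base-cased")

input_ids = tokenizer("hello world", return_tensors="pt")["input_ids"]

with torch.no_grad():
    # raw, uncontextualized token embeddings (before any self-attention)
    raw_embeddings = model.embeddings.word_embeddings(input_ids)       # (1, seq_len, 768)
    # token embeddings + position embeddings + LayerNorm/dropout
    embeddings_with_positions = model.embeddings(input_ids)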
0
huggingface
Beginners
Converting TF-Bert to Torch using conversion script works, but
https://discuss.huggingface.co/t/converting-tf-bert-to-torch-using-conversion-script-works-but/574
Hi there - we are using this BERT architecture from Google: * attention_probs_dropout_prob:0.1 * hidden_act:"gelu" * hidden_dropout_prob:0.1 * hidden_size:768 * initializer_range:0.02 * intermediate_size:3072 * max_position_embeddings:512 * num_attention_heads:12 * num_hidden_layers:12 * type_vocab_size:2 * vocab_size:32000 we trained it from scratch with 10 millions of documents from our very specific domain and also changed optimizers and sentence tokenizer. Now, our BERT works wonderfully, it is evaluated on mask_token and next sentence prediction and we can also fine tune it for e.g. downstream Classfication tasks - all of that works. As long as we stay in the TensorFlow/Nvidia world. We would however love to move into all the possibilities of PyTorch as well. So we applied the following script: " convert_bert_original_tf_checkpoint_to_pytorch.py 2" This script initially fails, because our optimizer are not recognized. By adding them to the list of skipped attributes if any(n in ["adam_v", "adam_m", "*AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"*] for n in name): The scripts runs through fine and we get a working torch model. And here things become very strange. Initially we tried to fine tune the torch model of our BERT and nothing happened - inspecting the attention values and outputs for input tokens we realized that for any given input sequence ALL attention values are identical, yet different when the input changes. Also the output values in the hidden stated of the converted BASE model are identical for each token, irrespective of the input. Nevertheless the conversion seems to work in that way that the model can be loaded and is a fully valid Pytorch model file. I understand that all of that is quite vague but maybe it sounds familiar to somebody and or we get a hint where to look next. The manifestation of that effect: outputs = torch_bert_model(**example_input) print(outputs[0]) # last hidden state, irrespective of input always this: tensor([[[ 1.5048e-04, -4.1375e-06, 1.4493e-04, ..., 2.1820e-04, -6.6411e-05, 2.1333e-04], [ 1.5048e-04, -4.1375e-06, 1.4493e-04, ..., 2.1820e-04, -6.6411e-05, 2.1333e-04], [ 1.5048e-04, -4.1375e-06, 1.4493e-04, ..., 2.1820e-04, -6.6411e-05, 2.1333e-04], ..., [ 1.5048e-04, -4.1375e-06, 1.4493e-04, ..., 2.1820e-04, -6.6411e-05, 2.1333e-04], [ 1.5048e-04, -4.1375e-06, 1.4493e-04, ..., 2.1820e-04, -6.6411e-05, 2.1333e-04], [ 1.5048e-04, -4.1375e-06, 1.4493e-04, ..., 2.1820e-04, -6.6411e-05, 2.1333e-04]]], grad_fn=<NativeLayerNormBackward>) And the attention heads: 4=CLS, 5=SEP Always different for any given input but always the identical value on all heads. 
tensor([[ 4, 13, 8, 6060, 5, 13, 2840, 350, 8, 6060, 5]]) followed by the attention tensors for every layer and head, in which every single value is 0.0909 for this input (the full dump is omitted here). Thanks a lot for any help! Much appreciated.
Just in case someone runs into the same issue: it was a problem with the naming of the layers. Our naming came from an Nvidia TF package and differed from the standard naming. We did the mapping ourselves and now the model is working and producing the same output for identical input. The script was still useful to see how it is done in principle.
0
huggingface
Beginners
Training a reformer from scratch
https://discuss.huggingface.co/t/training-a-reformer-from-scratch/8287
Hello, I want to train a reformer for a sequence classification task. The sequences are of protein so I thought of making a new tokenizer and then loaded as a reformer tokenizer which is defined as below. spm.SentencePieceTrainer.train(input='./sequences_scope.txt', model_prefix='REFORM', max_sentence_length=2000, vocab_size=25) tokenizer = ReformerTokenizer("REFORM.model", padding=True) tokenizer.add_special_tokens({'pad_token': '[PAD]'}) The dataset was created as below - with open("sequences_class_scope.csv", "w") as fp: fp.write("idx,sequence,label\n") for n,i in enumerate(sequences): fp.write(str(n)+","+i+","+str(labels[n])+"\n") dataset = load_dataset('csv', data_files='sequences_class_scope.csv', split='train[:60%]') And the model was then defined as follows - config = ReformerConfig( vocab_size=25, max_position_embeddings=2000, num_attention_heads=12, num_hidden_layers=6, ) model = ReformerForSequenceClassification(config=config) data_collator = DataCollatorForTokenClassification( tokenizer=tokenizer, max_length = 2000 ) training_args = TrainingArguments( output_dir="./pREFORMo", overwrite_output_dir=True, num_train_epochs=1, per_gpu_train_batch_size=64, save_steps=10_000, save_total_limit=2, prediction_loss_only=True, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) trainer.train() I am currently getting the error ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label'] and if I map my dataset to include theinput_ids and attention_masks, I get the error /usr/local/lib/python3.7/dist-packages/transformers/data/data_collator.py in <listcomp>(.0) 186 padding_side = self.tokenizer.padding_side 187 if padding_side == "right": --> 188 batch["labels"] = [label + [self.label_pad_token_id] * (sequence_length - len(label)) for label in labels] 189 else: 190 batch["labels"] = [[self.label_pad_token_id] * (sequence_length - len(label)) + label for label in labels] TypeError: object of type 'int' has no len() Also, my end goal is to train the reformer and then use it to generate embeddings rather than classification. Is this the correct approach to do so?
hey @choke what does an example of your tokenized inputs look like? from the error, i think you could try renaming the target column to labels, which is what the Trainer expects by default
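A sketch of that renaming plus the tokenization step (column names are the ones from the question; the second traceback also suggests that DataCollatorForTokenClassification, which pads per-token label lists, may not fit integer sequence-classification labels, so DataCollatorWithPadding is used here as an assumption):

from transformers import DataCollatorWithPadding

def tokenize(batch):
    return tokenizer(batch["sequence"], truncation=True, max_length=2000)

dataset = dataset.map(tokenize, batched=True)
dataset = dataset.rename_column("label", "labels")
dataset = dataset.remove_columns(["idx", "sequence"])

data_collator = DataCollatorWithPadding(tokenizer=tokenizer)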
0
huggingface
Beginners
How does BERT know which contextualised embedding to choose for a word?
https://discuss.huggingface.co/t/how-does-bert-know-which-contextualised-embedding-to-choose-for-a-word/8516
Hi! I am trying to explain BERT. I understand the concept of contextualised embedding, where one word has different embeddings depending on the context. I also understand that when by using bidirectionality, BERT can learn these contextualised embedding during pretraining. My question is when finetuning BERT for a task and feeding it a sentence, how does BERT know which contextualised embedding to used for the word, given there are several to choose from?
I am new to the topic of Transformers, but my understanding (limited as it is) of BERT is that whilst the pre-trained embedding is created through semi-supervised learning (just large unannotated text corpora), fine-tuning BERT for a specific task is usually a supervised learning process. So effectively you tell it what the right answers are, and as part of the learning process in fine-tuning it will learn which contextualised embedding (and maybe a lot of other related knowledge) is relevant to what you want it to do.
0
huggingface
Beginners
Is Transformers using GPU by default?
https://discuss.huggingface.co/t/is-transformers-using-gpu-by-default/8500
I’m instantiating a model with this tokenizer = AutoTokenizer.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment") model = AutoModelForSequenceClassification.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment") Then running a for loop to get prediction over 10k sentences on a G4 instance (T4 GPU). GPU usage (averaged by minute) is a flat 0.0%. What is wrong? How to use GPU with Transformers?
Like with every PyTorch model, you need to put it on the GPU explicitly, as well as your batches of inputs.
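A minimal sketch of that (the checkpoint is the one from the question):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment")
model = AutoModelForSequenceClassification.from_pretrained(
    "nlptown/bert-base-multilingual-uncased-sentiment"
).to(device)

sentences = ["Great product!", "Terrible service."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}  # inputs must be on the same device as the model
with torch.no_grad():
    logits = model(**inputs).logits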
0
huggingface
Beginners
Small Doubt :- Is there any mistake while finding results from DeBERTa model for SNLI Dataset
https://discuss.huggingface.co/t/small-doubt-is-there-any-mistake-while-finding-results-from-deberta-model-for-snli-dataset/8311
Hi everyone. from transformers import DebertaTokenizer, DebertaForSequenceClassification import torch max_length = 512 premise = "I do not love you" hypothesis = "I love you" hg_model_hub_name = "microsoft/deberta-base-mnli" tokenizer = DebertaTokenizer.from_pretrained(hg_model_hub_name) model = DebertaForSequenceClassification.from_pretrained(hg_model_hub_name) tokenized_input_seq_pair = tokenizer.encode_plus(premise, hypothesis, max_length=max_length, return_token_type_ids=True, truncation=True) input_ids = torch.Tensor(tokenized_input_seq_pair['input_ids']).long().unsqueeze(0) # remember bart doesn't have 'token_type_ids', remove the line below if you are using bart. token_type_ids = torch.Tensor(tokenized_input_seq_pair['token_type_ids']).long().unsqueeze(0) attention_mask = torch.Tensor(tokenized_input_seq_pair['attention_mask']).long().unsqueeze(0) outputs = model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=None) # Note: # "id2label": { # "0": "entailment", # "1": "neutral", # "2": "contradiction" # }, predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() # batch_size only one print("Premise:", premise) print("Hypothesis:", hypothesis) print("Entailment:", predicted_probability[0]) print("Neutral:", predicted_probability[1]) print("Contradiction:", predicted_probability[2]) The output is :- Premise: I do not love you Hypothesis: I love you Entailment: 0.9993366599082947 Neutral: 0.0004206844314467162 Contradiction: 0.00024267268599942327 While calculating accuracy using above code on SNLI Dataset I am getting 29% which should not be… Resluts from SNLI dataset precision recall f1-score support 0 0.04 0.04 0.04 3329 1 0.82 0.81 0.82 3235 2 0.02 0.03 0.02 3278 accuracy 0.29 9842 macro avg 0.30 0.29 0.29 9842 weighted avg 0.29 0.29 0.29 9842 Can anybody guide me where i am doing mistake or how should i approach to find the entailment, neutral, contradiction prediction from DeBERTa model @DeBERTa @lewtun can you please help??
hey @akshat-suwalka i think the reason why you’re getting a much lower score on the snli dataset is due to a misalignment between the label β†’ label_id mappings in the model and dataset. to explain what i mean, note that the config.json of the deberta model has the following mappings: "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" }, "label2id": { "CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1 } while the snli dataset has the contradiction and neutral labels swapped: id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'} label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1} To fix this you can use the Dataset.align_labels_with_mapping function (docs): hg_model_hub_name = "microsoft/deberta-base-mnli" model = DebertaForSequenceClassification.from_pretrained(hg_model_hub_name) config = model.config snli = load_dataset("snli") snli_aligned = snli.align_labels_with_mapping(label2id=config.label2id, label_column="label") hth!
0
huggingface
Beginners
Some functions when customizing trainer
https://discuss.huggingface.co/t/some-functions-when-customizing-trainer/8413
Hi, I am glad to find that the behavior of the Trainer can be customized by overriding its methods. However, I am facing a problem with the functions that already exist in it. For example: def training_step(self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) -> torch.Tensor: ... if is_sagemaker_mp_enabled(): scaler = self.scaler if self.use_amp else None loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulation_steps, scaler=scaler) return loss_mb.reduce_mean().detach().to(self.args.device) ... loss = self.compute_loss(model, inputs) print(loss) # This is the only place I want to change return loss.detach() This is part of the code of the training_step method, which I want to rewrite. Suppose I just want to print the loss in each training step without changing other code. But apparently I cannot import the function is_sagemaker_mp_enabled(), and thus I have to delete such calls. I don't think this is a good solution; is there any more elegant way? Thanks for the help!
There is no reason you shouldn’t be able to import is_sagemaker_mp_enabled from its location (transformers.file_utils).
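Alternatively, the whole copy-paste can be avoided by delegating to the parent method. This is only a sketch of one possible approach (in recent transformers versions the helper moved to transformers.utils):

from transformers import Trainer
from transformers.file_utils import is_sagemaker_mp_enabled  # importable, so nothing needs deleting

class LossPrintingTrainer(Trainer):
    def training_step(self, model, inputs):
        loss = super().training_step(model, inputs)  # keeps all original logic (AMP, SageMaker, ...)
        print(loss.item())                           # the only change: print the loss each step
        return loss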
0
huggingface
Beginners
Compare the likelihood of various sentences in a LM?
https://discuss.huggingface.co/t/compare-the-likelihood-of-various-sentences-in-a-lm/7234
Hi! I have the beginning of a sentence (2-3 words) and couple candidate full sentences starting or ending with those words. Is there an easy way to use Transformers to know which of the full sentences was more likely to be output by a language model?
Hi Olivier, what about comparing the average log-likelihoods of your candidates?
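A sketch of that comparison with a causal LM (GPT-2 is used here purely as an example checkpoint): when labels are set to the input ids, the returned loss is the mean token-level cross-entropy, so its negative is an average log-likelihood that can be compared across candidates.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_likelihood(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return -loss.item()                     # higher = more likely under the LM

candidates = ["The cat sat on the mat.", "The cat sat on the math."]
print(max(candidates, key=avg_log_likelihood))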
0
huggingface
Beginners
Make predictions with the Dropout on
https://discuss.huggingface.co/t/make-predictions-with-the-dropout-on/5138
The default behavior of Trainer(...) when evaluating a model is to disable Dropout. Concretely, y_pred for M runs will be exactly the same: for i in range(M): logits, labels, metrics = trainer.predict(tokenized_datasets["eval"]) y_pred = np.argmax(logits, axis=2) ... Now I am trying to apply the Monte Carlo Dropout trick introduced in this answer. This requires turning Dropout on while making predictions on the validation set. I am wondering how I can achieve this goal. Any input is appreciated.
I don't think this is possible with the Trainer class as it is, but you can derive this class and then change the relevant methods. In your case, I think you need to change the evaluation_loop method and delete the model.eval() line. I think it would be better to keep the model.eval() line and set only the Dropout layers to train mode, as shown in this post. There might be some more changes that you need to make. You can find the Trainer source code here. Hope this helps.
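Instead of modifying evaluation_loop itself, one workaround is a small manual prediction loop that keeps the model in eval mode but switches only the Dropout modules back to train mode. This is a sketch under the assumptions of the question (M stochastic passes over the eval split), not the only way to do it:

import torch

def enable_mc_dropout(model):
    model.eval()                                   # everything in eval mode...
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()                         # ...except Dropout, which stays stochastic

model = trainer.model
loader = trainer.get_eval_dataloader(tokenized_datasets["eval"])
enable_mc_dropout(model)

mc_preds = []
with torch.no_grad():
    for _ in range(M):                             # M stochastic forward passes
        run_preds = []
        for batch in loader:
            batch = {k: v.to(model.device) for k, v in batch.items()}
            logits = model(**batch).logits
            run_preds.append(logits.argmax(dim=-1).cpu())
        mc_preds.append(torch.cat(run_preds))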
0
huggingface
Beginners
AdamW implementation
https://discuss.huggingface.co/t/adamw-implementation/8426
Hi, I was looking at the implementation of the AdamW optimizer and I didn't understand why you put the weight decay at the end. Shouldn't you swap this line: p.data.addcdiv_(exp_avg, denom, value=-step_size) and the weight decay part? Thanks. (The post included a screenshot of the AdamW algorithm from the "Decoupled Weight Decay Regularization" paper and the relevant source code of transformers.AdamW.)
The two lines subtract independent quantities from the model parameters, so executing them in either order gives the same result.
0
huggingface
Beginners
Fine-tune a model on translation:
https://discuss.huggingface.co/t/fine-tune-a-model-on-translation/8443
When passing everything along with the datasets to the Seq2SeqTrainer call, I got: HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/repos/create - Only regular characters and '-', '_', '.' accepted
You should post the relevant code when asking for help, otherwise no one can really understand what's going on. In this instance, the problem is very likely in how you define your Seq2SeqTrainingArguments, so please share how you defined those, or if you used the code as a script, what arguments you passed to it.
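For context, the error message points at the name that ends up being used for the Hub repository. A hedged sketch of arguments that avoid it (all values here are placeholders):

from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="marian-finetuned-en-ro",  # used as the repo name when pushing: only letters, digits, '-', '_', '.'
    evaluation_strategy="epoch",
    predict_with_generate=True,
    push_to_hub=True,
)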
0
huggingface
Beginners
Memory issues with model deployment
https://discuss.huggingface.co/t/memory-issues-with-model-deployment/748
With the help of awesome transformers library I have trained a multilabel classificator, which predicts topics for comments. I’m using TFDistilBertModel as a layer, wrapped in Keras functional API. So the architecture of a model is following: 2 input layers, then a layer, respawned as such: distilbert_layer = TFDistilBertModel.from_pretrained(’/path/to/distilbert_model’), then some LSTM and Dense layers. The model is trained, everything is perfect at this time. Now comes the prediction time. We serve our models on AWS containers, managed by kubernetes. Each time a user wants to generate topic of a new comment, we send a prediction task. To make a prediction I initiate model architecture, including respawning distilbert_layer as described above, then I load model weights, which I saved after training with the help of keras save_weights() function. Now the model is ready to predict. The issue: Each time a prediction task is sent, it consumes around 600mb of memory, which is not released after prediction. I assume there works a loadbalancer principle, so each time the task can be sent to a different process on the container, which uses different ram, which is why caching the model does not help. RAM quickly gets exhausted, then the worker just freezes and gets rebooted or initiates load of another worker. Has anyone experienced such issues? Any help is very much appreciated.
I am currently facing the same issue. I deployed a question answering model on a DigitalOcean server droplet. After sending it a few texts, the server stopped running and went down. My take is that ONNX Runtime would help; have you tried it? I am converting the model to ONNX and redeploying it to see how it performs. Please let me know if you have found a solution. Thanks.
0
huggingface
Beginners
Deploy multilingual sentence tansformer into cloud
https://discuss.huggingface.co/t/deploy-multilingual-sentence-tansformer-into-cloud/4354
Hi community, I am new to transformer models and particularly interested in a multilingual sentence transformer (stsb-xlm-r-multilingual, 1.1 GB). I have spent a lot of time searching for how to deploy this model in the cloud with the highest throughput (up to 1000 requests/sec) and lowest latency (<1 sec), while trying to save costs. It seems to be painful. Would anyone have advice on deployment requirements (CPU, GPU, RAM, …), clouds (AWS, GCP, …), and frameworks (TorchServe, Triton, …) to meet my needs? Thanks!
Hi, Would anyone have advice? Thanks !
0
huggingface
Beginners
Truncating sequence – within a pipeline
https://discuss.huggingface.co/t/truncating-sequence-within-a-pipeline/336
Hi all, thanks for making this forum! I have a list of texts, one of which apparently happens to be 516 tokens long. I have been using the feature-extraction pipeline to process the texts, just using the simple function: nlp = pipeline('feature-extraction') When it gets to the long text, I get an error: Token indices sequence length is longer than the specified maximum sequence length for this model (516 > 512). Running this sequence through the model will result in indexing errors Alternately, if I use the sentiment-analysis pipeline (created by nlp2 = pipeline('sentiment-analysis')), I do not get the error. Is there a way for me to put an argument in the pipeline function to make it truncate at the max model input length? I tried reading this, but I was not sure how to keep everything else in the pipeline the same/default, except for this truncation.
One quick follow-up: I just realized that the message earlier is just a warning, and not an error, which comes from the tokenizer portion. I then get an error in the model portion: IndexError: index out of range in self So I have two questions: Is there a way to just add an argument somewhere that does the truncation automatically? Is there a way for me to split out the tokenizer/model, truncate in the tokenizer, and then run the truncated output through the model? Thank you!
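For the second question, a sketch of the split tokenizer/model approach (the checkpoint is only an example standing in for the pipeline's default DistilBERT; newer pipeline versions also accept tokenizer kwargs such as truncation directly):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModel.from_pretrained("distilbert-base-cased")

text = "a very long document ..."
# truncation happens here in the tokenizer, so the model never sees more than 512 tokens
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state   # same tensor the feature-extraction pipeline returns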
0
huggingface
Beginners
Do we need to fine-tune Wav2Vec2FeatureExtractor?
https://discuss.huggingface.co/t/do-we-need-to-fine-tune-wav2vec2featureextractor/8260
Hi, I'm thinking about training a wav2vec2 model for Japanese, and I have a question. Do we need to train the Wav2Vec2FeatureExtractor as well? Or can we use the Wav2Vec2FeatureExtractor for any language? Thanks in advance.
[Found answer] It seems there are two kinds of feature extractors: the first one just normalizes the raw audio, and the second one is part of the architecture. Since the first one just normalizes raw audio, I don't think we need to train it.
0
huggingface
Beginners
Batch input for wav2vec2 pretraining
https://discuss.huggingface.co/t/batch-input-for-wav2vec2-pretraining/8133
Hi I have a question about how to pad audio when training wav2vec2 model. The tutorial explains how to handle batch size of one. input_values = processor(ds["speech"][0], return_tensors="pt").input_values # Batch size 1 logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) But I think I need to pad shorter audio when batch size is bigger than 2. Can I pad shorter audio with 0 ? or is there convenient function for that? Thanks in advance.
[Found Answer] It’s going to pad with 0.0. I found it…
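A minimal sketch of padded batching with the processor (variable names follow the tutorial snippet in the question; padding fills the shorter clips with 0.0 and, depending on the checkpoint, also returns a matching attention_mask):

import torch

batch = processor(
    ds["speech"][:4],          # a list of 1-D float arrays with different lengths
    sampling_rate=16_000,
    padding=True,
    return_tensors="pt",
)
logits = model(**batch).logits
predicted_ids = torch.argmax(logits, dim=-1)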
0
huggingface
Beginners
Is it possible to create a RΓ©sumΓ© parser using a Huggingface model?
https://discuss.huggingface.co/t/is-it-possible-to-create-a-resume-parser-using-a-huggingface-model/2840
In other words, is it possible to train a supervised transformer model to pull out specific fields from unstructured or semi-structured text, and if so, which pretrained model would be best for this? In the resume example, I'd want to input the text version of a person's resume and get JSON like the following as output: {'Education': ['BS Harvard University 2010', 'MS Stanford University 2012'], 'Experience': ['Microsoft, 2012-2016', 'Google, 2016 - Present']} Obviously, I'll need to label hundreds or thousands of resumes with their relevant Education and Experience fields before I'll have a model that is capable of the above. Here's another example of the solution that I'm talking about, although this person seems to be using GPT-3 and didn't provide any code. Is this something that any of the Hugging Face pipelines is capable of, and if so, which pipeline would be most appropriate?
Is there any reason you’re looking to do this with a transformer? This is a common vision problem, and transformers aren’t usually the first port of call for a problem like this.
0
huggingface
Beginners
How to freeze layers using trainer?
https://discuss.huggingface.co/t/how-to-freeze-layers-using-trainer/4702
Hey, I am trying to figure out how to freeze layers of a model and read that I had to use for param in model.base_model.parameters(): param.requires_grad = False if I wanted to freeze the encoder of a pretrained MLM for example. But how do I use this with the Trainer? I tried the following: from transformers import BertTokenizer, BertForMaskedLM. LineByLineTextDataset, DataCollatorForLanguageModeling, Trainer, TrainingArguments model = BertForMaskedLM.from_pretrained('bert-base-uncased') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') for param in model.base_model.parameters(): param.requires_grad = False dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=in_path, block_size=512, ) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) training_args = TrainingArguments( output_dir=out_path, overwrite_output_dir=True, num_train_epochs=25, per_device_train_batch_size=48, save_steps=500, save_total_limit=2, seed=1 ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset ) trainer.train() If the encoder was frozen I would expect it to produce the same outputs as a fresh instance of the pretrained encoder, but it doesn’t: model_fresh = BertForMaskedLM.from_pretrained('bert-base-uncased') inputs = tokenizer("This is a boring test sentence", return_tensors="pt") torch.all(model.bert(**inputs)[0].eq(model_fresh.bert(**inputs)[0])) --> tensor(false) So I must be doing somethin wrong here, I guess the Trainer is reseting the requires_grad attribute and I have to overwrite it somehow after I instanciated the trainer? Thanks in advance! Johannes
Looking at the source code of BertForMaskedLM, the base model is the "bert" attribute, not the "base_model" attribute. So if you want to freeze the parameters of the base model before training, you should type for param in model.bert.parameters(): param.requires_grad = False instead.
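A quick way to verify that the freeze took effect before launching training (a small check, not from the original thread):

# with the encoder frozen, only the MLM head parameters should remain trainable
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} / {total:,}")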
0
huggingface
Beginners
Get Optuna study from hyperparameter-search in Trainer?
https://discuss.huggingface.co/t/get-optuna-study-from-hyperparameter-search-in-trainer/4784
Hi there, I use hyperparameter-search in Trainer with Optuna and wanted to know if there is an easy option to access the study itself. From what I’ve read in the implementation, only the BestRun is returned by run_hp_search_optuna() and not the study itself. (I’m asking because I wanted to try out the plot functions of Optuna that work on the study)
Hello, I have this confusion too. Have you solved the problem? After some trying, I found this page helpful: Saving/Resuming Study with RDB Backend.
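Building on that idea, one possible sketch: extra keyword arguments passed to Trainer.hyperparameter_search are forwarded to optuna.create_study, so the study can be persisted to an RDB backend and reloaded afterwards for plotting (names and paths here are placeholders):

import optuna

best_run = trainer.hyperparameter_search(
    direction="maximize",
    backend="optuna",
    n_trials=20,
    study_name="my-hpo",
    storage="sqlite:///hpo.db",   # forwarded to optuna.create_study
    load_if_exists=True,
)

study = optuna.load_study(study_name="my-hpo", storage="sqlite:///hpo.db")
optuna.visualization.plot_optimization_history(study)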
0
huggingface
Beginners
Predicting only ” ” after training (S2T) Wav2Vec2CTC
https://discuss.huggingface.co/t/predicting-only-after-training-s2t-wav2vec2ctc/5702
My work so far: a Colab notebook (linked in the original thread). So I have copied the code from Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with πŸ€— Transformers and tried to implement things separately. But after I run trainer.train and try to get the predictions, my model just predicts "" (empty strings). So what does trainer.train do to my model? It should update the weights so as to increase performance, but it apparently is deleting the model… Can someone help me please?
I have the same issue: the loss is nan and after 1 epoch the model predicts empty strings. Please, have you found the root of the issue? Thanks.
0
huggingface
Beginners
β€œAttributeError: β€˜Seq2SeqTrainer’ object has no attribute β€˜repo’” after running trainer.push_to_hub()
https://discuss.huggingface.co/t/attributeerror-seq2seqtrainer-object-has-no-attribute-repo-after-running-trainer-push-to-hub/8274
Just fine-tuned pegasus-large on Google Colab Pro. I create a Seq2SeqTrainer like so: trainer = Seq2SeqTrainer( model=model, args=args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["test"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics And after running trainer.train(), I execute: !huggingface-cli login !pip install hf-lfs !git config --global user.email "jakemsc@example.com" !git config --global user.name "JakeMSc" trainer.push_to_hub("test_model") Which then gives this output: Saving model checkpoint to ./results Configuration saved in ./results/config.json Model weights saved in ./results/pytorch_model.bin tokenizer config file saved in ./results/tokenizer_config.json Special tokens file saved in ./results/special_tokens_map.json --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-21-a676d2b17752> in <module>() ----> 1 trainer.push_to_hub('test-model') /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in push_to_hub(self, commit_message, **kwargs) 2513 return 2514 -> 2515 return self.repo.push_to_hub(commit_message=commit_message) 2516 2517 # AttributeError: 'Seq2SeqTrainer' object has no attribute 'repo' System information: transformers version: 4.8.2 Platform: Linux-5.4.104Β±x86_64-with-Ubuntu-18.04-bionic Python version: 3.7.10 PyTorch version (GPU?): 1.9.0+cu102 (True) Tensorflow version (GPU?): 2.5.0 (True) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Yes Using distributed or parallel set-up in script?: No Can’t see this error being discussed anywhere. Any advice?
hey @JakeMSc this error is a bit odd because it suggests the Trainer.init_git_repo function is not being called which can only happen if TrainingArguments.push_to_hub is not set to True. in particular, i could not reproduce your error by pushing a seq2seq model to the hub with the official translation tutorial here: Google Colaboratory perhaps you can also try running the push to hub part of the tutorial notebook in your environment to see if it’s a problem in your configuration?
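For reference, the flag mentioned above lives in the training arguments; a hedged sketch (argument values are placeholders):

from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="pegasus-large-finetuned",
    push_to_hub=True,   # makes the Trainer initialise the git repo up front, so trainer.push_to_hub() has a repo to push to
)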
0
huggingface
Beginners
Convert bert tokenizer to onnx
https://discuss.huggingface.co/t/convert-bert-tokenizer-to-onnx/8299
I was referring to the blog post "Accelerate your NLP pipelines using Hugging Face Transformers and ONNX Runtime" (Medium, 17 Jun 2020, by Morgan Funtowicz from Hugging Face and Tianlei Wu from Microsoft) to convert a BERT model to ONNX. There, to run inference with the BERT tokenizer, I'll have to pass the 2-D arrays. Is there a way I can pass a sentence as input to the ONNX tokenizer and get encodings as output, so that I can use the model platform-independently?
hi @hasak, the tokenizer is independent of onnx / onnxruntime, so you could create a simple function that converts your string inputs into the numpy format that the onnxruntime session expects: tokenizer = ... def prepare_for_session(input: str): tokens = tokenizer(input, return_tensors="pt") return {k: v.cpu().detach().numpy() for k, v in tokens.items()} does this answer your question?
0
huggingface
Beginners
Set the format of the datasets to return pytorch tensors return list of tensors but why?
https://discuss.huggingface.co/t/set-the-format-of-the-datasets-to-return-pytorch-tensors-return-list-of-tensors-but-why/8084
Hello, I am folllowing this tutorial to use Fine-tuning a pretrained model β€” transformers 4.7.0 documentation 4 in order to use the flauBert to produce embeddings to train my classifier. In one of the lines , I have to set my dataset to pytorch tensors but when applying that line I get a list format which I do not understand. When printing element of the dataset I get tensors but when trying to pass the β€œinput_ids” to the model , it is actually a list so the model cannot treat the data. Could help me figure it out why I get list and not a pytoch tensors when using 'set_format to torch. def get_flaubert_layer(texte, path_to_lge): # last version tokenized_dataset, lge_size = preprocessed_with_flaubert(texte, path_to_lge) print("Set data to torch format...") tokenized_dataset.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask']) print("Format of data after -set.format- :", type(tokenized_dataset)) print("Format of input_ids after -set.format- :", type(tokenized_dataset['input_ids'])) print("Format of one element in the dataset", type(tokenized_dataset['input_ids'][0])) print(type(tokenized_dataset)) print('Loading model...') flaubert = FlaubertModel.from_pretrained(path_to_lge) hidden_state = flaubert(input_ids=tokenized_dataset['input_ids'], attention_mask=tokenized_dataset['attention_mask']) print(hidden_state[0][:, 0]) cls_embedding = hidden_state[0][:, 0] print(cls_embedding) # test with data path = '/gpfswork/rech/kpf/umg16uw/expe_5/model/sm' print("Load model...") flaubert = FlaubertModel.from_pretrained(path) emb, s = get_flaubert_layer(data1, path) Stacktrace and results 0%| | 0/4 [00:00<?, ?ba/s] 25%|β–ˆβ–ˆβ–Œ | 1/4 [00:00<00:01, 1.63ba/s] 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 2/4 [00:01<00:01, 1.72ba/s] 75%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 3/4 [00:01<00:00, 1.81ba/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:01<00:00, 2.41ba/s] Traceback (most recent call last): File "/gpfs7kw/linkhome/rech/genlig01/umg16uw/test/expe_5/traitements/remove_noise.py", line 130, in <module> emb, s = get_flaubert_layer(data1, path) File "/gpfs7kw/linkhome/rech/genlig01/umg16uw/test/expe_5/traitements/functions_for_processing.py", line 206, in get_flaubert_layer hidden_state = flaubert(input_ids=tokenized_dataset['input_ids'], File "/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/site-packages/transformers/models/flaubert/modeling_flaubert.py", line 174, in forward bs, slen = input_ids.size() AttributeError: 'list' object has no attribute 'size' srun: error: r11i0n6: task 0: Exited with exit code 1 srun: Terminating job step 381611.0 real 1m37.690s user 0m0.013s format of the data passed to the flaubert model : Loading tokenizer... Transform data to format Dataset... Set data to torch format... Format of data after -set.format- : <class 'datasets.arrow_dataset.Dataset'> Format of input_ids after -set.format- : <class 'list'> Format of one element in the dataset <class 'torch.Tensor'> As you see the columns input_ids is in a formal list and not tensors
The reason could be that during tokenization, padding and/or truncation is not enabled which results in encoded inputs with different lengths. This would prevent the type conversion to convert input_ids to a tensor since its elements are of different size and the result would be a list of tensors.
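A sketch of the fix that follows from that explanation (the "text" column name is an assumption): with padding and truncation enabled, every row has the same length, so set_format can return one stacked tensor instead of a list of per-row tensors.

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512)

tokenized_dataset = dataset.map(tokenize, batched=True)
tokenized_dataset.set_format(type="torch", columns=["input_ids", "token_type_ids", "attention_mask"])
print(type(tokenized_dataset["input_ids"]))   # with equal lengths, this converts to a single torch.Tensor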
0
huggingface
Beginners
How downstream tasks work
https://discuss.huggingface.co/t/how-downstream-tasks-work/8249
Hello. I was surprised that I only need to add a few lines of code to solve various tasks with the help of BERT. For example, below is the downstream-task head for the MLM one: (cls): BertOnlyMLMHead( (predictions): BertLMPredictionHead( (transform): BertPredictionHeadTransform( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) (decoder): Linear(in_features=768, out_features=30522, bias=True) ) ) or for the next sentence prediction one: (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (cls): BertOnlyNSPHead( (seq_relationship): Linear(in_features=768, out_features=2, bias=True) ) But on the other hand, it is not clear to me why it works… Why, for example, do I use the last_hidden_state from the BERT model when solving the MLM task, but the pooler output for BertForNextSentencePrediction? Or if I want to define my own downstream task, what should I do first? I found a lot of materials explaining how BERT works, but nothing explaining how to build a downstream task.
During the pretraining procedure of BERT, there are two tasks: Masked Language Modeling and Next Sentence Prediction. Masked Language Modeling requires the model to make predictions for every token in the input, including the [MASK] tokens, which is handled by a head that generates outputs over the entire vocabulary, hence the size 30522. The Next Sentence Prediction task uses the pooler layer to do a binary classification, namely whether the second sequence is a suitable continuation of the first one. Therefore, BertForPreTraining contains a BertPreTrainingHeads layer including both a language modeling head and a next-sentence prediction head. However, for downstream token-level tasks using BertForMaskedLM or BertForTokenClassification, the pooler layer is discarded since there is no need for sequence classification. Conversely, for downstream sequence-level tasks using BertForNextSentencePrediction or BertForSequenceClassification, the pooler layer is retained.
0
huggingface
Beginners
β€œDump_all() got an unexpected keyword argument β€˜sort_keys’” after running trainer.push_to_hub()
https://discuss.huggingface.co/t/dump-all-got-an-unexpected-keyword-argument-sort-keys-after-running-trainer-push-to-hub/8269
Just fine-tuned pegasus-large on Google Colab Pro with trainer.train(), then executed the following commands: !huggingface-cli login !pip install hf-lfs !git config --global user.email "jakemsc@example.com" !git config --global user.name "JakeMSc" trainer.push_to_hub("test_model") Which leads to the following error: TypeError Traceback (most recent call last) <ipython-input-29-0cd57aa71f88> in <module>() ----> 1 trainer.push_to_hub("test_model") 3 frames /usr/local/lib/python3.7/dist-packages/yaml/__init__.py in dump(data, stream, Dumper, **kwds) 198 If stream is None, return the produced string instead. 199 """ --> 200 return dump_all([data], stream, Dumper=Dumper, **kwds) 201 202 def safe_dump_all(documents, stream=None, **kwds): TypeError: dump_all() got an unexpected keyword argument 'sort_keys' Here are details of my system installation: transformers version: 4.8.2 Platform: Linux-5.4.104Β±x86_64-with-Ubuntu-18.04-bionic Python version: 3.7.10 PyTorch version (GPU?): 1.9.0+cu102 (True) Tensorflow version (GPU?): 2.5.0 (True) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Yes Using distributed or parallel set-up in script?: No Any ideas on how to resolve this?
What is your version of pyyaml? I think you need a more recent version (pip install --upgrade pyyaml). Will adjust the setup.
0
huggingface
Beginners
Metric computation code
https://discuss.huggingface.co/t/metric-computation-code/8266
Hi, can somebody point out where I can find the metric computation code in HF? e.g. CIDEr, METEOR, ROUGE, BLEU. Thanks.
hey @zuujhyt you can find all the scripts to compute metrics in the datasets library here: datasets/metrics at master Β· huggingface/datasets Β· GitHub
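For example, loading and using two of them (a small usage sketch, not part of the original reply):

from datasets import load_metric

rouge = load_metric("rouge")
sacrebleu = load_metric("sacrebleu")

rouge_scores = rouge.compute(predictions=["the cat sat on the mat"],
                             references=["the cat sat on the mat"])
bleu_scores = sacrebleu.compute(predictions=["the cat sat on the mat"],
                                references=[["the cat sat on the mat"]])  # sacrebleu expects a list of reference lists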
0
huggingface
Beginners
Defining a custom dataset for fine-tuning translation
https://discuss.huggingface.co/t/defining-a-custom-dataset-for-fine-tuning-translation/6913
I’m a first time user of the huggingface library. I am struggling to convert my custom dataset into one that can be used by the hugginface trainer for translation task with MBART-50 2. The languages I am trying to train on are a part of the pre-trained model, I am simply trying to improve the model’s translation capability for that specific pair. My data is in the form of two plaintext files, with each file containing the sentences in one of the languages (sentences at the same line number form a pair.) I have tried using the csv file data loader, but I am unsure of what the column names need to be so that the tokenizer can identify which is the source and which the target. Also, trainer.train() function needs to correctly pick up on the same detail. There is also the issue that I’m trying to train the model to translate back and forth using the same dataset and I am unsure on how to accommodate for that. In the simpletransformers library, with MT5, I could easily do it by specifying the task using a prefix column. Can someone point me towards a tutorial to load custom datasets for a translation task, similar to the tutorial given for loading a custom dataset for the sequence classification task 24? I assume the method is the same for any of the translation oriented models, so any assistance would be appreciated. If such a tutorial exists for other tasks such as summarization and sentiment analysis, those would also be very helpful. I apologize if it is an overly trivial question.
This is exactly what I am trying to do too with no luck yet. Please let me know if you have found a way to do this.
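For anyone landing here, one possible way to build such a dataset from two aligned plaintext files is sketched below. File names and language codes are placeholders; the nested {"translation": {...}} layout is the one the translation fine-tuning examples use, and training both directions would just mean also adding the swapped pairs (with the appropriate src_lang/tgt_lang set on the MBART-50 tokenizer):

from datasets import Dataset

with open("train.en", encoding="utf-8") as f_src, open("train.ro", encoding="utf-8") as f_tgt:
    translations = [{"en": src.strip(), "ro": tgt.strip()} for src, tgt in zip(f_src, f_tgt)]

raw_dataset = Dataset.from_dict({"translation": translations})
print(raw_dataset[0])   # {'translation': {'en': '...', 'ro': '...'}}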
0
huggingface
Beginners
Add a classification head to a fine-tuned language model
https://discuss.huggingface.co/t/add-a-classification-head-to-a-fine-tuned-language-model/8176
We have fine-tuned a GPT-2 model with a language model head on medical triage text, and would like to use this model as a classifier. However, as far as I can tell, the AutoModel Hugging Face classes allow me to have either an LM head or a classifier head, etc., but I don't see a way to add a classifier on top of a fine-tuned LM. Am I mistaken in my understanding of the possibilities? We want to preserve the fine-tuning we have done for the LM task, so we don't want to swap this head for a classifier and lose all of those weights. I suppose we could unlock the top layers of the GPT-2 model and try to push the training back into the model, but I suspect that would soon become untenable for our somewhat limited resources. I see this question has been asked here, and my fellow worker has posted a follow-up question in that thread, but we didn't see any answer we could relate to, so I am asking again in this new topic.
You need to use the AutoModelForSequenceClassification class to add a classification head on top of your pretrained model.
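A sketch of what that looks like (the path and label count are placeholders): the fine-tuned GPT-2 body and its weights are loaded, and only the new classification head starts from scratch, so swapping the head does not discard the LM fine-tuning, only the LM head itself.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("path/to/finetuned-gpt2-lm", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("path/to/finetuned-gpt2-lm")

# GPT-2 has no padding token, which batched sequence classification needs:
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id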
0
huggingface
Beginners
What should be shifted for decoder input for Bart
https://discuss.huggingface.co/t/what-should-be-shifted-for-decoder-input-for-bart/8175
Hi, in HF's docs, regarding decoder_input_ids it says ... create this tensor by shifting the `input_ids` to the right..... Shouldn't it be shifting the labels, because the input is noisy? Thank you.
I think this is copy pasted from the causal language modeling doc (for which your labels are your inputs shifted to the right).
0
huggingface
Beginners
Getting the MLM accuracy for the BERT model I am training from scratch
https://discuss.huggingface.co/t/getting-the-mlm-accuracy-for-the-bert-model-i-am-training-from-scratch/6795
Hi I am training a BERTforMaskedLM model from scratch. This is my tokenizer (previously trained) tokenizer = BertTokenizer('vocab.txt') This is my config: config = BertConfig( vocab_size=20000, max_position_embeddings=258 ) This is how I load the model from the last checkpoint: model = BertForMaskedLM.from_pretrained("/BERT/bert-checkpoints/checkpoint-1726500",config=config) My data collator: data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) The compute metrics function: from datasets import load_metric metric = load_metric("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return {metric.compute(predictions=predictions, references=labels)} The training arguments: from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./bert-checkpoints", overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=32, per_device_eval_batch_size=32, save_steps=500, save_total_limit=2, prediction_loss_only=True, evaluation_strategy = 'steps' ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, eval_dataset=dataset, compute_metrics=compute_metrics ) And finally, I use this to train the model (which works fine): trainer.train() However, when the model is being trained, I only see 3 metrics: Step, Training Loss, and Validation Loss. I also want to see the β€œAccuracy of the Masked Language Model” (MLM accuracy). How should I do that? Note that I have already defined the β€œcompute_metrics” function which has the β€œaccuracy”. I do not know what is wrong. But the accuracy is not being shown. Note: by the way, my dataset is an instance of the from torch.utils.data import Dataset object which has a member called β€œexamples”. For instance, dataset.examples[0] is [2, 507, 157, 3656, 117, 2100, 521, 122, 280, 3]
How many total steps are there in your training? Since you chose the "steps" strategy, I wonder if it’s just because evaluation is never run?
0
huggingface
Beginners
Summarization : Conversation
https://discuss.huggingface.co/t/summarization-conversation/8117
I am new to this area. Please advise some good models for generating a summary of a conversation between two people. Thank you!
hey @MattJan a good place to start would be by looking at models fine-tuned on the samsum dataset (dialogues between two people + their summary): Hugging Face – The AI community building the future. 12 if you want to fine-tune your own model, a good starting point would be a pegasus model that has already been trained for summarisation, e.g. google/pegasus-cnn_dailymail · Hugging Face 13
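For quick experimentation, a hedged sketch of running such a checkpoint through the summarization pipeline (the dialogue text below is made up; a samsum-fine-tuned checkpoint would likely handle dialogues better than the news-trained model shown here):
from transformers import pipeline

summarizer = pipeline("summarization", model="google/pegasus-cnn_dailymail")
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])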
0
huggingface
Beginners
Training a Tokenizer on a Streamed Dataset
https://discuss.huggingface.co/t/training-a-tokenizer-on-a-streamed-dataset/8026
Hi, I’m trying to train a tokenizer on a dataset that uses streaming. I followed the instructions provided here 4, with the addition of streaming=True during the dataset loading step. However, it quickly failed as the IterableDataset class does not have a length property (unlike the normal Dataset class). How can I work around this issue without having to download the dataset file entirely, i.e. purely relying on dataset streaming? Here’s the Colab link to reproduce the error. Any help would be greatly appreciated. Thanks! Note: This issue 4 seems to be related to mine.
Hi ! You can follow the instructions here 4 and use this batch iterator instead: def batch_iterator(batch_size=1000): batch = [] for example in dataset: batch.append(example["text"]) if len(batch) == batch_size: yield batch batch = [] if batch: # yield last batch yield batch
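To plug that generator into tokenizer training, a minimal sketch assuming a fast tokenizer (the gpt2 starting tokenizer and the 52,000 vocab size are arbitrary choices, not requirements):
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any fast tokenizer can serve as the starting point
# batch_iterator() is the generator defined above, driven by the streamed dataset
new_tokenizer = old_tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=52_000)
new_tokenizer.save_pretrained("my-new-tokenizer")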
0
huggingface
Beginners
Loading dataset with streaming model
https://discuss.huggingface.co/t/loading-dataset-with-streaming-model/7826
I am trying to load dataset in streaming model. The current datasets version I am using is 1.8. But it is producing the following error. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-3-518060a18801> in <module>() ----> 1 dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) 3 frames /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 339 if value is not None: 340 if not hasattr(builder_config, key): --> 341 raise ValueError(f"BuilderConfig {builder_config} doesn't have a '{key}' key.") 342 setattr(builder_config, key, value) 343 ValueError: BuilderConfig OscarConfig(name='unshuffled_deduplicated_en', version=1.0.0, data_dir=None, data_files=None, description='Unshuffled and deduplicated, English OSCAR dataset') doesn't have a 'streaming' key. Quick notebook link 2.
@valhalla Can you take a look?
0
huggingface
Beginners
ValueError: not enough values to unpack (expected 2, got 1)
https://discuss.huggingface.co/t/valueerror-not-enough-values-to-unpack-expected-2-got-1/3516
i am trying to create xlnet classification def __init__(self,n_classes): super(SentimentClassifier, self).__init__() self.xlnet = XLNetModel.from_pretrained(PRE_TRAINED_MODEL_NAME) self.drop = nn.Dropout(p=0.3) self.out = nn.Linear(self.xlnet.config.hidden_size, n_classes) def forward(self, input_ids, attention_mask): _, pooled_output = self.xlnet( input_ids=input_ids, attention_mask=attention_mask) output = self.drop(pooled_output) return self.out(output) class Classification(Dataset): def __init__(self, texts, labels, tokenizer, max_len): self.texts = texts self.labels = labels self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.texts) def __getitem__(self, item): text = str(self.texts[item]) label = self.labels[item] encoding = self.tokenizer.encode_plus( text, add_special_tokens=True, max_length=self.max_len, return_token_type_ids=False, pad_to_max_length=False, return_attention_mask=True, return_tensors='pt', ) return { 'review_text': text, 'input_ids': encoding['input_ids'].flatten(), 'attention_mask': encoding['attention_mask'].flatten(), 'labelss': torch.tensor(label, dtype=torch.long) } def train_epoch( model, data_loader, loss_fn, optimizer, device, scheduler, n_examples ): model = xlnet_model.train() losses = [] correct_predictions = 0 for d in data_loader: input_ids = d["input_ids"].reshape(4,512).to(device) print(d['input_ids'].shape) attention_mask = d["attention_mask"].to(device) labels = d["labels"].to(device) outputs = xlnet_model(input_ids=input_ids, attention_mask=attention_mask) _, preds = torch.max(outputs, dim=1) loss = loss_fn(outputs, labels) correct_predictions += torch.sum(preds == labels) losses.append(loss.item()) loss.backward() nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) optimizer.step() scheduler.step() optimizer.zero_grad() return correct_predictions.double() / n_examples, np.mean(loss)```
Hi @sru, Can you please include the stack trace so we can help out more?
0
huggingface
Beginners
Download model without the trained weights
https://discuss.huggingface.co/t/download-model-without-the-trained-weights/7613
Hey community, i hope you’re doing fine. I’m new to the huggingface framework, so my question is if there any way to download hugging face models(like bert…) without it’s pretrained weights? the architecture only? i’m using pytorch Thank you so much.
You can construct one using your own configuration:
from transformers import BertConfig, BertForMaskedLM

config = BertConfig()  # set vocab_size, num_attention_heads, hidden_size, etc. here
model = BertForMaskedLM(config=config)
In the config you provide the parameters of the model - the number of attention heads, the hidden and feed-forward sizes, and so on. This builds the architecture with randomly initialized weights, so you can train from scratch and use BERT however you wish without downloading its pre-trained weights.
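As an alternative sketch, you can also download only the configuration of an existing checkpoint (no weights) and build the architecture from it:
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("bert-base-uncased")  # downloads only the small config file, not the weights
model = AutoModel.from_config(config)                     # architecture with randomly initialized parameters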
0
huggingface
Beginners
Use Pretrained T5 for Summarization
https://discuss.huggingface.co/t/use-pretrained-t5-for-summarization/1992
Hello, Is there any code snippet of how to use T5 pretrained model in order to do summarization?
I used the following code to do my task:
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', return_dict=True)

input = "This is a summarization example. This is a large sentence."
input_ids = tokenizer("summarize: " + input, return_tensors="pt").input_ids  # Batch size 1
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)
0
huggingface
Beginners
Separate LM fine tuning and classification head training
https://discuss.huggingface.co/t/separate-lm-fine-tuning-and-classification-head-training/1404
I have a large text corpus, and a small subset of it that is labelled for a mutli-label text classification task. I’ve seen many (excellent!) examples of fine-tuning different models for sequence classification, but I couldn’t find one in which the training is separated to two distinct stages: Fine-tune a specific language model on a specific corpus (unsupervised). Train a sequence classification model on top of it on a labelled subset of the original dataset (supervised). I assume this is a pretty common (and simple) scenario, but I couldn’t find any relevant docs. If anyone can provide any pointers, it would be much appreciated!
Hi @adamh You can use the run_language_modeling script here 16 to finetune the pre-trained model, for example BertForMaskedLM. Then you should be able to load the resulting checkpoint with the BertForSequenceClassification model, which will take the base model and add a classification head on top, which you can then fine-tune for classification.
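A minimal sketch of stage 2, assuming stage 1 wrote its checkpoint to a local directory (the path and num_labels are placeholders): the classification head is newly initialized, while the encoder weights come from your fine-tuned LM. For a true multi-label setup you would additionally need a sigmoid/BCE-style loss rather than the default cross-entropy.
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "my-domain-bert-mlm",   # placeholder: output dir of the MLM fine-tuning step
    num_labels=5,           # placeholder: number of labels in your task
)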
0
huggingface
Beginners
UnicodeDecodeError with xprophetnet-large-wiki100-cased-xglue-qg model
https://discuss.huggingface.co/t/unicodedecodeerror-with-xprophetnet-large-wiki100-cased-xglue-qg-model/7539
Hi I’m new to the transformer model and when I run this code from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-qg') tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-qg') ```from https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased-xglue-qg. I got an "UnicodeDecodeError: 'utf-8' codec can't decode byte 0xaf in position 51: invalid start byte" error when I run "tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-qg')" Can any one help? Thank you!
Slight hack to fix the decode error - I tried editing ProphetNetTokenizer itself, specifically the load_vocab method, and changed the encoding to 'latin-1'. (screenshot of the edited load_vocab method omitted) The edit is made here 2; I did it locally, so I’m not sure whether this could be done for Colab notebooks. This stackoverflow page 3 gave some suggestions for a fix, of which the latin-1 encoding seemed to work. Caveat: I haven’t tried using the tokenizer any further than just getting the ProphetNetTokenizer.from_pretrained() step working. It gave a warning of "Special tokens have been added in the vocabulary, make sure the associated word embedding are fine-tuned or trained.", so there may be some extra steps needed after. The overall issue is that something in the vocabulary file cannot be decoded as utf-8. Hope this helps
0
huggingface
Beginners
Running custom modifications in modeling_bart.py
https://discuss.huggingface.co/t/running-custom-modifications-in-modeling-bart-py/7448
Hi, I am planning to modify the output attention weights from the decoder in modeling_bart.py for conditional abstractive summarization. To achieve this I changed the transformers module to mytransformers and made the changes I wanted in the modeling_bart.py script. Now, when I run the run_summarization.py, I cannot import the mytransformers with a FILENOTFOUND Error and I am prompted to install transformers. If I install/import transformers I will be running the original bart instead of my modified code. Can you help me understand how to run my custom code changes in the transformers library ?
You can’t add files to the library like that, but you can have your updated model in the same folder as your example script with the name mytransformers.py which will then allow you to import from it.
0
huggingface
Beginners
Understanding data of dataset_infos.json
https://discuss.huggingface.co/t/understanding-data-of-dataset-infos-json/7549
Hi everyone, I was exploring dataset_infos.json, and I couldn’t figure out what some of the keys represent in the file. Could someone please point me to a reference I could use as column descriptions? Examples of confusing keys: “download_size”, “dataset_size”, “size_in_bytes”, “post_processing_size” and “num_bytes” (under splits). Another set of keys I couldn’t interpret were “post_processed” and “supervised_keys”. Is the structure documented anywhere, or is diving into the code behind the datasets-cli test command the correct approach to figure this out? Example from Cifar-10 (canonical): (screenshot of the Cifar-10 dataset_infos.json omitted)
hey @dk-crazydiv you can find a description of all the DatasetInfo fields in the docs: Main classes β€” datasets 1.8.0 documentation 2 if something is unclear / could be improved, feel free to open a pr!
0
huggingface
Beginners
MLM: IndexError: index out of bounds
https://discuss.huggingface.co/t/mlm-indexerror-index-out-of-bounds/6722
Hi, I am following this tutorial on masked language modelling using my own dataset: notebooks/language_modeling.ipynb at master Β· huggingface/notebooks Β· GitHub 2, and I am coming across this error: Input: lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=1000, num_proc=4, ) Output: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/multiprocess/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 186, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1977, in _map_single writer.write_batch(batch) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 383, in write_batch pa_table = pa.Table.from_pydict(typed_sequence_examples) File "pyarrow/table.pxi", line 1559, in pyarrow.lib.Table.from_pydict arrays.append(asarray(v)) File "pyarrow/array.pxi", line 331, in pyarrow.lib.asarray return array(values, type=type) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array return _handle_arrow_array_protocol(obj, type, mask, size) File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol res = obj.__arrow_array__(type=type) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 100, in __arrow_array__ if trying_type and out[0].as_py() != self.data[0]: File "pyarrow/array.pxi", line 1067, in pyarrow.lib.Array.__getitem__ return self.getitem(_normalize_index(key, self.length())) File "pyarrow/array.pxi", line 549, in pyarrow.lib._normalize_index raise IndexError("index out of bounds") IndexError: index out of bounds """ The above exception was the direct cause of the following exception: IndexError Traceback (most recent call last) <ipython-input-34-e35eeb51570c> in <module>() 3 batched=True, 4 batch_size=1000, ----> 5 num_proc=4, 6 ) 16 frames /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._normalize_index() 547 raise IndexError("index out of bounds") 548 elif index >= length: --> 549 raise IndexError("index out of bounds") 550 return index 551 IndexError: index out of bounds
You should deactivate multiprocessing to have a clearer error message (remove num_proc=4). It looks like an indexing error in your dataset.
0
huggingface
Beginners
Does task specific prefix matters for T5 fine-tuning?
https://discuss.huggingface.co/t/does-task-specific-prefix-matters-for-t5-fine-tuning/501
If I understand correctly pre-trained T5 models were pre-trained with an unsupervised objective without any task specific prefix like β€œtranslate”, β€œsummarize”, etc. Is it important then to create my summarization dataset for fine-tuning in a way that every input starts with "summarize: "?
I think it is important, but am not totally certain why. You could test it pretty easily I bet.
0
huggingface
Beginners
Why do I get β€˜Δ β€™ when adding emojis to the tokenizer?
https://discuss.huggingface.co/t/why-do-i-get-g-when-adding-emojis-to-the-tokenizer/7056
Hello, I have added custom tokens to my tokenizer, which are emojis. This is the code I have used, which adds the new tokens: model = AutoModelForMaskedLM.from_pretrained(model_checkpoint) num_added_toks = tokenizer.add_tokens(['πŸ‘']) print('We have added', num_added_toks, 'tokens') model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e. the length of the tokenizer Output: We have added 1 tokens Embedding(50270, 768) Though, when I try to tokenize a phrase using this code: print(tokenizer.tokenize('Congrats πŸ‘')) I get this output with that strange 'Δ ' symbol: ['Cong', 'rats', 'Δ ', 'πŸ‘']
github.com/pytorch/fairseq - issue “In the vocab of bart.bpe.bpe.decoder, what does Ġ mean for those words prefixed with 'Ġ'?” (opened Feb 2020, closed Feb 2020). In short, in the byte-level BPE vocabulary used by GPT-2/RoBERTa-style tokenizers (and BART), Ġ is the marker for a preceding space, so tokens prefixed with Ġ start a new word while unprefixed tokens continue the previous one.
0
huggingface
Beginners
XLM-R classifier predictions produce errors
https://discuss.huggingface.co/t/xlm-r-classifier-predictions-produce-errors/7292
Hi, I am using tf-xlm-r-base model for a sentiment classification (multi-class) task with 4 classes. I used both trainer() api and keras native method. Initially, I got some acceptable result but later it predicts only one class for the same data set. I am following this guide. Below are my outputs and code. My inputs and labels are similar to the example mentioned in the guide(mentioned above) having class labels 0,1,2,3. I am trying this on a dataset of 1000 training data points. I also tried with 7K and 14K training data sets which did not solve the error. Only the single predicted class changed. X_train, X_test, y_train, y_test = train_test_split(comment_texts, comment_labels, test_size=0.1, random_state=0) model_checkpoint = "jplu/tf-xlm-roberta-base" from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) train_encodings = tokenizer(X_train, truncation=True, padding=True) val_encodings = tokenizer(val_texts, truncation=True, padding=True) test_encodings = tokenizer(X_test, truncation=True, padding=True) train_dataset = tf.data.Dataset.from_tensor_slices(( # convert to dataset objects dict(train_encodings), y_train )) val_dataset = tf.data.Dataset.from_tensor_slices(( dict(val_encodings), val_labels )) test_dataset = tf.data.Dataset.from_tensor_slices(( dict(test_encodings), y_test )) from transformers import TFAutoModelForSequenceClassification, TFTrainingArguments, TFTrainer num_labels=4# model = TFAutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels) Model: "tfxlm_roberta_for_sequence_classification_3" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= roberta (TFRobertaMainLayer) multiple 277453056 _________________________________________________________________ classifier (TFRobertaClassif multiple 593668 ================================================================= Total params: 278,046,724 Trainable params: 278,046,724 Non-trainable params: 0 from sklearn.metrics import accuracy_score,precision_score,recall_score def compute_metrics(p): pred, labels = p pred = np.argmax(pred, axis=1) accuracy = accuracy_score(y_true=labels, y_pred=pred) recall = recall_score(y_true=labels, y_pred=pred, average='weighted') precision = precision_score(y_true=labels, y_pred=pred, average='weighted') #f1 = f1_score(y_true=labels, y_pred=pred) return {"accuracy": accuracy, "precision": precision, "recall": recall} raining_args = TFTrainingArguments( output_dir='/content/drive/MyDrive/test_transformer/results', # output directory num_train_epochs=3, # total number of training epochs evaluation_strategy = "epoch", per_device_train_batch_size=8, # batch size per device during training per_device_eval_batch_size=8, # batch size for evaluation warmup_steps=100, # number of warmup steps for learning rate scheduler weight_decay=0.001, # strength of weight decay logging_dir='/content/drive/MyDrive/test_transformer/logs', # directory for storing logs logging_steps=10, ) with training_args.strategy.scope(): model = TFAutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels, output_attentions=True) trainer = TFTrainer( model=model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset, # evaluation dataset # should not be the test dataset although I am 
using it compute_metrics=compute_metrics ) trainer.train() For Keras native training, optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5) model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy'])# can also use any keras loss fn his=model.fit(train_dataset.shuffle(1000).batch(16), epochs=10, batch_size=16) out=model.predict(test_dataset.batch(16)) y_pred=np.array((np.argmax((tf.nn.softmax(out.logits,axis=-1)),axis=-1))) which gives results for a test dataset like, precision recall f1-score support 0 0.0000 0.0000 0.0000 31 1 0.4200 1.0000 0.5915 42 2 0.0000 0.0000 0.0000 26 3 0.0000 0.0000 0.0000 1 accuracy 0.4200 100 macro avg 0.1050 0.2500 0.1479 100 weighted avg 0.1764 0.4200 0.2485 100 The logits outputs tally with these, giving high value for a one particular class, PredictionOutput(predictions=array([[ 1.0289115 , 1.1212691 , 0.35617405, -2.9331322 ], [ 1.0295717 , 1.1219745 , 0.35612756, -2.934678 ], [ 1.0289289 , 1.1211748 , 0.3562293 , -2.9330313 ], [ 1.0290331 , 1.121373 , 0.35619155, -2.9334006 ], [ 1.0291184 , 1.121406 , 0.3562282 , -2.933531 ], [ 1.0291708 , 1.121502 , 0.35619184, -2.933691 ], [ 1.0289183 , 1.121173 , 0.35626405, -2.933055 ], [ 1.0282533 , 1.1204101 , 0.35632664, -2.9314802 ], [ 1.0289795 , 1.1212872 , 0.35624185, -2.933268 ], [ 1.0289418 , 1.1212839 , 0.35617486, -2.933162 ], [ 1.0290436 , 1.1213527 , 0.35621163, -2.933384 ], [ 1.0290844 , 1.1214646 , 0.35614225, -2.9335442 ], [ 1.0289063 , 1.1212507 , 0.35618058, -2.9330966 ], [ 1.0291401 , 1.1214608 , 0.3562286 , -2.9336445 ], [ 1.0290921 , 1.1214745 , 0.3561555 , -2.9335837 ], [ 1.0290128 , 1.1212608 , 0.35625243, -2.9332502 ], [ 1.0291014 , 1.1213605 , 0.3562572 , -2.9334743 ], [ 1.0291253 , 1.1214253 , 0.35621867, -2.9335544 ], [ 1.0290672 , 1.1213386 , 0.35625044, -2.933417 ], [ 1.0289694 , 1.1212484 , 0.35623503, -2.9331877 ], [ 1.0291487 , 1.1214706 , 0.35619804, -2.93363 ], [ 1.0289078 , 1.1212181 , 0.3562105 , -2.933069 ], [ 1.0291108 , 1.1214286 , 0.35619968, -2.9335427 ], [ 1.0291494 , 1.121471 , 0.35619843, -2.9336333 ], [ 1.0291737 , 1.1215005 , 0.35622108, -2.9337227 ], [ 1.0291069 , 1.1215074 , 0.3561285 , -2.9336267 ], [ 1.0289413 , 1.1212887 , 0.3561762 , -2.9331977 ], [ 1.0293871 , 1.1218258 , 0.35609013, -2.9342983 ], [ 1.0290092 , 1.1213552 , 0.3561896 , -2.9333515 ], [ 1.0283587 , 1.1205777 , 0.3562444 , -2.93175 ], [ 1.0289493 , 1.1212447 , 0.3562152 , -2.9331517 ], [ 1.0290315 , 1.1213847 , 0.3561749 , -2.93341 ], [ 1.0290761 , 1.1214103 , 0.35618582, -2.9334972 ], [ 1.0289084 , 1.1212412 , 0.35619718, -2.9331186 ], [ 1.0290228 , 1.1213146 , 0.35622573, -2.933303 ], [ 1.028934 , 1.1211748 , 0.3562406 , -2.933056 ], [ 1.0292109 , 1.1215464 , 0.356189 , -2.9337885 ], [ 1.0292367 , 1.1215956 , 0.3561677 , -2.9338856 ], [ 1.0290639 , 1.1214087 , 0.35618123, -2.9334571 ], [ 1.0290803 , 1.1214298 , 0.35617745, -2.933514 ], [ 1.0291355 , 1.1214267 , 0.35623217, -2.9335864 ], [ 1.0289494 , 1.1212884 , 0.35619822, -2.9332087 ], [ 1.0293379 , 1.1216819 , 0.35617125, -2.9340885 ], [ 1.0293596 , 1.1217119 , 0.35619378, -2.934169 ], [ 1.0291876 , 1.1215029 , 0.35619885, -2.9337032 ], [ 1.0288249 , 1.1211317 , 0.35620478, -2.9328792 ], [ 1.0291348 , 1.1215305 , 0.35612702, -2.93368 ], [ 1.0292716 , 1.1215887 , 0.3561728 , -2.9338777 ], [ 1.029365 , 1.1217095 , 0.35616824, -2.9341416 ], [ 1.0290942 , 1.1214304 , 0.35618833, -2.9335175 ], [ 1.0289066 , 1.1211537 , 0.3562669 , -2.933033 ], [ 1.0291075 , 1.121446 , 0.35618696, -2.9335594 ], [ 1.0294148 , 1.121831 , 0.3561264 
, -2.9343624 ], [ 1.0292593 , 1.1216727 , 0.35611537, -2.9339767 ], [ 1.0290027 , 1.1213193 , 0.35620502, -2.9333057 ], [ 1.0292455 , 1.1215812 , 0.3561885 , -2.933863 ], [ 1.0291928 , 1.1215296 , 0.35619065, -2.9337544 ], [ 1.0289562 , 1.12131 , 0.3561774 , -2.9332416 ], [ 1.0292183 , 1.1215689 , 0.35616964, -2.9338312 ], [ 1.0292469 , 1.121585 , 0.35619286, -2.9338806 ], [ 1.0292145 , 1.1215852 , 0.35615656, -2.9338334 ], [ 1.0290105 , 1.1213479 , 0.3561993 , -2.933348 ], [ 1.0290651 , 1.1213914 , 0.35617766, -2.9334235 ], [ 1.0290747 , 1.1214159 , 0.3561734 , -2.9334788 ], [ 1.0290564 , 1.1214483 , 0.35613787, -2.933495 ], [ 1.0290995 , 1.1214293 , 0.3561933 , -2.9335384 ], [ 1.0292302 , 1.1215992 , 0.356152 , -2.933872 ], [ 1.0294645 , 1.1218221 , 0.35615602, -2.9344025 ], [ 1.0291889 , 1.1215235 , 0.35620224, -2.9337523 ], [ 1.0292081 , 1.1215614 , 0.35616958, -2.9337964 ], [ 1.0291911 , 1.1215253 , 0.35619062, -2.9337654 ], [ 1.028842 , 1.1211593 , 0.35621268, -2.9329333 ], [ 1.0290655 , 1.1213945 , 0.3562099 , -2.933466 ], [ 1.027889 , 1.1199136 , 0.35638046, -2.9304833 ], [ 1.0290161 , 1.1213044 , 0.35623237, -2.9333255 ], [ 1.0292108 , 1.1215161 , 0.3562079 , -2.9337723 ], [ 1.0290184 , 1.1213537 , 0.35618034, -2.9333456 ], [ 1.0289462 , 1.1212633 , 0.35620436, -2.9331503 ], [ 1.0292357 , 1.1216115 , 0.3561552 , -2.933901 ], [ 1.0290319 , 1.1213702 , 0.35618973, -2.933375 ], [ 1.0292478 , 1.1216022 , 0.35619214, -2.93391 ], [ 1.0288923 , 1.1211102 , 0.35628372, -2.932956 ], [ 1.0272568 , 1.1193739 , 0.3562229 , -2.929129 ], [ 1.0290308 , 1.1214021 , 0.35614964, -2.9334133 ], [ 1.0291088 , 1.1214335 , 0.356207 , -2.933562 ], [ 1.028747 , 1.1209638 , 0.3562615 , -2.932617 ], [ 1.0292501 , 1.1215758 , 0.35620457, -2.933898 ], [ 1.0288965 , 1.1211853 , 0.3562436 , -2.933039 ], [ 1.0290757 , 1.121396 , 0.3562069 , -2.9334598 ], [ 1.0290804 , 1.1213692 , 0.3562515 , -2.9334688 ], [ 1.0292016 , 1.1215339 , 0.35618612, -2.9337628 ], [ 1.0291742 , 1.1215011 , 0.35619456, -2.9337041 ], [ 1.0290596 , 1.1214275 , 0.35615835, -2.9334834 ], [ 1.0293068 , 1.1216859 , 0.35614944, -2.9340594 ], [ 1.0291089 , 1.1214725 , 0.35615313, -2.9335957 ], [ 1.0290648 , 1.1213634 , 0.35623825, -2.933438 ], [ 1.0292054 , 1.1215473 , 0.35618582, -2.9337878 ], [ 1.0281092 , 1.1202368 , 0.356357 , -2.931106 ], [ 1.0292268 , 1.1216258 , 0.3561134 , -2.9338837 ], [ 1.0290968 , 1.1214894 , 0.35614622, -2.9336073 ]], dtype=float32), label_ids=array([0, 1, 2, 0, 0, 2, 0, 1, 1, 1, 1, 1, 1, 2, 1, 0, 1, 2, 2, 0, 0, 2, 1, 1, 1, 3, 0, 2, 2, 2, 0, 1, 2, 2, 2, 0, 1, 0, 0, 1, 0, 1, 1, 2, 2, 1, 1, 1, 0, 1, 0, 0, 2, 1, 0, 0, 1, 0, 0, 0, 0, 1, 2, 2, 1, 0, 2, 1, 2, 0, 1, 1, 0, 2, 1, 2, 0, 1, 1, 1, 2, 0, 1, 0, 1, 2, 2, 1, 1, 0, 0, 1, 1, 1, 1, 2, 0, 2, 1, 1]), metrics={'eval_loss': 1.13915282029372, 'eval_accuracy': 0.42, 'eval_precision': 0.1764, 'eval_recall': 0.42}) Can someone please suggest a tip or tell me what could have gone wrong? (I also tried batch sizes 8/16, running on both CPU and GPU with same datasets and parameters, changing learning rates/epochs and down sampling training dataset to balance classes).
I also found that for binary classification (two of the above four classes) it behaves similarly. I also tried changing the labels to float type and using a different loss function.
0
huggingface
Beginners
Fine Tune BERT Models
https://discuss.huggingface.co/t/fine-tune-bert-models/1554
Hey, a curious question to illuminate my understanding. Fine-tuning a BERT model for your downstream task can be important, so I’d like to tune the BERT weights themselves; I can then extract them from the BertForSequenceClassification model that I fine-tune. If you fine-tune e.g. BertForSequenceClassification, you tune the weights of the BERT model and the classifier layer too. But to fine-tune properly, you would first need to freeze the BERT weights and tune the classifier, and afterwards fine-tune the BERT weights too, right? Now, there are myriad ways to fine-tune the BERT weights, right? If I just use the main BERT model together with an arbitrary neural network architecture on top, I could fine-tune the BERT weights in this way too, right?
Any suggestions? Also, something that came to mind: TFBertForSequenceClassification uses the pooled_output, so the model is fine-tuned via this pooled_output. But instead I could use the CLS embedding or global average pooling of the hidden sequence for fine-tuning (pass it to the classifier layer), right?
0
huggingface
Beginners
How to merge two dataset objects?
https://discuss.huggingface.co/t/how-to-merge-two-dataset-objects/844
Hi everyone! I have two datasets, loaded as CSV files, which have the same features/columns. I would like to know if there is a way to merge both datasets into a larger one (like I would do with pd.concat((df_1, df_2))using pandas. In case that such method does not exist, would it be interesting to implement such functionality? Thanks in advance
I would rather combine the csv’s
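If you’d rather merge after loading, the datasets library also provides concatenate_datasets, which joins datasets that share the same features; a hedged sketch (the file names are placeholders):
from datasets import load_dataset, concatenate_datasets

ds_1 = load_dataset("csv", data_files="first.csv")["train"]    # placeholder file names
ds_2 = load_dataset("csv", data_files="second.csv")["train"]
merged = concatenate_datasets([ds_1, ds_2])  # both datasets must have the same features/columns
print(len(merged), merged.features)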
0
huggingface
Beginners
Distilbert-base-multilingual-cased’
https://discuss.huggingface.co/t/distilbert-base-multilingual-cased/7054
Hello, I am running distilbert-base-multilingual-cased on PyTorch. My model has 4 classes in the target. In the code in models/distilbert/modeling_distilbert.py, I am reaching this branch:
elif self.config.problem_type == "multi_label_classification":
    loss_fct = BCEWithLogitsLoss()
    loss = loss_fct(logits, labels)
I have two questions:
1. BCEWithLogitsLoss must receive the labels as one-hot vectors rather than integers. Must I take care of the one-hot encoding myself?
2. If I wish to add regularization terms to the loss, what is the best practice for doing so?
Thanks
Note that multi_label_classification is only for problems where you can have multiple labels for one example, so you should use the default if your samples can only have one label. If you are in a true multiple label problem, then it’s very likely your labels are already in a one-hot format. For your second question, you should just output the logits of your model and then compute the loss manually with your penalty. If you’re using the Trainer API, you can subclass and write a compute_loss function with that, see here 3 for an example.
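A hedged sketch of the Trainer-subclass approach, with a hypothetical L2 penalty on the classification head standing in for whatever regularization you have in mind (the compute_loss signature shown matches recent Trainer versions; adjust for older releases):
import torch
from transformers import Trainer

class PenalizedTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # base loss: single-label case shown; for true multi-label, use BCEWithLogitsLoss with float one-hot labels
        loss = torch.nn.functional.cross_entropy(logits, labels)
        # hypothetical regularization: L2 penalty on the classification head
        l2_penalty = sum(p.pow(2).sum() for p in model.classifier.parameters())
        loss = loss + 1e-4 * l2_penalty
        return (loss, outputs) if return_outputs else loss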
0
huggingface
Beginners
How to test masked language model after training it?
https://discuss.huggingface.co/t/how-to-test-masked-language-model-after-training-it/7029
Hi, I have followed and trained my masked language model using this tutorial: notebooks/language_modeling.ipynb at master Β· huggingface/notebooks Β· GitHub 10 Now, once the model as been saved using this code below: trainer.save_model("my_model") But, the notebook does not seem to include any code to allow me to test my model, so I am unsure how to do this. I have saved my model, but I now want to mask a sentence using my model by doing something like this: The [MASK] of France is Paris Thanks!
You can load it in a pipeline by using the folder where you saved it: mask_filler = pipeline("fill-mask", model="my_model")
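For example (assuming your tokenizer’s mask token is [MASK]; otherwise substitute tokenizer.mask_token):
from transformers import pipeline

mask_filler = pipeline("fill-mask", model="my_model")
print(mask_filler("The [MASK] of France is Paris."))  # returns the top candidate tokens with their scores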
0
huggingface
Beginners
Is it normal of more memory use of DistributedDataParallel than single
https://discuss.huggingface.co/t/is-it-normal-of-more-memory-use-of-distributeddataparallel-than-single/6987
Hello , I am new here. I try to Fine-turn MBart model ( mbart-large-cc25 ), My device : one pc(ubuntu), gpu(10GB) memory * 2 When i fine-turn on single gpu, first load model to gpu , only cost 3(GB) memory and start train it, increase to 8(GB), so i can fine-turn with small batch. I use DistributedDataParallel to have more batch in fine-turn But, i load model to 2 gpu , than cost 7GB both, didn’t start training. Is it normal ? Here is my init memory code class SummarizationModule() : def __init__(self,local_rank) -> None: self.mBartModel = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25") self.mBartTokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="zh_CN") self.mLocal_rank = local_rank torch.cuda.set_device(self.mLocal_rank) print('Rank : [' , args.local_rank , '] is ready.') os.environ['CUDA_VISIBLE_DEVICES'] = '0,1' os.environ['MASTER_ADDR'] = 'localhost' os.environ['MASTER_PORT'] = '8787' torch.distributed.init_process_group(backend="nccl", init_method="env://", world_size=1 ,rank=self.mLocal_rank ) self.mBartModel = self.mBartModel.cuda() self.mBartModel = DistributedDataParallel(self.mBartModel, find_unused_parameters=True) os.system('nvidia-smi') if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument('--local_rank', default=-1, type=int, help='node rank for distributed training') args = parser.parse_args() mModel = SummarizationModule(args.local_rank) Use : CUDA_VISIBLE_DEVICES=0,1 python3 -m torch.distributed.launch --nproc_per_node=2 main_mbart_multi_gpu.py
There is a slight overhead when using DistributedDataParallel so it’s normal to see a bit more GPU usage yes.
0
huggingface
Beginners
How can I renew my API Key?
https://discuss.huggingface.co/t/how-can-i-renew-my-api-key/6975
Hello, is there a way I can renew my API Key? I’ve seen a few forum posts about it but I’d like to reset it.
cc @julien-c or @pierric
0
huggingface
Beginners
Non shuffle training
https://discuss.huggingface.co/t/non-shuffle-training/6986
Hi there, In order to debug something I need to make the data non-shuffled. Can you please tell me how to turn off the shuffling? I am using from transformers import Trainer for training and from datasets import load_dataset for data loading with default arguments.
There is no option to do this natively in the Trainer, you can either make a source install and change the line that creates the training dataloader, or subclass Trainer and override the get_train_dataloader method.
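A hedged sketch of the subclassing route for a single-device setup (the real get_train_dataloader also handles distributed samplers, which this sketch skips):
from torch.utils.data import DataLoader, SequentialSampler
from transformers import Trainer

class NoShuffleTrainer(Trainer):
    def get_train_dataloader(self):
        # same as the default dataloader, but with a sequential (non-shuffled) sampler
        return DataLoader(
            self.train_dataset,
            batch_size=self.args.train_batch_size,
            sampler=SequentialSampler(self.train_dataset),
            collate_fn=self.data_collator,
            drop_last=self.args.dataloader_drop_last,
        )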
0
huggingface
Beginners
Multiple Categories (labels)
https://discuss.huggingface.co/t/multiple-categories-labels/6961
Hi @joeddav and @bhadresh-savani I am using your text-classification models, but I am encountering a problem: nothing I try allows me to retrieve all the values, only the first (top-scoring) label is ever returned. Would you be able to advise me? Thanks so much. (screenshot of the single-label output omitted)
huggingface.co joeddav/distilbert-base-uncased-go-emotions-student · Hugging Face 2
huggingface.co bhadresh-savani/distilbert-base-uncased-emotion · Hugging Face 1
Hi @snowdere, I am really glad you use this model, you can use it like either of the below ways from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True) prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", ) print(prediction) or from transformers import pipeline classifier = pipeline("sentiment-analysis",model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True) prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", ) print(prediction) return_all_scores=True should be in the pipeline() argument. Output: [[ {'label': 'sadness', 'score': 0.0006792712374590337}, {'label': 'joy', 'score': 0.9959300756454468}, {'label': 'love', 'score': 0.0009452480007894337}, {'label': 'anger', 'score': 0.0018055217806249857}, {'label': 'fear', 'score': 0.00041110432357527316}, {'label': 'surprise', 'score': 0.0002288572577526793} ]] Thanks
0
huggingface
Beginners
Showing individual token and corresponding score during beam search
https://discuss.huggingface.co/t/showing-individual-token-and-corresponding-score-during-beam-search/3735
Hello, I am using beam search with a pre-trained T5 model for summarization. I would like to visualize the beam search process by showing the tokens with the highest scores, and eventually the chosen beam like this diagram: image894Γ—669 56.8 KB (Taken from How to generate text: using different decoding methods for language generation with Transformers 5) I am unsure how I can show the tokens and their corresponding scores. I followed the discussion [Announcement] GenerationOutputs: Scores, Attentions and Hidden States now available as outputs to generate 6 and https://github.com/huggingface/transformers/pull/9150. Following the docs, upon calling generate, I have set return_dict_in_generate=True, output_scores=True generated_outputs = model_t5summary.generate( input_ids=input_ids.to(device), attention_mask=features['attention_mask'].to(device), max_length=input_ids.shape[-1] + 2, return_dict_in_generate=True, output_scores=True, output_hidden_states=True, output_attentions=True, no_repeat_ngram_size=2, early_stopping=True, num_return_sequences=3, num_beams=5, ) Now I have an instance of BeamSearchEncoderDecoderOutput. If I understand the docs (Utilities for Generation β€” transformers 4.2.0 documentation 5) correctly, scores will provide me with what I want but I am unsure on how to use the scores. Any help/pointers from the community would be greatly appreciated, thank you
Tagging @patrickvonplaten - I reposted my question from GitHub; thanks for directing me to the forum.
0
huggingface
Beginners
Token Classification (ValueError: NumPy boolean array indexing assignment)
https://discuss.huggingface.co/t/token-classification-valueerror-numpy-boolean-array-indexing-assignment/5564
I was reading and working through β€œToken Classification with W-NUT Emerging Entities” tutorial on Fine-tuning with custom datasets β€” transformers 4.5.0.dev0 documentation 6 using a different data. To replicate the data structure of the tutorial, I used the code below to insert a blank space between sentences/tags # insert blank row in python dataframe when value in column changes mask = DF['sentence_id'].ne(DF['sentence_id'].shift(-1)) DF1 = pd.DataFrame('',index=mask.index[mask] + .5, columns=DF.columns) DF2 = pd.concat([DF, DF1]).sort_index().reset_index(drop=True).iloc[:-1] DF2.head(18) I then wrote the data to a file DF2.to_csv(r'/content/drive/MyDrive/Colab Notebooks/model/dataset.txt', header=None, index=None, sep='\t', mode='a', encoding="utf-8") I read the data back using the code provided in the tutorial def read_wnut(file_path): file_path = Path(file_path, encoding='utf8') raw_text = file_path.read_text().strip() raw_docs = re.split(r'\n\t?\n', raw_text) token_docs = [] tag_docs = [] for doc in raw_docs: tokens = [] tags = [] for line in doc.split('\n'): token, tag = line.split('\t') tokens.append(token) tags.append(tag) token_docs.append(tokens) tag_docs.append(tags) return token_docs, tag_docs texts_df, tags_df = read_wnut(colab_file_path) Next, I split up the data into training, validation, and test sets. I then created encoding for the tags and the tokens as seen below tag2id = {tag: id for id, tag in enumerate(unique_tags)} id2tag = {id: tag for tag, id in tag2id.items()} # import the transformers module from transformers import BertTokenizerFast # import the small bert tokenizer model_name = "google/bert_uncased_L-4_H-512_A-8" tokenizer = BertTokenizerFast.from_pretrained(model_name) train_encodings = tokenizer(train_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True) val_encodings = tokenizer(val_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True) test_encodings = tokenizer(test_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True) It’s when I run the offset function that I get an error message import numpy as np def encode_tags(tags, encodings): labels = [[tag2id[tag] for tag in doc] for doc in tags] encoded_labels = [] for doc_labels, doc_offset in zip(labels, encodings.offset_mapping): # create an empty array of -100 doc_enc_labels = np.ones(len(doc_offset),dtype=int) * -100 arr_offset = np.array(doc_offset) # set labels whose first offset position is 0 and the second is not 0 doc_enc_labels[(arr_offset[:,0] == 0) & (arr_offset[:,1] != 0)] = doc_labels encoded_labels.append(doc_enc_labels.tolist()) return encoded_labels # return the encoded labels train_labels = encode_tags(train_tags, train_encodings) val_labels = encode_tags(val_tags, val_encodings) test_labels = encode_tags(test_tags, test_encodings) The error message is below: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-18-7290e6aeda9c> in <module>() 17 18 # return the encoded labels ---> 19 train_labels = encode_tags(train_tags, train_encodings) 20 val_labels = encode_tags(val_tags, val_encodings) 21 test_labels = encode_tags(test_tags, test_encodings) <ipython-input-18-7290e6aeda9c> in encode_tags(tags, encodings) 11 12 # set labels whose first offset position is 0 and the second is not 0 ---> 13 doc_enc_labels[(arr_offset[:,0] == 0) & (arr_offset[:,1] != 0)] = doc_labels 14 
encoded_labels.append(doc_enc_labels.tolist()) 15 ValueError: NumPy boolean array indexing assignment cannot assign 236 input values to the 120 output values where the mask is true I am not sure where my error is, as I have tried to replicate what is in the tutorial.
I have the same problem; it may be caused by the encoding of the training data. You can try changing the encoding from UTF-8 to UTF-8-SIG:
raw_text = file_path.read_text(encoding='UTF-8-sig').strip()
UTF-8 files saved with a BOM can introduce an extra \ufeff character into the decoded data.
0
huggingface
Beginners
Trouble with the built in inference API example
https://discuss.huggingface.co/t/trouble-with-the-built-in-inference-api-example/6983
Hi I’m just starting out with the inference API examples. While I can get other examples to work using the same formatting , I get this error {β€œerror”:β€œd argument needs to be of type (SquadExample, dict)”} When using the example for deepset/roberta-base-squad2 Β· Hugging Face import json import requests API_URL = "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2" headers = {"Authorization": "Bearer api_xxx"} def query(payload): data = json.dumps(payload) response = requests.request("POST", API_URL, headers=headers, data=data) return json.loads(response.content.decode("utf-8")) data = query( { "inputs": { "question": "What's my name?", "context": "My name is Clara and I live in Berkeley.", } } ) I’d love know how else to format the query, or what else I should double check. Thanks for reading.
Hi @LowellR I just tried your exact pasted code and it worked fine. {'answer': 'Clara', 'end': 16, 'score': 0.9326569437980652, 'start': 11} Could you confirm if you are running this same code? Thanks!
0
huggingface
Beginners
Question about supported framework
https://discuss.huggingface.co/t/question-about-supported-framework/6942
Dear All, I have some experience with BERT and text analysis, but I’m a beginner at transformers. And so, please bear with my question, which might be a little silly. Here is the situation: I would like to solve a text comprehension task with a proper model. However, some models seem not to be supported in TensorFlow, according to the following screenshot. (screenshot of the model/framework support table omitted) But it is also mentioned that “any model saved as before can be loaded back either in PyTorch or TensorFlow” on the “Quick tour” page. I am a little confused by those two descriptions. For example, if I would like to choose Bert Generation for my task and I have no experience with PyTorch, would the model work fine, or only have some limited functionality, while I fine-tune it in TensorFlow? I deeply appreciate your kind assistance. Sincerely,
The documentation should be interpreted as β€œany model saved as before can be loaded back either in PyTorch or TensorFlow as long as there is an implementation for both frameworks”, so no, you wouldn’t be able to use Bert Generation in TensorFlow at all.
0
huggingface
Beginners
Cannot download translation models in Colab
https://discuss.huggingface.co/t/cannot-download-translation-models-in-colab/6952
I am trying to translate English text to German. And so I run this- translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de") But I get thrown an error- ValueError: This tokenizer cannot be instantiated. Please make sure you have sentencepiece installed in order to use this tokenizer. Full error message --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-65-accbe9f8763e> in <module>() ----> 1 translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de") 1 frames /usr/local/lib/python3.7/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs) 441 442 tokenizer = AutoTokenizer.from_pretrained( --> 443 tokenizer_identifier, revision=revision, use_fast=use_fast, _from_pipeline=task, **tokenizer_kwargs 444 ) 445 /usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 449 else: 450 raise ValueError( --> 451 "This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed " 452 "in order to use this tokenizer." 453 ) ValueError: This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed in order to use this tokenizer. As it is suggested that I should have sentencepiece installed, I installed it via pip, but that does not help. I have tried importing it so that its namespace is available, but it still does not work. Note: Besides the Helsinki-NLP/opus-mt-en-de model, I have also tried using the Helsinki-NLP/opus-mt-fr-en model as shown in the course video 1, but it does not work either. What am I missing?
okay, I tried to run this locally (not in Colab): from transformers import pipeline translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de") translation = translator("hello, my name is Bob") print(translation) and it printed out: [{'translation_text': 'Hallo, mein Name ist Bob.'}] I don’t know where you try run that code, but seems to work ok for me. Have you installed latest package of transformers? pip install transformers -U Where do you run that code? Can you share the full code to see if something else is going on there?
0
huggingface
Beginners
Model Parallelism, how to parallelize transformer?
https://discuss.huggingface.co/t/model-parallelism-how-to-parallelize-transformer/6260
Hi there, I am pretty new, I hope to do it right:) I have two gpus nvidia, which work fine. I can train model on each of them, I can use data parallelism. I wonder if I can parallelize the model itself. Surfing the internet I found it is possible but no one tells how. Some frameworks do it as torchgpipe, deepspeed PipelineModule, Fairscale but they wants sequential models but transformers are hard to turn sequential. Can you point me in the right direction? Specs: I want to parallelize BERT model on two gpus titan xp. Thank you, every hints or helps will be appreciated valgi0
hey @valgi0 my suggestion would be to try out the new accelerate library: GitHub - huggingface/accelerate: πŸš€ A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision 151 in particular, there is an nlp example that shows you how to configure accelerate for the multi-GPU case here: accelerate/examples at main Β· huggingface/accelerate Β· GitHub 74
0
huggingface
Beginners
How to do sentiment analysis on my own dataset?
https://discuss.huggingface.co/t/how-to-do-sentiment-analysis-on-my-own-dataset/6792
Hi, I have seen this 3 example, which does sentiment analysis on product reviews. Therefore, is there a way to do this with my own dataset? I want to classify the sentiment of text as β€œhappy”, β€œsad” or β€œneutral” using my own dataset. Thanks.
You have to define your own torch.utils.data.Dataset and torch.utils.data.DataLoader to load your own labeled text. Then choose an LM like BERT from the huggingface library to finetune.
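A minimal sketch of such a Dataset, assuming your texts are a list of strings and your labels are already mapped to integers (the 0=sad, 1=neutral, 2=happy mapping is just an example):
import torch
from torch.utils.data import Dataset

class SentimentDataset(Dataset):
    def __init__(self, texts, labels, tokenizer, max_length=128):
        # texts: list of strings, labels: list of ints (e.g. 0=sad, 1=neutral, 2=happy)
        self.encodings = tokenizer(texts, truncation=True, padding=True, max_length=max_length)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item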
0
huggingface
Beginners
Get word embeddings from transformer model
https://discuss.huggingface.co/t/get-word-embeddings-from-transformer-model/6929
Hi I would like to plot a semantic space for specific words. Usually, we use word embeddings for this. But the model I use (xlm-roberta) deals with language at the level of parts of words (BPE tokens). It means that if I give the model a word ‘hello’ I will get 2-3 vectors, one for each part of this word, right? model(**tokenizer('hello', return_tensors="tf"),output_hidden_states=True) # hidden states are the embeddings, I suppose So, what is the best approach to get one single vector to plot? To be clear what I mean by semantic space: (example scatter plot of words in a 2-D semantic space omitted)
I’m not sure what the best approach is since I’m not an expert in this, but you can always apply mean pooling to the output. Here is a working example:
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")  # base model (no LM head) so the first output is the token hidden states

encoded_input = tokenizer("hello", return_tensors='pt')
model_output = model(**encoded_input)
mean_pooling(model_output, encoded_input['attention_mask'])
This is inspired by sentence transformers 62
0
huggingface
Beginners
Adding New Tokens - IndexError: index out of range in self
https://discuss.huggingface.co/t/adding-new-tokens-indexerror-index-out-of-range-in-self/6731
Hi, I have added custom tokens using this code: # Let's see how to increase the vocabulary of Bert model and tokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = AutoModelForMaskedLM.from_pretrained('bert-base-uncased') num_added_toks = tokenizer.add_tokens(['😎', '🀬']) print('We have added', num_added_toks, 'tokens') model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e. the length of the tokenizer. However, when I execute this code: trainer.train() I get this error: --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-42-3435b262f1ae> in <module>() ----> 1 trainer.train() 11 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1914 # remove once script supports set_grad_enabled 1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1917 1918 IndexError: index out of range in self
hey @anon58275033 without seeing the full code it’s a bit hard to debug, but my first question would be whether you tokenized the corpus after adding the new tokens? if yes, did you observe whether the tokenization is working as expected?
0
huggingface
Beginners
Reducing output size when performing hyperparameter search
https://discuss.huggingface.co/t/reducing-output-size-when-performing-hyperparameter-search/6701
Hello everyone! I’ve been trying to perform a simple hyperparameter research on β€˜distilroberta-base’. When using Kaggle notebooks there is a 20 GB limit on outputs. Even when i am doing only 5 trials the output directory fills up. I am used to using GridSearchCV for HP-search which only keeps track of the best parameters and retrains at the end without saving individual models during the search. I have tried using β€˜overwrite_output_dir=True’ in TrainingArguments but this doesn’t seem to reduce any output. I apologize if this is documented somewhere in the documentation, but I have not been able to find it. My code: model_training_arguments = TrainingArguments( "./model_output", evaluation_strategy = "epoch", fp16=True, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=4, seed=2, load_best_model_at_end=True, overwrite_output_dir=True ) model_trainer = Trainer( model_init=model_init, args=model_training_arguments, train_dataset=train_dataset, eval_dataset=valid_dataset, tokenizer=tokenizer, compute_metrics=metrics, ) model_trainer.hyperparameter_search( direction = "minimize", backend = "optuna", n_trials=5 ) Thanks in advance!
Set load_best_model_at_end=False and add save_strategy='no' (the checkpoint save strategy to adopt during training) to the TrainingArguments.
0
huggingface
Beginners
Extracting token embeddings from pretrained language models
https://discuss.huggingface.co/t/extracting-token-embeddings-from-pretrained-language-models/6834
I am interested in extracting feature embeddings from famous and recent language models such as GPT-2, XLNet or Transformer-XL. Is there any sample code to learn how to do that? Thanks in advance
Hello! You can use the feature-extraction pipeline for this.
from transformers import pipeline

extractor = pipeline('feature-extraction', model='xlnet-base-cased')  # named to avoid shadowing the pipeline function
data = extractor("this is a test")
print(data)
You can also do this through the Inference API.
0
huggingface
Beginners
Saving-Loading Model in Colab and Making Predictions
https://discuss.huggingface.co/t/saving-loading-model-in-colab-and-making-predictions/6723
I’m fairly new to Python and HuggingFace and have what is probably a simple question about saving and loading a model. I can’t figure out how to save a trained classifier model and then reload so to make target variable predictions on new data. As an example, I trained a model to predict imbd ratings with an example from the HuggingFace resources, shown below. I’ve tried a number of ways (save_model, save_pretrained) and either am struggling to save it at all or when loaded, can’t figure out what to call to get predictions. Any help would be incredibly appreciated on the steps that involve saving/loading/predicting new scores. #example mainly from here: https://huggingface.co/transformers/training.html !pip install transformers !pip install datasets from datasets import load_dataset raw_datasets = load_dataset("imdb") from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") def tokenize_function(examples): return tokenizer(examples["text"], max_length = 128, padding="max_length", truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) #choosing small datasets for example# small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(500)) ### TRAINING classification ### from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2) from transformers import TrainingArguments from transformers import Trainer training_args = TrainingArguments("test_trainer", evaluation_strategy="epoch", num_train_epochs=2, weight_decay=.0001, learning_rate=0.00001, per_device_train_batch_size=32) trainer = Trainer(model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset) trainer.train() y_test_predicted_original = model_loaded.predict(small_eval_dataset) #### Saving ### from google.colab import drive drive.mount('/content/gdrive') %cd /content/gdrive/My\ Drive/FOLDER trainer.save_pretrained ("Trained model") #assumed this would save but did not model.save_pretrained ("Trained model") #did save ### Loading Model and Creating Predicted Scores ### #perhaps this....# from transformers import BertConfig, BertModel conf = BertConfig.from_pretrained("Trained model", num_labels=2) model_loaded = AutoModelForSequenceClassification.from_pretrained("Trained model", config=conf) #or...# model_loaded = AutoModelForSequenceClassification.from_pretrained("Trained model", local_files_only=True) model_loaded #with ultimate goal of getting predicted scores (not sure what to call here)... y_test_predicted_loaded = model_loaded.predict(small_eval_dataset)
Any insights on this? I can’t find any start-to-finish examples, and this seems like it should be straightforward.
0
huggingface
Beginners
Does it make sense to train DistilBERT from scratch in a new corpus
https://discuss.huggingface.co/t/does-it-make-sense-to-train-distilbert-from-scratch-in-a-new-corpus/3503
Hi! First post in the forums, excited to start getting deep into this great library! I have a rookie, theoretical question. I have been reading the DistilBERT paper (fantastic!) and was wondering if it makes sense to pretrain a DistilBERT model from scratch. In the paper 2, the authors specify that β€œThe student is trained with a distillation loss over the soft target probabilities of the teacher.”. My question is, when pretraining DistilBERT on a new corpus (say, another language) what are the β€˜probabilities of the teacher’? AFAIK, the teacher does not have any interesting probabilites to show since it has never seen the corpus either. So my question is, how does the transfomers library distill knowledge into the model when I train DistilBertForMaskedLM 3 from scratch in a brand new corpus? Sorry in advance if there is something really obvious I’m missing, I’m quite new to using transformers. Just to be extra explicit, I would load my model like this: config = DistilBertConfig(vocab_size=VOCAB_SIZE) model = DistilBertForMaskedLM(config) and train it like this: trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["valid"], tokenizer=tokenizer, data_collator=data_collator, ) trainer.train()
Hi @lesscomfortable, welcome to the forum! In the DistilBERT paper they use bert-base-uncased as the teacher for pretraining (i.e. masked language modelling). In particular, the DistilBERT student is pretrained on the same corpus as BERT (Toronto Books + Wikipedia), which is probably quite important for being able to effectively transfer the knowledge from the teacher to the student. So the answer to your question ("when pretraining DistilBERT on a new corpus, what are the 'probabilities of the teacher'?") is that the pretrained BERT teacher generates logits and hidden states that can be used to guide the pretraining of the student (through the KL divergence and "cosine embedding" terms in the loss function). You can find more technical details here, and you should check out the distiller.py module to see how the loss is implemented and train.py for the pretraining logic. If you want to use a trainer, my suggestion would be to subclass Trainer, add the teacher as an attribute and override the compute_loss function, e.g.
class DistillationTrainer(Trainer):
    def __init__(self, *args, teacher_model=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.teacher = teacher_model

    def compute_loss(self, model, inputs):
        # adapt the code from distiller.py here
Then you could initialise the teacher and student along the lines you did:
teacher_model = BertForMaskedLM.from_pretrained('bert-base-uncased')
student_config = DistilBertConfig(vocab_size=VOCAB_SIZE)
student_model = DistilBertForMaskedLM(student_config)

trainer = DistillationTrainer(
    model=student_model,
    teacher_model=teacher_model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["valid"],
    tokenizer=tokenizer,
    data_collator=data_collator,
)
Caveats:
I do not know how well this will work if your corpus is significantly different from the one BERT was pretrained on. For example, if your corpus is in another language, you'd be better off using a different teacher model in that language.
This approach is likely to be error-prone and expensive ($$$), so my suggestion is to use the battle-tested scripts from the link above.
Hope that helps!
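For reference, here is a minimal sketch of what that compute_loss override could look like. The temperature, the loss weighting and the KL-only distillation term are assumptions made for illustration, not the exact recipe from distiller.py (which also uses a cosine loss and masks padded positions), and you may need to move the teacher to self.args.device yourself:
import torch
import torch.nn.functional as F
from transformers import Trainer

class DistillationTrainer(Trainer):
    def __init__(self, *args, teacher_model=None, temperature=2.0, alpha_kd=0.5, **kwargs):
        super().__init__(*args, **kwargs)
        self.teacher = teacher_model.eval()  # teacher is frozen, only used for soft targets
        self.temperature = temperature       # softening factor (assumed value)
        self.alpha_kd = alpha_kd             # weight of the distillation term (assumed value)

    def compute_loss(self, model, inputs, return_outputs=False):
        outputs_student = model(**inputs)
        student_loss = outputs_student.loss  # regular MLM loss on the hard labels
        with torch.no_grad():
            outputs_teacher = self.teacher(**inputs)
        # KL divergence between the softened student and teacher distributions
        loss_kd = F.kl_div(
            F.log_softmax(outputs_student.logits / self.temperature, dim=-1),
            F.softmax(outputs_teacher.logits / self.temperature, dim=-1),
            reduction="batchmean",
        ) * (self.temperature ** 2)
        loss = self.alpha_kd * loss_kd + (1.0 - self.alpha_kd) * student_loss
        return (loss, outputs_student) if return_outputs else loss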
0
huggingface
Beginners
Certain words don't work with BERT?
https://discuss.huggingface.co/t/certain-words-dont-work-with-bert/6562
Hi, I was trying to run BERT but was getting the error "IndexError: index out of range in self". After troubleshooting for a couple of days, I figured out it was the word "screwing" that was breaking my code. Is this a bug, or are there certain words you can't use with BERT? Or am I just doing something wrong? Thanks. Here's the code example:
import transformers
import torch
import torch.nn as nn

class SentimentClassifier(nn.Module):
    def __init__(self, n_classes):
        super(SentimentClassifier, self).__init__()
        self.bert = transformers.BertModel.from_pretrained('bert-base-cased')
        self.drop = nn.Dropout(p=0.3)
        self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, mask):
        output = self.bert(input_ids=input_ids, attention_mask=mask)
        output = self.drop(output['pooler_output'])
        return self.out(output)

# this doesn't work
batch_sentences = [
    'screwing',
]
# running this works
batch_sentences2 = [
    'this is a test sentence',
    'another one'
]
bert_model = transformers.BertModel.from_pretrained('bert-base-uncased')
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')
encoded_inputs = tokenizer(batch_sentences, padding=True, truncation=True, add_special_tokens=True)
samples = torch.tensor(encoded_inputs['input_ids'])
targets = torch.zeros(samples.shape[0]).long()
mask = (samples != 0)
print(samples.shape)
model = SentimentClassifier(3)
EPOCHS = 10
optimizer = transformers.AdamW(model.parameters(), lr=2e-5, correct_bias=False)
total_steps = 1 * EPOCHS
scheduler = transformers.get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=total_steps
)
criterion = nn.CrossEntropyLoss()
model = model.train()
for i in range(EPOCHS):
    print(i)
    preds = model(input_ids=samples, mask=mask)
    loss = criterion(preds, targets)
    print(loss)
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
I hope you’ve figured out the solution already but it seems that the tokenizer you use bert-base-uncased and the model initialized in the SentimentClassifier class bert-base-cased do not match. There may be overlaps in the vocabulary of the cased and uncased tokenizers which may seem working fine in some cases but the same token ids can decode to totally different text sequences with different tokenizers. In general, using an uncased tokenizer for a cased model or vice versa should always be erroneous, even if there is no error message showing up.
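In other words, the quick fix is to make the two checkpoints agree. A tiny sketch (the choice of the uncased checkpoint here is arbitrary):
import transformers

checkpoint = "bert-base-uncased"  # use the same checkpoint for the tokenizer and the model
tokenizer = transformers.BertTokenizer.from_pretrained(checkpoint)
bert = transformers.BertModel.from_pretrained(checkpoint)

encoded = tokenizer(["screwing"], padding=True, truncation=True, return_tensors="pt")
outputs = bert(**encoded)  # no "index out of range in self" once vocab and embeddings match
print(outputs.pooler_output.shape)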
0
huggingface
Beginners
Accuracy less than 100, but no mistakes
https://discuss.huggingface.co/t/accuracy-less-than-100-but-no-mistakes/6691
Hello all, I am training a model using Hugging Face, and when evaluating it I get approximately 80% accuracy. But when I then plot the confusion matrix (using pycm), it shows no errors at all. Any ideas what this might be?
Could you be providing the same labels to the confusion matrix plotting function, like plotting predictions versus predictions or ground truth versus ground truth? That kind of mistake is one that I make all the time.
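If it helps, here is a small sketch of pulling the two vectors from a Trainer and handing them to pycm; the trainer and eval_dataset names are placeholders for whatever you trained and evaluated on:
import numpy as np
from pycm import ConfusionMatrix

pred_output = trainer.predict(eval_dataset)           # run the trained model over the eval split
y_pred = np.argmax(pred_output.predictions, axis=-1)  # model predictions
y_true = pred_output.label_ids                        # ground-truth labels

cm = ConfusionMatrix(actual_vector=list(y_true), predict_vector=list(y_pred))
print(cm)  # with ~80% accuracy the off-diagonal cells should now be non-zero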
0
huggingface
Beginners
How to stop Optuna saving checkpoints during Hyperparameter Search
https://discuss.huggingface.co/t/how-to-stop-optuna-saving-checkpoints-during-hyperparameter-search/6785
Hello, I am running a hyperparameter search using Optuna. As I am using Colab, I have limited disk space, so I was wondering how to stop saving checkpoints; I only care about the final result and don't need all the intermediate steps saved. I tried the following arguments in my TrainingArguments, but it's not working:
# Define the training arguments
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    seed = 0,
    num_train_epochs=5,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=16,   # batch size for evaluation
    warmup_steps=22,                 # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    learning_rate=5e-5,              # initial learning rate for AdamW optimizer
    load_best_model_at_end=True,     # load the best model when finished training (default metric is loss)
    do_train=True,                   # perform training
    do_eval=True,                    # perform evaluation
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
    gradient_accumulation_steps=2,   # total number of steps before back propagation
    fp16=True,                       # use mixed precision
    fp16_opt_level="02",             # mixed precision mode
    evaluation_strategy="epoch",     # evaluate each `logging_steps`
    save_strategy = 'no',            # the checkpoint save strategy to adopt during training; I don't want to save, which is probably why it still saved and took up disk space in the HP search
    save_steps = 100000,
    save_total_limit = 1,            # trying this to stop Optuna from saving
)
Any help would be appreciated, thank you!
OK, after reading the documentation carefully, it turns out that setting load_best_model_at_end=True overrides the save strategy. I took it off and now it works.
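For anyone else hitting this, a configuration along these lines should avoid writing checkpoints during the search (the other argument values here are just illustrative):
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    evaluation_strategy="epoch",
    save_strategy="no",            # no intermediate checkpoints written to disk
    load_best_model_at_end=False,  # leaving this True silently re-enables saving
)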
0
huggingface
Beginners
NameError: name 'BertTokenizer' is not defined
https://discuss.huggingface.co/t/nameerror-name-berttokenizer-is-not-defined/6727
Hi, I am trying to add custom tokens using this code below: # Let's see how to increase the vocabulary of Bert model and tokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = AutoModelForMaskedLM.from_pretrained('bert-base-uncased') num_added_toks = tokenizer.add_tokens(['token_1']) print('We have added', num_added_toks, 'tokens') model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e. the length of the tokenizer. Though, when executing the above code, I get this error: --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-36-31798d520617> in <module>() 1 # Let's see how to increase the vocabulary of Bert model and tokenizer ----> 2 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') 3 model = AutoModelForMaskedLM.from_pretrained('bert-base-uncased') 4 5 num_added_toks = tokenizer.add_tokens(['token_1']) NameError: name 'BertTokenizer' is not defined
Hey @anon58275033, what version of transformers are you using? I was not able to reproduce the error in v4.6.1.
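For what it's worth, that particular NameError usually just means the class was never imported; adding the import at the top should clear it (sketch below, assuming a reasonably recent transformers version):
from transformers import BertTokenizer, AutoModelForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForMaskedLM.from_pretrained('bert-base-uncased')

num_added_toks = tokenizer.add_tokens(['token_1'])
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer))  # full size of the new vocabulary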
0
huggingface
Beginners
How to add new tokens for existing masked language modelling?
https://discuss.huggingface.co/t/how-to-add-new-tokens-for-existing-masked-language-modelling/6720
Hi, I have followed this tutorial from GitHub on masked language modelling: notebooks/language_modeling.ipynb at master Β· huggingface/notebooks Β· GitHub
But I am wondering: how do I modify the code below for the masked language modelling task, and where in my code do I place it? In the tutorial, this line of code is used:
from transformers import AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
This is the code I need to modify to satisfy MLM:
# Let's see how to increase the vocabulary of Bert model and tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer))  # Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
First of all, I guess you want to use BertForMaskedLM instead of BertModel. The other parts should work AFAIK.
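Putting that together, a sketch of the adjusted snippet for the MLM notebook (placed after the model is loaded and before the Trainer is built; the checkpoint name is just whatever the notebook uses):
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_checkpoint = "bert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)  # MLM head instead of the bare BertModel

num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to cover the new tokens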
0
huggingface
Beginners
API Rest with several models loaded using GPU but not at same time
https://discuss.huggingface.co/t/api-rest-with-several-models-loaded-using-gpu-but-not-at-same-time/6673
I am creating a REST API (using Flask) that does inference with several models given a list, for example summarization, sequence-to-sequence classification, etc. The problem is that all the models don't fit on the GPU at the same time. Is there a way of loading a model onto the GPU, running inference with it, moving it back to the CPU, and then loading the next model onto the GPU for inference, and so on?
UPDATE: The summarization task works on GPU if I run the script on the virtual machine directly, without going through Flask. However, once I start it under Flask I get: RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasCreate(handle)`
0
huggingface
Beginners
How to test my text classification model after training it?
https://discuss.huggingface.co/t/how-to-test-my-text-classification-model-after-training-it/6689
Hello, I have followed this tutorial on text classification: notebooks/text_classification.ipynb at master Β· huggingface/notebooks Β· GitHub
Now, I have trained it using my own data, but I am unsure how to actually deploy it to carry out a classification task. For example, I want to input the following sentence: "You look good today." And, from there, I want to see if the classification is positive or negative, but I do not know how to do this. Any help is much appreciated.
That's a good question. cc @sgugger, it would be great if the various notebooks also included an inference part. I had to look into several notebooks before finding out that you can access the trained model using trainer.model. Here's how to do inference on a new, unseen sentence:
sentence = "You look good today."
# encode the sentence (i.e. create input_ids, attention_mask) as PyTorch tensors
encoding = tokenizer(sentence, return_tensors="pt")
# make sure the tensors in the "encoding" dict are on the same device as the model
encoding = {k: v.to(trainer.args.device) for k, v in encoding.items()}
# forward pass through the model
with torch.no_grad():
    outputs = trainer.model(**encoding)
logits = outputs.logits
print("Predicted class index:", logits.argmax(-1))
The predicted class index will be either a zero or a one (I guess one represents positive).
0
huggingface
Beginners
Which loss function in bertforsequenceclassification regression
https://discuss.huggingface.co/t/which-loss-function-in-bertforsequenceclassification-regression/1432
BertForSequenceClassification can be used for regression when the number of classes is set to 1. The documentation says that BertForSequenceClassification calculates cross-entropy loss for classification. What kind of loss does it return for regression? (I've been assuming it is root mean square error, but I read recently that there are several other possibilities such as Huber or negative log-likelihood.) Which is it? How should I find out / where is the code?
This is the GitHub link. At line 1354 you have the condition that checks the number of labels (whether it is one or more):
if self.num_labels == 1:
    #  We are doing regression
    loss_fct = MSELoss()
    loss = loss_fct(logits.view(-1), labels.view(-1))
else:
    loss_fct = CrossEntropyLoss()
    loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
0
huggingface
Beginners
Change length of GPT-neo output
https://discuss.huggingface.co/t/change-length-of-gpt-neo-output/5307
Any way to modify the length of the output text generated by the GPT-neo inference API?
Does anyone know a solution for this?
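Not an authoritative answer, but the hosted Inference API generally accepts generation parameters in the request payload, so a sketch along these lines may work; the exact set of parameters honoured for GPT-Neo, and the model id used here, are assumptions:
import requests

API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-1.3B"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

payload = {
    "inputs": "Once upon a time",
    "parameters": {"max_new_tokens": 100},  # assumed knob for the output length
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())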
0
huggingface
Beginners
Evaluate Model on Test dataset (PPL)
https://discuss.huggingface.co/t/evaluate-model-on-test-dataset-ppl/6528
Hi guys, I am fairly new to Hugging Face and have a question regarding perplexity (PPL). I fine-tuned a model, and at the end of training I get the PPL for the dev dataset by doing:
eval_results = trainer.evaluate()
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
However, in my DatasetDict I have train, dev and test splits, and I also want to get the PPL for the test set. Is there any simple way to get this done? Best regards, Chris
Hey Chris, I'm a beginner as well, but I think you could try using EvalPrediction. You pass your test set and its labels, and then you should get your loss on that specific set.
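Another option that may be simpler: trainer.evaluate() accepts a dataset argument, so you can point it at the test split directly. This assumes the DatasetDict is called tokenized_datasets (a placeholder name) and that the test split was preprocessed the same way as train and dev:
import math

test_results = trainer.evaluate(eval_dataset=tokenized_datasets["test"])
print(f"Test perplexity: {math.exp(test_results['eval_loss']):.2f}")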
0
huggingface
Beginners
How to load a ckpt into my model based on tf2.x
https://discuss.huggingface.co/t/how-to-load-ckpt-into-my-model-base-on-tf2-x/6415
I need to load a ckpt file from Google's BERT checkpoints. I have read all the questions related to this and tried some methods, but it still doesn't work. The ckpt files are like this:
I have tried a method like:
bert_config = transformers.BertConfig.from_json_file('./bert/bert_config.json')
bert = transformers.TFBertModel.from_pretrained('./bert/bert_model.ckpt', config=bert_config)
I found a method that said to use TFBertForPreTraining instead of TFBertModel:
bert_config = transformers.BertConfig.from_json_file('./bert/bert_config.json')
bert = transformers.TFBertForPreTraining.from_pretrained('./bert/bert_model.ckpt', config=bert_config)
but it still doesn't work. Also, bert.load_weights('./bert/bert_model.ckpt') doesn't work either. I really can't understand why bert.load_weights('./bert/bert_model.ckpt') can't work; maybe it needs to be rewritten? So how can a model based on TF 2.x load a ckpt file?
Hello? Can someone help me?
0
huggingface
Beginners
What's the difference between wordpiece and sentencepiece?
https://discuss.huggingface.co/t/whats-the-difference-between-wordpiece-and-sentencepiece/6676
I found that WordPiece training and SentencePiece training are almost the same. So what's the difference? Thanks
See for example this post: https://towardsdatascience.com/a-comprehensive-guide-to-subword-tokenisers-4bbd3bad9a7c
0
huggingface
Beginners
Evaluating Finetuned BERT Model for Sequence Classification
https://discuss.huggingface.co/t/evaluating-finetuned-bert-model-for-sequence-classification/5265
Python 3.7.6 Transformers 4.4.2 Pytorch 1.8.0 Hi HF Community! I would like to finetune BERT for sequence classification on some training data I have and also evaluate the resulting model. I am using the Trainer class to do the training and am a little confused on what the evaluation is doing. Below is my code: import torch from torch.utils.data import Dataset from transformers import BertForSequenceClassification, BertTokenizer, Trainer, TrainingArguments import pandas as pd class MyDataset(Dataset): def __init__(self, csv_file: str): self.df = pd.read_csv(csv_file, encoding='ISO-8859-1') self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", padding_side='right', local_files_only=True) self.label_list = self.df['label'].value_counts().keys().to_list() def __len__(self) -> int: return len(self.df) def __getitem__(self, idx: int) -> str: if torch.is_tensor(idx): idx = idx.tolist() text = self.df.iloc[idx, 1] tmp_label = self.df.iloc[idx, 3] if tmp_label != 'label_a': label = 1 else: label = 0 return (text, label) def data_collator(self, dataset_samples_list): tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", padding_side='right', local_files_only=True) examples = [example[0] for example in dataset_samples_list] encoded_results = tokenizer(examples, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) batch = {} batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']]) batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']]) batch['labels'] = torch.stack([torch.tensor(example[1]) for example in dataset_samples_list]) return batch train_data_obj = MyDataset('/path/to/train/data.csv') eval_data_obj = MyDataset('/path/to/eval/data.csv') model = BertForSequenceClassification.from_pretrained("bert-base-uncased") training_args = TrainingArguments( output_dir='/path/to/output/dir', do_train=True, do_eval=True, per_device_train_batch_size=2, per_device_eval_batch_size=2, evaluation_strategy='epoch', num_train_epochs=2, save_steps=10, gradient_accumulation_steps=4, dataloader_drop_last=True ) trainer = Trainer( model=model, args=training_args, train_dataset=train_data_obj, eval_dataset=eval_data_obj, data_collator=data_collator ) trainer.train() trainer.save_model("/path/to/model/save/dir") trainer.evaluate() As I understand, once trainer.train() is called, after each epoch the model will be evaluated on the dataset from eval_data_obj and those results will be displayed. After the training is done and the model is saved using trainer.save_model("/path/to/model/save/dir"), trainer.evaluate() will evaluate the saved model on the eval_data_obj and return a dict containing the evaluation loss. Are there other metrics like accuracy that are included in this dict by default? Thank you in advance for your help!
If you want other metrics, you have to indicate that to the Trainer by passing a compute_metrics function. See for instance our official GLUE example or the corresponding notebook.
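As a concrete illustration, an accuracy-only compute_metrics wired into the Trainer from the question could look like this; it reuses the objects defined in the snippet above, and the choice of metric is just an example:
import numpy as np
from datasets import load_metric

accuracy_metric = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy_metric.compute(predictions=predictions, references=labels)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data_obj,
    eval_dataset=eval_data_obj,
    data_collator=train_data_obj.data_collator,  # the collate function defined on MyDataset
    compute_metrics=compute_metrics,             # eval_accuracy now appears in trainer.evaluate()
)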
0
huggingface
Beginners
Why my simple Bert model for text classification could not learn anything?
https://discuss.huggingface.co/t/why-my-simple-bert-model-for-text-classification-could-not-learn-anything/6654
Hello, I try transformers.BertModel to deal with a simple text classification, but the result makes me puzzled. the code is simple,I implement the model with pytorch. they are… # a Dataset class for BertModel class BertDataset(Dataset): def __init__(self, train_file, tokenizer): super(BertDataset, self).__init__() self.train_file = train_file self.data = [] self.label2id = {} self.id2label = {} self.tokenizer = tokenizer self.init() def init(self): with open(self.train_file, 'r', encoding='utf-8') as f: for line in f: blocks = line.strip().split('\t') if blocks[1] not in self.label2id: self.label2id[blocks[1]] = len(self.label2id) self.id2label[len(self.id2label)] = blocks[1] self.data.append({'token': self.tokenizer(blocks[0], add_special_tokens=True, max_length=100, padding='max_length', return_tensors='pt', truncation=True), 'label': self.label2id[blocks[1]]}) def __getitem__(self, item): return self.data[item] def __len__(self): return len(self.data) # a collate function for torch.utils.data.DataLoader def bert_collate_fn(batch_data): input_ids, token_type_ids, attention_mask, labels = [], [], [], [] for instance in copy.deepcopy(batch_data): input_ids.append(instance['token']['input_ids'][0].squeeze(0)) token_type_ids.append(instance['token']['token_type_ids'][0].squeeze(0)) attention_mask.append(instance['token']['attention_mask'][0].squeeze(0)) labels.append(instance['label']) return torch.stack(input_ids), torch.stack(token_type_ids), \ torch.stack(attention_mask), torch.tensor(labels) # Model class PTModel(nn.Module): def __init__(self, model, n_class): super(PTModel, self).__init__() self.n_class = n_class self.model = model self.linear = nn.Linear(768, self.n_class) self.softmax = nn.Softmax(dim=-1) def forward(self, input_ids, token_type_ids=None, attention_mask=None): cls_emb = self.model(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask) cls_emb = cls_emb[0][:, 0, :].squeeze(1) logits = self.linear(cls_emb) # logits = self.softmax(logits) return logits # train code def train1(): # data batch_size = 16 tokenizer = BertTokenizer.from_pretrained(pretrained_path) dataset = BertDataset('../data/dataset/data.txt', tokenizer) train_len = int(len(dataset)*0.8) train_dataset, dev_dataset = random_split(dataset=dataset, lengths=[train_len, len(dataset)-train_len]) train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=bert_collate_fn) dev_dataloader = DataLoader(dev_dataset, batch_size=batch_size, shuffle=True, collate_fn=bert_collate_fn) # model device = torch.device('cuda:{}'.format(args.cuda)) bert_model = BertModel.from_pretrained(pretrained_path) model = PTModel(model=bert_model, n_class=len(dataset.label2id)).to(device) optimizer = torch.optim.Adam(params=model.parameters(), lr=args.lr) scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[30, 40], gamma=0.1) loss_func = torch.nn.CrossEntropyLoss() # train for i in range(args.epoch): model.train() train_loss, dev_loss, f1_train, f1_dev = [], [], [], [] dev_pred_list, dev_gold_list = [], [] for input_ids, token_type_ids, attention_mask, label in tqdm(train_dataloader): input_ids, token_type_ids, attention_mask, label = input_ids.to(device), token_type_ids.to(device), \ attention_mask.to(device), label.to(device), outputs = model(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask) array_outputs = np.array(outputs.cuda().data.cpu()) optimizer.zero_grad() loss = loss_func(outputs, label) results = 
outputs.cuda().data.cpu().argmax(dim=1) score = f1_score(label.cuda().data.cpu(), results, average='micro') train_loss.append(loss.item()) f1_train.append(score) # optim loss.backward() optimizer.step() scheduler.step() print('epoch {}'.format(i)) print('train_loss:{}'.format(np.mean(train_loss))) print('train_f1:{}'.format(np.mean(f1_train))) The train log is following(only 10 epoches). And the result was already clear: The model could not learn anything!!! PS: the learning rate was 1e-3. 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:43<00:00, 5.72it/s] epoch 0 train_loss:4.217772917747498 train_f1:0.081 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.52it/s] dev_f1:0.08928571428571429 dev_loss:4.111690880760314 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:43<00:00, 5.71it/s] epoch 1 train_loss:4.094675525665283 train_f1:0.084 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.16it/s] dev_f1:0.0882936507936508 dev_loss:4.1316274839734275 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:43<00:00, 5.71it/s] epoch 2 train_loss:4.084259546279907 train_f1:0.08525 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.37it/s] dev_f1:0.08928571428571429 dev_loss:4.108004717599778 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:44<00:00, 5.62it/s] epoch 3 train_loss:4.0770455904006955 train_f1:0.09425 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.07it/s] dev_f1:0.08928571428571429 dev_loss:4.1077501395392035 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:45<00:00, 5.54it/s] epoch 4 train_loss:4.070150758743286 train_f1:0.086 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.41it/s] dev_f1:0.09027777777777778 dev_loss:4.103204295748756 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:45<00:00, 5.52it/s] epoch 5 train_loss:4.064209712982178 train_f1:0.0895 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.31it/s] dev_f1:0.08928571428571429 dev_loss:4.117827377622089 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:43<00:00, 5.70it/s] epoch 6 train_loss:4.065111406326294 train_f1:0.08425 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.34it/s] dev_f1:0.0882936507936508 dev_loss:4.099656305615864 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:44<00:00, 5.58it/s] epoch 7 train_loss:4.0547873935699466 train_f1:0.09175 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.30it/s] dev_f1:0.08928571428571429 dev_loss:4.105985126798115 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:43<00:00, 5.76it/s] epoch 8 train_loss:4.0595885887145995 train_f1:0.08875 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 19.26it/s] dev_f1:0.09027777777777778 dev_loss:4.121003010916332 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:45<00:00, 5.46it/s] epoch 9 train_loss:4.054850312232971 train_f1:0.08825 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 18.86it/s] dev_f1:0.08928571428571429 dev_loss:4.12501887669639 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 250/250 [00:45<00:00, 5.46it/s] epoch 10 train_loss:4.0566882238388065 train_f1:0.08525 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 63/63 [00:03<00:00, 18.85it/s] dev_f1:0.09126984126984126 dev_loss:4.103033436669244 Before this BertModel, I have tried LSTM, and the LSTM worked well. the dev f1 reached 0.96. # LSTM class SimpleModel(nn.Module): def __init__(self, **kwargs): super(SimpleModel, self).__init__() self.embedding = nn.Embedding.from_pretrained(kwargs['pretrained_embedding'], freeze=False) self.lstm = nn.LSTM(kwargs['pretrained_embedding'].shape[1], kwargs['hidden_size'], batch_first=True, bidirectional=True) self.linear = nn.Linear(kwargs['hidden_size']*2, kwargs['n_class']) def forward(self, inputs, lens): inputs = self.embedding(inputs) _, (h, _) = self.lstm(pack_padded_sequence(inputs, lens, batch_first=True, enforce_sorted=False)) h = h.permute(1, 0, 2).contiguous().view(h.shape[1], -1) logits = self.linear(h) logits = logits.softmax(dim=-1) return logits Could any good man tell me why this code can’t work. Is there something wrong with my writing? I have been confused for days… Thank you very much!
I think the problem might be that you call optimizer.zero_grad() after outputs are calculated, and it zeros out the gradients from the forward pass. Try putting that line before the line where outputs are calculated.
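Roughly, the inner loop would then look like this (only the relevant part is shown, with the same variable names as the original snippet):
for input_ids, token_type_ids, attention_mask, label in train_dataloader:
    input_ids = input_ids.to(device)
    token_type_ids = token_type_ids.to(device)
    attention_mask = attention_mask.to(device)
    label = label.to(device)

    optimizer.zero_grad()  # clear old gradients before the forward/backward pass
    outputs = model(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)
    loss = loss_func(outputs, label)
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()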
0
huggingface
Beginners
How to train from scratch with run_mlm.py, .txt file?
https://discuss.huggingface.co/t/how-to-train-from-scratch-with-run-mlm-py-txt-file/6588
Hello! Essentially what I want to do is: point the code at a .txt file, and get a trained model out. How can I use run_mlm.py to do this? I’d be satisfied if someone could help me figure out how to even just recreate the EsperBERTo tutorial. I’m getting bogged down in flags, trying to load tokenizers, errors, etc. What I’ve done so far: I managed to run through the EsperBERTo tutorial (here 20), and now I’m trying to do the same thing with run_mlm.py. I went to the examples at transformers/examples/pytorch/language-modeling at master Β· huggingface/transformers Β· GitHub 26 first, and I’ve been attempting to adapt those. First of all, the section at transformers/examples/pytorch/language-modeling at master Β· huggingface/transformers Β· GitHub 18 seems the closest to EsperBERTo. So I started with the example for β€œyour own training and validation files” I wanted to train from scratch, so I edited it to use --model_type I wanted to just use the oscar.eo.txt, so I omitted --validation_file and --do_eval I set --max_seq_length 512 to match the tutorial. I keep running into problems with the tokenizer. run_mlm.py doesn’t let you not specify a tokenizer when you are training from scratch, apparently. if you give it --model_type roberta and --tokenizer_name <path to the vocab.json and mergest.txt> it complains about not being able to find a config.json if you then put a config.json for the Model, not the tokenizer in that folder, the error goes away. But why is it looking in the tokenizer folder for model config? And why does it need a model config at all? I told it model type? What finally I did was: made a folder called EsperBERTo. In there I put the vocab.json and merges.txt from the Colab tutorial, and a config.json that I found for RobertaForMaskedLM here 9 ran the following command: python run_mlm.py \ --model_type roberta \ --tokenizer_name /path/to/EsperBERTo/ \ --train_file /path/to/oscar.eo.txt \ --max_seq_length 512 \ --do_train \ --output_dir ./output/test-mlm But now I’m getting β€œIndexError: index out of range in self”. Some Googling lead me to believe this might be to do with the vocab size? I edited config.json to match the tokenizer (52000 vocab size), but no dice. Here’s config.json { "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 207, "model_type": "roberta", "num_attention_heads": 6, "num_hidden_layers": 3, "pad_token_id": 1, "type_vocab_size": 1, "vocab_size": 52000 } At this point I’m thoroughly confused as to how to proceed, and I’m not even sure I’m going down the right path. Can anyone give guidance on the proper process to go from .txt file to trained model in the command line? I’m not having much success transferring my knowledge (and my trained tokenizer) from Colab.
You seem to be on the correct path, could you tell us more about the index error you encountered? What did the stack trace look like? Also could you try briefly with another model than roberta (like bert for instance) and report if the error disappears?
0
huggingface
Beginners
KeyError: β€˜loss’ during Fine Tuning bert-base-italian-cased for QA
https://discuss.huggingface.co/t/keyerror-loss-during-fine-tuning-bert-base-italian-cased-for-qa/6638
I was finetuning bert-base-italian-cased on SQuAD-it dateset with the following arguments args = TrainingArguments( f"test-squad_it", evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=8, per_device_eval_batch_size=8, num_train_epochs=3, weight_decay=0.01, label_names = ["start_positions", "end_positions"] ) trainer = Trainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, ) Trying to do trainer.train() it suddenly throws this error: KeyError Traceback (most recent call last) <ipython-input-168-4e078f57a6ea> in <module>() 1 #We can now finetune our model by just calling the train method: ----> 2 trainer.train() 3 frames /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 1270 tr_loss += self.training_step(model, inputs) 1271 else: -> 1272 tr_loss += self.training_step(model, inputs) 1273 self.current_flos += float(self.floating_point_ops(inputs)) 1274 /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs) 1732 loss = self.compute_loss(model, inputs) 1733 else: -> 1734 loss = self.compute_loss(model, inputs) 1735 1736 if self.args.n_gpu > 1: /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs) 1774 else: 1775 # We don't use .loss here since the model may return tuples instead of ModelOutput. -> 1776 loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0] 1777 1778 return (loss, outputs) if return_outputs else loss /usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k) 1736 if isinstance(k, str): 1737 inner_dict = {k: v for (k, v) in self.items()} -> 1738 return inner_dict[k] 1739 else: 1740 return self.to_tuple()[k] KeyError: 'loss' I’ve already had a look on similar questions about this same issue, but nothing seems to work and I’m really desperate. Any suggestions? EDIT My data are taken from SQuAD-it. I’ve created this dataDict: DatasetDict({ train: Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: 48328 }) validation: Dataset({ features: ['answer_text', 'answer_start', 'title', 'context', 'question', 'answers', 'id'], num_rows: 5831 }) }) And then I’ve preprocessed the data as follows (from a tutorial with little adjustments): def prepare_train_features(examples): # Tokenize our examples with truncation and padding, but keep the overflows using a stride. This results # in one example possible giving several features when a context is long, each of those features having a # context that overlaps a bit the context of the previous feature. tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) # Since one example might give us several features if it has a long context, we need a map from a feature to # its corresponding example. This key gives us just that. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # The offset mappings will give us a map from token to character position in the original context. This will # help us compute the start_positions and end_positions. 
offset_mapping = tokenized_examples.pop("offset_mapping") # Let's label those examples! tokenized_examples["start_positions"] = [] tokenized_examples["end_positions"] = [] for i, offsets in enumerate(offset_mapping): # We will label impossible answers with the index of the CLS token. input_ids = tokenized_examples["input_ids"][i] cls_index = input_ids.index(tokenizer.cls_token_id) # Grab the sequence corresponding to that example (to know what is the context and what is the question). sequence_ids = tokenized_examples.sequence_ids(i) # One example can give several spans, this is the index of the example containing this span of text. sample_index = sample_mapping[i] answers = examples["answers"][sample_index] # If no answers are given, set the cls_index as answer. if answers["answer_start"] == 0: tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Start/end character index of the answer in the text. start_char = answers["answer_start"] end_char = start_char + len(answers["text"]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != (1 if pad_on_right else 0): token_start_index += 1 # End token index of the current span in the text. token_end_index = len(input_ids) - 1 while sequence_ids[token_end_index] != (1 if pad_on_right else 0): token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char): tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Otherwise move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char: token_start_index += 1 tokenized_examples["start_positions"].append(token_start_index - 1) while offsets[token_end_index][1] >= end_char: token_end_index -= 1 tokenized_examples["end_positions"].append(token_end_index + 1) return tokenized_examples tokenized_datasets = train_dataset.map(prepare_train_features, batched=True, remove_columns=train_dataset["train"].column_names) I’m quite new in coding and maybe I’m relying too much in other people’s work without understanding every detail.
How is your model created? How is your data processed? It's hard to help debug the root of the error without seeing those.
0
huggingface
Beginners
Evaluating QA model on single SQuAD file
https://discuss.huggingface.co/t/evaluating-qa-model-on-single-squad-file/6622
Hello guys I would like to evaluate a model from the HF repo (β€˜mrm8488/bert-italian-finedtuned-squadv1-it-alfa’) on a SQuAD file I compiled, just to have a rough estimation of what could be the metrics. This is the code i wrote: from transformers import AutoTokenizer, AutoModelForQuestionAnswering, Trainer, TrainingArguments import torch from transformers import default_data_collator import json # Model from HuggingFace model_checkpoint = 'mrm8488/bert-italian-finedtuned-squadv1-it-alfa' # Import tokenizer my_tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) # Import model my_model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint) # Dataset for evaluation eval_data_path = '/content/drive/MyDrive/BERT/SQuAD_files/result.json' with open(eval_data_path) as json_file: data = json.load(json_file) data_collator = default_data_collator trainer = Trainer( my_model, data_collator=data_collator, tokenizer=my_tokenizer ) trainer.evaluate(data) Unfortunately, this does not work and I can’t understand why. This is the error: KeyError Traceback (most recent call last) <ipython-input-31-54109037e744> in <module>() ----> 1 trainer.evaluate(data) 5 frames /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix) 2006 prediction_loss_only=True if self.compute_metrics is None else None, 2007 ignore_keys=ignore_keys, -> 2008 metric_key_prefix=metric_key_prefix, 2009 ) 2010 /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix) 2145 observed_num_examples = 0 2146 # Main evaluation loop -> 2147 for step, inputs in enumerate(dataloader): 2148 # Update the observed num examples 2149 observed_batch_size = find_batch_size(inputs) /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self) 515 if self._sampler_iter is None: 516 self._reset() --> 517 data = self._next_data() 518 self._num_yielded += 1 519 if self._dataset_kind == _DatasetKind.Iterable and \ /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self) 555 def _next_data(self): 556 index = self._next_index() # may raise StopIteration --> 557 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 558 if self._pin_memory: 559 data = _utils.pin_memory.pin_memory(data) /usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] KeyError: 0 A glimpse of my SQuAD formatted file: { "data": [ { "paragraphs": [ { "qas": [ { "question": "Qual Γ¨ l’etΓ ?", "id": 78079, "answers": [ { "answer_id": 89658, "document_id": 84480, "question_id": 78079, "text": "02/01/1966", "answer_start": 113, "answer_category": "SHORT" } ], "is_impossible": false }, { "question": "Qual Γ¨ il titolo di studio?", "id": 78082, "answers": [ { "answer_id": 89661, "document_id": 84480, "question_id": 78082, "text": "media superiore", "answer_start": 1157, "answer_category": "SHORT" } ], 
"is_impossible": false }, ...] "context" = "..." "document_id" = "..." What am I doing wrong? PS: is there a tool from the HF libraries to annotate files for question answering in a smart way? Thanks a lot
The Trainer requires processed data. Have a look at the run_qa examples to see how it can be done.
0
huggingface
Beginners
I got 'ValueError: You have to specify either input_ids or inputs_embeds' when I am training GPT2 using huggingface Trainer
https://discuss.huggingface.co/t/i-got-valueerror-you-have-to-specify-either-input-ids-or-inputs-embeds-when-i-am-training-gpt2-using-huggingface-trainer/6611
Below is the error code generated ValueError Traceback (most recent call last) <ipython-input-38-29d47e6260b2> in <module>() ----> 1 trainer.train( ) 4 frames /usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/modeling_gpt2.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions, output_hidden_states, return_dict) 674 batch_size = inputs_embeds.shape[0] 675 else: --> 676 raise ValueError("You have to specify either input_ids or inputs_embeds") 677 678 device = input_ids.device if input_ids is not None else inputs_embeds.device and here is my code from transformers import GPT2Tokenizer, GPT2Model, Trainer, trainer_utils, TrainingArguments tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') from datasets import Dataset dataset = Dataset.from_text('/content/chatbot.txt') trainArgs = TrainingArguments( output_dir = os.path.join( os.getcwd(), 'customGPT2' ) , overwrite_output_dir = True , do_train=True , do_eval=True , evaluation_strategy='steps' , per_device_train_batch_size=4 , per_device_eval_batch_size =4 , gradient_accumulation_steps=1 , eval_accumulation_steps=1 , weight_decay=0 , adam_epsilon= 1e-08 , max_grad_norm = 1.0 , num_train_epochs =3.0 , max_steps = -1 , lr_scheduler_type = trainer_utils.SchedulerType('linear') , logging_dir = os.path.join( os.getcwd(), 'log' ) , logging_steps = 2000 , logging_strategy = 'steps' , save_steps = 2000 , save_strategy = 'steps' , seed = 66 , fp16 = False , fp16_opt_level = 'O1') trainer=Trainer( model, args = trainArgs,train_dataset=dataset) As I am quite new using trainer I actually try to follow as closely to docs as possible and only change few thing such as dataset because I need to use local dataset but I make sure to import it to datasets.Dataset() to be the same as the format that docs require and changing some training arguments. Thank you
You haven't processed your dataset: it only contains the raw texts and not the input IDs the model expects. Have a look at the training tutorial to see how you can tokenize it!
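A rough sketch of that preprocessing for a causal LM like GPT-2 is below. Note it uses GPT2LMHeadModel rather than the bare GPT2Model so a loss can be computed, and DataCollatorForLanguageModeling(mlm=False) creates the labels; the file path, max_length and training arguments are placeholders:
from datasets import Dataset
from transformers import (GPT2TokenizerFast, GPT2LMHeadModel,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token        # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained('gpt2')  # LM head on top, so outputs include a loss

dataset = Dataset.from_text('/content/chatbot.txt')

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=['text'])

# mlm=False -> causal LM: the collator copies input_ids into labels for you
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='customGPT2', num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()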
0
huggingface
Beginners
Request to reset API key
https://discuss.huggingface.co/t/request-to-reset-api-key/6613
Hi, may I get some help resetting my API key? I might have leaked mine. Thanks! @julien-c @pierric I followed a previous post: How can I renew my API key
@r3dhummingbird Your api token was successfully renewed.
0
huggingface
Beginners
Wav2Vec2-XLSR-53
https://discuss.huggingface.co/t/wav2vec2-xlsr-53/6587
I tried to run notebook facebook/wav2vec2-large-xlsr-53 Β· Hugging Face But when I run model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-base-960h", attention_dropout=0.1, hidden_dropout=0.1, feat_proj_dropout=0.0, mask_time_prob=0.05, layerdrop=0.1, gradient_checkpointing=True, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer) ) I get this error RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at ../caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at ../caffe2/serialize/inline_container.cc:132) frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 135 (0x14e2f6787 in libc10.dylib) frame #1: caffe2::serialize::PyTorchStreamReader::init() + 2350 (0x13795414e in libtorch.dylib) frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 143 (0x13795379f in libtorch.dylib) frame #3: void pybind11::cpp_function::initialize<void pybind11::detail::initimpl::constructor<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::execute<pybind11::class_<caffe2::serialize::PyTorchStreamReader>, 0>(pybind11::class_<caffe2::serialize::PyTorchStreamReader>&)::'lambda'(pybind11::detail::value_and_holder&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >), void, pybind11::detail::value_and_holder&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::detail::is_new_style_constructor>(pybind11::class_<caffe2::serialize::PyTorchStreamReader>&&, (*)(0...), void pybind11::detail::initimpl::constructor<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::execute<pybind11::class_<caffe2::serialize::PyTorchStreamReader>, 0>(pybind11::class_<caffe2::serialize::PyTorchStreamReader>&)::'lambda'(pybind11::detail::value_and_holder&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) const&...)::'lambda'(pybind11::detail::function_call&)::operator()(pybind11::detail::function_call&) const + 147 (0x1346277c3 in libtorch_python.dylib) frame #4: pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3382 (0x13401fe66 in libtorch_python.dylib) <omitting python frames> During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-108-5337759b7278> in <module> 1 from transformers import Wav2Vec2ForCTC 2 ----> 3 model = Wav2Vec2ForCTC.from_pretrained( 4 #"facebook/wav2vec2-base-960h", 5 "facebook/wav2vec2-large-xlsr-53", /opt/anaconda3/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1062 state_dict = torch.load(resolved_archive_file, map_location="cpu") 1063 except Exception: -> 1064 raise OSError( 1065 f"Unable to load weights from pytorch checkpoint file for '{pretrained_model_name_or_path}' " 1066 f"at '{resolved_archive_file}'" OSError: Unable to load weights from pytorch checkpoint file for 'facebook/wav2vec2-large-xlsr-53' at 
'/Users/elizavetatrusova/.cache/huggingface/transformers/5d2a20b45a1689a376ec4a6282b9d9be42f931cdf8daf07c3668ba1070a059d9.db2a69eb44bf7b1efcfff155d4cc22155230bd8c0941701b064e9c17429a623d'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. I run it in Jupyter, what can I do with it?
Perhaps it's a permissions error?
0
huggingface
Beginners
How to see BERT, BART, ... output dimensions?
https://discuss.huggingface.co/t/how-to-see-bert-bart-output-dimensions/6517
How can I see the output dimensions for BERT Large, BART Large, RoBERTa Large, BART Large CNN, and XLM-RoBERTa Large?
Really, no one knows?!
0
huggingface
Beginners
Not able to import MBart50TokenizerFast from transformers
https://discuss.huggingface.co/t/not-able-to-import-mbart50tokenizerfast-from-transformers/3706
Hi I am not able to import MBart50TokenizerFast from transformers. Below is the error that it is giving In [6]: from transformers import MBartForConditionalGeneration, MBart50TokenizerFast --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-6-6bac02268080> in <module> ----> 1 from transformers import MBartForConditionalGeneration, MBart50TokenizerFast ImportError: cannot import name 'MBart50TokenizerFast' from 'transformers' (unknown location) Below are the transformers and torch versions $ pip freeze | grep transformers transformers==4.3.2 $ pip freeze | grep torch torch==1.7.1
AFAIK this model+tokenizer is not part of a full release yet. You'll have to install from the master branch to use it.
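In a Colab/notebook cell that would be roughly (followed by a kernel restart if the old version was already imported):
!pip install git+https://github.com/huggingface/transformers.git
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast  # should resolve after the install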
0
huggingface
Beginners
AutoTokenizer vs. regular Tokenizer
https://discuss.huggingface.co/t/autotokenizer-vs-regular-tokenizer/6491
I understand that there are a few different tokenizers, e.g. DistilBertTokenizerFast, etc. However, I don't understand the concept of "Auto" in tokenizer selection. Using the IMDB text as an example, what do I get if I use AutoTokenizer to tokenize the text?
The AutoTokenizer will work on any checkpoint and pick the proper architecture for you (whereas DistilBertTokenizerFast will only work for DistilBERT checkpoints).
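Concretely, both calls end up giving you the same tokenizer when the checkpoint is a DistilBERT one; a quick way to convince yourself (the checkpoint and example sentence are arbitrary):
from transformers import AutoTokenizer, DistilBertTokenizerFast

text = "This movie was surprisingly good."

auto_tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # class resolved from the checkpoint's config
specific_tok = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

print(type(auto_tok).__name__)                                          # DistilBertTokenizerFast
print(auto_tok(text)["input_ids"] == specific_tok(text)["input_ids"])   # True: identical token ids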
0
huggingface
Beginners
Wav2vec2 for long audiofiles
https://discuss.huggingface.co/t/wav2vec2-for-long-audiofiles/6446
Hi, I'm trying to apply wav2vec2 models to long audio files (~1 h) for speech-to-text. However, processing the entire audio file at once is not feasible because it requires more than 16 GB of memory. How can I feed a sound file as an audio stream into the wav2vec2 models?
Here is one way to do this with librosa.stream: github.com/huggingface/transformers can't allocate memory error with wav2vec2 106 opened Feb 24, 2021 closed Apr 23, 2021 kleekaai I am trying out the wav2vec2 model for ASR from the huggingface library. Here, I… am passing a 7 min(~15 MB file) long wav file having a conversation(english) to the wav2vec2 model. I am getting "can't allocate memory" error. I found that the model uses all 64 GB of the available RAM. Can anyone help with this. - `transformers` version: 4.3.2 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: (NA) - Using distributed or parallel set-up in script?: (NA) Code ``` import os import librosa import soundfile as sf from pydub import AudioSegment def convert_audio_segment(fp, upload_dir_path): """Convert audio file""" USER_UPLOAD_DIR = upload_dir_path formats_to_convert = ['.m4a'] dirpath = os.path.abspath(USER_UPLOAD_DIR) if fp.endswith(tuple(formats_to_convert)): (path, file_extension) = os.path.splitext(fp) file_extension_final = file_extension.replace('.', '') file_handle = '' try: track = AudioSegment.from_file(fp, file_extension_final) print("track", track) wav_path = fp.replace(file_extension_final, 'wav') file_handle = track.export(wav_path, format='wav') except Exception: print("ERROR CONVERTING " + str(fp)) return file_handle else: print("No file format conversion required " + str(fp)) return fp def load_wav2vec_100h_model(): tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-100h") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h") return tokenizer, model def correct_sentence(input_text): sentences = nltk.sent_tokenize(input_text) return (' '.join([s.replace(s[0],s[0].capitalize(),1) for s in sentences])) def asr_transcript(tokenizer, model, input_file): speech, fs = sf.read(input_file) if len(speech.shape) > 1: speech = speech[:,0] + speech[:,1] if fs != 16000: speech = librosa.resample(speech, fs, 16000) input_values = tokenizer(speech, return_tensors="pt").input_values logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = tokenizer.decode(predicted_ids[0]) return correct_sentence(transcription.lower()) if __name__ == "__main__": tokenizer_100h, model_100h = load_wav2vec_100h_model() wav_input = 'Recording_biweu.wav' fp = wav_input processed_file = convert_audio_segment(str(fp), str(data_dir)) text = asr_transcript(tokenizer_100h,model_100h,processed_file) print(text) ``` I am adding more details about my wav file here ``` General Complete name : Recording_biweu.wav Format : Wave File size : 13.8 MiB Duration : 7 min 30 s Overall bit rate mode : Constant Overall bit rate : 256 kb/s Track name : Recording_biweu Recorded date : 2021 Writing application : Lavf57.83.100 Audio Format : PCM Format settings : Little / Signed Codec ID : 1 Duration : 7 min 30 s Bit rate mode : Constant Bit rate : 256 kb/s Channel(s) : 1 channel Sampling rate : 16.0 kHz Bit depth : 16 bits Stream size : 13.8 MiB (100%) ``` Error ``` Some weights of the model checkpoint at facebook/wav2vec2-base-100h were not used when initializing Wav2Vec2ForCTC: ['wav2vec2.mask_time_emb_vector'] - This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). 
- This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Traceback (most recent call last): File "asr_wav2vec2.py", line 130, in <module> text = asr_transcript(tokenizer_100h,model_100h,processed_file) File "asr_wav2vec2.py", line 96, in asr_transcript logits = model(input_values).logits File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 795, in forward outputs = self.wav2vec2( File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 646, in forward encoder_outputs = self.encoder( File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 457, in forward hidden_states, attn_weights = layer(hidden_states, output_attentions=output_attentions) File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 392, in forward hidden_states, attn_weights, _ = self.attention(hidden_states, output_attentions=output_attentions) File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 286, in forward attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 24373495488 bytes. Error code 12 (Cannot allocate memory) ```
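A rough sketch of that streaming approach combined with wav2vec2 is below. The chunk size is arbitrary, and transcribing fixed blocks independently can clip words at the boundaries, so treat it as a starting point rather than a finished solution; the file path is a placeholder:
import librosa
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

path = "long_recording.wav"  # placeholder: the ~1h audio file
sr = librosa.get_samplerate(path)
frame_length = sr * 30       # roughly 30 seconds of audio per chunk (assumed size)

stream = librosa.stream(path, block_length=1,
                        frame_length=frame_length,
                        hop_length=frame_length)  # non-overlapping chunks

pieces = []
for block in stream:
    block = librosa.resample(block, orig_sr=sr, target_sr=16_000)  # the model expects 16 kHz audio
    inputs = processor(block, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    pieces.append(processor.batch_decode(ids)[0])

print(" ".join(pieces))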
0
huggingface
Beginners
"run_lm_finetuning.py" was replaced?
https://discuss.huggingface.co/t/run-lm-finetuning-py-was-replaced/992
Hello to all. Looking at tutorials and examples, I see that many do fine-tuning with run_lm_finetuning.py. However, when I install the latest version of transformers, this file doesn't exist; I haven't found it in the GitHub repository either. So my question is whether this file was discontinued and, if so, what is the actual .py file that should be used for fine-tuning a language model?
Hi, it's now here: https://github.com/huggingface/transformers/tree/master/examples/language-modeling
0
huggingface
Beginners
Bug Report: Mask token mismatch with the model on hosted inference API of Model Hub
https://discuss.huggingface.co/t/bug-report-mask-token-mismatch-with-the-model-on-hosted-inference-api-of-model-hub/6476
In my model card, I used to be able to run the hosted inference successfully, but recently it prompted an error: "<mask>" must be present in your input. My model uses RoBERTa MLM with the BERT tokenizer, so the mask token is actually "[MASK]". I have already set it in tokenizer_config.json, but the inference API still mismatches it. In the past it was OK, but recently it started to prompt an error; it seems the front end now double-checks the mask token. How can I set the mask token in an appropriate way? Is setting the mask token for the inference API documented anywhere? Thanks!
To reproduce, steps to reproduce the behavior:
Go to ethanyt/guwenbert-base Β· Hugging Face
Run an example with "[MASK]"
Expected behavior: in the past it was OK. See the snapshot in guwenbert/README_EN.md at main Β· Ethan-yt/guwenbert Β· GitHub
Should be resolved in Mask token mismatch with the model on hosted inference API of Model Hub Β· Issue #11884 Β· huggingface/transformers Β· GitHub, but if possible do not open duplicate issues/forum posts. Thanks!
0