docs: stringclasses (4 values)
category: stringlengths (3–31)
thread: stringlengths (7–255)
href: stringlengths (42–278)
question: stringlengths (0–30.3k)
context: stringlengths (0–24.9k)
marked: int64 (0–1)
huggingface
🤗Transformers
Get output embedding of FeatureExtractor
https://discuss.huggingface.co/t/get-output-embedding-of-featureextractor/5636
Can anyone point me to how to get the output of a feature extractor from a Transformers model? My specific case: I want to build voiceprint identification (with n-shot), maybe using wav2vec2 embeddings for automatic speaker identification; since the model is already trained on large amounts of data, it should give at least a reasonable voice embedding for audio (the feature extractor, Wav2Vec2FeatureExtractor, is mainly used in the fine-tuning pipeline). In particular, I want to get a 1x512 or 1x768 embedding before it is converted to a text mapping. Maybe a Siamese network is overkill, but I may at least try a smaller version. Thanks.
Maybe the solution is somewhat similar to this (without a tokenizer), though it will probably carry no information from the pretrained audio model: feature_extraction = pipeline('feature-extraction', model="distilroberta-base", tokenizer="distilroberta-base") features = feature_extraction("i am sentence") Source: machine learning - Getting sentence embedding from huggingface Feature Extraction Pipeline - Stack Overflow
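For the audio case in the question, a minimal sketch of pulling an utterance-level embedding out of a pretrained wav2vec2 model (not from the thread; the checkpoint name and the mean-pooling choice are assumptions):

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# assumed checkpoint; any wav2vec2 base/large checkpoint works the same way
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.random.randn(16000).astype(np.float32)  # stand-in for 1 s of 16 kHz audio
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, num_frames, 768)

# one common way to get a single utterance-level voiceprint: mean-pool over time
embedding = hidden_states.mean(dim=1)  # (1, 768)
```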
0
huggingface
🤗Transformers
Error when finetuning pretrained huggingface conv-ai chatbot model
https://discuss.huggingface.co/t/error-when-finetuning-pretrained-huggingface-conv-ai-chatbot-model/5490
I’ve been trying to train the model using my own custom dataset, but the kernel terminates the process just before running the first epoch. The same thing happens both on my local machine and on Google Colab. The code I run is: ! python train.py --dataset_path counsel_chat_250-tokens_full.json --gradient_accumulation_steps=4 --lm_coef=2.0 --max_history=1 --n_epochs=3 --num_candidates=4 --train_batch_size=2
@julien-c any thoughts?
0
huggingface
🤗Transformers
ASR: Offset and probability
https://discuss.huggingface.co/t/asr-offset-and-probability/5372
wav2vec2 - following this code: can't allocate memory error with wav2vec2 · Issue #10366 · huggingface/transformers · GitHub. Question: how can I determine the offset and the probability of each spoken word? Best wishes! Leah
@double, were you able to find anything on this? If not, we can create an issue and work on something like this.
0
huggingface
🤗Transformers
Tutorial: Fine-tuning with custom datasets – sentiment, NER, and question answering
https://discuss.huggingface.co/t/tutorial-fine-tuning-with-custom-datasets-sentiment-ner-and-question-answering/733
Interested in fine-tuning on your own custom datasets but unsure how to get going? I just added a tutorial to the docs with several examples that each walk you through downloading a dataset, preprocessing & tokenizing, and training with either Trainer, native PyTorch, or native TensorFlow 2. Examples include: Sequence classification (sentiment) – IMDb; Token classification (NER) – W-NUT Emerging and Rare entities; Question answering (span selection) – SQuAD 2.0. Click the Open in Colab button at the top to open a colab notebook in either TF or PT. This tutorial demonstrates one workflow for working with custom datasets, but there are many valid ways to accomplish the same thing. The intention is to be demonstrative rather than definitive. Also, we highly recommend you check out and contribute to our NLP datasets & metrics library for easy access to 150+ datasets. Tutorial: https://huggingface.co/transformers/master/custom_datasets.html Feedback and questions welcome!
I spotted a minor typo. “…which we can use for for evaluation and tuning without taining our test set results.” I believe you meant to say tainting. Otherwise, great tutorial. I’m looking forward to digging in more.
0
huggingface
🤗Transformers
Fine-tuning BERT Model on domain specific language and for classification
https://discuss.huggingface.co/t/fine-tuning-bert-model-on-domain-specific-language-and-for-classification/3106
Hi guys First of all, what I am trying to do: I want to fine-tune a BERT Model on domain specific language and in a second step further fine-tune it for classification. To do so, I want to use a pretrained model, what forces me to use the original tokenizer (cannot use own vocab). I would like to share my code with you and have your opinions (are there mistakes?): First we load the pre-trained tokenizer and model: from transformers import BertTokenizer, BertForMaskedLM tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForMaskedLM.from_pretrained('bert-base-uncased') We are using BertForMaskedLM since the first fine-tuning step is to train the model on domain specific language (a text file with one sentence per line). Next we are reading the text file: from transformers import LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="test.txt", block_size=128 ) and define the data collator as: from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) Finally we are training the model for MLM: from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./TestBERT", overwrite_output_dir=True, num_train_epochs=1, per_gpu_train_batch_size=16, save_steps=10_000, save_total_limit=2 ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset ) trainer.train() We save the model and reload it for sequence classification (huggingface handles the heads): from transformers import BertForSequenceClassification trainer.save_model("./TestBERT") model = BertForSequenceClassification.from_pretrained("./TestBERT", num_labels=2) Finally we can fine-tune the model for sequence classification as usual. 
E.g.: !wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz !tar -xf aclImdb_v1.tar.gz from pathlib import Path def read_imdb_split(split_dir): split_dir = Path(split_dir) texts = [] labels = [] for label_dir in ["pos", "neg"]: for text_file in (split_dir/label_dir).iterdir(): texts.append(text_file.read_text()) labels.append(0 if label_dir is "neg" else 1) return texts, labels train_texts, train_texts= read_imdb_split('aclImdb/train') test_texts, test_labels = read_imdb_split('aclImdb/test') from sklearn.model_selection import train_test_split train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, further2, test_size=.2) train_encodings = tokenizer(train_texts, truncation=True, padding=True) val_encodings = tokenizer(val_texts, truncation=True, padding=True) test_encodings = tokenizer(test_texts, truncation=True, padding=True) import torch class IMDbDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) train_dataset = IMDbDataset(train_encodings, train_labels) val_dataset = IMDbDataset(val_encodings, val_labels) test_dataset = IMDbDataset(test_encodings, test_labels) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=8, # batch size per device during training per_device_eval_batch_size=8, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train() Does anyone detect any obvious mistakes I am making or is this the correct proceeding? Further I want to freeze some layers during the first fine-tuning step to avoid forgetting (of the pre-trained learning). I assume I would have to write my own trainer for it (will do any maybe comment on this post). Best
Does anyone detect any obvious mistakes I am making or is this the correct proceeding This question seems pretty vague, could maybe post a specific question so we can help better
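On the layer-freezing idea mentioned at the end of the question, a rough sketch (assuming bert-base-uncased and keeping only the top two encoder layers plus the MLM head trainable) could look like this:

```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# freeze everything except the last two encoder layers and the MLM head
trainable_prefixes = ("bert.encoder.layer.10", "bert.encoder.layer.11", "cls")
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(trainable_prefixes)

print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable parameters")
```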
0
huggingface
🤗Transformers
🤗 Trainer not saving after save_steps
https://discuss.huggingface.co/t/trainer-not-saving-after-save-steps/5464
I am using Trainer for training. My training args are as follows: args = TrainingArguments( output_dir="bigbird-nq-output-dir", overwrite_output_dir=False, do_train=True, do_eval=True, evaluation_strategy="epoch", per_device_train_batch_size=2, per_device_eval_batch_size=2, gradient_accumulation_steps=4, learning_rate=5e-5, num_train_epochs=3, logging_strategy="epoch", save_strategy="steps", run_name="bigbird-nq", disable_tqdm=False, load_best_model_at_end=True, report_to="wandb", remove_unused_columns=False, fp16=True, ) I am unable to find checkpoints after every 500 steps. Any reasons why??
With load_best_model_at_end=True, your save_strategy will be ignored and default to evaluation_strategy. So you will find one checkpoint at the end of each epoch.
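If checkpoints every 500 steps are still wanted together with load_best_model_at_end, one option (a sketch, not from the thread) is to put both strategies on "steps":

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bigbird-nq-output-dir",
    evaluation_strategy="steps",   # evaluate every eval_steps
    eval_steps=500,
    save_strategy="steps",         # now matches the evaluation strategy
    save_steps=500,
    load_best_model_at_end=True,   # best checkpoint picked among the step checkpoints
)
```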
0
huggingface
🤗Transformers
How to reduce time at production in T5Tokenizer?
https://discuss.huggingface.co/t/how-to-reduce-time-at-production-in-t5tokenizer/4138
I am trying to reduce the time in production. I am using TensorFlow on Amazon SageMaker. I am unable to figure out how to reduce the time. Currently, I am facing two issues: I am unable to figure out why the memory distribution in GPU is uneven. I am using the below-mentioned code: from transformers import T5Tokenizer, TFT5ForConditionalGeneration import time # initialize the model architecture and weights model = TFT5ForConditionalGeneration.from_pretrained("t5-large") # initialize the model tokenizer tokenizer = T5Tokenizer.from_pretrained("t5-large") import tensorflow as tf start_time = time.time() #strategy = tf.distribute.MultiWorkerMirroredStrategy() #strategy = tf.distribute.MirroredStrategy() #with strategy.scope(): inputs = tokenizer("summarize: " + text, return_tensors="tf").input_ids outputs = model.generate( inputs, max_length=150, min_length=41, length_penalty=5, num_beams=2, no_repeat_ngram_size=2, early_stopping=True) print(tokenizer.decode(outputs[0])) elapsed_time = time.time() - start_time print(elapsed_time)
Hi, thanks for posting on the forum! what do you mean by time at production? training time? If you run on SageMaker Training API 1, you can use the Profiler to diagnose bottlenecks.
0
huggingface
🤗Transformers
It takes so long before the model start training, wav2vec2 fine-tuning
https://discuss.huggingface.co/t/it-takes-so-long-before-the-model-start-training-wav2vec2-fine-tuning/5384
Almost the same setup as the fine-tuning example: while running trainer.train(), it takes a long time (tens of minutes) before the model starts training. Is that normal? And how could we use apex (torch) for faster fine-tuning with the Trainer?
Setting group_by_length=False fixed this. It seems that grouping by length takes too much time.
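For reference, a minimal sketch of both points (disabling length grouping, and mixed precision via the Trainer for the apex-style speedup); the output_dir is a placeholder:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-finetune",  # placeholder
    group_by_length=False,           # skip the slow length-sorting pass before training starts
    fp16=True,                       # mixed-precision training through the Trainer
)
```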
0
huggingface
🤗Transformers
Wav2vec2-large-xlsr-53
https://discuss.huggingface.co/t/wav2vec2-large-xlsr-53/5425
Hi, I wanted to check on this model released by Facebook. It is a cross-lingual model; does that mean it can understand audio containing mixed languages? Can I fine-tune this model with a dataset in 2 languages? This is so that I do not need to detect the audio's language before I do the ASR. Thanks, Becks
I have not personally tried it but finetuning on 2 languages simultaneously would probably work. But as the number of tokens would be higher the accuracy would be lower for the same amount of data. Another way you can approach it is by running 3 models in parallel. Language detection can be a separate pipeline. I am assuming that you are doing this in a business context. So you can have 3 models running in parallel, one for language detection and one each for the two languages. Based on the output of the language model you can pick the respective ASR output.
0
huggingface
🤗Transformers
Truncated last sentence on summaries
https://discuss.huggingface.co/t/truncated-last-sentence-on-summaries/1253
I am running the summarization finetuning on the latest master branch version of examples/seq2seq. I am using a custom dataset. However, the last sentence on some of the resulting summaries are truncated. The issue appears to worsen as I increase my dataset size, resulting in a greater proportion of truncated summaries. My parameters are as follows: '--data_dir=.../data', '--train_batch_size=1', '--eval_batch_size=1', '--output_dir=.../output', '--num_train_epochs=5', '--max_target_length=1024' '--max_source_length=56' '--model_name_or_path=facebook/bart-large' Here is a very small data set that I was able to reproduce the issue with (500 training instances). Is this expected? Any insights would be helpful. Thank you!
I’m experiencing this same issue with BART transformer and I created a Stackoverflow post about the issue: https://stackoverflow.com/questions/66996270/limiting-bart-huggingface-model-to-complete-sentences-of-maximum-length 10 Here are some of the output summaries with truncated sentences: EX1: The opacity at the left lung base appears stable from prior exam. There is elevation of the left hemidi EX 2: There is normal mineralization and alignment. No fracture or osseous lesion is identified. The ankle mort Were you able to find a solution to this problem you encountered @hf324?
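One hedged workaround (not from either thread; the checkpoint is just an example) is to post-process the decoded summary and trim it back to the last sentence-ending punctuation:

```python
import re
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

def summarize(text: str, max_length: int = 142) -> str:
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    ids = model.generate(**inputs, max_length=max_length, num_beams=4)
    summary = tokenizer.decode(ids[0], skip_special_tokens=True)
    # drop any dangling fragment after the last ".", "!" or "?"
    match = re.search(r"^(.*[.!?])", summary, flags=re.S)
    return match.group(1) if match else summary
```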
0
huggingface
🤗Transformers
Save and deploy distilbert model in AWS SageMaker
https://discuss.huggingface.co/t/save-and-deploy-distilbert-model-in-aws-sagemaker/5329
I am trying to download the Hugging Face distilbert model and save it to S3. The model itself does not have a deploy method, so I am saving to S3, instantiating it, and trying to deploy. May I know if this will work with SageMaker, and what I am doing wrong? Here are the steps: model_name = 'distilbert-base-uncased-distilled-squad' model = DistilBertForQuestionAnswering.from_pretrained(model_name) tokenizer = DistilBertTokenizerFast.from_pretrained(model_name) The below works and gives output: context = "xxxx" question = "yyy?" nlp = pipeline('question-answering', model=model, tokenizer=tokenizer) nlp({ 'question': 'What organization is the IPCC a part of?', 'context': context }) Save the model to a local folder: model.save_pretrained('./scripts/mymodel') Zip the model file: with tarfile.open("./scripts/mymodel/model.tar.gz", "w:gz") as tar: tar.add("./scripts/mymodel/pytorch_model.bin") tar.add("./scripts/mymodel/config.json") Upload the zipped file to S3: sagemaker.Session().upload_data(bucket=sagemaker_session_bucket, path='./scripts/mymodel/model.tar.gz', key_prefix='model') Instantiate the saved model: bertmodel = PyTorchModel(entry_point='inference.py', source_dir='scripts', model_data='s3://'+sagemaker_session_bucket+'/model/model.tar.gz', role=sagemaker.get_execution_role(), framework_version='1.5', py_version='py3') The below does not work: nlp = pipeline('question-answering', model=bertmodel, tokenizer=tokenizer) nlp({ 'question': 'What organization is the IPCC a part of?', 'context': context }) Error received - AttributeError: 'PyTorchModel' object has no attribute 'config' I am able to deploy the predictor: predictor = bertmodel.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
@gopalkr272 I am afraid I don't have a complete answer, but review my support ticket here: github.com/huggingface/transformers, "Can't load model estimater after training" (opened Apr 3, 2021 by gwc4github): "I was trying to follow the Sagemaker instructions here to load the model I just trained and test an estimation. I..." I hope this will help you find an answer.
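As a hedged alternative (not from the thread): the SageMaker Hugging Face containers can serve a Hub model directly without a hand-written inference.py; the version numbers below are assumptions and need to match an available DLC:

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

huggingface_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "distilbert-base-uncased-distilled-squad",  # pull straight from the Hub
        "HF_TASK": "question-answering",
    },
    role=sagemaker.get_execution_role(),
    transformers_version="4.6",   # assumed versions; check the supported DLC combinations
    pytorch_version="1.7",
    py_version="py36",
)

predictor = huggingface_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({
    "inputs": {"question": "What organization is the IPCC a part of?", "context": "..."}
}))
```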
0
huggingface
🤗Transformers
T5 Fine Tuning - Text to Text Generation
https://discuss.huggingface.co/t/t5-fine-tuning-text-to-text-generation/5340
I was working on an interesting problem of generating inferences from the excel data. I wrote a python program to generate rules from the data in the form of RDF Triple and now training using T5-Base model. with some 10k training data of rdf rules and inferences I was able to get some 80% to 85% test accuracy. I’m using ADAMW optimizer with lr of 1e-5. One issue I have seen is the model is not able to generalize well on new numbers. eg. if I pass a rule of “Critical” | priority_ticketshare | “23.09%” to the model, it returns inference as Critical priority tickets accounted for 22.09% of total tickets. While the statement is correct, the number it has taken is wrong. Any idea how to solve this
You are expecting a model to behave like a database :-). It will never do that, buddy. Not even GPT-3. Use t5-small for a smaller memory footprint. I guarantee you, there won't be much drop in accuracy, and you get better inference time.
0
huggingface
🤗Transformers
Transformers Huge Community feedback: 40k
https://discuss.huggingface.co/t/transformers-huge-community-feedback-40k/5313
Back in February we shared the second feedback requests on Transformers. We’ve got an amazing 500+ responses with a lot of constructive and freeform comments to analyze. This is the second edition of the survey, you can find the first analysis here 10. It is thanks to all of your answers that we’re able to steer the library - and the whole Hugging Face ecosystem - in a direction that fits you all; so we wanted to thank you for sharing your thoughts and a little of your time to let us know what you think, what you like and what you dislike. As a community-focused endeavour it is amazing to see that you all care about the direction in which we’re heading, to see the outpour of positive and encouraging comments, and the very constructive feedback you all are ready to give. From all of us at Hugging Face: Thank you! Let’s try to summarize and share some takeaways from all of your responses. Who are you? We’re still looking at the same three big user communities of roughly equal sizes (in the respondents): image1635×863 62.9 KB Researchers make up the biggest part of respondents, at more than 1/3rd of users (Blue) Data scientists are a close second (Red) Machine Learning Engineers (Green) Alongside this, we’ve asked whether you were more of a beginner in NLP, or more of an experienced user: image1653×925 30.6 KB For how long? image1256×713 36.7 KB The repartition is very similar across different specialties - it is interesting to see that while nearly half of the respondents have been using Transformers for more than a year (Purple + Orange), there remains a significant influx of new users, as ~16% of you have adopted Transformers in the last three months. Work or ‍:art: fun We were interested in understanding how our users use transformers - for work or for fun: image1944×783 67.6 KB We’re happy to observe that for all categories, a huge majority of the user-base uses Transformers for work - More than 85% of respondents in all categories - but that more than the majority uses it for fun as well. Recommending the library image1422×455 20.3 KB We’re glad to see that you appreciate the direction in which we’re going - and that most of you would recommend the library to your peers. We wanted to gather more precise feedback, relative to the two main frameworks supported by the library - PyTorch and TensorFlow. PyTorch side of the library image1427×453 24.8 KB Here too, we’re satisfied to see that most of you (>88%) would give an 8-10 score to the PyTorch side of the library. Some interesting takeaways we learned from your answers: The implementation is solid, yet some parts are hard to dive into and modify Needs some more detailed documentation for advanced parts of the library There remains some ** backward-incompatible changes** across versions TensorFlow side of the library image1391×438 23.9 KB While the general sentiment of the TensorFlow side of the library is high, we understand that it is not on par with the rest of the library; we greatly appreciate your feedback, with some examples visible below: Keras examples are lacking PyTorch has better support across the board Few examples detailing how to use transformers and put it in production Documentation image1430×460 25.4 KB The general sentiment regarding the documentation is good - but some of you still have some very interesting feedback which we’re taking into account. 
Some examples below: The code and documentation is clear but is sometimes lacking examples of how to put them into practice Occasionally lacking Few errors and bugs across the documentation Likes image1024×768 368 KB What you like the most is: Ease of use of the library Amount of models available API and most notably: the community itself! A lot of you answered that one aspect that you enjoy the most with HuggingFace is the community built around it. As a community-centered library, we couldn’t be happier than to foster this and continue to build amazing stuff with all of you. Dislikes image1024×768 414 KB What you dislike the most is: Oversimplification of examples Lack of backward compatibility Duplicate code Thank you all for your feedback. We’ve read each and every one of your comments, and we aim to address the pain points you’ve shared with us. Open Feedback And finally, since we enjoy HuggingFace-shaped word clouds, here’s a final one containing the freeform comments you have shared with us: image1024×768 414 KB We’re happy to see the most noticeable one is still: Thanks <=
I wanted to share that I was so happy to be a part of Wav2Vec2 fine-tuning week!
0
huggingface
🤗Transformers
Questions about pseudolabels
https://discuss.huggingface.co/t/questions-about-pseudolabels/1415
github.com huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md 19 ### Precomputed pseudolabels + decompress with tar -xzvf. The produced directory name may differ from the filename. | Dataset | Model | Rouge Scores | Notes | Link | |---------|-----------------------------|--------------------|-------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------| | XSUM | facebook/bart-large-xsum | 49.8/28.0/42.5 | | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/xsum/bart_xsum_pl.tgz) | | XSUM | google/pegasus-xsum | 53.3/32.7/46.5 | | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/xsum/pegasus_xsum.tgz) | | XSUM | facebook/bart-large-xsum | ? | Bart pseudolabels filtered to those with Rouge2 > 10.0 w GT | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/xsum/xsum_pl2_bart.tgz) | | | | | | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/xsum/pegasus_xsum_on_cnn.tgz) | | CNN/DM | sshleifer/pegasus-cnn-ft-v2 | 47.316/26.65/44.56 | do not worry about the fact that train.source is one line shorter. | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/cnn_dm/pegasus_cnn_cnn_pls.tgz) | | CNN/DM | facebook/bart-large-cnn | | 5K (2%) are missing, there should be 282173 | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/cnn_dm/cnn_bart_pl.tgz) | | CNN/DM | google/pegasus-xsum | 21.5/6.76/25 | extra labels for xsum distillation Used max_source_length=512, (and all other pegasus-xsum configuration). | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/cnn_dm/pegasus_xsum_on_cnn.tgz) | | EN-RO | Helsinki-NLP/opus-mt-en-ro | | | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/wmt_en_ro/opus_mt_en_ro.tgz) | | EN-RO | facebook/mbart-large-en-ro | | | [download](https://s3.amazonaws.com/datasets.huggingface.co/pseudo/wmt_en_ro/mbart_large_en_ro.tgz) | ### Generating Pseudolabels + These command takes a while to run. For example, pegasus_cnn_cnn_pls.tgz took 8 hours on 8 GPUs. + Pegasus does not work in fp16 :(, Bart, mBART and Marian do. ``` This file has been truncated. show original Feel free to contribute your own pseudolabels via google drive link!
Thanks for all the updates @sshleifer. Can I ask which dataset google/pegasus-large is trained on?
0
huggingface
🤗Transformers
XLNetForSequenceClassification warnings
https://discuss.huggingface.co/t/xlnetforsqeuenceclassification-warnings/1096
Hi, In Google Colab notebook, I install (!pip transformers) and import XLNetForSequenceClassification model. When I instantiate the model the firs time (before training), I get the below: Some weights of the model checkpoint at xlnet-base-cased were not used when initializing XLNetForSequenceClassification: [‘lm_loss.weight’, ‘lm_loss.bias’] This IS expected if you are initializing XLNetForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). This IS NOT expected if you are initializing XLNetForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of XLNetForSequenceClassification were not initialized from the model checkpoint at xlnet-base-cased and are newly initialized: [‘sequence_summary.summary.weight’, ‘sequence_summary.summary.bias’, ‘logits_proj.weight’, ‘logits_proj.bias’] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. After training, I save the model’s state_dict using torch.save(). When I load this model for inference (using torch.load), I get the same messages as above. Why would I get the messages post training?
I got the exact same problem for XLNet's QuestionAnsweringModel when I use the pretrained xlnet-base-cased model. It was working well probably 2 months ago and I didn't change anything. When I ignore this warning, the training loss is huge in every epoch. Is there a known solution or reason for this now?
0
huggingface
🤗Transformers
Distilbart paper
https://discuss.huggingface.co/t/distilbart-paper/428
Good evening, Is there a paper about distilbart? I need documentation for my master thesis and I couldn’t find any. Thanks for your help!
Hi @Hildweig, there is no paper for distilbart; the idea of distilbart came from @sshleifer's great mind. You can find the details of the distillation process here. For the CNN models, the distilled model is created by copying the alternating layers from bart-large-cnn. This is no-teacher distillation, i.e. you just copy layers from the teacher model and then fine-tune the student model in the standard way. For XSUM it uses a combination of DistilBERT's ce_loss and the hidden-states MSE loss used in the TinyBERT paper. DistilBERT paper, TinyBERT paper.
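A rough sketch of that copy-alternating-layers idea (the layer indices and the use of bart-large-cnn are illustrative, not the exact distillation script):

```python
import torch.nn as nn
from transformers import BartForConditionalGeneration

teacher = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
student = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

# keep every other decoder layer of the 12-layer teacher to get a 6-layer student
layers_to_keep = [0, 2, 4, 6, 8, 10]
student.model.decoder.layers = nn.ModuleList(
    teacher.model.decoder.layers[i] for i in layers_to_keep
)
student.config.decoder_layers = len(layers_to_keep)

# the student is then fine-tuned on the summarization data in the standard way
student.save_pretrained("distilbart-cnn-sketch")
```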
0
huggingface
🤗Transformers
How much fire power are we expected to have in order to fine tune the W2V2 XLSR model?
https://discuss.huggingface.co/t/how-much-fire-power-are-we-expected-to-have-in-order-to-fine-tune-the-w2v2-xlsr-model/4376
Just tried running the finetuning code plus some minor modifications on an EC2 instance with a V100 and it just wasn’t enough, even when reducing the batch size. What are yall’s experiences when using the Wav2Vec2 big models? Especially the XLSR multilingual model?
I am also trying to train the XLSR multilingual model. Can you please share how big your data is and your training details (elapsed time, epochs, etc.)? I guess your problem is the computational cost, since you say the V100 wasn't enough.
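Not an answer from the thread, but the usual levers for fitting the large XLSR checkpoint onto a single 16 GB card are tiny per-device batches with gradient accumulation, mixed precision, gradient checkpointing, and freezing the CNN feature encoder; a sketch:

```python
from transformers import Wav2Vec2ForCTC, TrainingArguments

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model.freeze_feature_extractor()       # the convolutional feature encoder stays frozen
model.gradient_checkpointing_enable()  # trade extra compute for a much smaller memory peak

args = TrainingArguments(
    output_dir="xlsr-finetune",        # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,    # effective batch size of 16
    fp16=True,
)
```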
0
huggingface
🤗Transformers
Sizes of Query, key and value vector in Bert Model
https://discuss.huggingface.co/t/sizes-of-query-key-and-value-vector-in-bert-model/1102
I have a question about the sizes of query, key and value vectors. As mentioned in this paper 6 and also demonstrated in this medium 14, we should be expecting the sizes of query, key and value vectors as [seq_length x seq_length]. But when I print the sizes of the parameter like below, I see the sizes of those vectors as [768 x 768]. for name, param in model.named_parameters(): print(name, param.size()) >>> bert.bert.encoder.layer.0.attention.self.query.weight torch.Size([768, 768]) bert.bert.encoder.layer.0.attention.self.key.weight torch.Size([768, 768]) bert.bert.encoder.layer.0.attention.self.value.weight torch.Size([768, 768]) I am really confused. I feel like I am missing something, could someone please help me figure it out?
You are looking at the weight matrices of the query/key/value projections, not at the query/key/value vectors themselves; a [seq_length x seq_length] matrix only appears once attention scores are computed for a concrete input.
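A small plain-tensor illustration of that distinction (ignoring the split into attention heads):

```python
import torch

seq_len, hidden = 10, 768
x = torch.randn(1, seq_len, hidden)      # token representations for one input

W_q = torch.randn(hidden, hidden)        # this is what named_parameters() shows: [768, 768]
W_k = torch.randn(hidden, hidden)

Q = x @ W_q                              # (1, seq_len, 768)
K = x @ W_k                              # (1, seq_len, 768)
scores = Q @ K.transpose(-2, -1) / hidden ** 0.5   # (1, seq_len, seq_len) attention scores
```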
0
huggingface
🤗Transformers
Speech language detection using Wave2vec 2.0
https://discuss.huggingface.co/t/speech-language-detection-using-wave2vec-2-0/5031
I wanted to know whether we can train wav2vec 2.0 to detect languages from audio. Speech-to-text only works if we know the original language of the audio. Can we detect the language from audio files? Any input will be helpful @valhalla
Do you mean classifying the language of a given audio clip? How big is your data? It is an easy task using a few convolution layers when there are only a few languages.
0
huggingface
🤗Transformers
Same PAD Position but Different PAD Embedding
https://discuss.huggingface.co/t/same-pad-position-but-different-pad-embedding/4581
Hi everyone, I am asking my question here since I couldn’t find an answer to it anywhere. I am a junior NLP engineer and I am experimenting some trouble with Bert Model, and more especially with its returned embeddings. I am concerned about the fact that PAD embeddings are not the same. I have seen some forums where it is explained that this is due to the fact that their embedding directly depend on the positional encoding; which I agree with. Nevertheless, for two simple sentences like "hello, I am a boy’ and “hello, I am a girl” in a batch of other longer sentences, these sentences would be padded and their pads would have the exact same positional encoding; yet, the pad embedding still differ in the end, even with the model in .eval() mode. It can’t be due to attention layers because of attention masks, it can’t be due to randomness of dropout since I turned it down, and it can’t be due to positional encoding because of the fact that they have exact same position for two different sentences. Would anybody have an answer to my concerns? I understand that I can just “ignore” the pad embeddings if I just want information about the word embeddings, but I still would like to understand. Have a nice day, Thank you! Coco
Bumping this, please! Does anybody have an idea? This must bother more than one person here.
0
huggingface
🤗Transformers
How to use XLNet in Hugging Face?
https://discuss.huggingface.co/t/how-to-use-xlnet-in-hugging-face/4954
from simpletransformers.classification import ClassificationModel, ClassificationArgs model = ClassificationModel('xlm', 'seyonec/PubChem10M_SMILES_BPE_396_250', args={'evaluate_each_epoch': True, 'evaluate_during_training_verbose': True, 'no_save': True, 'num_train_epochs': 10, 'auto_weights': True}) # You can set class weights by using the optional weight argument
Your question is about simpletransformers, which is a separate library (not part of 🤗 Transformers). You should ask it on their GitHub.
0
huggingface
🤗Transformers
Use tf.data.Data with HuggingFace datasets
https://discuss.huggingface.co/t/use-tf-data-data-with-huggingface-datasets/4885
I was going through this tutorial: Using a Dataset with PyTorch/Tensorflow — datasets 1.5.0 documentation. The example is for PyTorch. Do we have the same for TensorFlow?
Well, there's a section for TensorFlow: in the top right corner there is a toggle between TensorFlow and PyTorch (the default is PyTorch). This was taken from the official documentation; it is for TensorFlow, by the way:
>>> import tensorflow as tf
>>> from datasets import load_dataset
>>> from transformers import AutoTokenizer
>>> dataset = load_dataset('glue', 'mrpc', split='train')
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True)
>>> dataset.set_format(type='tensorflow', columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
>>> features = {x: dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.model_max_length]) for x in ['input_ids', 'token_type_ids', 'attention_mask']}
>>> tfdataset = tf.data.Dataset.from_tensor_slices((features, dataset["label"])).batch(32)
>>> next(iter(tfdataset))
({'input_ids': <tf.Tensor: shape=(32, 512), dtype=int32, numpy= array([[ 101, 7277, 2180, ...,
0
huggingface
🤗Transformers
Ensure the sentence is complete during generation
https://discuss.huggingface.co/t/ensure-the-sentence-is-complete-during-generation/4489
During generation, I’m using the constraint of max_length to stop if longer sequences are not required. However, I do not want the generation to stop if the sentence is not complete. Is there any reliable way to stop after one sentence has been generated ?
AFAIK, the generation should stop once it generates an end-of-sentence token, if you don’t specify max_length. You can use StoppingCriteria 20 (which you implicitly do by setting max_length) to construct arbitrary constraints on when to stop your generation.
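To make that concrete, here is a hedged sketch of a custom stopping rule that halts once a sentence-ending token is produced (the period-only rule and the GPT-2 checkpoint are assumptions for illustration):

```python
from transformers import (GPT2LMHeadModel, GPT2Tokenizer,
                          StoppingCriteria, StoppingCriteriaList)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

class StopOnPeriod(StoppingCriteria):
    """Stop generating as soon as the last generated token is '.'."""
    def __init__(self, period_id: int):
        self.period_id = period_id

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        return input_ids[0, -1].item() == self.period_id

period_id = tokenizer.encode(".")[0]
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=100,  # still a hard upper bound
    stopping_criteria=StoppingCriteriaList([StopOnPeriod(period_id)]),
)
print(tokenizer.decode(outputs[0]))
```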
0
huggingface
🤗Transformers
Checkpoint breaks with deepspeed
https://discuss.huggingface.co/t/checkpoint-breaks-with-deepspeed/4442
Hi, I am trying to continue training from a saved checkpoint when using deepspeed. I am using transformers 4.3.3 Here is how I run the codes. Since T5 pretraining is not added yet to HF repo, I wrote it up myself, and I also modified T5 model only itself by incorporating some adapter layers within the model layers: USE_TF=0 deepspeed run_mlm.py --model_name_or_path google/mt5-base --dataset_name opus100 --dataset_config_name de-en --do_train --do_eval --output_dir /user/dara/test --max_seq_length 128 --deepspeed ds_config.json --save_steps 10 --fp16 Here is the error I got once trying to continue training from checkpoints. I greatly appreciate your input on this, the key this happens for is ‘exp_avg’. I also add that without deepspeed I do not get this error. Thank you so much. I am really puzzled by this and you are my only hope @stas [2021-03-16 13:01:52,899] [INFO] [engine.py:1284:_load_checkpoint] rank: 0 loading checkpoint: /users/dara/test/checkpoint-20/global_step20/mp_rank_00_model_states.pt successfully loaded 1 ZeRO state_dicts for rank 0 p tensor([ 1.7500, -1.6719, 2.4062, ..., -0.1953, 0.2002, -0.6484], requires_grad=True) key exp_avg saved torch.Size([15013760]) parameter shape torch.Size([597396608]) Traceback (most recent call last): File "run_mlm.py", line 592, in <module> main() File "run_mlm.py", line 558, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/user/dara/dev/codes/seq2seq/third_party/trainers/trainer.py", line 780, in train self._load_optimizer_and_scheduler(resume_from_checkpoint) File "/user/dara/dev/codes/seq2seq/third_party/trainers/trainer.py", line 1169, in _load_optimizer_and_scheduler self.deepspeed.load_checkpoint(checkpoint, load_optimizer_states=True, load_lr_scheduler_states=True) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1265, in load_checkpoint load_optimizer_states=load_optimizer_states) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1337, in _load_zero_checkpoint load_from_fp32_weights=self.zero_load_from_fp32_weights()) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/deepspeed/runtime/zero/stage2.py", line 1822, in load_state_dict self._restore_base_optimizer_state(state_dict_list) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/deepspeed/runtime/zero/stage2.py", line 1783, in _restore_base_optimizer_state self.optimizer.state[p][key].data.copy_(saved.data) RuntimeError: The size of tensor a (597396608) must match the size of tensor b (15013760) at non-singleton dimension 0
I haven’t seen this type of error yet, but we will sort it out. Could you please post it as an Issue 3 and meanwhile I will try to reproduce this. Please tag @stas00 in the issue. Also please make sure you use the deepspeed master as there are a lot of fixes in there. When posting tracebacks please always use multiline code formatting which preserves new lines so that it’d be possible to decipher it. Your above formatting is very difficult to understand. I edited your post to fix the formatting.
0
huggingface
🤗Transformers
Exclude words from GPT-2 generate( )
https://discuss.huggingface.co/t/exclude-words-from-gpt-2-generate/4741
Hello, I want to exclude some ids of the GPT-2 vocabulary from the generate() function, i.e. when the model generates the next word, I don't want it to be able to use any word from a given list of words. How can I achieve that? Thank you in advance.
Hello! It seems that this functionality is supported via the bad_words_ids input to the generate API. The docs briefly describe that you need to find the list of token ids for the words you care about using the tokenizer and then simply pass those to generate: **bad_words_ids** ( `List[List[int]]` , optional) – List of token ids that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, use `tokenizer(bad_word, add_prefix_space=True).input_ids` . I hope this helps!
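A minimal end-to-end sketch (the model and the banned words are placeholders):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# tokenize each banned word with a leading space so it matches how it appears mid-sentence
bad_words = ["terrible", "awful"]
bad_words_ids = [tokenizer(word, add_prefix_space=True).input_ids for word in bad_words]

inputs = tokenizer("The movie was", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, bad_words_ids=bad_words_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```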
0
huggingface
🤗Transformers
Choosing correct seq2seq model
https://discuss.huggingface.co/t/choosing-correct-seq2seq-model/4666
Hi all, I have a QA dataset for my company with both the questions and answers being similar enough for me to think that I could use a seq2seq model to suggest answers. A QA pair could be something like, “Hey, I forgot to cancel service, please refund me. Thanks, Sachin”, “Hi Sachin, No worries we will refund you”. Note there is no context paragraph. I am following this blog post for EncodeDecoderModels. I am able to follow most of it and implemented it, but the results for generation are looking pretty gibberish. And after 6 hours of training on a GPU the loss has only gone from 7 → 6. I’m wondering if I’m doing something wrong. What I was thinking instead was to concatenate the QA pair with a “” string in the middle and use a BART/ GPT-2 model instead. Are there any examples of this. I know this github issue asked this a while back, but has there been any blogs/ tutorials on the subject? This is the current model that I am using at the moment, which is a bert2bert model incase you see anything obviously wrong here. But otherwise I’m leaning more towards the option outlined in the paragraph above: class Model(pl.LightningModule): def __init__(self, lr: float) -> None: super().__init__() self.lr = lr self.tokenizer = Tokenizer() self.model = EncoderDecoderModel.from_encoder_decoder_pretrained(BASE_MODEL, BASE_MODEL) self.initialize_hyper_parameters() for name, param in self.model.named_parameters(): if "crossattention" not in name: param.requires_grad = False def initialize_hyper_parameters(self): self.model.config.decoder_start_token_id = self.tokenizer.tokenizer.cls_token_id self.model.config.eos_token_id = self.tokenizer.tokenizer.sep_token_id self.model.config.pad_token_id = self.tokenizer.tokenizer.pad_token_id self.model.config.vocab_size = self.model.config.encoder.vocab_size self.model.config.max_length = 256 self.model.config.no_repeat_ngram_size = 3 self.model.config.early_stopping = True self.model.config.length_penalty = 2.0 self.model.config.num_beams = 4 self.val_batch_count = 0 def common_step(self, batch: Tuple[List[str], List[str]]) -> torch.FloatTensor: questions, answers = batch question_tokens = {k: v.to(self.device) for k, v in self.tokenizer(questions).items()} answer_tokens = {k: v.to(self.device) for k, v in self.tokenizer(answers).items()} labels = answer_tokens["input_ids"].clone() labels[answer_tokens["attention_mask"]==0] = -100 outputs = self.model( input_ids=question_tokens["input_ids"], attention_mask=question_tokens["attention_mask"], decoder_input_ids=answer_tokens["input_ids"], decoder_attention_mask=answer_tokens["attention_mask"], labels=labels, return_dict=True ) return outputs["loss"] def training_step(self, batch: Tuple[List[str], List[str]], *args) -> torch.FloatTensor: loss = self.common_step(batch) self.log(TRAIN_LOSS, loss, on_step=True, on_epoch=True) return loss def validation_step(self, batch: Tuple[List[str], List[str]], *args) -> None: loss = self.common_step(batch) if self.val_batch_count == 0: self.generate_examples(batch) self.log(VALID_LOSS, loss, on_step=True, on_epoch=True) self.val_batch_count += 1 def generate_examples(self, batch): questions, answers = batch question_tokens = {k: v.to(self.device) for k, v in self.tokenizer(questions).items()} generated = self.model.generate( input_ids=question_tokens["input_ids"], attention_mask=question_tokens["attention_mask"], # decoder_start_token_id=self.model.config.decoder.pad_token_id ) self.tokenizer.decode(question_tokens["input_ids"][0]) print(self.tokenizer.decode(generated[0])) def 
validation_step_end(self, *args): self.val_batch_count = 0 # reset def training_epoch_end(self, *args) -> None: print("Unfreezing") if self.current_epoch == FREEZE: for name, param in self.model.named_parameters(): if "crossattention" not in name: param.requires_grad = True def configure_optimizers(self) -> torch.optim.Adam: cross_attention_params = [] embedding_params = [] other_params = [] for name, param in self.model.named_parameters(): if "crossattention" in name: cross_attention_params.append(param) elif "embedding" in name: embedding_params.append(param) else: other_params.append(param) return torch.optim.Adam( [ {"params": cross_attention_params, "lr": self.lr}, {"params": other_params, "lr": self.lr / 20}, {"params": embedding_params, "lr": self.lr / 100}, ] )
sachin: both the questions and answers being similar enough for me to think that I could use a seq2seq model to suggest answers. Dear Sachin, The models that we train are only as good as the data that we train them on. So start by thinking about your data. The assumption of your dataset is that a sequence-to-sequence model can predict the answer to a question. In practice, sequence-to-sequence models were originally designed for translation. They were designed to predict an equivalent sequence. So I would expect that a sequence-to-sequence model would only predict the correct answer if the question contains the information necessary to predict that correct answer. Perhaps instead you could classify the answers? If so, you could train a model to predict the correct classification of the question. Or you might look at Facebook’s BART model 4. It’s a sequence-to-sequence model, but at inference time, they use it to predict a classification. And what’s really cool is that it’s a zero-shot classification. The model did not see the classifications during training. So in practice you might find many uses for the trained model. Best wishes, - Eryk
0
huggingface
🤗Transformers
Difference between setting label index to -100 & setting attention mask to 0
https://discuss.huggingface.co/t/difference-between-setting-label-index-to-100-setting-attention-mask-to-0/4503
According to the docs, setting a token's label index to -100 makes the model not compute loss on such tokens, and the attention mask seems to do the same thing. Is the functionality of both the same, or does it differ where one or the other is used? Thanks
No the attention mask is not used in the loss computation. It’s just there to make sure your model is not paying attention to the masked tokens. The two things should be used together.
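To illustrate how the two work together (checkpoint and sentences are placeholders):

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tokenizer(
    ["a short sentence", "a noticeably longer example sentence"],
    padding=True, return_tensors="pt",
)

labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # padded positions are ignored by the loss

# batch["attention_mask"] is still passed to the model so it does not attend to padding,
# while the -100 labels ensure those same positions contribute nothing to the loss.
```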
0
huggingface
🤗Transformers
Model training in Multi GPU
https://discuss.huggingface.co/t/model-training-in-multi-gpu/4458
Hi, I am trying to train Xl-NET and it requires around 14 GB. I have access to 12 GB GPU nodes. However, when I try to train the model using two nodes ( that is 24 GB) the trainer returns Not enough memory error from CUDA. Can you help me to overcome this error?
Usually model training on two GPUs is there to help you get a bigger batch size: what the Trainer and the example scripts do automatically is that each GPU processes a batch of the given --per_device_train_batch_size, which results in training with an effective batch size of 2 * per_device_train_batch_size. This still requires the model to fit on each GPU. What you want is model parallelism, but this is still very experimental. You can check the DeepSpeed or FairScale integrations to use ZeRO-DP3 and split your model across your two GPUs.
0
huggingface
🤗Transformers
Wav2Vec2 For Swedish
https://discuss.huggingface.co/t/wav2vec2-for-swedish/4232
The KTH Division of Speech, Music and Hearing is working on making a Swedish Wav2Vec2 model (huggingface.co/KTH). Research at the Division of Speech, Music and Hearing (TMH) is truly multi-disciplinary, including linguistics, phonetics, auditory perception, vision and experimental psychology. Rooted in an engineering modelling approach, our research forms a so... Our first step will be to use the multi-language model to fine-tune on Swedish voices / transcriptions to make a Swedish speech-to-text model. You are welcome to help out; this thread might also be useful for people working on similar tasks in another language.
Hey, I’ve added most of 's Wav2Vec2 code and I’m more than happy to help at fine-tuning the multi-language checkpoint for Swedish. So feel free to tag me in this thread for any questions you might have. Also, I’m planning on releasing an in-detail notebook about fine-tuning Wav2Vec2 in a couple of days, which I’ll link here.
0
huggingface
🤗Transformers
Missing `model_type` key in config.json of TinyBERT
https://discuss.huggingface.co/t/missing-model-type-key-in-config-json-of-tinybert/2855
Hello, I’m trying to use one of the TinyBERT models produced by HUAWEI (link 4) and it seems there is a field missing in the config.json file: >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/lewtun/git/transformers/src/transformers/models/auto/tokenization_auto.py", line 345, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/Users/lewtun/git/transformers/src/transformers/models/auto/configuration_auto.py", line 360, in from_pretrained raise ValueError( ValueError: Unrecognized model in huawei-noah/TinyBERT_General_4L_312D. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: retribert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas Looking at the config.json file (link 10) it seems like it should be an easy enough fix to add something like "model_type": "tinybert" so my question is how does one go about patching a fix in a community provided model? Do I raise an issue on the Transformers repo or somewhere else?
Oh that field is missing indeed. From what I know though, it can’t be filled with “tinybert” as it needs to be a model type implemented in the library, so I think it should be “bert”. As for adding it, I think it needs to be done on our side (in the future we’ll have the ability to open PRs on model repos, which would help in that case!) Looking at that config, there are a few things that are a bit weird, so it may be that they have an internal class for those models not compatible with Transformers.
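Until the config is patched, one hedged workaround is to bypass the Auto* classes and load the checkpoint with the BERT classes directly, since architecturally it is a BERT model:

```python
from transformers import BertModel, BertTokenizer

# BertConfig does not need the model_type key, so this sidesteps the AutoConfig error
tokenizer = BertTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
model = BertModel.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")

print(model.config.num_hidden_layers, model.config.hidden_size)  # expected: 4, 312
```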
0
huggingface
🤗Transformers
Problem with torch.multiprocessing and Roberta
https://discuss.huggingface.co/t/problem-with-torch-multiprocessing-and-roberta/4373
I have a project in which I extract entities from multiple files, line by line. So I wrote a function that receives a file and both Roberta and it’s Tokenizer. The idea is to spawn multiple processes and run this function asynchronously for each file (actually at this point the files are already loaded on memory). I have 16GB of ram on my machine and I thought this would be sufficient to at least run 2 or 3 robertas in parallel, but the following codes hangs and fills 100% of my ram and does nothing. Does someone knows what am I doing wrong with the multiprocessed code? I’ve simplified my problem to this few lines of code that have the same problem. import torch from transformers import RobertaTokenizer, RobertaModel,RobertaForTokenClassification from tqdm import tqdm from torch.multiprocessing import Pool import torch.multiprocessing as mp model = RobertaForTokenClassification.from_pretrained('distilroberta-base') tokenizer = RobertaTokenizer.from_pretrained('distilroberta-base') model.share_memory() # is this necessary? model.eval() ctx = mp.get_context('spawn') p = ctx.Pool(2) def f(model,tokenizer,sentence): inputs = tokenizer(sentence, return_tensors="pt") logits = model(**inputs) return 0 sentences = [ 'yo this is a test', 'yo this is not a test', 'yo yo yo' ] jobs = [] with torch.no_grad(): for i in range(len(sentences)): job = p.apply_async(f, [model,tokenizer,sentences[i]]) jobs.append(job) results=[] for job in tqdm(jobs): pass results.append(job.get())
This actually doesn't work even with just one worker (p = ctx.Pool(1)), so I think it is related to the multiprocessing code.
0
huggingface
🤗Transformers
Finetuning DPR on Custom Dataset
https://discuss.huggingface.co/t/finetuning-dpr-on-custom-dataset/4170
I am using DPR models provided by the library to finetune on a custom dataset passage_tokenizer = DPRContextEncoderTokenizer.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base') passage_model = DPRContextEncoder.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base') query_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base') query_model = DPRQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base') class Huggingface_DPR(nn.Module): def __init__(self, query_model, passage_model, query_tokenizer, passage_tokenizer, passage_dict, questions, dense_size, freeze_params = 0.0, batch_size = 2, sample_size = 4): super(Huggingface_DPR, self).__init__() self.query_model = query_model self.query_tokenizer = query_tokenizer self.passage_model = passage_model self.passage_tokenizer = passage_tokenizer self.freeze_params = freeze_params self.sample_size = sample_size self.batch_size = batch_size self.passage_to_dense = nn.Sequential(nn.Linear(768, dense_size * 2), nn.ReLU(), nn.Linear(dense_size * 2, dense_size), nn.GELU()) self.query_to_dense = nn.Sequential(nn.Linear(768, dense_size * 2), nn.ReLU(), nn.Linear(dense_size * 2, dense_size), nn.GELU()) self.passage_dict = passage_dict self.query_tuple = questions self.log_softmax = nn.LogSoftmax(dim=1) def batch_tokenize(self): rand_idx = np.random.randint(0, len(self.passage_dict), (self.batch_size, self.sample_size)) queries = [] passages = [] true_idx = [] for row_idx,row in enumerate(rand_idx): rand_query_idx = random.randint(0, len(self.query_tuple)) query, passage_id = self.query_tuple[rand_query_idx] queries.append(query) if passage_id not in row: idx = random.randint(0, self.sample_size - 1) rand_idx[row_idx][idx] = passage_id true_idx.append(idx) else: true_idx.append(np.where(rand_idx[row_idx] == passage_id)[0][0]) for col_idx, col in enumerate(row): passages.append(self.passage_dict[col]) passage_tensor = self.passage_tokenizer(passages, padding='longest', return_tensors="pt") query_tensor = self.query_tokenizer(queries, padding='longest', return_tensors="pt") return passage_tensor, query_tensor, true_idx def dot_product(self, q_vector, p_vector): q_vector = q_vector.unsqueeze(1) sim = torch.matmul(q_vector, torch.transpose(p_vector, -2, -1)) return sim def forward(self): passage_tensor, query_tensor, true_idx = self.batch_tokenize() passage_input_ids = passage_tensor.input_ids.reshape(self.batch_size, self.sample_size, -1) passage_attention_mask = passage_tensor.attention_mask.reshape(self.batch_size, self.sample_size, -1) dense_passage = self.passage_model(input_ids = passage_tensor.input_ids, attention_mask = passage_tensor.attention_mask) dense_query = self.query_model(input_ids = query_tensor['input_ids'], attention_mask = query_tensor['attention_mask']) dense_passage = dense_passage['pooler_output'] dense_passage = dense_passage.reshape(self.batch_size, self.sample_size, -1) dense_query = dense_query['pooler_output'] dense_passage = self.passage_to_dense(dense_passage) dense_query = self.query_to_dense(dense_query) similarity_score = self.dot_product(dense_query, dense_passage) similarity_score = similarity_score.squeeze(1) log_scores = self.log_softmax(similarity_score) return log_scores, torch.tensor(true_idx) I am using the dot product as the similarity metric. Negative log-likelihood as the loss function. In the batch_tokenize method, I am implementing negative sampling of the given sample. size. 
The output of the batch_tokenize method would be for passage_tensor of size (batch_size, sample_size, padded_length) and size of query_tensor would be (batch_size, padded_length), the true_idx is a list of length batch_size This is the training loop that I am using. ## With Batch for epo in range(5): epoch_loss = 0 sum_loss = 0 for b in range(1, 100): optimizer.zero_grad() pred, true_idx = dpr_model() loss = criterion(pred, true_idx) epoch_loss += loss.item() sum_loss += loss.item() loss.backward() optimizer.step() if b%2 == 0: print(f"Epoch : {epo + 1} Batch : {int(b)} Loss: {sum_loss/2}") sum_loss = 0 print(f"Epoch {epo + 1} : Loss : {epoch_loss/20}") Loss Epoch : 1 Batch : 2 Loss: 2.083882689476013 Epoch : 1 Batch : 4 Loss: 2.078924059867859 Epoch : 1 Batch : 6 Loss: 2.0736374855041504 Epoch : 1 Batch : 8 Loss: 2.080022931098938 Epoch : 1 Batch : 10 Loss: 2.0795756578445435 Epoch : 1 Batch : 12 Loss: 2.084058165550232 Epoch : 1 Batch : 14 Loss: 2.079327940940857 Epoch : 1 Batch : 16 Loss: 2.0794405937194824 Epoch : 1 Batch : 18 Loss: 2.079430937767029 Epoch : 1 Batch : 20 Loss: 2.0794434547424316 Epoch : 1 Batch : 22 Loss: 2.0794490575790405 Epoch : 1 Batch : 24 Loss: 2.0794308185577393 Epoch : 1 Batch : 26 Loss: 2.0794389247894287 Epoch : 1 Batch : 28 Loss: 2.079446792602539 Epoch : 1 Batch : 30 Loss: 2.0794416666030884 Epoch : 1 Batch : 32 Loss: 2.0794419050216675 Epoch : 1 Batch : 34 Loss: 2.079440116882324 Epoch : 1 Batch : 36 Loss: 2.079442262649536 Epoch : 1 Batch : 38 Loss: 2.079442024230957 Epoch : 1 Batch : 40 Loss: 2.0794413089752197 Epoch : 1 Batch : 42 Loss: 2.079441547393799 Epoch : 1 Batch : 44 Loss: 2.0794419050216675 Epoch : 1 Batch : 46 Loss: 2.0794419050216675 ........................... There is no noticeable decrease in the loss from the beginning itself. Is there a mistake in the manner I am fine-tuning?
Hi! What is your data processing method? Could you share it so I can take a look?
0
huggingface
🤗Transformers
New model output types
https://discuss.huggingface.co/t/new-model-output-types/195
As was requested in #5226 33, model outputs are now more informative than just plain tuples (without breaking changes); PyTorch models now return a subclass of ModelOutput that is appropriate. Here is an example on a base model: from transformers import BertTokenizer, BertForSequenceClassification import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForSequenceClassification.from_pretrained('bert-base-uncased') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") labels = torch.tensor([1]).unsqueeze(0) # Batch size 1 outputs = model(**inputs, labels=labels) Then outputs will be an SequenceClassifierOutput object, which has the returned elements as attributes. The old syntax loss, logits = outputs[:2] will still work, but you can also do loss = outputs.loss logits = outputs.logits or also loss = outputs["loss"] logits = outputs["logits"] Under the hood, outputs is a dataclass with optional fields that may be set to None if they are not returned by the model (like attentions in our example). If you index by integer or by slice, the None fields are skipped (for backward-compatibility). If you try to access an attribute that’s set to None by its key (for instance here outputs["attentions"]), it will return an error. You can convert those outputs to a regular tuple/dict with outputs.to_tuple() or outputs.to_dict(). You can revert to the old behavior of having tuple by setting return_tuple=True in the config you pass to your model, or when you instantiate your model, or when you call your model on some inputs. If you’re using torchscript (and the config you passed to your model has config.torchscript = True) this will automatically be the case (because jit only handles tuples as outputs). Hope you like this new feature!
So many quality of life improvements recently. Thanks for all your work and effort.
0
huggingface
🤗Transformers
Weights of pre-trained BERT model not initialized
https://discuss.huggingface.co/t/weights-of-pre-trained-bert-model-not-initialized/4280
I am using the Language Interpretability Toolkit 1 (LIT) to load and analyze the ‘bert-base-german-cased’ model that I pre-trained on an NER task with HuggingFace. However, when I start the LIT script with the path to my pre-trained model passed to it, it fails to initialize the weights and tells me:
modeling_utils.py:648] loading weights file bert_remote/examples/token-classification/Data/Models/results_21_03_04_cleaned_annotations/04.03._8_16_5e-5_cleaned_annotations/04-03-2021 (15.22.23)/pytorch_model.bin
modeling_utils.py:739] Weights of BertForTokenClassification not initialized from pretrained model: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias']
modeling_utils.py:745] Weights from pretrained model not used in BertForTokenClassification: ['bert.embeddings.position_ids']
It then simply uses the bert-base-german-cased version of BERT, which of course doesn’t have my custom labels and thus fails to predict anything. I think it might have to do with PyTorch or HuggingFace, but I can’t find the error. If relevant, here is how I load my dataset into CoNLL 2003 format (modification of the dataloader scripts):
def __init__(self):
    # Read ConLL Test Files
    self._examples = []
    data_path = "lit_remote/lit_nlp/examples/datasets/NER_Data"
    with open(os.path.join(data_path, "test.txt"), "r", encoding="utf-8") as f:
        lines = f.readlines()
    for line in lines[:2000]:
        if line != "\n":
            token, label = line.split(" ")
            self._examples.append({
                'token': token,
                'label': label,
            })
        else:
            self._examples.append({
                'token': "\n",
                'label': "O"
            })

def spec(self):
    return {
        'token': lit_types.Tokens(),
        'label': lit_types.SequenceTags(align="token"),
    }
And this is how I initialize the model and start the LIT server (modification of the simple_pytorch_demo.py script):
def __init__(self, model_name_or_path):
    self.tokenizer = transformers.AutoTokenizer.from_pretrained(
        model_name_or_path)
    model_config = transformers.AutoConfig.from_pretrained(
        model_name_or_path,
        num_labels=15,  # FIXME CHANGE
        output_hidden_states=True,
        output_attentions=True,
    )
    # This is a just a regular PyTorch model.
    self.model = _from_pretrained(
        transformers.AutoModelForTokenClassification,
        model_name_or_path,
        config=model_config)
    self.model.eval()

## Some omitted snippets here

def input_spec(self) -> lit_types.Spec:
    return {
        "token": lit_types.Tokens(),
        "label": lit_types.SequenceTags(align="token")
    }

def output_spec(self) -> lit_types.Spec:
    return {
        "tokens": lit_types.Tokens(),
        "probas": lit_types.MulticlassPreds(parent="label", vocab=self.LABELS),
        "cls_emb": lit_types.Embeddings()
    }
Does anyone have an idea what the issue could be?
It’s logical to not have bert pooler weights in token classification (we are not using the pooler in this model), the warning suggests you are not using the last version of Transformers but in any case, you can safely ignore it.
0
huggingface
🤗Transformers
Can’t reproduce xlm-roberta-large finetuned result on XNLI
https://discuss.huggingface.co/t/cant-reproduce-xlm-roberta-large-finetuned-result-on-xnli/4269
I’m trying to finetune xlm-roberta-large on MNLI English training data and make zero-shot classification on XNLI dataset. However, I found that xlm-roberta-large is super sensitive to hyper parameters. The reported average accuracy is 80.9, while my model can only achieve 79.74, which is 1% less than the reported accuracy. I used Adam optimizer with 5e-6 learning rate and the batch size is 16. Any one can suggest better hyperparameters to reproduce the XNLI result of xlm-roberta-large ?
What is the “reported accuracy” you’re trying to reproduce? Accuracy on XNLI? On zero-shot classification? What dataset? Are you trying to reproduce joeddav/xlm-roberta-large-xnli 4? If so I’m afraid I don’t have the exact hyperparameters I used, but I’ll also note that I trained that before the XNLI train set was released, so it was actually trained on the concatenation of the XNLI dev & test sets and the MNLI train set.
0
huggingface
🤗Transformers
Parameter groups and GPT2 LayerNorm
https://discuss.huggingface.co/t/parameter-groups-and-gpt2-layernorm/4239
When creating a default optimizer the Trainer class creates two parameter groups based on whether weight decay should be applied or not and it does that based on the parameter name (does it contain “LayerNorm” or “bias” in it or not). Problem is, not all models’ parameters are named the same way; GPT2’s layer normalization layers for example are named ln_ followed by a number or an f, hence weight decay will be applied to the weights of the LayerNorm layers in GPT2. I don’t know if this is an “issue” per se, but it’s definitely something to be cautious about when training/fine-tuning GPT2.
Oh indeed. A check on names only does not sound super smart. Will try to write something that checks the class of the modules instead.
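For illustration, a module-class-based grouping could look roughly like this (a sketch, not the library's actual implementation — function name and decay value are arbitrary):
import torch
from torch import nn

def get_parameter_groups(model, weight_decay=0.01):
    decay, no_decay = [], []
    for module_name, module in model.named_modules():
        # only look at parameters owned directly by this module to avoid duplicates
        for param_name, param in module.named_parameters(recurse=False):
            if isinstance(module, nn.LayerNorm) or param_name.endswith("bias"):
                no_decay.append(param)
            else:
                decay.append(param)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]

# optimizer = torch.optim.AdamW(get_parameter_groups(model), lr=5e-5)
This catches GPT2's ln_1/ln_2/ln_f modules even though their names never contain "LayerNorm".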
0
huggingface
🤗Transformers
How to ensure the dataset is shuffled for each epoch using Trainer and Datasets?
https://discuss.huggingface.co/t/how-to-ensure-the-dataset-is-shuffled-for-each-epoch-using-trainer-and-datasets/4212
I am using the Seq2SeqTrainer and pass an datasets.arrow_dataset.Dataset as train_dataset when initiating the object. Is the dataset by default shuffled per epoch? If not, how to make it shuffled? An example is from the official example: transformers/run_seq2seq.py at master · huggingface/transformers · GitHub 36 Thanks!
Still needs help…
0
huggingface
🤗Transformers
Different doc with BertForPretraining and TFBertForPretraining
https://discuss.huggingface.co/t/different-doc-with-bertforpretraining-and-tfbertforpretraining/4167
there are some different betwween BERT — transformers 4.3.0 documentation and BERT — transformers 4.3.0 documentation 2 There are labels and next_sentence_label in BertForPretraining, but nothing in TFBertForPretraining. Does it means there are some different between BertForPretraining and TFBertForPretaining? Or there is a wrong in TF doc.
There is indeed something wrong in the TF doc: the next_sentence_label is there but not documented. If you want to open a PR to fix this that would be awesome!
0
huggingface
🤗Transformers
Using PyTorch model in TensorFlow
https://discuss.huggingface.co/t/using-pytorch-model-in-tensorflow/4202
Hi, I would like to use a model built using PyTorch (namely this one 1 ) in a Tensorflow environment. More specifically I would like to start by just extract some of the embeddings in the later layers, and then potentially run some fine-tuning. So I have two questions: Is there a way to load and run inference from a PyTorch model in TensorFlow? Is there a way to load and fine-tune a PyTorch model in Tensorflow? Thanks in advance!
Hi @gruffgoran, your use cases sound like a perfect match for the ONNX 1 format Having said that, you might be able to get a quick win by trying something like the following (see docs 1): tf_model = TFBertForSequenceClassification.from_pretrained("KB/bert-base-swedish-cased", from_pt=True) From here you can then run inference / fine-tune etc using TensorFlow. If you want to go the ONNX route, the idea would be to convert PyTorch → ONNX and then load the ONNX model in TensorFlow. Details on doing the conversion can be found here: Exporting transformers models — transformers 4.3.0 documentation 2
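For the first question (extracting embeddings), a rough sketch building on the snippet above — the checkpoint name is the one from this thread, swap in your own:
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
# from_pt=True converts the PyTorch weights on the fly
tf_model = TFAutoModel.from_pretrained("KB/bert-base-swedish-cased", from_pt=True)

inputs = tokenizer("This is a test sentence.", return_tensors="tf")
outputs = tf_model(inputs)
embeddings = outputs.last_hidden_state  # shape (1, seq_len, hidden_size)
From here you can keep training the TF model as usual for the fine-tuning part.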
0
huggingface
🤗Transformers
Can I train pytorch T5 on TPU with variable batch shape?
https://discuss.huggingface.co/t/can-i-train-pytorch-t5-on-tpu-with-variable-batch-shape/4132
My goal is to group sequences with similar length together in a batch and pad them to match the longest one. Will it work on TPU? Since it does not support dynamic shapes as the doc says.
Hi @marton-avrios, I’ve done exactly this for T5, basing it off the following article: https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9q-21bf7129db9e 1
Here’s the code:
from torch.nn.utils.rnn import pad_sequence

def collate_batch(batch):
    pad_token_id = 0
    src_ids = pad_sequence([sample['source_ids'] for sample in batch], batch_first=True, padding_value=pad_token_id)
    src_text = [sample['source_text'] for sample in batch]
    src_mask = pad_sequence([sample['source_mask'] for sample in batch], batch_first=True, padding_value=pad_token_id)
    tgt_ids = pad_sequence([sample['target_ids'] for sample in batch], batch_first=True, padding_value=pad_token_id)
    tgt_ids[tgt_ids[:, :] == 0] = -100
    tgt_mask = pad_sequence([sample['target_mask'] for sample in batch], batch_first=True, padding_value=pad_token_id)
    tgt_text = [sample['target_text'] for sample in batch]
    return {
        'source_ids': src_ids,
        'target_ids': tgt_ids,
        'source_mask': src_mask,
        "target_mask": tgt_mask,
        "source_text": src_text,
        "target_text": tgt_text
    }
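For what it's worth, one way to plug the collate function above into training (assuming your dataset items are dicts of tensors with the keys it expects); it can also be passed to the Trainer via data_collator=collate_batch:
from torch.utils.data import DataLoader

train_loader = DataLoader(
    train_dataset,            # your torch Dataset returning the dicts used above
    batch_size=8,
    shuffle=True,
    collate_fn=collate_batch, # pads each batch only to its own longest sequence
)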
0
huggingface
🤗Transformers
Deploying inference model size and performance
https://discuss.huggingface.co/t/deploying-inference-model-size-and-performance/3575
I’ve got a trained/tuned model based on Michau/t5-base-en-generate-headline. I’m looking into options for deploying this model around a simple inference API (Python/Flask). I’m very new to developing and deploying ML models etc. so bear with me! Though I have it working, the performance it less than optimal. In a local development environment (VM + Docker) each request takes ~30" (compared to ~10" in Collab - no GPU). In a production environment, this is going to be run many 1,000’s of times, daily…ideally. So far my trained model (pytorch_model.bin) is ~900 MB. I do inference pretty simply: MODEL_PATH = "/src/model_files" def infer(title: str) -> str: model_path = pathlib.Path(MODEL_PATH).absolute() title_model_tokenizer = AutoTokenizer.from_pretrained(model_path) title_model = AutoModelWithLMHead.from_pretrained(model_path) tokenized_text = title_model_tokenizer.encode(title, return_tensors="pt") title_ids = title_model.generate( tokenized_text, num_beams=1, repetition_penalty=1.0, length_penalty=1.0, early_stopping=False, no_repeat_ngram_size=1 ) return title_model_tokenizer.decode(title_ids[0], skip_special_tokens=True) I can see from some profiling that the majority of the time is spent on: title_model = AutoModelWithLMHead.from_pretrained(model_path) So change #1 is to send in batches rather than one at a time, where possible. So as to not have to reload the model constantly. Is there anything obvious I’m missing or yet to discover for this type of thing? I’m hoping to not need a GPU, so any ideas or improvement you can throw at me would be appreciated, thanks. A secondary question is where would be suitable to deploy this kind of thing? Is it something that would be better outsourced to Sagemaker or similar? Or is it reasonable to host it on our own servers (specs notwithstanding)?
Hi @SMB, I think a quick win here would be to load the tokenizer and model just once when you spin up the Flask app and then call them in your infer function. For example, something like this:
import pathlib
from flask import Flask
from transformers import AutoTokenizer, AutoModelWithLMHead

app = Flask(__name__)
MODEL_PATH = "/src/model_files"
model_path = pathlib.Path(MODEL_PATH).absolute()
title_model_tokenizer = AutoTokenizer.from_pretrained(model_path)
title_model = AutoModelWithLMHead.from_pretrained(model_path).to("cpu")

def infer(title: str) -> str:
    tokenized_text = title_model_tokenizer.encode(title, return_tensors="pt")
    title_ids = title_model.generate(
        tokenized_text,
        num_beams=1,
        repetition_penalty=1.0,
        length_penalty=1.0,
        early_stopping=False,
        no_repeat_ngram_size=1
    )
    return title_model_tokenizer.decode(title_ids[0], skip_special_tokens=True)
As for deployment, the answer probably depends on your production environment, but what I’ve usually done is to Dockerize the Flask app and then deploy the Docker container on Kubernetes or whatever platform is available (e.g. Heroku). There are quite a few tutorials online on how to do this and one of my favourite resources for this kind of stuff is the Full Stack DL course: https://fullstackdeeplearning.com/ 2 Note that Kubernetes is a complex beast and not recommended for a single-purpose app
PS. If you’re not bound to using Flask, I suggest having a look at FastAPI 4. It makes web app development much simpler, can handle concurrency, and comes with neat built-in features like data validation which is really useful for ML!
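If FastAPI is an option, a minimal equivalent sketch might look like this (the endpoint name is arbitrary and the generation arguments just mirror the snippet above):
import pathlib
from fastapi import FastAPI
from transformers import AutoTokenizer, AutoModelWithLMHead

app = FastAPI()
model_path = pathlib.Path("/src/model_files").absolute()
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelWithLMHead.from_pretrained(model_path)   # loaded once at startup

@app.post("/headline")
def headline(title: str):
    tokens = tokenizer.encode(title, return_tensors="pt")
    ids = model.generate(tokens, num_beams=1, no_repeat_ngram_size=1)
    return {"headline": tokenizer.decode(ids[0], skip_special_tokens=True)}

# run with: uvicorn app:app --host 0.0.0.0 --port 8000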
0
huggingface
🤗Transformers
Bert followed by a GRU
https://discuss.huggingface.co/t/bert-followed-by-a-gru/4086
I want to add a GRU “layer” from pytorch after a the pretrained BertModel as follows but I am not sure about the input_size. class BERT_Arch(nn.Module): def __init__(self, bert): super(BERT_Arch, self).__init__() self.bert = BertModel.from_pretrained('bert-base-uncased') # GRU self.gru = nn.GRU(input_size=? , hidden_size=256, num_layers=2) # input_size, hidden_size, num_layers def forward(...) ... The input_size of the GRU should be the output size of the Bert, right? According to the docs 1, Bert returns pooler_output. Would I need to input that to the GRU?
It depends on what you want: the base BERT model will return both the final hidden stage (shape (batch_size, sequence_length, hidden_size) and the pooler output which has the state for the CLS token of shape (batch_size, hidden_size).
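A minimal sketch of wiring that in, assuming you feed the per-token hidden states to the GRU (so input_size equals BERT's hidden_size, 768 for bert-base-uncased); the shapes in the comments follow the answer above:
import torch
from torch import nn
from transformers import BertModel

class BERT_Arch(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.gru = nn.GRU(input_size=self.bert.config.hidden_size,  # 768
                          hidden_size=256, num_layers=2, batch_first=True)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        hidden = outputs.last_hidden_state        # (batch, seq_len, 768)
        gru_out, gru_hidden = self.gru(hidden)    # (batch, seq_len, 256)
        return gru_out, gru_hidden
If you only need a single vector per sequence, you could instead feed outputs.pooler_output (shape (batch, 768)) to a plain linear layer.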
0
huggingface
🤗Transformers
Warning when adding compute_metrics function to Trainer
https://discuss.huggingface.co/t/warning-when-adding-compute-metrics-function-to-trainer/3782
When I add a custom compute_metrics function to the Trainer, I get the warning “Not all data has been set. Are you sure you passed all values?” at each evaluation step. This warning is defined in the finalize function of the class trainer_pt_utils.DistributedTensorGatherer:
if self._offsets[0] != self.process_length:
    logger.warn("Not all data has been set. Are you sure you passed all values?")
This is my compute_metrics function:
def compute_metrics(eval_pred):
    preds, labels = eval_pred
    preds = np.argmax(preds, axis=1)
    accuracy = round(accuracy_score(labels, preds), 3)
    micro_f1 = round(f1_score(labels, preds, average="micro"), 3)
    macro_f1 = round(f1_score(labels, preds, average="macro"), 3)
    return {"Accuracy": accuracy, "Micro F1": micro_f1, "Macro F1": macro_f1}
The additional metrics are successfully returned. So what does this warning mean? Any help would be much appreciated.
Could you share the code/script you use so we can reproduce on our side? It seems to indicate that not all the predictions where used for the metric computation (or it could jsut be a bug).
0
huggingface
🤗Transformers
Multilabel sequence classification with Roberta value error expected input batch size to match target batch size
https://discuss.huggingface.co/t/multilabel-sequence-classification-with-roberta-value-error-expected-input-batch-size-to-match-target-batch-size/1653
Trying to tune a multilabel (4 labels) model based on roberta-base. I’ve followed the examples in https://huggingface.co/transformers/custom_datasets.html 11. Trying to debug this value error: Traceback (most recent call last): trainer.train() File “transformers/trainer.py”, line 762, in train tr_loss += self.training_step(model, inputs) File “transformers/trainer.py”, line 1112, in training_step loss = self.compute_loss(model, inputs) File “transformers/trainer.py”, line 1136, in compute_loss outputs = model(**inputs) File “torch/nn/modules/module.py”, line 532, in call result = self.forward(*input, **kwargs) File “transformers/modeling_roberta.py”, line 1015, in forward loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) File “torch/nn/modules/module.py”, line 532, in call result = self.forward(*input, **kwargs) File “torch/nn/modules/loss.py”, line 916, in forward ignore_index=self.ignore_index, reduction=self.reduction) File “torch/nn/functional.py”, line 2021, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File “torch/nn/functional.py”, line 1836, in nll_loss .format(input.size(0), target.size(0))) ValueError: Expected input batch_size (16) to match target batch_size (64). I see this in modeling_roberta at the point of error. I looks like the labels for each of the batch results have been flattened into a single tensor, while the batch has the labels separately for each example of the 16. Seems like this might be the cause of the ValueError? but I’m not sure, and don’t know where the labels would have been flattened. Any ideas? tensor([[ 0.1793, 0.1338, -0.2123, -0.0945], [ 0.0498, 0.0472, -0.1983, -0.0353], [ 0.1932, 0.1970, -0.2003, -0.0471], [ 0.0913, 0.1411, -0.1835, -0.1387], [ 0.0770, -0.0101, -0.1017, -0.0149], [ 0.1980, 0.0772, -0.1894, -0.0487], [ 0.0161, 0.0107, -0.0100, 0.0067], [ 0.1063, 0.1120, -0.1842, -0.0567], [ 0.1610, 0.0769, -0.1609, -0.0883], [ 0.1866, 0.0182, -0.1137, -0.1047], [ 0.1132, 0.0587, -0.2452, -0.0698], [ 0.1680, -0.0125, -0.2019, -0.0674], [-0.0282, 0.1099, -0.1637, -0.1112], [ 0.1620, 0.1197, -0.2099, 0.0236], [ 0.1197, 0.1232, -0.2318, -0.0955], [ 0.3232, 0.1935, -0.3226, -0.0547]], device=‘cuda:0’, grad_fn=) labels view tensor([0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0], device=‘cuda:0’)
I had the same problem. The problem lies in nll_loss. For multilabel problems, BCEWithLogitsLoss is the most common choice I think. You can subclass Trainer and override the compute_loss function in your custom trainer to make things work. This worked for me:
import torch as th
from transformers import Trainer

class CustomTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        outputs = model(
            input_ids=inputs['input_ids'],
            attention_mask=inputs['attention_mask'],
            token_type_ids=inputs['token_type_ids']
        )
        loss = th.nn.BCEWithLogitsLoss()(outputs['logits'], inputs['labels'])
        return (loss, outputs) if return_outputs else loss
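One small caveat (not from the thread): BCEWithLogitsLoss expects float targets with one column per label, so the labels in your encodings should look something like this rather than class indices:
import torch

# batch of 2 examples, 4 labels each, multi-hot encoded as floats
labels = torch.tensor([[1, 0, 0, 1],
                       [0, 1, 0, 0]], dtype=torch.float)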
0
huggingface
🤗Transformers
How to get per-eval-step score when using trainer?
https://discuss.huggingface.co/t/how-to-get-per-eval-step-score-when-using-trainer/4069
I am training a model and am hoping to get the log of the per-evaluation-step performance while training. I currently set --evaluation_strategy steps and I can see the logs being printed. However, is there a way I could get the table as an object (json or pd.DataFrame) in Python?
|Step|Training Loss|Validation Loss|Rouge1|Rouge2|Rougel|Rougelsum|Gen Len|Runtime|Samples Per Second|
|---|---|---|---|---|---|---|---|---|---|
|100|No log|2.458449|39.987500|16.891800|37.788500|37.794300|10.292600|26.358400|44.730000|
|200|No log|2.128455|49.043000|24.468600|47.341900|47.270600|10.463100|26.245500|44.922000|
|300|No log|1.980806|51.324400|25.405300|49.549300|49.507100|10.305300|25.733800|45.815000|
|400|No log|1.892222|53.523700|27.361200|51.650900|51.613200|10.371500|25.708300|45.861000|
|500|0.997900|1.840045|54.044500|27.843400|52.146600|52.125500|10.392700|25.551800|46.142000|
|600|0.997900|1.810891|54.251600|28.728500|52.545900|52.492900|10.435100|26.443400|44.586000|
|700|0.997900|1.799217|54.409300|28.685000|52.649800|52.611400|10.400300|26.362800|44.722000|
Hi @mralexis, you could write a simple callback that saves the logs to disk - e.g. by adapting the PrinterCallback: transformers.trainer_callback — transformers 4.3.0 documentation 10 You can then pass your callback to the Trainer with the callbacks argument. Then you can load and process them after training
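For example, a minimal callback along those lines could look like this (the file name is arbitrary; logs can be loaded afterwards with e.g. pandas.read_json(..., lines=True)):
import json
from transformers import TrainerCallback

class JsonLoggerCallback(TrainerCallback):
    def __init__(self, path="train_logs.jsonl"):
        self.path = path

    def on_log(self, args, state, control, logs=None, **kwargs):
        # called every time the Trainer logs something (loss, eval metrics, ...)
        if logs is not None:
            with open(self.path, "a") as f:
                f.write(json.dumps({"step": state.global_step, **logs}) + "\n")

# trainer = Seq2SeqTrainer(..., callbacks=[JsonLoggerCallback()])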
0
huggingface
🤗Transformers
Workflow: how to avoid dummy_pt_objects.py in IDE search results?
https://discuss.huggingface.co/t/workflow-how-to-avoid-dummy-pt-objects-py-in-ide-search-results/1414
I always click the first result, which is often a dummy. Has anyone found a pycharm workaround? Some way to specify paths that search should ignore?
Great question, Sam! right click on dummy_objects.py and click “mark as plain text” h/t phpstorm docs 20 (works in pycharm).
0
huggingface
🤗Transformers
Error using `max_length` in transformers
https://discuss.huggingface.co/t/error-using-max-length-in-transformers/4008
I was under the assumption that the model automatically adjusts itself based on the max sequence length, but when I am fine-tuning Roberta-large I am getting an error during inference that the sequence is too large and thus the indices are out of range. Is there any way to use Trainer and PyTorch to set max_sequence_length as a parameter? I tried finding it in the docs but was unable to do so. This is a surprise because earlier models worked perfectly fine.
There is a way to do so when you are tokenising your data by setting the max_length parameter; train_encodings = tokenizer(seq_train, truncation=True, padding=True, max_length=1024)
0
huggingface
🤗Transformers
Pytorch BERT model not converging
https://discuss.huggingface.co/t/pytorch-bert-model-not-converging/4011
I’m currently working in research in transfer learning, and I’m trying to use bert-base-cased as a pretrained baseline, wrapped in a Pytorch model with dropout and a linear layer. I largely follow the recommendations of BERT and train using AdamW optimizer and a scheduler. My issue is, I can train on one task no problem, with the BERT recommended parameters of LR=2e-5, batch size=32, epochs=2. I use a cross entropy loss with class weighting to address a label imbalance issue. My issue is when I save this model, then reload the state as a base for further downstream training on a different classification task, it never finishes a single epoch. What’s also strange is I get 100% GPU utilisation, which doesn’t change (not the same on the baseline models). def train(self, optimizer, scheduler, minibatches: torch.utils.data.DataLoader) -> Dict: self.model = self.model.train() metrics = {"n_correct": 0, "losses": []} for batch in minibatches: texts, targets = batch targets = targets.to(self.device) encoded_input = BERTPreprocessor.encode(texts, self.tokenizer) for tensor in encoded_input: encoded_input[tensor] = (encoded_input[tensor] .to(self.device)) logits, *_ = self.model(**{ "input_ids": encoded_input["input_ids"], "attention_mask": encoded_input["attention_mask"], }) loss = self.loss_fn(logits, targets) _, preds = torch.max(logits, dim=1) metrics["n_correct"] += torch.sum(preds == targets) metrics["losses"].append(loss.item()) loss.backward() torch.nn.utils.clip_grad_norm_(self.model.parameters(), max_norm=1.0) optimizer.step() scheduler.step() optimizer.zero_grad() return metrics I’ve attached my training process above and I’m happy to provide any further information, but as I’m fairly new to the field, I’m at a loss with how to address this. To add, the weighting scheme I use for the weight parameter in the loss function is num_minority_class/class_n for the negative and positive classes (binary classification).
Hi, [I am not an expert, but I have saved and reloaded Bert models]. What commands are you using to save and reload your model? Are you saving just the BERT weights, or your custom dropout and linear layers too? Do you get any error messages on reloading? Do you get any error messages when you try to run your second classification task (or does it just go on in an infinite loop)? Have you tried running your second task immediately after your first task (without doing a save and reload)? Does it work? [Silly questions] How do you know your first task has “worked”? Are you using BertModel, and have you considered using BertForSequenceClassification? When you train, are you updating the BERT weights, or have you frozen them?
0
huggingface
🤗Transformers
Multi gpu training
https://discuss.huggingface.co/t/multi-gpu-training/4021
It seems that the hugging face implementation still uses nn.DataParallel for one node multi-gpu training. In the pytorch documentation page, it clearly states that " It is recommended to use DistributedDataParallel instead of DataParallel to do multi-GPU training, even if there is only a single node. Could you please clarify if my understanding is correct? and if your training support DistributedDataParallel for one node with multiple GPUs.
Both are supported by the Hugging Face Trainer. You just have to use the pytorch launcher to use DistributedDataParallel, see an example here 690.
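For reference, launching on a single node with, say, 4 GPUs looks roughly like this (the script name and arguments are placeholders); without the launcher, the Trainer falls back to nn.DataParallel when it sees more than one GPU:
python -m torch.distributed.launch --nproc_per_node=4 your_training_script.py --your_args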
0
huggingface
🤗Transformers
What’s the best way to load a saved Tokenizer json into a transformers PreTrainedTokenizerFast (or other transformers tokenizer)?
https://discuss.huggingface.co/t/whats-the-best-way-to-load-a-saved-tokenizer-json-into-a-transformers-pretrainedtokenizerfast-or-other-transformers-tokenizer/4012
This is kind of a cross-library question, so maybe it belongs in the Tokenizers forum instead. But hopefully this is the right place. I’ve made a custom Roberta-style BPE tokenizer for my project using the tokenizers library, with some useful preprocessors and other helpful goodies. I’m able to load the saved json into a tokenizers Tokenizer and it works as expected. But I’d like to load it as a transformers PreTrainedTokenizerFast instead, and I’m not sure if there’s a good way to do that. I can pass tokenizer_file="my_tokenizer.json" while creating a PreTrainedTokenizerFast, but it doesn’t seem to read the json for the padding token information, and several methods raise a NotImplementedError, so I assume that class is not meant to be used directly. Making a RobertaTokenizerFast requires vocab and merges files. So I created those as well, and I can pass them (redundantly), along with tokenizer_file="my_tokenizer.json". But the resulting tokenizer isn’t respecting the json’s add_prefix_space configuration, so I also have to pass in add_prefix_space=True to get my desired behavior. This is making me wonder if I’m doing something wrong here. Is there a way to load a saved Tokenizers json file directly into some kind of transformers tokenizer?
For now, you do have to specify all the information in the init (even if it’s also in the json). We’ll work on making that more seamless in the future.
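Concretely, that currently means something like the following (the special-token strings here are placeholders — use whatever your tokenizer was actually trained with):
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="my_tokenizer.json",
    unk_token="<unk>",
    pad_token="<pad>",
    bos_token="<s>",
    eos_token="</s>",
    mask_token="<mask>",
)
enc = tokenizer("hello world", padding=True, return_tensors="pt")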
0
huggingface
🤗Transformers
Trainer gives error after 1st epoch and evaluation
https://discuss.huggingface.co/t/trainer-gives-error-after-1st-epoch-and-evaluation/4006
Hi, I have managed to get my code working for training, yet after 1 epoch I am getting weird results for the evaluation, along with this warning:
Trainer is attempting to log a value of "[0. 0. 0. 1. 0.]" of type <class 'numpy.ndarray'> for key "eval/recall" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.
and then an error:
TypeError: Object of type ndarray is not JSON serializable
I used the compute_metrics function (measuring accuracy, F1, precision and recall) provided by the tutorial, and I am doing a classification task with 5 labels.
I used the compute_metrics (measure accuracy, f1, precision and accuracy) provided by the tutorial and I am doing a classification task with 5 task. Could you tell us which tutorial? From the error message, it seems your metric value is a NumPy array, so sharing your compute_metric function would also help.
0
huggingface
🤗Transformers
Different models when loading checkpoint (run_mlm)
https://discuss.huggingface.co/t/different-models-when-loading-checkpoint-run-mlm/3929
I’m continuing training ‘bert-base-uncased’ with run_mlm.py. I expected these 2 options to return the same model:
1. Training for 1 epoch.
2. Training for 2 epochs and saving a checkpoint after the first epoch.
Why are the first model (1 epoch) and the checkpoint of the second model different?
And another question - is there a way to get the perplexity of each checkpoint? I tried to run the run_mlm script for each checkpoint (with the --do_eval flag, without the --do_train flag). It worked, though I’m not sure that’s the proper way… The perplexity is quite different from the scenario of training “in one shot” up to the checkpoint (as explained above in option 1). Thanks
Are you using a learning-rate-scheduler? If the learning rate of the single epoch is different from the learning-rate of the first epoch of two, then you will get quite different results.
0
huggingface
🤗Transformers
BERT for Speech
https://discuss.huggingface.co/t/bert-for-speech/3823
How can I use HF’s BERT models for speech-to-text training?
Not easily. BERT expects tokenized inputs, where natural language text has been coded (tokenized) as numbers. To use BERT for speech, you would need to convert your audio to similar tokens. If you want to use a pre-trained BERT model, then you would need to use exactly the same tokens. If you want to train a BERT model from scratch then you could define your own tokens. To learn more about tokenizing, try this BERT Word Embeddings Tutorial · Chris McCormick 1
0
huggingface
🤗Transformers
How to create a tokenizers from a custom pretrained tokenizer?
https://discuss.huggingface.co/t/how-to-create-a-tokenizers-from-a-custom-pretrained-tokenizer/3909
I have created a custom tokenizer from the tokenizers library, roughly following (The tokenization pipeline — tokenizers documentation 4). However, these tokenizers do not have utilities like transforming encoded sentences to torch tensors and so on. For this, I’d want to use the PreTrainedTokenizerFast class. It exposes an interface for getting the tokenizers of various pretrained models from google/facebook/etc, but I want to use my own tokenizer. How do I create a PreTrainedTokenizerFast from my own tokenizer? thanks!!
cc @anthony on this.
0
huggingface
🤗Transformers
Mask More Than one Word:
https://discuss.huggingface.co/t/mask-more-than-one-word/3800
(screenshot omitted) Here, it says you can mask k tokens. However, in the documentation, it shows you only being able to mask one token. Is it possible to mask k words or am I mistaken?
Are you using a fill-mask pipeline? If so, there’s a hard-coded limit of a single mask in the class, even though the model itself may support multiple masks. I guess the added behavior would warrant some more functionality when it comes to choosing how to sample, e.g. if there are N masked tokens with a few top-k probabilities each one might either want to sample from the join distribution (i.e. ranking the pairs based on p1*p2) or independently. The best approach would depend on the model’s internals, I suppose. I saw a post a while back welcoming a PR for this matter, so it’s a wanted feature.
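For reference, a sketch of handling several masks outside the pipeline, scoring each masked position independently (the checkpoint and example sentence are just illustrative):
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = f"The capital of France is {tokenizer.mask_token} and it is a {tokenizer.mask_token} city."
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits

# locate every [MASK] position and take the top-5 candidates at each one independently
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
for pos in mask_positions:
    top = torch.topk(logits[0, pos], k=5).indices
    print(pos.item(), tokenizer.convert_ids_to_tokens(top.tolist()))
Ranking the candidate pairs jointly (e.g. by the product of their probabilities) would require extra code on top of this.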
0
huggingface
🤗Transformers
What FineTuning can be done with a available models
https://discuss.huggingface.co/t/what-finetuning-can-be-done-with-a-available-models/3795
Hi, I am new to transformers and have a general question although I will ask it to my task at hand. I need to classify amino acid sequences and wanted to use the ProtBERT model to do so. This means I want to add a classification head at the end of the model. Having seen the tutorial , I have seen how to do this using DistilBertForSequenceClassification. I was wondering if such a module is available for all language models i.e is there ProtBertForSequenceClassification or is this available for only specific language models
It seems the ProtBERT checkpoints use the bert architecture so you can use them with BertForSequenceClassification. In general you can just try AutoModelForSequenceClassification.from_pretrained(checkpoint_name) and it will return an error if there is no sequence classification model in the library for the architecture you are using with checkpoint_name.
0
huggingface
🤗Transformers
How to restrict training to one GPU if multiple are available, co
https://discuss.huggingface.co/t/how-to-restrict-training-to-one-gpu-if-multiple-are-available-co/1244
I have multiple GPUs available in my enviroment, but I am just trying to train on one GPU. It looks like the default fault setting local_rank=-1 will turn off distributed training However, I’m a bit confused on their latest version of the code github.com huggingface/transformers/blob/master/src/transformers/training_args.py#L343 19 @cached_property @torch_required def _setup_devices(self) -> Tuple["torch.device", int]: logger.info("PyTorch: setting up devices") if self.no_cuda: device = torch.device("cpu") n_gpu = 0 elif is_torch_tpu_available(): device = xm.xla_device() n_gpu = 0 elif self.local_rank == -1: # if n_gpu is > 1 we'll use nn.DataParallel. # If you only want to use a specific subset of GPUs use `CUDA_VISIBLE_DEVICES=0` # Explicitly set CUDA to the first (index 0) CUDA device, otherwise `set_device` will # trigger an error that a device index is missing. Index 0 takes into account the # GPUs available in the environment, so `CUDA_VISIBLE_DEVICES=1,2` with `cuda:0` # will use the first GPU in that env, i.e. GPU#1 device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") n_gpu = torch.cuda.device_count() else: # Here, we'll use torch.distributed. If local_rank =-1 , then I imagine that n_gpu would be one, but its being set to torch.cuda.device_count() . But then the device is being set to cuda:0 And if local_rank is anything else, n_gpu is being set to one. I was thinking may be the meaning of local_rank has changed, but looking at the main training code, it doesn’t look like it github.com huggingface/transformers/blob/master/src/transformers/trainer.py#L331 5 ) dataset.set_format(type=dataset.format["type"], columns=columns) def _get_train_sampler(self) -> Optional[torch.utils.data.sampler.Sampler]: if isinstance(self.train_dataset, torch.utils.data.IterableDataset): return None elif is_torch_tpu_available(): return get_tpu_sampler(self.train_dataset) else: return ( RandomSampler(self.train_dataset) if self.args.local_rank == -1 else DistributedSampler(self.train_dataset) ) def get_train_dataloader(self) -> DataLoader: """ Returns the training :class:`~torch.utils.data.DataLoader`. Will use no sampler if :obj:`self.train_dataset` is a :obj:`torch.utils.data.IterableDataset`, a random sampler (adapted to distributed training if necessary) otherwise.
You can use the CUDA_VISIBLE_DEVICES directive to indicate which GPUs should be visible to the command that you’ll use. For instance # Only make GPUs #0 and #1 visible to the python script CUDA_VISIBLE_DEVICES=0,1 python train.py <args> # Only make GPU #3 visible to the script CUDA_VISIBLE_DEVICES=3 python train.py <args>
0
huggingface
🤗Transformers
Loss in on_step_end() callback methods
https://discuss.huggingface.co/t/loss-in-on-step-end-callback-methods/3717
I need to access the cross entropy loss for the current training step within the on_step_end() method, which won’t accept passing logs to it. How could this be done? Thanks
The loss is not passed along the callbacks right now, so you can’t access it from there.
0
huggingface
🤗Transformers
Error fine-tuning distilled Pegasus with run_seq2seq.py
https://discuss.huggingface.co/t/error-fine-tuning-distilled-pegasus-with-run-seq2seq-py/3661
Hello, This is my first post in the forum. I have successfully fine-tuned t5-small and distilled bart models using run_seq2seq.py. When I try to fine-tune sshleifer/distill-pegasus-xsum-16-8: !python examples/seq2seq/run_seq2seq.py \ --model_name_or_path $modelname \ --do_train \ --do_eval \ --task summarization \ --train_file $trainpath \ --validation_file $valpath \ --output_dir $modelsave \ --overwrite_output_dir \ --per_device_train_batch_size=1 \ --per_device_eval_batch_size=1 \ --predict_with_generate \ --text_column ctext \ --save_steps=100000 \ --num_train_epochs=1 \ --summary_column text I get the following error: 5.27it/s]/opt/conda/conda-bld/pytorch_1603729138878/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [213,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1603729138878/work/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [213,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. .................. File "/kaggle/working/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 340, in forward if torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any(): RuntimeError: CUDA error: device-side assert triggered 14%|████▉ | 3684/26890 [12:08<1:16:30, 5.05it/s] I have tested with batches of size 1 and 2, and in both cases, the error is triggered at 14% of training steps. The size of the training dataset is 26890 and the validation dataset 6720. I tested the code in Google Colab and in Kaggle Kernels. Has anyone successfully fine-tuned PEGASUS or a distilled version of PEGASUS using run_seq2seq.py? Which arguments did you use? Thank you for your valuable time and help
Could you try to run the code on CPU? It usually helps to run the code on CPU when you have a CUDA error, to get a more informative error message.
0
huggingface
🤗Transformers
Multiple sequences per sample
https://discuss.huggingface.co/t/multiple-sequences-per-sample/3660
Hi, I am using bert to embed word similarities in a model I am working on. For each sample I have a list of object labels of variable lengths associated with it, for example: ['dog', 'cat', 'car', 'bus'] or ['person', 'sign'] My implementation so far has been to combine all the object labels together split with '[SEP]' and pass this through the bert network: ['dog', 'cat', 'car', 'bus'] – combine → 'dog [SEP] cat [SEP] car [SEP] bus' – tokenize → bert(tokens) → [768] embedding I am not sure if this is the best way to handle this situation and was thinking to pass each object label seperately through the model and perform some sort of combination of the embeddings at the end, for example: [bert('dog'), bert('cat'), bert('car'), bert('bus')] → torch.sum([[768],[768],[768],[768]]) For this method I have modified my code so that a vector of batch_size x T x object label tokens is generated where T is the number of objects for the sample and can vary. With padding the shape of this tensor is [192,8,5], I am unsure how I would pass this through to bert() though considering each tokenized object label (T) would have to be passed seperately. I dont have a great deal of experience with bert so I was hoping someone more experienced may be able to give me some advice. I grealty appreciated any help/suggestions.
For anyone wondering, I got this to work by flattening the batch and sequence length (T) dimensions:
# x.shape = [192, 8, 5]
y = torch.flatten(x, start_dim=0, end_dim=1).to(torch.int64)
# y.shape = [1536, 5]
bert_embeds = bert(y)
0
huggingface
🤗Transformers
How to predict in Tensorflow
https://discuss.huggingface.co/t/how-to-predict-in-tensorflow/3630
Hi, I have just finetuned RoBERTa for a classification problem, trained and stored the model. I have used native Tensorflow throughout, but I can’t find any examples anywhere related to finally predicting in TF. The only ones I found were all for PyTorch. Can someone please guide me to some snippet of code for prediction? The official example has no such code… EDIT:- I tried using model.predict() method that is used normally and even the PyTorch one which reports missing config. I am really confused as to how we are supposed to serve predictions. IMO the docs need updating
Hi @Neel-Gupta, the steps are usually as follows.
You load your trained model, similar to this:
loaded_model = TFDistilBertForSequenceClassification.from_pretrained("./sentiment")
Then you have to run your text through the same preprocessing you used for training, usually some kind of tokenizing:
predict_input = tokenizer.encode(test_sentence, truncation=True, padding=True, return_tensors="tf")
And then you can finally predict based on your loaded model and the preprocessed text:
tf_output = loaded_model.predict(predict_input)[0]
Disclaimer: I am the author of this article, but it covers the process from data to deployment. https://blog.doit-intl.com/performing-surprisingly-easy-sentiment-analysis-on-google-cloud-platform-fc26b2e2b4b 48
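Since the output above is raw logits from the classification head, you may want to turn it into probabilities or a class id, roughly like this (variable names follow the snippet above):
import tensorflow as tf

probs = tf.nn.softmax(tf_output, axis=1).numpy()          # per-class probabilities
predicted_class = tf.argmax(tf_output, axis=1).numpy()    # index of the highest logit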
0
huggingface
🤗Transformers
Inconsistent Model/Pipeline Behavior using Automodel/Pipeline/BartForConditionalGeneration
https://discuss.huggingface.co/t/inconsistent-model-pipeline-behavior-using-automodel-pipeline-bartforconditionalgeneration/3649
I’m using code 99% provided by huggingface, which is the main source of confusion. I am attempting summarization of medical scientific documents. I am on transformers version 4.2.0 My code comes from 3 locations, and for the most part, is unmodified. https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration model = BartForConditionalGeneration.from_pretrained('facebook/bart-large') tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') inputs = tokenizer([text], max_length=1024, return_tensors='pt') # Generate Summary summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=150, min_length = 40, early_stopping=True) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) https://huggingface.co/transformers/task_summary.html #model = AutoModelWithLMHead.from_pretrained("") model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large") tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large") # T5 uses a max_length of 512 so we cut the article to 512 tokens. inputs = tokenizer.encode(abstract, return_tensors="pt", max_length=512) outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True) print(tokenizer.decode(outputs[0])) The main significance here is that I changed the LMHead to Seq2SeqLM, as recommended by the warning when I run it. #Pipelines — transformers 4.3.0 documentation This is the current third method I’m using to run the code. summarizer = pipeline("summarization", model="facebook/bart-large", tokenizer="facebook/bart-large", framework="pt") summary = summarizer(text, min_length=40, max_length=150, length_penalty = 2.0, num_beams = 4, early_stopping = True) print(summary) I’ll summarize some results below. 1 is pipeline, 2 is AutoModel, 3 is BartForConditional When using facebook/bart-large, 1 & 3 … will give the same results. However, the second one, AutoModel, gives different results, despite documentation indicating (to me) that AutoModelForSeq2SeqLM should, in this case, be identical to the BArtForConditionalGeneration. Results get stranger when using facebook/bart-large-xsum, which gives the same results for 2/3… While 1 actually comes back with a result that’s nowhere to be found in the original text. Using facebook/bart-large-cnn, all 3 results are the same. I haven’t tested more than this. I don’t know if this is just major user error, or something for GitHub. Please let me know. Input text is from a medical abstract, located below. text = "Oesophageal squamous cell carcinoma (ESCC) is an aggressive malignancy and a leading cause of cancer-related death worldwide. Lack of effective early diagnosis strategies and ensuing complications from tumour metastasis account for the majority of ESCC death. Thus, identification of key molecular targets involved in ESCC carcinogenesis and progression is crucial for ESCC prognosis. In this study, four pairs of ESCC tissues were used for mRNA sequencing to determine differentially expressed genes (DEGs). 347 genes were found to be upregulated whereas 255 genes downregulated. By screening DEGs plus bioinformatics analyses such as KEGG, PPI and IPA, we found that there were independent interactions between KRT family members. KRT17 upregulation was confirmed in ESCC and its relationship with clinicopathological features were analysed. 
KRT17 was significantly associated with ESCC histological grade, lymph node and distant metastasis, TNM stage and five-year survival rate. Upregulation of KRT17 promoted ESCC cell growth, migration, and lung metastasis. Mechanistically, we found that KRT17-promoted ESCC cell growth and migration was accompanied by activation of AKT signalling and induction of EMT. These findings suggested that KRT17 is significantly related to malignant progression and poor prognosis of ESCC patients, and it may serve as a new biological target for ESCC therapy. SIGNIFICANCE: Oesophageal cancer is one of the leading causes of cancer mortality worldwide and oesophageal squamous cell carcinoma (ESCC) is the major histological type of oesophageal cancer in Eastern Asia. However, the molecular basis for the development and progression of ESCC remains largely unknown. In this study, RNA sequencing was used to establish the whole-transcriptome profile in ESCC tissues versus the adjacent non-cancer tissues and the results were bioinformatically analysed to predict the roles of the identified differentially expressed genes. We found that upregulation of KRT17 was significantly associated with advanced clinical stage, lymph node and distant metastasis, TNM stage and poor clinical outcome. Keratin 17 (KRT17) upregulation in ESCC cells not only promoted cell proliferation but also increased invasion and metastasis accompanied with AKT activation and epithelial-mesenchymal transition (EMT). These data suggested that KRT17 played an important role in ESCC development and progression and may serve as a prognostic biomarker and therapeutic target in ESCC. " EDIT1: I originally forgot “length_penalty = 2.0” in the BartConditinal/3. However, this had no effect on anything.
KublaiKhan1: inputs = tokenizer([text], max_length=1024, return_tensors='pt') [...] # T5 uses a max_length of 512 so we cut the article to 512 tokens. inputs = tokenizer.encode(abstract, return_tensors="pt", max_length=512) You are using a different tokenization for both examples, and in particular, the max_length is different. Maybe that’s the reason?
0
huggingface
🤗Transformers
Understanding BertLMPredictionHead
https://discuss.huggingface.co/t/understanding-bertlmpredictionhead/3618
Hey there! I am currently trying to understand how some of the transformer models work and start by focussing on BERT. Because I am trying to figure stuff out I highly appreciate corrections regarding any assumptions I state in this post! Mapping the output embeddings back to the initial tokens is of special interest to me - a task which is done by the MLM head: class BertLMPredictionHead(nn.Module): def __init__(self, config): super(BertLMPredictionHead, self).__init__() self.transform = BertPredictionHeadTransform(config) # The output weights are the same as the input embeddings, but there is # an output-only bias for each token. self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) self.bias = nn.Parameter(torch.zeros(config.vocab_size)) def forward(self, hidden_states): hidden_states = self.transform(hidden_states) hidden_states = self.decoder(hidden_states) + self.bias return hidden_states So the hidden states processed by the head will first be transformed by BertPredictionHeadTransform class and then fed to a linear layer with an “external” bias. The transform() operaton simply performs a linear transformation but keeps the input shape and then applies an activation function + layernorm: class BertPredictionHeadTransform(nn.Module): def __init__(self, config): super(BertPredictionHeadTransform, self).__init__() self.dense = nn.Linear(config.hidden_size, config.hidden_size) if isinstance(config.hidden_act, str) or (sys.version_info[0] == 2 and isinstance(config.hidden_act, unicode)): self.transform_act_fn = ACT2FN[config.hidden_act] else: self.transform_act_fn = config.hidden_act self.LayerNorm = BertLayerNorm(config.hidden_size, eps=config.layer_norm_eps) def forward(self, hidden_states): hidden_states = self.dense(hidden_states) hidden_states = self.transform_act_fn(hidden_states) hidden_states = self.LayerNorm(hidden_states) return hidden_states So far so good (unless I got something wrong!) - what I don’t understand yet is how the weights for the decoder are determined. According to the comment above setting the decoder the input emeddings are used as weights of the linear layer. This seems to be enforced by the following code: def tie_weights(self): """ Make sure we are sharing the input and output embeddings. Export to TorchScript can't handle parameter sharing so we are cloning them instead. """ self._tie_or_clone_weights(self.cls.predictions.decoder, self.bert.embeddings.word_embeddings) This works because nn.Linear of pytorch by default transposes its weight matrix and so the shapes work out, correct? But the cloning of the weights is just some sort of initialization and they are still further trained (together with the bias) during the pretraining MLM task, right? So what I am wondering now is: Is there a special reasoning for cloning the weights? Has this also been done in the original BERT model and did they describe or explain it somewhere? Any information or feedback highly appreciated!
This is not something unique to BERT but actually an artefact from the original Transformer. I remember reading about it in their paper but before that paper, tying input and output embeddings was proposed in Press and Wolf (2017) 2. From the abstract: Finally, we show that weight tying can reduce the size of neural translation models to less than half of their original size without harming their performance.
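In plain PyTorch, the tying itself is just parameter sharing; the shapes line up because nn.Linear stores its weight as (out_features, in_features). A tiny sketch with made-up sizes:
import torch
from torch import nn

vocab_size, hidden_size = 30522, 768
embedding = nn.Embedding(vocab_size, hidden_size)           # weight: (vocab, hidden)
decoder = nn.Linear(hidden_size, vocab_size, bias=False)    # weight: (vocab, hidden)
decoder.weight = embedding.weight                           # same Parameter object now

assert decoder.weight.data_ptr() == embedding.weight.data_ptr()
So the weights are shared (not just initialized from each other) and keep being updated jointly during MLM pretraining; only the output bias is a separate parameter.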
0
huggingface
🤗Transformers
FAISS indexing for MARCO dataset
https://discuss.huggingface.co/t/faiss-indexing-for-marco-dataset/3613
Hey everyone, I’m trying to create a FAISS Index for the MS-MARCO dataset and I’m following the documentation provided here 7. I’m trying to understand if there is a way to create the Faiss index in a much more batch effective way. The current worked out example seems to be taking each example and encoding it one by one and I’m not sure if this is the only way to do it, or if datasets has some functionality that can make this go faster. The reason I’m asking is because the expected time show to index “just” the training data is around 530 hours on a GPU Colab notebook. Any insight on this would be appreciated. This is the code snippet that I’ve been working with: !pip install transformers datasets faiss-gpu from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast import torch torch.set_grad_enabled(False) ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") from datasets import load_dataset ds = load_dataset('ms_marco', 'v2.1', split='train') ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example['passages']['passage_text'], return_tensors="pt", padding="longest"))[0][0].numpy()}) ds_with_embeddings.add_faiss_index(column='embeddings') ds_with_embeddings.save_faiss_index('embeddings', 'drive/MyDrive/marco.faiss.train')
There are two things you can do AFAIK:
- use multiprocessing in map by setting num_proc > 0
- use batching (probably the biggest bottleneck) by setting batched=True and batch_size to a reasonable amount
E.g. something like this (untested; you may need to change some things here and there):
ds_with_embeddings = ds.map(
    lambda batch: {'embeddings': ctx_encoder(**ctx_tokenizer(batch['passages']['passage_text'], return_tensors="np", padding="longest"))[0][0]},
    batched=True,
    batch_size=64,
    num_proc=6,
)
0
huggingface
🤗Transformers
Does T5 truncate input longer than 512 internally?
https://discuss.huggingface.co/t/does-t5-truncate-input-longer-than-512-internally/3602
Or will it process them correctly since it uses relative attention? To put it differently: does the memory and processing power it uses depend on the actual longest sequence within the current batch? So then I could speed up training by putting sequences with similar length together in a batch. Or every batch will be padded/truncated to the length specified in the config?
Great question! The 512 in T5’s config is a bit misleading since it is not a hard limit. T5 was mostly trained using 512 input tokens, however, thanks to its use of relative attention it can use much longer input sequences. This means that if you increase your input length more and more you won’t get a "index out of positional embedding matrix" error you will get for other models, but you’ll eventually get a “out of memory CUDA” error. T5 does use “normal” attention meaning that memory consumption scales quadractically (n^2) with the input length. So for T5 it makes a lot of sense to use padding/trunctuation and trying to have batches of similar length.
0
huggingface
🤗Transformers
NER for chunks / sentences
https://discuss.huggingface.co/t/ner-for-chunks-sentences/3590
I understand that NER task in transformers involves tokenization, embedding, and classification for tokens that are either whole words or partial words. Is it possible to adapt this task for chunk/sentence embedding and classification?
I suppose there should be a tokenizer to make chunks / senstences for embedding rather than words / subwords?
0
huggingface
🤗Transformers
EncoderDecoderModel with Longformer and Bert
https://discuss.huggingface.co/t/encoderdecodermodel-with-longformer-and-bert/3601
Hi, Is it possible to create EncoderDecoderModel using Longformer and Bert? I tried Longformer and Roberta and it works (the training runs), but if I use Longformer and Bert, I get this error message RuntimeError: CUDA error: device-side assert triggered when I train it.
Normally Longformer and BERT should work in an encoder-decoder setting. If you have a CUDA error, it’s advised to test the code on CPU and see if you’re getting an error that is more interpretable.
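For reference, a minimal sketch of that setup (checkpoint names are the usual public ones; the token ids are the ones you typically need to set explicitly). One thing worth double-checking when debugging on CPU: the decoder labels must be produced with the BERT tokenizer — ids from the Longformer/RoBERTa tokenizer can exceed BERT's vocabulary and trigger exactly this kind of device-side assert.
from transformers import EncoderDecoderModel, BertTokenizer

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-base-4096", "bert-base-uncased"
)
decoder_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# these usually need to be set explicitly before training/generation
model.config.decoder_start_token_id = decoder_tokenizer.cls_token_id
model.config.eos_token_id = decoder_tokenizer.sep_token_id
model.config.pad_token_id = decoder_tokenizer.pad_token_id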
0
huggingface
🤗Transformers
Gradual Layer Freezing
https://discuss.huggingface.co/t/gradual-layer-freezing/3381
I have a short question: how do I perform gradual layer freezing using the huggingface Trainer? I read that one can freeze layers with:
modules = [L1bb.embeddings, *L1bb.encoder.layer[:5]]  # Replace 5 by what you want
for module in modules:
    for param in module.parameters():
        param.requires_grad = False
but using the huggingface Trainer I do not write my own loops, where I could start freezing some layers, let's say starting from the second epoch. How can I start freezing some layers only from the second epoch on and then gradually increase the number of layers frozen per epoch? Thanks
There is nothing out of the box in the library to unfreeze parts of your model during training. You can pass the model with some layers frozen, using the code you wrote, but it will stay this way. You can try to use a TrainerCallback 20 to unfreeze parts of the model in the middle of the training (after a given number of steps/epochs).
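A rough sketch of such a callback (the attribute paths assume a BERT-style model and the unfreezing schedule is arbitrary — adjust to your setup):
from transformers import TrainerCallback

class GradualUnfreezeCallback(TrainerCallback):
    def __init__(self, layers_per_epoch=2):
        self.layers_per_epoch = layers_per_epoch

    def on_epoch_begin(self, args, state, control, model=None, **kwargs):
        if model is None:
            return
        epoch = int(state.epoch or 0)
        layers = model.base_model.encoder.layer
        # unfreeze the top `layers_per_epoch * epoch` layers, keep the rest frozen
        n_unfrozen = min(len(layers), self.layers_per_epoch * epoch)
        for i, layer in enumerate(layers):
            requires_grad = i >= len(layers) - n_unfrozen
            for param in layer.parameters():
                param.requires_grad = requires_grad

# trainer = Trainer(..., callbacks=[GradualUnfreezeCallback()])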
0
huggingface
🤗Transformers
Gradual Layer Freezing with huggingface model
https://discuss.huggingface.co/t/gradual-layer-freezing-with-huggingface-model/3566
Hi, I would like to freeze only the first few encoder layers in a BERT model. I can do it by looping through the 202 parameters and freezing them by position:
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
for param in list(model.parameters())[6:60]:
    param.requires_grad = False
But isn’t there a better way to freeze some layers without having to rely on the order? Thanks
Hi, I don’t think you can freeze the encoder layers without effectively freezing the embedding layer too. (The changes won’t be able to propagate back through the frozen layers to reach the earlier layers.)
0
huggingface
🤗Transformers
PretrainedConfig example to use it in GPT2 text-generation pipeline
https://discuss.huggingface.co/t/pretrainedconfig-example-to-use-it-in-gpt2-text-generation-pipeline/3520
I want to modify the default parameters in the PretrainedConfig 1 and use it in the pipeline, prediction = pipeline('text-generation', model=model_path, tokenizer=tokenizerObject) Can someone please help me in constructing the PretrainedConfig object with parameters so that I can use it in the pipeline object.
Hi @bala1802, since the TextGenerationPipeline accepts a pretrained model as an argument perhaps you can adapt the following for your use case: from transformers import AutoModelWithLMHead, AutoTokenizer, AutoConfig, TextGenerationPipeline model_ckpt= "gpt2" # override default parameters here, eg output the hidden states config = AutoConfig.from_pretrained(model_ckpt, output_hidden_states=True) model = AutoModelWithLMHead.from_pretrained(model_ckpt, config=config) tokenizer = AutoTokenizer.from_pretrained(model_ckpt) pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer) You might not even need the TextGenerationPipeline class since I think that this is equivalent to instantiating the pipeline as follows: pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
0
huggingface
🤗Transformers
Using penalized sampling from CTRL
https://discuss.huggingface.co/t/using-penalized-sampling-from-ctrl/3500
So CTRL uses a new decoding mechanism called Penalized sampling which basically discounts the scores of previously generated tokens. I was wondering about the implementation with model.generate. Is repetition_penalty=1.2 with temperature=0 sufficient ?
With repetition_penalty set, I'm not seeing generations as compelling as the ones mentioned in the paper. I'm trying out other decoding strategies as well. If you have a suggestion about which parameters work best with CTRL, please let me know.
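If it is useful for comparison, here is the kind of call I would try (checkpoint name, control code and values are illustrative; note that the paper's temperature → 0 limit corresponds to greedy decoding, i.e. do_sample=False in generate, rather than literally passing temperature=0):

from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")

# CTRL prompts should start with a control code such as "Links" or "Books"
input_ids = tokenizer.encode("Links Hugging Face releases a new library", return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=False,         # greedy search, the temperature -> 0 limit
    repetition_penalty=1.2,  # the value the CTRL paper reports working well
)
print(tokenizer.decode(output[0]))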
0
huggingface
🤗Transformers
What is the context as per run_clm?
https://discuss.huggingface.co/t/what-is-the-context-as-per-run-clm/3425
I am using a dataset from dataset library for text generation task. For each example in dataset, I want to provide sentence1 and label as context, and allow network to generate sentence2 and this should be used for loss calculation. In the run_clm script, I’m not able to find this distinction as to what is being used as context. Based on my understanding attention_mask and token_type_ids are responsible for controlling what network sees and what not. So from this, I want label+sentence1 to have 0s and sentence2 to have 1s in attention_mask. def tokenize_function(examples): tokenized_input = tokenizer(LABELS[examples['label']], examples['sentence1']) # not sure how to manage examples['sentence2'] return tokenized_input I want the output to be [CLS] [SEP] label [SEP] sentence 1 [SEP] sentence 2 (to be generated) [SEP] Can anyone clarify ?
If I understand your question correctly, you want to calculate the loss using only the generated text. This is not handled by the run_clm script. To handle it yourself, prepare the labels so that all tokens have the value -100 except the ones you want to include in the loss; positions with -100 are ignored by the cross-entropy loss function.
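A sketch of that idea, reusing the names from the question (the exact separator handling and the attention-mask choice are assumptions on my part, not something run_clm does):

def tokenize_function(examples):
    # context = label text + sentence1; target = sentence2, the part to be generated
    context_ids = tokenizer(LABELS[examples['label']] + " " + examples['sentence1'])['input_ids']
    target_ids = tokenizer(" " + examples['sentence2'])['input_ids']
    input_ids = context_ids + target_ids
    # -100 masks the context out of the loss; only the target tokens contribute
    labels = [-100] * len(context_ids) + target_ids
    return {'input_ids': input_ids, 'attention_mask': [1] * len(input_ids), 'labels': labels}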
0
huggingface
🤗Transformers
AutoModel resolution outside of HF ecosystem
https://discuss.huggingface.co/t/automodel-resolution-outside-of-hf-ecosystem/3453
Hey guys, in the current API, unless I am doing something wrong, I have to specify each remote file (config, PyTorch model) explicitly in order to load a custom model outside of the HF repo ecosystem. For example, to load a custom model private_model_name from a hypothetical remote repo example.com I need to do the following: config = AutoConfig.from_pretrained("https://example.com/models/private_model_name/config.json") model = AutoModel.from_pretrained("https://example.com/models/private_model_name/pytorch_model.bin", config=config) Why not add the naming resolution capabilities that currently exist to all remote repos, so we can do: model = AutoModel.from_pretrained("https://example.com/models/private_model_name/") Can we add the same name resolution assumption for non-HF model repos?
I think the last command you type is supposed to work, as long as you pass use_auth_token=True to use your Hugging Face token (you need to be logged in via transformers-cli with an account having permission to access the private model though). cc @julien-c
0
huggingface
🤗Transformers
Gradient accumulation: should I duplicate data?
https://discuss.huggingface.co/t/gradient-accumulation-should-i-duplicate-data/3172
Hello! I am using gradient accumulation to simulate bigger batches when fine-tuning. However, I remember having seen some notebooks in the documentation where they would make N copies of the data, where N is the number of gradient accumulation steps. I do not understand why this should be done. Is this good practice? Why? Thank you
Could you link to the exact notebook where you have seen this?
0
huggingface
🤗Transformers
[Urgent] trainer.predict() and model.generate creates totally different predictions
https://discuss.huggingface.co/t/urgent-trainer-predict-and-model-generate-creates-totally-different-predictions/3426
Sorry for the URGENT tag but I have a deadline. The title is self-explanatory. The predictions from trainer.predict() are extremely bad whereas model.generate gives high-quality results. I want to use trainer.predict() because it is parallelized on the GPU. My test data set is huge, having 250k samples. I wonder if I am doing something wrong or the library contains an issue. Below you can find a minimal example of my code. # Load the model and tokenizer that were trained and saved. tokenizer = T5Tokenizer.from_pretrained(args.load_model) print("Loaded tokenizer from directory {}".format(args.load_model)) model = T5ForConditionalGeneration.from_pretrained(args.load_model) # create train and validation data sets, not relevant for testing train_dataset = create_dataset(train_inputs, train_labels, tokenizer, pad_truncate=True, max_length=128) val_dataset = create_dataset(val_inputs, val_labels, tokenizer, pad_truncate=True) # training arguments training_args = Seq2SeqTrainingArguments( output_dir=model_directory, num_train_epochs=args.epochs, per_device_train_batch_size=args.batch_size, per_device_eval_batch_size=args.batch_size, warmup_steps=500, weight_decay=args.weight_decay, logging_dir=model_directory, logging_steps=100, do_eval=True, evaluation_strategy='epoch', learning_rate=args.learning_rate, load_best_model_at_end=True, metric_for_best_model='eval_loss', greater_is_better=False, save_total_limit=args.epochs, eval_accumulation_steps=args.eval_acc_steps, # set this lower if testing or validation crashes disable_tqdm=True if args.load_model != '' else False, ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=val_dataset, optimizers=(torch.optim.Adam(params=model.parameters(), lr=args.learning_rate), None), tokenizer=tokenizer, ) class BugFixDataset(torch.utils.data.Dataset): def __init__(self, encodings, targets): self.encodings = encodings self.target_encodings = targets def __getitem__(self, index): item = {key: torch.tensor(val[index]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.target_encodings['input_ids'][index], dtype=torch.long) return item def __len__(self): return len(self.encodings['input_ids']) # create the test dataset test_warning_dataset = create_dataset(test_warning, test_warning_labels, tokenizer, pad_truncate=True, max_length=target_max_length) # then take the model's output --> logits --> argmax to obtain prediction ids output_ids = np.argmax(trainer.predict(test_dataset=test_warning_dataset, num_beams=5, max_length=target_max_length).predictions[0], axis=2) trainer_outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) model_generate_outputs = [] for i, code in enumerate(test_warning): input_ids = tokenizer.encode(code, truncation=True, return_tensors='pt').to(model.device) beam_outputs = model.generate(input_ids, max_length=target_max_length, num_beams=5, early_stopping=False, num_return_sequences=1) for pred in beam_outputs: x = tokenizer.decode(pred, skip_special_tokens=True) model_generate_outputs.append(x) model_generate_outputs and trainer_outputs are different. What is the issue? What am I doing wrong?
Hi @berkayberabi, to use Seq2SeqTrainer for prediction you should pass predict_with_generate=True to Seq2SeqTrainingArguments. The trainer only does generation when that argument is True; in that case the predictions returned by the predict method will contain the generated token ids. Also, if you want to do distributed evaluation, you could try this script: transformers/run_distributed_eval.py at master · huggingface/transformers · GitHub
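In other words, something along these lines, reusing the names from your snippet (argument values are illustrative):

from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir=model_directory,
    per_device_eval_batch_size=args.batch_size,
    predict_with_generate=True,  # makes predict() call generate() under the hood
)
trainer = Seq2SeqTrainer(model=model, args=training_args, tokenizer=tokenizer)

predictions = trainer.predict(test_dataset=test_warning_dataset, num_beams=5, max_length=target_max_length)
# predictions.predictions now holds generated token ids instead of logits
trainer_outputs = tokenizer.batch_decode(predictions.predictions, skip_special_tokens=True)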
0
huggingface
🤗Transformers
How to do unsupervised fine-tuning?
https://discuss.huggingface.co/t/how-to-do-unsupervised-fine-tuning/3429
I have a custom text dataset, which I want BERT to get acquainted with. My final goal is not to run any supervised task (it is actually to act as a starting point for getting sentence embeddings from S-BERT). I just want to continue doing the unsupervised training on my dataset. How do I do this? So far, I have come across two possible candidates in the documentation for this: BertForPreTraining (the self-explanatory name led me to this) and BertForMaskedLM (as used in this blog post). Can both of them be used for this purpose? Is one more attuned to my purpose? Have you previously tried to do something like this? Any additional suggestions would also be very helpful. Thank you
BertForPreTraining has two heads, one for masked language modeling and one for next-sentence prediction. This class should be used when you want to pre-train BERT as described in the paper, i.e. MLM + NSP. BertForMaskedLM is for MLM-only training, which is usually enough for continued pre-training on your own corpus.
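For the MLM-only route, a minimal sketch of continuing pre-training on your own text file (file names and hyperparameters are placeholders):

from transformers import (BertForMaskedLM, BertTokenizerFast, DataCollatorForLanguageModeling,
                          LineByLineTextDataset, Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="my_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-continued-pretraining", num_train_epochs=1),
    data_collator=collator,   # handles the random masking for MLM
    train_dataset=dataset,
)
trainer.train()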
0
huggingface
🤗Transformers
Efficient detokenization method
https://discuss.huggingface.co/t/efficient-detokenization-method/3421
The tokenizer has a call function and it accepts List[List[string]], so one can tokenize multiple samples at once efficiently. I am looking for a function that reverses this behavior. There are convert_ids_to_tokens() and convert_tokens_to_string(), but they only accept List[int] and List[string]. I do not want to iterate over the samples and convert them back one by one. Is there an efficient method for this? I am using T5.
There are the decode and batch_decode methods that could help.
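For example, with T5 as in the question (the sample sentences are made up):

from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
batch = tokenizer(["translate English to German: Hello there", "summarize: a long article about NLP"])
# batch_decode reverses the batched tokenization in a single call
texts = tokenizer.batch_decode(batch["input_ids"], skip_special_tokens=True)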
0
huggingface
🤗Transformers
How to Finetune Deberta Model on SQUAD dataset?
https://discuss.huggingface.co/t/how-to-finetune-deberta-model-on-squad-dataset/3391
Hi all, I am trying to fine-tune the DeBERTa model on SQuAD. Existing notebooks like this one use the Trainer and a fast tokenizer, but DeBERTa doesn't have fast-tokenizer support yet. How can I fine-tune it on SQuAD? I am also willing to implement a fast tokenizer for the DeBERTa model; can anyone point me to resources so I can get started with that? Here is my notebook for training DeBERTa (I am facing issues).
Hi @bhadresh-savani, as far as I can tell the problem seems to lie with your find_sublist_indices function, not with the availability of a fast tokenizer. One simple thing to try: can you pass a slice of examples to your convert_to_features function, e.g. convert_to_features(train_dataset[:3])? I'm not sure whether this will solve the problem, but perhaps your find_sublist_indices is expecting a list of lists, which is what you'll get from the slice. I also noticed that your convert_to_features function is quite different from the prepare_train_features in the tutorial - what happens if you try the latter with your tokenizer? If that doesn't work, then you might be able to use the old run_qa.py script that doesn't rely on fast tokenizers: transformers/examples/legacy/question-answering at master · huggingface/transformers · GitHub Lewis
0
huggingface
🤗Transformers
Saving check_points for run_mlm.py
https://discuss.huggingface.co/t/saving-check-points-for-run-mlm-py/3200
Hi friends, I am trying to train a RoBERTa model on a large corpus on a server with a time limit. Is there any way to save the model, say every 3000 steps, to keep a record of the training and resume it later? I really need it for the project… Thanks for helping.
You can set it in the training config: save_steps (int, optional, defaults to 500) – number of update steps between two checkpoint saves, i.e. "save_steps": 3000.
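For run_mlm.py that means passing the flag on the command line, e.g. as below (the other flags are placeholders). To pick up again later, one option is to point --model_name_or_path at the last saved checkpoint folder, which restores the weights; whether the optimizer state and step count are also restored depends on your transformers version:

python run_mlm.py \
    --model_name_or_path roberta-base \
    --train_file my_corpus.txt \
    --output_dir ./roberta-out \
    --do_train \
    --save_steps 3000 \
    --save_total_limit 5
# later: resume from ./roberta-out/checkpoint-XXXX by passing it as --model_name_or_path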
0
huggingface
🤗Transformers
Problem with a new Trainer in version 4.2.0
https://discuss.huggingface.co/t/problem-with-a-new-trainer-in-version-4-2-0/3363
I’m trying to instantiate a trainer like I did before in version 3.0.2: trainer = MyTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, callbacks=[EarlyStoppingCallback(3, 0.5)] ) where MyTrainer is: class MyTrainer(Trainer): def __init__( self, model: PreTrainedModel, args: TrainingArguments, data_collator: Optional[DataCollator] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, prediction_loss_only=False, tb_writer: Optional["SummaryWriter"] = None, callbacks: Optional[List[TrainerCallback]] = None, optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None) ): super().__init__(model, args, data_collator, train_dataset, eval_dataset, compute_metrics, prediction_loss_only, tb_writer, callbacks, optimizers) when I try to train the model train_result = trainer.train( model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None ) I get the following error TypeError: False is not a callable object The model_path is None as also in version 3.0.2, but here it doesn’t work anymore. The value of args.model_name_or_path is bert-base-uncased specified in the file run.sh. How can I do to solve this problem? Thanks!
Hi there! prediction_loss_only was deprecated in v3.x and has been removed in v4. In its place you have a model_init, that’s why you get this error. You should change your signature to match: def __init__( self, model: Union[PreTrainedModel, torch.nn.Module] = None, args: TrainingArguments = None, data_collator: Optional[DataCollator] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, tokenizer: Optional["PreTrainedTokenizerBase"] = None, model_init: Callable[[], PreTrainedModel] = None, compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, callbacks: Optional[List[TrainerCallback]] = None, optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), ): In general, when passing keyword arguments from one method to another, you should always use the syntax name=value (so here data_collator=data_collator, train_dataset=train_dataset…) because functions can have some keyword arguments added from one version to another. If you rely just on the order of their arguments like here, you can get some mismatches that create errors down the line.
0
huggingface
🤗Transformers
Fine-tune BERT for Masked Language Modeling
https://discuss.huggingface.co/t/fine-tune-bert-for-masked-language-modeling/1275
Hello, I have used a pre-trained BERT model from Hugging Face Transformers for a project. I would like to know how to "fine-tune" BERT for masked language modeling for a task like spelling correction. The links "https://github.com/huggingface/transformers/tree/master/examples/lm_finetuning" and "https://github.com/huggingface/transformers/blob/master/examples/lm_finetuning/pregenerate_training_data.py" are not found, which seemed to be great resources. I would also like to know the dataset format (what kind of inputs and labels are to be given to the model) that BertForMaskedLM requires for training. I would be grateful if anyone could help me in this regard. Thanks, Nes
Interested in this too…
0
huggingface
🤗Transformers
Training BERT from scratch with Wikipedia + Book Corpus Dataset
https://discuss.huggingface.co/t/training-bert-from-scratch-with-wikipedia-book-corpus-dataset/3252
Hello, everyone! I am a person who works in a different field of ML and is not very familiar with NLP, hence I am seeking your help! I want to pre-train the standard BERT model with the Wikipedia and BookCorpus datasets (which I think is the standard practice!) for a part of my research work. I am following the Hugging Face guide to pre-train a model from scratch: https://huggingface.co/blog/how-to-train Now, since they are training a different model on a different language dataset, in the article they mention: "We recommend training a byte-level BPE (rather than let's say, a WordPiece tokenizer like BERT)". So, in my case, should I go for a WordPiece tokenizer for BERT pretraining? (I have a slight idea about tokenizers but I am not learned enough to understand the ramifications of this.) Apart from this, the only other deviation from the article I see is the selection of the dataset; I understand Hugging Face has both the Wikipedia and the BookCorpus datasets. 2. So, how should I go about training? Should I train the model on Wikipedia first and then on BookCorpus? Or should I somehow concatenate them into a larger singular dataset? Is there anything else I should keep in mind? I would really appreciate it if someone could point me to materials/code for pretraining BERT. Any other tips/suggestions would be highly appreciated! Thanks a lot!
"Or should I somehow concatenate them into a larger singular dataset?" You would benefit from a bigger dataset. "Should I go for a WordPiece tokenizer for BERT pretraining?" BPE and WordPiece have a lot in common: https://huggingface.co/transformers/tokenizer_summary.html BERT is trained with WordPiece, so it is natural to choose WordPiece in this case.
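If you go the WordPiece route, a sketch of training one with the tokenizers library (file names and vocabulary size are placeholders):

import os
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(
    files=["wikipedia.txt", "bookcorpus.txt"],  # your prepared text files
    vocab_size=30_522,                          # the size used by bert-base
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
os.makedirs("bert-from-scratch-tokenizer", exist_ok=True)
tokenizer.save_model("bert-from-scratch-tokenizer")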
0
huggingface
🤗Transformers
Inference with DistilBertForQuestionAnswering
https://discuss.huggingface.co/t/inference-with-distilbertforquestionanswering/3308
I have fine tuned a DistilBert model for question answering using my custom data. I want to know how can I use that model to find the answers for input questions ?
Seems like a duplicate of Predicting answers using DistilBertForQuestionAnswering; removing it, as I have answered it there.
0
huggingface
🤗Transformers
Question on language modeling preprocessing
https://discuss.huggingface.co/t/question-on-language-modeling-preprocessing/3311
I am trying to run the language modeling script run_mlm.py, but I am facing some storage issues during the preprocessing of the input text data. The main issue is that the preprocessed data by default gets saved in the .cache/huggingface/datasets folder, and my .cache partition is pretty small. Is it possible to redirect the preprocessing output to a different folder? Thanks a lot for your help.
You can set an environment variable to control where the cache goes and change that default. For all HF libraries, the variable is "HF_HOME".
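For example, before launching the script (the paths are placeholders):

export HF_HOME=/big/disk/hf_cache              # all HF libraries cache under this path
# or, to move only the datasets cache used during preprocessing:
export HF_DATASETS_CACHE=/big/disk/datasets_cache
python run_mlm.py --model_name_or_path bert-base-uncased --train_file my_corpus.txt --output_dir out --do_train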
0
huggingface
🤗Transformers
Masked Language Modeling (MLM) using TFBertForMaskedLM (Tensorflow)
https://discuss.huggingface.co/t/masked-language-modeling-mlm-using-tfbertformaskedlm-tensorflow/528
Where can I find a complete example on how to fine tune a model using Tensorflow for TFBertForMaskedLM for custom text dataset using transformers and Tensorflow instead of PyTorch. I have an working example using PyTorch but I need an example with Tensorflow. Can someone guide me here?
Looking for the same thing! Have you found a good example by now?
0
huggingface
🤗Transformers
BertForMaskedLM train
https://discuss.huggingface.co/t/bertformaskedlm-train/2686
I have a question When training using BertForMaskedLM, is the train data as below correct? token2idx <pad> : 0, <mask>: 1, <cls>:2, <sep>:3 max len : 8 input token <cls> hello i <mask> cats <sep> input ids [2, 34,45,1,56,3,0,0] attention_mask [1,1,1,1,1,1,0,0] labels [-100,-100,-100,64,-100,-100,-100,-100] I wonder if I should also assign -100 to labels for padding token.
Hi, Were you able to figure it out? I’m also trying to do the same thing. Thanks, Ayala
0
huggingface
🤗Transformers
Using time series for SequenceClassification models
https://discuss.huggingface.co/t/using-time-series-for-sequenceclassification-models/3289
I'm thinking of using Transformer models to classify other sequential data, namely time series data. My idea is to feed fixed-size sequences of time series values as input into a BERT-like model with a classification head. Since using pre-trained models probably makes no sense, I would train it from scratch. Since time series values are already numerical, am I right to think that tokenization isn't needed? How can I ensure that a BERT-like model even understands the input without using the corresponding tokenizer? Is there anything else to know when wanting to control the classification head layers, apart from passing num_values? What steps would I undergo for this task of time series classification? Any tips? I'm grateful for any ideas. Perhaps someone already knows a repository/model? That would be extremely helpful.
You might find some pointers in this thread: Using transformers (BERT, RoBERTa) without embedding layer.
0
huggingface
🤗Transformers
Checkpointing in each step
https://discuss.huggingface.co/t/checkpointing-in-each-step/3293
Hi, I use finetune_trainer.py. It saves the best model at checkpoints given an evaluation metric, so sometimes it calls _save_checkpoint, but if the metric is not higher than the best saved one, the Hugging Face code does not save the checkpoint. What I need is to write a callback that saves model + optimizer + scheduler, to be called right after each time it calls _save_checkpoint, so as to keep a copy of the last updated model in the folder. I am not sure how to access the optimizer/model/scheduler in the callbacks. I appreciate your input on this, thanks.
You could use the save_steps arg to specify the number of update steps after which to save a checkpoint. If you use --evaluation_strategy steps --eval_steps 50 --save_steps 50 then it will evaluate and save a checkpoint every 50 steps.
0
huggingface
🤗Transformers
How to use Seq2SeqTrainer (Seq2SeqDataCollator) in v4.2.1
https://discuss.huggingface.co/t/how-to-use-seq2seqtrainer-seq2seqdatacollator-in-v4-2-1/3243
Hello, I’d like to update my training script using Seq2SeqTrainer to match the newest version, v4.2.1. My code worked with v3.5.1. However, when I update it, it doesn’t work with v4.2.1. It is said that ValueError occurs. File "/****/seq2seq_trainer.py", line 193, in compute_loss loss, _ = self._compute_loss(model, inputs, labels) File "/****/seq2seq_trainer.py", line 180, in _compute_loss loss = self.loss_fn(logits.view(-1, logits.shape[-1]), labels.view(-1)) ValueError: Expected input batch_size (464) to match target batch_size (480). I tried print debug, inserted: def _compute_loss(self, model, inputs, labels): if self.args.label_smoothing == 0: if self.data_args is not None and self.data_args.ignore_pad_token_for_loss: # force training to ignore pad token logits = model(**inputs, use_cache=False)[0] print(inputs["input_ids"].shape) print(logits.shape) print(labels.shape) loss = self.loss_fn(logits.view(-1, logits.shape[-1]), labels.view(-1)) and got: torch.Size([8, 58]) torch.Size([8, 58, 50266]) torch.Size([8, 60]) (I added my own special token, so the embedding size becomes 50266) Am I forgetting to do the necessary processing when updating the file to fit the new version? In the Seq2SeqDataCollator, it seems that shift_tokens_right, which was imported from transformers.models.bart.modeling_bart is no longer needed. I update my own DataCollator on the basis of this new Seq2SeqDataCollator, and I think something I’m misunderstanding is related to here. Thank you in advance.
I lowered the version from 4.2.1 to 4.1.1 and reverted to the version that has shift_tokens_right in Seq2SeqDataCollator. I revert my own DataCollator to the old version, then, apparently, the above problem no longer occurs. Are there any tips to make my own DataCollator for Seq2SeqTrainer in v4.2.1? Thank you.
0
huggingface
🤗Transformers
LM example run_clm.py isn’t distributing data across multiple GPUs as expected
https://discuss.huggingface.co/t/lm-example-run-clm-py-isnt-distributing-data-across-multiple-gpus-as-expected/3239
EDIT: I think the missing piece is the -m torch.distributed.launch flag from the terminal command. I will test this when I get a chance and update the thread if that's the fix. I am fine-tuning GPT-2 using examples/language-modeling/run_clm.py. It seems like the Trainer class instantiated in it will by default wrap the model in Distributed Data Parallel and spread it across the 4 GPUs that I am providing it when I include CUDA_VISIBLE_DEVICES=0,1,2,3 at call time. However, when I run nvidia-smi only gpu:0 is being used. The first line the script prints is 01/16/2021 02:39:40 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 0 distributed training: False, 16-bits training: False I understand that for the model using DDP, n_gpu should be 1? I think the print of n_gpu=0 is just a result of n_gpu not actually being available as a flag for configuration. The next line I get is 01/16/2021 02:39:40 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/data/saxon/nlp_abs_full/test, overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=False, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=5, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Jan16_02-39-40_andrew.cs, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=3, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/data/saxon/nlp_abs_full/test, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, _n_gpu=4) It seems like what's happening here is that somewhere in the process of setting up the trainer n_gpu is defaulting to 4 (the number that I have), which I think is somehow interrupting the process that is supposed to happen where the trainer wraps the model in DDP. To try fixing this I added the line trainer._n_gpu = 1 to force the value to the argument that it should be for DDP according to the documentation. However, this does not fix the problem, so I'm stuck. I think this might be a bug in the example script, because the expected behavior, if I understand right, is that training should automatically use DDP when more than 1 GPU is available. Am I doing something wrong? The full command I'm running is TRANSFORMERS_CACHE=/data/saxon/cache CUDA_VISIBLE_DEVICES=0,1,2,3 python run_clm.py --model_name_or_path gpt2 --train_file /data/saxon/nlp_abs_full/ffl.txt --do_train --output_dir /data/saxon/nlp_abs_full/test --per_device_train_batch_size 5 --cache_dir /data/saxon/cache --save_total_limit 3 --save_steps 500 --overwrite_output_dir
If you want to use DDP (distributed data parallel) you do need to launch the script with python -m torch.distributed.launch.
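Concretely, keeping your other flags as they are, something like this (--nproc_per_node should match the number of GPUs you expose):

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 \
    run_clm.py --model_name_or_path gpt2 --train_file /data/saxon/nlp_abs_full/ffl.txt \
    --do_train --output_dir /data/saxon/nlp_abs_full/test --per_device_train_batch_size 5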
0
huggingface
🤗Transformers
Xlm-Roberta Tokenizing
https://discuss.huggingface.co/t/xlm-roberta-tokenizing/3267
Hello everyone. How would you solve the following problem: I have a word and a sentence containing this word, and I also have the position of this word. How would you, universally for all languages, find the position of a tokenized word in a tokenized sentence? I really struggle with this one, mainly for Chinese.
cc @Narsil, @anthony
0
huggingface
🤗Transformers
Distilbart-mnli-12-9
https://discuss.huggingface.co/t/distilbart-mnli-12-9/3064
@valhalla In distilbart, can i identify the weight of the words in the sequence associated to the candidate label/class. I want to narrow down on the reason for the model assigning a particular score to a given class. For example if “This is awesome anyone thinking to buy should blindly go for it” is assigned a positive label score of 0.99, then is it possible to identify the words in the sequence which carry the most weight/ contribute the most to the positive label, ( in this sequence the words/phrases - awesome, blindly go for it ) and the relative weight(cosine similarity/distance) of those words to the identified class(i.e positive). Can this be done by accessing or manipulating the end layers of the model or by any other method? Thank you for your help in advance!
@joeddav might have some ideas here.
0
huggingface
🤗Transformers
Can t5 transformer can be used to summarize conversations
https://discuss.huggingface.co/t/can-t5-transformer-can-be-used-to-summarize-conversations/3080
I have tons of transcripts of conversational data (e.g. agent and customer conversations in a call center). Is the T5 transformer capable of summarizing this type of conversational data and giving me a report on what the customer's and agent's intentions are?
Absolutely. Train on your own data.
0
huggingface
🤗Transformers
Host gpt2 model in a browser
https://discuss.huggingface.co/t/host-gpt2-model-in-a-browser/3231
Is there any way to host gpt2 model in a browser? WASM maybe, even better?
I don't think so. The only way is to use a TFLite model, which is not easy to convert for auto-regressive tasks.
0
huggingface
🤗Transformers
Seq2Seq Encoder Decoder model Tensorflow
https://discuss.huggingface.co/t/seq2seq-encoder-decoder-model-tensorflow/3272
Can anyone help me find an encoder-decoder implementation in TensorFlow? All HF models seem to be in PyTorch.
We do have the most important and most-used seq2seq models like T5 and BART in TensorFlow: https://huggingface.co/transformers/model_doc/t5.html#tft5forconditionalgeneration https://huggingface.co/transformers/model_doc/bart.html#tfbartforconditionalgeneration https://huggingface.co/transformers/model_doc/led.html#tfledmodel
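A small usage sketch for the TensorFlow T5 class (checkpoint name and input text are illustrative):

from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello, how are you?", return_tensors="tf")
outputs = model.generate(inputs["input_ids"])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))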
0
huggingface
🤗Transformers
How to add RNN layer on top of Huggingface BERT model
https://discuss.huggingface.co/t/how-to-add-rnn-layer-on-top-of-huggingface-bert-model/3256
I am working on a binary classification task and would like to try adding an RNN layer on top of the last hidden layer of a Hugging Face BERT PyTorch model. How can I extract the last hidden layer (layer -1) and connect it to an LSTM layer? tokenizer = BertTokenizer.from_pretrained(model_path) # Load BertForSequenceClassification, the pretrained BERT model with a single linear classification layer on top. model = BertForSequenceClassification.from_pretrained(model_path, num_labels=len(lab2ind))
We can use BertModel instead of BertForSequenceClassification (https://huggingface.co/transformers/model_doc/bert.html#bertmodel) and feed its hidden-state outputs to the LSTM.
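A rough sketch of that idea in PyTorch (hidden sizes and the choice of pooling the last LSTM step are my assumptions):

import torch.nn as nn
from transformers import BertModel

class BertLSTMClassifier(nn.Module):
    def __init__(self, model_path, num_labels, lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_path)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask=None):
        # last_hidden_state has shape (batch, seq_len, hidden_size)
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(hidden)
        return self.classifier(lstm_out[:, -1])  # last LSTM step as the sequence summary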
0
huggingface
🤗Transformers
How can we customize pipeline?
https://discuss.huggingface.co/t/how-can-we-customize-pipeline/3226
How does the pipeline handle custom pre-processing or post-processing? For example, a different doc_stride or max_seq_length in question answering?
I don't think that is possible. But let's hear from someone more experienced.
0
huggingface
🤗Transformers
LM from Scratch for Tensorflow
https://discuss.huggingface.co/t/lm-from-scratch-for-tensorflow/3219
Hi, I was following this tutorial to train a LM from scratch: How to train a new language model from scratch using Transformers and Tokenizers. The result is a PyTorch model, though I need one for TensorFlow. Is there an easy way to convert it? I tried to modify the training code by using TFTrainer and TFBertForModelLM instead, but TFTrainer is causing trouble with the data_collator and LineByLineTextDataset objects. When initializing the trainer with the data collator I get the error: init() got an unexpected keyword argument 'data_collator'. When calling trainer.train() (without collator) I receive the error: LineByLineTextDataset object has no attribute '_variant_tensor'.
Hi! Here's a nice example of custom TF MLM learning on XLM-Roberta with Kaggle TPU: https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm
0
huggingface
🤗Transformers
XLNetForSequenceClassification
https://discuss.huggingface.co/t/xlnetforsequenceclassification/2765
Hi all, I am using model = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased') for text classification. On what dataset is this XLNetForSequenceClassification model pretrained? Thanks in advance.
Hi @sru, the line model = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased') instantiates XLNet for a classification task by loading the pre-trained xlnet-base-cased language model and adding a classification head (a linear layer) on top of it. Note that it is not trained for classification; the classification head is randomly initialised. You should fine-tune it on your dataset for classification.
0
huggingface
🤗Transformers
Training models for smaller epochs and then continue trianing
https://discuss.huggingface.co/t/training-models-for-smaller-epochs-and-then-continue-trianing/3153
Hi, I am under limited compute hours: I need to train the models for 3 hours and then restart from where training stopped. I am using finetune_trainer.py. Could you tell me how I can split a run of max_steps X into smaller chunks of, for instance, max_steps = X/1000 but still get the same results? I am using evaluation_strategy = steps. How can I save the current model in addition to the best model at each saving step, and when restarting, how can I skip the steps that are already done? @sgugger thanks
Hello @julia, welcome to the forum! I think you have created two topics for the same purpose, so I will answer here. If I understand correctly, you are trying to save a checkpoint every time you do an evaluation. This can be done with the finetune_trainer.py script by changing the parameter save_steps to be the same as eval_steps. For example, if you want to evaluate and save a checkpoint every 1k steps, you call python finetune_trainer.py --evaluation_strategy steps --eval_steps 1000 --save_steps 1000 Hope this helps.
0