Dataset columns: docs (string, 4 classes) · category (string, 3–31 chars) · thread (string, 7–255 chars) · href (string, 42–278 chars) · question (string, 0–30.3k chars) · context (string, 0–24.9k chars) · marked (int64, 0–1)
huggingface
🤗Transformers
Optimising performance non-standard systems
https://discuss.huggingface.co/t/optimising-performance-non-standard-systems/450
I am experimenting with deploying a question answering system using Hugging Face pretrained models. I am finding it difficult to get consistent results and performance. The system performs inference over k items retrieved by Elasticsearch (k~=50) and performs answer extraction on them for a given question, so for each query we perform inference over 50 (question, text) pairs. I have 3 versions: one simply iterating over the 50 pairs, one using the HF pipeline, and one where I load all pairs into a 3D tensor and compute simultaneously. Some interesting findings (all on a K80 unless otherwise stated): My expectation was that the 3D batch tensor would be fastest - the inference phase is, by a factor of x100 (taking 0.07s), but it takes so long to move the tensors to CPU for post-processing that this overall takes around 11s. HF pipelines is by far the slowest solution, taking around twice the time of this solution (22s). Whilst the pipeline accepts batches, it has no optimisation for this and simply loops through the items. Whilst I haven't done in-depth profiling of the pipeline code, it seems that the additional time likely derives from converting to SQuAD features. The surprise winner is my simple loop through the pairs, which comes in at around 7s. A few questions: Given that the model inference is so fast in batch mode, does anyone have any tips for solving the slow tensor placement? Does HF intend to have some more optimal pipelines for production use? Am I using this incorrectly somehow? My base assumption was that the HF implementation would be best (and would support proper batching). Any other pointers for a better solution design? Any good tools for detailed profiling of ML/torch systems? It has been a really slow, difficult process finding these bottlenecks. Many thanks! Justin
I am also very interested in this topic as I am having issues with performance using pipelines as well, so bumping in hopes of seeing some expert reaction in this regard! If not the pipeline or other methods, what is the best strategy to make sure you have maximum performance running a model continuously?
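Not from the thread, but for readers with the same bottleneck, here is a minimal sketch of one way to cut the device-to-host transfer cost described above: run the ~50 pairs as one padded batch, reduce the logits on the GPU, and move only the small index tensors to the CPU for post-processing. The model name and the Elasticsearch results are placeholders.

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "deepset/roberta-base-squad2"  # placeholder extractive-QA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name).to("cuda").eval()

question = "Who founded the company?"
passages = ["passage text ..."] * 50  # the ~50 texts returned by Elasticsearch

# One padded batch of (question, passage) pairs instead of a Python loop
enc = tokenizer([question] * len(passages), passages,
                truncation=True, padding=True, max_length=384,
                return_tensors="pt").to("cuda")

with torch.no_grad():
    out = model(**enc)

# Reduce on the GPU; only tiny (50,)-shaped tensors are copied to the CPU
start_idx = out.start_logits.argmax(dim=-1).cpu()
end_idx = out.end_logits.argmax(dim=-1).cpu()
scores = (out.start_logits.max(dim=-1).values + out.end_logits.max(dim=-1).values).cpu()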
0
huggingface
🤗Transformers
Transfer learning
https://discuss.huggingface.co/t/transfer-learning/12153
Hello everyone, I have a quick question. I trained a masked language model starting from “AutoModelForMaskedLM.from_pretrained” (like continual pre-training). Then I save that model and I want to use it as the starting point for a text classification task, so I use AutoModel.from_pretrained to load the model architecture and finally I use a load_from_ckpt function to copy the weights from the pre-trained model into my newly instantiated model. However, I see that the naming of the layers is slightly different between the two models: one has the prefix “bert” and the other does not. This causes a conflict when loading the weights since I check the names of the layers. What is the best way to address this situation? What I am doing now is to use AutoModelForMaskedLM.from_pretrained also to load the architecture for the fine-tuning, even though I am not training for MLM. Many thanks!
@nielsr or @lhoestq any thoughts on this will be more than welcome!
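For readers hitting the same prefix mismatch, a minimal sketch (the paths are placeholders, not from the thread) of the usual way to avoid copying weights by hand: save the continued-pretraining model with save_pretrained and reload it with the task-specific Auto class, which maps the shared encoder weights (including the "bert." prefix) and only initialises the new classification head.

from transformers import AutoModelForMaskedLM, AutoModelForSequenceClassification

# continual pre-training phase
mlm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
# ... train with the MLM objective ...
mlm_model.save_pretrained("my-domain-bert")  # hypothetical output directory

# fine-tuning phase: reuse the encoder, get a fresh classification head
clf_model = AutoModelForSequenceClassification.from_pretrained("my-domain-bert", num_labels=2)
# from_pretrained reports which weights were newly initialised (the classifier)
# and which were loaded from the checkpoint (the "bert.*" encoder weights).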
0
huggingface
🤗Transformers
RuntimeError when running on Colab GPU
https://discuss.huggingface.co/t/runtimeerror-when-running-on-colab-gpu/12061
Hello, I am trying to train a model for token classification (NER), more or less following the example in the Hugging Face course chapter on token classification. When I attempt to instantiate the training arguments using transformers.TrainingArguments, as below: from transformers import TrainingArguments args = TrainingArguments( "bert-finetuned-ner", evaluation_strategy="epoch", save_strategy="epoch", learning_rate=2e-5, num_train_epochs=3, weight_decay=0.01, push_to_hub=True, ) I get the error: RuntimeError: Failed to import transformers.training_args because of the following error (look up to see its traceback): /usr/local/lib/python3.7/dist-packages/_XLAC.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN2at13_foreach_erf_EN3c108ArrayRefINS_6TensorEEE Why is this? Is this a bug for transformers on Colab GPUs?
Can you provide a Colab to reproduce this error? What’s your Transformers version?
0
huggingface
🤗Transformers
Continual pre-training vs. Fine-tuning a language model with MLM
https://discuss.huggingface.co/t/continual-pre-training-vs-fine-tuning-a-language-model-with-mlm/8529
I have some custom data I want to use to further pre-train the BERT model. I’ve tried the two following approaches so far: Starting with a pre-trained BERT checkpoint and continuing the pre-training with Masked Language Modeling (MLM) + Next Sentence Prediction (NSP) heads (e.g. using the BertForPreTraining model) Starting with a pre-trained BERT model with the MLM objective only (e.g. using the BertForMaskedLM model, assuming we don’t need NSP for the pre-training part.) But I’m still confused about whether using either BertForPreTraining or BertForMaskedLM actually does continual pre-training on BERT, or whether these are just two models for fine-tuning that use MLM+NSP and MLM, respectively. Is there even any difference between fine-tuning BERT with MLM+NSP and continually pre-training it using these two heads, or is this something we need to test?
I have a similar question here. I was following this tutorial, but am still quite confused: when we call BertForMaskedLM, how many of the weights are retained from the original BERT model? Please let me know if you figure it out! Many thanks in advance.
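As a small sanity check (not from the thread): both classes load the full pretrained encoder when created with from_pretrained; they differ only in the heads stacked on top (BertForPreTraining additionally has the NSP head and pooler), so continuing to train either one on domain text is continued pre-training.

import torch
from transformers import BertForMaskedLM, BertForPreTraining

mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")     # encoder + MLM head
pre = BertForPreTraining.from_pretrained("bert-base-uncased")  # encoder + MLM head + NSP head

# Both start from the same pretrained weights, e.g. the word embeddings are identical
print(torch.equal(mlm.bert.embeddings.word_embeddings.weight,
                  pre.bert.embeddings.word_embeddings.weight))  # True

# Had you used .from_config(...) instead, the weights would be randomly initialised
# and no pretrained knowledge would be retained.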
0
huggingface
🤗Transformers
How to preserve Html when processing(paraphrasing)
https://discuss.huggingface.co/t/how-to-preserve-html-when-processing-paraphrasing/12096
Is there a way to preserve the HTML when using the accelerated inference? I am trying to put strings split on HTML elements through the PEGASUS paraphraser. The problem is that the HTML is removed in the output. How would you go about preserving the HTML?
Parse the HTML while keeping the indexes of each part, then make the prediction on the text parts. After getting the model output, place the results back at those indexes. This won’t work without valid tags, though.
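A rough sketch of that suggestion using BeautifulSoup (the paraphrase() function below is a placeholder for whatever PEGASUS paraphrasing call is being used): paraphrase only the text nodes and leave the tags in place.

from bs4 import BeautifulSoup

def paraphrase(text):
    # placeholder for the PEGASUS paraphraser call
    return text.upper()

html = "<p>First sentence.</p><p>Second <b>bold</b> sentence.</p>"
soup = BeautifulSoup(html, "html.parser")

# Replace each text node in place; the surrounding tags are untouched
for node in soup.find_all(string=True):
    if node.strip():
        node.replace_with(paraphrase(str(node)))

print(str(soup))  # tags preserved, only the text content changed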
0
huggingface
🤗Transformers
Using Seq2SeqTrainer to eval during training?
https://discuss.huggingface.co/t/using-seq2seqtrainer-to-eval-during-training/11653
I am following the instructions for run_summarization.py in the examples/summarization/ folder. To also evaluate the trained model during training, I set eval_strategy = "steps" and the bash file is: CUDA_VISIBLE_DEVICES=4,5,6,7 python -m torch.distributed.launch \ --nproc_per_node 4 \ --use_env \ $(pwd)/run_summarization.py \ --model_name_or_path /path/to/bart-base \ --dataset_name cnn_dailymail \ --dataset_config_name 3.0.0 \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 16 \ --num_train_epochs 5 \ --do_train \ --do_eval \ --predict_with_generate \ --learning_rate 3e-5 \ --label_smoothing_factor 0.1 \ --weight_decay 0.01 \ --max_grad_norm 1.0 \ --logging_strategy 'steps' \ --logging_steps 1000 \ --save_strategy 'steps' \ --save_steps 5000 \ --save_total_limit 3 \ --evaluation_strategy 'steps' \ --eval_steps 5000 \ --fp16 \ --output_dir /path/to/output_dir \ But when evaluating during training, the eval results do not look good. max_length & num_beams are set to the default values in the config file of BART (max_length=20, num_beams=4). Functions relevant to the problem might be: train -- trainer.py, _maybe_log_save_evaluate -- trainer.py, evaluate -- trainer_seq2seq.py, evaluate -- trainer.py, evaluation_loop -- trainer.py, prediction_step -- trainer_seq2seq.py. Maybe in trainer_seq2seq.py, Seq2SeqTrainer should override the _maybe_log_save_evaluate function to explicitly provide the max_length & num_beams hyper-parameters? Another problem is in run_summarization_no_trainer.py: at the linked line 293 it doesn’t provide fp16, so at the linked line 444 accelerator.use_fp16 must be False. I guess fp16 should be added to the parser as a hyper-parameter?
The same question applies to the Trainer as well.
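One workaround consistent with the diagnosis above (a hedged sketch, not an official fix): subclass Seq2SeqTrainer so that the periodic evaluation triggered by evaluation_strategy uses your own generation settings instead of the model-config defaults. Recent transformers versions also expose generation_max_length and generation_num_beams on Seq2SeqTrainingArguments, so check your version before patching anything. The values 142 and 4 below are just examples.

from transformers import Seq2SeqTrainer

class GenEvalSeq2SeqTrainer(Seq2SeqTrainer):
    def evaluate(self, eval_dataset=None, ignore_keys=None,
                 metric_key_prefix="eval", **gen_kwargs):
        # force the generation hyper-parameters used for mid-training eval
        gen_kwargs.setdefault("max_length", 142)
        gen_kwargs.setdefault("num_beams", 4)
        return super().evaluate(eval_dataset, ignore_keys=ignore_keys,
                                metric_key_prefix=metric_key_prefix, **gen_kwargs)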
0
huggingface
🤗Transformers
How to get the predicted labels per epoch or step for the huggingface.transformers Trainer?
https://discuss.huggingface.co/t/how-to-get-the-predicted-labels-per-epoch-or-step-for-the-huggingface-transformers-trainer/12078
Hello, I’m new to Hugging Face and I have a question regarding the huggingface.transformers Trainer class. How do we get the predicted labels per epoch or step for the huggingface.transformers Trainer? Thank you.
I think you need to define a compute_metrics function which gets your predictions and returns the accuracy. from sklearn.metrics import accuracy_score, precision_recall_fscore_support def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds) precision_ma, recall_ma, f1_ma, _ = precision_recall_fscore_support(labels, preds, average='macro') precision_mi, recall_mi, f1_mi, _ = precision_recall_fscore_support(labels, preds, average='micro') acc = accuracy_score(labels, preds) return {'accuracy': acc, 'f1': f1} and pass it to the Trainer via its compute_metrics argument.
0
huggingface
🤗Transformers
Big `generate()` refactor
https://discuss.huggingface.co/t/big-generate-refactor/1857
Soon we will merge a big refactor of the generate() method: https://github.com/huggingface/transformers/pull/6949 All transformer models that have a language model head rely on the generate() method, e.g. Bart, T5, Marian, ProphetNet for summarization, translation, … GPT2, XLNet for open-ended text generation. The generate() method has become hard to read and quite difficult to maintain. The goal of the refactor is thus as follows: Remove the _no_beam_search_generation and _beam_search_generation methods and replace them with the four different generation methods corresponding to the four different types of generation: greedy_search (corresponds to num_beams = 1, do_sample=False), sample (corresponds to num_beams = 1, do_sample=True), beam_search (corresponds to num_beams > 1, do_sample = False), and beam_sample (corresponds to num_beams > 1, do_sample = True). The following philosophy has been adopted here: the original generate() method is kept at 100% backwards compatibility (ping me on github under @patrickvonplaten, if your specific use case has nevertheless been broken by this PR). In addition, each of the four specific generation methods can be used directly. The specific methods are as “bare-bone” as possible, meaning that they don’t contain any magic pre-processing of input tensors. E.g. generate() automatically runs the input_ids through the encoder if the model is an encoder-decoder model vs. for each specific generate method they have to be added as encoder_outputs to the model_kwargs; input_ids are automatically created in case they are empty or automatically expanded for num_beams in generate() vs. this has to be done manually for each specific generate method. The reason behind this design is that it will give the user much more flexibility for specific use cases of generate() and improves maintainability and readability. The user should not be limited in any way when directly using the specific generate methods. It should therefore pave the way to allow backprop through generate, make beam generation much easier, and make it easier to write “higher level” generate functions as is done for RAG… For more information on this design please read the docs and look into the examples of greedy_search, sample, beam_search and beam_sample. All of the generate parameters that can be used to tweak the logits distribution for better generation results, e.g. no_repeat_ngram_size, min_length, … are now defined as separate classes that are added to a LogitsProcessorList. This has the following advantages: a) better readability b) much easier to test these functions c) easier to add new logits distribution warpers. A huge thanks goes to https://github.com/turtlesoupy who had the original idea of this design in this PR: https://github.com/huggingface/transformers/pull/5420 Move all beam_search-relevant code into its own generation_beam_search.py file and speed up beam search. Beam search has gained more and more in importance thanks to many new and improved seq2seq models. This PR moves the very difficult to understand beam search code into its own file and makes sure that the beam_search generate function is easier to understand this way. Additionally, all Python list operations are now replaced by torch.tensor operations, which led to a 5-10% speed-up for beam_search generation. This change improves speed, readability and also maintainability. New beam search algorithms, such as https://arxiv.org/abs/2007.03909, should now be easier to add to the library.
Tests have been refactored and new, more aggressive tests have been added. The tests can also be very helpful to understand how each of the methods works exactly. Check out test_generation_utils.py, test_generation_beam_search and test_generation_logits_process. More docstrings, especially on beam search and logits processors, to make generate more accessible and understandable to the user. TODO: Do the same refactor for TF; check the possibility of carrying gradients through generate; add GenerationOutputs similar to ModelOutputs that allows returning attention outputs and hidden states.
Hi Patrick @patrickvonplaten, thanks for your great work!! I have a question. I have now almost finished translating TFDPR, and am currently also working on TFRag (I already made TFRagModel work like the PyTorch version, and I am investigating TFRagTokenForGeneration right now). Since the current code of RagTokenForGeneration involves both _no_beam_search_generation and _beam_search_generation, regarding the refactor, I think I should just go ahead with the current version and fix them later for the refactor. Or would you suggest that I wait for the PyTorch refactor, or wait for both the PyTorch & TF refactors?
0
huggingface
🤗Transformers
Beam search (FlaxT5) generates PAD tokens mid generation
https://discuss.huggingface.co/t/beam-search-flaxt5-generates-pad-tokens-mid-generation/12245
Hi, When I try to use beam search (num_beams > 1) with FlaxT5ForConditionalGeneration, the model generates PAD tokens randomly in the middle of the generation and the output is not usually complete. Is this a known issue? I have seen the FlaxMarianMTModel overrides _adapt_logits_for_beam_search method to fix a similar issue. Is something similar required for FlaxT5ForConditionalGeneration as well? If someone can guide me, I can look at implementing it and getting a PR up. Thank you!
I have also encountered this issue recently. I used the official t5_summarization_flax.py, but ran it on an internal dataset. I get good results using greedy search, but much worse results when using num_beams > 1. On some examples it generates the whole sequence, but on others it starts generating pad tokens mid-sequence, without outputting an EOS token. I tried monkeypatching the MarianMT _adapt_logits_for_beam_search onto the FlaxT5Model, as the method description seemed like it was implemented to solve this type of issue, but that didn’t seem to change the outputs for T5 at all. Any thoughts on why this might be happening, or how we could resolve it for Flax T5 beam search @patrickvonplaten / @patil-suraj?
0
huggingface
🤗Transformers
How to generate a sequence using inputs_embeds instead of input_ids?
https://discuss.huggingface.co/t/how-to-generate-a-sequence-using-inputs-embeds-instead-of-input-ids/4145
Hello, I am struggling with generating a sequence of tokens using model.generate() with inputs_embeds. For my research, I have to use inputs_embeds (word embedding vectors) instead of input_ids (token indices) as the input to the GPT2 model. I want to employ model.generate(), which is a convenient tool for generating a sequence of tokens, but there is no argument for inputs_embeds. I tried to edit "transformers.generation_utils", but it was not easy to figure out which lines I should change. Is there any way I can easily generate tokens with the default settings for hyper-parameters as in model.generate()? If there is any idea, please help me.
Were you able to figure something out for this? My needs are similar. I’m using the BART model.
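Since the follow-up mentions BART: for encoder-decoder models, one workaround (a sketch under the assumption of a reasonably recent transformers version; newer releases also accept inputs_embeds in generate() directly) is to run the encoder yourself on the embeddings and hand the encoder outputs to generate().

import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("A sentence to start from.", return_tensors="pt")
# Build the embeddings yourself; a custom embedding tensor could be substituted here
inputs_embeds = model.get_input_embeddings()(inputs["input_ids"])

with torch.no_grad():
    encoder_outputs = model.get_encoder()(inputs_embeds=inputs_embeds,
                                          attention_mask=inputs["attention_mask"])
    # generate() skips re-encoding when encoder_outputs is supplied
    generated = model.generate(encoder_outputs=encoder_outputs,
                               attention_mask=inputs["attention_mask"],
                               num_beams=4, max_length=30)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))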
0
huggingface
🤗Transformers
Deepspeed ZeRO Inference
https://discuss.huggingface.co/t/deepspeed-zero-inference/12250
@stas, I saw your tweet and had two questions. Does using DeepSpeed inference require consolidating the files in the checkpoint dir, or does it perform that for you and then reshard them across the available GPUs? Does this now support model > GPU RAM, or is the comment in the doc still accurate: Additionally DeepSpeed is currently developing a related product called Deepspeed-Inference which has no relationship to the ZeRO technology, but instead uses tensor parallelism to scale models that can’t fit onto a single GPU. This is a work in progress and we will provide the integration once that product is complete. In general I am trying to figure out if there exists a sharded checkpoint format that allows either (a) inference or (b) resuming training on a different world size.
DeepSpeed ZeRO Inference is the same as ZeRO Training except it doesn’t allocate the optimizer and lr scheduler, and it requires ZeRO-3. Therefore it always supports model > single GPU RAM. During training it indeed saves a sharded state checkpoint. During inference it doesn’t need to do that. While it can load from a sharded trained-with-ZeRO checkpoint, it is meant to be used with a normal single-file checkpoint. It uses zero.Init to split the weights to different GPUs while loading the model. When we do normal training, we need to do validation, so inference happens there already. So yes, DeepSpeed uses a sharded checkpoint format. I don’t think you can currently use a sharded checkpoint for N GPUs and be able to load it on M GPUs automatically. I wrote a script to consolidate fp32 weights: Zero Redundancy Optimizer (ZeRO) - DeepSpeed, which, if you use it, will allow you to move from N to M GPUs, since it generates a single unsharded checkpoint. You can, of course, write code that will rewrite the checkpoints to change from N to M GPUs. The main reason I wrote it as a separate tool is that it requires a huge amount of CPU RAM, so doing this dynamically at run time could be too demanding and may require an instance with a lot of CPU memory. In the case of TP+PP+DP, which we use at Megatron-DeepSpeed, Tunji has been working on converting any DeepSpeed PP-style checkpoint from any TP/PP degree to any other TP/PP degree. But the PP checkpoint is slightly different from the ZeRO checkpoint. It has each layer in a separate checkpoint, and saves each TP split in a separate checkpoint. So TP is hardwired and can’t be changed once the training has started, whereas PP can be changed on the fly. But of course one can reformat the checkpoint to change the TP degree as well. The tools are here: github.com/microsoft/Megatron-DeepSpeed, tools/convert_checkpoint; they also include conversion from Meg-DS → Megatron-LM → HF Transformers. More tools are planned to be added there. DeepSpeed Inference (not ZeRO) uses Tensor Parallelism and at the moment this is still a WIP - I think right now it works with a single checkpoint. But it’d probably make sense to pre-shard it for faster loading. Please let me know if I have addressed all the questions.
0
huggingface
🤗Transformers
Error when Fine-tuning pretrained Masked Language Model
https://discuss.huggingface.co/t/error-when-fine-tuning-pretrained-masked-language-model/5386
My whole question is here: python - TypeError: zeros_like(): argument 'input' when fine-tuning on MLM - Stack Overflow. Basically, I am having this error when fine-tuning my pretrained model: ValueError: expected sequence of length 2033 at dim 1 (got 2036) Does anyone have any idea how I can solve this?
Anyone? I have set padding=True, so such issues should not exist.
0
huggingface
🤗Transformers
Improve the performance of model prediction of transformers model
https://discuss.huggingface.co/t/improve-the-performance-of-model-prediction-of-transformers-model/12205
Hi, I am new to transformers. I am using some of its models for many tasks. One is summarization using the Google pegasus-xsum model; the performance is good on GPU, but when I try to do it on CPU it takes around 16-18 seconds. I also started using the parrot-paraphrase library, which uses the T5 model in the backend; it also performs fine on GPU, but on CPU it takes around 5-8 seconds to produce the result. Due to GPU limitations on my server I have to optimize it for CPU, to take the response time down to 2-4 seconds max. Here are the links to the models I am using: Pegasus-XSUM: google/pegasus-xsum · Hugging Face Parrot-Paraphrase: prithivida/parrot_paraphraser_on_T5 · Hugging Face Code for the pegasus model: from transformers import PegasusTokenizer, PegasusForConditionalGeneration from trained_model import ModelFactory import os project_root = os.path.dirname(os.path.dirname(__file__)) path = os.path.join(project_root, 'models/') class Summarization: def __init__(self): self.mod = ModelFactory("summary") self.tokenizer = PegasusTokenizer.from_pretrained(path) def generate_abstractive_summary(self, text): """ To generate summary of text using pegasus model""" model = self.mod.get_preferred_model("summarization") inputs = self.tokenizer([text], max_length=1024, return_tensors='pt', truncation=True) summary_ids = model.generate(inputs['input_ids']) summary = [self.tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids] return summary[0] Is there any way to improve the performance? Any small improvement is also appreciated…
I have not used those two specific models before; however, without a small reproducible example of your code it is impossible to provide personalised feedback. You didn’t mention what you’re doing or using the model for, but I’ll assume you’re doing inference (not training or fine-tuning) based on the run times you reported. Ultimately, doing inference on a CPU will be much slower than on a GPU; there is absolutely no way around it (although the time difference if you’re doing training on a CPU vs GPU is even bigger). Whether there is still some margin for CPU optimisation in your specific case is something the community can only comment on if you share the code. The only thing that comes to my mind right now (without knowing anything at all about your code) is to only initialise the model once, and then do inference on the same instance, so avoid calling something like trained_model = AutoModel.from_pretrained(...) at every iteration inside a for loop, as that would be very detrimental, but without seeing the code I can’t say much else.
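On top of the advice above about initialising the model once, one CPU-side technique people commonly try is dynamic int8 quantisation of the linear layers. This is only a hedged sketch: the speed and quality impact varies by model, and a generation-heavy model like pegasus-xsum may still not hit a 2-4 second budget on CPU.

import torch
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum").eval()

# Replace nn.Linear layers with int8 dynamically quantised versions for CPU inference
qmodel = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

text = "Some long article text to summarise ..."
inputs = tokenizer([text], truncation=True, max_length=1024, return_tensors="pt")
with torch.no_grad():
    ids = qmodel.generate(inputs["input_ids"], num_beams=2, max_length=64)
print(tokenizer.batch_decode(ids, skip_special_tokens=True))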
0
huggingface
🤗Transformers
Why such a learning rate value?
https://discuss.huggingface.co/t/why-such-a-learning-rate-value/12193
I resumed training from checkpoint. I set the learning rate in TrainingArguments to 5e-5. Now the learning rate in the first logging step is 2.38e-05. Its value decreases in subsequent steps. How can I set the learning rate to the desired value? I do not understand where this 2.38e-05 comes from. These are my training arguments. training_args = Seq2SeqTrainingArguments( output_dir=output_dir, num_train_epochs=8, max_steps=-1, evaluation_strategy='epoch', eval_steps=0, per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=5e-5, warmup_ratio=0.1, warmup_steps=0, logging_dir=None, logging_strategy='steps', logging_steps=50, disable_tqdm=disable_tqdm, save_strategy='epoch', save_steps=0, load_best_model_at_end=True, metric_for_best_model='eval_loss', seed=random_state, predict_with_generate=True, dataloader_num_workers=4, save_total_limit=10, )
The scheduler used by default is a linear decay, so that’s why you see this learning rate, since you’re logging after 50 steps.
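To make that concrete, here is a small standalone illustration with made-up step counts (get_linear_schedule_with_warmup is what the Trainer builds by default): the learning rate ramps up linearly during warmup and then decays linearly toward zero, so any single logged value is normally below the configured peak of 5e-5, and when resuming from a checkpoint the logging starts wherever the schedule left off.

import torch
from transformers import get_linear_schedule_with_warmup

# toy setup: 1000 total steps, 100 warmup steps (warmup_ratio=0.1), peak lr 5e-5
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW([param], lr=5e-5)
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=100,
                                            num_training_steps=1000)

for step in range(1, 1001):
    optimizer.step()
    scheduler.step()
    if step in (50, 100, 500):
        print(step, scheduler.get_last_lr()[0])
# 50  -> 2.5e-05  (still warming up)
# 100 -> 5.0e-05  (peak)
# 500 -> ~2.8e-05 (decaying linearly)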
0
huggingface
🤗Transformers
NLP Pretrained model model doesn’t use GPU when making inference
https://discuss.huggingface.co/t/nlp-pretrained-model-model-doesn-t-use-gpu-when-making-inference/1395
I am using the Marian MT pretrained model for inference for a machine translation task, integrated with a Flask service. I am running the model on a CUDA-enabled device. While inferencing, the model is not using the GPU, it is using the CPU only. I don’t want to use the CPU for inference as it takes a very long time to process the request; even if I pass 1 sentence it takes very long. Please help with this. Below is the code snippet and model I am using: model_name = 'Helsinki-NLP/opus-mt-ROMANCE-en' tokenizer = MarianTokenizer.from_pretrained(model_name) print(tokenizer.supported_language_codes) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer.prepare_translation_batch(src_text)) tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated] I have downloaded the pytorch model.bin and other tokenizer files from the S3 environment and saved them locally… Please help with how I can put things on the GPU for faster inference.
Have you tried model.to('cuda') to make the model use the GPU?
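Expanding on that reply with a hedged sketch (note that prepare_translation_batch was renamed in newer transformers versions, where you simply call the tokenizer directly): both the model and the input tensors have to live on the GPU.

import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ROMANCE-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to("cuda").eval()

src_text = ["Hola, ¿cómo estás?"]  # example source sentence
batch = tokenizer(src_text, return_tensors="pt", padding=True).to("cuda")  # inputs on GPU too

with torch.no_grad():
    translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))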
0
huggingface
🤗Transformers
Why fine-tuning BERT mlm on specific domain doesn’t work? What am I doing wrong?
https://discuss.huggingface.co/t/why-fine-tuning-bert-mlm-on-specific-domain-doesnt-work-what-am-i-doing-wrong/12033
I’m new. I’m trying to fine-tune a BERT MLM (bert-base-uncased) on a target domain. Unfortunately, the results are not good. Before fine-tuning, the pre-trained model fills the mask of a sentence with words in line with human expectations. E.g. Wikipedia is a free online [MASK], created and edited by volunteers around the world. The most probable predictions are encyclopedia (score: 0.650) and resource (score: 0.087). After fine-tuning, the predictions are completely wrong. Often stopwords are predicted as the result. E.g. Wikipedia is a free online [MASK], created and edited by volunteers around the world. The most probable predictions are the (score: 0.052) and be (score: 0.033). I experimented with different epochs (from 1 to 10) and different datasets (from a few MB to a few GB) but I got the same issue. What am I doing wrong? I’m using the following code, I hope you can help me. from transformers import AutoConfig, AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') config = AutoConfig.from_pretrained('bert-base-uncased', output_hidden_states=True) model = AutoModelForMaskedLM.from_config(config) # BertForMaskedLM.from_pretrained(path) from transformers import LineByLineTextDataset dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="data/english/corpora.txt", block_size = 512) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) from transformers import Trainer, TrainingArguments training_args = TrainingArguments(output_dir="output/models/english", overwrite_output_dir=True, num_train_epochs=5, per_gpu_train_batch_size=8, save_steps = 22222222, save_total_limit=2) trainer = Trainer(model=model, args=training_args, data_collator=data_collator, train_dataset=dataset) trainer.train() trainer.save_model("output/models/english") from transformers import pipeline # Initialize MLM pipeline mlm = pipeline('fill-mask', model="output/models/english", tokenizer="output/models/english") # Get mask token mask = mlm.tokenizer.mask_token # Get result for particular masked phrase phrase = f'Wikipedia is a free online {mask}, created and edited by volunteers around the world' result = mlm(phrase) # Print result print(result)
What I believe is happening here is that when you call config = AutoConfig.from_pretrained('bert-base-uncased', output_hidden_states=True) model = AutoModelForMaskedLM.from_config(config) the model is a randomly initialized BERT model having the same configuration as the “bert-base-uncased” model. So, after fine-tuning, the model predictions are not satisfactory because the initial model is not a pretrained one. I have checked this in a Colab notebook. If your intention is just to enable the output_hidden_states functionality, you can try: AutoModelForMaskedLM.from_pretrained("bert-base-uncased", output_hidden_states=True) When I ran the line I recommended on Colab, the download time was noticeably greater, which is a potential indicator that it is indeed the model parameters that are being loaded, not just the configuration file.
0
huggingface
🤗Transformers
Finetuned I-Bert for question answering task
https://discuss.huggingface.co/t/finetuned-i-bert-for-question-answering-task/12147
How can we fine-tune an Integer Bert for question answering task?
Hi, as explained in IBERT’s model card, fine-tuning the model consists of 3 stages: full-precision fine-tuning, quantization, and quantization-aware training. So for the first step, you can fine-tune IBERT just like any BERT model. You can take a look at the example scripts as well as the official QA notebook. Step 2 is simply setting the quantize attribute of the model’s configuration to True. Step 3 is the same as step 1, but now with your quantized model.
0
huggingface
🤗Transformers
Language generation with torchscript model?
https://discuss.huggingface.co/t/language-generation-with-torchscript-model/669
I have fine-tuned a summarization model following the Hugging Face seq2seq guide (starting from sshleifer/distilbart-xsum-12-6). Our team is interested in using AWS Elastic Inference for deployment for cost reduction (e.g. similar to this: https://aws.amazon.com/blogs/machine-learning/fine-tuning-a-pytorch-bert-model-and-deploying-it-with-amazon-elastic-inference-on-amazon-sagemaker/). I was wondering whether there are any examples or any suggested way to use the beam search logic in BartForConditionalGeneration with model inference from a torchscript model. Most of the examples for torchscript I’ve found are for classification tasks where this isn’t necessary.
I’ve had success with deploying a BartForConditionalGeneration model using SageMaker with EI. Try: model = BartForConditionalGeneration.from_pretrained(model_dir, torchscript=True)
0
huggingface
🤗Transformers
ModuleNotFoundError: No module named ‘transformers’
https://discuss.huggingface.co/t/modulenotfounderror-no-module-named-transformers/11609
Hi! I’ve been having trouble getting transformers to work in Spaces. When tested in my environment using python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))", the results show it’s been properly installed. When imported in Colab it works fine too, but whenever deployed to Spaces it always returns the same ModuleNotFound error. Full traceback message: Traceback: File "/home/user/.local/lib/python3.8/site-packages/streamlit/script_runner.py", line 354, in _run_script exec(code, module.__dict__)File "/home/user/app/app.py", line 1, in <module> from transformers import pipeline It’s a simple test app using transformers and streamlit, - both of which were reinstalled with pip after creating a new venv and reinstalling tensorflow and pytorch. I also tried cleaning, uninstalling, and reinstalling conda based on advice from another forum. No dice. Currently using: Python 3.9.4 Tensorflow 2.7.0 PyTorch 1.10.0 Transformers 4.12.3 Streamlit 1.2.0 Any help greatly appreciated! Thanks
It might be due to not having a requirements file. Here is an example of what your Spaces app should have: flax-community/image-captioning at main. Try adding the requirements, as they tell the environment what packages to load. Hope this helps.
1
huggingface
🤗Transformers
Cannot import FlaxVisionEncoderDecoderModel
https://discuss.huggingface.co/t/cannot-import-flaxvisionencoderdecodermodel/12049
So I tried doing from transformers import FlaxVisionEncoderDecoderModel but it throws me the following exception: ImportError: cannot import name 'FlaxVisionEncoderDecoderModel' from 'transformers' (/.local/lib/python3.8/site-packages/transformers/__init__.py) I have Flax installed (with jax as well) and I upgraded Transformers to the latest version. As a reference: I can run GPT-J with transformers on this TPU VM just fine, so I don’t know why it doesn’t like this Flax model. I’m totally lost, help needed.
User Cahya on Discord helped me with this; it turns out FlaxVisionEncoderDecoderModel is not in 4.12.5, so simply install transformers from the repo in order to have it!
1
huggingface
🤗Transformers
[Ray] How to get the best model per trial
https://discuss.huggingface.co/t/ray-how-to-get-the-best-model-per-trial/11155
Hi everyone, I’m trying to use the “hyperparameter_search” feature but cannot get the best model per trial. Each trial always gets the last value. I see the code line here calling the Ray Tune function to get the best model without the “scope” parameter passed; the default value of that function is “last”. github.com huggingface/transformers/blob/123cce6ffcafe6ecece7eb9914395d2ebcc98a48/src/transformers/integrations.py#L294 # special attr set by tune.with_parameters if hasattr(trainable, "__mixins__"): dynamic_modules_import_trainable.__mixins__ = trainable.__mixins__ analysis = ray.tune.run( dynamic_modules_import_trainable, config=trainer.hp_space(None), num_samples=n_trials, **kwargs, ) best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3]) best_run = BestRun(best_trial.trial_id, best_trial.last_result["objective"], best_trial.config) if _tb_writer is not None: trainer.add_callback(_tb_writer) return best_run def run_hp_search_sigopt(trainer, n_trials: int, direction: str, **kwargs) -> BestRun: from sigopt import Connection Is there any way to work around this? Thanks!
Thanks for raising this. I am having the same problem with the Optuna backend.
0
huggingface
🤗Transformers
Got `ONNXRuntimeError` when try to run BART in ONNX format #12851
https://discuss.huggingface.co/t/got-onnxruntimeerror-when-try-to-run-bart-in-onnx-format-12851/8945
Environment info transformers version: 4.9.0 Platform: Linux-5.4.104±x86_64-with-Ubuntu-18.04-bionic Python version: 3.7.11 PyTorch version (GPU?): 1.9.0+cu102 (True) Using GPU in script?: Yes I was using Google Colab and trying to export model facebook/bart-large-cnn to the onnx format. I ran the command python -m transformers.onnx -m=facebook/bart-large-cnn onnx/bart-large-cnn , and the outputs seem okay. 2021-07-22 23:14:33.821472: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 Using framework PyTorch: 1.9.0+cu102 Overriding 1 configuration item(s) - use_cache -> False /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py:212: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py:218: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py:249: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py:863: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
if input_shape[-1] > 1: tcmalloc: large alloc 1625399296 bytes == 0x5595ce83a000 @ 0x7f1780d9f887 0x7f177f695c29 0x7f177f696afb 0x7f177f696bb4 0x7f177f696f9c 0x7f17670dcbb7 0x7f17670dd064 0x7f175b75ba1c 0x7f176bf8eaff 0x7f176b949b88 0x55949fda8bf8 0x55949fe1c6f2 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fe16c35 0x55949fda973a 0x55949fe1bf40 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fda965a 0x55949fe17b0e 0x55949fda965a 0x55949fe17b0e 0x55949fe16c35 0x55949fe16933 0x55949fe14da0 0x55949fda7ea9 0x55949fda7da0 0x55949fe1bbb3 tcmalloc: large alloc 1625399296 bytes == 0x55962f654000 @ 0x7f1780d9f887 0x7f177f695c29 0x7f177f696afb 0x7f177f696bb4 0x7f177f696f9c 0x7f17670dcbb7 0x7f17670dd064 0x7f175b75ba1c 0x7f176bf8ecab 0x7f176b949b88 0x55949fda8bf8 0x55949fe1c6f2 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fe16c35 0x55949fda973a 0x55949fe1bf40 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fda965a 0x55949fe17b0e 0x55949fda965a 0x55949fe17b0e 0x55949fe16c35 0x55949fe16933 0x55949fe14da0 0x55949fda7ea9 0x55949fda7da0 0x55949fe1bbb3 tcmalloc: large alloc 1625399296 bytes == 0x5595ce83a000 @ 0x7f1780d9d1e7 0x55949fdd9a18 0x55949fda4987 0x7f176bf8ece2 0x7f176b949b88 0x55949fda8bf8 0x55949fe1c6f2 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fe16c35 0x55949fda973a 0x55949fe1bf40 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fda965a 0x55949fe17b0e 0x55949fda965a 0x55949fe17b0e 0x55949fe16c35 0x55949fe16933 0x55949fe14da0 0x55949fda7ea9 0x55949fda7da0 0x55949fe1bbb3 0x55949fe16c35 0x55949fda973a 0x55949fe17b0e 0x55949fe16c35 0x55949fce8eb1 tcmalloc: large alloc 1625399296 bytes == 0x55962f654000 @ 0x7f1780d9f887 0x7f177f695c29 0x7f177f695d47 0x7f177f6977a5 0x7f176bd60368 0x7f176bfbc844 0x7f176b949b88 0x55949fda8010 0x55949fda7da0 0x55949fe1bbb3 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fe16c35 0x55949fda973a 0x55949fe1bf40 0x55949fe16c35 0x55949fda973a 0x55949fe1893b 0x55949fda965a 0x55949fe17b0e 0x55949fda965a 0x55949fe17b0e 0x55949fe16c35 0x55949fe16933 0x55949fe14da0 0x55949fda7ea9 0x55949fda7da0 0x55949fe1bbb3 0x55949fe16c35 0x55949fda973a Validating ONNX model... 
-[✓] ONNX model outputs' name match reference model ({'last_hidden_state', 'encoder_last_hidden_state'} - Validating ONNX Model output "last_hidden_state": -[✓] (2, 8, 1024) matchs (2, 8, 1024) -[✓] all values close (atol: 0.0001) - Validating ONNX Model output "encoder_last_hidden_state": -[✓] (2, 8, 1024) matchs (2, 8, 1024) -[✓] all values close (atol: 0.0001) All good, model saved at: onnx/bart-large-cnn/model.onnx Then I tried to execute the model in onnxruntime , import onnxruntime as ort ort_session = ort.InferenceSession('onnx/bart-large-cnn/model.onnx') # Got input_ids and attention_mask using tokenizer outputs = ort_session.run(None, {'input_ids': input_ids.detach().cpu().numpy(), 'attention_mask': attention_mask.detach().cpu().numpy()}) And I got the error, --------------------------------------------------------------------------- RuntimeException Traceback (most recent call last) <ipython-input-30-380e6a0e1085> in <module>() ----> 1 outputs = ort_session.run(None, {'input_ids': input_ids.detach().cpu().numpy(), 'attention_mask': attention_mask.detach().cpu().numpy()}) /usr/local/lib/python3.7/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options) 186 output_names = [output.name for output in self._outputs_meta] 187 try: --> 188 return self._sess.run(output_names, input_feed, run_options) 189 except C.EPFail as err: 190 if self._enable_fallback: RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_109' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2}, requested shape:{1,1} I see that BART is recently supported for ONNX in the latest release, but there isn’t any code to exactly explain how to run the inference in onnxruntime . Maybe I’m doing something wrong here, so any help will be appreciated!
Hi @AlfredWGA, Thanks for bringing this up to us. As a sanity check, can you please provide the shape of input_ids and attention_mask? Also if you can share the ONNX Runtime version you’re using, that would be very helpful. Thanks!
0
huggingface
🤗Transformers
Training embeddings of tokens
https://discuss.huggingface.co/t/training-embeddings-of-tokens/3398
I added a few tokens to the tokenizer, and would now like to train a RoBERTa model. Will it automatically also tune the embedding layer (the layer that embeds the tokens), or is there any flag or anything else I should change so that the embedding layer will be tuned? Schematically, my code looks like this: model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=2) trainer = Trainer( model=model, args=training_args, # training arguments, defined above … ) trainer.train() and the input data is the tokenized data.
[I am not an expert, but I believe this is right] If you need to add different tokens, then you will need to train the RoBERTa model from scratch. (You probably don’t want to do that.) It doesn’t work to change the tokens after the model has been pre-trained. Do you definitely need to add different tokens? If you just include your different tokens in your data, the tokenizer will probably deal with them OK, by representing them as combinations of tokens it already knows. I recommend Chris McCormick’s blog posts about this: BERT Word Embeddings Tutorial · Chris McCormick. By default, if you fine-tune a pre-trained RoBERTa model, the embedding layer will be very slightly tuned. Most of the tuning change will happen in the last few layers, especially the classification head layer. If you want to tune ONLY the last layer(s), you can freeze the earlier layers. (It isn’t possible to freeze the later layers and tune the earlier ones.)
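For completeness (this is the standard library mechanism, not a claim about what works best for the original poster's data): transformers does let you add tokens to an existing tokenizer and resize the embedding matrix. The new rows start out randomly initialised, and all embedding rows are trainable by default, so they get tuned during fine-tuning unless you freeze them. The token strings below are hypothetical.

from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

num_added = tokenizer.add_tokens(["<new_tok1>", "<new_tok2>"])  # hypothetical new tokens
model.resize_token_embeddings(len(tokenizer))  # appends randomly initialised rows

# The embedding weights require gradients by default, so the Trainer updates them
print(model.get_input_embeddings().weight.requires_grad)  # True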
0
huggingface
🤗Transformers
ModuleNotFoundError: No module named ‘transformers.modeling_roberta’
https://discuss.huggingface.co/t/modulenotfounderror-no-module-named-transformers-modeling-roberta/11175
I’m trying to use Longformer and in its code it has from transformers.modeling_roberta import RobertaConfig, RobertaModel, RobertaForMaskedLM but although I installed transformers and I can do import transformers, I still get an error when I do: from transformers.modeling_roberta import RobertaConfig, RobertaModel, RobertaForMaskedLM Any suggestion on how to install it, or what to do?
You should import those objects from transformers directly.
0
huggingface
🤗Transformers
Wav2Vec2 Speech Pre-Training After a few epochs the contrastive loss was decreased to zero and the model stopped changing
https://discuss.huggingface.co/t/wav2vec2-speech-pre-training-after-a-few-epochs-the-contrastive-loss-was-decreased-to-zero-and-the-model-stopped-changing/11683
I tried to re-run the demo script with the same parameters on Colab. After a few epochs, the contrastive loss decreased to zero and the model stopped changing. The original script can be found here: github.com/huggingface/transformers, examples/pytorch/speech-pretraining. Here is the sample code on Colab; sample output: | loss: 9.969e-02| constrast_loss: 0.000e+00| div_loss: 9.969e-01| %_mask_idx: 5.137e-01| ppl: 2.000e+00| lr: 1.572e-03| temp: 1.902e+00| grad_norm: 8.068e-19 | loss: 9.969e-02| constrast_loss: 0.000e+00| div_loss: 9.969e-01| %_mask_idx: 4.952e-01| ppl: 2.000e+00| lr: 1.572e-03| temp: 1.902e+00| grad_norm: 4.017e-19 | loss: 9.969e-02| constrast_loss: 0.000e+00| div_loss: 9.969e-01| %_mask_idx: 4.831e-01| ppl: 2.000e+00| lr: 1.572e-03| temp: 1.902e+00| grad_norm: 5.166e-19 I used the parameters given in the README file, so this behavior is unexpected and may indicate a different problem. Is this a bug in the official feature or am I making some mistake? If so, please help me figure out how to fix this problem.
I have been struggling with this issue for a month now. Has anyone found an explanation for such behaviour? Thanks.
0
huggingface
🤗Transformers
Batch generation with GPT2
https://discuss.huggingface.co/t/batch-generation-with-gpt2/1517
How to do batch generation with the GPT2 model?
Batch generation is now possible for GPT2 in master by leveraging the functionality shown in this PR: https://github.com/huggingface/transformers/pull/7552. For more info on how to prepare GPT2 for batch generation, you can check out this test: github.com huggingface/transformers/blob/890e790e16084e58a1ecb9329c98ec3e76c45994/tests/test_modeling_gpt2.py#L430 @slow def test_batch_generation(self): model = GPT2LMHeadModel.from_pretrained("gpt2") model.to(torch_device) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") tokenizer.padding_side = "left" # Define PAD Token = EOS Token = 50256 tokenizer.pad_token = tokenizer.eos_token model.config.pad_token_id = model.config.eos_token_id
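For convenience, a condensed, self-contained version of the pattern in that test (the prompts are example inputs): left padding plus pad_token = eos_token is what makes batched GPT-2 generation work.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.padding_side = "left"            # pad on the left so generation continues from real tokens
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token of its own

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.config.pad_token_id = model.config.eos_token_id

prompts = ["Hello, my dog is", "Today the weather is really"]
enc = tokenizer(prompts, return_tensors="pt", padding=True)

out = model.generate(input_ids=enc["input_ids"],
                     attention_mask=enc["attention_mask"],
                     max_length=20)
print(tokenizer.batch_decode(out, skip_special_tokens=True))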
0
huggingface
🤗Transformers
Whitelist specific tokens in beam search
https://discuss.huggingface.co/t/whitelist-specific-tokens-in-beam-search/4336
I’m using model.generate() for text generation. I’m wondering whether there is a way to whitelist specific tokens so that they are returned during the beam search phase. For example, I want to “force” the response to contain a question mark or a specific phrase.
Every token is “whitelisted” in the sense that it is considered during beam search. However, you can modify/boost certain tokens you would like to see generated, using the LogitsProcessor. Just implement your own class that boosts your favourite tokens or sequence of tokens. If you are looking for a question mark, maybe you can “blacklist” other end-of-sentence punctuation in your LogitsProcessor, too?
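A hedged sketch of the LogitsProcessor idea from that reply (the model, the boosted token and the boost value are arbitrary examples, not a recommendation): add a constant bonus to the logits of whitelisted tokens at every decoding step so beam search favours them.

import torch
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          LogitsProcessor, LogitsProcessorList)

class BoostTokensProcessor(LogitsProcessor):
    def __init__(self, token_ids, boost=5.0):
        self.token_ids = token_ids
        self.boost = boost

    def __call__(self, input_ids, scores):
        # raise the scores of the whitelisted token ids for every beam
        scores[:, self.token_ids] += self.boost
        return scores

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # placeholder model
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

question_mark_id = tokenizer.convert_tokens_to_ids("?")
processors = LogitsProcessorList([BoostTokensProcessor([question_mark_id], boost=5.0)])

inputs = tokenizer("Tell me about the weather", return_tensors="pt")
out = model.generate(**inputs, num_beams=4, max_length=30, logits_processor=processors)
print(tokenizer.batch_decode(out, skip_special_tokens=True))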
0
huggingface
🤗Transformers
‘GPT2LMHeadModel’ object has no attribute ‘push_to_hub’
https://discuss.huggingface.co/t/gpt2lmheadmodel-object-has-no-attribute-push-to-hub/11566
Hello everybody, I recently fine-tuned a GPT model (GPT2LMHeadModel) and tried to push it to the Hub. I followed the tutorial but got this strange error: ‘GPT2LMHeadModel’ object has no attribute ‘push_to_hub’ I’ve updated the transformers library to the latest version, as shown in the screenshot. Can anybody help me with that? I’ve been working on this problem for 3 hours but still to no avail… (By the way, I was coding on Kaggle.) Any help is appreciated!
Hi, the correct way to push a model to the Hub is by calling: model.save_pretrained(".", push_to_hub=True, commit_message="First commit") save_pretrained also accepts the other push-related keyword arguments from its signature (repo_url, organization, private, use_auth_token, use_temp_dir, …), as can be seen here.
0
huggingface
🤗Transformers
Output of ‘bert-base-NER-uncased’ is different when using website and different when used via python
https://discuss.huggingface.co/t/output-of-bert-base-ner-uncased-is-different-when-using-website-and-different-when-used-via-python/11580
Hi, I’m trying to use the above mentioned model for token classification. Below is my sample text: 00:00:02 Speaker 1: hi john, it’s nice to see you again. how was your weekend? do anything special? 00:00:06 Speaker 2: yep, all good thanks. i was with my sister in derby. We saw, you know, that james bond film. what’s it called? then got a couple of drinks at the pitcher and piano, back in nottingham. 00:00:18 Speaker 1: that’s close to your flat, right? 00:00:25 Speaker 2: yeah, about five minutes away. i live on parliament street, remember? 00:00:39 Speaker 1: of course, i remember. you moved last year after you left your parents’ place. 00:00:39 Speaker 2: yeah, it was my sister’s birthday on sunday, susie, the older one. i told you last time about that new job she got. sainsbury’s, the one by victoria centre. When using the hosted interface API 1, the output is excellent: And here is the json output from the hosted API: [ { "entity_group": "PER", "score": 0.9778427481651306, "word": "john", "start": 23, "end": 27 }, { "entity_group": "LOC", "score": 0.9929279685020447, "word": "derby", "start": 166, "end": 171 }, { "entity_group": "MISC", "score": 0.7170370817184448, "word": "james bond", "start": 196, "end": 206 }, { "entity_group": "LOC", "score": 0.993842363357544, "word": "nottingham", "start": 293, "end": 303 }, { "entity_group": "LOC", "score": 0.9108084440231323, "word": "parliament street", "start": 420, "end": 437 }, { "entity_group": "PER", "score": 0.9840036034584045, "word": "susie", "start": 613, "end": 618 }, { "entity_group": "ORG", "score": 0.9001737236976624, "word": "sai", "start": 684, "end": 687 }, { "entity_group": "LOC", "score": 0.9343950748443604, "word": "##nsbury's", "start": 687, "end": 695 }, { "entity_group": "LOC", "score": 0.7310423851013184, "word": "victoria centre", "start": 708, "end": 723 } ] But when used via the python API using the following code: from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER-uncased") model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER-uncased") nlp = pipeline("token-classification", model=model, tokenizer=tokenizer) example = """00:00:02 Speaker 1: hi john, it's nice to see you again. how was your weekend? do anything special? 00:00:06 Speaker 2: yep, all good thanks. i was with my sister in derby. We saw, you know, that james bond film. what's it called? then got a couple of drinks at the pitcher and piano, back in nottingham. 00:00:18 Speaker 1: that's close to your flat, right? 00:00:25 Speaker 2: yeah, about five minutes away. i live on parliament street, remember? 00:00:39 Speaker 1: of course, i remember. you moved last year after you left your parents' place. 00:00:39 Speaker 2: yeah, it was my sister's birthday on sunday, susie, the older one. i told you last time about that new job she got. 
sainsbury's, the one by victoria centre.""" ner_results = nlp(example) print(ner_results) print(len(ner_results)) I get very different results, here is the output of the code: [{'entity': 'B-PER', 'score': 0.97784275, 'index': 10, 'word': 'john', 'start': 23, 'end': 27}, {'entity': 'B-LOC', 'score': 0.99292797, 'index': 50, 'word': 'derby', 'start': 166, 'end': 171}, {'entity': 'B-MISC', 'score': 0.8592305, 'index': 59, 'word': 'james', 'start': 196, 'end': 201}, {'entity': 'I-MISC', 'score': 0.5748464, 'index': 60, 'word': 'bond', 'start': 202, 'end': 206}, {'entity': 'B-LOC', 'score': 0.9938424, 'index': 83, 'word': 'nottingham', 'start': 293, 'end': 303}, {'entity': 'B-LOC', 'score': 0.8480199, 'index': 121, 'word': 'parliament', 'start': 420, 'end': 430}, {'entity': 'I-LOC', 'score': 0.973597, 'index': 122, 'word': 'street', 'start': 431, 'end': 437}, {'entity': 'B-PER', 'score': 0.9840036, 'index': 172, 'word': 'susie', 'start': 613, 'end': 618}, {'entity': 'B-ORG', 'score': 0.90017325, 'index': 190, 'word': 'sai', 'start': 684, 'end': 687}, {'entity': 'I-LOC', 'score': 0.93890965, 'index': 191, 'word': '##ns', 'start': 687, 'end': 689}, {'entity': 'I-LOC', 'score': 0.8916274, 'index': 192, 'word': '##bury', 'start': 689, 'end': 693}, {'entity': 'I-LOC', 'score': 0.9475074, 'index': 193, 'word': "'", 'start': 693, 'end': 694}, {'entity': 'I-LOC', 'score': 0.9595369, 'index': 194, 'word': 's', 'start': 694, 'end': 695}, {'entity': 'B-LOC', 'score': 0.55478203, 'index': 199, 'word': 'victoria', 'start': 708, 'end': 716}, {'entity': 'I-LOC', 'score': 0.90730333, 'index': 200, 'word': 'centre', 'start': 717, 'end': 723}] As can be seen its detecting 15 entities which is way more than the hosted API. And it is even detecting 's as an I-LOC which is very wrong and makes the result unusable. Why the difference in results? Am I doing something wrong in the code? Thanks
You need to add aggregation_strategy="simple" to your pipeline creation, to tell the pipeline to group together the tokens belonging to the same entities.
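Concretely, a minimal sketch of that fix (the example sentence is made up):

from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER-uncased")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER-uncased")

# aggregation_strategy="simple" merges word pieces into whole-entity spans,
# which is what the hosted inference widget does
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(nlp("i was with my sister in derby and we watched that james bond film."))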
1
huggingface
🤗Transformers
MMBT Model (Resnet and BERT) for multimodal embeddings
https://discuss.huggingface.co/t/mmbt-model-resnet-and-bert-for-multimodal-embeddings/2865
Hi! I’m trying to use the library’s implementation of Multimodal Bitransformers (Kiela et al.) to classify images and text simultaneously. I’ve found it hard because there is very little documentation and there are no examples. In particular, I’ve been having a hard time figuring out how to pass the encoded image along with the tokenized text to the already initialized model. If anyone has already worked with this implementation, I could really use some help. Thanks
Hey, check out this example.
0
huggingface
🤗Transformers
Getting IndexError: list index out of range when fine-tuning
https://discuss.huggingface.co/t/getting-indexerror-list-index-out-of-range-when-fine-tuning/5314
Hi everyone! I want to fine-tune my pre-trained Longformer model and am getting this error:- --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-54-2f2d9c2c00fc> in <module>() 45 ) 46 ---> 47 train_results = trainer.train() 6 frames /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 1032 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control) 1033 -> 1034 for step, inputs in enumerate(epoch_iterator): 1035 1036 # Skip past any already trained steps if resuming training /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self) 515 if self._sampler_iter is None: 516 self._reset() --> 517 data = self._next_data() 518 self._num_yielded += 1 519 if self._dataset_kind == _DatasetKind.Iterable and \ /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self) 555 def _next_data(self): 556 index = self._next_index() # may raise StopIteration --> 557 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 558 if self._pin_memory: 559 data = _utils.pin_memory.pin_memory(data) /usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] <ipython-input-53-5e4959dcf50c> in __getitem__(self, idx) 7 8 def __getitem__(self, idx): ----> 9 item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} 10 item['labels'] = torch.tensor(self.labels[idx]) 11 return item <ipython-input-53-5e4959dcf50c> in <dictcomp>(.0) 7 8 def __getitem__(self, idx): ----> 9 item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} 10 item['labels'] = torch.tensor(self.labels[idx]) 11 return item IndexError: list index out of range Evidently, it’s a problem with my tokenization. but I can’t it - for training the LM, I ensured length argument is set for tokenizer: tokenizer = LongformerTokenizerFast.from_pretrained("./ny_model", max_len=3500) with a hefty 52000 vocab size. next, when fine-tuning: train_encodings = tokenizer(list(train_text), truncation=True, padding=True, max_length=3500) val_encodings = tokenizer(list(val_text), truncation=True, padding=True, max_length=3500) you can see I truncate the sequences. I tried with some dummy data (ensuring they are of equal length), same problem. So what could the problem be? Any ideas? Note that I am fine-tuning the model after uploading the LM on Huggingface. Also, I have attached the code required to train the LM:------ https://colab.research.google.com/drive/153754DbFXRhKdHvjdSUUp9VSB5JqtZwX?usp=sharing 11
Have you figured out the reason? I’m getting the same error here.
0
huggingface
🤗Transformers
Freeze Lower Layers with Auto Classification Model
https://discuss.huggingface.co/t/freeze-lower-layers-with-auto-classification-model/11386
I’ve been unsuccessful in freezing lower pretrained BERT layers when training a classifier using Huggingface. I’m using AutoModelForSequenceClassification particularly, via the code below, and I want to freeze the lower X layers (ex: lower 9 layers). Is this possible in HuggingFace, and if so what code would I add to this for that functionality? tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") def tokenize_function(examples): return tokenizer(examples["text"], max_length = 512, padding="max_length", truncation=True) tokenized_train = train.map(tokenize_function, batched=True) tokenized_test = test.map(tokenize_function, batched=True) model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=1) for w in model.bert.parameters(): w._trainable = False training_args = TrainingArguments("test_trainer", evaluation_strategy="epoch", per_device_train_batch_size=8) trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_train, eval_dataset=tokenized_test) trainer.train()
Yes, in PyTorch freezing layers is quite easy. It can be done as follows: from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=1) for name, param in model.named_parameters(): if name.startswith("..."): # choose whatever you like here param.requires_grad = False
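For the specific case asked above (freezing the embeddings and the lower 9 encoder layers), a minimal sketch; the prefixes follow BERT's standard parameter naming, so double-check them against model.named_parameters():

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=1)

# freeze the embeddings and the first 9 encoder layers, leave the rest trainable
frozen_prefixes = ["bert.embeddings."] + [f"bert.encoder.layer.{i}." for i in range(9)]
for name, param in model.named_parameters():
    if any(name.startswith(prefix) for prefix in frozen_prefixes):
        param.requires_grad = False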
0
huggingface
🤗Transformers
How to use Adaptive Learning rate during training?
https://discuss.huggingface.co/t/how-to-use-adaptive-learning-rate-during-training/9143
Hi, I am trying to train a Siamese network and want to incorporate adaptive learning rate during training. So I have a SiameseTrainer which is a subclass of Trainer class. My question is am I doing something wrong that I get the error TypeError: Object of type Tensor is not JSON serializable in this line of my code: train_result = trainer.train(resume_from_checkpoint=checkpoint) My code runs for few epochs and then crashes. So what I am doing is below: def compute_metrics(p: EvalPrediction): correct_predictions = p.predictions[1] if isinstance(p.predictions, tuple) else p.predictions labels = p.label_ids return { "MSE": np.mean((correct_predictions-labels)**2) } optimizer = Adafactor( model.parameters(), scale_parameter=True, # If True, learning rate is scaled by root mean square ) lr_scheduler = AdafactorSchedule(optimizer) trainer = SiameseTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=data_collator, compute_metrics=compute_metrics, tokenizer=tokenizer, callbacks=[EarlyStoppingCallback(early_stopping_patience=2)], optimizers=(optimizer, lr_scheduler) ) When I comment out the optimizers=(optimizer, lr_scheduler) parameter and run the code, it does not crash in the middle of epochs. Appreciate any help!
Hi anzaman, I am stuck on the same problem as you. Have you fixed this one yet? This is my error (screenshot omitted). UPDATE: I found a way to work around it by adding logging_steps=9999999 to the trainer arguments so that it does not log during training, but is there a better solution for this?
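If the JSON-serialization error really does come from AdafactorSchedule handing tensor learning rates to the Trainer's logger (an assumption worth checking against your traceback), a sketch of a less blunt workaround than disabling logging is to cast the scheduler's learning rates to plain floats; it reuses the optimizer setup from the original post, so model is assumed to be your Siamese model:

from transformers import Adafactor, AdafactorSchedule

class FloatLrAdafactorSchedule(AdafactorSchedule):
    """Same schedule, but returns Python floats so the Trainer can JSON-log them."""
    def get_lr(self):
        lrs = super().get_lr()
        return [lr.item() if hasattr(lr, "item") else float(lr) for lr in lrs]

optimizer = Adafactor(model.parameters(), scale_parameter=True)
lr_scheduler = FloatLrAdafactorSchedule(optimizer)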
0
huggingface
🤗Transformers
Model broken on Hub: wav2vec robust
https://discuss.huggingface.co/t/model-broken-on-hub-wav2vec-robust/10909
The example “use this in Transformers” on facebook/wav2vec2-large-robust · Hugging Face fails with an OSError, seemingly due to a problem with the uploaded model on the Hub. Notebook to replicate error: Google Colab It looks like there is a problem specifically with this model on the hub, because other models work with the same code snippet e.g. if you swap out “wav2vec2-large-robust” for “facebook/wav2vec2-large-960h-lv60-self”, that one works fine. See also Can't load tokenizer for 'facebook/wav2vec2-large-robust' 1
Here’s the state of the cache after I try instantiating both tokenizers, the robust one and the 960h one. image1595×343 99.3 KB And here’s what happens when I do a search in that directory using ag ~/.cache/huggingface/transformers# ll total 48 drwxr-xr-x 2 root root 4096 Oct 19 18:32 ./ drwxr-xr-x 3 root root 4096 Oct 19 18:21 ../ -rw-r--r-- 1 root root 1606 Oct 19 18:32 5681e9346f90f9fc4d72503284e96e6bbdc8bf5a38cafeb6ebf3791120b7570d.e0d02e2ed52b244ae1896cccc2beab5caccc2478b8b3d1131c14666c6e14cfdc -rw-r--r-- 1 root root 153 Oct 19 18:32 5681e9346f90f9fc4d72503284e96e6bbdc8bf5a38cafeb6ebf3791120b7570d.e0d02e2ed52b244ae1896cccc2beab5caccc2478b8b3d1131c14666c6e14cfdc.json -rwxr-xr-x 1 root root 0 Oct 19 18:32 5681e9346f90f9fc4d72503284e96e6bbdc8bf5a38cafeb6ebf3791120b7570d.e0d02e2ed52b244ae1896cccc2beab5caccc2478b8b3d1131c14666c6e14cfdc.lock* -rw-r--r-- 1 root root 162 Oct 19 18:32 814e23f251e4a5cd4763cf9b9b6ecb43e43f6a219ec036d9db3419f8dc9d93c3.6685801c836773b383173a1d86dd10317cc4f4eeadcf01f689918a50fdda946b -rw-r--r-- 1 root root 163 Oct 19 18:32 814e23f251e4a5cd4763cf9b9b6ecb43e43f6a219ec036d9db3419f8dc9d93c3.6685801c836773b383173a1d86dd10317cc4f4eeadcf01f689918a50fdda946b.json -rwxr-xr-x 1 root root 0 Oct 19 18:31 814e23f251e4a5cd4763cf9b9b6ecb43e43f6a219ec036d9db3419f8dc9d93c3.6685801c836773b383173a1d86dd10317cc4f4eeadcf01f689918a50fdda946b.lock* -rw-r--r-- 1 root root 85 Oct 19 18:32 de1143309c04207e22168c4563b24770c49eb4e933dbad506eadae8e43a7b422.9d6cd81ef646692fb1c169a880161ea1cb95f49694f220aced9b704b457e51dd -rw-r--r-- 1 root root 165 Oct 19 18:32 de1143309c04207e22168c4563b24770c49eb4e933dbad506eadae8e43a7b422.9d6cd81ef646692fb1c169a880161ea1cb95f49694f220aced9b704b457e51dd.json -rwxr-xr-x 1 root root 0 Oct 19 18:32 de1143309c04207e22168c4563b24770c49eb4e933dbad506eadae8e43a7b422.9d6cd81ef646692fb1c169a880161ea1cb95f49694f220aced9b704b457e51dd.lock* -rw-r--r-- 1 root root 291 Oct 19 18:32 e1f77599caea3f1f7004987f2f7a354d0fd31966b1b6bca5db52b63a8a8cb995.7c838a0a103758bad6ef4922531682da23a8b1c45d25f8d8e7a6d857c0b26544 -rw-r--r-- 1 root root 152 Oct 19 18:32 e1f77599caea3f1f7004987f2f7a354d0fd31966b1b6bca5db52b63a8a8cb995.7c838a0a103758bad6ef4922531682da23a8b1c45d25f8d8e7a6d857c0b26544.json -rwxr-xr-x 1 root root 0 Oct 19 18:32 e1f77599caea3f1f7004987f2f7a354d0fd31966b1b6bca5db52b63a8a8cb995.7c838a0a103758bad6ef4922531682da23a8b1c45d25f8d8e7a6d857c0b26544.lock* -rw-r--r-- 1 root root 1583 Oct 19 18:21 f4ed1cd2d2b55e3401644b177d5a166863754c345f98ed09260d0dce9a385d9a.2523c04309986c65617e9a8f2f66c3d656ba969fe07a994af31a3a0cf7b19b78 -rw-r--r-- 1 root root 145 Oct 19 18:21 f4ed1cd2d2b55e3401644b177d5a166863754c345f98ed09260d0dce9a385d9a.2523c04309986c65617e9a8f2f66c3d656ba969fe07a994af31a3a0cf7b19b78.json -rwxr-xr-x 1 root root 0 Oct 19 18:21 f4ed1cd2d2b55e3401644b177d5a166863754c345f98ed09260d0dce9a385d9a.2523c04309986c65617e9a8f2f66c3d656ba969fe07a994af31a3a0cf7b19b78.lock* ~/.cache/huggingface/transformers# ag 960h e1f77599caea3f1f7004987f2f7a354d0fd31966b1b6bca5db52b63a8a8cb995.7c838a0a103758bad6ef4922531682da23a8b1c45d25f8d8e7a6d857c0b26544.json 1:{"url": "https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/resolve/main/vocab.json", "etag": "\"88181b954aa14df68be9b444b3c36585f3078c0a\""} 5681e9346f90f9fc4d72503284e96e6bbdc8bf5a38cafeb6ebf3791120b7570d.e0d02e2ed52b244ae1896cccc2beab5caccc2478b8b3d1131c14666c6e14cfdc.json 1:{"url": "https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/resolve/main/config.json", "etag": "\"674493ec11ad5d90eaf72d07f69a4bb60203f46b\""} 
de1143309c04207e22168c4563b24770c49eb4e933dbad506eadae8e43a7b422.9d6cd81ef646692fb1c169a880161ea1cb95f49694f220aced9b704b457e51dd.json 1:{"url": "https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/resolve/main/special_tokens_map.json", "etag": "\"25bc39604f72700b3b8e10bd69bb2f227157edd1\""} 814e23f251e4a5cd4763cf9b9b6ecb43e43f6a219ec036d9db3419f8dc9d93c3.6685801c836773b383173a1d86dd10317cc4f4eeadcf01f689918a50fdda946b.json 1:{"url": "https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self/resolve/main/tokenizer_config.json", "etag": "\"97d4216be71590fae568725f363d52f00eb7c944\""} 5681e9346f90f9fc4d72503284e96e6bbdc8bf5a38cafeb6ebf3791120b7570d.e0d02e2ed52b244ae1896cccc2beab5caccc2478b8b3d1131c14666c6e14cfdc 2: "_name_or_path": "facebook/wav2vec2-large-960h-lv60-self", ~/.cache/huggingface/transformers# ag robust f4ed1cd2d2b55e3401644b177d5a166863754c345f98ed09260d0dce9a385d9a.2523c04309986c65617e9a8f2f66c3d656ba969fe07a994af31a3a0cf7b19b78.json 1:{"url": "https://huggingface.co/facebook/wav2vec2-large-robust/resolve/main/config.json", "etag": "\"a52cf9097910107f4e0d1bccf82fd4e08d4e4b66\""}
0
huggingface
🤗Transformers
Clarify BERT model learnable parameters
https://discuss.huggingface.co/t/clarify-bert-model-learnable-parameters/11018
Hello, I want to use HateBERT from the paper’s repository 1, which is a BERT model whose pre-training was extended on abusive language. In order to do that, I created a BERT model (bert-base-uncased in PyTorch) and tried to load HateBERT’s weights with load_state_dict() (after having made minor changes to parameter names, to match BERT’s). load_state_dict() throws the error: RuntimeError: Error(s) in loading state_dict for BertModel: Missing key(s) in state_dict: "embeddings.position_ids". which means that the BERT model expects embeddings.position_ids. I checked this tensor and it is not a PyTorch parameter, just a plain tensor (a registered buffer). From the error message, it is also evident that all other parameters match, otherwise their names would also be mentioned. Can someone explain?
HateBERT is available on the hub. GroNLP/hateBERT · Hugging Face 1 from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("GroNLP/hateBERT") model = AutoModelForMaskedLM.from_pretrained("GroNLP/hateBERT")
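If you still want to load the original checkpoint into a BertModel by hand, a minimal sketch (the checkpoint path is a placeholder, and it assumes the only missing key is the embeddings.position_ids buffer, which is safe to leave at its default value):

import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
state_dict = torch.load("hateBERT/pytorch_model.bin", map_location="cpu")  # hypothetical path

# strict=False skips the missing position_ids buffer instead of raising
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing)
print("unexpected:", unexpected)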
0
huggingface
🤗Transformers
Speech to text model with tensorflow?
https://discuss.huggingface.co/t/speech-to-text-model-with-tensorflow/11340
Hi, I am looking for a TensorFlow model that can convert an audio file to text. Can we do this with TensorFlow and/or Hugging Face? The only models I find on the Hub are for PyTorch… Thanks!
If you are looking for inference with a TF-based speech-to-text model, there is TFWav2Vec2. Or are you looking to fine-tune a TF-based model? You can look here 3 for that.
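A minimal inference sketch with TFWav2Vec2ForCTC; the checkpoint name and the dummy waveform are assumptions, and you may need from_pt=True if the repo only ships PyTorch weights:

import numpy as np
import tensorflow as tf
from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# replace this dummy array with your real 16 kHz mono waveform
audio_array = np.zeros(16000, dtype=np.float32)

inputs = processor(audio_array, sampling_rate=16000, return_tensors="tf")
logits = model(inputs["input_values"]).logits
pred_ids = tf.argmax(logits, axis=-1)
print(processor.batch_decode(pred_ids))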
0
huggingface
🤗Transformers
Prepare data to fine-tune T5 model on unsupervised objective
https://discuss.huggingface.co/t/prepare-data-to-fine-tune-t5-model-on-unsupervised-objective/6693
Hi, I couldn’t find a way to fine-tune the T5 model on a dataset in a specific domain (let’s say medical domain) using the unsupervised objective. Does the current version of Huggingface support this? Basically, all I need is to prepare the dataset to train the T5 model on the unsupervised objective, which could itself be very tricky. Any pointer on this is highly appreciated. P.S: I am looking for something in PyTorch and not Tensorflow. @valhalla @clem
Hi! Have you found a solution? I also can’t figure out how to fine-tune the pretrained model (mT5) on unlabeled, domain-specific data using the Transformers library. Thank you.
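Since neither post above has an answer, here is a rough sketch of the idea behind T5's unsupervised span-corruption objective (mask contiguous spans with sentinel tokens and predict them in the target). It is a simplification at word level, not the official preprocessing, and the span-sampling details are assumptions:

import random

def span_corrupt(words, corruption_rate=0.15, mean_span_len=3, seed=0):
    """Word-level approximation of T5 span corruption: drop random contiguous
    spans from the input, replace each with a sentinel, and put the dropped
    words (prefixed by their sentinels) in the target."""
    rng = random.Random(seed)
    source, target, i, sentinel = [], [], 0, 0
    while i < len(words):
        if rng.random() < corruption_rate / mean_span_len:
            span = words[i : i + mean_span_len]
            source.append(f"<extra_id_{sentinel}>")
            target.append(f"<extra_id_{sentinel}>")
            target.extend(span)
            sentinel += 1
            i += len(span)
        else:
            source.append(words[i])
            i += 1
    return " ".join(source), " ".join(target)

src, tgt = span_corrupt("the quick brown fox jumps over the lazy dog".split())
print(src)  # e.g. "the quick <extra_id_0> over the lazy dog"
print(tgt)  # e.g. "<extra_id_0> brown fox jumps"

The resulting (source, target) pairs can then be tokenized with the T5 tokenizer (the <extra_id_N> sentinels are already in its vocabulary) and fed to T5ForConditionalGeneration with the tokenized target as labels.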
0
huggingface
🤗Transformers
Continue pre-training Greek BERT with domain specific dataset
https://discuss.huggingface.co/t/continue-pre-training-greek-bert-with-domain-specific-dataset/4339
Hello, I want to further pre-train Greek BERT on a domain-specific dataset, and the library provides scripts 35 for this. There is also a BERT model, BertForPreTraining, which has a head for masked language modeling and a head for next sentence prediction. Can this model be used for continued pre-training as well? If it can, should I use the script or the model?
Hi, Yes the script is only for masked language modeling (MLM), so you would have to modify this script if you want to also perform next sentence prediction. But what you could do is the following: First use the run_mlm.py script to continue pre-training Greek BERT on your domain specific dataset for masked language modeling. Define a BertForPreTraining model (which includes both the masked language modeling head as well as a sequence classification head), load in the weights of the model that you trained in step 1, and then train on the next sentence prediction task.
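A minimal sketch of step 2 (the checkpoint path and the Greek sentence pair are placeholders); BertForPreTraining takes labels for the MLM head and next_sentence_label for the NSP head:

import torch
from transformers import BertTokenizerFast, BertForPreTraining

tokenizer = BertTokenizerFast.from_pretrained("path/to/mlm-checkpoint")
model = BertForPreTraining.from_pretrained("path/to/mlm-checkpoint")

encoding = tokenizer("Η πρώτη πρόταση.", "Η δεύτερη πρόταση.", return_tensors="pt")
labels = encoding["input_ids"].clone()   # in practice, mask some inputs and set unmasked label positions to -100
next_sentence_label = torch.tensor([0])  # 0 = the second sentence really follows the first

outputs = model(**encoding, labels=labels, next_sentence_label=next_sentence_label)
print(outputs.loss)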
0
huggingface
🤗Transformers
T5: classification using text2text?
https://discuss.huggingface.co/t/t5-classification-using-text2text/504
My friend who wishes to remain anonymous asked a good question about T5 that I couldn’t answer: Say we have a model that predicts sentiment – answers are “positive/negative/neutral” – for something like RoBERTa we’d add a layer, slap on a softmax – and we get both argmax predictions, and some notion of probability amongst the three classes (as well as entropy). For T5, we just get a text reply. But of course if we looked at the outputs to the softmax in T5, we’d see p(“positive”) etc – assuming the response is 1 token. Has anyone tried to do this already or seen examples/notebooks like this? Ideally, we want to ask our model several questions. Without worrying too much about the conditional logic, we’d like to be able to measure the probability of text outputs, including some rare categories (that nonetheless are present in our training set). As well as to look for low and high entropy predictions. If nobody has done this, any code pointers where to look would be helpful.
I’m not sure I understand the difference between what you are describing and how the GLUE tasks are handled in the T5 paper.
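To the original question: a minimal sketch of one way to read class probabilities off T5's first decoding step. It assumes each class name maps to a single sentencepiece token, which you should verify for your own labels:

import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

labels = ["positive", "negative", "neutral"]
label_token_ids = [tokenizer(l, add_special_tokens=False).input_ids[0] for l in labels]

inputs = tokenizer("sst2 sentence: I loved this movie!", return_tensors="pt")
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[:, -1, :]

probs = logits[:, label_token_ids].softmax(dim=-1)   # renormalized over the three class tokens
entropy = -(probs * probs.log()).sum(dim=-1)
print(dict(zip(labels, probs[0].tolist())), entropy.item())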
0
huggingface
🤗Transformers
Using onnx for text-generation with GPT-2
https://discuss.huggingface.co/t/using-onnx-for-text-generation-with-gpt-2/11253
Hi @valhalla @patrickvonplaten, I was working with onnx_transformers 1, using ONNX for the GPT-2 model and the text-generation task. I used the transformers pipeline for text generation and the runtime for generating text was a bit high (20-30s). I’ve tried different approaches, like using cron jobs to handle it, but it didn’t help. Then I found your repo and thought of using ONNX to accelerate the text generation. As I read the README, there is no text-generation support in onnx_transformers. I also used some methods in this notebook: Inference_GPT2_with_OnnxRuntime_on_CPU 2, but the quality of the generated text was not even close to the transformers pipeline. Would you please give me some insight into this runtime issue and how I can accelerate text generation besides increasing resources? Thanks
Hi, We’ve recently added an example of exporting BART with ONNX, including beam search generation: transformers/examples/onnx/pytorch/translation at master · huggingface/transformers · GitHub 4 However, it doesn’t include a README right now, which could be very useful to explain how exactly the model can be used. I’ve asked the author to add it.
0
huggingface
🤗Transformers
GPT2 summarization performance
https://discuss.huggingface.co/t/gpt2-summarization-performance/11218
Has anyone run benchmark studies to evaluate the generation/summarization performance of GPT-2 on datasets such as “xsum”? If so, could you share the performance numbers (in terms of ROUGE scores) you got? I searched for these results online but couldn’t find any.
Hi, I can suggest starting by looking here: Papers with Code - XSum Dataset 12 (the Extreme Summarization dataset for evaluating abstractive single-document summarization systems). I haven’t found any such numbers either, to be honest. But it seems smarter to use encoder-decoder models (like PEGASUS or BART) for summarization: decoder-only language models like GPT were trained to continue texts, and unlike BART-style models they don’t explicitly extract the idea of the given text. The encoder can be interpreted as an “idea extractor” and the decoder as the generator of natural-language text. P.S. I know one paper that argues that summarizing with GPT-3 works better than with BART. Even though its experiments rely heavily on Russian, you can use its references to dig deeper and find what you are looking for: arXiv.org, Fine-tuning GPT-3 for Russian Text Summarization 12. Good luck and let me know if you find anything, Kirill
0
huggingface
🤗Transformers
T0 Tokenizer Throws Error
https://discuss.huggingface.co/t/t0-tokenizer-throws-error/11270
I’m trying to use the new T0 model (bigscience/T0pp · Hugging Face) but when I try following the instructions, I get the following error: from transformers import AutoTokenizer from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, GPT2Model, GPT2Config, pipeline t0_tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp") Traceback (most recent call last): File "<input>", line 1, in <module> File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 469, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1742, in from_pretrained resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1858, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 136, in __init__ **kwargs, File "/home/rschaef/CoCoSci-Language-Distillation/CLIP_prefix_caption/clipgpt_venv/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 117, in __init__ "Couldn't instantiate the backend tokenizer from one of: \n" ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one. Am I missing something from the instructions on that page?
According to python - Transformers v4.x: Convert slow tokenizer to fast tokenizer - Stack Overflow 2, I need to separately install a library called sentencepiece - is that correct?
0
huggingface
🤗Transformers
Intel OpenVINO backend
https://discuss.huggingface.co/t/intel-openvino-backend/11178
Hi! We would like to start a discussion about adding Intel OpenVINO backend in Transformers library. If you have not heard about OpenVINO before, it’s a library which accelerates deep learning inference (not training, but inference of pretrained models) on Intel Architecture (CPU, GPU, VPU and others). The library is distributed in PyPI and developed in open source. Currently, there is an issue here: Intel OpenVINO inference backend · Issue #13987 · huggingface/transformers · GitHub 1 (for GitHub discussions) and the latest proposal here: Intel OpenVINO backend by dkurt · Pull Request #1 · dkurt/transformers · GitHub 1 Example (QA): from transformers import AutoTokenizer, OVAutoModelForQuestionAnswering tok = AutoTokenizer.from_pretrained("dkurt/bert-large-uncased-whole-word-masking-squad-int8-0001") model = OVAutoModelForQuestionAnswering.from_pretrained("dkurt/bert-large-uncased-whole-word-masking-squad-int8-0001") context = """ Soon her eye fell on a little glass box that was lying under the table: she opened it, and found in it a very small cake, on which the words “EAT ME” were beautifully marked in currants. “Well, I’ll eat it,” said Alice, “ and if it makes me grow larger, I can reach the key ; and if it makes me grow smaller, I can creep under the door; so either way I’ll get into the garden, and I don’t care which happens !” """ question = "Where Alice should go?" input_ids = tok.encode(question + " " + tok.sep_token + " " + context, return_tensors="pt") outputs = model(input_ids) start_pos = outputs.start_logits.argmax() end_pos = outputs.end_logits.argmax() + 1 answer_ids = input_ids[0, start_pos:end_pos] answer = tok.convert_tokens_to_string(tok.convert_ids_to_tokens(answer_ids)) print("Question:", question) print("Answer:", answer)
Opened a pull request at Intel OpenVINO backend (inference only) by dkurt · Pull Request #14203 · huggingface/transformers · GitHub 2. Any feedback welcome!
0
huggingface
🤗Transformers
Panel Data APP - GPT2 Show Case
https://discuss.huggingface.co/t/panel-data-app-gpt2-show-case/11212
A user of Panel asked for help creating a data app based on the GPT-2 transformer and Bokeh plots: “Not able to update Bokeh bar plot based on button click which updates source.data” - Panel - HoloViz Discourse 1. I took up the challenge and in an hour I had created this. I can only agree with the prediction (screenshot omitted). Check out the code: Hugging Face GPT2 Transformer Example · GitHub 1.
Does anyone know if I can host the app on Spaces?
0
huggingface
🤗Transformers
‘BertEncoder’ object has no attribute ‘gradient_checkpointing’
https://discuss.huggingface.co/t/bertencoder-object-has-no-attribute-gradient-checkpointing/11207
I’m getting a strange error that previously worked OK. I’m only trying to use a previously trained NLP model to predict a label. Source Code: BATCH_SIZE = 128 MAX_LEN = 64 def generate_labels(model, tokenizer, df, segment_label): segments = list(df[segment_label].values) pred_labels = [] num_batches = int(len(segments) / BATCH_SIZE) + 1 print('Total num batches: ', num_batches) model.eval() with torch.no_grad(): for i in range(num_batches): if (i + 1) % 100 == 0: print('Processing batch #', (i + 1)) batch = segments[i * BATCH_SIZE : (i + 1) * BATCH_SIZE] tokenized_text = tokenizer(batch, return_tensors="pt", padding = 'max_length', \ truncation = True, max_length = MAX_LEN) input_ids = tokenized_text['input_ids'].to('cuda') attention_mask = tokenized_text['attention_mask'].to('cuda') outputs = model(input_ids, attention_mask = attention_mask).logits pred_labels_batch = torch.argmax(outputs, dim = 1).cpu().numpy() pred_labels.extend(pred_labels_batch) return pred_labels`Preformatted text` pred_labels = generate_labels(trained_model, tokenizer, sent_df, 'text') ERROR: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <command-98410652084852> in <module> ----> 1 pred_labels = generate_labels(trained_model, tokenizer, sent_df, 'text') 2 sent_df['PREDICTED_DISCOURSE_TAG'] = [pubmed_rct_idx_to_label[x] for x in pred_labels] <command-4224228297735130> in generate_labels(model, tokenizer, df, segment_label) 17 input_ids = tokenized_text['input_ids'].to('cuda') 18 attention_mask = tokenized_text['attention_mask'].to('cuda') ---> 19 outputs = model(input_ids, attention_mask = attention_mask).logits 20 pred_labels_batch = torch.argmax(outputs, dim = 1).cpu().numpy() 21 pred_labels.extend(pred_labels_batch) /databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /databricks/python/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1528 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 1529 -> 1530 outputs = self.bert( 1531 input_ids, 1532 attention_mask=attention_mask, /databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /databricks/python/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 994 past_key_values_length=past_key_values_length, 995 ) --> 996 encoder_outputs = self.encoder( 997 embedding_output, 998 
attention_mask=extended_attention_mask, /databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] /databricks/python/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 558 past_key_value = past_key_values[i] if past_key_values is not None else None 559 --> 560 if self.gradient_checkpointing and self.training: 561 562 if use_cache: /databricks/python/lib/python3.8/site-packages/torch/nn/modules/module.py in __getattr__(self, name) 1128 if name in modules: 1129 return modules[name] -> 1130 raise AttributeError("'{}' object has no attribute '{}'".format( 1131 type(self).__name__, name)) 1132 AttributeError: 'BertEncoder' object has no attribute 'gradient_checkpointing' No idea where this is coming from. Any ideas or help? Note that this error goes away completely if I use transformers==4.10.0
You did not show how you create your model in your post, so no one can help you debug the problem. From the error message, it looks like you used torch.save to save your whole model (and not just the weights), which is not recommended at all because when the model changes (like it did between 4.10 and 4.11) you then can’t reload it directly with torch.load. Our advice is to always use save_pretrained/from_pretrained to save/load your models, or if that’s not possible, to save the weights (model.state_dict()) with torch.save and then reload them with model.load_state_dict, as this will work across different versions of the models.
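A minimal sketch of both options described above (the paths are placeholders, and model is assumed to be your already-trained classification model):

import torch
from transformers import AutoModelForSequenceClassification

# Option 1: the recommended Transformers way
model.save_pretrained("my-finetuned-model")
model = AutoModelForSequenceClassification.from_pretrained("my-finetuned-model")

# Option 2: plain PyTorch, saving only the weights
torch.save(model.state_dict(), "weights.pt")
model.load_state_dict(torch.load("weights.pt", map_location="cpu"))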
0
huggingface
🤗Transformers
Document Similarity of long documents e.g. legal contracts
https://discuss.huggingface.co/t/document-similarity-of-long-documents-e-g-legal-contracts/3568
Is there any way of getting similarities between very long text documents? I know about the ways to get similarity between sentences using sentence-transformers, but is there a model that can give me a one-shot “similar or not” output? Something like a Siamese network that can tell whether two random images are similar. I might be wrong about the analogy, but it seems very similar. If such models don’t exist, is there a method where I can make use of transformers to get similarities between long documents?
Hi @hemangr8, a very simple thing you can try is: (1) split the document into passages or sentences, (2) embed each passage/sentence as a vector, (3) take the average of the vectors to get a single vector representation of the document, (4) compare documents using your favourite similarity metric (e.g. cosine). Depending on the length of your documents, you could also try using the Longformer Encoder-Decoder which has a context size of 16K tokens: allenai/led-large-16384 · Hugging Face 81 If your documents fit within the 16K limit you could embed them in one go. There are some related ideas also in this thread: Summarization on long documents 93
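A minimal sketch of the averaging approach with sentence-transformers; the model name and the naive sentence splitting are assumptions, so swap in whatever encoder and splitter you prefer:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def doc_embedding(text):
    sentences = [s.strip() for s in text.split(".") if s.strip()]  # naive splitter
    return model.encode(sentences).mean(axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_a = "First long document. It talks about contracts."
doc_b = "Second long document. It also talks about contracts."
print(cosine(doc_embedding(doc_a), doc_embedding(doc_b)))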
0
huggingface
🤗Transformers
Error when fine-tuning imdb with the script
https://discuss.huggingface.co/t/error-when-fine-tuning-imdb-with-the-script/11185
Hi, I’m using the run_glue.py script I found here to fine tune imdb. I follow the example, training looks fine, eval looks fine but there is no result at the end. python run_glue.py \ --model_name_or_path bert-base-cased \ --dataset_name imdb \ --do_train \ --do_predict \ --max_seq_length 128 \ --per_device_train_batch_size 4 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --output_dir tmp/imdb/ I tried to add --remove_unused_columns False : python run_glue.py \ --model_name_or_path bert-base-cased \ --dataset_name imdb \ --do_train \ --do_predict \ --max_seq_length 128 \ --per_device_train_batch_size 4 \ --learning_rate 2e-5 \ --num_train_epochs 1 \ --remove_unused_columns False \ --output_dir tmp/imdb/ I’m getting this error at the end for each attempt : 10/29/2021 08:07:12 - INFO - __main__ - ***** Predict results None ***** [INFO|modelcard.py:449] 2021-10-29 08:07:12,801 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'dataset': {'name': 'imdb', 'type': 'imdb', 'args': 'plain_text'}} I don’t really know what is happening because there is no crash. Transformers, datasets and tokenizers are up to date. Thank you.
You need to adapt the script a bit for it to work on the IMDB dataset. It has no “validation” split (that split is named “test”), so you have to rename it or point the script at it.
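One sketch of such an adaptation, to be placed where the script loads the dataset (the variable name follows the run_glue.py conventions, so adjust it to your copy of the script):

from datasets import load_dataset

raw_datasets = load_dataset("imdb")
raw_datasets["validation"] = raw_datasets.pop("test")  # give the script the split name it expects
print(raw_datasets)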
0
huggingface
🤗Transformers
Does anyone else observer RoBERTa fine-tuning instability?
https://discuss.huggingface.co/t/does-anyone-else-observer-roberta-fine-tuning-instability/6499
I have come across this for two different tasks now, where my setup basically looks as follows: A smaller dataset for fine-tuning (500k samples), as well as a larger version of this data set (2 million samples). Hyperparameters are 1000 iterations warmup, 3 epochs training duration, and otherwise default. Previous runs with the small dataset gave decent results for BERT, and slightly better results with RoBERTa. However, once I go and train with the larger dataset, RoBERTa models no longer show any signs of convergence and instead just predict nonsense. Note that the BERT model still (consistently) performs fine. This is a problem across several (6) random seeds! My question now is whether someone else has observed a similar behavior, or whether there are some caveats to the parameters that only let selective models reach a stable training state. Generally the RoBERTa results were better on the smaller data, so obviously I’d like to go with a stable run on the larger data as well.
Hey @dennlinger, when you say: dennlinger: “However, once I go and train with the larger dataset, RoBERTa models no longer show any signs of convergence and instead just predict nonsense.” do you mean that both the training and validation loss don’t decrease, or something else? The two main parameters I’ve needed to tune to fine-tune XLM-R (not RoBERTa exactly, but close enough) effectively for text classification have been the learning rate and the number of warmup steps, with the former generally needing to be in the 1e-6 to 1e-5 range. In the RoBERTa paper they used 6% of the total training steps for warmup, so perhaps you could see whether increasing this to, say, 100,000 steps helps on your 2M-sample training set. What task are you working on?
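A sketch of what those two knobs look like with the Trainer; the specific values are just the starting points suggested above, not tuned recommendations:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-dataset",
    learning_rate=1e-5,      # try the 1e-6 to 1e-5 range
    warmup_ratio=0.06,       # ~6% of total training steps, as in the RoBERTa paper
    num_train_epochs=3,
    per_device_train_batch_size=32,
)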
0
huggingface
🤗Transformers
Using EXTREMELY small dataset to finetune BERT
https://discuss.huggingface.co/t/using-extremely-small-dataset-to-finetune-bert/8847
Hi, I have a domain-specific language classification problem that I am attempting to use a bert model for. My approach has been to take the standard pretrained bert model and run further unsupervised learning using domain-specific language corpora (using TSDAE training from the Sentence-Transformer framework 6). I am now trying to take this domain-trained model and finetune it for a classification task. The problem is I only have an extremely small labelled dataset (~1000 samples), I have been running a few training experiments and surprisingly have received very good results that I am very sceptical of. The task is to take natural language text and classify it to 1 of 5 classes. Here is my training setup: criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(model.parameters(), lr=0.0001, momentum=0.9) class BertClassification(nn.Module): def __init__(self): super(BertClassification, self).__init__() self.bert = BertModel.from_pretrained("TSDAE_model/0_Transformer") self.to_class = nn.Linear(768, 5) def forward(self, x): x = self.bert(x)[0][:,0,:] x = self.to_class(x) return x And here are the training results: I am very unsure of how trust worthy these results are as the dataset is so small. I have also tried freezing the bert weights and just training the self.to_class linear layer (~4000 params) but the model peaks at only about 50% accuracy. I was hoping someone may be able to help me decide if this is an appropriate training strategy for this dataset or if maybe I should look at alternatives. Thanks!
hey @JoshuaP, are your 5 classes equally balanced? if not, you might be better off charting a metric like the f1-score which tends to be less biased by cases where you have a lot of examples in just a few classes. another idea would be to implement a baseline (e.g. the classic naive bayes ) and see how that compares against your transformer model. finally you could try cross-validation (with a stratified split if your classes aren’t balanced) to mitigate some of the problems that come from doing a train/test split with small datasets
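A minimal sketch of the macro-F1 and stratified cross-validation suggestions, with dummy arrays standing in for your real predictions and gold labels:

import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

# dummy stand-ins for your real predictions and gold labels
labels = np.random.randint(0, 5, size=1000)
preds = np.random.randint(0, 5, size=1000)
print("macro F1:", f1_score(labels, preds, average="macro"))

# stratified 5-fold cross-validation indices for a small, possibly imbalanced dataset
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val examples")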
0
huggingface
🤗Transformers
Error when trying Deberta for tensorflow version, Can’t import the model
https://discuss.huggingface.co/t/error-when-trying-deberta-for-tensorflow-version-cant-import-the-model/10844
I found that DeBERTa is available for TensorFlow. I want to try the example for TFDebertaForSequenceClassification, but I got an error saying the model cannot be imported (screenshot omitted). transformers version = 4.5.1
The TensorFlow Deberta model was added more recently, so you will need to update your version of Transformers.
0
huggingface
🤗Transformers
Using EncoderDecoderModel
https://discuss.huggingface.co/t/using-encoderdecodermodel/10242
Hi, i have tried to combine the ViT (BeiT weights 16patch-384) as Encoder with a Bert Model as Decoder. (Like Microsofts new arxiv TrOCR paper ) u1301×590 110 KB If i use the EncoderDecoderModel it does not support pixel_values for the encoder. feat_extractor = ViTFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-384") tokenizer = XLMRobertaTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384") encoder = ViTModel.from_pretrained("microsoft/beit-base-patch16-384", output_attentions=True, output_hidden_states=True, return_dict=True, is_decoder=False) decoder = BertModel.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384", is_decoder=True, add_cross_attention=True, return_dict=True) encoder_inputs = feat_extractor(torch.randn(3, 512, 512), return_tensors='pt') decoder_input = tokenizer.encode_plus("Hello World", return_tensors='pt') print(decoder_input) #encoder_outputs = encoder(**encoder_inputs) #decoder_outputs = decoder(input_ids=decoder_input['input_ids'],encoder_hidden_states=encoder_outputs.last_hidden_state) model = EncoderDecoderModel(encoder=encoder, decoder=decoder) outputs = model(input_embeds=encoder_inputs['pixel_values'], decoder_input_ids=decoder_input) File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 425, in forward encoder_outputs = self.encoder( File "/home/felix/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'input_ids' And if i try it to combine with different specified: feat_extractor = ViTFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-384") tokenizer = XLMRobertaTokenizer.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384") encoder = ViTModel.from_pretrained("microsoft/beit-base-patch16-384", output_attentions=True, output_hidden_states=True, return_dict=True, is_decoder=False) decoder = BertModel.from_pretrained("microsoft/Multilingual-MiniLM-L12-H384", is_decoder=True, add_cross_attention=True, return_dict=True) encoder_inputs = feat_extractor(torch.randn(3, 512, 512), return_tensors='pt') decoder_input = tokenizer.encode_plus("Hello World", return_tensors='pt') print(decoder_input) encoder_outputs = encoder(**encoder_inputs) decoder_outputs = decoder(input_ids=decoder_input['input_ids'], encoder_hidden_states=encoder_outputs.last_hidden_state) #model = EncoderDecoderModel(encoder=encoder, decoder=decoder) #outputs = model(input_embeds=encoder_inputs['pixel_values'], decoder_input_ids=decoder_input) File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 990, in forward encoder_outputs = self.encoder( File "/home/felix/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 582, in forward layer_outputs = layer_module( File "/home/felix/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 494, in forward cross_attention_outputs = self.crossattention( File "/home/felix/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return 
forward_call(*input, **kwargs) File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 401, in forward self_outputs = self.self( File "/home/felix/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/felix/anaconda3/envs/work/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 280, in forward key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) File "/home/felix/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/felix/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 96, in forward return F.linear(input, self.weight, self.bias) File "/home/felix/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 1847, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (577x768 and 384x384) Is there currently a way to do this with the transformers lib ? Else i think its also a way to use the ViT model pretrained from transformers and build the Decoder via nn.TransformerDecoder or from scratch and init with a pretrained Thanks
Hi, EncoderDecoderModel is meant to combine any bidirectional text encoder (e.g. BERT) with any autoregressive text decoder (e.g. GPT2). We’re planning to add a VisionEncoderDecoderModel (recently we’ve added SpeechEncoderDecoderModel 6, which allows you to combine any speech autoencoding model such as Wav2Vec2 with any autoregressive text decoder). Feel free to contribute this if you are interested!
1
huggingface
🤗Transformers
Dealing with T5 for Multiple Choice Task
https://discuss.huggingface.co/t/dealing-with-t5-for-multiple-choice-task/11104
I wanted to learn how others typically deal with training a text-to-text model for a multiple-choice task such as SWAG. Any pointers will be appreciated.
If anyone is interested: there’s a SWAG notebook in the community notebooks section that can be used for reference. The input can be formatted as context: sentence. 1: option_1, 2: option_2, with the label being the option number.
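A small sketch of that formatting; the function and field names are made up, so map them to the actual SWAG columns you use:

def to_text2text(context, options, answer_idx):
    """Turn one multiple-choice example into a (source, target) pair for T5."""
    numbered = " ".join(f"{i + 1}: {opt}" for i, opt in enumerate(options))
    source = f"context: {context} {numbered}"
    target = str(answer_idx + 1)
    return source, target

src, tgt = to_text2text("She poured the coffee", ["and drank it.", "and flew away."], 0)
print(src)  # context: She poured the coffee 1: and drank it. 2: and flew away.
print(tgt)  # 1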
0
huggingface
🤗Transformers
Using Huggingface for computer vision (Tensorflow)?
https://discuss.huggingface.co/t/using-huggingface-for-computer-vision-tensorflow/11094
Hi, I would like to to use huggingface to train some computer vision models. The issue is that I use tensorflow and I can only see pytorch models on the hub. Is tensorflow supported for computer vision? Is there a TF notebook I could use to train my own models? Thanks!
We’re currently adding a TF implementation of the Vision Transformer: Add TFViTModel by ydshieh · Pull Request #13778 · huggingface/transformers · GitHub 5 This will also make it easier to add the other vision models (DeiT, BEiT), which are very similar to ViT.
0
huggingface
🤗Transformers
Are albert-base-v1( and v2) pretrained enough?
https://discuss.huggingface.co/t/are-albert-base-v1-and-v2-pretrained-enough/11061
Hi all, I have questions about the albert-base-v1 1 and v2 models uploaded to the Hugging Face model hub. I’ve checked the MLM loss of the ALBERT base models using the BookCorpus dataset and SQuAD context data (which is basically similar to Wikipedia data), based on this example script from the transformers repo, in order to validate initial model performance (without additional training). It appears that the average MLM losses are around 2.5 (SQuAD context) and 3.2 (BookCorpus) for albert-base-v1, which is much worse than I expected given that those two datasets must have been used for pre-training the ALBERT base model. The value was much worse for v2. I would like to ask whether the albert-base models are trained until the losses converge, or whether the model has been trained for only one or two epochs. Also, it would be great if there were an actual training script and/or MLM loss history for the ALBERT pre-training. Thanks! Hojin
The models on the hub are not trained by HuggingFace (unless explicitly mentioned), so the weights are the original implementation weights, ported/converted to the implementation by HF. Whether or not the model is trained “well enough” is a question for the original authors of the model.
1
huggingface
🤗Transformers
BertForMaskedLM’s loss and scores, how the loss is computed?
https://discuss.huggingface.co/t/bertformaskedlm-s-loss-and-scores-how-the-loss-is-computed/607
I have a simple MaskedLM model with one masked token at position 7. The model returns 20.2516 and 18.0698 as loss and score respectively. However, not sure how the loss is computed from the score. I assumed the loss should be loss = - log(softmax(score[prediction]) but computing this loss returns 0.0002. I’m confused about how the loss is computed in the model. import copy from transformers import BertForMaskedLM, BertTokenizerFast import torch model = BertForMaskedLM.from_pretrained('bert-base-uncased') tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') text = "Who was Jim Paterson ? Jim Paterson is a doctor".lower() inputs = tokenizer.encode_plus(text, return_tensors="pt", add_special_tokens = True, truncation=True, pad_to_max_length = True, return_attention_mask = True, max_length=64) input_ids = inputs['input_ids'] masked = copy.deepcopy(inputs['input_ids']) masked[0][7] = 103 for t in range(len(masked[0])): if masked[0][t] != 103: masked[0][t] = -100 loss, scores = model(input_ids = input_ids, attention_mask = inputs['attention_mask'] , token_type_ids=inputs['token_type_ids'] , labels=masked) print('loss',loss) print(scores.shape) pred = torch.argmax( scores[0][7]).item() print("predicted token:", pred, tokenizer.convert_ids_to_tokens([pred]) ) print("score:", scores[0][7][pred]) logSoftmax = torch.nn.LogSoftmax(dim=1) NLLLos = torch.nn.NLLLoss() output = NLLLos( logSoftmax(torch.unsqueeze(logit[0][7], 0)), torch.tensor([pred])) print(output)
Hi @sanaz, I can see few mistakes here You need to mask tokens in the input_ids not labels. And to prepare lables for masked LM set every position to -100 (ignore index) except the masked positions. masked loss is then calculated simply using the CrossEntropy loss between the logits and labels. So correct usage would be text = "Who was Jim Paterson ? Jim Paterson is a doctor".lower() inputs = tokenizer([text], return_tensors="pt") input_ids = inputs["input_ids"] # mask the token input_ids[0][7] = tokenizer.mask_token_id labels = inputs["input_ids"].clone() labels[labels != tokenizer.mask_token_id] = -100 # only calculate loss on masked tokens loss, logits = model( input_ids=input_ids, labels=labels, attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"] ) # loss => 18.2054 # calculate loss manually import torch.nn.functional as F loss2 = F.cross_entropy(logits.view(-1, tokenizer.vocab_size), labels.view(-1)) # loss2 => 18.2054 Hope this helps.
0
huggingface
🤗Transformers
Hidden States of OpenAI GPT2 inconsistent
https://discuss.huggingface.co/t/hidden-states-of-openai-gpt2-inconsistent/11052
Hi, I am trying to use the OpenAI GPT2 and I just realized that the hidden states change every time I run the model and I cannot figure out why. When I use BertModel this does not happen. Does anyone have an explanation for that? Thank you so much in advance!
Do you run the model in evaluation mode? i.e. model.eval() => this will turn off any dropout modules
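A quick sanity-check sketch: with dropout disabled via model.eval(), repeated forward passes over the same input should give identical hidden states:

import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()  # turns off dropout

inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    out1 = model(**inputs).last_hidden_state
    out2 = model(**inputs).last_hidden_state
print(torch.allclose(out1, out2))  # True once dropout is off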
0
huggingface
🤗Transformers
Jointly train two-stage models using Trainer
https://discuss.huggingface.co/t/jointly-train-two-stage-models-using-trainer/11020
Hi all, I want to train a two-stage model containing model1 and model2, in which model2 takes model1’s output as input. For now, I can train model2 properly through the Hugging Face Seq2SeqTrainer, but I have no clue how to jointly train model1 and model2 through the Hugging Face Trainer. Could someone give me some advice? Thank you very much.
I don’t think this is possible. For such a use case, you should definitely check out Accelerate 2 and write your own custom training loop.
0
huggingface
🤗Transformers
I could not able to use save_pretrained on my T5 Model
https://discuss.huggingface.co/t/i-could-not-able-to-use-save-pretrained-on-my-t5-model/11022
I have fine-tuned a T5 model on a task and I am trying to save the model using the save_pretrained method available for Hugging Face models, but I was not able to do so (screenshot omitted). Can someone say what the error is?
The class seems pointless because it doesn’t add functionality on top of T5 as currently written. You could have just used T5 as-is. save_pretrained is part of HF subclassed models, not of PyTorch’s nn.Module’s. Either don’t use this class and just finetune the given T5 model. Or instead of subclassing nn.Module, subclass transformers.PreTrainedModel.
1
huggingface
🤗Transformers
How to save and load fine-tune model
https://discuss.huggingface.co/t/how-to-save-and-load-fine-tune-model/1595
Hi, everyone~ I have defined my model via huggingface, but I don’t know how to save and load the model, hopefully someone can help me out, thanks! class MyModel(nn.Module): def __init__(self, num_classes): super(MyModel, self).__init__() self.bert = BertModel.from_pretrained('hfl/chinese-roberta-wwm-ext', return_dict=True).to(device) self.fc = nn.Linear(768, num_classes, bias=False) def forward(self, x_input_ids, x_type_ids, attn_mask): outputs = self.bert(x_input_ids, token_type_ids=x_type_ids, attention_mask=attn_mask) pred = self.fc(outputs.pooler_output) return pred model = MyModel(num_classes).to(device) # save # load
If you make your model a subclass of PreTrainedModel, then you can use our methods save_pretrained and from_pretrained. Otherwise it’s regular PyTorch code to save and load (using torch.save and torch.load).
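A minimal sketch of the plain PyTorch route for the MyModel wrapper above (it reuses MyModel, num_classes and device from the question):

import torch

# save only the weights
torch.save(model.state_dict(), "mymodel.pt")

# load: rebuild the module first, then restore the weights
model = MyModel(num_classes).to(device)
model.load_state_dict(torch.load("mymodel.pt", map_location=device))
model.eval()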
0
huggingface
🤗Transformers
Issue in the Documentation of transformers for BiET
https://discuss.huggingface.co/t/issue-in-the-documentation-of-transformers-for-biet/10977
Hello, I have been reading the documentation of the BEiT model here 2. In the section on pooler_output this is what is written: “Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for the BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.” After going through the code (transformers/modeling_beit.py at master · huggingface/transformers · GitHub), the pooler output is actually the mean of all hidden states and not a linear projection of the CLS token. Is it possible to update the documentation, as it creates confusion while going through it? Note: I thought of raising the issue in the GitHub repo but couldn’t find how to do it in the case of documentation.
Hi, Thanks for reporting. Indeed, BeitModel currently returns an output of type BaseModelOutputWithPooling. This is a generic class that automatically generates the documentation for the model, defined here 1. However, in this case, it might be better to define a custom BeitModelOutput, that better describes the outputs of the model. Do you mind opening a PR for this? This would mean defining a new dataclass within modeling_beit.py. Otherwise, I’ll do it
0
huggingface
🤗Transformers
ValueError fp16 lm_head.weight
https://discuss.huggingface.co/t/valueerror-fp16-lm-head-weight/10433
I am trying to run run_translation.py with mt5-large and DeepSpeed enabled. I use ds_config_zero3.json as the config file. However, when I try to run this, I get the following error: ValueError: fp16 is enabled but the following parameters have dtype that is not fp16: lm_head.weight Is there some config setting I’m missing that could help resolve this issue?
Hey, did you figure out how to resolve this? I’d be interested to learn what you did. I ran the ASR example here and it ran fine, but I noticed it had fp16 set to false. If I try to save memory by passing --fp16 on the command line or manually setting fp16=True when calling TrainingArguments, I get the same error you report.
0
huggingface
🤗Transformers
Importance of padding for tokens and same size inputs for transformers
https://discuss.huggingface.co/t/importance-of-padding-for-tokens-and-same-size-inputs-for-transformers/11004
We usually pad our inputs/tokens to transformers to the same size (e.g. 512 in BERT). I have a general question regarding padding to a fixed size (in this case 512): would transformers be able to learn if inputs are not padded to a fixed size? Let’s say I have a batch size of B; I could pad the data in each batch to the length of the longest sample in that batch, so each iteration the batch would have a different size. I was wondering if that would actually work for transformers, and if not, why? I’d really appreciate it if someone who knows the answer or has tried it before could help me here. Thanks, S
Hey @seyeeet, I have found a tutorial where the input pipeline works just the way you have described. There the inputs are padded to the longest sequence in each batch, so each batch has a different padding length. Tutorial link: “Transformer model for language understanding” | Text | TensorFlow 2. Hope this helps.
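For the record, dynamic per-batch padding is also standard practice in Transformers itself; a minimal sketch with DataCollatorWithPadding, which pads each batch only to its own longest sequence:

from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = DataCollatorWithPadding(tokenizer)

features = [tokenizer(t) for t in ["short text", "a much longer piece of text than the first one"]]
batch = collator(features)
print(batch["input_ids"].shape)  # padded to the longest sequence in this batch, not to 512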
0
huggingface
🤗Transformers
Evaluate subset of data during training
https://discuss.huggingface.co/t/evaluate-subset-of-data-during-training/10952
Hi all, I’m using the run_mlm.py 1 script. My evaluation set is “too large” (i.e., takes too long to run through the entire evaluation set, every n steps of training); so I was hoping to sub-sample from the evaluation set during training. Is there a simple way to extend or use the Trainer with custom logic for sub-sampling examples from the provided eval_dataset? In the worst case, I can manually specify a subset of the eval set to be fed into the Trainer, but I was hoping to do a random subsample for each in-training evaluation so that I don’t overfit to one sub-sample of the evaluation set. Thanks!
Why not give the Trainer a smaller evaluation dataset? You can then run trainer.evaluate(full_eval_dataset) at the end to evaluate on the full validation set.
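A minimal sketch of that approach; the subset size and seed are arbitrary, and model, training_args, train_dataset and full_eval_dataset are assumed to come from your existing run_mlm.py setup. Re-shuffling with a different seed per run would give you a fresh sub-sample each time:

from transformers import Trainer

small_eval = full_eval_dataset.shuffle(seed=42).select(range(1000))  # random sub-sample for in-training eval

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=small_eval,
)
trainer.train()
metrics = trainer.evaluate(eval_dataset=full_eval_dataset)  # one full evaluation at the end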
0
huggingface
🤗Transformers
Why does ignore_mismatched_sizes increase the number of TfAlbertMainLayer parameters?
https://discuss.huggingface.co/t/why-does-ignore-mismatched-sizes-increase-the-number-of-tfalbertmainlayer-parameters/10920
If I load the model from pretrained without much in the way of configs, I get about 11 million parameters in the ALBERT main layer. If I load it but change the problem type and set it to ignore mismatched sizes, the main layer has 222 million parameters. This seems strange to me; I thought changing the problem type would only affect the classifier? model = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2', config=AlbertConfig(problem_type="single_label_classification"), ignore_mismatched_sizes=True)
Hi! The problem here is that your config object has default layer numbers and sizes that are totally different from the ones in albert-base-v2. If you’d like to train a sequence classification model on top of Albert, you can just do: model = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2', num_labels=2) Alternatively, if you want to use a config object, you should initialize it from albert-base-v2 like this: model = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2', config=AlbertConfig.from_pretrained('albert-base-v2', problem_type="single_label_classification"))
1
huggingface
🤗Transformers
Using sample weights in compute_metrics
https://discuss.huggingface.co/t/using-sample-weights-in-compute-metrics/10924
Is it possible to include sample weights in the compute_metrics function used by Trainer? For simpler cases, I can just encode the gold labels for some samples as -1 and give them a weight of 0 when computing the metrics, but in other cases it might be trickier.
I don’t think it’s possible. For elaborate evaluations you can get your predictions with trainer.predict and then do something more sophisticated with them.
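A minimal sketch of that route, assuming trainer and eval_dataset come from your existing setup and the sample weights live in a NumPy array aligned with the eval set (the original post doesn’t say how they are stored, so the dummy weights below are a placeholder):

import numpy as np

output = trainer.predict(eval_dataset)
preds = output.predictions.argmax(axis=-1)
labels = output.label_ids

sample_weights = np.ones(len(labels))  # replace with your real per-sample weights
weighted_accuracy = (sample_weights * (preds == labels)).sum() / sample_weights.sum()
print(weighted_accuracy)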
0
huggingface
🤗Transformers
How to fine-tune BERT model for NER if forward method doesn’t have “labels” argument
https://discuss.huggingface.co/t/how-to-fine-tune-bert-model-for-ner-if-forward-method-doesnt-have-labels-argument/10925
I want to fine-tune a german basic BERT model on the task of NER. Therefore I want to load a model like this. But when I load it like this model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased", num_labels=len(unique_tags)) and then give it the training data like this outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) I get this error: TypeError: forward() got an unexpected keyword argument 'labels' When I check which kind of class AutoModel uses, it is BertForMaskedLM, which makes sense. So the forward method of this class doesn’t take labels. But how can I then fine-tune it for this specific task? I don’t want to use a model from the hub for token classification or NER because I have my special kind of labels I want to use for specific tokens. Therefore something like this isn’t an option I guess. Thank you very much!
Instead of AutoModel you’ll probably just need AutoModelForTokenClassification. You can also have a look at the existing examples with NER: transformers/examples/pytorch/token-classification at master · huggingface/transformers · GitHub 2
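A minimal sketch reusing the variable names from the question (unique_tags, b_input_ids, b_input_mask and b_labels come from your own preprocessing):

from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "dbmdz/bert-base-german-cased", num_labels=len(unique_tags)
)

outputs = model(b_input_ids, attention_mask=b_input_mask, labels=b_labels)
loss, logits = outputs.loss, outputs.logits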
1
huggingface
🤗Transformers
Why do TFBlenderbotSmallModel and TFBlenderbotSmallForConditionalGeneration have the same trainable_variables?
https://discuss.huggingface.co/t/why-tfblenderbot-smallmodel-and-tfblenderbot-smallforconditionalgeneration-are-the-same-trainable-variables/10615
I downloaded the model from facebook/blenderbot_small-90M and loaded it with BlenderbotSmallTokenizer.from_pretrained() and BlenderbotSmallForConditionalGeneration.from_pretrained() respectively. When I looked into trainable_variables using: for v in model.trainable_variables: print(v) I found they were equal, but the docs say there is a language modeling head in TFBlenderbotSmallForConditionalGeneration, so how can I get the weights of the head?
if last head dense in TFBlenderbotSmallForConditionalGeneration uses the kernel which weights shared with embeddings, how about the bias? Checking the PyTorch implementation, it seems that the language modeling head doesn’t use a bias, as seen here. how can i get all the variables of the model include trainable and untrainable variables? In PyTorch, you can get all parameters of a model as follows: for name, param in model.named_parameters(): print(name, param.shape) cc’ing @Rocketknight1 for how to do this in Tensorflow.
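Not an authoritative answer for the TF side, but as a sketch: the TF models are Keras models, so the standard Keras/tf.Module attributes should list everything, trainable and non-trainable alike:

# all variables (trainable + non-trainable)
for v in model.variables:
    print(v.name, v.shape)

# only the non-trainable ones
for v in model.non_trainable_variables:
    print(v.name, v.shape)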
1
huggingface
🤗Transformers
Why save_steps should be a round multiple of eval_steps when load_best_model_at_end=True?
https://discuss.huggingface.co/t/why-save-steps-should-be-a-round-multiple-of-eval-steps-when-load-best-model-at-end-true/10841
I am confused about these three arguments, as explained here 1 by @sgugger save_steps doesn’t care about the best model, so if I set eval_steps=100 and save_steps=200, every 200 steps, there is a checkpoint (200, 400, 600, …) but every 100 steps we have an evaluation of our model (100, 200, 300, …). Now, if the evaluation in 300 is the best, it will not be saved and is lost. But if we set load_best_model_at_end=True and keep the eval_steps=100, save_steps=200, eval_steps will override the save_steps because it will save a checkpoint every 100 steps so it could load the best model at the end. Here is the question: If all I said is true, why when load_best_model_at_end=True is set, save_steps should be a round multiple of eval_steps? It doesn’t make sense because when load_best_model_at_end is True, the model doesn’t care about save_steps and saves every eval_steps.
You are entirely right! It’s a bug in the Trainer which should be fixed by this PR 3.
1
huggingface
🤗Transformers
Trainer using Checkpoint makes TPU crash
https://discuss.huggingface.co/t/trainer-using-checkpoint-makes-tpu-crash/10794
Hi, I am running a Bert from scratch on google cloud, and it is working. But when I am doing the training using a saved Checkpoint with “model_name_or_path” it makes the TPU crash with SIGKILL error (memory issue I guess). I don’t understand this behavior, since if I rerun it without the checkpoint it works with no problems. Running with checkpoint works only if I use just 1 core (num_cores=1) which is not convenient for me (takes a much larger time). Anyone have an idea to help me? Thanks
You might need more RAM to be able to resume from a checkpoint. The core of the issue is that the optimizer state is loaded on each TPU before being transferred to the XLA device (it can’t be directly loaded on the XLA device sadly) but since you have 8 processes loading it, it’s loaded 8 times on CPU.
0
huggingface
🤗Transformers
Some weights of BertModel were not initialized from the model checkpoint
https://discuss.huggingface.co/t/some-weights-of-bertmodel-were-not-initialized-from-the-model-checkpoint/3805
I was able to train at the word level; after that I tested with the fill-mask pipeline and got the warning below: Some weights of BertModel were not initialized from the model checkpoint at ./output_model and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. tokenizer = PreTrainedTokenizerFast(tokenizer_file="./my-tokenizer.json") model = BertForMaskedLM(config=BertConfig(vocab_size=1000000)) < after training> fill_mask = pipeline( "fill-mask", model="./output_model", tokenizer=tokenizer ) # output warning above
Hi, I also get the same warning while using AutoModelForMaskedLM in a fill-mask pipeline, even though I fine-tuned it with AutoModelForMaskedLM… Having randomly initialized layers should not be good when using the model. Is there any way to solve it?
0
huggingface
🤗Transformers
How to convert model output logits into string sentences during training to check what the model is outputting?
https://discuss.huggingface.co/t/how-to-convert-model-output-logits-into-string-sentences-during-training-to-check-what-the-model-is-outputting/10756
I’m training GPT-2, constructed in the following manner: configuration = GPT2Config( output_hidden_states=True, ) architecture = AutoModelForCausalLM.from_config( configuration) When I pass an input sequence into the model, like so, I can access its output logits: # Output type: BaseModelOutputWithPastAndCrossAttentions architecture_output = self.architecture( input_ids=intractn.task_input_ids, attention_mask=intractn.task_attention_masks) How does one convert the logits into string sentences? I discovered the .generate() method, but this seems to generate new outputs. I suppose I could convert the logits to a distribution, sample, convert to token ids and then use tokenizer.decode() but that seems too manual. What’s the right way to do this in HuggingFace?
RylanSchaeffer: # shape: (batch size, sequence length, hidden dimension e.g. 768) It seems to me that these are the outputs of the base model. To get token predictions, you need the output of the LMHead (often a linear projection of hidden_dim → vocab_size). If you have the logits of shape bs, seqlen, vocab_size, you can simply do a softmax on that last dimension, select top1, and decode. This is not as much manual work as you may expect, but you do need the outputs of the LMHead. As you said, you’d typically generate with generate, so I am not sure whether I understand your use case.
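If you do want to turn the per-position predictions into text directly (rather than generating new continuations), a minimal sketch looks like this; it assumes outputs come from the full causal LM (with its LM head), so that outputs.logits has shape (batch_size, seq_len, vocab_size):

logits = outputs.logits
# argmax over the vocabulary dimension; softmax is monotonic, so it can be skipped for a greedy pick
predicted_ids = logits.argmax(dim=-1)

# decode each sequence in the batch back to a string
decoded = [tokenizer.decode(ids) for ids in predicted_ids]
print(decoded)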
0
huggingface
🤗Transformers
No loss being logged, when running MLM script (Colab)
https://discuss.huggingface.co/t/no-loss-being-logged-when-running-mlm-script-colab/8134
When using the run_MLM script and pairing with XLA, I am seeing that despite logging to files I still don’t get a step-by-step output of the metrics. %%bash python xla_spawn.py --num_cores=8 ./run_mlm.py --output_dir="./results" \ --model_type="big_bird" \ --config_name="./config" \ --tokenizer_name="./tokenizer" \ --train_file="./dataset.txt" \ --validation_file="./val.txt" \ --line_by_line="True" \ --max_seq_length="16000" \ --weight_decay="0.01" \ --per_device_train_batch_size="1" \ --per_device_eval_batch_size="1" \ --learning_rate="3e-4" \ --tpu_num_cores='8' \ --warmup_steps="1000" \ --overwrite_output_dir \ --pad_to_max_length \ --num_train_epochs="1" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --do_train \ --do_eval \ --logging_steps="10" \ --evaluation_strategy="steps" \ --eval_accumulation_steps='10' \ --report_to="tensorboard" \ --logging_dir='./logs' \ --save_strategy="epoch" \ --load_best_model_at_end='True' \ --metric_for_best_model='accuracy' \ --skip_memory_metrics='False' \ --gradient_accumulation_steps='500' \ --use_fast_tokenizer='True' \ --log_level='info' \ --logging_first_step='True' \ 1> >(tee -a stdout.log) \ 2> >(tee -a stderr.log >&2) As you can see, I am logging out stderr and stdout to files but I can see that it doesn’t log any step - only the end-of-epoch ones when training is finished. Using TensorBoard also doesn’t help when loss isn’t being logged anyways which is quite weird. I have adjusted logging_steps but that doesn’t seem to help. I am quite confused - Trainer is supposed to ouput loss to the Cell output too, but that doesn’t happen either. Does anyone know how I can log the metrics for ‘n’ steps?
Basically, despite providing the logging_steps argument, it doesn’t apparently override the default which I presume to be set to epoch - same with evaluation strategy which also runs during epochs instead of the no. of steps provided. This is what the script receives on its side:- adafactor=False, adam_beta1=0.9, adam_beta2=0.98, adam_epsilon=1e-08, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_find_unused_parameters=None, debug=[], deepspeed=None, disable_tqdm=False, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=10, eval_steps=10, evaluation_strategy=IntervalStrategy.STEPS, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, gradient_accumulation_steps=500, greater_is_better=True, group_by_length=False, ignore_data_skip=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0003, length_column_name=length, load_best_model_at_end=True, local_rank=-1, log_level=20, log_level_replica=-1, log_on_each_node=True, logging_dir=./logs, logging_first_step=True, logging_steps=10, logging_strategy=IntervalStrategy.STEPS, lr_scheduler_type=SchedulerType.LINEAR, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=accuracy, mp_parameters=, no_cuda=False, num_train_epochs=1.0, output_dir=./results, overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=1, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=results, push_to_hub_organization=None, push_to_hub_token=None, remove_unused_columns=True, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=./results, save_on_each_node=False, save_steps=500, save_strategy=IntervalStrategy.EPOCH, save_total_limit=None, seed=42, sharded_ddp=[], skip_memory_metrics=False, tpu_metrics_debug=False, tpu_num_cores=8, use_legacy_prediction_loop=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0.01, ) Which seem to be true to my provided flags, but just not being acted upon. Will dig more in the script to see what might be the issue.
0
huggingface
🤗Transformers
Encoder Decoder Loss
https://discuss.huggingface.co/t/encoder-decoder-loss/4335
Hi all, I was reading through the encoder decoder 23 transformers and saw how loss was generated. But I’m just wondering how it is internally generated? Is it something like the following: Suppose I have the following pair: ("How are you?", "I am doing great"). In this case, is it calculating the cross entropy loss for the four output tokens and then averaging them?
Padding tokens in the labels should be replaced by -100 so the cross-entropy loss ignores the pad tokens when computing the loss. The loss is actually computed like this: shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() # prediction_scores is the logits labels = labels[:, 1:].contiguous() loss_fct = CrossEntropyLoss() lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
1
huggingface
🤗Transformers
How DeepSpeed interacts with Trainer optimizer
https://discuss.huggingface.co/t/how-deepspeed-interacts-with-trainer-optimizer/10618
Say we would like to use the Transformers+DeepSpeed integration to fine-tune a relatively large model. That model is too big to fit both the parameters and the full optimizer states in GPU memory at once, so instead we want to freeze most of the parameters and fine-tune a subset of them, or alternatively to tune an adapter that wraps the model. That way we avoid needing to store Adam buffers for the frozen ones. Additionally, we want to use DeepSpeed for ZeRO-Offload. In order to do that, I believe we need to manually pass our optimizer to Trainer (otherwise Trainer will create an optimizer for all of the parameters, which we would like to avoid). But it looks like DeepSpeed itself has certain optimizer configurations. If we pass a custom optimizer to Trainer along with the DeepSpeed config, will it appropriately train only the subset of parameters specified in that optimizer, or will it end up creating another on the backend that tries to optimize the whole model?
Might’ve answered my own question, looking at this piece of documentation: transformers.deepspeed — transformers 4.12.0.dev0 documentation 3 If you pass in an optimizer to Trainer and do not include an “optimizer” in the DeepSpeed configuration, it looks like it will use that passed in optimizer.
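A hedged sketch of that setup, i.e. a custom optimizer over only the unfrozen parameters passed in via the Trainer's optimizers argument, together with a DeepSpeed config file (here assumed to be named ds_config.json) that defines no "optimizer" section of its own; whether this plays fully nicely with ZeRO-Offload is exactly what this thread is asking about, so treat it as a starting point rather than a definitive recipe:

import torch
from transformers import Trainer, TrainingArguments

# only optimize the parameters that were left unfrozen
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable_params, lr=1e-5)

args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_config.json",    # assumed config without an "optimizer" key
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    optimizers=(optimizer, None),  # (optimizer, lr_scheduler); None lets the Trainer build a scheduler
)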
0
huggingface
🤗Transformers
Next sentence prediction on custom model
https://discuss.huggingface.co/t/next-sentence-prediction-on-custom-model/6908
I’m trying to use a BERT-based model (jeniya/BERTOverflow · Hugging Face 4) to do Next Sentence Prediction. This is essentially a BERT model that has been pretrained on StackOverflow data. Now, to pretrain it, they should have obviously used the Next Sentence Prediction task. But when I do a AutoModelForNextSentencePrediction.from_pretrained("jeniya/BERTOverflow"), I get a warning message saying: Some weights of BertForNextSentencePrediction were not initialized from the model checkpoint at jeniya/BERTOverflow and are newly initialized: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Now, I get that the message is telling me that the NSP head does not come with this model and so has been initialized randomly. My question is, if they have published pre-trained a BERT model on some custom data, shouldn’t they also have used an NSP head for their pretraining objective? If so, where did that head go? Did they just throw it away? If so, how would I go about getting this custom model to work for the task of NSP? Should I pre-train the whole goddamn thing again, but this time not throw away the NSP head? Or can I simply do something like use AutoModel, and extract the [CLS] token representation, and put a MLP on top of that and train it with a few examples to do NSP? The former would be infeasible given the compute requirements. I feel like the latter is just wrong. Am I missing something? Any help would be greatly appreciated! Thank you!
Hey there @msamogh I am facing a similar problem as yours: have you discovered something since the time you created this thread? Also, if you know it, does this mean that models with architecture “BertForMaskedLM” have been trained ONLY on MLM, and not on NSP, and so I have to do that again?
0
huggingface
🤗Transformers
Decoding the predicted output array in distilbertbase uncased model for NER
https://discuss.huggingface.co/t/decoding-the-predicted-output-array-in-distilbertbase-uncased-model-for-ner/10673
Hi, I am working on extracting legal entities (dates) from a corpus of agreements. In the training set, I have tokenized the agreement text, the date labels are tagged in the IOB convention, and both are fed to the distilbert-base-uncased model. I defined the training arguments, data collator and compute_metrics methods. After the training is completed, I input the preprocessed prediction dataset, which has only the agreement text, and I need to predict the date labels in it. I am getting a tensor array as output and am not aware of how to extract the date labels from this array. Can I get support on this issue?
In case of NER, one typically uses an xxxForTokenClassification model (which adds a linear layer on top of the base Transformer model). The logits of such models are typically of shape (batch_size, seq_len, num_labels). Let’s take an existing, fine-tuned BertForTokenClassification model from the hub and perform inference on a new, unseen text: from transformers import AutoTokenizer, BertForTokenClassification model_name = "dslim/bert-base-NER" tokenizer = AutoTokenizer.from_pretrained(model_name) model = BertForTokenClassification.from_pretrained(model_name) Let’s prepare a new text for the model: text = "Obama was the president of the United States and he was born in Hawai." encoding = tokenizer(text, return_tensors="pt") We now forward it through the model: # forward pass outputs = model(**encoding) We now take the logits from the outputs, which are the scores that the model gives for each of the classes. In case of token classification, the logits are of shape (batch_size, seq_len, num_labels). Let’s check the shape: logits = outputs.logits print(logits.shape) This prints torch.Size([1, 18, 9]). The batch size is 1 as we only have a single sentence, we have a sequence length of 18 tokens, and the number of labels is 9. So apparently this model classifies each token to belong to 1 of 9 possible labels. We can get the predictions by performing an argmax on the last dimension (i.e., the labels dimension), as follows: predicted_label_classes = logits.argmax(-1) print(predicted_label_classes) This prints: tensor([[0, 3, 0, 0, 0, 0, 0, 7, 8, 0, 0, 0, 0, 0, 7, 7, 0, 0]]) Let’s now convert each predicted class index to the corresponding label name using the id2label mapping of the model’s configuration, as follows: predicted_labels = [model.config.id2label[id] for id in predicted_label_classes.squeeze().tolist()] print(predicted_labels) This prints: ['O', 'B-PER', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'B-LOC', 'O', 'O'] To see the correspondence between the tokens and the predicted labels, let’s print them side-by-side: for id, label in zip(encoding.input_ids.squeeze().tolist(), predicted_labels): print(tokenizer.decode([id]), label) This prints: [CLS] O Obama B-PER was O the O president O of O the O United B-LOC States I-LOC and O he O was O born O in O Ha B-LOC ##wai B-LOC . O [SEP] O So now we can clearly see the predictions. However, as the model uses subword tokenization, we would like to convert those to word-level predictions. Here, a number of aggregation strategies apply. One strategy is to just select the prediction for the first token of each word, another strategy is to average the predictions for all tokens of a word, another strategy is to take the biggest logit for all tokens of a word, etc. This depends on how the model was fine-tuned. The pipeline API 1 of HuggingFace supports various aggregation strategies, and abstracts away all of what I did above + grouping the entities for the user. 
You can call it as follows: from transformers import pipeline nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="first") nlp(text) This prints: [{'end': 5, 'entity_group': 'PER', 'score': 0.998693, 'start': 0, 'word': 'Obama'}, {'end': 44, 'entity_group': 'LOC', 'score': 0.9994223, 'start': 31, 'word': 'United States'}, {'end': 69, 'entity_group': 'LOC', 'score': 0.9988722, 'start': 64, 'word': 'Hawai'}] Under the hood, it will use the offsets_mapping (which is only supported by fast tokenizers) to know to which word each token belongs. You can check the source code here 3.
0
huggingface
🤗Transformers
Text format for language modeling
https://discuss.huggingface.co/t/text-format-for-language-modeling/10464
Hello I cannot seem to find this information anywhere, but perhaps because the search terms are quite general. I am wondering how the input data format has to look for language modeling. I am particularly interested in CLM but I’d also like to know for MLM. My intuition tells me that one should: split dataset into sentences ? do linguistic tokenization (split by whitespace) ? insert a “begin of sentence” and “end of sentence” tokens merge the sentences again chunk the text up in blocks of max_seq_len length for the model (you could even do a sliding window of max_seq_len size) so you just have a text file and every paragraph is one (potentially huge) line This way, the model should be able to generalize better as it learns arbitrary start and ending positions for sentences. That being said, I do not know whether the position embeddings have a negative impact on this process as they are not “correct” any more. (For chunked sentences, the first word may not be the first word of the sentence.) Looking at the LM examples 1, it seems that such steps are not taken automatically, except for chunking. So is my intuition wrong? If so, how exactly should a given text file look? As a side question: it is not clear to me from the example run_clm script what exactly is meant with " We drop the small remainder". If a text file is given with potentially very long lines (one line per paragraph), does that mean that everything exceeding the block size in that line is discarded? If so, there must be a better way to organise your text file than I illustrated above. EDIT: I found this useful documentation 4 by HF concerning a strided sliding window, which is exactly what I intended above. It seems that this is not implemented in the examples, however. I wonder why.
Hey @BramVanroy, you should check the new TF notebooks about that. I think they explain very clearly what steps are involved: notebooks/language_modeling-tf.ipynb at new_tf_notebooks · huggingface/notebooks · GitHub
0
huggingface
🤗Transformers
Log Perplexity using Trainer
https://discuss.huggingface.co/t/log-perplexity-using-trainer/4947
Hi there, I am wondering what the optimal solution would be to also report and log perplexity during the training loop via the Trainer API. What would the corresponding compute_metrics function look like? So far I have tried without success, since I am not 100% sure what the EvalPrediction output looks like. Thanks in advance, Simon
This brings me to an adjacent question: what would a compute_metrics function look like that can also report the relative change in perplexity and in train_loss? I would be super grateful if anyone could provide a little guidance!
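One practical route, if you mainly need the number rather than a per-step metric, is to skip compute_metrics and derive perplexity from the evaluation loss (this is what the example run_mlm.py script does), since perplexity is just the exponential of the cross-entropy loss:

import math

eval_results = trainer.evaluate()
perplexity = math.exp(eval_results["eval_loss"])
print(f"Perplexity: {perplexity:.2f}")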
0
huggingface
🤗Transformers
Train T5 from scratch
https://discuss.huggingface.co/t/train-t5-from-scratch/1781
Hi all! I want to train T5 in a new language from scratch an I think the best way to do this is through the unsupervised denoising task. I’ve found that there is no function in huggingface to create the train data (masked data) as the T5 documentation indicates 25 so I’ve tried to create by my own. According with the T5 original paper, if you have two consecutive tokens to masking you must mask them using only one sentinel token so I need a function that searches the consecutive tokens (“rachas” in my language) in the random indices choosed. Here you have my code: def racha_detection(lista): # It returns a list of lists where each sub-list contains the consecutive tokens in the list rachas = [] racha = [] for i, element in enumerate(lista): if (i<len(lista)-1) and (lista[i+1] == element+1): racha.append(element) else: if len(racha)>0: rachas.append(racha + [element]) else:# (i!=len(lista)-1): rachas.append([element]) racha = [] return rachas def masking(tokenized_sentence, rachas): # Function to mask a tokenized_sentence (token ids) following the rachas described in rachas # Only one sentinel_token per racha sent_token_id = 0 enmascared = tokenized_sentence.copy() for racha in rachas: sent_token = f'<extra_id_{sent_token_id}>' sent_id = tokenizer.encode(sent_token)[0] for i, idx in enumerate(racha): if i==0: enmascared[idx] = sent_id else: enmascared[idx] = -100 sent_token_id += 1 enmascared = [t for t in enmascared if t!=-100] return enmascared def add_noise(sentence, tokenizer, percent=0.15): # Function that takes a sentence, tokenizer and a noise percentage and returns # the masked input_ids and masked target_ids accordling with the T5 paper and HuggingFace docs # To see the process working uncomment all the prints ;) tokenized_sentence = tokenizer.encode(sentence) #print('PRE-MASKED:') #print('INPUT: {}'.format(tokenizer.convert_ids_to_tokens(tokenized_sentence))) idxs_2_mask = sorted(random.sample(range(len(tokenized_sentence)), int(len(tokenized_sentence)*percent))) rachas = racha_detection(idxs_2_mask) enmascared_input = masking(tokenized_sentence, rachas) #print('RACHAS INPUT: {}'.format(rachas)) idxs_2_mask = [idx for idx in range(len(tokenized_sentence)) if idx not in idxs_2_mask] rachas = racha_detection(idxs_2_mask) enmascared_target = masking(tokenized_sentence, rachas) #print('RACHAS TARGET: {}'.format(rachas)) #print('POST-MASKED:') #print('INPUT: {}'.format(tokenizer.convert_ids_to_tokens(enmascared_input))) #print('TARGET: {}'.format(tokenizer.convert_ids_to_tokens(enmascared_target))) return enmascared_input, enmascared_target I dont know if it is correct but it generates sequences like the sequences in the examples What do you think?
@amlarraz - how did you get on with this? Did it work ok with T5 training? Any tips gratefully appreciated!
0
huggingface
🤗Transformers
Cannot load training_args.bin
https://discuss.huggingface.co/t/cannot-load-training-args-bin/10614
Hi, after the model training I have a saved training_args.bin. However, loading it with torch.load(...) as suggested in “How to load training_args” results in an error (screenshot of the traceback attached). Am I doing something wrong or is it a bug? Thanks!
A mistake on my side, though it may be relevant to somebody. I extended the TrainingArguments class to add my own arguments, and the file with the new class (training_arguments.py in my case) must be present in the same directory from which you are loading the arguments. Closing this.
1
huggingface
🤗Transformers
Making sense of duplicate arguments in Huggingface’s hyperparameter search work flow
https://discuss.huggingface.co/t/making-sense-of-duplicate-arguments-in-huggingfaces-hyperparameter-search-work-flow/10419
In trying to use the Trainer’s hyperparameter_search to run trials of my experiments, I found that there are some arguments that seemingly mean the same thing and I am trying to understand which take precedence.or if any of them can be ignored. I think there should be some documentation on this issue as it’s easy to get confused. The reason for this duplicity comes from the fact that there are the arguments to hyperparameter_search itself, the arguments to TrainingArguments, as well as the additional arguments that can be passed to the library doing the hyperparameter tuning (such as ray tune) and they are not necessarily mutually exclusive (at least in the way that I interpret them). Here is a list of issues regarding parameter confusion that I am having: Checkpointing arguments. TrainingArguments and RayTune define their own checkpointing parameters. TrainingArguments has “metric_for_best_model”,“save_steps”, “save_total_limit” and ray tune has “checkpoint_score_attr”, “checkpoint_freq”, “keep_checkpoints_num” TrainingArgument’s load_best_model_at_end. This is more so a concern that I have in regards to issue #1. I am using WandB to log my results and for each trial, I want it to save the best performing model to its “artifacts” folder but can I have confidence that “load_best_model_at_end” will do anything if ray-tune handles checkpointing on its own side of things? Metric Optimization Parameters. hyperparameter_search has “compute_objective” and “direction”. TrainingArguments has metric_for_best_model and greater_is_better. Then ray-tune has metric and mode. I’m assuming the arguments to hyperparameter_search simply pass those arguments down to the corresponding ones of ray-tune (or whatever tuning library is being used). I think that just to give the user confidence in what is going on maybe a warning saying that passing those additional ray tune parameters is redundant? @sgugger maybe you can clear up some of this confusion? I know that there is also this post in regards to the hyperparameter_search function. But felt it better to make a new one as this is a bit of a loaded question.
I am also concerned with the following arguments. Still, I can probably clear up some of the questions for you. From what I’ve inspected from the logs, load_best_model_at_end argument is ignored during the hyperparameter search. Since load_best_model_at_end is ignored, I believe all the corresponding args (e.g. metric_for_best_model) are ignored as well - at least reproducing the results while changing this parameter, I get the same values. Hope @sgugger can clarify.
0
huggingface
🤗Transformers
How release notes are created in Transformers repo
https://discuss.huggingface.co/t/how-release-notes-are-created-in-transformers-repo/10575
Hi, the release notes in the Transformers repository are amazing: fully detailed, well written, organized. It feels like nothing is missing; PRs are linked, contributors are mentioned. I’m trying to do the same for my repositories, but I’m facing some issues: I’m doing everything manually, which means I sometimes forget a PR or a contributor. I also do everything at once: when I make a release, I filter all the PRs made since the last release and summarize them in the release notes. It’s time-consuming, and I feel a continuous approach would be better. So I’m wondering how you handle release notes at HuggingFace. Do you have a specific process? Specific tools?
Hi @colanim! Thank you for the kind words! Release notes are an important part of the project, as mentioning each contributor’s contribution is essential: the project would not exist without these contributions. In the past, we were doing everything manually, but, as you have seen, it is error-prone in a domain where errors are not acceptable. To mitigate this, we built a tool that scans all commits merged since the previous version and formats them correctly. We’re left with a big list of contributions that we can copy/paste into the release notes. I’d happily share the tool with you, but it has quite a few flaws that make it unfit for general usage. Then comes a second important part, where we do it like you do: we do everything at once, filtering PRs and grouping them into categories. This is indeed time-consuming, but it is important work. It serves as documentation, and as a guide for potential breaking changes. It is not unusual to spend between 1 and 2 hours on release notes, reordering commits into categories. I hope that helps!
1
huggingface
🤗Transformers
EvalPrediction has an unequal number of label_ids and predictions
https://discuss.huggingface.co/t/evalprediction-has-an-unequal-number-of-label-ids-and-predictions/10557
The EvalPrediction object received in the Trainer’s compute_metrics function contains an unequal number of label_ids and predictions. My compute_metrics function given to the Trainer looks like this: [screenshot]. The shapes I print: [screenshot]. And the error message I get: [screenshot]. Does anyone have a solution to this?
What does your data look like? I’m afraid there might have been some labels that were a list concatenated with single labels or something like that.
0
huggingface
🤗Transformers
How to use Transformer XL for sequence classification?
https://discuss.huggingface.co/t/how-to-use-transformer-xl-for-sequence-classification/10551
I cannot understand how to fine-tune ‘Transformer XL’ for sequence classification. I am getting this error RuntimeError: stack expects each tensor to be equal size, but got [1] at entry 0 and [8] at entry 2 and I understand that it is due to my sequences having varying lengths but I am not sure how this is intended to be remedied for this specific model. I have created a simple reproduceable toy example with the intention of understanding how this model is intended to be used: !pip install transformers==4.10.0 !pip install datasets==1.9.0 from transformers import AutoTokenizer from transformers import TransfoXLForSequenceClassification from transformers import TrainingArguments, Trainer import torch class newDatasetSimple(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) # print(item) return item def __len__(self): return len(self.labels) tokenizer = AutoTokenizer.from_pretrained('transfo-xl-wt103') model = TransfoXLForSequenceClassification.from_pretrained('transfo-xl-wt103') texts = ['This is a sentence', 'Here is another', 'Short sentence', 'I ran', 'This is the longest sentence of the bunch', 'What?', 'Who?', 'Test.', 'Hey', 'A', 'The', 'So', 'yes', 'cool', 'beans', 'In the flesh'] labels = [0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1] encodings = tokenizer(list(texts)) ds = newDatasetSimple(encodings, labels) training_args = TrainingArguments(output_dir='.', num_train_epochs=50, per_device_train_batch_size=16, warmup_steps=0, logging_steps=1, learning_rate=1e-5) trainer = Trainer(model=model, args=training_args, train_dataset=ds) trainer.train() Could someone show me what changes need to be made in the code to get the model to train properly? Thank you for reading!
Can you post the full error message? As noted here 3, TransformerXL is the only model in the library that is not supported by the Trainer (you would need to overwrite it).
0
huggingface
🤗Transformers
How to create a config.json after saving a model
https://discuss.huggingface.co/t/how-to-create-a-config-json-after-saving-a-model/10459
Hi, I am trying to convert my model to ONNX format with the help of this notebook, but I got an error, since config.json does not exist. My model is a custom model with extra layers, similar to this one. Now how can I create a config.json file for it?
You need to subclass it to have the save_pretrained methods available. So instead of class Mean_Pooling_Model(nn.Module): use from transformers.modeling_utils import PreTrainedModel class Mean_Pooling_Model(PreTrainedModel): It will add extra functionality on top of nn.Module. See the PreTrainedModel class definition: github.com/huggingface/transformers/blob/bcc3f7b6560c1ed427f051107c7755956a27a9f2/src/transformers/modeling_utils.py#L415
1
huggingface
🤗Transformers
Specify Loss for Trainer / TrainingArguments
https://discuss.huggingface.co/t/specify-loss-for-trainer-trainingarguments/10481
I’d like to fine-tune for a regression task rather than a classification task. How do I change the default loss in either TrainingArguments or Trainer()?
You can overwrite the compute_loss method of the Trainer, like so: from torch import nn from transformers import Trainer class RegressionTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): labels = inputs.get("labels") outputs = model(**inputs) logits = outputs.get('logits') loss_fct = nn.MSELoss() loss = loss_fct(logits.squeeze(), labels.squeeze()) return (loss, outputs) if return_outputs else loss However, several models in the library have an attribute of their config called problem_type, which you can set to “regression”. In that case, you shouldn’t overwrite anything, and you can just use the default loss of the model.
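For the second option, a minimal sketch, assuming a model family (such as BERT) whose configuration supports problem_type in a recent version of the library:

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=1,                 # single regression target
    problem_type="regression",    # the model then applies MSELoss internally
)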
0
huggingface
🤗Transformers
Fine-Tuning results suggest some underlying implementation error?
https://discuss.huggingface.co/t/fine-tuning-results-suggest-some-underlying-implementation-error/10516
I’m trying to find-tune BERT on a regression task. My target values are approximately uniformly distributed over 0 to 100. I train using MSE The training loss and validation loss appear to go down: But when I look at the output predictions, they’re all nearly the same value: 13.34873,13.34946,13.34548,13.34980,13.34415,13.35009,13.35031,13.35068,13.35060,13.34515,13.34916,13.34391,13.32421,13.33146,13.29470,13.34953,13.35133,13.34735,13.34369,13.34804,13.35447,13.34434,13.35356,13.34438,13.35195,13.35314,13.34806,13.33857,13.34869,13.34059,13.35074,13.34365,13.35027,13.34974,13.35198,13.34209,13.34324,13.35140,13.35044,13.34025,13.34005,13.35257,13.30577,13.34795,13.33279,13.34773,13.33482,13.35300,13.34842,13.33357,13.34200,13.35000 This suggests to me that I’m doing something incorrectly. My code is below. Could someone please help me? training_args = TrainingArguments( output_dir=results_dir, # output directory num_train_epochs=10, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir=results_dir, # directory for storing logs logging_steps=10, report_to='wandb', do_eval=True, evaluation_strategy="steps", eval_steps=10, ) class RegressionTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): labels = inputs.get("labels") outputs = model(**inputs) logits = outputs.get('logits') loss = torch.mean(torch.square(logits.squeeze() - labels.squeeze())) return (loss, outputs) if return_outputs else loss pytorch_model_save_path = os.path.join(results_dir, 'pytorch_model.bin') if os.path.isfile(pytorch_model_save_path): # If model was already fine-tuned # yes, pass the whole results dir; see https://github.com/huggingface/transformers/issues/1620 model = DistilBertForSequenceClassification.from_pretrained( results_dir, num_labels=1) else: # If model needs to be fine-tuned # Set output dimension to 1 to perform regression model = DistilBertForSequenceClassification.from_pretrained( "distilbert-base-uncased", num_labels=1) trainer = RegressionTrainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, eval_dataset=eval_dataset, # compute_metrics=compute_eval_metrics, ) if force_train or not os.path.isfile(pytorch_model_save_path): trainer.train() trainer.save_model(output_dir=results_dir) all_prediction_output = trainer.predict(all_dataset)
Turns out there was no error! Two things: The learning rate was small and the validation loss was being evaluated very frequently, which explains why the validation loss was so smooth. I needed to run 50 training epochs to see a real difference. That seems odd, but so be it
0
huggingface
🤗Transformers
Subclassing a pretrained model for a new objective
https://discuss.huggingface.co/t/subclassing-a-pretrained-model-for-a-new-objective/10521
I would like to use a pretrained model as an encoder for a new task. It is essentially multiple sequence classification objectives, like in the ...ForSequenceClassification models, but with an output layer for each subtask. I could just create wrappers around the encoder, but I’d like to subclass PreTrainedModel to integrate better with the Trainer class. How exactly should I do this? Do I need to create a config class as well? I will at least need to supply an extra list or dict to the config telling it how many classes each subtask has. Thanks!
You can definitely subclass PretrainedConfig for your custom config and PreTrainedModel for your custom model, then access all the methods of the library.
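A rough sketch of what that could look like; the names and extra config attributes here are purely illustrative, and the shared encoder plus the forward pass are omitted for brevity:

import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel

class MultiTaskConfig(PretrainedConfig):
    model_type = "multi_task_encoder"

    def __init__(self, task_num_classes=None, hidden_size=768, **kwargs):
        super().__init__(**kwargs)
        self.task_num_classes = task_num_classes or {}  # e.g. {"task_a": 3, "task_b": 5}
        self.hidden_size = hidden_size

class MultiTaskModel(PreTrainedModel):
    config_class = MultiTaskConfig

    def __init__(self, config):
        super().__init__(config)
        # one classification head per subtask; the pretrained encoder would also be built here
        self.heads = nn.ModuleDict({
            name: nn.Linear(config.hidden_size, num_classes)
            for name, num_classes in config.task_num_classes.items()
        })

With this in place, save_pretrained/from_pretrained and the Trainer integration should be available on the subclass.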
0
huggingface
🤗Transformers
What is cause and solution to Trainer error: cuda RuntimeError 711?
https://discuss.huggingface.co/t/what-is-cause-and-solution-to-trainer-error-cuda-runtimeerror-711/10482
I’m trying to fine-tune DistilBert using Trainer, following the HuggingFace “Fine-tuning with custom datasets” tutorial on huggingface.co. When I try running the example, I get the following error: RuntimeError: cuda runtime error (711) : peer mapping resources exhausted at /pytorch/aten/src/THC/THCGeneral.cpp:139 What does this error mean and how do I fix it?
RylanSchaeffer: RuntimeError: cuda runtime error (711) : peer mapping resources exhausted Can you run your code on CPU, to get a more informative error message?
0
huggingface
🤗Transformers
Saving eval loss for every evaluation/saved checkpoint with Trainer
https://discuss.huggingface.co/t/saving-eval-loss-for-every-evaluation-saved-checkpoint-with-trainer/10477
I am running the trainer with --do_eval --report_to none --evaluation_strategy epoch --num_train_epochs 10, so I am expecting the trainer to evaluate after every epoch and save those results in either eval_results.json or all_results.json. But those files unfortunately only contain the results of the last evaluation. It is useful that checkpoints are saved, but I cannot seem to find the evaluation results for each checkpoint. That would be very useful for selecting the optimal checkpoint.
They will be in the log_history field of the trainer_state, which is also saved in the same folder.
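A small sketch for pulling the per-evaluation losses back out of a checkpoint's trainer_state.json (the path below is just an example; use your own output_dir/checkpoint-XXXX folder):

import json

with open("output_dir/checkpoint-5000/trainer_state.json") as f:
    state = json.load(f)

# log_history holds one dict per logging/evaluation event
for entry in state["log_history"]:
    if "eval_loss" in entry:
        print(entry["step"], entry["eval_loss"])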
1
huggingface
🤗Transformers
Using BERT and RoBERTa for (causal?) language modeling
https://discuss.huggingface.co/t/using-bert-and-roberta-for-causal-language-modeling/10442
Hi all! I’m hoping to use pretrained BERT/RoBERTa for language modeling, i.e. scoring the likelihood of sentences. There have been quite a few blog posts/issues on this, but no obvious consensus yet. I attempted an implementation with BertLMHeadModel and RobertaForCausalLM here 4, which had weird issues like: RoBERTa has super large perplexity values, and BERT cannot correctly compare the relative perplexity of simple sentences. (Please see more details in the Github issue above.) @gugarosa kindly suggests that I shouldn’t evaluate pretrained BERT/RoBERTa directly, but should train them with causal LM objective beforehand. However, given the size of their pretraining data, it’s unlikely to retrain them myself. Is there any existing checkpoints that I can use directly? Or, would you recommend any other models (e.g. BertForMaskedLM?) or evaluation metrics (other than perplexity) instead? Thanks in advance!
Continuing our discussion on Github… You are definitely correct when saying that it might be unfeasible to train from scratch as they initially did, specially due to the size of your data. On the other hand, imagine that you have a pre-trained BERT/RoBERTa model and you attach the LMHead on top of it. You could freeze the pre-trained parameters from the initial pre-trained BERT or even attach a small learning rate to this part of architecture, while you fine-tune the LMHead with a more aggressive rate using your own data and a causal language modeling (CLM) objective. The idea behind this would be to attempt to adapt the pre-trained BERT and start understanding how to model a CLM task, directly on your data and without losing some features that it may have already learned from the previous training. Nonetheless, it just an initial thought that I had and do not know how it will work in “real-world”, as my experience is just based on directly working with autoregressive models, such as GPT and Transformer-XL. Regarding some pre-trained models for language generation / CLM, there are few that I could found by tagging text-generation and bert: Models - Hugging Face 4. However, I can not assure whether they were trained with masked LM or CLM, as there were no model cards with descriptions. Regarding the evaluation metric, it is sure a challenge to define an appropriate metric or even just relying on the loss/perplexity. The problem with loss and perplexity is that they might mislead us when comparing models with close values because it strictly relies on the conditional probability of a token being generated given the previous tokens, so essentially we are trying to match the information according to a given target, whereas that given target might be valid if employed with some variations. For example: A sample in the test set “Hello, how are you” might give different perplexity when comparing to a generated prompt like “Hello, how you doing” and “Hello, how it is going”, even though they might have similar meaning, semantically speaking. I have seen some works that attempt to employ a exact match or even a partial match metric, trying to correlate the n-grams between a generated text and a reference (test sample), in the same way as BLEU, METEOR and ROUGE would be applied to a machine translation task. A qualitative assessment is also pretty interesting, specially if the model is going to be deployed into a real-world application or something like. Unfortunately, we are still lacking some advancements on how to turn grammar, syntax and semantics into more proper quantitative metrics, but that might be changed in the near future… at least I hope so!
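A minimal sketch of that "frozen (or gently tuned) encoder, aggressive head" idea in plain PyTorch, assuming a BERT-style model whose head parameters are named with a "cls." prefix (adjust the prefix to whatever your architecture actually uses):

import torch

head_params, encoder_params = [], []
for name, param in model.named_parameters():
    if name.startswith("cls."):       # language modeling head (assumed naming)
        head_params.append(param)
    else:                             # pre-trained encoder
        encoder_params.append(param)

optimizer = torch.optim.AdamW([
    {"params": encoder_params, "lr": 1e-6},  # tiny rate, or set requires_grad=False to freeze entirely
    {"params": head_params, "lr": 5e-4},     # more aggressive rate for the head
])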
0
huggingface
🤗Transformers
No skipping steps after loading from checkpoint
https://discuss.huggingface.co/t/no-skipping-steps-after-loading-from-checkpoint/6880
Hey! I am trying to continue training by loading a checkpoint. But for some reason, it always starts from scratch. Probably I am just missing something. training_arguments = Seq2SeqTrainingArguments( predict_with_generate=True, evaluation_strategy='steps', per_device_train_batch_size=training_config['per_device_train_batch_size'], per_device_eval_batch_size=training_config['per_device_eval_batch_size'], fp16=True, output_dir=training_output_path, overwrite_output_dir=True, logging_steps=training_config['logging_steps'], save_steps=training_config['save_steps'], eval_steps=training_config['eval_steps'], warmup_steps=training_config['warmup_steps'], metric_for_best_model='eval_loss', greater_is_better=False) trainer = Seq2SeqTrainer( model=model, tokenizer=tokenizer, args=training_arguments, compute_metrics=compute_metrics, train_dataset=train_ds, eval_dataset=eval_ds, ) Here are the logs: loading weights file .../models/checkpoint-2000/pytorch_model.bin All model checkpoint weights were used when initializing EncoderDecoderModel. ***** Running training ***** Num examples = 222862 Num Epochs = 3 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 83574 I am missing some like: Continuing training from checkpoint, will skip to saved global_step Continuing training from epoch 0 Continuing training from global step 48000 Continuing training from 0 non-embedding floating-point operations Will skip the first 48000 steps in the first epoch Which I found here: Load from checkpoint not skipping steps - Transformers - Hugging Face Forums 20 Maybe somebody can help me? Thank you in advance!
With overwrite_output_dir=True you reset the output dir of your Trainer, which deletes the checkpoints. If you remove that option, it should resume from the latest checkpoint.
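In recent versions you can also ask the Trainer explicitly to resume, as a quick sketch:

# resume from the most recent checkpoint found in output_dir
trainer.train(resume_from_checkpoint=True)

# or point to a specific checkpoint folder
trainer.train(resume_from_checkpoint="path/to/checkpoint-2000")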
0
huggingface
🤗Transformers
Sentence Embeddings From Fine-Tuned BERTForSequenceClassification
https://discuss.huggingface.co/t/sentence-embeddings-from-fine-tuned-bertforsequenceclassification/10345
Hey everyone, I have a binary classification task for a set of documents, and I’d like to visualize these documents from their embeddings. I’ve previously used the sentence-transformers library to do this, but I wanted to see if it was possible to improve these embeddings by fine-tuning my own BERT model to the particular task rather than just using a pre-trained model. I read through some guides 2 and discussions online, and it seems like I should be able to use the embedding for the CLS token from the last hidden state layer as a sentence embedding. However, when I pull those values from the hidden_states of the fine-tuned BERTForSequenceClassification model, every embedding is the same. This is the code I’m using to fine-tune the pre-trained model: model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=1, output_attentions=False, output_hidden_states=True) model.to(device) model.train() optim = AdamW(model.parameters(), lr=5e-5) for epoch in range(3): for batch in dataloader_train: optim.zero_grad() input_ids = batch[0].to(device) attention_mask = batch[1].to(device) labels = batch[2].to(device) outputs = model(input_ids, attention_mask=attention_mask, labels=labels) loss = outputs[0] loss.backward() optim.step() model.eval() And this is the code I’m using to pull the embeddings: def embeddings(model, dataloader_val): model.cuda() embeddings = np.zeros((0, 768)) for batch in dataloader_val: batch = tuple(b.to(device) for b in batch) inputs = {'input_ids': batch[0], 'attention_mask': batch[1], } with torch.no_grad(): outputs = model(**inputs) embeddings = np.concatenate((embeddings, outputs[1][0][:,0,:].cpu().numpy()), axis=0) return embeddings Any ideas or thoughts on why the embeddings for all of the CLS tokens would be the same?
One way around this problem that I was thinking of was to train via the sequence classification task and then load the trained model as a normal BertModel and use the normal pooler_output. Has anyone tried something like this?
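That should work: the encoder weights saved by BertForSequenceClassification load cleanly into a plain BertModel, and the unused classifier head weights are simply skipped (with a warning). A sketch, assuming the fine-tuned model was saved to a local directory (the path is illustrative):

import torch
from transformers import BertModel, BertTokenizerFast

encoder = BertModel.from_pretrained("path/to/finetuned-classifier")  # classifier.* weights are ignored
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

inputs = tokenizer("An example document.", return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0]  # raw [CLS] token embedding
pooled = outputs.pooler_output                   # tanh-pooled [CLS] output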
0
huggingface
🤗Transformers
Correct numeric labels for classification?
https://discuss.huggingface.co/t/correct-numeric-labels-for-classification/10033
Hello, this is a simple question, but better safe than sorry! My understanding is that the transformers class of models (for text classification) can only deal with integer labels as classes, so it’s up to the user to provide a mapping between labels and integer ids. In the usual example one could have 0 = negative, 1 = neutral, 2 = positive. Here is the basic question: do the numeric ids necessarily need to be integers from 0 to N (the number of classes), or can I use any other numbers of my liking? Thanks!
Yes, I can confirm the labels have to be integers starting at zero. I still wonder what the mathematical reason for that is. Any ideas, @nielsr, by any chance? Thanks!
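As far as I understand it, the reason is that the default loss, torch.nn.CrossEntropyLoss, takes the labels as class indices and uses them to index into the logits tensor, so they must lie in [0, num_labels - 1]; anything else breaks that lookup. A small sketch of the usual mapping from arbitrary labels to valid ids:

labels = ["negative", "neutral", "positive"]
label2id = {label: idx for idx, label in enumerate(labels)}  # {'negative': 0, 'neutral': 1, 'positive': 2}
id2label = {idx: label for label, idx in label2id.items()}

encoded = [label2id[l] for l in ["positive", "negative"]]    # [2, 0], the integers used for training
decoded = [id2label[i] for i in encoded]                     # map predictions back afterwards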
0
huggingface
🤗Transformers
How to Pretrain XLSR wav2vec on my unlabeled speech data
https://discuss.huggingface.co/t/how-to-pretrain-xlsr-wav2vec-on-my-unlabeled-speech-data/10307
I want to update the XLSR wav2vec2 weights via unlabeled training data(.wav audios) of my domain. Or you can say that I want to pretrain it in that way it can get exposed to my data before I start it to fine-tune it on label data. This is the model from hugging face model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53", attention_dropout=0.1, hidden_dropout=0.1, feat_proj_dropout=0.0, mask_time_prob=0.05, layerdrop=0.1, gradient_checkpointing=True, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer) Please do let me know how can I just use some code to expose XLSR to my unlabeled data as well. Also when I try to Train the XLSR model on my unlabeled data without any validation data and evaluation measure, it gives me this error KeyError: ‘loss’ here is my code from transformers import Trainer from transformers import TrainingArguments training_args = TrainingArguments( output_dir="/content/drive/MyDrive/wav2vec2-large-xlsr", group_by_length=True, per_device_train_batch_size=16, gradient_accumulation_steps=2, evaluation_strategy="steps", num_train_epochs=30, fp16=True, save_steps=200, eval_steps=200, logging_steps=200, learning_rate=3e-4, warmup_steps=300, save_total_limit=3, do_train=True, ) trainer = Trainer( model=model, data_collator=data_collator, args=training_args, train_dataset=data, tokenizer=processor.feature_extractor, ) trainer.train() Thanks in advance
@patrickvonplaten I hope you can help me with this.
0
huggingface
🤗Transformers
Phoneme Recognition Model
https://discuss.huggingface.co/t/phoneme-recognition-model/7297
Hi there, I am looking for a pre-trained model I could add to my project pipeline for phoneme recognition. I can see that wav2vec 2.0 can be pretrained to do so, but is there an existing pretrained model that I can use? Thanks!
Hello Manisa, have you managed to find a pretrained model for phoneme recognition? I’m also looking for one at the moment!
0
huggingface
🤗Transformers
Speeding up T5 inference
https://discuss.huggingface.co/t/speeding-up-t5-inference/1841
seq2seq decoding is inherently slow and using onnx is one obvious solution to speed it up. The onnxt5 62 package already provides one way to use onnx for t5. But if we export the complete T5 model to onnx, then we can’t use the past_key_values for decoding since for the first decoding step past_key_values will be None and onnx doesn’t accept None input. Without past_key_values onnx won’t give any speed-up over torch for beam search. One other solution is to export the encoder and lm_head to onnx and keep the decoder in torch, this way the decoder can use the past_key_values. I’ve written a proof-of-concept script which does exactly this and also makes it compatible with the generate method gist.github.com https://gist.github.com/patil-suraj/09244978af5f7598dd30fb9b4f54fe29 178 onnx_t5.py import inspect import logging import os from pathlib import Path import torch from psutil import cpu_count from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer from transformers.generation_utils import GenerationMixin from transformers.modeling_outputs import BaseModelOutputWithPast, Seq2SeqLMOutput This file has been truncated. show original requirements.txt --find-links https://download.pytorch.org/whl/torch_stable.html torch==1.6.0+cpu transformers>=3.1.0 onnxruntime>=1.4.0 onnxruntime-tools>=1.4.2 psutil With this you can enc = tokenizer("translate English to French: This is cool!", return_tensors="pt") onnx_model = OnnxT5(model_name_or_path="t5-small", onnx_path="onnx_models") tokens = onnx_model.generate(**enc, num_beams=2, use_cache=True) # same HF's generate method tokenizer.batch_decode(tokens) In my experiments this gave ~1.4-1.6x speed-up with beam search. The first time you call OnnxT5 it’ll load the model from the hub, export it to onnx as described above and save the exported graphs at onnx_path. So loading will be slower the first time. Now to gain further speed-up we could distill the model and use less decoder layers. onnx + distillation should give even more speed-up with minimal drop in accuracy. @sshleifer has just published awesome seq2seq distillation paper 31 which can be used to distill T5 model as well. I’ll be sharing T5 distillation results soon! now this is a very hacky solution, so feel free to suggest feedback and improvements or any other method that can help speed things up cc. @abel, @patrickvonplaten , @sshleifer
Here’s a self-contained notebook and a small benchmark for summarization and translation tasks. [Benchmark charts: summarization speed-up and translation speed-up]
0
huggingface
🤗Transformers
Trainer won’t use GPU for evaluation
https://discuss.huggingface.co/t/trainer-wont-use-gpu-for-evaluation/10291
While training a LayoutLM V2 model with a QA head we noticed that the evaluation loop stops using the GPU and will take hours to complete a single loop. Any ideas what could be happening here?
The evaluation for question-answering is pretty long as the post-processing (going from the predictions of the models to spans of texts in the contexts) is not on the GPU and is rather long. If it’s for an evaluation during training, you should use a smaller validation set.
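A quick sketch of shrinking the validation set with 🤗 Datasets before handing it to the Trainer (the size of 500 is arbitrary; model, args and datasets are whatever you already have in your training script):

from transformers import Trainer

# keep only the first 500 examples for evaluation during training
small_eval_dataset = eval_dataset.select(range(500))

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=small_eval_dataset,
)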
1
huggingface
🤗Transformers
What should I do if I want to use model from DeepSpeed
https://discuss.huggingface.co/t/what-should-i-do-if-i-want-to-use-model-from-deepspeed/10237
I am pre-training a language model using run_mlm.py. I want to add a mixture of experts (MoE) integrated with the original Bert-base model. Specifically, I reuse the MoELayer implemented by DeepSpeed and add it to BertForMaskedLM. From the DeepSpeed documentation, I see that training with DeepSpeed requires calling some functions like this: model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args, model=net, model_parameters=net.parameters()) However, I just want to reuse the MoE layer implemented by DeepSpeed and keep the training behavior of HuggingFace. Currently, I skip calling this function and directly pass the language model (with the DeepSpeed MoE layer) into the Trainer(). Although it runs successfully, my question is: does this incur any potential dangers?
Hi ezio98. I can’t answer your question, but I’m a bit confused. From what I have read about the MoE layer, the point of it is to facilitate the use (Mixture) of many different models (Experts) concurrently, but you say you are using the MoE layer on top of a single BertForMaskedLM model. What are you hoping the MoE layer will do for you? Does it have some other advantages?
0
huggingface
🤗Transformers
Why does translation quality go down after fine-tuning only one epoch?
https://discuss.huggingface.co/t/why-does-translation-quality-go-down-after-fine-tuning-only-one-epoch/10257
This might be better suited to #beginners, but it is definitely #transformers-specific… I’m new to and wanted to try model adaptation of one of the Helsinki-NLP MT models using fine-tuning. I’ve created a DatasetDict with train, dev and test, and managed to load the pre-trained model and run a trainer for one epoch. My corpus is very small (<1k segments), so my expectation is that it would have little impact on the baseline model. However, when I use the locally-saved config to translate, the results look more like the output of a model just starting its training. Is this expected, or am I doing something terribly wrong? Thanks!
Actually, the real problem is even more embarrassing… Even without freezing layers, the output shouldn’t have looked the way it did. It turns out that I was using .from_config() instead of .from_pretrained() in trying to load my fine-tuned model. Once I fixed that, the results were much more in line with my expectations.
1