docs
stringclasses
4 values
category
stringlengths
3
31
thread
stringlengths
7
255
href
stringlengths
42
278
question
stringlengths
0
30.3k
context
stringlengths
0
24.9k
marked
int64
0
1
huggingface
Models
T5 Model Problems - Constant Loss (doesn’t go down)
https://discuss.huggingface.co/t/t5-model-problems-constant-loss-doesnt-go-down/8365
I’ve been trying to train T5 on a custom dataset similar to Squad v2 by modifying the T5 on TPU colab written by Suraj Patil. (I have Colab Pro so I train on a high-ram TPU instance). The link I have shared is training on Squad v2 directly, which seems to have the same problem. link: Google Colaboratory 4 However, no matter how much I train the error seems to stay constant, i.e. the model does not seem to learn. This seems to be the case both for loss recorded during training, as well as loss during the valuation phase. Could someone please tell me what I am doing wrong? I am going a little crazy trying to figure it out. Thank you in advance all. I’ve corrected the following issues thus far: Different XLA import at start Modification of code to allow for answer-less questions under Squadv2 (as opposed to Squadv1 for Suraj’s original code) under the eos/encoder section Edited data imports to use huggingface’s datasets.load_datasets instead of NLP Under T2TDataCollator, modify batching to ensure that the inputs are tensors instead of lists, e.g.: torch.FloatTensor(example[‘input_ids’]).to(torch.int64) Specifying transformers version 2.9.1 to allow for Suraj’s particular usage of T5DataCollator (although I created a version using the current version of transformers, this also has the same problem described above). [5b. If I use the current version of transformers and not 2.9.1, I make various modifications to T5DataCollator and the labels generated in the training phase to be (‘labels’, ‘decoder_attention_mask’ instead of ‘target_ids’ and ‘target_attention_mask’]
Hey @pjahn89, I am facing a similar issue while trying to finetuning T5 on XSum using TPU/GPU. The training loss is not constant (it varies, but doesn’t converge). But, my validation loss is constant, like literally not even a change in 5th decimal place, I tried many things like creating my nn.Module compatible with the trainer. Subclassed the trainer to modify compute_loss(). But, I am not seeing any change. If you have solved this issue, can you please tell me how? Here’s a link 4 to my colab for reference. Thank you!
0
huggingface
Models
Bert question answering model without context
https://discuss.huggingface.co/t/bert-question-answering-model-without-context/5093
Regarding question answering systems using BERT, I seem to mainly find this being used where a context is supplied. Does anyone have any information where this was used to create a generative language model where no context is available?
Hey @EmuK, indeed most expositions of “question answering” are really referring to the simpler task of reading comprehension What you’re probably looking for is either: open-domain question answering, where only the query is supplied at runtime and a retriever fetches relevant documents (i.e. context) for a reader to extract answers from. You can find a really nice summary of these systems here: How to Build an Open-Domain Question Answering System? 78 closed-book question answering, where large language models like T5 or GPT-3 have memorised some facts during pre-training and can generate an answer without explicit context (the “closed-book” part is an analogy with humans taking exams, where we’ve learnt something in advance and have to use our memory to answer questions ). There’s a brief discussion of these models in the above blog post, but this T5 paper is well worth reading in it’s own right: [2002.08910] How Much Knowledge Can You Pack Into the Parameters of a Language Model? 40 There’s also a nifty library called Haystack that brings a lot of these ideas together in a unified API: https://haystack.deepset.ai/ 39
0
huggingface
Models
Possible wrong BigBirdTtokenizationFast special token initialization in pretrained model
https://discuss.huggingface.co/t/possible-wrong-bigbirdttokenizationfast-special-token-initialization-in-pretrained-model/8697
Environment info transformers version: 4.9.0 Platform: Linux-5.4.104±x86_64-with-Ubuntu-18.04-bionic Python version: 3.7.11 PyTorch version (GPU?): 1.9.0+cu102 (True) Tensorflow version (GPU?): 2.5.0 (True) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: No Using distributed or parallel set-up in script?: No Who can help @vasudevgupta Information Model I am using BigBirdTokenizerFast.from_pretrained(‘google/bigbird-roberta-base’) The likely problem is when you load the pre-trained tokenizer and check its eos and bos token mapping, it turns out to be opposite of what is expected, namely: bos_token = eos_token = To reproduce gist.github.com https://gist.github.com/nabito/64155ccff96f2fd08311374d5bf2bb6f 1 bigbird-playground.ipynb { "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "BigBird Playground.ipynb", "provenance": [], "machine_shape": "hm", "authorship_tag": "ABX9TyO9QaJHqhGtlYbgYV84ucT8", "include_colab_link": true This file has been truncated. show original Expected behavior According to the implementation of BigBirdTokenizerFast the default special token mapping should be: bos_token = eos_token =
Hello @nabito, This issue is similar to Text Classification on GLUE on TPU using Jax/Flax : BigBird · Issue #12483 · huggingface/transformers · GitHub 1. Tokenizer config needs to be updated from HuggingFace Hub (while Tokenizer code is absolutely fine) & I need @patrickvonplaten’s approval for updating that. @patrickvonplaten, fix is quite simple & we just need to run this: wget https://huggingface.co/google/bigbird-roberta-base/resolve/main/spiece.model from transformers import BigBirdTokenizer tokenizer = BigBirdTokenizer("spiece.model") tokenizer.push_to_hub("google/bigbird-roberta-base") @nabito, for now you can do this: wget https://huggingface.co/google/bigbird-roberta-base/resolve/main/spiece.model from transformers import BigBirdTokenizer tokenizer = BigBirdTokenizer("spiece.model") # similarly for fast tokenizer
0
huggingface
Models
Reformer-enwik8 output does not seem to make sense
https://discuss.huggingface.co/t/reformer-enwik8-output-does-not-seem-to-make-sense/8751
Hi, I’m trying to use reformer-enwik8 to output prob for next character Here is my code import torch import torch.nn.functional as F from transformers import ReformerModelWithLMHead def encode(list_of_strings, pad_token_id=0): max_length = max([len(string) for string in list_of_strings]) # create emtpy tensors attention_masks = torch.zeros((len(list_of_strings), max_length), dtype=torch.long) input_ids = torch.full((len(list_of_strings), max_length), pad_token_id, dtype=torch.long) for idx, string in enumerate(list_of_strings): # make sure string is in byte format if not isinstance(string, bytes): string = str.encode(string) input_ids[idx, :len(string)] = torch.tensor([x + 2 for x in string]) attention_masks[idx, :len(string)] = 1 return input_ids, attention_masks # Decoding def decode(outputs_ids): decoded_outputs = [] for output_ids in outputs_ids.tolist(): # transform id back to char IDs < 2 are simply transformed to "" decoded_outputs.append("".join([chr(x - 2) if x > 1 else "" for x in output_ids])) return decoded_outputs def main(): model = ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8") ids, masks = encode(["In 1965, Brooks left IBM to found the Department of"]) logits = model(ids, masks)["logits"] output = decode(torch.argmax(logits, dim=-1)) the output is [’ t 96 a n aeroha o ahfaorsoithint nonehonohtro’], which does not seem to make sense. Actually I don’t know how to get the correct logtis from the model, if there are any thing wrong in the code, please tell me Thanks!
it seems to be: model(input_ids=ids, attention_mask=masks)["logits"] and I can get [' t 94. aaitkl Beft tnI io rornd ahe [epartment of '] as output and it seems to be a correct one!
0
huggingface
Models
Sentence Similarity demo not working
https://discuss.huggingface.co/t/sentence-similarity-demo-not-working/8711
I am trying to see how “flax-sentence-embeddings/all_datasets_v3_distilroberta-base” works with my own examples but it is giving me an error on the huggingface website itself. [Errno 2] No such file or directory: ‘/data/sbert.net_models_flax-sentence-embeddings_all_datasets_v3_distilroberta-base/sentence_xlnet_config.json’ Any suggestions and/or fixes?
Doing this works fine (which is exactly the same as done in the Inference API). @Narsil this seems like an issue in the deployed image (?). Is there any chance an old version was deployed by accident? !pip install -e git+https://github.com/UKPLab/sentence-transformers@v2_dev#egg=sentence-transformers from sentence_transformers import SentenceTransformer, util model = SentenceTransformer("flax-sentence-embeddings/all_datasets_v3_distilroberta-base")
0
huggingface
Models
Wav2Vec2Model: Expected 3-dimensional input for 3-dimensional weight [512, 10, 10], but got 4-dimensional input of size [16, 1, 10, 1000] instead
https://discuss.huggingface.co/t/wav2vec2model-expected-3-dimensional-input-for-3-dimensional-weight-512-10-10-but-got-4-dimensional-input-of-size-16-1-10-1000-instead/8591
I’m trying to use Wav2Vec2Model with a multi-channel input. For this, I have edited the 1st layer in the feature_extractor of Wav2Vec2Model. Code: from transformers import Wav2Vec2Model, Wav2Vec2Config configuration = Wav2Vec2Config() model = Wav2Vec2Model(configuration) model.feature_extractor.conv_layers[0].conv = nn.Conv1d(10, 512, kernel_size=(10,), stride=(5,), bias=False) input = torch.rand(16,10,1000) out = model(input) Input Shape: (16, 10, 1000) where 16: batch size ; 10: num of channels ; 100: length Error: RuntimeError: Expected 3-dimensional input for 3-dimensional weight [512, 10, 10], but got 4-dimensional input of size [16, 1, 10, 1000] instead Any solutions?
this problem seems to be related to HuBERT: RuntimeError: Expected 3-dimensional input for 3-dimensional weight but got 5-dimensional input Models I get an error RuntimeError: Expected 3-dimensional input for 3-dimensional weight [512, 1, 10], but got 5-dimensional input of size [1, 1, 1, 240000, 2] instead while feeding the Wav2Vec2Processor and HubertForCTC with a wav audio file: processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-xlarge-ls960-ft" , cache_dir=os.getenv("cache_dir", "../../models")) model = HubertForCTC.from_pretrained("facebook/hubert-xlarge-ls960-ft" , cache_dir=os.getenv("cache_dir", "../../models…
0
huggingface
Models
KeyError: ‘input_ids’. when training BERT with Trainer
https://discuss.huggingface.co/t/keyerror-input-ids-when-training-bert-with-trainer/2124
greetings fam just curious if anyone provide insight on the key error message (KeyError: ‘input_ids’.) i go to train my pretrained BertForMaskedLM model (using code: trainer_BERT.train()) via the huggingface Trainer on my Dataset object. not sure if it has to do with my creation of the dataset or how i am calling my model for training tho any insights are appreciated!! a detailed view of my code and the key error is available at the link below. thank you mick stackoverflow.com huggingface transformer models: KeyError: 'input_ids' message at beginning of BERT model training 12 python, nlp, bert-language-model asked by mickeymnemonic on 12:17PM - 19 Nov 20 UTC KeyError Traceback (most recent call last) in ----> 1 trainer_BERT.train() 2 trainer.save_model("./models/royalBERT") ~/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial) 755 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control) 756 –> 757 for step, inputs in enumerate(epoch_iterator): 758 759 # Skip past any already trained steps if resuming training ~/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in next(self) 361 362 def next(self): –> 363 data = self._next_data() 364 self._num_yielded += 1 365 if self._dataset_kind == _DatasetKind.Iterable and \ ~/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self) 401 def _next_data(self): 402 index = self._next_index() # may raise StopIteration –> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 404 if self._pin_memory: 405 data = _utils.pin_memory.pin_memory(data) ~/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 45 else: 46 data = self.dataset[possibly_batched_index] —> 47 return self.collate_fn(data) ~/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py in call(self, examples) 193 ) -> Dict[str, torch.Tensor]: 194 if isinstance(examples[0], (dict, BatchEncoding)): –> 195 examples = [e[“input_ids”] for e in examples] 196 batch = self._tensorize_batch(examples) 197 if self.mlm: ~/anaconda3/lib/python3.7/site-packages/transformers/data/data_collator.py in (.0) 193 ) -> Dict[str, torch.Tensor]: 194 if isinstance(examples[0], (dict, BatchEncoding)): –> 195 examples = [e[“input_ids”] for e in examples] 196 batch = self._tensorize_batch(examples) 197 if self.mlm: KeyError: ‘input_ids’
The line tokenizerBERT(unlabelled_dataset['paragraphs'], padding=True, truncation=True) is not stored anywhere, so you’re passing to the Trainer a dataset that hasn’t been tokenized.
0
huggingface
Models
Hosting ELMO Model for Sentence Embedding on Huggingface
https://discuss.huggingface.co/t/hosting-elmo-model-for-sentence-embedding-on-huggingface/5475
I have a custom ELMO model with weights and config.json. Is it possible to host it on the Huggingface platform to produce sentence embeddings?
Hi Vitali, were you able to host your model using HuggingFace or did you find an alternative solution?
0
huggingface
Models
Reproducing DistilRoBERTa
https://discuss.huggingface.co/t/reproducing-distilroberta/5217
I’ve been trying to retrain DistilRoBERTa from the information given here 1 along with the example code/documentation here 1. I’m a bit unclear on the exact configuration used to train the DistilRoBERTa model. I have been assuming it uses the same configuration as the DistilBERT model with minor changes, though some things, such as the loss coefficients are still a bit ambiguous. Would it be possible to share the exact command/configuration to train DistilRoBERTa? I’ve been able to replicate DistilRoBERTa to similar evaluation MLM perplexity but there still seems to be a small but statistically significant difference, I can share the full config if it’s helpful. Thank you!
@VictorSanh it was mentioned 2 that you might be the best person to ask about this.
0
huggingface
Models
EleutherAI/gpt-neo-2.7B
https://discuss.huggingface.co/t/eleutherai-gpt-neo-2-7b/7423
npm huggingface-api 7 A wrapper for the huggingface api. Hy, how can i set the length of the AI Text? Any idea? Thanks
Note that this is a community-contributed, non-official API SDK. You might want to read the API’s doc directly: 🤗 Accelerated Inference API — Api inference documentation 12 Let us know if this helps or not
0
huggingface
Models
Unable to fine-tune wav2vec2
https://discuss.huggingface.co/t/unable-to-fine-tune-wav2vec2/7480
We tried to fine tune wav2vec2 model using the google colab shared by @patrickvonplaten using our own dataset on this google colab 3, we got significantly higher errors so we tried to recreate the results on timit dataset, but we still got higher errors, here is the link to it 2. I am not able to figure out what might be the reason for this. @patrickvonplaten could you look into this
Here is the link to dataset 1 that we are planning to use for fine-tuning wav2vec2.
0
huggingface
Models
Using T5 pre-trained weight for Text style transfer
https://discuss.huggingface.co/t/using-t5-pre-trained-weight-for-text-style-transfer/4791
Hi, I am trying to create a similar model to Riley et al., 2020 ([2010.03802] TextSETTR: Label-Free Text Style Extraction and Tunable Targeted Restyling 40), their model uses the the pre-trained weights for T5 model. My approach is similar to theirs as I am trying create a model with a encoder decoder structure , which both are initialized with the T5 weights. Similar to Riley´s model mine will also include a “style extractor” which has same structure as the encoder which also needs to be initialized with the T5 weights. I am able to access the weights using the from_pretrained() and state_dict() functions. The problem I am stuck with is loading/initializing my model with the weights. Since the model need to have same structure as the T5 model (from my understanding) to be able to load the weights. Any tips on this front ?
Were you able to figure this out? I am trying to implement a similar model and I am having a hard time understanding how the pieces connect.
0
huggingface
Models
Getting outputs of mode.predict() per sentence input
https://discuss.huggingface.co/t/getting-outputs-of-mode-predict-per-sentence-input/7037
Hi, I am using a TF XLM-R base classifier model-checkpoint (“jplu/tf-xlm-roberta-base”) and the tf keras native’train()’ method. On prediction (mode.predict()) I get an output logits array having 166632 length. I am providing an input of 786 data points (sentence) only. I think the 166632 is the product of the no. of input ids (212) from tokenization (from Autotokenizer) and input dataset length, but I’m not sure how that can be explained. Can someone explain how to derive prediction result per each sentence from this mode.predict output? test_encodings = tokenizer(X_test, truncation=True, padding=True) test_dataset = tf.data.Dataset.from_tensor_slices(( dict(test_encodings), y_test )) test_dataset <TensorSliceDataset shapes: ({input_ids: (212,), attention_mask: (212,)}, ()), types: ({input_ids: tf.int32, attention_mask: tf.int32}, tf.int64)> I form train and validation datasets similarly, for fine -tuning. when predicting, out=model.predict(test_dataset) len(out.logits) out 166632 TFSequenceClassifierOutput(loss=None, logits=array([[-0.27663636, 0.68009704, 1.0416636 , -0.9192458 ], [-0.27665925, 0.68014 , 1.0416217 , -0.91923165], [-0.27644584, 0.6797307 , 1.0419688 , -0.91936153], ..., [-0.25672776, 0.64896476, 1.0766468 , -0.92797905], [-0.2567277 , 0.64896476, 1.0766468 , -0.9279789 ], [-0.2567277 , 0.64896476, 1.0766468 , -0.9279789 ]], dtype=float32), hidden_states=None, attentions=None) Thanks
hey @vinurad13 what shape does test_encodings["input_ids"] have? without seeing the details behind X_test my guess is that you need to reshape your inputs so that input_ids has shape (batch_size, max_seq_length)
0
huggingface
Models
GPT-J-6B on the Hub
https://discuss.huggingface.co/t/gpt-j-6b-on-the-hub/7027
Any idea on when would GPT-J-6B be available from the HUB and/or via the inference API? Aran Komatsuzaki – 4 Jun 21 GPT-J-6B: 6B JAX-Based Transformer 313 Summary: We have released GPT-J-6B, 6B JAX-based (Mesh) Transformer LM (Github).GPT-J-6B performs nearly on par with 6.7B GPT-3 (or Curie) on various zero-shot down-streaming tasks.You can try out …
hey @hgarg there’s already a pull request in the works for this model that you can track here: GPT-J by StellaAthena · Pull Request #12243 · huggingface/transformers · GitHub 1.1k
0
huggingface
Models
GPT loss increasing
https://discuss.huggingface.co/t/gpt-loss-increasing/7034
Hi! I’m finetuning a GPT-2 model (360M params = size M) on a huge training set (dozens of GB) with a low learning rate (15e-6, or 0.000015). The loss value dips at the start, and then gradually increases for a long time. I haven’t seen it start to fall again yet. Is this behavior normal/expected? graph809×450 24.2 KB
hey @treeofknowledge this plot suggests your learning rate is too high after 25k training steps so you could try using a learning rate scheduler like the default provided in transformers.Trainer (see here 12)
0
huggingface
Models
Ways to detect language of the given text?
https://discuss.huggingface.co/t/ways-to-detect-language-of-the-given-text/2845
Is there any way to detect the language of the given input text? There are many models to translate form one language to other.
It should be fairly easy to train a small model to detect languages, not sure if there is one already but seems a bit overkill. I have used this library in the past to detect language: https://pypi.org/project/langdetect 215, it should be more than enough as a pre-processing step to then choose which translation model to use (if that is the use case you were referring to)
0
huggingface
Models
Gpt-neo 27 and 13
https://discuss.huggingface.co/t/gpt-neo-27-and-13/6702
When I run on my CPU everything is fine, but when I run on my GPU, I get garbage. Is this the correct syntax? generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B', device=1)
The devices are 0 indexed. So if you have 1 GPU you should use generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B', device=0) You can check how many devices are available with torch.cuda.device_count() Then check a specific device with torch.cuda.get_device_name[0]
0
huggingface
Models
How to upload a quantized model?
https://discuss.huggingface.co/t/how-to-upload-a-quantized-model/6683
Hi, I am wondering if it’s possible to upload a quantized model? from “model sharing” doc, looks like we could only upload some fine-tune models based on HF transformer models. I learned something from i-BERT, which is a Quantization-Aware-Training model. My question is it possible to upload a int8 transfomer model through Post-Training-Quantization rather Quantization-Aware-Training? The difference between Post-Training-Quantization and Quantization-Aware-Training is the former is only related with inference phase (calibration tensor range and quantize/dequantize to int8/fp32 for perf speedup) but the latter will emulate the quantization precision loss by inserting fake_quant ops in training phase. as the Post-Training-Quantization only involves inference phase (qconfig setting in PyTorch) and (graph rewrite In TensorFlow), I don’t know if it’s possible to upload this quantized model? through which API? Thanks for any guidence
You can upload any model you want on the hub since it’s git-based. It may not work out of the box with the Transformers library if there is no corresponding class, but you can still share the weights this way.
0
huggingface
Models
Wav2vec2 xlsr nan train loss
https://discuss.huggingface.co/t/wav2vec2-xlsr-nan-train-loss/6714
Hi, I’m running into nan training_loss when training wav2vec2 xlsr with my custom dataset. Weird thing is that even though training_loss goes to nan, eval_loss still goes down, and error_rate (cer and wer) also goes down. I’ve experimented with lower learning_rate, but still getting similar behavior. I’m logging with wandb. My graphs look like the following: lr2400×1200 118 KB error_rate2400×1200 139 KB loss2400×1200 122 KB There’s no value for train/loss after ~60 steps since it is nan, but eval/loss is still decreasing. Has anyone experienced similar behavior?
I’ve let it train over the weekend, still NAN train loss, but eval loss and both WER and CER continue to decrease
0
huggingface
Models
Further Train a fine-tuned Modell?
https://discuss.huggingface.co/t/further-train-a-fine-tuned-modell/5935
Hi Guys, how would i fruther train a already fine-tuned modell on new data? Iwant to (further) train a wav2vec2 Modell which was trained on CommonVoice on a new Dataset. Whould i have to train the modell from Scratch and use both datasets or could i just “finetune” the already fine tuned modell on new data? I tried multiple Learning-Rates and i just destroy the Modell (accuracy-wise) or the values dont change at all if i further train the modell
Has anyone got any information on this? I am trying to train a model for a low resource language and I have a continuous flow of data instead of a collection. So, I want to find a way to continuously train my model further fine-tuned on OpenSLR data. Thank you!
0
huggingface
Models
Longformer model evaluating all the datapoints to negative on evaluation data set after training
https://discuss.huggingface.co/t/longformer-model-evaluating-all-the-datapoints-to-negative-on-evaluation-data-set-after-training/1418
I am using transformers library to download longformer-base-4096 model and training on my own dataset for classification using Trainer API. So, during evaluation, all the data is categorized to negative class. I am suspecting gradient vanishing problem. If anyone can respond to how this can be handled. Below are the training arguments which are passed to Trainer class. training_args = TrainingArguments( output_dir=’./results’, do_train=True, do_eval=True, num_train_epochs=1, per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=500, weight_decay=0.01, evaluate_during_training=True, logging_dir=’./logs’, )
@Harika were you able to handle the vanishing gradient problem? If yes how?
0
huggingface
Models
Leveraging pre-trained checkpoints for summarization
https://discuss.huggingface.co/t/leveraging-pre-trained-checkpoints-for-summarization/835
The effectiveness of initializing Encoder-Decoder models from pre-trained encoder-only models, such as BERT and RoBERTa, for sequence-to-sequence tasks has been shown in: https://arxiv.org/abs/1907.12461 20. Similarly, the EncoderDecoderModel framework of Transformers can be used to leverage initialize Encoder-Decoder models from “bert-base-cased” or “roberta-base” for summarization. One can initialize such a model with weights from pre-trained checkpoints via: from transformers import EncoderDecoderModel bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased") A couple of models based on “bert-base-cased” or “roberta-base” have been trained this way for the CNN/Daily-Mail summarization task with the purpose of verifying that the EncoderDecoderModel framework is functional. Below the Rouge2 - fmeasure results on the test set of CNN/Daily-Mai: Bert2GPT2: 15.19 https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16 13 Bert2Bert: 16.1 - https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16 11 Roberta2Roberta: 16.79: https://huggingface.co/patrickvonplaten/roberta2roberta-cnn_dailymail-fp16 12 Roberta2Roberta (shared): 16.59: https://huggingface.co/patrickvonplaten/roberta2roberta-share-cnn_dailymail-fp16 7 Note: The models below were trained without any hyper-parameter search and fp16 precision. For more detail, please refer to the respective model card. UPDATE: Better models using the Seq2Seq Trainer and code on the current master give the following results: BERT2BERT on CNN/Dailymail: 18.22 - https://huggingface.co/patrickvonplaten/bert2bert_cnn_daily_mail 7 Roberta2Roberta (shared) on BBC/XSum: 16.89 - https://huggingface.co/patrickvonplaten/roberta_shared_bbc_xsum 8 Also two notebooks are attached to the model cards showing how Encoder-Decoder models can be trained using master.
patrickvonplaten: Note : The models below were trained without any hyper-parameter search and fp16 precision. For more detail, please refer to the respective model card. Interesting results! would love to know how finetune times/inference times compare to bart-base/bart-large. These are roughly bart-base size, right? Would also love to know on xsum where gaps between good and worse models get magnified in ROUGE space. Feels like we desperately need some sort of lb/aggregator, like the one you tried to get going for benchmarking. I know bart-large takes ~24h to get to ~ 21 ROUGE on cnn. @VictorSanh got 15.5 ROUGE2 with bart-base on xsum which felt a little low to me. Are you using pip install wandb? Share your logs?
0
huggingface
Models
Xlm tokenizer.lang2id is None
https://discuss.huggingface.co/t/xlm-tokenizer-lang2id-is-none/5442
I was just following huggingface.co Multi-lingual models 4 Most of the models available in this library are mono-lingual models (English, Chinese and German). A few multi-lingual models are available and have a diffe... to play with the model “xlm-clm-enfr-1024”. It turns out that print(tokenizer.lang2id) gives me None instead of {'en': 0, 'fr': 1}. Any comment on this?
I am also getting the same issue. I tried running import torch from transformers import XLMTokenizer, XLMWithLMHeadModel tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024") model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024") language_id = tokenizer.lang2id['en'] And I am seeing NoneType error
0
huggingface
Models
mT5/T5v1.1 Fine-Tuning Results
https://discuss.huggingface.co/t/mt5-t5v1-1-fine-tuning-results/2098
Hey everybody, The mT5 and improved T5v1.1 models are added: Improved T5 models (small to large): google/t5-v1_1-small 36 google/t5-v1_1-base 24 google/t5-v1_1-large 15 and mT5 models (small to large): google/mt5-small 47 google/mt5-base 25 google/mt5-large 15 are in the model hub Will upload the 3b and 11b versions in the coming days… I want to start a thread here to collect some fine-tuning results and possibly some notebooks & tips and tricks. If anyone has fine-tuned a mT5 or T5v1.1 model, it would be awesome to share the results here Also, it might be interesting to see whether fp16 is compatible with the new T5 models, cf. with https://github.com/huggingface/transformers/issues/4287 39 I’ll try to allocate some time this week for fine-tuning, but I’m very excited about some possible discussions here. Tagging some of our power contributors @valhalla @mrm8488 @beltagy @Jung (just FYI )
I was trying to fine-tune it on a Chinese short text classification task and found MT5ForConditionalGeneration not in transformers-3.5.1 yet while it is here 58?
0
huggingface
Models
Why does Bart decoder’s attention mask mark relevant indices with 0 instead of 1?
https://discuss.huggingface.co/t/why-does-bart-decoders-attention-mask-mark-relevant-indices-with-0-instead-of-1/6477
Hi. When we don’t pass decoder_attention_mask to BartModel, the model automatically creates decoder input masks with _make_causal_mask. I’ve noticed that the method inserts ‘0’ in mask positions corresponding to indices the model needs to attend, and -inf in positions corresponding to indices to be ignored. Below is the link to aforementioned code: github.com huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/models/bart/modeling_bart.py#L85 return shifted_input_ids def _make_causal_mask(input_ids_shape: torch.Size, dtype: torch.dtype, past_key_values_length: int = 0): """ Make causal mask used for bi-directional self-attention. """ bsz, tgt_len = input_ids_shape mask = torch.full((tgt_len, tgt_len), float("-inf")) mask_cond = torch.arange(mask.size(-1)) mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0) mask = mask.to(dtype) if past_key_values_length > 0: mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype), mask], dim=-1) return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length) def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): """ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. As far as I know attention masks should have 1 in indices we want to attend. Could anyone shed some light on this?
Further investigation shows this behavior is desired since attention mask is added to attention weights, so that 0 attention mask value preserves the inputs while -inf attention mask value “masks out” the inputs. (related code pasted below) However, shouldn’t the encoder attention mask be initialized the same way (0 for relevant inputs, -inf for padding inputs) as well? Currently, the documentation says encoder attention mask values should be 1 for relevant inputs and 0 for padding inputs. github.com huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/src/transformers/models/bart/modeling_bart.py#L223 if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): raise ValueError( f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {attn_weights.size()}" ) if attention_mask is not None: if attention_mask.size() != (bsz, 1, tgt_len, src_len): raise ValueError( f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" ) attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) attn_weights = F.softmax(attn_weights, dim=-1) if layer_head_mask is not None: if layer_head_mask.size() != (self.num_heads,): raise ValueError( f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" ) attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
0
huggingface
Models
Poor performance in zero-shot learning when using the model ‘typeform/distilbert-base-uncased-mnli’
https://discuss.huggingface.co/t/poor-performance-in-zero-shot-learning-when-using-the-model-typeform-distilbert-base-uncased-mnli/6374
Hi, I have tried to use the model ‘typeform/distilbert-base-uncased-mnli’ in zero-shot classification (multi-class, not multi-label). However, I am getting very poor results especially when compared to using the model ‘facebook/bart-large-mnli’. I have used both the zero-shot classification pipeline and without it, and the results are still just as bad. The test dataset I am trying to classify into categories has 46 entries and 13 categories, and the accuracy I am getting with the DistilBERT MNLI model is around 15%, whereas this goes up to 57% with the Bart large MNLI model. Has anyone else also found such a massive difference in performance when using a distilled model for this task? I assumed it would be comparable since DistilBERT and BERT have comparable performances for many NLP tasks, but the results are too different. Also, does anyone have any benchmark results for these two models, either in NLI tasks or zero-shot classification tasks? Thank you!
Hello, I am the one who fine-tuned this model. The original DistilBERT 1 paper reports 82.2 on accuracy in the MNLI task while BERT-base has 86.7 accuracy. Other following papers show slightly different numbers but in the same ballpark. For example, the MobileBERT 3 paper reports 81.5 and 84.6 on accuracy on DistilBERT and BERT-base respectively. In my fine-tuning, I got 82 accuracy for both MNLI and MNLI-mm. I use the run_glue.py (huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script to fine-tune the model with these hyperparameters: max_seq_length: 128 per_device_train_batch_size: 16 learning_rate: 2e-5 num_train_epochs: 5 When running this model on our own very small zero-shot classification test data, we didn’t see a big drop in accuracy, but we did observe that the model is less “certain” on the correct answer, i.e., it returns a lower probability on the correct label. You can also try our fine-tuned MobileBERT. It has a marginally better result in our testing.
0
huggingface
Models
Wav2vec fine-tuning with multiGPU
https://discuss.huggingface.co/t/wav2vec-fine-tuning-with-multigpu/4894
Hi, @patrickvonplaten @valhalla I’m fine-tuning wav2vec model with Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers 27 at local machine with 4xT4 GPU (16Gb) I have some problems with training. Very slowly process use_one_gpu1057×316 42.3 KB Why has the learning process slowed down so much?
I noticed a strange memory allocation on the GPU tesla_t4_4722×604 61 KB training_args = TrainingArguments( output_dir="./rus_model", group_by_length=True, per_device_train_batch_size=16, gradient_accumulation_steps=2, evaluation_strategy="steps", num_train_epochs=30, fp16=True, save_steps=400, eval_steps=200, logging_steps=100, learning_rate=3e-4, warmup_steps=500, save_total_limit=2, dataloader_num_workers=16, report_to='tensorboard' )
0
huggingface
Models
GPT Neo 2.7 not working
https://discuss.huggingface.co/t/gpt-neo-2-7-not-working/6315
We noticed it has stopped to work even in the site widget. Anyone knows what is happening?
I noticed using the API, it returns 503 error and it is not possible to skip it by using the wait_for_model parameter.
0
huggingface
Models
Matching original and translated words with MarianMT
https://discuss.huggingface.co/t/matching-original-and-translated-words-with-marianmt/6281
Hello, hopefully I’m at the right sub. After translating a sentence with MarianMT I’m trying to match the original words with the translations that they generated. Or at least come up with a probability. For example I want to translate from English to German and I have the following sentences. En: I will buy a washing machine tomorrow. De: Ich werde morgen eine Waschmaschine kaufen I want to be able to say that model took “washing” and “machine” from the original English sentence and matched it with the “Waschmaschine” in the translated text. Do you have any tips on how to achieve it? Original MarianMT mentions that scoring algorithm can be used to align two sentences. It seems similar but not the exact match it seems. Any ideas?
I’m not exactly sure how to do this but I think what you are basically trying to do is to find how strongly the decoder’s attention mechanism “attends” to each token that comes in from the encoder. An example is presented here in this blog post 1; image1228×601 89.8 KB Based on this idea, you could try taking the final transformer stack in the decoder and visualize the attention weights where it attends to the encoder inputs. I just googled and found this library that does this for you - link 9.
0
huggingface
Models
Unrecognized model in healx/gpt-2-pubmed-medium
https://discuss.huggingface.co/t/unrecognized-model-in-healx-gpt-2-pubmed-medium/6076
Hi, I would like to use and fine-tune the healx/gpt-2-pubmed-medium model, but if I try to load it with the provided snippet of code, or even if I directly try to fine tune it with run_clm.py it gives me the following error: ValueError: Unrecognized model in healx/gpt-2-pubmed-medium. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: bigbird_pegasus, deit, luke, gpt_neo, big_bird, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, megatron_bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas I may try to manually download the model, modify the config.json file adding model_type: 'gpt2', but I am not sure if this would be enough. Anyway, this seems a problem that should not happen
Hi, did you solve this problem? I am having the same problem now. Thanks!
0
huggingface
Models
Data-prep for new portuguese RoBERTa from scratch
https://discuss.huggingface.co/t/data-prep-for-new-portuguese-roberta-from-scratch/2421
I’m a NLP researcher from Brazil and our team is training RoBERTa base from scratch with around ~60GB Portuguese dataset. We plan on releasing it on HF model hub. Regarding data-prep, we have two options for documents longer than 512 tokens (maxlen): Truncate the data point and discard the rest Break long data points into smaller chunks of 512 tokens (generating new data points) What are your opinions on these approaches?
My opinion is that you should break long data into smaller chunks. One of the advantages of RoBERTa over BERT comes from the fact that it uses more data. if you throw away all but your first 512 tokens in each document, you will lose the “more data” advantage. Liu et al created English RoBERTa using DOC-SENTENCES or FULL-SENTENCES regimes, either of which uses most of the words in each document, not just the first 512 tokens. FULL-SENTENCES: each input is packed with full sentences sampled contiguously from one or more documents […] inputs may cross document boundaries. DOC-SENTENCES: Inputs are constructed similarly to FULL-SENTENCES, except that they may not cross document boundaries. I am not an expert, but I’m pretty sure that is correct. The next two ideas are only speculation: It might be even better if you could align the start of each chunk with the start of a sentence (but I don’t actually know whether that would make any difference). Liu et al used 160GB of data. Since you have only 60GB, you might consider sampling your data several times with the splits in different positions. Maybe you could wrap each document into itself (ie once you reach the end of that document, if you haven’t reached a 512-token boundary, start again from the beginning of it.)
0
huggingface
Models
Best model for factual/verfiable data?
https://discuss.huggingface.co/t/best-model-for-factual-verfiable-data/6098
What are some good pre-trained models for factual/verifiable text generation and QA (without context)? I’m referring to models that could be used for historical, medical, tech, research, and even wikipedia questions (without context) and text generation? I’m having a hard time wrapping my head around this problem and I’d appreciate any insightful action-oriented input! It would be awesome if it can be loaded with the api-inference…
I would start with T5 and try some mask filling/QA exercises to see if it works. Then try other models, also, you can check papers with code site to see which models are giving SOTA scores on the datasets you wish to use. There are several models trained on wikipedia dataset Hugging Face – The AI community building the future. 3 Start with that and see if it gives good results, There probably wont be one model+dataset that works for all your tasks, you will have to mix and match to see what works for you.
0
huggingface
Models
How to use Bertmodel?
https://discuss.huggingface.co/t/how-to-use-bertmodel/6177
The first time I use the function “BertModel.from_pretrained”, it took me a few minutes to download the model files, I thought the model will be stored locally. However, the function doesn’t work if there is no internet connection. Is there any way to use Bertmodel without internet connection. By the way, I don’t have enough computing resources to train a model from scratch.
You should set the environment variable TRANSFORMERS_OFFLINE to yes to be able to use a downloaded model without internet.
0
huggingface
Models
longformer speed compared to bert model
https://discuss.huggingface.co/t/longformer-speed-compared-to-bert-model/5168
We are trying to use a LongFormer and Bert model for multi-label classification of different documents. When we use the BERT model (BertForSequenceClassification) with max length 512 (batch size 8) each epoch takes approximately 30 minutes. When we use LongFormer (LongformerForSequenceClassification with the ‘allenai/longformer-base-4096’ and gradient_checkpointing=True) with max length 4096 (batch size 1, Gradient Accumulation step 8) each epoch takes approximately 12 hours. Is this reasonable or are we missing something? Is there anything that we can try to make the training faster?
I was using LED and found it’s also roughly 10 times slower than Bart model.
0
huggingface
Models
Incorrect model “stas/tiny-wmt19-en-ru“
https://discuss.huggingface.co/t/incorrect-model-stas-tiny-wmt19-en-ru/5936
stas/tiny-wmt19-en-ru This model is incorrect, file with model pytorch_model.bin have very small size 30 kbytes. @stas Do you reload this model? Thank you!
I know it may look broken, but it is supposed to be 30Kb in size. The real model is here: facebook/wmt19-en-ru · Hugging Face All those “tiny” models are designed specifically for testing, so they are very quick to download, but they are only useful for testing that the model trains or evals - it will of course not produce anything useful. We use these primarily in the HF transformers test suite. I have added the script that created it: fsmt-make-super-tiny-model.py · stas/tiny-wmt19-en-ru at main 1 So now you can see how it came about. I updated the README.md to indicate that 30kb is correct. You can also validate it via: python -c 'import torch; print(torch.load("pytorch_model.bin").keys())' dict_keys(['model.encoder.embed_tokens.weight', 'model.encoder.layers.0.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.k_proj.bias', 'model.encoder.layers.0.self_attn.v_proj.weight', 'model.encoder.layers.0.self_attn.v_proj.bias', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.bias', 'model.encoder.layers.0.self_attn.out_proj.weight', 'model.encoder.layers.0.self_attn.out_proj.bias', 'model.encoder.layers.0.self_attn_layer_norm.weight', 'model.encoder.layers.0.self_attn_layer_norm.bias', 'model.encoder.layers.0.fc1.weight', 'model.encoder.layers.0.fc1.bias', 'model.encoder.layers.0.fc2.weight', 'model.encoder.layers.0.fc2.bias', 'model.encoder.layers.0.final_layer_norm.weight', 'model.encoder.layers.0.final_layer_norm.bias', 'model.decoder.embed_tokens.weight', 'model.decoder.layers.0.self_attn.k_proj.weight', 'model.decoder.layers.0.self_attn.k_proj.bias', 'model.decoder.layers.0.self_attn.v_proj.weight', 'model.decoder.layers.0.self_attn.v_proj.bias', 'model.decoder.layers.0.self_attn.q_proj.weight', 'model.decoder.layers.0.self_attn.q_proj.bias', 'model.decoder.layers.0.self_attn.out_proj.weight', 'model.decoder.layers.0.self_attn.out_proj.bias', 'model.decoder.layers.0.self_attn_layer_norm.weight', 'model.decoder.layers.0.self_attn_layer_norm.bias', 'model.decoder.layers.0.encoder_attn.k_proj.weight', 'model.decoder.layers.0.encoder_attn.k_proj.bias', 'model.decoder.layers.0.encoder_attn.v_proj.weight', 'model.decoder.layers.0.encoder_attn.v_proj.bias', 'model.decoder.layers.0.encoder_attn.q_proj.weight', 'model.decoder.layers.0.encoder_attn.q_proj.bias', 'model.decoder.layers.0.encoder_attn.out_proj.weight', 'model.decoder.layers.0.encoder_attn.out_proj.bias', 'model.decoder.layers.0.encoder_attn_layer_norm.weight', 'model.decoder.layers.0.encoder_attn_layer_norm.bias', 'model.decoder.layers.0.fc1.weight', 'model.decoder.layers.0.fc1.bias', 'model.decoder.layers.0.fc2.weight', 'model.decoder.layers.0.fc2.bias', 'model.decoder.layers.0.final_layer_norm.weight', 'model.decoder.layers.0.final_layer_norm.bias', 'model.decoder.output_projection.weight'])
0
huggingface
Models
Output of BertEmbeddings
https://discuss.huggingface.co/t/output-of-bertembeddings/5907
I want to do some changes in the BertModel class based upon my use case. As such I wanted to look at some of the outputs in general for better understanding. When I tried printing the BertEmbeddings output inside BertModel, it is giving me an illegal memory access RuntimeError. I have attached the screenshot below. image991×258 15.7 KB Kindly help me correct this error. PS: When I tried printing the shape, it was giving me correct output of (batchsize, 512, 768)
The error was basically because of one of my predefined weight tensor was not on cuda. Now everything is working fine. Thanks.
0
huggingface
Models
DialoGPT fine-tuning dataset format
https://discuss.huggingface.co/t/dialogpt-fine-tuning-dataset-format/5682
I experience issues with training the model directly from the .csv file, I think I am doing something wrong with the formatting. Can somebody share an example of the format? @patrickvonplaten by any chance?
May be you can help @julien-c ?
0
huggingface
Models
Pre-train PEGASUS model from scratch
https://discuss.huggingface.co/t/pre-train-pegasus-model-from-scratch/4544
Hi @sgugger , I want to do a pre-training PEGASUS model from scratch, can you five me some suggestion? First, can I do this approach (How to train a new language model from scratch using Transformers and Tokenizers 12) to train this model from scratch? Secondly, how can I control the <mask_1> tokens to mask sentences (GSG objective)? And how can I specify the strategy of masking sentences like the selected one in PEGASUS paper? Thank you very much!
Can anyone give me some suggestion?
0
huggingface
Models
Using RAG with local documents
https://discuss.huggingface.co/t/using-rag-with-local-documents/5326
Hi, I have a requirement that model should search for relevant documents to answer the query and I found RAG 7 from Facebook AI which perfectly fits my usecase. I also found this 2 post in which HuggingFace explains RAG and came to know that HF implemented RAG which is awesome! My doubt is whether I could extend this functionality so that the model should do retrieval from local documents rather than from HF’s wikipedia corpus. Are there any notebooks to refer to?
hey @saichandra, one possibility would be to use Haystack’s 1 implementation of RAG (which is based on HF transformers), e.g. see here: https://haystack.deepset.ai/docs/latest/tutorial7md 2 one advantage of using Haystack is that they provide a nice API for FAISS (and other document stores) so you can store the embeddings locally with just a few lines of code. i’ve had mixed results from using RAG with the Natural Questions checkpoints (the answers are often gibberish). if you’re doing QA on a specialised corpus, you might be better off using the classic Retriever-Reader architecture or fine-tuning RAG on your domain
0
huggingface
Models
Smallest pretrained model?
https://discuss.huggingface.co/t/smallest-pretrained-model/5495
What is the smallest English pre-trained model (not distilled)?
BERT-tiny is pretty, uh, tiny (around 16MB). huggingface.co Hugging Face – The AI community building the future. 45
0
huggingface
Models
Can I use roberta-base-squad2 for QA on COVID-19 to rank documents?
https://discuss.huggingface.co/t/can-i-use-roberta-base-squad2-for-qa-on-covid-19-to-rank-documents/5497
The model is trained to extract answers to questions about covid19, but I also need to rank 100 covid19 papers on relevance to the question/search term.
hey @MSJohannessen, it sounds like you’re looking for a retriever-reader architecture - for that i’d suggest taking a look at haystack 5 (built on top of transformers). you can find a covid-19 example along the lines you’re talking about here: DataMuni: Building A Faster & Accurate Search Engine with Transformers & Haystack 3 in general, you can use squad2 models as baselines for the reader, but you’ll probably get better performance by fine-tuning them on your corpus
0
huggingface
Models
Fine tuning GPT2 on persona chat dataset outputs gibberish
https://discuss.huggingface.co/t/fine-tuning-gpt2-on-persona-chat-dataset-outputs-gibberish/3111
Hello all I’m trying to fine-tune GPT2 more or less using the code from that example: github.com huggingface/transfer-learning-conv-ai 82 🦄 State-of-the-Art Conversational AI with Transfer Learning Some things seem slightly outdated and I adapted the code to train with Pytorch-Lightning in a Jupyter notebook. Still im using 99% unchanged code from Github and the same dataset. Fine-tuning GPT2-medium seems to work. After one epoch the loss is down to roughly 4. At inference the chatbot only outputs gibberish like for example: Hello. How are you? !hey therehow are youwoooowhat are you?wherew where are?do you knowwayokhow are u?tellwhat are uwhatoodoiokwhere dohowi i’mdowhat aredo you?okdo you areyou are ado.you arei doyou arewowi’m so I don’t understand that. are there are what?do you?yesdo you?do you?whati amwhat?i.do you have anydodo youokwhatare?yourwhat are what?i see?sohow are youdoisoi’ve anddotoareiidoi’m youidowhat areiok What do you want to say? ?doidowhatyou are udoi’mdo uaredo uiyou?dodo uiiok,doiokdoi do you aredoare there aredoyouhow arewhat aredodoiwhat uiithat aresodorightwhat?doido u I tried several settings at inference but it’s mostly similar. Where do you think it goes wrong? Is the training not working? Over- or underfittig? Or am I making a mistake at inference? I’m hesitating to post the code yet. Maybe someone of you can already tell if it’s rather about inference or training and I will only post those parts.
Hi, did you ever manage to get this sorted? I’m coming across this problem myself, and was wondering if you could help. Thanks Error when finetuning pretrained huggingface conv-ai chatbot model 59
0
huggingface
Models
Error running GPT-NEO on local machine
https://discuss.huggingface.co/t/error-running-gpt-neo-on-local-machine/5460
Hi, I’m trying to run GPT-NEO through the hugging-face interface. from transformers import pipeline generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B') Error: - ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/dpacman/anaconda3/envs/tf-gpu/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 371, in pipeline framework, model = infer_framework_from_model(model, targeted_task, revision=revision, task=task) File "/home/dpacman/anaconda3/envs/tf-gpu/lib/python3.8/site-packages/transformers/pipelines/base.py", line 90, in infer_framework_from_model model = model_class.from_pretrained(model, **model_kwargs) File "/home/dpacman/anaconda3/envs/tf-gpu/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 382, in from_pretrained raise ValueError( ValueError: Unrecognized configuration class <class 'transformers.models.gpt_neo.configuration_gpt_neo.GPTNeoConfig'> for this kind of AutoModel: TFAutoModelForCausalLM. Model type should be one of BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig. ```
GPT-Neo is only available for PyTorch, not TensorFlow.
0
huggingface
Models
How do I reduce DistilBERT model size?
https://discuss.huggingface.co/t/how-do-i-reduce-distilbert-model-size/5104
Hi everyone, I am recently start using huggingface’s transformer library and used BERT model to fit my data, after training on AWS sagemaker exported model is 300+ MB each. Then I tried distilBERT, it reduced to around 200MB, yet still too big to invoke if put into multi model endpoint. Is there anyway to reduce the size of distilBERT even more so I can fit them in the multi model endpoint?
Hi ! what makes it too big to invoke in multi-model endpoint? not enough time to deploy? invocation too slow?
0
huggingface
Models
RAG for Reading Comprehension
https://discuss.huggingface.co/t/rag-for-reading-comprehension/5316
Hi, I am currently working on a Reading Comprehension task. I was thinking of using RAG for answer generation instead of using a span extraction model. Find the starter code, question & passage below - query = “Who is Adam’s sister?” passage = “Adam is Bob’s friend. Bob was born in 1906. Bob married Angela and they are now happily living together. Angela is Adam’s sister. Angela lives in Los Angeles. Bob has a dog and its name is Moxie. Adam likes Bob because Bob is a kind person. Adam has 2 kids.” import torch from transformers import RagConfig, RagRetriever, RagTokenForGeneration, RagTokenizer, RagSequenceForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") nq_model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq") nq_model.rag.config.n_docs = 1 _ = nq_model.to("cuda:0") _ = nq_model.eval() num_beams=1 min_length=10 max_length=16 device = "cuda:0" input_ids = tokenizer(query, return_tensors="pt").input_ids.to(device) passage_ids = tokenizer(passage, return_tensors="pt").input_ids.to(device) passage_attention_mask = tokenizer(passage, return_tensors="pt").attention_mask.to(device) generated_ids = nq_model.generate( input_ids=input_ids, context_input_ids=passage_ids, context_attention_mask=passage_attention_mask, doc_scores=torch.tensor([[100.0]]).to("cuda:0"), num_beams=1, num_return_sequences=1, min_length=min_length, max_length=max_length, length_penalty=1.0, ) answer_texts = [ tokenizer.generator.decode(gen_seq.tolist(), skip_special_tokens=True).strip() for gen_seq in generated_ids ] print(answer_texts) the model outputs ‘exit bar houses j j j j j’. Any thoughts on what might be wrong here? Thanks!
Maybe related: the latest work of google AI found that often RAG does not use context at all. Google AI Blog Progress and Challenges in Long-Form Open-Domain Question Answering 5 Posted by Aurko Roy, Research Scientist, Google Research Open-domain long-form question answering (LFQA) is a fundamental challenge in n...
0
huggingface
Models
Does it make sense to use CLS token on RoBERTa based models?
https://discuss.huggingface.co/t/does-it-make-sense-to-use-cls-token-on-roberta-based-models/1649
Hello, I know that some transformer models are not pre-trained with the Next Sentence Prediction objective, like RoBERTa-based models. In that case, the CLS token does not mean anything, right? Given that the CLS token is not pretrained, when I develop my downstream classification task, would it be better to fine-tune this CLS token or to perform average pooling over all the tokens? Thanks in advance, Bruno
Hi, the importance of the [CLS] token is not limited to NSP (Next Sentence Prediction) tasks. As far as I understand how it works, you can use it for fine-tuning on other tasks too, because [CLS] is the special token that attends to all the other tokens in the sequence, so its representation summarizes the context of the whole input. In the NSP task specifically, it learns that representation through self-attention over all the tokens of both sequences in the input pair.
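To make the comparison concrete, here is a minimal sketch (using roberta-base; the example sentence and variable names are just for illustration) of extracting either the first-token representation (<s>, RoBERTa's [CLS] equivalent) or a masked mean pooling over all tokens, either of which can be fed to a classification head:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()

inputs = tokenizer("RoBERTa drops the NSP objective.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden = outputs.last_hidden_state               # (batch, seq_len, hidden_size)
cls_vec = hidden[:, 0]                           # <s> token, RoBERTa's [CLS] equivalent

mask = inputs["attention_mask"].unsqueeze(-1)    # (batch, seq_len, 1)
mean_vec = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

print(cls_vec.shape, mean_vec.shape)             # both (1, 768)
```
In practice it is worth trying both poolings on your task, since for RoBERTa the usefulness of the first-token embedding comes entirely from fine-tuning.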
0
huggingface
Models
How to find confidence score in QA model in the pipeline module
https://discuss.huggingface.co/t/how-to-find-confidence-score-in-qa-model-in-the-pipeline-module/5210
Hi folks, I would like to know how Hugging Face estimates the confidence score, displayed when we use QA model from the Hugging Face “pipeline”. What I know is, QA model tries to predict 2 tokens (starting and ending index). The span between starting and ending index represents the answer. For each token, we understand that the confidence score is represented by the max probability in the output vector. But how to estimate confidence scores when there are 2 output vectors (one for starting token and the other one for ending token) ?
That score is obtained by multiplying the probabilities of the start and end tokens: take the softmax of start_logits and the softmax of end_logits to turn them into probabilities, read off the values at the predicted start and end indices respectively, and multiply those two numbers.
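As a minimal sketch of that computation outside the pipeline (the checkpoint, question and context are just illustrative, and the real pipeline additionally filters out impossible spans such as end < start before picking the best pair):
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
model.eval()

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

start_probs = torch.softmax(outputs.start_logits, dim=-1)[0]
end_probs = torch.softmax(outputs.end_logits, dim=-1)[0]

start_idx = int(start_probs.argmax())
end_idx = int(end_probs.argmax())
score = float(start_probs[start_idx] * end_probs[end_idx])  # the pipeline-style confidence

answer = tokenizer.decode(inputs["input_ids"][0][start_idx : end_idx + 1])
print(answer, score)
```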
0
huggingface
Models
What is the magic behind BartForConditionalGeneration?
https://discuss.huggingface.co/t/what-is-the-magic-behind-bartforconditionalgeneration/5184
For some reason, I want to modify the linear layer inside BartForConditionalGeneration. Therefore, I use a BartModel with Linear just like BartForConditionalGeneration. The Performance has a large drop-down when using BartModel with Linear. It’s so strange For same training and evaluation data: BartForConditionalGeneration {'Bleu_1': 0.3756316307557612, 'Bleu_2': 0.2187763449001214, 'Bleu_3': 0.14257622050968358, 'Bleu_4': 0.09772224033332834, 'ROUGE_L': 0.31379157899331667, 'CIDEr': 0.2487453519966872} BartModel with Linear {'Bleu_1': 0.28135212418299216, 'Bleu_2': 0.039374791862140796, 'Bleu_3': 5.7869382968790495e-08, 'Bleu_4': 9.583990840791874e-11, 'ROUGE_L': 0.13023605134624447, 'CIDEr': 0.012828799693149772} Here is my code BartForConditionalGeneration 8 BartModel with Linear 4 Some trial and notes for your reference: use set_output_embeddings to replace linear layer - dropdown tie linear weight with BartModel.shared weight - dropdown re-init linear weight with config.std and 0 mean - dropdown clone BartModel.shared weight to linear weight - dropdown add bias or remove bias - dropdown extend BartForConditionalGeneration and rename lm_head model - dropdown use different seq2seq models (t5) - dropdown
When copying the shared weight or tying shared.weight, the result is a little bit better, but still far from BartForConditionalGeneration: Copy weight: colab.research.google.com Google Colaboratory 2 import copy lm_head = nn.Linear(pretrained.config.hidden_size, tokenizer.__len__(), bias=False).to(device) lm_head.weight = copy.copy(pretrained.shared.weight) Result: {'Bleu_1': 0.3009317871068947, 'Bleu_2': 0.15865498886231086, 'Bleu_3': 0.09005394179103642, 'Bleu_4': 0.05191279861663496, 'ROUGE_L': 0.22372818945128858, 'CIDEr': 0.15579250859745616} Tie weight: colab.research.google.com Google Colaboratory 3 lm_head = nn.Linear(pretrained.config.hidden_size, tokenizer.__len__(), bias=False).to(device) lm_head.weight = pretrained.shared.weight Result: {'Bleu_1': 0.30315688210424285, 'Bleu_2': 0.1590543852533103, 'Bleu_3': 0.08880157836836094, 'Bleu_4': 0.04979010468389569, 'ROUGE_L': 0.22960729767442484, 'CIDEr': 0.1570861241454517}
0
huggingface
Models
Weird problem with machine translation
https://discuss.huggingface.co/t/weird-problem-with-machine-translation/5044
Can someone explain to me what goes wrong here? Udklip1014×793 64.5 KB Danish text: Din anden bank Prøv en anden bank uden at forlade din gamle Det får du Konti og kort Få svar på dine spørgsmål Lunar er til dig, der vil have kontrol over dine penge, styre dem nemt og have endnu mere ud af dem. Vi er en moderne bank, som du kan skræddersy efter dine egne behov. Det koster intet at prøve Lunar, men det kan koste dig penge og tid at lade være. Gør som mange af vores brugere, og brug os som din anden bank uden at forlade din gamle. Rækker dine penge ikke til hele måneden? Drømmer du om flere rejser? Vil du sætte barren højere for din økonomi? Så vil du få stor gavn af Lunar. Få overblik og kontrol over din økonomi, og få råd til mere med appens revolutionerende nye features. Hent Lunar gratis, og tilmeld dig på få minutter direkte fra mobilen. Du kan bruge os som din eneste bank - men du kan også bruge os som din anden. Så behøver du ikke forlade den, du har nu. Det gør mange af vores brugere allerede, fordi de med Lunar som nummer to får et helt unikt overblik over pengene. Det er nemt: Du overfører bare penge til din Lunar-konto, og så har du appens features til at hjælpe med økonomien hver dag. Lav fx en forbrugskonto eller en opsparingskonto. Du kan knytte din gamle bank til Lunar-appen og samle alt ét sted. Se transaktioner, og lav overførsler på tværs af dine banker. Opsparing kan virke uoverskueligt, men med Lunar er det nemt. Opret dine opsparingsmål med et swipe, og opsæt regler for opsparingen. Så klarer appen det for dig. For eksempel kan den automatisk runde beløbet op, når du bruger dit kort. På den måde sparer du hele tiden sparer lidt op, uden du mærker det. Du vil blive overrasket over, hvor hurtigt små beløb kan vokse sig store. Før du aner det, har du sparet sammen til den nye computer eller rejse, du drømmer om. Vil du i gang med at investere? Med Lunar Invest behøver du hverken være ekspert eller bruge mange penge. Du kan investere nemt og billigt, og du kan have din første aktie i hænderne allerede efter få minutter. Investér i dine yndlingsbrands, grønne selskaber eller teknologi - eller i det, der interesserer dig allermest. Ved du, hvilke abonnementer du betaler til? Eller dræner de din konto, uden du er opmærksom på det?
Hey @MortenKP, it looks like the prepare_seq2seq_batch function might be the source of the problem - what happens if you just pass a snippet of danish_text to the tokenizer and model, followed by tokenizer.decode(translated[0], skip_special_tokens=True) Note: danish_text is probably too long for the model, so you’ll need to chunk it into smaller passages (e.g. sentences). I’d try doing this with just a single sentence before scaling out to the whole document
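For concreteness, a minimal sketch of that suggestion; the checkpoint name Helsinki-NLP/opus-mt-da-en is an assumption on my part (swap in whichever Danish-to-English model you are actually using), and the sentence is just one line from your text:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-da-en"  # assumed Danish-to-English checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a single sentence instead of the whole document
sentence = "Lunar er til dig, der vil have kontrol over dine penge."
batch = tokenizer(sentence, return_tensors="pt", truncation=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```
If a single sentence comes out fine, the garbled output is most likely the full document exceeding the model's maximum input length rather than a problem with the model itself.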
0
huggingface
Models
Separation token in GPT for text similarity/question answering
https://discuss.huggingface.co/t/separation-token-in-gpt-for-text-similarity-question-answering/4844
What is the separation token used to separate two input sequences for GPT? Given a pretrained GPT-2, I'm interested in fine-tuning it for question answering (given a question string and an answer string, classify 1 or 0 depending on whether it is indeed the answer to the question or not). In BERT, the input sequences are separated with a [SEP] token, and this classification can be done by feeding in one sequence: question_text [SEP] answer_text. What is the separation token in GPT required for this? If I'm fine-tuning a pretrained model, this separation token would not have been encountered before, so will I be able to use any token I wish for this, and would the model just learn that to be the separator?
GPT-2 doesn't have anything like [SEP]. All it has is the end-of-text token (<|endoftext|>) or something similar.
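If you do want an explicit separator for fine-tuning, a common pattern is to register your own special token and resize the embedding matrix, so the new token gets a fresh embedding that is learned during fine-tuning. A rough sketch (the <SEP> string is an arbitrary choice, and GPT2LMHeadModel stands in for whatever head you fine-tune):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Register a separator token and give it a (randomly initialized) embedding row
tokenizer.add_special_tokens({"sep_token": "<SEP>"})
model.resize_token_embeddings(len(tokenizer))

text = "What is the capital of France? <SEP> Paris"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)
```
As long as the embeddings are resized and the token is used consistently, the model learns during fine-tuning what the separator means; you could equally train with GPT-2's own end-of-text token as the boundary.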
0
huggingface
Models
Swedish ASR: Fine Tuning Wav2Vec2
https://discuss.huggingface.co/t/swedish-asr-fine-tuning-wav2vec2/4560
Hey everyone. I trained the model in Swedish (just on the standard params) and I’m curious if we could figure out a good way to fine tune the parameters. My WER after 4000 steps was 0.511916 on a dataset of 402mb. I created a spreadsheet, maybe if people could fill out some parameters on how we trained we could figure out better parameters for training. docs.google.com WAV2VEC2 Model language and WER 20 Sheet1 Language,Training dataset,Training dataset size,WER,Training params,Trained on,Trained by,Training time Swedish,Common voice,402 mb,0.511916,Standard from the notebook,Colaboratory Pro,Birger Moëll,5:01:17 Swedish,NST <a... Here is a link to my Google Colaboratory. colab.research.google.com Google Colaboratory 21
I ran the same (didn’t change any parameters but did filter out apostrophe) tonight and got WER of 0.514714.
0
huggingface
Models
Training Arguments - eval_step vs save_step
https://discuss.huggingface.co/t/training-arguments-eval-step-vs-save-step/4587
I am confused a little bit about these two arguments and I did read the documentation here 7. So my question is as follows: when eval_step is less than save_step and if the best eval_step results does not correspond to the save_step, which step is saved? For example --eval_step= 200 and --save_step=400. 300th step loss: 0.4 400th step loss: 0.5 So, obviously 300th is better than 400th in terms of loss. Which step’s weights would be saved? Thank you in advance.
Unless you are using another argument, the save doesn't care about the best model, so it will just save every save_steps regardless of which step had the better loss.
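If what you actually want is the checkpoint corresponding to the best evaluation, recent transformers versions support load_best_model_at_end. A sketch of the relevant TrainingArguments (argument names can differ slightly between versions, so double-check against the docs for yours):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="steps",   # run evaluation every eval_steps
    eval_steps=200,
    save_steps=200,                # keep saving aligned with evaluation
    load_best_model_at_end=True,   # reload the best checkpoint when training finishes
    metric_for_best_model="loss",  # i.e. eval_loss; lower is better
    greater_is_better=False,
)
```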
0
huggingface
Models
How create BERT2Rand Encoder-Decoder model
https://discuss.huggingface.co/t/how-create-bert2rand-encoder-decoder-model/4399
There are multiple helpful references (https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing#scrollTo=6r2-M5hYt-Vw 7 ) for creating instances of BERT2BERT and BERT2Share models. I was wondering how to create a BERT2Rand encoder-decoder model where the encoder parameters are loaded from a pre-trained checkpoint and the decoder parameters are randomly initialized. From reading the documentation and code, I tried EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-multilingual-cased", None), which gave the following error: Huggingface AssertionError: If *decoder_model* is not defined as an argument, a *decoder pretrained model_name_or_path* has to be define I am not sure how to fix this. Please help me with this. Thank you!
Hi there, if you want a randomly initialized decoder then you can create the decoder separately, save it, and then pass that to from_encoder_decoder_pretrained. The following code snippet shows how you can init the encoder from pre-trained bert-base together with a randomly initialized decoder of bert-base size. from transformers import BertConfig, BertLMHeadModel, EncoderDecoderModel decoder_config = BertConfig(is_decoder=True)  # bert-base size decoder = BertLMHeadModel(decoder_config) decoder.save_pretrained("decoder") # save the decoder model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "decoder") # verify the decoder model.decoder.config.is_decoder model.decoder.config.add_cross_attention
0
huggingface
Models
Exporting models
https://discuss.huggingface.co/t/exporting-models/4348
Hello, Is there possibility to download or export model and then save it, so that I can use models offline? Thank you
Hi @Katarina, sure there’s a method called AutoModelForXXX.save_pretrained that you can use: Models — transformers 4.3.0 documentation 12 You can then load the model using the AutoModelForXXX.from_pretrained method
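A minimal sketch of that workflow (the checkpoint name and local directory are just examples):
```python
from transformers import AutoModel, AutoTokenizer

# On a machine with internet access: download once and save to disk
model = AutoModel.from_pretrained("distilbert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model.save_pretrained("./local-distilbert")
tokenizer.save_pretrained("./local-distilbert")

# Later, offline: load from the local directory instead of the hub
model = AutoModel.from_pretrained("./local-distilbert")
tokenizer = AutoTokenizer.from_pretrained("./local-distilbert")
```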
0
huggingface
Models
How to translate sentences after making a model
https://discuss.huggingface.co/t/how-to-translate-sentences-after-making-a-model/4393
This is the translation model. The model gets generated, but there is no method given to use the model to translate sentences. Training Your Models on Cloud TPUs in 4 Easy Steps on Google Colab | by Rishabh Anand | Analytics Vidhya | Medium 4 colab.research.google.com Google Colaboratory 8
Not sure if the question belongs here. Are you using transformers for translation? I looked at the Colab, but it seems to be a custom model. Sadly, I cannot answer that.
0
huggingface
Models
Fine-Tuning Pegasus - Model Not Training?
https://discuss.huggingface.co/t/fine-tuning-pegasus-model-not-training/2902
I’m trying to fine-tune Pegasus using a .csv with about 4,000 samples. The encodings are web text and the labels are abstract summaries. When I go to train the model (50 epochs, batch size of 16), it appears as though no training is taking place. Each iteration takes < 1 second and nearly ~30 seconds to iterate through 50 epochs… Not sure where I’m going wrong here, but would really appreciate some help/thoughts/suggestions. I’ve been following the fine-tuning tutorial for the most part, which can be found here: https://huggingface.co/transformers/master/custom_datasets.html Many thanks in advance! My Code from transformers import PegasusForConditionalGeneration, PegasusTokenizer from transformers import Trainer, TrainingArguments import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AdamW from torchvision import models import pandas as pd from torchvision import transforms, utils from sklearn.metrics import accuracy_score, precision_recall_fscore_support torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' torch.cuda.empty_cache() # Assign model & tokenizer model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distill-pegasus-xsum-16-8") tokenizer = AutoTokenizer.from_pretrained("sshleifer/distill-pegasus-xsum-16-8") # Load data data = pd.read_csv('C:/data.csv', sep=',', encoding='cp1252') train_percentage = .8 test_percentage = 1-train_percentage train_test_split_pct = int(len(data)*train_percentage) train_summary = data.iloc[:train_test_split_pct,0].tolist() train_webtext = data.iloc[:train_test_split_pct,1].tolist() test_summary = data.iloc[train_test_split_pct:,0].tolist() test_webtext = data.iloc[train_test_split_pct:,1].tolist() # Tokenize our data train_summary = tokenizer(train_summary, return_tensors="pt", truncation=True, padding=True) train_webtext = tokenizer(train_webtext, return_tensors="pt",truncation=True, padding=True) test_summary = tokenizer(test_summary, return_tensors="pt",truncation=True, padding=True) test_webtext = tokenizer(test_webtext, return_tensors="pt",truncation=True, padding=True) # Setup dataset objects class Summary(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels['input_ids'][idx]) # torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) # Get datasets train_dataset = Summary(train_webtext, train_summary) test_dataset = Summary(test_webtext, test_summary) # Train model def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary') acc = accuracy_score(labels, preds) return { 'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall } training_args = TrainingArguments( output_dir='./results', num_train_epochs=50, per_device_train_batch_size=16, per_device_eval_batch_size=16, warmup_steps=500, weight_decay=0.01, #evaluate_during_training=True, logging_dir='./logs', ) trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=test_dataset ) trainer.train() Output Epoch: 0%| | 0/50 [00:00<?, ?it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:01<00:00, 1.12s/it] Epoch: 2%|▏ | 1/50 [00:01<00:55, 1.12s/it] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 
100%|██████████| 1/1 [00:00<00:00, 1.30it/s] Epoch: 4%|▍ | 2/50 [00:01<00:48, 1.02s/it] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.48it/s] Epoch: 6%|▌ | 3/50 [00:02<00:43, 1.09it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.59it/s] Epoch: 8%|▊ | 4/50 [00:03<00:38, 1.20it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.62it/s] Epoch: 10%|█ | 5/50 [00:03<00:34, 1.30it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.66it/s] Epoch: 12%|█▏ | 6/50 [00:04<00:31, 1.39it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.55it/s] Epoch: 14%|█▍ | 7/50 [00:05<00:30, 1.43it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.60it/s] Epoch: 16%|█▌ | 8/50 [00:05<00:28, 1.48it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.61it/s] Epoch: 18%|█▊ | 9/50 [00:06<00:27, 1.51it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.66it/s] Epoch: 20%|██ | 10/50 [00:06<00:25, 1.55it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.55it/s] Epoch: 22%|██▏ | 11/50 [00:07<00:25, 1.55it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.61it/s] Epoch: 24%|██▍ | 12/50 [00:08<00:24, 1.56it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.61it/s] Epoch: 26%|██▌ | 13/50 [00:08<00:23, 1.57it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.60it/s] Epoch: 28%|██▊ | 14/50 [00:09<00:22, 1.58it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.63it/s] Epoch: 30%|███ | 15/50 [00:10<00:21, 1.59it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.55it/s] Epoch: 32%|███▏ | 16/50 [00:10<00:21, 1.58it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.63it/s] Epoch: 34%|███▍ | 17/50 [00:11<00:20, 1.59it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.61it/s] Epoch: 36%|███▌ | 18/50 [00:11<00:20, 1.59it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.50it/s] Epoch: 38%|███▊ | 19/50 [00:12<00:19, 1.56it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.57it/s] Epoch: 40%|████ | 20/50 [00:13<00:19, 1.56it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.54it/s] Epoch: 42%|████▏ | 21/50 [00:13<00:18, 1.55it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.55it/s] Epoch: 44%|████▍ | 22/50 [00:14<00:18, 1.55it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.58it/s] Epoch: 46%|████▌ | 23/50 [00:15<00:17, 1.56it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.57it/s] Epoch: 48%|████▊ | 24/50 [00:15<00:16, 1.56it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.50it/s] Epoch: 50%|█████ | 25/50 [00:16<00:16, 1.54it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.36it/s] Epoch: 52%|█████▏ | 26/50 [00:17<00:16, 1.48it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 
100%|██████████| 1/1 [00:00<00:00, 1.58it/s] Epoch: 54%|█████▍ | 27/50 [00:17<00:15, 1.51it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.63it/s] Epoch: 56%|█████▌ | 28/50 [00:18<00:14, 1.54it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.54it/s] Epoch: 58%|█████▊ | 29/50 [00:19<00:13, 1.54it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.62it/s] Epoch: 60%|██████ | 30/50 [00:19<00:12, 1.56it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.55it/s] Epoch: 62%|██████▏ | 31/50 [00:20<00:12, 1.55it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.54it/s] Epoch: 64%|██████▍ | 32/50 [00:21<00:11, 1.54it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.44it/s] Epoch: 66%|██████▌ | 33/50 [00:21<00:11, 1.51it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.56it/s] Epoch: 68%|██████▊ | 34/50 [00:22<00:10, 1.52it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.55it/s] Epoch: 70%|███████ | 35/50 [00:23<00:09, 1.52it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.48it/s] Epoch: 72%|███████▏ | 36/50 [00:23<00:09, 1.51it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.69it/s] Epoch: 74%|███████▍ | 37/50 [00:24<00:08, 1.56it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.56it/s] Epoch: 76%|███████▌ | 38/50 [00:25<00:07, 1.55it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.60it/s] Epoch: 78%|███████▊ | 39/50 [00:25<00:07, 1.57it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.44it/s] Epoch: 80%|████████ | 40/50 [00:26<00:06, 1.52it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.57it/s] Epoch: 82%|████████▏ | 41/50 [00:26<00:05, 1.53it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.50it/s] Epoch: 84%|████████▍ | 42/50 [00:27<00:05, 1.52it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.49it/s] Epoch: 86%|████████▌ | 43/50 [00:28<00:04, 1.51it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.65it/s] Epoch: 88%|████████▊ | 44/50 [00:28<00:03, 1.55it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.63it/s] Epoch: 90%|█████████ | 45/50 [00:29<00:03, 1.57it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.47it/s] Epoch: 92%|█████████▏| 46/50 [00:30<00:02, 1.54it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.64it/s] Epoch: 94%|█████████▍| 47/50 [00:30<00:01, 1.56it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.51it/s] Epoch: 96%|█████████▌| 48/50 [00:31<00:01, 1.54it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.58it/s] Epoch: 98%|█████████▊| 49/50 [00:32<00:00, 1.55it/s] Iteration: 0%| | 0/1 [00:00<?, ?it/s]A Iteration: 100%|██████████| 1/1 [00:00<00:00, 1.63it/s] Epoch: 100%|██████████| 50/50 [00:32<00:00, 1.53it/s] TrainOutput(global_step=50, training_loss=8.380059204101563)
I don’t see any error in the training code, I think you should manually try verifying the number of examples in the dataframe and the length of the dataset
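One concrete thing to check in the code above (this is only a guess at the cause, but it would explain a single iteration per epoch): self.labels is a tokenizer output, i.e. a dict-like BatchEncoding, so len(self.labels) returns the number of keys rather than the number of examples, and the Trainer then sees a dataset of only a couple of items. A sketch of the corrected dataset class:
```python
import torch

class Summary(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels["input_ids"][idx])
        return item

    def __len__(self):
        # len(self.labels) would count the BatchEncoding keys, not the examples
        return len(self.labels["input_ids"])
```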
0
huggingface
Models
Using the decoder half of BART for causal generation
https://discuss.huggingface.co/t/using-the-decoder-half-of-bart-for-causal-generation/4369
My rudimentary understanding of BART is that it’s basically a BERT encoder feeding into a GPT-2 decoder. Is there a simple way to take a fine-tuned BART model and use just its decoder for text generation? I’ve played with model.generate(input_ids=None, decoder_input_ids=my_tokenized_prompt,...) and the output looks reasonable enough. But 1) I don’t know if it’s actually giving me an accurate sense of what the decoder has learned and 2) I assume that’s an incredibly inefficient way to accomplish this task, and there must be a better option. Is there a smarter way?
If you only want to use the decoder of BART, you can do so by simply using the BartDecoder class 2. So your code could look something like: from transformers.models.bart.modeling_bart import BartDecoder model = BartDecoder.from_pretrained("facebook/bart-base") But note that BART is a seq2seq (encoder-decoder) model, it has been pre-trained in an encoder-decoder set-up, so the best results will probably be obtained by using both the encoder and decoder. But of course you can still use only the decoder if you want
0
huggingface
Models
Difference between transformer encoder and decoder
https://discuss.huggingface.co/t/difference-between-transformer-encoder-and-decoder/4127
I am trying to understand the difference between transformer encoder and decoder, after reading the article Transformer-based Encoder-Decoder Models 24 . Would it be correct that after bringing a causal masked to encoder only model, it will be the same as decoder only model? according to the article: auto-regressive models, such as GPT2, have the same architecture as transformer-based decoder models if one removes the cross-attention layer On a side-note, autoencoding models, such as Bert, have the same architecture as transformer-based encoder models. So, without involving cross-attention, the main difference between transformer encoder and decoder is that encoder uses bi-directional self-attention, decoder uses uni-directional self-attention layer instead. BERT is an encoder-only model and GPT is a decoder-only model. What if I add a causal mask on BERT model to make it become decoder. Refer to the extended attention mask on Bert. It can be done by changing the config with is_decoder. github.com huggingface/transformers/blob/b70f441b72accf3205185290efc563c0dea65bfc/src/transformers/models/bert/modeling_bert.py#L940 1 # past_key_values_length past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 if attention_mask is None: attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) if token_type_ids is None: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] # ourselves in which case we just need to make it broadcastable to all heads. extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) # If a 2D or 3D attention mask is provided for the cross-attention # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] if self.config.is_decoder and encoder_hidden_states is not None: encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) if encoder_attention_mask is None: encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: github.com huggingface/transformers/blob/4b919657313103f1ee903e32a9213b48e6433afe/src/transformers/modeling_utils.py#L221 encoder_extended_attention_mask = (1.0 - encoder_extended_attention_mask) * -1e9 else: raise ValueError( "{} not recognized. `dtype` should be set to either `torch.float32` or `torch.float16`".format( self.dtype ) ) return encoder_extended_attention_mask def get_extended_attention_mask(self, attention_mask: Tensor, input_shape: Tuple[int], device: device) -> Tensor: """ Makes broadcastable attention and causal masks so that future and masked tokens are ignored. Arguments: attention_mask (:obj:`torch.Tensor`): Mask with ones indicating tokens to attend to, zeros for tokens to ignore. input_shape (:obj:`Tuple[int]`): The shape of the input to the model. device: (:obj:`torch.device`): The device of the input to the model. After that, I have an experiment comparing the values of word embedding of “I” for input_ids and perturbed input_ids as same as the article. It seems that after changing BERT to a decoder, its hidden state will be changed on a different input, why is that happen ? 
from transformers import AutoModel,AutoTokenizer, AutoConfig import torch # bert model_config = AutoConfig.from_pretrained('bert-base-uncased') model_config.is_decoder = True bert_model = AutoModel.from_config(model_config) bert_tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') # gpt gpt_model = AutoModel.from_pretrained('gpt2') gpt_tokenizer = AutoTokenizer.from_pretrained('gpt2') Test GPT model embeddings = gpt_model.get_input_embeddings() # create ids of encoded input vectors decoder_input_ids = gpt_tokenizer("<pad> Ich will ein", return_tensors="pt", add_special_tokens=False).input_ids # pass decoder input_ids and encoded input vectors to decoder lm_logits = gpt_model(decoder_input_ids).last_hidden_state # change the decoder input slightly decoder_input_ids_perturbed = gpt_tokenizer("<pad> Ich will das", return_tensors="pt", add_special_tokens=False).input_ids lm_logits_perturbed = gpt_model(decoder_input_ids_perturbed).last_hidden_state # compare values of word embedding of "I" for input_ids and perturbed input_ids print("Is encoding for `Ich` equal to its perturbed version?: ", torch.allclose(lm_logits[0, 0], lm_logits_perturbed[0, 0], atol=1e-3)) Is encoding for Ich equal to its perturbed version?: True Test BERT model embeddings = bert_model.get_input_embeddings() # create ids of encoded input vectors decoder_input_ids = bert_tokenizer("<pad> Ich will ein", return_tensors="pt", add_special_tokens=False).input_ids # pass decoder input_ids and encoded input vectors to decoder lm_logits = bert_model(decoder_input_ids).last_hidden_state # change the decoder input slightly decoder_input_ids_perturbed = bert_tokenizer("<pad> Ich will das", return_tensors="pt", add_special_tokens=False).input_ids lm_logits_perturbed = bert_model(decoder_input_ids_perturbed).last_hidden_state # compare values of word embedding of "I" for input_ids and perturbed input_ids print("Is encoding for `Ich` equal to its perturbed version?: ", torch.allclose(lm_logits[0, 0], lm_logits_perturbed[0, 0], atol=1e-3)) Is encoding for Ich equal to its perturbed version?: False
It is because of the dropout. See this GitHub issue for the full discussion: huggingface/transformers, "[Causal Language Modeling] seems not as expected" (opened Mar 6, 2021, closed Mar 12, 2021, by voidful). Causal models only attend to the left context, so they should not depend on the tokens to the right.
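In other words, the freshly constructed BERT decoder is still in training mode, so dropout makes every forward pass stochastic. A small sketch of how to make the comparison deterministic; it reuses the bert_model, decoder_input_ids and decoder_input_ids_perturbed variables defined in the snippet above:
```python
import torch

# Put the model in evaluation mode so dropout is disabled
bert_model.eval()

with torch.no_grad():
    lm_logits = bert_model(decoder_input_ids).last_hidden_state
    lm_logits_perturbed = bert_model(decoder_input_ids_perturbed).last_hidden_state

print(
    "Is encoding for `Ich` equal to its perturbed version?: ",
    torch.allclose(lm_logits[0, 0], lm_logits_perturbed[0, 0], atol=1e-3),
)
```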
0
huggingface
Models
Bitext Alignment (Translation Source and Target Alignment)
https://discuss.huggingface.co/t/bitext-alignment-translation-source-and-target-alignment/3912
Hello! Looking for some guidance on handling Bitext Alignment. Similar to how Microsoft handles it with their Translation Service 2. For reference I’m using the Helsinki pre-trained models and have come across papers say alignment can be derived from the hidden states or the decoder attentions. I am returning them but can’t make any sense of the returned tensors. Looking for some documentation or examples on how to make sense of the decoder_attentions. generated = translation_model.generate(return_dict_in_generate = True, **prepare translation_model = MarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-en-es', return_dict=True, output_attentions=True, output_scores=True, output_hidden_states=True)
If you want to extract a sentence representation (for alignment) out of the encoder output, you should consider using a pooling layer to obtain the sentence vector; see the Pooling class from sentence-transformers: sentence-transformers/Pooling.py at ec76488000f94efdba911356b8924cc46db0c2ee · UKPLab/sentence-transformers · GitHub 9
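If you prefer not to add the sentence-transformers dependency, a masked mean pooling over the Marian encoder output is only a few lines. A rough sketch; it assumes translation_model is the MarianMTModel from your snippet and tokenizer is the matching MarianTokenizer loaded from the same checkpoint:
```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    """Average the token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

batch = tokenizer(["This is a source sentence."], return_tensors="pt", padding=True)
encoder_out = translation_model.get_encoder()(**batch, return_dict=True)
sentence_vec = mean_pool(encoder_out.last_hidden_state, batch["attention_mask"])
print(sentence_vec.shape)  # (1, hidden_size)
```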
0
huggingface
Models
“deberta-v2-xxlarge”-Model not working!
https://discuss.huggingface.co/t/deberta-v2-xxlarge-model-not-working/3918
Hi huggingface Community I have a problem with the DeBERTa model. I do: from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained(“microsoft/deberta-v2-xxlarge”) model = AutoModel.from_pretrained(“microsoft/deberta-v2-xxlarge”) But always the same error occurs: config_class = CONFIG_MAPPING[config_dict[“model_type”]] KeyError: ‘deberta-v2’ Can somebody help me, please. Appreciate it!
Experiencing the same issue - figured it out by any chance?
0
huggingface
Models
OOM issues with exported vs. model card models
https://discuss.huggingface.co/t/oom-issues-with-exported-vs-model-card-models/4230
Having a weird issue with DialoGPT Large model deployment. From PyTorch 1.8.0 and Transformers 4.3.3 using model.save_pretrained and tokenizer.save_pretrained, the exported pytorch_model.bin is almost twice the size of the model card repo and results in OOM on a reasonably equipped machine that when using the standard transformers download process it works fine (I am building a CI pipeline to containerize the model hence the pre-populated model requirement): Model card: pytorch_model.bin 1.6GB model.save_pretrained and tokenizer.save_pretrained: -rw-r--r-- 1 jrandel jrandel 800 Mar 6 16:51 config.json -rw-r--r-- 1 jrandel jrandel 446K Mar 6 16:51 merges.txt -rw-r--r-- 1 jrandel jrandel 3.0G Mar 6 16:51 pytorch_model.bin -rw-r--r-- 1 jrandel jrandel 357 Mar 6 16:51 special_tokens_map.json -rw-r--r-- 1 jrandel jrandel 580 Mar 6 16:51 tokenizer_config.json -rw-r--r-- 1 jrandel jrandel 780K Mar 6 16:51 vocab.json When I download the model card files directly however, I’m getting the following errors: curl -L https://huggingface.co/microsoft/DialoGPT-large/resolve/main/config.json -o ./model/config.json curl -L https://huggingface.co/microsoft/DialoGPT-large/resolve/main/pytorch_model.bin -o ./model/pytorch_model.bin curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/tokenizer_config.json -o ./model/tokenizer_config.json curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/config.json -o ./model/config.json curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/merges.txt -o ./model/merges.txt curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/special_tokens_map.json -o ./model/special_tokens_map.json curl https://huggingface.co/microsoft/DialoGPT-large/resolve/main/vocab.json -o ./model/vocab.json <snip> tokenizer = AutoTokenizer.from_pretrained("model/") File "/var/lang/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 395, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained return cls._from_pretrained( File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1801, in _from_pretrained slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained( File "/var/lang/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1876, in _from_pretrained special_tokens_map = json.load(special_tokens_map_handle) File "/var/lang/lib/python3.8/json/__init__.py", line 293, in load return loads(fp.read(), File "/var/lang/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "/var/lang/lib/python3.8/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/var/lang/lib/python3.8/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/var/runtime/bootstrap.py", line 481, in <module> main() File "/var/runtime/bootstrap.py", line 458, in main lambda_runtime_client.post_init_error(to_json(error_result)) File "/var/runtime/lambda_runtime_client.py", line 42, in post_init_error response = runtime_connection.getresponse() File "/var/lang/lib/python3.8/http/client.py", line 1347, in getresponse 
response.begin() File "/var/lang/lib/python3.8/http/client.py", line 307, in begin version, status, reason = self._read_status() File "/var/lang/lib/python3.8/http/client.py", line 276, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response time="2021-03-08T09:01:39.33" level=warning msg="First fatal error stored in appctx: Runtime.ExitError" time="2021-03-08T09:01:39.33" level=warning msg="Process 14(bootstrap) exited: Runtime exited with error: exit status 1" time="2021-03-08T09:01:39.33" level=error msg="Init failed" InvokeID= error="Runtime exited with error: exit status 1" time="2021-03-08T09:01:39.33" level=warning msg="Failed to send default error response: ErrInvalidInvokeID" time="2021-03-08T09:01:39.33" level=error msg="INIT DONE failed: Runtime.ExitError" time="2021-03-08T09:01:39.33" level=warning msg="Reset initiated: ReserveFail" So what would be causing the large file variance between save_pretrained models and the model card repo? And any ideas why the directly downloaded model card files aren’t working in this example? Thanks in advance
Is this thing on. tap tap tap
0
huggingface
Models
Question regarding training of BartForConditionalGeneration
https://discuss.huggingface.co/t/question-regarding-training-of-bartforconditionalgeneration/4079
Hello Guys, I am trying to fine-tune the BART summarization model but due to the lack of big dataset, having some difficulties with the fine-tuning. Thus, I decided to look at the trainig process of BartForConditionalGeneration model in detail. I came across this article titled ‘Introducing BART’ (sorry, only 2 links allowed for new users ) from one of the engineers, @sshleifer, at HuggingFace. It says that BartModel was directly fine-tuned for the summarisation task without any new randomly initialized heads. My question is about this fine-tuning process, especially on CNN-DailyMail dataset. Do you guys fine-tune the entire Bart model or only the decoder or something else? I looked at the example fine-tuning script 9 provided on the GitHub but I didn’t find anything related to freezing some part of the model.   I also tried to look at the source code of the BartForConditionalGeneration model and observed the following - Its just adds a linear layer on top of the BartModel (copy-pasting the __init__ code here for quick reference). self.model = BartModel(config) self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings))) self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False) At first, I thought these are the new parameters that are being introduced and thus, being trained. Therefore, I tried the following code to check the number of trainable parameters while keeping the endoer and decoder fixed - from transformers import BartModel, BartForConditionalGeneration, BartTokenizer def freeze_params(model): for par in model.parameters(): par.requires_grad = False model_sum = BartForConditionalGeneration.from_pretrained('facebook/bart-large') freeze_params(model_sum.get_encoder()) ## freeze the encoder freeze_params(model_sum.get_decoder()) ## freeze the decoder model_sum.train() ## set the train mode train_p = [p for p in model_sum.parameters() if p.requires_grad] ## get the trainable params print(f'Length of train params in Summarization Model : {len(train_p)}') But this code shows that the list is empty. One thing I can do is to explictly set the requires_grad=True for the paramters in the model_sum.lm_head and only fine-tune these parameters. But I am curious to understand the original training/fine-tuning process. It would be of great help to me if you guys could answer my question. Thanks, Naman
I answered on github: Question regarding training of BartForConditionalGeneration · Issue #10479 · huggingface/transformers · GitHub 151
0
huggingface
Models
Amharic NLP - Train BERT-style model
https://discuss.huggingface.co/t/amharic-nlp-train-bert-style-model/3984
@israel Here is a thread where we can collaborate on work to pre-train a BERT-style model for Amharic on OSCAR data. One thing I have noticed on a lot of NLP efforts is it has a high barrier to entry. I believe the documentation needs to be so clear that anyone (with minimum data science knowledge) coming after us has to be able to implement the process with easy to follow step by step instructions. Thanks
Please DM me.
0
huggingface
Models
[Not working] QA inference API and conv-ai
https://discuss.huggingface.co/t/not-working-qa-inference-api-and-conv-ai/1369
Conversational AI 6 and the QA API are not working in my browser. Can anyone please check it out? [screenshot attached] The "Loading" message is persistent and nothing happens after it. This is the first time I am using your web inference API. Am I missing something?
[screenshot attached]
0
huggingface
Models
Link to blog about RAG
https://discuss.huggingface.co/t/link-to-blog-about-rag/2868
Hi, I would like to read the blog about RAG (listed in https://huggingface.co/blog 9), but the link (https://huggingface.co/rag/ 8) doesn’t work. Thanks
Hi friend, I believe the site will come back soon since – referring to our last conversation – now even https://huggingface.co/qa/ 19 is back after several days of downtime. BTW, the RAG link is not a blog post but a RAG demo, similar to the long-form QA demo we have discussed. I still have screen captures of the page, so I post them here as teasers for you while you are waiting. [two screenshots of the RAG demo attached]
0
huggingface
Models
How to train BERT from scratch on a new domain for both MLM and NSP?
https://discuss.huggingface.co/t/how-to-train-bert-from-scratch-on-a-new-domain-for-both-mlm-and-nsp/3115
I’m trying to train BERT model from scratch using my own dataset. I would like to train the model in a way that it has the exact architecture of the original BERT model. In the original paper, it stated that: “BERT is trained on two tasks: predicting randomly masked tokens (MLM) and predicting whether two sentences follow each other (NSP). SCIBERT follows the same architecture as BERT but is instead pretrained on scientific text.” I’m trying to understand how to train the model on two tasks as above. At the moment, I initizalied the model as below: from transformers import BertForMaskedLM model = BertForMaskedLM(config=config) However, it would just be for MLM and not NSP. How can I initialize and train the model with NSP as well? My assumptions would be either Initialize with BertForPreTraining (for both MLM and NSP), OR After finish training with BertForMaskedLM, initalize the same model and train again with BertForNextSentencePrediction (but this approach’s computation and resources would cost twice…) I’m not sure which one is the correct way. Or maybe my original approach was fine as it is? Any insights or advice would be greatly appreciated.
Hi @tlqnguyen For MLM and NSP training, you should use the BertForPreTraining class. When you pass labels to the forward pass it computes the MLM loss, and when you also pass next_sentence_label it computes the NSP loss as well.
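A rough sketch of a single forward pass with both objectives, starting from a fresh config as in your snippet (the sentence pair and labels are purely illustrative; for real pretraining you would mask ~15% of tokens with a data collator instead of predicting every position):
```python
import torch
from transformers import BertConfig, BertForPreTraining, BertTokenizer

config = BertConfig()                      # train from scratch
model = BertForPreTraining(config)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # or your own vocab

encoding = tokenizer("The cat sat on the mat.", "It fell asleep there.",
                     return_tensors="pt")

# MLM labels: token ids to predict (-100 marks positions to ignore in the loss)
mlm_labels = encoding["input_ids"].clone()

# NSP label: 0 = sentence B really follows sentence A, 1 = it was a random sentence
nsp_label = torch.tensor([0])

outputs = model(**encoding, labels=mlm_labels, next_sentence_label=nsp_label)
print(outputs.loss)  # combined MLM + NSP loss
```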
0
huggingface
Models
TypeError: full_like() got an unexpected keyword argument ‘shape’
https://discuss.huggingface.co/t/typeerror-full-like-got-an-unexpected-keyword-argument-shape/2981
I have finished training my BERT model from scratch after calling trainer.train() Next, I want to evaluate my val set and get val loss by calling trainer.evaluate() but I have received this error TypeError: full_like() got an unexpected keyword argument 'shape' I couldn’t find much information about this error or a solution to fix it. If anyone could give me a suggestion it would be highly appreciated. Here is the full code of my Trainer and Training arguments: from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./bertmodel", overwrite_output_dir=True, num_train_epochs=5, per_device_train_batch_size=32, per_device_eval_batch_size=32, save_steps=10000, save_total_limit=2, do_train=True, do_eval=True, logging_steps=700, eval_steps = None, prediction_loss_only=True, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, )
Maybe you need to upgrade the numpy package with 'pip install numpy --upgrade'.
0
huggingface
Models
How does the API inference work on models such as Blenderbot?
https://discuss.huggingface.co/t/how-does-the-api-inference-work-on-models-such-as-blenderbot/3399
I assume models like blenderbot need to look at prior inputs and outputs in order to form some consistency. How does the inference API provide that to the model?
Hey, I'm dealing with the same subject. As far as I understand, there is a way to provide the context of the previous turns in the conversation. The details are here: https://api-inference.huggingface.co/docs/python/html/detailed_parameters.html#conversational-task 9 Although, when I tried it with the 1B model, I got the following error: "Cutting history off because it's too long (36 > 28) for underlying model" I don't know if this is a limitation of the model or the API. If you find a solution, please let me know.
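For reference, this is roughly the request shape the conversational task expects, based on the detailed-parameters page linked above; treat the model URL, token placeholder and field names as things to verify against the current docs rather than as gospel:
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/blenderbot-400M-distill"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

payload = {
    "inputs": {
        "past_user_inputs": ["Which movie is the best?"],
        "generated_responses": ["It's Die Hard for sure."],
        "text": "Can you explain why?",
    }
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```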
0
huggingface
Models
DeBERTa use for NLI tasks - Missing contradiction score
https://discuss.huggingface.co/t/deberta-use-for-nli-tasks-missing-contradiction-score/3379
Hi all, I was wondering if someone got to make DeBERTa 3 work on NLI task using the pre-trained model available in Hugging Face? When running this code, the contradiction is always empty (although entailment and neutral score are populated). from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch if __name__ == '__main__': max_length = 256 premise = "Two women are embracing while holding to go packages." hypothesis = "The men are fighting outside a deli." hg_model_hub_name = "microsoft/deberta-large" # hg_model_hub_name = "microsoft/deberta-base" tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name) model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name) tokenized_input_seq_pair = tokenizer.encode_plus(premise, hypothesis, max_length=max_length, return_token_type_ids=True, truncation=True) input_ids = torch.Tensor(tokenized_input_seq_pair['input_ids']).long().unsqueeze(0) # remember bart doesn't have 'token_type_ids', remove the line below if you are using bart. token_type_ids = torch.Tensor(tokenized_input_seq_pair['token_type_ids']).long().unsqueeze(0) attention_mask = torch.Tensor(tokenized_input_seq_pair['attention_mask']).long().unsqueeze(0) outputs = model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=None) # Note: # "id2label": { # "0": "entailment", # "1": "neutral", # "2": "contradiction" # }, predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() # batch_size only one print("Premise:", premise) print("Hypothesis:", hypothesis) print("Entailment:", predicted_probability[0]) print("Neutral:", predicted_probability[1]) print("Contradiction:", predicted_probability[2])
The model is not fine-tuned on the MNLI task. To run NLI inference like this, you first need to fine-tune it on MNLI.
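If you want to try that yourself, here is a rough sketch of MNLI fine-tuning with the Trainer and the multi_nli dataset; the hyperparameters are illustrative only, and DeBERTa-large is heavy, so expect to need a sizeable GPU (multi_nli labels are 0 = entailment, 1 = neutral, 2 = contradiction, matching the id2label above):
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "microsoft/deberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

mnli = load_dataset("multi_nli")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=256, padding="max_length")

encoded = mnli.map(tokenize, batched=True)

args = TrainingArguments(output_dir="deberta-mnli", num_train_epochs=1,
                         per_device_train_batch_size=8, learning_rate=1e-5)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation_matched"])
trainer.train()
```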
0
huggingface
Models
Q & A Model Robustness for concluding periods
https://discuss.huggingface.co/t/q-a-model-robustness-for-concluding-periods/3349
Hi all, I am wondering if anyone might be able to provide some insight or logical reasons why, say, a BERT model trained on SQuAD for Q&A tasks might output different answers to the same question when the only difference is the absence or presence of a concluding full stop in the context (but I am also interested in other punctuation and its effect on performance, for that matter). It does differ between models: the capture below uses bert-large-uncased-whole-word-masking-squad2, while I get consistent answers from roberta-base-squad2 (that's obviously not shocking, but I wanted to add that robustness might differ between models and ask why that might be for this particular observation). Additionally, if anyone can recommend any papers on this, that would be great! TYIA!!! [screenshot attached]
Hi @pythagorasthe10th, one possible explanation is that BERT’s attention heads are known to pay special attention to commas and full-stops in the last few layers: Screen Shot 2021-01-23 at 9.31.13 am487×792 85.5 KB This figure comes from What Does BERT Look at? An Analysis of BERT’s Attention 1 by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning where the authors explain the phenomenon in terms of the high frequency of periods and commas in the corpus: Interestingly, we found that a substantial amount of BERT’s attention focuses on a few tokens (see Figure 2). For example, over half of BERT’s attention in layers 6-10 focuses on [SEP]. To put this in context, since most of our segments are 128 tokens long, the average attention for a token occurring twice in a segments like [SEP] would normally be 1/64. [SEP] and [CLS] are guaranteed to be present and are never masked out, while periods and commas are the most common tokens in the data excluding “the,” which might be why the model treats these tokens differently. A similar pattern occurs for the uncased BERT model, suggesting there is a systematic reason for the attention to special tokens rather than it being an artifact of stochastic training. I’m not sure if this conclusion carries through to question-answering / fine-tuning, but naively I would guess so. Perhaps you don’t see this in RoBERTa since the next-sentence prediction task is dropped, but I’m not sure about this either. You might also find the BERTology papers of interest: [2002.12327] A Primer in BERTology: What we know about how BERT works HTH!
0
huggingface
Models
Text generation pipeline - output_scores parameter
https://discuss.huggingface.co/t/text-generation-pipeline-output-scores-parameter/3294
In text-generation pipeline, I am looking for a parameter which calculates the confidence score of the generated text. Source: here 4 I am assuming that, output_scores (from here 6) parameter is not returned while prediction, Code: predictedText = pipeline('text-generation',model=checkpoint_path, tokenizer=gpt2_tokenizer, config={'max_length':20, 'output_scores':True}) predictedText('This is a ') Output: Setting pad_token_idtoeos_token_id:50256 for open-end generation. [{'generated_text': 'This is a Generated Text'}] In the output, I am looking for a confidence score of the predicted text to be displayed
The text-generation pipeline doesn't return scores; however, you could use the generate method directly to get the scores. This should help: Generation Probabilities: How to compute probabilities of output scores for GPT2 🤗Transformers Now that it is possible to return the logits generated at each step, one might wonder how to compute the probabilities for each generated sequence accordingly. The following code snippet showcases how to do so for generation with do_sample=True for GPT2: import torch from transformers import AutoModelForCausalLM from transformers import AutoTokenizer gpt2 = AutoModelForCausalLM.from_pretrained("gpt2", return_dict_in_generate=True) tokenizer = AutoTokenizer.from_pretrained("gpt2") input_ids …
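To make that concrete, a minimal sketch with greedy decoding; each entry of outputs.scores is softmaxed to get a per-token probability, and how you aggregate those into a single confidence for the whole generation (product, mean log-prob, etc.) is up to you:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("This is a", return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    max_length=20,
    return_dict_in_generate=True,
    output_scores=True,
)

# outputs.scores holds one logits tensor per generated token
generated = outputs.sequences[0, input_ids.shape[-1]:]
token_probs = []
for step_logits, token_id in zip(outputs.scores, generated):
    probs = torch.softmax(step_logits[0], dim=-1)
    token_probs.append(probs[token_id].item())

print(tokenizer.decode(generated))
print(token_probs)  # per-token probabilities of the generated continuation
```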
0
huggingface
Models
Summarization - model for articles about finance
https://discuss.huggingface.co/t/summarization-model-for-articles-about-finance/1186
Hello. I am trying to summarize long articles about finance. I was able to write a working code but I am wondering if there is any specific model that is learned using textual data about finance. Thank you for recommendations
Look into “FinBERT: Financial Sentiment Analysis with Pre-trained Language Models”, maybe that’ll help. There is one finbert model on the hub but I have not played with it. Vladimir
0
huggingface
Models
Best practice for upgrading models?
https://discuss.huggingface.co/t/best-practice-for-upgrading-models/3044
Hi everybody. I’m looking for best practices for upgrading models which have been trained with Transformers. We have a model which is trained with Transformers 4.0.x, which breaks with the latest version (4.1.1). We had the same issue with models trained with Transformers 3 when Transformers 4 came out. Retraining models every time the Transformers package is updated is expensive and the new model might give different output. Pinning the version is of course possible, but we would prefer to keep the models up-to-date. Are there other options than fully retraining the model or pinning the version that we could consider? We save our models and tokenizers with the save_pretrained method.
How does your model "break"? Can you give more details?
0
huggingface
Models
Fine-tuning BERT Model on domain specific language
https://discuss.huggingface.co/t/fine-tuning-bert-model-on-domain-specific-language/3054
Hi everyone I want to further fine-tune a BERT Model on domain specific language as done in https://arxiv.org/pdf/1903.10676.pdf 37 or https://arxiv.org/abs/1908.10063 19. If I understood correctly, I have to use the same vocabulary as the original pre-trained model or have to train it from scratch. Since I don’t want to train the model form scratch I have to accept the fact that I have to use the same vocab. My first fine-tuning step is to adapt the model to the domain specific language, where I feed the model some (unlabeled) domain specific text (large dataset) for it to get familiar with the language (freezing some layers during training to prevent forgetting of the pre-trained corpus). Secondly, I want to further fine-tune it for sentiment classification giving the model labeled data (smaller dataset) to train on. Can anyone help me on how to do that (both steps)? Thank you very much in advance.
Hey, has anyone got an idea?
0
huggingface
Models
Model illuin/camembert-large-fquad do not work anymore
https://discuss.huggingface.co/t/model-illuin-camembert-large-fquad-do-not-work-anymore/3043
Hi, I don't know why, but the model illuin/camembert-large-fquad for French question answering does not work anymore when I use the code: nlp = pipeline('question-answering', model='illuin/camembert-large-fquad', tokenizer='illuin/camembert-large-fquad') I get the error: ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html 4 On the Hugging Face website for this same model, the example box does not work either, with the error: The model illuin/camembert-large-fquad does not seem to have model files. Please check that it contains either pytorch_model.bin or tf_model.h5. Is the model still available? How can I fix this? Thanks all!
The first error is not related to transformers, as instructed you should update jupyter and ipywidgets following the instructions at the link provided.
0
huggingface
Models
Variable num_predict in target_mapping for XLNet
https://discuss.huggingface.co/t/variable-num-predict-in-target-mapping-for-xlnet/3021
I have a pretraining task for XLNet. One of the inputs for XLNetLMHeadModel is target_mapping that is of the shape (batch_size,num_predict,seq_len). I want to predict for all tokens in an input sentence, which means num_predict will vary within a batch, for sentences of different length. This leads to error while building a data_loader in PyTorch. Can anyone suggest a workaround for this problem? Thanks
I might be wrong here but I would assume that in the language modeling task, num_predict is actually the size of the vocabulary because for each mask you try to predict the highest probability token in the vocab (in MLM). Seq_len is the max length of sequences you want to be able to model. If a sentence is smaller then you just pad it.
0
huggingface
Models
SEBIS{URGENT},ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds
https://discuss.huggingface.co/t/sebis-urgent-valueerror-you-have-to-specify-either-decoder-inputs-or-decoder-inputs-embeds/2988
[screenshot attached] ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds. Could somebody please help?
Hi @Rohit It's not clear what exactly you want to do. Could you please provide more information? From what I can see, it's a T5 model. If you want to use it for seq2seq generation, then you should load it with the AutoModelForSeq2SeqLM or T5ForConditionalGeneration class and use the generate method. In general, the forward of any seq2seq model expects input_ids (the input for the encoder) and decoder_input_ids (the input for the decoder); T5Model throws this error when the decoder_input_ids are not provided.
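For the generation use case, a minimal sketch (t5-small and the input text are placeholders for your own checkpoint and data):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "translate English to German: The house is wonderful."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# generate() builds the decoder inputs step by step itself,
# which is why this path avoids the decoder-inputs error
output_ids = model.generate(input_ids, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```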
0
huggingface
Models
Summarization task fails with ProphetNet
https://discuss.huggingface.co/t/summarization-task-fails-with-prophetnet/2946
I tried to generate summary for CNN/DM or XSUM using prophetnet by running the following code: (based on the codes from https://github.com/huggingface/transformers/tree/master/examples/seq2seq 1) $ export DATA=cnndm $ export DATA_DIR=data/$DATA $ export OUTPUT_DIR=output/$DATA-prophetnet $ python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py \ --model_name microsoft/prophetnet-large-uncased-cnndm \ --save_dir $OUTPUT_DIR \ --data_dir $DATA_DIR \ --bs 32 \ --task summarization_cnndm Then I received the following error messages: Index < srcSelectDimSize` failed. 0%| | 0/180 [00:01<?, ?it/s] Traceback (most recent call last): File "run_distributed_eval.py", line 281, in <module> run_generate() File "run_distributed_eval.py", line 213, in run_generate **generate_kwargs, File "run_distributed_eval.py", line 123, in eval_data_dir **generate_kwargs, File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/transformers/generation_utils.py", line 483, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/transformers/generation_utils.py", line 85, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/transformers/models/prophetnet/modeling_prophetnet.py", line 1225, in forward hidden_states, attn_probs = encoder_layer(hidden_states, attention_mask=extended_attention_mask) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/transformers/models/prophetnet/modeling_prophetnet.py", line 1051, in forward attention_mask=attention_mask, File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/transformers/models/prophetnet/modeling_prophetnet.py", line 652, in forward query_states = self.query_proj(hidden_states) / (self.head_dim ** 0.5) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 93, in forward return F.linear(input, self.weight, self.bias) File "/home/rachelzheng/acl/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 1692, in linear output = input.matmul(weight.t()) RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` terminate called after throwing an instance of 'std::runtime_error' what(): NCCL error in: /pytorch/torch/lib/c10d/../c10d/NCCLUtils.hpp:136, unhandled cuda error, NCCL version 2.7.8 terminate called after throwing an instance of 'std::runtime_error' what(): NCCL error in: /pytorch/torch/lib/c10d/../c10d/NCCLUtils.hpp:136, unhandled cuda error, NCCL version 2.7.8
Since this generation framework python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py works for other models, including BART, PEGASUS, I am not sure why it fails with ProphetNet here.
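One way to narrow this down (a sketch for debugging, not a fix): run the same checkpoint on a single example outside the distributed script; on CPU the indexing assert usually turns into a readable Python error instead of the CUDA-side message above.

from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration

name = "microsoft/prophetnet-large-uncased-cnndm"
tokenizer = ProphetNetTokenizer.from_pretrained(name)
model = ProphetNetForConditionalGeneration.from_pretrained(name)  # keep on CPU for a readable stack trace

inputs = tokenizer("some article text ...", return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=142, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))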
0
huggingface
Models
T5forConditionalGeneration + classification
https://discuss.huggingface.co/t/t5forconditionalgeneration-classification/2786
I would like to do sequence classification over the encoder in parallel with conditional generation using an auxiliary loss. However, I am confused about which hidden state I should take for the classification. Supposing that the hidden state of the last layer has the following dimensions: [batch size, seq length, hidden size] should I take the last one [:, -1, :] ?
It depends on the model. BERT uses the first one (where the CLS token is), some models use a pooling of all hidden states, others use the hidden state at the last real token (which is not necessarily index -1, since you could have padding). I’d look at what is done in T5ForSequenceClassification and copy the code.
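If you go the pooling route, here is a rough sketch; the wrapper class, the mean-pooling over non-padded encoder positions and the loss weighting are all assumptions, not the only valid choice:

import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration

class T5WithAuxClassifier(nn.Module):
    def __init__(self, model_name="t5-small", num_labels=2, aux_weight=0.5):
        super().__init__()
        self.t5 = T5ForConditionalGeneration.from_pretrained(model_name)
        self.classifier = nn.Linear(self.t5.config.d_model, num_labels)
        self.aux_weight = aux_weight

    def forward(self, input_ids, attention_mask, labels, class_labels):
        out = self.t5(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
        # mean-pool the encoder hidden states over non-padded positions instead of taking [:, -1, :]
        enc = out.encoder_last_hidden_state                      # (batch, src_len, d_model)
        mask = attention_mask.unsqueeze(-1).type_as(enc)
        pooled = (enc * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        logits = self.classifier(pooled)
        cls_loss = nn.functional.cross_entropy(logits, class_labels)
        # combine the generation loss with the auxiliary classification loss
        return out.loss + self.aux_weight * cls_loss, logits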
0
huggingface
Models
Number of epochs in pre-training BERT
https://discuss.huggingface.co/t/number-of-epochs-in-pre-training-bert/1776
Hi, In the BERT paper 21, it says: We train with batch size of 256 sequences (256 sequences * 512 tokens = 128,000 tokens/batch) for 1,000,000 steps, which is approximately 40 epochs over the 3.3 billion word corpus. How does this equation work? What is the unit “word” in “3.3 billion word corpus”? Is it the same as the output of the wc -w command on the entire text corpus? If this unit is a raw token, is there a guarantee that the number of “words” in the entire corpus matches the number of tokens in the whole dataset after data preparation with create_pretraining_data.py 5 (assume the duplicate factor is set to 1)? According to this line of code 3, in a training instance, some WordPiece tokens in the sequence will be dropped from the front or the back if the sequence is longer than the max sequence length. Is this taken into account? If I understand this function 2 correctly, when the next segment gets randomly chosen, the segment that was there before it was swapped with this randomly chosen segment will be “put back.” (here 1) Does this mean that we have more tokens in total because of these randomly chosen segments? (I opened an issue 12 on Google’s repository, but I wanted to ask this in this community as well.)
@go-inoue Did you find an answer to this question? My best guess: 1,000,000 steps equals approx. 40 epochs -> 1e6 / 40 = 25,000 steps per epoch. Each step (iteration) uses a batch size of 128,000 tokens -> 25,000 * 128,000 = 3.2 billion tokens in each epoch. One epoch is equal to one full iteration over the training data. In other words, the training data contains approx. 3.2 billion tokens. I would expect the number of tokens to be higher than the number of words in the training data, given that full stops, commas etc. are separate tokens, and words are sometimes split into several tokens by the BERT tokenizer. Could it be that the 3.3 bn word corpus is split into training, validation and test data? I’m not even sure you have a train/val/test split during pre-training of BERT, given it is unsupervised? Some kind of cross-validation of all data would seem to make more sense?
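A quick sanity check of that arithmetic (using the exact 256 * 512 = 131,072 rather than the paper's rounded 128,000):

steps = 1_000_000
tokens_per_batch = 256 * 512        # 131,072 tokens per step
corpus_tokens = 3.3e9               # "3.3 billion word corpus"
epochs = steps * tokens_per_batch / corpus_tokens
print(round(epochs, 1))             # ~39.7, i.e. "approximately 40 epochs"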
0
huggingface
Models
[Announcement] All model cards will be migrated to hf.co model repos
https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
See PR on transformers repo: https://github.com/huggingface/transformers/pull/9013 25 TL;DR: Model cards are now living inside each huggingface.co 12 model repo, so we’ll migrate the ones in transformers over. For consistency, ease of use and scalability, README.md model cards now live directly inside each model repo on the HuggingFace model hub. How to update a model card You can directly update a model card inside any model repo you have write access to, i.e.: a model under your username namespace a model under any organization you are a part of. You can either: update it, commit and push using your usual git workflow (command line, GUI, etc.) or edit it directly from the website’s UI. What if you want to create or update a model card for a model you don’t have write access to? In that case, given that we don’t have a Pull request system yet on huggingface.co 12, you can open an issue on the transformers repo, post the card’s content, and tag the model author(s) and/or the Hugging Face team. Your early feedback is precious, especially if you’ve contributed model cards before. Please let us know of any suggestions.
Hey @julien-c, I totally support this idea and the upcoming migration! I think this will heavily reduce workload from the team (now they can review other PRs instead of model cards). Simply editing a model card + commit it via browser is really great for productivity, because I no longer have to create a PR. Maybe we could encourage the people to just post a forum thread here to present a potential new model, so that new models are more visible to the community (in the past I just got GitHub notifications for new model card PRs). Another great feature would be a kind of model card “linting” or checker that scans model card READMEs for e.g. correct yaml syntax or if certain metadata tags (like language or licence) have been specified.
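The linting idea could start as a simple check of the README's YAML front matter; a rough sketch of a hypothetical helper (requires PyYAML, and the tag list is only an example):

import re
import yaml

def check_model_card(path):
    # read the README and look for a leading "---" ... "---" YAML block
    text = open(path, encoding="utf-8").read()
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["missing YAML metadata block"]
    meta = yaml.safe_load(match.group(1)) or {}
    # report any expected metadata tags that are absent
    return [f"missing `{key}` tag" for key in ("language", "license") if key not in meta]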
0
huggingface
Models
[Announcement] Model Versioning: Upcoming changes to the model hub
https://discuss.huggingface.co/t/announcement-model-versioning-upcoming-changes-to-the-model-hub/1914
Update: migration is now completed. TL;DR early next week, we will migrate the models stored on the huggingface.co 15 model hub. Accessing models from the library will be transparent and backward-compatible, however the process to upload models is going to be different. Please share your feedback! We host more and more of the community’s models, which is awesome. To scale this sharing, we need to change the infra to both support more models, and unlock new powerful features. To that effect, we have rebuilt the storage backend that we use for models (currently S3), to our own git repos (using S3 as a git-lfs endpoint for large files), with one model = one repo. The benefits of this switch are: built-in versioning (I mean… it’s git. It’s pretty much what you use for versioning. Versioning in S3 has a ton of limitations) access control (will unlock private models, private datasets, etc) scalability (our usage of S3 to maintain lists of models was starting to bottleneck) Let’s dive into the actual changes: I. On the website You’ll now see a “Browse files and versions” tab or button on each model page. (design is not final, we’ll make it more prominent/streamlined in the near future) This is what this page looks like: [screenshot] Here’s a link to check it out directly in a staging env: https://moon-preprod.huggingface.co/julien-c/EsperBERTo-small/tree/main (disabled now that migration is completed) The UX should look familiar and self-explanatory, but we’ll add more ML-specific features in the future (what cool feature ideas do you have for version control for Machine learning?) You can: see commit histories and diffs of changes made to any text file, like config.json: changes made by the HuggingFace team will be way clearer – we can perform updates to the models to ensure they work well with the library(ies) (you’ll be able to opt out from those changes) Large binary files are stored using https://git-lfs.github.com/ 15 which is pretty standard now, and interoperable out of the box with git Ability to update your text files, like your README.md model card, directly on the website! with instant preview II. In the transformers library We are soliciting feedback on the PR to enable this new storage mode in the transformers library: https://github.com/huggingface/transformers/pull/8324 35 This PR has two parts: 1. changes to the file downloading code used in from_pretrained() methods to use the new file URLs. Large files are stored in an S3 bucket and served by Cloudfront so downloads should be as fast as they are right now. In addition, you now have a way to pin a specific version of a model, to a commit hash, tag or branch. For instance: tokenizer = AutoTokenizer.from_pretrained( "julien-c/EsperBERTo-small", revision="v2.0.1" # tag name, or branch name, or commit hash ) Finally, the networking code is more robust and doesn’t gobble up errors anymore, so in case you have trouble downloading a specific file you’ll know exactly why. 2. changes to the model upload CLI to create a model repo then be able to git clone and git push to it. We are intentionally not wrapping git too much because we expect most model authors to be familiar with git (and possibly git-lfs), let us know if not the case. To create a repo: transformers-cli repo create your-model-name Then you’ll get a repo url that you’ll be able to clone: git clone https://huggingface.co/username/your-model-name # Then commit as usual cd your-model-name echo "hello" >> README.md git add .
&& git commit -m "Update from $USER" A nice side effect of the new system on the upload side is that file uploading should be more robust for very large files (hello T5!) as git-lfs handles the networking code. By the way, again, every model is its own repo. So you can git clone any public model if you’d like: git clone https://huggingface.co/gpt2 But you won’t be able to push unless it’s one of your models (or one of your orgs’). Again, please review this PR if possible : https://github.com/huggingface/transformers/pull/8324 35 III. Backward compatibility We intend to merge the PR in transformers next Tuesday morning (November 10). Backward compatibility on model downloads is expected, because even though the new models will be stored in huggingface.co-hosted git repos, we will backport all file changes to S3 automatically. Model uploads using the current system won’t work anymore: you’ll need to upgrade your transformers installation to the next release, v3.5.0, or to build from master. Alternatively, in the next week or so we’ll add the ability to create a repo from the website directly so you’ll be able to push even without the transformers library. Please let us know of your feedback! We are super excited about this change, because it’s going to unlock really powerful features in the future.
Awesome new feature! Can’t wait to test it; versioning is really great, especially for fine-tuned models (that can be improved over time). I would love to see an example of how a “tagged” version can be used. E.g. how can a “v2” tag of a model be used in Transformers then - with something like: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("julien-c/EsperBERTo-small@v2") # specify commit/tag I really like the versioning concept. But how do you “sync” changes between the model card’s README.md and a specific tagged version of the model? E.g. I normally would open a PR for a model card README.md in the Transformers library. Then later I would update the model to a version 2, “tag” the old model with a “v1” tag and update the model card in the Transformers library for version 2. How can I switch back to the version 1 model card that belongs to the tagged model for v1? UX-wise, it would be awesome to have a kind of version switcher (for tags) in a more prominent way.
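On the tag question: rather than a model@v2-style suffix, the revision argument shown in the announcement is how a tag is pinned, e.g.:

from transformers import AutoTokenizer, AutoModelForMaskedLM

# revision accepts a tag name, branch name, or commit hash
tokenizer = AutoTokenizer.from_pretrained("julien-c/EsperBERTo-small", revision="v2.0.1")
model = AutoModelForMaskedLM.from_pretrained("julien-c/EsperBERTo-small", revision="v2.0.1")

And since the model card lives in the same git repo, browsing or checking out the v1 tag also gives you the README.md that belonged to that version.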
0
huggingface
Models
Using Cross-Encoders to calculate similarities among documents
https://discuss.huggingface.co/t/using-cross-encoders-to-calculate-similarities-among-documents/2414
Hello everyone! I have some questions about fine-tuning a Cross-Encoder for a passage/document ranking task. In Bi-Encoders (like DPR 2) we can use Negative Log-Likelihood (NLL) in training, where the similarities are calculated by the dot product between the vectors of the question and the documents. I wonder if we can apply a similar strategy to Cross-Encoders. In other words, we would concatenate a question-passage pair and do a forward pass in BERT. In the end, we obtain a similarity value, and by doing this across several pairs in a training instance (each instance has a question, a positive document, and N negative documents) we would obtain N+1 similarities and then apply the NLL. I have seen a similar approach: training SBERT Cross-Encoders 9. Here they use a model like Hugging Face BertForSequenceClassification 1, set num_labels = 1, and do a forward pass with a pair of question and document. With this setting, the model is doing regression, where the logits are calculated with a linear layer with an output of size = num_labels = 1. After that, they apply BCEWithLogitsLoss and perform backpropagation. To apply the negative log-likelihood I need some sort of similarity value between 0 and 1 for a question and a document, and using BertForSequenceClassification in regression seems to be a step in this direction. Should I 1) replace BCEWithLogitsLoss with just a sigmoid function applied to the logits returned by BertForSequenceClassification to map them to a [0,1] similarity, 2) do this for all documents, 3) compute NLL, and 4) backpropagate? Is there another way to keep their setting and then compute NLL across all documents and backpropagate on NLL? It seems that I will lose the similarity value if I apply the BCEWithLogitsLoss. Cheers!
Hi, just want to note that, besides the bi-encoder modules you mentioned, DPR also has a “reader” module which concatenates the question and passage together and then does cross-attention, like you said. Details in the paper, Section 6 1. Code example for DPRReader (you can see the concatenated input): https://huggingface.co/transformers/model_doc/dpr.html#dprreader 3
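A sketch of the listwise NLL idea with a cross-encoder: score the question paired with each candidate in one batch, take a log-softmax over the candidate scores, and use the negative log-probability of the positive passage as the loss. The base checkpoint and the batch layout are illustrative, not prescriptive.

import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

def listwise_nll(question, passages, positive_idx=0):
    # passages: one positive (at positive_idx) plus N negatives for this training instance
    enc = tokenizer([question] * len(passages), passages, padding=True, truncation=True, return_tensors="pt")
    scores = model(**enc).logits.squeeze(-1)   # one raw relevance score per (question, passage) pair
    log_probs = F.log_softmax(scores, dim=0)   # normalize over the candidate list, not with a per-pair sigmoid
    return -log_probs[positive_idx]

With this formulation the per-pair sigmoid/BCEWithLogitsLoss is simply dropped; the raw logits are compared across candidates, which is exactly what the NLL over a positive-plus-negatives list needs.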
0
huggingface
Models
Pretrain model to classify text as yes, no, not sure
https://discuss.huggingface.co/t/pretrain-model-to-classify-text-as-yes-no-not-sure/2407
hi, do we have any pretrained model to classify text as yes, no, not sure? any help is appreciated thanks
Hi, I think any pretrained binary classifier can do it. We just need a probability output (which you already have from a dense layer with sigmoid activation). Then, we can set our “unsure” threshold based on the data (e.g. if the probability is between 0.4 and 0.6, assign “not sure”).
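For example, a trivial post-processing step on top of any binary classifier's probability (the thresholds are placeholders to tune on your validation data):

def label_from_probability(prob, low=0.4, high=0.6):
    # map a [0, 1] probability to one of the three labels
    if prob >= high:
        return "yes"
    if prob <= low:
        return "no"
    return "not sure"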
0
huggingface
Models
T5 Seq2Seq custom fine-tuning
https://discuss.huggingface.co/t/t5-seq2seq-custom-fine-tuning/1497
I have 2 questions regarding fine-tuning T5: Is there any way to change the lm_head on T5ForConditionalGeneration to initialize it from scratch to support a new vocabulary size? I did it by changing the T5ForConditionalGeneration code and adding a new layer called final_layer, but I was wondering if there is an easier way. Does the T5 generate method use teacher forcing or not?
When you modify the vocab, you also need to resize the token embeddings. The right way to do this is: Add the new tokens to the tokenizer with tokenizer.add_tokens(list_of_new_tokens). Resize the token embeddings with model.resize_token_embeddings(len(tokenizer)). Teacher forcing is used while training. generate does not use teacher forcing; it is not used during training and is meant for generating after training.
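Concretely, the two steps look like this (the checkpoint name and tokens are just examples):

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

new_tokens = ["<new_token_1>", "<new_token_2>"]
tokenizer.add_tokens(new_tokens)                # step 1: extend the vocab
model.resize_token_embeddings(len(tokenizer))   # step 2: resize the embedding matrix (and the tied lm_head)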
0
huggingface
Models
Unable to Process Concurrent User Request
https://discuss.huggingface.co/t/unable-to-process-concurrent-user-request/2385
I am using the https://huggingface.co/ktrapeznikov/biobert_v1.1_pubmed_squad_v2/tree/main# 1 as the question answering model. I am sending around 50 abstracts from PubMed and then asking the question. It is working fine for 1 user, but when scaled to 10 concurrent users the model takes too long. Can anybody help?
There’s a number of avenues that you could use to reduce inference time: Scale your deployment vertically or horizontally. Move to a smaller model. Improve your preprocessing/postprocessing efficiency.
0
huggingface
Models
How to run t5-3b or t5-11b on Google Ai Notebook?
https://discuss.huggingface.co/t/how-to-run-t5-3b-or-t5-11b-on-google-ai-notebook/2278
Hey everyone, I’m curious to try either the 3B or even the big 11B T5 model (preferably in the pipeline) for summarization. Are these two T5 models usable via the pipeline? (my understanding is they are, but maybe I’m failing already at this point) If they are, my guess is that my VM configuration must not be enough because I can only load “t5-large” on both a google Notebook (15GB vCPU, 1xTesla T4) or my datalore pro account (16GB RAM, 1xTesla T4) Is this enough for the 3B model, or do I need at least 2 GPUs / 32vCPU’s? Best, Enrico
Hi Enrico, More information about using t5-3b and t5-11b is available on this notebook from the authors: colab.research.google.com Google Colaboratory 64 Looks like you’ll need to pay for a more performant system.
0
huggingface
Models
Further pre-train language model in transformers like BERT
https://discuss.huggingface.co/t/further-pre-train-language-model-in-transformers-like-bert/2198
Hi all, yesterday through a workshop I learned about this forum. So I have the following question: is it possible to further pre-train transformers (e.g. BERT, DistilBERT) using my own corpus? I mean not for the downstream task, but the language model (e.g. the BERT tasks MLM and NSP) itself? Is it possible in general, and with huggingface specifically? Thank you. best regards LIza
Hi @lizzzi111, nice to see you here Yes it’s possible. Examples and readme to do so are here: https://github.com/huggingface/transformers/tree/master/examples/language-modeling 15
0
huggingface
Models
Custom data loaded BERT
https://discuss.huggingface.co/t/custom-data-loaded-bert/2184
is this custom data loader for BERT correct? I am getting an error for the datatype with this code (marked line):

def __init__(self, path, use_tokenizer, max_sequence_len=None):
    df = pd.read_csv(path)
    texts = []
    labels = []
    for index, row in df.iterrows():
        source = row['source']
        pubDate = row['pubDate']
        author = row['author']
        title = row['title']
        content = row['content']
        text.append((source, pubDate, author, title, content))  # <-- the line in question
        label_id = row['label']
        # Save encode labels.
        labels.append(label_id)

the error is shown in the attached screenshot.
We don’t have enough code in your snippet to understand what is happening. Can you share more?
0
huggingface
Models
Suggestion: Ability to Leave Comments Under Models
https://discuss.huggingface.co/t/suggestion-ability-to-leave-comments-under-models/2141
Suggestion: Ability to Leave Comments Under Models
My fear is that this will lead to clutter, and unnecessary comments about god-knows-what. I’d prefer that uploaders specify a way to get in touch if they want to, via Github repositories or social media, or just email. What is your use-case? Why do you want comments?
0
huggingface
Models
Issue in using trainer class for Finetuning GPT-2
https://discuss.huggingface.co/t/issue-in-using-trainer-class-for-finetuning-gpt-2/2155
While using the Trainer class to finetune GPT-2 on a Hindi dataset, it outputs the following error:

TypeError Traceback (most recent call last)
<ipython-input-44-3435b262f1ae> in <module>()
----> 1 trainer.train()

5 frames
/usr/local/lib/python3.6/dist-packages/transformers/data/datasets/language_modeling.py in __getitem__(self, i)
     99
    100     def __getitem__(self, i) -> torch.Tensor:
--> 101         return torch.tensor(self.examples[i], dtype=torch.long)
    102
    103

TypeError: an integer is required (got type NoneType)

Here is the link: https://colab.research.google.com/drive/1um5UeY9hasmjPNcR1WkBe2uDDFhLUBrX?usp=sharing 7
@valhalla, hey, can you suggest what might be the problem? I am using the Trainer class on Hindi-trained GPT-2. I have also shared the Colab.
0
huggingface
Models
Cannot import newly uploaded model
https://discuss.huggingface.co/t/cannot-import-newly-uploaded-model/2140
Hi, I imported a new model at https://huggingface.co/microsoft/SportsBERT 1 but I can’t import the model. I used the below commands from transformers import AutoTokenizer, AutoModel, BertTokenizer, BertModel tokenizerinp = AutoTokenizer.from_pretrained(“microsoft/SportsBERT”) modelinp = AutoModel.from_pretrained(“microsoft/SportsBERT”) and received the below error. OSError: Can’t load config for ‘microsoft/SportsBERT’. Make sure that: ‘microsoft/SportsBERT’ is a correct model identifier listed on ‘https://huggingface.co/models 3’ or ‘microsoft/SportsBERT’ is the correct path to a directory containing a config.json file Kindly let me know how I could resolve this issue.
prithvisrinivasan: from transformers import AutoTokenizer, AutoModel, BertTokenizer, BertModel tokenizerinp = AutoTokenizer.from_pretrained("microsoft/SportsBERT") modelinp = AutoModel.from_pretrained("microsoft/SportsBERT") Which version of transformers are you using? With 3.5.0, the default place for models and URL resolving was changed (to enable faster iteration on your models, which are now git repositories with LFS for the model files). If you used an earlier version, then we backport the models, but with a batching job that runs every hour. It could explain the behavior you saw. Is that it?
0
huggingface
Models
How can I do text Summarization using ProphetNet
https://discuss.huggingface.co/t/how-can-i-do-text-summarization-using-prophetnet/1661
ProphetNet is now integrated: huggingface.co ProphetNet — transformers 3.4.0 documentation 23 Any idea how to use the model for text summarization?
ProphetNet automatically shifts the tokens, so you can compute the loss as follows: prophetnet = ProphetNetForConditionalGeneration.from_pretrained(...) loss = prophetnet(input_ids=tokenized_article, labels=tokenized_summary).loss
0
huggingface
Models
Multilingual T5 Model Not Found?
https://discuss.huggingface.co/t/multilingual-t5-model-not-found/1892
I was not able to see a multilingual T5 model on the multilingual models page of Hugging Face 3, but I can see the multilingual T5 model on the Google Research 3 page. When will I be able to see the model available in the Hugging Face repo?
I think that the mT5 is pretty new (published approx 2 weeks ago), maybe it will be available soon on Hugging Face, I hope so.
0
huggingface
Models
AutoModelForQuestionAnswering : TypeError: __init__() got an unexpected keyword argument ‘return_dict’
https://discuss.huggingface.co/t/automodelforquestionanswering-typeerror-init-got-an-unexpected-keyword-argument-return-dict/1943
When I run this , model = AutoModelForQuestionAnswering(“bert-large-uncased-whole-word-masking-finetuned-squad”,return_dict = True) I get the follwoing error TypeError Traceback (most recent call last) in () 1 tokenizer = AutoTokenizer.from_pretrained(“bert-large-uncased-whole-word-masking-finetuned-squad”) ----> 2 model = AutoModelForQuestionAnswering(“bert-large-uncased-whole-word-masking-finetuned-squad”,return_dict = True) TypeError: init() got an unexpected keyword argument ‘return_dict’
This means your version of transformers is too old, you should do an upgrade!
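For reference, after upgrading, the loading call should also go through from_pretrained (note that the snippet in the question instantiates the class directly, which would fail on any version); a sketch:

from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name, return_dict=True)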
0
huggingface
Models
I meet the zero gradient descent
https://discuss.huggingface.co/t/i-meet-the-zero-gradient-descent/2023
I want to use transformers to do text classification. I want to code it myself rather than use TFBertForSequenceClassification, so I wrote the model with TFBertModel and tf.keras.layers.Dense, but there is no gradient descent in my code. I tried to find what is wrong with my code but I can’t. So I submit this issue to ask for some help. My code is here: Model: [links in the original post]. I know the train data is the test data, just for a quick debug. And when I train this model: [screenshot]
maybe @jplu has some idea here. If you could share a google colab exhibiting the behavior it would be a lot better than screen caps.
0
huggingface
Models
GPT 2.5-open source
https://discuss.huggingface.co/t/gpt-2-5-open-source/1982
What are the best models for text generation, similar to GPT2, but newer and open sourced? (Opposed to GPT 3)
It depends on what type of generation you want to do. T5, BART, Pegasus, Grover, and CTRL are good models, for instance.
0
huggingface
Models
A question about the modeling_bart.py
https://discuss.huggingface.co/t/a-question-about-the-modeling-bart-py/1930
Hello, I have a question about one part. github.com huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/modeling_bart.py#L940 4

    decoder_padding_mask,
    decoder_causal_mask=causal_mask,
    past_key_values=past_key_values,
    use_cache=use_cache,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
    return_dict=return_dict,
)
if not return_dict:
    return decoder_outputs + encoder_outputs
return Seq2SeqModelOutput(
    last_hidden_state=decoder_outputs.last_hidden_state,
    past_key_values=decoder_outputs.past_key_values,
    decoder_hidden_states=decoder_outputs.hidden_states,
    decoder_attentions=decoder_outputs.attentions,
    cross_attentions=decoder_outputs.cross_attentions,
    encoder_last_hidden_state=encoder_outputs.last_hidden_state,
    encoder_hidden_states=encoder_outputs.hidden_states,
    encoder_attentions=encoder_outputs.attentions,

Why does BartModel return decoder_outputs + encoder_outputs and not just decoder_outputs? Thank you.
Because when you do generation, you usually do a single pass through the encoder and reuse its output for the subsequent token generation steps for efficiency, so you need to be able to access the encoder output from the first forward pass.
0
huggingface
Models
RAG Retriever : Exact vs. Compressed Index?
https://discuss.huggingface.co/t/rag-retriever-exact-vs-compressed-index/1922
Hi guys, with the command retriever = RagRetriever.from_pretrained("facebook/rag-token-base", index_name="compressed", use_dummy_dataset=True) we can choose index_name as either “compressed” or “exact”; what is the difference between these two? I also found in some topic (I could not find it now) that @lhoestq suggested using the “compressed” index to match the performance of the paper; why is that the case? Query: @lhoestq
Hi ! exact vs compressed refers to the quantization used for the FAISS index. The compressed one uses an IVF index with product quantization and requires significantly less RAM than the exact one. To reproduce the RAG papers result you will need the exact one though. Note that I will update this week the parameters of both index so that the exact one uses the same as RAG’s paper, and also to have an optimized compressed one.
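For reference, switching the call from the question over to the exact index is just a change of index_name (keeping use_dummy_dataset=True here for a quick test; dropping it loads the full wiki_dpr index, which needs a lot of RAM):

from transformers import RagRetriever

# exact (uncompressed) FAISS index: needed to reproduce the paper's numbers, higher memory footprint
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
)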
0
huggingface
Models
TinyReformer/TinyLongformer details
https://discuss.huggingface.co/t/tinyreformer-tinylongformer-details/1757
Hi @patrickvonplaten I was just wondering if you could share any benchmarking or information on the tiny reformer/longformer models you trained. Which models are they distillations of? Have you benchmarked their performance at all? I am looking to do something similar but was hoping to get the details of these models before progressing.
I’m also wondering if you have any insight into why bert-base is so often used as the teacher model for the DistilBERT/TinyBERT models. I saw one paper on RoBERTa that really suggested teaching from a large model would make more sense, I believe.
0