Dataset columns: docs (string, 4 classes), category (string, 3–31 chars), thread (string, 7–255 chars), href (string, 42–278 chars), question (string, 0–30.3k chars), context (string, 0–24.9k chars), marked (int64, 0–1).
huggingface
🤗Transformers
Difference betweeen DistilBertTokenizerFast and DistilBertTokenizer?
https://discuss.huggingface.co/t/difference-betweeen-distilberttokenizerfast-and-distilberttokenizer/5961
Hi everyone, I’m trying to understand the difference between DistilBertTokenizerFast and DistilBertTokenizer. From the documentation it looks like DistilBertTokenizerFast will “Construct a “fast” DistilBERT tokenizer (backed by HuggingFace’s tokenizers library).” whereas DistilBertTokenizer will “Construct a DistilBERT tokenizer.”, but I don’t understand what that means. This is what I found in the documentation: Tokenizer — transformers 4.5.0.dev0 documentation. To the best of my understanding the difference between a “fast” and a “non-fast” tokenizer is computation speed, but there is no functional difference between them. Please correct me if I’m headed in the wrong direction. Any help will be greatly appreciated. Thank you, Ayala
@ayalaall I also have the same thought as you. I also want to know about this. Are they the same in terms of function?
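A quick way to see the (lack of) functional difference is to compare the two classes directly; a minimal sketch, assuming the distilbert-base-uncased checkpoint:

```python
from transformers import DistilBertTokenizer, DistilBertTokenizerFast

slow = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
fast = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

text = "Tokenizers are fun!"

# For ordinary inputs both produce the same input_ids; the fast version is
# backed by the Rust tokenizers library and is simply faster on large corpora.
print(slow(text)["input_ids"] == fast(text)["input_ids"])

# One practical difference: only the fast tokenizer supports extras such as
# character offset mappings.
print(fast(text, return_offsets_mapping=True)["offset_mapping"])
```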
0
huggingface
🤗Transformers
Is T5 expected to ignore padding tokens in `decoder_input_ids` when `decoder_attention_mask` is not provided
https://discuss.huggingface.co/t/is-t5-expected-to-ignore-padding-tokens-in-decoder-input-ids-when-decoder-attention-mask-is-not-provided/10271
I’m currently trying to train a T5ForConditionalGeneration model for a seq2seq task, and I was wondering if we can expect T5 to internally ignore (e.g. by generating attention mask) padding tokens in decoder_input_ids if we don’t explicitly provide decoder_attention_mask? I noticed from the code that T5 simply creates an attention mask of all 1s if decoder_attention_mask is not provided, so it seems we’re attending to padding tokens. I also ran a sanity check to see if providing decoder_attention_mask had any meaningful difference for the logits and saw that it does matter. So I’m wondering if this is by design, because it doesn’t seem to make sense to attend to padding tokens for batched passes. Below is the sanity check that I ran (I know decoder_input_ids is supposed to be different from input_ids normally, but figured it’s not important for this particular issue). import torch import transformers model = transformers.T5ForConditionalGeneration.from_pretrained("t5-base") tokenizer = transformers.T5Tokenizer.from_pretrained("t5-base") model.cuda() model.eval() texts = ["This is a test input.", "This is a test input to test T5 padding scheme."] input_ids = tokenizer(texts, padding=True, truncation=True, return_tensors="pt") input_ids.to("cuda") with torch.inference_mode(): # Shift decoder input ids to the right decoder_input_ids = model._shift_right(input_ids.input_ids) # Manually give correct attention mask with_attn_mask_logits = model( input_ids=input_ids.input_ids, attention_mask=input_ids.attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=torch.cat(( torch.tensor([[1], [1]], device="cuda"), input_ids.attention_mask[:, :-1]), dim=1 ) ).logits # Give attention mask of all 1 explicitly all_1_attn_mask_logits = model( input_ids=input_ids.input_ids, attention_mask=input_ids.attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=torch.ones(decoder_input_ids.shape, device="cuda"), ).logits # Do not give an attention mask at all. no_attn_mask_logits = model( input_ids=input_ids.input_ids, attention_mask=input_ids.attention_mask, decoder_input_ids=decoder_input_ids, ).logits print(torch.all(torch.isclose(with_attn_mask_logits, no_attn_mask_logits)).item()) # False print(torch.equal(with_attn_mask_logits, no_attn_mask_logits)) # False print(torch.all(torch.isclose(all_1_attn_mask_logits, no_attn_mask_logits)).item()) # True print(torch.equal(all_1_attn_mask_logits, no_attn_mask_logits)) # True
Good catch. I think that in the decoder, one attends to all tokens (including padding), but as one needs to set the labels of the padding tokens to -100, they are not taken into account by the loss function. But it’s weird indeed, as in the encoder one uses the attention mask to ignore padding tokens when calculating the attention scores. cc @patrickvonplaten @valhalla
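To make the loss ignore the padded positions, the usual recipe is to replace the pad token ids in the labels with -100 (this is separate from the decoder_attention_mask question above). A minimal sketch, assuming t5-base:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

inputs = tokenizer(["input one", "a second, longer input"], padding=True, return_tensors="pt")
targets = tokenizer(["short target", "a somewhat longer target sentence"], padding=True, return_tensors="pt")

# Replace pad token ids with -100 so the cross-entropy loss ignores padded positions.
labels = targets.input_ids.clone()
labels[labels == tokenizer.pad_token_id] = -100

loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
print(loss)
```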
0
huggingface
🤗Transformers
Usind a fine-tuned sentence completion model in a Masked LM task
https://discuss.huggingface.co/t/usind-a-fine-tuned-sentence-completion-model-in-a-masked-lm-task/10227
Is it possible to use a BERT model that has been fine-tuned already (e.g. SQUAD-tuned BERT) on a masked LM task? I suspect that the sentence-completion model that is added on top of BERT is fundamentally incompatible with a masked LM task, but I’d like to know for a fact. I’ve attempted to do this, using: from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("deepset/bert-base-cased-squad2") model = AutoModelForMaskedLM.from_pretrained("deepset/bert-base-cased-squad2") but the results are very bad, so maybe I’m missing a step. Thanks!
Hi, that’s not really possible, unless the model has a language modeling head that has been trained. If that’s not the case, it will load a randomly initialized language modeling head, which gives random predictions. The "deepset/bert-base-cased-squad2" checkpoint has a fine-tuned question-answering head, but not a trained language modeling head.
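For comparison, a checkpoint that does ship with a trained masked-LM head works out of the box; a minimal sketch using bert-base-cased:

```python
from transformers import pipeline

# bert-base-cased was pre-trained with a masked-LM objective, so its LM head is
# trained (unlike the SQuAD checkpoint, whose LM head would be randomly initialized).
fill_mask = pipeline("fill-mask", model="bert-base-cased")
print(fill_mask("The capital of France is [MASK]."))
```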
0
huggingface
🤗Transformers
How to use BART as an encoder and a decoder separately for summarization?
https://discuss.huggingface.co/t/how-to-use-bart-as-an-encoder-and-a-decoder-separately-for-summarization/10109
Hi! I am a newcomer to huggingface. I wanted to modify the encoder outputs from BART’s encoder and apply some operations on them, and then generate tokens from the decoder step by step. But I am not able to find any BARTEncoder model in huggingface, nor a BARTDecoder, so how should I go about it?
You might need to modify the encoder and/or the decoder in this: https://huggingface.co/transformers/_modules/transformers/models/bart/modeling_bart.html#BartModel Copy-paste the entire file, save it locally as ‘modifiedbart.py’, and call it instead of the default “BARTModel” in transformers.
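An alternative that avoids copying the file is to grab the encoder via get_encoder(), modify its output, and feed it back through the encoder_outputs argument; a rough sketch (not the poster's exact setup), assuming facebook/bart-large-cnn as the checkpoint:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
model.eval()

inputs = tokenizer("A long article to summarize ...", return_tensors="pt")

# Run the encoder on its own and apply a custom operation to its hidden states.
encoder = model.get_encoder()
hidden = encoder(input_ids=inputs.input_ids,
                 attention_mask=inputs.attention_mask).last_hidden_state
modified = BaseModelOutput(last_hidden_state=hidden * 1.0)  # your custom operation here

# Step the decoder manually (greedy decoding) on top of the modified encoder states.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    for _ in range(30):
        logits = model(encoder_outputs=modified,
                       attention_mask=inputs.attention_mask,
                       decoder_input_ids=decoder_input_ids).logits
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if next_token.item() == model.config.eos_token_id:
            break

print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))
```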
0
huggingface
🤗Transformers
Text-to-feature FinBERT for regression
https://discuss.huggingface.co/t/text-to-feature-finbert-for-regression/10186
I need to make a feature extractor for a project, so I am able to translate a given financial statement (text) into a vector that can be used as features in my main problem. I am currently doing revenue forecasting. I use historical fundamentals data in addition to stock prices in order to predict revenue growth for next quarter (regression problem). In addition I use text data (financial statements) where I want to use BERT in order to get new features for my regression model. That is, the vector from the BERT feature extraction will later be combined with several other values (fundamentals and stock price data) for the final prediction (next quarter revenue growth) in e.g. a random forest or XGBoost model. I want to try both a FINE-TUNED FinBERT model and a PRE-TRAINED FinBERT MODEL and compare. But how do I fine-tune the FinBERT model on my dataset (regression problem) and then use that new FinBERT model to do the feature extraction?
You can easily get a feature vector for a given piece of text as follows: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("ProsusAI/finbert") model = BertModel.from_pretrained("ProsusAI/finbert") text = "hello world" encoding = tokenizer(text, return_tensors="pt") # forward pass outputs = model(**encoding) # get feature vector feature_vector = outputs.last_hidden_state[:,0,:] Here I’m taking the final hidden state of the [CLS] token, which serves as a good representation of an entire piece of text. This is a vector of size (768,) for BERT-base-sized models. I’m not sure what you mean by fine-tuning a BERT model on your dataset. You can fine-tune BERT on a regression problem, where the inputs are text and the outputs are floats. Is this what you want to do?
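As for the fine-tuning part of the question, one hedged sketch is to load FinBERT into BertForSequenceClassification with a single regression output, train it on (text, float) pairs, and afterwards extract features from its bert encoder as above. The problem_type and ignore_mismatched_sizes arguments assume a reasonably recent transformers version:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("ProsusAI/finbert")
# num_labels=1 -> a single scalar output trained with an MSE loss; the original
# 3-class sentiment head is dropped, hence ignore_mismatched_sizes is needed.
model = BertForSequenceClassification.from_pretrained(
    "ProsusAI/finbert",
    num_labels=1,
    problem_type="regression",
    ignore_mismatched_sizes=True,
)

encoding = tokenizer("Revenue grew 12% quarter over quarter.", return_tensors="pt")
labels = torch.tensor([[0.12]])  # hypothetical regression target

outputs = model(**encoding, labels=labels)
print(outputs.loss)  # MSE loss to minimize during fine-tuning
```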
0
huggingface
🤗Transformers
What is the classification head doing exactly?
https://discuss.huggingface.co/t/what-is-the-classification-head-doing-exactly/10138
Hello, When using a transformer model for text classification, one usually loads a model and then uses AutoModelForSequenceClassification to train the classifier over the N classes in the data. My question: which model is actually used for classification? Is it a logistic model (which uses the CLS representation as input)? In the case of several classes (say bad, neutral, good) the usual methodology in machine learning is to train several one-vs-all classifiers and then predict the label with the most votes. Is this what is happening under the hood with huggingface? Thanks!
@nielsr I would be curious to have your take on this, if you have a few moments. Your comments have been incredibly useful so far. Thanks!
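For reference, a simplified sketch of what the sequence classification head in models like BertForSequenceClassification does: a dropout plus one linear layer over the pooled [CLS] representation, trained with a single softmax cross-entropy over all N classes (so no one-vs-all ensemble):

```python
import torch
from torch import nn

class ClassificationHead(nn.Module):
    """Rough equivalent of the head BertForSequenceClassification adds on top of BERT."""

    def __init__(self, hidden_size=768, num_labels=3, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled_output, labels=None):
        logits = self.classifier(self.dropout(pooled_output))
        loss = None
        if labels is not None:
            # One multi-class cross-entropy over all labels, not one-vs-all voting.
            loss = nn.CrossEntropyLoss()(logits, labels)
        return loss, logits

head = ClassificationHead()
pooled = torch.randn(2, 768)  # stand-in for BERT's pooled [CLS] output
loss, logits = head(pooled, labels=torch.tensor([0, 2]))
```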
0
huggingface
🤗Transformers
Uploading files larger than 5GB to model hub
https://discuss.huggingface.co/t/uploading-files-larger-than-5gb-to-model-hub/4081
I want to upload ctrl to the model hub. I have followed the instructions from the documentation and it seems that they are only applicable to smaller models (<5GB). Issues have been raised here and here but it still seems unresolved. I followed the approach shared by @julien-c (aws s3 cp …) but got the error Unable to locate credentials, indicating that those instructions are for HF staff. I am getting the error could not push some refs as the 5GB limit is crossed. Can you please suggest a workaround if one exists? @patrickvonplaten
Did you install git-lfs? git lfs install You also have to install our custom transfer agent for files >5GB: transformers-cli lfs-enable-largefiles Let me know if this helps!
0
huggingface
🤗Transformers
Model weights warning while loading any model from HuggingFace models
https://discuss.huggingface.co/t/model-weights-warning-while-loading-any-model-from-huggingface-models/10147
Hi, I am trying to load a model from the huggingface hub and it throws me an initialization warning. It happens while trying to load any kind of model. I am not able to understand why this is happening. Can anyone please help me out here? [screenshot of the warning, 1816×226]
This warns you that some of the model weights were not found in the checkpoint and were therefore randomly initialized. This is because you are loading a model trained for masked language modeling (bert-base-cased) into a model for sequence classification, so the classification head is randomly initialized. The warning ends by telling you that your model needs to be trained because of that random initialization.
0
huggingface
🤗Transformers
Error while converting hf model to onnx
https://discuss.huggingface.co/t/error-while-converting-hf-model-to-onnx/10012
Hi everyone, I’m converting a finetuned bert model from huggingface to onnx (following this post). Transformers version: 4.10.2 When I run this on a terminal: python -m transformers.onnx --model=/path/to/checkpoint output=/tmp I get this error: line 143, in validate_model_outputs from onnxruntime import InferenceSession, SessionOptions ModuleNotFoundError: No module named 'onnxruntime' I tried installing onnxruntime-tools but I still get the same error. Any ideas?
The package to install was onnxruntime
0
huggingface
🤗Transformers
Doing classification 100% from scratch?
https://discuss.huggingface.co/t/doing-classification-100-from-scratch/9699
Hello there! I have a quite specific corpus and I wanted to try an approach where I do everything from scratch (well, almost!). Specifically, I was thinking about: 1. training a language model from scratch using my own corpus and creating my own tokenizer (Huggingface provides two colabs for that; see the Google Colab for the LM). 2. Then, simply loading the language model created in step 1 and fine-tuning it for text classification using the usual imports: from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("my_language_model") #do some classification #save final model Does that make sense? Are these the right conceptual steps? Thanks!
Yes, those are the right steps: first pre-training, then fine-tuning a head. However, note that you need a relatively big corpus in order for pre-training to be effective. If you really have some in-domain data that’s different from the corpora that were used to pre-train models like BERT and RoBERTa, then it might be useful to do it. Examples are BioBERT (pre-trained on biomedical language), SciBERT (pre-trained on scientific text), etc.
0
huggingface
🤗Transformers
What to use for the target input in the decoder for autoregressive usage
https://discuss.huggingface.co/t/what-to-use-for-the-target-input-in-the-decoder-for-autoregressive-usage/10037
I want to use a transformer (encoder and decoder) for seq2seq modeling, and I’d like to use it as an autoregressive model at inference time. I know that I should feed the targets to the decoder during training, but the issue is that when I do this my model overfits and performs well only if I have good targets as the input of the decoder. I was wondering if there is a trick or rule for how I should select the target for the decoder? My second question is whether the target inputs to the decoder should be exactly the same as the target outputs of the decoder, or whether it is kind of an art to choose the right target input for the decoder?
Encoder-decoder (seq2seq) models like T5, BART and PEGASUS are trained using what is called “teacher forcing”; this just means supervised learning, i.e. the model needs to produce the target sentence given the source text. Normally, if your dataset is diverse enough, it will also perform well at inference time, when using model.generate(). Using decoding techniques such as beam search and top-k sampling, you can get good results, even on unseen inputs.
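A minimal sketch of the inference side with generate(), using t5-small as a stand-in checkpoint:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: The house is wonderful.",
                      return_tensors="pt").input_ids

# Beam search decoding
beam_ids = model.generate(input_ids, max_length=40, num_beams=4, early_stopping=True)

# Top-k sampling
sampled_ids = model.generate(input_ids, max_length=40, do_sample=True, top_k=50)

print(tokenizer.decode(beam_ids[0], skip_special_tokens=True))
print(tokenizer.decode(sampled_ids[0], skip_special_tokens=True))
```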
0
huggingface
🤗Transformers
Customizing the ordering of training samples
https://discuss.huggingface.co/t/customizing-the-ordering-of-training-samples/10038
I am using BERT to classify text that (in many cases) greatly exceeds 512 tokens. What I am doing is splitting the text into segments of 512 tokens and using those as the training samples instead. However, the segments pertaining to the same sequence will of course be quite similar to one another, and therefore some bias towards longer sequences is introduced. And assuming that the huggingface library randomly samples from the training set, segments from the longer sequences are more likely to be chosen. What I want to do is override the ordering such that a segment from each sequence is used before sampling another segment from a sequence that has already been used. For shorter sequences, it is okay if the same segment needs to be sampled multiple times. For example if I have three sequences, each composed of three segments, say: [[‘a’, ‘b’, ‘c’], [‘d’, ‘e’, ‘f’], [‘g’, ‘h’, ‘i’]], then I would want a potential training order to be something like: ‘b’, ‘f’, ‘i’, ‘a’, ‘d’, ‘h’, ‘c’, ‘e’, ‘g’ How can I accomplish such a task?
Hi, What you can do is the same as what is explained in this tutorial: using a sliding window approach. This means that you create multiple training examples for a given text, by sliding a window (with some overlap) across the text. You can then label each training example with the label of the text. In this way, you have multiple training examples. You just need to add an additional return_overflowing_tokens=True when calling the tokenizer. Next, you can create a standard PyTorch dataloader, setting shuffle=True. This will automatically randomize all training examples, either coming from the same text or not.
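A minimal sketch of the sliding window plus shuffling recipe (texts and labels are placeholders), assuming a fast tokenizer such as bert-base-uncased:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

texts = ["a very long document ...", "another long document ..."]  # placeholders
labels = [0, 1]

encodings = tokenizer(
    texts,
    max_length=512,
    truncation=True,
    stride=128,                      # overlap between consecutive windows
    return_overflowing_tokens=True,  # one training example per window
    padding="max_length",
)

# overflow_to_sample_mapping says which original document each window came from,
# so the document label can be copied to every one of its windows.
window_labels = [labels[i] for i in encodings["overflow_to_sample_mapping"]]

dataset = TensorDataset(
    torch.tensor(encodings["input_ids"]),
    torch.tensor(encodings["attention_mask"]),
    torch.tensor(window_labels),
)
loader = DataLoader(dataset, batch_size=8, shuffle=True)  # shuffle randomizes all windows
```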
0
huggingface
🤗Transformers
How to jit.trace gpt-neo-125mb
https://discuss.huggingface.co/t/how-to-jit-trace-gpt-neo-125mb/10032
Right now I'm doing this: inputs = torch.tensor([tokenizer.encode("The Manhattan bridge")]) traced_script_module = torch.jit.trace(model, inputs) And got this error: Tracer cannot infer type of CausalLMOutputWithPast(loss=None, logits=tensor([[[ -7.3835, -6.2460, -8.1929, ...
cc @valhalla
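One approach that usually gets around the CausalLMOutputWithPast error is to load the model with torchscript=True, which makes it return plain tuples that the tracer can handle; a hedged sketch:

```python
import torch
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
# torchscript=True switches the model to returning tuples instead of ModelOutput
# objects, which torch.jit.trace cannot infer.
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M", torchscript=True)
model.eval()

inputs = torch.tensor([tokenizer.encode("The Manhattan bridge")])
traced = torch.jit.trace(model, inputs)
torch.jit.save(traced, "gpt_neo_traced.pt")
```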
0
huggingface
🤗Transformers
Make bert inference faster
https://discuss.huggingface.co/t/make-bert-inference-faster/9930
Hey everyone! I’m currently using gbert from huggingface to do sentence similarity. The dataset is nearly 3M The encoding part is taking too long. for sentence in list(data_dict.values()): tokens = {'input_ids': [], 'attention_mask': []} new_tokens = tokenizer.encode_plus(sentence, max_length=512, truncation=True, padding='max_length', return_tensors='pt', return_attention_mask=True) tokens['input_ids'].append(new_tokens['input_ids'][0]) tokens['attention_mask'].append(new_tokens['attention_mask'][0]) # reformat list of tensors into single tensor tokens['input_ids'] = torch.stack(tokens['input_ids']) tokens['attention_mask'] = torch.stack(tokens['attention_mask']) outputs = model(**tokens) # takes too long embeddings = outputs[0] Can someone advise me how to speed up this process? Is it possible to run outputs = model(**tokens) on GPU? Would converting the model into an onnx help? Thank you!
Hi, Looking at your code, you can already make it faster in two ways: by (1) batching the sentences and (2) by using a GPU, indeed. Deep learning models are always trained in batches of examples, hence you can also use them at inference time on batches. The tokenizer also supports preparing several examples at a time. Here’s a code example: from transformers import BertTokenizer, BertForSequenceClassification import torch model_name = "nlptown/bert-base-multilingual-uncased-sentiment" tokenizer = BertTokenizer.from_pretrained(model_name) model = BertForSequenceClassification.from_pretrained(model_name) sentences = ["I like this movie a lot", "This movie is super bad"] encoding = tokenizer(sentences, max_length=512, truncation=True, padding='max_length', return_tensors='pt') # set device to GPU device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # put model on GPU (is done in-place) model.to(device) # put data on GPU for k,v in encoding.items(): encoding[k] = v.to(device) # forward pass outputs = model(**encoding) # get predicted class indices predicted_class_indices = outputs.logits.argmax(-1).tolist() # turn into actual class names predicted_classes = [model.config.id2label[label] for label in predicted_class_indices] print(predicted_classes) If you want to speed it up even more, then you can indeed look at converting your trained model to ONNX.
0
huggingface
🤗Transformers
Training loss is not decreasing using TFBertModel
https://discuss.huggingface.co/t/training-loss-is-not-decreasing-using-tfbertmodel/9943
I have used the TFBertModel and AutoModel from the transformer library for training a two-class classification task and the training loss is not decreasing. bert = TFBertModel.from_pretrained('bert-base-uncased') input_ids = tf.keras.layers.Input(shape=(SEQ_LEN,), name='input_ids', dtype='int32') mask = tf.keras.layers.Input(shape=(SEQ_LEN,), name='attention_mask', dtype='int32') embeddings = bert(input_ids, attention_mask=mask)[1] X = tf.keras.layers.Dropout(0.1)(embeddings) X = tf.keras.layers.Dense(128, activation='relu')(X) y = tf.keras.layers.Dense(1, activation='sigmoid', name='outputs')(X) bert_model = tf.keras.Model(inputs=[input_ids, mask], outputs=y) bert_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) But when I use the TFBertForSequenceClassification model the model converges fast and the training loss reaches zero. bert_model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased',num_labels=2) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5,epsilon=1e-08) bert_model.compile(loss=loss, optimizer=optimizer, metrics=[metric]) I want to use the sequence output of BERT and hence I need to load the model with TFBertModel or something similar which returns the outputs of BERT. @Rocketknight1
Your code in the first block looks like it would work. I suspect the problem is one of two things: The dropout may prevent the model from fully converging. The default learning rate for Adam is 1e-3, which is much too high for training Transformer models. Try learning rates in the range 1e-5 to 1e-4. If training loss is still not decreasing even with a lower learning rate and no dropout then let me know and I’ll investigate further.
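Concretely, the second suggestion just means recompiling the first model (the bert_model built in the question) with a smaller learning rate, for example:

```python
import tensorflow as tf

# Adam's default 1e-3 is too aggressive for fine-tuning BERT; 2e-5 is a common choice.
bert_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss='binary_crossentropy',
    metrics=['accuracy'],
)
```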
0
huggingface
🤗Transformers
Can we use Gradient Checkpointing and Gradient Accumulation at Once?
https://discuss.huggingface.co/t/can-we-use-gradient-checkpointing-and-gradient-accumulation-at-once/9944
Hi, I’m trying to train with a large batch size for my model, so can I use Gradient Checkpointing and Gradient Accumulation at once? I’m not sure that gradients would be safely added when checkpointing is done. P.S.: would it be okay to use multi-GPU + Gradient Checkpointing + Gradient Accumulation at once?
Yes, those two techniques can be used together, and with distributed training as well.
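A minimal sketch of combining both with the Trainer (the gradient_checkpointing_enable call assumes a reasonably recent transformers version; train_dataset is a placeholder):

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Trade compute for memory by recomputing activations in the backward pass.
model.gradient_checkpointing_enable()

# Effective batch size = per_device_train_batch_size * n_gpus * gradient_accumulation_steps.
args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)  # train_dataset assumed
trainer.train()
```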
0
huggingface
🤗Transformers
Any simple functionality to use multiple metrics together?
https://discuss.huggingface.co/t/any-simple-functionality-to-use-multiple-metrics-together/6227
Some tasks, like question generation, require multiple metrics (BLEU, METEOR, ROUGE). It would be quite helpful if there were a function such as load_metric(['bleu', 'meteor', 'rouge'])
hey @ad26kr, thanks for the suggestion! one complication i see is that some metrics like glue are paired with a subtask: metric = load_metric('glue', sub_task) so it’s not clear what should happen if someone passes something like load_metric(["bleu", "glue"]) in your proposal. nevertheless, i invite you to open a feature request on the datasets library where it can be discussed in more detail: Issues · huggingface/datasets · GitHub
0
huggingface
🤗Transformers
What to do for non-finite warning in `clip_grad_norm`?
https://discuss.huggingface.co/t/what-to-do-for-non-finite-warning-in-clip-grad-norm/8670
I started to see this warning for a language model training FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior. Is this an indicator that my model is not working well? And if so, is there any recommendation on what to change? Thanks!
Any insights on this?
0
huggingface
🤗Transformers
Which Bert model should we use for this problem. Next Word prediction using LM? Or Keyword Extraction problem?
https://discuss.huggingface.co/t/which-bert-model-should-we-use-for-this-problem-next-word-prediction-using-lm-or-keyword-extraction-problem/9593
Hi, I am trying to solve a text-based problem. For a given text, say a product description from amazon, I want to extract the brand name, type of the product, units and colour if mentioned in the text. e.g. input: Kore PVC 20-50 Kg Home Gym Set with One Plain + One Curl and One Pair Dumbbell Rods with Gym Accessories and PVC Dumbbells output: Kore Gym Set 50Kg #In the output the brand is Kore, the type is Gym Set and the units mentioned are 50Kg Any references to similar problems, or a bit of guidance on which direction I should move in, would be a great help. Thanks in advance.
Maybe you can view this as a NER-like problem. That is if all the brand/type/units can directly be identified (marked) in the text. For this a BERT-model is well suited. It depends a bit on how much labeled training data you have, and how it is organised. Another alternative could be to view it as a summarisation problem. I would then first consider using a seq-to-seq model, like BART or T5.
0
huggingface
🤗Transformers
Question about Gradient Accumulation step in Trainer
https://discuss.huggingface.co/t/question-about-gradient-accumulation-step-in-trainer/9876
I can see that gradient accumulation steps help to increase the effective batch size, and I can also understand that if the model has a Batch Norm layer, gradient accumulation will not guarantee the exact same performance as a model trained with a large batch size (not using accumulation). But most models in Transformers are based on the transformer architecture, which utilizes layer normalization, so does that mean I can guarantee that the trained model would give the same metric performance in both ways? (e.g. batch size 64 with 4 batch per device * 4 gpus * 4 accumulation steps == batch size 64 with 16 batch per device * 4 gpus) In short, my question is: for transformer models which use layer normalization, will training with the full batch size at once and using gradient accumulation steps give the same model performance?
Yes, layer normalization does not track batch statistics, so you will get the exact same thing with 4 batch size * 4 gradient accumulation or 16 batch size (and that would not be the case with neural nets using BatchNorm indeed).
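A two-line check of why this holds: unlike BatchNorm, a LayerNorm module keeps no running batch statistics, so its output for a sample does not depend on the other samples in the batch:

```python
import torch.nn as nn

bn = nn.BatchNorm1d(768)
ln = nn.LayerNorm(768)

print(hasattr(bn, "running_mean"))  # True  - BatchNorm tracks batch statistics
print(hasattr(ln, "running_mean"))  # False - LayerNorm normalizes each sample on its own
```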
0
huggingface
🤗Transformers
Extract similar word from model
https://discuss.huggingface.co/t/extract-similar-word-from-model/9744
Hello! Can I extract a similar word from the model? Also, is it possible to assess the similarity between two words, not sentences, but only words? I hope you can help.
Unless you want to compare words in a specific context, context-sensitive models are not the best way to do what you want. Better to look for word2vec or similar then.
0
huggingface
🤗Transformers
Attention type ‘block_sparse’ is not possible if sequence_length: 458 <= num global tokens:
https://discuss.huggingface.co/t/attention-type-block-sparse-is-not-possible-if-sequence-length-458-num-global-tokens/9225
I am using the pre-trained google/bigbird-pegasus-large-arxiv model, but I receive the following update during the forward pass. Attention type 'block_sparse' is not possible if sequence_length: 458 <= num global tokens: 2 * config.block_size + min. num sliding tokens: 3 * config.block_size + config.num_random_blocks * config.block_size + additional buffer: config.num_random_blocks * config.block_size = 704 with config.block_size = 64, config.num_random_blocks = 3. Changing attention type to 'original_full'... I understand the update and I am aware of the time and memory benefits of using block_sparse rather than original_full. So, how should I go about selecting a suitable block_size and num_random_blocks when I know that there is a lot of variation in the sequence length of my inputs?
I think one workaround is to pad your sequence length to a fixed number. i.e., >= 512/1024/2048 etc
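A minimal sketch of that workaround, padding every batch to a fixed length above the 704-token threshold from the warning (texts is a placeholder):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")

texts = ["first document ...", "second, much longer document ..."]  # placeholders

# With block_size=64 and num_random_blocks=3 the block_sparse attention needs at
# least 704 tokens, so pad everything to e.g. 1024 to stay above that threshold.
inputs = tokenizer(
    texts,
    padding="max_length",
    truncation=True,
    max_length=1024,
    return_tensors="pt",
)
```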
0
huggingface
🤗Transformers
Adding cross-attention to custom models
https://discuss.huggingface.co/t/adding-cross-attention-to-custom-models/5408
Hi, I was thinking of adding cross-attention between a visual transformer and a BERT model, and was wondering if there is a way I could do this using the HF library. What I was thinking was: if somewhere in the HF BERT model API I had access to where it takes in the queries, keys, and values, I could subclass the BERT submodule and add cross-attention instead of just having self-attention. I’m visualizing something very much like this code snippet from the Annotated Transformer. Specifically the DecoderLayer class, where there is self_attn as well as an additional src_attn which I would need to add in. I am also aware that I would need to copy the weights for everything but the src_attn module; I just need some mechanism to do so. Fingers crossed there is some place in the HF API where I can do exactly that. Happy to do this myself if someone can point to where in the HF library I should be looking to see where it uses the queries, keys and values arguments.
bump. Sorry guys just wondering if anyone had any ideas about this.
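One built-in option worth checking: BertConfig has is_decoder and add_cross_attention flags, and BertModel's forward accepts encoder_hidden_states, so the visual features can be passed as the keys/values of newly added (randomly initialized) cross-attention layers. A hedged sketch:

```python
import torch
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained(
    "bert-base-uncased", is_decoder=True, add_cross_attention=True
)
# The pretrained self-attention weights are kept; the cross-attention weights
# are new and randomly initialized, so they still need to be trained.
model = BertModel.from_pretrained("bert-base-uncased", config=config)

input_ids = torch.tensor([[101, 7592, 2088, 102]])  # toy token ids
visual_features = torch.randn(1, 50, 768)           # stand-in for ViT patch embeddings

outputs = model(
    input_ids=input_ids,
    encoder_hidden_states=visual_features,  # keys/values for the cross-attention
)
print(outputs.last_hidden_state.shape)
```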
0
huggingface
🤗Transformers
Fintuning Transformer on CLEF dataset
https://discuss.huggingface.co/t/fintuning-transformer-on-clef-dataset/9584
I’m working on the CLEF dataset for a research project, which contains ~480 users’ posts, each user has a varied range of posts (text) starting from 10 to 2000, and each user has an associated binary label 0/1 (whether the user is depressed). So, basically, I’ve to take all the posts of a user and classify the label of the user. I’m thinking about how to go about solving this problem with HF transformers (BERT or something maybe), if anyone has any suggestions or pointers (notebook, etc.), please share, would be really helpful. Thanks also would like to cc: @lewtun and @sgugger
Hey @rasel277144, if I understand correctly you’d like to classify whether a user is “depressed” based on their posts? In this case, you could concatenate all the user posts and treat it as a standard classification problem; Sylvain has created a nice tutorial for this task here. Having said that, you will probably run into limitations with the maximum context size of models like BERT (typically just a few paragraphs), so you might want to see if models like BigBird or Longformer can help, as their context size is 8x that of BERT. If that’s still not sufficient, you might want to adapt some of the suggestions in this thread to text classification (e.g. you could create an embedding for each user post, average the embeddings, and then use those embeddings for a simple logistic regression classifier). PS I put “depressed” in quotes because I assume this is not a phenomenon we can hope to capture accurately from written text alone. I also suggest treading very carefully in this domain as there are plenty of public examples where using NLP to diagnose patient well-being leads to bad outcomes.
0
huggingface
🤗Transformers
Supporting ONNX optimized models
https://discuss.huggingface.co/t/supporting-onnx-optimized-models/891
In a lot of cases, ONNX optimized models offer much better performance benefits as compared to using PyTorch models. This performance boost coupled with the pipelines offered by HuggingFace is a really great combo for delivering a great experience both in terms of inference speed and model performance. Right now, it’s possible to use ONNX models with a little bit of modification to the pipeline.py code. In my tests on the QuestionAnsweringPipeline with the SQuADv2 dataset, I see performance improvements of 1.5-2x with models like bert-large-uncased-whole-word-masking-finetuned-squad and distilbert-base-cased-distilled-squad. I’m wondering if this is something that the devs/community considers worthwhile for supporting directly in the transformers repo. I’m doing some work on this for my own project, but it’s mostly a hacky impl. at this point. If there’s greater interest in this then I could try to integrate ONNX support for inference more fully in the code base. Would love to hear everyone’s thoughts. Thanks
I know @mfuntowicz is working on this so he may have more input.
0
huggingface
🤗Transformers
There is always something going wrong with hyper parameter tuning
https://discuss.huggingface.co/t/there-is-always-something-going-wrong-with-hyper-parameter-tuning/9611
Hi everyone, I’ve been trying to do grid search for hyper parameter tuning with the new trainer API and ray tune. I have to say there is always something messing up during that time. Has anyone successfully done grid search using Trainer API and BERT? tune_config = { "per_device_train_batch_size": 32, "per_device_eval_batch_size": 32, "num_train_epochs": tune.choice([2, 3, 4, 5]), "weight_decay": tune.uniform(0.0, 0.3), "num_epochs": tune.choice([2, 3, 4, 5]), #"max_steps": 1 if smoke_test else -1, # Used for smoke test. } training_args = TrainingArguments("test", eval_steps=500, disable_tqdm=True) trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=tokenized_datasets_train, eval_dataset=tokenized_datasets_val, model_init=model_init, compute_metrics=compute_metrics, ) trainer.hyperparameter_search( direction="maximize", backend="ray", hp_space=lambda _: tune_config) I’ve used ray tune but it tends to give errors like these: (pid=991) 2021-08-30 12:30:53,900 ERROR function_runner.py:266 -- Runner Thread raised error. (pid=991) Traceback (most recent call last): (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run (pid=991) self._entrypoint() (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint (pid=991) self._status_reporter.get_checkpoint()) (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func (pid=991) output = fn() (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner (pid=991) trainable(config, **fn_kwargs) (pid=991) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective (pid=991) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) (pid=991) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1031, in train (pid=991) self._hp_search_setup(trial) (pid=991) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 860, in _hp_search_setup (pid=991) f"Trying to set {key} in the hyperparameter search but there is no corresponding field in `TrainingArguments`." (pid=991) AttributeError: Trying to set num_epochs in the hyperparameter search but there is no corresponding field in `TrainingArguments`. 
(pid=991) Exception in thread Thread-2: (pid=991) Traceback (most recent call last): (pid=991) File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner (pid=991) self.run() (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 279, in run (pid=991) raise e (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run (pid=991) self._entrypoint() (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint (pid=991) self._status_reporter.get_checkpoint()) (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func (pid=991) output = fn() (pid=991) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner (pid=991) trainable(config, **fn_kwargs) (pid=991) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective (pid=991) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) (pid=991) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1031, in train (pid=991) self._hp_search_setup(trial) (pid=991) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 860, in _hp_search_setup (pid=991) f"Trying to set {key} in the hyperparameter search but there is no corresponding field in `TrainingArguments`." (pid=991) AttributeError: Trying to set num_epochs in the hyperparameter search but there is no corresponding field in `TrainingArguments`. (pid=991) 2021-08-30 12:30:54,010 ERROR trial_runner.py:773 -- Trial _objective_19f77_00000: Error processing event. Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/ray/tune/trial_runner.py", line 739, in _process_trial results = self.trial_executor.fetch_result(trial) File "/usr/local/lib/python3.7/dist-packages/ray/tune/ray_trial_executor.py", line 746, in fetch_result result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT) File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 82, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1621, in get raise value.as_instanceof_cause() ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=991, ip=172.28.0.2, repr=<ray.tune.function_runner.ImplicitFunc object at 0x7f208feca750>) File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 178, in train_buffered result = self.train() File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 237, in train result = self.step() File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 379, in step self._report_thread_runner_error(block=True) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 527, in _report_thread_runner_error ("Trial raised an exception. Traceback:\n{}".format(err_tb_str) ray.tune.error.TuneError: Trial raised an exception. 
Traceback: ray::ImplicitFunc.train_buffered() (pid=991, ip=172.28.0.2, repr=<ray.tune.function_runner.ImplicitFunc object at 0x7f208feca750>) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run self._entrypoint() File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint self._status_reporter.get_checkpoint()) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func output = fn() File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner trainable(config, **fn_kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1031, in train self._hp_search_setup(trial) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 860, in _hp_search_setup f"Trying to set {key} in the hyperparameter search but there is no corresponding field in `TrainingArguments`." AttributeError: Trying to set num_epochs in the hyperparameter search but there is no corresponding field in `TrainingArguments`. Result for _objective_19f77_00000: {} == Status == Memory usage on this node: 5.0/25.5 GiB Using FIFO scheduling algorithm. Resources requested: 1.0/4 CPUs, 1.0/1 GPUs, 0.0/15.0 GiB heap, 0.0/7.5 GiB objects (0.0/1.0 accelerator_type:T4) Result logdir: /root/ray_results/_objective_2021-08-30_12-30-46 Number of trials: 18/20 (1 ERROR, 16 PENDING, 1 RUNNING) +------------------------+----------+-------+--------------+--------------------+----------------+ | Trial name | status | loc | num_epochs | num_train_epochs | weight_decay | |------------------------+----------+-------+--------------+--------------------+----------------| | _objective_19f77_00001 | RUNNING | | 2 | 4 | 0.233907 | | _objective_19f77_00002 | PENDING | | 4 | 4 | 0.13375 | | _objective_19f77_00003 | PENDING | | 2 | 4 | 0.137775 | | _objective_19f77_00004 | PENDING | | 4 | 5 | 0.04286 | | _objective_19f77_00005 | PENDING | | 5 | 3 | 0.0169235 | | _objective_19f77_00006 | PENDING | | 3 | 5 | 0.281566 | | _objective_19f77_00007 | PENDING | | 2 | 5 | 0.297663 | | _objective_19f77_00008 | PENDING | | 2 | 5 | 0.183496 | | _objective_19f77_00009 | PENDING | | 4 | 5 | 0.00691873 | | _objective_19f77_00010 | PENDING | | 5 | 4 | 0.119958 | | _objective_19f77_00011 | PENDING | | 4 | 5 | 0.292127 | | _objective_19f77_00012 | PENDING | | 3 | 3 | 0.0271819 | | _objective_19f77_00013 | PENDING | | 5 | 4 | 0.114739 | | _objective_19f77_00014 | PENDING | | 2 | 5 | 0.140029 | | _objective_19f77_00015 | PENDING | | 2 | 4 | 0.204092 | | _objective_19f77_00016 | PENDING | | 2 | 4 | 0.00397949 | | _objective_19f77_00017 | PENDING | | 3 | 5 | 0.168986 | | _objective_19f77_00000 | ERROR | | 4 | 4 | 0.238963 | +------------------------+----------+-------+--------------+--------------------+----------------+ Number of errored trials: 1 +------------------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Trial name | # failures | error file | |------------------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------| | _objective_19f77_00000 | 1 | 
/root/ray_results/_objective_2021-08-30_12-30-46/_objective_19f77_00000_0_num_epochs=4,num_train_epochs=4,weight_decay=0.23896_2021-08-30_12-30-46/error.txt | +------------------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------+ (pid=989) 2021-08-30 12:31:01,040 ERROR function_runner.py:266 -- Runner Thread raised error. (pid=989) Traceback (most recent call last): (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run (pid=989) self._entrypoint() (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint (pid=989) self._status_reporter.get_checkpoint()) (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func (pid=989) output = fn() (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner (pid=989) trainable(config, **fn_kwargs) (pid=989) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective (pid=989) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) (pid=989) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1031, in train (pid=989) self._hp_search_setup(trial) (pid=989) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 860, in _hp_search_setup (pid=989) f"Trying to set {key} in the hyperparameter search but there is no corresponding field in `TrainingArguments`." (pid=989) AttributeError: Trying to set num_epochs in the hyperparameter search but there is no corresponding field in `TrainingArguments`. (pid=989) Exception in thread Thread-2: (pid=989) Traceback (most recent call last): (pid=989) File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner (pid=989) self.run() (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 279, in run (pid=989) raise e (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run (pid=989) self._entrypoint() (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint (pid=989) self._status_reporter.get_checkpoint()) (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func (pid=989) output = fn() (pid=989) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner (pid=989) trainable(config, **fn_kwargs) (pid=989) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective (pid=989) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) (pid=989) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1031, in train (pid=989) self._hp_search_setup(trial) (pid=989) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 860, in _hp_search_setup (pid=989) f"Trying to set {key} in the hyperparameter search but there is no corresponding field in `TrainingArguments`." (pid=989) AttributeError: Trying to set num_epochs in the hyperparameter search but there is no corresponding field in `TrainingArguments`. (pid=989) 2021-08-30 12:31:01,211 ERROR trial_runner.py:773 -- Trial _objective_19f77_00001: Error processing event. 
Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/ray/tune/trial_runner.py", line 739, in _process_trial results = self.trial_executor.fetch_result(trial) File "/usr/local/lib/python3.7/dist-packages/ray/tune/ray_trial_executor.py", line 746, in fetch_result result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT) File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 82, in wrapper return func(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1621, in get raise value.as_instanceof_cause() ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=989, ip=172.28.0.2, repr=<ray.tune.function_runner.ImplicitFunc object at 0x7f4ad6a4cc50>) File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 178, in train_buffered result = self.train() File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 237, in train result = self.step() File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 379, in step self._report_thread_runner_error(block=True) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 527, in _report_thread_runner_error ("Trial raised an exception. Traceback:\n{}".format(err_tb_str) ray.tune.error.TuneError: Trial raised an exception. Traceback: ray::ImplicitFunc.train_buffered() (pid=989, ip=172.28.0.2, repr=<ray.tune.function_runner.ImplicitFunc object at 0x7f4ad6a4cc50>) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run self._entrypoint() File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint self._status_reporter.get_checkpoint()) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func output = fn() File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner trainable(config, **fn_kwargs) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1031, in train self._hp_search_setup(trial) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 860, in _hp_search_setup f"Trying to set {key} in the hyperparameter search but there is no corresponding field in `TrainingArguments`." AttributeError: Trying to set num_epochs in the hyperparameter search but there is no corresponding field in `TrainingArguments`. Result for _objective_19f77_00001: {} Let me know, please!
The traceback is telling you: Trying to set num_epochs in the hyperparameter search but there is no corresponding field in TrainingArguments. So you probably should set some default value there, which will then be overwritten by ray during the search.
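In this particular config the simpler fix is probably to drop the stray num_epochs key, since every key in hp_space has to map to a TrainingArguments field and num_train_epochs already covers it; a sketch reusing the trainer defined in the question:

```python
from ray import tune

tune_config = {
    "per_device_train_batch_size": 32,
    "per_device_eval_batch_size": 32,
    "num_train_epochs": tune.choice([2, 3, 4, 5]),
    "weight_decay": tune.uniform(0.0, 0.3),
}

best_run = trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    hp_space=lambda _: tune_config,
)
```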
0
huggingface
🤗Transformers
Predicted Start_index < Predicted End_index in BertForQuestionAnswering
https://discuss.huggingface.co/t/predicted-start-index-predicted-end-index-in-bertforquestionanswering/9666
We want to fine-tune a QA model, which is based on BertForQuestionAnswering. After training, we can get span-start/end scores from input_ids/token_type_ids/attention_mask and choose the indices with the maximum span-start/end scores as the predicted start_index and predicted end_index. But sometimes the predicted start_index is greater than the predicted end_index. Is there any reasonable method to handle this situation? Thanks~ Ex: span-start scores = [-0.1, -2.1, 0.7, 1.3, 4.1] span-end scores = [-0.7, 3, 5, -0.7, 3.3] => predicted start_index = 4 predicted end_index = 2 It is not reasonable.
Those predictions are not considered usually.
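The standard post-processing is to score all (start, end) pairs jointly and keep only spans with start <= end (optionally also capping the answer length), rather than taking two independent argmaxes. A small sketch using the scores from the example above:

```python
import torch

start_scores = torch.tensor([-0.1, -2.1, 0.7, 1.3, 4.1])
end_scores = torch.tensor([-0.7, 3.0, 5.0, -0.7, 3.3])

# Score every candidate span and mask out the invalid ones (start > end).
pair_scores = start_scores[:, None] + end_scores[None, :]
valid = torch.triu(torch.ones_like(pair_scores)).bool()
pair_scores[~valid] = float("-inf")

best = torch.argmax(pair_scores)                     # flat index of the best valid span
start_index, end_index = divmod(best.item(), pair_scores.size(1))
print(start_index, end_index)                        # (4, 4) for these example scores
```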
0
huggingface
🤗Transformers
Cross Entropy Loss and loss of HuggingFace T5ForConditionalGeneration does not matches
https://discuss.huggingface.co/t/cross-entropy-loss-and-loss-of-huggingface-t5forconditionalgeneration-does-not-matches/9477
Hello, I am using T5ForConditionalGeneration for a Question & Answering model and finetuning it, but in the train step, the huggingface loss and my loss do not match. I want this for some experiment purposes. class UQAFineTuneModel(pl.LightningModule): def __init__(self): super().__init__() self.model = T5ForConditionalGeneration.from_pretrained( "allenai/unifiedqa-t5-small", return_dict=True ) self.model.train() def forward( self, source_text_input_ids, source_text_attention_mask, target_text_input_ids=None, ): output = self.model( input_ids=source_text_input_ids, attention_mask=source_text_attention_mask, labels=target_text_input_ids, ) return output.loss, output.logits def training_step(self, batch, batch_idx): source_text_input_ids = batch["source_text_input_ids"] source_text_attention_mask = batch["source_text_attention_mask"] target_text_input_ids = batch["target_text_input_ids"] # labels_attention_mask = batch["target_text_attention_mask"] loss, outputs = self( source_text_input_ids, source_text_attention_mask, target_text_input_ids ) loss_mine = None output = self.model( input_ids=source_text_input_ids, attention_mask=source_text_attention_mask, labels=target_text_input_ids, ) labels = batch["target_text_input_ids"].clone() labels[labels == 0] = -100 if target_text_input_ids is not None: loss_fct = CrossEntropyLoss(ignore_index=-100) loss_mine = loss_fct(output.logits.view(-1, outputs.size(-1)), labels.view(-1)) print(f"loss_huggingface: {loss.item()}, loss_mine : {loss_mine.item()}") self.log("train_loss", loss, prog_bar=True, logger=True) return {"loss": loss, "predictions": outputs} But my loss is different from the huggingface loss, even though they both use CrossEntropy. [screenshot of the two loss values, 1027×274]
@valhalla, @BramVanroy, please have a look at this.
0
huggingface
🤗Transformers
How to apply TranslationPipeline from English to Brazilian Portuguese?
https://discuss.huggingface.co/t/how-to-apply-translationpipeline-from-english-to-brazilian-portuguese/9639
How to apply TranslationPipeline from English to Brazilian Portuguese? I’ve tried the following approach with no success: from transformers import pipeline translator = pipeline( model="t5-small", task="translation_en_to_br" ) translator("How old are you?", src_lang="en", tgt_lang="br") # [{'translation_text': ' '}] Could you give me some directions?
As far as I can tell, T5 has only been trained/finetuned on English, German, French, Romanian. You can read Section 3.1.3 in their paper. I am not aware of Brazilian Portuguese models. Also, I don’t think it has an official language code so “br” is not likely to work anyway.
0
huggingface
🤗Transformers
Problems Subclassing Trainer Class for Custom Evaluation Loop
https://discuss.huggingface.co/t/problems-subclassing-trainer-class-for-custom-evaluation-loop/9223
Hello Everybody, While training my model with deepspeed on 4GPUs, I was trying to Inject some custom behaviour in the evaluation loop. According to the Trainer docs under evaluate function it says. You can also subclass and override this method to inject custom behavior However when I tried doing this, I get the following Error: Traceback (most recent call last): File "GPT2nr.py", line 109, in <module> Traceback (most recent call last): File "GPT2nr.py", line 109, in <module> values = trainer.evaluate() File "/home/dagbert-skunkworks/src/skunkworks/models/GPT2Trainer.py", line 42, in evaluate values = trainer.evaluate() File "/home/dagbert-skunkworks/src/skunkworks/models/GPT2Trainer.py", line 42, in evaluate eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop AttributeError: 'TrainingArguments' object has no attribute 'use_legacy_prediction_loop' eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop AttributeError: 'TrainingArguments' object has no attribute 'use_legacy_prediction_loop' Traceback (most recent call last): File "GPT2nr.py", line 109, in <module> values = trainer.evaluate() File "/home/dagbert-skunkworks/src/skunkworks/models/GPT2Trainer.py", line 42, in evaluate eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop AttributeError: 'TrainingArguments' object has no attribute 'use_legacy_prediction_loop' Traceback (most recent call last): File "GPT2nr.py", line 109, in <module> values = trainer.evaluate() File "/home/dagbert-skunkworks/src/skunkworks/models/GPT2Trainer.py", line 42, in evaluate eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop AttributeError: 'TrainingArguments' object has no attribute 'use_legacy_prediction_loop' Killing subprocess 136 Killing subprocess 137 Killing subprocess 138 Killing subprocess 139 Traceback (most recent call last): File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.6/dist-packages/deepspeed/launcher/launch.py", line 171, in <module> main() File "/usr/local/lib/python3.6/dist-packages/deepspeed/launcher/launch.py", line 161, in main sigkill_handler(signal.SIGTERM, None) # not coming back File "/usr/local/lib/python3.6/dist-packages/deepspeed/launcher/launch.py", line 139, in sigkill_handler raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd) subprocess.CalledProcessError: Command '['/usr/bin/python', '-u', 'GPT2nr.py', '--local_rank=3']' returned non-zero exit status 1. 
GPT2Trainer.py import math import time from typing import Dict, List, NamedTuple, Optional, Tuple, Union import numpy as np from datasets import Dataset from torch.utils.data.dataloader import DataLoader from transformers.trainer import Trainer class EvalPrediction(NamedTuple): predictions: Union[np.ndarray, Tuple[np.ndarray]] label_ids: np.ndarray classes: List class EvalLoopOutput(NamedTuple): predictions: Union[np.ndarray, Tuple[np.ndarray]] label_ids: Optional[np.ndarray] metrics: Optional[Dict[str, float]] num_samples: Optional[int] classes: Optional[List] class GPT2Trainer(Trainer): def __init__(self, model,args = None,data_collator = None,train_dataset = None,eval_dataset = None,tokenizer = None, model_init = None,compute_metrics = None,callbacks = None,optimizers = (None,None)): super(GPT2Trainer,self).__init__(model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers) def evaluate(self, eval_dataset: Optional[Dataset] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = "eval",) -> Dict[str, float]: self._memory_tracker.start() eval_dataloader = self.get_eval_dataloader(eval_dataset) start_time = time.time() eval_loop = self.prediction_loop if self.args.use_legacy_prediction_loop else self.evaluation_loop output = eval_loop( eval_dataloader, description="Evaluation", # No point gathering the predictions if there are no metrics, otherwise we defer to # self.args.prediction_loss_only prediction_loss_only=True if self.compute_metrics is None else None, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix, ) total_batch_size = self.args.eval_batch_size * self.args.world_size output.metrics.update( speed_metrics( metric_key_prefix, start_time, num_samples=output.num_samples, num_steps=math.ceil(output.num_samples / total_batch_size), ) ) self.log(output.metrics) if DebugOption.TPU_METRICS_DEBUG in self.args.debug: # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.) xm.master_print(met.metrics_report()) self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, output.metrics) self._memory_tracker.stop_and_update_metrics(output.metrics) return output.metrics def evaluation_loop( self, dataloader: DataLoader, description: str, prediction_loss_only: Optional[bool] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = "eval", ) -> EvalLoopOutput: """ Prediction/evaluation loop, shared by :obj:`Trainer.evaluate()` and :obj:`Trainer.predict()`. Works both with or without labels. 
""" prediction_loss_only = ( prediction_loss_only if prediction_loss_only is not None else self.args.prediction_loss_only ) # if eval is called w/o train init deepspeed here if self.args.deepspeed and not self.deepspeed: # XXX: eval doesn't have `resume_from_checkpoint` arg but we should be able to do eval # from the checkpoint eventually deepspeed_engine, _, _ = deepspeed_init(self, num_training_steps=0, resume_from_checkpoint=None) self.model = deepspeed_engine.module self.model_wrapped = deepspeed_engine self.deepspeed = deepspeed_engine # XXX: we don't need optim/sched for inference, but this needs to be sorted out, since # for example the Z3-optimizer is a must for zero3 to work even for inference - what we # don't need is the deepspeed basic optimizer which is self.optimizer.optimizer deepspeed_engine.optimizer.optimizer = None deepspeed_engine.lr_scheduler = None model = self._wrap_model(self.model, training=False) # if full fp16 is wanted on eval and this ``evaluation`` or ``predict`` isn't called while # ``train`` is running, halve it first and then put on device if not self.is_in_train and self.args.fp16_full_eval: model = model.half().to(self.args.device) batch_size = dataloader.batch_size logger.info(f"***** Running {description} *****") if isinstance(dataloader.dataset, collections.abc.Sized): logger.info(f" Num examples = {self.num_examples(dataloader)}") else: logger.info(" Num examples: Unknown") logger.info(f" Batch size = {batch_size}") model.eval() self.callback_handler.eval_dataloader = dataloader # Do this before wrapping. eval_dataset = dataloader.dataset if is_torch_tpu_available(): dataloader = pl.ParallelLoader(dataloader, [self.args.device]).per_device_loader(self.args.device) if self.args.past_index >= 0: self._past = None # Initialize containers # losses/preds/labels on GPU/TPU (accumulated for eval_accumulation_steps) losses_host = None preds_host = None labels_host = None # losses/preds/labels on CPU (final containers) all_losses = None all_preds = None all_labels = None # Will be useful when we have an iterable dataset so don't know its length. observed_num_examples = 0 # Main evaluation loop for step, inputs in enumerate(dataloader): # Update the observed num examples inputs, class_labels = inputs.get('input_ids'),inputs.get('labels') observed_batch_size = find_batch_size(inputs) if observed_batch_size is not None: observed_num_examples += observed_batch_size # Prediction step loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) # Update containers on host if loss is not None: losses = self._nested_gather(loss.repeat(batch_size)) losses_host = losses if losses_host is None else torch.cat((losses_host, losses), dim=0) if logits is not None: logits = self._pad_across_processes(logits) logits = self._nested_gather(logits) preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100) if labels is not None: labels = self._pad_across_processes(labels) labels = self._nested_gather(labels) labels_host = labels if labels_host is None else nested_concat(labels_host, labels, padding_index=-100) self.control = self.callback_handler.on_prediction_step(self.args, self.state, self.control) # Gather all tensors and put them back on the CPU if we have done enough accumulation steps. 
if self.args.eval_accumulation_steps is not None and (step + 1) % self.args.eval_accumulation_steps == 0: if losses_host is not None: losses = nested_numpify(losses_host) all_losses = losses if all_losses is None else np.concatenate((all_losses, losses), axis=0) if preds_host is not None: logits = nested_numpify(preds_host) all_preds = logits if all_preds is None else nested_concat(all_preds, logits, padding_index=-100) if labels_host is not None: labels = nested_numpify(labels_host) all_labels = ( labels if all_labels is None else nested_concat(all_labels, labels, padding_index=-100) ) # Set back to None to begin a new accumulation losses_host, preds_host, labels_host = None, None, None if self.args.past_index and hasattr(self, "_past"): # Clean the state at the end of the evaluation loop delattr(self, "_past") # Gather all remaining tensors and put them back on the CPU if losses_host is not None: losses = nested_numpify(losses_host) all_losses = losses if all_losses is None else np.concatenate((all_losses, losses), axis=0) if preds_host is not None: logits = nested_numpify(preds_host) all_preds = logits if all_preds is None else nested_concat(all_preds, logits, padding_index=-100) if labels_host is not None: labels = nested_numpify(labels_host) all_labels = labels if all_labels is None else nested_concat(all_labels, labels, padding_index=-100) # Number of samples if not isinstance(eval_dataset, IterableDataset): num_samples = len(eval_dataset) # The instance check is weird and does not actually check for the type, but whether the dataset has the right # methods. Therefore we need to make sure it also has the attribute. elif isinstance(eval_dataset, IterableDatasetShard) and hasattr(eval_dataset, "num_examples"): num_samples = eval_dataset.num_examples else: num_samples = observed_num_examples # Number of losses has been rounded to a multiple of batch_size and in a distributed training, the number of # samplers has been rounded to a multiple of batch_size, so we truncate. if all_losses is not None: all_losses = all_losses[:num_samples] if all_preds is not None: all_preds = nested_truncate(all_preds, num_samples) if all_labels is not None: all_labels = nested_truncate(all_labels, num_samples) # Metrics! 
if self.compute_metrics is not None and all_preds is not None and all_labels is not None: metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels,classes=class_labels)) else: metrics = {} # To be JSON-serializable, we need to remove numpy types or zero-d tensors metrics = denumpify_detensorize(metrics) if all_losses is not None: metrics[f"{metric_key_prefix}_loss"] = all_losses.mean().item() # Prefix all keys with metric_key_prefix + '_' for key in list(metrics.keys()): if not key.startswith(f"{metric_key_prefix}_"): metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key) return EvalLoopOutput(predictions=all_preds, label_ids=all_labels, metrics=metrics, num_samples=num_samples) GPT2nr.py import json from typing import Dict, List, Optional import deepspeed import numpy as np import pandas as pd import torch from datasets import Dataset, load_dataset, load_metric from skunkworks.models.GPT2Trainer import (EvalLoopOutput, EvalPrediction, GPT2Trainer) from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer, TrainingArguments) from transformers.trainer_utils import get_last_checkpoint, is_main_process def compute_metrics(eval_pred: EvalPrediction) -> Dict: logits, label_ids,clas_labels = eval_pred logits = torch.Tensor(logits) logits = torch.argmax(logits,axis=-1) predictions = tokenizer.batch_decode(logits) label_ids = tokenizer.batch_decode(label_ids) metric_values = metric.compute(predictions=predictions, references=label_ids) avg_divergence_df = pd.DataFrame({"labels":class_labels,"scores":metric_values['raw_scores']}) return metric_values deepspeed_dict = json.load(open('ds_config_zero2manual.json','r')) metric = load_metric("../metrics/hf_metric.py") block_size = 128 def tokenize_function(examples,field): return tokenizer(examples[field], padding="max_length", truncation=True) def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. total_length = (total_length // block_size) * block_size # Split by chunks of max_len. 
result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result model = GPT2LMHeadModel.from_pretrained('gpt2-medium') tokenizer = GPT2TokenizerFast.from_pretrained('gpt2-medium') tokenizer.pad_token = tokenizer.eos_token train_dataset = load_dataset('csv', data_files={'train': 'train.csv'}) test_dataset = load_dataset('csv', data_files={'test': 'test.csv'}) tokenized_train_dataset = train_dataset.map(lambda x: tokenize_function(x,'text'), batched=True, num_proc=4, remove_columns=["text"]) tokenized_train_dataset = tokenized_train_dataset.map(group_texts,batched=True,batch_size=1,num_proc=4) tokenized_test_dataset = test_dataset.map(lambda x: tokenize_function(x,'data'), batched=True, num_proc=4, remove_columns=["data"]) training_args = TrainingArguments("test-trainer",per_device_train_batch_size=8, per_device_eval_batch_size=8, num_train_epochs=1, learning_rate=2e-5, weight_decay=0.01, eval_accumulation_steps=2, fp16=True, deepspeed=deepspeed_dict) trainer = GPT2Trainer( model=model,args=training_args, train_dataset=tokenized_train_dataset["train"], eval_dataset=tokenized_test_dataset["test"], tokenizer=tokenizer, compute_metrics=compute_metrics ) trainer.train() values = trainer.evaluate() Transformers Version: 4.5.2 Training GPUs: 4 Training GPU Model: Tesla T4
It looks like you are not using the latest version of Transformers.
0
huggingface
🤗Transformers
Tokenizer taking lot of memory
https://discuss.huggingface.co/t/tokenizer-taking-lot-of-memory/8597
I am using BertTokenizerFast from the transformers library to encode my text data, with the pre-trained “bert-base-uncased” model. I have a data set of 7 million rows, which is around 2GB, and I am using 64GB of RAM. When I try to convert the texts into vectors using BertModel with the same pre-trained “bert-base-uncased” model, it cannot convert even 10,000 encoded examples: the allocation fails, saying that hundreds of GB of memory would be needed. I am using the reference code from here reference blog 3
You are probably trying to do “all data at once” which of course will not work. Try chunking your data into batches and process them one by one.
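A minimal sketch of such chunked processing (the batch size and variable names here are illustrative, not from the original post):

import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

texts = ["example sentence"] * 1000  # stand-in for your 7M rows (ideally streamed from disk)
batch_size = 64
embeddings = []

with torch.no_grad():
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        enc = tokenizer(batch, padding=True, truncation=True, max_length=128, return_tensors="pt")
        out = model(**enc)
        # keep only what you need, e.g. the [CLS] vector, instead of all token embeddings
        embeddings.append(out.last_hidden_state[:, 0, :])

Storing only the [CLS] vector (or writing each chunk to disk) keeps memory roughly constant instead of growing with the full 7-million-row output.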
0
huggingface
🤗Transformers
Cardinality issue when training bert from scratch (tensorflow)
https://discuss.huggingface.co/t/cardinality-issue-when-training-bert-from-scratch-tensorflow/9546
Hello there! I am trying to adapt the official Google Colab for language generation to tensorflow and everything seems to work wonderfully by simply appending TF to most of the huggingface function calls (TFAutoModel, etc) Unfortunately, this strategy fails at the training step: from transformers import TFTrainer, TFTrainingArguments import tensorflow as tf training_args = TFTrainingArguments( "test-clm", evaluation_strategy = "epoch", learning_rate=2e-5) trainer = TFTrainer( model=model, args = training_args, train_dataset=lm_datasets[0:1000], eval_dataset=lm_datasets[1000:]) trainer.train() self.num_train_examples = self.train_dataset.cardinality().numpy() AttributeError: 'dict' object has no attribute 'cardinality' I have absolutely no idea what this cardinality is. Do you know what the issue can be? Thanks!
I saw a related issue here TensorFlow Question-Answering example fails to run (cardinality error) · Issue #10246 · huggingface/transformers · GitHub 13 … huggingface masters, do you have an idea? Thanks!
0
huggingface
🤗Transformers
How to structure labels for token classification?
https://discuss.huggingface.co/t/how-to-structure-labels-for-token-classification/1216
The documentation for the label parameter for BertForTokenClassification says that Indices should be in [0, ..., config.num_labels - 1] But BertConfig doesn’t have a num_labels parameter as far as I can tell, so what is this config.num_labels argument? Also, in this 3 tutorial it says that we can set the labels we want the model to ignore, to -100. If that is correct, why doesn’t the documentation for BertForTokenClassification mention it? Maybe it’s not correct, because when I make my labels like this, I get the error /opt/conda/conda-bld/pytorch_1591914880026/work/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [27,0,0] Assertion t >= 0 && t < n_classes failed which indicates to me that I can not have labels with values outside the interval [0, n_classes]. What I have are labels for which class each wordpiece belongs to, after tokenization. Then I add the special tokens and padding, and I’m setting labels for the special tokens to -100. So for example if I want a sequence length of 10, and I want to classify wordpieces with an ‘o’ in them as class 1, and wordpieces with a ‘p’ in them as class 2, I would have, for the sentence “Oh, that school is pretty cool”: Tokens: [‘oh’, ‘,’, ‘that’, ‘school’, ‘is’, ‘pretty’, ‘cool’] With special tokens: [’[CLS]’, ‘oh’, ‘,’, ‘that’, ‘school’, ‘is’, ‘pretty’, ‘cool’, ‘[SEP]’, ‘[PAD]’] Labels: [-100, 1, 0, 0, 1, 0, 2, 0, -100, -100]
You should be able to do something like this: config = AutoConfig.from_pretrained("bert-base-cased", num_labels=3) model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", config=config) Note that in your example you have three possible labels: with o, with p, and with neither. If you had set num_labels to 2, you would have gotten the error that you described. -100 is the default ignore index for NLLLoss 14. When a target item has this index, it is ignored in the loss computation.
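As a small illustration of the ignore index (a sketch, not from the original answer; it uses CrossEntropyLoss, whose default ignore_index is -100):

import torch
import torch.nn as nn

loss_fct = nn.CrossEntropyLoss()              # ignore_index defaults to -100
logits = torch.randn(10, 3)                   # 10 tokens, 3 classes
labels = torch.tensor([-100, 1, 0, 0, 1, 0, 2, 0, -100, -100])
loss = loss_fct(logits, labels)               # only the seven non -100 positions contribute
print(loss)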
0
huggingface
🤗Transformers
Finetuing GPT model?
https://discuss.huggingface.co/t/finetuing-gpt-model/9549
Hi, does someone know of a fine-tuning example with the HF Trainer API? I have never fine-tuned a GPT before, so any tips are much appreciated.
I’ve never used GPT models before. However, if they are available on huggingface, I guess fine-tuning it won’t be different than fine-tuning any other model. I use PyTorch Lightning for training and replacing any huggingface models with each other wouldn’t change the process I guess…
0
huggingface
🤗Transformers
MLM train loss is very different after version update
https://discuss.huggingface.co/t/mlm-train-loss-is-very-different-after-version-update/9563
Hi, I use the run_mlm.py script to pretrain from scratch BERT model. Not sure if this is the most updated version of the script since I’ve been using it for a couple of months. When I used transofrmers 4.5.1 I used to get train loss of ~1.9 but after updating to transformers 4.9.2 the train loss is ~4.5. I’m training from scratch on my data file + trained tokenizer and I ran the exact same command on both times. This are the training argument on 4.5.1: output_dir=test-mlm-wiki, overwrite_output_dir=True, do_train=True, do_eval=None, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.1, warmup_steps=0, logging_dir=runs/Aug08_11-51-18, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=5000, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=test-mlm-wiki, disable_tqdm=True, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=[‘tensorboard’], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=1, mp_parameters= These are the training arguments on 4.9.2: TrainingArguments(_n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_find_unused_parameters=None, debug=[], deepspeed=None, disable_tqdm=True, do_eval=False, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_steps=None, evaluation_strategy=IntervalStrategy.NO, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, gradient_accumulation_steps=1, greater_is_better=None, group_by_length=False, ignore_data_skip=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, load_best_model_at_end=False, local_rank=-1, log_level=-1, log_level_replica=-1, log_on_each_node=True, logging_dir=test-mlm-wiki/runs/Aug27_17-12-33, logging_first_step=False, logging_steps=500, logging_strategy=IntervalStrategy.STEPS, lr_scheduler_type=SchedulerType.LINEAR, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=1.0, output_dir=test-mlm-wiki, overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=32, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=test-mlm-wiki-sst5-100, push_to_hub_organization=None, push_to_hub_token=None, remove_unused_columns=True, report_to=[‘tensorboard’], resume_from_checkpoint=None, run_name=test-mlm-wik, save_on_each_node=False, save_steps=5000, save_strategy=IntervalStrategy.STEPS, save_total_limit=1, seed=42, sharded_ddp=[], skip_memory_metrics=True, tpu_metrics_debug=False, 
tpu_num_cores=None, use_legacy_prediction_loop=False, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0) These are the difference I’ve found: 4.5.1 4.9.2 debug=False debug=[] eval_steps=500 eval_steps=None logging_dir=runs/Aug08_11-51-18_dogfish-01 logging_dir=est-mlm-wiki/runs/Aug27_17-12-33_dogfish-01 - log_level=-1 - log_level_replica=-1 - log_on_each_node=True - push_to_hub=False - push_to_hub_model_id=test-mlm-wiki - push_to_hub_organization=None - push_to_hub_token=None - save_on_each_node=False - use_legacy_prediction_loop=False - resume_from_checkpoint=None Any ideas why this happens?
I wonder if something is going on with the tokenizer… When running on the 4.5.1 version, I changed in run_mlm script AutoTokenizer to BertTokenizer (it had a bug perhaps, wouldn’t recognize my tokenizer). On 4.9.2 I get this error when running the script: “The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is ‘T5TokenizerFast’. The class this function is called from is ‘BertTokenizer’.” That’s the code I’m using to train the tokenizer: tokenizer = BertWordPieceTokenizer() tokenizer.train(files=[‘wikipedia_1e6.txt], vocab_size=30000) tokenizer.save_model(’/tokenizers/vocab_folder’) tokenizer = BertTokenizer.from_pretrained(’/tokenizers/vocab_folder’) tokenizer.save_pretrained(’/tokenizers/wikipedia_1e6/’) However, when training tokenizer with transformers 4.9.2 and using it in run_mlm I still get a bigger train loss (than 4.5.1), but no warning.
0
huggingface
🤗Transformers
Bart outputing </s> in start of every decoded sentence
https://discuss.huggingface.co/t/bart-outputing-s-in-start-of-every-decoded-sentence/9088
For any sentence that I input into the pretrained BartForConditionalGeneration model, it outputs a sentence with the prefix </s>. Am I doing something wrong? For example, if I input “Obama is the best president”, it gives back </s><s>Obama is the greatest president ever</s>… The same is the case with every input sentence. TIA
This is expected behavior: if you look at the BartForConditionalGeneration docs, the decoder_start_token_id is the EOS token by default (which is </s>). This is in contrast to other models; a BERT decoder, for instance, has a CLS token as the starting token given to the decoder.
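A small sketch of how to check this and to strip the special tokens from the output (assuming the public facebook/bart-large checkpoint):

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

print(model.config.decoder_start_token_id)            # same id as the EOS token
print(tokenizer.eos_token_id, tokenizer.eos_token)    # 2, '</s>'

ids = model.generate(**tokenizer("Obama is the best president", return_tensors="pt"))
# skip_special_tokens removes the leading </s><s> and the trailing </s>
print(tokenizer.decode(ids[0], skip_special_tokens=True))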
0
huggingface
🤗Transformers
How do i get Training and Validation Loss during fine tuning
https://discuss.huggingface.co/t/how-do-i-get-training-and-validation-loss-during-fine-tuning/9527
Hi, The trained API doesn’t seem to throw training loss. Only shows the validation loss. Could some one show me why this is the case? Here is my code snippet. def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = predictions[:, 0] return metric.compute(predictions=predictions, references=labels) #Training Arguments args = TrainingArguments( output_dir = "/Content/mod", evaluation_strategy = "epoch", #Can be epoch or steps learning_rate=2e-5, #According to original bert paper per_device_train_batch_size=32, #According to original bert paper per_device_eval_batch_size=32, num_train_epochs=3, #should be inbetween 2-4 according to the paper weight_decay=0.01, prediction_loss_only = True ) #Initializing Data Collator: this is for dynamic padding of tokens #Helps the training cycle to be quicker according to Hugging face Dcumentations data_collator_ = DataCollatorWithPadding(tokenizer=tokenizer) #Trainer itself. trainer = Trainer( model, args, train_dataset=tokenized_datasets_train, eval_dataset=tokenized_datasets_val, tokenizer=tokenizer, compute_metrics=compute_metrics, data_collator = data_collator_ ) torch.cuda.empty_cache() #This line is unnecessary but i still kept it cuz why not. trainer.train()
For example, if you use evaluation_strategy="steps" and eval_steps=2000 in the TrainingArguments, you will get training and validation loss every 2000 steps. If you want to do it on an epoch level, I think you need to set evaluation_strategy="epoch" and logging_strategy="epoch" in the TrainingArguments class.
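A sketch of the epoch-level setup, reusing the arguments from your snippet (logging_strategy is the argument that controls how often the training loss is reported, and is available in recent versions of transformers):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="/Content/mod",
    evaluation_strategy="epoch",   # report validation loss once per epoch
    logging_strategy="epoch",      # report training loss once per epoch
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    weight_decay=0.01,
)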
0
huggingface
🤗Transformers
Saving only the best performing checkpoint
https://discuss.huggingface.co/t/saving-only-the-best-performing-checkpoint/552
Hi, Is there a parameter in config that allows us to save only the best performing checkpoint ? Currently, multiple checkpoints are saved based on save_steps (, batch_size and dataset size). If we want to train the model for lets say 10 epochs and 7th epoch gives the best performance on validation set, then how can we just save the checkpoint from 7th epoch and ignore the rest. Thanks.
There is no parameter for that yet, keeping this in mind for the future development of Trainer.
0
huggingface
🤗Transformers
How to make single-input inference faster? Create my own pipeline?
https://discuss.huggingface.co/t/how-to-make-single-input-inference-faster-create-my-own-pipeline/9360
Hello there! I was able to fine-tune my own model for text classification (from a distilbert-base-uncased-finetuned-sst-2-english · Hugging Face 1 model). Everything works correctly on my PC. Now comes the app development time but inference - even on a single sentence - is quite slow. I am processing one sentence at a time and using the simple function predict_single_sentence(['this is my input sentence']) tokenizer = AutoTokenizer.from_pretrained('C:\\Users\\mytrainedmodel') mymodel = TFAutoModelForSequenceClassification.from_pretrained('C:\\Users\\mytrainedmodel') def predict_single_sentence(text_list, model, tokenizer): #tokenize the text encodings = tokenizer(text_list, max_length=280, truncation=True, padding=True) #transform to tf.Dataset dataset = tf.data.Dataset.from_tensor_slices((dict(encodings))) #predict preds = model.predict(dataset.batch(1)).logits #transform to array with probabilities res = tf.nn.softmax(preds, axis=1).numpy() return res Despite my big RTX, inference is quite slow (about 5 sec for a single sentence). Am I missing something here? Should I create my own pipeline to speed things up? Thanks!
@sgugger sorry to pull you in, but I would really appreciate your input here. Am I doing things wrong? Is there anything huggingface provides that allows me to speed up inference? Thank you so much!!
0
huggingface
🤗Transformers
Conceptual questions about transformers
https://discuss.huggingface.co/t/conceptual-questions-about-transformers/9467
Hello there, I am struggling with two simple questions about transformers models. I hope somebody in the huggingface communitity can shed some light on these points. transformers models create contextual embeddings. That is, the embedding of a word depend on the words around it in the sentence. Does this means a word has a different embedding for every possible sentence? At inference time, I am feeding a sentence to a transformer model for classification. How can the attention mechanism work (this word is related to this word, etc) if the model has never seen the sentence? I get the intuition at training time (we minimize the loss) but what happens at inference time? Any insights would be greatly appreciated! Thanks!
The final output of a word will differ depending on its position in the sentence as well as the words surrounding it. So, yes - for every different sentence, the word will have a different output. This is the whole point of training: the model learns which words it should pay attention to and which ones it should not for a given input vector. In practice it is more mathematical than that. This illustration 19 may help you get your head around it.
0
huggingface
🤗Transformers
How to change max_length of a fine tuned model
https://discuss.huggingface.co/t/how-to-change-max-length-of-a-fine-tuned-model/8382
Hello Everyone, I trained and shared a custom model based on gpt2 and now in config.json file of my model in the Model Hub I have the max_length as 50. I don’t remember passing that number as a training argument or such. However I want to use the whole capability of gpt-2 model and generate texts of length 1024 tokens. How can I update my model on Model Hub so that it’s possible to generate longer outputs using a pipeline? Here’s the config file of my model : config.json · YusufSahin99/IFIS_ZORK_AI_FANTASY at main 6 Any help would be appreciated
This number is just a default for when you use the model in a text-generation pipeline, which can be overridden by passing a new max_length in a call.
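For example, a small sketch using the text-generation pipeline with your model from the Hub, passing max_length explicitly to override the default of 50 stored in config.json:

from transformers import pipeline

generator = pipeline("text-generation", model="YusufSahin99/IFIS_ZORK_AI_FANTASY")
out = generator("You enter the dark forest and", max_length=1024)
print(out[0]["generated_text"])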
0
huggingface
🤗Transformers
Is the huggingface run_mlm Script dynamically masked?
https://discuss.huggingface.co/t/is-the-huggingface-run-mlm-script-dynamically-masked/9370
Hi, In the roberta paper, the model is trained by dynamic masking of sentences. If a roberta model is further pre-trained using the run_mlm.py , is the sentences going to be dynamically masked during pre-training using this file or is it statically masked like vanilla BERT? The script is found here github.com transformers/examples/pytorch/language-modeling at master ·... 9 master/examples/pytorch/language-modeling 🤗 Transformers: State-of-the-art Natural Language Processing for Pytorch, TensorFlow, and JAX. - transformers/examples/pytorch/language-modeling at master · huggingface/transformers
Apparently, dynamic masking is always used, no matter the model. See this: dynamic masking for RoBERTa model · Issue #5979 · huggingface/transformers · GitHub 20
0
huggingface
🤗Transformers
[HELP] NER task single sentence/sample prediction
https://discuss.huggingface.co/t/help-ner-task-single-sentence-sample-prediction/9460
Hello everybody. I am trying to predict with the NER model, as in the tutorial from huggingface (it contains only the training+evaluation part). I am following this exact tutorial here : notebooks/token_classification.ipynb at master · huggingface/notebooks · GitHub. It works flawlessly, but the problems that I have begin when I try to predict on a simple sample. loaded_model = AutoModel.from_pretrained('./my_model_own_custom_training.pth', from_tf=False) input_sentence = "John Nash is a great mathematician, he lives in France" tokenized_input_sentence = tokenizer([input_sentence], truncation=True, is_split_into_words=False, return_tensors='pt') predictions = loaded_model(tokenized_input_sentence["input_ids"])[0] predictions is of shape (1,13,768) How can I arrive at the final result of the form [JOHN <-> ‘B-PER’, … France <-> “B-LOC”], where B-PER and B-LOC are two ground truth labels, representing the tag for a person and location respectively? The result of the prediction is: torch.Size([1, 13, 768]) If I write: print(predictions.shape) print(predictions.argmax(axis=2)) tensor([613, 705, 244, 620, 206, 206, 206, 620, 620, 620, 477, 693, 308]) I get the tensor above. However I would have expected to get the tensor representing the ground truth [0…8] labels from the ground truth annotations. Summary when loading the model : loading configuration file ./my_model_own_custom_training.pth/config.json Model config DistilBertConfig { “name_or_path": “distilbert-base-uncased”, “activation”: “gelu”, “architectures”: [ “DistilBertForTokenClassification” ], “attention_dropout”: 0.1, “dim”: 768, “dropout”: 0.1, “hidden_dim”: 3072, “id2label”: { “0”: “LABEL_0”, “1”: “LABEL_1”, “2”: “LABEL_2”, “3”: “LABEL_3”, “4”: “LABEL_4”, “5”: “LABEL_5”, “6”: “LABEL_6”, “7”: “LABEL_7”, “8”: “LABEL_8” }, “initializer_range”: 0.02, “label2id”: { “LABEL_0”: 0, “LABEL_1”: 1, “LABEL_2”: 2, “LABEL_3”: 3, “LABEL_4”: 4, “LABEL_5”: 5, “LABEL_6”: 6, “LABEL_7”: 7, “LABEL_8”: 8 }, “max_position_embeddings”: 512, “model_type”: “distilbert”, “n_heads”: 12, “n_layers”: 6, “pad_token_id”: 0, “qa_dropout”: 0.1, “seq_classif_dropout”: 0.2, “sinusoidal_pos_embds”: false, "tie_weights”: true, “transformers_version”: “4.8.1”, “vocab_size”: 30522 }
Calin: notebooks/token_classification.ipynb at master · huggingface/notebooks · GitHub cc’ing @sgugger, these demo notebooks really need an inference part. To do inference with NER, you need to load an AutoModelForTokenClassification rather than an AutoModel, like so: from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("path_to_directory") model = AutoModelForTokenClassification.from_pretrained("path_to_directory") input_sentence = "John Nash is a great mathematician, he lives in France" encoding = tokenizer([input_sentence], return_tensors='pt') # forward pass outputs = model(**encoding) logits = outputs.logits predictions = logits.argmax(-1) The logits will be of shape (batch_size, seq_len, num_labels).
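Continuing the snippet above, you can turn the predicted indices into tag strings with the id2label mapping from the config and align them with the tokens (a sketch; a properly fine-tuned config maps these ids to tags like B-PER and B-LOC rather than the generic LABEL_x):

tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
predicted_labels = [model.config.id2label[p.item()] for p in predictions[0]]
for token, label in zip(tokens, predicted_labels):
    print(token, "->", label)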
0
huggingface
🤗Transformers
Why is the code for DataCollatorForSeq2Seq overwriting the labels?
https://discuss.huggingface.co/t/why-is-the-code-for-datacollatorforseq2seq-overwriting-the-labels/9382
Hey guys, I can’t figure out why, in the source code for DataCollatorForSeq2Seq, feature[‘label’] is being overwritten. Doesn’t that break things when the training samples are shuffled?
I guess you are talking about this line (in the future please link to the code that you are talking about so that we can easily look it up). github.com huggingface/transformers/blob/143738214cb83e471f3a43652617c8881370342c/src/transformers/data/data_collator.py#L281-L287 1 if labels is not None: max_label_length = max(len(l) for l in labels) padding_side = self.tokenizer.padding_side for feature in features: remainder = [self.label_pad_token_id] * (max_label_length - len(feature["labels"])) feature["labels"] = ( feature["labels"] + remainder if padding_side == "right" else remainder + feature["labels"] It is not changing the given labels but it is padding them to ensure that all items in the batch have the same length. Pad tokens are ignored when the loss is calculated. It is a collator, so it happens after the shuffling process of the dataloader. It receives a number of items from the dataloader (possibly shuffled) and then collates them (prepares them for the model).
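A tiny usage sketch of the collator (illustrative tokenizer and toy features, not from the thread) showing the label padding described above:

from transformers import AutoTokenizer, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("t5-small")
collator = DataCollatorForSeq2Seq(tokenizer, label_pad_token_id=-100)

features = [
    {"input_ids": [100, 200, 300], "labels": [1, 2]},
    {"input_ids": [100], "labels": [1, 2, 3, 4]},
]
batch = collator(features)
print(batch["labels"])   # the shorter label list is padded with -100 up to length 4

The original label values are untouched; only -100 padding is appended so every item in the batch has the same length, and those positions are skipped by the loss.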
0
huggingface
🤗Transformers
Using Trainer at inference time
https://discuss.huggingface.co/t/using-trainer-at-inference-time/9378
Hello everyone, I successfully fine-tuned a model for text classification. Now I would like to run my trained model to get labels for a large test dataset (around 20,000 texts). So I had the idea to instantiate a Trainer with my model and use the trainer.predict() method on my data. This works fine, but I was wondering if it makes sense (and it’s efficient, advisable, & so on) to use a Trainer (which, of course, was meant to be used for training models) just for inference. If not, what would be a better way to perform inference on a large dataset? I cannot just pass all data to model() as I get out of memory errors. I would need to explicitly batch my data, I guess (while Trainer takes care of that part implicitly)… Thank you in advance for your thoughts on this!
Normally, the Trainer saves your trained model in a directory. You can specify this with the output_dir argument when instantiating the TrainingArguments. You can then instantiate your trained model using the .from_pretrained() method. Suppose that you have fine-tuned a BertForSequenceClassification model, then you can instantiate it as follows: from transformers import BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("path_to_the_directory") You can then make batched predictions as follows: from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("path_to_the_directory") text = ["this is one sentence", "this is another sentence"] encoding = tokenizer(text, return_tensors="pt") # forward pass outputs = model(**encoding) predictions = outputs.logits.argmax(-1)
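For ~20,000 texts you would then loop over the data in chunks; a rough sketch (the batch size is illustrative), assuming texts is your full list and model/tokenizer are the ones loaded above:

import torch

batch_size = 32
all_predictions = []
model.eval()

with torch.no_grad():
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        encoding = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")
        logits = model(**encoding).logits
        all_predictions.extend(logits.argmax(-1).tolist())

This keeps only the predicted class ids in memory rather than the full logits for all 20,000 texts.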
0
huggingface
🤗Transformers
Model trains with Seq2SeqTrainer but gets stuck using Trainer
https://discuss.huggingface.co/t/model-trains-with-seq2seqtrainer-but-gets-stuck-using-trainer/9333
Hi, I’ve been trying to finetune the BART large pre-trained on MNLI with the Financial Phrasebank dataset to build a model for news sentiment analysis. I’m just a beginner and so, I mostly use the code from GEM Getting Started. from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainer, Seq2SeqTrainingArguments tokenizer = AutoTokenizer.from_pretrained(‘facebook/bart-large-mnli’) model = AutoModelForSeq2SeqLM.from_pretrained(‘facebook/bart-large-mnli’) The model only trains when I use the AutoModelForSeq2SeqLM, Seq2SeqTrainer andSeq2SeqTrainingArguments. When I use model = AutoModelForSequenceClassification.from_pretrained(‘facebook/bart-large-mnli’) with the Trainer and TrainingArguments, the model does not train. Is it appropriate to use seq2seq for sentiment classification tasks? Any suggestions would be immensely helpful. Thanks in advance.
Are you framing your classification problem as a sequence generation task? what types of labels do you have for your training data? Are the labels text/sequence or a finite number of categories? If your task is classification I believe you’re using the wrong model class. You could probably use BertForSequenceClassification for a sentiment analysis task as has been done in the link below: huggingface.co nlptown/bert-base-multilingual-uncased-sentiment at main 3 And instead of using Seq2SeqTrainer, just use Trainer and TrainingArguments.
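A minimal sketch of that setup (bert-base-uncased is just a placeholder checkpoint; num_labels=3 assumes the positive/neutral/negative labels of Financial PhraseBank):

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
# then train this model with the plain Trainer / TrainingArguments instead of the Seq2Seq variants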
0
huggingface
🤗Transformers
Onnx tf bert sentiment-analysis input and outputs
https://discuss.huggingface.co/t/onnx-tf-bert-sentiment-analysis-input-and-outputs/9215
I trained a model based on bert-large-cased using the run_text_classification.py 3 tensorflow example. I converted the model to onnx using the convert_graph_to_onnx.py tool. That looks like this: convert_graph_to_onnx.convert( framework='tf', model="output", output=Path("model/model.onnx"), opset=12, tokenizer="output", use_external_format=True, pipeline_name="sentiment-analysis") I’m having a slight difficulty understanding the inputs and outputs. I am trying to use Rust and the onnxruntime 1. The onnx model input shape is (?, 5). I am using input_ids, attention mask, token ids constructed from the tokenizers Rust library. For example the inputs I’m using [[101, 1, 0, 0, 0], [1104, 1, 0, 0, 0], [23120, 1, 0, 0, 0], [188, 1, 0, 0, 0], [19091, 1, 0, 0, 0], [8124, 1, 0, 0, 0], [1111, 1, 0, 0, 0], [3062, 1, 0, 0, 0], [1105, 1, 0, 0, 0], [15470, 1, 0, 0, 0], [119, 1, 0, 0, 0], [102, 1, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], ... ... I truncate + pad the tokens out to 128 length, since I used a max length of 128 when training the model. the outputs are logits (I think) in the shape (?, 4). My sentiment analysis task has 4 classes, so I think this makes sense. [[0.87741166, 0.4000733, -2.557633, 1.5771139], [-0.7227318, -0.14528184, 2.4809465, -1.7312673], [0.88585603, 0.392128, -2.54713, 1.5852065], [-0.89909637, 0.5074229, 2.3639672, -1.8381689], [0.8940967, 0.40258378, -2.5756738, 1.5999701], ... ... Since this is a sentiment-analysis task why are there logits for each token? Any advice on things to check? Any things stand out as obviously wrong?
Usually, the last layer of a classification model (in your case TFBertForSequenceClassification) produces raw prediction values as real numbers ranging over (-infinity, +infinity). These raw, unconstrained prediction values are commonly known as logits. After this, we usually want to normalize these outputs into a probability distribution over the predicted output classes. To do this, we use a normalization layer such as sigmoid or softmax for binary and multi-class classification respectively. The output of the softmax is a probability, and to convert that into a class we take the argmax. To do that with your onnx outputs, you can do something like this: import numpy as np from scipy.special import softmax np.argmax(softmax(outputs[0][0], axis=0))
0
huggingface
🤗Transformers
Trainer vs seq2seqtrainer
https://discuss.huggingface.co/t/trainer-vs-seq2seqtrainer/3145
Hi, If I am not mistaken, there are two types of trainers in the library. The standard trainer and the seq2seq trainer. It seems that the Trainer works for every model since I am using it for a Seq2Seq model (T5). MY question is: What advantages does seq2seq trainer have over the standard one? And why does not the library handle the switch in the background or does it? I mean that the user can use Trainer all the time and in the background, it will be a seq2seqtrainer if the corresponding model needs it. Thank you!
Hi @berkayberabi You are right: in general, Trainer can be used to train almost any library model, including seq2seq models. Seq2SeqTrainer is a subclass of Trainer and provides the following additional features: it lets you use SortishSampler, and it lets you compute generative metrics such as BLEU, ROUGE, etc. by doing generation inside the evaluation loop. The reason to add this as a separate class is that for calculating generative metrics we need to do generation using the .generate method in the predict step, which is different from how other models do prediction. To support this you need to override the prediction-related methods such as prediction_step and predict to customize the behaviour, hence the Seq2SeqTrainer. Hope this answers your question.
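For example, a sketch of how the generation-based evaluation and the sampler are switched on (argument names from Seq2SeqTrainingArguments; output_dir is a placeholder):

from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="my-seq2seq-model",
    predict_with_generate=True,   # evaluation/prediction call model.generate(), enabling BLEU/ROUGE
    sortish_sampler=True,         # use the SortishSampler mentioned above
    evaluation_strategy="epoch",
)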
0
huggingface
🤗Transformers
How to use transformers for batch inference
https://discuss.huggingface.co/t/how-to-use-transformers-for-batch-inference/9366
I use transformers to train text classification models,for a single text, it can be inferred normally. The code is as follows from transformers import BertTokenizer, TFAlbertForSequenceClassification text = 'This is a sentence' model_path ='../albert_chinese_tiny' tokenizer = BertTokenizer.from_pretrained(model_path) model = TFAlbertForSequenceClassification.from_pretrained('../model_tf/20210818') encoding = tokenizer(text, truncation=True, padding=True, max_length=30, return_tensors="tf") result = model(encoding) When I predict more than one text at a time, an error will be reported. The code is as follows texts = ['This is a sentence', 'This is another sentence'] encodings = [] model_path ='../albert_chinese_tiny' tokenizer = BertTokenizer.from_pretrained(model_path) model = TFAlbertForSequenceClassification.from_pretrained('../model_tf/20210818') for text in texts: encoding = tokenizer(text, truncation=True, padding=True, max_length=30, return_tensors="tf") encodings.append(encoding) result = model(np.array(encodings)) The error information is as follows: tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr ‘Tindices’ of string is not in the list of allowed values: int32, int64 ; NodeDef: {{node ResourceGather}}; Op<name=ResourceGather; signature=resource:resource, indices:Tindices → output:dtype; attr=batch_dims:int,default=0; attr=validate_indices:bool,default=true; attr=dtype:type; attr=Tindices:type,allowed=[DT_INT32, DT_INT64]; is_stateful=true> [Op:ResourceGather]
This question has been answered here 375.
0
huggingface
🤗Transformers
Multiple choice with variable number of choices
https://discuss.huggingface.co/t/multiple-choice-with-variable-number-of-choices/8607
Hi all, Similar question to Multiple choice with variable length options 🤗Transformers Hello! I have a beginner question. I am trying to create a model that makes predictions on the QAngaroo dataset with DistilBert. In this dataset, we get a list of supports and some candidate answers (between 2~100), and we need to choose the right answer for the model. Right now, I am trying to use TFDistilBertForMultipleChoice, but I am running into a problem since num_choices is a value that is fixed with the entire batch size. I was wondering how I could go about making that value dynamic. I… The reply there is to go full-blown text-to-text—which is a great idea!—but I’m interested in getting a discriminative BERT-esque baseline if possible (due to the dataset’s particular size, structure, and text content). Since multiple choice models (like RobertaForMultipleChoice) detect the number of questions dynamically per batch, it seems like the main challenge is getting each batch to have a consistent number of choices. Going off of the run_swag.py example, there are two main user-provided data processing functions: a preprocess_function() — for adding new features to the Dataset a collate function — for turning a raw batch into tensors Unfortunately, while it’s no problem to add the number of choices for an example in 1., by the time 2. comes around, we’ve already been given a batch, so it’s too late to ensure they all have the same number of choices. In other words, it seems like the sampler is the place where we’d make sure the number of choices is consistent per batch. To my delight, I found the --group_by_length and --length_column_name options, which enable the (Distributed)LengthGroupedSampler. This opens up a potential way for doing this: Add a feature with the number of choices Pass this feature as the --length_column_name and use --group_by_length Unfortunately for me, this constraint is a soft one rather than a hard one. (This makes sense for the original purpose, of course, which is just helping with padding.) This means that some batches do end up with multiple “lengths” (choices). I wrote a quick test that injects a number from 1-4 as the feature and checked how many batches had multiple “lengths.” I was hoping it would be just three batches (at the borders between 1-2, 2-3, 3-4). Over 500 batches, there were 14 that ended up with mixed numbers. I could truncate these batches, which would only lose ~3% of the data. Not a huge loss, but it makes me wonder whether I can do better! So, to complete the very long-winded question, I wonder whether anyone more familiar with Huggingface Transformers can recommend an approach to implement multiple choice with a variable number of choices. Right now my main options are: Write my own sampler to do this. Given this is just for a baseline, and my trepidation at debugging a custom distributed sampler, I worry this might not be worth the investment. Create multiple Dataset objects, each with a consistent number of choices. Do an outer training loop. (This would be less ideal because each number of choices corresponds to a question format, so this would increase coarse patterns into training / reduce how shuffled it is.) Just throw away the ~3% of mixed-data batches (less if we take the majority, so maybe ~1.5%). ??? (better approach I can’t see?) Huge thanks for your time!
In case anyone in the future is reading this, the above does work pretty well, but only for training. For evaluation, the (Distributed)LengthGroupedSampler is not used. Furthermore, even if it was, by nature of throwing some data away in the collator, we skip some of the evaluation set (which is a no-no for comparing results between methods). I provided an example implementation of a batch sampler that groups based on a provided feature in the following comment: Option for `(Distributed)LengthGroupedSampler` to treat groups as a hard constraint · Issue #12995 · huggingface/transformers · GitHub 5 It also requires a change to Transformers itself to support a batch sampler, which is in a PR linked to that issue.
0
huggingface
🤗Transformers
Cannot load a saved (fine-tuned) model?
https://discuss.huggingface.co/t/cannot-load-a-saved-fine-tuned-model/9307
Hello there, I am facing a rather annoying issue. I fine-tuned a Bert model for classification and saved the resulting model to disk. However, I am unable to subsequently load the model. This is essentially what I do: tokenizer = AutoTokenizer.from_pretrained('C:\\Users\\bert-sentiment') #tokenize text... model = TFAutoModelForSequenceClassification.from_pretrained('C:\\Users\\bert-sentiment', num_labels =3, ignore_mismatched_sizes = True) #fine tune model... model.fit() model.save_pretrained('C:\\Users\\mydir') mydir contains two files tf_model.h5 and config.json. However, when I tried to load the model by simply running mymodel = TFAutoModelForSequenceClassification.from_pretrained("C:\\Users\\mydir") I get the error OSError: Unable to load weights from h5 file. If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True. do you know what the issue could be? Thanks!
Strange bug. I re-generated the saved models and then loading worked fine. Perhaps some corruption at saving time?
0
huggingface
🤗Transformers
Why BertForMaskedLM has decoder layer
https://discuss.huggingface.co/t/why-bertformaskedlm-has-decoder-layer/9263
I want to start a new pre training language. When I use ’bertformaskedLM‘, I find that the model has a decoder layer, and the output of the model should be 768 dimensions, not 20000 (20000 is the number of tokens). What is the reason for this problem ?and how can I pretrain a language correctly? thank you. this code: config = BertConfig( vocab_size=20000, hidden_size=768, num_hidden_layers=6, num_attention_heads=12, max_position_embeddings=512 ) model = BertForMaskedLM(config) training_args = TrainingArguments( output_dir='./bangla_data/working/', overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=32, save_steps=1000, #10000 save_total_limit=2, prediction_loss_only=True, learning_rate: float = 5e-05, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) model = BertForMaskedLM(config) print('No of parameters: ', model.num_parameters()) this model: BertForMaskedLM( (bert): BertModel( (embeddings): BertEmbeddings( (word_embeddings): Embedding(20000, 768, padding_idx=0) (position_embeddings): Embedding(512, 768) (token_type_embeddings): Embedding(2, 768) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): BertEncoder( (layer): ModuleList( (0): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (1): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (2): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, 
out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (3): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (4): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (5): BertLayer( (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): BertIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) ) (cls): BertOnlyMLMHead( (predictions): BertLMPredictionHead( (transform): BertPredictionHeadTransform( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) (decoder): Linear(in_features=768, out_features=20000, bias=True) ) ) )
Hi @ccfeidao For your first question about the decoder and the hidden size and output size: Internally, the model projects the input tokens which have dimension of the vocabulary size (20’000 in your case) to the hidden size (768 in your case). Inside the layers of BERT the embeddings of 768 are processed. Finally, after the last BERT layer we need to get back from the hidden size to the vocabulary size which corresponds to proper tokens. That’s what the decoder layer does: it takes embeddings of dim=768 and projects them to dim=20000. As for your second question about pretraining: there is a tutorial on Google Colab 7.
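You can see that projection directly in the output shapes; a small sketch with your config values (vocab_size=20000, hidden_size=768):

import torch
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(vocab_size=20000, hidden_size=768, num_hidden_layers=6,
                    num_attention_heads=12, max_position_embeddings=512)
model = BertForMaskedLM(config)

input_ids = torch.randint(0, 20000, (1, 10))   # batch of 1, sequence length 10
outputs = model(input_ids=input_ids)

print(outputs.logits.shape)  # torch.Size([1, 10, 20000]) -> one score per vocabulary token
print(model.bert(input_ids=input_ids).last_hidden_state.shape)  # torch.Size([1, 10, 768]) -> hidden states before the decoder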
0
huggingface
🤗Transformers
Unable to torch.jit.trace quantized BigBird (0INTERNAL ASSERT FAILED runtime error) but works for BERT and RoBERTa
https://discuss.huggingface.co/t/unable-to-torch-jit-trace-quantized-bigbird-0internal-assert-failed-runtime-error-but-works-for-bert-and-roberta/9222
Hello, I am trying to torch.jit.trace Transformers’ implementation of BigBird. But I’m encountering a runtime error that I’m not very familiar with, specifically: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-4-1dfdd2340788> in <module> 4 ) 5 ----> 6 traced_model = torch.jit.trace(model, (input_ids, attention_mask)) 7 torch.jit.save(traced_model, "traced_bigbird.pt") /opt/conda/lib/python3.7/site-packages/torch/jit/_trace.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 742 strict, 743 _force_outplace, --> 744 _module_class, 745 ) 746 /opt/conda/lib/python3.7/site-packages/torch/jit/_trace.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 957 strict, 958 _force_outplace, --> 959 argument_names, 960 ) 961 check_trace_method = module._c._get_method(method_name) RuntimeError: 0INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":532, please report a bug to PyTorch. We don't have an op for aten::constant_pad_nd but it isn't a special case. Argument types: Tensor, int[], bool, I’m trying to locate constant_pad_nd in the code base for BigBird to figure out how it relates to aten, but i’m also that familiar with aten. That said, I also ran the same code for BERT and RoBERTa but did not encounter the same issue and was able to trace the quantized models for both respectively. To reproduce this error, Git clone this repo Run example.ipynb Anyone familiar with this matter or knows enough to help debug this issue?
Created a Github issue here 17. Also tagging @patrickvonplaten @sgugger @lewtun for more visibility
0
huggingface
🤗Transformers
Huge difference in speed when finetuning summarization with different scripts
https://discuss.huggingface.co/t/huge-difference-in-speed-when-finetuning-summarization-with-different-scripts/9194
From transformers 3.x, I have been using examples/seq2seq/finetune.py (huggingface-transformers/finetune.py at v3.5.1 · microsoft/huggingface-transformers · GitHub 4) to finetune pegasus model and it’s been working fine. After upgrading to transformers 4.x, that script has been moved to legacy and/so I’m thinking of using the examples/seq2seq/run_summarization.py (transformers/run_summarization.py at v4.4.2 · huggingface/transformers · GitHub 1) for the same training. I moved pieces around to make the old finetune.py work under transformers 4.x as well to have fair comparison. Other than the dataset format difference between the two, I mainly noticed the huge difference in training speed between the two. For the same training dataset (~6 million data points): finetune.py: with 4 v100 GPU, taking ~6.5h/epoch. run_summarization.py with 4 v100 GPU, taking ~15h/epoch. Upon reading the code, finetune.py from transformers 3.x uses pytorch-lightning but run_summarization.py I believe just uses pytorch. Is this mainly causing the difference in speed? Or are there any signification implementation difference between the two? I can provide more details if needed. Thanks!
The two scripts don’t have the same defaults at all, so there could be plenty of reasons for the differences in speed. Could you tell us what command lines you use to run them in both cases? Thanks!
0
huggingface
🤗Transformers
Tokenizer vs. TokenizerFast
https://discuss.huggingface.co/t/tokenizer-vs-tokenizerfast/9187
Hi, When adding a new token in the vocabulary, there is a difference between Tokenizer and FastTokenizer. from transformers import BartTokenizer, BartTokenizerFast tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') tokenizer_fast = BartTokenizerFast.from_pretrained('facebook/bart-large') tokenizer.add_tokens("<NEW_TOKEN>") tokenizer_fast.add_tokens("<NEW_TOKEN>") sentence = "I added a <NEW_TOKEN> in the vocabulary." print(tokenizer.encode(sentence)) # [0, 100, 355, 10, 50265, 179, 5, 32644, 4, 2] print(tokenizer_fast.encode(sentence)) # [0, 100, 355, 10, 1437, 50265, 11, 5, 32644, 4, 2] The fast tokenizer adds a space token before the <NEW_TOKEN> (1437) while the standard tokenizer removes the automatic space from the next token (179 vs. 11). I tried with RoBERTa and got the same problem. Thanks!
Technically speaking, the overall implementation of the tokenizers with respect to SentencePiece is somewhat hacky in Hugging Face, so small discrepancies like this between the slow and fast tokenizers can show up when adding new tokens.
0
huggingface
🤗Transformers
How can I get the score from Question-Answer Pipeline? Is there a bug when Question-answer pipeline is used?
https://discuss.huggingface.co/t/how-can-i-get-the-score-from-question-answer-pipeline-is-there-a-bug-when-question-answer-pipeline-is-used/826
When I run the following code from transformers import AutoTokenizer, AutoModelForQuestionAnswering import torch tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad") model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad") text = r""" As checked Dis is not yet on boarded to ARB portal, hence we cannot upload the invoices in portal """ questions = [ "Dis asked if it is possible to post the two invoice in ARB.I have not access so I wanted to check if you would be able to do it.", ] for question in questions: inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt") input_ids = inputs["input_ids"].tolist()[0] text_tokens = tokenizer.convert_ids_to_tokens(input_ids) answer_start_scores, answer_end_scores = model(**inputs) answer_start = torch.argmax( answer_start_scores ) # Get the most likely beginning of answer with the argmax of the score answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])) print(f"Question: {question}") print(f"Answer: {answer}\n") The answer that I get here is: Question: Dis asked if it is possible to post the two invoice in ARB.I have not access so I wanted to check if you would be able to do it. Answer: dis is not yet on boarded to ARB portal How do I get a score for this answer? Score here is very similar to what is I get when I run Question-Answer pipeline . I have to take this approach since Question-Answer pipeline when used is giving me Key Error for the below code from transformers import pipeline nlp = pipeline("question-answering") context = r""" As checked Dis is not yet on boarded to ARB portal, hence we cannot upload the invoices in portal. """ print(nlp(question="Dis asked if it is possible to post the two invoice in ARB?", context=context))
See how decode is called here 1 and defined here. You can modify decode like this to get the scores:
def decode(start: np.ndarray, end: np.ndarray, topk: int, max_answer_len: int):
    # Compute the score of each tuple(start, end) to be the real answer
    outer = np.matmul(np.expand_dims(start, -1), np.expand_dims(end, 1))
    # Remove candidates with end < start and end - start > max_answer_len
    candidates = np.tril(np.triu(outer), max_answer_len - 1)
    # Inspired by Chen & al. (https://github.com/facebookresearch/DrQA)
    scores_flat = candidates.flatten()
    if topk == 1:
        idx_sort = [np.argmax(scores_flat)]
    elif len(scores_flat) < topk:
        idx_sort = np.argsort(-scores_flat)
    else:
        idx = np.argpartition(-scores_flat, topk)[0:topk]
        idx_sort = idx[np.argsort(-scores_flat[idx])]
    starts, ends = np.unravel_index(idx_sort, candidates.shape)[1:]
    scores = candidates[0, starts, ends]
    return starts, ends, scores
0
huggingface
🤗Transformers
Fine-tuning Wav2Vec2 for English ASR with on local machine Transformers
https://discuss.huggingface.co/t/fine-tuning-wav2vec2-for-english-asr-with-on-local-machine-transformers/9121
I’m running the https://huggingface.co/blog/fine-tune-wav2vec2-english#training–evaluation 7 example on my local machine and getting Training Loss = nan:
Step   Training Loss   Validation Loss   Wer        Runtime      Samples Per Second
200    nan             13.842948         2.703102   204.199500   8.227000
400    nan             13.842948         2.703102   204.301000   8.223000
600    nan             13.842948         2.703102   204.371700   8.220000
Local modifications to the example:
training_args = TrainingArguments(
    output_dir="./wav2vec2-base-timit-demo",
    group_by_length=True,
    per_device_train_batch_size=4,  # changed from 32
    …
    save_steps=200,
    eval_steps=200,
    logging_steps=100,
    …
)
Update: by moving the job to the CPU, the training loss now gets real values. Steps so far:
Make CUDA unavailable:
import torch
torch.cuda.is_available = lambda: False
Disable mixed precision:
training_args = TrainingArguments(
    …
    # fp16=True,
    …
)
"Mixed precision training with AMP or APEX (--fp16) and FP16 evaluation can only be used on CUDA devices."
0
huggingface
🤗Transformers
Encoding error while fine-tuning
https://discuss.huggingface.co/t/encoding-error-while-fine-tuning/8800
Hello there! I have a question regarding the fine-tuning of mbart. I did the training like the example here transformers/examples/pytorch/translation at v4.6.1 · huggingface/transformers · GitHub and obtained a model pytorch_model.bin However when trying to use the model to translate I get an UnicodeDecodeError UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte The complete error is the following. As far as I can see it produces when loading the model I obtained from fine-tuning. Traceback (most recent call last): File "mbart/predict.py", line 41, in <module> main() File "mbart/predict.py", line 21, in main model = MBartForConditionalGeneration.from_pretrained(opt.model) File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1080, in from_pretrained **kwargs, File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/site-packages/transformers/configuration_utils.py", line 427, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/site-packages/transformers/configuration_utils.py", line 495, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/site-packages/transformers/configuration_utils.py", line 578, in _dict_from_json_> text = reader.read() File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte Any ideas on how to solve this? Thanks in advance
Maybe your input path is xxxxxxxxxxxxx/pytorch_model.bin; change it to the containing directory xxxxxxxxxxxxx/ and have a try.
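As a minimal sketch (the directory name below is a hypothetical placeholder for wherever your fine-tuned checkpoint was saved), from_pretrained expects the folder that holds config.json and pytorch_model.bin, not the .bin file itself:
from transformers import MBartForConditionalGeneration

# Point at the output directory of the fine-tuning run, not at pytorch_model.bin inside it
model = MBartForConditionalGeneration.from_pretrained("path/to/finetuned_mbart")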
0
huggingface
🤗Transformers
Can’t load pre-trained tokenizer with additional new tokens
https://discuss.huggingface.co/t/cant-load-pre-trained-tokenizer-with-additional-new-tokens/8966
I first pretrained masked language model by adding additional list of words to the tokenizer. Then I saved the pretrained model and tokenizer. tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') model = AutoModelForMaskedLM.from_pretrained( 'bert-base-uncased' ) tokenizer.add_tokens(list_of_words) model.resize_token_embeddings(len(tokenizer)) trainer.train() model_to_save = model.module if hasattr(model, 'module') else model model_to_save.save_pretrained(data_args.output_file) tokenizer.save_pretrained(data_args.output_file) After that, I want to load the pre-trained tokenizer and model by tokenizer = BertTokenizer.from_pretrained(model_args.model_name_or_path) encoder = BertModel.from_pretrained(model_args.model_name_or_path, num_labels=num_classes) tokenizer.add_tokens(list_of_words) encoder.resize_token_embeddings(len(tokenizer)) However, an error occurred as shown below and it seems that the pretrained tokenizer couldn’t be loaded correctly. AssertionError: Non-consecutive added token 'IncelTears' found. Should have index 30525 but has index 30526 in saved vocabulary. Does anyone have an idea on this? Thanks a lot.
Which line threw the error? If it’s tokenizer.add_tokens(list_of_words), it’s because your tokenizer already has those words added from the first sample, so you can’t re-add them.
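If the error does come from re-adding the tokens, a minimal sketch of a guard (list_of_words is the same list used in your pretraining script; this is an assumption about your setup, not the only possible fix) would be to only add the words that are actually missing from the saved vocabulary:
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained(model_args.model_name_or_path)
encoder = BertModel.from_pretrained(model_args.model_name_or_path)

# The saved tokenizer already contains the previously added words, so skip those
new_words = [w for w in list_of_words if w not in tokenizer.get_vocab()]
if new_words:
    tokenizer.add_tokens(new_words)
    encoder.resize_token_embeddings(len(tokenizer))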
0
huggingface
🤗Transformers
Why transformer overfit quickly? how to solve it?
https://discuss.huggingface.co/t/why-transformer-overfit-quickly-how-to-solve-it/1842
Hi, I have a general question and would appreciate your feedback on it. I am new to transformers. My main problem is that they overfit very quickly. I am using regularization methods such as augmentation and dropout, but after 2 epochs my validation accuracy starts to drop while the training accuracy reaches its highest value (basically my model overfits). Do you have any suggestions? Interestingly, I never see this behavior when I use convolutions…
My personal thought is that if your dataset is small, the model will overfit quickly. If you want to avoid that, reduce the number of epochs, but the best way is to gather more data. Having said that, neither transformers nor neural networks in general suffer too much from overfitting; there are some papers on this, I believe, and they generalize well most of the time. Remember that transformer-like models have a very large number of parameters, which is also one reason for overfitting. But in downstream tasks, even if the model overfits, it is still useful, right? Pre-training is like graduating with a Masters; fine-tuning is like doing a PhD (except here it is quick): you use the skills from your degree to become an expert in a specific field. So some overfitting is okay. Personal opinion only.
0
huggingface
🤗Transformers
Trainer optimizer
https://discuss.huggingface.co/t/trainer-optimizer/2146
Hi everyone, in my code I instantiate a trainer as follows: trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, ) I don’t specify anything in the “optimizers” field as I’ve always used the default one (AdamW). I tried to create an optimizer instance similar to the default one so I could try to change the learning rate (lr). The code I used to simulate the default optimizer is the following: no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] optimizer = AdamW(optimizer_grouped_parameters, lr=5e-05, eps=1e-08) scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=0, num_training_steps=750 ) and then pass the parameters optimizer and scheduler to the optimizer field of the trainer. The problem is that with this definition of optimizer, I have different results than the default one (even if I thought they were identical). What should I change to create an optimizer identical to the default one, but where I can change the lr directly from my code? Thanks!
At first glance, it might be linked to the number of training_steps? Are you sure your other training does 750 steps? Also I don’t know what your training_args are but if any of them don’t use the default value, that could also be the reason for the change in results.
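If the goal is simply to change the learning rate while keeping everything else at the Trainer defaults, it is usually easier to set it through TrainingArguments rather than rebuilding the optimizer and scheduler by hand; a minimal sketch (model and the datasets are the ones from your existing code, and only learning_rate differs from the defaults):
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    learning_rate=3e-5,  # the only value changed from the defaults
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
)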
0
huggingface
🤗Transformers
Load from checkpoint not skipping steps
https://discuss.huggingface.co/t/load-from-checkpoint-not-skipping-steps/1553
I’m pre training a distillBert model from scratch and saving the model every 300 steps , When trying to load a checkpoint to continue training from the Trainer show that it’s skipping the trained steps but it just starts from 0 and doesn’t start logging or saving until the trainer passes the number of skipped steps. device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = DistilBertForMaskedLM.from_pretrained("/content/drive/My Drive/AIMBert/output_gpu/checkpoint-48000",config=config).to(device) training_args = TrainingArguments( output_dir="/content/drive/My Drive/AIMBert/output_gpu", logging_dir='/content/drive/My Drive/AIMBert/logs_gpu', overwrite_output_dir=True, num_train_epochs=10, per_device_train_batch_size=32, per_device_eval_batch_size = 32, logging_steps = 100, save_steps=300, save_total_limit=5, evaluation_strategy = "steps", eval_steps=50000, seed = 42, prediction_loss_only=True, fp16 = True ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset= processed_train_dataset, eval_dataset = processed_valid_dataset ) trainer.train('/content/drive/My Drive/AIMBert/output_gpu/checkpoint-48000')
What do you mean by “It starts at 0?” Also, which version of transformers are you using?
0
huggingface
🤗Transformers
Extracting embeddings with distilbert? (in tensorflow)
https://discuss.huggingface.co/t/extracting-embeddings-with-distilbert-in-tensorflow/9023
Hello, I am trying to understand the transformers architecture better and in particular to extract the contextual embeddings for a given sentence. I know I can use the pipeline feature-extraction but I would like to extract them manually, but consider the small example below. Unfortunately, the last hidden states cannot be the contextual embeddings : I get a 2-dimensional vector whereas the embeddings have hundreds of dimensions. import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english') model = TFAutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english') input_ids = tf.constant(tokenizer.encode("Hello I am a dog."))[None, :] outputs = model(input_ids) last_hidden_states = outputs[0] last_hidden_states.numpy() Out[22]: array([[-1.651872 , 1.6822953]], dtype=float32) What is the issue here? Thanks!
You should use the model without head for this, given by the class TFAutoModel. Here you use a model for sequence classification.
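For example, using the snippet from the question but with the headless model (loading a sequence-classification checkpoint into TFAutoModel will warn that the classifier weights are unused, which is expected here):
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')
model = TFAutoModel.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')

inputs = tokenizer("Hello I am a dog.", return_tensors="tf")
outputs = model(inputs)
# Shape (batch_size, sequence_length, hidden_size), e.g. (1, 8, 768) for DistilBERT
print(outputs.last_hidden_state.shape)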
0
huggingface
🤗Transformers
Upgrading to transformers 4.9.1?
https://discuss.huggingface.co/t/upgrading-to-transformers-4-9-1/8999
Hello, I am trying to upgrade to the latest distribution of transformers (4.9.1 according to Release v4.9.1: Patch release · huggingface/transformers · GitHub). I currently have the version 4.3.3 installed and it works fine. So, I downloaded the 4.9.1 zip file, extracted to a directory, cd to that directory and run python setup.py install. The install went fine (except for the huggingface-hub requirement at the end — which is blocked by my firewall). However, when I start spyder again the version is still 4.3.3. Am I doing something wrong to update the package offline (on windows)? Thanks!
I was able to upgrade by using pip instead. Weird that the good old python setup.py install would not work in this case.
0
huggingface
🤗Transformers
Problems saving model
https://discuss.huggingface.co/t/problems-saving-model/8952
(The original post contains only a screenshot of the error, 1511×592.) Can anyone help me out?
I don’t think Wav2Vec2 has been updated to work with saved models, cc @Rocketknight1
0
huggingface
🤗Transformers
[Spaces] Streamlit does not support external components
https://discuss.huggingface.co/t/spaces-streamlit-does-not-support-external-components/8432
Hello, I created streamlit-tags, and while playing around with Streamlit in Spaces Beta I realised that Spaces does not support custom components like streamlit-tags. Is any support available for Spaces? Regards, Gagan Bhatia
HF Spaces now supports Streamlit components (see Changelog 2)
0
huggingface
🤗Transformers
Value error : Connection error
https://discuss.huggingface.co/t/value-error-connection-error/3583
Hello, I am new to this forum and to Hugging Face models. Could someone help with this? I want to use the model 'Helsinki-NLP/opus-mt-en-sla'. I am using the code from the model page:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-sla")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-sla")
but I get this error: ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. I have installed all the necessary libraries and my internet connection is good… could someone help with this? Thanks
Hello @Katarina, I was able to run your code snippet without any problems on my machine, so I wonder whether there is some firewall / proxy you need to configure on your end? A simple test that your connection is fine would be to spin up a Google Colab notebook and see if your code works there. Alternatively, you could try upgrading to the latest version of transformers just to be sure it’s not an old bug that got fixed recently.
0
huggingface
🤗Transformers
Flax - core dump when starting training
https://discuss.huggingface.co/t/flax-core-dump-when-starting-training/7546
Trying to follow the instructions for training an Roberta-base mlm-model, as described here: github.com huggingface/transformers 1 master/examples/flax/language-modeling 🤗Transformers: State-of-the-art Natural Language Processing for Pytorch, TensorFlow, and JAX. Everything is easy to follow until the start of training. Immediately ends in a core dump, with this error message: ttcmalloc: large alloc 500236124160 bytes == (nil) @ 0x7f51b5df3680 0x7f51b5e13ff4 0x7f51b590a309 0x7f51b590bfb9 0x7f51b590c056 0x7f4e5cc6a659 0x7f4e526a0954 0x7f51b5fe7b8a 0x7f51b5fe7c91 0x7f51b5d46915 0x7f51b5fec0bf 0x7f51b5d468b8 0x7f51b5feb5fa 0x7f51b5bbb34c 0x7f51b5d468b8 0x7f51b5d46983 0x7f51b5bbbb59 0x7f51b5bbb3da 0x67299f 0x682dcb 0x684321 0x5c3cb0 0x5f257d 0x56fcb6 0x56822a 0x5f6033 0x56ef97 0x5f5e56 0x56a136 0x5f5e56 0x569f5e terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc https://symbolize.stripped_domain/r/?trace=7f51b5c2918b,7f51b5c2920f&map= *** SIGABRT received by PID 14088 (TID 14088) on cpu 95 from PID 14088; stack trace: *** PC: @ 0x7f51b5c2918b (unknown) raise @ 0x7f4f86e6d800 976 (unknown) @ 0x7f51b5c29210 (unknown) (unknown) https://symbolize.stripped_domain/r/?trace=7f51b5c2918b,7f4f86e6d7ff,7f51b5c2920f&map=2a762cd764e70bc90ae4c7f9747c08d7:7f4f79f2b000-7f4f871ac280 E0628 16:55:18.669807 14088 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked. E0628 16:55:18.669833 14088 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start. E0628 16:55:18.669843 14088 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec. E0628 16:55:18.669852 14088 coredump_hook.cc:447] RAW: Sending fingerprint to remote end. E0628 16:55:18.669864 14088 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket E0628 16:55:18.669874 14088 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running? E0628 16:55:18.669881 14088 coredump_hook.cc:525] RAW: Discarding core. E0628 16:55:18.673655 14088 process_state.cc:771] RAW: Raising signal 6 with default behavior Aborted (core dumped) Any ideas about what is causing this?
This is an out-of-memory error for the TPU cores; try smaller batch sizes (1 for starters) or reduce the data/model size.
0
huggingface
🤗Transformers
[HELP]Bart summarization output exactly the same as labels
https://discuss.huggingface.co/t/help-bart-summarization-output-exactly-the-same-as-labels/8953
Hii, I’m trying to finetune BART on summariztion task using Tensorflow TPU. I first tokenized the data, stored them in *.tfrecords using the datasets export() function, then, created TF Datasets using them, I have given the preprocessing and finetuning code below. Problem: I am getting exact copy of labels as outputs. Its like BART is not learning anything. eg: ['The ICSI Meeting Recorder Dialog Act (MRDA) Corpus\nWe describe a new corpus of over 180,000 handannotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturally-occurring meetings.\nWe provide a brief summary of the annotation system and labeling procedure, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary files distributed with the corpus, and information on how to obtain the data.', 'Templates-Based Information Extraction without the Templates\nStandard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract slot fillers (e.g., a template).\nThis paper describes an approach to template- based IE that removes this requirement and performs extraction without knowing the template structure in advance.\nOur algorithm instead learns the template structures automatically from raw text, inducing template Schema schemas as sets of linked events associated with semantic roles.\nWe also solve', 'Get Out The Vote: Determining Support Or Opposition From Congressional Floor-Debate Transcripts\nWe investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation.\nTo address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another.\nWe find that the incorporation of such information yields substantial improvements over classifying speeches'] ['The ICSI Meeting Recorder Dialog Act (MRDA) Corpus\nWe describe a new corpus of over 180,000 hand-annotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturally-occurring meetings.\nWe provide a brief summary of the annotation system and labeling procedure, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary files distributed with the corpus, and information on how to obtain the data.', 'Template-Based Information Extraction without the Templates\nStandard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template).\nThis paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance.\nOur algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic', 'Get Out The Vote: Determining Support Or Opposition From Congressional Floor-Debate Transcripts\nWe investigate whether one can determine from the transcripts of U.S. 
Congressional floor debates whether the speeches represent support of or opposition to proposed legislation.\nTo address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another.\nWe find that the incorporation of such information yields substantial improvements over classifying speeches in isolation.\nWe present a method based on support'] Validation results:--- {'rouge1': 78.3223, 'rouge2': 72.4416, 'rougeL': 76.0222, 'rougeLsum': 77.904} I am using facebook/bart-large-cnn as my checkpoint. At first, I was getting gibberish output. So, I passed in BartConfig.from_pretrained("facebook/bart-large-cnn") to the model, and it started copying the labels. I searched on the forum and found this thread, @valhalla suggested using prepare_seq2seq_batch() at that time. Now, since it is going to be deprecated in transformers 5.x, and the suggested way is to use tokenizer.as_target_tokenizer(), I did that. The code: class Config: num_epochs=3 train_batch_size=2 val_batch_size=4 test_batch_size=4 learning_rate=2e-5 num_warmup_steps=0 num_beams=4 max_input_length=1024 max_target_length=128 val_max_target_length=None ignore_pad_token_for_loss=True padding="max_length" train_data_len=None valid_data_len=None test_data_len=None num_val_take=6 num_test_take=6 num_val_examples=num_val_take * val_batch_size * REPLICAS num_test_examples=num_test_take * test_batch_size * REPLICAS def read_tfrecord(example, max_input_length, max_target_length): feature_description = { 'input_ids': tf.io.FixedLenFeature([max_input_length], tf.int64, default_value=[0]*max_input_length), 'attention_mask': tf.io.FixedLenFeature([max_input_length], tf.int64, default_value=[0]*max_input_length), 'decoder_input_ids': tf.io.FixedLenFeature([max_target_length], tf.int64, default_value=[0]*max_target_length), 'decoder_attention_mask': tf.io.FixedLenFeature([max_target_length], tf.int64, default_value=[0]*max_target_length), 'labels': tf.io.FixedLenFeature([max_target_length], tf.int64, default_value=[0]*max_target_length), } example = tf.io.parse_single_example(example, feature_description) return example, example["labels"] def preprocess_function(examples, article_column="article", summary_column="target", max_input_length=1024, max_output_length=128, prefix="summarize: "): inputs = [prefix + article for article in examples[article_column]] tokenized_inputs = tokenizer(inputs, max_length=max_input_length, padding="max_length", truncation=True) with tokenizer.as_target_tokenizer(): tokenized_outputs = tokenizer(examples[summary_column], max_length=max_output_length, padding="max_length", truncation=True) return {"input_ids": tokenized_inputs["input_ids"], "attention_mask": tokenized_inputs["attention_mask"], "decoder_input_ids": tokenized_outputs["input_ids"], "decoder_attention_mask": tokenized_outputs["attention_mask"], "labels": tokenized_outputs["input_ids"]} model_config = AutoConfig.from_pretrained("facebook/bart-large-cnn") tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) with strategy.scope(): model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint, config=model_config) model.compile(optimizer=optimizer, loss={"logits": masked_sparse_categorical_crossentropy}) metric = load_metric("rouge") def postprocess_text(preds, labels): preds = [pred.strip() for pred in preds] labels = [label.strip() for label 
in labels] # rougeLSum expects newline after each sentence preds = ["\n".join(nltk.sent_tokenize(pred)) for pred in preds] labels = ["\n".join(nltk.sent_tokenize(label)) for label in labels] return preds, labels def eval_fn(model, tokenizer, tokenized_tf_dataset, tf_dataset_take_size, pre_train_val_check=False): if Config.val_max_target_length is None: Config.val_max_target_length = Config.max_target_length gen_kwargs = { "max_length": Config.val_max_target_length, "num_beams": Config.num_beams, } if pre_train_val_check: # Checks Validation Loop before starting fine-tuning tokenized_tf_dataset = tokenized_tf_dataset.take(2) total = 2 else: total = tf_dataset_take_size decoded_labels = None decoded_pred = None for batch, labels in tqdm( tokenized_tf_dataset, total=total, unit="batchs" ): temp_batch = { "input_ids": batch["input_ids"], #"attention_mask": batch["attention_mask"], } temp_batch.update(gen_kwargs) generated_tokens = model.generate(**temp_batch) if isinstance(generated_tokens, tuple): generated_tokens = generated_tokens[0] decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels) metric.add_batch(predictions=decoded_preds, references=decoded_labels) result = metric.compute(use_stemmer=True) # Extract a few results from ROUGE result = {key: value.mid.fmeasure * 100 for key, value in result.items()} result = {k: round(v, 4) for k, v in result.items()} if pre_train_val_check: result_integrity = np.array([False if v == 0 else True for k, v in result.items()]) if False in result_integrity: print("Result: ", result) print("Result integrity failed") return False else: print(result) print("Valiation Epoch working correctly....") return True else: return result Thanks
Try increasing the number of epochs; maybe that will help. I don't have any experience with this particular model, but whenever a model doesn't predict well, increasing the number of epochs almost always helps.
0
huggingface
🤗Transformers
Batch size for trainer.predict()
https://discuss.huggingface.co/t/batch-size-for-trainer-predict/3374
Hi, I pass a test dataset to trainer.predict, but I have many samples, so I get a memory error. Does the library support a batch-based way of running trainer.predict, or do I have to implement it myself?
You can pass eval_accumulation_steps=xxx to pass the predictions to the CPU every xxx steps, this should help.
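A minimal sketch of where that argument goes (the values are placeholders to tune for your memory budget, and model/test_dataset come from your own code):
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    per_device_eval_batch_size=8,
    eval_accumulation_steps=20,  # move accumulated predictions from GPU to CPU every 20 steps
)
trainer = Trainer(model=model, args=training_args)
predictions = trainer.predict(test_dataset)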
0
huggingface
🤗Transformers
Num_labels creates an error for some models
https://discuss.huggingface.co/t/num-labels-creates-an-error-for-some-models/8937
Hello, I am training a classifier for three classes (bad, medium, good) using distilbert-base-uncased-finetuned-sst-2-english but after saving the model to disk and re-loading it I am getting a strange error from transformers import AutoTokenizer, TFAutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english") model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english") tokenizer.save_pretrained('finetuned') model.save_pretrained('finetuned') model.from_pretrained('finetuned', num_labels = 3) ValueError: cannot reshape array of size 1536 into shape (768,3) This approach worked with bert-base-uncased. Am I doing something wrong here? Thanks!
With the latest version installed, you need to add ignore_mismatched_sizes=True to your from_pretrained call for this to work. Otherwise, you try to load a model with 2 labels inside a model with 3 labels, and you get mismatched sizes like that.
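A minimal sketch, assuming a recent transformers version that supports this flag:
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained(
    'finetuned',
    num_labels=3,
    ignore_mismatched_sizes=True,  # discard the saved 2-label head and create a fresh 3-label one
)
Note that the new classification head is randomly initialized, so it still needs to be trained on your 3-class data.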
0
huggingface
🤗Transformers
Transformers suddenly complaining about pytorch?
https://discuss.huggingface.co/t/transformers-suddenly-complaining-about-pytorch/8936
Hello the team, Huggingface is suddenly complaining about pytorch … but I have been only using tensorflow all along so far! Do you know what could cause this issue? from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english") model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english") ImportError: AutoModelForSequenceClassification requires the PyTorch library but it was not found in your environment. Checkout the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. Thanks!
If you want to use tensorflow models, try TFAutoModelForSequenceClassification module.
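For example, the TensorFlow counterpart of the snippet above:
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")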
0
huggingface
🤗Transformers
Duplicate hyperparameter tuning trials with Ray and `hyperparameter_search`
https://discuss.huggingface.co/t/duplicate-hyperparameter-tuning-trials-with-ray-and-hyperparameter-search/8922
I’m using hyperparameter_search for hyperparamter tuning with the following configurations: def tune_config_ray(trial): return {"learning_rate": tune.choice([5e-5, 4e-5, 3e-5, 2e-5]), "num_train_epochs": tune.choice([4]), "per_device_train_batch_size": tune.choice([16]) } best_trial = trainer.hyperparameter_search(hp_space=tune_config_ray, backend='ray', direction='maximize', n_trials=4, ) Based on the config, there are 4 unique combinations for the learning_rate, num_train_epochs, and per_device_train_batch_size. However, when I run the tuning (as you can see below), I see some duplicates among the trials. I wonder why this happens and how I can have non-duplicate trials? Is this possibly because ray is also tuning some other hyperparameters that are not listed in the report and that is why I see some duplicates? +------------------------+----------+-------+-----------------+--------------------+-------------------------------+ | Trial name | status | loc | learning_rate | num_train_epochs | per_device_train_batch_size | |------------------------+----------+-------+-----------------+--------------------+-------------------------------| | _objective_7efa7_00000 | RUNNING | | 3e-05 | 4 | 16 | | _objective_7efa7_00001 | PENDING | | 2e-05 | 4 | 16 | | _objective_7efa7_00002 | PENDING | | 5e-05 | 4 | 16 | | _objective_7efa7_00003 | PENDING | | 3e-05 | 4 | 16 | +------------------------+----------+-------+-----------------+--------------------+-------------------------------+
cc @richardliaw
0
huggingface
🤗Transformers
Saving checkpoints in drive
https://discuss.huggingface.co/t/saving-checkpoints-in-drive/8865
from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="/gdrive/MyDrive/Thesis/GPT2/checkpoints", overwrite_output_dir=False, num_train_epochs=5, per_device_train_batch_size=6, #previous was 6 save_steps=100, save_total_limit=5, fp16 = True, dataloader_drop_last=True, #evaluate_during_training=True, warmup_steps=200 ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, # prediction_loss_only = True ) trainer.train() I want to save the checkpoints directly to my google drive. The problem is the code above saves my checkpoints upto to save limit all well. But after the limit it can’t delete or save any new checkpoints. Although it says checkpoints saved/deleted in the console. Any help?
I think you just need to wait a bit and they show up in Drive; that's what happened with me.
0
huggingface
🤗Transformers
How to save the best trial’s model using `trainer.hyperparameter_search`
https://discuss.huggingface.co/t/how-to-save-the-best-trials-model-using-trainer-hyperparameter-search/8783
I’m using hyperparameter_search for hyperparameter tuning in the following way: trainer = Trainer( model_init=model_init, args=training_args, train_dataset=train_set, eval_dataset=dev_set, tokenizer=tokenizer, compute_metrics=compute_metrics, ) best_trial = trainer.hyperparameter_search( backend="ray", direction='maximize', n_trials=10, ) Everything’s working well and I can see the information for the best trial in the best_trial. However, my question is how can I save the actual best model from the best trial? I tried saving the model using the trainer’s save_model like trainer.save_model(path/to/a/folder), but I get the following error: trainer.save_model(path/to/a/folder) File "/home/ubuntu/anaconda3/envs/ccr/lib/python3.6/site-packages/transformers/trainer.py", line 1885, in save_model self._save(output_dir) File "/home/ubuntu/anaconda3/envs/ccr/lib/python3.6/site-packages/transformers/trainer.py", line 1930, in _save state_dict = self.model.state_dict() AttributeError: 'NoneType' object has no attribute 'state_dict' It looks like the trainer does not have the actual best model found as a result of hyperparameter tuning (?). My goal is simple, I basically want to use the best model from hyperparameter tuning to evaluate it on my final test set. But I can’t find a way to save the best model from hyperparameter tuning. Also, someone may say I can get the info from the best trial and fine-tune the model again, but I don’t want to do that and I just simply want to get the model from the hyperparameter tuning. Is there any way to do that? Thanks.
@sgugger Any thoughts on this?
0
huggingface
🤗Transformers
Need help on wav2vec 2.0 models training
https://discuss.huggingface.co/t/need-help-on-wav2vec-2-0-models-training/8880
Hello guys, I’m using the transformers library and I want to build speech recognition systems based on wav2vec 2.0, and I have a few problems. Based on the example here (Wav2Vec2 — transformers 4.7.0 documentation 2) I have tried to pretrain a model, but in the fairseq wav2vec GitHub repo there is a loss object at the end that I can call a backward method on, and I can't figure out how to do the same here. Also, how can I perform batch training with wav2vec? Please, I need help, even just pointers to other documentation or examples.
Try this: https://github.com/huggingface/blog/blob/master/fine-tune-wav2vec2-english.md 12
0
huggingface
🤗Transformers
Custom SQuAD2.0 dataset gives an error when using run_qa.py script
https://discuss.huggingface.co/t/custom-squad2-0-dataset-gives-an-error-when-using-run-qa-py-script/8813
Hello, I am trying to follow the PyTorch Question Answering example 4. However, when running the run_qa.py script using my own (Dutch machine-translated) SQuAD train and test files (JSON), I get the following error: pyarrow.lib.ArrowInvalid: cannot mix list and non-list, non-null values. I use the following hyperparameters: python run_qa.py \ --model_name_or_path GroNLP/bert-base-dutch-cased \ --version_2_with_negative \ --do_train \ --do_eval \ --train_file "C:\Users\myname\data\squad\nl_squad_train_clean.json" \ --test_file "C:\Users\myname\data\squad\nl_squad_dev_clean.json" \ --per_device_train_batch_size 12 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps=800 \ --output_dir ../output When replacing the train and test by --dataset_name squad it works fine. What could be the problem with my own SQuAD files? Thanks in advance! Cheers!
Hey @julifelipe, my guess is that you have your SQuAD data in the conventional nested JSON format, while run_qa.py expects the examples to be line-delimited JSON. You can find a simple function to do the conversion here: Question answering bot: fine-tuning with custom dataset - #2 by lewtun 4
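A rough sketch of such a conversion (field names assume the standard SQuAD 2.0 layout; adapt as needed for your translated files):
import json

def squad_to_jsonl(in_path, out_path):
    # Read the nested SQuAD-style file and write one example per line
    with open(in_path, encoding="utf-8") as f:
        squad = json.load(f)
    with open(out_path, "w", encoding="utf-8") as f:
        for article in squad["data"]:
            for paragraph in article["paragraphs"]:
                for qa in paragraph["qas"]:
                    answers = qa.get("answers", [])
                    example = {
                        "id": qa["id"],
                        "title": article.get("title", ""),
                        "context": paragraph["context"],
                        "question": qa["question"],
                        "answers": {
                            "text": [a["text"] for a in answers],
                            "answer_start": [a["answer_start"] for a in answers],
                        },
                    }
                    f.write(json.dumps(example, ensure_ascii=False) + "\n")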
0
huggingface
🤗Transformers
Tokenizer from tokenizers library cannot be used in transformers.Trainer
https://discuss.huggingface.co/t/tokenizer-from-tokenizers-library-cannot-be-used-in-transformers-trainer/8773
Hi, I am trying to train my own model with Trainer with a pre-trained SentencePieceBPETokenizer from tokenizers library. However, it is missing several attributes as well as methods (e.g., pad ), which makes it incompatible with transformers.Trainer . Is there an easy way to convert it to PretrainedTokenizer from transformers ? Thanks!
If you want a SentencePiece tokenizer, you should train it with the sentencepiece library, then pass the trained model file as a parameter to the desired tokenizer class (T5, BART, etc.). By doing this the vocab will be yours, and the tokenizer will handle the padding; I am not sure whether it will handle the special tokens, though.
0
huggingface
🤗Transformers
Feed output from one transformer model as input to another
https://discuss.huggingface.co/t/feed-output-from-one-transformer-model-as-input-to-another/8759
So I am trying to train an automated essay scoring system, that combines the loss of predicting scores with predicting whether a sentence is grammatically correct. To do this I have split each sentence in the essay with a sep and a cls token so that an essay is fed into Bert like this: essay 1 : [cls] … sent 1 … [sep][cls] … sent 2 … [sep][cls] … sent 3 … [sep][cls] … etc essay 2 : [cls] … sent 1 … [sep][cls] … sent 2 … [sep][cls] … sent 3 … [sep][cls] … etc As well as a list of labels for each sentence whether it contains a grammatical error or not and a score for the essay i.e essay 1 : labels: [1,0,0,1,etc…],score:38 essay 2 : labels: [1,1,0,1,etc…],score:24 (So that the list of labels can be passed into a dataset they must have the same length as the input_ids, attention_masks etc. so I have padded them with 2’s at each word that is not a cls token, so they actually look something like this… [1,2,2,0,2,2,2,0,2,2,2,2,1,etc…] which would correspond to a sent [cls,w,w,cls,w,w,w,cls,w,w,w,w,cls…] (w=word) I then use this to get the index of the cls tokens in each essay. So the performance of the model on the grammatical error detection is comparable to feeding in a sentence normally to bert: sent 1 : [cls] … sent 1 … [sep] sent2 : [cls] … sent 2 … [sep] However, unsurprisingly the additional sep and cls tokens decrease the model performance when compared to feeding an essay into Bert normally: essay 1 : [cls] … essay 1 … [sep] essay 2 : [cls] … essay 2 … [sep] So to combat this I am trying to use the vector representation of each cls token in the output of each essay as input to another smaller transformer model as done in this paper here https://arxiv.org/pdf/1903.10318.pdf). However I cannot figure out how to do this, I have tried feeding the output into a embedding layer with input_embeds = True but this does not work. 
My is a simplified version of my code so far with just a mini batch I have used as a test: # encoded_dataset_train is my training dataset type = (datasets.arrow_dataset.Dataset) mini_batch = encoded_dataset_train[:2] print(mini_batch) '''{'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]]), 'input_ids': tensor([[ 0, 23314, 5348, ..., 1, 1, 1], [ 0, 1360, 73, ..., 1, 1, 1]]), 'label': tensor([[1, 2, 2, ..., 2, 2, 2], [1, 2, 2, ..., 2, 2, 2]]), 'score': tensor([31, 23])}''' from transformers import AutoModel model = AutoModel.from_pretrained('distilroberta-base') # get the output of the final attention layer of the model input_ids = mini_batch['input_ids'] attention_mask = mini_batch['attention_mask'] output = model(input_ids=input_ids,attention_mask=attention_mask) print(output.shape) ''' torch.Size([2, 512, 768]) ''' # get a mask where each cls token in each batch is 1 and 0 for all other tokens bs,tok_len = mini_batch['label'].shape active_labels = torch.where(mini_batch['label']<2,1,0).reshape(bs,tok_len,1)expand(active_labels.shape[0],active_labels.shape[1],768) print(active_labels.shape) ''' torch.Size([2, 512, 768]) ''' # multiply the output by the mask active_loss = output*active_labels print(active_loss.shape) ''' torch.Size([2, 512, 768]) ''' # Create my smaller model (so far only extracted the layers, I want have not combined them yet) embeds = model.embeddings layers = model.encoder.layer[:2] classifier = model.classifier # Result for trying to pass the output into the embedding layer embeds.forward(input_ids=active_loss,inputs_embeds=True) ''' RuntimeError: The size of tensor a (512) must match the size of tensor b (768) at non-singleton dimension 2 ''' Cheers in advance
Is this possible or do I have to perform pooling at each sentence vector to get single value to represent that sentence and use that as an input?
0
huggingface
🤗Transformers
How to monitor both train and validation metrics at the same step?
https://discuss.huggingface.co/t/how-to-monitor-both-train-and-validation-metrics-at-the-same-step/1301
Hi, I am finetuning BertForSequenceClassification using run_glue.py and I would like to output every logging_steps all the performance metrics of my model. Currently, in the logs I see entries like {'loss': 0.1867809295654297, 'learning_rate': 1.3071895424836603e-07, 'epoch': 2.980392156862745, 'total_flos': 2121344853980160, 'step': 456} for the training loss and {'eval_loss': 0.4489470714636048, 'eval_mcc': 0.6251852674757565, 'epoch': 3.0, 'total_flos': 2133905557962240, 'step': 459} for the evaluation loss. They are being printed separately, with validation losses in output only at the end of the run. Where is the actual logging taking place in trainer.py? I’d like to know that so that I can output a single dictionary containing all the metrics. I am using transformers 3.3.0 and run_glue.py with the flag --evaluation_strategy steps, setting low values =32 for both --logging_steps and --eval_steps. I am confused because evaluation using the validation set doesn’t seem to occur every eval_steps. I revised Trainer doesn't show the loss at each step 23 but I am still not sure about how to do this.
Hi @davidefiocco, logging_steps and eval_steps have different meanings: logging_steps only logs the train loss, lr, epoch, etc., and not the metrics, while eval_steps logs the metrics on the validation set. Here the steps refer to actual optimization steps, so if you are using 2 gradient accumulation steps and your batch size is 4, then 1 optimization step is equal to 8 trainer steps; in that case, if your eval_steps is 2, the metrics will be logged at trainer step 16. In the latest version, if eval_steps is not specified, it'll be set to logging_steps by default. Logging is done in the log method here 74 and it's invoked here 37 and here 24 in the train method. Hope this helps.
0
huggingface
🤗Transformers
What does “generate_with_predict=True” actually do?
https://discuss.huggingface.co/t/what-does-generate-with-predict-true-actually-do/8685
Hello, I am currently trying to finetuning T5 for summarization task using PyTorch/XLA, and I want to know what is the purpose of generate_with_predict. I saw the documentation and know its supposed to be used with ROUGE/BLEU. But, I am confused what it actually does. If I give generate_with_predict=True, then, will the output be decoded on its own and the metric will be directly calculated if I pass it like given below: from datasets import load_metric rouge_metric = load_metric("rouge") . . . trainer = Seq2SeqTrainer( model=model, args=training_args, compute_metrics=rouge_metric, train_dataset=train_dataset, eval_dataset=dev_dataset, data_collator=data_collator, ) Or, Do I have to wrap rouge inside another function like below, and then pass it: def compute_metrics(eval_pred): predictions, labels = eval_pred decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) # Replace -100 in the labels as we can't decode them. labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) # Extract a few results result = {key: value.mid.fmeasure * 100 for key, value in result.items()} return {k: round(v, 4) for k, v in result.items()} PS: While using wrapper method, colab/kaggle crashes due to RAM usage exceeding 100%. Even when I use TPU.
Look at the example notebook 12 or the example script 9 for summarization. You will see you have to pass along the latter.
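As a sketch of how the pieces fit together (this assumes a Seq2SeqTrainer setup, and model/datasets/data_collator/tokenizer/compute_metrics are the objects from your own code): predict_with_generate=True makes the trainer call model.generate() during evaluation, and those generated token ids are what your compute_metrics wrapper then decodes and scores with ROUGE:
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="output",
    predict_with_generate=True,  # generate sequences at eval time so ROUGE can be computed
)
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=dev_dataset,
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,  # the wrapper function, not the raw rouge_metric
)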
0
huggingface
🤗Transformers
How to make `pipeline` automatically scale?
https://discuss.huggingface.co/t/how-to-make-pipeline-automatically-scale/7432
Hi, I have trained a model for text classification. When I load it using the transformers’ pipeline, it just works well. The problem comes when I give as input a very big list of sentences: I get a CUDA out of memory error. When I take each example one by one in a for loop, I don’t get this error. Is there an option to pass when instantiating the pipeline() object that enables to make predictions on a very large sequence automatically (for example by setting a batch size and iterating through the batches)? Or do I have to code this myself? @sgugger Thanks.
No you will have to code one yourself, the pipeline API is not designed to handle a large number of inputs automatically.
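A minimal sketch of batching the inputs yourself (the model path, task name and batch size are placeholders for your own setup):
from transformers import pipeline

classifier = pipeline("text-classification", model="path/to/finetuned-model", device=0)

def predict_in_batches(sentences, batch_size=32):
    # Feed the pipeline a manageable chunk of sentences at a time
    results = []
    for i in range(0, len(sentences), batch_size):
        results.extend(classifier(sentences[i:i + batch_size]))
    return results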
0
huggingface
🤗Transformers
Generate logits from hidden state embeddings and decoder weights
https://discuss.huggingface.co/t/generate-logits-from-hidden-state-embeddings-and-decoder-weights/8734
Hi, I am trying to compute prediction_logits using BertForPreTraining model. For some reason, I don’t want to use outputs.prediction_logits and I want to be able to generate them by multiplying the last hidden state with decoder weights. The problem is that when I do this the results I get are not equal to outputs.prediction_logits. Here is the code: model = BertForPreTraining.from_pretrained("bert-base-multilingual-cased", output_hidden_states=True).to(device) w = model.state_dict()['cls.predictions.decoder.weight'].cpu().numpy() b = model.state_dict()['cls.predictions.decoder.bias'].cpu().numpy() with torch.no_grad(): outputs = model(**inputs) output_logits = outputs.prediction_logits.cpu().numpy() last_hidden_states = outputs.hidden_states[-1].cpu().numpy() preds = output_logits[i, token_idx] h = last_hidden_states[i, token_idx] h_transformed = np.dot(w, h) + b Basically, I expect h_transformed to be equal to preds, but it is not. Thanks for your help
I suspect dropout might be to blame here. You can create a small fully connected layer with dropout, and initialize it with decoder weights to use instead.
0
huggingface
🤗Transformers
Wav2vec2 finetuning custom dataset
https://discuss.huggingface.co/t/wav2vec2-finetuning-custom-dataset/8408
Hello, Thank you for sharing such a nice model again in this framework. I am trying to finetune a wav2vec2 model on a custom dataset (so not from the Hugging Face datasets package). I have tried to follow these two tutorials: Fine-Tune Wav2Vec2 for English ASR in Hugging Face with 🤗 Transformers 12 and Fine-tuning with custom datasets — transformers 4.7.0 documentation 20, but I did not find how to use multiprocessing when using Trainer on a custom dataset. Should I use the DataLoader class from torch? For now I am using the regular Dataset class from torch. I also encountered memory issues on the GPU (16 GB) with a base wav2vec2 model, even with batch size = 1. What is the maximum batch size for a base and a large model on 16 GB, and for what sample length (using fp16)? Thank you for the help.
You can convert your custom dataset, if it's a dataframe, into the Hugging Face format using this:
from datasets import Dataset, load_metric
train_data = Dataset.from_pandas(train_df)
test_data = Dataset.from_pandas(test_df)
If you are facing a memory-related problem, set num_proc = 1; that solved my problem.
0
huggingface
🤗Transformers
How to customize dataloader creation in trainer?
https://discuss.huggingface.co/t/how-to-customize-dataloader-creation-in-trainer/8681
Is there a way I could re-create the dataloader after each epoch through a custom function when using trainer?
No, you should probably use a manual training loop for that, using the Accelerate library.
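A rough sketch of such a manual loop with Accelerate (model, optimizer, num_epochs and build_dataset_for_epoch are assumed to exist in your code), re-creating the DataLoader at the start of every epoch:
from accelerate import Accelerator
from torch.utils.data import DataLoader

accelerator = Accelerator()
model, optimizer = accelerator.prepare(model, optimizer)

for epoch in range(num_epochs):
    # Build a fresh dataloader however you like at each epoch
    dataloader = DataLoader(build_dataset_for_epoch(epoch), batch_size=8, shuffle=True)
    dataloader = accelerator.prepare(dataloader)
    for batch in dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()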
0
huggingface
🤗Transformers
Distributed Training w/ Trainer
https://discuss.huggingface.co/t/distributed-training-w-trainer/8151
Does anyone have an end-to-end example of how to do multi-gpu, multi-node distributed training using the trainer? I can’t seem to find one anywhere.
All the examples using the Trainer run in multi-gpu multi-node, you just have to use the PyTorch launcher to properly launch a multi-GPU multinode training.
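For example (the addresses, port and script arguments below are placeholders), on each of two nodes with 8 GPUs you would run something like: python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=<ip_of_node_0> --master_port=29500 run_summarization.py <your usual script arguments>, changing --node_rank to 1 on the second node. The Trainer detects the distributed environment set up by the launcher and handles the rest.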
0
huggingface
🤗Transformers
BERT for Generative Chatbot
https://discuss.huggingface.co/t/bert-for-generative-chatbot/8583
Creating generative chatbot with BertGenerationEncoder and BertGenerationDecoder like: encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102) # add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token decoder = BertGenerationDecoder.from_pretrained("bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102) bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder) bert2bert.to(device) # create tokenizer... tokenizer = BertTokenizer.from_pretrained("bert-large-uncased") and training the model like: progress_bar = tqdm(range(num_training_steps)) bert2bert.train() for epch in range(num_epochs): for i in range(len(query)): input_ids = tokenizer(query[i], add_special_tokens=False, return_tensors="pt").input_ids labels = tokenizer(response[i], add_special_tokens = False, return_tensors="pt").input_ids loss = bert2bert(input_ids=input_ids.to(device), decoder_input_ids=labels.to(device), labels=labels.to(device)).loss loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) progress_bar.set_postfix_str(f'Loss: {loss.item():.5f}') Here query = [‘Hi’, ‘How are you’] response = [‘Hello’, ‘I’m good’] After training the model, getting strange generations like: input_ids = tokenizer('Hi', add_special_tokens=False, return_tensors="pt").input_ids #bert2bert.eval() outputs = bert2bert.generate(input_ids.to(device)) print(tokenizer.decode(outputs[0])) [CLS] is is is is is is is is is is is is is is is is is is is Kindly suggest where is the flaw. Thanks
I’m not particularly familiar with BERT-based generation models but in general, .generate(input_ids, do_sample=True) is an easy way to diversify the output of the model. Bear in mind that the output text can still lack coherence but it will less likely be so repetitive. You can also experiment with the repetition_penalty argument of the generate method as explained here
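A minimal sketch of what that looks like for the snippet above (bert2bert, input_ids, device and tokenizer are the objects from your own code, and the sampling values are arbitrary starting points to experiment with, not recommended settings):
outputs = bert2bert.generate(
    input_ids.to(device),
    do_sample=True,          # sample instead of greedy decoding
    top_k=50,
    repetition_penalty=1.3,  # discourage repeated tokens
    max_length=32,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))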
0
huggingface
🤗Transformers
How to convert wav2vec2 checkpoint to Huggingface processor and model?
https://discuss.huggingface.co/t/how-to-convert-wav2vec2-checkpoint-to-huggingface-processor-and-model/7844
Hi, I have a finetuned checkpoint.pt, which I trained using fairseq repo’s CLI tools. Now I want to infer this model with Huggingface Transformers library. Like this example from here 7 - processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") How can I do that? Any help appreciated. Thanks.
Did you figure this out? I am stuck at the same problem.
0
huggingface
🤗Transformers
How to measure accuracy while fine-tuning bert-base model?
https://discuss.huggingface.co/t/how-to-measure-accuracy-while-fine-tuning-bert-base-model/7618
Hello everybody, While I am fine-tuning the ‘dbmdz/bert-base-turkish-uncased’ model, I can see the loss value during training as below:
outputs = model(b_input_ids, attention_mask=b_input_mask, labels=b_labels)
loss = outputs[0] # get loss
but I could not measure the accuracy value while training the model. Any suggestions, please?
I’m not sure if it’s possible to print metrics with that object, but with model.fit it automatically displays the metrics as in TensorFlow. Check out the course page 14 However, I don’t know if it’s possible to include the attention layers here as you do in your example. model.fit( x=training['train']['input_ids'], y=training['labels_train'], validation_data=( validation['valid']['input_ids'], validation['labels_test'], ) ,batch_size = batch_size ,epochs = epochs )
0
huggingface
🤗Transformers
DeepSpeed with GPT2-XL on Colab
https://discuss.huggingface.co/t/deepspeed-with-gpt2-xl-on-colab/3336
Was anyone able to fine-tune GPT2-XL (or similar models with similar sizes) on COLAB with Deepspeed enabled? I tried it with a V100 and 25GB RAM instance, w/ and w/out cpu offloading and fp16 with a batch size of 1 but it is still giving OOM on the GPU side and CPU side.
I’m encountering the same problem with gpt-neo. Have you found any solutions?
0
huggingface
🤗Transformers
Train GPT2 from scratch (Tensorflow) - Loss function
https://discuss.huggingface.co/t/train-gpt2-from-scratch-tensorflow-loss-function/4165
Hello. I’m trying to train a GPT2 model (actually GPT2LMHeadModel) using tensorflow2. I found this post 35 where the author shows how to do it with great detail. However, there’s a special aspect regarding the definition of the loss function when compiling the model. Initially, when I was trying to implement it on my own, I defined the loss function as usual for the Keras models: model.compile(optimizer=optimizer, loss=loss_fn, metrics=[metric]) However, when trying to execute the fit method, it threw an error: history = model.fit(dataset, epochs=EPOCHS) ValueError: Shape mismatch: The shape of labels (received (490,)) should equal the shape of logits except for the last dimension (received (11760, 64)) In this case, the dateset corresponds to a tensorflow dataset, which shape is <TakeDataset shapes: ((10, 49), (10, 49)), types: (tf.int32, tf.int32)> (Batch size: 10, sequences length: 49). After that, I realized the loss function parameter is defined differently in the tutorial mentioned above. The loss parameter corresponds to a list, which actually happens to be a function followed by a collection of None values, depending on the number of layers defined for the model architecture. Under this approach, it works fine. image822×150 12.7 KB In both cases, the loss function corresponds to tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True). The documentation for TFGPT2LMHeadModel specifies: “The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).”. So, I think it makes sense to define the loss as that list-based parameter (Basically the loss function would work for the top layer). After seeing this, I have some questions: Is it correct to define the loss function that way? Does it have any implications for the inference process? Additionally, I would like to mention that I’ve also tried to train the model using the Trainer class, unfortunately, it throws a similar error when running the train method. training_args = TFTrainingArguments( output_dir='./results', # output directory num_train_epochs=EPOCHS, # total # of training epochs per_device_train_batch_size=10, # batch size per device during training per_device_eval_batch_size=10, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs) trainer = TFTrainer(model=model, args=training_args, train_dataset=dataset, eval_dataset=dataset) trainer.train() ValueError: Shapes (248040,) and (4410,) are incompatible I’m using transformers v 3.5.0 and tokenizers v 0.9.3 Sorry for the long post, thanks in advance for your help!
I am having the same question, hope someone could answer us
0
huggingface
🤗Transformers
Hidden states embedding tensors
https://discuss.huggingface.co/t/hidden-states-embedding-tensors/3549
I am trying to get the key and query vectors out of the Transformer layers but am confused by the docs regarding the embedding tensors provided for each layer. I have hidden_states=True. And the docs say: hidden_states ( tuple(torch.FloatTensor) , optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True ) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size) . Hidden-states of the model at the output of each layer plus the initial embedding outputs. Where are these embeddings coming from in each layer? At first I thought the embedding was from the token embedding layer as the very first input to the model but the embeddings across layers are not the same and it would not make sense to append this to the tuple for every layer. So where are these embeddings coming from? In addition are they at the 0 index of the tuple? The docs say " output of the embeddings + one for the output" implying index 0 but then say “model at the output of each layer plus the initial embedding outputs.” implying index 1. Bonus question: how can I get out the query vectors? If I have the keys and attentions then I think I can work out what the query vector is but is there an easier way? Thank you, Trenton
What it means is not “embeddings for each layer” but “output of embeddings” + “outputs of each layer”. So for a 12 layer model, you’d get the embedding output (1 tensor) + the outputs of the 12 following layers (12) = 13 tensors in total. You can get the attention values back (output_attentions=True), but AFAIK not the query vectors.
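A minimal sketch of indexing that tuple for a 12-layer model (model, input_ids and attention_mask stand for whatever model and encoded batch you are already using):
outputs = model(input_ids, attention_mask=attention_mask,
                output_hidden_states=True, output_attentions=True)

hidden_states = outputs.hidden_states   # tuple of 13 tensors: embeddings + 12 layers
embedding_output = hidden_states[0]     # output of the embedding layer (index 0)
first_layer_output = hidden_states[1]   # output of transformer layer 1
last_layer_output = hidden_states[-1]   # output of the final layer
attentions = outputs.attentions         # tuple of 12 attention tensors, one per layer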
0
huggingface
🤗Transformers
ValueError: `mask_length` has to be smaller than `sequence_length`, while finetuning Wav2vec2.0
https://discuss.huggingface.co/t/valueerror-mask-length-has-to-be-smaller-than-sequence-length-while-finetuning-wav2vec2-0/8295
As in the title, I’m getting a ValueError saying that the mask length is longer than the sequence length while fine-tuning wav2vec2 models. So far, I’ve tried wav2vec2-base and wav2vec2-large-xlsr-53, and the same error occurred for both of them. I am getting around the error by filtering out examples shorter than a certain length. It seems the error comes from the model’s feature extractor producing some sequences shorter than the mask length and failing in the forward pass of the training loop. Is this the desired behavior for wav2vec? It seems reasonable in some sense, as (English) words can’t be too short. However, because of this, I’m dropping >50% of my data. Did anyone have the same issue? If so, how did you get around this error? Below is the full traceback of the error:

Traceback (most recent call last):
  File "/home/admin/projects/voice-assessment/wav2vec/kr/run.py", line 165, in <module>
    trainer.train()
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/transformers/trainer.py", line 1269, in train
    tr_loss += self.training_step(model, inputs)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/transformers/trainer.py", line 1760, in training_step
    loss = self.compute_loss(model, inputs)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/transformers/trainer.py", line 1794, in compute_loss
    outputs = model(**inputs)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in replica 0 on device 0.

Original Traceback (most recent call last):
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1467, in forward
    outputs = self.wav2vec2(
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1067, in forward
    hidden_states = self._mask_hidden_states(hidden_states)
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 982, in _mask_hidden_states
    mask_time_indices = _compute_mask_indices(
  File "/home/admin/.miniconda3/envs/voice/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 146, in _compute_mask_indices
    raise ValueError(
ValueError: `mask_length` has to be smaller than `sequence_length`, but got `mask_length`: 10 and `sequence_length`: 9`
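For reference, a sketch of the filtering workaround mentioned in the question (the column name, threshold and sampling rate are assumptions; it presumes a datasets Dataset whose input_values column holds raw 16 kHz audio):

# Drop examples whose raw audio is shorter than a chosen minimum duration.
# The threshold is an assumption; in practice it should be long enough that the
# feature extractor's output is not shorter than the model's mask_length.
MIN_SECONDS = 1.0
SAMPLING_RATE = 16_000

def is_long_enough(example):
    return len(example["input_values"]) >= MIN_SECONDS * SAMPLING_RATE

filtered_dataset = dataset.filter(is_long_enough)  # dataset: the prepared training split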
cc @patrickvonplaten
0
huggingface
🤗Transformers
A potential in-place operation that caused an RuntimeError
https://discuss.huggingface.co/t/a-potential-in-place-operation-that-caused-an-runtimeerror/3276
Hi, I’m using transformers to generate sentences that carry gradients with model.generate, with the modification of removing @torch.no_grad() ahead of def generate(...), since the current version (4.2.1) of model.generate doesn’t support keeping gradients. And because I set do_sample=True and num_beams>1 in generate, the return type is BeamSampleEncoderDecoderOutput. According to the documents, the scores of BeamSampleEncoderDecoderOutput consist of the log softmax scores for each vocabulary token and the sum of the log softmax of previously generated tokens in this beam. What I want to do is gather the non-inf values from the last beam and apply gradient descent to train the network later. The key pseudo-code is:

outputs = self.generate(input_ids, ..., **model_kwargs)
# The type of outputs is BeamSampleEncoderDecoderOutput
scores = outputs.scores
last_step_score = scores[-1]
last_step_score = last_step_score[torch.where(last_step_score != -float('inf'))]
last_step_score = last_step_score[::num_beams]

However, when I run the program, I receive an error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [16, 50265]], which is output 0 of LogSoftmaxBackward, is at version 17; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

This means there is an in-place operation in generate, and I guess the in-place operation lies in BeamSearchScorer.finalize, but I can’t figure out what source code to change to make it viable.
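As the error hint suggests, a first debugging step is to enable anomaly detection before running the generation and backward pass above, so the RuntimeError also includes a traceback naming the forward operation whose output was modified in place; a minimal sketch:

import torch

# Ask autograd to record creation stack traces; call this once before the
# generate(...) call and the subsequent backward pass shown above.
torch.autograd.set_detect_anomaly(True)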
cc @patrickvonplaten
0
huggingface
🤗Transformers
Differences between Config.from_pretrained and Model.from_pretrained
https://discuss.huggingface.co/t/differences-between-config-from-pretrained-and-model-from-pretrained/8520
Hey there! I have a question regarding the differences between loading a multilingual BERT model from pretrained weights and from a pretrained Config: shouldn’t the two models defined below have the same weights?

from transformers import BertConfig, BertModel

mbert_model_1 = BertModel.from_pretrained("bert-base-multilingual-uncased")

mbert_config = BertConfig.from_pretrained("bert-base-multilingual-uncased")
mbert_model_2 = BertModel(mbert_config)

I have checked and they have the same architecture, but the layer weights (and the results obtained when using them) are different. Sorry if it’s a well-known question, but I had never loaded models from Configs and I’ve found this discrepancy. (I’ve looked for a previous question related to this topic but I haven’t found any.) Thanks for your help!
You should have a look at the relevant section in the course 3 and the corresponding video 2 where all of this is explained. The first model is initialized with the pretrained weights, the second is the same architecture but is initialized randomly.
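A small illustrative check that makes the difference visible by comparing one weight tensor between the two models:

import torch
from transformers import BertConfig, BertModel

pretrained = BertModel.from_pretrained("bert-base-multilingual-uncased")
config = BertConfig.from_pretrained("bert-base-multilingual-uncased")
random_init = BertModel(config)  # same architecture, randomly initialized weights

w1 = pretrained.embeddings.word_embeddings.weight
w2 = random_init.embeddings.word_embeddings.weight
print(torch.equal(w1, w2))  # False: the config carries the architecture, not the weights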
0
huggingface
🤗Transformers
Adjusting parameters for the FC layers at the end
https://discuss.huggingface.co/t/adjusting-parameters-for-the-fc-layers-at-the-end/8502
I am trying to fine-tune a pre-trained BigBird on a custom task; the pre-training dataset has about 25k samples for a model of 76M parameters, while the target dataset has about 800 samples. During fine-tuning, I am unable to get the loss to converge; it is highly volatile (it looks like a noisy sine wave). It seems that my model might be underfitting the dataset due to its high sequence length and/or complexity. For the BERT model at the core (architecture diagram omitted), the size or complexity of the ‘Linear’ block can be adjusted to accommodate tasks; however, the configuration accessible through Transformers only exposes:
- vocab_size (int, optional, defaults to 50358) – Vocabulary size of the BigBird model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BigBirdModel.
- hidden_size (int, optional, defaults to 768) – Dimension of the encoder layers and the pooler layer.
- num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.
- num_attention_heads (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.
- intermediate_size (int, optional, defaults to 3072) – Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
- hidden_act (str or function, optional, defaults to "gelu_new") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
- hidden_dropout_prob (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_probs_dropout_prob (float, optional, defaults to 0.1) – The dropout ratio for the attention probabilities.
There is no way to adjust that. Does anyone have any idea how I might be able to customize BigBird, or should I instead extract the model’s output for use in my own network?
Gm! From the docs, it seems that obtaining the sequence embeddings from the pre-trained model is pretty easy using the last_hidden_state or hidden_states returned by the BigBird model. However, I would still prefer a way to modify the size of the Linear layer in place, as that might be easier than using the embeddings and constructing another PyTorch model to interface with them.
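For what it’s worth, a rough sketch of the embeddings-plus-custom-head route (the head sizes, checkpoint, and two-class task are illustrative assumptions, not from this thread):

import torch
import torch.nn as nn
from transformers import BigBirdModel, BigBirdTokenizer

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
backbone = BigBirdModel.from_pretrained("google/bigbird-roberta-base")

# Hypothetical custom head standing in for the fixed-size linear block discussed above.
head = nn.Sequential(
    nn.Linear(backbone.config.hidden_size, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
)

inputs = tokenizer("A long document ...", return_tensors="pt")
with torch.no_grad():
    hidden = backbone(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
logits = head(hidden[:, 0])  # first-token representation; one design choice among many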
0
huggingface
🤗Transformers
Get output embeddings out of a transformer model
https://discuss.huggingface.co/t/get-output-embeddings-out-of-a-transformer-model/1219
Assuming that I am using a language model like BertForMaskedLM, how can I get the embeddings for each word in the sequence after passing the input_ids to the model? In the docs, I found the function get_output_embeddings, but it returns an nn.Module (a linear layer projecting onto the vocabulary). My full model is something similar to:

class BYOLLM(AlbertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.albert = AlbertModel(config)
        self.predictions = AlbertMLMHead(config)
        self.init_weights()
        self.tie_weights()
        self.config = config
        self.mlp = nn.Sequential(
            nn.Linear(config.hidden_size, 4096),
            nn.BatchNorm1d(4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, config.hidden_size),
        )

    def tie_weights(self):
        self._tie_or_clone_weights(
            self.predictions.decoder, self.albert.embeddings.word_embeddings
        )

    def get_output_embeddings(self):
        return self.predictions.decoder

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        labels=None,
        output_attentions=None,
        output_hidden_states=None,
        masked_index=None,
        **kwargs
    ):
        outputs = self.albert(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
        )
        sequence_outputs = outputs[0]
        prediction_scores = self.predictions(sequence_outputs)
        return prediction_scores

What I want is to access the prediction scores (logits) and get the corresponding embedding for a specific word. Getting the embedding is the part that I am asking about.
I have found this in the docs 16: hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). When using this option, I thought I should get a tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size), but after passing this to the model, the one for the output embedding has shape (1, hidden_size) instead of (1, seq_length, hidden_size). Note: the batch size is 1.
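A small, illustrative sanity check (assuming an ALBERT checkpoint) for telling the two tensors apart; hidden_states[0] has shape (batch_size, sequence_length, hidden_size), whereas the pooler output is the one of shape (batch_size, hidden_size):

import torch
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")

inputs = tokenizer("a short test sentence", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

print(outputs.hidden_states[0].shape)  # (1, seq_length, hidden_size): embedding output
print(outputs.pooler_output.shape)     # (1, hidden_size): easy to confuse with the above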
0
huggingface
🤗Transformers
Multiple training will give exactly the same result except for the first time
https://discuss.huggingface.co/t/multiple-training-will-give-exactly-the-same-result-except-for-the-first-time/8493
Hi, I have a function that loads a pre-trained model, fine-tunes it for sentiment analysis, then calculates the F1 score and returns the result. The problem is that when I call this function multiple times with the exact same arguments, it gives the exact same metric score (which is expected) except for the first time, which is different. How is that possible? This is my function, written based on this tutorial from Hugging Face:

import uuid
import numpy as np
from datasets import (
    load_dataset,
    load_metric,
    DatasetDict,
    concatenate_datasets
)
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    TrainingArguments,
    Trainer,
)

CHECKPOINT = "distilbert-base-uncased"
SAVING_FOLDER = "sst2"

def custom_train(datasets, checkpoint=CHECKPOINT, saving_folder=SAVING_FOLDER):
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)

    def tokenize_function(example):
        return tokenizer(example["sentence"], truncation=True)

    tokenized_datasets = datasets.map(tokenize_function, batched=True)
    data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

    saving_folder = f"{SAVING_FOLDER}_{str(uuid.uuid1())}"
    training_args = TrainingArguments(saving_folder)

    trainer = Trainer(
        model,
        training_args,
        train_dataset=tokenized_datasets["train"],
        eval_dataset=tokenized_datasets["validation"],
        data_collator=data_collator,
        tokenizer=tokenizer,
    )
    trainer.train()

    predictions = trainer.predict(tokenized_datasets["test"])
    print(predictions.predictions.shape, predictions.label_ids.shape)
    preds = np.argmax(predictions.predictions, axis=-1)

    metric_fun = load_metric("f1")
    metric_result = metric_fun.compute(predictions=preds, references=predictions.label_ids)
    return metric_result

And then I run this function several times with the same datasets and append the returned F1 score each time:

raw_datasets = load_dataset("glue", "sst2")
small_datasets = DatasetDict({
    "train": raw_datasets["train"].select(range(100)).flatten_indices(),
    "validation": raw_datasets["validation"].select(range(100)).flatten_indices(),
    "test": raw_datasets["validation"].select(range(100, 200)).flatten_indices(),
})

results = []
for i in range(4):
    result = custom_train(small_datasets)
    results.append(result)

And then when I check the results list:

[{'f1': 0.7755102040816325}, {'f1': 0.5797101449275361}, {'f1': 0.5797101449275361}, {'f1': 0.5797101449275361}]

Something that may come to mind is that when I load a pre-trained model, the head is initialized with random weights, and that is why the results are different; but if that is the case, why is only the first one different while the others are exactly the same?
You need to set the seed before instantiating your model; otherwise the random head is not initialized the same way, which is why the first run will always be different. The subsequent runs are all the same because the seed has been set by the Trainer in the train method. To set the seed:

from transformers import set_seed

set_seed(42)
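For example, in the custom_train function from the question, the call would go right before the model is created; a sketch:

from transformers import set_seed

def custom_train(datasets, checkpoint=CHECKPOINT, saving_folder=SAVING_FOLDER):
    set_seed(42)  # fixes the random initialization of the classification head
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    ...  # rest of the function unchanged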
1
huggingface
🤗Transformers
Distilbert Seq2clas
https://discuss.huggingface.co/t/distilbert-seq2clas/7323
Hello, I have two questions:
1. We can view the pooled layer by using output_hidden_states=True and following the logic, but is there a generic way to do so?
2. When we are doing inference, can we output the hidden_states from trainer.predict() as we do with model(input, output_hidden_states=True)?
Thanks
I have a similar doubt regarding point 2. I am working on question answering with DistilBERT. The predict function in the Trainer does not work if output_hidden_states=True; it works fine if the same argument is set to False. Is this a bug? If not, how is one supposed to use the model for prediction if output_hidden_states=True was set while initializing the model?
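One possible workaround (a sketch, not a confirmed fix for the Trainer behavior) is to leave output_hidden_states off at initialization and request the hidden states per forward call when running the trained model directly; the SQuAD-distilled checkpoint below is only a stand-in for your own fine-tuned model:

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased-distilled-squad")

inputs = tokenizer("Who wrote the note?", "The note was written by Jane.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

print(outputs.start_logits.shape, outputs.end_logits.shape)
print(len(outputs.hidden_states))  # embedding output + one tensor per layer (7 for DistilBERT)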
0
huggingface
🤗Transformers
What is the `tie_word_embeddings` option exactly doing?
https://discuss.huggingface.co/t/what-is-the-tie-word-embeddings-option-exactly-doing/8483
Hi, for some models there is this tie_word_embeddings parameter. I think it is for the text-to-text models. Can someone please explain what exactly this parameter does? Many thanks, Philip
No, this is for all models that have a language modeling head (so even masked language models like BERT or causal language models like GPT-2). The idea is that the embedding weights (vocab_size by hidden_size) are tied with the decoder (hidden_size by vocab_size), so the model only learns one representation of the words (that is a big matrix!).
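A quick, illustrative way to see the tying: for a model whose LM head is tied (GPT-2 by default), the input embedding matrix and the output projection share the same storage:

from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")  # tie_word_embeddings is True by default here

in_emb = model.get_input_embeddings().weight    # (vocab_size, hidden_size)
out_emb = model.get_output_embeddings().weight  # same tensor when the weights are tied
print(in_emb.data_ptr() == out_emb.data_ptr())  # True: only one big matrix is learned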
0