Dataset schema:
docs: string (4 classes)
category: string (length 3 to 31)
thread: string (length 7 to 255)
href: string (length 42 to 278)
question: string (length 0 to 30.3k)
context: string (length 0 to 24.9k)
marked: int64 (0 or 1)
huggingface
Beginners
Why do I get this error running tokenizer?
https://discuss.huggingface.co/t/why-do-i-get-this-error-running-tokenizer/780
I am using the Fake news dataset that is used in this google colab notebook. 2 with the goal of adapting this example. For full reproducability, I uploaded the exact files I am using for training and testing in a github repository here 1. However it appeared that some of the classes and methods were deprecated so I was trying to re-do it using the notebook as a guide: IMDb Classification with Trainer.ipynb I am getting error after running this train_dataset = ds_train.map(tokenize) where you will find tokenize defined below along with the rest of the code. I copied and pasted the error message after the code. (see full error in comment) In case anyone has further advice or comments I also added the rest of the code I am planning to run, which you will find after the error message. Thank you for viewing this post and I appreciate any help you can offer. from nlp import Dataset import pandas as pd from torch import tensor from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig, EvalPrediction import torch # read csv in pandas df_train = pd.read_csv("~/Downloads/fakenewstrain.csv") df_test = pd.read_csv("~/Downloads/fakenewstest.csv") # convert pandas df (only columns 'titletext' and 'label') to nlp Dataset ds_train = Dataset.from_pandas(df_train[['titletext','label']]) ds_test = Dataset.from_pandas(df_test[['titletext','label']]) # set up configuration, tokenizer and model config = AutoConfig.from_pretrained('bert-base-uncased') tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') model = AutoModelForSequenceClassification.from_config(config) # function to tokenize a line of text using tokenizer def tokenize(batch): return tokenizer(batch['titletext'], max_length = 64, truncation = True, padding = True, return_tensors = "pt") # loop through Dataset using Dataset map function for tokenization train_dataset = ds_train.map(tokenize) test_dataset = ds_test.map(tokenize) Here is the error I am getting: Full error is in the comment. ArrowInvalid: Could not convert tensor([[ 101, 8499, 4642, 1106, 5307, 1614, 1166, 1114, 27157, 2101, 1656, 1733, 119, 4613, 117, 2631, 113, 13597, 114, 5554, 8499, 112, 188, 1207, 13715, 176, 12328, 1500, 3215, 1786, 1656, 1733, 1120, 170, 185, 14695, 8037, 1303, 1113, 9170, 1115, 1103, 26961, 1524, 118, 6057, 1110, 1231, 7867, 27885, 1103, 1226, 107, 1115, 1119, 112, 188, 1151, 1773, 107, 1105, 1110, 2407, 102]]) with type Tensor: did not recognize Python value type when inferring an Arrow data type Here is the rest of my code (feel free to ignore as not relevant to exact question) I just figured I would add it in case anyone had any helpful comments: I have actually been confused about how the labels are specified. From what I see, they are only referenced to when using set_format however it looks like columns is actually just a list of column names, and I did not see anywhere in the documentation that implied that Trainer specifically looks for certain columns. # loop through Dataset using Dataset map function for tokenization train_dataset = ds_train.map(tokenize) test_dataset = ds_test.map(tokenize) # Set format of Dataset, and specify columns to use # (columns are "input_ids", "attention mask", "token_type_ids" and "label") # Do I need attention mask since im not doing two sentences? Do I need token type ids? 
train_dataset.set_format('torch', columns=['input_ids', 'token_type_ids', 'label']) test_dataset.set_format('torch', columns=['input_ids', 'token_type_ids', 'label']) def compute_metrics(p: EvalPrediction) -> dict(): preds = np.argmax(p.predictions, axis=1) return glue_compute_metrics(data_args.task_name, preds, p.label_ids) training_args = transformers.TrainingArguments( output_dir="./Downloads/tmp/", overwrite_output_dir=True, do_train=True, do_eval=True, per_gpu_train_batch_size=16, per_gpu_eval_batch_size=64, num_train_epochs=1, logging_steps=500, logging_first_step=True, save_steps=1000, evaluate_during_training=True, ) trainer = transformers.Trainer(model = model, args = training_args, train_dataset = text_train, eval_dataset = text_test, compute_metrics = compute_metrics) trainer.train()
here is the full error: --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-2-ee0b34ea20c7> in <module> 1 # loop through Dataset using Dataset map function for tokenization ----> 2 train_dataset = ds_train.map(tokenize) /opt/anaconda3/lib/python3.8/site-packages/nlp/arrow_dataset.py in map(self, function, with_indices, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, verbose) 942 example = apply_function_on_filtered_inputs(example, i) 943 if update_data: --> 944 writer.write(example) 945 else: 946 for i in tqdm(range(0, len(self), batch_size), disable=not verbose): /opt/anaconda3/lib/python3.8/site-packages/nlp/arrow_writer.py in write(self, example, writer_batch_size) 175 writer_batch_size = self.writer_batch_size 176 if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size: --> 177 self.write_on_file() 178 179 def write_batch( /opt/anaconda3/lib/python3.8/site-packages/nlp/arrow_writer.py in write_on_file(self) 139 type = None if self.update_features and self.pa_writer is None else self._type 140 if self.current_rows: --> 141 pa_array = pa.array(self.current_rows, type=type) 142 first_example = pa.array(self.current_rows[0:1], type=type)[0] 143 # Sanity check /opt/anaconda3/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array() /opt/anaconda3/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /opt/anaconda3/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Could not convert tensor([[ 101, 8499, 4642, 1106, 5307, 1614, 1166, 1114, 27157, 2101, 1656, 1733, 119, 4613, 117, 2631, 113, 13597, 114, 5554, 8499, 112, 188, 1207, 13715, 176, 12328, 1500, 3215, 1786, 1656, 1733, 1120, 170, 185, 14695, 8037, 1303, 1113, 9170, 1115, 1103, 26961, 1524, 118, 6057, 1110, 1231, 7867, 27885, 1103, 1226, 107, 1115, 1119, 112, 188, 1151, 1773, 107, 1105, 1110, 2407, 102]]) with type Tensor: did not recognize Python value type when inferring an Arrow data type
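A minimal sketch of the usual fix, assuming the nlp/datasets behaviour shown in the traceback: return plain Python lists from the tokenize function (drop return_tensors="pt") so the results can be written to Arrow, and only convert to tensors afterwards with set_format. The variable names reuse the ones from the code above.

def tokenize(batch):
    # no return_tensors="pt": lists of ints can be stored in Arrow, torch tensors cannot
    return tokenizer(batch['titletext'], max_length=64, truncation=True, padding='max_length')

train_dataset = ds_train.map(tokenize, batched=True)
test_dataset = ds_test.map(tokenize, batched=True)

# convert to torch tensors only at access time
train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'token_type_ids', 'label'])
test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'token_type_ids', 'label'])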
0
huggingface
Beginners
DistilBert weights initialization
https://discuss.huggingface.co/t/distilbert-weights-initialization/768
I want to train a DistilBertModel from scratch with my own corpus, using BertModel as the teacher model. Following the DistilBert paper, what's the best way to initialize the weights of my DistilBert with part of the teacher model's weights? It seems both models are constructed using different classes (e.g. BertAttention in BertModel and MultiheadAttention in DistilBertModel). In this case, I don't know if I can just "assign" the teacher's layers to the DistilBert's layers…
Hi @meisyarahd, you can find the distillation example here.
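For reference, a rough sketch of the kind of weight copy the extraction script does: take the teacher's embeddings and every other encoder layer, and map BERT's parameter names onto DistilBERT's. The key names and layer choice below are assumptions to verify against state_dict() of your own models.

from transformers import BertModel, DistilBertConfig, DistilBertModel

teacher = BertModel.from_pretrained("bert-base-uncased")
student = DistilBertModel(DistilBertConfig())   # randomly initialised 6-layer student

t, s = teacher.state_dict(), student.state_dict()

# embeddings (DistilBERT has no token_type embeddings, so those are skipped)
for name in ["embeddings.word_embeddings.weight",
             "embeddings.position_embeddings.weight",
             "embeddings.LayerNorm.weight",
             "embeddings.LayerNorm.bias"]:
    s[name] = t[name].clone()

# assumed mapping between BERT and DistilBERT sub-module names
layer_map = {
    "attention.self.query": "attention.q_lin",
    "attention.self.key": "attention.k_lin",
    "attention.self.value": "attention.v_lin",
    "attention.output.dense": "attention.out_lin",
    "attention.output.LayerNorm": "sa_layer_norm",
    "intermediate.dense": "ffn.lin1",
    "output.dense": "ffn.lin2",
    "output.LayerNorm": "output_layer_norm",
}

# copy every other teacher layer into the student
for s_idx, t_idx in enumerate([0, 2, 4, 6, 8, 10]):
    for t_key, s_key in layer_map.items():
        for p in ("weight", "bias"):
            s[f"transformer.layer.{s_idx}.{s_key}.{p}"] = t[f"encoder.layer.{t_idx}.{t_key}.{p}"].clone()

student.load_state_dict(s)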
0
huggingface
Beginners
Fast_bert using my finetuned model
https://discuss.huggingface.co/t/fast-bert-using-my-finetuned-model/764
I created a sentiment analysis model using one of the Huggingface models. I got good accuracy, I saved it, and now I want to use it without going through the training process again. I used the BertClassificationPredictor function but it didn't work for me. My work:
from fast_bert.prediction import BertClassificationPredictor
MODEL_PATH = '/content/latest_model'
predictor = BertClassificationPredictor(
    model_path=MODEL_PATH,
    label_path=LABEL_PATH,
    multi_label=False,
    model_type='bert')
# Single prediction
single_prediction = predictor.predict("just get me result for this text")
The model I finetuned: https://huggingface.co/asafaya/bert-base-arabic
Hi @AhmedBou better to ask this on fast_bert issues. Also if you could post the stack trace then I can take a look.
0
huggingface
Beginners
Question about Tokenization for text classification
https://discuss.huggingface.co/t/question-about-tokenization-for-text-classification/775
Can anyone answer this question I posted on Stack Exchange?
The padding should be on the right for Bert. But you can do all of that a lot simpler now! You should take a look at the preprocessing tutorial.
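For example, one tokenizer call now handles padding and truncation for a whole batch (BERT tokenizers already pad on the right by default):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.padding_side)   # "right" for BERT

enc = tokenizer(["a short sentence", "a much longer second sentence"],
                padding=True, truncation=True, return_tensors="pt")
print(enc["input_ids"].shape)   # (2, length of the longest sequence)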
0
huggingface
Beginners
Good way to output embedding for search?
https://discuss.huggingface.co/t/good-way-to-output-embedding-for-search/725
I tried BERT and BART to output embeddings of documents and queries. BART is much better than BERT. However, in a zero-shot setting the sentence embeddings look good when the sentences have similar lengths, but when embedding a query, which is really short, the embedding is much worse. The query embedding ends up much closer to short, low-quality data than to the long, relevant data. Are there good models or training tasks to map queries and documents into a similar space that does not depend on sentence length?
Have you read this article? There is a small portion that discusses why models that produce sentence embeddings are bad at embedding one or two words into a relevant latent space. Joe Davison Blog (29 May 2020): "Zero-Shot Learning in Modern NLP" – State-of-the-art NLP models for text classification without annotated data.
0
huggingface
Beginners
Tokenizer for ‘sshleifer/distilbart-xsum-12-6’?
https://discuss.huggingface.co/t/tokenizer-for-sshleifer-distilbart-xsum-12-6/529
I try to generate outputs using sshleifer/distilbart-xsum-12-6 but it gives me the following error: OSError: Model name 'sshleifer/distilbart-xsum-12-6' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'sshleifer/distilbart-xsum-12-6' was a path or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'], but couldn't find such vocabulary files at this path or url. I am thinking instead of tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-6") we should change it to another tokenizer for xsum? Thank you! (Again, thank you Sam for your wonderful work.
chz816: tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-6")
Hi @chz816, this call should work. You can also use BartTokenizer.
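If the call still fails on an older transformers release, a hedged sketch of both options (the distilbart checkpoints are assumed to share BART's vocabulary, which is worth double-checking for your version):

from transformers import AutoTokenizer, BartTokenizer

# preferred: resolves the tokenizer files for the checkpoint itself
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-6")

# fallback: reuse the plain BART tokenizer, which uses the same vocab/merges files
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")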
0
huggingface
Beginners
Getting different results on different hardware
https://discuss.huggingface.co/t/getting-different-results-on-different-hardware/720
I have been observing that transformers quite often generates different qualitative outcomes on different hardware, despite using the same seed and the same configuration. Shouldn’t it be the case that identical code with a fixed seed and identical configuration (batch_size, etc.) should produce identical results no matter the hardware? For example here, @sshleifer gets a bleu score of 27.65 on his PR branch, whereas I get 27.84. The only difference is hardware. Another example, we have been battling at finding hparams that will make CI happy with pl_glue_run.py test - I was getting acc/f1 of 1.0 on my hardware but CI was getting 0.5, despite multiple attempts to improve it. Two attempts were made (1, 2) but the test is currently still testing acc>0.25, so really we are just testing that the example runs. I have seen a lot of others examples, these are just 2 recent ones I could easily point to. Perhaps some of you have practical insights at how this can be improved. Thank you.
Different GPUs will often give different results in basic operations, starting at about the 6th digit after the decimal point (in my experience). That may be enough to explain the difference. Beyond that, maybe check the number of GPUs used / the total batch size?
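For what it's worth, a seed-setting sketch that removes most run-to-run variance on a single machine; differences between GPU models coming from floating-point kernel implementations can still remain:

import random
import numpy as np
import torch

def set_full_seed(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # trade speed for reproducible cuDNN kernel selection
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_full_seed(42)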
0
huggingface
Beginners
Where in the code does masking of tokens happen when pretraining BERT
https://discuss.huggingface.co/t/where-in-the-code-does-masking-of-tokens-happen-when-pretraining-bert/668
Hi all, I was making myself familiar with the BertForPreTraining and BertTokenizer classes, and I am unsure where in the code the masking of tokens actually happens. I have tried tracing through but am getting lost in the weeds of various tokenizer subclasses. Any help with this would be much appreciated.
There is no script to pretrain BERT in the examples, transformers is primarily there to help you finetune a model like BERT on a downstream task. That being said, the DataCollatorForLanguageModeling masks random tokens when creating batches, if you need it.
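A minimal usage sketch of that collator with the Trainer (the tiny in-memory dataset and output path are placeholders, and the argument names are those of recent versions):

from transformers import (AutoTokenizer, AutoModelForMaskedLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# masks 15% of the tokens on the fly, each time a batch is built
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

texts = ["the quick brown fox", "jumps over the lazy dog"]      # placeholder corpus
train_dataset = [tokenizer(t, truncation=True) for t in texts]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm_out"),               # placeholder path
    train_dataset=train_dataset,
    data_collator=collator,
)
trainer.train()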
0
huggingface
Beginners
Selective masking in Language modeling
https://discuss.huggingface.co/t/selective-masking-in-language-modeling/700
Hi Huggingfacers, I have a number of questions regarding finetuning a language model: How do I mask a selected portion of a given input sentence instead of masking randomly? For example, if I am using ALBERT as the model and I am aiming to use a different loss function than the standard MLM loss for the masked tokens, how do I access the model output at the masked tokens?
Refer to: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=M1oqh0F6W3ad
Mask code (excerpt from https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L157):
        if are_tensors_same_length:
            return torch.stack(examples, dim=0)
        else:
            if self.tokenizer._pad_token is None:
                raise ValueError(
                    "You are attempting to pad samples but the tokenizer you are using"
                    f" ({self.tokenizer.__class__.__name__}) does not have one."
                )
            return pad_sequence(examples, batch_first=True, padding_value=self.tokenizer.pad_token_id)

    def mask_tokens(self, inputs: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        """
        Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original.
        """
        if self.tokenizer.mask_token is None:
            raise ValueError(
                "This tokenizer does not have a mask token which is necessary for masked language modeling. Remove the --mlm flag if you want to use this tokenizer."
            )
        labels = inputs.clone()
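For the selective-masking part of the question, a hedged sketch that masks positions you choose yourself instead of the random 15%, using the standard -100 label convention so the loss only covers the masked tokens (written against a recent transformers version; the chosen position is just an example):

import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

enc = tokenizer("the capital of france is paris", return_tensors="pt")
input_ids = enc["input_ids"].clone()

# decide the positions to mask yourself (here: the last content token before [SEP])
positions_to_mask = torch.tensor([input_ids.shape[1] - 2])

labels = torch.full_like(input_ids, -100)                   # -100 = ignored by the loss
labels[0, positions_to_mask] = input_ids[0, positions_to_mask]
input_ids[0, positions_to_mask] = tokenizer.mask_token_id   # replace with [MASK]

outputs = model(input_ids=input_ids, attention_mask=enc["attention_mask"], labels=labels)
masked_logits = outputs.logits[0, positions_to_mask]        # model output at the masked tokens
print(outputs.loss, masked_logits.shape)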
0
huggingface
Beginners
Why is the lm_head layer in GPT2LMHeadModel not a parameter?
https://discuss.huggingface.co/t/why-is-the-lm-head-layer-in-gpt2lmheadmodel-not-a-parameter/639
Hi all, I recently raised an issue asking why the lm_head layer is not a parameter. The response I got is that "It is tied to the input layer". After reading the docs as suggested, I found that the lm_head layer is a "linear layer with weights tied to the input embeddings". I still don't understand what that means, as I thought the lm_head layer would output a tensor shaped (*, vocab_size) whereas the embedding is shaped (vocab_size, embedding_size)? Does that mean if I want to fine-tune the lm_head layer, I would need to fine-tune the embedding layer (wte)?
The embedding matrix has a size (vocab_size, embedding_size). The lm_head linear layer has weights of size (embedding_size, vocab_size), so you can use the transpose of the embedding matrix for that final lm layer in terms of shape (and in PyTorch, the weights of a linear layer are stored transposed, so you can just use the same matrix). As for the why: if the word "the" is encoded as [0.1, -0.3, 0.2] (for instance) and the model predicts [0.099, -0.302, 0.18], you probably want it to predict something very close to "the", which is why we use the same weights. That way the model only learns one representation of embedding vectors. This trick was first introduced for LSTMs a while ago.
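A quick way to see the tying, and what it implies for fine-tuning, sketched for GPT-2:

from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

wte = model.transformer.wte.weight     # (vocab_size, embedding_size)
lm_head = model.lm_head.weight         # same shape; nn.Linear stores its weight transposed

print(wte.shape, lm_head.shape)
print(wte.data_ptr() == lm_head.data_ptr())   # True: both modules share one tensor
# consequence: fine-tuning lm_head updates wte as well, and vice versa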
0
huggingface
Beginners
Multiclass vs Multilabel
https://discuss.huggingface.co/t/multiclass-vs-multilabel/672
Hello all, I'm relatively new to the transformers library and its models. What I'm trying out is fine-tuning a pretrained model for a classification task. I understood that BertForSequenceClassification is for classical multi-class classification. However, I'm confused about how to achieve "multi-label" classification. Can anyone guide me to multilabel classification? ps.) What I mean by multiclass is that the labels are exclusive to each other, e.g. num_labels = 5, labels = [0, 1, 0, 0, 0]. Multilabel: num_labels = 5, labels = [1, 0, 0, 1, 0]
See how transformers.modeling_bert.BertForSequenceClassification is implemented: there is a CrossEntropyLoss built into this class internally. You can create a custom class (PyTorch module) similar to BertForSequenceClassification and apply the multi-label loss of your choice there. You can also look up my blog post https://zablo.net/blog/post/custom-classifier-on-bert-model-guide-polemo2-sentiment-analysis/ where I show how to build a totally custom model on top of the pre-trained transformers models.
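A hedged sketch of such a custom module, swapping CrossEntropyLoss for BCEWithLogitsLoss so each label is predicted independently (the dropout value and pooled-output indexing follow the usual BERT head layout):

from torch import nn
from transformers import BertModel

class BertForMultiLabelClassification(nn.Module):
    """Multi-label head: each of num_labels can independently be 0 or 1."""

    def __init__(self, num_labels, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)
        self.loss_fct = nn.BCEWithLogitsLoss()

    def forward(self, input_ids, attention_mask=None, labels=None):
        pooled = self.bert(input_ids, attention_mask=attention_mask)[1]  # pooled [CLS] output
        logits = self.classifier(self.dropout(pooled))
        if labels is not None:
            # labels are multi-hot float vectors, e.g. [1., 0., 0., 1., 0.]
            return self.loss_fct(logits, labels.float()), logits
        return logits

model = BertForMultiLabelClassification(num_labels=5)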
0
huggingface
Beginners
Where is the source to benchmark’s dataset entries on the model’s website
https://discuss.huggingface.co/t/where-is-the-source-to-benchmarks-dataset-entries-on-the-models-website/652
If I go to https://huggingface.co/Helsinki-NLP/opus-mt-ru-en, I see a bunch of dataset entries in the benchmark table. How do I know what they are, so that if I do my own training I can compare apples to apples? For example, this particular model lists:
[...]
|newstest2015-enru.ru.en |30.4 |0.568|
|newstest2016-enru.ru.en |30.1 |0.565|
[...]
|newstest2019-ruen.ru.en |31.4 |0.576|
|Tatoeba.ru.en |61.1 |0.736|
After some research I have derived that most likely these are WMT datasets (e.g. https://www.statmt.org/wmt16/), but I could be wrong. And even if I got it right, I can't tell whether newstest2015 is actually wmt16 or wmt15. This is because wmt16 doesn't include any data from 2016: it's made from News Crawl articles up to and including 2015, according to http://www.statmt.org/wmt16/translation-task.html. So I can't tell whether the year in newstest2016-enru.ru.en refers to the name of the dataset or the last included year of the News Crawl dump. Any suggestions as to how I could find which entry is the right one if I finetune on wmt16? edit: Since there is wmt19 out there and its scorecard contains "newstest2019" as the most recent entry, most likely the year listed in the entry is that of the WMT release and not of the News Crawl data. So if I train on wmt16 I'd compare with newstest2016-enru.ru.en. That said, perhaps it'd make it easier for users if the contributed model's webpage identified which datasets it has in its benchmark table, with a link to a source or at least an official name so the former can be found. Also, since there is a lot of link rot, perhaps a backup link to the Wayback Machine. Thank you. p.s. Now that I have investigated this model, the helpful links from the benchmark entries would have been: for all but the last entry, http://opus.nlpl.eu/WMT-News.php and maybe the original http://www.statmt.org/wmt19/; for the last entry, http://opus.nlpl.eu/Tatoeba.php and maybe the original https://tatoeba.org/eng/
I guess the only way is to open a issue on their github repo or contact them directly to ask.
0
huggingface
Beginners
Bart summarization
https://discuss.huggingface.co/t/bart-summarization/636
Good morning/evening. I am trying to understand how distilbart generates summaries: what is the logic, when you fine-tune it with texts and their reference summaries, by which it learns to summarize with a specified length and with new words? The way I see it is: I feed a text into the model, it gets encoded and then decoded with only the tokens containing important information? How does the model spot the good sentence tokens?
You should read more about "Sequence to Sequence". Bart is a seq2seq model: the input text is encoded with attention, and then the output text is generated token by token, with attention over the input and the output generated so far. Since the output is generated token by token, we can choose how many tokens we want to generate.
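A sketch of how that plays out in practice with a distilbart checkpoint: the length of the generated summary is bounded explicitly, independently of the input length.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "sshleifer/distilbart-cnn-12-6"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

article = "Long input document to be summarized ..."   # placeholder text
inputs = tokenizer([article], truncation=True, max_length=1024, return_tensors="pt")

# the decoder emits tokens one by one; min_length/max_length bound how many it emits
summary_ids = model.generate(inputs["input_ids"],
                             attention_mask=inputs["attention_mask"],
                             num_beams=4, min_length=30, max_length=120,
                             early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))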
0
huggingface
Beginners
Having trouble using Bart for conditional generation
https://discuss.huggingface.co/t/having-trouble-using-bart-for-conditional-generation/632
While trying to follow the example for conditional generation using BART I keep getting the following error: `ImportError Traceback (most recent call last) in ----> 1 from transformers.modeling_bart import BartTokenizer, BartForConditionalGeneration, BartConfig 2 3 # see examples/summarization/bart/run_eval.py for a longer example 4 model = BartForConditionalGeneration.from_pretrained(‘facebook/bart-large-cnn’) 5 tokenizer = BartTokenizer.from_pretrained(‘facebook/bart-large-cnn’) ~/anaconda3/envs/pytorch-hack/lib/python3.8/site-packages/transformers/modeling_bart.py in 28 from .activations import ACT2FN 29 from .configuration_bart import BartConfig —> 30 from .file_utils import ( 31 add_code_sample_docstrings, 32 add_end_docstrings, ImportError: cannot import name ‘add_code_sample_docstrings’ from ‘transformers.file_utils’ (/Users/ribhu/anaconda3/envs/pytorch-hack/lib/python3.8/site-packages/transformers/file_utils.py)` I was unable to find any references to such a problem. Also, I installed the library directly from this repo to make sure I used the updated version. Really feeling lost and would appreciate some help.
Hi, can you check again whether you are using the correct transformers version? add_code_sample_docstrings is available on master.
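A hedged sanity check: import from the top-level package rather than transformers.modeling_bart, and print the installed version to confirm it matches the source checkout:

import transformers
from transformers import BartForConditionalGeneration, BartTokenizer

print(transformers.__version__)

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")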
0
huggingface
Beginners
Training BART, error when preparing decoder_input_ids. Shape of input_ids?
https://discuss.huggingface.co/t/training-bart-error-when-preparing-decoder-input-ids-shape-of-input-ids/623
Hi! I’ve been trying to train BART on a dialogue data set but got stuck on the following error. I don’t have explicit decoder input ids, so the forward() function calls the _prepare_bart_decoder_inputs() function which in turn calls the shift_tokens_right() function to make the decoder input. The dimensions of the tensor made in shift_tokens_right() don’t match the dimensions of the input_ids tensor for the torch.gather() function call to work. Here is the error output: ~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, encoder_outputs, decoder_attention_mask, decoder_cached_states, use_cache, output_attentions, output_hidden_states) 837 decoder_input_ids=decoder_input_ids, 838 decoder_padding_mask=decoder_attention_mask, --> 839 causal_mask_dtype=self.shared.weight.dtype, 840 ) 841 else: ~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in _prepare_bart_decoder_inputs(config, input_ids, decoder_input_ids, decoder_padding_mask, causal_mask_dtype) 111 pad_token_id = config.pad_token_id 112 if decoder_input_ids is None: --> 113 decoder_input_ids = shift_tokens_right(input_ids, pad_token_id) 114 bsz, tgt_len = decoder_input_ids.size() 115 if decoder_padding_mask is None: ~/.local/lib/python3.6/site-packages/transformers/modeling_bart.py in shift_tokens_right(input_ids, pad_token_id) 166. def shift_tokens_right(input_ids, pad_token_id): 167 """Shift input ids one token to the right, and wrap the last non pad token (usually <eos>).""" 168 prev_output_tokens = input_ids.clone() 169 index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1) --> 170 prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze() 171 prev_output_tokens[:, 1:] = input_ids[:, :-1] 172 return prev_output_tokens RuntimeError: invalid argument 2: Input tensor must have same size as output tensor apart from the specified dimension at /pytorch/aten/src/THC/generic/THCTensorScatterGather.cu:28 My input_ids tensor has size [4, 2, 640] while index_of_eos from shift_tokens_right() has the size [4, 640, 1] which doesn’t fit the requirements for torch.gather(). Permuting the input_ids tensor before calling the function doesn’t work since then the dimensions of index_of_eos change as well and it doesn’t match. The only thing I have found to work is to edit shift_tokens_right() such that the index_of_eos tensor is permutated. Which seems like a bad idea. Is the shape of my input_ids tensor wrong? Do the tensors need to have just 2 dimensions batch_size and sequence length? Maybe how I create the input features for pytorch needs to be reviewed…
I've been here, with both dialogue and BART. Solved all my problems by porting my code to pytorch lightning. Once ported, you can very easily use the training_step() function as follows, where self() calls model.forward():
def training_step(self, batch, batch_id):
    """see lightning docs. Need to make sure that first token is not ignored"""
    decoder_inputs = batch["target_ids"][:, :-1].contiguous()
    decoder_labels = batch["target_ids"][:, 1:].clone()
    decoder_labels[batch["target_ids"][:, 1:] == self.tokenizer.pad_token_id] = -100
    loss = self(source_ids=batch["source_ids"],
                padding_mask=batch["padding_mask"],
                decoder_inputs=decoder_inputs,
                decoder_labels=decoder_labels)[0]
    return {"loss": loss}
0
huggingface
Beginners
Use GPU on pc to load the models
https://discuss.huggingface.co/t/use-gpu-on-pc-to-load-the-models/604
Good evening, I'm trying to load distilbart-cnn-12-6 on my local machine. My GPU is an NVIDIA GeForce GT 740M and is listed as "GPU 1"; when I try to load the model it's not detected. Any idea how to solve that?
What do you mean by "it is not detected"? What is not detected: the model or your GPU? Can you post the code that you used to load the model?
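In the meantime, a quick sketch for checking that PyTorch can see the GPU at all and for moving the model onto it:

import torch
from transformers import AutoModelForSeq2SeqLM

print(torch.cuda.is_available())     # must be True for the GPU to be usable
print(torch.cuda.device_count())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-12-6").to(device)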
0
huggingface
Beginners
Purpose of padding and truncating
https://discuss.huggingface.co/t/purpose-of-padding-and-truncating/412
I have read the Preprocessing Data page. I understand what padding and truncating are doing, but I'm not sure I understand the reason for doing either of them. Can anyone help me understand the purpose for doing them? Thanks in advance!
Hi @aclifton314,
padding: Padding is used to make all examples the same length so that you can pack them into a batch; sequences of uneven length can't be batched. So if a sequence is shorter than your max length, padding is used to make that sequence longer. Some models also expect fixed-length input, so padding helps there too.
truncation: Most models have a max_length defined for them (there are exceptions: models with relative attention can take arbitrarily long sequences). For example, for BERT the max_length is 512, so if one of your sequences is longer than that you can't feed it directly, and you need to truncate (drop the extra tokens) to make the sequence smaller.
Hope this helps
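A small sketch of both effects (the model name is just an example):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

short, long = "a short sentence", "a noticeably longer sentence that needs more tokens"

# without padding the two examples have different lengths and cannot be stacked into one tensor
ids = [tokenizer(t)["input_ids"] for t in (short, long)]
print(len(ids[0]), len(ids[1]))

# padding equalises the lengths; truncation caps them at max_length (512 for BERT)
batch = tokenizer([short, long], padding=True, truncation=True, max_length=512,
                  return_tensors="pt")
print(batch["input_ids"].shape)   # (2, length of the longest sequence)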
0
huggingface
Beginners
Training GPT-2 from scratch
https://discuss.huggingface.co/t/training-gpt-2-from-scratch/562
Hello! I'm currently working on a toy project that uses GPT-2 (smallest variant, but only 6 layers, from scratch) to predict next tokens in the context of programming languages. So my dataset is all source code, I am using a custom tokenizer, and I have the following questions: If my sample is longer than 1024 tokens (supposing the model's max length is 1024), are the past tokens automatically fed back to the model during training, or should I do that myself? My custom tokenizer works well (in my opinion), but I want to use the huggingface API to take advantage of the "fast" tokenizers. How do I go about subclassing the Tokenizer class so that my tokenizer is compatible with huggingface's tokenizer API? Thank you very much!!!
Hi @miguelvictor, you can train your tokenizer using the tokenizers library. These are fast Rust tokenizers with a Python API.
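A short sketch with the tokenizers library (file paths and vocabulary size are placeholders), then wrapping the result so it is usable through the transformers tokenizer API:

import os
from tokenizers import ByteLevelBPETokenizer
from transformers import GPT2TokenizerFast

# train a GPT-2 style byte-level BPE on your source-code corpus
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["corpus/part1.txt", "corpus/part2.txt"],
                vocab_size=32000, min_frequency=2,
                special_tokens=["<|endoftext|>"])

os.makedirs("my_tokenizer", exist_ok=True)
tokenizer.save_model("my_tokenizer")   # writes vocab.json and merges.txt

# load it back as a fast tokenizer compatible with the transformers API
hf_tokenizer = GPT2TokenizerFast.from_pretrained("my_tokenizer")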
0
huggingface
Beginners
Retrain from scratch a model in a loop
https://discuss.huggingface.co/t/retrain-from-scratch-a-model-in-a-loop/550
Hello all, I am currently exploring the influence of some architectures and hyperparameters on my specific task. Thus, I created a loop to train the same model several times (with the same set of hyperparameters but reinitialized weights) to see whether differences in model performance compared to others are statistically significant. However, even if I set the parameter force_download = True when downloading the pretrained model (with the method from_pretrained), my experiments seem to produce exactly the same results on every iteration after the first (see the image in the original post; omitted here). So, do you have any insight into what I am doing wrong? How do I create a loop that will train a new model from scratch each time? Thank you, J.
Could you be loading from a checkpoint somewhere perhaps?
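If it is not a stale checkpoint: note that from_pretrained (even with force_download=True) always reloads the same pretrained weights, so variation between runs has to come from re-instantiating the model inside the loop and changing the seed per run. A hedged sketch:

from transformers import AutoConfig, AutoModelForSequenceClassification, set_seed

config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2)

for run_idx in range(5):
    set_seed(run_idx)   # different seed per run, so head init and shuffling actually differ

    # option A: pretrained encoder + freshly initialised classification head
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", config=config)

    # option B: a completely re-initialised model, no pretrained weights at all
    # model = AutoModelForSequenceClassification.from_config(config)

    # build a fresh Trainer here with a distinct output_dir (e.g. f"run_{run_idx}")
    # so that no earlier checkpoint is silently resumed or reused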
0
huggingface
Beginners
Padding with pad_token_id improves results for T5?
https://discuss.huggingface.co/t/padding-with-pad-token-id-improves-results-for-t5/559
Hi, I was trying to reimplement a simple one beam search for T5 based on the awesome work of Thomas Wolf to understand better how HuggingFace generates new tokens and am a bit bewildered by a discovery. It appears that results improve a lot when I pad the text with pad_token_ids. Here is a minimum reproducible example of it: from transformers import T5Tokenizer, T5ForConditionalGeneration def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float("Inf")): """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering Function created by Thomas Wolf of the huggingface team Args: logits: logits distribution shape (vocabulary size) top_k > 0: keep only top k tokens with highest probability (top-k filtering). top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751) From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317 """ assert ( logits.dim() == 1 ) # batch size 1 for now - could be updated for more but the code would be less clear top_k = min(top_k, logits.size(-1)) # Safety check if top_k > 0: # Remove all tokens with a probability less than the last token of the top-k indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None] logits[indices_to_remove] = filter_value if top_p > 0.0: sorted_logits, sorted_indices = torch.sort(logits, descending=True) cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1) # Remove tokens with cumulative probability above the threshold sorted_indices_to_remove = cumulative_probs > top_p # Shift the indices to the right to keep also the first token above the threshold sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone() sorted_indices_to_remove[..., 0] = 0 indices_to_remove = sorted_indices[sorted_indices_to_remove] logits[indices_to_remove] = filter_value return logits pretrained_model = 't5-base' # Loading models tokenizer = T5Tokenizer.from_pretrained(pretrained_model) t5_conditional = T5ForConditionalGeneration.from_pretrained(pretrained_model) encoder, decoder, lm_head = t5_conditional.encoder, t5_conditional.decoder, t5_conditional.lm_head ########### Without Padding ############ generated = torch.tensor( [tokenizer('translate English to French: I was a victim of a series of accidents.')['input_ids']]) encoded_embeddings = encoder(generated)[0] for _ in range(16): decoder_output = decoder(input_ids=generated, encoder_hidden_states=encoded_embeddings)[0] logits = lm_head(decoder_output) next_token_logits = logits[0, -1, :] next_token = torch.argmax(next_token_logits).unsqueeze(0) generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) tokenizer.decode(generated[0]) # Ouput generated: ................. 
(in tokens, a series of 5s) ########### With One Padded "0" ############ generated = torch.tensor( [tokenizer('translate English to French: I was a victim of a series of accidents.')['input_ids']]) generated = torch.cat((generated, torch.tensor([[0]])), dim=1) encoded_embeddings = encoder(generated)[0] for _ in range(16): decoder_output = decoder(input_ids=generated, encoder_hidden_states=encoded_embeddings)[0] logits = lm_head(decoder_output) next_token_logits = logits[0, -1, :] next_token = torch.argmax(next_token_logits).unsqueeze(0) generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) tokenizer.decode(generated[0]) # Ouput generated: ⁇ l'occasion de l'accident, j' ########### With Ten Padded "0s" ############ generated = torch.tensor( [tokenizer('translate English to French: I was a victim of a series of accidents.')['input_ids']]) generated = torch.cat((generated, torch.tensor([[0] * 10])), dim=1) encoded_embeddings = encoder(generated)[0] for _ in range(16): decoder_output = decoder(input_ids=generated, encoder_hidden_states=encoded_embeddings)[0] logits = lm_head(decoder_output) next_token_logits = logits[0, -1, :] next_token = torch.argmax(next_token_logits).unsqueeze(0) generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) tokenizer.decode(generated[0]) # Ouput generated: J'ai été victime d'une série d'accidents The same applies to prompts with or without eos tokens. Would anyone know why padding improves results so much and what is the optimal padding? Many thanks, Abel
Hi, in the first example the source text ids are also passed to the decoder, which they should not. When generating, the decoder sequence should first start with the decoder_start_token and not from the source ids. So when generating for the first step pass the encoder_hidden_states and decoder_start_token_id as the first id. So the correct usage would be enc = tokenizer(['translate English to French: I was a victim of a series of accidents.'], return_tensors="pt") input_ids = enc['input_ids'] encoded_embeddings = encoder(input_ids)[0] # decoder inputs should start from decoder_input_ids, # for T5 pad_token_id is the deocder_start_token generated = torch.tensor([[tokenizer.pad_token_id]]) for _ in range(16): decoder_output = decoder(input_ids=generated, encoder_hidden_states=encoded_embeddings)[0] logits = lm_head(decoder_output) next_token_logits = logits[0, -1, :] next_token = torch.argmax(next_token_logits).unsqueeze(0) generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1) tokenizer.decode(generated[0]) which generates J'ai été victime d'une série d'accident. Also when passing padded text, we should also pass attention_mask so the pad tokens won’t be attended. The tokenizer returns attention_mask along with the input_ids. And to create tensors for the tokenized ids, pass the return_tensors argument to the tokenizer which will then return tensor instead of list depending on the value of return_tensor which is ‘pt’ for torch tensors, ‘tf’ for tf tensors. If you pass a list of strings, tokenizer will automatically batch them.
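For completeness, the built-in high-level API performs the same decoding while handling the decoder start token and the attention mask automatically:

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

enc = tokenizer(["translate English to French: I was a victim of a series of accidents."],
                return_tensors="pt")
out = model.generate(enc["input_ids"], attention_mask=enc["attention_mask"],
                     max_length=32, num_beams=1)
print(tokenizer.decode(out[0], skip_special_tokens=True))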
0
huggingface
Beginners
BART pre-training?
https://discuss.huggingface.co/t/bart-pre-training/549
Hi, How can I pre-train Bart with our own dataset? It seems that the script examples/language-modeling/run_language_modeling.py doesn’t support it yet. Thanks.
Hi @cahya, BART pre-training is not yet available in transformers. You can find the denoising dataset here in the fairseq repo and try to use it.
0
huggingface
Beginners
Multi-turn dialogue using dialoGPT with Hosted Inference API
https://discuss.huggingface.co/t/multi-turn-dialogue-using-dialogpt-with-hosted-inference-api/486
I am looking to use dialoGPT-large on the Hosted Inference API for a chatbot demo but am having trouble generating decent multi-turn dialogue. As an example, when I post the following to the API endpoint: I heard you won the cricket match. <|endoftext|> I did! <|endoftext|> Awesome. Who did you play against? <|endoftext|> I played against the Aussies. <|endoftext|> Wow ! Was it a tough game? <|endoftext|> It was a tough game. It went on till the last over. They almost won. <|endoftext|> Where was the match? <|endoftext|> It seems to just spit it back out at me: I heard you won the cricket match. <|endoftext|> I did! <|endoftext|> Awesome. Who did you play against? <|endoftext|> I played against the Aussies. <|endoftext|> Wow ! Was it a tough game? <|endoftext|> It was a tough game. It went on till the last over. They almost won. <|endoftext|> Where was the match? <|endoftext|> This blog post has an example of someone getting meaningful results from exactly the above prompt: https://medium.com/datadriveninvestor/a-simple-contextual-chatbot-to-predict-an-reply-with-pre-trained-dialogpt-model-from-huggingface-f681b550cd60 16. Any guidance as to where I’m going wrong would be really appreciated.
Try without the spaces. Works for me.
0
huggingface
Beginners
Generate method during finetuning
https://discuss.huggingface.co/t/generate-method-during-finetuning/495
I am inheriting from a pre-trained model:
class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
    @timer
    def __init__(self, config, model_tokenizer=None):
        super().__init__(config)
        self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right')
        self.tokenizer.pad_token = self.tokenizer.eos_token
and in the forward method during finetuning, I need to generate sequences from this model being finetuned:
def forward(
    self,
    input_ids=None,
    past=None,
    attention_mask=None,
    token_type_ids=None,
    position_ids=None,
    head_mask=None,
    inputs_embeds=None,
    labels=None,
    use_cache=True,
):
    beam_output = self.generate(
        input_ids,
        max_length=50,
        num_beams=5,
        early_stopping=True)
    # Pass beam_output to different loss function and return loss
My question is: will using the generate method use the weights of the current model that is being finetuned, or will it use static weights from some other GPT2 model?
I ran this code and it looks like I’m getting a recursion error: def sd_data_collator(dataset_samples_list): tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) batch = {} batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']]) batch['past'] = None batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']]) batch['position_ids'] = None batch['head_mask'] = None batch['inputs_embeds'] = None batch['labels'] = None batch['use_cache'] = True return batch sd_dataset = SDAbstractsDataset('/path/to/sd_samples_64.csv') training_args = TrainingArguments( output_dir='/path/to/finetuned_gpt2', do_train=True, per_device_train_batch_size=4, learning_rate=1e-3, num_train_epochs=1 ) model = GPT2FinetunedWithNgrams.from_pretrained('gpt2') trainer = Trainer( model=model, args=training_args, train_dataset=sd_dataset, data_collator = sd_data_collator ) trainer.train() class GPT2FinetunedWithNgrams(GPT2LMHeadModel): def __init__(self, config, model_tokenizer=None): super().__init__(config) self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') self.tokenizer.pad_token = self.tokenizer.eos_token def load_ngrams_model(self, ngrams_model_path): self.ngrams_model = NGrams(ngrams_model_path) def forward( self, input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=True, ): output = self.generate(input_ids=input_ids, max_length=470) Here’s the whole error (it’s really lengthy): Some weights of GPT2FinetunedWithNgrams were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Epoch: 0%| | 0/1 [00:00<?, ?it/s] Iteration: 0%| | 0/16 [00:00<?, ?it/s]Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence . . . 
File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 480, in generate model_specific_kwargs=model_specific_kwargs, File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 520, in _generate_no_beam_search outputs = self(**model_inputs) File "/path/to/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/path/to/ric-2020/text_gen_w_transformers/finetune_gpt2.py", line 33, in forward File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/generation_utils.py", line 350, in generate "Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id) . . . File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1390, in warning self._log(WARNING, msg, args, **kwargs) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1514, in _log self.handle(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1524, in handle self.callHandlers(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1594, in callHandlers lastResort.handle(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 894, in handle self.emit(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 1025, in emit msg = self.format(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 869, in format return fmt.format(record) File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 608, in format record.message = record.getMessage() File "/path/to/anaconda3/lib/python3.7/logging/__init__.py", line 360, in getMessage def getMessage(self): RecursionError: maximum recursion depth exceeded while calling a Python object My guess is the self.generate() being called within the model produces the recursion problem. Is it possible to use the functionality within the generate method (like beam search, top-k, etc.) without causing this recursion error during finetuning?
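The recursion comes from generate() calling self(...), which re-enters the overridden forward(), which calls generate() again. One hedged way around it is a guard flag so that forward() behaves like plain GPT-2 while generation is running, and only computes the custom loss at the top level; my_custom_loss below is hypothetical. Note also that generate() runs under torch.no_grad(), so no gradients flow through the generated ids, which is a separate problem to solve for this training scheme.

from transformers import GPT2LMHeadModel

def my_custom_loss(generated_ids, labels):
    # placeholder for the n-gram based loss described above
    raise NotImplementedError

class GPT2FinetunedWithNgrams(GPT2LMHeadModel):
    def __init__(self, config):
        super().__init__(config)
        self._in_generation = False   # guard so generate() can call forward() normally

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        if self._in_generation:
            # called from inside self.generate(): behave exactly like plain GPT-2
            return super().forward(input_ids=input_ids, attention_mask=attention_mask, **kwargs)

        # called from the Trainer: run generation once, then compute a custom loss
        self._in_generation = True
        try:
            beam_output = self.generate(input_ids, max_length=50, num_beams=5, early_stopping=True)
        finally:
            self._in_generation = False

        loss = my_custom_loss(beam_output, labels)   # hypothetical custom loss
        return (loss,)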
0
huggingface
Beginners
How to Avoid Overfitting?
https://discuss.huggingface.co/t/how-to-avoid-overfitting/524
I have a dataset with 7,000 lines that I plan to train GPT-2 on. I am unsure how many steps I should train it for, however. Does anybody have any advice on how I can avoid overfitting?
Avoiding overfitting is a big question; I'll just mention some techniques I know of or have heard may help: data augmentation, layer-wise learning rates, weight decay, gradient clipping. There are lots of things you can explore.
0
huggingface
Beginners
Is masking still used when finetuning a BERT model?
https://discuss.huggingface.co/t/is-masking-still-used-when-finetuning-a-bert-model/523
I’m trying to finetune a BERT model to do token classification, and I’m wondering exactly how the finetuning is done. When I was pretraining the model it was done by using Masked language model and masking out 15% as suggested in the original paper. But now when I’m finetuning the model, do I also mask out 15% of the input/target data or do I no longer do that and just train it on the unmasked data?
Hi @tueboesen, with BERT, masked language modelling is used as a pre-training task. For fine-tuning, MLM is not used.
0
huggingface
Beginners
Better generated tokens from GPT2
https://discuss.huggingface.co/t/better-generated-tokens-from-gpt2/469
System Setup Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Python: 3.7.6 Background Code from transformers import GPT2Tokenizer, GPT2LMHeadModel import torch prompt1 = 'We present an update on the results of the Double Chooz experiment. Double Chooz searches for the neutrino mixing angle, θ13, in the three-neutrino mixing matrix via the disappearance of produced by the dual 4.27 GW/th Chooz B Reactors. Here we discuss updated oscillation fit results using both the rate and the shape of the anti-neutrino energy spectrum. In the most recent oscillation analysis we included data with neutron captures on Gadolinium and Hydrogen along with the reactor off data that we collected. This is an important step in our multi-year program to establish the value of θ13.' prompt2 = 'The paper covers detailed discussion on novel control system developed for adaptive fluid-based shock-absorbers serving for mitigation of unknown impact excitations. In order to provide complete independence of the control system from the loading conditions, the Hybrid Prediction Control (HPC) was elaborated. The proposed method is an extension of previously introduced kinematic feedback control which ensures optimal path finding, tracking and path update in case of high disturbance or sudden change of loading conditions. Implementation of the presented control system allows to obtain self-adaptive fluid-based absorbers providing robust impact mitigation. In contrast to previously developed methods of Adaptive Impact Absorption, the proposed control strategy does not require prior knowledge of impact excitation or its preliminary identification. The independence of applied control system from parameters of impact loading results in the capability of automatic path correction in the case of disturbance occurrence and re-adaptation to a number of subsequent impacts. The successful operation of the self-adaptive system is investigated with the use of numerical examples involving double-chamber pneumatic shock-absorber equipped with controllable valve. Efficiency of the HPC is proved by comparison with passive absorber as well as device equipped with adaptive and optimal control modules.' prompt3 = 'This study aimed to produce biosurfactant from Pseudozyma tsukubaensis using cassava wastewater and an inoculum (biomass) for galactooligosaccharides synthesis from lactose as an integrated system. First, the use of cassava wastewater as a low cost culture medium by P. tsukubaensis to produce biomass and biosurfactant was evaluated and optimized. Then, the microbial cells (biomass) obtained from the optimized process were used to produce galactooligosaccharides from lactose. The optimum conditions for biosurfactant and biomass synthesis were found to be 80% (v/v) of cassava wastewater at 30°C and 200rpm for 48h. The highest concentration of biosurfactant, that is, minimum surface tension value and maximum biomass concentration predicted were experimentally confirmed as 26.87mN/m and 10.5g/L, respectively. The biosurfactant obtained showed good thermal (121°C/1h), pH (2–11) and ionic strength (0–25% NaCl) stability. Excellent emulsifier activity was also verified, suggesting a potential application in enhanced oil recovery. Galactooligosaccharides synthesized by the Kluyveromyces genus have been extensively investigated, however, few studies have reported transgalactosylation ability by other yeast genera. 
The transgalactosylation activity of the yeast biomass at optimized conditions from 40% (w/w) lactose resulted in galactooligosaccharides production of 73.12g/L and a yield of 18.28% (w/w) at pH 8.0 and 30°C in 24h. This research showed the technical feasibility of an integrated process: biosurfactant and GOS production from P. tsukubaensis, which takes advantage of the remarkable metabolism of this microorganism. To the best of our knowledge, this is the first study reporting the potential of P. tsukubaensis to produce two economical biotechnological products of increase interest as an integrated process.' prompt4 = 'Advantages of a fuzzy predictive control algorithm are discussed in the paper. The fuzzy predictive algorithm is a combination of a DMC (Dynamic Matrix Control) algorithm and Takagi–Sugeno fuzzy modeling, thus it inherits advantages of both techniques. The algorithm is numerically effective. It is in fact generalization of the standard DMC algorithm widely used in the industry, thus the existing implementations of the DMC algorithm can be extended using the presented fuzzy approach. A simple and easy to apply method of fuzzy predictive control algorithms synthesis is presented in the paper. It can be easy applied also in the case of Multiple Input Multiple Output (MIMO) control plants. Moreover, information about measured disturbance can be included in the algorithms in an easy way. The advantages of the fuzzy predictive control algorithm are demonstrated in the example control systems of two nonlinear chemical reactors: the first one—with inverse response and the second one—a MIMO plant with time delay.' batch = [prompt1, prompt2, prompt3, prompt4] tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = tokenizer(batch, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) gpt2 = GPT2LMHeadModel.from_pretrained('gpt2') temperature = 0.92 tmp_input_ids = encoded_results['input_ids'] tmp_attention_mask = encoded_results['attention_mask'] max_gen_length = 30 counter = 0 gen_dict = {'a1': '', 'a2': '', 'a3': '', 'a4': ''} while counter < max_gen_length: outputs = gpt2(input_ids=tmp_input_ids, attention_mask=tmp_attention_mask) # (batch_size, sequence_length, vocab_size) lm_logits_w_temp = outputs[0] / temperature # (batch_size, vocab_size) last_tokens = lm_logits_w_temp[:, -1, :] last_token_softmaxes = torch.softmax(last_tokens, dim=-1).squeeze() next_tokens = torch.multinomial(last_token_softmaxes, num_samples=1) next_strs = [tokenizer.decode(next_token).strip() for next_token in next_tokens] prev_input_strs = [tokenizer.decode(id_tensor, skip_special_tokens=True) for id_tensor in tmp_input_ids] prev_split_list = [prev_input_str.split() for prev_input_str in prev_input_strs] gen_dict['a1'] += next_strs[0] + ' ' gen_dict['a2'] += next_strs[1] + ' ' gen_dict['a3'] += next_strs[2] + ' ' gen_dict['a4'] += next_strs[3] + ' ' str_list_to_join = [] for ii, prev_split2 in enumerate(prev_split_list): next_str = next_strs[ii] tmp_prev = prev_split2 tmp_prev.append(next_str) str_list_to_join.append(tmp_prev) next_inputs = [' '.join(str_to_join) for str_to_join in str_list_to_join] if counter == max_gen_length - 1: final_str_batch = next_inputs else: new_encoded_results = tokenizer(next_inputs, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) tmp_input_ids = new_encoded_results['input_ids'] tmp_attention_mask = new_encoded_results['attention_mask'] counter += 
1 print('Generated by GPT2:') for k, v in gen_dict.items(): print('{}: {}'.format(k, v)) print('\nNew abstracts (old+generated):') for final_str in final_str_batch: print(final_str) Question I was wondering if there were ways to make GPT2 generate better tokens? In my code, I’m using temperature set to an arbitrary value. Here are the printed results: Generated by GPT2: a1: About Ag J An The What SHARE You ... Ge M By £ What May SC " Ex ia The Still End Turkey The Hi A Late Army ________ Here a2: The P In This The This Google [ Five M You Uber Sit Re In Story So The Super Marvel Yet Jul Get An A There What L The Ru a3: Our bro il ousing method did not result in exclusion of PCR components from trans g alling . Repe ating to horizontal prep ot of PCR with b ic ulations a4: From Beaut A Mid F 2 Donald The Mel From Come T The By Act IF The Pin It Whenever This This Top The " ​ I It But Who New abstracts (old+generated): We present an update on the results of the Double Chooz experiment. Double Chooz searches for the neutrino mixing angle, θ13, in the three-neutrino mixing matrix via the disappearance of produced by the dual 4.27 GW/th Chooz B Reactors. Here we discuss updated oscillation fit results using both the rate and the shape of the anti-neutrino energy spectrum. In the most recent oscillation analysis we included data with neutron captures on Gadolinium and Hydrogen along with the reactor off data that we collected. This is an important step in our multi-year program to establish the value of θ13. About Ag J An The What SHARE You... Ge M By £ What May SC " Ex ia The Still End Turkey The Hi A Late Army ________ Here The paper covers detailed discussion on novel control system developed for adaptive fluid-based shock-absorbers serving for mitigation of unknown impact excitations. In order to provide complete independence of the control system from the loading conditions, the Hybrid Prediction Control (HPC) was elaborated. The proposed method is an extension of previously introduced kinematic feedback control which ensures optimal path finding, tracking and path update in case of high disturbance or sudden change of loading conditions. Implementation of the presented control system allows to obtain self-adaptive fluid-based absorbers providing robust impact mitigation. In contrast to previously developed methods of Adaptive Impact Absorption, the proposed control strategy does not require prior knowledge of impact excitation or its preliminary identification. The independence of applied control system from parameters of impact loading results in the capability of automatic path correction in the case of disturbance occurrence and re-adaptation to a number of subsequent impacts. The successful operation of the self-adaptive system is investigated with the use of numerical examples involving double-chamber pneumatic shock-absorber equipped with controllable valve. Efficiency of the HPC is proved by comparison with passive absorber as well as device equipped with adaptive and optimal control modules. The P In This The This Google [ Five M You Uber Sit Re In Story So The Super Marvel Yet Jul Get An A There What L The Ru This study aimed to produce biosurfactant from Pseudozyma tsukubaensis using cassava wastewater and an inoculum (biomass) for galactooligosaccharides synthesis from lactose as an integrated system. First, the use of cassava wastewater as a low cost culture medium by P. tsukubaensis to produce biomass and biosurfactant was evaluated and optimized. 
Then, the microbial cells (biomass) obtained from the optimized process were used to produce galactooligosaccharides from lactose. The optimum conditions for biosurfactant and biomass synthesis were found to be 80% (v/v) of cassava wastewater at 30°C and 200rpm for 48h. The highest concentration of biosurfactant, that is, minimum surface tension value and maximum biomass concentration predicted were experimentally confirmed as 26.87mN/m and 10.5g/L, respectively. The biosurfactant obtained showed good thermal (121°C/1h), pH (2–11) and ionic strength (0–25% NaCl) stability. Excellent emulsifier activity was also verified, suggesting a potential application in enhanced oil recovery. Galactooligosaccharides synthesized by the Kluyveromyces genus have been extensively investigated, however, few studies have reported transgalactosylation ability by other yeast genera. The transgalactosylation activity of the yeast biomass at optimized conditions from 40% (w/w) lactose resulted in galactooligosaccharides production of 73.12g/L and a yield of 18.28% (w/w) at pH 8.0 and 30°C in 24h. This research showed the technical feasibility of an integrated process: biosurfactant and GOS production from P. tsukubaensis, which takes advantage of the remarkable metabolism of this microorganism. To the best of our knowledge, this is the first study reporting the potential of P. tsukubaensis to produce two economical biotechnological products of increase interest as an integrated process. Our bro il ousing method did not result in exclusion of PCR components from trans g alling. Repe ating to horizontal prep ot of PCR with b ic ulations Advantages of a fuzzy predictive control algorithm are discussed in the paper. The fuzzy predictive algorithm is a combination of a DMC (Dynamic Matrix Control) algorithm and Takagi–Sugeno fuzzy modeling, thus it inherits advantages of both techniques. The algorithm is numerically effective. It is in fact generalization of the standard DMC algorithm widely used in the industry, thus the existing implementations of the DMC algorithm can be extended using the presented fuzzy approach. A simple and easy to apply method of fuzzy predictive control algorithms synthesis is presented in the paper. It can be easy applied also in the case of Multiple Input Multiple Output (MIMO) control plants. Moreover, information about measured disturbance can be included in the algorithms in an easy way. The advantages of the fuzzy predictive control algorithm are demonstrated in the example control systems of two nonlinear chemical reactors: the first one—with inverse response and the second one—a MIMO plant with time delay. From Beaut A Mid F 2 Donald The Mel From Come T The By Act IF The Pin It Whenever This This Top The " ​ I It But Who Here are the same inputs, but using a greedy search (next_tokens = torch.argmax(last_token_softmaxes, dim=-1).tolist()): Generated by GPT2: a1: The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The a2: The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The a3:   The results of this study are in agreement with the results of previous studies , which have shown that the bios ur fact ant if the a4: The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The New abstracts (old+generated): We present an update on the results of the Double Chooz experiment. 
Double Chooz searches for the neutrino mixing angle, θ13, in the three-neutrino mixing matrix via the disappearance of produced by the dual 4.27 GW/th Chooz B Reactors. Here we discuss updated oscillation fit results using both the rate and the shape of the anti-neutrino energy spectrum. In the most recent oscillation analysis we included data with neutron captures on Gadolinium and Hydrogen along with the reactor off data that we collected. This is an important step in our multi-year program to establish the value of θ13. The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The paper covers detailed discussion on novel control system developed for adaptive fluid-based shock-absorbers serving for mitigation of unknown impact excitations. In order to provide complete independence of the control system from the loading conditions, the Hybrid Prediction Control (HPC) was elaborated. The proposed method is an extension of previously introduced kinematic feedback control which ensures optimal path finding, tracking and path update in case of high disturbance or sudden change of loading conditions. Implementation of the presented control system allows to obtain self-adaptive fluid-based absorbers providing robust impact mitigation. In contrast to previously developed methods of Adaptive Impact Absorption, the proposed control strategy does not require prior knowledge of impact excitation or its preliminary identification. The independence of applied control system from parameters of impact loading results in the capability of automatic path correction in the case of disturbance occurrence and re-adaptation to a number of subsequent impacts. The successful operation of the self-adaptive system is investigated with the use of numerical examples involving double-chamber pneumatic shock-absorber equipped with controllable valve. Efficiency of the HPC is proved by comparison with passive absorber as well as device equipped with adaptive and optimal control modules. The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The This study aimed to produce biosurfactant from Pseudozyma tsukubaensis using cassava wastewater and an inoculum (biomass) for galactooligosaccharides synthesis from lactose as an integrated system. First, the use of cassava wastewater as a low cost culture medium by P. tsukubaensis to produce biomass and biosurfactant was evaluated and optimized. Then, the microbial cells (biomass) obtained from the optimized process were used to produce galactooligosaccharides from lactose. The optimum conditions for biosurfactant and biomass synthesis were found to be 80% (v/v) of cassava wastewater at 30°C and 200rpm for 48h. The highest concentration of biosurfactant, that is, minimum surface tension value and maximum biomass concentration predicted were experimentally confirmed as 26.87mN/m and 10.5g/L, respectively. The biosurfactant obtained showed good thermal (121°C/1h), pH (2–11) and ionic strength (0–25% NaCl) stability. Excellent emulsifier activity was also verified, suggesting a potential application in enhanced oil recovery. Galactooligosaccharides synthesized by the Kluyveromyces genus have been extensively investigated, however, few studies have reported transgalactosylation ability by other yeast genera. 
The transgalactosylation activity of the yeast biomass at optimized conditions from 40% (w/w) lactose resulted in galactooligosaccharides production of 73.12g/L and a yield of 18.28% (w/w) at pH 8.0 and 30°C in 24h. This research showed the technical feasibility of an integrated process: biosurfactant and GOS production from P. tsukubaensis, which takes advantage of the remarkable metabolism of this microorganism. To the best of our knowledge, this is the first study reporting the potential of P. tsukubaensis to produce two economical biotechnological products of increase interest as an integrated process.   The results of this study are in agreement with the results of previous studies, which have shown that the bios ur fact ant if the Advantages of a fuzzy predictive control algorithm are discussed in the paper. The fuzzy predictive algorithm is a combination of a DMC (Dynamic Matrix Control) algorithm and Takagi–Sugeno fuzzy modeling, thus it inherits advantages of both techniques. The algorithm is numerically effective. It is in fact generalization of the standard DMC algorithm widely used in the industry, thus the existing implementations of the DMC algorithm can be extended using the presented fuzzy approach. A simple and easy to apply method of fuzzy predictive control algorithms synthesis is presented in the paper. It can be easy applied also in the case of Multiple Input Multiple Output (MIMO) control plants. Moreover, information about measured disturbance can be included in the algorithms in an easy way. The advantages of the fuzzy predictive control algorithm are demonstrated in the example control systems of two nonlinear chemical reactors: the first one—with inverse response and the second one—a MIMO plant with time delay. The The The The The The The The The The The The The The The The The The The The The The The The The The The The The The As can be seen the torch.multinomial approach produces better tokens than the torch.argmax approach. Yet, the better tokens still don’t make a whole lot of sense. Maybe the generate method from GenerationMixin works better? I also suppose that the prompts could be too out of vocab, seeing as they are scientific article abstracts. But maybe not? Thanks in advance for your help!
The guide How to generate text: using different decoding methods for language generation with Transformers 19 from @patrickvonplaten helped me understand different generation strategies.
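Building on that guide, here is a minimal sketch of sampling-based generation with generate(); the prompt and the exact decoding parameters are illustrative choices, not values tuned for the abstracts above:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "We present an update on the results of the Double Chooz experiment."
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=inputs["input_ids"].shape[1] + 30,   # generate roughly 30 new tokens
    do_sample=True,                                 # sample instead of greedy argmax
    top_k=50, top_p=0.95, temperature=0.7,          # restrict sampling to likely tokens
    no_repeat_ngram_size=2,                         # discourages the "The The The ..." degeneration
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Top-k/top-p sampling with a repetition constraint usually avoids both the incoherent multinomial output and the repetitive greedy output shown above.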
0
huggingface
Beginners
How to train a translation model from scratch
https://discuss.huggingface.co/t/how-to-train-a-translation-model-from-scratch/160
I've recently been working on text punctuation restoration, a problem where you have some text with missing punctuation and you want to add it back. Reading some papers, it seems one of the best approaches is to use Transformers as if you were doing a translation, from a language with no punctuation to one that has it. I am trying to use Hugging Face transformers, but I've been struggling to find good resources on how to train a translation network from scratch. Most of the documentation is related to other tasks, and when it comes to translation I've found only docs that explain how to use pre-trained models. Could someone point me in the right direction? I'm still trying to test a naïve approach where I give my model some text without punctuation and it predicts a new sentence where, for each input token, it predicts whether the token is preceded by some punctuation or not. Sorry if there's somewhere really obvious in the documentation I didn't look at. Thanks for reading this and I wish you a wonderful day!
I don’t think we have a really good example for seq2seq training from scratch right now, but you can take a look at the examples/seq2seq folder which implements fine-tuning, and adapt. This is using pytorch-lightning. @sshleifer can also chime in.
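To make the "translation without punctuation → with punctuation" framing concrete, here is a hedged sketch of building a small seq2seq model from a config and computing a loss on (source, target) pairs. This is not the examples/seq2seq script itself; the tokenizer and config values are placeholders (in practice you would likely train your own tokenizer on your corpus):

from transformers import BartConfig, BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")   # placeholder tokenizer

config = BartConfig(
    vocab_size=tokenizer.vocab_size,
    d_model=256, encoder_layers=3, decoder_layers=3,
    encoder_attention_heads=4, decoder_attention_heads=4,
)
model = BartForConditionalGeneration(config)   # randomly initialized, i.e. trained "from scratch"

src_texts = ["hello how are you doing today"]       # no punctuation
tgt_texts = ["Hello, how are you doing today?"]     # punctuation restored

enc = tokenizer(src_texts, padding=True, truncation=True, return_tensors="pt")
labels = tokenizer(tgt_texts, padding=True, truncation=True, return_tensors="pt").input_ids
labels[labels == tokenizer.pad_token_id] = -100     # ignore padding positions in the loss

outputs = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
loss = outputs[0]   # back-propagate this in your training loop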
0
huggingface
Beginners
Is it possible to use a from_pretrained() method to respawn model and tokenizer from my own s3 bucket?
https://discuss.huggingface.co/t/is-it-possible-to-use-a-from-pretrained-method-to-respawn-model-and-tokenizer-from-my-own-s3-bucket/250
I saved a DistilBertModel and a tokenizer with the help of the save_pretrained() method. Now, when I load them locally using from_pretrained('/path_to_distilbert_model') everything works fine and as intended. I need these models to be loaded from my own s3 bucket. But when I try to load them using TFDistilBertModel.from_pretrained('s3://my_bucket/path_to_distilbert_model') there is an error stating that the model configuration file cannot be found, or that 'vocab.txt' cannot be found if I use DistilBertTokenizer.from_pretrained(). Is it possible to load a model from my own s3 bucket, and what is the suggested way to do that? Thanks
Looking at the source code 26 (huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L613-L645), I do not think it is possible to load models from external URLs other than the HuggingFace bucket:

if pretrained_model_name_or_path is not None:
    if os.path.isdir(pretrained_model_name_or_path):
        if from_tf and os.path.isfile(os.path.join(pretrained_model_name_or_path, TF_WEIGHTS_NAME + ".index")):
            # Load from a TF 1.0 checkpoint
            archive_file = os.path.join(pretrained_model_name_or_path, TF_WEIGHTS_NAME + ".index")
        elif from_tf and os.path.isfile(os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME)):
            # Load from a TF 2.0 checkpoint
            archive_file = os.path.join(pretrained_model_name_or_path, TF2_WEIGHTS_NAME)
        elif os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)):
            # Load from a PyTorch checkpoint
            archive_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)
        else:
            raise EnvironmentError(
                "Error no file named {} found in directory {} or `from_tf` set to False".format(
                    [WEIGHTS_NAME, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME + ".index"],
                    pretrained_model_name_or_path,
                )
            )
    elif os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
        archive_file = pretrained_model_name_or_path

That being said, why don't you upload your model 16 to the HuggingFace model hub?
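If you do need to keep the weights in your own bucket, one workaround (not mentioned in the reply above) is to copy the saved files to local disk first and then point from_pretrained() at that directory. A minimal sketch with boto3; the bucket name and prefix are hypothetical, not taken from the thread:

import os
import boto3
from transformers import TFDistilBertModel, DistilBertTokenizer

s3 = boto3.client("s3")
bucket = "my_bucket"                    # assumption: your bucket name
prefix = "path_to_distilbert_model"     # assumption: folder created by save_pretrained()
local_dir = "distilbert_local"
os.makedirs(local_dir, exist_ok=True)

# download every file of the saved model (config.json, weights, vocab.txt, ...)
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", []):
    filename = os.path.basename(obj["Key"])
    if filename:
        s3.download_file(bucket, obj["Key"], os.path.join(local_dir, filename))

model = TFDistilBertModel.from_pretrained(local_dir)
tokenizer = DistilBertTokenizer.from_pretrained(local_dir)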
0
huggingface
Beginners
Failing to format sentiment140 for Trainer
https://discuss.huggingface.co/t/failing-to-format-sentiment140-for-trainer/433
Trying out one of the examples of using nlp with the Trainer class. But can’t seem to get the data formatted correctly or there is a bug. Any ideas? from transformers import DistilBertForSequenceClassification, DistilBertTokenizerFast, Trainer, TrainingArguments from nlp import load_dataset import torch import numpy as np from sklearn.metrics import accuracy_score, precision_recall_fscore_support model_dir = r"distilbert-base-uncased" dataset_name = r"sentiment140" def tokenize(batch): return tokenizer(batch['text'], padding=True, truncation=True, max_length=140) def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary') acc = accuracy_score(labels, preds) return { 'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall } print("Loading data") train_dataset, test_dataset = load_dataset(dataset_name, split=['train', 'test']) train_dataset = train_dataset.map(tokenize, batched=True, batch_size=1000) test_dataset = test_dataset.map(tokenize, batched=True, batch_size=1000) train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'sentiment']) test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'sentiment']) model = DistilBertForSequenceClassification.from_pretrained(model_dir) tokenizer = DistilBertTokenizerFast.from_pretrained(model_dir) print("Loading data") train_dataset, test_dataset = load_dataset(dataset_name, split=['train', 'test']) train_dataset = train_dataset.map(tokenize, batched=True, batch_size=1000) test_dataset = test_dataset.map(tokenize, batched=True, batch_size=1000) train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'sentiment']) test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'sentiment']) print("Loading Trainer") training_args = TrainingArguments( output_dir='./results', num_train_epochs=1, per_device_train_batch_size=64, per_device_eval_batch_size=64, warmup_steps=500, weight_decay=0.01, evaluate_during_training=True, logging_dir='./logs', ) trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=test_dataset ) trainer.train() >>> RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 96 and 61 in dimension 1 at C:\w\1\s\tmp_conda_3.7_100118\conda\conda-bld\pytorch_1579082551706\work\aten\src\TH/generic/THTensor.cpp:612 versions: transformers 3.0.2 pypi_0 pypi nlp 0.3.0 pypi_0 pypi
Hi @swayson, the reason is that you have set padding to True and your batch size for data processing is 1000. When padding is True, each batch is padded to the longest sequence in that batch, so the maximum sequence length is not the same across batches, while your training batch size (64 per device) draws examples whose padded lengths differ. You can set padding to 'max_length' to pad every example to the length specified in max_length, or to the maximum acceptable input length for the model if no length is provided.
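A minimal sketch of that fix, reusing the tokenize function from the question (the max_length of 140 is kept from the original code):

def tokenize(batch):
    # pad every example to the same fixed length so batches built at training time line up
    return tokenizer(batch['text'], padding='max_length', truncation=True, max_length=140)

train_dataset = train_dataset.map(tokenize, batched=True, batch_size=1000)
test_dataset = test_dataset.map(tokenize, batched=True, batch_size=1000)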
0
huggingface
Beginners
Can’t load weights for ‘hfl/chinese-roberta-wwm-ext-large’.
https://discuss.huggingface.co/t/cant-load-weights-for-hfl-chinese-roberta-wwm-ext-large/389
(Screenshots in the original post show the attempts to load 'hfl/chinese-roberta-wwm-ext-large' with a TensorFlow model class and the resulting error.)
Hi @wwwwww931121, this model only has PyTorch weights; to load it in TF you'll need to set from_pt to True: TFBertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large", from_pt=True)
0
huggingface
Beginners
GPT2LMHeadModel.from_pretrained(‘gpt2’) not loading attn weights
https://discuss.huggingface.co/t/gpt2lmheadmodel-from-pretrained-gpt2-not-loading-attn-weights/432
I am following the examples here 2. Specifically, this one:

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss, logits = outputs[:2]

When I run the code, I get this warning:

Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

I am wondering if I should maybe point to some specific file to have the weights loaded? BTW I see some friendly faces on these forums! Hi @sshleifer and @sgugger!
Hi @radek, good to see you here! This is a known issue that is currently being addressed by #5922 91 (under review). You can safely discard the warning.
0
huggingface
Beginners
Loading pretrained weights into model for sequence classifcation
https://discuss.huggingface.co/t/loading-pretrained-weights-into-model-for-sequence-classifcation/421
Hi there, I am a bit confused as to how saving models works. I have been pretraining with an ElectraForPreTraining model and have saved it using save_pretrained(). However, I am now interested in fine-tuning the model on a sequence classification task, so I figure I would have to load my pretrained weights into an ElectraForSequenceClassification model, except the ElectraForPreTraining model has a different head on the discriminator than the ElectraForSequenceClassification and I am not sure how to handle that correctly. Does the from_pretrained() method handle this out of the box and if not what would I have to do?
Hi @rsvarma the .from_pretrained method handles this. You can do ElectraForSequenceClassification.from_pretrained("path to your pre-trained electra model")
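For example (the path and number of labels are placeholders for your own setup):

from transformers import ElectraForSequenceClassification

model = ElectraForSequenceClassification.from_pretrained(
    "path/to/your-pretrained-electra",   # directory created by save_pretrained()
    num_labels=2,                        # set to the number of classes in your task
)
# The shared Electra encoder weights are loaded from your checkpoint; the classification head
# is newly initialized, so a warning about newly initialized weights is expected and the model
# should then be fine-tuned on your labelled data.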
0
huggingface
Beginners
How to do sequence fine tuning?
https://discuss.huggingface.co/t/how-to-do-sequence-fine-tuning/419
I have been training a BERT model on a large unsupervised dataset, and now I wish to fine tune the model on a small labelled dataset, but I can’t quite grasp how to do this conceptually, and I’m hoping some of you can help me out. When doing the unsupervised training/self-training, everything seems fine, and I think I understand it. In this case, my network is a standard BERT, with a linear layer on top that takes the standard 768 hidden features in BERT down to 30, which is my vocab_size. (I’m training on gene sequences, so basically one sample in my dataset, looks like this: ASDGDFASGDFSGSDASFASDAUYRES where I do the standard thing of masking out some of the letters and trying to predict them. So for standard training, my setup looks like this: predicted_sequence=bert(input_sequence,masked_input_sequence) loss = crossentropy(predicted_sequence,masked_input_sequence) However when I now want to switch to fine-tuning I’m not really sure what to do. In this case my dataset now both consist of a gene sequence and a label sequence: DFASDGFTHGFDDFSDASFDASF , 00000001111111100000000022222 How do I change my network such that I can now fine-train it to predict these new labels?, do I still use bert(input,masked_input_sequence)? Do I remove the linear layer on top of BERT? or what is the conceptual idea here?
I found this explanation by one of the original authors great to get a conceptual understanding. Day 4: Focused Lecture - Transformers with Lukasz Kaiser
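Since the linked lecture is conceptual, here is a hedged sketch (not from the reply above) of one common way to frame "one label per position": keep your pretrained BERT body and put a token-classification head on top, which replaces the vocabulary-sized MLM head with a num_labels-sized one. The checkpoint path and the toy tensors are assumptions standing in for your own data:

import torch
from transformers import BertForTokenClassification

# assumption: "path/to/your-gene-bert" is the checkpoint saved after your unsupervised pre-training
model = BertForTokenClassification.from_pretrained("path/to/your-gene-bert", num_labels=3)

input_ids = torch.randint(0, 30, (2, 16))     # toy batch; real ids come from your existing tokenizer
attention_mask = torch.ones_like(input_ids)
labels = torch.randint(0, 3, (2, 16))         # one class id per position; use -100 for positions to ignore

outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
loss, logits = outputs[:2]                    # logits: (batch_size, seq_len, num_labels)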
0
huggingface
Beginners
Training DistilGPT2, running into import error
https://discuss.huggingface.co/t/training-distilgpt2-running-into-import-error/396
Hello! I am trying to train distilgpt2 with my own data on colab. The problem is, even after installing the packages from requirements.txt, it is running into an import error. I am reproducing the entire error below.

Traceback (most recent call last):
  File "train.py", line 28, in <module>
    from distiller import Distiller
  File "/content/transformers/examples/distillation/distiller.py", line 31, in <module>
    from grouped_batch_sampler import GroupedBatchSampler, create_lengths_groups
  File "/content/transformers/examples/distillation/grouped_batch_sampler.py", line 24, in <module>
    from utils import logger
  File "/content/transformers/examples/distillation/utils.py", line 23, in <module>
    import git
  File "/usr/local/lib/python3.6/dist-packages/git/__init__.py", line 38, in <module>
    from git.exc import *  # @NoMove @IgnorePep8
  File "/usr/local/lib/python3.6/dist-packages/git/exc.py", line 9, in <module>
    from git.compat import UnicodeMixin, safe_decode, string_types
  File "/usr/local/lib/python3.6/dist-packages/git/compat.py", line 16, in <module>
    from gitdb.utils.compat import (
ModuleNotFoundError: No module named 'gitdb.utils.compat'

Any help is appreciated. Is this because of an improper version of python? NB: I have restarted the runtime after installing packages from requirements.txt, but still the error persists. Thank you for your time!
Are you sure you ran pip install -r requirements.txt in the distillation subfolder of the repo? It looks like you are missing some required packages.
0
huggingface
Beginners
Type of dataset in Trainer class
https://discuss.huggingface.co/t/type-of-dataset-in-trainer-class/364
Hi, I was going through the documentation and got confused by this:

trainer = Trainer(
    model=model,                  # the instantiated Transformers model to be trained
    args=training_args,           # training arguments, defined above
    train_dataset=train_dataset,  # training dataset
    eval_dataset=test_dataset     # evaluation dataset
)

I couldn't understand what the type of train_dataset is and how the target for loss calculation is selected. In "Fine-tuning in native TensorFlow 2" there is also no target value. Am I missing something?

model.fit(train_dataset, epochs=2, steps_per_epoch=115)

Thank you
For more context, he/she is talking about this page: https://huggingface.co/transformers/training.html 11 I also got confused by this bit of the documentation, but I think this code expects datasets like the ones provided by Hugging Face’s NLP package 9. I think they are all based on Pytorch’s Dataset Class 6, but I could be mistaken. Try to use one of the datasets provided by their NLP package and check if it works correctly. Hope this helps!
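For reference, a minimal sketch of the kind of dataset the Trainer accepts: any torch Dataset whose items are dicts of tensors, where the "labels" key is what the model uses to compute the loss (which is why there is no separate target argument). This is an illustration, not the thread's verified answer:

import torch

class SimpleDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings   # e.g. tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])   # the Trainer passes this to the model as `labels`
        return item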
0
huggingface
Beginners
Padding strategy for classification
https://discuss.huggingface.co/t/padding-strategy-for-classification/310
Hello everyone, I am working on multiclass text classification, currently using XLM-Roberta as a classifier. I have a doubt concerning padding strategies. My first intuition was to tokenize my training and validation sets separately (as they were two distinct super-batches) using padding = True; the result of this was having all training examples padded to length l1 (the length of the longest sequence in the training set), and validation examples padded to a different l2. An alternative approach (and the one that seems to be used in the GlueDataset and related methods) is to use padding = max_length, and thus have all examples padded to the same provided length (possibly, 512, which is the maximum sequence length allowed for this model). Would you mind sharing your thoughts on what strategy might work best and makes more sense from a “theoretical” point of view? Thank you very much!
Hi Isabella, my understanding is that so long as you have your padding mask correctly implemented, the model will not "pay attention" to the pad tokens, and so the predictions should be consistent across models (regardless of padding length). If you do not use a padding mask then the predictions could differ, because the attention weights for the pad tokens will have some effect on your predicted outcome. By having an effect, I mean that the pad tokens contribute to the attention score, and therefore to the loss as a result. In terms of time complexity, my understanding is somewhat murkier. I suspect that the first method you suggest is faster, but it depends on the implementation. If we calculate all attention scores and set those relating to padding to zero, then the max_length version could be much slower. However, I think it is reasonable to assume that the implementation performs a check first, i.e. "should I calculate attention here?", which would be only slightly slower than the l1, l2 padding. This point is open to correction! I could be incorrect in my approach, but I prefer to pad to max_length for the sheer convenience of it. In the l1, l2 approach, for example, methods such as cross validation require repeating the length calculation.
0
huggingface
Beginners
[Bart] Question for BartModel Output shape
https://discuss.huggingface.co/t/bart-question-for-bartmodel-output-shape/380
Hi, I'm trying to fine-tune multilingual BART with Korean to generate some texts. When I pass my data into the model, I can't understand why the output shape of the model is different from what I expected. Settings: I used MBartTokenizer and BartForConditionalGeneration; for batching, I used prepare_translation_batch to build batches of input_ids and target_ids. I also need decoder_input_ids (of the form [tgt_lang_code, sequence, eos]), so I built them myself. Here's the problem. BartForConditionalGeneration's forward pass necessarily needs input_ids. From the Bart docs, if I pass only the input_ids to the model (possibly with attention_mask), the model's decoder doesn't have its own input, so it takes the input_ids as its input. In that case the shape of the returned prediction_scores should be (batch_size, seq_len, vocab_size), and that worked correctly. But when I pass input_ids and decoder_input_ids together to the model, the shape of prediction_scores is (batch_size, 1, vocab_size) every time. I think when I pass the inputs together, the shape of prediction_scores should be (batch_size, decoder_input_seq_len, vocab_size). I don't know why this happens. Maybe I misunderstood the model entirely. The reason I made this topic is that I need a clear view of this problem. Any advice would be appreciated. Thank you. (A screenshot of the code was attached in the original post.)
pass use_cache=False to forward. Confusing, I know.
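A minimal sketch of what that looks like with the tensors from the question (the variable names follow the post and are assumptions, not a verified script):

outputs = model(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    decoder_input_ids=decoder_input_ids,
    use_cache=False,   # without this, only the scores for the last decoder position are returned
)
prediction_scores = outputs[0]   # (batch_size, decoder_input_seq_len, vocab_size)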
0
huggingface
Beginners
Continue training XLNet on a specific closed-domain dataset
https://discuss.huggingface.co/t/continue-training-xlnet-on-a-specific-closed-domain-dataset/335
I'm wondering what to do if we want to leverage the already pre-trained XLNet model (and its language knowledge) and fine-tune it on a specific closed-domain dataset, say the legal domain for example. I already have the corpora; I'm just missing how to do this with XLNet-like models. Any thoughts on how to do that?
Hi @krannnN, you can use the run_language_modeling script to fine-tune XLNet. You can find it here 12. You'll just need to provide the dataset in the required format.
0
huggingface
Beginners
How to get probability of the first generated token?
https://discuss.huggingface.co/t/how-to-get-probability-of-the-first-generated-token/321
When doing conditional generation with (m)BART how can I get the probability of the first generated token? I would like to use it as confidence to filter my results. (I generate really short summaries which are really extracted answers from text). The generate method only generates token ids and I could not find out which number to use from the raw model output (after softmax on dim=-1). Also why does it have dimensions like batch_size x 638 x 250027? 250027 is the vocabulary size I guess but what is 638? I thought it should be max_source_length which is 1024 (assuming max output length equals max input length).
Hi marton, what you have got is called prediction_scores (as you can see in the doc 23); it has shape (batch_size, sequence_length, vocab_size). 638 is the actual sequence length: sentences are usually padded only to the length of the longest sequence in the batch, not to the model maximum. To turn the scores into predicted tokens and their probabilities you can do:

probs = prediction_scores.softmax(dim=-1)                       # (batch_size, sequence_length, vocab_size)
predicts = probs.argmax(dim=-1)                                 # (batch_size, sequence_length), dtype=torch.long
scores = probs.gather(-1, predicts.unsqueeze(-1)).squeeze(-1)   # (batch_size, sequence_length), dtype=torch.float

scores now holds the probability of every predicted token; if you just want the first token of every sentence, take the first position along the sequence dimension.
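If you generate with model.generate(), recent versions of transformers can also return the per-step scores directly. This relies on the return_dict_in_generate and output_scores arguments, which may not exist in older releases, so treat it as a version-dependent sketch:

import torch

generated = model.generate(
    batch["input_ids"],
    num_beams=1, do_sample=False,     # greedy decoding keeps the scores easy to interpret
    return_dict_in_generate=True,
    output_scores=True,
)
first_step_probs = torch.softmax(generated.scores[0], dim=-1)   # distribution over the first generated token
first_token_ids = generated.sequences[:, 1]                     # position 0 is the decoder start token
confidence = first_step_probs.gather(1, first_token_ids.unsqueeze(-1)).squeeze(-1)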
0
huggingface
Beginners
Using conda to install huggingface
https://discuss.huggingface.co/t/using-conda-to-install-huggingface/240
Hello! Very basic question - is there an official way to install huggingface using conda or does anybody have any insight into whether this is on the to-do list? Thanks!
Not yet, but it should be possible in the mid-term. The main limitation right now is that SentencePiece doesn't like conda, but @anthony is working on getting SentencePiece support into our tokenizers library, so we should be able to have a conda install as well when this is finished (might still take a little bit of time though, it's a big chunk of work).
0
huggingface
Beginners
Are WikinewsSum models for text summarization?
https://discuss.huggingface.co/t/are-wikinewssum-models-for-text-summarization/305
Can anyone tell me if the following models can be used for text summarization?
WikinewsSum/t5-base-multi-en-wiki-news
WikinewsSum/t5-base-multi-combine-wiki-news
WikinewsSum/t5-base-with-title-multi-en-wiki-news
WikinewsSum/bart-large-cnn-multi-en-wiki-news
WikinewsSum/bart-large-multi-en-wiki-news
Thank you, Efstathios PS: @valhalla, if you know, I would appreciate it.
I’m not sure, you can try it and verify
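A quick way to try one of them and verify, sketched with the summarization pipeline; the model id is taken from the question and this snippet has not been run against it:

from transformers import pipeline

summarizer = pipeline("summarization", model="WikinewsSum/bart-large-cnn-multi-en-wiki-news")
article = "Some long news article text ..."   # assumption: replace with your own text
print(summarizer(article, max_length=80, min_length=20))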
0
huggingface
Beginners
Which of the sshleifer/* models can be used as-is for text summarization?
https://discuss.huggingface.co/t/which-of-the-sshleifer-models-can-be-used-as-is-for-text-summarization/296
Hi @sshleifer, I would like to ask which of the following models can be used for text summarization:
• sshleifer/distilbart-cnn-12-6
• sshleifer/distilbart-xsum-12-6
• sshleifer/distilbart-cnn-6-6
• sshleifer/distilbart-xsum-12-3
• sshleifer/distilbart-xsum-9-6
• sshleifer/distilbart-xsum-12-1
• sshleifer/distilbart-xsum-6-6
• sshleifer/distilbart-xsum-1-1
• sshleifer/distilbart-cnn-12-3
I would also like to know about:
• sshleifer/mbart-large-cc25
Thank you! Efstathios
You can use any of these models as-is. Use the cnn checkpoints if you want longer summaries, or the xsum checkpoints if you want short summaries. You can also use facebook/bart-large-cnn and facebook/bart-large-xsum.
0
huggingface
Beginners
Further pre-train roberta model
https://discuss.huggingface.co/t/further-pre-train-roberta-model/271
I have gone through this code train from scratch 17 and understood how to pre-train a model from scratch. I have the following doubts about this code: What does block_size in LineByLineTextDataset represent? If I want to further pretrain the roberta-base model (instead of training from scratch) using my own corpus, what changes do I have to make in the above code besides the following?

from transformers import RobertaForMaskedLM, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

I am aware that I do not need to train the tokenizer from scratch. @thomwolf @julien-c
Hi @mr-nlp, I think you can use the same run_language_modeling script to further pre-train RoBERTa; just provide your own dataset. block_size is used as the max_length, i.e. the maximum sequence length when the text file is split into training examples.
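A hedged sketch of doing the further pre-training with the Trainer instead of the script; the file name and hyperparameters are placeholders for your own setup:

from transformers import (RobertaForMaskedLM, RobertaTokenizerFast, LineByLineTextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")      # start from the pretrained weights

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="my_corpus.txt", block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = TrainingArguments(output_dir="./roberta-further-pretrained",
                                  num_train_epochs=1,
                                  per_device_train_batch_size=8)
trainer = Trainer(model=model, args=training_args,
                  data_collator=data_collator, train_dataset=dataset)
trainer.train()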
0
huggingface
Beginners
NLP model for tag generation
https://discuss.huggingface.co/t/nlp-model-for-tag-generation/179
Hi all, I’ve recently started into the NLP world, so this might seem like an obvious question but are there models to generate tags from text? E.g. lets say you have a corpus of text describing houses and you would like to retrieve the house type, size and color. The goal would be to get an output like this for every house description {house_type: ‘apartment’, size: ‘1000sqft’, color: ‘grey’} It seems that question answering model are able to understand that information (or at least answer to one of the questions at a time), but I’m not sure if there’s a way to adapt these for the above task. It seems to me that it could be pretty useful to train other models (e.g. vision) in a self-supervised way (e.g. get the house type size and color from a house photo using the output of the model above as target variable) Thanks! Elliot
Hi Elliot, if this information is explicitly available in the text then you can try to achieve this with a QA model, asking questions like "What is the house type?", "What's the color of the house?" etc. There are other methods for this type of semantic parsing task, but one way you can approach it is with a text2text approach using T5 (it's a seq-to-seq model where you can feed in some text and ask the model to output some text). I.e. given your text, you can train T5 to output structured text, something like house_type: apartment <sep> color: grey <sep> house_size: 1000. This might be overkill, but I tried this as an experiment in my work and so far it's doing really well. One other approach would be to frame this as an entity extraction task; your entities would be house_type, color and size. Something like spaCy could really help. If you are new to entity extraction, see this demo 9 to get an idea.
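A hedged sketch of the T5 text2text framing described above; the model size, the prompt prefix and the <sep> convention are illustrative choices, not a tested recipe:

from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

source = "parse house: Bright grey apartment of about 1000 sqft close to the station."
target = "house_type: apartment <sep> color: grey <sep> house_size: 1000"

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids

outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
loss = outputs[0]   # minimize this over many (source, target) pairs, then use model.generate() at inference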
0
huggingface
Beginners
Am I doing this right?
https://discuss.huggingface.co/t/am-i-doing-this-right/207
Fairly new to ML and very new to transformers. Want to make sure I’m doing the right thing … I’m trying to do text classification with a small data set and though this would be a good option (is it?) Here’s the basics of my code: texts = ["random text string...", ...] labels = [1, 0, ...] tokenized_sents = [] attention_masks = [] for sentence in sentences: tokenized_sents.append(tokenizer.encode(sentence, add_special_tokens=True, ...)) input_ids = pad_sequences(tokenized_sents) for sentence in input_ids: att_mask = [int(token_id > 0) for token_id in sentence] attention_masks.append(att_mask) dataset = tf.data.Dataset.from_tensor_slices(input_ids, attention_masks, labels) # copied from https://huggingface.co/transformers/training.html optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer=optimizer, loss=loss) model.fit(dataset, epochs=2, steps_per_epoch=115) I was pretty confident this all worked, but then when I did the following test: sent = ["I like to watch movies"] sent = tokenizer.encode(sentence, add_special_tokens=True, ...) att_mask = [int(token_id > 0) for token_id in sent] ds = tf.data.Dataset.from_tensor_slices(sent, att_mask) model.predict(ds) I got a super long array. But the labels can only be 1 or 0 and there’s only one sample, so I was expecting a 1 by 2 array. Any idea why this doesn’t work? Also, what’s the best way to save this model and use it for predictions later. Thank you.
I would suggest expanding this example instead: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb 28
0
huggingface
Beginners
What exact inputs does bleu_metric.compute() require?
https://discuss.huggingface.co/t/what-exact-inputs-does-bleu-metric-compute-require/146
Very basic question. In my project I want switch from Google Translate API to Marian MT models and therefore I want to compare them using BLEU score. I want to use the BLEU metric from the nlp library, but I’m having problems getting it to work correctly. Here is my code: from transformers import MarianTokenizer, MarianMTModel src = 'de' trg = 'en' mname = f'Helsinki-NLP/opus-mt-{src}-{trg}' tokenizer = MarianTokenizer.from_pretrained(mname) model = MarianMTModel.from_pretrained(mname) src_texts = ["Ich bin ein kleiner Frosch.", "Tom bat seinen Lehrer um Rat."] tgt_texts = ["I am a small frog.", "Tom asked his teacher for advice."] batch = tokenizer.prepare_translation_batch(src_texts=src_texts, tgt_texts=tgt_texts) import nlp bleu_metric = nlp.load_metric('bleu') preds = model(batch.input_ids) targets = batch.decoder_input_ids bleu_metric.compute(preds, targets) And this is the error message I get: --------------------------------------------------------------------------- ArrowTypeError Traceback (most recent call last) <ipython-input-89-33da8475e3a1> in <module> ----> 1 bleu_metric.compute(preds, targets) /opt/conda/envs/fastai/lib/python3.7/site-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs) 191 """ 192 if predictions is not None: --> 193 self.add_batch(predictions=predictions, references=references) 194 self.finalize(timeout=timeout) 195 /opt/conda/envs/fastai/lib/python3.7/site-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs) 207 if self.writer is None: 208 self._init_writer() --> 209 self.writer.write_batch(batch) 210 211 def add(self, prediction=None, reference=None, **kwargs): /opt/conda/envs/fastai/lib/python3.7/site-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 155 if self.pa_writer is None: 156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples)) --> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) 158 if writer_batch_size is None: 159 writer_batch_size = self.writer_batch_size /opt/conda/envs/fastai/lib/python3.7/site-packages/pyarrow/types.pxi in __iter__() /opt/conda/envs/fastai/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.asarray() /opt/conda/envs/fastai/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array() /opt/conda/envs/fastai/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /opt/conda/envs/fastai/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowTypeError: Could not convert tensor([[[ 7.0599, -1.6253, 7.5354, ..., -1.6600, -1.6035, 0.0000], [ 6.7303, -2.1037, 7.4463, ..., -2.0789, -2.0494, 0.0000], [ 6.1458, -1.4315, 7.7637, ..., -1.3350, -1.3450, 0.0000], ..., [ 5.6321, 0.7129, 9.5273, ..., 0.7408, 0.7298, 0.0000], [ 5.4492, -0.6234, 9.0366, ..., -0.6214, -0.6522, 0.0000], [ 7.1594, -3.3825, 4.1902, ..., -3.4131, -3.3912, 0.0000]], [[ 5.8872, -3.5864, 5.8050, ..., -3.5273, -3.4877, 0.0000], [ 6.4951, -2.9127, 7.8423, ..., -2.7731, -2.8157, 0.0000], [ 6.4846, -2.8267, 8.0983, ..., -2.6857, -2.7147, 0.0000], ..., [ 7.0786, -2.7071, 7.8688, ..., -2.6743, -2.6513, 0.0000], [ 5.6782, -1.9020, 7.4212, ..., -2.0306, -2.0242, 0.0000], [ 7.0517, -2.7702, 5.3939, ..., -2.7920, -2.7438, 0.0000]]], grad_fn=<AddBackward0>) with type Tensor: was not a sequence or recognized null for conversion to list type I’m following the MarianMT docs and the nlp notebook on Colab. 
I’m not sure in which exact form the targets have to go into bleu_metric.compute(). On the other hand it looks like the error message is triggered by the format of preds. I tried many different ways but cannot get it to work. Any help is appreciated
Could you try to convert preds to lists instead of torch.Tensor?
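As far as I understand, the bleu metric also expects tokenized strings rather than model logits: predictions should be a list of token lists and references a list of lists of token lists. A hedged sketch of one way to wire this up with the Marian model from the question (the decoding choices are assumptions, not the thread's verified solution):

import nlp

bleu_metric = nlp.load_metric('bleu')

translated_ids = model.generate(batch.input_ids)   # generate translations instead of using raw logits
predictions = [tokenizer.decode(ids, skip_special_tokens=True).split() for ids in translated_ids]
references = [[ref.split()] for ref in tgt_texts]  # one list of reference translations per prediction

result = bleu_metric.compute(predictions=predictions, references=references)
print(result['bleu'])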
0
huggingface
Beginners
Smaller RoBERTa model
https://discuss.huggingface.co/t/smaller-roberta-model/164
Hello, I am training a smaller RoBERTa model (6layers) with BPE tokenizer on a domain specific corpus (not a large corpus) for MLM and it seems to do a good job in filling in masked words in testing. Also, the loss in Tensorboard looks much better compared to my BERT model (12layers) with wordpiece tokenizer. When I try to use this small RoBERTa model on fine-tuning for QA (on squad2.0), predictions are mostly always empty. It returns almost always blanks as answer. I followed this https://huggingface.co/blog/how-to-train 5 but QA seems to be not doing great. Any recommendations to make the smaller RoBERTa model to fine-tune on squad? or should I go with bigger models (12-24 layers)? Is there a pruning while training method available in Transformers? I would appreciate any ideas… Thank you!
Hi @zb1, the official examples include RoBERTa distillation using the DistilBERT method. You can find it in the examples/distillation folder of the huggingface/transformers GitHub repository 28.
0
huggingface
Beginners
Pretrained Models to Heroku Production Environment
https://discuss.huggingface.co/t/pretrained-models-to-heroku-production-environment/156
Hi! I’m new to hugging face, so excuse this if it’s a dumb question! I’m trying to expose the T5 small pre_trained model as a REST endpoint. I have both the model code and the python Flask API working in a virtual environment in python on my computer. When I try to host the API on heroku, the code size exceeds the RAM limit. Are there any best practices for storing these models as static assets that can be called by an API? Is there a better way to do this to avoid hitting the RAM limit? FYI: I used this tutorial for the model code - https://towardsdatascience.com/simple-abstractive-text-summarization-with-pretrained-t5-text-to-text-transfer-transformer-10f6d602c426 9
Not sure if it can help for your use case, but we are providing an API inference endpoint for T5: https://huggingface.co/t5-small?text=My+name+is+Sarah+and+I+live+in+London 18. It will soon be usable for tasks other than translation.
0
huggingface
Beginners
Question Answering using RoBERTa in Hindi
https://discuss.huggingface.co/t/question-answering-using-roberta-in-hindi/159
Can the run_squad.py 3 file be used for fine-tuning models with datasets in languages other than English?
You can use it with a pretrained model in any language, but you would need your pretrained model to be in the same language as the dataset you are using (otherwise, the texts seen won’t make sense to the model!)
0
huggingface
Intermediate
How to give equal importance of all labels while dealing with unbalanced samples
https://discuss.huggingface.co/t/how-to-give-equal-importance-of-all-labels-while-dealing-with-unbalanced-samples/14174
for ex: I have four classes class 1: 200 samples class 2: 100 samples class 3: 20 samples class 4: 10 samples so during predictions, most of my samples are predicting class 1 or 2 because samples are high even I have unlabeled samples of class3 and class4. How can I overcome this in HF? is it possible to tell me the model that concentrates more on class3 and class4? Help me very soon. Thank you in advance.
Hi Para, I’m afraid there is no magic bullet in HF that can help solve this problem. It’s a common ML challenge and as such the standard approaches apply: (1) Under-sampling: Randomly delete records from the over-represented classes (2) Over-sampling: Duplicate records in the under-represented classes (3) Synthetic sampling/data augmentation: In NLP, there are actually quite some interesting techniques with which you can augment your data in the under-represented classes to increase record count. Check out this library: GitHub - makcedward/nlpaug: Data augmentation for NLP Hope that helps, let me know if any questions. Cheers Heiko
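To make option (2) concrete, here is a minimal sketch of random over-sampling on a plain list of (text, label) pairs; the toy data mirrors the class counts from the question and is only a stand-in for your own dataset:

import random
from collections import Counter

# toy stand-in for your data: 200 / 100 / 20 / 10 examples for classes 0..3
examples = [("a", 0)] * 200 + [("b", 1)] * 100 + [("c", 2)] * 20 + [("d", 3)] * 10

counts = Counter(label for _, label in examples)
target = max(counts.values())

balanced = list(examples)
for label, count in counts.items():
    if count < target:
        minority = [ex for ex in examples if ex[1] == label]
        balanced.extend(random.choices(minority, k=target - count))   # duplicate minority examples

random.shuffle(balanced)   # then build your Dataset / DataLoader from `balanced`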
0
huggingface
Intermediate
Common practice, using the hidden state associated with [cls] as an input feature for a classification task?
https://discuss.huggingface.co/t/common-practice-using-the-hidden-state-associated-with-cls-as-an-input-feature-for-a-classification-task/14003
In the upcoming book “Natural Language Processing with Transformers”, it’s teaching us how to do a classification task on sentences by using Transformers as Feature Extractors. We process the sentences through a transformer to get the hidden state. To train a classifier, we take the token embedding for just the first token, namely the “[CLS]” and ignore the rest of the sentence. The book says that it’s common practice to do that. It doesn’t make much sense to me to ignore the rest of the embeddings. Shouldn’t they be averaged or something? The only reasoning I can think of is that the attention layers of the encoder make the CLS token absorb the meaningful context? Thank you! The book is awesome by the way, highly recommended!
Hi @carlosaguayo, thanks for your question and I'm glad you're enjoying the book. In general, we need a way to represent the sequence of embeddings as a single vector, and there are several "pooling" techniques that people use in the literature:
[CLS] pooling: just take the embedding of the [CLS] token as the representation for the whole sequence
mean pooling: take the average of the token embeddings
max pooling: take the token embedding with the largest values
A related question is whether pooling should be applied to the last hidden states, or some earlier layers (or a concatenation thereof). Now, which pooling method + layer(s) provides the best feature representation tends to depend on the task at hand, the domain of the data, the length of the texts and so on. We picked [CLS] pooling in this early chapter because it's simple and tends to be "good enough" for text classification tasks. You can find a nice ablation study that examined some of these issues here. As to why this even works, your insight that it's due to self-attention is spot on! Each token embedding in the sequence is contextualised through the attention mechanism, so the [CLS] token does contain information about subsequent tokens in the sequence (we explain this in more detail in Chapter 3). Hope that helps!
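A small sketch contrasting [CLS] pooling and mask-aware mean pooling over the last hidden states; the model name and example sentences are just illustrations:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer(["I loved this movie", "Terrible plot"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state             # (batch, seq_len, hidden_size)

cls_features = hidden[:, 0]                                 # [CLS] pooling

mask = inputs["attention_mask"].unsqueeze(-1).float()       # ignore padding when averaging
mean_features = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean pooling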
1
huggingface
Intermediate
Using datacollator for multi-task training
https://discuss.huggingface.co/t/using-datacollator-for-multi-task-training/13999
Hey everyone, I am trying to accommodate multiple subtasks, including STS-B and NER, into a multi-task model; however, I am unable to feed the tokens from the CoNLL dataset into the DataCollator. Can anyone help me with this? The code snippet is shown below.

class NLPDataCollator(DefaultDataCollator):
    """
    Extending the existing DataCollator to work with NLP dataset batches
    """

    def collate_batch(self, features: List[Union[InputDataClass, Dict]]) -> Dict[str, torch.Tensor]:
        first = features[0]
        if isinstance(first, dict):
            # NLP data sets current works presents features as lists of dictionary
            # (one per example), so we will adapt the collate_batch logic for that
            if "labels" in first and first["labels"] is not None:
                if first["labels"].dtype == torch.int64:
                    labels = torch.tensor([f["labels"] for f in features], dtype=torch.long)
                else:
                    labels = torch.tensor([f["labels"] for f in features], dtype=torch.float)
                batch = {"labels": labels}
            for k, v in first.items():
                if k != "labels" and v is not None and not isinstance(v, str):
                    batch[k] = torch.stack([f[k] for f in features])
            return batch
        else:
            # otherwise, revert to using the default collate_batch
            return DefaultDataCollator().collate_batch(features)
This error is displayed when I am trying to accommodate the NER dataset, specifically entities extracted in the fashion shown here. (A screenshot of the error was attached in the original post.)
0
huggingface
Intermediate
HTML Embedding processing
https://discuss.huggingface.co/t/html-embedding-processing/13425
I am interested in creating embedding to HTML tags in a web. Does Bert have such model? Other works that can be relevant
Hi, MarkupLM 4 is such a model. I’ll add it soon to HuggingFace Transformers.
0
huggingface
Intermediate
Split compound words (windfall = wind + fall)
https://discuss.huggingface.co/t/split-compound-words-windfall-wind-fall/13768
Is there a way to use any of the model to split words into word parts that some might use downstream ? For example, consider a custom domain where words like windfall, firewall exists - but user may search for “wind fall” or “fire wall” downstream. Most basic way I thought was to split them randomly into multiple parts and “accept” a split whose sub parts make sense. For example, windfall = w + indfull, wi + ndfull, win + dfull and so on…Then apply existing language model to see if the subparts words exist in vocabulary. Appreciate if anyone has pointers.
Hi @mangled, I believe subword tokenizers (i.e. PreTrained Tokenizer from HuggingFace which is based on SentencePiece) already do this sort of word splitting. But most models rely on their own pretrained tokenizer with their own fixed vocab, so you may not have the same subword units. You may try to pre-train your own tokenizer. Hope this helps.
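A quick way to see what a given pretrained tokenizer does with such words; the model and example words are just illustrations, and the actual splits depend entirely on that tokenizer's vocabulary:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
for word in ["windfall", "firewall", "hyperscaler"]:
    print(word, "->", tok.tokenize(word))   # prints the subword pieces the tokenizer produces for each word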
0
huggingface
Intermediate
Using .generate with TAPAS as encoder in EncoderDecoder
https://discuss.huggingface.co/t/using-generate-with-tapas-as-encoder-in-encoderdecoder/13655
Hi, I'm trying to train the model to perform text generation conditioned on tables. Since TAPAS can encode the semi-structured meaning in tables, I guessed it was a good choice to use it as an encoder and, say, GPT2 (or any other CLM) as a decoder. However, I encountered a problem when trying to generate from that EncoderDecoder model (a screenshot of the stack trace was attached in the original post). I guess this is because model.generate() for EncoderDecoder does not expect the extra dimension of token_type_ids that TAPAS has. Can anyone think of a way I can make this work? Thanks!
Update: it works for me when overriding the _update_model_kwargs_for_generation method. The token_type_ids shouldn’t be updated, as a table only needs to get encoded once. Notebook: Google Colab 1
1
huggingface
Intermediate
Cant reproduce Optuna results
https://discuss.huggingface.co/t/cant-reproduce-optuna-results/6838
Hello all, I run a Hyperparameter search using Optuna and got a model giving me 83% accuracy. When I then try and repeat this by retraining using the same hyperparameter (including seed), I cannot repeat the results. This is my trainer arguments and optuna search; # Define the trainig arguments training_args = TrainingArguments( output_dir='./results', # output directory seed = 0, num_train_epochs=3, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=16, # batch size for evaluation warmup_steps=22, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay learning_rate=5e-5, # initial learning rate for AdamW optimizer. load_best_model_at_end=False, # load the best model when finished training (default metric is loss) do_train=True, # Perform training do_eval=True, # Perform evaluation logging_dir='./logs', # directory for storing logs logging_steps=10, gradient_accumulation_steps=2, # total number of steps before back propagation fp16=True, # Use mixed precision fp16_opt_level="02", # mixed precision mode evaluation_strategy="epoch", # evaluate each `logging_steps` save_strategy = 'no', # The checkpoint save strategy to adopt during training. I dont want to save, probably why it did save and take up disk space in HP search #save_total_limit = 1. # Trying this to stop octuna from saving ) trainer = Trainer( model_init=model_init, args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset, # evaluation dataset compute_metrics=compute_metrics, #callbacks=[EarlyStoppingCallback(3, 0.0)] # early stopping if results dont improve after 3 epochs ) best_run = trainer.hyperparameter_search(direction="maximize", hp_space=my_hp_space, compute_objective=my_objective, # cant get this working, for now work with loss n_trials=50, pruner=optuna.pruners.NopPruner(), sampler=optuna.samplers.GridSampler(search_space), study_name=name, storage="sqlite:////content/drive/MyDrive/{}.db".format(name), #change this to a local directory if you want to save to disk load_if_exists=True # you can change this to true, for continuing the search ) best_run I have now also fixed the seed for numpy and torch RANDOM_SEED = 42 np.random.seed(RANDOM_SEED) torch.manual_seed(RANDOM_SEED) device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") Could it be that the classification head that being reinitialised every time I retrain is random, resulting in different results?
Same issue
0
huggingface
Intermediate
Pre-training a BERT model from scratch with custom tokenizer
https://discuss.huggingface.co/t/pre-training-a-bert-model-from-scratch-with-custom-tokenizer/13115
Hi all, I’ve spent a couple days trying to get this to work. I’m trying to pretrain BERT from scratch using the standard MLM approach. I’m pretraining since my input is not a natural language per se. Here is my code: from tokenizers import Tokenizer from tokenizers.models import WordLevel from tokenizers import normalizers from tokenizers.normalizers import Lowercase, NFD, StripAccents from tokenizers.pre_tokenizers import Whitespace from tokenizers.processors import BertProcessing from tokenizers.trainers import WordLevelTrainer from tokenizers.pre_tokenizers import Split from tokenizers.normalizers import Strip from tokenizers import Regex exp = Regex("(^((\w)+(?=\s)))|((\[ENTRY\]\ (\w|\||\.)+)\s)|((\[CALL\]\ (\w|\||\.|\s)+)(?=\ \[))+|(\[EXIT\])") pre_tokenizer = Split(pattern=exp, behavior="removed",invert=True) #print(pre_tokenizer.pre_tokenize_str("performExpensiveLogSetup [ENTRY] void [CALL] java.io.PrintStream println java.lang.String void [CALL] java.lang.Math pow double|double double [CALL] java.lang.Math sqrt double double [CALL] java.io.PrintStream println java.lang.String void [EXIT]")) trace_tokenizer = Tokenizer(WordLevel(unk_token="[UNK]")) trace_tokenizer.add_special_tokens(["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) trace_tokenizer.normalizer = Strip() trace_tokenizer.pre_tokenizer = pre_tokenizer trace_tokenizer.post_processor = BertProcessing(sep=("[SEP]", 0),cls=("[CLS]", 1)) VOCAB_SIZE = 5000 trace_tokenizer.add_special_tokens(['[PAD]']) trace_tokenizer.add_tokens([' ']) trainer = WordLevelTrainer( vocab_size=VOCAB_SIZE, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"] ) files = ["10k_smaller_dataset.txt"] trace_tokenizer.train(files, trainer) trace_tokenizer.save("data/trace.json") from transformers import BertConfig, BertForMaskedLM scale_factor = 0.25 config = BertConfig( vocab_size=VOCAB_SIZE, max_position_embeddings=int(768*scale_factor), intermediate_size=int(2048*scale_factor), hidden_size=int(512*scale_factor), num_attention_heads=8, num_hidden_layers=6, type_vocab_size=5, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, ) from transformers import PreTrainedTokenizerFast fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=trace_tokenizer, return_special_tokens_mask=True, mask_token='[MASK]', return_token_type_ids=False) fast_tokenizer.add_special_tokens({'pad_token': '[PAD]', 'mask_token': '[MASK]'}) from datasets import load_dataset dataset = load_dataset('text', data_files={'train': '10k_smaller_dataset.txt', 'test': 'tiny_eval.txt', 'eval': 'tiny_eval.txt'}) small_train_dataset = dataset["train"] small_eval_dataset = dataset["test"] model = BertForMaskedLM(config) model.tokenizer = fast_tokenizer def preprocess_function(examples): return fast_tokenizer(examples["text"], max_length = 128, truncation=True, padding=True) encoded_dataset_train = small_train_dataset.map(preprocess_function, batched=True) encoded_dataset_test = small_eval_dataset.map(preprocess_function, batched=True) import numpy as np from datasets import load_metric metric = load_metric("accuracy") def compute_metric(eval_pred): return metric.compute(predictions=eval_pred.predictions, references=eval_pred.label_ids) from transformers import TrainingArguments from transformers import DataCollatorForWholeWordMask data_collator = DataCollatorForWholeWordMask(tokenizer=fast_tokenizer, mlm=True, mlm_probability=0.15) training_args = TrainingArguments("test_trainer_bert_pre", num_train_epochs=1, # prediction_loss_only=True, ) from transformers import Trainer trainer = 
Trainer( model=model, tokenizer=fast_tokenizer, data_collator=data_collator, args=training_args, train_dataset=encoded_dataset_train, eval_dataset=encoded_dataset_test, compute_metrics=compute_metric, ) train_result = trainer.train(resume_from_checkpoint=True) train_result trainer.evaluate(encoded_dataset_test) The problem is in the last line, I never see the accuracy metric I define. {'epoch': 1.0, 'eval_loss': 0.0025006113573908806, 'eval_runtime': 1.9859, 'eval_samples_per_second': 503.54, 'eval_steps_per_second': 62.942} I’m sure there’s a super simple mistake I’m making that’s resulting in it being ignored. Any ideas? Thank you in advance. Best, Claudio
If you call metric.compute(...) directly, do you get the accuracy you are looking for?
0
huggingface
Intermediate
Grouping Tokens after Token Classification
https://discuss.huggingface.co/t/grouping-tokens-after-token-classification/13421
Is there a way to group tokens after token classification via HF? I see something similar in Rasa 1. However, I am not sure it is the best way to do it as they are giving group numbers to the model to train on. However, If a document contains more groups than the documents in the training data, the RASA implementation fails. I think I am looking for a solution like (kinda supervised) clustering which is independent of the number of groups in the documents.
Hi, The token classification pipeline has the ability to group tokens, as seen here 1. The front-facing API is an “aggregation strategy”. See the docs for more info.
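A small illustration of that; the aggregation_strategy argument requires a reasonably recent transformers release, so treat the exact call as a sketch:

from transformers import pipeline

ner = pipeline("token-classification", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City"))   # entities come back grouped into word spans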
0
huggingface
Intermediate
Finetuning for feature-extraction? I.e. unsupervised fine tuning?
https://discuss.huggingface.co/t/finetuning-for-feature-extraction-i-e-unsupervised-fine-tuning/12595
I noticed the facebook/bart-large-mnli · Hugging Face 1 model card doesn’t show the feature-extraction task under Train menu, but it is under the Deploy menu. I haven’t been able to find an example of fine tuning a feature-extraction model, so is fine tuning not an option for feature extraction tasks? If it is, I’d love to see a working example somewhere…the examples I’ve been able to find are all for supervised learning, which makes me wonder if one needs labelled data to do fine tuning?
Hey @MaximusDecimusMeridi, the term “feature extraction” usually means to extract or “pool” the last hidden states from a pretrained model. So fine-tuning a model for feature extraction is equivalent to fine-tuning the language model, e.g. via masked or autoregressive language modelling. (You can find a BERT-like example of fine-tuning here 4, and indeed one does not need any labelled data for this step). For BART, the situation is a bit more complex because it is a seq2seq architecture, so you would likely need to frame your fine-tuning task in that manner (e.g. as a translation or summarization task). Most applications that need feature extraction (e.g. neural search) perform best with encoder-based models like BERT and friends - I recommend checking out sentence-transformers (link) which provides many state-of-the-art models for these applications
0
huggingface
Intermediate
Can BERT for mlm predict never seen words?
https://discuss.huggingface.co/t/can-bert-for-mlm-predict-never-seen-words/12072
Is it possible for my BERT model to predict a word seen in my training samples, but not contained in my wordpiece tokenizer vocab?
I am not sure I understand your question. Any word in your training sample would be broken down into word-piece tokens, and then converted into IDs. So yes, with the MLM task BERT can predict a word from the training sample.
0
huggingface
Intermediate
BERT Multilabel - Different Training Dataset For Each Label?
https://discuss.huggingface.co/t/bert-multilabel-different-training-dataset-for-each-label/9522
Hi everyone, I have successfully built a multi label classifier (10 labels, somewhat balanced) on sentence level with my own subclass of transformers library BertForSequenceClassification. The classification performance is okay-ish. When I was first testing BERT on a binary classification task for a single label in my dataset it was very benefitial towards performance to include adversarial sentences that did not hold the same label. Thus, I tried the same approach in this multi label setup by adding more 0 labeled sentences to the dataset. But this worsened the performance. My question would be if it is possible to have different training/evaluation datasets for each label in a Multilabel classification setup? Some more background on the dataset and labels: The labels of a sentence are indicated by a vector, e.g. (0, 0, 0, 1). It’s also possible that no label is present, i.e. the corresponding vector is (0, 0, 0, 0). In total there are 10 labels. Thank you!
Just out of curiosity, what does it mean to have “different dataset(s) for each label”?
0
huggingface
Intermediate
Forward and reverse detokenizing
https://discuss.huggingface.co/t/forward-and-reverse-detokinizing/13096
I am looking for some code examples on how to train with backbone transformers with both a forward (cleaning/tokenizing) and a reverse (detokenizing) step. The problem is that all my datasets (including validation) are pre-labelled for scoring, and I believe it would help if I made the text cleaner and more similar to what my pretrained transformers were trained on (Wikipedia text, I believe). This means removing contractions, among other things. The biggest challenge, of course, is the reverse trip: what techniques can be used to tokenize in such a way that we can appropriately map back to and re-generate the original labels? The text is very messy and requires quite a bit of cleansing. Any pointers would be greatly appreciated! Cheers
Another approach is using augmentation, e.g.: “What’s in the Dataset object” — datasets 1.11.0 documentation 1. The idea there is that I train/predict on augmented datasets so that I retain the original. Is this a typical approach? Any feedback appreciated.
0
huggingface
Intermediate
502 server error when running model
https://discuss.huggingface.co/t/502-server-error-when-running-model/13002
Hi everyone, I am running several models in production using the huggingface library to perform some tasks continuously. Unfortunately, the “continuously” part is very much lacking. This is due to the fact that very often (every couple of hours), the following error pops up: requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/.../resolve/main/tokenizer.json 2 or config.json I have even tried to avoid this error by running transformers in offline mode using: TRANSFORMERS_OFFLINE=1 HF_DATASETS_OFFLINE=1 Unfortunately, the system still seems to request the model from the web, even though it is cached and offline mode is enabled. Any ideas how to avoid this problem? (except for saving the model offline)
A colleague of mine and I have recently experienced a similar issue. Even if the models are cached locally, a momentary internet disconnection results in an error if it coincides with a model/tokenizer load in a series of training scripts. My thoughts on potential reasons go no further than speculations so, any suggestions on what might be the issue and how to solve it is pretty much welcome.
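A couple of hedged suggestions (sketches, not a guaranteed fix for the gateway errors themselves): the offline environment variables are read when transformers is imported, so they need to be set before the process starts rather than from inside an already-running process; and loading can be forced to the local cache per call with local_files_only, which fails fast instead of hitting the Hub:
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# never touch the network; raise immediately if the files are not already cached
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", local_files_only=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", local_files_only=True)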
0
huggingface
Intermediate
Does batching in the standard question-answering pipeline provide a speedup?
https://discuss.huggingface.co/t/does-batching-in-the-standard-question-answering-pipeline-provide-a-speedup/10875
Hi there. I am using the QA pipeline and hoping to get speedup through batching. Basically what I want (this is pseudo code, not exact code; I would appreciate your guidance in doing it right): _classifier = pipeline("question-answering", model="deberta") result = _classifier(question=["What is the goal?","When did it happen?","Who did it?"], context="Once upon a time many years ago an engineer set up a question answering machine and it was hoped it would run really fast") So the basic goal is to get this to work at roughly the same speed as a single question, assuming I give it a decent GPU capable of parallelizing the 3 questions. I did search before asking this question I found a few others who have asked it in different ways but not yet answered: Batched pipeline · Issue #6327 · huggingface/transformers · GitHub 5 [Benchmark] Pipeline for question answering · Issue #3007 · huggingface/transformers · GitHub 1 Many thanks for any pointers or help!
In my experience it doesn’t help. Also, when you look at the code, you can see that once it encodes all the given examples, it iterates over them one by one (transformers.pipelines — transformers 4.0.0 documentation):
all_answers = []
for features, example in zip(features_list, examples):
    model_input_names = self.tokenizer.model_input_names + ["input_ids"]
    fw_args = {k: [feature.__dict__[k] for feature in features] for k in model_input_names}
    # Manage tensor allocation on correct device
    with self.device_placement():
        if self.framework == "tf":
            fw_args = {k: tf.constant(v) for (k, v) in fw_args.items()}
            start, end = self.model(fw_args)[:2]
            start, end = start.numpy(), end.numpy()
        else:
            with torch.no_grad():
                # Retrieve the score for the context tokens only (removing question tokens)
                fw_args = {k: torch.tensor(v, device=self.device) for (k, v) in fw_args.items()}
                start, end = self.model(**fw_args)[:2]
                start, end = start.cpu().numpy(), end.cpu().numpy()
0
huggingface
Intermediate
Weights and biases not showing train loss correctly
https://discuss.huggingface.co/t/weights-and-biases-not-showing-train-loss-correctly/12591
(screenshot of the W&B train_loss chart omitted) Hi all, I’m training a binary text classification model. In order to debug, I’m training and evaluating on a small subset of the data (around 16 data points) to see if the model can successfully overfit. However, the train_loss logged to Weights and Biases is not showing correctly – as you can see from the screenshot, it’s just a single point. Any idea why this happened? Below is my training code:
model = AutoModelForSequenceClassification.from_pretrained("roberta-large")
training_args = TrainingArguments(
    output_dir='./results',
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=5,
    evaluation_strategy="epoch",
    logging_steps=1,
    # weight_decay=0.01,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded_ds,
    eval_dataset=encoded_ds,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    # data_collator=data_collator,
)
What happens if you drop your batch size (both train and eval) to 1, and set max_steps=16, evaluation_strategy="steps"? Not sure if it will work fully, but it might help diagnose what is going on.
0
huggingface
Intermediate
BERT Split NER Labeling
https://discuss.huggingface.co/t/bert-split-ner-labeling/12177
Building a custom-label NER model for custom medical data. In my dataset, there are times where an entity may have non-entity words splitting it. For a simple example, say I was designing data to train a person label that looked for first names. The sentence “His name was John Smith” would be O, O, O, B-PER, I-PER. That makes sense, but free text gets messy. Imagine situations like this: John, the man and legend, Smith, will be remembered forever. Would BERT understand… B-PER, O, O, O, O, I-PER, O, O, O, O? See how the split occurred? These should carry the same label, but I’m not sure if I should create IOB data as above, or have two separate instances of B-PER. The issue being: I want the model to understand that they are connected. I’m playing with Bio-ClinicalBERT and it’s done well for NER. Just trying to get it to the next level. Thanks in advance, and I’d be happy to share more data if needed.
Anyone? Will keep playing on my own in the meantime
0
huggingface
Intermediate
Online learning in a Space
https://discuss.huggingface.co/t/online-learning-in-a-space/12375
Hello! Is there a way to enable online learning on a model deployed on a Huggingface space (or the Model Hub). Like the flag option in gradio apps to flag some examples, is there an option for the user to correct the output predicted? Thanks.
That’s a super cool idea. The machinery to build this exists, but you would need to hook the building blocks together yourself. We’ll work on a tutorial to streamline this at some point.
0
huggingface
Intermediate
OSError: Unable to load weights from pytorch checkpoint file
https://discuss.huggingface.co/t/oserror-unable-to-load-weights-from-pytorch-checkpoint-file/3406
Hi, everyone. I need some help. I have been developing a Flask website that has one of Transformers’ fine-tuned models embedded in it. I fine-tuned the model with PyTorch. I’ve tested the web app on my local machine and it worked fine. I used a fine-tuned model whose weights I had already saved for local use (screenshot of the loading code omitted). The saved results contain: config.json, pytorch_model.bin, special_tokens_map.json, tokenizer_config.json, vocab.txt. Then, I tried to deploy it to the cloud instance that I have reserved. Everything worked well until the model loading step, which said: OSError: Unable to load weights from PyTorch checkpoint file at <my model path/pytorch_model.bin>. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. I’ve searched around the internet to solve it, but still nothing. Can I get some enlightenment? By the way, I’m using an Ubuntu 18.04 instance, and the environment I’m using is: torch 1.7.0, transformers 3.5.1. Thank you!
Hi @aswincandra were you able to load the tokenizer in the Flask app without problems? My first guess is that the path you are pointing to in the app is not correct.
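A quick sanity check along those lines (the directory path below is a placeholder): verify that the deployed path actually contains the saved files, and load both the tokenizer and the model from the same absolute directory.
import os
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_dir = "/home/ubuntu/my_model"  # placeholder — use the absolute path on the instance
print(os.listdir(model_dir))         # should list config.json, pytorch_model.bin, vocab.txt, ...
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
If the files are present but loading still fails, it is also worth checking whether pytorch_model.bin was fully uploaded (an incomplete or corrupted copy produces the same error) and whether the torch versions on both machines can read the same checkpoint format.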
0
huggingface
Intermediate
Generate without using the generate method
https://discuss.huggingface.co/t/generate-without-using-the-generate-method/11379
Posting this here for visibility. What if you want to decode the output of a generative seq2seq model (like T5, BART, etc.) yourself, without using the .generate() method? The code example below illustrates this. Suppose that the model is given a long text, for which it needs to generate a summary. We illustrate here how to manually decode the generated ids autoregressively. In each iteration, we add the predicted token id by the model to the decoder_input_ids, which are then fed as input to the next time step. At the beginning, we only feed the decoder_start_token_id to the decoder of the model. from transformers import BartTokenizer, BartForConditionalGeneration import torch model_name = "sshleifer/distilbart-cnn-6-6" tokenizer = BartTokenizer.from_pretrained(model_name) model = BartForConditionalGeneration.from_pretrained(model_name) text = """The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.""" input_ids = tokenizer(text, return_tensors="pt").input_ids decoder_input_ids = [model.config.decoder_start_token_id] predicted_ids = [] for i in range(20): outputs = model(input_ids=input_ids, decoder_input_ids=torch.tensor([decoder_input_ids])) logits = outputs.logits[:,i,:] # perform argmax on the last dimension (i.e. greedy decoding) predicted_id = logits.argmax(-1) predicted_ids.append(predicted_id.item()) print(tokenizer.decode([predicted_id.squeeze()])) # add predicted id to decoder_input_ids decoder_input_ids = decoder_input_ids + [predicted_id] This will print: The E iff el Tower is 324 metres ( 1 , 06 3 ft ) tall , about the same The final result can also be printed using print(tokenizer.decode(predicted_ids)): The Eiffel Tower is 324 metres (1,063 ft) tall, about the same Note that we’ve only done 20 time steps here. Normally, one continues until the model generates the EOS (end of sequence) token, which for BART is </s>.
Hi Niels, thanks for sharing the code. Would you mind also sharing some examples of situations in which you would prefer not to use the .generate() method?
0
huggingface
Intermediate
Starting my first pull request, but transformers tests stall
https://discuss.huggingface.co/t/starting-my-first-pull-request-but-transformers-tests-stall/12192
Hello, I am trying to contribute to the library by fixing a bug that I found some time ago. This is about to be my first PR here, so I am following the contributing guide 1 exactly. Before making the changes to the current code, I’m running the tests to see how that goes:
make test
And this command takes far too long (even on a powerful machine with lots of CPUs and GPUs). In particular, it hangs completely at some point and does not proceed at all, even though the most recent output messages look OK. The last message is below, but the tests run in parallel on 48 cores, so this may be useless.
[gw0] SKIPPED tests/test_modeling_flax_mbart.py::FlaxMBartModelIntegrationTest::test_batch_generation_en_ro
Is there a way to see which test gets stuck? Can I skip it? Could anyone give me a piece of advice on how I should proceed?
To go faster, you should only run the test files related to your PR and let the CI double-check the rest for you.
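For example, assuming your fix touches the BART modeling code (the file path here is just an example and may differ between versions), you can run a single test file, or a subset of it, with pytest:
python -m pytest -v tests/test_modeling_bart.py
python -m pytest -v tests/test_modeling_bart.py -k "generation"
The CI on the pull request will then run the full suite for you.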
1
huggingface
Intermediate
How to deal with differences between CoNLL 2003 dataset tokenisation and BER tokeniser when fine tuning NER model?
https://discuss.huggingface.co/t/how-to-deal-with-differences-between-conll-2003-dataset-tokenisation-and-ber-tokeniser-when-fine-tuning-ner-model/11129
Hello, I am about to fine-tune a BERT model on the NER task using a legal dataset with custom entities, and would like to know how the fine tuning on the ConLL 2003 dataset was handled at the time in order to create a pertained BertForTokenClassification model, because I’m facing similar issues. The NER dataset here contains one token (or rather word) per line. However, the HuggingFace BERT tokenizer (e.g. “bert-base-cased” or any other) will not produce a one-to-one match with this dataset. Just to give an example, the word “precautionary” (which on the conll 2003 dataset would appear in one line) is split by the HuggingFace tokenizer into ['pre', '##ca', '##ution', '##ary'], and I assume the opposite might be true as well, although perhaps much rarer (i.e. that tokens which were split into two lines in the conll 2003 dataset would be tokenized by HuggingFace as a single token). Therefore, I was wondering what transformation was done to convert the CoNLL 2003 dataset (in the format I linked above) to a set of token-level labels corresponding to the BERT tokenizer suitable for creating a pytorch’s DataLoader.
What is typically done is, you tokenize each word of the annotated dataset you have, check how many tokens it has been tokenized into, and then either only label the first wordpiece token, or label them all. Small example: Suppose you have the sentence “hello my name is Niels”, and the CoNLL dataset has this labeled as: hello O my O name O is O niels B-PER Then what we do is the following: option 1, label all tokens of a word (i.e. propagate the label on all tokens) from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") words = ["hello", "my", "name", "is", "niels"] word_labels = ["O", "O", "O", "O", "B-PER"] # convert word labels to integer labels label2id = {"O": 0, "B-PER":1} word_labels = [label2id[label] for label in word_labels] tokens = [] labels = [] for word, label in zip(words, word_labels): # tokenize the word word_tokens = tokenizer.tokenize(word) # propagate label to all tokens of the given word tokens.extend(word_tokens) labels.extend([label] * len(word_tokens)) option 2, only label the first token of a word, set labels of all remaining tokens to -100 words = ["hello", "my", "name", "is", "niels"] word_labels = ["O", "O", "O", "O", "B-PER"] # convert word labels to integer labels label2id = {"O": 0, "B-PER":1} word_labels = [label2id[label] for label in word_labels] tokens = [] labels = [] for word, label in zip(words, word_labels): # tokenize the word word_tokens = tokenizer.tokenize(word) tokens.extend(word_tokens) # only label the first wordpiece labels.extend([label] + [-100] * (len(word_tokens) - 1)) The reason we set the remaining tokens to -100 is because this is the ignore_index of PyTorch’ CrossEntropyLoss 1. This means that those labels will not be taken into account by the loss function, hence no gradients will be computed for those. Which of the 2 options you choose is a design choice, mainly. In practice, both perform well. I’ve made 2 notebooks (one for each of both options), in my repo here 2.
1
huggingface
Intermediate
Load Custom Model
https://discuss.huggingface.co/t/load-custom-model/12113
I tried look into this similar issue: Custom GPT2 Model won't load after training but still… class CustomClass(PreTrainedModel): def __init__(self, config, num_labels): super().__init__(config, num_labels) self.distilbert = DistilBertModel.from_pretrained('distilbert-base-uncased') self.pre_classifier = torch.nn.Linear(768, 768) self.dropout = torch.nn.Dropout(0.1) self.classifier = torch.nn.Linear(768, num_labels) def forward(self, input_ids, attention_mask): distilbert_output = self.distilbert(input_ids=input_ids, attention_mask=attention_mask) hidden_state = distilbert_output[0] pooled_output = hidden_state[:, 0] pooled_output = self.pre_classifier(pooled_output) pooled_output = torch.nn.Tanh()(pooled_output) pooled_output = self.dropout(pooled_output) output = self.classifier(pooled_output) return output I was able to fine tune with a linear classifier for a classification job. config = PretrainedConfig(name_or_path='own-model', num_labels=100, output_hidden_states=True) model = CustomClass(config, 100) I can also save it with model.save_pretrained(PATH) But when I try to load it with new_model=PreTrainedModel.from_pretrained('./PATH/') i got 'NoneType' object has no attribute 'from_pretrained' which is really strange Alternatively, with a config file, this doesnt throw error though the model weights wont load new_config=PretrainedConfig.from_pretrained('./PATH/') new_model=PreTrainedModel.from_pretrained('./PATH/', config=new_config) please help. I am running out of idea. Oh and I am at v4.6.0 Thank you!
You should use CustomClass.from_pretrained. PreTrainedModel.from_pretrained won’t work directly.
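A sketch of what that could look like (positional arguments after the path are forwarded to your __init__ after the config, so the num_labels used at training time has to be passed again):
new_model = CustomClass.from_pretrained('./PATH/', 100)
# or, loading the config explicitly first:
new_config = PretrainedConfig.from_pretrained('./PATH/')
new_model = CustomClass.from_pretrained('./PATH/', 100, config=new_config)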
0
huggingface
Intermediate
Is there a standard way to handle leftover batches when using gradient accumulation?
https://discuss.huggingface.co/t/is-there-a-standard-way-to-handle-leftover-batches-when-using-gradient-accumulation/12142
Let’s say that I have the following training loop that uses accumulated gradients (taken from here 1). Let’s also say that I have a batch size of 4 and want to accumulate gradients for 10 steps, which gives me an “effective” batch size of 40. model.zero_grad() for i, (inputs, labels) in enumerate(training_set): predictions = model(inputs) loss = loss_function(predictions, labels) loss = loss / accumulation_steps loss.backward() if (i+1) % accumulation_steps == 0: optimizer.step() model.zero_grad() if (i+1) % evaluation_steps == 0: evaluate_model() I was wondering what we do with gradient accumulation when we have leftover batches in our training loop. For the above code, we might see this if we have (say) 57 batches in our training step. This would lead to 5 successful training steps of gradient accumulation (corresponding to the first 50 batches), but the last 7 batches would be ignored. I’m guessing the convention is just to ignore the leftover batches (particularly if you are shuffling the batches in each epoch), but perhaps it might be better to do a training step instead? Thoughts appreciated.
You should use a step counter that runs over the whole training loop instead of the per-epoch counter, so that the leftover batches of epoch 0 are finished during epoch 1 (unless your dataset is pretty small, the probability of having the same samples twice in one accumulation window is not very high).
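A minimal sketch of that idea, reusing the names from the snippet above (num_epochs is assumed):
global_step = 0
model.zero_grad()
for epoch in range(num_epochs):
    for inputs, labels in training_set:
        predictions = model(inputs)
        loss = loss_function(predictions, labels) / accumulation_steps
        loss.backward()
        global_step += 1
        # the counter carries over between epochs, so leftover batches of one epoch
        # are accumulated together with the first batches of the next
        if global_step % accumulation_steps == 0:
            optimizer.step()
            model.zero_grad()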
1
huggingface
Intermediate
Error Training Vision Encoder Decoder for Image Captioning
https://discuss.huggingface.co/t/error-training-vision-encoder-decoder-for-image-captioning/12090
I am trying to train Vision Encoder Decoder with VIT encoder and Hindi GPT2(surajp/gpt2-hindi at main) decoder, for Hindi Image captioning, which my team are doing as a part of Huggingface course project. Currently my code is this For creating a dataset import torch from torch.utils.data import Dataset from PIL import Image class Image_Caption_Dataset(Dataset): def __init__(self,root_dir,df, feature_extractor,tokenizer,max_target_length=128): self.root_dir = root_dir self.df = df self.feature_extractor = feature_extractor self.tokenizer = tokenizer self.max_length=max_target_length def __len__(self,df): return self.df.shape[0] def __getitem__(self,idx): #return image image_path = self.df['images'][idx] text = self.df['text'][idx] #prepare image image = Image.open(self.root_dir+'/'+image_path).convert("RGB") pixel_values = self.feature_extractor(image, return_tensors="pt").pixel_values #add captions by encoding the input captions = self.tokenizer(text, padding='max_length', max_length=self.max_length).input_ids captions = [caption if caption != self.tokenizer.pad_token_id else -100 for caption in captions] encoding = {"pixel_values": pixel_values.squeeze(), "labels": torch.tensor(captions)} return encoding from transformers import ViTFeatureExtractor,AutoTokenizer encoder_checkpoint = 'google/vit-base-patch16-224' decoder_checkpoint = 'surajp/gpt2-hindi' feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_checkpoint) tokenizer = AutoTokenizer.from_pretrained(decoder_checkpoint) root_dir = "../input/flickr8k/Images" train_dataset = Image_Caption_Dataset(root_dir=root_dir, df=train_df, feature_extractor=feature_extractor, tokenizer=tokenizer) val_dataset = Image_Caption_Dataset(root_dir=root_dir, df=test_df, feature_extractor=feature_extractor, tokenizer=tokenizer) from transformers import VisionEncoderDecoderModel # initialize a vit-bert from a pretrained ViT and a pretrained BERT model. 
Note that the cross-attention layers will be randomly initialized model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_checkpoint, decoder_checkpoint) #model.to(device) After initializing the model I configured the model arguments # set special tokens used for creating the decoder_input_ids from the labels model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id # make sure vocab size is set correctly model.config.vocab_size = model.config.decoder.vocab_size # set beam search parameters model.config.eos_token_id = tokenizer.sep_token_id model.config.max_length = 128 model.config.early_stopping = True model.config.no_repeat_ngram_size = 3 model.config.length_penalty = 2.0 model.config.num_beams = 4 Then started Training from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments training_args = Seq2SeqTrainingArguments( predict_with_generate=True, evaluation_strategy="steps", per_device_train_batch_size=8, per_device_eval_batch_size=8, fp16=True, output_dir="./", logging_steps=2, save_steps=1000, eval_steps=200, ) from transformers import default_data_collator # instantiate trainer trainer = Seq2SeqTrainer( model=model, tokenizer=feature_extractor, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=val_dataset, data_collator=default_data_collator, ) trainer.train() But then I got the error, ValueError: Make sure to set the decoder_start_token_id attribute of the model’s configuration If you noticed the above code, I have already set this, but when I checked that the tokenizer.cls_token_id was None so I manually set the tokenizer.cls_token_id=’’ but then I got the Index out of range in self, error Is there any workaround for this, the code is inspired for` here written by @nielsr I have also tried with custom training loop, I get the same error
Hi, you were already on the right track! The only “mistake” I see here is that GPT-2 doesn’t have a CLS token. The CLS token is only defined for encoder-only Transformers such as BERT and RoBERTa. So in this case, the decoder start token can be set to the bos (beginning of sequence) token: model.config.decoder_start_token_id = tokenizer.bos_token_id
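In other words, something along these lines (a sketch; the eos/pad lines are an extra suggestion since the GPT-2 tokenizer also has no SEP or pad token, and are not part of the original answer):
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
# GPT-2 has no pad token; reusing eos as pad is a common workaround
model.config.pad_token_id = tokenizer.eos_token_id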
1
huggingface
Intermediate
How to save model in Colab during TPU training with Accelerate
https://discuss.huggingface.co/t/how-to-save-model-in-colab-during-tpu-training-with-accelerate/12039
I’m training a model on a TPU on Colab and my code is based on this example (with the training loop wrapped in a large function): accelerate/nlp_example.py at main · huggingface/accelerate · GitHub How do I save to model trained on the TPU to my google drive? I understand that I need to run a code like this in the training function: accelerator.wait_for_everyone() unwrapped_model = accelerator.unwrap_model(model) accelerator.save(unwrapped_model.state_dict(), f'./results/{training_directory}') But then I get the error below. I know how to save a model when it’s in my environment, but I’m not sure how to save it when it’s in the large training function and on the TPU. Exception in device=TPU:0: [Errno 21] Is a directory: './results/nli-few-shot/TPU/' Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn fn(gindex, *args) File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 570, in __call__ self.launcher(*args) File "<ipython-input-84-275217ef36aa>", line 97, in training_function accelerator.save(unwrapped_model.state_dict(), f'./results/{training_directory}') File "/usr/local/lib/python3.7/dist-packages/accelerate/accelerator.py", line 507, in save save(obj, f) File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 544, in save xm.save(obj, f) File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 818, in save torch.save(cpu_data, file_or_path) File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 376, in save with _open_file_like(f, 'wb') as opened_file: File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 230, in _open_file_like return _open_file(name_or_buffer, mode) File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 211, in __init__ super(_open_file, self).__init__(open(name, mode)) IsADirectoryError: [Errno 21] Is a directory: './results/nli-few-shot/TPU/' Exception in device=TPU:6: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14) Traceback (most recent call last): Exception in device=TPU:5: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14) Exception in device=TPU:7: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14) Traceback (most recent call last): Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn fn(gindex, *args) File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn fn(gindex, *args) File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", 
line 570, in __call__ self.launcher(*args) File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn fn(gindex, *args) File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 570, in __call__ self.launcher(*args) File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 570, in __call__ self.launcher(*args) File "<ipython-input-84-275217ef36aa>", line 97, in training_function accelerator.save(unwrapped_model.state_dict(), f'./results/{training_directory}') File "<ipython-input-84-275217ef36aa>", line 97, in training_function accelerator.save(unwrapped_model.state_dict(), f'./results/{training_directory}') File "/usr/local/lib/python3.7/dist-packages/accelerate/accelerator.py", line 507, in save save(obj, f) File "/usr/local/lib/python3.7/dist-packages/accelerate/accelerator.py", line 507, in save save(obj, f) File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 544, in save xm.save(obj, f) File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 819, in save rendezvous('torch_xla.core.xla_model.save') File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 544, in save xm.save(obj, f) File "<ipython-input-84-275217ef36aa>", line 97, in training_function accelerator.save(unwrapped_model.state_dict(), f'./results/{training_directory}') File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 819, in save rendezvous('torch_xla.core.xla_model.save') File "/usr/local/lib/python3.7/dist-packages/accelerate/accelerator.py", line 507, in save save(obj, f) File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 863, in rendezvous return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas) RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14) File "/usr/local/lib/python3.7/dist-packages/accelerate/utils.py", line 544, in save xm.save(obj, f) File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 863, in rendezvous return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas) RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14) File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 819, in save rendezvous('torch_xla.core.xla_model.save') File "/usr/local/lib/python3.7/dist-packages/torch_xla/core/xla_model.py", line 863, in rendezvous return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas) RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:364 : Failed to meet rendezvous 'torch_xla.core.xla_model.save': Socket closed (14) --------------------------------------------------------------------------- ProcessExitedException Traceback (most recent call last) <ipython-input-85-a91f3c0bb4fd> in <module>() 1 from accelerate import notebook_launcher 2 ----> 3 notebook_launcher(training_function) 3 frames /usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py in join(self, timeout) 142 error_index=error_index, 143 error_pid=failed_process.pid, --> 144 exit_code=exitcode 145 ) 146 ProcessExitedException: process 0 terminated with exit code 17
As the error indicates, you are trying to save in a directory, and not a file.
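Concretely, appending a file name to the output path should be enough (a sketch using the same variables as in the question):
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
accelerator.save(unwrapped_model.state_dict(), f"./results/{training_directory}/pytorch_model.bin")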
0
huggingface
Intermediate
Extract most important words from model
https://discuss.huggingface.co/t/extract-most-important-words-from-model/11978
Hi, I was wondering if it is possible to extract the most positive word or the most negative word from a sentence. Say, we have a movie review like, “This movie is amazing” and the most relevant word for sentiment classification would be the word “amazing”. Is it possible to use a trained BERT model to extract/return such most prominent words from a sentence? Thanks.
Hi, Yes that’s possible. There are some cool libraries out there that can be used to do that: ELI5: GitHub - TeamHG-Memex/eli5: A library for debugging/inspecting machine learning classifiers and explaining their predictions LIME: GitHub - marcotcr/lime: Lime: Explaining the predictions of any machine learning classifier I believe it requires some tweaks to make it work on Transformer-based models, but I did this myself some time last year, so it’s definitely possible. Edit: this Github thread 3 might be helpful.
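As a rough illustration of the LIME route on top of a sentiment pipeline (the model, class names and label ordering here are assumptions for the sketch, not a tested recipe):
import numpy as np
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

clf = pipeline("sentiment-analysis", return_all_scores=True)

def predict_proba(texts):
    # LIME expects a function mapping a list of strings to an array of class probabilities
    outputs = clf(list(texts))
    return np.array([[label_score["score"] for label_score in out] for out in outputs])

explainer = LimeTextExplainer(class_names=["NEGATIVE", "POSITIVE"])
explanation = explainer.explain_instance("This movie is amazing", predict_proba, num_features=5)
print(explanation.as_list())  # words ranked by their contribution to the predicted class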
1
huggingface
Intermediate
Multilabel classification performance metrics using Trainer API
https://discuss.huggingface.co/t/multilabel-classification-performance-metrics-using-trainer-api/11911
Hello, My goal is to output certain model performance metrics for my multilabel classification problem (I am using a DistilBERT architecture by the way). If I look at each of the labels individually you can say most of the labels are really unbalanced. Given this I also want to correct for the label (or class) imbalance. I am fairly new to this and by looking at some examples, and trying myself I have done the following: def accuracy_thresh(y_pred, y_true, thresh=0.5, sigmoid=True): y_pred = torch.from_numpy(y_pred) y_true = torch.from_numpy(y_true) if sigmoid: y_pred = y_pred.sigmoid() return ((y_pred>thresh)==y_true.bool()).float().mean().item() The above code calculates model accuracy given a threshold of 0.5. Next I used this code and I also included the above function as an output def compute_metrics(eval_pred): predictions, labels = eval_pred y_true = labels y_pred = sigmoid(eval_pred.predictions) y_pred = (y_pred>0.5).astype(float) clf_dict = classification_report(y_true, y_pred, target_names=all_labels, zero_division=0, output_dict=True) return {"accuracy_thresh": accuracy_thresh(predictions, labels), "micro f1": clf_dict['micro avg']['f1-score'], "macro f1": clf_dict['macro avg']['f1-score'], "weighted f1": clf_dict['weighted avg']['f1-score']} It looks a bit hacky, but it works (it runs). I have the Trainer als follows: class MultilabelTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): labels = inputs.pop("labels") #keeps the labels outputs = model(**inputs) logits = outputs.logits loss_fct = torch.nn.BCEWithLogitsLoss(pos_weight = class_weights) loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.float().view(-1, self.model.config.num_labels)) return (loss, outputs) if return_outputs else loss Note: I am actually using pos_weights. Why? Since I am dealing with imbalanced labels as said above I have a tensor which contains for each label a weight calculated as number of negative cases / positive cases. The trainer then is multi_trainer = MultilabelTrainer( model, args, train_dataset=train_dataset, eval_dataset=test_dataset, compute_metrics=compute_metrics, tokenizer=tokenizer) My main question is: Does it actually make sense for what I have done? That is am I actually getting the right performance metrics taking into account I want to correct for imbalance? Or is there a better alternative (e.g. less verbose) to achieve this?
Hi, I’ve created a notebook for you to illustrate this: Transformers-Tutorials/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb at master · NielsRogge/Transformers-Tutorials · GitHub 7 Actually, there’s no need for a MultilabelTrainer anymore, as you can just set the problem_type of the model’s configuration to “multi_label_classification”.
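The key part is setting problem_type when loading the model (a sketch; the base checkpoint is just an example):
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    problem_type="multi_label_classification",
    num_labels=10,
)
# with this setting the model applies BCEWithLogitsLoss internally,
# so the labels passed in should be float vectors of length num_labels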
1
huggingface
Intermediate
MLflowCallback TypeError: can only concatenate list (not “type”) to list
https://discuss.huggingface.co/t/mlflowcallback-typeerror-can-only-concatenate-list-not-type-to-list/9303
I’m trying to capture autolog parameters with MLFlow. I’m using the MLflowCallback class written in the transformers.integrations.MLflowCallback and interfacing with transformers.TrainerCallback class. Here’s the relevant code that tells MLFlow to create an experiment, send it to a host tracking server, and tells transformers what type of logging to do (as defined by the MLflowCallback class). import torch from transformers import RobertaTokenizer, RobertaForSequenceClassification, RobertaConfig, Trainer, TrainingArguments import mlflow from transformers import TrainerCallback from transformers.integrations import MLflowCallback remote_server_uri = [PRIVATE SERVER URL] mlflow.set_tracking_uri(remote_server_uri) # After loading and tokenizing data, here we run the training experiment experiment_name = "ht_vp_roberta_randomSearch" mlflow.set_experiment(experiment_name) # server creates experiment folder at this point with mlflow.start_run(): training_args = TrainingArguments( output_dir=experiment_name, evaluation_strategy='epoch', eval_steps=500, gradient_accumulation_steps=1000, eval_accumulation_steps=1, ) model = RobertaForSequenceClassification.from_pretrained("roberta-base") trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=train_dataset, eval_dataset=eval_dataset, model=model, compute_metrics=hf.compute_metrics, callbacks=MLflowCallback, # This triggers error ) trainer.train() trainer.evaluate() I’m getting the below error: Traceback (most recent call last): File "mlflow_test_simple.py", line 80, in <module> trainer = Trainer( File "/home/jovyan/conda/dsEnv/lib/python3.8/site-packages/transformers/trainer.py", line 385, in __init__ callbacks = default_callbacks if callbacks is None else default_callbacks + callbacks TypeError: can only concatenate list (not "type") to list
I just ran into the same problem. At first I was getting a further error about nested processes, with mlflow wanting me to set nested=True in the start_run call that is embedded in the MLflowCallback class, and that led me to the cause of this error. The cause is that Trainer’s __init__ automatically adds a callback for every installed integration package. As a result, the callback given as an argument to Trainer was duplicated and mlflow.start_run() was executed twice. The solution is to not specify the mlflow callback as an argument:
trainer = Trainer(
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    model=model,
    compute_metrics=compute_metrics,
)
As a side note, I found the cause in lines 391-395 of trainer.py: get_reporting_integration_callbacks(self.args.report_to) returns all installed integration callbacks and adds them to the callback handler.
default_callbacks = DEFAULT_CALLBACKS + get_reporting_integration_callbacks(self.args.report_to)
callbacks = default_callbacks if callbacks is None else default_callbacks + callbacks
self.callback_handler = CallbackHandler(
    callbacks, self.model, self.tokenizer, self.optimizer, self.lr_scheduler
)
If you’ve already solved it, I’ll leave the solution here for those who are facing this problem anew.
1
huggingface
Intermediate
Saving model per some step when using Trainer
https://discuss.huggingface.co/t/saving-model-per-some-step-when-using-trainer/11553
When using the Trainer and TrainingArguments from transformers, I notice that by default, the Trainer save a model every 500 steps. How can I change this value so that it save the model more/less frequent? here is a snipet that i use training_args = TrainingArguments( output_dir=output_directory, # output directory num_train_epochs=10, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir=log_directory, # directory for storing logs ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train()
This is explained in the documentation 2. You can change the argument “save_steps”, which defaults to 500.
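For example, adding a save_steps value (and, on recent versions, an explicit save_strategy) to the TrainingArguments from the snippet above:
training_args = TrainingArguments(
    output_dir=output_directory,
    num_train_epochs=10,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    warmup_steps=500,
    weight_decay=0.01,
    logging_dir=log_directory,
    save_strategy="steps",  # save by step count; "epoch" saves once per epoch instead
    save_steps=2000,        # checkpoint every 2000 steps instead of the default 500
)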
1
huggingface
Intermediate
Using a fixed vocabulary?
https://discuss.huggingface.co/t/using-a-fixed-vocabulary/10694
I have a special non-language use case using a fixed vocabulary—i.e., a relatively small set of generated tokens that represent the entire vocabulary of our “language.” I’d like to be able to use this with any of the different models and I’m wondering what would be the best approach? It’s just a vocab.txt file of short strings, which I don’t think will work with any of the BPE tokenizers. Am I correct in that assumption? Also, is there a way to “force” a vocabulary onto any of the tokenizers? Any help much appreciated.
What if I just instantiate a tokenizer (e.g., BigBirdTokenizer), then use add_tokens() to add my entire vocabulary? That is, start with nothing, then “force” the tokens in with the add_tokens() function… ?? UPDATE: Hmm… no… I can’t instantiate without a vocab file, so that won’t work…
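One possible alternative (a sketch, assuming each line of vocab.txt is one token of the custom “language”): build a word-level tokenizer with the tokenizers library and wrap it in PreTrainedTokenizerFast, which avoids BPE entirely and works with any model once the embedding matrix is resized.
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import WhitespaceSplit
from transformers import PreTrainedTokenizerFast

tokens = [line.strip() for line in open("vocab.txt") if line.strip()]
vocab = {token: i for i, token in enumerate(tokens)}
vocab.setdefault("[UNK]", len(vocab))  # WordLevel needs an unk token present in the vocab

tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = WhitespaceSplit()

hf_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer, unk_token="[UNK]")
# remember to call model.resize_token_embeddings(len(hf_tokenizer)) before training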
0
huggingface
Intermediate
Resuming training BERT from scratch with run_mlm.py
https://discuss.huggingface.co/t/resuming-training-bert-from-scratch-with-run-mlm-py/4439
Initiated training BERT from scratch with run_mlm.py as follows:
python run_mlm.py \
    --model_type bert \
    --train_file ./data/mk.txt \
    --output_dir ./models/bert-base-uncased \
    --overwrite_output_dir \
    --tokenizer_name ./models/bert-base-uncased \
    --line_by_line True \
    --do_train \
    --per_device_train_batch_size 4 \
    --num_train_epochs 100 \
    --save_steps 100000 \
    --save_total_limit 500 \
    --max_seq_length 512 \
    --logging_steps 500 \
    --use_fast_tokenizer \
    --report_to wandb \
    --disable_tqdm True
Training stopped due to a power outage, having saved the latest checkpoint: .\models\bert-base-uncased\checkpoint-1700000
Which is the most appropriate command, given the initial one, to resume training from the last saved checkpoint while preserving all of the parameters mentioned above?
hi @striki-ai if you remove the --overwrite_output_dir option and run the same command again, then the script will detect the last checkpoint and resume training from there.
0
huggingface
Intermediate
How can I make a Img2Text transformer using the existent modules?
https://discuss.huggingface.co/t/how-can-i-make-a-img2text-transformer-using-the-existent-modules/6434
I am trying to build a captioning system that inputs an image and outputs a caption. This is an attempt to solve the Kaggle competition: Bristol-Myers Squibb – Molecular Translation | Kaggle. For that I tried to build a Bert encoder-decoder module with ViT Embeddings using the following code: class ViTBert(nn.Module): def __init__(self, vocab_size): super().__init__() config_encoder = BertGenerationConfig( vocab_size = vocab_size, hidden_size = 256, num_hidden_layers = 4, num_attention_heads = 4, intermediate_size = 1024, bos_toke_id = 0, eos_token_id = 2 ) config_decoder = BertGenerationConfig( vocab_size = vocab_size, hidden_size = 256, num_hidden_layers = 4, num_attention_heads = 4, intermediate_size = 1024, add_cross_attention=True, is_decoder=True, bos_toke_id = 0, eos_token_id = 2 ) config_encdec = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) config_emb = ViTConfig( hidden_size = 256, num_hidden_layers = 4, num_attention_heads = 4, intermediate_size = 1024, image_size = 224, patch_size = 16, num_channels = 1 ) self.emb = ViTModel(config_emb).embeddings self.model = EncoderDecoderModel(config_encdec) def forward(self, x, y = None, l = None): return self.model( inputs_embeds = self.emb(x), decoder_input_ids = y, labels = l ) def beam_search(self, x, y, beam_scorer, criteria): lhs = self.model.encoder(inputs_embeds = self.emb(x)).last_hidden_state return self.model.decoder.beam_search(input_ids = y, encoder_hidden_states = lhs, beam_scorer=beam_scorer, stopping_criteria = criteria) model = ViTBert(len(vocab)) The first thing I did was to check if it would work passing random inputs, here I mimic a batch of 3 images of sizes [1, 224, 224] and text inputs and labels of shape [batch, seq_len] in the integer range of 0 to 41 (my vocab length). x = torch.rand((3, 1, 224, 224)) y = torch.randint(0, 41, (3, 50)) l = torch.randint(0, 41, (3, 50)) model(x, y, l).keys() odict_keys([‘loss’, ‘logits’, ‘encoder_last_hidden_state’]) I train it using the ‘loss’ provided by the model for a few epochs but when I try to run predictions I get the same output for different images. I did some testing and although the logits are slightly different for different input images, their argmax are always the same. x = torch.rand((2, 1, 224, 224)) y = torch.cat([torch.randint(0, 41, (1, 50))]*2) preds= model(x, y).logits.argmax(dim=2) all(preds[0] == preds[1]) True Not surprisingly, when I run the beam_search I always get the same result beam_scorer = BeamSearchScorer( batch_size=2, num_beams=6, device=model.model.decoder.device ) criteria = StoppingCriteriaList([MaxLengthCriteria(100)]) res = model.beam_search( torch.randn((2, 1, 224, 224)), torch.cat([torch.tensor([[0]])]*12), beam_scorer, criteria) all(res[0] == res[1]) True I am just started learning transformers a few weeks ago and my knowledge is very shallow, so any advice would be helpful (even if not directly related with this problem). If you interested in checking my colab notebook can be accessed here: Google Colaboratory 6 The notebook is a bit messy but I enabled commentary on that notebook so if you feel like writing down anything feel free to do so =) Looking forward to any comment. Sincerely, Passos.
Any luck with this task?
0
huggingface
Intermediate
Stopping `model.generate()` based on custom token
https://discuss.huggingface.co/t/stopping-model-generate-based-on-custom-token/3456
Hello everyone, I’ve managed to train a huggingface model that generates coherent sequences based on my training data and am using generate to create these new sequences. This has worked well enough so far however I need to stop sequence generation based on the count of a particular token that denotes the start of a subsequence in my domain. Is there a way to leverage the generate() method to do this? ie rather than generate based on length generate until n number of a particular token are generated.
I found that the best way to do this is by directly calling the model with the necessary inputs rather than using the generate method, and to build logic around this that checks the number of a particular token in the resulting sequence and stops once its reached.
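On recent transformers versions there is also a built-in hook for this: a custom StoppingCriteria passed to generate(). A sketch (the model, inputs and token id are placeholders):
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class TokenCountCriteria(StoppingCriteria):
    """Stop once `token_id` has been generated `max_count` times."""
    def __init__(self, token_id, max_count):
        self.token_id = token_id
        self.max_count = max_count

    def __call__(self, input_ids, scores, **kwargs):
        # assumes batch size 1 for simplicity
        return (input_ids[0] == self.token_id).sum().item() >= self.max_count

criteria = StoppingCriteriaList([TokenCountCriteria(token_id=subsequence_start_id, max_count=5)])
output = model.generate(input_ids, max_length=512, stopping_criteria=criteria)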
0
huggingface
Intermediate
How to exclude layers in weight decay
https://discuss.huggingface.co/t/how-to-exclude-layers-in-weight-decay/10869
My code is written in pytorch, thus I use torch.optim.adam as my optimizer. However, I need to do use Adam wright decay with some layer excluded. To be more specifically, I am trying to reproduce this tensor flow code optimizer = AdamWeightDecayOptimizer(*some parameter setting here,* exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"]) What should I do to exclude [“LayerNorm”, “layer_norm”, “bias”] in weight decay in pytorch? Could I use tensor flow optimizer in pytorch? Thank you.
You can look at how we do this for the Trainer here 23.
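For reference, the pattern used there boils down to building two parameter groups and giving one of them zero weight decay (a sketch, adapted rather than copied verbatim):
import torch

no_decay = ["bias", "LayerNorm.weight", "layer_norm"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {
        "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=5e-5)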
0
huggingface
Intermediate
How to train the embedding of special token?
https://discuss.huggingface.co/t/how-to-train-the-embedding-of-special-token/10837
I would like to add some special tokens and train the tokens. For instance, this is an input example of BERT "[CLS] this is a special token special_token [SEP] The special token is ‘special_token’ ". I guess I should use some functions like tokenizer. additional_special_tokens(‘special_token’) to add the special token. How do I train the embedding of the token? Thank you
There are 2 things you need to do in order to train additional special tokens: Add new tokens to the tokenizer. You can either add “regular” tokens, as follows: tokenizer.add_tokens(['newWord', 'newWord2']) Or you can add them as special tokens (similar to [CLS] and [SEP]) by passing the additional argument special_tokens=True. This is equivalent to calling tokenizer.add_special_tokens (the latter accepts a dictionary rather than a list). special_tokens_dict = {'additional_special_tokens': ['[C1]','[C2]','[C3]','[C4]']} num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) Resize the token embedding matrix of the model, so that it matches with the tokenizer: model.resize_token_embeddings(len(tokenizer)) Next, you can fine-tune your model on your custom dataset, and train these additional tokens. Sources: How to add some new special tokens to a pretrained tokenizer? · Issue #247 · huggingface/tokenizers · GitHub 2 how can i finetune BertTokenizer? · Issue #2691 · huggingface/transformers · GitHub 4 source code 1 of tokenization_utils_base.py
0
huggingface
Intermediate
Other aggregation on TAPAS beyond (SUM/COUNT/AVERAGE/NONE)
https://discuss.huggingface.co/t/other-aggregation-on-tapas-beyond-sum-count-average-none/3658
In the current way to fine-tune the model, is it possible to train TAPAS to learn other aggregations such difference, percentages etc ? If it is possible, can you please point to some documentation?
Hi, Yes it is possible to train TAPAS on other custom aggregations. You can change the number of aggregation operators in TapasConfig, like so: from transformers import TapasConfig config = TapasConfig(num_aggregation_heads=10) and then initialize a TapasForQuestionAnswering model with a pre-trained base and your custom head on top: from transformers import TapasForQuestionAnswering model = TapasForQuestionAnswering.from_pretrained('google/tapas-base', config=config) For more information, see the fine-tuning guide of TAPAS here 1.
0
huggingface
Intermediate
Using Transformers with DistributedDataParallel — any examples?
https://discuss.huggingface.co/t/using-transformers-with-distributeddataparallel-any-examples/10775
Hi! I’ve been consulting the “Model Parallelism” page on huggingface.co, which says using DDP with transformers is “almost trivial”. But is there an example available? Am I supposed to follow https://pytorch.org/tutorials/intermediate/ddp_tutorial.html just as if I were working with a regular PyTorch model and its optimizer exposed (as opposed to having it abstracted via transformers.Trainer)? Also, I have some Dataset-related questions. I’ve written a custom dataset class that extends torch.Dataset. My dataset class yields samples from stored binary chunks with pre-shuffled, pre-tokenized data (to maximize reading speed within a chunk). Therefore, I had to disable Trainer’s shuffling behavior by replacing RandomSampler with SequentialSampler within Trainer._get_train_sampler. Will this hack work with DDP? Would it work if I switched to another distributed backend, like deepspeed? Is there a better way to do this?
Then you just need to properly launch your training script, see here 1.
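In practice, for a Trainer-based script this usually means launching with the distributed launcher so that one process per GPU is spawned (the script name is a placeholder; Trainer picks up the local rank for you):
python -m torch.distributed.launch --nproc_per_node 4 my_training_script.py
# on newer PyTorch versions the equivalent is:
torchrun --nproc_per_node 4 my_training_script.py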
1
huggingface
Intermediate
ByT5: problem with tokenizer.decode()
https://discuss.huggingface.co/t/byt5-problem-with-tokenizer-decode/10359
[EDIT]: there is a bug in 4.11.0. Going back to 4.9.2 solves the issue described here (but can it create others, like this one: ByT5 tokenizer gives indices of chars instead of bytes?) Hi. I’ve created a Colab notebook to show a problem when using google/byt5-small from the Hugging Face model hub with model.generate(). Observations: more specifically, the problem comes from the method tokenizer.convert_tokens_to_string() in the source code for transformers.models.byt5.tokenization_byt5. The same problem happens with google/byt5-base. If someone could run my notebook and tell me what I did wrong or what a solution could be, I would appreciate it, because this problem, besides preventing using ByT5 in inference, prevents its fine-tuning, since when evaluating the model at the end of an epoch the method tokenizer.convert_tokens_to_string() is called by the script … which suddenly fails. Thanks. cc @patrickvonplaten, @valhalla, @sshleifer (screenshots from the notebook omitted)
@patrickvonplaten closed this issue (see explanation) with the return of errors="ignore" in decode("utf-8", errors="ignore") (see commit).
1
huggingface
Intermediate
How to get a model on patent data for question answering
https://discuss.huggingface.co/t/how-to-get-a-model-on-patent-data-for-question-answering/10782
Dear list, I want to have a question answering model for US patent text. For example, I want to ask it to read a patent’s text and ask questions such as ‘what is the specific problem to solve in this text?’. I tried with some general question answering models such as ‘distilbert-base-cased-distilled-squad’ but the answers were not satisfactory. Now I am considering if I can get a better model through fine-tuning the model with patent data. So, I wonder if this is the right approach and if it is, then how can I fine-tune a model with patent data so that I can get more satisfactory answers? Thanks in advance.
seunghon: “a question answering model for US patent text. For example, I want to ask it to read a patent’s text and ask questions such as ‘what is the specific problem to solve in this text?’. I tried with some general question answering models such as ‘distilbert-base-cased-” You’re going to have to fine-tune as you said; luckily you can fine-tune on SQuAD-style data pretty easily. See the question-answering examples here: transformers/examples/pytorch/question-answering at master · huggingface/transformers 1. In a Python notebook, import your data into a Pandas dataframe and export the table so that it matches the schema of the SQuAD dataset (see Streamlit 1). In this case, you need 5 fields in your exported file: id, title, context, question and answers. Once you’ve formatted your data to the schema and exported the JSON/CSV locally, run the run_qa.py file and pass the train and test/validation files like so:
python run_qa.py \
    --model_name_or_path bert-base-uncased \
    --train_file=train-v1.1.json \
    --validation_file=dev-v1.1.json
And of course pass any other (hyper)parameters that you have for your fine-tuning task.
0
huggingface
Intermediate
ERROR: vars() argument must have __dict__ attribute when trying to use trainer.train()?
https://discuss.huggingface.co/t/error-vars-argument-must-have-dict-attribute-when-trying-to-use-trainer-train/10708
I have the following model that I am trying to fine-tune (CLIP_ViT + classification head). Here’s my model definition:

class CLIPNN(nn.Module):

    def __init__(self, num_labels, pretrained_name="openai/clip-vit-base-patch32", dropout=0.1):
        super().__init__()
        self.num_labels = num_labels
        # load pre-trained transformer & processor
        self.transformer = CLIPVisionModel.from_pretrained(pretrained_name)
        self.processor = CLIPProcessor.from_pretrained(pretrained_name)
        # initialize other layers (head after the transformer body)
        self.classifier = nn.Sequential(
            nn.Linear(512, 128, bias=True),
            nn.ReLU(inplace=True),
            nn.Dropout(p=dropout, inplace=False),
            nn.Linear(128, self.num_labels, bias=True))

    def forward(self, inputs, labels=None, **kwargs):
        logits = self.classifier(inputs)
        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
        )

I also have the following definition for a dataset:

class CLIPDataset(nn.utils.data.Dataset):
    def __init__(self, embeddings, labels):
        self.embeddings = embeddings
        self.labels = labels

    def __getitem__(self, idx):
        item = {"embeddings": nn.Tensor(self.embeddings[idx])}
        item['labels'] = nn.LongTensor([self.labels[idx]])
        return item

    def __len__(self):
        return len(self.labels)

Note: here I am assuming that the model is fed pre-computed embeddings and does not compute embeddings, I know this is not the right logic if I want to fine-tune the CLIP base model, I am just trying to get my code to work.

Something like this throws an error:

model = CLIPNN(num_labels=2)
train_data = CLIPDataset(train_data, y_train)
test_data = CLIPDataset(test_data, y_test)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=test_data
)
trainer.train()

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 trainer.train()

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1256             self.control = self.callback_handler.on_epoch_begin(args, self.state, self.control)
   1257
-> 1258             for step, inputs in enumerate(epoch_iterator):
   1259
   1260                 # Skip past any already trained steps if resuming training

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
    515             if self._sampler_iter is None:
    516                 self._reset()
--> 517             data = self._next_data()
    518             self._num_yielded += 1
    519             if self._dataset_kind == _DatasetKind.Iterable and \

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
    555     def _next_data(self):
    556         index = self._next_index()  # may raise StopIteration
--> 557         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    558         if self._pin_memory:
    559             data = _utils.pin_memory.pin_memory(data)

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     45         else:
     46             data = self.dataset[possibly_batched_index]
---> 47         return self.collate_fn(data)

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/data/data_collator.py in default_data_collator(features, return_tensors)
     64
     65     if return_tensors == "pt":
---> 66         return torch_default_data_collator(features)
     67     elif return_tensors == "tf":
     68         return tf_default_data_collator(features)

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/data/data_collator.py in torch_default_data_collator(features)
     80
     81     if not isinstance(features[0], (dict, BatchEncoding)):
---> 82         features = [vars(f) for f in features]
     83     first = features[0]
     84     batch = {}

~/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/transformers/data/data_collator.py in <listcomp>(.0)
     80
     81     if not isinstance(features[0], (dict, BatchEncoding)):
---> 82         features = [vars(f) for f in features]
     83     first = features[0]
     84     batch = {}

TypeError: vars() argument must have __dict__ attribute

Any clue where I’m going wrong?
It looks like your train_data variable is used for two different things. Are you sure you passed the instance of your CLIPDataset to the Trainer? Because from the error message it looks like the elements of the Trainer’s training dataset are not dictionaries.
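To make that concrete, here is a minimal sketch along the lines of the answer, with distinct variable names so the Trainer definitely receives the dataset instances and each item is a plain dict of tensors (train_embeddings, test_embeddings, y_train and y_test are hypothetical stand-ins for the pre-computed data; model and training_args are the ones from the question):

import torch
from torch.utils.data import Dataset

class CLIPDataset(Dataset):
    def __init__(self, embeddings, labels):
        self.embeddings = embeddings
        self.labels = labels

    def __getitem__(self, idx):
        # each item is a plain dict of tensors, which the default data collator can handle;
        # the "inputs" key matches the first argument of CLIPNN.forward
        return {
            "inputs": torch.tensor(self.embeddings[idx], dtype=torch.float),
            "labels": torch.tensor(self.labels[idx], dtype=torch.long),
        }

    def __len__(self):
        return len(self.labels)

# keep the raw arrays and the wrapped datasets in separately named variables
train_dataset = CLIPDataset(train_embeddings, y_train)
eval_dataset = CLIPDataset(test_embeddings, y_test)

trainer = Trainer(model=model, args=training_args,
                  train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()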
0
huggingface
Intermediate
Open-sourcing better cross-encoders for STILTS and better IR?
https://discuss.huggingface.co/t/open-sourcing-better-cross-encoders-for-stilts-and-better-ir/10611
Hi @nreimers, I find your research on bi-encoders and the models on sbert.net 3 super helpful. Based on your research I understand that cross-encoders generally perform better than bi-encoders, while their main disadvantage is computational speed. I’m very interested in deepening my research on cross-encoders, but I noticed that you’ve published comparatively few cross-encoders here: cross-encoder (Sentence Transformers - Cross-Encoders) 2. My question: would you consider publishing improved cross-encoders, either trained on your paraphrase data or on the ‘all’ data from the FLAX event (‘all-mpnet…’ etc.)? I feel this would have great added value for the HF and research community, because:
- Improved cross-encoders trained on more diverse data could serve as better STILTs for sequential transfer learning applications (see here https://arxiv.org/pdf/1811.01088.pdf 1).
- Your bi-encoders are probably already good STILTs, but I imagine that cross-encoders would be even better. Using these intermediate models for task-specific fine-tuning would probably be a super easy way for people to get improved performance on many tasks - just by taking your cross-encoder as the base model instead of BERT-base etc.
- Having high-performance cross-encoders would also be useful for implementing BM25 & cross-encoder reranking for information retrieval applications etc.
Would you consider publishing improved cross-encoders? (Maybe there are technical reasons why your paraphrase or ‘all’ data cannot be used for cross-encoders, and that’s why none are published with this data?) Best, Moritz
Hi, happy to hear that! Better cross-encoders trained on larger datasets are on my agenda. However, training them is not so straightforward: for bi-encoders, you can use the other examples in a batch as negatives, while for cross-encoders you have to create the negative pairs explicitly, and the way those negative pairs are created plays an extremely important role. I hope I will soon be able to train these models, but setting up the training etc. takes some effort. Best, Nils
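For readers who want to experiment themselves, here is a rough sketch of how a cross-encoder can be trained with the sentence-transformers library. The pairs below are made up, and the hard part Nils describes (mining good negative pairs) is exactly what this toy example glosses over:

from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

# toy data: label 1.0 for paraphrase pairs, 0.0 for naively chosen negatives
train_samples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=1.0),
    InputExample(texts=["A man is eating food.", "The girl is carrying a baby."], label=0.0),
]

model = CrossEncoder("distilroberta-base", num_labels=1)
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)

# a single pass over the toy data; real training needs far more pairs and epochs
model.fit(train_dataloader=train_dataloader, epochs=1, warmup_steps=10)
model.save("my-cross-encoder")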
0
huggingface
Intermediate
Pipelines for multiple inputs don’t produce reliable results
https://discuss.huggingface.co/t/pipelines-for-mutliple-inputs-dont-produce-reliable-results/10466
I am using a text classification pipeline (‘sentiment-analysis’) with a fine-tuned ELECTRA model and transformers version 4.5.1. For some reason, calling the pipeline on a list of inputs produces different outputs for each input than applying the pipeline to each input individually. Why is that? I went through the patch notes but couldn’t see any fix for this issue, so I’m not sure whether it still persists in recent versions.
I found out that this is not related to the transformers package, but is probably due to PyTorch optimisations: operations can happen in different orders depending on the input tensor, and since float operations are inexact, this may lead to different results for the same input when it is batched together with other input sentences. As this depends not only on the shape of the tensor but also on its content, the only safe way to generate exactly the same output is to apply the pipeline to one input sentence at a time. For most use cases, class probabilities changing by values around 10^-6 don’t matter, but if you require exact results, be aware of this issue!
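A small sketch of the comparison described above, with a hypothetical checkpoint name standing in for the fine-tuned ELECTRA model; the batched scores can differ from the single-sentence scores in the last decimal places:

from transformers import pipeline

# "my-finetuned-electra" is a placeholder for your own checkpoint
classifier = pipeline("sentiment-analysis", model="my-finetuned-electra")

sentences = ["I really liked this.", "This was a waste of time."]

batched = classifier(sentences)                     # all inputs in one call
one_by_one = [classifier(s)[0] for s in sentences]  # reproducible per sentence

for b, s in zip(batched, one_by_one):
    # differences on the order of 1e-6 are possible for the same sentence
    print(b["score"] - s["score"])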
1
huggingface
Intermediate
A new dataset for multi-label text classification
https://discuss.huggingface.co/t/a-new-dataset-for-multi-label-text-classification/10408
Soumik and I are pleased to share a new NLP dataset for multi-label text classification. The dataset consists of paper titles, abstracts, and term categories scraped from arXiv. Find the dataset on Kaggle: arXiv Paper Abstracts | Kaggle 2. We are also releasing our data collection pipeline, which is based on Apache Beam, can be run at scale on Cloud Dataflow (GCP), and can be used to accumulate an even bigger dataset with ease. To help the community get started quickly, we have authored a blog post that shows how to build a simple baseline model for a smaller version of the dataset. More details are here: GitHub - soumik12345/multi-label-text-classification 7
Cool! It would be great if you uploaded this dataset to the Hub. Here’s a guide: Share — datasets 1.12.1 documentation 2
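If it helps, here is a rough sketch of one way to do the upload with the datasets library, assuming a recent version that provides push_to_hub; the file name and repo id are placeholders:

from datasets import load_dataset

# "arxiv_data.csv" and "username/arxiv-paper-abstracts" are hypothetical names
ds = load_dataset("csv", data_files="arxiv_data.csv")
ds.push_to_hub("username/arxiv-paper-abstracts")  # requires `huggingface-cli login` first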
0
huggingface
Intermediate
FlaxGPTNeoForCausalLM generates the same text regardless of seed, temperature, top_k and top_p values
https://discuss.huggingface.co/t/flaxgptneoforcausallm-generates-the-same-text-regardless-of-seed-temperature-top-k-and-top-p-values/9602
Hello, I was trying to generate text using flax (just as an experiment to see if it works well on a TPU-VM machine). However, no matter how I tried, it always generates the exact same text for a given prompt. This happened both on a TPU-VM as well as local CPU inference. Here is a short code snippet which demonstrates the problem I encountered:

from transformers import FlaxGPTNeoForCausalLM, AutoTokenizer

model_name = 'EleutherAI/gpt-neo-125M'
model = FlaxGPTNeoForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt_text = "Hello there, my name is"
generated_max_length = 50

# Changing the seed value, does not seem to change the outcome
seed = 1001
model.seed = seed
model.config.pad_token_id = model.config.eos_token_id
inputs = tokenizer(prompt_text, return_tensors="jax")

# Changing temperature, top_k and top_p does not seem to change the outcome
outputs = model.generate(
    input_ids = inputs["input_ids"],
    max_length=generated_max_length,
    do_sample=True,
    temperature=0.8,
    early_stopping=True,
    top_k=50,
    top_p=0.90)

output_sequence = outputs['sequences'].squeeze(0)
text = tokenizer.decode(output_sequence, clean_up_tokenization_spaces=True)
print(text)

# Always prints:
# Hello there, my name isergus, and I was presented a competition looking for a library keeper with the 31-K--Goods Wallace Leisure library manager on 10-11-08 447-5721. It involves teaching a Royally

I also tried calling jax.random.PRNGKey(seed) which didn’t help, as well as other methods such as:

model.top_p = 0.9
model.top_k = 50
jit_generate = jax.jit(model.generate)
# jit_generate( inputs["input_ids"], ....

I assume I’m doing something very wrong, but I was not able to find any example code for generating text with FlaxGPTNeoForCausalLM (I did find examples for training it). I hope I posted this in the right forum. Regards, Doron
Update: Tried again on the latest Master branch (transformers-4.11.0.dev0) and this time I was able to get it to generate a different output by changing the seed. However, changing temperature, top_k and top_p still does not seem to have an influence on the outcome.

import jax
from transformers import FlaxGPTNeoForCausalLM, AutoTokenizer

model_name = 'EleutherAI/gpt-neo-125M'
model = FlaxGPTNeoForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt_text = "Hello there, my name is"
generated_max_length = 50

# Changing the seed and thus the prng_key value below, does seem to change the outcome.
seed = 1000
model.seed = seed
inputs = tokenizer(prompt_text, return_tensors="np")

# Changing temperature, top_k and top_p does not seem to change the outcome
outputs = model.generate(
    input_ids = inputs["input_ids"],
    max_length=generated_max_length,
    pad_token_id = model.config.eos_token_id,
    prng_key=jax.random.PRNGKey(seed),
    temperature=1.0,
    early_stopping=True,
    top_k=50,
    top_p=0.95,
    do_sample=True,
    no_repeat_ngram_size=4)

output_sequence = outputs['sequences'].squeeze(0)
text = tokenizer.decode(output_sequence, clean_up_tokenization_spaces=True)
print(text)
0
huggingface
Intermediate
Custom GPT2 Model won’t load after training
https://discuss.huggingface.co/t/custom-gpt2-model-wont-load-after-training/10017
Environment info

transformers version: 4.10.2
Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.29
Python version: 3.8.10
PyTorch version (GPU?): 1.8.1+cu102 (True)
Tensorflow version (GPU?): 2.4.1 (False)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?:
Using distributed or parallel set-up in script?:

Information

Model I am using: GPT2PreTrainedModel.

The problem arises when using:
[ ] the official example scripts: (give details below)
[x] my own modified scripts: (give details below)

The tasks I am working on is:
[ ] an official GLUE/SQUaD task: (give the name)
[x] my own task or dataset: (give details below)

The Problem

I was able to train my custom-built model, but I am not able to load it with the from_pretrained() function. By the way, I don’t save the model manually, if that is important; the saving is done by the Huggingface Trainer.

The error message:

model = CustomGPTModel.from_pretrained("results/checkpoint-19065", config=config)
  File "/home/flo/PycharmProjects/EET2/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1325, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
TypeError: __init__() missing 1 required positional argument: 'config'

I load the model like this:

config = AutoConfig.from_pretrained("results/checkpoint-19065")
model = CustomGPTModel.from_pretrained("dbmdz/german-gpt2", config=config)
# custom = CustomGPTModel(model=model, config=config)

training_args = TrainingArguments(
    output_dir='./results',              # output directory
    per_device_train_batch_size=1,       # batch size per device during training
    per_device_eval_batch_size=1,        # batch size for evaluation
    logging_dir='./logs/event/',         # directory for storing logs
)

trainer = Trainer(
    model=model,                         # the instantiated 🤗 Transformers model to be trained
    # model=custom,                      # the instantiated 🤗 Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    compute_metrics=compute_everything,
)

trainer.predict(test_dataset=test_dataset)

As you can tell from the commented code, I tried a lot of different approaches, to no avail. Other approaches I tried:

config = AutoConfig.from_pretrained("results/checkpoint-19065")
model = CustomGPTModel.from_pretrained("results/checkpoint-19065", config=config)
# or
config = AutoConfig.from_pretrained("results/checkpoint-19065")
model = CustomGPTModel.from_pretrained("results/checkpoint-19065")

Anyway, the question is: how do I load my custom model? I think it is because of the way I initialize the CustomGPTModel (see below).

The Task / More Information on what I am Doing

I am training "dbmdz/german-gpt2" on a multilabel classification task. For this I had to create my own model by subclassing GPT2PreTrainedModel.
This is what the model looks like:

class CustomGPTModel(GPT2PreTrainedModel):
    def __init__(self, model, config):
        super(CustomGPTModel, self).__init__(config)
        self.num_labels = config.num_labels
        self.init_weights()

        ### Architecture:
        self.transformer = model
        self.linear1 = nn.Linear(config.n_embd, 256)
        self.score = nn.Linear(256, self.num_labels, bias=False)
        self.dropout = nn.Dropout(p=0.2)
        self.sig = nn.Sigmoid()
        self.relu = nn.ReLU()

        # Model parallel
        self.model_parallel = False
        self.device_map = None

    def forward(self, input_ids=None, past_key_values=None, attention_mask=None,
                token_type_ids=None, position_ids=None, head_mask=None,
                inputs_embeds=None, labels=None, use_cache=None,
                output_attentions=None, output_hidden_states=None, return_dict=None,
                ):
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        transformer_outputs = self.transformer(
            input_ids,
            past_key_values=past_key_values,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        hidden_states = transformer_outputs[0]                # call model
        hdn_2 = self.linear1(hidden_states)                   # first linear
        logits = self.score(self.dropout(self.relu(hdn_2)))   # apply activation/dropout and final layer

        if input_ids is not None:
            batch_size, sequence_length = input_ids.shape[:2]
        else:
            batch_size, sequence_length = inputs_embeds.shape[:2]

        assert (
            self.config.pad_token_id is not None or batch_size == 1
        ), "Cannot handle batch sizes > 1 if no padding token is defined."

        if self.config.pad_token_id is None:
            sequence_lengths = -1
        else:
            if input_ids is not None:
                sequence_lengths = torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1

        pooled_logits = logits[range(batch_size), sequence_lengths]

        loss = None
        if labels is not None:
            loss_fct = BCEWithLogitsLoss()
            loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1, self.num_labels))
            return (loss, pooled_logits)
        else:
            return logits

Here I initialize the model for training:

training_args = TrainingArguments(
    output_dir='./results',              # output directory
    num_train_epochs=10,                 # total number of training epochs
    per_device_train_batch_size=1,       # batch size per device during training
    per_device_eval_batch_size=1,        # batch size for evaluation
    warmup_steps=500,                    # number of warmup steps for learning rate scheduler
    weight_decay=0.01,                   # strength of weight decay
    logging_dir='./logs/event/',         # directory for storing logs
    logging_steps=1000,
    load_best_model_at_end=True,
    evaluation_strategy="epoch",         # Evaluation is done (and logged) every eval_steps
    save_strategy="epoch",
    # logging_first_step = True,
    do_eval=True,
)

trainer = Trainer(
    model=custom_gpt2,                   # the instantiated 🤗 Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=val_dataset,            # evaluation dataset
    compute_metrics=compute_everything,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()

Expected behavior

The model should load as expected. I have been trying to fix this for two days now, so creating an issue was the last straw. Hopefully someone can explain what I am doing wrong. If someone needs more information, please tell me!
You should either: use the regular torch.load to load the weights of your model, or make sure your custom model class subclasses PreTrainedModel and is initialized with a single config (like all Transformers models) if you want to use from_pretrained.
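To illustrate the second option, here is a sketch of how the custom class from the question could be rewritten so that __init__ only takes a config and builds the backbone itself; the forward method from the original post stays unchanged:

from torch import nn
from transformers import GPT2PreTrainedModel, GPT2Model

class CustomGPTModel(GPT2PreTrainedModel):
    def __init__(self, config):                 # single positional argument: the config
        super().__init__(config)
        self.num_labels = config.num_labels
        self.transformer = GPT2Model(config)    # built from the config instead of being passed in
        self.linear1 = nn.Linear(config.n_embd, 256)
        self.score = nn.Linear(256, self.num_labels, bias=False)
        self.dropout = nn.Dropout(p=0.2)
        self.relu = nn.ReLU()
        self.init_weights()

    # forward(...) as in the original post

# the checkpoint saved by the Trainer can now be reloaded directly
model = CustomGPTModel.from_pretrained("results/checkpoint-19065")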
1
huggingface
Intermediate
Optimal methods to monitor attention matrices when doing training/inference using BERT-type models
https://discuss.huggingface.co/t/optimal-methods-to-monitor-attention-matrices-when-doing-training-inference-using-bert-type-models/9822
Our team is using BERT/RoBERTa from the huggingface transformers library for sequence classification (amongst other tasks). We are looking for an efficient way to monitor the attention matrices so as to understand what the model is doing during inference (i.e. the model made this prediction because it is focusing on these words, etc.). Are there any useful code snippets for this kind of analysis? Often the models make odd predictions, and it’s hard to understand why. How are other teams managing this process? We want to avoid large, bloated (graphical) tools and would prefer simplicity. Thanks!
Just checking you have seen BertViz 4 as a source of ideas if nothing else.
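If you want something even lighter than a visualisation tool, the raw attention matrices can be pulled straight out of the model with output_attentions=True. A minimal sketch, using bert-base-uncased as a stand-in for your fine-tuned checkpoint:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"  # stand-in for your fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, output_attentions=True)

inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each of shape (batch, heads, seq_len, seq_len)
last_layer = outputs.attentions[-1][0]     # (heads, seq_len, seq_len)
cls_attention = last_layer.mean(dim=0)[0]  # average over heads, attention row for [CLS]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, weight in zip(tokens, cls_attention):
    print(f"{token:>12}  {weight:.3f}")

This prints, per token, how much the [CLS] position attends to it in the last layer; during training you could log the same tensors from a callback instead of printing them.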
0
huggingface
Intermediate
Class weights in Trainer() instance
https://discuss.huggingface.co/t/class-weights-in-trainer-instance/9882
Hello everyone, I know the question isn’t new, but I wanted to see if there have been new features/developments on this: is there a way to easily add class weights while fine-tuning a BERT model? I am doing the fine-tuning with the Trainer() instance. Cheers
You can override the loss computation any way you like by subclassing the Trainer, as documented here 4.
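For reference, a sketch of that documented pattern for a 2-class setup; the weight values here are assumptions and should be derived from your own label distribution:

import torch
from torch import nn
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.get("labels")
        outputs = model(**inputs)
        logits = outputs.get("logits")
        # hypothetical weights for an imbalanced 2-class problem
        weights = torch.tensor([1.0, 3.0], device=logits.device)
        loss_fct = nn.CrossEntropyLoss(weight=weights)
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss

WeightedLossTrainer is then constructed and used exactly like the normal Trainer (same model, args, datasets and so on).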
0
huggingface
Intermediate
BART from finetuned BERT
https://discuss.huggingface.co/t/bart-from-finetuned-bert/5246
Hi all! Is it possible to use a pretrained BERT model to initialize the encoder part of an encoder-decoder model like BART, leaving the decoder uninitialized (or random), and then do fine-tuning on some seq2seq task? How should I proceed if it’s possible? Does someone know of previous instances where something like that has been tried? Thanks in advance! Best, Gabriel.
I am not entirely sure about BART, but you can check out this: transformers/modeling_encoder_decoder.py at master · huggingface/transformers · GitHub 8. You can also read the publication linked in the comments; I think it is similar to what you want to achieve.
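Not BART itself, but the EncoderDecoderModel class referenced above lets you warm-start a seq2seq model from a pretrained BERT checkpoint. A rough sketch (here the decoder is also initialised from BERT, with its cross-attention weights left randomly initialised, which is close to the "random decoder" idea in the question):

from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# encoder and decoder both start from bert-base-uncased;
# the decoder's cross-attention layers are newly (randomly) initialised
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# settings needed before seq2seq fine-tuning / generation
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size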
0
huggingface
Intermediate
Save custom transformer as PreTrainedModel
https://discuss.huggingface.co/t/save-custom-transformer-as-pretrainedmodel/9768
I have a custom BERT-like model (with modified attention) that I pretrained with PyTorch. Data preparation was done with a Huggingface tokenizer. Now I want to integrate this PyTorch model into the Huggingface environment so it can be used in pipelines and for fine-tuning as a PreTrainedModel. How do I generate the necessary config files?
There might be a better way, but I would:
- subclass PreTrainedModel with your own class
- load your trained model weights into this new class
- run yourmodel.save_pretrained() to save the model weights (and config) for this class
- now you can do YourCustomModel.from_pretrained(), as it can use those HF methods
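A rough sketch of those steps, with a placeholder module standing in for the modified-attention encoder; all class, file and directory names here are hypothetical:

import torch
from torch import nn
from transformers import PretrainedConfig, PreTrainedModel

class CustomConfig(PretrainedConfig):
    model_type = "custom-bert"  # hypothetical identifier

    def __init__(self, hidden_size=768, vocab_size=30522, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size
        self.vocab_size = vocab_size

class CustomModel(PreTrainedModel):
    config_class = CustomConfig

    def __init__(self, config):
        super().__init__(config)
        # placeholder layer: swap in your modified-attention encoder here
        self.encoder = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, hidden_states):
        return self.encoder(hidden_states)

config = CustomConfig()
model = CustomModel(config)
# model.load_state_dict(torch.load("pytorch_model.bin"))  # your trained weights
model.save_pretrained("my-custom-bert")                   # writes config.json + weights
reloaded = CustomModel.from_pretrained("my-custom-bert")

save_pretrained generates the config.json asked about in the question; the weights file sits next to it in the same directory.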
0
huggingface
Intermediate
Create DPR Tokenizer for non-Bert model
https://discuss.huggingface.co/t/create-dpr-tokenizer-for-non-bert-model/9735
Hello everyone, is there a way to create a non-BERT tokenizer for a DPR model? And how much effort would it be to create one? Maybe someone has experience converting models from the original library to transformers (I know there is a script for this, but it doesn’t show what to do with tokenizers).
Just want to add more context. I’ve been trying to create DPR for my language, but now I can’t use it with transformers because DPRContextEncoderTokenizer inherits specifically from BertTokenizer, not PreTrainedTokenizer. I think it would be helpful to change it to something more general. Can someone help me figure out the hierarchy of classes, or maybe design some specific behaviour (for example, hold the actual tokenizer inside DPRTokenizer and then just override from_pretrained)?
0