Columns: docs (stringclasses, 4 values); category (stringlengths, 3–31); thread (stringlengths, 7–255); href (stringlengths, 42–278); question (stringlengths, 0–30.3k); context (stringlengths, 0–24.9k); marked (int64, 0–1)
huggingface
Beginners
Using GPU with transformers
https://discuss.huggingface.co/t/using-gpu-with-transformers/1827
Hi! I am pretty new to Hugging Face and I am struggling with next sentence prediction model. I would like it to use a GPU device inside a Colab Notebook but I am not able to do it. This is my proposal: tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased', return_dict=True) model.to("cuda:0") prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." next_sentence = "The sky is blue due to the shorter wavelength of blue light." tokenizer_output = tokenizer(prompt, next_sentence, return_tensors='pt') tokens_tensor = tokenizer_output['input_ids'].to('cuda:0') token_type_ids = tokenizer_output['token_type_ids'].to('cuda:0') attention_mask = tokenizer_output['attention_mask'].to('cuda:0') encoding = {'input_ids' : tokens_tensor, 'token_type_ids' : token_type_ids, 'attention_mask' : attention_mask} outputs = model(**encoding) logits = outputs.logits print(logits) # next sentence was random However, it does not work. The GPU device is properly set, since I used the following command to check it: import torch; print(torch.cuda.get_device_name(0)) I would appreciate any help
What do you mean by “doesn’t work”? What errors do you get, or how do you know the GPU isn’t being used?
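For reference, the usual pattern (a sketch using the same checkpoint as in the question, assuming a CUDA device is available) is to put the model and every tensor returned by the tokenizer on the same device; a dict comprehension avoids moving each tensor by hand:

import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased", return_dict=True)
model.to(device)
model.eval()

prompt = "In Italy, pizza served in formal settings is presented unsliced."
next_sentence = "The sky is blue due to the shorter wavelength of blue light."

# Every input tensor must live on the same device as the model.
encoding = tokenizer(prompt, next_sentence, return_tensors="pt")
encoding = {k: v.to(device) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**encoding).logits
print(logits)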
0
huggingface
Beginners
Fine-tuning gpt-2 run_clm.py stops early
https://discuss.huggingface.co/t/fine-tung-gpt-2-run-clm-py-stops-early/1849
Hi, I’m running run_clm.py to fine-tune gpt-2 form the huggingface library, following the language_modeling example: !python run_clm.py \ --model_name_or_path gpt2 \ --train_file train.txt \ --validation_file test.txt \ --do_train \ --do_eval \ --output_dir /tmp/test-clm This is the output, the process seemed to be started but there was the ^C appeared to stop the process: The following columns in the training set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: . The following columns in the evaluation set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: . ***** Running training ***** Num examples = 2318 Num Epochs = 3 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 870 0% 0/870 [00:00<?, ?it/s]^C Here’s my environment info: transformers version: 3.4.0 Platform: Linux-4.19.112±x86_64-with-Ubuntu-18.04-bionic Python version: 3.6.9 Tensorflow version: 1.14 Using GPU in script?: yes What would be the possible triggers of the early stopping?
Hi, if you are running TensorFlow, don’t you need TFGPT2LMHeadModel rather than GPT2LMHeadModel?
0
huggingface
Beginners
Customize tokenizer in model card’s widget
https://discuss.huggingface.co/t/customize-tokenizer-in-model-cards-widget/1836
I trained a Chinese RoBERTa model. In the model card, the widget uses the tokenizer defined in config.json (RobertaTokenizer), but my model uses BertTokenizer. Can I customize the tokenizer in the model card’s widget, just like I can choose any combination of model and tokenizer in a pipeline? I tried to use BertModel instead of RobertaModel (copying the weights from RoBERTa to BERT), but the position embeddings are different and the outputs are different… So I have to use this combination of RobertaModel and BertTokenizer. Does that mean I can’t use the inference widget?
We are working on adding this indeed (as I understand it, it’s the same as https://github.com/huggingface/transformers/issues/8125)
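Not from the thread, but depending on the transformers version one possible workaround is to declare the tokenizer class directly in config.json, so that AutoTokenizer (which the hosted tooling relies on, as far as I understand) loads BertTokenizer instead of the default RobertaTokenizer. A sketch with a hypothetical model ID:

from transformers import AutoConfig

model_id = "your-username/chinese-roberta"        # hypothetical repository name
config = AutoConfig.from_pretrained(model_id)
config.tokenizer_class = "BertTokenizer"          # recorded as "tokenizer_class" in config.json
config.save_pretrained("chinese-roberta-fixed")   # then upload the updated config.json to the repo
# Once the updated config.json is in the repo, AutoTokenizer.from_pretrained(model_id)
# should return a BertTokenizer rather than a RobertaTokenizer.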
0
huggingface
Beginners
Estimating Training Time for Fine Tuning
https://discuss.huggingface.co/t/estimating-training-time-for-fine-tuning/1808
I have a dataset with about 15,000 documents, containing about 7GB of uncompressed text, which I’m hoping to use for fine-tuning the Megatron-11b model, but before I get started, I’d like to get a rough estimate of the cost. If I train in the cloud, I’ll probably use AWS P3 or G4 instances. Roughly how many instance-hours should I expect to need to complete the fine-tuning? At a certain price point, I might just decide to buy my own hardware and do the training on-prem… What are the minimum hardware specs I’d need to be able to perform this task without running out of memory? The pre-trained Megatron-11b model is 19GB of data… So does that mean I could perform the fine-tuning with only a single 24GB graphics card? Or would I need more than that?
maybe @teven?
0
huggingface
Beginners
Bert LM pretraining: training loss goes to 0 at masking probability of 0.999
https://discuss.huggingface.co/t/bert-lm-pretraining-training-loss-goes-to-0-at-masking-probability-of-0-999/1512
Hi, I am using the Trainer class to perform masked language modeling with a pretrained Bert checkpoint (fine-tune on own domain). I’m using the official run_language_modeling.py script from https://github.com/huggingface/transformers/tree/master/examples/language-modeling 8. I merely changed it to use a different dataset implementation, as the default ones load the full data into memory. But __getitem__ works the same, returning a token-id converted sequence at a time, so this should not matter. Even when setting the mlm probability to something ridiculous like 0.999, this is what I observe: image2400×745 46.6 KB The loss becomes zero rather quickly (full dataset would be 40k training steps) This is the command I’m using, working on google cloud. python3 xla_spawn.py --num_cores=8 train_mlm.py \ --output_dir=real_runs/6 \ --model_type=bert \ --model_name_or_path=Rostlab/prot_bert \ --do_train \ --train_data_file=/home/preprocessed_allorgs_alllengths.txt \ --mlm \ --line_by_line \ --block_size 512 \ --max_steps 30000 \ --out_of_core \ --logging_steps 20 \ --learning_rate 0.00001 \ --per_device_train_batch_size 20 \ --lazy \ --run_name high_mlm_prob \ --save_steps 2000 \ --warmup_steps 2666 \ --weight_decay 0.01 \ --mlm_probability=0.9999 (train_mlm.py is run_language_modeling.py with a custom dataset class, --lazy is an arg for that dataset. The rest is default, didn’t change anything about it.) Am I missing something with regards to the setup? This just seems wrong in general, and not an issue with the dataset.
Maybe the model is almost perfect already (?) What is mLm probability, and is it supposed to have a capital L? Is your data very similar to the data originally used to pre-train BERT? What value of loss do you get if you try to fine-tune using the data originally used to pre-train BERT? Maybe there are a few differences, but BERT has encountered enough examples of each difference to learn the patterns within the first 150 steps.
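For reference, mlm_probability is the per-token chance of being selected for prediction; of the selected tokens, 80% are replaced by [MASK], 10% by a random token, and 10% are left unchanged, and only those positions contribute to the loss. A small sketch with the standard collator (assuming a recent transformers version; the masking is random, so rerun it a few times):

import torch
from transformers import BertTokenizerFast, DataCollatorForLanguageModeling

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

ids = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")["input_ids"][0]
batch = collator([ids])

print(tokenizer.decode(batch["input_ids"][0]))  # some tokens replaced by [MASK] or a random token
print(batch["labels"][0])                       # -100 everywhere except the selected positions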
0
huggingface
Beginners
Correct way to structure BERT for genetic segmentation?
https://discuss.huggingface.co/t/correct-way-to-structure-bert-for-genetic-segmentation/340
I’m doing research on gene segmentation, and I’m currently trying to implement a BERT network. I have a database of 350 GB database (~300M proteins) of unlabelled proteins, where each protein is a sequence of letters with various lengths, and each letter correspond to an amino acid. There are only about 20 different amino acids, so they are easy to tokenize. And should be easy to use for a BERT model to predict. (here is an example of 3 such proteins) DGQPEIPAGRGEHPQGIPEDTSPNDIMSEVDLQMEFATRIAMESQLGDTLKSRLRISNAQTTDTGNYTCQPTTASSASVLVHVINGE AGQLWLSIGLISGDDSLDTREGVDLVLKCRFTEHYDSTDFTFYWARWTCCPTLFENVAIGDVQLNSNYRLDFRPSRGIYDLQIKNTSYNRDNGRFECRIKAKGTGADVHQEFYNLTVLTAPHPPMVTPGNLAVATEEKPLELTCSSIGGSPDPMITWYREGSTVPLQSYALKGGSKNHYTNATLQIVPRRADDGAKYKCVVWNRAMPEGHMLETSVTLNVNYYPRVEVGPQNPLKVERDHVAKLDCRVDAKPMVSNVRWSRNGQYVSATPTHTIYRVNRHHAGKYTCSADNGLGKTGEKDIVLDVLYPPIVFIESKTHEAEEGETVLIRCNVTANPSPINVEWLKEGAPDFRYTGELLTLGSVRAEHAGNYICRSVNIMQPFSSKRVEGVGNSTVALLVRHRPGQAYITPNKPVVHVGNGVTLTCSANPPGWPVPQYRWFRDMDGDIGNTQKILAQGPQYSIPKAHLGSEGKYHCHAVNELGIGKIATIILEVHQPPQFLAKLQQHMTRRVGDVDYAVTCSAKGKPTPQIRWIKDGTEILPTRKMFDIRTTPTDAGGGVVAVQSILRFRGKARPNGNQLLPNDRGLYTCLYENDVNSANSSMHLRIEHEPIVIHQYNKVAYDLRESAEVVCRVQAYPKPEFQWQYGNNPSPLTMSSDGHYEISTRMENNDVYTSILRIAHLQHSDYGEYICRAVNPLDSIRAPIRLQPKGSPEKPTNLKILEVGHNYAVLNWTPGFNGGFMSTKYLVSYRRVATPREQTLSDCSGNGYIPSYQISSSSSNSNHEWIEFNCFKENPCKLAPLDQHQSYMFKVYALNSKGTSGYSNEILATTKVSKIPPPLHVSYDPNSHVLGINVAATCLSLIAVVESLVTRDATVPMWEIVETLTLLPSGSETTFKEAIINHVSRPAHYTTATTSGRSLGVGGGSHLGEDRTMALAETAGPGPVVRVKLCLRSNHEHCGAYANAEIGKSYMPHKSSMTTSALVAIIIASLSFVVFLGLLYAFCHCRRKHAAKKESSSVGGGVGGGNANATANPGSTGAKEYDLDLDASRRPSLSQDPQQSQQQPPPPPPYYPTGTLDSKDIGNGNGGMELTLTALHDPDEQLNMQQQQHHSNHGQYQQPKAILGIYGGVAGSGGNNSGGQHPHSNGYGYHVTSAIGVDSDSYQVLPSVANSAAGSHGHGSGHGHGLGAGEXPLEATPPTCNISGGSSSNSGINPMQQQHSARANLTNQPTIATASSTNNYNNHLNNTNIAHTTNNTNNCTTLKRGHLGNRERERERCQVTAATAATTTALATTITTTSRNAKAATTTTTLAITGSSSNSNENNYSNARDLSQEALWRLQMATAQSQQIYVERPPSAFSGLVDYSGYSPHIPTVTSSLSQQSFSPTQQLAPHEMLQAAQRYGTLRKSGKQPPLPPQRKDMQQQAKPPQQMADIIQDLAN MKQINAASALCGQLKQHENRAGPSNLGNVISQILLCKQFTPDFNEEELCSITKDSQDIAVLLAEMQEYMPQHEAYLERNAALDTTGPWQAKRRQNYICKNMSLLCCVS Furthermore, I have a few small databases with 800-25000 proteins of labelled proteins, meaning for each amino acid in each protein it contains 1 of three labels (0 = postive, 1 = negative, 2 = unknown) So in this case my data would look like (2 small proteins shown here): DGQPEIPAGRGEHPQGI, 22222111111222222 YLERNAALDTTGPWQA, 00002222222222222 So far I have made a standard BERT model, with a linear layer on top (see code below) that I can use in a cross entropy loss function to predict the 15% masked amino acids in the proteins. It should be noted here that I do not start each input sequence with [cls] and end with [sep] since each sequence is only one protein I figured that I could do without this (Though maybe this is wrong and should still be included?) class BERTseq(nn.Module): def __init__(self, bert: BERT, vocab_size): super().__init__() self.bert = bert self.linear = nn.Linear(self.bert.hidden, vocab_size) def forward(self, x, segment_label): x = self.bert(x, segment_label) return self.linear(x) Now this part seems to work rather well. After training for a few hours I get 96% accuracy on testing data when predicting these proteins. 
So now I want to try and use this pretrained network for the segmentation task, but I must admit I’m uncertain how exactly I should do this, my initial idea was to remove the last linear layer from before and insert a new linear layer with size nn.Linear(self.bert.hidden, 2), since I have 2 segmentation classes I want out. However in this case I don’t know whether I should still use the masked approach from before? or should I just use the full label segmentation as input?, Also should I keep the old linear layer at the end and add another layer on top of it, or replace it with the new linear layer? So far I have been trying to replace it and use the segmentation labels directly in BERT, but that doesn’t seem to work.
Is the sequence IPAGRG always going to be Negative, or does it depend on what is around it? Is IPAGRG a commonly-occurring subset? When IPAGRG occur together, will they always have the same positivity. That is, will IPAGRG always be either 000000, 111111 or 222222, or could it be 012201 etc? I think you somehow need to have a single “answer” for each sequence that you give to BERT, so that it can learn. (I’m a bit worried that if you give it a label such as 22222111111222222 it will treat that as a single numerical value, which is not what you want). Could you treat IPAGRG as a single token, and DGQPE as another, and EHPQGI as another, so that BERT could try to learn to predict the value (positive/negative/unknown) of the central sequence in each input of 3 sequences? From your Masked prediction success, it seems that there must be some common sequences. If you don’t have any a priori idea of what these might be, you could maybe try an n-gram analysis. Completely different idea: Could you use 60 tokens instead of 20, where each token includes information about the amino acid AND the positivity? So, token A0 could represent alanine with positive, token A1 could represent alanine with negative, token A2 could represent alanine with unknown. By the way, it’s not relevant to the BERT implementation, but what do the positive/negative labels actually signify?
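If the end goal is one label per amino acid, that is a token classification setup: keep the pretrained encoder, swap the vocabulary-sized head for a small per-position classification head, and train with a per-token cross-entropy on the labelled data (no masking needed for this stage). A sketch built on the BERTseq-style wrapper from the question, so the bert attribute, its hidden size, and the call signature are assumptions:

import torch
import torch.nn as nn

class BERTTagger(nn.Module):
    """Per-residue classifier: one of num_labels classes per amino acid position."""
    def __init__(self, bert, hidden_size, num_labels=3):
        super().__init__()
        self.bert = bert                                  # pretrained encoder from the MLM stage
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, x, segment_label, labels=None):
        hidden = self.bert(x, segment_label)              # (batch, seq_len, hidden_size)
        logits = self.classifier(self.dropout(hidden))    # (batch, seq_len, num_labels)
        loss = None
        if labels is not None:
            # labels: (batch, seq_len) with values in {0, 1, 2}; use -100 for padded positions
            loss = nn.functional.cross_entropy(
                logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100
            )
        return logits, loss

Whether the "unknown" class should be a third label or an ignored position (-100) is a modelling choice the thread leaves open.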
0
huggingface
Beginners
Custom Tasks and BERT Fine Tuning
https://discuss.huggingface.co/t/custom-tasks-and-bert-fine-tuning/874
I am using the transformers library to get embeddings for sentences and tokens. More specifically I use the first token embedding [CLS] for the embedding that represents the sentence and I compare sentences using cosine similarity. This approach is naive and completely unsupervised. Now I would like to gain some experience in fine tuning the model: For example how to fine tune BERT for NER and how to use BERT for sentence pairs. How can I study how to fine tune BERT?
There is a nice tutorial on fine-tuning with transformers and PyTorch by Chris McCormick here: https://mccormickml.com/2019/07/22/BERT-fine-tuning/#4-train-our-classification-model. See also his YouTube videos and other posts. If you want to make a custom model, try this notebook by abhimishra: https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb
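Beyond those tutorials, the smallest possible fine-tuning loop with the library’s own Trainer looks roughly like this (a sketch with a hypothetical two-example dataset, just to show the moving parts):

import torch
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

texts = ["I loved this movie.", "Terrible service."]   # hypothetical toy data
labels = [1, 0]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encodings = tokenizer(texts, truncation=True, padding=True)

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=ToyDataset(encodings, labels)).train()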
0
huggingface
Beginners
How to load pre-trained model parameters only in specific layers?
https://discuss.huggingface.co/t/how-to-load-pre-trained-model-parameters-only-in-specific-layers/1814
Hey , I want to know how to load pre-trained model parameters only in specific layers ? For example, I use EncoderDecoderModel class (bert-base-uncased2bert-base-uncased model) . And I only want to load parameters in specific layers like 2 or 10 of the pretrained model . How can I do this ?
This is usually what I did:
from transformers import BartForConditionalGeneration
original_model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
my_model = BartForConditionalGeneration.from_pretrained('facebook/bart-base', encoder_layers=2)
# For example, to make my model's last encoder layer's weights equal the original model's last encoder layer's weights:
my_model.model.encoder.layers[1].load_state_dict(original_model.model.encoder.layers[5].state_dict())
0
huggingface
Beginners
Unexpected keyword argument ‘return_dict’ in BertForSequenceClassification
https://discuss.huggingface.co/t/unexpected-keyword-argument-return-dict-in-bertforsequenceclassification/1790
My transformers version is 3.4.0. When my code is here: from transformers import BertTokenizer, BertForSequenceClassification import torch bert_config = BertConfig.from_json_file('torch_bert_chinese/config.json') model = BertForSequenceClassification.from_pretrained('torch_bert_chinese/', from_tf=False, config=bert_config, local_files_only=True, return_dict=True) The exception is here: TypeError Traceback (most recent call last) <ipython-input-33-08c5f3693153> in <module> 5 bert_config = BertConfig.from_json_file('torch_bert_chinese/config.json') 6 model = BertForSequenceClassification.from_pretrained('torch_bert_chinese/', from_tf=False, config=bert_config, local_files_only=True, ----> 7 return_dict=True) 8 /usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 945 946 # Instantiate model. --> 947 model = cls(config, *model_args, **model_kwargs) 948 949 if state_dict is None and not from_tf: TypeError: __init__() got an unexpected keyword argument 'return_dict'
If you are instantiating your model with a config, you need to pass return_dict=True when you create the config (so the line above in your case).
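Concretely, for the snippet in the question that could look like this (a sketch; the local path is the one from the question):

from transformers import BertConfig, BertForSequenceClassification

bert_config = BertConfig.from_json_file('torch_bert_chinese/config.json')
bert_config.return_dict = True   # set it on the config instead of passing it to from_pretrained
model = BertForSequenceClassification.from_pretrained(
    'torch_bert_chinese/', from_tf=False, config=bert_config, local_files_only=True
)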
0
huggingface
Beginners
What does EvalPrediction.predictions contain exactly?
https://discuss.huggingface.co/t/what-does-evalprediction-predictions-contain-exactly/1691
I want to implement a function for computing metrics and pass it to the Trainer. In the doc, EvalPrediction has 2 attributes: predictions and label_ids. It is written that both of them are of type ndarray but this is not the case for me. The label_ids have is correct. It is ndarray and has shape (4, seqlen) where 4 is the number of samples in my validation. However, the attribute predictions are a tuple? At index 0, I have an array of sizes (3, 4, 56, 32104). 4 again is the number of samples 56 is the sequence length and 32104 is the vocabulary size but what is the 3 then? At index 1 I have first a tuple/list of tuples with size 4,6 and then an array of 4, 8, 56, 64 And at index 2 I have an array of size 4, 78, 512. What are all these arrays actually? I think this should be clarified in the documentation. Thanks for your help!
The Trainer will put in predictions everything your model returns (apart from the loss). So if you get multiple arrays, it’s likely because your model returns multiple things. No one can help you determine what they are without seeing your model (which is why you should always post the code you’re using when asking for help).
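A generic compute_metrics sketch that guards against the tuple case; it assumes the first element of predictions holds the logits, which is the usual convention but should be checked against your model’s actual return order:

import numpy as np
from transformers import EvalPrediction

def compute_metrics(pred: EvalPrediction):
    preds = pred.predictions
    if isinstance(preds, tuple):      # the model returned more than just the logits
        preds = preds[0]              # usually the logits come first
    token_preds = np.argmax(preds, axis=-1)        # (num_samples, seq_len)
    mask = pred.label_ids != -100                  # ignore padded label positions, if any
    accuracy = (token_preds[mask] == pred.label_ids[mask]).mean()
    return {"token_accuracy": float(accuracy)}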
0
huggingface
Beginners
“IndexError: index out of range in self” for bert LM example on https://huggingface.co/transformers/quickstart.html
https://discuss.huggingface.co/t/indexerror-index-out-of-range-in-self-for-bert-lm-example-on-https-huggingface-co-transformers-quickstart-html/1797
Hi! I was trying to use my own data for the language model example (BERT) mentioned here: huggingface.co Quickstart — transformers 2.11.0 documentation 8 However, I get an IndexError: index out of range in self when I use my own data. At first I thought that it is related to the sequence length but I also get the error for sequences smaller than <512. The code is: tokenized_text = ['[CLS]', '#', '#', 'steps', '[SEP]', '1', '.', 'if', 'the', 'area', 'is', 'hot', 'or', 'in', '##fl', '##ame', '##d', 'after', 'your', 'laser', 'tattoo', 'removal', 'session', 'you', 'can', 'apply', 'an', 'ice', 'pack', 'wrapped', 'in', 'a', 'damp', 'cloth', '.', '[SEP]', '2', '.', 'over', 'the', 'counter', 'pain', 'relief', 'such', 'as', 'para', '##ce', '##tam', '##ol', 'can', 'help', 'by', 'reducing', 'any', 'temporary', 'pain', '.', '[SEP]', '3', '.', 'el', '##eva', '##te', 'the', 'area', 'is', 'its', 'an', 'ex', '##tre', '##mity', 'such', 'as', 'a', 'wrist', 'or', 'ankle', 'to', 'reduce', 'swelling', '.', '[SEP]', 'keep', 'the', 'tattoo', 'site', 'clean', 'and', 'dry', 'and', 'avoid', 'soaking', '[MASK]', 'in', 'the', 'first', 'week', 'or', 'two', 'during', 'the', 'healing', 'stage', '.', '[SEP]'] indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) index_of_masked_token = tokenized_text.index('[MASK]') # make the segments_ids counter = 0 segments_ids = [] for token in tokenized_text: segments_ids.append(counter) if token == '[SEP]': counter +=1 # Convert inputs to PyTorch tensors tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) print("size of tokens in tensor {0}".format(tokens_tensor.shape)) print("size of segment tokens in tensor {0}".format(segments_tensors.shape)) # Load pre-trained model (weights) model = BertModel.from_pretrained('bert-base-uncased') model.eval() # Predict hidden states features for each layer with torch.no_grad(): outputs = model(tokens_tensor, token_type_ids=segments_tensors) encoded_layers = outputs[0] assert tuple(encoded_layers.shape) == (1, len(indexed_tokens), model.config.hidden_size) # Load pre-trained model (weights) model = BertForMaskedLM.from_pretrained('bert-base-uncased') model.eval() # Predict all tokens with torch.no_grad(): # error is caused by the line below. 
outputs = model(tokens_tensor, token_type_ids=segments_tensors) predictions = outputs[0] The error is: File "/Users/talita/Documents/PhD/corpora/rulebook_diffs/2019-09-23/boardgame_scripts/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/Users/talita/Documents/PhD/corpora/rulebook_diffs/2019-09-23/boardgame_scripts/venv/lib/python3.8/site-packages/transformers/modeling_bert.py", line 752, in forward embedding_output = self.embeddings( File "/Users/talita/Documents/PhD/corpora/rulebook_diffs/2019-09-23/boardgame_scripts/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/Users/talita/Documents/PhD/corpora/rulebook_diffs/2019-09-23/boardgame_scripts/venv/lib/python3.8/site-packages/transformers/modeling_bert.py", line 180, in forward token_type_embeddings = self.token_type_embeddings(token_type_ids) File "/Users/talita/Documents/PhD/corpora/rulebook_diffs/2019-09-23/boardgame_scripts/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/Users/talita/Documents/PhD/corpora/rulebook_diffs/2019-09-23/boardgame_scripts/venv/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 124, in forward return F.embedding( File "/Users/talita/Documents/PhD/corpora/rulebook_diffs/2019-09-23/boardgame_scripts/venv/lib/python3.8/site-packages/torch/nn/functional.py", line 1852, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self Does anyone know why I get this IndexError?
You are trying to process multiple sentences separated by [SEP] tokens. That does not make sense, but it is not clear what you are trying to do. The index error is raised because BERT was pretrained with only two segment IDs, 0 and 1, which were needed for the NSP objective. It therefore does not make sense to add more segments/segment IDs, and as you found out, that simply won’t work. Perhaps you should start by explaining what you are trying to do. I highly recommend reading the original BERT paper to better understand this.
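If the goal is simply masked-token prediction over a passage, a workable adaptation of the quickstart snippet (a sketch, not from the thread) is to keep token_type_ids within {0, 1}; for a single passage, all zeros is enough:

import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

text = "[CLS] keep the tattoo site clean and dry and avoid soaking [MASK] in the first week . [SEP]"
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)

# BERT only knows two segment IDs (0 and 1); for a single passage, use zeros everywhere.
segments_ids = [0] * len(tokenized_text)

tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

with torch.no_grad():
    predictions = model(tokens_tensor, token_type_ids=segments_tensors)[0]

masked_index = tokenized_text.index("[MASK]")
predicted_id = predictions[0, masked_index].argmax().item()
print(tokenizer.convert_ids_to_tokens([predicted_id]))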
0
huggingface
Beginners
Speeding up GPT2 generation
https://discuss.huggingface.co/t/speeding-up-gpt2-generation/470
System Setup Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Python: 3.7.6 Background Code from transformers import GPT2Tokenizer, GPT2LMHeadModel import torch import time import functools def time_gpt2_gen(): prompt1 = 'We present an update on the results of the Double Chooz experiment. Double Chooz searches for the neutrino mixing angle, θ13, in the three-neutrino mixing matrix via the disappearance of produced by the dual 4.27 GW/th Chooz B Reactors. Here we discuss updated oscillation fit results using both the rate and the shape of the anti-neutrino energy spectrum. In the most recent oscillation analysis we included data with neutron captures on Gadolinium and Hydrogen along with the reactor off data that we collected. This is an important step in our multi-year program to establish the value of θ13.' prompt2 = 'The paper covers detailed discussion on novel control system developed for adaptive fluid-based shock-absorbers serving for mitigation of unknown impact excitations. In order to provide complete independence of the control system from the loading conditions, the Hybrid Prediction Control (HPC) was elaborated. The proposed method is an extension of previously introduced kinematic feedback control which ensures optimal path finding, tracking and path update in case of high disturbance or sudden change of loading conditions. Implementation of the presented control system allows to obtain self-adaptive fluid-based absorbers providing robust impact mitigation. In contrast to previously developed methods of Adaptive Impact Absorption, the proposed control strategy does not require prior knowledge of impact excitation or its preliminary identification. The independence of applied control system from parameters of impact loading results in the capability of automatic path correction in the case of disturbance occurrence and re-adaptation to a number of subsequent impacts. The successful operation of the self-adaptive system is investigated with the use of numerical examples involving double-chamber pneumatic shock-absorber equipped with controllable valve. Efficiency of the HPC is proved by comparison with passive absorber as well as device equipped with adaptive and optimal control modules.' prompt3 = 'This study aimed to produce biosurfactant from Pseudozyma tsukubaensis using cassava wastewater and an inoculum (biomass) for galactooligosaccharides synthesis from lactose as an integrated system. First, the use of cassava wastewater as a low cost culture medium by P. tsukubaensis to produce biomass and biosurfactant was evaluated and optimized. Then, the microbial cells (biomass) obtained from the optimized process were used to produce galactooligosaccharides from lactose. The optimum conditions for biosurfactant and biomass synthesis were found to be 80% (v/v) of cassava wastewater at 30°C and 200rpm for 48h. The highest concentration of biosurfactant, that is, minimum surface tension value and maximum biomass concentration predicted were experimentally confirmed as 26.87mN/m and 10.5g/L, respectively. The biosurfactant obtained showed good thermal (121°C/1h), pH (2–11) and ionic strength (0–25% NaCl) stability. Excellent emulsifier activity was also verified, suggesting a potential application in enhanced oil recovery. Galactooligosaccharides synthesized by the Kluyveromyces genus have been extensively investigated, however, few studies have reported transgalactosylation ability by other yeast genera. 
The transgalactosylation activity of the yeast biomass at optimized conditions from 40% (w/w) lactose resulted in galactooligosaccharides production of 73.12g/L and a yield of 18.28% (w/w) at pH 8.0 and 30°C in 24h. This research showed the technical feasibility of an integrated process: biosurfactant and GOS production from P. tsukubaensis, which takes advantage of the remarkable metabolism of this microorganism. To the best of our knowledge, this is the first study reporting the potential of P. tsukubaensis to produce two economical biotechnological products of increase interest as an integrated process.' prompt4 = 'Advantages of a fuzzy predictive control algorithm are discussed in the paper. The fuzzy predictive algorithm is a combination of a DMC (Dynamic Matrix Control) algorithm and Takagi–Sugeno fuzzy modeling, thus it inherits advantages of both techniques. The algorithm is numerically effective. It is in fact generalization of the standard DMC algorithm widely used in the industry, thus the existing implementations of the DMC algorithm can be extended using the presented fuzzy approach. A simple and easy to apply method of fuzzy predictive control algorithms synthesis is presented in the paper. It can be easy applied also in the case of Multiple Input Multiple Output (MIMO) control plants. Moreover, information about measured disturbance can be included in the algorithms in an easy way. The advantages of the fuzzy predictive control algorithm are demonstrated in the example control systems of two nonlinear chemical reactors: the first one—with inverse response and the second one—a MIMO plant with time delay.' batch = [prompt1, prompt2, prompt3, prompt4] tokenizer = run_func(GPT2Tokenizer.from_pretrained, 'gpt2', print_str='Initialize GPT2Tokenizer', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = run_func(tokenizer, batch, print_str='Calculate initial encodings', padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) gpt2 = run_func(GPT2LMHeadModel.from_pretrained, 'gpt2', print_str='Initialize GPT2LMHeadModel') temperature = 0.92 tmp_input_ids = encoded_results['input_ids'] tmp_attention_mask = encoded_results['attention_mask'] max_gen_length = 30 counter = 0 gen_dict = {'a1': '', 'a2': '', 'a3': '', 'a4': ''} while_tic = time.perf_counter() while counter < max_gen_length: print('\ncounter = {}'.format(counter)) outputs = run_func(gpt2, print_str=' Calculate GPT2 outputs', input_ids=tmp_input_ids, attention_mask=tmp_attention_mask ) # (batch_size, sequence_length, vocab_size) lm_logits_w_temp = outputs[0] / temperature # (batch_size, vocab_size) last_tokens = lm_logits_w_temp[:, -1, :] last_token_softmaxes = run_func(torch.softmax, last_tokens, print_str=' Last token softmax', dim=-1).squeeze() next_tokens = run_func(torch.multinomial, last_token_softmaxes, print_str=' Generate next token', num_samples=1) list_comp_tic = time.perf_counter() next_strs = [tokenizer.decode(next_token).strip() for next_token in next_tokens] prev_input_strs = [tokenizer.decode(id_tensor, skip_special_tokens=True) for id_tensor in tmp_input_ids] prev_split_list = [prev_input_str.split() for prev_input_str in prev_input_strs] list_comp_toc = time.perf_counter() print(' List comprehension calcs elapsed time: {} seconds'.format(list_comp_toc-list_comp_tic)) gen_dict['a1'] += next_strs[0] + ' ' gen_dict['a2'] += next_strs[1] + ' ' gen_dict['a3'] += next_strs[2] + ' ' gen_dict['a4'] += next_strs[3] + ' ' str_list_to_join = [] next_strs_tic = 
time.perf_counter() for ii, prev_split2 in enumerate(prev_split_list): next_str = next_strs[ii] tmp_prev = prev_split2 tmp_prev.append(next_str) str_list_to_join.append(tmp_prev) next_inputs = [' '.join(str_to_join) for str_to_join in str_list_to_join] next_strs_toc = time.perf_counter() print(' Add generated tokens onto previous full strings elapsed time: {} seconds'.format(next_strs_toc-next_strs_tic)) if counter == max_gen_length - 1: final_str_batch = next_inputs else: new_encoded_results = run_func(tokenizer, next_inputs, print_str=' Tokenizing next full strings', padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) tmp_input_ids = new_encoded_results['input_ids'] tmp_attention_mask = new_encoded_results['attention_mask'] counter += 1 while_toc = time.perf_counter() print('Time to complete while loop for {} passes: {} seconds'.format(max_gen_length, while_toc-while_tic)) print('------------------------------------------------------------------------------------') run_func(time_gpt2_gen, print_str='Total time') Question I was wondering what I could do to speed up the generation of GPT2? I feel like this is a pretty naive implementation so I’d love to here feedback on some speed optimization strategies. Here is the output for generating 30 new tokens on a batch of 4 scientific abstracts. It takes about 4.8 minutes. Initialize GPT2Tokenizer elapsed time: 1.4727641880017472 seconds Calculate initial encodings elapsed time: 0.03799170200363733 seconds Initialize GPT2LMHeadModel elapsed time: 18.23160586800077 seconds counter = 0 Calculate GPT2 outputs elapsed time: 9.893233462003991 seconds Last token softmax elapsed time: 0.0007064949968480505 seconds Generate next token elapsed time: 0.0007856059964979067 seconds List comprehension calcs elapsed time: 0.13995413600059692 seconds Add generated tokens onto previous full strings elapsed time: 3.628500417107716e-05 seconds Tokenizing next full strings elapsed time: 0.012479234996135347 seconds counter = 1 Calculate GPT2 outputs elapsed time: 8.576971583999693 seconds Last token softmax elapsed time: 0.0006583049980690703 seconds Generate next token elapsed time: 0.0007301439982256852 seconds List comprehension calcs elapsed time: 0.12174680699536111 seconds Add generated tokens onto previous full strings elapsed time: 4.426100349519402e-05 seconds Tokenizing next full strings elapsed time: 0.011608965003688354 seconds counter = 2 Calculate GPT2 outputs elapsed time: 8.446395003004 seconds Last token softmax elapsed time: 0.0006648650014540181 seconds Generate next token elapsed time: 0.0007324170001083985 seconds List comprehension calcs elapsed time: 0.14154231199790956 seconds Add generated tokens onto previous full strings elapsed time: 5.113699444336817e-05 seconds Tokenizing next full strings elapsed time: 0.014487013999314513 seconds counter = 3 Calculate GPT2 outputs elapsed time: 9.05919142899802 seconds Last token softmax elapsed time: 0.0007119319998309948 seconds Generate next token elapsed time: 0.0007321929952013306 seconds List comprehension calcs elapsed time: 0.1303824819988222 seconds Add generated tokens onto previous full strings elapsed time: 4.862200148636475e-05 seconds Tokenizing next full strings elapsed time: 0.012177805001556408 seconds counter = 4 Calculate GPT2 outputs elapsed time: 8.18601783199847 seconds Last token softmax elapsed time: 0.02430849100346677 seconds Generate next token elapsed time: 0.0007163990012486465 seconds List comprehension calcs elapsed time: 0.15182566899602534 
seconds Add generated tokens onto previous full strings elapsed time: 4.859599721385166e-05 seconds Tokenizing next full strings elapsed time: 0.014472196999122389 seconds counter = 5 Calculate GPT2 outputs elapsed time: 8.919605226001295 seconds Last token softmax elapsed time: 0.0006643329979851842 seconds Generate next token elapsed time: 0.0007834899952285923 seconds List comprehension calcs elapsed time: 0.12023409699759213 seconds Add generated tokens onto previous full strings elapsed time: 5.939300172030926e-05 seconds Tokenizing next full strings elapsed time: 0.011963372999161948 seconds counter = 6 Calculate GPT2 outputs elapsed time: 7.393029450999165 seconds Last token softmax elapsed time: 0.0007156190040404908 seconds Generate next token elapsed time: 0.0007048089973977767 seconds List comprehension calcs elapsed time: 0.13225654000416398 seconds Add generated tokens onto previous full strings elapsed time: 4.454200097825378e-05 seconds Tokenizing next full strings elapsed time: 0.012416008998116013 seconds counter = 7 Calculate GPT2 outputs elapsed time: 7.838971406003111 seconds Last token softmax elapsed time: 0.0023600620042998344 seconds Generate next token elapsed time: 0.0012576709996210411 seconds List comprehension calcs elapsed time: 0.1481078790020547 seconds Add generated tokens onto previous full strings elapsed time: 5.014600174035877e-05 seconds Tokenizing next full strings elapsed time: 0.017490088001068216 seconds counter = 8 Calculate GPT2 outputs elapsed time: 9.155262512998888 seconds Last token softmax elapsed time: 0.0010118979989783838 seconds Generate next token elapsed time: 0.000767368997912854 seconds List comprehension calcs elapsed time: 0.1483391650035628 seconds Add generated tokens onto previous full strings elapsed time: 9.162300557363778e-05 seconds Tokenizing next full strings elapsed time: 0.012755158000800293 seconds counter = 9 Calculate GPT2 outputs elapsed time: 10.77434083299886 seconds Last token softmax elapsed time: 0.010747615997388493 seconds Generate next token elapsed time: 0.0007793960030539893 seconds List comprehension calcs elapsed time: 0.14875024900538847 seconds Add generated tokens onto previous full strings elapsed time: 5.4737000027671456e-05 seconds Tokenizing next full strings elapsed time: 0.01669158499862533 seconds counter = 10 Calculate GPT2 outputs elapsed time: 10.110196126996016 seconds Last token softmax elapsed time: 0.02143118399544619 seconds Generate next token elapsed time: 0.0010516420006752014 seconds List comprehension calcs elapsed time: 0.1679224549952778 seconds Add generated tokens onto previous full strings elapsed time: 8.338599582202733e-05 seconds Tokenizing next full strings elapsed time: 0.014697657999931835 seconds counter = 11 Calculate GPT2 outputs elapsed time: 9.811458320000384 seconds Last token softmax elapsed time: 0.000684321996232029 seconds Generate next token elapsed time: 0.0007668550024391152 seconds List comprehension calcs elapsed time: 0.1525734469978488 seconds Add generated tokens onto previous full strings elapsed time: 8.502999844495207e-05 seconds Tokenizing next full strings elapsed time: 0.015574641001876444 seconds counter = 12 Calculate GPT2 outputs elapsed time: 10.353367308998713 seconds Last token softmax elapsed time: 0.0010184349957853556 seconds Generate next token elapsed time: 0.0007333970061154105 seconds List comprehension calcs elapsed time: 0.13797010699636303 seconds Add generated tokens onto previous full strings elapsed time: 5.920600233366713e-05 
seconds Tokenizing next full strings elapsed time: 0.01412803399580298 seconds counter = 13 Calculate GPT2 outputs elapsed time: 10.826486637000926 seconds Last token softmax elapsed time: 0.0030568980000680313 seconds Generate next token elapsed time: 0.0016750930008129217 seconds List comprehension calcs elapsed time: 0.13814785299473442 seconds Add generated tokens onto previous full strings elapsed time: 5.037700611865148e-05 seconds Tokenizing next full strings elapsed time: 0.013047558997641318 seconds counter = 14 Calculate GPT2 outputs elapsed time: 7.55167671199888 seconds Last token softmax elapsed time: 0.0005718500033253804 seconds Generate next token elapsed time: 0.0007639390023541637 seconds List comprehension calcs elapsed time: 0.13264076500490773 seconds Add generated tokens onto previous full strings elapsed time: 5.1554998208303005e-05 seconds Tokenizing next full strings elapsed time: 0.013588694004283752 seconds counter = 15 Calculate GPT2 outputs elapsed time: 7.818915773998015 seconds Last token softmax elapsed time: 0.0005817519995616749 seconds Generate next token elapsed time: 0.0009528160007903352 seconds List comprehension calcs elapsed time: 0.13134208500559907 seconds Add generated tokens onto previous full strings elapsed time: 5.866299761692062e-05 seconds Tokenizing next full strings elapsed time: 0.01613930299936328 seconds counter = 16 Calculate GPT2 outputs elapsed time: 8.578778709998005 seconds Last token softmax elapsed time: 0.006186169004649855 seconds Generate next token elapsed time: 0.0007223930006148294 seconds List comprehension calcs elapsed time: 0.148166503997345 seconds Add generated tokens onto previous full strings elapsed time: 5.607099592452869e-05 seconds Tokenizing next full strings elapsed time: 0.015603724998072721 seconds counter = 17 Calculate GPT2 outputs elapsed time: 8.770265252001991 seconds Last token softmax elapsed time: 0.0005352449952624738 seconds Generate next token elapsed time: 0.0006981940023251809 seconds List comprehension calcs elapsed time: 0.13802178599871695 seconds Add generated tokens onto previous full strings elapsed time: 4.810000245925039e-05 seconds Tokenizing next full strings elapsed time: 0.013090499996906146 seconds counter = 18 Calculate GPT2 outputs elapsed time: 9.192157427001803 seconds Last token softmax elapsed time: 0.0013120759977027774 seconds Generate next token elapsed time: 0.0006921610038261861 seconds List comprehension calcs elapsed time: 0.1327957250032341 seconds Add generated tokens onto previous full strings elapsed time: 4.9251000746153295e-05 seconds Tokenizing next full strings elapsed time: 0.013492570993548725 seconds counter = 19 Calculate GPT2 outputs elapsed time: 8.60694089000026 seconds Last token softmax elapsed time: 0.003325685000163503 seconds Generate next token elapsed time: 0.000762072995712515 seconds List comprehension calcs elapsed time: 0.1455982879997464 seconds Add generated tokens onto previous full strings elapsed time: 5.135399987921119e-05 seconds Tokenizing next full strings elapsed time: 0.01226243400014937 seconds counter = 20 Calculate GPT2 outputs elapsed time: 7.204778202001762 seconds Last token softmax elapsed time: 0.000613894997513853 seconds Generate next token elapsed time: 0.0007320740041905083 seconds List comprehension calcs elapsed time: 0.13817419400584185 seconds Add generated tokens onto previous full strings elapsed time: 5.026099825045094e-05 seconds Tokenizing next full strings elapsed time: 0.011972155996772926 seconds counter = 21 
Calculate GPT2 outputs elapsed time: 9.892961628000194 seconds Last token softmax elapsed time: 0.0006187749968376011 seconds Generate next token elapsed time: 0.0007513390009989962 seconds List comprehension calcs elapsed time: 0.12703227200108813 seconds Add generated tokens onto previous full strings elapsed time: 5.13460036017932e-05 seconds Tokenizing next full strings elapsed time: 0.012355157996353228 seconds counter = 22 Calculate GPT2 outputs elapsed time: 8.470063239998126 seconds Last token softmax elapsed time: 0.0005823390019941144 seconds Generate next token elapsed time: 0.0008517509995726869 seconds List comprehension calcs elapsed time: 0.15461847899860004 seconds Add generated tokens onto previous full strings elapsed time: 5.156599945621565e-05 seconds Tokenizing next full strings elapsed time: 0.013039129000389948 seconds counter = 23 Calculate GPT2 outputs elapsed time: 7.189594238996506 seconds Last token softmax elapsed time: 0.0006244809992494993 seconds Generate next token elapsed time: 0.0008493330024066381 seconds List comprehension calcs elapsed time: 0.13466514900210313 seconds Add generated tokens onto previous full strings elapsed time: 5.236799916019663e-05 seconds Tokenizing next full strings elapsed time: 0.012745897998684086 seconds counter = 24 Calculate GPT2 outputs elapsed time: 8.126932711005793 seconds Last token softmax elapsed time: 0.0006478180002886802 seconds Generate next token elapsed time: 0.0011431679959059693 seconds List comprehension calcs elapsed time: 0.14799942199897487 seconds Add generated tokens onto previous full strings elapsed time: 6.185499660205096e-05 seconds Tokenizing next full strings elapsed time: 0.01267693200497888 seconds counter = 25 Calculate GPT2 outputs elapsed time: 7.3953299740023795 seconds Last token softmax elapsed time: 0.0005992939986754209 seconds Generate next token elapsed time: 0.0007862490019761026 seconds List comprehension calcs elapsed time: 0.14582869299920276 seconds Add generated tokens onto previous full strings elapsed time: 5.401299858931452e-05 seconds Tokenizing next full strings elapsed time: 0.013731771003222093 seconds counter = 26 Calculate GPT2 outputs elapsed time: 8.018481942002836 seconds Last token softmax elapsed time: 0.04562465199705912 seconds Generate next token elapsed time: 0.0008113009971566498 seconds List comprehension calcs elapsed time: 0.13469937299669255 seconds Add generated tokens onto previous full strings elapsed time: 5.5969001550693065e-05 seconds Tokenizing next full strings elapsed time: 0.013372118999541271 seconds counter = 27 Calculate GPT2 outputs elapsed time: 8.428962316000252 seconds Last token softmax elapsed time: 0.0005464290006784722 seconds Generate next token elapsed time: 0.0007245989982038736 seconds List comprehension calcs elapsed time: 0.13549309900554363 seconds Add generated tokens onto previous full strings elapsed time: 4.968699795426801e-05 seconds Tokenizing next full strings elapsed time: 0.013636056995892432 seconds counter = 28 Calculate GPT2 outputs elapsed time: 7.557854796999891 seconds Last token softmax elapsed time: 0.0006319280000752769 seconds Generate next token elapsed time: 0.0007067559999995865 seconds List comprehension calcs elapsed time: 0.14673257899994496 seconds Add generated tokens onto previous full strings elapsed time: 5.2330004109535366e-05 seconds Tokenizing next full strings elapsed time: 0.012924063004902564 seconds counter = 29 Calculate GPT2 outputs elapsed time: 7.523497490998125 seconds Last token softmax 
elapsed time: 0.0005815810000058264 seconds Generate next token elapsed time: 0.0008171070003299974 seconds List comprehension calcs elapsed time: 0.14285911399929319 seconds Add generated tokens onto previous full strings elapsed time: 5.161000444786623e-05 seconds Time to complete while loop for 30 passes: 269.7878909170031 seconds ------------------------------------------------------------------------------------ Total time elapsed time: 289.6417858999994 seconds Thanks in advance for your help!
Decoding is slow for an auto-regressive decoder because the tokens are generated one at a time. I can think of only two things to improve generation speed: put the model in fp16 (not sure if GPT-2 works with fp16, haven’t tried myself), and see if you can use ONNX to speed up inference. Maybe this will help.
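As a point of comparison, the built-in generate method runs the same temperature/multinomial sampling loop while caching past key/values internally, so each step only needs to process the newly generated token instead of re-encoding the whole string. A sketch with the parameter values from the script above (left padding is used so the generated tokens are appended directly after each prompt in a batch):

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
model.eval()

batch = ["First abstract text ...", "Second abstract text ..."]   # the four prompts would go here
encoded = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(
        encoded["input_ids"],
        attention_mask=encoded["attention_mask"],
        do_sample=True,
        temperature=0.92,
        max_length=encoded["input_ids"].shape[1] + 30,   # 30 new tokens, as in the loop above
    )
texts = [tokenizer.decode(g, skip_special_tokens=True) for g in generated]
print(texts)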
0
huggingface
Beginners
Fine-Tune BART using “Fine-Tuning Custom Datasets” doc
https://discuss.huggingface.co/t/fine-tune-bart-using-fine-tuning-custom-datasets-doc/1628
I am trying to fine-tune BART for a summarization task using the code on the “Fine Tuning with Custom Dataset” page (https://huggingface.co/transformers/custom_datasets.html 210). The data is a subset of the CNN/Daily Mail data. I am encountering two different errors. The first comes when I implement the exact code on the page: "TypeError: new(): invalid data type ‘str’ I assume this is caused by the fact that the labels are not encoded/tokenized, instead they are strings. I see this is the case in the sample code on the page, the labels are not tokenized. If I tokenize the labels and run the code I receive a different error: “Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers” I am not sure how to interpret that. I get the same errors whether I am using the Trainer class or using native PyTorch. Any suggestions? Thanks!
It’s a bit hard to help you without seeing the code you run.
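As the question itself suspects, the first error comes from passing raw strings as labels. A minimal sketch of one way to adapt that page’s pattern to summarization (hypothetical toy data, since the thread never shows the actual code) is to tokenize both articles and summaries and store the summary token IDs as labels:

import torch
from transformers import BartTokenizer, BartForConditionalGeneration, Trainer, TrainingArguments

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

articles = ["(CNN) Some long article text ...", "Another article ..."]   # hypothetical examples
summaries = ["Short summary.", "Another summary."]

article_enc = tokenizer(articles, truncation=True, padding=True)
summary_enc = tokenizer(summaries, truncation=True, padding=True)

class SummarizationDataset(torch.utils.data.Dataset):
    def __init__(self, inputs, targets):
        self.inputs, self.targets = inputs, targets
    def __len__(self):
        return len(self.targets["input_ids"])
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.inputs.items()}
        # Labels must be token IDs, not strings; ideally replace pad IDs with -100
        # so padding does not contribute to the loss.
        item["labels"] = torch.tensor(self.targets["input_ids"][idx])
        return item

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
args = TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=SummarizationDataset(article_enc, summary_enc)).train()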
0
huggingface
Beginners
How to know if gpu memory is enough before starting training?
https://discuss.huggingface.co/t/how-to-know-if-gpu-memory-is-enough-before-starting-training/1763
Is there any available code that can be used directly?
There is this: https://huggingface.co/transformers/benchmarks.html, which you can use to test memory usage amongst other things.
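A sketch of what using that benchmark utility looks like (argument names assumed from that page’s era of the API); note that it measures the model’s forward and backward passes, so optimizer state such as Adam’s moments would come on top of the reported memory:

from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[128, 512],
    training=True,    # also benchmark a training step (forward + backward)
    memory=True,      # report peak memory usage
)
results = PyTorchBenchmark(args).run()
print(results)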
0
huggingface
Beginners
Bert output for padding tokens
https://discuss.huggingface.co/t/bert-output-for-padding-tokens/1550
Hi, I just noticed that I still get embeddings for the padding tokens in my sentence. I assumed that the BERT output for those positions would be a 768-dim zero vector. So if I feed sentences with a max length of 20 into TFBertModel, I get a non-zero embedding for every one of the 20 tokens, even if the sentence is only of length 10 and the rest is padding. How can I get output from BERT that ignores the padded positions? I thought that is why I feed the attention mask, so that the padding is ignored? Maybe I misunderstand something? If I feed the sequence of hidden states output by BERT to a GlobalAveragePooling layer, do I need the masking tensor to avoid using the padding embeddings?
Hey, I think I should use the attention_mask from the tokenizer in GlobalAveragePooling1D(x, mask=mask), right? Otherwise, the average over all tokens (padded tokens included) is computed, right?
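Yes: the encoder returns a hidden state for every position, padding included; the attention mask only stops the padded positions from influencing the real tokens, it does not zero out their outputs. A manual masked-average sketch in TensorFlow (assuming the usual TFBertModel outputs), equivalent to what a mask-aware pooling layer would do:

import tensorflow as tf
from transformers import BertTokenizerFast, TFBertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")

enc = tokenizer(["a short sentence", "a slightly longer example sentence"],
                padding=True, return_tensors="tf")
hidden = model(enc["input_ids"], attention_mask=enc["attention_mask"])[0]   # (batch, seq_len, 768)

mask = tf.cast(enc["attention_mask"], hidden.dtype)   # (batch, seq_len), 1 for real tokens
mask = tf.expand_dims(mask, axis=-1)                  # (batch, seq_len, 1)
summed = tf.reduce_sum(hidden * mask, axis=1)         # padded positions contribute zero
counts = tf.reduce_sum(mask, axis=1)                  # number of real tokens per sentence
mean_pooled = summed / counts                         # (batch, 768)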
0
huggingface
Beginners
How can we test Transformer Models after converting it to TFLite format
https://discuss.huggingface.co/t/how-can-we-test-transformer-models-after-converting-it-to-tflite-format/1670
I tried converting the DistilBERT model from Hugging Face to TFLite format using this script: Reference Script. I was able to convert it. After that, I wanted to test it in Python itself using tf.lite.Interpreter, but I was not able to figure out the correct input dimensions required for the model. Can anyone help me with this? Colab Notebook
Pinging @jplu, as he’s working on TF
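Not a full answer, but the interpreter itself can report the expected input shapes and dtypes; a sketch with a hypothetical model path (if the model takes both input_ids and attention_mask there will be one entry per input):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="distilbert.tflite")   # hypothetical path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)   # each entry lists the expected 'shape' and 'dtype' of that input

# Feed dummy token IDs matching the reported shape, e.g. (1, seq_len).
seq_len = input_details[0]["shape"][1]
dummy_ids = np.zeros((1, seq_len), dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_ids)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)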
0
huggingface
Beginners
Multi GPU freezes on Roberta Pretraining
https://discuss.huggingface.co/t/mutli-gpu-freezes-on-roberta-pretraining/1743
I’m getting annoying crashes when I try to train a RoBERTa model with two Titan X GPUs. I see in the documentation that the model should train on multiple GPUs automatically, and nvidia-smi shows that the GPUs are in use. But I don’t see any progress and the session freezes. Any suggestions would be most helpful.
What do you mean by freeze? What do you see in the terminal? Also show us which command you used to train the model.
0
huggingface
Beginners
Load dataset failure
https://discuss.huggingface.co/t/load-dataset-failure/1736
Hey, I want to load the cnn-dailymail dataset for fine-tune. I write the code like this from datasets import load_dataset test_dataset = load_dataset(“cnn_dailymail”, “3.0.0”, split=“train”) And I got the following errors. Traceback (most recent call last): File “test.py”, line 7, in test_dataset = load_dataset(“cnn_dailymail”, “3.0.0”, split=“test”) File “C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py”, line 589, in load_dataset module_path, hash = prepare_module( File “C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py”, line 268, in prepare_module local_path = cached_path(file_path, download_config=download_config) File “C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py”, line 300, in cached_path output_path = get_from_cache( File “C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py”, line 475, in get_from_cache raise ConnectionError(“Couldn’t reach {}”.format(url)) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py 8 How can I fix this ?
I can browse https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py through Google Chrome
0
huggingface
Beginners
The difference between Seq2SeqDataset.collate_fn and Seq2SeqDataCollator._encode
https://discuss.huggingface.co/t/the-difference-between-seq2seqdataset-collate-fn-and-seq2seqdatacollator-encode/1702
Hello, I’m yusukemori, who asked about “Seq2SeqTrainer” yesterday ( How to use Seq2seq Trainer with my original "[MASK]" 1 ). Thanks to the helpful comment, I’m now trying implementing my customized version of Seq2seqTrainer. Now I have a question about Seq2SeqDataset and Seq2SeqDataCollator in examples/seq2seq/utils.py 3 (the part of it is shown below). It seems Seq2SeqDataset has its collate_fn as a method. However, Seq2SeqDataCollator doesn’t use Seq2SeqDataset.collate_fn, but has its own method _encode. Could you please tell me what’s the difference between these two methods, and when to use each of them? Or, should I use both of them to run Seq2SeqTrainer? Thank you in advance. yusukemori class Seq2SeqDataset(AbstractSeq2SeqDataset): """A dataset that calls prepare_seq2seq_batch.""" def __getitem__(self, index) -> Dict[str, str]: index = index + 1 # linecache starts at 1 source_line = self.prefix + linecache.getline(str(self.src_file), index).rstrip("\n") tgt_line = linecache.getline(str(self.tgt_file), index).rstrip("\n") assert source_line, f"empty source line for index {index}" assert tgt_line, f"empty tgt line for index {index}" return {"tgt_texts": tgt_line, "src_texts": source_line, "id": index - 1} def collate_fn(self, batch) -> Dict[str, torch.Tensor]: """Call prepare_seq2seq_batch.""" batch_encoding: Dict[str, torch.Tensor] = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, return_tensors="pt", **self.dataset_kwargs, ).data batch_encoding["ids"] = torch.tensor([x["id"] for x in batch]) return batch_encoding class Seq2SeqDataCollator: def __init__(self, tokenizer, data_args, tpu_num_cores=None): self.tokenizer = tokenizer self.pad_token_id = tokenizer.pad_token_id assert ( self.pad_token_id is not None ), f"pad_token_id is not defined for ({self.tokenizer.__class__.__name__}), it must be defined." 
self.data_args = data_args self.tpu_num_cores = tpu_num_cores self.dataset_kwargs = {"add_prefix_space": isinstance(tokenizer, BartTokenizer)} if data_args.src_lang is not None: self.dataset_kwargs["src_lang"] = data_args.src_lang if data_args.tgt_lang is not None: self.dataset_kwargs["tgt_lang"] = data_args.tgt_lang def __call__(self, batch) -> Dict[str, torch.Tensor]: if hasattr(self.tokenizer, "prepare_seq2seq_batch"): batch = self._encode(batch) input_ids, attention_mask, labels = ( batch["input_ids"], batch["attention_mask"], batch["labels"], ) else: input_ids = torch.stack([x["input_ids"] for x in batch]) attention_mask = torch.stack([x["attention_mask"] for x in batch]) labels = torch.stack([x["labels"] for x in batch]) labels = trim_batch(labels, self.pad_token_id) input_ids, attention_mask = trim_batch(input_ids, self.pad_token_id, attention_mask=attention_mask) if isinstance(self.tokenizer, T5Tokenizer): decoder_input_ids = self._shift_right_t5(labels) else: decoder_input_ids = shift_tokens_right(labels, self.pad_token_id) batch = { "input_ids": input_ids, "attention_mask": attention_mask, "decoder_input_ids": decoder_input_ids, "labels": labels, } return batch def _shift_right_t5(self, input_ids): # shift inputs to the right shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[..., 1:] = input_ids[..., :-1].clone() shifted_input_ids[..., 0] = self.pad_token_id return shifted_input_ids def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.data_args.max_source_length, max_target_length=self.data_args.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt", **self.dataset_kwargs, ) return batch_encoding.data
Hi @yusukemori The _encode method does the same work as collate_fn. The difference is that Seq2SeqTrainer also supports TPU, and for that the padding needs to be handled differently. It also prepares the correct labels and decoder_input_ids rather than doing this inside the trainer. If you are using Seq2SeqTrainer, use Seq2SeqDataCollator.
0
huggingface
Beginners
How to use Seq2seq Trainer with my original “[MASK]”
https://discuss.huggingface.co/t/how-to-use-seq2seq-trainer-with-my-original-mask/1682
Hello, My account name is yusukemori. (I’m who asked about Seq2seq Trainer at https://github.com/huggingface/transformers/issues/7740 8 . Thank you for giving me a quick, kind, and detailed reply then!) This is my first time posting on this forum and I apologize if I’m being rude. I’m now trying to find how to use my original “[mask]” for pre-training/fine-tuning the Seq2seq model. I’m wondering how to randomly assign [mask] at training time (per epoch) and how to get Seq2seq Trainer to load it. As far as BERT is concerned, there seems to be a BERTProcessor in huggingface/tokenizers, which is not intended to be used with the Seq2seq Trainer, if I understand correctly. Is there any processer, such as “BARTProcessor”, for the Seq2seq Trainer? Please allow me to ask one more question, is there a code for pre-training (from scratch) BART with the Seq2seq Trainer? (For fine-tuning, thanks to the clear example!) I’m afraid this is a beginner’s, rude question. Thank you for your help. Sincerely, yusukemori
Hi @yusukemori The Seq2SeqTrainer examples don’t (yet!) support pre-training; however, you could use Seq2SeqTrainer with your custom data processing logic (masking etc.) as it’s a generic seq2seq trainer. But you’ll need to implement the masking yourself.
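To make "implement masking yourself" concrete, here is a very small sketch of token-level masking that produces the src_texts/tgt_texts pairs the seq2seq utilities expect (BART’s actual pre-training uses span infilling rather than per-token masking, so treat this only as an illustration of the plumbing):

import random
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

def mask_for_denoising(text, mask_prob=0.15):
    """Randomly replace tokens in the source with <mask>; the target stays the original text."""
    ids = tokenizer.encode(text, add_special_tokens=False)
    corrupted = [tokenizer.mask_token_id if random.random() < mask_prob else t for t in ids]
    return {"src_texts": tokenizer.decode(corrupted), "tgt_texts": text}

print(mask_for_denoising("The quick brown fox jumps over the lazy dog."))

Calling something like this inside the dataset’s __getitem__ gives a fresh random mask each time an example is drawn, which covers the per-epoch randomness asked about in the question.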
0
huggingface
Beginners
Distilbart Truncation
https://discuss.huggingface.co/t/distilbart-truncation/1689
Hello, can anyone please explain to me how is the truncation done for long sequences in Distilbart ? Thank you very much ! This is a very important matter.
The max sequence length for BART is 1024 tokens; the tokenizer truncates all tokens beyond that limit.
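For example, with explicit truncation (a sketch using one of the distilbart checkpoints):

from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6")
long_text = "word " * 5000   # far more than the 1024-token limit
enc = tokenizer(long_text, truncation=True, max_length=1024, return_tensors="pt")
print(enc["input_ids"].shape)   # torch.Size([1, 1024]); everything beyond 1024 tokens is dropped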
0
huggingface
Beginners
How to apply pruning on a BERT model?
https://discuss.huggingface.co/t/how-to-apply-pruning-on-a-bert-model/1658
I have trained a BERT model using ktrain (a wrapper that uses the huggingface transformers library) to recognize emotion in text. It works, but it suffers from really slow inference, which makes my model unsuitable for a production environment. I have done some research and it seems pruning could help. The problem is that pruning is not a widely used technique and I cannot find a simple enough example on Kaggle or Stack Overflow that could help me understand how to use it. Can someone help? I provide my working code below for reference. My question can also be found on Stack Overflow: https://stackoverflow.com/questions/64445784/how-to-apply-pruning-on-a-bert-model

import pandas as pd
import numpy as np
import preprocessor as p
import emoji
import re
import ktrain
from ktrain import text
from unidecode import unidecode
import nltk

# text preprocessing class
class TextPreprocessing:
    def __init__(self):
        p.set_options(p.OPT.MENTION, p.OPT.URL)

    def _punctuation(self, val):
        val = re.sub(r'[^\w\s]', ' ', val)
        val = re.sub('_', ' ', val)
        return val

    def _whitespace(self, val):
        return " ".join(val.split())

    def _removenumbers(self, val):
        val = re.sub('[0-9]+', '', val)
        return val

    def _remove_unicode(self, text):
        text = unidecode(text).encode("ascii")
        text = str(text, "ascii")
        return text

    def _split_to_sentences(self, body_text):
        sentences = re.split(r"(?<!\w\.\w.)(?<![A-Z][a-z]\.)(?<=\.|\?)\s", body_text)
        return sentences

    def _clean_text(self, val):
        val = val.lower()
        val = self._removenumbers(val)
        val = p.clean(val)
        val = ' '.join(self._punctuation(emoji.demojize(val)).split())
        val = self._remove_unicode(val)
        val = self._whitespace(val)
        return val

    def text_preprocessor(self, body_text):
        body_text_df = pd.DataFrame({"body_text": body_text}, index=[1])
        sentence_split_df = body_text_df.copy()
        sentence_split_df["body_text"] = sentence_split_df["body_text"].apply(
            self._split_to_sentences)
        lst_col = "body_text"
        sentence_split_df = pd.DataFrame(
            {
                col: np.repeat(
                    sentence_split_df[col].values, sentence_split_df[lst_col].str.len()
                )
                for col in sentence_split_df.columns.drop(lst_col)
            }
        ).assign(**{lst_col: np.concatenate(sentence_split_df[lst_col].values)})[
            sentence_split_df.columns
        ]
        body_text_df["body_text"] = body_text_df["body_text"].apply(self._clean_text)
        final_df = (
            pd.concat([sentence_split_df, body_text_df])
            .reset_index()
            .drop(columns=["index"])
        )
        return final_df["body_text"]

# instantiate data preprocessing object
text1 = TextPreprocessing()

# import data
data_train = pd.read_csv('data_train_v5.csv', encoding='utf8', engine='python')
data_test = pd.read_csv('data_test_v5.csv', encoding='utf8', engine='python')

# clean the data
data_train['Text'] = data_train['Text'].apply(text1._clean_text)
data_test['Text'] = data_test['Text'].apply(text1._clean_text)

X_train = data_train.Text.tolist()
X_test = data_test.Text.tolist()
y_train = data_train.Emotion.tolist()
y_test = data_test.Emotion.tolist()

data = data_train.append(data_test, ignore_index=True)

class_names = ['joy', 'sadness', 'fear', 'anger', 'neutral']
encoding = {
    'joy': 0,
    'sadness': 1,
    'fear': 2,
    'anger': 3,
    'neutral': 4
}  # Integer values for each class
y_train = [encoding[x] for x in y_train]
y_test = [encoding[x] for x in y_test]

trn, val, preproc = text.texts_from_array(x_train=X_train, y_train=y_train,
                                          x_test=X_test, y_test=y_test,
                                          class_names=class_names,
                                          preprocess_mode='distilbert',
                                          maxlen=350)
model = text.text_classifier('distilbert', train_data=trn, preproc=preproc)
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
predictor = ktrain.get_predictor(learner.model, preproc)

# save the model on a file for later use
predictor.save("models/bert_model")

message = "This is a happy message"

# cleaning - takes 5 ms to run
clean = text1._clean_text(message)

# prediction - takes 325 ms to run
predictor.predict_proba(clean)
I don’t know how to perform pruning. The idea is simple enough - cut out some of the attention heads that are not apparently doing anything useful - but the implementation will be tricky. Have you considered fine-tuning a DistilBERT or ALBERT model instead?
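If you do want to experiment with head pruning, transformers models expose a prune_heads method. A minimal sketch, assuming you rebuild the classifier with the PyTorch transformers API rather than the ktrain/TF wrapper, and that the layer/head indices and output path below are arbitrary placeholders you would choose from a head-importance analysis:

from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=5)
# {layer index: [head indices to remove in that layer]}
heads_to_prune = {0: [0, 1, 2], 1: [4, 5], 11: [0, 1, 2, 3]}
model.prune_heads(heads_to_prune)
model.save_pretrained("models/bert_pruned")  # hypothetical output path

That said, for a production latency problem, switching to a distilled model as suggested above (or exporting to ONNX) usually gives a bigger win with less effort than hand-tuned pruning.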
0
huggingface
Beginners
TransformerXL on Custom Language
https://discuss.huggingface.co/t/transformerxl-on-custom-language/1568
I am looking to train a Transformer model on biological sequence data where a “sentence” may be represented as follows: [“A”, “LLGR”, “V”, “GD” …] In order to do so, I need a word-level tokenizer that doesn’t split up words, so I’ve opted for TransformerXL. The train() function from ByteLevelBPETokenizer() (as described in this blog: https://huggingface.co/blog/how-to-train) is not available for TransfoXLTokenizer() Is there a way to train TransfoXLTokenizer() on my custom “language”? Or do I simply train the TransfoXLModel and it’ll take care of custom tokenization? (I can’t see this being the case as it’s a completely unseen “language”)
No need to train. TransfoXLTokenizer() doesn’t merge words like ByteLevelBPETokenizer() does. Simply create your vocab file as a txt file with each word on a new line, and feed its path as a parameter to TransfoXLTokenizer(): tokenizer = TransfoXLTokenizer(vocab_file=vocab_path)
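For completeness, a small sketch of that workflow (the file name and tokens are made up; including an <unk> entry in the vocab is a good idea so out-of-vocabulary symbols don't raise an error):

# vocab.txt, one token per line:
# <unk>
# A
# LLGR
# V
# GD
from transformers import TransfoXLTokenizer

tokenizer = TransfoXLTokenizer(vocab_file="vocab.txt")
print(tokenizer.tokenize("A LLGR V GD"))        # ['A', 'LLGR', 'V', 'GD'] - no sub-word merging
print(tokenizer("A LLGR V GD")["input_ids"])    # ids looked up directly in your vocab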
0
huggingface
Beginners
Model giving same output for eval function but trains
https://discuss.huggingface.co/t/model-giving-same-output-for-eval-function-but-trains/1629
Hi as title says my model is giving the exact same output for different inputs when evaluating but is training and giving varied outputs when training. I can’t workout why I know it’s not that it’s the same input. I am also ending up with holes in the outputs when there are no such holes within the input data. both errors can be seen in the screenshout of my terminal output here: My training loss is: 121, whereas my RMS of my eval function is 23,000?? image708×360 18.8 KB Data_set and datacollator: **class smiles_dataset(Dataset): def __init__(self, smiles, targets): self.smiles = smiles self.targets = targets #self.tokenizer = tokenizer #self.max_len = max_len def __len__(self): return len(self.smiles) def __getitem__(self, index): smiles = str(self.smiles[index]) targets = float(self.targets[index]) return { 'SMILES': smiles, 'targets': targets, } class SMILESDataCollator(): def __init__(self, tokenizer): self.tokenizer = tokenizer def __call__(self, data): return self.__collate__(default_collate(data)) def __collate__(self, data): # Not sure why padding it twice works here but it gives the desired output so can't complain input_id_sequence = [] for d in data['SMILES']: input_ids = self.tokenizer(d)[0] input_id_sequence.append(input_ids) input_ids_padded = pad_sequence(input_id_sequence, batch_first=True, padding_value=0) # input_ids_padded = [pad_sequence((self.tokenizer(d)), padding_value=0) for d in data['SMILES']] # input_ids_padded = pad_sequence(input_ids_padded, padding_value=0) # for all the d in data target_list = [] for d in data['targets']: target_list.append(d) atten_mask_list = [] for input_ids in input_ids_padded: atten_mask = torch.where(input_ids.eq(torch.zeros(1)), torch.tensor([0]), torch.tensor([1])) atten_mask_list.append(atten_mask) return { "input_ids": input_ids_padded, "attention_masks": atten_mask_list, "targets": target_list } Eval function: def eval_fn(data_loader, model, device, optimizer): model.eval() fin_targets = [] fin_outputs = [] with torch.no_grad(): for batch_index, data_set in tqdm(enumerate(data_loader), total=len(data_loader)): ids = data_set['input_ids'] mask = torch.stack(data_set['attention_masks']) targets = torch.stack(data_set['targets']) #token_type_ids = data_set['token_type_ids'] ids = ids.to(device) mask = mask.to(device) targets = targets.to(device) #token_type_ids = token_type_ids.to(device, dtype=torch.long) optimizer.zero_grad() outputs = model( ids = ids, mask=mask, token_type_ids=None ) # print(outputs) # print(targets) fin_outputs.extend(outputs.cpu().detach().numpy().tolist()) fin_targets.extend(targets.cpu().detach().numpy().tolist()) return fin_outputs, fin_targets Model definition config = BertConfig( vocab_size=5000, max_positional_embeddings=224, num_attention_heads=12, num_hidden_layers=46, type_vocab_size=1, num_labels=1, hidden_dropout_prob=0.1 ) loss = nn.L1Loss() class regression_model(nn.Module): def __init__(self): super(regression_model, self).__init__() self.bert = BertForSequenceClassification(config=config) self.drop = nn.Dropout(p=0.3) #self.out = nn.Linear(self.bert.config.hidden_size, 1) def forward(self, ids, mask, token_type_ids): out = self.bert(input_ids=ids, attention_mask=mask, token_type_ids=token_type_ids)[0] out = self.drop(out) return out let me know if you need to see anymore code. The model is using a tokenizer specifically made for the dataset and so I had to write the attention_mask part into the collator. 
It is being trained from scratch on this data as it’s very different to the standard language data. Thanks A
Hi, when you say “holes” do you mean zeros? You could try increasing the precision of your values (maybe the numbers get so small pytorch thinks they are zero). Are you sure you need 46 hidden layers? Seems rather a lot. What are you passing to your eval_fn? How are you splitting your train/eval data? Could you be overfitting to your training data?
0
huggingface
Beginners
Fine-tuning distiBART
https://discuss.huggingface.co/t/fine-tuning-distibart/1601
Hi there, I am not a native English speaker, so please don't blame me for the question. I am currently trying to figure out how I can fine-tune distilBART on some financial data (like finBERT). The examples/seq2seq README states: For the CNN/DailyMail dataset (relatively longer, more extractive summaries), we found a simple technique that works: you just copy alternating layers from bart-large-cnn and finetune more on the same data. As far as I understand this sentence, can I only finetune a distilBART student with the same data the teacher was trained on (CNN/DM), or can I use my own dataset that is completely different from the one the BART teacher was trained on? Thanks in advance, Chris
You can finetune distilbart on any data you want, the question is how well different approaches will perform. Without knowing much more about the data and assuming you want to be able to train in <24h, I would probably start from sshleifer/distilbart-cnn-12-3.
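If you would rather stay in Python than use the example scripts, a rough sketch of a single fine-tuning step on your own (document, summary) pairs could look like this; the checkpoint follows the suggestion above, everything else (data, learning rate) is a placeholder:

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-3")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-12-3")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

docs = ["first financial report ...", "second financial report ..."]
summaries = ["first summary ...", "second summary ..."]

batch = tok(docs, truncation=True, padding=True, return_tensors="pt")
labels = tok(summaries, truncation=True, padding=True, return_tensors="pt")["input_ids"]
labels[labels == tok.pad_token_id] = -100       # ignore padding positions in the loss

model.train()
out = model(**batch, labels=labels, return_dict=True)   # cross-entropy over the summaries
out.loss.backward()
optimizer.step()
optimizer.zero_grad()

In practice you would wrap this in a DataLoader loop (or just use the seq2seq finetuning script) rather than running a single step.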
0
huggingface
Beginners
Load fine tuned model from local
https://discuss.huggingface.co/t/load-fine-tuned-model-from-local/1651
Hey, if I fine-tune a BERT model, is the tokenizer somehow affected? If I save my fine-tuned model like: bert_model.save_pretrained('./Fine_tune_BERT/') is the tokenizer saved too? Is the tokenizer modified? Do I need to save it too? Because loading the tokenizer like: tokenizer = BertTokenizer.from_pretrained('Fine_tune_BERT/') is giving an error: OSError: Model name 'Fine_tune_BERT/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'Fine_tune_BERT/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. So I assume I can load the tokenizer in the normal way?
The model is independent from your tokenizer, so you need to also do: tokenizer.save_pretrained('./Fine_tune_BERT/') to be able to load it back with from_pretrained.
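A minimal round trip, assuming a BERT fine-tune (the paths are just examples):

from transformers import BertForSequenceClassification, BertTokenizer

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# ... fine-tune model ...
model.save_pretrained("./Fine_tune_BERT/")
tokenizer.save_pretrained("./Fine_tune_BERT/")

# later, both load from the same directory
model = BertForSequenceClassification.from_pretrained("./Fine_tune_BERT/")
tokenizer = BertTokenizer.from_pretrained("./Fine_tune_BERT/")

Fine-tuning does not change the tokenizer's vocabulary, so loading the original pretrained tokenizer would also work; saving it alongside the model just keeps everything in one place.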
0
huggingface
Beginners
How to extract the “student” model after distillation?
https://discuss.huggingface.co/t/how-to-extract-the-student-model-after-distillation/1619
Hey there! I am using your distillation script (thanks for sharing it!) and, based on the dumped checkpoints I see, it seems that they contain both the teacher and the student. Assuming that my observation is correct, how can I dump only the student sub-model? @sshleifer wondering if you have any thoughts.
Great Q! The saved best_tfmr directory has only the student saved (in huggingface format). There is also a pytorch lightning weights_only checkpoint you could pass to ModelCheckpoint here. Be aware that this might break --do_predict/trainer.test, which you can overcome by running eval as a second step, roughly:

# Define useful aliases
run_distributed_eval () {
    proc=$1
    m=$2
    dd=$3
    sd=$4
    shift
    shift
    shift
    shift
    python -m torch.distributed.launch --nproc_per_node=$proc run_distributed_eval.py \
        --model_name $m --save_dir $sd --data_dir $dd $@
}
eval_best () {
    proc=$1
    m=$2
    dd=$3
    shift
    shift
    shift
    run_distributed_eval $proc $m/best_tfmr $dd $m/ $@
}

Finally, run: eval_best 1 output_dir (if you have more gpus, change the first arg)
0
huggingface
Beginners
What is the proper way to do inference using fine-tuned model?
https://discuss.huggingface.co/t/what-is-the-proper-way-to-do-inference-using-fine-tuned-model/1617
Hello. Once the model is fine-tuned, what is the proper way to do inference? E.g. I would send a list of words to an API (e.g. Flask) and predict labels for each word using the fine-tuned model. Should I just run the model using do_predict = True, and do_eval, do_test = False? Or is there a better way? Thanks!
This summary of tasks doc shows how you can do inference for various tasks.
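Since you mention sending a list of words and predicting a label per word, a hedged sketch with a token-classification model (the model path, example words and label handling are illustrative):

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_dir = "path_to_your_finetuned_model"   # wherever you saved the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForTokenClassification.from_pretrained(model_dir)
model.eval()

words = ["Hugging", "Face", "is", "based", "in", "New", "York"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc, return_dict=True).logits      # (1, seq_len, num_labels)
pred_ids = logits.argmax(-1)[0].tolist()
labels = [model.config.id2label[i] for i in pred_ids]
# note: these predictions are per sub-token; with a fast tokenizer you can map
# them back to your original words via enc.word_ids()

You can call something like this from your Flask handler; the do_train/do_eval/do_predict flags only exist inside the example training scripts.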
0
huggingface
Beginners
Customizing GenerationMixin to output attentions
https://discuss.huggingface.co/t/customizing-generationmixin-to-output-attentions/999
Hi all, I’m using a Pegasus model (or really BartForConditionalGeneration since almost everything is inherited) and I’m interested in the attention outputs of various encoder and decoder blocks throughout the model. Following the documentation, simply tokenizing an input context and running model(**input_tokens, output_attentions = True) allows me to dissect the attentions of each token in the input sequence in every layer (the dimensions being (batch_size, num_heads, seq_length, seq_length) for each layer). This is good. Now I want to see the attentions that lead to the predictions of each token returned from model.generate(). Since I am on master and build from source, I edit the method _generate_beam_search in the GenerationMixin class in transformers/generation_utils.py to also pass the arg output_attentions = True. Here is the edited snippet: decoder_attentions = [] # added this line while cur_len < max_length: model_inputs = self.prepare_inputs_for_generation( input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_kwargs ) outputs = self(**model_inputs, return_dict=True, output_attentions = True) # edited this line decoder_attentions.append(outputs['decoder_attentions']) # added this line next_token_logits = outputs.logits[:, -1, :] # (batch_size * num_beams, vocab_size) As you can see, I try to get the decoder attentions for every iteration, append it, and later I pass up the list and return it from the main generate() method. The problem is these attentions are no longer of the shape (batch_size, num_heads, seq_length, seq_length). Specifically, I would expect the sequence lengths to be at least the input context sequence length, but they are much shorter, and do not even match the length of the final sequence prediction. I feel like I have some misunderstanding about how things are working. I know encoder_outputs and some hidden states are precomputed, but I don’t know if they are affecting this. Can anyone help me understand what is going on here? Is there a way to see the influence of specific input context tokens in decoder block attention heads on predicted tokens ?
I think this is just returning the attentions from the decoder, since the encoder hidden states are only computed once in the generate function. So the returned attentions at every iteration are attention over the tokens generated so far. pinging @sshleifer
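As an aside, newer versions of transformers can return these directly from generate() without patching the source (hedging on the exact release where this landed):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

inputs = tok("some long article ...", return_tensors="pt")
out = model.generate(**inputs, num_beams=4, output_attentions=True,
                     return_dict_in_generate=True)
# out.encoder_attentions: encoder self-attention over the input tokens (computed once)
# out.decoder_attentions: per step, self-attention over the tokens generated so far
# out.cross_attentions:   per step, attention of the decoder over the input tokens,
#                         which is what links predicted tokens back to the input context

The cross-attentions are the piece that was missing above: decoder self-attention only covers the generated prefix, which is why its sequence length looks too short.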
0
huggingface
Beginners
Training DistilGPT2
https://discuss.huggingface.co/t/training-distilgpt2/372
Hello! I am trying to find resources/code samples to retrain the DistilGPT2 model with text I have preprocessed myself, but could not find any. Most of the documentation relates to DistilBert and it’s uses. Furthermore, I also have trained a gpt2-simple (tensorflow based) model. If there is any way to distil the same, it will help me too! Thanks for your help.
Hi @abhilashpal, you can find the distillation code here. The same script that produces distilbert can be used for GPT-2; it's just not documented. You should be able to use this command after processing your dataset:

python train.py \
    --student_type gpt2 \
    --student_config training_configs/distilgpt2.json \
    --teacher_type gpt2 \
    --teacher_name gpt2 \  # or your own teacher model
    --alpha_ce 5.0 --alpha_cos 1.0 --alpha_clm 0.5 \
    --freeze_pos_embs \
    --dump_path serialization_dir/my_first_training \
    --data_file data/binarized_text.bert-base-uncased.pickle \  # your data path
    --token_counts data/token_counts.bert-base-uncased.pickle \  # your own pickle file path
    --force  # overwrites the `dump_path` if it already exists

pinging @julien-c for more info
0
huggingface
Beginners
Longformer on 1 GPU or multi-GPU
https://discuss.huggingface.co/t/longformer-on-1-gpu-or-multi-gpu/1269
Hello, sorry if I duplicate the question. I did a brief search in the forum but did not really find the answer. I decided to do some fine-tuning of Longformer using a dataset which consists of 3000 pairs. The length of each input is up to 4096 tokens. After some simple computations I understood that around 24 GB of HBM on the GPU is needed to run BS=1. I do not have such a GPU, so I looked at my old 2-socket 20-core Xeon with 64 GB of RAM. I installed pytorch optimized with mkl-dnn for Intel processors… and after running I realized that fine-tuning on 3000 pairs would take around 100 hours. 100 hours, Carl! Either this Xeon is too old (only AVX supported) or mkl-dnn does not optimize bert-like pytorch models. Anyway, I'm looking into renting a GPU server, which finally brings me to my questions. Assuming that I need 24 GB of memory for 1 batch, can I take a server with 2 GPUs with 16 GB each? Do you know if pytorch + cuda can split the model across 2 GPUs without degradation, even for batch size = 1? Or do I need to look for a single Nvidia V100 with 32 GB of HBM to solve this problem? Has anybody already tried Longformer and can share some performance results with details of the hardware used? Thanks!!!
I faced the same problem. As of now, I used a 32 GB GPU (p3dn.24xlarge ec2 instance) and also reduced the number of tokens and the batch size. At present Longformer doesn't support multiple GPUs. We can shard the training data and train iteratively by saving the model (still exploring this). We can also explore the checkpoint feature in pytorch (https://pytorch.org/tutorials/recipes/recipes/saving_and_loading_a_general_checkpoint.html). Let me know if you have already found an efficient way to do this.
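If you do end up on a single smaller GPU, the usual memory levers with the Trainer are a tiny per-device batch, gradient accumulation and fp16; a hedged sketch (the model and dataset objects are assumed to exist):

from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,     # one 4096-token example at a time
    gradient_accumulation_steps=16,    # effective batch size of 16
    fp16=True,                         # roughly halves activation memory on recent GPUs
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()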
0
huggingface
Beginners
Checkpoint vs model weight
https://discuss.huggingface.co/t/checkpoint-vs-model-weight/1434
Please clarify the difference between a checkpoint and saving the weights of the model: which one should I load later? Also, I could not find my checkpoints (maybe an overwrite option at my end); can the same be done via these lines of code? trainer.save_model("/content/drive//results/distillbert/trainer") tokenizer.save_pretrained("/content/drive/results/distillbert/tokenizer")
I think a “checkpoint” is what we call a partial save during training. To take a checkpoint during training, you can save the model's state_dict, which is a list of the current values of all the parameters that have been updated during this training run. Note that this doesn't save the non-variable parameters, and it doesn't save the weights in any frozen layers. To reload the model to that checkpoint state, you first have to load a complete model with the right configuration. You can do this either by initializing randomly with the config file, or by loading a suitable pre-trained model. Then you update that complete model with the saved state_dict weights. If you want to continue the training from the same point, you also need information about the scheduler and the optimizer. This can be saved and applied using the optimizer's state_dict. I don't have any examples of using save_model or save_pretrained, but here's an example of saving a model and optimizer during training:

filedt = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
torch.save(model.state_dict(), '/content/drive/My Drive/ftregmod-' + filedt)
torch.save(optimizer.state_dict(), '/content/drive/My Drive/ftregopt-' + filedt)

and then to reload and continue training:

READFROMNAMEMODEL = '/content/drive/My Drive/ftregmod-20200911-014657'  ####
READFROMNAMEOPT = '/content/drive/My Drive/ftregopt-20200911-014657'  ####
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=NCLASSES, output_attentions=True)
model.load_state_dict(torch.load(READFROMNAMEMODEL), strict=False)
optimizer = AdamW(model.parameters(),
                  lr = LEARNRATE,  # default is 5e-5
                  eps = 1e-8       # default is 1e-8
                  )
optimizer.load_state_dict(torch.load(READFROMNAMEOPT))
0
huggingface
Beginners
Dataset for fake news detection, fine tune or pre-train
https://discuss.huggingface.co/t/dataset-for-fake-news-detection-fine-tune-or-pre-train/1383
Is there any dataset for fake news detection (different from sentiment analysis)? I have NELA-GT, but would I then need to pre-train from scratch? Any suggested methods? Am I on the correct page? https://huggingface.co/transformers/training.html I want to use a BERT model, thanks
You could try to get a baseline with fine-tuning before going for pre-training, and then make a decision based on the results. This thread has a nice pointer for pre-training: Pre-Train BERT (from scratch)
0
huggingface
Beginners
How to best deal with numbers?
https://discuss.huggingface.co/t/how-to-best-deal-with-numbers/1467
Let's say I have a mix of words and numbers, which represent, for example, prices or sizes of objects. The relation/magnitude between the numbers is very important. With normal tokenization, those large/less common numbers would be split into smaller pieces (a few digits each) and concatenated. Isn't this very inefficient and suboptimal, since the transformer has to learn how digits/numbers work and would assign every digit/combination its own learned vector? Would it make sense to manually convert these numbers and pass them in as their own vector? Example: DATA (Laying carpet 150sqft | price 400$) -> LABEL (okay) DATA (Carpet type 133 installation with an area of 70m2 | price 8000$) -> LABEL (Not okay)
If your task is closely related to those numbers, and all examples follow the same pattern, perhaps you should just use the price and surface as raw features in your model, along with the text. You could concatenate the price and surface values to the Transformer embedding for the text at the last hidden-layer before classification. I am not sure if that is what you meant by bone: Would it make sense to manually convert these numbers and pass in its own vector? Maybe also normalize the surface values to one standard. I don’t know if there are better ways to deal with values at tokenizing, but this would be my suggestion.
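To make the suggestion concrete, a rough sketch of concatenating normalized numeric features to the pooled text representation (the model name, feature count and normalization are all assumptions):

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TextPlusNumbers(nn.Module):
    def __init__(self, model_name="bert-base-uncased", n_numeric=2, n_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden + n_numeric, n_classes)

    def forward(self, input_ids, attention_mask, numeric_feats):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask,
                           return_dict=True)
        cls = out.last_hidden_state[:, 0]              # [CLS] token embedding
        x = torch.cat([cls, numeric_feats], dim=-1)    # append price, surface, ...
        return self.classifier(x)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TextPlusNumbers()
enc = tok(["Laying carpet"], return_tensors="pt")
feats = torch.tensor([[150.0 / 100.0, 400.0 / 1000.0]])   # crudely normalized sqft and price
logits = model(enc["input_ids"], enc["attention_mask"], feats)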
0
huggingface
Beginners
PYTORCH-TRANSFORMERS vs Transformers
https://discuss.huggingface.co/t/pytorch-transformers-vs-transformers/1484
What is the difference between PYTORCH-TRANSFORMERS (https://pytorch.org/hub/huggingface_pytorch-transformers/) and transformers? When running RoBERTa on PYTORCH-TRANSFORMERS, it worked fine. Please guide.
pytorch_transformers was older name/version. It later got renamed to transformers after adding support for TF. Also Roberta works in transformers. What’s not working for you ?
0
huggingface
Beginners
How to use transformer attention model when the input is features
https://discuss.huggingface.co/t/how-to-use-transformer-attention-model-when-the-input-is-features/1465
I am totally new to NLP, transformers and attention. I was playing with the sentence-transformers models and want to explore more, but now I am stuck. I have an input of BxKx768, which is my embedded features. Is there a way to give them to a transformer (which has an attention model) and get an output of size BxM, where M can be any number? I learned how to do it when the input is a sentence, but I have no idea how to do it when the input is features. I guess what I am asking is how to give my input to a transformer model and get my output. Apologies in advance if it is a bad question. If that is easy, are there different models to try? Like in resnet we have resnet 18, 50, etc.; do we have the same thing here?
The models in the HF library are focused on NLP, hence they have extra stuff related to language, such as an embeddings layer, positional and token types information as well as other model specific features. If you want to build your own model using Transformer layers then perhaps you should look at https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html 4. Regarding the last question, resnet18/50… are pre-trained models with different model sizes, see this post. With Transformers you can also change the size of the network by specifying different number of attention heads or layers. For instance bert-base model has 12-layers in depth and each has 12 attention heads, while bert-large is 24 layers deep and uses 16 attention heads, that’s why the embeddings they produce are different in size, 768 vs 1024, which has to do with the number of attention heads in this case. If you want to understand more I recommend this post. So I think the parallelism would be bert-base being a Resnet-18 and bert-large a Resnet-50. You could use pretrained Language Modelling models listed here for language related tasks as you would with pre-trained Resnets for computer vision tasks, although there are more task-specific trained models you can explore here. So as you can see there’s a lot happening, so welcome to the NLP world
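For the concrete "B x K x 768 in, B x M out" case, a minimal pure-PyTorch sketch (M, the layer/head counts and the mean pooling are arbitrary choices):

import torch
import torch.nn as nn

B, K, D, M = 4, 10, 768, 32
layer = nn.TransformerEncoderLayer(d_model=D, nhead=8, dim_feedforward=2048)
encoder = nn.TransformerEncoder(layer, num_layers=4)
head = nn.Linear(D, M)

feats = torch.randn(K, B, D)   # nn.Transformer modules expect (seq_len, batch, dim) by default
out = encoder(feats)           # (K, B, D)
pooled = out.mean(dim=0)       # pool over the K positions -> (B, D)
result = head(pooled)          # (B, M)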
0
huggingface
Beginners
Predictions for sequenceclassification task
https://discuss.huggingface.co/t/predictions-for-sequenceclassification-task/1453
I am working on a SequenceClassification task and don't know how to see the predictions. Here is my code:

from transformers import DistilBertTokenizer, TFDistilBertForMaskedLM
import tensorflow as tf
import torch

loaded_model = DistilBertForSequenceClassification.from_pretrained("/content/results/distillbert/model1", return_dict=True)
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased')
inputs = tokenizer("Hello, this news is Not good for learning", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = loaded_model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
Check out these docs. First, you won't need to pass labels for predictions, and for prediction put the code under torch.no_grad() to avoid calculating gradients. The returned logits have shape [batch_size, num_classes]; you can then apply argmax to get the index of the highest-scored class.

classes = ["NEGATIVE", "POSITIVE"]
enc = tokenizer("positive text", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc, return_dict=True).logits
pred_class_idx = torch.argmax(logits).item()
classes[pred_class_idx]

You can also apply softmax on the logits to get probabilities. Better yet, just use a pipeline as shown in the docs above, which does all of this for you.
0
huggingface
Beginners
Is MBart tensorflow2.0 model ready for use?
https://discuss.huggingface.co/t/is-mbart-tensorflow2-0-model-ready-for-use/1420
I am looking for MBart pretrained model, but it seems like there is only a pytorch version. I am wondering whether there is a tf2.0 version for this model. If not, can I convert this pytorch version to tf2.0 version?
Not at the moment, MBart inherits from Bart and it’s only available with torch. pinging @sshleifer
0
huggingface
Beginners
Sequence classification VS MaskedLM
https://discuss.huggingface.co/t/sequence-classification-vs-maskedlm/1410
Hi, I am new to huggingface; before this I used BERT with the original code and just used sequence classification for sentiment analysis. Now I am puzzled about which task is more suitable for detection tasks, e.g. fake news: (i) MaskedLM or (ii) Sequence classification? Thanks
Hi @shainaraza Fake news detection is classification, so Sequence classification is suitable. Masked language modelling (MLM) is used for pre-training and is not meant to be used for direct downstream tasks. For downstream tasks we take a pre-trained model and add a task-specific head on top of it. These two docs should help: Summary of the models, Summary of the tasks
0
huggingface
Beginners
Training a regression model using Roberta (SMILES to CCS) Cheminformatics
https://discuss.huggingface.co/t/training-a-regression-model-using-roberta-smiles-to-ccs-cheminformatics/1314
Using SMILES string to predict a float I’ve been learning how to use this library over the past few weeks and getting stuck into it. I don’t have a lot of experience with NNs but I have some understanding. I want to use Roberta to build a regression model which would predict the CCS (collisional cross section) area of a molecule given it’s formula in a SMILES string (which is a string representation of the molecule). I’ve run into an index out of range in self within the model and I’m wondering if anyone has any suggestions. I’m assuming it’s a problem with the forward method in my model or my Dataset. I am currently still unable to train the model at all. Any suggestions would be amazing, Thank you. a smiles string looks like this: COC1=CC=CC=C1CNCCC2=CC(=C(C=C2OC)Cl)OC The error raceback (most recent call last): File "c:/Users/ktzd064/Documents/Python/CCS_Prediction/CCSround2.py", line 165, in <module> trainer.train() File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\trainer.py", line 707, in train tr_loss += self.training_step(model, inputs) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\trainer.py", line 994, in training_step outputs = model(**inputs) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "c:/Users/ktzd064/Documents/Python/CCS_Prediction/CCSround2.py", line 21, in forward out, _ = self.bert(input_ids, token_type_ids, attention_mask) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\modeling_roberta.py", line 479, in forward return_dict=return_dict, File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\modeling_bert.py", line 825, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\modeling_roberta.py", line 82, in forward input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\modeling_bert.py", line 209, in forward token_type_embeddings = self.token_type_embeddings(token_type_ids) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\torch\nn\modules\sparse.py", line 126, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\torch\nn\functional.py", line 1814, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self The model class regression_model(nn.Module): def __init__(self): super(regression_model, self).__init__() self.bert = 
RobertaForSequenceClassification(config=config) self.drop = nn.Dropout(p=0.3) self.out = nn.Linear(self.bert.config.hidden_size, 1) def forward(self, input_ids, attention_mask, targets, token_type_ids,): out, _ = self.bert(input_ids, token_type_ids, attention_mask) out = self.dropout(out) loss = nn.MSELoss() output = loss(out, targets) return output The Dataset class smiles_dataset(Dataset): def __init__(self, smiles, targets, tokenizer, max_len): self.smiles = smiles self.targets = targets self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.smiles) def __getitem__(self, index): smiles = str(self.smiles[index]) targets = float(self.targets[index]) encoding = self.tokenizer.encode( smiles, ) return { 'SMILES': smiles, 'input_ids': encoding.ids, 'attention_mask': encoding.attention_mask, 'targets': torch.tensor(targets, dtype=torch.float32), 'token_type_ids': encoding.type_ids, } Tokenizer I have trained the tokenizer on the SMILES dataset I am using tokenizer = ByteLevelBPETokenizer() tokenizer.train('SMILES.txt', vocab_size=800, min_frequency=1, special_tokens=["<s>", "<PAD>", "<MASK>",]) tokenizer.save_model("CCSround2") from tokenizers.implementations import ByteLevelBPETokenizer from tokenizers.processors import BertProcessing tokenizer = ByteLevelBPETokenizer( "./CCSround2/vocab.json", "./CCSround2/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("<PAD>", tokenizer.token_to_id("<PAD>")), ("<MASK>", tokenizer.token_to_id("<MASK>")), ) tokenizer.enable_padding(length=300) tokenizer.save The training config train, test = train_test_split(SMILESandCCS, test_size=0.2) train_dataset = smiles_dataset(train['SMILES'].values, train['CCS'].values, tokenizer, 300) test_dataset = smiles_dataset(test['SMILES'].values, test['CCS'].values, tokenizer, 300) from transformers import RobertaConfig, RobertaTokenizerFast, RobertaForSequenceClassification config = RobertaConfig( vocab_size=800, max_positional_embeddings=224, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) tokenizer = RobertaTokenizerFast.from_pretrained('./CCSround2', max_len=300) from transformers import DataCollatorForLanguageModeling model = regression_model() training_args = TrainingArguments( output_dir = '/results', num_train_epochs = 3, per_device_train_batch_size=16, per_device_eval_batch_size=64, weight_decay=0.01, logging_dir='/logs', ) trainer = Trainer( model = model, args = training_args, train_dataset = train_dataset, ) trainer.train()
Hi, are you sure your ByteLevelBPETokenizer that you trained and saved is compatible with the RobertaTokenizerFast that you are loading it into?
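A few quick sanity checks that usually catch this kind of "index out of range in self" error, using the tokenizer and config objects from your snippets (the thresholds come from your own config values):

# 1) every id the tokenizer produces must be < config.vocab_size (800 here)
enc = tokenizer.encode("COC1=CC=CC=C1CNCCC2=CC(=C(C=C2OC)Cl)OC")
print(max(enc.ids), "must be <", config.vocab_size)

# 2) the padded length (+2 for Roberta's position-id offset) must fit in
#    config.max_position_embeddings - note the argument name is
#    max_position_embeddings, not max_positional_embeddings as in your RobertaConfig call
print(len(enc.ids) + 2, "must be <=", config.max_position_embeddings)

# 3) token_type_ids must be < type_vocab_size (1 here, so they must all be 0)
print(set(enc.type_ids), "should be {0}")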
0
huggingface
Beginners
Failed to train bart-cnn from bart-base using my own code
https://discuss.huggingface.co/t/failed-to-train-bart-cnn-from-bart-base-using-my-own-code/1393
Update: problem solved. It was caused by a bug in the model saving function. There's not much difference in the Rouge score before and after training, and the generated summarization of the same article is almost the same. I need some suggestions. I used the original settings from fairseq: label_smoothed_cross_entropy, label-smoothing 0.1, attention-dropout 0.1, weight-decay 0.01, lr-scheduler polynomial_decay, adam-betas "(0.9, 0.999)", adam-eps 1e-08, LR=3e-05. Some differences: batch-size 3, gradient_accumulation_steps 16, adamW, fp32, clip-norm 1.0, trained for 7 hours on 2 Tesla P4s. Loss (my fault, I made it hard to read): [screenshot omitted] The model does not seem to converge. Rouge score before training: "rouge1": 37.325248080026086, "rouge2": 18.03751262341448, "rougeL": 25.82438158757282 Rouge score after training: "rouge1": 37.818, "rouge2": 17.246, "rougeL": 23.7038
AFAIK there isn't a bart-base checkpoint trained for summarization; how did you get those metrics without training?
0
huggingface
Beginners
Facebook/bart-large-cnn has a low rouge score on cnn_dailymail
https://discuss.huggingface.co/t/facebook-bart-large-cnn-has-a-low-rouge-score-on-cnn-dailymail/673
I tested on the test split in the tensorflow_dataset and used the python library rouge to compute the rouge score. The score is quite low compared to the score reported in the paper: {'rouge-1': {'f': 0.38628074837405213, 'p': 0.38253581551915466, 'r': 0.4136606028772784}, 'rouge-2': {'f': 0.1810805831229415, 'p': 0.17948749808930747, 'r': 0.193921872080545}, 'rouge-l': {'f': 0.3747852342130126, 'p': 0.37128779953880464, 'r': 0.3958861147871471}} The score reported in the paper: BART 44.16 21.28 40.90 (R1, R2, RL). Parameters of the generate function: num_beams=4, length_penalty=2.0, max_length=256, min_length=10, no_repeat_ngram_size=3
Hi @LiuYangyang, can you post the parameters you used for the generate function? (assuming you used generate)
0
huggingface
Beginners
Cannot reproduce the results
https://discuss.huggingface.co/t/cannot-reproduce-the-results/1345
Hi I try to reproduce the result related to BART and the result is not comparable to the claimed performance. I tried sshleifer/distilbart-cnn-12-6 and facebook/bart-large-cnn and met the same problem. My generation process is modified based on the released summarization pipeline. python run_eval.py sshleifer/distilbart-cnn-12-6 $DATA_DIR/test.source $OUTPUT_FILE \ --reference_path $DATA_DIR/test.target \ --task summarization \ --device cuda \ --fp16 \ --bs 32 My performance without post-processing: 1 ROUGE-1 Average_R: 0.48286 (95%-conf.int. 0.48036 - 0.48554) 1 ROUGE-1 Average_P: 0.33581 (95%-conf.int. 0.33356 - 0.33802) 1 ROUGE-1 Average_F: 0.38536 (95%-conf.int. 0.38338 - 0.38737) --------------------------------------------- 1 ROUGE-2 Average_R: 0.20405 (95%-conf.int. 0.20148 - 0.20648) 1 ROUGE-2 Average_P: 0.14260 (95%-conf.int. 0.14067 - 0.14449) 1 ROUGE-2 Average_F: 0.16314 (95%-conf.int. 0.16108 - 0.16517) --------------------------------------------- 1 ROUGE-L Average_R: 0.40419 (95%-conf.int. 0.40174 - 0.40665) 1 ROUGE-L Average_P: 0.28191 (95%-conf.int. 0.27984 - 0.28396) 1 ROUGE-L Average_F: 0.32309 (95%-conf.int. 0.32111 - 0.32509) My performance with post-posting (from ProphetNet): 1 ROUGE-1 Average_R: 0.49758 (95%-conf.int. 0.49505 - 0.50028) 1 ROUGE-1 Average_P: 0.35663 (95%-conf.int. 0.35421 - 0.35889) 1 ROUGE-1 Average_F: 0.40406 (95%-conf.int. 0.40200 - 0.40607) --------------------------------------------- 1 ROUGE-2 Average_R: 0.21882 (95%-conf.int. 0.21622 - 0.22125) 1 ROUGE-2 Average_P: 0.15750 (95%-conf.int. 0.15543 - 0.15947) 1 ROUGE-2 Average_F: 0.17794 (95%-conf.int. 0.17576 - 0.17998) --------------------------------------------- 1 ROUGE-L Average_R: 0.41627 (95%-conf.int. 0.41375 - 0.41881) 1 ROUGE-L Average_P: 0.29928 (95%-conf.int. 0.29712 - 0.30132) 1 ROUGE-L Average_F: 0.33860 (95%-conf.int. 0.33658 - 0.34056) The expected performance for sshleifer/distilbart-cnn-12-6 is ?/21.26/30.59 and I can only achieve 40.41/17.79/33.86. So is the trick related to the post-processing, or how can I achieve the expected performance? Thank you!
For anyone who may see this post: the problem was solved by using a larger batch size. The result above used batch size 32 on a GeForce 2080 Ti; today I changed to a Tesla V100 with batch size 128. The result is pretty close to expected:
1 ROUGE-1 Average_R: 0.53399 (95%-conf.int. 0.53146 - 0.53669)
1 ROUGE-1 Average_P: 0.39205 (95%-conf.int. 0.38963 - 0.39451)
1 ROUGE-1 Average_F: 0.44179 (95%-conf.int. 0.43972 - 0.44408)
---------------------------------------------
1 ROUGE-2 Average_R: 0.25584 (95%-conf.int. 0.25312 - 0.25867)
1 ROUGE-2 Average_P: 0.18821 (95%-conf.int. 0.18605 - 0.19056)
1 ROUGE-2 Average_F: 0.21172 (95%-conf.int. 0.20940 - 0.21420)
---------------------------------------------
1 ROUGE-L Average_R: 0.44976 (95%-conf.int. 0.44719 - 0.45244)
1 ROUGE-L Average_P: 0.33090 (95%-conf.int. 0.32864 - 0.33330)
1 ROUGE-L Average_F: 0.37251 (95%-conf.int. 0.37040 - 0.37465)
0
huggingface
Beginners
Sharing a community provided dataset
https://discuss.huggingface.co/t/sharing-a-community-provided-dataset/1338
Hello, I want to share my dataset. I'm following the instructions here: huggingface.co Sharing your dataset — datasets 1.0.2 documentation I copied this command in order to add the metadata: python datasets-cli test datasets/ --save_infos --all_configs but it gives me a syntax error each time: [Errno 2] No such file or directory: 'sample_data/' /content/sample_data File "dataset-sharing.py", line 6 python datasets-cli test datasets/sample_data --save_infos --all_configs ^ SyntaxError: invalid syntax What is the correct way to specify the dataset folder? Thanks
Hi, In the link you shared, under “Adding tests and metadata to the dataset”, it says: “In the rest of this section, you should make sure that you run all of the commands from the root of your local datasets repository.” Since you are getting ‘No such file or directory’, are you running from the root?
0
huggingface
Beginners
How to train T5 with Tensorflow
https://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641
There are many examples showing how to train T5 in PyTorch, e.g. https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb, but none so far for Tensorflow. Many people have been asking on transformers: https://github.com/huggingface/transformers/issues/3626. Does anybody have a good notebook showing how to train T5 in Tensorflow? Otherwise, I will try to translate @valhalla's great notebook https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb to Tensorflow.
Thanks very much Patrick @patrickvonplaten! In Suraj Patil’s notebook, he employed Pytorch Trainer to train T5. At first, I didn’t know that we can use Trainer with Seq2Seq problems (according to “The Big Table of Tasks” https://huggingface.co/transformers/examples.html 29 which stated that Trainer does not yet support Translation / Summarization ) I will try to use TFTrainer for TF2 on Seq2Seq problems. If that doesn’t work, I think I will try to write custom loop in TF2.
0
huggingface
Beginners
Speed expectations for production BERT models on CPU vs GPU?
https://discuss.huggingface.co/t/speed-expectations-for-production-bert-models-on-cpu-vs-gpu/452
Hi! I’m working with an ML model produced by a researcher. I’m trying to set it up to run economically in production on large volumes of text. I know a lot about production engineering, but next to nothing about ML. I’m getting some results that are surprising to me, and I’m hoping for pointers to explanations and advice. Right now I have the key code broken out into 3 methods, so it’s easy to profile. def _tokenize(self, text): return self.tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt').to(self.device) def _run_model(self, model_input): return self.model(model_input['input_ids'], token_type_ids=model_input['token_type_ids'])[0] def _extract_results(self, logits): return logits[0][0].item(), logits[0][1].item() If I run this using my laptop CPU, I get numbers that make sense to me. For 1459 items, those three methods take 196.4 seconds, or about 135 ms per item. About 2.9 seconds is _tokenize, and the rest is _run_model. When I switch over to my laptop GPU, I get numbers that mystify me. The same data takes 131.8 seconds. 2.5 seconds to tokenize, and running the model takes 20.3 seconds. But extracting the result takes 108.8 seconds! The _extract_results method costs the same whether I extract one logit or both. The first one that I extract is slow, whether that’s [0] or [1]. The second one is effectively free. From nvidia-smi, I can see that the GPU is really being used, and my process is using ~850MB of the 2 GB of GPU RAM. So that seems fine. And if it matters, this is a GeForce 940MX on a Thinkpad t470. Do these numbers make sense to more experienced hands? I was expecting the GPU runs to be much faster, but if I actually want to get the results, it’s only a little faster. Thanks!
(probably too late to be useful…) Is the GPU being used for all of the sections? (Maybe your researcher only put the middle section on GPU.) Does the run-model step actually train and update the model, or does it just calculate answers for the data you entered?
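One thing worth ruling out here: CUDA kernels launch asynchronously, so wall-clock time tends to get billed to the first call that forces a synchronization, such as .item() in _extract_results. A small hedged timing sketch:

import time
import torch

def timed(fn, *args):
    torch.cuda.synchronize()    # finish any pending GPU work first
    start = time.perf_counter()
    out = fn(*args)
    torch.cuda.synchronize()    # wait for this step's GPU work before stopping the clock
    return out, time.perf_counter() - start

# logits, model_seconds = timed(self._run_model, model_input)
# values, extract_seconds = timed(self._extract_results, logits)

If the 108 seconds move over to _run_model once you synchronize, the GPU really is doing the work there and the extraction itself is cheap.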
0
huggingface
Beginners
How pretrained models are trained?
https://discuss.huggingface.co/t/how-pretrained-models-are-trained/1343
Hello, how are pre-trained models obtained? In the library there is always some pre-trained model downloaded from the server, but how is it originally trained? Is it also trained using the transformers library?
In most if not all built-in cases (e.g. bert-base-cased), the original paper implementations are ported to the transformers architecture. (you can have a look at conversion scripts, e.g. https://github.com/huggingface/transformers/blob/77cd0e13d2d09f60d2f6d8fb8b08f493d7ca51fe/src/transformers/convert_pytorch_checkpoint_to_tf2.py 2, https://github.com/huggingface/transformers/blob/d155b38d6ea70fef3dec2e1f678269e713672bb7/src/transformers/commands/convert.py) User models (e.g. username/mymodel-uncased) may have been trained in other ways or ported to the transformers architecture manually, with custom scripts, or they are trained by using the transformers library directly.
0
huggingface
Beginners
How to convert Tokenizer to TokenizerFast?
https://discuss.huggingface.co/t/how-to-convert-tokenizer-to-tokenizerfast/1305
I’m trying to run the question answering pipeline on chunks of of a long document. The DistilBertTokenizerFast has good features to support this. It can break a document down into overlapping chunks and it can give you the offsets mapping so that you can match up labels with tokens. But in the pipeline they use by default the DistilBertTokenizer, which has neither of these features. Is there a better way than below to find out what model is being used in the pipeline and construct a “fast” tokenizer to match? from transformers import pipeline from transformers import DistilBertTokenizerFast nlp = pipeline("question-answering") regular_tokenizer = nlp.tokenizer # look up which model is being used on github # https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines.py # currently the model is: model_name = "distilbert-base-cased-distilled-squad" fast_tokenizer = DistilBertTokenizerFast.from_pretrained(model_name)
You can pass a tokenizer to the pipeline: pipeline("", tokenizer=your_tok)
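For this specific case, that might look like the following (keeping the SQuAD-tuned DistilBERT named in your snippet):

from transformers import pipeline, DistilBertTokenizerFast

model_name = "distilbert-base-cased-distilled-squad"
fast_tokenizer = DistilBertTokenizerFast.from_pretrained(model_name)
nlp = pipeline("question-answering", model=model_name, tokenizer=fast_tokenizer)

result = nlp(question="Who wrote the report?",
             context="The report was written by Jane Doe last spring.")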
0
huggingface
Beginners
Model results differ after creating pipeline with same model
https://discuss.huggingface.co/t/model-results-differ-after-creating-pipeline-with-same-model/1315
I'm using a pre-trained tokenizer and a fine-tuned model. With the same model, when I create a pipeline, my results differ each time.

tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
mymodelclf = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True)
print(mymodelclf(negative_samples))

The same code is run twice, but 2 different results are obtained. Is this expected? [screenshot of the two differing outputs omitted]
Are you sure your model is in evaluation mode?
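In other words, dropout stays active if the model is still in training mode, which makes repeated runs non-deterministic. A quick hedged check with the objects from your snippet:

model.eval()   # switch off dropout so scores are reproducible
print(mymodelclf(negative_samples))
print(mymodelclf(negative_samples))   # should now match the first call exactly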
0
huggingface
Beginners
How to load finetuned model in TF
https://discuss.huggingface.co/t/how-to-load-finetuned-model-in-tf/1272
So I trained a new gpt2 model using the language modeling script and it spits out a pytorch.bin file that I assume is the model. How do I go about converting it to TF 2.0? I see there is a convert script but I am not sure if I can just run that to make it TF compatible?
You should create your model like this: from transformers import TFGPT2Model model = TFGPT2Model.from_pretrained("path_to_dir", from_pt=True) where path_to_dir should be replaced with the path to the directory where your pytorch model is (what you set in GPT2Model.save_pretrained()). Then you can use your TF model and save it with the save_pretrained method.
0
huggingface
Beginners
Dataset expected by Trainer
https://discuss.huggingface.co/t/dataset-expected-by-trainer/148
Hello everyone, I am rewriting some old code to use the new tokenizer syntax and the Trainer class but I believe I am missing something. This is how I am building the training dataset to be passed to the Trainer constructor: encoded_texts = tokenizer(texts, padding = True, truncation = True, return_tensors = 'pt') labels = torch.tensor(labels) dataset = TensorDataset(encoded_texts['input_ids'], encoded_texts['attention_mask'], labels) Can you please help me understand what I am doing wrong/missing? When I run trainer.train() I get the following error: vars() argument must have __dict__ attribute Thanks in advance!
I'll give you the full picture. The usual workflow: you create an instance of GlueDataset(data_args, tokenizer) and then pass it to the Trainer(...) class, along with default_data_collator. The reason is that GlueDataset returns InputExample objects, which are HF-specific and cannot be used by the Pytorch dataloader directly. So default_data_collator takes in List[InputExample] and returns a dict, and this dict is then used by the dataloader. So in the Trainer, if you pass default_data_collator with a TensorDataset, it won't work directly (that's why you're getting the error); the error is raised when the dataloader passes the batch to default_data_collator. I'd suggest using the default Pytorch collate_fn with your TensorDataset; it would work just fine. One more thing: make sure the dataloader returns a dict with the same keys the forward method expects. Inside _training_step, the inputs are moved to the GPU and then the function does: output = model(**inputs) In this case, the keyword arguments have to match. In case they don't, you can inherit from Trainer and redefine your own method. I hope this answers your question.
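Concretely, instead of a TensorDataset you can wrap the encodings in a small Dataset that yields dicts whose keys match the model's forward signature; a hedged sketch reusing encoded_texts and labels from your snippet (model and training_args assumed set up as usual):

import torch
from transformers import Trainer

class SimpleTextDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings   # dict from tokenizer(...): input_ids, attention_mask, ...
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = self.labels[idx]   # key must be "labels" for the loss to be computed
        return item

dataset = SimpleTextDataset(encoded_texts, labels)
trainer = Trainer(model=model, args=training_args, train_dataset=dataset)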
0
huggingface
Beginners
How to split a dataset into train, test, and validation?
https://discuss.huggingface.co/t/how-to-split-a-dataset-into-train-test-and-validation/1238
I am having difficulties trying to figure out how I can split my dataset into train, test, and validation. I've been going through the documentation here: https://github.com/huggingface/datasets/blob/37d4840a39eeff5d472beb890c8f850dc7723bb8/datasets/wikihow/wikihow.py and the template here: https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L63 but it hasn't become any clearer. This is the error I keep getting: TypeError: 'NoneType' object is not callable and this is the code I'm using:

def _split_generators(self, dl_manager):
    """Returns SplitGenerators."""
    dl_path = dl_manager.download_and_extract(_URLS)
    titles = {k: set() for k in dl_path}
    for k, path in dl_path.items():
        with open(path, encoding="utf-8") as f:
            for line in f:
                titles[k].add(line.strip())
    path_to_manual_file = os.path.join(
        os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename
    )
    if not os.path.exists(path_to_manual_file):
        raise FileNotFoundError(
            "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('wikihow', data_dir=...)` that includes a file name {}. Manual download instructions: {})".format(
                path_to_manual_file, self.config.filename, self.manual_download_instructions
            )
        )
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={
                "path": path_to_manual_file,
                "title_set": titles["train"],
            },
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION,
            gen_kwargs={
                "path": path_to_manual_file,
                "title_set": titles["validation"],
            },
        ),
        datasets.SplitGenerator(
            name=datasets.Split.TEST,
            gen_kwargs={
                "path": path_to_manual_file,
                "title_set": titles["test"],
            },
        ),
    ]
I think it's answered here: How to split main dataset into train, dev, test as DatasetDict
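For reference, a short sketch with the datasets library's built-in train_test_split, applied twice to get three splits (the csv file and ratios are placeholders):

from datasets import load_dataset, DatasetDict

ds = load_dataset("csv", data_files="my_data.csv")["train"]

split = ds.train_test_split(test_size=0.2, seed=42)                 # 80 / 20
held_out = split["test"].train_test_split(test_size=0.5, seed=42)   # 10 / 10

dataset = DatasetDict({
    "train": split["train"],
    "validation": held_out["train"],
    "test": held_out["test"],
})

If you are writing a loading script instead, _split_generators just needs one datasets.SplitGenerator per split, each pointing at the right file or title set.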
0
huggingface
Beginners
Recommended Hardware for NER Pipeline Model
https://discuss.huggingface.co/t/recommended-hardware-for-ner-pipeline-model/1224
Trying to get started and run the pretrained NER pipeline model (https://huggingface.co/transformers/task_summary.html#named-entity-recognition 4) on about 10 million instances of text. Would you recommend using a CPU or GPU? Previously, I used spaCy and broke my data up into batch sizes of 1k then ran those batches on a C5 instance on AWS. The AWS EC2 instance was compute optimized and had 96 cores. I’m able to run any instance on AWS. Thanks!
If you have 10M examples and have access to a GPU, then definitely use the GPU. If you want fast inference on CPU for the ner pipeline, then you could try onnx_transformers, which provides the same API as pipeline but leverages onnx for accelerated inference.
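If you go the GPU route, the pipeline can be placed on a GPU directly; a small hedged sketch (the chunking strategy over your 10M rows is up to you):

from transformers import pipeline

ner = pipeline("ner", device=0)    # device=0 -> first CUDA device, device=-1 -> CPU
texts = ["Hugging Face Inc. is based in New York City."] * 8

results = [ner(t) for t in texts]  # chunk your 10M rows and loop / parallelize as needed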
0
huggingface
Beginners
Run_ner.py slower on multi-GPU than single GPU
https://discuss.huggingface.co/t/run-ner-py-slower-on-multi-gpu-than-single-gpu/261
Am i missing something? Why run_ner.py in 3.0.2 is that slower when running on 2 GPUs vs a single GPU ? (1) 7 minutes for fp16, 1 GPU (2) 13 minutes for fp16, 2 GPUs (3) 11 minutes for fp16, python -m torch.distributed.launch --nproc_per_node 2 run_ner.py (1) 07/13/2020 13:21:49 - INFO - transformers.training_args - PyTorch: setting up devices 07/13/2020 13:21:50 - WARNING - main - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: True 07/13/2020 13:21:50 - INFO - main - Training/evaluation parameters TrainingArguments(output_dir=’/opt/ml/model/output’, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=True, evaluate_du ring_training=False, per_device_train_batch_size=6, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0 , adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4, max_steps=-1, warmup_steps=0, logging_dir=’/opt/ml/model/log’, logging_first_step=False, logging_steps=500, save_steps=750, save_total_limit=None, no_ cuda=False, seed=1, fp16=True, fp16_opt_level=‘O1’, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1) 07/13/2020 13:21:50 - INFO - transformers.configuration_utils - loading configuration file /opt/program/models/bert-base-multilingual-cased/config.json … Defaults for this optimization level are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic Processing user overrides (additional kwargs that are not None)… After processing overrides, optimization options are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic 07/13/2020 13:21:58 - INFO - transformers.trainer - ***** Running training ***** 07/13/2020 13:21:58 - INFO - transformers.trainer - Num examples = 7239 07/13/2020 13:21:58 - INFO - transformers.trainer - Num Epochs = 4 07/13/2020 13:21:58 - INFO - transformers.trainer - Instantaneous batch size per device = 6 07/13/2020 13:21:58 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 6 07/13/2020 13:21:58 - INFO - transformers.trainer - Gradient Accumulation steps = 1 07/13/2020 13:21:58 - INFO - transformers.trainer - Total optimization steps = 4828 07/13/2020 13:21:58 - INFO - transformers.trainer - Starting fine-tuning. … Epoch: 100%|██████████| 4/4 [07:13<00:00, 108.33s/it] 07/13/2020 13:29:12 - INFO - transformers.trainer - Training completed. 
Do not forget to share your model on huggingface.co/models =) (2) 07/13/2020 15:21:33 - INFO - transformers.training_args - PyTorch: setting up devices 07/13/2020 15:21:33 - WARNING - main - Process rank: -1, device: cuda:0, n_gpu: 2, distributed training: False, 16-bits training: True 07/13/2020 15:21:33 - INFO - main - Training/evaluation parameters TrainingArguments(output_dir=’/opt/ml/model/output’, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=True, evaluate_du ring_training=False, per_device_train_batch_size=6, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0 , adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4, max_steps=-1, warmup_steps=0, logging_dir=’/opt/ml/model/log’, logging_first_step=False, logging_steps=500, save_steps=750, save_total_limit=None, no_ cuda=False, seed=1, fp16=True, fp16_opt_level=‘O1’, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1) 07/13/2020 15:21:33 - INFO - transformers.configuration_utils - loading configuration file /opt/program/models/bert-base-multilingual-cased/config.json … enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic Processing user overrides (additional kwargs that are not None)… After processing overrides, optimization options are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic 07/13/2020 15:21:57 - INFO - transformers.trainer - ***** Running training ***** 07/13/2020 15:21:57 - INFO - transformers.trainer - Num examples = 7239 07/13/2020 15:21:57 - INFO - transformers.trainer - Num Epochs = 4 07/13/2020 15:21:57 - INFO - transformers.trainer - Instantaneous batch size per device = 6 07/13/2020 15:21:57 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 12 07/13/2020 15:21:57 - INFO - transformers.trainer - Gradient Accumulation steps = 1 07/13/2020 15:21:57 - INFO - transformers.trainer - Total optimization steps = 2416 07/13/2020 15:21:57 - INFO - transformers.trainer - Starting fine-tuning. … Epoch: 100%|██████████| 4/4 [13:16<00:00, 199.08s/it] 07/13/2020 15:35:14 - INFO - transformers.trainer - Training completed. Do not forget to share your model… (3) Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
07/13/2020 15:47:16 - INFO - transformers.training_args - PyTorch: setting up devices 07/13/2020 15:47:16 - WARNING - main - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bits training: True 07/13/2020 15:47:16 - WARNING - main - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: True 07/13/2020 15:47:16 - INFO - main - Training/evaluation parameters TrainingArguments(output_dir=’/opt/ml/model/output’, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=True, evaluate_du ring_training=False, per_device_train_batch_size=6, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0 , adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir=’/opt/ml/model/log’, logging_first_step=False, logging_steps=500, save_steps=750, save_total_limit=None, n o_cuda=False, seed=1, fp16=True, fp16_opt_level=‘O1’, local_rank=0, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1) 07/13/2020 15:47:16 - INFO - transformers.configuration_utils - loading configuration file /opt/program/models/bert-base-multilingual-cased/config.json … Defaults for this optimization level are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic Processing user overrides (additional kwargs that are not None)… After processing overrides, optimization options are: enabled : True opt_level : O1 cast_model_type : None patch_torch_functions : True keep_batchnorm_fp32 : None master_weights : None loss_scale : dynamic 07/13/2020 15:47:41 - WARNING - transformers.trainer - You are instantiating a Trainer but Tensorboard is not installed. You should consider installing it. 07/13/2020 15:47:41 - INFO - transformers.trainer - ***** Running training ***** 07/13/2020 15:47:41 - INFO - transformers.trainer - Num examples = 7239 07/13/2020 15:47:41 - INFO - transformers.trainer - Num Epochs = 4 07/13/2020 15:47:41 - INFO - transformers.trainer - Instantaneous batch size per device = 6 07/13/2020 15:47:41 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 12 07/13/2020 15:47:41 - INFO - transformers.trainer - Gradient Accumulation steps = 1 07/13/2020 15:47:41 - INFO - transformers.trainer - Total optimization steps = 2416 07/13/2020 15:47:41 - INFO - transformers.trainer - Starting fine-tuning. … Epoch: 100%|██████████| 4/4 [11:13<00:00, 168.32s/it] 07/13/2020 15:58:54 - INFO - transformers.trainer - Training completed. Do not forget to share your model…
It might be that, since local_rank = -1 in the 2nd setting, only one GPU was still being used effectively.
0
huggingface
Beginners
Cannot download tensorflow model of cahya/bert-base-indonesian-522M
https://discuss.huggingface.co/t/cannot-download-tensorflow-model-of-cahya-bert-base-indonesian-522m/1215
I was going to download this 3 model, and then I was going to save it later to be used with bert-serving. Since bert-serving only supports tensorflow model, I need to download the tensorflow one and not the PyTorch. The PyTorch model downloads just fine, but the I cannot download the tensorflow model. I used this code to download: from transformers import BertTokenizer, TFBertModel model_name='cahya/bert-base-indonesian-522M' model = TFBertModel.from_pretrained(model_name) Here’s what I got when running the code on Ubuntu 16.04, python3.5, transformers==2.5.1, Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/username/.local/lib/python3.5/site-packages/transformers/modeling_tf_utils.py", line 346, in from_pretrained assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file) File "/usr/lib/python3.5/genericpath.py", line 30, in isfile st = os.stat(path) TypeError: stat: can't specify None for path argument And here’s what I got when running it on Windows 10, python 3.6.5, transformers 3.1.0: --------------------------------------------------------------------------- OSError Traceback (most recent call last) C:\ProgramData\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 579 if resolved_archive_file is None: --> 580 raise EnvironmentError 581 except EnvironmentError: OSError: During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-3-c2f14f761f05> in <module>() 3 model_name='cahya/bert-base-indonesian-522M' 4 tokenizer = BertTokenizer.from_pretrained(model_name) ----> 5 model = TFBertModel.from_pretrained(model_name) C:\ProgramData\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 585 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {TF2_WEIGHTS_NAME}, {WEIGHTS_NAME}.\n\n" 586 ) --> 587 raise EnvironmentError(msg) 588 if resolved_archive_file == archive_file: 589 logger.info("loading weights file {}".format(archive_file)) OSError: Can't load weights for 'cahya/bert-base-indonesian-522M'. Make sure that: - 'cahya/bert-base-indonesian-522M' is a correct model identifier listed on 'https://huggingface.co/models' - or 'cahya/bert-base-indonesian-522M' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. This also happens with other cahya/ models. This page 3 says that you can use the tensorflow model. However, based on the error, it seems like the file does not exist over there? I tried downloading other pretrained model like bert-base-uncased etc. and they download just fine. This issue only happens with cahya/ models. Am I missing something? or should I report this issue to a github issue?
@cahya do you have any idea what might cause this?
0
huggingface
Beginners
IndexError list out of range
https://discuss.huggingface.co/t/indexerror-list-out-of-range/1188
Hi, So I’m currently trying to train a model in tensorflow on SMILES which is a bit of chemical information which tells you the molecular formula of a given molecule. I thought a transformer would work well because of the importance of context of each character of the string within the SMILES. I am currently having an issue running the model this is the error I get: C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\trainer_tf.py:653 distributed_training_steps * self.args.strategy.run(self.apply_gradients, batch) C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\trainer_tf.py:618 apply_gradients * gradients = self.training_step(features, labels) C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\trainer_tf.py:601 training_step * per_example_loss, _ = self.run_model(features, labels, True) C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\trainer_tf.py:682 run_model * outputs = self.model(features, labels=labels, training=training)[:2] C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\modeling_tf_bert.py:1127 call * outputs = self.bert( C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\modeling_tf_bert.py:615 call * embedding_output = self.embeddings(input_ids, position_ids, token_type_ids, inputs_embeds, training=training) C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\modeling_tf_bert.py:191 call * return self._embedding(input_ids, position_ids, token_type_ids, inputs_embeds, training=training) C:\ProgramData\Anaconda3\envs\tf-transformer\lib\site-packages\transformers\modeling_tf_bert.py:206 _embedding * seq_length = input_shape[1] IndexError: list index out of range This part is caused by running the trainer.train() I have checked to make sure there are no zero values in my data and the dataset I have just contains the SMILES (String) which is a feature and then the target which is the CCS (Collisional Cross Section) (float) This is my code: dataFrameData = pd.DataFrame(numpyArrayOfData, columns=['CAS', 'CCS', 'Compound', 'Adducts', 'Mass', 'SMILES']).iloc[1:] # splitting the data into testing and training train, test = train_test_split(dataFrameData, test_size=0.2) # getting the CCS as the target from the data targetTrain = train.pop('CCS').astype(float) targetTest = test.pop('CCS').astype(float) # getting the SMILES as first feature from data dfStrippedTrain = train[['SMILES']].copy() dfStrippedTest = test[['SMILES']].copy() # compile these two into a test and training data set and then use enumerate so there is indexing dataSetTrain = tf.data.Dataset.from_tensor_slices((dfStrippedTrain.values, targetTrain.values)).enumerate() dataSetTest = tf.data.Dataset.from_tensor_slices((dfStrippedTest.values, targetTest.values)).enumerate() model = TFBertForSequenceClassification.from_pretrained("bert-large-cased") training_args = TFTrainingArguments( output_dir = '/results', num_train_epochs = 3, per_device_train_batch_size=16, per_device_eval_batch_size=64, weight_decay=0.01, logging_dir='/logs' ) trainer = TFTrainer( model = model, args = training_args, train_dataset = dataSetTrain, eval_dataset = dataSetTest ) trainer.train() trainer.evaluate() Any help you can give me would be greatly appreciated. Thank you in advance.
Hi, what does your SMILES data string look like? Have you considered tokenizing it?
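To make the suggestion concrete, here is a minimal sketch of tokenizing the SMILES strings before building the tf.data.Dataset (dfStrippedTrain and targetTrain are the variables from the code above; the checkpoint name and max_length are placeholders):
from transformers import BertTokenizerFast
import tensorflow as tf

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")

# tokenize the raw SMILES strings into input_ids / attention_mask
train_texts = dfStrippedTrain["SMILES"].astype(str).tolist()
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=128)

# build a dataset of (features_dict, label) pairs, which is what the TF trainer expects
dataSetTrain = tf.data.Dataset.from_tensor_slices(
    (dict(train_encodings), targetTrain.values.astype("float32"))
)
Whether BERT's English WordPiece vocabulary is a good fit for SMILES strings is a separate question; a character-level or custom-trained tokenizer may suit them better.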
0
huggingface
Beginners
Tokenizer decoding using BERT, RoBERTa, XLNet, GPT2
https://discuss.huggingface.co/t/tokenizer-decoding-using-bert-roberta-xlnet-gpt2/1128
I’ve been using BERT and am fairly familiar with it at this point. I’m now trying out RoBERTa, XLNet, and GPT2. When I try to do basic tokenizer encoding and decoding, I’m getting unexpected output. Here is an example of using BERT for tokenization and decoding: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') result = tokenizer(text='the needs of the many', text_pair='outweigh the needs of the few') input_ids = result['input_ids'] print(input_ids) print(tokenizer.decode(input_ids)) print(tokenizer.convert_ids_to_tokens(input_ids)) The output is expected: [101, 1996, 3791, 1997, 1996, 2116, 102, 2041, 27204, 2232, 1996, 3791, 1997, 1996, 2261, 102] [CLS] the needs of the many [SEP] outweigh the needs of the few [SEP] ['[CLS]', 'the', 'needs', 'of', 'the', 'many', '[SEP]', 'out', '##weig', '##h', 'the', 'needs', 'of', 'the', 'few', '[SEP]'] I understand the special tokens like [CLS] and the wordpiece tokens like ##weig. However, when I try other models, I get crazy output. RoBERTa tokenizer = AutoTokenizer.from_pretrained('roberta-base') result = tokenizer(text='the needs of the many', text_pair='outweigh the needs of the few') input_ids = result['input_ids'] print(input_ids) print(tokenizer.decode(input_ids)) print(tokenizer.convert_ids_to_tokens(input_ids)) Output: [0, 627, 782, 9, 5, 171, 2, 2, 995, 1694, 8774, 5, 782, 9, 5, 367, 2] <s>the needs of the many</s></s>outweigh the needs of the few</s> ['<s>', 'the', 'Ġneeds', 'Ġof', 'Ġthe', 'Ġmany', '</s>', '</s>', 'out', 'we', 'igh', 'Ġthe', 'Ġneeds', 'Ġof', 'Ġthe', 'Ġfew', '</s>'] Where are those Ġ characters coming from? XLNet tokenizer = AutoTokenizer.from_pretrained('xlnet-base-cased') result = tokenizer(text='the needs of the many', text_pair='outweigh the needs of the few') input_ids = result['input_ids'] print(input_ids) print(tokenizer.decode(input_ids)) print(tokenizer.convert_ids_to_tokens(input_ids)) Output: /usr/local/lib/python3.6/dist-packages/transformers/configuration_xlnet.py:211: FutureWarning: This config doesn't use attention memories, a core feature of XLNet. Consider setting `men_len` to a non-zero value, for example `xlnet = XLNetLMHeadModel.from_pretrained('xlnet-base-cased'', mem_len=1024)`, for accurate training performance as well as an order of magnitude faster inference. Starting from version 3.5.0, the default parameter will be 1024, following the implementation in https://arxiv.org/abs/1906.08237 FutureWarning, [18, 794, 20, 18, 142, 4, 23837, 18, 794, 20, 18, 274, 4, 3] the needs of the many<sep> outweigh the needs of the few<sep><cls> ['▁the', '▁needs', '▁of', '▁the', '▁many', '<sep>', '▁outweigh', '▁the', '▁needs', '▁of', '▁the', '▁few', '<sep>', '<cls>'] Why are there underscore ▁ characters? GPT2 tokenizer = AutoTokenizer.from_pretrained('gpt2') result = tokenizer(text='the needs of the many', text_pair='outweigh the needs of the few') input_ids = result['input_ids'] print(input_ids) print(tokenizer.decode(input_ids)) print(tokenizer.convert_ids_to_tokens(input_ids)) Output: [1169, 2476, 286, 262, 867, 448, 732, 394, 262, 2476, 286, 262, 1178] the needs of the manyoutweigh the needs of the few ['the', 'Ġneeds', 'Ġof', 'Ġthe', 'Ġmany', 'out', 'we', 'igh', 'Ġthe', 'Ġneeds', 'Ġof', 'Ġthe', 'Ġfew'] Again, where are those Ġ characters coming from? I understand there are different subword tokenization schemes used by each. I also have the original research papers. 
Can someone please explain how the Huggingface Transformers implementation is producing these different outputs?
Each tokenizer has its own way of representing pieces of the same word, because each model expects its inputs in a particular way. For instance, the BERT model expects WordPiece tokenization because it was pretrained that way. GPT-2 and RoBERTa expect byte-level BPE (because they were pretrained this way), which results in outputs with those Ġ characters (which basically represent a space), whereas XLNet expects SentencePiece-tokenized text, which uses ▁ to represent the space character. You can find a high-level summary of the tokenizers and which one is used for each model in this doc page.
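A quick way to convince yourself that these markers are just the tokenizer's internal spelling of whitespace is to round-trip them:
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
tokens = tok.tokenize("the needs of the many")
print(tokens)                                 # ['the', 'Ġneeds', 'Ġof', 'Ġthe', 'Ġmany']
print(tok.convert_tokens_to_string(tokens))   # 'the needs of the many'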
0
huggingface
Beginners
Why positional embeddings are implemented as just simple embeddings?
https://discuss.huggingface.co/t/why-positional-embeddings-are-implemented-as-just-simple-embeddings/585
Hello! I can’t figure out why the positional embeddings are implemented as just the vanilla Embedding layer in both PyTorch and Tensorflow. Based on my current understanding, positional embeddings should be implemented as non-trainable sin/cos or axial positional encodings (from reformer). Can anyone please enlighten me with this? Thank you so much!
Hi @miguelvictor ! Both are valid strategies: iirc the original Transformers paper had sinusoidal embeddings with a fixed rate, but BERT learned a full vector for each of the 512 expected positions. Currently, the Transformers library has sinusoidal embeddings in the TransfoXL model, check it out!
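For illustration, a rough sketch of the two strategies side by side (fixed sinusoidal encodings vs. a learned nn.Embedding like BERT uses); the dimensions are just examples:
import math
import torch
import torch.nn as nn

def sinusoidal_positions(max_len=512, dim=768):
    # fixed, non-trainable sin/cos encodings from "Attention Is All You Need"
    position = torch.arange(max_len).unsqueeze(1).float()
    div_term = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    pe = torch.zeros(max_len, dim)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return pe  # shape (max_len, dim), no gradients needed

# BERT-style: one trainable vector per position, learned during pretraining
learned_position_embeddings = nn.Embedding(512, 768)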
0
huggingface
Beginners
Debugging a model that isn’t learning
https://discuss.huggingface.co/t/debugging-a-model-that-isnt-learning/1174
This is a more open-ended question. Suppose you’ve trained a model and after looking at the loss curve you see it hasn’t learned (the loss curve is flat). What sort of things would you begin to investigate to understand what might be the cause? Additionally, what sort of things might you log apriori to help you debug a model that didn’t learn (in the event that it happened)?
First things first: don't wait until the model is done learning; use a tool like TensorBoard or Weights & Biases, or simply print your loss after every step, so you can jump in quickly. The first thing that your model should be capable of is overfitting on a tiny dataset (e.g. 1-5 samples). Trying this out should be relatively simple. If it doesn't do so, you know something is wrong. Good things to check: is the learning rate too high or too low? Did you accidentally freeze the model so it has no trainable parameters? Are the shapes in your forward pass correct? Did you clear the gradients after optimising? And so on.
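As a rough sketch of that overfitting sanity check (model, train_dataset and the batch format are placeholders for whatever you are using; the batch is assumed to be a dict of tensors that includes labels):
import torch
from torch.utils.data import DataLoader, Subset

tiny_loader = DataLoader(Subset(train_dataset, range(4)), batch_size=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for step in range(200):
    for batch in tiny_loader:
        optimizer.zero_grad()          # don't forget to clear old gradients
        outputs = model(**batch)
        loss = outputs[0]              # first output is the loss when labels are passed
        loss.backward()
        optimizer.step()
    if step % 20 == 0:
        print(step, loss.item())       # should drop towards ~0 on 4 samples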
0
huggingface
Beginners
Running ExBert Lite in Google Colab
https://discuss.huggingface.co/t/running-exbert-lite-in-google-colab/1167
https://huggingface.co/exbert/?model=bert-base-uncased&modelKind=bidirectional&sentence=The%20girl%20ran%20to%20a%20local%20pub%20to%20escape%20the%20din%20of%20her%20city.&layer=0&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=null&tokenSide=null&maskInds= 5…&hideClsSep=true Is it possible to run this in Google Colab? There are some changes I’d like to make, but I’m really not sure how to do it.
Hi zanderbush, I don’t know whether it is possible to run ExBert on Colab, but it is definitely possible to run BertViz on Colab. (I use it.) Do you need the extra functions of ExBert, or would BertViz be enough? I would expect it to be possible to install ExBert on Colab, though you might have memory problems if you want to do the Corpus part. I don’t think bert-base-uncased is one of the available options in ExBert, so you might have to use bert-base-cased.
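For what it's worth, basic BertViz usage in a notebook looks roughly like this (based on its README at the time; the sentence is arbitrary and Colab may need an extra Javascript setup cell):
from bertviz import head_view
from transformers import BertTokenizer, BertModel

model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode("The girl ran to a local pub.", return_tensors="pt")
attention = model(inputs)[-1]                       # tuple of per-layer attention tensors
tokens = tokenizer.convert_ids_to_tokens(inputs[0])
head_view(attention, tokens)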
0
huggingface
Beginners
How to train a language model from scratch when my dataset is bigger than RAM?
https://discuss.huggingface.co/t/how-to-train-a-language-model-from-scratch-when-my-dataset-is-bigger-than-ram/117
I intend to train a language model from scratch. I have been following the tutorial on Hugging Face’s blog 108. However, when I tried to use their google Colab’s code, I had a memory exceeded error. Their Google Colab’s code uses the LineByLineTextDataset. After digging through the source code, it turns out that LineByLineTextDatasets loads the entire dataset eagerly. No wonder why I had a Memory Exceeded error. My dataset is larger than my RAM capacity. The article hints that if my dataset is bigger than my capacity, I could “opt to load and tokenize examples on the fly, rather than as a preprocessing step.” However, I’m not certain how to achieve this. I will be very grateful if others can points me to the right direction, especially if I can still use the transformers.Trainer.
Do you want to use TensorFlow or PyTorch for the training?
0
huggingface
Beginners
Using Trainer with do_train + do_eval
https://discuss.huggingface.co/t/using-trainer-with-do-train-do-eval/1145
Hi, I'm currently training a T5 model using the HF Trainer and looking through the TrainingArguments documentation as well as example scripts. At the moment I am trying to understand the use of the two optional arguments within TrainingArguments, 'do_train' and 'do_eval'. Some example notebooks don't specify these, and therefore they default to False. From experimenting, you can run trainer.evaluate() without 'do_eval', and I can't figure out what 'do_train' would do if it defaults to False. Thanks,
Those arguments are only used by the scripts, not the Trainer itself.
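Concretely, the example scripts do something along these lines with those flags (sketch):
if training_args.do_train:
    trainer.train()
if training_args.do_eval:
    results = trainer.evaluate()
    print(results)
If you drive the Trainer yourself, you can simply call trainer.train() / trainer.evaluate() and ignore both flags.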
0
huggingface
Beginners
Does training tokenizer and adding new token to model when training BART on custom dataset improve performance?
https://discuss.huggingface.co/t/does-training-tokenizer-and-adding-new-token-to-model-when-training-bart-on-custom-dataset-improve-performance/1163
I have a dataset composed of 300,000 articles. Is it wise to train tokenizer on my own dataset and add new token to the model then train BART? Is my dataset large enough to pre-train the embedding of new tokens added?
If the domain of your dataset isn't very different from the pre-training dataset, then training a new tokenizer won't help much. Also note that if you train a new tokenizer then you'll need to do the pre-training again, and when you train your tokenizer from scratch you don't need to add the new tokens any more, since the tokenizer already covers them. Hope this makes sense. As for "Is my dataset large enough": IMO this is a highly subjective question and depends on the quality of the data, the task, etc. @sshleifer might have some advice for this
0
huggingface
Beginners
Fine-tune, or train from scratch?
https://discuss.huggingface.co/t/fine-tune-or-train-from-scratch/1129
I have a corpus of about 15,000 documents, with a total of about 8gb of text, which I want to use as the source-material for a text generator. With that much data, would it make sense to use a pre-trained model (like gpt2-large), and then fine-tune it on my corpus? Or would it make more sense to train a new language model from scratch using my data? What would be the trade-offs between those two options? I know the answer is probably complex, but I’m interested in understanding what considerations to take into account, especially how those two options effect my budget for running the training process in the cloud.
Definitely finetune IMO. Will be much faster and better results. Would you rather teach a third grader how to predict the next word on your dataset or a newborn? One argument for from scratch would be more control. It is less likely your from scratch model says something racist or otherwise wrong if it is trained on just your (presumably friendly) data.
0
huggingface
Beginners
T5forConditionalGeneration
https://discuss.huggingface.co/t/t5forconditionalgeneration/1134
Hello, I am a newbie in using transformers and never used it before. I want to know, what’s the difference between T5Model and T5forConditionalGeneration? Where are they used?
Hi @ashiishkarhade T5Model contains the encoder (stack of encoder layers) and decoder (stack of decoder layers) without any task-specific head. It returns the raw hidden states of the decoder as output. T5ForConditionalGeneration also contains the encoder and decoder and adds an additional linear layer (lm_head) which takes the final hidden states of the decoder and generates the next token. For fine-tuning the model for seq2seq generation you should use T5ForConditionalGeneration; if you want to add a different task-specific head then you can use T5Model. Almost all library models have this structure: a base model which returns raw hidden states, and additional models with task-specific heads (ForSequenceClassification, ForQuestionAnswering, etc.) on top of the base model.
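A small illustration of the difference (using t5-small to keep the download light):
from transformers import T5Tokenizer, T5Model, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
base = T5Model.from_pretrained("t5-small")                        # encoder + decoder, raw hidden states
seq2seq = T5ForConditionalGeneration.from_pretrained("t5-small")  # same, plus the lm_head for generation

input_ids = tokenizer("translate English to German: Hello", return_tensors="pt").input_ids
generated = seq2seq.generate(input_ids)
print(tokenizer.decode(generated[0], skip_special_tokens=True))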
0
huggingface
Beginners
Language-modeling script “killed” when fine-tuning gpt2-medium
https://discuss.huggingface.co/t/language-modeling-script-killed-when-fine-tuning-gpt2-medium/1107
Hello! I’m just getting started with the huggingface libraries for text-generation. I cloned the transformers repo from github, and then I was able to successfully run fine-tuning on the GPT-2 small model, on my macbook, using this command… python3 examples/language-modeling/run_language_modeling.py \ --output_dir=/my/data/gpt2-small-finetune \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train \ --train_data_file=/path/to/my_corpus_text.txt For now, I’m using a very small corpus… with a total of around 150kb of text. The training process took about 30 minutes (which I assume was using the CPU rather than GPU). Once it finished running, I was able to successfully use the fine-tuned model to generate text, like this: python3 examples/text-generation/run_generation.py \ --model_type=gpt2 \ --model_name_or_path=/my/data/gpt2-small-finetune \ --length=100 \ --prompt="Once upon a time, there was a " Next, I tried running the same exact process, with the same training corpus, but with a new output directory and with model_name_or_path set to gpt2-medium… python3 examples/language-modeling/run_language_modeling.py \ --output_dir=/my/data/gpt2-medium-finetune \ --model_type=gpt2 \ --model_name_or_path=gpt2-medium \ --do_train \ --train_data_file=/path/to/my_corpus_text.txt The process starts up and emits some logs, which look normal… 09/13/2020 17:31:51 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0, distributed training: False, 16-bits training: False 09/13/2020 17:31:51 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/my/data/gpt2-small-finetune', overwrite_output_dir=False, do_train=True, do_eval=False, do_predict=False, evaluate_during_training=False, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Sep13_17-31-51_APTI5214-MBP', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, past_index=-1, run_name=None, disable_tqdm=False, remove_unused_columns=True) /usr/local/lib/python3.8/site-packages/transformers/modeling_auto.py:777: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. warnings.warn( /usr/local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:1319: FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead. warnings.warn( 09/13/2020 17:32:03 - INFO - filelock - Lock 5925535648 acquired on /path/to/cached_lm_GPT2Tokenizer_1024_my_corpus_text.txt.lock 09/13/2020 17:32:03 - INFO - filelock - Lock 5925535648 released on /path/to/cached_lm_GPT2Tokenizer_1024_my_corpus_text.txt.lock /usr/local/lib/python3.8/site-packages/transformers/trainer.py:249: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead. 
Then after about two minutes, the process exits with the message Killed: 9 So now I’m stuck… Does anybody know what might be wrong? This is a 2019 MacBook Pro with 16 GB of RAM and plenty of free space on the SSD. I’m an experienced Java developer, but I’m a python novice, so I might be missing something critical about the environment.
@lysandre might know this
0
huggingface
Beginners
Bart input confusion
https://discuss.huggingface.co/t/bart-input-confusion/1103
In BART for the summarization task, the input length is 1024 tokens. What does this input represent? For example, if I have a document with sentences s1, s2, ..., s500, does this mean that we feed the document sentence by sentence, or must the whole document be given as a single input of 1024 tokens (all sentences must fit, with truncation)? If the latter, doesn't it cause information loss? And if it's sequence by sequence, let's say 20 sentences at a time out of the 500, will the output at the top of the encoder change each time? To be honest I'm having a difficult time imagining how the encoder is processing the document.
Hi @Hildweig For BART the max input length is 1024 tokens. You can think of a token as a word for simplicity (words can be split into multiple tokens as well). It's not sentences. Here a document means a sequence with at most 1024 tokens. Processing longer sequences than that is still a topic of ongoing research. And I would recommend reading the original Transformer paper (Attention Is All You Need) to get an idea about how a sequence is processed by the encoder, or The Illustrated Transformer.
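In code terms, the usual approach is to let the tokenizer truncate anything beyond the limit (sketch; document_text stands for your full article as a single string):
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
enc = tokenizer(document_text, max_length=1024, truncation=True, return_tensors="pt")
print(enc["input_ids"].shape)  # at most (1, 1024); tokens past the limit are simply dropped
So yes, for very long documents anything after roughly the first 1024 tokens is not seen by the model.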
0
huggingface
Beginners
Finetuning a specific task when pretrained model isn’t trained on that specific task? Using the task model vs using the base model
https://discuss.huggingface.co/t/finetuning-a-specific-task-when-pretrained-model-isnt-trained-on-that-specific-task-using-the-task-model-vs-using-the-base-model/1113
I want to fine tune a RobertaForSequenceClassification task on microsoft/codebert-base model. This microsoft/codebert-base model hasn’t been trained for Sequence-Classification task. Can I load this pre-trained model inside a SequenceClassification function and fine tune it on my dataset? model = RobertaForSequenceClassification.from_pretrained( "microsoft/codebert-base" ) Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at microsoft/codebert-base and are newly initialized: ['classifier.dense.weight', 'classifier.out_proj.weight', 'classifier.out_proj.bias', 'classifier.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. While loading I get this message which is expected as the model isn’t trained on the task and thus would not have the weights. Can I proceed with fine-tuning this RobertaForSequenceClassification model or would I need to define my own classifier layer on top of RobertaModel and train that?
Hi @mayanksatnalika Yes, you can load a pre-trained base model for SequenceClassification. RobertaForSequenceClassification adds the classification head itself, you won’t need to do that manually. So you can fine-tune it for classification.
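A minimal sketch of that setup (num_labels is whatever your task needs; the warning about newly initialized weights is expected and goes away once you fine-tune):
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base")
model = RobertaForSequenceClassification.from_pretrained(
    "microsoft/codebert-base",
    num_labels=2,
)
# the encoder weights come from CodeBERT; only the classification head starts from scratch,
# so fine-tune on your labelled data before using it for predictions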
0
huggingface
Beginners
Current best practice for final linear classifier layer(s)?
https://discuss.huggingface.co/t/current-best-practice-for-final-linear-classifier-layer-s/1093
I’m using BERT to perform text classification (sentiment analysis or NLI). I pass a 768-D vector through linear layers to get to a final N-way softmax. I was wondering what is the current best practice for the final block of linear layers? I see in the implementation of BertForSequenceClassification that the 768-D pooled output is passed through a Dropout and a Linear layer 8. pooled_output = outputs[1] pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) Is this the current best practice? What about adding more linear layers, dropout, relu, batchnorm, etc.? I am using this classifier architecture: pooled_output = outputs[1] # 768-D # --- start block --- Linear (out=1000-D) ReLU BatchNorm Dropout (0.25) # --- end block --- Linear (out=N) # Final N-way softmax I can repeat the classifier block as many times as I want with any intermediate dimensionality. I’m worried that my knowledge of using ReLU, batchnorm, and dropout may be outdated. Any help would be appreciated.
There is already one hidden layer between the final hidden state and the pooled output you see, so the one in SequenceClassificationHead is the second one. Usually for classification head, two hidden layers are sufficient (talking about vision as well as text here), but you can certainly try more and see if you get better results.
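If you do want to experiment with a deeper head on top of the pooled output, a minimal sketch of the block described in the question (sizes and dropout are just the values given there):
import torch.nn as nn

class DeeperClassifierHead(nn.Module):
    def __init__(self, hidden_size=768, inner_size=1000, num_labels=3, dropout=0.25):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(hidden_size, inner_size),
            nn.ReLU(),
            nn.BatchNorm1d(inner_size),
            nn.Dropout(dropout),
        )
        self.out = nn.Linear(inner_size, num_labels)

    def forward(self, pooled_output):            # pooled_output: (batch, hidden_size)
        return self.out(self.block(pooled_output))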
0
huggingface
Beginners
How to get weights indicating the importance of each words in a sentence corresponding to the label
https://discuss.huggingface.co/t/how-to-get-weights-indicating-the-importance-of-each-words-in-a-sentence-corresponding-to-the-label/1013
I am currently working on sentiment analyses using bert’s pooled output or classification task model in pytorch. I would want to see the importance weight of each word in a sentence which is leading me to the sentiment. More formally, I want a batch_size x max_length size tensor, where each value of the matrix is the weight indicating the importance of that word on the ith row. I’ve seen many suggesting to look at the 11th layer of bert but I am unable to find anything like that. Any help regarding this matter, please?
Isn’t there anybody who could help me out with this? I badly need guidance on it. Thank you in advance.
0
huggingface
Beginners
Inference API detailed request
https://discuss.huggingface.co/t/inference-api-detailed-request/876
I am using this api endpoint of gpt2 from https://huggingface.co/gpt2 3 this is given curl request, " curl -X POST -H “Authorization: Bearer api_xxxxxxxxxxxxxxxxxxxxxxx” -H “Content-Type: application/json” -d ‘“My name is Mariama, my favorite”’ https://api-inference.huggingface.co/models/gpt2 2 " The problem is this request only specifies start of the output text. “my name is marima, my favorite” in this case. But when using transformers you can specify max lenght of output too with >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) is it achievable with inference api ? Probably something like below request. " curl -X POST -H “Authorization: Bearer api_xxxxxxxxxxxxxxxxxxxxxxx” -H “Content-Type: application/json” -d ‘“My name is Mariama, my favorite”’ -d “max_length = 30” htt/api-inference.huggingface.co/models/gpt2 "
hi hexapoda, I have no idea what your post is asking about, but the link doesn’t seem to work (I get “method not allowed” when I try to use it). It might be good to include more detail too.
0
huggingface
Beginners
Speeding up zero shot classification [Solved]
https://discuss.huggingface.co/t/speeding-up-zero-shot-classification-solved/692
Hi, I was wondering if there was a way to speed up zero shot classification as outlined here 28 if I was to use pytorch directly. For example I’m guessing this default method tokenises and pads to length 512 whereas most of my text is < 50 words. I’ve had some experience in using BertWordPieceTokenizer. So I’m guessing it would also be faster to tokenize everything in one go and send it to a pytorch model directly, rather than one by one which is what I’m guessing is happening here? Would really appreciate even a starting point if such a thing is possible.
By default it actually pads to the length of the longest sequence in the batch, so that part is efficient. The thing to keep in mind with this method is that each sequence/label pair has to be fed through the model together. So if you are running with a large # of candidate labels, that's going to be your bottleneck. The other thing is that the default model, bart-large-mnli, is pretty big. Theoretically, the pipeline can be used with any model that has been trained on NLI, which you can specify with the model parameter when you create the pipeline. So you could try out some smaller models, but you probably won't get anything to work as well as Bart or Roberta in terms of accuracy.
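For example, swapping in a smaller NLI checkpoint is just a matter of the model argument (the model name here is only an example of a distilled MNLI model that was available on the hub; any NLI-trained model should work):
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="valhalla/distilbart-mnli-12-3")

classifier(
    "The food was absolutely wonderful",
    candidate_labels=["positive", "negative", "neutral"],
)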
0
huggingface
Beginners
Electra NER on Conll03-english
https://discuss.huggingface.co/t/electra-ner-on-conll03-english/1032
Hi all, I am new to NLP. I would like to understand how model dbmdz/electra-large-discriminator-finetuned-conll03-english was fine tuned on Conll03-english dataset. Is there any available code that I can take a look? I understand it was trained by @stefan-it. Thank you, Sergul
Hi @sergulaydore, you can find the NER training code here.
0
huggingface
Beginners
Data format in run_lm_fine_tuning.py
https://discuss.huggingface.co/t/data-format-in-run-lm-fine-tuning-py/1011
Hello everyone, I would like to ask for help with the following: I want to fine-tune a language model for text generation, and for this I will use run_lm_fine_tuning.py. I would like to know what is the optimal way to train the model. Can it be trained with a plain csv file, or should the file have some kind of pre-processing? Or is it necessary to train it with an instance of the Dataset class? Thanks for the help
Hi @kintaro, run_lm_fine_tuning.py is now renamed to run_language_modeling.py; you can find it here. The data file can be in one of two formats:
Line by line: each example is on its own line, separated by \n. Set the --line_by_line command line argument.
Or just a plain text file from which examples will be sampled.
This notebook also walks through how a language model can be trained.
0
huggingface
Beginners
BertForMaskedLM on a fine-tuned base model
https://discuss.huggingface.co/t/bertformaskedlm-on-a-fine-tuned-base-model/1028
Hello, Is there a way for me to fine-tune the base bert/roberta architecture on a task like sequence classification, and then use the fine-tuned model as a base model for MLM predictions? I tried this by copying the state dict over from the sequence classification task into the MLM architecture, but that did not work at all. Seems like the weights that I swap from the sequence prediction task do not play well with the MLM objective. Here is a code snippet - #Load the fine-tuned ‘roberta-base’ model into RobertaForMaskedLM roberta_mlm_model = RobertaForMaskedLM.from_pretrained(MODEL_FILE) #load the default model default_model = RobertaForMaskedLM.from_pretrained(‘roberta-base’) #swap the weights for the head roberta_mlm_model.lm_head.load_state_dict(default_model.lm_head.state_dict()) Can someone tell me if I am thinking in the right direction here? Nikhil
This will be problematic because the heads are not compatible. In other words, you can fine-tune the model and use the weights from one model in the other, but you still have the issue that the heads are different and cannot be mapped. So after fine-tuning for sequence classification, saving the model, and loading that model in an MLM version of that architecture, the LMHead will not have pretrained weights.
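Instead of copying state dicts by hand, one way is to save the fine-tuned model and reload it into the MLM class; the shared encoder weights carry over, while the incompatible head pieces are initialized fresh. A sketch:
from transformers import RobertaForSequenceClassification, RobertaForMaskedLM

clf = RobertaForSequenceClassification.from_pretrained("roberta-base")
# ... fine-tune clf on your classification task ...
clf.save_pretrained("roberta-finetuned-clf")

# the RoBERTa encoder is loaded from your checkpoint; the MLM head parts that are
# not in that checkpoint are newly initialized (you'll see a warning saying so)
mlm = RobertaForMaskedLM.from_pretrained("roberta-finetuned-clf")
So the MLM predictions still won't be meaningful until that head has been trained.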
0
huggingface
Beginners
How pretrain ELECTRA on custom dataset?
https://discuss.huggingface.co/t/how-pretrain-electra-on-custom-dataset/968
I have a 1GB raw text dataset in a niche domain. I want to train an ELECTRA model but I couldn’t find any tutorial/examples to do so. Can anyone help me? I tried using the simpletransformers package, but it has memory issues at the moment and after a few epochs my colab session crashes.
pinging @lysandre
0
huggingface
Beginners
Fine-tuning Dataset Requirements
https://discuss.huggingface.co/t/fine-tuning-dataset-requirements/993
I am looking to fine-tune a BART-large model for a summarization task and I am creating a dataset to tune on. How should I structure this dataset? Should it have a column of text blocks and another column with associated summaries? Or, will simply providing the raw text (the text blocks) without summaries suffice? Thanks!
Hi @Buckeyes2019 You can find the seq2seq fine-tuning scripts here (GitHub: huggingface/transformers, examples/seq2seq). The readme explains how the data should be formatted and saved.
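For reference, the data layout those scripts expected at the time was roughly the following (based on the examples/seq2seq readme of that era; double-check the exact file names against your transformers version):
data_dir/
  train.source   (one source document per line)
  train.target   (the matching summary on the same line number)
  val.source and val.target
  test.source and test.target
Each line of a .source file pairs with the .target line at the same position.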
0
huggingface
Beginners
Error for loading checkpoints sometimes
https://discuss.huggingface.co/t/error-for-loading-checkpoints-sometimes/724
I fine tuned the T5 model and want to load the checkpoint. I have done this successfully for a lot of times but I got an error for one model: INFO:transformers.tokenization_utils_base:loading file https://s3.amazonaws.com/models.huggingface.co/bert/t5-spiece.model 1 from cache at /home/t-miahu/.cache/torch/transformers/68f1b8dbca4350743bb54b8c4169fd38cbabaad564f85a9239337a8d0342af9f.9995af32582a1a7062cb3173c118cb7b4636fa03feb967340f20fc37406f021f Traceback (most recent call last): File “eval_checkpoint_WEnoLem_PI_fromPI.py”, line 39, in model = T5FineTuner.load_from_checkpoint(PATH) File “/home/t-miahu/anaconda3/envs/T5env/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py”, line 1514, in load_from_checkpoint checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage) File “/home/t-miahu/anaconda3/envs/T5env/lib/python3.6/site-packages/torch/serialization.py”, line 527, in load with _open_zipfile_reader(f) as opened_zipfile: File “/home/t-miahu/anaconda3/envs/T5env/lib/python3.6/site-packages/torch/serialization.py”, line 224, in init super(_open_zipfile_reader, self).init(torch.C.PyTorchFileReader(name_or_buffer)) RuntimeError: version <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at /pytorch/caffe2/serialize/inline_container.cc:132) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f47cc697193 in /home/t-miahu/anaconda3/envs/T5env/lib/python3.6/site-packages/torch/lib/libc10.so) frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1f5b (0x7f472cc2b9eb in /home/t-miahu/anaconda3/envs/T5env/lib/python3.6/site-packages/torch/lib/libtorch.so) frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::string const&) + 0x64 (0x7f472cc2cc04 in /home/t-miahu/anaconda3/envs/T5env/lib/python3.6/site-packages/torch/lib/libtorch.so) frame #3: + 0x6c53a6 (0x7f47cd5623a6 in /home/t-miahu/anaconda3/envs/T5env/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #4: + 0x2961c4 (0x7f47cd1331c4 in /home/t-miahu/anaconda3/envs/T5env/lib/python3.6/site-packages/torch/lib/libtorch_python.so) frame #39: __libc_start_main + 0xe7 (0x7f47d1d25b97 in /lib/x86_64-linux-gnu/libc.so.6) It seems the version has some problem, but why I did not have this error before? Any idea why I have no problem with other checkpoints but have error for this one?
I have the same issue… Did you solve it??
0
huggingface
Beginners
Returned Tensors and Hidden State
https://discuss.huggingface.co/t/returned-tensors-and-hidden-state/883
Hi, just quickly getting started with GPT2. from https://huggingface.co/gpt2 2 : from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) is said to yield the features of the text. Upon inspecting the output, it is an irregularly shaped tuple with nested tensors. Looking at the source code for GPT2Model, this is supposed to represent the hidden state. I can guess what some of these dimensions represent, for example the 768 dimension is obviously the word embedding, but in general I can’t find any documentation about interpreting the information in output I also tried adding: output = model(**encoded_input, output_attentions = True) but I do not know how to interpret the dimensions of this either. I am told to “See attentions under returned tensors for more detail.” in the docstring at https://huggingface.co/transformers/_modules/transformers/modeling_gpt2.html#GPT2Model 2 But I cannot find what this is referring to. Can someone help me interpret the dimensions of these nested tuples?
Please refer to the GPT2 docs. It will give a detailed description of what GPT2Model is supposed to return. It returns (in order of output):
last_hidden_state: (batch_size, sequence_length, hidden_size)
past: (2, batch_size, num_heads, sequence_length, embed_size_per_head)
hidden_states
attentions
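Putting that together with the snippet from the question (a sketch; the exact return container depends on your transformers version, and on older versions output_attentions / output_hidden_states may need to be set at from_pretrained time instead of in forward):
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
output = model(**encoded_input, output_attentions=True, output_hidden_states=True)

last_hidden_state = output[0]   # (batch_size, sequence_length, hidden_size) token features
past = output[1]                # per-layer cached key/values for fast generation
print(last_hidden_state.shape)  # e.g. torch.Size([1, 10, 768]), where 768 is the hidden size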
0
huggingface
Beginners
Does run_generation.py has a “no_repeat_ngram_size” attribute?
https://discuss.huggingface.co/t/does-run-generation-py-has-a-no-repeat-ngram-size-attribute/952
Good day, I'm generating text with run_generation.py. I would like to know if there is any way to pass the attribute no_repeat_ngram_size. I have seen that attribute with model.generate but not in run_generation.py.
hi @kintaro, run_generation.py is used for auto-regressive generation which uses sampling. no_repeat_ngram_size is used for beam search so run_generation.py doesn’t have that attribute since it only uses sampling for generation
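If you call generate yourself instead of going through the script, you can pass it directly, e.g.:
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("In a shocking finding, scientists", return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    no_repeat_ngram_size=2,   # no 2-gram is allowed to appear twice in the output
    early_stopping=True,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))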
0
huggingface
Beginners
Is there any difference in the tokenized output if I load the tokenizer from a different pretrained model
https://discuss.huggingface.co/t/is-there-any-difference-in-the-tokenized-output-if-i-load-the-tokenizer-from-a-different-pretrained-model/872
So let's say I do GPT2TokenizerFast.from_pretrained('gpt2-medium') vs GPT2TokenizerFast.from_pretrained('distilgpt2') Is there actually any difference in their tokenized output?
In that particular case, I don’t think so, but there are definitely cases where tokenizers from the same model type but different pretrained configurations are different. bert-base-uncased vs bert-base-cased would be one clear example.
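It's also easy to check directly for the pair you care about:
from transformers import GPT2TokenizerFast

tok_a = GPT2TokenizerFast.from_pretrained("gpt2-medium")
tok_b = GPT2TokenizerFast.from_pretrained("distilgpt2")

text = "Is there actually any difference in their tokenized output?"
print(tok_a(text)["input_ids"] == tok_b(text)["input_ids"])  # True if they tokenize identically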
0
huggingface
Beginners
Learning rate, LR scheduler and optimiser choice for fine-tuning GPT2
https://discuss.huggingface.co/t/learning-rate-lr-scheduler-and-optimiser-choice-for-fine-tuning-gpt2/971
I know the best choice is different depending on the actual dataset that we are fine-tuning on but I am just curious to know what combinations of learning rate, LR scheduler and optimiser have you guys found to be a good combination to train with in general? I am currently using AdamW, CosineAnnealingWarmRestarts, with a learning rate going from 0.002 to 0.0001, restarting at the end of each epoch.
You can refer to TrainingArguments to look at the defaults (see the docs). They usually work well.
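A quick way to inspect those defaults for the version you have installed (the values in the comments are what they were around transformers 3.x, so treat them as an assumption to verify):
from transformers import TrainingArguments

args = TrainingArguments(output_dir="tmp")
print(args.learning_rate)   # 5e-05
print(args.weight_decay)    # 0.0
print(args.adam_epsilon)    # 1e-08
print(args.warmup_steps)    # 0  (AdamW with a linear decay schedule)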
0
huggingface
Beginners
Gradient overflow when fine-tune t5 on CNN/DM dataset
https://discuss.huggingface.co/t/gradient-overflow-when-fine-tune-t5-on-cnn-dm-dataset/513
I was trying to fine-tune t5 on CNN/DM dataset for summarization task. I use the data based on README file in examples/seq2seq: wget https://s3.amazonaws.com/datasets.huggingface.co/summarization/cnn_dm.tgz tar -xzvf cnn_dm.tgz I also successfully fine-tuned sshleifer/distilbart-cnn-12-6 on this dataset. But when I try to do it using t5-base, I receive the following error: Epoch 1: 0%| | 2/37154 [00:07<40:46:19, 3.95s/it, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 Epoch 1: 0%| | 3/37154 [00:08<27:57:13, 2.71s/it, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0 Epoch 1: 0%| | 4/37154 [00:08<21:32:17, 2.09s/it, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2048.0 Epoch 1: 0%| | 5/37154 [00:08<17:41:05, 1.71s/it, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1024.0 Epoch 1: 0%| | 6/37154 [00:08<15:15:58, 1.48s/it, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 512.0 Epoch 1: 0%| | 7/37154 [00:09<13:24:44, 1.30s/it, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 256.0 Epoch 1: 0%| | 8/37154 [00:09<12:01:25, 1.17s/it, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 128.0 Epoch 1: 0%| | 9/37154 [00:09<10:56:27, 1.06s/it, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 64.0 Epoch 1: 0%| | 10/37154 [00:09<10:04:29, 1.02it/s, loss=nan, v_num=13]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32.0 <...> Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 0.0 Epoch 1: 3%|███▋ | 1091/37154 [04:13<2:19:28, 4.31it/s, loss=nan, v_num=13]Traceback (most recent call last): File "finetune.py", line 409, in <module> main(args) File "finetune.py", line 383, in main logger=logger, File "/data/User/v3/bart/lightning_base.py", line 303, in generic_train trainer.fit(model) File "/data/User/v3/bart/venv/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1003, in fit results = self.single_gpu_train(model) File "/data/User/v3/bart/venv/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 186, in single_gpu_train results = self.run_pretrain_routine(model) File "/data/User/v3/bart/venv/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1213, in run_pretrain_routine self.train() File "/data/User/v3/bart/venv/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 370, in train self.run_training_epoch() File "/data/User/v3/bart/venv/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 452, in run_training_epoch batch_output = self.run_training_batch(batch, batch_idx) File "/data/User/v3/bart/venv/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 632, in run_training_batch self.hiddens File "/data/User/v3/bart/venv/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 822, in optimizer_closure error = context.__exit__(a, b, c) File "/root/anaconda3/lib/python3.7/contextlib.py", line 119, in __exit__ next(self.gen) File "/data/User/v3/bart/venv/lib/python3.7/site-packages/apex/amp/handle.py", line 123, in scale_loss optimizer._post_amp_backward(loss_scaler) File "/data/User/v3/bart/venv/lib/python3.7/site-packages/apex/amp/_process_optimizer.py", line 190, in 
post_backward_with_master_weights models_are_masters=False) File "/data/User/v3/bart/venv/lib/python3.7/site-packages/apex/amp/scaler.py", line 117, in unscale 1./scale) ZeroDivisionError: float division by zero My code for fine-tuning is modified based on examples/seq2seq: ./finetune.sh \ --data_dir $DATA_DIR \ --train_batch_size=8 \ --eval_batch_size=8 \ --output_dir=$OUTPUT_DIR \ --num_train_epochs 5 \ --model_name_or_path t5-base Can anyone provide some suggestions? Thank you!
Looks like you are using fp16. Currently there are a few bugs with fp16 for T5, and I think those are not fixed yet, so try turning off fp16. Pinging @sshleifer for more info
0
huggingface
Beginners
What can cause model.generate (BART) output to be gibberish after fine-tuning?
https://discuss.huggingface.co/t/what-can-cause-model-generate-bart-output-to-be-gibberish-after-fine-tuning/934
I’m trying to fine-tune BART for paraphrasing, it’s my first time fine-tuning a model so I think I’m doing something wrong… The input data is just pairs of sentences. I managed to get the training working, at least apparently. Here’s the code: model = BartForConditionalGeneration.from_pretrained('facebook/bart-base') model.train() from transformers import AdamW optimizer = AdamW(model.parameters(), lr=1e-5) for i_epoch in tqdm(range(3), desc="Epoch"): print("Epoch #", i_epoch) for i_batch, data in enumerate(tqdm(train_dataloader, desc="Training batches")): outputs = model(data['train_ids'], attention_mask=data['train_att'], decoder_input_ids=data['val_ids'], decoder_attention_mask=data['val_att'], labels=data['val_ids'] ) loss = outputs[0] if i_batch % 10 == 0: print("Batch", i_batch, " loss =", loss) loss.backward() optimizer.step() I guess the first question is if this looks correct? The data from the dataloader is populated with the input_ids and attention_mask results from tokenizer() on the training and validation sets. I wanted to write out the training myself instead of using Trainer for learning purposes. Looking at the loss alone, this kinda seems to work? Here’s the output: Epoch # 0 Training batches: 100% 90/90 [3:55:25<00:00, 156.96s/it] Batch 0 loss = tensor(14.6109, grad_fn=<NllLossBackward>) Batch 10 loss = tensor(11.6407, grad_fn=<NllLossBackward>) Batch 20 loss = tensor(10.6421, grad_fn=<NllLossBackward>) Batch 30 loss = tensor(9.4968, grad_fn=<NllLossBackward>) Batch 40 loss = tensor(8.4232, grad_fn=<NllLossBackward>) Batch 50 loss = tensor(6.9087, grad_fn=<NllLossBackward>) Batch 60 loss = tensor(5.8986, grad_fn=<NllLossBackward>) Batch 70 loss = tensor(5.3631, grad_fn=<NllLossBackward>) Batch 80 loss = tensor(4.9015, grad_fn=<NllLossBackward>) Epoch # 1 Training batches: 100% 90/90 [1:57:47<00:00, 78.53s/it] Batch 0 loss = tensor(4.5883, grad_fn=<NllLossBackward>) Batch 10 loss = tensor(4.1895, grad_fn=<NllLossBackward>) Batch 20 loss = tensor(3.7548, grad_fn=<NllLossBackward>) Batch 30 loss = tensor(3.4811, grad_fn=<NllLossBackward>) Batch 40 loss = tensor(3.2216, grad_fn=<NllLossBackward>) Batch 50 loss = tensor(2.9044, grad_fn=<NllLossBackward>) Batch 60 loss = tensor(2.3631, grad_fn=<NllLossBackward>) Batch 70 loss = tensor(2.1639, grad_fn=<NllLossBackward>) Batch 80 loss = tensor(1.8803, grad_fn=<NllLossBackward>) Epoch # 2 Training batches: 100% 90/90 [1:57:34<00:00, 78.39s/it] Batch 0 loss = tensor(1.6901, grad_fn=<NllLossBackward>) Batch 10 loss = tensor(1.7226, grad_fn=<NllLossBackward>) Batch 20 loss = tensor(1.2894, grad_fn=<NllLossBackward>) Batch 30 loss = tensor(0.9937, grad_fn=<NllLossBackward>) Batch 40 loss = tensor(0.9841, grad_fn=<NllLossBackward>) Batch 50 loss = tensor(0.9459, grad_fn=<NllLossBackward>) Batch 60 loss = tensor(0.6868, grad_fn=<NllLossBackward>) Batch 70 loss = tensor(0.6640, grad_fn=<NllLossBackward>) Batch 80 loss = tensor(0.5731, grad_fn=<NllLossBackward>) The loss is consistently going down over the course of the training, which I thought should be a basic sign that something is happening. But, my generate code generates repetitive gibberish instead! Here’s the generation code: test_input = "Yesterday they went to the park, and today they will go to the store." 
test_inputs = tokenizer([test_input], return_tensors='pt') summary_ids = model.generate( test_inputs['input_ids'], num_beams=12, temperature=1.0, num_return_sequences=10, repetition_penalty=1.0, do_sample=False ) for res in [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]: print(res) The output is this: For for for for for for for for for for for for for for for for for Yesterday,,,,,,,,,,,,,,,, The the the the the the the the the the the the the the the the the The thethe the the the the the the the the the the the the the the On on on on on on on on on on on on on on on on on For for for for for for for for for for forfor for for for for for The thethethe the the the the the the the the the the the the the Yesterday Yesterday Yesterday Yesterday Yesterday,,,,,,,,,,,, For for for for for for for for for for for for For for for for for The the the the the the the the the the store store store store store store store (In comparison, before fine-tuning, the output for this code is the same as the input) Granted, this is fine-tuned on a small toy dataset of 10k sentence pairs mainly to test if the code runs. But seeing the output be repeated tokens like this makes me hesitant to try a bigger dataset. Surely 10k is enough data that the output should at least not be complete junk? So I’m wondering now if I did something simple wrong, or if it’s actually a data problem and I just need to train on a fuller dataset (1m+ examples?) to see results.
Do you need to zero your gradients for BART? (I’ve not used Bart, but in training Bert I need to use model.zero_grad before passing each batch of data to the model). Does your data look similar to the data Bart was originally trained on? If it is totally different then your model could get worse before it gets better. What are you hoping it will learn from your new data?
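A sketch of the loop from the question with the gradient clearing added (model, train_dataloader and the data keys are as defined there):
from transformers import AdamW

optimizer = AdamW(model.parameters(), lr=1e-5)
model.train()
for data in train_dataloader:
    optimizer.zero_grad()                      # or model.zero_grad(): clear old gradients first
    outputs = model(
        data["train_ids"],
        attention_mask=data["train_att"],
        decoder_input_ids=data["val_ids"],
        decoder_attention_mask=data["val_att"],
        labels=data["val_ids"],
    )
    loss = outputs[0]
    loss.backward()
    optimizer.step()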
0
huggingface
Beginners
How does NER model learns from the way it is processed during training?
https://discuss.huggingface.co/t/how-does-ner-model-learns-from-the-way-it-is-processed-during-training/850
Hi guys, I am just getting started with NER using transformers. But I am getting some issues while understanding how the NER model is trained and how it is expected to perform during inference. During training, tokenization and labeling is done in such a way that for each word the label is assigned to the first token only and -100 (ignore_index) to all other tokens of that word. But during inference, it is expected to predict the same label for all the tokens. Training: https://github.com/huggingface/transformers/blob/6b4c617666fd26646d44d54f0c45dfe1332b12ca/examples/token-classification/utils_ner.py#L110-L117 7 Inference: https://huggingface.co/transformers/usage.html#named-entity-recognition Just trying to understand how does the model learns this way. Thanks Great ecosystem BTW
ping @stefan-it @vblagoje
0
huggingface
Beginners
Is there a way to prevent certain tokens from being masked during pretraining?
https://discuss.huggingface.co/t/is-there-a-way-to-prevent-certain-tokens-from-being-masked-during-pretraining/930
Hello, I am trying to pretrain various versions of BERT on a code corpus. I am using BPE tokenizer. The issue is that since newline characters are abundant in code they end up getting masked for prediction. This leads to the model predicting newlines often which is useless in code. Is there some way to prevent (the datacollator?) from masking certain tokens (in this case newlines/tab/spaces)? Or is there another solution to this? Since the corpus is huge relative to the hardware I have, it would save some expensive preprocessing of the dataset.
There is no mechanism implemented for this, so you should copy the code of the data collator you are using and adapt it a little bit to not mask your tokens.
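Besides copying and adapting the collator as suggested, a slightly lazier variant is to wrap the stock collator and undo the masking afterwards, using the fact that labels keep the original ids at masked positions. A sketch (protected_tokens would be however your tokenizer spells newline/tab, e.g. the output of tokenizer.tokenize("\n")):
import torch
from transformers import DataCollatorForLanguageModeling

class KeepTokensCollator:
    """Wrap the stock MLM collator and undo any masking of protected tokens."""
    def __init__(self, tokenizer, protected_tokens, mlm_probability=0.15):
        self.inner = DataCollatorForLanguageModeling(
            tokenizer=tokenizer, mlm=True, mlm_probability=mlm_probability
        )
        self.protected_ids = tokenizer.convert_tokens_to_ids(protected_tokens)

    def __call__(self, examples):
        batch = self.inner(examples)
        input_ids, labels = batch["input_ids"], batch["labels"]
        protected = torch.zeros_like(labels, dtype=torch.bool)
        for token_id in self.protected_ids:
            # masked positions keep their original id in `labels`
            protected |= labels == token_id
        input_ids[protected] = labels[protected]   # restore the original token
        labels[protected] = -100                   # and exclude it from the loss
        return batch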
0
huggingface
Beginners
Working with named entities with bert
https://discuss.huggingface.co/t/working-with-named-entities-with-bert/919
I am working on a project where I have some named entities and I know their locations in the text. I want to use separate embedding vectors for those names. How do I modify the tokenizer and how do I add learnable vectors to the embedding matrix? Any suggestion or guide will be highly appreciated. Edit: After looking into some documentations I found(here) that tokenizer.add_special_tokens is the thing I was looking for. However after model.resize_token_embeddings(len(tokenizer)) is there any way to make only these new embeddings trainable?
[I am not an expert] I think the answer is No (unless you want to write some very detailed code). Freezing is done on a per-layer basis, so either all the embeddings are trainable or all of them are not. I’m not sure it would even make sense to try to write code to alter only your new embeddings. The whole Bert only works in context. Are you sure you have enough data to train Bert to make useful embeddings of your new names?
0
huggingface
Beginners
Output_attention = True after downloading a model
https://discuss.huggingface.co/t/output-attention-true-after-downloading-a-model/907
Hi, I am new to Hugging Face. I have downloaded DistilBERT using model = DistilBertModel.from_pretrained("distilbert-base-uncased") and I did not modify the config. Now I want to see the output attentions. Is there any way I can see the attentions without downloading a new model?
Hey @pritam. You can pass output_attentions=True to the forward method and it'll return the attentions. The attentions are the last element of the returned tuple, or, if you are on the master branch, you can also pass return_dict=True and access .attentions on the returned object:
output = model(input_ids, output_attentions=True, return_dict=True)
output.attentions  # this will give the attentions
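As a quick sanity check (assuming distilbert-base-uncased and a transformers version that supports return_dict; on older versions take the last element of the returned tuple instead):
from transformers import DistilBertTokenizer, DistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")
output = model(**inputs, output_attentions=True, return_dict=True)

print(len(output.attentions))       # one tensor per layer: 6 for distilbert-base-uncased
print(output.attentions[0].shape)   # (batch_size, num_heads, seq_len, seq_len)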
0
huggingface
Beginners
Torch Tensor and Input Conflicting
https://discuss.huggingface.co/t/torch-tensor-and-input-conflicting/881
Due to the code “torch.tensor”, I am getting the error “Tensor object is not callable” when I add “input”. Does anyone know how I can fix this?
import torch
from torch.nn import functional as F
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

text0 = "In order to"
text = tokenizer.encode("In order to")
input, past = torch.tensor([text]), None
logits, past = model(input, past=past)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(5)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()

for i in range(5):
    f = ('Generated {}: {}'.format(i, best_words[i]))
    print(f)

option = input("Pick a Option:")
z = text0.append(option)
print(z)
Hey, you can ask the tokenizer to return a tensor instead of creating it manually. Pass return_tensors="pt" and it'll return a tensor instead of a list.
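For completeness: the “Tensor object is not callable” message comes from the line input, past = torch.tensor([text]), None, which rebinds the name input and shadows Python's built-in input() used later. Here is a sketch of the relevant part with return_tensors="pt" and a different variable name (note also that strings have no .append, so text0.append(option) would fail next).
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# encode() can return a tensor directly, and using a name other than `input`
# keeps the built-in input() callable further down
input_ids = tokenizer.encode("In order to", return_tensors='pt')
outputs = model(input_ids)
logits = outputs[0][0, -1]

best_logits, best_indices = logits.topk(5)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
for i, word in enumerate(best_words):
    print('Generated {}: {}'.format(i, word))

option = input("Pick an option: ")   # the built-in input() still works here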
0
huggingface
Beginners
Transformer installation from source
https://discuss.huggingface.co/t/transformer-installation-from-sourrce/886
Hi, I am trying to install the transformers library from source with these commands in my Jupyter notebook:
!git clone https://github.com/huggingface/transformers
!cd transformers
!pip install .
but I keep getting this error: ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. I need some help fixing this, thank you for your time.
If you are doing this in a Jupyter notebook or Colab, you should use %cd instead of !cd (each ! command runs in its own sub-shell, so the !cd does not persist and pip install . runs from the original directory), or skip the cd entirely and install from the cloned path:
!git clone https://github.com/huggingface/transformers
!pip install -U ./transformers
0
huggingface
Beginners
How much cleaning for transformers?
https://discuss.huggingface.co/t/how-much-cleaning-for-transformers/852
I know that BERT has tokens for numbers, punctuation, and special characters (e.g. #@!%). If I'm training a language model, should I:
1. Keep numbers, punctuation, and special characters
2. Remove only the aforementioned characters, leaving the rest of the sentence untouched
3. Remove the whole sentence if it contains any of those
hi nbroad, [I am not an expert] I think it depends on the specifics of your data. I have a similar issue with my data. For example, some of my texts include repeated # characters or “l@@k”, designed to catch a viewer’s eye. I decided to delete this kind of thing, because it isn’t really language. It is likely that when Bert was being trained it didn’t often see them. What it tells me about the text is that the writer of the text was trying to catch some viewers’ attention. It doesn’t really tell me (or Bert) much else. It’s a bit tricky, because some special characters might have some meaning in some contexts, for example “p/x” for “part exchange” might be frequent enough to have some meaning to Bert. As a compromise, when I cleaned my data, I deleted all occurrences of | # * ] [ \ . Then I kept single occurrences of ! ( ) - ! ? , £ / +, but deleted any repeated occurrences. In my case, it wasn’t necessary to remove the whole sentence if it contained “######”, because the text that remained after removing the offending “######” still made a meaningful sentence. I haven’t yet decided what to do about numbers. It might be that Bert is able to make some sense of values such as 1984 or £2000, even though it has to tokenize them as 1, 9, 8, 4 and £, 2, 0, 0, 0. One thing I have recently realised is that my data include numbers with commas in (eg £2,000), and I’m pretty sure that would get a better representation if I removed the commas (ie cleaned it to £2000). I don’t think it would be right to remove numbers altogether, but I’m starting to wonder if it would be useful to replace numbers with descriptors, such as " a few / lots / hundreds / thousands / millions / billions / recent date / historical date ". In some cases, it might be necessary to extract the actual numbers and include them as separate features. So far as I know, the data that Bert was trained on wasn’t purged of special characters. I think it is likely that Bert will do a good job on data that is similar in style to the data it was trained on (books and wikipedia articles). As usual, if in doubt: try it out.
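One possible way to express the cleaning policy described above in code; the exact character sets are only illustrative and would need tuning for your own data.
import re

def clean_text(text):
    text = re.sub(r"[|#*\]\[\\]", " ", text)           # drop characters Bert rarely saw
    text = re.sub(r"([!()\-?,£/+])\1+", r"\1", text)   # keep a single occurrence of repeats
    text = re.sub(r"(?<=\d),(?=\d)", "", text)         # £2,000 -> £2000
    text = re.sub(r"\s+", " ", text).strip()           # tidy up whitespace
    return text

print(clean_text("L@@K!!! £2,000 part exchange ###### (no offers)"))
# 'L@@K! £2000 part exchange (no offers)'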
0
huggingface
Beginners
Loss.backward() problems with require_grad
https://discuss.huggingface.co/t/loss-backward-problems-with-require-grad/878
“element 0 of tensors does not require grad and does not have a grad_fn” Is it likely to be a problem with the model, the loss function, or the shape of the data tensors? I have a dataset of texts, each with an associated real value. I want to fine-tune a bert model to these, and then visualize the attention weights (for any specific text) and how they are altered by the fine-tuning. I have defined a model based on transformers BertModel, then passing the pooled_output (=CLS token) through two more dense layers, the first with ReLU and the second with Sigmoid. I’ve been following abhimishra https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb and ChrisMcCormick https://mccormickml.com/2019/07/22/BERT-fine-tuning/#4-train-our-classification-model , but I’m stuck with the .backward() step. I’ve tried calculating the loss as part of the forward pass (within the model class definition) or outside of that, but I get the same error: “element 0 of tensors does not require grad and does not have a grad_fn”.
Model:
class ATBertClass(torch.nn.Module):
    def __init__(self):
        super(ATBertClass, self).__init__()
        self.L1bb = transformers.BertModel.from_pretrained('bert-base-uncased', output_attentions=True)
        self.L2Lin = torch.nn.Linear(768, 64)
        self.L3Rel = torch.nn.ReLU()
        self.L4Lin = torch.nn.Linear(64, 1)
        self.L5Sig = torch.nn.Sigmoid()

    def forward(self, input_ids, attention_mask, labels):
        _, output_1, attns = self.L1bb(input_ids=input_ids, attention_mask=attention_mask)
        output_2 = self.L2Lin(output_1)
        output_3 = self.L3Rel(output_2)
        output_4 = self.L4Lin(output_3)
        output_5 = self.L5Sig(output_4)
        return output_5, attns
Is there anything obviously wrong with the model definition? (I’m not sure about the torch.nn.ReLU and Sigmoid layers). Can anyone advise?
I’ve found the problem: I needed to set dtype=torch.float on the target tensor. (Any advice on the model definition would still be welcome!)
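For anyone hitting the same thing, a sketch of the detail the fix refers to; batch_labels, input_ids, attention_mask and model are placeholders from the setup above, and MSE is just one loss that matches a sigmoid output over real-valued targets.
import torch

# targets for regression-style losses such as MSE/BCE need to be float tensors
labels = torch.tensor(batch_labels, dtype=torch.float).unsqueeze(1)   # shape (batch, 1)

preds, attns = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
loss = torch.nn.functional.mse_loss(preds, labels)
loss.backward()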
0
huggingface
Beginners
Fine tune a transformer model for POS tagging
https://discuss.huggingface.co/t/fine-tune-a-transformer-model-for-pos-tagging/247
Can the example scripts for token classification be used for POS tagging? The README for token classification currently shows benchmarks for NER datasets only. Are the steps (preprocessing and evaluation) for POS tagging similar? Also, are there any results for POS tagging using any transformer architecture? Thank you
Hi @kushalj001, yes, it is possible! Just use the same data format (one token, tag pair per line) as is shown for the NER example. However, what is currently missing is "accuracy" as a metric, because for PoS tagging you normally report accuracy (instead of F1-score). I'm working on a nice readme extension for PoS tagging; it may be ready by next week.
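Until that lands: the training file is one token and its tag per line (for example "Dogs NOUN"), with a blank line between sentences, just as in the NER example. A rough token-level accuracy can be computed with something like the sketch below, where the shapes follow the token-classification example's predict output and -100 marks the ignored sub-word positions.
import numpy as np

def token_accuracy(predictions, label_ids):
    # predictions: (n_examples, seq_len, n_tags) logits; label_ids: (n_examples, seq_len)
    preds = np.argmax(predictions, axis=2)
    mask = label_ids != -100   # skip positions labelled with ignore_index
    return (preds[mask] == label_ids[mask]).mean()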
0
huggingface
Beginners
[source code] why do many (not all) transformer classes have their docstring starting with r"""
https://discuss.huggingface.co/t/source-code-why-do-many-not-all-transformer-classes-have-their-docstring-starting-with-r/819
Why do many (not all) transformer classes have their docstring starting with r"""?
grep -r -A 2 'class ' src/transformers
Here is just a sample of both:
src/transformers/modeling_tf_xlnet.py:# class TFXLNetForQuestionAnswering(TFXLNetPreTrainedModel):
src/transformers/modeling_tf_xlnet.py-# r"""
src/transformers/tokenization_distilbert.py:class DistilBertTokenizer(BertTokenizer):
src/transformers/tokenization_distilbert.py- r"""
src/transformers/tokenization_gpt2.py:class GPT2TokenizerFast(PreTrainedTokenizerFast):
src/transformers/tokenization_gpt2.py- """
src/transformers/tokenization_auto.py:class AutoTokenizer:
src/transformers/tokenization_auto.py- r""":class:`~transformers.AutoTokenizer` is a generic tokenizer class
Is there a special purpose for using a regex string? There doesn't seem to be any pattern: it appears in some classes' docstrings but not others. Thank you.
It’s r for raw, not regex. I'm not sure why they're used exactly; it may be to make sure some Sphinx markup doesn't get interpreted weirdly. I know I added r in the new docstrings I created.
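A small, made-up illustration of what the raw prefix protects against:
def decay(x):
    r"""Compute :math:`e^{-\lambda x}` (toy example).

    Without the ``r`` prefix, Python would treat backslash sequences in the
    Sphinx/LaTeX markup as string escapes, e.g. ``\n`` would become a real
    newline and ``\lambda`` would trigger an invalid-escape warning, so raw
    docstrings keep the documentation source intact.
    """
    ...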
0
huggingface
Beginners
What is transformers-cli
https://discuss.huggingface.co/t/what-is-transformers-cli/789
I want to fine-tune a bert model in Tensorflow (Keras) and then visualise its attention weights in PyTorch (bertviz). This documentation page https://huggingface.co/transformers/converting_tensorflow_models.html suggests that it might be possible to convert a Tensorflow model to a PyTorch model, but I don't understand how to do it. Firstly, what is transformers-cli? Is it a module? Do I need to import it? Where can I find out more about it? I'm using Colab, which includes transformers v3.0.2 by default. Secondly, when I click on the Examples links (run_bert_extract_features.py, run_bert_classifier.py and run_bert_squad.py) I get a GitHub 404 page not found. Should they be there? Do I need to search within GitHub specifically? Thirdly, can the process be done within a Colab Jupyter notebook, or would I have to get a Colab command line somehow? (Any hints on how would be appreciated!)
Which Tensorflow model are you starting from? If it's one from the library you probably don't need to convert it. If it's an original checkpoint from another repository (typically from the original authors' repository) then you should use the conversion command line utility. transformers-cli is a command you can run from the command line once you have installed the transformers package.
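For the original-checkpoint case, the documentation page linked in the question shows an invocation along these lines; in a Colab notebook it can be run with a ! prefix. The paths are placeholders, and it is worth checking transformers-cli convert --help for the flags your installed version accepts.
# converting an original Google BERT TF checkpoint to a PyTorch state dict
!transformers-cli convert --model_type bert \
    --tf_checkpoint /path/to/uncased_L-12_H-768_A-12/bert_model.ckpt \
    --config /path/to/uncased_L-12_H-768_A-12/bert_config.json \
    --pytorch_dump_output /path/to/uncased_L-12_H-768_A-12/pytorch_model.bin

# if the model was fine-tuned with the library's own TF/Keras classes, no CLI is needed:
# tf_model.save_pretrained("my_model_dir")                       # writes tf_model.h5 + config.json
# pt_model = BertModel.from_pretrained("my_model_dir", from_tf=True)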
0
huggingface
Beginners
BERT model size (transformer block number)
https://discuss.huggingface.co/t/bert-model-size-transformer-block-number/620
Hi, I have some general questions regarding BERT and distillation. I want to compare the performance of BERT at different model sizes (numbers of transformer blocks).
1. Is it necessary to do distillation? If I just train a BERT with 6 layers without distillation, will the performance look bad?
2. Do I have to pre-train from scratch every time I change the number of layers? Is it possible to just remove some layers from an existing pre-trained model and finetune on tasks?
3. Why does BERT have 12 blocks, not 11 or 13, etc.? I couldn't find any explanation.
Thanks, ZLK
Hi, have you seen this: https://github.com/google-research/bert It describes and provides several smaller Bert models, including evaluations of their performance. By the way, I am not an expert.
1. I think the results would be OK, but the 6-layer Bert would be slower to train and use than the 6-layer DistilBert.
2. I expect it would be possible to just remove some layers and then finetune. After all, full training starts from randomly initialized weights, so I don't suppose your cut-down model would actually be worse than that. On the other hand, I don't know if it would be better. Certainly, if you do try it, you would want to cut off the last layers, not the first ones. If you look at deep convolutional networks for image recognition, the first layers detect simple patterns, and the later layers build those simple patterns into more complicated ones.
3. I don't know for sure why Devlin et al chose 12 or 24 layers, but I assume they tried lots of different configurations and 12 or 24 were the best compromises. They might also have wanted to create models that were roughly as complicated (expensive to run) as some of the previous state-of-the-art models, so that they could compare like for like in their evaluations. It is also likely that even numbers are more efficient because of the way the hardware (GPU or TPU) is configured. Notice that all the newly released small Bert models have even numbers of layers.
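If you want to experiment with pruning an existing checkpoint rather than pre-training from scratch, a rough sketch of dropping the last encoder blocks looks like this. It is not an official API, and whether a truncated pre-trained model beats a distilled model of the same depth is an empirical question.
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

keep = 6   # keep the first 6 transformer blocks, drop the last 6
model.encoder.layer = torch.nn.ModuleList(model.encoder.layer[:keep])
model.config.num_hidden_layers = keep
# fine-tune as usual from here; the remaining layers keep their pre-trained weights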
0
huggingface
Beginners
How to fix the non-idempotency issue for the transformers test suite (codecov issue)
https://discuss.huggingface.co/t/how-to-fix-the-non-idempotency-issue-for-the-transformers-test-suite-codecov-issue/723
Currently, codecov reports pretty random information on PRs most of the time. If you look at this analysis by Thomas Hu, he suggests that the transformers test suite is not idempotent, i.e. each test run produces a coverage report that is different from the previous run, even if there were no changes in the code. (Idempotence: definition) There are several examples already in that ticket, but here is a recent "outrageous" report where a change in a pure doc file, with no code change, gets reported as a 2.17% decrease in coverage. If you look at the reported percentages in the files, none of them make sense. Beyond that, the coverage reported from running the tests varies wildly from run to run. (Screenshot of the codecov report omitted.) What I observed is that most of the time it's the *_tf_* modules that are listed at the top of the impacted files list, so I was thinking it perhaps has something to do specifically with TF, but that's not always the case. If you have experience with this kind of situation, please kindly share your insights and how this can be fixed. Thank you.
Well, after extensive testing I couldn't find any symptoms of non-idempotency. That doesn't mean it doesn't exist, I just couldn't reproduce it on my machine. There was one sub-issue of codecov generating an invalid report when it fails to find a code coverage report for the "base" it checks against. When this happens it goes looking for the nearest hash with a coverage report, which often leads to a report that is not representative of the true impact of the proposed PR, since it is no longer comparing the proposed code changes to the base against which they'd be applied. A fix that prevents generating invalid reports when the base is missing has been applied: "fix incorrect codecov reports" (github.com/huggingface/transformers, stas00:patch-1 into huggingface:master, opened Aug 18, 2020, +4 -0). But looking at recent PRs with no code change, the problem is still there. We still get coverage changes reported when there are none, e.g. https://codecov.io/gh/huggingface/transformers/pull/6650/changes
0
huggingface
Beginners
Fine Tuning IMDb tutorial - Unable to reproduce and adapt
https://discuss.huggingface.co/t/fine-tuning-imdb-tutorial-unable-to-reproduce-and-adapt/778
Hi! I am trying to reproduce this Notebook: https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing 29. However, there is no way to make it work in Colab since I am getting an OOM error all the time (with both GPU and None environments). I tried to download and run it on my laptop but still no way to make it work. Despite this, I wanted to adapt that tutorial in order to use another pre-trained model and another local CSV dataset. This is my code: from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments from nlp import load_dataset import torch import numpy as np from sklearn.metrics import accuracy_score, precision_recall_fscore_support model_name = "dccuchile/bert-base-spanish-wwm-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) train_dataset, validation_dataset = load_dataset( "csv", delimiter="\t", data_files={"train": "spanish_train.csv", "validation": "spanish_val.csv"}, split=["train", "validation"], ) train_dataset.remove_column_("id_str") train_dataset.rename_column_("TWEET", "tweet") train_dataset.rename_column_("LABEL", "label") validation_dataset.remove_column_("id_str") validation_dataset.rename_column_("TWEET", "tweet") validation_dataset.rename_column_("LABEL", "label") def tokenize(batch): return tokenizer(batch["tweet"], padding=True, truncation=True, max_length=10000) train_dataset = train_dataset.map(tokenize, batched=True, batch_size=10) validation_dataset = validation_dataset.map(tokenize, batched=True, batch_size=10) train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'label']) validation_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'label']) def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary') acc = accuracy_score(labels, preds) return { 'accuracy': acc, 'f1': f1, 'precision': precision, 'recall': recall } training_args = TrainingArguments( output_dir='./results', num_train_epochs=1, per_device_train_batch_size=16, per_device_eval_batch_size=64, warmup_steps=500, weight_decay=0.01, evaluate_during_training=True, logging_dir='./logs', ) trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=validation_dataset ) trainer.train() In this last line I am getting this error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-93-3435b262f1ae> in <module>() ----> 1 trainer.train() 10 frames /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path) 490 self._past = None 491 --> 492 for step, inputs in enumerate(epoch_iterator): 493 494 # Skip past any already trained steps if resuming training /usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs) 213 def __iter__(self, *args, **kwargs): 214 try: --> 215 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 216 # return super(tqdm...) will not catch exception 217 yield obj /usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self) 1102 fp_write=getattr(self.fp, 'write', sys.stderr.write)) 1103 -> 1104 for obj in iterable: 1105 yield obj 1106 # Update and possibly print the progressbar. 
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self) 361 362 def __next__(self): --> 363 data = self._next_data() 364 self._num_yielded += 1 365 if self._dataset_kind == _DatasetKind.Iterable and \ /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in _next_data(self) 401 def _next_data(self): 402 index = self._next_index() # may raise StopIteration --> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 404 if self._pin_memory: 405 data = _utils.pin_memory.pin_memory(data) /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /usr/local/lib/python3.6/dist-packages/nlp/arrow_dataset.py in __getitem__(self, key) 717 format_columns=self._format_columns, 718 output_all_columns=self._output_all_columns, --> 719 format_kwargs=self._format_kwargs, 720 ) 721 /usr/local/lib/python3.6/dist-packages/nlp/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs) 705 format_columns=format_columns, 706 output_all_columns=output_all_columns, --> 707 format_kwargs=format_kwargs, 708 ) 709 return outputs /usr/local/lib/python3.6/dist-packages/nlp/arrow_dataset.py in _convert_outputs(self, outputs, format_type, format_columns, output_all_columns, format_kwargs) 617 continue 618 if format_columns is None or k in format_columns: --> 619 v = map_nested(command, v, **map_nested_kwargs) 620 output_dict[k] = v 621 return output_dict /usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy) 189 return np.array(mapped) 190 # Singleton --> 191 return function(data_struct) 192 193 TypeError: new(): invalid data type 'str' Does anyone know what am I doing wrong?
max_length=10000 seems wrong. IIRC the tokenizers will pad up to the max length, so your batches will be of size bs x 10000. That will cause the OOM.
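A corrected version of the tokenize function might look like this; 128 is an arbitrary cap that is plenty for tweets.
def tokenize(batch):
    # padding="max_length" gives fixed-size batches of bs x 128;
    # keep padding=True instead to pad only to the longest sequence in the batch
    return tokenizer(batch["tweet"], padding="max_length", truncation=True, max_length=128)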
0
huggingface
Beginners
Missing keys “model.embeddings.position_ids” when loading model using state_dict
https://discuss.huggingface.co/t/missing-keys-model-embeddings-position-ids-when-loading-model-using-state-dict/744
I saved the model like this:
model_state_dict = model.module.state_dict()
torch.save({'model_state_dict': model_state_dict}, osp.join(save_dir, 'best.ckpt'))
Now I try to load the model like this:
model_path = "./models/best.ckpt"
ckpt = torch.load(model_path)
model.load_state_dict(ckpt['model_state_dict'])
and it raises the error above. There is a "model.embeddings.position_ids" key in the model, but not in the saved ckpt file. Is there any way to skip loading position_ids?
Hi @jyliu, is there any specific reason for not using .save_pretrained and .from_pretrained?
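To answer the literal question (skipping the missing key), strict=False does that; the commented alternative uses the library helpers instead. BertModel below is just a placeholder for whatever class the model actually is.
# position_ids is a registered arange buffer, so reinitialising it is harmless
incompatible = model.load_state_dict(ckpt["model_state_dict"], strict=False)
print(incompatible.missing_keys)   # expect only ...embeddings.position_ids

# alternatively, save/load with the library helpers next time:
# model.module.save_pretrained(save_dir)        # .module because of DataParallel
# model = BertModel.from_pretrained(save_dir)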
0
huggingface
Beginners
Fine tune Masked Language Model on custom dataset
https://discuss.huggingface.co/t/fine-tune-masked-language-model-on-custom-dataset/747
Hi, I am new to Bert. I want to fine-tune on a plain text corpus (only text, without any labels or other information) to get hidden-state layers for specific word/document embeddings. I think the suitable model for my task is the masked language model; however, the only example demo I found is https://huggingface.co/blog/how-to-train, which is not very general or clear. The original examples are here, but no MLM model is covered: https://huggingface.co/transformers/master/custom_datasets.html Does anyone have a clear example for this case? To be specific, I don't know how to pass the true labels in the MLM task for the loss and gradient update. Also, when using the Trainer, where should I insert the -mlm option? Thank you in advance
Hi @smalltoken, what is the issue with https://huggingface.co/blog/how-to-train? This colab should help you. It walks you through how to:
1. train a tokenizer from scratch
2. create a RobertaModel using the config
3. use the DataCollatorForLanguageModeling, which handles the masking
4. train using Trainer
The sketch below shows how the same pieces fit together when fine-tuning an existing checkpoint rather than training from scratch.
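A minimal sketch of that recipe for fine-tuning an existing checkpoint on a plain-text file; the file name, checkpoint and hyperparameters are placeholders. The collator builds the masked inputs and the matching labels for you, and setting mlm=True on it plays the role of the script-level -mlm flag when you use the Trainer directly.
from transformers import (BertTokenizer, BertForMaskedLM, DataCollatorForLanguageModeling,
                          LineByLineTextDataset, Trainer, TrainingArguments)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# corpus.txt: one document/sentence per line, no labels needed
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="corpus.txt", block_size=128)

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = TrainingArguments(output_dir="./bert-mlm-finetuned",
                                  num_train_epochs=1,
                                  per_device_train_batch_size=16,
                                  save_steps=10_000)

trainer = Trainer(model=model, args=training_args,
                  data_collator=data_collator, train_dataset=dataset)
trainer.train()
trainer.save_model("./bert-mlm-finetuned")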
0