docs: stringclasses (4 values)
category: stringlengths (3 to 31)
thread: stringlengths (7 to 255)
href: stringlengths (42 to 278)
question: stringlengths (0 to 30.3k)
context: stringlengths (0 to 24.9k)
marked: int64 (0 to 1)
huggingface
Beginners
Speechbrain for Spanish
https://discuss.huggingface.co/t/speechbrain-for-spanish/10046
I couldn't find any Spanish model like "asr-crdnn-commonvoice-fr" (speechbrain/asr-crdnn-commonvoice-fr · Hugging Face). How can I improve/fine-tune a SpeechBrain pre-trained model with additional speech data in another language, in particular Spanish?
You can search for other ASR models in Models - Hugging Face, but as you mentioned there are no SpeechBrain models for Spanish at the moment. For SpeechBrain-specific questions, their Discourse might be a place where you get better answers: https://speechbrain.discourse.group/ They also have an ASR-from-scratch Colab notebook on Google Colab.
0
huggingface
Beginners
Sentence similarity
https://discuss.huggingface.co/t/sentence-similarity/7496
Hi all, I have a question. I have a dataset containing questions and answers from a specific domain. My goal is to find the X most similar questions to a query. For example: user: "What is python?" dataset questions: ["What is python?", "What does python means?", "Is it python?", "Is it a python snake?", "Is it a python?"] I tried encoding the questions to embeddings and calculating the cosine similarity, but the problem is that it gives me a high similarity score for "Is it python?" for the query "What is python?", which is clearly not the same question meaning, while "What does python means?" gets a very low score compared to "Is it python?". Any suggestions on how I can overcome this problem? Maybe new approaches…
Hi, I would suggest trying 3-4 models from the Sentence Similarity task filter. There is an easy way to do it: use the Accelerated Inference API for each model from a Colab notebook. It may help you see whether any of them really gives a high weight to the "What does python means?" question from your example.
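A minimal sketch (model choice and library are illustrative, not from the thread) of scoring the example questions against the query with a sentence-similarity model and cosine similarity, so several models from the task filter can be compared quickly:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-MiniLM-L6-v2")
query = "What is python?"
candidates = ["What is python?", "What does python means?", "Is it python?",
              "Is it a python snake?", "Is it a python?"]

# Encode once, then rank candidates by cosine similarity to the query
query_emb = model.encode(query, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(query_emb, cand_emb)[0]

for cand, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {cand}")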
0
huggingface
Beginners
ValueError: Expected input batch_size (16) to match target batch_size (64)
https://discuss.huggingface.co/t/valueerror-expected-input-batch-size-16-to-match-target-batch-size-64/1569
I've modeled my training script on the information in the fine-tuning with custom datasets documentation (https://huggingface.co/transformers/custom_datasets.html). I have both a custom dataset and a custom model (I used the run_language_modeling.py script to pretrain the roberta-base model with our raw texts). When I run trainer.train() I get the error ValueError: Expected input batch_size (16) to match target batch_size (64) when the model is computing the loss in a training_step. I don't know where target batch_size is being set. The input batch_size matches the value I have for per_device_train_batch_size. Does anyone have an idea?
Tried this using roberta-base as the model as well, and get the same error.
0
huggingface
Beginners
What do we mean by POS or NEG in sentiment analysis?
https://discuss.huggingface.co/t/what-do-we-mean-by-pos-or-neg-in-sentiment-analysis/10001
In sentiment analysis, when we say that a statement is positive or negative, what do we mean by these labels? For example, someone says "iPhone is a good device"; this is a positive sentence from Apple's point of view, but a negative one from Samsung's point of view.
Where did you get the Samsung example from? By positive and negative we mean the sentiment that is attached to an utterance: is it positive (good) or negative (bad)? In both your examples, with Apple and Samsung, the sentence would be positive.
0
huggingface
Beginners
Additional random tqdm progress bars while Training
https://discuss.huggingface.co/t/additional-random-tqdm-progress-bars-while-training/10004
Hi, suddenly I started getting additional progress bars while training. Here is the snapshot. I realized that I am getting train_batch_size (8 in this case) bars between every training-step update of the main progress bar. It only started today. All these additional print statements are drastically slowing down the training and the notebook hangs. Any idea how to fix this? I have disabled tqdm in the Trainer, but only the main tqdm bar gets disabled, not these 8 additional bars between each step. Thanks.
Is this on TPU?
0
huggingface
Beginners
How to compare two corpus?
https://discuss.huggingface.co/t/how-to-compare-two-corpus/9138
Hi, I want to compare two corpora and extract the similar sentences present in both. What is the best way to do this?
For the same task I used sentence-similarity models, like this one on huggingface.co (Models - Hugging Face). Then I would suggest that you try the Accelerated Inference API as the easiest way to test whether it helps you.
0
huggingface
Beginners
Different results from `model.generate` depending on batch size?
https://discuss.huggingface.co/t/different-results-from-model-generate-depending-on-batch-size/9992
I seem to be getting very different results from model.generate for Question Generation with ProphetNet depending on how many questions I’m generating at once. from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/prophetnet-large-uncased-squad-qg') tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased-squad-qg') fact1 = "Bill Gates [SEP] Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975." fact2 = "the New Right [SEP] The late 1980s and early 1990s saw the collapse of most of those socialist states that had professed a Marxist–Leninist ideology. In the late 1970s and early 1980s, the emergence of the New Right and neoliberal capitalism as the dominant ideological trends in Western politics championed by United States president Ronald Reagan and British prime minister Margaret Thatcher led the West to take a more aggressive stand towards the Soviet Union and its Leninist allies. Meanwhile, the reformist Mikhael Gorbachev became General Secretary of the Communist Party of the Soviet Union in March 1985 and sought to abandon Leninist models of development towards social democracy. Ultimately, Gorbachev's reforms, coupled with rising levels of popular ethnic nationalism, led to the dissolution of the Soviet Union in late 1991 into a series of constituent nations, all of which abandoned Marxist–Leninist models for socialism, with most converting to capitalist economies." fact3 = """Paul Lafarguel [SEP] Engels did not support the use of the term Marxism to describe either Marx's or his own views.:12 He claimed that the term was being abusively used as a rhetorical qualifier by those attempting to cast themselves as real followers of Marx while casting others in different terms such as Lassallians.:12 In 1882, Engels claimed that Marx had criticized self-proclaimed Marxist Paul Lafargue by saying that if Lafargue's views were considered Marxist, then "one thing is certain and that is that I am not a Marxist.":12""" inputs = tokenizer([fact1, fact2, fact3], padding=True, truncation=False, return_tensors="pt") """ ['what is one example of a person who founded a company?', 'the collapse of the soviet union led to the rise of which political party?', 'who was a self - proclaimed marxist?'] """ # inputs = tokenizer([fact1, fact2], padding=True, truncation=False, return_tensors="pt") """ ['???', 'the collapse of the soviet union in the late 1980s saw the collapse of what political party?'] """ # inputs = tokenizer([fact1], padding=True, truncation=False, return_tensors="pt") """ ['along with paul allen, who founded microsoft?'] """ # inputs = tokenizer([fact2], padding=True, truncation=False, return_tensors="pt") """ ['along with neoliberal capitalism, what political movement emerged in the late 1970s and early 1980s?'] """ # inputs = tokenizer([fact3], padding=True, truncation=False, return_tensors="pt") """ ['who did marx criticize in 1882?'] """ # Generate Summary question_ids = model.generate(inputs['input_ids'], num_beams=5, early_stopping=True) tokenizer.batch_decode(question_ids, skip_special_tokens=True) I commented out the inputs = lines and showed the corresponding outputs in those cases. I don’t understand what could be causing this. In particular, the results seem best generating one at a time.
Found out about attention_mask, but passing it makes no difference: question_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_beams=5, early_stopping=True)
0
huggingface
Beginners
Missing vocab in gpt2 model?
https://discuss.huggingface.co/t/missing-vocab-in-gpt2-model/8491
Hi there! I’m new to this forum so I hope I’m posting this in the right place… I am new to using gpt2/HuggingFace library but am trying to figure out how to use it for my purposes. I am currently trying to compare the probability of prediction tokens from GPT2 to actual tokens in an excerpt (Using a random book for now). My problem is, sometimes this token doesn’t exist in the vocab list, so a probability is not generated. What could I do to overcome this? An example would be ‘clocks’ - which I’m thinking maybe I’ll just have to go with the lemmatized word, but also ‘striking’ which cannot be further lemmatized, but it’s not in the vocab? Many thanks! Rain
I'm a beginner as well, but from what I have seen, you would actually encode this word and, since your tokenizer doesn't have it as a single entry, it would return a list of sub-tokens that correspond to that word. Then you would just average the probabilities of those sub-tokens.
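A minimal sketch (not from the thread; the context sentence and word are made up) of how GPT-2's BPE tokenizer splits a word that is not a single vocabulary entry into sub-tokens, and of reading off the next-token probability of the first sub-token as an approximation:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

word = " striking"  # the leading space matters for GPT-2's BPE
sub_ids = tokenizer.encode(word)
print(tokenizer.convert_ids_to_tokens(sub_ids))  # one or several sub-tokens

context = "The clock kept"
input_ids = tokenizer.encode(context, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(input_ids).logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)

# Probability of the word's first sub-token following the context; for multi-piece
# words the remaining pieces would need chained (or averaged) probabilities.
print(probs[sub_ids[0]].item())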
0
huggingface
Beginners
Is Eval and Validation same in Trainer API?
https://discuss.huggingface.co/t/is-eval-and-validation-same-in-trainer-api/9948
Hi, I am a bit confused if the eval dataset parameter is used during the training. #Trainer itself. trainer = Trainer( model, args, train_dataset=tokenized_datasets_train, eval_dataset=tokenized_datasets_val, tokenizer=tokenizer, compute_metrics=compute_metrics, data_collator = data_collator_ ) is the eval_dataset only used when we do trainer.evaluate() ?
Yes, it’s the default dataset used for that method (which will be used if you pass an eval_strategy to evaluate every epoch or n steps).
0
huggingface
Beginners
Is there a DataCollator for Question Answering?
https://discuss.huggingface.co/t/is-there-a-datacollator-for-question-answering/9915
Hi there, I can find several Data Collators, for example one for Masked Language modelling ( DataCollatorForLanguageModeling ). That way we have been able to pretrain our custom language model. Now we would like to train on a Question Answering downstream task using the Squad v2 dataset. However, we can’t find a DataCollator class related to Question Answering. What is the correct way to train a Question Answering model for the Squad V2 dataset using huggingface? Thanks in advance for any hints and pointers!
You should have a look at the official question answering examples.
0
huggingface
Beginners
RuntimeError: CUDA out of memory. Tried to allocate 1.91 GiB (GPU 0; 15.78 GiB total capacity; 12.36 GiB already allocated; 302.75 MiB free; 14.16 GiB reserved in total by PyTorch)
https://discuss.huggingface.co/t/runtimeerror-cuda-out-of-memory-tried-to-allocate-1-91-gib-gpu-0-15-78-gib-total-capacity-12-36-gib-already-allocated-302-75-mib-free-14-16-gib-reserved-in-total-by-pytorch/9483
Hi, I am trying to train a language model from scratch, but when I try to train my model, I get this error: RuntimeError: CUDA out of memory. Tried to allocate 1.91 GiB (GPU 0; 15.78 GiB total capacity; 12.36 GiB already allocated; 302.75 MiB free; 14.16 GiB reserved in total by PyTorch) Can anyone help me? I am so close to training my model, but this keeps happening. Also, I have tried this code but it does not seem to fix my issue: import torch, gc gc.collect() torch.cuda.empty_cache()
Maybe you can try lowering your per_device_train_batch_size in TrainingArguments.
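A minimal sketch (argument values are illustrative) of the usual memory-saving levers: a smaller per-device batch size combined with gradient accumulation keeps the effective batch size the same, and mixed precision reduces memory further on supported GPUs.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,   # smaller per-step batch fits in GPU memory
    gradient_accumulation_steps=4,   # 8 * 4 = effective batch size of 32
    fp16=True,                       # mixed precision; needs a CUDA GPU with fp16 support
)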
0
huggingface
Beginners
An efficient way of loading a model that was saved with torch.save
https://discuss.huggingface.co/t/an-efficient-way-of-loading-a-model-that-was-saved-with-torch-save/9814
Hello, after fine-tuning a bert_model from huggingface’s transformers (specifically ‘bert-base-cased’). I can’t seem to load the model efficiently. My model class is as following: 1. import torch 2. import torch.nn as nn 3. class Model(nn.Module): 4. def __init__(self, model_name='bert_model'): 5. super(Model, self).__init__() 6. self.bert = transformers.BertModel.from_pretrained(config['MODEL_ID'], return_dict=False) 7. self.bert_drop = nn.Dropout(0.0) 8. self.out = nn.Linear(config['HIDDEN_SIZE'], config['NUM_LABELS']) 9. self.model_name = model_name 10. 11. def forward(self, ids, mask, token_type_ids): 12. _, o2 = self.bert(ids, attention_mask = mask, token_type_ids = token_type_ids) 13. bo = self.bert_drop(o2) 14. output = self.out(bo) 15. return output I then create a model, fine-tune it, and save it with the following code: 1. device = torch.device('cuda') 2. model = Model(model_name) 3. model.to(device) 4. TrainModel(model, data) 5. torch.save(model.state_dict(), config['MODEL_SAVE_PATH']+f'{model_name}.bin') I can load the model with this code: model = Model(model_name=model_name) model.load_state_dict(torch.load(model_path)) However the problem is that every time i load a model with the Model() class it installs and reads into memory a model from huggingface’s transformers due to the code line 6 in the Model() class. This is not very efficient, is there another way to load the model ?
Instead of class Model(nn.Module): you can do class Model(PreTrainedModel):. This allows you to use the built-in save and load mechanisms. Instead of torch.save you can do model.save_pretrained("your-save-dir/"). After that you can load the model with Model.from_pretrained("your-save-dir/").
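A minimal sketch (class and parameter names are illustrative, not the exact code from the thread) of that suggestion: subclassing PreTrainedModel gives the fine-tuned classifier save_pretrained/from_pretrained, so loading it later does not re-download bert-base-cased.

import torch.nn as nn
from transformers import BertConfig, BertModel, PreTrainedModel

class MyBertClassifier(PreTrainedModel):
    config_class = BertConfig

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)            # architecture only, no weight download
        self.bert_drop = nn.Dropout(0.0)
        self.out = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, ids, mask, token_type_ids):
        _, pooled = self.bert(ids, attention_mask=mask,
                              token_type_ids=token_type_ids, return_dict=False)
        return self.out(self.bert_drop(pooled))

# First run: start from the pretrained encoder, fine-tune, then save everything together.
config = BertConfig.from_pretrained("bert-base-cased", num_labels=2)
model = MyBertClassifier(config)
model.bert = BertModel.from_pretrained("bert-base-cased")  # pull in the pretrained weights once
# ... fine-tune ...
model.save_pretrained("my-finetuned-model/")

# Later: loads config + weights from disk, no download of bert-base-cased needed.
model = MyBertClassifier.from_pretrained("my-finetuned-model/")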
0
huggingface
Beginners
Training Model on CPU instead of GPU
https://discuss.huggingface.co/t/training-model-on-cpu-instead-of-gpu/9810
I am using the Transformers Trainer API to train a BART model on a server. There is enough GPU memory; however, the training process only runs on the CPU instead of the GPU. I tried to use cuda and jit from numba as in this example to add function decorators, but it still doesn't help. What is the reason it is using the CPU instead of the GPU? How can I solve it and make the process run on the GPU? Thank you for your help!
The GPU will be used automatically by the Trainer. If that's not the case, make sure you have properly installed your NVIDIA drivers and PyTorch. Basically, import torch; torch.cuda.is_available() should print True.
0
huggingface
Beginners
Request to reset my API key
https://discuss.huggingface.co/t/request-to-reset-my-api-key/9781
Hello, I leaked my API key on GitHub and would like some help to reset it. I posted about this request around two weeks ago (my old post) but I haven't gotten a reply on that yet. Apologies for posting about this topic twice, but I found it weird that the previous post hasn't even gotten a view. Thanks in advance! @julien-c @pierric
Hi @sakuttomon, sorry for missing your old request! I just renewed your API key; you can find the new one in your profile.
0
huggingface
Beginners
Label smoothing and compute_metrics in Trainer
https://discuss.huggingface.co/t/label-smoothing-and-compute-metrics-in-trainer/9778
I'm using a RobertaForMaskedLM model with a Trainer and I'm passing a compute_metrics function. Within the function I typically do something like this: mask = p.label_ids != -100 labels = p.label_ids[mask] predictions = np.argmax(p.predictions, axis=-1)[mask] accuracy_metric.compute(predictions=predictions, references=labels) With this I hope to get the metric calculated only with respect to the tokens that were actually masked. However, if I turn on label smoothing (pass label_smoothing_factor to the Trainer), the smoothing process seems to clamp the minimum value to 0 in place (line 461, trainer_pt_utils.py). Have I approached this in the wrong way, i.e. is there another way of computing the metrics?
Ah yes, the labels should not be modified in place; this looks like a bug. This PR should fix it.
0
huggingface
Beginners
Wandb does not display train/eval loss except for last one
https://discuss.huggingface.co/t/wandb-does-not-display-train-eval-loss-except-for-last-one/9170
Hello, I am having difficulty getting my code to log metrics periodically to wandb, so I can check that I am checkpointing correctly. Specifically, although I am running my model for 10 epochs (with 2 examples per epoch for debugging) and am requesting logging every 2 steps, my wandb output displays only the very last metric for both train and eval, a single dot. The metric corresponds correctly to the output for epoch 10. Could you please help me find the issue in my code/understanding? I am adapting the following script to get it to save validation checkpoints periodically: github.com huggingface/transformers/blob/v4.6.1/examples/pytorch/language-modeling/run_mlm.py Specifically, I run the above after parsing my arguments: python3 run_mlm.py --model_name_or_path bert-base-uncased --do_train --do_eval --output_dir ./models/development/Alex/with_tags --train_file ./finetune/child/Alex/train.txt --validation_file ./finetune/child/Alex/val.txt --max_train_samples 2 --max_eval_samples 2 --overwrite_output_dir and I overwrite the default values in the TrainingArguments as follows in my version of run_mlm.py: # Added these lines training_args.load_best_model_at_end = True training_args.metric_for_best_model = "eval_loss" # end added # 8/7/21 added is_child = model_args.model_name_or_path != 'bert-base-uncased' num_epochs = 10 if is_child else 10 # Debug mode only!!! # end add # 8/1/21 added line training_args.save_total_limit = 1 strategy = "steps" training_args.logging_strategy = strategy training_args.evaluation_strategy = strategy training_args.save_strategy = strategy # For the child scripts logger.info('run_mlm.py is in debug mode and is requesting epoch = 20 for non-child! Need to revert!') training_args.save_steps = interval_steps training_args.logging_steps = interval_steps training_args.eval_steps = interval_steps # end added # For now train for fewer epochs because perplexity difference is not very large. training_args.num_train_epochs = num_epochs training_args.learning_rate = learning_rate # end additions
Does wandb work any better with logging_steps=1? Also try adding training_args.report_to = "wandb", as it might be needed in future transformers releases. Do the logs from huggingface that get printed in the console look as expected, or are they also truncated?
0
huggingface
Beginners
Wav2Vec2: Inner workings of the Trainer class
https://discuss.huggingface.co/t/wav2vec2-inner-workings-of-the-trainer-class/9331
Hi all, I am following this guide in order to fine-tune the model for my dataset. Reading the documentation of Wav2Vec2ForCTC, it says that the argument attention_mask must only be passed when the model's processor has config.return_attention_mask == True. The processor that is created in the blog post indeed has its return_attention_mask argument set to True. When setting up the Trainer, does it take care of this by itself? Does it understand what arguments to use based on the settings? Or do I need to instruct it somehow to do it? By the way, as far as I understand, the attention_mask is needed only when I input a batch of data into the model, not for single data points. Thanks in advance.
The Trainer takes the datasets after preprocessing has been applied, so setting this has nothing to do with the Trainer class.
0
huggingface
Beginners
Help with fine-tune BART for text infilling
https://discuss.huggingface.co/t/help-with-fine-tune-bart-for-text-infilling/9738
Hi guys, I am trying to fine-tune BART for a text infilling task; for example, I want my model to learn "Steve Jobs is founder of Apple" from "Steve Jobs [MASK] Apple". My questions are mainly the following three: (1) BartModel and BartForConditionalGeneration, which one should I choose? (2) Can you provide examples of how to use the corresponding API? (3) How do I compute the loss of the text infilling task?
You should use BartForConditionalGeneration, since this model adds a language modeling head on top of BartModel. BartModel itself is just the encoder-decoder Transformer, without any head on top. The language modeling head on top is necessary in order to decode the hidden states to actual predicted tokens and to generate text. Yes, check out my notebook here: https://github.com/NielsRogge/Transformers-Tutorials/tree/master/T5. You just need to use BartForConditionalGeneration instead of T5ForConditionalGeneration (T5 and BART are actually very similar). Also, you can check out the improved documentation I wrote for T5 to illustrate how these models work, both for training and inference. The loss gets automatically calculated for you when you provide labels to the model. It's standard cross-entropy loss between the predictions of the model and the labels.
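A minimal sketch (model size and mask usage are illustrative) of how the loss and generation work for infilling with BartForConditionalGeneration: the corrupted text goes in as input_ids, the full text as labels, and the model returns the cross-entropy loss.

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

src = "Steve Jobs <mask> Apple"                      # BART's infilling mask token is <mask>
tgt = "Steve Jobs is founder of Apple"

inputs = tokenizer(src, return_tensors="pt")
labels = tokenizer(tgt, return_tensors="pt").input_ids

outputs = model(**inputs, labels=labels)             # cross-entropy loss is computed for you
print(outputs.loss)

# At inference time, generate the filled-in text:
generated = model.generate(inputs["input_ids"], max_length=20, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))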
0
huggingface
Beginners
Request: reset api key
https://discuss.huggingface.co/t/request-reset-api-key/9730
Hi, I have leaked my API key on GitHub. May I get some help resetting it? I have read a few forum posts regarding this. Thanks in advance. @julien-c @pierric
Hi @limivan, I changed your API key; you can find the new one in your profile. Cheers, Pierric
0
huggingface
Beginners
Bigbird pretraining
https://discuss.huggingface.co/t/bigbird-pretraining/5344
Hi, I am curious about the new Bigbird model, and I’m trying to pretrain one for my language+domain. Running my usual pretraining script (see below) however gives me the following message, and as a result of not being able to use block sparse attention my GPU (Tesla V100) obviously runs out of memory in no time. Attention type 'block_sparse' is not possible if sequence_length: 130 <= num global tokens: 2 * config.block_size + min. num sliding tokens: 3 * config.block_size + config.num_random_blocks * config.block_size + additional buffer: config.num_random_blocks * config.block_size = 704 with config.block_size = 64, config.num_random_blocks = 3.Changing attention type to 'original_full'... I understand what the problem is, but not how to solve it. How can I dynamically pad each minibatch to a multiple of that number of tokens in the formula for block sparse attention? I suppose I could pad all my samples to the max sequence length (4096), but that seems exceedingly wasteful as well. Any pointers on how to proceed here would be immensely appreciated. Thanks a lot! My current pretraining code: tokenizer = BigBirdTokenizer(FLAGS.tokenizer) print('tokenizer:', tokenizer) config = BigBirdConfig( vocab_size=tokenizer.vocab_size, num_hidden_layers=6, max_position_embeddings=4096, attention_type="block_sparse", ) model = BigBirdForMaskedLM(config=config) print('model', model) print(model.num_parameters(), 'parameters') dataset = load_from_disk(FLAGS.data) print('data loaded') data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) training_args = TrainingArguments( output_dir=FLAGS.output, overwrite_output_dir=True, num_train_epochs=10, per_device_train_batch_size=FLAGS.batchsize, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) print('start training!') trainer.train() print('done training!') trainer.save_model(training_args.output_dir) ``
I have encountered the same problem. Do you have a solution now?
0
huggingface
Beginners
Type of model for PubMed article processing
https://discuss.huggingface.co/t/type-of-model-for-pubmed-article-processing/9734
Hi everyone. I am new to NLP systems, but not to machine learning. HuggingFace seemed like a great place to start as I attempt this latest project of mine. For graduate school, I am working on building a system that can take a PubMed article as the input and output all the research questions they asked and the steps they used to complete it. I have been thinking about this for a while, and I think the best system to use would be a summarization transformer, but instead of summarizing the article it would be trained to output the other stuff. I think this would be good, because like a summarization AI, the answer is not written directly in the article, the answer must be extrapolated and recombined. Can anyone offer me some advice on what type of NLP system to use if a summarization one isn’t the best? Or some steps to start solving this problem? I do not have any NLP experience so I really just need some advice on what avenue to start with.
For example: Mignone, John L., et al. “Neural Stem and Progenitor Cells in Nestin‐Gfp Transgenic Mice.” Wiley Online Library, John Wiley & Sons, Ltd, 12 Jan. 2004, onlinelibrary.wiley.com/doi/10.1002/cne.10964. "Neural stem cells generate a wide spectrum of cell types in developing and adult nervous systems. These cells are marked by expression of the intermediate filament nestin. We used the regulatory elements of the nestin gene to generate transgenic mice in which neural stem cells of the embryonic and adult brain are marked by the expression of green fluorescent protein (GFP). We used these animals as a reporter line for studying neural stem and progenitor cells in the developing and adult nervous systems. In these nestin-GFP animals, we found that GFP-positive cells reflect the distribution of nestin-positive cells and accurately mark the neurogenic areas of the adult brain. Nestin-GFP cells can be isolated with high purity by using fluorescent-activated cell sorting and can generate multipotential neurospheres. In the adult brain, nestin-GFP cells are ∼1,400-fold more efficient in generating neurospheres than are GFP-negative cells and, despite their small number, give rise to 70 times more neurospheres than does the GFP-negative population. We characterized the expression of a panel of differentiation markers in GFP-positive cells in the nestin-GFP transgenics and found that these cells can be divided into two groups based on the strength of their GFP signal: GFP-bright cells express glial fibrillary acidic protein (GFAP) but not βIII-tubulin, whereas GFP-dim cells express βIII-tubulin but not GFAP. These two classes of cells represent distinct classes of neuronal precursors in the adult mammalian brain, and may reflect different stages of neuronal differentiation. We also found unusual features of nestin-GFP–positive cells in the subgranular cell layer of the dentate gyrus. Together, our results indicate that GFP-positive cells in our transgenic animals accurately represent neural stem and progenitor cells and suggest that these nestin-GFP–expressing cells encompass the majority of the neural stem cells in the adult brain. J. Comp. Neurol. 469:311–324, 2004. © 2004 Wiley-Liss, Inc. Generation and analysis of transgenic mice Fragments of the nestin gene (gift from Drs. R. McKay and L. Zimmerman; Zimmerman et al., [1994] Josephson et al., [1998] Yaworsky and Kappen, [1999] were subcloned into the pBSM13+vector. The 5.8-kb fragment of the promoter region and the 1.8-kb fragment containing the second intron were combined with the cDNA of the enhanced version of GFP (EGFP; Clontech, Palo Alto, CA) and polyadenylation sequences from simian virus 40 and cloned into the pBSM13+ vector, generating nestin-GFP plasmid. In the final construct, EGFP cDNA was placed between the promoter and the intron sequences of the nestin gene, thus matching the arrangement of the regulatory sequences in the nestin gene. The plasmid was isolated and purified through centrifugation in cesium chloride and digested with the SmaI restriction enzyme; this removed the entire vector backbone, leaving the nestin-GFP sequences intact. An 8.7-kb fragment was purified by electrophoresis through the agarose gel and used for the pronuclear injections of the fertilized oocytes from C57BL/6 × Balb/cBy hybrid mice. Use of animals in the present experiments was reviewed and approved by the Cold Spring Harbor Laboratory Animal Use and Care Committee." 
Would output: Research Task : “Mark the intermediate filament nestin with GFP in mice.” Steps : “Subclone fragments of the nestin gene into pBSM13+ vector.” “Combine the second intron and the promoter region with the cDNA of GFP.” “Isolate and purify the plasmid through centrifugation in cesium chloride.” “Digested the plasmid with the SmaI restriction enzyme.” “Use pronuclear injections of the fertilized oocytes from C57BL/6 × Balb/cBy hybrid mice”
0
huggingface
Beginners
ONNX exported model outputs different value per inference call for the same input
https://discuss.huggingface.co/t/onnx-exported-model-outputs-different-value-per-inference-call-for-the-same-input/9627
I used BertModel in my PyTorch model and appended a few layers for classification purposes. I exported the PyTorch model to an ONNX model; it works, but the output value for the same model input is different on every inference call. It feels like the model is in training mode, but it is not. I exported the model using the following code: torch.onnx.export( model, ( my_inputs1, my_inputs2, ), "model.onnx", input_names=['my_inputs1','my_inputs2'], output_names=['outputs'], dynamic_axes={ 'my_inputs1': {0: 'batch'}, 'my_inputs2': {0: 'batch'}, }, opset_version=11, do_constant_folding=True, enable_onnx_checker=True )
Updated reproducible Colab colab.research.google.com Google Colaboratory 1
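A minimal, self-contained sketch (toy model, not the thread's) of one common cause of this behaviour: dropout stays active if the model is exported while still in training mode, since torch.onnx.export records the model in whatever mode it is currently in. Calling model.eval() before export usually makes the exported graph deterministic.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(0.5), nn.Linear(8, 2))
x = torch.randn(1, 8)

print(model(x))          # training mode: dropout is active, so the result changes per call
model.eval()             # switch off dropout (and other train-only behaviour)
with torch.no_grad():
    print(model(x))      # now deterministic
    torch.onnx.export(model, x, "model.onnx", input_names=["x"], output_names=["y"])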
0
huggingface
Beginners
Why BigBirdTokenizer can’t load my own vocab or trained BPE results?
https://discuss.huggingface.co/t/why-bigbirdtokenizer-can-t-load-my-own-vocab-or-trained-bpe-results/9700
BigBirdTokenizer can't load my vocab results, but BERT and RoBERTa can. tokenizer = RobertaTokenizer.from_pretrained('my_bpe', max_len=512) # right tokenizer = BertTokenizer.from_pretrained('./data/my_vocab.txt') # right tokenizer = BigBirdTokenizer.from_pretrained('my_bpe') # not right 175 176 def LoadFromFile(self, arg): --> 177 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) 178 179 def Init(self, RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(818) [model_proto->ParseFromArray(serialized.data(), serialized.size())] How can I train a tokenizer that I can use with BigBirdTokenizer? Thanks
Hi, may I know in which format the vocab/tokenizer is given?
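A minimal sketch (file names and vocab size are illustrative) of producing the kind of file BigBirdTokenizer expects: the traceback above comes from SentencePiece, so the tokenizer needs a trained SentencePiece .model file rather than a BPE vocab/merges pair or a WordPiece vocab.txt.

import sentencepiece as spm

# Train a SentencePiece model on a plain-text corpus (one sentence per line);
# this writes my_spm.model and my_spm.vocab.
spm.SentencePieceTrainer.train(
    input="my_corpus.txt",      # hypothetical path to your training text
    model_prefix="my_spm",
    vocab_size=32000,
    model_type="unigram",
)

from transformers import BigBirdTokenizer
tokenizer = BigBirdTokenizer(vocab_file="my_spm.model")
print(tokenizer.tokenize("Hello world"))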
0
huggingface
Beginners
My input sentence is very long(more than 512). What should I do when I want to fintune model about classify?Thanks
https://discuss.huggingface.co/t/my-input-sentence-is-very-long-more-than-512-what-should-i-do-when-i-want-to-fintune-model-about-classify-thanks/9688
I know I can just keep the first 512 tokens, but I don't want to do this. The task is text classification. AutoModelForSequenceClassification can be used directly for classification, but the question is whether the input can be split into several 512-length segments and the pooler layer then used to classify? Or what should I do? Thanks
Hey @ccfeidao, you might want to try one of the dedicated models like Longformer or BigBird, which have a longer context size of around 4,096 tokens. See this thread for more details.
0
huggingface
Beginners
Longformer and sentiment analysis
https://discuss.huggingface.co/t/longformer-and-sentiment-analysis/9416
I am trying to use Longformer to do a sentiment analysis and I am wondering what the best way is to do it. I have the following code: from transformers import LongformerTokenizer, EncoderDecoderModel model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16") tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
Hi, LongFormer itself is a Transformer encoder, and that’s more than sufficient to perform sentiment analysis. You can just use LongFormerForSequenceClassification, like so: from transformers import LongformerTokenizer, LongformerForSequenceClassification import torch tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096') model = LongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096') inputs = tokenizer("This text is positive", return_tensors="pt") labels = torch.tensor([1]).unsqueeze(0) # Batch size 1 outputs = model(**inputs, labels=labels) loss = outputs.loss logits = outputs.logits Note that this model will have a classification head that is randomly initialized, so you’ll need to fine-tune it on a custom dataset.
0
huggingface
Beginners
How to calculate the effective batch size on TPU?
https://discuss.huggingface.co/t/how-to-calculate-the-effective-batch-size-on-tpu/9656
When training on a single GPU, the effective batch size is the batch size multiplied by the gradient accumulation steps. When multiple GPUs are used, we have to multiply the number of GPUs, the batch size, and the gradient accumulation steps to get the effective batch size. Is it the same for TPU? When I use 8 TPU cores instead of 1, does the effective batch size equal 8 times the batch size times the gradient accumulation steps? Or does the batch size get divided equally across the 8 TPU cores? As an example, suppose I give the Trainer batch size 32, gradient accumulation steps 1, and 8 TPU cores. Is the effective batch size 32 or 256?
Using 8 TPU cores work exactly the same as using 8 GPUs, so the effective batch size is 256.
1
huggingface
Beginners
Reuse context for BERT
https://discuss.huggingface.co/t/reuse-context-for-bert/8956
Reuse the same context in BERT question answering: I want to reuse the same context for different questions, and tokenizing the context every time for a new question seems inefficient. How can I improve this (e.g. by reusing the tokenized context)?
I’m trying to find a way to reuse a context too. I have a large context and I don’t want to tokenize it for every question. Did you find a way to do that?
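A minimal sketch (model name and context are illustrative) of caching the context's token ids so only the question is re-tokenized per query; the ids are concatenated manually into the usual [CLS] question [SEP] context [SEP] layout that BERT QA models expect.

import torch
from transformers import BertForQuestionAnswering, BertTokenizerFast

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizerFast.from_pretrained(name)
model = BertForQuestionAnswering.from_pretrained(name)
model.eval()

context = "The Amazon rainforest covers most of the Amazon basin of South America."
context_ids = tokenizer.encode(context, add_special_tokens=False)  # tokenized once, reused below

def answer(question):
    q_ids = tokenizer.encode(question, add_special_tokens=False)
    input_ids = ([tokenizer.cls_token_id] + q_ids + [tokenizer.sep_token_id]
                 + context_ids + [tokenizer.sep_token_id])
    token_type_ids = [0] * (len(q_ids) + 2) + [1] * (len(context_ids) + 1)
    with torch.no_grad():
        out = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
    start = int(out.start_logits.argmax())
    end = int(out.end_logits.argmax())
    return tokenizer.decode(input_ids[start:end + 1])

print(answer("What does the Amazon rainforest cover?"))
print(answer("Where is the Amazon basin?"))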
0
huggingface
Beginners
Model never predicts minority class in a binary sequence classification
https://discuss.huggingface.co/t/model-never-predicts-minority-class-in-a-binary-sequence-classification/9637
I am new to huggingface. With the help of trainer API, I trained and evaluated a model. But whenever I use it for prediction, model predicts just one class always. It would be helpful if anyone can help me identify the bug. my data is such that there are two text inputs. Here is my code - import torch import collections import pandas as pd import numpy as np from sklearn.metrics import confusion_matrix from sklearn.model_selection import train_test_split, GroupShuffleSplit from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score from transformers import EarlyStoppingCallback, DataCollatorWithPadding, Trainer, TrainingArguments from transformers import BertTokenizerFast, BertConfig, BertModel, BertForSequenceClassification import os import sys sys.path.append(".") sys.path.append("..") model_name = "bert-base-uncased" max_length = 512 tokenizer = BertTokenizerFast.from_pretrained(model_name, do_lower_case=True) data = pd.read_csv('train_data.csv', '\t') gss = GroupShuffleSplit(train_size=.80, random_state = 2, n_splits=1) class Dataset(torch.utils.data.Dataset): def __init__(self, encodings, labels=None): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} if self.labels: item["labels"] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.encodings["input_ids"]) for train_idx, val_idx in gss.split(data.loc[:, data.columns != 'Label'], data['Label'], groups=data['QueryID']): train_ds = data.iloc[train_idx] val_ds = data.iloc[val_idx] train_label = pd.factorize(train_ds.Label)[0] valid_label = pd.factorize(val_ds.Label)[0] train_label = 1 - train_label #because 0 should be relevant and 1 shoild be irrelevant valid_label = 1 - valid_label count = 0 print("Encodings generation.") train_encodings = tokenizer(train_ds['Query'].tolist(),train_ds['Segment'].tolist(), truncation=True, padding=True, max_length=max_length) valid_encodings = tokenizer(val_ds['Query'].tolist(), val_ds['Segment'].tolist(), truncation=True, padding=True, max_length=max_length) print("Encodings generated.") train_dataset = Dataset(train_encodings, train_label) valid_dataset = Dataset(valid_encodings, valid_label) model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2, return_dict=True).to("cuda") data_collator = DataCollatorWithPadding(tokenizer=tokenizer) training_args = TrainingArguments( output_dir='bert_{}'.format(count), num_train_epochs=5, logging_dir='log_bert_{}'.format(count), load_best_model_at_end=True, evaluation_strategy="epoch") print("Training begins:\n") trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=valid_dataset, data_collator=data_collator, callbacks=[EarlyStoppingCallback(early_stopping_patience=3)]) print(trainer.train()) print('---------------------') ## test data test_data = pd.read_csv("test_data.csv", "\t") print("Encodings generation.") test_encodings = tokenizer(test_data['Query'].tolist(),test_data['Segment'].tolist(), truncation=True, padding=True, max_length=max_length) print("Encodings generated.") test_dataset = Dataset(test_encodings) # Make prediction raw_pred, y_act, _ = trainer.predict(test_dataset) # Preprocess raw predictions y_pred = np.argmax(raw_pred, axis=1) from sklearn.metrics import classification_report print(classification_report(y_act, y_pred))
What you typically do with an imbalanced set in a classification problem is use class weights in your loss function. See the documentation of CrossEntropyLoss and its weight parameter. However, I do not think that the Trainer currently allows custom loss functions out of the box. Instead you can subclass the Trainer, particularly overriding the compute_loss method, to calculate the loss manually: github.com huggingface/transformers/blob/c02cd95c56249e9bd38ecb3e4ebcce6d9eebd4a4/src/transformers/trainer.py#L1811
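A minimal sketch (the weights and label count are illustrative) of that suggestion: subclass Trainer and override compute_loss so a weighted CrossEntropyLoss up-weights the minority class.

import torch
from torch import nn
from transformers import Trainer

class WeightedTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # class 1 (the minority class here) is up-weighted; pick weights to suit your data
        weight = torch.tensor([1.0, 5.0], device=logits.device)
        loss_fct = nn.CrossEntropyLoss(weight=weight)
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss

WeightedTrainer can then be used as a drop-in replacement for Trainer in the training script above.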
0
huggingface
Beginners
Reduce the number of features of BERT embeddings
https://discuss.huggingface.co/t/reduce-the-number-of-features-of-bert-embeddings/3424
Hi everyone, I am using an XXL BERT for my project. I would like to test the network using an embedding dimension smaller than 768, for example 300. I think I could try to perform a PCA on the embeddings. Is there an implemented solution which does this? Many thanks in advance
Hi, actually you could use a Dense layer (from sentence-transformers, here) and go from 768 to 300 with a bit of fine-tuning. If you still want to use PCA, Hugging Face (as far as I know) doesn't have its own implementation, so I advise you to pick the best Python library you know and use that implementation. For example, the scikit-learn library has PCA as well as other useful tools (this is the first example of scikit-learn PCA that I happened to see).
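A minimal sketch (model choice, pooling, and sample sentences are illustrative) of fitting scikit-learn's PCA on mean-pooled BERT embeddings to go from 768 to 300 dimensions; in practice you would fit it on a varied sample of real sentences.

import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = ["The cat sat on the mat.", "Dogs are loyal animals.", "I like pizza."] * 150
with torch.no_grad():
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state          # (n_sentences, seq_len, 768)
    embeddings = hidden.mean(dim=1).numpy()          # simple mean pooling -> (n_sentences, 768)

pca = PCA(n_components=300)                          # needs at least 300 samples to fit
reduced = pca.fit_transform(embeddings)
print(reduced.shape)                                 # (n_sentences, 300)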
0
huggingface
Beginners
Anyone have advice on best methods to cluster BERT-embedded documents?
https://discuss.huggingface.co/t/anyone-have-advice-on-best-methods-to-cluster-bert-embedded-documents/801
I am interested in using the feature extractor to get BERT embeddings for a corpus of documents. I am interested in clustering these documents (open to different algorithms/similarity metrics) at this point. However, I am assuming that dimensionality of the embeddings might be a problem. Has anyone done clustering on embeddings before? If so, what kind of dimensionality reduction did you use (if any) and how did you do the clustering or compute similarity metrics? Even if you haven’t done this before, if you have any ideas or if you can refer me to any papers/examples that would be great! Also just want to add that I am not trying to do any kind of search (ie not interested in finding out which article is most similar to article x) which is what I mostly found online when googling this problem. Although both utilize similarity metrics, the goal is ultimately different and wanted to be clear on that. I just want to cluster the documents in order to group the articles and come up with labels for them. Thank you for viewing this question!
Hello @afractalthought, you can try Sentence Transformers, whose embeddings are much better suited to clustering than features extracted from vanilla BERT or RoBERTa. When applying cosine similarity on the sentence embeddings from such a model, documents with semantic similarity should get a higher similarity score and clustering should get better.
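A minimal sketch (model choice, documents, and cluster count are illustrative) of clustering documents by embedding them with sentence-transformers and running k-means on the normalized embeddings.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

docs = [
    "The stock market rallied on strong earnings.",
    "Shares climbed after the quarterly earnings report.",
    "The team won the championship game.",
    "Fans celebrated the title victory downtown.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(docs, normalize_embeddings=True)  # unit length, so euclidean ~ cosine

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
for label, doc in zip(kmeans.labels_, docs):
    print(label, doc)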
0
huggingface
Beginners
Returning logits from Trainer.predict()
https://discuss.huggingface.co/t/returning-logits-from-trainer-predict/9314
Hello! I would like to perform some operations on the output probability distributions of an AutoModelForSequenceClassification model so, I was wondering if it is possible to return the logits rather than predicted class labels from the transformers.Trainer.predict() method.
The predict method does return the logits, as well as the labels.
0
huggingface
Beginners
Sentence Transformers paraphrase-MiniLM fine-tuning error
https://discuss.huggingface.co/t/sentence-transformers-paraphrase-minilm-fine-tuning-error/9612
Hi @nreimers, Really love your sentence transformers. I’m currently using them as base models to fine-tune them on a 3-class classification task using the standard hf trainer. This works very well with paraphrase-distilroberta-base-v2, but when I use variants of MiniLM-L6-v2 (I tried paraphrase-MiniLM-L6-v2 and flax-sentence-embeddings/all_datasets_v4_MiniLM-L6) I get the following error. The error occurs during training (with the hf trainer.train()) after around 100 steps. The exact same code works well with any other transformer, including other sentence transformers like paraphrase-distilroberta-base-v2, but for some reason it occurs for variants of your paraphrase-MiniLM. /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 219 if self.position_embedding_type == "absolute": 220 position_embeddings = self.position_embeddings(position_ids) --> 221 embeddings += position_embeddings 222 embeddings = self.LayerNorm(embeddings) 223 embeddings = self.dropout(embeddings) RuntimeError: The size of tensor a (1088) must match the size of tensor b (512) at non-singleton dimension 1 with flax-sentence-embeddings/all_datasets_v4_MiniLM-L12" I get the error: RuntimeError: The size of tensor a (528) must match the size of tensor b (512) at non-singleton dimension 1 Note that I also get the error when fine-tuning nreimers/MiniLM-L6-H384-uncased. Do you know where this could come from? (Side-note: do you recommend using the flax-sentence-embeddings/all_datasets… models? Didn’t find performance metrics on sbert.net. Are they better than the models in the ranking here? Pretrained Models — Sentence-Transformers documentation )
Hi @MoritzLaurer, happy to hear that. I think the issue may be that the max length is not defined for these models; then the text is not truncated to 512 word pieces. Is it possible to set the max_length for the input text in the trainer? I'm currently adding these models to the performance metrics. Yes, the flax-sentence-embeddings/all_datasets_v3 (or _v4) models work the best. Also added them here: huggingface.co sentence-transformers/all-MiniLM-L12-v1 · Hugging Face
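A minimal sketch (model name from the thread, text illustrative) of forcing truncation to 512 tokens at tokenization time, which avoids the position-embedding size mismatch in the error above when a checkpoint does not define its maximum length.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/paraphrase-MiniLM-L6-v2")
tokenizer.model_max_length = 512            # set explicitly in case the checkpoint leaves it undefined

texts = ["a very long document " * 500]
enc = tokenizer(texts, truncation=True, max_length=512, padding="max_length", return_tensors="pt")
print(enc["input_ids"].shape)               # torch.Size([1, 512])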
0
huggingface
Beginners
How is compute_metrics working internally?
https://discuss.huggingface.co/t/how-is-compute-metrics-working-internally/9269
Hi everyone, I am following this blog post (Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers) on fine-tuning an ASR model, and there is something I don't understand about the compute_metrics function. In the notebook, we want to compute the word error rate for the validation set every eval_steps steps. What is the input of this function? Does it take one batch at a time, or the whole validation dataset? If it takes one batch at a time, how is the final WER that is displayed calculated? Is it the mean of all the WERs of all the batches? Pinging the author of this blog post as well, @patrickvonplaten. I'd appreciate any insight you guys can give me. Thanks in advance.
The compute_metrics function takes the predictions and labels over the whole evaluation dataset and computes the metrics from them.
0
huggingface
Beginners
What model checkpoint do I use if I trained a Word Piece tokenizer?
https://discuss.huggingface.co/t/what-model-checkpoint-do-i-use-if-i-trained-a-word-piece-tokenizer/9256
Hi, I have just trained my own tokenizer from scratch, which is a Word Piece model like BERT, and I have saved it. From there, I am now wanting to train my own language model from scratch using the tokenizer I trained beforehand. However, referring to the code below, what do I change my model_checkpoint to? model_checkpoint = "gpt2" tokenizer_checkpoint = "drive/wordpiece-like-tokenizer" I trained a Word Piece model like BERT, so should gpt2 be changed to something else? Thanks.
You should change it to "bert-base-cased" for instance.
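A minimal sketch (paths follow the thread; the config tweak is illustrative) of what that change does in practice: the checkpoint name only picks the architecture/config, while the custom WordPiece tokenizer is loaded from its own path and a fresh model is created from the config for training from scratch.

from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer

model_checkpoint = "bert-base-cased"                      # BERT-style architecture
tokenizer_checkpoint = "drive/wordpiece-like-tokenizer"   # the tokenizer trained beforehand

tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint)
config = AutoConfig.from_pretrained(model_checkpoint, vocab_size=tokenizer.vocab_size)
model = AutoModelForMaskedLM.from_config(config)          # fresh weights, not pretrained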
0
huggingface
Beginners
Transformers: WordLevel tokenizer produces strange vocabulary
https://discuss.huggingface.co/t/transformers-wordlevel-tokenizer-produces-strange-vocabulary/9470
Training the WordLevel tokenizer I receive strange vocabulary. Bellow is my code: data = [ "Beautiful is better than ugly." "Explicit is better than implicit." "Simple is better than complex." "Complex is better than complicated." "Flat is better than nested." "Sparse is better than dense." "Readability counts." ] from tokenizers.models import WordLevel from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, decoders, trainers tokenizer = Tokenizer(models.WordLevel()) trainer = trainers.WordLevelTrainer( vocab_size=100000, ) tokenizer.train_from_iterator(data, trainer=trainer) tokenizer.get_vocab() The output is the following: {'Beautiful is better than ugly.Explicit is better than implicit.Simple is better than complex.Complex is better than complicated.Flat is better than nested.Sparse is better than dense.Readability counts.': 0} Please explain what I’m doing wrong…
Your data should be a list of lists.
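A minimal sketch of what is going on above: without commas, adjacent string literals are concatenated by Python, so data is a one-element list containing the whole text, and since no pre-tokenizer is set the trainer treats that whole string as a single word. Separating the sentences and adding a whitespace pre-tokenizer yields a normal vocabulary.

from tokenizers import Tokenizer, models, pre_tokenizers, trainers

data = [
    "Beautiful is better than ugly.",
    "Explicit is better than implicit.",
    "Simple is better than complex.",
    "Readability counts.",
]

tokenizer = Tokenizer(models.WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()   # split on whitespace and punctuation
trainer = trainers.WordLevelTrainer(vocab_size=100000, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(data, trainer=trainer)
print(tokenizer.get_vocab())                            # one entry per word / punctuation mark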
0
huggingface
Beginners
Questions about default checkpointing behavior (train v. val)
https://discuss.huggingface.co/t/questions-about-default-checkpointing-behavior-train-v-val/9106
Hello, I had a few questions about how Hugging Face checkpoint behavior changes depending on the arguments to the Trainer. In the documentation, I noticed that by default: because evaluation_strategy is 'no', evaluation is never run during training (reference: Trainer — transformers 4.7.0 documentation); because metric_for_best_model is by default None, the metric defaults to "loss", which is the same as "eval_loss" (same reference as above). My questions were: If I run a model with default arguments, does the model checkpointing automatically save and load the best checkpoint based on validation loss/the eval dataloader? Do the losses displayed in trainer_state.json correspond to val or train loss? Is there an easy way to plot the train and val losses that doesn't involve overriding the default model behavior or going through an external visualization library like Comet? Additionally, I tried manually overwriting the metric_for_best_model as follows: training_args.metric_for_best_model = "eval_loss" I do this before the training arguments are passed to Trainer initialization. If I do this, 1) is this a correct way to enforce validation-based checkpointing? and 2) in this situation, what are the losses displayed in trainer_state.json? Thank you for your help! I appreciate it.
Hi there, here are the answers: No, by default the model checkpointing only saves the model to resume training later if something goes wrong, but there is no best-model loading logic unless you use load_best_model_at_end. You will then need to set an eval_strategy and a save_strategy that match (either epoch or steps). It's the accumulated training loss since the beginning of the training. No, there is not inside the Trainer; we integrate with most reporting tools (TensorBoard, Wandb, CometML, etc.) for this reason. For your last questions: setting metric_for_best_model is not enough, you need to set load_best_model_at_end to True. The losses displayed in trainer_state.json will still be the training losses.
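A minimal sketch (step counts are illustrative) of the combination described above for validation-based checkpointing, using standard TrainingArguments fields.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",     # evaluation and saving must use the same strategy
    save_strategy="steps",
    eval_steps=500,
    save_steps=500,
    load_best_model_at_end=True,     # reload the best checkpoint when training finishes
    metric_for_best_model="eval_loss",
    greater_is_better=False,         # lower eval_loss is better
)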
0
huggingface
Beginners
How can I trace trainer.state.log_history in Multi-GPU environment?
https://discuss.huggingface.co/t/how-can-i-trace-trainer-state-log-history-in-multi-gpu-environment/9345
Hello. I am trying to train a RoBERTa model from scratch. I successfully trained the model with the Trainer, but when I checked trainer.state.log_history, there was nothing in it. This situation occurred only with multi-GPU training; when I use a single GPU, the log_history exists. How can I get the log_history in multi-GPU training?
I found the answer by myself. In TrainingArguments, just set log_on_each_node=True.
0
huggingface
Beginners
How can I check mlm accuracy during training RoBERTa?
https://discuss.huggingface.co/t/how-can-i-check-mlm-accuracy-during-training-roberta/3164
Hello. I try to train RoBERTa from scratch. There are the code and printed log below. From the code, I can check the mlm loss, but I couldn’t find options for mlm accuracy. Is there anything I can do for check mlm acc? from transformers import RobertaConfig config = RobertaConfig( num_hidden_layers=4, hidden_size=512, hidden_dropout_prob=0.1, num_attention_heads=8, attention_probs_dropout_prob=0.1, intermediate_size=2048, vocab_size=34492, type_vocab_size=1, initializer_range=0.02, max_position_embeddings=512, position_embedding_type="absolute" ) from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained("tokenizer", max_len=512) from transformers import RobertaForMaskedLM model = RobertaForMaskedLM(config=config) from transformers import LineByLineTextDataset train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="train.txt", block_size=tokenizer.max_len_single_sentence ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments num_train_epochs = 4 max_steps = num_train_epochs * len(train_dataset) warmup_steps = int(max_steps*0.05) training_args = TrainingArguments( output_dir="output", overwrite_output_dir=True, do_train=True, max_steps=max_steps, warmup_steps=warmup_steps, num_train_epochs=num_train_epochs, per_device_train_batch_size=100, learning_rate=5e-5, weight_decay=0, max_grad_norm=1, adam_beta1=0.9, adam_beta2=0.98, adam_epsilon=1e-6, # disable_tqdm=True logging_dir="log", logging_first_step=True ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, ) trainer.train() 0: {'loss': 10.588345527648926, 'learning_rate': 1.4910684996868758e-09, 'epoch': 0.0011918951132300357} 0: {'loss': 10.444767531507718, 'learning_rate': 7.455342498434379e-07, 'epoch': 0.5959475566150179} 0: {'loss': 9.9342578125, 'learning_rate': 1.4910684996868757e-06, 'epoch': 1.1918951132300357} 0: {'loss': 9.384439453125, 'learning_rate': 2.236602749530314e-06, 'epoch': 1.7878426698450536} 0: {'loss': 8.790998046875, 'learning_rate': 2.9821369993737514e-06, 'epoch': 2.3837902264600714} 0: {'loss': 8.097921875, 'learning_rate': 3.727671249217189e-06, 'epoch': 2.9797377830750893} 0: {'loss': 7.4109140625, 'learning_rate': 4.473205499060628e-06, 'epoch': 3.575685339690107} 0: {'loss': 6.89530859375, 'learning_rate': 5.218739748904065e-06, 'epoch': 4.171632896305125} 0: {'loss': 6.57353515625, 'learning_rate': 5.964273998747503e-06, 'epoch': 4.767580452920143} 0: {'loss': 6.354984375, 'learning_rate': 6.70980824859094e-06, 'epoch': 5.363528009535161} 0: {'loss': 6.194296875, 'learning_rate': 7.455342498434378e-06, 'epoch': 5.959475566150179} 0: {'loss': 6.028484375, 'learning_rate': 8.200876748277817e-06, 'epoch': 6.5554231227651965} ...
You have to add two things to check your accuracy. First you should define an evaluation strategy, to regularly evaluate your model on the validation set (in TrainingArguments, add evaluation_strategy="steps" to evaluate every eval_steps steps or evaluation_strategy="epoch" to evaluate every epoch). Then you need to add a compute_metrics function to your Trainer; see for instance how one is coded in the run_glue script. It should return the accuracy you want.
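A minimal sketch (not from the thread) of such a compute_metrics function for MLM: it reports accuracy over the masked positions only, since the labels are -100 everywhere else.

import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    mask = labels != -100                    # only masked tokens carry a label
    accuracy = (predictions[mask] == labels[mask]).mean()
    return {"mlm_accuracy": float(accuracy)}

# Then pass it to the Trainer together with an eval_dataset:
# trainer = Trainer(model=model, args=training_args, data_collator=data_collator,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset,
#                   compute_metrics=compute_metrics)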
0
huggingface
Beginners
Any tutorials for distilling (e.g. GPT2)?
https://discuss.huggingface.co/t/any-tutorials-for-distilling-e-g-gpt2/8599
I’m trying to read up on knowledge distillation and as an exercise, I’d like to fine-tune a GPT2-medium model on a specific generation task and then distill it down to a small GPT2 model. Could someone point me towards a colab or tutorial that I could use to learn hands-on how to do this? Thanks
@ComfortEagle did you ever find a good tutorial?
0
huggingface
Beginners
K fold cross validation
https://discuss.huggingface.co/t/k-fold-cross-validation/5765
Hi, I use Trainer() to fine-tune the bert-base-cased model on a NER task. I split my dataset with sklearn.model_selection.train_test_split. Now I want to use k-fold cross-validation to split the dataset and fine-tune the model. Has anyone tried the same? Please tell me if you have any ideas.
One suggestion would be to use the split functionality of datasets to create your folds, as described here: Splits and slicing — datasets 1.6.0 documentation. Then you could use a loop to fine-tune on each fold with the Trainer and aggregate the predictions per fold.
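A minimal sketch (dataset name and fold count are illustrative) of that suggestion: build k folds with the datasets percent-slicing syntax and fine-tune once per fold inside a loop.

from datasets import load_dataset

k = 5
for i in range(k):
    lo, hi = i * (100 // k), (i + 1) * (100 // k)
    val_ds = load_dataset("conll2003", split=f"train[{lo}%:{hi}%]")
    if lo == 0:
        train_split = f"train[{hi}%:]"
    elif hi == 100:
        train_split = f"train[:{lo}%]"
    else:
        train_split = f"train[:{lo}%]+train[{hi}%:]"
    train_ds = load_dataset("conll2003", split=train_split)
    # tokenize train_ds / val_ds, build a fresh model and Trainer here,
    # then train and collect the metrics/predictions for this fold
    print(f"fold {i}: {len(train_ds)} train / {len(val_ds)} validation examples")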
0
huggingface
Beginners
How can I load models from any remote url
https://discuss.huggingface.co/t/how-can-i-load-models-from-any-remote-url/9452
Hi, I want to set up an HTTP file server (the simplest case would be HTTP on localhost) that will contain my models, or simply fork a GitHub repository with pretrained models. I see that when loading a pretrained model, the transformers and sentence-transformers libraries try to get files from huggingface.co by default. Is there a way to change this and load e.g. from http://localhost:8000 without modifying the insides of the library? E.g. I tried to simply put the URL in pt_model = AutoModelForSequenceClassification.from_pretrained('http://localhost:8000'), but it failed to work. I also tried to use the mirror parameter, adding my address to PRESET_MIRROR_DICT in configuration_utils.py, but that is already a modification that I want to avoid, and it didn't work besides. Is there any proper way to do so?
You can try to pass your HTTP server as a proxy parameter, like this: proxies = {"http": "http://localhost:8000"} model = AutoModelForSequenceClassification.from_pretrained("your-model-name", proxies=proxies)
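A minimal sketch (the file names and server URL are hypothetical) of another route that avoids touching the library: download the model files from any HTTP server yourself and point from_pretrained at the local directory, which it accepts in place of a hub model id.

import os
import urllib.request

from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_url = "http://localhost:8000/my-model"      # hypothetical file server path
local_dir = "my-model"
os.makedirs(local_dir, exist_ok=True)
for fname in ["config.json", "pytorch_model.bin", "vocab.txt", "tokenizer_config.json"]:
    urllib.request.urlretrieve(f"{base_url}/{fname}", os.path.join(local_dir, fname))

model = AutoModelForSequenceClassification.from_pretrained(local_dir)
tokenizer = AutoTokenizer.from_pretrained(local_dir)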
0
huggingface
Beginners
Fine-Tuning BERT Question Answering sequence output problem
https://discuss.huggingface.co/t/fine-tuning-bert-question-answering-sequence-output-problem/7878
While following the instructions in Fine-tuning with custom datasets — transformers 4.7.0 documentation using TensorFlow Keras, model.fit produces the problem below and fails to start training: from transformers import TFAutoModelForQuestionAnswering model = TFAutoModelForQuestionAnswering.from_pretrained("bert-base-multilingual-cased") ... model.fit(...) TypeError: The two structures don't have the same sequence type. Input structure has type <class 'tuple'>, while shallow structure has type <class 'transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput'>. I suspect that it is related to formatting the output for Keras requirements as below: # Keras will expect a tuple when dealing with labels train_dataset = train_dataset.map(lambda x, y: (x, (y['start_positions'], y['end_positions']))) since the error message is about the output and says that tuples are not transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput. Help? My platform is Windows 10 and the library versions are: print(tf.__version__) print(torch.__version__) print(transformers.__version__) 2.4.0 1.9.0+cu111 4.8.2
The error here was because the guide asks to use return_dict=False, but in TF we have to set run_eagerly=True when compiling the model for the return_dict parameter to actually take effect. According to the documentation: return_dict (bool, optional) – Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True. A warning actually said that return_dict could not be set to False, which is how I was able to figure out this issue. I don't know the equivalent for PyTorch, but I hope this helps.
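For reference, a minimal sketch of compiling the Keras model in eager mode (the optimizer setting is illustrative, not from the original post):

import tensorflow as tf
from transformers import TFAutoModelForQuestionAnswering

model = TFAutoModelForQuestionAnswering.from_pretrained("bert-base-multilingual-cased")
# run_eagerly=True keeps Keras in eager mode, so return_dict=False can take effect
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), run_eagerly=True)
# model.fit(train_dataset.shuffle(1000).batch(16), epochs=3)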
0
huggingface
Beginners
Overwrite attention heads in BartForConditionalGeneration
https://discuss.huggingface.co/t/overwrite-attention-heads-in-bartforconditionalgeneration/7059
Hi, I am looking to overwrite the attention heads in the Bart model, following the process below: Run the model on an article with the keyword parameter: “Covid” Save the encoder/decoder heads for this article Run the model on another article, also with the keyword parameter: “Covid” As a proxy for making this model ‘topic-aware’, I will take the “Covid” attention heads saved in step 2 and insert them into the model run in step 3 The model will generate a new ‘topic-aware’ summary for the article, as the attention heads are ‘trained’ on the topic key-word ‘covid’ Note: The above is extremely preliminary; we will be looking to train the attention heads & model on more data for each key-word in the future. article = "Covid-19 is a global pandemic" model_name = "facebook/bart-large-cnn" config = BartConfig.from_pretrained(model_name, output_hidden_states=True, output_attentions=True) tokenizer = AutoTokenizer.from_pretrained(model_name) inputs = tokenizer(article, padding=True, truncation=True, return_tensors="pt") model = AutoModel.from_pretrained(model_name) model.config.output_attentions = True outputs = model(**inputs) summary = tokenizer.decode(outputs) covid_encoder_attention = outputs.encoder_attentions covid_decoder_attention = outputs.decoder_attentions # Repeat the model run with a new article and insert covid_encoder_attention and/or covid_decoder_attention for the new run
I’m curious to know how this is possible, also. I’ve found no methods in transformers to allow this.
0
huggingface
Beginners
Train loss is decreasing, but accuracy remain the same
https://discuss.huggingface.co/t/train-loss-is-decreasing-but-accuracy-remain-the-same/3244
this is the train and development cell for multi-label classification task using Roberta (BERT). the first part is training and second part is development (validation). train_dataloader is my train dataset and dev_dataloader is development dataset. my question is: why train loss is decreasing step by step, but accuracy doesn’t increase so much? practically, accuracy is increasing until iterate 4, but train loss is decreasing until the last epoch (iterate). is this ok or there should be a problem? train_loss_set = [] iterate = 4 for _ in trange(iterate, desc="Iterate"): model.train() train_loss = 0 nu_train_examples, nu_train_steps = 0, 0 for step, batch in enumerate(train_dataloader): batch = tuple(t.to(device) for t in batch) batch_input_ids, batch_input_mask, batch_labels = batch optimizer.zero_grad() output = model(batch_input_ids, attention_mask=batch_input_mask) logits = output[0] loss_function = BCEWithLogitsLoss() loss = loss_function(logits.view(-1,num_labels),batch_labels.type_as(logits).view(-1,num_labels)) train_loss_set.append(loss.item()) loss.backward() optimizer.step() train_loss += loss.item() nu_train_examples += batch_input_ids.size(0) nu_train_steps += 1 print("Train loss: {}".format(train_loss/nu_train_steps)) ############################################################################### model.eval() logits_pred,true_labels,pred_labels,tokenized_texts = [],[],[],[] # Predict for i, batch in enumerate(dev_dataloader): batch = tuple(t.to(device) for t in batch) batch_input_ids, batch_input_mask, batch_labels = batch with torch.no_grad(): out = model(batch_input_ids, attention_mask=batch_input_mask) batch_logit_pred = out[0] pred_label = torch.sigmoid(batch_logit_pred) batch_logit_pred = batch_logit_pred.detach().cpu().numpy() pred_label = pred_label.to('cpu').numpy() batch_labels = batch_labels.to('cpu').numpy() tokenized_texts.append(batch_input_ids) logits_pred.append(batch_logit_pred) true_labels.append(batch_labels) pred_labels.append(pred_label) pred_labels = [item for sublist in pred_labels for item in sublist] true_labels = [item for sublist in true_labels for item in sublist] threshold = 0.4 pred_bools = [pl>threshold for pl in pred_labels] true_bools = [tl==1 for tl in true_labels] print("Accuracy is: ", jaccard_score(true_bools,pred_bools,average='samples')) torch.save(model.state_dict(), 'bert_model') and the outputs: Iterate: 0%| | 0/10 [00:00<?, ?it/s] Train loss: 0.4024542534684801 /usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Jaccard is ill-defined and being set to 0.0 in samples with no true or predicted labels. Use `zero_division` parameter to control this behavior. 
_warn_prf(average, modifier, msg_start, len(result)) Accuracy is: 0.5806403013182674 Iterate: 10%|█ | 1/10 [03:21<30:14, 201.64s/it] Train loss: 0.2972540049911379 Accuracy is: 0.6091337099811676 Iterate: 20%|██ | 2/10 [06:49<27:07, 203.49s/it] Train loss: 0.26178574864264137 Accuracy is: 0.608361581920904 Iterate: 30%|███ | 3/10 [10:17<23:53, 204.78s/it] Train loss: 0.23612180122962365 Accuracy is: 0.6096717783158462 Iterate: 40%|████ | 4/10 [13:44<20:33, 205.66s/it] Train loss: 0.21416303515434265 Accuracy is: 0.6046892655367231 Iterate: 50%|█████ | 5/10 [17:12<17:11, 206.27s/it] Train loss: 0.1929110718982203 Accuracy is: 0.6030885122410546 Iterate: 60%|██████ | 6/10 [20:40<13:46, 206.74s/it] Train loss: 0.17280191068465894 Accuracy is: 0.6003766478342749 Iterate: 70%|███████ | 7/10 [24:08<10:21, 207.04s/it] Train loss: 0.1517329115446631 Accuracy is: 0.5864783427495291 Iterate: 80%|████████ | 8/10 [27:35<06:54, 207.23s/it] Train loss: 0.12957811209705325 Accuracy is: 0.5818832391713747 Iterate: 90%|█████████ | 9/10 [31:03<03:27, 207.39s/it] Train loss: 0.11256680189521162 Accuracy is: 0.5796045197740114 Iterate: 100%|██████████| 10/10 [34:31<00:00, 207.14s/it]
This means you are overfitting (training loss diminished but no improvement in validation loss/accuracy) so you should try using any technique that helps reduce overfitting: weight decay, more dropout, data augmentation (if applicable)…
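For example, in the manual loop above, weight decay and extra dropout could be added roughly like this (the values and the "roberta-base" checkpoint are illustrative; num_labels is the variable already defined in your script):

from torch.optim import AdamW
from transformers import AutoConfig, AutoModelForSequenceClassification

config = AutoConfig.from_pretrained("roberta-base", num_labels=num_labels,
                                    hidden_dropout_prob=0.2,            # more dropout inside the encoder
                                    attention_probs_dropout_prob=0.2)
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", config=config)
optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)       # weight decay as regularization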
0
huggingface
Beginners
[HELP] RuntimeError: CUDA error: device-side assert triggered
https://discuss.huggingface.co/t/help-runtimeerror-cuda-error-device-side-assert-triggered/9418
Hello, I am following this tutorial on how to train my language model from scratch: notebooks/language_modeling_from_scratch.ipynb at master · huggingface/notebooks · GitHub 3 However, when I pass everything to my trainer: from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15) trainer = Trainer( model=model, args=training_args, train_dataset=lm_datasets["train"], eval_dataset=lm_datasets["validation"], data_collator=data_collator, ) trainer.train() I get this error: RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
My advice is always: if you have a CUDA error, run your code on CPU and check if you’re getting a more helpful error message.
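A quick way to force a CPU run for debugging (a sketch; set the environment variable at the very top of the script or notebook, before anything touches CUDA):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""   # hide all GPUs so PyTorch falls back to CPU

# alternatively, with the Trainer API:
# training_args = TrainingArguments(..., no_cuda=True)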
0
huggingface
Beginners
Fine-tune mt5 on Question Answering with run_qa
https://discuss.huggingface.co/t/fine-tune-mt5-on-question-answering-with-run-qa/3818
Hello everyone, I wanted to fine tune a mt5 model on QA with run_qa.py but it doesn’t work. I’m a beginner so I have no idea what to do to solve the problem. Does anyone know how to make it work?
Are you sure MT5 can do Question Answering? Would BERT be better? I can’t help, but I suggest you give a few more details: what happens when it “doesn’t work”?
0
huggingface
Beginners
Reduce number of cores
https://discuss.huggingface.co/t/reduce-number-of-cores/9449
Hello, when I fine-tune my BERT model on our company's server, I take up nearly all of our capacity. Is there any way to reduce the number of cores used?
What does "number of used cores" mean, and which DL library are you using, PyTorch or TensorFlow? I assume you fine-tune with PyTorch and that "cores" means GPU devices. Generally, there are two ways to limit GPU usage: (1) set the visible-device environment variable, e.g. export CUDA_VISIBLE_DEVICES=0,1,2 (listing only the GPU ids you want to use); or (2) directly set the GPU device with the PyTorch library: import torch; torch.cuda.set_device(1). TensorFlow provides similar configuration options; you can check the official documentation for reference.
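If "cores" actually refers to CPU cores rather than GPUs, a sketch of limiting PyTorch's CPU thread usage (the value 4 is just an example):

import os
os.environ["OMP_NUM_THREADS"] = "4"   # limit OpenMP threads; set this before heavy imports if possible

import torch
torch.set_num_threads(4)              # limit the intra-op parallelism threads used by PyTorch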
0
huggingface
Beginners
Transformers with additional external data
https://discuss.huggingface.co/t/transformers-with-additional-external-data/8041
Hello. I am working with a pretrained BERT model for a regression task. I think the dataset has a useful non-text feature that could help improve the result. Is it possible to modify my model so that the text field is processed by BERT while taking this useful feature into account? class MyModel(torch.nn.Module): def __init__(self): super().__init__() config = AutoConfig.from_pretrained(model_name) config.update({"output_hidden_states":True, "hidden_dropout_prob": 0.0, "layer_norm_eps": 1e-7}) self.roberta = AutoModel.from_pretrained(model_name, config=config) self.attention = torch.nn.Sequential( torch.nn.Linear(768, 512), torch.nn.Tanh(), torch.nn.Linear(512, 1), torch.nn.Softmax(dim=1) ) self.regressor = torch.nn.Sequential( torch.nn.Linear(768, 1) ) def forward(self, input_ids, attention_mask): roberta_output = self.roberta(input_ids=input_ids, attention_mask=attention_mask) last_layer_hidden_states = roberta_output.hidden_states[-1] weights = self.attention(last_layer_hidden_states) context_vector = torch.sum(weights * last_layer_hidden_states, dim=1) return self.regressor(context_vector) df is a dataset with a ‘text’ feature processed by BERT and a useful float feature ‘stat’. I want to use the ‘stat’ feature to improve my predictions.
Hi, Did you finalize combining context vector + manual features?
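Just as an illustration (not the accepted solution from this thread): one common way to combine the pooled text representation with an extra numeric feature is to concatenate them right before the regression head. A sketch reusing the structure of MyModel above, where stat is the extra float feature passed to forward and "roberta-base" is a placeholder checkpoint:

import torch
from transformers import AutoConfig, AutoModel

model_name = "roberta-base"   # placeholder; use the same checkpoint as in MyModel

class MyModelWithStat(torch.nn.Module):
    def __init__(self):
        super().__init__()
        config = AutoConfig.from_pretrained(model_name)
        config.update({"output_hidden_states": True})
        self.roberta = AutoModel.from_pretrained(model_name, config=config)
        self.attention = torch.nn.Sequential(
            torch.nn.Linear(768, 512),
            torch.nn.Tanh(),
            torch.nn.Linear(512, 1),
            torch.nn.Softmax(dim=1),
        )
        self.regressor = torch.nn.Linear(768 + 1, 1)  # +1 input for the 'stat' feature

    def forward(self, input_ids, attention_mask, stat):
        out = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        last_hidden = out.hidden_states[-1]
        weights = self.attention(last_hidden)
        context_vector = torch.sum(weights * last_hidden, dim=1)
        # concatenate the pooled text representation with the numeric feature
        combined = torch.cat([context_vector, stat.unsqueeze(-1).float()], dim=-1)
        return self.regressor(combined)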
0
huggingface
Beginners
[HELP] RuntimeError: CUDA error - when training my model?
https://discuss.huggingface.co/t/help-runtimeerror-cuda-error-when-training-my-model/9231
Hello everyone, I am encountering an error when training my language model from scratch, having trained a tokenizer beforehand. I have just trained my tokenizer from scratch on a WordPiece model like BERT, following this notebook: notebooks/tokenizer_training.ipynb at master · huggingface/notebooks · GitHub I then saved my model using this code: new_tokenizer.save_pretrained("/content/drive/MyDrive/my-new-tokenizer") Thus, the folder structure of my-new-tokenizer looks something like this: vocab.txt tokenizer.json tokenizer_config.json special_tokens_map.json After training my tokenizer from scratch, I followed the notebook to train a language model from scratch - this notebook: notebooks/language_modeling_from_scratch.ipynb at master · huggingface/notebooks · GitHub Then executed and used the following code from that notebook: from datasets import load_dataset You can replace the dataset above with any dataset hosted on [the hub](https://huggingface.co/datasets) or use your own files. Just uncomment the following cell and replace the paths with values that will lead to your files: datasets = load_dataset('csv', data_files={'train': ['/content/drive/MyDrive/data.csv'], 'validation': '/content/drive/MyDrive/data.csv'}) You can also load datasets from a csv or a JSON file, see the [full documentation](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) for more information. To access an actual element, you need to select a split first, then give an index: datasets["train"][10] To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset. from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) for column, typ in dataset.features.items(): if isinstance(typ, ClassLabel): df[column] = df[column].transform(lambda i: typ.names[i]) display(HTML(df.to_html())) show_random_elements(datasets["train"]) As we can see, some of the texts are a full paragraph of a Wikipedia article while others are just titles or empty lines. ## Causal Language modeling For causal language modeling (CLM) we are going to take all the texts in our dataset and concatenate them after they are tokenized. Then we will split them in examples of a certain sequence length. This way the model will receive chunks of contiguous text that may look like: part of text 1 or end of text 1 [BOS_TOKEN] beginning of text 2 depending on whether they span over several of the original texts in the dataset or not. The labels will be the same as the inputs, shifted to the left. We will use the [`gpt2`](https://huggingface.co/gpt2) architecture for this example. You can pick any of the checkpoints listed [here](https://huggingface.co/models?filter=causal-lm) instead. For the tokenizer, you can replace the checkpoint by the one you trained yourself. model_checkpoint = "gpt2" tokenizer_checkpoint = "/content/drive/MyDrive/Train Tokenizer and LM /Tokenizer/my-new-tokenizer" To tokenize all our texts with the same vocabulary that was used when training the model, we have to download a pretrained tokenizer. 
This is all done by the `AutoTokenizer` class: from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint) We can now call the tokenizer on all our texts. This is very simple, using the [`map`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) method from the Datasets library. First we define a function that call the tokenizer on our texts: def tokenize_function(examples): return tokenizer(examples["Tweets"]) Then we apply it to all the splits in our `datasets` object, using `batched=True` and 4 processes to speed up the preprocessing. We won't need the `text` column afterward, so we discard it. tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["Tweets"]) If we now look at an element of our datasets, we will see the text have been replaced by the `input_ids` the model will need: tokenized_datasets["train"][1] Now for the harder part: we need to concatenate all our texts together then split the result in small chunks of a certain `block_size`. To do this, we will use the `map` method again, with the option `batched=True`. This option actually lets us change the number of examples in the datasets by returning a different number of examples than we got. This way, we can create our new samples from a batch of examples. First, we grab the maximum length our model was pretrained with. This might be a big too big to fit in your GPU RAM, so here we take a bit less at just 128. # block_size = tokenizer.model_max_length block_size = 128 Then we write the preprocessing function that will group our texts: def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. total_length = (total_length // block_size) * block_size # Split by chunks of max_len. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result First note that we duplicate the inputs for our labels. This is because the model of the 🤗 Transformers library apply the shifting to the right, so we don't need to do it manually. Also note that by default, the `map` method will send a batch of 1,000 examples to be treated by the preprocessing function. So here, we will drop the remainder to make the concatenated tokenized texts a multiple of `block_size` every 1,000 examples. You can adjust this behavior by passing a higher batch size (which will also be processed slower). You can also speed-up the preprocessing by using multiprocessing: lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=1000, num_proc=4, ) And we can check our datasets have changed: now the samples contain chunks of `block_size` contiguous tokens, potentially spanning over several of our original texts. tokenizer.decode(lm_datasets["train"][1]["input_ids"]) Now that the data has been cleaned, we're ready to instantiate our `Trainer`. 
First we create the model using the same config as our checkpoint, but initialized with random weights: from transformers import AutoConfig, AutoModelForCausalLM config = AutoConfig.from_pretrained(model_checkpoint) model = AutoModelForCausalLM.from_config(config) And we will needsome `TrainingArguments`: from transformers import Trainer, TrainingArguments training_args = TrainingArguments( "test-clm", evaluation_strategy = "epoch", learning_rate=2e-5, weight_decay=0.01, ) The last two arguments are to setup everything so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove the two of them if you didn't follow the installation steps at the top of the notebook, otherwise you can change the value of `push_to_hub_model_id` to something you would prefer. We pass along all of those to the `Trainer` class: trainer = Trainer( model=model, args=training_args, train_dataset=lm_datasets["train"], eval_dataset=lm_datasets["validation"], ) And we can train our model: trainer.train() I then get this error: RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. I THINK THE ERROR HAS SOMETHING TO DO WITH THE FOLLOWING CODE: model_checkpoint = "gpt2" tokenizer_checkpoint = "/content/drive/MyDrive/my-new-tokenizer" I TRAINED MY TOKENIZER ON A WORD PIECE MODEL LIKE BERT, SO SHOULD THE MODEL CHECKPOINT BE DIFFERENT? THANKS!
Hi @anon58275033 I think that could indeed be the issue. Since the tokenizer has a different vocabulary size, it is likely incompatible with the config you are loading, which contains the vocab size of the original model. You can fix it with: config = AutoConfig.from_pretrained(model_checkpoint, vocab_size=len(tokenizer)) I hope this helps! PS: these CUDA errors can be unreadable, and it can help to run the code on the CPU for debugging instead (for example by passing no_cuda=True to your TrainingArguments).
0
huggingface
Beginners
Multi-class Classification Basics
https://discuss.huggingface.co/t/multi-class-classification-basics/9411
Hello, I have a really basic question on the whole BERT / fine-tune-BERT-for-classification topic: I have a dataset of customer reviews with 7 different labels such as “Customer Service”, “Tariff”, “Provider related”, etc. My dataset contains 12700 unlabelled customer reviews, and I labelled 1100 reviews for my classification task. Now to my questions: Could it be enough to take an existing BERT model and fine-tune it with AutoModelForSequenceClassification on my specific task? Are the 1100 labelled reviews (around 150 per each class) enough to train that? What are other approaches? I am completely new to NLP and have been working on this task for 3 weeks. I just learned that more traditional approaches are often outperformed by transformers… Thanks in advance!
marlon89: Could it be enough to take an existing BERT model and fine-tune it with AutoModelForSequenceClassification on my specific task? Yes. You can use “bert-base-uncased” for example. marlon89: Are the 1100 labelled reviews (around 150 per each class) enough to train that? Yes, that’s the power of transfer learning: we only need a few (like a hundred) examples per class. marlon89: What are other approaches? Actually, that’s the most typical approach: fine-tune an xxxForSequenceClassification model using a pre-trained base. You can check out the official tutorial here: notebooks/text_classification.ipynb at master · huggingface/notebooks · GitHub 5
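A minimal fine-tuning sketch along those lines (the checkpoint name, file names, column names and hyperparameters are placeholders, not from the tutorial):

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"          # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=7)

# assumes CSV files with "text" and "label" (0-6) columns
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], padding=True, truncation=True)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments("review-classifier", evaluation_strategy="epoch",
                         num_train_epochs=3, per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"], eval_dataset=encoded["test"])
trainer.train()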
0
huggingface
Beginners
Most efficient multi-label classifier?
https://discuss.huggingface.co/t/most-efficient-multi-label-classifier/9296
Background I’m trying to train a model in Tensorflow to classify text according to a fixed set of 5 labels. For example, let’s say I feed my model the following text: “my advice is that you go ahead with your plans to learn Python, because its syntax is easy for beginners. It’s also great for snake lovers like me!” After sniffing the text, the model would, ideally, report back how much the text matches my pre-defined labels: Label Prediction -------------------- ---------- programming_advice 0.99 advice_for_beginners 0.91 cooking_advice 0.11 health_advice 0.10 not_advice 0.01 My question What is the most efficient way to build such a classifier? I’ve seen several options to do this, but I’m not sure which one would be best: Fine-tune five different binary classifiers, since there are five labels… but this would take forever to train, so I assume there must be a better way. Make a model with a transformer only, and train it. Seen in this forum question 7 Make a model with a transformer plus my own Dense layers, and train it. Seen in this sample notebook 3, which was linked in the transformers documentation. Make a model with a transformer plus my own Dense layers—but freeze the transformer as-is, and only train the Dense layers. Freezing is a common practice with pre-trained computer vision models; I don’t know whether it’s also good practice for NLP too. I would be grateful for any suggestions on which of 1-4 works best. I’m still rather new around here, but the Huggingface community is extremely welcoming and helpful, and I appreciate being here! A big thanks for anybody who can help give me some pointers.
I am facing the same problem and as none replied yet I wanted to ask if you got any updates/new thoughts on this? Cheers
0
huggingface
Beginners
Multi-Label Product (Query) Classification
https://discuss.huggingface.co/t/multi-label-product-query-classification/6283
Hi all, I am looking for a transformers model for MULTI-LABEL query (product) classification that is pre-trained on product title or query data. Basically, the user will search for a product (query), we have to classify it into a set of classes/categories. Please let me know if you know of any such model. I would really appreciate it if I can find it asap. Thanks, Kalyan.
You got an answer to this? I am facing the same problem right now
0
huggingface
Beginners
The datasets.map function does not load cached dataset
https://discuss.huggingface.co/t/the-datasets-map-function-does-not-load-cached-dataset/8905
I am using the run_mlm.py script provided in the transformers repository to pretrain BERT. The datasets library is version 1.8.0. Since the Wikipedia dataset I use is large, I hoped the processing would be done once and reused later. However, I find it always recomputes instead of loading from disk. I don't think I changed any parameters of the map function. I noticed the description of the new_fingerprint parameter might be relevant. The original code of run_mlm.py does not specify it; should I give it a value?
Hi ! new_fingerprint is computed automatically by taking into account: the previous dataset fingerprint a hash of your map function a hash of the parameters passed to map So as long as you don’t change your code and you keep the same parameters, the fingerprint will stay the same and the dataset will be reloaded from the disk. Can you make sure you didn’t change your function or the parameters passed to map ? Note that preprocessing_num_workers is part of the parameters passed to map and you must make sure it doesn’t change either.
0
huggingface
Beginners
Batch sizes / 2 GPUs + Windows 10 = 1 GPU?
https://discuss.huggingface.co/t/batch-sizes-2-gpus-windows-10-1-gpu/9349
Hope you can help. Basically I just need some guidance/reassurance around how batch sizes are calculated when 2 GPUs are installed but I think (!) on Windows only 1 GPU can be/is being used. Scenario: Remoting into PC with 2 NVIDIA GPUs running Windows 10 (someone else’s machine so no option of installing Linux) I have the GPU enabled version of PyTorch installed Running run_summarization.py from the transformers repo I set the paramete per_device_train_batch_size = 4 I do a test run with 10,000 training samples Result: “_n_gpu=2” printed at start of run, so script has detected 2 GPUs (I have confirmed this directly in another script using torch.cuda.device_count() ) I get the warning: “C:\Users\BrianS.virtualenvs\summarization-DUOCBs9B\lib\site-packages\torch\cuda\nccl.py:15: UserWarning: PyTorch is not compiled with NCCL support” (I believe the lack of NCCL support on Windows is the reason why multiple GPU training on Windows is not possible?) I get 1,250 steps per epoch Questions: I assuming that PyTorch defaults to using just 1 GPU instead of the 2 available, hence the warning? (it certainly runs a lot, lot quicker than just on CPU) Given 2 GPUs installed, batch per device 4 and 1,250 seems to suggest an effective batch size of 8. So is it being automatically adjusted to 2 x 4 = 8 given only 1 GPU being used but 2 GPUs present (just checking that a batch of 4 is not being skipped for the other GPU detected but not being used?) Many thanks!
I saw this post involving @BramVanroy about setting CUDA_VISIBLE_DEVICES=0 to use just one of the 2 GPUs installed (I assume named 0 and 1). But is there any way to verify that only 1 GPU is being used when running the script? And I suppose even if so, doesn’t necessarily clarify how per_device_train_batch_size = 4 is being used when 2 GPUs are present, but I think (!) only one GPU being used.
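One quick way to check which devices PyTorch can actually see (a sketch; CUDA_VISIBLE_DEVICES must be set before torch initializes CUDA, e.g. at the very top of the script):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # expose only the first GPU

import torch
print(torch.cuda.device_count())           # prints 1 if only one device is visible
print(torch.cuda.get_device_name(0))       # name of the visible device

# watching nvidia-smi in another terminal also shows which physical GPU has memory/utilization in use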
0
huggingface
Beginners
T5 for conditional generation: getting started
https://discuss.huggingface.co/t/t5-for-conditional-generation-getting-started/1284
Hi, I have as specific task for which I’d like to use T5. Inputs look like some words <SPECIAL_TOKEN1> some other words <SPECIAL_TOKEN2> Training Outputs are a certain combination of the (some words) and (some other words). The goal is to have T5 learn the composition function that takes the inputs to the outputs, where the output should hopefully be good language. I was hoping to find an example script that I could modify. In particular I need a little help understanding how to do these parts: When generating the input files (i.e. the mapping from input_str to output_str) what is the best format (e.g. a tsv for input and a tsv for output with a 1:1 mapping by line)? Add special tokens to the vocab. Assuming that my inputs have special tokens and in the input files, then to make the model recognize them, I think I should use something like transformers.T5Tokenizer(additional_special_tokens=[, ]). Is this correct? Additional input processing: I think I need to somehow prepend a new “task tag” to all the input-output pairs. Where would I specify this new task name? : Do I need to register this task somewhere so that it can actually be executed? Some of the examples I saw seem to suggest that I do. And do I need to choose a loss function for my new task? (If I don’t, will one be selected automatically?) Any tips for the loss function? I care about the outputs being syntactical /grammatical, but I would also like the model to learn the relative positional relations of the inputs. For example, if I had something like a b c , the model might learn that abc, bac, cab, or cba are valid (i.e. in this case “a” and “b” must always be adjacent), and would choose the sequence that is most probable under the language model. Thanks!
You can choose whatever format works well for you; the only thing to note is that your dataset or collator should return input_ids, attention_mask and labels. To add new tokens: tokenizer.add_tokens(list_of_new_tokens) # resize the embeddings model.resize_token_embeddings(len(tokenizer)) Using a task prefix is optional. No, you won’t need to register the task; the original T5 repo requires that, but it’s not required here. You might find these notebooks useful: Fine-tune T5 for Classification and Multiple Choice, Fine-tune T5 for Summarization, Train T5 on TPU. Note: these notebooks manually add the eos token (</s>), but that is not needed with the current version; the tokenizer will handle it. Here’s a great thread on tips and tricks for T5 fine-tuning: T5 Finetuning Tips (Models category). Starting this for results, sharing + tips and tricks, and results. This is my first attempt at this kind of thread so it may completely fail. Some things I’ve found: apparently, if you copy AdaFactor from fairseq, as recommended by the T5 authors, you can fit batch size = 2 for t5-large LM fine-tuning; fp16 rarely works; for most tasks, you need to manually add </s> to the end of your sequence. Things I’ve read: the task-specific prefix doesn’t matter much. cc @mrm8488 @valhalla @patrickvonplaten who…
0
huggingface
Beginners
Predicting with Token Classifier on data with no gold labels
https://discuss.huggingface.co/t/predicting-with-token-classifier-on-data-with-no-gold-labels/9373
Hello everyone, I am implementing a token classification model, following the example in the github repo (transformers/run_ner.py at master · huggingface/transformers · GitHub 1). I have adapted it for my particular task, and I can train and test a model on data for which I have gold labels. Now I want to use the same model to predict labels for data without gold labels. In the sample code, the tokenize_and_align_labels function gives a label of -100 to special tokens and also to tokens within a word that are not the first (according to some parameter). def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer( examples[text_column_name], padding=padding, truncation=True, max_length=data_args.max_seq_length, # We use this argument because the texts in our dataset are lists of words (with a label for each word). is_split_into_words=True, ) labels = [] for i, label in enumerate(examples[label_column_name]): word_ids = tokenized_inputs.word_ids(batch_index=i) previous_word_idx = None label_ids = [] for word_idx in word_ids: # Special tokens have a word id that is None. We set the label to -100 so they are automatically # ignored in the loss function. if word_idx is None: label_ids.append(-100) # We set the label for the first token of each word. elif word_idx != previous_word_idx: label_ids.append(label_to_id[label[word_idx]]) # For the other tokens in a word, we set the label to either the current label or -100, depending on # the label_all_tokens flag. else: label_ids.append(label_to_id[label[word_idx]] if data_args.label_all_tokens else -100) previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs["labels"] = labels return tokenized_inputs Now, this works well if the data that’s being preprocessed has labels. But what should be done when these do not exist? In order to get predictions, we could simply not have a labels key in tokenized_inputs. However, the info in tokenized_inputs["labels"] (i.e. which tokens have a -100 label) is later used to retrieve the predicted label per word, ignoring the predictions for the special tokens and other tokens within a word but the first (which is correct). # Remove ignored index (special tokens) true_predictions = [ [label_list[p] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] What do you think would be the best way to handle this case? Maybe in tokenize_and_align_labels, the “true” tokens for which we want labels could have another value in tokenized_inputs["labels"]? Or maybe the post-processing to remove predictions on special tokens should not rely on the labels in the first place? Any help you could provide would be welcome. Thanks!
Hi, If you’re using a fast tokenizer (such as BertTokenizerFast), you can add use the offsets to know if a token is a special token/the first wordpiece of a word or not. Small example: from transformers import BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") text = "hello my name is niels" encoding = tokenizer(text, return_offsets_mapping=True) for id, offset in zip(encoding.input_ids, encoding.offset_mapping): print(tokenizer.decode([id]), offset) This returns: [CLS] (0, 0) hello (0, 5) my (6, 8) name (9, 13) is (14, 16) ni (17, 19) ##els (19, 22) [SEP] (0, 0) As you can see, the offsets for special tokens are (0, 0), and if offset[0] of a particular token is equal to offset[1] of the previous token, then we know that it’s a subword token that’s not the first one of a word. You can write this in a (rather long) list comprehension, to filter the predictions: true_indices = [1] + [idx for idx, offset in enumerate(encoding.offset_mapping) if offset != (0, 0) and offset[0] != encoding.offset_mapping[idx-1][1]] true_predictions = predictions.numpy()[true_indices]
0
huggingface
Beginners
How to fine tune LUKE for NER?
https://discuss.huggingface.co/t/how-to-fine-tune-luke-for-ner/6122
Hello. I am wondering if I can fine-tune LUKE with my own dataset of NER. I am aware that LUKE has a unique model so the code in the example notebook is off the table. I am aware that Studio Ousia has fine-tuning code with GitHub - studio-ousia/luke: LUKE -- Language Understanding with Knowledge-based Embeddings 18, but if I do that route, I am wondering if I can convert the new fine-tuned model to be transformers-compatible.
Hi Kerenza Were you able to make any progress on fine-tuning LukeForEntitySpanClassification for custom labels? Actually, I am also looking to fine-tune Luke for a NER task with multi-token entities. Any help is much appreciated. Thanks
0
huggingface
Beginners
How can I sample with BART for conditional generation?
https://discuss.huggingface.co/t/how-can-i-sample-with-bart-for-conditional-generation/8939
Hi there, I’m currently working with BART for conditional generation and would like to generate some good old-fashioned sampled outputs (i.e. nothing fancy and no beam search) for experimentation. According to the very nice blog post 4 by @patrickvonplaten, this should be possible by providing do_sample=True and top_k=0 to the .generate() method. However, it seems that the outputs are actually beam_search outputs and not true samples. E.g. output object is BeamSampleEncoderDecoderOutput and not SampleEncoderDecoderOutput, as I would have expected. Here’s a minimal example for what I’m trying to do. Note, for portability I’m just using the pre-trained model in this example but in my original code I’m loading a fine-tuned model. import sys import torch from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig # expects path to local HuggingFace pre-trained model model_path = sys.argv[1] num_return_sequences = 10 max_length = 256 # load from pre-trained model = BartForConditionalGeneration.from_pretrained(model_path) config = BartConfig.from_pretrained(model_path) tokenizer = BartTokenizer.from_pretrained(model_path, use_fast=True) if torch.cuda.is_available(): model.cuda() model.eval() # list of sentences for decoding sents = [ 'In its most basic form, sampling means randomly picking the next word according to its conditional probability distribution.', 'Taking the example from above, the following graphic visualizes language generation when sampling.', 'In transformers, we set do_sample=True and deactivate Top-K sampling (more on this later) via top_k=0.'] # decode each input sentence - no batching! for sent in sents: inputs = tokenizer(sent, return_tensors="pt", padding=True, truncation=True, max_length=256) # ensure input tensors are on the same device as model inputs.to(model.device) output = model.generate(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], use_cache=True, max_length=max_length, pad_token_id=tokenizer.pad_token_id, decoder_start_token_id=tokenizer.pad_token_id, do_sample=True, top_k=0, num_return_sequences=num_return_sequences, return_dict_in_generate=True) print(type(output)) batch_hyp_strs = tokenizer.batch_decode(output.sequences.tolist(), skip_special_tokens=True) print('src:\t', sent) for hyp in batch_hyp_strs: print(f'\t{hyp}') print() Any clarification on how to get true random samples from pre-trained/fine-tuned BART for conditional generation would be much appreciated! Environment details torch==1.8.0 transformers==4.9.0 (installed from source)
Answering my own question here in case anyone ever runs into the same issue. The argument num_beams used by .generate() defaults to the value specified in the model’s config file (e.g. config.json). If this value is > 1, decoding will be performed with beam search (either regular or sampled). So, the simple fix is to explicitly set num_beams=1 in the call to .generate(), e.g.: output = model.generate(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], use_cache=True, max_length=max_length, pad_token_id=tokenizer.pad_token_id, decoder_start_token_id=tokenizer.pad_token_id, do_sample=True, top_k=0, num_beams=1, num_return_sequences=num_return_sequences, return_dict_in_generate=True) print(type(outputs)) # ==> transformers.generation_utils.SampleEncoderDecoderOutput Lesson of the day: keep calm and read the docstrings carefully! transformers/generation_utils.py at 91ff480e2693f36b11aaebc4e9cc79e4e3c049da · huggingface/transformers · GitHub 2
0
huggingface
Beginners
Cannot encode/tokenize my Dataset Dictionary
https://discuss.huggingface.co/t/cannot-encode-tokenize-my-dataset-dictionary/9327
Hello everyone, I am trying to finetune my Sentiment Analysis Model. Therefore, I have splitted my pandas Dataframe (column with reviews, column with sentiment scores) into a train and test Dataframe and transformed everything into a Dataset Dictionary: #Creating Dataset Objects dataset_train = datasets.Dataset.from_pandas(training_data) dataset_test = datasets.Dataset.from_pandas(testing_data) #Get rid of weird columns dataset_train = dataset_train.remove_columns('__index_level_0__') dataset_test = dataset_test.remove_columns('__index_level_0__') #Create Dataset Dictionary data_dict = datasets.DatasetDict({"train":dataset_train,"test":dataset_test}) I am transforming everything to a dataset dictionary cause I am following more or less a code and transfer it to my problem. Anyways, I am defining the function to tokenize: from transformers import AutoModelForSequenceClassification from transformers import Trainer, TrainingArguments from sklearn.metrics import accuracy_score, f1_score num_labels = 5 model_name = "nlptown/bert-base-multilingual-uncased-sentiment" batch_size = 16 model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels) def tokenize(batch): return tokenizer(batch, padding=True, truncation=True) and call the function with: data_encoded = data_dict.map(tokenize, batched=True, batch_size=None) I am getting this error after all this: ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples). What am I missing? Sorry I am completely new to the whole Huggingface infrastructure…
Found the error on my own as I had to specify the column which had to be tokenized. The correct Tokenizer function would be: def tokenize(batch): return tokenizer(batch["text"], padding=True, truncation=True) instead of def tokenize(batch): return tokenizer(batch, padding=True, truncation=True)
0
huggingface
Beginners
From Pandas Dataframe to Huggingface Dataset
https://discuss.huggingface.co/t/from-pandas-dataframe-to-huggingface-dataset/9322
Hello everyone, I am following a tutorial on how to fine-tune a pretrained sentiment analysis classifier, and the whole fine-tuning part is based on a HuggingFace Dataset. Is there a way to transform a pandas DataFrame into a HuggingFace Dataset? It would help me a lot with my data preprocessing…
You can have a look here: link. A short example is below.
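A minimal sketch using Dataset.from_pandas (the DataFrame contents and column names are placeholders):

import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"text": ["great product", "terrible support"], "label": [1, 0]})
dataset = Dataset.from_pandas(df)   # pandas DataFrame -> datasets.Dataset
print(dataset)

# convert back if needed
df_again = dataset.to_pandas()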
0
huggingface
Beginners
Saved models do not work after being loaded
https://discuss.huggingface.co/t/saved-models-do-not-work-after-being-loaded/9282
Hi. I fine-tuned 4 Wav2Vec2 models with different settings, following this guide to the letter. After I finished training, I saved the models using model_name.save_pretrained(PATH). After saving, I immediately loaded the models to see if they work properly, and they did. Today, I wanted to check something and I loaded one of the models again, and I saw that it predicts nothing correctly. Every prediction is wrong. The characters it predicts are indeed from my vocabulary, but they are just… random. I have no idea what is going on, and I do not know what else to post here to give you more information. I followed the guide I linked above exactly; the only thing I changed is that I saved my models. The loaded models worked yesterday! Today they do not. The models are loaded as model = Wav2Vec2ForCTC.from_pretrained(PATH). Inside the path, there are the following files: (1) config.json, (2) pytorch_model.bin and (3) training_args.bin. Note that the last one was saved manually by me (the training args). Does anyone have any idea what might be happening? Any help is appreciated, as this is important to me… Thanks in advance.
I think I've figured it out. I will post it here so that anyone who comes here with the same issue can perhaps be helped by my answer. It turns out it was my mistake: even though I was loading an old model, I was using a new processor (tokenizer/feature extractor). The problem is that while the model worked as it should, the decoding of the transcriptions was simply wrong, because the labels it outputs correspond to the old vocabulary. When I tried to decode the outputs with a new processor (which was built from a different vocab), the mappings were simply wrong.
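To avoid this, the processor can be saved and reloaded together with the model; a sketch, assuming model and processor are the objects from training and PATH is the output directory:

from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# when saving after training
model.save_pretrained(PATH)
processor.save_pretrained(PATH)   # writes the vocab and tokenizer/feature-extractor configs

# later, load both from the same folder so the vocabulary and model stay in sync
model = Wav2Vec2ForCTC.from_pretrained(PATH)
processor = Wav2Vec2Processor.from_pretrained(PATH)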
0
huggingface
Beginners
How to get [CLS] embeddings from BertForTokenClassification model
https://discuss.huggingface.co/t/how-to-get-cls-embeddings-from-bertfortokenclassification-model/9276
Sorry for the issue, I don’t really write any code but only use the example code as a tool. I trained with my own NER dataset with the transformers example code. I want to get sentence embedding from the model I trained with the token classification example code here (this is the older version of example code by the way.) I want to get the sentence embedding from the trained model, which I think the [CLS] token embedding output should be one way. This github issue answer answers exactly how to get an embedding from a BertModel (I can also get [CLS] token as the first token in sentence) The answer code is copy paste below: tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained('bert-base-uncased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1 outputs = model(input_ids) last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple So here comes my problem: How to get embeddings from BertForTokenClassification instead of BertModel? Can I simply replace the BertModel with BertForTokenClassification in the code and the expected output is what I wanted?
Hi @slecraphi Just to elaborate on @ehalit’s correct approach here your example adapted for token classification: tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForTokenClassification.from_pretrained('bert-base-uncased') inputs = tokenizer("Hello, my dog is cute", return_tensors='pt') outputs = model(**inputs, output_hidden_states=True) last_hidden_states = outputs.hidden_states[-1] The shape of last_hidden_states will be [batch_size, tokens, hidden_dim] so if you want to get the embedding of the first element in the batch and the [CLS] token you can get it with last_hidden_states[0,0,:]. Hope this helps!
1
huggingface
Beginners
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking arugment for argument index in method wrapper_index_select)
https://discuss.huggingface.co/t/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least-two-devices-cpu-and-cuda-0-when-checking-arugment-for-argument-index-in-method-wrapper-index-select/9255
I am working in a Google Coalab session with a HuggingFace DistilBERT model which I have fine tuned against some data. I am getting the following error when I try to evaluate a restored copy of my model:- RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking arugment for argument index in method wrapper_index_select) I run the following piece of code TWICE. Once just after fitting the model, and then once after saving and restoring the model. metric= load_metric("accuracy") model.eval() for batch in test_dataloader: batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) logits = outputs.logits predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch["labels"]) metric.compute() If I run the evaluation straight after training there is no problem:- /usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:10: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). # Remove the CWD from sys.path while we load stuff. {'accuracy': 0.6692307692307692} If I run the above code after saving and restoring the model then I get the error quoted above, the full traceback for which is:- /usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:10: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). # Remove the CWD from sys.path while we load stuff. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-102-33bf1579632a> in <module>() 4 batch = {k: v.to(device) for k, v in batch.items()} 5 with torch.no_grad(): ----> 6 outputs = model(**batch) 7 8 logits = outputs.logits 8 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2041 # remove once script supports set_grad_enabled 2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2044 2045 RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking arugment for argument index in method wrapper_index_select) The steps I take for saving and restoring are as follows:- Write the model to the colab session’s local disc:- Write from local disc (of the colab session) to Google Drive Write back from Google Drive to the colab session’s local disc Use the copy on the local drive to load the model The code for step 1 has been adapted from that at run_glue.py 3 and is as follows:- # Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained() output_dir = './a_local_copy/' # Create output directory if needed if not os.path.exists(output_dir): os.makedirs(output_dir) #logger.info("Saving model checkpoint to %s", args.output_dir) print("Saving model checkpoint to %s" % output_dir) # Save a trained model, configuration and tokenizer using `save_pretrained()`. 
# They can then be reloaded using `from_pretrained()` model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training model_to_save.save_pretrained(output_dir) tokenizer.save_pretrained(output_dir) # Good practice: save your training arguments together with the trained model # torch.save(args, os.path.join(output_dir, 'training_args.bin')) Step 4 is the straightforward:- model = AutoModelForSequenceClassification.from_pretrained(output_dir) tokenizer = AutoTokenizer.from_pretrained(output_dir) I am happy to load further code if you could give me some guidance as to what would be useful.
I think after you load the model, it is no longer on GPU, try model = AutoModelForSequenceClassification.from_pretrained(output_dir).to(device)
0
huggingface
Beginners
KeyError: “Invalid key: slice(0, 1000, None). Please first select a split
https://discuss.huggingface.co/t/keyerror-invalid-key-slice-0-1000-none-please-first-select-a-split/9089
Hi, I am trying to train a tokenizer and execute the following line of code: new_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=25000) Though, when I execute it, I get this error: KeyError: "Invalid key: slice(0, 1000, None). Please first select a split. For example: `my_dataset_dictionary['train'][slice(0, 1000, None)]`. Available splits: ['train']"
Hi! Your batch_iterator must be iterating over a Dataset object, but it looks like you are trying to iterate over a DatasetDict (which maps split names to Dataset objects). To fix your code, you just have to replace dataset by dataset["train"] in your definition of batch_iterator. Let me know if that works or if you have other questions.
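For reference, a batch_iterator along those lines (the "text" column name is an assumption; use whatever your dataset's text column is called):

def batch_iterator(batch_size=1000):
    for i in range(0, len(dataset["train"]), batch_size):
        yield dataset["train"][i : i + batch_size]["text"]

new_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=25000)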
0
huggingface
Beginners
What is the best way to fine-tune ViT with a custom dataset?
https://discuss.huggingface.co/t/what-is-the-best-way-to-fine-tune-vit-with-a-custom-dataset/9233
I have checked out the course and I have come across tutorials for fine-tuning pre-trained models for NLP tasks. But I would really like to use the Vision Transformer 1 model for classifying images that I have. I have about 1.8k images belonging to 3 categories, and I would like to use ViT for classification. I want to fine-tune the model to my dataset and thus leverage transfer learning. This is a task of single-label classification. How can I do this? What is the best way to fine-tune the pretrained ViT model for a classification task to a smaller dataset? Can anyone point me towards any recipes or tutorials or other forms of how-tos? Thanks.
Hi there! I made some demos on how to fine-tune ViT on a custom dataset here: github.com/NielsRogge/Transformers-Tutorials (VisionTransformer folder). The repository contains demos I made with the Transformers library by HuggingFace.
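For orientation, a condensed sketch of what such fine-tuning can look like (the checkpoint, the dataset object, the column names and the hyperparameters are placeholders, and the label count follows the question's 3-class setup; this is not the exact notebook code):

from transformers import (ViTFeatureExtractor, ViTForImageClassification,
                          Trainer, TrainingArguments)

checkpoint = "google/vit-base-patch16-224-in21k"
feature_extractor = ViTFeatureExtractor.from_pretrained(checkpoint)
model = ViTForImageClassification.from_pretrained(checkpoint, num_labels=3)

def preprocess(examples):
    # assumes the dataset has an "image" column of PIL images and a "label" column of class ids
    inputs = feature_extractor(examples["image"])   # resize + normalize, returns pixel_values
    inputs["labels"] = examples["label"]
    return inputs

encoded = dataset.map(preprocess, batched=True)     # dataset: a DatasetDict of your images

args = TrainingArguments("vit-finetune", per_device_train_batch_size=16,
                         num_train_epochs=3, evaluation_strategy="epoch")
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"], eval_dataset=encoded["test"])
trainer.train()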
0
huggingface
Beginners
Is there a way to correctly load a pre-trained transformers model without the configuration file?
https://discuss.huggingface.co/t/is-there-a-way-to-correctly-load-a-pre-trained-transformers-model-without-the-configuration-file/9184
I would like to fine-tune a pre-trained transformers model on Question Answering. The model was pre-trained on large engineering & science related corpora. I have been provided a “checkpoint.pt” file containing the weights of the model. They have also provided me with a “bert_config.json” file but I am not sure if this is the correct configuration file. from transformers import AutoModel, AutoTokenizer, AutoConfig MODEL_PATH = "./checkpoint.pt" config = AutoConfig.from_pretrained("./bert_config.json") model = AutoModel.from_pretrained(MODEL_PATH, config=config) The reason I believe that bert_config.json doesn’t match “./checkpoint.pt” file is that, when I load the model with the code above, I get the error that goes as below. Some weights of the model checkpoint at ./aerobert/phase2_ckpt_4302592.pt were not used when initializing BertModel: [‘files’, ‘optimizer’, ‘model’, ‘master params’] This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertModel were not initialized from the model checkpoint at ./aerobert/phase2_ckpt_4302592.pt and are newly initialized: [‘encoder.layer.2.attention.output.LayerNorm.weight’, ‘encoder.layer.6.output.LayerNorm.bias’, ‘encoder.layer.7.intermediate.dense.bias’, ‘encoder.layer.2.output.LayerNorm.bias’, ‘encoder.layer.21.attention.self.value.bias’, ‘encoder.layer.11.attention.self.value.bias’, … If I am correct in assuming that “bert_config.json” is not the correct one, is there a way to load this model correctly without the config.json file?
This is telling you that the checkpoint that they gave you also includes the state of other things. So they also saved the state of the optimizer and not just the state of the model. It seems that you need to only load the “model” key. Maybe there is a better way than this, but I think you can do: MODEL_PATH = "./checkpoint.pt" state_dict = torch.load(MODEL_PATH)["model"] config = AutoConfig.from_pretrained("./bert_config.json") model = BertModel(config) model = BertModel._load_state_dict_into_model( model, state_dict, MODEL_PATH )[0] # make sure token embedding weights are still tied if needed model.tie_weights() # Set model in evaluation mode to deactivate DropOut modules by default model.eval() I did not test this. See this for more: github.com huggingface/transformers/blob/e46ad22cd6cb28f78f4d9b6314e7581d8fd97dc5/src/transformers/modeling_utils.py#L1381-L1384 1 @classmethod def _load_state_dict_into_model( cls, model, state_dict, pretrained_model_name_or_path, ignore_mismatched_sizes=False, _fast_init=True ):
0
huggingface
Beginners
How to save my tokenizer using save_pretrained?
https://discuss.huggingface.co/t/how-to-save-my-tokenizer-using-save-pretrained/9189
I have just followed this tutorial on how to train my own tokenizer. Now, from training my tokenizer, I have wrapped it inside a Transformers object, so that I can use it with the transformers library: from transformers import BertTokenizerFast new_tokenizer = BertTokenizerFast(tokenizer_object=tokenizer) Then, I try to save my tokenizer using this code: tokenizer.save_pretrained('/content/drive/MyDrive/Tokenzier') However, from executing the code above, I get this error: AttributeError: 'tokenizers.Tokenizer' object has no attribute 'save_pretrained' Am I saving the tokenizer wrong? If so, what is the correct approach to save it to my local files, so I can use it later?
You are saving the wrong tokenizer ;-). You called save_pretrained on the original tokenizers.Tokenizer object (tokenizer) instead of the wrapped one; new_tokenizer.save_pretrained(xxx) should work.
0
huggingface
Beginners
What is the purpose of save_pretrained()?
https://discuss.huggingface.co/t/what-is-the-purpose-of-save-pretrained/9167
Hello everyone. I hope my question is not too silly, but there is something that confuses me. Let’s say I load a huggingface model using from_pretrained() method, and then finetune it using the Trainer class. Now, via TrainingArguments, I get the chance to define an argument called output_dir. If I specify a directory here, won’t my model be saved in this directory, thus enabling me to load it again in the future from this folder, using from_pretrained()? Here lies my question: if this argument lets me save the model, what is the purpose of save_pretrained()? Looking on their respective documentations, it is clear that they do something different however: the first one saves something called checkpoints, while the second one saves “the model and its configuration”. Can someone explain to me their difference, in other words, what is the difference between saving the checkpoints or the model? Won’t the last checkpoint be the same as the model and its weights? Thanks in advance.
Hi there! The question is a bit weird in the sense you are asking: “Why does the model have this method when the Trainer has that argument?”. The base answer is: “because they are two different objects.” Not everyone uses the Trainer to train their model, so there needs to be a method directly on the model to properly save it. Now inside the Trainer, you could very well never save any checkpoint (save_strategy="no") or have the last checkpoint saved before the end of training (with save_strategy="steps"), so you won’t necessarily automatically have the last model saved inside a checkpoint. A checkpoint, by the way, is just a folder with your model, tokenizer (if it was passed to the Trainer) and all the files necessary to resume training from there (optimizer state, lr scheduler state, trainer state, etc.). To save your model at the end of training, you should use trainer.save_model(optional_output_dir), which will behind the scenes call the save_pretrained of your model (optional_output_dir is optional and will default to the output_dir you set).
0
huggingface
Beginners
Unable to use custom dataset when training a tokenizer
https://discuss.huggingface.co/t/unable-to-use-custom-dataset-when-training-a-tokenizer/9120
Hello, I am following this tutorial here: notebooks/tokenizer_training.ipynb at master · huggingface/notebooks · GitHub 1 So, using this code, I add my custom dataset: from datasets import load_dataset dataset = load_dataset('csv', data_files=['/content/drive/MyDrive/mydata.csv']) Then, I use this code to take a look at the dataset: dataset Access an element: dataset['train'][1] Access a slice directory: dataset['train'][:5] After executing the above code successfully, I try to execute this here: new_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=25000) However, I get this error: KeyError: "Invalid key: slice(0, 1000, None). Please first select a split. For example: `my_dataset_dictionary['train'][slice(0, 1000, None)]`. Available splits: ['train']" How do I fix this? I am trying to train my own tokenizer, and this seems to be an issue. Any help would be appreciated!
When asking for help on the forum, please paste all relevant code. In this case, you did not paste the definition of batch_iterator. If you are following the notebook, you did not load one dataset, but several (with a split for train/validation/test), which is why you get this error. You should add the split="train" argument when you load your dataset, or adapt the code of batch_iterator to index into your dataset dictionary.
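For reference, a minimal sketch of the first option, assuming your CSV has a "text" column (adjust the column name to your data):
from datasets import load_dataset

dataset = load_dataset('csv', data_files=['/content/drive/MyDrive/mydata.csv'], split='train')

def batch_iterator(batch_size=1000):
    # yield lists of texts, 1000 at a time, for the tokenizer trainer
    for i in range(0, len(dataset), batch_size):
        yield dataset[i : i + batch_size]['text']

new_tokenizer = tokenizer.train_new_from_iterator(batch_iterator(), vocab_size=25000)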
0
huggingface
Beginners
Object has no attribute ‘parameters’
https://discuss.huggingface.co/t/object-has-no-attribute-parameters/9144
I am running the following code:- from tqdm.auto import tqdm progress_bar = tqdm(range(num_training_steps)) model.train() for epoch in range(num_epochs): for batch in train_dataloader: batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) This is a straight copy and paste from the HuggingFace documentation. Unfortunately, the code is returning the following error:- 'TFDistilBertForSequenceClassification' object has no attribute 'parameters' I used the following boilerplate for the data loaders:- from torch.utils.data import DataLoader batch_size = 8 num_epochs = 3 train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size) eval_dataloader = DataLoader(val_dataset, batch_size=batch_size) I would be happy to share any further code that might be useful though there is rather a lot of it so please just describe what you need. The full Traceback is:- 0% 1/390 [00:00<03:43, 1.74it/s] /usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:10: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). # Remove the CWD from sys.path while we load stuff. --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2897 try: -> 2898 return self._engine.get_loc(casted_key) 2899 except KeyError as err: pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item() KeyError: 1002 The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) 8 frames <ipython-input-167-b86b10a2f8e7> in <module>() 5 model.train() 6 for epoch in range(num_epochs): ----> 7 for batch in train_dataloader: 8 batch = {k: v.to(device) for k, v in batch.items()} 9 outputs = model(**batch) /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self) 519 if self._sampler_iter is None: 520 self._reset() --> 521 data = self._next_data() 522 self._num_yielded += 1 523 if self._dataset_kind == _DatasetKind.Iterable and \ /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self) 559 def _next_data(self): 560 index = self._next_index() # may raise StopIteration --> 561 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 562 if self._pin_memory: 563 data = _utils.pin_memory.pin_memory(data) /usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] 
<ipython-input-154-c53f3c6a8f83> in __getitem__(self, idx) 9 def __getitem__(self, idx): 10 item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} ---> 11 item['labels'] = torch.tensor(self.labels[idx]) 12 return item 13 /usr/local/lib/python3.7/dist-packages/pandas/core/series.py in __getitem__(self, key) 880 881 elif key_is_scalar: --> 882 return self._get_value(key) 883 884 if is_hashable(key): /usr/local/lib/python3.7/dist-packages/pandas/core/series.py in _get_value(self, label, takeable) 988 989 # Similar to Index.get_value, but we do not fall back to positional --> 990 loc = self.index.get_loc(label) 991 return self.index._get_values_for_loc(self, loc, label) 992 /usr/local/lib/python3.7/dist-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2898 return self._engine.get_loc(casted_key) 2899 except KeyError as err: -> 2900 raise KeyError(key) from err 2901 2902 if tolerance is not None: KeyError: 1002
I’m confused, as the stack trace you are copying does not match the error you mention. The stack trace shows a problem indexing into the dataframe you apparently used for the labels: as the final error says, it can’t find the label at index 1002.
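If your labels are indeed a pandas Series (as the trace suggests), a common cause is that the Series keeps its original row numbers after a train/test split, so a positional lookup like self.labels[1002] is treated as a label lookup and fails. A hedged sketch of possible fixes (class and variable names are placeholders for your own):
# pass plain lists to the Dataset instead of pandas objects
train_dataset = MyDataset(train_encodings, train_labels.tolist())

# or keep the Series but index positionally inside __getitem__:
#     item['labels'] = torch.tensor(self.labels.iloc[idx])
# or reset the index once up front:
#     labels = labels.reset_index(drop=True)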
0
huggingface
Beginners
Padding causes wrong predictions?
https://discuss.huggingface.co/t/padding-causes-wrong-predictions/9152
Hello guys, I am trying out multiclass text classification using DistilBERT. It is a dataset with user feedback on a product and there are 4 categories. 0 - Good - “easy to use” 1 - Bad - “slow and constant crashing” 2 - Questions - “can you add feature x and y?” 3 - Others - “NIL” I am following the guide here 1, I padded the datasets like that: train_encodings = tokenizer(train_texts, truncation=True, padding='max_length', max_length=128) test_encodings = tokenizer(test_texts, truncation=True, padding='max_length', max_length=128) I’ve set it to 128 for both as I thought having different pads would cause issues but some other posts here seem to suggest otherwise. From my limited knowledge, in order to get new predictions from the model, I would have to encode the new texts and also tokenize them while padding it to 128, that is what I did but the prediction done was wrong. Meanwhile if I didn’t pad it at all, it would predict correctly. Code: new_prediction = tokenizer('good', truncation=True, padding='max_length', max_length=128) new_text = torch.Tensor(new_prediction['input_ids']).long().reshape(1, len(new_prediction['input_ids'])).to('cuda:0') print("Padded with all 0s version") print(sentence) print(tokenizer.batch_decode(sentence, skip_special_tokens=True)) print(model(sentence)[0].argmax(1)) print(model(sentence)) print("\nNo 0s version") new_prediction = tokenizer('good', truncation=True, padding=True) new_text = torch.Tensor(new_prediction['input_ids']).long().reshape(1, len(new_prediction['input_ids'])).to('cuda:0') print(new_text) print(tokenizer.batch_decode(new_text, skip_special_tokens=True)) print(model(new_text)[0].argmax(1)) print(model(new_text)) Output: Padded with all 0s version tensor([[ 101, 2204, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], device='cuda:0') ['good'] tensor([1], device='cuda:0') SequenceClassifierOutput(loss=None, logits=tensor([[ 0.5761, 1.5903, -1.0050, -1.0793]], device='cuda:0', grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) No 0s version tensor([[ 101, 2204, 102]], device='cuda:0') ['good'] tensor([0], device='cuda:0') SequenceClassifierOutput(loss=None, logits=tensor([[ 6.5631, -2.2075, -1.9927, -2.2027]], device='cuda:0', grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) As you can see, the word ‘good’ should return category 0, the padded version does not while the non-padded version returns correctly. I am a beginner and am having a hard time understanding why, can someone enlighten me, thanks!
Please have a look at the course, in particular this section 9. You need to pass the attention mask returned by the tokenizer to have you model ignore padding.
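Concretely, a minimal sketch of that (letting the tokenizer return tensors so you don’t have to rebuild them by hand):
enc = tokenizer('good', truncation=True, padding='max_length', max_length=128, return_tensors='pt').to('cuda:0')
# pass input_ids AND attention_mask so the padded positions are ignored
outputs = model(input_ids=enc['input_ids'], attention_mask=enc['attention_mask'])
# or simply: outputs = model(**enc)
print(outputs.logits.argmax(dim=-1))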
0
huggingface
Beginners
I can’t concatenate_datasets because features are not sorted. How do I sort it?
https://discuss.huggingface.co/t/i-cant-concatenate-datasets-because-features-are-not-sorted-how-do-i-sort-it/6243
Hi guys, I’m trying to concatenate two datasets that share some common features. But these two datasets have features in a different order. It’s like: DatasetDict({ train: Dataset({ features: ['__index_level_0__', 'answers', 'context', 'document_id', 'id', 'question', 'title'], num_rows: 3952 }) validation: Dataset({ features: ['__index_level_0__', 'answers', 'context', 'document_id', 'id', 'question', 'title'], num_rows: 240 }) }) and DatasetDict({ train: Dataset({ features: ['id', 'title', 'context', 'question', 'answers'], num_rows: 60407 }) validation: Dataset({ features: ['id', 'title', 'context', 'question', 'answers'], num_rows: 5774 }) }) so I erased uncommon features like: __index_level_0__, document_id, etc by using .remove_columns Now the two DatasetDicts have the same features. However the order is different. 1st DatasetDict: features: ['answers', 'context', 'question', 'title'], 2nd DatasetDict: features: ['title', 'context', 'question', 'answers'], So when I try to concatenate them by using datasets.concatenate_datasets([1stDatasetDict, 2ndDatasetDict]), I get an error that says: ValueError: Features must match for all datasets
hey @jeffnlp, i don’t think you can concatenate DatasetDict objects with concatenate_datasets - as described in the docs 3 this function expects a list of Dataset objects. what happens if you try iterating over both DatasetDict objects and building up a new one that concatenates the Datasets objects as follows: ds1 = DatasetDict(...) ds2 = DatasetDict(...) # Create empty DatasetDict ds3 = DatasetDict() for (split1, x), (split2, y) in zip(ds1.items(), ds2.items()): ds3[split1] = concatenate_datasets([x, y]) this should work since concatenate_datasets can handle out-of-order columns. if not, you might have a problem with the answers columns if they’re nested and the sub-columns don’t match. in that case you might be better off flattening the columns of each dataset and casting the features into the same types (see e.g. here 5 for casting details)
0
huggingface
Beginners
Unsupported value type BatchEncoding
https://discuss.huggingface.co/t/unsupported-value-type-batchencoding/8924
Hi I’m a HuggingFace Newbie and I’m trying to fine tune DistilBERT for a three label sentiment classification task. To do so I am using as a guide the HuggingFace Course 2. Hence I am using the following code to train my model:- model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3) lr_scheduler = PolynomialDecay( initial_learning_rate=5e-5, end_learning_rate=0., decay_steps=num_train_steps ) opt = Adam(learning_rate=lr_scheduler) model.compile(optimizer=opt, loss=loss, metrics=['accuracy', F1_metric()]) model.fit( encoded_train, np.array(y_train), validation_data=(encoded_val, np.array(y_val)), batch_size=8, epochs=3 ) The loss function is:- loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) The number of training steps is calculated like so:- batch_size = 8 num_epochs = 3 num_train_steps = (len(encoded_train['input_ids']) // batch_size) * num_epochs So far then, very much like the boiler-plate code in the course. My encoded training data looks like this:- {'input_ids': <tf.Tensor: shape=(1040, 512), dtype=int32, numpy= array([[ 101, 155, 1942, ..., 0, 0, 0], [ 101, 27900, 7641, ..., 0, 0, 0], [ 101, 155, 1942, ..., 0, 0, 0], ..., [ 101, 109, 7414, ..., 0, 0, 0], [ 101, 2809, 1141, ..., 0, 0, 0], [ 101, 1448, 1111, ..., 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(1040, 512), dtype=int32, numpy= array([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], dtype=int32)>} Printing with y_train.head() my labels look like this (though my code turns this into a numpy array):- 10 2 147 1 342 1 999 3 811 3 Name: sentiment, dtype: int64 I am receiving the following error message:- Epoch 1/3 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-56-2902befb3adf> in <module>() 16 validation_data=(encoded_val, np.array(y_val)), 17 batch_size=8, ---> 18 epochs=3 19 ) 14 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/type_spec.py in __make_cmp_key(self, value) 381 raise ValueError("Unsupported value type %s returned by " 382 "%s._serialize" % --> 383 (type(value).__name__, type(self).__name__)) 384 385 @staticmethod ValueError: Unsupported value type BatchEncoding returned by IteratorSpec._serialize My code is being run in Google Collaboratory using GPUs.
The problem lies in the training data, but you did not share how you built it, so we can’t help you see what’s wrong.
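That said, one common trigger for this exact error (not necessarily yours, since the preprocessing code isn’t shown) is passing the tokenizer’s BatchEncoding object straight to model.fit; wrapping it in a plain dict is the usual workaround. A hedged sketch, assuming encoded_train / encoded_val are the tokenizer outputs shown above:
model.fit(
    dict(encoded_train),              # plain dict of tensors instead of a BatchEncoding
    np.array(y_train),
    validation_data=(dict(encoded_val), np.array(y_val)),
    batch_size=8,
    epochs=3,
)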
0
huggingface
Beginners
Is there a way to finetune GPT2 775M on 16GB VRAM and 24GB RAM?
https://discuss.huggingface.co/t/is-there-a-way-to-finetune-gpt2-775m-on-16gb-vram-and-24gb-ram/9040
I was able to finetune GPT2 355M 2048 sequence, without FP16, all fit in VRAM. But no luck with GPT2 755M. Obviously didn’t fit to VRAM, so I used FP16 and DeepSpeed CPU offload, That way I got 9GB VRAM free, but out of RAM. Did someone succeed with GPT2 training for 775M with 16GB VRAM?
Now I’m able to run training with block size 1568: free VRAM 1593 MiB, free RAM 14 GB. Feels like there is room to run block size 2048…
0
huggingface
Beginners
How to adjust the learning rate after N number of epochs?
https://discuss.huggingface.co/t/how-to-adjust-the-learning-rate-after-n-number-of-epochs/8644
I am using Hugging Face’s Trainer. How do I adjust the learning rate after N epochs? For example, I have an initial learning rate set to lr=2e-6, and I would like to change the learning rate to lr=1e-6 after the first epoch and keep it there for the rest of the training. I tried this so far: optimizer = AdamW(model.parameters(), lr = 2e-5, eps = 1e-8 ) epochs = 5 batch_number = len(small_train_dataset) / 8 total_steps = batch_number * epochs scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, num_training_steps = total_steps, last_epoch=-1 ) I know that there is LambdaLR — PyTorch 1.9.0 documentation 2, but it drops the learning rate every epoch, which is not what I want to do. I want it to drop after 1 epoch and then stay there for the rest of the training process.
Were you able to resolve this? I have a similar problem where I want to implement adaptive learning rate during training.
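For anyone with the same need, one possible (untested) sketch is a LambdaLR whose multiplier changes once, passed to the Trainer; model, training_args and small_train_dataset are assumed to come from your own setup:
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR
from transformers import Trainer

steps_per_epoch = len(small_train_dataset) // 8      # assuming batch size 8, no gradient accumulation

def lr_lambda(current_step):
    # multiply the base lr (2e-6) by 1.0 during the first epoch, then by 0.5 (-> 1e-6) afterwards
    return 1.0 if current_step < steps_per_epoch else 0.5

optimizer = AdamW(model.parameters(), lr=2e-6, eps=1e-8)
scheduler = LambdaLR(optimizer, lr_lambda)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    optimizers=(optimizer, scheduler),
)
The lambda is written in terms of steps rather than epochs because the Trainer calls scheduler.step() after every optimizer step, not once per epoch.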
0
huggingface
Beginners
Pre-training & fine-tuning BERT on specific domain with custom dataset
https://discuss.huggingface.co/t/pre-training-fine-tuning-bert-on-specific-domain-with-custom-dataset/6672
Hallo, I need some help with training BERT and thought maybe I can ask you here… I am trying to train a BERT model for a specific domain, similar to BioBERT, but for some other field. So, for achieving my plans, I run the run_mlm.py script which I found on transformers/examples/pytorch/language-modeling at master · huggingface/transformers · GitHub 31 for bert-base-uncased with some custom dataset, which is a large txt-file containing my corpus. In the next step I wanted to fine-tune my model on the NER tasks using run_ner.py script I found on the same GitHub in: examples/pytorch/token-classification. For a small example dataset the fine-tuning works, but if I use my whole dataset I get the following error: Map method to tokenize raises index error - #10 by rainman020 7. Maybe you can tell me if my approach is correct? Another point where I am struggling is that when using run_ner.py I get the warning that I should train this model on a down-stream task. But I thought this is what I am doing using this script to fine-tune on NER. Do I have to do some extra steps in the pre-training phase? I googled a lot but it is still not 100% clear for me. If you could help me I would be very glad. Thank you!
Hi there! I’m using your question to ask one related to the run_ner.py script (maybe you could help on this one, since it is extremely basic)! I’m trying to build an extractive summariser using this latter. I’m starting with all this so I am at level 0, and am trying to understand how to fit this script to my own data? I have my train dev and test datasets tokenized and labelled, in csv formats. But when i run the command on my data, the following appears. (the run_ner.py script is loaded to the environment) Screenshot 2021-06-14 at 16.00.122212×562 107 KB Any guidance on how to make this work…? Thank you!!
0
huggingface
Beginners
How do I fine-tune a zero-shot learning model to my task?
https://discuss.huggingface.co/t/how-do-i-fine-tune-a-zero-shot-learning-model-to-my-task/6758
How would you improve the performance of zero-shot models considering you will be obtaining 1) feedback on the predictions made by the model and 2) labeled examples. I want to do classification in Spanish and with very specialized legal text. The facebook/bart-large-mnli and joeddav/xlm-roberta-large-xnli models seem to work decently. However, I already have a couple of thousand examples labeled that might help to fine-tune the zero-shot learning models. Furthermore, I will be providing feedback on the predictions made by the model. Thank you and greetings from Mexico!
Hey, Any luck on zero-shot finetuning?
0
huggingface
Beginners
I want to custom my data set in speech recognition wav2vec
https://discuss.huggingface.co/t/i-want-to-custom-my-data-set-in-speech-recognition-wav2vec/8172
Hello, I’m really a beginner, but I have to fine-tune the wav2vec model on my own dataset, which is in WAV format, so I really need detailed help about what to do.
Hello wzr97. I am not sure if I understand your question correctly, and I am also fairly new to everything myself. But this blog might contain the answer to your question. huggingface.co Fine-Tune Wav2Vec2 for English ASR in Hugging Face with 🤗 Transformers 15
0
huggingface
Beginners
Creating a Rick Sanchez chat bot with Transformers and Chai
https://discuss.huggingface.co/t/creating-a-rick-sanchez-chat-bot-with-transformers-and-chai/9078
Hey, I made a notebook tutorial to try and make it as simple as possible to train and deploy a DialoGPT bot. It uses dialogue from Rick and Morty to train a Rick bot and then deploys the bot onto Chai 5, a platform for creating and interacting with conversational AI’s. RickBotChai1530×2892 277 KB By the end of this tutorial you will have your very own chatbot, like the one pictured above The notebook can be found here 7. Thanks!
This is cool! In your notebook it says you upload the model to huggingface - do you have a link to the model page? I wanna check it out!
0
huggingface
Beginners
Shared public/private models are gone
https://discuss.huggingface.co/t/shared-public-private-models-are-gone/9035
Hi there, I uploaded public and private models a week ago (and the public model has over a hundred downloads so far). However, I just found out that they are gone in my profile page. Can anyone tell me what is going on here? I certainly didn’t delete the models. Is this related to model sharing policy that I might be missing? Thank you! best,
Oh my! I thought I was the only one going crazy! My models are also gone from my page, nor can they be searched. I similarly didn’t touch the models at all over the past few days… What’s odd is that I can still view and clone them if I visit their old direct links as per usual, like this 1 for example. There seems to be something going on. I hope it can be resolved soon!
0
huggingface
Beginners
Different versions of ‘wav2vec2’ model and their differences
https://discuss.huggingface.co/t/different-versions-of-wav2vec2-model-and-their-differences/8996
Hey everyone. I want to use wav2vec2 to perform ASR using data in my language (Greek). As such, I took a look at the various wav2vec2 pretrained models that exist in the model hub, and there are two things I don’t understand: Some versions, like this facebook/wav2vec2-large-lv60 · Hugging Face, say in the description that the model ‘should be fine-tuned on a downstream task, like Automatic Speech Recognition’. On the other hand, other versions like ‘facebook/wav2vec2-large-960h-lv60’ (sorry, can’t post more than 2 links), impose no such requirement and also provide code snippets as an example of how to use the particular model. Furthermore, the group of models mentioned first, do not have code examples, but contain links to this amazing blog post Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers which I studied. Forgive me if my question is getting too big, but I’d like to ask something more related to this blog post. I noticed 2 things: (1) the author does not load the tokenizer and feature extractor using the ‘from_pretrained()’ method, but instantiates them by themselves and (2) the second group of models mentioned earlier (those who include code examples in their page), do not use a feature extractor at all. What are the reasons behind these distinctions? Sorry again for the lengthy question. I’d really appreciate any help. Thanks in advance!
A mistake I noticed in my post: the first link is meant to redirect to this version of wav2vec: facebook/wav2vec2-large-xlsr-53 · Hugging Face 1
0
huggingface
Beginners
Continue pre-training of Greek BERT with domain specific dataset
https://discuss.huggingface.co/t/continue-pre-training-of-greek-bert-with-domain-specific-dataset/4005
Hello, I want to further pre-train Greek BERT of the library on a domain specific dataset in MLM task to improve results. The downstream task of BERT will be sequence classification. I found that the library also provides scripts 35 for that. In the example RoBERTa is further trained on wikitext-2-raw-v1. As I saw here 10, the dataset is formatted as: { “text”: “” The gold dollar or gold one @-@ dollar … } although, I downloaded the dataset from the link provided in that site and saw that the texts in the dataset are one after the other separated by titles within =. My question is, what format should the dataset that I will further pre-train BERT have and how should they provided as train and dev? If there is any source, it would be very helpful. p.s. BERT was pre-trained in two tasks, MLM and NSP. Since my downstream task is Sequence Labeling, I thought that I should continue the pre-training with just the MLM task.
You should fine-tune the model on whatever task you want to perform. If you’d like to use it for sequence classification, then that’s what you should train it on i.e. exchange the head for a Sequence-Classification one. This should be of help: BERT — transformers 4.3.0 documentation 46 This may help you understand how to format your input: BERT Fine-Tuning Tutorial with PyTorch · Chris McCormick 57 (the tokenizer does most of the heavy lifting for you)
0
huggingface
Beginners
Can t5 be used to text-generation?
https://discuss.huggingface.co/t/can-t5-be-used-to-text-generation/1075
Hello to all, I’m following this tutorial: https://huggingface.co/blog/how-to-generate 78 which says: " Auto-regressive language generation is now available for GPT2 , XLNet , OpenAi-GPT , CTRL , TransfoXL , XLM , Bart , T5 in both PyTorch and Tensorflow >= 2.0!" so I wanted to try to do the same, they just change the model to T5. However even though the model runs, the output is very strange. This is the code: !pip install transformers import tensorflow as tf from transformers import TFT5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("t5-small") model = TFT5ForConditionalGeneration.from_pretrained("t5-small", pad_token_id=tokenizer.eos_token_id) input_ids = tokenizer.encode('Hello, my dog is cute', return_tensors='tf') greedy_output = model.generate(input_ids, max_length=50) print("Output:\n" + 100 * '-') print(tokenizer.decode(greedy_output[0], skip_special_tokens=True)) I get this output: Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Hello Because of this, and taking into account that I have not found many text-generation examples with t5, I would like to ask if this is possible? if so, why my output is so strange?
My mistake: reading the documentation, the input is required to start with a task prefix, for example 'summarize: '.
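For example, with the same setup as above (the input text is just a placeholder):
input_ids = tokenizer.encode(
    'translate English to German: Hello, my dog is cute', return_tensors='tf'
)
greedy_output = model.generate(input_ids, max_length=50)
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))

# other prefixes T5 was trained with include e.g. 'summarize: ' followed by the text to summarize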
0
huggingface
Beginners
[Question] Why does vocab size determine training parameters
https://discuss.huggingface.co/t/question-why-does-vocab-size-determine-training-parameters/8961
I have a not so smart question: Why does the vocab size increase training parameters by a lot? The following configuration: from transformers import RobertaConfig config = RobertaConfig( vocab_size=48000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) Passed to the model gives 80 mil parameters model = RobertaForMaskedLM(config=config) model.num_parameters() → 80.4 milion If vocab_size is reduced to 16k, model = RobertaForMaskedLM(config=config) model.num_parameters() → 55 milion The Embedding layers at the start increases training parameters so much? self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) The layer embeding takes: Embedding(vocab_size, embedding_space, input_length=max_length)) Where in my case embedding_space is 768 (where 768*48000 - 768 * 16000 ~ 25 mil.) Therefore the number of training parameters would be vocab_size * embedding_space?
I’m not sure where your question is: all your math is correct and yes, the embedding matrix is responsible for a looot of the model parameters.
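For completeness, the back-of-the-envelope check:
hidden_size = 768
print(48_000 * hidden_size)                 # 36,864,000 parameters in word_embeddings
print(16_000 * hidden_size)                 # 12,288,000
print((48_000 - 16_000) * hidden_size)      # 24,576,000, i.e. the ~25M gap you observed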
0
huggingface
Beginners
[HELP] Special tokens not appearing as predicted tokens!
https://discuss.huggingface.co/t/help-special-tokens-not-appearing-as-predicted-tokens/8910
Hello, I require some urgent help! I trained a masked language model on a Twitter dataset, with each tweet containing one emoji. Then, I used the following code to add the emojis as special tokens: num_added_toks = tokenizer.add_tokens(['😃', '😄', '😁', '😆', '😅', '😂', '🤣', '🧔🏿‍♂️']) print('We have added', num_added_toks, 'tokens') model.resize_token_embeddings(len(tokenizer)) # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e. the length of the tokenizer From adding the special tokens, I added 3311 different emojis successfully, which increased the embedding to (53575, 768) as shown below: We have added 3311 tokens Embedding(53575, 768) Now, here’s the issue I am facing… When I add the <mask> token to a sentence and input the top_k as the total number of embeddings, which is 53575, not a single emoji shows up in the predictions. I used this line of code: mask_filler("Are you happy today <mask>", top_k=53575) As you can see in the code above, the top_k is 53575, the total number of embeddings which should include the 3311 emojis I added, right? However, when I make the predictions and scroll through the list of 53575, not a single emoji is there! Why is this is happening? Like, I have added the emojis to the vocabulary, but they are simple not there when making predictions. SEE FULL CODE HERE: MLM-EMOJIS/mlm_emojis.ipynb at main · saucyhambon/MLM-EMOJIS · GitHub 4
I see one emoji in the last predictions. Most likely, the model is just not used to seeing those tokens yet, so it probably needs to be trained longer.
0
huggingface
Beginners
How to fine-tune Bert on STS-B task?
https://discuss.huggingface.co/t/how-to-fine-tune-bert-on-sts-b-task/8950
Hi, I am new to NLP and trying to reproduce fine-tune results of Bert. However, the STS-B task troubles me, from what I understand, the STS-B task is a regression task, but Bert treats it as a classification task. I do not quite know the transformation between scores and labels in detail, is anybody willing to give me a hint?
This is all dealt with in the loss function: a model that is tasked with classification or regression is roughly the same; it just outputs a different number of labels. Inside the code of BertForSequenceClassification 16, you can see there is a test that picks a different loss function depending on the problem_type, and by default 1 label (like in STS-B) corresponds to a regression, so the mean-squared error is selected as a loss, instead of cross-entropy.
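In practice that means you don’t have to do anything special beyond setting num_labels=1 and giving float labels; a minimal sketch:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

inputs = tokenizer("A plane is taking off.", "An air plane is taking off.", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([5.0]))   # STS-B similarity score in [0, 5]
print(outputs.loss)   # mean-squared error, since num_labels == 1 means regression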
0
huggingface
Beginners
How can I convert a model created with fairseq?
https://discuss.huggingface.co/t/how-can-i-convert-a-model-created-with-fairseq/564
Hi, I fine tuned facebook’s model mbart.cc25 for machine translation with Fairseq, it saved its model as checkpoint_*.pt. How can I use it now with Transformers, is it possible? Thanks
Unless the naming conventions that are used in transformers are the same as in fairseq, this is not possible out of the box. However, with a bit of digging you should be able to map them. @sshleifer will know the answer.
0
huggingface
Beginners
Data format for BertForSequenceClassification with num_labels > 2
https://discuss.huggingface.co/t/data-format-for-bertforsequenceclassification-with-num-labels-2/4156
Hi, I have a multilabel task (num_labels=8) and I want to use BertForSequenceClassification with the Trainer to train the model. But I get the following error: ValueError: Expected input batch_size (8) to match target batch_size (64). I assume that the problem is the data format of the labels. Currently, my label is an 8-dim list (e.g., [1,0,0,0,0,1,0,0]). What is the right format for the label data? Here is my code: class EmotionDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) MODEL_NAME = 'dbmdz/bert-base-german-uncased' tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) model = BertForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=8) # tokenize data dataset_train = Dataset.from_pandas(df_train) train_encodings = tokenizer(dataset_train['text'], truncation=True, padding=True) train_dataset = EmotionDataset(train_encodings, dataset_train['label']) training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1, # total # of training epochs per_device_train_batch_size=8, # batch size per device during training per_device_eval_batch_size=32, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs ) trainer = Trainer( model=model, # the instantiated Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=test_dataset # evaluation dataset ) _ = trainer.train() trainer.evaluate() Thanks, Max
Hi @maxpower, I think the format of your dataset is fine but I think you have to change the model’s loss function to use a sigmoid instead of a softmax on the logits (i.e. BCEWithLogitsLoss). You can see a skeleton + hacky Colab in this thread: Fine-Tune for MultiClass or MultiLabel-MultiClass - #8 by lewtun 100
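A minimal sketch of that idea (based on the thread linked above, not tested here) is to keep the dataset as-is, give the labels as floats, and override the loss in a small Trainer subclass:
import torch
from torch import nn
from transformers import Trainer

class MultilabelTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # sigmoid + binary cross-entropy per label instead of a softmax over the 8 classes
        loss = nn.BCEWithLogitsLoss()(logits, labels.float())
        return (loss, outputs) if return_outputs else loss
You would then build MultilabelTrainer(...) exactly where the code above builds Trainer(...).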
0
huggingface
Beginners
How to move huggingface’s pretrained model to another machine?
https://discuss.huggingface.co/t/how-to-move-huggingfaces-pretrained-model-to-another-machine/8909
Hi everyone, I’m having two computers, let’s call them A and B. On my computer A, I have my huggingface’s textattack/bert-base-uncased-imdb model already downloaded. My computer B, however, does not have access to the internet and cannot download the model. A and B can communicate through ssh. Now I’m trying to move the downloaded model from A to B by moving the entire ~/.cache/huggingface to B, but when I run my code on computer B, it still attempts to download the model from the internet. Maybe the pretrained model in A is located somewhere else? How do I move my model from A to B? Thanks.
Computer B is still trying to access the internet to check if there is a new version of the model you are using. You have to set the environment variable TRANSFORMERS_OFFLINE to some truthy value (like yes)
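Concretely, you can either export TRANSFORMERS_OFFLINE=1 in the shell before running your script, or set it from Python before transformers is imported:
import os
os.environ["TRANSFORMERS_OFFLINE"] = "1"   # must be set before importing transformers

from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-imdb")
tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-imdb")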
0
huggingface
Beginners
How do I change my username or email?
https://discuss.huggingface.co/t/how-do-i-change-my-username-or-email/8855
And how do I close my account? Thanks.
cc @pierric
0
huggingface
Beginners
Creating a custom tokenizer for Roberta
https://discuss.huggingface.co/t/creating-a-custom-tokenizer-for-roberta/2809
RobertaTokenizerFast seems to be ignoring my Lowercase() normaliser. I’ve created a custom tokeniser as follows: tokenizer = Tokenizer(BPE(unk_token="<unk>", end_of_word_suffix="</w>")) tokenizer.normalizer = Lowercase() tokenizer.pre_tokenizer = Sequence([Whitespace(), Digits(individual_digits=False), Punctuation()]) trainer = BpeTrainer( vocab_size=3000, special_tokens=["<s>", "</s>", "<unk>", "<pad>", "<mask>"] ) tokenizer.train(trainer, files) tokenizer.post_processor = RobertaProcessing( cls=("<s>", tokenizer.token_to_id("<s>")), sep=("</s>", tokenizer.token_to_id("</s>")), ) tokenizer.decoder = BPEDecoder(suffix="</w>") (I’m not 100% sure if the BPE suffix is required?) I then save as tokenizer.model.save("./models/roberta") tokenizer.save("./models/roberta/config.json") The reason I need a custom tokeniser is my examples aren’t white-space delimited, i…e tokenizer.encode("AHU-01-SAT").tokens [’<s>’, ‘ahu’, ‘-’, ‘01’, ‘-’, ‘sat’, ‘</s>’] The following doesn’t return the correct tokens: from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained(".models/roberta", max_len=512) tokenizer("AHU-01-SAT") {‘input_ids’: [0, 40, 112, 40, 1], ‘attention_mask’: [1, 1, 1, 1, 1]} It’s missing the first and last tokens (plus it’s not even replacing them with “”? If I manually apply the normalisation I get the correct tokenisation tokenizer("ahu-01-sat") {‘input_ids’: [0, 109, 40, 112, 40, 598, 1], ‘attention_mask’: [1, 1, 1, 1, 1, 1, 1]} I tried AutoTokenizer and observed the same issue - am I doing something wrong?
Ahh, figured it out, it should be tokenizer.save("./models/roberta/tokenizer.json") not tokenizer.save("./models/roberta/config.json")
0
huggingface
Beginners
Which encoding does GPT2 vocabulary file use?
https://discuss.huggingface.co/t/which-encoding-does-gpt2-vocabulary-file-use/8875
I wanted to investigate a little, how tokenizer works. But the model trained in Russian language (mostly), and I see garbage instead of tokens. I tried obvious variants like open in UTF-8, that didn’t help.
Well, I checked that json vocabulary loads as gibberish: import json with open("/content/notebooks/ru-gpts/models/gpt3large/vocab.json", "r", encoding="utf-8") as f: vocab = json.load(f) list(vocab.items())[1000:1010] [('Ñĥма', 1000), ('Ġпи', 1001), ('Ġn', 1002), ('ĠнеÑĤ', 1003), ('иÑĤа', 1004), ('ÑĢÑĥп', 1005), ('ec', 1006), ('енÑĭ', 1007), ('ĠÑıв', 1008), ('Ðĵ', 1009)] The same exact gibberish is in tokenizer itself, which I view with list(tok1.get_vocab().items())[1000:1010] Yet, the model seems to work somehow, and produce meaningful results in Russian.
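What you are seeing is most likely not corruption: GPT-2 style tokenizers use a byte-level BPE, where every raw byte is mapped to a printable unicode character, so multi-byte UTF-8 text such as Russian looks like gibberish inside vocab.json. The tokenizer reverses that mapping when decoding; a small sketch, assuming the checkpoint directory from your snippet ships a GPT-2 style tokenizer:
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("/content/notebooks/ru-gpts/models/gpt3large")
tokens = tok.convert_ids_to_tokens(list(range(1000, 1010)))
print(tokens)                                # byte-level form, looks garbled
print(tok.convert_tokens_to_string(tokens))  # mapped back to readable UTF-8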
0
huggingface
Beginners
Are images generated by DALL·E mini free to use?
https://discuss.huggingface.co/t/are-images-generated-by-dall-e-mini-free-to-use/8854
Hi, I found DALL·E mini via Twitter. My question is in the topic title: May I use images generated from my prompts in DALL·E mini? Thanks.
cc @boris who may know more about the licensing of dall-e’s outputs (my guess is it’s fine!)
0
huggingface
Beginners
How to save, load and use my text classification model?
https://discuss.huggingface.co/t/how-to-save-load-and-use-my-text-classification-model/8837
Hi, I have followed this text classification tutorial: notebooks/text_classification.ipynb at master · huggingface/notebooks · GitHub 4 However, the tutorial does not show how to save, load and use my text classification model. Any help on how to use my saved model for classifying a sentence? Thanks!
Once you have pushed your model to the hub, you can use it like any other model, in a pipeline or directly with the from_pretrained method.
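A small sketch of both options (the model id here is a placeholder for wherever you pushed or saved your model):
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

# option 1: a pipeline
classifier = pipeline("text-classification", model="your-username/your-model-name")
print(classifier("I really enjoyed this movie!"))

# option 2: load model + tokenizer explicitly (also works with a local folder path)
tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
model = AutoModelForSequenceClassification.from_pretrained("your-username/your-model-name")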
0
huggingface
Beginners
What does “encodings.update(…)” do?
https://discuss.huggingface.co/t/what-does-encodings-update-do/8781
Hello, I’m working thru the Question Answering with SQuAD 2.0 module. I’m stuck in this function: def add_token_positions(encodings, answers): start_positions = [] end_positions = [] for i in range(len(answers)): start_positions.append(encodings.char_to_token(i, answers[i]['answer_start'])) end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1)) # if start position is None, the answer passage has been truncated if start_positions[-1] is None: start_positions[-1] = tokenizer.model_max_length if end_positions[-1] is None: end_positions[-1] = tokenizer.model_max_length encodings.update({'start_positions': start_positions, 'end_positions': end_positions}) add_token_positions(train_encodings, train_answers) add_token_positions(val_encodings, val_answers) and wondering what encodings.update({'start_positions': start_positions, 'end_positions': end_positions}) is actualing doing. I don’t find documentation or helpful information for this. Is there any information available? And how can I see the result of this function? Thanks in advance!
This is a Python method for any dictionary, it adds the content of the dictionary passed (so here the "start_positions" and "end_positions") to the encodings.
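A tiny illustration with a plain dict (the tokenizer’s BatchEncoding behaves like a dict here, so the effect is the same):
encodings = {"input_ids": [[101, 2023, 102]], "attention_mask": [[1, 1, 1]]}
encodings.update({"start_positions": [1], "end_positions": [2]})
print(encodings.keys())
# dict_keys(['input_ids', 'attention_mask', 'start_positions', 'end_positions'])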
0
huggingface
Beginners
How can i use torch.optim.lr_scheduler.MultiStepLR with Trainer?
https://discuss.huggingface.co/t/how-can-i-use-torch-optim-lr-scheduler-multisteplr-with-trainer/8743
Is there any way to change learning rate scheduler by using Pytorch’s MultiStepLR with Trainer?
You can pass your own optimizer and scheduler to the Trainer. See the documentation 11 for more information.
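A hedged sketch of what that could look like with MultiStepLR; model, training_args and train_dataset are assumed to come from your own setup, and note that the Trainer calls scheduler.step() once per optimization step, not per epoch, so the milestones are expressed in steps:
from torch.optim import AdamW
from torch.optim.lr_scheduler import MultiStepLR
from transformers import Trainer

optimizer = AdamW(model.parameters(), lr=2e-5)
steps_per_epoch = 1000   # placeholder: roughly len(train_dataset) // effective batch size
scheduler = MultiStepLR(optimizer, milestones=[2 * steps_per_epoch, 4 * steps_per_epoch], gamma=0.1)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    optimizers=(optimizer, scheduler),
)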
0
huggingface
Beginners
CPU Multiprocessing for Text Generation
https://discuss.huggingface.co/t/cpu-multiprocessing-for-text-generation/5861
Hello. I’m trying to use multiprocessing when generating summaries on text within a data frame. The pool.map() command just hangs when I run it on a custom generate function. I tried debugging by removing every line of the generate function (shown below) and it seems to work fine if I remove the model.generate() part. Is this not the way to run multiprocessing on the generate function? Or is multiprocessing on CPUs not possible with the generate function? import torch.multiprocessing as mp def generate(i): content = df.loc[i, 'content_web'] location = df.loc[i, 'location'] content = content + '. ' + location tokenized_text = tokenizer(content, truncation=True, padding=True, return_tensors='pt') source_ids = tokenized_text['input_ids'].to(device, dtype = torch.long) source_mask = tokenized_text['attention_mask'].to(device, dtype = torch.long) generated_ids = model.generate( input_ids = source_ids, attention_mask = source_mask, max_length=512, min_length=50, num_beams=4, repetition_penalty=2.5, length_penalty=2.0, early_stopping=True, no_repeat_ngram_size=8, ) pool = mp.Pool(processes=4) results = pool.map(generate2, (range(0, 2)))
I have the same issue for prediction with AutoModelForSequenceClassification. pool.map() just hangs, while a Python map works fine.
0
huggingface
Beginners
How to fine-tune T5-base model?
https://discuss.huggingface.co/t/how-to-fine-tune-t5-base-model/8478
I want to fine-tune the T5 model, but there is an issue running this script. My code: !python /content/transformers/examples/language-modeling/run_clm.py --model_name_or_path t5-base --train_file /content/train.txt --do_train --learning_rate=1e-4 --per_device_train_batch_size=4 --output_dir /tmp/test-clm This code is not working. How can I fine-tune the t5-base model?
Please use a preformatted text block for your code; you can create this block automatically by pressing Ctrl+E: !python /content/transformers/examples/language-modeling/run_clm.py --model_name_or_path t5-base --train_file /content/train.txt --do_train --learning_rate=1e-4 --per_device_train_batch_size=4 --output_dir /tmp/test-clm Can you share any kind of error or unexpected output with us?
0
huggingface
Beginners
How to freeze some layers of BertModel
https://discuss.huggingface.co/t/how-to-freeze-some-layers-of-bertmodel/917
I have a pytorch model with BertModel as the main part and a custom head. I want to freeze the embedding layer and the first few encoding layers, so that I can fine-tune the attention weights of the last few encoding layers and the weights of the custom layers. I tried: ct = 0 for child in model.children(): ct += 1 if ct < 11: # ########## change value - this freezes layers 1-10 for param in child.parameters(): param.requires_grad = False but I’m not sure that did what I want. I then ran this to check, but the layer names aren’t recognized print(L1bb.embeddings.word_embeddings.weight.requires_grad) print(L1bb.encoder.layer.0.output.dense.weight.requires_grad) print(L1bb.encoder.layer.3.output.dense.weight.requires_grad) print(L1bb.encoder.layer.6.output.dense.weight.requires_grad) print(L1bb.encoder.layer.9.output.dense.weight.requires_grad) print(L1bb.pooler.dense.weight.requires_grad) print(L4Lin.requires_grad) L1bb is the name of the BertModel section in my model, and L1bb.embeddings.word_embeddings.weight is shown in the output of the code that instantiates the model. How can I freeze the first n layers? What counts as a layer? What are the names of the layers in BertModel? How can I check which layers are frozen? PS how can I format this pasted code as code in the forum post? One section has done it automatically, but nothing seems to affect it.
You should not rely on the order returned by the parameters method as it does not necessarily match the order of the layers in your model. Instead, you should use it on specific parts of your model: modules = [L1bb.embeddings, *L1bb.encoder.layer[:5]] #Replace 5 by what you want for module in modules: for param in module.parameters(): param.requires_grad = False will freeze the embeddings layer and the first 5 transformer layers.
1
huggingface
Beginners
Ideas for beginner-friendlier TPU-VM clm training
https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351
Hello All, I’ve recently started trying out TPU-VMs and wanted to train distilgpt2 from scratch on a non-English language. I’ve had a rather rough start but did manage to overcome it and get the training going. Following the advice from Suraj Patil 3, I decided to write a list of things which can make this experience a bit smoother for others. If you too think these items might help, perhaps some could be added to the relevant tutorials, docs or even the example code itself. GIT credentials for Huggingface Perhaps explain / demonstrate how to store and cache them? (Extra useful if the model is “Private”) Because I did not cache properly at first, and because I had junk in the keyboard buffer, at some point when the script wanted to “Push to hub”, it failed. Now it seems like it does not retry, nor does ‘run_clm_flax.py’ save the checkpoint to disk when such an error happens, so basically all the hours of training were lost. I’d suggest that if the training scripts fail auth for some reason, they should not exit before writing the checkpoint to disk. Is it possible to somehow make sure that we have cached GIT credentials when the training starts? Eval phase Same as above: in case of an error, do not exit before you write the checkpoint, to avoid loss of training time. Log Is there a built-in option to have all the “on screen” data be written to a log file? - If so, I couldn’t find it. Why are all of those “tcmalloc” lines being printed (even on “Info” log level)? Is there a way to have them not print? - It really clutters the screen to the point where it’s hard to find the actual log prints. I “worked around” it by piping everything to grep -v “tcmalloc”, but is there a better solution? dtype float16 - Doesn’t really work well with TPU, right? Perhaps not allow it if training with JAX? bfloat16 - Can it convert to pytorch? Only after I had my first checkpoint, I discovered that I cannot use Flax models with the online inference-box and that AutoModelForCausalLM fails to load bfloat16 ones. float32 - I decided to re-start with float32 as it was loaded well by AutoModelForCausalLM and was saved well as pytorch. Is this the only/best option if I want to have my FLAX model also as pytorch? Upgrade CLU Somehow I missed this upgrade step (pip install --upgrade clu), which resulted in a lot of weird errors and malloc failures. If it’s not just me (which it totally might be), perhaps it should be more prominent? Anyway, this is my feedback, I hope someone finds it useful. Great work everyone! Doron Adler @Norod78
Follow this tutorial 7 to set up and connect to your TPU-VM. Add your local bin path to the PATH environment variable. If you do not know your local user name, type: whoami #In my case, the user is ‘dadler’, so replace ‘dadler’ in the following block with your own user: nano ~/.bashrc #Add the following line at the bottom, replace dadler with your own user name export PATH="/home/dadler/.local/bin:$PATH" #Save (CTRL O) #Exit (Ctrl X) #Reload bashrc source ~/.bashrc Install and upgrade libraries pip install datasets git clone https://github.com/huggingface/transformers.git sudo pip install --user -e transformers pip install --upgrade tokenizers pip install --upgrade clu git clone https://github.com/google/flax.git sudo pip install --user -e flax pip install git+https://github.com/deepmind/optax.git Setup git-lfs sudo apt install git-lfs git lfs install Login to your huggingface account huggingface-cli login Save your git credentials on the local VM (Not secure, do this only if you are the only person who has access to the TPU-VM instance) git config --global credential.helper 'store --file ~/.git-credentials' git credential fill #Type/Paste the following two lines: protocol=https host="huggingface.co" #Now hit Enter until you are prompted to enter your huggingface user and password (Struck through in the original, i.e. probably not needed:) Not sure if this is needed, but in the terminal session you are about to run your training script in, you might want to type: export XRT_TPU_CONFIG="localservice;0;localhost:51011" Continue by following the instructions in this tutorial 6 Hope it helps
0