Dataset columns:
docs: string (4 classes)
category: string, length 3 to 31
thread: string, length 7 to 255
href: string, length 42 to 278
question: string, length 0 to 30.3k
context: string, length 0 to 24.9k
marked: int64, 0 or 1
huggingface
Intermediate
Finetuning T5 for a task
https://discuss.huggingface.co/t/finetuning-t5-for-a-task/9558
In the paper for T5, I noticed that the inputs to the model always include a prefix (e.g. “summarize: …” or “translate English to German: …”). When I finetune a T5 model, can I use any phrase/word that I want as a prefix, or can T5 only understand a specific predefined list of prefixes?
T5 has only been trained on a specific set of prefixes. You can find a list here: https://arxiv.org/pdf/1910.10683.pdf (starting at page 47). That said, you can just finetune without a prefix (or with a custom prefix) and it should still work out.
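For reference, a minimal sketch of what fine-tuning with a custom prefix can look like (the prefix string, checkpoint name and example texts below are placeholders, not taken from the thread): you simply prepend the prefix to the source text before tokenizing.
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
prefix = "my custom task: "  # any phrase works; the model learns its meaning during fine-tuning
inputs = tokenizer(prefix + "some input text", return_tensors="pt")
labels = tokenizer("the expected target text", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # minimize this loss during fine-tuning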
0
huggingface
Intermediate
Linear learning rate despite lr_scheduler_type=“polynomial”
https://discuss.huggingface.co/t/linear-learning-rate-despite-lr-scheduler-type-polynomial/9673
Hello, While fine-tuning my network, I would like to set up a polynomial learning rate scheduler by setting lr_scheduler_type="polynomial" and learning_rate=0.00005. However, when I visualize the learning rate on the wandb dashboard, I’m observing a linear decrease of the learning rate instead of polynomial. (screenshot of the wandb learning-rate curve omitted) What might be causing the issue? I’ve tested without setting the learning_rate and the behavior was exactly the same.
Polynomial scheduler is not really supported via this argument, as it requires an additional power keyword argument that defaults to 1 and which can’t be set via this API. You should thus set the scheduler directly in the Trainer.
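A hedged sketch of what "setting the scheduler directly" could look like: build the optimizer and a polynomial schedule yourself and hand them to the Trainer through its optimizers argument. The model, training_args, train_dataset and step count below are assumed to exist already.
from torch.optim import AdamW
from transformers import Trainer, get_polynomial_decay_schedule_with_warmup
optimizer = AdamW(model.parameters(), lr=5e-5)
num_training_steps = 10_000  # compute this from your dataset size, batch size and epochs
scheduler = get_polynomial_decay_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=num_training_steps, power=2.0)  # power > 1 gives a non-linear decay
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, optimizers=(optimizer, scheduler))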
0
huggingface
Intermediate
Finetuning from multiclass to multilabel
https://discuss.huggingface.co/t/finetuning-from-multiclass-to-mutlilabel/9658
Hello, I got a specific classification task where I finetuned a pretrained BERT Model on a specific task concerning customer reviews (classify a text as “customer service text”, “user experience” etc.): num_labels_cla = 8 model_name_cla = "bert-base-german-dbmdz-uncased" batch_size_cla = 32 model = AutoModelForSequenceClassification.from_pretrained(model_name_cla, num_labels=num_labels_cla) tokenizer = AutoTokenizer.from_pretrained(model_name_cla) As you can see I got 8 distinct classes. My finetuned classification model scores pretty well on unseen data with an F1 score of 80.1. However, it is possible that one text belongs to 2 different classes. My question now is how I have to change my code to achieve that? I already transformed my target variable with MultiLabelBinarizer such that my target variable looks like this: [0, 0, 0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 1, 0, 0, 0, 0], I am using the HuggingFace Trainer instance for finetuning. Cheers
You can set the problem_type of an xxxForSequenceClassification model to multi_label_classification when instantiating it, like so: from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("bert-base-german-dbmdz-uncased", problem_type="multi_label_classification", num_labels=num_labels_cla) This ensures that the BCEWithLogitsLoss is used instead of the CrossEntropyLoss, which is necessary for multi-label classification. You can then fine-tune just like you would do with multi-class classification.
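One extra detail worth noting (an assumption based on how BCEWithLogitsLoss works, not something stated in the answer): the multi-hot targets should be float tensors, otherwise the loss computation will complain about the label dtype. The tokenizer and model below refer to the snippets above.
import torch
labels = torch.tensor([[0, 0, 0, 1, 0, 0, 0, 0], [1, 0, 0, 1, 0, 0, 0, 0]], dtype=torch.float)  # one row per text, one column per class
batch = tokenizer(["first review text", "second review text"], padding=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
print(outputs.loss)  # BCEWithLogitsLoss when problem_type="multi_label_classification"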
0
huggingface
Intermediate
Upload a TF model to Huggingface
https://discuss.huggingface.co/t/upload-a-tf-model-to-huggingface/9662
Hi, I am pre-training a BERT model from scratch using TensorFlow. I’ve seen the method to push PyTorch models, but I don’t know how to do it with my TF model. Here is how I imagine I have to do it: 1- Convert my checkpoint from TF to torch 2- Push to HF Is this correct? Another question is: I don’t know how to push the tokenizer; all I have is my vocab.txt, tokenizer.vocab and tokenizer.model. How can I do this please? Thanks
If you pre-trained BERT from scratch in TF using the run_mlm.py script, you can easily convert the model from TF to PyTorch, like so: from transformers import BertForMaskedLM model = BertForMaskedLM.from_pretrained("name of directory where the run_mlm.py script saved all files", from_tf=True) model.save_pretrained("name of directory where you'd like to save all model files") Next, you can easily push it to the hub as follows (I’m assuming you’re in a Colab notebook): First, install git-LFS: !sudo apt-get install git-lfs !git config --global user.email "<your email>" !git config --global user.name "<your name>" Next, create a repo on the hub, then git clone it: git clone <URL of your repository on the hub> Next, add your files and upload them: git add . git commit -m "First commit" git push
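As an alternative sketch (assuming a recent transformers version and that you are logged in with huggingface-cli login; the repository name is made up), the push_to_hub helpers can replace the manual git steps, and a WordPiece vocab.txt can usually be loaded directly by BertTokenizer:
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("directory containing vocab.txt")
model.push_to_hub("your-username/bert-pretrained-from-scratch")
tokenizer.push_to_hub("your-username/bert-pretrained-from-scratch")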
0
huggingface
Intermediate
Correct way to use pre-trained models
https://discuss.huggingface.co/t/correct-way-to-use-pre-trained-models/9444
I want to solve a multiclass-multilabel (MLMC) classification problem using the Conv-BERT model. Steps that I have taken: I downloaded the Conv-BERT model from this link: YituTech/conv-bert-base · Hugging Face <<YituTech/conv-bert-base>> from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification, BertAdam tokenizer = BertTokenizer.from_pretrained("path_to_Conv-Bert_model", do_lower_case = True) model = BertForSequenceClassification.from_pretrained("path_to_Conv-Bert_model", num_labels = 240) model.cuda() I want to understand: can we call any classification module from Hugging Face and pass any pre-trained model to it (Roberta, Conv-BERT, and so on), as in the example above?
The question is a bit too vague, but the BERT-like family of models can be loaded using the code above, and similarly for others; ConvBERT, for example, has its own classes (see: ConvBERT — transformers 4.7.0 documentation). The classification head will be initialized with random weights, so the model will need fine-tuning.
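For concreteness, a sketch of the Auto-class route with the current transformers library (rather than the older pytorch_pretrained_bert package used in the question); the num_labels value is taken from the question:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = AutoModelForSequenceClassification.from_pretrained("YituTech/conv-bert-base", num_labels=240)  # classification head is randomly initialized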
0
huggingface
Intermediate
BERT finetuning “index out of range in self”
https://discuss.huggingface.co/t/bert-finetuning-index-out-of-range-in-self/9447
Hello everyone, I am trying to build a Multiclass Classifier with a pretrained BERT model. I am completely new to the topic. I have 8 classes and use Huggingface’s Dataset infrastructure to finetune a pretrained model for the german language: from transformers import AutoModelForSequenceClassification from transformers import Trainer, TrainingArguments from sklearn.metrics import accuracy_score, f1_score num_labels_cla = 8 model_name_cla = "bert-base-german-dbmdz-uncased" batch_size_cla = 8 model = AutoModelForSequenceClassification.from_pretrained(model_name_cla, num_labels=num_labels_cla) def tokenize(batch): return tokenizer(batch['text'], padding=True, truncation=True,max_length=260) def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) f1 = f1_score(labels, preds, average="weighted") acc = accuracy_score(labels,preds) return {"accuracy":acc, "f1":f1} My model shouldn’t be a sentiment classifier but a multilabel classifier which classifies customer reviews based on different label (e.g customer support etc.). When I train/finetune my model with the Huggingface Trainer() instance: #Encoding the data data_encoded = data_dict.map(tokenize, batched=True, batch_size=None) data_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"]) #Specify training arguments logging_steps=len(data_encoded["train"]) training_args = TrainingArguments(output_dir='./results', num_train_epochs=3, learning_rate=2e-5, per_device_train_batch_size=batch_size_cla, per_device_eval_batch_size=batch_size_cla, load_best_model_at_end=True, metric_for_best_model="f1", weight_decay=0.01, evaluation_strategy="steps", eval_steps = 2, disable_tqdm=False, logging_steps=logging_steps) #Specify trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=data_encoded['train'], eval_dataset=data_encoded['test'] ) #Train trainer.train() After 6 steps I get the following error: ~/miniconda3/envs/textmallet/lib/python3.9/site-packages/torch/nn/modules/sparse.py in forward(self, input) 156 157 def forward(self, input: Tensor) -> Tensor: --> 158 return F.embedding( 159 input, self.weight, self.padding_idx, self.max_norm, 160 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/miniconda3/envs/textmallet/lib/python3.9/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2041 # remove once script supports set_grad_enabled 2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2044 2045 IndexError: index out of range in self Does anyone have any idea what I could change in my code? Cheers
Hi @marlon89 I can’t see the tokenizer you are loading. Are you sure it is the correct one (e.g. AutoTokenizer.from_pretrained("bert-base-german-dbmdz-uncased"))?
0
huggingface
Intermediate
Encoding/decoding NLP model in tensorflow lite (fine-tuned GPT2)
https://discuss.huggingface.co/t/encoding-decoding-nlp-model-in-tensorflow-lite-fine-tuned-gpt2/6503
We are in the process of building a small virtual assistant and would like it to be able to run a fine-tuned version of GPT-2 on a raspberry-pi with a coral accelerator. So far, we managed to convert our model to a tflite and to get first results. We know how to convert from words to indices with the previous tokenizer but then we need a bigger tensor as input to the interpreter. We miss the conversion from indices to tensors. Is there a way to do this simply? You can find our pseudo-code here, we are stuck at step 2 and 6 : import tensorflow as tf #Prelude TF_MODEL_PATH_LITE = "/path/model.tflite" interpreter = tf.lite.Interpreter(model_path=TF_MODEL_PATH_LITE) interpreter.allocate_tensors() tokenizer = GPT2Tokenizer.from_pretrained('gpt2') input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_shape = input_details[0]['shape'] #1-Encode input, giving you indices context_idx = tokenizer.encode("Hello world.", return_tensors = "tf") #2-How to convert the context_idx to appropriate np.array ? input_data = np.array(np.random.random_sample(input_shape), dtype=np.int32) #dummy input for now #3- feed input interpreter.set_tensor(input_details[0]['index'], input_data) #4- Run model interpreter.invoke() #5- Get output as tensor output_data = interpreter.get_tensor(output_details[0]['index']) #6- How decode this np array to idx output_idx=np.random.randint(100) #dummy for now ... #7- Decode Output from idx to word string_tf = tokenizer.decode(output_idx, skip_special_tokens=True)
Were you able to accelerate your model with the Coral TPU in the end?
0
huggingface
Intermediate
Does fine-tuning a language model modify its hidden weights?
https://discuss.huggingface.co/t/does-fine-tuning-a-language-model-modify-its-hidden-weights/9087
If i load a pre-trained language model (say BERT) and use a standard pytorch implementation (as shown in the code-block), are the weights of the bert-model updated ? and if not is it recommended to try doing so for a down-stream task ? Note: the task involves using the bert embeddings for clustering text using a clustering algorithm (like KMeans, DBSCAN, etc…) class Model(nn.Module): def __init__(self, name): super(Model, self).__init__() self.bert = transformers.BertModel.from_pretrained(config['MODEL_ID'], return_dict=False) self.bert_drop = nn.Dropout(0.0) self.out = nn.Linear(config['HIDDEN_SIZE'], config['NUM_LABELS']) self.model_name = name def forward(self, ids, mask, token_type_ids): _, o2 = self.bert(ids, attention_mask = mask, token_type_ids = token_type_ids) bo = self.bert_drop(o2) output = self.out(bo) return output
When you do optimizer = torch.optim.SGD(Model("bert").parameters(), lr=1e-3), the parameters of the pretrained transformer are ready to be updated in addition to the other layers you introduce. In my experience, updating only the newly introduced layers during fine-tuning resulted in very slow convergence, and I would recommend updating the transformer weights as well. If you use a clustering algorithm that assumes embeddings close to each other in the vector space are semantically related, I recommend that you use a loss function to enforce such behavior during fine-tuning (maybe triplet loss with siamese modeling like SBERT), because the CLS embedding space, so to speak, is not constructed with such concerns, unlike context-independent word embedding spaces.
0
huggingface
Intermediate
Training a language model from scratch with tensorflow (not pytorch)?
https://discuss.huggingface.co/t/training-a-language-model-from-scratch-with-tensorflow-not-pytorch/9002
Hello there, I am interested in training a language model from scratch. Not fine tuning the usual distilbert - running the whole thing on my GPU instead! I found this interesting notebook, How to train a new language model from scratch using Transformers and Tokenizers, and I would be interested to know if there is one that uses tensorflow instead. I cannot have pytorch on my machine unfortunately. Is there a huggingface example notebook that would help me do that? Thanks!
Hi there! This might be what you’re after: transformers/examples/tensorflow/language-modeling at master · huggingface/transformers · GitHub. You’d use the run_clm script for a GPT-2 like model, and the run_mlm script for a BERT-like model. EDIT: If you’re able to use docker on your machine, you could also use a huggingface image to run that notebook you linked to.
0
huggingface
Intermediate
`serving` signature in TensorFlow Serving blogpost
https://discuss.huggingface.co/t/serving-signature-in-tensorflow-serving-blogpost/9005
Hi everyone! I am currently working through @jplu’s blogpost on serving a HuggingFace model with TF-Serving, in which he overwrites the model’s serving method to change the signature of the traced graph input to accept embeddings. That, in turn, led me to discover that this serving signature is part of all TF models (Models — transformers 4.7.0 documentation). Can someone explain to me how exactly this serving method is used by the model server? I can’t find it referenced in the rest of the tutorial and I wasn’t successful in finding my way around the codebase. Is that redefined signature used at all in the tutorial? I might be mistaken, but it seems to me that the requests (both REST and gRPC) to the TF-server use the output of the tokenizer, not those of an embedding layer.
cc @Rocketknight1
0
huggingface
Intermediate
Load fine tuned model in tensorflow
https://discuss.huggingface.co/t/load-fine-tuned-model-in-tensorflow/8932
I fine-tuned a pre-trained model (wav2vec) on Hugging Face using the transformers library and converted it from PyTorch to TensorFlow. I want to load this fine-tuned model in TensorFlow, but I can’t seem to find any tutorials showcasing how to. Any help would be appreciated.
from transformers import TFWav2Vec2ForXxx TFWav2Vec2ForXxx.from_pretrained(model_name_or_path) where model_name_or_path is the folder where you stored that model, or the name of the repo you pushed it to on the Hugging Face Hub.
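A concrete sketch of the placeholder above, assuming a CTC head and local folders with the converted weights (the folder names are made up):
from transformers import TFWav2Vec2ForCTC
model = TFWav2Vec2ForCTC.from_pretrained("path/to/converted-tf-wav2vec2")
# if the folder only contains PyTorch weights, from_pt=True converts them on the fly
model = TFWav2Vec2ForCTC.from_pretrained("path/to/pytorch-wav2vec2", from_pt=True)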
0
huggingface
Intermediate
Understanding zero-shot classification in one-shot ;-)
https://discuss.huggingface.co/t/understanding-zero-shot-classification-in-one-shot/8851
Hello there! I LOVE the zero-shot classification API. When you think about it, it could also be used as a search engine (find the documents that are related to a particular topic). My question is: conceptually, how does it work? I saw the original post New pipeline for zero-shot text classification but I cannot find anything that explains the model under the hood. My understanding is that: (1) the API gets the embedding for the input sequence; (2) the API computes the embedding for the possible labels; (3) a similarity measure is computed between the input sequence and each of the labels; and (4) via a softmax, a probability over the topics is then given. Is that correct? Thanks and keep up the good work!
hey @olaffson, conceptually i believe the zero-shot classification pipeline works by adapting a task like natural language inference, where the language model is provided with a “template” like "<some text you want to classify>. This text is about <MASK>" and the model fills in the most probable label given the context. joe discusses this in this section of his blog post, and you can find one of the underlying zero-shot models here.
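A small sketch of how the pipeline is typically called; under the hood it scores an entailment hypothesis such as "This text is about sports." against the input for each candidate label (the model shown is the usual BART-MNLI checkpoint, and the example text and labels are made up):
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
classifier("The new GPU doubles the frame rate in most games.", candidate_labels=["sports", "technology", "politics"], hypothesis_template="This text is about {}.")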
0
huggingface
Intermediate
How to improve summarization?
https://discuss.huggingface.co/t/how-to-improve-summarization/8878
This might bit a tricky question (because summarization is difficult), but consider the example shown in the documentation summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20) Your max_length is set to 20, but you input_length is only 13. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50) Out[49]: [{'summary_text': 'apple a day keeps the doctor away from the doctor.'}] I find the summarized text a bit… interesting Granted, this was a very short example. But let’s consider a more real-life example taken from the NYT: mytext = ‘’’ The highly contagious Delta variant is now responsible for almost all new Covid-19 cases in the United States, and cases are rising rapidly. For the first time since February, there were more than 100,000 confirmed cases on Tuesday, the same day the Centers for Disease Control and Prevention recommended that vaccinated people should resume wearing masks in public indoor spaces in communities where the virus is surging. That updated guidance was based in part on a new internal report that cited evidence that vaccinated people experiencing breakthrough infections of the Delta variant, which remain infrequent, may be as capable of spreading the virus as infected unvaccinated people. Several studies, including ones referenced in the C.D.C.’s presentation, have shown that vaccines remain effective against the Delta variant, particularly against hospitalization and death. That has held true in the real world: About 97 percent of those recently hospitalized by the virus were unvaccinated, the C.D.C. said. But in counties where vaccination rates are low, cases are rising fast, and deaths are also on the rise. summarizer(mytext, min_length=5, max_length=20) Out[51]: [{'summary_text': 'the highly contagious variant is now responsible for almost all new cases in the united states '}] But… this is just the first sentence of the whole paragraph almost verbatim! What do you think we can do to improve the output of the summarization pipeline? Can summarization be trained? Thanks!
A funny result from NLP work on summarization is that the first sentence of a news article usually turns out to be a pretty challenging baseline to beat. This isn’t necessarily a bad thing—it’s great for us as human readers that news writers do this! Summarization is an active area of research, with both supervised and unsupervised training approaches. If you check out NAACL 2021, the most recent NLP conference (the very latest, ACL 2021, starts today), there are twenty-seven papers about summarization!
0
huggingface
Intermediate
Computing similarity between sentences
https://discuss.huggingface.co/t/computing-similarity-between-sentences/8782
Hello there, I came across this very interesting post (Sentence Transformers in the Hugging Face Hub) that essentially shows a way to extract the embeddings for a given word or sentence: from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) Just a few questions if someone has a few moments: how are these embeddings different from contextual embeddings that I would get with distilbert and other transformer models? More importantly, once I have the embeddings I can simply compute a cosine similarity metric with other sentences to cluster by similarity. If so, what is the need for the API as described in Sentence Transformers in the Hugging Face Hub? Am I missing something more subtle here? Thanks!!
hey @olaffson, as described in the sentencebert paper, SBERT uses a siamese network structure to learn the sentence embeddings (screenshot of the paper's architecture figure omitted). in general, this approach gives higher-quality embeddings than those you’d get from distilbert etc, and you can find a nice performance chart here. regarding your second question, i’m not sure which api you’re referring to exactly in the blog post (which is mostly about the integration of sentence-transformers with the hugging face hub). but indeed, once you have the embeddings you can compute metrics / cluster using whatever tools you wish
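As a small sketch of the "compute metrics / cluster" step, assuming the embeddings array from the snippet in the question (scikit-learn is used here purely as an example toolkit):
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans
scores = cosine_similarity(embeddings)  # pairwise cosine similarities between sentences
clusters = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)  # toy clustering of the two example sentences
print(scores, clusters)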
0
huggingface
Intermediate
How can state-of-the-art classifiers be so wrong?
https://discuss.huggingface.co/t/how-can-state-of-the-art-classifiers-be-so-wrong/8754
Hello there! Sorry for the provocative question but consider this simple example from transformers import pipeline classifier = pipeline(task = 'sentiment-analysis') classifier('this is so good!') Out[4]: [{'label': 'POSITIVE', 'score': 0.9998476505279541}] classifier('this is so gooood!') Out[5]: [{'label': 'NEGATIVE', 'score': 0.9922532439231873}] How can gooood be treated as negative with a very high confidence score? How can I fix this behavior? Thanks!
The problem you are encountering has little to do with whether the language model is state-of-the-art, but rather the language dialect(s) used to train the model versus those dialects used at inference time. Most likely you are using a model trained on standard English, such as the text that appears in Wikipedia, whereas your second query uses the slang term ‘gooood’ which is likely outside the vocabulary used during training. I suspect that you would encounter similar issues with ‘goooooood’, ‘soooo gooood’, vulgarities, acronyms (e.g.: lol, WTF) or words whose slang definition may differ from standard English (e.g.: bad, fly). Dialect can be an important issue when dealing with casual speech, tweets, medical literature, clinical records, patents or other domains that use specialized language variants. Possible approaches: Select a pre-trained model trained on the language dialect of interest. For example, cardiffnlp/twitter-roberta-base-sentiment is a variant of RoBERTa trained on tweets. The huggingface model library also includes models trained on Reddit text. Fine-tune an existing pre-trained model through supplementary training using examples of the language dialect of interest. Huggingface includes datasets related to Reddit and tweets that may be of use for this task. Using an existing base language model trained on the dialect of interest (e.g.: Reddit), and a classification head and train using a standard sentiment analysis training dataset
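A sketch of the first suggested approach, loading the tweet-trained checkpoint named in the answer (note that this model returns LABEL_0/1/2, which its model card maps to negative/neutral/positive):
from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="cardiffnlp/twitter-roberta-base-sentiment")
print(classifier("this is so gooood!"))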
0
huggingface
Intermediate
How to ignore attributes of TrainingArguments?
https://discuss.huggingface.co/t/how-to-ignore-attributes-of-trainingarguments/8797
I am trying to subclass Huggingface’s Trainer and overwrite it with custom optimizer and lr_scheduler from transformers import TrainingArguments training_args = TrainingArguments( "rugpt3-headers", num_train_epochs=3, per_device_train_batch_size=3, evaluation_strategy="epoch", logging_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, ) from transformers import Trainer, AdamW from torch.optim.lr_scheduler import MultiStepLR class MyTrainer(Trainer): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def create_optimizer(self): self.optimizer = AdamW(model.parameters(),lr = 1e-7) def create_scheduler(self, num_training_steps): self.lr_scheduler = MultiStepLR(self.optimizer, milestones=[1,3], gamma=0.5) trainer = MyTrainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, callbacks=[EarlyStoppingCallback(early_stopping_patience=3)], ) trainer.train() Even though I did not specify learning_rate in TrainingArguments , it has a default value of 5e-7 . My attempt to overwrite the optimizer and scheduler is not successful because of that. After my training was completed, I used tensorboard to check which learning rate was used and it is still 5e-07 even though I thought I overwrote it. How to overcome this issue? I want it to set up my learning rate based on def create_optimizer(self): self.optimizer = AdamW(model.parameters(),lr = 1e-7) def create_scheduler(self, num_training_steps): self.lr_scheduler = MultiStepLR(self.optimizer, milestones=[1,3], gamma=0.5) I wrote above and ignore the default learning rate of TrainingArguments
Are you sure you are looking at the right logs? The learning_rate from the TrainingArguments is only used once in the create_optimizer method and you overrode it, so there is no reason for it to be used. Also, make sure you are on the latest version of Transformers, just in case.
0
huggingface
Intermediate
Mismatched target and input size for BCE using “multi_label_classification”
https://discuss.huggingface.co/t/mismatched-target-and-input-size-for-bce-using-multi-label-classification/8706
I am trying to build a multi-label, multi-class classification model. Any input text can have zero or more labels, up to 11 possible classes. I have been trying to use the problem_type="multi_label_classification" and everything looks OK, but I get ValueError: Target size (torch.Size([16, 11])) must be the same as input size (torch.Size([16, 2])) when it tries to calculate the binary_cross_entropy_with_logits I presume my data is in the wrong shape somehow, but I can’t see where exactly. Any suggestions? transformers==4.8.2 Here is a minimal example: import torch from torch.utils.data.dataset import Dataset from transformers import AutoTokenizer, AutoModelForSequenceClassification # Example data. # In reality, the strings are usually longer and there are 11 possible classes texts = [ "This is the first sentence.", "This is the second sentence.", "This is another sentence.", "Finally, the last sentence.", ] labels = [ [0, 0, 0, 0, 1], [1, 0, 0, 0, 0], [0, 1, 1, 0, 0], [0, 0, 0, 0, 0], ] train_texts = texts[:2] train_labels = labels[:2] eval_texts = texts[2:] eval_labels = labels[2:] tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) train_encodings = tokenizer(train_texts, padding="max_length", truncation=True, max_length=512) eval_encodings = tokenizer(eval_texts, padding="max_length", truncation=True, max_length=512) class TextClassifierDataset(Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __len__(self): return len(self.labels) def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item["labels"] = torch.tensor(self.labels[idx]) return item train_dataset = TextClassifierDataset(train_encodings, train_labels) eval_dataset = TextClassifierDataset(eval_encodings, eval_labels) model = AutoModelForSequenceClassification.from_pretrained( "bert-base-uncased", problem_type="multi_label_classification", ) training_arguments = TrainingArguments( output_dir=".", evaluation_strategy="epoch", per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=1, ) trainer = Trainer( model=model, args=training_arguments, train_dataset=train_dataset, eval_dataset=eval_dataset, ) trainer.train() # long traceback, but here is the important bit... ~/python3.8/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 711 712 def forward(self, input: Tensor, target: Tensor) -> Tensor: --> 713 return F.binary_cross_entropy_with_logits(input, target, 714 self.weight, 715 pos_weight=self.pos_weight, ~/python3.8/site-packages/torch/nn/functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight) 2956 2957 if not (target.size() == input.size()): -> 2958 raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size())) 2959 2960 return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum) ValueError: Target size (torch.Size([16, 11])) must be the same as input size (torch.Size([16, 2]))
When defining your model, you did not specify the number of labels (with num_labels=xxx).
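Applying that fix to the snippet above would look roughly like this (the float cast of the labels is an extra assumption, needed because BCEWithLogitsLoss expects float targets):
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", problem_type="multi_label_classification", num_labels=11)
# and in TextClassifierDataset.__getitem__:
# item["labels"] = torch.tensor(self.labels[idx], dtype=torch.float)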
0
huggingface
Intermediate
Text classification on small dataset (8K)
https://discuss.huggingface.co/t/text-classification-on-small-dataset-8k/8615
Hello, I’m trying to find a good architecture for a model which has to do text classification. The domain is a chat-bot doing a help-desk. Its goal is to book appointments for customers who need some machines to be repaired. The current model only has to classify single utterances in one of the 20 categories. I have around 8K examples in my data set. I’m wondering if there is some recommended type of architecture/model based on tranformers for this type of model. I tried a model with a frozen DistilBERT layer followed by a fully connected layer, before a classification layer. So basically the same architecture than DistilBertForSequenceClassification presentend here: https://huggingface.co/transformers/_modules/transformers/models/distilbert/modeling_distilbert.html#DistilBertForSequenceClassification but with the DistilBERT layer frozen. But the results were so so. So I’m thinking about two things: Is there something more appropriate than DistilBERT for my set-up? Should I maybe keep only some layers of DistilBERT frozen and not all of them? If anyone has a suggestion, I would be glad to hear about it. Thank you in advance!
In my experience (I worked with BERT and RoBERTa), not updating the transformer model parameters during fine-tuning resulted in lower accuracy and slower decrease in the loss value. This might mean that the fully connected layer alone is not enough to model the task at hand. I suggest updating the parameters of the DistilBert model as well, which is what fine-tuning is for. I should also note that freezing the first 6 layers of BERT-base did not decrease the accuracy of the model significantly, in my case.
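A short sketch of that partial-freezing idea applied to DistilBERT (freezing the embeddings plus the first three of its six transformer blocks is an arbitrary choice for illustration; the checkpoint name is an assumption):
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=20)
for param in model.distilbert.embeddings.parameters():
    param.requires_grad = False
for block in model.distilbert.transformer.layer[:3]:
    for param in block.parameters():
        param.requires_grad = False  # the remaining blocks and the classification head stay trainable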
0
huggingface
Intermediate
Adding Preprocessing to Hosted Inference API
https://discuss.huggingface.co/t/adding-preprocessing-to-hosted-inference-api/8310
My question answering model requires an extra step of preprocessing the input and the output. How can I add those preprocessing scripts to work with Hosted Inference API (Widget on the website, too)?
hey @yigitbekir, as far as i know the inference api does not support custom pre- / post-processing logic, but you could easily include these steps within a dedicated web / streamlit application. if you need a custom widget, you can propose one on the huggingface_hub library here: GitHub - huggingface/huggingface_hub: Client library to download and publish models and other files on the huggingface.co hub
0
huggingface
Intermediate
Segmentation fault (Core dumped) with datasets
https://discuss.huggingface.co/t/segmentation-fault-core-dumped-with-datasets/8165
While trying to download a large dataset(~100GB), without streaming mode like this: from datasets import load_dataset mc4_dataset = load_dataset("mc4", "hi") I first got an error: multiprocessing.pool.RemoteTraceback: ConnectionError: Couldn't reach https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/multilingual/c4-hi.tfrecord-00709-of-01024.json.gz On running the same 2 line script again, the downloads resumed but then crashed with a single line message Segmentation fault (core dumped). Rerun of same script again gives the following message: Downloading and preparing dataset mc4/hi (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/user/.cache/huggingface/datasets/mc4/hi/0.0.0/a2bc8f2c4d913b8b16fac4d1a63d673fa6cb22859520dcac7f193feec1f00cae... Segmentation fault (core dumped) Any suggestions on how to debug this error? There’s a lock file in ~/.cache/huggingface/datasets/ ~/.cache/huggingface/datasets/mc4/hi/ contains hash.“incomplete” directory, which is empty ~/.cache/huggingface/datasets/downloads/ contains a lot of hash-id files and locks. In this state, is there anything we could do to repair the state and continue without having to re-download entire dataset from scratch? Also in worst case, is there anydatasets alternative of rm -r purge, as not only mc4 but I think the lock and the downloads dir contents will need to go away.
@lhoestq have you seen this before by any chance?
0
huggingface
Intermediate
Determinism in sequence classification
https://discuss.huggingface.co/t/determinism-in-sequence-classification/7841
Hi! I noticed that in order to get reproducible results across runs when carrying out sequence classification, the type conversion for the labels from e.g. string to int has to be done consistently, i.e. every class has to be assigned to the same int across runs. Say I have 5 classes: ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, and they get mapped to 0, 1, 2, 3, 4, respectively. If I run the exact same experiment but just changing this mapping to, say, 4, 3, 2, 1, 0, results won’t be the same. I’m assuming the distribution is not known (i.e. dataset can be imbalanced). In fact, this is also mentioned in a comment in the run_glue.py example (here). My question is simply: why? Why does this matter when computing cross entropy loss? Thanks
Even if you use the same seed, you will not get the same results, because the last layer of the model, the classification head, will be randomly initialized the same way whether your labels are mapped to 0, 1, 2, 3, 4 or to another permutation. That means that the initial weights for ‘a’ will be different if ‘a’ is mapped to 0 or to 4. Then going from there, you will get different losses, so different gradients and different updates, and you will end up with a completely different model.
0
huggingface
Intermediate
Calculating perplexity from hidden_states
https://discuss.huggingface.co/t/calculating-perplexity-from-hidden-states/7547
Hi all, I am trying to run ray tune for my masked language model, I want to find the best hyperparameters that will minimize perplexity of the model. I am not able to figure out how to calculate perplexity using the model’s hidden_states, which is returned as EvalPrediction.predictions. Any help will be greatly appreciated. Thank you! following code snippet show the training. model_checkpoint = "distilroberta-base" model = AutoModelForMaskedLM.from_pretrained(model_checkpoint).to('cuda') def compute_custom_metric(eval_pred): # following will print (3387, 32, 50265) (beach_size * max_output_len * vocal_size) print(eval_pred.predictions.shape) # following will print (3387, 32) (batch_size * max_output_len) print(eval_pred.label_ids.shape) return {'custom_metric': 0} trainer = Trainer( model = model, args = training_args, train_dataset = train, eval_dataset = validation, tokenizer = tokenizer, data_collator = data_collator, compute_metrics = compute_custom_metric ) trainer.evaluate()
After extensive searching finally found the solution. Below function calculates perplexity after every epoch. FYI I have also added my training arguments. def compute_custom_metric(pred): logits = torch.from_numpy(pred.predictions) labels = torch.from_numpy(pred.label_ids) loss = F.cross_entropy(logits.view(-1, tokenizer.vocab_size), labels.view(-1)) return {'perplexity': math.exp(loss), 'calculated_loss': loss} training_args = TrainingArguments( output_dir='./some_results', evaluation_strategy = "epoch", num_train_epochs=3, learning_rate=1e-5, per_device_train_batch_size=4, per_device_eval_batch_size=4, warmup_steps=500, weight_decay=0.01, logging_dir='./some_logs', logging_steps=logging_steps, seed=seed, fp16=True, eval_accumulation_steps=50, )
0
huggingface
Intermediate
Passing Trainer state as an artifact in kfp.v2 pipeline
https://discuss.huggingface.co/t/passing-trainer-state-as-an-artifact-in-kfp-v2-pipeline/7427
I’m trying to create a kfp pipeline fine-tuning BERT on GCP’s VertexAI. I can save a model as an artifact, but have troubles saving the Trainer (Trainer’s state), as I would like to split eval and testing into two separate pipeline components and to achieve that - I need to reconstruct the Trainer in the latter. I have, created trainer_artifact and would like to save_state() into tainer_artifact.path. However, save_state() does not accept arguments and saves to model’s directory by default. How can I save trainer as an artifact in this situation? Maybe there is a workaround for it, eg. recreating it in the next step? I attach my code. @component( packages_to_install = [ "pandas", "datasets", "transformers" ], ) def fine_tune_modell( small_train_dataset: Input[Dataset], small_eval_dataset: Input[Dataset], # full_train_dataset: Input[Dataset], # full_eval_dataset: Input[Dataset], model_artifact: Output[Model], trainer_artifact: Output[Artifact] ): import pandas as pd import numpy as np import datasets from transformers import AutoModelForSequenceClassification from transformers import TrainingArguments from transformers import Trainer from datasets import load_metric # create model model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2) # load data train_data = datasets.load_from_disk(small_train_dataset.path) eval_data = datasets.load_from_disk(small_eval_dataset.path) metric = load_metric("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) training_args = TrainingArguments( output_dir="test_trainer", evaluation_strategy="epoch", per_device_train_batch_size=8, per_device_eval_batch_size=8, num_train_epochs=3, seed=0,) trainer = Trainer( model=model, args=training_args, train_dataset=train_data, eval_dataset=eval_data, compute_metrics=compute_metrics ) train_output = trainer.train() model_artifact.metadata["train_output"] = train_output model_artifact.metadata["framework"] = "Pytorch" # How to save trainer? model.save_model(model_artifact.path)```
Maybe you can move the state file that gets created?
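One possible workaround (a hedged sketch, since the right artifact path depends on your pipeline setup): the TrainerState object can be written to an explicit JSON path and reloaded in the next component, instead of relying on save_state()'s default location.
from transformers import TrainerState
trainer.state.save_to_json(trainer_artifact.path)  # write the state where the kfp artifact expects it
restored_state = TrainerState.load_from_json(trainer_artifact.path)  # in the downstream component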
0
huggingface
Intermediate
How can I use keyBERT using huggingface inference API?
https://discuss.huggingface.co/t/how-can-i-use-keybert-using-huggingface-inference-api/6996
Hi guys, basically I want to use KeyBERT with the Hugging Face Inference API. Can someone please tell me how I can do that? How can I upload KeyBERT to the HF Inference API?
Hi @akarshghale! We have a guide 22 on how to integrate a new library into the HF Inference API which should be added to the main site the coming week. Feel free to open an issue if you have any questions!
0
huggingface
Intermediate
Problem installing using conda
https://discuss.huggingface.co/t/problem-installing-using-conda/5518
Hey everone. I’m trying to install transformers and datasets package using conda. I installed pytorch using conda, and I’m using miniconda with python version 3.7. My environment is also using python 3.7. Installation of transformers using the command conda install -c huggingface transformers works, but when testing the installation I get from transformers import pipeline Traceback (most recent call last): File “”, line 1, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/init.py”, line 43, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/dependency_versions_check.py”, line 36, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/file_utils.py”, line 56, in ModuleNotFoundError: No module named ‘importlib_metadata’ conda install importlib-metadata File “”, line 1 conda install importlib-metadata ^ SyntaxError: invalid syntax I thought possibly the following would solve it conda install importlib-metadata But when testing the installation I still get trouble: from transformers import pipeline Traceback (most recent call last): File “”, line 1, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/init.py”, line 2310, in getattr File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/file_utils.py”, line 1660, in getattr File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/init.py”, line 2304, in _get_module File “/home/nfs/tjviering/envs/torch3/lib/python3.7/importlib/init.py”, line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/pipelines/init.py”, line 24, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/modelcard.py”, line 31, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/models/init.py”, line 19, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/models/mt5/init.py”, line 36, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/models/t5/tokenization_t5_fast.py”, line 23, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/transformers-4.4.2-py3.8.egg/transformers/tokenization_utils_fast.py”, line 25, in File “/home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/tokenizers/init.py”, line 79, in from .tokenizers import ( ImportError: /lib64/libm.so.6: version `GLIBC_2.29’ not found (required by /home/nfs/tjviering/envs/torch3/lib/python3.7/site-packages/tokenizers/tokenizers.cpython-37m-x86_64-linux-gnu.so) Any help would be greatly appreciated!
For anyone wondering how to fix this issue, I found that when using the channel conda-forge there is no issue at all. So the fix: conda install -c conda-forge transformers conda install importlib-metadata
0
huggingface
Intermediate
How to concatenate the word embedding for special tokens and words
https://discuss.huggingface.co/t/how-to-concatenate-the-word-embedding-for-special-tokens-and-words/6761
I tried to add an extra dimension to the Huggingface pre-trained BERT tokenizer. The extra column represents the extra label. For example, if the original embedding of the word “dog” was [1,1,1,1,1,1,1], then I might add a special column with index 2 to represent ‘noun’. Thus, the new embedding becomes [1,1,1,1,1,1,1,2]. Then, I will feed the new input [1,1,1,1,1,1,1,2] into the Bert model. How can I do this in Huggingface? There is something called tokenizer.add_special_tokens which extends the original vocabulary with new tokens. However, I want to concatenate the embedding of the original vocabulary with the embedding of the tokenizer. For example, I want the Bert model to understand that Dog is a noun by connecting the embedding of dog to the embedding of noun. Should I even change the input word embedding of a pre-trained model? Or should I somehow enhance the attention on “dog” and “noun” in the middle layer? Here is the example of using tokenizer.add_special_tokens tokenizer = GPT2Tokenizer.from_pretrained(‘gpt2’) model = GPT2Model.from_pretrained(‘gpt2’) special_tokens_dict = {‘cls_token’: ‘’} num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) print(‘We have added’, num_added_toks, ‘tokens’) model.resize_token_embeddings(len(tokenizer)) assert tokenizer.cls_token == ‘’
I found a solution here: How to use additional input features for NER?
0
huggingface
Intermediate
Question Answering for generating long answers
https://discuss.huggingface.co/t/question-answering-for-generating-long-answers/6500
Hello, Most question answering models give short answers, around 3-4 words. What if I want to build a model which could elaborate on the answer rather than give a factoid-based answer? Here I have a context and a question, and I want the model to produce a long answer sequence based on the given context. Is there some way I can control the process to generate a longer answer sequence? Any model, or any idea how to approach this problem, would be helpful. Thanks.
hey @theainerd, it sounds like you’re looking for models that are trained for long-form question answering. here’s a great article by Yacine Jernite that shows how you can train such models (on the ELI5 dataset in this case): Long_Form_Question_Answering_with_ELI5_and_Wikipedia
0
huggingface
Intermediate
Preprocessing for T5 Denoising
https://discuss.huggingface.co/t/preprocessing-for-t5-denoising/6266
Hi all! I’m trying to perform continued pre-training on T5. Basically, I’m doing domain adaptation to new data before fine-tuning, and I want to make sure that I’m preprocessing data as similarly as possible to how T5 does it during pre-training (i.e., randomly corrupt 15% of tokens, pack sequences together for training examples of length 512, mask contiguous corrupted spans, reconstruct corrupted spans in target sequence). I know that there’s a TensorFlow implementation of this from Google (text-to-text-transfer-transformer/preprocessors.py at d72bd861de901d3269f45ec33c6ca6acd18b10b8 · google-research/text-to-text-transfer-transformer · GitHub 15), but has anyone implemented this in PyTorch for use with huggingface models? I’m hoping to make something like the linked span_corruption function work with torch tensors composed of tokenized text, rather than a tf.Dataset object. Thanks!
hey @amueller, i haven’t tried this myself, but it seems that pretraining T5 can be done by using sentinel tokens in the tokenizer, as described here: T5 — transformers 4.5.0.dev0 documentation
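For illustration, the span-corruption format with sentinel tokens looks like this (example adapted from the T5 documentation; it shows a single corrupted example, without the packing to length 512 mentioned in the question):
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
loss = model(input_ids=input_ids, labels=labels).loss  # denoising objective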
0
huggingface
Intermediate
T5 cross-attention - inconsistent results
https://discuss.huggingface.co/t/t5-cross-attention-inconsistent-results/5520
Environment info Python version: 3.7.10 PyTorch version (GPU?): '1.7.1+cu110' (True) Transformer version: '4.5.0.dev0' Details I am trying to use the cross-attention from the T5 model for paraphrasing. The idea is to map the input sentence and output generated sequence based on the attention. But the first results I got are very strange. I generated an example with the following code: from transformers import T5ForConditionalGeneration, T5Tokenizer import torch pretrained_model = "ramsrigouthamg/t5_paraphraser" model = T5ForConditionalGeneration.from_pretrained(pretrained_model, output_attentions=True, output_scores=True) translated_sentence = "I like drinking Fanta and Cola." text = "paraphrase: " + translated_sentence + " </s>" encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"], encoding["attention_mask"]. Then, I gave a look to the cross attention for each generated token by selecting the last layer of the encoder and the first head. beam_outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, do_sample=True, max_length=256, top_k=100, top_p=0.95, num_return_sequences=1, output_attentions = True, output_scores=True, return_dict_in_generate=True ) sentence_id = 0 print("Input phrase: ", tokenizer.decode(encoding.input_ids[0], skip_special_tokens=False, clean_up_tokenization_spaces=False)) print("Predicted phrase: ", tokenizer.decode(beam_outputs.sequences[sentence_id], skip_special_tokens=True, clean_up_tokenization_spaces=True)) for out in range(len(beam_outputs.sequences[sentence_id])-1): print( "\nPredicted word: ", tokenizer.decode(beam_outputs.sequences[sentence_id][out], skip_special_tokens=True, clean_up_tokenization_spaces=True)) att = torch.stack(beam_outputs.cross_attentions[out]) # Last layer of the encoder att = att[-1] # First batch and first head att = att[0, 0, :, :] att = torch.squeeze(att) idx = torch.argsort(att) idx = idx.cpu().numpy() print("Input words ordered by attention: ") for i in range(min(5, len(idx))): token_smallest_attention =tokenizer.decode(encoding.input_ids[0][idx[i]], skip_special_tokens=True, clean_up_tokenization_spaces=True) token_largest_attention =tokenizer.decode(encoding.input_ids[0][idx[-(1+i)]], skip_special_tokens=True, clean_up_tokenization_spaces=True) print(f"{i+1}: Largest attention: {token_largest_attention} | smallest attention:{token_smallest_attention}") The attention scores are sorted and each generated token is associated with the input with the highest attention (5 values) and with the lowest attentions (also 5 values). Input phrase: paraphrase: I like drinking Fanta and Cola.</s> Predicted phrase: I like to drink Fanta and Cola. Predicted word: <pad> Input words ordered by attention: 1: Largest attention: I | smallest attention:Col 2: Largest attention: like | smallest attention:a 3: Largest attention: : | smallest attention:t 4: Largest attention: para | smallest attention:a 5: Largest attention: . | smallest attention:Fan Predicted word: I Input words ordered by attention: 1: Largest attention: phrase | smallest attention:t 2: Largest attention: </s> | smallest attention:a 3: Largest attention: para | smallest attention:a 4: Largest attention: : | smallest attention:Col 5: Largest attention: like | smallest attention:and Predicted word: like Input words ordered by attention: 1: Largest attention: Fan | smallest attention:I 2: Largest attention: Col | smallest attention:. 
3: Largest attention: phrase | smallest attention:like 4: Largest attention: a | smallest attention:para 5: Largest attention: </s> | smallest attention:a Expecting results I was expecting an almost one-to-one mapping as the paraphrase is very close to the input but it is not the case. The model gives good paraphrases. Do you think that I made some errors in the interpretation of the cross-attention object? Thank you for your help! Hopefully, it is something simple that I am missing.
We found the problem: the shape of the attention matrix was not correct. Here are the modifications # From T5 documentation # Initial shape: Tuple (one element for each generated token) of tuples (one element for # each layer of the decoder) of torch.FloatTensor of shape # (batch_size, num_heads, generated_length, sequence_length). # combine all cross attention into one tensor x = [torch.stack(beam_outputs['cross_attentions'][i]) for i in range(len(beam_outputs['cross_attentions']))] x = torch.stack(x) # Shape: (nb_generated, nb_layer, (batch_size, num_heads, generated_length, sequence_length)) print(x.shape) x = x.transpose(1,0) # (nb_layer, nb_generated, batch_size, num_heads, generated_length, sequence_length) print(x.shape) x = x.transpose(1,3) # (nb_layer, num_heads, batch_size, nb_generated, generated_length, sequence_length) print(x.shape) x = torch.squeeze(x, 4) # (nb_layer, num_heads, batch_size, nb_generated, sequence_length) print(x.shape) x = x.transpose(2, 1) # (nb_layer, batch_size, num_heads, nb_generated, sequence_length) print(x.shape) cross_attentions = x We can compute the encoder/decoder tokens: encoder_text = tokenizer.convert_ids_to_tokens(input_ids[0]) decoder_text = tokenizer.convert_ids_to_tokens(beam_outputs.sequences[0]) encoder_tokens=np.array(encoder_text) decoder_tokens=np.array(decoder_text[:-1]) The initial sentence is: In Belgium, summers can be very dry and the heat is burning. and the paraphrase: 'In Belgium the summers can be very dry and the heat is burning. The associated tokens are: ['▁para', 'phrase', ':', '▁In', '▁Belgium', ',', '▁summer', 's', '▁can', '▁be', '▁very', '▁dry', '▁and', '▁the', '▁heat', '▁is', '▁burning', '.', '</s>'] ['<pad>', '▁In', '▁Belgium', '▁the', '▁summer', 's', '▁can', '▁be', '▁very', '▁dry', '▁and', '▁the', '▁heat', '▁is', '▁burning', '.', '</s>'] Here are the attention score sorted by importance: layer = 0 head = 0 # choose head to analyze att = cross_attentions[layer, 0, head] att.shape for i in range(att.shape[0]): idx = np.argsort(att[i].cpu().numpy())[::-1][:6] print(f"Predicted token: {decoder_tokens[i]}, input related tokens: {encoder_tokens[idx]}") Here are some results: Predicted token: ▁In, input related tokens: ['</s>' '▁In' 'phrase' '.' ':' '▁Belgium'] Predicted token: ▁Belgium, input related tokens: ['▁Belgium' '</s>' 'phrase' ':' '▁para' '▁summer'] Predicted token: ▁the, input related tokens: ['</s>' 'phrase' ',' '.' '▁the' ':'] Predicted token: ▁summer, input related tokens: ['▁Belgium' '</s>' 'phrase' '▁In' 's' '▁summer'] We solved this problem with the help of the decoder encoder version of bertviz 5
0
huggingface
Intermediate
What is the limit of grad accumulation?
https://discuss.huggingface.co/t/what-is-the-limit-of-grad-accumulation/5912
As I understand it, effective batch size = gradient accumulation steps * batch size. Firstly, is this the only adjustment that needs to be done, nothing for the learning rate? Secondly, how much can you accumulate? Say I want a batch size of 256 and can only fit a batch size of 4: can I really accumulate gradients over 64 batches, or is that way too much and never done in practice?
From the pure math side, there is no limit to how many times you can accumulate gradients: \frac{\partial \mathcal{L}}{\partial \theta} = \sum_{\text{accumulation}} \sum_{\text{batch\_size}} \frac{\partial \mathcal{L}}{\partial \theta}(x_i, y_i) = \sum_{\text{larger\_batch\_size}} \frac{\partial \mathcal{L}}{\partial \theta}(x_i, y_i). I guess there would be more numerical accuracy / stability issues. Maybe the results would be a bit different from the “true” batch_size=256, but not too much. I don’t think you need to adjust the learning rate; use the same value you would use for batch_size = 256. Also remember that it is deep learning — even if you are sure everything is going to be alright, it may be quite different in practice. Just try it out and see.
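If you train with the Trainer, the "batch size 256 via 64 accumulation steps of size 4" setup from the question is, as a sketch, just two TrainingArguments fields (the output directory and learning rate here are placeholders):
from transformers import TrainingArguments
args = TrainingArguments(output_dir="out", per_device_train_batch_size=4, gradient_accumulation_steps=64, learning_rate=5e-5)  # effective batch size per device: 4 * 64 = 256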
0
huggingface
Intermediate
XLMR-large not converging on Paws-X paraphrase dataset but mbert does
https://discuss.huggingface.co/t/xlmr-large-not-converging-on-paws-x-paraphrase-dataset-but-mbert-does/3638
Hey everyone, I tried training mBERT and XLMR-large on the PAWS-X English paraphrase detection dataset, and it looks like XLMR-large is not converging, while mBERT does. I’ve tried tweaking the hyperparameters for XLMR-large but that doesn’t seem to help either. Attaching train stats for both models below (note: I’m evaluating every 100 steps). (training-curve screenshots for both models omitted) I’ve changed the hf run_glue colab example to reproduce this behavior here. For hyperparameters, I’m following the hyperparameters used in the XTREME paper, where they report better results for XLMR-large compared to mBERT on PAWS-X. Would appreciate it if someone took a quick look and has any suggestions. Thanks
I revisited this with the latest hf and tried it with fp16 and it seems to work now. Also had a similar issue with roberta-large models on xnli and paws. Tried with fp16 and fp32 and every time, one of them worked.
0
huggingface
Intermediate
Train and inference wav2vec2 using a language model
https://discuss.huggingface.co/t/train-and-inference-wav2vec2-using-a-language-model/5906
Hi everyone. Is it possible to use a language model built with KenLM for train and inference steps of Wav2Vec2 model? If so, please share any resource on this. Thanks in advance
Hey, We don’t have a lot of code that supports LM decoding for Wav2Vec2, sadly. This notebook might help: huggingface_notebook/xlsr_gpt.ipynb at main · voidful/huggingface_notebook · GitHub
0
huggingface
Intermediate
Deploying huggingface models to Chai
https://discuss.huggingface.co/t/deploying-huggingface-models-to-chai/5817
Chai is an open-source platform which allows you to develop and deploy chat AIs. Using chai_py you can speak with your AI in just a few lines of code. The key to using a HuggingFace model for your chat AI on Chai is to call their endpoint. You can read a little more about this in the chai docs. (screenshot omitted)
Contact us on hello@chai.ml or join our Discord channel if you have any questions/feedback! We’d love to hear from you.
0
huggingface
Intermediate
Scaling up BERT-like model Inference on modern CPU - Part 1
https://discuss.huggingface.co/t/scaling-up-bert-like-model-inference-on-modern-cpu-part-1/5653
Hi community, I have come across the nice article by @mfuntowicz on scaling up BERT-like model inference on modern CPUs. It sounds really interesting how easily you can benchmark your BERT transformer model with the CLI and Facebook AI Research’s Hydra configuration library. Is it possible, however, to easily test it on cloud services such as AWS, and how would you deploy it? Thanks!
Hey @Matthieu, thanks for reading and posting here. Indeed, everything in the blog was run on AWS (c5.metal) instance(s). The way I'm currently using it:

git clone https://github.com/huggingface/tune
cd tune
pip install -r requirements.txt
export PYTHONPATH=src
python src/main.py --multirun backend=pytorch batch=1 sequence_length=128,256,512

The overall framework is quite new and I'll be improving the UX in the coming days; sorry for the rough user experience. Morgan
0
huggingface
Intermediate
Run_mlm.py using –sharded_ddp “zero_dp_3 offload” gives AssertionError
https://discuss.huggingface.co/t/run-mlm-py-using-sharded-ddp-zero-dp-3-offload-gives-assertionerror/5511
I’m trying to run the following on a single, multi-gpu machine that has 8 GPUs: python -m torch.distributed.launch --nproc_per_node=8 \ run_mlm.py \ --model_name_or_path roberta-base \ --use_fast_tokenizer \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \ --do_train --do_eval \ --num_train_epochs 5 \ --output_dir ./experiments/wikitext \ --fp16 \ --sharded_ddp "zero_dp_3 offload" This fails with the following AssertionError: Traceback (most recent call last): File "run_mlm.py", line 492, in <module> main() File "run_mlm.py", line 458, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1120, in train tr_loss += self.training_step(model, inputs) File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1522, in training_step loss = self.compute_loss(model, inputs) File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1556, in compute_loss outputs = model(**inputs) File "/home/me/ve/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 902, in forward self._lazy_init() File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 739, in _lazy_init self._init_param_attributes(p) File "/home/me/ve/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 796, in _init_param_attributes assert p._fp32_shard.device == torch.device("cpu") AssertionError If I omit the “offload” option to --sharded_ddp, it runs with no problems CUDA 11.0 PyTorch 1.7.1+cu110 Huggingface 4.5.1 Has anyone successfully gotten this to work? Any help much appreciated!
Last time I checked, it was blocked by a bug on the fairscale side 11, but that yielded a different error message than this one. I will take a look this morning. In any case, solving this first bug will only get you into the second one, so you should use DeepSpeed for ZeRO DP3 with offload.
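For what it's worth, here is a rough sketch of what a ZeRO stage-3 + CPU-offload DeepSpeed config could look like (the fields and values below are assumptions to adapt, not a tested recipe); write it to ds_config.json and pass --deepspeed ds_config.json when launching run_mlm.py with the deepspeed launcher:

import json

ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)

# launch (roughly): deepspeed run_mlm.py --deepspeed ds_config.json --model_name_or_path roberta-base ...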
0
huggingface
Intermediate
Domain adaptation transformer
https://discuss.huggingface.co/t/domain-adaptation-transformer/5004
Hi all, what is the recommended method for doing domain adaptation with a transformer model? Kind regards, lematmat
One generic method that can be applied to any encoder is domain-adversarial training: [1505.07818] Domain-Adversarial Training of Neural Networks 52
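If it helps, here is a minimal sketch of the gradient reversal layer at the core of DANN (how you wire it between your encoder and a domain classifier is up to you):

import torch

class GradReverse(torch.autograd.Function):
    # identity in the forward pass, flips (and scales) gradients in the backward pass
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# usage idea: domain_logits = domain_classifier(grad_reverse(encoder_features, lambd=0.1))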
0
huggingface
Intermediate
Multi-Task dataset with Custom Sampler and Sharding
https://discuss.huggingface.co/t/multi-task-dataset-with-custom-sampler-and-sharding/5594
The current Huggingface Trainer supports a single train_dataset (torch.utils.data.dataset.Dataset). While that makes sense for most training setups, there are still cases where it is convenient to have a list of train_datasets, with the trainer randomly selecting, or following a specific sampling strategy to select, samples from each of them. An example of the custom sampling strategy code is attached below. The sampling strategy for each train_dataset (torch.utils.data.dataset.Dataset) out of the multiple train datasets can be varied by a penalty variable (\alpha). The sample code for a custom multinomial-distribution-based sampling strategy (mentioned in the XLM 1 paper) is below:

def multinomial_prob(dataset_len, alpha=.5):
    tot_number_of_sent_in_all_lang = 0
    prob = OrderedDict()
    for k, v in dataset_len.items():
        tot_number_of_sent_in_all_lang += v
    for k, v in dataset_len.items():
        neu = v
        den = tot_number_of_sent_in_all_lang
        p = neu/den
        prob[k] = p
    q = OrderedDict()
    q_den = 0.0
    for k, v in prob.items():
        q_den += (v**alpha)
    sum_ = 0.0
    for k, v in prob.items():
        q[k] = (v**alpha)/q_den
        sum_ += q[k]
    assert math.fabs(1-sum_) < 1e-5
    return q

def iterator_selection_prob(alpha, train_datasets, logger=None):
    dataset_len = OrderedDict()
    for k, v in train_datasets.items():
        dataset_len[k] = len(v)
    for k, v in dataset_len.items():
        logger.info("Total Number of samples in {} : {}".format(k, v))
    prob = multinomial_prob(dataset_len, alpha=alpha)
    logger.info("Language iterator selection probability.")
    ret_prob_index, ret_prob_list = [], []
    for k, v in prob.items():
        ret_prob_index.append(k)
        ret_prob_list.append(v)
    for k, v in zip(ret_prob_index, ret_prob_list):
        logger.info("{} : {}".format(k, v))
    return dataset_len, ret_prob_index, ret_prob_list

So I have three questions in general: How to integrate multiple datasets (or sub-datasets) in the same dataset class? How to apply custom control over the sampling strategy (let's just say I want to inject the above sampling strategy) across different sub-datasets? Also, in the case of a large tokenized dataset that cannot fit into memory, how to handle sharding using the huggingface trainer? Note: I am not looking for sample code. A discussion or pointer into the Hf source library is also highly appreciated. However, sample code is always best. I would also like to know if you have seen some other repository that implements these features with/without the Hf library. Discussion on any topic is highly appreciated.
@sgugger Do you have any idea on these topics?
0
huggingface
Intermediate
Pruning a model embedding matrix for memory efficiency
https://discuss.huggingface.co/t/pruning-a-model-embedding-matrix-for-memory-efficiency/5502
Hi, I’m trying to finetune the facebook/mbart-large-50-many-to-many-mmt model for machine translation. Unfortunately, I keep maxing out my GPU memory and even with a batch size of 1 sample with gradient accumulation I cannot get it to work. I was looking through potential solutions and came across this 9 thread where pruning the embeddings has been suggested as a solution. @sshleifer created an issue for the same here 2 and here 2, but I don’t think it saw any progress. I’m trying to do this by myself right now, and was wondering if my approach was correct - Run tokenizer on dataset and get a vocabulary of all unique tokens Copy all the embeddings associated with the vocabulary and create a new embedding matrix Replace the embedding matrix in the model with the new one Map the old vocabulary to their corresponding indices on the new embedding matrix Run tokenizer again but remap tokens to new embedding matrix before passing them to the model Does anyone here have any idea if this could work?
Yes this seems like the right approach. When you get to step 4/5 you can just make a new Tokenizer. If you get it working please post the solution here!
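In case it's useful, here is a rough sketch of steps 2-4 (the used_token_ids set is assumed to come from tokenizing your corpus first; note that mBART ties the LM head to the input embeddings, so the output projection needs the same treatment):

import torch
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

# 1) ids actually used by your corpus (assumption: used_token_ids collected beforehand), plus special tokens
kept_ids = sorted(set(tokenizer.all_special_ids) | set(used_token_ids))

# 2) copy the corresponding rows into a smaller embedding matrix
old_emb = model.get_input_embeddings().weight.data
new_emb = torch.nn.Embedding(len(kept_ids), old_emb.size(1))
new_emb.weight.data = old_emb[kept_ids].clone()
model.set_input_embeddings(new_emb)

# 3) old-id -> new-id map used to remap tokenizer outputs before every forward pass
old2new = {old: new for new, old in enumerate(kept_ids)}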
0
huggingface
Intermediate
Converting GPT2 to JavaScript?
https://discuss.huggingface.co/t/converting-gpt2-to-javascript/5576
Hello everyone, I am contemplating converting my trained GPT2 model to JavaScript to go wild. The model has been trained in its PyTorch variant. I have an idea of how to map it to JavaScript and would like to ask your opinion. Maybe this is a bit of overkill: 1. Save the PyTorch GPT2 model as pretrained. 2. Load the TensorFlow GPT2 from the file I have just saved (believing the weights are framework-agnostic). 3. Convert the TensorFlow model to TensorFlow.js. Would that work?
Hey @TristanBehrens, your approach sounds sensible (although I don't know anything about the TF → TFJS step). To load the PyTorch model in TF, you'll just need to add a from_pt=True argument, e.g. tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/pytorch-model", from_pt=True). An alternative would be to use ONNX as the intermediate representation (see here 11 for the export), but your approach seems simpler.
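For the GPT-2 case specifically, that would look something like this (paths are placeholders):

from transformers import TFGPT2LMHeadModel

# load the PyTorch checkpoint into the TensorFlow GPT-2 architecture
tf_model = TFGPT2LMHeadModel.from_pretrained("path/to/pytorch-gpt2", from_pt=True)

# save TF weights; the TensorFlow.js conversion would then start from here
tf_model.save_pretrained("path/to/tf-gpt2")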
0
huggingface
Intermediate
RuntimeError: CUDA out of memory
https://discuss.huggingface.co/t/runtimeerror-cuda-out-of-memory/4540
Hey guys, I'm currently getting an insufficient GPU memory error with the config below, training on 8 x V100 GPUs. It doesn't appear immediately though, but rather non-deterministically far into the training, which rather points to a memory leak somewhere. Would you have some tips or ideas on how to approach this?

training_args = TrainingArguments(
    output_dir="./wav2vec2-xlsr-sg-g",
    logging_dir="./logs",
    group_by_length=True,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=30,
    fp16=False,
    save_steps=400,
    eval_steps=400,
    logging_steps=400,
    learning_rate=3e-4,
    warmup_steps=500,
    save_total_limit=2,
)
The non-determinism might arise if your batches aren’t sized uniformly? Without more detail on your training data, it’s just a wild guess. You might try enabling fp16. This will give you a lot more breathing room, even if it doesn’t explain the root cause…
0
huggingface
Intermediate
Using Roberta for Sentence2Vec
https://discuss.huggingface.co/t/using-roberta-for-sentence2vec/5413
Hey, I’ve been trying to train Sentence2Vec embeddings and I’ve been wondering what do you think about my approach. I would be glad to learn about different possible pitfalls in my approach and how to solve them. What do I have? I have a small unique corpus of about 4 million sentences in my test language I have a smaller subset (150k) of labeled sentences of whether the sentence is toxic or not. For this question’s sake assume there isn’t any pretrained model or another existing corpus I can use. What am I trying to achieve? The target is to be able to cluster sentences that have similar meanings together. Way of Action Train a RoBERTa language model Fine tune it for classification of toxic or not Use a hidden layer of that classifier as an embedding model Cluster using embeddings and HDBSCAN. That’s about it, I tried to keep it as clear as I can. Thanks for anyone that read up until here!!
What is your main goal? Is it to have a classifier for toxic vs non-toxic sentences, or to cluster sentence embeddings by semantic meaning? For creating sentence embeddings I would recommend sentence transformers 9. It's an extension of regular huggingface transformers, optimized for creating text embeddings.
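A minimal sketch with sentence-transformers (the model name is just an example; you can also wrap your own fine-tuned checkpoint with a models.Transformer + models.Pooling pair):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # example model
sentences = ["this comment is perfectly friendly", "this comment is quite toxic"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# cosine similarity between the two sentence vectors
print(util.pytorch_cos_sim(embeddings[0], embeddings[1]))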
0
huggingface
Intermediate
Preventing Toxic Outputs
https://discuss.huggingface.co/t/preventing-toxic-outputs/5264
I'm developing an application similar to a chatbot and I'm curious how people prevent toxic outputs, e.g. references to extreme political groups or Trump. This seems like a pretty simple question, but unfortunately I couldn't find much on it. For the fine-tuning step, the dataset is manually labelled and toxic training examples can be filtered out. But for the much larger pre-training corpus, this isn't feasible. I'm weighing up a few options, such as using a blacklist or a toxicity classifier. Curious to hear what other approaches I could use.
If you’re using GPT-2, you can use bad_words_ids to filter out unwanted words.
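A small sketch of how that looks in practice (the word list is purely illustrative):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# words must be encoded the way they appear mid-sentence, hence add_prefix_space=True
bad_words = ["Trump", "politics"]
bad_words_ids = [
    tokenizer(word, add_prefix_space=True, add_special_tokens=False).input_ids
    for word in bad_words
]

inputs = tokenizer("The news today is about", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, bad_words_ids=bad_words_ids, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))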
0
huggingface
Intermediate
Out of index error when using pre-trained Pegasus model
https://discuss.huggingface.co/t/out-of-index-error-when-using-pre-trained-pegasus-model/5196
Hey everyone, I’ve been trying to use a pre-trained pegasus model for generating paraphrases of an input sentence using the most popular paraphrasing model on the huggingface model hub. However I’m running into an out of index error, and what’s strange about the error is that it only occasionally happens: most sentences get correctly paraphrased by the model but maybe one in every 100 sentences will run into an error. If I run: import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer para_model_name = 'tuner007/pegasus_paraphrase' para_tokenizer = PegasusTokenizer.from_pretrained(para_model_name) para_model = PegasusForConditionalGeneration.from_pretrained(para_model_name) text = [' (Chng et al'] batch = para_tokenizer(text, truncation=True, padding='longest', max_length=200, return_tensors="pt") translated = para_model.generate(**batch, max_length=200, num_beams=10, num_return_sequences=1, temperature=1.5) I get the following error (when run on cpu): /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1911 # remove once script supports set_grad_enabled 1912 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1913 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1914 1915 IndexError: index out of range in self I assumed at first it was that the tokenization scheme, and it assigned indices beyond the shape of the embedding matrix, however strangely if I change the input text to: text = [' (Chng et alt'] then the tokens are very similar, [143, 19152, 4652, 3256, 2700, 1] for the previous error-inducing input and [143, 19152, 4652, 3256, 20913, 1] now, but the model now works. This seems a bit backwards though as the one with a higher maximum tokenized value works and leads to no errors. So I’m a bit stuck, I don’t know if internally the model generates out of vocabulary words but that seems implausible given how popular the model is (it’s been downloaded 80000 times this month), so any help would be greatly appreciated. Thank you very much!
It seems that the problem is related to the shape of the “embed_positions” layer of the decoder. This code snippet: works fine with a max_length of 60 (or less) in ‘generate’ (throws no error) and also works fine with max_length=200 and a different Pegasus model, e.g.: “sshleifer/distill-pegasus-xsum-16-4” (examples below) The two models differ in the shape of this layer: tuner007/pegasus_paraphrase: (decoder): PegasusDecoder( (embed_tokens): Embedding(96103, 1024, padding_idx=0) (embed_positions): PegasusSinusoidalPositionalEmbedding(60, 1024) sshleifer/distill-pegasus-xsum-16-4: (decoder): PegasusDecoder( (embed_tokens): Embedding(96103, 1024, padding_idx=0) (embed_positions): PegasusSinusoidalPositionalEmbedding(1024, 1024) tuner007/pegasus_paraphrase, max_length=60, config.max_position_embeddings=60 : import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer para_model_name = 'tuner007/pegasus_paraphrase' para_tokenizer = PegasusTokenizer.from_pretrained(para_model_name) para_model = PegasusForConditionalGeneration.from_pretrained(para_model_name) text = [' (Chng et al'] batch = para_tokenizer(text, truncation=True, padding='longest', max_length=200, return_tensors="pt") translated = para_model.generate(**batch, #max_length=200, max_length=60, num_beams=10, num_return_sequences=1, temperature=1.5) print(para_model.config.max_position_embeddings) sshleifer/distill-pegasus-xsum-16-4, max_length=200, config.max_position_embeddings=1024 : import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer para_model_name = "sshleifer/distill-pegasus-xsum-16-4" para_tokenizer_s = PegasusTokenizer.from_pretrained(para_model_name) para_model_s = PegasusForConditionalGeneration.from_pretrained(para_model_name) text = [' (Chng et al'] batch = para_tokenizer_s(text, truncation=True, padding='longest', max_length=200, return_tensors="pt") translated = para_model_s.generate(**batch, max_length=200, num_beams=10, num_return_sequences=1, temperature=1.5) print(para_model_s.config.max_position_embeddings)
0
huggingface
Intermediate
Transformer’s output as input to other model
https://discuss.huggingface.co/t/transformers-output-as-input-to-other-model/3319
Hello, I want to create a model which generates text, where the generated text is then the input to another model, so basically the two models are trained together. How can I achieve this using Hugging Face? Thanks
Welcome to the forum, @omerarshad! Nice question; I had the same problem too. In my opinion this is possible only if you have ground truth for the intermediate step and not only the final reference. What you might do is train two models separately: the first one with the intermediate reference, and the second one with the final reference. Schematically:

Input -> MODEL_1 -> Output_1
                      | compare (cross-entropy)
                    Intermediate Reference

Intermediate Reference -> MODEL_2 -> Output_2
                                       | compare (cross-entropy)
                                     Final Reference

What do you think?
0
huggingface
Intermediate
Inference with Finetuned BERT Model converted to ONNX does not output probabilities
https://discuss.huggingface.co/t/inference-with-finetuned-bert-model-converted-to-onnx-does-not-output-probabilities/4062
Environment info transformers version: 3.5.1 Platform: Linux-4.14.203-116.332.amzn1.x86_64-x86_64-with-glibc2.10 Python version: 3.7.6 PyTorch version (GPU?): 1.7.0 (True) Tensorflow version (GPU?): 2.3.1 (True) Using GPU in script?: No Using distributed or parallel set-up in script?: No Information Model I am using (Bert, XLNet …): Bert The problem arises when using: my own modified scripts: (give details below) The tasks I am working on is: my own task or dataset: (give details below) To reproduce Steps to reproduce the behavior: Trained HuggingFace Transformers model BertForSequenceClassification on custom dataset with PyTorch backend. Used provided convert_graph_to_onnx.py script to convert model (from saved checkpoint) to ONNX format. Loaded the model with ONNXRuntime Instantiated BertTokenizer.from_pretrained(‘bert-based-uncased’) and fed in various input text to encode_plus method. Fed outputs of this to the ONNXRuntime session. Expected behavior The expected behavior is that the output of sess.run on the aforementioned inputs should output an array of dimension (1, 100) (corresponding to 100 classes) with each value between 0 and 1, with all entries summing to 1. We get the correct dimension, however, we get values between about -3.04 and 7.14 (unsure what these values refer to).
Hi @nsingh, without seeing your code it’s hard to know exactly what’s going wrong but based on this comment We get the correct dimension, however, we get values between about -3.04 and 7.14 (unsure what these values refer to). my guess is that you are getting the logits from the model instead of the predicted classes. I ran into this problem recently and the solution was to specify pipeline_name=sentiment-analysis to load the model for a TextClassificationPipeline: from transformers.convert_graph_to_onnx import convert model_ckpt = ... tokenizer = ... onnx_model_path = ... convert(framework="pt", model=model_ckpt, tokenizer=tokenizer, output=onnx_model_path, opset=12, pipeline_name="sentiment-analysis") By default, convert_graph_to_onnx uses the feature-extraction pipeline which might explain why you’re seeing negative numbers (i.e. the logits)
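If you keep the logit output, you can also turn the logits into probabilities yourself after sess.run; a sketch (sess and onnx_inputs are assumed to come from your existing ONNX Runtime setup):

import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

logits = sess.run(None, onnx_inputs)[0]      # shape (1, 100)
probs = softmax(logits)                      # each row now sums to 1
predicted_class = int(probs.argmax(axis=-1)[0])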
0
huggingface
Intermediate
404 when instantiating private model/tokenizer
https://discuss.huggingface.co/t/404-when-instantiating-private-model-tokenizer/4172
I think I might be missing something obvious, but when I attempt to load my private model checkpoint with the Auto* classes and use_auth=True I’m getting a 404 response. I couldn’t find anything in the docs about the token/auth setup for the library so I’m not sure what’s wrong. from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("jodiak/mymodel", use_auth=True) model = AutoModelWithLMHead.from_pretrained("jodiak/mymodel", use_auth=True) # 404 Client Error: Not Found for url: https://huggingface.co/jodiak/model/resolve/main/config.json I’ve verified that I can load this directory locally and also that the path in model hub is correct. Any help here is appreciated.
Hi @jodiak, did you do the login phase? https://huggingface.co/transformers/model_sharing.html?highlight=login#basic-steps 664 That should create a ~/.huggingface/token file (which is what is used to access your private model).
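Once logged in, loading should look roughly like this (note the keyword is usually use_auth_token rather than use_auth):

# in a shell: huggingface-cli login   (older versions: transformers-cli login)

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("jodiak/mymodel", use_auth_token=True)
model = AutoModelWithLMHead.from_pretrained("jodiak/mymodel", use_auth_token=True)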
0
huggingface
Intermediate
Generate ‘continuation’ for seq2seq models
https://discuss.huggingface.co/t/generate-continuation-for-seq2seq-models/3822
I am not sure if I missed an obvious way to do this, but I didn’t find any. Basically the idea is that if we have a seq2seq model, let’s say Bart. Right now, one can input the tokens to the encoder in order to start decoding and generating text using model.generate(), but there doesn’t seem to be a way to add decoder inputs, that is text which we want the generate function to continue. Using the example at the documentation: from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig model = BartForConditionalGeneration.from_pretrained('facebook/bart-base') tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs." inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') # Generate Summary summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=5, early_stopping=True) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) And this outputs ['My friends']. Let’s say we want to precondition the generation to start with ‘Friends’ instead of ‘My’, it would be cool to have something like: decoder_inputs = tokenizer(['Friends'], max_length=1024, return_tensors='pt') # Generate Summary summary_ids = model.generate(inputs['input_ids'], decoder_inputs = decoder_inputs['inputs_ids'] num_beams=4, max_length=5, early_stopping=True) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) Which would output a summary starting with ‘Friends’. I am aware that it is possible to do a forward pass with explicit decoder_inputs but I was wondering if there is a way to do this at generation, to take advantage of beam search and such. Perhaps with the newly added prefix_allowed_tokens_fn there is a workaround by having it return the desired starting tokens at the beginning of generation but I was wondering if there is a more straight forward way I missed or is this something that would be interesting to add to the generate functionality. Cheers!
So after looking at the code behind generate(), it indeed already incorporates this functionality if decoder_input_ids is given as input, in a similar fashion to the forward function:

summary_ids = model.generate(
    inputs['input_ids'],
    decoder_input_ids=decoder_inputs['input_ids'],
    num_beams=4,
    max_length=5,
    early_stopping=True,
)

So once again I am impressed by the amount of capabilities built into the library. I still think it would be beneficial to clarify this functionality somewhere in the documentation. I won't delete the post in case someone comes looking for the same.
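For completeness, here is a sketch that forces the summary to start with a given word (prepending the decoder start token to the forced prefix is my assumption about how BART expects it, so double-check the first generated tokens):

import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="pt")

# decoder prefix: the model's decoder start token followed by the forced text
prefix = tokenizer(" Friends", add_special_tokens=False, return_tensors="pt").input_ids
start = torch.tensor([[model.config.decoder_start_token_id]])
decoder_input_ids = torch.cat([start, prefix], dim=-1)

summary_ids = model.generate(
    inputs["input_ids"],
    decoder_input_ids=decoder_input_ids,
    num_beams=4,
    max_length=20,
    early_stopping=True,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))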
0
huggingface
Intermediate
BERT: What is the shape of each Transformer Encoder block in the final hidden state?
https://discuss.huggingface.co/t/bert-what-is-the-shape-of-each-transformer-encoder-block-in-the-final-hidden-state/3714
Hi everyone, I am studying BERT paper after I have studied the Transformer. The thing I can’t understand yet is the output of each Transformer Encoder in the last hidden state (Trm before T1, T2, etc… in the image). In particular, I should know that thanks (somehow) to the Positional Encoding, the most left Trm represents the embedding of the first token, the second left represents the embedding of the second token and so on. Hence, the shape of each one of them should be simply the hidden_dim (for example, 768) if what I have said before is true. However, I am not convinced of this logic; so is the answer correct? Many thanks in advance
Your answer is true, but your logic seems shaky. The input embedding has nothing to do with the final shape. In other words, you could still get one output per token by having a 1×n_tokens to 768×n_tokens linear layer, and that's it. In the case of the transformer, the embedding happens to have the same size as the hidden states of the encoder, but it is conceivable to have transformations in-between that deal with different shapes, e.g. a smaller embedding size to reduce dimensionality for computational efficiency. So in your image, assuming the encoder block has a hidden size of 768, the output will be N×768, simply because 768 is the output shape of the last layer in the encoder block.
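You can check this directly; for bert-base-uncased the hidden size is 768:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERT outputs one vector per token.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, n_tokens, 768): one 768-dim vector per token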
0
huggingface
Intermediate
Training for sentence vectors in niche domain
https://discuss.huggingface.co/t/training-for-sentence-vectors-in-niche-domain/704
Hi everyone, I have been inspired to create a semantic text search engine for a niche domain, and I am wondering how I should proceed. The basic approach will be to use a transformer model to embed potential results into vectors, use the same model to embed search queries, and then use cosine similarity to compare the query vector with the result vectors. The main issue I see right now is that it is hard to have good embeddings for a niche domain. From what I can gather, training a model on an NLI task (textual entailment) is best for having good sentence embeddings, but NLI is a supervised task that requires labeled data. The next closest task would be NSP, which can be done without a labeled dataset, but RoBERTa showed that NSP isn’t a good way of training a model. What I’ve noticed other people do for Covid semantic searches is to take SciBERT or BioBERT, train it more on PubMed articles or Cord-19 articles doing MLM, and then finally end with training on an NLI task. I think the NLI task was unrelated to Covid or biology because I don’t know of any in-domain NLI tasks like that. I have seen joeddav’s blogpost and the recent ZSL pipeline work, and while ZSL is cool and has it’s purposes, it would be ineffective for comparing a search query against thousands or even just hundreds of results in real time. I have one main question: How should I train a model to generate good sentence vectors in a niche domain? My current plan is to take a pretrained model, fine-tune it using MLM on in-domain texts, and then do NLI training using SNLI. I am worried that it will be hard to gauge when to stop the NLI training because it seems like the longer it trains, the better it gets at producing sentence-level vectors, but the more it forgets about in-domain information. Moreover, I’m worried that the fine-tuning using MLM won’t go great because I have tens of thousands of 2-4 sentence chunks rather than long documents.
Hi @nbroad, interesting question. What kind of niche domain are you considering? Since we nowadays have several hundred (if not thousands of) NLP datasets, would it be possible to find similar datasets for a first round of MLM before the final MLM on your own data? I don't have direct experience with sentence-similarity training, but I once trained a classifier in the multi-language toxic-comment domain (maybe a bit niche), where fine-tuning with MLM did improve performance compared to no MLM.
0
huggingface
Intermediate
Convert models to Longformer
https://discuss.huggingface.co/t/convert-models-to-longformer/3303
My request was posted as an issue 3. Environment info transformers version: 4.2.0 Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10 Python version: 3.8.5 PyTorch version (GPU?): 1.7.1 (True) Tensorflow version (GPU?): not installed (NA) Using GPU in script?: Yes Using distributed or parallel set-up in script?: No Information Model I am using script to initialize Longformer starting from HerBERT. The problem arises when using: [ ] the official example scripts: (give details below) [x] my own modified scripts: (give details below) The tasks I am working on is: [ ] an official GLUE/SQUaD task: (give the name) [x] my own task or dataset: (give details below) To reproduce Steps to reproduce the behavior: Install dependencies: python3 -m pip install -r requirements.txt. Install apex according to official documentation. Run command CUDA_VISIBLE_DEVICES=0 python3 convert_model_to_longformer.py --finetune_dataset conllu. We are using dataset in .jsonl format, each line contains 1 CoNLLu entry. It is converted using custom LineByLineTextDataset class to LineByLine format from current version of transformers. I’ve added this class to be able to use it in older version (v3.0.2). Using suggested by author on allenai/longformer I’ve used transformers in version 3.0.2 and it works fine. But I would like to use recent models to convert them to Long* version and I can’t make conversion script work. Result As a result of running command above with transformers in version 4.2.0 I’ve got: Traceback (most recent call last): File "convert_model_to_longformer.py", line 277, in <module> pretrain_and_evaluate( File "convert_model_to_longformer.py", line 165, in pretrain_and_evaluate eval_loss = trainer.evaluate() File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1442, in evaluate output = self.prediction_loop( File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1566, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/trainer.py", line 1670, in prediction_step outputs = model(**inputs) File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 1032, in forward outputs = self.roberta( File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 798, in forward encoder_outputs = self.encoder( File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 498, in forward layer_outputs = layer_module( File 
"/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 393, in forward self_attention_outputs = self.attention( File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 321, in forward self_outputs = self.self( File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "convert_model_to_longformer.py", line 63, in forward return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions) # v4.2.0 File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 600, in forward diagonal_mask = self._sliding_chunks_query_key_matmul( File "/server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py", line 789, in _sliding_chunks_query_key_matmul batch_size, seq_len, num_heads, head_dim = query.size() ValueError: too many values to unpack (expected 4) I’ve changed function /server/server_1/user/miniconda3/envs/longformer_summary/lib/python3.8/site-packages/transformers/models/longformer/modeling_longformer.py up to line 789: def forward( self, hidden_states, attention_mask=None, is_index_masked=None, is_index_global_attn=None, is_global_attn=None, output_attentions=False, ): """ :class:`LongformerSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to `attention_window` happens in :meth:`LongformerModel.forward` to avoid redoing the padding on each layer. 
The `attention_mask` is changed in :meth:`LongformerModel.forward` from 0, 1, 2 to: * -10000: no attention * 0: local attention * +10000: global attention """ hidden_states = hidden_states.transpose(0, 1) # project hidden states query_vectors = self.query(hidden_states) key_vectors = self.key(hidden_states) value_vectors = self.value(hidden_states) print(f"query_vectors: {query_vectors.shape}") print(f"key_vectors: {key_vectors.shape}") print(f"value_vectors: {value_vectors.shape}") print(f"attention_mask: {attention_mask.shape}") seq_len, batch_size, embed_dim = hidden_states.size() assert ( embed_dim == self.embed_dim ), f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}" # normalize query query_vectors /= math.sqrt(self.head_dim) query_vectors = query_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1) key_vectors = key_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1) attn_scores = self._sliding_chunks_query_key_matmul( query_vectors, key_vectors, self.one_sided_attn_window_size ) # values to pad for attention probs remove_from_windowed_attention_mask = (attention_mask != 0)[:, :, None, None] # cast to fp32/fp16 then replace 1's with -inf float_mask = remove_from_windowed_attention_mask.type_as(query_vectors).masked_fill( remove_from_windowed_attention_mask, -10000.0 ) print(f"attn_scores: {attn_scores.shape}") print(f"remove_from_windowed_attention_mask: {remove_from_windowed_attention_mask.shape}") print(f"float_mask: {float_mask.shape}") # diagonal mask with zeros everywhere and -inf inplace of padding diagonal_mask = self._sliding_chunks_query_key_matmul( float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size ) And as a result I’ve got: attention_mask: torch.Size([2, 1, 1, 1024]) query_vectors: torch.Size([1024, 2, 768]) key_vectors: torch.Size([1024, 2, 768]) value_vectors: torch.Size([1024, 2, 768]) attn_scores: torch.Size([2, 1024, 12, 513]) remove_from_windowed_attention_mask: torch.Size([2, 1, 1, 1, 1, 1024]) float_mask: torch.Size([2, 1, 1, 1, 1, 1024]) And after changing version to 3.0.2 and adding print statements I’ve got: attention_mask: torch.Size([2, 1024]) query_vectors: torch.Size([1024, 2, 768]) key_vectors: torch.Size([1024, 2, 768]) value_vectors: torch.Size([1024, 2, 768]) attn_scores: torch.Size([2, 1024, 12, 513]) remove_from_windowed_attention_mask: torch.Size([2, 1024, 1, 1]) float_mask: torch.Size([2, 1024, 1, 1]) So maybe it’s problem with _sliding_chunks_query_key_matmul function? Files: convert_model_to_longformer.py, based on allenai/longformer/scripts/convert_model_to_long.ipynb 7: import logging import os import math import copy import torch import argparse from dataclasses import dataclass, field from transformers import RobertaForMaskedLM, XLMTokenizer, TextDataset, DataCollatorForLanguageModeling, Trainer, XLMTokenizer, PreTrainedTokenizer from transformers import TrainingArguments, HfArgumentParser, XLMTokenizer, RobertaModel, XLMTokenizer from transformers import LongformerSelfAttention # v4.2.0 # from transformers.modeling_longformer import LongformerSelfAttention # v3.0.2 from conllu import load_conllu_dataset, save_conllu_dataset_in_linebyline_format from torch.utils.data.dataset import Dataset logger = logging.getLogger(__name__) logging.basicConfig(level=logging.INFO) class LineByLineTextDataset(Dataset): """ This will be superseded by a framework-agnostic approach soon. 
""" def __init__(self, tokenizer: PreTrainedTokenizer, file_path: str, block_size: int): assert os.path.isfile(file_path) # Here, we do not cache the features, operating under the assumption # that we will soon use fast multithreaded tokenizers from the # `tokenizers` repo everywhere =) logger.info("Creating features from dataset file at %s", file_path) with open(file_path, encoding="utf-8") as f: lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())] batch_encoding = tokenizer( lines, add_special_tokens=True, truncation=True, padding="max_length", max_length=block_size, pad_to_multiple_of=512) self.examples = batch_encoding["input_ids"] def __len__(self): return len(self.examples) def __getitem__(self, i) -> torch.Tensor: return torch.tensor(self.examples[i], dtype=torch.long) class RobertaLongSelfAttention(LongformerSelfAttention): def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_value=None, output_attentions=False, ): return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions) class RobertaLongForMaskedLM(RobertaForMaskedLM): def __init__(self, config): super().__init__(config) for i, layer in enumerate(self.roberta.encoder.layer): # replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention` layer.attention.self = RobertaLongSelfAttention(config, layer_id=i) class RobertaLongModel(RobertaModel): def __init__(self, config): super().__init__(config) for i, layer in enumerate(self.encoder.layer): # replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention` layer.attention.self = RobertaLongSelfAttention(config, layer_id=i) def create_long_model(initialization_model, initialization_tokenizer, save_model_to, attention_window, max_pos): model = RobertaForMaskedLM.from_pretrained(initialization_model) tokenizer = XLMTokenizer.from_pretrained(initialization_tokenizer, model_max_length=max_pos) config = model.config # extend position embeddings tokenizer.model_max_length = max_pos tokenizer.init_kwargs['model_max_length'] = max_pos current_max_pos, embed_size = model.roberta.embeddings.position_embeddings.weight.shape max_pos += 2 # NOTE: RoBERTa has positions 0,1 reserved, so embedding size is max position + 2 config.max_position_embeddings = max_pos assert max_pos > current_max_pos # allocate a larger position embedding matrix new_pos_embed = model.roberta.embeddings.position_embeddings.weight.new_empty(max_pos, embed_size) # copy position embeddings over and over to initialize the new position embeddings k = 2 step = current_max_pos - 2 while k < max_pos - 1: new_pos_embed[k:(k + step)] = model.roberta.embeddings.position_embeddings.weight[2:] k += step model.roberta.embeddings.position_embeddings.weight.data = new_pos_embed model.roberta.embeddings.position_ids.data = torch.tensor([i for i in range(max_pos)]).reshape(1, max_pos) # v4.2.0 # model.roberta.embeddings.position_ids = torch.tensor([i for i in range(max_pos)]).reshape(1, max_pos) # v3.0.2 # replace the `modeling_bert.BertSelfAttention` object with `LongformerSelfAttention` config.attention_window = [attention_window] * config.num_hidden_layers for i, layer in enumerate(model.roberta.encoder.layer): longformer_self_attn = LongformerSelfAttention(config, layer_id=i) longformer_self_attn.query = copy.deepcopy(layer.attention.self.query) longformer_self_attn.key = copy.deepcopy(layer.attention.self.key) 
longformer_self_attn.value = copy.deepcopy(layer.attention.self.value) longformer_self_attn.query_global = copy.deepcopy(layer.attention.self.query) longformer_self_attn.key_global = copy.deepcopy(layer.attention.self.key) longformer_self_attn.value_global = copy.deepcopy(layer.attention.self.value) layer.attention.self = longformer_self_attn logger.info(f'saving model to {save_model_to}') model.save_pretrained(save_model_to) tokenizer.save_pretrained(save_model_to) return model, tokenizer def copy_proj_layers(model): for i, layer in enumerate(model.roberta.encoder.layer): layer.attention.self.query_global = copy.deepcopy(layer.attention.self.query) layer.attention.self.key_global = copy.deepcopy(layer.attention.self.key) layer.attention.self.value_global = copy.deepcopy(layer.attention.self.value) return model def pretrain_and_evaluate(args, model, tokenizer, eval_only, model_path, max_size): val_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=args.val_datapath, block_size=max_size, ) if eval_only: train_dataset = val_dataset else: logger.info(f'Loading and tokenizing training data is usually slow: {args.train_datapath}') train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=args.train_datapath, block_size=max_size, ) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15, ) trainer = Trainer( model=model, args=args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=val_dataset, # prediction_loss_only=True, ) eval_loss = trainer.evaluate() eval_loss = eval_loss['eval_loss'] logger.info(f'Initial eval bpc: {eval_loss/math.log(2)}') if not eval_only: trainer = Trainer( model=model, args=args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=val_dataset, prediction_loss_only=False, ) trainer.train(model_path=model_path) trainer.save_model() eval_loss = trainer.evaluate() eval_loss = eval_loss['eval_loss'] logger.info(f'Eval bpc after pretraining: {eval_loss/math.log(2)}') @dataclass class ModelArgs: attention_window: int = field(default=512, metadata={"help": "Size of attention window"}) max_pos: int = field(default=1024, metadata={"help": "Maximum position"}) def parse_args(): parser = argparse.ArgumentParser() parser.add_argument("--finetune_dataset", required=True, choices=["conllu"], help="Name of dataset to finetune") return parser.parse_args() if __name__ == "__main__": parser = HfArgumentParser((TrainingArguments, ModelArgs,)) args = parse_args() training_args, model_args = parser.parse_args_into_dataclasses(look_for_args_file=False, args=[ '--output_dir', 'tmp_4.2.0', '--warmup_steps', '500', '--learning_rate', '0.00003', '--weight_decay', '0.01', '--adam_epsilon', '1e-6', '--max_steps', '3000', '--logging_steps', '500', '--save_steps', '500', '--max_grad_norm', '5.0', '--per_device_eval_batch_size', '2', '--per_device_train_batch_size', '2', '--gradient_accumulation_steps', '4', # '--evaluate_during_training', '--do_train', '--do_eval', '--fp16', '--fp16_opt_level', 'O2', ]) if args.finetune_dataset == "conllu": saved_dataset = '/server/server_1/user/longformer_summary/conllu/' if not os.path.exists(saved_dataset): os.makedirs(saved_dataset) dataset = load_conllu_dataset('/server/server_1/user/conllu_dataset/') save_conllu_dataset_in_linebyline_format(dataset, saved_dataset) training_args.val_datapath = os.path.join(saved_dataset, 'validation.txt') training_args.train_datapath = os.path.join(saved_dataset, 'train.txt') initialization_model = 
'allegro/herbert-klej-cased-v1' initialization_tokenizer = 'allegro/herbert-klej-cased-tokenizer-v1' roberta_base = RobertaForMaskedLM.from_pretrained(initialization_model) roberta_base_tokenizer = XLMTokenizer.from_pretrained(initialization_tokenizer, model_max_length=512) model_path = f'{training_args.output_dir}/{initialization_model}-{model_args.max_pos}' if not os.path.exists(model_path): os.makedirs(model_path) logger.info(f'Converting roberta-base into {initialization_model}-{model_args.max_pos}') model, tokenizer = create_long_model( initialization_model=initialization_model, initialization_tokenizer=initialization_tokenizer, save_model_to=model_path, attention_window=model_args.attention_window, max_pos=model_args.max_pos, ) logger.info(f'Loading the model from {model_path}') tokenizer = XLMTokenizer.from_pretrained(model_path) model = RobertaLongForMaskedLM.from_pretrained(model_path) logger.info(f'Pretraining {initialization_model}-{model_args.max_pos} ... ') pretrain_and_evaluate( training_args, model, tokenizer, eval_only=False, model_path=training_args.output_dir, max_size=model_args.max_pos, ) logger.info(f'Copying local projection layers into global projection layers... ') model = copy_proj_layers(model) logger.info(f'Saving model to {model_path}') model.save_pretrained(model_path) logger.info(f'Loading the model from {model_path}') tokenizer = XLMTokenizer.from_pretrained(model_path) model = RobertaLongModel.from_pretrained(model_path) conllu.py import re import glob import torch from torch.utils.data import Dataset import time import os import json from xml.etree.ElementTree import ParseError import xml.etree.ElementTree as ET from typing import List, Dict from sklearn.model_selection import train_test_split def load_conllu_jsonl( path: str, ) -> List[Dict[str, str]]: dataset: List[Dict[str, str]] = list() with open(path, 'r') as f: for jsonl in f.readlines(): json_file = json.loads(jsonl) conllu = json_file['conllu'].split('\n') doc_text: str = "" utterance: Dict[str, str] = dict() for line in conllu: try: if line[0].isdigit(): if utterance: masked_text = utterance["text"] doc_text = f"{doc_text} {masked_text}.".strip() utterance = dict() elif line[0] == '#': text = line[1:].strip() key = text.split('=')[0].strip() value = text.split('=')[1].strip() utterance[key] = value except IndexError: pass dataset.append({"text": doc_text}) return dataset def load_conllu_dataset( path: str, train_test_val_ratio: float = 0.1, ) -> Dict[str, List[Dict[str, str]]]: dataset: Dict[str, List[Dict[str, str]]] = dict() data_dict: Dict[str, List[str]] = dict() filepath_list = glob.glob(os.path.join(path, '*.jsonl')) train = filepath_list[:int(len(filepath_list)*0.8)] test = filepath_list[int(len(filepath_list)*0.8):int(len(filepath_list)*0.9)] val = filepath_list[int(len(filepath_list)*0.9):] data_dict["test"] = test data_dict["train"] = train data_dict["validation"] = val for key, value in data_dict.items(): dataset_list: List[Dict[str, str]] = list() for filepath in value: data = load_conllu_jsonl(path=filepath) if data: dataset_list.extend(data) dataset[key] = dataset_list return dataset def save_conllu_dataset_in_linebyline_format( dataset: Dict[str, List[Dict[str, str]]], save_dir: str, ) -> None: for key, value in dataset.items(): with open(os.path.join(save_dir, f'{key}.txt'), 'w') as f: for line in value: # print(line["full"]) f.write(f'{line["text"]}\n') requirements.txt: apex @ file:///server/server_1/user/apex certifi==2020.12.5 chardet==4.0.0 click==7.1.2 datasets==1.2.0 
dill==0.3.3 filelock==3.0.12 idna==2.10 joblib==1.0.0 multiprocess==0.70.11.1 numpy==1.19.4 packaging==20.8 pandas==1.2.0 pyarrow==2.0.0 pyparsing==2.4.7 python-dateutil==2.8.1 pytz==2020.5 regex==2020.11.13 requests==2.25.1 sacremoses==0.0.43 sentencepiece==0.1.94 six==1.15.0 tokenizers==0.8.1rc1 torch==1.7.1 tqdm==4.49.0 transformers==3.0.2 typing-extensions==3.7.4.3 urllib3==1.26.2 xxhash==2.0.0 Expected behavior Model should be converted, saved and loaded. After that it should be properly fine-tuned and saved on disk. Comparing codebase of version 3.0.2 and 4.2.0 I have noticed that forward function differs. I have added deleted lines right at the beginning of the function: def forward( self, hidden_states, attention_mask=None, is_index_masked=None, is_index_global_attn=None, is_global_attn=None, output_attentions=False, ): """ :class:`LongformerSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to `attention_window` happens in :meth:`LongformerModel.forward` to avoid redoing the padding on each layer. The `attention_mask` is changed in :meth:`LongformerModel.forward` from 0, 1, 2 to: * -10000: no attention * 0: local attention * +10000: global attention """ attention_mask = attention_mask.squeeze(dim=2).squeeze(dim=1) # is index masked or global attention is_index_masked = attention_mask < 0 is_index_global_attn = attention_mask > 0 is_global_attn = any(is_index_global_attn.flatten()) and now model seems to be working, but returns: {'eval_loss': nan, 'eval_runtime': 20.6319, 'eval_samples_per_second': 1.939} Below You can find results of consecutive steps in forward function. Can You see something wrong here? diagonal_mask: tensor([[[[-inf, -inf, -inf, ..., 0., 0., 0.]], [[-inf, -inf, -inf, ..., 0., 0., 0.]], [[-inf, -inf, -inf, ..., 0., 0., 0.]], ..., [[0., 0., 0., ..., -inf, -inf, -inf]], [[0., 0., 0., ..., -inf, -inf, -inf]], [[0., 0., 0., ..., -inf, -inf, -inf]]], [[[-inf, -inf, -inf, ..., 0., 0., 0.]], [[-inf, -inf, -inf, ..., 0., 0., 0.]], [[-inf, -inf, -inf, ..., 0., 0., 0.]], ..., [[0., 0., 0., ..., -inf, -inf, -inf]], [[0., 0., 0., ..., -inf, -inf, -inf]], [[0., 0., 0., ..., -inf, -inf, -inf]]]], device='cuda:0', dtype=torch.float16) attn_scores: tensor([[[[ -inf, -inf, -inf, ..., 0.5771, 0.2065, -1.0449], [ -inf, -inf, -inf, ..., -1.3174, -1.5547, -0.6240], [ -inf, -inf, -inf, ..., -1.3691, -1.3555, -0.3799], ..., [ -inf, -inf, -inf, ..., 1.7402, 1.6152, 0.8242], [ -inf, -inf, -inf, ..., 0.5122, 1.0342, 0.2091], [ -inf, -inf, -inf, ..., 1.7568, -0.1534, 0.7505]], [[ -inf, -inf, -inf, ..., -0.8066, -1.7480, -2.5527], [ -inf, -inf, -inf, ..., -3.3652, 0.1046, -0.5811], [ -inf, -inf, -inf, ..., -0.0958, -1.0957, -0.2377], ..., [ -inf, -inf, -inf, ..., -0.4148, -0.9497, -0.1229], [ -inf, -inf, -inf, ..., -1.9443, -1.3467, -1.5342], [ -inf, -inf, -inf, ..., 0.1263, -0.4407, 0.1486]], [[ -inf, -inf, -inf, ..., -0.9077, -0.1603, -0.5762], [ -inf, -inf, -inf, ..., -0.2454, 0.1932, -0.5034], [ -inf, -inf, -inf, ..., -1.4375, -1.2793, -1.0488], ..., [ -inf, -inf, -inf, ..., -0.3452, 0.1405, 1.3643], [ -inf, -inf, -inf, ..., -0.2168, -1.0000, -0.9956], [ -inf, -inf, -inf, ..., -1.7451, 0.1410, -0.6221]], ..., [[-1.3965, 0.7798, 0.4707, ..., -inf, -inf, -inf], [ 0.6260, -0.4146, 0.9180, ..., -inf, -inf, -inf], [ 0.4807, -1.0742, 1.2803, ..., -inf, -inf, -inf], ..., [ 0.0909, 0.8022, -0.4170, ..., -inf, -inf, -inf], [-2.6035, -1.2988, 0.5586, ..., -inf, -inf, -inf], [-0.6953, -0.8232, 0.0436, ..., -inf, -inf, -inf]], [[ 1.0889, -0.2776, -0.0632, 
..., -inf, -inf, -inf], [-0.4128, 0.4834, -0.3848, ..., -inf, -inf, -inf], [-0.8794, 0.9150, -1.5107, ..., -inf, -inf, -inf], ..., [ 0.8867, -0.4731, 0.3389, ..., -inf, -inf, -inf], [-0.1365, 0.4905, -2.0000, ..., -inf, -inf, -inf], [-0.0205, -0.5464, -0.6851, ..., -inf, -inf, -inf]], [[ nan, nan, nan, ..., -inf, -inf, -inf], [ nan, nan, nan, ..., -inf, -inf, -inf], [ nan, nan, nan, ..., -inf, -inf, -inf], ..., [ nan, nan, nan, ..., -inf, -inf, -inf], [ nan, nan, nan, ..., -inf, -inf, -inf], [ nan, nan, nan, ..., -inf, -inf, -inf]]], [[[ -inf, -inf, -inf, ..., -4.0469, -2.6270, -5.4805], [ -inf, -inf, -inf, ..., -0.9312, -0.6743, -1.9688], [ -inf, -inf, -inf, ..., -0.0593, -0.9507, -0.6392], ..., [ -inf, -inf, -inf, ..., 0.3105, 2.3926, 1.0664], [ -inf, -inf, -inf, ..., -0.0166, 2.2754, 1.0449], [ -inf, -inf, -inf, ..., -0.4224, 1.7686, -0.2603]], [[ -inf, -inf, -inf, ..., -0.5088, -1.2666, -0.4363], [ -inf, -inf, -inf, ..., -0.3823, -1.7998, -0.4504], [ -inf, -inf, -inf, ..., -0.1525, 0.1614, -0.0267], ..., [ -inf, -inf, -inf, ..., 0.0225, -0.5737, 0.2318], [ -inf, -inf, -inf, ..., 0.7139, 0.6099, 0.3767], [ -inf, -inf, -inf, ..., 0.2008, -0.6714, 0.5869]], [[ -inf, -inf, -inf, ..., -0.9302, -1.5303, -2.7637], [ -inf, -inf, -inf, ..., -0.1124, -0.5850, 0.0818], [ -inf, -inf, -inf, ..., -1.5176, -1.7822, -0.9111], ..., [ -inf, -inf, -inf, ..., -0.3618, 0.3486, 0.4368], [ -inf, -inf, -inf, ..., -0.4158, -1.1660, -0.9106], [ -inf, -inf, -inf, ..., -0.4636, -0.7012, -0.9570]], ..., [[-1.0137, -1.2324, -0.2091, ..., -inf, -inf, -inf], [ 0.0793, 0.1862, -0.6162, ..., -inf, -inf, -inf], [ 0.2406, 0.1237, -1.0420, ..., -inf, -inf, -inf], ..., [ 0.5308, 0.3862, 0.9731, ..., -inf, -inf, -inf], [-0.5752, -0.8174, 0.4766, ..., -inf, -inf, -inf], [-0.4299, -0.7031, -0.6240, ..., -inf, -inf, -inf]], [[-2.9512, -1.0410, 0.9194, ..., -inf, -inf, -inf], [-0.0306, -0.8579, 0.1930, ..., -inf, -inf, -inf], [ 0.2927, -1.4600, -1.6787, ..., -inf, -inf, -inf], ..., [ 0.6128, -0.8921, 1.2861, ..., -inf, -inf, -inf], [-0.7778, -0.8564, 2.3457, ..., -inf, -inf, -inf], [-0.8877, -1.4834, 0.7783, ..., -inf, -inf, -inf]], [[ nan, nan, nan, ..., -inf, -inf, -inf], [ nan, nan, nan, ..., -inf, -inf, -inf], [ nan, nan, nan, ..., -inf, -inf, -inf], ..., [ nan, nan, nan, ..., -inf, -inf, -inf], [ nan, nan, nan, ..., -inf, -inf, -inf], [ nan, nan, nan, ..., -inf, -inf, -inf]]]], device='cuda:0', dtype=torch.float16)
Hey @adamwawrzynski, the implementation of Longformer in 4.2.0 is different from the one in 3.0.2, so you might need to modify the conversion script for the new version.
0
huggingface
Intermediate
Generating sentence embeddings from pretrained transformers model
https://discuss.huggingface.co/t/generating-sentence-embeddings-from-pretrained-transformers-model/3314
Hi, I have a pretrained BERT based model hosted on huggingface. huggingface.co microsoft/SportsBERT · Hugging Face 2 How do I generate sentence vectors using this model? I have explored sentence bert but it doesn’t allow you to use custom trained models. I have also seen Bert as a client. It works but for my current scenario, I was wondering if there’s something which could be done without running a server for converting to vectors.
I think the best method is to go with sentence-BERT. Indeed, you can use your own model; just try reproducing what they do in the paper 11. You add a pooling layer at the output of your model. From the paper: We experiment with three pooling strategies: using the output of the CLS token, computing the mean of all output vectors (MEAN strategy), and computing a max-over-time of the output vectors (MAX strategy). The default configuration is MEAN. Finally, you might want to fine-tune your model for comparison, too: In order to fine-tune BERT / RoBERTa (your model), we create siamese and triplet networks (Schroff et al., 2015) to update the weights such that the produced sentence embeddings are semantically meaningful and can be compared with cosine-similarity. Hope this helps!
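A rough sketch of the MEAN pooling strategy on top of the model from your question:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/SportsBERT")
model = AutoModel.from_pretrained("microsoft/SportsBERT")

sentences = ["The striker scored twice.", "Two goals for the forward."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state      # (batch, seq_len, hidden)

# masked mean over tokens -> one vector per sentence
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(1) / mask.sum(1)
print(sentence_embeddings.shape)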
0
huggingface
Intermediate
Converting Word-level labels to WordPiece-level for Token Classification
https://discuss.huggingface.co/t/converting-word-level-labels-to-wordpiece-level-for-token-classification/2118
Hi all, I am building a BertForTokenClassification model but I am having trouble figuring out how to format my dataset. I have already labeled my dataset with span labeling. So for example: sequence = “Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very close to the Manhattan Bridge.” In my dataset, I would have this labeled by hand as: [(Hugging, B-org), (Face, I-org), (Inc., L-org), (is, O), (a, O), (company, O), … (Bridge, L-org)] However, when I pass this through my BertTokenizer, I get the following tokens: [[CLS], Hu, ##gging, Face, Inc., ., is, a, company, …, Bridge, [SEP]] My question is, how do I handle the Hu, ##gging <-> Hugging label mismatch issue? I have Hugging labeled as B-org, and if I zip these tokens with my labels my labels will be offset by one: [(Hu, B-org), (##gging, I-org), (Face, L-org), (Inc., O), (is, O), (a, O), (company, O), … (Bridge, OUT_OF_LABELS)] Has anybody been able to handle this problem before?
Hi @altozachmo, you can "extend" the labels list, adding as many labels as there are token splits. So, for example:

sequence = "Hugging Face"
labels = [(Hugging, B-org), (Face, I-org)]
tokenized_sentence = [[CLS], Hu, ##gging, Face, [SEP]]
tokenized_labels = [(Hu, B-org), (##gging, B-org), (Face, I-org)]

(the tokenized_labels should also include labels for the CLS and SEP tokens, but I omitted them). To do this, you can check this tutorial 45 and look for the "tokenize_and_preserve_labels" function.
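A small sketch of that idea (bert-base-uncased is just used as an example tokenizer; a common alternative is to label only the first sub-token and set the rest to -100 so they are ignored by the loss):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_and_preserve_labels(words, word_labels):
    tokens, labels = [], []
    for word, label in zip(words, word_labels):
        sub_tokens = tokenizer.tokenize(word)
        tokens.extend(sub_tokens)
        labels.extend([label] * len(sub_tokens))   # repeat the word label for every sub-token
    return tokens, labels

tokens, labels = tokenize_and_preserve_labels(["Hugging", "Face", "Inc."], ["B-org", "I-org", "L-org"])
print(list(zip(tokens, labels)))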
0
huggingface
Intermediate
how to convert text to word embeddings using bert’s pretrained model ‘faster’?
https://discuss.huggingface.co/t/how-to-convert-text-to-word-embeddings-using-berts-pretrained-model-faster/3005
I’m trying to get word embeddings for clinical data using microsoft/pubmedbert. I have 3.6 million text rows. Converting texts to vectors for 10k rows takes around 30 minutes. So for 3.6 million rows, it would take around - 180 hours(8days approx). Is there any method where I can speed up the process? My code - from transformers import AutoTokenizer from transformers import pipeline model_name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext" tokenizer = AutoTokenizer.from_pretrained(model_name) classifier = pipeline('feature-extraction',model=model_name, tokenizer=tokenizer) def lambda_func(row): tokens = tokenizer(row['notetext']) if len(tokens['input_ids'])>512: tokens = re.split(r'\b', row['notetext']) tokens= [t for t in tokens if len(t) > 0 ] row['notetext'] = ''.join(tokens[:512]) row['vectors'] = classifier(row['notetext'])[0][0] return row def process(progress_notes): progress_notes = progress_notes.apply(lambda_func, axis=1) return progress_notes progress_notes = process(progress_notes) vectors_breadth = 768 vectors_length = len(progress_notes) vectors_2d = np.reshape(progress_notes['vectors'].to_list(), (vectors_length, vectors_breadth)) vectors_df = pd.DataFrame(vectors_2d) My progress_notes dataframe looks like - progress_notes = pd.DataFrame({'id':[1,2,3],'progressnotetype':['Nursing Note', 'Nursing Note', 'Administration Note'], 'notetext': ['Patient\'s skin is grossly intact with exception of skin tear to r inner elbow and r lateral lower leg','Patient with history of Afib with RVR. Patient is incontinent of bowel and bladder.','Give 2 tablet by mouth every 4 hours as needed for Mild to moderate Pain Not to exceed 3 grams in 24 hours']}) Note - 1) I’m running the code on aws ec2 instance r5.8x large(32 CPUs) - I tried using multiprocessing but the code goes into a deadlock because bert takes all my cpu cores.
Hi @madhuryadav, you could try to use onnxruntime for this to get some speed-up. Here’s a notebook which shows how to use onnxruntime for BERT. This could also help: github.com/patil-suraj/onnx_transformers (accelerated NLP pipelines for fast inference on CPU, built with Transformers and ONNX Runtime).
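For reference, a rough sketch of exporting the model to ONNX and running it with onnxruntime; the convert helper and its signature vary between transformers versions, so treat this as an outline rather than exact code:

from pathlib import Path

import onnxruntime as ort
from transformers import AutoTokenizer
from transformers.convert_graph_to_onnx import convert

model_name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"

# One-time export of the PyTorch model to an ONNX graph
convert(framework="pt", model=model_name, output=Path("onnx/pubmedbert.onnx"), opset=11)

tokenizer = AutoTokenizer.from_pretrained(model_name)
session = ort.InferenceSession("onnx/pubmedbert.onnx")

enc = tokenizer("Patient is incontinent of bowel and bladder.",
                truncation=True, max_length=512, return_tensors="np")
# Feed numpy int64 inputs; output[0] is the last hidden state
last_hidden_state = session.run(None, dict(enc))[0]
cls_vector = last_hidden_state[:, 0, :]   # one 768-d vector per note

Batching several notes per call (instead of pushing one row at a time through the pipeline) usually gives a further large speed-up on CPU.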
0
huggingface
Intermediate
MarianMt translation issue
https://discuss.huggingface.co/t/marianmt-translation-issue/3013
I’m trying to work on yo-en (Yoruba-to-English) translation using MarianMT, since I found a pretrained bilingual model for my needs. However, when I checked the link here, the source text was encoded differently and a lot of the characters changed. I need help deciding how to proceed, since I think it would affect performance. Thanks 🤍 https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.test.txt
You are probably on Windows, right? That text file contains UTF-8 characters, but Windows (still) defaults to cp1252 or something like that. That means it does not correctly display those characters by default. That does not mean that the text is incorrect: in byte format it is correct, but your computer is just showing it incorrectly. You can check this by downloading the file and opening it in your favourite editor with a UTF-8 encoding. So if you open this file in Python, for instance, you have to use something like with open(yourfile, encoding="utf-8") as fh: ... That should help you. However, this is a very general issue and has nothing at all to do with transformers or any other HF libraries, so please use other forums, like Stack Overflow, for general questions like this.
0
huggingface
Intermediate
Token classification on custom BERT and data
https://discuss.huggingface.co/t/token-classification-on-custom-bert-and-data/2833
I am trying to follow the instructions from here 15 but using my own fine-tuned BERT model. I had to make a few adaptations but I think I got the data in the same format as the example. However, when I try to run the Trainer, I get the following error: RuntimeError: The size of tensor a (8160) must match the size of tensor b (16) at non-singleton dimension 0 I had defined the max_length=510 on the call to tokenizer function, to match what I used to train the language model. I also manually pad the labels to match this length. The code I’m using is the following: def main(): train_texts, train_tags = read_data('./data/ner_train.pkl') val_texts, val_tags = read_data('./data/ner_test.pkl') unique_tags = set(tag for doc in train_tags for tag in doc) tag2id = {tag: id for id, tag in enumerate(unique_tags)} id2tag = {id: tag for tag, id in tag2id.items()} tokenizer = AutoTokenizer.from_pretrained("C:\\Users\\Rogerio\\Documents\\bert-beaver-language") train_encodings = tokenizer(train_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True, max_length=510) val_encodings = tokenizer(val_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True, max_length=510) labels = [[tag2id[tag] for tag in doc] for doc in train_tags] train_labels = [] for doc_labels, doc_offset in zip(labels, train_encodings.offset_mapping): # pad labels if necessary if len(doc_labels) < len(doc_offset): doc_labels += [-100] * (len(doc_offset) - len(doc_labels)) for label, offset in zip(doc_labels, doc_offset): if offset[0] != 0 or offset == (0, 0): label = -100 train_labels.append(label) labels = [[tag2id[tag] for tag in doc] for doc in val_tags] val_labels = [] for doc_labels, doc_offset in zip(labels, val_encodings.offset_mapping): # pad labels if necessary if len(doc_labels) < len(doc_offset): doc_labels += [-100] * (len(doc_offset) - len(doc_labels)) for label, offset in zip(doc_labels, doc_offset): if offset[0] != 0 or offset == (0, 0): label = -100 val_labels.append(label) train_encodings.pop("offset_mapping") # we don't want to pass this to the model val_encodings.pop("offset_mapping") train_dataset = NERDataset(train_encodings, train_labels) val_dataset = NERDataset(val_encodings, val_labels) model = AutoModelForTokenClassification.from_pretrained("C:\\Users\\Rogerio\\Documents\\bert-beaver-language", num_labels=len(unique_tags)) training_args = TrainingArguments( output_dir="C:\\Users\\Rogerio\\Documents\\bert-beaver-ner\\ner_output", logging_dir="C:\\Users\\Rogerio\\Documents\\bert-beaver-ner\\ner_logs", num_train_epochs=8, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_steps=10, save_total_limit=5, overwrite_output_dir=True, save_steps=750, do_eval=True, do_train=True, do_predict=True ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train() The full stack trace is below: 0%| | 0/1797240 [00:00<?, ?it/s]Traceback (most recent call last): File "C:/Users/Rogerio/Documents/Python_Projects/beaver-model-training/ner_bert.py", line 558, in <module> main() File 
"C:/Users/Rogerio/Documents/Python_Projects/beaver-model-training/ner_bert.py", line 496, in main trainer.train() File "C:\Users\Rogerio\python-virtual-envs\beaver-model-training\lib\site-packages\transformers\trainer.py", line 747, in train tr_loss += self.training_step(model, inputs) File "C:\Users\Rogerio\python-virtual-envs\beaver-model-training\lib\site-packages\transformers\trainer.py", line 1075, in training_step loss = self.compute_loss(model, inputs) File "C:\Users\Rogerio\python-virtual-envs\beaver-model-training\lib\site-packages\transformers\trainer.py", line 1099, in compute_loss outputs = model(**inputs) File "C:\Users\Rogerio\python-virtual-envs\beaver-model-training\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\Rogerio\python-virtual-envs\beaver-model-training\lib\site-packages\transformers\models\bert\modeling_bert.py", line 1541, in forward active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels) RuntimeError: The size of tensor a (8160) must match the size of tensor b (16) at non-singleton dimension 0 Any ideas?
This error seems to mean your outputs and labels have mismatched shapes. I would double-check the shapes by grabbing a batch in your training dataloader and looking at the inputs and labels.
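A small sketch of that kind of sanity check, assuming the NERDataset from the question returns a dict of tensors per example:

from torch.utils.data import DataLoader

loader = DataLoader(train_dataset, batch_size=16)
batch = next(iter(loader))

for name, tensor in batch.items():
    print(name, tuple(tensor.shape))

# You would expect something like:
#   input_ids       (16, 510)
#   attention_mask  (16, 510)
#   labels          (16, 510)
# If labels come out flat (one long 1-D tensor instead of one row per example),
# the label lists were flattened per token rather than nested per document,
# which would produce exactly this kind of size mismatch (8160 = 16 * 510).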
0
huggingface
Intermediate
MRPC Reproducibility with transformers-4.1.0
https://discuss.huggingface.co/t/mrpc-reproducibility-with-transformers-4-1-0/2888
I always get lower precision following the MRPC example, what’s the reason? python run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/$TASK_NAME/ and get 12/18/2020 17:16:38 - INFO - __main__ - ***** Eval results mrpc ***** 12/18/2020 17:16:38 - INFO - __main__ - eval_loss = 0.5318707227706909 12/18/2020 17:16:38 - INFO - __main__ - eval_accuracy = 0.7622549019607843 12/18/2020 17:16:38 - INFO - __main__ - eval_f1 = 0.8417618270799347 12/18/2020 17:16:38 - INFO - __main__ - eval_combined_score = 0.8020083645203595 12/18/2020 17:16:38 - INFO - __main__ - epoch = 3.0 12/18/2020 16:45:29 - INFO - __main__ - ***** Eval results mrpc ***** 12/18/2020 16:45:29 - INFO - __main__ - eval_loss = 0.47723284363746643 12/18/2020 16:45:29 - INFO - __main__ - eval_accuracy = 0.8063725490196079 12/18/2020 16:45:29 - INFO - __main__ - eval_f1 = 0.868988391376451 12/18/2020 16:45:29 - INFO - __main__ - eval_combined_score = 0.8376804701980294 12/18/2020 16:45:29 - INFO - __main__ - epoch = 3.0 12/18/2020 16:34:37 - INFO - __main__ - ***** Eval results mrpc ***** 12/18/2020 16:34:37 - INFO - __main__ - eval_loss = 0.571368932723999 12/18/2020 16:34:37 - INFO - __main__ - eval_accuracy = 0.6838235294117647 12/18/2020 16:34:37 - INFO - __main__ - eval_f1 = 0.8122270742358079 12/18/2020 16:34:37 - INFO - __main__ - eval_combined_score = 0.7480253018237863 12/18/2020 16:34:37 - INFO - __main__ - epoch = 3.0 GPU: GTX 1080 transformers: 4.1.0 Torch: 1.6.0 python: 3.8 Server: Ubuntu 18.04
Please don’t use two different accounts to post the exact same message in two different categories.
0
huggingface
Intermediate
Based on HF documentation, unnaswerable questions from Squad 2.0 don’t make it into train/val data
https://discuss.huggingface.co/t/based-on-hf-documentation-unnaswerable-questions-from-squad-2-0-dont-make-it-into-train-val-data/2088
Hi, I wanted to finetune Electra on my own Squad 2.0-style dataset so I looked at the following documentation to figure out what the data format should be. huggingface.co Fine-tuning with custom datasets — transformers 3.5.0 documentation 5 It seems in the walkthrough that only answerable questions actually make it into the training/validation datasets. In the JSON files, if a question cannot be answered, the “answers” array is empty. However in the walkthrough, this is how a (context, question, answer) triplet gets added to the data: for answer in qa['answers']: contexts.append(context) questions.append(question) answers.append(answer) Because it’s iterating through the “answers” array, if i’m not mistaken, the questions that are unanswerable will never get added to the data.
@valhalla @sgugger Not sure if you two are the right people to tag but thought I’d start somewhere!
0
huggingface
Intermediate
Specify attention masks for some heads in multi-head attention
https://discuss.huggingface.co/t/specify-attention-masks-for-some-heads-in-multi-head-attention/2100
Suppose I have 16-head Transformer layers in a standard BERT model. I want to constrain the first head of all the transformer layers to attend to tokens only in the same sentence, while the other 15 heads can attend to all the (non-padding) tokens (which is the default). I looked at head_mask, but that merely specifies which heads to deactivate (0/1). Looked at attention_mask, but that does not provide a way to specify different masks for different heads. Any suggestions would be awesome! EDIT - example The input is a number of sentences, let’s say: Drums, drums in the deep. We cannot get out. They are coming. I want the first head of multi-head attention to just attend to tokens/words in the same sentence. So, when calculating the dot-product attention for “We”, the only words considered are “We cannot get out”. The other sentences are ignored. This can be specified by getting a num_words x num_words mask for the first head, and for each row, placing 1’s for other words in the same sentence, and 0s for words not in the same sentence. However, there doesn’t seem to be a clean way of specifying per-head attention masks. I want to make sure that I am not missing some obvious way of doing this using the huggingface methods.
Hi arunirc, (I’m not sure I understand what you want to do. Are you planning to do a Next Sentence Prediction type of task, where you input pairs of texts having up to 512 tokens each? Or, when you say “same sentence”, do you mean a set of words separated by full stops?) Within each layer, after the attention heads, the output from each head needs to be concatenated before the feed-forward network. For that concatenation to work, I suspect that the matrices need to be the same size. If so, then I don’t think you could have one head only attending to a few tokens, because its matrix would be smaller. Have you considered using two completely separate BERT models, one of which has only 1 head per layer and takes as input only the first text, and the other of which has 15 heads per layer and takes as input the pairs of texts, and combining the outputs later?
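If it helps the discussion, building the same-sentence mask itself is straightforward in plain PyTorch; the hard part is applying it to a single head, which (as far as I can tell) means editing the self-attention module so that this mask is added to head 0’s attention scores before the softmax. A sketch with a made-up sentence_ids tensor:

import torch

# one sentence id per token, e.g. three sentences of lengths 6, 4 and 3
sentence_ids = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2])

# (seq_len, seq_len): True where two tokens belong to the same sentence
same_sentence = sentence_ids[:, None] == sentence_ids[None, :]

# additive mask in the usual transformers convention: 0 = keep, -10000 = block
head0_mask = torch.where(same_sentence, torch.tensor(0.0), torch.tensor(-10000.0))

The stock attention_mask argument is broadcast across all heads, so it cannot express this per-head constraint on its own.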
0
huggingface
Intermediate
Finding gradients in zero-shot learning
https://discuss.huggingface.co/t/finding-gradients-in-zero-shot-learning/2033
First off, the zero-shot module is amazing. It wraps up a lot of boiler-plate that I’ve been using into a nice succinct interface. With that however, I’m having trouble getting the gradients of intermediate layers. Let’s take an example: from transformers import pipeline import torch model_name = 'facebook/bart-large-mnli' nlp = pipeline("zero-shot-classification", model=model_name) responses = ["I'm having a great day!!"] hypothesis_template = 'This person feels {}' candidate_labels = ['happy', 'sad'] nlp(responses, candidate_labels, hypothesis_template=hypothesis_template) This works well! The output is: {'sequence': "I'm having a great day!!", 'labels': ['happy', 'sad'], 'scores': [0.9989933371543884, 0.0010066736722365022]} What I’d like to do however, is look at the gradients of the input tokens to see which tokens are important. This is in contrast to looking at the attention heads (which is also another viable tactic). Trying to rip apart the internals of the module, I can get the logics and embedding layers: inputs = nlp._parse_and_tokenize(responses, candidate_labels, hypothesis_template) predictions = nlp.model(**inputs, return_dict=True, output_hidden_states=True) predictions['logits'] tensor([[-3.1864, -0.0714, 3.2625], [ 4.5919, -1.9473, -3.6376]], grad_fn=<AddmmBackward>) This is expected, as the label for “happy” is index 0 and the entailment index for this model is 2, so the value of 3.2625 is an extremely strong signal. The label for “sad” is 1 and the contradiction index is 0, so the value of 4.5919 is also the correct answer. Great! Now I should be able to look at the first embedding layer and check out the gradient with respect to the happy entailment scalar: layer = predictions['encoder_hidden_states'][0] layer.retain_grad() predictions['logits'][0][2].backward(retain_graph=True) Unfortunately, layer.grad is None. I’ve tried almost everything I can think of, and now I’m a bit stuck. Thanks for the help!
I’ve reproduced this but not sure if I have a good answer – looks like more of a Bart/PyTorch question rather than something specific to the zero shot pipeline. Maybe @patrickvonplaten would have an idea?
0
huggingface
Intermediate
TokenizerFast with various units (e.g., BPE, wordpiece, word, character, unigram)
https://discuss.huggingface.co/t/tokenizerfast-with-various-units-e-g-bpe-wordpiece-word-character-unigram/1932
I am using BartTokenizerFast to encode & decode my dataset. I can find the documentation for using BPE unit for TokenizerFast (including https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/01-training-tokenizers.ipynb 2) Is there any useful documentation for using other units (e.g., wordpiece, word, character) for TokenizerFast? Thank you !
Yes, all the docs are here: https://huggingface.co/docs/tokenizers/python/latest/ cc @anthony @Narsil
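As a concrete (hedged) example with a recent version of the tokenizers library, training a WordPiece tokenizer looks roughly like this; swap in the BPE or Unigram model and its matching trainer for the other units (the corpus file name is a placeholder):

from tokenizers import Tokenizer
from tokenizers.models import WordPiece
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordPieceTrainer

tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = WordPieceTrainer(
    vocab_size=30_000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["my_corpus.txt"], trainer=trainer)
tokenizer.save("wordpiece-tokenizer.json")

The saved JSON file can then be wrapped for use with transformers via PreTrainedTokenizerFast(tokenizer_file="wordpiece-tokenizer.json").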
0
huggingface
Intermediate
Causal masks in BERT vs. GPT2
https://discuss.huggingface.co/t/causal-masks-in-bert-vs-gpt2/1540
Hi all - just a quick clarification about causal masks in GPT2 and BertLMHeadModel. It’s clear that GPT2 automatically adds causal masks (lines 115 and 151 in modeling_gpt2.py). I believe that this should be the case for BertLMHeadModel as well - my understanding is that it is mainly meant to be used as a decoder in the EncoderDecoder construct (please correct me if this is not the case). However, I am having a hard time finding where this occurs in the source. Are causal masks automatically added? I suspect I am just missing where this happens in the code, as the attention_mask input (meant for masking padding tokens) wouldn’t take care of this. Thanks!
Hi @dilawn, have you figured this out yet by any chance? I have just posted a similar question about not finding causal masks in the RobertaForCausalLM and BertGeneration classes. I am just wondering whether I misunderstand the concept or am missing the line of code that applies these causal masks. Link to my post.
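For what it’s worth, a hedged pointer: as far as I can tell, the causal mask for the BERT-style decoder classes is not built in modeling_bert.py itself, but in the shared get_extended_attention_mask helper of the base model class, and it is only switched on when the config has is_decoder=True. A minimal sketch of enabling it:

from transformers import BertConfig, BertLMHeadModel

# is_decoder=True is what makes the model apply a causal (left-to-right) mask
# internally; with the default is_decoder=False attention stays bidirectional.
config = BertConfig.from_pretrained("bert-base-uncased", is_decoder=True)
model = BertLMHeadModel.from_pretrained("bert-base-uncased", config=config)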
0
huggingface
Intermediate
Positional Encoding error, Protein Bert Model
https://discuss.huggingface.co/t/positional-encoding-error-protein-bert-model/1737
Hi guys, this seems very obvious but I can’t seem to find an answer anywhere. I’m trying to build a very basic RoBERTa protein model similar to ProTrans. It’s just RoBERTa, but I need to use very long positional encodings of 40_000, because the protein sequences are about 40,000 amino acids long. But any time I change the max positional embeddings to 40k I keep getting a CUDA error: device-side assert triggered. (Additional question: I save the LineByLineTextDataset to a variable. From what I understand, line-by-line splits the text into chunks, but how can I index in to see each chunk?) (Also, other than a lambda function, is there any more efficient way I can preprocess and tokenise this dataset?) Steps: 1) Preprocess a UniRef50 sequence into a single space-separated text document, treating each amino acid as a word and each protein as a sentence. 2) Tokenise: I can load the tokens from the ProTrans model, and it tokenises fine. 3) Use the line-by-line text dataset to split each into 40_000 blocks. 4) Set up the config; the important changes are RoBERTa vocab size 30 and max_positional_embeddings=40_000. 5) Run through the data collator, model setup, train setup, and get the error.
Hi @donal, I guess it’s near impossible to put 40,000 tokens into any transformer model that uses full attention. Maybe a better choice is Reformer or Longformer (although I am not sure that Longformer can handle 40,000 tokens either). The main problem with the BERT architecture is that its memory and computational complexity is O(L^2), where L is max_seq_len, so I guess about 2048 tokens is the upper limit. Or maybe consider using something like a sliding window or some kind of memory bank.
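Two hedged notes that might help here. First, a “device-side assert triggered” during training often means an index fell outside an embedding table; RoBERTa-style models usually need a couple of extra position-embedding slots beyond the block size because of the padding offset, so it is worth checking max_position_embeddings against the actual block size. Second, if you go the sparse-attention route, a rough Longformer config sketch could look like this (all sizes below are placeholders, not a recommendation):

from transformers import LongformerConfig, LongformerForMaskedLM

config = LongformerConfig(
    vocab_size=30,                  # amino-acid vocabulary
    max_position_embeddings=4098,   # 4096 tokens + the RoBERTa-style offset
    attention_window=512,           # local attention window per layer
    num_hidden_layers=6,
    num_attention_heads=8,
    hidden_size=512,
    intermediate_size=2048,
)
model = LongformerForMaskedLM(config)
print(model.num_parameters())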
0
huggingface
Intermediate
What could be causing “line 51, in write_predictions_to_file if not preds_list[example_id]: IndexError: list index out of range” in token-classification?
https://discuss.huggingface.co/t/what-could-be-causing-line-51-in-write-predictions-to-file-if-not-preds-list-example-id-indexerror-list-index-out-of-range-in-token-classification/1458
I am running token classification on my data, and it runs just fine except for write_predictions_to_file in https://github.com/huggingface/transformers/blob/master/examples/token-classification/tasks.py#L51 I believe it’s an issue with my data, because I run the script over other data sources without any issues. It seems that it runs the predictions just fine, and the issue is with writing the predictions to file; there seems to be some sort of index mismatch. The full error is: Traceback (most recent call last): File "run_ner.py", line 308, in <module> main() File "run_ner.py", line 297, in main token_classification_task.write_predictions_to_file(writer, f, preds_list) File "/content/transformers/examples/token-classification/tasks.py", line 51, in write_predictions_to_file if not preds_list[example_id]: IndexError: list index out of range Specifically, the mismatch seems to be between preds_list and test_input_reader. I’ve been looking at the difference between the data that causes the error and data that runs just fine, but I can’t seem to pick anything out. I was thinking maybe it was caused by several new lines in a row, or some other line-formatting issue, but I haven’t spotted it yet. Anyone have an idea? For convenience, I recreated the issue in this colab notebook: colab.research.google.com Google Colaboratory
Hi @reSearch2vec, I will have a look on it and report back here
0
huggingface
Intermediate
Loading models sometimes maxes DISK%, then crashes
https://discuss.huggingface.co/t/loading-models-sometimes-maxes-disk-then-crashes/1364
Got an issue where quite often, somewhere in the chain my disk goes 100% read (500mb/s for 10-20m) then crash. I’ve put loggers everywhere to see what’s causing it, and the last logger is usually after loading the summarization model (code here 2, model used). It could be anything; that’s the most common model used on the service. All I know is it’s transformers, as it’s always that file/module that triggers it. My models are all persisted, so it’s not re-downloading. Dev’ing Docker on Windows (WSL2 with nvidia-docker / dev-channel). I know that’s the smoking gun, but it happens on my Ubuntu server-server too. pytorch=1.6.0 cuda=10.1 cudnn=7 transformers=3.3.1 python=3.8 (github/lefnire/dockerfiles, forum not letting me post >2 links). I saw github/huggingface/transformers/issues/5001 which had me wondering if it’s a pytorch<->cuda<->transformers version bad-match (the ticket’s very old; cuda 9.2, etc). But is there a recommended/common version-combo of Pytorch, CUDA, Python for transformers? Could it be something with the .lock files?
I looked into the Hugging Face Dockerfiles, and we’re using the same setup except the Python version (theirs is Ubuntu 18.04 with the default Python 3.6). They also install mkl, an Intel optimizer, but I’m not sure it’s used anywhere; I installed it to check, no cigar. Blast, it happens nearly every time I load facebook/bart-cnn-large.
0
huggingface
Intermediate
Fine tuning bert on next sentence prediction task
https://discuss.huggingface.co/t/fine-tuning-bert-on-next-sentence-prediction-task/1101
I am trying to fine-tune Bert using the Huggingface library on next sentence prediction task. I looked at the tutorial and I am trying to use DataCollatorForNextSentencePrediction and TextDatasetForNextSentencePrediction . When I am using that I get the following error(use the pastebin link to see the error)https://pastebin.pl/view/bde2c3d4 50. I have provided my code bellow. ============Code================ def train(bert_model,bert_tokenizer,path,eval_path=None): out_dir = “/content/drive/My Drive/next_sentence/” training_args = TrainingArguments( output_dir=out_dir, overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=30, save_steps=10000, save_total_limit=2, ) data_collator = DataCollatorForNextSentencePrediction( tokenizer=bert_tokenizer,mlm=False,block_size=512,nsp_probability =0.5 ) dataset = TextDatasetForNextSentencePrediction( tokenizer = bert_tokenizer, file_path=path, block_size=512, ) trainer = Trainer( model=bert_model, args=training_args, train_dataset=dataset, data_collator=data_collator, ) trainer.train() trainer.save_model(out_dir) def main(): print("Running main") bert_tokenizer = BertTokenizer.from_pretrained("bert-base-cased") bert_model = BertForNextSentencePrediction.from_pretrained("bert-base-cased") train_data_set_path = "/content/drive/My Drive/next_sentence/line_data_set_file.txt" train(bert_model,bert_tokenizer,train_data_set_path) #prepare_data_set(bert_tokenizer) main()
Can you fix the formatting in your post? It would make it easier to read
0
huggingface
Intermediate
Trying to understand XForSequenceClassification heads
https://discuss.huggingface.co/t/trying-to-understand-xforsequenceclassification-heads/1153
I’m interested in 1-sentence and 2-sentence text classification, so I’ve been looking at the classification heads for BERT, GPT2, XLNet, and RoBERTa. I have a few questions: 1. I see that there are dedicated classification classes BertForSequenceClassification 1, XLNetForSequenceClassification, and RobertaForSequenceClassification. However, there is no XForSequenceClassification class for GPT2. Is there any documentation to help us write our own? 2. When I look at the classification heads for BERT, XLNet, and RoBERTa, the layer structure for producing the logits appears to be different for each one. I would think that the final few layers would be exactly the same. For example, here is the code for the BERT classification head: class BertForSequenceClassification(BertPreTrainedModel): def __init__(self, config): self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, config.num_labels) def forward( ... ): outputs = self.bert( ... ) pooled_output = outputs[1] pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) Here is the code for the XLNet classification head: class XLNetForSequenceClassification(XLNetPreTrainedModel): def __init__(self, config): self.transformer = XLNetModel(config) self.sequence_summary = SequenceSummary(config) self.logits_proj = nn.Linear(config.d_model, config.num_labels) def forward( ... ): transformer_outputs = self.transformer( ... ) output = transformer_outputs[0] output = self.sequence_summary(output) logits = self.logits_proj(output) Here is the code for the RoBERTa classification head: class RobertaForSequenceClassification(BertPreTrainedModel): config_class = RobertaConfig base_model_prefix = "roberta" def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.roberta = RobertaModel(config) self.classifier = RobertaClassificationHead(config) def forward( ... ): outputs = self.roberta( ... ) sequence_output = outputs[0] logits = self.classifier(sequence_output) class RobertaClassificationHead(nn.Module): """Head for sentence-level classification tasks.""" def __init__(self, config): super().__init__() self.dense = nn.Linear(config.hidden_size, config.hidden_size) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.out_proj = nn.Linear(config.hidden_size, config.num_labels) def forward(self, features, **kwargs): x = features[:, 0, :] # take <s> token (equiv. to [CLS]) x = self.dropout(x) x = self.dense(x) x = torch.tanh(x) x = self.dropout(x) x = self.out_proj(x) return x From the above code, producing the output logits involves: BERT: dropout, linear XLNet: linear RoBERTa: dropout, linear, tanh, dropout, linear Why does each model implement the final layers differently? Are the implementations taken from the original papers? Wouldn’t it be better to make the classification layers exactly the same for each model so that classification relative performance would be a result of the models’ internal architecture rather than the classification layer? Thank you for any help.
Any help would be greatly appreciated.
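On question 1, a rough hand-rolled sketch of what such a GPT-2 head could look like (this is just one possible design with made-up defaults, not the official implementation; transformers has since added a GPT2ForSequenceClassification class that pools in a similar last-token fashion):

import torch
import torch.nn as nn
from transformers import GPT2Model


class GPT2Classifier(nn.Module):
    """Classification head on top of GPT2Model.

    GPT-2 has no [CLS] token, so we pool by taking the hidden state of the
    last non-padding token; other pooling choices are equally valid.
    """

    def __init__(self, model_name="gpt2", num_labels=2, dropout=0.1):
        super().__init__()
        self.transformer = GPT2Model.from_pretrained(model_name)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(self.transformer.config.n_embd, num_labels)

    def forward(self, input_ids, attention_mask=None):
        hidden_states = self.transformer(input_ids, attention_mask=attention_mask)[0]
        if attention_mask is not None:
            last_idx = attention_mask.sum(dim=1) - 1
        else:
            last_idx = torch.full(
                (input_ids.size(0),), input_ids.size(1) - 1,
                dtype=torch.long, device=input_ids.device,
            )
        batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device)
        pooled = hidden_states[batch_idx, last_idx]
        return self.classifier(self.dropout(pooled))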
0
huggingface
Intermediate
Contributing to Github
https://discuss.huggingface.co/t/contributing-to-github/1198
Hello, I occasionally run into bugs (rare, minor, of course) in the Huggingface codebase, which I fix in my own pulls. Does Huggingface accept pull requests from the general user base (with an appropriate issue, of course)? Thanks, Andreas
Yes! PRs are welcome (I’m a contributor). Here’s the contributing guide.
0
huggingface
Intermediate
Sharing BERT formatted corpus
https://discuss.huggingface.co/t/sharing-bert-formatted-corpus/75
First of all, sorry if I am missing something, because this is my first post on the forum. I think that users/teams that train and share BERT-like models could also share the formatted corpus, so that other users could easily reuse it for new models such as Electra, saving time and computation (as the corpus format is the same). Maybe HF could store and share formatted corpora, or the users/teams could simply provide them. The idea comes from seeing several BERT-like models in the hub for different languages; I would like to bring them to Electra, as it needs fewer resources for pretraining, and I (and maybe many users) can deal with it.
HuggingFace already provides ~100 NLP datasets in their nlp 14 repository. I think these are mostly evaluation datasets. However, you can add your own dataset 8, too!
0
huggingface
Intermediate
BART - Input format
https://discuss.huggingface.co/t/bart-input-format/1078
Hi, Due to recent code changes by @sshleifer, I am trying to understand what is desired for BART’s input for training and generation, and whether the codebase is reflecting it properly as I’ve encountered some inconsistencies. I am assuming both src_ids and tgt_ids are encoded with a BART tokenizer, and therefore have the format of [bos, token1, token2, …, eos]. Looking at transformers/examples/seq2seq/finetune.py#L151 10 decoder_input_ids = shift_tokens_right(tgt_ids) means that eos will be the first token and bos will be the second token. This has an effect on generation: We need decoder_start_token_id=eos_token_id. The first actually generated token (i.e. after decoder_start_token_id) will be bos. Questions: The default value for decoder_start_token_id is missing from facebook/bart-base and facebook/bart-large-mnli, which means it falls back to bos. The other BART models have eos as their decoder_start_token_id. Why is the difference? Looks to me that using finetune.py with bart-base/bart-large-mnli will not have generation as intended. In fairseq’s implementation the equivalent for decoder_start_token_id is set to bos: fairseq/models/bart/hub_interface.py#L123. Can you please explain why did you decide to use the format of [eos, bos, token1, token2, ...] for decoder_input_ids instead of [bos, token1, token2, ...]? Is there still need for force_bos_token_to_be_generated? It was introduced in transformers/pull/6526 (new user, can’t add another link), when the first token of decoder_input_ids was bos and the second was the first regular token of the target sequence (transformers/examples/seq2seq/finetune.py#L144). Using it now shouldn’t have any effect, if I understand correctly (because a trained model will easily learn to always output bos in this position anyway). Thanks!
This is a great question and super confusing. You can get a good snapshot of my understanding here 67. I don’t think that will clear everything up, but if you could read that and let me know what you still don’t understand it would be helpful. If I encounter empirical evidence that I should change decoder_start_token_id for any model I will do so.
0
huggingface
Intermediate
Properly loading a fine tuned model from directory
https://discuss.huggingface.co/t/properly-loading-a-fine-tuned-model-from-directory/857
Hi everybody, after reading the docs I still don’t really understand how I should load my saved model properly. I fine-tuned a CamembertForSequenceClassification.from_pretrained model on some data; the results were good, so I saved it using save_pretrained(model_path). Now I would like to use this model for inference… I use these lines of code: config = AutoConfig.from_pretrained(model_path) tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForSequenceClassification.from_config(config) I now have several questions: If I tokenize a new sequence with tokenizer.encode_plus(new_seq), it doesn’t tokenize the sentence the way it did before I saved the model; I must specify all the params (max_length, …) to retrieve what I want. Is that normal? If I load my saved model with AutoModel instead of AutoModelForSequenceClassification, the model does not load a classification layer, even though I saved my model with this last layer. Is that also normal? So I’m wondering how I can be sure that I load the exact same model, with all the weights and params that I saved previously. Thanks for everything!
Yes, you should pass the same params to the tokenizer that you passed during training. AutoModel won’t add the heads (classification layer, LM head, etc.). Use the proper AutoModel classes to get the same weights and params, i.e. AutoModelForSequenceClassification for a classification model, AutoModelForCausalLM for causal LMs like GPT-2, etc. You can load the saved model using just AutoModelForSequenceClassification.from_pretrained; it’ll load the config itself. You should pass the config explicitly only when you want to change something in it.
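A minimal sketch of the reload-and-predict flow with a recent transformers version, assuming model_path is the directory from the question and reusing the same tokenization parameters as during training (the max_length/padding values here are placeholders):

from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

inputs = tokenizer(
    "Une nouvelle phrase à classer",
    truncation=True, padding="max_length", max_length=128,
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.argmax(dim=-1))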
0
huggingface
Intermediate
Can Q&A model say “I don’t know”
https://discuss.huggingface.co/t/can-q-a-model-say-i-dont-know/779
Hi, I am working on training a domain-specific extractive Q&A model, but I was wondering if the model could say “I don’t know” when the answer is not in the context. Any comments?
I think SQuAD 2.0 introduced this kind of “unanswerable” questions: arXiv.org Know What You Don't Know: Unanswerable Questions for SQuAD 13 Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets... Demo on the model hub that uses a fine-tuned model: https://huggingface.co/deepset/bert-base-cased-squad2?text=Where+lives+Wolfgang%3F&context=My+name+is+Wolfgang+and+I+live+in+Berlin 18 Outputs “Berlin” correctly. Now an unanswerable question (because it is not given in the context): https://huggingface.co/deepset/bert-base-cased-squad2?text=Where+was+Obama+born%3F&context=My+name+is+Wolfgang+and+I+live+in+Berlin 17
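With such a SQuAD 2.0-style model, the question-answering pipeline can also surface the “no answer” case; a small hedged sketch:

from transformers import pipeline

qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")

result = qa(
    question="Where was Obama born?",
    context="My name is Wolfgang and I live in Berlin.",
    handle_impossible_answer=True,  # allow the empty answer to win
)
print(result)  # an empty answer string / very low score is the model's "I don't know"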
0
huggingface
Intermediate
RoBERTa from scratch with different vocab vs. fine-tuning
https://discuss.huggingface.co/t/roberta-from-scratch-with-different-vocab-vs-fine-tuning/569
I have a question about training a custom RoBERTa model. My corpus consists of 100% English text, but the structure of the text I have is totally different from well-formed English books / Wikipedia sentences. As the overall nomenclature of my dataset is very different from books / Wikipedia, I wanted to train a new LM from scratch using a new tokenizer trained on my dataset, to capture this corpus-specific nomenclature. I would like to hear from experts: which of the following approaches is the best one for my case? 1) Train a custom tokenizer and train RoBERTa from scratch. 2) Just fine-tune pretrained RoBERTa and rely on the existing BPE tokenizer. 3) Use pretrained RoBERTa and somehow adjust the vocab (if that’s even possible, and if so, how?)
Bump, anyone?
0
huggingface
Intermediate
BART_LM: Odd Beam Search Output
https://discuss.huggingface.co/t/bart-lm-odd-beam-search-output/618
Hi folks, problem: the fine-tuned model adopts peculiar behaviour with beam search. Specifically, beam search outputs include 2 bos tokens and exclude the first word token. I have double-checked my data feed and the inputs look fine, with one bos, and the first word token is present. Even more strangely, there are no such problems with nucleus sampling, which suggests that this is particular to beam search. For example, here are two outputs from the same fine-tuned model; beam search output: <s><s>rek likes to play football. </s> nucleus sampling output: <s> Derek likes to go to the football pitch. </s> Any clues?
Hi, how did you prepare the labels and decoder_input_ids?
0
huggingface
Intermediate
Performing Back Translation with T5 network
https://discuss.huggingface.co/t/performing-back-translation-with-t5-network/481
Hi, current trying to reproduce the paper “Unsupervised Translation of Programming Languages 6” (TransCoder) from Facebook research, but using the T5 network as the seq2seq model. Right now I am stuck on the back translation part from the approach: def back_translate(self, batch: List[torch.Tensor]) -> Dict[str, torch.Tensor]: device = batch[0]['input_ids'].device print(device) input_ids = torch.stack([example['input_ids'] for example in batch]) target_ids = torch.stack([example['target_ids'] for example in batch]) attention_mask = torch.stack([example['attention_mask'] for example in batch]) batch_langs = [self.tokenizer.decode([ids[0].detach()]) for ids in target_ids] lang = random.choice(self.langs) self.model.eval() cpu_model = self.model.to('cpu') outputs = cpu_model.generate( input_ids = input_ids, attention_mask = attention_mask, decoder_start_token_id = self.tokenizer.encode(f'<{lang}>')[0], max_length = 256 ) self.model.train() inputs = [self.tokenizer.decode(ids).replace('complete: ', '') for ids in input_ids] outputs = [self.tokenizer.decode(ids[1:]) for ids in outputs] # remove lang token examples = [] for inpt, outpt, l in zip(outputs, inputs, batch_langs): inpt = f'complete: {inpt} </s>' outpt = f'<{l}>{outpt}' input_encodings = self.tokenizer.encode_plus(inpt, pad_to_max_length = True, max_length = 256, truncation = True) target_encodings = self.tokenizer.encode_plus(outpt, pad_to_max_length = True, max_length = 256, truncation = True) encodings = { 'input_ids': torch.tensor(input_encodings['input_ids'], dtype=torch.long, device = xm.xla_device()), 'attention_mask': torch.tensor(input_encodings['attention_mask'], dtype=torch.long, device = xm.xla_device()), 'target_ids': torch.tensor(target_encodings['input_ids'], dtype=torch.long, device = xm.xla_device()), 'target_attention_mask': torch.tensor(target_encodings['attention_mask'], dtype=torch.long, device = xm.xla_device()) } examples.append(encodings) input_ids = torch.stack([example['input_ids'] for example in examples]) input_ids, _ = self.masked_data_collator.mask_tokens(input_ids) lm_labels = torch.stack([example['target_ids'] for example in examples]) lm_labels[lm_labels[:, :] == 0] = -100 attention_mask = torch.stack([example['attention_mask'] for example in examples]) decoder_attention_mask = torch.stack([example['target_attention_mask'] for example in examples]) print(input_ids.device, xm.xla_device()) return { 'input_ids': input_ids, 'attention_mask': attention_mask, 'lm_labels': lm_labels, 'decoder_attention_mask': decoder_attention_mask } I am training this in Google colab using TPUs, but even though I am explicitly putting the tensors onto the TPU device, it is giving me an error saying: Input tensor is not an XLA tensor: torch.FloatTensor Here is a link to the full colab notebook: https://colab.research.google.com/drive/1nRGkCdei7D6v6njKWPVZZWtGgPAedkVQ?usp=sharing 6 Any help or advice would be greatly appreciated!
Can you post the full error trace? Just to be safe, you can do return { 'input_ids': input_ids.to(xm.xla_device()), 'attention_mask': attention_mask.to(xm.xla_device()), 'lm_labels': lm_labels.to(xm.xla_device()), 'decoder_attention_mask': decoder_attention_mask.to(xm.xla_device()) }
0
huggingface
Intermediate
An easy way to make huggingface PRs
https://discuss.huggingface.co/t/an-easy-way-to-make-huggingface-prs/496
Do one time: wget https://github.com/stas00/git-tools/blob/master/how-to-make-pr/github-make-pr-branch chmod u+x github-make-pr-branch sudo cp github-make-pr-branch /usr/bin/ (or put it in your local bin path) And then any time you want to work on a new huggingface transformers feature, just run: github-make-pr-branch ssh GITHUB-USERNAME huggingface transformers new-feature It will: Fork the project if needed Set up this fork to track the upstream Sync your fork to upstream master Create ‘new-feature’ branch Set its upstream for an easy push Now you just need to: cd transformers-new-feature do your magic and when ready: git commit -a git push and the PR is ready (just need to go to github and click on PR suggestion to submit) It, of course, works for any github project. If this is useful I can make a PR to add instructions to CONTRIBUTING.pm. This could also be converted in an even simpler script invocation specific to transformers, so it’ll just need: transformers-pr new-feature currently it uses an ssh access. There is also a detailed guide to How to Make a Pull Request (PR) 1.
Cool! Another way is to directly use the github cli 1. My own process: Clone repo git clone repo_url Create new branch git checkout -b feat-add_cool_stuff Regular commits (I do them through VS Code because I like the interface) Send PR with github cli gh pr create
0
huggingface
Intermediate
Calculate Impact of Input Tokens on BERT Output Probability
https://discuss.huggingface.co/t/calculate-impact-of-input-tokens-on-bert-output-probability/466
Say I’ve trained a BERT model for classification. I’d like to calculate the proportional impact each input token is having on the predicted output. For example - and this is very general - if I have a model that labels input text as {‘about dogs’ : 0, ‘about cats’ : 1}, the following input sentence: s = 'this is a sentence about a cat' should output very close to: 1 HOWEVER, what I’d like is to calculate each input’s impact on that final prediction, e.g. (assuming we’re tokenizing on the level of words - which is not how it would be done in practice, I know): {this : .01, is: .005, a : .02, sentence : .0003, about : [some other low prob], a: [another low prob], cat : 0.999999} Intuitively I’d think this means running a forward pass with the input sentence, then looking at the backprop values? But I’m not quite sure how you’d do that. Thoughts?
Hi @matthew, I’ve looked into this a bit in the past and your intuitions are nearly spot on! What you have described is something like a saliency map. Here are some references that might be useful: https://arxiv.org/pdf/1312.6034.pdf 22 https://arxiv.org/pdf/1703.01365.pdf 13 https://arxiv.org/pdf/1706.03825.pdf 12 Taking the simplest case, what is effectively done is a forward pass, then a backward pass all the way to the input layer. The intuition is that will give you a sense of which tokens have the largest impact on the output if you were to change them by a small amount. In other words, if one of the input tokens from a sentence about cats is changed in such a way that the output of the model becomes 0 (i.e. a sentence about dogs), then that particular token would have a high saliency score. Some of the papers above still were a bit unclear to me, so here are some supplementary references to hopefully get you on your way: https://medium.com/@thelastalias/saliency-maps-for-deep-learning-part-1-vanilla-gradient-1d0665de3284 36 https://glassboxmedicine.com/2019/06/21/cnn-heat-maps-saliency-backpropagation/ 12 https://blog.qure.ai/notes/visualizing_deep_learning 17
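To make the vanilla-gradient idea concrete, here is a hedged sketch using a fine-tuned sequence classifier (the model name is a placeholder, and the L2 norm over the embedding gradient is just one common convention):

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "my-finetuned-dogs-vs-cats-bert"  # placeholder for your own checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

inputs = tokenizer("this is a sentence about a cat", return_tensors="pt")

# Run the embedding layer ourselves so we have a tensor to differentiate w.r.t.
embeddings = model.get_input_embeddings()(inputs["input_ids"])
embeddings.retain_grad()

logits = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"]).logits
logits[0, 1].backward()  # gradient of the "about cats" logit (class index 1)

# One saliency score per token: L2 norm of that token's embedding gradient.
saliency = embeddings.grad.norm(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, saliency.tolist()):
    print(f"{token:>12}  {score:.4f}")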
0
huggingface
Intermediate
Preprocessing required for fine-tuning RoBERTa
https://discuss.huggingface.co/t/preprocessing-required-for-fine-tuning-roberta/161
I am finetuning a QA model in Hindi using a trained Roberta LM. I need to preprocess the dataset for Roberta. What are the steps that I need to take before I feed the input to the model? One script for English is given here 9. I am not sure if other languages behave in a similar way. e.g The linked notebook adds an extra " " character before the start token. Is this necessary for Roberta? What are other nuances that have to be taken care of? I am only concerned about spans at token-level and not character-level as explained in the link mentioned above. Thank you.
Hi @kushalj001, Maybe you can consider XLM-Roberta (XLM-R) instead of Roberta? XLM-R supports 100 languages out-of-the-box, including its tokenizer which can tokenize most language in the world. This kaggle notebook show how to finetune 3 languages simultaneously on XLM-R with TF2+TPU , which is extremely efficient (10x faster than P100) https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm 30
0
huggingface
Intermediate
Share your work here
https://discuss.huggingface.co/t/share-your-work-here/125
Show us what you’ve created using transformers, nlp ! It could be a blog post, a jupyter notebook, a colab, a picture, a github repo, a web app, or anything else. Some tips: Probably the easiest way to blog is using FastPages 5. You can easily convert your notebooks and .md files into blog posts using FastPages. The easiest way to share a notebook on github is to install the gist it extension 1. This will only be possible if you use a platform that supports jupyter extensions, such as GCP. Otherwise, you can create a notebook gist by clicking File->Download to get your notebook to your computer, and then follow the steps from this SO post : Go to https://gist.github.com/YOUR-GITHUB-USERNAME/ Click ‘New Gist’ on the upper right corner Open the folder in a Finder/Explorer window on your local computer Drag the file into the text box (the ‘code space’). This should fill the space with JSON looking text for the framework of the notebook content. Copy/Paste the full file name (e.g., mynotebook.ipynb) into the filename box, and give a description above. Create the Gist! If you want to have folks on the forum look at a draft and give feedback without sharing it more widely, just mention that in your post You can also just use a reply to this topic to describe what you did - preferably pasting in a picture or two!
To start this off, happy to announce it here first: I’ve been working on question generation using transformers for the past two months, and today I’m releasing the first set of experiments. Question generation is the task of automatically generating questions from a text paragraph. This project is aimed as an open-source study on question generation with pre-trained transformers (specifically seq2seq models) using straightforward end-to-end methods without overly complicated pipelines. The goal is to provide simplified data processing and training scripts, and easy-to-use pipelines for inference. Specifically, I trained a T5 model for: answer-aware question generation, multitask QA and QG, and end-to-end question generation (without answer supervision). Here’s a sneak peek (screenshot of generated question/answer pairs). Everything is built using Hugging Face libraries: dataset: nlp library; model and training: transformers; models hosted on: the model hub. For more details, here’s the repo. Do share your feedback, specifically regarding the quality of the questions, the mistakes, and any ethical biases that you observe. Happy to discuss more details here. Cheers! All models are available on the hub with the inference API configured; you can search using the question-generation tag. Here’s a colab if anyone wants to play more with it.
0
huggingface
Course
About the Course category
https://discuss.huggingface.co/t/about-the-course-category/6728
Use this category to ask any question related to the course or organize study groups.
Hey everyone, can we form a study group similar to how MLT 21 does it? I really suck at keeping up with a course alone and would love to set up a weekly meeting kind of thing where we all can actually go through some parts of the course on our own and then discuss the same. We can then leave the entire week to practice it out and then discuss it briefly at the start of the next week session before repeating.
0
huggingface
Course
Chapter 7 questions
https://discuss.huggingface.co/t/chapter-7-questions/11746
Use this topic for any question about Chapter 7 of the course.
Hi - this is a relatively simple question but i’m totally new to HuggingFace so apologies in advance but on section 3 you discuss domain adaption. I’m just experimenting with the task at the end of the section i.e. “To quantify the benefits of domain adaptation, fine-tune a classifier on the IMDb labels for both the pretrained and fine-tuned MiniLM checkpoints…” Can you use the ‘Fill-Mask’ domain-adapted checkpoint you generated in the course (huggingface-course/distilbert-base-uncased-finetuned-imdb) for a classification task? Or do you have to adapt the original distilbert-base-uncased to the domain specifically for classification?
0
huggingface
Course
Build a language detector
https://discuss.huggingface.co/t/build-a-language-detector/11520
Please read the topic category description 2 to understand what this is all about Description For many online applications it is not known in advance what language an end-user will communicate in. The goal of this project is to build a system that can automatically predict the language a text is written in. Model(s) There are a few popular multilingual models that you can start with: xlm-roberta-base 3 bert-base-multilingual-uncased 2 Datasets There are quite a few multilingual datasets 6 available on the Hub. Many of these have a “language” field that could be used as a target for the model to predict. Challenges This project will likely require you to combine several datasets together to gain enough coverage of many languages. Desired project outcomes Create a Streamlit or Gradio app on Spaces 1 that can predict the language of a piece of text provided by an end-user Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources A good baseline to compare your model against is the Python langid library 5 Discord channel To chat and organise with other people interested in this project, head over to our Discord 4 and: Follow the instructions on the #join-course channel Join the #language-detection channel Just make sure you comment here to indicate that you’ll be contributing to this project
@lewtun Hi, I would like to work on this project.
0
huggingface
Course
Chapter 5 questions
https://discuss.huggingface.co/t/chapter-5-questions/11744
Use this topic for any question about Chapter 5 9 of the course.
Please correct me if I’m wrong. A DataSet object can be thought of as some tabular data, whose rows and columns are examples and features respectively. The length of a DataSet object is the number of examples, which equals to the number of its rows. Each column corresponds to one feature. Given this understanding of DataSet, I found the following descriptions in chapter 5 are confusing (in other words incorrect?). … here those 1,000 examples gave 1,463 new features, resulting in a shape error. 1,463 is the number of rows (i.e. examples) of the newly added columns (i.e. features) such as attention_mask, input_ids, etc. We can check that our new dataset has many more features than the original dataset by comparing the lengths: len(tokenized_dataset[“train”]), len(drug_dataset[“train”]) (206772, 138514) Obviously the above two numbers are the numbers of rows, i.e. the numbers of examples, not the number of columns (features). The number of features in this case is 4. Specifically, these 4 features are attention_mask input_ids overflow_to_sample_mapping and token_type_ids
0
huggingface
Course
Chapter 6 questions
https://discuss.huggingface.co/t/chapter-6-questions/11745
Use this topic for any question about Chapter 6 1 of the course.
Hi, in the section “Fast tokenizers’ special powers” (TensorFlow tutorial), executing this part of the code triggers an error (see the attached screenshot of the traceback).
0
huggingface
Course
Inside the Token classification pipeline (PyTorch)
https://discuss.huggingface.co/t/inside-the-token-classification-pipeline-pytorch/13838
Hi, I have been looking through one of the part 2 course tutorials for Inside the Token classification pipeline (PyTorch) illustrated by @sgugger. I have tried to use the example in (https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/videos/token_pipeline_pt.ipynb) and used sshleifer/tiny-dbmdz-bert-large-cased-finetuned-conll03-english as a model-checkpoint. However, I had the following error, AttributeError: ‘list’ object has no attribute ‘argmax’ in `import torch probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)[0].tolist() predictions = probabilities.argmax(dim=-1)[0].tolist() print(predictions)` Am I missing something? Best, Ghadeer
Hey @ghadeermobasher, there are several strategies that you can use to merge the entities, and my suggestion would be to inspect the implementation in the token-classification pipeline. As you can see in the docstring, there are quite a few subtleties associated with merging entities correctly (it depends on the language, tokenizer, etc.)!
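On the AttributeError itself, a hedged guess based on the snippet in the question: .tolist() turns the tensor into a plain Python list, which has no .argmax() method, so taking the argmax while it is still a tensor should avoid the error:

import torch

probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)[0]
predictions = probabilities.argmax(dim=-1).tolist()
print(predictions)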
1
huggingface
Course
Helsinki-NLP/opus-mt-en-fr missing tf_model.h5
https://discuss.huggingface.co/t/helsinki-nlp-opus-mt-en-fr-missing-tf-model-h5/13486
Hi there, I have been following the tensorflow track of the HF course and got an http 404 error when running the below: from transformers import TFAutoModelForSeq2SeqLM model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) error message: 404 Client Error: Not Found for url: https://huggingface.co/Helsinki-NLP/opus-mt-en-fr/resolve/main/tf_model.h5 I went to the model card and could not find the tf_model.h5 file. Is there something that I am missing or does the model only work for Torch? thanks
Hi @nickmuchi, thanks for the bug report! Indeed, you’re right that this model only has weights for PyTorch. However, you can load it in TensorFlow using the from_pt argument as follows: from transformers import TFAutoModelForSeq2SeqLM model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint, from_pt=True) In the meantime, I’ll post a fix on the website - thanks!
1
huggingface
Course
Chapter 3 questions
https://discuss.huggingface.co/t/chapter-3-questions/6800
Use this topic for any question about Chapter 3 27 of the course.
Hi @sgugger, thank you for the informative course. Do you have any documents or tutorials on fine-tuning a pretrained model for a translation task? Thank you very much.
0
huggingface
Course
Create a multilingual classifier
https://discuss.huggingface.co/t/create-a-multilingual-classifier/11519
Please read the topic category description to understand what this is all about Description Many countries have populations that speak and write in more than one language. Building NLP applications in these conditions can be challenging, especially if the languages differ significantly from each other. The goal of this project is to explore the effectiveness of multilingual Transformer models by training a classifier that can analyze texts in multiple languages at once. Model(s) There are a few popular multilingual models that you can start with: xlm-roberta-base 3 bert-base-multilingual-uncased 1 Datasets There are several multilingual datasets on the Hub that you can use to get started: amazon_reviews_multi 4 Even better would be to create a multilingual dataset in your own languages! Challenges The current multilingual models are typically limited to 100 languages or so. Check out the corresponding papers to see if your language is supported. Desired project outcomes Create a Streamlit or Gradio app on Spaces that can automatically classify text in multiple languages. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources This project has some overlap with the summarization section of Chapter 7 in the course.
Hi Lewtun, is there any notebook with a solution to this task? Kind regards
0
huggingface
Course
Fine-tuning nomenclature
https://discuss.huggingface.co/t/fine-tuning-nomenclature/12383
Chapter 3 of the course is called “fine-tuning”. Is this use of the term “fine-tuning” consistent with this article (Transfer learning & fine-tuning 1), which differentiates between transfer learning and fine tuning? i.e. When we fine-tune a model as in Chapter 3, are we training the whole network, or are we only training the new layer(s) attached to the body of the pre-trained network?
Fine-tuning is the act of re-training a pretrained model on a new dataset/task; it has nothing to do with whether or not you freeze part of the network (at least in the way we use the term in the course, which is the general way it is used in the published literature AFAIK). Freezing part of the network might help in some situations (like computer vision), but it doesn’t really help for Transformer models; usually it gives worse results.
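For anyone who still wants to try the freezing experiment mentioned above, a minimal sketch (assuming model is a loaded xxxForSequenceClassification; base_model points at the body without the head):

# Freeze the pretrained body and train only the classification head.
for param in model.base_model.parameters():
    param.requires_grad = False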
0
huggingface
Course
Numpy.str_ error during training phase
https://discuss.huggingface.co/t/numpy-str-error-during-training-phase/12425
Hi, I was using the Amazon reviews dataset to try to build a small language detector, but bumped into the numpy.str_ error during the training phase. You can view my colab notebook here: Google Colab. I was using the review body field as the text and the language field as the label, and dropped the other fields. I found that the ‘language’ field is of type datasets.Value, not datasets.ClassLabel, and I guessed this is what causes the numpy.str_ error during training. Question: how do I convert a datasets.Value to a datasets.ClassLabel? One way I can think of is doing str2int inside the preprocess_function/tokenize method, but I’m curious whether there is any existing conversion method to do that. Thanks
Hey @ivanlau I think your idea to apply ClassLabel.str2int() is the simplest approach, e.g. from datasets import load_dataset, ClassLabel, Features dset = load_dataset("amazon_reviews_multi", "all_languages", split="test") # Create ClassLabel feature langs = dset.unique("language") lang_feature = ClassLabel(names=langs) # Update default features features = dset.features features["language"] = lang_feature # Update dataset dset_with_classlabel = dset.map(lambda x : {"language": lang_feature.str2int(x["language"])}, features=features) dset_with_classlabel.features # {'language': ClassLabel(num_classes=6, names=['de', 'en', 'es', 'fr', 'ja', 'zh'], names_file=None, id=None), # 'product_category': Value(dtype='string', id=None), # 'product_id': Value(dtype='string', id=None), # 'review_body': Value(dtype='string', id=None), # 'review_id': Value(dtype='string', id=None), # 'review_title': Value(dtype='string', id=None), # 'reviewer_id': Value(dtype='string', id=None), # 'stars': Value(dtype='int32', id=None)} Alternatively, you can provide the features dictionary when you load the dataset with load_dataset. Hope that helps!
1
huggingface
Course
Build a news summarizer
https://discuss.huggingface.co/t/build-a-news-summarizer/11572
Description A common data science task for many businesses is to be able to condense the news about their products or services into short summaries. The goal of this task is to fine-tune a model to automatically summarise news articles, ideally in a domain that is of interest to you! Model(s) There are various summarisation models on the Hub that have been fine-tuned on the famous CNN/Dailymail dataset. These provide a good starting point for performing domain adaptation: google/pegasus-cnn_dailymail 4 google/roberta2roberta_L-24_cnn_daily_mail 4 There are also other summarization models that are worth investigating: facebook/bart-large-cnn 3 Datasets Using the summarization filter on the Hugging Face Hub 3 gives a good list of datasets to start from. Challenges [Explain whether the task is feasible with a single T4 GPU (what we get from AWS), does the data need a lot of preprocessing, are there ethical considerations etc] Desired project outcomes Create a Streamlit or Gradio app on Spaces that can summarize news articles, either from their text or from a given URL. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources Here are some existing spaces as inspiration: https://huggingface.co/spaces/chinhon/News_Summarizer 11 https://huggingface.co/spaces/benthecoder/news-summarizer 5 Discord channel To chat and organise with other people interested in this project, head over to our Discord 3 and: Follow the instructions on the #join-course channel Join the #new-summarizer channel Just make sure you comment here to indicate that you’ll be contributing to this project
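Before any fine-tuning, one of the checkpoints listed above can already serve as a baseline through the summarization pipeline; a minimal sketch, where the article text is just a placeholder:

from transformers import pipeline

# facebook/bart-large-cnn was trained on CNN/DailyMail, so news articles suit it well
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Replace this placeholder with the text of a news article you want to condense. "
    "Anything in the style of CNN/DailyMail journalism should work reasonably well."
)
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])

Fine-tuning on articles from your own domain (the domain adaptation step mentioned above) should then improve on this baseline.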
Hi Lewis, I am interested in working on a text summarizer as my course project. How can I join this project? - Ali
0
huggingface
Course
Build a question answering system in your own language
https://discuss.huggingface.co/t/build-a-question-answering-system-in-your-own-language/11570
Please read the topic category description to understand what this is all about Description One of the major challenges with NLP today is the lack of systems for the thousands of non-English languages in the world. In this project, the goal is to build a question answering system in your own language. There are two main approaches you can take: Find a SQuAD-like dataset in your language (these tend to only exist for a few languages unfortunately) Find a dataset of question / answer pairs and build a search engine that returns the most likely answers to a given question Model(s) For the SQuAD-like task, any BERT-like model in your language would be a good starting point. If such a model doesn’t exist, consider one of the multilingual Transformers like mBERT or XLM-RoBERTa. For the search-like task, check out one of the many sentence-transformers models on the Hub: sentence-transformers (Sentence Transformers) Datasets squad 2 squad_it 1 (for inspiration) mqa Challenges This is a somewhat complex project because it involves training multilingual models on potentially low-resource languages. Desired project outcomes Create a Streamlit or Gradio app on Spaces 1 that allows users to obtain answers from a snippet of text in your own language, or returns the top-N documents that might contain the answer. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources GitHub - crux82/squad-it: A large scale dataset for Question Answering in Italian 1 Discord channel To chat and organise with other people interested in this project, head over to our Discord and: Follow the instructions on the #join-course channel Join the #question-answering-de-fr channel Just make sure you comment here to indicate that you’ll be contributing to this project
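For the first (SQuAD-like) approach, a baseline sketch with the question-answering pipeline; the checkpoint below is one example of a multilingual SQuAD-style model, so swap in whatever exists for your language:

from transformers import pipeline

# Example multilingual extractive QA checkpoint; replace with a model for your language
qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

context = "Hugging Face est une entreprise qui développe des outils open source pour le machine learning."
result = qa(question="Que développe Hugging Face ?", context=context)
print(result["answer"], result["score"])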
Hi @lewtun would be glad to work on that project Thanks
0
huggingface
Course
Build a cover letter generator
https://discuss.huggingface.co/t/build-a-cover-letter-generator/11721
Description Sometimes students spend more time working on their cover letters, customizing them for each company, than working on their portfolio. And the preliminary screening steps that are based only on cover letters are quite unfair. The goal of this project is to build a model that can automatically generate cover letters from a targeted job and a person’s skills / previous experiences / resume… Model(s) I’m open for suggestions but I believe these are good places to start with: t5-base 2 Datasets This is the hardest part. I already started working on scraping a couple of example resumes from a couple of websites (Indeed…), but we still need to fill the person’s information accordingly. Challenges Scrape scarce data for a real-world problem. Fine-tune a state-of-the-art language model. Desired project outcomes Create a Streamlit or Gradio app on Spaces that can generate a cover letter from a targeted job and a person’s resume Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources A good baseline to compare our model with gpt2 cover letter generator 4 Discord channel To chat and organise with other people interested in this project, head over to our Discord and: Follow the instructions on the #join-course channel Join the #cover-letter-generator channel Just make sure you comment here to indicate that you’ll be contributing to this project
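A rough sketch of how a single (job, resume) to cover-letter training pair could be fed to t5-base; the input format, field names and example texts are all hypothetical and only illustrate the seq2seq setup:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical pair: condition on job + skills, target the human-written cover letter
source = ("generate cover letter: job: Data Scientist at ACME. "
          "skills: Python, NLP, 2 years of experience building ML pipelines.")
target = "Dear Hiring Manager, I am excited to apply for the Data Scientist role at ACME..."

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids

loss = model(**inputs, labels=labels).loss  # plug this into your training loop
print(loss.item())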
This looks like a super cool project @nouamanetazi - thanks for suggesting it! Let me know if you want a Discord channel and I’ll create it
0
huggingface
Course
Create your own search engine
https://discuss.huggingface.co/t/create-your-own-search-engine/11575
Please read the topic category description 17 to understand what this is all about Create your own search engine In Chapter 5 of the Course, you learned how to use FAISS to find documents that are most semantically similar to a given query. The goal of this project is to extend this idea to build a retrieval and reranking system, where the retriever returns possibly relevant results, while the reranker evaluates how relevant these hits are to the query. An example of the architecture is the retrieve & rerank pipeline from the sentence-transformers library (see the diagram in the link under Additional resources). Model(s) The sentence-transformers models on the Hub 9 are great for the reranking task. Datasets Wikipedia is usually a good corpus to test retrieval systems on and you can find a dump in various languages here: wikipedia 12 Challenges Implementing the full retriever-reranking architecture might be a challenge, so a simpler place to start is with a single long document. You can then chunk that document into paragraphs and compute the relevancy scores across each paragraph Desired project outcomes Create a Streamlit or Gradio app on Spaces 3 that allows a user to enter a search query about a document (or a whole corpus of documents), and returns the top 5 most relevant paragraphs. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources https://www.sbert.net/examples/applications/retrieve_rerank/README.html#retrieve-re-rank 24 Discord channel To chat and organise with other people interested in this project, head over to our Discord 9 and: Follow the instructions on the #join-course channel Join the neural-search-engine channel Just make sure you comment here to indicate that you’ll be contributing to this project
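A compact sketch of the two stages with sentence-transformers, using a toy in-memory corpus in place of Wikipedia; the model names are common retrieval/reranking checkpoints, not a requirement:

from sentence_transformers import SentenceTransformer, CrossEncoder, util

retriever = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")        # bi-encoder
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")     # cross-encoder

paragraphs = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "FAISS is a library for efficient similarity search over dense vectors.",
    "Transformers provides thousands of pretrained models for NLP tasks.",
]
query = "Which library can I use for fast vector similarity search?"

# Stage 1: retrieve candidate paragraphs with embeddings
corpus_emb = retriever.encode(paragraphs, convert_to_tensor=True)
query_emb = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=3)[0]

# Stage 2: rerank the candidates with the cross-encoder
scores = reranker.predict([(query, paragraphs[hit["corpus_id"]]) for hit in hits])
best = paragraphs[hits[int(scores.argmax())]["corpus_id"]]
print(best)

For a real corpus you would replace the toy list with paragraph chunks of your document(s) and keep the FAISS index from Chapter 5 for the first stage.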
Hi , I will be working on this cool project during the event.
0
huggingface
Course
Poster2Plot: Generate Movie/T.V show plot from poster
https://discuss.huggingface.co/t/poster2plot-generate-movie-t-v-show-plot-from-poster/12132
Description Our team is working on building an image captioning model which can generate a movie/t.v show plot from its poster. The goal of this project is to create an image captioning model using a transformer encoder model like Vision Transformer (ViT) and a transformer decoder language model like GPT-2 Model(s) Any vision-based encoder and language model decoder would be a good candidate to train the VisionEncoderDecoderModel for image captioning. We are trying the following models first: Encoder: google/vit-base-patch16-224-in21k 1 Decoder: gpt2 1 Datasets We are using publicly available IMDb datasets to train the model. Some examples: IMDb movies extensive dataset | Kaggle 1 48K IMDB Movies With Posters | Kaggle 1 Challenges The main challenge is to create a good dataset of poster and movie plots. Also it will be interesting to see if the model gives good predictions for non-English movies/tv shows. Desired project outcomes We will create a Streamlit or Gradio app on Spaces 3 that can predict a movie/t.v show plot from its poster.
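A sketch of how the encoder and decoder mentioned above can be tied together with VisionEncoderDecoderModel; the training-step lines are commented out because poster_image and plot_text stand in for your own data:

from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

# Tie a ViT encoder to a GPT-2 decoder for image-to-text generation
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# GPT-2 has no padding token, so reuse EOS and wire up the generation settings
tokenizer.pad_token = tokenizer.eos_token
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Training-step sketch with placeholder data (uncomment once you have posters and plots):
# pixel_values = feature_extractor(images=poster_image, return_tensors="pt").pixel_values
# labels = tokenizer(plot_text, return_tensors="pt").input_ids
# loss = model(pixel_values=pixel_values, labels=labels).loss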
Let’s give it a try.
0
huggingface
Course
Build a title recommender for scientific articles
https://discuss.huggingface.co/t/build-a-title-recommender-for-scientific-articles/11522
Please read the topic category description 1 to understand what this is all about Description If you’ve ever worked in research, then you know that picking a catchy title for your articles is not easy! In this project, we’ll train a summarization model that can convert abstracts into titles. Model(s) Several of the Summarization models 3 on the Hub should serve as a good starting point for this project. Datasets There are several scientific datasets available on the Hub: arxiv_dataset 6 pubmed 3 Challenges Training a model that can summarise across any scientific domain is unlikely to be feasible. We recommend picking one subdomain (e.g. high-energy physics) and focusing your efforts there. Desired project outcomes Create a Streamlit or Gradio app on Spaces 1 that can generate titles from a given abstract. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources https://cs230.stanford.edu/projects_spring_2020/reports/38954132.pdf 4 Discord channel To chat and organise with other people interested in this project, head over to our Discord 1 and: Follow the instructions on the #join-course channel Join the science-title-generator channel Just make sure you comment here to indicate that you’ll be contributing to this project
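As a quick baseline before fine-tuning, title generation can be framed as extreme summarization of the abstract; a sketch with t5-small, where the abstract is made up and the length limits are only a starting point:

from transformers import pipeline

# The summarization pipeline applies T5's "summarize:" task prefix from the model config
generator = pipeline("summarization", model="t5-small")

abstract = (
    "We study the fine-tuning of transformer language models for generating concise "
    "titles from scientific abstracts, and show that domain-specific pretraining "
    "improves title quality on high-energy physics papers."
)
title = generator(abstract, max_length=20, min_length=5, do_sample=False)
print(title[0]["summary_text"])

Fine-tuning on real (abstract, title) pairs from your chosen subdomain should give much better titles than this zero-shot baseline.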
This looks like a great project and I would love to contribute to this.
0
huggingface
Course
Learning from emojis
https://discuss.huggingface.co/t/learning-from-emojis/11573
Please read the topic category description to understand what this is all about Description Emoticons are often used as a proxy for emotional content in social media posts or instant messaging chats. As a result, emojis are often used as a label to train text classifiers. The goal of this project is to create a Transformer-based implementation of DeepEmoji 6, a research project from MIT that studied this task with LSTMs. Model(s) Any BERT-like model would be a good candidate for fine-tuning on an emoji dataset and you can get inspiration from models like these: cardiffnlp/twitter-roberta-base-emoji Datasets tweet_eval 7 Challenges To get better performance, you may want to perform domain adaptation by fine-tuning the language model on in-domain data. We recommend trying this approach only after building a baseline classifier. Desired project outcomes Create a Streamlit or Gradio app on Spaces that can predict the top 5 emojis associated with a piece of text Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources https://huggingface.co/spaces/ml6team/emoji_predictor 6 (a Space for inspiration) Discord channel To chat and organise with other people interested in this project, head over to our Discord and: Follow the instructions on the #join-course channel Join the learning-from-emojis channel Just make sure you comment here to indicate that you’ll be contributing to this project
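A starting-point sketch: the emoji subset of tweet_eval provides the labels, and the cardiffnlp checkpoint above already gives a baseline whose top-5 predictions you can display (the example tweet is made up):

from datasets import load_dataset
from transformers import pipeline

# 20 emoji classes, with tweets as inputs
dataset = load_dataset("tweet_eval", "emoji")
print(dataset["train"].features["label"].names[:5])

# Baseline classifier already fine-tuned on this task
classifier = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-emoji")
preds = classifier("Good morning, the sun is finally shining again!", return_all_scores=True)[0]
top5 = sorted(preds, key=lambda p: p["score"], reverse=True)[:5]
print(top5)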
This project sounds interesting. I would like to work on something like this.
0
huggingface
Course
DataCollatorWithPadding: TypeError
https://discuss.huggingface.co/t/datacollatorwithpadding-typeerror/12093
Hi, I am following the course. I am now at Fine-tuning a pretrained model - Hugging Face Course. When I set up DataCollatorWithPadding as follows, I got an error while trying to reproduce the course code in Kaggle. This error occurs with either a CPU-only device or a GPU device. Input: checkpoint = 'bert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(checkpoint) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") Output: `--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_42/1563280798.py in 1 checkpoint = 'bert-base-uncased' 2 tokenizer = AutoTokenizer.from_pretrained(checkpoint) ----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt") TypeError: __init__() got an unexpected keyword argument 'return_tensors'` When I call the help method, it too confirms that there is no argument return_tensors. Input: help(DataCollatorWithPadding.__init__) Output: `Help on function __init__ in module transformers.data.data_collator: __init__(self, tokenizer: transformers.tokenization_utils_base.PreTrainedTokenizerBase, padding: Union[bool, str, transformers.file_utils.PaddingStrategy] = True, max_length: Union[int, NoneType] = None, pad_to_multiple_of: Union[int, NoneType] = None) → None` But the source page Data Collator — transformers 4.12.5 documentation says that there is such an argument. By default it returns PyTorch tensors, while I need TF tensors. What am I missing? Please help me.
Hi all, this issue is due to an older version of the transformers library. When I upgraded it, the problem was resolved. # upgrade transformers and datasets to latest versions !pip install --upgrade transformers !pip install --upgrade datasets Best Regards.
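After upgrading, one way to confirm the fix is to check the installed version and build the collator with return_tensors; a small sketch (the example sentences are arbitrary, and "tf" assumes TensorFlow is installed):

import transformers
from transformers import AutoTokenizer, DataCollatorWithPadding

print(transformers.__version__)  # the course docs reference 4.12.x, where this argument exists

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")

features = [tokenizer("hello world"), tokenizer("a longer sentence that needs padding")]
batch = data_collator(features)
print({k: v.shape for k, v in batch.items()})  # TensorFlow tensors, padded to the same length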
1
huggingface
Course
Dataset object has no attribute `to_tf_dataset`
https://discuss.huggingface.co/t/dataset-object-has-no-attribute-to-tf-dataset/12099
I am following the HuggingFace Course. I am at Fine-tuning a model. Link: Fine-tuning a pretrained model - Hugging Face Course 1 I use Tokenize_function and map as mentioned in the course to process the data. # define a tokenize function def Tokenize_function(example): return tokenizer(example['sentence'], truncation=True) # tokenize entire data tokenized_data = raw_data.map(Tokenize_function, batched=True) I get a Dataset object at this point. When I try converting this to a TF dataset object as mentioned in the course, it throws the following error. # convert to TF dataset train_data = tokenized_data["train"].to_tf_dataset( columns = ['attention_mask', 'input_ids', 'token_type_ids'], label_cols = ['label'], shuffle = True, collate_fn = data_collator, batch_size = 8 ) Output: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /tmp/ipykernel_42/103099799.py in <module> 1 # convert to TF dataset ----> 2 train_data = tokenized_data["train"].to_tf_dataset( \ 3 columns = ['attention_mask', 'input_ids', 'token_type_ids'], \ 4 label_cols = ['label'], \ 5 shuffle = True, \ AttributeError: 'Dataset' object has no attribute 'to_tf_dataset' When I look at dir(tokenized_data["train"]), there is no method or attribute named to_tf_dataset. Why do I get this error, and how can I fix it? Please help me.
Hey @rajkumar I believe the to_tf_dataset() method was only added in a recent version of datasets. Could you try upgrading to the latest version and check if the problem persists?
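A quick sanity check after upgrading (sketch; any small dataset will do for the attribute check):

import datasets
from datasets import load_dataset

print(datasets.__version__)

raw = load_dataset("glue", "mrpc", split="train")
print(hasattr(raw, "to_tf_dataset"))  # should print True on a recent datasets release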
1
huggingface
Course
FAQ about the course projects
https://discuss.huggingface.co/t/faq-about-the-course-projects/11689
Hello everyone ! I’ll collect here any frequently asked questions that arise during the course of the community event on November 15th - 19th. Feel free to comment here if something is unclear Can I propose my own project? Yes, this is certainly encouraged! Just create a new topic under the #course:course-event category. You can use one of the existing project ideas as a template for what information to include. Can I work on more than one project? Yes, but we recommend completing one first before starting/joining another. How do I join a team? Just comment on one of the project ideas in #course:course-event to indicate that you’d like to contribute. Alternatively, you can propose your own project idea by creating a new topic on the forums. Just tag me (@lewtun), so I know what you’re proposing How many people can each team have? Each team can have 1-4 people. We’ve found in previous events that having more than 4 people adds planning complexity. If you want to work by yourself, please note that you’ll have to use your own GPUs / cloud provider as the Amazon SageMaker compute is reserved for teams with 2 or more people. What should I do after joining a team? There are three main things you should do after joining a team: Head over to Discord and join the project’s channel. If a channel does not yet exist, you can request it in the #request-a-channel channel and someone in the team will create it for you. Join the Amazon SageMaker Community organisation 13 on the Hub. Every model trained using the free compute on Amazon SageMaker will need to be uploaded to that organization. [Optional] Join your team’s organisation on the Hub. The organisation can be used to store your datasets and Streamlit / Gradio demo as a Space 3. If an organisation doesn’t yet exist for your project, just create one yourself and share the link in the project topic. What are the requirements to obtain the course certificate? Your team should have a Streamlit or Gradio demo running on Spaces 3. There should only be one Space per team, and you can deploy it under a single user account or organization. Someone from the team will check that the application runs and then everyone in the team will receive their certificate. The deadline is November 24th. Where should our team deploy our Space? You can deploy your team’s space in several locations: Under a single team member’s username (e.g. here’s a Space 1 created under my account) Under an organisation that you create for the team (e.g. here’s a Space 4 created under the course org) Under the Amazon SageMaker Community 13 org (here’s an example) If you want to transfer a Space from your user account to an organisation, just select the Settings tab of your Space and scroll down to the Rename or transfer this space box. There you can select New owner to change the location of your Space to an organisation. My team has deployed a Space for our project, what do we do next? Make sure all the team members (i.e. Hub usernames) are listed on the Space Post a link to your Space in the corresponding project topic in #course:course-event or project channel in Discord. Feel free to tag @lewtun so we don’t miss it [Optional] Write a nice model card 4 for the models that you trained for the Space Celebrate on completing the project ! How can I access the compute provided by Amazon SageMaker? Fill out this registration form 11 Request to join the Amazon SageMaker Community 13 organisation on the Hub Do I have to use Amazon Sagemaker to train my models? Although we encourage you to use the free compute generously provided by Amazon Sagemaker, you are welcome to use your own GPUs / cloud provider. What kind of compute resources will be supplied by Amazon Sagemaker? You can expect to have access to a single T4 GPU (16 GB RAM) between November 17-19. After November 19 you’ll need to use your own compute. Fortunately, there are several free options you can try: Google Colab Kaggle notebooks Paperspace Gradient notebooks 2 Can I share my Amazon SageMaker credentials with another team member? No, please fill out the registration form 11 and you’ll receive a confirmation email with instructions sometime later.
What kind of compute resources will be supplied by AWS? Which type, how much RAM, for how long can we run it? Background: I have some ideas for projects, but they require training a large model which cannot be trained with a normal single google colab pro T4/P100 GPU (for training e.g. a DeBERTa-large model even the ‘high-RAM’ GPUs from colab are not enough).
0
huggingface
Course
Create a GitHub issues tagger
https://discuss.huggingface.co/t/create-a-github-issues-tagger/11521
Please read the topic category description to understand what this is all about Description Many open-source projects on GitHub use Issues to triage feature requests, bugs, and so on. For example, check out the Issues tab of Transformers and Datasets to get an idea. The goal of this project is to pick your favourite open-source project and create a bot that can automatically assign a Label (e.g. bug, enhancement, etc.) to a new GitHub issue. Model(s) Any of the pretrained BERT-like models on the Hub should serve as a good basis for this project. Given the domain is about source code, you may find that fine-tuning the language model first on the dataset gives a boost in performance. Datasets For this project you’ll have to create your own dataset by downloading and processing the GitHub issues associated with an open-source project. You can do this with GitHub’s REST 1 or GraphQL APIs. You can find an example dataset on the Hub here: lewtun/github-issues 5 Challenges This is a multilabel classification task, so you’ll need to do some data exploration to figure out which classes can be feasibly detected. Desired project outcomes Create a Streamlit or Gradio app on Spaces that ingests new GitHub issues from an open-source project and predicts the Labels of each one. Don’t forget to push all your models and datasets to the Hub so others can build on them! Additional resources Predicting Issues’ Labels with RoBERTa 1 Check out this chapter 1 of the course for more details Discord channel To chat and organise with other people interested in this project, head over to our Discord 1 and: Follow the instructions on the #join-course channel Join the #github-issues-classification channel Just make sure you comment here to indicate that you’ll be contributing to this project
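To get started on the dataset, here is a sketch of pulling one page of labelled issues through GitHub's REST API; the repository is only an example, and for a full dump you would paginate and authenticate with a personal access token to avoid rate limits:

import requests

url = "https://api.github.com/repos/huggingface/datasets/issues"
response = requests.get(url, params={"state": "all", "per_page": 100, "page": 1})
issues = response.json()

rows = [
    {
        "title": issue["title"],
        "body": issue["body"],
        "labels": [label["name"] for label in issue["labels"]],
    }
    for issue in issues
    if "pull_request" not in issue  # the issues endpoint also returns pull requests
]
print(len(rows), rows[0]["labels"] if rows else [])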
I would like to proceed with this project if still possible
0