Dataset columns:
docs: string (4 classes)
category: string (lengths 3–31)
thread: string (lengths 7–255)
href: string (lengths 42–278)
question: string (lengths 0–30.3k)
context: string (lengths 0–24.9k)
marked: int64 (0 or 1)
huggingface
🤗Transformers
Minor Bug: HF (run_text_classification) attempts to use XLA on CUDA device
https://discuss.huggingface.co/t/minor-bug-hf-run-text-classification-attempts-to-use-xla-on-cuda-device/8454
A minor inconsistency: on a GPU runtime, when I execute !pip install -q cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl to install the TPU client, HuggingFace will try to use XLA even though a CUDA device is present: RuntimeError: tensorflow/compiler/xla/xla_client/computation_client.cc:273 : Missing XLA configuration Shouldn’t there be a check so that, if the XLA/TPU cores flag is not passed, it falls back to CUDA and then CPU rather than trying to run via XLA?
Good point, which flags were you thinking of? If you feel up to it, don’t hesitate to open a PR with those changes!
0
huggingface
🤗Transformers
Save only best model in Trainer
https://discuss.huggingface.co/t/save-only-best-model-in-trainer/8442
I have read previous posts on a similar topic but could not conclude whether there is a workaround to save only the best model and not a checkpoint at every step. My disk fills up even after I set save_total_limit to 5, since the trainer saves every checkpoint to disk from the start. Please suggest. Thanks
You can set save_strategy to "no" to avoid saving anything during training, and save the final model once training is done with trainer.save_model().
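A minimal sketch of that suggestion (output directories, the model and the dataset objects are placeholders, not from the thread):

from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    save_strategy="no",          # skip all intermediate checkpoints
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)  # model/dataset assumed defined
trainer.train()
trainer.save_model("./best_model")  # writes the final weights once, after training

If some checkpoints are still wanted, combining load_best_model_at_end=True with a small save_total_limit keeps only a few of them on disk.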
0
huggingface
🤗Transformers
`run_glue.py` with my own dataset of one-sentence input
https://discuss.huggingface.co/t/run-glue-py-with-my-own-dataset-of-one-sentence-input/3098
Hello, This post is related to `run_glue.py` fails when using my own dataset of regression task · Issue #9393 · huggingface/transformers · GitHub 17 and [examples/text-classification] `do_predict` for the test set of local datasets · Issue #9442 · huggingface/transformers · GitHub 4. While I was writing the text to open an issue, I realized that it seemed to be a simple mistake on my part. If anyone gives the detail about it, I would appreciate your comments. Information Model I am using (Bert, XLNet …): Bert The problem arises when using: [ ] the official example scripts: (give details below) [x] my own modified scripts: (give details below) almost the same as run_glue.py, but add some modifications in evaluation metrics, using test sets The tasks I am working on is: [ ] an official GLUE/SQUaD task: (give the name) [x] my own task or dataset: (give details below) To reproduce It seems that an error occurs when I use run_glue.py with my own dataset of a regression task. CUDA_VISIBLE_DEVICES=0 python <my_modified_run_glue.py> \ --model_name_or_path bert-base-cased \ --train_file data/****.csv \ --validation_file data/****.csv \ --test_file data/****.csv \ # this arg is added for issue #9442 --do_train \ --do_eval \ --do_predict \ # this arg is related to issue #9442 --max_seq_length 64 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 10.0 \ --load_best_model_at_end \ --evaluation_strategy epoch \ --metric_for_best_model eval_pearson \ --output_dir **** \ --overwrite_output_dir An example of the train/valid CSV file is as below: id,label,sentence1 __id_as_string__,3.0,__string__ Then, the trainer gives me the information below. [INFO|trainer.py:387] 2021-01-07 12:52:02,202 >> The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: id, sentenc e1. [INFO|trainer.py:387] 2021-01-07 12:52:02,204 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: id, sente nce1. Expected behavior it is natural that id column is ignored, but I didn’t know why sentence1 is ignored. I checked again the task_to_keys in the original script: task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } Should I use “sentence” instead of "sentence" if there is only one sentence in the input (in other words, sentence2 is None`)? Thank you in advance.
I’ve changed sentence1 to sentence, but the almost same info appears: [INFO|trainer.py:387] 2021-01-07 13:22:18,233 >> The following columns in the training set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence, id. [INFO|trainer.py:387] 2021-01-07 13:22:18,233 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence, id. Is it related to the code snippet below? # Preprocessing the datasets if data_args.task_name is not None: sentence1_key, sentence2_key = task_to_keys[data_args.task_name] else: # Again, we try to have some nice defaults but don't hesitate to tweak to your use case. non_label_column_names = [name for name in datasets["train"].column_names if name != "label"] if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names: sentence1_key, sentence2_key = "sentence1", "sentence2" else: if len(non_label_column_names) >= 2: sentence1_key, sentence2_key = non_label_column_names[:2] else: sentence1_key, sentence2_key = non_label_column_names[0], None Should I change the order of the columns?
0
huggingface
🤗Transformers
Multiple Mask Tokens
https://discuss.huggingface.co/t/multiple-mask-tokens/174
For those wishing to [MASK] several tokens, here this is. My question, however, relates to the output. I added “top_k” assuming I’d be able to return multiple sentences, but that was not the case. I am not sure how exactly I can achieve this. import torch from transformers import BertTokenizer, BertModel,BertForMaskedLM tokenizer = BertTokenizer.from_pretrained('bert-base-cased') input_tx = "[CLS] [MASK] [MASK] [MASK] of the United States mismangement of the Coronavirus is its distrust of science. [SEP]" tokenized_text = tokenizer.tokenize(input_tx) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) top_k = 10 tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([[0]*25]) model = BertForMaskedLM.from_pretrained('bert-base-cased') outputs = model(tokens_tensor, token_type_ids=segments_tensors) predictions = outputs[0] predicted_index = [torch.argmax(predictions[0, i]).item() for i in range(0,24)] predicted_token = [tokenizer.convert_ids_to_tokens([predicted_index[x]])[0] for x in range(1,24)] print(predicted_token) `Output: 'The', 'main', 'cause', 'of', 'the', 'United', 'States', 'mi', '##sman', '##gement', 'of', 'the', 'Co', '##rona', '##virus', 'is', 'its', 'di', '##st', '##rust', 'of', 'science', '`
Hi there! First of all, please note that in the latest release, the recommended way to preprocess your input is just to call the tokenizer on your test: import torch from transformers import BertTokenizer, BertModel,BertForMaskedLM tokenizer = BertTokenizer.from_pretrained('bert-base-cased') input_txt = "[MASK] [MASK] [MASK] of the United States mismanagement of the Coronavirus is its distrust of science." inputs = tokenizer(input_txt, return_tensors='pt') This returns a dict string to tensors (since I asked to return pytorch tensors with the last argument) and you can then directly call your model on it: model = BertForMaskedLM.from_pretrained('bert-base-cased') outputs = model(**inputs) predictions = outputs[0] At this stage, predictions is the output of our language model before the softmax (we won’t care about that since the probabilities after the softmax or the activations before are in the same order). You ask for the most probable token, so it only returns that. If you want, say, the most probable 10 tokens, you could go: sorted_preds, sorted_idx = predictions[0].sort(dim=-1, descending=True) for k in range(10): predicted_index = [sorted_idx[i, k].item() for i in range(0,24)] predicted_token = [tokenizer.convert_ids_to_tokens([predicted_index[x]])[0] for x in range(1,24)] print(predicted_token)
0
huggingface
🤗Transformers
How is the “Auto Model For Sequence Classification” architecture?
https://discuss.huggingface.co/t/how-is-the-auto-model-for-sequence-classification-architecture/8440
How is the architecture of AutoModelForSequenceClassification? I suppose it’s some pre-trained transformer with some dense layer for classification, however where could I see the forward details of this model?
The auto classes are just abstractions that work for every architecture. You can see the actual forward passes in each modeling file. For instance, if you are using a BERT checkpoint, you will get a BertForSequenceClassification model, whose forward pass is defined in transformers.models.bert.modeling_bert.
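A quick way to see that resolution in practice (the checkpoint name and label count are just examples):

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
print(type(model))       # transformers.models.bert.modeling_bert.BertForSequenceClassification
print(model.classifier)  # the dense classification head on top of the pooled BERT output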
0
huggingface
🤗Transformers
Tutorials not found
https://discuss.huggingface.co/t/tutorials-not-found/8437
The first 3 tutorials are nowhere to be found (404 error). Linked page: huggingface.co, “🤗 Transformers Notebooks” — “You can find here a list of the official notebooks provided by Hugging Face. Also, we would like to list here interesting content created by the community.”
Yes, those have been replaced by more recent versions; check out the master docs for the new links.
0
huggingface
🤗Transformers
MT5 Fine Tuning - KeyError: ‘source_ids’
https://discuss.huggingface.co/t/mt5-fine-tuning-keyerror-source-ids/5257
Hi, I am trying to fine tune MT5 for multitask question answer and question generation similar to @valhalla model. I prepared the dataset by using datasets library as follows: train_dataset = Dataset.from_pandas(pd.DataFrame(generate_data(mode="train"))) valid_dataset = Dataset.from_pandas(pd.DataFrame(generate_data(mode="valid"))) processor = DataProcessor( tokenizer, max_source_length=data_args.max_source_length, max_target_length=data_args.max_target_length ) train_dataset = processor.process(train_dataset) valid_dataset = processor.process(valid_dataset) columns = ["source_ids", "target_ids", "attention_mask"] However, when I try to train my model as below: !python3 run_multi.py \ --model_name_or_path google/mt5-small \ --model_type mt5 \ --tokenizer_name_or_path mt5_qg_tokenizer \ --output_dir mt5-small-multi \ --train_file_path data/train_data_qa_qg_mt5.pt \ --valid_file_path data/valid_data_qa_qg_mt5.pt \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 16 \ --gradient_accumulation_steps 2 \ --learning_rate 1e-4 \ --num_train_epochs 2 \ --seed 42 \ --do_train \ --do_eval \ --logging_steps 100 \ --prediction_loss_only True it says KeyError: 'source_ids' I am sure the dataset has “source_ids” field. train_dataset=torch.load(r"train_data_qa_qg_mt5.pt") train_dataset > Dataset({ features: ['attention_mask', 'source_ids', 'source_text', 'target_ids', 'target_text', 'task'], num_rows: 3449 }) What might cause this? The versions of the libraries are: transformers == 4.4.2 datasets == 1.5.0 Thank you for the reply in advance.
yes I was using Trainer. But I solved the problem. I was loading dataset via Datasets library, when I replaced it with nlp.load_dataset, it worked seamlessly. But thank you for the response, I did not pass --remove_unused_columns False
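For reference, the flag mentioned above corresponds to a TrainingArguments field; a hedged sketch of keeping custom columns such as source_ids so the Trainer does not drop them (output directory is a placeholder):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./mt5-small-multi",
    remove_unused_columns=False,  # keep columns like "source_ids" that a custom collator renames later
)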
1
huggingface
🤗Transformers
[Pytext] Scalable Model Deployment
https://discuss.huggingface.co/t/pytext-scalable-model-deployment/1378
Can we use Huggingface library with Pytext 6? Any guides or references if possible? I seek to deploy models like huggingface-conv-ai or RAG or DialoGPT in scalable way. Is there any other good alternative? Thanks
Hi, I actually created a deployment platform with a few friends of mine as a small project - https://backprop.co. Is this something that could potentially help you out?
0
huggingface
🤗Transformers
Predict beam size on Seq2SeqTrainer
https://discuss.huggingface.co/t/predict-beam-size-on-seq2seqtrainer/8393
Hi, Is there any way to select the beam size for the evaluation step of Seq2SeqTrainer?
You can set it in trainer.evaluate(num_beams=...) or set the default you like when instantiating your model: model = AutoModelForXXX.from_pretrained(checkpoint, num_beams=...)
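A short sketch of both options (the model class, checkpoint and values are illustrative, and trainer is assumed to be a Seq2SeqTrainer):

from transformers import AutoModelForSeq2SeqLM

# 1) at evaluation time, via the Seq2SeqTrainer generation arguments
metrics = trainer.evaluate(num_beams=4, max_length=128)

# 2) as a default baked into the model config when loading it
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", num_beams=4)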
0
huggingface
🤗Transformers
Translation - MBART, translation with identical source and target language, for text normalization
https://discuss.huggingface.co/t/translation-mbart-translation-with-identical-source-and-target-language-for-text-normalization/8342
Hi, This is rather a general question about translation and I am aware that I don’t follow exactly your guidelines, so I am sorry for that. (We could run the examples mentioned in your readme, great tool!) We try to conceive normalization for Dutch, as a ‘translation’ task. So, is it for instance possible to use source + target language, defined as the same language, for instance --source_lang nl_XX \ --target_lang nl_XX \ {“translation”: {“nl_XX”: “liefst geen energie vandaag . waar is m’n oplaadstation ?”, “nl_XX”: “liefst geen energie vandaag . waar is mijn oplaadstation ?”}} or --source_lang source\ --target_lang target \ {“translation”: {“source”: “liefst geen energie vandaag . waar is m’n oplaadstation ?”, “target”: “liefst geen energie vandaag . waar is mijn oplaadstation ?”}} Environment info github.com huggingface/transformers 2 master/examples/pytorch/translation 🤗 Transformers: State-of-the-art Natural Language Processing for Pytorch, TensorFlow, and JAX. –model_name_or_path facebook/mbart-large-50-many-to-many-mmt Thanks for your answer!
You should change the data processing in the script, as source_lang and target_lang are also used to set the languages on multilingual tokenizers (if you’re using mBART, for instance, this approach wouldn’t work).
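As an illustration only (not the script’s actual preprocessing): the mBART-50 tokenizer does accept the same language code on both sides, so custom processing for a normalization pair could look roughly like this. Note also that the first JSON example reuses the key nl_XX twice, so one of the two sentences would be lost, which is why distinct keys such as source/target plus adapted processing are needed.

from transformers import MBart50TokenizerFast

tok = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50-many-to-many-mmt", src_lang="nl_XX", tgt_lang="nl_XX"
)
model_inputs = tok("waar is m'n oplaadstation ?", return_tensors="pt")
with tok.as_target_tokenizer():
    model_inputs["labels"] = tok("waar is mijn oplaadstation ?", return_tensors="pt")["input_ids"]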
0
huggingface
🤗Transformers
Problem with push_to_hub
https://discuss.huggingface.co/t/problem-with-push-to-hub/8273
Hello everyone. I’m trying to upload my fine-tuned GPT-2 Model to Model Hub. When I try to use the uploading function push_to_hub I get the following error: AttributeError: 'GPT2Model' object has no attribute 'push_to_hub' In the documentation it says that I can push the model with this function. Can anybody help please?
You need to update your Transformers library to the latest version.
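Assuming the upgrade fixes it, the call would then look roughly like this (the repo name is a placeholder):

# pip install -U transformers
model.push_to_hub("my-username/my-finetuned-gpt2")      # the fine-tuned GPT-2 model
tokenizer.push_to_hub("my-username/my-finetuned-gpt2")  # the tokenizer can be pushed the same way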
0
huggingface
🤗Transformers
How to modify each decoding step in ProphetNet Transformer
https://discuss.huggingface.co/t/how-to-modify-each-decoding-step-in-prophetnet-transformer/8238
Hi everyone, I am working on a project which needs a modification at each decoding step of the ProphetNet model. At each decoding step, I want to concatenate an embedding representation with the decoder output representation before it passes to the softmax layer. I am not sure which script/code needs modification for this purpose. I am new to the transformers library. Can you please suggest or provide some references? It would be helpful. Thank you!
Hi @valhalla and @s4sarath, any suggestions for this? It need not be a ProphetNet model; it can be a T5 or BART model in general. Thank you!
0
huggingface
🤗Transformers
Is there a notebook or document for hyperparameter search?
https://discuss.huggingface.co/t/is-there-a-notebook-or-document-for-hyperparameter-search/8308
Trainer has a function named hyperparameter_search(); I wonder whether there is a notebook or document that describes how to use this function? It seems so hard for me to understand o(╥﹏╥)o Thank you~
Hey @yc1999, you can find an example of how the hyperparameter search in the Trainer works in this tutorial notebook (Google Colaboratory).
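In outline, the call looks something like the sketch below (the backend, trial count, checkpoint name, and the helper objects training_args, train_dataset, eval_dataset and compute_metrics are assumptions for illustration):

from transformers import AutoModelForSequenceClassification, Trainer

def model_init():
    # a fresh model is created for every trial
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

trainer = Trainer(
    model_init=model_init,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
)
best_run = trainer.hyperparameter_search(direction="maximize", backend="optuna", n_trials=10)
print(best_run.hyperparameters)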
0
huggingface
🤗Transformers
ValueError: too many values to unpack (expected 2) when using BertTokenizer
https://discuss.huggingface.co/t/valueerror-too-many-values-to-unpack-expected-2-when-using-berttokenizer/8301
Hi everyone, I get an error when using BertTokenizer . I do encoding = tokenizer([[prompt, prompt, prompt], [choice0, choice1, choice2]], return_tensors='tf', padding=True)) and get ValueError: too many values to unpack (expected 2) . When I do encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='tf', padding=True) it works. Any idea why? I want to fine-tune TFBertForMultipleChoice such that each question ( prompt ) has three choices and not two as in the documentation BERT — transformers 4.7.0 documentation 2 Below is the complete code import os import numpy as np import pandas as pd import tensorflow as tf from transformers import BertTokenizer, TFBertForMultipleChoice tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = TFBertForMultipleChoice.from_pretrained('bert-base-uncased') prompt = "Accept and check containers of mail from large volume mailers, couriers, and contractors." choice0 = "Time Management" choice1 = "Writing" choice2 = "Reading Comprehension" encoding = tokenizer([[prompt, prompt, prompt], [choice0, choice1, choice2]], return_tensors='tf', padding=True) inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} outputs = model(inputs) # batch size is 1 logits = outputs.logits Thanks! Ayala
Oh, the example has one pair of [] that is not necessary, will fix. It should be: encoding = tokenizer([prompt, prompt, prompt], [choice0, choice1, choice2], return_tensors='tf', padding=True)
0
huggingface
🤗Transformers
Accuracy of MLM model
https://discuss.huggingface.co/t/accuracy-of-mlm-model/6516
How do we calculate accuracy on the test dataset when we build an MLM model from scratch?
have you got the answers?
0
huggingface
🤗Transformers
Key Error ‘loss’ while fine tuning GPT-2 with the Trainer utility
https://discuss.huggingface.co/t/key-error-loss-while-fine-tuning-gpt-2-with-the-trainer-utility/2861
training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total # of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=16, # batch size for evaluation logging_dir='./logs', # directory for storing logs ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["test"], tokenizer=tokenizer ) Error Log: /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path, trial) 745 tr_loss += self.training_step(model, inputs) 746 else: –> 747 tr_loss += self.training_step(model, inputs) 748 self._total_flos += self.floating_point_ops(inputs) 749 /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in training_step(self, model, inputs) 1073 loss = self.compute_loss(model, inputs) 1074 else: -> 1075 loss = self.compute_loss(model, inputs) 1076 1077 if self.args.n_gpu > 1: /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs) 1103 self._past = outputs[self.args.past_index] 1104 # We don’t use .loss here since the model may return tuples instead of ModelOutput. -> 1105 return outputs[“loss”] if isinstance(outputs, dict) else outputs[0] 1106 1107 def is_local_process_zero(self) -> bool: /usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in getitem(self, k) 1356 if isinstance(k, str): 1357 inner_dict = {k: v for (k, v) in self.items()} -> 1358 return inner_dict[k] 1359 else: 1360 return self.to_tuple()[k] KeyError: ‘loss’
If you have this error, it’s probably because you are not passing any labels to your model. It’s hard to know for sure since you don’t explain how you built your dataset.
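One common fix, sketched here under the assumption that the dataset only contains tokenized text: let a language-modeling collator create the labels (for causal-LM fine-tuning, mlm=False copies input_ids into labels):

from transformers import DataCollatorForLanguageModeling, Trainer

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # adds "labels" for causal LM

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)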
0
huggingface
🤗Transformers
Model doesn’t load when using a venv
https://discuss.huggingface.co/t/model-doesnt-load-when-using-a-venv/8187
I noticed that when loading a pretrained model (BertForSequenceClassification in my case) while running a virtual environment that an error occurs: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error'))) Traceback (most recent call last): File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 696, in urlopen self._prepare_proxy(conn) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 964, in _prepare_proxy conn.connect() File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connection.py", line 359, in connect conn = self._connect_tls_proxy(hostname, conn) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connection.py", line 506, in _connect_tls_proxy ssl_context=ssl_context, File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket sock, context, tls_in_tls, server_hostname=server_hostname File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 423, in wrap_socket session=session File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 870, in _create self.do_handshake() File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 1139, in do_handshake self._sslobj.do_handshake() OSError: [Errno 0] Error During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 756, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/retry.py", line 574, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error'))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 505, in get_config_dict user_agent=user_agent, File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/file_utils.py", line 1337, in cached_path local_files_only=local_files_only, File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/file_utils.py", line 1499, in get_from_cache r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File 
"/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/adapters.py", line 510, in send raise ProxyError(e, request=request) requests.exceptions.ProxyError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error'))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1086, in from_pretrained **kwargs, File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 440, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 517, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 'bert-base-uncased'. Make sure that: - 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bert-base-uncased' is the correct path to a directory containing a config.json file However, not running the venv, the pretrained model loads fine. Here is an example: from transformers import BertForSequenceClassification model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
Any thoughts on this?
0
huggingface
🤗Transformers
How is T5 pretrained?
https://discuss.huggingface.co/t/how-is-t5-pretrained/8222
Hi all. I’m creating a pretrained T5 model with: T5ForConditionalGeneration.from_pretrained("t5-small") How is this model pretrained? It seems to me that the model weights I get here were trained at least on the GLUE dataset (and probably others). I’d like it to only be pretrained on C4. Are those weights around somewhere? How do I get a model pretrained that way? Thanks!
This is quite a general question. You should find everything you need in their paper: [1910.10683] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (arxiv.org).
0
huggingface
🤗Transformers
For BERT LMs … are the random tasks created on just the first sentence or the second as well?
https://discuss.huggingface.co/t/for-bert-lms-are-the-random-tasks-created-on-just-the-first-sentence-or-the-second-as-well/8240
Just wondering if the 15% masked tokens are selected randomly on the first sentence or over the entire sequence (first and 2nd sentence)? Thanks - wg
Looks like all input tokens except special tokens are candidates for masking. So yes, the second sentence will have tokens masked as well, but not the [SEP] between it and the first (nor the other special tokens). Why I think so: probability_matrix is just a tensor with the shape of the (full) inputs, filled with the 15% probability of being masked, except for the special tokens, which are set to a 0% probability of masking before being passed to torch.bernoulli here: github.com huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L379-L380 probability_matrix.masked_fill_(special_tokens_mask, value=0.0) masked_indices = torch.bernoulli(probability_matrix).bool()
0
huggingface
🤗Transformers
Has anyone here deployed a transformers model on Google Cloud using AI Platform?
https://discuss.huggingface.co/t/has-anyone-here-deployed-a-transformers-model-on-google-cloud-using-ai-platform/3208
Hi, I have a fine tuned distilgpt2 model that I want to deploy using GCP ai-platform. I’ve followed all the documentation for deploying a custom prediction routine on GCP but when creating the model I get the error: Create Version failed. Bad model detected with error: Model requires more memory than allowed. Please try to decrease the model size and re-deploy. Here is my setup.py file: from setuptools import setup setup( name="generator_package", version="0.2", include_package_data=True, scripts=["generator_class.py"], install_requires=['transformers==2.8.0'] ) I then create a model version using: gcloud beta ai-platform versions create v1 --model my_model \ --origin=gs://my_bucket/model/ \ --python-version=3.7 \ --runtime-version=2.3 \ --package-uris=gs://my_bucket/packages/gpt2-0.1.tar.gz,gs://cloud-ai-pytorch/torch-1.3.1+cpu-cp37-cp37m-linux_x86_64.whl \ --prediction-class=model_prediction.CustomModelPrediction I have tried every suggested route and cant get this to work and I’m still getting the above error. I’m using the smallest gpt2 model and am well within memory. Can anyone who have successfully deployed to GCP please give some insight here. Thank you
Hi @farazk86, Any updates about this? Did you manage to use AI platform to serve your model’s predictions?
0
huggingface
🤗Transformers
T5 for multiple choice
https://discuss.huggingface.co/t/t5-for-multiple-choice/7788
I wanted to use T5 for multiple choice task. Since there’s no class as T5ForMultipleChoice defined, I was wondering if there’s any specific reason for why this hasn’t been done.
Can anyone suggest how to use T5ForConditionalGeneration for multiple choice task ? I can define my own MLP in the same way as AutoModelForMultipleChoice but was wondering if there’s any specific formatting of inputs required to deal with Multiple Choice task within the library.
0
huggingface
🤗Transformers
Is there a way to backpropagate through multiple steps while using Trainer API
https://discuss.huggingface.co/t/is-there-a-way-to-backpropagate-through-multiple-steps-while-using-trainer-api/8215
I was wondering if there’s a way to accumulate the loss over multiple steps before each optimizer update while using the Trainer API? It’s easy to run out of CUDA memory and I would like to avoid that.
If you are talking about gradient accumulation, you can set it with gradient_accumulation_steps=xxx in your TrainingArguments.
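For example (the values are illustrative):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,  # gradients accumulate over 8 steps -> effective batch size 32 per device
)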
0
huggingface
🤗Transformers
Calling changed functions in transformers/training_args
https://discuss.huggingface.co/t/calling-changed-functions-in-transformers-training-args/8197
I am maintaining a program that used to do things like TrainingArguments.to_json_string(my_object) and TrainingArguments.to_sanitized_dict(my_object). But in the latest version of transformers/training_args (v4.8.2), these two functions do not take arguments anymore. Instead, they act on ‘self’, which is a TrainingArguments object: to_json_string(self) and to_sanitized_dict(self) -> Dict[str, Any]. I therefore wonder how to change my implementation accordingly to keep the same functionality working. Copying functions from transformers/training_args into my own class solves part of the issue, but works badly when the function depends on other things in transformers/training_args, and it doesn’t make a lot of sense to copy everything from the transformers/training_args file into my class. What is the best way of doing this? Thank you!
Those functions have not changed in the last 14 months, so quite a few versions. They have always been methods, which you should call directly on an instance of TrainingArguments.
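For example (output directory is a placeholder):

from transformers import TrainingArguments

args = TrainingArguments(output_dir="./out")
print(args.to_json_string())      # call on the instance...
print(args.to_sanitized_dict())   # ...rather than passing an object to the class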
0
huggingface
🤗Transformers
TrainingArguments changing the GPU by itself
https://discuss.huggingface.co/t/trainingarguments-changing-the-gpu-by-iteslf/8001
I have 4 GPUs available, of which I have selected one (index 2) using if torch.cuda.is_available(): torch.cuda.set_device(2) However, after I run training_args = TrainingArguments('mydirectory') torch.cuda.current_device() gives 0. Any idea why this is happening?
Yes, the training arguments set the GPU corresponding to their local_rank value (for distributed training), so you have to make sure to pass along local_rank=2 when you instantiate them. To execute a script on a given GPU, though, you would be better off setting the global env variable CUDA_VISIBLE_DEVICES.
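Either way looks roughly like this (the script name is a placeholder):

# shell form: only GPU 2 is visible to the process, which then sees it as device 0
#   CUDA_VISIBLE_DEVICES=2 python train.py

# or from Python, before torch/transformers initialize CUDA
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

from transformers import TrainingArguments
training_args = TrainingArguments("mydirectory")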
0
huggingface
🤗Transformers
Pegasus Questions
https://discuss.huggingface.co/t/pegasus-questions/838
Q: Max model input size varies between checkpoints; what is the maximum number of input tokens that each model can process? A: max_model_length = { "xsum": 512, "cnn_dailymail": 1024, "newsroom": 512, "wikihow": 512, "multi_news": 1024, "reddit_tifu": 512, "big_patent": 1024, "arxiv": 1024, "pubmed": 1024, "gigaword": 128, "aeslc": 512, "billsum": 1024, "large": 1024, } that constant is defined here: github.com sshleifer/transformers_fork/blob/f69cac3347641beaba5037b9a6fca1a46f423639/src/transformers/configuration_pegasus.py#L67-L67
So does that mean max_position_embeddings is reduced for fine-tuned models (gigaword, wikihow)? I.e., if max_position_embeddings for the pre-trained model is 1024, then all fine-tuned models should have the same value, right?
0
huggingface
🤗Transformers
What is the correct form of decoder_input_ids for LEDForConditionalGeneration?
https://discuss.huggingface.co/t/what-is-the-correct-form-of-decoder-input-ids-for-ledforconditionalgeneration/7947
I have been looking into the articles on the web, but unfortunately I cannot find the clear answer. I guess one of them is the correct decoder_input_ids (label should be decoder_input_ids[1:] ?): 1) <s>...</s><pad>...<pad> 2) </s><s>...</s><pad>...<pad> 3) <pad><pad>...</s><s>...</s> Thanks in advance. +) I am going to fine-tune this model for free form QA.
I guess decoder_input_ids should be </s><s>... (without </s>), given label as <s>...</s>, according to the code below used for generating decoder_input_ids: def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): """ Shift input ids one token to the right. """ shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() shifted_input_ids[:, 0] = decoder_start_token_id assert pad_token_id is not None, "config.pad_token_id has to be defined." # replace possible -100 values in labels by `pad_token_id` shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) return shifted_input_ids
0
huggingface
🤗Transformers
RoBERTa training low GPU utilization
https://discuss.huggingface.co/t/roberta-training-low-gpu-utilization/1184
I’m trying to train RoBERTa from scratch on proprietary dataset using the script from HF repo (https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py 1). When I run the training on machine with 32 cores and 8x V100, GPUs are not utilized in 100% all the time and it seems like there is a bottleneck on transfer between CPU and GPU. Even when I set number of workers in DataLoaders to 32, the performance does not change at all. My batch size is 8 (max I could fit into 16GB V100 on Google Cloud), all examples have 512 tokens. How improve GPU usage? Are there any additional parameters that need to be configured to utilize the GPU better? I’ve also captured a few seconds of watch on nvidia-smi output to give you a full picture: https://1drv.ms/v/s!AkfjsmHCRwTChtVDsCiowniO7rMLSA?e=zjNAkF 14 I’m using the following model parameters: { "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "type_vocab_size": 1, "vocab_size": 32768 }
Bump - anyone?
0
huggingface
🤗Transformers
What arguments need to be changed when using deepspeed in trainer?
https://discuss.huggingface.co/t/what-arguments-need-to-be-changed-when-using-deepeed-in-trainer/7890
I understand that we will need to change a regular call function like python run_trainer.py ... to deepspeed --hostfile <hostfile> run_trainer.py ... --deepspeed <deepspeed_config_file>. Besides the deepspeed argument, is there anything else I should change, for example, sharded_ddp, ddp_find_unused_parameters, skip_memory_metrics, etc.?
You should find everything that you need over here: DeepSpeed Integration — transformers 4.7.0 documentation (huggingface.co).
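As a rough sketch only (the config file name is a placeholder): besides launching with the deepspeed launcher, the main change is pointing the training arguments at a DeepSpeed config; the other TrainingArguments keep their usual meaning:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./out",
    deepspeed="ds_config.json",  # e.g. a ZeRO stage-2 config; gradient accumulation etc. work as usual
)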
0
huggingface
🤗Transformers
Trainer with load_best_model_at_end doesn’t work
https://discuss.huggingface.co/t/trainer-with-load-best-model-at-end-doesnt-work/7799
I was trying to save the checkpoint after each epoch by setting load_best_model_at_end=True with p3.16xlarge parallel training. However, the following error occurs: OSError Can’t load config for ‘bert_model/checkpoint-156’. Make sure that: ‘bert_model/checkpoint-156’ is a correct model identifier listed on ‘https://huggingface.co/models’ or ‘bert_model/checkpoint-156’ is the correct path to a directory containing a config.json file. The error occurs only if the output_dir folder is empty, but does not occur if there were checkpoints from the last training. Does anyone face the same issue or have an idea about this?
What is the version of Transformers you are using? Also what were the training arguments for this training?
0
huggingface
🤗Transformers
Trainer API to log both Training and Validation Metrics
https://discuss.huggingface.co/t/trainer-api-to-log-both-training-and-validation-metrics/7785
I am fine-tuning for a classification task - I am trying to replicate (and potentially replace) my native PyTorch training and evaluation loops with the Trainer API. I usually log metrics for both training and validation across each batch/epoch. Here is what I can achieve with Trainer API. The accuracy and F1 are of validation sets and I want to also see the same set of metrics at each step for training data as well. Can someone guide on how I can achieve this? Thanks.
This is not implemented in the Trainer. You can manually evaluate yourself at the end of training on any dataset you want (training set included).
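A sketch of that manual evaluation (the trainer and datasets are assumed to be already built):

trainer.train()
train_metrics = trainer.evaluate(eval_dataset=train_dataset, metric_key_prefix="train")
eval_metrics = trainer.evaluate(eval_dataset=eval_dataset, metric_key_prefix="eval")
print(train_metrics)
print(eval_metrics)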
0
huggingface
🤗Transformers
Further train bert with next sentence prediction head using tensorflow
https://discuss.huggingface.co/t/further-train-bert-with-next-sentence-prediction-head-using-tensorflow/3362
I’m trying to train TFBertForNextSentencePrediction on my own corpus, not from scratch, but rather taking the existing bert model with only a next sentence prediction head and further train it on a specific cuprous of text (pairs of sentences). Then I want to use the model I trained to be able to extract sentence embeddings from the last hidden state for other texts. Currently the problem I encounter is that after I train the keras model I am not able to extract the hidden states of the last layer before the next sentence prediction head. Below is the code. Here I only train it on a few sentences just to make sure the code works. Any help will be greatly appreciated. Thanks, Ayala import numpy as np import pandas as pd import tensorflow as tf from datetime import datetime from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing import sequence from tensorflow.keras.callbacks import ModelCheckpoint from transformers import BertTokenizer, PreTrainedTokenizer, BertConfig, TFBertForNextSentencePrediction from sklearn.metrics import confusion_matrix, accuracy_score, f1_score, precision_score, recall_score PRETRAINED_MODEL = 'bert-base-uncased' # set paths and file names time_stamp = str(datetime.now().year) + "_" + str(datetime.now().month) + "_" + str(datetime.now().day) + "_" + \ str(datetime.now().hour) + "_" + str(datetime.now().minute) model_name = "pretrained_nsp_model" model_dir_data = model_name + "_" + time_stamp model_fn = model_dir_data + ".h5" base_path = os.path.dirname(__file__) input_path = os.path.join(base_path, "input_data") output_path = os.path.join(base_path, "output_models") model_path = os.path.join(output_path, model_dir_data) if not os.path.exists(model_path): os.makedirs(model_path) # set model checkpoint checkpoint = ModelCheckpoint(os.path.join(model_path, model_fn), monitor="val_loss", verbose=1, save_best_only=True, save_weights_only=True, mode="min") # read data max_length = 512 def get_tokenizer(pretrained_model_name): tokenizer = BertTokenizer.from_pretrained(pretrained_model_name) return tokenizer def tokenize_nsp_data(A, B, max_length): data_inputs = tokenizer(A, B, add_special_tokens=True, max_length=max_length, truncation=True, pad_to_max_length=True, return_attention_mask=True, return_tensors="tf") return data_inputs def get_data_features(data_inputs, max_length): data_features = {} for key in data_inputs: data_features[key] = sequence.pad_sequences(data_inputs[key], maxlen=max_length, truncating="post", padding="post", value=0) return data_features def get_transformer_model(transformer_model_name): # get transformer model config = BertConfig(output_attentions=True) config.output_hidden_states = True config.return_dict = True transformer_model = TFBertForNextSentencePrediction.from_pretrained(transformer_model_name, config=config) return transformer_model def get_keras_model(transformer_model): # get keras model input_ids = tf.keras.layers.Input(shape=(max_length,), name='input_ids', dtype='int32') input_masks_ids = tf.keras.layers.Input(shape=(max_length,), name='attention_mask', dtype='int32') token_type_ids = tf.keras.layers.Input(shape=(max_length,), name='token_type_ids', dtype='int32') X = transformer_model({'input_ids': input_ids, 'attention_mask': input_masks_ids, 'token_type_ids': token_type_ids})[0] model = tf.keras.Model(inputs=[input_ids, input_masks_ids, token_type_ids], outputs=X) model.summary() model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), 
optimizer=tf.optimizers.Adam(learning_rate=0.00005), metrics=['accuracy']) return model def get_metrices(true_values, pred_values): cm = confusion_matrix(true_values, pred_values) acc_score = accuracy_score(true_values, pred_values) f1 = f1_score(true_values, pred_values, average="binary") precision = precision_score(true_values, pred_values, average="binary") recall = recall_score(true_values, pred_values, average="binary") metrices = {'confusion_matrix': cm, 'acc_score': acc_score, 'f1': f1, 'precision': precision, 'recall': recall } for k, v in metrices.items(): print(k, ':\n', v) return metrices # get tokenizer tokenizer = get_tokenizer(PRETRAINED_MODEL) # train prompt = ["Hello", "Hello", "Hello", "Hello"] next_sentence = ["How are you?", "Pizza", "How are you?", "Pizza"] train_labels = [0, 1, 0, 1] train_labels = to_categorical(train_labels) train_inputs = tokenize_nsp_data(prompt, next_sentence, max_length) train_data_features = get_data_features(train_inputs, max_length) # val prompt = ["Hello", "Hello", "Hello", "Hello"] next_sentence = ["How are you?", "Pizza", "How are you?", "Pizza"] val_labels = [0, 1, 0, 1] val_labels = to_categorical(val_labels) val_inputs = tokenize_nsp_data(prompt, next_sentence, max_length) val_data_features = get_data_features(val_inputs, max_length) # get transformer model transformer_model = get_transformer_model(PRETRAINED_MODEL) # get keras model model = get_keras_model(transformer_model) callback_list = [] early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4, min_delta=0.005, verbose=1) callback_list.append(early_stop) reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=2, epsilon=0.001) callback_list.append(reduce_lr) callback_list.append(checkpoint) history = model.fit([train_data_features['input_ids'], train_data_features['attention_mask'], train_data_features['token_type_ids']], np.array(train_labels), batch_size=2, epochs=3, validation_data=([val_data_features['input_ids'], val_data_features['attention_mask'], val_data_features['token_type_ids']], np.array(val_labels)), verbose=1, callbacks=callback_list) model.layers[3].save_pretrained(model_path) # need to save this and make sure i can get the hidden states ## predict # load model transformer_model = get_transformer_model(model_path) model = get_keras_model(transformer_model) model.summary() model.load_weights(os.path.join(model_path, model_fn)) # test prompt = ["Hello", "Hello"] next_sentence = ["How are you?", "Pizza"] test_labels = [0, 1] test_df = pd.DataFrame({'A': prompt, 'B': next_sentence, 'label': test_labels}) test_labels = to_categorical(val_labels) test_inputs = tokenize_nsp_data(prompt, next_sentence, max_length) test_data_features = get_data_features(test_inputs, max_length) # predict pred_test = model.predict([test_data_features['input_ids'], test_data_features['attention_mask'], test_data_features['token_type_ids']]) preds = tf.keras.activations.softmax(tf.convert_to_tensor(pred_test)).numpy() true_test = test_df['label'].to_list() pred_test = [1 if p[1] > 0.5 else 0 for p in preds] test_df['pred_val'] = pred_test metrices = get_metrices(true_test, pred_test) I am also attaching a picture from the debugging mode in which I try (with no success) to view the hidden state. The problem is I am not able to see and save the transform model I trained and view the embeddings of the last hidden state. I tried converting the KerasTensor to numpy array but without success. Screen Shot 2021-01-24 at 17.36.052138×1522 253 KB
Cross-posted on Stack Overflow: “Using tensorflow and TFBertForNextSentencePrediction to further train bert on a specific corpus” (python, tensorflow, keras, huggingface-transformers; asked by ayalaall on 24 Jan 2021).
0
huggingface
🤗Transformers
Cuda out of memory during evaluation but training is fine
https://discuss.huggingface.co/t/cuda-out-of-memory-during-evaluation-but-training-is-fine/1783
Hi, I am finetuning a BARTForConditionalGeneration model. I am using Trainer from the library to train so I do not use anything fancy. I have 2 gpus I can even fit batch size 8 or 16 during training but after first epoch, I always receive Cuda Out of memory error. I find it strange because my evaluation batch size is 1. Below is my code, which is very short actually. import torch import argparse import os import sys import numpy as np import torch.nn.functional as F sys.path.append('..') from transformers import T5ForConditionalGeneration, Trainer, TrainingArguments, BartForConditionalGeneration, FSMTForConditionalGeneration from data_reader import GetDataAsPython from sklearn.model_selection import train_test_split from prepare_data import create_data, create_dataset, get_test_results, extract_warning_types from transformers import T5Tokenizer, BartTokenizer, FSMTTokenizer from datetime import datetime parser = argparse.ArgumentParser() parser.add_argument('-e', '--epochs', type=int, default=100) parser.add_argument('-bs', '--batch-size', type=int, default=1) parser.add_argument('-lr', '--learning-rate', type=float, default=1e-4) parser.add_argument('-gcv', '--gradient-clip-val', type=float, default=0.0) parser.add_argument('-wd', '--weight-decay', type=float, default=0.01) parser.add_argument('-mn', '--model-name', type=str, choices=['t5-small', 't5-base', 't5-large', 'bart-base'], required=True) args = parser.parse_args() data = GetDataAsPython('../data_large2.json') data_eslint = GetDataAsPython('../data_eslint.json') data += data_eslint all_warning_types = extract_warning_types(data) all_warning_types = ['generator-star-spacing', 'no-array-constructor', 'no-extra-bind', 'no-debugger', 'no-extra-boolean-cast', 'no-extra-semi', 'no-useless-escape'] model_name = args.model_name if 't5' in model_name: tokenizer = T5Tokenizer.from_pretrained(model_name) elif 'bart' in model_name: tokenizer = BartTokenizer.from_pretrained('facebook/' + model_name) else: raise "Unrecognized model" tokenizer.add_tokens(['{', '}']) now = datetime.now() dt_string = now.strftime("%d-%m-%Y_%H-%M-%S") model_directory = 't5global' + '_' + dt_string model_directory = model_name + '_global_' + dt_string os.system('mkdir ' + model_directory) with open(model_directory + '/commandline_args.txt', 'w') as f: f.write('\n'.join(sys.argv[1:])) tokenizer.save_pretrained(model_directory) train_inputs, train_labels, val_inputs, val_labels, test_inputs, test_labels = create_data(data, all_warning_types, include_warning=True, model_name=model_name) train_dataset = create_dataset(train_inputs, train_labels, tokenizer, pad_truncate=True) val_dataset = create_dataset(val_inputs, val_labels, tokenizer, pad_truncate=True) test_dataset = create_dataset(test_inputs, test_labels, tokenizer, pad_truncate=True) training_args = TrainingArguments( output_dir=model_directory, num_train_epochs=args.epochs, per_device_train_batch_size=args.batch_size, per_device_eval_batch_size=1, warmup_steps=500, weight_decay=args.weight_decay, logging_dir=model_directory, logging_steps=100, do_eval=True, evaluation_strategy='epoch', learning_rate=args.learning_rate, load_best_model_at_end=True, metric_for_best_model='eval_loss', greater_is_better=False, ) if 't5' in model_name: model = T5ForConditionalGeneration.from_pretrained(model_name, return_dict=False) elif 'bart' in model_name: model = BartForConditionalGeneration.from_pretrained('facebook/' + model_name) model.resize_token_embeddings(len(tokenizer)) trainer = Trainer( model=model, args=training_args, 
train_dataset=train_dataset, eval_dataset=val_dataset, optimizers=[torch.optim.Adam(params=model.parameters(), lr=args.learning_rate), None], tokenizer=tokenizer, ) trainer.train() trainer.save_model() output = get_test_results(model, tokenizer, test_inputs, test_labels, False) print(output) output_file = open(model_name + 'allrules_results.txt', 'w+') output_file.write(output) and here is the stack trace {'loss': 7.759439697265625, 'learning_rate': 2e-05, 'epoch': 0.28328611898017} {‘loss’: 1.2010345458984375, ‘learning_rate’: 4e-05, ‘epoch’: 0.56657223796034} {‘loss’: 0.3362786865234375, ‘learning_rate’: 6e-05, ‘epoch’: 0.8498583569405099} 3%|█████▎ | 353/10590 [02:50<1:22:07, 2.08it/sTraceback (most recent call last):██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉ | 40/41 [00:09<00:00, 3.72it/s] File “transformers_global.py”, line 87, in trainer.train() File “/home/berkay/model/lib/python3.8/site-packages/transformers/trainer.py”, line 792, in train self._maybe_log_save_evalute(tr_loss, model, trial, epoch) File “/home/berkay/model/lib/python3.8/site-packages/transformers/trainer.py”, line 843, in _maybe_log_save_evalute metrics = self.evaluate() File “/home/berkay/model/lib/python3.8/site-packages/transformers/trainer.py”, line 1251, in evaluate output = self.prediction_loop(eval_dataloader, description=“Evaluation”) File “/home/berkay/model/lib/python3.8/site-packages/transformers/trainer.py”, line 1353, in prediction_loop preds_host = logits if preds_host is None else nested_concat(preds_host, logits, dim=0) File “/home/berkay/model/lib/python3.8/site-packages/transformers/trainer_pt_utils.py”, line 47, in nested_concat return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File “/home/berkay/model/lib/python3.8/site-packages/transformers/trainer_pt_utils.py”, line 47, in return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors)) File “/home/berkay/model/lib/python3.8/site-packages/transformers/trainer_pt_utils.py”, line 49, in nested_concat return torch.cat((tensors, new_tensors), dim=dim) RuntimeError: CUDA out of memory. Tried to allocate 2.63 GiB (GPU 0; 10.76 GiB total capacity; 4.74 GiB already allocated; 2.53 GiB free; 7.27 GiB reserved in total by PyTorch) 3%|█████▎
To avoid that, you need to add eval_accumulation_steps in your TrainingArguments. By default the Trainer accumulated all predictions on the host before sending them to the CPU (because it’s faster) but if you run OOM, fix that argument to a small value (for instance 20 or 10) to trigger the copy more frequently and free host memory.
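For example (the value 10 is just an illustration):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./out",
    per_device_eval_batch_size=1,
    eval_accumulation_steps=10,  # move accumulated predictions from GPU to CPU every 10 eval steps
)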
0
huggingface
🤗Transformers
IndexError: index out of bound, MLM+XLA
https://discuss.huggingface.co/t/indexerror-index-out-of-bound-mlm-xla/7619
This is an error with the MLM script (PyTorch) for attempting to pre-train BigBird on TPUs over XLA. The dataset in question is a custom dataset, and the model config and tokenizer has been initialized appropriately. This is a continuation of this unanswered 1 Forum post that faces the same error. Command used to run the script:- %%bash python xla_spawn.py --num_cores=8 ./run_mlm.py --output_dir="./results" \ --model_type="big_bird" \ --config_name="./config" \ --tokenizer_name="./tokenizer" \ --train_file="./dataset.txt" \ --validation_file="./val.txt" \ --line_by_line="True" \ --max_seq_length="16000" \ --weight_decay="0.01" \ --per_device_train_batch_size="1" \ --per_device_eval_batch_size="1" \ --learning_rate="3e-4" \ --tpu_num_cores='8' \ --warmup_steps="1000" \ --overwrite_output_dir \ --pad_to_max_length \ --num_train_epochs="5" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --do_train \ --do_eval \ --logging_steps="50" \ --evaluation_strategy="steps" \ --eval_accumulation_steps='10' \ --report_to="tensorboard" \ --logging_dir='./logs' \ --save_strategy="epoch" \ --load_best_model_at_end='True' \ --metric_for_best_model='validation' \ --preprocessing_num_workers='15' I am facing two errors to be precise, Exception in device=TPU:0: Default process group has not been initialized, please make sure to call init_process_group. Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1006, in main_process_first yield File "/content/run_mlm.py", line 393, in main desc="Running tokenizer on dataset line_by_line", File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in map for k, dataset in self.items() File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 489, in <dictcomp> for k, dataset in self.items() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1664, in map for rank in range(num_proc) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1664, in <listcomp> for rank in range(num_proc) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2664, in shard writer_batch_size=writer_batch_size, File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 186, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2254, in select return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2170, in _new_dataset_with_indices fingerprint=fingerprint, File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 297, in __init__ self._indices.column(0)[0].type File "pyarrow/table.pxi", line 162, in pyarrow.lib.ChunkedArray.__getitem__ File "pyarrow/array.pxi", line 549, in pyarrow.lib._normalize_index IndexError: index out of bounds During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn fn(gindex, *args) File 
"/content/run_mlm.py", line 529, in _mp_fn main() File "/content/run_mlm.py", line 393, in main desc="Running tokenizer on dataset line_by_line", File "/usr/lib/python3.7/contextlib.py", line 130, in __exit__ self.gen.throw(type, value, traceback) File "/usr/local/lib/python3.7/dist-packages/transformers/training_args.py", line 1011, in main_process_first torch.distributed.barrier() File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 2523, in barrier default_pg = _get_default_group() File "/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py", line 358, in _get_default_group raise RuntimeError("Default process group has not been initialized, " RuntimeError: Default process group has not been initialized, please make sure to call init_process_group. I haven’t modified the script to call the init_process_group yet, focusing on the earlier error of index out of bounds. Clearly, the problem is arising from my own dataset - which was working before however. Interestingly, we get it when its in the tokenizing stage. At some point, when constructing the arrow dataset its failing. I have no idea about Apache Arrow, so I can’t debug further Can anyone give me some guidance on where should I start to investigate the error and some possible leads as to the origin?
Any ideas anyone?
0
huggingface
🤗Transformers
How to print a few examples at the beginning of training when using Trainer?
https://discuss.huggingface.co/t/how-to-print-a-few-examples-at-the-beginning-of-training-when-using-trainer/7597
I would like to see a few examples to manually verify the input and output. An older version of transformers had functionality for doing that. I am wondering whether there is something similar for the Trainer?
This is not something related to the Trainer; you just have to print some elements of your dataset. The functionality is implemented in all examples, see for instance this one.
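A minimal version of what those example scripts do (the train_dataset and tokenizer objects are assumed to exist):

import random

# print a few random samples from the processed training set before training
for index in random.sample(range(len(train_dataset)), 3):
    sample = train_dataset[index]
    print(f"Sample {index}: {sample}")
    print(tokenizer.decode(sample["input_ids"]))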
0
huggingface
🤗Transformers
Using Trainer class with T5 - what is returned in EvalPrediction dict?
https://discuss.huggingface.co/t/using-trainer-class-with-t5-what-is-returned-in-evalprediction-dict/1041
Hi, I am trying to finetune T5 using the Trainer class. I understand that Trainer doesn’t work out-of-the-box for seq2seq tasks and saw @patrickvonplaten’s https://github.com/huggingface/transformers/pull/5840 9 which extends the Trainer class to work for Bert2Bert. Is my understanding correct that the Trainer class appropriately handles training for seq2seq models since the loss is calculated by the model itself, and that the only problem is when returning EvalPredictions for calculating and logging custom validation metrics? If so then I would really appreciate if someone can help me to understand what’s being returned in the EvalPrediction dict for T5, it seems like EvalPrediction.predictions is of size batch_size * max_output_len * model_size (31218), is this the generated prediction in embedding form? If so what is the best way to convert this to prediction ids? I tried naively calling model.lm_head() on it but that didn’t seem to be the correct approach. @valhalla perhaps you can weigh in, I also took a look at your notebook finetuning T5 with Pytorch Lightning but would really like to use the HF Trainer class. Thanks for all help.
Hi @melody-ju Not sure what you mean by "T5 doesn’t work out-of-the-box for seq2seq tasks ". T5 is a seq2seq model and it does work for seq2seq tasks. You can use Trainer for seq2seq tasks as it is. Patrick’s PR extends it so that generative metrics can be calculated (ROUGE, BLUE etc), it should be okay if you calculate them after training the training is finished. To use Trainer for T5, the dataset or collator (if you are using one) should at least return input_ids, attention_mask and labels (set pad tokens to -100 in labels). The rest will be handled by Trainer This notebook 33 uses Trainer for fine-tuning T5. Few things to note about that notebook, I wrote it before v3.0.0, few things have changed after that DatCollator is not a class anymore, so you won’t need to inherit from DataCollator when creating T2TDataCollator. Also collate_batch should be renamed to __call__. lm_lables is now deprecated, use labels instead. No need to manually add </s> anymore, the tokenizer now does that itself. Also you can use the prepare_seq2seq_batch method on toknizer which can take the source and target text and returns input_ids, attention_mask and labels. You can also use finetune.py script from here 11 to finetune T5 and other seq2seq models. It’s using PL, and there;s WIP version of Seq2SeqTrainer in this PR 5
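To make the collator part concrete, here is a rough sketch (not the exact notebook code) of a collator whose output the Trainer can consume directly; the source_text/target_text field names and the length limits are assumptions:

from transformers import T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")

class T2TDataCollator:
    # Each batch element is assumed to be a dict with "source_text" and "target_text".
    def __call__(self, batch):
        src = [example["source_text"] for example in batch]
        tgt = [example["target_text"] for example in batch]
        features = tokenizer.prepare_seq2seq_batch(
            src_texts=src,
            tgt_texts=tgt,
            max_length=512,
            max_target_length=64,
            padding="max_length",
            truncation=True,
            return_tensors="pt",
        )
        # Ignore pad tokens when computing the loss, as mentioned above.
        features["labels"][features["labels"] == tokenizer.pad_token_id] = -100
        return features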
0
huggingface
🤗Transformers
TypeError: forward() got an unexpected keyword argument ‘start_positions’
https://discuss.huggingface.co/t/typeerror-forward-got-an-unexpected-keyword-argument-start-positions/6641
Hello everyone. I already have post a question about fine-tuning bert-base-italian-cased on SQuAD-it dateset. Waiting for an answer I tried another solution, following the Question Answerinf tutorial on SQuAS 2.0 in the transformers docs on HuggingFace. My data are taken from SQuAD-it. I followed this way: import json from pathlib import Path def read_dataset(path): path = Path(path) with open(path, 'rb') as f: squad_dict = json.load(f) contexts = [] questions = [] answers = [] for group in squad_dict['data']: for passage in group['paragraphs']: context = passage['context'] for qa in passage['qas']: question = qa['question'] for answer in qa['answers']: contexts.append(context) questions.append(question) answers.append(answer) return contexts, questions, answers train_contexts, train_questions, train_answers = read_dataset('SQuAD_it-train.json') val_contexts = [] val_questions = [] val_answers = [] while len(val_answers) != 5831: value = train_contexts.pop() val_contexts.append(value) value = train_questions.pop() val_questions.append(value) value = train_answers.pop() val_answers.append(value) def add_end_idx(answers, contexts): for answer, context in zip(answers, contexts): gold_text = answer['text'] start_idx = answer['answer_start'] end_idx = start_idx + len(gold_text) # sometimes squad answers are off by a character or two – fix this # if context[start_idx:end_idx] == gold_text: # answer['answer_end'] = end_idx if context[start_idx-1:end_idx-1] == gold_text: answer['answer_start'] = start_idx - 1 answer['answer_end'] = end_idx - 1 # When the gold label is off by one character elif context[start_idx-2:end_idx-2] == gold_text: answer['answer_start'] = start_idx - 2 answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters elif context[start_idx-1:end_idx-2] == gold_text: answer['answer_start'] = start_idx - 1 answer['answer_end'] = end_idx - 2 elif context[start_idx-2:end_idx-1] == gold_text: answer['answer_start'] = start_idx - 2 answer['answer_end'] = end_idx - 1 elif context[start_idx-3:end_idx-3] == gold_text: answer['answer_start'] = start_idx - 3 answer['answer_end'] = end_idx - 3 elif context[start_idx-2:end_idx-3] == gold_text: answer['answer_start'] = start_idx - 2 answer['answer_end'] = end_idx - 3 elif context[start_idx-3:end_idx-2] == gold_text: answer['answer_start'] = start_idx - 3 answer['answer_end'] = end_idx - 2 else: answer['answer_end'] = end_idx if answer['answer_start'] < 0: answer['answer_start'] =+ 1 answer['answer_end'] =+ 1 add_end_idx(train_answers, train_contexts) add_end_idx(val_answers, val_contexts) from transformers import DistilBertTokenizerFast tokenizer = DistilBertTokenizerFast.from_pretrained('dbmdz/bert-base-italian-cased') train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True) val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True) from transformers import AutoModel, model_name = "dbmdz/bert-base-italian-cased" model = AutoModel.from_pretrained(model_name) def add_token_positions(encodings, answers): start_positions = [] end_positions = [] for i in range(len(answers)): start_positions.append(encodings.char_to_token(i, answers[i]['answer_start'])) end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1)) # if start position is None, the answer passage has been truncated if start_positions[-1] is None: start_positions[-1] = tokenizer.model_max_length if end_positions[-1] is None: end_positions[-1] = tokenizer.model_max_length 
encodings.update({'start_positions': start_positions, 'end_positions': end_positions}) add_token_positions(train_encodings, train_answers) add_token_positions(val_encodings, val_answers) Then I created the datasets: import torch class SquadDataset(torch.utils.data.Dataset): def __init__(self, encodings): self.encodings = encodings def __getitem__(self, idx): return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} def __len__(self): return len(self.encodings.input_ids) train_dataset = SquadDataset(train_encodings) val_dataset = SquadDataset(val_encodings) And finally I tried to train: from transformers import TrainingArguments, Trainer training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, label_names = ["start_positions", "end_positions"] ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train() But it raises me this error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-171-8794092ae722> in <module>() 20 ) 21 ---> 22 trainer.train() 3 frames /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), TypeError: forward() got an unexpected keyword argument 'start_positions' I’ve seen this has already an issue but in none topic I’ve found a solution. Thank you in advance.
Like in the other subject, we need to know how you created your model. It looks like that model is not happy with start_positions so it’s very likely not a question-answering model.
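For illustration, the question-answering head comes from the dedicated auto class rather than the bare AutoModel used in the snippet above (a minimal sketch keeping the checkpoint from the question):

from transformers import AutoModelForQuestionAnswering

model_name = "dbmdz/bert-base-italian-cased"
# The *ForQuestionAnswering class adds the span-prediction head whose forward()
# accepts start_positions and end_positions; the bare AutoModel does not.
model = AutoModelForQuestionAnswering.from_pretrained(model_name)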
0
huggingface
🤗Transformers
“I am gay.” sentence is classified as NEGATIVE with score 0.99
https://discuss.huggingface.co/t/i-am-gay-sentence-is-classified-as-negative-with-score-0-99/7469
This code gives the following output: from transformers import pipeline classifier = pipeline(“sentiment-analysis”) #classifier(“I’ve been waiting for a HuggingFace course my whole life.”) classifier(“I am gay.”) output: [{‘label’: ‘NEGATIVE’, ‘score’: 0.9939725399017334}] “I am gay.” sentence is classified as NEGATIVE with score of 0.99. Should the dataset be modified or processed again?
That’s part of the warnings about pretrained models included here 5. Every fine-tuned versions gets the bias of the original pretrained model.
0
huggingface
🤗Transformers
ValueError: Mixed precision training with AMP or APEX (`–fp16`) and FP16 evaluation can only be used on CUDA devices
https://discuss.huggingface.co/t/valueerror-mixed-precision-training-with-amp-or-apex-fp16-and-fp16-evaluation-can-only-be-used-on-cuda-devices/6910
I am trying to tune Wav2Vec2 Model with a dataset on my local device using my CPU (I don’t have a GPU or Google Colab pro), I am using this 4 as my reference. When I try to execute from transformers import TrainingArguments training_args = TrainingArguments( # output_dir="/content/gdrive/MyDrive/wav2vec2-base-timit-demo", output_dir="./wav2vec2-medical", group_by_length=True, per_device_train_batch_size=32, evaluation_strategy="steps", num_train_epochs=30, fp16=True, save_steps=500, eval_steps=500, logging_steps=500, learning_rate=1e-4, weight_decay=0.005, warmup_steps=1000, save_total_limit=2, ) I am getting following error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-26-f9014a6221db> in <module> 1 from transformers import TrainingArguments 2 ----> 3 training_args = TrainingArguments( 4 # output_dir="/content/gdrive/MyDrive/wav2vec2-base-timit-demo", 5 output_dir="./wav2vec2-medical", ~/Library/Python/3.8/lib/python/site-packages/transformers/training_args.py in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, lr_scheduler_type, warmup_ratio, warmup_steps, logging_dir, logging_strategy, logging_first_step, logging_steps, save_strategy, save_steps, save_total_limit, no_cuda, seed, fp16, fp16_opt_level, fp16_backend, fp16_full_eval, local_rank, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, ignore_data_skip, sharded_ddp, deepspeed, label_smoothing_factor, adafactor, group_by_length, length_column_name, report_to, ddp_find_unused_parameters, dataloader_pin_memory, skip_memory_metrics, use_legacy_prediction_loop, push_to_hub, resume_from_checkpoint, mp_parameters) ~/Library/Python/3.8/lib/python/site-packages/transformers/training_args.py in __post_init__(self) 609 610 if is_torch_available() and self.device.type != "cuda" and (self.fp16 or self.fp16_full_eval): --> 611 raise ValueError( 612 "Mixed precision training with AMP or APEX (`--fp16`) and FP16 evaluation can only be used on CUDA devices." 613 ) ValueError: Mixed precision training with AMP or APEX (`--fp16`) and FP16 evaluation can only be used on CUDA devices. I understand that the error is because I am not using a GPU, as mixed precision can not be carried out without a GPU. I want to run it on my CPU, how can I resolve the error
You should remove fp16=True then.
0
huggingface
🤗Transformers
How to train the Translation Language Modeling (TLM) with transformers/examples/language-modeling/run_mlm.py?
https://discuss.huggingface.co/t/how-to-train-the-translation-language-modeling-tlm-with-transformers-examples-language-modeling-run-mlm-py/1887
Hello, I have a question to ask for your help. I want to train the Translation Language Modeling (TLM) in XLM (Paper: Cross-lingual Language Model Pretraining). The translation language modeling (TLM) is very similar to the Masked Language Modeling (MLM), which only shows the difference in the form of input data. If I want to use the run_mlm.py file to achieve the effect of training the translation language modeling (TLM), can I just modify the composition of training data without modifying the source code of the transformers/examples/language-modeling/run_mlm.py file? Is this feasible? For example, for the masked language modeling (MLM), one row of my training data is a language, as shown below: ( Row 1 ) polonium 's isotopes tend to decay with alpha or beta decay ( en ) . ( Row 2 ) 231 and penetrated the armour of the Panzer IV behind it ( en ) . ( Row 3 ) die Isotope von Polonium neigen dazu , mit dem Alpha- oder Beta-Zerfall zu zerfallen ( de ) . ( Row 4 ) 231 und durchbrach die Rüstung des Panzers IV hinter ihm ( de ) . … For the translation language modeling (TLM), my training data is a combination of two parallel corpora (It is to splice the above data in pairs. The separator is [/s] [/s].), as shown below: ( Row 1 ) polonium 's isotopes tend to decay with alpha or beta decay ( en ) . [/s] [/s] die Isotope von Polonium neigen dazu , mit dem Alpha- oder Beta-Zerfall zu zerfallen ( de ) . ( Row 2 ) 231 and penetrated the armour of the Panzer IV behind it ( en ) . [/s] [/s] 231 und durchbrach die Rüstung des Panzers IV hinter ihm ( de ) . … If I only modify the training data into a combination of two parallel corpora before executing the transformers/examples/language-modeling/run_mlm.py file, can I achieve the effect of training the translation language modeling (TLM)? Looking forward to your help, thank you very much!
I have the same confusion. Could you tell me if you solved this issue?
0
huggingface
🤗Transformers
Running BigBird on TPUs
https://discuss.huggingface.co/t/running-bigbird-on-tpus/7289
This is in continuation of my previous issue filed in this forum (here 2). I wanted to try and run the run_mlm_flax script with a Colab TPU v2, but I am facing an error:- Module Name: <module 'run_mlm_flax' from '/content/run_mlm_flax.py'> WARNING:root:TPU has started up successfully with version pytorch-1.9 Traceback (most recent call last): File "xla_spawn.py", line 87, in <module> main() File "xla_spawn.py", line 83, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) AttributeError: module 'run_mlm_flax' has no attribute '_mp_fn' To my understanding, the script for spawning processes in each TPU core required a Huggingface Trainer object to work with. However, we apparently do not have a Trainer script for MLM/CLM for Bigbird. Compounding the difficulties is the whole Flax-noTrainer-TPU combination which I do not think BigBird has been tested out fully. Is there any possible approach to run this on TPUs?
Hey, I think you are mixing Flax & PyTorch scripts - otherwise you would not get a PyTorch error from a Flax script. FlaxBigBird works perfectly on a cloud TPU. You can also refer to this script 4. Also, Trainer is currently only for PyTorch & TensorFlow.
0
huggingface
🤗Transformers
[Question] Wav2vec2 word times
https://discuss.huggingface.co/t/question-wav2vec2-word-times/5039
Hello, did someone already try to get the timestamps of the words while decoding the audio with wav2vec2 model? Thank you!
I have created some hacky code to do so and have posted it in this huggingface github issue 14. You can find it here: Getting time offsets of beginning and end of each word in Wav2Vec2 · Issue #11307 · huggingface/transformers · GitHub 56
0
huggingface
🤗Transformers
BUG Confirmation: BigBirdLM not able to use Flax
https://discuss.huggingface.co/t/bug-confirmation-bigbirdlm-not-able-to-use-flax/6959
On Google Colab, trying to use FlaxBigBirdForMaskedLM shows that Flax is not installed. from transformers import BigBirdConfig config = BigBirdConfig( vocab_size=40000, hidden_size = 768, max_position_embeddings=16000, num_attention_heads=4, #6 num_hidden_layers=4, #6 ) from transformers import FlaxBigBirdForMaskedLM model = FlaxBigBirdForMaskedLM(config=config) It may be due to some other issue, But I am consistenly getting this error across all runtimes (target runtime is TPU). ImportError: FlaxBigBirdForMaskedLM requires the FLAX library but it was not found in your environment. Checkout the instructions on the installation page: https://github.com/google/flax and follow the ones that match your environment. Flax is indeed installed, via !pip install -q transformers[flax]. Does this seem like a genuine bug? The problem is that the backend Flax is apparently not accessible by the method, while I can easily import flax and other utilities.
cc @vasudevgupta, who I believe is the one working on BigBird and Flax
0
huggingface
🤗Transformers
How to save RoBERTA sequence classifier model
https://discuss.huggingface.co/t/how-to-save-roberta-sequence-classifier-model/6610
After finishing the training phase, I want to save the RoBERTa classifier model for later evaluation purpose. I am getting this error. AttributeError Traceback (most recent call last) in 1 out_model = “src_classifier_final_ICWSM_10e” ----> 2 src_classifier.save_model(f’models/{out_model}_final’) ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in getattr(self, name) 945 if name in modules: 946 return modules[name] → 947 raise AttributeError("’{}’ object has no attribute ‘{}’".format( 948 type(self).name, name)) 949 AttributeError: ‘RobertaForSequenceClassification’ object has no attribute ‘save_model’
I’m not sure where you got that save_model method, but Transformers models don’t have it. They use save_pretrained.
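For example (a minimal sketch; the output directory is a placeholder):

output_dir = f"models/{out_model}_final"      # out_model as defined in the question
src_classifier.save_pretrained(output_dir)    # writes config.json and pytorch_model.bin

# Later, for evaluation:
from transformers import RobertaForSequenceClassification
src_classifier = RobertaForSequenceClassification.from_pretrained(output_dir)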
0
huggingface
🤗Transformers
Question about greedy_search
https://discuss.huggingface.co/t/question-about-greedy-search/5749
Hello. I am trying to greedy_search with T5ModelForConditionalGeneration. However, it seems I have to provide input_ids as both the input_ids parameter and as part of model_kwargs parameter of the greedy_search. The input_ids argument of greedy_search acts as the initial decoded state, while input_ids that is supposed to appear in model_kwargs is passed to self (T5) for inference. I do not see a way to pass two input_ids argument to the greedy_search function. Right now the function complains that I don’t pass input_ids to its internal T5. How would I go about using the greedy_search function fot T5?
Solved by passing encoder_outputs to greedy_search.
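For anyone landing here, a rough sketch of what that can look like for T5. This is an assumption-laden example rather than verified reference code: it relies on greedy_search forwarding extra keyword arguments such as encoder_outputs and attention_mask to the model.

import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer(["translate English to German: Hello world"], return_tensors="pt")

# Run the encoder once, then hand its outputs to greedy_search as model kwargs,
# so the input_ids argument only needs to hold the decoder start token.
encoder_outputs = model.get_encoder()(**inputs)
decoder_input_ids = torch.full(
    (inputs["input_ids"].shape[0], 1),
    model.config.decoder_start_token_id,
    dtype=torch.long,
)
generated = model.greedy_search(
    decoder_input_ids,
    max_length=32,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    encoder_outputs=encoder_outputs,
    attention_mask=inputs["attention_mask"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))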
0
huggingface
🤗Transformers
Tutorials for using Colab TPUs with Huggingface Transformers?
https://discuss.huggingface.co/t/tutorials-for-using-colab-tpus-with-huggingface-transformers/1970
I looking for an easy-to-follow tutorial for using Huggingface Transformer models (e.g. BERT) in PyTorch on Google Colab with TPUs. I found guides about XLA, but they are largely centered around TensorFlow. Any help would be appreciated.
There are a few contributed notebooks which might help you here: https://github.com/huggingface/transformers/tree/master/notebooks 623 For instance this one by @valhalla is about training T5 on TPU in PyTorch: https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil 560
0
huggingface
🤗Transformers
Wav2vec2 not converging when finetuning
https://discuss.huggingface.co/t/wav2vec2-not-converging-when-finetuning/6773
Hi @patrickvonplaten, Thanks for offering to help me! For anyone else reading, this is a continuation of the discussion on huggingface/transformers issue 12137. In short, despite following the official wav2vec2 English ASR guide 2, my model cannot converge on my dataset of single English word, 1 second long audio clips. When trying out the guide’s Colab Notebook on the TIMIT dataset, however, the model converges just fine which is perplexing to me. Here is my training notebook 9.
You probably need to train a bit longer. Did you train the full 30 epochs, or did you stop at 2? The wav2vec2 embeddings only learn representations of speech; the model does not know how to output characters yet. The fine-tuning stage learns to use the embeddings to output characters. The usual fine-tuning behavior looks something like: Beginning: outputs random chars. Early: outputs nothing (empty strings) - this looks like where you are. After a while: starts to spit out more relevant chars.
0
huggingface
🤗Transformers
How to set up DistilBertModel to use a bach_size?
https://discuss.huggingface.co/t/how-to-set-up-distilbertmodel-to-use-a-bach-size/6715
Goal: I want to get the [CLS] values, but I am getting an error when I call DistilBertModel. My code: import transformers as ppb m, t, p = (ppb.DistilBertModel, ppb.DistilBertTokenizerFast, 'distilbert-base-uncased') tokenizer = t.from_pretrained(pretrained_weights, cache_dir=<path>) model = m.from_pretrained(pretrained_weights, from_tf=True, cache_dir=<path>) # [beginning of EDIT] def my_encode(tokenizer, texts, max_length=MAX_LENGTH): inputs = tokenizer.batch_encode_plus(texts, max_length=max_length, padding='longest', truncation=True, return_attention_mask=True, return_token_type_ids=False, return_tensors="pt" ) return inputs tokenizer_output = my_encode(tokenizer, pandas_df['raw_text'].tolist()) # [end of EDIT] I am getting error when I call the model with ‘tokenizer_output’, which is a ‘transformers.tokenization_utils_base.BatchEncoding’: result = model(**tokenizer_output) The error is: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <timed exec> in <module> ~/miniconda3/envs/x/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), ~/miniconda3/envs/x/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 485 output_attentions=output_attentions, 486 output_hidden_states=output_hidden_states, --> 487 return_dict=return_dict, 488 ) 489 ~/miniconda3/envs/x/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), ~/miniconda3/envs/x/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, x, attn_mask, head_mask, output_attentions, output_hidden_states, return_dict) 305 306 layer_outputs = layer_module( --> 307 x=hidden_state, attn_mask=attn_mask, head_mask=head_mask[i], output_attentions=output_attentions 308 ) 309 hidden_state = layer_outputs[-1] ~/miniconda3/envs/x/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), ~/miniconda3/envs/x/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, x, attn_mask, head_mask, output_attentions) 262 263 # Feed Forward Network --> 264 ffn_output = self.ffn(sa_output) # (bs, seq_length, dim) 265 ffn_output = self.output_layer_norm(ffn_output + sa_output) # (bs, seq_length, dim) 266 ~/miniconda3/envs/x/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), ~/miniconda3/envs/x/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input) 213 214 def forward(self, input): --> 215 return apply_chunking_to_forward(self.ff_chunk, self.chunk_size_feed_forward, 
self.seq_len_dim, input) 216 217 def ff_chunk(self, input): ~/miniconda3/envs/x/lib/python3.7/site-packages/transformers/modeling_utils.py in apply_chunking_to_forward(forward_fn, chunk_size, chunk_dim, *input_tensors) 1815 return torch.cat(output_chunks, dim=chunk_dim) 1816 -> 1817 return forward_fn(*input_tensors) ~/miniconda3/envs/x/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py in ff_chunk(self, input) 217 def ff_chunk(self, input): 218 x = self.lin1(input) --> 219 x = self.activation(x) 220 x = self.lin2(x) 221 x = self.dropout(x) ~/miniconda3/envs/x/lib/python3.7/site-packages/torch/nn/functional.py in gelu(input) 1457 if has_torch_function_unary(input): 1458 return handle_torch_function(gelu, (input,), input) -> 1459 return torch._C._nn.gelu(input) 1460 1461 RuntimeError: [enforce fail at CPUAllocator.cpp:67] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 2826240000 bytes. Error code 12 (Cannot allocate memory) I believe if call model using batch would solve my problem. But how I can I use batch_size here? result = model(**tokenizer_output) Is there another way to get the [CLS] (word representation)? Thanks in advance!
You did not share how you are building your tokenizer_output, so it’s hard to help you.
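In case it helps, a rough sketch of batched inference for grabbing the [CLS] hidden state, reusing the tokenizer, model and MAX_LENGTH from the question (batch_size is an arbitrary choice; lower it if memory is still tight):

import torch

texts = pandas_df['raw_text'].tolist()
batch_size = 32
cls_embeddings = []

model.eval()
with torch.no_grad():
    for start in range(0, len(texts), batch_size):
        chunk = texts[start:start + batch_size]
        enc = tokenizer(chunk, max_length=MAX_LENGTH, padding='longest',
                        truncation=True, return_tensors='pt')
        out = model(**enc)
        # The first token of the last hidden state is the [CLS] representation.
        cls_embeddings.append(out.last_hidden_state[:, 0, :])

cls_embeddings = torch.cat(cls_embeddings, dim=0)  # shape: (num_texts, hidden_size)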
0
huggingface
🤗Transformers
Source attribution with CTRL
https://discuss.huggingface.co/t/source-attribution-with-ctrl/5624
I wanted to know how we can do source attribution with current implementation of CTRL as shown here 3.
@patrickvonplaten Can you suggest something?
0
huggingface
🤗Transformers
Continuous training on Fine-tuned Model
https://discuss.huggingface.co/t/continuous-training-on-fine-tuned-model/6687
Feature request: How can I continue training on a fine-tuned model? I have a fine-tuned model from OpenSLR data, and I want to continue training it as I gain more transcribed audio data over time. Can I treat the fine-tuned model as a checkpoint? Motivation: I am aiming to make a model for the Nepali language. I have a way to collect data over time and it is continuous, so I want to find a way to train the model continuously as I gain data over time.
In PyTorch, you can further train a model just by putting it in training mode (model.train()), and then train as usual. This will update the parameters of the model.
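A minimal sketch of that idea with the Trainer; the checkpoint path, dataset and collator names below are placeholders:

from transformers import Wav2Vec2ForCTC, Trainer, TrainingArguments

# Reload the previously fine-tuned checkpoint and keep training on the new data.
model = Wav2Vec2ForCTC.from_pretrained("path/to/finetuned-checkpoint")
model.train()

training_args = TrainingArguments(
    output_dir="wav2vec2-nepali-continued",
    num_train_epochs=3,
    per_device_train_batch_size=8,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=new_train_dataset,  # the newly collected, preprocessed data
    data_collator=data_collator,      # same collator used for the original fine-tuning
)
trainer.train()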
0
huggingface
🤗Transformers
How to Improve inference time of facebook/mbart many to many model?
https://discuss.huggingface.co/t/how-to-improve-inference-time-of-facebook-mbart-many-to-many-model/4068
When we try to run a translation service with Facebook mBART many-to-many on CPU, it takes 9 seconds to translate. How do we reduce the inference time further…
Hi @Vimal0703, one idea could be to try quantizing the model’s weights to a lower precision datatype. See e.g. step 2 in this guide: Dynamic Quantization — PyTorch Tutorials 1.7.1 documentation 16 This usually gives you a 2-3x reduction in latency and model size
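For example, following step 2 of that guide (a sketch; the actual speed-up on generation will vary):

import torch
from transformers import MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained(
    "facebook/mbart-large-50-many-to-many-mmt"
)
# Convert the weights of all Linear layers to int8 for CPU inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)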
0
huggingface
🤗Transformers
How do we quantize facebook / mbart-large-50-one-to-many-mmt to ONNX runtime
https://discuss.huggingface.co/t/how-do-we-quantize-facebook-mbart-large-50-one-to-many-mmt-to-onnx-runtime/4026
I want to know where to find the mbart-large-50-one-to-many-mmt model so that I can download it and convert it to an ONNX model. Please also suggest a way to convert this model to ONNX Runtime.
Have you seen these: https://huggingface.co/transformers/serialization.html 10 and the notebook https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb 16 ("How to export 🤗 Transformers Models to ONNX?")
0
huggingface
🤗Transformers
gpt-neo-2.7B isn’t working with pipleline
https://discuss.huggingface.co/t/gpt-neo-2-7b-isnt-working-with-pipleline/6496
I’m getting a basic error when I try to access GTP-NEO-2.7B via a pipeline. Working from my local machine, I go through the exact stages on the relevant model card page, namely: from transformers import pipeline generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B') However, when I do so I get the following error: OSError: Can't load config for 'EleutherAI/gpt-neo-2.7B'. Make sure that: - 'EleutherAI/gpt-neo-2.7B' is a correct model identifier listed on 'https://huggingface.co/models' - or 'EleutherAI/gpt-neo-2.7B' is the correct path to a directory containing a config.json file Can anyone advise what’s happening here? Thanks.
I think I had the same problem. I went to the Huggingface site and copied the identifier from there and pasted it into my notebook. Then it worked. Even though the spelling seemed correct, it fixed the problem. I’m not sure why.
0
huggingface
🤗Transformers
Cannot convert mbart from fairseq to huggingface using the script in the repo
https://discuss.huggingface.co/t/cannot-convert-mbart-from-fairseq-to-huggingface-using-the-script-in-the-repo/6559
I am using this converter script in the transformers repo 4 to convert the official fairseq bart to huggingface. The command looks like: python convert_mbart_original_checkpoint_to_pytorch.py mbart.cc25.v2/model.pt ./temp/ which returns an error of Unexpected key(s) in state_dict: "encoder.layers.0.layer_norms.0.weight", "encoder.layers.0.layer_norms.0.bias", "encoder.layers.0.layer_norms.1.weight", "encoder.layers.0.layer_norms.1.bias", "encoder.layers.0.self_attn.in_proj_weight", "encoder.layers.0.self_attn.in_proj_bias", "encoder.layers.1.layer_norms.0.weight", "encoder.layers.1.layer_norms.0.bias", "encoder.layers.1.layer_norms.1.weight", "encoder.layers.1.layer_norms.1.bias", "encoder.layers.1.self_attn.in_proj_weight", "encoder.layers.1.self_attn.in_proj_bias", "encoder.layers.2.layer_norms.0.weight", "encoder.layers.2.layer_norms.0.bias", "encoder.layers.2.layer_norms.1.weight", "encoder.layers.2.layer_norms.1.bias", "encoder.layers.2.self_attn.in_proj_weight", "encoder.layers.2.self_attn.in_proj_bias", "encoder.layers.3.layer_norms.0.weight", "encoder.layers.3.layer_norms.0.bias", "encoder.layers.3.layer_norms.1.weight", "encoder.layers.3.layer_norms.1.bias", "encoder.layers.3.self_attn.in_proj_weight", "encoder.layers.3.self_attn.in_proj_bias", "encoder.layers.4.layer_norms.0.weight", "encoder.layers.4.layer_norms.0.bias", "encoder.layers.4.layer_norms.1.weight", "encoder.layers.4.layer_norms.1.bias", "encoder.layers.4.self_attn.in_proj_weight", "encoder.layers.4.self_attn.in_proj_bias", "encoder.layers.5.layer_norms.0.weight", "encoder.layers.5.layer_norms.0.bias", "encoder.layers.5.layer_norms.1.weight", "encoder.layers.5.layer_norms.1.bias", "encoder.layers.5.self_attn.in_proj_weight", "encoder.layers.5.self_attn.in_proj_bias", "encoder.layers.6.layer_norms.0.weight", "encoder.layers.6.layer_norms.0.bias", "encoder.layers.6.layer_norms.1.weight", "encoder.layers.6.layer_norms.1.bias", "encoder.layers.6.self_attn.in_proj_weight", "encoder.layers.6.self_attn.in_proj_bias", "encoder.layers.7.layer_norms.0.weight", "encoder.layers.7.layer_norms.0.bias", "encoder.layers.7.layer_norms.1.weight", "encoder.layers.7.layer_norms.1.bias", "encoder.layers.7.self_attn.in_proj_weight", "encoder.layers.7.self_attn.in_proj_bias", "encoder.layers.8.layer_norms.0.weight", "encoder.layers.8.layer_norms.0.bias", "encoder.layers.8.layer_norms.1.weight", "encoder.layers.8.layer_norms.1.bias", "encoder.layers.8.self_attn.in_proj_weight", "encoder.layers.8.self_attn.in_proj_bias", "encoder.layers.9.layer_norms.0.weight", "encoder.layers.9.layer_norms.0.bias", "encoder.layers.9.layer_norms.1.weight", "encoder.layers.9.layer_norms.1.bias", "encoder.layers.9.self_attn.in_proj_weight", "encoder.layers.9.self_attn.in_proj_bias", "encoder.layers.10.layer_norms.0.weight", "encoder.layers.10.layer_norms.0.bias", "encoder.layers.10.layer_norms.1.weight", "encoder.layers.10.layer_norms.1.bias", "encoder.layers.10.self_attn.in_proj_weight", "encoder.layers.10.self_attn.in_proj_bias", "encoder.layers.11.layer_norms.0.weight", "encoder.layers.11.layer_norms.0.bias", "encoder.layers.11.layer_norms.1.weight", "encoder.layers.11.layer_norms.1.bias", "encoder.layers.11.self_attn.in_proj_weight", "encoder.layers.11.self_attn.in_proj_bias", "decoder.layers.0.self_attn.in_proj_weight", "decoder.layers.0.self_attn.in_proj_bias", "decoder.layers.0.encoder_attn.in_proj_weight", "decoder.layers.0.encoder_attn.in_proj_bias", "decoder.layers.1.self_attn.in_proj_weight", "decoder.layers.1.self_attn.in_proj_bias", 
"decoder.layers.1.encoder_attn.in_proj_weight", "decoder.layers.1.encoder_attn.in_proj_bias", "decoder.layers.2.self_attn.in_proj_weight", "decoder.layers.2.self_attn.in_proj_bias", "decoder.layers.2.encoder_attn.in_proj_weight", "decoder.layers.2.encoder_attn.in_proj_bias", "decoder.layers.3.self_attn.in_proj_weight", "decoder.layers.3.self_attn.in_proj_bias", "decoder.layers.3.encoder_attn.in_proj_weight", "decoder.layers.3.encoder_attn.in_proj_bias", "decoder.layers.4.self_attn.in_proj_weight", "decoder.layers.4.self_attn.in_proj_bias", "decoder.layers.4.encoder_attn.in_proj_weight", "decoder.layers.4.encoder_attn.in_proj_bias", "decoder.layers.5.self_attn.in_proj_weight", "decoder.layers.5.self_attn.in_proj_bias", "decoder.layers.5.encoder_attn.in_proj_weight", "decoder.layers.5.encoder_attn.in_proj_bias", "decoder.layers.6.self_attn.in_proj_weight", "decoder.layers.6.self_attn.in_proj_bias", "decoder.layers.6.encoder_attn.in_proj_weight", "decoder.layers.6.encoder_attn.in_proj_bias", "decoder.layers.7.self_attn.in_proj_weight", "decoder.layers.7.self_attn.in_proj_bias", "decoder.layers.7.encoder_attn.in_proj_weight", "decoder.layers.7.encoder_attn.in_proj_bias", "decoder.layers.8.self_attn.in_proj_weight", "decoder.layers.8.self_attn.in_proj_bias", "decoder.layers.8.encoder_attn.in_proj_weight", "decoder.layers.8.encoder_attn.in_proj_bias", "decoder.layers.9.self_attn.in_proj_weight", "decoder.layers.9.self_attn.in_proj_bias", "decoder.layers.9.encoder_attn.in_proj_weight", "decoder.layers.9.encoder_attn.in_proj_bias", "decoder.layers.10.self_attn.in_proj_weight", "decoder.layers.10.self_attn.in_proj_bias", "decoder.layers.10.encoder_attn.in_proj_weight", "decoder.layers.10.encoder_attn.in_proj_bias", "decoder.layers.11.self_attn.in_proj_weight", "decoder.layers.11.self_attn.in_proj_bias", "decoder.layers.11.encoder_attn.in_proj_weight", "decoder.layers.11.encoder_attn.in_proj_bias". Am I missing anything here? Thanks!
still needs help on this…
0
huggingface
🤗Transformers
`Trainer.predict` takes twice as long as progress bar shows
https://discuss.huggingface.co/t/trainer-predict-takes-twice-as-long-as-progress-bar-shows/6600
I am using Trainer.predict but have noticed that it's actually taking twice as long as displayed by the progress bar. print('Predicting on test samples...') t0 = time.time() predictions, label_ids, _ = trainer.predict(tokenized_datasets['test']) print(f'completed in {(time.time() - t0) / 60:.2f} mins.') print('Argmaxing...') t0 = time.time() predictions = predictions.argmax(axis=2) print(f'completed in {(time.time() - t0) / 60:.2f} mins.') (Screenshot of the timing output attached.) During the period shown by the progress bar, the GPU is used, but it's not used after the progress bar has completed. Different sizes of tokenized_datasets['test'] have been tried, and it appears that it takes twice as long regardless of the size. Is this normal, or expected?
It depends on the Trainer you are using, which you did not share. The progress bar only represents the evaluation loop (going through all the batches of your evaluation set and getting the predictions). After this is done, the Trainer computes metrics if you gave it a compute_metrics function, which could take some time, and there can also be some additional post-processing, depending on your task.
0
huggingface
🤗Transformers
Is there a list of MLM corruption strategies?
https://discuss.huggingface.co/t/is-there-a-list-of-mlm-corruption-strategies/6537
Ideally, looking for a list of such corruption strategies that include an ELI5 description, pictures, and sample implementation(s). Thanks -wg
Hi, Yes, the T5 paper 2 explored different corruption strategies. See table 3 on page 21.
0
huggingface
🤗Transformers
CUDA error: device-side assert triggered
https://discuss.huggingface.co/t/cuda-error-device-side-assert-triggered/1407
When I run on the GPU (Google Colab) I always get RuntimeError: CUDA error: device-side assert triggered with huggingface. When I switch to the CPU, it works fine. Any solution?
It's solved: the error went away once I corrected the number of label classes. Closing.
0
huggingface
🤗Transformers
LaBSE vs multilingual BERT, same layers?
https://discuss.huggingface.co/t/labse-vs-multilingual-bert-same-layers/6546
Hey all am I missing something here: Do the bert base multilingual uncased 1 and sentence-bert/LaBSE 3 have the same layers? When I print out both models it seems so. I thought they are different? Is it just the data that they have been trained on that differs? Thanks a lot
Oh I see now, they indeed use the same layers as indicated in the research paper 2. (page 4)
0
huggingface
🤗Transformers
Does transformers 3.5.1 support auto mixed precision training?
https://discuss.huggingface.co/t/does-transformers-3-5-1-support-auto-mixed-precision-training/6387
I’m using a GPTLMHead model in PyTorch. Is it possible to add autocast() to the forward function of GPTLMHead and change the training process following the Automatic Mixed Precision — PyTorch Tutorials 1.8.1+cu102 documentation 5?
Yes, it’s possible.
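For reference, a sketch of the standard AMP pattern from that tutorial wrapped around a Transformers LM head model; model, optimizer and train_dataloader are assumed to exist already, and autocast can equally be placed inside the model's forward as the question suggests:

import torch
from torch.cuda.amp import autocast, GradScaler

scaler = GradScaler()
model.cuda()
model.train()

for batch in train_dataloader:
    optimizer.zero_grad()
    # Run the forward pass (and loss computation) under autocast.
    with autocast():
        outputs = model(input_ids=batch["input_ids"].cuda(),
                        labels=batch["input_ids"].cuda())
        loss = outputs[0]
    # Scale the loss, backprop, then step/unscale via the GradScaler.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()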
0
huggingface
🤗Transformers
Regression is failing in fine tuning with BERT/GPT-2/Albert
https://discuss.huggingface.co/t/regression-is-failing-in-fine-tuning-with-bert-gpt-2-albert/6457
I have been trying to use BertModel, ALBERT and GPT-2 models for fine-tuning on my regression task, and I keep getting unwanted results. I will describe them below. I tried two setups: (1) I used the CLS token embeddings and fine-tuned my entire custom model, but it produced some random number repeated over and over in my output space; (2) I simply passed the CLS token embeddings to a feed-forward NN, and in this case it also produced some random number. What can be the solution to this problem? class Custom_GPT(tf.keras.Model): def __init__(self,embedding_dim): super(Custom_GPT,self).__init__() self.embedding_dim=embedding_dim self.dense=tf.keras.layers.Dense(1,input_shape=(embedding_dim,),activation=None,name='dense_layer_1') self.GPT_layer=GPT_model def call(self,input_ids): sequence=self.GPT_layer(input_ids)[0] cls=sequence[:,0,:] x=self.dense(cls) The model doesn’t seem to be learning anything here; it generates a random constant repeatedly.
Are you returning x at the end of call? I am not familiar with TensorFlow, but I assume you still have to return the final logits; otherwise call will implicitly return None.
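Concretely, a minimal sketch of the fix suggested above: the call method needs to return the tensor it computes.

def call(self, input_ids):
    sequence = self.GPT_layer(input_ids)[0]
    cls = sequence[:, 0, :]
    x = self.dense(cls)
    return x  # without this line, Keras receives None as the model output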
0
huggingface
🤗Transformers
Wav2Vec2 for Audio Emotion Classification
https://discuss.huggingface.co/t/wav2vec2-for-audio-emotion-classification/4312
We are having a thesis project on Podcast Trailer Generation - Hotspot Detection for Podcast Dataset at Spotify. The Spotify Podcast Dataset contains both transcript and audio data for many podcast episodes, and currently we are looking to use Wav2Vec2 embeddings as input to train an emotion classification model for the audio data. The audio data is currently only in English (with accompanied transcript). It would be much appreciated if you could help out with fine-tuning Wav2Vec2 on some standard emotion-annotated audio datasets (e.g. RAVDESS 10, SAVEE 11). We will then use the fine-tuned embeddings as input for emotion classification, after which we will have human evaluation on the classified results.
That sounds great. I’m also working with fine-tuning Wav2Vec2. I can help you out if you have any questions. @patrickvonplaten is also a great person to ask.
0
huggingface
🤗Transformers
TensorFlow trainer
https://discuss.huggingface.co/t/tensorflow-trainer/6383
Hi HuggingFace Team We are at the beginning of a new DL project. In this project, we work with transformers in TF2. Before we start this we’d love to know what are your plans about creating a training framework for TF2. In our search for answers, we came across a GitHub issue that mentioned that TFTrainer will be changed/removed: github.com/huggingface/transformers - "evaluation in TFTrainer does not run on GPU" (opened May 5, 2021).
We were hoping you could shed more light on your plans for the integration of the transformers library with TF2. More concretely - Do you intend to release a TF Trainer? Will it be using Keras? Any date expectations? Thanks,
Hi there! Yes, the TFTrainer will be deprecated and removed in v5; we will focus on better integrating with Keras (through the means of Keras callbacks if we need to add functionality). Check out the new classification example 7 for an example of where we are going.
0
huggingface
🤗Transformers
Error when using the forward() function of `LongformerLayer` class
https://discuss.huggingface.co/t/error-when-using-the-forward-function-of-longformerlayer-class/1531
Hello, Sorry if my question sounds a bit silly, but I just have a question: I am trying to use LongformerForMultipleChoice model for a multiple-choice question that has 4 options. When I do: my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, attention_mask=my_attention_mask,output_attention=False) , an this error is generated: File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 384, in _sliding_chunks_query_key_matmul batch_size, seq_len, num_heads, head_dim = query.size() ValueError: too many values to unpack (expected 4) Here, my_attention_mask is the same attention mask that I would specify under the regular LongformerForMultipleChoice command: # I am using the LongformerForMultipleChoice model, where each multiple choice question has 4 options. my_attention_mask = tensor([[[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]]]) # I can use the my_attention_mask in the regular command as below: longformer_output= my_Longformer_multiple_choice_model(input_ids=input_ids,....,attention_mask=my_attention_mask) why is this value error generated? What should I pass for the attention_mask parameter in the command my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, attention_mask,output_attention=False)? Thank you,
That seems to me an error related more to the data structures than to Longformer; maybe you should check the format of the input data first.
0
huggingface
🤗Transformers
Trainer.train() is stuck
https://discuss.huggingface.co/t/trainer-train-is-stuck/3702
Hi, I’m training roberta-base using HF Trainer, but it’s stuck at the starting itself. Here’s my code - train_dataset[0] {'input_ids': tensor([ 0, 100, 657, ..., 1, 1, 1]), 'attention_mask': tensor([1, 1, 1, ..., 0, 0, 0]), 'labels': tensor(0)} val_dataset[0] {'input_ids': tensor([ 0, 11094, 14, ..., 1, 1, 1]), 'attention_mask': tensor([1, 1, 1, ..., 0, 0, 0]), 'labels': tensor(0)} ## simple test model(train_dataset[:2]['input_ids'], attention_mask = train_dataset[:2]['attention_mask'], labels=train_dataset[:2]['labels']) SequenceClassifierOutput(loss=tensor(0.6995, grad_fn=<NllLossBackward>), logits=tensor([[ 0.0438, -0.1893], [ 0.0530, -0.1786]], grad_fn=<AddmmBackward>), hidden_states=None, attentions=None) train_args = transformers.TrainingArguments( output_dir='test_1', overwrite_output_dir=True, evaluation_strategy="epoch", per_device_train_batch_size=8, per_device_eval_batch_size=8, learning_rate=3e-5, weight_decay=0.01, num_train_epochs=2, load_best_model_at_end=True, ) trainer = transformers.Trainer( model=model, args=train_args, train_dataset=train_dataset, eval_dataset=val_dataset, tokenizer=tok, ) trainer.train() I saw memory consumption and it is stuck at - +-----------------------------------------------------------------------------+ | NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:62:00.0 Off | 0 | | N/A 49C P0 60W / 300W | 1756MiB / 32510MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 1 Tesla V100-SXM2... On | 00000000:8A:00.0 Off | 0 | | N/A 50C P0 61W / 300W | 1376MiB / 32510MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| +-----------------------------------------------------------------------------+ Plz let me know how to proceed further…
any progress since then?
0
huggingface
🤗Transformers
Pass a custom mask when using RoBERTa
https://discuss.huggingface.co/t/pass-a-custom-mask-when-using-roberta/3210
Suppose I have a sequence that consists of 2 sentences separated by </SEP> tokens like A </SEP> B. When performing forward pass with RoBERTa model, I want tokens in sentence A only attend to tokens in sentence A and vice versa for sentence B. The mask will be look like this: In summary, is there any way to explicitly pass a custom attention mask to the model? Thanks in advance.
The attention mask is normally created from the input_mask, so you cannot pass a custom attention mask like that directly. I might be wrong, though. For your purpose, create an input_mask with 1s in the first two rows and first two columns, and 1s in the last two rows and last two columns; set everything else to 0.
0
huggingface
🤗Transformers
Issues with building extensions in Deepspeed
https://discuss.huggingface.co/t/issues-with-building-extensions-in-deepspeed/6323
I get this error even after following the instructions on the Deepspeed installation page and the HF Trainer docs. Can anyone suggest how to fix this? I’ve followed @stas’s replies on GH issues, but this error keeps coming up. pybind11 is installed on my machine, so I'm not sure what this is indicating. (Screenshot of the error traceback attached.)
@prajjwal1 - something is off in your python env. The mantra should always be: let's see if others have asked this question already, so here try https://www.google.com/search?q=pybind11%2Fpybind11.h+no+such+file+or+directory 12 Please avoid posting images for tracebacks - instead copy-n-paste the output using code blocks - this is because it's impossible to copy-n-paste from the image to do the search for you. That said, if the issue continues after you have tried the solutions offered at the top matching pages, please post the details at Issues · microsoft/DeepSpeed · GitHub
0
huggingface
🤗Transformers
How to train your own corpus without labels
https://discuss.huggingface.co/t/how-to-train-your-own-corpus-without-labels/6369
Hi, I’d like to fine-tune BERT for my product catalog corpus, which contains a lot of out-of-vocabulary words like brand names. By fine-tuning, I mean transfer learning and not training from scratch. I have been following this [Fine-tuning a pretrained model — transformers 4.5.0.dev0 documentation 11] tutorial and see that it requires labels. As you can imagine, my use case is connected to information retrieval and search and does not contain any y_labels. All I want is unique vector embeddings out of my trained model. How should I approach this problem using the HF Trainer module?
Hey @awaiskaleem, if I understand correctly, what you're looking for is to fine-tune the language model on your corpus. This will generally produce mask-filling that more accurately captures the relations in your corpus, and for BERT you can check out the Masked language modeling section of this tutorial: Google Colaboratory 79. Once the language model is fine-tuned, you can save the weights and then load them using AutoModel.from_pretrained to generate your embeddings.
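A sketch of that last step; the save directory is a placeholder for wherever the fine-tuned language model was saved:

from transformers import AutoTokenizer, AutoModel

save_dir = "./bert-finetuned-catalog"  # hypothetical path of the fine-tuned LM
tokenizer = AutoTokenizer.from_pretrained(save_dir)
model = AutoModel.from_pretrained(save_dir)

inputs = tokenizer("some product title from the catalog", return_tensors="pt")
outputs = model(**inputs)
# One common choice of sentence embedding: the [CLS] vector; mean pooling over
# tokens is another option.
embedding = outputs.last_hidden_state[:, 0, :]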
0
huggingface
🤗Transformers
Ask for help with prediction results of Named Entity Recognition Task
https://discuss.huggingface.co/t/ask-for-help-with-prediction-results-of-named-entity-recognition-task/6168
Hi guys, After training the NER Task with using RoBERTa Architecture, I got the below result {‘eval_loss’: 0.003242955543100834, ‘eval_precision’: 0.9959672534053343, ‘eval_recall’: 0.9959672534053343, ‘eval_f1’: 0.9959672534053343, ‘eval_accuracy’: 0.9995624335836689} The result generally is quite high, as I expected. But here is my confusion, when I randomly input a set of sentences (out of the training set) to really know the model’s performance. My pseudo code def tokenize_and_align_labels_random(examples, tokenizer): tokenized_inputs = tokenizer(examples['tokens'], truncation=True, is_split_into_words=True) return tokenized_inputs def preprocess_datasets(tokenizer, **datasets) -> Dict[str, Dataset]: tokenize_ner = partial(tokenize_and_align_labels_random, tokenizer=tokenizer) return {k: ds.map(tokenize_ner) for k, ds in datasets.items()} address=Testing_Dataset[Testing_Dataset['address']==1]['text'].apply(clean_doc).tolist() da_datasets_random_Test = preprocess_datasets(tokenizer, test=Dataset.from_dict({'tokens':address})) results=da_trainer.predict(da_datasets_random_Test['test']) predictions=results.predictions predictions = np.argmax(predictions, axis=2) # Remove ignored index (special tokens) true_predictions = [ [label_list[p] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] I input the sentences with some words that don’t exist in the tokenizer vocabulary, and the model will handle that part for me by automatically generating their sub token. That means the ‘input_ids’ will generate more token ids for presenting these cases, the problem is their predicted tags will also be increasing (based on how many tokens was delivered to the model). For instance Input sentence: “Giao tôi lê_lai phường hai tân_bình hcm” Value after tokenizer: {‘input_ids’: [0, 64003, 64003, 17489, 6115, 64139, 64151, 64003, 6446, 64313, 1340, 74780, 2], ‘token_type_ids’: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘attention_mask’: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} Because tokenize of “lê_lai” is [‘lê@@’, ‘l@@’, ‘ai’]; of “tân_bình” is ['tân@@’, ‘bình’]; of “hcm” is [‘h@@’, ‘cm’] The result I got after all: [‘O’,‘O’,‘B-LOC’,‘I-LOC’,‘I-LOC’,‘I-LOC’,‘I-LOC’,‘I-LOC’,‘O’,‘I-LOC’,‘I-LOC’, ‘O’] In fact, their prediction should only have 7 tags for the input tokens, but now it was more than this. So do guys have any strategies for this (I got one that we can train the tokenizer with more tokens). I do appreciate your time and sharing.
Hello Iacle. I suspect your issue is around WordPiece tokenization, but I can't tell for sure with the info you posted. Take a look at Fine-tuning with custom datasets — transformers 4.5.0.dev0 documentation 9. In particular, pay attention to where it talks about WordPiece tokenization and the code in def encode_tags(tags, encodings): …
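One possible post-processing sketch, as an alternative to the offset-mapping code in that doc: keep only the prediction of each word's first sub-token, so you end up with one tag per input word. This assumes a fast tokenizer (so word_ids() is available) and that the prediction row is aligned with this example's encoding, special tokens included.

def predictions_per_word(tokens, predicted_ids, tokenizer, label_list):
    # tokens: the pre-split words of one example
    # predicted_ids: argmaxed label ids for the same example
    encoding = tokenizer(tokens, is_split_into_words=True, truncation=True)
    word_ids = encoding.word_ids()
    tags, previous_word = [], None
    for position, word_id in enumerate(word_ids):
        if word_id is None or word_id == previous_word:
            continue  # skip special tokens and continuation sub-tokens
        tags.append(label_list[predicted_ids[position]])
        previous_word = word_id
    return list(zip(tokens, tags))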
0
huggingface
🤗Transformers
What’s a good value for pad_to_multiple_of?
https://discuss.huggingface.co/t/whats-a-good-value-for-pad-to-multiple-of/1481
Has anyone tried this? I can’t find a suggested value in the docs. I am running on a GPU (with tensor cores).
If you use mixed precision, you need all your tensors to have dimensions that are multiples of 8 to maximize the benefits of your tensor cores. So pad_to_multiple_of=8 is a good value, unless your model has some pooling (like Funnel Transformer), in which case that 8 might be divided by 2 along the way (you’d need pad_to_multiple_of=32 for that model for instance, since there are two pooling operations).
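If it helps, here is a small sketch of wiring that in with a data collator; the checkpoint name and the toy sentences are just placeholders:
from transformers import AutoTokenizer, DataCollatorWithPadding
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# pads each batch to its longest example, rounded up to a multiple of 8
data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)
batch = data_collator([tokenizer("short text"), tokenizer("a somewhat longer piece of text")])
print(batch["input_ids"].shape)  # the sequence dimension is a multiple of 8
You can pass the same data_collator to the Trainer via its data_collator argument.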
0
huggingface
🤗Transformers
Architecture attribute of model.config is different from the actual model’s architecture in RoBERTa
https://discuss.huggingface.co/t/architecture-attribute-of-model-config-is-different-from-the-actual-models-architecture-in-roberta/6244
Loading roberta-base model using RobertaForSequenceClassification.from_pretrained returns a model having config with incorrect value for attribute architectures (“architectures”: [ “RobertaForMaskedLM” ]) of RobertaForMaskedLM instead of RobertaForSequenceClassification. Similarly, I tried for RobertaForMultipleChoice and TFRobertaForSequenceClassification and got same result of in appropriate architecture attribute. (below are code snippets and their outputs) Can anyone explain the reason for this? package versions: transformers-4.6.0 tensorflow-2.4.1 torch-1.8.1+cu101 from transformers import RobertaTokenizer, RobertaForSequenceClassification import torch tokenizer = RobertaTokenizer.from_pretrained('roberta-base') model = RobertaForSequenceClassification.from_pretrained('roberta-base') model.config Output: Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForSequenceClassification: ['lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'roberta.pooler.dense.bias', 'roberta.pooler.dense.weight', 'lm_head.bias', 'lm_head.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. RobertaConfig { "_name_or_path": "roberta-base", "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.6.0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } from transformers import RobertaTokenizer, RobertaForMultipleChoice import torch tokenizer = RobertaTokenizer.from_pretrained('roberta-base') model = RobertaForMultipleChoice.from_pretrained('roberta-base') model.config Output: Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForMultipleChoice: ['lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.layer_norm.weight', 'lm_head.bias', 'lm_head.dense.bias'] - This IS expected if you are initializing RobertaForMultipleChoice from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). 
- This IS NOT expected if you are initializing RobertaForMultipleChoice from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForMultipleChoice were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. RobertaConfig { "_name_or_path": "roberta-base", "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.6.0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } from transformers import RobertaTokenizer, TFRobertaForSequenceClassification import tensorflow as tf model = TFRobertaForSequenceClassification.from_pretrained('roberta-base') model.config Output: All model checkpoint layers were used when initializing TFRobertaForSequenceClassification. Some layers of TFRobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. RobertaConfig { "_name_or_path": "roberta-base", "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.6.0", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 }
This is the architecture of the model class the checkpoint was pretrained with. If you save your model (preferably after training) using the save_pretrained method, the architectures field will be updated to reflect the architecture you were using.
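A quick way to see this, sketched below with a throwaway output directory (the path is just an example):
from transformers import RobertaForSequenceClassification
model = RobertaForSequenceClassification.from_pretrained("roberta-base")
model.save_pretrained("./my-roberta-classifier")
reloaded = RobertaForSequenceClassification.from_pretrained("./my-roberta-classifier")
print(reloaded.config.architectures)  # ['RobertaForSequenceClassification']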
0
huggingface
🤗Transformers
Pegasus on qa task
https://discuss.huggingface.co/t/pegasus-on-qa-task/6215
Hello everyone, I try to use pegasus on squad dataset. I am using notebook 2 for pegasus fine-tuning on xsum. I use all hyper parameters as in the above notebook. After train, I got following loss values. /usr/local/lib/python3.7/dist-packages/transformers/optimization.py:562: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:1005.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) [2000/2000 46:21, Epoch 2000/2000] Step Training Loss Validation Loss 100 11.515400 11.971568 200 9.583800 10.400596 300 7.782900 8.812467 400 1.653800 1.603077 500 0.006000 0.367762 600 0.049500 0.271221 700 0.000600 0.271198 800 0.000300 0.282740 900 0.000200 0.311012 1000 0.000200 0.308879 1100 0.000100 0.316374 1200 0.000100 0.329436 1300 0.000100 0.326996 1400 0.020200 0.291157 1500 0.002400 0.272543 1600 0.000100 0.291934 1700 0.035900 0.308235 1800 0.000100 0.308755 1900 0.000100 0.311096 2000 0.000000 0.311382 CPU times: user 1h 1min 52s, sys: 18min 49s, total: 1h 20min 41s Wall time: 46min 26s TrainOutput(global_step=2000, training_loss=1.8480298345236805, metrics={‘train_runtime’: 2783.2468, ‘train_samples_per_second’: 0.719, ‘total_flos’: 0, ‘epoch’: 2000.0, ‘init_mem_cpu_alloc_delta’: 8192, ‘init_mem_gpu_alloc_delta’: 0, ‘init_mem_cpu_peaked_delta’: 0, ‘init_mem_gpu_peaked_delta’: 0, ‘train_mem_cpu_alloc_delta’: -4224876544, ‘train_mem_gpu_alloc_delta’: 2288025088, ‘train_mem_cpu_peaked_delta’: 4242300928, ‘train_mem_gpu_peaked_delta’: 8677123584}) My question is what values for hyperparameters (optimization, learning rate etc.) I should use for a QA task? And the training loss shown in TrainOutput does not match with the loss values on table. What can the reason be?
hey @helloworld123-lab, are you sure that pegasus can be used for a reading comprehension task like SQuAD? my understanding is that it is designed for abstractive tasks like summarization (although i’d be interested to hear otherwise!). as an alternative, i’d suggest checking out the official question-answering tutorial here: Google Colaboratory 3
0
huggingface
🤗Transformers
Help Improving Abstractive Summarization
https://discuss.huggingface.co/t/help-improving-abstractive-summarization/6225
Hey everyone, Hope you’re doing great! I’m a beginner when it comes to using transformers. I’ve been working on a book summarization project for a while; the idea is to split the book into chapters, then each chapter into chunks, and summarize the chunks separately. I’ve tried several models and the summaries produced aren’t that good. Some of the problems are: Some sentences aren’t fully generated. The context is lost most of the time. Do you guys have any suggestions, please? Also, I have found these two papers: Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization 4 Enhancing Factual Consistency of Abstractive Summarization 4 But I couldn’t find any code implementation of these models. If you have any implementations, please let me know. Thank you for your time! Have a good day.
hey @haithembrr, your approach sounds very sensible. have you tried playing around with the parameters of the model’s generate function, e.g. max_length, and trying different strategies like beam search vs sampling (see docs 1)? alternatively, you could have a look at the discussion in this thread to see if someone has run into the same problem: Summarization on long documents 13
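To make that concrete, here is a hedged sketch of the two strategies; the checkpoint name is only an example of a summarization model and chunk_text stands for one of your book chunks:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "sshleifer/distilbart-cnn-12-6"  # example checkpoint, swap in your own
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
chunk_text = "..."  # one chunk of a chapter
inputs = tokenizer(chunk_text, return_tensors="pt", truncation=True, max_length=1024)
# beam search: conservative, tends to stay closer to the input
beam_ids = model.generate(**inputs, max_length=142, min_length=56, num_beams=4,
                          no_repeat_ngram_size=3, early_stopping=True)
# sampling: more varied output, but can drift off topic
sample_ids = model.generate(**inputs, max_length=142, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(beam_ids[0], skip_special_tokens=True))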
0
huggingface
🤗Transformers
Checkpoint missing Optimizer.pt? How to Resume?
https://discuss.huggingface.co/t/checkpoint-missing-optimizer-pt-how-to-resume/6138
I tried to train a model with HF and it helped me a lot! My only problem is resuming the training. As you can see in the screenshot below, only my first checkpoint contains the data I expect. My question is, is there a flag where I can turn off saving the checkpoints (I ask only to turn it off!)? Can I still continue the training? image306×566 19 KB Im using load_best_model_at_end save_total_limit = 3 overwrite_output_dir I didnt change my Code i just updated to using the latest HF Version !pip install -q git+https://github.com/huggingface/transformers Is there any way to resum from the last Checkpoint? Maybe a Flag init_epoch etc? TrainingArguments(output_dir=/share/datasets/output_run, overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.STEPS, prediction_loss_only=False, per_device_train_batch_size=20, per_device_eval_batch_size=16, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.0001, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=20.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/May12_05-06-46_a600ce861ff7, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=1000, save_strategy=IntervalStrategy.STEPS, save_steps=1000, save_total_limit=3, no_cuda=False, seed=42, fp16=True, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=2, past_index=-1, run_name=cv_sm_1, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=True, metric_for_best_model=loss, greater_is_better=False, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=True, length_column_name=length, report_to=['wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, _n_gpu=1, mp_parameters=) !find / -name optimizer.pt Just returned the Checkpoint from withing the Screenshot
@sgugger do you by any chance know what this could be? The strange thing is, I resumed training from the last valid checkpoint and the model generates checkpoints normally, as expected, but at some point the checkpoint folder only contains rng_state.pth, as shown in the screenshot.
0
huggingface
🤗Transformers
Masked vectors are included in vanilla transformer model
https://discuss.huggingface.co/t/masked-vectors-are-included-in-vanilla-transformer-model/6191
I trained a vanilla Bert Model. model = BertModel.from_pretrained(‘bert-base-uncased’, num_labels = 4, output_hidden_states=True, output_attentions=True) That I trained. I’m using multi label classification, so I train this model with an extra logit layer attached to it, then summed the results. Code for running it is below def validation_nn(): prob_list = [] input_ids = [[101,4769, 77, 102, 0, 0]] mask = [[1, 1, 1, 1, 0, 0]] model.eval() outputs = model(torch.tensor(input_ids), attention_mask=torch.tensor(mask) ) last_hidden_state = outputs.last_hidden_state summed_final_hidden_state = torch.sum(last_hidden_state, 1) logits = logit_layer(summed_final_hidden_state) probs = torch.sigmoid(logits) prob_list.append(probs) print(outputs[-1]) #print('probs', probs) return prob_list probs = validation_nn() Here is a snippet of the output tensor([[[[0.2019, 0.1652, 0.0762, 0.5566, 0.0000, 0.0000], [0.2206, 0.1417, 0.3824, 0.2552, 0.0000, 0.0000], [0.0319, 0.1476, 0.6395, 0.1810, 0.0000, 0.0000], [0.4228, 0.1063, 0.1553, 0.3157, 0.0000, 0.0000], [0.1468, 0.1533, 0.4655, 0.2344, 0.0000, 0.0000], [0.1618, 0.1286, 0.4877, 0.2219, 0.0000, 0.0000]], [[0.9799, 0.0092, 0.0029, 0.0080, 0.0000, 0.0000], [0.0043, 0.0714, 0.9032, 0.0212, 0.0000, 0.0000], [0.2281, 0.1369, 0.4741, 0.1608, 0.0000, 0.0000], [0.3400, 0.3117, 0.0927, 0.2557, 0.0000, 0.0000], [0.0053, 0.1026, 0.7928, 0.0992, 0.0000, 0.0000], [0.0063, 0.0944, 0.8022, 0.0971, 0.0000, 0.0000]], As you can see, the two columns to the far right, which should be masked, are zero’d out, as they should be. But I get two more rows which are not zero’d out. When I go to sum my results, these mask vectors are added, which obviously messes up my results. I imagine having these non zero’d vectors messes up the training as well. Is there a way to zero out the rows, which belong to mask tokens?
hey @bennicholl you might have better luck using the Trainer class to run your training with a custom loss function (see docs): import torch from transformers import Trainer class MultilabelTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): labels = inputs.pop("labels") outputs = model(**inputs) logits = outputs.logits loss_fct = torch.nn.BCEWithLogitsLoss() loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.float().view(-1, self.model.config.num_labels)) return (loss, outputs) if return_outputs else loss then you can define a compute_metrics function as follows from scipy.special import expit as sigmoid from sklearn.metrics import classification_report def compute_metrics(pred): y_true = pred.label_ids y_pred = sigmoid(pred.predictions) y_pred = (y_pred>0.5).astype(float) clf_dict = classification_report(y_true, y_pred, target_names=all_labels, zero_division=0, output_dict=True) return {"micro f1": clf_dict['micro avg']['f1-score'], "macro f1": clf_dict['macro avg']['f1-score']} and after fine-tuning you can then get the predictions on your validation set via Trainer.predict: trainer = # fine-tuned Trainer pred = trainer.predict(your_eval_dataset) metrics = compute_metrics(pred) hth!
0
huggingface
🤗Transformers
Summarization with mT5
https://discuss.huggingface.co/t/summarization-with-mt5/6199
Hello, I am trying to do summarization with mT5 and when I use the official summarization colab which uses seq2seq trainer, the model outputs trash. You can see my GitHub issue here 56. Do you have any ideas on how to proceed with summarization using mT5?
disabling fp16 seems to solve the issue, but I would rather have fp16 because it doubles the training speed
0
huggingface
🤗Transformers
ELECTRA: Accounting for mask tokens that are correctly predicted by MLM
https://discuss.huggingface.co/t/electra-accounting-for-mask-tokens-that-are-correctly-predicted-by-mlm/596
I have been pretraining ELECTRA models using the ElectraForPreTraining class, some of them taking in input_ids and others just taking the embeddings straight via the input_embeds parameter (I am training the model on biological data and assign particular tokens to particular species). As I understand from the paper, during pretraining both the generator and discriminator are trained; the generator is a small masked language model and the discriminator aims to predict whether a token is an original or a token replaced by the generator. Additionally, as the paper states " if the generator happens to generate the correct token, that token is considered “real” instead of “fake”. However, the labels ElectraForPreTraining uses to determine the ground truth of “real” or “fake” are an array passed in by me - and I have no way of knowing whether the generator will correctly predict the token or not, resulting in the labels I pass in marking any masked tokens as “fake” whether or not the generator actually succeeds. Additionally, since the model doesn’t take in any information regarding the true identity of masked tokens it seems to me that the model has no way to handle this case. Is this being properly handled in the model? If so, what do I have to do to make sure I am passing in my data correctly?
The flow is:
inputs | mask → masked inputs
masked inputs | generator (ElectraForMaskedLM) → gen logits
gen logits | gumbel softmax → generated
generated | discriminator (ElectraForPreTraining) → disc logits
is_replaced = generated == inputs (edited: is_replaced = generated != inputs)
is_replaced is the true label for the discriminator
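For concreteness, a rough code sketch of that flow; everything here (generator, discriminator, masked_inputs, original_inputs, mlm_mask, attention_mask) is a placeholder for your own models and tensors, and argmax sampling is just the simplest stand-in for the gumbel softmax step:
import torch
gen_logits = generator(input_ids=masked_inputs, attention_mask=attention_mask).logits
sampled = gen_logits.argmax(dim=-1)
# keep the original tokens everywhere except the masked positions
disc_inputs = torch.where(mlm_mask, sampled, original_inputs)
# a position only counts as "fake" if the sampled token differs from the original one
is_replaced = (disc_inputs != original_inputs).long()
disc_outputs = discriminator(input_ids=disc_inputs, attention_mask=attention_mask,
                             labels=is_replaced)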
0
huggingface
🤗Transformers
Trainer.evaluate()
https://discuss.huggingface.co/t/trainer-evaluate/6107
Hello, When the following code is run several times (notebook language_modeling.ipynb 3), it gives a different value each time: import math eval_results = trainer.evaluate() print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") I do not understand why (the eval loss should always be the same when using the same eval dataset and the same model).
I don’t think it’s the exact same model and the same evaluation dataset every time. Can you actually save and check the eval dataset across two runs, please?
0
huggingface
🤗Transformers
How to set early stopping when running run_summarization.py
https://discuss.huggingface.co/t/how-to-set-early-stopping-when-running-run-summarization-py/6093
Same as the topic title: I know how to change the number of training epochs, but I don’t know how to add early stopping when running run_summarization.py
Hi, I’m having a similar issue. I’m trying to set an EarlyStoppingCallback in the run_mlm.py parameters by passing the following flag: --callbacks [EarlyStoppingCallback(early_stopping_patience=3)] When doing this, bash complains about the '('. I have tried a more bash-friendly syntax, adding \ before the parentheses, but then it says the argument is not recognized by the HfArgumentParser. Have you figured out how to set it? Thank you very much!!
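As far as I know the callback cannot be passed as a command-line flag at all: HfArgumentParser only parses the dataclass fields defined in the script (model, data and training arguments), so the callback has to be added inside the script where the Trainer is built. A minimal sketch of that edit (variable names follow the usual example scripts and are assumptions):
from transformers import EarlyStoppingCallback
trainer = Trainer(
    model=model,
    args=training_args,            # needs load_best_model_at_end=True,
    train_dataset=train_dataset,   # an evaluation strategy, and metric_for_best_model set
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
# or, after the Trainer is created:
# trainer.add_callback(EarlyStoppingCallback(early_stopping_patience=3))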
0
huggingface
🤗Transformers
Wav2vec2-large-xlsr-53 for non-listed low resource language
https://discuss.huggingface.co/t/wav2vec2-large-xlsr-53-for-non-listed-low-resource-language/5457
Hi, Can we fine-tune the model for a low-resource language that is not among the 53 languages listed? Does it have to be in the same language family as the 53 listed? Regards Becks
Hello, Yes, you can fine-tune the model for a low-resource language outside the ones the model was pretrained on. In the original XLSR-53 paper ([2006.13979] Unsupervised Cross-lingual Representation Learning for Speech Recognition 1) you can see in table 3 and table 4 that they fine-tuned on out-of-pretraining languages. In addition, during my school project, we fine-tuned XLSR-53 on the Czech and Ukrainian languages and got some good results (feel free to check my github - GitHub - omarsou/wav2vec_xlsr_cv_exp: Experiments on out of training languages (from common voice https://commonvoice.mozilla.org/) using Wav2Vec 11 ). Best, Omar
0
huggingface
🤗Transformers
Generate text on multiple GPU
https://discuss.huggingface.co/t/generate-text-on-multiple-gpu/1281
I was wondering how one might go about generating text on a machine with multiple GPU? Here is some example code: import pandas as pd from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2') def generate_text(primer_sent: str, chunk_sent_length: int, is_finetuning_current_model=None): encoded_results = tokenizer(primer_sent, return_tensors='pt') tmp_ids = encoded_results['input_ids'] max_length = tmp_ids.shape[1] + chunk_sent_length gen_ids = model.generate(input_ids=tmp_ids, max_length=max_length, is_finetuning_current_model=is_finetuning_current_model, do_sample=True, top_k=50, top_p=0.95, pad_token_id=50256) decoded_str = tokenizer.decode(gen_ids, skip_special_tokens=True) return decoded_str def generate_text_samples(data_filename: str, chunk_sent_length: int): sent_df = pd.read_csv(data_filename) sent_df['gen'] = sent_df[0].apply(generate_text, args=(chunk_sent_length,)) return sent_df text_df = generate_text_samples('/path/to/data.csv', 50) When I run this code on my machine, it appears that it is all being done on the CPU. Is it possible to distribute this workflow across the multiple gpu I have? Thanks in advance for your help!
Hi @aclifton314. Have you had any luck with this? I’m also interested in generating text (using a different model - t5) on multiple GPUs to speed up the throughput of my process. Wondering if you were able to do this?
0
huggingface
🤗Transformers
Conda install -c huggingface or conda-forge?
https://discuss.huggingface.co/t/conda-install-c-huggingface-or-conda-forge/6013
Hello, In the Transformers docs, the conda installation paragraph 10 gives the following code, which installs version 4.4.2: conda install -c huggingface transformers … but the Anaconda page of Transformers 6 gives the following one, which installs version 4.5.1 (the latest one): conda install -c conda-forge transformers Why does the Hugging Face channel not install the latest version?
Hi @pierreguillou, there were some build errors with the conda builds, they’ve since been fixed and the packages have been uploaded. You should now see v4.5.0 and v4.5.1 on the huggingface channel.
0
huggingface
🤗Transformers
Trainer Question Answering evaluation metrics
https://discuss.huggingface.co/t/trainer-question-answering-evaluation-metrics/3290
Hi HF community, I wanted to ask whether anyone has come across an example of evaluating QA models using built-in Trainer functions like compute_metrics.
You can check the new run_qa script 39. It does need a special subclass of Trainer because the post-processing is fairly complex, but it uses compute_metrics and the squad metrics from the datasets library.
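For reference, the squad metric itself is easy to try on its own; the example below uses toy predictions, while in run_qa.py the post-processing step is what turns start/end logits into these text answers:
from datasets import load_metric
squad_metric = load_metric("squad")
predictions = [{"id": "abc123", "prediction_text": "Denver Broncos"}]
references = [{"id": "abc123",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}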
0
huggingface
🤗Transformers
Fine Tuning GPT2 for machine translation
https://discuss.huggingface.co/t/fine-tuning-gpt2-for-machine-translation/5894
Good evening everyone, is it possible to fine-tune gpt2 for text translation? If it is possible, how can I do it using my own data? I want to translate from ASL to English, and the idea that came to me was to use gpt2 as the decoder (since it is trained on English) and BERT as the encoder (I would fine-tune it and retrain it on the ASL data). Does anyone have a tutorial on how to do something like this?
hey @yansoares, you could try using the EncoderDecoderModel (docs 50) to create a BERT2GPT model. I’m not aware of any tutorials using this class, but the original paper (link 54) is quite well written and should give you a good idea on what needs to be done.
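If you want to experiment, here is a rough sketch of warm-starting such a model; treat the token-id plumbing as an assumption to double-check against the EncoderDecoderModel docs rather than a finished recipe:
from transformers import EncoderDecoderModel, AutoTokenizer
# BERT encoder + GPT-2 decoder; the cross-attention weights are newly initialized
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
encoder_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2")
decoder_tokenizer.pad_token = decoder_tokenizer.eos_token  # gpt2 has no pad token by default
model.config.decoder_start_token_id = decoder_tokenizer.bos_token_id
model.config.pad_token_id = decoder_tokenizer.pad_token_id
input_ids = encoder_tokenizer("source sentence", return_tensors="pt").input_ids
labels = decoder_tokenizer("target sentence", return_tensors="pt").input_ids
loss = model(input_ids=input_ids, labels=labels).loss  # fine-tune by minimizing this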
0
huggingface
🤗Transformers
DataCollator vs. Tokenizers
https://discuss.huggingface.co/t/datacollator-vs-tokenizers/5897
In @sgugger 's example notebook Token Classification 11 A DataCollatorForTokenClassification is used: from transformers import DataCollatorForTokenClassification data_collator = DataCollatorForTokenClassification(tokenizer) ... trainer = Trainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) I am trying to figure out what the real purpose of this is. It appears that the purpose of DataCollatorForTokenClassification is for padding, truncation, etc. But you can also do this in the tokenizer. Why do we need this extra thing, then? Is it because DataCollator does it per batch instead on the fly and is more efficient? Thanks.
hey @hamel, welcome to the forum! you’re spot on about using data collators to do padding on-the-fly. to understand why this helps, consider the following scenarios: use the tokenizer to pad each example in the dataset to the length of the longest example in the dataset use the tokenizer and DataCollatorWithPadding (docs 28) to pad each example in a batch to the length of the longest example in the batch clearly, scenario 2 is more efficient, especially in cases where a few examples happen to be much longer than the median length and scenario 1 would introduce a lot of unnecessary padding. btw under the hood, the data collators are fed to the collate_fn argument of pytorch’s DataLoader, see e.g. here: transformers/trainer.py at 4e7bf94e7280d2b725ac4644dbe9808560afa5d8 · huggingface/transformers · GitHub 38 the pytorch docs 2 are not super informative on collate_fn itself, but you can find various discussion in their forums (e.g. here 22)
0
huggingface
🤗Transformers
Dataset for training BlenderBot
https://discuss.huggingface.co/t/dataset-for-training-blenderbot/5853
I am trying to build a chatbot using BlenderbotForConditionalGeneration. I am using the pretrained model; however, I have to fine-tune it. My question is: what should the training data look like, and are there any tutorials on how to preprocess it in order to fine-tune the model? Thank you!
I also want to fine-tune blenderbot with custom data. I noticed the blended_skill_talk dataset page 48 says that blenderbot weights 13 use that to train on. So, it might make sense to structure our own custom training data like the blended_skill_talk dataset. But, I’m not sure, since when I tried to train on the blended_skill_talk dataset, like this: from transformers import BlenderbotForConditionalGeneration, Trainer, TrainingArguments from datasets import load_dataset mname = 'facebook/blenderbot-400M-distill' model = BlenderbotForConditionalGeneration.from_pretrained(mname).to('cuda') train_dataset = load_dataset("blended_skill_talk", split="train") val_dataset = load_dataset("blended_skill_talk", split="validation") training_args = TrainingArguments("tmp_trainer") trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=val_dataset) trainer.train() I get an error (ValueError: too many dimensions ‘str’). I think it has to do with the collator, but I’m not sure how to fix this.
0
huggingface
🤗Transformers
Could I inference the Encoder-Decoder model without specify “decoder_input_ids”?
https://discuss.huggingface.co/t/could-i-inference-the-encoder-decoder-model-without-specify-decoder-input-ids/5811
I’m using Encoder-Decoder model to train a translation task, while partial of the data are unlabeled. For labeled data, I can use the following codes to do the inference and compute the loss, # model is composed of EncoderDecoder architecture # source_data and target_data are processed by tokenizer beforehand batch = { "inputs_idx": source_data["inputs_idx"], "attention_mask": source_data["attention_mask"], "decoder_input_ids": target_data["inputs_idx"] "decoder_attention_mask": target_data["attention_mask"], "labels": target_data["inputs_idx"].clone() } output = model(**batch) supervised_loss = output["loss"] Besides the supervised loss, I also want to compute some unlabeled loss over the predicted logits of unlabeled source data, such as, batch = { "inputs_idx": source_data["inputs_idx"], "attention_mask": source_data["attention_mask"], } output = model(**batch) unsupervised_loss = some_loss_func(output["logits"]) However, I can not do the inference without specifying “decoder_input_ids”, the decoder will produce error about You have to specify either input_ids or inputs_embeds So far, I assign source_data["idx"] for decoder_input_ids to avoid the issue, but I feel like it is incorrect cause it will bring inconsistency in inference between labeled and unlabeled data. So, I am wondering how should I do inference for unlabeled data correctly.
Hi, during inference use output = model.generate(**batch) instead of output = model(**batch). Also, during training, decoder_input_ids should not be target_data[“inputs_idx”]: labels = target_data[“inputs_idx”] and decoder_input_ids = shift_to_right(target_data[“inputs_idx”]), i.e. the labels shifted to the right - this shifting is performed automatically in the library code, so you can simply omit the decoder_input_ids argument.
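Putting that together for the unlabeled branch, a small sketch (source_data stands for the tokenizer output on the unlabeled source sentences, and the generation arguments are just examples):
batch = {
    "input_ids": source_data["input_ids"],
    "attention_mask": source_data["attention_mask"],
}
generated = model.generate(**batch, max_length=128, num_beams=1, do_sample=False,
                           output_scores=True, return_dict_in_generate=True)
# generated.sequences holds the predicted ids and generated.scores the per-step logits
# that you can feed into your unsupervised objective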
0
huggingface
🤗Transformers
How to only finetune the last layer of ALBERT?
https://discuss.huggingface.co/t/how-to-only-finetune-the-last-layer-of-albert/5875
AlbertModel( (embeddings): AlbertEmbeddings( (word_embeddings): Embedding(30000, 128, padding_idx=0) (position_embeddings): Embedding(512, 128) (token_type_embeddings): Embedding(2, 128) (LayerNorm): LayerNorm((128,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) (encoder): AlbertTransformer( (embedding_hidden_mapping_in): Linear(in_features=128, out_features=768, bias=True) (albert_layer_groups): ModuleList( (0): AlbertLayerGroup( (albert_layers): ModuleList( (0): AlbertLayer( (full_layer_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (attention): AlbertAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (attention_dropout): Dropout(p=0.1, inplace=False) (output_dropout): Dropout(p=0.1, inplace=False) (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) (ffn): Linear(in_features=768, out_features=3072, bias=True) (ffn_output): Linear(in_features=3072, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) ) (pooler): Linear(in_features=768, out_features=768, bias=True) (pooler_activation): Tanh() ) As we can see, ALBERT has only a ModuleList. And I am not sure how to only finetune the last layer out of the 12 layers in total. Thanks!
You can access the names of the parameters through model.state_dict().keys() and customize the optimizer according to the names of the parameters in the corresponding layers. For example, if you set optimizer_grouped_parameters = [{'params': [p for n, p in model.named_parameters() if "pooler" in n], 'weight_decay': 0.01}] and then initialize the optimizer with optimizer = AdamW(optimizer_grouped_parameters, lr=1e-5), only the pooler layer will get updated during training. If you check the weights of the pooler layer after training with model.pooler.weight, they will be different from the initial model. However, the other layers will have the same weights as the initial model.
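Note that because ALBERT shares one set of transformer weights across all of its repeated layers, there is no separate set of parameters for just the last pass, so the pooler (or a task head) is usually what you can train in isolation. An alternative to the grouped-optimizer approach, sketched here as one option rather than the only way, is to switch off gradients for everything else:
from transformers import AlbertModel
model = AlbertModel.from_pretrained("albert-base-v2")
for name, param in model.named_parameters():
    # keep only the pooler trainable; change the condition to target other parameters
    param.requires_grad = "pooler" in name
print([n for n, p in model.named_parameters() if p.requires_grad])
# ['pooler.weight', 'pooler.bias']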
0
huggingface
🤗Transformers
CUDA out of memory when using Trainer with compute_metrics
https://discuss.huggingface.co/t/cuda-out-of-memory-when-using-trainer-with-compute-metrics/2941
Recently, I wanted to fine-tune Bart-base with Transformers (version 4.1.1). The fine-tuning process is very smooth with compute_metrics=None in Trainer. However, when I implement a function to compute metrics and pass this function to Trainer, I get a CUDA out of memory error during the evaluation stage. I want to try this feature, so my implementation is straightforward. def compute_metrics(pred): preds = pred.predictions labels = pred.label_ids print(preds.shape, labels.shape) return { 'loss': 1 } Since the training process is normal when compute_metrics=None, I think it can’t be a problem of batch size. I still tried a smaller batch size, but even if I set the batch size to 1, the situation is the same. I even tried tinier-bart, but I received the same error in the end. One thing caught my attention: when I set a tiny batch size, the memory does not fill up at once, but the occupancy rate keeps increasing until the memory can’t hold more data. That is, the processed data is not released in time. Is there any magic operation to solve this problem?
When computing metrics inside the Trainer, your predictions are all gathered together on the device (GPU/TPU) and only passed back to the CPU at the end (because that operation can be slow). If your dataset is large (or your model outputs large predictions) you can use eval_accumulation_steps to set a number of steps after which your predictions are sent back to the CPU (slower but uses less device memory). This should avoid your OOM.
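For example, something along these lines (the numbers are placeholders to tune for your setup):
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="./results",
    per_device_eval_batch_size=8,
    eval_accumulation_steps=20,  # move accumulated predictions to the CPU every 20 eval steps
)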
0
huggingface
🤗Transformers
Argmax of Generation Probabilities doesn’t match with Generated Sequence Tokens
https://discuss.huggingface.co/t/argmax-of-generation-probabilities-doesnt-match-with-generated-sequence-tokens/5865
I am using BART model to make a chatbot. I am doing the following to generate response from the model: generated_output = model.generate( input_ids = input_ids, attention_mask = mask, output_scores=True, return_dict_in_generate=True ) But, the sequence generated from the ‘sequences’ field: gen_tokens = generated_output['sequences'] gen_tokens_seq = [tokenizer.decode(g, skip_special_tokens = True) for g in gen_tokens] And the one generated from argmax(scores) num_generated_tokens = len(generated_output['scores']) for i1 in range(0, num_generated_tokens, 1): temptensor = generated_output['scores'][i1][0] gen_id = torch.argmax(temptensor).item() gen_ids.append(gen_id) gen_ids = torch.tensor(gen_ids) gen_ids = gen_ids.view(1, -1) gen_ids_seq = [tokenizer.decode(g, skip_special_tokens = True) for g in gen_ids] are not the same. I need the logit vector for the generated sequence of tokens by model.generate(), but it’s not returning what I expect it to return. What can I do to get the logit values for the “sequence” returned by model.generate()?
Okay I think I found it. I had to set num_beams=1, do_sample=False manually inside model.generate() to get the correct logit values. These values are set by default according to huggingface documentation, but still, I had to manually set them. Please correct me if I am wrong on this. Thanks.
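That matches my understanding: with beam search the returned scores are reordered across beams, so a per-step argmax no longer lines up with the final sequence. A sketch of the greedy setup (input_ids and mask are the same tensors as in the question):
generated = model.generate(
    input_ids=input_ids,
    attention_mask=mask,
    num_beams=1,        # greedy decoding, so scores line up with the chosen tokens
    do_sample=False,
    output_scores=True,
    return_dict_in_generate=True,
)
ids_from_scores = [step_scores[0].argmax().item() for step_scores in generated.scores]
# generated.sequences[0] should now match these ids (plus the leading start token)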
0
huggingface
🤗Transformers
Adding more information on Trainer state
https://discuss.huggingface.co/t/adding-more-information-on-trainer-state/5852
Hi, I would like to save more information on the trainer state to load later. I am doing it like self.state.A = A and then I need to load it later as: path = # saved_path of state state = TrainerState.load_from_json(path) print(state.A) but this gives the error that A does not exist. Could you please help me? thanks @sgugger
The state is not expandable. You should save your information separately to reload it like that.
0
huggingface
🤗Transformers
How to specify sequence length when using “feature-extraction”
https://discuss.huggingface.co/t/how-to-specify-sequence-length-when-using-feature-extraction/5793
from transformers import BertTokenizerFast tokenizer = BertTokenizerFast("./vocab.txt", unk_token="<unk>", sep_token="</s>", cls_token="<s>", pad_token="<pad>", mask_token="[MASK]" ) features = pipeline( "feature-extraction", model="./model", tokenizer=tokenizer ) search_features = features(text) #IndexError: index out of range in self
what happens if you specify model_max_length=512 when you load the tokenizer? i’d try that and do a sanity check with tokenizer(text) to make sure the truncation is working as expected
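For instance, something like the sketch below (keeping the same special tokens as in the question, with model_max_length added):
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast("./vocab.txt", model_max_length=512,
                              unk_token="<unk>", sep_token="</s>", cls_token="<s>",
                              pad_token="<pad>", mask_token="[MASK]")
encoded = tokenizer(text, truncation=True)
print(len(encoded["input_ids"]))  # should now be capped at 512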
0
huggingface
🤗Transformers
Eval freezes on local multi GPU Deepspeed run
https://discuss.huggingface.co/t/eval-freezes-on-local-multi-gpu-deepspeed-run/5762
Environment info transformers version: 4.6.0.dev0 Platform: Linux-4.19.112±x86_64-with-Ubuntu-18.04-bionic Python version: 3.7.10 PyTorch version (GPU?): 1.8.1+cu101 (True) Tensorflow version (GPU?): 2.4.1 (True) Using GPU in script?: <2,4> Using distributed or parallel set-up in script?: Information I’m working on wav2vec2.0 using the following official script of huggingface. github.com huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py 3 #!/usr/bin/env python3 import json import logging import os import re import sys from dataclasses import dataclass, field from typing import Any, Dict, List, Optional, Union import datasets import numpy as np import torch import torchaudio from packaging import version from torch import nn import transformers from transformers import ( HfArgumentParser, Trainer, This file has been truncated. show original I am trying to finetune huggingface model with multiple gpus using deepspeed. deepspeed --num_gpus=1 run_common_voice.py --deepspeed ds_config.json --do_train --do_eval works, but deepspeed --num_gpus=2 run_common_voice.py --deepspeed ds_config.json --do_train --do_eval stops working and freezes at the end of eval. The progress bar is 100% done but the eval result is not returned and it freezes. To reproduce This is how to reproduce! colab.research.google.com Google Colaboratory 2 Steps to reproduce the behavior: Install deepspeed Add with autocast():after line 481 in run_common_voice.py Set param: --deepspeed ds_config.json --do_train --do_eval Run run_common_voice.py using deepspeed with 1> gpus ds_config has the following parameters. { "fp16": { "enabled": "true", "loss_scale": 0, "loss_scale_window": 1000, "hysteresis": 2, "min_loss_scale": 1, "opt_level": "O3" }, "steps_per_print": 100, "wall_clock_breakdown": "false" } Expected behavior The finetuning eval should be executed without freezing.
This is how to reproduce! colab.research.google.com Google Colaboratory 3
0
huggingface
🤗Transformers
RuntimeError: CUDA error: device-side assert triggered
https://discuss.huggingface.co/t/runtimeerror-cuda-error-device-side-assert-triggered/5733
When I freshly train the Token Classification model (DistilBertForTokenClassification) and run a prediction for a single sentence that I manually type out, it runs fine, but when I try to run it on my dataset, it fails with RuntimeError: CUDA error: device-side assert triggered when using. This is how I’m performing the training: if __name__ == '__main__': import torch class WNUTDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) train_encodings.pop("offset_mapping") # we don't want to pass this to the model val_encodings.pop("offset_mapping") train_dataset = WNUTDataset(train_encodings, train_labels) val_dataset = WNUTDataset(val_encodings, val_labels) from transformers import DistilBertForTokenClassification model = DistilBertForTokenClassification.from_pretrained('distilbert-base-cased', num_labels=len(unique_tags)) model.resize_token_embeddings(len(tokenizer)) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train() This is the function that I’m using for prediction. def ner_normalize(text, clean=False): if clean: text = clean_posts(text) inputs = tokenizer(text, return_tensors="pt") labels = torch.tensor([1] * inputs["input_ids"].size(1)).unsqueeze(0) # Running the prediction outputs = model(**inputs, token_type_ids=None) text = [inverse_vocab[i] for i in inputs['input_ids'][0].numpy()] preds = [id2tag[i.item()] for i in torch.argmax(outputs['logits'], axis=-1)[0]] collapsed_tokens = [] collapsed_preds = [] curr_token = '' curr_pred = None for i, subword_pred_pair in enumerate(zip(text, preds)): subword, pred = subword_pred_pair if subword.startswith('##'): curr_token += subword[2:] if i == len(text) - 1: collapsed_tokens.append(curr_token) collapsed_preds.append(curr_pred) else: collapsed_tokens.append(curr_token) collapsed_preds.append(curr_pred) curr_token = subword curr_pred = pred outputs = zip(collapsed_tokens[2:], collapsed_preds[2:]) normalized = ' '.join( [x[0] if x[1] == "O" else "CODE" for x in outputs] ) return normalized
This could be due to various reasons. Please run your code on the CPU to get a specific error message.
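A rough sketch of that debugging loop, reusing the model, tokenizer and text from the question; note that CUDA_LAUNCH_BLOCKING only helps if it is set before anything touches the GPU:
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # make CUDA report the actual failing operation
model = model.to("cpu")
inputs = tokenizer(text, return_tensors="pt")  # keep the inputs on the CPU as well
outputs = model(**inputs)  # on the CPU you get a readable Python stack trace instead
In cases like this it is often an index that is out of range somewhere (for example a token id beyond the resized vocabulary, or a label id >= num_labels), but the CPU trace will confirm that.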
0
huggingface
🤗Transformers
[Deepspeed] ZeRO-Infinity integration released and config changes
https://discuss.huggingface.co/t/deepspeed-zero-infinity-integration-released-and-config-changes/5786
Deepspeed ZeRO-Infinity (deepspeed==0.3.15) has been just integrated. You need to use the transformers master branch to use it. There are 2 important changes that you need to be aware of if you’re already using DeepSpeed integration in transformers: After this release only config params that are set to auto will get automatically overriden/set to the correct/recommended values, everything else is left as is. This is to avoid the previously confusing behavior of never being quite sure what gets overridden and what not despite the logger telling what it did override. The new behavior is completely unambiguous. See examples zero2 6 zero3 8 Full doc: Trainer — transformers 4.5.0.dev0 documentation 8 If you are using massive models and aren’t using example scripts, make sure to read: Full doc: Trainer — transformers 4.5.0.dev0 documentation 6 Everything else should work as before or better. The docs were revamped a lot too - if you find anything unclear or lacking please let me know. You probably want to install deepspeed master though, since 0.3.15 left some debug prints in-place, which creates a lot of noise, which has been fixed in master. So: pip install git+https://github.com/microsoft/DeepSpeed If you encounter any problems please post an Issue and tag @stas00 to it. Thank you!
Thanks for your work, I tried deepspeed in Wav2vec2-finetune and when I use the configuration file “ds_config_zero2.json”, it reports the following error: File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 259, in forward self.padding, self.dilation, self.groups) RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same So I made a change in the function _prepare_input() by using “.half()”: def _prepare_inputs(self, inputs: Dict[str, Union[torch.Tensor, Any]]) -> Dict[str, Union[torch.Tensor, Any]]: """ Prepare :obj:`inputs` before feeding them to the model, converting them to tensors if they are not already and handling potential state. """ # Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same for k, v in inputs.items(): if isinstance(v, torch.Tensor): # inputs[k] = v.to(self.args.device) inputs[k] = v.to(self.args.device).half() # add .half() here if self.args.past_index >= 0 and self._past is not None: inputs["mems"] = self._past return inputs I don’t know if this is the right way to change it, but then I got a new error: File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/functional.py", line 1692, in linear output = input.matmul(weight.t()) RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` terminate called after throwing an instance of 'c10::Error' what(): CUDA error: device-side assert triggered Exception raised from create_event_internal at /opt/conda/conda-bld/pytorch_1607370141920/work/c10/cuda/CUDACachingAllocator.cpp:687 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f2ed6c508b2 in /root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0xad2 (0x7f2ed6ea2982 in /root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/lib/libc10_cuda.so) frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f2ed6c3bb7d in /root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/lib/libc10.so) frame #3: <unknown function> + 0x5fea0a (0x7f2f13f8da0a in /root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x5feab6 (0x7f2f13f8dab6 in /root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #5: <unknown function> + 0x1a3f6e (0x55c7aa0a8f6e in /root/anaconda3/envs/huggingface/bin/python) frame #6: <unknown function> + 0x10e34c (0x55c7aa01334c in /root/anaconda3/envs/huggingface/bin/python) frame #7: <unknown function> + 0x216141 (0x55c7aa11b141 in /root/anaconda3/envs/huggingface/bin/python) frame #8: <unknown function> + 0x10e318 (0x55c7aa013318 in /root/anaconda3/envs/huggingface/bin/python) frame #9: <unknown function> + 0x1a3f50 (0x55c7aa0a8f50 in /root/anaconda3/envs/huggingface/bin/python) frame #10: <unknown function> + 0x10e34c (0x55c7aa01334c in /root/anaconda3/envs/huggingface/bin/python) frame #11: <unknown function> + 0x216141 (0x55c7aa11b141 in /root/anaconda3/envs/huggingface/bin/python) frame #12: <unknown function> + 0x10e3a8 (0x55c7aa0133a8 in /root/anaconda3/envs/huggingface/bin/python) frame #13: <unknown function> + 0x1a3f50 (0x55c7aa0a8f50 in /root/anaconda3/envs/huggingface/bin/python) frame #14: <unknown function> + 0x10e34c (0x55c7aa01334c in /root/anaconda3/envs/huggingface/bin/python) frame #15: <unknown 
function> + 0x216141 (0x55c7aa11b141 in /root/anaconda3/envs/huggingface/bin/python) frame #16: <unknown function> + 0x10e3a8 (0x55c7aa0133a8 in /root/anaconda3/envs/huggingface/bin/python) frame #17: <unknown function> + 0x1a3f50 (0x55c7aa0a8f50 in /root/anaconda3/envs/huggingface/bin/python) frame #18: <unknown function> + 0x10e34c (0x55c7aa01334c in /root/anaconda3/envs/huggingface/bin/python) frame #19: <unknown function> + 0x216141 (0x55c7aa11b141 in /root/anaconda3/envs/huggingface/bin/python) frame #20: <unknown function> + 0x10e3a8 (0x55c7aa0133a8 in /root/anaconda3/envs/huggingface/bin/python) frame #21: <unknown function> + 0x1a3f50 (0x55c7aa0a8f50 in /root/anaconda3/envs/huggingface/bin/python) frame #22: <unknown function> + 0x10e34c (0x55c7aa01334c in /root/anaconda3/envs/huggingface/bin/python) frame #23: <unknown function> + 0x216141 (0x55c7aa11b141 in /root/anaconda3/envs/huggingface/bin/python) frame #24: <unknown function> + 0x10e3a8 (0x55c7aa0133a8 in /root/anaconda3/envs/huggingface/bin/python) frame #25: <unknown function> + 0x1a3f50 (0x55c7aa0a8f50 in /root/anaconda3/envs/huggingface/bin/python) frame #26: <unknown function> + 0x10e34c (0x55c7aa01334c in /root/anaconda3/envs/huggingface/bin/python) frame #27: <unknown function> + 0x216141 (0x55c7aa11b141 in /root/anaconda3/envs/huggingface/bin/python) frame #28: <unknown function> + 0x10e3a8 (0x55c7aa0133a8 in /root/anaconda3/envs/huggingface/bin/python) frame #29: <unknown function> + 0x1a3f50 (0x55c7aa0a8f50 in /root/anaconda3/envs/huggingface/bin/python) frame #30: <unknown function> + 0x10e318 (0x55c7aa013318 in /root/anaconda3/envs/huggingface/bin/python) frame #31: <unknown function> + 0x1a3f50 (0x55c7aa0a8f50 in /root/anaconda3/envs/huggingface/bin/python) frame #32: <unknown function> + 0x10e3a8 (0x55c7aa0133a8 in /root/anaconda3/envs/huggingface/bin/python) frame #33: <unknown function> + 0x1a3f50 (0x55c7aa0a8f50 in /root/anaconda3/envs/huggingface/bin/python) frame #34: <unknown function> + 0xfd9c8 (0x55c7aa0029c8 in /root/anaconda3/envs/huggingface/bin/python) frame #35: <unknown function> + 0x10eb77 (0x55c7aa013b77 in /root/anaconda3/envs/huggingface/bin/python) frame #36: <unknown function> + 0x10eb8d (0x55c7aa013b8d in /root/anaconda3/envs/huggingface/bin/python) frame #37: PyDict_SetItem + 0x502 (0x55c7aa068da2 in /root/anaconda3/envs/huggingface/bin/python) frame #38: PyDict_SetItemString + 0x4f (0x55c7aa06986f in /root/anaconda3/envs/huggingface/bin/python) frame #39: PyImport_Cleanup + 0xa0 (0x55c7aa0af5d0 in /root/anaconda3/envs/huggingface/bin/python) frame #40: Py_FinalizeEx + 0x67 (0x55c7aa12a487 in /root/anaconda3/envs/huggingface/bin/python) frame #41: <unknown function> + 0x237f03 (0x55c7aa13cf03 in /root/anaconda3/envs/huggingface/bin/python) frame #42: _Py_UnixMain + 0x3c (0x55c7aa13d22c in /root/anaconda3/envs/huggingface/bin/python) frame #43: __libc_start_main + 0xf5 (0x7f2f4d63d555 in /usr/lib64/libc.so.6) frame #44: <unknown function> + 0x1dce90 (0x55c7aa0e1e90 in /root/anaconda3/envs/huggingface/bin/python) /opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [1,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. 
/opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [2,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. /opt/conda/conda-bld/pytorch_1607370141920/work/aten/src/ATen/native/cuda/IndexKernel.cu:84: operator(): block: [0,0,0], thread: [3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed. I also tried using the configuration file "ds_config_zero3.json", and it gives a new error: nn.functional.linear has been overridden with a more memory efficient version. This will persist unless manually reset. Traceback (most recent call last): Traceback (most recent call last): File "run_libri960.py", line 633, in <module> File "run_libri960.py", line 633, in <module> main() main() File "run_libri960.py", line 484, in main File "run_libri960.py", line 484, in main vocab_size=len(processor.tokenizer),vocab_size=len(processor.tokenizer), File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/modeling_utils.py", line 1131, in from_pretrained File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/modeling_utils.py", line 1131, in from_pretrained model = cls(config, *model_args, **model_kwargs)model = cls(config, *model_args, **model_kwargs) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/wav2vec2/modeling_wav2vec2.py", line 976, in __init__ File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/wav2vec2/modeling_wav2vec2.py", line 976, in __init__ self.wav2vec2 = Wav2Vec2Model(config)self.wav2vec2 = Wav2Vec2Model(config) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/wav2vec2/modeling_wav2vec2.py", line 782, in __init__ File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/wav2vec2/modeling_wav2vec2.py", line 782, in __init__ self.encoder = Wav2Vec2EncoderStableLayerNorm(config) self.encoder = Wav2Vec2EncoderStableLayerNorm(config) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 197, in wrapper File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 197, in wrapper f(module, *args, **kwargs) f(module, *args, **kwargs) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/wav2vec2/modeling_wav2vec2.py", line 595, in __init__ File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/wav2vec2/modeling_wav2vec2.py", line 595, in __init__ self.pos_conv_embed = Wav2Vec2PositionalConvEmbedding(config) self.pos_conv_embed = Wav2Vec2PositionalConvEmbedding(config) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 197, in wrapper File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 197, in wrapper f(module, *args, **kwargs)f(module, *args, **kwargs) File 
"/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/wav2vec2/modeling_wav2vec2.py", line 200, in __init__ File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers-4.6.0.dev0-py3.7.egg/transformers/models/wav2vec2/modeling_wav2vec2.py", line 200, in __init__ self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/utils/weight_norm.py", line 105, in weight_norm self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/utils/weight_norm.py", line 105, in weight_norm WeightNorm.apply(module, name, dim)WeightNorm.apply(module, name, dim) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/utils/weight_norm.py", line 44, in apply File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/utils/weight_norm.py", line 44, in apply module.register_parameter(name + '_g', Parameter(norm_except_dim(weight, 2, dim).data)) module.register_parameter(name + '_g', Parameter(norm_except_dim(weight, 2, dim).data)) IndexError: IndexErrorDimension out of range (expected to be in range of [-1, 0], but got 2) : Dimension out of range (expected to be in range of [-1, 0], but got 2) Here is the command I executed in the terminal: deepspeed --include=“localhost:3,4” run_libri960.py –output_dir={output_dir} \ --num_train_epochs="30" \ --deepspeed={ds_config_dir} –per_device_train_batch_size=“4” –per_device_eval_batch_size=“4” –evaluation_strategy=“steps” –save_total_limit=“3” –save_steps=“2000” –eval_steps=“500” –logging_steps=“50” –learning_rate=“3e-5” –warmup_steps=“500” –model_name_or_path={model_name_or_path} \ --deepspeed={ds_config_dir} –preprocessing_num_workers=“32” –group_by_length –freeze_feature_extractor –logging_dir=${logging_dir} –gradient_accumulation_steps=“2” I’d appreciate it if you could reply to me! @patrickvonplaten @valhalla By the way,I tried using the DDP to solve the problem of uneven distribution of memory during multi-GPU training But I find it more likely to prompt OOM when using DDP, why is that?
0
huggingface
🤗Transformers
Append a linear layer on top of the vanilla Electra model
https://discuss.huggingface.co/t/append-a-linear-layer-on-top-of-the-vanilla-electra-model/5785
I’m using the ElectraModel.from_pretrained(‘google/electra-base-discriminator’) to train a multi label classification task. I would like to add a linear layer for the final hidden state logits. Below is a pseudocode example model = ElectraModel.from_pretrained('google/electra-base-discriminator') logit_layer = torch.nn.Linear(768, 4) ### below code is what I'm trying to figure out append_logit_layer = model.append(logit_layer) My main reason for needing to append this is so when I back propagate with torch SGD optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.8) The model.parameters() value will have my logit_layer gradient.
Hey @bennicholl, one idea would be to subclass ElectraModel, add your logit_layer to the init, and then override the forward method so the logit_layer is fed the outputs from the encoder (see the source code). There might be more elegant approaches, but I'm pretty sure this one should work.
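Building on that suggestion, here is a minimal sketch. It wraps ElectraModel in a small module rather than subclassing it directly, and the first-token pooling and the 4 output labels are assumptions taken from the question, not anything Electra prescribes:
import torch
from transformers import ElectraModel

class ElectraWithLogits(torch.nn.Module):
    def __init__(self, num_labels=4):
        super().__init__()
        self.electra = ElectraModel.from_pretrained("google/electra-base-discriminator")
        self.logit_layer = torch.nn.Linear(self.electra.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.electra(input_ids=input_ids, attention_mask=attention_mask)
        # last_hidden_state has shape (batch, seq_len, hidden); use the first token
        # as a pooled representation (ElectraModel has no pooler layer of its own).
        pooled = outputs.last_hidden_state[:, 0]
        return self.logit_layer(pooled)

model = ElectraWithLogits()
# model.parameters() now includes logit_layer.weight / logit_layer.bias,
# so the SGD step from the question will update them during backprop.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)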
0
huggingface
🤗Transformers
Getting random results with BERT
https://discuss.huggingface.co/t/getting-random-results-with-bert/5753
Hi, I have modified a BERT model a bit by adding small Linear layers between its layers; the only random part is the random initialization done for these layers, as below:
W = torch.nn.init.xavier_normal_(tensor, gain=math.sqrt(2))
I put these initializations where each layer is defined. I am getting a 3-4% difference between runs, and would really appreciate your help fixing this. Could you please advise on how I should handle initialization on top of a BERT model: should it all go inside _init_weights(), and does it make a difference whether it is done inside that function or elsewhere in the model? Hugging Face's run_glue.py fixes the random seeds at the top of the script; do I need to re-seed each time before the initialization? I am really struggling with this issue and appreciate your help a lot. @sgugger @stas
Hi, I can confirm the same issue also happens for the BERT model without any modifications; for this I ran it on MRPC for 3 epochs. Here are the two results:
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,542 >> epoch = 3.0
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,542 >> eval_average_metrics = 0.8355071710663065
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,542 >> eval_mem_cpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> eval_mem_cpu_peaked_delta = 1MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> eval_mem_gpu_peaked_delta = 264MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> mrpc_eval_accuracy = 0.8088
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> mrpc_eval_combined_score = 0.8355
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> mrpc_eval_f1 = 0.8622
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> mrpc_eval_loss = 0.5017
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> mrpc_eval_runtime = 0:00:00.31
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:01,543 >> mrpc_eval_samples_per_second = 656.083
and
[INFO|trainer_pt_utils.py:722] 2021-04-26 00:46:42,272 >> ***** test metrics *****
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> epoch = 3.0
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> eval_average_metrics = 0.8656245715069244
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> eval_mem_cpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> eval_mem_cpu_peaked_delta = 2MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> eval_mem_gpu_peaked_delta = 264MB
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> mrpc_eval_accuracy = 0.8431
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> mrpc_eval_combined_score = 0.8656
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> mrpc_eval_f1 = 0.8881
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> mrpc_eval_loss = 0.4185
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> mrpc_eval_runtime = 0:00:00.32
[INFO|trainer_pt_utils.py:727] 2021-04-26 00:46:42,272 >> mrpc_eval_samples_per_second = 623.473
This now looks to me like a library issue. @sgugger, I would really appreciate your comments on it. Thanks
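As a hedged aside, here is a minimal sketch of pinning down the custom-layer initialization from the first post; the Adapter name, layer sizes, and seed value are placeholders, and even with a fixed seed, run-to-run differences can still come from data shuffling order and non-deterministic CUDA kernels rather than from the initialization itself:
import math
import torch
from transformers import set_seed

set_seed(42)  # seeds Python's random, numpy, and torch (CPU and CUDA) in one call

class Adapter(torch.nn.Module):
    """Stand-in for the small Linear layers added between BERT layers."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = torch.nn.Linear(hidden_size, bottleneck)
        self.up = torch.nn.Linear(bottleneck, hidden_size)
        # With the seed fixed above, these initializations are reproducible,
        # because they run at construction time, before any training randomness.
        torch.nn.init.xavier_normal_(self.down.weight, gain=math.sqrt(2))
        torch.nn.init.xavier_normal_(self.up.weight, gain=math.sqrt(2))

adapter = Adapter()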
0
huggingface
🤗Transformers
Prohibit GPT-2 from generating some words on a condition
https://discuss.huggingface.co/t/prohibit-gpt-2-from-generating-some-words-on-a-condition/4823
Hello, I have added two special tokens ([ss], [se]) to the GPT-2 vocabulary and use it to generate sequences. I want to prohibit GPT-2 from generating some words after it has generated [ss] and until it generates [se]. For example, in the following sequence: some words of the sequence [ss] here I want some words not to be generated [se] other words of the sequence. In other words, between [ss] … [se] I want to exclude some words from the generation. As @deathcrush answered here, it is possible to prohibit GPT-2 from generating some vocabulary ids in general, but is it possible to do it conditionally (only between the tokens [ss] and [se])?
Could you check out this function for constrained prefix generation: Models — transformers 4.4.2 documentation
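The mechanism referenced there is presumably the prefix_allowed_tokens_fn argument of generate(). A rough sketch of how it could enforce the condition (the prompt, the banned word, and the bookkeeping are illustrative assumptions, not part of the original question):
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["[ss]", "[se]"]})
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))

ss_id = tokenizer.convert_tokens_to_ids("[ss]")
se_id = tokenizer.convert_tokens_to_ids("[se]")
banned_ids = set(tokenizer.encode(" forbidden"))   # placeholder for the words to exclude
all_ids = list(range(len(tokenizer)))
allowed_outside = all_ids                          # no restriction outside [ss] ... [se]
allowed_inside = [i for i in all_ids if i not in banned_ids]

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # Walk the generated prefix and track whether the last [ss] is still open.
    inside = False
    for tok in input_ids.tolist():
        if tok == ss_id:
            inside = True
        elif tok == se_id:
            inside = False
    return allowed_inside if inside else allowed_outside

prompt = tokenizer("Some prompt [ss]", return_tensors="pt").input_ids
out = model.generate(
    prompt,
    max_length=50,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
)
print(tokenizer.decode(out[0]))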
0
huggingface
🤗Transformers
RobertaTokenizer: How to enable masking of custom special tokens
https://discuss.huggingface.co/t/robertatokenizer-how-to-enable-masking-of-custom-special-tokens/5737
Hi! I am trying to include some of my vocabulary as special tokens in RobertaTokenizer, but have noticed it does not flag them properly in the special tokens mask used for the MLM objective:
tokenizer = RobertaTokenizer.from_pretrained(args.tokenizer_path, additional_special_tokens=["[SPECIAL_TOK]"])
tokenizer.all_special_ids → [0, 2, 3, 2, 1, 0, 4, 32000]
tokenizer("A test [SPECIAL_TOK] now", return_special_tokens_mask=True) → {'input_ids': [0, 107, 320, 32000, 37, 2], 'special_tokens_mask': [1, 0, 0, 0, 0, 1], 'attention_mask': [1, 1, 1, 1, 1, 1]}
I expect 'special_tokens_mask' to be [1, 0, 0, 1, 0, 1]. Do I just need to override the masked-LM data collator so it skips my custom special tokens, or why is this happening? For context, I trained a custom BPE with the module:
from tokenizers.implementations import ByteLevelBPETokenizer
and I set special_tokens there to be atomic. I also do not want these tokens to be masked/predicted when training my LM.
I think it may be that the term special_tokens is just overloaded in HuggingFace, and that mask only covers <s> and </s>.
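If that is indeed the case, a possible workaround (a sketch, under the assumption that your data collator, e.g. DataCollatorForLanguageModeling in recent versions, reuses a precomputed special_tokens_mask when it is present in the features; older versions may require subclassing the collator instead) is to build the mask yourself from all_special_ids:
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained(
    "path/to/your/tokenizer",  # placeholder for args.tokenizer_path from the question
    additional_special_tokens=["[SPECIAL_TOK]"],
)

def build_special_tokens_mask(input_ids, tokenizer):
    # Flag every id that belongs to *any* special token, including the
    # additional_special_tokens, so the MLM collator never masks or predicts them.
    special_ids = set(tokenizer.all_special_ids)
    return [1 if tok_id in special_ids else 0 for tok_id in input_ids]

encoding = tokenizer("A test [SPECIAL_TOK] now")
encoding["special_tokens_mask"] = build_special_tokens_mask(encoding["input_ids"], tokenizer)
# -> [1, 0, 0, 1, 0, 1] for the example in the question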
0
huggingface
🤗Transformers
Use two sentences as inputs for sentence classification
https://discuss.huggingface.co/t/use-two-sentences-as-inputs-for-sentence-classification/5444
For example, we have a data source with three columns:
column_a: text data which describes one feature
column_b: text data which describes another feature
column_c: category/label
If I have to approach this kind of text classification problem with BERT, how can we pass column_a and column_b as inputs to the BERT model? Is there a way to concatenate the two sentences using the separator token, or a way to achieve this using the encode_plus method? Any help is appreciated!
Not an expert (and far from being one) but I’m interested in what you have tried so far. If I wouldn’t get good results using simple methods (let’s say by just concatenating the two columns) I’d try having an ensemble of two BERT models where one receives column_a and the other receives column_b. Let’s say one column is an expert opinion about the medical condition of a patient, and the other is the patient’s opinion on his medical condition. Then to me it makes sense to have two models one fine-tuned to the expert opinion and the other fine-tuned to the patient’s opinions. Hopefully I didn’t say too much nonsense
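On the simple route the question asks about: the tokenizer already supports sentence pairs, so no manual concatenation with separator tokens is needed. A small sketch (the model name, column contents, and number of labels are placeholders):
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# Passing the two columns as a text pair yields "[CLS] column_a [SEP] column_b [SEP]"
# plus token_type_ids, so BERT can tell the two segments apart.
enc = tokenizer(
    "text from column_a for one row",
    "text from column_b for the same row",
    truncation=True,
    padding="max_length",
    max_length=128,
    return_tensors="pt",
)

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=3,  # placeholder: number of distinct categories in column_c
)
outputs = model(**enc)  # outputs.logits has shape (1, num_labels)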
0
huggingface
🤗Transformers
Mixed precision for bfloat16-pretrained models
https://discuss.huggingface.co/t/mixed-precision-for-bfloat16-pretrained-models/5315
As bfloat16 hardware support is becoming more available, there is an emerging trend of training in bfloat16, which leads to the issue of not being able to finetune such models in mixed precision (or eval in fp16), be it amp, apex or deepspeed/fairscale.
Last week I spent some time sitting with the NaN issues reported in t5/mt5 (and pegasus apparently too), watching the activation values: [T5/MT5] resolve inf/nan under amp (mixed precision) by stas00 · Pull Request #10956 · huggingface/transformers · GitHub, and studying the numerical qualities of bfloat16 vs float16: ml-ways/bfloat16-vs-float16-study.ipynb at master · stas00/ml-ways · GitHub
So my conclusion/understanding is this: since bfloat16 has very little precision to work with, it basically compensates and trains itself to use huge numbers, so rather than having small activation values it operates in the 1e5 - 1e10+ range, which is beyond the 64K limit float16 can handle and thus overflows (inf), which then immediately leads to nan (see my notebook for how inf/nan comes about). To make things worse, bfloat16's huge number range has huge gaps with no numbers in them:
torch.tensor(283, dtype=torch.bfloat16)*10 # 2848 instead of 2830!
so it trains to compensate for that handicap as well. And when float16 comes around, which has much smaller gaps, it obviously won't produce the same results. See my notebook for a demo of the gaps.
Ideally there should be some simple transform that could take the weights trained in bfloat16 and convert them to the numerical domain of float16. A naive approach could be to divide everything by ~100000 to shift to a different effective range. But because the training is non-linear, I can't see how this would be possible, other than via some DNN trained for such a transform.
As you can see from the PR, some workarounds may work, but it's hard to keep the numbers in check when the model wants to constantly operate in a range float16 wasn't designed for. A user already reported NaNs after a 3h training with this PR, but hasn't shared a way to reproduce yet.
@sshleifer suggested here that perhaps finetuning with a penalty for large activations could do the trick. It's unclear how much of such finetuning it'd take, since the need is to lower the weights by several orders of magnitude so that the activations and accumulated math operations don't break the 64K barrier.
So currently t5/mt5/pegasus models are affected, but I'm sure there will be more emerging as new hardware supporting bfloat16 quickly arrives, so we will have to deal with this a lot more very soon, I believe. Of course, if we wait long enough, mixed precision will move to fp32/bf16 or may not be needed anymore.
If some of you have experimented with such bf16-to-fp16 finetuning and had good results, please do share. It's possible that if a solid approach is found, we will need to make a second set of these models whose weights are finetuned for fp16. Thank you.
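A quick, self-contained illustration of the two effects described above (the exact printed values depend on rounding, but the overflow and the coarse grid are the point):
import torch

# float16 tops out around 65504, so activations in the 1e5+ range overflow to inf:
print(torch.tensor(1e5, dtype=torch.float16))   # tensor(inf, dtype=torch.float16)
print(torch.tensor(1e5, dtype=torch.bfloat16))  # finite (99840.), but on a coarse grid

# That coarse grid is also what produces the 283 * 10 example above:
print(torch.tensor(283.0, dtype=torch.bfloat16) * 10)  # tensor(2848., dtype=torch.bfloat16)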
And it looks like GPT-Neo just added itself to the group of bfloat16-pretrained models: "FP16 overflow with GPT-Neo when using sequence lengths of 2048" (github.com/huggingface/transformers, opened Apr 6, 2021 by LouisCastricato).
0
huggingface
🤗Transformers
Run_mlm.py cuda error memory after resuming a training
https://discuss.huggingface.co/t/run-mlm-py-cuda-error-memory-after-resuming-a-training/5549
Hello, I am trying to pretrain an XLM-RoBERTa model, but I ran into an issue. I trained the model for a few thousand steps and got a checkpoint. Then I wanted to continue the pretraining from that checkpoint, but got a CUDA out-of-memory error after a few steps. I wonder if there is a leak or something like that? The memory used during the first pretraining run was around 15.x GB out of 16.2 GB, so I don't quite understand what's going on.
I'm running into the same issue but with the mBART model. For some reason, running training from scratch with the Seq2SeqTrainer works just fine, but resuming from a checkpoint exceeds the memory limit and produces a CUDA 'out of memory' error. I think it might be related to this issue on the GitHub repository. @sshleifer I think this is another issue with training large models, as we discussed here, although this one just seems to be a bug in the Trainer.
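While the root cause is being tracked down, one generic PyTorch-level mitigation that sometimes helps with resume-time memory spikes (this is not a Trainer feature; the checkpoint path and the tiny model are placeholders, and it assumes you are driving the resume logic yourself) is to load the checkpointed optimizer state onto the CPU first:
import torch

model = torch.nn.Linear(4, 4).cuda()                      # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# map_location="cpu" avoids materializing a second full copy of the optimizer state
# on the GPU while the live optimizer still holds its own tensors; load_state_dict
# then moves the loaded state onto each parameter's device as needed.
state = torch.load("checkpoint-5000/optimizer.pt", map_location="cpu")  # placeholder path
optimizer.load_state_dict(state)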
0