Dataset columns: docs (string, 4 classes); category (string, 3–31 chars); thread (string, 7–255 chars); href (string, 42–278 chars); question (string, 0–30.3k chars); context (string, 0–24.9k chars); marked (int64, 0 or 1)
huggingface
🤗Transformers
Few shot text generation with T5 transformers like GPT-3
https://discuss.huggingface.co/t/few-shot-text-generation-with-t5-transformers-like-gpt-3/3120
Hi HF team, in an interesting exploration I tried the T5 transformer for few-shot text generation, just like GPT-3. The results are impressive. Thought you might be interested in checking https://towardsdatascience.com/poor-mans-gpt-3-few-shot-text-generation-with-t5-transformer-51f1b01f843e?sk=5bb8f20ee72d55a91155289535ab26c5
This looks impressive! Thanks for sharing
0
huggingface
🤗Transformers
Pre-training a language model on a large dataset
https://discuss.huggingface.co/t/pre-training-a-language-model-on-a-large-dataset/790
Hi, I’m getting a memory error when I run the example code for language modeling. I’m interested in pre-training a RoBERTa model using a 25GB text data on a virtual machine with a v3-8 TPU on Google Cloud Platform. I’m using the following command with transformers/examples/xla_spawn.py and transformers/examples/run_language_modeling.py. python xla_spawn.py --num_cores 8 \ run_language_modeling.py \ --output_dir=[*****] \ --config_name=[*****] \ --tokenizer_name=[*****] \ --do_train \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 128 \ --learning_rate 6e-4 \ --weight_decay 0.01 \ --adam_epsilon 1e-6 \ --adam_beta1 0.9 \ --adam_beta2 0.98 \ --max_steps 500_000 \ --warmup_steps 24_000 \ --save_total_limit 5 \ --save_steps=100_000 \ --block_size=512 \ --train_data_file=[*****] \ --mlm \ --line_by_line When I run this, I get the following error. 08/20/2020 15:21:07 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at [*****] Traceback (most recent call last): File "xla_spawn.py", line 72, in <module> main() File "xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn start_method=start_method) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes while not context.join(): File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 108, in join (error_index, name) Exception: process 0 terminated with signal SIGKILL It looks like the script gets killed while it’s loading the training data here 1. with open(file_path, encoding="utf-8") as f: lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())] When I run the above block of code separately with transformers/examples/xla_spawn.py, I get an error. Traceback (most recent call last): File "xla_spawn.py", line 72, in <module> main() File "xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn start_method=start_method) File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes while not context.join(): File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 108, in join (error_index, name) Exception: process 0 terminated with signal SIGKILL When I run the above block of code separately using n1-highmem-16 (16 vCPUs, 104 GB memory) without TPU, I still get an error. Traceback (most recent call last): File "debug_load.py", line 7, in <module> lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())] File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/codecs.py", line 321, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) MemoryError Has anyone successfully reproduced the original RoBERTa model or pretrained a language model with a large dataset using Huggingface’s transformers (with TPU)? If so, what are the specifications of your machine? Has this code (transformers/examples/run_language_modeling.py) tested on a large dataset?
Same problem here. It seems that run_language_modeling.py is not able to deal with very large files. Any help?! @valhalla @lhoestq Thanks
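One hedged workaround (an assumption, not the official fix): the crash comes from LineByLineTextDataset reading the whole file into memory with f.read().splitlines(), so a dataset that streams the file line by line avoids the memory spike. The class below is a hypothetical sketch, not part of transformers:

```python
import torch
from torch.utils.data import IterableDataset

class LazyLineDataset(IterableDataset):
    """Hypothetical lazy replacement for LineByLineTextDataset:
    yields tokenized lines one at a time instead of loading the
    whole multi-GB file into memory."""

    def __init__(self, file_path, tokenizer, block_size):
        self.file_path = file_path
        self.tokenizer = tokenizer
        self.block_size = block_size

    def __iter__(self):
        with open(self.file_path, encoding="utf-8") as f:
            for line in f:                       # one line in memory at a time
                line = line.strip()
                if line:
                    enc = self.tokenizer(
                        line, truncation=True, max_length=self.block_size
                    )
                    yield torch.tensor(enc["input_ids"])
```

With an IterableDataset the data collator and DataLoader still work as usual; only random shuffling across the whole corpus is lost.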
0
huggingface
🤗Transformers
[Announcement] GenerationOutputs: Scores, Attentions and Hidden States now available as outputs to generate
https://discuss.huggingface.co/t/announcement-generationoutputs-scores-attentions-and-hidden-states-now-available-as-outputs-to-generate/3094
This PR: https://github.com/huggingface/transformers/pull/9150 added GenerationOutputs, similar to ModelOutputs, for PyTorch Transformers. => Check out the corresponding tweet with an example: https://twitter.com/SimonBrandeis/status/1346858472000937984 . The community has been asking for this feature for quite some time, see https://github.com/huggingface/transformers/issues/7654 https://github.com/huggingface/transformers/issues/8656 https://github.com/huggingface/transformers/issues/3891 When setting return_dict_in_generate, the PyTorch .generate() method now returns GenerationOutputs that are very similar in style to ModelOutputs. The usual generation output_ids can then be accessed under outputs["sequences"]. For all “non-beam search” generations (greedy_search and sample), one now has access to: all attentions of every layer at every generation step (be careful, memory might blow up here) if output_attentions=True; all hidden_states of every layer at every generation step if output_hidden_states=True; and scores if output_scores=True. The scores correspond to the processed logits, i.e. the model’s LM head output after applying all processing functions (like top_p, top_k or repetition_penalty) at every generation step. For all “beam_search” methods: all attentions and all hidden_states of every layer at every generation step if output_attentions and output_hidden_states are set to True; scores now correspond to all processed LM head logits plus the current beam_scores for each output token. This score (next_token_scores + beam_scores) is the most important score at every generation step, so we decided to output it. sequence_scores: in addition to the three outputs above, for beam search output_scores=True also returns the final “beam score” for each returned sequence (see https://twitter.com/SimonBrandeis/status/1346858472000937984 ). This should make it easier to analyze the generation of transformer models and should also allow the user to build “confidence” graphs from the scores and sequence_scores. For more in-detail information please check out the docs: https://huggingface.co/transformers/master/internal/generation_utils.html#generate-outputs We would be very happy for some feedback from the community on whether GenerationOutputs meets expectations.
This might be interesting as well: Generation Probabilities: How to compute probabilities of output scores for GPT2 🤗Transformers Now that it is possible to return the logits generated at each step, one might wonder how to compute the probabilities for each generated sequence accordingly. The following code snippet showcases how to do so for generation with do_sample=True for GPT2: import torch from transformers import AutoModelForCausalLM from transformers import AutoTokenizer gpt2 = AutoModelForCausalLM.from_pretrained("gpt2", return_dict_in_generate=True) tokenizer = AutoTokenizer.from_pretrained("gpt2") input_ids …
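The quoted snippet above is cut off by the forum preview; below is a self-contained sketch (my own reconstruction, not the original post) of turning the scores returned by generate with do_sample=True into per-token probabilities:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Today is a nice day", return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=20,
    return_dict_in_generate=True,
    output_scores=True,
)

# outputs.scores is a tuple with one (batch, vocab) tensor per generated step;
# outputs.sequences contains the prompt followed by the generated tokens
gen_tokens = outputs.sequences[:, input_ids.shape[-1]:]
probs = torch.stack(outputs.scores, dim=1).softmax(-1)            # (batch, steps, vocab)
token_probs = torch.gather(probs, 2, gen_tokens[:, :, None]).squeeze(-1)
print(token_probs)  # probability of each sampled token at its generation step
```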
0
huggingface
🤗Transformers
Problem while uploading a file
https://discuss.huggingface.co/t/problem-while-uploading-a-file/565
I am trying to upload our BERT model for the Stack Overflow domain. I have used the transformers-cli upload command and I can access the uploaded files and see them. However, on the https://huggingface.co/ web page I keep getting errors that config.json is not found. Any help is really appreciated. Thanks.
Maybe @julien-c has a solution.
0
huggingface
🤗Transformers
Question About Attention Score Computation & Intuition
https://discuss.huggingface.co/t/question-about-attention-score-computation-intuition/3141
When it comes to transformers, the Query and Key matrices are what determine the attention scores. There is a nice visual in Jay Alammar’s blog post on transformers that illustrates how attention scores are computed (the self-attention softmax figure). As you can see, the attention score depends solely on the q_i and k_j vectors multiplied together, with no additional parameters. However, each of these two vectors is calculated through a linear layer whose input is the word embedding (+positional) of just one word. My question is: how can the network assign attention scores meaningfully if q and k are computed without looking at any part of the sentence other than their corresponding word? How can the network produce k and q vectors that, when multiplied, represent a meaningful attention score if k and q are computed based on a single word embedding? Let’s say I want to process this sentence: “The man ate the apple; it didn’t taste good.” When calculating the attention scores for the word ‘it’, how would the model know to assign a higher attention score to ‘apple’ (it refers to the apple) than to ‘man’ or basically any other word? The model had no way of understanding the context of the sentence because q and k are calculated solely based on the embedding of one word and not the sentence as a whole. q for ‘it’ is computed from that single word’s embedding, and the same goes for k for ‘apple’. The two vectors are then multiplied to get the attention score. Wouldn’t this mean that if the two words were present in a different sentence but at the same distance, the attention score between the two would be identical in the second sentence? What makes sense to me is the classic approach to attention models. Look at the visual from Andrew Ng’s deep learning specialization: there, the attention scores are calculated using the hidden states at each timestep. The hidden states are calculated with FC layers in a bidirectional RNN. In other words, a hidden state at a certain timestep is influenced by the words that come before and after it, so it makes sense that the model is able to calculate attention scores there.
Hi @rezhv, I think your confusion comes from thinking that the words are fed one by one. You’re exactly right that there could be no meaningful attention score when computing based on a single word embedding. Please refer to Chapter 9.4 in the newly updated https://web.stanford.edu/~jurafsky/slp3/9.pdf. Best of luck, Chris
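To make the reply concrete, here is a toy sketch (random weights, illustration only) of why per-token q/k projections still give context-dependent attention: the softmax row already mixes every pair of tokens, and from the second layer onward q and k are computed from representations that have mixed the whole sentence.

```python
import torch

# toy self-attention for a 5-token sentence
d = 8
x = torch.randn(5, d)                         # embeddings (+ positions) of the 5 tokens
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))

q, k, v = x @ W_q, x @ W_k, x @ W_v           # each row depends on ONE token only
scores = q @ k.T / d ** 0.5                   # (5, 5): every pair of tokens interacts here
attn = scores.softmax(dim=-1)
out = attn @ v                                # row i now mixes information from all tokens

# In the next layer, q and k are computed from `out`, which is already contextualized,
# so deeper layers can attend based on sentence-level context rather than single words.
print(out.shape)
```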
0
huggingface
🤗Transformers
Improvements with SWA
https://discuss.huggingface.co/t/improvements-with-swa/858
Has anyone tried SWA with transformers? It would be interesting to see how much gain it provides over just AdamW. I’m thinking of adding it to the Trainer if there are significant gains, since PyTorch natively supports it.
SWA is a wrapper around any optimizer, so you could try it with AdamW. Fair warning though: when using it in fastai, I have never been able to reproduce any results of the original paper or found that it actually helped with anything (maybe why it took two years to get it into PyTorch). It’s also supposed to work best with a cyclical schedule (where you average the weights at the end of the cycles), which is not used in transformers.
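For reference, a minimal sketch of "SWA as a wrapper around AdamW" using PyTorch's built-in utilities (torch.optim.swa_utils, PyTorch >= 1.6); model, train_loader and num_epochs are assumed to exist, and batches are assumed to be dicts that include labels so .loss is populated:

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
swa_model = AveragedModel(model)        # keeps a running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=1e-5)
swa_start = 2                           # start averaging after this epoch (arbitrary choice)

for epoch in range(num_epochs):
    for batch in train_loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    if epoch >= swa_start:
        swa_model.update_parameters(model)   # fold the current weights into the average
        swa_scheduler.step()

# transformers models use LayerNorm, so the usual update_bn() pass for BatchNorm
# statistics is generally unnecessary; evaluate with swa_model.module afterwards.
```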
0
huggingface
🤗Transformers
Issues running seq2seq distillation
https://discuss.huggingface.co/t/issues-running-seq2seq-distillation/3075
Hello, I didn’t see these errors earlier when I ran seq2seq distillation last year, however the below script run from transformers/examples/research_projects/seq2seq-distillation gives me a couple of issues. python distillation.py \ --teacher google/t5-large-ssm-nq --data_dir $NQOPEN_DIR \ --tokenizer_name t5-large \ --student_decoder_layers 6 --student_encoder_layers 6 \ --freeze_encoder --freeze_embeds \ --learning_rate=3e-4 \ --do_train \ --gpus 4 \ --do_predict \ --fp16 --fp16_opt_level=O1 \ --val_check_interval 0.1 --n_val 500 --eval_beams 2 --length_penalty=0.5 \ --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \ --model_name_or_path IGNORED \ --alpha_hid=3. \ --train_batch_size=2 --eval_batch_size=2 --gradient_accumulation_steps=2 \ --sortish_sampler \ --num_train_epochs=6 \ --warmup_steps 500 \ --output_dir distilled_t5_sft \ --logger_name wandb \ "$@" Issues: I get the following warning at the beginning: Epoch 0: 0%| | 2/12396 [00:00<1:20:46, 2.56it/s, loss=nan, v_num=xyme]/home/sumithrab/miniconda3/envs/distill/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:131: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " Wandb does not show learning curves. I get the following warning: Epoch 0: 1%| | 99/12396 [00:27<56:02, 3.66it/s, loss=5.55e+04, v_num=xyme]wandb: WARNING Step must only increase in log calls. Step 98 < 100; dropping {'loss': 56946.98828125, 'ce_loss': 24.933889389038086, 'mlm_loss': 9.203145980834961, 'hid_loss_enc': 951.351318359375, 'hid_loss_dec': 18023.71484375, 'tpb': 42, 'bs': 2, 'src_pad_tok': 2, 'src_pad_frac': 0.05882352963089943}. Any ideas you may have would be very helpful.
Hi @sbhaktha, you can safely ignore this warning. Check whether wandb is authorized; you should be able to see logs after the first validation run.
0
huggingface
🤗Transformers
[Blenderbot] Getting runtime error while using generate
https://discuss.huggingface.co/t/blenderbot-getting-runtime-error-while-using-generate/3127
While using the BlenderBot small model, I got a peculiar error saying: RuntimeError: probability tensor contains either inf, nan or element < 0. I have been using the generate function for many days, but never had this issue. Below is the code I used: model = BlenderbotForConditionalGeneration.from_pretrained('facebook/blenderbot-90M') tokenizer = BlenderbotSmallTokenizer.from_pretrained('facebook/blenderbot-90M') outputs = model.generate(input_ids=encoded, max_length=20, do_sample=True, temperature=1.5, top_k=150, top_p=0.99)
Hi @parvej, could you also post the input (the encoded variable in your snippet) so that we can reproduce it?
0
huggingface
🤗Transformers
Seq2Seq-Example does not work on Azure
https://discuss.huggingface.co/t/seq2seq-example-does-not-work-on-azure/3071
Hi community, we use transformers to generate summaries (seq2seq) for finance articles. Therefore we use the model: facebook/bart-large-cnn The generated summaries are pretty good. In the next step we want to finetune this model. Based on the examples 4 on github we want to run the finetune in the Azure cloud with AzureML. This is the part where we have problems. We use the following snippet to run the finetune: dataset_input = Dataset.File.from_files(path=(datastore, 'datasets/wmt_en_ro')) config = ScriptRunConfig(source_directory='transformers/examples/seq2seq', script='seq2seq_trainer.py', compute_target='gpu-cluster', arguments=['--learning_rate', 3e-5, '--gpus', 1, '--num_train_epochs', 4, '--data_dir', dataset_input.as_mount(), '--output_dir', 'outputs', '--model_name_or_path', 'facebook/bart-large-cnn']) # set up pytorch environment env = Environment.from_pip_requirements(name='transformers-env', file_path='transformers/examples/seq2seq/requirements.txt') # install local (forked) transformers package whl_url = Environment.add_private_pip_wheel(workspace=ws, file_path=retrieve_whl_filepath(), exist_ok=True) env.python.conda_dependencies.add_pip_package(whl_url) env.python.conda_dependencies.add_pip_package('azureml-sdk') env.python.conda_dependencies.add_pip_package('torch') env.python.conda_dependencies.add_pip_package('torchvision') env.docker.enabled = True env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn7-ubuntu18.04' config.run_config.environment = env run = experiment.submit(config) In Azure we see the following logs: [2021-01-05T09:47:48.676635] Entering context manager injector. [context_manager_injector.py] Command line Options: Namespace(inject=['ProjectPythonPath:context_managers.ProjectPythonPath', 'RunHistory:context_managers.RunHistory', 'TrackUserError:context_managers.TrackUserError'], invocation=['seq2seq_trainer.py', '--learning_rate', '3E-05', '--gpus', '1', '--num_train_epochs', '4', '--data_dir', '/tmp/tmpzuh7cbpc', '--output_dir', 'outputs', '--model_name_or_path', 'facebook/bart-large-cnn']) Script type = None Starting the daemon thread to refresh tokens in background for process with pid = 121 Entering Run History Context Manager. [2021-01-05T09:47:51.702678] Current directory: /mnt/batch/tasks/shared/LS_root/jobs/sumurai-ml/azureml/transformers-example-finetune_1609838617_863b0dcf/mounts/workspaceblobstore/azureml/transformers-example-finetune_1609838617_863b0dcf [2021-01-05T09:47:51.702779] Preparing to call script [seq2seq_trainer.py] with arguments:['--learning_rate', '3E-05', '--gpus', '1', '--num_train_epochs', '4', '--data_dir', '/tmp/tmpzuh7cbpc', '--output_dir', 'outputs', '--model_name_or_path', 'facebook/bart-large-cnn'] [2021-01-05T09:47:51.702858] After variable expansion, calling script [seq2seq_trainer.py] with arguments:['--learning_rate', '3E-05', '--gpus', '1', '--num_train_epochs', '4', '--data_dir', '/tmp/tmpzuh7cbpc', '--output_dir', 'outputs', '--model_name_or_path', 'facebook/bart-large-cnn'] [2021-01-05T09:47:53.903691] Reloading <module '__main__' from 'seq2seq_trainer.py'> failed: module __main__ not in sys.modules. Starting the daemon thread to refresh tokens in background for process with pid = 121 [2021-01-05T09:47:54.034286] The experiment completed successfully. Finalizing run... Cleaning up all outstanding Run operations, waiting 900.0 seconds 1 items cleaning up... Cleanup took 0.05184674263000488 seconds [2021-01-05T09:47:54.508286] Finished context manager injector. 
The output directory is not created because of the following error: [2021-01-05T09:47:53.903691] Reloading <module '__main__' from 'seq2seq_trainer.py'> failed: module __main__ not in sys.modules. This is our situation. We are clueless and hope for some help; we don't know why this error occurs. First of all: is this the right way to run the seq2seq fine-tuning in the cloud, or is there a better way? Does somebody have any idea why this error occurs? Setting: transformers 4.1.1, AzureML Docker base image mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.2-cudnn7-ubuntu18.04, using a GPU setup. Thanks in advance. Regards, Florian
Hi @Fl0w, I’m not familiar with Azure, but from what I can see, I think the script argument expects the path of the training script. seq2seq_trainer.py contains the trainer class; it’s not a training script. finetune_trainer.py is the training script, so I think passing that might fix this.
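A hedged sketch of that change applied to the submission code above (same ScriptRunConfig, but with script pointing at finetune_trainer.py; the exact argument set should be checked against that script's CLI, and datastore is assumed to exist as in the original snippet):

```python
from azureml.core import Dataset, ScriptRunConfig

dataset_input = Dataset.File.from_files(path=(datastore, 'datasets/wmt_en_ro'))

# point `script` at the actual training entry point, not the trainer-class module
config = ScriptRunConfig(
    source_directory='transformers/examples/seq2seq',
    script='finetune_trainer.py',
    compute_target='gpu-cluster',
    arguments=['--learning_rate', 3e-5,
               '--num_train_epochs', 4,
               '--data_dir', dataset_input.as_mount(),
               '--output_dir', 'outputs',
               '--model_name_or_path', 'facebook/bart-large-cnn',
               '--do_train'])
```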
0
huggingface
🤗Transformers
Instantiating TransfoXLTokenizer using existing vocab dict
https://discuss.huggingface.co/t/instantiating-transfoxltokenizer-using-existing-vocab-dict/2510
Hello everyone, I’ve been experimenting with several examples to try and grok how to train a Transformer-XL model from scratch for my own text generation use case and was looking for some guidance. I’m currently stuck on how to properly load my existing vocabulary, which is a Python dictionary saved in pickle format. Does someone have an example of creating a TransfoXLTokenizer using a preexisting vocabulary?
Found it: all you have to do is instantiate a TransfoXLTokenizer and pass it a vocab file where each “word” in your vocabulary is on its own line.
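A minimal sketch of that workaround, assuming the pickled object is a dict keyed by vocabulary words (file names are hypothetical):

```python
import pickle
from transformers import TransfoXLTokenizer

# load the existing vocabulary dict
with open("my_vocab.pkl", "rb") as f:
    vocab = pickle.load(f)

# write one token per line, as the tokenizer expects
with open("vocab.txt", "w", encoding="utf-8") as f:
    for word in vocab:
        f.write(word + "\n")

tokenizer = TransfoXLTokenizer(vocab_file="vocab.txt")
print(tokenizer.tokenize("an example sentence"))
```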
0
huggingface
🤗Transformers
Rewriting generate function for manual decoder input
https://discuss.huggingface.co/t/rewriting-generate-function-for-manual-decoder-input/3034
Hello everybody, I am trying to reproduce the generate function of the GenerationMixin class to be able to give manual decoder input. I am using transformers v4.1.1. While I get nice results using the greedy_search function, I am not managing to reproduce the beam_search one, since my RAM overflows. I do not have memory problems using generate. Hereafter is the code. I am not using any special decoder input for now, only model.config.bos_token_id. My plan was to check everything worked and change it afterwards. I have tested this with both bart-base and pegasus-large, with equal results. from transformers.generation_utils import GenerationMixin gm = GenerationMixin min_length = config.BULLETS_MIN_LEN max_length = config.BULLETS_MAX_LEN num_beams = 4 early_stopping = True no_repeat_ngram_size = 5 num_return_sequences = 1 model_kwargs = {} pad_token_id = model.config.pad_token_id eos_token_id = model.config.eos_token_id decoder_start_token_id = model.config.decoder_start_token_id bos_token_id = model.config.bos_token_id # encode the text input_ids = tokenizer.encode(text, return_tensors='pt') # prepare attention mask and encoder output model_kwargs["attention_mask"] = gm._prepare_attention_mask_for_generation( model, input_ids, pad_token_id, eos_token_id) if model.config.is_encoder_decoder: model_kwargs = gm._prepare_encoder_decoder_kwargs_for_generation(model, input_ids, model_kwargs) input_ids = gm._prepare_decoder_input_ids_for_generation( model, input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id, **model_kwargs) model_kwargs["use_cache"] = None logits_processor = gm._get_logits_processor( model, repetition_penalty=None, no_repeat_ngram_size=no_repeat_ngram_size, bad_words_ids=None, min_length=min_length, eos_token_id=None, prefix_allowed_tokens_fn=None, num_beams=num_beams, num_beam_groups=None, diversity_penalty=None) Using greedy_search: outputs = gm.greedy_search( model, input_ids, logits_processor=logits_processor, max_length=max_length, pad_token_id=pad_token_id, eos_token_id=eos_token_id, **model_kwargs) print(tokenizer.decode(outputs[0], skip_special_tokens = True)) And the results is equal to real_outputs = model.generate(tokenizer.encode( df_cc_group.iloc[0].text[0], return_tensors='pt'), min_length = min_length, max_length = max_length, no_repeat_ngram_size = no_repeat_ngram_size, num_beams = 1) print(tokenizer.decode(real_outputs[0], skip_special_tokens = True)) However, using beam_search my RAM overflows: from transformers import BeamSearchScorer batch_size = input_ids.shape[0] length_penalty = model.config.length_penalty early_stopping = early_stopping beam_scorer = BeamSearchScorer( batch_size=batch_size, max_length=max_length, num_beams=num_beams, device=model.device, length_penalty=length_penalty, do_early_stopping=early_stopping, num_beam_hyps_to_keep=num_return_sequences) input_ids, model_kwargs = gm._expand_inputs_for_generation( input_ids, expand_size=4, is_encoder_decoder=model.config.is_encoder_decoder, **model_kwargs) outputs = gm.beam_search( model, input_ids, beam_scorer, logits_processor=logits_processor, max_length=max_length, pad_token_id=pad_token_id, eos_token_id=eos_token_id, **model_kwargs) Thank you in advance for the help!
Hey @marcoabrate, The current version of generate (and also the one of v4.1.1.) already includes the possibility to provide user specific decoder_input_ids. You just have to add it to generate(). E.g. the following code works as expected from transformers import T5ForConditionalGeneration, T5TokenizerFast model = T5ForConditionalGeneration.from_pretrained("t5-small") tokenizer = T5TokenizerFast.from_pretrained("t5-small") input_ids = tokenizer("translate English to German: How are you?", return_tensors="pt").input_ids decoder_input_ids = tokenizer("<pad> Wie geht", return_tensors="pt", add_special_tokens=False).input_ids output = model.generate(input_ids, decoder_input_ids=decoder_input_ids, num_beams=4, num_return_sequences=4) print("With decoder_input_ids num_beams=4", tokenizer.batch_decode(output, skip_special_tokens=True)) output = model.generate(input_ids, num_beams=4, num_return_sequences=4) print("Without decoder_input_ids num_beams=4", tokenizer.batch_decode(output, skip_special_tokens=True)) Also see this notebook for the answers of this specific use case: https://colab.research.google.com/drive/11js9We6ZtjN15hb3-PoFZBXJrcSOo_Qa?usp=sharing 7 This feature was enabled by the generate refactor: Big `generate()` refactor 2 Does this correspond to what you were trying to achieve or are you looking for some other behavior?
0
huggingface
🤗Transformers
Parallelize model call for TFBertModel
https://discuss.huggingface.co/t/parallelize-model-call-for-tfbertmodel/3088
Hi folks! I am using a pretrained Bert (TFBertModel) in transformers to encode several batches of sentences, with varying batch size. That is, I need to use Bert to encode a series of inputs, where each input has [n_sentences, 512] dimensionality (where 512 is the number of tokens). N_sentences can vary between 2 and 250 across inputs/examples. This is proving very time consuming: encoding each input/example takes several seconds, especially for larger values of n_sentences . Is there a(n easy) way to parallelize the model(input) call (where, again, input has dimensionality [n_sentences, 512]) in Google Colab’s TPU (or on GPUs), such that more than one sentence is encoded at once?
I’m not sure what you are asking. “Parallellism” is a term that is often used to spread models or inputs across different devices (multiple CPUs, GPUs, and/or TPUs). It doesn’t seem that that is what you are after. Rather, you seem to look for batched processing where you process multiple sentences at once. Then again, you say that you use batched inputs so that model(input) receives [n_sentences, 512] inputs. So you are already using batched data, effectively “encoding” multiple sentences at once. So again, I’m not sure what you are asking. Could you clarify?
0
huggingface
🤗Transformers
`run_qa.py` achieves much lower performance than the original BERT run_squad.py
https://discuss.huggingface.co/t/run-qa-py-achieves-much-lower-performance-than-the-original-bert-run-squad-py/3058
I have been trying to train a BERT model on the TyDi QA dataset (only Arabic questions) using the HF example script. While everything runs well and the model trains well, I noticed that when I train the same model, on the same data, with the same hyperparameters but using the run_squad.py script from the google/bert repo, I get much higher results (+10 exact match). I linked a Colab with both codes ready to run: https://colab.research.google.com/drive/1AHz4mpDBSea92MJVb-GhudGhFvO1VgJs?usp=sharing . I checked the evaluation scripts (from datasets and the official SQuAD script) and they both output the same scores. Hence the problem must be in either the preprocessing or the model (which I doubt).
The script has been tested on the SQuAD and SQuAD v2 datasets and gives (roughly) the same results as the previous one. The reason might be linked to the fast tokenizers’ offset mappings not really working with Arabic, maybe? That would be the thing I would check first. Maybe you could try to replicate the steps in this notebook on your dataset to check that all the preprocessing looks correct?
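A quick way to test that hypothesis (an illustrative check, not a known fix) is to print the offset mappings of a fast multilingual tokenizer on an Arabic string and verify that each token maps back to the right span of the original text:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased", use_fast=True)

text = "ما هي عاصمة فرنسا؟"   # hypothetical Arabic question
enc = tokenizer(text, return_offsets_mapping=True)

for token, (start, end) in zip(enc.tokens(), enc["offset_mapping"]):
    # special tokens have (0, 0) offsets; every other token should map
    # back to the character span it came from
    print(token, repr(text[start:end]))
```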
0
huggingface
🤗Transformers
How to train new token embedding to add to a pretrain model?
https://discuss.huggingface.co/t/how-to-train-new-token-embedding-to-add-to-a-pretrain-model/3083
Hello, I would like to take a pretrained model and only train new embeddings on a corpus, leaving the rest of the transformer untouched. Then, fine tuning on a task without changing the original embedding. Finally, swapping the embedding. All in all, how can I have control over only training the embeddings, leaving the embeddings untouched in training and swapping the embeddings of a model with the Hugging Face Transformer library ? This is to follow the following approach taken in this article 21: Pre-train a monolingual BERT (i.e. a transformer) in L1 with masked language modeling (MLM) and next sentence prediction (NSP) objectives on an unlabeled L1 corpus. Transfer the model to a new language by learning new token embeddings while freezing the transformer body with the same training objectives (MLM and NSP) on an unlabeled L2 corpus. Fine-tune the transformer for a downstream task using labeled data in L1, while keeping the L1 token embeddings frozen. Zero-shot transfer the resulting model to L2 by swapping the L1 token embeddings with the L2 embeddings learned in Step 2. Thank you !
Well, you answered your own question. You can freeze layers in PyTorch by setting requires_grad=False on a layer’s parameters; they will not be updated during training. You can then load the model, swap out the weights of the embedding layer with other learnt weights, and save the model again (in transformers you can use model.save_pretrained()). I am not sure how much help you need. If you need a step-by-step guide, I fear I do not have the time to help with that, but the above should help you a bit.
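A minimal sketch of step 2 of the recipe above (train only the token embeddings, freeze everything else); note that BERT ties the input and output embeddings, so the tied MLM decoder weights are trained along with them:

```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-cased")

# optionally resize to the new L2 vocabulary first, e.g.
# model.resize_token_embeddings(new_vocab_size)

# freeze everything, then unfreeze only the word embedding matrix
for param in model.parameters():
    param.requires_grad = False
for param in model.get_input_embeddings().parameters():
    param.requires_grad = True

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)   # should list only the word embedding weights
```

Later, swapping the L1 embeddings for the learned L2 ones can be done with model.set_input_embeddings(new_embedding_layer) before saving with save_pretrained().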
0
huggingface
🤗Transformers
Inverse T5 with output (instead of input) prefix
https://discuss.huggingface.co/t/inverse-t5-with-output-instead-of-input-prefix/3065
I want to train a T5 like model but I want to output different pieces of information from the same document representation created by the encoder. When doing prediction I would feed partial outputs to the decoder (name, age, etc.) and start autoregressive generation from there. But how to train such a model? Some examples: T5: Input: “name: I am George. I am 34 years old”. Output: “George”. Input: “age: I am George. I am 34 years old”. Output: “34”. Inverse T5: Input: “I am George. I am 34 years old”. Possible outputs: “name: George”, “age: 34”. Inverse T5 as a training dataset would look something like this: Input: “I am George. I am 34 years old”. Output: “name: George”. Input: “I am George. I am 34 years old”. Output: “age: 34”. Trivial solution would be to just train on such a dataset and only feed partial outputs ("name: ", "age: ") when predicting. I have doubts though that this would lead to good results. A better solution would be to also feed prefixes to the decoder at training time. Can I do that?
Hi @marton-avrios Your first input format might work, and in prediction instead of feeding partial output you could ask the model to generate whole text. Passing partial output is also possible. I had applied T5 to a somewhat similar problem, and it gave surprisingly good results.
0
huggingface
🤗Transformers
Sentiment Analysis keywords
https://discuss.huggingface.co/t/sentiment-analysis-keywords/3061
My objective is to understand the keywords or phrases responsible for a sentence being assigned a particular class or sentiment. Say if I use a BERT model for emotion analysis,is it possible to identify the words or phrases for a sentence having a particular sentiment and changing which would alter their sentiment or semantic essence/meaning. I wonder anyone could shed some light on this as to which layers could be exposed to identify those keywords or if there is any other method to do this.
It sounds to me like you’re trying to do style transfer from one sentiment to another. This is an active area of research currently. One of the most popular papers is the following, though it doesn’t use transformers: “Delete, Retrieve, Generate: a Simple Approach to Sentiment and Style Transfer”, Juncen Li, Robin Jia, He He, Percy Liang. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018 (ACL Anthology). And here is a paper that builds on those methods using transformers: https://arxiv.org/abs/2005.12086 . Best of luck
0
huggingface
🤗Transformers
How can I download my private model?
https://discuss.huggingface.co/t/how-can-i-download-my-private-model/2998
I subscribed to Hugging Face for a private model repository. I uploaded my model, but how can I use it? I can’t use it because when I pass the model name (e.g. kykim/bert-kor-large) for fine-tuning, the command line shows that it is not listed in huggingface.co/models. Thanks.
cc @julien-c
0
huggingface
🤗Transformers
[Beginner] ClassificationModel Running out of Memory, long training Epochs
https://discuss.huggingface.co/t/beginner-classificationmodel-running-out-of-memory-long-training-epochs/2984
Hi guys, I am new to Deep Learning and wanted to train a binary (sentiment) classification using SimpleTransformers. As a dataset I took Sentiment140 (1,6 Tweets 800k Positive, 800k Negative). The training itself works, but depending on the length of the dataset Google Colab crashes. If I divide the 1.6 million tweets into 1.28 million training and 0.32 million test data the model crashes after -> [2020-12-28 16:55:15,023] {classification_model.py:1147} INFO - Converting to features started. Cache is not used. 100% 1278719/1278719 [09:25<00:00, 2260.76it/s] (1) Is this normal? Now if I reduce the number to 800k training, 160k test data Google Colab does not crash, but one epoch takes 4 hours. (This number often works, sometimes 800k training-data also crashes as described above. When it gets to training, I don’t even know if it goes through - since an epoch lasts 4 hours, I’ve never run it through) I do not know how far you can compare the things, but in tensorflow i have trained a CNN, BiLSTM network on the entire data set and there an epoch took only 5 minutes, (2) does 4 hours make sense, or have I made a gross error? [2020-12-28 17:45:10,844] {classification_model.py:1147} INFO - Converting to features started. Cache is not used. 100% 800000/800000 [05:44<00:00, 2638.77it/s] Epoch 1 of 1: 0% 0/1 [00:00<?, ?it/s] Epochs 0/1. Running Loss. 0.6640: 0% 375/100000 [01:04<3:50:03, 7.19it/s] import torch torch.cuda.is_available() True model_type, model_name = 'roberta', 'roberta-base' model_args = { 'output_dir': 'outputs/', 'cache_dir': 'cache/', 'max_seq_length': 144, 'num_train_epochs': 1,#50 'learning_rate': 1e-3, 'adam_epsilon': 1e-8, "early_stopping_delta" : 1e-3, "early_stopping_patience" : 5, #5 'overwrite_output_dir': True, 'manual_seed' : True, 'silent' : SILENT } model = ClassificationModel(model_type=model_type, model_name=model_name, args=model_args, use_cuda=True, num_labels=2) I also tried to add 'eval_accumulation_steps' : 20 to my model_args, but it still crashed pre-training ty in advanced
Hey there, If the question is about SimpleTransformers then IMO it would be better to ask it on their issues or forum. We would be happy to help you here, but we are not familiar with SimpleTransformers
0
huggingface
🤗Transformers
AttributeError: ‘TrainOutput’ object has no attribute ‘metrics’ when finetune custom dataset
https://discuss.huggingface.co/t/attributeerror-trainoutput-object-has-no-attribute-metrics-when-finetune-custom-dataset/2970
Hi, I’m trying to fine-tune BERT on the SST-2 dataset using run_glue.py (https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py). I randomly picked 1% of the train data and 10 validation examples from the original SST-2 dataset and built them as .csv files with label and sentence columns. Here is my fine-tune setting. But an error occurred: Traceback (most recent call last): File "run_glue.py", line 432, in main() File "run_glue.py", line 356, in main metrics = train_result.metrics AttributeError: 'TrainOutput' object has no attribute 'metrics' I run all these scripts on Colab Pro.
Hi, could you post your transformers version? Also, to run the examples you should install transformers from source.
0
huggingface
🤗Transformers
Easy way to implement annealing temperature softmax
https://discuss.huggingface.co/t/easy-way-to-implement-annealing-temperature-softmax/2966
Hi friends, I wonder if there’s an easy way to implement an annealing-temperature softmax. For example, I want to change line 314 in modeling_bert.py from attention_probs = nn.Softmax(dim=-1)(attention_scores) to attention_probs = nn.Softmax(dim=-1)(attention_scores/temp), where “temp” is a variable that decays as training goes on, according to a scheduler like the learning rate. Thank you!
You can just do that change. Model files are kept completely independent from each other precisely so you can easily tweak them for experiments like this.
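A toy sketch of that tweak (not the actual modeling_bert.py code): keep a small scheduler object for the temperature, divide the attention scores by its current value, and step it alongside the learning-rate scheduler.

```python
import torch
import torch.nn as nn

class TemperatureScheduler:
    """Linearly anneal a softmax temperature from `start` to `end` over `total_steps`."""
    def __init__(self, start=2.0, end=1.0, total_steps=10_000):
        self.start, self.end, self.total_steps = start, end, total_steps
        self.step_num = 0

    def step(self):
        self.step_num = min(self.step_num + 1, self.total_steps)

    @property
    def temp(self):
        frac = self.step_num / self.total_steps
        return self.start + frac * (self.end - self.start)

temp_sched = TemperatureScheduler()
attention_scores = torch.randn(2, 8, 16, 16)                  # (batch, heads, seq, seq)
attention_probs = nn.Softmax(dim=-1)(attention_scores / temp_sched.temp)

# call temp_sched.step() once per optimizer step, next to lr_scheduler.step()
```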
0
huggingface
🤗Transformers
Model saving results in a small size checkpoint
https://discuss.huggingface.co/t/model-saving-results-in-a-small-size-checkpoint/3035
Hi community, I am trying to save a checkpoint of a custom model using model.save_pretrained() after each epoch. The output directory, however, will contain a config and a weights (.bin) file, and the weights file is only 98 MB no matter how big or small I make the config and the data size. At the same time, saving the model using torch.save() results in a reasonable .pth file of around 600 MB. Are there any requirements or extra parameters to use with the save_pretrained() function?
Hard to say anything without looking at the code. Could you post a small code snippet?
0
huggingface
🤗Transformers
Is there a way to return the “decoder_input_ids” from “tokenizer.prepare_seq2seq_batch”?
https://discuss.huggingface.co/t/is-there-a-way-to-return-the-decoder-input-ids-from-tokenizer-prepare-seq2seq-batch/2929
Using BART as an example … this: tokenizer.prepare_seq2seq_batch( src_texts=['This is a very short text', 'This is shorter'], tgt_texts=['very short', 'much shorter than very short']) returns … {'input_ids': [[100, 19, 3, 9, 182, 710, 1499, 1], [947, 19, 10951, 1, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0, 0, 0]], 'labels': [[182, 710, 1, 0, 0, 0], [231, 10951, 145, 182, 710, 1]]} For fine-tuning, how should we build the decoder_input_ids? And do we also need to shift the labels to the right so that they look like this? [[710, 1, 0, 0, 0, 0], [10951, 145, 182, 710, 1, 0]]
Or do we even have to pass decoder_input_ids anymore? Looking at this example for MT5, it looks like the answer is “no” … from transformers import MT5ForConditionalGeneration, T5Tokenizer model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small") tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien." summary = "Weiter Verhandlung in Syrien." batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], tgt_texts=[summary], return_tensors="pt") outputs = model(**batch) loss = outputs.loss This sure would make it easier if all we have to pass in are the “labels”, without having to deal with the decoder_input_ids ourselves when working with ConditionalGeneration models. Please let me know either way. Thanks
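For what it's worth, a minimal sketch with BART suggesting the same conclusion: passing only labels is enough, since the ...ForConditionalGeneration models build decoder_input_ids internally by shifting the labels to the right (behaviour of recent transformers versions; worth double-checking for your version):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["This is a very short text", "This is shorter"],
    tgt_texts=["very short", "much shorter than very short"],
    return_tensors="pt",
)

# no decoder_input_ids passed: the model derives them from `labels`
outputs = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["labels"])
print(outputs.loss)
```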
0
huggingface
🤗Transformers
Sampling with FSMTForConditionalGeneration
https://discuss.huggingface.co/t/sampling-with-fsmtforconditionalgeneration/2939
Hi, I’m looking to perform some English to German translation using FSMTForConditionalGeneration model (loading pretrained model from facebook/wmt19-en-de). The current documentation only mentions decoding using beam search, but I was wondering if it’s also possible to perform sampling for decoding? I read the blog post here https://huggingface.co/blog/how-to-generate 2 and it talks about sampling-based decoding with the generate method. But I wasn’t sure if the same is possible with FSMTForConditionalGeneration model. Can I also pass do_sample=True and temperature=<value> to FSMTForConditionalGeneration's generate method to perform sampling with specific temperature?
Yes, the generate method is common to all generative models, so you can do sampling with FSMT as well.
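A minimal sketch (the sampling parameter values are illustrative, not recommendations):

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-de"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

input_ids = tokenizer("Machine learning is great.", return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    do_sample=True,          # sample instead of beam search
    temperature=0.8,         # hypothetical value; tune for your use case
    top_k=50,
    num_return_sequences=3,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```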
0
huggingface
🤗Transformers
Recommend argument values for transformers.generation_utils.GenerationMixin.generate() for summarization and translation tasks?
https://discuss.huggingface.co/t/recommend-argument-values-for-transformers-generation-utils-generationmixin-generate-for-summarization-and-translation-tasks/2948
Is there a set of recommended argument values folks should pass to generate for summarization and for translation? I’m looking for values that generally work for those tasks, as well as values that are recommended for specific models. If any of this is documented somewhere I’d love to get the link(s) to such resources. Thanks
Hi @wgpubs, the values will be different for different models and tasks; you should try and see what works for you. Also, almost every pre-trained model comes with some default values, which you can find in its config file. For example, if you look at the bart-large-cnn model’s config, it uses num_beams=4 with no_repeat_ngram_size=3. Also, this blog post explains the different generation arguments and should help you pick values: https://huggingface.co/blog/how-to-generate
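A quick way to inspect those per-checkpoint defaults, as a sketch:

```python
from transformers import AutoConfig

# the generation defaults a checkpoint ships with live in its config
config = AutoConfig.from_pretrained("facebook/bart-large-cnn")
print(config.num_beams)              # 4 for this checkpoint
print(config.no_repeat_ngram_size)   # 3 for this checkpoint
print(config.max_length, config.min_length, config.length_penalty)
```

Anything not overridden in the generate() call falls back to these config values, so they are a reasonable starting point per model.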
0
huggingface
🤗Transformers
Unrecognized configuration class in mT5-small-finetuned-tydiqa-for-xqa
https://discuss.huggingface.co/t/unrecognized-configuration-class-in-mt5-small-finetuned-tydiqa-for-xqa/2916
Hi, I tried to run the multilingual question-answering with the mt5 model at https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa 3. Unfortunately I couldn’t and the following message appears: Unrecognized configuration class <class ‘transformers.models.t5.configuration_t5.T5Config’> for this kind of AutoModel: AutoModelForCausalLM. Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig. How can this be solved? Thanks!
Hi there, could you post the code snippet that raised this error?
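While waiting for the snippet, a hedged guess at the cause and fix: mT5 is an encoder-decoder model, so it should be loaded with AutoModelForSeq2SeqLM (or MT5ForConditionalGeneration) rather than AutoModelForCausalLM. A sketch (the prompt format is an assumption about how this checkpoint was fine-tuned):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "mrm8488/mT5-small-finetuned-tydiqa-for-xqa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)   # not AutoModelForCausalLM

text = ("question: Who wrote Don Quixote? "
        "context: Don Quixote was written by Miguel de Cervantes.")
input_ids = tokenizer(text, return_tensors="pt").input_ids
print(tokenizer.decode(model.generate(input_ids)[0], skip_special_tokens=True))
```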
0
huggingface
🤗Transformers
Valueerror “too many rows” with Tapas/TableQuestionAnswering pipeline - How to fix it?
https://discuss.huggingface.co/t/valueerror-too-many-rows-with-tapas-tablequestionanswering-pipeline-how-to-fix-it/2898
Hi guys! I wanted to query a dataframe via the "table-question-answering" pipeline. It works well with small dataframes; however, as soon as I import larger dataframes (e.g. with ~400 rows), I get the following issue: ValueError: "too many rows". Any idea what may be happening here? Thanks in advance, Charly
pinging @lysandre
0
huggingface
🤗Transformers
About the origin of the model category names in `AutoModelWithLMHead`
https://discuss.huggingface.co/t/about-the-origin-of-the-model-category-names-in-automodelwithlmhead/2911
Hello, I’d like to ask about where the model category names come from. In AutoModelWithLMHead class 1, the warning says we should use AutoModelForCausalLM, AutoModelForMaskedLM, or AutoModelForSeq2SeqLM instead of it. class AutoModelWithLMHead: r""" This is a generic model class that will be instantiated as one of the model classes of the library---with a language modeling head---when created with the when created with the :meth:`~transformers.AutoModelWithLMHead.from_pretrained` class method or the :meth:`~transformers.AutoModelWithLMHead.from_config` class method. This class cannot be instantiated directly using ``__init__()`` (throws an error). .. warning:: This class is deprecated and will be removed in a future version. Please use :class:`~transformers.AutoModelForCausalLM` for causal language models, :class:`~transformers.AutoModelForMaskedLM` for masked language models and :class:`~transformers.AutoModelForSeq2SeqLM` for encoder-decoder models. """ I’m afraid this may not be a very essential question, but is there any origin for the names of the classification of CausalLM, MaskedLM, and Seq2SeqLM? Or are they original to the transformers library? I would like to know more about the source of the terms in using the library. Thank you in advance.
The differences between the three are explained in the model summary 47. Causal language modeling/masked language modeling are very often used in research papers, so those terms don’t come from the transformers library.
0
huggingface
🤗Transformers
Unable to import Hugging Face transformers
https://discuss.huggingface.co/t/unable-to-import-hugging-face-transformers/2900
I have been using transformers fine up until today. However, when I imported the package today, I received this error message: In Transformers v4.0.0, the default path to cache downloaded models changed from '~/.cache/torch/transformers' to '~/.cache/huggingface/transformers'. Since you don't seem to have overridden and '~/.cache/torch/transformers' is a directory that exists, we're moving it to '~/.cache/huggingface/transformers' to avoid redownloading models you have already in the cache. You should only see this message once. Error: Destination path '/home/user/.cache/huggingface/transformers/transformers' already exists I have tried to install and uninstall the package but still unable to make it work. Any suggestions to fix this would be really appreciated.
tlqnguyen: Error: Destination path '/home/user/.cache/huggingface/transformers/transformers' already exists It seems you already have /home/user/.cache/huggingface/transformers/transformers, and transformers v4 tried to make ~/.cache/huggingface/transformers and failed. How about mv (rename) the /home/user/.cache/huggingface/transformers/ to some temporary name and retry import transformers? (If I were you, I would not delete the directory, but rename it and keep it for now, just in case.)
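A minimal Python sketch of that workaround (rename rather than delete, then retry the import):

```python
import pathlib

# move the problematic cache directory aside so the v4 migration can recreate it cleanly
cache = pathlib.Path.home() / ".cache" / "huggingface" / "transformers"
backup = cache.with_name("transformers_backup")

if cache.exists():
    cache.rename(backup)   # keep the old cache around, just in case

import transformers        # retry the import after moving the directory aside
print(transformers.__version__)
```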
0
huggingface
🤗Transformers
Huggingface Transformer code successfully gets executed on amazon web services but not on other server
https://discuss.huggingface.co/t/huggingface-transformer-code-successfully-gets-executed-on-amazon-web-services-but-not-on-other-server/2891
Hello, This might be slightly off-topic but I decided to write a question here in case anything helpful can come out. I have a block of code that makes a use of HuggingFace Transformer models. I can execute this my code on Amazon Web Services, so I don’t think that there is any syntax/semantic errors in my code. However, when I run the same code on my university server, I am keep getting the following error: Traceback (most recent call last): File "/home/h56cho/projects/def-schonlau/h56cho/GPT2.py", line 505, in <module> main_function('/home/h56cho/projects/def-schonlau/h56cho/G1G2.txt','/home/h56cho/projects/def-schonlau/h56cho/G1G2_answer_num.txt', num_iter) File "/home/h56cho/projects/def-schonlau/h56cho/GPT2.py", line 439, in main_function gpt2_tokenizer = GPT2Tokenizer.from_pretrained('gpt2') File "/localscratch/h56cho.42131937.0/env/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1623, in from_pretrained resolved_vocab_files[file_id] = cached_path( File "/localscratch/h56cho.42131937.0/env/lib/python3.8/site-packages/transformers/file_utils.py", line 948, in cached_path output_path = get_from_cache( File "/localscratch/h56cho.42131937.0/env/lib/python3.8/site-packages/transformers/file_utils.py", line 1124, in get_from_cache raise ValueError( ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. I highly doubt that the error is due to the internet connection, so this may has to do with the “cached path”. Can any of your team member suggest me how to solve this issue or why this error is popping up? Thank you,
Seems likely that the problem is with your internet connection though. Perhaps your server has a strict policy about which files can be downloaded and which ones can’t? Make sure AWS is accessible. Might also be a problem with access to the cache directory but I would have expected an OSError or PermissionError in that case. You can verify that you have read/write access to ~/.cache/torch/transformers (<V4) or ~/.cache/huggingface/transformers (V4).
0
huggingface
🤗Transformers
Transformers Tokenizer on GPU?
https://discuss.huggingface.co/t/transformers-tokenizer-on-gpu/2850
Hi, I am finding that tokenizing takes a long time when I have large text data. There may be some documentation about this somewhere, but I could not find any that addresses how to use multiple GPUs for tokenization. Any help will be much appreciated. Thanks!
@Narsil might be able to help here
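Until then, one hedged observation: tokenizers run on CPU rather than GPU, so the usual speed-ups are a fast (Rust-backed) tokenizer plus multiprocess batched mapping, for example via the datasets library (the dataset name here is just an illustration):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
dataset = load_dataset("imdb", split="train")   # stand-in for your large text data

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

# batched=True tokenizes many texts per call; num_proc spreads the work over CPU processes
tokenized = dataset.map(tokenize, batched=True, num_proc=4)
print(tokenized[0]["input_ids"][:10])
```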
0
huggingface
🤗Transformers
Loading Lower Layers of Model
https://discuss.huggingface.co/t/loading-lower-layers-of-model/2841
I have trained an ElectraForPreTraining model that has 10 encoder layers and saved the checkpoint. I now want to initialize an ElectraForSequenceClassification from this model. However, I also want to be able to initialize ElectraForSequenceClassification models that only has say 5, or 3 encoder layers that are initialized with the bottom 5 or bottom 3 layers of my ElectraForPreTraining Checkpoint. I was wondering if there is any way to do this with built in library methods. If not, I figure I would have to isolate out the bottom n layers and add a head myself, in which case any help on how to isolate out the bottom n layers into a new model would be extremely helpful.
You could load the full model and a smaller model with less number of layers and then copy the layers from the full model to the smaller model. This snippet might help import torch.nn as nn from transformers import ElectraForSequenceClassification, ElectraConfig # load pre-trained model model = ElectraForSequenceClassification.from_pretrained("google/electra-small-discriminator") # create smaller model config = ElectraConfig.from_pretrained("google/electra-small-discriminator", num_hidden_layers=3) student = ElectraForSequenceClassification(config) # this function takes the layers from first model, the layers from smaller model # and the indices of layers to copy # and copies the from source layers to dest layers def copy_layers(src_layers, dest_layers, layers_to_copy): layers_to_copy = nn.ModuleList([src_layers[i] for i in layers_to_copy]) assert len(dest_layers) == len(layers_to_copy), f"{len(dest_layers)} != {len(layers_to_copy)}" dest_layers.load_state_dict(layers_to_copy.state_dict()) # to copy last three layers give the indices of last 3 layers layers_to_copy = [9, 10, 11] copy_layers(model.electra.encoder.layer, student.electra.encoder.layer, layers_to_copy) # save the smaller model student.save_pretrained("student_model") # now you can load the smaller model using student = ElectraForSequenceClassification.from_pretrained("student_model")
0
huggingface
🤗Transformers
Fine tune a saved model with custom tokenizer
https://discuss.huggingface.co/t/fine-tune-a-saved-model-with-custom-tokenizer/2772
I am using a RoBERTa based model for pre-training and fine-tuning. To pre-train, I use RobertaForMaskedLM with a customized tokenizer . This means I used my tokenizer in the LineByLineTextDataset() and pre-trained my model for masked language modeling. However, for fine tuning, When I want to use my dataset with labels for a classification task, I think I must use my customized tokenizer before feeding my data to the model for fine tuning. My question is, How can I use my tokenizer to prepare the data and fine tune my pre-trained model?
Hi @Adel, you could save your custom tokenizer using the save_pretrained method and then load it again using the from_pretrained method. So for classification fine-tuning you can just use the custom tokenizer. And if you are using the official transformers example scripts, all you need to do is pass the tokenizer using the --tokenizer_name_or_path argument.
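A minimal sketch of that flow (the directory name "my-roberta" is hypothetical, standing in for wherever the custom tokenizer and pre-trained MLM model were saved):

```python
from transformers import AutoTokenizer, RobertaForSequenceClassification

# during pre-training, once: custom_tokenizer.save_pretrained("my-roberta")
tokenizer = AutoTokenizer.from_pretrained("my-roberta")     # reload it for fine-tuning
model = RobertaForSequenceClassification.from_pretrained("my-roberta", num_labels=2)

enc = tokenizer("an example sentence", truncation=True, padding="max_length",
                max_length=128, return_tensors="pt")
print(model(**enc).logits)   # classification head is freshly initialized and then fine-tuned
```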
0
huggingface
🤗Transformers
Diverse Generations for pseudolabeling
https://discuss.huggingface.co/t/diverse-generations-for-pseudolabeling/1189
@yjernite @patrickvonplaten @valhalla: what are good kwargs to get 10 diverse summaries from bart? The top 10 beams are all > 98 ROUGE against each other. (aka barely different). I am working a bit on pseudolabeling and have gotten huge gains from using the following strategy: generate best summary with default bart parameters if rouge(generated, train label) > 0.25: add (src example, generated example to dataset) I modified the strategy a bit to consider all top 10 generations (adding at most 1 for each example), and that also works well, though I haven’t been controlled about whether it works better. What I do know is that if you blindly add the pseudolabel to the dataset (remove bullet 2), it hurts performance. Also if there is a systematic study on this stuff I would definitely read it!
Yacine writes: Haha, actually I’ve worked on that a fair bit. You probably want to be sampling rather than using beam search there; it works better for semi-supervised training with back-translation (got some pretty OK results on Gigaword at least).
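As a sketch of that suggestion (sampling instead of beam search to get genuinely different candidates; the sampling parameters are illustrative, not tuned):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = "..."   # source document to pseudolabel
input_ids = tokenizer(article, return_tensors="pt", truncation=True).input_ids

outputs = model.generate(
    input_ids,
    do_sample=True,            # sampled candidates diverge far more than top beams
    top_p=0.95,                # hypothetical values; filter against ROUGE afterwards
    temperature=1.0,
    num_return_sequences=10,
    max_length=142,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```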
0
huggingface
🤗Transformers
Tips for PreTraining BERT from scratch
https://discuss.huggingface.co/t/tips-for-pretraining-bert-from-scratch/1175
So far, I’ve been using pre-trained models. For my task, it seems that I am required to train from scratch on a GLUE task just to see how it performs. I wanted to confirm what modifications need to be made to do this. I’m not sure about using the same tokenizer; I want to randomly initialize the model and train it on a GLUE task. Additionally, if you have some tips on doing this effectively when not starting from fine-tuned weights, please share.
You can initialize a model without pre-trained weights using from transformers import BertConfig, BertForSequenceClassification # either load pre-trained config config = BertConfig.from_pretrained("bert-base-cased") # or instantiate yourself config = BertConfig( vocab_size=2048, max_position_embeddings=768, intermediate_size=2048, hidden_size=512, num_attention_heads=8, num_hidden_layers=6, type_vocab_size=5, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, num_labels=3, ) # pass the config to model constructor instead of from_pretrained # this creates the model as per the params in config # but with weights randomly initialized model = BertForSequenceClassification(config) and as it’s a ForSequenceClassification model, the existing run_glue.py script can be used to train this model, just initialize model using config instead of .from_pretrained
0
huggingface
🤗Transformers
Recover the attention weights matrix with Reformer model
https://discuss.huggingface.co/t/recover-the-attention-weights-matrix-with-reformer-model/521
Hi, when using the chunked self-attention layer in Reformer, the attention weight matrix has a shape that is different from the one produced by global self-attention. The documentation doesn’t give any information about this, so I dug into the code to better understand why; it seems to be related to the chunking mechanism. However, I struggled to recover the equivalent attention weight matrix as in the classical global attention layer. Does anyone have any idea how to do such a thing? Global attention: attention weight shape (batch_size, num_heads, sequence_length, sequence_length). Chunked attention: attention weight shape (batch_size, num_heads, sequence_length, num_chunk, attn_chunk_length, attn_chunk_length * (1 + num_chunks_before + num_chunks_after)). Thanks
did you ever figure this out? I am trying to do the same - recover attention data from Reformer Classification Model. I expect (batch_size, num_heads, num_chunks, seq_chunk_size X seq_chunk_size) but get (batch_size, num_heads, num_chunks, seq_chunk_size, 2 X seq_chunk_size) Thank you
0
huggingface
🤗Transformers
Fine-tuning seq2seq: Helsinki-NLP
https://discuss.huggingface.co/t/fine-tuning-seq2seq-helsinki-nlp/1810
Hello, I’m currently running an NMT experiment using the finetune.py from examples/seq2seq. With some research, I found the idea of leveraging pre-trained models instead of training from scratch. My model aims to translate pt_BR to es_ES, so my choice was to take advantage of https://huggingface.co/Helsinki-NLP/opus-mt-pt-ca 6 which seemed from very proximate domains. I’m using the opus dataset pt_BR and es with 55M sentence pairs (with a quick qualitative analysis I believe this data has low/medium quality but is a very considerable amount). After doing the first run and, as this requires a huge amount of processing and costs $, some questions came to my mind, you can check my experiment on Weights and Biases: https://wandb.ai/jpmc/data/runs/3hahd13a?workspace=user-jpmc 3 . I’m having a huge inconsistency on the GPU Usage, as you can see from the link, I have 8 Tesla K80, and tried some batch_sizes that crashed earlier, while this one (32) crashed after 41h of training. Then, when lowered to 16, it crashed even earlier (check the system information in the other run from this experiment). One epoch is estimated to take 60hours, is this correct? I have no experience in this problem with transformers. The val_BLEU seems to drop but not the val_loss, is it possible that the bleu value is not being logged correctly? What could cause this? This is my first time using the library, any advice on models and hyper-parameters I could tune? Should I train from scratch?
Hi, I’ve not tried seq-to-seq (I’ve been using BERT), and I’m not an expert, but I have a few suggestions. I suggest you don’t train from scratch. Brazilian Portuguese should be very close to standard Portuguese, and Catalonian Spanish is probably quite close to standard Spanish. Much closer than randomly-initialized weights would be. I suggest you start by fine-tuning with a much smaller sample of data, so that you can find out where the problems are, and get some suitable hyperparameters. What do you suppose is happening at 17 hours and 35 hours? Is someone else sharing your system? If you want to train a bit and then stop, and restart from the same place, you can save the model state-dict and the optimizer state-dict, as in the sketch below. I suggest you run the validation less frequently.
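To make that last point concrete, here is a minimal checkpointing sketch (the file name and the model / optimizer / epoch variables are placeholders from a generic PyTorch training loop, not anything from the finetune.py script):
import torch

# save enough state to resume training later
checkpoint = {
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "epoch": epoch,
}
torch.save(checkpoint, "checkpoint.pt")

# ...later, to resume from the same place
checkpoint = torch.load("checkpoint.pt", map_location="cpu")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
start_epoch = checkpoint["epoch"] + 1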
0
huggingface
🤗Transformers
Training TransfoXL/GPT2 with fastai gives error
https://discuss.huggingface.co/t/training-transfoxl-gpt2-with-fastai-gives-error/2660
I am trying to train transformers language model with marathi language database (Wikipedia text). While training I get following error: ~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in getattr(self, name) 776 if name in modules: 777 return modules[name] –> 778 raise ModuleAttributeError("’{}’ object has no attribute ‘{}’".format( 779 type(self).name, name)) 780 ModuleAttributeError: ‘TransfoXLModel’ object has no attribute 'reset’ Can anyone tell me what I am doing wrong? Please note that I have been able to train with AWD_LSTM model from fastai in similar setup. Fastai version : 2.1.5, Pytorch version: 1.7.0, Transformer version: 4.0.0 Here is my code snippet: from fastai import * from fastai.text.all import * import pathlib import sentencepiece as spm import io import fastai, torch fastai.version , torch.version import transformers from transformers import TransfoXLConfig, TransfoXLModel, TransfoXLTokenizer from transformers import GPT2LMHeadModel, GPT2TokenizerFast path = pathlib.Path(’/home/rajendra/ML/NLP/marathi_nlp/’) sample_path = ‘/home/rajendra/ML/NLP/marathi_nlp/data/’ get_texts = partial(get_text_files, folders=[‘txt’, ‘wikipedia_txt’]) dls_lm = DataBlock( blocks=TextBlock.from_folder(sample_path, is_lm=True), get_items=get_texts, splitter=RandomSplitter(0.1) ).dataloaders(sample_path, path=sample_path, bs=16, seq_len=80) dls_lm.show_batch(max_n=3) #This shows correct batches of text pretrained_weights = ‘gpt2’ tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights) model = GPT2LMHeadModel.from_pretrained(pretrained_weights) loss_func = CrossEntropyLossFlat() learn = LMLearner(dls_lm, model, loss_func = loss_func, metrics=[accuracy, Perplexity()]) model.train() #This should set model to train mode. Prints model info learn.fit_one_cycle(1, 2e-2) Call to fit_one_cycle fails with above mentioned error. Please help! The error trace starts with: epoch train_loss valid_loss accuracy perplexity time 0 nan 00:00 ModuleAttributeError Traceback (most recent call last) ~/anaconda3/lib/python3.8/site-packages/fastai/learner.py in with_events(self, f, event_type, ex, final) 153 def with_events(self, f, event_type, ex, final=noop): –> 154 try: self(f’before{event_type}’) ;f() 155 except ex: self(f’after_cancel{event_type}’) ~/anaconda3/lib/python3.8/site-packages/fastai/learner.py in _do_fit(self) 195 self.epoch=epoch –> 196 self._with_events(self._do_epoch, ‘epoch’, CancelEpochException) 197
This is a problem coming from the fastai side, not ours. fastai tries to call .reset() on the model, and the Transformers models (like many others) do not have that method.
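If you want to keep using fastai’s LMLearner, one possible workaround (a sketch only, not tested against fastai 2.1.5) is to wrap the Transformers model in a thin nn.Module that exposes a no-op reset(); alternatively you can try removing fastai’s RNN-specific callbacks (ModelResetter and friends, if I remember the names correctly) from the learner:
import torch.nn as nn

class HFWrapper(nn.Module):
    # gives a Transformers model the reset() method that fastai's LM callbacks expect
    def __init__(self, hf_model):
        super().__init__()
        self.hf_model = hf_model

    def forward(self, input_ids):
        # GPT2LMHeadModel returns (logits, ...) — fastai's loss expects the logits only
        return self.hf_model(input_ids)[0]

    def reset(self):
        # Transformers models keep no recurrent state between batches, so nothing to reset
        pass

learn = LMLearner(dls_lm, HFWrapper(model), loss_func=loss_func, metrics=[accuracy, Perplexity()])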
0
huggingface
🤗Transformers
Length_penalty not influencing results (Bart, Pegasus)
https://discuss.huggingface.co/t/length-penalty-not-influencing-results-bart-pegasus/2206
Hello, I am experimenting with the generative parameters of the two models Bart and Pegasus. In particular, I am having trouble with the length_penalty parameter, since changing it does not change the output of the model. I am summarizing two different chapters of a book (# tokens around 1k) and this is the code I am using: model.generate( b0ch1sec1_text_enc, min_length = 150, max_length = 350, num_beams = 2, length_penalty = lp, early_stopping = True)[0] With lp varying from 0.1 to 2 and model being either bart-large-cnn or pegasus-large. Do you have any idea why the output does not change at all?
For those who were following this post, I tried in a more rigorous way with some (around 10) articles from the CNN/DM and the length_penalty parameter does change the output (actually a lot, from 200 to 500 tokens). However, it is still a mystery for me why sometimes it does not influence it at all. The code I have used: import torch torch.manual_seed(42) summ = {} model.to('cuda') for i, a in enumerate(articles): summ[i] = [] for lp in [0.1, 1, 2]: summ[i].append( tokenizer.decode( model.generate( tokenizer.encode(a, truncation=True, return_tensors='pt').to('cuda'), length_penalty=lp)[0], skip_special_tokens=True))
0
huggingface
🤗Transformers
Multiple texts as inputs to Transformers models
https://discuss.huggingface.co/t/multiple-texts-as-inputs-to-transformers-models/2222
I would like to use multiple texts as inputs to a model. Let’s say I have a dataset with 10 columns, where each column is a text (a sentence or two); how can I feed all of these inputs to the model and do classification, for example? I can see it’s possible to just concatenate all the texts into one, but it seems that I would then need a very large dataset to be able to achieve good accuracy. Maybe use multiple models (BERT) in parallel, take the last hidden states, concatenate them and classify? But the problem is that there are many texts, on the order of 30. Any idea how to tackle this?
You should take the same approach as extractive text summarization: concatenate all your sentences, separated with a special token (CLS for example), then use the CLS token representation to do classification. From the PreSumm paper (architecture figure omitted).
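A rough illustration of the idea (a sketch of the input construction only, not the PreSumm code itself):
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

texts = ["first column text", "second column text", "third column text"]
ids = []
for t in texts:
    # one [CLS] in front of every text, [SEP] after it
    ids += [tokenizer.cls_token_id] + tokenizer.encode(t, add_special_tokens=False) + [tokenizer.sep_token_id]
input_ids = torch.tensor([ids])

with torch.no_grad():
    hidden = model(input_ids)[0]                                  # (1, seq_len, hidden_size)

cls_positions = (input_ids[0] == tokenizer.cls_token_id).nonzero(as_tuple=True)[0]
cls_vectors = hidden[0, cls_positions]                            # one vector per input text
# cls_vectors (e.g. concatenated or pooled) can then feed a small classification head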
0
huggingface
🤗Transformers
Advice to speed and performance
https://discuss.huggingface.co/t/advice-to-speed-and-performance/1769
Hey, I get the feeling that I might miss something about the perfomance and speed and memory issues using huggingface transformer. Since, I like this repo and huggingface transformers very much (!) I hope I do not miss something as I almost did not use any other Bert Implementations. Because I want to use TF2 that is why I use huggingface. Now, I would like to speed up inference and maybe decreasing memory usage. As I am native tensorflow user, I have no experience with the pytorch models at all. So is it possible that the pytorch models are more performant and more efficient than the tf models? *How can I speed up inference ? For encoding 200 sentences pairs on my cpu it takes 12 seconds. So is it more feasible to use pytorch models for making inference or even training? Are there any memory usage differences? *So, why is bert-as-a-service more performant and faster (as it looks like) I hope I can test this? I ask because I stumpled over here: github.com/huggingface/transformers Difference between this repo and bert-as-service 19 opened Apr 14, 2019 closed Jun 23, 2019 tcqiuyu Hi, I wondered if anybody knows the difference between the BertModel of this repo and bert-as-service. I cannot get the same result between... Discussion wontfix How to ensure fast inference on both CPU and GPU with BertForSequenceClassification? Beginners Hi! I’d like to perform fast inference using BertForSequenceClassification on both CPUs and GPUs. For the purpose, I thought that torch DataLoaders could be useful, and indeed on GPU they are. Given a set of sentences sents I encode them and employ a DataLoader as in encoded_data_val = tokenizer.batch_encode_plus(sents, add_special_tokens=True, return_attention_mask=True, … Some advices for better usage (for deployment) are very appreciated.
Is Hugging Face with PyTorch faster than with TensorFlow?
0
huggingface
🤗Transformers
Pegasus Tokenizer Error
https://discuss.huggingface.co/t/pegasus-tokenizer-error/2509
While working with Pegasus (using the Hugging Face Transformers implementation) I got this error. I also tried to install the sentencepiece library using pip, but it still shows the same error. Can anyone help? This is my code:
model_name = 'google/pegasus-aeslc'  # 'google/pegasus-xsum'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer1 = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
Error: PegasusTokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation 5 and follow the ones that match your environment.
You are not showing the error, but I am having the same problem with the SentencePiece library and PegasusTokenizer.
0
huggingface
🤗Transformers
Why is there no pooler representation for XLNet or a consistent use of sequence_summary()?
https://discuss.huggingface.co/t/why-is-there-no-pooler-representation-for-xlnet-or-a-consistent-use-of-sequence-summary/2357
I’m trying to create sentence embeddings using different Transformer models. I’ve created my own class where I pass in a Transformer model, and I want to call the model to get a sentence embedding. Both BertModel 2 and RobertaModel 2 return a pooler output (the sentence embedding). pooler_output ( torch.FloatTensor of shape (batch_size, hidden_size) ) – Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. Why does XLNetModel not produce a similar pooler_output? When I look at the source code for XLNetForSequenceClassification 2, I see that there actually exists code for getting a sentence embedding 3 using a function called sequence_summary(). def forward(): transformer_outputs = self.transformer( ... ) output = transformer_outputs[0] output = self.sequence_summary(output) Why is this sequence_summary() function not used consistently in the other Transformers models, such as BertForSequenceClassification and RobertaForSequenceClassification?
To be rigorous, I compared (1) BertModel’s pooler output to (2) the output of SequenceSummary for the same sentence. Aren’t these two approaches supposed to produce the same sentence embedding? That’s what I’m led to believe from the comments of SequenceSummary 2: class transformers.modeling_utils. SequenceSummary Compute a single vector summary of a sequence hidden states. summary_type ( str ) – The method to use to make this summary. Accepted values are: "last" – Take the last token hidden state (like XLNet) "first" – Take the first token hidden state (like Bert) "mean" – Take the mean of all tokens hidden states "cls_index" – Supply a Tensor of classification token position (GPT/GPT-2) "attn" – Not implemented now, use multi-head attention Returns The summary of the sequence hidden states. I’ve created a Google Colab notebook 8 that computes both (1) pooler output and (2) SequenceSummary output for the same sentence. The last line in the notebook shows that the two approaches to sentence embeddings are not the same. Is this a bug, or was my asssumption wrong? If it’s a bug (in code or documentation), I’ll open a GitHub issue. Any help would be appreciated.
0
huggingface
🤗Transformers
Default models for pipeline tasks
https://discuss.huggingface.co/t/default-models-for-pipeline-tasks/2559
What are the default models used for the various pipeline tasks? I assume the “SummarizationPipeline” uses Bart-large-cnn or some variant of T5, but what about the other tasks?
| Pipeline | Default model |
| --- | --- |
| ConversationalPipeline | ? |
| FeatureExtractionPipeline | ? |
| FillMaskPipeline | ? |
| QuestionAnsweringPipeline | ? |
| SummarizationPipeline | BART or T5 (?) |
| TextClassificationPipeline | ? |
| TextGenerationPipeline | ? |
| TokenClassificationPipeline | ? |
| TranslationPipeline | ? |
| ZeroShotClassificationPipeline | ? |
| Text2TextGenerationPipeline | ? |
You can find all the defaults here: github.com huggingface/transformers/blob/71688a8889c4df7dd6d90a65d895ccf4e33a1a56/src/transformers/pipelines.py#L2716-L2804 17 SUPPORTED_TASKS = { "feature-extraction": { "impl": FeatureExtractionPipeline, "tf": TFAutoModel if is_tf_available() else None, "pt": AutoModel if is_torch_available() else None, "default": {"model": {"pt": "distilbert-base-cased", "tf": "distilbert-base-cased"}}, }, "sentiment-analysis": { "impl": TextClassificationPipeline, "tf": TFAutoModelForSequenceClassification if is_tf_available() else None, "pt": AutoModelForSequenceClassification if is_torch_available() else None, "default": { "model": { "pt": "distilbert-base-uncased-finetuned-sst-2-english", "tf": "distilbert-base-uncased-finetuned-sst-2-english", }, }, }, "ner": { "impl": TokenClassificationPipeline, This file has been truncated. show original
0
huggingface
🤗Transformers
Seq2Seq fnetuning wandb issue
https://discuss.huggingface.co/t/seq2seq-fnetuning-wandb-issue/2494
When using seq2seq finetune.py I am getting this warning: wandb: WARNING Step must only increase in log calls. and because of this issue, it is not reporting all the stats in my wandb console. I found this which seems to be relevant: https://github.com/wandb/client/issues/1530 12 Here is a more complete log: Epoch 1: 50%|█████ | 1/2 [00:00<00:00, 2.62it/s, lowandb: WARNING Step must only increase in log calls. Step 1 < 3; dropping {'val_avg_loss': 3.4621164798736572, 'val_avg_rouge1': 0.0, 'val_avg_rouge2': 0.0, 'val_avg_rougeL': 0.0, 'val_avg_rougeLsum': 0.0, 'val_avg_gen_time': 0.008627939224243163, 'val_avg_gen_len': 3.0, 'step_count': 3, 'epoch': 1}. Epoch 2: 50%|█████ | 1/2 [00:00<00:00, 2.65it/s, lowandb: WARNING Step must only increase in log calls. Step 2 < 4; dropping {'val_avg_loss': 2.1513829231262207, 'val_avg_rouge1': 19.3801, 'val_avg_rouge2': 8.7686, 'val_avg_rougeL': 18.0454, 'val_avg_rougeLsum': 19.2525, 'val_avg_gen_time': 0.11329512596130371, 'val_avg_gen_len': 66.0, 'step_count': 4, 'epoch': 2}. Epoch 3: 50%|█████ | 1/2 [00:00<00:00, 2.61it/s, lowandb: WARNING Step must only increase in log calls. Step 3 < 5; dropping {'val_avg_loss': 1.903280258178711, 'val_avg_rouge1': 27.2808, 'val_avg_rouge2': 11.1652, 'val_avg_rougeL': 23.9808, 'val_avg_rougeLsum': 27.4063, 'val_avg_gen_time': 0.10656294822692872, 'val_avg_gen_len': 61.0, 'step_count': 5, 'epoch': 3}. Epoch 4: 50%|█████ | 1/2 [00:00<00:00, 2.71it/s, lowandb: WARNING Step must only increase in log calls. Step 4 < 6; dropping {'val_avg_loss': 1.3527400493621826, 'val_avg_rouge1': 13.7018, 'val_avg_rouge2': 7.5888, 'val_avg_rougeL': 11.3654, 'val_avg_rougeLsum': 11.3237, 'val_avg_gen_time': 0.20975174903869628, 'val_avg_gen_len': 121.0, 'step_count': 6, 'epoch': 4}. Any thoughts on this?
Looks like we need to use the latest version of PyTorch Lightning: https://github.com/PyTorchLightning/pytorch-lightning/issues/4811 57
0
huggingface
🤗Transformers
Should Google Colab Be Updated With New Model Uploading?
https://discuss.huggingface.co/t/should-google-colab-be-updated-with-new-model-uploading/2400
colab.research.google.com Google Colaboratory 9
I think it’s a good idea! Maybe you can update it and send a PR
0
huggingface
🤗Transformers
BART for sequence classification
https://discuss.huggingface.co/t/bart-for-sequence-classification/2425
The facebook/bart-large-cnn model is pre-trained on a summarization task; is it possible to fine-tune it on a classification task? Syntactically it doesn’t cause any issue, but what about in terms of results?
In Google’s QUEST challenge 4, which is a “multi-label” classification competition, the 1st place winner uses (fine-tuned) pretrained BART as one of their core models, so that might partially answer your question.
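Mechanically, loading the summarization checkpoint into a classification head looks like the sketch below (the head is freshly initialized, so it still has to be fine-tuned on your labels; the number of labels is just an example):
import torch
from transformers import BartForSequenceClassification, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForSequenceClassification.from_pretrained("facebook/bart-large-cnn", num_labels=3)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))
loss, logits = outputs[0], outputs[1]
loss.backward()   # drop this into a standard fine-tuning loop or the Trainer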
0
huggingface
🤗Transformers
Are BERT models pretrained with Whole Word Masking?
https://discuss.huggingface.co/t/are-bert-models-pretrained-with-whole-word-masking/2307
Are BERT models in Transformers pretrained with Whole Word Masking? (attached screenshot omitted)
It depends on the checkpoint you are using; we provide both versions. For instance bert-base-uncased is the first BERT model, pretrained without WWM, but bert-large-uncased-whole-word-masking is pretrained with WWM. Check all the checkpoints available here 10
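For example, switching to the whole-word-masking variant only changes the checkpoint name; the API is identical (a quick sketch):
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking")
model = AutoModelForMaskedLM.from_pretrained("bert-large-uncased-whole-word-masking")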
0
huggingface
🤗Transformers
Naming inconsistency in Distilbert config
https://discuss.huggingface.co/t/naming-inconsistency-in-distilbert-config/2220
Hi! I noticed one inconsistency between the DistilBERT and BERT configs. The DistilBERT config stores the output hidden size as dim and the FFN dim as hidden_dim, while BERT and RoBERTa use hidden_size for the output and intermediate_size for the FFN dim. I know such a thing can be hard to fix without breaking backward compatibility, but this behavior makes it a bit harder to get your model’s output size upfront. E.g. if I want to be able to use both DistilBERT and BERT as an encoder in my model like this
class MySuperCustomModel(nn.Module):
    def __init__(self, encoder, n_classes):
        super().__init__()
        self.encoder = encoder
        hidden_size = ...  # I wish it would be as simple as encoder.config.hidden_size
        self.logit_network = nn.Linear(hidden_size, n_classes)
the code to get the encoder output size is kind of ugly, because you need to use isinstance or something like it. Of course, for classification you can use *ModelForClassification, but what if you want to use a pre-trained model as a seq2seq encoder or to write some other custom model? I feel like solving this issue could make quite a few people a bit happier, as they would be able to experiment with different pre-trained models without code modifications and without thinking about the differences between transformer configs. Is there a better way to get the output dimension of the model, or are any fixes planned? I can help with a PR too.
You can make a PR with new properties for those configs (like hidden_size for DistilBert) but we can’t change the name of the arguments of the configs as it would be a severe breaking change. I agree that consistent named properties would be useful!
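In the meantime, a small helper is an easy workaround (a sketch; the attribute names cover the configs mentioned here, extend the tuple for other architectures as needed):
from transformers import AutoConfig

def get_hidden_size(config):
    # BERT/RoBERTa expose hidden_size, DistilBERT exposes dim, BART/T5 expose d_model
    for name in ("hidden_size", "dim", "d_model"):
        if hasattr(config, name):
            return getattr(config, name)
    raise ValueError(f"Cannot infer the hidden size from {config.__class__.__name__}")

print(get_hidden_size(AutoConfig.from_pretrained("bert-base-uncased")))        # 768
print(get_hidden_size(AutoConfig.from_pretrained("distilbert-base-uncased")))  # 768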
0
huggingface
🤗Transformers
Additional features as input to TFBert?
https://discuss.huggingface.co/t/additional-features-as-input-to-tfbert/2212
Say I have a binary classification problem, but in addition to the sentence I’d like to also input some scalar value as well. Is it possible to just tack on this scalar as input to the last linear layer of BERT? For example, I’d like to detect if a particular sentence is from my source data or generated. And I know that many instances of a repeated word increases the likelihood that it is a generated sentence. So I’d like to pass the sentence itself into BERT as well as a scalar feature such as the number of unique words in the sentence.
All the models are defined in a self-contained file that you can tweak to your need. Just add the new inputs to the model you’re using!
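If you would rather not touch the modeling file, another common pattern is to keep TFBertModel as-is and combine its pooled output with the extra scalar in a small Keras model. A sketch under the assumption of a fixed max_len and a single scalar feature (depending on your transformers version you may need bert_outputs.pooler_output instead of the [1] index):
import tensorflow as tf
from transformers import TFBertModel

max_len = 128
bert = TFBertModel.from_pretrained("bert-base-uncased")

input_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")
scalar_feature = tf.keras.Input(shape=(1,), dtype=tf.float32, name="scalar_feature")  # e.g. unique-word count

bert_outputs = bert(input_ids, attention_mask=attention_mask)
pooled = bert_outputs[1]                                   # pooled [CLS] representation
merged = tf.keras.layers.Concatenate()([pooled, scalar_feature])
output = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model(inputs=[input_ids, attention_mask, scalar_feature], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])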
0
huggingface
🤗Transformers
Some unintended things happen in Seq2SeqTrainer example
https://discuss.huggingface.co/t/some-unintended-things-happen-in-seq2seqtrainer-example/2361
Hi, Congratulations to HuggingFace Transformers for winning the Best Demo Paper Award at EMNLP 2020! I’m now trying v4.0.0-rc-1 with great interest. If you don’t mind, I’d like to ask you about what seems strange during running the Seq2SeqTrainer example. I’m sorry if I’m mistaken or if the problem is dependent on the environment, but I’d be happy if you look over it. What seems strange The number of data pairs is not correctly recognized. MLflow cannot treat the params (too long). I wasn’t sure if I should divide these into two topics, but in the end, I decided on one. If it is better to divide them into two, I will modify it. Environment transformers version: 4.0.0-rc-1 The latest commit: commit 5ced23dc845c76d5851e534234b47a5aa9180d40 Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10 Python version: 3.8.3 PyTorch version (GPU?): 1.7.0 (True) Tensorflow version (GPU?): 2.3.1 (True) Using GPU in script?: Yes Using distributed or parallel set-up in script?: No Script and Parameters I first noticed this strangeness when I use a different dataset than the those in the example. I again follow the README of examples/seq2seq to check if my modification causes the problem or not. Having checked https://github.com/huggingface/transformers/issues/8792 1, I used --evaluation_strategy epoch instead of --evaluate_during_training. $ CUDA_VISIBLE_DEVICES=0 python finetune_trainer.py \ --data_dir $XSUM_DIR \ --learning_rate=3e-5 \ --fp16 \ --do_train --do_eval --do_predict \ --evaluation_strategy epoch \ --predict_with_generate \ --n_val 1000 \ --model_name_or_path facebook/bart-large \ --output_dir ./xsum_bart-large/ \ --save_total_limit 5 \ 2>&1 | tee tmp.log Log [INFO|trainer.py:667] 2020-11-30 08:10:43,836 >> ***** Running training ***** [INFO|trainer.py:668] 2020-11-30 08:10:43,836 >> Num examples = 204016 [INFO|trainer.py:669] 2020-11-30 08:10:43,836 >> Num Epochs = 3 [INFO|trainer.py:670] 2020-11-30 08:10:43,836 >> Instantaneous batch size per device = 8 [INFO|trainer.py:671] 2020-11-30 08:10:43,836 >> Total train batch size (w. parallel, distributed & accumulation) = 8 [INFO|trainer.py:672] 2020-11-30 08:10:43,836 >> Gradient Accumulation steps = 1 [INFO|trainer.py:673] 2020-11-30 08:10:43,836 >> Total optimization steps = 76506 ... mlflow.exceptions.MlflowException: Param value '{'summarization': {'length_penalty': 1.0, 'max_length': 128, 'min_length': 12, 'num_beams': 4}, 'summarization_cnn': {'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'num_beams': 4}, 'summarization_xsum': {'length_penalty': 1.0, 'max_leng' had length 293, which exceeded length limit of 250 (Reference) Dataset length $ cd $XSUM_DIR/ $ wc -l * 11333 test.source 11333 test.target 204017 train.source 204017 train.target 11327 val.source 11327 val.target 453354 total Details The number of examples shown At first, I tried to use the dataset with 40,000 pairs for training, but it was shown that Num examples = 39999. I don’t know why, so I’ve checked the example with the XSum dataset. Checking the number of lengths, it seems the XSum train set used in the example has 204017 pairs, but it is shown Num examples = 204016 as above. I thought the dataset was supposed to start with the first line, but am I mistaken? For example, is the first line treated as a header? MLflow can not treat params in this case As shown above, the length of param value exceeds the limit that MLflow can handle. Do I just need to change the settings of MLflow? Or, should I add some modifications to param value to be used in MLflow? 
Thank you in advance. yusukemori
Hey there. It seems as if you have encountered some bugs with the trainer. Cool, that is very helpful! The forum may not be the best place to post this, though, as it serves more as a place for general questions. If you believe these are bugs, can you instead post this in the bug tracker on GitHub 3? You can include a link to this forum post as well.
0
huggingface
🤗Transformers
Arguments in encode_plus
https://discuss.huggingface.co/t/arguments-in-encode-plus/2355
Just a general question: can I give multiple arguments to encode_plus?
encoding = self.tokenizer.encode_plus(
  author,
  source,
  title,
  content,
  add_special_tokens=True,
  max_length=self.max_len,
  return_token_type_ids=False,
  pad_to_max_length=True,
  return_attention_mask=True,
  return_tensors='pt',
)
This one is giving me an error.
ignore it, solved encoding = self.tokenizer.encode_plus( [author, source, title, content], add_special_tokens=True, max_length=self.max_len, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', )
0
huggingface
🤗Transformers
Improving performance results for BERT
https://discuss.huggingface.co/t/improving-performance-results-for-bert/2121
I’m using the bert-base-german-cased model to perform token classification with custom NER labels on a dataset of German court documents. I have 11 labels in total (including the O label), which are however not tagged in BIO form. I’m letting the model train and evaluate on an NVidia GeForce GTX Titan X. But despite the good ressources and the model, which was actually pretrained on German judicial documents, the results are rather lacking. precision recall f1-score support Date 0.87 0.99 0.93 407 Schadensbetrag 0.77 0.58 0.66 112 Delikt 0.59 0.50 0.54 44 Gestaendnis_ja 0.60 0.71 0.65 21 Vorstrafe_nein 0.00 0.00 0.00 6 Strafe_Gesamtfreiheitsstrafe_Dauer 0.76 0.91 0.83 35 Strafe_Gesamtsatz_Betrag 0.42 0.52 0.46 25 Strafe_Gesamtsatz_Dauer 0.52 0.82 0.64 28 Strafe_Tatbestand 0.30 0.29 0.30 283 micro avg 0.65 0.68 0.66 961 macro avg 0.54 0.59 0.56 961 weighted avg 0.64 0.68 0.66 961 What could be some steps to improve these results? Perhaps it’s the low data count for some of the labels, or that the labels often are not single tokens but text spans of multiple tokens? I would be glad for every hint of some more experienced users. I can also share data or other files, if they are relevant. This is my config file: { "data_dir": "./Data", "labels": "./Data/labels.txt", "model_name_or_path": "bert-base-german-cased", "output_dir": "./Data/Models", "task_type": "NER", "max_seq_length": 180, "num_train_epochs": 6, "per_device_train_batch_size": 48, "seed": 7, "fp16": true, "do_train": true, "do_predict": true, "do_eval": true }
Anyone who could help on this topic?
0
huggingface
🤗Transformers
Convert tokens and token-labels to string
https://discuss.huggingface.co/t/convert-tokens-and-token-labels-to-string/2086
I am building a token classification model, and I am asking if there’s a good way to transform the tokens and labels (each token has its own label) into a string. I know there is tokenizer.convert_tokens_to_string, which converts tokens to strings, but I must also take the labels into consideration. Any ideas, other than writing my own implementation?
Hi Zack, what form are the labels currently in? I don’t understand what you are building. Are the labels the classifications?
0
huggingface
🤗Transformers
T5 for Named Entity Recognition
https://discuss.huggingface.co/t/t5-for-named-entity-recognition/2167
The task is as follows: need to write the code for NamedEntityRecognition(Token classification), using the T5 model. Who knows how to do this, please write explicitly on the example. This should work as for example: from transformers import AutoModelForTokenClassification, AutoTokenizer import torch model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english", return_dict=True) tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") label_list = [ "O", # Outside of a named entity "B-MISC", # Beginning of a miscellaneous entity right after another miscellaneous entity "I-MISC", # Miscellaneous entity "B-PER", # Beginning of a person's name right after another person's name "I-PER", # Person's name "B-ORG", # Beginning of an organisation right after another organisation "I-ORG", # Organisation "B-LOC", # Beginning of a location right after another location "I-LOC" # Location ] sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \ "close to the Manhattan Bridge." # Bit of a hack to get the tokens with the special tokens tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence))) inputs = tokenizer.encode(sequence, return_tensors="pt") outputs = model(inputs).logits predictions = torch.argmax(outputs, dim=2) print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())]) [('[CLS]', 'O'), ('Hu', 'I-ORG'), ('##gging', 'I-ORG'), ('Face', 'I-ORG'), ('Inc', 'I-ORG'), ('.', 'O'), ......] Please, write a concrete example with using T5Model and T5Tokenizer
github.com/google-research/text-to-text-transfer-transformer Does T5 support sequence labeling (like NER) tasks? 185 opened Dec 26, 2019 closed Dec 26, 2019 Morizeyao Hi there, just as the title says. I wonder, does T5 support sequence labeling (like NER) tasks? I don't see report of...
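In practice, T5 has no token-classification head in Transformers, so NER is usually cast as a text-to-text problem and the model is fine-tuned to generate the entities. A rough sketch (the task prefix and the target format are made up for illustration, and an un-finetuned checkpoint will not produce meaningful output):
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

sentence = "Hugging Face Inc. is a company based in New York City."
# During fine-tuning you would pair inputs like this with targets such as
# "Hugging Face Inc.: ORG; New York City: LOC"
input_ids = tokenizer("recognize entities: " + sentence, return_tensors="pt").input_ids
generated = model.generate(input_ids, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))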
0
huggingface
🤗Transformers
How to train TFT5ForConditionalGeneration model?
https://discuss.huggingface.co/t/how-to-train-tft5forconditionalgeneration-model/888
My code is as follows: batch_size=8 sequence_length=25 vocab_size=100 import tensorflow as tf from transformers import T5Config, TFT5ForConditionalGeneration configT5 = T5Config( vocab_size=vocab_size, d_ff =512, ) model = TFT5ForConditionalGeneration(configT5) model.compile( optimizer = tf.keras.optimizers.Adam(), loss = tf.keras.losses.SparseCategoricalCrossentropy() ) input = tf.random.uniform([batch_size,sequence_length],0,vocab_size,dtype=tf.int32) labels = tf.random.uniform([batch_size,sequence_length],0,vocab_size,dtype=tf.int32) input = {'inputs': input, 'decoder_input_ids': input} model.fit(input, labels) It generates an error: logits and labels must have the same first dimension, got logits shape [1600,64] and labels shape [200] [[node sparse_categorical_crossentropy_3/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at C:\Users\FA.PROJECTOR-MSK\Google Диск\Colab Notebooks\PoetryTransformer\experiments\TFT5.py:30) ]] [Op:__inference_train_function_25173] Function call stack: train_function I dont understand - why the model returns a tensor of [1600, 64]. According to https://huggingface.co/transformers/model_doc/t5.html#tft5forconditionalgeneration 11 model returns [batch_size, sequence_len, vocab_size].
Pinging @patrickvonplaten
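While waiting for a proper answer: Keras’ compiled loss is comparing shapes it does not expect here, so one way around it is a small custom training step that computes the loss from the logits directly. This is a sketch only, matching the shapes in your snippet (note from_logits=True, and that in practice decoder_input_ids are usually the shifted labels rather than the encoder inputs):
import tensorflow as tf
from transformers import T5Config, TFT5ForConditionalGeneration

batch_size, sequence_length, vocab_size = 8, 25, 100
model = TFT5ForConditionalGeneration(T5Config(vocab_size=vocab_size, d_ff=512))

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

inputs = tf.random.uniform([batch_size, sequence_length], 0, vocab_size, dtype=tf.int32)
labels = tf.random.uniform([batch_size, sequence_length], 0, vocab_size, dtype=tf.int32)

with tf.GradientTape() as tape:
    outputs = model(inputs, decoder_input_ids=inputs, training=True)
    logits = outputs[0]                    # (batch_size, sequence_length, vocab_size)
    loss = loss_fn(labels, logits)         # averaged over batch and time steps
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))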
0
huggingface
🤗Transformers
How to create the warmup and decay from the BERT/Roberta papers?
https://discuss.huggingface.co/t/how-to-create-the-warmup-and-decay-from-the-bert-roberta-papers/2106
Roberta’s pretraining is described below BERT is optimized with Adam (Kingma and Ba, 2015) using the following parameters: β1 = 0.9, β2 = 0.999, ǫ = 1e-6 and L2 weight decay of 0.01. The learning rate is warmed up over the first 10,000 steps to a peak value of 1e-4, and then linearly decayed. BERT trains with a dropout of 0.1 on all layers and attention weights, and a GELU activation function (Hendrycks and Gimpel, 2016). Models are pretrained for S = 1,000,000 updates, with minibatches containing B = 256 sequences of maximum length T = 512 tokens. I’m trying to figure out how to replicate this optimizer schedule. I see that in the trainer.py code there’s AdamW and get_linear_schedule_with_warmup github.com huggingface/transformers/blob/master/src/transformers/trainer.py#L510 17 optimizer_grouped_parameters = [ { "params": [p for n, p in self.model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": self.args.weight_decay, }, { "params": [p for n, p in self.model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] self.optimizer = AdamW( optimizer_grouped_parameters, lr=self.args.learning_rate, betas=(self.args.adam_beta1, self.args.adam_beta2), eps=self.args.adam_epsilon, )if self.lr_scheduler is None: self.lr_scheduler = get_linear_schedule_with_warmup( self.optimizer, num_warmup_steps=self.args.warmup_steps, num_training_steps=num_training_steps ) But I’m not sure how to replicate Roberta’s learning rate schedule from these classes huggingface.co Optimization — transformers 3.5.0 documentation 7 It seems that AdamW already has the decay rate, so using AdamW with get_linear_schedule_with_warmup will result in two types of decay. So to me it makes more sense to use AdamW with get_constant_schedule_with_warmup. I am also wondering how to set the schedule based on 1) a starting learning rate 2) warm it up to a particular maximum value 3) from the maximum value, decay using a particular decay rate. The classes on the main optimization class seem to be based on warming up/decaying to/from zero.
From further looking into the code for Roberta (https://github.com/pytorch/fairseq/blob/dd52ed0f3896639b3c04aa67c44775f689faf1a5/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py 16) and also Bert (https://github.com/google-research/bert/blob/master/optimization.py#L36 8) It seems that the learning rate starts what is specified in the optimizer, increased to a particular LR, and then linearly decreased to zero. It seems that get_linear_schedule_with_warmup could work, but would need to be altered for a different learning rate. It seems that it uses torch.optim.lr_scheduler.LambdaLR github.com huggingface/transformers/blob/08f534d2da47875a4b7eb1c125cfa7f0f3b79642/src/transformers/optimization.py#L90 4 The number of steps for the warmup phase. num_training_steps (:obj:`int`): The total number of training steps. last_epoch (:obj:`int`, `optional`, defaults to -1): The index of the last epoch when resuming training. Return: :obj:`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. """ def lr_lambda(current_step: int): if current_step < num_warmup_steps: return float(current_step) / float(max(1, num_warmup_steps)) return max( 0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps)) ) return LambdaLR(optimizer, lr_lambda, last_epoch)def get_cosine_schedule_with_warmup( So I’m thinking of creating a custom function directly to use that method pytorch.org torch.optim — PyTorch 1.7.0 documentation 4
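Putting the pieces together, a sketch of the BERT/RoBERTa-style setup with the existing helpers (model and the step counts are placeholders; the peak learning rate lives on the optimizer, and get_linear_schedule_with_warmup warms up from 0 to that peak and then decays linearly to 0). Note that AdamW's weight_decay is a decoupled L2-style penalty on the weights, not a learning-rate decay, so combining it with the linear LR schedule is what the papers describe rather than "two decays":
import torch
from transformers import AdamW, get_linear_schedule_with_warmup

optimizer = AdamW(
    model.parameters(),
    lr=1e-4,                   # peak learning rate
    betas=(0.9, 0.98),
    eps=1e-6,
    weight_decay=0.01,
)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,
    num_training_steps=500_000,
)

# in the training loop:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()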
0
huggingface
🤗Transformers
Initializing the weights of the final layer of e.g. BertForTokenClassification with a manual seed
https://discuss.huggingface.co/t/initializing-the-weights-of-the-final-layer-of-e-g-bertfortokenclassification-with-a-manual-seed/1377
First off, I’m wondering how the final layer is initialized in the first place when I load my model using BertForTokenClassification.from_pretrained('bert-base-uncased') Most of the model obviously loads the weights from pretraining, but where does the final layer, in this case the linear layer which takes in the hidden states for each token, get its weights? Is it a new random set of weights each time I load bert-base-uncased? And is there a way for me to give a manual seed so that I get the same initialization for this final layer every time with this seed? Initialization of the final layer may have an effect on the results of fine-tuning, so I would like to have control over it if I can, and compare how different initializations do.
In this link 56, if you search for the “BertForTokenClassification” class, I see there is a call to the init_weights() function after the architecture is defined. And to reproduce the results, have you tried setting the seed?
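To make the head initialization reproducible, it is enough to fix the seed right before instantiating the model (a small sketch):
import torch
from transformers import BertForTokenClassification

torch.manual_seed(42)   # transformers.set_seed(42) additionally seeds Python and NumPy
model_a = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=9)

torch.manual_seed(42)
model_b = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=9)

# the randomly initialized classification heads are now identical
print(torch.equal(model_a.classifier.weight, model_b.classifier.weight))  # True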
0
huggingface
🤗Transformers
Training BERT from scratch (MLM+NSP) on a new domain
https://discuss.huggingface.co/t/training-bert-from-scratch-mlm-nsp-on-a-new-domain/2075
Hi, I have been trying to train BERT from scratch using the wonderful hugging face library. I am referring to the Language modeling tutorial and have made changes to it for the BERT. As I am running on a completely new domain I have trained my own tokenizer, which trains fine. However, I run into the following error during the training of the model. ) /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path, trial) 710 self.state.is_world_process_zero = self.is_world_process_zero() 711 –> 712 tr_loss = torch.tensor(0.0).to(self.args.device) 713 self._logging_loss_scalar = 0 714 self._total_flos = self.state.total_flos RuntimeError: CUDA error: device-side assert triggered From what I understand this is related to some tensor mismatch, however, I unable to resolve this and can’t understand where I am going wrong during the model building. I would really appreciate your help with this. I have attached the colab notebook for reference (https://colab.research.google.com/drive/12NHfXeUBo7RBl3Kffa-715i-Zpd0MOzP?usp=sharing 182). I am using BertForPreTraining, TextDatasetForNextSentencePrediction, DataCollatorForNextSentencePrediction and BertFasttokeizer. Environment: tokenizers: 0.9.2 transformers: 3.4.0 torch: 1.7.0 CUDA: 10.1
I haven’t run your notebook, but the first thing I’d double check looking at it is that your vocabulary has indeed 50,000 tokens and not more.
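A quick way to check this (variable names assumed from a typical setup, adapt them to your notebook): every id the tokenizer can emit must be smaller than the model config's vocab_size, otherwise the embedding lookup fails on GPU with exactly this kind of device-side assert.
print("tokenizer size:", len(tokenizer))
print("model vocab_size:", model.config.vocab_size)
assert len(tokenizer) <= model.config.vocab_size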
0
huggingface
🤗Transformers
Convert mT5 to HF weights?
https://discuss.huggingface.co/t/convert-mt5-to-hf-weights/1727
Hi, I have been attempting to convert the mT5 weights available here 11 to the HF weights for TFT5ForConditionalGeneration or T5ForConditionalGeneration? Any ideas on how to do this?
I have a question, can this new model be used for summarization on other languages other than English and without fine-tuning it ?
0
huggingface
🤗Transformers
mBART finetuning tips/post-mortem
https://discuss.huggingface.co/t/mbart-finetuning-tips-post-mortem/851
I have run lots of mbart finetuning experiments and am moving on to pegasus/marian so wanted to share some general tips. I ran them all on en-ro because I knew what fairseq scores on that pair, since thats the only finetuned checkpoint they released. Best test BLEU I got from finetuning was 26.42. the fairseq-converted model gets 26.81. --freeze_embeds does not hurt metrics and saves lots of memory. Always use this Got 26.32 on 8x V100 GPU on master on Aug 21 (f230a640). Took 10h32mins before I killed. Maybe shouldn’t have killed post-processing in romanian_postprocessing.md distillation works well - slightly better with a teacher. Posted sshleifer/distilmbart-enro. Probably still best to used Marian, which scored on wmt-en-ro test 27.7/37.4 in 90 Seconds vs mbart-large-en-ro 26.8/37.1 6 minutes. The distilled 12-6 is roughly 26.1 3 mins. Distilled 12-4 is 25.9/2 minutes. I wonder why marian is so good. I guess pretraining is not convincingly a benefit in machine translation yet. There may also be a leak in the marian data, but the original author Jorg Tiedemann doesn’t think so. Still, these metrics made me think I should focus much less on distilling/finetuning for mbart and more on supporting training MT from scratch, distilling marian even smaller. Unsolved: 7. Using --joined-dictionary in fairseq and trimming embeddings should make training much faster, but I couldn’t get sentencepiece SetVocabulary doing this to make the correct/restricted vocabulary. The sentencepiece maintainers ignore my issue, so may post something detailed. Don’t know who I’d sent that to. 8. Had a good run with --decoder_layerdrop=0.3, but subsequent distillation wasn’t any faster/better. Wierd. 9. Can get a 30-40% speedup with dynamic batch size, might send a PR for that. It’s default in fairseq. 10. Getting the decoder_input_ids to look exactly like fairseq (rather than off by 1) doesn’t change metrics at all (afaict). Lessons learned: Spent too much time trying different hparams rather than zooming out and thinking about what I wanted to accomplish in this project. We still got a lot out of it – less memory hungry dataset, better command line args, support for MT in seq2seq/finetune.py. Should have run marian eval earlier before I put so much time into mbart.
This is weird, even if pre-training doesn’t help, it should perform at least similar or better than Marian given the large architecture. Could multilinguality be the reason for this perf drop ? dynamic batching will be a great addition if it gives speed-up. Can’t fine-tune large models reliably on colab. I want to get bart/MBart working on TPU with Seq2Seq trainer, very hard to get interesting results with large models without huge compute. I knew it
0
huggingface
🤗Transformers
Evaluation metrics
https://discuss.huggingface.co/t/evaluation-metrics/2085
In order to use the evaluation metrics from https://huggingface.co/metrics 102, for example:
from datasets import load_metric
metric = load_metric("accuracy")
does the dataset have to be from the Hugging Face hub, or may I use these metrics with a transformer model on my own data? Thanks
You can use it on any data you want.
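For example, the metric only needs predictions and references, which can come from any model or dataset (a minimal sketch):
from datasets import load_metric

metric = load_metric("accuracy")
predictions = [0, 1, 1, 0]   # label ids produced by your own model
references = [0, 1, 0, 0]    # gold label ids from your own data
print(metric.compute(predictions=predictions, references=references))
# {'accuracy': 0.75}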
0
huggingface
🤗Transformers
Learning rate setting
https://discuss.huggingface.co/t/learning-rate-setting/2063
Hey, I wonder whether it is possible to set different learning rates for different parts of the model? This is usually considered a useful trick in BERT fine-tuning.
You can pass your own optimizer to the Trainer, in which you can define your parameter groups as you like.
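A sketch of how that looks in practice (the "classifier" name filter matches BERT-style heads and train_dataset is assumed to be your own tokenized dataset; adjust both to your setup):
from transformers import (AdamW, AutoModelForSequenceClassification, Trainer,
                          TrainingArguments, get_linear_schedule_with_warmup)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

head = [p for n, p in model.named_parameters() if n.startswith("classifier")]
body = [p for n, p in model.named_parameters() if not n.startswith("classifier")]

optimizer = AdamW([
    {"params": body, "lr": 2e-5},   # pretrained body: small learning rate
    {"params": head, "lr": 1e-3},   # fresh classification head: larger learning rate
])
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=1000)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    train_dataset=train_dataset,
    optimizers=(optimizer, scheduler),   # Trainer uses these instead of creating its own
)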
0
huggingface
🤗Transformers
New Model sharing and uploading is extremely slow
https://discuss.huggingface.co/t/new-model-sharing-and-uploading-is-extremely-slow/2067
Hello, the new "Model sharing and uploading" workflow using git is a great idea. However, I tried to update one of my models ("t5-3b", which is about 11 GB), and the git add --all command was taking too long; I had to cancel it in the end. This is because of the way git handles large files while calculating the diff. Is there any solution or workaround for this problem?
To have an idea of which steps take time, you can add the following env variables to your git commands: GIT_CURL_VERBOSE=1 GIT_TRACE=1 Also make sure you actually have git-lfs activated: git lfs install. (We’re in the process of adding it everywhere in the documentation as it was easy to miss). On the hashing overhead part: for me hashing (w/ sha256) a 42GB file takes 2’30sec (on a beefy computer) so the hashing overhead is not too huge, and of course gives the benefit of security/reproducibility. And finally in your precise use case, uploading files larger than 5GB is currently not supported out of the box, though we’ll support it in the next 2-3 weeks. Issue to follow is at https://github.com/huggingface/transformers/issues/8480#issuecomment-726731046 8
0
huggingface
🤗Transformers
GPT2 with TensorFlow?
https://discuss.huggingface.co/t/gpt2-with-tensorflow/2051
I tried doing this at the start without luck. I was wondering if anyone was successful at eval/retrain GPT2 with TensorFlow instead of pyTorch? I guess there are some ports directly in TF. However, my question is related to the Transformers library.
I am not aware of public notebooks regarding training Huggingface’s TFGPT2, but we have public notebooks of training TFT5 and TFXLM-R , which should give you some ideas TF-T5 colab.research.google.com Google Colaboratory 3 TF-XLM-R https://www.kaggle.com/riblidezso/finetune-xlm-roberta-on-jigsaw-test-data-with-mlm 3
0
huggingface
🤗Transformers
Custom DistilBertTokenizer training
https://discuss.huggingface.co/t/custom-distilberttokenizer-training/1967
Hi all, I am trying to find out how to build a custom tokenizer for DistilBERT; all the examples I saw just use the pre-trained tokenizer. huggingface.co DistilBERT — transformers 3.5.0 documentation Can someone point me to how to build my own custom tokenizer? Thanks in advance.
Hi, this is probably where you can start if you want to build a fast tokenizer: https://huggingface.co/docs/tokenizers/python/master/quicktour.html 5 cc @anthony and @Narsil
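A condensed sketch of the quicktour for a BERT/DistilBERT-style WordPiece vocabulary (the file and directory names are placeholders):
import os
from tokenizers import BertWordPieceTokenizer
from transformers import DistilBertTokenizerFast

tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(files=["my_corpus.txt"], vocab_size=30_522, min_frequency=2)

os.makedirs("my_tokenizer", exist_ok=True)
tokenizer.save_model("my_tokenizer")          # writes vocab.txt

# the trained vocabulary can then be loaded with the Transformers tokenizer class
hf_tokenizer = DistilBertTokenizerFast.from_pretrained("my_tokenizer")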
0
huggingface
🤗Transformers
DPR retriever module
https://discuss.huggingface.co/t/dpr-retriever-module/627
I see https://github.com/huggingface/transformers/pull/5279 12 that describes the DPR flow. Just checking to see when the retriever module will be available. Many Thanks for making DPR available !
I see this topic was already answered in Github 10 from Quentin. So, I’d love to add the answer here for convenience The retriever is now part of the nlp library. You can install it with pip install datasets and load the retriever: from datasets import load_dataset wiki = load_dataset("wiki_dpr", with_embeddings=False, with_index=True, split="train") The retriever is basically a dense index over wikipedia passages. To query it using the DPR question encoder you can do: from transformers import DPRQuestionEncoderTokenizer, DPRQuestionEncoder question_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base') question_encoder = DPRQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base') question = "What is love ?" question_emb = question_encoder(**question_tokenizer(question, return_tensors="pt"))[0].detach().numpy() passages_scores, passages = wiki.get_nearest_examples("embeddings", question_emb, k=20) # get k nearest neighbors
0
huggingface
🤗Transformers
Transformers v4.0.0 announcement
https://discuss.huggingface.co/t/transformers-v4-0-0-announcement/1990
We are working on a new major release that should come out at the end of next week, with cool new features that will unfortunately result in some breaking changes. There will be one last release for v3 before we start introducing those breaking changes on master, so if you’re using a source installation, be prepared or revert to v3.5.0 for a bit AutoTokenizers and pipeline will switch to Fast tokenizers by default => Resulting breaking change: the slow and fast tokenizers have roughly the same API but they have a different handling of the overflowing tokens. => Why are we doing this: This will greatly increase the performance of the tokenization aspect in pipelines, and enable clearer, simpler example scripts leveraging the fast tokenizers. The overflowing of Fast tokenizers is also a lot more powerful than it’s counterpart in slow tokenizers. sentencepiece will be removed as a required dependency. (It will still be required to be installed for slow SP based tokenizers) => Resulting breaking change: some people will have to install sentencepiece explicitly while they didn’t have to before with the command pip install transformers[sentencepiece]. => Why are we doing this? This, in turn, will allow us to create and maintain a conda channel offering the full Hugging Face suite on conda. Reorganizing the internal organization of the library with subfolders (either one per model or one for all models, all tokenizers, one subfolder for pipelines, trainer etc). With the number of models growing, the source folder is a bit too hard to navigate right now. => Resulting breaking change: some people directly accessing the internals will have to update the path they use. If you only use imports from transformers directly, nothing will break. => Why are we doing this? The library will be more robust to scaling for more models. Switching the return_dict argument to True. This argument that makes the ouputs of the models self-documented was introduced a few months ago with a default to False for backward compatibility. => Resulting breaking change: unpacking the output of a model with commands like loss, logits = model(**inputs) won’t work anymore. The command to_tuple can convert a model output to a tuple. => Why are we doing this? Outputs of the model are easier to understand when they are ModelOutput. You can index as a dict or use auto-complete in an IDE to find all fields. This will also allow us to optimize the TensorFlow models more (the tuples of various size being incompatible with graph mode). Deprecated arguments or functions will be removed on a case-by-case basis.
Reorganizing the whole repo was bound to be necessary at one point. If anything, it is a testament of how a great library it is with ever-increasing features and models. Good luck with the release!
0
huggingface
🤗Transformers
Clarification: finetune.py max target length
https://discuss.huggingface.co/t/clarification-finetune-py-max-target-length/2014
@sshleifer (1) could you clarify what is the rationale behind these assertions? github.com huggingface/transformers/blob/5e24982e580ab32a779d7271ad8cb46dc5c6475f/examples/seq2seq/finetune.py#L93-L94 1 assert self.target_lens["train"] <= self.target_lens["val"], f"target_lens: {self.target_lens}"assert self.target_lens["train"] <= self.target_lens["test"], f"target_lens: {self.target_lens}" (2) The naming of the variables *_max_target_length make it seem like they specify a bound on the target (output), but it seems like (according to their description) they’re limits on the input (source). What do you think we rename them to something more descriptive?
danyaljj: their description Naming of the variables is correct, the description is wrong. Feel free to change the description. cc @valhalla if you copy pasted bad descriptions.
0
huggingface
🤗Transformers
`KeyError: ‘eval_loss’` when using Trainer with BertForQA
https://discuss.huggingface.co/t/keyerror-eval-loss-when-using-trainer-with-bertforqa/1920
When I try to run BertForQuestionAnswering with a Trainer object, it reaches the end of the eval before throwing KeyError: 'eval_loss'(full traceback below). I ran a very vanilla implementation based very closely on the Fine-tuning with custom datasets QA tutorial 4. The training and validation both finish, but from the traceback, it seems like there is some problem when reporting results. Am I missing something that should be there? Is this a bug? Is Trainer not supported here? This is transformers v3.4.0. tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') model = BertForQuestionAnswering.from_pretrained("bert-base-uncased") class MyDataset(torch.utils.data.Dataset): def __init__(self, encodings): self.encodings = encodings def __getitem__(self, idx): # self.encodings.keys() = ['input_ids', 'attention_mask', 'start_positions', 'end_positions'] return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} def __len__(self): return len(self.encodings.input_ids) train_dataset = MyDataset(train_encodings) val_dataset = MyDataset(val_encodings) device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') model.to(device) training_args = TrainingArguments( output_dir="./tmp/qa_trainer_test", do_train=True, do_eval=True, evaluation_strategy="epoch", per_device_train_batch_size=4, per_device_eval_batch_size=4, learning_rate=3e-5, num_train_epochs=1, ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=val_dataset, ) trainer.train() Traceback: --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-22-7b137ef43258> in <module> 20 ) 21 ---> 22 trainer.train() ~/SageMaker/conda_env/my_env/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial) 790 791 self.control = self.callback_handler.on_epoch_end(self.args, self.state, self.control) --> 792 self._maybe_log_save_evalute(tr_loss, model, trial, epoch) 793 794 if self.args.tpu_metrics_debug or self.args.debug: ~/SageMaker/conda_env/my_env/lib/python3.7/site-packages/transformers/trainer.py in _maybe_log_save_evalute(self, tr_loss, model, trial, epoch) 843 metrics = self.evaluate() 844 self._report_to_hp_search(trial, epoch, metrics) --> 845 self.control = self.callback_handler.on_evaluate(self.args, self.state, self.control, metrics) 846 847 if self.control.should_save: ~/SageMaker/conda_env/my_env/lib/python3.7/site-packages/transformers/trainer_callback.py in on_evaluate(self, args, state, control, metrics) 350 def on_evaluate(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, metrics): 351 control.should_evaluate = False --> 352 return self.call_event("on_evaluate", args, state, control, metrics=metrics) 353 354 def on_save(self, args: TrainingArguments, state: TrainerState, control: TrainerControl): ~/SageMaker/conda_env/my_env/lib/python3.7/site-packages/transformers/trainer_callback.py in call_event(self, event, args, state, control, **kwargs) 374 train_dataloader=self.train_dataloader, 375 eval_dataloader=self.eval_dataloader, --> 376 **kwargs, 377 ) 378 # A Callback can skip the return of `control` if it doesn't change it. 
~/SageMaker/conda_env/my_env/lib/python3.7/site-packages/transformers/utils/notebook.py in on_evaluate(self, args, state, control, metrics, **kwargs) 324 else: 325 values["Step"] = state.global_step --> 326 values["Validation Loss"] = metrics["eval_loss"] 327 _ = metrics.pop("total_flos", None) 328 _ = metrics.pop("epoch", None) KeyError: 'eval_loss'
Trainer is untested on QA problems, and this is actually my work for the end of the week/beginning of next week. I will give it a quick look this morning to see if there is a way to have a quick fix for this; otherwise you’ll have to wait a tiny bit more.
0
huggingface
🤗Transformers
Gradient accumulation averages over gradient
https://discuss.huggingface.co/t/gradient-accumulation-averages-over-gradient/2020
So I have been looking at this for the past day and a half. Here in the code 28 the gradient in gradient accumulation is averaged. Please explain to me. Gradient accumulation should accumulate (i.e. sum) the gradient, not average it, right? That makes this scaling plain wrong? Am I missing something? Same holds for multi-gpu parallel training, where the mean() is used directly. Both cases would be closer to their 1 GPU and full batch size equivalent if just a sum was used, right?
Well, it actually depends on the loss you are using, but most of the time we use a cross-entropy loss averaged over the samples in the batch (so it’s mostly independent of the batch size). The natural extension of that to gradient accumulation is to average over the accumulation. I wrote a bit about that some time ago here: https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255 56
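In loop form, the convention looks like this (a generic sketch; model, optimizer and dataloader are placeholders, and the loss returned by the model is assumed to already be a mean over the mini-batch):
accumulation_steps = 8
optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = model(**batch)[0]                  # mean loss over this mini-batch
    (loss / accumulation_steps).backward()    # gradients from successive calls add up
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                      # equivalent to one larger-batch update
        optimizer.zero_grad()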
0
huggingface
🤗Transformers
Using proxy to upload models
https://discuss.huggingface.co/t/using-proxy-to-upload-models/1946
Hi I am trying to upload our model using the CLI command. However, my computer need a proxy to connect S3 server (because of the GFW): requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/***** Is there anyway to set a proxy? Thanks in advance.
We have updated the way models are uploaded, please read the detailed announcement here: [Announcement] Model Versioning: Upcoming changes to the model hub 871
0
huggingface
🤗Transformers
2 possible bugs for adding new tokens to T5
https://discuss.huggingface.co/t/2-possible-bugs-for-adding-new-tokens-to-t5/1984
Hi, I think that I found 2 issues while trying to add new tokens to the T5 tokenizer. My goal was to add the smaller-than sign "<" to the vocabulary of T5. However, doing this prevents the model from extracting the eos_token and unk_token correctly. When the model encounters <unk> or </s>, it splits them as < and unk> or < and /s>. (I am adding the eos tokens to the end myself, and they will be extracted as normal tokens and also occur in the prediction even if skip_special_tokens=True.) So to overcome this, I decided to change the eos_token to ~end~. This works, but then I observed that the prediction results were much worse! Especially at the end of the prediction. Actually everything was okay, but suddenly at the end there was always some strange additional generation. I tracked down the issue and I found it. The custom eos_token is added to the vocabulary as a new token, instead of overwriting the existing eos_token. This means "</s>" has id 1; when I add my own custom eos_token, it does not overwrite 1, nor will it be mapped to 1. It adds a new token to the vocabulary, but then all the pretraining for the eos token is gone! The model has to learn the eos_token from scratch, and this degrades the results remarkably. If you could run the code below and observe the output, you will understand better what I mean!
tokenizer = T5Tokenizer.from_pretrained(model_name,)
print('len_tokenizer with default eos token: ', len(tokenizer))
tokenizer = T5Tokenizer.from_pretrained(model_name, eos_token='~end~')
print('len_tokenizer with custom eos token: ', len(tokenizer))
print('id 1 before adding <:', tokenizer.decode(1, skip_special_tokens=False))
print('id 2 before adding <:', tokenizer.decode(2, skip_special_tokens=False))
tokenizer.add_tokens(['{', '}', '<', '>', '\\'])
print('id 1 after adding <:', tokenizer.decode(1, skip_special_tokens=False))
print('id 2 after adding <:', tokenizer.decode(2, skip_special_tokens=False))
custom_eos_id = tokenizer.encode("~end~", return_tensors='pt', truncation=True, padding=True)
print('custom_eos id: ', custom_eos_id)
Thank you for your time!
Hm currently we don’t have a simple high-level way to change the string associated to a token. You could manually edit the tokenizer.JSON file generated by the fast version of the T5 tokenizer I guess but that’s a bit hacky.
0
huggingface
🤗Transformers
Num_beams: Faster Summarization without Distillation
https://discuss.huggingface.co/t/num-beams-faster-summarization-without-distillation/2009
@patrickvonplaten @valhalla @stas For many seq2seq models in the hub, num_beams can be set meaningfully lower without hurting metrics. For xsum and cnn, I tried a bunch of different values and decided the ones below are better (models are not listed if the default is already good). The defaults are 8 for all pegasus, 6 for bart*xsum and 4 for bart*cnn. It’s not clear whether to change defaults from the published parameters (it would be nice to save compute for pipelines and the inference API, though), so I figured I’d just post this if people want faster inference. The speedups are substantial: between 20% and 100%. Tends to be easier on cnn_dailymail than xsum.
google/pegasus-cnn_dailymail: 4
sshleifer/distill-pegasus-cnn-16-4: 4
sshleifer/pegasus-cnn-ft-v2: 4
sshleifer/distilbart-cnn-12-3: 3
sshleifer/distilbart-cnn-12-6: 2
sshleifer/distilbart-cnn-6-6: 2
sshleifer/distill-pegasus-xsum-16-4: 4
sshleifer/distill-pegasus-xsum-12-12: 4
facebook/bart-large-cnn
Here are some rouge2 vs num_beams plots for different models (XSUM and CNN plots omitted). Another note: facebook/bart-large-xsum: prefix=" " hurts rouge2 by .02. Should be removed. No impact on facebook/bart-large-cnn
Awesome sharing, @sshleifer! Perhaps let’s add these notes to README.md? Otherwise it’d be difficult to remember that this is on the forums - or perhaps create OPTIMIZATIONS.md with various such performance notes - so README focuses on functionality, and the latter for tips and tricks.
0
huggingface
🤗Transformers
BartForConditionalGeneration “logits” shape is wrong/unexpected
https://discuss.huggingface.co/t/bartforconditionalgeneration-logits-shape-is-wrong-unexpected/2000
Using BartForConditionalGeneration with a batch_size = 2 … all the inputs look right but when I examine the logits the shape is torch.Size([2, 1, 50264]) The inputs are: x['input_ids'].shape => torch.Size([2, 256]) x['attention_mask'].shape => torch.Size([2, 256]) x['decoder_input_ids'].shape => torch.Size([2, 68]) What may I be doing wrong? Or is there a bug in the model?
Could you share the code you use to produce the logits? Did you just call the model or use model.generate? Btw, 50264 is Bart's vocab_size.
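For reference, a minimal sketch (assuming the facebook/bart-large-cnn checkpoint; the texts and decoder inputs are placeholders) of what the forward-pass logits look like. They should have shape (batch_size, decoder_sequence_length, vocab_size); a shape like [2, 1, 50264] usually means only a single decoder step was computed, as happens inside generate with caching:
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

texts = ["First example document.", "Second example document."]
batch = tokenizer(texts, return_tensors="pt", padding=True)
# Stand-in decoder inputs; in practice these would be the shifted target ids.
decoder_input_ids = torch.full((2, 68), model.config.pad_token_id, dtype=torch.long)

outputs = model(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    decoder_input_ids=decoder_input_ids,
    return_dict=True,
)
print(outputs.logits.shape)  # torch.Size([2, 68, 50264])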
0
huggingface
🤗Transformers
Issue with finetuning a seq-to-seq model
https://discuss.huggingface.co/t/issue-with-finetuning-a-seq-to-seq-model/1680
I am using finetune.py script among the seq2seq examples to finetune for a QA task: export NQOPEN_DIR=/home/danielk/nqopen_csv export OUT=/home/danielk/fine_tune_t5_small python3 finetune.py \ --data_dir $NQOPEN_DIR \ --model_name_or_path t5-small --tokenizer_name t5-small \ --learning_rate=3e-4 --freeze_encoder --freeze_embeds \ --do_train --train_batch_size 16 \ --do_predict --n_train -1 \ --eval_beams 2 --eval_max_gen_length 142 \ --val_check_interval 0.25 --n_val 3000 \ --output_dir $OUT --gpus 4 --logger_name wandb \ --save_top_k 3 Here are how my input/outputs look like: $ head ~/Desktop/nqopen_csv/train.source -l 5 ==> /Users/danielk/Desktop/nqopen_csv/train.source <== total number of death row inmates in the us? big little lies season 2 how many episodes? who sang waiting for a girl like you? where do you cross the arctic circle in norway? who is the main character in green eggs and ham? do veins carry blood to the heart or away? who played charlie bucket in the original charlie and the chocolate factory? what is 1 radian in terms of pi? when does season 5 of bates motel come out? how many episodes are in series 7 game of thrones? head: -l: No such file or directory head: 5: No such file or directory $ head ~/Desktop/nqopen_csv/train.target -l 5 ==> /Users/danielk/Desktop/nqopen_csv/train.target <== 2,718 seven Foreigner Saltfjellet Sam - I - am to Peter Gardner Ostrum 1 / 2π February 20 , 2017 seven After fine-tuning, I use the following script to get example generations: path = "/Users/danielk/ideaProjects/fine_tune_t5_small/best_tfmr" model = T5ForConditionalGeneration.from_pretrained(path) tokenizer = T5Tokenizer.from_pretrained(path) model.eval() def run_model(input_string, **generator_args): # input_string += "</s>" input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) tokens = [tokenizer.decode(x) for x in res] print(tokens) run_model("how many states does the US has? ") run_model("who is the US president?") run_model("who got the first nobel prize in physics?") run_model("when is the next deadpool movie being released?") run_model("which mode is used for short wave broadcast service?") run_model("the south west wind blows across nigeria between?") which gives me the following responses: ['44,100 state legislatures, 391,415 state states,527 states ; 521 states : 517 states'] ['President Pro - lect Ulysses S. Truman and Mr. President Proseudo - Emees'] ['Wilhelm Conrad Röntgen of Karl - Heinz Zurehmann - Shelgorithsg ⁇ rd'] ['December 14, 2018. 05 - 02 - 03 - 08 - 13 - 2022. 2022'] ['Fenway Wireless, Bluetooth, wireless channel system, WMV, FMN type 3D system.E.N'] ["Nigeria's natural gas, but some other half saggbourns ; they reboss"] which are quite bad. For comparison, when I used a T5-small model fine-tuned with TPU (tensorflow), I get the following predictions: ['50'] ['Donald Trump'] ['Wilhelm Conrad Röntgen'] ['December 18, 2018'] ['TCP port 25'] ['the Nigerian and Pacific Oceans'] Any thoughts on what is going wrong? @sshleifer
I have never used examples/seq2seq/finetune.py for QA, but @valhalla may have something similar. @danyaljj can you post the TF training code you used that worked well?
0
huggingface
🤗Transformers
Train a new tokenizer from scratch
https://discuss.huggingface.co/t/train-a-new-tokenizer-from-scratch/1939
Hi, I would like to train a tokenizer from scratch and use it with BERT. I would like to have a subword tokenizer (unigram, BPE, wordpiece) that would generate the right files (special_tokens_map.json, tokenizer_config.json, added_tokens.json and vocab.txt). As far as I can tell, the tokenizers provided by the tokenizers library are not compatible with transformers.PretrainedTokenizer (you cannot load the files created by one in the other). The tokenizers provided by the transformers library are all supposed to be pretrained. For example, one issue is the way the subwords are handled ("##" in the BERT tokenizer, which is handled differently in tokenizers). What is the right way to create and train a new tokenizer that I can use directly with BERT? And if this is definitely not possible, how can I convert one tokenizer to the other (or the files created by one to files that would be understood by the other)? Thank you very much!
Have a look at the tokenizers 15 repo.
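As a concrete sketch (the file paths, vocab size and casing here are assumptions, and it relies on recent versions of the tokenizers library), you can train a WordPiece vocabulary whose vocab.txt loads directly into transformers' BertTokenizer:
from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer

# Train a BERT-style WordPiece tokenizer on one or more plain-text files.
wp_tokenizer = BertWordPieceTokenizer(lowercase=True)
wp_tokenizer.train(
    files=["corpus.txt"],
    vocab_size=30_000,
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)

# save_model writes a vocab.txt into the given directory.
wp_tokenizer.save_model("my_bert_tokenizer")

# That vocab.txt is directly usable by the BertTokenizer in transformers.
bert_tokenizer = BertTokenizer.from_pretrained("my_bert_tokenizer")
print(bert_tokenizer.tokenize("Subword pieces use the ## prefix."))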
0
huggingface
🤗Transformers
Seq2Seq Distillation: train_distilbart_xsum error
https://discuss.huggingface.co/t/seq2seq-distillation-train-distilbart-xsum-error/1927
@sshleifer Hi, I would like to run the Direct Knowledge Distillation (KD) command you mentioned at https://github.com/huggingface/transformers/tree/master/examples/seq2seq 2, "./train_distilbart_xsum.sh --logger_name wandb --gpus 1", and I get an error: "OSError: Model name 'distilbart_xsum_12_6/student' was not found in tokenizers model name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). We assumed 'distilbart_xsum_12_6/student' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url." I tried to fix it by adding the following at line 46 of distillation.py: hparams.tokenizer_name = hparams.teacher # Use teacher's tokenizer. It seems to fix it, but I am not very sure.
Yes, your fix is perfect! Will fix, thanks for reporting this!
0
huggingface
🤗Transformers
Is there a pre-trained BERT model with the sequence length 2048?
https://discuss.huggingface.co/t/is-there-a-pre-trained-bert-model-with-the-sequence-length-2048/1877
Hello, I want to use a pre-trained BERT model because I do not want to train the entire BERT model to analyze my data. Is there a pre-trained BERT model with sequence length 2048? Or do all pre-trained BERT models only have a sequence length of 512? Thank you.
Hi, instead of BERT you may be interested in Longformer, which has pretrained weights for sequence lengths up to 4096: huggingface.co Longformer — transformers 3.4.0 documentation 17
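A minimal sketch (the checkpoint is allenai/longformer-base-4096; the input text and max_length are placeholders) of running a long sequence through a pretrained Longformer:
from transformers import LongformerTokenizer, LongformerModel

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

# Tokenize a long document; max_length can go up to 4096 for this checkpoint.
inputs = tokenizer("A very long document. " * 600, return_tensors="pt",
                   truncation=True, max_length=2048)
outputs = model(**inputs, return_dict=True)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)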
0
huggingface
🤗Transformers
T5-base model create spelling mistake is summary
https://discuss.huggingface.co/t/t5-base-model-create-spelling-mistake-is-summary/1116
Hi, I am using the T5-base model for abstractive summarization. The results are good, but I am getting newly generated spelling mistakes in the summary which were not actually present in the input text. Can anyone tell me why these spelling mistakes are occurring and how I can solve this?
I think it's due to your minimum output length. For example, if you have forced the model to generate at least 50 tokens, but the natural prediction would only be about 40 tokens long, I think it will start to generate random tokens just to reach the 50-token minimum.
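A quick way to test this hypothesis, as a minimal sketch (t5-base checkpoint; the input text and length values are placeholders): lower or remove min_length and see whether the trailing garbage disappears.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

text = "summarize: " + "The quick brown fox jumps over the lazy dog. " * 10
input_ids = tokenizer(text, return_tensors="pt").input_ids

# A high min_length can force the model to keep emitting tokens past its natural end.
summary_ids = model.generate(input_ids, num_beams=4, min_length=5, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))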
0
huggingface
🤗Transformers
How to analyze ROCstories with `BertForQuestionAnswering`?
https://discuss.huggingface.co/t/how-to-analyze-rocstories-with-bertforquestionanswering/1719
Hello, I looked at the HuggingFace documentation for the BertForQuestionAnswering model, and realized that the model gives out two logit outputs - start_logits and end_logits. How can I determine what the actual predicted answer is based on these two logits? Thank you,
Hi, the start_logits indicate the (most probable) starting token, while the end_logits indicate the ending token. So combining the start and end tokens (and every token in between), you will have the "actual predicted answer", which is a string in this case.
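A minimal sketch (using the bert-large-uncased-whole-word-masking-finetuned-squad checkpoint; the question and context strings are placeholders) of turning the two logit vectors into an answer string:
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForQuestionAnswering.from_pretrained(name)

question = "Who wrote the novel?"
context = "The novel was written by Jane Austen in 1813."
inputs = tokenizer(question, context, return_tensors="pt")

outputs = model(**inputs, return_dict=True)
start_idx = torch.argmax(outputs.start_logits, dim=-1).item()
end_idx = torch.argmax(outputs.end_logits, dim=-1).item()

# The predicted answer is the token span from the best start index to the best end index.
answer_ids = inputs["input_ids"][0, start_idx:end_idx + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))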
0
huggingface
🤗Transformers
Simple Save/Load of tokenizer not working
https://discuss.huggingface.co/t/simple-save-load-of-tokenizer-not-working/1864
I am training a DistilBert pretrained model for sequence classification with a pretrained tokenizer. I currently save the model like this: > model.save_pretrained(dir) > tokenizer.save_pretrained(dir) And load like this: > model.from_pretrained(dir) > tokenizer.from_pretrained(dir). Weirdly this produces bad results (worse by over 10%) because the tokenizer has somehow changed. Instead this works much better: > model.from_pretrained(dir) > DistilBertTokenizer.from_pretrained('distilbert-base-cased'). There must be something simple I'm missing here. Why won't it work properly if I just load the tokenizer directly from dir? PROGRESS: Upon further investigation I noticed that > tokenizer.from_pretrained(dir) defaults to > set_lower_case=True. Manually setting this fixes my issue. However, this doesn't make sense to me as the model before saving had set_lower_case=False. Is there any way it can be saved directly with this option?
Yes this was fixed in (https://github.com/huggingface/transformers/pull/8006 17) and the fix will be in the next release
0
huggingface
🤗Transformers
Pipeline for sentiment classification
https://discuss.huggingface.co/t/pipeline-for-sentiment-classification/1774
Hey everyone! I'm using the transformers pipeline for sentiment classification to classify unlabeled text. Unfortunately, I'm getting some very awful results! For example, the sentence below is classified as negative with a score of 0.99! sent = "The audience here in the hall has promised to remain silent." sentimentAnalysis = pipeline(task = "sentiment-analysis") print(sentimentAnalysis(sent)) # output : {'label': 'NEGATIVE', 'score': 0.9911394119262695} Do you know what I can do to get better results for unlabeled text? I actually tried training a large RoBERTa model on labeled text from Kaggle and I'm getting much better results, but I want to know why the pipeline is performing so badly, and what model it is actually using?
Hi Mitra, I am curious to know the metric performance (e.g. f1) between your trained model and the default pipeline. (How much better is the trained Roberta ?)
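Regarding which model the pipeline uses: the default sentiment-analysis pipeline loads a DistilBERT checkpoint fine-tuned on SST-2. A minimal sketch of inspecting the loaded model and of passing a checkpoint explicitly (the name in the second call spells out that same default):
from transformers import pipeline

clf = pipeline("sentiment-analysis")
print(type(clf.model).__name__)  # e.g. DistilBertForSequenceClassification

explicit = pipeline("sentiment-analysis",
                    model="distilbert-base-uncased-finetuned-sst-2-english")
print(explicit("The audience here in the hall has promised to remain silent."))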
0
huggingface
🤗Transformers
Why is grad norm clipping done during training by default?
https://discuss.huggingface.co/t/why-is-grad-norm-clipping-done-during-training-by-default/1866
github.com huggingface/transformers/blob/master/src/transformers/trainer.py#L789 75 # last step in epoch but step is always smaller than gradient_accumulation_steps steps_in_epoch <= self.args.gradient_accumulation_steps and (step + 1) == steps_in_epoch ): if self.args.fp16 and _use_native_amp: self.scaler.unscale_(self.optimizer) torch.nn.utils.clip_grad_norm_(model.parameters(), self.args.max_grad_norm) elif self.args.fp16 and _use_apex: torch.nn.utils.clip_grad_norm_(amp.master_params(self.optimizer), self.args.max_grad_norm) else: torch.nn.utils.clip_grad_norm_(model.parameters(), self.args.max_grad_norm) if is_torch_tpu_available(): xm.optimizer_step(self.optimizer) elif self.args.fp16 and _use_native_amp: self.scaler.step(self.optimizer) self.scaler.update() else: self.optimizer.step() self.lr_scheduler.step() I know that gradient clipping is useful for preventing exploding gradients, is this is reason why it is there by default? Or does this improve overall model training quality? Why is norm clipping used instead of the alternatives?
It usually improves the training (and is pretty much always done in the fine-tuning scripts of research papers), which is why we use it by default. Norm clipping is the most commonly used; you can always try alternatives and see if they yield better results.
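For illustration, a minimal sketch of what the Trainer does each step by default (the tiny model and random data are placeholders): rescale the global gradient norm to max_grad_norm (1.0 by default) right before the optimizer step. With Trainer you control this through TrainingArguments(max_grad_norm=...).
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()

# Rescale all gradients so their global L2 norm is at most 1.0.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()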
0
huggingface
🤗Transformers
Load Bert model weights to transformers v3 from model trained with transformers v2
https://discuss.huggingface.co/t/load-bert-model-weights-to-transformers-v3-from-model-trained-with-transformers-v2/1807
I have a BertModel (bert-base-uncased) trained with transformers==2.1.1 (using PyTorch v1.3) However, we want to move to transformers v3. I was wondering if it would be possible to load the model weights of that model into transformers v3. Otherwise, we would have to re-train the language model and we want to avoid doing that. What would be the best way to do this?
It's possible, you won't need to do anything special. You should be able to load it using the from_pretrained method.
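For example (the directory path is a placeholder), a checkpoint saved with save_pretrained under transformers v2 loads the same way in v3:
from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained("path/to/v2_checkpoint_dir")
tokenizer = BertTokenizer.from_pretrained("path/to/v2_checkpoint_dir")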
0
huggingface
🤗Transformers
What does increasing number of heads do in the Multi-head Attention?
https://discuss.huggingface.co/t/what-does-increasing-number-of-heads-do-in-the-multi-head-attention/1847
can someone explain to me the point of number of heads in the MultiheadAttention? what happens if I increase or decrease them? would it change the number of learnable parameters? what is the intuition behind increasing or decreasing the number of heads in the MultiheadAttention?
Changing the number of heads changes the number of learnable parameters. If you have more heads, training will take longer. This is definitely true. The next bit is more of an opinion. When you have several heads per layer the heads are independent of each other. This means that the model can learn different patterns with each head. For example, one head might pay most attention to the next word in each sentence, and another head might pay attention to how nouns and adjectives combine. Having several heads per layer is similar to having several kernels in convolution. Having several heads per layer allows one model to try out several pathways at once. It often turns out that some of the heads are not doing anything useful, but that’s OK because the later layers can learn to ignore the un-useful heads. It is possible to train a model with lots of heads and then cut some of them away. This is called pruning. Note that some researchers prune away whole heads, and other researchers prune away the least useful weights within a head. I believe you can prune a model either after pre-training or after fine-tuning. If you want to look at what patterns each individual head is learning, I recommend a visualisation tool called Bertviz by Jesse Vig. (Note that this only works for pytorch, not in Tensorflow)
0
huggingface
🤗Transformers
How can I run separately the Encoder and Decoder layers?
https://discuss.huggingface.co/t/how-can-i-run-separately-the-encoder-and-decoder-layers/1835
For the models that are built from distinct encoding and decoding phases, is there a simple way to use them separately (without changing the actual model code)? use cases may include: text from data (use the decoder on top of a linear layer) style transfer (encode, “change style”, decode)
Hey @ophiry, The code example at the end of this section of the Encoder-Decoder blog post: https://huggingface.co/blog/encoder-decoder#decoder 39 gives an example of how to run each part separately. In general the blog post should be helpful in answering your question. To tweak the code examples, the blog post is also available as a Google Colab.
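A minimal sketch (using the facebook/bart-large checkpoint; the input text is a placeholder) of running the encoder and the decoder as two separate steps, which is the pattern the blog post walks through:
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

inputs = tokenizer("Hello world", return_tensors="pt")

# Step 1: run only the encoder.
encoder_outputs = model.get_encoder()(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    return_dict=True,
)
# (encoder_outputs.last_hidden_state could be modified here, e.g. for style transfer.)

# Step 2: feed the (possibly modified) encoder states to the decoder + LM head.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
outputs = model(
    encoder_outputs=encoder_outputs,
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=decoder_input_ids,
    return_dict=True,
)
print(outputs.logits.shape)  # (1, 1, vocab_size)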
0
huggingface
🤗Transformers
The loss value is not decreasing training the Roberta model
https://discuss.huggingface.co/t/the-loss-value-is-not-decreasing-training-the-roberta-model/1822
Hi, I load the RoBERTa pre-trained model from the transformers library and use it for a sentence-pair classification task. The loss used to decrease every epoch during training until last week, but now, even though all of the parameters, including the batch size and the learning rate, have the same values, the training loss is not decreasing. I am a little bit confused; I have trained my model using various parameters and also utilized another implementation in PyTorch, but still the loss is not decreasing. Can anyone help me figure out the problem? Here is the link to my code: colab.research.google.com Google Colaboratory 42 and the dataset: drive.google.com wiki - Google Drive 4
I can't give you an answer, but just a few questions:
Are you sure you are running exactly the same code that previously worked? If so:
Are you getting exactly the same output, including that warning about not using all the roberta parameters? (That's a lot of layers not being used.)
Has your data been changed?
Has the colab environment changed - for example, is it the same version of transformers?
What is the loss function value before you start training? What would you expect the loss to be showing as?
Could it possibly be training completely within the first epoch?
Do you still have a notebook (with output) that shows what used to happen when it was working?
Is your Colab Runtime set to GPU or CPU?
Finally: exactly when did it stop working?
0
huggingface
🤗Transformers
TypeError: only size-1 arrays can be converted to Python scalars
https://discuss.huggingface.co/t/typeerror-only-size-1-arrays-can-be-converted-to-python-scalars/1805
I write the code like this import logging from datasets import load_from_disk from transformers import RobertaTokenizer, EncoderDecoderModel, Trainer, TrainingArguments from rouge import Rouge logging.basicConfig(level=logging.INFO) model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base",'roberta-base',tie_encoder_decoder=True) tokenizer = RobertaTokenizer.from_pretrained("roberta-base") print("begin") # load train and validation data train_dataset = load_from_disk("train_dataset") val_dataset = load_from_disk('val_dataset') print("done") print('begin') # load rouge for validation rouge = Rouge() print('done') # set decoding params model.config.decoder_start_token_id = tokenizer.bos_token_id model.config.eos_token_id = tokenizer.eos_token_id model.config.max_length = 142 model.config.min_length = 56 model.config.no_repeat_ngram_size = 3 model.early_stopping = True model.length_penalty = 2.0 model.num_beams = 4 encoder_length = 512 decoder_length = 128 batch_size = 16 def compute_metrics(pred): labels_ids = pred.label_ids pred_ids = pred.predictions # all unnecessary tokens are removed pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) labels_ids[labels_ids == -100] = tokenizer.eos_token_id label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True) rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid return { "rouge2_precision": round(rouge_output.precision, 4), "rouge2_recall": round(rouge_output.recall, 4), "rouge2_fmeasure": round(rouge_output.fmeasure, 4), } # set training arguments - these params are not really tuned, feel free to change training_args = TrainingArguments( output_dir="robertashare", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, evaluate_during_training=True, do_train=True, do_eval=True, logging_steps=1000, save_steps=1000, eval_steps=1000, overwrite_output_dir=True, warmup_steps=2000, save_total_limit=1, fp16=True, num_train_epochs=30, eval_accumulation_steps=1, gradient_accumulation_steps=8 ) # instantiate trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=val_dataset, ) print('begin to train') # start training trainer.train() after training 1000 steps ,I begin to eval it , the following error happened File “train_roberta.py”, line 85, in trainer.train() File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/transformers/trainer.py”, line 786, in train self._maybe_log_save_evalute(tr_loss, model, trial, epoch) File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/transformers/trainer.py”, line 843, in _maybe_log_save_evalute metrics = self.evaluate() File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/transformers/trainer.py”, line 1251, in evaluate output = self.prediction_loop(eval_dataloader, description=“Evaluation”) File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/transformers/trainer.py”, line 1381, in prediction_loop metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids)) File “train_roberta.py”, line 41, in compute_metrics pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py”, line 2886, in batch_decode for seq in sequences File 
“/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py”, line 2886, in for seq in sequences File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/transformers/tokenization_utils.py”, line 777, in decode filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens) File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/transformers/tokenization_utils.py”, line 723, in convert_ids_to_tokens index = int(index) TypeError: only size-1 arrays can be converted to Python scalars How can I fix this ?
Have a look at what shape your arrays are. In particular, is your Val data being read in as a different shape to your Train data? If they come in as the same, are you processing them in a way that changes them? If you find an array of the wrong shape, you might need a command something like this input_ids = torch.cat(input_ids, dim=0)
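One common cause in this setup (an assumption, since the actual shapes aren't shown): the Trainer passes raw model outputs to compute_metrics, so pred.predictions can be a tuple of arrays containing logits rather than token ids, and batch_decode then receives a 3-D array. A sketch of guarding against that, reusing the roberta-base tokenizer from the script above:
import numpy as np
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

def compute_metrics(pred):
    pred_ids = pred.predictions
    if isinstance(pred_ids, tuple):   # Trainer may return several output arrays
        pred_ids = pred_ids[0]
    if pred_ids.ndim == 3:            # logits (batch, seq_len, vocab) -> token ids
        pred_ids = np.argmax(pred_ids, axis=-1)
    labels_ids = pred.label_ids.copy()
    labels_ids[labels_ids == -100] = tokenizer.eos_token_id
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
    return {"n_eval_examples": len(pred_str)}  # plug the ROUGE computation back in here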
0
huggingface
🤗Transformers
Control EncoderDecoderModel to generate tokens step by step
https://discuss.huggingface.co/t/control-encoderdecodermodel-to-generate-tokens-step-by-step/1756
Hey, I am writing a model and the baseline is bert2bert for text summarization. But I want to add a specific layer above the Decoder. For example , I want to change the LMhead of Decoder by concatenating another vector. But the DecoderModel outputs all the hidden states at once. I want to control it for step by step decoding. In other words. I want to use the concatenated vector as the hidden state for generation and use the generated word vector for next step’s input. How can I change the model or call the interface properly ?
My expression may not be very clear. I want to say: in EncoderDecoderModel, I load the model like this: model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') I want to modify the structure of the LM head or manipulate the single-step output hidden state of the decoder to make a use-case-specific generation. Is it possible?
0
huggingface
🤗Transformers
[Solved] Issue on translating DPR to TFDPR on loading pytorch weights to TF model
https://discuss.huggingface.co/t/solved-issue-on-translating-dpr-to-tfdpr-on-loading-pytorch-weights-to-tf-model/1764
Hi Huggingface team, I would love to contribute translating DPR to TFDPR model . This is my first time trying to contribute so please bear with me for my simple question. I have followed @sshleifer 's great PR on TFBart model on 4 files : __init__.py , convert_pytorch_checkpoint_to_tf2.py , utils/dummy_tf_objects.py and (newly created) modeling_tf_dpr.py Now the TF code can run properly with examples in Pytorch’s DPR (correct input/output tensors) . However, it seems the method .from_pretrained(..., from_pt=True) does not work properly as there are always warning messages that weights could not be loaded : Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFDPRContextEncoder: ['ctx_encoder.bert_model.encoder.layer.7.intermediate.dense.bias', 'ctx_encoder.bert_model.encoder.layer.7.attention.output.dense.weight', ... (long list of every variable)] I have a colab code for the modified 4 mentioned files here : https://colab.research.google.com/drive/1lU4fx7zkr-Y3CXa3wmHIY8yJhKdiN3DI?usp=sharing (It would be great if Sam or Patrick @patrickvonplaten can take a quick look, as this may be an easy fix) (to easily navigate the change, please “find on page” for e.g. TFDPRContextEncoder )
Try passing, for example, name='ctx_encoder' as a kwarg to more components (like you do for layers) and calling super().__init__(config, **kwargs) inside of them, as in the attached screenshot (image omitted). That's how I debugged some TensorFlow layer-name issues (like the one you have), but I'm no expert.
0
huggingface
🤗Transformers
Tfmodelforquestionanswering in eval mode
https://discuss.huggingface.co/t/tfmodelforquestionanswering-in-eval-mode/1801
how can i put the tfmodelforquestionanswering in eval mode ? or is it not needed ? i see the eval function available in bertmodel.
Hi, not sure if I understand your question correctly, but TF models do not have an eval mode (unlike PyTorch). With a TF Keras model, for example, we can use model.evaluate() or model.predict() directly and Keras will automatically turn on eval mode for us.
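A minimal sketch (bert-base-uncased is used only to show the mechanics; its QA head is not fine-tuned) of the equivalent control in TF: the training flag, which disables dropout when set to False and is handled automatically by predict()/evaluate():
from transformers import BertTokenizer, TFBertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForQuestionAnswering.from_pretrained("bert-base-uncased")

inputs = tokenizer("Who wrote it?", "It was written by Jane.", return_tensors="tf")
outputs = model(inputs, training=False)  # explicit "eval mode" for a direct call
print(outputs[0].shape)  # start logits: (batch_size, sequence_length)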
0
huggingface
🤗Transformers
Multiple choice with variable length options
https://discuss.huggingface.co/t/multiple-choice-with-variable-length-options/1742
Hello! I have a beginner question. I am trying to create a model that makes predictions on the QAngaroo dataset with DistilBert. In this dataset, we get a list of supports and some candidate answers (between 2~100), and we need to choose the right answer for the model. Right now, I am trying to use TFDistilBertForMultipleChoice, but I am running into a problem since num_choices is a value that is fixed with the entire batch size. I was wondering how I could go about making that value dynamic. I also had some doubts about the way input goes into the model, from the code example at https://huggingface.co/transformers/master/model_doc/distilbert.html#transformers.TFDistilBertForMultipleChoice.call 2 encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='tf', padding=True) inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} outputs = model(inputs) # batch size is 1 Here we put prompt down one time for each choice, but won’t this result in a really slow model if there are lots of choices that all share the same prompt? Can anyone help me understand how to fix this, or tell me if I’m going about it the wrong way? I’m starting to think multiple choice isn’t the right way to go about it, and it would be better to ignore the choices given and use a Question Answering model instead, since all of the choices are contained somewhere in the input. But this seems a bit ‘wrong’ to me, since many choices appear several times in the text and the question answering model only takes in 1 start position/end position from the text, and the correct entity might occur several times in the text. How can training work if it needs to predict the right location for a string that appears several times? Especially when there isn’t really a correct location in the first place, since the QAngaroo dataset tests multi-hop reasoning, which I wouldn’t expect to be associated to any single occurrence of the answer string. Any help would be greatly appreciated!
Hi @wolfblue, I believe you can formulate this problem as seq2seq and use a model such as T5 to solve it. You can give a (dynamic number of) choices as context, and in fact with this method we can combine other types of QA (e.g. context-based [reading comprehension] QA) as an augmented dataset too. This approach results in SOTA on many datasets: https://github.com/allenai/unifiedqa 5
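A sketch of that formulation (the checkpoint name allenai/unifiedqa-t5-small and the newline-separated, lowercased input format are assumptions based on the linked repo; the question, choices and context strings are placeholders):
from transformers import T5Tokenizer, T5ForConditionalGeneration

name = "allenai/unifiedqa-t5-small"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

question = "which ocean borders nigeria to the south?"
choices = ["(a) atlantic ocean", "(b) pacific ocean", "(c) indian ocean"]  # any number of choices
context = "nigeria lies on the gulf of guinea, part of the atlantic ocean."  # supports go here

text = f"{question} \n {' '.join(choices)} \n {context}"
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))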
0
huggingface
🤗Transformers
TransfoXLLMHeadModel - Trying to create tensor with negative dimension -199500
https://discuss.huggingface.co/t/transfoxllmheadmodel-trying-to-create-tensor-with-negative-dimension-199500/1768
Hi all, I am trying to create a TransfoXLLMHeadModel using a custom vocabulary, but I keep coming across the same issue: RuntimeError Traceback (most recent call last) in ----> 1 model = TransfoXLModel(config=cfg) ~/.local/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py in init(self, config) 736 737 self.word_emb = AdaptiveEmbedding( –> 738 config.vocab_size, config.d_embed, config.d_model, config.cutoffs, div_val=config.div_val 739 ) 740 ~/.local/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py in init(self, n_token, d_embed, d_proj, cutoffs, div_val, sample_softmax) 421 l_idx, r_idx = self.cutoff_ends[i], self.cutoff_ends[i + 1] 422 d_emb_i = d_embed // (div_val ** i) –> 423 self.emb_layers.append(nn.Embedding(r_idx - l_idx, d_emb_i)) 424 self.emb_projs.append(nn.Parameter(torch.FloatTensor(d_proj, d_emb_i))) 425 ~/.local/lib/python3.6/site-packages/torch/nn/modules/sparse.py in init(self, num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, _weight) 107 self.scale_grad_by_freq = scale_grad_by_freq 108 if _weight is None: –> 109 self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim)) 110 self.reset_parameters() 111 else: RuntimeError: Trying to create tensor with negative dimension -199500: [-199500, 8] The code I am running is the following: tokenizer = TransfoXLTokenizer(vocab_file=’/path/to/vocab.txt’) Note: tokenizer.vocab_size == 500 cfg = TransfoXLConfig( vocab_size=tokenizer.vocab_size, d_model=512, d_embed=512, n_head=8, d_head=64, n_layer=12, d_inner=2048 ) model = TransfoXLLMHeadModel(config=cfg) Does anyone have any insight as to what may be going wrong? Any help is greatly appreciated! Thank you, Victor
The solution is to change the cutoff for the adaptive embeddings, as mentioned by user TevenLeScao in this GitHub issue: https://github.com/huggingface/transformers/issues/8098#issuecomment-717914018 20 To summarize, you need to set: cfg = TransfoXLConfig(cutoffs=[0, x]) Where 0 < x < vocab_size If you don’t do this, the model will try to generate an embedding size of vocab_size - cutoffs[-1], which for a vocab_size < default vocab size, will be negative and throw an error.
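As a minimal sketch, a variant of the suggestion above that constructs without the error for a 500-token vocabulary (the exact cutoff value is just an example; what matters is that every cutoff stays strictly below vocab_size):
from transformers import TransfoXLConfig, TransfoXLLMHeadModel

vocab_size = 500
cfg = TransfoXLConfig(
    vocab_size=vocab_size,
    cutoffs=[vocab_size // 2],  # adaptive-embedding cluster boundary, must be < vocab_size
    d_model=512,
    d_embed=512,
    n_head=8,
    d_head=64,
    n_layer=12,
    d_inner=2048,
)
model = TransfoXLLMHeadModel(config=cfg)
print(sum(p.numel() for p in model.parameters()))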
0
huggingface
🤗Transformers
Trainer class, compute_metrics and EvalPrediction
https://discuss.huggingface.co/t/trainer-class-compute-metrics-and-evalprediction/1698
Hello everybody, I am trying to use my own metric for a summarization task passing the compute_metrics to the Trainer class. I would like to calculate rouge 1, 2, L between the predictions of my model (fine-tuned T5) and the labels. However, I have a problem understanding what the Trainer gives to the function. The EvalPrediction object should be composed of predictions and label_ids. To my understanding, since I am truncating the summaries to 150 characters, the predictions and label_ids should be vectors of size 150, or (batch_size, 150). Surprisingly, I get the predictions to be nested tuples, of size (23, 150, 32128) or (23, 12, 150, 64) or (23, 12, 512, 64), are these logits vectors? While the label_ids is a tuple with 23 vectors of size 150. Can you kindly help me to understand what I should expect from the object EvalPrediction? Thank you.
The predictions are the outputs of your model. Without seeing your model, no one can help you figure out what they are.
0
huggingface
🤗Transformers
Track multiple losses & different outputs size with Trainer and callbacks
https://discuss.huggingface.co/t/track-multiple-losses-different-outputs-size-with-trainer-and-callbacks/1759
Hi, I’ve built a model that optimize jointly 2 different models and I would like to track these into Tensorboard or Wandb. Right now, I had to subclass the following methods: training_step (to return all losses and the one to optmize), train (to manage the new output from training_step and to add all losses into metrics). Is there a better way to integrate this logic (especially to not subclass train which is a very heavy method) ? Also, can Trainer manage different output size (ex: logits at token level and logits at sentence level) ? It seems that all logits must have the same shape currently, hence reshaping everything with the same dimension may lead to waste of memory. Thanks
The Trainer class is not built to optimize two models at the same time, so no, there is no easier way than subclassing and overriding the training_step. In general, subclassing the Trainer and overriding the method(s) to fit your needs is the expected way, and we designed the Trainer API to make it as easy as possible. For predict/evaluate, yes, the Trainer will need tensors of the same size (with the exception of the batch dimension), otherwise it won't be able to concatenate all predictions. This is something we'll look into more when we rewrite the token-classification examples (in the next few weeks).
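As one possible shape for that subclass, here is a sketch that overrides compute_loss (a newer, smaller hook than training_step) under the assumption that both loss terms come back from a single forward pass under the keys shown; the individual terms are sent to TensorBoard/W&B through self.log, which forwards to the logging callbacks. Adapt the output keys to your own model.
from transformers import Trainer

class TwoLossTrainer(Trainer):
    def compute_loss(self, model, inputs):
        outputs = model(**inputs)
        loss_a, loss_b = outputs["loss_a"], outputs["loss_b"]  # assumed output keys
        # Log each term separately; their sum is what actually gets optimized.
        self.log({"loss_a": loss_a.detach().item(), "loss_b": loss_b.detach().item()})
        return loss_a + loss_b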
0
huggingface
🤗Transformers
Can’t use DistributedDataParallel for training the EncoderDecoderModel
https://discuss.huggingface.co/t/cant-use-distributeddataparallel-for-training-the-encoderdecodermodel/1747
Hey ,I want to fine-tune the EncoderDecoderModel with 4 GPUs . And I use DistributedDataParallel for parallel training. My code just like this: from transformers import EncoderDecoderModel, BertTokenizer import torch import argparse import argparse import torch.multiprocessing as mp import torch.nn as nn import torch.distributed as dist def main(): parser = argparse.ArgumentParser() args = parser.parse_args() args.max_src_len = 512 args.max_dst_len = 128 args.gpus = 4 args.world_size = args.gpus args.epoches = 30 mp.spawn(train, nprocs=args.gpus, args=(args,)) def train(gpu, args): rank = gpu dist.init_process_group( backend='nccl', init_method='tcp://127.0.0.1:23456', world_size=args.world_size, rank=rank ) torch.manual_seed(0) model = EncoderDecoderModel.from_pretrained("bert2bert") torch.cuda.set_device(gpu) model = model.to(gpu) optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) model = nn.parallel.DistributedDataParallel(model, device_ids=[gpu]) dataset_path = 'dataset/example.json' vocab_path = 'dataset/vocab.txt' dataset = CNNDataset(dataset_path, vocab_path, args) train_sampler = torch.utils.data.distributed.DistributedSampler( dataset, num_replicas=args.world_size, rank=rank ) dataloader = DataLoader(dataset, batch_size=32, shuffle=False, num_workers=0, pin_memory=True, sampler=train_sampler, collate_fn=collate_fn) for epoch in range(args.epoches): for src, dst in dataloader: src = torch.stack(src).to(gpu) dst = torch.stack(dst).to(gpu) mask = (src!=0) mask = mask.long() outputs = model(input_ids=src, attention_mask=mask, decoder_input_ids=dst, labels=dst, return_dict=True) loss, logits = outputs.loss, outputs.logits optimizer.zero_grad() loss.backward() optimizer.step() if __name__ == '__main__': main() I got the following errors. – Process 0 terminated with the following error: Traceback (most recent call last): File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/torch/multiprocessing/spawn.py”, line 20, in _wrap fn(i, *args) File “/home/LAB/maoqr/yanghongzheng/demo_multi.py”, line 66, in train outputs = model(input_ids=src, attention_mask=mask, decoder_input_ids=dst, labels=dst, return_dict=True) File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl result = self.forward(*input, **kwargs) File “/home/LAB/maoqr/miniconda3/envs/py36/lib/python3.6/site-packages/torch/nn/parallel/distributed.py”, line 528, in forward self.reducer.prepare_for_backward([]) RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn’t able to locate the output tensors in the return value of your module’s forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). How can I fix this issue ? It seems like the outputs of the EncoderDeocderModel can’t be located. Thank you very much.
As the error tells you, try adding find_unused_parameters=True to DistributedDataParallel(). That should help. Read more here 382.
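Concretely, this is the one-line change in the training loop from the question (model and gpu are the variables already defined there):
import torch.nn as nn

# Allow DDP to handle parameters that don't receive gradients in a given forward pass.
model = nn.parallel.DistributedDataParallel(
    model, device_ids=[gpu], find_unused_parameters=True
)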
0
huggingface
🤗Transformers
Forward-looking or left-context attention mask (left-to-right) generation with BertGeneration and RobertaForCausalLM
https://discuss.huggingface.co/t/forward-looking-or-left-context-attention-mask-left-to-right-generation-with-bertgeneration-and-robertaforcausallm/1753
Hi, I am trying to build an altered version of the models proposed in “Leveraging Pre-trained Checkpoints for Sequence Generation Tasks” by Rothe et al. (2020). In the paper they say that for the BERT-like architectures that are used for generation: “If not stated otherwise, the implementation of the decoder layers are also identical to the BERT implementation with two adjustments. First the self-attention mechanism is masked to look only at the left context.” I am using the RobertaForCausalLM class 3 as a basis, but the same would hold for the BertGeneration class 1. I do not see how this left-context or forward-looking attention mask is implemented. I see that I could provide it myself by passing it to the function, but I feel it is strange that is not noted in the code anywhere, as if I am missing something. If someone could point me out what I am missing or where I can find this attention mask, that would be very helpful. Thanks in advance, Claartje Barkhof
In other words, I am not sure where the ‘causal mask’ is implemented?
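For what it's worth, a sketch of where this happens in the library (based on the shared masking utilities): when a model's config has is_decoder=True, get_extended_attention_mask in modeling_utils combines a lower-triangular causal mask with the padding mask, so the left-context masking is applied automatically rather than inside the decoder layer code.
from transformers import RobertaConfig, RobertaForCausalLM, RobertaTokenizer

config = RobertaConfig.from_pretrained("roberta-base")
config.is_decoder = True  # this flag switches on the causal (left-context) masking
model = RobertaForCausalLM.from_pretrained("roberta-base", config=config)

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("Left-to-right generation", return_tensors="pt")
outputs = model(**inputs, return_dict=True)
print(outputs.logits.shape)  # (1, sequence_length, vocab_size)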
0
huggingface
🤗Transformers
How to integrate an AzureMLCallback for logging in Azure?
https://discuss.huggingface.co/t/how-to-integrate-an-azuremlcallback-for-logging-in-azure/1713
Hi! I saw that @sgugger recently refactored the way in which transformers integrates with tools to visualize logs in a more helpful way: https://github.com/huggingface/transformers/pull/7596 3 As I am running in Azure and using AzureML, I was trying to see if I could do something similar. Prior to the PR above, I could add a pair of very simple snippets 2 that allowed to send information to Azure via https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py#log-name--value--description---- 4 I tried to replicate the above with the new approach, but I may be missing something obvious. I created a new callback class in integrations.py class AzureMLCallback(TrainerCallback): def __init__(self, azureml_run=None): assert ( _has_azureml ), "AzureMLCallback requires azureml to be installed. Run `pip install azureml-sdk`." self.azureml_run = azureml_run def on_init_end(self, args, state, control, **kwargs): if self.azureml_run is None and state.is_world_process_zero: self.azureml_run = Run.get_context() def on_log(self, args, logs=None, **kwargs): if self.azureml_run: for k, v in logs.items(): if isinstance(v, (int, float)): self.azureml_run.log(k, v, description=k) and did another bunch of other minor changes 2. Upon installing on a machine my fork of the library with pip install git+https://github.com/davidefiocco/transformers.git@c32718170899d1110a77ab116a2a60bbe326829e --quiet when running python run_glue.py --model_name_or_path bert-base-cased \ --task_name CoLA \ --do_train \ --do_eval \ --train_file ./glue_data/CoLA/train.tsv \ --validation_file ./glue_data/CoLA/dev.tsv \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir output \ --evaluation_strategy steps \ --logging_steps 8 \ --eval_steps 4 I get the error: Traceback (most recent call last): File “run_glue.py”, line 417, in main() File “run_glue.py”, line 352, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File “/usr/local/lib/python3.6/dist-packages/transformers/trainer.py”, line 792, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File “/usr/local/lib/python3.6/dist-packages/transformers/trainer.py”, line 853, in _maybe_log_save_evaluate metrics = self.evaluate() File “/usr/local/lib/python3.6/dist-packages/transformers/trainer.py”, line 1291, in evaluate self.log(output.metrics) File “/usr/local/lib/python3.6/dist-packages/transformers/trainer.py”, line 1044, in log self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs) File “/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py”, line 366, in on_log return self.call_event(“on_log”, args, state, control, logs=logs) File “/usr/local/lib/python3.6/dist-packages/transformers/trainer_callback.py”, line 382, in call_event **kwargs, TypeError: on_log() got multiple values for argument ‘logs’ So there’s likely something wrong in my AzureMLCallback… can someone help me spot the issue? If you wish to replicate the behavior you can use this notebook https://colab.research.google.com/gist/davidefiocco/416c382cd51ad58cabf3eb940c040220/azureml-logging-on-transformers.ipynb 1 while the source code is https://github.com/davidefiocco/transformers/tree/c32718170899d1110a77ab116a2a60bbe326829e 1
Hi there! Glad to see you try the new callbacks! The mistake is that you did not keep state and control, which are positional arguments. Just replace your on_log definition with: def on_log(self, args, state, control, logs=None, **kwargs): and you'll be fine!
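Putting it together, a sketch of the callback with the corrected signature (every TrainerCallback hook receives args, state, control positionally before any keyword arguments; the azureml import is deferred here, assuming azureml-sdk is installed):
from transformers import TrainerCallback

class AzureMLCallback(TrainerCallback):
    def __init__(self, azureml_run=None):
        self.azureml_run = azureml_run

    def on_init_end(self, args, state, control, **kwargs):
        if self.azureml_run is None and state.is_world_process_zero:
            from azureml.core.run import Run
            self.azureml_run = Run.get_context()

    def on_log(self, args, state, control, logs=None, **kwargs):
        if self.azureml_run and logs:
            for k, v in logs.items():
                if isinstance(v, (int, float)):
                    self.azureml_run.log(k, v, description=k)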
0
huggingface
🤗Transformers
RuntimeError: arguments are located on different GPUs
https://discuss.huggingface.co/t/runtimeerror-arguments-are-located-on-different-gpus/1704
Hi all, I’m facing these issue in these days but I haven’t found a solution. I’m using the simpletransformers, a wrapper for your library, the main code of the model that I’m trying to train, that is T5, is here 2. I’ve trained the model correctly and I would like to continue the training for more epochs, so I’ve loaded the model simply using: model = T5Model("/PATHtoCHECKPOINT") Then I’ve started the training with: model.train_model(train_df, eval_data=eval_df) The training starts and the first observation is that the loss spikes at a higher value w.r.t the value of the last checkpoint, as shown in the image: W&amp;B Chart 21_10_2020, 14_56_37750×525 11.9 KB After some steps the loss starts to decrease and then this error is thrown: Traceback (most recent call last): File "TINIA_doubleTraining.py", line 112, in <module> model.train_model(train_df, eval_data=eval_df) #,args={'num_train_epochs': 5, 'learning_rate':2e-5}) #args=model_args, sacreBLEU=sacreBLEU) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/simpletransformers/t5/t5_model.py", line 165, in train_model global_step, training_details = self.train( File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/simpletransformers/t5/t5_model.py", line 418, in train results = self.eval_model( File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/simpletransformers/t5/t5_model.py", line 613, in eval_model result = self.evaluate(eval_dataset, output_dir, verbose=verbose, silent=silent, **kwargs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/simpletransformers/t5/t5_model.py", line 673, in evaluate outputs = model(**inputs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 1 on device 5. 
Original Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/datadrive/data/T5/transformers/src/transformers/modeling_t5.py", line 1169, in forward encoder_outputs = self.encoder( File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/datadrive/data/T5/transformers/src/transformers/modeling_t5.py", line 711, in forward inputs_embeds = self.embed_tokens(input_ids) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 124, in forward return F.embedding( File "/home/ubuntu/anaconda3/envs/nlp_venv/lib/python3.8/site-packages/torch/nn/functional.py", line 1814, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: arguments are located on different GPUs at /opt/conda/conda-bld/pytorch_1595629395347/work/aten/src/THC/generic/THCTensorIndex.cu:403 I’m using AWS EC2 with 8 GPUs, so I’ve tried the same code with a machine with 1 GPU only and the code starts, has this peaks at the beginning of each epoch but I don’t get any error, therefore the problem is related to the DataParallel. Any suggestion? Thanks!
It would be better if you create an issue in the simpletransformers repo.
0
huggingface
🤗Transformers
Running a Trainer in DistributedDataParallel mode
https://discuss.huggingface.co/t/running-a-trainer-in-distributeddataparallel-mode/1718
I am trying to train a model on four GPUs (AWS ml.p3.8xlarge). As far as I can tell, to get my model to train in DistributedDataParallel, I only need to specify some integer value for local_rank. But my understanding is that this will only distribute the training across a single GPU (whichever I specify with local_rank). What is the proper way to launch DistributedDataParallel training across all four GPUs using a Trainer? Do I have to launch something via the command line (as hinted at here https://github.com/huggingface/transformers/issues/1651 33)?
Hi @deppen8 Yes, you’ll need to use torch.distributed.launch for distributed training. See this command 140 for an example.
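As an illustration (the script name and arguments below are placeholders; use your own training script), the launcher spawns one process per GPU and passes --local_rank to each process, which TrainingArguments picks up automatically:
python -m torch.distributed.launch --nproc_per_node 4 run_my_training.py \
    --model_name_or_path bert-base-cased \
    --output_dir /tmp/output \
    --do_train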
0