docs | category | thread | href | question | context | marked
---|---|---|---|---|---|---|
huggingface | Beginners | Pre-Train LayoutLM | https://discuss.huggingface.co/t/pre-train-layoutlm/1350 | Hello,
We are using the pretrained LayoutLM model, which works well but only for English. We have many forms and invoices in other languages.
How can I pre-train the LayoutLM model on my own corpus?
Thank you. | Hi sharathmk99,
LayoutLM model is not currently available in the huggingface transformers library. If you want to add it that should be possible (though not simple). Alternatively, you could put in a suggestion and hope that someone else will incorporate it.
If you decide instead to pre-train a LayoutLM model using native Tensorflow or native PyTorch, the first question is whether you have enough data. How large is your corpus?
If your corpus is not large enough, you might be better off using a different model that has been pre-trained for the language(s) you need.
Do you definitely want to pre-train (from randomly-initialized weights), or would it work to fine-tune? I don’t know what results people get for fine-tuning with a new language. I expect it would not work at all if the alphabet is different, but it might be at least partly effective if the languages are quite similar (e.g. English + French, which have almost the same alphabet and many of the same word-pieces). | 0 |
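Not from the original thread, but a minimal sketch of the distinction discussed above (fine-tuning a released checkpoint vs. pre-training from randomly initialized weights), using BERT as a stand-in since LayoutLM was not in the library at the time; the multilingual checkpoint name and vocab size are illustrative assumptions:
from transformers import BertConfig, BertForMaskedLM
# Fine-tuning / continued pre-training: start from released weights.
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
# Pre-training from scratch: the same architecture, but randomly initialized from a config.
config = BertConfig(vocab_size=50000)  # vocab_size is a placeholder assumption
model_from_scratch = BertForMaskedLM(config)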
huggingface | Beginners | ONNX Conversion - transformers.onnx vs convert_graph_to_onnx.py | https://discuss.huggingface.co/t/onnx-conversion-transformers-onnx-vs-convert-graph-to-onnx-py/10278 | Hey Folks,
I am attempting to convert a RobertaForSequenceClassification pytorch model (fine tuned for classification from distilroberta-base) to ONNX using transformers 4.9.2. When using the transformers.onnx package, the classifier seems to be lost:
Some weights of the model checkpoint at {} were not used when initializing RobertaModel: ['classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
When I specify a feature (--feature sequence-classification) I get an error stating only default is supported. However, when I revert to the convert_graph_to_onnx script that the documentation says is being deprecated, I am able to convert successfully using the --pipeline sentiment-analysis flag.
Is this expected? Does transformers.onnx not support RobertaForSequenceClassification yet, or am I missing some step? | Hi @meggers.
I’m like you: I know how to use the old method (transformers/convert_graph_to_onnx.py) but not the new one (transformers.onnx) to get the quantized onnx version of a Hugging Face task model (for example: a Question-Answering model).
In order to illustrate it, I did publish this notebook in Colab: ONNX Runtime with transformers.onnx for HF tasks models (for example: QA model) (not only with transformers/convert_graph_to_onnx.py) 6
Hope that @lysandre @mfuntowicz @valhalla @lewtun will have some time to complete the online documentation Exporting transformers models and/or to update microsoft tutorials about onnx 1.
Other topics about this subject:
Inference with Finetuned BERT Model converted to ONNX does not output probabilities 1
Gpt2 inference with onnx and quantize
Got ONNXRuntimeError when try to run BART in ONNX format #12851
There is as well the Accelerate Hugging Face models page from microsoft but the notebooks look very complicated (heavy code). | 0 |
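Not from the thread: a hedged sketch of the two export paths being compared, with placeholder model paths; exact flag support depends on the transformers version (sequence-classification became an accepted --feature value only in releases after 4.9):
# old (deprecated) script: exports with the task head selected via a pipeline name
python -m transformers.convert_graph_to_onnx --framework pt --pipeline sentiment-analysis --model ./my-finetuned-roberta onnx/model.onnx
# new package: works once the sequence-classification feature is supported for the architecture
python -m transformers.onnx --model=./my-finetuned-roberta --feature=sequence-classification onnx/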
huggingface | Beginners | Visualize Loss without tensorboard | https://discuss.huggingface.co/t/visualize-loss-without-tensorboard/9523 | Hello,
Is there any way to visualize the loss curves of a Trainer model without TensorBoard? I am using Jupyter Lab and PyTorch, and TensorBoard refuses to work.
Cheers | Met the same problem. I’ve set the eval_step and eval_strategy, but there’s no log in the logging_dir at all. | 0 |
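Not part of the original thread, but a minimal sketch of plotting the Trainer’s logged training loss without TensorBoard, assuming training has already run on a Trainer instance named trainer and that logging_steps was set:
import matplotlib.pyplot as plt
# trainer.state.log_history is a list of dicts logged every `logging_steps`
history = trainer.state.log_history
steps = [h["step"] for h in history if "loss" in h]
losses = [h["loss"] for h in history if "loss" in h]
plt.plot(steps, losses)
plt.xlabel("step")
plt.ylabel("training loss")
plt.show()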
huggingface | Beginners | When finetuning Bert on classification task raised TypeError(f’Object of type {o.__class__.__name__} ’ TypeError: Object of type ndarray is not JSON serializable | https://discuss.huggingface.co/t/when-finetuning-bert-on-classification-task-raised-typeerror-fobject-of-type-o-class-name-typeerror-object-of-type-ndarray-is-not-json-serializable/11370 | Hello, I am trying to finetune bert on classification task but I am getting this error during the training.
Saving model checkpoint to /gpfswork/rech/kpf/umg16uw/results_hf/checkpoint-500
Configuration saved in /gpfswork/rech/kpf/umg16uw/results_hf/checkpoint-500/config.json
Model weights saved in /gpfswork/rech/kpf/umg16uw/results_hf/checkpoint-500/pytorch_model.bin
Traceback (most recent call last):
File “/gpfs7kw/linkhome/rech/genlig01/umg16uw/test/expe_5/traitements/Flaubert_huggingface.py”, line 225, in
train_results = trainer.train()
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/site-packages/transformers/trainer.py”, line 1325, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/site-packages/transformers/trainer.py”, line 1422, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/site-packages/transformers/trainer.py”, line 1537, in _save_checkpoint
self.state.save_to_json(os.path.join(output_dir, “trainer_state.json”))
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/site-packages/transformers/trainer_callback.py”, line 96, in save_to_json
json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + “\n”
File "/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/json/encoder.py”, line 201, in encode
chunks = list(chunks)
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/json/encoder.py”, line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/json/encoder.py”, line 405, in _iterencode_dict
yield from chunks
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/json/encoder.py”, line 325, in _iterencode_list
yield from chunks
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/json/encoder.py”, line 405, in _iterencode_dict
yield from chunks
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/json/encoder.py”, line 438, in _iterencode
o = _default(o)
File “/linkhome/rech/genlig01/umg16uw/.conda/envs/bert/lib/python3.9/json/encoder.py”, line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type ndarray is not JSON serializable
76%|███████▋ | 500/654 [02:52<00:52, 2.91it/s]
srun: error: r10i6n1: task 0: Exited with exit code 1
srun: Terminating job step 1775050.0
output file :
file in training… /gpfs7kw/linkhome/rech/genlig01/umg16uw/test/expe_5/dataset/train_corpus/train_80tr_moins_20t/80tr/corpusIxAug_et_Or80tr.xlsx
Filename in processed… corpusIxAug_et_Or80tr
Number of sentences 18568.00…
Type of preprocessing… verbatim
Train : 13926 Val : 4642
{‘loss’: 1.0099, ‘learning_rate’: 4.9923547400611625e-05, ‘epoch’: 0.0}
{‘loss’: 0.872, ‘learning_rate’: 4.235474006116208e-05, ‘epoch’: 0.46}
{‘eval_loss’: 0.5922592878341675, ‘eval_accuracy’: 0.7615252046531668, ‘eval_f1’: array([0.64872657, 0.83726867, 0.71302958]), ‘eval_precision’: array([0.62674095, 0.80523732, 0.79571811]), ‘eval_recall’: array([0.67231076, 0.87195392, 0.64590876]), ‘eval_f1_mi’: 0.7615252046531666, ‘eval_precision_mi’: 0.7615252046531668, ‘eval_recall_mi’: 0.7615252046531668, ‘eval_f1_ma’: 0.733008272114256, ‘eval_precision_ma’: 0.7425654572607411, ‘eval_recall_ma’: 0.7300578132910654, ‘eval_runtime’: 9.6426, ‘eval_samples_per_second’: 481.407, ‘eval_steps_per_second’: 7.571, ‘epoch’: 0.46}
{‘loss’: 0.6014, ‘learning_rate’: 3.4709480122324164e-05, ‘epoch’: 0.92}
{‘eval_loss’: 0.30887845158576965, ‘eval_accuracy’: 0.8862559241706162, ‘eval_f1’: array([0.80689306, 0.92978868, 0.8742268 ]), ‘eval_precision’: array([0.82146543, 0.95429104, 0.83191629]), ‘eval_recall’: array([0.79282869, 0.90651307, 0.92107169]), ‘eval_f1_mi’: 0.8862559241706162, ‘eval_precision_mi’: 0.8862559241706162, ‘eval_recall_mi’: 0.8862559241706162, ‘eval_f1_ma’: 0.8703028482577086, ‘eval_precision_ma’: 0.8692242527354628, ‘eval_recall_ma’: 0.873471147629887, ‘eval_runtime’: 9.6181, ‘eval_samples_per_second’: 482.632, ‘eval_steps_per_second’: 7.59, ‘epoch’: 0.92}
{‘loss’: 0.3815, ‘learning_rate’: 2.7064220183486238e-05, ‘epoch’: 1.38}
{‘eval_loss’: 0.16964389383792877, ‘eval_accuracy’: 0.9467901766479966, ‘eval_f1’: array([0.9054878 , 0.96444059, 0.94674556]), ‘eval_precision’: array([0.92427386, 0.94437367, 0.96749811]), ‘eval_recall’: array([0.8874502 , 0.98537882, 0.92686459]), ‘eval_f1_mi’: 0.9467901766479966, ‘eval_precision_mi’: 0.9467901766479966, ‘eval_recall_mi’: 0.9467901766479966, ‘eval_f1_ma’: 0.9388913189246848, ‘eval_precision_ma’: 0.9453818807708362, ‘eval_recall_ma’: 0.933231203841253, ‘eval_runtime’: 9.402, ‘eval_samples_per_second’: 493.727, ‘eval_steps_per_second’: 7.764, ‘epoch’: 1.38}
{‘loss’: 0.2669, ‘learning_rate’: 1.9418960244648318e-05, ‘epoch’: 1.83}
{‘eval_loss’: 0.10839153826236725, ‘eval_accuracy’: 0.9648858250753986, ‘eval_f1’: array([0.93973442, 0.97762021, 0.96196232]), ‘eval_precision’: array([0.96436059, 0.97783688, 0.9448324 ]), ‘eval_recall’: array([0.91633466, 0.97740363, 0.97972484]), ‘eval_f1_mi’: 0.9648858250753986, ‘eval_precision_mi’: 0.9648858250753986, ‘eval_recall_mi’: 0.9648858250753986, ‘eval_f1_ma’: 0.9597723163259425, ‘eval_precision_ma’: 0.9623432895564524, ‘eval_recall_ma’: 0.9578210438568343, ‘eval_runtime’: 9.6063, ‘eval_samples_per_second’: 483.223, ‘eval_steps_per_second’: 7.599, ‘epoch’: 1.83}
{‘loss’: 0.1962, ‘learning_rate’: 1.1773700305810397e-05, ‘epoch’: 2.29}
{‘eval_loss’: 0.07769232243299484, ‘eval_accuracy’: 0.978026712623869, ‘eval_f1’: array([0.96184739, 0.98504179, 0.97815004]), ‘eval_precision’: array([0.96963563, 0.9781564 , 0.98388278]), ‘eval_recall’: array([0.95418327, 0.99202481, 0.97248371]), ‘eval_f1_mi’: 0.978026712623869, ‘eval_precision_mi’: 0.978026712623869, ‘eval_recall_mi’: 0.978026712623869, ‘eval_f1_ma’: 0.9750130736531469, ‘eval_precision_ma’: 0.9772249371959657, ‘eval_recall_ma’: 0.9728972620291924, ‘eval_runtime’: 9.458, ‘eval_samples_per_second’: 490.8, ‘eval_steps_per_second’: 7.718, ‘epoch’: 2.29}
and then it stops and causes the error
script: | It looks like your compute_metrics function is returning NumPy arrays, which is not supported. | 0 |
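A hedged sketch of the fix suggested above: make compute_metrics return plain Python floats instead of NumPy arrays, since trainer_state.json is serialized with the standard json module (the metric choices here are assumptions):
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="macro")
    return {
        "accuracy": float(accuracy_score(labels, preds)),
        # cast NumPy scalars to JSON-serializable Python floats
        "precision": float(precision),
        "recall": float(recall),
        "f1": float(f1),
    }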
huggingface | Beginners | Wav2vec: how to run decoding with a language model? | https://discuss.huggingface.co/t/wav2vec-how-to-run-decoding-with-a-language-model/6055 | Hello.
I am fine-tuning wav2vec ("wav2vec2-large-lv60") using my own dataset. I followed Patrick’s tutorial (Fine-Tune Wav2Vec2 for English ASR in Hugging Face with 🤗 Transformers) and successfully finished the fine-tuning (thanks for the very nice tutorial).
Now, I would like to run decoding with a language model and have a few questions.
Can we run decoding with a language model directly from huggingface?
If not, how can I get the wave2vec model compatible to the fairseq decoding script (fairseq/examples/speech_recognition/infer.py)?
I did the following steps, but it failed:
Create ‘.pt’ file from the finetuning checkpoint
def save_model(my_checkpoint_path):
    model = Wav2Vec2ForCTC.from_pretrained(my_checkpoint_path)
    torch.save(model.state_dict(), "my_model.pt")
Decoding
I used the decoding step command from the following webpage fairseq/README.md at master · pytorch/fairseq · GitHub 19
$subset=dev_other
python examples/speech_recognition/infer.py /checkpoint/abaevski/data/speech/libri/10h/wav2vec/raw --task audio_pretraining
--nbest 1 --path /path/to/model --gen-subset $subset --results-path /path/to/save/results/for/sclite --w2l-decoder kenlm
--lm-model /path/to/kenlm.bin --lm-weight 2 --word-score -1 --sil-weight 0 --criterion ctc --labels ltr --max-tokens 4000000
--post-process letter
I replaced /path/to/model with “my_model.pt”.
Then, I am getting the following error message.
Traceback (most recent call last):
File “/mount/fairseq/examples/speech_recognition/infer.py”, line 427, in
cli_main()
File “/mount/fairseq/examples/speech_recognition/infer.py”, line 423, in cli_main
main(args)
File “/mount/fairseq/examples/speech_recognition/infer.py”, line 229, in main
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
File “/mount/fairseq/fairseq/checkpoint_utils.py”, line 370, in load_model_ensemble_and_task
state = load_checkpoint_to_cpu(filename, arg_overrides)
File “/mount/fairseq/fairseq/checkpoint_utils.py”, line 304, in load_checkpoint_to_cpu
state = _upgrade_state_dict(state)
File “/mount/fairseq/fairseq/checkpoint_utils.py”, line 456, in _upgrade_state_dict
{“criterion_name”: “CrossEntropyCriterion”, “best_loss”: state[“best_loss”]}
KeyError: ‘best_loss’
When I googled it, this seems relevant to removal of the optimization history logs:
This happens because we remove the useless optimization history logs from the model to reduce the file size. Only the desired model weights are kept to release. As a result, if you directly load the model, error will be reported that some logs are missed.
So how can I save the finetuning model compatible to “fairseq”. Should I store the optimization history? If yes, how can I do it? Does anyone have same experience? If yes, could you please share it with me? Thank you always. | Oh, I found the following previous discussion from the forum. Sorry for missing this one.
Language model for wav2vec2.0 decoding Models
Hello, I implemented wav2vec2.0 code and a language model is not used for decoding. How can I add a language model (let’s say a language model which is trained with KenLM) for decoding @patrickvonplaten ?
thanks in advance.
Note: I also opened an issue, but redirected here.
So I will check them out first. Thanks. | 0 |
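Not from the original thread: a hedged sketch of LM-boosted decoding directly on the Hugging Face model’s logits with pyctcdecode and a KenLM binary, which avoids the fairseq checkpoint conversion entirely (checkpoint and file paths are placeholders):
import soundfile as sf
import torch
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("my_checkpoint_path")
model = Wav2Vec2ForCTC.from_pretrained("my_checkpoint_path")
speech, rate = sf.read("sample.wav")  # assumed 16 kHz mono audio
input_values = processor(speech, sampling_rate=16_000, return_tensors="pt").input_values
# vocabulary tokens in id order, as expected by the decoder
vocab = [tok for tok, _ in sorted(processor.tokenizer.get_vocab().items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(vocab, kenlm_model_path="kenlm.bin")
with torch.no_grad():
    logits = model(input_values).logits[0].cpu().numpy()
print(decoder.decode(logits))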
huggingface | Beginners | How to tokenize input if I plan to train a Machine Translation model. I’m having difficulties with text_pair argument of Tokenizer() | https://discuss.huggingface.co/t/how-to-tokenize-input-if-i-plan-to-train-a-machine-translation-model-im-having-difficulties-with-text-pair-argument-of-tokenizer/11333 | Hi!
If I want to use an already trained Machine Translation model for inference, I do something along these lines:
from transformers import MarianMTModel, MarianTokenizer
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
sentence_en = "I am stuck with text_pair argument of Tokenizer."
input_ids = tokenizer(sentence_en, return_tensors="pt")["input_ids"]
generated_sequence = model.generate(input_ids=input_ids)[0].numpy().tolist()
translated_sentence = tokenizer.decode(generated_sequence, skip_special_tokens=True)
print(translated_sentence)
and it will return a German translation of the English sentence I fed to the model without a problem. In the example above I only needed to feed a source (English) sentence. However, if I now want to train a Machine Translation model from scratch, I will need to feed it pairs of English and German sentences, which in turn means that I need to provide the tokenizer with an English-German pair (for simplicity I assume batches of size 1). Do I do it this way? (see below)
from transformers import MarianTokenizer
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
sentence_en = "I am stuck with text_pair argument of Tokenizer."
sentence_de = "Ich stecke mit text_pair Argument von Tokenizer fest."
encoded_input = tokenizer(text=sentence_en, text_pair=sentence_de)
If yes, I can’t make sense of my encoded_input, which looks like that:
{‘input_ids’: [38, 121, 21923, 33, 2183, 585, 25482, 14113, 7, 429, 2524, 7359, 3, 38, 492, 11656, 7662, 30, 2183, 585, 25482, 48548, 728, 21, 429, 2524, 7359, 17, 4299, 3, 0], ‘attention_mask’: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
There are no token_type_ids in the encoded_input. How can I supply it to the model for training if there is no way for it to know where the source English text ended and the target German one started? If I convert the above ids to tokens:
print(tokenizer.convert_ids_to_tokens(encoded_input["input_ids"]))
I get the following:
['▁I', '▁am', '▁stuck', '▁with', '▁text', '_', 'pair', '▁argument', '▁of', '▁To', 'ken', 'izer', '.', '▁I', 'ch', '▁ste', 'cke', '▁mit', '▁text', '_', 'pair', '▁Argu', 'ment', '▁von', '▁To', 'ken', 'izer', '▁', 'fest', '.', '</s>']
So, the tokenizer simply concatenated two sentences and tokenized the concatenated text. There are no separators or anything else, which would distinguish the source from the target.
What do I understand wrong? What is the right way to tokenize a source-target pair, having in mind that it will later be fed for a MT model for training?
Will appreciate help as I’ve been stuck with this simple issue for quite a while by now. | Hi,
To fine-tune a MarianMT (or any other seq2seq model in the library), you don’t need to feed the source and target sentences at once to the tokenizer. Instead, they should be tokenized separately:
from transformers import MarianTokenizer
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
input_ids = tokenizer("I am stuck with text_pair argument of Tokenizer.", return_tensors="pt").input_ids
labels = tokenizer("Ich stecke mit text_pair Argument von Tokenizer fest.", return_tensors="pt").input_ids
You can then train as follows:
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
This is because we feed the encoder of the seq2seq model only the encoded input sentence (as input_ids). The decoder’s output will then be compared against the labels to compute the loss.
The text_pair use case is only when we would provide sentence A [SEP] sentence B to a model, which is done for example when using BERT to classify the relationship between 2 sentences, or for question answering, where we feed question [SEP] context to the model. | 0 |
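A small follow-up sketch (not from the thread) for batched fine-tuning: when the target side is padded, padded positions in the labels are usually replaced with -100 so the loss ignores them; the batch contents here are just illustrative:
from transformers import MarianMTModel, MarianTokenizer
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
batch_src = ["I am stuck with text_pair argument of Tokenizer."]
batch_tgt = ["Ich stecke mit text_pair Argument von Tokenizer fest."]
inputs = tokenizer(batch_src, padding=True, truncation=True, return_tensors="pt")
labels = tokenizer(batch_tgt, padding=True, truncation=True, return_tensors="pt").input_ids
labels[labels == tokenizer.pad_token_id] = -100  # padded label positions are ignored by the loss
outputs = model(**inputs, labels=labels)
loss = outputs.loss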
huggingface | Beginners | Loading custom audio dataset and fine-tuning model | https://discuss.huggingface.co/t/loading-custom-audio-dataset-and-fine-tuning-model/8836 | Hi all. I’m very new to HuggingFace and I have a question that I hope someone can help with.
I was suggested the XLSR-53 (Wav2Vec) model for my use-case which is a speech to text model. However, the languages I require aren’t supported so I was told I need to fine-tune the model per my requirements. I’ve seen several documentation but they all use Common Voice which also doesn’t support what I need.
I have ~4 hours audio files and tsv files (annotations of the audio) but I am not sure how to load them and fine-tune the model with them. I can’t find much info online either. Is there any reference I can follow?
Any help would be appreciated. | @patrickvonplaten I am also trying this out for a similar use case but couldn’t find any example script so far for audio datasets other than Common Voice. I have several datasets which aren’t available on huggingface datasets, and because almost all the scripts rely so heavily on huggingface datasets, it’s hard to get my head around adapting them to my use cases. If you can suggest any resources or changes so that I can use my own dataset instead of Common Voice or any other dataset available on huggingface datasets, it would be of great help. | 0 |
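Not from the thread itself, but a hedged sketch of turning local audio files plus a TSV of annotations into a 🤗 dataset that the Wav2Vec2/XLSR fine-tuning scripts can consume (the column names and file paths are assumptions):
import pandas as pd
from datasets import Dataset, Audio
# TSV assumed to have columns "path" (audio file location) and "transcription"
df = pd.read_csv("annotations.tsv", sep="\t")
ds = Dataset.from_pandas(df)
# decode the files and resample to the 16 kHz expected by Wav2Vec2/XLSR
ds = ds.cast_column("path", Audio(sampling_rate=16_000)).rename_column("path", "audio")
print(ds[0]["audio"]["array"][:10], ds[0]["transcription"])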
huggingface | Beginners | Is BERT document embedding model? | https://discuss.huggingface.co/t/is-bert-document-embedding-model/11205 | Are BERT and its derivatives (like DistilBERT, RoBERTa, …) document embedding methods like Doc2Vec? | Do you mean they will map the words to vectors? Yes, they do, but it’s different from some methods like word2vec; I am not sure about Doc2Vec, though. For example, in word2vec, we give each word only one vector, and that’s it. This is not ideal since some words have different meanings in different contexts; for example, we have banks where we go to deposit or withdraw money, and we have river banks. Word2vec will give both banks the same vector, but in BERT, the vector is based on the context. | 0 |
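A hedged sketch (not from the thread) of getting a single document/sentence vector out of BERT by mean-pooling the contextual token embeddings, which is the usual way to approximate what Doc2Vec provides:
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("BERT embeddings depend on context.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
mask = inputs["attention_mask"].unsqueeze(-1)   # ignore padding positions
doc_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, 768)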
huggingface | Beginners | Use custom loss function for training ML task | https://discuss.huggingface.co/t/use-custom-loss-function-for-training-ml-task/11351 | Hello.
I’d like to train BERT from scratch on my custom corpus for the Masked Language Modeling task. But the corpus has one specific property: it is a sequence of numbers, and the absolute value of the difference between two words corresponds to their proximity. Therefore I guess I should use this difference (or something similar) as the loss function during training. Is it possible to use a custom loss function when training a BERT model for the MLM task? | You can compute the loss outside of your model since it returns the logits, and apply any function you like.
If your question is related to the Trainer, you should define a subclass with a compute_loss method. There is an example in the documentation (scroll a bit down). | 0 |
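A minimal sketch of the Trainer subclass mentioned above; the loss used here is just a placeholder, since the custom distance-based loss would be defined by you:
import torch
from transformers import Trainer
class CustomLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # placeholder: any differentiable function of logits and labels works here
        loss = torch.nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100
        )
        return (loss, outputs) if return_outputs else loss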
huggingface | Beginners | Model.generate() – IndexError: too many indices for tensor of dimension 2 | https://discuss.huggingface.co/t/model-generate-indexerror-too-many-indices-for-tensor-of-dimension-2/11316 | I’ve tried merging most of the code blocks below; but to sum up:
DistilGPT2 with extra tokens.
Google Colab
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=2,)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True, bos_token='<|startoftext|>', eos_token='<|endoftext|>', pad_token='<|pad|>')
tokenizer.pad_token = '<|pad|>'
tokenizer.add_special_tokens({'pad_token': '<|pad|>'})
tokenizer.add_tokens(["<SEP>"])
mappings = {"YES": 1, "NO": 0}
# newcolumn is labels..
data["newcolumn"] = data['newcolumn'].map(mappings)
from sklearn.model_selection import train_test_split
max_length = 1024
padding = True # "max_length" # True
X = list(data["document_plaintext"])
y = list(data["newcolumn"])
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
X_train_tokenized = tokenizer(X_train, padding=padding, truncation=True, max_length=max_length)
X_val_tokenized = tokenizer(X_val, padding=padding, truncation=True, max_length=max_length)
import torch
# Create torch dataset
class Dataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels=None):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
if self.labels:
item["labels"] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.encodings["input_ids"])
train_dataset = Dataset(X_train_tokenized, y_train)
eval_dataset = Dataset(X_val_tokenized, y_val)
from transformers import TrainingArguments
training_args = TrainingArguments("test_trainer",
per_device_train_batch_size=1,
gradient_accumulation_steps=2, # 2, with small batches
per_device_eval_batch_size=1,
)
model.resize_token_embeddings(len(tokenizer))
from transformers import Trainer
trainer = Trainer(model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
trainer.train()
model.generate() # This gives an error
model.to("cpu")
model.generate(tokenizer.encode('i enjoy walking with my cute dog', return_tensors='pt')) # This gives an error
Another problem is I still receive a missing padding_token error when training in batches despite trying many times to define it for the tokenizer.
The full error:
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-27-2f650aa8ce2f> in <module>()
1 model.to("cpu")
----> 2 model.generate(tokenizer.encode('i enjoy walking with my cute dog', return_tensors='pt'))
2 frames
/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
26 def decorate_context(*args, **kwargs):
27 with self.__class__():
---> 28 return func(*args, **kwargs)
29 return cast(F, decorate_context)
30
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs)
997 return_dict_in_generate=return_dict_in_generate,
998 synced_gpus=synced_gpus,
--> 999 **model_kwargs,
1000 )
1001
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
1301 continue # don't waste resources running the code we don't need
1302
-> 1303 next_token_logits = outputs.logits[:, -1, :]
1304
1305 # Store scores, attentions and hidden_states when required
IndexError: too many indices for tensor of dimension 2 | Could it be the vocabulary embedding dimensions not carrying over to text generation? | 0 |
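A hedged note on the thread above: .generate() expects a model with a language-modeling head, while AutoModelForSequenceClassification returns per-sequence logits of shape (batch, num_labels), which is why greedy search fails when it indexes a third dimension. A minimal sketch of generating with the proper head (assuming the base checkpoint is distilgpt2):
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
lm_model = AutoModelForCausalLM.from_pretrained("distilgpt2")
input_ids = tokenizer.encode("i enjoy walking with my cute dog", return_tensors="pt")
generated = lm_model.generate(input_ids, max_length=30, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(generated[0], skip_special_tokens=True))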
huggingface | Beginners | Fine-tuning XLM-RoBERTa for binary sentiment classification | https://discuss.huggingface.co/t/fine-tuning-xlm-roberta-for-binary-sentiment-classification/11337 | I’m trying to fine-tune xlm-roberta-base model for binary sentiment classification problem on review data.
I’ve implemented the code as follows:
Split data into train, validation set.
from sklearn.model_selection import train_test_split
train_texts, val_texts, train_labels, val_labels = train_test_split(sample['text'],
sample['sentiment'],
test_size=0.2,
stratify=sample['sentiment'])
Prepared the datasets for training:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
train_encodings = tokenizer(text=list(train_texts), max_length=200, truncation=True, padding=True, return_tensors='pt')
val_encodings = tokenizer(text=list(val_texts), max_length=200, truncation=True, padding=True, return_tensors='pt')
import torch
class Dataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_dataset = Dataset(train_encodings, list(train_labels))
val_dataset = Dataset(val_encodings, list(val_labels))
Then I setup the training parameters.
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir='/content/drive/MyDrive/Workshop/sentiment_analysis/model',
num_train_epochs=3,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
logging_dir='/content/drive/MyDrive/Workshop/sentiment_analysis/logs',
logging_steps=100,
)
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
from transformers import AutoModelForSequenceClassification, BertForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2).to(device)
And, finally used the Trainer API for training.
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
trainer.train()
Then I got the following error message:
***** Running training *****
Num examples = 800
Num Epochs = 3
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 300
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:9: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
if __name__ == '__main__':
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-19-d5efe8e9a1d1> in <module>()
6 )
7
----> 8 trainer.train()
7 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
2956
2957 if not (target.size() == input.size()):
-> 2958 raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
2959
2960 return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)
ValueError: Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 2]))
Where did I make mistake? | Can you verify that you prepared the data correctly for the model?
What I typically do is check random elements of the dataset, i.e.
encoding = train_dataset[0]
then verify some things like:
for k,v in encoding.items():
print(k, v.shape)
I also decode the input_ids of an example back to text to see whether it’s created correctly:
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.decode(encoding["input_ids"])
And check the corresponding label. Also, to check whether batches are created correctly, I typically create a dataloader, and verify some more:
from torch.utils.data import DataLoader
train_dataloader = DataLoader(train_dataset, batch_size=4, shuffle=True)
batch = next(iter(train_dataloader))
for k,v in batch.items():
print(k, v.shape)
The Trainer automatically batches examples together, using the default_data_collator. | 0 |
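A hedged extra check, not from the thread: with num_labels=2 the model picks its loss from the label dtype, and float labels fall through to multi-label BCE, which produces exactly this size mismatch. A minimal sketch of forcing integer class labels (the toy list is an assumption):
import torch
train_labels = [0, 1, 1, 0]  # placeholder for the pandas Series used above
labels = torch.tensor(train_labels, dtype=torch.long)
print(labels.dtype)  # torch.int64 -> single-label cross-entropy loss is used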
huggingface | Beginners | How to finetune Bert on aspect based sentiment analysis? | https://discuss.huggingface.co/t/how-to-finetune-bert-on-aspect-based-sentiment-analysis/11350 | Hello, I checked the huggingface library and only saw notebooks for fine-tuning on text classification. I am looking for a tutorial on fine-tuning for aspect-based sentiment analysis. Does it exist for BERT? | Can you clarify what you mean by aspect based sentiment analysis? | 0 |
huggingface | Beginners | Unsupported value type BatchEncoding returned by IteratorSpec._serialize | https://discuss.huggingface.co/t/unsupported-value-type-batchencoding-returned-by-iteratorspec-serialize/7535 | Hi all!
I’m having a go at fine-tuning BERT for a regression problem (given a passage of text, predict its readability score) as part of a Kaggle competition.
To do so I’m doing the following:
1. Loading BERT tokenizer and applying it to the dataset
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# df_raw is the training dataset
tokens = tokenizer(list(df_raw['excerpt']), padding='max_length', truncation=True, return_tensors="np")
2. Adding the target column and creating a TensorFlow dataset
tokens_w_labels = tokens.copy()
tokens_w_labels['target'] = df_raw['target']
tokens_dataset = tf.data.Dataset.from_tensor_slices(
tokens_w_labels
)
3. Loading a sequence classification model, and attempting to fine tune
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification
# num_labels=1 should produce regression (numerical) output
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
# Compile with relevant metrics / loss
model.compile(
optimizer='adam',
loss='mean_squared_error',
metrics=['mean_absolute_error', 'mean_squared_error'],
)
# Removed validation data for time being
model.fit(
tokens_dataset,
batch_size=8,
epochs=3
)
It’s at this step that I get an error: Unsupported value type BatchEncoding returned by IteratorSpec._serialize. I’ve tried a few different setups and I can’t figure out where the issue is.
The specific part of TensorFlow that the error code comes from is here 11.
Any pointers on what’s going wrong here? | Eventually got this working - appears that the error was in step 2, where I was combining the target column with the tokenized labels before creating the dataset.
I also needed to turn the tokenized labels into a dict.
# This version works
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(tokens_w_labels),
df_raw['target'].values
)) | 0 |
huggingface | Beginners | How to deal with of new vocabulary? | https://discuss.huggingface.co/t/how-to-deal-with-of-new-vocabulary/11295 | Hi, the project that I am working on has a lot of domain-specific vocabulary. Could you please suggest techniques for tuning BERT on domain data? I do have over 1 million unlabeled sentences. Hoping that should be enough to pre-train the language model.
My end goal is to train a multi-class classification model. But, my primary interest is to pre-train the BERT language model on domain data (with 1 million texts), use the word embeddings from the trained model, and feed into traditional classification models like Random Forest. Thanks! | I’d also be very interested to see if/how this could be done for BART’s encoder since this might be a solution to this problem 9 | 0 |
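Not from the thread: a hedged sketch of continued masked-language-model pre-training of BERT on an unlabeled domain corpus with the Trainer (the file name, block length and training arguments are assumptions; the run_mlm.py example script does the same thing):
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
ds = load_dataset("text", data_files={"train": "domain_sentences.txt"})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, max_length=128),
            batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-domain-adapted", num_train_epochs=1),
    train_dataset=ds["train"],
    data_collator=collator,
)
trainer.train()
model.save_pretrained("bert-domain-adapted")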
huggingface | Beginners | How to ensure fast inference on both CPU and GPU with BertForSequenceClassification? | https://discuss.huggingface.co/t/how-to-ensure-fast-inference-on-both-cpu-and-gpu-with-bertforsequenceclassification/1694 | Hi!
I’d like to perform fast inference using BertForSequenceClassification on both CPUs and GPUs.
For the purpose, I thought that torch DataLoaders could be useful, and indeed on GPU they are.
Given a set of sentences sents I encode them and employ a DataLoader as in
encoded_data_val = tokenizer.batch_encode_plus(sents,
add_special_tokens=True,
return_attention_mask=True,
padding='longest',
truncation=True,
max_length=256,
return_tensors='pt')
input_ids_val = encoded_data_val['input_ids']
attention_masks_val = encoded_data_val['attention_mask']
dataset_val = TensorDataset(input_ids_val, attention_masks_val)
dataloader_val = DataLoader(dataset_val, sampler=SequentialSampler(dataset_val), batch_size=batch_size)
Afterwards, I perform inference on batches (using some value for batch_size) and retrieve softmax scores for my binary problem using
all_logits = np.empty([0,2])
for batch in dataloader_val:
batch = tuple(b.to(device) for b in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
}
with torch.no_grad():
outputs = model(**inputs)
logits = outputs[0]
all_logits = np.vstack([all_logits, torch.softmax(logits, dim=1).detach().cpu().numpy()])
This works well and allows me to enjoy fast inference on GPU varying the batch_size.
However, on CPU the code above runs 2x slower than a simpler version without DataLoader:
all_logits2 = np.empty([0,2])
for sent in sents:
input_ids = torch.tensor(tokenizer.encode(sent,
add_special_tokens=True,
return_attention_mask=False,
padding='longest',
truncation=True,
max_length=256)).unsqueeze(0).to(device) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0).to(device) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]
all_logits2 = np.vstack([all_logits2, torch.softmax(logits, dim=1).detach().cpu().numpy()])
Based on my crude benchmarks, I should stick the “DataLoader” version above if I want to run faster on GPUs by playing with the batch size, and the “DataLoader-free” version if I am running on CPUs.
The behavior does reproduce on this colab notebook, running all cells on a CPU first and subsequently comparing on a GPU runtime: https://colab.research.google.com/gist/davidefiocco/4d738ef9d3b1976187086ea31ca25ed2/batch-bert.ipynb 9
Am I missing something obvious? Can I tweak my snippet using DataLoaders so that it doesn’t result in a speed penalty when running on CPUs?
Thanks! | I think using ONNX Runtime runs about 2x faster on CPU. You can check my repo: https://github.com/BinhMinhs10/transformers_onnx or the Microsoft repo: https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/PyTorch_Bert-Squad_OnnxRuntime_CPU.ipynb. And I note that the huggingface notebook for inferring models with ONNX still has a bug :)) | 0 |
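Besides ONNX Runtime, another hedged option for CPU-only inference is PyTorch dynamic quantization of the fine-tuned model, plugging into the snippet from the question above (accuracy should be re-checked after quantizing):
import torch
# quantize the Linear layers to int8; typically a noticeable CPU speedup for BERT
quantized_model = torch.quantization.quantize_dynamic(
    model.cpu(), {torch.nn.Linear}, dtype=torch.qint8
)
with torch.no_grad():
    outputs = quantized_model(**inputs)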
huggingface | Beginners | True/False or Yes/No Question-answering? | https://discuss.huggingface.co/t/true-false-or-yes-no-question-answering/11271 | How can I perform a Question-Answering system that returns Yes or No to my questions?
For example, I give the context "Machine Learning is lorem ipsum etc etc", I ask the question "Does it talk about Machine Learning?", and it returns "Yes". Is it possible to do this? If so, what path do I need to follow? Is there any good model for this? | This kind of problem is a task in the SuperGLUE benchmark.
Generally, the approach is to fine-tune a BERT-like model with question/context pairs with a SEP token (or equivalent) separating them. Accordingly, labels correspond to yes/no answers.
You can use the BERT for sequence classification model to do so.
Do note though, that the objective of the BoolQ dataset is to give a yes/no answer to a question given a text that an answer may inferred from. It’s not expected to figure out whether that text is actually relevant to the question or contains the answer, so the BoolQ formulation may not be directly applicable for your task at hand. | 1 |
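A hedged sketch of the sequence-pair setup described above: encode (question, passage) pairs for a two-class yes/no classifier; the label mapping is an assumption, and before fine-tuning on BoolQ-style data the prediction is of course arbitrary:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)  # 0 = No, 1 = Yes
question = "Does it talk about Machine Learning?"
passage = "Machine Learning is lorem ipsum etc etc"
# the tokenizer builds [CLS] question [SEP] passage [SEP] automatically
inputs = tokenizer(question, passage, truncation=True, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print("Yes" if pred == 1 else "No")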
huggingface | Beginners | Adding examples in GPT-J | https://discuss.huggingface.co/t/adding-examples-in-gpt-j/11183 | In GPT-3, we can add examples before generating the output from the prompt. How do we add examples in GPT-J? | You might take a look at my demo notebook here 5, which illustrates how to use GPT-J for inference, including adding examples to the prompt. | 1 |
huggingface | Beginners | Using HuggingFace on no-free-internet-access server | https://discuss.huggingface.co/t/using-huggingface-on-no-free-internet-access-server/11202 | How can I use pre-trained models from HuggingFace when I run a model on a server without free internet access? It cannot download the pre-trained model parameters in order to use them. Is there any way to download whatever is needed for a model like BERT to my local machine and then transfer it to my server? | When loading a model with from_pretrained, you can pass the model’s name to be obtained from Hugging Face servers, or you can pass the path of the pretrained model.
On a computer with internet access, load a pretrained model by passing the name of the model to be downloaded, then save it and move it to the computer without internet access.
model.save_pretrained("./your_file_name")
In the computer without internet access, load the pretrained model by passing the path of the file you downloaded earlier and moved here.
BertModel.from_pretrained("./your_file_name") | 1 |
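A compact sketch of the round trip described above (the directory name is a placeholder; remember to save and copy the tokenizer files as well):
# on the machine with internet access
from transformers import BertModel, BertTokenizer
model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model.save_pretrained("./bert-base-uncased-local")
tokenizer.save_pretrained("./bert-base-uncased-local")
# on the offline server, after copying the folder over
model = BertModel.from_pretrained("./bert-base-uncased-local")
tokenizer = BertTokenizer.from_pretrained("./bert-base-uncased-local")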
huggingface | Beginners | I get a “You have to specify either input_ids or inputs_embeds” error, but I do specify the input ids | https://discuss.huggingface.co/t/i-get-a-you-have-to-specify-either-input-ids-or-inputs-embeds-error-but-i-do-specify-the-input-ids/6535 | I trained a BERT based encoder decoder model: ed_model
I tokenized the input with:
txt = "I love huggingface"
inputs = input_tokenizer(txt, return_tensors="pt").to(device)
print(inputs)
The output clearly shows that input_ids is in the returned dict
{'input_ids': tensor([[ 101, 5660, 7975, 2127, 2053, 2936, 5061, 102]], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:0')}
But when I try to predict, I get this error:
ed_model.forward(**inputs)
ValueError: You have to specify either input_ids or inputs_embeds
Any ideas? | Does this help: ValueError: You have to specify either input_ids or inputs_embeds! · Issue #3626 · huggingface/transformers · GitHub | 0 |
huggingface | Beginners | Which model for inference on 11 GB GPU? | https://discuss.huggingface.co/t/which-model-for-inference-on-11-gb-gpu/11147 | Hello everybody
I’ve just found the amazing Huggingface library. It is an awesome piece of work.
I would like to train a chatbot on some existing dataset or several datasets (e.g. the Pile). For training (or fine-tuning) the model I have no GPU memory limitations (48 GB GPU is available). For inference, I only have a GPU with 11 GB available. Inference should be feasible in real-time (i.e. below around 3 seconds) and the model should be adjustable, i.e. the source code should be available to change the structure of the model.
What model is best when taking into account these requirements? Probably one of the best models is GPT-J but I think for inference it needs more than 11 GB GPU. | Does anybody have some input? Any input is highly appreciated. | 0 |
huggingface | Beginners | How give weight to some specific tokens in BERT? | https://discuss.huggingface.co/t/how-give-weight-to-some-specific-tokens-in-bert/11110 | I am going to do Opinion Mining on twitter posts. According to the hashtags, users who are against the topic mostly use some specific hashtags and also users who are with that topic use other hashtags.
Can we give more importance to these hashtags (weight up)? First, is that a good idea and second is that possible to do it in BERT tokenizer? | Hi Mahdi,
My guess is that with enough training data, the transformer model, and in particular its attention heads, will learn to recognise what they should be paying most attention to, i.e. which parts of the text are more important for the classification, and which parts are not that relevant, so this will happen implicitly, so long as the model has enough data. I’m not aware of a way to explicitly force BERT to weight some tokens more than others, however I’d be happy to be proven wrong by other contributors if this is the case. | 1 |
huggingface | Beginners | Fine-tuning T5 with custom datasets | https://discuss.huggingface.co/t/fine-tuning-t5-with-custom-datasets/8858 | Hi folks,
I am a newbie to T5 and transformers in general so apologies in advance for any stupidity or incorrect assumptions on my part!
I am trying to put together an example of fine-tuning the T5 model to use a custom dataset for a custom task. I have the “How to fine-tune a model on summarization” example notebook working, but that example uses a pre-configured HF dataset via “load_dataset()”, not a custom dataset that I load from disk. So I wanted to combine that example with the guidance given at “Fine-tuning with custom datasets”, but with T5 and not DistilBert as in the fine-tuning example shown.
I think my main problem is knowing how to construct a dataset object that the pre-configured T5 model can consume. So here is my use of the tokenizer and my attempt at formating the tokenized sequencies into datasets:
[screenshot: tokenizer usage and dataset construction code]
But I get the following error back when I call trainer.train():
I have seen the post “Defining a custom dataset for fine-tuning translation”, but the solution offered there seems to be to write your own custom Dataset loading class rather than directly providing a solution to the problem - I can try to learn/do this, but it would be great to get this working equivalent to “Fine-tuning with custom datasets” but for the T5 model I want to use.
I also found “Fine Tuning Transformer for Summary Generation”, which is where I got the idea to change the __getitem__ method of my ToxicDataset class to return “input_ids”, “input_mask”, “output_ids” and “output_mask”, but I am guessing really; I can’t find any documentation of what is needed (sorry!).
Any help or pointers to find what I need would be very much appreciated! | I think I may have found a way around this issue (or at least the trainer starts and completes!). The subclassing of a torch.utils.data.Dataset object for the distilbert example in “Fine-tuning with custom datasets 7” needs changing as follows. I guess because the distilbert model provides just a list of integers whereas the T5 model has output texts and I assume the DataCollatorForSeq2Seq() takes care of preprocessing the labels (the output encodings) into the features needed by forward function of T5 model (I am guessing, but this is what I am assuming from what I have read). Code changes below:
[screenshot: updated Dataset subclass code] | 0 |
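Since the working version above survives only as a screenshot, here is a hedged reconstruction of a Dataset that plays well with DataCollatorForSeq2Seq: return input_ids/attention_mask from the source encodings and the target input_ids as labels. The field names follow the usual seq2seq convention and are not necessarily those of the original post:
import torch
class ToxicDataset(torch.utils.data.Dataset):
    def __init__(self, source_encodings, target_encodings):
        self.source = source_encodings
        self.target = target_encodings
    def __getitem__(self, idx):
        return {
            "input_ids": torch.tensor(self.source["input_ids"][idx]),
            "attention_mask": torch.tensor(self.source["attention_mask"][idx]),
            # DataCollatorForSeq2Seq pads these; the model shifts them internally
            "labels": torch.tensor(self.target["input_ids"][idx]),
        }
    def __len__(self):
        return len(self.source["input_ids"])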
huggingface | Beginners | Seq2Seq Loss computation in Trainer | https://discuss.huggingface.co/t/seq2seq-loss-computation-in-trainer/10988 | Hello, I’m using the EncoderDecoderModel to do the summarization task.
I have questions on the loss computation in Trainer class.
For text summarization task, as far as I know, the encoder input is the content, the decoder input and the label is the summary.
The EncoderDecoderModel utilizes CausalLMModel as the Decoder model. In the CausalLMModel, the loss is computed by shifting the labels and inputs so that the decoder can predict the next token based on the decoder inputs.
However, in the Trainer class, the labels are first popped out of the inputs dictionary (transformers/trainer.py at master · huggingface/transformers · GitHub). Without labels, the loss will not be calculated in the decoder model (transformers/modeling_bert.py at master · huggingface/transformers · GitHub). The loss is calculated in Trainer line 1887. This calculation is different from the calculation in the decoder model forward. There is no shift of labels and decoder inputs.
My question is how to define decoder inputs and labels in EncoderDecoderModel for text summarization task? How to use Trainer to fine-tune EncoderDecoderModel for text summarization task?
Thank you. | Note that the loss is only popped if you use label smoothing. The default behavior is indeed that the loss is calculated within the forward. | 0 |
huggingface | Beginners | Source code for model definition | https://discuss.huggingface.co/t/source-code-for-model-definition/11108 | Hi,
I am new to Huggingface transformers. Could you please point me to the source code containing the definition of BERT model (uncased)
Thanks | Yes, it can be found here: transformers/modeling_bert.py at master · huggingface/transformers · GitHub 4 | 0 |
huggingface | Beginners | How to calculate perplexity properly | https://discuss.huggingface.co/t/how-to-calculate-perplexity-properly/11121 | Hey guys, i’m trying to evaluate my model through it’s perplexity on my test set and started to read this guide: Perplexity of fixed-length models — transformers 4.11.3 documentation 1
However, I don’t understand why joining our texts like this would not damage my model’s predictions:
from datasets import load_dataset
test = load_dataset('wikitext', 'wikitext-2-raw-v1', split='test')
encodings = tokenizer('\n\n'.join(test['text']), return_tensors='pt')
How can my model predict properly if my contexts are mixed up?
I’m dealing with short sentences though. | It allows the model to generalize across sentence or document boundaries, which is typically what you want in generative models. This is not a requirement, by the way, but combined with a strided window it is quite powerful. | 1 |
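If the evaluation unit really is the individual short sentence, a hedged alternative sketch is to score each sentence separately and exponentiate the token-weighted average negative log-likelihood (the model name and sentences are placeholders):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
sentences = ["Short sentence one.", "Another short sentence."]
nlls, n_tokens = [], 0
with torch.no_grad():
    for s in sentences:
        enc = tokenizer(s, return_tensors="pt")
        out = model(**enc, labels=enc["input_ids"])
        n = enc["input_ids"].size(1) - 1   # the loss is averaged over n predicted tokens
        nlls.append(out.loss * n)
        n_tokens += n
ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(ppl.item())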
huggingface | Beginners | NER model fine tuning with labeled spans | https://discuss.huggingface.co/t/ner-model-fine-tuning-with-labeled-spans/10633 | Hi!
I’m looking to fine-tune an NER model (dslim/bert-base-NER-uncased) with my own data.
My annotations are of this form: for each example I have a piece of raw text (str) and a list of annotated spans of this form: {start_index: int, end_index: int, tag: str}
However, to fine-tune the NER model, I need to prepare X (tokens) and Y (token tags) for each example. So, those spans have to be translated into token tags, matching the model’s tokenizer.
Hope that makes sense.
Is there a way to handle this? Or, what would you recommend?
Thanks! | Hi folks! Does this make sense? | 0 |
huggingface | Beginners | EncoderDecoderModel converts classifier layer of decoder | https://discuss.huggingface.co/t/encoderdecodermodel-converts-classifier-layer-of-decoder/11072 | I am trying to do named entity recognition using a Sequence-to-Sequence-model. My output is simple IOB-tags, and thus I only want to predict probabilities for 3 labels for each token (IOB).
I am trying a EncoderDecoderModel using the HuggingFace-implementation with a DistilBert as my encoder, and a BertForTokenClassification as my decoder.
First, I import my encoder and decoder:
encoder = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
encoder.save_pretrained("Encoder")
decoder = BertForTokenClassification.from_pretrained('bert-base-uncased',
num_labels=3,
output_hidden_states=False,
output_attentions=False)
decoder.save_pretrained("Decoder")
decoder
When I check my decoder model as shown, I can clearly see the linear classification layer that has out_features=3:
## sample of output:
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=3, bias=True)
)
However, when I combine the two models in my EncoderDecoderModel, it seems that the decoder is converted into a different kind of classifier - now with out_features as the size of my vocabulary:
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("./Encoder","./Decoder")
bert2bert
## sample of output:
(cls): BertOnlyMLMHead(
(predictions): BertLMPredictionHead(
(transform): BertPredictionHeadTransform(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(decoder): Linear(in_features=768, out_features=30522, bias=True)
)
)
Why is that? And how can I keep out_features = 3 in my model? | The EncoderDecoderModel class is not meant to do token classification. It is meant to do text generation (like summarization, translation). Hence, the head on top of the decoder will be a language modeling head.
To do token classification, you can use any xxxForTokenClassification model in the library, such as BertForTokenClassification or RobertaForTokenClassification. | 0 |
huggingface | Beginners | Does GPT-J support api access? | https://discuss.huggingface.co/t/does-gpt-j-support-api-access/11086 | Hi, i’m trying to use the gpt-j model to test through the api but i’m getting this error >> The model EleutherAI/gpt-j-6B is too large to be loaded automatically.
can you please tell me how to resolve this?
should i purchase the paid package or am i doing something wrong?
i used this code, i took it from the official website:
import requests
API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B"
headers = {"Authorization": "Bearer api_KsXRTrcHCdYZNqLVwStUOFmhcwxMWjPJDd"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
output = query("Can you please let us know more details about your ")
print (output)
this is the output i’m getting:
{'error': 'The model EleutherAI/gpt-j-6B is too large to be loaded automatically.'} | Can you try again?
I just tested the inference widget 3, and it works for me. | 1 |
huggingface | Beginners | Generating actual answers from QA models | https://discuss.huggingface.co/t/generating-actual-answers-from-qa-models/11048 | Hi everybody,
I’m planning or training a BARTQA model, but before training I would like to test how to actually generate an answer at inference time. I’ve looked through the documentation, but I couldn’t find an obvious answer. Can I do it using the BARTForQuestionAnswering model or do I have to use the BARTForConditionalGeneration model that has the generate() method?
Thank you very much for any help!
Antonio | Hi,
Solving question-answering using Transformers is usually done in one of 2 ways:
either extractive, where the model predicts start_scores and end_scores. In other words, the model predicts which token it believes is at the start of the answer, and which token is at the end of the answer. This was introduced in the original BERT paper.
either generative, where the model simply generates the correct answer. This was introduced in the T5 paper, where they treated every NLP problem as a generative task.
As BART is a seq2seq (encoder-decoder) model similar to T5, it makes sense to use BartForConditionalGeneration. You can indeed use the .generate() method at inference time to let it generate a predicted answer. However, BartForQuestionAnswering is also available in the library, meaning you can also use BART to do BERT-like extractive question answering.
If you ask me, option 2 is much simpler, and more “human-like”. | 0 |
huggingface | Beginners | Refresh of API Key | https://discuss.huggingface.co/t/refresh-of-api-key/10651 | Hello,
I was wondering if I could get a refresh on my API key?
CCing @pierric @julien-c
Thank you! | Same here. I would also need my API key reseting. Thanks. | 0 |
huggingface | Beginners | DataCollatorWithPaddings without Tokenizer | https://discuss.huggingface.co/t/datacollatorwithpaddings-without-tokenizer/11068 | I want to fine-tune a model…
model = BertForTokenClassification.from_pretrained('monilouise/ner_pt_br'
with this dataset:
raw_datasets = load_dataset('lener_br')
The raw_datasets loaded are already tokenized and encoded. And I don’t know how it was tokenized. Now, I want to pad the inputs, but I don’t know how to use DataCollatorWithPaddings in this case.
I noticed that this dataset is similar to wnut dataset from the docs. Still, I can’t figure out what should I do. | You can use the base BERT tokenizer I would say (since it’s a BERT model). Just make sure the pad token is compatible with what the model expects. | 1 |
huggingface | Beginners | How to convert string labels into ClassLabel classes for custom set in pandas | https://discuss.huggingface.co/t/how-to-convert-string-labels-into-classlabel-classes-for-custom-set-in-pandas/8473 | I am trying to fine tune bert-base-uncased model, but after loading datasets from pandas dataframe I get the following error with the trainer.train():
ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 5]))
I tried to understand the problem and I think it is related to the wrong data type. The following example illustrates this problem:
text = ["John", "snake", "car", "tree", "cloud", "clerk", "bike"]
labels = ["0", "1", "2", "3", "4", "0", "2"]
# create Pandas DataFrame
df = pd.DataFrame({"text": text, "label": labels})
# define data set object
ds = Dataset.from_pandas(df)
ds.features
The last command shows the following:
{‘text’: Value(dtype=‘string’, id=None),
‘label’: Value(dtype=‘string’, id=None)}
While it should be (from the huggingface tutorial)
{‘text’: Value(dtype=‘string’, id=None),
‘label’: ClassLabel(num_classes=5, names=[‘0’, ‘1’, ‘2’, ‘3’, ‘4’], names_file=None, id=None)}
My question is how to convert the ‘label’ that has a string type into a ‘label’ that has the proper ClassLabel type. Tutorials say that one should use the map function, but I could not find any code examples.
Thank you for your help. | hi @Krzysztof,
i think you can get what you want by using the features argument of Dataset.from_pandas:
from datasets import Dataset, Value, ClassLabel, Features
text = ["John", "snake", "car", "tree", "cloud", "clerk", "bike"]
labels = [0,1,2,3,4,0,2]
df = pd.DataFrame({"text": text, "label": labels})# define data set object
features = Features({"text": Value("string"), "label": ClassLabel(num_classes=5, names=[0,1,2,3,4])})
ds = Dataset.from_pandas(df, features=features)
ds.features
# {'text': Value(dtype='string', id=None),
# 'label': ClassLabel(num_classes=5, names=[0, 1, 2, 3, 4], names_file=None, id=None)} | 0 |
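A shorter hedged alternative with more recent datasets versions: let the library build the ClassLabel mapping itself with class_encode_column:
import pandas as pd
from datasets import Dataset
df = pd.DataFrame({"text": ["John", "snake", "car"], "label": ["0", "1", "2"]})
ds = Dataset.from_pandas(df)
ds = ds.class_encode_column("label")   # converts the string column into a ClassLabel feature
print(ds.features)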
huggingface | Beginners | Train GPT2 on wikitext from scratch | https://discuss.huggingface.co/t/train-gpt2-on-wikitext-from-scratch/5276 | Hello everyone,
I would like to train GPT2 on wikitext from scratch (not fine-tune pre-trained model). I launched the following script in this 62 folder.
python run_clm.py \
    --model_type gpt2 \
    --tokenizer_name gpt2 \
    --block_size 256 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --overwrite_output_dir \
    --num_train_epochs 1 \
    --output_dir /tmp/test-clm
Now I have two questions:
1- I was wondering if what I did is indeed a correct approach to train GPT2 from scratch?
2- I would like to know what hyperparameters I shoud use for this task? ( as far as I can tell, the suggested hyperparameters in existing examples in huggingface repo are for fine-tuning pre-trainned model) | I can confirm the command is correct if you want to train from scratch. As for hyperparameters, you will need to tune them a bit, but the defaults should not be too bad. | 0 |
huggingface | Beginners | Getting KeyError: ‘logits’ when trying to run deberta model | https://discuss.huggingface.co/t/getting-keyerror-logits-when-trying-to-run-deberta-model/11012 | !pip install sentencepiece --upgrade
!pip install transformers --upgrade
from transformers import pipeline,AutoModel,AutoTokenizer
model =AutoModel.from_pretrained("microsoft/deberta-v2-xxlarge-mnli")
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xxlarge")
classifier = pipeline("zero-shot-classification",model=model,tokenizer=tokenizer)
sequence = "Bob: Hey, how is work going? , Amy: Good, I wanted to talk to you about your work performance."
candidate_labels = ["Bob is talking about the same topic as Amy.", "Amy explained to Bob why he is not doing well.", "Bob asked how Amy is.", "Bob brought up the topic of work performance."]
print(classifier(sequence, candidate_labels)) | also, how do I fine-tune such a model for my purpose? | 0 |
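For what it's worth, the KeyError: 'logits' most likely comes from AutoModel, which loads the bare encoder without the classification head the zero-shot pipeline needs. A sketch of the usual fix (untested here) is to load the sequence-classification class and the matching MNLI tokenizer instead:
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-v2-xxlarge-mnli")
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xxlarge-mnli")
classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)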
huggingface | Beginners | Bigbirdmodel: Problem with running code provided in documentation | https://discuss.huggingface.co/t/bigbirdmodel-problem-with-running-code-provided-in-documentation/5657 | Hey folks, QQ: Has anyone tried running the provided code in Bigbird documentation and run into problems? I’m simply trying to embed some input using the pre-trained model 2 for initial exploration, and I’m running into an error: IndexError: index out of range in self
Has anyone come across this error before or seen a fix for it? Thanks.
Full stack trace below:
IndexError Traceback (most recent call last)
in
5
6 inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
----> 7 outputs = model(**inputs)
8 outputs
~/SageMaker/persisted_conda_envs/intercom_kevin/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
→ 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/SageMaker/persisted_conda_envs/intercom_kevin/lib/python3.6/site-packages/transformers/models/big_bird/modeling_big_bird.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
2076 token_type_ids=token_type_ids,
2077 inputs_embeds=inputs_embeds,
→ 2078 past_key_values_length=past_key_values_length,
2079 )
2080
~/SageMaker/persisted_conda_envs/intercom_kevin/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
→ 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/SageMaker/persisted_conda_envs/intercom_kevin/lib/python3.6/site-packages/transformers/models/big_bird/modeling_big_bird.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)
283
284 if inputs_embeds is None:
→ 285 inputs_embeds = self.word_embeddings(input_ids)
286
287 if self.rescale_embeddings:
~/SageMaker/persisted_conda_envs/intercom_kevin/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
→ 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/SageMaker/persisted_conda_envs/intercom_kevin/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
→ 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) → str:
~/SageMaker/persisted_conda_envs/intercom_kevin/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1812 # remove once script supports set_grad_enabled
1813 no_grad_embedding_renorm(weight, input, max_norm, norm_type)
→ 1814 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1815
1816
IndexError: index out of range in self | cc @vasudevgupta | 0 |
huggingface | Beginners | CUDA out of memory for Longformer | https://discuss.huggingface.co/t/cuda-out-of-memory-for-longformer/1472 | I have issue training the longformer on custom dataset, even on a small batch number, it says CUDA out of memory,
RuntimeError Traceback (most recent call last)
in ()
----> 1 trainer.train()
18 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in _pad(input, pad, mode, value)
3550 assert len(pad) // 2 <= input.dim(), ‘Padding length too large’
3551 if mode == ‘constant’:
-> 3552 return _VF.constant_pad_nd(input, pad, value)
3553 else:
3554 assert value == 0, 'Padding mode "{}" doesn\'t take in value argument'.format(mode)
RuntimeError: CUDA out of memory. Tried to allocate 1.13 GiB (GPU 0; 15.90 GiB total capacity; 11.40 GiB already allocated; 659.81 MiB free; 14.39 GiB reserved in total by PyTorch) | Did you try smaller batch sizes? What is the size of single batch size in your RAM? | 0 |
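A hedged sketch of settings that often help squeeze a Longformer onto a 16 GB card (exact values depend on your sequence length and data):
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=1,   # keep the per-step memory footprint small
    gradient_accumulation_steps=8,   # effective batch size of 8 without extra activation memory
    fp16=True,                       # mixed precision roughly halves activation memory
)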
huggingface | Beginners | How to specify which metric to use for earlystopping? | https://discuss.huggingface.co/t/how-to-specify-which-metric-to-use-for-earlystopping/10990 | For example, I define a custom compute_metrics which returns a dict including rouge and bleu. How to tell early stopping callback to stop training based on bleu?
Thank you! | You can specifiy in the field metric_for_best_model (should be the name of the key in the dictionary returned by your compiute_metrics). | 0 |
huggingface | Beginners | How to use specified GPUs with Accelerator to train the model? | https://discuss.huggingface.co/t/how-to-use-specified-gpus-with-accelerator-to-train-the-model/10967 | I’m training my own prompt-tuning model using transformers package. I’m following the training framework in the official example to train the model. I’m training environment is the one-machine-multiple-gpu setup. My current machine has 8 gpu cards and I only want to use some of them. However, the Accelerator fails to work properly. It just puts everything on gpu:0, so I cannot use mutliple gpus. Also, os.environ['CUDA_VISIBLE_DEVICES'] fails to work.
I have re-written the code without using Accelerator. Instead, I use nn.Dataparallel with os.environ['CUDA_VISIBLE_DEVICES'] to specify the gpus. Everything work fine in this case.
So what’s the reason? According the manual, I think Accelerator should be able to take care of all these things. Thank you so much for your help!
FYI, here is the version information:
python 3.6.8
transformers 3.4.0
accelerate 0.5.1
NVIDIA gpu cluster | No it needs to be done before the lauching command:
CUDA_VISIBLE_DEVICES="3,4,5,6" accelerate launch training_script.py | 1
huggingface | Beginners | Why the HF tokenizer time is bigger when launched just once? | https://discuss.huggingface.co/t/why-the-hf-tokenizer-time-is-bigger-when-launched-just-once/10948 | Hello,
I guess this is more a python for loop issue and/or a Colab one but as I tested it with a HF tokenizer, I’m sending this question to this forum: Why the HF tokenizer time is bigger when launched just once?
I published a colab notebook 2 to explain this issue that is showed in the following graph:
If I launch just one time the tokenizer on a text, it will always takes a much bigger time than the average time of x tokenizations of the same text. Strange, no?
Configuration
transformers version: 4.11.3
tokenizer from the model bert-base-uncased
importation code of the tokenizer:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) | I think it is just the first run of the tokenization that takes a longer time; subsequent calls are faster. I noticed that this only happens with the fast tokenizer, so I think it is due to how the fast tokenizer is designed, although I don't know the details behind it, maybe a delayed operation? | 0
huggingface | Beginners | Spaces not running latest version of Streamlit | https://discuss.huggingface.co/t/spaces-not-running-latest-version-of-streamlit/10790 | Hello,
I am not sure if this is the right place to drop but I noticed that some of my implementations that use some of the functionalities included in the more recent versions of Streamlit were displaying errors when deployed on Spaces, even though they work fine in my local environment.
My efforts: I created a requirements.txt file and specify “Streamlit==1.0.0” in it, but I’m still getting the same error.
Any recommendations on how to fix this? Thank you! | cc @julien-c | 0 |
huggingface | Beginners | Extract hidden layers from a Roberta model in sagemaker | https://discuss.huggingface.co/t/extract-hidden-layers-from-a-roberta-model-in-sagemaker/10748 | Hello,
I have fine tuned a Camembert Model (inherits from Roberta) on a custom dataset using sagemaker.
My goal is to have a language model able to extract embedding to be used in my search engine.
Camembert is trained for a “fill-mask” task.
Using the Huggingface API outputting hidden_layers (thus computing embedding) is fairly simple
model = AutoModelForMaskedLM.from_pretrained(args.model_name, output_hidden_states=True)
But when deploying such model in sagemaker the predict method only returns the text output.
There is some kind of post-processing that I do not control.
Is there a way to customize the post-processing steps in sagemaker ?
What model architecture should I be using to extract embeddings on sagemaker ?
Thanks for your help. | For those having the same issue I found a solution.
Train the model on masked ML and at inference time use the pipeline ‘feature_extraction’ by setting the HF_TASK environment variable.
from sagemaker.huggingface import HuggingFaceModel

hub = {
    'HF_TASK': 'feature_extraction'
}
huggingface_model = HuggingFaceModel(
    env=hub,
    model_data="s3://bucket/model.tar.gz",
    role=<SageMaker Role>,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
)
huggingface_model.deploy(1, '<ec2-instance-type>')
the model will send the feature vector for each token. To get the sentence vector you can average the word vectors or use other fancy methods I dit not explore.
If you want more control on what exactly your model returns you can customize the
output_fn, predict_fn etc… as described here: GitHub - aws/sagemaker-huggingface-inference-toolkit 1 | 0 |
huggingface | Beginners | Load CLIP pretrained model on GPU | https://discuss.huggingface.co/t/load-clip-pretrained-model-on-gpu/10940 | I’m using the CLIP for finding similarities between text and image but I realized the pretrained models are loading on CPU but I want to load it on GPU since in CPU is not fast. How can I load them on GPU?
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
Thanks! | Here’s how you can put a model on GPU (same for any PyTorch model):
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
model.to(device) | 0 |
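Note that the tensors produced by the processor also need to be moved to the same device as the model; a small sketch (the image URL is just an example):
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
).to(device)

outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity as probabilities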
huggingface | Beginners | Metrics in Comet.ml from Transformers | https://discuss.huggingface.co/t/metrics-in-comet-ml-from-transformers/10777 | Hello, I am using Comet.ml to log my transformer training. For the puposes of logging confusion matrix, i am not able to use CometCallback because it become practically impossible to then calculate these matrices.
Due to this, i am getting very different interpretation of steps. While my output shows, total optimization steps = 3512 and my comet log records steps in 20k steps in it’s graph.
I am confused. what is being logged, and what loss is being logged? or in optimizer case, which loss is being logged. | I think we have these questions answered over here: Comet and Transformers · Issue #434 · comet-ml/issue-tracking · GitHub 13 | 0 |
huggingface | Beginners | GPU OOM when training | https://discuss.huggingface.co/t/gpu-oom-when-training/10945 | I’m running the language modeling script provided here. I’m training a Roberta-base model and I have an RTX 3090 with 24 Gb, although when training it runs well until 9k steps, then an OOM error is through. The memory usage on training begins at 12Gb, runs a few steps, and keeps growing until OOM error. It seems to be that previous batches aren’t freed from the memory but I am not sure yet.
I implemented my dataset class and passed it to the Trainer, although I am loading all raw data into the RAM, I only tokenized them at the __getitem__ method, so I don’t think this is the actual issue.
Does anyone have some thoughts on this?
My dataset class:
class LMDataset(Dataset):
def __init__(
self,
base_path: str,
tokenizer: AutoTokenizer,
set: str = "train",
):
self.tokenizer = tokenizer
src_file = Path(base_path).joinpath("processed", "{}.csv".format(set))
df = pd.read_csv(src_file, header=0, names=["text"])
self.samples = df["text"].to_list()
def __len__(self):
return len(self.samples)
def _tokenize(
self,
text: str,
padding: Optional[Union[str, bool]] = False,
max_seq_length: Optional[int] = None,
):
return self.tokenizer(
text,
padding=padding,
truncation=True,
max_length=max_seq_length or self.tokenizer.model_max_length,
return_special_tokens_mask=True,
)
def __getitem__(
self,
i,
padding: Optional[Union[str, bool]] = False,
max_seq_length: Optional[int] = None,
):
input_ids = self._tokenize(self.samples[i], padding, max_seq_length)[
"input_ids"
]
return torch.tensor(input_ids, dtype=torch.long) | My guess would be that you have a specific sample in your dataset that is very long. Your collate function (not shown) might then be padding up to that length. That means that, for instance, your first <9k steps are of size 128x64 (seq_len x batch_size), which does not lead to an OOM. But then, around 9k steps you have a large sequence as a sample, which would (for instance) lead to 384 x 64 input, leading to an OOM.
So check the data distribution of your dataset, and check the collate function. You may want to specify a max_length that is smaller than model max length after all. | 1 |
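One way to verify that hypothesis and cap memory usage, as a sketch (dataset is the LMDataset instance above; 256 is an arbitrary cap, tune it to your data):
# Inspect the token-length distribution of the raw texts
lengths = [len(tokenizer(t)["input_ids"]) for t in dataset.samples]
print(max(lengths), sum(lengths) / len(lengths))

# In __getitem__, force a hard cap so no single sample can blow up a batch
input_ids = self._tokenize(self.samples[i], padding="max_length", max_seq_length=256)["input_ids"]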
huggingface | Beginners | Cuda out of memory while using Trainer API | https://discuss.huggingface.co/t/cuda-out-of-memory-while-using-trainer-api/7138 | Hi
I am trying to test the trainer API of huggingface through this small code snippet on a toy small data. Unfortunately I am getting cuda memory error although I have 32gb cuda memory which seems sufficient for this small data. Any help will be greatly appreciated
from datasets import load_dataset,load_metric
import datasets as dtset
import transformers
import torch
from transformers import Trainer,TrainingArguments,set_seed
from transformers import BertTokenizer, BertForSequenceClassification, BertConfig
config = BertConfig.from_pretrained("bert-base-uncased",num_labels=6)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",config=config)
data_files = {'train':'F1000_train.csv','validation':'F1000_valid.csv','test':'F1000_test.csv'}
datasets = load_dataset("csv",data_files=data_files)
num_labels = len(datasets['train'].unique('label'))
def preprocess(examples):
args = ((examples['sentence1'],))
result = tokenizer(*args,padding='max_length',max_length=32,truncation=True)
result['label'] = [l for l in examples['label']]
return result
metric = load_metric('f1')
training_args = TrainingArguments(output_dir="Out", prediction_loss_only=True, gradient_accumulation_steps=1, learning_rate=2e-5, weight_decay=1e-4, local_rank=-1, fp16=True)
trainer = Trainer(model = model,
args = training_args,
train_dataset=datasets["train"],
eval_dataset= datasets["validation"],
compute_metrics = metric,
)
train_result= trainer.train(resume_from_checkpoint=None) | Could this be related to this issue 59? | 0 |
huggingface | Beginners | Sst2 dataset labels look worng | https://discuss.huggingface.co/t/sst2-dataset-labels-look-worng/10895 | Hello all,
I feel like this is a stupid question but I cant figure it out
I was looking at the GLUE SST2 dataset through the huggingface datasets viewer and all the labels for the test set are all -1.
They are 0 and 1 for the training and validation set but all -1 for the test set.
Shouldn’t the test labels match the training labels? What am I missing? | GLUE is a benchmark, so the true labels are hidden, and only known by its creators.
One can submit a script to the official website, which is then run on the test set. In that way, one can create a leaderboard with the best performing algorithms. | 1 |
huggingface | Beginners | How to use Trainer with Vision Transformer | https://discuss.huggingface.co/t/how-to-use-trainer-with-vision-transformer/10852 | What changes should be made for using Trainer with the Vision Transformer, are the keys expected by the trainer from dataset input_ids, attention_mask, and labels?
class OCRDataset(torch.utils.data.Dataset):
def __init__(self, texts, tokenizer, transforms = None):
self.texts = texts
self.tokenizer = tokenizer
self.transforms = transforms
def __getitem__(self, idx):
data = generate_sample(self.texts[idx])
if data:
img, label = data
img = torch.from_numpy(img)
tokens = tokenizer(label, padding='max_length')
if self.transforms:
img = self.transforms(img)
batch = {}
batch['labels'] = tokens
batch['input_ids'] = img
return batch
transform= transforms.Compose([transforms.Normalize((0.5,), (0.5,))])
train_dataset = OCRDataset(jp_list, tokenizer, transform)
.....
.....
trainer.train()
This code throws the following error
ValueError: could not determine the shape of object type ‘BatchEncoding’ | Hi,
I do have a demo notebook on using the Trainer for fine-tuning the Vision Transformer here: Transformers-Tutorials/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_🤗_Trainer.ipynb at master · NielsRogge/Transformers-Tutorials · GitHub 8.
ViT doesn’t expect input_ids and attention_mask as input, but pixel_values instead. Note that we will add support for attention_mask in the future. | 1 |
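As a rough sketch (names are illustrative), a dataset for ViT image classification would look something like this, with integer class ids rather than tokenized text as labels:
import torch
from transformers import ViTFeatureExtractor

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")

class ImageClassificationDataset(torch.utils.data.Dataset):
    def __init__(self, images, labels):
        self.images = images   # PIL images or numpy arrays
        self.labels = labels   # integer class ids

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        encoding = feature_extractor(images=self.images[idx], return_tensors="pt")
        return {
            "pixel_values": encoding["pixel_values"].squeeze(0),  # drop the batch dimension
            "labels": torch.tensor(self.labels[idx]),
        }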
huggingface | Beginners | Wav2VecForPreTraining - Not able to run trainer.train() | https://discuss.huggingface.co/t/wav2vecforpretraining-not-able-to-run-trainer-train/10884 | I am trying to use Wav2VecForPreTraining to train the model from scratch on own audio dataset.
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining, TrainingArguments, Trainer
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("patrickvonplaten/wav2vec2-base")
model = Wav2Vec2ForPreTraining.from_pretrained("patrickvonplaten/wav2vec2-base")
use_cuda = torch.cuda.is_available()
device = 'cuda' if use_cuda else 'cpu'
fp16 = True if use_cuda else False
model = model.to(device)
logstep = 100
training_args = TrainingArguments(
output_dir="./",
group_by_length=True,
per_device_train_batch_size=2,
evaluation_strategy="steps",
num_train_epochs=35,
fp16=fp16,
save_steps=2100,
eval_steps=logstep,
logging_steps=logstep,
learning_rate=1e-4,
weight_decay=0.005,
warmup_steps=1000,
report_to=None,
save_total_limit=1,
)
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_data,
eval_dataset=test_data,
tokenizer=processor.feature_extractor,
)
trainer.train()
I get below error
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-34-3435b262f1ae> in <module>
----> 1 trainer.train()
~/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1284 tr_loss += self.training_step(model, inputs)
1285 else:
-> 1286 tr_loss += self.training_step(model, inputs)
1287 self.current_flos += float(self.floating_point_ops(inputs))
1288
~/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py in training_step(self, model, inputs)
1787 if self.use_amp:
1788 with autocast():
-> 1789 loss = self.compute_loss(model, inputs)
1790 else:
1791 loss = self.compute_loss(model, inputs)
~/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1821 else:
1822 labels = None
-> 1823 outputs = model(**inputs)
1824 # Save past state if it exists
1825 # TODO: this needs to be fixed and made cleaner later.
~/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() got an unexpected keyword argument 'labels'
Input data is in below dictionary format
{'input_values': tensor([[-0.0075, -0.0095, -0.0085, ..., -1.0926, -1.1881, -1.1047],
[ 0.5310, 0.9788, 1.4064, ..., -0.1375, -0.1230, -0.1085]]), 'labels': tensor([[ 3, 6, 12, 13, 13, 1, 22, 1, 26, 24, 28, 1,
0, 6, 10, 1, 25, 4, 1, 3, 6, 13, 1, 4,
27, 9, 4, 14, 12, 25, 9, 13, 12, 1, 10, 24,
1, 3, 6, 13, 1, 24, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100],
[ 6, 26, 21, 13, 1, 26, 1, 20, 12, 13, 26, 3,
1, 28, 26, 19, 1, 10, 6, 1, 24, 10, 3, 1,
26, 1, 7, 12, 10, 9, 2, 13, 11, 1, 28, 25,
28, 1, 19, 10, 27, 1, 24, 13, 13, 28, 1, 11,
13, 1, 3, 10, 1, 3, 12, 26, 24, 4, 16, 13,
12, 1, 19, 10, 27, 1, 10, 21, 13, 12, 1, 3,
10, 1, 28, 13, 2, 3, 26, 1, 28, 13, 24, 3,
26, 2]])}
When I looked at trainer.py in transformers, I see that error is coming from compute_loss function. In this function, it seems I need to define label_smoother.
def compute_loss(self, model, inputs, return_outputs=False):
"""
How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
"""
if self.label_smoother is not None and "labels" in inputs:
labels = inputs.pop("labels")
else:
labels = None
outputs = model(**inputs)
# Save past state if it exists
# TODO: this needs to be fixed and made cleaner later.
if self.args.past_index >= 0:
self._past = outputs[self.args.past_index]
if labels is not None:
loss = self.label_smoother(outputs, labels)
else:
# We don't use .loss here since the model may return tuples instead of ModelOutput.
loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
return (loss, outputs) if return_outputs else loss
I even tried below in compute_loss
labels = inputs.pop("labels")
outputs = model(**inputs)
This throws below error
~/.conda/envs/torch/lib/python3.7/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1823 print(".... ", self.label_smoother)
1824 print(" >>> ", labels)
-> 1825 outputs = model(**inputs)
1826 # Save past state if it exists
1827 # TODO: this needs to be fixed and made cleaner later.
~/.conda/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/.conda/envs/torch/lib/python3.7/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py in forward(self, input_values, attention_mask, mask_time_indices, output_attentions, output_hidden_states, return_dict)
1299 # -log(exp(sim(c_t, q_t)/\kappa) / \sum_{\sim{q}} exp(sim(c_t, \sim{q})/\kappa))
1300 preds = logits.transpose(0, 2).reshape(-1, logits.size(0))
-> 1301 target = ((1 - mask_time_indices.long()) * -100).transpose(0, 1).flatten()
1302 contrastive_loss = nn.functional.cross_entropy(preds.float(), target, reduction="sum")
1303
AttributeError: 'NoneType' object has no attribute 'long'
Could someone please guide here? | You can’t use this model with the Trainer as it does not compute the loss. The Trainer API is only compatible with models that compute the loss when they are provided with labels. | 0 |
huggingface | Beginners | How to install latest version of transformers? | https://discuss.huggingface.co/t/how-to-install-latest-version-of-transformers/10876 | No matter what command I try, I always end up with version 2.1.1 of transformers, where models such as the GPT2 do not accept the input_embeds arguments in the forward pass, which I really need.
How can I install the latest version? I have tried with conda, with pip, I have tried updating but so far I am stuck with 2.1.1 | Making a new virtual environment in which I re-installed everything solved my issue but I do not know why… Weird. | 1 |
huggingface | Beginners | Binary classification of text files in two directories | https://discuss.huggingface.co/t/binary-classification-of-text-files-in-two-directories/10807 | I am trying to do transfer learning on GPT-Neo to distinguish scam websites and normal websites from their content and I am completely confused as to what I should do next. I have already used some code to scrape websites content and parsed them using bs4. Now only the website text is stored in different directories using txt format. My directory structure looks like this. The two root folders are the two classes (“Scam” and “Normal”). In each class, there are more subdirectories with the website’s url as names, and then within them is the parsed html page in txt.
Scam/
Website1/
content.txt
Website2/
content.txt
...
Normal/
Website1/
content.txt
Website2/
content.txt
...
I have read a lot of documentation but I am not sure on what I should do next. Do I extract the text in each file, merge a [0,1] label and make a big csv? What’s next? Tokenize the text column of the csv and feed it to the input layer of the transformer? I would appreciate any advice! | Yes, that’s already a good idea. You can indeed make 1 dataset with 2 columns, text and label.
Next, there are several options:
Either you create your dataset as a csv file, and then you turn it into a HuggingFace Dataset object, as follows:
from datasets import load_dataset
dataset = load_dataset('csv', data_files='my_file.csv')
… or if you have multiple files:
dataset = load_dataset('csv', data_files=['my_file_1.csv', 'my_file_2.csv', 'my_file_3.csv'])
… or if you already want to determine which ones are for training, which ones for testing:
dataset = load_dataset('csv', data_files={'train': ['my_train_file_1.csv', 'my_train_file_2.csv'], 'test': 'my_test_file.csv'})
The benefit of HuggingFace Datasets is that it allows you to quickly tokenize the entire dataset and prepare it for the model, using the .map(function) functionality.
Alternatively, you can implement a classic PyTorch dataset with the getitem method. Each dataset item should then return the input_ids, attention_mask and label. | 0 |
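To answer the first part of the question, a small sketch for flattening the two directories into a single csv (folder names taken from the structure above):
import pandas as pd
from pathlib import Path

rows = []
for label, class_dir in enumerate(["Normal", "Scam"]):   # 0 = normal, 1 = scam
    for content_file in Path(class_dir).glob("*/content.txt"):
        rows.append({"text": content_file.read_text(encoding="utf-8"), "label": label})

pd.DataFrame(rows).to_csv("websites.csv", index=False)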
huggingface | Beginners | Tokenizing two sentences with the tokenizer | https://discuss.huggingface.co/t/tokenizing-two-sentences-with-the-tokenizer/10858 | Hello, everyone.
I’m working on an NLI task (such as MNLI, RTE, etc.) where two sentences are given to predict if the first sentence entails the second one or not. I’d like to know how the huggingface tokenizer behaves when the length of the first sentence exceeds the maximum sequence length of the model.
I’m using encode_plus() to tokenize my sentences as follows:
inputs = tokenizer.encode_plus(example.text_a, example.text_b, add_special_tokens=True, max_length=max_length,)
I’d like to avoid the case of the second sentence not being encoded since the first sentence itself already exceeds the maximum input sequence length of the model. Is there an option for the encode_plus() function to truncate the first sentence to make sure I always have the second one in the processed data? | Hi,
As explained in the docs 2, you can specify several possible strategies for the truncation parameter, including 'only_first'. Also, the encode_plus method is outdated actually. It is recommended to just call the tokenizer, both on single sentence or pair of sentences. TLDR:
inputs = tokenizer(text_a, text_b, truncation='only_first', max_length=max_length) | 1 |
huggingface | Beginners | How to use the test set in those beginner examples? | https://discuss.huggingface.co/t/how-to-use-the-test-set-in-those-beginner-examples/10824 | Hi, maybe a stupid question, but I can't find the answer either in the docs or on Google.
In these examples notebooks/text_classification.ipynb at master · huggingface/notebooks · GitHub
the datasets contain a test set, but the examples finish after showing how to train and evaluate (on the validation set, if I understand correctly).
But what about the test set? Is there a convenient way to evaluate on it, like the one-line trainer.evaluate() command?
Thanks. | trainer.evaluate takes a dataset, so you can pass the test split if you want to evaluate on the test set.
Or there is trainer.predict if you just want the predictions. | 0 |
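Concretely, following the naming used in that notebook (encoded_dataset being the tokenized dataset), something like:
import numpy as np

metrics = trainer.evaluate(eval_dataset=encoded_dataset["test"])

# or get raw predictions and post-process them yourself
output = trainer.predict(encoded_dataset["test"])
preds = np.argmax(output.predictions, axis=-1)
Keep in mind that for GLUE the test labels are hidden (all -1), so evaluate will not give meaningful scores on those splits; trainer.predict is the useful call there.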
huggingface | Beginners | Fine-tuning MT5 on XNLI | https://discuss.huggingface.co/t/fine-tuning-mt5-on-xnli/7892 | Hi there,
I am trying to fine-tune a MT5-base model to test it over the Spanish portion of the XNLI dataset.
My training dataset is the NLI dataset machine translated to Spanish by a MarianMT model, so the quality isn’t the best but I have still managed to get good results while training it with other models shuch as xlm-roberta.
Also, given the size of the NLI dataset I am only training with a 10% of it (with same proportion of labels), which is still 40.000 examples.
The problem I have is that it gets to a point where the loss is stucked and always predicts the same class, so I am looking for some hints about how to make training effective by changing parameters or to see if someone also had the same problems as me.
I have tried with both AdamW and Adafactor and with learning rates ranging from 0.001 to 1e-5 and I always get the same results.
Any help will be appreciated. Thank you very much! | Hello,
I had the same issue with mT5 (both small and base) on BoolQ dataset (~9.5k train samples) and found out something that may be useful to you.
No matter what settings I used, how long I trained, and whether I oversampled the minority class on training set, all predictions on validation set were the same. Interestingly, this only occurred when using boolean QA data with mT5. Other tasks such as SQuAD, or switching to T5, worked just fine.
So, I looked into the differences of the pre-training stage between T5 and mT5. One thing to note is that no supervised pre-training is used in mT5. Since mT5 works fine on SQuAD, I trained for one epoch on SQuAD before proceeding to train on BoolQ using the settings described in the mT5 paper. This resolved the issue for me and now the accuracy improves as expected.
In short: Train on some other tasks first.
I hope it helps you too! | 0 |
huggingface | Beginners | GPT-2 Perplexity Score Normalized on Sentence Lenght? | https://discuss.huggingface.co/t/gpt-2-perplexity-score-normalized-on-sentence-lenght/5205 | I am using the following code to calculate the perplexity of sentences and I need to know whether the score is normalized on sentence length. If not, what do I need to change to normalize it?
Thanks!
import torch
import sys
import numpy as np
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load pre-trained model (weights)
with torch.no_grad():
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
# Load pre-trained model tokenizer (vocabulary)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
def score(sentence):
tokenize_input = tokenizer.encode(sentence)
tensor_input = torch.tensor([tokenize_input])
loss=model(tensor_input, labels=tensor_input)[0]
return np.exp(loss.detach().numpy())
if __name__=='__main__':
for line in sys.stdin:
if line.strip() !='':
print(line.strip()+'\t'+ str(score(line.strip())))
else:
break | Hey, did you find an answer to your question?
What is the right way (if there is a need) to normalize the perplexity number based on sentence length? Should I divide by the number of tokens ? I have a reason to believe that they must already be doing it on the inside in the loss computation. Not sure though | 0 |
huggingface | Beginners | Error in spaces/akhaliq/T0pp_11B | https://discuss.huggingface.co/t/error-in-spaces-akhaliq-t0pp-11b/10791 | I want to try the model but there is error!! Why?? | The demo is calling https://huggingface.co/bigscience/T0pp_11B 1 which does not exist at the moment. | 0 |
huggingface | Beginners | How can I reset the API token? | https://discuss.huggingface.co/t/how-can-i-reset-the-api-token/10763 | Is there a way to reset the API token I leaked mine when pushing the code to the online repository. | Neil46:
API token I leaked mine when pushing the code
cc @pierric @julien-c | 0 |
huggingface | Beginners | Dimension mismatch when training BART with Trainer | https://discuss.huggingface.co/t/dimension-mismatch-when-training-bart-with-trainer/6430 | Hi all,
I encountered a ValueError when training the facebook/bart-base Transformer with a sequence classification head.
It seems that the dimensions of the predictions are different to e.g. the bart-base-uncased model for sequence classification.
I am using transformers version 4.6.1.
Here is an example script which you could copy paste to reproduce the error. I use my own toy data and only use a subset of it so that the script only runs for some minutes.
Note that when I replace "facebook/bart-base" with "bert-base-uncased" for model_name down below, the script executes successfully.
Do you know what I am doing wrong?
import torch
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from transformers import TrainingArguments, Trainer, EvalPrediction, AutoConfig, AutoTokenizer, AutoModelForSequenceClassification
filepath = "https://raw.githubusercontent.com/DavidPfl/thesis_ds/main/data/archiv/suttner_data.tsv"
df_dataset = pd.read_csv(filepath, sep = "\t", header = None)
df_dataset["text"] = df_dataset.iloc[:,4] + df_dataset.iloc[:,5]
articles = df_dataset["text"].tolist()
labels = df_dataset.iloc[:,1].astype(int).tolist()
train_articles, test_articles, train_labels, test_labels = train_test_split(articles, labels, stratify=labels)
class HyperpartisanDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
def compute_accuracy(p: EvalPrediction):
preds = np.argmax(p.predictions, axis=1)
return {"acc": (preds == p.label_ids).mean()}
model_name = "facebook/bart-base" # change this line to "bert-base-uncased" and the script executes successfully!
config = AutoConfig.from_pretrained(model_name, num_labels = 2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
train_encodings = tokenizer(train_articles[:3], truncation=True, padding=True)
test_encodings = tokenizer(test_articles[:3], truncation=True, padding=True)
train_dataset = HyperpartisanDataset(train_encodings, train_labels[:3])
eval_dataset = HyperpartisanDataset(test_encodings, test_labels[:3])
model = AutoModelForSequenceClassification.from_pretrained(
model_name,
config=config,
)
training_args = TrainingArguments(
output_dir="./test",
do_train=True,
do_eval=True,
evaluation_strategy="epoch",
num_train_epochs = 1,
learning_rate=1e-4,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
logging_steps=200,
remove_unused_columns=False,
logging_dir="./logs",
)
trainer = Trainer(
model = model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
compute_metrics=compute_accuracy,
)
trainer.train()
This creates the ValueError: (in the last call it looks like the predictions have the wrong shape)
ValueError Traceback (most recent call last)
<ipython-input-35-2929c2069c3e> in <module>
36 )
37
---> 38 trainer.train()
~/transformers/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1109
1110 self.control = self.callback_handler.on_epoch_end(self.args, self.state, self.control)
-> 1111 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
1112
1113 if self.args.tpu_metrics_debug or self.args.debug:
~/transformers/lib/python3.8/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch)
1196 metrics = None
1197 if self.control.should_evaluate:
-> 1198 metrics = self.evaluate()
1199 self._report_to_hp_search(trial, epoch, metrics)
1200
~/transformers/lib/python3.8/site-packages/transformers/trainer.py in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
1665 start_time = time.time()
1666
-> 1667 output = self.prediction_loop(
1668 eval_dataloader,
1669 description="Evaluation",
~/transformers/lib/python3.8/site-packages/transformers/trainer.py in prediction_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
1838
1839 if self.compute_metrics is not None and preds is not None and label_ids is not None:
-> 1840 metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
1841 else:
1842 metrics = {}
<ipython-input-34-e9488e1392dd> in compute_accuracy(p)
15 def compute_accuracy(p: EvalPrediction):
16
---> 17 preds = np.argmax(p.predictions, axis=1)
18 return {"acc": (preds == p.label_ids).mean()}
<__array_function__ internals> in argmax(*args, **kwargs)
~/transformers/lib/python3.8/site-packages/numpy/core/fromnumeric.py in argmax(a, axis, out)
1191
1192 """
-> 1193 return _wrapfunc(a, 'argmax', axis=axis, out=out)
1194
1195
~/transformers/lib/python3.8/site-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
53 bound = getattr(obj, method, None)
54 if bound is None:
---> 55 return _wrapit(obj, method, *args, **kwds)
56
57 try:
~/transformers/lib/python3.8/site-packages/numpy/core/fromnumeric.py in _wrapit(obj, method, *args, **kwds)
42 except AttributeError:
43 wrap = None
---> 44 result = getattr(asarray(obj), method)(*args, **kwds)
45 if wrap:
46 if not isinstance(result, mu.ndarray):
~/transformers/lib/python3.8/site-packages/numpy/core/_asarray.py in asarray(a, dtype, order, like)
100 return _asarray_with_like(a, dtype=dtype, order=order, like=like)
101
--> 102 return array(a, dtype, copy=False, order=order)
103
104
ValueError: could not broadcast input array from shape (3,2) into shape (3,) | Hi, @DavidPfl, I am facing a similar issue. Were you able to fix this?
Thanks! | 0 |
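In case it helps someone landing here: with BART, p.predictions can be a tuple (logits plus extra outputs such as past key values and encoder states), whereas for BERT it is just the logits array, which would explain the shape mismatch. A hedged workaround is to take the first element before the argmax:
def compute_accuracy(p: EvalPrediction):
    logits = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
    preds = np.argmax(logits, axis=1)
    return {"acc": (preds == p.label_ids).mean()}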
huggingface | Beginners | Fine-tuning: Token Classification with W-NUT Emerging Entities | https://discuss.huggingface.co/t/fine-tuning-token-classification-with-w-nut-emerging-entities/9054 | Issue
I’d like to run the sample code for Token Classification with W-NUT Emerging Entities 2 on Google Colaboratory, but I cannot run it both CPU and GPU environment.
How can I check default values of Trainer for each pre-trained model?
Errors
I didn’t set Target on my code.
Where can I fix it and what number is appropriate in this case?
CPU
Target 12 is out of bounds.
GPU
If there are any solutions to fix below, I also hope to hear your experiences.
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Code
The code is almost same as the original one, but I’m trying to run on the note, custom_datasets.ipynb 1 which can be opened from web browsers.
# Transformers installation
! pip install transformers datasets
! wget http://noisy-text.github.io/2017/files/wnut17train.conll
from pathlib import Path
import re
def read_wnut(file_path):
file_path = Path(file_path)
raw_text = file_path.read_text().strip()
raw_docs = re.split(r'\n\t?\n', raw_text)
token_docs = []
tag_docs = []
for doc in raw_docs:
tokens = []
tags = []
for line in doc.split('\n'):
token, tag = line.split('\t')
tokens.append(token)
tags.append(tag)
token_docs.append(tokens)
tag_docs.append(tags)
return token_docs, tag_docs
texts, tags = read_wnut('wnut17train.conll')
from sklearn.model_selection import train_test_split
train_texts, val_texts, train_tags, val_tags = train_test_split(texts, tags, test_size=.2)
unique_tags = set(tag for doc in tags for tag in doc)
tag2id = {tag: id for id, tag in enumerate(unique_tags)}
id2tag = {id: tag for tag, id in tag2id.items()}
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-cased')
train_encodings = tokenizer(train_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True)
val_encodings = tokenizer(val_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True)
import numpy as np
def encode_tags(tags, encodings):
labels = [[tag2id[tag] for tag in doc] for doc in tags]
encoded_labels = []
for doc_labels, doc_offset in zip(labels, encodings.offset_mapping):
# create an empty array of -100
doc_enc_labels = np.ones(len(doc_offset),dtype=int) * -100
arr_offset = np.array(doc_offset)
# set labels whose first offset position is 0 and the second is not 0
doc_enc_labels[(arr_offset[:,0] == 0) & (arr_offset[:,1] != 0)] = doc_labels
encoded_labels.append(doc_enc_labels.tolist())
return encoded_labels
train_labels = encode_tags(train_tags, train_encodings)
val_labels = encode_tags(val_tags, val_encodings)
## PYTORCH CODE
import torch
class WNUTDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_encodings.pop("offset_mapping") # we don't want to pass this to the model
val_encodings.pop("offset_mapping")
train_dataset = WNUTDataset(train_encodings, train_labels)
val_dataset = WNUTDataset(val_encodings, val_labels)
## PYTORCH CODE
from transformers import DistilBertForTokenClassification
model = DistilBertForTokenClassification.from_pretrained('distilbert-base-cased', num_labels=len(unique_tags))
## PYTORCH CODE
from transformers import DistilBertForTokenClassification, Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
model = DistilBertForTokenClassification.from_pretrained("distilbert-base-uncased")
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
trainer.train()
What I changed
On Trainer, I adjust the function name for Token Classification with W-NUT Emerging Entities rather than the sample code on Hugging Face’s Fine-tuning with Trainer page.
DistilBertForSequenceClassification → DistilBertForTokenClassification | Did you get any solution for this? I am also facing same issue | 0 |
huggingface | Beginners | What is the purpose of this fine-tuning? | https://discuss.huggingface.co/t/what-is-the-purpose-of-this-fine-tuning/10729 | Hi,
I found 🤗 Transformers Notebooks — transformers 4.12.0.dev0 documentation and then Google Colab .
The notebook will create examples which have the same text in the input and the labels. What is the purpose of such a model? Is it training some autoencoder task? I would think a more interesting challenge would be: Given input sample of text, have the label be the continuation of the sample of text.
Thank you,
wilornel | As mentioned in the notebooks, the task is causal language modeling at first, so predict the next word. They also explicitly say that:
First note that we duplicate the inputs for our labels. This is because the model of the Transformers library apply the shifting to the right, so we don’t need to do it manually.
Which is why you see the same labels as the inputs. | 0 |
huggingface | Beginners | BertForTokenClassification with IOB2 Tagging | https://discuss.huggingface.co/t/bertfortokenclassification-with-iob2-tagging/10603 | Dear members of this forum,
I am using BertForTokenClassification for named entity recognition. The labels are encoded using the beginning-inside-outside tagging format (IOB2 format to be precise). My overall setup works. However I am observing two things where I don’t know the proper solution to:
In order to obtain the values associated with the target labels, the argmax function is applied on the logits returned by the model. However, sometimes the model predicts an "I" tag (e.g. I-LOC) after an "O" tag, which is a violation of the format since a "B" tag (e.g. B-LOC) is expected first. Of course I could interpret an "I" after an "O" as "B", or interpret an "O" in front of an "I" as "B", and choose whichever performs better. However, I wondered whether there is a method (perhaps a modified argmax approach) where such a result cannot occur by construction.
Sometimes I am observing “B” and “I” tags in areas where the attention mask is 0 which means I am having a prediction for an input token which does not exist. My approach was to completely ignore such cases. However I am wondering here as well whether there is a better strategy.
Thank you very much in advance.
Jan | I think I found a solution to the first problem I described. The option aggregation_strategy in
TokenClassificationPipeline 3 lists all the possible options to deal with inconsistencies. | 0 |
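A quick sketch of what that looks like in practice (aggregation_strategy requires a reasonably recent transformers version, roughly 4.7+):
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="first",   # alternatives: "simple", "average", "max"
)
ner("John lives in Berlin")
The pipeline also drops special and padded positions before aggregating, which should take care of the second point as well.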
huggingface | Beginners | Is there a way to know how many epoch or steps the model has trained with Trainer API? | https://discuss.huggingface.co/t/is-there-a-way-to-know-how-many-epoch-or-steps-the-model-has-trained-with-trainer-api/10702 | For instance, I want to add a new loss after 5 epochs’ training. | All the relevant information is kept in the Trainer state 20 | 0 |
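For example (attribute names per the linked TrainerState docs):
print(trainer.state.global_step)   # optimizer steps taken so far
print(trainer.state.epoch)         # (possibly fractional) epochs completed
print(trainer.state.log_history)   # list of logged metrics, one dict per logging step
The state is also saved as trainer_state.json inside every checkpoint, so it can be inspected after training.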
huggingface | Beginners | I am beginner, I need guide and help | https://discuss.huggingface.co/t/i-am-beginner-i-need-guide-and-help/10725 | Hello
As you can see in the title, I am a beginner with transformers and Python.
My task is: abstractive summarization in Arabic using a transformer (fine-tuning a model).
As a starting point, I was trying to apply the code in this link Deep-Learning/Tune_T5_WikiHow-Github.ipynb at master · priya-dwivedi/Deep-Learning · GitHub 1
but I faced some errors and difficulties, so I stopped coding and I am trying to learn more about transformers, tensors, TensorFlow, etc.
But I feel confused and time is passing without any progress.
Could you please help me with some tutorials or some code similar to my task (preferably code with comments explaining it)?
Also, could you guide me through the steps I can follow to achieve my task? (In other words, I need someone to explain the basics that will help me start coding my model.)
Thanks | You can start by following our free course 6. It will teach you everything about Transformers, but note that it assumes some basic knowledge about deep learning. | 0 |
huggingface | Beginners | Using ViTForClassification for regression? | https://discuss.huggingface.co/t/using-vitforclassification-for-regression/10716 | Hi,
I would like to use the ViT model for classification and adapt it to a regression task, is it feasible ?
Can the model work just by changing the loss function ? How can I define the classes in the _info method of my custom dataset since there is an infinity of them possible ? What are all the other changes to make ?
Thank you | If you set the num_labels of the config to 1, it will automatically use the MSE loss for regression, as can be seen here 7. So yes, it’s totally possible. | 0 |
huggingface | Beginners | Does HuBERT need text as well as audio for fine-tuning? / How to achieve sub-5% WER? | https://discuss.huggingface.co/t/does-hubert-need-text-as-well-as-audio-for-fine-tuning-how-to-achieve-sub-5-wer/6905 | There’s a fine-tuning guide provided here that was for wav2vec2: facebook/hubert-xlarge-ll60k · Hugging Face 26
However, I’m interested in achieving the actual performance of wav2vec2 (of 3% WER not 18%). Because this wav2vec2 implementation does not use a language model it suffers at 18%.
However, with HuBERT, if I understand correctly, it doesn’t need text? HuBERT: Speech representations for recognition & generation 11
But the current fine tuning notebook is using a dataset with text.
Nevertheless, lets say it does need text. If it is fine tuned will it achieve the same performance or similar in the paper above of around 3% or will it also need its own language model like wav2vec2, and remain at around 18%? | which parts did you change from the Wav2vec2 example to get hubert to work? | 0 |
huggingface | Beginners | How do make sure I am using the transformer version/code from source? | https://discuss.huggingface.co/t/how-do-make-sure-i-am-using-the-transformer-version-code-from-source/9511 | How do I know whether my jupyter notebook is using the transformer version/code that I cloned from github?
My steps:
I did fork the transformers repo in my github account then I did:
git clone --recursive https://github.com/myaccount/transformers.git
cd transformers/
conda create -n hf-dev-py380 python=3.8.0
conda activate hf-dev-py380
git checkout v4.9.2-release
pip install -e ".[dev]"
conda install -c conda-forge librosa
conda install libgcc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/miniconda3/lib/
transformers-cli env
conda install jupyter
nohup jupyter notebook --ip=0.0.0.0 --port=myport &
When I type in my terminal “python --version”, it appears the same python version that my jupyter notebook prints (v3.8.0). But a different python version (v3.7) and directory appear in my jupyter notebook when I type:
import transformers
print(transformers)
<module 'transformers' from '/mypath/miniconda3/lib/python3.7/site-packages/transformers/__init__.py'>
How do I make sure my jupyter notebook is using the transformer code I cloned in my terminal?
Thanks! | Just to complement my question: the documentation says that the pip install -e . command links the folder where I cloned the repository into my Python library paths. So the Python packages would get installed into the directory I used to clone the repo, which is …/transformers/. But I still don't see the site-packages folder there.
Any idea what I need to do to link my Jupyter notebook to my cloned folder?
Thank you! | 0 |
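In case it is useful to others: this usually means the notebook kernel is a different interpreter than the conda env. A sketch of registering the env as a kernel (assuming ipykernel) and double-checking the import path:
conda activate hf-dev-py380
pip install ipykernel
python -m ipykernel install --user --name hf-dev-py380
python -c "import transformers; print(transformers.__version__, transformers.__file__)"
After selecting the hf-dev-py380 kernel in the notebook UI, transformers.__file__ should point at the cloned transformers/ folder rather than site-packages.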
huggingface | Beginners | ERROR: could not find a version that satisfies the requirement torch==1.9.1 | https://discuss.huggingface.co/t/error-could-not-find-a-version-that-satisfies-the-requirement-torch-1-9-1/10670 | Hi I am having a problem when I type
pip install -r requirements.txt
ERROR: could not find a version that satisfies the requirement torch==1.9.1+cu111(from versions 0.1.2, 0.1.2post1, 0.1.2post2
ERROR: No matching distribution found for torch==1.9.1+cu111 | I use python 3.10 btw | 0 |
huggingface | Beginners | Token classification | https://discuss.huggingface.co/t/token-classification/10680 | huggingface.co
Fine-tuning with custom datasets 1
This tutorial will take you through several examples of using 🤗 Transformers models with your own datasets. The guide shows one of many valid workflows for u...
I followed the token classification tutorial in the link above; training completed successfully, then I ran prediction:
model = DistilBertForTokenClassification.from_pretrained('./results/checkpoint-500', num_labels=len(unique_tags))
trainer = Trainer(
    model=model,  # the instantiated 🤗 Transformers model to be trained
)
res = trainer.predict(new_dataset)
predictions = res.predictions
I am getting a numpy array of d dimensions. How do I convert the predicted values to their corresponding tags? I am assuming that res.predictions contains the numerical values of the tags, not tokens; if I am wrong, please correct me. | You can check out this thread 7 I just wrote on how to convert predictions to actual labels for token classification models. | 0
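Summarizing that thread, a sketch of the usual post-processing (id2tag is the label-id-to-tag mapping built during preprocessing, -100 marks positions to ignore, and res.label_ids is only populated if new_dataset contains labels):
import numpy as np

preds = np.argmax(res.predictions, axis=-1)   # shape: (num_examples, seq_len)
labels = res.label_ids

true_predictions = [
    [id2tag[p] for p, l in zip(pred_row, label_row) if l != -100]
    for pred_row, label_row in zip(preds, labels)
]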
huggingface | Beginners | Evaluate question answering with squad dataset | https://discuss.huggingface.co/t/evaluate-question-answering-with-squad-dataset/10586 | Hello everybody
I want to build question answering system by fine tuning bert using squad1.1 or squad2.0
i would like to ask about evaluating question answering system, i know there is squad and squad_v2 metrics, how can we use them when fine-tune bert with pytorch?
thank you | This example should hopefully answer your question.
github.com
huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py 5
#!/usr/bin/env python
# coding=utf-8
# Copyright 2020 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for question answering.
"""
# You can also adapt this script on your own question answering task. Pointers for this are left as comments.
This file has been truncated. show original
If the purpose is to have a good question answering model, you could also use one of the many pretrained models on the hugging face model hub. Models - Hugging Face 1 | 0 |
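If you just need the metric itself, a minimal sketch of the input format expected by the squad metric in datasets (for SQuAD 2.0 use "squad_v2", whose predictions additionally need a no_answer_probability field):
from datasets import load_metric

metric = load_metric("squad")
predictions = [{"id": "56be4db0acb8001400a502ec", "prediction_text": "Denver Broncos"}]
references = [{
    "id": "56be4db0acb8001400a502ec",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]
print(metric.compute(predictions=predictions, references=references))  # exact_match and f1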
huggingface | Beginners | Moving my own trained model to huggingface hub | https://discuss.huggingface.co/t/moving-my-own-trained-model-to-huggingface-hub/10622 | Hi ,
I trained a BERT and Pytorch model with my data set on Google Colab, now I want to move the trained model to Hub so I can use it on my account like other pre-trained models.
I do not know how to save and move the model I trained to huggingface.
Thank you | Hi @staceythompson,
We have a guide on how to upload models via usual Git approach here.
If you want programmatic access, you can also use our huggingface_hub Python library. There’s documentation on how to upload models here. | 0 |
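If you prefer doing it from the Colab notebook itself, a short sketch (the repo name is a placeholder):
from huggingface_hub import notebook_login

notebook_login()  # paste your Hugging Face access token

model.push_to_hub("my-username/my-bert-model")
tokenizer.push_to_hub("my-username/my-bert-model")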
huggingface | Beginners | How does the GPT-J inference API work? | https://discuss.huggingface.co/t/how-does-the-gpt-j-inference-api-work/10337 | Hi All.
I started a 7-day trial of the startup plan. I need to use GPT-J through the HF Inference API. I pinned it on an org account to run on GPU and, after sending a request, all I get back is a single generated word. The max token param is set to 100.
Could you please let me know how I should make it generate more than one word? | Hi,
Normally it should become available. I’ll ask the team and get back to you. | 0 |
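While waiting, it is worth double-checking the request payload; for text generation the Inference API reads generation options from a parameters object, so the request would look roughly like this (the token is a placeholder, and exact parameter support can vary per model):
import requests

API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B"
headers = {"Authorization": "Bearer <your-api-token>"}

payload = {
    "inputs": "Once upon a time",
    "parameters": {"max_new_tokens": 100, "return_full_text": False},
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())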
huggingface | Beginners | “Initializing global attention on CLS token” on Longformer Training | https://discuss.huggingface.co/t/initializing-global-attention-on-cls-token-on-longformer-training/10601 | I have this text classification task that follows the mnli run_glue.py task. The premise is a text that is on average 2k tokens long and the hypothesis is a text that is 200 tokens long. The labels remain the same (0 for entailment, 1 for neutral, 2 for contradiction). I set the train and eval batch size to 1 as anything other than that maxed out my 16 gig vram sagemaker card and I did the training job. It’s been around 2 hours now and I keep seeing the Initializing global attention on CLS token message. Not even sure if the model has even started the epoch yet. For context here are my hyper parameters:
hyperparameters={'model_name_or_path': 'allenai/longformer-base-4096',
'task_name': 'mnli',
'max_seq_length': 4096,
'do_train': True,
'do_eval': True,
'per_device_train_batch_size': 1,
'per_device_eval_batch_size': 1,
'output_dir': '/opt/ml/model',
'learning_rate': 2e-5,
'max_steps': 500,
'num_train_epochs': 3}
I have experience training transformers before, but usually with a model like ALBERT. I've never trained a Longformer, so I want to know if I should be prepared to wait longer than a day or two. | Oh thank goodness, the output started showing training iterations. | 0
huggingface | Beginners | Hyperparameter tuning practical guide? | https://discuss.huggingface.co/t/hyperparameter-tuning-practical-guide/10297 | Hi, I have been having problems doing hyperparameter tuning on Google Colab, where it's always the GPU that runs out of memory.
Is there any practical advice you could give me for tuning BERT models? For example, what environment settings (such as the number of GPUs) do I need so that I don't run out of memory?
It is to be noted that when doing tuning with CPU it works but takes ages.
I am using trainer api with Optuna | If your GPU can only take 16 as batch_size then make sure that multiplication of batch_size and gradient_accumulation does not go beyond 16. You need to specify range for both these parameters such that any combination of elements from both ranges does not take the effective batch_size beyond 16. | 0 |
huggingface | Beginners | Questions when doing Transformer-XL Finetune with Trainer | https://discuss.huggingface.co/t/questions-when-doing-transformer-xl-finetune-with-trainer/5280 | Hi everyone,
Nice to see you here.
I’m new to the Transformer-XL model. I’m following Fine-tuning with custom datasets 1 to finetune Transformer-XL with Trainer.(sequence classification task)
First, I used exactly the same way as the instruction above except for:
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLForSequenceClassification.from_pretrained('transfo-xl-wt103')
By doing this, I got 'RuntimeError: stack expects each tensor to be equal size, but got [25] at entry 0 and [24] at entry 1.' I think the reason for the error is that I should pad the sequences in the same batch to the same length. Let me know if I'm wrong. Probably I need a data_collator to solve this problem. Is there a built-in data_collator in huggingface for this? If not, is there an example of how to override the data_collator?
Second, I changed the code to:
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLForSequenceClassification.from_pretrained('transfo-xl-wt103')
train_texts = [train_text[:120] for train_text in train_texts]
val_texts = [val_text[:120] for val_text in val_texts]
test_texts = [test_text[:120] for test_text in test_texts]
tokenizer.pad_token = tokenizer.eos_token
train_encodings = tokenizer(train_texts, padding=True, max_length=120)
val_encodings = tokenizer(val_texts, padding=True, max_length=120)
test_encodings = tokenizer(test_texts, padding=True, max_length=120)
multilabel_trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
tokenizer=tokenizer,
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
By doing this, I think I made the sequence in the same batch have the same size. However, I got the error ‘AssertionError: Cannot handle batch sizes > 1 if no padding token is defined.’ I checked my tokenizer:
tokenizer.pad_token returns '', and tokenizer.pad_token_id returns 0.
Sometimes it gives me a CUDA out-of-memory error even though I restarted the GPU and checked the GPU memory with nvidia-smi before running the code.
Lastly, I changed the batch size to 1; it trained for 11 steps and then ran out of CUDA memory. My GPU is a P100 with 16 GB of memory, and I don't think it should fill up so quickly. (I used the same GPU to fine-tune BERT successfully.)
I have no idea where did I do wrong. Any suggestions or help will be appreciated.
For your convenience, I uploaded the notebook here 1.
Best! | Note that TransformerXL is the only model of the library that does not work with Trainer as the loss it returns is not reduced (it’s an array and not a scalar). You might get away with it by implementing your own subclass of Trainer and override the compute_loss function to convert that array to a scalar. | 0 |
huggingface | Beginners | Batch[k] = torch.tensor([f[k] for f in features]) ValueError: expected sequence of length 3 at dim 1 (got 4) | https://discuss.huggingface.co/t/batch-k-torch-tensor-f-k-for-f-in-features-valueerror-expected-sequence-of-length-3-at-dim-1-got-4/1354 | Hi there,
I am trying to build a multiple-choice question solver and I am getting the following error.
Any thoughts what could be the cause of this error?
File "../src/run_multiple_choice.py", line 195, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/usr/local/lib/python3.7/site-packages/transformers/trainer.py", line 755, in train
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 403, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/usr/local/lib/python3.7/site-packages/transformers/data/data_collator.py", line 65, in default_data_collator
batch[k] = torch.tensor([f[k] for f in features])
ValueError: expected sequence of length 3 at dim 1 (got 4) | Looks like my instances were not of the same size. Making them the same size fixes the problem. | 0 |
huggingface | Beginners | Why aren’t all weights of BertForPreTraining initialized from the model checkpoint? | https://discuss.huggingface.co/t/why-arent-all-weights-of-bertforpretraining-initialized-from-the-model-checkpoint/10509 | When I load a BertForPretraining with pretrained weights with
model_pretrain = BertForPreTraining.from_pretrained('bert-base-uncased')
I get the following warning:
Some weights of BertForPreTraining were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']
Why aren’t all the weights in cls.predictions initialized from the saved checkpoint?
The model seems to produce reliable token prediction outputs (without further training). In particular, it produces the same outputs as a model loaded with
model_masked = BertForMaskedLM.from_pretrained('bert-base-uncased')
Here’s code verifying this in an example:
s = ("Pop superstar Shakira says she was the [MASK] of a random [MASK] by a [MASK] "
"of [MASK] boars while walking in a [MASK] in Barcelona with her eight-year-old "
"[MASK].")
inputs = tokenizer(s, return_tensors='pt')
outputs_pretrain = model_pretrain(**inputs)
outputs_masked = model_masked(**inputs)
assert torch.allclose(outputs_pretrain["prediction_logits"], outputs_masked["logits"])
Incidentally, when loading model_masked, I don’t get a warning about newly initialized weights in cls.predictions. All newly initialized weights are in cls.seq_relationship, which is reasonable since if we only care about masked LM, the information from the base model regarding next sentence prediction can be safely thrown away. | mgreenbe:
BertForPreTraining
The BertForPreTraining model is BERT with 2 heads on top (the ones used for pre-training BERT, namely next sentence prediction and masked language modeling). The bert-base-uncased checkpoint on the hub only includes the language modeling head (it’s actually suited to be loaded into a BertForMaskedLM model). You can also see this in the config file here 5. | 0 |
huggingface | Beginners | Adding Blenderbot 2.0 to Huggingface | https://discuss.huggingface.co/t/adding-blenderbot-2-0-to-huggingface/10503 | I noticed that Huggingface has the original Blenderbot model but not the current new version of it. I was wondering how we can possibly add it to Huggingface? | Contributing a model to HuggingFace Transformers involves first forking the original Github repository 7, in order to understand the model, do a basic forward pass, etc.
Next, you can start implementing the model. This 3 and this guide 7 explain in detail how to do this.
If you want to start working on this and you need guidance, let me know. | 1 |
huggingface | Beginners | When is a generative model said to overfit? | https://discuss.huggingface.co/t/when-is-a-generative-model-said-to-overfit/10489 | If I train a causal language model, should I be worried about overfitting? If so, what would that imply? That it cannot generalize well to unseen prompts?
I am used to validating on downstream tasks and selecting the best checkpoint there where validation loss is not worse than training loss (overfitting), but I am not sure if that applies to CLM/generation tasks.
I guess what I am asking is:
do you validate your (C)LM/generation tasks during training as a means to do early stopping/finding the best checkpoint?
if you do not, how do you decide how long to train? | For generative models, one typically measures the perplexity on a held-out dataset. As long as perplexity keeps improving, keep training. | 0 |
huggingface | Beginners | Generate raw word embeddings using transformer models like BERT for downstream process | https://discuss.huggingface.co/t/generate-raw-word-embeddings-using-transformer-models-like-bert-for-downstream-process/2958 | Hi,
I am new to using transformer based models. I have a few basic questions, hopefully, someone can shed light, please.
I’ve been training GloVe and word2vec on my corpus to generate word embedding, where a unique word has a vector to use in the downstream process. Now, my questions are:
Can we generate a similar embedding using the BERT model on the same corpus?
Can we have one unique word with its vector? BERT is contextual, not sure how the vector will look like for the same word which is repeated in different sentences.
If a word is repeated and not unique, not sure how I can use these vectors in the downstream process.
Appreciate your valuable inputs. I tried to look over the internet but was not able to find a clear answer. If someone can help with the above it will be really helpful.
Thanks | Yes you can get a word embedding for a specific word in a sentence. You have to take care though, because in language models we often use a subword tokenizer. It chops words into smaller pieces. That means that you do not necessarily get one output for every word in a sentence, but probably more than one, namely one for all its subword components. What we then typically do is average the outputs of those tokens of the right word, to get one representation for that word. I’m on mobile now, but this is a modified script that I have used in the past to get the output of a specific word.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
def get_word_idx(sent: str, word: str):
return sent.split(" ").index(word)
def get_hidden_states(encoded, token_ids_word, model, layers):
"""Push input IDs through model. Stack and sum `layers` (last four by default).
Select only those subword token outputs that belong to our word of interest
and average them."""
with torch.no_grad():
output = model(**encoded)
# Get all hidden states
states = output.hidden_states
# Stack and sum all requested layers
output = torch.stack([states[i] for i in layers]).sum(0).squeeze()
# Only select the tokens that constitute the requested word
word_tokens_output = output[token_ids_word]
return word_tokens_output.mean(dim=0)
def get_word_vector(sent, idx, tokenizer, model, layers):
"""Get a word vector by first tokenizing the input sentence, getting all token idxs
that make up the word of interest, and then `get_hidden_states`."""
encoded = tokenizer.encode_plus(sent, return_tensors="pt")
# get all token idxs that belong to the word of interest
token_ids_word = np.where(np.array(encoded.word_ids()) == idx)
return get_hidden_states(encoded, token_ids_word, model, layers)
def main(layers=None):
# Use last four layers by default
layers = [-4, -3, -2, -1] if layers is None else layers
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
sent = "I like cookies ."
idx = get_word_idx(sent, "cookies")
word_embedding = get_word_vector(sent, idx, tokenizer, model, layers)
return word_embedding
if __name__ == '__main__':
main()
Word embeddings from these models are always contextual. You can extract values from the embedding layer only, but that seems counterintuitive and will probably not work well. The whole point of (bidirectional) context models is to include context.
Not sure what you mean here. Unique in that sentence or unique in what sense? | 0 |
huggingface | Beginners | Logs of training and validation loss | https://discuss.huggingface.co/t/logs-of-training-and-validation-loss/1974 | Hi, I made this post to see if anyone knows how I can save the results of my training and validation loss in the logs.
I’m using this code:
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=16,   # batch size for evaluation
    warmup_steps=50,                 # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=20,
    evaluation_strategy="steps"
)
trainer = Trainer(
    model=model,                     # the instantiated 🤗 Transformers model to be trained
    args=training_args,              # training arguments, defined above
    train_dataset=train_dataset,     # training dataset
    eval_dataset=val_dataset         # evaluation dataset
)
And I thought that using logging_dir and logging_steps would achieve that but in such logs all I see is this:
*output_dir ^A"^X*
*^Toverwrite_output_dir ^B"^L*
*^Hdo_train ^B"^K*
*^Gdo_eval ^A"^N*
*do_predict ^B"^\*
*^Xevaluate_during_training ^B"^W*
*^Sevaluation_strategy ^A"^X*
*^Tprediction_loss_only ^B"^_*
*^[per_device_train_batch_size ^C"^^*
*^Zper_device_eval_batch_size ^C"^\*
*^Xper_gpu_train_batch_size ^A"^[*
*^Wper_gpu_eval_batch_size ^A"^_*
*^[gradient_accumulation_steps ^C"^[*
*^Weval_accumulation_steps ^A"^Q*
*^Mlearning_rate ^C"^P*
*^Lweight_decay ^C"^N*
And it goes on like that.
Any ideas will be welcome.
My system installation:
- transformers version: 3.4.0
- Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-centos-7.8.2003-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | Hi!
I was also recently trying to save my loss values at each logging_steps into a .txt file.
There might be a parameter I am unaware of, but meanwhile I pulled from git the latest version of the transformer library and slightly modified the trainer.py to include in def log(self, logs: Dict[str, float]) -> None: the following lines to save my logs into a .txt file:
# TODO PRINT ADDED BY XXX
logSave = open('lossoutput.txt', 'a')
logSave.write(str(output) + '\n')
logSave.close()
Happy to hear if there is a less ‘cowboy’ way to do this, one that would not require modifying trainer.py | 0 |
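A less 'cowboy' alternative to editing trainer.py, sketched with a custom TrainerCallback (the output file name is arbitrary); on_log receives every logged dict, including the training loss and any eval metrics, so it can reuse the model, training_args, and datasets already defined in the question above.
from transformers import Trainer, TrainerCallback

class LossLoggerCallback(TrainerCallback):
    def __init__(self, path="lossoutput.txt"):
        self.path = path

    def on_log(self, args, state, control, logs=None, **kwargs):
        # `logs` contains e.g. {'loss': ...} every logging_steps
        # and {'eval_loss': ...} after each evaluation
        if logs is not None:
            with open(self.path, "a") as f:
                f.write(f"step {state.global_step}: {logs}\n")

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    callbacks=[LossLoggerCallback()],
)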
huggingface | Beginners | Easily save DatasetDict as community dataset? | https://discuss.huggingface.co/t/easily-save-datasetdict-as-community-dataset/10467 | I expected that
dataset_dict.save_to_disk("./my_dataset")
would produce a suitable format, but it appears not. I can’t seem to find a simple way of getting from a DatasetDict object to a community dataset. Does this not exist? | Hi ! We are working on it | 0 |
huggingface | Beginners | How can I put multiple questions in the same context at once using Question-Answering technique (i’m using BERT)? | https://discuss.huggingface.co/t/how-can-i-put-multiple-questions-in-the-same-context-at-once-using-question-answering-technique-im-using-bert/10416 | Is that possible? If so, how can I do that? | Yes that’s possible, like so:
from transformers import BertTokenizer, BertForQuestionAnswering
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
context = "Jim Henson was a nice puppet"
questions = ["Who was Jim Henson?", "What is Jim's last name?"]
inputs = tokenizer(questions, [context for _ in range(len(questions))], padding=True, return_tensors='pt')
outputs = model(**inputs)
start_scores = outputs.start_logits
end_scores = outputs.end_logits
We just make several [CLS] question [SEP] context [SEP] [PAD] [PAD] ... examples, which we forward through the model. | 1 |
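Continuing the snippet above, the start/end logits can be turned into answer strings per question; note that plain bert-base-uncased has no question-answering fine-tuning, so the decoded spans will be essentially arbitrary until the model is trained.
import torch

start_idx = torch.argmax(start_scores, dim=1)
end_idx = torch.argmax(end_scores, dim=1)

for i, question in enumerate(questions):
    answer_ids = inputs["input_ids"][i][start_idx[i] : end_idx[i] + 1]
    print(question, "->", tokenizer.decode(answer_ids))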
huggingface | Beginners | Do you train all layers when fine-tuning T5? | https://discuss.huggingface.co/t/do-you-train-all-layers-when-fine-tuning-t5/1034 | From my beginner-level understanding, when it comes to BERT, sometimes people will train just the last layer, or sometimes they’ll train all layers a couple epochs and then the last layer a few more epochs.
Does T5 have any similar practices? Or is it normal to just train the whole thing when fine-tuning?
And very tangentially related: to fine-tune T5, we just do loss.backward() on the result of the forward() call of the T5 model right (the loss key in the returned dict)? So there’s no need to calculate any loss on our own? | I haven’t seen much experiments for this, but IMO it’s better to fine-tune the whole model.
Also, when you pass the labels argument to T5ForConditionalGeneration's forward method, it calculates the loss for you and returns it as the first value in the returned tuple.
And you can use the finetune.py script here 41 to fine-tune T5 and other seq2seq models.
See this thread T5 Finetuning Tips 45 | 0 |
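To make the two points above concrete, a small sketch: passing labels makes the model compute the loss itself, and freezing parameters (here the encoder, purely as an example) is how you would fine-tune only part of the network. The checkpoint and sentences are placeholders.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# optionally freeze part of the model instead of fine-tuning everything
for param in model.encoder.parameters():
    param.requires_grad = False

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

outputs = model(**inputs, labels=labels)   # the loss is computed internally
outputs.loss.backward()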
huggingface | Beginners | RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 11.17 GiB total capacity; 10.62 GiB already allocated; 145.81 MiB free; 10.66 GiB reserved in total by PyTorch) | https://discuss.huggingface.co/t/runtimeerror-cuda-out-of-memory-tried-to-allocate-384-00-mib-gpu-0-11-17-gib-total-capacity-10-62-gib-already-allocated-145-81-mib-free-10-66-gib-reserved-in-total-by-pytorch/444 | Hi Huggingface team,
I am trying to fine-tune my MLM RoBERTa model on a binary classification dataset. I'm able to successfully tokenize my entire dataset, but during training I keep getting the same CUDA memory error. I'm not sure where the memory is taken up, but I have attached the entire notebook here 26 for reference.
Error message: (screenshot of the CUDA out-of-memory traceback, 1630×824)
I suspect it has something to do with my train() method.
Does anyone have any thoughts on why the GPU memory is being almost entirely allocated to PyTorch? Any help is appreciated, thanks! | You can try lowering your batch size. "Reserved by PyTorch" means that the memory is used for the data, model, gradients, etc. | 0
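A sketch of TrainingArguments that usually reduces GPU memory pressure while keeping the same effective batch size; the exact numbers are placeholders to adapt to your card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=4,    # a smaller per-step batch fits in memory
    gradient_accumulation_steps=8,    # 4 x 8 = effective batch size of 32
    fp16=True,                        # mixed precision, if the GPU supports it
    num_train_epochs=3,
)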
huggingface | Beginners | Can we use tokenizer from one architecture and model from another one? | https://discuss.huggingface.co/t/can-we-use-tokenizer-from-one-architecture-and-model-from-another-one/10377 | I’ve a Bert tokenizer, which is pre-trained on some dataset. Now I want to fine tune some task in-hand with a Roberta model. So in this scenario
Can I use Bert tokenizer output as input to Roberta Model?
Does such kind of setup makes sense between autoregressive and non-autoregressive models, i.e., using Bert tokenizer with XLNet model?
Does these kind of setups make sense?
From what I understand, this can be implemented, but doesn’t make sense. But I can use some experience or clarification in this direction. | hi sps.
I think it would be possible to use a Bert tokenizer with a Roberta Model, but you would have to train the Roberta model from scratch. You wouldn’t be able to take advantage of transfer learning by using a pre-trained Roberta.
Why would you want to do that?
You might run into problems with things like the sep and cls tokens, which might have different conventions between Bert and Roberta, though I expect you could write some code to deal with that.
A tokenizer splits your text up into chunks, and replaces each chunk with a numerical value. I think Bert and Roberta do this in different ways, but that shouldn’t make the systems incompatible. Any embedding layer should be able to learn to use the numbers that come out of WordPiece, BytePair or SentencePiece tokenizers.
Have you seen this intro to tokenizers [Summary of the tokenizers — transformers 4.11.1 documentation 1] | 0 |
huggingface | Beginners | RecursionError: Maximum recursion depth exceeded in comparison | https://discuss.huggingface.co/t/recursionerror-maximum-recursion-depth-exceeded-in-comparison/10230 | I got this error :
RecursionError: maximum recursion depth exceeded in comparison
when I was trying to run this line:
bert = TFAutoModel.from_pretrained('bert-base-cased')
Also, I increased the maximum recursion limit in sys.
I wanted this in order to fine-tune a model. | Full stack trace:
RecursionError Traceback (most recent call last)
/tmp/ipykernel_1850345/304243349.py in <module>
5 from transformers import AutoModel
6 #bert = AutoModel.from_pretrained('bert-base-cased')
----> 7 bert = TFAutoModel.from_pretrained('bert-base-uncased')
8
9 # we can view the model using the summary method
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
385 )
386
--> 387
388 def insert_head_doc(docstring, head_doc=""):
389 if len(head_doc) > 0:
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in _get_model_class(config, model_mapping)
334 return supported_models
335
--> 336 name_to_model = {model.__name__: model for model in supported_models}
337 architectures = getattr(config, "architectures", [])
338 for arch in architectures:
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in __getitem__(self, key)
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in _load_attr_from_module(self, model_type, attr)
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in getattribute_from_module(module, attr)
... last 1 frames repeated, from the frame below ...
~/anaconda3/envs/tf2/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in getattribute_from_module(module, attr)
RecursionError: maximum recursion depth exceeded in comparison | 0 |
huggingface | Beginners | Keyerror when trying to download GPT-J-6B checkpoint | https://discuss.huggingface.co/t/keyerror-when-trying-to-download-gpt-j-6b-checkpoint/10395 | model = AutoModelForCausalLM.from_pretrained(“EleutherAI/gpt-j-6B”, torch_dtype=torch.float32)
results in a
KeyError: 'gptj'
when attempting to download the checkpoint. I am running transformers library version 4.10.2.
similar to topic:
How to get "EleutherAI/gpt-j-6B" working? Models
I’m trying to run the EleutherAI/gpt-j-6B model, but with no luck. The code
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
returns the following error:
Traceback (most recent call last):
File "gptjtest.py", line 18, in <module>
model = AutoModelForCausalLM.from_pretrained("gpt-j-6B")
File "/home/marcin/miniconda3/envs/py37/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 383, in from_pretrained
pretrained_model_name_or_path, return_u…
github pull request seems to have been merged already. | GPT-J has been merged and is part of version 4.11, so you should be able to update your transformers version to the latest one and use GPT-J. | 1 |
huggingface | Beginners | Import Error for timm with facebook/detr-resnet-50 | https://discuss.huggingface.co/t/import-error-for-timm-with-facebook-detr-resnet-50/10372 | I am working through implementing the How to Use section from the Facebook Detr Resnet 50 model card here: https://huggingface.co/facebook/detr-resnet-50 and am getting the error below when calling DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50').
Even after I pip install timm. Any suggestions or help is welcomed
ImportError Traceback (most recent call last)
<ipython-input-41-ec07e43ae43f> in <module>()
----> 1 DetrModel.from_pretrained()
1 frames
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in requires_backends(obj, backends)
681 name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__
682 if not all(BACKENDS_MAPPING[backend][0]() for backend in backends):
--> 683 raise ImportError("".join([BACKENDS_MAPPING[backend][1].format(name) for backend in backends]))
684
685
ImportError:
DetrModel requires the timm library but it was not found in your environment. You can install it with pip:
`pip install timm`
Many thanks! | What is your environment? In Colab notebooks, it might help to restart the runtime. | 1 |
huggingface | Beginners | Using Hugging Face dataset class as pytorch class | https://discuss.huggingface.co/t/using-hugging-face-dataset-class-as-pytorch-class/10385 | Hi,
I have created a custom dataset class using hugging face, and for some reason I would like to use this class as a pytorch dataset class. (with get_item etc…)
Is it possible ?
Thanks | This is possible by default. What exactly do you want to do? You can simply use such dataset in a PT dataloader as well, as long as you set the format to torch. For instance:
dataset.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask', 'labels']) | 0 |
huggingface | Beginners | Key error: 0 in DataCollatorForSeq2Seq for BERT | https://discuss.huggingface.co/t/key-error-0-in-datacollatorforseq2seq-for-bert/7260 | Hello everyone,
I am trying to fine-tune a German BERT2BERT model for text summarization using bert-base-german-cased, and I want to use dynamic padding. However, when calling Trainer.train() I receive an error that tensors cannot be created and that I should use padding. I was able to trace this error back to my DataCollator. The code I used is the following:
First, I define the function to tokenize my data and do so using the map function.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-german-cased")
tokenizer.bos_token = tokenizer.cls_token
tokenizer.eos_token = tokenizer.sep_token
max_input_length = 512
max_target_length = 128
def prepro_bert2bert(samples):
    model_inputs = tokenizer(samples["text"], max_length = max_input_length, truncation = True)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(samples["description"], max_length = max_target_length, truncation = True)
    samples["input_ids"] = model_inputs.input_ids
    samples["attention_mask"] = model_inputs.attention_mask
    samples["decoder_input_ids"] = labels.input_ids
    samples["decoder_attention_mask"] = labels.attention_mask
    samples["labels"] = labels.input_ids.copy()
    return samples
traindata = Dataset.from_pandas(traindata)
tokenized_traindata = traindata.map(prepro_bert2bert, batched = True, remove_columns = ["text", "description", "__index_level_0__"])
tokenized_traindata.set_format(columns = ["labels", "input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask"])
My tokenized_traindata looks like the following:
Dataset({
features: ['attention_mask', 'decoder_attention_mask', 'decoder_input_ids', 'input_ids', 'labels'],
num_rows: 7986
})
Then I instantiate my bert2bert model and my DataCollator:
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-german-cased", "bert-base-german-cased")
data_collator = DataCollatorForSeq2Seq(tokenizer, model = bert2bert, padding = "longest")
Lastly, I form batches from my training data and want to use the data_collator
samples = tokenized_traindata[:8]
batch = data_collator(samples)
This returns the following error message
KeyError Traceback (most recent call last)
in
----> 1 batch = data_collator(samples)
2 {k: v.shape for k, v in batch.items()}
~\miniconda3\envs\BERTnew\lib\site-packages\transformers\data\data_collator.py in call(self, features)
271
272 def call(self, features):
→ 273 labels = [feature[“labels”] for feature in features] if “labels” in features[0].keys() else None
274 # We have to pad the labels before calling tokenizer.pad as this method won’t pad them and needs them of the
275 # same length to return tensors.
KeyError: 0
Unfortunately, I do not know where to look further for a solution. I hope someone has a suggestion on where to look or how to solve this. Thank you very much in advance! | This is because the datasets library returns a slice of the dataset as a dictionary with lists for each key. The data collator, however, expects a list of dataset elements, i.e. a list of dictionaries. Practically, I think you need to do:
samples = [tokenized_traindata[i] for i in range(8)]
batch = data_collator(samples) | 0 |
huggingface | Beginners | Loading model from checkpoint after error in training | https://discuss.huggingface.co/t/loading-model-from-checkpoint-after-error-in-training/758 | Let’s say I am finetuning a model and during training an error is encountered and the training stops. Let’s also say that, using Trainer, I have it configured to save checkpoints along the way in training. How would I go about loading the model from the last checkpoint before it encountered the error?
For reference, here is the configuration of my Trainer object:
TRAINER ARGS
args: TrainingArguments(
output_dir='models/textgen/out',
overwrite_output_dir=False,
do_train='True',
do_eval=False,
do_predict=False,
evaluate_during_training=False,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
per_gpu_train_batch_size=None,
per_gpu_eval_batch_size=None,
gradient_accumulation_steps=1,
learning_rate=5e-05,
weight_decay=0.0,
adam_epsilon=1e-08,
max_grad_norm=1.0,
num_train_epochs=3.0,
max_steps=-1,
warmup_steps=0,
logging_dir='models/textgen/logs',
logging_first_step=False,
logging_steps=500,
save_steps=500,
save_total_limit=None,
no_cuda=False,
seed=42,
fp16=False,
fp16_opt_level='O1',
local_rank=-1,
tpu_num_cores=None,
tpu_metrics_debug=False,
debug=False,
dataloader_drop_last=False,
eval_steps=1000,
past_index=-1)
data_collator: <function sd_data_collator at 0x7ffaba8f8e18>
train_dataset: <custom_dataset.SDAbstractsDataset object at 0x7ffa18c8c400>
eval_dataset: None
compute_metrics: None
prediction_loss_only: False
optimizers: None
tb_writer: <torch.utils.tensorboard.writer.SummaryWriter object at 0x7ff9f79e45c0> | The checkpoint should be saved in a directory that will allow you to go model = XXXModel.from_pretrained(that_directory). | 0 |
huggingface | Beginners | Filtering Dataset | https://discuss.huggingface.co/t/filtering-dataset/10228 | I’m trying to filter a dataset based on the ids in a list. This approach is too slow. The dataset is an Arrow dataset.
responses = load_dataset('peixian/rtGender', 'responses', split = 'train')
# post_id_test_list contains list of ids
responses_test = responses.filter(lambda x: x['post_id'] in post_id_test_list) | Hi baumstan.
I’m not sure I understand the question. Why does it matter if it is slow?
I would expect you to create and then save your train/test datasets only once, before you start using your model. If it takes a long time, just leave it running.
Are you trying to use a dynamic post_id_test_list, or to train with transient data, or what?
I suspect you might find better answers on Stack Overflow, as this doesn’t look like a Huggingface-specific question. | 0 |
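Even so, the filter itself can usually be sped up considerably: use a set for the membership test and let datasets process batches in parallel. The batch-wise lambda, batched=True, and num_proc below are standard Dataset.filter options; the num_proc value is an arbitrary choice.
post_id_test_set = set(post_id_test_list)   # O(1) membership instead of scanning a list

responses_test = responses.filter(
    lambda batch: [pid in post_id_test_set for pid in batch["post_id"]],
    batched=True,    # the lambda receives many rows at once and returns a list of booleans
    num_proc=4,      # parallelise across processes
)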
huggingface | Beginners | Issue uploading model: “fatal: cannot exec ‘.git/hooks/pre-push’: Permission denied” | https://discuss.huggingface.co/t/issue-uploading-model-fatal-cannot-exec-git-hooks-pre-push-permission-denied/2947 | Hi there,
I’m trying to upload my first model to the model hub. I’ve followed step-by-step the docs here 3, but encountered different issues:
I got it to sort of work on my local machine, but it was extremely slow (20~kbit per sec) and I had to abandon it. I saw this topic and that git lfs install is important, but it still doesn’t work.
Now I’ve retried it on a google colab and there I’m getting the following error when I !git push :
fatal: cannot exec '.git/hooks/pre-push': Permission denied.
Here is a colab with my exact code: https://colab.research.google.com/drive/1OSJh4GySF_m3RZTPerXtBlNltMjv3hQy?usp=sharing 16
Uploading via colab is more important/convenient for me because I’m also training via colab.
Would be great if someone could tell me what I’m doing wrong in the colab (I’m new to using git) and I’m looking forward to adding my first models to the hub | Hi @MoritzLaurer, the doc could be improved but in your notebook it looks like you have two git repos (one you create with git init, the other you clone from huggingface.co) whereas you only should have one.
Here’s a minimal Colab here: https://colab.research.google.com/drive/1TFgta6bYHte2lLiQoJ0U0myRd67bIuiu?usp=sharing 94
Let me know if this solves your issue.
@sgugger I think we might want to remove the “and your clone is setup with the right remote URL” clause from the doc as it seems to be more confusing than helpful, what do you think? Maybe we can also link to my Colab above as a self-contained example of how to push from a Colab notebook. | 0 |
huggingface | Beginners | What is the difference between forward() and generate()? | https://discuss.huggingface.co/t/what-is-the-difference-between-forward-and-generate/10235 | Hi!
It seems like some models implement both functions and semantically they behave similarly, but might be implemented differently? What is the difference? In both cases, for an input sequence, the model produces a prediction (inference)?
Thank you,
wilornel | Hi,
forward() can be used both for training and inference.
generate() can only be used at inference time, and uses forward() behind the scenes. It is used for several decoding strategies such as beam search, top k sampling, and so on (a detailed blog post can be found here 5). | 1 |
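A small sketch contrasting the two calls on GPT-2 (the prompt and generation settings are arbitrary): forward() gives one set of logits per input position, while generate() loops over forward() and applies a decoding strategy such as beam search.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer("The weather today is", return_tensors="pt")

# forward(): a single pass, logits for every position of the input
with torch.no_grad():
    logits = model(**inputs).logits            # shape (batch, seq_len, vocab_size)
next_token_id = logits[:, -1, :].argmax(dim=-1)

# generate(): repeated forward passes plus a decoding strategy
output_ids = model.generate(**inputs, max_length=20, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0]))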
huggingface | Beginners | Questions about the shape of T5 logits | https://discuss.huggingface.co/t/questions-about-the-shape-of-t5-logits/10207 | The shape of the logits in the output is (batch, sequence_length, vocab_size). I don't understand the sequence_length part. I thought the decoder should predict one word at a time, so the logits should be (batch, vocab_size).
Thank you in advance for any replies! | Hi,
Yes, but you always have a sequence length dimension. At the start of generation, we give the decoder start token to the T5 decoder.
Suppose you have trained a T5 model to translate language from English to French, and that we now want to test it on the English sentence “Welcome to Paris”. In that case, the T5 encoder will first encode this sentence, so we get last_hidden_states of the encoder of shape (batch_size, sequence_length, hidden_size). As we only have a single sentence here, batch_size = 1, for simplicity let’s assume that every word is a token and we don’t add special tokens, so sequence_length = 3, and the hidden_size of a T5 base model is 768. So the output of the encoder is of shape (1, 3, 768).
Next, we have the decoder. We first give it the config.decoder_start_token_id , which for T5 is equal to 0 (i.e. the id of the pad token = <pad>). This will be our only token at the beginning, hence sequence length = 1. So what we give as input (assuming we turned the decoder start token id into a vector) to the decoder is of shape (batch_size, sequence_length, hidden_size) = (1, 1, 768), and it will output the scores for each of the tokens of the vocabulary, hence shape (batch_size, sequence_length, vocab_size) = (1, 1, 32100). This will indicate which token T5 thinks will follow the pad token (so ideally it should output “Bienvenue”).
Next, we give <pad> Bienvenue as input to the decoder, so now our sequence length is 2. Assuming we have turned both tokens into a vector, the input to the decoder is now of shape (1, 2, 768). It will output a tensor of shape (1, 2, 32100). We are only interested in the logits for the last token, and we will take that as the prediction for the next token (ideally it should output “à”). | 0 |
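The walk-through above can be written down as a naive greedy loop; only the logits of the last position are kept at each step, which is exactly why the sequence_length dimension exists. t5-small and the 10-step cap are arbitrary choices for illustration.
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

encoder_inputs = tokenizer("translate English to French: Welcome to Paris", return_tensors="pt")
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

for _ in range(10):
    logits = model(**encoder_inputs, decoder_input_ids=decoder_input_ids).logits
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # only the last position matters
    decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
    if next_token.item() == model.config.eos_token_id:
        break

print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))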
huggingface | Beginners | ImportError while loading huggingface tokenizer | https://discuss.huggingface.co/t/importerror-while-loading-huggingface-tokenizer/10193 | My broad goal is to be able to run this Keras demo. 1
I’m trying to load a huggingface tokenizer using the following code:
import os
import re
import json
import string
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer, TFBertModel, BertConfig
slow_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
However, I get the following error message:
Exception ignored in: <function tqdm.__del__ at 0x000001AE43527A60>
Traceback (most recent call last):
File "C:\Users\iavta\anaconda3\envs\Ivo\lib\site-packages\tqdm\std.py", line 1152, in __del__
self.close()
File "C:\Users\iavta\anaconda3\envs\Ivo\lib\site-packages\tqdm\notebook.py", line 286, in close
self.disp(bar_style='danger', check_delay=False)
AttributeError: 'tqdm' object has no attribute 'disp'
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_10096/4122314904.py in <module>
1 # Save the slow pretrained tokenizer
----> 2 slow_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
3 #save_path = "bert_base_uncased"
4 #if not os.path.exists(save_path):
5 # os.makedirs(save_path)
~\anaconda3\envs\Ivo\lib\site-packages\transformers\tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs,
**kwargs)
1688 else:
1689 try:
-> 1690 resolved_vocab_files[file_id] = cached_path(
1691 file_path,
1692 cache_dir=cache_dir,
~\anaconda3\envs\Ivo\lib\site-packages\transformers\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1402 if is_remote_url(url_or_filename):
1403 # URL, so get it from the cache (downloading if necessary)
-> 1404 output_path = get_from_cache(
1405 url_or_filename,
1406 cache_dir=cache_dir,
~\anaconda3\envs\Ivo\lib\site-packages\transformers\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1665 logger.info(f"{url} not found in cache or force_download set to True, downloading to {temp_file.name}")
1666
-> 1667 http_get(url_to_download, temp_file, proxies=proxies, resume_size=resume_size, headers=headers)
1668
1669 logger.info(f"storing {url} in cache at {cache_path}")
~\anaconda3\envs\Ivo\lib\site-packages\transformers\file_utils.py in http_get(url, temp_file, proxies, resume_size, headers)
1516 content_length = r.headers.get("Content-Length")
1517 total = resume_size + int(content_length) if content_length is not None else None
-> 1518 progress = tqdm(
1519 unit="B",
1520 unit_scale=True,
~\anaconda3\envs\Ivo\lib\site-packages\tqdm\notebook.py in __init__(self, *args, **kwargs)
240 unit_scale = 1 if self.unit_scale is True else self.unit_scale or 1
241 total = self.total * unit_scale if self.total else self.total
--> 242 self.container = self.status_printer(self.fp, total, self.desc, self.ncols)
243 self.container.pbar = proxy(self)
244 self.displayed = False
~\anaconda3\envs\Ivo\lib\site-packages\tqdm\notebook.py in status_printer(_, total, desc, ncols)
113 # Prepare IPython progress bar
114 if IProgress is None: # #187 #451 #558 #872
--> 115 raise ImportError(
116 "IProgress not found. Please update jupyter and ipywidgets."
117 " See https://ipywidgets.readthedocs.io/en/stable"
ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
I’ve already updated Jupyter to 6.4.3 version, and the ipywidgets. | I solved this issue by installing the C++ Visual Studio Compiler. | 0 |
huggingface | Beginners | Collapsing Wav2Vec2 pretraining loss | https://discuss.huggingface.co/t/collapsing-wav2vec2-pretraining-loss/10104 | I'm trying to pretrain a Wav2Vec2 model based on the example given here 1.
I was initially getting a contrastive loss like the graph on the left, which seemed very slow, so I upped the learning rate and got the graph on the right after only a few steps.
(figure: two contrastive-loss curves, the slow initial run on the left and the collapsed loss after raising the learning rate on the right)
I’m not familiar with the nuts and bolts of contrastive loss but this came as a bit of a surprise and I was wondering if anyone could help me understand.
The batch size (with accumulation) is 32, the number of epochs is 20 and the warmup steps is 1200 for both attempts. | The solution in the end was to set return_attention_mask to True in the feature extractor, or use a pretrained feature extractor and model that prefers attention masks (i.e. not wav2vec2-base). | 0 |
huggingface | Beginners | Advise on model design, fine-tune model to output text given numerical values | https://discuss.huggingface.co/t/advise-on-model-design-fine-tune-model-to-output-text-given-numerical-values/10088 | I have recently done some work on gesture recognition using sensors attached to e.g. gloves. With a defined set of distinct gestures the model works fairly well. However an idea that sprung up is if it would be possible to use pretrained “general knowledge” models to also predict other gestures. Deep down in, lets say, GPT-2 there might be some knowledge of what a “pointing finger” or a “waving hand” is. With my limited exposure to NLP and transformers: would it be possible to fine-tune a pretrained model so that it tells us some semantic representation of the gesture?
The question is broad and I will try to break it down as far as I have thought it through:
The input data is simply the numerical values (fixed size float vector) from the sensors (possibly in a sequence). The first step of using e.g. GPT-2 would be to discard the first textual tokenization and embedding step. I would say that this is an input domain shift and any pointers/discussion about this would be welcome, I have yet to find anything with my google-fu. One approach would perhaps simply be to feed the sensor data to the models directly.
The encoder/decoder steps of the model could perhaps work as is. Slow fine-tuning of these steps so that the general knowledge is preserved is probably important.
The output of the model could probably come in many different forms. I think the most interesting output would be sort of like a summarization of the gesture (e.g. a few tokens). However I have some trouble thinking of how to define the labels during training. When recording gestures for the training data it is easy to come up with many different words for a single gesture (e.g. “victory” or “2” for stretched index and middle finger). Would it be possible to combine several labels into one label? A first step could also simply be a single label just to see “if it works”.
There are many different NLP tasks and the models are generally suited for a specific task. Would GPT-2 be usable to, for example, output a small set of tokens or are other models perhaps better suited?
I would love to have a discussion about this approach and also be pointed to resources that I have (surely) missed. | Hi johank.
(I am not an expert in any of these areas.)
I don’t think GPT-2 has any knowledge of anything “Deep down”. The way the model works is only probabilistic. It doesn’t automatically “know” even simple things like sizes. If you ask it how many pints to a gallon, it might be able to tell you, but it might also generate a paragraph that implies that a pint is bigger than a gallon, without “realising” that it should check for that kind of error.
I suppose, if GPT2 has seen enough descriptions of a “pointing finger” it might be able to associate a description with the label, but I don’t think that’s what you are after.
There is almost certainly a better understanding of “pointing finger” inside your head than in GPT2. Although you are having trouble thinking of how to define the labels during training, I think you would be better at it than GPT2.
If you “discard the first textual tokenization and embedding step”, then the whole trained GPT2 effectively becomes untrained.
When people make a “victory” gesture they mean something completely different to a “2” gesture. If GPT2 “knows” about either or both of those gestures, it will “know” about them as completely different things. It is unlikely that GPT2 will ever have been told that the two physical manifestations are similar.
When you say “gesture”, do you mean a single hand shape, or is the motion important?
I would be interested to see whether a neural network could learn to distinguish between “victory” and “2”. Obviously, it can only learn to distinguish them if the training data has something different about them. I imagine it might have, particularly if your data includes motion and not merely shape
GPT2 might possibly have some memory about what other text is commonly found in the vicinity of the words “pointing finger”.
You could test out what gestures GPT2 already “knows” about, by feeding it some starter text such as “He made a gesture of …” and looking at the probabilities for the next words.
GPT2 is very (very) large, and would need a lot of time and a lot of data to train it. I think a smaller model would be more suitable.
My guess is that you don’t want a pre-trained text model at all. I could be wrong about that. | 0 |
huggingface | Beginners | How to determine if a sentence is correct? | https://discuss.huggingface.co/t/how-to-determine-if-a-sentence-is-correct/10036 | Is there any way to determine whether a sentence is correct? I have tried to calculate sentence perplexity using GPT-2 as described here: GPT-2 Perplexity Score Normalized on Sentence Lenght? 3.
With that I get quite close scores, even though it is obvious that the second sentence is completely wrong.
I am a man. 50.63967
I is an man. 230.10565
Is there any other way to calculate whether a sentence is correct? These perplexity values are still quite close together.
Maybe finetune T5 on examples, if there is a training set?
I have made some huge 3-gram and 4-gram models and they seem to be useless; even though I used around 800 GB of text, I can't tell if a sentence is good or not. | Although I cannot vouch for their quality, there are a number of grammar correction models in the model hub: Models - Hugging Face 8
They seem to finetune T5 or GPT as you mentioned. However, there will never be a guarantee that the model output is 100% grammatically correct. I think a rule-based approach suits grammar the most, since it mostly follows well-defined rules. | 0 |
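For reference, a self-contained version of the GPT-2 perplexity check discussed in the question; the threshold you pick for "correct" remains a judgment call, and as the answer notes, perplexity alone will not cleanly separate grammar from style or topic.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

print(perplexity("I am a man."))
print(perplexity("I is an man."))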
huggingface | Beginners | Fine-Tune BERT with two Classification Heads “next to each other”? | https://discuss.huggingface.co/t/fine-tune-bert-with-two-classification-heads-next-to-each-other/9984 | I am currently working on a project to fine-tune BERT models on a multi-class classification task with the goal of classifying job ads into some broader categories like “doctors” or “sales” via AutoModelForSequenceClassification (which works quite well). Now I am wondering whether it would be possible to add a second classification head “next” to the first one (not in sequence) to classify the minimum educational level that is required for the job. I imagine that each head is directly connected to the pooler output and then makes a prediction independent of the other’s prediction. I think my use case is slightly different from multi-label classification since both labels describe different aspects of the job ad.
Similar to this CV example: What is a multi-headed model? And what exactly is a ‘head’ in a model? 1
I hope it’s not total nonsense that I’m asking here
Greetings,
David | Sure, you can just use any default model, e.g. BertModel and add your custom classification heads on top of that. Have a look at the existing iclassification implementation. You can basically duplicate that, but add another classifier layer. Of course you’ll also have to adapt the forward method accordingly.
github.com
huggingface/transformers/blob/c1e47bf4fe1d9de06d774cc2c24ec5a93461c5a5/src/transformers/models/bert/modeling_bert.py#L1481 1
)
@add_start_docstrings(
"""
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
""",
BERT_START_DOCSTRING,
)
class BertForSequenceClassification(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.config = config
self.bert = BertModel(config)
classifier_dropout = (
config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
)
self.dropout = nn.Dropout(classifier_dropout) | 0 |
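Building on the answer above, a minimal sketch of a BERT body with two independent heads on the pooled output; the checkpoint name, label counts, and the simple summed loss are assumptions to adapt to the job-category and education-level tasks.
import torch.nn as nn
from transformers import BertModel

class BertWithTwoHeads(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_job_labels=20, num_edu_labels=5):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.dropout = nn.Dropout(0.1)
        self.job_head = nn.Linear(hidden, num_job_labels)   # broad job category
        self.edu_head = nn.Linear(hidden, num_edu_labels)   # minimum educational level

    def forward(self, input_ids, attention_mask=None, job_labels=None, edu_labels=None):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        pooled = self.dropout(pooled)
        job_logits = self.job_head(pooled)
        edu_logits = self.edu_head(pooled)
        loss = None
        if job_labels is not None and edu_labels is not None:
            ce = nn.CrossEntropyLoss()
            loss = ce(job_logits, job_labels) + ce(edu_logits, edu_labels)
        return {"loss": loss, "job_logits": job_logits, "edu_logits": edu_logits}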
huggingface | Beginners | How to fine-tune a pre-trained model and then get the embeddings? | https://discuss.huggingface.co/t/how-to-fine-tune-a-pre-trained-model-and-then-get-the-embeddings/10061 | I would like to fine-tune a pre-trained model. This is the model:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
This is the data (I know it is not clinical but let’s roll with it for now):
from fastai.datasets import untar_data, URLs
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
df.head()
How can I fine-tune the above model with this data? I know the answer is here but I cannot figure it out.
I would then like to take the embeddings. I tried model.last_hidden_state (as I have seen outputs.last_hidden_state) but it does not work either. | Please, before asking questions look on the internet for a minute or two. This is a VERY common use case, as you may have expected. It takes us too much time to keep repeating all the same questions. Thanks.
The first hit that I got on Google already gives you a tutorial on fine-tuning: Fine-tuning a pretrained model — transformers 4.10.1 documentation 6
Second: Fine-tuning with custom datasets — transformers 4.10.1 documentation 2
Notebooks: 🤗 Transformers Notebooks — transformers 4.10.1 documentation 1
Of course, you cannot get the last hidden states as an attribute of the model. You first need to do a forward pass with some given data. From the output of the data you can then extract the last hidden state. | 0 |