Dataset columns:
docs: string (4 classes)
category: string (length 3–31)
thread: string (length 7–255)
href: string (length 42–278)
question: string (length 0–30.3k)
context: string (length 0–24.9k)
marked: int64 (0 or 1)
huggingface
Beginners
Can’t download (some) models although they are in the hub
https://discuss.huggingface.co/t/cant-download-some-models-although-they-are-in-the-hub/13944
Can’t download (some) models to pytorch, although they are in the hub (tried also the from_tf flag) Error: 404 Client Error: Not Found for url: https://huggingface.co/umarayub/t5-small-finetuned-xsum/resolve/main/config.json Models for example: all of those models give 404 when trying to download them [ “SvPolina/t5-small-finetuned-CANARD”, “Edwardlzy/t5-small-finetuned-xsum”, “Teepika/t5-small-finetuned-xsum”, “HuggingLeg/t5-small-finetuned-xsum”, “V3RX2000/t5-small-finetuned-xsum”, “Teepika/t5-small-finetuned-xsum-glcoud”, “VenkateshE/t5-small-finetuned-xsum”, “Wusgnob/t5-small-finetuned-xsum”, “HugoZhu/t5-small-finetuned-xsum”, “Zazik/t5-small-finetuned-xsum”, “Paramveer/t5-small-finetuned-xsum”, “arkosark/t5-small-finetuned-xsum”, “RamadasK7/t5-small-finetuned-squad”, “bochaowei/t5-small-finetuned-cnn-wei2”, “Kyaw/t5-small-finetuned-xsum”, “ggosline/t5-small-herblables”, ] These, for example, do work: “valhalla/t5-small-qa-qg-hl”, “mrm8488/t5-small-finetuned-quora-for-paraphrasing”
Looking at umarayub/t5-small-finetuned-xsum at main, there are indeed no files in that repo: no config.json has been uploaded, which is why from_pretrained returns a 404.
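If you hit this with other repos, one quick check is to list the repo's files before calling from_pretrained. A minimal sketch using huggingface_hub (assuming it is installed; the repo id is the one from the question):

```python
# Check whether a Hub repo actually contains a config.json before trying to
# download it with from_pretrained.
from huggingface_hub import HfApi

repo_id = "umarayub/t5-small-finetuned-xsum"
files = HfApi().list_repo_files(repo_id)
if "config.json" not in files:
    print(f"{repo_id} has no config.json, so from_pretrained will return a 404")
```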
0
huggingface
Beginners
Trainer.push_to_hub is taking lot of time, is this expected behaviour?
https://discuss.huggingface.co/t/trainer-push-to-hub-is-taking-lot-of-time-is-this-expected-behaviour/14162
Hi, I’m trying to push my model to the HF hub via trainer.push_to_hub and I see that it is taking a lot of time. Is this the expected behaviour? Below is the output: Saving model checkpoint to xlm-roberta-base Configuration saved in xlm-roberta-base/config.json Model weights saved in xlm-roberta-base/pytorch_model.bin tokenizer config file saved in xlm-roberta-base/tokenizer_config.json Special tokens file saved in xlm-roberta-base/special_tokens_map.json Several commits (2) will be pushed upstream. The progress bars may be unreliable.
@sgugger can you please help me out with this error?
0
huggingface
Beginners
SSLCertVerificationError when loading a model
https://discuss.huggingface.co/t/sslcertverificationerror-when-loading-a-model/12005
I am exploring potential opportunities of using HuggingFace “Transformers”. I have been trying check some basic examples from the introductory course, but I came across a problem that I have not been able to solve. I have successfully installed transformers on my laptop using pip, and I have tried to run your “sentiment-analysis” example using Jupyter Notebook, Spider, or Python Command Line. I am running Python 3.8.5 on Windows 10 behind a company firewall: transformers version: 4.12.3 Platform: Windows-10-10.0.19041-SP0 Python version: 3.8.5 PyTorch version (GPU?): not installed (NA) Tensorflow version (GPU?): 2.7.0 (False) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: NO Using distributed or parallel set-up in script?: NO When I do: from transformers import pipeline I get no errors, but when I try: classifier = pipeline(“sentiment-analysis”) I get: No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english (distilbert-base-uncased-finetuned-sst-2-english · Hugging Face) HTTPSConnectionPool(host=‘huggingface.co’, port=443): Max retries exceeded with url: /distilbert-base-uncased-finetuned-sst-2-english/resolve/main/config.json (Caused by SSLError(SSLCertVerificationError(1, ‘[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)’))) followed by a long detailed traceback for SSLCertVerificationError Any advice on how to proceed? Thanks a lot Miroslaw Bartkowiak
I’m also getting the same error. Please let me know if anyone has a resolution for this.
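A common workaround when a corporate proxy re-signs HTTPS traffic is to point the requests library (which transformers uses for downloads) at the company's CA bundle. This is only a sketch under that assumption; the certificate path is a placeholder you would get from your IT team:

```python
import os

# Placeholder path to the corporate root certificate bundle (PEM format).
os.environ["REQUESTS_CA_BUNDLE"] = r"C:\certs\company-root-ca.pem"

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("I love this!"))
```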
0
huggingface
Beginners
How to use embeddings to compute similarity?
https://discuss.huggingface.co/t/how-to-use-embeddings-to-compute-similarity/13876
Hi, I would like to compute sentence similarity between an input text and an output text using cosine similarity and the embeddings I can get from the Feature Extraction task. However, I noticed that it returns matrices of different dimensions, so I cannot perform the matrix calculation. For example, in facebook/bart-base · Hugging Face you’ll get a different matrix size depending on the input text. Is there any way to get just a single vector? Am I thinking about this the right way?
With transformers, the feature-extraction pipeline will retrieve one embedding per token. If you want a single embedding for the full sentence, you probably want to use the sentence-transformers library. There are some hundreds of st models at HF you can use Models - Hugging Face. You might also want to use a transformers model and do pooling, but I would suggest to just use sentence transformers For inference, you can use something like this import requests API_URL = "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-mpnet-base-v2" headers = {"Authorization": "Bearer API_TOKEN"} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": ["this is a sentence", "this is another sentence"] }) # Output is a list of 2 embeddings, each of 768 values.
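For completeness, here is a sketch of the local route with the sentence-transformers library, including the cosine similarity the question asks about (model name as in the API example above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
embeddings = model.encode(["this is a sentence", "this is another sentence"])

# One fixed-size (768-dim) vector per sentence, so cosine similarity is well defined.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(similarity)  # tensor([[...]]) with a single similarity score
```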
0
huggingface
Beginners
How to use additional input features for NER?
https://discuss.huggingface.co/t/how-to-use-additional-input-features-for-ner/4364
Hello, I’ve been following the documentation on fine-tuning custom datasets (Fine-tuning with custom datasets — transformers 4.3.0 documentation), and I was wondering how additional token-level features can be used as input (e.g. POS tags). My intuition was to concatenate each token with the tag before feeding it into a pre-trained tokenizer (e.g. [“Arizona_NNP”, “Ice_NNP”, “Tea_NNP”]). Is this the right way to do it? Is there a better way to do it? Thank you in advance!
mhl: e.g [“Arizona_NNP”, “Ice_NNP”, “Tea_NNP”]). Is this the right way to do it? Actually no, because the pre-trained tokenizer only knows tokens, not tokens + POS tags. A better way to do this would be to create an additional input to the model (besides input_ids and token_type_ids) called pos_tag_ids, for which you can add an additional embedding layer (nn.Embedding). In that way, you can sum the embeddings of the tokens, token types and the POS tags. Let’s illustrate this for a pre-trained BERT model: We first have to modify the BertEmbeddings class 42. In short, we’ll add an embedding layer for the POS tags: class BertEmbeddings(nn.Module): """Construct the embeddings from word, position and token_type embeddings.""" def __init__(self, config): super().__init__() self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) self.pos_tag_embeddings = nn.Embedding(max_number_of_pos_tags, config.hidden_size) (...) def forward( self, input_ids=None, pos_tag_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0 ): if input_ids is not None: input_shape = input_ids.size() else: input_shape = inputs_embeds.size()[:-1] seq_length = input_shape[1] if position_ids is None: position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] if token_type_ids is None: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) if inputs_embeds is None: inputs_embeds = self.word_embeddings(input_ids) token_type_embeddings = self.token_type_embeddings(token_type_ids) pos_tag_embeddings = self.pos_tag_embeddings(pos_tag_ids) embeddings = inputs_embeds + token_type_embeddings + pos_tag_embeddings if self.position_embedding_type == "absolute": position_embeddings = self.position_embeddings(position_ids) embeddings += position_embeddings embeddings = self.LayerNorm(embeddings) embeddings = self.dropout(embeddings) return embeddings The max_number_of_pos_tags is the total unique number of POS tags we have (might be 20 for example, with NNP being one of them), also called the “vocabulary size” of the embedding layer. The config.hidden_size is the size of the embedding vector that we want to learn for each POS tag (which is 768 by default for BERT-base). We would also need to modify the forward pass of BertModel a bit to add the additional input pos_tag_ids: def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, pos_tag_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): (...) embedding_output = self.embeddings( input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, pos_tag_ids=pos_tag_ids, inputs_embeds=inputs_embeds, past_key_values_length=past_key_values_length, ) (...) Now that we have modified the model (modeling_bert.py), let’s move on to provide actual inputs to the model. An additional complexity of BERT-like models is that they rely on subword tokens, rather than words. This means that a word like “Arizona” might be tokenized into [“Ari”, “##zona”]. This means that we will also have to provide POS tags at the token level. 
Similar to how each token is turned into an integer (input_ids), we will also have to turn each POS tag into a corresponding integer (pos_tag_ids) in order to provide it to the model. So we would actually need to keep a dictionary that maps each POS tag to a corresponding integer. For simplicity, let’s assume that we only have two POS tags, namely NNP and VNP. We create corresponding integers (pos_tag_ids) for them, for example [0, 1]. So our vocabulary size of the POS tag embedding layer is only 2. Let’s now provide an example sentence to the model: from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") text = "She sells" # if we tokenize it, this becomes: encoding = tokenizer(text, return_tensors="pt") # this creates a dictionary with keys 'input_ids' etc. # we add the pos_tag_ids to the dictionary pos_tags = [NNP, VNP] encoding['pos_tag_ids'] = torch.tensor([[0, 1]]) # next, we can provide this to our modified BertModel: from transformers import BertModel model = BertModel.from_pretrained("bert-base-uncased") outputs = model(**encoding) Note that the code above assumes that each word is turned into a single token, which is typically not the case. So suppose that the word Arizona is tokenized into [“Ari”, “##zona”], then we would have pos_tag_ids [0, 0] for example.
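To produce those token-level pos_tag_ids in practice, here is a sketch using a fast tokenizer's word_ids() mapping (the tag names, ids, and the padding choice for special tokens are assumptions for illustration):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # fast tokenizer

words = ["Arizona", "Ice", "Tea"]
word_pos_tags = ["NNP", "NNP", "NNP"]
tag2id = {"NNP": 0, "VNP": 1}  # toy POS-tag vocabulary

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

pos_tag_ids = []
for word_idx in encoding.word_ids(batch_index=0):
    if word_idx is None:
        pos_tag_ids.append(0)  # special tokens like [CLS]/[SEP]; a dedicated pad tag also works
    else:
        pos_tag_ids.append(tag2id[word_pos_tags[word_idx]])

encoding["pos_tag_ids"] = torch.tensor([pos_tag_ids])
```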
0
huggingface
Beginners
How to get ‘sequences_scores’ from ‘scores’ in ‘generate()’ method
https://discuss.huggingface.co/t/how-to-get-sequences-scores-from-scores-in-generate-method/6048
Hi all I was wondering if I can ask you some questions about how to use .generate() for BART or other pre-trained models. The example code is, from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig path = 'facebook/bart-large' model = BartForConditionalGeneration.from_pretrained(path) tokenizer = BartTokenizer.from_pretrained(path) ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs." inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') # Generate Summary summary_ids = model.generate( inputs['input_ids'], num_beams=4, num_return_sequences=2, max_length=5, early_stopping=True, output_scores=True, return_dict_in_generate=True, ) print(summary_ids.keys()) print(summary_ids['sequences']) print(summary_ids['sequences_scores']) print(len(summary_ids['scores'][0])) print(summary_ids['scores'][0].size()) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids['sequences']]) Then, the output is, odict_keys(['sequences', 'sequences_scores', 'scores']) tensor([[ 2, 2387, 2387, 964, 2], [ 2, 2387, 4, 4, 2]]) tensor([-0.8599, -0.9924]) 4 torch.Size([4, 50265]) ['MyMy friends', 'My..'] Do not worry about poor performance, [‘MyMy friends’, ‘My…’], since I am only trying to understand how this works. So, the question is, return_dict_in_generate=True returns ['sequences'], but together with output_scores=True, it returns ['sequences', 'sequences_scores', 'scores']. There are other arguments, like output_attentions or output_hidden_states. BART BartForConditionalGeneration documents do not explain anything about .generate(). So, I searched further and found Utilities for Generation (Utilities for Generation — transformers 4.5.0.dev0 documentation 3) that seems to talk about generating outputs using .generate() and Huggingface transformers model 1 that seems to talk about the general methods of base classes, PreTrainedModel, but there is no document that shows what each variable, [‘sequences’, ‘sequences_scores’, ‘scores’], actually work or how they are computed. Where is the documents for this? Is sequences_scores computed as, \sum_{t} \log p(y_{t} | x, y_{t<})? How do you get sequences_scores from scores? My initial guess was to apply softmax on scores in dim=1, then get topk with k=1, but this does not give me very weird answer. import torch sm = torch.nn.functional.softmax(summary_ids['scores'][0], dim=1) topk = sm.topk(k=1, dim=1) print(sm) print(topk) print(summary_ids['sequences'][0]) which comes out as tensor([[1.2851e-04, 8.8341e-12, 2.4085e-06, ..., 3.9426e-12, 2.8815e-12, 1.0564e-08], [1.9899e-05, 1.9899e-05, 1.9899e-05, ..., 1.9899e-05, 1.9899e-05, 1.9899e-05], [1.9899e-05, 1.9899e-05, 1.9899e-05, ..., 1.9899e-05, 1.9899e-05, 1.9899e-05], [1.9899e-05, 1.9899e-05, 1.9899e-05, ..., 1.9899e-05, 1.9899e-05, 1.9899e-05]]) torch.return_types.topk( values=tensor([[9.9271e-01], [1.9899e-05], [1.9899e-05], [1.9899e-05]]), indices=tensor([[2387], [ 0], [ 0], [ 0]])) tensor([ 2, 2387, 2387, 964, 2]) First token 2387 appears to be correct, but from the second, the probability is 1.9899e-05, which is just equivalent to 1/len(tokenizer). This seems to me that all the tokens are likely to be generated equally. So, How do you get sequences_scores from scores? 4. How do I get the probability of all the conditional probability of output tokens? For example, if .generate() gives output as [I, am, student], then how do I get the conditional probability of each token? 
[Pr(I | x), Pr(am | x, I), Pr(student | x, I, am)]. Initially, I thought it was ‘scores’, but I am not sure now. 5. Since I find it difficult to find documents/information on .generate() nor any information above, is this something that experienced researchers in NLP or programming would just be able to guess? Thank you in advance
I am having the same problem as in 4. @patrickvonplaten , can you help us with this? Thanks!
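Under beam search, sequences_scores is the summed log-probability of the chosen beam (normalized by length via length_penalty), and recovering it from scores by hand is awkward because beams are reordered at every step; newer transformers releases added model.compute_transition_scores() for exactly that. For greedy decoding the relationship is simpler, and a sketch of recovering the per-token log-probabilities (question 4), reusing the model and inputs from the post above, could look like this:

```python
import torch

out = model.generate(
    inputs["input_ids"],
    max_length=5,
    output_scores=True,
    return_dict_in_generate=True,
)  # greedy decoding: no num_beams, no sampling

# scores[t] has shape (batch, vocab_size); sequences holds the start token
# followed by one generated token per decoding step.
gen_tokens = out.sequences[:, -len(out.scores):]

token_logprobs = []
for step, step_scores in enumerate(out.scores):
    logprobs = torch.log_softmax(step_scores, dim=-1)
    token_logprobs.append(logprobs[0, gen_tokens[0, step]].item())

print(token_logprobs)       # [log p(y_1|x), log p(y_2|x, y_1), ...]
print(sum(token_logprobs))  # summed log-probability of the generated sequence
```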
0
huggingface
Beginners
How to prepare local dataset for load_dataset() and mimic its behavior when loading HF’s existing online dataset
https://discuss.huggingface.co/t/how-to-prepare-local-dataset-for-load-dataset-and-mimic-its-behavior-when-loading-hfs-existing-online-dataset/6368
Good day! Thank you very much for reading this question. I am working on private dataset in local storage and I want to mimic the program that loads dataset with load_dataset(). In order not to modify the training loop, I would like to convert my private dataset into the exact format the online dataset is stored; so that after loading the dataset, it will have exact same behavior, i.e. having a DatasetDict object with 3 splits (train, validation and test) with feature ‘translation’ which contains two key value pairs in each row with key name as language code and value as sentence). The behavior is shown as below. Would you please help me with (1) the folder structure, naming of the files, and the data format (2) how to call load_dataset so that it will return a DatasetDict with same behavior as below. raw_datasets = load_dataset("wmt17", "de-en") print(raw_datasets) ''' DatasetDict({ train: Dataset({ features: ['translation'], num_rows: 5906184 }) validation: Dataset({ features: ['translation'], num_rows: 2999 }) test: Dataset({ features: ['translation'], num_rows: 3004 }) }) ''' print(raw_datasets["train"][0]) ''' {'translation': {'de': 'Wiederaufnahme der Sitzungsperiode', 'en': 'Resumption of the session'}} ''' Thank you!
hey @jenniferL, to have the same behaviour as your example you’ll need to create a dataset loading script (see docs 8) which defines the configuration (de-en in your example), along with the column names and types. once your script is ready, you should be able to do something like: from datasets import load_dataset dataset = load_dataset('PATH/TO/MY/SCRIPT.py', 'my_configuration', data_files={'train': 'my_train_file.txt', 'validation': 'my_validation_file.txt'}) tips: you might need to hardcode data_files explicitly in your script to preserve the exact same signature you have for load_dataset in your example. you might find this 13 script template a useful place to start from
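If you would rather skip the loading script entirely, here is a sketch of building the same structure in memory with a Translation feature (the file parsing and the *_pairs variables are placeholders you would fill in yourself):

```python
from datasets import Dataset, DatasetDict, Features, Translation

features = Features({"translation": Translation(languages=["de", "en"])})

def to_dataset(pairs):
    # pairs: list of (de_sentence, en_sentence) tuples parsed from your files
    data = {"translation": [{"de": de, "en": en} for de, en in pairs]}
    return Dataset.from_dict(data, features=features)

raw_datasets = DatasetDict({
    "train": to_dataset(train_pairs),
    "validation": to_dataset(valid_pairs),
    "test": to_dataset(test_pairs),
})
print(raw_datasets["train"][0])  # {'translation': {'de': ..., 'en': ...}}
```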
0
huggingface
Beginners
RagRetriver Import error
https://discuss.huggingface.co/t/ragretriver-import-error/3475
I am getting an error cannot import name RagRetriever (unknown location). Using base terminal under conda python3.8. transformers 4.2.2
Does it work under 4.2.1 (stable version)? Do either of these help at all: the Stack Overflow question “cannot import name 'pipline' from 'transformers' (unknown location)” or the GitHub issue “upgrading new transformer doesn't work” on huggingface/transformers? I am not an expert, and your question might be sufficient for an expert to answer, but I suggest you add a bit more detail (e.g. what are you running, have you managed to make any other transformers models work, what are you attempting to achieve).
0
huggingface
Beginners
Class weights for bertForSequenceClassification
https://discuss.huggingface.co/t/class-weights-for-bertforsequenceclassification/1674
I have unbalanced data with a couple of classes that have relatively small sample sizes. I am wondering if there is a way to assign class weights to the BertForSequenceClassification class, maybe in BertConfig, as we can do in nn.CrossEntropyLoss. Thank you in advance!
No, you need to compute the loss outside of the model for this. If you’re using Trainer, see here on how to change the loss from the default computed by the model.
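A sketch of that approach, subclassing Trainer and computing a weighted cross-entropy loss yourself (the class weights here are made-up values, and the override targets the compute_loss signature of this transformers era):

```python
import torch
from torch import nn
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Example weights for a 3-class problem; adjust to your class frequencies.
        weights = torch.tensor([1.0, 5.0, 5.0], device=logits.device)
        loss_fct = nn.CrossEntropyLoss(weight=weights)
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```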
0
huggingface
Beginners
How to show the learning rate during training
https://discuss.huggingface.co/t/how-to-show-the-learning-rate-during-training/13914
Hi everyone, I would like to know if it is possible to include the learning rate value as part of the information presented during training. The columns Accuracy, F1, Precision and Recall were added after setting a custom compute_metrics function, and I would like to have the learning rate as well. Is it possible to add it there? Thanks in advance
Hi Alberto, yes it is possible to include learning rate in the evaluation logs! Fortunately, the log() method of the Trainer class is one of the methods that you can “subclass” to inject custom behaviour: Trainer So, all you have to do is create your own Trainer subclass and override the log() method like so: class MyTrainer(Trainer): def log(self, logs: Dict[str, float]) -> None: logs["learning_rate"] = self._get_learning_rate() super().log(logs) trainer = MyTrainer(...) trainer.train() You should now see the learning rate in the eval logs. Hope that helps, let me know if any questions. Cheers Heiko
1
huggingface
Beginners
IndexError: index out of bounds
https://discuss.huggingface.co/t/indexerror-index-out-of-bounds/2859
Hi, I am trying to further pretrain “allenai/scibert_scivocab_uncased” on my own dataset using MLM. I am using following command - python3 ./transformers/examples/language-modeling/run_mlm.py --model_name_or_path "allenai/scibert_scivocab_uncased" --train_file train.txt --validation_file validation.txt --do_train --do_eval --output_dir test1 --overwrite_cache --cache_dir ./tt However I am getting error: 0% 0/240 [00:00<?, ?ba/s]Traceback (most recent call last): File "./transformers/examples/language-modeling/run_mlm.py", line 409, in <module> main() File "./transformers/examples/language-modeling/run_mlm.py", line 355, in main load_from_cache_file=not data_args.overwrite_cache, File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1259, in map update_data=update_data, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 1528, in _map_single writer.write_batch(batch) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_writer.py", line 278, in write_batch pa_table = pa.Table.from_pydict(typed_sequence_examples) File "pyarrow/table.pxi", line 1474, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 322, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_writer.py", line 100, in __arrow_array__ if trying_type and out[0].as_py() != self.data[0]: File "pyarrow/array.pxi", line 1058, in pyarrow.lib.Array.__getitem__ File "pyarrow/array.pxi", line 540, in pyarrow.lib._normalize_index IndexError: index out of bounds Can someone help me in understanding this problem and how to resolve it? When I try the same command with bert-base-uncased, it runs fine. Also, what is the best practice to further pretrain a model on custom dataset?
Any progress on this? I’m currently facing the same issue.
0
huggingface
Beginners
What is ‘model is currently loading;’
https://discuss.huggingface.co/t/what-is-model-is-currently-loading/13917
Hi. I’m a beginner at NLP. I’m trying to summarize sentences using the T5 model through the Inference API. The message “Model is currently loading” keeps popping up and the request does not proceed. Can you tell me what this error is? Also, I want to summarize more than 5,000 characters into 1,000 to 2,000 characters. How should I write the parameters?
wait_for_model is documented in the link shared above. If false, you will get a 503 when it’s loading. If true, your process will hang waiting for the response, which might take a bit while the model is loading. You can pin models for instant loading (see Hugging Face – Pricing 1)
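A sketch of a summarization request that sets wait_for_model and bounds the output length; note that min_length/max_length count tokens rather than characters, and the model name and token below are placeholders:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/t5-base"
headers = {"Authorization": "Bearer API_TOKEN"}

payload = {
    "inputs": "the long text you want to summarize ...",
    "parameters": {"min_length": 200, "max_length": 400},  # token counts, not characters
    "options": {"wait_for_model": True},  # block until loaded instead of returning 503
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```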
1
huggingface
Beginners
How to specify labels-column in BERT
https://discuss.huggingface.co/t/how-to-specify-labels-column-in-bert/13649
Hi, I’m trying to follow the Hugging Face datasets tutorial to finetune a BERT model on a custom dataset for sentiment analysis. The quicktour states: rename our label column to labels, which is the expected input name for labels in BertForSequenceClassification. The docs for to_tf_dataset state: label_cols – Dataset column(s) to load as labels. Note that many models compute loss internally rather than letting Keras do it, in which case it is not necessary to actually pass the labels here, as long as they’re in the input columns. I am uncertain whether label_cols can be used to specify labels from differently named columns, or whether it is only possible to pass labels with a column named label inside the columns parameter.
Hi @fogx, this is a good question! Here’s what’s happening in to_tf_dataset: columns specifies the list of columns to be passed as the input to the model, and label_cols specifies the list of columns to be passed to Keras as the label. For most tasks (including sentiment analysis), you will usually only want one column to be passed here, in which case it doesn’t really matter what it’s called, because to_tf_dataset will only make the labels a dict when there are multiple label columns. Sentiment analysis is an example of a ‘text classification’ task, so if you want a tutorial on that specifically, please take a look at this notebook or the colab link.
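A sketch of what that call can look like for sentiment analysis, assuming a tokenized dataset with a column named label and a tokenizer/model already set up:

```python
from transformers import DataCollatorWithPadding

data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")

tf_train = tokenized_dataset["train"].to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],  # whatever your label column is actually called
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)
model.fit(tf_train, epochs=3)
```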
1
huggingface
Beginners
Dataloader_num_workers in a torch.distributed setup using HF Trainer
https://discuss.huggingface.co/t/dataloader-num-workers-in-a-torch-distributed-setup-using-hf-trainer/13905
Hi everyone - super quick question. I looked around and I couldn’t find this previously asked, but my apologies if I missed something! Wondering if I have HF trainer set up using torch.distributed.launch on 8 gpus… if my dataloader_num_workers = 10… is that 10 total processes for dataloaders or 10*8=80 processes? Thank you again!
10 * 8 = 80
1
huggingface
Beginners
Memory use of GPT-J-6B
https://discuss.huggingface.co/t/memory-use-of-gpt-j-6b/10078
Hello everyone! I am trying to install GPT-J-6B on a powerful (more or less “powerful”) computer and I have encountered some problems. I have followed the documentation examples (GPT-J — transformers 4.11.0.dev0 documentation 23) and also this guide (Use GPT-J 6 Billion Parameters Model with Huggingface 30). The following are the specifications of the available resources: transformers version: 4.11.0.dev0 Platform: Linux-5.4.0-84-generic-x86_64-with-Ubuntu-18.04-bionic Platform resources: 32GB RAM and 30GB Swap Python version: 3.6.9 PyTorch version (GPU?): 1.9.0+cu111 (True) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Yes, a GeForce RTX 2080 SUPER (7981MiB) Using distributed or parallel set-up in script?: No +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 GeForce RTX 208... Off | 00000000:02:00.0 On | N/A | | 0% 43C P8 11W / 250W | 342MiB / 7981MiB | 20% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 1186 G /usr/lib/xorg/Xorg 18MiB | | 0 N/A N/A 1324 G /usr/bin/gnome-shell 70MiB | | 0 N/A N/A 2673 G /usr/lib/xorg/Xorg 175MiB | | 0 N/A N/A 2808 G /usr/bin/gnome-shell 34MiB | | 0 N/A N/A 7608 G /usr/lib/firefox/firefox 10MiB | | 0 N/A N/A 7782 G ...AAAAAAAAA= --shared-files 26MiB | +-----------------------------------------------------------------------------+ I’ll start explaining what works for me: I’ve loaded the model into the machine’s RAM (no GPU, just CPU). It consumes the 32 GB of RAM and 17 GB of Swap. It takes 500 seconds (8 min) to load the model and then the RAM consumption drops to 24 GB of RAM and 14 of Swap. Sending an input and generating an output takes 2 minutes on average to send a response to the user. First question: Is the memory consumption that is observed normal for this model? Do you see reasonable times for this level of RAM and Swap memory? Code: import time from transformers import AutoModelForCausalLM, AutoTokenizer start_time = time.time() model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") end_time = time.time() - start_time print("Total Taken => ",end_time) prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \ "previously unexplored valley, in the Andes Mountains. Even more surprising to the " \ "researchers was the fact that the unicorns spoke perfect English." 
input_ids = tokenizer(prompt, return_tensors="pt").input_ids start_time = time.time() gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100,) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(gen_text) end_time = time.time() - start_time print("Total Taken => ",end_time) Seeing that the model was too much for the machine, I decided to lower the precision with the torch_dtype to float16 and load it on the GPU. But, after a few minutes and after consuming 32GB of RAM and 12 of Swap, with the following code the following exception arises: Code: import time from transformers import GPTJForCausalLM, AutoTokenizer import torch start_time = time.time() model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") end_time = time.time() - start_time print("Total Taken => ",end_time) prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \ "previously unexplored valley, in the Andes Mountains. Even more surprising to the " \ "researchers was the fact that the unicorns spoke perfect English." input_ids = tokenizer(prompt, return_tensors="pt").input_ids start_time = time.time() gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100,) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(gen_text) end_time = time.time() - start_time print("Total Taken => ",end_time) Output: Traceback (most recent call last): File "gpt.py", line 202, in <module> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to("cuda") File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 852, in to return self._apply(convert) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 530, in _apply module._apply(fn) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 530, in _apply module._apply(fn) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 530, in _apply module._apply(fn) [Previous line repeated 2 more times] File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 552, in _apply param_applied = fn(param) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 850, in convert return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 7.79 GiB total capacity; 6.07 GiB already allocated; 38.56 MiB free; 6.07 GiB reserved in total by PyTorch) Second question: Is this exception due to running out of memory on the GPU? How much VRAM does the GPT-J-6B consume to fit in the GPU? Seeing that this was not working either I decided not to use the GPU and use only the CPU with float16 precision. 
But then another exception arises: Code: import time from transformers import GPTJForCausalLM, AutoTokenizer import torch start_time = time.time() model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16) tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") end_time = time.time() - start_time print("Total Taken => ",end_time) prompt = "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " \ "previously unexplored valley, in the Andes Mountains. Even more surprising to the " \ "researchers was the fact that the unicorns spoke perfect English." input_ids = tokenizer(prompt, return_tensors="pt").input_ids start_time = time.time() gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100,) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(gen_text) end_time = time.time() - start_time print("Total Taken => ",end_time) Output: Total Taken => 177.10330414772034 Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. Traceback (most recent call last): File "gpt.py", line 128, in <module> gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=100,) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/transformers/generation_utils.py", line 1026, in generate **model_kwargs, File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/transformers/generation_utils.py", line 1533, in sample output_hidden_states=output_hidden_states, File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py", line 780, in forward return_dict=return_dict, File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py", line 631, in forward output_attentions=output_attentions, File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py", line 274, in forward hidden_states = self.ln_1(hidden_states) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/modules/normalization.py", line 174, in forward input, self.normalized_shape, self.weight, self.bias, self.eps) File "/home/robotica/Escritorio/gpt/env/lib/python3.6/site-packages/torch/nn/functional.py", line 2346, in layer_norm return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' I think this last message is a bug, but I would like to know your opinion, lest I have programmed something wrong… I think that there is an incompatibility between a programmed layer and the float 16 precision and the model can not be user with this configuration. 
Third question, what are the hardware specifications to run this model? In the documentation they explain that with a GPU it would only consume 24 GB of RAM, but they do not talk about the capacity of the graphics card needed to reach this consumption. In the same way, loading each of these models has used around 50 GB of RAM. Is there no way to reduce the use of RAM when loading the model? Will there always be those peaks of consumption of 50 GB of RAM when I try to load the model, and then it will drop to 24 GB? Thanks for your time reading me ^.^
You need at least 12GB of GPU RAM to put the model on the GPU, and your GPU has less memory than that, so you won’t be able to use it on the GPU of this machine. You can’t use it in half precision on CPU because not all layers of the model are implemented for half precision (like the layernorm layer), so you need to use the model in full precision on the CPU to make predictions (that will take a looooooooong time). As for the RAM footprint, we are working on a way to load the model with from_pretrained so that it only consumes the model size in RAM (currently it consumes twice the model size). It should be merged soon.
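For reference, newer transformers versions expose both pieces mentioned above: the float16 branch of the checkpoint and a low_cpu_mem_usage flag that avoids the twice-the-model-size RAM peak. A sketch (this still requires a GPU with roughly 12GB+ of memory):

```python
import torch
from transformers import GPTJForCausalLM

model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",       # fp16 weights branch of the repo
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,   # avoid allocating the weights twice in RAM
).to("cuda")
```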
0
huggingface
Beginners
How is the dataset loaded?
https://discuss.huggingface.co/t/how-is-the-dataset-loaded/13790
Hi everyone, I’m trying to pre-train BERT on a cluster server, using the classic run_mlm script. I have a dataset of 27M sentences divided into 27 files. When I was testing my script with 2-5 files, it all worked perfectly, but when I try to use the whole dataset, it seems that the execution remains stuck before training, but after dataset caching! The execution doesn’t stop until the time limit is reached and I get this error: slurmstepd: error: Detected 2 oom-kill event(s) in StepId=5328394.batch. Some of your processes may have been killed by the cgroup out-of-memory handler. I thought that the dataset was loaded lazily when using the transformers trainer, am I wrong? Have you got any suggestions? Thanks in advance!
Hi! What do you get when you run print(dset.cache_files) on your dataset object?
0
huggingface
Beginners
Handling long text in BERT for Question Answering
https://discuss.huggingface.co/t/handling-long-text-in-bert-for-question-answering/382
I’ve read a post which explains how the sliding window works, but I cannot find any information on how it is actually implemented. From what I understand, if the input is too long, a sliding window can be used to process the text. Please correct me if I am wrong. Say I have a text "In June 2017 Kaggle announced that it passed 1 million registered users". Given some stride and max_len, the input can be split into chunks with overlapping words (not considering padding). In June 2017 Kaggle announced that # chunk 1 announced that it passed 1 million # chunk 2 1 million registered users # chunk 3 If my questions were "when did Kaggle make the announcement" and "how many registered users" I can use chunk 1 and chunk 3 and not use chunk 2 at all in the model. Not quite sure if I should still use chunk 2 to train the model. So the input will be: [CLS]when did Kaggle make the announcement[SEP]In June 2017 Kaggle announced that[SEP] and [CLS]how many registered users[SEP]1 million registered users[SEP] Then if I have a question with no answers, do I feed it into the model with all chunks and indicate the starting and ending index as -1? For example "can pigs fly?" [CLS]can pigs fly[SEP]In June 2017 Kaggle announced that[SEP] [CLS]can pigs fly[SEP]announced that it passed 1 million[SEP] [CLS]can pigs fly[SEP]1 million registered users[SEP]
Hi @benj, Sylvain has a nice tutorial (link 331) on question answering that provides a lot of detail on how the sliding window approach is implemented. The short answer is that all chunks are used to train the model, which is why there is a fairly complex amount of post-processing required to combine everything back together.
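The chunking itself is done by the tokenizer. A sketch with deliberately tiny max_length/stride values so that the toy example from the question produces several overlapping chunks:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "When did Kaggle make the announcement?"
context = "In June 2017 Kaggle announced that it passed 1 million registered users"

encoded = tokenizer(
    question,
    context,
    truncation="only_second",       # only the context is ever truncated
    max_length=16,                  # tiny values just to force several chunks here
    stride=4,                       # overlap between consecutive chunks
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
)
for ids in encoded["input_ids"]:    # one entry per chunk, question repeated in each
    print(tokenizer.decode(ids))
```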
0
huggingface
Beginners
Continue fine-tuning with Trainer() after completing the initial training process
https://discuss.huggingface.co/t/continue-fine-tuning-with-trainer-after-completing-the-initial-training-process/9842
Hey all, Let’s say I’ve fine-tuned a model after loading it using from_pretrained() for 40 epochs. After looking at my resulting plots, I can see that there’s still some room for improvement, and perhaps I could train it for a few more epochs. I realize that in order to continue training, I have to use the code trainer.train(path_to_checkpoint). However, I don’t know how to specify the new number of epochs that I want it to continue training for. Because it’s already finished the 40 epochs I initially instructed it to train for. Do I have to define a new trainer? But if I define a new trainer, can I also change the learning rate? In addition to these questions, there is also the learning rate scheduler. The default of the trainer is the OneCycleLR, if I’m not mistaken. This means that by the end of my 40 previous epochs, the learning rate was 0. By restarting the training process, will the whole scheduler restart as well? Thanks for any help in advance.
Yes, you will need to restart a new training with new training arguments, since you are not resuming from a checkpoint. The Trainer uses a linear decay by default, not the 1cycle policy, so your learning rate did end up at 0 at the end of the first training, and it will restart at the value you set in your new training arguments.
0
huggingface
Beginners
Can one get an embeddings from an inference API that computes Sentence Similarity?
https://discuss.huggingface.co/t/can-one-get-an-embeddings-from-an-inference-api-that-computes-sentence-similarity/9433
Hi there, I’m new to using Huggingface’s inference API and wanted to check if a model whose task is to return Sentence Similarity can return sentence embeddings instead. For example, in this sentence-transformers model, the model task is to return sentence similarity. Instead, I would like to just get the embeddings of a list of sentences. Is there an API parameter I can tweak to get this? Help would be very much appreciated!
Hi there! Yes, you can compute sentence embeddings. Here is an example import requests API_URL = "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-mpnet-base-v2" headers = {"Authorization": "Bearer API_TOKEN"} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": ["this is a sentence", "this is another sentence"] }) # Output is a list of 2 embeddings, each of 768 values.
0
huggingface
Beginners
Obtaining word-embeddings from Roberta
https://discuss.huggingface.co/t/obtaining-word-embeddings-from-roberta/7735
Hello Everyone, I am fine-tuning a pretrained masked LM (distil-roberta) on a custom dataset. Post-training, I would like to use the word embeddings in a downstream task. How does one go about obtaining embeddings for whole words when the model uses sub-word tokenisation? For example, tokeniser.tokenize(‘floral’) will give me [‘fl’, ‘oral’]. So, if ‘floral’ is not even a part of the vocabulary, how do I obtain its embedding? When I do this: tokens = tokenizer.encode("floral") word = tokenizer.encode("floral",return_tensors='pt') output = model(word) I see that output is a tensor with shape torch.Size([1, 4, 50265]) and rightly so, because a LM will output the probability distribution across all the words in the vocabulary. I am expecting something like [1,768]. Can someone please help?
hey @okkular i believe the standard approach to dealing with this is to simply average the token embeddings of the subwords to generate an embedding for the whole word. having said that, aggregating the embeddings this way might have a negative effect on your downstream performance, so trying both approaches would be a good test
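A sketch of that averaging, using the base model (no LM head) so the output is hidden states of size 768 rather than vocabulary logits:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModel.from_pretrained("distilroberta-base")

inputs = tokenizer("floral", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden = outputs.last_hidden_state            # shape (1, num_tokens, 768)
word_embedding = hidden[0, 1:-1].mean(dim=0)  # drop <s> and </s>, average the subwords
print(word_embedding.shape)                   # torch.Size([768])
```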
0
huggingface
Beginners
Comparing output of BERT model - why do two runs differ even with fixed seed?
https://discuss.huggingface.co/t/comparing-output-of-bert-model-why-do-two-runs-differ-even-with-fixed-seed/11412
I have example code as below. If I instantiate two models as below and compare the outputs, I see different outputs. I am wondering why this would be the case. – code snippet – # fix seed torch.manual_seed(10) tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") config = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=2, num_attention_heads=2, intermediate_size=3072, torchscript=True) # Instantiating the model model = BertModel(config) model.eval() model2 = BertModel(config) model2.eval() # inputs to model sequence = ["A Titan RTX has 24GB of VRAM"] inputs = tokenizer.prepare_seq2seq_batch(sequence, return_tensors='pt') input_ids = inputs["input_ids"] o1 = model(input_ids) o2 = model2(input_ids) # why do o1 and o2 differ? – code snippet –
Resolved: loading the same pretrained weights (via from_pretrained) makes the two models match. BertModel(config) initializes weights randomly, and each instantiation draws fresh random numbers, so two models built this way differ even when the seed is only set once at the top.
0
huggingface
Beginners
Which dataset can be used to evaluate a model for sentiment analysis?
https://discuss.huggingface.co/t/which-dataset-can-be-used-to-evaluate-a-model-for-sentiment-analysis/13633
Which dataset can we use as a reference to evaluate a model for sentiment analysis? The model has been fine-tuned on a human-labeled dataset, so as a reference, is there any metric that can be used to evaluate it?
SST-2 is a bit general. If you have a domain-specific use case I’d doubt if there would be a benchmark for that, but your best option is SST-2 imho.
1
huggingface
Beginners
Email confirmation?
https://discuss.huggingface.co/t/email-confirmation/13755
It’s been an hour, I can’t access my token because apparently my email isn’t confirmed yet. I checked my inbox and spam folder and i didn’t see the confirmation email from hugging face??
It’s not working brother, same here , you will get confirmation link but it will say …ehhhh token invalid, damn
0
huggingface
Beginners
Confirmation link
https://discuss.huggingface.co/t/confirmation-link/13741
I haven’t received any confirmation link
hello! i’m having the same issue… did you get it now?
0
huggingface
Beginners
Instances in tensorflow serving DialoGPT-large model
https://discuss.huggingface.co/t/instances-in-tensorflow-serving-dialogpt-large-model/13586
Hi. I have managed to successfully use a pretrained DialoGPT model in tensorflow serving (thanks to Merve). The Rest API is up and running as it should. The issue occurs when I try to send data to it. When I try to pass in the example input (available at Huggingface’s API documentation under the conversational model section), I get an error “error”:“Missing ‘inputs’ or ‘instances’ key”. This is the part where I get confused. In the tutorial I watched on youtube it’s stated that “instances” are basically inputs (video section is available on: tf serving tutorial | tensorflow serving tutorial | Deep Learning Tutorial 48 (Tensorflow, Python) - YouTube), but something else is written in the tensorflow serving documentation (available on: RESTful API | TFX | TensorFlow). So my question is, how can I get the values that need to be passed into the “instances” key, what are they? Are they in any file in the model’s huggingface repository? Thank you for answering in advance. (And sorry because you have to copy paste URLs, but the platform wouldn’t let me use more than 2). This is the code I’ve written in node.js to test the Rest API: const axios = require('axios'); const ai_url = "http://localhost:8601/v1/models/dialogpt:predict"; const payload = { "instances": [], "inputs": { "past_user_inputs": ["Which movie is the best ?"], "generated_responses": ["It's Die Hard for sure."], "text": "Can you explain why ?" } } console.log(JSON.stringify(payload)); axios.post(ai_url, { data: payload }) .then(function (response) { console.log("data: " + JSON.stringify(response.data)); console.log(ai_response); }) .catch(function (error) { console.log(error); console.log(error.response.data.detail); console.log(error.response); });
Hello Here you can find a neat tutorial on how to use Hugging Face models with TF Serving 2. As you guessed, instances are your examples you want your model to infer. batch = tokenizer(sentence) batch = dict(batch) batch = [batch] input_data = {"instances": batch} Your payload input works just fine in Inference API btw. My guess is you could put your inputs to instances part and it would work just fine. (maybe try as a list if it doesn’t) Something like: batch = [{"inputs": { "past_user_inputs": ["Which movie is the best ?"], "generated_responses": ["It's Die Hard for sure."], "text": "Can you explain why ?" }}] input_data = {"instances": batch} Let me know if it doesn’t work.
0
huggingface
Beginners
How can i get the word representation using BERT?
https://discuss.huggingface.co/t/how-can-i-get-the-word-representation-using-bert/13739
I need to get word-level embeddings using BERT, not sub-word embeddings. I found a lot of functions; one of them gets the embeddings from the last layers of the model, but the results are for sub-words. How can I start, please? I hope someone can help. Appreciating your time.
This previous discussion can be useful Obtaining word-embeddings from Roberta 7
0
huggingface
Beginners
RuntimeError: CUDA out of memory even with simple inference
https://discuss.huggingface.co/t/runtimeerror-cuda-out-of-memory-even-with-simple-inference/11984
Hi everyone, I am trying to use the pre-trained DistiBert model to perform sentiment analysis on some stock data data. When trying to feed the input sentences to the model though I get the following error: ““RuntimeError: CUDA out of memory. Tried to allocate 968.00 MiB (GPU 0; 11.17 GiB total capacity; 8.86 GiB already allocated; 869.81 MiB free; 9.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF”” I have read on other answers that a possible solution would be to lower batch size during training but this happens to me even when running torch.no_grad(). Also the max length of a sentence is 61 so I don’t think the issue lies there. Any idea where the problem could lie? I am new to PyTorch and deep learning in general so forgive me for the lame question. Here is the code : model_class = tsf.DistilBertModel model = model_class.from_pretrained('distilbert-base-uncased') tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased') device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') model.to(device) df = pd.read_csv("stock_data.csv",delimiter =";") df = df.dropna() df['Text'] = df['Text'].astype(str) text_list = df['Text'].values.tolist() tokenized = df['Text'].apply((lambda x: tokenizer.encode(x, add_special_tokens=True))) max_len = 0 for i in tokenized.values: if len(i) > max_len: max_len = len(i) padded = np.array([i + [0]*(max_len-len(i)) for i in tokenized.values]) attention_mask = np.where(padded != 0, 1, 0) input_ids = torch.tensor(padded,device='cuda') attention_mask = torch.tensor(attention_mask,device='cuda') with torch.no_grad(): last_hidden_states = model(input_ids, attention_mask=attention_mask) Thank you in advance
Hi @totoro02, I faced the same error fine tuning another model, but in my case I needed to lower the batch size from 64 to 16. I did not applied the torch.no_grad(). In your case, what was lowest batch size you tried?
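One way to apply that here is to keep the padding code as is but feed the model in small slices instead of the whole dataset at once. A sketch reusing input_ids, attention_mask and model from the post above (batch size 32 is an arbitrary starting point):

```python
import torch

batch_size = 32
all_hidden_states = []
with torch.no_grad():
    for start in range(0, input_ids.size(0), batch_size):
        batch_ids = input_ids[start:start + batch_size]
        batch_mask = attention_mask[start:start + batch_size]
        out = model(batch_ids, attention_mask=batch_mask)
        # Move results off the GPU immediately so they don't pile up in VRAM.
        all_hidden_states.append(out.last_hidden_state.cpu())

hidden_states = torch.cat(all_hidden_states, dim=0)
```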
0
huggingface
Beginners
Error 403! What to do about it?
https://discuss.huggingface.co/t/error-403-what-to-do-about-it/12983
I cannot save the model; it gives HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/repos/create - You don’t have the rights to create a model under this namespace. How do I fix this?
This is an English-focused forum. Please translate your message - or use a translation engine.
0
huggingface
Beginners
Validation loss always 0.0 for BERT Sequence Tagger
https://discuss.huggingface.co/t/validation-loss-always-0-0-for-bert-sequence-tagger/13654
I want to implement a BERT sequence tagger following this tutorial. My dataset is rather small, so the size of the validation dataset are around 10 texts. The training loss decreases quite good, but the validation loss is always at 0.0. I don’t know what is going on there. When I look at the predicted and the true output, it is obvious that there is still some error in the prediction, so it should have some kind of validation loss. Is there something I’ve been missing? At the line print("val loss: ", outputs[0].item()) I already get 0.0 as loss value. Here is the train and validation part of my code: config = AutoConfig.from_pretrained("dbmdz/bert-base-german-cased", output_hidden_states=True, num_labels=len(unique_tags)) model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-base-german-cased", config=config) model.config.pad_token_id = self.tokenizer.pad_token_id device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') model.to(device) if self.FULL_FINETUNING: param_optimizer = list(model.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': self.weight_decay}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] else: param_optimizer = list(model.classifier.named_parameters()) optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}] optimizer = AdamW( optimizer_grouped_parameters, lr=self.learning_rate, eps=self.adam_eps ) total_steps = len(train_dataloader) * self.epochs scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=0, num_training_steps=total_steps ) loss_values, validation_loss_values, validation_accuracy, val_f1_scores, test_accuracies, test_f1_scores = [], [], [], [], [], [] for epoch in trange(self.epochs, desc="Epoch"): # ======================================== # Training # ======================================== model.train() total_loss = 0 for step, batch in enumerate(train_dataloader): batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch model.zero_grad() outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) loss = outputs[0] loss.backward() total_loss += loss.item() torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=self.max_grad_norm) optimizer.step() scheduler.step() avg_train_loss = total_loss / len(train_dataloader) print("Average train loss: {}".format(avg_train_loss)) loss_values.append(avg_train_loss) # ======================================== # Validation # ======================================== model.eval() eval_loss, eval_accuracy = 0, 0 predictions, true_labels = [], [] for batch in valid_dataloader: batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch with torch.no_grad(): outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) print(outputs) print("val loss: ", outputs[0].item()) logits = outputs[1].detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() eval_loss += outputs[0].mean().item() predictions.extend([list(p) for p in np.argmax(logits, axis=2)]) true_labels.extend(label_ids) avg_eval_loss = eval_loss / len(valid_dataloader) validation_loss_values.append(avg_eval_loss) print("Validation loss: {}".format(avg_eval_loss)) pred_tags = [unique_tags[p_i] for p, l in zip(predictions, true_labels) for p_i, l_i 
in zip(p, l) if unique_tags[l_i] != "PAD"] valid_tags = [unique_tags[l_i] for l in true_labels for l_i in l if unique_tags[l_i] != "PAD"] val_acc = accuracy_score(valid_tags, pred_tags) validation_accuracy.append(val_acc) f1_val_score = f1_score([valid_tags], [pred_tags]) val_f1_scores.append(f1_val_score) print("Validation Accuracy: {}".format(val_acc)) print("Validation F1-Score: {}".format(f1_val_score))
Update: I tried out different data and also fine tuning only the last linear layer of the BERT model vs. fine tuning the pretrained + linear layer. I still don’t get why there is no validation loss.
0
huggingface
Beginners
Exporting wav2vec model to ONNX
https://discuss.huggingface.co/t/exporting-wav2vec-model-to-onnx/6695
Hello, I am trying to export a wav2vec model (cahya/wav2vec2-base-turkish-artificial-cv) to ONNX format with convert_graph_to_onnx.py 11 script provided in transformers repository. When I try to use these script with this line: python convert_graph_to_onnx.py --framework pt --model cahya/wav2vec2-base-turkish-artificial-cv exported_model.onnx I am getting this error: ====== Converting model to ONNX ====== ONNX opset version set to: 11 Loading pipeline (model: cahya/wav2vec2-base-turkish-artificial-cv, tokenizer: cahya/wav2vec2-base-turkish-artificial-cv) Some weights of the model checkpoint at cahya/wav2vec2-base-turkish-artificial-cv were not used when initializing Wav2Vec2Model: ['lm_head.bias', 'lm_head.weight'] - This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. Error while converting the model: __init__() got an unexpected keyword argument 'feature_extractor' All I understand is that this script is not made for specifically wav2vec models. If I am right, how can I convert the wav2vec model to ONNX format?
Hi, Were you able to get this working? I am running into the same issue.
0
huggingface
Beginners
Pinned model still needs to load
https://discuss.huggingface.co/t/pinned-model-still-needs-to-load/12006
Hello, I have a model pinned. After a short amount of idle time the inference API still needs to load the model, i.e. it returns the message ‘Model <username>/<model_name> is currently loading’. This is not supposed to happen, right? As I understand it, this is the whole purpose of pinning models. I have confirmed it is indeed pinned through the code: request_headers = { 'Authorization': 'Bearer {}'.format(<huggingface_token>) } pin_url = "https://api-inference.huggingface.co/usage/pinned_models" response = requests.get(pin_url, headers=request_headers) The model is called through the following code: api_endpoint = 'https://api-inference.huggingface.co/models/<username>/<model_name>' data = json.dumps(payload) response = requests.request('POST', api_endpoint, headers=request_headers, data=data) I feel like I have followed everything in the documentation and don’t understand why it isn’t working. Thank you in advance for any answers!
We’re encountering the same issue.
0
huggingface
Beginners
FlaxVisionEncoderDecoderModel decoder_start_token_id
https://discuss.huggingface.co/t/flaxvisionencoderdecodermodel-decoder-start-token-id/13635
Im following the documentation here to instantiate a FlaxVisionEncoderDecoderModel but am unable to do so. I’m on Transformers 4.15.0 huggingface.co Vision Encoder Decoder Models We’re on a journey to advance and democratize artificial intelligence through open source and open science. from transformers import FlaxVisionEncoderDecoderModel, ViTFeatureExtractor, GPT2Tokenizer from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k") # load output tokenizer tokenizer_output = GPT2Tokenizer.from_pretrained("gpt2") # initialize a vit-gpt2 from pretrained ViT and GPT2 models. Note that the cross-attention layers will be randomly initialized model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained("vit", "gpt2") --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-3-d9e3c9f46932> in <module> 5 6 # initialize a vit-gpt2 from pretrained ViT and GPT2 models. Note that the cross-attention layers will be randomly initialized ----> 7 model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained("vit", "gpt2") 8 9 pixel_values = feature_extractor(images=image, return_tensors="np").pixel_values AttributeError: type object 'FlaxVisionEncoderDecoderModel' has no attribute 'from_encoder_decoder_pretrained'
Hi. Do you have Flax/Jax installed on your computer? It's required in order to use FlaxVisionEncoderDecoderModel. (There should be a better error message for this situation, and it will be fixed.)
0
huggingface
Beginners
Controlling bos, eos, etc in api-inference
https://discuss.huggingface.co/t/controlling-bos-eos-etc-in-api-inference/5701
Is there a way to control the beginning of sentence, end of sentence tokens through the inference api? I could not find it in the documentation.
Hi @cristivlad, currently there is no way to override those within the API. We are adding an end_sequence parameter to enable stopping the generation when using prompt-like generation (for GPT-Neo, for instance). For BOS, what did you want to do with it? Cheers, Nicolas
0
huggingface
Beginners
Custom Loss: compute_loss() got an unexpected keyword argument ‘return_outputs’
https://discuss.huggingface.co/t/custom-loss-compute-loss-got-an-unexpected-keyword-argument-return-outputs/4148
Hello, I have created my own trainer with a custom loss function; from torch.nn import CrossEntropyLoss device = torch.device("cuda") class_weights = torch.from_numpy(class_weights).float().to(device) class MyTrainer(Trainer): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def compute_loss(self, model, inputs): labels = inputs.pop("labels") outputs = model(**inputs) logits = outputs[0] loss = CrossEntropyLoss(weight=class_weights) return loss(logits, labels) Yet, after Training for an hour, It went to do an evaluation and I got this error; TypeError: compute_loss() got an unexpected keyword argument 'return_outputs' I don’t have compute_loss() variable within my code so I don’t think it was me inputting something to it. I was thinking perhaps it has to do with my custom loss? by the way, this is my trainer: trainer = MyTrainer( args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset, # evaluation dataset model_init=model_init, compute_metrics=compute_metrics, )
Hi @theudster, I ran into just this problem today. The solution is to change the signature of your compute_loss to reflect what is implemented in the source code 18: def compute_loss(self, model, inputs, return_outputs=False): ... return (loss, outputs) if return_outputs else loss It seems that the example in the Trainer docs is not up-to-date, so I suggest inspecting the source code of compute_loss as a reference for now.
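Putting it together, a minimal sketch of the corrected trainer, assuming class_weights is defined as in the question's snippet:

from torch.nn import CrossEntropyLoss
from transformers import Trainer

class MyTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs[0]
        # class_weights comes from the question's snippet (a tensor already moved to the GPU)
        loss_fct = CrossEntropyLoss(weight=class_weights)
        loss = loss_fct(logits, labels)
        # Also return the model outputs when the Trainer asks for them (e.g. during evaluation/prediction)
        return (loss, outputs) if return_outputs else loss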
0
huggingface
Beginners
Do transformers need Cross-Validation
https://discuss.huggingface.co/t/do-transformers-need-cross-validation/4074
Hello, I am training a model and according to the standard documentation, I split the data into training and validation and pass those on to Trainer(), where I calculate various metrics. In previous ML projects I used to do K-fold validation. I have not found any examples of people doing this with Transformers and was wondering is there a reason for that? Will using K-fold not improve results?
Hi @theudster, I believe the main reasons you don't see cross-validation with transformers (or deep learning more generally) are that a) transfer learning is quite effective and b) most public applications involve large enough datasets that the effect of picking a bad train/test split is diminished. Having said that, nothing stops you from using cross-validation with transformers, and it's probably useful if you have a smallish number of labelled samples for fine-tuning. I'm also sure it's heavily used in Kaggle competitions to get good results.
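If you want to try it, here is a rough sketch of K-fold cross-validation around the Trainer; it assumes a tokenized_dataset with input_ids/attention_mask/labels columns and a compute_metrics function like the ones used for a single split (names and hyperparameters are illustrative):

import numpy as np
from sklearn.model_selection import KFold
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init():
    # re-initialize the model for every fold so folds don't leak into each other
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
fold_metrics = []
for fold, (train_idx, val_idx) in enumerate(kf.split(np.arange(len(tokenized_dataset)))):
    trainer = Trainer(
        model_init=model_init,
        args=TrainingArguments(output_dir=f"fold-{fold}", num_train_epochs=3),
        train_dataset=tokenized_dataset.select(train_idx),
        eval_dataset=tokenized_dataset.select(val_idx),
        compute_metrics=compute_metrics,
    )
    trainer.train()
    fold_metrics.append(trainer.evaluate())
print(fold_metrics)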
0
huggingface
Beginners
Boilerplate for Trainer using torch.distributed
https://discuss.huggingface.co/t/boilerplate-for-trainer-using-torch-distributed/13567
Hi everyone, Thanks in advance for helping! First I just have to say, as my first post here, Huggingface is awesome. We have been using the tools/libraries for a while for NLP work and it is just a pleasure to use and so applicable to real-world problems!!! We are “graduating” if you will from single GPU to multi-GPU models/datasets. Looking through different platforms / libraries to do this. I think overall for our applications, we don’t need to customize our training loops - the Huggingface Trainer is our bread and butter (with the exception of ~5% of applications where we do our own Pytorch training loops). So my question is - for the Huggingface trainer - is there some boilerplate code that works using torch.distributed? I understand that - at a basic level - torch.distributed launches the same code a bunch of times and you need to know which process instance you are running. My understanding is the trainer itself handles all of this for you - but what about when you instantiate the model? When you instantiate your datasets? Etc. What is the bare minimum you need to do to get a Trainer working in a torch.distributed environment? The examples I have found thusfar are pretty heavy - contain a lot of code to parse your arguments, etc. (Don’t get me wrong - they are awesome and well documented - again this is all kind of too good to be true). What we’re trying to do is a large ViT to Text model based on this: github.com NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb { "nbformat": 4, "nbformat_minor": 0, "metadata": { "colab": { "name": "Fine-tune TrOCR on IAM Handwriting Database using Seq2SeqTrainer.ipynb", "provenance": [], "collapsed_sections": [], "mount_file_id": "1AiB-bjFpcWXp3eRsfXjWFC8-RpvHVQJS", "authorship_tag": "ABX9TyMn8k9j37HGBCAplZPQJ1Jp", "include_colab_link": true }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "language_info": { "name": "python" }, "accelerator": "GPU" This file has been truncated. show original Trying to use the Seq2Seq trainer - but using multi-node, multi-GPU for a very large dataset and much higher resolution images. Any pointers to some simple examples would be much appreciated!!
The Trainer code will run on a distributed setup or a single GPU without any change. Regarding your other questions: you need to define your model in all processes; each process will see a different part of the data, and all copies will be kept in sync. The tokenizer and dataset preprocessing can either be done on all processes if it doesn't slow you down, or you can use the with training_args.main_process_first(desc="dataset map pre-processing"): context manager to make sure the preprocessing is done only on process 0.
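As a rough illustration of how little changes, a bare-bones script sketch that runs the same way on one GPU or under torchrun / torch.distributed.launch (the dataset and column names are placeholders):

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    HfArgumentParser,
    Trainer,
    TrainingArguments,
)

# HfArgumentParser picks up the launcher-injected arguments (e.g. --local_rank) for you;
# every process runs this exact script.
parser = HfArgumentParser(TrainingArguments)
training_args, = parser.parse_args_into_dataclasses()

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")  # defined in every process

raw_dataset = load_dataset("imdb")
# Do the (possibly slow) preprocessing once per node instead of in every process
with training_args.main_process_first(desc="dataset map pre-processing"):
    tokenized = raw_dataset.map(
        lambda ex: tokenizer(ex["text"], truncation=True), batched=True
    )

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,
)
trainer.train()

Launched, for example, with torchrun --nproc_per_node=4 train.py --output_dir out --per_device_train_batch_size 8 on a single node, plus the usual rendezvous options for multiple machines.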
1
huggingface
Beginners
Fine-Tune Wav2Vec2 for English ASR with Transformers article bug
https://discuss.huggingface.co/t/fine-tune-wav2vec2-for-english-asr-with-transformers-article-bug/10440
Hello @patrickvonplaten! Thank you so much for the tutorials. Super helpful! I’ve been having troubles with setting up a CTC head on a pre-trained model with an existing CTC head and wanted to point out a possible problem in the tutorial 3 that in part led me to having my main problem. First, I’ll pinpoint the problem in the tutorial, and then I’ll describe the problem I am getting. I am not that concerned with my problem yet as I haven’t dived deep into the source code to actually find what is wrong. I just want to make sure I am not crazy with the tutorial thing. TUTORIAL So, first, we’re setting up a vocabulary of 30 tokens and loading it into a tokenizer. It is indeed 30 tokens. Then, we proceed to load a pre-trained model with CTC head initialized at random. The thing is, the head is initialized with the size of the vocabulary being 32! And we never correct it. Are those supposed to be bos and eos? Shouldn’t we add those to the vocab? Screenshot 2021-10-01 at 17.02.181398×976 91.3 KB MY PROBLEM My goal is to fine-tune a model that already has a fine-tuned CTC head to work for my data (there is no headless model). The head has more output tokens than I need. So I want to initialize it at random. I prepare my dictionary and a tokenizer, it works just as I expect it to. I load the model with the CTC head, swap it with my head. I add the vocab_size=len(tokenizer) and pad_token_id to the model config. I even add the bos_token_id and eos_token_id (and add those to the vocabulary, although I don’t need them) And then several strange things tend to happen: I was going nuts about why I have the same random character appearing between the characters that should indeed be recognized (I was printing out evaluation set predictions during training). And then I understood that this character has id 0 in the vocab I am creating. And it is usually the id for the PAD token in wav2vec vocabs. If I set the vocab size in the model config exactly to my vocab size I get raise ValueError(f"Label values must be <= vocab_size: {self.config.vocab_size}") ValueError: Label values must be <= vocab_size: 32 so I am forced to do model.config.vocab_size=len(tokenizer)+1 for the thing to work. So if you know right away what is going wrong, I’ll appreciate the answer If not, I am just going to dive deeper myself Katja
Well, the label error was my error with adding a special char to the vocabulary… But the addition of a token with zero id is still under my investigation
0
huggingface
Beginners
How to load my own BILOU/IOB labels for training?
https://discuss.huggingface.co/t/how-to-load-my-own-bilou-iob-labels-for-training/12877
Hi everyone, I’m not sure if I just missed it in the documentation, but I’m looking to fine-tune a model with my own annotated data (out of Doccano). I’m comfortable manipulating the Doccano output into a format specific to what HuggingFace needs, but I’m not actually sure how to load my own data with the IOB labels. The “Fine tune with a custom dataset” section in the documentation doesn’t actually use a custom dataset (Section in question 4, it’s using one of the built-in examples. The HuggingFace Datasets documentation also doesn’t explain how to load NER labels, unless I’m missing something (which I probably am). If someone has an example of how to format data labels and how to use load_datasets() to create an NER dataset, I’d really appreciate the help! Thanks everyone!
I was having the same problem and this 2 helped. Basically you still need to create your own data loader. Based on what they described in their documentation, I thought the Datasets library could automatically identify and load common data formats, but I guess I was wrong…
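As an illustration, one way to build an NER dataset by hand with IOB labels; the label set and the example sentence are made up, and in practice you would generate the tokens/ner_tags lists from your Doccano export:

from datasets import ClassLabel, Dataset, Features, Sequence, Value

label_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]  # replace with your own IOB/BILOU label set

features = Features({
    "tokens": Sequence(Value("string")),
    "ner_tags": Sequence(ClassLabel(names=label_names)),
})

# One record per sentence: pre-split words plus one tag index per word
examples = {
    "tokens": [["John", "works", "at", "Acme"]],
    "ner_tags": [[label_names.index(t) for t in ["B-PER", "O", "O", "B-ORG"]]],
}

dataset = Dataset.from_dict(examples, features=features)
print(dataset.features["ner_tags"].feature.names)  # the label names travel with the dataset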
0
huggingface
Beginners
Should we save the tokenizer state over domain adaptation?
https://discuss.huggingface.co/t/should-we-save-the-tokenizer-state-over-domain-adaptation/13324
I am going to do domain adaptation over my dataset with BERT. When I train the BERT model, should I save the tokenizer state? I mean, will tokenizer state change over training the model?
No, the tokenizer is not changed by the model fine-tuning on a new dataset.
1
huggingface
Beginners
Why am I getting KeyError: ‘loss’?
https://discuss.huggingface.co/t/why-am-i-getting-keyerror-loss/6948
Why when I run trainer.train() it gives me Keyerror:‘loss’ previously i use something like start_text and stop_text and I read in previous solution that this the cause of error so I delete it, but it still give same error.Did you have any solution? Thanks from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("distilgpt2") model = AutoModelWithLMHead.from_pretrained("distilgpt2") from datasets import Dataset dataset = Dataset.from_text('/content/drive/MyDrive/Colab_Notebooks/qna.txt') tokenizer.pad_token = tokenizer.eos_token def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = dataset.map(tokenize_function, batched=True) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir='/content/drive/MyDrive/Colab_Notebooks/GPT_checkpoint', # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=1, # batch size per device during training warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='/content/drive/MyDrive/Colab_Notebooks/GPT_checkpoint/logs', # directory for storing logs logging_steps=10 ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=tokenized_datasets ) Here is the dataset sample: Was Volta an Italian physicist? yes Was Volta an Italian physicist? yes Is Volta buried in the city of Pittsburgh? no Is Volta buried in the city of Pittsburgh? no Here is the full error message: KeyError Traceback (most recent call last) <ipython-input-17-3435b262f1ae> in <module>() ----> 1 trainer.train() 3 frames /usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k) 1804 if isinstance(k, str): 1805 inner_dict = {k: v for (k, v) in self.items()} -> 1806 return inner_dict[k] 1807 else: 1808 return self.to_tuple()[k] KeyError: 'loss'
There are no labels in your dataset, so it can't train (and the model does not produce a loss, hence your error). Maybe you wanted to use DataCollatorForLanguageModeling to generate those labels automatically?
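For a causal LM like distilgpt2 that would look something like this, reusing the model, tokenizer and training_args from the question:

from transformers import DataCollatorForLanguageModeling, Trainer

# mlm=False makes the collator copy the input_ids into "labels"
# (the model shifts them internally), which is what a causal LM needs to compute a loss
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets,
    data_collator=data_collator,
)
trainer.train()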
0
huggingface
Beginners
Helsinki-NLP/opus-mt-en-fr missing tf_model.h5 file
https://discuss.huggingface.co/t/helsinki-nlp-opus-mt-en-fr-missing-tf-model-h5-file/13467
Hi there, I have been following the tensorflow track of the HF course and got an http 404 error when running the below: from transformers import TFAutoModelForSeq2SeqLM model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) error message: 404 Client Error: Not Found for url: https://huggingface.co/Helsinki-NLP/opus-mt-en-fr/resolve/main/tf_model.h5 I went to the model card and could not find the tf_model.h5 file. Is there something that I am missing or does the model only work for Torch? thanks
Hello You need to set from_pt = True when loading. from transformers import TFAutoModelForSeq2SeqLM model_checkpoint = "Helsinki-NLP/opus-mt-en-fr" model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint, from_pt = True) Downloading: 100% 1.26k/1.26k [00:00<00:00, 34.4kB/s] Downloading: 100% 287M/287M [00:07<00:00, 37.4MB/s] All PyTorch model weights were used when initializing TFMarianMTModel. All the weights of TFMarianMTModel were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFMarianMTModel for predictions without further training.
1
huggingface
Beginners
SpanBERT, ELECTRA, MARGE from scratch?
https://discuss.huggingface.co/t/spanbert-electra-marge-from-scratch/13374
Hey everyone! I am incredibly grateful for this tutorial on training a language model from scratch: How to train a new language model from scratch using Transformers and Tokenizers 3 I really want to expand this to contiguous masking of longer token sequences (e.g. [mask-5], [mask-8]). I have begun looking into how to write a custom DataCollator for this, but suspect I will also need to make some changes to the model as well. Has anyone looked into this and can point me to any resources? Thank you!
Found something useful on StackOverflow for this: stackoverflow.com Best way of using hugging face's Mask Filling for more than 1 masked token at a time 5 python, neural-network, nlp, huggingface-transformers asked by user3472360 on 11:50AM - 02 Apr 20 UTC
0
huggingface
Beginners
How to get embedding matrix of bert in hugging face
https://discuss.huggingface.co/t/how-to-get-embedding-matrix-of-bert-in-hugging-face/10261
I have tried to build sentence-pooling by bert provided by hugging face from transformers import BertModel, BertTokenizer model_name = 'bert-base-uncased' tokenizer = BertTokenizer.from_pretrained(model_name) # load model = BertModel.from_pretrained(model_name) input_text = "Here is some text to encode" # tokenizer-> token_id input_ids = tokenizer.encode(input_text, add_special_tokens=True) # input_ids: [101, 2182, 2003, 2070, 3793, 2000, 4372, 16044, 102] input_ids = torch.tensor([input_ids]) with torch.no_grad(): last_hidden_states = model(input_ids)[0] # Models outputs are now tuples last_hidden_states = last_hidden_states.mean(1) print(last_hidden_states) # size of last_hidden_states is [1,768] Now I want to know what does this vector refers to in dictionary. So how can I get the matrix in embedding whose size is [sequence_length,embedding_length], and then do the last_hidden_states @ matrix to find the word this vector refers to in dictionary? Please help me.
Betacat: actually I want to get the word that my last_hidden_state refer to Actually, that’s not possible, unless you compute cosine similarity between the mean of the last hidden state and the embedding vectors of each token in BERT’s vocabulary. You can do that easily using sklearn. The embedding matrix of BERT can be obtained as follows: from transformers import BertModel model = BertModel.from_pretrained("bert-base-uncased") embedding_matrix = model.embeddings.word_embeddings.weight However, I’m not sure it is useful to compare the vector of an entire sentence with each of the rows of the embedding matrix, as the sentence vector is a “summary” of the entire sentence.
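If you still want to try that nearest-token lookup, here is a small sketch with sklearn, keeping in mind the caveat that comparing a pooled sentence vector to token embeddings is of limited use (last_hidden_states is the [1, 768] mean-pooled vector from the question):

from sklearn.metrics.pairwise import cosine_similarity

embedding_matrix = model.embeddings.word_embeddings.weight      # [vocab_size, 768]
similarities = cosine_similarity(
    last_hidden_states.detach().numpy(),                        # the pooled [1, 768] vector
    embedding_matrix.detach().numpy(),
)[0]                                                            # shape: [vocab_size]
closest_ids = similarities.argsort()[::-1][:5].tolist()
print(tokenizer.convert_ids_to_tokens(closest_ids))             # 5 most similar vocabulary tokens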
1
huggingface
Beginners
How to load a pipeline saved with pipeline.save_pretrained?
https://discuss.huggingface.co/t/how-to-load-a-pipeline-saved-with-pipeline-save-pretrained/5373
Hi, I have a system saving an HF pipeline with the following code: from transformers import pipeline text_generator = pipeline('...') text_generator.save_pretrained('modeldir') How can I re-instantiate that model from a different system What code snippet can do that? I’m looking for something like p = pipeline.from_pretrained('...') but couldn’t find such a thing in the doc
I think pipeline(task, 'modeldir') should work to reload it.
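For example, using text-generation as a stand-in for whatever task the pipeline was originally created with:

from transformers import pipeline

# First machine: build and save
text_generator = pipeline("text-generation")
text_generator.save_pretrained("modeldir")

# Other machine: reload from the saved directory, which holds both the model and tokenizer files
text_generator = pipeline("text-generation", model="modeldir")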
0
huggingface
Beginners
How do I message someone?
https://discuss.huggingface.co/t/how-do-i-message-someone/13459
I’ve found a space and I would like to message the developer. Is there any way I can contact the person in the platform or get the contact email?
You can’t message them directly via their Hugging Face profile at the moment, but you can send them a message via the forum
0
huggingface
Beginners
Do we use pre-trained weights in Trainer?
https://discuss.huggingface.co/t/do-we-use-pre-trained-weights-in-trainer/13472
When we use Trainer to build a language model with MLM, based on which model we use (suppose DistilBERT), do we use the pre-trained weights in Trainer, or are the weights supposed to be trained from scratch?
You can do either – it depends on how you create your model. Trainer just handles the training aspect, not the model initialization. # Model randomly initialized (starting from scratch) config = AutoConfig.for_model("distilbert") # Update config if you'd like # config.update({"param": value}) model = AutoModelForMaskedLM.from_config(config) # Model from a pre-trained checkpoint model = AutoModelForMaskedLM.from_pretrained("distilbert-base-cased") # Put model in Trainer trainer = Trainer(model=model) Unless you have a huge amount of data that is very different than what pre-trained models were trained on, I wouldn’t recommend starting from scratch. Start from scratch when you are creating a model for a niche domain like a low-resource language. Start from a pre-trained model if your text is in a high-resource language (like English) but the jargon might be very specific (like scientific texts). There are enough fundamental similarities that you’ll save compute and time by starting from a pre-trained model.
0
huggingface
Beginners
Longformer for Encoder Decoder with gradient checkpointing
https://discuss.huggingface.co/t/longformer-for-encoder-decoder-with-gradient-checkpointing/13428
I’m struggling to find the right transformers class for my task. I want to solve a seq2seq problem with an encoder decoder longformer. I generated one with this german RoBERTa model 1 using this script 1. I know that I could use EncoderDecoderModel() 1, but the issue is that it doesn’t support gradient checkpointing, which I desperately need, because otherwise it wouldn’t run on the machine. And if I understand it correctly, the class LEDModel() 1 only takes already built encoder decoder models and not just a plain longformer to chain it together, so that is also not an option. I thought about initializing two seperate Longformers for encoder and decoder with LongformerModel(), but then I don’t know how to glue them together. Can someone explain how it works? Or does anyone have another suggestions on how I can solve this problem? Thank you very much!
I found a solution which at least helps a little: When using EncoderDecoderModel(), it is possible to set gradient checkpointing at least on the encoder part: model.encoder.config.gradient_checkpointing = True
0
huggingface
Beginners
Trouble with fine tuning DialoGPT-large
https://discuss.huggingface.co/t/trouble-with-fine-tuning-dialogpt-large/8161
I’m trying to fine tune the DialoGPT-large model but I’m still really new to ML and am probably misusing the trainer API. I already went through the tutorial and the colab examples but I still can’t figure out the issue. error: Traceback (most recent call last): File "/.../main.py", line 26, in <module> tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]] code: from transformers import AutoModelForCausalLM, AutoTokenizer import torch ### CONFIG weights = "microsoft/DialoGPT-large" # Initialize tokenizer and model print("Loading model... ", end='', flush=True) tokenizer = AutoTokenizer.from_pretrained(weights) model = AutoModelForCausalLM.from_pretrained(weights) print('DONE') ### FINE TUNING ### from datasets import load_dataset from transformers import AutoTokenizer, DataCollatorWithPadding raw_datasets = load_dataset("glue", "mrpc") tokenizer.pad_token = tokenizer.eos_token tokenizer.pad_token = tokenizer.eos_token def tokenize_function(example): return tokenizer([example["sentence1"], example["sentence2"]], truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) from transformers import TrainingArguments training_args = TrainingArguments("test-trainer") from transformers import Trainer trainer = Trainer( model, training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, ) trainer.train() predictions = trainer.predict(tokenized_datasets["validation"]) print(predictions.predictions.shape, predictions.label_ids.shape)
Hello, I've had this same error. It's due to there being None values in your data. So when you are creating your dataframe from the data, change your code to specify values and convert it to a list: for i in data.index.values.tolist(): If you're still having issues after that, try the following code on your dataframe object: df = df.dropna() Hope this helps you, good luck.
0
huggingface
Beginners
How can I view total downloads of my model?
https://discuss.huggingface.co/t/how-can-i-view-total-downloads-of-my-model/10476
Hello, I have uploaded my fine-tuned ‘PEGASUS’ model more than 3 months ago. Is there a way I can view my total downloads rather than just my last month downloads?
cc @julien-c
0
huggingface
Beginners
How can I enforce reproducibility for Longformer?
https://discuss.huggingface.co/t/how-can-i-enforce-reproducibility-for-longformer/8862
Hi all, I’m struggling with ensuring reproducible results with the Longformer. Here is the result of transformer-cli env: transformers version: 4.9.1 Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29 Python version: 3.8.10 PyTorch version (GPU?): 1.8.1+cu102 (True) Tensorflow version (GPU?): 2.5.0 (False) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: yes Using distributed or parallel set-up in script?: no I am running a script which finetunes Longformer for sequence classification two times , each time for 4 epochs. When using the model "allenai/longformer-base-4096", I do not get the same training loss in the two iterations. However, if I use "roberta-base" as a model, the training loss is identical in both iterations. I did not find anything else I could add to the script to ensure reproducible results. Could you tell me if I am missing something? I plotted the training loss over epochs for two consecutive runs with "roberta-base" and "allenai/longformer-base-4096". You can see that the "allenai/longformer-base-4096" runs show different training loss in the two runs where as the "roberta-base" runs have identical training loss. See the plot in a wandb-Report here: Wandb Report 3 Below is code to reproduce the results. You can comment/uncomment the respective model_name to chose either "allenai/longformer-base-4096" or "roberta-base". import torch import random import wandb import datetime import numpy as np from datasets import load_dataset, load_metric from transformers import AutoTokenizer, AutoConfig, TrainingArguments, Trainer, AutoModelForSequenceClassification import transformers transformers.logging.set_verbosity_error() seed = 42 # python RNG random.seed(seed) # pytorch RNGs torch.manual_seed(seed) torch.backends.cudnn.deterministic = True if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed) # numpy RNG np.random.seed(seed) #model_name = "roberta-base" model_name = "allenai/longformer-base-4096" raw_datasets = load_dataset("imdb") tokenizer = AutoTokenizer.from_pretrained(model_name) def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) def get_model(): # get_model is used for the model_init argument for trainer. This should ensures reproducibility. Otherwise, weights from classification head are randomly initialized. 
# see https://discuss.huggingface.co/t/fixing-the-random-seed-in-the-trainer-does-not-produce-the-same-results-across-runs/3442 model = AutoModelForSequenceClassification.from_pretrained( model_name, config = AutoConfig.from_pretrained(model_name, num_labels = 2), ) return model metric = load_metric("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) lr = 1e-5 num_epochs = 4 batch_size = 2 model_path = "models/" + model_name.replace("/", "_") for i in range(2): run = wandb.init( reinit=True, name = "transformers_" + model_name + "_" + datetime.datetime.now().strftime("%Y%m%d_%H%M%S"), notes = "reproducibility training with imdb dataset", save_code = True, config = { "model":model_name, "learning_rate":lr, "num_epochs": num_epochs, "warmup_ratio":0.1, "batch_size":batch_size, "random_seed":seed } ) training_args = TrainingArguments( seed = seed, do_train=True, do_eval=True, evaluation_strategy="epoch", logging_strategy="epoch", num_train_epochs = num_epochs, learning_rate=lr, per_device_train_batch_size = batch_size, per_device_eval_batch_size = batch_size, output_dir = "./test_output" ) trainer = Trainer( model_init=get_model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, ) trainer.train() run.finish()
Hi @DavidPfl, were you able to figure this out?
0
huggingface
Beginners
Any pretrained model for Grammatical Error Correction(GEC)?
https://discuss.huggingface.co/t/any-pretrained-model-for-grammatical-error-correction-gec/4662
Hi, is there any pre-trained model for the GEC task? It is often treated as an MT task.
As I couldn't find one, I developed a model using Marian NMT and then migrated it to Hugging Face to use it as a pre-trained model. I wrote a Medium post (24 Dec 2021) describing my approach: "Training a Grammar Error Correction (GEC) Model from Scratch with Marian NMT...". Hope this helps anyone.
0
huggingface
Beginners
Bert embedding layer
https://discuss.huggingface.co/t/bert-embedding-layer/13355
I have taken specific word embeddings and want to use a BERT model with those embeddings: self.bert = BertModel.from_pretrained('bert-base-uncased') self.bert(inputs_embeds=x, attention_mask=attention_mask, *args, **kwargs) Does this mean I'm replacing the BERT input embeddings (token + position + segment embeddings)? How can I keep all embeddings, i.e. (token + position + segment + custom embeddings)?
Hi, As you can see here 2, if you provide inputs_embeds yourself, they will only be used to replace the token embeddings. The token type and position embeddings will be added separately.
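A small sketch of what that means in practice: look up the token embeddings yourself, add your custom embeddings on top, and let the model add the position/token type embeddings (the custom embeddings below are random, purely for illustration):

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

enc = tokenizer("Here is some text", return_tensors="pt")

token_embeds = model.get_input_embeddings()(enc["input_ids"])   # [1, seq_len, 768] token embeddings
custom_embeds = torch.randn_like(token_embeds) * 0.01           # stand-in for your own embeddings

outputs = model(
    inputs_embeds=token_embeds + custom_embeds,                 # replaces only the token embeddings
    attention_mask=enc["attention_mask"],                       # position/type embeddings are still added inside
)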
1
huggingface
Beginners
Best model to use for Abstract Summarization
https://discuss.huggingface.co/t/best-model-to-use-for-abstract-summarization/13240
I am looking for a pre-trained model for abstractive summarization. I have used Google's Pegasus-xsum and Pegasus-large: the xsum model seems good but only produces a one-line summary, while Pegasus-large seems to produce an extractive summary rather than an abstractive one, simply picking sentences from the paragraph and presenting them as the summary. Can anybody suggest the best abstractive summarization model for general text? I don't need it for any specific domain, just everyday paragraphs.
I have tested a few different models and have found facebook/bart-large-cnn to be the best for my use-case.
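For reference, a minimal way to try it (the generation parameters are just a starting point):

from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = "..."  # your paragraph here
print(summarizer(text, max_length=130, min_length=30, do_sample=False))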
0
huggingface
Beginners
Tokenizer issue in Huggingface Inference on uploaded models
https://discuss.huggingface.co/t/tokenizer-issue-in-huggingface-inference-on-uploaded-models/3724
While inferencing on the uploaded model in huggingface, I am getting the below error, Can’t load tokenizer using from_pretrained, please update its configuration: Can’t load tokenizer for ‘bala1802/model_1_test’. Make sure that: - ‘bala1802/model_1_test’ is a correct model identifier listed on ‘Hugging Face – On a mission to solve NLP, one commit at a time. 2’ - or ‘bala1802/model_1_test’ is the correct path to a directory containing relevant tokenizer files How to configure the tokenizer in config file ? The configuration file looks like below, { “_name_or_path”: “gpt2”, “activation_function”: “gelu_new”, “architectures”: [ “GPT2LMHeadModel” ], “attn_pdrop”: 0.1, “bos_token_id”: 50256, “embd_pdrop”: 0.1, “eos_token_id”: 50256, “gradient_checkpointing”: false, “initializer_range”: 0.02, “layer_norm_epsilon”: 1e-05, “model_type”: “gpt2”, “n_ctx”: 1024, “n_embd”: 768, “n_head”: 12, “n_inner”: null, “n_layer”: 12, “n_positions”: 1024, “resid_pdrop”: 0.1, “summary_activation”: null, “summary_first_dropout”: 0.1, “summary_proj_to_labels”: true, “summary_type”: “cls_index”, “summary_use_proj”: true, “task_specific_params”: { “text-generation”: { “do_sample”: true, “max_length”: 50 } }, “transformers_version”: “4.3.2”, “use_cache”: true, “vocab_size”: 50257 }
Hi @bala1802, I just tried your text generation model on the inference API (link 11) and it seems to work without any error - perhaps you found a way to solve your problem?
0
huggingface
Beginners
BERT and RoBERTA giving same outputs
https://discuss.huggingface.co/t/bert-and-roberta-giving-same-outputs/10214
Hi All. I tried using Roberta model in two different models. In both these models, I’ve faced same problem of getting same output for different test input during evaluation process. Earlier, I thought it might be due to some implementation problem and hence I took a small dataset to overfit the dataset and predict the outputs for the same. I still got the same problem. Roberta was still giving out same output for different records. I replaced Roberta with Bert and still got same issue. Is there any bug in latest transformer version i.e. 4.10.2 (which I’m surely believe is very unlikely) or do have any other suggestion that I can try? I’ve used 4.2.1 version of transformer earlier and didn’t face this problem. Also, I keep getting this warning while training and evaluation: Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight'] - This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). I checked online and I suspect this is not a issue but I am still not sure what it actually means. Could this be any issue?
@theguywithblacktie have you figured out what was wrong with your code as I am facing the same issue while using roberta from transformer library.
0
huggingface
Beginners
Push_to_hub usage errors?
https://discuss.huggingface.co/t/push-to-hub-usage-errors/9132
Trying to push my model back to the hub from python (not notebook) and failing so far: I am using a T5 model with the latest development version of the example “run_summarization.py” and pass a load of runtime parameters in and my model works fine. There are some parameters that seem to relate to pushing the model back to the hub which I have identified from the “run_summarization.py -h” text: –use_auth_token - Will use the token generated when running transformers-cli login (necessary to use this script with private models). (default: False) -I assume I need to set this True given I ran the cli and it saved my token in the cache? –push_to_hub - Whether or not to upload the trained model to the model hub after training. (default: False) - I set this to True –push_to_hub_model_id - The name of the repository to which push the Trainer. (default: None) - *I set this to a string that is my model like “my_model” I guess? * –push_to_hub_organization - Not relevant for me since I am an individual? –push_to_hub_token - Not needed if I set --use_auth_token True So I have as part of my run time parameter list: –push_to_hub True --use_auth_token True --push_to_hub_model_id "t5_tuesday" But I get the error: OSError: Tried to clone a repository in a non-empty folder that isn’t a git repository. If you really want to do this, do it manually:\mgit init && git remote add origin && git pull origin main or clone repo to a new folder and move your existing files there afterwards. As I said above, I did transformers-cli login successfully in my environment. I thought maybe I needed to do as I had seen in an example Colab notebook: !pip install hf-lfs !git config --global user.email "<my_github_email>" !git config --global user.name "<my_github_username>" but after doing the above the error changes to: subprocess.CalledProcessError: Command ‘[‘git-lfs’, ‘–version’]’ returned non-zero exit status 1. But not sure if needed (I am guessing)! I can supply the Trace for both kinds of errors above if needed, but I don’t know what minimal configuration works running a .py file to see if I am being a dumb user and the problem is usage or the problem is something else. Any help on correct usage appreciated or point me to a working example? Thanks!
As the error indicates, you are trying to clone an existing repository in a folder that is not a git repository, so you should use an empty folder, or an ID for a new repository.
0
huggingface
Beginners
Hugging Face Tutorials - Basics / Classification tasks
https://discuss.huggingface.co/t/hugging-face-tutorials-basics-classification-tasks/13345
Hi everyone, I recently decided to make practical coding guides for hugging face because I thought videos like these would have been useful for when I was learning the basics. I have made one on the basics of the Hugging Face library / website and its layout (and BERT basics). The second one is a guide using pytorch / pytorch lightning and hugging face to fine tune RoBERTa / BERT on a multi-label classification task to predict unhealthy online comments (attributes include things such as sarcasm, etc.). Let me know if you find this useful! links below: Hugging Face basics tutorial 8 Pytorch multi-label classification with RoBERTa - state of the art results on the Unhealthy Comment Corpus 3
This is awesome! Thanks for sharing
0
huggingface
Beginners
Where to put use_auth_token in the code if you can’t run hugginface-cli login command?
https://discuss.huggingface.co/t/where-to-put-use-auth-token-in-the-code-if-you-cant-run-hugginface-cli-login-command/11701
I am training a bart_large_cnn model for summarization, where I have used : - training_args = Seq2SeqTrainingArguments( output_dir="results", num_train_epochs=1, # demo do_train=True, do_eval=True, per_device_train_batch_size=4, # demo per_device_eval_batch_size=4, # learning_rate=3e-05, warmup_steps=500, weight_decay=0.1, label_smoothing_factor=0.1, predict_with_generate=True, logging_dir="logs", logging_steps=50, save_total_limit=3, push_to_hub=True, ) but when I try to compile :- trainer = Seq2SeqTrainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_data, eval_dataset=validation_data, tokenizer=tokenizer, compute_metrics=compute_metrics ) it gives an error :- HTTPError: Invalid user token. If you didn't pass a user token, make sure you are properly logged in by executing huggingface-cli login, and if you did pass a user token, double-check it's correct. Now, in the user documentation, its written that either pass the token as :- !huggingface-cli login or use_auth_token='token_value' I tried putting this token value as below :- the first command (cli-login) doesn’t run (takes forever). so I used the second option as below; - model = AutoModelForSeq2SeqLM.from_pretrained(model_name,use_auth_token='token_value') tokenizer = AutoTokenizer.from_pretrained(model_name,use_auth_token = 'token_value') but the same error keeps popping up. Where else can I put this token value to push my model to the hub?
I added the following parameter (hub_token) in the training_args as below :- training_args = Seq2SeqTrainingArguments( output_dir="results", num_train_epochs=1, # demo do_train=True, do_eval=True, per_device_train_batch_size=4, # demo per_device_eval_batch_size=4, # learning_rate=3e-05, warmup_steps=500, weight_decay=0.1, label_smoothing_factor=0.1, predict_with_generate=True, logging_dir="logs", logging_steps=50, save_total_limit=3, push_to_hub=True, hub_token = 'token value' ) but when I try to compile :- trainer = Seq2SeqTrainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_data, eval_dataset=validation_data, tokenizer=tokenizer, compute_metrics=compute_metrics ) FileNotFoundError: [WinError 2] The system cannot find the file specified During handling of the above exception, another exception occurred: File "C:\Users\abhay.saini\AppData\Local\Programs\Python\Python39\lib\site-packages\huggingface_hub\repository.py", line 489, in check_git_versions raise EnvironmentError( OSError: Looks like you do not have git-lfs installed, please install. You can install from https://git-lfs.github.com/. Then run `git lfs install` (you only have to do this once). I did then try installing git-lfs pip install git-lfs Requirement already satisfied: git-lfs in c:\users\abhay.saini\appdata\local\programs\python\python39\lib\site-packages (1.6) but the error persists… Can anyone help?
0
huggingface
Beginners
Errors when fine-tuning T5
https://discuss.huggingface.co/t/errors-when-fine-tuning-t5/3527
Hi everyone, I’m trying to fine-tune a T5 model. I followed most (all?) the tutorials, notebooks and code snippets from the Transformers library to understand what to do, but so far, I’m only getting errors. The end goal is giving T5 a task such as finding the max/min of a sequence of numbers, for example, but I’m starting with something really small, just to see if I understand how things work. I’m using Transformers v4.2.2 (Tokenizers v0.9.4). This is what I have understood so far and which I think is correct (excuse the French, I’m working on that too ): tokenizer = T5TokenizerFast.from_pretrained("t5-base") model = T5ForConditionalGeneration.from_pretrained("t5-base") prefix = "translate English to French:" inputs = [f"{prefix} How are you?", f"{prefix} My name is Ben", f"{prefix} My cat is great"] outputs = ["Comment ca va?", "Je m'appelle Ben", "Mon chat est genial"] model_inputs = tokenizer(inputs, padding=True, truncation=True, return_tensors="pt") with tokenizer.as_target_tokenizer(): labels = tokenizer(outputs, padding=True, truncation=True, return_tensors="pt") model_inputs["labels"] = labels["input_ids"] class MyDataset(torch.utils.data.Dataset): def __init__(self, examples): self.examples = examples def __getitem__(self, idx): return self.examples[idx] def __len__(self): return len(self.examples) train = MyDataset([model_inputs]) training_args = Seq2SeqTrainingArguments( output_dir="output", overwrite_output_dir=True, per_device_train_batch_size=2, num_train_epochs=3, run_name="T5 Experiment", ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=train, tokenizer=tokenizer ) trainer.train() The error I’m getting is ValueError: too many values to unpack (expected 2) which happens in the forward method in transformers/models/t5/modeling_t5.py, in the line 877 (880 on the master branch): batch_size, seq_length = input_shape Looking at a few lines before the error, I see input_shape is just input_ids.size(), and my model_inputs["input_ids"] is indeed a two-dimensional PyTorch tensor, so I don’t understand why the unpacking crashes. I have no idea whether I’m doing something wrong in the model, in the tokenization, in the training, I really am lost. Any help is greatly appreciated!
Good job posting your issue in a very clear fashion, it was very easy to reproduce and debug. So the problem is that you are using model_inputs wrong. It contains one key each for input_ids, labels and attention_mask, and each value is a 2d tensor with the first dimension being 3 (your number of sentences). Your dataset should actually dig into that dictionary in its __getitem__: def __getitem__(self, idx): return {k: v[idx] for k, v in self.examples.items()} So that each element of your dataset is a dictionary with the three keys pointing to one of the encoded sentences (so the associated values are 1d tensors). Then you should pass the model_inputs without putting them in a list: train = MyDataset(model_inputs) and it should work. The current model_inputs["input_ids"] your model gets is a 3d tensor with your actual code, of shape 1 x 3 x n.
0
huggingface
Beginners
Inference API in JavaScript
https://discuss.huggingface.co/t/inference-api-in-javascript/11537
Dear Community, Good day. How can I use the Inference API in pure JavaScript? My goal is to test the ML model on the web by sending an API request and getting a response. Is it possible to do this? Can someone show me an example? Akbar
I would also be interested in this. All help highly appreciated
0
huggingface
Beginners
Shuffle a Single Feature (column) in a Dataset
https://discuss.huggingface.co/t/shuffle-a-single-feature-column-in-a-dataset/13195
Hi, I am learning the dataset API 2. The shuffle API states that it rearranges the values of a column, but from my experiments it shuffles the rows. The code documentation 2 is clearer and states that the rows are shuffled. To achieve a column shuffle I used the map functionality (batched=True) and created the following mapper function: def _shuffle_question_column_batch(examples): questions = examples["question"] Random(42).shuffle(questions) examples["question"] = questions return examples I am wondering whether the shuffle API is capable of rearranging the values of a single column, or whether a better way exists. Please advise.
Using the imdb (movie review dataset) data as an example, this is 1000s of movie reviews, with columns being the text for the movie review and then the label (0 or 1). We wouldn’t want to shuffle the columns - this would only be swapping the text and the label - there is no benefit to that. We care about shuffling the rows. This is what the shuffle method does.
0
huggingface
Beginners
Can I download the raw text of a dataset?
https://discuss.huggingface.co/t/can-i-download-the-raw-text-of-a-dataset/13298
Hi, Can I download the raw text as a file of a dataset? Thanks, Steve
There are a couple of options, using the imdb dataset as an example. As an arrow file: from datasets import load_dataset imdb = load_dataset('imdb') imdb.save_to_disk(dataset_dict_path='./imdb') This will save your files in the imdb directory. Or convert to pandas and then save as csv / json: from datasets import load_dataset imdb = load_dataset('imdb') imdb.set_format('pandas') df = imdb['train'][:] df.to_csv('imdb_train.csv')
0
huggingface
Beginners
How do organizations work?
https://discuss.huggingface.co/t/how-do-organizations-work/8918
After creating a new user account, I see an option to create a new organization. As I am hoping to create some datasets that might be shared within my organization, this seems like a good choice. However, I don’t see a way to search existing organizations to see if (unlikely) someone has previously created a huggingface organization that I should be joining. I don’t see any way to join an existing organization, so I’m also a little concerned about how others would find and join a new organization that I create. It all seems like a walk down a dark alleyway… I see documentation on how models or datasets can be created and assigned to an organization, but I see no documentation on how huggingface organizations are created, managed, and what things are available to free vs. premium users. So, how do organizations work in huggingface?
hey @jnemecek here’s a few quick answers, but feel free to open an issue on the huggingface_hub repo 2 if you think the docs could be improved further. jnemecek: However, I don’t see a way to search existing organizations to see if (unlikely) someone has previously created a huggingface organization that I should be joining. I don’t see any way to join an existing organization, so I’m also a little concerned about how others would find and join a new organization that I create. currently you can search for organisations by name in the search bar on the hub. the name will appear under the “Organizations” section and you can request to join an organisation by clicking on the button shown in the screenshot below. Screen Shot 2021-08-02 at 12.38.051920×751 67.5 KB jnemecek: what things are available to free vs. premium users. as far as i know there is no difference in organisation-specific features between free and premium users. as described in the pricing page 2 most of the features concern the use of the inference api
0
huggingface
Beginners
Cnn_dailymail dataset loading problem with Colab
https://discuss.huggingface.co/t/cnn-dailymail-dataset-loading-problem-with-colab/13281
The cnn_dailymail dataset was rarely downloaded successfully in the past few days. import datasets test_dataset = datasets.load_dataset(“cnn_dailymail”, “3.0.0”, split=“test”) Most of the time when I try to load this dataset using Colab, it throws a “Not a directory” error: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories’ I really don’t know why and what the exact problem is. This wastes my time waiting for hours or days until I can load the dataset again. Please guide me to solve this problem or to save this dataset locally so that next time I load it “when it becomes available” from my drive instead. Thank you in advance
I would either try streaming 3 or clear the cache, mount drive & let it save under ‘/content’.
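For example, along these lines (streaming only works if the dataset script supports it, and the Drive path is just a placeholder):

from datasets import load_dataset, load_from_disk

# Option 1: stream the split instead of downloading/extracting the whole archive
test_stream = load_dataset("cnn_dailymail", "3.0.0", split="test", streaming=True)
print(next(iter(test_stream))["article"][:200])

# Option 2: after one successful download, persist the dataset to Drive and reload it later
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
test_dataset.save_to_disk("/content/drive/MyDrive/cnn_dailymail_test")
test_dataset = load_from_disk("/content/drive/MyDrive/cnn_dailymail_test")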
0
huggingface
Beginners
Deploying to Model Hub for Inference with custom tokenizer
https://discuss.huggingface.co/t/deploying-to-model-hub-for-inference-with-custom-tokenizer/3595
Hello everyone, I have a working model for a text generation task and I would to use the huggingface Inference API to simplify calling this model to generate new text. However I used a custom WordLevel tokenizer due to the nature of my domain and the docs aren’t very clear on how I would make this work since I didn’t use a PretrainedTokenizer. Does anyone have any sort of documentation or reference on how I can still leverage the Inference API or if this is possible? Currently with my model deployed on Model Hub I receive a route not found when attempting to call with example text in following the example here Overview — Api inference documentation 8.
@jodiak Did you ever find a solution to this?
0
huggingface
Beginners
Trainer.evaluate() with text generation
https://discuss.huggingface.co/t/trainer-evaluate-with-text-generation/591
Hi everyone, I’m fine-tuning XLNet for generation. For training, I’ve edited the permutation_mask to predict the target sequence one word at a time. I’m evaluating my trained model and am trying to decide between trainer.evaluate() and model.generate(). Running the same input/model with both methods yields different predicted tokens. Is it correct that trainer.evaluate() is not set up for sequential generation? I’ll switch my evaluation code to use model.generate() if that’s the case. Thanks for the help!
Hi, I encountered a similar problem when trying to use EncoderDecoderModel for seq2seq tasks. It seems like Trainer does not support text-generation tasks for now, as their website https://huggingface.co/transformers/examples.html 36 shows.
0
huggingface
Beginners
XLM-Roberta for many-topic classification
https://discuss.huggingface.co/t/xlm-roberta-for-many-topic-classification/12638
Hi, I am working on a multi-label topic classifier that can classify webpages into some of our ~100 topics. The classifier currently uses a basic Neural Network and I wish to adapt the XLM-R model provided by Huggingface to give the classifier multi-lingual capabilities. However, when I train a classifier using XLM-R the performance (using pr_auc) is worse than that of the classifier using the basic Neural Network. What can I do to improve the performance of a transformer-based Neural Network model?
I don’t have an answer, unfortunately, but I have the same issue and can add some details. I have a classification problem with 60 classes. I have training data of ~70k documents, unfortunately, with very unbalanced class distribution. My baseline is a FastText classifier trained on the same data which achieves an accuracy of ~0.45. The majority of the documents is in English, but some are in other languages (all I have followed the tutorial for fine-tuning pre-trained classification models 1 using the Trainer API. I have not changed much from the example other than the number of labels, and the model. Running on a Colab notebook with GPU, training time for a single epoch is roughly 4h. I’ve run intermediate evaluations every 500 steps, and the accuracy is around 0.04, no changes (neither positive nor negative). Same happens when using a different model (e.g. bert-base-cased, as shown in the tutorial). My suspicion is that the model does not learn anything at all, the accuracy is very close to random choice. What am I missing? Is there a more suitable, up-to-date tutorial somewhere?
0
huggingface
Beginners
Pegasus - how to get summary of more than 1 line?
https://discuss.huggingface.co/t/pegasus-how-to-get-summary-of-more-than-1-line/11014
I am trying abstractive summarization with Pegasus by following the example code given here: huggingface.co Pegasus 2. Here are my sample code and results (see the attached screenshot): it seems it produces a summary of only one line. Is there a way to get a summary of more than one line?
kapilkathuria: it seems it produces a summary of only one line. That is because you are probably using the pegasus-xsum model… all the xsum models generate a one-line summary, since XSum is a single-sentence summarization dataset.
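If you want multi-sentence summaries, a checkpoint fine-tuned on CNN/DailyMail is probably a better fit. A sketch, where src_text is your list of input documents and the generation parameters are just an example:

from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-cnn_dailymail"   # fine-tuned on multi-sentence summaries
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, min_length=60, max_length=200)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))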
0
huggingface
Beginners
How to change my account to organization?
https://discuss.huggingface.co/t/how-to-change-my-account-to-organization/13025
Hi, I want to sign up as an organization (company). Is there any way to register this account as a company? If not, we request that you delete this account. Thank you.
As a user, you can create an organization by clicking your profile picture + New Organization. There is no concept of an account being a company; instead, you and more people can belong to an organization. Let me know if you need any further help!
0
huggingface
Beginners
Entity type classification
https://discuss.huggingface.co/t/entity-type-classification/13146
Certain entities are wrongly classified. For example in some cases ORG are classified as people. How to correct this?
Welcome @dorait Is it possible if you could send me the model you’re inferring with and an example input?
0
huggingface
Beginners
Trainer .train (resume _from _checkpoint =True)
https://discuss.huggingface.co/t/trainer-train-resume-from-checkpoint-true/13118
Hi all, I'm trying to resume my training from a checkpoint. My training arguments: training_args = TrainingArguments( output_dir=repo_name, group_by_length=True, per_device_train_batch_size=16, per_device_eval_batch_size=1, gradient_accumulation_steps=8, evaluation_strategy="steps", num_train_epochs=50, fp16=True, save_steps=500, eval_steps=400, logging_steps=10, learning_rate=5e-4, warmup_steps=3000, push_to_hub=True, ) My trainer: trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=common_voice_train, eval_dataset=common_voice_test, tokenizer=processor.feature_extractor, ) Up to here everything is fine. Then my training command: trainer.train(resume_from_checkpoint=True) The error is: ----> 1 trainer.train(resume_from_checkpoint=True) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1073 resume_from_checkpoint = get_last_checkpoint(args.output_dir) 1074 if resume_from_checkpoint is None: → 1075 raise ValueError(f"No valid checkpoint found in output directory ({args.output_dir})") 1076 1077 if resume_from_checkpoint is not None: ValueError: No valid checkpoint found in output directory (stt-arabic-2) Any thoughts on why?
maher13: trainer.train(resume_from_checkpoint=True) Probably you need to check whether the checkpoints are actually being saved in the output directory. You can also provide the checkpoint directory explicitly with resume_from_checkpoint='checkpoint_dir'.
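If you want to be explicit about which checkpoint to resume from, here is a minimal sketch; the checkpoint-500 folder name is an assumption, use whatever folder the Trainer actually wrote under your output_dir:
import os

ckpt_dir = "stt-arabic-2/checkpoint-500"  # hypothetical: pick an existing checkpoint folder inside output_dir
assert os.path.isdir(ckpt_dir), "no checkpoint folder found; check output_dir and save_steps"
trainer.train(resume_from_checkpoint=ckpt_dir)
Note that resume_from_checkpoint=True only works if the output_dir already contains checkpoint-* folders, which is exactly what the error message about stt-arabic-2 is complaining about.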
0
huggingface
Beginners
Using DialoGPT for Text Classification
https://discuss.huggingface.co/t/using-dialogpt-for-text-classification/13123
I have a labelled set of 21,000 chat transcripts between agents and customers. My intent is to classify what they are talking about. I have already tried a few models but wanted to try something that was trained on similar kind of data. I stumbled upon DialoGPT. How can I finetune DialoGPT for multi-class text classification?
Hello, You have two ways of doing this. The first is an intent/action chatbot (which looks like what you need instead of a generative model). If you want better control over the responses your chatbot gives and have a set of answers (actions) to user inputs (intents), you should rather solve this as a sequence classification problem where you have user inputs and associated classes (e.g. the intent is “greetings” and the user input samples are hello, hi, good morning, etc.). If you want to keep it simple you can just train a sequence classification model and define answers; see the sketch right below. You can also look at various chatbot frameworks like Rasa Open Source. The second way is a DialoGPT-like model where you have conversation turns and you use them to fine-tune DialoGPT (nice tutorial here 2). Note that your responses will be quite random.
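For the first option, a minimal sketch of loading DialoGPT as a sequence classifier; the number of labels and the example text are assumptions, and since DialoGPT is GPT-2 based it needs a pad token set explicitly:
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "microsoft/DialoGPT-medium"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 based models have no pad token by default
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=10)  # 10 = hypothetical number of intents
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("hello, I need help with my order", truncation=True, padding=True, return_tensors="pt")
logits = model(**inputs).logits  # shape (1, num_labels); from here the model can be fine-tuned with Trainer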
0
huggingface
Beginners
Adding new Entities to Flair NER models
https://discuss.huggingface.co/t/adding-new-entities-to-flair-ner-models/3913
Hi, I hope it’s not inappropriate to ask a question about Flair here. I noticed that Flair models were also hosted on the model hub and I could not find the answer to my question anywhere else. I have a NER problem that I need to tackle and there is a nearly perfect existing model. The problem, however, is that it lacks a few entities that I need. My question is: can I add new entities to said existing model, or do I need to train from scratch with the same corpus the original authors used, plus additional training data for my additional entities? The Flair documentation mentions training models and even continuing training for existing models, but it doesn’t mention whether new entities can be added. Any help would be much appreciated!
Hi @neuralpat, I’ve never tried this, but I wonder whether you could fine-tune the existing NER model on a small corpus composed of a mix of the original annotations and the new ones you’d like to extend it with (I think the mix is needed so the model doesn’t forget the original entities). Alternatively, you could try fine-tuning only on the new annotations (perhaps without dropout or weight decay) and then compare the two approaches.
0
huggingface
Beginners
The inputs into BERT are token IDs. How do we get the corresponding input token VECTORS?
https://discuss.huggingface.co/t/the-inputs-into-bert-are-token-ids-how-do-we-get-the-corresponding-input-token-vectors/11273
Hi, I am new and learning about transformers. In a lot of BERT tutorials I see that the input is just the token IDs of the words. But surely we need to convert these token IDs to a vector representation (it can be a one-hot encoding, or any initial vector representation for each token ID) so that they can be used by the model? My question is: where can I find this initial vector representation for each token? It seems like there’s no guide on this, hence why I am asking.
The token ID specifically is used in the embedding layer, which you can see as a matrix with all possible token IDs as row indices (so one row for each item in the total vocabulary, for instance 30K rows). Every token therefore has a (learned!) representation. Beware, though, that this is not the same as word2vec or similar approaches: it is context-sensitive and not trained specifically to be used by itself. It only serves as the input of the model, together with potentially other embeddings like type and position embeddings. Getting those embeddings by themselves is not very useful; a minimal lookup example is sketched below. If you want to get output representations for each word, this post may be helpful: Generate raw word embeddings using transformer models like BERT for downstream process - #2 by BramVanroy 27
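A minimal sketch of looking up those initial vectors yourself (bert-base-uncased is just an example checkpoint):
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

input_ids = tokenizer("hello world", return_tensors="pt").input_ids  # token IDs
embedding_layer = model.get_input_embeddings()      # nn.Embedding of shape (vocab_size, hidden_size)
with torch.no_grad():
    token_vectors = embedding_layer(input_ids)      # shape (1, seq_len, 768): the "initial" vectors
print(token_vectors.shape)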
1
huggingface
Beginners
T5 [ input & target ] text
https://discuss.huggingface.co/t/t5-input-target-text/13134
Hello, I am trying to fine-tune mT5 on seq2seq tasks with my own dataset, and the dataset has input and target columns. How can I tell the model about those input & target columns to be trained on?
Hi, The T5 docs are quite extensive (the equivalent holds for mT5): T5 1. Next to that, I have some notebooks that illustrate how to fine-tune T5 models: Transformers-Tutorials/T5 at master · NielsRogge/Transformers-Tutorials · GitHub 3
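In short, you tokenize the input column to produce input_ids/attention_mask and tokenize the target column to fill the labels field. A minimal preprocessing sketch, assuming your columns are literally named "input" and "target" and that dataset is a DatasetDict:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

def preprocess(example):
    model_inputs = tokenizer(example["input"], max_length=512, truncation=True)
    labels = tokenizer(example["target"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]  # the model computes the seq2seq loss against these
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset["train"].column_names)
# tokenized can then be passed to Seq2SeqTrainer together with a DataCollatorForSeq2Seq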
0
huggingface
Beginners
Getting started with GPT2
https://discuss.huggingface.co/t/getting-started-with-gpt2/13125
Hi I am new to HuggingFace. I want to build a ReactNative mobile app that can leverage the HuggingFace GPT2 model. I wanted to host a GPT2 model (as is, without fine-tuning) on HuggingFace servers so I can invoke the model via an API from my mobile app. Can someone please guide me on how to do this? Your help is really appreciated!
Hey @shivenpandey21, Check out the following links: How to programmatically access the Inference API 4 Inference API Docs Pricing 1
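Once the model page exists on the Hub, calling it from any app boils down to one HTTP request. A hedged sketch in Python (the token is a placeholder; the same request works from React Native with fetch):
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer hf_xxx"}  # placeholder: use your own API token

payload = {"inputs": "I am Harry Potter and", "parameters": {"max_new_tokens": 40}}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # e.g. [{"generated_text": "..."}]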
0
huggingface
Beginners
Error while downloading BertForQuestionAnswering
https://discuss.huggingface.co/t/error-while-downloading-bertforquestionanswering/13120
Hi, I just ran this code: from transformers import BertTokenizer, BertForQuestionAnswering modelname = 'deepset/bert-base-cased-squad2' tokenizer = BertTokenizer.from_pretrained(modelname) model = BertForQuestionAnswering.from_pretrained(modelname) but I got an error like this: "OSError: Can’t load weights for 'deepset/bert-base-cased-squad2'. Make sure that: 'deepset/bert-base-cased-squad2' is a correct model identifier listed on 'Models - Hugging Face' or 'deepset/bert-base-cased-squad2' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt." I have ensured that bert-base-cased-squad2 is available, but I still got the error. Does anyone know the solution for this problem? Thanks
Hey, I ran this code in Colab and works perfectly fine for me. Can you please try the below code again? from transformers import BertTokenizer, BertForQuestionAnswering modelname = "deepset/bert-base-cased-squad2" tokenizer = BertTokenizer.from_pretrained(modelname) model = BertForQuestionAnswering.from_pretrained(modelname)
0
huggingface
Beginners
Unfreeze BERT vs pre-train BERT for Sentiment Analysis
https://discuss.huggingface.co/t/unfreeze-bert-vs-pre-train-bert-for-sentiment-analysis/13041
I am doing sentiment analysis over some text reviews, but I do not get good results. I use BERT for feature extraction and a fully connected layer as the classifier. I am going to do these experiments, but I do not have any overview of the results in general. I have two options: 1- Unfreeze some Transformer layers and let the gradient propagate over those layers. 2- Further pre-train BERT with masked language modeling over related texts and then use the classifier. Which one has priority? Or does it just depend on experiments?
Currently, it seems that the consensus is that to get the best results when fine-tuning on a downstream task you don’t freeze any layers at all. If you’re freezing the weights to save memory, then I’d suggest considering the Adapter framework. The idea of it is, basically, to insert additional trainable layers in between existing frozen layers of a Transformer model. It should help, but there’s no guarantee that the results will be on par with full fine-tuning. Here I assume that you mean fine-tuning an existing pre-trained BERT with the MLM objective. This may help, but it depends on the kind of texts you’re trying to classify. If you have a reason to believe that these texts are noticeably different from the texts that BERT was trained on, then it’s likely to improve the results, although it may hinder its generalization ability. It’s a safe bet to say that just unfreezing the weights will be the most advantageous, so I’d start with that, if it’s an option.
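For completeness, a minimal sketch of option 1 (partially unfreezing), assuming a BertForSequenceClassification model; the choice of 8 frozen layers is arbitrary:
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# freeze the embeddings and the first 8 encoder layers; the last 4 layers and the classifier stay trainable
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")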
1
huggingface
Beginners
Vector similarity in Python representing music notes
https://discuss.huggingface.co/t/vector-similarity-in-python-representing-music-notes/13094
I have the following vector, which represents 5 notes played on a guitar: [(0, 0.06365224719047546), (41, 0.6289597749710083), (42, 0.6319441795349121), (43, 0.632896363735199), (44, 0.631447434425354), (45, 0.6318693161010742), (46, 0.6315509080886841), (47, 0.6318208575248718), (48, 0.6322312355041504), (49, 0.6312702894210815), (50, 0.6237916350364685), (51, 0.630915105342865), (52, 0.6276333928108215), (53, 0.6117454171180725), (54, 0.6141350865364075), (55, 0.6154367923736572), (56, 0.6177382469177246), (57, 0.6182602047920227), (58, 0.617703378200531), (59, 0.6159048080444336), (60, 0.6125935316085815), (61, 0.6102696657180786), (64, 0.5833065509796143), (66, 0.5837067365646362), (67, 0.5833760499954224), (68, 0.5839518904685974), (69, 0.5836052894592285), (70, 0.5835791826248169), (71, 0.5839493274688721), (72, 0.5813934803009033), (76, 0.6141220331192017), (77, 0.614814043045044), (80, 0.6165654063224792), (81, 0.6164389848709106), (82, 0.6160199642181396), (83, 0.610359787940979), (84, 0.6152002215385437), (85, 0.6141528487205505), (87, 0.5446829795837402), (88, 0.5510357022285461), (89, 0.552834153175354), (90, 0.5524792075157166), (91, 0.5520510077476501), (92, 0.5521120429039001), (93, 0.5521101355552673), (94, 0.5524383187294006), (95, 0.552492082118988), (96, 0.5522018074989319), (97, 0.5521131753921509), (98, 0.5523127317428589), (99, 0.5523053407669067), (100, 0.5521847009658813), (101, 0.5523706674575806), (102, 0.5523468852043152), (103, 0.5524667501449585), (104, 0.5524278879165649), (105, 0.552390992641449), (106, 0.5524452328681946), (107, 0.5524633526802063), (108, 0.5524865984916687), (109, 0.5526250600814819), (110, 0.5525843501091003), (111, 0.5524541139602661), (112, 0.5526156425476074), (113, 0.5528975129127502), (114, 0.5524407029151917), (115, 0.5524605512619019), (116, 0.5524886250495911), (117, 0.5525526404380798), (118, 0.5524702072143555), (119, 0.5525854229927063), (120, 0.5523728728294373), (121, 0.5524235963821411), (122, 0.5523437261581421), (123, 0.5518389940261841), (124, 0.5520192384719849), (125, 0.5523939728736877), (126, 0.5523043870925903), (127, 0.5532050132751465), (132, 0.5515548586845398), (134, 0.5523167252540588), (135, 0.5519833564758301), (136, 0.5524169206619263), (137, 0.5527742505073547), (138, 0.5523315668106079), (139, 0.5523473024368286), (140, 0.5532975196838379), (141, 0.5522792935371399), (142, 0.5503222942352295)] It has 89 elements. Visual spectrogram-like representation looks like: vv111280×642 19.5 KB Y-axis is basically pitch. And X-axis is time. And I have another vector, which represents exactly the same notes, played on another acoustic guitar, and the play is quite similar if you listen to it. 
The vector looks like (94 elements this time): [(31, 0.13769060373306274), (39, 0.15499019622802734), (40, 0.16191327571868896), (43, 0.16355487704277039), (59, 0.6356481313705444), (60, 0.634376585483551), (63, 0.6343578100204468), (64, 0.6335580945014954), (65, 0.6335859894752502), (66, 0.6335384845733643), (67, 0.6334232091903687), (68, 0.6339468955993652), (69, 0.630445122718811), (72, 0.6184465885162354), (73, 0.6183992028236389), (74, 0.6181117296218872), (75, 0.6186220049858093), (76, 0.6186297535896301), (77, 0.6185297966003418), (78, 0.618561327457428), (79, 0.618633508682251), (80, 0.6185418963432312), (81, 0.6184455752372742), (82, 0.6117323040962219), (83, 0.6014747619628906), (85, 0.39688345789909363), (90, 0.5867741107940674), (91, 0.5872393250465393), (92, 0.586899995803833), (93, 0.5866436958312988), (94, 0.5866578817367554), (95, 0.5864415168762207), (96, 0.5868685245513916), (97, 0.586477518081665), (104, 0.6182103157043457), (105, 0.6182997226715088), (106, 0.6188161969184875), (107, 0.618650496006012), (108, 0.6187460422515869), (109, 0.6181941628456116), (110, 0.6184064149856567), (111, 0.6148801445960999), (114, 0.5523382425308228), (115, 0.5557005405426025), (116, 0.5558828711509705), (118, 0.554828405380249), (119, 0.554919958114624), (120, 0.55497145652771), (121, 0.5546750426292419), (122, 0.5545178651809692), (123, 0.5545105338096619), (124, 0.5545050501823425), (125, 0.5544342994689941), (126, 0.5545479655265808), (127, 0.5543638467788696), (128, 0.5543714165687561), (129, 0.5545525550842285), (130, 0.5545058846473694), (131, 0.5547449588775635), (132, 0.5546910166740417), (133, 0.5545474290847778), (134, 0.5546845197677612), (135, 0.5546503663063049), (136, 0.5545172691345215), (137, 0.5548205971717834), (138, 0.5546956062316895), (139, 0.5547483563423157), (140, 0.5544265508651733), (141, 0.554632306098938), (142, 0.5543283820152283), (143, 0.5546634197235107), (144, 0.5543924570083618), (145, 0.5543931722640991), (146, 0.5547248721122742), (147, 0.5549289584159851), (148, 0.5547417998313904), (149, 0.5546922087669373), (150, 0.5545686483383179), (151, 0.5547193884849548), (152, 0.5548165440559387), (153, 0.5544684529304504), (154, 0.5549207329750061), (155, 0.5548054575920105), (156, 0.5541093945503235), (157, 0.554355800151825), (158, 0.5545284152030945), (159, 0.5548104643821716), (160, 0.5544529557228088), (161, 0.5541993379592896), (162, 0.5540767312049866), (163, 0.5550277829170227), (164, 0.5545808672904968), (177, 0.1516854166984558), (182, 0.15804383158683777)] Visual representation is: (see my first comment, since website allows only 1 attachment for new users) As you can see, the data looks similar, and there is a little bit of noise. Time-wise, there is shift at the beginning (starts at 60, while the first one starts at 40). And some notes have been played a little bit (100-200ms) longer than others. But if you look to Y-axis, it’s pretty much the same, because note frequencies are the same. I am looking for the way to compare these vectors to find out their similarity, which to my subjective judgement is around 95%. I tried to invent my own algos, but I’m getting nowhere currently. Would really appreciate any pointers, what I’m looking for is a function that for given 2 vectors returns similarity value from 0 to 1.
Visual representation of a second vector: (plot omitted; same layout as the first, with pitch on the Y-axis and time on the X-axis)
0
huggingface
Beginners
Is last_hidden_state the output of Encoder block?
https://discuss.huggingface.co/t/is-last-hidden-state-the-output-of-encoder-block/13084
When we use BertModel.forward(), is the last_hidden_state the output of the encoder block of the Transformer?
Yes! It’s a tensor of shape (batch_size, seq_len, hidden_size).
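A quick way to check this yourself (bert-base-uncased is just an example checkpoint):
from transformers import AutoTokenizer, BertModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

outputs = model(**tokenizer("a short sentence", return_tensors="pt"))
print(outputs.last_hidden_state.shape)  # (batch_size, seq_len, hidden_size) = (1, seq_len, 768)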
1
huggingface
Beginners
Replacing last layer of a fine-tuned model to use different set of labels
https://discuss.huggingface.co/t/replacing-last-layer-of-a-fine-tuned-model-to-use-different-set-of-labels/12995
I’m trying to fine-tune dslim/bert-base-NER using the wnut_17 dataset. Since the number of NER labels is different, I manually replaced these parameters in the model to get rid of the size mismatch error : model.config.id2label = my_id2label model.config.label2id = my_label2id model.config._num_labels = len(my_id2label) ## replacing 9 by 13 However, when training starts I get the following error which I don’t know how to handle: Expected input batch_size (1456) to match target batch_size (1008). Has anyone handled this manually? @sgugger @phosseini Won’t it be great if we can have a solid function that handles the head replacements for fine-tuning. Shapes: tokenized_wnut[‘train’].shape = (3394, 7) tokenized_wnut[‘validation’].shape = (1009, 7) Model config after “manual” modifications: BertConfig { "_name_or_path": "dslim/bert-base-NER", "_num_labels": 13, "architectures": [ "BertForTokenClassification" ], "attention_probs_dropout_prob": 0.1, "classifier_dropout": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "O", "1": "B-corporation", "2": "I-corporation", "3": "B-creative-work", "4": "I-creative-work", "5": "B-group", "6": "I-group", "7": "B-location", "8": "I-location", "9": "B-person", "10": "I-person", "11": "B-product", "12": "I-product" }, "initializer_range": 0.02, "intermediate_size": 3072, "label2id": { "B-corporation": 1, "B-creative-work": 3, "B-group": 5, "B-location": 7, "B-person": 9, "B-product": 11, "I-corporation": 2, "I-creative-work": 4, "I-group": 6, "I-location": 8, "I-person": 10, "I-product": 12, "O": 0 }, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "output_past": true, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.14.1", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 }
Thank you @nielsr for being responsive. That error is resolved now, but the question is "does simply changing the number of labels mean that we have changed the classifier head?!" By the way, for my problem, I had to do these modifications: model_name = "dslim/bert-base-NER" mymodel = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(my_id2label), ignore_mismatched_sizes=True) ... mymodel.config.id2label = my_id2label mymodel.config.label2id = my_label2id mymodel.config._num_labels = len(my_id2label) ## replacing 9 by 13 mymodel.config.num_labels = len(my_id2label)
1
huggingface
Beginners
Key Error ‘loss’ - finetuning [ arabert , mbert ]
https://discuss.huggingface.co/t/key-error-loss-finetuning-arabert-mbert/13052
Hello all! I am trying to fine-tune mbert and arabert models on translation task as explained here, however, I am getting this error Key Error ‘loss’ the input to the model : DatasetDict({ train: Dataset({ features: ['attention_mask', 'input_ids', 'labels', 'src', 'token_type_ids', 'trg'], num_rows: 40 }) validation: Dataset({ features: ['attention_mask', 'input_ids', 'labels', 'src', 'token_type_ids', 'trg'], num_rows: 11 }) }) Note: when I run trainer.train() , it returns a message like The following columns in the training set don't have a corresponding argument in BertModel.forward and have been ignored: trg, labels, src. any hints?
You can’t use the Trainer on BertModel, as it has no objective: it’s the body of the model with no particular head. You should pick a model with a head suitable for the task at hand.
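For translation with BERT-style checkpoints, one option is to warm-start an encoder-decoder model. A hedged sketch (the mBERT checkpoint choice is an assumption; an AraBERT checkpoint could be plugged in the same way):
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-multilingual-cased", "bert-base-multilingual-cased"
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# unlike BertModel, this model returns a loss when labels are passed, so the Trainer can train it
batch = tokenizer("a source sentence", return_tensors="pt")
labels = tokenizer("a target sentence", return_tensors="pt").input_ids
print(model(**batch, labels=labels).loss)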
0
huggingface
Beginners
Predicting On New Text With Fine-Tuned Multi-Label Model
https://discuss.huggingface.co/t/predicting-on-new-text-with-fine-tuned-multi-label-model/13046
This is very much a beginner question. I am trying to load a fine tuned model for multi-label text classification. Fine-tuned on 11 labels, bert-base-uncased. I just want to feed new text to the model and get the labels predicted to be associated with the text. I have looked everywhere and cannot find an example of how to actually load and use a fine-tuned model on new data after fine tuning is complete. I have the model saved via: model.save_pretrained("fine_tuned_model") I can load the model back with: model = AutoModelForSequenceClassification.from_pretrained("fine_tuned_model", from_tf=False, config=config) From here I am stuck. I have been told to use model.predict("text") but when I do I get the following error: 'BertForSequenceClassification' object has no attribute 'predict' I hope this makes sense and any help would be greatly appreciated.
To get all scores, the pipeline has a parameter: clf("text", return_all_scores=True). For the label being LABEL_7, you need to check out the config.json in your repo. See for example id2label and label2id in config.json · distilbert-base-uncased-finetuned-sst-2-english at main 1.
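If you prefer to stay with the raw model instead of a pipeline, here is a sketch for multi-label inference; the 0.5 threshold is an assumption, and for multi-label you want a sigmoid per label rather than a softmax:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("fine_tuned_model")  # assumes the tokenizer was saved there too
model = AutoModelForSequenceClassification.from_pretrained("fine_tuned_model")
model.eval()

inputs = tokenizer("some new text to classify", truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # shape (1, 11)
probs = torch.sigmoid(logits)[0]
predicted_labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted_labels)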
1
huggingface
Beginners
Deployed GPT-2 models vs “Model Card” question
https://discuss.huggingface.co/t/deployed-gpt-2-models-vs-model-card-question/12961
Hi, I’ve noticed a difference in performance between a GPT-2 trained conversational chat bot deployed as a Discord chat bot and its “Model Card” page. (Not sure if that is the correct term.) huggingface.co Jonesy/DialoGPT-medium_Barney · Hugging Face When you repeat input (e.g. “Hi”), Barney says something different every time. In the Discord version, he replies the same way every time. Ideally the Discord bot would behave the same way as his Model Card. Thoughts? THANKS!!!
Hello, The inference widget and your bot in Discord might be using different temperature and sampling parameters (here’s a great blog post if you’re interested, btw); at least this is my guess.
0
huggingface
Beginners
HOW TO determine the best threshold for predictions when making inference with a finetune model?
https://discuss.huggingface.co/t/how-to-determine-the-best-threshold-for-predictions-when-making-inference-with-a-finetune-model/13001
Hello, I fine-tuned a model but the F-score is not very good for certain classes. To avoid a lot of false positives I decided to set a threshold on the probabilities, and I would like to know how to determine the best threshold. Should I use the mean, the median, or just look at the accuracy of the model on the test data?
Hi, The best way to determine the threshold is to compute the true positive rate (TPR) and false positive rate (FPR) at different thresholds, and then plot the so-called ROC-curve. The ROC curve plots, for every threshold, the corresponding true positive rate and false positive rate. Then, selecting the point (i.e. threshold) that is most to the top left of the curve will yield the best balance among the two. Sklearn provides an implementation 2 of this, however it’s for binary classification only. Note that there are extensions 1 for multiclass classification.
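A small sketch of that procedure with scikit-learn; y_true and y_score are placeholders for your validation labels and predicted positive-class probabilities, and picking the threshold that maximizes TPR - FPR is the usual Youden's J criterion:
import numpy as np
from sklearn.metrics import roc_curve

# y_true: binary ground-truth labels, y_score: predicted probability of the positive class
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best_idx = np.argmax(tpr - fpr)          # Youden's J: the point closest to the top-left corner
best_threshold = thresholds[best_idx]
print(best_threshold)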
0
huggingface
Beginners
Which model can use to pre-train a BERT model?
https://discuss.huggingface.co/t/which-model-can-use-to-pre-train-a-bert-model/13027
I am going to pre-train a BERT model on a specific dataset aiming for sentiment analysis. To self-train the model, which method will be better to use: Masked Language Modeling or Next Sentence Prediction? Or maybe there is no specific answer.
Choosing depends on what you want to do. Using masked language modeling is good when you want good representations of the data on which it was trained. Next sentence prediction, or rather causal language modeling (such as GPT), is better when you want to focus on generation. The course has a section on how to fine-tune a masked language model that could be interesting to you: Main NLP tasks - Hugging Face Course 5.
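If you go the masked language modeling route, the core of the setup is just a data collator that masks tokens on the fly. A minimal sketch, assuming tokenized_reviews is your already-tokenized in-domain corpus:
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-domain-adapted", num_train_epochs=1),
    train_dataset=tokenized_reviews,   # hypothetical: your tokenized review corpus
    data_collator=collator,
)
trainer.train()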
1
huggingface
Beginners
Which parameter is causing the decrease in Learning rate every epoch?
https://discuss.huggingface.co/t/which-parameter-is-causing-the-decrease-in-learning-rate-every-epoch/13015
Hey, I have been trying to train my model on mnli and the learning rate seems to keep decreasing for no reason. Can someone help me? - train_args = TrainingArguments( output_dir=f'./resultsv3/output', logging_dir=f'./resultsv3/output/logs', learning_rate=3e-6, per_device_train_batch_size=4, per_device_eval_batch_size=4, num_train_epochs=4, load_best_model_at_end=True, metric_for_best_model="accuracy", fp16=True, fp16_full_eval=True, evaluation_strategy="epoch", save_strategy = "epoch", save_total_limit=5, logging_strategy="epoch", report_to="all") def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model=model, tokenizer=tokenizer, args=train_args, data_collator=data_collator, train_dataset=encoded_dataset_train, eval_dataset=encoded_dataset_test, compute_metrics=compute_metrics ) which parameter is causing the decrease in Learning rate every epoch?
The learning_rate parameter is just the initial learning rate, but it is usually changed during training. You can find the default values of TrainingArguments at Trainer. You can see that lr_scheduler_type is linear by default. As specified in its documentation (Optimization), linear creates a schedule with a learning rate that decreases linearly from the initial learning rate after an initial warmup period.
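If you want the learning rate to stay fixed, you can set the scheduler explicitly; a sketch with only the relevant arguments shown:
from transformers import TrainingArguments

train_args = TrainingArguments(
    output_dir="./resultsv3/output",
    learning_rate=3e-6,
    lr_scheduler_type="constant",   # no decay; "constant_with_warmup" is another option
    num_train_epochs=4,
)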
0
huggingface
Beginners
Batch size vs gradient accumulation
https://discuss.huggingface.co/t/batch-size-vs-gradient-accumulation/5260
Hi, I have a basic theoretical question. Which one is better for the model and GPU usage? First option: --per_device_train_batch_size 8 --gradient_accumulation_steps 2 Second option: --per_device_train_batch_size 16
Using gradient accumulation loops over your forward and backward pass (the number of steps in the loop being the number of gradient accumulation steps). A for loop over the model is less efficient than feeding more data to the model, as you’re not taking advantage of the parallelization your hardware can offer. The only reason to use gradient accumulation steps is when your whole batch size does not fit on one GPU, so you pay a price in terms of speed to overcome a memory issue.
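As a quick sanity check, both options give the same effective batch size per optimizer step (assuming a single GPU):
per_device_train_batch_size = 8
gradient_accumulation_steps = 2
num_gpus = 1  # assumption
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 16, identical to --per_device_train_batch_size 16 without accumulation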
1
huggingface
Beginners
Is it possible to do inference on gpt-j-6B via Colab?
https://discuss.huggingface.co/t/is-it-possible-to-do-inference-on-gpt-j-6b-via-colab/13007
When I use the pipeline API, it crashes Colab with an out of memory error (fills 25.5GB of RAM). I think it should be possible to do the inference on TPUv2? But how do I tell the pipeline to start using the TPUs from the start? from transformers import pipeline model_name = 'EleutherAI/gpt-j-6B' generator = pipeline('text-generation', model=model_name) out = generator("I am Harry Potter.", do_sample=True, min_length=50)
Hi, Inference is only possible on Colab Pro. You can check my notebook here 3 for more info.
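If you do have enough RAM (e.g. Colab Pro with a high-RAM runtime), loading the half-precision weights roughly halves the memory footprint. A hedged sketch, assuming the float16 revision of the checkpoint is available on the Hub:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",          # assumption: fp16 weights branch on the Hub
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to("cuda")

inputs = tokenizer("I am Harry Potter.", return_tensors="pt").to("cuda")
out = model.generate(**inputs, do_sample=True, min_length=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))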
1
huggingface
Beginners
Different results predicting from trainer and model
https://discuss.huggingface.co/t/different-results-predicting-from-trainer-and-model/12922
Hi, I’m training a simple classification model and I’m experiencing an unexpected behaviour: When the training ends, I predict with the model loaded at the end with: predictions = trainer.predict(tokenized_test_dataset) list(np.argmax(predictions.predictions, axis=-1)) and I obtain predictions which match the accuracy obtained during the training (the model loaded at the end of the training is the best of the training; I’m using load_best_model_at_end=True). However, if I load the model from the checkpoint (the best one) and get predictions with: logits = model(model_inputs) probabilities = torch.nn.functional.softmax(logits.logits, dim=-1) predictions = torch.argmax(probabilities, axis=1) I get predictions which are slightly different from the previous ones and do not match the accuracy of the training. So, is there anything I’m missing? Shouldn’t these predictions be exactly equal? Any help would be appreciated!
It’s hard to know where the problem lies without seeing the whole code. It could be that your model_inputs are defined differently than in the tokenized_test_dataset for instance.
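Two frequent culprits are dropout left on and preprocessing that differs from the evaluation set. A sketch of inference that mirrors what trainer.predict does; the checkpoint path, test texts and max_length are placeholders and must match whatever you used at training time:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "output/checkpoint-best"                       # hypothetical path to the best checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)
model.eval()                                          # disables dropout, like Trainer does for prediction

texts = ["first test example", "second test example"]  # placeholders
model_inputs = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    logits = model(**model_inputs).logits
predictions = logits.argmax(dim=-1)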
0
huggingface
Beginners
GPT-2 trained models output repeated “!”
https://discuss.huggingface.co/t/gpt-2-trained-models-output-repeated/12962
Hello, separate question from my last post. Same example model: huggingface.co Jonesy/DialoGPT-medium_Barney · Hugging Face When I repeat the same input on my models, they appear to have a nervous breakdown (repeated exclamation marks over and over, it is a little disturbing to be honest ) I have noticed this with 2 models now. Any thoughts on how I can fix this? Thank you for your time!
Hello, I feel like you can make use of temperature parameter when inferring to avoid repetition and put more randomness to your conversations. I found a nice model card showing how to infer 1 with DialoGPT. Hope it helps. There’s a nice blog post by Patrick that explains generative models. from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelWithLMHead.from_pretrained(model_checkpoint) for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generated a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last ouput tokens from bot print("Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
0
huggingface
Beginners
Trainer.save_pretrained(modeldir) AttributeError: ‘Trainer’ object has no attribute ‘save_pretrained’
https://discuss.huggingface.co/t/trainer-save-pretrained-modeldir-attributeerror-trainer-object-has-no-attribute-save-pretrained/12950
I am trying to save a model during finetuning but I get this error ? trainer, outdir = prepare_fine_tuning(PRE_TRAINED_MODEL_NAME, train_dataset, val_dataset, tokenizer, sigle, train_name, elt_train.name) trainer.train() trainer.evaluate() #trainer.save_model(modeldir) trainer.save_pretrained(modeldir) tokenizer.save_pretrained(modeldir) trainer.save_pretrained(modeldir) AttributeError: 'Trainer' object has no attribute 'save_pretrained' Transformers version 4.8.0
I don’t know where you read that code, but Trainer does not have a save_pretrained method. Check out the documentation 3 for a list of its methods!
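For reference, a minimal sketch of the saving calls that do exist (modeldir is your own output path):
trainer.save_model(modeldir)            # Trainer method: saves model weights + config
tokenizer.save_pretrained(modeldir)     # tokenizer files
# equivalently, through the wrapped model:
trainer.model.save_pretrained(modeldir)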
1
huggingface
Beginners
NER fine-tuning
https://discuss.huggingface.co/t/ner-fine-tuning/12980
Hi, I want to fine-tune a NER model on my own dataset and entities. Is that possible? If yes, how? Thanks in advance.
Hello, You can take a look at token classification notebooks here 3 for guidance. There’s also course chapter on token classification. 5
0
huggingface
Beginners
Log training accuracy using Trainer class
https://discuss.huggingface.co/t/log-training-accuracy-using-trainer-class/5529
Hello, I am running BertForSequenceClassification and I would like to log the accuracy, as well as other metrics that I have already defined, for my training set. I saw in another issue that I have to add a self.evaluate(self.train_dataset) somewhere in the code, but I am a beginner when it comes to Python and deep learning in general, so I am not sure where exactly I have to include it. I was trying to replicate the evaluate() method of the Trainer class, taking the train_dataset as an argument, but it did not work. It would really mean a lot if you could guide me as to where I should tweak the code! Thank you for your help!
Hi, I am having a similar issue. I used the Trainer similar to the example and all I see in the output is the training loss; I don’t see any training accuracy. I wonder if you found out how to log accuracy. Thanks.
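One way to get train-set metrics without touching the Trainer internals is to evaluate on the training set explicitly after (or during) training; a sketch, assuming compute_metrics is already set on the Trainer and small_train_dataset is your training split:
train_metrics = trainer.evaluate(eval_dataset=small_train_dataset, metric_key_prefix="train")
print(train_metrics)   # e.g. {"train_loss": ..., "train_accuracy": ...}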
0
huggingface
Beginners
How can I get the last value of the tensor token obtained from model.generate?
https://discuss.huggingface.co/t/how-can-i-get-the-last-value-of-the-tensor-token-obtained-from-model-generate/12937
tensor([[ 2337, 2112, 387, 3458, 385, 378, 1559, 379, 1823, 1674, 427, 547, 2158, 803, 3328, 26186, 409, 1751, 4194, 971, 395, 1591, 418, 1670, 4427, 2518, 107, 978, 461, 758, 463, 418, 549, 402, 1959, 393, 499, 409, 17263, 792]], device='cuda:0') What I want is to get the last value of 792. But, I can’t get it… How can I get it ?
I found a solution. I was able to do it with token[0][-1] (indexing). I guess I should study tensors.
1
huggingface
Beginners
Question answering bot: fine-tuning with custom dataset
https://discuss.huggingface.co/t/question-answering-bot-fine-tuning-with-custom-dataset/4412
Hello everybody I would like to fine-tune a custom QAbot that will work on italian texts (I was thinking about using the model ‘dbmdz/bert-base-italian-cased’) in a very specific field (medical reports). I already followed this guide 7 and fine-tuned an english model by using the default train and dev file. The problem is that now I’m trying to use my own files (formatted in SQuaD 2.0), but I’m not able to perform the same operations. This is my code: datasets = load_dataset('json', data_files='/content/SQuAD_it-train.json', field='data') Instead of getting something like this… DatasetDict({ train: Dataset({ features: [‘id’, ‘title’, ‘context’, ‘question’, ‘answers’], num_rows: 130319 }) validation: Dataset({ features: [‘id’, ‘title’, ‘context’, ‘question’, ‘answers’], num_rows: 11873 }) }) …I get this: DatasetDict({ train: Dataset({ features: [‘title’, ‘paragraphs’], num_rows: 442 }) }) I tried the same command with the train-v2.0.json file downloaded from the official SQuaD website… datasets = load_dataset('json', data_files='/content/dev-v2.0.json', field='data') …and this is what I got: DatasetDict({ train: Dataset({ features: [‘title’, ‘paragraphs’], num_rows: 442 }) }) So I’m assuming that this is not related to the file format but maybe with some parameter of the function load_dataset? Thanks a lot for you attention Claudio
Hi @Neuroinformatica, from the datasets docs 3 it seems that the ideal format is line-separated JSON, so what I usually do is convert the SQuAD format as follows: import json from datasets import load_dataset input_filename = "dev-v2.0.json" output_filename = "dev-v2.0.jsonl" with open(input_filename) as f: dataset = json.load(f) with open(output_filename, "w") as f: for article in dataset["data"]: title = article["title"] for paragraph in article["paragraphs"]: context = paragraph["context"] answers = {} for qa in paragraph["qas"]: question = qa["question"] idx = qa["id"] answers["text"] = [a["text"] for a in qa["answers"]] answers["answer_start"] = [a["answer_start"] for a in qa["answers"]] f.write( json.dumps( { "id": idx, "title": title, "context": context, "question": question, "answers": answers, } ) ) f.write("\n") ds = load_dataset("json", data_files=output_filename) This converts each article in the SQuAD dataset into a single JSON object of the form { "id":"56ddde6b9a695914005b9628", "title":"Normans", "context":"The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse (\"Norman\" comes from \"Norseman\") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.", "question":"In what country is Normandy located?", "answers":{ "text":[ "France", "France", "France", "France" ], "answer_start":[ 159, 159, 159, 159 ] } } which is then well suited for the Arrow columnar format of datasets. HTH!
0
huggingface
Beginners
Does Trainer prefetch data?
https://discuss.huggingface.co/t/does-trainer-prefetch-data/12777
Hi everyone, I’m pretty new to this. I’m trying to train a transformer model on a GPU using transformers.Trainer. I’m doing my prototyping at home on a Windows 10 machine with a 4-core CPU with a 1060 gtx. I have my data, model, and trainer all set up, and my dataset is of type torch.utils.data.Dataset. Based on what I see in the task manager, it looks like Trainer might not be prefetching data to keep the GPU busy at all times. Here’s what I see during training: As you can see, GPU usage maxes out around 55%, and cycles down to 0% regularly during training. I can iterate through my dataset around 10x faster than it takes for Trainer to train the model for one epoch, so I don’t think data loading is the bottleneck. So, any ideas why I am seeing this type of behavior with my GPU? Is Trainer not prefetching the data for each training step, or is it some other issue with my code?
In case anyone else is wondering about this, I figured it out. Trainer indeed appears to prefetch data. The problem was that my data loader was too slow to keep up with the GPU. After optimizing my data loading routine, I’m able to keep the GPU busy constantly.
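For anyone hitting the same thing: data loading speed can be tuned directly from TrainingArguments; a sketch of the relevant knobs (values are just examples):
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    dataloader_num_workers=4,     # worker processes prepare the next batches while the GPU trains
    dataloader_pin_memory=True,   # default; speeds up host-to-GPU copies
)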
1
huggingface
Beginners
Problems and solution on Trainer
https://discuss.huggingface.co/t/problems-and-solution-on-trainer/11498
I am using the Trainer to train an ASR model; the dataset and the output dimension are huge, which causes some problems during training. I struggled with this for many days, so I post my solutions here, hoping they can help. 1. compute_metrics out-of-memory issue: during compute_metrics, all the logits are saved in an array; when the output dimension is large, this easily causes out-of-memory on a large dataset. The solution is to apply torch.argmax on the logits first to avoid saving all the data. 2. When using the Trainer on a seq2seq model, if the model output contains past_key_values, it causes a length error when merging different outputs, so past_key_values needs to be dropped from the model output. 3. group_by_length takes a very long processing time when training starts, and it uses a lot of memory to calculate the length of the data.
Note that for 3, you can have it computed once and for all in your datasets.Dataset with the map method; you just have to store the results in a "lengths" column. It will then use the Dataset features and not try to access every element.
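A sketch of that approach; the column name passed to the Trainer is an assumption (by default the Trainer looks for a column called "length"):
def add_length(example):
    return {"length": len(example["input_ids"])}

dataset = dataset.map(add_length)      # computed once and cached by datasets

from transformers import TrainingArguments
args = TrainingArguments(
    output_dir="out",
    group_by_length=True,
    length_column_name="length",
)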
1
huggingface
Beginners
Can i export a VisionEncoderDecoder checkpoint to onnx
https://discuss.huggingface.co/t/can-i-export-a-visionencoderdecoder-checkpoint-to-onnx/12885
Can I export any Hugging Face checkpoint to ONNX? If not, how do I go about it? @nielsr I want to export a TrOCR checkpoint to ONNX, is it possible? I tried doing the same with a fine-tuned checkpoint of mine and it gives a KeyError: KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_decoder.VisionEncoderDecoderConfig'> When I do the same using the “microsoft/trocr-large-printed” checkpoint, I get an exception (screenshot omitted). Do you know what I might be missing? Is it possible, and if not, how do I go about it?
Pinging @lewtun here, to let him know people are also interested in exporting EncoderDecoder model classes to ONNX.
0
huggingface
Beginners
How do I fine-tune roberta-large for text classification
https://discuss.huggingface.co/t/how-do-i-fine-tune-roberta-large-for-text-classification/12845
Hi there, I have been doing the HF course and decided to apply what I have learned, but I have unfortunately encountered some errors at the model.fit() stage. I extracted BBC text data as an Excel file from Kaggle and converted it to a DatasetDict as below: (screenshot) Loaded the tokenizer and tokenized the text features: (screenshot) Train/test/val split: (screenshot) Converted my data to tf_data, padded with DataCollator and instantiated the model: (screenshot) Optimizer and compile: (screenshot) Getting the error at the below stage: (screenshot) Not sure what I am doing wrong here as I tried following the steps in the course, thanks in advance.
Hi @nickmuchi, the key is in the warning that pops up when you compile()! When you compile without specifying a loss, the model will compute loss internally. For this to work, though, the labels need to be in your input dict. We talk about this in the Debugging your Training Pipeline 1 section of the course. There are two solutions here. One is to change your calls to to_tf_dataset(). Instead of columns=["attention_mask", "input_ids"], label_cols=["labels"] do columns=["attention_mask", "input_ids", "labels"], This will put your labels in the input dict, and the model will be able to compute a loss in the forward pass. This is simple, but might cause issues with your accuracy metric. The alternative option is to leave the labels where they are, but instead to use a proper Keras loss. In that case, you would leave the call to to_tf_dataset() unchanged, but change your compile() call to model.compile( optimizer=optimizer, metrics=['accuracy'], loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) ) That should work, and will allow you to keep using the accuracy metric too. Let me know if you encounter any other problems!
1
