docs        stringclasses    4 values
category    stringlengths    3 – 31
thread      stringlengths    7 – 255
href        stringlengths    42 – 278
question    stringlengths    0 – 30.3k
context     stringlengths    0 – 24.9k
marked      int64            0 – 1
huggingface
Research
Resume Training / Finetune a language model and further finetune a classifier
https://discuss.huggingface.co/t/resume-training-finetune-a-language-model-and-further-finetune-a-classifier/1616
Hi, I would like to fine-tune a powerful classifier based on a pre-trained language model. As we know, the typical approach is to fine-tune a classifier on top of a pre-trained model. What I am wondering is: if I first fine-tune a pre-trained model with a language-modeling objective on DS1 (a typical text dataset) (or resume training from the last checkpoint), and then further fine-tune this newly fine-tuned model as a classifier on another dataset DS2, would this be redundant compared to a pipeline that just fine-tunes the pre-trained model on DS2? I would like to hear your thoughts. Thank you.
Hi, there are indeed papers indicating that “multi-step” fine-tuning is helpful. See this paper for one example.
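A minimal sketch of the two-step pipeline being discussed (continued masked-language-model training on DS1, then classifier fine-tuning on DS2). The toy datasets, model name, and hyperparameters are placeholders for illustration, not something prescribed in the thread:

    from datasets import Dataset
    from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                              AutoModelForSequenceClassification,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    # Toy stand-ins for DS1 (unlabeled text) and DS2 (labeled text); replace with real data.
    ds1 = Dataset.from_dict({"text": ["some unlabeled domain text", "more domain text"]})
    ds2 = Dataset.from_dict({"text": ["great product", "terrible product"], "labels": [1, 0]})

    def tok(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)

    ds1_tok = ds1.map(tok, batched=True, remove_columns=["text"])
    ds2_tok = ds2.map(tok, batched=True, remove_columns=["text"])

    # Step 1: continue masked-language-model training on DS1.
    lm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    lm_trainer = Trainer(
        model=lm_model,
        args=TrainingArguments(output_dir="lm-ds1", num_train_epochs=1, per_device_train_batch_size=2),
        train_dataset=ds1_tok,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
    )
    lm_trainer.train()
    lm_trainer.save_model("lm-ds1")
    tokenizer.save_pretrained("lm-ds1")

    # Step 2: fine-tune a classifier on DS2, starting from the DS1-adapted weights.
    clf_model = AutoModelForSequenceClassification.from_pretrained("lm-ds1", num_labels=2)
    clf_trainer = Trainer(
        model=clf_model,
        args=TrainingArguments(output_dir="clf-ds2", num_train_epochs=1, per_device_train_batch_size=2),
        train_dataset=ds2_tok,
    )
    clf_trainer.train()

The comparison baseline from the question would simply skip step 1 and load "bert-base-uncased" directly in step 2.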
0
huggingface
Research
GPT2 for QA Pair Generation
https://discuss.huggingface.co/t/gpt2-for-qa-pair-generation/759
I was wondering if it were possible to somehow train GPT2 to generate question-answer pairs in a particular domain?
I’ve tried this with seq2seq models. I have worked on QA pair generation (separately) using T5 with decent results; you can find it here. One way we can do this with GPT-2 is to prepare our input like this: say the context is “42 is the answer to life, the universe and everything”, the answer is “42”, and the target question is “What is the answer to life, the universe and everything?”. Then the input text becomes: context: 42 is the answer to life, the universe and everything. question: What is the answer to life, the universe and everything? answer: 42. Prepare the attention mask so that there is no attention from the question: ... part, so the model won’t look into future tokens, and calculate the loss only on the question: ... part. At inference time we feed only the context part and ask the model to generate the question. This is just one way I can think of off the top of my head. Feel free to correct me if this is wrong.
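A rough sketch of that kind of preprocessing, assuming the usual causal-LM fine-tuning convention where positions labeled -100 are ignored by the loss. The prompt is reordered slightly so the question/answer comes after the context (which makes left-to-right generation straightforward); the exact format is just illustrative:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prefix = "context: 42 is the answer to life, the universe and everything. "
    target = "question: What is the answer to life, the universe and everything? answer: 42"

    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    target_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, target_ids], dim=1)

    # Compute the LM loss only on the question/answer continuation: positions
    # covering the context get label -100, which the loss ignores.
    labels = input_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100

    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()  # gradients for one toy training step

    # At inference time, feed only the context and let the model generate the rest.
    generated = model.generate(prefix_ids,
                               max_length=prefix_ids.shape[1] + 30,
                               pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(generated[0]))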
0
huggingface
Research
Transformer for Abstractive Summarization for Chats Based on Performance
https://discuss.huggingface.co/t/transformer-for-abstractive-summarization-for-chats-based-on-performance/731
Hi, I’ve some general questions related to transfer learning on pretrained models for the summarization problem. I’ve been trying to engineer a Seq2Seq model for summarizing chats between two user agents. I’ve tried the T5 model (pretrained and with transfer learning), but the results were not satisfactory: the summarized text missed the context entirely after training on the custom dataset. Can someone please help me understand which model works better for summarizing chats, or any pre-processing that should precede this? Thanks in advance.
Hi @anant0308 ! Happy to discuss possible approaches, but what works best (and whether you can expect good results at all) will depend on what your fine-tuning data looks like: for example, how long are the chats? do you have any gold summaries for your chats? do you have examples of summaries without corresponding chats? how many examples do you have? how are you representing speaker turns? Keep in mind that summarizing chats is quite a different task from summarizing news text: if the pre-training data lacks any kind of dialogue inputs, then the model will have to learn how to interpret multi-turn structure from scratch, which will probably be your main challenge.
0
huggingface
Research
Evaluation metrics for BERT-like LMs
https://discuss.huggingface.co/t/evaluation-metrics-for-bert-like-lms/1256
Hey guys, I’ve read that Perplexity (PPL) is one of the most common metrics for evaluating autoregressive and causal language models. But what do we use for MLMs like BERT? I need to evaluate BERT models after pre-training and compare them to existing BERT models without going through downstream task GLUE-like benchmarks. Best, Vladimir
I found an interesting project, https://github.com/awslabs/mlm-scoring, which seems to be a step in the right direction. The authors also published a paper: https://arxiv.org/pdf/1910.14659v2.pdf
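For context, the idea behind MLM scoring is to mask each token in turn and sum the log-probabilities the model assigns to the true tokens (a pseudo-log-likelihood). A minimal hand-rolled sketch of that idea with a Hugging Face MLM, not the awslabs implementation itself:

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    model.eval()

    def pseudo_log_likelihood(sentence):
        ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
        total = 0.0
        # Mask one position at a time (skipping [CLS] and [SEP]) and score the true token.
        for i in range(1, ids.shape[0] - 1):
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            with torch.no_grad():
                logits = model(masked.unsqueeze(0)).logits
            log_probs = torch.log_softmax(logits[0, i], dim=-1)
            total += log_probs[ids[i]].item()
        return total

    print(pseudo_log_likelihood("The cat sat on the mat."))

Lower (more negative) totals indicate sentences the model finds less likely; the mlm-scoring paper discusses normalizations and caveats of this quantity.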
0
huggingface
Research
Obtaining BERT-base from BERT-large
https://discuss.huggingface.co/t/obtaining-bert-base-from-bert-large/1288
So I want to extract (prune) BERT-large such that I get BERT-base fairly. Initially I performed random pruning (near to 110M param count) on BERT-large but it didn’t seem to work well. L1 pruning seemed to work (nearly 131M param), but it doesn’t seem fair. Pre-training seems like a big hurdle given that there are some ambiguities on how to go about it. Please let me know if you’ve any suggestions on getting BERT-base fairly from BERT-large.
Have you tried distilling it? https://medium.com/huggingface/distilbert-8cf3380435b5. Why would you expect pruning to work? (Why do you want to extract bert-base from bert-large?)
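For reference, the L1 (magnitude) pruning mentioned above can be sketched with PyTorch's pruning utilities. Note that unstructured pruning only zeroes out the smallest-magnitude weights; it does not shrink the parameter count or turn BERT-large into BERT-base:

    import torch.nn as nn
    import torch.nn.utils.prune as prune
    from transformers import BertModel

    model = BertModel.from_pretrained("bert-large-uncased")

    # Remove the 30% smallest-magnitude weights (L1 criterion) from every linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # make the zeros permanent

    zeros = sum((p == 0).sum().item() for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    print(f"{zeros/total:.1%} of parameters are now exactly zero")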
0
huggingface
Research
How I fine-tune BART for summarization using large texts?
https://discuss.huggingface.co/t/how-i-fine-tune-bart-for-summarization-using-large-texts/1266
Good night! I’m using a pre-trained BART for summarization and I have my own dataset for fine-tuning (pairs of a long text and its respective summary). However, my input texts are approximately 2500 characters long and the maximum BART accepts is 1024 tokens. Is there any technique I can use to make use of all the text? I thought of splitting each example into smaller texts (max 1024) and assigning the same summary to each. Does that make sense? Example: before: ABC: summary1, DEF: summary2; after: A: summary1, B: summary1, C: summary1, D: summary2, E: summary2, F: summary2. Thanks in advance!
Hi, there’s already a thread for this which you might find helpful: Summarization on long documents (🤗Transformers). You can try extractive summarisation followed by abstractive: in the extractive step you choose the top k sentences, of which you keep the top n allowed by the model max length. Another way is to use successive abstractive summarisation, where you summarise in chunks of the model max length and then summarise the concatenated output again until you reach the length you want. This method will be super expensive. You can also combine the first and second methods.
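A minimal sketch of the successive (chunked) abstractive approach described above, using the summarization pipeline. Chunking by characters is a simplification of proper token-level chunking, and the model name and lengths are just illustrative:

    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    def summarize_long(text, chunk_chars=2000):
        # First pass: summarize each chunk of the document separately.
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        partial = [summarizer(c, max_length=120, min_length=20)[0]["summary_text"]
                   for c in chunks]
        # Second pass: summarize the concatenated partial summaries.
        return summarizer(" ".join(partial), max_length=120, min_length=20)[0]["summary_text"]

    long_text = "Some very long document text. " * 200  # replace with a real long document
    print(summarize_long(long_text))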
0
huggingface
Research
New seq2seq tool: search hparam space with run_eval.py
https://discuss.huggingface.co/t/new-seq2seq-tool-search-hparam-space-with-run-eval-py/1166
FYI, there is a new tool available to you - you can now search the hparam space with run_eval.py. It’s called run_eval_search.py. It uses the same arguments as run_eval.py, but allows you to parametrize the hparams, so in addition to the normal args you can pass:

--search="num_beams=8:11:15 length_penalty=0.9:1.0:1.1 early_stopping=true:false"

and it’ll search all the possible combinations and at the end print a table of results sorted by the scores of the task, e.g.:

bleu  | num_beams | length_penalty | early_stopping
----- | --------- | -------------- | --------------
41.35 | 11        | 1.1            | 0
41.33 | 11        | 1.0            | 0
41.33 | 11        | 1.1            | 1
41.32 | 15        | 1.1            | 0
41.29 | 15        | 1.1            | 1
41.28 | 15        | 1.0            | 0
41.25 | 8         | 1.1            | 0
41.24 | 11        | 1.0            | 1
41.23 | 11        | 0.9            | 0
41.20 | 15        | 1.0            | 1
41.18 | 8         | 1.0            | 0

You can have one or more params searched. Here is an example of a full command:

PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval_search.py \
    facebook/wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt \
    --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json \
    --bs $BS --task translation \
    --search="num_beams=1:5 length_penalty=0.9:1.1 early_stopping=true:false"

If you encounter any issues please let me know. It’s documented here: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#run_eval-tips-and-tricks. @sshleifer and I added some more goodies in run_eval.py - you will find them all documented at that url. Enjoy. p.s. edited to remove things that are going to change based on Sam’s comment below.
Great work! There are only two possible sets of keys to get from run_eval.py, since score_fn = calculate_bleu_score if "translation" in args.task else calculate_rouge. You shouldn’t hard-code the possible tasks any more than that, IMO.
0
huggingface
Research
How to use T5 for sentence embedding?
https://discuss.huggingface.co/t/how-to-use-t5-for-sentence-embedding/1097
is there any way to use encoder part of T5 model for representation learning?
Hi @banucool, you can initialize the T5Model class and do a forward pass through only its encoder. The first element of the returned output is the final hidden states.

model = T5Model.from_pretrained("t5-small")
tok = T5Tokenizer.from_pretrained("t5-small")
enc = tok("some text", return_tensors="pt")

# forward pass through encoder only
output = model.encoder(
    input_ids=enc["input_ids"],
    attention_mask=enc["attention_mask"],
    return_dict=True,
)
# get the final hidden states
emb = output.last_hidden_state

The shape of emb will be (batch_size, seq_len, hidden_size).
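If a single fixed-size vector per sentence is needed, one common (though not the only) option is to mean-pool the encoder states over the non-padding tokens. A short sketch continuing the snippet above:

    import torch
    from transformers import T5Model, T5Tokenizer

    model = T5Model.from_pretrained("t5-small")
    tok = T5Tokenizer.from_pretrained("t5-small")
    enc = tok(["some text", "another sentence"], return_tensors="pt", padding=True)

    output = model.encoder(input_ids=enc["input_ids"],
                           attention_mask=enc["attention_mask"],
                           return_dict=True)

    # Mean-pool over the sequence dimension, ignoring padding positions.
    mask = enc["attention_mask"].unsqueeze(-1).float()        # (batch, seq_len, 1)
    summed = (output.last_hidden_state * mask).sum(dim=1)     # (batch, hidden)
    sentence_emb = summed / mask.sum(dim=1).clamp(min=1e-9)   # (batch, hidden)
    print(sentence_emb.shape)  # torch.Size([2, 512]) for t5-small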
0
huggingface
Research
BART question, it seems that pretraining is not work for a small model?
https://discuss.huggingface.co/t/bart-question-it-seems-that-pretraining-is-not-work-for-a-small-model/511
What is your question? My task is to generate keywords from sentences. I pretrain a text-generation model: I mask the sentences’ tokens and predict the whole sentences’ tokens, with pretraining batch_size = 8 and step = 1000000. I haven’t observed improvement from pretraining: the BLEU score is 10.5 without pretraining and 9.5 with pretraining.

Code: I take the Python code from github.com google-research/pegasus/blob/master/pegasus/models/transformer.py#L38

from pegasus.layers import attention
from pegasus.layers import decoding
from pegasus.layers import embedding
from pegasus.layers import timing
from pegasus.layers import transformer_block
from pegasus.models import base
import tensorflow as tf
from tensorflow.contrib import layers as contrib_layers

class TransformerEncoderDecoderModel(base.BaseModel):
  """Transformer encoder+decoder.
  Notations:
    B: batch_size, I: max_input_len, T: max_target/decode_len, D: hidden_size
    V: vocab_size
  """
  def __init__(self, vocab_size, hidden_size, filter_size, num_heads,
               num_encoder_layers, num_decoder_layers, label_smoothing, dropout):
    ...

with hidden_size = 512, num_encoder_layers = 3, num_decoder_layers = 3.

Discussion: The task is to generate keywords from sentences, and the keywords may not appear in the sentences. So inputting masked sentences to predict the whole sentences does not benefit the keyword-generation task; it has no direct relation to it. Am I right? Is that the reason pretraining does not improve the BLEU score? Thank you very much.
With all due respect, you are asking a question on a forum dedicated to a specific library (transformers by Hugging Face), but the question does not involve that library. In fact, you are using a completely different library. I am not sure if this is the right place for such questions. @sgugger
0
huggingface
Research
Why are segment and position embeddings so large?
https://discuss.huggingface.co/t/why-are-segment-and-position-embeddings-so-large/254
Cross-post from: https://forum.opennmt.net/t/size-of-feature-embeddings/3836 These days I am part-time doing work on improving translation models. We are working with regular transformer seq2seq networks using OpenNMT. This question is not about OpenNMT, but it was triggered by going through its documentation. In onmt one can add features to each word. These features are then used to train their own embedding. For example, if you want to train a lower-case model but still want to give importance to casing, you can add a casing feature that indicates whether the word was lower case or not. i│C like│l cookies│l from│l new│C york│C This will create two embedding layers under the hood: one for the tokens, and one for the case features. In their documentation, they state that the default size for features is … set to N^feat_vec_exponent where N is the number of values the feature takes, where the default feat_vec_exponent value is 0.7. However, that means that for two feature values, they would only get a size of 1 or 2 (1.6). The embeddings (token and casing) are then concatenated. This contrasts sharply with the language models that I know. Take, for instance, BERT, which has token (30k values), segment (two values), and position (512 values) embeddings which all have 512 dimensions, even the segment embeddings. These embeddings are summed. My question thus ends up being: I always thought that the number of items in the embedding should more or less dictate the hidden size of that embedding (as onmt suggests), but BERT and siblings do not do this. So what is the best way, and why? How does an embedding with only two values in a 512-dimensional space make sense?
It’s actually more a question of projecting into a high-dimensional dense vector space versus a sparse space, rather than the dimensionality itself. A lot of the recent developments in NLP are about projecting labels and tabular data into a high-dimensional vector space (assigning learned vectors to sparse categorical features) prior to computation. One striking demonstration of the efficiency of casting into high dimensions is the work of John Wieting and Douwe Kiela: https://openreview.net/forum?id=BkgPajAcY7, but there is also a much older history of work on random projections and the Johnson-Lindenstrauss lemma: https://scikit-learn.org/stable/modules/random_projection.html. A related discussion on the JL lemma you may want to join is here: https://github.com/huggingface/awesome-papers/discussions/7. Note, however, that there is a limit on the optimal dimension for the input embedding, and recent models like ALBERT (https://openreview.net/forum?id=H1eA7AEtvS) or approaches like Adaptive Inputs (http://arxiv.org/abs/1809.10853) keep the input dimension smaller than the model's hidden size to reach a more optimal ratio between these two dimensions.
0
huggingface
Research
Understanding what went wrong in attention
https://discuss.huggingface.co/t/understanding-what-went-wrong-in-attention/386
I am working on attention analysis. I want to learn more about where self attention made mistakes while attending to context query. Given two sentences, I am interested in learning more about where self-attention should have paid more attention (and not irrelevant tokens) to provide correct answers. In general, what went wrong in processing a given sample even if fine-tuned transformer is employed. While there are projects based on visualization like BertViz, ExBERT, I am not sure if it’s straightforward to extract the information I’m looking for. Do you know of any good projects, or workarounds in Transformers to answer my query ?
Can anyone point me to the method on how to visualize attention in matrix form between query and context sentences ? Is there any other alternative ? Any pointers will be appreciated.
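Not an answer from the thread, but a minimal sketch of how one might pull the raw attention matrices out of a transformers model for this kind of query-vs-context analysis; the model name and the choice of layer/head are arbitrary:

    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

    question = "Where was the treaty signed?"
    context = "The treaty was signed in Paris in 1898."
    enc = tokenizer(question, context, return_tensors="pt")

    with torch.no_grad():
        out = model(**enc)

    # out.attentions is a tuple of (batch, num_heads, seq_len, seq_len) tensors, one per layer.
    layer, head = 8, 3
    attn = out.attentions[layer][0, head]          # (seq_len, seq_len) attention matrix
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])

    # Print which token each position attends to most strongly.
    for i, tok in enumerate(tokens):
        j = attn[i].argmax().item()
        print(f"{tok:>12} -> {tokens[j]}")

The attn matrix can also be passed straight to a heatmap plotting function to get the matrix-form visualization asked about above.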
0
huggingface
Research
ACL 2020 highlights – Joe
https://discuss.huggingface.co/t/acl-2020-highlights-joe/188
I had a great time at ACL this week. There are many great papers and I’m still going through them. Here’s a summary of just a few that I wanted to highlight. I’d love to get thoughts and retorts from anyone reading! “To Test Machine Comprehension, Start by Defining Comprehension” by Jesse Dunietz, Gregory Burnham, Akash Bharadwaj, Owen Rambow, Jennifer Chu-Carroll, and David Ferrucci. Like most great ideas, the framework presented here is simple – seemingly obvious, even. They take a specific look at Machine Reading Comprehension (MRC) and argue that current evaluation metrics don’t really inspire much confidence in the system’s comprehension of the relevant information in the passage, not enough to trust it in any real-world setting. They argue that rather than making questions harder, we should explicitly define so-called “Templates of Understanding” to measure the different dimensions of comprehension within a particular context. For example, in the context of a story, they lay out the following ToU (figure omitted). The authors do a great job thinking with clarity and simplicity about how we should approach evaluating MRC systems. “Intermediate-Task Transfer Learning with Pretrained Language Models” by Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman. Recently the pre-train/fine-tune paradigm has become ubiquitous. This paper explores whether we can take advantage of labeled data during an intermediate training step. The authors do really extensive analysis on what kinds of datasets are useful for intermediate training and what downstream tasks they have a positive (or negative) effect on (figure omitted). A really interesting insight for me is that commonsense tasks don’t ever seem to have a negative effect. They either help on the downstream task, or don’t have much of an effect at all. I wonder if this is because we do have labeled commonsense data that is used, or if we could build some kind of unsupervised commonsense objective into the pre-training procedure that would work just as well. “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data” by Emily M. Bender and Alexander Koller. This paper is not focused around any one method or technique, but rather makes a general and pretty bold argument: meaning cannot be learned from form. In other words, just giving a model access to a whole bunch of text will never be enough to learn meaningfully about the real world. Whether you buy their argument or not, I found it to be an intellectually stimulating presentation. I suspect the hyperintelligent octopus argument will be one that sticks around for a long time. I also appreciated their word of caution about the way we use different words when communicating about a model’s capabilities. At the very end of the presentation, Alexander warned: “As a community, let’s remember that we’re scientists and not marketing people. Let’s be a little bit careful when we use terms like understanding, meaning, and comprehension.”
Intermediate Task Transfer is a very practical one. It exhaustively provides many results that can help engineers save much time.
0
huggingface
Research
Finetuning German BERT for QA on biomedical domain
https://discuss.huggingface.co/t/finetuning-german-bert-for-qa-on-biomedical-domain/500
Hello there and thank you very much for this wonderful work. I am relatively new to this field, so please bear with my amateur question. I want to perform question-answering on a German Biomedical text. From what I understand up to now, I need to fine-tune German BERT on biomedical QA datasets. Is there any script/pipeline that I should be using for this? Thank you very much in advance.
There is an example script for fine-tuning a model on question answering here; hope it can help!
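Once a German BERT checkpoint has been fine-tuned on a (German, ideally biomedical) SQuAD-style dataset with such a script, it can be used through the question-answering pipeline. A small sketch; the base checkpoint below has no trained QA head, so in practice you would point it at your own fine-tuned model:

    from transformers import pipeline

    # "bert-base-german-cased" is only the base checkpoint: replace it with your
    # own checkpoint after fine-tuning on a German biomedical QA dataset.
    qa = pipeline("question-answering",
                  model="bert-base-german-cased",
                  tokenizer="bert-base-german-cased")

    result = qa(
        question="Welches Medikament wurde dem Patienten verabreicht?",
        context="Dem Patienten wurde zur Behandlung der Infektion Amoxicillin verabreicht.",
    )
    print(result["answer"], result["score"])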
0
huggingface
Research
Debiasing models by HEX projection
https://discuss.huggingface.co/t/debiasing-models-by-hex-projection/473
I am interested in implementing the orthogonality portion of Towards Robustifying NLI Models Against Lexical Dataset Biases 4 from ACL 2020 in Pytorch. The overall idea seems simple. Have a primary model and a sub-model (like BOW) to detect superficial features, and then use HEX projection (Wang et al., 2019a) to project the representation of the original primary model to the orthogonal space of the representation of the BoW model. In this case, I would use a transformer as the primary model. I’m not sure about the implementation of HEX projection. If someone is familiar with it, it would be really helpful if they can share the snippet responsible for projecting the representation orthogonally. Additionally, adding a debiasing example in Transformers repo would be a good addition which I’m happy to add, once I implement myself.
I figured it out myself from the equations in the paper (Wang et al.). My implementation seems to be working. I will share the link to my repo once I open-source the code.
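The thread doesn't include the code, but the general idea of projecting one representation onto the orthogonal complement of another can be sketched as below. This is a simplified per-example orthogonal rejection, assuming both models' features have been mapped into a common space; it is not the exact HEX formulation from Wang et al.:

    import torch

    def orthogonal_rejection(h_main: torch.Tensor, h_bias: torch.Tensor) -> torch.Tensor:
        """Per example, remove from h_main its component along h_bias.

        h_main: (batch, d) features from the primary (transformer) model.
        h_bias: (batch, d) features from the bias-only (e.g. bag-of-words) model.
        """
        coeff = (h_main * h_bias).sum(dim=-1, keepdim=True) / \
                h_bias.pow(2).sum(dim=-1, keepdim=True).clamp(min=1e-8)
        return h_main - coeff * h_bias

    h_main = torch.randn(16, 768)   # e.g. transformer [CLS] features
    h_bias = torch.randn(16, 768)   # e.g. bag-of-words features in the same space
    h_debiased = orthogonal_rejection(h_main, h_bias)

    # Sanity check: each debiased vector is orthogonal to its bias vector.
    print((h_debiased * h_bias).sum(dim=-1).abs().max())  # ~0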
0
huggingface
Research
What does it mean to prime a GPT model?
https://discuss.huggingface.co/t/what-does-it-mean-to-prime-a-gpt-model/446
I am not sure I understand what it means to prime a LM. I came across this concept in several blogposts and papers (sometimes also referred to as exploring the capabilities of meta learning of the model or as in context learning). From the openai gpt2 paper, section 3.7 Translation: We test whether GPT-2 has begun to learn how to translate from one language to another. In order to help it infer that this is the desired task, we condition the language model on a context of example pairs of the format english sentence = french sentence and then after a final prompt of english sentence = This I believe is an example of priming? Since with transformers there is no concept of hidden state being passed from one step to another, we provide the model with an input sequence of tokens of up to 1024 length and the model will output up to 1024 x vocab size softmax activations where each will encode the probability of the subsequent word (following the word at a given position). So priming would be just constructing the input sequence in a specific manner? If I am reading this correctly, priming would refer to the act of passing a sequence into the model expecting that the model’s meta learning capability would affect its output? In this sense, for priming, we are always limited to a sequence of < 1024 tokens (where 1024 need to suffice for the priming sequence and the output)? Passing the past parameter just saves on compute, it provides the model with the key value pairs calculated at earlier steps of text generation but there is nothing else magical happening there? And last but not least - are such questions okay to ask? Meaning, this would certainly qualify as a beginner question, but it doesn’t directly relate to the library I suppose. I really appreciate the amazing resource you put out there, the transformer library along with the wonderful documentation, in fact I am blown over by how awesome it is, just would like to make sure I am not bothering you with my questions and am using the forums in a way that they were intended to be used. Thank you very much!
If I am reading this correctly, priming would refer to the act of passing a sequence into the model expecting that the model’s meta learning capability would affect its output? You’ve nailed it on the head. When talking about a left-to-right model like GPT-N, priming is just prepending text that is similar in some way to the text you are predicting, which often helps the model to predict it correctly. Incidentally, this is the thing that GPT-3 seems to be especially good at. There seems to be something about language models that we don’t completely understand that can make priming a surprisingly effective meta-learning technique, especially when the models get really big. See this Twitter thread for some examples. And yes, this kind of question is perfect for the forums. However, I’d say Research is probably a better category fit, since this is more about general NLP/research talk than about the HF libraries.
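A small sketch of priming in practice with GPT-2. The few-shot prompt follows the "english sentence = french sentence" pattern quoted from the GPT-2 paper; the generation settings are arbitrary, and the small gpt2 checkpoint will not actually translate well – the point is only how the prompt is constructed:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Priming: a few example pairs, then a final prompt the model should continue.
    prompt = (
        "the house is blue = la maison est bleue\n"
        "the cat is small = le chat est petit\n"
        "the book is red ="
    )
    inputs = tokenizer(prompt, return_tensors="pt")

    outputs = model.generate(
        **inputs,
        max_length=inputs["input_ids"].shape[1] + 10,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))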
0
huggingface
Research
Attaching TF models to CNN features
https://discuss.huggingface.co/t/attaching-tf-models-to-cnn-features/391
This may be not entirely about NLP. I am working on Image captioning and learning textual representations from CNN features. Idea is to train CNN using captioning. So, I tried to use GPT-2 tokenizer but I had to create Captioning model from scratch. Is there any way to attach TF Transformer models to other CV applications for better learning? My VirTex implementation in Keras 7
Hey @surajp. Sorry, I’m not familiar enough with VirTex to give a concrete response here. But our TF models are compatible with TF2/Keras, so you should be able to include them in your TF graph. If you’re having trouble with this, please post some more specifics and I’ll see if we can be of any more help.
0
huggingface
Research
Is it reasonable to pretrain by masking certain dimensions of each vector, rather than the individual token?
https://discuss.huggingface.co/t/is-it-reasonableto-pretrain-by-masking-certain-dimensions-of-each-vector-rather-than-the-individual-token/290
Let’s say I want to adapt Transformers to a non-NLP task, like financial data or a multiplayer online video game. You can imagine that the high-dimensional vector of each input will contain information that pertain to different events. For example, the first 10 dimensions might describe player 1, and the next 10 dimensions might describe player 2. If I were to extend the pre-training exercise to these non-NLP tasks, I think it could be reasonable to mask the actions of certain players in order to predict back their actions. This would essentially involve masking certain dimensions of a vector rather than masking the entire “input”. My question is: is this reasonable to do and is this even the right approach?
I don’t know what kind of input embeddings you’d be working with in that case, but the problem you’ll probably run into is that latent embeddings are usually not as nicely disentangled as you’ve described here. We sometimes talk about them as if they were for illustrative purposes, but in reality your description of “player 1” is probably distributed across the entire vector rather than existing entirely in some subset of vector positions.
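For what it's worth, here is a tiny sketch of what masking feature dimensions (rather than whole tokens) could look like as a reconstruction objective, just to make the idea concrete; the feature layout ("player 1" in dims 0–9), architecture, and loss are invented for illustration:

    import torch
    import torch.nn as nn

    batch, seq_len, dim = 4, 20, 40        # e.g. 4 games x 20 timesteps x 40 features
    x = torch.randn(batch, seq_len, dim)   # one "token" per timestep of the game

    # Mask the dimensions assumed to belong to "player 1" (dims 0-9).
    masked = x.clone()
    player1 = slice(0, 10)
    masked[:, :, player1] = 0.0

    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
        num_layers=2,
    )
    head = nn.Linear(dim, dim)

    # Reconstruct the full vector from the masked input; score only the masked dims.
    pred = head(encoder(masked))
    loss = nn.functional.mse_loss(pred[:, :, player1], x[:, :, player1])
    loss.backward()
    print(loss.item())

As the reply notes, this only works to the extent that those dimensions really do carry disentangled, player-specific information.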
0
huggingface
Research
Print All Tokens Over a Certain Probability Threshold
https://discuss.huggingface.co/t/print-all-tokens-over-a-certain-probability-threshold/329
I am curious to know how I would do this using GPT-2. Thank you for your time!
Hi there, here is a quick way to do this for the last token on a given sentence in PyTorch:

from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
import torch.nn.functional as F

# Load model and tokenizer
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# Input example
input_txt = "Hello, my name is Sylvain."
inputs = tokenizer(input_txt, return_tensors='pt')
outputs = model(**inputs)

# If you are not on a source install, replace outputs.logits by outputs[0]
predictions = F.softmax(outputs.logits, dim=-1)

thresh = 1e-2
vocab_size = predictions.shape[-1]

# Predictions has one sentence (index 0) and we look at the last token predicted (-1)
idxs = torch.arange(0, vocab_size)[predictions[0][-1] >= thresh]
print(tokenizer.convert_ids_to_tokens(idxs))
0
huggingface
Research
Building a custom Squad 2.0 style dataset, is it worth it?
https://discuss.huggingface.co/t/building-a-custom-squad-2-0-style-dataset-is-it-worth-it/398
Was wondering what the experts think and whether this is a sensible approach. The pre-trained SQuAD 2.0 models perform well in a custom domain, but could be greatly improved, given that the target domain is rather narrow and the vocabulary is different but there is overlap. Do you think it is worth obtaining a custom dataset, say 1000 observations, using the same methodology as SQuAD v2.0 but derived from data of the target domain? Are 1000 observations enough for the fine-tuning?
Hi @swayson, not an expert here, but fine-tuning on your domain should give better results. I can’t comment on whether 1000 examples will be enough or not; you’ll probably need to experiment. Also have a look at these question generation models. You can try to create synthetic QA corpora using them. Synthetic QA corpora have been shown to improve results for QA.
0
huggingface
Research
State of the art technique for initializing Embedding Matrix?
https://discuss.huggingface.co/t/state-of-the-art-technique-for-initializing-embedding-matrix/326
What are your thoughts on the state-of-the-art technique for initializing Embedding Weight matrices? Currently, PyTorch uses normal distribution to initialize these. Does using Kaiming Init make more sense?
From what I remember, Transformer modules should use Xavier init by default. I don’t remember the reason why, though, nor whether Kaiming is a better choice.
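For reference, here is what the initializations being compared look like when applied to an embedding matrix in PyTorch; this only illustrates the API, it is not a recommendation of one over the other:

    import torch.nn as nn

    vocab_size, hidden_size = 30522, 768
    embedding = nn.Embedding(vocab_size, hidden_size)  # PyTorch default: N(0, 1)

    # Xavier (Glorot) uniform initialization
    nn.init.xavier_uniform_(embedding.weight)

    # Kaiming (He) normal initialization
    nn.init.kaiming_normal_(embedding.weight, nonlinearity='relu')

    # BERT-style: normal with a small std (transformers uses initializer_range=0.02)
    nn.init.normal_(embedding.weight, mean=0.0, std=0.02)
    print(embedding.weight.std())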
0
huggingface
Research
Modern NLP for “Economics of Innovation” (Open Research Project using Patent Data)
https://discuss.huggingface.co/t/modern-nlp-for-economics-of-innovation-open-research-project-using-patent-data/235
Hi all, Suraj and I started discussing a potential research project and he suggested I make a thread here to discuss. As a quick intro, I am an NLP hobbyist and consumer of NLP research, and Suraj is a software developer with a keen interest in NLP. From my perspective, here are a few goals of the project: Upgrade our NLP skills in general Make an immediate contribution to an applied field by introducing modern NLP methods Dig deeper into NLP research and potentially make a minor advancement My idea was to introduce the Innovation Studies (or Economics of Innovation) field to modern NLP methods. I suggested this for a few reasons. First, it is generally accepted that the long-run economic growth rate, and standard of living, is driven by innovation. And second, there are about 8 million published US patents - that are freely available - that we can use as data. I am open to any directions to take, but here are a few starting points: Patent Classification I can see two reasons for improving patent classifications. One is for Innovation researchers to use the improved patent classes for their research - rather than relying on officially listed patent classes. And two, would be for actual innovation policy. One consensus in the field is that basic research is drastically under-invested in, since companies do not directly benefit from the large spillovers of basic research. So the rate of return on basic research is much higher for society than for any single company. However, when governments try to encourage basic research through incentivizing these types of patents, inventors can try to “cheat the system” by re-labeling their patent. Economists Ufuk Akcigit and Stefanie Stantcheva [1] say “Going forward, finding a feasible way to differentiate between basic and applied research is essential to better innovation tax policies.” Estimating the “Impact” of a Patent As far as I know, the vast majority of innovation studies, that use patent data, use the number of citations as a proxy for the impact of a patent. So improving the “impact score” of a patent might help many innovation researchers. Professor Bryan Kelly et al [2] use a very clever modification of TF-IDF to find similarity scores between patents. A patent’s impact is then estimated by finding the difference in similarity scores between the target patent and all previous patents, and the target patent and all future patents. This makes sense to me, and is well explained in their paper. However, I do think that using other methods of finding patent embeddings may be worth investigating - like using AllenAI’s SPECTER document embedding approach. I’d also like to look into deep graph networks to see if they can help produce an estimate of the impact of a patent, without using citations. Patent Idea Generation I think it would be cool to generate a patent abstract (or idea) either unconditionally, or conditioned on a sentence that would guide the generation. There are lots of directions we could pursue with this. Anyway, sorry for the long post. Please let us know if you have ideas, suggestions, would like to participate, etc. [1] https://www.nber.org/chapters/c14428.pdf 3 [2] https://www.nber.org/papers/w25266.pdf 4
@VictorSanh, @joeddav, @yjernite we would love to hear your thoughts on this
0
huggingface
Research
ACL 2020 - Some personal highlights - Victor
https://discuss.huggingface.co/t/acl-2020-some-personal-highlights-victor/202
Hey! I had such a blast at ACL 2020 this week! So many cool works, and lots of very interesting discussions both in the chat and in the Zoom Q&A sessions! Here’s a pick of 3 of my highlights (they are extremely biased towards what I’m currently interested in): (1) Inherent Disagreements in Human Textual Inferences by Ellie Pavlick, Tom Kwiatkowski. Natural Language Inference (sometimes referred to as textual entailment) has become fundamental in evaluating language understanding and semantics. The central question of this paper is “what should we use as ground truth labels for textual inference?” The authors show that the apparent “annotation noise” often results from a multi-modality among the annotators’ labels. They discuss the implications of this uncertainty and argue for a refined evaluation that better captures the diversity of human judgments. (2) Unsupervised Domain Clusters in Pretrained Language Models by Roee Aharoni, Yoav Goldberg. The authors propose a “data-driven” approach to define what a domain is in NLP and to select in-domain data. They show that large pre-trained language models are able to capture these domains in an unsupervised way and leverage this insight to select in-domain data to train neural machine translation models. (3) Syntactic Data Augmentation Increases Robustness to Inference Heuristics by Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen. Natural Language Inference models fine-tuned on top of models like BERT show high accuracy on standard test datasets but fail on challenge sets. The authors propose a simple syntactic data augmentation procedure to augment the standard training set with up to a thousand examples. Results show great improvement (and generalization) just from exposing the model to these controlled syntactic examples, supporting the hypothesis that BERT contains knowledge that simply needs to be “activated”. Failure cases (like passives) support the idea that there is also knowledge pre-trained BERT is not aware of. How about you? Did any work change your perspective?
Hi @VictorSanh, thanks so much for your list. As the conference was overwhelming with content, I did not see these papers at all. In paper (3), syntactic augmentation is very interesting, since (a) augmentation is very successful in Computer Vision (CV), but in NLP augmentation is much less obvious (regarding how to do it) and maybe more sensitive to downstream tasks (it is more robust in CV), and (b) in Section 3 of the paper, the authors state that the augmented examples are noisy: “We did not attempt to ensure the naturalness of the generated examples; e.g., in the INVERSION transformation, The carriage made a lot of noise was transformed into A lot of noise made the carriage. In addition, the labels of the augmentation dataset were somewhat noisy; e.g., we assumed that INVERSION changed the correct label from entailment to neutral, but this is not necessarily the case (if The buyer met the seller, it is likely that The seller met the buyer). As we show below, this noise did not hurt accuracy on MNLI.” This is very interesting to me (in CV it’s often intuitively clear which augmentations are noiseless or noisy), so I assume the ‘noise ratio’ here is minimal, since too much noise should degrade the overall performance. Further, in CV we also have soft-label augmentations like MixUp and CutMix, so maybe this similar area in NLP also has more potential. (On Kaggle we also tried our own unpublished augmentations for NLP with similar ideas – e.g., in the recent Jigsaw toxic comment classification competition, where a paragraph of comment text is given as an example, we combined two paragraphs together (with a Toxic + Neutral = Toxic label formula), or dynamically shuffled sentences within a given paragraph, since the toxicity degree should be invariant to this operation.)
0
huggingface
Research
ICLR 2020 highlights - Yacine
https://discuss.huggingface.co/t/iclr-2020-highlights-yacine/37
I took some notes on some ICLR 2020 papers that seemed most relevant to my research topics: information retrieval for QA, model architectures and analysis, and text generation. You can find them here: “ICLR papers” (Google Doc). From the notes – Transformer architectures / pretraining losses: Lite Transformer with Long-Short Range Attention – Long-Short Range Attention uses smaller-dimension global attention in parallel with convolutions to capture local context. The approach is more...
Thanks for the great summary, Yacine @yjernite. It’s a pity that the ICLR video presentations are no longer on SlidesLive (maybe they were deleted 1-2 weeks after the conference ended?)… Some of them can still be found on YouTube though.
0
huggingface
Research
About the Research category
https://discuss.huggingface.co/t/about-the-research-category/26
Use this category for any research question or to coordinate on a project with other users.
Thanks so much for having this category! Love it.
0
huggingface
Research
ACL 2020 highlights – Canwen
https://discuss.huggingface.co/t/acl-2020-highlights-canwen/183
The original Twitter thread is here. The selection criterion here is simply being interesting; not an exhaustive list. “Let Me Choose: From Verbal Context to Font Selection” (aclweb.org, 2020.acl-main.762.pdf) – Bridging text with its font! Very interesting application paper from Adobe. They even have emojis play a role in it! Even fonts have their semantics and sentiments. “Contextualized Weak Supervision for Text Classification” (aclweb.org, 2020.acl-main.30.pdf) – This paper cleverly introduces word disambiguation into weakly supervised text classification, and the method for data augmentation is also great! “Human Attention Maps for Text Classification: Do Humans and Neural Networks Focus on the Same Words?” (aclweb.org, 2020.acl-main.419.pdf) – This paper asks the interesting question of whether machines read text just like us humans. Though the conclusion may not be surprising, it opens a new path to understanding attention. “Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders” (aclweb.org, 2020.acl-main.23.pdf) – Sorry for self-promoting, but this paper is actually very interesting. The flexible framework can be extended to many more fields, including text style transfer, image generation, voice conversion, etc.
I like the human attention maps one. It’s interesting that humans have much more peaked distributions, focusing on a few key words, whereas the ML system attends to a larger set of words with varying weights.
0
huggingface
Models
Wav2Vec2 WER remains 1.00 and return blank transcriptions
https://discuss.huggingface.co/t/wav2vec2-wer-remains-1-00-and-return-blank-transcriptions/8857
Hello everyone. I am facing a strange problem and am not sure how I can resolve it myself. I was fine-tuning wav2vec2 on two custom datasets; it previously used to work fine, and now I am not sure what changed. Issue #1: The WER does not decrease; it always remains constant at 1.00. Issue #2: When you make predictions you get blank transcriptions. It happened twice, and then I decided to run exactly the English fine-tuning blog post: Fine-Tune Wav2Vec2 for English ASR with Transformers. I am attaching an image below; the only change I made is to the train and eval steps, to experiment faster. Please run the following Colab notebook to reproduce the results: Google Colaboratory.
Hey there, I was experiencing the same problem on my own dataset. I managed to solve it by using librosa to load the raw audio: librosa.load(filename, sr=16000). Previously I was using pydub.AudioSegment.
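For context, a minimal sketch of what that fix looks like inside a datasets preprocessing step; the column names ("path", "speech", "sampling_rate") are assumptions and the file paths must point at real audio files from your own dataset:

    import librosa
    from datasets import Dataset

    # Toy example; replace with your own dataset of audio file paths.
    dataset = Dataset.from_dict({"path": ["clip1.wav", "clip2.wav"]})

    def speech_file_to_array(batch):
        # Resample everything to 16 kHz, the rate Wav2Vec2 expects.
        speech, sampling_rate = librosa.load(batch["path"], sr=16000)
        batch["speech"] = speech
        batch["sampling_rate"] = sampling_rate
        return batch

    dataset = dataset.map(speech_file_to_array)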
0
huggingface
Models
Wav2Vec2: fix growing training and validation loss after few epochs
https://discuss.huggingface.co/t/wav2vec2-fix-growing-training-and-validation-loss-after-few-epochs/8757
Hi, I’m using Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53") to fine-tune on a Lithuanian language dataset. I’ve limited the dataset to 100 hours of recordings in the range of 1 to 15 seconds. I’m following the example from this notebook: Fine-Tune Wav2Vec2 for English ASR in Hugging Face with Transformers by @patrickvonplaten. My issue is that the training loss and validation loss steadily decrease for the first few epochs and then all metrics start to worsen (plots of eval loss, WER, and train loss omitted). My configuration is:

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    activation_dropout=0.055,
    attention_dropout=0.094,
    hidden_dropout=0.047,
    feat_proj_dropout=0.04,
    mask_time_prob=0.082,
    layerdrop=0.041,
    gradient_checkpointing=True,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()

training_args = TrainingArguments(
    output_dir="/workspace/models/wav2vec-lt",
    group_by_length=True,
    per_device_train_batch_size=24,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=30,
    fp16=True,
    save_steps=1000,
    eval_steps=1000,
    logging_steps=1000,
    learning_rate=2.34e-4,
    warmup_steps=500,
    save_total_limit=20,
    load_best_model_at_end=True,
    greater_is_better=False,
    log_level='debug',
    dataloader_num_workers=6,
    metric_for_best_model="wer",
)

trainer = Trainer(
    model=model,
    data_collator=data_collator,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=dataset_prepared['train'],
    eval_dataset=dataset_prepared['valid'],
    tokenizer=processor.feature_extractor,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=5, early_stopping_threshold=0.0001)],
)

Model params were taken from another wav2vec fine-tuning example. At first I had a higher learning rate, which I later reduced to the current one. I was thinking of using hyperparameter tuning to find the best params, but I’d like to resolve this problem first. Any advice?
Have you been able to resolve the issue? Facing the same problem
0
huggingface
Models
Error for Training job huggingface-sdk-extension-2022-01-24-16-31-30-883: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
https://discuss.huggingface.co/t/error-for-training-job-huggingface-sdk-extension-2022-01-24-16-31-30-883-failed-reason-algorithmerror-executeuserscripterror/14061
Hi All I am getting the below error msg when trying to train Bert , any help will be great. Bit urgent. error:- "Unable to create tensor, you should probably activate truncation and/or padding " ValueError: Unable to create tensor, you should probably activate truncation and/or padding with ‘padding=True’ ‘truncation=True’ to have batched tensors with the same length. Error for Training job huggingface-sdk-extension-2022-01-24-16-47-13-971: Failed. Reason: AlgorithmError: ExecuteUserScriptError: Command “/opt/conda/bin/python3.6 train.py --epochs 2 --model_name distilbert-base-uncased --train_batch_size 32” LOG:- 2022-01-24 16:53:12,880 sagemaker-training-toolkit INFO Imported framework sagemaker_pytorch_container.training 2022-01-24 16:53:12,904 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed. 2022-01-24 16:53:15,927 sagemaker_pytorch_container.training INFO Invoking user training script. 2022-01-24 16:53:16,395 sagemaker-training-toolkit INFO Invoking user script Training Env: { “additional_framework_parameters”: {}, “channel_input_dirs”: { “test”: “/opt/ml/input/data/test”, “train”: “/opt/ml/input/data/train” }, “current_host”: “algo-1”, “framework_module”: “sagemaker_pytorch_container.training:main”, “hosts”: [ “algo-1” ], “hyperparameters”: { “train_batch_size”: 32, “model_name”: “distilbert-base-uncased”, “epochs”: 2 }, “input_config_dir”: “/opt/ml/input/config”, “input_data_config”: { “test”: { “TrainingInputMode”: “File”, “S3DistributionType”: “FullyReplicated”, “RecordWrapperType”: “None” }, “train”: { “TrainingInputMode”: “File”, “S3DistributionType”: “FullyReplicated”, “RecordWrapperType”: “None” } }, “input_dir”: “/opt/ml/input”, “is_master”: true, “job_name”: “huggingface-sdk-extension-2022-01-24-16-47-13-971”, “log_level”: 20, “master_hostname”: “algo-1”, “model_dir”: “/opt/ml/model”, “module_dir”: “s3://sagemaker-eu-west-2-352316401451/huggingface-sdk-extension-2022-01-24-16-47-13-971/source/sourcedir.tar.gz”, “module_name”: “train”, “network_interface_name”: “eth0”, “num_cpus”: 8, “num_gpus”: 1, “output_data_dir”: “/opt/ml/output/data”, “output_dir”: “/opt/ml/output”, “output_intermediate_dir”: “/opt/ml/output/intermediate”, “resource_config”: { “current_host”: “algo-1”, “hosts”: [ “algo-1” ], “network_interface_name”: “eth0” }, “user_entry_point”: “train.py” } Environment variables: SM_HOSTS=[“algo-1”] SM_NETWORK_INTERFACE_NAME=eth0 SM_HPS={“epochs”:2,“model_name”:“distilbert-base-uncased”,“train_batch_size”:32} SM_USER_ENTRY_POINT=train.py SM_FRAMEWORK_PARAMS={} SM_RESOURCE_CONFIG={“current_host”:“algo-1”,“hosts”:[“algo-1”],“network_interface_name”:“eth0”} SM_INPUT_DATA_CONFIG={“test”:{“RecordWrapperType”:“None”,“S3DistributionType”:“FullyReplicated”,“TrainingInputMode”:“File”},“train”:{“RecordWrapperType”:“None”,“S3DistributionType”:“FullyReplicated”,“TrainingInputMode”:“File”}} SM_OUTPUT_DATA_DIR=/opt/ml/output/data SM_CHANNELS=[“test”,“train”] SM_CURRENT_HOST=algo-1 SM_MODULE_NAME=train SM_LOG_LEVEL=20 SM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main SM_INPUT_DIR=/opt/ml/input SM_INPUT_CONFIG_DIR=/opt/ml/input/config SM_OUTPUT_DIR=/opt/ml/output SM_NUM_CPUS=8 SM_NUM_GPUS=1 SM_MODEL_DIR=/opt/ml/model SM_MODULE_DIR=s3://sagemaker-eu-west-2-352316401451/huggingface-sdk-extension-2022-01-24-16-47-13-971/source/sourcedir.tar.gz 
SM_TRAINING_ENV={“additional_framework_parameters”:{},“channel_input_dirs”:{“test”:"/opt/ml/input/data/test",“train”:"/opt/ml/input/data/train"},“current_host”:“algo-1”,“framework_module”:“sagemaker_pytorch_container.training:main”,“hosts”:[“algo-1”],“hyperparameters”:{“epochs”:2,“model_name”:“distilbert-base-uncased”,“train_batch_size”:32},“input_config_dir”:"/opt/ml/input/config",“input_data_config”:{“test”:{“RecordWrapperType”:“None”,“S3DistributionType”:“FullyReplicated”,“TrainingInputMode”:“File”},“train”:{“RecordWrapperType”:“None”,“S3DistributionType”:“FullyReplicated”,“TrainingInputMode”:“File”}},“input_dir”:"/opt/ml/input",“is_master”:true,“job_name”:“huggingface-sdk-extension-2022-01-24-16-47-13-971”,“log_level”:20,“master_hostname”:“algo-1”,“model_dir”:"/opt/ml/model",“module_dir”:“s3://sagemaker-eu-west-2-352316401451/huggingface-sdk-extension-2022-01-24-16-47-13-971/source/sourcedir.tar.gz”,“module_name”:“train”,“network_interface_name”:“eth0”,“num_cpus”:8,“num_gpus”:1,“output_data_dir”:"/opt/ml/output/data",“output_dir”:"/opt/ml/output",“output_intermediate_dir”:"/opt/ml/output/intermediate",“resource_config”:{“current_host”:“algo-1”,“hosts”:[“algo-1”],“network_interface_name”:“eth0”},“user_entry_point”:“train.py”} SM_USER_ARGS=["–epochs",“2”,"–model_name",“distilbert-base-uncased”,"–train_batch_size",“32”] SM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate SM_CHANNEL_TEST=/opt/ml/input/data/test SM_CHANNEL_TRAIN=/opt/ml/input/data/train SM_HP_TRAIN_BATCH_SIZE=32 SM_HP_MODEL_NAME=distilbert-base-uncased SM_HP_EPOCHS=2 PYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages Invoking script with the following command: /opt/conda/bin/python3.6 train.py --epochs 2 --model_name distilbert-base-uncased --train_batch_size 32 2022-01-24 16:53:21,122 - main - INFO - loaded train_dataset length is: 572 2022-01-24 16:53:21,122 - main - INFO - loaded test_dataset length is: 144 2022-01-24 16:53:21,457 - filelock - INFO - Lock 140311318218288 acquired on /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333.lock 2022-01-24 16:53:21,790 - filelock - INFO - Lock 140311318218288 released on /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.91b885ab15d631bf9cee9dc9d25ece0afd932f2f5130eba28f2055b2220c0333.lock 2022-01-24 16:53:22,163 - filelock - INFO - Lock 140311212156688 acquired on /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a.lock 2022-01-24 16:53:27,475 - filelock - INFO - Lock 140311212156688 released on /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a.lock Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: [‘vocab_layer_norm.bias’, ‘vocab_projector.weight’, ‘vocab_projector.bias’, ‘vocab_transform.bias’, ‘vocab_layer_norm.weight’, ‘vocab_transform.weight’] This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model). This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: [‘classifier.weight’, ‘pre_classifier.weight’, ‘pre_classifier.bias’, ‘classifier.bias’] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 2022-01-24 16:53:28,847 - filelock - INFO - Lock 140311146924352 acquired on /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock 2022-01-24 16:53:29,484 - filelock - INFO - Lock 140311146924352 released on /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99.lock 2022-01-24 16:53:29,811 - filelock - INFO - Lock 140311211720320 acquired on /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.lock 2022-01-24 16:53:30,528 - filelock - INFO - Lock 140311211720320 released on /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4.lock 2022-01-24 16:53:31,518 - filelock - INFO - Lock 140311211719928 acquired on /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.lock 2022-01-24 16:53:38 Uploading - Uploading generated training model2022-01-24 16:53:31,850 - filelock - INFO - Lock 140311211719928 released on /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79.lock [2022-01-24 16:53:36.715 algo-1:26 INFO utils.py:27] RULE_JOB_STOP_SIGNAL_FILENAME: None [2022-01-24 16:53:36.867 algo-1:26 INFO profiler_config_parser.py:102] User has disabled profiler. [2022-01-24 16:53:36.868 algo-1:26 INFO json_config.py:91] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json. [2022-01-24 16:53:36.869 algo-1:26 INFO hook.py:201] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries. [2022-01-24 16:53:36.870 algo-1:26 INFO hook.py:255] Saving to /opt/ml/output/tensors [2022-01-24 16:53:36.871 algo-1:26 INFO state_store.py:77] The checkpoint config file /opt/ml/input/config/checkpointconfig.json does not exist. 
[Repeated model/tokenizer download progress bars and duplicated warning messages omitted.]

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 699, in convert_to_tensors
    tensor = as_tensor(value)
ValueError: too many dimensions 'str'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 83, in <module>
    trainer.train()
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1246, in train
    for step, inputs in enumerate(epoch_iterator):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 444, in __next__
    (data, worker_id) = self._next_data()
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 526, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/opt/conda/lib/python3.6/site-packages/transformers/data/data_collator.py", line 123, in __call__
    return_tensors="pt",
  File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2680, in pad
    return BatchEncoding(batch_outputs, tensor_type=return_tensors)
  File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 204, in __init__
    self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
  File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 716, in convert_to_tensors
    "Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.

2022-01-24 16:53:37,616 sagemaker-training-toolkit ERROR ExecuteUserScriptError: Command "/opt/conda/bin/python3.6 train.py --epochs 2 --model_name distilbert-base-uncased --train_batch_size 32"
#015 0%| | 0/36 [00:00<?, ?it/s] File “/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py”, line 716, in convert_to_tensors "Unable to create tensor, you should probably activate truncation and/or padding " ValueError: Unable to create tensor, you should probably activate truncation and/or padding with ‘padding=True’ ‘truncation=True’ to have batched tensors with the same length. #015 0%| | 0/36 [00:00<?, ?it/s] 2022-01-24 16:54:35 Failed - Training job failed ProfilerReport-1643042834: Stopping UnexpectedStatusException Traceback (most recent call last) in 10 11 # starting the train job with our uploaded datasets as input —> 12 huggingface_estimator.fit({‘train’: training_input_path, ‘test’: test_input_path}) ~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/estimator.py in fit(self, inputs, wait, logs, job_name, experiment_config) 690 self.jobs.append(self.latest_training_job) 691 if wait: → 692 self.latest_training_job.wait(logs=logs) 693 694 def _compilation_job_name(self): ~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/estimator.py in wait(self, logs) 1665 # If logs are requested, call logs_for_jobs. 1666 if logs != “None”: → 1667 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs) 1668 else: 1669 self.sagemaker_session.wait_for_job(self.job_name) ~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/session.py in logs_for_job(self, job_name, wait, poll, log_type) 3783 3784 if wait: → 3785 self._check_job_status(job_name, description, “TrainingJobStatus”) 3786 if dot: 3787 print() ~/anaconda3/envs/python3/lib/python3.6/site-packages/sagemaker/session.py in _check_job_status(self, job, desc, status_key_name) 3341 ), 3342 allowed_statuses=[“Completed”, “Stopped”], → 3343 actual_status=status, 3344 ) 3345 UnexpectedStatusException: Error for Training job huggingface-sdk-extension-2022-01-24-16-47-13-971: Failed. Reason: AlgorithmError: ExecuteUserScriptError: Command “/opt/conda/bin/python3.6 train.py --epochs 2 --model_name distilbert-base-uncased --train_batch_size 32” Downloading: 100%|██████████| 483/483 [00:00<00:00, 460kB/s] Downloading: 18%|█▊ | 47.8M/268M [00:01<00:04, 49.1MB/s]
Hi all, any updates?
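For reference, the usual cause of the "too many dimensions 'str'" / "Unable to create tensor" pair above is that a raw string column is still present in the dataset handed to the Trainer, so the default data collator tries to tensorize text. A minimal sketch of that kind of fix, assuming the training script builds a datasets.Dataset named train_dataset with a "text" column (both names are placeholders for whatever the script actually uses):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # pad/truncate so every example in a batch has the same length
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=128)

# drop the raw string column so the data collator only sees numeric features
train_dataset = train_dataset.map(tokenize, batched=True, remove_columns=["text"])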
0
huggingface
Models
How calculate loss.backward() in MLM by PyTorch
https://discuss.huggingface.co/t/how-calculate-loss-backward-in-mlm-by-pytorch/13217
I am going to train an MLM model by pytorch, but in the training part, I do not know how to calculate the loss.backward. model_name = "distilbert-base-uncased" model = AutoModelForMaskedLM.from_pretrained(model_name, max_length=256, return_dict=True) model.train() device = torch.device("cpu") for epoch in range(epochs): loop = tqdm(dataloader) for batch in loop: optimizer.zero_grad() input_ids = batch['input_ids'].to(device) labels = batch['labels'].to(device) attention_mask = batch['attention_mask'].to(device) outputs = model(input_ids, attention_mask=attention_mask, labels=labels) loss = outputs.loss loss.backwards() optimizer.step() @lewtun
Hey @MahdiA, what you have looks pretty close - I think you just need loss.backward() instead of loss.backwards(). Does that solve your issue?
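For completeness, a sketch of the corrected inner loop, assuming the model, optimizer, dataloader, device and epochs defined in the question above:

for epoch in range(epochs):
    loop = tqdm(dataloader)
    for batch in loop:
        optimizer.zero_grad()
        input_ids = batch["input_ids"].to(device)
        labels = batch["labels"].to(device)
        attention_mask = batch["attention_mask"].to(device)
        outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
        loss = outputs.loss
        loss.backward()  # backward(), not backwards()
        optimizer.step()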
1
huggingface
Models
Codet5 fails with a CUDA error
https://discuss.huggingface.co/t/codet5-fails-with-a-cuda-error/13896
I’m trying to reproduce the codet5 fine-tuning results (GitHub - salesforce/CodeT5: Code for CodeT5: a new code-aware pre-trained encoder-decoder model.) The script being used is: python3 /home/ubuntu/CodeT5/run_gen.py \ --task summarize \ --sub_task python \ --summary_dir /home/ubuntu/CodeT5/summary \ --cache_path /home/ubuntu/CodeT5/cache \ --data_dir /home/ubuntu/CodeT5/data \ --res_dir /home/ubuntu/CodeT5/res \ --output_dir /home/ubuntu/CodeT5/output \ --save_last_checkpoints \ --always_save_model \ --do_eval_bleu \ --model_name_or_path='Salesforce/codet5-base-multi-sum' \ --tokenizer_name='Salesforce/codet5-base-multi-sum' \ --train_filename /home/ubuntu/CodeT5/data/summarize/python/train.jsonl \ --dev_filename /home/ubuntu/CodeT5/data/summarize/python/valid.jsonl \ --test_filename /home/ubuntu/CodeT5/data/summarize/python/test.jsonl \ --do_train \ --do_eval \ --do_test \ --save_steps=500 \ --log_steps=100 \ --local_rank=-1 Running it leads to the following error: Traceback (most recent call last): File "/home/ubuntu/CodeT5/run_gen.py", line 387, in <module> main() File "/home/ubuntu/CodeT5/run_gen.py", line 234, in main outputs = model(input_ids=source_ids, attention_mask=source_mask, File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1561, in forward encoder_outputs = self.encoder( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 998, in forward layer_outputs = layer_module( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 639, in forward self_attention_outputs = self.layer[0]( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 546, in forward attention_output = self.SelfAttention( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 483, in forward scores = torch.matmul( RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. 
When run with an extra --no_cuda flag, ithe training script produces this error: Traceback (most recent call last): File "/home/ubuntu/CodeT5/run_gen.py", line 394, in <module> main() File "/home/ubuntu/CodeT5/run_gen.py", line 241, in main outputs = model(input_ids=source_ids, attention_mask=source_mask, File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1561, in forward encoder_outputs = self.encoder( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 898, in forward inputs_embeds = self.embed_tokens(input_ids) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/sparse.py", line 158, in forward return F.embedding( File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 2044, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self My guess is something fishy is going on with the source_ids, but I haven’t been able to figure it out. A simple test shows that source_ids has shape torch.Size([8, 64]) , while target_ids have torch.Size([8, 64]) . I wonder: whether (and how) this can be debugged and fixed? whether a Trainer can be adapted to this fine-tuning task? The main things I expect the Trainer to provide are deepspeed support and wandb reporting integration UPD: an example summarization script seems to work fine. Perhpas I’ll just have to tweak it a bit so that it can summarize code instead of natural language
Ok, to the best of my understanding at the moment, it goes like this: the example summarization script uses certain (I presume standard/based on the original paper) values for max_source_length and max_target_length, which are 1024 and 128, respectively. But codet5 uses 64 and 32 respectively, for some reason I was unable to discern. Anyway, once those were changed, it … didn’t work, since it didn’t fit into a single a100 memory, but then I changed the batch_size from 8 to 4, and now it does work. Hooray!
0
huggingface
Models
How can I delete a model repository
https://discuss.huggingface.co/t/how-can-i-delete-a-model-repository/2097
Hi, Sorry if this is a duplicate thread. Thanks for this beautiful platform for sharing our models and datasets. My question is, how can I delete a model repository from huggingface model hub? Thanks in advance. Sagor
To delete a model repository, right now you need to do it in code from the transformers library and do something like: from transformers.hf_api import HfApi HfApi().delete_repo(...) We’ll ship a UI to do it directly from the website in the coming days, though. (cc @pierric @thomwolf)
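As a hedged sketch, in more recent versions the same call lives in the standalone huggingface_hub client (where the hf_api module later moved); the repository name below is a placeholder:

from huggingface_hub import HfApi

api = HfApi()
# requires being logged in (huggingface-cli login) or passing a token
api.delete_repo(repo_id="your-username/your-model-name")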
0
huggingface
Models
Hubert ASR Fine Tuning giving weird results
https://discuss.huggingface.co/t/hubert-asr-fine-tuning-giving-weird-results/13572
Hi I have taken hubert-large-ls960-ft model and fine tuned it for my dataset. I can see the loss and wer decreasing at each step and the best model is saved. Now when i try to do inference i get repeated characters like ‘zhzhzhzzhzhz’. I am not sure where I am going wrong because I used the same script (Fine-Tune Wav2Vec2 for English ASR in Hugging Face with 🤗 Transformers) for my other wav2vec2 models and they work fine. I have just replaced Wav2Vec2ForCTC with HubertForCTC and the entire script is same. Can anyone please help me Training progression is as follows:- image501×778 34.9 KB And the code snippet where I made the change is as follows:- from transformers import HubertForCTC model = HubertForCTC.from_pretrained( "facebook/hubert-large-ls960-ft", attention_dropout=0.01, feat_proj_dropout= 0.05, activation_dropout=0.05, hidden_dropout=0.05, hidden_act= "gelu", # feat_proj_dropout=0.0, mask_time_prob=0.1, layerdrop=0.05, gradient_checkpointing=True, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer) ) Inference script is as follows:- from transformers import HubertForCTC,Wav2Vec2Processor model = HubertForCTC.from_pretrained('/content/drive/MyDrive/finalmodel/checkpoint-9500').to("cuda") processor = Wav2Vec2Processor.from_pretrained("/content/drive/MyDrive/finalmodel") test_df['audio_path'] = '/content/test_audio/'+test_df['Clip_ID']+'.mp3' test_df1 = test_df[['audio_path']] test_data = Dataset.from_pandas(test_df1) def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["audio_path"]) batch["speech"] = librosa.resample(np.asarray(speech_array[0].numpy()), 32_000, 16_000) batch["sampling_rate"] = 16_000 return batch test_data = test_data.map(speech_file_to_array_fn, remove_columns=test_data.column_names) def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_data.map(evaluate, batched=True, batch_size=8) result["pred_strings"]
Hey @sammy786, Could you please upload all relevant files that are created during training to the hub so that I can take a look. I especially need to take a look at the tokenizer that was created when running the script. So please upload all files that are required to run the model in inference and all bash and Python scripts that you used for training to a repository to the Hub. There is no way that I could find out from just looking at screenshots what might be wrong here. Thank you!
0
huggingface
Models
Finetuning wav2vec2-large-xlsr-53 only outputs blank labels
https://discuss.huggingface.co/t/finetuning-wav2vec2-large-xlsr-53-only-outputs-blank-labels/8617
Hi, When I try to finetune wav2vec2-large-xlsr-53 1 with FSC 1 dataset for ASR using built-in class Wav2VecForCTC, the CTC loss is not converging, and the system only outputs blank labels even for training instances. Here is the log of training (overfiting) with only 8 training instances in total: Epoch 60/1000, Batch 1/1, Total Step = 60, Loss = 26.296, CER = 100.000, Gold: ['SWITCH OFF THE LIGHTS', 'TURN THE VOLUME UP'] Pred: ['', ''] Epoch 1000/1000, Batch 1/1, Total Step = 1000, Loss = 2.663, CER = 100.000 Gold: ['SWITCH OFF THE LIGHTS', 'TURN THE VOLUME UP'] Pred: ['', ''] We can see that even at Epcoh 60, the CTC loss is ~26 and the system only outputs blank labels for training instances. Continuing training to 1000 epoch will reduce the CTC loss, but still the system outputs blanks. However, if I use an ASR finetuned model 3 (the model is finetuned even on Chinese corpora), with exactly the same code, continuing finetuning on FSC can quickly overfit the training instances, And we can see now the CTC loss is small and we can reproduce input pretty well: Epoch 37/1000, Batch 1/1, Total Step = 37, Loss = 0.681, CER = 21.818, ['CHANGE LANGUAGE', 'TURN THE LIGHTS ON'] ['CHANE LANGUAEEI', 'TURN THE LITSH ONT'] Epoch 100/1000, Batch 1/1, Total Step = 100, Loss = 0.028, CER = 2.727, ['SWITCH OFF THE LIGHTS', 'SWITCH ON THE LIGHTS'] ['SWITCH OFF THE LIGHTS', 'SWIITCH ON THE LIGHTSWW'] Here is the code to create new model loading different pretrained models: model = Wav2Vec2ForCTC.from_pretrained( args.audio_model, gradient_checkpointing=True, apply_spec_augment=False, vocab_size=processor.tokenizer.vocab_size, hidden_dropout=0.05, activation_dropout=0.05, feat_proj_dropout=0.05, layerdrop=0.05, final_dropout=0.05, mask_time_prob=0.05, ctc_loss_reduction='mean', ctc_zero_infinity=True, ) I am using Adam with 1e-4 as learning rate. Both models use the same vocabulary of size ~3k (with Chinese chars). This configuration is exactly the same for both pretrained models, but still yields different behaviors. Note that I also tried sum for ctc_loss_reduction in xlsr but also got the same blanks. Could anybody help me on that? Thank you very much!
I'm facing the same issue. Did you solve it?
0
huggingface
Models
CLIPModel finetuning
https://discuss.huggingface.co/t/clipmodel-finetuning/13388
I am doing the following to finetune CLIPMode further on my own dataset. But with no luck. Please advise. Model load from transformers import CLIPProcessor, CLIPModel, CLIPConfig config = CLIPConfig.from_pretrained("openai/clip-vit-base-patch32") model = CLIPModel(config) processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") Data loader import torch from torch.utils.data import Dataset from PIL import Image class CLIPDataset(Dataset): def __init__(self, root_dir, df,processor, max_target_length=32): self.root_dir = root_dir self.df = df self.processor = processor self.max_target_length = max_target_length def __len__(self): return len(self.df) def __getitem__(self, idx): file_name = self.df['image'][idx] image = Image.open(self.root_dir + "/" + file_name).convert("RGB") text = self.df['title'][idx] pixel_values = self.processor.feature_extractor(image, return_tensors="pt").pixel_values labels = self.processor.tokenizer(text, padding="max_length", max_length=self.max_target_length, truncation=True).input_ids labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels] return {"input_ids":torch.tensor(labels), "pixel_values":pixel_values.squeeze()} Using default data_collator with Trainer trainer = Seq2SeqTrainer( model, args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=default_data_collator, tokenizer=processor.feature_extractor )
@nielsr / @valhalla / @patrickvonplaten, may I seek your advice?
0
huggingface
Models
Language model for wav2vec2.0 decoding
https://discuss.huggingface.co/t/language-model-for-wav2vec2-0-decoding/4434
Hello, I implemented wav2vec2.0 code 69 and a language model is not used for decoding. How can I add a language model (let’s say a language model which is trained with KenLM) for decoding @patrickvonplaten ? thanks in advance. Note: I also opened an issue 65, but redirected here.
Hey Emre! Yeah good question - we currently don’t support evaluating with a language model, but we plan on adding this functionality soon! It’s sadly not that trivial to decode a CTC model with a language model. I’ll try to keep you posted for updates here!
0
huggingface
Models
How to separately use T5 decoder
https://discuss.huggingface.co/t/how-to-separately-use-t5-decoder/13536
I am working on a task in which I should modify the encoding results. what I would like to do is generally like this: input_ids = tokenizer(“i am trying hard!”, return_tensors=‘pt’).input_ids last_hidden_state=model.encoder(input_ids=input_ids).last_hidden_state modified_last_hidden_state = modify(last_hidden_state) outputs = model.decoder(modified_last_hidden_state) output_sequence = tokenizer.decode(outputs) I think this model.decoder() actually doesn’t work as I want.
Replying to myself: I think this is a good approach, since the loss and hidden states are exactly the same as in the standard training process; I will test the full training process later.
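For anyone trying the same thing, here is a minimal sketch of the idea: run the encoder once, edit its last_hidden_state, and pass the result back through the model via encoder_outputs so the decoder and LM head run on the modified states. The modify() function below is an identity placeholder standing in for whatever transformation you actually apply.

import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def modify(hidden_states):
    return hidden_states  # placeholder: replace with your own transformation

input_ids = tokenizer("i am trying hard!", return_tensors="pt").input_ids
labels = tokenizer("some target text", return_tensors="pt").input_ids

encoder_outputs = model.encoder(input_ids=input_ids)
encoder_outputs.last_hidden_state = modify(encoder_outputs.last_hidden_state)

# the model skips its own encoder pass when encoder_outputs is provided
outputs = model(encoder_outputs=encoder_outputs, labels=labels)
print(outputs.loss)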
0
huggingface
Models
Confusion About T5LM Properties
https://discuss.huggingface.co/t/confusion-about-t5lm-properties/13385
Hello, I’m working on using T5LM (small) for semantic parsing tasks. However, there seems to be discrepancies between the model properties as listed on the T5 documentation page: https://huggingface.co/docs/transformers/model_doc/t5 versus the error outputs I get from trying to run the Supervised Training example. More specifically, when running the last line: loss = model(input_ids=input_ids, labels=labels).loss I get the error saying the model needs either decoder_inputs_embeds or decoder_input_ids in place of the labels parameter. Additionally, the tutorial text states: The model will automatically create the decoder_input_ids based on the labels ,… Is there source code for this model implementation that I can reference? I don’t understand the difference between the embeds vs decoder input ids for a start. And I can’t tell if the model I imported using model = transformers.AutoModel.from_pretrained(“google/t5-small-lm-adapt”) is correct because in addition to the above discrepancy, when I run the original loss command, I got the error that Seq2SeqModelOutput does not have a .loss function. I thought the LM adaptation of T5 would be 1-to-1 to the original T5 architecture, but that clearly doesn’t seem to be the case. Any help or direction would be greatly appreciated. Thanks, Selma
I now understand the difference between decoder_inputs_embeds and decoder_input_ids; however, I'm still confused about the difference between the google/t5-small-lm-adapt model that is loaded by AutoModel and the T5ForConditionalGeneration model supplied by Hugging Face.
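The short version, as a sketch: AutoModel gives you the bare T5Model (encoder/decoder only, so its output has no .loss), while the supervised-training example in the docs expects T5ForConditionalGeneration, which AutoModelForSeq2SeqLM will also load for the LM-adapted checkpoint:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-small-lm-adapt")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-small-lm-adapt")

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

# decoder_input_ids are created automatically from labels
loss = model(input_ids=input_ids, labels=labels).loss
print(loss.item())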
0
huggingface
Models
How to use the inference api on tts model?
https://discuss.huggingface.co/t/how-to-use-the-inference-api-on-tts-model/12397
Hi, how can I use the inference api on this model: espnet/kan-bayashi_ljspeech_vits? It receives text and should return audio file, since it is text to speech model.
This looks like a cache issue (there's a cache in front of the API to prevent calculating things over and over). You can try adding {"inputs": "....", "parameters": {"use_cache": False}} to your input to force the output to be calculated. The caching mechanism should be upgraded at some point so you don't have to do this.
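A hedged sketch of a full request with caching disabled (the token is a placeholder; the reply above nests use_cache under "parameters", though depending on the API version it may be expected under "options" instead, so check the Inference API docs for your endpoint):

import requests

API_URL = "https://api-inference.huggingface.co/models/espnet/kan-bayashi_ljspeech_vits"
headers = {"Authorization": "Bearer YOUR_HF_API_TOKEN"}  # placeholder token

payload = {"inputs": "Hello world", "options": {"use_cache": False}}
response = requests.post(API_URL, headers=headers, json=payload)

# the TTS task returns raw audio bytes; check the Content-Type header for the exact format
with open("tts_output.flac", "wb") as f:
    f.write(response.content)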
1
huggingface
Models
Increasing loss during LM fine-tuning on custom dataset
https://discuss.huggingface.co/t/increasing-loss-during-lm-fine-tuning-on-custom-dataset/974
Hello, During the fine-tuning of camembert-base on a custom dataset using the Trainer API, the loss starts to increase quickly (compared to its value at step 0) and then decreases slowly, without ever reaching its value before training. What’s stranger is that lowering the learning rate results in a higher peak, whereas I expected the opposite to happen. Has anyone experienced a similar issue? For information, the dataset is quite small (35k items of 15-300 words each for training, 6k for validation). I’ve attached a screenshot of the metrics for different learning rate values (I’ve early stopped most of them). Capture d’écran 2020-09-03 à 13.51.302172×622 62.9 KB Here is the code I used for training: import json import logging import os import math import torch from torch.utils.data.dataset import Dataset from transformers import ( AutoTokenizer, CamembertForMaskedLM, DataCollatorForLanguageModeling, HfArgumentParser, PreTrainedTokenizer, Trainer, TrainingArguments, ) logger = logging.getLogger() class LMJsonDataset(Dataset): def __init__( self, tokenizer: PreTrainedTokenizer, file_path: str, field_name: str, block_size: int ): assert os.path.isfile(file_path), f"Input file path {file_path} not found" logger.info("Creating features from dataset file at %s", file_path) with open(file_path, encoding="utf-8") as f: texts = [t for t in (json.loads(item)[field_name] for item in f if item) if t] batch_encoding = tokenizer( texts, add_special_tokens=True, truncation=True, max_length=block_size ) self.examples = batch_encoding["input_ids"] def __len__(self): return len(self.examples) def __getitem__(self, i: int) -> torch.Tensor: return torch.tensor(self.examples[i], dtype=torch.long) parser = HfArgumentParser(TrainingArguments) parser.add_argument("--train_dataset", required=True) parser.add_argument("--eval_dataset", required=True) parser.add_argument("--mlm_probability", type=float, default=0.15) parser.add_argument("--base_model", default="camembert-base") parser.add_argument("--dataset_json_field_name", default="text") parser.add_argument("--block_size", type=int, default=128) training_args, remaining_args = parser.parse_args_into_dataclasses() model = CamembertForMaskedLM.from_pretrained(remaining_args.base_model) tokenizer = AutoTokenizer.from_pretrained(remaining_args.base_model) train_dataset = LMJsonDataset( tokenizer=tokenizer, file_path=remaining_args.train_dataset, field_name=remaining_args.dataset_json_field_name, block_size=remaining_args.block_size, ) eval_dataset = LMJsonDataset( tokenizer=tokenizer, file_path=remaining_args.eval_dataset, field_name=remaining_args.dataset_json_field_name, block_size=remaining_args.block_size, ) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=remaining_args.mlm_probability ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, prediction_loss_only=True, ) trainer.train() Thank you for your help!
I have a question: do you work with French data?
0
huggingface
Models
Fionetune model always predicts same output class for new data
https://discuss.huggingface.co/t/fionetune-model-always-predicts-same-output-class-for-new-data/12863
I have a problem: I trained a model with BERT which gives around 0.90% on test data, and I decided to use it on new data which were not annotated. When running the model, I keep getting the same class output, and I would like to know why. Can you help me?
Hey! Just wanted to say that I am facing the same problem and still don't know how this is happening. I was working with RobertaForSequenceClassification on a binary classification problem and it gave 50% accuracy after 5 epochs. It did not learn anything at all.
0
huggingface
Models
Unispeech-sat-base-plus-sv evalution runs out of VRAM
https://discuss.huggingface.co/t/unispeech-sat-base-plus-sv-evalution-runs-out-of-vram/13136
Hi! Currently I’m working on some utterance classification problem. Previously I was using Wav2Vec2 (facebook/wav2vec2-base) model and everything was fine. Now I tried to use this new model using the same code. The training process goes fine, but when it comes to evaluation, it runs out of CUDA memory: 43%|████▎ | 55/129 [00:41<01:09, 1.07it/s]e[ATraceback (most recent call last): File "/a2e_workspace/train.py", line 331, in <module> trainer.train() File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1399, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1521, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2158, in evaluate output = eval_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2341, in evaluation_loop preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 106, in nested_concat return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 106, in <genexpr> return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 106, in nested_concat return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 106, in <genexpr> return type(tensors)(nested_concat(t, n, padding_index=padding_index) for t, n in zip(tensors, new_tensors)) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 108, in nested_concat return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 76, in torch_pad_and_concatenate result = tensor1.new_full(new_shape, padding_index) RuntimeError: CUDA out of memory. Tried to allocate 480.00 MiB (GPU 0; 15.78 GiB total capacity; 13.21 GiB already allocated; 415.75 MiB free; 13.99 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF The GPU has 16 GB VRAM. Batch size parameters: per_device_train_batch_size: 4 per_device_eval_batch_size: 4 gradient_accumulation_steps: 4 The code is pretty much default, it’s based on this guide The strange thing is that the size of the model is approximately the same as Wav2Vec2, and also memory always runs out exactly on evalution. So maybe there is some memory leak bug in the code. I’m gonna test this on 32 GB GPU and report the results.
I figured out why this happens. There is a line 1783 in transformers/models/unispeech_sat/modeling_unispeech_sat.py: output_hidden_states = True if self.config.use_weighted_layer_sum else output_hidden_states So even if you pass output_hidden_states=None in forward pass, the output will anyways contain hidden states (because config.use_weighted_layer_sum=True by default). And if you are using output in a form such like that: @dataclass class SpeechClassifierOutput(ModelOutput): loss: Optional[torch.FloatTensor] = None logits: torch.FloatTensor = None hidden_states: Optional[Tuple[torch.FloatTensor]] = None attentions: Optional[Tuple[torch.FloatTensor]] = None This will actually contain all the hidden states. Evaluation round will accumulate all the hidden states for all samples, therefore you will run out of memory. So I just manually set hidden_states=None in the output. Hopefully this will help someone.
1
huggingface
Models
Use one-hot encoding as input for T5 and GPT
https://discuss.huggingface.co/t/use-one-hot-encoding-as-input-for-t5-and-gpt/13023
Hi, is it possible to train the T5 model using a one-hot encoded input and integers as the target? Something like this: loss = model(onehot, attn, targetInList, attn) As it's a translation problem, the one-hot input will be [[0,0,0,1,…],[0,1,0,0,…],…]. Any help would be appreciated!
If not, is there any other way to convert that one-hot encoded input to a normal integer list? Because torch.argmax is not differentiable.
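One differentiable alternative, sketched under the assumption that the one-hot tensor has shape (batch, seq_len, vocab_size): instead of converting it back to integer ids, multiply it by T5's input embedding matrix and feed the result through inputs_embeds, so gradients can flow through the one-hot (or soft) distribution. The token ids below are arbitrary illustrations.

import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

onehot = torch.zeros(1, 5, model.config.vocab_size)               # (batch, seq_len, vocab)
onehot[0, torch.arange(5), torch.tensor([37, 423, 5, 1, 0])] = 1.0
attn = torch.ones(1, 5, dtype=torch.long)
labels = torch.tensor([[644, 4598, 229, 19250, 1]])

embedding_matrix = model.get_input_embeddings().weight            # (vocab, d_model)
inputs_embeds = onehot @ embedding_matrix                         # differentiable "soft" embeddings
loss = model(inputs_embeds=inputs_embeds, attention_mask=attn, labels=labels).loss
loss.backward()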
0
huggingface
Models
Models for Multilingual Classification Tasks
https://discuss.huggingface.co/t/models-for-multilingual-classification-tasks/13003
Hello, I want to build a multilingual classification model (English, Hindi, Malayalam, …) and wanted to ask if anyone has suggestions for which models to use. I would like to explore the performance of different models. So which models are generally good for the use case of text classification across different languages? Thanks in advance!
Some great multilingual models: multilingual BERT 1 / DistilBERT 1, XLM-RoBERTa, RemBERT 1, CANINE. These are all encoder-only Transformer models (great for classification, question-answering, NER, …). CANINE is a relatively new model that is tokenizer-free, meaning it's a character-level model and does not require an explicit tokenization step. For summarization/translation/etc. (seq2seq tasks), mT5 is a great model.
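For example, loading XLM-RoBERTa (one of the checkpoints listed above) with a classification head; the number of labels is just an illustration:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)

inputs = tokenizer("यह एक अच्छी फिल्म है", return_tensors="pt")  # the same model handles many languages
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])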
0
huggingface
Models
HTTP 502 Bad Gateway for url
https://discuss.huggingface.co/t/http-502-bad-gateway-for-url/12977
Hello, I am having some issues to run tokenization using the XLSUM tokenizer. It seems to be an issue related to Huggingface as the file in the error message changes every time I run my script. The error message: requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum/resolve/main/tokenizer_config.json Here is my code: from datasets import Dataset, concatenate_datasets, load_from_disk from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from sklearn.model_selection import train_test_split import pandas as pd import glob import os import re # PARAMETERS encoder_max_length = 512 decoder_max_length = 150 batch_size = 100 model_name = "csebuetnlp/mT5_multilingual_XLSum" num_processes_to_tokenize = 12 WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) def load_description_and_transcript_data(lang): # concat all CSVs and load them as one pandas dataframe all_files = glob.glob('data/{}/description_and_transcript*.csv'.format(lang)) li = [] for filename in all_files: df = pd.read_csv(filename, index_col=None, header=0) li.append(df) df = pd.concat(li, axis=0, ignore_index=True) # keep only 3 columns: episode_uri, episode_description_cleaned, transcript_text df = df[['episode_uri', 'episode_description_cleaned', 'transcript_text']] df = df.dropna() # clean text (source https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) df["transcript_text"] = df["transcript_text"].apply(WHITESPACE_HANDLER) df["episode_description_cleaned"] = df["episode_description_cleaned"].apply(WHITESPACE_HANDLER) return df def load_train_dev_splits_data(lang): dev = pd.read_csv('data/{}/out.final.filtered.dev.tsv'.format(lang), index_col=None, header=0, sep='\t') train = pd.read_csv('data/{}/out.final.filtered.train.tsv'.format(lang), index_col=None, header=0, sep='\t') return train, dev # Copied from https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb def process_data_to_model_inputs(batch, lang): # load tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) inputs = tokenizer( batch["transcript_text"], # return_tensors="pt", padding="max_length", truncation=True, max_length=encoder_max_length ) outputs = tokenizer( batch["episode_description_cleaned"], # return_tensors="pt", padding="max_length", truncation=True, max_length=decoder_max_length, ) batch["input_ids"] = inputs.input_ids batch["attention_mask"] = inputs.attention_mask batch["labels"] = outputs.input_ids return batch def tokenize_data(hf_dataset, lang): processed_data = hf_dataset.map( lambda batch: process_data_to_model_inputs(batch, lang), num_proc=num_processes_to_tokenize, batched=True, batch_size=batch_size, remove_columns=["transcript_text", "episode_description_cleaned"], ) # set Python list to PyTorch tensor processed_data.set_format( type="torch", columns=["input_ids", "attention_mask", "labels"], ) return processed_data def preprocess_data(lang): """ Steps: - load splits out.final.filtered.dev.tsv and out.final.filtered.train.tsv - load the dataset containing both the episode description and the transcript text - split the dataset from the previous step into train and dev - tokenize the data - save tokenized :param df: :return: """ print("START preprocessing language:", lang) desc_transc = load_description_and_transcript_data(lang) print('desc_transc:', len(desc_transc)) train, dev = load_train_dev_splits_data(lang) print('train/dev:', len(train), 
len(dev)) desc_and_transc_train = desc_transc[desc_transc['episode_uri'].isin(train['episode_uri'].tolist())] desc_and_transc_dev = desc_transc[desc_transc['episode_uri'].isin(dev['episode_uri'].tolist())] # Load as HF dataset print('Loading as HF dataset...') podcast_train = Dataset.from_pandas(desc_and_transc_train) podcast_dev = Dataset.from_pandas(desc_and_transc_dev) # remove column __index_level_0__ podcast_train = podcast_train.map(lambda x: x, remove_columns=['__index_level_0__']) podcast_dev = podcast_dev.map(lambda x: x, remove_columns=['__index_level_0__']) # Tokenize and save data print('Processing TRAIN data...') tokenized_data = tokenize_data(podcast_train, lang) tokenized_data.save_to_disk(f'data/preprocessed/{lang}/train/') print('Processing TRAIN data FINISHED') # Tokenize and save data print('Processing DEV data...') tokenized_data = tokenize_data(podcast_dev, lang) tokenized_data.save_to_disk(f'data/preprocessed/{lang}/dev/') print('Processing DEV data FINISHED') def read_lines(file_path): with open(file_path, "r") as f: return f.readlines() def merge_and_shuffle(): en_data = load_from_disk('data/preprocessed/en_XX/train/') pt_data = load_from_disk('data/preprocessed/pt_XX/train/') all_train = concatenate_datasets(dsets=[en_data, pt_data]) all_train = all_train.shuffle() all_train.save_to_disk('data/preprocessed/all/train/') en_data = load_from_disk('data/preprocessed/en_XX/dev/') pt_data = load_from_disk('data/preprocessed/pt_XX/dev/') all_dev = concatenate_datasets(dsets=[en_data, pt_data]) all_dev = all_dev.shuffle() all_dev.save_to_disk('data/preprocessed/all/dev/') def main(): preprocess_data('pt_XX') preprocess_data('en_XX') merge_and_shuffle() main()
I am getting the same error since yesterday. Below is my instance: requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/api/models/roberta-base
0
huggingface
Models
How is the AdafactorScheluder suppose to be used?
https://discuss.huggingface.co/t/how-is-the-adafactorscheluder-suppose-to-be-used/9007
Hi, I recently saw my transformer model having divergence issues and I saw a paper that uses Adafactor and wanted to try it out. The docs are fantastic but they don’t mention how often or how the Adafactor scheduler actually works. How is that suppose to be used? When do I call the scheduler in my code? from transformers.optimization import Adafactor, AdafactorSchedule optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) lr_scheduler = AdafactorSchedule(optimizer) all my code is my custom training so idk when lr_scheduler is suppose to be called. Usually it depends on the model or I try to start of with the convenient ReduceLROnPlateau. When using lr=None with [ Trainer ] you will most likely need to use AdafactorSchedule scheduler as following: from transformers.optimization import Adafactor, AdafactorSchedule optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) lr_scheduler = AdafactorSchedule(optimizer) trainer = Trainer(..., optimizers=(optimizer, lr_scheduler)) related: Optimization — transformers 4.7.0 documentation 6 T5 Finetuning Tips - #34 by brando 3
Apologies for the direct ping, but can you help me get the right person to help me with this? @sgugger? Thank you in advance!
0
huggingface
Models
Tensorflow h5 file doesn’t contain network, it only include weighs?
https://discuss.huggingface.co/t/tensorflow-h5-file-doesnt-contain-network-it-only-include-weighs/12560
I have downloaded the bert-base-uncased tf_model.h5 from bert-base-uncased at main 1. Inspecting it with Netron, only weights can be found; how can I save the network (graph) in an h5 file?
Hello, sorry for the late reply; you might've already figured it out. TF models can be saved in two ways: one is an .h5 file, which contains only the weights, and the other is a SavedModel protobuf. SavedModel has everything you need (graphs, variables, etc.), so it's more platform agnostic. When you save as SavedModel you can use it everywhere (Android/Flutter apps with TF Lite, the browser with TF.js, or a REST API with TensorFlow Serving). See more here 1 on how to save as SavedModel and h5.
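A small, hedged sketch of the two formats, assuming a TensorFlow checkpoint of BERT (the file and directory names are placeholders):

from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("bert-base-uncased")

# .h5: Keras weights only, no graph
model.save_weights("bert_weights.h5")

# SavedModel: weights plus serialized graph, exported alongside the usual files
model.save_pretrained("bert_saved_model", saved_model=True)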
0
huggingface
Models
Difference between language modeling scripts
https://discuss.huggingface.co/t/difference-between-language-modeling-scripts/12540
Hi, I would like to fine-tune a language model (Tensorflow-based) and see that you have two scripts pertaining to that: https://github.com/huggingface/notebooks/blob/master/examples/language_modeling-tf.ipynb https://github.com/huggingface/transformers/blob/master/examples/tensorflow/language-modeling/run_mlm.py I followed the first script and am quite satisfied with the result. However, I am curious if the second one can be also an option for me. Could you, please, outline what are the differences between these two scripts and under which circumstances each one should be used? Or, alternatively, could you, please, provide a link to documentation, if such exists, that can answer my question? I apologize in advance, if my question sounds ignorant, I am quite a newbie in machine learning and mastering it now with HuggingFace ^^ Thank you!
Hello Lenn, First one is an end-to-end tutorial on how to train a mask language model, and it shows what you need to do (preprocessing etc) if you want to train an MLM, helps you understand what’s going on in the background, the problem itself etc. Second one is a more production-y script with no EDA. I used to use those scripts like run_glue.py in my former job to train/fine-tune language model directly, like this: !python run_glue.py \ --model_name_or_path "model_checkpoint" \ --task_name cola \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_gpu_train_batch_size 32 \ --per_gpu_eval_batch_size 32 \ --learning_rate 2e-5 \ --save_steps 2000 \ --num_train_epochs 50 \ --output_dir 'output_dir_here'
0
huggingface
Models
Wav2vec2.0 memory issue
https://discuss.huggingface.co/t/wav2vec2-0-memory-issue/4868
Hi @patrickvonplaten, I am trying to fine-tune XLSR-Wav2Vec2. The data contains more than 900k sound files, so it is huge. In this case, I always run out of memory, even when the batch size is 2 (GPU = 24 GB). When I take a subset (100 sounds) and fine-tune on it, everything is fine. What could be the problem? Is there any issue related to loading the data into memory? I think memory usage should not depend on how big the dataset is when the batch size is the same.
Hey @EmreOzkose do you train locally or in a google colab? Also do you get hard disk out-of-memory errors or RAM out-of-memory? Feel free to share your fine-tuning script here so that I can take a look
0
huggingface
Models
Fine-Tune for MultiClass or MultiLabel-MultiClass
https://discuss.huggingface.co/t/fine-tune-for-multiclass-or-multilabel-multiclass/4035
Hi, I want to build a: MultiClass Label (eg: Sentiment with VeryPositiv, Positiv, No_Opinion, Mixed_Opinion, Negativ, VeryNegativ) and a MultiLabel-MultiClass model to detect 10 topics in phrases (eg: Science, Business, Religion …etc) and I am not sure where to find the best model for these types of tasks? I understand this refers to the Sequence Classification Task. So, I could search for a model tagged with that task on your model repository site - but not all models are tagged like that and the transformers API seems to provide much more task applications beyond the original training. I found with the code below that I can have a model that supports originally 5 labels but load it into a ConvBertForSequenceClassification model to support, for example 25 labels. Would this (plus softmax or sigmoid and fine-tuning) be the correct way to pick up an existing model and implement 1. or 2. or is there a different more effective way to choose a model and fine tune it? Thanks dirk from transformers import pipeline nlp = pipeline("sentiment-analysis", 'bert-base-multilingual-uncased-sentiment') result = nlp("I hate you")[0] print(f"label: {result['label']}, with score: {round(result['score'], 4)}") result = nlp("I love you")[0] print(f"label: {result['label']}, with score: {round(result['score'], 4)}") #label: 1 star, with score: 0.6346 #label: 5 stars, with score: 0.8547 from transformers import ConvBertForSequenceClassification, ConvBertTokenizer convBertModel = ConvBertForSequenceClassification.from_pretrained('bert-base-multilingual-uncased-sentiment', num_labels=25) convBerttokenizer = ConvBertTokenizer.from_pretrained('bert-base-multilingual-uncased-sentiment') print ( f" num_labels: {model.num_labels}") print ( f" classifier: {model.classifier}") # num_labels: 25 # classifier: ConvBertClassificationHead( # (dense): Linear(in_features=768, out_features=768, bias=True) # (dropout): Dropout(p=0.1, inplace=False) # (out_proj): Linear(in_features=768, out_features=25, bias=True) # )
Hi @dikster99, The way I usually search for models on the Hub 184 is by selecting the task in the sidebar, followed by applying a filter on the target dataset (or querying with the search bar if I know the exact name). In both your cases, you're interested in the Text Classification tags, which is a specific example of sequence classification. However, this assumes that someone has already fine-tuned a model that satisfies your needs. If not, there are two main options: If you have your own labelled dataset, fine-tune a pretrained language model like distilbert-base-uncased (a faster variant of BERT). You can find a nice example for text classification here 271 and see here 493 for the multi-label case. In general this is similar to your second example with ConvBertForSequenceClassification and you were correct to specify num_labels in the from_pretrained function. If you have no labelled dataset, then you could try out one of the Zero-Shot Classification models (e.g. here 43). See this 32 blog post for an explanation on how zero-shot works. Hope that helps! PS. the Pipeline classes are typically used for generating predictions from a fine-tuned model, so your example with bert-base-multilingual-uncased-sentiment wouldn't work because that model was trained on 5 labels and does not know about the 25 labels you are interested in.
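As a concrete sketch of option 1 (the checkpoint and label counts below are illustrative): load a pretrained encoder with a freshly sized classification head and then fine-tune it on your labelled data. Recent versions of transformers also accept problem_type="multi_label_classification" for the multi-label topic case, which switches the head to a sigmoid/BCE loss.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 6-class single-label head for the sentiment use case
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=6)

# multi-label head for the 10-topic use case
multilabel_model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=10, problem_type="multi_label_classification"
)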
0
huggingface
Models
Adding aggregation to TAPAS
https://discuss.huggingface.co/t/adding-aggregation-to-tapas/12365
I want to add a “portion” operator that will calculate the count of the selected cells divided by the number of rows in the table (count \ total). To answer questions like “what is the portion of players with more than 10 points?” and get “0.32” as an answer. Count result is already calculated in the _calculate_expected_result method in the modeling_tapas.py: scaled_probability_per_cell = (scaled_probability_per_cell / numeric_values_scale) * input_mask_float count_result = torch.sum(scaled_probability_per_cell, dim=1) However, I’m not sure how to continue from here. How to get the number of cells (i thought it is the size of scaled_probability_per_cell, but I’m not sure it is correct in small tables)? How to add the result to the loss? , etc.
So you want to know the number of rows for every example in the batch? You can probably take the max of the unique row IDs in the token type ids created by TapasTokenizer. Small example: from transformers import TapasTokenizer import pandas as pd model_name = 'google/tapas-base-finetuned-wtq' tokenizer = TapasTokenizer.from_pretrained(model_name) data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]} queries = ["What is the name of the first actor?", "How many movies has George Clooney played in?", "What is the total number of movies?"] table = pd.DataFrame.from_dict(data) inputs = tokenizer(table=table, queries=queries, padding='max_length', return_tensors="pt") row_ids = inputs["token_type_ids"][:,:,2] You can easily get the unique row IDs for every example in the batch, and then take the max: import torch nrows = [torch.max(torch.unique(example)).item() for example in row_ids] You can pass this variable to the _calculate_expected_result method, such that you can compute the portion result as: portion_result = count_result / nrows
1
huggingface
Models
T5 Finetuning Tips
https://discuss.huggingface.co/t/t5-finetuning-tips/684
Starting this thread for sharing tips, tricks, and results. This is my first attempt at this kind of thread, so it may completely fail. Some things I've found: apparently, if you copy Adafactor from fairseq, as recommended by the T5 authors, you can fit batch size = 2 for t5-large LM finetuning; fp16 rarely works; for most tasks, you need to manually add </s> to the end of your sequence. Things I've read: the task-specific prefix doesn't matter much. cc @mrm8488 @valhalla @patrickvonplaten, who have all tried different experiments.
Things I've found: Task prefixes matter (1) when doing multi-task training, and (2) when your task is similar or related to one of the supervised tasks used in the T5 pre-training mixture. T5 needs a slightly higher LR than the default one set in Trainer; in my experiments 1e-4 and 3e-4 worked for almost all problems (classification, QA, question generation, summarization). No need to pass decoder_input_ids to T5 yourself, just pass labels and the T5 model will prepare them for you. labels should end with eos_token (important! This is where most of the mistakes are happening). T5 uses pad_token as the decoder_start_token_id, so when doing generation without the generate function, make sure you start it with the pad token. Trimming batches when training on TPU leads to very slow training. Apparently, because of sentencepiece and some possible leakage of other languages into the C4 data, T5 gives somewhat sensible results for French; I fine-tuned it on FQuAD (the French version of SQuAD) for question generation and BLEU-4 against the dev set was 15. Not sure if it's an issue or not, but in some cases using label_smoothing with T5 resulted in NaN loss.
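A quick sketch illustrating two of the tips above (pass labels and let the model build decoder_input_ids, and make sure labels end with the EOS token, which T5Tokenizer appends automatically when encoding the target text):

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: I love pizza.", return_tensors="pt").input_ids
labels = tokenizer("Ich liebe Pizza.", return_tensors="pt").input_ids
print(labels[0, -1].item() == tokenizer.eos_token_id)  # True: </s> is appended for you

loss = model(input_ids=input_ids, labels=labels).loss  # decoder_input_ids built internally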
0
huggingface
Models
403 Client Error: Forbidden for url:
https://discuss.huggingface.co/t/403-client-error-forbidden-for-url/12563
Hi guys, I have a problem using the paraphrase-distilroberta-base-v2 model. Once I call the model in my Python code: model = SentenceTransformer('sentence-transformers/paraphrase-distilroberta-base-v2') this error appears: 403 Client Error: Forbidden for url: https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2/resolve/486c69f0e5395a86ef58883e1c18e475cc7b8aba/.gitattributes The same thing happens if I use the Hosted Inference API on the model's main page. Any advice about this problem?
Hi @giovanni94s, I guess there is a problem with the Hugging Face Hub today, as I cannot even edit a model or dataset card in my profile. @julien-c can you help us? Thank you.
0
huggingface
Models
Small document issue for segformer?
https://discuss.huggingface.co/t/small-document-issue-for-segformer/12227
Hi @nielsr, I'm not sure if the docs for the SegformerConfig default image_size=512 are wrong… I suppose it should be 224 according to the source code? master link: SegFormer — transformers 4.13.0.dev0 documentation; v4.12.5 link: SegFormer — transformers 4.12.5 documentation
Hi, Thanks for reporting, this has been fixed.
0
huggingface
Models
Train Bart for Conditional Generation (e.g. Summarization)
https://discuss.huggingface.co/t/train-bart-for-conditional-generation-e-g-summarization/1904
Hi everybody I ran into some issues when trying to fine-tune bart for summarization using the BartForConditionalGeneration model. The issue evolved around properly masking and ignoring the padding tokens when training. Without the following fix the loss went down but the model produced bad summaries. I post the solution here in case anyone else runs into similar problems. from transformers import BartTokenizer, BartForConditionalGeneration from transformers import Trainer, TrainingArguments from transformers.modeling_bart import shift_tokens_right dataset = ... # some Datasets object with train/validation split and columns 'text' and 'summary' tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') model = BartForConditionalGeneration.from_pretrained('facebook/bart-large') def convert_to_features(example_batch): input_encodings = tokenizer.batch_encode_plus(example_batch['text'], pad_to_max_length=True, max_length=1024, truncation=True)) target_encodings = tokenizer.batch_encode_plus(example_batch['summary'], pad_to_max_length=True, max_length=1024, truncation=True)) labels = target_encodings['input_ids'] decoder_input_ids = shift_tokens_right(labels, model.config.pad_token_id) labels[labels[:, :] == model.config.pad_token_id] = -100 encodings = { 'input_ids': input_encodings['input_ids'], 'attention_mask': input_encodings['attention_mask'], 'decoder_input_ids': decoder_input_ids, 'labels': labels, } return encodings dataset = dataset.map(convert_to_features, batched=True) columns = ['input_ids', 'labels', 'decoder_input_ids','attention_mask',] dataset.set_format(type='torch', columns=columns) training_args = TrainingArguments( output_dir='./models/bart-summarizer', num_train_epochs=1, per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', ) trainer = Trainer( model=model, args=training_args, train_dataset=dataset['train'], eval_dataset=dataset['validation'] ) The convert_to_features function makes sure that the decoder inputs are correctly shifted and still include the padding tokens while in the labels the padding tokens are replaced by -100 such they are ignored in the model loss.
Hi, I tried to use your fix but I’m wondering whether this is still up-to-date: transformers.modeling_bart doesn’t exist. There is only transformers.models.bart.modeling_bart but its shift_tokens_right function requires a torch.Tensor object while huggingface’s datasets object only consists of lists (plus it needs an additional decoder_start_token_id). This also leads to an error in labels[labels[:, :] == model.config.pad_token_id] = -100 because this is numpy syntax. Also, batch_encode_plus is deprecated. It would be nice if there was a statement from huggingface if this is still necessary and if not, could it please be removed from the BART page?
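In the meantime, here is roughly how I would write it against a recent transformers version. Treat it as an untested sketch: it assumes the same dataset object and column names as in the original post, and relies on DataCollatorForSeq2Seq to pad each batch, set padded label positions to -100, and let the model build decoder_input_ids by shifting the labels.

from transformers import (BartTokenizerFast, BartForConditionalGeneration,
                          DataCollatorForSeq2Seq, Trainer, TrainingArguments)

tokenizer = BartTokenizerFast.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')

def convert_to_features(example_batch):
    # no padding here; the data collator pads dynamically per batch
    model_inputs = tokenizer(example_batch['text'], max_length=1024, truncation=True)
    labels = tokenizer(example_batch['summary'], max_length=128, truncation=True)
    model_inputs['labels'] = labels['input_ids']
    return model_inputs

dataset = dataset.map(convert_to_features, batched=True)

# pads inputs and labels, replaces label padding with -100 so it is ignored by the loss,
# and creates decoder_input_ids by shifting the labels to the right
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=-100)

training_args = TrainingArguments(output_dir='./models/bart-summarizer',
                                  num_train_epochs=1, per_device_train_batch_size=1)
trainer = Trainer(model=model, args=training_args, data_collator=data_collator,
                  train_dataset=dataset['train'], eval_dataset=dataset['validation'])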
0
huggingface
Models
How to call foward of multiple models
https://discuss.huggingface.co/t/how-to-call-foward-of-multiple-models/12502
I have a wrapper for a huggingface model. In this wrapper I have some encoders, which are mainly a series of embeddings. In forward of the wrapped model, I want to call forward of each of encoders in a loop, but I get the error: Traceback (most recent call last): File "/home/pouramini/mt5-comet/comet/train/train.py", line 1275, in <module> run() File "/home/pouramini/anaconda3/lib/python3.8/site-packages/click/core.py", line 716, in __call__ return self.main(*args, **kwargs) File "/home/pouramini/anaconda3/lib/python3.8/site-packages/click/core.py", line 696, in main rv = self.invoke(ctx) File "/home/pouramini/anaconda3/lib/python3.8/site-packages/click/core.py", line 1060, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/pouramini/anaconda3/lib/python3.8/site-packages/click/core.py", line 889, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/pouramini/anaconda3/lib/python3.8/site-packages/click/core.py", line 534, in invoke return callback(*args, **kwargs) File "/home/pouramini/mt5-comet/comet/train/train.py", line 1069, in train result = wrapped_model(**batch) File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/pouramini/mt5-comet/comet/transformers_ptuning/ptuning_wrapper.py", line 135, in forward prompt_embeds = encoder(prompt_input_ids,\ File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/pouramini/mt5-comet/comet/transformers_ptuning/ptuning_wrapper.py", line 238, in forward return self.embedding(prompt_token_ids) File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward return F.embedding( File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py", line 2043, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! 
(when checking arugment for argument index in method wrapper_index_select) This the code that results the error: def forward(self,input_ids, labels, decoder_input_ids=None,prompt_ids=None,**kwargs): # find masks based on the range of prompt ids (offset_id < X < offset_id + prompt_length) #Because this wrapper only deals with a single prompt, the length should be the same, you can use masked_select to reshape prompt_masks = self.prompt_token_fn(input_ids) if prompt_masks.any(): input_ids_ = input_ids.clone() wlog.info("inpu ids :{}".format(input_ids)) if self.replacing_token_id is not None: # replace prompt ids in input_ids with replacing token input_ids_[prompt_masks]=self.replacing_token_id # find the model embeddings of input ids except for prompt tokens inputs_embeds = self.model_embeddings(input_ids_) for encoder in self.prompt_encoders: #encoder = self.prompt_encoders[0] wlog.info("********** offset: %s, length: %s", encoder.id_offset, encoder.length) prompt_token_fn = encoder.get_prompt_token_fn() encoder_masks = prompt_token_fn(input_ids) wlog.info("Encoder masks: %s", encoder_masks) if encoder_masks.any(): #find input ids for prompt tokens prompt_input_ids = input_ids[encoder_masks] wlog.info("Prompt Input ids: %s", prompt_input_ids) # call forwards on prompt encoder whose outputs are prompt embeddings prompt_embeds = encoder(prompt_input_ids,\ prompt_ids).to(device=inputs_embeds.device) The code however runs if I just use cpu as device. Also if I have one encoder, the code is run with cuda, but when there are multiple encoders, it seems it expects that all of them are transfered to device, which I don’t know how to do that.
It seems I need to put all the encoders on the device explicitly:

wrapped_model.to(device=device)
for encoder in wrapped_model.prompt_encoders:
    encoder.to(device=device)

However, when there was a single encoder (or a list containing only one encoder), I didn't need to put it on the device explicitly, but for the list of encoders I must!
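I think the cleaner fix is to register the encoders in an nn.ModuleList inside the wrapper, so a single .to(device) moves everything at once. A minimal sketch of what I mean (class and attribute names are just illustrative):

import torch.nn as nn

class WrappedModel(nn.Module):
    def __init__(self, model, prompt_encoders):
        super().__init__()
        self.model = model
        # a ModuleList (unlike a plain Python list) registers the encoders as
        # submodules, so wrapped_model.to(device) also moves their parameters
        self.prompt_encoders = nn.ModuleList(prompt_encoders)

With a plain Python list, the encoders are invisible to .to(), .parameters() and the optimizer, which is why they stayed on the CPU.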
0
huggingface
Models
Validation loss vs ROUGE (mismatch)
https://discuss.huggingface.co/t/validation-loss-vs-rouge-mismatch/12473
Hi there, I'm experiencing an unexpected behaviour working on a summarization model: as I increase my training samples, my validation loss decreases as expected, BUT the ROUGE metrics do not improve (do not increase), so the best model based on validation loss is not the best model based on ROUGE; they are not even close: a model with a considerably larger validation loss is the one with the best ROUGE scores. Can anyone explain this a bit? How normal is it? Thank you guys
What is your objective function for validation loss? I think the answer to your question depends on several factors, the biggest of which are: the kind of rouge you are using (R1 vs. R2 vs. RL) and the score range (high vs. low).
0
huggingface
Models
Trouble saving/loading fine-tuned BART model
https://discuss.huggingface.co/t/trouble-saving-loading-fine-tuned-bart-model/12396
Hello! I am fine-tuning the ‘sshleifer/distilbart-cnn-12-6’ model using the following code. model = AutoModelForSeq2SeqLM.from_pretrained(‘sshleifer/distilbart-cnn-12-6’) optim = AdamW(model.parameters(), lr= params_layered.lr) model.train() for epoch in range(params_layered.epochs): with tqdm(train_loader_l, unit=“batch”) as tepoch: for inputs,masks,labels in tepoch: tepoch.set_description(f"Epoch {epoch}") optim.zero_grad() outputs = model(inputs, attention_mask=masks, labels=labels) loss = outputs[0] loss.backward() optim.step() This produces great results. Then, I’d like to save this model and load it back. However, I cannot seem to do this correctly. First, it’s curious to note that when I print out the weights of this fine-tuned model, they are identical to the original pre-trained (before fine tuning) model weights. And indeed, when I save and load the model back, the results are the same as the model results before fine-tuning. I am testing this using the code below: (fine-tuned model) print(dict(model.named_parameters())) Output: "tensor([[-0.0369, 0.0782, 0.1621, …, 0.1831, 0.0589, -0.0659], … test = AutoModelForSeq2SeqLM.from_pretrained(‘sshleifer/distilbart-cnn-12-6’) print(dict(test.named_parameters())) Output: "tensor([[-0.0369, 0.0782, 0.1621, …, 0.1831, 0.0589, -0.0659], … output matches w/ above print statement for all weights printed I am attempting to save the weights using: torch.save(model.state_dict(),’/dbfs/FileStore/…/summary_model.pt’) net = AutoModelForSeq2SeqLM.from_pretrained(‘sshleifer/distilbart-cnn-12-6’) arxiv = torch.load(’/dbfs/FileStore/…/summary_model.pt’) net.load_state_dict(arxiv) (All keys do match successfully) I have also tried: model.save_pretrained("/dbfs/FileStore/…/summary-model/") test = AutoModelForSeq2SeqLM.from_pretrained("/dbfs/FileStore/…/summary-model/") And both of these have identical weights to the above as well. Not sure what I’m doing wrong, would appreciate any advice!
This actually resolved itself!
0
huggingface
Models
Re-training NLP model with training AND validation dataset after validation has been done
https://discuss.huggingface.co/t/re-training-nlp-model-with-training-and-validation-dataset-after-validation-has-been-done/12246
Hello, I have a question about methodology. I have a good dataset (about 8000 samples) which I can use to train a model on an NLP task. As usual, I would normally divide the dataset into training and validation (90% - 10%), or perhaps even training, validation and test datasets (80% - 10% - 10%), and perform training as usual on the training set, then do a hyper parameter search using the performance on the validation set, and once I’m happy with my model parameters, I would run it once on the test set to get the final performance metrics. Now, assuming (seems a reasonable assumption) that the model will not get worse if we add more training data to it, I was wondering if there is anything wrong with re-training the model with the full dataset once the steps above (validation and testing) have been completed. Obviously you can’t then re-test the model (because the newly trained model will have already seen all the samples in your dataset), but it seems reasonable to say that the performance metric you got on the test set when you only used 80% of the samples for training will not get any worse if you add the remaining 20% of the samples to the training set, hence you can keep the performance you got as a baseline, and then train the model on the full dataset knowing that the performance, if anything, is only going to get better than that. Is there anything fundamentally wrong with this approach from a scientific or philosophical perspective? Any reason why it shouldn’t be done?
It might work well, it might not. When you add data to your training run, the model will converge differently because it sees different data and optimizes accordingly. This might be good or bad (it is not a given that it will deterministically end up being better simply because you gave it more data), but the problem is that you simply cannot tell because you have no held-out set anymore. If you do wish to squeeze everything out of your data, cross validation is recommended.
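For example, with scikit-learn's KFold (just a sketch: train_and_evaluate stands for whatever training/evaluation routine you already have, and I'm assuming a datasets.Dataset object):

from sklearn.model_selection import KFold
import numpy as np

kf = KFold(n_splits=5, shuffle=True, random_state=42)
fold_scores = []
for train_idx, val_idx in kf.split(np.arange(len(dataset))):
    train_split = dataset.select(train_idx)   # held-in folds
    val_split = dataset.select(val_idx)       # held-out fold
    score = train_and_evaluate(train_split, val_split)  # your own routine
    fold_scores.append(score)

print("mean validation score:", sum(fold_scores) / len(fold_scores))

The averaged score gives you an estimate of generalization even if you later retrain one final model on all of the data.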
1
huggingface
Models
Fine-tuning GPT2 for bidirectional training
https://discuss.huggingface.co/t/fine-tuning-gpt2-for-bidirectional-training/12155
Hello, we all know that GPT-2 is widely used as an autoregressive model in text generation tasks. May I ask whether I could further train GPT-2 with bidirectional attention, like BERT as an auto-encoding model? If yes, how could I flexibly switch between uni- and bidirectional training in GPT-2? Thanks a lot!!
I don’t think this is possible at all due to structural differences in the model’s design, but I may be wrong. Reference this if you want more clarity on what I am saying - Comparison between BERT, GPT-2 and ELMo | by Gaurav Ghati | Medium " Drawbacks: GPT is its uni-directional nature — the model is only trained to predict the future left-to-right context ." - from the article Hope this answers your question
0
huggingface
Models
Identifying sections of a text document
https://discuss.huggingface.co/t/identifying-sections-of-a-text-document/12145
Hello all, I have a question and I’m not sure where to ask so I’m asking it in general. I’m working on a problem where I identify different sections of Job Description. For example skills section, company information section, job requirements section etc. I thought about it using NER and QA System but: Entities in NER system and answers in QA system can’t be that much long (a complete section of text). Is there any specific domain in NLP for these kind of problems? Are there any pre-trained models for these kind of problems on HuggingFace?
If you know the possible names of the sections, you can try to use a regex to find them and partition the text on them (maybe not the best method, but it should get the job done). I don't know of an NLP way to do this other than some sort of few-shot learning, where you take a model like DeBERTa-v2 MNLI, give it a few examples to train on, and hope for the best.
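Something along these lines, for instance (the heading names are made up; replace them with whatever actually appears in your job descriptions):

import re

HEADINGS = ["Skills", "Company Information", "Job Requirements", "Responsibilities"]
pattern = re.compile(r"^\s*(" + "|".join(map(re.escape, HEADINGS)) + r")\s*:?\s*$",
                     re.IGNORECASE | re.MULTILINE)

def split_sections(text):
    # slice the document between consecutive heading matches
    sections = {}
    matches = list(pattern.finditer(text))
    for i, m in enumerate(matches):
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        sections[m.group(1).lower()] = text[start:end].strip()
    return sections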
0
huggingface
Models
Data type error while trying to fine tune Deberta v3 Large
https://discuss.huggingface.co/t/data-type-error-while-trying-to-fine-tune-deberta-v3-large/11604
[screenshot of the data type error] I was trying this: microsoft/deberta-v3-large · Hugging Face Command I ran: python3 run_glue.py --model_name_or_path microsoft/deberta-v3-large --task_name mnli --do_train --do_eval --evaluation_strategy steps --max_seq_length 256 --warmup_steps 50 --learning_rate 6e-5 --num_train_epochs 3 --output_dir outputv3 --overwrite_output_dir --logging_steps 10000 --logging_dir outputv3/ while in this folder: /transformers/examples/pytorch/text-classification$ For the fuller sequence of issues that occurred and got solved, see the earlier thread, where nielsr wrote: "I've created a notebook for you: Google Colab" Thanks for reading.
I just tested with microsoft/deberta-v3-large and I don’t get any error : the training starts well for me. I just added 2 more arguments --per_device_train_batch_size 2 --per_device_eval_batch_size 2 because otherwise I didn’t have enough VRAM. I don’t have exactly the same setting as you, though. Would it be possible for you to try again with the same settings as me (first trying with the last transformers version on master, then with Python version: 3.8.10 and finally with PyTorch 1.9.0)? My setting: - `transformers` version: 4.13.0.dev0 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cu102 (True) Can’t wait to see how it turns out on your end!
1
huggingface
Models
Using T5-Base via Inference API
https://discuss.huggingface.co/t/using-t5-base-via-inference-api/8557
Hi, I’m trying to use T5-base model for summarization task using Inference API. I added the T5-specific prefix "summarize: " to the text. However, the model is returning translation_text as output instead of summary_text. (I was able to use t5-base model for summarization using a model and tokenizer as described here: Summary of the tasks — transformers 4.7.0 documentation 2)
Hello @mohsenalam. Did you solve your issue? On my side, I would like to use Inferenec API for T5 base as well. I opened this thread: How to get Accelerated Inference API for T5 models? 4
0
huggingface
Models
Wav2Vec2 For Indian English
https://discuss.huggingface.co/t/wav2vec2-for-indian-english/5068
I'm trying to build an Automatic Speech Recognition model for Indian English (accents, dialect, etc.). I have around 15 hours of labeled data. I followed the steps in the blog post by @patrickvonplaten, replacing the TIMIT dataset with my own and keeping everything else the same. After training, the WER is a perfect 1.0. The trained model outputs blank for every file in the test set and I don't know where it is going wrong. Any help would be much appreciated. Is anyone else attempting this?
Vishaal: "outputs blank for every file in the test set" A WER of 1.0 is not a very informative metric by itself. If you are not getting anything in the output, this may be because the model has not learnt anything and some silent errors are happening. Try increasing the number of epochs or other tuning methodologies and see if this resolves the issue.
0
huggingface
Models
What is loss function for T5
https://discuss.huggingface.co/t/what-is-loss-function-for-t5/11669
Could you please explain how the loss function for T5 is computed? I mean, it is a seq2seq model. Suppose it must map a sequence of X tokens to Y tokens, but it generates Z tokens. How are Y and Z compared to calculate the loss?
T5 uses the regular cross-entropy loss (as any language model). Suppose that you are fine-tuning T5 for translation, and you have the following training example: * source sentence: "hello how are you" * target sentence: "salut comment ça-va" First, one needs to tokenize the sentences for the model using T5Tokenizer. Assuming that every word is tokenized into a single token, and we also add T5’s special token (namely </s> - which indicates the end of a sequence), we provide the follow inputs to the model: * input tokens = [hello, how, are, you, </s>] * label tokens = [salut, comment, ça, -, va, </s>] Of course, we don’t provide these tokens as text to the model, but rather as integer IDs, which refer to row indices in an embedding matrix, so the actual inputs will look like: * input_ids = [21820, 149, 33, 25, 1] * labels = [20239, 1670, 3664, 18, 900, 1] In that case, you first provide the input_ids to T5’s encoder, which will turn it into a tensor of shape (batch_size, seq_len, hidden_size). Next, T5’s decoder will predict, for each token of the target sequence, the correct next token. This happens as follows: salut comment ça - va </s> => label tokens 20239 1670 3664 18 900 1 => labels ---------------------------------------------------------------------------------------------- DECODER ---------------------------------------------------------------------------------------------- 0 20239 1670 3664 18 900 => decoder_input_ids decoder_start_token salut comment ça - va => decoder input tokens In other words, what happens is, we prepend the decoder inputs with a special token (the decoder start token - which for T5 is the padding token, with index 0), and then the decoder needs to predict (in parallel) that: the token that follows the decoder start token is “salut”. Here, we compute the cross-entropy loss between the prediction of the model and the target token (which is “salut”). the token that follows “salut” is “comment”. Here, we compute the cross-entropy loss between the prediction of the model and the target token (which is “comment”). the token that follows “comment” is “ça”. Here, we compute the cross-entropy loss between the prediction of the model and the target token (which is “ça”). etc. the token that follows “va” is “</s>” (meaning, the end-of-sequence or EOS token). Here, we compute the cross-entropy loss between the prediction of the model and the target token (which is “</s>”). In the code 7, this is done in one go, namely by comparing the logits of the model - which are of shape (batch_size, seq_len, vocab_size) - to the ground truth labels: loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
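In practice you rarely assemble this yourself: if you pass labels to T5ForConditionalGeneration, the shifting and the cross-entropy computation happen inside the model. A minimal example (padded label positions would additionally be set to -100 so they are ignored):

from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to French: hello how are you", return_tensors="pt")
labels = tokenizer("salut comment ça va", return_tensors="pt").input_ids

# the model shifts the labels internally to build decoder_input_ids and returns
# the average cross-entropy loss over the target tokens
outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
print(outputs.loss)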
0
huggingface
Models
How to conver Pytorch model to Flax
https://discuss.huggingface.co/t/how-to-conver-pytorch-model-to-flax/11847
I want to train an mT5 model on more data for a specific language; how can I resume training (not from scratch)? I found a training example in the huggingface examples, but that is for Flax and from scratch. If I change FlaxT5ForConditionalGeneration to T5ForConditionalGeneration and load my PyTorch model, would it work? If not, how can I convert my PyTorch model to Flax so that I can use this code?
I believe you can convert a model from one framework to the other as follows: from transformers import T5ForConditionalGeneration model = T5ForConditionalGeneration.from_pretrained(model_name, from_flax=True)
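For the direction you actually asked about (PyTorch to Flax), I believe the reverse flag works the same way. A sketch, assuming the path points to your saved PyTorch mT5 checkpoint:

from transformers import FlaxMT5ForConditionalGeneration

# load the PyTorch weights into the Flax class
flax_model = FlaxMT5ForConditionalGeneration.from_pretrained("path/to/pytorch_checkpoint", from_pt=True)
# save a Flax checkpoint so the Flax training script can pick it up directly
flax_model.save_pretrained("path/to/flax_checkpoint")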
0
huggingface
Models
SQuAD/BERT: Why max_length=384 by default and not 512?
https://discuss.huggingface.co/t/squad-bert-why-max-length-384-by-default-and-not-512/11693
Why do training scripts for fine-tuning BERT-based models on SQuAD (e.g., this one from google or this one from HuggingFace, use set a maximum length of 384 (by default) for input sequences even though the models can handle inputs of length up to 512? (This maximum length refers to the combined length of the question and context, right? Regardless, the questions in the SQuAD dataset typically have length significantly less than 128.)
We use the same default as the Google scripts to reproduce their results. I'm guessing the 384 was a compromise for the regular SQuAD dataset: large enough that most question/context pairs are tokenized without any truncation, while keeping the sequences small enough to train fast.
1
huggingface
Models
Pre-training for Wav2Vec2-XLSR via Huggingface
https://discuss.huggingface.co/t/pre-training-for-wav2vec2-xlsr-via-huggingface/7490
Hi guys! I note that most topics are related to fine-tuning a pre-trained model. But if I have some new unlabeled data, how can I perform the pre-training process via Huggingface?
Hey Javen, We now have an official wav2vec2 pretraining example here: transformers/examples/pytorch/speech-pretraining at master · huggingface/transformers · GitHub
0
huggingface
Models
Getting error while fine tuning Deberta v3 Large
https://discuss.huggingface.co/t/getting-error-while-fine-tuning-deberta-v3-large/11486
I have been trying to fine tune the model using the instructions given in - microsoft/deberta-v3-large · Hugging Face but I am getting ImportError: This example requires a source install from HuggingFace Transformers (see https://huggingface.co/transformers/installation.html#installing-from-source), but the version found is 4.11.3. so I cloned the transformers repo on my device and now I am getting an error saying it can’t run the run_glue.py. What am I doing incorrectly? Thank you.
full error looks like FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects `--local_rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions FutureWarning, /usr/bin/python3: can't open file ' run_glue.py': [Errno 2] No such file or directory ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 0 (pid: 6570) of binary: /usr/bin/python3 Traceback (most recent call last): File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/nikhil/.local/lib/python3.6/site-packages/torch/distributed/launch.py", line 193, in <module> main() File "/home/nikhil/.local/lib/python3.6/site-packages/torch/distributed/launch.py", line 189, in main launch(args) File "/home/nikhil/.local/lib/python3.6/site-packages/torch/distributed/launch.py", line 174, in launch run(args) File "/home/nikhil/.local/lib/python3.6/site-packages/torch/distributed/run.py", line 713, in run )(*cmd_args) File "/home/nikhil/.local/lib/python3.6/site-packages/torch/distributed/launcher/api.py", line 131, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/nikhil/.local/lib/python3.6/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent failures=result.failures, torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ run_glue.py FAILED ------------------------------------------------------------ and although it says no such file found, the run_glue.py file is there in the correct folder
0
huggingface
Models
TAPAS question answering for missing informations?
https://discuss.huggingface.co/t/tapas-question-answering-for-missing-informations/5954
Hello! I have experimented a bit with the pre-trained TAPAS models available, and they actually work quite well. I tried asking questions that don’t have an answer in the table and still get an output, as expected. My idea was to filter questions without answers by looking at the probabilities in the output of the model. Unfortunately, even when there isn’t any answer in the table, the model outputs extremely confident answers (probability > 0.9). How can I treat questions without answers? I don’t have a lot of computer power so fine-tuning will be very time-consuming and hard for me. Is there an alternative way?
I tried shuffling the rows and columns of the table and if the question does not have an answer in the table it returns different answers on each try.
0
huggingface
Models
Continuing Pre Training from Model Checkpoint
https://discuss.huggingface.co/t/continuing-pre-training-from-model-checkpoint/11392
Hi, I pre-trained a language model for my own data and I want to continue the pre-training for additional steps using the last checkpoint. I am planning to use the code below to continue the pre-training but want to be sure that everything is correct before starting. Let’s say that I saved all of my files into CRoBERTa. model = RobertaForMaskedLM.from_pretrained(‘CRoBERTa/checkpoint-…’) tokenizer = RobertaTokenizerFast.from_pretrained(‘CRoBERTa’, max_len = 512, padding = ‘longest’) training_args = TrainingArguments(overwrite_output_dir = False, …) trainer = Trainer(…) trainer.train(resume_from_checkpoint = True) Is this pipeline correct ? Is there anything I am missing ?
If you use trainer.train(resume_from_checkpoint = True) The Trainer will load the last checkpoint it can find, so it won’t necessarily be the one you specified. It will also resume the training from there with just the number of steps left, so it won’t be any different from the model you got at the end of your initial Trainer.train.
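If you want to resume from one particular checkpoint, you can pass its path instead of True (the checkpoint name below is just an example):

trainer.train(resume_from_checkpoint="CRoBERTa/checkpoint-50000")

And if the goal is to train for additional steps beyond the original schedule, remember to also increase num_train_epochs or max_steps in the TrainingArguments, otherwise the resumed run will consider itself already finished.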
0
huggingface
Models
An error with BART model generate
https://discuss.huggingface.co/t/an-error-with-bart-model-generate/10734
I try to generate with this code: model.generate(input_ids=None, inputs_embeds=student_embeddings, attention_mask=node_masks, num_beams=4, max_length=config["max_seq_length"], early_stopping=True) As you can see, my input_ids is None, but when I run this code the system returns the error: File "/home/amax/anaconda3/envs/shj_dev/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 740, in forward raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") ValueError: You cannot specify both input_ids and inputs_embeds at the same time I tried to read the code and found this in generation_utils.py: if input_ids is None: # init input_ids with bos_token_id input_ids = self._prepare_input_ids_for_generation(bos_token_id, model_kwargs.get("encoder_outputs")) This code assigns a value to input_ids, and then the system returns the error "You cannot specify both input_ids and inputs_embeds at the same time". How can I fix it?
It seems like a bug?
0
huggingface
Models
Decoding Large Audio Files Using Wav2Vec2ForCTC Model
https://discuss.huggingface.co/t/decding-large-audio-files-using-wav2vec2forctc-model/11097
I've been working with the Wav2Vec2ForCTC model for a while. I used to have small audio files, i.e., audio files with relatively short durations (~1 min). When I tested the model on a large file (~14 mins), the model could not handle it on GPU, so I shifted to using the CPU. I noticed that it used more than 200 GB of RAM to decode! I've tried to split the audio file into smaller audio files and use hidden states to link them together while decoding each segment, but I could not find a way to feed the hidden states of the current audio file to the next audio file for the model to use while decoding! Any ideas or suggestions? @patrickvonplaten
Try using VAD to split the large audio file into smaller clips, using the pauses (ends of sentences). ASR performs well without any special way of feeding state from one clip to the next.
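If you don't have a full VAD pipeline at hand, a simple silence-based splitter can serve as a rough substitute, for example with pydub (the thresholds are guesses you will want to tune for your recordings):

from pydub import AudioSegment
from pydub.silence import split_on_silence

audio = AudioSegment.from_wav("long_recording.wav")
# split on pauses of at least 700 ms that are 16 dB quieter than the average loudness
chunks = split_on_silence(audio, min_silence_len=700,
                          silence_thresh=audio.dBFS - 16, keep_silence=300)
for i, chunk in enumerate(chunks):
    chunk.export(f"chunk_{i:04d}.wav", format="wav")

You can then run Wav2Vec2 on each chunk independently and simply concatenate the transcriptions.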
0
huggingface
Models
Failed to get mT5 model
https://discuss.huggingface.co/t/failed-to-get-mt5-model/5551
I got this error when getting mT5 model. Can anyone help? (base) notooth@Debian:~$ python Python 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import MT5Model, T5Tokenizer >>> model = MT5Model.from_pretrained("google/mt5-small") Traceback (most recent call last): File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers-4.4.2-py3.8.egg/transformers/modeling_utils.py", line 1062, in from_pretrained File "/home/notooth/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 527, in load with _open_zipfile_reader(f) as opened_zipfile: File "/home/notooth/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 224, in __init__ super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer)) RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /tmp/pip-req-build-66hwoyb6/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at /tmp/pip-req-build-66hwoyb6/caffe2/serialize/inline_container.cc:132) frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6d (0x7f82bc66e2ad in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libc10.so) frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x25db (0x7f82b8a9f2bb in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch.so) frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x7b (0x7f82b8aa07cb in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch.so) frame #3: <unknown function> + 0x65d00e (0x7f82bbe1600e in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x1375f9 (0x7f82bb8f05f9 in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so) <omitting python frames> frame #32: __libc_start_main + 0xea (0x7f82ccc1dd0a in /lib/x86_64-linux-gnu/libc.so.6) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers-4.4.2-py3.8.egg/transformers/modeling_utils.py", line 1064, in from_pretrained OSError: Unable to load weights from pytorch checkpoint file for 'google/mt5-small' at '/home/notooth/.cache/huggingface/transformers/8e7b2a80ddcb5611b27d8c89e1e8e33a947e105415051402a22b9c8d7d1caeb0.e22331f3a065b885b30ae3dd1ff11ccaf7fbc444485f6eb07ef5e0138bca8b70'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
I have the same problem
0
huggingface
Models
How to get answer with RobertaForQuestionAnswering
https://discuss.huggingface.co/t/how-to-get-answer-with-robertaforquestionanswering/11080
Dear list, What I would like to do is to pretrain a model and finetune it for Q&A with the SQuAD dataset. I trained a RoBERTa model after reading the blog post on the huggingface site. To turn it into a Q&A model, I did the finetuning following this document: Fine-tuning with custom datasets — transformers 4.11.3 documentation. (I changed DistilBERT to RoBERTa to match the pretrained model and the fine-tuning training.) As a result, I've got a fine-tuned model and want to test it. So, I loaded the model as described in this document: RoBERTa — transformers 4.11.3 documentation However, the output doesn't show the 'answer' that I am looking for. Could you tell me how I can get the answer? I've searched the code but I couldn't find it. Thank you in advance. Best regards, Seungho
Let’s take an example with an already-finetuned RoBERTa model from the hub. from transformers import RobertaTokenizer, RobertaForQuestionAnswering model_name = "deepset/roberta-base-squad2" tokenizer = RobertaTokenizer.from_pretrained(model_name) model = RobertaForQuestionAnswering.from_pretrained(model_name) question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" encoding = tokenizer(question, text, return_tensors="pt") # forward pass outputs = model(**encoding) predicted_start_idx = outputs.start_logits.argmax(-1).item() predicted_end_idx = outputs.end_logits.argmax(-1).item() # decode predicted_answer = tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1]) print(predicted_answer) xxxForQuestionAnswering models output start_logits and end_logits, indicating which token the model thinks is at the start of the answer, and which token is at the end of the answer. Both are of shape (batch_size, sequence_length). To get the highest score (logit), we do an argmax on the last dimension . Next, we use the decode method of the tokenizer to turn the predicted token IDs back to text.
0
huggingface
Models
How to get model size?
https://discuss.huggingface.co/t/how-to-get-model-size/11038
Hi, everyone! Is there a quick way to get model size for an arbitrary transformers model?
Hi 🤗 Do you mean the size in MB or the number of parameters used? Regarding the size, check this Stack Overflow answer: How to get the size of a Hugging Face pretrained model? (answered by Tobias Hölzer). Regarding the number of parameters, in PyTorch you can use: sum(p.numel() for p in model.parameters())
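If you just want a quick number without following the link, something like this gives a rough size of the weights in memory (it ignores non-parameter buffers):

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
num_params = sum(p.numel() for p in model.parameters())
size_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1024**2
print(f"{num_params:,} parameters, ~{size_mb:.0f} MB of weights")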
0
huggingface
Models
Will XLM-R-(X)XL be available in Models?
https://discuss.huggingface.co/t/will-xlm-r-x-xl-be-available-in-models/8336
Hi guys, PyTorch FairSeq has recently published larger XLM-R models here: fairseq/examples/xlmr at master · pytorch/fairseq · GitHub 13 They have XLM-R-XL and XLM-R-XXL models that are much larger than the XLM-R-Base and XLM-R-Large that are currently available in Models. Is it possible to load these models in Transformers? What would it take to convert them?
I know this is an old post, but in case you are still interested there is an ongoing pull request for it here: Add support for XLM-R XL and XXL models by modeling_xlm_roberta_xl.py by Soonhwan-Kwon · Pull Request #13727 · huggingface/transformers · GitHub 43
0
huggingface
Models
How to train a gpt2 with colab pro
https://discuss.huggingface.co/t/how-to-train-a-gpt2-with-colab-pro/8303
We have a 35GB dataset for the Turkish language. We've preprocessed and cleaned the whole text, but we have no GPU and no powerful computer, which is why we hope Colab Pro can make it happen. We want to pre-train a BERT, Longformer, BigBird and GPT-2. When we start tokenizing the text for training, Colab collapses. We are looking for a complete guide to train these models via checkpoints in a way that works on Colab Pro. No Google Cloud suggestions please: it costs more than 7k$, and we are just independent student researchers with pocket money. So please guide us. We will share the models via huggingface, so it is a win-win situation. Kind regards, emre
Yes, I have been stuck on this issue for a long time and I still haven’t been able to solve it. I’m open to any ideas that might help.
0
huggingface
Models
RAG Retriever: hf vs legacy vs exact vs compressed
https://discuss.huggingface.co/t/rag-retriever-hf-vs-legacy-vs-exact-vs-compressed/2135
In the eval_rag.py file under examples, I notice that the arguments for index_name is “hf” or “legacy”. How are these different from “exact” vs “compressed”?
Hi! The last question was answered here: RAG Retriever : Exact vs. Compressed Index? exact vs compressed refers to the quantization used for the FAISS index. The compressed one uses an IVF index with product quantization and requires significantly less RAM than the exact one. To reproduce the RAG paper's results you will need the exact one, though. Note that I will update the parameters of both indexes this week so that the exact one uses the same parameters as RAG's paper, and also to have an optimized compressed one.
0
huggingface
Models
GPT-J weights on HuggingFace
https://discuss.huggingface.co/t/gpt-j-weights-on-huggingface/10922
The GPT-J model which is available on HuggingFace - is this a full weight model or slim weight? Any idea?
You can use both. The default one is the full version (float32). To use the slim version, you can do the following: from transformers import GPTJForCausalLM import torch model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16, low_cpu_mem_usage=True)
0
huggingface
Models
How much memory required to load T0pp
https://discuss.huggingface.co/t/how-much-memory-required-to-load-t0pp/10904
Hi, I’m trying to load the T0pp model (49GB). However, after quite a while, the system threw a read error. I suppose it’s because the memory of my machine is not enough to load it. Any info about how much memory is required to load the model? Or is there any trick to go around it? Thank you very much.
Hi, Looking at the model repo 1, it seems to be 41.5 GB. Actually you need twice as much CPU RAM in order to load the model. When calling .from_pretrained(), the model actually gets loaded twice: once with randomly initialized weights, once with the pretrained weights. However, @stas has added a new (experimental) argument called low_cpu_mem_usage, which can be set to True, in order to only load the model once into CPU memory (directly with the pretrained weights), see this PR 2. So using that argument, it requires at least 41.5 GB of CPU RAM. Next, if you want to perform inference on GPU, you also need at least the same amount of GPU RAM (41.5 GB) in order to put the model on it, + you need some extra space for the data you put on it, as well as the activations (i.e. logits).
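Concretely, that would look like this (assuming a recent transformers version, since the flag is still experimental):

from transformers import AutoModelForSeq2SeqLM

# loads the pretrained weights directly instead of first materializing a randomly
# initialized copy, roughly halving the peak CPU RAM needed
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True)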
0
huggingface
Models
Loss behaviour for bert fine-tuning on QNLI
https://discuss.huggingface.co/t/loss-behaviour-for-bert-fine-tuning-on-qnli/6054
Hello everyone, I trained Bert on the QNLI dataset for 20 epochs and here are the losses I got: We can see that the training loss is increasing before dropping between each epoch, and I don’t really understand this behaviour, has anyone an idea where it might come from ? Is it normal, or do you think it could come from a problem in my code ? Also there are “spikes” appearing at the end of each epoch (especially after 8 epochs). I think it comes from the size of my batches. It is quite small, so my last batch contains only 2 samples. Here are the specification of my training: The model I used is “bert-base-cased” which I got pre-trained from the Transformers library, same for the tokenizer. I split the training set in a 80/20 ratio to get the validation set. I optimized using Adam with a learning rate of 3e-5, nothing else. I am evaluating on the validation set 10 times per epoch. And here is the code I use on each batch for training: optimizer.zero_grad() output = model(input_ids, attention_mask=attention_masks, token_type_ids = token_type_ids, labels=labels) loss = output.loss loss.backward() optimizer.step() Thank you in advance for your help !
Dear @lucasval and @nielsr, Thank you for your answers. In fact I already found what was wrong, I forgot to set the model in training mode at the end of my evaluation loop! Doing so the dropout layers would only be random for the first 1/10th of each epoch, and would be in eval mode for the remaining 9/10th of each epoch, which explains the periodic behavior of the loss. Again thank you for your replies! Best regards, Axel
1
huggingface
Models
Fine-tuning Pegasus
https://discuss.huggingface.co/t/fine-tuning-pegasus/1433
Hi I’ve been using the Pegasus model over the past 2 weeks and have gotten some very good results. I would like to fine-tune the model further so that the performance is more tailored for my use-case. I have some code up and running that uses Trainer. However, when looking at examples, the model does worse after training. In fact, the model output has a lot of repeating strings, the more the model is trained (i.e., more epochs). I’m wondering if my implementation is wrong, or if Trainer is not suitable for fine-tuning Pegasus (‘google/pegasus-xsum’). Am I running into catastrophic forgetting? My code is not long, I’ve attached it below. I mostly used the tutorial(s) from: huggingface.co Fine-tuning with custom datasets — transformers 3.3.1 documentation 185 Thanks!!! import pandas as pd in_df = pd.read_csv('/content/drive/My Drive/summaries_sample.csv') # Train Test Split train_pct = 0.6 test_pct = 0.2 in_df = in_df.sample(len(in_df), random_state=20) train_sub = int(len(in_df) * train_pct) test_sub = int(len(in_df) * test_pct) + train_sub train_df = in_df[0:train_sub] test_df = in_df[train_sub:test_sub] val_df = in_df[test_sub:] train_texts = list(train_df['allTextReprocess']) test_texts = list(test_df['allTextReprocess']) val_texts = list(val_df['allTextReprocess']) train_decode = list(train_df['summaries']) test_decode = list(test_df['summaries']) val_decode = list(val_df['summaries']) import transformers import torch min_length = 15 max_length = 40 # Setup model model_name = 'google/pegasus-xsum' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = transformers.PegasusTokenizer.from_pretrained(model_name) model = transformers.PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) in_text = [in_df['allTextReprocess'].iloc[3]] batch = tokenizer.prepare_seq2seq_batch(in_text, truncation=True, padding='longest').to(torch_device) translated = model.generate(min_length=min_length, max_length=max_length, **batch) tgt_text0 = tokenizer.batch_decode(translated, skip_special_tokens=True) print(tgt_text0) # Tokenize train_encodings = tokenizer(train_texts, truncation=True, padding=True) val_encodings = tokenizer(val_texts, truncation=True, padding=True) test_encodings = tokenizer(test_texts, truncation=True, padding=True) train_labels = tokenizer(train_decode, truncation=True, padding=True) val_labels = tokenizer(val_decode, truncation=True, padding=True) test_labels = tokenizer(test_decode, truncation=True, padding=True) # Setup dataset objects class Summary_dataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels['input_ids'][idx]) # torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.encodings) train_dataset = Summary_dataset(train_encodings, train_labels) val_dataset = Summary_dataset(val_encodings, val_labels) test_dataset = Summary_dataset(test_encodings, test_labels) # Training from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay 
logging_dir='./logs', # directory for storing logs logging_steps=10, ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train() # Check results in_text = [in_df['allTextReprocess'].iloc[3]] batch = tokenizer.prepare_seq2seq_batch(in_text, truncation=True, padding='longest').to(torch_device) translated = model.generate(min_length=min_length, max_length=max_length, **batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) print(tgt_text) Any help would be awesome, thanks!
I also want to finetune Pegasus. Thank you for sharing your code! How similar is this to what happens in https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh 197 ?
0
huggingface
Models
OnnxConfig for LayoutLMv2
https://discuss.huggingface.co/t/onnxconfig-for-layoutlmv2/10720
Hello, has anyone tried making an ONNX config for LayoutLMv2? Do you think it is possible? Thanks.
It would be really cool to have ONNX support for LayoutLMv2! I haven’t looked into ONNX yet, but I assume it should be possible. Would be cool if you make a PR for this. You can take a look at the documentation 6 to get started.
0
huggingface
Models
Pegasus Model Weights Compression/Pruning
https://discuss.huggingface.co/t/pegasus-model-weights-compression-pruning/6381
I have trained Pegasus for conditional generation on a custom dataset and have been successfully using it. My project's aim is text summarization deployed in a Heroku app. My model weights are over 2.5GB and Heroku can't support that. I trained Pegasus with seq2seq.py in the legacy dir in transformers. I was thinking pruning might help in reducing the model size significantly while still keeping the accuracy and performance intact. I couldn't try Keras pruning due to the complexity of the trainer present in seq2seq.py, and I do not have any idea how that works. Any way you guys could help would be much appreciated. Thanks
hey @SteveMama, have you tried quantizing your model first? this might be a “quick win” to reduce the size of your model by 2-3x. if that’s not good enough, then my suggestion would be to try movement pruning 3 which is designed for fine-tuning language models and is implemented in the nn_pruning library: GitHub - huggingface/nn_pruning: Prune a model while finetuning or training. 10 (i’m not sure if pegasus is supported in the library, but you can post and issue to find out!)
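For the quantization option, PyTorch dynamic quantization is probably the quickest thing to try. A sketch (the checkpoint path is a placeholder; note this only helps for CPU inference, and you should re-check summary quality afterwards):

import torch
from transformers import PegasusForConditionalGeneration

model = PegasusForConditionalGeneration.from_pretrained("path/to/your-finetuned-pegasus")
# swap the Linear layers for int8 dynamically quantized versions
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(quantized_model.state_dict(), "pegasus_quantized.pt")

To load it back, apply quantize_dynamic to a freshly instantiated model first and then call load_state_dict on it.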
0
huggingface
Models
Error while loading the checkpoints
https://discuss.huggingface.co/t/error-while-loading-the-checkpoints/10644
I finetuned XLM-RoBERTa and got its .ckpt file, but I am unable to load these checkpoints for testing. While loading, I am getting errors like this: 'XLMRobertaModel' object has no attribute 'load_from_checkpoint.' I am using this code to load the checkpoint:

from transformers import AutoTokenizer
model_checkpoint = 'deepset/xlm-roberta-base-squad2'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
from transformers import AutoModelForQuestionAnswering
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
chk_pth = "/content/drive/MyDrive/Russian-QA/epoch=18-step=53826.ckpt"
model2 = model.load_from_checkpoint(chk_pth)

Kindly help me sort out a solution.
I’m not sure where you have seen the load_from_checkpoint method but it does not exist in Transformers. Same for the .ckpt file, it’s not something generated from our side.
0
huggingface
Models
MobileBERT too slow?
https://discuss.huggingface.co/t/mobilebert-too-slow/6198
Is it just me, or is MobileBERT much slower than DistilBERT on Huggingface? When I train/fine-tune MobileBERT on a GTX 1070, I get 3.8 it/sec. However, when I train DistilBERT on the same GPU, I get 15 it/sec. Am I missing something? The paper on MobileBERT states that MobileBERT should be faster.
I have exactly the same issue. Too slow. And in my case, it doesn’t converge at all. I increased the learning rate as was advised in the paper, but it didn’t help. Any idea? My code is working perfectly with normal bert and distill_bert! But very bad performance with mobilebert.
0
huggingface
Models
Can’t load weights for ‘facebook/bart-base’
https://discuss.huggingface.co/t/cant-load-weights-for-facebook-bart-base/10367
OSError: Can’t load weights for ‘facebook/bart-base’. Make sure that: ‘facebook/bart-base’ is a correct model identifier listed on ‘Models - Hugging Face’ or ‘facebook/bart-base’ is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt Can anyone please help?
If you are trying to run your code on a server or HPC system and you receive this error, one solution is to first run the step that downloads the model outside of the batch system and then use the downloaded model within the batch system. This solution resolved my error which was exactly the same error you mentioned here.
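Concretely, that could look like this (the storage path is just a placeholder for your cluster's shared filesystem):

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# run once on a node with internet access
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model.save_pretrained("/shared/storage/bart-base")
tokenizer.save_pretrained("/shared/storage/bart-base")

# inside the batch job, load from the local directory (no network access needed)
model = AutoModelForSeq2SeqLM.from_pretrained("/shared/storage/bart-base")
tokenizer = AutoTokenizer.from_pretrained("/shared/storage/bart-base")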
1
huggingface
Models
Encoding error with fine-tuned model
https://discuss.huggingface.co/t/encoding-error-with-fine-tuned-model/8769
Hello there! I have a question regarding the fine-tuning of mbart. I did the training like the example here https://github.com/huggingface/transformers/tree/v4.6.1/examples/pytorch/translation 2 and obtained a model pytorch_model.bin However when trying to use the model to translate I get an UnicodeDecodeError UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte The complete error is the following. As far as I can see it produces when loading the model I obtained from fine-tuning. Traceback (most recent call last): File "mbart/predict.py", line 41, in <module> main() File "mbart/predict.py", line 21, in main model = MBartForConditionalGeneration.from_pretrained(opt.model) File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1080, in from_pretrained **kwargs, File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/site-packages/transformers/configuration_utils.py", line 427, in from_pretrained config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/site-packages/transformers/configuration_utils.py", line 495, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/site-packages/transformers/configuration_utils.py", line 578, in _dict_from_json_> text = reader.read() File "/home/claudia/anaconda3/envs/speechenv/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte Any ideas on how to solve this? Thanks in advance
@claudia did you ever solve this?
0
huggingface
Models
Sentence similarity models not capturing opposite sentences
https://discuss.huggingface.co/t/sentence-similarity-models-not-capturing-opposite-sentences/10388
I have tried different models for sentence similarity, namely: distilbert-base-uncased bert-base-uncased sentence-transformers/all-mpnet-base-v2 I used them together with the packages sentence-similarity 2 and sentence-transformers 2, which simplify the programming. I have also tried Universal Sentence Encoders (en_use_md and en_use_cmlm_lg). However, while these models generally correctly detect similarity for equivalent sentences, they all fail when inputting negated sentences. E.g., these opposite sentences: “I like rainy days because they make me feel relaxed.” “I don’t like rainy days because they don’t make me feel relaxed.” return a similarity of 0.993 with the model distilbert-base-uncased. However, sentences that could be considered very similar: “I like rainy days because they make me feel relaxed.” “I enjoy rainy days because they make me feel calm.” return a similarity of 0.996, which is barely higher. To my understanding, opposite sentences should have a small similarity, especially when semantics are taken into account. My question is: Are there any models/approaches that are able to capture the affirmative/negative nature of sentences when calculating similarity?
This relates back to a discussion that was had on Twitter not too long ago. The problem is that “similarity” is ill-defined. You can read through the thread 3. I did not add much but the discussion between Nils Reimers and Yoav Goldberg is interesting. It is a good mind exercise to think outside of what you’d want it to mean, and what the models are actually paying attention to. In your example, it is likely that content words receive the most attention and are responsible for a lot of the “meaning” (representation vector). On top of that, lexical overlap inevitably contributes to this value as well. That means that “(rainy) days” and “because they make me feel” already overlap. Yes, in terms of semantics and the whole context the meaning is different, but for the model these sentences are very “similar”. These models do not (necessarily) do sentiment analysis and comparison which seems to be what you are after. You may wish to look for sentiment models instead.
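If polarity is really what you care about, one rough heuristic is to run a sentiment model on both sentences and compare the predicted labels before trusting the similarity score. A sketch with the default sentiment pipeline (it only captures polarity, not negation in general):

from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
a = "I like rainy days because they make me feel relaxed."
b = "I don't like rainy days because they don't make me feel relaxed."
print(sentiment([a, b]))
# if the labels disagree, you can down-weight or reject the embedding similarity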
0
huggingface
Models
Bug in the Flaubert tokenizer_config.json do_lowercase option
https://discuss.huggingface.co/t/bug-in-the-flaubert-tokenizer-config-json-do-lowercase-option/10084
Hi, This is a bug report for the Flaubert tokenizer. The tokenizer_config.json of all models of the Flaubert model repo, for example: here has the wrong option name: { "do_lower_case": true } while it should read do_lowercase, as expected by the FlaubertTokenizer. This results in all Flaubert models having case-sensitive tokenizers. In my project (flaubert-base-uncased model) the bug first manifested itself in transformers v.4.4.0. Previous versions of transformers didn’t download this file, and I noticed it during the version upgrade. It may be related to Pull Request #10624, but I’m not at all sure here and it probably doesn’t really matter. Thanks for correcting the bug and many thanks for the great library. Environment info transformers version: 4.4.0 Platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.10 Python version: 3.8.8 PyTorch version (GPU?): 1.9.0a0+df837d0 (True) Tensorflow version (GPU?): not installed (NA) Using GPU in script?: Yes Using distributed or parallel set-up in script?: No Who can help @thomwolf @patrickvonplaten
Hi, there hasn’t been any answer to my report, maybe, I have posted it to the wrong place or haven’t tagged the correct person? Could someone please help me get the message through? Also, in case this helps, here are some steps, that someone can take to reproduce the incorrect model behaviour: Download the default ‘flaubert-base-uncased’ model from the official repo, fix the seed, start the training and print out something (train loss, some weights, ets). Then in the same downloaded model, modify the "do_lower_case": false and restart the training. Make sure that the printed value is exactly the same. Then still in the same downloaded model, modify the option to "do_lowercase": false, restart the training and once again make sure that the printed value is the same. Basically, those three trainings are the same because in the first two cases the network uses the default value of "do_lowercase": false, while in the third one we explicitly select the option. Finally, in the very same model, set "do_lowercase": true, launch the training and check that now the training is different, and that this is indeed the correct option name, which controls the upper/lower case of the model.
0
huggingface
Models
Using vectors instead of input_ids in BERT
https://discuss.huggingface.co/t/using-vectors-instead-of-input-ids-in-bert/9967
Hi there! I’ve been struggling with this question for a while now, any help would be appreciated! So my problem is that instead of passing the ids in a list (input_ids) to our, say, BERT base model, is there a way I can directly give the one-hot vectors in a tensor to the model? and if so, Is there a problem if the vector is not exactly one-hot but more generally a probability distribution over the vocabulary? Also, I’d rather not use inputs_embeds since that would require that I implement the positional encoding and also map all of them to the model hidden size. I just want to use the basic BERT model with vectors instead of direct ids. Is there a way? As an illustration, I would want to be able to input [[0, 0, 0.1, 0, 0, …, 0.9, 0, 0]] where the length of the vector is that of the vocabulary. I don’t want to just have one of the non-zero indices, as is the case in one-hot vectors.
What would that "mean"? How do you want the embedded value to be calculated based on this? Do you want to take the weighted average over the vocabulary and use those vectors as input to the encoder?
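If the weighted average is what you want, note that BERT's inputs_embeds path still adds the position and token-type embeddings internally, so you would not have to implement positional encoding yourself. A rough sketch (the distribution here is a dummy one, purely for illustration):

import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
vocab_size = model.config.vocab_size

# probs: (batch, seq_len, vocab_size), each row a distribution over the vocabulary
# (a one-hot row reproduces the ordinary embedding lookup exactly)
probs = torch.zeros(1, 4, vocab_size)
probs[..., 2023] = 1.0  # dummy values

word_embeddings = model.get_input_embeddings().weight   # (vocab_size, hidden_size)
inputs_embeds = probs @ word_embeddings                  # (batch, seq_len, hidden_size)

# position and token-type embeddings are added inside the model
outputs = model(inputs_embeds=inputs_embeds)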
0
huggingface
Models
RuntimeError: blank must be in label range
https://discuss.huggingface.co/t/runtimeerror-blank-must-be-in-label-range/4976
Hi, I'm trying to train a Tamil model. I ran the code as explained in Patrick's video, but I ran into this error. Can you help me figure out the reason for it? This is my Colab notebook (the shareable link is in the follow-up post below). [screenshot of the error]
Here is a shareable link to the notebook: https://colab.research.google.com/drive/1SSmJywEvx07TtQSRtSFpxRawzb1lnXIC?usp=sharing
0
huggingface
Models
Continual pre-training from an initial checkpoint with MLM and NSP
https://discuss.huggingface.co/t/continual-pre-training-from-an-initial-checkpoint-with-mlm-and-nsp/6869
I’m trying to further pre-train a language model (BERT here) not from scratch but from an initial checkpoint using my own data. My goal is to later use these further pre-trained models for fine-tuning on some downstream tasks (I have no issue with the fine-tuning part). For the pre-training, I want to use both Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) heads (the same way that BERT is pre-trained 8 where the model’s total loss is the sum of MLM loss and NSP loss). My data is stored in a text file following the standard format for BERT input (each document has multiple sentences separated by newlines and documents are separated by an empty line): sentence 1.1 sentence 1.2 empty line sentence 2.1 sentence 2.2 I have two specific questions and I appreciate any feedback: I have some trouble finding the right function/script in the transformers library for such a purpose. As far as I understand, all the scripts for language modeling 18 only use MLM for pretraining (correct me if I’m wrong.) I wonder if I should use BertForPreTraining for this purpose? Assuming I should use BertForPreTraining, I wonder how I should prepare my data for this model. I’m looking for the right object or data type/format and the right way of tokenizing my input data so that it’s suitable both for MLM and NSP.
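For reference, this is roughly the forward pass I have in mind, as a minimal sketch with toy labels, just to illustrate the two heads I want to train together:

import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# one (sentence A, sentence B) pair; next_sentence_label 0 = B really follows A, 1 = random B
encoding = tokenizer("sentence 1.1", "sentence 1.2", return_tensors="pt")
mlm_labels = encoding.input_ids.clone()  # in a real setup, mask ~15% of tokens and set the rest to -100
nsp_label = torch.tensor([0])

outputs = model(**encoding, labels=mlm_labels, next_sentence_label=nsp_label)
loss = outputs.loss  # sum of the MLM loss and the NSP loss

What I'm less sure about is the right dataset class / data collator to produce such batches from my text file, which is essentially my second question above.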
Hi, I want to do exactly the same as you. Did you find an answer anywhere? How did things go with your approach? I would appreciate any advice that lets me avoid a headache. Thanks in advance.
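In case it is useful to anyone landing here: BertForPreTraining does expose both heads, and its loss is the sum of the MLM and NSP losses. Below is a minimal sketch of mine (not an official script) showing how a single sentence pair can be fed to it; for a full pipeline you would still need to build the sentence-pair sampling and the ~15% token masking yourself, or adapt utilities such as TextDatasetForNextSentencePrediction and DataCollatorForLanguageModeling.

import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# one (sentence A, sentence B) pair; next_sentence_label = 0 means "B really follows A"
encoding = tokenizer("sentence 1.1", "sentence 1.2", return_tensors="pt")

# MLM labels: in practice you would mask ~15% of the tokens and set every
# unmasked position to -100 so it is ignored by the loss; this sketch skips the masking
mlm_labels = encoding["input_ids"].clone()

outputs = model(**encoding, labels=mlm_labels, next_sentence_label=torch.tensor([0]))
print(outputs.loss)  # sum of the masked-LM loss and the NSP loss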
0
huggingface
Models
Fine-tuning T5 with Trainer for novel task
https://discuss.huggingface.co/t/fine-tuning-t5-with-trainer-for-novel-task/5501
Hi guys, I am trying to fine-tune T5 with Huggingface’s Trainer class, trying to recycle as much training code as possible. Yet I am wondering what the Trainer.train() method actually does. In the T5 paper 3 the authors mention three fine-tuning methods that they used (§3.5.1): training only additional adapter layers gradually unfreezing the model and training more and more layers training the whole model right away What does the Huggingface Trainer.train() do? And is there a simple way of switching between strategies?
By default, Trainer.train() will train the entire model (i.e. all layers). To freeze certain layers before training, you can do that as follows:

for name, param in model.named_parameters():
    if name == "...":
        param.requires_grad = False
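As a concrete illustration (my own example, not from the answer above), this is how you could freeze the whole T5 encoder and train only the decoder and head before handing the model to Trainer:

from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# freeze every parameter whose name starts with "encoder."
for name, param in model.named_parameters():
    if name.startswith("encoder."):
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{trainable} trainable parameters")

Gradual unfreezing and adapter layers are not built into Trainer; you would have to implement them yourself, for example with a TrainerCallback that flips requires_grad at chosen epochs.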
0
huggingface
Models
Wav2Vec2: How to correct for nan in training and validation loss
https://discuss.huggingface.co/t/wav2vec2-how-to-correct-for-nan-in-training-and-validation-loss/6089
Hi, I’m using Wav2Vec2ForCTC.from_pretrained(“facebook/wav2vec2-base”) to fine-tune on an English-language medical transcription dataset, which is about 6GB. It is similar to the Timit_ASR dataset, except that the wav files are at 48 kHz. I’m following the example shown in this notebook: Fine-Tune Wav2Vec2 for English ASR in Hugging Face with 🤗 Transformers 1. Thank you @patrickvonplaten for an excellent illustration. My issue is that the training loss and validation loss show either nan or inf, and the WER does not decrease.

/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)

[5895/5895 24:03, Epoch 1/1]

Step   Training Loss   Validation Loss   Wer        Runtime     Samples Per Second
500    4.942800        nan               1.000000   20.818400   18.301000
1000   3.023500        nan               1.000000   20.785500   18.330000
1500   inf             3.477225          1.000000   20.757200   18.355000
2000   nan             3.492772          1.000000   20.922500   18.210000
2500   nan             nan               1.000000   20.817700   18.302000
3000   nan             nan               1.000000   20.788900   18.327000
3500   nan             nan               1.000000   20.850800   18.273000
4000   nan             nan               1.000000   20.928300   18.205000
4500   nan             3.214458          1.000000   20.900100   18.230000
5000   nan             nan               1.000000   20.907300   18.223000
5500   nan             3.297620          1.000000   20.979900   18.160000

TrainOutput(global_step=5895, training_loss=nan, metrics={'train_runtime': 1444.0059, 'train_samples_per_second': 4.082, 'total_flos': 2.494079663135328e+17, 'epoch': 1.0, 'init_mem_cpu_alloc_delta': 1760702464, 'init_mem_gpu_alloc_delta': 377847808, 'init_mem_cpu_peaked_delta': 0, 'init_mem_gpu_peaked_delta': 0, 'train_mem_cpu_alloc_delta': 2907852800, 'train_mem_gpu_alloc_delta': 1116327936, 'train_mem_cpu_peaked_delta': 847519744, 'train_mem_gpu_peaked_delta': 2587473920})

The only exceptions I have to the notebook code are that I’m using torchaudio instead of soundfile and that I’m downsampling to 16 kHz. The downsampling seems to be working correctly because the audio plays well when checked. That code is here:

import torchaudio

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["file_name"])
    batch["speech"] = speech_array[0].numpy()
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["phrase"]
    return batch

dataset = dataset.map(speech_file_to_array_fn, remove_columns=dataset.column_names["train"], writer_batch_size=32, num_proc=4)

import librosa
import numpy as np

def resample(batch):
    batch["speech"] = librosa.resample(np.asarray(batch["speech"]), 48_000, 16_000)
    batch["sampling_rate"] = 16_000
    return batch

dataset = dataset.map(resample, writer_batch_size=32, num_proc=4)

I’ve tried different learning rates. A couple of the 500-increment steps in the table above actually showed a loss number instead of nan, but subsequent losses were nan. I also tried to follow the example in the Fine-tuning XLSR-Wav2Vec2 for Multi-Lingual ASR with Transformers notebook; that resulted in a different error saying that the blank was out of label range. I think I have a vanishing/exploding gradient problem. Perhaps I should try changing learning rates and regularization such as dropout. I’m not sure how to make it work in this notebook. The dataset is from here: Medical Speech, Transcription, and Intent | Kaggle. The notebook is run on Google Colab Pro. Any help would be much appreciated. Thanks, Sidd
Hello, I had the same issue. I can share my solution, though I don’t know whether it will work for you. The PyTorch documentation (CTCLoss — PyTorch 1.8.1 documentation 12) says that the alignment of input to target is assumed to be “many-to-one”, which limits the length of the target sequence such that it must be ≤ the input length. Sometimes the predicted segments were shorter than the true ones, hence the “inf” and “nan” values during training. To fix this, you need to enable zero_infinity:

zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False. Infinite losses mainly occur when the inputs are too short to be aligned to the targets.

You need to do that in your code:

model = Wav2Vec2ForCTC.from_pretrained(path_2_model)
model.config.ctc_zero_infinity = True

Hope it solves your problem. Best, Omar
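A small variation on the snippet above (my own note, relying on the standard config-override behaviour of from_pretrained, not something stated in this thread): the same flag can also be set directly when loading the model.

from transformers import Wav2Vec2ForCTC

# any checkpoint works here; "facebook/wav2vec2-base" is just an example
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base", ctc_zero_infinity=True)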
0
huggingface
Models
How to get “EleutherAI/gpt-j-6B” working?
https://discuss.huggingface.co/t/how-to-get-eleutherai-gpt-j-6b-working/9427
I’m trying to run the EleutherAI/gpt-j-6B 47 model, but with no luck. The code

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

returns the following error:

Traceback (most recent call last):
  File "gptjtest.py", line 18, in <module>
    model = AutoModelForCausalLM.from_pretrained("gpt-j-6B")
  File "/home/marcin/miniconda3/envs/py37/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 383, in from_pretrained
    pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
  File "/home/marcin/miniconda3/envs/py37/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 514, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/home/marcin/miniconda3/envs/py37/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 263, in __getitem__
    raise KeyError(key)
KeyError: 'gptj'

I’ve tried transformers version 4.9.2 as well as the latest 4.10.0.dev0 from GitHub trunk. Apparently there is no model_type of type gptj. Do I need to add it somehow?
Hi, GPT-J-6B has not been added to the library yet, but it will be soon: GPT-J-6B by StellaAthena · Pull Request #13022 · huggingface/transformers · GitHub 141
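For reference, once the PR above is merged and shipped in an installed transformers release, loading should work out of the box. The sketch below is my own, based on the model card's recommendation as I recall it; the float16 revision roughly halves the download and memory footprint, and you still need enough RAM to hold the 6B-parameter model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",       # half-precision branch of the checkpoint
    torch_dtype=torch.float16,
)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))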
0
huggingface
Models
Extract visual and contextual features from images
https://discuss.huggingface.co/t/extract-visual-and-contextual-features-from-images/9355
@nielsr Hi, I’m currently trying to build an OCR (text-recognition) model that gets the visual and contextual features via a transformer architecture. Currently my architecture looks like this:
Visual features: DeiT feature extractor (could it be that this is just a preprocessing instance for the DeiT model and no forward pass / model is called there?)
Contextual features: BiLSTM / stacked BiLSTM
Prediction: attention
Is it possible to use the Transformers library to get the visual and contextual features via a pre-trained model? Or what would you recommend? The idea behind this is to get away from the original CRNN-LSTM architecture and get the features via a Transformer, to speed up the whole inference process and reach faster results than with Tesseract. (This model will later be used mainly for document images with a lot of text.) Or maybe you even have an example of how to achieve something like this? One example from the synthetic dataset: (image 0_1, 800×32), and the labels are the plain text. PS: I mainly use PyTorch; the text detection part is done, so I only need the recognition part. Many greetings, Felix. Edit: does Transformers have any ready-to-use implementation like the ViTSTR paper, or any recommendations on how I can rebuild it with pretrained models from the Transformers lib instead of timm? https://www.google.de/url?sa=t&source=web&rct=j&url=https://arxiv.org/pdf/2105.08582&ved=2ahUKEwiQhLyW873yAhUugP0HHW7mDysQFnoECBIQAQ&usg=AOvVaw1oKeq8pMRl9R8lpapFnx1l&cshid=1629404520093 9
Hi, the feature extractors (like ViTFeatureExtractor, DeiTFeatureExtractor) can be used to prepare images for Transformer-based models (ViT and DeiT respectively). They mainly do two things: resize images to a given size and normalize the channels. After using the feature extractor, an image is turned into a PyTorch tensor of shape (batch_size, num_channels, height, width), which might be (1, 3, 224, 224). Next, this tensor is provided to a Transformer that turns it into contextual features. For prediction, one typically simply places a linear classification head (nn.Linear) on top of the contextual features. You might be interested in this project: GitHub - him4318/Transformer-ocr: Handwritten text recognition using transformers. 15. It’s based on DETR, which is available in HuggingFace Transformers. Note that DETR itself consists of a convolutional backbone + encoder-decoder Transformer. Instead of using classification heads to predict class labels + bounding boxes (as was done in the original DETR, which was meant for object detection), he simply adds a linear layer on top of the Transformer outputs, which acts as a “language modeling decoder” (similar to what is done in models like BERT during pre-training). This language modeling decoder maps the contextual features of the Transformer to actual words. It is defined here. 3
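To make the first two steps concrete, here is a small sketch of mine (not from the original answer) that runs an image through ViTFeatureExtractor and ViTModel to obtain the contextual features; the checkpoint name and the example image URL are just illustrative choices.

import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

# preprocessing only: resize + normalize, no forward pass happens here
inputs = feature_extractor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])

# the Transformer produces the contextual features
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 197, 768])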
0
huggingface
Models
Clustering news articles with sentence bert
https://discuss.huggingface.co/t/clustering-news-articles-with-sentence-bert/3361
Hi! I would like to cluster articles about the same topic. I saw that Sentence-BERT might be a good place to start for embedding sentences and then checking similarity with something like cosine similarity. But since articles are built from many sentences, this method doesn’t work well. Is there some BERT embedding that embeds a whole text, or maybe some algorithm to use the sentence embeddings at the scale of a whole text? Thanks for any input!
Hi @cezary, since you want to cluster articles you could use any of the “encoder” Transformers (e.g. BERT-base) to extract the hidden states per article (see e.g. here 193 for an example with IMDB) and then apply clustering / dimensionality reduction on the hidden states to identify the clusters. If you’re dealing with an unsupervised task, I’ve found that UMAP 135 + HDBSCAN 74 works well for the second step. HTH!
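To make that pipeline concrete, here is a rough sketch of mine (not from the thread). It assumes articles is your list of article strings and that the umap-learn and hdbscan packages are installed; truncation to 512 tokens is a simplification for long articles.

import torch
from transformers import AutoTokenizer, AutoModel
import umap
import hdbscan

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

embeddings = []
with torch.no_grad():
    for text in articles:  # articles: list of strings (assumed to exist)
        inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
        outputs = model(**inputs)
        # mean-pool the last hidden state as a simple article representation
        embeddings.append(outputs.last_hidden_state.mean(dim=1).squeeze(0))
embeddings = torch.stack(embeddings).numpy()

# reduce dimensionality, then cluster; -1 labels are noise points
reduced = umap.UMAP(n_components=5, metric="cosine").fit_transform(embeddings)
labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(reduced)
print(labels)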
0
huggingface
Models
How to find Multilingual models
https://discuss.huggingface.co/t/how-to-find-multilingual-models/9302
Hi Everyone! Sorry if this is a silly question. I couldn’t figure out how to find all multi-lingual models on the model hub. Is it possible to filter those out? Thanks in Advance!
Hi there! You can find some multilingual models under Models - Hugging Face 1 (models that carry the multilingual language tag) and Models - Hugging Face (models with multilingual in the name).
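If you prefer to query the Hub programmatically, here is a rough sketch of mine using the huggingface_hub client. I am assuming the multilingual language tag is exposed as a plain "multilingual" tag and that ModelInfo objects expose modelId; both may vary slightly between library versions.

from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(filter="multilingual")  # models carrying the "multilingual" tag
for model in list(models)[:10]:
    print(model.modelId)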
1
huggingface
Models
Why is Wav2Vec pretraining loss not decreasing?
https://discuss.huggingface.co/t/why-is-wav2vec-pretraining-loss-not-decreasing/8112
Hi there everyone, I’m currently trying to train a Wav2Vec base model. During the pre-training phase, the loss starts off around 4, decreases and then shoots up to 6.658 and stays there. The accuracy is also low and does not increase. My learning rate is set at 0.005. I started off with a learning rate of 0.0001 and increased it gradually when I saw these results. I use the English Wav2Vec model for weight initialisation. I thought it would improve if I waited longer, but it stays the same even after 20 epochs. Can anyone please share some advice on what I could do to avoid this and improve the training? Your assistance will be much appreciated!
Hi ZaNi, have you found the reason for this behaviour? I’m facing the same problem pretraining my model from the English base model.
0