docs | category | thread | href | question | context | marked |
---|---|---|---|---|---|---|
huggingface | Beginners | Application of a transformer model without fine tuning for NER task | https://discuss.huggingface.co/t/application-of-a-transformer-model-without-fine-tuning-for-ner-task/6470 | I am interested in using pre-trained models from Huggingface for named entity recognition (NER) tasks without any further training or testing of the model. The only information on Huggingface for the model are to use the following lines:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
I tried the following code, but I am getting a tensor output instead of class labels for each named entity.
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
text = "my text for named entity recognition here."
input_ids = torch.tensor(tokenizer.encode(text, padding=True, truncation=True,max_length=50, add_special_tokens = True)).unsqueeze(0)
with torch.no_grad():
    output = model(input_ids, output_attentions=True)
Can anyone suggest what am I doing wrong here? It will be good to have a short tutorial on how to use a pre-trained model for NER (without any fine tuning). | hey @shoaibb, my suggestion would be to use the ner pipeline (see docs 6) to extract the named entities because this is quite tricky to get right.
also, emilyalsentzer/Bio_ClinicalBERT appears to be a pretrained language model so you’ll need to either fine-tune it on a labelled NER dataset or see if you can find an existing model on the Hub that suits your needs. you can find an example on fine-tuning NER models here: Google Colaboratory 5 | 0 |
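For reference, a minimal sketch of the suggested ner pipeline approach (the checkpoint name below is just an example of an NER-fine-tuned model from the Hub, not part of the original question):
from transformers import pipeline
ner = pipeline("ner", model="dslim/bert-base-NER", grouped_entities=True)
print(ner("Angela Merkel visited the Charité hospital in Berlin."))
# returns a list of dicts with the entity group, score, word and character offsets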
huggingface | Beginners | BioBERT NER issue | https://discuss.huggingface.co/t/biobert-ner-issue/6411 | Hello,
I’m trying to implement NER with BioBERT.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModelForTokenClassification.from_pretrained("dmis-lab/biobert-v1.1")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
sentence = "This expression of NT-3 in supporting cells in embryos and neonates may even preserve in Brn3c null mutants the numerous spiral sensory neurons in the apex of 8-day old animals."
result = nlp(sentence)
print(result)
But the result isn’t what I’m expecting.
Some weights of BertForTokenClassification were not initialized from the model checkpoint at dmis-lab/biobert-v1.1 and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[{'word': 'This', 'score': 0.5616263747215271, 'entity': 'LABEL_1', 'index': 1, 'start': 0, 'end': 4}, {'word': 'expression', 'score': 0.6285454630851746, 'entity': 'LABEL_1', 'index': 2,
The output is pretty clear : I need to train the model.
But, I’m not sure if, with a trained model, I will manage to get rid of the ‘entity’: ‘LABEL_1’ issue.
My desired output would be something like:
https://bern.korea.ac.kr/ 14
With a complete response such as:
{
"project": "BERN",
"sourcedb": "",
"sourceid": "43c1bfdebd3ccb8c9a42d10a22a3be3e8b2fe9ae7601b244b6318d71-Thread-18603546",
"text": "This expression of NT-3 in supporting cells in embryos and neonates may even preserve in Brn3c null mutants the numerous spiral sensory neurons in the apex of 8-day old animals.",
"denotations": [
{
"id": [
"HGNC:8020",
"BERN:324182202"
],
"span": {
"begin": 19,
"end": 23
},
"obj": "gene"
},
{
"id": [
"MIM:602460",
"HGNC:9220",
"Ensembl:ENSG00000091010",
"BERN:324351702"
],
"span": {
"begin": 89,
"end": 94
},
"obj": "gene"
}
],
"timestamp": "Thu May 27 08:22:14 +0000 2021",
"logits": {
"disease": [],
"gene": [
[
{
"start": 19,
"end": 23,
"id": "HGNC:8020\tBERN:324182202"
},
0.9999972581863403
],
[
{
"start": 89,
"end": 94,
"id": "MIM:602460\tHGNC:9220\tEnsembl:ENSG00000091010\tBERN:324351702"
},
0.9999972581863403
]
],
"drug": [],
"species": []
}
}
Am I on the right path to achieve that?
Any help/suggestion is more than welcome!
Cheers,
Vivian | @Vivian Did you get any success with this issue? I am also in a similar situation. | 0 |
huggingface | Beginners | Bert NextSentence memory leak | https://discuss.huggingface.co/t/bert-nextsentence-memory-leak/6450 | I am using bert for next-sentence prediction on a cpu. I want to call the model twice in a row to select a sentence from a list of sentence pairs. I am using a batch size of 128. My code looks something like this:
def bert_batch_compare(self, prompt1, prompt2):
    encoding = self.tokenizer(prompt1, prompt2, return_tensors='pt', padding=True, truncation=True, add_special_tokens=True)
    target = torch.ones((1,len(prompt1)), dtype=torch.long)
    outputs = self.model(**encoding, next_sentence_label=target)
    logits = outputs.logits.detach()
    return logits
def call_bert(self):
    batch_pattern = []
    batch_template = []
    batch_input = []
    ## make batch_pattern, batch_input, batch_template here!
    si = self.bert_batch_compare(batch_pattern, batch_input)
    ## second call to this fn is killed because of memory limitations
sj = self.bert_batch_compare(batch_input, batch_template)
If I lower the batch size to something like 24 it runs, but I’d like to use a larger batch size. I am not doing any training right now. I’m using ‘bert-base-uncased’. During the second call to ‘bert_batch_compare()’ the memory usage increases to 100% and the program crashes. I have 16G to work with. Until that time the code only uses 1.8Gig. I am using linux and python 3.6, along with pytorch 1.8. | Might not be a memory leak but a case of larger batch padding.
If the longest sequence in the first batch is 80 tokens, then that batch will (likely) be padded to 80 items and that may fit into memory. But if the longest sequence in the next batch then contains 250 tokens, then the whole batch is padded to 250 and that might not fit into memory. So verify the length of each individual sample to be sure. | 0 |
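A possible way to bound the memory, sketched here under the assumption that truncating long inputs is acceptable, is to pad every batch to a fixed cap so no batch can grow larger than the others (max_length=128 is an arbitrary choice for illustration):
def bert_batch_compare(self, prompt1, prompt2):
    encoding = self.tokenizer(prompt1, prompt2, return_tensors='pt', padding='max_length', truncation=True, max_length=128, add_special_tokens=True)
    target = torch.ones((1, len(prompt1)), dtype=torch.long)
    with torch.no_grad():  # also avoids keeping the autograd graph around between calls
        outputs = self.model(**encoding, next_sentence_label=target)
    return outputs.logits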
huggingface | Beginners | Enhance a MarianMT pretrained model from HuggingFace with more training data | https://discuss.huggingface.co/t/enhance-a-marianmt-pretrained-model-from-huggingface-with-more-training-data/1017 | I am using a pretrained MarianMT machine translation model from English to German 5. I also have a large set of high quality English-to-German sentence pairs that I would like to use to enhance the performance of the model, which is trained on the OPUS corpus, but without making the model forget the OPUS training data. Is there a way to do that? Thanks.
Also on StackOverflow 17 | You could further fine-tune it on your own corpus, and I think if you have a high quality dataset then it should improve the results after fine-tuning.
You can use the finetune.py script from here 129 for fine-tuning marian | 0 |
huggingface | Beginners | Should I normalize text or not | https://discuss.huggingface.co/t/should-i-normalize-text-or-not/6449 | Hello. I have a question about a general understanding of how transformers work. The tokenizer feeds the model text, which is usually normalized, cleared of punctuation, and so on. But as I assume the transformer is trained on a raw corpus - I made this conclusion after seeing the single characters in the vocabulary. Hence the question - should I normalize and do other preprocessing of the sentences for which I want to get embbedings? If the model was trained on the raw corpus, how correct will the preprocessing described above be for the text under study? | No, you should not preprocess the dataset. Depending on the tokenizer scheme, you may want to tokenize words beforehand (which will then still be tokenised into subword units). For something like sentencepiece that is not needed (in fact it is recommended to not pretokenise in sentencepiece because a space is a regular character there).
It is best if the data is “clean” though, in the sense that it should not contain HTML/XML tags or other longer sequences of strange characters. But that should then also be true of inference data. | 0 |
huggingface | Beginners | Use wav2vec2 models with a microphone easily | https://discuss.huggingface.co/t/use-wav2vec2-models-with-a-microphone-easily/5522 | Hello folks,
I wrote a little lib to be able to use any wav2vec2 model from the model hub with a microphone. Since wav2vec2 does not support streaming mode, I used voice activity detection to create audio chunks that I can feed into the model.
Here is a little example, you can find the code on github 109.
from live_asr import LiveWav2Vec2
german_model = "maxidl/wav2vec2-large-xlsr-german"
asr = LiveWav2Vec2(german_model,device_name="default")
asr.start()
try:
    while True:
        text,sample_length,inference_time = asr.get_last_text()
        print(f"{sample_length:.3f}s"
              +f"\t{inference_time:.3f}s"
              +f"\t{text}")
except KeyboardInterrupt:
    asr.stop() | If you have any questions or feedback feel free to write me. | Pretty cool! would you consider making an implementation for Google Colab / notebooks? Similar to this:
If you have any questions or feedback feel free to write me. | Pretty cool! would you consider making an implementation for Google Colab / notebooks? Similar to this:
github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb
But with the VAD to get a near real-time transcription. | 0 |
huggingface | Beginners | Pegasus Summarization API_Inference | https://discuss.huggingface.co/t/pegasus-summarization-api-inference/5974 | Loving this model, and if we can use the API we’re happy to do so. Using Pegasus summarization for some testing.
human-centered-summarization/financial-summarization-pegasus · Hugging Face 3
However, there seem to be no options to pass via POST to determine a minimum and maximum length, which renders this endpoint useless.
Am I missing something here? Thanks, happy to be a new subscriber. | Hi @induveca ,
Why would you say the endpoint is useless without being able to specify min_length and max_length ? Models are conditioned to summarize, so the output should work.
Anyway, you can still specify min_length and max_length even if these arguments are not documented (they are not documented which means behavior is subject to change in the future). They are specified in tokens length, which is really hard to gauge if you are only handling text.
If you are using those, expect that the output might be cut mid sentence as you are basically forcing the model to stop outputting tokens even if it had still tokens to say.
Cheers,
Nicolas | 0 |
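For illustration, a request with those (undocumented, subject-to-change) parameters might look like this sketch, using the requests library and a placeholder token:
import requests
API_URL = "https://api-inference.huggingface.co/models/human-centered-summarization/financial-summarization-pegasus"
headers = {"Authorization": "Bearer api_XXXX"}  # placeholder API token
payload = {
    "inputs": "Long financial news text to summarize ...",
    "parameters": {"min_length": 30, "max_length": 80},  # measured in tokens, not characters
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())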
huggingface | Beginners | Which model to use for suggesting article to the user based on details provided? | https://discuss.huggingface.co/t/which-model-to-use-for-suggesting-article-to-the-user-based-on-details-provided/6362 | Hello Members,
I have a use case in which the user enters the details of the problem his computer is facing and the system will return the most relevant article from the list of articles that potentially might solve the issue.
I tried cosine similarity and BM25 but not getting decent results. Can someone please suggest which pre-trained model i can use for this kind of data and use-case.
Sorry I am new to transformers. | hey @gladmortal your use case sounds like a good match for dense retrieval, where you use a transformer like BERT to embed your documents as dense vectors and then measure their similarity to a query vector.
you can find an example on how to do this with FAISS and the datasets library here: Adding a FAISS or Elastic Search index to a Dataset — datasets 1.6.2 documentation 4
there is also a nice example from sentence-transformers here if your articles are not too long: Semantic Search — Sentence-Transformers documentation 3
alternatively, if you have a corpus of (question, article) tuples, you could try doing a similarity comparison of new questions against the existing ones and using the matches to return the most relevant article. there’s a tutorial of this from a nice library called haystack which is built on transformers here: https://haystack.deepset.ai/docs/latest/tutorial4md 2 | 0 |
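As a concrete starting point, a minimal sketch of the sentence-transformers route (the model name and toy articles are assumptions for illustration only):
from sentence_transformers import SentenceTransformer, util
model = SentenceTransformer("paraphrase-MiniLM-L6-v2")
articles = ["How to fix a blue screen error", "Speeding up a slow laptop", "Resetting a forgotten password"]
article_embeddings = model.encode(articles, convert_to_tensor=True)
query = "my computer keeps crashing with a blue screen"
query_embedding = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, article_embeddings, top_k=2)[0]
for hit in hits:
    print(articles[hit["corpus_id"]], hit["score"])  # most relevant articles first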
huggingface | Beginners | Extra Dimension with DataCollatorFor LanguageModeling into BertForMaskedLM? | https://discuss.huggingface.co/t/extra-dimension-with-datacollatorfor-languagemodeling-into-bertformaskedlm/6400 | Hi all,
EDIT: I forgot to state that I am on transformers 4.6.1 and python 3.7
On Colab, I am trying to pre-train a BertforMaskedLM using a random subset of half of Wikitext-103. I am using a simple custom dataset class and the DataCollatorForLanguageModeling as follows.
import torch
import torchtext
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
import torch.optim as optim
import torch.nn as nn
import torch.optim as optim
import re
import random
from transformers import BertForMaskedLM, BertModel, BertConfig, BertTokenizer
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from transformers import PreTrainedTokenizer
wiki_train, wiki_valid, wiki_test = torchtext.datasets.WikiText103(root='data',
split=('train','valid','test'))
def scrub_titles_get_lines(dataset):
pattern = " =+.+ =+"
pattern = re.compile(pattern)
title_scrubbed = []
for example in dataset:
if not example.isspace() and not bool(pattern.match(example)):
title_scrubbed.append(example)
return title_scrubbed
class LineByLineBertDataset(torch.utils.data.Dataset):
def __init__(self, data, tokenizer: PreTrainedTokenizer, max_len=512):
self.examples = data
self.tokenizer = tokenizer
self.max_length = max_len
def __len__(self):
return len(self.examples)
def __getitem__(self, i):
result = self.tokenizer(self.examples[i],
add_special_tokens=True,
truncation=True,
return_special_tokens_mask=True,
padding='max_length',
max_length=self.max_length,
return_tensors='pt')
return result
configuration = BertConfig()
model = BertForMaskedLM(configuration)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
wiki_train = random.sample(wiki_train, len(wiki_train)//2) # list of strings
train_set = LineByLineBertDataset(wiki_train, tokenizer)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
training_args = TrainingArguments(
output_dir="/content/drive/MyDrive/BERT_TEST",
overwrite_output_dir=True,
num_train_epochs=1,
save_steps=10_000,
save_total_limit=2,
prediction_loss_only=True,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_set,
)
trainer.train()
However, I get an error in the forward() method of the model:
/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
923 elif input_ids is not None:
924 input_shape = input_ids.size()
--> 925 batch_size, seq_length = input_shape
926 elif inputs_embeds is not None:
927 input_shape = inputs_embeds.size()[:-1]
ValueError: too many values to unpack (expected 2)
Each of the tensors in the batch encoding are of shape (8,512)
In the DataCollatorForMaskedLM I know that at some point another dimension gets added. If I do:
res = tokenizer(wiki_train[:8],
add_special_tokens=True,
return_special_tokens_mask=True,
truncation=True,
padding='max_length',
max_length=512,
return_tensors='pt')
collated = data_collator([res])
collated['input_ids'].size()
Output: torch.Size([1, 8, 512])
So it seems that maybe this first dimension needs to be squeezed out. However, I am not sure what parameter I can tweak to ensure that the correct tensor is being seen by the model after collation.
Any thoughts? | No the dimension was added by you when you passed
collated = data_collator([res])
res was already a list with 8 elements here, by putting in a new list you add the 1. | 0 |
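One way to avoid that extra dimension in the dataset above (sketched as an assumption about the intended fix) is to let __getitem__ return plain lists and leave padding and tensor conversion to the collator, which pads each batch and stacks it into (batch_size, seq_len):
def __getitem__(self, i):
    # no return_tensors='pt' and no padding here; DataCollatorForLanguageModeling
    # will pad the batch and convert it to tensors itself
    return self.tokenizer(self.examples[i],
                          add_special_tokens=True,
                          truncation=True,
                          return_special_tokens_mask=True,
                          max_length=self.max_length)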
huggingface | Beginners | Zero-Shot Classification Pipeline - Truncating | https://discuss.huggingface.co/t/zero-shot-classification-pipeline-truncating/6269 | Hi,
Is it possible to specify arguments for truncating and padding the text input to a certain length when using the transformers pipeline for zero-shot classification?
For instance, if I am using the following:
classifier = pipeline(“zero-shot-classification”, device=0)
Do I need to first specify those arguments such as truncation=True, padding=‘max_length’, max_length=256, etc in the tokenizer / config, and then pass it to the pipeline?
Thank you in advance | hey @valkyrie the pipelines in transformers call a _parse_and_tokenize function that automatically takes care of padding and truncation - see here 10 for the zero-shot example.
so the short answer is that you shouldn’t need to provide these arguments when using the pipeline. do you have a special reason to want to do so? | 0 |
huggingface | Beginners | Correct interpretation of the model embbedings output | https://discuss.huggingface.co/t/correct-interpretation-of-the-model-embbedings-output/6386 | Hello. Using DeepPavlov transformer I was surprised to get different embbedings for the same word ‘шагать’. This is a fictional example showing the essence of the question.
MODEL_NAME = 'DeepPavlov/rubert-base-cased-sentence'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model(**tokenizer('шагать шагать', return_tensors='pt', truncation=True, max_length=512)).last_hidden_state.detach().squeeze()
As I can see the tokenizer splits word ‘шагать’ on two tokens: ‘шага’ and ‘##ть’
Output for embeddings is:
tensor([[-0.5780, 0.0937, -0.3210, ..., -0.3401, 0.0203, 0.4830],
        [-0.6516, 0.0278, -0.3610, ..., -0.4095, 0.0527, 0.5094],
        [-0.6018, 0.1147, -0.2739, ..., -0.4194, 0.0580, 0.4853],
        [-0.6632, 0.0110, -0.3995, ..., -0.3953, 0.0823, 0.4497],
        [-0.6711, 0.1017, -0.2829, ..., -0.3797, 0.0994, 0.4285],
        [-0.6337, 0.0572, -0.3519, ..., -0.3553, 0.0126, 0.4479]])
I expected that vector 1 ([-0.6516, 0.0278, -0.3610, ..., -0.4095, 0.0527, 0.5094] - I guess it corresponds to ‘шага’) would be equal to vector 3 - but I see other values. The same is true for the pair of vectors 2 and 4 (’##ть’).
I guess it is a result of my misunderstanding of how the model works. Please explain to me what is wrong in my understanding… | hey @Roman, the reason why you don’t get the same embedding for the same word in a sequence is that transformers like BERT produce context-sensitive representations, i.e. they depend on the context in which they appear in a sequence.
the advantage of such representations is that you can deal with tricky examples like “Time flies like an arrow; fruit flies like a banana”, where the word “flies” has two very different meanings
you can find a nice description of contextual embeddings here: The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning) – Jay Alammar – Visualizing machine learning one concept at a time. 1 | 0 |
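A small sketch of that idea (the sentences and the bert-base-uncased checkpoint are chosen only for illustration): the same surface word gets a different vector depending on the sentence it appears in.
from transformers import AutoTokenizer, AutoModel
import torch
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
def word_vector(sentence, word):
    enc = tok(sentence, return_tensors="pt")
    # position of the word's token in the encoded sequence (works here because
    # "bank" is a single token in this vocabulary)
    idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state
    return hidden[0, idx]
v1 = word_vector("i sat on the river bank.", "bank")
v2 = word_vector("i deposited cash at the bank.", "bank")
print(torch.cosine_similarity(v1, v2, dim=0))  # below 1.0: context changes the embedding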
huggingface | Beginners | Inconsistent Bleu score between test_metrics[‘test_bleu’] and written-to-file test_metric.predictions | https://discuss.huggingface.co/t/inconsistent-bleu-score-between-test-metrics-test-bleu-and-written-to-file-test-metric-predictions/6352 | I got a bleu score at about 11 and would like to do some error analysis, so I saved the predictions to file. When I read the predictions, I felt that the bleu score should be much lower than 11 because most tokens in the references are missing in the predictions. Therefore, I directly calculated the bleu score by giving the predictions file and references file to sacrebleu (which is the package used as metric in the training program) and the bleu score is about 2. The predictions and references files are both formatted one sentence a line. Each predicted sentence has only one reference.
Relevant code snippets are attached below:
import sacrebleu
metric = load_metric("sacrebleu")
#----------------------------------------------------------#
# Define compute_metrics for trainer
#----------------------------------------------------------#
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
result = {"bleu": result["score"]}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
result = {k: round(v, 4) for k, v in result.items()}
return result
#----------------------------------------------------------#
# Calculate metric for test dataset, get bleu score, and save predictions to file
#----------------------------------------------------------#
test_metric = trainer.predict(test_dataset = tokenized_datasets['test'], metric_key_prefix = 'test', num_beams=6)
print(test_metric.metrics['test_bleu']) # get about 11
detokenized_predictions = tokenizer.batch_decode(test_metric.predictions, skip_special_tokens=True)
with open(path_predictions_file, 'w') as outfile:
s = '\n'.join(detokenized_predictions)
outfile.write(s)
#----------------------------------------------------------#
# load previously-saved predictions files and references file to calculate bleu score
#----------------------------------------------------------#
predictions = []
with open(path_predictions_file) as prediction_infile:
for sentence in prediction_infile:
predictions.append(sentence.strip())
references = []
with open(path_references_file) as reference_infile:
for sentence in reference_infile:
references.append(sentence.strip())
bleu = sacrebleu.corpus_bleu(predictions, [references])
print('{}'.format(bleu.format(score_only=True))) #get about 2
Thank you very much for the reading! Really appreciate any suggestions | The post-processing in the first example is not applied to the predictions written to disk. | 0 |
huggingface | Beginners | Allenai/reviews_roberta_base trained on? | https://discuss.huggingface.co/t/allenai-reviews-roberta-base-trained-on/6342 | hey.
do you know what this model is trained on and what task is it fine tuned for? (presume aspect sentiment analysis)
maybe @patrickvonplaten will know the answer…?
thanks!!
huggingface.co
allenai/reviews_roberta_base · Hugging Face | ok, from this related paper
arxiv.org/abs/2004.10964
it seems that it’s pretrained on amazon reviews dataset | 0 |
huggingface | Beginners | Text Generation, adding random words, weird linebreaks & symbols at random | https://discuss.huggingface.co/t/text-generation-adding-random-words-weird-linebreaks-symbols-at-random/6330 | Here’s the code I’m using to generate text.
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium" , pad_token_id = tokenizer.eos_token_id)
sentence = tokenizer.encode(kw, return_tensors="pt")
output = model.generate(sentence, max_length = 500, no_repeat_ngram_size = 2, do_sample=False)
text.append(tokenizer.decode(output[0], skip_special_tokens = True))
The issue is that the output often comes like this:
"What are the benefits of using collagen?
,
,
,
,
, __________________, __________
The skin that has collagen has a higher level of hydrophilic (water-loving) proteins. `
or like this:
Yes, collagen is a natural skin-repairing substance. It is also a powerful anti-inflammatory and antiaging agent. , and, are the most common types of collagen found in skin.
As you can see, at the start it wrote “, and,” at random and it happens EXTREMELY often, nearly in every single text generation I did.
I don’t know if it’s related to my settings or not but I’d appreciate all the help you guys can give. I want to get my text to be as human-readable as possible & up to 100-500 words each input. | It might help if you give more information about the model and tokenizer that you use. | 0 |
huggingface | Beginners | Weights not downloading | https://discuss.huggingface.co/t/weights-not-downloading/6310 | Hi,
When first I did
from transformers import BertModel
model = BertModel.from_pretrained('bert-base-cased')
Then it’s fine.
But after doing the above, when I do:
from transformers import BertForSequenceClassification
m = BertForSequenceClassification.from_pretrained('bert-base-cased')
I get warning messages:
Some weights of the model checkpoint at bert-base-cased were not used when
initializing BertForSequenceClassification: ['cls.predictions.bias',
'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias',
'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias',
'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForSequenceClassification from the
checkpoint of a model trained on another task or with another architecture (e.g.
initializing a BertForSequenceClassification model from a BertForPreTraining model).
Some weights of BertForSequenceClassification were not initialized from the model
checkpoint at bert-base-cased and are newly initialized: ['classifier.weight',
'classifier.bias']
There is another topic regarding the same issue in the forum here 1.
What I have understood is that, due to the first code which I ran, the weights of the pre-trained bare bert-base-cased model got downloaded, and when I ran the second code for sequence classification, the weights regarding the sequence classification didn’t get downloaded because it is grabbing its checkpoint from the first code which I ran.
The same is also given in the last paragraph of the warning message.
So, what’s the solution to download the pre-trained weights for sequence classification tasks or in general other tasks?
Thanks. | Hi,
Can anyone help me regarding this?
Thanks. | 0 |
huggingface | Beginners | A universal granular method to breakdown text for modeling | https://discuss.huggingface.co/t/a-universal-granular-method-to-breakdown-text-for-modeling/6258 | My gut feeling is one thing but,
Is there a way to breakdown text into the most granular form that can be applied to any model? (For instance IOB, tagging, tokens etc?) | hey @Joanna if i understand correctly, your question is fundamentally about the manner in which we tokenize raw text into tokens right? for example, BERT uses a WordPiece tokenizer that can be used for all downstream tasks, provided some alignment between the tokens and labels is given for tasks like NER.
there’s a nice walkthrough of the various tokenization strategies that are employed in transformers here: Summary of the tokenizers — transformers 4.5.0.dev0 documentation 2 | 0 |
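A quick illustration with BERT's WordPiece tokenizer (the exact split depends on the vocabulary, so the output shown in the comment is only approximate):
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tok.tokenize("Tokenization splits rare words into subword pieces"))
# roughly: ['token', '##ization', 'splits', 'rare', 'words', 'into', 'sub', '##word', 'pieces']
print(tok.convert_tokens_to_ids(tok.tokenize("subword")))  # the ids the model actually sees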
huggingface | Beginners | Tutorials on transformers | https://discuss.huggingface.co/t/tutorials-on-transformers/6172 | hi everyone,
are there any paid/unpaid tutorials on usage of this awesome library? or any github pages with code walkthroughs?
any help is appreciated!!
thanks,
Kishor | huggingface.co
Transformers 31
State-of-the-art Natural Language Processing for Jax, Pytorch and TensorFlow 🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-be... | 0 |
huggingface | Beginners | Value Error when instantiating MultiClass Classifier | https://discuss.huggingface.co/t/value-error-when-instantiating-multiclass-classifier/6264 | Here is the code I am using:
model = ClassificationModel('roberta', 'roberta-base', num_labels=37, args={'learning_rate':1e-5, 'num_train_epochs':10, 'reprocess_input_data':True})
Here is the error it generates:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-16-affadfcfb25e> in <module>
1 #Need to figure out how many labels in os data
----> 2 model = ClassificationModel('roberta', 'roberta-base', num_labels=37, args={'learning_rate':1e-5, 'num_train_epochs':10, 'reprocess_input_data':True})
~\Anaconda3\lib\site-packages\simpletransformers\classification\classification_model.py in __init__(self, model_type, model_name, tokenizer_type, tokenizer_name, num_labels, weight, args, use_cuda, cuda_device, onnx_execution_provider, **kwargs)
334
335 if num_labels:
--> 336 self.config = config_class.from_pretrained(
337 model_name, num_labels=num_labels, **self.args.config
338 )
~\Anaconda3\lib\site-packages\transformers\configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
425
426 """
--> 427 config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
428 if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
429 logger.warn(
~\Anaconda3\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
482 try:
483 # Load from URL or cache if already cached
--> 484 resolved_config_file = cached_path(
485 config_file,
486 cache_dir=cache_dir,
~\Anaconda3\lib\site-packages\transformers\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1269 if is_remote_url(url_or_filename):
1270 # URL, so get it from the cache (downloading if necessary)
-> 1271 output_path = get_from_cache(
1272 url_or_filename,
1273 cache_dir=cache_dir,
~\Anaconda3\lib\site-packages\transformers\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1492 )
1493 else:
-> 1494 raise ValueError(
1495 "Connection error, and we cannot find the requested files in the cached path."
1496 " Please try again or make sure your Internet connection is on."
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
What can I do to resolve this issue? | hey @parkz your issue seems to be about the simpletransformers library (not transformers) so i suggest you post you question on their repo to see if someone can help: GitHub - ThilinaRajapakse/simpletransformers: Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI 1 | 0 |
huggingface | Beginners | Encountering issues using camemBERT with rasa | https://discuss.huggingface.co/t/encountering-issues-using-camembert-with-rasa/6257 | in the Rasa forum there is some developer who succeeded using CamemBERT
here is the example RASA and camemBERT - #3 by Zoukero - Rasa Open Source - Rasa Community Forum 6
unfortunately while trying their solution I get this error
ImportError: cannot import name ‘modeling_tf_camembert’ from ‘transformers’ (unknown location)
any help/advice will be much appreciated
Thanks | This must have been with a Transformers version < 4.0.0. You should just do from transformers.models.camembert import modeling_tf_camembert now. | 0 |
huggingface | Beginners | How to pad tokens to a fixed length on a single sentence? | https://discuss.huggingface.co/t/how-to-pad-tokens-to-a-fixed-length-on-a-single-sentence/6248 | >>> from transformers import BartTokenizerFast
>>> tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
>>> str = "How are you?"
>>> tokenizer(str, return_tensors="pt")
{'input_ids': tensor([[ 0, 6179, 32, 47, 116, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1]])}
>>> tokenizer(str, padding=True, max_length=10, return_tensors="pt")
{'input_ids': tensor([[ 0, 6179, 32, 47, 116, 2]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1]])}
Why didn’t it show something like
{'input_ids': tensor([[ 0, 6179, 32, 47, 116, 2, 1, 1, 1, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0]])}
and how can I do it? | hey @zuujhyt, you can activate the desired padding by specifying padding="max_length" in your tokenizer as follows:
tokenizer(str, return_tensors="pt", padding="max_length", max_length=10)
when padding=True the tokenizer will pad to the longest sequence in the batch (or no padding for the single sentence case) | 0 |
huggingface | Beginners | Tokenization of Sequencepair for Pipeline | https://discuss.huggingface.co/t/tokenization-of-sequencepair-for-pipeline/6115 | Hi,
for my model I tokenize my input like this:
encoded_dict = xlmr_tokenizer(term1, term2, max_length=max_len, padding='max_length',truncation=True, return_tensors='pt')
so that it gets 2 different strings as input (term1, term2), that it separates with the special separator token.
How can I use the huggingface pipeline for input like this?
If I load the pipeline like this:
tokenizer_xlmr = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")
my_pipeline = pipeline("sentiment-analysis", model=my_model, tokenizer=tokenizer_xlmr)
my_pipeline([term1,term2])
I get two separate predictions instead of one. Thanks in advance for any type of help! | Further information:
calling
my_pipeline(term1+"</s></s>"+term2)
gives nearly the same output as using the trained model directly on the data from the encoded_dict as defined above. The labels are the same, but the probabilities differ from the 5th/6th decimal place onwards. Why is that? | 0 |
huggingface | Beginners | BertForNextSentencePrediction with larger batch size | https://discuss.huggingface.co/t/bertfornextsentenceprediction-with-larger-batch-size/6219 | Is there a way to use BertForNextSentencePrediction in inference mode with a batch size larger than 1? I have some code.
encoding = self.tokenizer(prompt1, prompt2, return_tensors='pt', padding=True, truncation=True, add_special_tokens=True)
print(encoding)
outputs = self.model(**encoding, next_sentence_label=torch.LongTensor([1]), target_batch_size=10)
logits = outputs.logits
#print(logits)
Here prompt1 and prompt2 are lists of sentences. The list is 10 sentences long. I get an error like this:
ValueError: Expected input batch_size (10) to match target batch_size (1). | hey @DLiebman i think the problem is that you’re passing a batch of 10 examples, but only a single label in the next_sentence_label argument. changing your code to the following works for me:
outputs = model(**encoding, next_sentence_label=torch.ones((10,1), dtype=torch.long), target_batch_size=10) | 0 |
huggingface | Beginners | Difference between tokenizer and tokenizerfast | https://discuss.huggingface.co/t/difference-between-tokenizer-and-tokenizerfast/6226 | Hi,
I have searched for the answer to my question, but still can’t get a clear answer.
Some issues on GitHub and the forum also report that the results of the tokenizer and the fast tokenizer are a little bit different.
I want to know what the difference between them is (in terms of mechanism).
If they should output the same result, then why do we need both of them?
cc @anthony who is the tokenizer expert | 0 |
huggingface | Beginners | GPT2 working perfectly in local system, but doesn’t generate text (stuck) when deployed in server | https://discuss.huggingface.co/t/gpt2-working-perfectly-in-local-system-but-doesnt-generate-text-stuck-when-deployed-in-server/6179 | I have built a simple web app 8 where the user inputs a few words and the length of the text to generate, and the model takes a minute to produce the results locally. But when I deploy it in Heroku, its taking forever (not displaying any results even after a couple of hours). You can check it out here 4.
Is it because the server’s CPU is too slow/weak? If yes, how do I use a faster CPU in Heroku, or can you suggest some other service instead of Heroku that would be better to deploy GPT2 based web apps? If not, what’s the issue and how do I fix it? | hey @kristada673, looking at your code it seems that you load the model every time a user provides a prompt. a better approach would be to load the model once when the server spins up and then call it in a dedicated endpoint for prompting.
depending on your use case, a simpler alternative to heroku would be streamlit - you can find many examples online using GPT-2 with it (e.g. here 6) | 0 |
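A minimal sketch of that pattern with Flask (the framework choice, route name and generation settings are assumptions, not taken from the original app):
from flask import Flask, request, jsonify
from transformers import GPT2LMHeadModel, GPT2Tokenizer
app = Flask(__name__)
# loaded once at startup, not on every request
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.get_json()["prompt"]
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(input_ids, max_length=100, do_sample=True, top_p=0.95)
    return jsonify({"text": tokenizer.decode(output[0], skip_special_tokens=True)})
if __name__ == "__main__":
    app.run()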
huggingface | Beginners | Help resolving error (“TextInputSequence must be str”) | https://discuss.huggingface.co/t/help-resolving-error-textinputsequence-must-be-str/5510 | Hi,
I’m very new to HuggingFace, I’ve come around this error “TextInputSequence must be str” on a notebook which is helping me a lot to do some practice on various hugging face models. The boilerplate code on the notebook is throwing this error (I guess) due to some changes in huggingface’s API or something. So I was wondering if someone could help me and suggest some changes that I can make to the code to resolve the error.
The error can easily be reproduced by just running all the cells of the notebook.
Link: Colab Notebook 9
[screenshot] | [screenshot] | 0 |
huggingface | Beginners | Scores in generate() | https://discuss.huggingface.co/t/scores-in-generate/3450 | Hi,
I was wondering why the length of the output_scores is always +1 longer than the max_length in the output of generate()? This seems to be not consistent with the documentation https://huggingface.co/transformers/internal/generation_utils.html#transformers.generation_utils.BeamSampleEncoderDecoderOutput 28
I found the scores from the output of the generate() function when setting output_scores to be True is (max_length+1,)-shaped tensors or shorter due to the early eos_token_id with each element of shape (batch_size*num_beams, config.vocab_size). The shape of the output.sequence is (batch_size, max_length). | Hey @Kylie,
good observation! So output_scores should have length max_length - 1. The reason is that the first token, the decoder_start_token_id, is not generated, meaning that no scores can be calculated for it.
Here an example:
#!/usr/bin/env python3
from transformers import AutoModelForSeq2SeqLM
import torch
model = AutoModelForSeq2SeqLM.from_pretrained('facebook/bart-large')
out = model.generate(torch.tensor([10 * [1]]), return_dict_in_generate=True, output_scores=True, max_length = 10)
print("len scores:", len(out.scores)) # should give 9
Would you be interested in correcting the documentation in a PR for Transformers? | 0 |
huggingface | Beginners | Wav2Vec2ForCTC.from_pretrained for already trained Models? | https://discuss.huggingface.co/t/wav2vec2forctc-from-pretrained-for-already-trained-models/5716 | Hi Guys,
how do I further train a model that someone else already fine-tuned? I wanted to load an already fine-tuned model like this:
model = Wav2Vec2ForCTC.from_pretrained(
model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
activation_dropout=model_args.activation_dropout,
attention_dropout=model_args.attention_dropout,
hidden_dropout=model_args.hidden_dropout,
feat_proj_dropout=model_args.feat_proj_dropout,
mask_time_prob=model_args.mask_time_prob,
gradient_checkpointing=model_args.gradient_checkpointing,
layerdrop=model_args.layerdrop,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer),
)
But im getting:
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC:
size mismatch for lm_head.weight: copying a param with shape torch.Size([174, 1024]) from checkpoint, the shape in current model is torch.Size([35, 1024]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([174]) from checkpoint, the shape in current model is torch.Size([35]).
How would i do this right?
Something like
model = Wav2Vec2ForCTC.from_pretrained(model_args.model_name_or_path)
model.to("cuda")
model.freeze_feature_extractor()
processor = Wav2Vec2Processor.from_pretrained(model_args.model_name_or_path)
But im not sure if this is the way to go.
Ty in advanced | It seems like your vocabulary size does not match the vocab size from the original model. You can remove the old LM head manually and initialise a new one:
state_dict = torch.load(f"{model_args.model_name_or_path}/pytorch_model.bin", map_location='cpu')
state_dict.pop('lm_head.weight')
state_dict.pop('lm_head.bias')
model = Wav2Vec2ForCTC.from_pretrained(
model_args.model_name_or_path,
state_dict=state_dict,
...
) | 0 |
huggingface | Beginners | Models take all ubuntu free space | https://discuss.huggingface.co/t/models-take-all-ubuntu-free-space/6134 | I use ubuntu 20.04, and it seems that after I’m using some models, the free space is just disappear. (250 GB)
where is the model cache saved?
is there some command to clean it?
thanks! | Models are cached in the ~/.cache/huggingface/transformers directory (from your home directory) | 0 |
huggingface | Beginners | ELMO Character encoder layer | https://discuss.huggingface.co/t/elmo-character-encoder-layer/6196 | I think my model would benefit from the inclusion of character encoding similar to ELMO, i.e. as discussed in CharacterBERT (https://www.aclweb.org/anthology/2020.coling-main.609.pdf ) since I have loads of acronyms so I think added characters will help.
Is there a layer in the huggingface library I could use to replicate this configuration? I don’t see one, nor a hugginface ELMO but I could well have missed it. Otherwise I’ll try and use AllenNLP. | hey @david-waterworth, looking at the paper’s github repo (link), it seems that they just changed the embedding layer of the BERT implementation in transformers to accept character embeddings: character-bert/character_bert.py at 0519a6e22a2912bf0fdbbd49bd562cc2e5410bc7 · helboukkouri/character-bert · GitHub 1
you could try the same approach and see if that works! | 0 |
huggingface | Beginners | Missing content in task specific pipeline docs | https://discuss.huggingface.co/t/missing-content-in-task-specific-pipeline-docs/6153 | Looks like missing doc content in the pipeline page here Pipelines — transformers 4.5.0.dev0 documentation | Thanks for flagging! Investigating why this is the case. | 0 |
huggingface | Beginners | Using wav2vec2 for own usecase | https://discuss.huggingface.co/t/using-wav2vec2-for-own-usecase/6147 | I’m looking to use wav2vec2 english for own project use-case for the purpose of transcription, it is possible that I can feed large audio files (upto 1hr of duration) for the task?
Thanks in Advance! | Yes, you should be able to. Here 5 are some references on fine-tuning wav2vec on common voice dataset. With a csv file having one column for location of audio clips and the other column the label, you can fine-tune it for your own audio files. | 0 |
huggingface | Beginners | How to load weights which was trained older version of PyTorch? | https://discuss.huggingface.co/t/how-to-load-weights-which-was-trained-older-version-of-pytorch/1267 | Hi,
I trained a RoBERTa model on PyTorch 1.6 and tried to load the weights on different server which had PyTorch 1.4.
But I could not load the weights and got an error which said,
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
I can upgrade PyTorch which is not a problem but I’m curious how can I load weights trained on older version of PyTorch?
Environments of both servers:
ubuntu: 18.04 ( both servers )
transformers: 3.1.0 ( both servers )
Python 3.7.9 and Python 3.7.3
Thanks in advance. | AFAIK PyTorch versions shouldn’t make a difference. Check your path, check that the file is not empty. Also post the code that you use to save and load the model. | 0 |
huggingface | Beginners | In the “Write With Transformer” page, how do different suggestions are selected? | https://discuss.huggingface.co/t/in-the-write-with-transformer-page-how-do-different-suggestions-are-selected/5994 | When using Write With Transformer, after hitting tab, there are multiple suggestions. Sometimes one word, sometimes a partial sentence. How are these selected? In particular, for one word, does it take the top k probabilities from the output? When does it decide to continue to a sentence? | No answer from anyone?
Maybe link to the source code of the suggestion generation? | 0 |
huggingface | Beginners | Ideas to correct Wav2Vec2 transcription results | https://discuss.huggingface.co/t/ideas-to-correct-wav2vec2-transcription-results/3767 | I’m tinkering with the preprocessing (segmentation) to achieve the best Wav2Vec2 transcriptions I can, and am fairly impressed with the results (compared to others like Silero, and previous experience with Sphinx, or alternatives like ELAN).
(I’m finding I probably need to reduce the maximum segment times below 60s and haven’t quite got that down perfectly yet, but that’s beside the point)
However, there are some pretty glaring phonetic transcription mistakes and I’m wondering if there are standard approaches to adjust these (in an automated way, before resorting to manual adjustment).
For instance, I’m seeing “Boric Johnson” (for Boris Johnson, the UK Prime Minister), “ennay chess” (for NHS, the UK’s health service), and “social medeor” (social media).
These are all clearly phonetic ‘guesses’ and would be identifiable as outside the vocabulary of a standard text model: I’m curious if there’s a standard post-processing step that attempts to ‘realign’ such outputs with possible alternatives (which I could perhaps explore in an interface to resolve ambiguous parts).
I’m wondering if anyone knows of the usual approach (or if this is not a standard step, could suggest an innovative approach) using language models — even just the proper terms for what I’m trying to do here would help me research my next steps.
I’d think it was a similar-ish problem to spelling mistakes (for which there’s ‘T5 Sentence Doctor 6’ for example, but tests aren’t too encouraging). I may be missing a more appropriate alternative I don’t know about so I thought I’d ask the community here.
Thanks, first question on the forum so please let me know if this is off topic, I’ve used the new Wav2Vec2 960h model and may try the other versions next but expect this will apply to all of them. | Hi,
I think you need a language model trained on a large corpora. You can have a look on this issue 45.
Best,
Omar | 0 |
huggingface | Beginners | KeyError: ‘loss’ even after appending labels while Fine Tuning Transformer XL | https://discuss.huggingface.co/t/keyerror-loss-even-after-appending-labels-while-fine-tuning-transformer-xl/6097 | I am trying to do a Causal Language Modeling task by fine tuning the Transformer XL (transfo-xl-wt103) model on my custom data. I have data in which the max number of words in a line is approx 50,000 and the average number of words is 1000. I have read that Transformer XL can take unlimited input length, so I think it’s the best option for my task. Moreover, I don’t want to concatenate and make blocks of length 128 as it would defeat the actual purpose. I can add padding/truncation to make all sentences equal length. I want to know whether my task is achievable. Also, what memory & GPU would I require for this task.
Currently I am doing the same task with dummy data but getting an error. Please check the code and help me with this error.
Here is my complete code:
from datasets import load_dataset
import pandas as pd
from datasets import Dataset
from transformers import AutoTokenizer
from transformers import AutoModelForCausalLM
from transformers import Trainer, TrainingArguments
data = """
A laptop, laptop computer, or notebook computer is a small, portable personal computer (PC).
Laptops are folded shut for transportation, and thus are suitable for mobile use.
Its name comes from lap, as it was deemed practical to be placed on a person's lap when being used.
Today, laptops are the used in a variety of settings, such as at work, in education, web browsing, and general home computer use.
Design elements, form factor and construction can also vary significantly between models depending on intended use.
Examples of specialized models of laptops include rugged notebooks for use in construction or military applications.
"""
dataList = data.strip().split('.')
dataset = []
for line in dataList[0: -2]:
    dataset.append([line])
dataFrame = pd.DataFrame(dataset, columns = ['data'])
valDataFrame = pd.DataFrame(dataset[0:2], columns = ['data'])
print(dataFrame.shape)
print(valDataFrame.shape)
dataset = Dataset.from_pandas(dataFrame)
valDataset = Dataset.from_pandas(valDataFrame)
print(dataset)
modelCheckpoint = 'transfo-xl-wt103'
tokenizer = AutoTokenizer.from_pretrained(modelCheckpoint)
tokenizer.pad_token = tokenizer.eos_token
def tokenizeFunction(examples):
    return tokenizer(examples["data"], add_special_tokens = True, padding = True, pad_to_max_length = True, max_length = 10, truncation = True)
dataset = dataset.map(tokenizeFunction, remove_columns=["data"])
valDataset = valDataset.map(tokenizeFunction, remove_columns=["data"])
print(dataset[0], end = " ")
def appendLabels(examples):
    examples["labels"] = examples["input_ids"].copy()
    return examples
dataset = dataset.map(appendLabels)
valDataset = valDataset.map(appendLabels)
print(dataset[0], end = " ")
model = AutoModelForCausalLM.from_pretrained(modelCheckpoint)
trainingArgs = TrainingArguments(
"test-clm",
evaluation_strategy = "epoch",
learning_rate = 2e-5,
weight_decay = 0.01,
label_names = ["labels"]
)
trainer = Trainer(
model = model,
args = trainingArgs,
train_dataset = dataset,
eval_dataset = valDataset
)
trainer.train()
Error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-37-0d4f1e0acd39> in <module>()
15 eval_dataset = valDataset
16 )
---> 17 trainer.train()
3 frames
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k)
1614 if isinstance(k, str):
1615 inner_dict = {k: v for (k, v) in self.items()}
-> 1616 return inner_dict[k]
1617 else:
1618 return self.to_tuple()[k]
KeyError: 'loss'
Version & Details:
- `transformers` version: 4.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | Yes, transformer XL is the only model of the library incompatible with Trainer because it returns losses (not averaged) instead of loss. | 0 |
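A possible workaround, sketched here as an assumption rather than a tested recipe, is to subclass Trainer and average the per-token losses that Transformer XL returns:
class TransfoXLTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        outputs = model(**inputs)
        # transfo-xl returns `losses` (one value per token) instead of a scalar `loss`
        loss = outputs.losses.mean()
        return (loss, outputs) if return_outputs else loss
trainer = TransfoXLTrainer(
    model = model,
    args = trainingArgs,
    train_dataset = dataset,
    eval_dataset = valDataset
)
trainer.train()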
huggingface | Beginners | Unlimited API usage for models | https://discuss.huggingface.co/t/unlimited-api-usage-for-models/5823 | can I have an unlimited usage of the models for 9$ or is too low?? | Hi @micole66 ,
No the 9$ does not provide unlimited usage of the models.
You are enabled to use 300k input characters with your subscription currently. We want to make the API usage something pay-as-you go so we can make it affordable for everyone.
Cheers,
Nicolas | 0 |
huggingface | Beginners | JavaScript Example for inference API | https://discuss.huggingface.co/t/javascript-example-for-inference-api/5975 | Hi,
Is there a JavaScript example for using inference API - 🤗 Accelerated Inference API — Api inference documentation 25 | Hi @hgarg ,
Currently we don’t provide documentation for JS (or TS).
You could probably use simple fetch:
const HF_API_TOKEN = "api_xxxx";
const model = "XXX"
const payload = {inputs: "Something here"};
const response = await fetch(`https://api-inference.huggingface.co/models/${model}`, {headers: {"Authorization": `Bearer ${HF_API_TOKEN}`}, method: "POST", body: JSON.stringify(payload)});
const data = await response.json();
should work.
Cheers,
Nicolas | 0 |
huggingface | Beginners | Training using multiple GPUs | https://discuss.huggingface.co/t/training-using-multiple-gpus/1279 | I would like to train some models to multiple GPUs.
Let’s suppose that I use a model from the HF library, but I am using my own trainers, dataloaders, collators, etc.
Where should I focus to implement multi-GPU training? Do I need to make changes only in the Trainer class? If yes, can you give me a brief description?
Thank you in avance. | The Trainer class automatically handles multi-GPU training, you don’t have to do anything special. | 0 |
huggingface | Beginners | Setting specific device for Trainer | https://discuss.huggingface.co/t/setting-specific-device-for-trainer/784 | Hi I’m trying to fine-tune model with Trainer in transformers,
Well, I want to use a specific number of GPU in my server.
My server has two GPUs,(index 0, index 1) and I want to train my model with GPU index 1.
I’ve read the Trainer and TrainingArguments documents, and I’ve tried the CUDA_VISIBLE_DEVICES thing already, but it didn’t work for me.
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
(I did it within Jupyter, before I import all libraries)
It gave me a runtime error when the trainer tries to initiate self.model = model.to(args.device) line.
and the error says like RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable.
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset)
[screenshot of the error]
I’ve also tried torch.cuda.set_device(1), it also didn’t work.
I don’t know how to set it up. It seems like I don’t have any options in argument of class
Please help me to handle this problem.
Thank you. | If torch.cuda_set_device(1) doesn’t work, the problem is in your install. Does the command nvidia-smi show up two GPUs? | 0 |
huggingface | Beginners | TFBertForTokenClassification scoring only O labels on a NER task | https://discuss.huggingface.co/t/tfbertfortokenclassification-scoring-only-o-labels-on-a-ner-task/2025 | I’m using TFBertForTokenClassification to perform a NER task on the annotated corpus fo NER:
https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus 9.
The problem is that the O-Labels are the majority of all labels, then the accuracy is quite high as the model correctly predicts most of them.
So, when I try to predict the labels of a simple sentence, the network predicts only the O label for each token of the sentence; however, in several tutorials that use PyTorch (I am using TensorFlow), the predictions are good.
Probably there is a problem in my code, but I cannot figure out where it is.
The code is the following:
# Import libraries
import tensorflow as tf
import pandas as pd
from sklearn.model_selection import train_test_split
import math
import numpy as np
from transformers import (
TF2_WEIGHTS_NAME,
BertConfig,
BertTokenizer,
TFBertForTokenClassification,
create_optimizer)
# Config
MAX_LEN= 128
TRAIN_BATCH_SIZE = 32
VALID_BTCH_SIZE = 8
EPOCHS = 10
BERT_MODEL = 'bert-base-uncased'
MODEL_PATH = "model.bin"
TRAINING_FILE = "../input/entity-annotated-corpus/ner_dataset.csv"
TOKENIZER = BertTokenizer.from_pretrained(BERT_MODEL, do_lower_case=True)
# Create the padded input, attention masks, token type and labels
def get_train_data(text, tags):
tokenized_text = []
target_tags = []
for index, token in enumerate(text):
encoded_token = TOKENIZER.encode(
token,
add_special_tokens = False
)
encoded_token_len = len(encoded_token)
tokenized_text.extend(encoded_token)
target_tags.extend([tags[index]] * encoded_token_len)
#truncation
tokenized_text = tokenized_text[: MAX_LEN - 2]
target_tags = target_tags[: MAX_LEN - 2]
#[101] = [CLS] , [102] = [SEP]
tokenized_text = [101] + tokenized_text + [102]
target_tags = [0] + target_tags + [0]
attention_mask = [1] * len(tokenized_text)
token_type_ids = [0] * len(tokenized_text)
#padding
padding_len = int(MAX_LEN - len(tokenized_text))
tokenized_text = tokenized_text + ([0] * padding_len)
target_tags = target_tags + ([0] * padding_len)
attention_mask = attention_mask + ([0] * padding_len)
token_type_ids = token_type_ids + ([0] * padding_len)
return (tokenized_text, target_tags, attention_mask, token_type_ids)
# Extract sentences from dataset
class RetrieveSentence(object):
def __init__(self, data):
self.n_sent = 1
self.data = data
self.empty = False
function = lambda s: [(w, p, t) for w, p, t in zip(s["Word"].values.tolist(),
s["POS"].values.tolist(),
s["Tag"].values.tolist())]
self.grouped = self.data.groupby("Sentence #").apply(function)
self.sentences = [s for s in self.grouped]
def retrieve(self):
try:
s = self.grouped["Sentence: {}".format(self.n_sent)]
self.n_sent += 1
return s
except:
return None
# Load dataset and create one hot encoding for labels
df_data = pd.read_csv(TRAINING_FILE,sep=",",encoding="latin1").fillna(method='ffill')
Sentences = RetrieveSentence(df_data)
sentences_list = [" ".join([s[0] for s in sent]) for sent in Sentences.sentences]
labels = [ [s[2] for s in sent] for sent in Sentences.sentences]
tags_2_val = list(set(df_data["Tag"]))
tag_2_idx = {t: i for i, t in enumerate(tags_2_val)}
id_labels = [[tag_2_idx.get(l) for l in lab] for lab in labels]
sentences_list = [sent.split() for sent in sentences_list]
# I removed the sentence n 41770 because it gave index problems
del labels[41770]
del sentences_list[41770]
del id_labels[41770]
encoded_text = []
encoded_labels = []
attention_masks = []
token_type_ids = []
for i in range(len(sentences_list)):
text, labels, att_mask, tok_type = get_train_data(text = sentences_list[i], tags = id_labels[i])
encoded_text.append(text)
encoded_labels.append(labels)
attention_masks.append(att_mask)
token_type_ids.append(tok_type)
# Convert from list to np array
encoded_text = np.array(encoded_text)
encoded_labels = np.array(encoded_labels)
attention_masks = np.array(attention_masks)
token_type_ids = np.array(token_type_ids)
# Train Test split
X_train, X_valid, Y_train, Y_valid = train_test_split(encoded_text, encoded_labels, random_state=20, test_size=0.1)
Mask_train, Mask_valid, Token_ids_train, Token_ids_valid = train_test_split(attention_masks,token_type_ids ,random_state=20, test_size=0.1)
# Aggregate the train and test set, then shuffle and batch the train set
def example_to_features(input_ids,attention_masks,token_type_ids,y):
return {"input_ids": input_ids,
"attention_mask": attention_masks,
"token_type_ids": token_type_ids},y
train_ds = tf.data.Dataset.from_tensor_slices((X_train,Mask_train,Token_ids_train,Y_train)).map(example_to_features).shuffle(1000).batch(32)
test_ds=tf.data.Dataset.from_tensor_slices((X_valid,Mask_valid,Token_ids_valid,Y_valid)).map(example_to_features).batch(1)
# Load TFBertForTokenClassification with default config
config = BertConfig.from_pretrained(BERT_MODEL,num_labels=len(tags_2_val))
model = TFBertForTokenClassification.from_pretrained(BERT_MODEL, from_pt=bool(".bin" in BERT_MODEL), config=config)
# Add softmax layer, compute loss, optimizer and fit
model.layers[-1].activation = tf.keras.activations.softmax
model.summary()
optimizer = tf.keras.optimizers.Adam()
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
history = model.fit(train_ds, epochs=3, validation_data=test_ds)
# Prediction. Spoiler: the label predicted are O-Label
sentence = "Hi , my name is Bob and I live in England"
inputs = TOKENIZER(sentence, return_tensors="tf")
input_ids = inputs["input_ids"]
inputs["labels"] = tf.reshape(tf.constant([1] * tf.size(input_ids).numpy()), (-1, tf.size(input_ids))) # Batch size 1
output = model(inputs)
The code is executed on a Kaggle notebook.
The transformer library version is 3.4.0
Many thanks in advance. | I’m trying to train a similar model and I am getting the same problem. It does work for me however with a relu activation on the last classification layer instead of softmax and a smaller learning rate optimizer = keras.optimizers.Adam(learning_rate=3e-5).
I’m not sure why it isn’t working with softmax. FYI here’s the model I’m using
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from transformers import TFDistilBertModel

ids_input = keras.Input(shape=(max_tokens,), dtype=np.int32)
attention_mask_input = keras.Input(shape=(max_tokens,), dtype=np.int32)
bert_model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
dropout = layers.Dropout(0.1)
token_classifier_layer = layers.Dense(num_labels, activation="relu")
bert_output = bert_model({'input_ids': ids_input, 'attention_mask': attention_mask_input}, return_dict=True)
x = dropout(bert_output['last_hidden_state'])
x = token_classifier_layer(x)
word_classifier_model = keras.Model(inputs=[ids_input, attention_mask_input], outputs=x)
optimizer = keras.optimizers.Adam(learning_rate=3e-5)
word_classifier_model.compile(optimizer=optimizer, loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy'])
word_classifier_model.fit(x=[input_ids, attention_mask], y=model_labels, epochs=config.epochs, batch_size=config.batch_size, validation_split=0.2)
word_classifier_model.save_weights(config.model_file_path) | 0 |
huggingface | Beginners | How does transformers.pipeline works for NLI? | https://discuss.huggingface.co/t/how-does-transformers-pipeline-works-for-nli/5866 | I am applying pretrained NLI models such as roberta-large-mnli to my own sentence pairs. However, I am slightly confused about how to separate the premise and hypothesis sentences. Checking through the models available on Hugging Face and the examples they show on the hosted inference API, some use </s></s> between sentences, some use [CLS] ... [SEP] ... [SEP], and some, such as your own model, do not add any placeholders.
I just want to know more about how pipeline(task="sentiment-analysis", model="xxx-nli") works under the hood. I assume it feeds each sentence pair separately into tokenizer.encode_plus, like what is done here. But what max_length does the model use, though?
Any information would be really appreciated! Thanks! | Hi @bwang482 ,
You are perfectly correct; the responsibility of choosing the proper layout for this task falls to the tokenizer (encode_plus or just encode).
What do you mean by max_length? I am not sure what you are referring to.
Cheers,
Nicolas | 0 |
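As a rough sketch of what the pipeline does under the hood — passing the two sentences as a pair lets the tokenizer insert the checkpoint's own separators, and the label names are read from the model config rather than hard-coded:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer adds the model-specific layout (<s> ... </s></s> ... </s> for RoBERTa,
# [CLS] ... [SEP] ... [SEP] for BERT) when given a sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]
for idx, label in model.config.id2label.items():
    print(label, round(probs[idx].item(), 3))
Unless you pass max_length explicitly, truncation falls back to the tokenizer's model_max_length (512 for RoBERTa-based checkpoints).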
huggingface | Beginners | How can I evaluate my fine-tuned model on Squad? | https://discuss.huggingface.co/t/how-can-i-evaluate-my-fine-tuned-model-on-squad/5969 | Hello,
I have loaded the already fine-tuned model for SQuAD, 'twmkn9/bert-base-uncased-squad2'.
I would like to now evaluate it on the SQuAD2 dataset; how would I do that?
This is my code currently:
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, AutoConfig
model_name = 'twmkn9/bert-base-uncased-squad2'
config = AutoConfig.from_pretrained(model_name, num_hidden_layers=10)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_config(config)
Now I am just unsure what to do next.
UPDATE
I am trying to follow the instructions from here, yet I am unsure how to use my own model. This is what I have:
# Grab the run_qa.py script and its helpers
!curl -L -O https://raw.githubusercontent.com/huggingface/transformers/master/examples/pytorch/question-answering/run_qa.py
!curl -L -O https://raw.githubusercontent.com/huggingface/transformers/master/examples/pytorch/question-answering/trainer_qa.py
!curl -L -O https://raw.githubusercontent.com/huggingface/transformers/master/examples/pytorch/question-answering/utils_qa.py
!python run_qa.py \
--model_type bert \
--model_name_or_path model \
--output_dir models/distilbert/twmkn9_distilbert-base-uncased-squad2 \
--data_dir data/squad \
--predict_file dev-v2.0.json \
--do_eval \
--version_2_with_negative \
--do_lower_case \
--per_gpu_eval_batch_size 12 \
--max_seq_length 384 \
--doc_stride 128
But I am getting an error
2021-05-04 11:14:04.086537: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
File "run_qa.py", line 613, in <module>
main()
File "run_qa.py", line 208, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/usr/local/lib/python3.7/dist-packages/transformers/hf_argparser.py", line 187, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 20, in __init__
File "run_qa.py", line 184, in __post_init__
raise ValueError("Need either a dataset name or a training/validation file/test_file.")
ValueError: Need either a dataset name or a training/validation file/test_file.
I imagine it's because --model_name_or_path model \ is no good, but then how do I call my own configured model?
Thank you | The error comes before your model (you will get another one for the model afterwards). First, you have to replace --predict_file with --test_file in your command. Then, if your model is in models/distilbert/twmkn9_distilbert-base-uncased-squad2, that is the path you should pass (the model_type argument is useless; it will be inferred from the model files). | 0
huggingface | Beginners | What detailed parameters can be used while calling EleutherAI/gpt-neo-x models from Inference API? | https://discuss.huggingface.co/t/what-detailed-parameters-can-be-used-while-calling-eleutherai-gpt-neo-x-models-from-inference-api/5921 | Hi,
Where can I find more details on what detailed parameters can be used while calling EleutherAI/gpt-neo-x models from Inference API?
I haven’t found any details on these links beyond a basic example.
huggingface.co
EleutherAI/gpt-neo-2.7B · Hugging Face 11
This page from Inference Docs also doesn’t cover these models.
api-inference.huggingface.co
Detailed parameters 11
Which task is used by this model ?: In general the 🤗 Hosted API Inference accepts a simple string as an input. However, more advanced usage depends on the “t...
thanks, | You won't find it in the documentation because they didn't write it…
the only additional parameter that I was able to use was max_length | 0 |
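For what it's worth, here is a minimal sketch of passing generation parameters to the Inference API. The keys shown under "parameters" (max_length, temperature, top_p) and under "options" are assumptions based on the text-generation task, so treat them as a starting point rather than an exhaustive list:
import requests

API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
headers = {"Authorization": f"Bearer {API_TOKEN}"}   # API_TOKEN from your account settings

payload = {
    "inputs": "In a shocking finding, scientists discovered",
    "parameters": {"max_length": 100, "temperature": 0.8, "top_p": 0.9},
    "options": {"wait_for_model": True},   # wait instead of erroring while the model loads
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())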
huggingface | Beginners | Seeking detailed parameter docs on Wav2Vec via API | https://discuss.huggingface.co/t/seeking-detailed-parameter-docs-on-wav2vec-via-api/4408 | Hi y’all I’m trying to find docs on how to call the Wav2Vec model for TTS via the API.
The detailed API docs on parameters don't yet seem to have information on the format of the API request for that model.
Trying to infer it from the demo page fails because of a CORS error in Chrome:
Access to fetch at 'https://api-audio-frontend.huggingface.co/models/facebook/wav2vec2-large-960h-lv60-self' from origin 'https://huggingface.co' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
bundle.7995df3.js:1 POST https://api-audio-frontend.huggingface.co/models/facebook/wav2vec2-large-960h-lv60-self net::ERR_FAILED
run_api @ bundle.7995df3.js:1
handleClick @ bundle.7995df3.js:1
async function (async)
handleClick @ bundle.7995df3.js:1
(anonymous) @ bundle.7995df3.js:1 | cc @patrickvonplaten | 0 |
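In case it helps, below is a minimal sketch of calling the model from Python instead of the demo widget. The assumption (worth verifying against the API docs) is that for speech models the Inference API accepts the raw audio file bytes as the request body:
import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/wav2vec2-large-960h-lv60-self"
headers = {"Authorization": f"Bearer {API_TOKEN}"}   # your API token

with open("sample.flac", "rb") as f:
    audio_bytes = f.read()

response = requests.post(API_URL, headers=headers, data=audio_bytes)
print(response.json())   # expected shape: {"text": "..."} with the transcription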
huggingface | Beginners | Difference between AutoModel and AutoModelForLM | https://discuss.huggingface.co/t/difference-between-automodel-and-automodelforlm/5967 | What is the difference between downloading a model using AutoModel.from_pretrained(model_name) and AutoModelForLM.from_pretrained(model_name) ? | The first one will give you the bare pretrained model, while the second one will have a head attached to do language modeling. Note that AutoModelForLM is deprecated, you should use AutoModelForCausalLM, AutoModelForMaskedLM or AutoModelForSeq2SeqLM depending on the task at hand. | 0 |
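A quick illustration of the difference, using gpt2 as an arbitrary example checkpoint:
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Hello world", return_tensors="pt")

base_model = AutoModel.from_pretrained("gpt2")           # bare transformer, no head
lm_model = AutoModelForCausalLM.from_pretrained("gpt2")  # same body + language-modeling head

print(base_model(**inputs).last_hidden_state.shape)  # (1, seq_len, hidden_size)
print(lm_model(**inputs).logits.shape)               # (1, seq_len, vocab_size)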
huggingface | Beginners | Bert ner classifier | https://discuss.huggingface.co/t/bert-ner-classifier/5847 | hi,
I fine-tuned BERT on an NER task, and Hugging Face adds a linear classifier on top of the model. I want to know more details about the classifier architecture, e.g. fully connected + softmax…
thank you for your help | Hi! Can you be a little bit more specific about your query?
Just to give you a head start,
In general, NER is a sequence labeling (a.k.a token classification) problem.
The additional thing you may have to consider for NER is that, for a word that is divided into multiple tokens by a BPE- or SentencePiece-like model, you use the first token as the reference token you want to predict. Since all the tokens are connected via self-attention, you won't have a problem not predicting the rest of a word's BPE tokens. In PyTorch, you can skip computing the loss (see the ignore_index argument) for those tokens by giving them the label -100 (life is so easy with PyTorch).
Apart from that, I didn't find any additional complexity in training the NER model.
Some other implementation details you need to check:
One important note: as far as I remember (please verify), in the CoNLL German or Dutch data there are 2-3 very long sentences in the test set. Sequence labeling doesn't work like sentiment analysis: you need to make sure your sentence is not cut down by the max_sequence_len argument of the language model's tokenizer. Otherwise, you will see a little bit of discrepancy in your test F1 score. An easier hack for this problem is to divide the sentence into smaller parts, predict them one by one, and finally merge them.
IMO self-attention and a CRF layer are theoretically different, but in practice some of the problems that a CRF solved in earlier models can also be solved by self-attention (because it creates a fully connected graph). So using softmax is preferable to a CRF layer.
The scores that the original BERT paper reported are not reproducible or comparable with most papers, since they used document-level NER fine-tuning.
If you still have queries about the architecture, you can follow this:
Guillaume Genthial blog – 5 Apr 17
Sequence Tagging with Tensorflow 67
GloVe + character embeddings + bi-LSTM + CRF for Sequence Tagging (Named Entity Recognition, NER, POS) - NLP example of bidirectionnal RNN and CRF in Tensorflow
you only have to replace the hierarchical RNN with a transformer as the encoder.
You can check the following papers for more info:
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT - ACL Anthology 2
https://arxiv.org/pdf/2007.07683.pdf 3
https://arxiv.org/pdf/2004.12440.pdf 3
https://arxiv.org/pdf/2004.13240.pdf 9 (beware, this is my publication. I may be biased, still not accepted)
Please let me know if you have more queries. | 0 |
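For a concrete picture, here is a rough sketch of what that token-classification head amounts to — dropout plus a single linear layer applied to every token's hidden state, with the softmax folded into the cross-entropy loss. Treat it as an illustration of the idea rather than the exact Hugging Face implementation:
import torch.nn as nn
from transformers import BertModel

class BertTokenClassifierSketch(nn.Module):
    def __init__(self, num_labels, model_name="bert-base-cased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        logits = self.classifier(self.dropout(hidden))  # one score per label per token
        loss = None
        if labels is not None:
            # tokens labelled -100 (padding, non-first sub-tokens) are ignored in the loss
            loss = nn.CrossEntropyLoss(ignore_index=-100)(
                logits.view(-1, logits.size(-1)), labels.view(-1)
            )
        return loss, logits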
huggingface | Beginners | What headers are accepted by Inference API? | https://discuss.huggingface.co/t/what-headers-are-accepted-by-inference-api/5920 | Hi.
This question is related to Hugging Face inference API - 🤗 Accelerated Inference API — Api inference documentation 3
Are there any other headers you can send to the API apart from the API key?
headers = {"Authorization": f"Bearer {API_TOKEN}"}
thanks, | Look further into the documentation; it talks about the parameters… but there aren't too many: top_p, repetition and max_length, and a few others if you have access to a GPU with the hundreds-of-dollars plan | 0
huggingface | Beginners | Question on Next Sentence Prediction | https://discuss.huggingface.co/t/question-on-next-sentence-prediction/4243 | Hi !
I’m trying to fine-tune a transformer model on a simultaneous MLM + NSP task. I have some older code examples that show that there used to be a DataCollatorForNSP class that could be relied on in this configuration. However, this class no longer exists and has been replaced by DataCollatorForLanguageModeling, as stated in this issue: Why was DataCollatorForNextSentencePrediction removed ? · Issue #9416 · huggingface/transformers · GitHub
I’m nevertheless a bit confused, because the source code for the DataCollatorForLanguageModeling class shows no parameter for controlling the amount of NSP, while there is a float value for how many words should be masked during training.
I was wondering whether someone could give me a clearer picture of this class and how to involve Next Sentence Prediction as an auxiliary task during MLM training.
Thanks a lot ! | Hi,
If you use TextDatasetForNextSentencePrediction for the dataset, there is a parameter called nsp_probability whose default value is 0.5, just like in BERT.
So I believe it fits your needs | 0
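A rough, unverified sketch of wiring this together for joint MLM + NSP. The assumptions here are that TextDatasetForNextSentencePrediction reads a text file with one sentence per line and blank lines between documents, and that DataCollatorForLanguageModeling carries the next_sentence_label through while masking tokens — adapt as needed:
from transformers import (BertTokenizer, BertForPreTraining, Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling, TextDatasetForNextSentencePrediction)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")  # has both MLM and NSP heads

dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer,
    file_path="corpus.txt",   # one sentence per line, blank line between documents (assumed format)
    block_size=128,
    nsp_probability=0.5,      # chance of sampling a random "next" sentence
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-nsp", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()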
huggingface | Beginners | Short text clustering | https://discuss.huggingface.co/t/short-text-clustering/5829 | Hey folks, I’ve been using the sentence-transformers library for trying to group together short texts.
I’ve had reasonable success using the AgglomerativeClustering class from sklearn (using either euclidean distance + ward linkage or precomputed cosine + average linkage), as its ability to set distance thresholds and automatically find the right number of clusters (as opposed to KMeans) is really nice.
But while it seemingly provides great results during the first wave of clustering, it tends to struggle when finding decent groupings on outliers that slip through the net the first time (where there’s only 1-2 observations per group).
I’ve tried some other clustering methods such as:
KMedoids
HDBscan (not great)
Kmeans / Agglomerative on predefined K
But none have been as effective as hierarchical on the initial embeddings. I’ve experimented with looping through and re-clustering just with slightly tighter distance thresholds on the outliers each time, but not really sure of a way to automatically set the distances without a large amount of trial and error.
So I was wondering if anyone knew of any methods for:
a) Grouping these more effectively during the first wave
b) Better clustering together any of the remaining outliers after the first pass
Any help would be hugely appreciated - cheers! | hey @scroobiustrip, have you tried first passing the embeddings through UMAP before applying a density based clustering algorithm? there’s a nice discussion on this approach in the UMAP docs 15 which comes with the following warning:
This is somewhat controversial, and should be attempted with care. For a good discussion of some of the issues involved in this, please see the various answers in this stackoverflow thread 9 on clustering the results of t-SNE. Many of the points of concern raised there are salient for clustering the results of UMAP. The most notable is that UMAP, like t-SNE, does not completely preserve density. UMAP, like t-SNE, can also create false tears in clusters, resulting in a finer clustering than is necessarily present in the data.
the example in the docs actually seems to exhibit some of the problems you found with e.g. HDBSCAN in your application | 0 |
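In case a concrete starting point helps, here is a small sketch of that two-step recipe with the umap-learn and hdbscan packages (the hyperparameters below are guesses to tune, not recommendations):
import umap
import hdbscan

# embeddings: numpy array of shape (n_texts, dim) produced by sentence-transformers
reduced = umap.UMAP(
    n_neighbors=15, n_components=5, metric="cosine", random_state=42
).fit_transform(embeddings)

labels = hdbscan.HDBSCAN(
    min_cluster_size=5, min_samples=1, metric="euclidean"
).fit_predict(reduced)

# -1 means HDBSCAN considered the point noise / an outlier
print(f"{(labels == -1).sum()} outliers out of {len(labels)} texts")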
huggingface | Beginners | Keyword argument `table` should be either of type `dict` or `list`, but is <class ‘str’>) | https://discuss.huggingface.co/t/keyword-argument-table-should-be-either-of-type-dict-or-list-but-is-class-str/5827 | I am trying to run TAPAS for question answering using the Inference API and can’t figure out the parsing problem: I followed the inference example and built the JSON in the required format, yet I am still getting this error.
Here is the code
dataToPass["query"] = "What is Akshat ID Number"
dataToPass["table"] = ast.literal_eval(str(dict))
jsonFinal = {}
jsonFinal["inputs"] = dataToPass
test_string = str(jsonFinal)
answer = getAnswer(test_string)
print(answer)
I am getting the output for jsonFinal as the following
{"inputs": {"query": "What is Akshat ID Number", "table": {"id": [1, 2, 3, 4, 5, 6, 7], "gender": ["", "MALE", "MALE", "MALE", "MALE", "MALE", "MALE"], "name": ["", "Akshat", "Rishabh Sahu", "Ashu", "Ashu", "Ashu", "Ashu"], "dob": ["", "1999", "1999", "1999", "1999", "1999", "1999"], "id_number": ["", "808", "1011", "100", "100", "100", "102"]}}}
I have tried passing in jsonFinal and test_string and a couple of other combination but I am not sure why this error persists
Just FYI I am using a custom dictionary class hence converting the above way to a dictionary and
getAnswer is where I call the inference API.
And I am building the dict by iterating over a django model queryset -
here is the code to that
def dataToJSON(coloumName, jsonData):
jsonBar = []
for i in jsonData:
jsonBar.append(i)
dict.add(coloumName, jsonBar)
return "" | cc @nielsr | 0 |
huggingface | Beginners | Loading pytorch_pretrained_bert models with transformers | https://discuss.huggingface.co/t/loading-pytorch-pretrained-bert-models-with-transformers/5845 | Hi, I have a model (checkpoint) based pytorch_pretrained_bert. I am converting it to use the latest transformer code. I have defined the model:
class BertForTokenAndSequenceJointClassification(BertPreTrainedModel)
and I am loading it with:
model = BertForTokenAndSequenceJointClassification.from_pretrained(
pretrained_model_name_or_path="bert-base-cased",
state_dict=torch.load("./20190509-115940.pt", map_location=torch.device('cpu')),
)
I am getting warnings:
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForTokenAndSequenceJointClassification: [‘module.bert.embeddings.word_embeddings.weight’, ‘module.bert.embeddings.position_embeddings.weight’, ‘module.bert.embeddings.token_type_embeddings.weight’, ‘module.bert.embeddings.LayerNorm.weight’, ‘module.bert.embeddings.LayerNorm.bias’, ‘module.bert.encoder.layer.0.attention.self.query.weight’, ‘module.bert.encoder.layer.0.attention.self.query.bias’, ‘module.bert.encoder.layer.0.attention.self.key.weight’, ‘module.bert.encoder.layer.0.attention.self.key.bias’, ‘module.bert.encoder.layer.0.attention.self.value.weight’, ‘module.bert.encoder.layer.0.attention.self.value.bias’, ‘module.bert.encoder.layer.0.attention.output.dense.weight’, ‘module.bert.encoder.layer.0.attention.output.dense.bias’, ‘module.bert.encoder.layer.0.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.0.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.0.intermediate.dense.weight’, ‘module.bert.encoder.layer.0.intermediate.dense.bias’, ‘module.bert.encoder.layer.0.output.dense.weight’, ‘module.bert.encoder.layer.0.output.dense.bias’, ‘module.bert.encoder.layer.0.output.LayerNorm.weight’, ‘module.bert.encoder.layer.0.output.LayerNorm.bias’, ‘module.bert.encoder.layer.1.attention.self.query.weight’, ‘module.bert.encoder.layer.1.attention.self.query.bias’, ‘module.bert.encoder.layer.1.attention.self.key.weight’, ‘module.bert.encoder.layer.1.attention.self.key.bias’, ‘module.bert.encoder.layer.1.attention.self.value.weight’, ‘module.bert.encoder.layer.1.attention.self.value.bias’, ‘module.bert.encoder.layer.1.attention.output.dense.weight’, ‘module.bert.encoder.layer.1.attention.output.dense.bias’, ‘module.bert.encoder.layer.1.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.1.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.1.intermediate.dense.weight’, ‘module.bert.encoder.layer.1.intermediate.dense.bias’, ‘module.bert.encoder.layer.1.output.dense.weight’, ‘module.bert.encoder.layer.1.output.dense.bias’, ‘module.bert.encoder.layer.1.output.LayerNorm.weight’, ‘module.bert.encoder.layer.1.output.LayerNorm.bias’, ‘module.bert.encoder.layer.2.attention.self.query.weight’, ‘module.bert.encoder.layer.2.attention.self.query.bias’, ‘module.bert.encoder.layer.2.attention.self.key.weight’, ‘module.bert.encoder.layer.2.attention.self.key.bias’, ‘module.bert.encoder.layer.2.attention.self.value.weight’, ‘module.bert.encoder.layer.2.attention.self.value.bias’, ‘module.bert.encoder.layer.2.attention.output.dense.weight’, ‘module.bert.encoder.layer.2.attention.output.dense.bias’, ‘module.bert.encoder.layer.2.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.2.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.2.intermediate.dense.weight’, ‘module.bert.encoder.layer.2.intermediate.dense.bias’, ‘module.bert.encoder.layer.2.output.dense.weight’, ‘module.bert.encoder.layer.2.output.dense.bias’, ‘module.bert.encoder.layer.2.output.LayerNorm.weight’, ‘module.bert.encoder.layer.2.output.LayerNorm.bias’, ‘module.bert.encoder.layer.3.attention.self.query.weight’, ‘module.bert.encoder.layer.3.attention.self.query.bias’, ‘module.bert.encoder.layer.3.attention.self.key.weight’, ‘module.bert.encoder.layer.3.attention.self.key.bias’, ‘module.bert.encoder.layer.3.attention.self.value.weight’, ‘module.bert.encoder.layer.3.attention.self.value.bias’, ‘module.bert.encoder.layer.3.attention.output.dense.weight’, 
‘module.bert.encoder.layer.3.attention.output.dense.bias’, ‘module.bert.encoder.layer.3.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.3.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.3.intermediate.dense.weight’, ‘module.bert.encoder.layer.3.intermediate.dense.bias’, ‘module.bert.encoder.layer.3.output.dense.weight’, ‘module.bert.encoder.layer.3.output.dense.bias’, ‘module.bert.encoder.layer.3.output.LayerNorm.weight’, ‘module.bert.encoder.layer.3.output.LayerNorm.bias’, ‘module.bert.encoder.layer.4.attention.self.query.weight’, ‘module.bert.encoder.layer.4.attention.self.query.bias’, ‘module.bert.encoder.layer.4.attention.self.key.weight’, ‘module.bert.encoder.layer.4.attention.self.key.bias’, ‘module.bert.encoder.layer.4.attention.self.value.weight’, ‘module.bert.encoder.layer.4.attention.self.value.bias’, ‘module.bert.encoder.layer.4.attention.output.dense.weight’, ‘module.bert.encoder.layer.4.attention.output.dense.bias’, ‘module.bert.encoder.layer.4.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.4.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.4.intermediate.dense.weight’, ‘module.bert.encoder.layer.4.intermediate.dense.bias’, ‘module.bert.encoder.layer.4.output.dense.weight’, ‘module.bert.encoder.layer.4.output.dense.bias’, ‘module.bert.encoder.layer.4.output.LayerNorm.weight’, ‘module.bert.encoder.layer.4.output.LayerNorm.bias’, ‘module.bert.encoder.layer.5.attention.self.query.weight’, ‘module.bert.encoder.layer.5.attention.self.query.bias’, ‘module.bert.encoder.layer.5.attention.self.key.weight’, ‘module.bert.encoder.layer.5.attention.self.key.bias’, ‘module.bert.encoder.layer.5.attention.self.value.weight’, ‘module.bert.encoder.layer.5.attention.self.value.bias’, ‘module.bert.encoder.layer.5.attention.output.dense.weight’, ‘module.bert.encoder.layer.5.attention.output.dense.bias’, ‘module.bert.encoder.layer.5.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.5.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.5.intermediate.dense.weight’, ‘module.bert.encoder.layer.5.intermediate.dense.bias’, ‘module.bert.encoder.layer.5.output.dense.weight’, ‘module.bert.encoder.layer.5.output.dense.bias’, ‘module.bert.encoder.layer.5.output.LayerNorm.weight’, ‘module.bert.encoder.layer.5.output.LayerNorm.bias’, ‘module.bert.encoder.layer.6.attention.self.query.weight’, ‘module.bert.encoder.layer.6.attention.self.query.bias’, ‘module.bert.encoder.layer.6.attention.self.key.weight’, ‘module.bert.encoder.layer.6.attention.self.key.bias’, ‘module.bert.encoder.layer.6.attention.self.value.weight’, ‘module.bert.encoder.layer.6.attention.self.value.bias’, ‘module.bert.encoder.layer.6.attention.output.dense.weight’, ‘module.bert.encoder.layer.6.attention.output.dense.bias’, ‘module.bert.encoder.layer.6.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.6.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.6.intermediate.dense.weight’, ‘module.bert.encoder.layer.6.intermediate.dense.bias’, ‘module.bert.encoder.layer.6.output.dense.weight’, ‘module.bert.encoder.layer.6.output.dense.bias’, ‘module.bert.encoder.layer.6.output.LayerNorm.weight’, ‘module.bert.encoder.layer.6.output.LayerNorm.bias’, ‘module.bert.encoder.layer.7.attention.self.query.weight’, ‘module.bert.encoder.layer.7.attention.self.query.bias’, ‘module.bert.encoder.layer.7.attention.self.key.weight’, ‘module.bert.encoder.layer.7.attention.self.key.bias’, ‘module.bert.encoder.layer.7.attention.self.value.weight’, 
‘module.bert.encoder.layer.7.attention.self.value.bias’, ‘module.bert.encoder.layer.7.attention.output.dense.weight’, ‘module.bert.encoder.layer.7.attention.output.dense.bias’, ‘module.bert.encoder.layer.7.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.7.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.7.intermediate.dense.weight’, ‘module.bert.encoder.layer.7.intermediate.dense.bias’, ‘module.bert.encoder.layer.7.output.dense.weight’, ‘module.bert.encoder.layer.7.output.dense.bias’, ‘module.bert.encoder.layer.7.output.LayerNorm.weight’, ‘module.bert.encoder.layer.7.output.LayerNorm.bias’, ‘module.bert.encoder.layer.8.attention.self.query.weight’, ‘module.bert.encoder.layer.8.attention.self.query.bias’, ‘module.bert.encoder.layer.8.attention.self.key.weight’, ‘module.bert.encoder.layer.8.attention.self.key.bias’, ‘module.bert.encoder.layer.8.attention.self.value.weight’, ‘module.bert.encoder.layer.8.attention.self.value.bias’, ‘module.bert.encoder.layer.8.attention.output.dense.weight’, ‘module.bert.encoder.layer.8.attention.output.dense.bias’, ‘module.bert.encoder.layer.8.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.8.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.8.intermediate.dense.weight’, ‘module.bert.encoder.layer.8.intermediate.dense.bias’, ‘module.bert.encoder.layer.8.output.dense.weight’, ‘module.bert.encoder.layer.8.output.dense.bias’, ‘module.bert.encoder.layer.8.output.LayerNorm.weight’, ‘module.bert.encoder.layer.8.output.LayerNorm.bias’, ‘module.bert.encoder.layer.9.attention.self.query.weight’, ‘module.bert.encoder.layer.9.attention.self.query.bias’, ‘module.bert.encoder.layer.9.attention.self.key.weight’, ‘module.bert.encoder.layer.9.attention.self.key.bias’, ‘module.bert.encoder.layer.9.attention.self.value.weight’, ‘module.bert.encoder.layer.9.attention.self.value.bias’, ‘module.bert.encoder.layer.9.attention.output.dense.weight’, ‘module.bert.encoder.layer.9.attention.output.dense.bias’, ‘module.bert.encoder.layer.9.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.9.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.9.intermediate.dense.weight’, ‘module.bert.encoder.layer.9.intermediate.dense.bias’, ‘module.bert.encoder.layer.9.output.dense.weight’, ‘module.bert.encoder.layer.9.output.dense.bias’, ‘module.bert.encoder.layer.9.output.LayerNorm.weight’, ‘module.bert.encoder.layer.9.output.LayerNorm.bias’, ‘module.bert.encoder.layer.10.attention.self.query.weight’, ‘module.bert.encoder.layer.10.attention.self.query.bias’, ‘module.bert.encoder.layer.10.attention.self.key.weight’, ‘module.bert.encoder.layer.10.attention.self.key.bias’, ‘module.bert.encoder.layer.10.attention.self.value.weight’, ‘module.bert.encoder.layer.10.attention.self.value.bias’, ‘module.bert.encoder.layer.10.attention.output.dense.weight’, ‘module.bert.encoder.layer.10.attention.output.dense.bias’, ‘module.bert.encoder.layer.10.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.10.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.10.intermediate.dense.weight’, ‘module.bert.encoder.layer.10.intermediate.dense.bias’, ‘module.bert.encoder.layer.10.output.dense.weight’, ‘module.bert.encoder.layer.10.output.dense.bias’, ‘module.bert.encoder.layer.10.output.LayerNorm.weight’, ‘module.bert.encoder.layer.10.output.LayerNorm.bias’, ‘module.bert.encoder.layer.11.attention.self.query.weight’, ‘module.bert.encoder.layer.11.attention.self.query.bias’, ‘module.bert.encoder.layer.11.attention.self.key.weight’, 
‘module.bert.encoder.layer.11.attention.self.key.bias’, ‘module.bert.encoder.layer.11.attention.self.value.weight’, ‘module.bert.encoder.layer.11.attention.self.value.bias’, ‘module.bert.encoder.layer.11.attention.output.dense.weight’, ‘module.bert.encoder.layer.11.attention.output.dense.bias’, ‘module.bert.encoder.layer.11.attention.output.LayerNorm.weight’, ‘module.bert.encoder.layer.11.attention.output.LayerNorm.bias’, ‘module.bert.encoder.layer.11.intermediate.dense.weight’, ‘module.bert.encoder.layer.11.intermediate.dense.bias’, ‘module.bert.encoder.layer.11.output.dense.weight’, ‘module.bert.encoder.layer.11.output.dense.bias’, ‘module.bert.encoder.layer.11.output.LayerNorm.weight’, ‘module.bert.encoder.layer.11.output.LayerNorm.bias’, ‘module.bert.pooler.dense.weight’, ‘module.bert.pooler.dense.bias’, ‘module.classifier.0.weight’, ‘module.classifier.0.bias’, ‘module.classifier.1.weight’, ‘module.classifier.1.bias’, ‘module.masking_gate.weight’, ‘module.masking_gate.bias’, ‘module.merge_classifier_1.weight’, ‘module.merge_classifier_1.bias’]
Some weights of BertForTokenAndSequenceJointClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.output.dense.weight', 
'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 
'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias', 'classifiers.0.bias', 'classifiers.0.weight', 'classifiers.1.weight', 'classifiers.1.bias', 'masking_gate.weight', 'masking_gate.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
What does it mean? Does it mean those weights in the list were not loaded from the checkpoint? | Okay, I went through the source code and it does indicate those weights are unexpected. | 0
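The unused keys all start with module., which is the prefix torch.nn.DataParallel adds when a wrapped model is saved — so they simply don't line up with the names from_pretrained expects. A small sketch of stripping that prefix before loading (using the checkpoint path and custom class from the question above):
import torch

raw_state_dict = torch.load("./20190509-115940.pt", map_location="cpu")
# drop the leading "module." that DataParallel added to every key
state_dict = {k[len("module."):] if k.startswith("module.") else k: v
              for k, v in raw_state_dict.items()}

model = BertForTokenAndSequenceJointClassification.from_pretrained(
    pretrained_model_name_or_path="bert-base-cased",
    state_dict=state_dict,
)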
huggingface | Beginners | How to Use a Nested Python Dictionary in Dataset.from_dict | https://discuss.huggingface.co/t/how-to-use-a-nested-python-dictionary-in-dataset-from-dict/5757 | I have a nested python dictionary
# create a nested dictionary
Dict = {'train': {'id': np.arange(len(train_texts)),
                  'tokens': train_texts,
                  'tags': train_tags},
        'val': {'id': np.arange(len(val_texts)),
                'tokens': val_texts,
                'tags': val_tags},
        'test': {'id': np.arange(len(test_texts)),
                 'tokens': test_texts,
                 'tags': test_tags}
        }
My question is: how do I use the nested dictionary with Dataset.from_dict() such that it gives me an output like the following:
DatasetDict({
    train: Dataset({
        features: ['id', 'tokens', 'tags'],
        num_rows: 6801
    })
    val: Dataset({
        features: ['id', 'tokens', 'tags'],
        num_rows: 1480
    })
    test: Dataset({
        features: ['id', 'tokens', 'tags'],
        num_rows: 1532
    })
}) | hey @GSA, as far as I know you can't create a DatasetDict object directly from a python dict, but you could try creating 3 Dataset objects (one for each split) and then adding them to a DatasetDict as follows:
from datasets import Dataset, DatasetDict

dataset = DatasetDict()
# using your `Dict` object
for k, v in Dict.items():
    dataset[k] = Dataset.from_dict(v) | 0
huggingface | Beginners | NER model only predicts the outside ‘O’ tag | https://discuss.huggingface.co/t/ner-model-only-predicts-the-outside-o-tag/5805 | Hi
I’ve managed to produce an NER module following this tutorial, although I am using the ScienceIE dataset (Process, Material, Task entities).
The issue is that my model only predicts the ‘O’ tag… I can understand that the model does it because the majority of tags are ‘O’ but obviously this isn’t solving the task.
I’ve created a custom compute_metrics function that calculates f1 score for each entity and the macro-average. However, this doesn’t help steer the model. Do I need to create a custom loss function?
I’ve tried googling around for this issue but haven’t had much luck. Sorry if this is a repeat issue.
Thanks for your help. | hey @CDobbs
CDobbs:
I’ve created a custom compute_metrics function that calculates f1 score for each entity and the macro-average. However, this doesn’t help steer the model. Do I need to create a custom loss function?
if you’re using the Trainer for fine-tuning, then you shouldn’t need a custom loss function. speaking of losses, does the train / validation loss decrease during fine-tuning?
without seeing more details / code on your approach, it’s hard to debug the problem - can you share them?
ps. you might find it instructive to see the “official” tutorial on NER which contains many more details than the one you linked to: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb#scrollTo=n9qywopnIrJH 17 | 0 |
huggingface | Beginners | How to use fine-tuned model | https://discuss.huggingface.co/t/how-to-use-fine-tuned-model/5799 | hi,
I use trainer.train() to fine-tune a BERT model on my custom NER dataset,
and trainer.save_model() to save the models that have different hyperparameters.
How should I reload my different models? | You can just do
from transformers import AutoModel
model = AutoModel.from_pretrained(output_dir_of_your_trainer) | 0
huggingface | Beginners | How to use Data Collator? | https://discuss.huggingface.co/t/how-to-use-data-collator/5794 | I want to train a TF transformer model for NER with my pipeline. I have a problem with the alignment of labels. As I understand it, for this task one uses DataCollatorForTokenClassification, but I can't figure out how to use it outside of the Trainer to get aligned labels.
Just to clarify what I mean:
tokens: ['Europe', 'is', 'international']
labels: ['1', '0', '0']
input_ids: ['545', '43', '6343', '2334', '2'] | hey @Constantin, you should be able to use the tokenize_and_align_labels function from here: transformers/run_ner_no_trainer.py at bc2571e61c985ec82819cf01ad038342771c94d0 · huggingface/transformers · GitHub
you could also try adapting the pytorch code to tensorflow for the training loop | 0 |
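For convenience, here is a condensed sketch of what that function does, relying on the word_ids() mapping exposed by fast tokenizers (the helper name and arguments below are illustrative):
def tokenize_and_align_labels(tokens, word_labels, tokenizer):
    # tokens: list of words, word_labels: one label id per word
    encoding = tokenizer(tokens, is_split_into_words=True, truncation=True)
    labels = []
    previous_word = None
    for word_id in encoding.word_ids():
        if word_id is None:             # special tokens ([CLS], [SEP], padding)
            labels.append(-100)
        elif word_id != previous_word:  # first sub-token of a word keeps the label
            labels.append(word_labels[word_id])
        else:                           # remaining sub-tokens are ignored in the loss
            labels.append(-100)
        previous_word = word_id
    encoding["labels"] = labels
    return encoding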
huggingface | Beginners | Trainer not logging eval_loss | https://discuss.huggingface.co/t/trainer-not-logging-eval-loss/5760 | I am using Trainer from master branch with following args:
args = TrainingArguments(
output_dir="nq-complete-training",
overwrite_output_dir=False,
do_train=True,
do_eval=True,
evaluation_strategy="steps",
eval_steps=5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=1,
group_by_length=True,
learning_rate=7e-5,
warmup_steps=50,
lr_scheduler_type="linear",
num_train_epochs=3,
logging_strategy="steps",
logging_steps=5,
save_strategy="steps",
run_name="nq",
disable_tqdm=False,
report_to="wandb",
remove_unused_columns=False,
fp16=False,
)
Trainer is not logging eval loss. Any idea why? | hey @vasudevgupta, is it possible that you're not providing the correct labels / label names for the loss to be computed?
judging by output_dir, it looks like you’re doing question answering, in which case the trainer looks for the column names ["start_positions", "end_positions"] in the label_names argument. | 0 |
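Concretely, that would look something like the sketch below — the point being that the Trainer only computes and logs eval_loss when it can find the label columns in the batch (only the relevant arguments from the original setup are shown here):
args = TrainingArguments(
    output_dir="nq-complete-training",
    evaluation_strategy="steps",
    eval_steps=5,
    label_names=["start_positions", "end_positions"],  # tell the Trainer which columns are labels
    remove_unused_columns=False,
)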
huggingface | Beginners | Weak Conversational Skills - dialogPT trained model issue | https://discuss.huggingface.co/t/weak-conversational-skills-dialogpt-trained-model-issue/5342 | I noticed in most / all dialogPT tutorials, when somebody trains on top of it with their own data, the answers they get back from it always turn into “!!!?!?!!;,!.com?!” - “!!!” - “”, and stuff like that after about 3-5 questions. I also had this problem in my own training code. Why is that? | From my experience this correlates with:
Lack of fine-tuning for your specific length. I don’t know why that is the case but I have noticed a significant drop in this “!!!?!?!!;,!.com?!” thing once you increase the fine-tuning dataset size.
This seems to only occur on DialoGPT-small. I have not seen it once on the medium version. This is not that big a deal, since if you can train DialoGPT-small, generally you will be able to train DialoGPT-medium on the same GPU.
P.S. You had me confused for a second there. It's not "dialogPT", it's DialoGPT, as it's based on the GPT-2 model. | 0
huggingface | Beginners | How to load training_args | https://discuss.huggingface.co/t/how-to-load-training-args/5720 | Hi
I wonder how I can load the training_args.bin?
thanks | it’s saved with torch so you can just do training_args = torch.load(path_to_training_args) | 0 |
huggingface | Beginners | How can I renew my API key | https://discuss.huggingface.co/t/how-can-i-renew-my-api-key/5696 | Hello, I was wondering if there’s a way to renew/create a new API key as I might have leaked my older one? Please assist. | cc @julien-c or @pierric | 0 |
huggingface | Beginners | Missing examples in transformers/examples/language-modeling | https://discuss.huggingface.co/t/missing-examples-in-transformers-examples-language-modeling/5691 | Hi,
I have been using the scripts in transformers/examples/language-modeling, like run_clm.py and run_clm_no_trainer.py, in my project, but they're all gone since yesterday; instead there's only a run_mlm_flax.py in transformers/examples/flax/language-modeling now. Did anything go wrong, or were the example scripts wrong so they were taken down? If the examples are wrong I need to fix my code too.
Thank you! | The examples have moved into backend-specific folders, so the ones you are looking for are in transformers/examples/pytorch/language-modeling. | 0
huggingface | Beginners | Early stopping callback problem | https://discuss.huggingface.co/t/early-stopping-callback-problem/5649 | Hello,
I am having problems with the EarlyStoppingCallback I set up in my trainer class as below:
training_args = TrainingArguments(
output_dir = 'BERT',
num_train_epochs = epochs,
do_train = True,
do_eval = True,
evaluation_strategy = 'epoch',
logging_strategy = 'epoch',
per_device_train_batch_size = batch_size,
per_device_eval_batch_size = batch_size,
warmup_steps = 250,
weight_decay = 0.01,
fp16 = True,
metric_for_best_model = 'eval_loss',
load_best_model_at_end = True
)
trainer = MyTrainer(
model = bert,
args = training_args,
train_dataset = train_dataset,
eval_dataset = val_dataset,
compute_metrics = compute_metrics,
callbacks = [EarlyStoppingCallback(early_stopping_patience = 3)]
)
trainer.train()
I keep getting the following error:
(screenshot of the error traceback)
I already tried running the code without the metric_for_best_model arg, but it still gives me the same error.
I tweaked the Trainer class a bit to report metrics during training, and also created custom_metrics to report during validation. I suspect that maybe I made a mistake there and that’s why I can’t retrieve the validation loss now. See here 29 for the tweaked code.
Thanks in advance!! | You won't be able to use the EarlyStoppingCallback with a nested dictionary of metrics as you did, no. And it will need the metric you are looking for to be prefixed by eval_ (otherwise it will add it, unless you change the code too). You probably will need to write your own version of the callback for this use case.
At some point, instead of rewriting the whole Trainer, you might be interested in writing your own training loop with Accelerate. You can still have mixed precision training and distributed training but will have full control over your training loop. There is one example for each task using Accelerate (the run_xxx_no_trainer scripts) in the examples of Transformers | 0
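For what it's worth, a rough sketch of such a custom callback. It assumes your metric ends up in the flat dict passed to on_evaluate under a key like eval_validation_loss — that key name is hypothetical, so adapt it to whatever your compute_metrics actually returns:
from transformers import TrainerCallback

class NestedMetricEarlyStopping(TrainerCallback):
    def __init__(self, metric_key="eval_validation_loss", patience=3):
        self.metric_key = metric_key
        self.patience = patience
        self.best = None
        self.bad_evals = 0

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        value = metrics.get(self.metric_key) if metrics else None
        if value is None:
            return
        if self.best is None or value < self.best:   # lower is better for a loss
            self.best = value
            self.bad_evals = 0
        else:
            self.bad_evals += 1
            if self.bad_evals >= self.patience:
                control.should_training_stop = True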
huggingface | Beginners | Colab error (memory crashes) | https://discuss.huggingface.co/t/colab-error-memory-crashes/1429 | I have this trainer code on a sample of only 10,000 records, yet the GPU still runs out of memory. I am using Google Colab Pro, and this didn't happen to me before. Is something wrong in my code? Please see:
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir='/content/drive/My Drive/results/distillbert', # output directory
overwrite_output_dir= True,
do_predict= True,
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=4, # batch size per device during training
per_device_eval_batch_size=2, # batch size for evaluation
warmup_steps=1000, # number of warmup steps for learning rate scheduler
save_steps=1000,
save_total_limit=10,
load_best_model_at_end= True,
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=0,
)
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-cased")
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
) | How big is each record? How big is it after tokenization?
Are you using a data loader? What batch size is it using?
What happens if you try to train using only 10 records? | 0
huggingface | Beginners | Expected scalar type Long but found Float using Trainer for BertForTokenClassification | https://discuss.huggingface.co/t/expected-scalar-type-long-but-found-float-using-trainer-for-bertfortokenclassification/5617 | Hello,
I am using Trainer with BertForTokenClassification:
training_args = TrainingArguments(
evaluation_strategy="epoch",
learning_rate=2e-5,
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
#warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=encoded_dataset['train'], # training dataset
eval_dataset=encoded_dataset['validation'], # evaluation dataset
)
and I am getting errors as follows:
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: expected scalar type Long but found Float
Is there any option for trainer I need to set to fix this error?
Thank you. | From the very little you shared about the error message, it seems that your labels don’t have the right type. So the problem lies in the dataset you fed to the Trainer. | 0 |
huggingface | Beginners | Is “Some weights of the model were not used” warning normal when pre-trained BERT only by MLM | https://discuss.huggingface.co/t/is-some-weights-of-the-model-were-not-used-warning-normal-when-pre-trained-bert-only-by-mlm/5672 | Hello guys,
I've trained a BERT model from scratch using BertForMaskedLM and the Trainer. When I use AutoModelForSequenceClassification to fine-tune my model for a text classification task, I get a warning about weight initialization. Is it normal to get a warning such as the one below, or am I doing something wrong?
Some weights of the model checkpoint at ./cased/bert-wikidump-50mb-mlm/model were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.decoder.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at ./cased/bert-wikidump-50mb-mlm/model and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias', 'classifier.weight', 'classifier.bias']
Loading pre-trained model with AutoModelForSequenceClassification
from transformers import AutoModelForSequenceClassification, AdamW, AutoConfig
config = AutoConfig.from_pretrained(PATHS["model"]["cased"]["local"], num_labels=df.category.unique().size)
model = AutoModelForSequenceClassification.from_pretrained(PATHS["model"]["cased"]["local"], config=config)
Code for training BERT from scratch with only MLM task
from transformers import BertConfig
config = BertConfig(vocab_size=64_000)
from transformers import BertForMaskedLM
model = BertForMaskedLM(config=config)
from transformers import Trainer, TrainingArguments
from transformers import DataCollatorForLanguageModeling
data_collator = DataCollatorForLanguageModeling(
tokenizer=bert_cased_tokenizer, mlm=True, mlm_probability=0.15
)
training_args = TrainingArguments(
output_dir=PATHS["model"]["cased"]["training"]["local"],
overwrite_output_dir=True,
num_train_epochs=2,
per_gpu_train_batch_size=8,  ## 512 max sequence length, 64 sequence count
save_steps=10_000,
save_total_limit=2,
prediction_loss_only=True,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
    ) | Yes, the warning is telling you that some weights were randomly initialized (here your classification head), which is normal since you are instantiating a pretrained model for a different task. It's there to remind you to fine-tune your model (it's not usable for inference directly). | 0
huggingface | Beginners | LM finetuning on domain specific unlabelled data | https://discuss.huggingface.co/t/lm-finetuning-on-domain-specific-unlabelled-data/5433 | Hello Team,
Thanks a lot for the great work!
Can you please tell me how to fine-tune a (any) MLM model on a domain-specific corpus? I am following this link from the Hugging Face documentation. Is this the procedure I should be following? If this is how it is done, how will this update the vocabulary to adapt to new tokens from my domain-specific corpus?
Thanks in advance. | Can anyone help ? | 0 |
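No answer was posted in this thread, so here is a rough sketch of the usual approach: masked-language-model fine-tuning with the Trainer (the file path and base checkpoint below are assumptions). Note that this keeps the original tokenizer vocabulary - new domain words are simply split into existing subword pieces rather than added as new tokens.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical plain-text domain corpus, one document per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                        batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-domain", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
If you really need new domain tokens, that is a separate step: extend the tokenizer with tokenizer.add_tokens(...) and then call model.resize_token_embeddings(len(tokenizer)) before training.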
huggingface | Beginners | Linear Learning Rate Warmup with step-decay | https://discuss.huggingface.co/t/linear-learning-rate-warmup-with-step-decay/5604 | Hey Guys,
I am new to PyTorch and have to train deep models like ResNet and VGG, which require the following learning rate schedule:
Linearly increase the learning rate from 0 to ‘initial_lr’ in the first k training steps/iterations
Continue with ‘initial_lr’ for the next ‘m’ training steps
Decay the learning rate in a step-decay manner. For example, after the 30th epoch you reduce 'initial_lr' by a factor of 10, and after the 45th epoch you reduce it by a factor of 10 again for the rest of training.
This can be better visualized with the following picture (not included here), which shows an example using LeNet-300-100 on MNIST with TensorFlow 2.
How can I achieve this particular learning rate schedule with ‘huggingface’?
Thanks | hey @adaptivedecay, you can define your own scheduler for the learning rate by subclassing Trainer and overriding the create_scheduler function to include your logic: Trainer — transformers 4.5.0.dev0 documentation 3
alternatively, you can pass the optimizer and scheduler as a tuple in the optimizers argument. | 0 |
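To make the second suggestion concrete, here is a rough sketch of that schedule as a LambdaLR passed through the optimizers argument (the warmup length and decay boundaries are placeholder values, and model / training_args / train_dataset come from your own setup):
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR
from transformers import Trainer

# Sketch of the schedule in the question: linear warmup, a flat phase, then 10x step decays.
warmup_steps, decay_boundaries = 500, [30_000, 45_000]   # placeholder step counts

def lr_lambda(step):
    if step < warmup_steps:
        return step / max(1, warmup_steps)   # linear warmup from 0 to initial_lr
    factor = 1.0
    for boundary in decay_boundaries:
        if step >= boundary:
            factor /= 10.0                   # step decay
    return factor

optimizer = AdamW(model.parameters(), lr=1e-3)
scheduler = LambdaLR(optimizer, lr_lambda)

trainer = Trainer(model=model, args=training_args,
                  train_dataset=train_dataset,
                  optimizers=(optimizer, scheduler))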
huggingface | Beginners | Force decoder to avoid repetition between generated sentences | https://discuss.huggingface.co/t/force-decoder-to-avoid-repetition-between-generated-sentences/625 | I would like to fine-tune T5 for diverse paraphrase generation. For each original sentence, I would like to have several different paraphrases generated, but current results contain sentences very similar to each other.
Example:
Original Question ::
What is the expected close date of the opportunity
Paraphrased Questions Generated by T5::
0: What will be the expected close date of the opportunity?
1: What is the expected closing date for the opportunity that you are considering?
2: What is the expected close date of the opportunity?
3: What is the expected close date on the opportunity?
4: When would be the expected close date of the opportunity?
I tried to add diversity measure in the training but was notified it wouldn’t work.
Thus, I want to directly force the decoder to avoid the repetition of n-grams between generated sentences during testing. The generate function has two parameters: repetition_penalty and no_repeat_ngram_size. I checked the paper and the source code; if I understand correctly, they only avoid repetition along a single beam rather than between the sentences. No surprise: I tried different values of the two parameters and there seems to be no effect.
Thus, I was wondering if there is any simple way to penalize the repetition between sentences? My thought is, during beam search, to penalize the probabilities of repetitive words on different branches at the same/nearby step. Is there open source code available for this? If not, is there anything I need to pay attention to when I modify the generate() function for this? | Hey @mengyahu - this sounds like a cool use case! You are right that with the current generate() method it is not really possible to avoid repetitions between sentences. It's quite a special case, so I'd suggest that after this PR is merged (Big generate() refactor) you make a fork of the transformers repo and try to tweak the beam_scorer.update() function (or the BeamSearchScorer class in general) to add a penalty as needed. | 0
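As a side note for anyone landing here later: more recent releases of generate() expose group ("diverse") beam search, which penalizes overlap between beam groups rather than within a single beam. It does not strictly forbid repetition across outputs, but it is an easy thing to try before patching BeamSearchScorer (parameter names assume a recent transformers version):
outputs = model.generate(
    input_ids,
    num_beams=10,
    num_beam_groups=5,        # split the 10 beams into 5 groups
    diversity_penalty=1.0,    # penalize tokens already chosen by other groups at the same step
    num_return_sequences=5,
    max_length=32,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))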
huggingface | Beginners | RuntimeError: CUDA out of memory. Tried to allocate 11.53 GiB (GPU 0; 15.90 GiB total capacity; 4.81 GiB already allocated; 8.36 GiB free; 6.67 GiB reserved in total by PyTorch) | https://discuss.huggingface.co/t/runtimeerror-cuda-out-of-memory-tried-to-allocate-11-53-gib-gpu-0-15-90-gib-total-capacity-4-81-gib-already-allocated-8-36-gib-free-6-67-gib-reserved-in-total-by-pytorch/5622 | After I run trainer.train and I try to predict the wer of the model, I always get this output.
How to solve it?
The code is below:
def predict(batch, model):
input_dict = processor(batch["input_values"], sampling_rate=16000, return_tensors='pt',padding=True)
logits = model(input_dict.input_values.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)[0]
batch['pred_ids'] = processor.decode(pred_ids)
return batch
from transformers import TrainingArguments
from transformers import Wav2Vec2ForCTC
model = Wav2Vec2ForCTC.from_pretrained(
model_name,
attention_dropout=0.1,
hidden_dropout=0.1,
feat_proj_dropout=0.0,
mask_time_prob=0.05,
layerdrop=0.1,
gradient_checkpointing=True,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer)
)
training_args = TrainingArguments(
# output_dir="/content/gdrive/MyDrive/wav2vec2-large-xlsr-portuguese-demo/modelo",  # duplicate keyword argument, commented out to avoid a SyntaxError
output_dir="./wav2vec2-large-xlsr-portuguese-demo",
group_by_length=True,
per_device_train_batch_size=16,
gradient_accumulation_steps=2,
evaluation_strategy="steps",
num_train_epochs=5,
fp16=True,
save_steps=400,
eval_steps=400,
logging_steps=400,
learning_rate=3e-4,
warmup_steps=500,
save_total_limit=2,
)
from transformers import Trainer
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=d_train,
eval_dataset=d_val,
tokenizer=processor.feature_extractor,
)
If you want to access the whole project, it is available at:
colab.research.google.com
Google Colaboratory | Try this:
import gc
gc.collect()
torch.cuda.empty_cache() | 0 |
huggingface | Beginners | Question Regarding trainer arguments:: load_best_model_at_end | https://discuss.huggingface.co/t/question-regarding-trainer-arguments-load-best-model-at-end/5593 | My question is regarding transformers.TrainingArguments class argument. There are two parameter,
save_total_limit
load_best_model_at_end
Q1. Let’s just say I have set save_total_limit=50. But the best model found by the metric doesn’t stay in the last 50 checkpoints. Maybe it is in the last 200 checkpoints.
Now, will load_best_model_at_end select the best model from the last 50 checkpoints or from the entire training run?
Q2. The problem regarding this is that we do not always have large SSD space (or even regular storage) to train the model, so save_total_limit is a limited feature that depends on an individual's disk space. In contrast, applying save_total_limit to the best checkpoints would be a great feature. In that way, you could even look at an ensemble of multiple checkpoints (which may be good for generation tasks).
So is there any way you can save “best 5 checkpoints” (or best X) from the entire training duration?
Note: I tried to read the source code, but there are too many callback functions to deal with. It would save a lot of time if someone could help. | When you use load_best_model_at_end in combination with save_total_limit, the checkpoint with the best metric is never deleted (it's always put first in the list of all checkpoints).
There is no way for now to keep the 5 best checkpoints, only the best one. | 0 |
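For reference, a minimal sketch of the arguments involved (the values below are placeholders):
from transformers import TrainingArguments

# Keep the 50 most recent checkpoints, but always preserve (and reload) the best one.
training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    save_steps=500,
    save_total_limit=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",   # or a metric returned by your compute_metrics
    greater_is_better=False,
)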
huggingface | Beginners | Input of compute_metrics in ASR model | https://discuss.huggingface.co/t/input-of-compute-metrics-in-asr-model/5580 | What should be the input of the function below?
Is it the model or a pass forward of the model?
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
Also, I have one question about how the Trainer class works…
I mean, it encloses everything, but does it update the weights of the model automatically, or does it create another instance of the model? | The input is a namedtuple of type EvalPrediction. It should have one key predictions for the predictions (it will have the same structure as the output of your model, so one tensor if your model outputted one tensor, a tuple of two tensors if that's what your model returns, etc.) and one key label_ids that will contain all the labels.
The Trainer does update the weights of the model, otherwise, the model would not be… well… training | 0 |
huggingface | Beginners | How to combine two models’ logits | https://discuss.huggingface.co/t/how-to-combine-two-models-logits/5582 | Hi,
I want to perform text generation by combining the logits of two existing language models in various ways (these models both have causal LM heads). What is the best way to do this? I’ve tried to subclass PreTrainedModel to contain the two models and then output concatenations of the two models’ logits, but the configuration and initialization methods are more geared towards saving and loading existing models rather than combining existing models, so this hasn’t worked out so well. It’s easy to do this kind of task in standard pytorch for vision models, is there a simple way to do this in Huggingface that I’m missing?
Thank you for the help! | You should be able to create a pytorch model with each of the huggingface models initialized as layers of the model. Then in the forward function for the pytorch model, pass the inputs through self.model_a and self.model_b to get logits from both. You can concatenate these there and pass them through the rest of the model. I’ve written the PSEUDOCODE (this code won’t run directly, but presents the general idea) for the same below:
import torch
import torch.nn as nn
from transformers import AutoModel
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.model_a = AutoModel.from_pretrained('model_a')
self.model_b = AutoModel.from_pretrained('model_b')
self.classifier = nn.Sequential(
nn.Dropout(p=0.1),
            nn.Linear(2 * 768, 768, bias=True),  # two concatenated 768-dim [CLS] vectors -> 1536 inputs
nn.Tanh(),
nn.Dropout(p=0.1),
nn.Linear(768, 3, bias=True)
)
def forward(self, input_ids, attention_mask):
logits_a = self.model_a(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0, :]
logits_b = self.model_b(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0, :]
        concatenated_vectors = torch.cat((logits_a, logits_b), dim=1)  # shape: (batch, 2 * 768)
output = self.classifier(concatenated_vectors)
return output
model = Net()
You can just train this model like how you train a regular Pytorch model.
Edit: Made a small error in the code by passing x to classifier instead of concatenated_vectors. | 0 |
huggingface | Beginners | Apple silicon Installation of Transformers | https://discuss.huggingface.co/t/apple-silicon-installation-of-transformers/3060 | What follows is a ‘hint sequence’ to get Transformers working on Big Sur 11.1 @ Apple Silicon M1.
Please try it with the utmost care, and consider it "AS IS": it has been proven twice, but I cannot be responsible for any unintended behaviour.
Install Xcode cmd line tool chain
$ xcode-select --install
Install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
…and a couple of utils. Rustup from source is the key to get Transformers working on M1.
brew install vim wget
brew install --build-from-source rustup-init
Install miniforge -put it in a temp directory of your choice-
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
bash Miniforge3-MacOSX-arm64.sh
refresh your env variables (run again the shell bash/zsh: effective while inelegant )
bash
conda config --set auto_activate_base false && conda deactivate
Download TensorFlow 2.4 from Apple M1 BUT DON’T INSTALL IT, YET!
$ cd $HOME && git clone https://github.com/apple/tensorflow_macos.git && cd tensorflow_macos/arm64
Create a Conda (virtual) environment -and always restart a shell session-
bash && conda create --name tf24transformers
conda activate tf24transformers && conda install -y python=3.8.6
$ conda install -y pandas matplotlib scikit-learn jupyterlab
Install Apple Tensorflow (ATF)2.4. You’re already in arm64. (If not cd to $HOME/tensorflow_macos/arm64).
7.1)
$ pip install --force pip==20.2.4 wheel setuptools cached-property six
7.2) Install the packages from Apple
$ pip install --upgrade --no-dependencies --force numpy-1.18.5-cp38-cp38-macosx_11_0_arm64.whl grpcio-1.33.2-cp38-cp38-macosx_11_0_arm64.whl h5py-2.10.0-cp38-cp38-macosx_11_0_arm64.whl tensorflow_addons-0.11.2+mlcompute-cp38-cp38-macosx_11_0_arm64.whl
7.3) Some dependencies…
$ pip install absl-py astunparse flatbuffers gast google_pasta keras_preprocessing opt_einsum protobuf tensorflow_estimator termcolor typing_extensions wrapt wheel tensorboard typeguard
7.4) Install ATF
$ pip install --upgrade --force --no-dependencies tensorflow_macos-0.1a1-cp38-cp38-macosx_11_0_arm64.whl
7.5) Refresh your shell once more and you are done
bash && conda activate tf24transformers
python -m pip install --upgrade pip
pip install tokenizers transformers
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
[{‘label’: ‘POSITIVE’, ‘score’: 0.9998704791069031}]
Many thanks to Fabrice Daniel for the tutorial about AFT2.4.
Ciao
Ernesto | what’s up with step 2? What does it say?
this can't be that hard
Do you mean?
brew install vim wget
and then
brew install --build-from-source rustup-init | 0 |
huggingface | Beginners | Ner_run.py: ValueError: Can’t find a valid checkpoint | https://discuss.huggingface.co/t/ner-run-py-valueerror-cant-find-a-valid-checkpoint/5512 | I want to start my train process from the last checkpoint
!python ./transformers/examples/token-classification/run_ner.py \
--model_name_or_path xlm-roberta-base \
--output_dir /content/drive/MyDrive/model/roberta \
--do_train \
--test_file /content/nertest.json \
--validation_file /content/nerval.json \
--train_file /content/nertrain.json \
--num_train_epochs 4.0
But I get this error:
ValueError: Can't find a valid checkpoint at /content/drive/MyDrive/model/roberta/checkpoint-4500
And this is my checkpoint-4500 folder
So, what is the reason for it? What counts as a valid checkpoint? | Problem has been solved | 0
huggingface | Beginners | Evaluation without using a Trainer | https://discuss.huggingface.co/t/evaluation-without-using-a-trainer/5542 | Is there a clean way to perform evaluation on a model without using the Trainer?
I have scripts which train and then evaluate my model. I want to perform further evaluation on the trained model - for now I've hacked my training script so I create a Trainer but don't call train() - is there a better way of doing this? | you'll need to set up the evaluation dataset / dataloader and write your own evaluation loop to compute the metrics.
you can find examples of this in the examples/legacy directory of the transformers repo, e.g. here is the evaluation for question-answering on squad: transformers/run_squad.py at 6e1ee47b361f9dc1b6da0104d77b38297042efae · huggingface/transformers · GitHub | 0
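A bare-bones version of such a loop looks roughly like this (model, eval_dataset, collate_fn and metric are assumed to come from your own setup):
import torch
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

loader = DataLoader(eval_dataset, batch_size=16, collate_fn=collate_fn)
for batch in loader:
    batch = {k: v.to(device) for k, v in batch.items()}
    with torch.no_grad():
        logits = model(**batch).logits
    preds = logits.argmax(dim=-1)
    metric.add_batch(predictions=preds.cpu(), references=batch["labels"].cpu())

print(metric.compute())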
huggingface | Beginners | What is the largest model on huggingface? | https://discuss.huggingface.co/t/what-is-the-largest-model-on-huggingface/5509 | Hello,
Can anyone tell me the model that is pre-trained on the largest data set on huggingface? | Those are two different things. A model can be large and trained on very little or even no data. The size of a model is typically measured by the number of parameters. That is different from the data set size or number of steps that the model has been trained. | 0 |
huggingface | Beginners | Column names of custom dataset for use with trainer | https://discuss.huggingface.co/t/column-names-of-custom-dataset-for-use-with-trainer/5532 | Hi all, Please, I would like to pass my custom datasets to trainer for a text classification. I have read an example on how to pass one of the pre-packaged datasets to trainer, but what I don’t understand is: what should be the names of the columns holding the input_ids and the labels after tokenization? Also, before tokenization, what shoud be names of the columns holding the text and labels ? Thanks a lot | hey @rahmanoladi, in general you can have whatever column names you want for the text and labels before tokenization - it’s up to you to decide how the text should be processed.
once you’ve tokenized the text, you shouldn’t need to rename the resulting columns like input_ids and attention_mask (and i wouldn’t recommend this since it will probably break the Trainer logic).
by default, the Trainer looks for the label column name labels but you can override this by specifying the value of TrainingArguments.label_names: Trainer — transformers 4.5.0.dev0 documentation | 0
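A short sketch of that preprocessing with 🤗 datasets (the text / category column names and the CSV files are assumptions, and rename_column is available in recent datasets releases):
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Hypothetical CSV files with "text" and "category" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "val.csv"})
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)
dataset = dataset.rename_column("category", "labels")   # Trainer looks for "labels" by default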
huggingface | Beginners | DeBERTa absolute Positions | https://discuss.huggingface.co/t/deberta-absolute-positions/3671 | Hi! I was taking a look at Hugging Face’s DeBERTa implementation 5, but I couldn’t find where in the code are the absolute positions being added (they should come before the softmax layer), can someone point me towards the line where that is being done?
Thank you for your time | cc @lysandre maybe? | 0 |
huggingface | Beginners | Does the HF Trainer class support multi-node training? | https://discuss.huggingface.co/t/does-the-hf-trainer-class-support-multi-node-training/5527 | Does the HF Trainer class support multi-node training? Or only single-host multi-GPU training? | It supports both single-node and multi-node distributed training with the PyTorch launcher (torch.distributed.launch) | 0
huggingface | Beginners | Evaluation results in training GPT-2 on WikiText-2 | https://discuss.huggingface.co/t/evaluation-results-in-training-gpt-2-on-wikitext-2/5503 | Hello everyone,
I'm trying to find the existing evaluation results for training GPT-2 on WikiText-2. In the GPT-2 model card, it is mentioned that the perplexity is 29.41, whereas in this blog post by OpenAI, it is said that the perplexity is 18.34 for this task.
I was wondering whether this difference is due to a different loss (Hugging Face used the causal language modeling loss)? | No, the difference is in what model is evaluated. The model card takes the results reported in the paper for the smallest GPT-2 model, the PPL of 18.34 is for the largest one, which is gpt2-xl on the hub. | 0
huggingface | Beginners | ImportError: cannot import name ‘GPTNeoForCausalLM’ from ‘transformers’ (unknown location) | https://discuss.huggingface.co/t/importerror-cannot-import-name-gptneoforcausallm-from-transformers-unknown-location/5452 | Hi, I am trying to execute:
from transformers import GPT2Tokenizer, GPTNeoForCausalLM
and I’m getting the above mentioned error.
I’m using torch == 1.8.1 and transformers == 4.4.2. Any help or solution? | GPT-Neo was released in version 4.5 only, so you need to upgrade to that version | 0 |
huggingface | Beginners | Running out of Memory with run_clm.py | https://discuss.huggingface.co/t/running-out-of-memory-with-run-clm-py/5463 | Hi,
first of all, thanks for creating such a cool library
I have already successfully fine-tuned a GPT2 model and I currently want to fine-tune a GPT2-Large model from the same 1.4 GB training dataset, but I seem to be running out of memory.
When I run the run_clm.py script, I usually get “Killed” as the output. My parameters are the following:
python run_clm.py \
--use_fast_tokenizer \
--model_name_or_path gpt2-large \
--train_file "/home/mark/Downloads/adp5/train2.txt" \
--validation_file "/home/mark/Downloads/adp5/test2.txt" \
--do_train \
--do_eval \
--fp16 \
--overwrite_cache \
--evaluation_strategy="steps" \
--output_dir finetuned \
--eval_steps 200 \
--num_train_epochs 1 \
--gradient_accumulation_steps 2 \
--per_device_train_batch_size 8
When viewing memory allocation, I can see that both system memory (64 GB) and swap (16 GB) have been completely allocated (GPU memory is not allocated).
I’ve tried using deepspeed as well, but end up with the same error.
Does anybody know what’s wrong?
Cheers,
Mark | Hey @MarkStrong do you still get memory issues if you reduce the batch size? | 0 |
huggingface | Beginners | Can trainer.hyperparameter_search also tune the drop_out_rate? | https://discuss.huggingface.co/t/can-trainer-hyperparameter-search-also-tune-the-drop-out-rate/5455 | I’m trying to do hyperparameter searching for the distilBERT model on the sequence classification task. I also want to try different dropout rates. Can the trainer.hyperparameter_search do this? I tried the following code, but it is not working:
image867×305 39.3 KB
image1360×646 110 KB | The model_init is called once at init with no trial. So you need to add a test whether trial is None or not in your model_init. | 0 |
huggingface | Beginners | Fine-tune TF-XML-ROBERTa for token classification | https://discuss.huggingface.co/t/fine-tune-tf-xml-roberta-for-token-classification/5465 | Hello
I would like to finetune TF-XML-RoBERTa for token classification with custom dataset. Quite simple task, I know. But I have some problems with it…
I start with defining Tokenizer
tokenizer = AutoTokenizer.from_pretrained('jplu/tf-xlm-roberta-large')
Firstly, I convert conllu2003 dataset into tf.Dataset. After it, I have the dataset in the following format:
({'input_ids': array([[ 0, 3747, 456, 75161, 7, 30839, 11782, 47, 25299,
47924, 18, 56101, 21, 6492, 6, 5, 2, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0]], dtype=int32),
'attention_mask': array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0]], dtype=int32)},
array([[0, 3, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0]], dtype=int32))
Secondly, I build model based on pretrained ‘jplu/tf-xlm-roberta-large’. I have done it with two a little bit different ways. The first one is to apply default TFXLMRobertaForTokenClassification:
model = TFXLMRobertaForTokenClassification.from_pretrained( 'jplu/tf-xlm-roberta-large',num_labels=len(labels))
And train it.
model.fit(tfdataset_train)
14041/14041 [==============================] - 2700s 191ms/step - loss: 0.4062 - accuracy: 0.9713
Finally evaluate it
benchmarks = model.evaluate(tfdataset_test, return_dict=True, batch_size=2)
3453/3453 [==============================] - 127s 36ms/step - loss: 0.4149 - accuracy: 0.9743
Looks like great accuracy, but in fact when I run the model on some example from the train data it just returns almost the same logits for each token! It predicts the non-named-entity class for every token!
res = model(next(tfdataset_train.batch(1).as_numpy_iterator())[0]).logits
for i in range(15):
print(res[0][i])
tf.Tensor(
[ 2.3191597 -2.450158 -4.1209745 -5.032844 -1.676107 -8.229665
-5.121443 -1.2029874 -4.7935658], shape=(9,), dtype=float32)
tf.Tensor(
[ 2.3191764 -2.4501915 -4.120973 -5.032861 -1.6761211 -8.229721
-5.121465 -1.2030123 -4.793587 ], shape=(9,), dtype=float32)
tf.Tensor(
[ 2.3191643 -2.450162 -4.120951 -5.032834 -1.6761119 -8.229675
-5.121448 -1.2029958 -4.7935658], shape=(9,), dtype=float32)
tf.Tensor(
[ 2.3191674 -2.4501798 -4.1209555 -5.032833 -1.67613 -8.229715
-5.1214595 -1.2029996 -4.793585 ], shape=(9,), dtype=float32)
tf.Tensor(
[ 2.319163 -2.4501457 -4.120945 -5.0328226 -1.6761187 -8.229664
-5.1214366 -1.202993 -4.7935667], shape=(9,), dtype=float32)
So, what can cause such behavior? How to solve this problem? | hey @Constantin, i think you might be missing a few preprocessing steps for token classification (i’m assuming that you’re doing something like named entity recognition).
if your input examples have already been split into words then add the is_split_into_words=True argument to the tokenizer
align the labels and tokens - see the tokenize_and_align_labels function in this tutorial: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb#scrollTo=jmNN-iX1KUHd
i’m not familiar with the tensorflow api, but you might also need to specify the DataCollatorForTokenClassification collator for this task | 0 |
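For reference, a sketch of that alignment step (it assumes a fast tokenizer, the tokenizer defined earlier in this thread, and examples with tokens / ner_tags columns as in the linked notebook):
def tokenize_and_align_labels(examples):
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, labels in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        aligned = []
        for word_id in word_ids:
            # special tokens get -100 so the loss ignores them;
            # sub-word pieces inherit the label of the word they belong to
            aligned.append(-100 if word_id is None else labels[word_id])
        all_labels.append(aligned)
    tokenized["labels"] = all_labels
    return tokenized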
huggingface | Beginners | BERT Multiclass Sequence Classification Index Error | https://discuss.huggingface.co/t/bert-multiclass-sequence-classification-index-error/5448 | Transformers: 4.4.2
Pytorch: 1.8.0
I am trying to fine-tune BERT for sequence classification on my dataset. The number of classes I have is 6. Here is some example code:
import torch
from torch.utils.data import Dataset
from transformers import BertForSequenceClassification, BertTokenizer, Trainer, TrainingArguments
import pandas as pd
class MyDataset(Dataset):
def __init__(self, csv_file: str):
self.df = pd.read_csv(csv_file, encoding='ISO-8859-1')
self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", padding_side='right', local_files_only=True)
self.label_list = self.df['label'].value_counts().keys().to_list()
def __len__(self) -> int:
return len(self.df)
def __getitem__(self, idx: int) -> str:
if torch.is_tensor(idx):
idx = idx.tolist()
text = self.df.iloc[idx, 1]
label = self.label_list.index(self.df.iloc[idx, 3])
return (text, label)
def data_collator(dataset_samples_list):
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", padding_side='right', local_files_only=True)
examples = [example[0] for example in dataset_samples_list]
encoded_results = tokenizer(examples, padding=True, truncation=True, return_tensors='pt',
return_attention_mask=True)
batch = {}
batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']])
batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']])
batch['labels'] = torch.stack([torch.tensor(example[1]) for example in dataset_samples_list])
return batch
train_data_obj = MyDataset('/path/to/train/data.csv')
eval_data_obj = MyDataset('/path/to/eval/data.csv')
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.config.num_labels = 6
training_args = TrainingArguments(
output_dir='/path/to/output/dir',
do_train=True,
do_eval=True,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
evaluation_strategy='epoch',
num_train_epochs=2,
save_steps=10,
gradient_accumulation_steps=4,
dataloader_drop_last=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_data_obj,
eval_dataset=eval_data_obj,
data_collator=data_collator
)
trainer.train()
trainer.save_model("/path/to/model/save/dir")
When I run this, I get the following error:
Traceback (most recent call last):
File "/path/to/my/project/scratch.py", line 9, in <module>
bert_processor.train()
File "/path/to/my/project/BertClassifierProcessor.py", line 156, in train
self.trainer.train()
File "/path/to/python/lib/python3.7/site-packages/transformers/trainer.py", line 1053, in train
tr_loss += self.training_step(model, inputs)
File "/path/to/python/lib/python3.7/site-packages/transformers/trainer.py", line 1443, in training_step
loss = self.compute_loss(model, inputs)
File "/path/to/python/lib/python3.7/site-packages/transformers/trainer.py", line 1475, in compute_loss
outputs = model(**inputs)
File "/path/to/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/path/to/python/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1526, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/path/to/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/path/to/python/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 1048, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/path/to/python/lib/python3.7/site-packages/torch/nn/functional.py", line 2690, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/path/to/python/lib/python3.7/site-packages/torch/nn/functional.py", line 2385, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 3 is out of bounds.
Any thoughts about how to correct this or what I may be doing wrong? Thanks in advance! | The problem may be in your model not having the right number of outputs, but you did not share how your model is created | 0 |
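Judging by the code in the question, the classification head was built with the default two labels because model.config.num_labels was only changed after loading, hence "Target 3 is out of bounds". A minimal sketch of the fix is to pass num_labels when loading, so the head is created with six outputs up front:
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)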
huggingface | Beginners | Trainer crashes during predict and with compute_metrics | https://discuss.huggingface.co/t/trainer-crashes-during-predict-and-with-compute-metrics/2136 | Hi,
I’m trying to train a XLNetForSequenceClassification model using Trainer to classify sentences into 3 categories. It works fine for the training and eval datasets during trainer.train() (loss reduces as expected), but if I try to use compute_metrics argument in my trainer or I try to obtain the predictions on the same eval dataset using trainer.predict(), it crashes with the following error :
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in predict(self, test_dataset)
1353 test_dataloader = self.get_test_dataloader(test_dataset)
1354
-> 1355 return self.prediction_loop(test_dataloader, description="Prediction")
1356
1357 def prediction_loop(
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in prediction_loop(self, dataloader, description, prediction_loss_only)
1442 eval_losses_gatherer.add_arrays(self._gather_and_numpify(losses_host, "eval_losses"))
1443 if not prediction_loss_only:
-> 1444 preds_gatherer.add_arrays(self._gather_and_numpify(preds_host, "eval_preds"))
1445 labels_gatherer.add_arrays(self._gather_and_numpify(labels_host, "eval_label_ids"))
1446
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in add_arrays(self, arrays)
328 # If we get new arrays that are too big too fit, we expand the shape fo the storage
329 self._storage = nested_expand_like(self._storage, arrays_shape[1], padding_index=self.padding_index)
--> 330 slice_len = self._nested_set_tensors(self._storage, arrays)
331 for i in range(self.world_size):
332 self._offsets[i] += slice_len
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in _nested_set_tensors(self, storage, arrays)
335 if isinstance(arrays, (list, tuple)):
336 for x, y in zip(storage, arrays):
--> 337 slice_len = self._nested_set_tensors(x, y)
338 return slice_len
339 assert (
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in _nested_set_tensors(self, storage, arrays)
335 if isinstance(arrays, (list, tuple)):
336 for x, y in zip(storage, arrays):
--> 337 slice_len = self._nested_set_tensors(x, y)
338 return slice_len
339 assert (
/usr/local/lib/python3.6/dist-packages/transformers/trainer_pt_utils.py in _nested_set_tensors(self, storage, arrays)
347 else:
348 storage[self._offsets[i] : self._offsets[i] + slice_len, : arrays.shape[1]] = arrays[
--> 349 i * slice_len : (i + 1) * slice_len
350 ]
351 return slice_len
ValueError: could not broadcast input array from shape (4565,16,768) into shape (916,16,768)
Here 916 is the size of the eval dataset and 16 is the batch_size, and my guess is that 4565 is the longest concatenated feature list?
My code is as follows :
class XLNetDataset(data.Dataset):
def __init__(self, dfObject):
self.dfObject = dfObject # Pandas dataframe
def __len__(self):
return self.dfObject.shape[0]
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
dfRows = self.dfObject.iloc[idx]
dfSentences = dfRows['sentence']
dfLabels = dfRows['p_typ']
return dfSentences, dfLabels
def XLNetCollatFunc(data):
sents = [elem[0] for elem in data]
labels = [elem[1] for elem in data]
encoded_result = xlTokenizer(sents, padding=True, truncation=True, max_length=128, return_tensors='pt', return_attention_mask=True)
output = {'input_ids': encoded_result['input_ids'],
'attention_mask': encoded_result['attention_mask'],
'token_type_ids': encoded_result['token_type_ids'],
'labels': torch.tensor(labels)}
return output
trainDataset = XLNetDataset(trainData) # trainData is pandas DF containing train sentences
testDataset = XLNetDataset(testData) # testData is pandas DF containing test sentences
xlTokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
xlNetModel = XLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels=3)
for param in xlNetModel.base_model.parameters():
param.requires_grad = False
trainArgs = TrainingArguments(
num_train_epochs = 1,
evaluation_strategy = 'epoch',
per_device_train_batch_size = 16,
per_device_eval_batch_size = 16
)
trainer = Trainer(
model = xlNetModel,
args = trainArgs,
train_dataset = trainDataset,
eval_dataset = testDataset,
data_collator = XLNetCollatFunc
)
trainer.train()
trainer.predict(testDataset)
I’m guessing the problem is somewhere with my custom data collator (I’m still a little unsure of the exact data format the data collator or trainer is expected to receive), but I can’t understand how it is able to produce training and evaluation loss during trainer.train() and not during the predict() call.
I’m using the latest API version (3.5) | There have been several issues around this. This should be solved in the latest release (pre-release to be more specific). You should be able to install it with
pip install --upgrade --pre transformers | 0 |
huggingface | Beginners | Best practice for using Transformers on GPU on EC2? | https://discuss.huggingface.co/t/best-practice-for-using-transformers-on-gpu-on-ec2/3440 | I’ve fired up an EC2 instance with a PyTorch deep learning AMI. The problem is transformers isn’t in the conda environment (someone should convince Amazon to add it!). I’ve tried installing it but that seems to be leading to other things breaking, so before I go down a giant rabbit hole, thought I’d ask here for the simplest way to get up and running with Transformers running on a GPU instance on EC2. Should I use the Huggingface Docker repo, for example? Thanks! | Hey @thecity2, I am currently using HF on a EC2 g4dn.xlarge with Deep Learning Base AMI (Ubuntu 18.04) Version 32.0.
The good part about the AMI is that you have CUDA ready. What I do is to create a new conda environment for HF, then I install PyTorch (checking the CUDA version with the command nvidia-smi) and HuggingFace in the new env simply by using pip.
Let me know if you have any more questions. | 0 |
huggingface | Beginners | How to load weights for Pre-trained model for Question Answering? | https://discuss.huggingface.co/t/how-to-load-weights-for-pre-trained-model-for-question-answering/5287 | What should I change in below snippet to get consistent and accurate output?
Code Snippet:
from transformers import ElectraTokenizer, TFElectraForQuestionAnswering, ElectraConfig
import tensorflow as tf
configuration = ElectraConfig()
tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
TFElect = TFElectraForQuestionAnswering(configuration)
#model = TFElectraForQuestionAnswering.from_pretrained('google/electra-small-discriminator')
model = TFElect.from_pretrained('google/electra-small-discriminator')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_dict = tokenizer(question, text, return_tensors='tf')
outputs = model(input_dict, return_dict=True)
#print(outputs)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
answer = ' '.join(all_tokens[tf.math.argmax(start_logits, 1)[0] : tf.math.argmax(end_logits, 1)[0]+1])
print(answer)
Output: I get different and incorrect output every time I run it so it seems it doesn’t have any pre-trained weights for the QnA tasks [Also getting warning as below].
Warning:
Some layers from the model checkpoint at google/electra-small-discriminator were not used when initializing TFElectraForQuestionAnswering: [‘discriminator_predictions’]
This IS expected if you are initializing TFElectraForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing TFElectraForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some layers of TFElectraForQuestionAnswering were not initialized from the model checkpoint at google/electra-small-discriminator and are newly initialized: [‘qa_outputs’]
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. | You are using a model that is not fine-tuned on question answering, so it’s initialized with random weights in the question answering head, which is why you get the warning, and get incorrect results that change at every run.
You should pick a model on the hub fine-tuned on SQuAD (see the list here), for instance distilbert-base-cased-distilled-squad. | 0
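For example, a minimal sketch using that checkpoint with the question-answering pipeline, which also takes care of the span post-processing:
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(question="Who was Jim Henson?", context="Jim Henson was a nice puppet")
print(result["answer"])  # expected to be a span such as "a nice puppet"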
huggingface | Beginners | How can I use the models provided in huggingface.co/models? | https://discuss.huggingface.co/t/how-can-i-use-the-models-provided-in-huggingface-co-models/5352 | Hi,
How can I use the models provided on huggingface.co/models? For example, if I want to generate the same output as in the example for the hosted inference API for roberta-large-mnli · Hugging Face, how would I get the same result for a text I input?
I see that this is written in “use in transformers”:
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
But I don't know how to apply the downloaded model to my own text.
I've tried giving this to pipeline, classifier = pipeline('roberta-large-mnli'), but it's not a recognized model.
Any help here would be appreciated.
Thanks | hey @farazk86, you almost had it right with the pipeline - as described in the docs, you also need to provide the task with the model. in this case we’re dealing with text classification (entailment), so we can use the sentiment-analysis task as follows:
from transformers import pipeline
pipe = pipeline(task="sentiment-analysis", model="roberta-large-mnli")
pipe("I like you. </s></s> I love you.") # returns [{'label': 'NEUTRAL', 'score': 0.5168218612670898}]
If you want to see how to generate the predictions using the tokenizer and model directly, I suggest checking out the MNLI task in this tutorial: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb | 0
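For completeness, a small sketch of the direct tokenizer + model route for the same premise/hypothesis pair:
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

inputs = tokenizer("I like you.", "I love you.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])   # CONTRADICTION / NEUTRAL / ENTAILMENT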
huggingface | Beginners | Distilgpt2 pre-training configuration | https://discuss.huggingface.co/t/distilgpt2-pre-training-configuration/5335 | Hi everyone!
As part of our research work, we are attempting to reproduce distilgpt2. We downloaded OpenWebText, binarized it as indicated here, extracted the student's weights, and used the (almost) same configuration as in research_projects/distillation:
python -m torch.distributed.launch \
--nproc_per_node=$N_GPU_NODE \
--nnodes=$N_NODES \
--node_rank $NODE_RANK \
--master_addr $MASTER_ADDR \
--master_port $MASTER_PORT \
train.py \
--fp16 \
--force \
--gpus $WORLD_SIZE \
--student_type gpt2 \
--student_config training_configs/distilgpt2.json \
--student_pretrained_weights ./student/pytorch_model.bin \
--teacher_type gpt2 \
--teacher_name gpt2 \
--alpha_ce 5.0 --alpha_cos 1.0 --alpha_clm 0.5 \
--freeze_pos_embs \
--dump_path my_dir \
--data_file data/owt.pickle \
--token_counts data/token_owt.pickle
We kept the default values for the rest of hyper-parameters. However, the model is not converging (perplexity over 80 for wikitext103 test set). Can anyone confirm whether the settings above are correct or not? Thanks a lot! | cc @VictorSanh | 0 |
huggingface | Beginners | Having Multiple [MASK] tokens in a sentence | https://discuss.huggingface.co/t/having-multiple-mask-tokens-in-a-sentence/3493 | I would like to have multiple [MASK] tokens in a sentence but I get an error when I try to run it.
What do I need to change to fix it?
Instead of: text = "The capital of France, " + tokenizer.mask_token + “, contains the Eiffel Tower.”
I need: text = "The capital of France, " + tokenizer.mask_token + ", contains the Eiffel " + tokenizer.mask_token
from transformers import BertTokenizer, BertForMaskedLM
from torch.nn import functional as F
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased', return_dict = True)
text = "The capital of France, " + tokenizer.mask_token + ", contains the Eiffel Tower."
input = tokenizer.encode_plus(text, return_tensors = "pt")
mask_index = torch.where(input["input_ids"][0] == tokenizer.mask_token_id)
output = model(**input)
logits = output.logits
softmax = F.softmax(logits, dim = -1)
mask_word = softmax[0, mask_index, :]
top_10 = torch.topk(mask_word, 10, dim = 1)[1][0]
for token in top_10:
word = tokenizer.decode([token])
new_sentence = text.replace(tokenizer.mask_token, word)
print(new_sentence)
I've used the code from here
I've already looked at Multiple Mask Tokens but I want the output to be a sentence.
I hope you can help me
Kind regards
Linda | import torch
sentence = "The capital of France [MASK] contains the Eiffel [MASK]."
token_ids = tokenizer.encode(sentence, return_tensors='pt')
# print(token_ids)
token_ids_tk = tokenizer.tokenize(sentence, return_tensors='pt')
print(token_ids_tk)
masked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero()
masked_pos = [mask.item() for mask in masked_position ]
print (masked_pos)
with torch.no_grad():
output = model(token_ids)
last_hidden_state = output[0].squeeze()
print ("\n\n")
print ("sentence : ",sentence)
print ("\n")
list_of_list =[]
for mask_index in masked_pos:
mask_hidden_state = last_hidden_state[mask_index]
idx = torch.topk(mask_hidden_state, k=100, dim=0)[1]
words = [tokenizer.decode(i.item()).strip() for i in idx]
list_of_list.append(words)
print (words)
best_guess = ""
for j in list_of_list:
best_guess = best_guess+" "+j[0] | 0 |
huggingface | Beginners | Retrain/reuse fine-tuned models on different set of labels | https://discuss.huggingface.co/t/retrain-reuse-fine-tuned-models-on-different-set-of-labels/346 | Hello,
I am wondering is it possible to reuse or retrain a fine-tuned model with a new set of labels(the set of labels contain new labels or the new set of labels is a subset of the labels used to fine-tune the model)?
What I try to do is fine-tune pre-trained models for a task (e.g. NER) in the domain free dataset, then reuse/retrain this fine-tuned model to do a similar task but in a more specific domain (e.g. NER for healthcare), thus in this specific domain, the set of labels may not be the same.
I already tried to fine-tune a BERT model to do NER on WNUT17 data based on the token classification example in the Transformers GitHub repo. After that, I tried to retrain the fine-tuned model by adding a new label and providing training data that has this label, and the training failed with the error
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([13, 1024]) from checkpoint, the shape in current model is torch.Size([15, 1024]).
size mismatch for classifier.bias: copying a param with shape torch.Size([13]) from checkpoint, the shape in current model is torch.Size([15]).
Is it possible to do this with Transformers, and if so, how? Thank you in advance! | AFAIK it is not currently possible to reuse the fine-tuned model and retrain it on a new set of labels. A workaround for this is to fine-tune a pre-trained model on the whole (old + new) data with a superset of the old + new labels. Is this true?
I know it’s more of an ML question than a specific question toward this package, but I will really appreciate it if anyone can refer me to some reference that explains this. Thank you in advance. | 0 |
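Worth noting for readers on newer transformers releases: from_pretrained now accepts ignore_mismatched_sizes, which lets you reload the fine-tuned checkpoint and re-initialize just the classification head for a new label set (the path below is hypothetical; the 13 vs 15 sizes mirror the error in this thread):
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "path/to/finetuned-wnut17-model",   # hypothetical local checkpoint with a 13-way head
    num_labels=15,                      # size of the new label set
    ignore_mismatched_sizes=True,       # drop the old head instead of raising a size-mismatch error
)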
huggingface | Beginners | Best practical course for NLP | https://discuss.huggingface.co/t/best-practical-course-for-nlp/5202 | Hi,
I am a fresher here. Wish to learn and then contribute. What is the right course (preferably video) to start with? Please suggest
Regards
Puneet | Stanford CS224N is one of my favorites. | 0 |
huggingface | Beginners | Read data from hdfs | https://discuss.huggingface.co/t/read-data-from-hdfs/5339 | Hi, I’m new to huggingface. For my current exeperience, the code I’ve read about shows that the data will be downloaded to local before next steps.
Is it possible to read streaming data from a remote source like HDFS? Since my training data may be huge, it would exhaust my disk storage to download it all.
Will be thankful for any hint. | hey @gfork, i've never tried it myself but the datasets library lets you process data with Apache Beam: Beam Datasets — datasets 1.5.0 documentation
perhaps that is suitable for your use case? | 0 |
huggingface | Beginners | Generate: How to output scores? | https://discuss.huggingface.co/t/generate-how-to-output-scores/5327 | The documentation 3 states that it is possible to obtain scores with model.generate via return_dict_in_generate / output_scores.
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
However, when I add one of these to my model.generate, like
model.generate(input_ids,
max_length=85,
num_beams=5,
output_scores=True,
)
it returns the following error:
TypeError: forward() got an unexpected keyword argument ‘output_scores’
My model is of type, type(model):
transformers.models.bart.modeling_bart.BartForConditionalGeneration | Are you sure you are on the latest version of Transformers? | 0 |
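Once you are on a recent enough version, the call and the returned fields look roughly like this (the scores come back on the output object, not as extra tensors):
generation_output = model.generate(
    input_ids,
    max_length=85,
    num_beams=5,
    return_dict_in_generate=True,
    output_scores=True,
)
print(generation_output.sequences.shape)    # generated token ids
print(generation_output.sequences_scores)   # final beam scores (beam search only)
print(len(generation_output.scores))        # one tensor of token scores per generation step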
huggingface | Beginners | Saving and Loading SimpleTransformer model in docker container | https://discuss.huggingface.co/t/saving-and-loading-simpletransformer-model-in-docker-container/5320 | I am training a XLM-roberta model using simple transfomer and saving the model by
torch.save(model, 'classifier')
But when I am trying to load the same model in different environment,
model = torch.load('classifier')
I am facing this error
OSError: Not found: "/home/jupyter/.cache/huggingface/transformers/9df9ae4442348b73950203b63d1b8ed2d18eba68921872aee0c3a9d05b9673c6.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8": No such file or directory Error #2 | hey @swapnil since this question is about another library (simpletransformers), you might be able to get help by opening an issue there: GitHub - ThilinaRajapakse/simpletransformers: Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI 9 | 0 |
huggingface | Beginners | Resources for using custom models with trainer | https://discuss.huggingface.co/t/resources-for-using-custom-models-with-trainer/4151 | Hello, I am newer to HuggingFace and wanted to create my own nn.Module class that used RoBERTa as an encoder. I am also hoping that I would be able to use it with HuggingFace’s Trainer class. Looking at the source code for Trainer, it looks like my model’s forward only needs to return an object with ouputs[loss]. Is there anything else I need to do? Are there any resources/guides/tutorials for creating your own model? | Hi @Gabe, I’m not aware of any dedicated tutorials for building custom models, but my suggestion would be to subclass PreTrainedModel (check out how e.g. BertForSequenceClassification is implemented) or one of the existing model classes. This has several advantages to using nn.Module:
You get all the helper methods like from_pretrained for free
Your custom model will play nice with the Trainer
Depending on your use case, you can also override methods directly in the Trainer - see here for a list of the available methods. | 0
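A rough sketch of what such a subclass can look like in recent transformers 4.x (the pooling and head below are illustrative choices, not a prescribed recipe):
import torch.nn as nn
from transformers import RobertaModel
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel

class RobertaWithCustomHead(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.roberta = RobertaModel(config)
        self.head = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        outputs = self.roberta(input_ids, attention_mask=attention_mask)
        logits = self.head(outputs.last_hidden_state[:, 0])   # first-token pooling
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits, labels)
        # Returning a dict with a "loss" key is enough for the Trainer.
        return {"loss": loss, "logits": logits}

model = RobertaWithCustomHead.from_pretrained("roberta-base", num_labels=3)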
huggingface | Beginners | T5 forward pass versus generate, latter outputs non-sense | https://discuss.huggingface.co/t/t5-forward-pass-versus-generate-latter-outputs-non-sense/4831 | Hi, at training, I’m using the forward pass and batch_decode on the logits to get the decoded output:
outputs = model(
input_ids,
attention_mask,
dec_input_ids,
dec_attention_mask,
labels=dec_input_ids,
)
loss, logits = outputs.loss, outputs.logits
decoded_output = tokenizer.batch_decode(torch.argmax(outputs.logits, dim=2).tolist(), skip_special_tokens=True)
And decoded_output seems to comply with what I trained the model on:
bread dough ; side surface
However, I’ve noticed that using model.generate produces non-sense:
generated = model.generate(input_ids)
tokenizer.decode(generated[0], skip_special_tokens=True)
table table table table table table table table table table table table table table table table table table
Note that it is the same model instance, as well as the same input_ids (this way it can’t be related to saving/loading issues, and I guess it also eliminates the possibility of encoding/tokenization issues for input_ids).
Background: model is of class T5ForConditionalGeneration and initialized with t5-small.
What’s the problem here? I’ve used the EncoderDecoderModel in the very same way, and there, model.generate works as expected. | It may help to know the exact structure of input_ids (e.g., dimensions) and what is the context for each batch element? | 0 |
huggingface | Beginners | Shifting ids to the right when training GPT-2 on text generation? | https://discuss.huggingface.co/t/shifting-ids-to-the-right-when-training-gpt-2-on-text-generation/5308 | Hello,
I am slightly confused how to exactly prepare the training data for GPT-2 text generation.
In order to train you have to provide input_ids (inputs) and labels (outputs). Both are supposed to be lists of token indices. This is the easy part.
Question: Are inputs_ids and labels supposed to be absolutely identical, or are the labels supposed to be input_ids shifted one element to the right?
Best,
Tristan | During training, the labels are shifted inside the models (see the docs), so you should pass labels equal to input_ids. | 0
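A small sketch of how this is usually wired up for GPT-2 fine-tuning: with DataCollatorForLanguageModeling(mlm=False) the collator copies input_ids into labels for you, and the one-position shift then happens inside the model's forward:
from transformers import DataCollatorForLanguageModeling, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 has no pad token by default

# mlm=False means causal LM: labels are a copy of input_ids (padding masked out),
# so you never shift them yourself.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)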
huggingface | Beginners | How to input Scibert in run_mlm? (Is it possible?) | https://discuss.huggingface.co/t/how-to-input-scibert-in-run-mlm-is-it-possible/5206 | Hi there,
I’m trying to further train from the scibert_scivocab_uncased model, using the run_mlm script. I’ve had no issues further training from BERT_base and RoBERTa but I’m a bit stuck with sciBERT.
SciBERT is not one of the basic models you can directly call from run_mlm.py
So I downloaded the model from allenai/scibert_scivocab_uncased at main
to run:
"python myrun_mlm.py "
"--model_name_or_path=scibert_scivocab_uncased "
But the tokenizer files (tokenizer.json, tokenizer_config.json,…) are missing so it’s not working. I can’t find the tokenizers files in the allenAI scibert git repo either.
What am I missing there?
Thanks for the help! | Alternative solution: not using the run-mlm.py script, but a code inspired from: lordtt13/COVID-SciBERT · Hugging Face 1 / word-embeddings/COVID-SciBERT.ipynb at master · lordtt13/word-embeddings · GitHub 1 Thanks @lordtt13 !!
with:
tokenizer = transformers.AutoTokenizer.from_pretrained('allenai/scibert_scivocab_uncased')
model = transformers.AutoModelWithLMHead.from_pretrained('allenai/scibert_scivocab_uncased').to('cuda')
Still would be interesting to know if the integration of scibert as a pre-trained model is planned for run_mlm.py | 0 |
huggingface | Beginners | Is that possible there is different output for the same model tested online and tested locally? | https://discuss.huggingface.co/t/is-that-possible-there-is-different-output-for-the-same-model-tested-online-and-tested-locally/5034 | I am focusing on this model: mfeb/albert-xxlarge-v2-squad2 · Hugging Face
It seems like the result is different when the model is tested online versus locally:
If I directly type context/question in the webpage, it can generate a good result.
But if I use transformer code
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("mfeb/albert-xxlarge-v2-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("mfeb/albert-xxlarge-v2-squad2")
It generates a different result.
But they are from the same model, mfeb/albert-xxlarge-v2-squad2 | Still haven't figured it out | 0
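One thing worth checking: as far as I know the hosted widget runs the full question-answering pipeline (with its pre- and post-processing), while the snippet above only loads the raw model. A sketch that mirrors the widget more closely (the question and context here are just placeholders):
from transformers import pipeline

qa = pipeline("question-answering", model="mfeb/albert-xxlarge-v2-squad2")
print(qa(question="Who was Jim Henson?", context="Jim Henson was a nice puppet"))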