docs | category | thread | href | question | context | marked
---|---|---|---|---|---|---
huggingface | Beginners | Training stops when I try Fine-Tune XLSR-Wav2Vec2 for low-resource ASR | https://discuss.huggingface.co/t/training-stops-when-i-try-fine-tune-xlsr-wav2vec2-for-low-resource-asr/8981 | Hi,
I'm learning Wav2Vec2 according to this blog post:
huggingface.co
Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers
I downloaded the ipynb file and tried to run it locally.
Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_🤗_Transformers.ipynb
Everything looks fine, but when I run trainer.train() it seems to stop after a while, and it generates some log files under the folder wav2vec2-large-xlsr-turkish-demo. I attach a screenshot below:
(Screenshot 2021-08-05 17-05-36, 1063×410, 35 KB)
I don't know how to open the file events.out.tfevents.1628152300.tq-sy.129248.2. What is the problem, and how can I debug it? Please help.
Thanks a lot. | It probably stops because you don't have enough resources to run the script. I recommend trying to run it on Google Colab. | 0 |
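A side note on the asker's question about opening the events.out.tfevents.* file: these are TensorBoard event logs, normally viewed by pointing TensorBoard at the output folder (tensorboard --logdir wav2vec2-large-xlsr-turkish-demo). A minimal sketch of reading one programmatically, assuming the tensorboard package is installed; the scalar tag used below is an assumption, pick one from the printed Tags() output:
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# point the accumulator at the output folder that contains the events.out.tfevents.* file
acc = EventAccumulator("wav2vec2-large-xlsr-turkish-demo")
acc.Reload()
print(acc.Tags())  # shows which scalar/histogram tags were actually logged

for event in acc.Scalars("train/loss"):  # "train/loss" is an assumed tag name
    print(event.step, event.value)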
huggingface | Beginners | Wav2Vec2ForCTC and Wav2Vec2Tokenizer | https://discuss.huggingface.co/t/wav2vec2forctc-and-wav2vec2tokenizer/3587 | Having installed transformers and trying:
import transformers
import librosa
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC
from transformers import Wav2Vec2Tokenizer
#load model and tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
I get:
ImportError Traceback (most recent call last)
in
3 import soundfile as sf
4 import torch
----> 5 from transformers import Wav2Vec2ForCTC
6 from transformers import Wav2Vec2Tokenizer
7
ImportError: cannot import name 'Wav2Vec2ForCTC' from 'transformers' (c:\python\python37\lib\site-packages\transformers\__init__.py)
How do I install/get Wav2Vec2ForCTC and Wav2Vec2Tokenizer? | This probably means you don't have the latest version. You should check your version of Transformers with
import transformers
print(transformers.__version__)
and if you don’t see at least 4.3.0, you will need to upgrade your install. | 0 |
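Once the upgrade has gone through (for example with pip install --upgrade transformers), the imports from the question should resolve. A minimal sketch of how they are typically used, assuming a 16 kHz mono file named sample.wav (the filename is a placeholder):
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer

tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, sample_rate = sf.read("sample.wav")  # placeholder path; the model expects 16 kHz audio
input_values = tokenizer(speech, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(tokenizer.batch_decode(predicted_ids)[0])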
huggingface | Beginners | Trainer .evaluate() method returns one less prediction, but training runs fine (GPT-2 fine-tuning) | https://discuss.huggingface.co/t/trainer-evaluate-method-returns-one-less-prediction-but-training-runs-fine-gpt-2-fine-tuning/12846 | I’ve been breaking my head about this bug in my code for two days now. I have a set of german texts that I want to classify into one of 10 classes. The training runs smoothly, I have problems with the evaluation. Obviously I don’t share the whole texts, let me know if that is required, but they are confidential, so I’d have to make a mock example.
Here is the code I use to get my data:
train_texts, val_texts, train_labels, val_labels = train_test_split(texts, labels, random_state=111, test_size=0.1)
print("TRAIN TEXTS LENGTH", len(train_texts))
print("VAL TEXTS LENGTH", len(val_texts))
print("TRAIN LABELS LENGTH", len(train_labels))
print("VAL LABELS LENGTH", len(val_labels))
TRAIN TEXTS LENGTH 36
VAL TEXTS LENGTH 4
TRAIN LABELS LENGTH 36
VAL LABELS LENGTH 4
Here is the code I have. First I prepare the model:
###########################
# Prepare model
###########################
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2", model_max_len = 300)
tokenizer.padding_side = "left" # GPT-2 must be padded to the left
tokenizer.pad_token = tokenizer.eos_token
# Model
config = GPT2Config.from_pretrained(pretrained_model_name_or_path="benjamin/gerpt2",
id2label = id2label, #dictionary of {'id': 'label'}
label2id = label2id) #dictionary of {'label': 'id'}
model = AutoModelForSequenceClassification.from_pretrained("benjamin/gerpt2", num_labels = 10)
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = model.config.eos_token_id
Then I create dataset and tokenize text (see custom defined classes and function):
def tokenize_text(text, tokenizer):
'''
Tokenizes text using a loaded tokenizer
'''
return tokenizer(text, max_length=300, truncation=True, padding=True)
class CustomDataset(torch.utils.data.Dataset):
'''
Defines a Dataset class to feed the model.
'''
def __init__(self, encodings, labels=None):
'''
Initializes the class with the preprocessed text (encodings), labels and number of examples.
'''
self.encodings = encodings
self.labels = labels
self.n_examples = len(self.labels)
def __getitem__(self, idx):
'''
Defines a method that pulls a single item with its idx from the dataset.
'''
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} # get from dictionary
if self.labels:
item["labels"] = torch.tensor(self.labels[idx])
return item
def __len__(self):
'''
Defines a method that returns the length of the dataset.
'''
return len(self.encodings["input_ids"])
###########################
# Encode text
###########################
train_encodings = tokenize_text(train_texts, tokenizer)
val_encodings = tokenize_text(val_texts, tokenizer)
###########################
# Create dataset objects
###########################
train_dataset = CustomDataset(train_encodings, train_labels)
val_dataset = CustomDataset(val_encodings, val_labels)
Now I created my own Trainer class and compute metrics because I want to use the weight argument in my loss function that I defined:
###########################
# Training arguments
###########################
def compute_metrics(pred):
'''
Calculates metrics to evaluate model.
'''
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
print('\npred.predictions:\n', pred.predictions)
print('\npred:\n', pred)
print()
print('y_true', labels)
print('y_hat', preds)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted')
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
# Model Artifacts should fall into this folder (or sub folders)
artifacts_out_dir = './outputs'
training_args = TrainingArguments(
output_dir=artifacts_out_dir,
# checkpoint saving strategy
overwrite_output_dir=True,
evaluation_strategy = 'epoch',
# model hyperparameters
num_train_epochs=1,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=10,
weight_decay=0.01,
# evaluation strategy and logging
logging_dir='./logs/tensorboard',
logging_steps=2
)
class TrainerCustom(transformers.Trainer):
def __init__(self, weights, *args, **kwargs):
super().__init__(*args, **kwargs)
# initialize weights from argument
self.weights = weights
def compute_loss(self, model, inputs, return_outputs=False):
"""
How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
"""
labels = inputs.pop("labels")
outputs, _ = model(**inputs, return_dict=False) # returns tuple, that is why '_'
# set to same device as labels
self.weights = self.weights.to(labels.device)
print("LABELS:", labels)
print('LABELS device:', labels.device)
print("WEIGHTS:", self.weights)
print('WEIGHTS device:', self.weights.device)
# Save past state if it exists
if self.args.past_index >= 0:
self._past = outputs[self.args.past_index]
cross_entropy_loss_func = torch.nn.CrossEntropyLoss(weight = self.weights)
print('OUTPUTS:', outputs)
loss = cross_entropy_loss_func(outputs, labels.long()) # set labels to TensorLong, got error before
print('LOSS', loss)
return (loss, outputs) if return_outputs else loss
Now I obviously run just debug mode (1 epoch, 36 train examples, 4 val examples).
###########################
# Trainer class
###########################
trainer = TrainerCustom(
model=model,
args=training_args,
compute_metrics=compute_metrics, #own defined function see above
train_dataset=train_dataset,
eval_dataset=val_dataset,
weights = torch.tensor([1.2000, 0.9000, 1.2000, 0.9000, 0.9000, 0.9000, 0.9000, 1.8000, 0.9000, 0.9000]) # 10 weights for 10 classes
)
Now I run training (evaluation happens as part of training due to arguments I set):
print('\nRunning training...\n')
trainer.train()
However, the issue is that the .evaluate() function returns the correct number of labels but one prediction fewer than the batch size - hence my lengths don't match and I get an error. See below. I printed everything to locate where the bug is and looked into the source code, but I just can't find where I'm making a mistake.
# TRAINING STEP PRINTS - EVERYTHING IS OK
LABELS: tensor([3, 9, 5, 9, 0, 6, 4, 0], dtype=torch.int32)
LABELS device: cpu
WEIGHTS: tensor([1.2000, 0.9000, 1.2000, 0.9000, 0.9000, 0.9000, 0.9000, 1.8000, 0.9000,
0.9000])
WEIGHTS device: cpu
OUTPUTS: tensor([[-0.0889, 0.2450, 0.3983, 0.1111, -0.1511, -0.0520, -0.3428, 0.2376,
-0.1851, -0.5946],
[ 0.3004, 0.1739, 0.4019, 0.1611, -0.2102, -0.1775, -0.0751, 0.4822,
-0.3875, -0.5656],
[ 0.2611, 0.1720, 0.0378, 0.0174, -0.1998, -0.1694, 0.0667, 0.7277,
-0.0311, -0.4646],
[ 0.3728, 0.6940, 0.0792, 0.1359, -0.0296, 0.2614, -0.1489, 0.5426,
-0.0150, -0.7283],
[ 0.3806, 0.3427, 0.2283, -0.0392, -0.0176, -0.2239, -0.1351, 0.8266,
-0.4894, -0.5863],
[ 0.0585, 0.3695, 0.5742, -0.7659, -0.1160, -0.2615, 0.1515, 1.7408,
-0.7622, -1.0512],
[-0.1374, 0.0696, 0.1904, 0.2616, 0.1822, -0.3327, -0.4270, 0.6404,
-0.2022, -0.5745],
[ 0.4530, 0.3680, 0.4304, -0.4875, -0.4661, -0.2198, 0.0557, 0.4714,
-0.3884, -0.2292]], grad_fn=<IndexBackward>)
LOSS tensor(2.4015, grad_fn=<NllLossBackward>)
# EVAL STEP
LABELS: tensor([0, 2, 7, 7], dtype=torch.int32)
LABELS device: cpu
WEIGHTS: tensor([1.2000, 0.9000, 1.2000, 0.9000, 0.9000, 0.9000, 0.9000, 1.8000, 0.9000,
0.9000])
WEIGHTS device: cpu
OUTPUTS: tensor([[ 0.1938, -0.2064, 0.3387, 0.0504, 0.0684, -0.2160, -0.2775, 0.4145,
-0.2933, -0.1107],
[ 0.1445, 0.0269, 0.1467, 0.1527, -0.2904, 0.0661, -0.2611, 0.5330,
-0.0186, -0.4184],
[-0.0918, -0.0234, 0.2311, 0.1614, -0.1304, -0.1700, -0.1917, 0.2001,
-0.3553, -0.2138],
[-0.0918, -0.0234, 0.2311, 0.1614, -0.1304, -0.1700, -0.1917, 0.2001,
-0.3553, -0.2138]])
LOSS tensor(2.1039)
pred.predictions:
[[ 0.14445858 0.02692143 0.14672504 0.1527456 -0.29039353 0.06611381
-0.26105392 0.5329592 -0.01855119 -0.41837007]
[-0.09184867 -0.02340093 0.23106857 0.16139469 -0.13035089 -0.17000316
-0.19174051 0.20007178 -0.3553058 -0.2137518 ]
[-0.09184867 -0.02340093 0.23106857 0.16139469 -0.13035089 -0.17000316
-0.19174051 0.20007178 -0.3553058 -0.2137518 ]]
pred:
EvalPrediction(predictions=array([[ 0.14445858, 0.02692143, 0.14672504, 0.1527456 , -0.29039353,
0.06611381, -0.26105392, 0.5329592 , -0.01855119, -0.41837007],
[-0.09184867, -0.02340093, 0.23106857, 0.16139469, -0.13035089,
-0.17000316, -0.19174051, 0.20007178, -0.3553058 , -0.2137518 ],
[-0.09184867, -0.02340093, 0.23106857, 0.16139469, -0.13035089,
-0.17000316, -0.19174051, 0.20007178, -0.3553058 , -0.2137518 ]],
dtype=float32), label_ids=array([0, 2, 7, 7]))
y_true [0 2 7 7]
y_hat [7 2 2]
As you can see, y_hat contains one predicted class fewer. Not sure why - the bug must be in the step above, as I only get three rows of class scores instead of 4 (in the EvalPrediction object).
Here is the error message:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_8820/2277851298.py in <module>
4
5 print('\nRunning training...\n')
----> 6 trainer.train()
~\Anaconda3\envs\mailbot\lib\site-packages\transformers\trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1405
1406 self.control = self.callback_handler.on_epoch_end(args, self.state, self.control)
-> 1407 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
1408
1409 if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
~\Anaconda3\envs\mailbot\lib\site-packages\transformers\trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)
1512 metrics = None
1513 if self.control.should_evaluate:
-> 1514 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
1515 self._report_to_hp_search(trial, epoch, metrics)
1516
~\Anaconda3\envs\mailbot\lib\site-packages\transformers\trainer.py in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
2156 prediction_loss_only=True if self.compute_metrics is None else None,
2157 ignore_keys=ignore_keys,
-> 2158 metric_key_prefix=metric_key_prefix,
2159 )
2160
~\Anaconda3\envs\mailbot\lib\site-packages\transformers\trainer.py in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
2390 # Metrics!
2391 if self.compute_metrics is not None and all_preds is not None and all_labels is not None:
-> 2392 metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
2393 else:
2394 metrics = {}
~\AppData\Local\Temp/ipykernel_8820/1156443582.py in compute_metrics(pred)
120 print('y_hat', preds)
121
--> 122 precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted')
123 acc = accuracy_score(labels, preds)
124 return {
~\Anaconda3\envs\mailbot\lib\site-packages\sklearn\metrics\_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)
1532 if beta < 0:
1533 raise ValueError("beta should be >=0 in the F-beta score")
-> 1534 labels = _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)
1535
1536 # Calculate tp_sum, pred_sum, true_sum ###
~\Anaconda3\envs\mailbot\lib\site-packages\sklearn\metrics\_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)
1336 raise ValueError("average has to be one of " + str(average_options))
1337
-> 1338 y_type, y_true, y_pred = _check_targets(y_true, y_pred)
1339 # Convert to Python primitive type to avoid NumPy type / Python str
1340 # comparison. See https://github.com/numpy/numpy/issues/6784
~\Anaconda3\envs\mailbot\lib\site-packages\sklearn\metrics\_classification.py in _check_targets(y_true, y_pred)
82 y_pred : array or indicator matrix
83 """
---> 84 check_consistent_length(y_true, y_pred)
85 type_true = type_of_target(y_true)
86 type_pred = type_of_target(y_pred)
~\Anaconda3\envs\mailbot\lib\site-packages\sklearn\utils\validation.py in check_consistent_length(*arrays)
331 raise ValueError(
332 "Found input variables with inconsistent numbers of samples: %r"
--> 333 % [int(l) for l in lengths]
334 )
335
ValueError: Found input variables with inconsistent numbers of samples: [4, 3]
PYTORCH VERSION: 1.7.1+cpu
TRANSFORMERS VERSION: 4.12.3 | I have somehow solved the issue - not sure why, but it runs when my custom trainer is the following. I also updated torch to the newest version, i.e. 1.10
class TrainerCustom(transformers.Trainer):
# def __init__(self):
# super().__init__()
def __init__(self, weights, *args, **kwargs):
super().__init__(*args, **kwargs)
# initialize weights from argument
self.weights = weights
def compute_loss(self, model, inputs, return_outputs=False):
"""
How the loss is computed by Trainer. By default, all models return the loss in the first element.
Subclass and override for custom behavior.
"""
labels = inputs.pop("labels").long()
outputs = model(**inputs)
print("LABELS:", labels)
# Save past state if it exists
if self.args.past_index >= 0:
self._past = outputs[self.args.past_index]
loss_func = torch.nn.CrossEntropyLoss(weight = self.weights.to(labels.device))
loss = loss_func(outputs.get('logits'), labels)
print("loss:", loss)
return (loss, outputs) if return_outputs else loss | 0 |
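A hedged explanation of why the original version dropped one prediction per eval batch (an inference from the Trainer's prediction_step behaviour, not something stated in the thread): when compute_loss returns (loss, outputs) and outputs is a bare logits tensor rather than a ModelOutput/dict, the Trainer strips what it assumes is the loss with outputs[1:], which instead slices away the first row of the batch:
import torch

# schematic of the suspected failure mode, not the Trainer source itself
logits = torch.randn(4, 10)   # batch of 4 examples, 10 classes
stripped = logits[1:]         # the first element is treated as the loss and discarded
print(stripped.shape)         # torch.Size([3, 10]) -> one prediction lost per eval batch
Returning the full model output (a ModelOutput dict), as in the fixed trainer above, sidesteps that slicing.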
huggingface | Beginners | Extremely confusing or non-existent documentation about the Seq2Seq trainer | https://discuss.huggingface.co/t/extremely-confusing-or-non-existent-documentation-about-the-seq2seq-trainer/12880 | I’ve been trying to train a model to translate database metadata + human requests into valid SQL.
Initially, I used a wiki SQL base + a custom pytorch script (worked fine) but I decided I want to train my own from scratch and I’d better go with the “modern” method of using a trainer.
The code I currently have is:
self.tokenizer = T5Tokenizer.from_pretrained("t5-small")
self.model = T5ForConditionalGeneration.from_pretrained("t5-small")
print('Creating datasets')
train_dataset = Dataset.from_dict({
'request': [x['prompt'] for x in data[:int(len(data) * 0.8)]],
'label': [x['completion'] for x in data[:int(len(data) * 0.8)]]
})
eval_dataset = Dataset.from_dict({
'request': [x['prompt'] for x in data[int(len(data) * 0.8):]],
'label': [x['completion'] for x in data[int(len(data) * 0.8):]]
})
print('Creating and starting trainer')
# Initialize our Trainer
trainer = Seq2SeqTrainer(
model=self.model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
tokenizer=self.tokenizer,
compute_metrics=self.compute_metrics,
args=Seq2SeqTrainingArguments(
output_dir='hft',
overwrite_output_dir=True,
do_train=True,
do_eval=True,
num_train_epochs=20,
generation_max_length=512,
)
)
trainer.evaluate()
trainer.train()
trainer.evaluate()
Where the prompt and completion keys are both strings.
This simply yields the error:
***** Running Evaluation *****
Num examples = 20
Batch size = 8
Traceback (most recent call last):
File "itg/t5take5.py", line 71, in <module>
t5t5.train(sparc_to_prompt())
File "itg/t5take5.py", line 59, in train
trainer.evaluate()
File "/home/george/.local/lib/python3.8/site-packages/transformers/trainer_seq2seq.py", line 70, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/home/george/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2151, in evaluate
output = eval_loop(
File "/home/george/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2313, in evaluation_loop
for step, inputs in enumerate(dataloader):
File "/home/george/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/george/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/george/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/george/.local/lib/python3.8/site-packages/transformers/data/data_collator.py", line 246, in __call__
batch = self.tokenizer.pad(
File "/home/george/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2723, in pad
raise ValueError(
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
This is rather confusing. I've tried renaming the label column to something else (SQL), but this just results in training failing and evaluation doing nothing, with the logs:
***** Running Evaluation *****
Num examples = 0
Batch size = 8
The following columns in the training set don't have a corresponding argument in `T5ForConditionalGeneration.forward` and have been ignored: request, SQL.
I’ve tried providing the label_names argument, but this is also useless and the same behavior manifests.
Is the trainer’s documentation new?
I also attempted looking at some examples, but the "hard" part, that is to say, how you actually get a dataset that is formatted in a valid way, is always missing. | Have you tried following the relevant course sections? (I linked to translation, but summarization should be the same as well).
Basically you are supplying raw datasets to the Seq2SeqTrainer and this can't work, as it needs the model inputs (input_ids, attention_mask, labels, etc.), so you need to tokenize your inputs and labels. This is all done in the examples you mention once you have your dataset with one column for the input texts and one column for the target texts. | 0 |
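A sketch of the preprocessing the answer describes, reusing the column names from the question (request and label) and the tokenizer/model created there (self.tokenizer and self.model in the question, written without self here); the max lengths are assumptions:
from transformers import DataCollatorForSeq2Seq

max_input_length, max_target_length = 512, 128

def preprocess(examples):
    model_inputs = tokenizer(examples["request"], max_length=max_input_length, truncation=True)
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(examples["label"], max_length=max_target_length, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = train_dataset.map(preprocess, batched=True, remove_columns=["request", "label"])
eval_dataset = eval_dataset.map(preprocess, batched=True, remove_columns=["request", "label"])

# pads input_ids and labels per batch when the Seq2SeqTrainer builds its data loaders
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
# pass the mapped datasets plus data_collator=data_collator to Seq2SeqTrainer as before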
huggingface | Beginners | Question regarding TF DistilBert For Sequence Classification | https://discuss.huggingface.co/t/question-regarding-tf-distilbert-for-sequence-classification/12882 | I have successfully fine tuned “TF DistilBert For Sequence” Classification to distinguish comments that are toxic vs. not in my datasets. Is there a way to use the same model to gauge which sentence in a pair of toxic sentences is more (or less) toxic? Is there a way to access the probability produced by the classifier to compare toxicity of two toxic sentences? | Hi,
You can access the probability as follows:
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification
import tensorflow as tf
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
probabilities = tf.math.softmax(outputs.logits, axis=-1)
print(probabilities)
The probabilities are a tensor of shape (batch_size, num_labels), containing the probabilities per class for every example in the batch. | 0 |
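To compare two specific comments with a fine-tuned toxicity classifier, the same pattern works on a batch of two sentences (tokenizer, model and tf as in the snippet above); which label index corresponds to "toxic" depends on the fine-tuning, so index 1 below is an assumption:
sentences = ["first comment to compare", "second comment to compare"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="tf")
probs = tf.math.softmax(model(inputs).logits, axis=-1)

toxic_scores = probs[:, 1].numpy()  # assumes label index 1 is the "toxic" class
print(toxic_scores)                 # the larger score marks the more toxic comment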
huggingface | Beginners | PEGASUS&ProphetNet EncoderDecoderModel gives “Request-URI Too Large for url” error | https://discuss.huggingface.co/t/pegasus-prophetnet-encoderdecodermodel-gives-request-uri-too-large-for-url-error/4347 | Hello, I am trying to set up an EncoderDecoderModel using PEGASUS encoder and ProphetNet decoder.
First, I initialize a PegasusModel and access its encoder:
from transformers import PegasusModel, PegasusConfig, EncoderDecoderModel
pegasus = PegasusModel(PegasusConfig()).encoder
Then I try to pass that to the EncoderDecoderModel together with decoder from ProphetNet:
pegasus2prophet = EncoderDecoderModel.from_encoder_decoder_pretrained(pegasus, "microsoft/prophetnet-large-uncased")
However running that piece of code results in following errors:
HTTPError Traceback (most recent call last)
~/stanford/xcs224u-project/project_venv/lib/python3.8/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
416 # Load from URL or cache if already cached
--> 417 resolved_config_file = cached_path(
418 config_file,
~/stanford/xcs224u-project/project_venv/lib/python3.8/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1077 # URL, so get it from the cache (downloading if necessary)
-> 1078 output_path = get_from_cache(
1079 url_or_filename,
~/stanford/xcs224u-project/project_venv/lib/python3.8/site-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1215 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1216 r.raise_for_status()
1217 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
~/stanford/xcs224u-project/project_venv/lib/python3.8/site-packages/requests/models.py in raise_for_status(self)
942 if http_error_msg:
--> 943 raise HTTPError(http_error_msg, response=self)
944
HTTPError: 414 Client Error: Request-URI Too Large for url: https://huggingface.co/PegasusEncoder(%0A%20%20(embed_tokens):%20Embedding(50265,%201024,%20padding_idx=0)%0A%20%20(embed_positions):%20PegasusSinusoidalPositionalEmbed
(several rows of the URI follow)
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-3-2cdd15f1864e> in <module>
----> 1 pegasus2prophet = EncoderDecoderModel.from_encoder_decoder_pretrained(pegasus, "microsoft/prophetnet-large-uncased")
~/stanford/xcs224u-project/project_venv/lib/python3.8/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py in from_encoder_decoder_pretrained(cls, encoder_pretrained_model_name_or_path, decoder_pretrained_model_name_or_path, *model_args, **kwargs)
304 from ..auto.configuration_auto import AutoConfig
305
--> 306 encoder_config = AutoConfig.from_pretrained(encoder_pretrained_model_name_or_path)
307 if encoder_config.is_decoder is True or encoder_config.add_cross_attention is True:
308
~/stanford/xcs224u-project/project_venv/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
366 {'foo': False}
367 """
--> 368 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
369
370 if "model_type" in config_dict:
~/stanford/xcs224u-project/project_venv/lib/python3.8/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
434 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
435 )
--> 436 raise EnvironmentError(msg)
437
438 except json.JSONDecodeError:
OSError: Can't load config for 'PegasusEncoder(
(embed_tokens): Embedding(50265, 1024, padding_idx=0)
(embed_positions): PegasusSinusoidalPositionalEmbedding(1024, 1024)
(layers): ModuleList(
(0): PegasusEncoderLayer(
(self_attn): PegasusAttention(
...
Would anyone know if there is any way to bypass this problem? | Hello there! Were you able to resolve this issue? I am facing a similar issue with a model | 0 |
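One observation on the traceback above: the URL in the 414 error is literally the repr of the PegasusEncoder object, which suggests from_encoder_decoder_pretrained received a model instance where it expects a checkpoint name or path. A minimal sketch of the expected call shape, shown with a BERT/BERT pair rather than the Pegasus/ProphetNet combination from the question:
from transformers import EncoderDecoderModel

# both arguments are checkpoint names or local paths (strings), not instantiated modules
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)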
huggingface | Beginners | Training t5-based seq to seq suddenly reaches loss of `nan` and starts predicting only `<pad>` | https://discuss.huggingface.co/t/training-t5-based-seq-to-seq-suddenly-reaches-loss-of-nan-and-starts-predicting-only-pad/12884 | I’m trying to train a t5 based LM head model (mrm8488/t5-base-finetuned-wikiSQL) using my custom data to turn text into SQL (based roughly on the SPIDER dataset).
The current training loop I have is something like this:
parameters = self.model.parameters()
optimizer = AdamW(parameters, lr=1e-5) # imported from `transformers`
scheduler = get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps=5,
num_training_steps=len(data) * nr_epochs,
)
for epoch in range(nr_epochs):
for batch in data_loader:
optimizer.zero_grad()
predictions = model(**batch)
loss = predictions[0]
loss.backward()
optimizer.step()
scheduler.step()
Note: Simplified, I don’t show early stopping, datasource creation, dl creation, some custom scheduling logic, etc. But none of that should be relevant.
Pretty standard, the batch dictionary contains: input_ids, attention_mask, labels, decoder_attention_mask. I get the inputs_ids and attention_mask from tokenizing my input text, I get the labels and dedocer_attention_mask from tokenizing my target text (with the same tokenizer).
I also tried passing decoder_input_ids (using the same values I used for labels), but it results in a CUDA error (when using GPU) or a BLAS error (when using CPU). I tried deep-copying the tensor in case the issue was both this and labels pointing to the same object, but nothing changes.
My main question here is:
Why would this result in the yielded loss suddenly becoming nan and the model, if .backward() is called on that loss, suddenly starting to predict everything as <pad>?
Is it just that <pad> is what the tokenizer decodes if the model predicts "gibberish" (i.e. nan, inf or a very high or low number that's not associated with any char/seq by the tokenizer)?
Furthermore, usually, losses seem to become nan after they start getting higher and higher, but in this case, the model seems to be improving until at one point a nan drops out of nowhere.
My other questions, to hopefully help address this, are:
Is the decoder_attention_mask actually the output_attention_mask ? The model seems to perform much better when I add it and I get it from tokenizing the target text (and it seems to overlap with the padding therein) … but, my impression was that the “decoder” here was the generator of embedding and that seq2seq models have an additional LM head. Am I just getting my terminology wrong? Is the argument just named poorly?
Is there any relevance to passing decoder_input_ids ? Should these just be equivalent to the labels (given that, see above, the “decoder” here seems to be referring to the LM head)? Should I consider passing them instead of passing labels? Why would I get cuda/blas related crashes when I do pass them?
My current approach is to just "ignore" a loss of nan, i.e. clear the gradient, skip backprop, and keep moving. Is there a better alternative? Is the loss going to nan unexpected, and maybe a sign that I should look for and remove a "faulty" datapoint from the batch?
I get that this is not an ideal way to train, but I couldn't get the Seq2Seq trainer working (I made a question regarding that here: Extremely confusing or non-existent documentation about the Seq2Seq trainer) | I also cross-posted this on Stack Overflow, in case anyone is helped by that: python - How to avoid huggingface t5-based seq to seq suddenly reaching a loss of `nan` and start predicting only `<pad>`? - Stack Overflow | 0 |
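A sketch of the "skip nan batches" approach the question describes, with gradient clipping added as a commonly used mitigation (the clipping and the max_grad_norm value are assumptions, not something from the thread):
import torch

def training_step(model, batch, optimizer, scheduler, max_grad_norm=1.0):
    optimizer.zero_grad()
    loss = model(**batch)[0]
    if torch.isnan(loss) or torch.isinf(loss):
        return None  # skip this batch instead of backpropagating a bad loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    scheduler.step()
    return loss.item()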
huggingface | Beginners | Warm-started encoder-decoder models (Bert2Gpt2 and Bert2Bert) | https://discuss.huggingface.co/t/warm-started-encoder-decoder-models-bert2gpt2-and-bert2bert/12728 | I am working on warm starting models for the summarization task based on @patrickvonplaten 's great blog: Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models. However, I have a few questions regarding these models, especially for Bert2Gpt2 and Bert2Bert models:
1- As we all know, the summarization task requires a sequence-to-sequence model. In @patrickvonplaten's blog on warm-starting a bert2gpt2 model:
huggingface.co
patrickvonplaten/bert2gpt2-cnn_dailymail-fp16 · Hugging Face
Why don’t we use Seq2SeqTrainer and Seq2SeqTrainingArguments? Instead, we use Trainer and TrainingArguments.
2- For Bert2Gpt2 model, how can the decoder (Gpt2) understand the output of the encoder (Bert) while they use different vocabularies?
3- For Bert2Bert and Roberta2Roberta models, how can they be used as decoders while they are encoder-only models?
Best Regards | Hi,
looking at the files: Ayham/roberta_gpt2_summarization_cnn_dailymail at main
It indeed looks like only the weights (pytorch_model.bin) and model configuration (config.json) are uploaded, but not the tokenizer files.
You can upload the tokenizer files programmatically using the huggingface_hub library. First, make sure you have installed git-LFS and are logged into your HuggingFace account. In Colab, this can be done as follows:
!sudo apt-get install git-lfs
!git config --global user.email "your email"
!git config --global user.name "your username"
!huggingface-cli login
Next, you can do the following:
from transformers import RobertaTokenizer
from huggingface_hub import Repository
repo_url = "https://huggingface.co/Ayham/roberta_gpt2_summarization_cnn_dailymail"
repo = Repository(local_dir="tokenizer_files", # note that this directory must not exist already
clone_from=repo_url,
git_user="Niels Rogge",
git_email="niels.rogge1@gmail.com",
use_auth_token=True,
)
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.save_pretrained("tokenizer_files")
repo.push_to_hub(commit_message="Upload tokenizer files")
Note that the Trainer can actually automatically push all files during/after training to the hub for you, as seen here. | 1 |
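On question 3 from the original post, a hedged note based on the warm-starting blog rather than this thread: an encoder-only checkpoint can act as a decoder because it is reloaded with a causal attention mask and newly added (randomly initialised) cross-attention layers, which is what from_encoder_decoder_pretrained does under the hood. A minimal sketch of the same reconfiguration done by hand:
from transformers import BertConfig, BertLMHeadModel

config = BertConfig.from_pretrained("bert-base-uncased")
config.is_decoder = True            # causal (left-to-right) attention mask
config.add_cross_attention = True   # new, randomly initialised cross-attention layers
decoder = BertLMHeadModel.from_pretrained("bert-base-uncased", config=config)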
huggingface | Beginners | Passing gradient_checkpointing to a config initialization is deprecated | https://discuss.huggingface.co/t/passing-gradient-checkpointing-to-a-config-initialization-is-deprecated/12851 | When initializing a wav2vec2 model, as follows:
feature_extractor = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base')
wav_to_vec_model = Wav2Vec2Model.from_pretrained('facebook/wav2vec2-base')
I get the following warning:
UserWarning: Passing gradient_checkpointing to a config initialization is deprecated and will be removed in v5 Transformers. Using model.gradient_checkpointing_enable() instead, or if you are using the Trainer API, pass gradient_checkpointing=True in your TrainingArguments.
I'm not using the Trainer API, so I tried adding:
wav_to_vec_model.gradient_checkpointing_enable()
This doesn't work. What am I doing wrong? Thanks | Not sure why this pretrained model has gradient_checkpointing enabled in its config, @patrickvonplaten? It will make everyone who wants to fine-tune it use gradient checkpointing by default, which is not something we want. | 0 |
huggingface | Beginners | How to finetune RAG model with mini batches? | https://discuss.huggingface.co/t/how-to-finetune-rag-model-with-mini-batches/12724 | Dear authors of RAG model,
I know I can fine-tune RAG with the following example.
retriever = RagRetriever.from_pretrained(rag_example_args.rag_model_name, index_name="custom", passages_path=passages_path, index_path=index_path)
model = RagSequenceForGeneration.from_pretrained(rag_example_args.rag_model_name, retriever=retriever,cache_dir=cache_dir).to(device)
tokenizer = RagTokenizer.from_pretrained(rag_example_args.rag_model_name,cache_dir=cache_dir)
inputs = tokenizer("How many people live in Paris?", return_tensors="pt")
with tokenizer.as_target_tokenizer():
targets = tokenizer("In Paris, there are 10 million people.", return_tensors="pt")
input_ids = inputs["input_ids"].to(device)
labels = targets["input_ids"].to(device)
outputs = model(input_ids=input_ids, labels=labels)
However, this is for a single sentence.
How can I fine-tune with mini-batches of QA samples?
Could you give an example?
Thank you very much!
@patrickvonplaten @lhoestq | Hi! I think you can just pass a list of questions and answers to the tokenizer, and the rest of the code should work fine. | 0 |
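A sketch of the batched version, reusing the tokenizer, model and device from the snippet in the question, and assuming RagTokenizer accepts the usual padding/truncation keyword arguments:
questions = ["How many people live in Paris?", "Who wrote Hamlet?"]
answers = ["In Paris, there are 10 million people.", "Hamlet was written by William Shakespeare."]

inputs = tokenizer(questions, padding=True, truncation=True, return_tensors="pt")
with tokenizer.as_target_tokenizer():
    targets = tokenizer(answers, padding=True, truncation=True, return_tensors="pt")

input_ids = inputs["input_ids"].to(device)
labels = targets["input_ids"].to(device)
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss  # backpropagate this as usual for the mini-batch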
huggingface | Beginners | EvalPrediction returning one less prediction than label id for each batch | https://discuss.huggingface.co/t/evalprediction-returning-one-less-prediction-than-label-id-for-each-batch/6958 | Hi there,
I am attempting to recreate R2BERT (see the paper here: https://www.aclweb.org/anthology/2020.findings-emnlp.141.pdf), which combines regression and ranking as part of the loss function when training a model to correctly predict an essay score. I have successfully built the model to train with native PyTorch. However, when I use the Trainer module all is well during training, but if I call the evaluate or predict methods on the trainer, I am met by an Arrow error, which is the result of the EvalPrediction.predictions tensor being a different length than EvalPrediction.label_ids. After some snooping around I noticed that the difference in length always equals the number of evaluation batches; for example, evaluating over 5 batches the difference is 5 outputs. Any ideas on what might be causing this?
Here's my code. I use the ASAP dataset, but with only 32 essays for each of the test, validation and training sets, just as a scrap dataset to try and get the model working (I've tried with the full dataset: the behaviour is the same).
imports
from transformers import (TrainingArguments,
Trainer,
AutoConfig,
AutoTokenizer,
AutoModel,
AdamW,
EvalPrediction)
from datasets import load_metric,load_dataset
import torch
from torch import nn
import torch.nn.functional as F
from sklearn.metrics import cohen_kappa_score
import re
acquiring dataset
model_name = 'bert-base-uncased'
path = 'datasets/AES/asap'
dataset_title = 'asap'
# loading dataset
dataset = load_dataset('csv', data_files={'train':[f'{path}/PreProcessed/CsvFiles/{dataset_title}_dev_train.csv'],
'val':[f'{path}/PreProcessed/CsvFiles/{dataset_title}_dev_val.csv'],
'test':[f'{path}/PreProcessed/CsvFiles/{dataset_title}_dev_test.csv']})
Tokenizing dataset
# tokenizing dataset
tokenizer = AutoTokenizer.from_pretrained(model_name)
def encode_batch(batch):
"""Encodes a batch of input data using the model tokenizer."""
return tokenizer(batch["essay"], max_length=512, truncation=True, padding="max_length")
# Encode the input data
dataset = dataset.map(encode_batch, batched=True)
# labels = normalised scores, domain1_score = original score, essay_set = prompt number the essay belongs to
# will be used to adjust predicted scores to original scoring scale for each essay.
dataset.set_format(type="torch", columns=["input_ids", "attention_mask","essay_set","labels","domain1_score"])
Building the model:
class R2BERT(nn.Module):
def __init__(
self,
pretrained_model_name,
norm_params = None,
):
super().__init__()
# get bert model
config = AutoConfig.from_pretrained(pretrained_model_name)
self.model = AutoModel.from_pretrained(pretrained_model_name,
config=config)
# add final layer to make score prediction
self.predictor = nn.Linear(config.hidden_size,1)
# To be used for calculating kappa metric, by accessing the minimum score and
# score range for each essay set to get predictions to original scoring range.
# but not got round to it yet
self.norm_params = norm_params
# method for freezing bert layers (using regex to find all layers less than
# a specified n_training_layer and then setting requires_grad = False).
# Done mainly to avoid CUDA: runtime error when training
def set_trainable_params(self,n_training_layer=None):
for param_name,param_value in model.named_parameters():
if n_training_layer:
layer_num = re.findall(r'\d+',param_name)
if len(layer_num)>0:
layer_num = int(layer_num[0])
else:
layer_num = 0
if param_name.startswith('model') and layer_num<n_training_layer:
param_value.requires_grad = False
else:
if param_name.startswith('model'):
param_value.requires_grad = False
# Forward pass of the model takes inputs as ••kwargs making it a dictionary.
# Then the values of keys: 'input_ids' and 'attention_mask' are used to get output of bert
# Linear layer applied to get score and returned as model output. Dimension of output changed
# from [batch size,1] to just batch size to prevent broadcasting error.
def forward(self,**inputs):
bert_output = self.model(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'])
text_representation = bert_output[0][:,0,:]
batch_size = inputs['input_ids'].size()[0]
return self.predictor(text_representation).view(batch_size)
# Specific trainer class created to create a custom loss function
class R2Trainer(Trainer):
def compute_loss(self,model,inputs, return_outputs=False):
# labels are the scores of the essays
labels = inputs["labels"]
# The output of the model is passed through a sigmoid activation function
# to ensure it is between 0 and 1. This is because the essay scores have been normalised
# with min max scaling to adjust for different scoring ranges for different prompts.
outputs = torch.sigmoid(model(**inputs))
# mean square error used as regression loss
loss_m = F.mse_loss(outputs,labels)
# soft max is applied to both predicted and normalised scores (essentially determing the probability
# that for each essay in the set that it woruld be ranked the highest scoring)
# This enables the use of the listnet algorithm which is used for ranking loss
sm_pred_scores = F.softmax(outputs,dim=0)
sm_gold_scores = F.softmax(labels,dim=0)
# The loss for the listnet function is the cross entropy as applied here, this essentially determines
# how different the two soft max distrobutions are
loss_r = torch.sum((-sm_gold_scores*torch.log(sm_pred_scores)))
# The losses are then added together
loss = loss_m + loss_r
return (loss, outputs) if return_outputs else loss
def compute_accuracy(p):
#####################################################
# Here is where the error lies p.predictions returns only 30
# predictions for the training arguments and parameters set below
logits, labels = p.predictions,p.label_ids
print(p)
return metric.compute(predictions=logits, references=labels)
model = R2BERT(model_name)
model.set_trainable_params(6)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
metric = load_metric("pearsonr","spearmanr")
training_args = TrainingArguments(
learning_rate=4e-5,
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
logging_steps=200,
output_dir="./training_output",
overwrite_output_dir=True,
evaluation_strategy='steps',
remove_unused_columns=False,
)
trainer = R2Trainer(
model=model,
args=training_args,
train_dataset=dataset["train"],
eval_dataset=dataset["val"],
compute_metrics=compute_accuracy,
)
Training
trainer.train()
Output:
TrainOutput(global_step=4, training_loss=2.8204991817474365, metrics={'train_runtime': 2.4896, 'train_samples_per_second': 25.707, 'train_steps_per_second': 1.607, 'total_flos': 0.0, 'train_loss': 2.8204991817474365, 'epoch': 2.0})
Predicting:
trainer.predict(dataset['test'])
Output and Error:
EvalPrediction(predictions=array([0.6802865 , 0.69348145, 0.7554306 , 0.70484996, 0.7307703 ,
0.74552727, 0.6842238 , 0.76353663, 0.69672614, 0.7247801 ,
0.77793705, 0.7025176 , 0.6014939 , 0.6216687 , 0.702473 ,
0.6444423 , 0.73216194, 0.75792855, 0.7077718 , 0.62824374,
0.72637045, 0.7813148 , 0.71593434, 0.7130688 , 0.7126326 ,
0.7286271 , 0.6804262 , 0.7279507 , 0.69572073, 0.72733516],
dtype=float32), label_ids=array([0.75 , 0. , 0.75 , 0.6 , 0.6 ,
0.2 , 0.4 , 1. , 0.6 , 0.6666667 ,
0.6363636 , 0.75 , 0.9 , 0.75 , 0.25 ,
0.56 , 0.75 , 0.6666667 , 0.27272728, 0.5 ,
1. , 0. , 0.44 , 1. , 0.6 ,
0.4 , 0.36 , 0.5 , 0.36363637, 0.8181818 ,
1. , 0.59090906], dtype=float32))
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/metric.py in add_batch(self, predictions, references)
434 try:
--> 435 self.writer.write_batch(batch)
436 except pa.ArrowInvalid:
10 frames
ArrowInvalid: Column 1 named references expected length 30 but got length 32
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/metric.py in add_batch(self, predictions, references)
436 except pa.ArrowInvalid:
437 raise ValueError(
--> 438 f"Predictions and/or references don't match the expected format.\n"
439 f"Expected format: {self.features},\n"
440 f"Input predictions: {predictions},\n"
ValueError: Predictions and/or references don't match the expected format.
Expected format: {'predictions': Value(dtype='int32', id=None), 'references': Value(dtype='int32', id=None)},
Input predictions: [0.6802865 0.69348145 0.7554306 0.70484996 0.7307703 0.74552727
0.6842238 0.76353663 0.69672614 0.7247801 0.77793705 0.7025176
0.6014939 0.6216687 0.702473 0.6444423 0.73216194 0.75792855
0.7077718 0.62824374 0.72637045 0.7813148 0.71593434 0.7130688
0.7126326 0.7286271 0.6804262 0.7279507 0.69572073 0.72733516],
Input references: [0.75 0. 0.75 0.6 0.6 0.2
0.4 1. 0.6 0.6666667 0.6363636 0.75
0.9 0.75 0.25 0.56 0.75 0.6666667
0.27272728 0.5 1. 0. 0.44 1.
0.6 0.4 0.36 0.5 0.36363637 0.8181818
1. 0.59090906]
Kind Regards,
Cameron
- `transformers` version: 4.7.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in> | hey @cameronstronge, looking at your error i think the problem is that both your predictions and ground truth labels are floats, while your compute_accuracy function expects integers.
if you fix that, doe the problem resolve itself? | 0 |
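For readers hitting the same length mismatch: the "Trainer .evaluate() method returns one less prediction" thread earlier on this page shows the same symptom (one prediction lost per eval batch), and there the fix was to stop returning a bare tensor from compute_loss, because the Trainer slices off the first element as if it were the loss. A hedged sketch of that change applied to R2Trainer, keeping the loss computation from the question and assuming its imports (Trainer, torch, F):
class R2Trainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs["labels"]
        outputs = torch.sigmoid(model(**inputs))
        loss_m = F.mse_loss(outputs, labels)
        sm_pred_scores = F.softmax(outputs, dim=0)
        sm_gold_scores = F.softmax(labels, dim=0)
        loss_r = torch.sum(-sm_gold_scores * torch.log(sm_pred_scores))
        loss = loss_m + loss_r
        # wrap the predictions in a dict so prediction_step does not drop the first element
        return (loss, {"logits": outputs}) if return_outputs else loss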
huggingface | Beginners | Xnli is not loading | https://discuss.huggingface.co/t/xnli-is-not-loading/12799 | I’m trying to load the xnli dataset like this:
xnli = nlp.load_dataset(path='xnli')
and I got an error:
ConnectionError: Couldn’t reach https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip
Can someone tell me what the problem is, and whether I can get this dataset some other way?
Thanks in advance | Hi,
the nlp project was renamed to datasets over a year ago, so I’d suggest you install that package and let us know if you still have issues downloading the dataset. | 0 |
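A minimal sketch with the renamed package; note that xnli needs a language config name such as "en" or "all_languages":
# pip install datasets
from datasets import load_dataset

xnli = load_dataset("xnli", "en")  # "en" is one of the available configs
print(xnli)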
huggingface | Beginners | How to insert a end-sequence | https://discuss.huggingface.co/t/how-to-insert-a-end-sequence/9935 | I am new to HuggingFaces and I am trying to use the GPT-Neo model to generate the next sentence in a conversation (basically like a chatbot).
I experimented with GPT-3 before, and there I was using "Me:" as an end sequence to ensure the model would stop generating once it generated the text "Me:" (which indicates that it is my turn to say something).
Is there a similar option for GPT-Neo? | Just hopping in to say I have the exact same question in the hopes it’ll encourage someone to answer. | 0 |
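Not from the thread, but one way to emulate an end sequence with generate() is a custom StoppingCriteria. The sketch below assumes a GPT-Neo checkpoint and batch size 1; the prompt and stop text are placeholders, and the trailing "Me:" still has to be trimmed from the decoded output:
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

class StopOnText(StoppingCriteria):
    """Stop once the newly generated text ends with a given string such as 'Me:'."""
    def __init__(self, tokenizer, stop_text, prompt_length):
        self.tokenizer = tokenizer
        self.stop_text = stop_text
        self.prompt_length = prompt_length
    def __call__(self, input_ids, scores, **kwargs):
        generated = self.tokenizer.decode(input_ids[0, self.prompt_length:])
        return generated.rstrip().endswith(self.stop_text)

prompt = "Bot: Hi, how can I help?\nMe: Tell me a joke.\nBot:"
inputs = tokenizer(prompt, return_tensors="pt")
stopping = StoppingCriteriaList([StopOnText(tokenizer, "Me:", inputs.input_ids.shape[1])])

output = model.generate(**inputs, max_new_tokens=60, stopping_criteria=stopping)
print(tokenizer.decode(output[0]))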
huggingface | Beginners | Sample evaluation script on custom dataset | https://discuss.huggingface.co/t/sample-evaluation-script-on-custom-dataset/12654 | Hey, I have a custom dataset. Can you send a sample script to get the accuracy on such a dataset? I was going through the examples and I couldn't find code that does that. Can someone send me a resource?
my dataset is of the format:
premise, hypothesis, label (0 or 1)
and my model is deberta
Thanks
@lewtun | Hey @NDugar if you’re using the Trainer my suggestion would be to run Trainer.predict(your_test_dataset) so you can get all the predictions. Then you should be able to feed those into the accuracy metric in a second step (or whatever metric you’re interested in).
If you’re still having trouble, I suggest providing a minimal reproducible example, as explained here | 1 |
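A sketch of the two-step approach described above, assuming trainer already exists and test_dataset is the tokenized premise/hypothesis/label test set:
import numpy as np
from datasets import load_metric

output = trainer.predict(test_dataset)
preds = np.argmax(output.predictions, axis=-1)

metric = load_metric("accuracy")
print(metric.compute(predictions=preds, references=output.label_ids))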
huggingface | Beginners | How to edit different classes in transformers and have the transformer installed with the changes? | https://discuss.huggingface.co/t/how-to-edit-different-classes-in-transformers-and-have-the-transformer-installed-with-the-changes/12781 | I wanted to edit some classes in the transformers for example BertEmbeddings transformers/modeling_bert.py at 4c32f9f26e6a84f0d9843fec8757e6ce640bb44e · huggingface/transformers · GitHub
and pre-train Bert from scratch on a custom dataset. But I am stuck on how to make the edit work.
The process I am following is:
Clone the repository git clone https://github.com/huggingface/transformers.git
edit the classes I needed to edit
Install transformers using
cd transformers
pip install -e .
The problem is I can not load any model or tokenizer using commands like:
from transformers import BertModel
The error message shows:
ImportError: cannot import name 'BertModel' from 'transformers' (unknown location)
while import transformers works perfectly fine.
My questions are:
How do I import the BertTokenizer or BertModel?
Is there a better way to achieve what I am trying to do than my approach?
I could be way off so any helpful suggestion is appreciated. Thanks
Note: I am trying to do something like this: How to use additional input features for NER? - #2 by nielsr | Apparently, the problem was the editable version.
This works:
cd transformers
pip install .
Editable mode was not what I thought it was at first. | 1 |
huggingface | Beginners | Error occurs when loading additional parameters in multi-gpu training | https://discuss.huggingface.co/t/error-occurs-when-loading-additional-parameters-in-multi-gpu-training/12667 | I’m training plugins (e.g. adapter) on top of the language model on multiple GPUs using huggingface Accelerate. However, strange things occur when I try to load additional parameters into the model. The training cannot move on successfully and I find the process state not as expected.
My running command is like this:
CUDA_VISIBLE_DEVICES=1,2,3,4 python -m torch.distributed.launch --nproc_per_node 4 --use_env ./mycode.py
But I found the process state like this:
(Screenshot of the GPU process list, 1157×320, 29.4 KB)
I don't know why there are three processes on GPU1. This is definitely not correct.
The code works well if I use a single GPU, or if I don't load the additional parameters, so it should be straightforward to narrow down where the bug is.
FYI, the code related to loading additional parameters is as follows:
model = MyRobertaModel.from_pretrained(
args.model_name_or_path,
model_args=model_args,
data_args=data_args,
training_args=training_args,
n_tokens=args.n_tokens,
)
accelerator.wait_for_everyone() # I try to add barrier but it doesn't solve my problem
t = args.t
if t > 0:
embed_pool = torch.load(os.path.join(args.saved_plugin_dir, 'embed_pool.pth'))
for i in range(t):
model.add_prompt_embedding(t=i, saved_embedding=embed_pool[i])
plugin_ckpt = torch.load(os.path.join(args.saved_plugin_dir, 'plugin_ckpt.pth'))
model.load_plugin(plugin_ckpt)
accelerator.wait_for_everyone()
model.resize_token_embeddings(len(tokenizer))
The code for loading plugin looks like this
def load_plugin(self, plugin_ckpt):
idx = 0
for name, sub_module in super().named_modules():
if isinstance(sub_module, MyAdapter):
sub_module.adapter.load_state_dict(plugin_ckpt[f'adapter_{idx}'])
idx += 1
print('Load plugins successfully!')
Also, my library versions are:
python 3.6.8
transformers 4.11.3
accelerate 0.5.1
NVIDIA gpu cluster
Really thank you for your help! | Turn out to be a stupid mistake by me.
embed_pool = torch.load(os.path.join(args.saved_plugin_dir, 'embed_pool.pth'))
should be changed to
embed_pool = torch.load(os.path.join(args.saved_plugin_dir, 'embed_pool.pth'), map_location=torch.device('cpu'))
torch.load() will automatically map the file to device:0 (the device the tensors were saved from), which is the same device from every process's point of view, thus causing the problem (spawning the extra processes on device:0).
Marking this problem as solved by myself. | 1 |
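A small variation on the same fix, using the per-process device that Accelerate exposes (torch, os, args and accelerator as in the question's code):
embed_pool = torch.load(
    os.path.join(args.saved_plugin_dir, "embed_pool.pth"),
    map_location=accelerator.device,  # this process's own device instead of the saved cuda:0
)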
huggingface | Beginners | <extra_id> when using fine-tuned MT5 for generation | https://discuss.huggingface.co/t/extra-id-when-using-fine-tuned-mt5-for-generation/3535 | Hi, I am trying to summarize the text in Japanese.
I found that you recently added a new script for fine-tuning seq2seq models.
github.com
huggingface/transformers
master/examples/seq2seq
🤗Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow 2.0.
So I fine-tuned the MT5 model on my Japanese dataset. It contains 100 samples.
CUDA_VISIBLE_DEVICES=0 python examples/seq2seq/run_seq2seq.py \
--model_name_or_path google/mt5-small \
--do_train --do_eval --task summarization \
--train_file ~/summary/train.csv --validation_file ~/summary/val.csv \
--output_dir ~/tmp/tst-summarization \
--overwrite_output_dir \
--per_device_train_batch_size=4 --per_device_eval_batch_size=4 \
--predict_with_generate \
--text_column article --summary_column summary
Then I loaded this fine-tuned model for prediction.
import transformers
from transformers import (
AutoConfig,
AutoModelForSeq2SeqLM,
AutoTokenizer,
)
model_path = "../tmp/tst-summarization/"
tokenizer = AutoTokenizer.from_pretrained(
model_path,
use_fast=True,
)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_path,
)
# article = "AI婚活のイメージ内閣府は人工知能(AI)やビッグデータを使った自治体の婚活事業支援に本腰を入れる。AIが膨大な情報を分析し、「相性の良い人」を提案する。お見合い実施率が高まるといった効果が出ている例もあり、2021年度から自治体への補助を拡充し、システム導入を促す。未婚化、晩婚化が少子化の主な要因とされており、結婚を希望する人を後押しする。これまでは本人が希望する年齢や身長、収入などの条件を指定し、その条件に合った人を提示する形が主流だった。AI婚活では性格や価値観などより細かく膨大な会員情報を分析。本人の希望条件に限らずお薦めの人を選び出し、お見合いに進む。"
article = article = """
The man was arrested as he waited to board a plane
at Johannesburg airport. Officials said a scan of
his body revealed the diamonds he had ingested,
worth $2.3m (£1.4m; 1.8m euros), inside. The man
was reportedly of Lebanese origin and was
travelling to Dubai. "We nabbed him just before he
went through the security checkpoint," Paul
Ramaloko, spokesman of the South Africa elite
police unit the Hawks said, according to Agence
France Presse. Authorities believe the man belongs
to a smuggling ring. Another man was arrested in
March also attempting to smuggle diamonds out the
country in a similar way. South Africa is among
the world's top producers of diamonds.
"""
batch = tokenizer.prepare_seq2seq_batch(
src_texts=[article], max_length=512, truncation=True, return_tensors="pt")
summary_ids = model.generate(batch["input_ids"], num_beams=4, max_length=128,
min_length=50, no_repeat_ngram_size=2,
early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True,
clean_up_tokenization_spaces=True) for g in summary_ids])
The results are
['<extra_id_0>。AI婚活では性格や価値観の多様性を分析し、結婚を希望する人を後押しする。本人の希望条件を把握し、「相性の良い人」を提示する形が主流だといえるでしょう。 AI婚活は「相性のいい人」。']
["<extra_id_0> of diamonds was reportedly of Lebanese origin and was travelling to Dubai in March. Johannesburg - South Africa.com.an... <extra_id_51> the man's body, worth $2.3m (£1.4m euros)"]
The question is that <extra_id>, which is used for the unsupervised training of T5, appears in the output. In my opinion it shouldn't appear in the output text. I have tried adding the prefix "summarize: ", but it doesn't help. Is there any problem with the fine-tuning or with the way I use the model? Thanks in advance.
@sshleifer @valhalla @sgugger | Having the same issue: the extra_id token kind of replaces the first word in a sentence. Does anyone know why? | 0 |
huggingface | Beginners | How to change the batch size in a pipeline? | https://discuss.huggingface.co/t/how-to-change-the-batch-size-in-a-pipeline/8738 | Hello!
Sorry for the simple question, but I was wondering how I can change the batch size when I load a pipeline for sentiment classification.
I use classifier = pipeline('sentiment-analysis'), but the list of sentences I feed the classifier is too big to be processed in one batch.
Thanks! | You can do it in the method call:
examples = ["I hate everyone" ] * 100
classifier(examples, batch_size=10) | 1 |
huggingface | Beginners | Data augmentation for image (ViT) using Hugging Face | https://discuss.huggingface.co/t/data-augmentation-for-image-vit-using-hugging-face/9750 | Hi everyone,
I am currently training a ViT on a local dataset of mine. I have used the Hugging Face dataset template to create my own dataset class.
To train my model I use pytorch functions (Trainer etc…), and I would like to do some data augmentation on my images.
Does Hugging Face support data augmentation for images? If not, guessing I should use PyTorch for the data augmentation, how should I proceed?
Thank you | Hi,
the feature extractors (like ViTFeatureExtractor) are fairly minimal, and typically only support resizing images and normalizing the channels. For all kinds of image augmentations, you can use torchvision's transforms or albumentations, for example. | 0 |
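A sketch of wiring torchvision transforms into a datasets-style dataset on the fly; the checkpoint and the image/label column names are assumptions, and dataset refers to the dataset object from the question:
from torchvision import transforms
from transformers import ViTFeatureExtractor

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")

augment = transforms.Compose([
    transforms.RandomResizedCrop(feature_extractor.size),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

def preprocess(examples):
    images = [augment(image.convert("RGB")) for image in examples["image"]]
    inputs = feature_extractor(images, return_tensors="pt")
    inputs["labels"] = examples["label"]
    return inputs

# applied lazily at access time, so the random crops/flips differ every epoch
dataset.set_transform(preprocess)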
huggingface | Beginners | Are dynamic padding and smart batching in the library? | https://discuss.huggingface.co/t/are-dynamic-padding-and-smart-batching-in-the-library/10404 | my code:
return tokenizer(list(dataset['sentense']),
padding = True,
truncation = True,
max_length = 128 )
training_args = TrainingArguments(
output_dir='./results', # output directory
save_total_limit=5, # number of total save model.
save_steps=5000, # model saving step.
num_train_epochs=20, # total number of training epochs
learning_rate=5e-5, # learning_rate
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=16, # batch size for evaluation
...
Hello, I understand the docs like this:
If I want to use dynamic padding → padding=True
If not (pad to max_length) → padding='max_length'
Is that right?
And I want to use smart batching,
Does per_device_train_batch_size automatically support this feature?
If not, I wonder if there is anything that would let me use smart batching.
Thanks!! | Hi,
This video makes it quite clear: What is dynamic padding? - YouTube
In order to use dynamic padding in combination with the Trainer, one typically postpones the padding, by only specifying truncation=True when preprocessing the dataset, and then using the DataCollatorWithPadding when defining the data loaders, which will dynamically pad the batches. | 0 |
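A sketch of that setup, reusing the column name from the question ('sentense') and assuming tokenizer, model, dataset and training_args already exist:
from transformers import DataCollatorWithPadding, Trainer

def tokenize(batch):
    return tokenizer(batch["sentense"], truncation=True, max_length=128)  # no padding here

tokenized = dataset.map(tokenize, batched=True)
data_collator = DataCollatorWithPadding(tokenizer)  # pads each batch to its own longest sequence

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    data_collator=data_collator,
)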
huggingface | Beginners | How to properly add new vocabulary to BPE tokenizers (like Roberta)? | https://discuss.huggingface.co/t/how-to-properly-add-new-vocabulary-to-bpe-tokenizers-like-roberta/12635 | I would like to fine-tune RoBERTa on a domain-specific English-based vocabulary.
For that, I ran TF-IDF on a corpus of mine and extracted 500 words that are not yet in the RoBERTa tokenizer.
As they represent only 1 percent of the total vocabulary size, I don't want to train the tokenizer from scratch.
So I just did :
tokenizer.add_tokens(['word1', 'word2'])
model.resize_token_embeddings(len(tokenizer))
BUT I see 2 problems related to BPE:
My words are not split into sub-words. Is that fine?
There are no " Ġ " (\u0120) prefixes in my list. Should I add them manually?
I am adding that I could not find any precise answer to this question (that many of us have) : see Usage of Ġ in BPE tokenizer · Issue #4786 · huggingface/transformers · GitHub | Hello Pataleros,
I stumbled on the same issue some time ago. I am no Hugging Face expert, but here is what I dug up.
The bad news is that a BPE tokenizer "learns" how to split text into tokens (a token may correspond to a full word or only part of one), and I don't think there is any clean way to add vocabulary after that training is done.
Therefore, unfortunately, the proper way would be to train a new tokenizer, which makes transfer learning almost useless.
Now to the hacks !
Why can’t we add some words at the end of the vocab file ? Because then you change the output shape of your Roberta model and fine-tuning requires loading all your pretrained model except for the last layer. Not a trivial task but nothing outrageous (load full model, delete last layer, add same layer with your new voc size, save model)
Another possible hack would be to keep the same voc size, but change unused tokens (some chinese characters, or accents you don’t use etc…) with your additionnal vocabulary. It would be a bit harder as you have to locate those unused tokens.
As for me, I just trained a small-sized BERT from scratch, without transfer learning, which I would advise only if your domain-specific English is limited in vocabulary and grammar, and distinctly different from usual English.
Best of luck to you !
Dan | 0 |
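A minimal sketch (not from the thread) of the add-then-resize route the question describes; the checkpoint and the word list are illustrative. In my understanding, tokens added with add_tokens are matched against the raw text before the BPE model runs, so they are kept whole, and the Ġ question mainly matters for the vocab-file hack discussed above.
```
# Sketch only: words and checkpoint are illustrative.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

num_added = tokenizer.add_tokens(["word1", "word2"])
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
print(num_added, len(tokenizer))
```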
huggingface | Beginners | Multilabel text classification Trainer API | https://discuss.huggingface.co/t/multilabel-text-classification-trainer-api/11508 | Hi all,
Can someone help me to do a multilabel classification with the Trainer API ? | Sure, all you need to do is make sure the problem_type of the model’s configuration is set to multi_label_classification, e.g.:
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=10, problem_type="multi_label_classification")
This will make sure the appropriate loss function is used (namely, binary cross entropy). Note that the current version of Transformers does not support this problem_type for any model, but the next version of Transformers will (as per PR #14180 1).
I suggest taking a look at the example notebook 10 to do multi-label classification using the Trainer.
Update: I made a notebook myself to illustrate how to fine-tune any encoder-only Transformer model for multi-label text classification: Transformers-Tutorials/Fine_tuning_BERT_(and_friends)_for_multi_label_text_classification.ipynb at master · NielsRogge/Transformers-Tutorials · GitHub 2 | 1 |
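One detail worth showing in code (a minimal sketch, not from the thread): with problem_type="multi_label_classification", the labels passed to the model are multi-hot float vectors rather than class indices.
```
# Sketch only: 10 labels assumed, matching the example above; text and classes are illustrative.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=10, problem_type="multi_label_classification"
)

inputs = tokenizer("some example text", return_tensors="pt")
labels = torch.zeros(1, 10)
labels[0, [2, 7]] = 1.0  # this example belongs to classes 2 and 7

outputs = model(**inputs, labels=labels)
print(outputs.loss)  # binary cross entropy (BCEWithLogitsLoss) under the hood
```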
huggingface | Beginners | Memory Efficient Dataset Creation for NSP Training | https://discuss.huggingface.co/t/memory-efficient-dataset-creation-for-nsp-training/12385 | We want to fine tune BERT with Next Sentence Prediction (NSP) objective and we have a list of files which contains the conversations. To prepare the training dataset for the fine tuning, currently we read through all the files, load all conversation sentences into memory, create positive examples for adjacent sentences A and B, like [CLS] A [SEP] B [SEP], and create negative examples by randomly sample two sentences A and B in all conversation sentences.
The Current logic is similar with:
github.com
huggingface/transformers/blob/master/src/transformers/data/datasets/language_modeling.py#L196 1
for i in range(n):
self.examples[i]["chinese_ref"] = torch.tensor(ref[i], dtype=torch.long)
def __len__(self):
return len(self.examples)
def __getitem__(self, i) -> Dict[str, torch.tensor]:
return self.examples[i]
class LineByLineWithSOPTextDataset(Dataset):
"""
Dataset for sentence order prediction task, prepare sentence pairs for SOP task
"""
def __init__(self, tokenizer: PreTrainedTokenizer, file_dir: str, block_size: int):
warnings.warn(
DEPRECATION_WARNING.format(
"https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py"
),
FutureWarning,
However, this is not memory efficient because it loads all sentences into memory and now we have lots of sentences which cannot fit into memory any more.
Any suggestions to create the dataset for NSP more memory efficiently? The load_dataset APIs look promising, but didn’t figure out how to process the input files to randomly sample sentences for the negative examples.
Thanks | Hi,
Instead of generating a dataset with load_dataset, it should be easier to create dataset chunks with Dataset.from_dict, which we can then save to disk with save_to_disk, reload and concatenate to get a memory-mapped dataset.
The code could look as follows:
# distribute files in multiple dirs (chunkify dir) to avoid loading the entire data into a single LineByLineWithSOPTextDataset
from datasets import Dataset, concatenate_datasets
def list_of_dicts_to_dict_of_lists(d):
dic = d[0]
keys = dic.keys()
values = [dic.values() for dic in d]
return {k: list(v) for k, v in zip(keys, zip(*values))}
chunks = []
for i, file_dir in enumerate(dirs_with_data_files):
dset = LineByLineWithSOPTextDataset(<tokenizer>, file_dir)
examples = list_of_dicts_to_dict_of_lists(dset.examples)
chunk = Dataset.from_dict(examples)
chunk.save_to_disk(f"./chunks_dir/{i}")  # currently `chunk` is in memory, so we save it to disk
chunk = Dataset.load_from_disk(f"./chunks_dir/{i}")  # and reload it to make it memory-mapped
chunks.append(chunk)
final_dset = concatenate_datasets(chunks) | 0 |
huggingface | Beginners | Wav2vec2 finetuned model’s strange truncated predictions | https://discuss.huggingface.co/t/wav2vec2-finetuned-models-strange-truncated-predictions/12319 | What is your question?
I’m getting strange truncation of prediction at different steps of training. Please help to understand what is the issue?
At the first steps of training, like 800-1600 (2-3 epochs), I'm getting predictions with a valid length and word count but with low accuracy (which is OK at the first steps). After steps > ~8000 things start getting strange: word-prediction accuracy gets better and WER correspondingly gets lower, but the overall sentence length gets truncated towards the right side of the utterance. For example:
Target:
Dərbəndin caxır-konyak kombinatı ərazisində yanğın qeydə alınıb. Hadisə axşam saatlarında baş verib. İlkin məlumata görə, insidentə spirt məhlulunun yerə dağılması səbəb olub
Prediction @ 400 step (length is correct, WER 60+)
dərbəndin çaxır kona kombinantı erazisində yanğın qeydə alınıb harisi axşam satlarında baş verb ilki məlumata görə insidentəs birt məxlunun yerə dağılması səbəb olub
Prediction @ 800 step (length is correct, WER 50+)
dərbəndin çaxırkonakombinanta ərazisində yanğın qeydə alınıb hadisə axşamsaatlarında baş verib ilki məlumata görə insidentəs birt məhlullunun yerə dağılması səbəb olub
Prediction @ 1600 step (length getting truncated, words joining each other, WER 40+)
dərbədinçıki əazisdə ynğqdını hadişıa veiklumagörə insidentspirt məlun yerə dağılması səbəb olub
Prediction @ > 20000 step (around 30 to 100 epochs, almost no changes in WER, sentence completely truncated to the right part, WER keep around 16-27 depending on audio quality)
ndəyaninsidentəspirtməluunun yerə dağılması səbəb olub
insidntə spirt məhlulunun yerə dağılması səbəb olub
insidentə spürt məhlulunun yerə dağılması səbəb olub
nsientə spirt məhlulunun yerə dağılması səbəb olub
Code
Exactly the same code but with different epoch param (num_train_epochs 30 to 100)…
What have you tried?
Training data: 30 hours of labeled data, single spoken person per clip, around 15-30 sec each
I've used Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers to train a language very similar to Turkish. It differs only in a few alphabet characters, so I used exactly the same params for the first training. Then I removed return_attention_mask, but nothing changed at all. Then I tried to fine-tune the Turkish fine-tuned model from the tutorial itself, from Patrick's hub repo, and got the same results.
What’s your environment?
fairseq Version (e.g., 1.0 or main): current master branch
PyTorch Version (e.g., 1.0): the one which comes with Python 3.8
OS (e.g., Linux): Linux
How you installed fairseq ( pip , source): clone and installed
Python version: 3.8
CUDA/cuDNN version: 10.2
GPU models and configuration: 1 x V100S (32 GB) | @patrickvonplaten kindly asking you to shed some light on this issue. what could be the possible reasons? | 0 |
huggingface | Beginners | Need help to give inputs to my fine tuned model | https://discuss.huggingface.co/t/need-help-to-give-inputs-to-my-fine-tuned-model/12582 | I finetuned a distilbert-base-uncased model in google colab. I also downloaded it (h5 file) to my laptop. But I don’t understand how to load it on my laptop and give some inputs to check how it performs. | Hi,
You can check out the code example in the docs of TFDistilBertForSequenceClassification.
The model outputs logits, which are unnormalized scores for each of the classes, for every example in the batch. It’s a tensor of shape (batch_size, num_labels).
To turn it into an actual prediction, one takes the highest score, as follows:
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification
import tensorflow as tf
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
inputs = tokenizer("Hello world ", return_tensors="tf")
outputs = model(inputs)
logits = outputs.logits
predicted_class_idx = tf.math.argmax(logits, axis=-1)[0]
print("Predicted class:", model.config.id2label[int(predicted_class_idx)])
Note that you can set the id2label dictionary as an attribute of the model’s configuration, to map integers to actual class names. | 0 |
huggingface | Beginners | Accelerated Inference API Automatic Speech Recognition | https://discuss.huggingface.co/t/accelerated-inference-api-automatic-speech-recognition/8239 | Hi I’m trying to use the Automatic Speech Recognition API but the docs are … light.
When I copy/paste the example code from the docs (below for convenience):
import json
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://api-inference.huggingface.co/models/facebook/wav2vec2-base-960h"
def query(filename):
with open(filename, "rb") as f:
data = f.read()
response = requests.request("POST", API_URL, headers=headers, data=data)
return json.loads(response.content.decode("utf-8"))
data = query("sample1.flac")
… I get …
{
"error": "Model facebook/wav2vec2-base-960h is currently loading",
"estimated_time": 20
}
It then says in the docs “no other parameters are currently allowed”. Does this mean I can’t ask it to use a GPU for instance?
So
It’d be nice if the docs had sample code that worked out of the box. Developer UX is important.
It'd be nice if the docs also had documentation on the response format. For instance, for the error result's estimated_time: 20, is this minutes, days, centuries, or nanoseconds?
A very common ASR feature is to have word-by-word timestamps for alignment use cases. Does this API support that or in any way harmonise the ASR engines underneath (SpeechBrain and another one)
I'm poised to shell out some big bucks for GPU-level support at HF, but I need to see much more pro-level docs in this area. | @boxabirds I got the same problem. Looks like no-one responded to your post… did you work out the error? | 0
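For the "currently loading" response seen in this thread, a minimal sketch (not from the thread) is to retry after the reported estimated_time, which is treated here as seconds. The Bearer token is a placeholder.
```
# Sketch only: retry while the hosted model is still loading; token is a placeholder.
import json
import time
import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/wav2vec2-base-960h"
headers = {"Authorization": "Bearer <API_TOKEN>"}

def query(filename, max_retries=5):
    with open(filename, "rb") as f:
        data = f.read()
    payload = None
    for _ in range(max_retries):
        response = requests.post(API_URL, headers=headers, data=data)
        payload = json.loads(response.content.decode("utf-8"))
        if isinstance(payload, dict) and "estimated_time" in payload:
            time.sleep(payload["estimated_time"])  # model still loading: wait, then retry
            continue
        break
    return payload
```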
huggingface | Beginners | Fine-tune BERT and Camembert for regression problem | https://discuss.huggingface.co/t/fine-tune-bert-and-camembert-for-regression-problem/332 | I am fine-tuning the BERT model on sentence ratings given on a scale of 1 to 9, but rather than measuring its accuracy at classifying into the same score/category/bin as the judges, I just want BERT's score on a continuous scale, like 1, 1.1, 1.2, … up to 9. I also need to figure out how to do this using CamemBERT as well. What changes need to be made in the BertForSequenceClassification and CamembertForSequenceClassification modules, and what changes need to be made in preprocessing (like encode_plus)? | Hi @sundaravel, you can check the source code for BertForSequenceClassification here 861. It also has code for the regression problem.
Specifically for regression your last layer will be of shape (hidden_size, 1) and use MSE loss instead of cross entropy | 0 |
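A minimal sketch (not from the thread) of the regression setup described above: as far as I know, with num_labels=1 the sequence-classification head outputs a single continuous score and falls back to MSE loss. The rating value below is illustrative.
```
# Sketch only: the rating value is illustrative.
import torch
from transformers import CamembertTokenizer, CamembertForSequenceClassification

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForSequenceClassification.from_pretrained("camembert-base", num_labels=1)

inputs = tokenizer("Une phrase à noter.", return_tensors="pt")
labels = torch.tensor([[4.5]])  # continuous rating on the 1-9 scale

outputs = model(**inputs, labels=labels)
print(outputs.loss, outputs.logits)  # MSE loss, single regression score
```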
huggingface | Beginners | Loss error for bert token classifier | https://discuss.huggingface.co/t/loss-error-for-bert-token-classifier/12460 | So I am doing my first BERT token classifier. I am using a German polyglot dataset, meaning tokenised words and lists of NER labels.
a row is [‘word1’,‘word2’…] [‘ORG’,‘LOC’…]
This is my code
tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')
encoded_dataset = [tokenizer(item['words'], is_split_into_words=True,return_tensors="pt", padding='max_length', truncation=True, max_length=128) for item in dataset_1]
model = BertForTokenClassification.from_pretrained('bert-base-german-cased', num_labels=1)
for item in encoded_dataset:
for key in item:
item[key] = torch.squeeze(item[key])
train_set = encoded_dataset[:500]
test_set = encoded_dataset[500:]
training_args = TrainingArguments(
num_train_epochs=1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
output_dir='results',
logging_dir='logs',
no_cuda=False, # defaults to false anyway, just to be explicit
)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
args=training_args,
train_dataset=train_set,
)
trainer.train()
And I am getting KeyError: 'loss'. | Could you post the error? | 0
huggingface | Beginners | Why do probabilities output for a model does not correspond to label predicted by the finetune model? | https://discuss.huggingface.co/t/why-do-probabilities-output-for-a-model-does-not-correspond-to-label-predicted-by-the-finetune-model/12464 |
Hello, I fine-tuned a model from Hugging Face on a classification task: a multi-class classification with 3 labels encoded as 0, 1, and 2. I use the cross-entropy loss function to compute the loss.
When training, I tried to get the probabilities, but I observe that the probabilities do not correspond to the final label of the classification model. For industrial purposes, I need to set a probability threshold so that not all the texts given to the model and classified are returned. But since the probabilities do not seem to correspond to the label, how can I interpret them?
In short, I need to get the right probabilities in order to introduce a threshold for what is returned after the classification is done.
For the pobabilities I used this code line : proba = nn.functional.softmax(logits, dim=1)
probabilities + label
[ 0.1701, 0.4728, 0.3571], => 1
[0.2768, 0.4665, 0.2567], => 1
[0.2286, 0.5702, 0.2012], => 1
**[0.2479, 0.5934, 0.1587], => 2**
**[0.2212, 0.5519, 0.2270], => 2**
[0.2169, 0.5404, 0.2428], => 1
[0.1706, 0.6370, 0.1924], => 1
[0.1836, 0.6960, 0.1203]] => 1
As seen above, the predicted label for the lines marked with ** is 2, but I do not get why; from the probabilities I thought it would be 1. Maybe it is me who does not understand. I also included the original logits, which I converted to probabilities. For the classification model I used the FlauBertForSequenceClassification class.
Logits :
[-0.67542565 0.34714806 0.06658715]
[-0.1786863 0.3430867 -0.25426903]
[-0.2919644 0.6223039 -0.41944826]
**[-0.25066078 0.62209827 -0.69668627]**
** [-0.5443676 0.37007216 -0.51845074]**
[-0.5634354 0.34945157 -0.45065987]
[-0.7058248 0.6116817 -0.58579236]
[-0.7987261 0.5336867 -1.2213029 ]
If you have any idea !!!
A snippet of the model Class
# extract the hidden representations from the encoder output
hidden_state = encoder_output[0] # (bs, seq_len, dim)
pooled_output = hidden_state[:, 0] # (bs, dim)
# apply dropout
pooled_output = self.dropout(pooled_output) # (bs, dim)
# feed into the classifier
logits = self.classifier(pooled_output) # (bs, dim)
proba = nn.functional.softmax(logits, dim=1)
#print(type(proba))
print(proba)
#outputs = (probabilities,) + encoder_output[1:] # logits
outputs = (logits,) + encoder_output[1:] # logits
if labels is not None:
#multiclassification
loss_fct = torch.nn.CrossEntropyLoss() #crossEntropyLoss
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
# aggregate outputs
outputs = (loss, ) + outputs
# print(outputs)
return outputs # (loss), logits, (hidden_states), (attentions) | From your probabilities, it looks like the predicted labels should be 1 as you expected, since it’s the highest probability.
How are you generating 2 as the predicted label? | 0 |
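To make the answer concrete, a minimal sketch (not from the thread) of going from logits to probabilities, predicted labels, and a confidence threshold; the threshold value is arbitrary and the logit rows are taken from the question above.
```
# Sketch only: two logit rows from the question; threshold is arbitrary.
import torch

logits = torch.tensor([[-0.6754, 0.3471, 0.0666],
                       [-0.2507, 0.6221, -0.6967]])
probs = torch.softmax(logits, dim=-1)
confidence, predicted = probs.max(dim=-1)  # predicted class = highest probability

threshold = 0.6
keep = confidence >= threshold  # only return predictions above the threshold
print(predicted, confidence, keep)
```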
huggingface | Beginners | Finetuning GPT2 with user defined loss | https://discuss.huggingface.co/t/finetuning-gpt2-with-user-defined-loss/163 | I have a dataset of scientific abstracts that I would like to use to finetune GPT2. However, I want to use a loss between the output of GPT2 and an N-grams model I have to adjust the weights. Is it possible to do this using huggingface transformers and if so, how? Thank you in advance!
EDIT:
Let me be a little more explicit. I would like to take the base gpt2 model and finetune it for text generation on my dataset of scientific abstracts. However, I would like to replace the loss function that the base gpt2 uses for my own that is based off an N-grams model I have. Ultimately, I would like for the finetuned model to generate scientific-sounding abstracts of a given length based off an initial sentence or two. | GPT2’s forward has a labels argument that you can use to automatically get the standard LM loss, but you don’t have to use this. You can take the model outputs and define any loss you’d like, whether using PyTorch or TF2. If you want to use Trainer, just define your own PT module that returns your custom loss as the first element from forward. See training and fine-tuning 134 and how to train a language model 80. | 0 |
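A minimal sketch (not from the thread) of the wrapper described in the reply: a module whose forward returns the custom loss first, which is what Trainer expects. custom_ngram_loss stands in for the poster's N-gram-based loss and is not a real library function.
```
# Sketch only: custom_ngram_loss is a hypothetical user-defined function.
import torch
from transformers import GPT2LMHeadModel

class GPT2WithCustomLoss(torch.nn.Module):
    def __init__(self, name="gpt2"):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained(name)

    def forward(self, input_ids, attention_mask=None, labels=None):
        outputs = self.gpt2(input_ids=input_ids, attention_mask=attention_mask)
        loss = custom_ngram_loss(outputs.logits, labels)  # hypothetical, user-defined
        return (loss, outputs.logits)  # Trainer reads the loss from the first element
```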
huggingface | Beginners | Python crashes without error message when I try to use this custom tokenizer | https://discuss.huggingface.co/t/python-crashes-without-error-message-when-i-try-to-use-this-custom-tokenizer/12443 | I’m hoping to retrain a GPT-2 model from scratch, where the sentences are protein chains, and the words are single-ASCII-character representation of amino acids, e.g. “A” for alanine and “B” for asparagine. There are no spaces or other separators between words.
Due to constraints in other parts of my code, I would strongly prefer to have single ASCII characters for my special tokens as well. I suspect this requirement is the root of my problem - Python hangs and then crashes without an error message when I try to use this minimal tokenizer. Maybe I used a forbidden character that’s not documented as a special token?
Minimal reproducible code:
import numpy as np
import torch
from tokenizers import Tokenizer
from tokenizers.models import Unigram
from tokenizers.pre_tokenizers import Whitespace
from transformers import PreTrainedTokenizerFast
tokenizer = Tokenizer(Unigram())
tokenizer.pre_tokenizer = Whitespace()
tokenizer.add_tokens(['I', 'L', 'V', 'F', 'M', 'C', 'A', 'G', 'P', 'T', 'S', 'Y', 'W', 'Q', 'N', 'H', 'E', 'D', 'K', 'R', 'J', 'U', 'O'])
tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer,
bos_token='>',
eos_token='=',
unk_token='X',
pad_token='_')
sequences = ['>RNLYYYGRPDYW=>FGGSENATNLFLLELLGAGE=',
'>RNLYYYGRPDYW=>TLPLSLPTSAQDSNFSVKTE=',
'>CTGGSSWYVPDYW=>PNT=']
tokenizer(sequences,
return_tensors="pt",
padding='longest')
# Python hangs and crashes here | It’s on me; the issue was solved with a single line of code:
tokenizer.add_special_tokens(['>', '=', 'X', '_']) | 1 |
huggingface | Beginners | KeyError: Field “..” does not exist in table schema | https://discuss.huggingface.co/t/keyerror-field-does-not-exist-in-table-schema/12367 | Hi everyone! I’m trying to run the run_ner.py 1 script to perform a NER task on a custom dataset. The dataset was originally composed by 3 tsv files that I converted in csv files in order to run that script. Unfortunately, I got this error:
Traceback (most recent call last):
File “C:\Users\User\Desktop\NLP\run_ner.py”, line 578, in
main()
File “C:\Users\User\Desktop\NLP\run_ner.py”, line 262, in main
raw_datasets = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
File “C:\Users\User\Desktop\NLP\myenv\lib\site-packages\datasets\load.py”, line 1664, in load_dataset
builder_instance.download_and_prepare(
File “C:\Users\User\Desktop\NLP\myenv\lib\site-packages\datasets\builder.py”, line 593, in download_and_prepare
self._download_and_prepare(
File “C:\Users\User\Desktop\NLP\myenv\lib\site-packages\datasets\builder.py”, line 681, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File “C:\Users\User\Desktop\NLP\myenv\lib\site-packages\datasets\builder.py”, line 1136, in _prepare_split
writer.write_table(table)
File “C:\Users\User\Desktop\NLP\myenv\lib\site-packages\datasets\arrow_writer.py”, line 454, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File “C:\Users\User\Desktop\NLP\myenv\lib\site-packages\datasets\arrow_writer.py”, line 454, in
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File “pyarrow\table.pxi”, line 1339, in pyarrow.lib.Table.getitem
File “pyarrow\table.pxi”, line 1900, in pyarrow.lib.Table.column
File “pyarrow\table.pxi”, line 1875, in pyarrow.lib.Table._ensure_integer_index
KeyError: ‘Field “Il” does not exist in table schema’
The head of the train csv is like:
I think the Field “Il” to which it refers in the KeyError is the first row of the train_labeled.csv
The command that I'm running is:
python run_ner.py --model_name_or_path Musixmatch/umberto-commoncrawl-cased-v1 --tokenizer_name Musixmatch/umberto-commoncrawl-cased-v1 --train_file train_labeled.csv --validation_file devel_labeled.csv --test_file test_labeled.csv --output_dire umberto-ner --do_train --do_eval --do_predict
Can someone help me with this issue? Thanks! | I solved the problem: the csv files generated with pandas needed some post-processing in Excel, because the words and labels had to be in two separate columns.
They were like (both values in a single column):
col A: word,label
They have to be (two separate columns):
col A: word    col B: label | 1
huggingface | Beginners | Getting an error when loading up model | https://discuss.huggingface.co/t/getting-an-error-when-loading-up-model/12453 | (screenshot of the error not reproduced here)
I had used the push_to_hub API as normal when updating it, but all of a sudden I am getting this error. Any help? | Please post your code so that we can see how to proceed; just an image is not enough. | 0
huggingface | Beginners | How to get probabilities per label in finetuning classification task? | https://discuss.huggingface.co/t/how-to-get-probabilities-per-label-in-finetuning-classification-task/12301 | Hello, I followed the Hugging Face web site to fine-tune FlauBERT for a classification task. What I would like to know is how to get probabilities for the classification, something like [0.75, 0.85, 0.25] because I have 3 classes. So far, when printing the results I get the output below, but it seems to correspond to the logits and not the probabilities? Furthermore, they contain negative numbers; I thought probabilities were positive numbers in [0, 1].
PredictionOutput(predictions=array([[ 0.53947556, 0.42591393, -0.8021714 ],
[ 1.6963196 , -3.3902004 , 1.8755357 ],
[ 1.9264233 , -0.35482746, -2.339029 ],
...,
[ 2.8833866 , -1.1608589 , -1.2109699 ],
[ 1.1803235 , -1.4036949 , 0.48559391],
[ 1.9253297 , -1.0417538 , -1.2987505 ]], dtype=float32), label_ids=array([0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 2, 2, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0,
0, 0, 0, 0, 1, 0, 2, 0, 0, 2, 0, 0, 1, 0, 1, 2, 2, 2, 1, 2, 0, 0,
0, 2, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 2, 1, 1, 0, 0, 0, 0, 1, 0, 1,
1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 2, 0, 2, 1, 2, 0, 1,
0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 2, 1, 0, 0, 0, 0, 1, 0, 0, 1,
1, 0, 2, 0, 0, 0, 0, 0, 1, 2, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0,
0, 1, 2, 1, 1, 2, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1,
1, 1, 0, 2, 0, 1, 1, 1, 1, 0, 0, 0, 2, 2, 0, 0, 1, 1, 2, 1, 1, 0,
0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 2, 0,
2, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 2, 0, 0, 1, 0, 0, 2, 0,
2, 2, 0, 0, 2, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1,
0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1,
0, 0, 0, 1, 1, 0, 1, 2, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1,
0, 0, 1, 1, 0, 0, 0, 1, 2, 0, 0, 2, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0,
0, 0, 0, 0, 2, 2, 1, 1, 2, 0, 2, 1, 1, 1, 0, 2, 0, 0, 0, 2, 2, 0,
1, 1, 1, 1, 1, 0, 0, 1, 2, 0, 0, 0, 1, 0, 1, 1, 2, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 2, 2, 2, 0, 1, 2, 0, 1, 0, 1, 1, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
0, 2, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1,
0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 2, 1, 1, 0,
1, 0, 0, 1, 0, 1, 2, 2, 0, 1, 1, 0, 2, 1, 0, 0, 0, 1, 1, 1, 1, 1,
1, 1, 1, 1, 2, 0, 0, 2, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 2, 0, 0, 0,
1, 0, 2, 2, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1,
0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 2, 0, 0,
0, 0, 1, 2, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 2, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 2, 0, 1, 0, 1, 0, 0, 2, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1,
1, 2, 0, 0, 2, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0,
1, 0, 1, 0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 1, 0, 1, 2, 0, 0, 0, 0,
0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 0, 0, 1, 0, 2, 0, 0,
1, 2, 0, 1, 0, 0, 1, 1, 1, 0, 1, 2, 1, 0, 0, 0, 0, 1, 1, 0, 0, 2,
0, 1, 0, 1, 2, 0, 0, 1, 0, 0]), metrics={'test_loss': 1.164217233657837, 'test_accuracy': 0.565028901734104, 'test_f1_mi': 0.565028901734104, 'test_f1_ma': 0.42953547487160565, 'test_runtime': 1.4322, 'test_samples_per_second': 483.16, 'test_steps_per_second': 7.68})
```
The code for getting these results is adapted from the notebook for fine-tuning on a classification task:
```
PRE_TRAINED_MODEL_NAME = '/gpfswork/rech/kpf/umg16uw/expe_5/model/sm'
class FlauBertForSequenceClassification(FlaubertModel):
"""
FlauBert Model for Classification Tasks.
"""
def __init__(self, config, num_labels, freeze_encoder=False):
"""
@param FlauBert: a FlauBertModel object
@param classifier: a torch.nn.Module classifier
@param freeze_encoder (bool): Set `False` to fine-tune the FlauBERT model
"""
# instantiate the parent class FlaubertModel
super().__init__(config)
# Specify hidden size of FB hidden size of our classifier, and number of labels
# instantiate num. of classes
self.num_labels = num_labels
# instantiate and load a pretrained FlaubertModel
self.encoder = FlaubertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
# freeze the encoder parameters if required (Q1)
if freeze_encoder:
for param in self.encoder.parameters():
param.requires_grad = False
# the classifier: a feed-forward layer attached to the encoder's head
self.classifier = torch.nn.Sequential(
torch.nn.Linear(in_features=config.emb_dim, out_features=512),
torch.nn.Tanh(), # or nn.ReLU()
torch.nn.Dropout(p=0.1),
torch.nn.Linear(in_features=512, out_features=self.num_labels, bias=True),
)
# instantiate a dropout function for the classifier's input
self.dropout = torch.nn.Dropout(p=0.1)
def forward(
self,
input_ids=None,
attention_mask=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
):
# encode a batch of sequences
encoder_output = self.encoder(
input_ids=input_ids,
attention_mask=attention_mask,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
)
# extract the hidden representations from the encoder output
hidden_state = encoder_output[0] # (bs, seq_len, dim)
pooled_output = hidden_state[:, 0] # (bs, dim)
# apply dropout
pooled_output = self.dropout(pooled_output) # (bs, dim)
# feed into the classifier
logits = self.classifier(pooled_output) # (bs, dim)
outputs = (logits,) + encoder_output[1:]
if labels is not None:
#multiclassification
loss_fct = torch.nn.CrossEntropyLoss() #crossEntropyLoss
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), logits, (hidden_states), (attentions)
model = FlauBertForSequenceClassification(
config=model.config, num_labels=3, freeze_encoder = False
)
training_args = TrainingArguments(
output_dir='/gpfswork/rech/kpf/umg16uw/results_hf/sm',
logging_dir='/gpfswork/rech/kpf/umg16uw/logs/sm',
do_train=True,
do_eval=True,
evaluation_strategy="steps",
logging_first_step=True,
logging_steps=10,
num_train_epochs=3.0,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
learning_rate=2e-5,
weight_decay=0.01
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset = process_and_tokenize_file(X_train, y_train),
eval_dataset = process_and_tokenize_file(X_val, y_val),
compute_metrics=compute_metrics
)
# Train pre-trained model
# Start training loop
print("Start training...\n")
train_results = trainer.train()
val_results = trainer.evaluate()
for root, subdirs, files in os.walk(test_dir):
#print(root,"...")
#print(files,"...")
for f in files:
path_file = os.path.join(root, f)
input, input_label = input_file(path_file)
test_dataset = process_and_tokenize_file(input, input_label)
test_results = trainer.predict(test_dataset)
print(test_results) # give the results above
``` | Hi,
What models in the Transformers library output are called logits (they are called predictions in your case), these are the unnormalized scores for each class, for every example in a batch. You can turn them into probabilities by applying a softmax operation on the last dimension, like so:
import tensorflow as tf
probabilities = tf.math.softmax(predictions, axis=-1)
print(probabilities) | 0 |
huggingface | Beginners | Character-level tokenizer | https://discuss.huggingface.co/t/character-level-tokenizer/12450 | Hi,
I would like to use a character-level tokenizer to implement a use-case similar to minGPT play_char that could be used in HuggingFace hub.
My question is: is there an existing HF char-level tokenizer that can be used together with a HF autoregressive model (a.k.a. GPT-like model)?
Thanks! | Hi,
We do have character-level tokenizers in the library, but those are not for decoder-only models.
Current character-based tokenizers include:
CANINE 11 (encoder-only)
ByT5 8 (encoder-decoder) | 0 |
huggingface | Beginners | Modify generation params for a model in the Model hub | https://discuss.huggingface.co/t/modify-generation-params-for-a-model-in-the-model-hub/12402 | Hello, I would like to increase the max_length param on my model nouamanetazi/cover-letter-t5-base · Hugging Face 1 but I can’t seem to find how to do it in docs. Should I edit the config.json file? Can anyone provide an example please. | Hey there! You can change the inference parameters as documented here 1.
In short, you can do something like this in the metadata of the model card (README.md file)
inference:
parameters:
temperature: 0.7 | 1 |
huggingface | Beginners | How to reset a layer? | https://discuss.huggingface.co/t/how-to-reset-a-layer/4065 | Hi, I am looking for a solution to reset a layer in the pre-trained model. For example, like in a BART model, if I am going to reset the last layer of the decoder, how should I implement it?
I notice we have the _init_weights(), which should be helpful. So I am wondering if the code should be like:
# load the pre-trained model
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")
# reset a specific layer
model._init_weights(model.get_decoder().layers[n:n+1])
But I don’t think I make it correct because the fine-tuning result doesn’t change. Any ideas on this implementation? Thank you! | I am also stuck on this. Looking at the code of _init_weights, it looks like it expects individual modules like nn.Linear.
This would require looping over all the modules of your model that you would like to re-initialize and passing them to _init_weights. But this might not translate to a new model, as their layer structure could be different. Is there not a way to just re-initialize a whole layer? Or all modules under some component (e.g. BertLayer)? | 0 |
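A minimal sketch (not from the thread) of the loop described in the reply above: iterate over one decoder layer's sub-modules and pass each to the model's own _init_weights. Note this relies on the private _init_weights method mentioned in the thread, which only acts on module types it knows about.
```
# Sketch only: re-initialize the last decoder layer of BART via _init_weights.
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")
layer = model.get_decoder().layers[-1]

for module in layer.modules():
    model._init_weights(module)  # no-op for module types it does not recognize
```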
huggingface | Beginners | Fine tuning Wav2vec for wolof | https://discuss.huggingface.co/t/fine-tuning-wav2vec-for-wolof/12279 | I'm fine-tuning a wav2vec model; the training is running, but I don't see any log output. | Can you share a screenshot? | 0
huggingface | Beginners | ‘Type Error: list object cannot be interpreted as integer’ while evaluating a summarization model (seq2seq,BART) | https://discuss.huggingface.co/t/type-error-list-object-cannot-be-interpreted-as-integer-while-evaluating-a-summarization-model-seq2seq-bart/11590 | Hello all, I have been using this code:-
colab.research.google.com
Google Colaboratory 2
to learn training a summarization model. However, since I needed an extractive model, I replaced ‘sshleifer/distilbart-xsum-12-3’ with “facebook/bart-large-cnn” for both
AutoModelForSeq2SeqLM.from_pretrained & AutoTokenizer.from_pretrained
I am able to train the model and get two different summaries (one before the model is trained and one after the model is trained). But the summaries are abstractive so I changed one option in the training_args (predict_with_generate) to FALSE.
training_args = Seq2SeqTrainingArguments(
output_dir="results",
num_train_epochs=1, # demo
do_train=True,
do_eval=True,
per_device_train_batch_size=4, # demo
per_device_eval_batch_size=4,
# learning_rate=3e-05,
warmup_steps=500,
weight_decay=0.1,
label_smoothing_factor=0.1,
predict_with_generate=False,  # the option I changed to False
logging_dir="logs",
logging_steps=50,
save_total_limit=3,
)
However, after doing this, I get an error while running trainer.evaluate():
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
TypeError: 'list' object cannot be interpreted as an integer
And if I comment out the option, the code runs, albeit without the metrics (ROUGE etc.), and I am able to get extractive summaries.
Can anyone help in clearing this error so that I can run extractive summaries and get the metrics as well?
Thanks! | Hey @gildesh I’m not sure why you say BART will provide extractive summaries - my understanding is that it is an encoder-decoder Transformer, so the decoder will generate summaries if trained to do so.
In any case, the reason why you get an error with predict_with_generate=False is because the Trainer won’t call the model’s generate() method in that case (it just computes the loss / logits, which is why you don’t see the metrics).
So if you want to compute things like ROUGE during training, you’ll need to generate the summaries with predict_with_generate=True
PS the notebook you shared looks more complicated than it needs to be. I recommend using the official summarization example as a foundation (which will certainly work with BART) | 0 |
huggingface | Beginners | Convert transformer to SavedModel | https://discuss.huggingface.co/t/convert-transformer-to-savedmodel/353 | Hi! I found out that this is common unresolved problem.
So, I need to convert Transformers' DistilBERT to TensorFlow's SavedModel format. I've converted it, but I can't run inference on it.
Conversion code
import tensorflow as tf
from transformers import TFAutoModel, AutoTokenizer
dir = "distilbert_savedmodel"
model = TFAutoModel.from_pretrained('distilbert-base-uncased')
model.save(dir)
Inference code
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
encoded = tokenizer.encode('Hello, world!', add_special_tokens=True, return_tensors="tf")
model = tf.keras.models.load_model(dir)
model(encoded)
Error
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
Positional arguments (1 total):
* Tensor("inputs:0", shape=(1, 6), dtype=int32)
Keyword arguments: {'training': False}
Expected these arguments to match one of the following 4 option(s):
Option 1:
Positional arguments (1 total):
* {'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}
Keyword arguments: {'training': False}
Option 2:
Positional arguments (1 total):
* {'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}
Keyword arguments: {'training': True}
Option 3:
Positional arguments (1 total):
* {'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='inputs/input_ids')}
Keyword arguments: {'training': True}
Option 4:
Positional arguments (1 total):
* {'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='inputs/input_ids')}
Keyword arguments: {'training': False}
Related issues
huggingface/transformers#4004 51
huggingface/transformers#2135 29
huggingface/transformers#2021 31
Please, help me! | In pytorch, you could save the model with something like
torch.save(model.state_dict(), '/content/drive/My Drive/ftmodelname')
Then you could create a model using the pre-trained weights
tuned_model = BertForSequenceClassification.from_pretrained('bert-base-uncased',
num_labels=NCLASSES,
output_attentions=True)
and then overwrite its weights from the saved state_dict with
tuned_model.load_state_dict(torch.load('/content/drive/My Drive/ftmodelname',
map_location=torch.device("cpu")),
strict=False)
I expect you could do a similar save of the model state_dict using Tensorflow. | 0 |
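A minimal sketch (not from the thread) of the TensorFlow analogue suggested in the last sentence above: save and restore the weights instead of exporting a full SavedModel.
```
# Sketch only: weight-level save/restore with the Keras API.
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("distilbert-base-uncased")
model.save_weights("distilbert_weights.h5")

restored = TFAutoModel.from_pretrained("distilbert-base-uncased")
restored.load_weights("distilbert_weights.h5")
```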
huggingface | Beginners | TrOCR repeated generation | https://discuss.huggingface.co/t/trocr-repeated-generation/12361 | @nielsr I am using microsoft/trocr-large-printed
there is a slight issue, the model generates repeated predictions on my dataset.
In the comparison image (not reproduced here), the left is the ground truth and the right is the model prediction.
After generating the correct text, it does not stop and keeps repeating the same output.
Do you know what might be the issue? Am I missing any param in generate function
My decoding code looks like this
for batch in tqdm(test_dataloader):
# predict using generate
pixel_values = batch["pixel_values"].to(device)
outputs = model.generate(pixel_values, output_scores=True, return_dict_in_generate=True, max_length=22)
# decode
pred_str = processor.batch_decode(outputs.sequences, skip_special_tokens=True)
thanks once again | Hi,
After investigation, it turns out the generate() method currently does not take into account config.decoder.eos_token_id, only config.eos_token_id.
You can fix it by setting model.config.eos_token_id = 2.
We will fix this soon. | 1 |
huggingface | Beginners | Longformer for text summarization | https://discuss.huggingface.co/t/longformer-for-text-summarization/478 | Hello! Does anyone know how to summarize long documents/news articles using the Longformer library? I am aware that using T5, the token limit is 512.
I would really appreciate any help in this area! Thank you | Hi, it's possible to use Longformer for summarization. The way it's done now is to take a BART model and replace its self-attention with Longformer's sliding-window attention so that it can handle longer sequences. Check these two issues, first 75, second 49, and this branch 60 of the Longformer repo | 0
huggingface | Beginners | Tensorflow training failes with “Method `strategy` requires TF error” | https://discuss.huggingface.co/t/tensorflow-training-failes-with-method-strategy-requires-tf-error/12330 | I am doing the tensorflow example from here:
https://huggingface.co/transformers/custom_datasets.html
I get the error
“Method strategy requires TF”.
after some digging I find the issue is in
https://github.com/huggingface/transformers/blob/69e16abf98c94b8a6d2cf7d60ca36f13e4fbee58/src/transformers/file_utils.py#L82 2
Where
importlib_metadata.version(pkg)
is failing for all tensorflow packages.
However if I run importlib.util.find_spec(“tensorflow”) I get the output
ModuleSpec(name='tensorflow', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f45fc672240>, origin='/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow/__init__.py', submodule_search_locations=['/home/ec2-user/anaconda3/envs/tensorflow2_p36/lib/python3.6/site-packages/tensorflow'])
So there must be some issue with how importlib_metadata is trying to look up the tensorflow package vs what importlib.util.find_spec is returning. How can I get around this? | Adding the full code to reproduce. This is run on a SageMaker jupyter instance using the tensorflow_python36 kernel.
!pip install "sagemaker>=2.48.0" "transformers==4.6.1" "datasets[s3]==1.6.2" --upgrade
!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
from pathlib import Path
def read_imdb_split(split_dir):
split_dir = Path(split_dir)
texts = []
labels = []
for label_dir in ["pos", "neg"]:
for text_file in (split_dir/label_dir).iterdir():
texts.append(text_file.read_text())
labels.append(0 if label_dir == "neg" else 1)
return texts, labels
train_texts, train_labels = read_imdb_split('aclImdb/train')
test_texts, test_labels = read_imdb_split('aclImdb/test')
from sklearn.model_selection import train_test_split
train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
len(train_texts)
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
import tensorflow as tf
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(train_encodings),
train_labels
))
val_dataset = tf.data.Dataset.from_tensor_slices((
dict(val_encodings),
val_labels
))
test_dataset = tf.data.Dataset.from_tensor_slices((
dict(test_encodings),
test_labels
))
from transformers import TFDistilBertForSequenceClassification, TFTrainer, TFTrainingArguments
training_args = TFTrainingArguments(
output_dir='./results', # output directory
num_train_epochs=3, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=64, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
with training_args.strategy.scope():
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
trainer = TFTrainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset # evaluation dataset
)
trainer.train() | 0 |
huggingface | Beginners | BART Paraphrasing | https://discuss.huggingface.co/t/bart-paraphrasing/312 | I’ve been using BART to summarize, and I have noticed some of the outputs resembling paraphrases.
Is there a way for me to build on this, and use the model for paraphrasing primarily?
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
import torch
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
device = torch.device('cpu')
text = "At the core of the United States' mismanagement of the Coronavirus lies its distrust of science"
preprocess_text = text.strip().replace("\n","")
t5_prepared_Text = "summarize: "+preprocess_text
print ("original text preprocessed: \n", preprocess_text)
tokenized_text = tokenizer.encode(t5_prepared_Text, return_tensors="pt").to(device)
summary_ids = model.generate(tokenized_text,
num_beams=10,
no_repeat_ngram_size=1,
min_length=10,
num_return_sequences = 2,
max_length=20,
top_k = 100,
early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
output1 = tokenizer.decode(summary_ids[1], skip_special_tokens=True)
Summarized Text: The United States' mismanagement of the Coronavirus is rooted in its distrust of science.
I’d like to note that when I do “num_return_sequences” the answers are the same. That makes sense, but is there a way for me to get separate answers? I don’t believe seed to be built-in with BART. | hi @zanderbush, sure BART should also work for paraphrasing. Just fine-tune it on a paraphrasing dataset.
There's a small mistake in the way you are using .generate. If you want to do sampling you'll need to set num_beams to 0 and do_sample to True, and set do_sample to False and num_beams > 1 for beam search. This post 57 explains how to use generate | 0
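A minimal sketch (not from the thread) of sampling-based generation to get different candidates from num_return_sequences; the sampling settings are illustrative and the input sentence is the one from the question.
```
# Sketch only: sampling settings are illustrative.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

text = "At the core of the United States' mismanagement of the Coronavirus lies its distrust of science"
tokenized_text = tokenizer.encode(text, return_tensors="pt")

outputs = model.generate(
    tokenized_text,
    do_sample=True,      # sampling gives different outputs per sequence
    top_k=100,
    top_p=0.95,
    max_length=20,
    num_return_sequences=2,
)
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```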
huggingface | Beginners | Cost to fine tune large transformer models on the cloud? | https://discuss.huggingface.co/t/cost-to-fine-tune-large-transformer-models-on-the-cloud/12355 | hi folks
curious if anyone has experience fine tuning RoBERTa for purposes of text classification for sentiment analysis on a dataset of ~1000 sentences on a model like RoBERTa or BERT large?
similarly, any idea how much it would cost to further pretrain the language model first on 1GB of uncompressed text?
thank you,
mick | Didn’t use RoBERTa, did use BERT. Finetuning BERT can be done with google colab in decent time, i.e. is sort of free.
Pretraining I cannot say in advance. 1 GB of text data is a lot. Try 10MB for a few epochs first to make a rough estimation. Results are also not guaranteed to improve | 0 |
huggingface | Beginners | Fine-tuning T5 on Tensorflow | https://discuss.huggingface.co/t/fine-tuning-t5-on-tensorflow/12253 | Hi NLP Gurus,
I recently went through the brand new Hugging Face course and decided to pick a project from the project list: Personal Writing Assistant. In this project, Lewis proposes to use T5 and the JFLEG dataset. I struggled a lot to get something close to working, but I'm blocked at the training stage. Important point: I'm working on an M1 Mac, so I must use TensorFlow.
First issue: to_tf_dataset coupled with DataCollatorForSeq2Seq has a strange behaviour. DataCollatorForSeq2Seq should use the T5 model to create decoder_input_ids by calling prepare_decoder_input_ids_from_labels on the labels. But because the column doesn't exist at first, to_tf_dataset drops it. If I add it to the columns param of to_tf_dataset, an error is raised because the column doesn't exist yet. I finally ended up creating a dummy column filled with zeros to make it work. I think we can improve the developer experience here. Note that the course example 1 has the same issue on Google Colab: batch["decoder_input_ids"] shows the tensor, but it doesn't appear in the tf_train_dataset.
Second blocking issue: when I call the fit method on the Keras model, an error is raised:
Invalid argument: logits and labels must have the same first dimension, got logits shape [3840,64] and labels shape [480]
[[node sparse_categorical_crossentropy_3/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at opt/homebrew/Caskroom/miniforge/base/envs/tensorflow/lib/python3.9/site-packages/transformers/modeling_tf_utils.py:797) ]]
[[tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/transpose_1/_514]]
I personally can't figure out this kind of error, so if someone can help, I would appreciate it!
This is my notebook:
import tensorflow as tf
import numpy as np
from datasets import load_dataset, concatenate_datasets, Dataset
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, create_optimizer
dataset = load_dataset('jfleg')
dataset = concatenate_datasets([dataset['validation'], dataset['test']])
dataset = dataset.filter(lambda x: len(x['sentence']) > 16)
pd_dataset = dataset.to_pandas()
pd_dataset = pd_dataset.explode('corrections', ignore_index=True)
dataset = Dataset.from_pandas(pd_dataset)
dataset = dataset.map(lambda x: {'correction': x['corrections'], 'sentence': 'grammar:' + x['sentence']})
dataset = dataset.remove_columns(['corrections'])
tokenizer = AutoTokenizer.from_pretrained('t5-small')  # missing from the original snippet; assumed to match the model checkpoint
def preprocess(examples):
model_inputs = tokenizer(examples['sentence'], max_length=128, truncation=True)
with tokenizer.as_target_tokenizer():
labels = tokenizer(examples['correction'], max_length=128, truncation=True)
model_inputs['labels'] = labels['input_ids']
model_inputs['decoder_input_ids'] = np.zeros((len(labels['input_ids']), 0))
return model_inputs
inputs = dataset.map(preprocess, batched=True)
inputs = inputs.remove_columns(['sentence', 'correction'])
model = TFAutoModelForSeq2SeqLM.from_pretrained('t5-small')
batch_size = 8
num_epochs = 3
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, return_tensors="tf")
tf_train = inputs.to_tf_dataset(
columns=["attention_mask", "input_ids", 'decoder_input_ids'],
label_cols=["labels"],
shuffle=True,
collate_fn=data_collator,
batch_size=batch_size,
)
num_train_steps = len(tf_train) * num_epochs
optimizer, schedule = create_optimizer(
init_lr=5e-5,
num_warmup_steps=0,
num_train_steps=num_train_steps,
weight_decay_rate=0.01,
)
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=tf.metrics.SparseCategoricalAccuracy(),
)
model.fit(
tf_train,
epochs=num_epochs,
batch_size=batch_size
) | I already did a lot of research and found this:
this issue 2 but unfortunately without real answer
And this one 1 same | 0 |
huggingface | Beginners | Chatbot with a knowledge base & mining a knowledge base automatically | https://discuss.huggingface.co/t/chatbot-with-a-knowledge-base-mining-a-knowledge-base-automatically/12048 | Hello everybody
I’m currently developing a chatbot using transformer models (e.g., GPT 2 or BlenderBot). I would like to incorporate a Knowledge Base to give the chatbot a persona. That means a few sentences describing who it is. For example, the knowledge base could contain the sentences “I am an artist”, “I have two children”, “I recently got a cat”, “I love physics”.
Is it possible to build a chatbot leveraging a knowledge base with Hugging Face? And if so, how? I did not find any tutorials or examples about it.
Second, I would like to automatically build such a knowledge base about a (famous) person from information on the internet (e.g. Wikipedia). Is there a possibility to extract such a knowledge base automatically (e.g. about Brad Pitt)? | You should look into prompt engineering, from experience it might be a bit difficult to get GPT2 to catch your prompt correctly, so if you are able I would go with a bigger model.
(Any article about prompt engineering will tell you this but, make sure you make the prompt read as something you would see in a book)
As for generating that prompt, (and this is only a suggestion) you can use a transformer to summarize the wikipedia article, and use that as the prompt. I believe HF transformers has a pipeline for that.
Cheers | 0 |
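As a minimal sketch (not from the thread) of the summarization idea mentioned above: build a short persona description by summarizing a Wikipedia-style paragraph with the summarization pipeline. The biography text is a placeholder.
```
# Sketch only: the biography text is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization")
bio = "Brad Pitt is an American actor and film producer. ..."  # placeholder paragraph
persona = summarizer(bio, max_length=60)[0]["summary_text"]
print(persona)
```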
huggingface | Beginners | TypeError: ‘>’ not supported between instances of ‘NoneType’ and ‘int’ - Error while training distill bert | https://discuss.huggingface.co/t/typeerror-not-supported-between-instances-of-nonetype-and-int-error-while-training-distill-bert/10137 | Hi,
I had an error while fine-tuning a DistilBERT model; a screenshot of the error was attached (not reproduced here).
The code (except data preprocessing) is:
import pandas as pd
import numpy as np
import seaborn as sns
import transformers
from transformers import AutoTokenizer,TFBertModel,TFDistilBertModel, DistilBertConfig
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
d_bert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
# In[72]:
bert = TFBertModel.from_pretrained('bert-base-uncased')
# In[73]:
df_train= X_train.replace("[^0-9a-zA-Z]", " ", regex = True)
df_test = X_test.replace("[^0-9a-zA-Z]", " ", regex = True)
X_train_list = list(df_train['Message'])
X_test_list = list(df_test['Message'])
Y_train_list= list(Y_train)
Y_test_list= list(Y_test)
# In[91]:
# print(X_test_list)
# In[75]:
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
# In[76]:
train_encodings = tokenizer(X_train_list, truncation= True, padding = True)
test_encodings = tokenizer(X_test_list, truncation= True, padding = True)
# In[92]:
# train_encodings
# In[78]:
import tensorflow as tf
train_dataset_sl = tf.data.Dataset.from_tensor_slices((dict(train_encodings), Y_train_list))
test_dataset_sl = tf.data.Dataset.from_tensor_slices((dict(test_encodings), Y_test_list))
# In[79]:
print(train_dataset_sl)
# In[86]:
from transformers import TFDistilBertForSequenceClassification, TFTrainer, TFTrainingArguments
training_args = TFTrainingArguments(
output_dir= './results',
num_train_epochs=2,
per_device_train_batch_size=8,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
logging_steps= 10)
# In[87]:
with training_args.strategy.scope():
model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=6)
trainer= TFTrainer(
model = model,
args= training_args,
train_dataset=train_dataset_sl,
eval_dataset= test_dataset_sl)
trainer.train()
Number of labels in dataset= 6 (0 to 5)
Can anybody help me out to resolve the issue.
Thanks in advance. | I came across the same problem, which seems to be a known issue according to a Stack Overflow answer: deep learning - HUGGINGFACE TypeError: '>' not supported between instances of 'NoneType' and 'int' - Stack Overflow 22 | 0
huggingface | Beginners | How can I pad the vocab to a set multiple? | https://discuss.huggingface.co/t/how-can-i-pad-the-vocab-to-a-set-multiple/12290 | Probably an easy one, but not having any luck in finding the solution, so thought I’d make a post.
To use tensor cores effectively with mixed precision training a NVIDIA guide recommends to “pad vocabulary to be a multiple of 8”.
I’ve searched the tokenizers documentation for answers but haven’t found much luck. The closest I could find is the pp_tokenizer.vocab_size method that returns the current vocab size, but I can’t assign it a new value.
Any idea how I can do this? | You can resize the embedding matrix of a Transformer model using the resize_token_embeddings method (see docs 1). | 1 |
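A minimal sketch (not from the thread) combining the answer with the rounding the question asks about: compute the next multiple of 8 and resize the embeddings to that size. The checkpoint is illustrative.
```
# Sketch only: checkpoint is illustrative.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

vocab_size = len(tokenizer)
padded_size = ((vocab_size + 7) // 8) * 8  # round up to a multiple of 8
model.resize_token_embeddings(padded_size)
print(vocab_size, padded_size)
```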
huggingface | Beginners | Calculation cross entropy for batch of two tensors | https://discuss.huggingface.co/t/calculation-cross-entropy-for-batch-of-two-tensors/12338 | I’d like to calculate cross entropy for batch of two tensors:
x = torch.tensor([[[ 2.1137, -1.3133, 0.7930, 0.3330, 0.9407],
[-0.8380, -2.0299, -1.1218, 0.3150, 0.4797],
[-0.7439, 0.0753, -0.1121, 0.0096, -1.2621]]])
y = torch.tensor([[1,2,3]])
loss = nn.CrossEntropyLoss()(x, y)
but receive exception:
RuntimeError: Expected target size [1, 5], got [1, 3]
Please explain what is wrong… | Try
x = torch.tensor([[ 2.1137, -1.3133, 0.7930, 0.3330, 0.9407],[-0.8380, -2.0299, -1.1218, 0.3150, 0.4797],[-0.7439,0.0753,-0.1121,0.0096,-1.2621]])
y = torch.tensor([1,2,3])
loss = torch.nn.CrossEntropyLoss()(x, y) | 0 |
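A minimal sketch (not from the thread) generalizing the reply: the same flattening can be written with view so it works for any batch size, giving logits of shape (N, C) and targets of shape (N).
```
# Sketch only: flatten batch and sequence dimensions before the loss.
import torch

x = torch.tensor([[[ 2.1137, -1.3133,  0.7930,  0.3330,  0.9407],
                   [-0.8380, -2.0299, -1.1218,  0.3150,  0.4797],
                   [-0.7439,  0.0753, -0.1121,  0.0096, -1.2621]]])
y = torch.tensor([[1, 2, 3]])

loss = torch.nn.CrossEntropyLoss()(x.view(-1, x.size(-1)), y.view(-1))
print(loss)
```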
huggingface | Beginners | Concurrent inference on a single GPU | https://discuss.huggingface.co/t/concurrent-inference-on-a-single-gpu/12046 | Hello
I’m building a chatbot using a transformer model (e.g., GPT 2 or BlenderBot) and I would like to let it run on a server (Windows or Linux). The server has one 11GB GPU. If there is only one inference of the chatbot model at the same time there is no problem. But if there are several concurrent calls, the calls need to be executed in sequential order which can increase the inference time. For example, when the inference takes 3 seconds and we have 10 concurrent calls, then it takes 33 seconds until the last call is processed. Theoretically, the concurrent calls could be batched for inference but usually, calls do not arise at the exactly same time.
Is there a solution to this problem for concurrent inference on a single GPU? | Does somebody have any suggestions? I'd be happy about any input. | 0
huggingface | Beginners | Generate sentences from keywords only | https://discuss.huggingface.co/t/generate-sentences-from-keywords-only/12315 | Hi everyone, I am trying to generate sentences from a few keywords only. May I know which model and function I shall use? Thank you and I am a very beginner.
E.g.,
Input: “dinner”, “delicious”, “wonderful”, “steak”
Output: “We had a wonderful dinner yesterday and the steak was super delicious.” | Seq2seq models like T5 and BART are well-suited for this. You can fine-tune them on (list of keywords, sentence) pairs in a supervised manner. | 0 |
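A minimal sketch (not from the thread) of how one such supervised training pair could be framed for T5; the "generate: " prefix, the checkpoint, and the target sentence are arbitrary choices.
```
# Sketch only: prefix, checkpoint and target sentence are illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

keywords = ["dinner", "delicious", "wonderful", "steak"]
inputs = tokenizer("generate: " + ", ".join(keywords), return_tensors="pt")
labels = tokenizer("We had a wonderful dinner and the steak was delicious.",
                   return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # one supervised training example
print(loss)
```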
huggingface | Beginners | How to turn WanDB off in trainer? | https://discuss.huggingface.co/t/how-to-turn-wandb-off-in-trainer/6237 | I am trying to use the trainer to fine tune a bert model but it keeps trying to connect to wandb and I dont know what that is and just want it off. is there a config I am missing? | import os
os.environ[“WANDB_DISABLED”] = “true”
This works for me. | 0 |
huggingface | Beginners | glue_data/MNLI dataset | https://discuss.huggingface.co/t/glue-data-mnli-dataset/12265 | Hi,
I am trying to download the MNLI data set from hugging face, but I can’t. I can see the preview of the data but can’t download it. Does anybody have any idea?
Thanks. | Hi,
could you please open an issue 1 on GH where you copy and paste the error stack trace you are getting? | 0 |
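In the meantime, for reference, MNLI is available as a configuration of the GLUE dataset and can usually be loaded like this:
from datasets import load_dataset
mnli = load_dataset("glue", "mnli")
print(mnli)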
huggingface | Beginners | Why TrOCR processor has a feature extractor? | https://discuss.huggingface.co/t/why-trocr-processor-has-a-feature-extractor/11939 | When we are using an image transformer, why do we need a feature extractor (TrOCR processor is Feature Extractor + Roberta Tokenizer)?
And I looked at the output image given by the processor: it's the same as the original image, only the shape has changed, since it is resized to be smaller.
@nielsr, is the processor doing any type of image preprocessing?
I tried a few image preprocessing techniques like binarising the image, adding white space to the borders, and a bit of denoising, and they turned out to be of little to no help.
Can you please comment on that too? | Yes, feature extractors also have a from_pretrained method, to load the same configuration as that of a particular checkpoint on the Hub.
e.g. if you do ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224"), it will make sure the size attribute of the feature extractor is set to 224. You could of course also just initialize it as feature_extractor = ViTFeatureExtractor(), as in this case, the feature extractor’s size attribute will be 224 by default as seen in the docs 2.
AutoFeatureExtractor is a class that aims to make it easier for people not having to specify a model-specific feature extractor. The Auto API will load the appropriate feature extractor by just specifying a model name from the hub. It’s a feature extractor, not a model. It will take care of the preprocessing. | 1 |
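To make the split concrete, a tiny sketch of what the processor does on the image side (image is assumed to be a PIL image):
from transformers import TrOCRProcessor
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
pixel_values = processor(images=image, return_tensors="pt").pixel_values  # resize + normalize
print(pixel_values.shape)  # e.g. [1, 3, 384, 384]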
huggingface | Beginners | RoBERTa MLM fine-tuning | https://discuss.huggingface.co/t/roberta-mlm-fine-tuning/1330 | Hello,
I want to fine-tune RoBERTa for MLM on a dataset of about 200k texts. The texts are reviews from online forums ranging from basic conversations to technical descriptions with a very specific vocabulary.
I have two questions regarding data preparation:
Can I simply use RobertaTokenizer.from_pretrained("roberta-base") even if the vocabulary of my fine-tuning corpus might differ significantly from the pre-training corpus? Or is there a way to “adjust” the tokenizer to the new data?
Each review comes with the title of the thread it has been posted in. From earlier experiments I know that concatenating titles and texts (and adding a special separator token between them) improves model performance for classification. However, I am wondering how this should be handled during language model fine-tuning? Since some threads contain hundreds of reviews, it seems wasteful for the language model to predict on the same title over and over again. | Hello there,
I am currently trying to do the same : fine-tune Roberta on a very specific vocabulary of mine (let’s say : biology stuff).
About your first question, you should at least add some new words, specific to your vocabulary, to the tokenizer vocabulary. See this discussion: how can i finetune BertTokenizer? · Issue #2691 · huggingface/transformers · GitHub 1
Regarding the MLM training, what class did you use exactly? I am looking for more info online; I found this (NLP-with-Deep-Learning/fine_tuning_bert_with_MLM.ipynb at master · yash-007/NLP-with-Deep-Learning · GitHub 5) but wonder how it would work for RoBERTa.
Thanks | 0 |
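For the class question, a hedged sketch of MLM fine-tuning with the Trainer API (the checkpoint is illustrative, and tokenized_dataset is assumed to be a dataset whose texts are already tokenized into input_ids):
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-mlm-finetuned", num_train_epochs=1),
    data_collator=data_collator,
    train_dataset=tokenized_dataset,  # assumed: pre-tokenized training texts
)
trainer.train()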
huggingface | Beginners | TrOCR inference | https://discuss.huggingface.co/t/trocr-inference/12237 | @nielsr, I am using trocr-printed for inference with the code below and it works fine, except that
len(tuple_of_logits) is always 19, no matter what batch_size I use. Even when I override
model.decoder.config.max_length from 20 to 10, len(tuple_of_logits) is still 19.
Can you please help me figure out what I am missing here?
for batch in tqdm(test_dataloader):
# predict using generate
pixel_values = batch["pixel_values"].to(device)
outputs = model.generate(pixel_values, output_scores=True, return_dict_in_generate=True)
tuple_of_logits = outputs.scores
print(len(tuple_of_logits)) | Hi,
You can adjust the maximum number of tokens by specifying the max_length parameter of the generate method:
outputs = model.generate(pixel_values, output_scores=True, return_dict_in_generate=True, max_length=10)
Note that this is explained in the docs 2. | 1 |
huggingface | Beginners | Diff between trocr-printed and trocr-handwritten | https://discuss.huggingface.co/t/diff-between-trocr-printed-and-trocr-handwritten/12239 | What's the difference between trocr-printed and trocr-handwritten, other than the dataset they were trained on? I ran inference on an image with both models and found that the handwritten one gave the correct output but the printed one missed a character. | So the handwritten model's prediction being correct and the printed model's being incorrect would be random. | 1 |
huggingface | Beginners | Plot Loss Curve with Trainer() | https://discuss.huggingface.co/t/plot-loss-curve-with-trainer/9767 | Hey,
I am fine tuning a BERT model for a Multiclass Classification problem. While training my losses seem to look a bit “unhealthy” as my validation loss is always smaller (eval_steps=20) than my training loss. How can I plot a loss curve with a Trainer() model? | Scott from Weights & Biases here. Don’t want to be spammy so will delete this if it’s not helpful. You can plot losses to W&B by passing report_to to TrainingArguments.
from transformers import TrainingArguments, Trainer
args = TrainingArguments(... , report_to="wandb")
trainer = Trainer(... , args=args)
More info here: Logging & Experiment tracking with W&B 37 | 0 |
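If you'd rather plot locally, a small sketch that pulls the logged losses out of the trainer after training (this assumes trainer is your Trainer instance and that an evaluation strategy is set so eval_loss gets logged):
import matplotlib.pyplot as plt

history = trainer.state.log_history  # list of dicts logged during training
train_points = [(h["step"], h["loss"]) for h in history if "loss" in h]
eval_points = [(h["step"], h["eval_loss"]) for h in history if "eval_loss" in h]

plt.plot(*zip(*train_points), label="train loss")
plt.plot(*zip(*eval_points), label="eval loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()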
huggingface | Beginners | Annotate a NER dataset (for BERT) | https://discuss.huggingface.co/t/annotate-a-ner-dataset-for-bert/9687 | I am working on annotating a dataset for the purpose of named entity recognition.
In principle, I have seen that for multi-phrase (not single word) elements, annotations work like this (see this example below):
Romania ( B-CNT )
United States of America ( B-CNT C-CNT C-CNT C-CNT )
where B-CNT stands for “beginning-country” and C-CNT represents “continuing-country”.
The problem I face is that I have a case (not related to countries) where I need to annotate something like B-W GAP_WORD C-W C-W.
How should I proceed with the annotation in this case?
If I annotate following the scheme above, should I expect a BERT-like entity recognition system to learn and detect that a phrase can look like B-W GAP_WORD C-W C-W, or does the "C-W" (continuation word) need to come directly after the B-W (beginning word)?
Which of the following two solutions is correct:
B-W GAP_WORD C-W C-W
B-W GAP_WORD B-W C-W
And then, in case 2, find a way to make the connection between the B-Ws (which actually correspond to the same entity)? | Did you find a solution to this problem? I am working on this right now and want to label entities that are multi-word. So far I have just labelled them all as individual words, but it's a pretty bad way to do this. | 0 |
huggingface | Beginners | Tokenizer.batch_encode_plus uses all my RAM | https://discuss.huggingface.co/t/tokenizer-batch-encode-plus-uses-all-my-ram/4828 | I only have 25GB of RAM and every time I try to run the code below my Google Colab crashes. Any idea how to prevent this from happening? Would encoding batch-wise work? If so, what would that look like?
max_q_len = 128
max_a_len = 64
def batch_encode(text, max_seq_len):
return tokenizer.batch_encode_plus(
text.tolist(),
max_length = max_seq_len,
pad_to_max_length=True,
truncation=True,
return_token_type_ids=False
)
# tokenize and encode sequences in the training set
tokensq_train = batch_encode(train_q, max_q_len)
tokens1_train = batch_encode(train_a1, max_a_len)
tokens2_train = batch_encode(train_a2, max_a_len)
My Tokenizer:
tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased')
len(train_q) is 5023194 (which is the same for train_a1 and train_a2) | Are you positive it’s actually the encoding that does it and not some other part of your code? Maybe you can show us the traceback? | 0 |
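If the encoding is indeed the culprit, one hedged option is to encode in chunks so only one chunk's encodings live in memory at a time (the chunk size below is arbitrary; each encoded chunk could be written to disk or consumed immediately):
def batch_encode_chunked(texts, max_seq_len, chunk_size=100_000):
    for start in range(0, len(texts), chunk_size):
        chunk = texts[start:start + chunk_size]
        yield tokenizer(
            chunk,
            max_length=max_seq_len,
            padding="max_length",
            truncation=True,
            return_token_type_ids=False,
        )

for encoded_chunk in batch_encode_chunked(train_q.tolist(), max_q_len):
    ...  # process or persist each chunk here instead of keeping them all in memory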
huggingface | Beginners | ValueError: Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 8])) | https://discuss.huggingface.co/t/valueerror-target-size-torch-size-8-must-be-the-same-as-input-size-torch-size-8-8/12133 | I’m having trouble getting my model to train. It keeps returning the error:
ValueError: Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 8]))
for the given code:
from transformers import BertTokenizer, BertForSequenceClassification, TrainingArguments, Trainer
from datasets import load_dataset
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=8)
dataset = load_dataset('json', data_files={'train': 'train.jsonl', 'test': 'test.jsonl'})
def preprocess_data(examples):
# encode a batch of sentences
encoding = tokenizer(examples["sentence1"], padding="max_length", truncation=True)
# add labels as a list
encoding["labels"] = examples["label"]
return encoding
# tokenize sentences + add labels
encoded_dataset = dataset.map(preprocess_data)
for k,v in encoded_dataset.items():
print(k, v.shape)
# turn into PyTorch dataset
encoded_dataset.set_format("torch")
small_train_dataset = encoded_dataset["train"].shuffle(seed=42).select(range(100))
small_eval_dataset = encoded_dataset["test"].shuffle(seed=42).select(range(100))
training_args = TrainingArguments(
output_dir=".",
evaluation_strategy="epoch",
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=1,
)
trainer = Trainer(
model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset)
trainer.train()
I understand this is a mismatch in the target dimension but I’m not exactly sure how to correct it? Is it a mistake in how I’m preparing the data? | Can you print out one or two examples of small_train_dataset?
In that way, you can verify whether the data is prepared correctly for the model. You can for example decode the input_ids back to text, verify the labels, etc. | 0 |
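A small sketch of that check (just an illustration of the suggestion above):
example = small_train_dataset[0]
print(tokenizer.decode(example["input_ids"], skip_special_tokens=True))
print(example["labels"], example["labels"].dtype)  # for single-label classification these should be integer class indices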
huggingface | Beginners | How to become involved in the community? | https://discuss.huggingface.co/t/how-to-become-involved-in-the-community/12168 | Hello! I am sure this is a widely asked question but I would appreciate some insight! I am a developer coming from using GPT-3, and I have some understanding of transformer models and such. I work with CV and NLP, and wanted to look into HuggingFace as I have heard its name everywhere.
I am looking to develop my own NLP and CV Models using HuggingFace’s datasets. How can I become involved in learning about HuggingFace so I may utilize it in my workflow?
Thanks for your time,
Brayden | Hey, welcome to Hugging Face. If you want to learn, the best way is to go through the course (rather short, but it provides ample knowledge) - Transformer models - Hugging Face Course 2 - and if you have doubts, ask them on the forum.
Join the Discord for discussions, interaction with the community and much more.
Have a good day! | 0 |
huggingface | Beginners | Accuracy metric throws during evaluation on sequence classification task | https://discuss.huggingface.co/t/accuracy-metric-throws-during-evaluation-on-sequence-classification-task/12103 | I'm fine-tuning a BertForSequenceClassification model and I want to compute the accuracy on the evaluation set after each training epoch. However, the evaluation step fails with:
TypeError: 'list' object is not callable" when calling evaluate
Here’s a minimal example showing the error (see also this Colab notebook):
Standard tokenizer, toy model:
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
config = BertConfig(num_hidden_layers=0)
model = BertForSequenceClassification(config)
Grab the IMDb dataset, make training and evaluation sets:
imdb = load_dataset("imdb")
def mapper(x):
return tokenizer(x["text"], max_length=384, truncation=True, padding="max_length")
eval_dataset = imdb["train"].select(range(100)).map(mapper, remove_columns=["text"])
Set up the trainer, including compute_metrics=[accuracy]:
accuracy = load_metric("accuracy")
args = TrainingArguments(
"imdb",
num_train_epochs=2,
report_to="none",
evaluation_strategy="epoch"
)
trainer = Trainer(
model,
args,
eval_dataset=eval_dataset,
compute_metrics=[accuracy],
tokenizer=tokenizer,
)
Evaluate:
trainer.evaluate()
Raises:
TypeError: 'list' object is not callable
Isn’t this something that should “just work”? | compute_metrics should be a function that takes a namedtuple (of type EvalPredictions) and returns a dictionary metric nane/metric value.
Look at the text classificatione example 1 or the course section on the Trainer 1. | 1 |
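For instance, a minimal sketch of such a function, reusing the accuracy metric loaded in the question above:
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

trainer = Trainer(
    model,
    args,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,  # a callable, not a list
    tokenizer=tokenizer,
)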
huggingface | Beginners | Question-Answering/Text-generation/Summarizing: Fine-tune on multiple answers | https://discuss.huggingface.co/t/question-answering-text-generation-summarizing-fine-tune-on-multiple-answers/2778 | Hi all!
Looking to fine-tune a model for QA/Text-Generation (not sure how to frame this) and I’m wondering how to best prepare the dataset in a way that I can feed multiple answers to the same question?
My goal is to facilitate the creation of a unique answer to a given question that is based on the input answers. The answers are longer-form (2 to 3 sentences) and I want the model to output this kind of length too.
For now my idea is to fine-tune GPT-2 and look at this as a text generation problem (but I don’t know how GPT-2 would treat multiple answers to the same question - would it adjust the weight simply in favor of the used tokens in the answers?). Maybe creating a summary of the given answers would have the same effect.
What would be the best approach to this? | Maybe @valhalla has some pointers? | 0 |
huggingface | Beginners | Retrieving whole words with fill-mask pipeline | https://discuss.huggingface.co/t/retrieving-whole-words-with-fill-mask-pipeline/11323 | Hi,
I’ve recently discovered the power of the fill-mask pipeline from Huggingface, and while playing with it, I discovered that it has issues handling non-vocabulary words.
For example, in the sentence, “The internal analysis indicates that the company has reached a [MASK] level.”, I would like to know which one of these words [‘good’, ‘virtuous’, ‘obedient’] is the most probable according to the bert-large-cased-whole-word-masking model.
The model refuses to give a score to the words virtuous and obedient because they do not exist in the vocabulary as such, therefore the scores are given to the first tokens that are recognized: v and o; which are not useful.
So the question remains, how could I get the prediction scores for the whole word instead of scores for individual subword tokens? | I’m not sure but you would average the probability of these tokens together and then compare with each other | 0 |
huggingface | Beginners | Reuse context for more questions in question answering | https://discuss.huggingface.co/t/reuse-context-for-more-questions-in-question-answering/9614 | I want to reuse the context in my QA system. I want to answer more questions on the same context and I want to avoid to load the context for any answer.
I'm trying to use the code below. Is there a way to reuse the context, i.e. to load the context only once?
from transformers import pipeline
nlp_qa = pipeline(
'question-answering',
model='mrm8488/bert-italian-finedtuned-squadv1-it-alfa',
tokenizer='mrm8488/bert-italian-finedtuned-squadv1-it-alfa'
)
nlp_qa(
{
'question': 'Per quale lingua stai lavorando?',
'context': 'Manuel Romero è colaborando attivamente con HF / trasformatori per il trader del poder de las últimas ' +
'técnicas di procesamiento de lenguaje natural al idioma español'
}
) | With most implementations, this is not possible. The models combine question and context such that both can attend to each other since the first layer.
There may be some implementation that only performs question/context attention as a last operation but I am not aware of it. | 0 |
huggingface | Beginners | Output Includes Input | https://discuss.huggingface.co/t/output-includes-input/6831 | Whenever I am generating text the input is included in the output. When the input is close to the maximum length the model barely produces any useful output.
Information
When using transformers.pipeline or transformers.from_pretrained, the model only generates the input when the input is long. For example,
generator = transformers.pipeline('text-generation', model='gpt2')
prompt = "really long text that is 1023 tokens ..."
output = generator(prompt, max_length=1024, do_sample=True, temperature=0.9)
output in this case would be equal to the input prompt.
To Reproduce
Here is a Collab notebook 3 with simple examples of the problem. I am looking to generate output from input ~1300 tokens and running into this issue consistently. Is there a way around this? | I am experiencing the same issue with another model … Help with this would be appreciated | 0 |
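Two hedged observations for anyone hitting this. First, max_length counts the prompt tokens as well, so a 1023-token prompt with max_length=1024 leaves room for only one new token; max_new_tokens bounds only the continuation instead. Note also that GPT-2's context window is 1024 tokens, so a ~1300-token prompt will not fit without truncation or a longer-context model. Second, the text-generation pipeline can be asked not to echo the prompt back; a sketch:
output = generator(
    prompt,
    max_new_tokens=200,      # budget for generated tokens only, independent of prompt length
    do_sample=True,
    temperature=0.9,
    return_full_text=False,  # return only the newly generated text, not prompt + text
)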
huggingface | Beginners | Converting an HF dataset to pandas | https://discuss.huggingface.co/t/converting-an-hf-dataset-to-pandas/12043 | Wondering if there is a way to convert a dataset downloaded using load_dataset to pandas? | Hi,
we have a method for that - Dataset.to_pandas. However, note that this will load the entire dataset into memory by default to create a DataFrame. If your dataset is too big to fit in RAM, load it in chunks as follows:
dset = load_dataset(...)
for df in dset.to_pandas(batch_size=..., batched=True):
# process dataframes
Another option is to use the pandas formatter, which will return a DataFrame object each time the dataset is indexed/sliced:
dset = load_dataset(...)
dset.set_format("pandas")
dset[10] # returns a dataframe with 1 row
dset[10:30] # returns a dataframe with 20 rows | 0 |
huggingface | Beginners | Img2seq model with pretrained weights | https://discuss.huggingface.co/t/img2seq-model-with-pretrained-weights/728 | Hi there,
I’m very new to this transformers business, and have been playing around with the HF code to learn things as I tinker.
the context:
One thing I would like to do is build an encoder/decoder out of a CNN and a Transformer Decoder, for generating text from images. My wife likes to pretend that she’s seen movies, but when prompted for the plot she just summarizes contextual clues she gets from the movie posters. I want to see if I can get a transformer to do the same thing.
I have looked at what’s out there on the web, and most of the decoders I find for this kind of task are based on recurrent networks. Instead, I would like to adapt a pretrained transformer model to do the same thing.
the question:
Given a pre-trained CNN encoder, what would be the best way to extract the decoder from a pre-trained GPT/BERT model? I would ideally like to fine-tune something that’s already in a good place to begin with. I’m working with limited computational resources (a pair of consumer GPUs with about 12gb of vram in total) and a small training dataset (a few thousand movie synopses with corresponding images).
Cheers,
Mike | Interesting project. My suggestion would be to take the Transformer-based ViT as the encoder and merge it with a decoder in a sequence-to-sequence setup with cross-attention. | 0 |
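A minimal sketch of that suggestion using the VisionEncoderDecoder class (checkpoint names are illustrative; the cross-attention weights it adds are randomly initialized and need fine-tuning on your image/synopsis pairs):
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# GPT-2 has no pad token by default, so reuse EOS for padding
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.eos_token_id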
huggingface | Beginners | What is temperature? | https://discuss.huggingface.co/t/what-is-temperature/11924 | I see the word “temperature” being used at various places like:
in Models — transformers 4.12.4 documentation 5
temperature ( float , optional, defaults to 1.0) – The value used to module the next token probabilities.
temperature scaling for calibration
temperature of distillation
can anyone please explain what does it mean, or point me to a source with explanation? | This is very well explained in this Stackoverflow answer 11.
You can also check out our blog post 5 on generating text with Transformers, that also includes a description of the temperature. | 1 |
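In short, the logits are divided by the temperature before the softmax, which flattens the next-token distribution for T > 1 and sharpens it for T < 1. A tiny illustration:
import torch

logits = torch.tensor([2.0, 1.0, 0.1])
for T in (0.5, 1.0, 2.0):
    print(T, torch.softmax(logits / T, dim=-1))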
huggingface | Beginners | Loading WiC dataset for fine tuning | https://discuss.huggingface.co/t/loading-wic-dataset-for-fine-tuning/11936 | I’m attempting to load the WiC dataset given to me for a class final project to fine-tune a BERT model but keep getting errors.
I think it could be a problem with __getitem__, but I'm not certain how I would change that to fit this dataset.
My Code is:
from transformers import BertTokenizer, BertForSequenceClassification, TrainingArguments, Trainer
from datasets import load_dataset
import torch
dataset = load_dataset('json', data_files={'train': 'train.jsonl', 'test': 'test.jsonl'})
train_texts, train_labels = dataset['train'], dataset['train']['label']
test_texts, test_labels = dataset['test'], dataset['test']['label']
encoded_dataset_train = train_texts.map(lambda examples: tokenizer(examples['sentence1']), batched=True)
encoded_dataset_test = test_texts.map(lambda examples: tokenizer(examples['sentence1']), batched=True)
class WiCDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
item['labels'] = torch.tensor(self.labels[idx])
return item
def __len__(self):
return len(self.labels)
train_dataset = WiCDataset(encoded_dataset_train, train_labels)
test_dataset = WiCDataset(encoded_dataset_test, test_labels)
training_args = TrainingArguments("test_trainer")
trainer = Trainer(
model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset)
trainer.train()
Returns the output:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_15176/367029658.py in <module>
3 model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset)
4
----> 5 trainer.train()
c:\users\kjp19\appdata\local\programs\python\python38\lib\site-packages\transformers\trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1288 self.control = self.callback_handler.on_epoch_begin(args, self.state, self.control)
1289
-> 1290 for step, inputs in enumerate(epoch_iterator):
1291
1292 # Skip past any already trained steps if resuming training
c:\users\kjp19\appdata\local\programs\python\python38\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
519 if self._sampler_iter is None:
520 self._reset()
--> 521 data = self._next_data()
522 self._num_yielded += 1
523 if self._dataset_kind == _DatasetKind.Iterable and \
c:\users\kjp19\appdata\local\programs\python\python38\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
559 def _next_data(self):
560 index = self._next_index() # may raise StopIteration
--> 561 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
562 if self._pin_memory:
563 data = _utils.pin_memory.pin_memory(data)
c:\users\kjp19\appdata\local\programs\python\python38\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
c:\users\kjp19\appdata\local\programs\python\python38\lib\site-packages\torch\utils\data\_utils\fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
~\AppData\Local\Temp/ipykernel_15176/2427074494.py in __getitem__(self, idx)
13
14 def __getitem__(self, idx):
---> 15 item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
16 item['labels'] = torch.tensor(self.labels[idx])
17 return item
AttributeError: 'Dataset' object has no attribute 'items' | Hi,
I see you are first working with a HuggingFace Dataset (that is returned by the load_dataset function), and that you are then converting it to a PyTorch Dataset.
Actually, the latter is not required. Also, you can tokenize your training and test splits in one go:
from transformers import BertTokenizer, BertForSequenceClassification, TrainingArguments, Trainer
from datasets import load_dataset
import torch
# load local data as HuggingFace Dataset
dataset = load_dataset('json', data_files={'train': 'train.jsonl', 'test': 'test.jsonl'})
def preprocess_data(examples):
# encode a batch of sentences
encoding = tokenizer(examples["sentence1"], padding="max_length", truncation=True)
# add labels as a list
encoding["labels"] = examples["label"]
return encoding
# tokenize sentences + add labels
encoded_dataset = dataset.map(preprocess_data)
# turn into PyTorch dataset
encoded_dataset.set_format("torch")
training_args = TrainingArguments("test_trainer")
trainer = Trainer(
model=model, args=training_args, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["test"])
trainer.train() | 1 |
huggingface | Beginners | How to fine tune bert on entity recognition? | https://discuss.huggingface.co/t/how-to-fine-tune-bert-on-entity-recognition/11309 | I have a paragraph for example below
Either party may terminate this Agreement by written notice at any time if the other party defaults in the performance of its material obligations hereunder. In the event of such default, the party declaring the default shall provide the defaulting party with written notice setting forth the nature of the default, and the defaulting party shall have thirty (30) days to cure the default. If after such 30 day period the default remains uncured, the aggrieved party may terminate this Agreement by written notice to the defaulting party, which notice shall be effective upon receipt.
and then I need the Entity label and Entity value
Entity value = thirty (30) days
Entity label = Termination Notice Period
and I want to frame it as an entity recognition task, so could you please tell me how you would approach it? | Named-entity recognition (NER) is typically solved as a sequence tagging task, i.e. the model is trained to predict a label for every word. Typically one annotates NER datasets using the IOB annotation format 1 (or one of its variants, like BIOES). Let's take the example sentence from your paragraph. It would have to be annotated as follows:
the O
defaulting O
party O
shall O
have O
thirty B-TER
(30) I-TER
days I-TER
to O
cure O
the O
default O
. O
In other words, we annotate each word as being either outside a named entity (“O”), inside a named-entity (“I-TER”) or at the beginning of a named entity (“B-TER”).
However, there’s one additional challenge, in the sense that models like BERT operate on subword tokens, rather than words, meaning that a word like “hello” might be tokenized into [“hel”, “lo”]. This means that one should actually labels all tokens rather than all words, as BERT will be trained to predict a label for every token. There are multiple strategies here, one could either propagate the label to all subtokens of a word, or only label the first subword token of a given word.
You can take a look at my example notebooks 6 that illustrate how to fine-tune BERT for NER. | 0 |
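As a sketch of the "label only the first subword" strategy mentioned above (tokenizer is assumed to be a fast tokenizer so that word_ids() is available; ignored positions get -100 so the loss skips them):
def tokenize_and_align_labels(words, word_labels):
    tokenized = tokenizer(words, is_split_into_words=True, truncation=True)
    labels = []
    previous_word_idx = None
    for word_idx in tokenized.word_ids():
        if word_idx is None or word_idx == previous_word_idx:
            labels.append(-100)   # special tokens and non-first subwords are ignored
        else:
            labels.append(word_labels[word_idx])
        previous_word_idx = word_idx
    tokenized["labels"] = labels
    return tokenized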
huggingface | Beginners | Zero shot classification for long form text | https://discuss.huggingface.co/t/zero-shot-classification-for-long-form-text/5536 | I’m looking to do topic prediction/classification on long form text (podcasts/transcripts) and I’m curious if anyone knows of a model for this? I’ve looked through the existing zero shot classification models but they all appear to be optimized for short form text like questions.
If anyone knows of such a model I would appreciate it | cc @joeddav who is the zero-shot expert here | 0 |
huggingface | Beginners | How to fine tune TrOCR model properly? | https://discuss.huggingface.co/t/how-to-fine-tune-trocr-model-properly/11699 | Machine learning neophyte here, so apologies in advance for a “dumb” question.
I have been trying to build a TrOCR model using the VisionEncoderDecoderModel with the checkpoint 'microsoft/trocr-base-handwritten'. I have tried the other ones too, but my fine-tuning messes up the model instead of improving it. I wanted to ask for some help with fixing this/understanding what goes wrong. I have been using PyTorch Lightning for the training/fine-tuning. My code is below. Out of the box (with the above checkpoint) the model can generate pretty accurate results, but after my training/fine-tuning it gets worse instead of better.
Some info: I am fine-tuning on the IAM dataset. The initial loss is around 8 and it never goes below 4.
Huge thanks for any help!
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-handwritten')
class TrOCR_Image_to_Text(pl.LightningModule):
def __init__(self):
super().__init__()
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-handwritten')
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
# set beam search parameters
model.config.eos_token_id = processor.tokenizer.sep_token_id
model.config.max_length = 89
model.config.early_stopping = True
model.config.no_repeat_ngram_size = 3
model.config.length_penalty = 2.0
model.config.num_beams = 2
self.vit = model
def generate(self, input_ids):
return self.vit.generate(input_ids)
def forward(self, batch):
model = self.vit
model.to(device)
x,y = batch
pixel_values = x
labels = y
outputs = model(pixel_values=pixel_values, labels=labels)
loss = outputs.loss
logits = outputs.logits
return loss
def training_step(self, batch, batch_idx):
loss = self.forward(batch)
self.log("training_loss", loss)
return loss
def validation_step(self, batch, batch_idx):
loss = self.forward(batch)
self.log("validation_loss", loss, prog_bar=True, on_epoch=True)
model = self.vit
x,y = batch
outputs = model.generate(x.to(device))
cer = compute_cer(pred_ids=outputs, label_ids=y)
valid_cer = cer
self.log("cer", valid_cer, prog_bar=True)
return loss
def test_step(self, batch, batch_idx):
loss = self.forward(batch)  # forward takes only the batch
return loss
def configure_optimizers(self):
return AdamW(self.parameters(), lr=5e-5)
def train_dataloader(self):
return train_dataloader
def val_dataloader(self):
return eval_dataloader
def test_dataloader(self):
return eval_dataloader
'''
CER Metric:
'''
from datasets import load_metric
cer_metric = load_metric("cer")
def compute_cer(pred_ids, label_ids):
pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
label_ids[label_ids == -100] = processor.tokenizer.pad_token_id
label_str = processor.batch_decode(label_ids, skip_special_tokens=True)
cer = cer_metric.compute(predictions=pred_str, references=label_str)
return cer | Hi,
Thanks for your interest in TrOCR! Actually, the checkpoint you are loading (i.e. microsoft/trocr-base-handwritten) is one that is already fine-tuned on the IAM dataset. So I guess further fine-tuning on this dataset is not really helpful.
Instead, it makes sense to start from a pre-trained-only checkpoint (namely microsoft/trocr-base-stage1 or microsoft/trocr-large-stage1), and fine-tune it on the IAM dataset (or another dataset of interest). I illustrate this in my notebooks here 9. | 1 |
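Concretely, only the two from_pretrained lines in the script above would need to change, e.g.:
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-stage1')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-stage1')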
huggingface | Beginners | Best model for entity recognition, text classification and sentiment analysis? | https://discuss.huggingface.co/t/best-model-for-entity-recognition-text-classification-and-sentiment-analysis/11776 | What is the best pre-trained model I can use on my python 2.6.6 web site for:
Entity Recognition,
Text Classification (topics, sub-topics, etc) and
Sentiment Analysis?
I’m not interested in paid subscriptions to apis.
I’m looking for open source solutions that I can run on my server.
Any help would be greatly appreciated. | Consider me just a random person answering before someone more knowledgeable comes along to help you.
I would guess BERT or a distilled BERT; I think a pre-trained model can do the job. | 0 |
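A few hedged starting points using the pipelines API with its default open-source checkpoints (note that transformers itself requires Python 3.6+, so it would have to run as a separate service if the site is on Python 2.6):
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")   # entity recognition
sentiment = pipeline("sentiment-analysis")              # sentiment analysis
topics = pipeline("zero-shot-classification")           # topic / sub-topic classification

print(ner("Hugging Face is based in New York City."))
print(sentiment("I love this product!"))
print(topics("The new phone has a great camera.",
             candidate_labels=["technology", "sports", "politics"]))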
huggingface | Beginners | Add default examples for the Inference API | https://discuss.huggingface.co/t/add-default-examples-for-the-inference-api/11757 | Hi there,
I recently fine-tuned a model and added it to the Hub.
My model is BERT-based with a classification head, used for sentiment analysis.
I'm wondering how to set up default examples to be selected by users. The Hub has already set an example in English, but my model uses Arabic.
Here you can find my model.
huggingface.co
Yah216/Sentiment_Analysis_CAMelBERT_msa_sixteenth_HARD · Hugging Face 1
Thanks | UPDATE:
I found the answer in another model README.md
Apparently, I had to add the following text at the top of my README.md:
---
language: ar # <-- my language
widget:
- text: "my example goes here in the requested language"
---
Hope this helps other beginners too; it would be awesome if someone could post a link to the doc that discusses this. | 0 |
huggingface | Beginners | How to increase the length of the summary in Bart_large_cnn model used via transformers.Auto_Model_frompretrained? | https://discuss.huggingface.co/t/how-to-increase-the-length-of-the-summary-in-bart-large-cnn-model-used-via-transformers-auto-model-frompretrained/11622 | Hello, I used this code to train a bart model and generate summaries
(Google Colab)
However, the summaries are coming about to be only 200-350 characters in length.
Is there some way to increase that length?
What I thought was the following options: -
encoder_max_length = 256 # demo
decoder_max_length = 64
which are used here: -
def batch_tokenize_preprocess(batch, tokenizer, max_source_length, max_target_length):
source, target = batch["document"], batch["summary"]
source_tokenized = tokenizer(
source, padding="max_length", truncation=True, max_length=max_source_length
)
target_tokenized = tokenizer(
target, padding="max_length", truncation=True, max_length=max_target_length
)
batch = {k: v for k, v in source_tokenized.items()}
# Ignore padding in the loss
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in l]
for l in target_tokenized["input_ids"]
]
return batch
train_data = train_data_txt.map(
lambda batch: batch_tokenize_preprocess(
batch, tokenizer, encoder_max_length, decoder_max_length
),
batched=True,
remove_columns=train_data_txt.column_names,
)
Also, another parameter could be :- the max_length in the model.generate() function.
def generate_summary(test_samples, model):
inputs = tokenizer(
test_samples["document"],
padding="max_length",
truncation=True,
max_length=encoder_max_length,
return_tensors="pt",
)
input_ids = inputs.input_ids.to(model.device)
attention_mask = inputs.attention_mask.to(model.device)
outputs = model.generate(input_ids, attention_mask=attention_mask)
output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
return outputs, output_str
Which one of these should I alter to increase the length of the summary? | Hello, can anyone help
Bump | 0 |
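For what it's worth, a hedged observation on the two options listed in the question: decoder_max_length only caps the training targets (at roughly 64 tokens here, which by itself teaches the model to write short summaries), while the max_length passed to model.generate() is what bounds the summary at inference time; min_length can additionally force longer outputs. A sketch of the generate call (values are illustrative):
outputs = model.generate(
    input_ids,
    attention_mask=attention_mask,
    max_length=256,   # upper bound on generated tokens
    min_length=100,   # lower bound, to force longer summaries
    num_beams=4,
)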
huggingface | Beginners | How to pass continuous input in addition to text into pretrained model? | https://discuss.huggingface.co/t/how-to-pass-continuous-input-in-addition-to-text-into-pretrained-model/11414 | Hi, from here 1 it looks like the inputs to a model/pipeline will need to be either the words or tokenizer output, which are dictionary outputs of id and mask. In my project, I need to use a combination of words + some extracted features. How would one do this? Thank you. | No, I think in that case you would implement a featurizer yourself and pass the final inputs_embeds to the model. | 1 |
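A rough sketch of that inputs_embeds idea (feature_projection, continuous_features and encoded are hypothetical names; the extra features are projected to the model's hidden size and prepended to the token embeddings, and the attention mask would need to be extended accordingly):
import torch

token_embeds = model.get_input_embeddings()(encoded["input_ids"])   # [batch, seq, hidden]
extra_embeds = feature_projection(continuous_features)              # assumed: [batch, k, hidden]
inputs_embeds = torch.cat([extra_embeds, token_embeds], dim=1)
outputs = model(inputs_embeds=inputs_embeds)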
huggingface | Beginners | Huggingfac and GSoC22 | https://discuss.huggingface.co/t/huggingfac-and-gsoc22/11478 | Hello team,
I was wondering: Hugging Face has a big community and is, in my opinion, one of the best open-source organizations, so why do I never see it in GSoC?
Actually, I want to contribute to Hugging Face soon, and at the same time I want to pick the organization that I will start contributing to for GSoC.
So I am coming to ask: will Hugging Face participate in GSoC 2022 or not?
If it will, please let me know which projects it will participate with, so I can start contributing. | Hi there! The GSoC timelines have not been published for 2022. Last year the proposals were submitted around the end of March and organization applications were around mid-January, so it's a bit early to know how we'll participate in the GSoC program.
In any case, if you want to contribute with Hugging Face to the Open Source ecosystem, I would check the transformers 2 and datasets 2 repositories, both have good first issues. | 0 |
huggingface | Beginners | Why are transformers called transformers? | https://discuss.huggingface.co/t/why-are-transformers-called-transformers/11607 | hi all,
does anyone know why transformer models are called transformer models?
Is it related to some kind of meme like the inception module? | Because they were a radical new architecture, compared to RNNs and CNNs, i.e. they transformed the architecture landscape. | 0 |
huggingface | Beginners | How to train a model to do QnA - other than English language? | https://discuss.huggingface.co/t/how-to-train-a-model-to-do-qna-other-than-english-language/11584 | Please help me - point me to resources where I can learn –
How to train a model to understand a paragraph in different language
Train to answer questions
I tested a couple of models, but they are not working.
It looks like I need to train the model, but I don't know how to. | If you want to perform QnA tasks in other languages, you have to either use a multilingual model or use pretrained models trained on your target language. See this post 1 for more information. | 0 |
huggingface | Beginners | Not sure how to compute BLEU through compute_metrics | https://discuss.huggingface.co/t/not-sure-how-to-compute-bleu-through-compute-metrics/9653 | Here is my code
from transformers import Seq2SeqTrainer,Seq2SeqTrainingArguments, EarlyStoppingCallback, BertTokenizer,MT5ForConditionalGeneration
from transformers.data.data_collator import DataCollatorForSeq2Seq,default_data_collator
from torch.utils.data import Dataset, DataLoader
import pandas as pd
import math,os
import numpy as np
from torch.utils.data import Dataset
from tqdm import tqdm
import torch
from datasets import load_dataset, load_metric
os.environ['MASTER_PORT'] = '777'
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
#model trained form https://huggingface.co/uer/t5-base-chinese-cluecorpussmall
pretrain_task22_fewshot_zh = './results/baidu/pretrain-table-task22-lowdata/checkpoint-8'
model = MT5ForConditionalGeneration.from_pretrained(pretrain_task22_fewshot_zh)
tokenizer = BertTokenizer.from_pretrained(pretrain_task22_fewshot_zh)
device = 'cuda:0'
train_args = Seq2SeqTrainingArguments(output_dir='./results/baidu/finetune/task22-lowdata',evaluation_strategy = 'epoch',
per_device_train_batch_size=32,weight_decay=0, learning_rate= 0.00005,
num_train_epochs=100,lr_scheduler_type='constant_with_warmup',warmup_ratio=0.1,logging_strategy='steps',
save_strategy='epoch',fp16_backend = 'amp',fp16 = False,gradient_accumulation_steps = 2,
load_best_model_at_end = True,logging_steps = 1)#,deepspeed='./zero2_auto_config.json', save_total_limit = 3)
def load_data(path):
data = []
with open(path,encoding='utf-8') as w:
while True:
line = w.readline()
if not line:
break
data.append(line)
return data
class T5dataset(Dataset):
def __init__(self, data_set,tokenizer,maxlen,label_maxlen):
self.tokenizer = tokenizer
self.maxlen = maxlen
self.label_maxlen = label_maxlen
self.data_set = data_set
def __len__(self):
return len(self.data_set)
def __getitem__(self, index):
model_input = {}
data = self.data_set[index]
table, text = data.split('\t')
model_input = self.tokenizer(table,padding = 'max_length',truncation = True,max_length = self.maxlen)
label = self.tokenizer(text,truncation = True,max_length = self.label_maxlen)
model_input['labels'] = label['input_ids']
return {"input_ids": model_input['input_ids'], "attention_mask": model_input['attention_mask'], "labels": label['input_ids']}
baidu_lowdata = './data/baidu_compete/finetune/lowdata/'
train_data = load_data(baidu_lowdata + 'train.txt')
val_data = load_data(baidu_lowdata + 'val.txt')
train_data = T5dataset(train_data,tokenizer,64,256)
val_data = T5dataset(val_data,tokenizer,64,256)
early_stop = EarlyStoppingCallback(early_stopping_patience = 2,early_stopping_threshold = 0)
data_collator = DataCollatorForSeq2Seq(
tokenizer,
model=model,
label_pad_token_id=-100,
padding='max_length',
max_length= 64
)
metric = load_metric("sacrebleu")
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
def compute_metrics(eval_preds):
# print(preds)
preds, labels = eval_preds
#print('preds:',preds[0])
# print('len:',preds[0].shape)
if isinstance(preds, tuple):
preds = preds[0]
print('preds:',preds)
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
# if data_args.ignore_pad_token_for_loss:
# # Replace -100 in the labels as we can't decode them.
# labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Some simple post-processing
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
result = {"bleu": result["score"]}
prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
result["gen_len"] = np.mean(prediction_lens)
result = {k: round(v, 4) for k, v in result.items()}
return result
trainer = Seq2SeqTrainer(model=model,
args=train_args,
train_dataset=train_data,
eval_dataset=val_data,
tokenizer=tokenizer,
data_collator=data_collator,
callbacks = [early_stop],
compute_metrics=compute_metrics
)
trainer.train()
Here is part of what I got:
***** Running Evaluation *****
Num examples = 11
Batch size = 8
preds: [[[ -6.9859548 -6.9850636 -6.9853897 ... -6.985799 -6.9857574 | 0/2 [00:00<?, ?it/s]
-6.985038 ]
[ -6.9859576 -6.985067 -6.9853916 ... -6.9858017 -6.9857593
-6.985041 ]
[ -7.4163866 -7.41599 -7.41603 ... -7.4164863 -7.416518
-7.415782 ]
...
[ -8.480153 -8.479599 -8.479667 ... -8.480127 -8.480097
-8.47964 ]
[ -8.4777355 -8.477188 -8.477254 ... -8.47771 -8.4776745
-8.477233 ]
[ -8.475657 -8.47512 -8.475176 ... -8.475634 -8.475585
-8.475155 ]]
...
Traceback (most recent call last):
File "tmp.py", line 118, in <module>
trainer.train()
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/trainer.py", line 1342, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/trainer.py", line 1437, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/trainer_seq2seq.py", line 75, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/trainer.py", line 2042, in evaluate
metric_key_prefix=metric_key_prefix,
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/trainer.py", line 2273, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "tmp.py", line 91, in compute_metrics
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3133, in batch_decode
for seq in sequences
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3133, in <listcomp>
for seq in sequences
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3169, in decode
**kwargs,
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 743, in _decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/search/odin/imer/anaconda3/envs/torch1.7/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 718, in convert_ids_to_tokens
index = int(index)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
The code computing BLEU was copied from transformers/run_translation.py at master · huggingface/transformers · GitHub 7
I also ran that code and printed preds in compute_metrics, and they were all integers. I think my main problem is why the preds printed by my code are not integers, so they cannot be decoded.
This has been bothering me for two days. Could someone please point out what is wrong? Thank you! | Hi, I ran into the same situation: trying to use BLEU as the evaluation metric and getting the same error as you. Did you find a solution? | 0 |
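A hedged observation for anyone hitting this: the floats being printed are raw logits. With Seq2SeqTrainer, compute_metrics only receives generated token ids (decodable integers) when predict_with_generate=True is set in Seq2SeqTrainingArguments, e.g.:
train_args = Seq2SeqTrainingArguments(
    output_dir='./results/baidu/finetune/task22-lowdata',
    predict_with_generate=True,   # evaluation calls generate(), so preds are token ids
    evaluation_strategy='epoch',
    per_device_train_batch_size=32,
)
# and re-enable the -100 -> pad_token_id replacement before decoding the labels:
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)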
huggingface | Beginners | Use custom LogitsProcessor in `model.generate()` | https://discuss.huggingface.co/t/use-custom-logitsprocessor-in-model-generate/11603 | This post is related to Whitelist specific tokens in beam search - #4 by 100worte 1
I see methods such as beam_search() and sample() has a parameter logits_processor, but generate() does not. As of 4.12.3, generate() seems to be calling _get_logits_processor() without any way to pass additional logits processors.
From my belief, we are supposed to call generate() with parameters instead of any other methods for generation. Is my belief correct? How should I fix this minimally working example below to allow me to use MyCustomLogitsProcessor? Thank you!
import transformers
import torch
from transformers.generation_logits_process import LogitsProcessor,LogitsProcessorList
from transformers import GPT2Tokenizer, GPT2LMHeadModel
class MyCustomLogitsProcessor(LogitsProcessor):
def __init__(self):
pass
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
return scores # Minimally working
if __name__ == '__main__':
print(transformers.__version__) #4.12.3
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
logits_processor_list = LogitsProcessorList([
MyCustomLogitsProcessor(),
])
generation_output = model.generate(**inputs,
return_dict_in_generate=True,
output_scores=True,
logits_processor=logits_processor_list, # Will break:got multiple values for keyword argument 'logits_processor'
# What should I do here to use MyCustomLogitsProcessor?
)
#Expected: Hello, my dog is cute and icky. I'm not sure if she's a good dog
print(tokenizer.decode(generation_output["sequences"][0], skip_special_tokens=True)) | As it turns out, you cannot add a custom logits processor list to the model.generate(...) call. You need to use your own beam scorer… Similiar to this piece of code I had lying around from a research project.
bad_words_t = bad_words_ids
if extra_bad_words is not None:
bad_words_t += extra_bad_words
model_out=None
if horizon is None:
model_out = model.generate(input_ids = ids['input_ids'],\
max_length=max_length, num_beams=beams,\
no_repeat_ngram_size=5, bad_words_ids=bad_words_t, repetition_penalty=repetition_penalty)[0]
else:
horizon_ids = tokenizer(horizon, return_tensors="pt")['input_ids'].cuda()
input_ids = ids["input_ids"]
model.config.max_length = max_length
# instantiate logits processors
logits_processor = LogitsProcessorList([
MinLengthLogitsProcessor(ids['input_ids'].shape[1], model.config.eos_token_id),
NoRepeatNGramLogitsProcessor(5),
NoBadWordsLogitsProcessor(bad_words_t, eos_token_id=model.config.eos_token_id),
HorizonRepetitionPenalty(penalty=horizon_penalty, horizon=horizon_ids, horizon_exclusive=True),
RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty)
])
stopping_criteria = StoppingCriteriaList([
MaxLengthCriteria(max_length=max_length),
])
model_kwargs={
"attention_mask":ids['attention_mask'],
"use_cache":True,
}
with torch.no_grad():
model_out = model.greedy_search(
input_ids=ids["input_ids"], logits_processor=logits_processor,\
stopping_criteria=stopping_criteria)[0]
return tokenizer.decode(model_out)
I think if the devs are willing to add the ability to pass a custom logits processor to the generate function, it would be a great addition. | 0 |
huggingface | Beginners | Popping `inputs[labels]` when self.label_smoother is not None (in trainer.py) | https://discuss.huggingface.co/t/popping-inputs-labels-when-self-label-smoother-is-not-none-in-trainer-py/11589 | Hi,
I was training my seq2seq model (I’m using Seq2seqTrainer) with label-smoothing and have encountered an error that input_ids was required in my training dataset, whereas I checked that I put them in the dataset.
While debugging it, I found that when self.label_smoother is not None, the labels item was popped out from inputs and the error came from outputs = model(**inputs) as shown in the following lines in trainer.py:
1872 def compute_loss(self, model, inputs, return_outputs=False):
1873 """
1874 How the loss is computed by Trainer. By default, all models return the loss in the first element.
1875
1876 Subclass and override for custom behavior.
1877 """
1878 if self.label_smoother is not None and "labels" in inputs:
1879 labels = inputs.pop("labels")
1880 else:
1881 labels = None
1882 outputs = model(**inputs)
Question: is the line number 1879 intended? I think it would be either
labels = copy.deepcopy(inputs['labels']) or labels = inputs['labels']
I searched for this board but couldn’t find any similar post. That means other people are using the label-smoothing without any problem, which means I incorrectly understand the concept of the seq2seq training and label-smoothing.
Any comment would be greatly appreciated. | Hey @jbeh can you share a minimal reproducible example? For example, something simple that just shows:
How you load and tokenize the datasets
How you define the training arguments
How you define the trainer
That will help us understand better what is causing the issue | 0 |
huggingface | Beginners | KeyError: 'loss' even though my dataset has labels | https://discuss.huggingface.co/t/keyerror-loss-even-though-my-dataset-has-labels/11563 | Hi everyone! I'm trying to fine-tune the Musixmatch/umberto-commoncrawl-cased-v1 model on a NER task, using the Italian section of the wikiann dataset. The notebook I'm following is this: notebooks/token_classification.ipynb at master · huggingface/notebooks · GitHub.
Dataset’s initial structure is:
DatasetDict({
validation: Dataset({
features: ['tokens', 'ner_tags', 'langs', 'spans'],
num_rows: 10000
})
test: Dataset({
features: ['tokens', 'ner_tags', 'langs', 'spans'],
num_rows: 10000
})
train: Dataset({
features: ['tokens', 'ner_tags', 'langs', 'spans'],
num_rows: 20000
})
})
It has no labels but the DataCollatorForTokenClassification should help me out generating them.
from transformers import DataCollatorForTokenClassification
from datasets import load_metric
data_collator = DataCollatorForTokenClassification(tokenizer)
metric = load_metric("seqeval")
from transformers import AutoModel, TrainingArguments, Trainer
model = AutoModel.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
model_name = model_checkpoint.split("/")[-1]
training_args = TrainingArguments(
f"{model_name}-finetuned-{task}", # output directory
evaluation_strategy = "epoch",
num_train_epochs=3, # total number of training epochs
learning_rate=2e-5,
per_device_train_batch_size=batch_size, # batch size per device during training
per_device_eval_batch_size=batch_size, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=10,
)
trainer = Trainer(
model,
training_args,
train_dataset=tokenized_dataset["train"],
eval_dataset=tokenized_dataset["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
The error it raises when I run trainer.train() is:
KeyError Traceback (most recent call last)
<ipython-input-16-3435b262f1ae> in <module>()
----> 1 trainer.train()
3 frames
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k)
2041 if isinstance(k, str):
2042 inner_dict = {k: v for (k, v) in self.items()}
-> 2043 return inner_dict[k]
2044 else:
2045 return self.to_tuple()[k]
KeyError: 'loss'
How can I fix it? What am I doing wrong? Thanks for the help! | marcomatta:
It has no labels but the DataCollatorForTokenClassification should help me out generating them.
No, you need to preprocess your dataset to generate them. The data collator is only there to pad those labels as well as the inputs. Have a look at one of the token classification example script 1 or example notebook 1 | 0 |
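A hedged sketch of the two changes that preprocessing implies here: a model with a token-classification head (AutoModel alone never returns a loss), and a "labels" column aligned to the subword tokens (via word_ids(), with -100 for ignored positions), as shown in the linked notebook:
from transformers import AutoModelForTokenClassification

num_labels = dataset["train"].features["ner_tags"].feature.num_classes
model = AutoModelForTokenClassification.from_pretrained(
    "Musixmatch/umberto-commoncrawl-cased-v1", num_labels=num_labels
)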
huggingface | Beginners | How to view the changes in a model after training? | https://discuss.huggingface.co/t/how-to-view-the-changes-in-a-model-after-training/11490 | Hello,
I trained a BART model (facebook-cnn) for summarization and compared summaries with a pretrained model
model_before_tuning_1 = AutoModelForSeq2SeqLM.from_pretrained(model_name)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_data,
eval_dataset=validation_data,
tokenizer=tokenizer,
compute_metrics=compute_metrics,
)
trainer.train()
Summaries from model() and model_before_tuning_1() are different, but when I compare the model configs and/or print(model), I get exactly the same output for both.
How can I know exactly which parameters this training changed? | When fine-tuning a Transformer-based model such as BART, all parameters of the model are updated. This means any tensor present in model.parameters() can have updated values after fine-tuning.
The configuration of a model (config.json) before and after fine-tuning can be identical. The configuration just defines basic hyperparameters such as the number of hidden layers, the number of attention heads, etc. | 0 |
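A small sketch of comparing the two models tensor by tensor to see which weights actually changed:
import torch

changed = [
    name
    for (name, p_before), (_, p_after) in zip(
        model_before_tuning_1.named_parameters(), model.named_parameters()
    )
    if not torch.equal(p_before, p_after)
]
print(f"{len(changed)} parameter tensors differ after fine-tuning")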
huggingface | Beginners | Using jiant to evaluate sentence-transformer based model | https://discuss.huggingface.co/t/using-jiant-to-evaluate-sentence-transformer-based-model/11540 | Basically, the question is in the title, is it possible to use the NLP toolkit jiant to score models based on sentence transformers | I think that depends on the base model. So for example, a RoBERTa-based model is likely going to work just fine, while a ViT-based one won’t. | 1 |
huggingface | Beginners | Cuda OOM Error When Finetuning GPT Neo 2.7B | https://discuss.huggingface.co/t/cuda-oom-error-when-finetuning-gpt-neo-2-7b/6302 | I’m trying to finetune the 2.7B model with some data I gathered. I’m running on Google Colab Pro with a T-100 16GB. When I run:
from happytransformer import HappyGeneration, GENTrainArgs
model= HappyGeneration("GPT-NEO", "EleutherAI/gpt-neo-2.7B")
args = GENTrainArgs(num_train_epochs = 1, learning_rate =1e-5)
model.train("file.txt", args=args)
I get this error
RuntimeError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 15.90 GiB total capacity; 14.94 GiB already allocated; 61.75 MiB free; 14.96 GiB reserved in total by PyTorch)
I’m also getting this warning so it could have to do with the problem:
Token indices sequence length is longer than the specified maximum sequence length for this model (2337 > 2048). Running this sequence through the model will result in indexing errors
Does anyone know why I’m getting this error? | Check Keep getting CUDA OOM error with Pytorch failing to allocate all free memory - PyTorch Forums 8 for the pytorch part of it.
I am seeing something similar for XLM, but in my case the PyTorch config overrides are not getting recognized. I need to check whether Hugging Face is overriding them internally. | 0 |
huggingface | Beginners | What is a “word”? | https://discuss.huggingface.co/t/what-is-a-word/11517 | Trying to understand the char_to_word method on transformers.BatchEncoding. The description in the docs is:
Get the word in the original string corresponding to a character in the original string of a sequence of the batch.
Is a “word” defined to be a collection of nonwhitespace characters mapped, by the tokenizer, to a single token? | The notion of word depends on the tokenizer, and the text words are the result of the pre-tokenziation operation. Depending on the tokenizer, it can be split by whitespace, or by whitespace and punctuation, or other more advanced stuff | 0 |
huggingface | Beginners | Error occurs when saving model in multi-gpu settings | https://discuss.huggingface.co/t/error-occurs-when-saving-model-in-multi-gpu-settings/11407 | I’m finetuning a language model on multiple gpus. However, I met some problems with saving the model. After saving the model using .save_pretrained(output_dir), I tried to load the saved model using .from_pretrained(output_dir), but got the following error message.
OSError: Unable to load weights from pytorch checkpoint file for ‘xxx’ at 'my_model_dir’ If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
This error is strange because when I check the model_dir, there are a config.json file and a pytorch_model.bin file in it. Also, I'm obviously not doing anything with TF, so the instruction in the error message is not helpful.
Currently, I’m using accelerate library to do the training in multi-gpu settings. And the relevant code for saving the model is as follows:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(args.output_dir)
# torch.save(unwrapped_model.state_dict(), args.output_dir+'.pth')
if accelerator.is_main_process:
tokenizer.save_pretrained(args.output_dir)
I saw similar problems on the Internet but haven't found useful solutions. I think the problem lies in the multi-gpu setting, because in a single-gpu setting everything works fine.
FYI, here is my environment information:
python 3.6.8
transformers 3.4.0
accelerate 0.5.1
NVIDIA gpu cluster
Not sure if I miss anything important in multi-gpu setting. Really thanks for your help! | Is your training a multinode training? What may have happened is that you saved the model on the main process only, so only on one machine. The other machines then don’t find your model when you try to load it.
You can use the is_local_main_process attribute of the accelerator to save once per machine. | 0
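A minimal sketch of that pattern, reusing the variable names from the question:
accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
# is_local_main_process is True exactly once per machine, so every node writes its own copy
if accelerator.is_local_main_process:
    unwrapped_model.save_pretrained(args.output_dir)
    tokenizer.save_pretrained(args.output_dir)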
huggingface | Beginners | Trainer never invokes compute_metrics | https://discuss.huggingface.co/t/trainer-never-invokes-compute-metrics/11440 | def compute_metrics(p: EvalPrediction):
print("***Computing Metrics***") # THIS LINE NEVER PRINTED
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1)
if data_args.task_name is not None:
result = metric.compute(predictions=preds, references=p.label_ids)
if len(result) > 1:
result["combined_score"] = np.mean(list(result.values())).item()
return result
elif is_regression:
return {"mse": ((preds - p.label_ids) ** 2).mean().item()}
else:
return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
...
# Initialize our Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=data_collator,
)
# Training
if training_args.do_train:
checkpoint = None
if training_args.resume_from_checkpoint is not None:
checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
checkpoint = last_checkpoint
train_result = trainer.train(resume_from_checkpoint=checkpoint)
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
trainer.save_model() # Saves the tokenizer too for easy upload
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
if training_args.do_eval:
logger.info("*** Evaluate ***")
# Loop to handle MNLI double evaluation (matched, mis-matched)
tasks = [data_args.task_name]
eval_datasets = [eval_dataset]
if data_args.task_name == "mnli":
tasks.append("mnli-mm")
eval_datasets.append(raw_datasets["validation_mismatched"])
for eval_dataset, task in zip(eval_datasets, tasks):
metrics = trainer.evaluate(eval_dataset=eval_dataset)
max_eval_samples = (
data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
)
metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
"output_dir": "./output_dir",
"do_train": true,
"do_eval": true,
"learning_rate": 1e-5,
"per_device_train_batch_size": 32,
"per_device_eval_batch_size": 32,
"logging_strategy": "epoch",
"save_strategy": "epoch",
"evaluation_strategy": "epoch",
"prediction_loss_only": false,
I have a question about training on my own dataset; I forked the base code from run_glue.py 1. The arguments above are my TrainingArguments.
During training/validation, it seems that compute_metrics is never invoked, while everything else runs correctly.
How can I fix this so I can get accuracy or other metrics?
Please let me know if you need more information or code | You can see the batches that will be passed to your model for evaluation with:
for batch in trainer.get_eval_dataloader(eval_dataset):
break
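print(batch.keys())  # compute_metrics is only called when the eval batches contain a "labels" key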
And see if it does contain the "labels" key. | 1 |
huggingface | Beginners | Changing a CML Language | https://discuss.huggingface.co/t/changing-a-cml-language/11489 | Hi all!
I just wanted to know if there is any shortcut for changing a causal language model's language (in my case from English to Persian) - like fine-tuning an existing one - or whether I should train one from scratch? | You can start from a pre-trained English one and fine-tune it on another language. The effectiveness of this has been shown (among others) in the paper As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages.
cc @wietsedv, who is the main author of that paper. | 1 |
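As a very rough sketch of that recycling idea (a simplification, not the exact recipe from the paper; the Persian tokenizer path is a placeholder):
from transformers import AutoTokenizer, AutoModelForCausalLM

# A tokenizer trained on the target-language corpus is assumed to exist already
fa_tokenizer = AutoTokenizer.from_pretrained("./persian-bpe-tokenizer")   # hypothetical path

# Start from the pretrained English GPT-2 and resize the embeddings to the new vocabulary
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.resize_token_embeddings(len(fa_tokenizer))

# From here, fine-tune with the usual causal-LM objective (e.g., the run_clm.py script or the Trainer)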
huggingface | Beginners | T5 for closed book QA | https://discuss.huggingface.co/t/t5-for-closed-book-qa/11475 | How can I use T5 for abstractive QA, I don’t want to work on a SQUAD-like dataset, but rather get answers from general questions. Is there a prefix for this kind of QA for T5?
Thank you in advance! | Hi,
For open-domain question answering, no prefix is required. Google released several checkpoints (which you can find on our hub, such as this one 5) from their paper 2, you can use them as follows:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-small-ssm-nq")
t5_tok = AutoTokenizer.from_pretrained("google/t5-small-ssm-nq")
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True)) | 0 |
huggingface | Beginners | How to enable set device as GPU with Tensorflow? | https://discuss.huggingface.co/t/how-to-enable-set-device-as-gpu-with-tensorflow/11454 | Hi,
I am trying to figure out how I can set the device to GPU with TensorFlow. With other frameworks I have been able to use the GPU on the same instance. My code and the error are below. It runs OOM on the CPU instantly.
I know that with PyTorch I can call torch.cuda.set_device(); can you help me understand what the equivalent method is in TensorFlow?
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(train_encodings),
train_labels
))
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels=3)
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss= model.compute_loss, metrics=['accuracy'])
model.fit(train_dataset.shuffle(1000).batch(128), epochs=10, batch_size=128,
validation_data=val_dataset.shuffle(1000).batch(128))
I get error as below:
2021-11-08 05:50:31.398734: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at transpose_op.cc:183 : RESOURCE_EXHAUSTED: OOM when allocating tensor with shape[128,512,12,64] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu | Normally, Keras automatically uses the GPU if it’s available.
You can check the available GPUs as follows:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) | 0 |
huggingface | Beginners | LayoutLM for table detection and extraction | https://discuss.huggingface.co/t/layoutlm-for-table-detection-and-extraction/7015 | Can the LayoutLM model be used or tuned for table detection and extraction?
The paper says that it works on forms, receipts and for document classification tasks. | @ujjayants LayoutLM is not designed for that but you may want to look at
GitHub - Layout-Parser/layout-parser: A Unified Toolkit for Deep Learning Based Document Image Analysis 28
It may do what you are looking for. | 0 |
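A rough usage sketch for layout-parser, loosely following its README (the model URI, threshold, and label map are assumptions and may differ between versions):
import cv2
import layoutparser as lp

image = cv2.imread("page.png")[..., ::-1]   # BGR -> RGB; hypothetical input image

# Detectron2-based layout model trained on PubLayNet
model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

layout = model.detect(image)
tables = [block for block in layout if block.type == "Table"]
print(tables)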
huggingface | Beginners | Imbalance memory usage on multi_gpus | https://discuss.huggingface.co/t/imbalance-memory-usage-on-multi-gpus/11423 | Hi,
I am using the Trainer API for training a Bart model.
training_args = Seq2SeqTrainingArguments(
output_dir='./models/bart',
evaluation_strategy = "epoch",
learning_rate=2e-5,
num_train_epochs=5,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
warmup_steps=500,
weight_decay=0.01,
predict_with_generate=True,
)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
data_collator=data_collator,
tokenizer=tokenizer
)
I found out that the memory usage when training on multiple GPUs is imbalanced
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 14760 C python 10513MiB |
| 1 N/A N/A 14760 C python 4811MiB |
| 2 N/A N/A 14760 C python 4811MiB |
| 3 N/A N/A 14760 C python 4811MiB |
| 4 N/A N/A 14760 C python 4811MiB |
| 5 N/A N/A 14760 C python 4811MiB |
| 6 N/A N/A 14760 C python 4811MiB |
| 7 N/A N/A 14760 C python 4811MiB |
+-----------------------------------------------------------------------------+
Is there a way to balance the memory usage? | The reason for this, as far as I know, that all the models in the GPUs 1-7 have a copy in the GPU 0. The computed gradients on GPUs 1-7 are brought back to the GPU 0 for the backward pass to synchronize all the copies. After backpropagation, the newly obtained model parameters are distributed again to the GPUs 1-7. Forward pass is distributed, backward pass is syncronized.
So, it is necessary for a GPU to have copies of the models in other GPUs. Currently, I am not aware of a method to reduce the memory usage in the main GPU. | 1 |
huggingface | Beginners | Bert Data Preparation | https://discuss.huggingface.co/t/bert-data-preparation/11435 | I am trying to pretrain a BERT-type model from scratch. It will be a BERT-tiny model.
I will append the Wikipedia data to some of my own data.
I can download the wiki data using the Hugging Face datasets library. The question I have is what kind of cleaning I need to do after that. There are some non-ASCII characters in the dataset. Should I remove or normalize them?
Did the real BERT pretraining use the wiki data with all these non-ASCII characters? Does anyone know this or has anyone replicated the pretraining results?
Thanks in advance. | Moving this topic to the Beginners category. | 0 |
huggingface | Beginners | Get label to id / id to label mapping | https://discuss.huggingface.co/t/get-label-to-id-id-to-label-mapping/11457 | hello,
I have been having trouble finding where I can get the label-to-id mapping after loading a dataset from the Hugging Face Hub. Is it already in the dataset object, or is it done with the model? I did not define the label2id configuration before training, and I don't know how to get back the corresponding labels afterwards. | It is done within the dataset AFAIK.
from datasets import load_dataset
dataset = load_dataset(...)
dataset.features["label"].feature.names      # all label names
dataset.features["label"].feature._int2str   # same as `.names` (public method: int2str())
dataset.features["label"].feature._str2int   # mapping from label names to integers (public method: str2int()) | 1
huggingface | Beginners | Coreference Resolution | https://discuss.huggingface.co/t/coreference-resolution/11394 | Hi,
I’m quite familiar with the Huggingface ecosystem and I used it a lot.
However, I cannot find resources/models / tutorials for coreference resolution except for neuralcoref which last commit was years ago…
I also saw some models but there is not any clue on how to use them (I guess a TokenClassification Head ?)
Does anyone have any starting point for implementing a coreference resolution pipeline?
(I will start will neuralcoref if there is nothing better)
Thanks in advance for any help,
Have a great day. | Hi,
I suggest to take a look at this repo: GitHub - mandarjoshi90/coref: BERT for Coreference Resolution 5
It includes multiple models (BERT, SpanBERT) fine-tuned on OntoNotes, an important benchmark for coreference resolution.
There’s also a demo notebook 2, showcasing how to run inference for a new piece of text to find all entity clusters. | 1 |