docs (stringclasses, 4 values) | category (stringlengths 3-31) | thread (stringlengths 7-255) | href (stringlengths 42-278) | question (stringlengths 0-30.3k) | context (stringlengths 0-24.9k) | marked (int64, 0-1) |
---|---|---|---|---|---|---|
huggingface
|
🤗Transformers
|
Best models for seq2seq tasks
|
https://discuss.huggingface.co/t/best-models-for-seq2seq-tasks/716
|
Hi all, newbie here! So I have understood that transformers stand out a lot for seq2seq tasks since they are much faster to train and are more powerful in their comprehension abilities.
However, I had a particular use-case where I want to train a model from scratch. Basically, it involves a dataset of ciphers and the model will have to decode the plaintext from the encrypted value.
Now, the cipher is pretty advanced, so the relationship would obviously not be something very simple. The length of the output sequence will be variable. So can anyone point me to a model that preferably uses transformers and that is pretty good at finding complex relations between sequences? It does not necessarily have to be from the HF transformers library. Any model that you all think is particularly good for this purpose will be considered. Please also give an explanation of why you recommend that model so that I can research it further.
Cheers!
Neel Gupta
|
how long are the sequences? 10+ words, 100+ words, 1000+ words?
| 0 |
huggingface
|
🤗Transformers
|
How to load a google’s bert ckpt using tf2
|
https://discuss.huggingface.co/t/how-to-load-a-googles-bert-ckpt-using-tf2/600
|
Is there any way to load a Google-released ckpt from https://github.com/google-research/bert (because some settings are not provided in HF transformers) using TF 2+?
I’ve found docs on loading those models and converting them to PyTorch models here:
https://huggingface.co/transformers/converting_tensorflow_models.html
However, it only works with PyTorch.
|
Have you looked at using the keras-bert library (instead of using huggingface transformers library)?
GitHub
CyberZHG/keras-bert
Implementation of BERT that could load official pre-trained models for feature extraction and prediction - CyberZHG/keras-bert
keras-bert works with TF2. I used it to load a bert model from the tensorflow hub into colab. I copied it onto my google drive first, and loaded from that copy, but I believe it is also possible to load directly using
!wget -q https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
!unzip -o uncased_L-12_H-768_A-12.zip
So far as I can tell, models loaded with keras-bert are not compatible with transformers. I tried to get my fine-tuned model into transformers format (because I wanted to use Jesse Vig’s visualisations), but I never got it to work.
I believe there is something about the way the transformers code is written that means you can’t use non-transformers models in place of transformers models, or at least not without some serious poking of the python. However, I could be mistaken.
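For reference, a minimal keras-bert loading sketch (paths are illustrative and assume the checkpoint unzipped above):
import os
from keras_bert import load_trained_model_from_checkpoint

ckpt_dir = "uncased_L-12_H-768_A-12"  # folder produced by the unzip command above
config_path = os.path.join(ckpt_dir, "bert_config.json")
checkpoint_path = os.path.join(ckpt_dir, "bert_model.ckpt")

# build the Keras model and load the official Google checkpoint weights
model = load_trained_model_from_checkpoint(config_path, checkpoint_path, training=False, seq_len=128)
model.summary()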
| 0 |
huggingface
|
🤗Transformers
|
Write With Transformers XLNet Broken
|
https://discuss.huggingface.co/t/write-with-transformers-xlnet-broken/653
|
I may be wrong, but this issue seems to have begun following the release of Transformers v3.0. I apologize if I am being annoying, considering you are doing an immense favor for us in making the demo available.
|
Hello @zanderbush! Why do you say it’s broken? Do you think the text generation is off?
The completions trigger correctly:
(screenshot of the demo generating completions)
| 0 |
huggingface
|
🤗Transformers
|
How to do selective masking in Language modeling
|
https://discuss.huggingface.co/t/how-to-do-selective-masking-in-language-modeling/699
|
Hi Huggingfacers
I have a number of questions regarding finetuning a language model:
How to mask a selective portion of a given input sentence instead of masking randomly.
For example, if I am using ALBERT as a model and I am aiming to use a different loss function than the standard MLM loss for the masked tokens, how do I access the model output for the masked tokens?
|
I think the answer is similar to another post: Selective masking in Language modeling
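For what it's worth, here is a minimal sketch of selective masking with ALBERT (the position chosen to mask is illustrative; labels set to -100 are ignored by the MLM loss):
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

enc = tokenizer("The capital of France is Paris", return_tensors="pt")
input_ids = enc["input_ids"].clone()
labels = torch.full_like(input_ids, -100)   # -100 = position ignored by the loss

pos = input_ids.shape[1] - 2                # mask a chosen position instead of a random one
labels[0, pos] = input_ids[0, pos]          # the loss is computed only at this position
input_ids[0, pos] = tokenizer.mask_token_id

outputs = model(input_ids=input_ids, attention_mask=enc["attention_mask"], labels=labels)
loss = outputs[0]                           # MLM loss restricted to the selected token
print(loss)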
| 0 |
huggingface
|
🤗Transformers
|
Masked language modeling loss
|
https://discuss.huggingface.co/t/masked-language-modeling-loss/698
|
Can anyone provide a link to a visual equation walkthrough of the MLM loss, or better, show how it would be implemented in torch?
|
This thread might help you
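In the meantime, a minimal sketch of how the MLM loss is usually computed in torch (logits and labels are assumed to come from an MLM head, with -100 at all non-masked positions):
import torch
import torch.nn.functional as F

def mlm_loss(logits, labels):
    # logits: (batch, seq_len, vocab_size); labels: (batch, seq_len) with -100 everywhere except masked positions
    # cross-entropy is averaged over the masked positions only
    return F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1), ignore_index=-100)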
| 0 |
huggingface
|
🤗Transformers
|
Language pair with multiple models on the model hub?
|
https://discuss.huggingface.co/t/language-pair-with-multiple-models-on-the-model-hub/660
|
I’m interested in comparing different models that can translate between the same source and target language. Even better would be multiple models for multiple source/target pairs. When I search for models on huggingface.co/models, I see many models like this one: Helsinki-NLP/opus-mt-en-fr.
However, I don’t really see any other models available for translation. Are there any other translation models available, for example, for English to French, English to Chinese, or English to German translation? Thanks.
|
T5 can do english to french and english to german
| 0 |
huggingface
|
🤗Transformers
|
Any Pre-trained reformer model available for classification fine tuning
|
https://discuss.huggingface.co/t/any-pre-trained-reformer-model-available-for-classification-fine-tuning/602
|
I am trying to implement the reformer classification model (I have implemented the Reformer classification head) on the IMDB data set and would like to understand which of the two available pre-trained models, google/reformer-enwik8 and google/reformer-crime-and-punishment, is suited for the fine-tuning task, or whether neither of them will work.
google/reformer-enwik8 model: from the document - “The model is a language model that operates on characters. Therefore, this model does not need a tokenizer.” As the model is a character-level model, is it suitable for classification tasks?
Also in general what is the consideration for choosing a pre-trained model to fine-tune it for classification tasks?
|
To answer your second question:
As can be seen from recent developments, pre-trained masked-language models are more suitable for classification tasks, or rather tasks that require bi-directional understanding. T5 has also shown that it’s possible to achieve comparable results on classification tasks using a text-to-text approach.
As for those two reformer models, they are causal LMs trained for next-token prediction, so they might not perform well enough for classification. But I haven’t tried this myself, so feel free to experiment.
| 0 |
huggingface
|
🤗Transformers
|
Looking for translation mechanism (es-en,en-es)
|
https://discuss.huggingface.co/t/looking-for-translation-mechanism-es-en-en-es/603
|
Hi,
I’d be happy to see a code example for a translation mechanism from es to en and back from en to es.
I tried using the code for en-ROMANCE and changed the model to opus-mt-en_el_es_fi-en_el_es_fi, but I’m getting an empty string for the tgt_text.
The code is published here:
https://huggingface.co/transformers/model_doc/marian.html
from transformers import MarianMTModel, MarianTokenizer
src_text = [
    '>>fr<< this is a sentence in english that we want to translate to french',
    '>>pt<< This should go to portuguese',
    '>>es<< And this to Spanish'
]
model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
tokenizer = MarianTokenizer.from_pretrained(model_name)
print(tokenizer.supported_language_codes)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer.prepare_translation_batch(src_text))
tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
Thank you
Shani
|
I’d be happy to get you guidance
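For what it's worth, a minimal es-en sketch using the dedicated Marian checkpoint (assuming the same transformers version as the snippet above, which still provides prepare_translation_batch):
from transformers import MarianMTModel, MarianTokenizer

model_name = 'Helsinki-NLP/opus-mt-es-en'   # 'Helsinki-NLP/opus-mt-en-es' for the other direction
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ['Hola, ¿cómo estás?']           # no >>lang<< prefix is needed for a single-target model
translated = model.generate(**tokenizer.prepare_translation_batch(src_text))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])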
| 0 |
huggingface
|
🤗Transformers
|
How to use `.modules()` command to get all the parameters that pertains to the uppermost layer of `roberta-large` model?
|
https://discuss.huggingface.co/t/how-to-use-modules-command-to-get-all-the-parameters-that-pertains-to-the-uppermost-layer-of-roberta-large-model/629
|
Hello,
I would like to apply the function f to the parameters that pertain to the 24th layer (the uppermost layer) of the RobertaForMultipleChoice pre-trained model (roberta-large). How should I fix the loop below so that it only touches the parameters from the 24th layer? Currently, the loop applies the function f to every parameter in the Transformer.
Thank you,
for m in model_RobertaForMultipleChoice.modules():
    for name, value in list(m.named_parameters(recurse=False)):
        setattr(m, name, f)
|
Check the names of the modules:
for nm in model_RobertaForMultipleChoice.named_modules():
    print(nm[0])
then choose the module you want:
for name, mod in model_RobertaForMultipleChoice.named_modules():
    if name == 'the 24th layer module name':
        mod.parameters()
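A minimal sketch of restricting the loop to the topmost layer, assuming roberta-large's 24 encoder layers are zero-indexed under roberta.encoder.layer (so the 24th layer has index 23) and that f maps a tensor to a tensor of the same shape:
for name, param in model_RobertaForMultipleChoice.named_parameters():
    if name.startswith('roberta.encoder.layer.23.'):
        # only parameters of the uppermost encoder layer are touched
        param.data = f(param.data)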
| 0 |
huggingface
|
🤗Transformers
|
Tiny mBART doc/info
|
https://discuss.huggingface.co/t/tiny-mbart-doc-info/414
|
I could not find any documentation/info for the sshleifer/tiny-mbart model. How big is it? How was it trained? What is the performance, etc.? Did I miss something?
|
AFAIK, this model was created for testing purposes. Pinging @sshleifer for confirmation.
| 0 |
huggingface
|
🤗Transformers
|
NER on multiple languages
|
https://discuss.huggingface.co/t/ner-on-multiple-languages/401
|
I want to do an NER task on news articles that are in dozens of languages. Is the best option to go for xlm-roberta-large-finetuned-conll03-english? I read that XLM models fine-tuned for a language work well in other languages as well. My main issue is that this model is too big. Should I go for language-specific smaller models if I already know what language I’m dealing with?
Also I’m curious: why does xlm-roberta-large-finetuned-conll03-german have so many more downloads than the English one?
|
Hi @goutham794,
you could train a multi-lingual NER model on the WikiANN dataset (or better: use the train/dev/test partitions from https://github.com/afshinrahimi/mmner).
But fine-tuning one big multi-lingual NER model could be very complicated (fine-tuning instabilities). And you should keep in mind that WikiANN only has three label types.
If you already know what languages you want to cover, then a better way would be to train “mono-lingual” models + just search for NER datasets for your desired languages. A good resource is:
GitHub
juand-r/entity-recognition-datasets
A collection of corpora for named entity recognition (NER) and entity recognition tasks. These annotated datasets cover a variety of languages, domains and entity types. - juand-r/entity-recognitio...
| 0 |
huggingface
|
🤗Transformers
|
DPRQuestionEncoder
|
https://discuss.huggingface.co/t/dprquestionencoder/578
|
I have installed transformers version 3.0.2.
When I do a
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
I get the following error :
ImportError Traceback (most recent call last)
in
----> 1 from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer
ImportError: cannot import name 'DPRQuestionEncoder' from 'transformers' (/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
|
ref: https://github.com/huggingface/transformers/issues/6013,
I see that DPR functionality is not available in 3.0.2.
| 0 |
huggingface
|
🤗Transformers
|
Static type checking (with mypy): What’s the official position?
|
https://discuss.huggingface.co/t/static-type-checking-with-mypy-whats-the-official-position/464
|
PEP 484 introduced type hinting into Python (there have been more PEPs since). mypy (documentation is here) is the most popular type checker that conforms to these PEPs.
Is there an official position from HuggingFace on typing the public API of its libraries? I tried looking through Discourse and GitHub issues, but I couldn’t find any mention of plans for static type checking support.
My impression of the codebase is that, while the “public facing api” is actually completely documented in terms of types (using docstrings), the actual type annotations (whether inline, or in a separate stub-only package) are not there.
PS: If this question is better suited for Github issues, let me know.
|
Type annotation is half-there, half-not-there, mainly because the transformers library used to support Python 2.7. For instance, a new file like pipelines.py has almost everything properly type-annotated.
As I work on the documentation, I try to add them where missing, but if you want to help with PRs, feel free to do so.
| 0 |
huggingface
|
🤗Transformers
|
Robertaforquestionanswering
|
https://discuss.huggingface.co/t/robertaforquestionanswering/544
|
I am a newbie to huggingface/transformers…
I tried to follow the instructions at https://huggingface.co/transformers/model_doc/roberta.html#robertaforquestionanswering to try out this model but I get errors.
from transformers import RobertaTokenizer, RobertaForQuestionAnswering
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForQuestionAnswering.from_pretrained('roberta-base')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss, start_scores, end_scores = outputs[:3]
The tokenizer is loaded but the error is in trying to load the QA model.
Traceback (most recent call last):
File "/gstore/home/madabhuc/.local/lib/python3.6/site-packages/transformers/modeling_utils.py", line 655, in from_pretrained
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 7, in <module>
model = RobertaForQuestionAnswering.from_pretrained('roberta-base')
File "/gstore/home/madabhuc/.local/lib/python3.6/site-packages/transformers/modeling_utils.py", line 662, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load weights for 'roberta-base'. Make sure that:
'roberta-base' is a correct model identifier listed on 'https://huggingface.co/models'
or 'roberta-base' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt.
|
Looks like an environment issue. I was able to run the code on another machine.
| 0 |
huggingface
|
🤗Transformers
|
What is the purpose of the additional dense layer in classification heads?
|
https://discuss.huggingface.co/t/what-is-the-purpose-of-the-additional-dense-layer-in-classification-heads/526
|
I was looking at the code for RobertaClassificationHead and it adds an additional dense layer, which is not described in the paper for fine-tuning for classification.
I have looked at a few other classification heads in the Transformers library and they also add that additional dense layer.
For example, the classification head for RoBERTa is:
class RobertaClassificationHead(nn.Module):
    """Head for sentence-level classification tasks."""

    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, features, **kwargs):
        x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
        x = self.dropout(x)
        x = self.dense(x)
        x = torch.tanh(x)
        x = self.dropout(x)
        x = self.out_proj(x)
        return x
To match the paper, it should be:
    def forward(self, features, **kwargs):
        x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
        x = self.dropout(x)
        x = self.out_proj(x)
        return x
What is the purpose of that additional dense + tanh + dropout?
Thank you very much!
|
Roberta does not have a pooler layer (like Bert for instance) since the pretraining objective does not contain a classification task. When doing sentence classification with bert, your final hidden states go through a BertPooler (which is just dense + tanh), a dropout and a final classification layer (which is a dense layer).
This structure is mimicked for all models on a sentence classification task, which is why for Roberta (which does not have a pooler) you get those two linear layers in the classification head. Hope that makes sense!
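For reference, a minimal sketch of the dense + tanh pooler described above (intended to mirror what BertPooler does on the first token):
import torch.nn as nn

class Pooler(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.activation = nn.Tanh()

    def forward(self, hidden_states):
        # pool the sequence by taking the hidden state of the first token ([CLS] / <s>)
        first_token = hidden_states[:, 0]
        return self.activation(self.dense(first_token))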
| 0 |
huggingface
|
🤗Transformers
|
BPE tokenizers and spaces before words
|
https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475
|
Hi,
The documentation for GPT2Tokenizer suggests that we should keep the default of not adding spaces before words (add_prefix_space=False).
I understand that GPT2 was trained without adding spaces at the start of sentences, which results in different tokenizations.
However, I imagine that most of the text was similar to:
<|endoftext|>document_1<|endoftext|>document_2...
where document_n could be:
This is a long article from wikipedia. Lots of sentences.
So most of the time, new sentences would actually start with a space (separation from previous sentence) or a line break. I’m not aware of extra preprocessing that would remove spaces after punctuation?
In that case, it is not obvious what the best strategy is when fine-tuning (adding spaces before words or not), as we may want to replicate what was most common in the initial dataset.
I would love any comment!
|
Hi Boris, here is some context and history on the GPT2 and Roberta tokenizers:
In GPT2 and Roberta tokenizers, the space before a word is part of a word, i.e. "Hello how are you puppetter" will be tokenized as ["Hello", "Ġhow", "Ġare", "Ġyou", "Ġpuppet", "ter"]. You can notice the spaces included in the words as a Ġ here. Spaces are converted into a special character (the Ġ) in the tokenizer prior to BPE splitting, mostly to avoid digesting spaces, since the standard BPE algorithm used spaces in its process (this can seem a bit hacky but was in the original GPT2 tokenizer implementation by OpenAI).
You have probably noted that the first word is a bit different because it’s lacking the first space, but actually the model is trained like this and reaches its best performance like this, with a special first word (see https://github.com/huggingface/transformers/issues/3788)
However, this behavior is a bit strange to some users because the first word is then different from the others: encoding Cats are super coolio and super coolio will not give the same tokenization (see here for instance: https://github.com/huggingface/transformers/issues/5249)
transformers thus provides an add_prefix_space argument to automatically add a space at the beginning if none is provided (more intuitive tokenization, but slightly lower performance though).
The library used to have a complex mechanism to disable this when special tokens are used and to control it dynamically. This mechanism was error-prone, and this behavior is now simply activated or not at instantiation of the tokenizer (i.e. as an argument in from_pretrained).
Also note that adding a prefix space is necessary when the tokenizer is used with pre-tokenized inputs (is_pretokenized=True); the library has a test that raises an error if you want to encode some input with add_prefix_space=False: https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_gpt2.py#L364
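A small demo of the behaviour described above (the exact splits may vary slightly across tokenizer versions):
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(tokenizer.tokenize('Cats are super coolio'))    # first word carries no leading-space marker (Ġ)
print(tokenizer.tokenize(' Cats are super coolio'))   # with a leading space, the first word gets a Ġ like the others

# the same effect can be switched on at instantiation, as mentioned above
tokenizer_prefix = GPT2Tokenizer.from_pretrained('gpt2', add_prefix_space=True)
print(tokenizer_prefix.tokenize('Cats are super coolio'))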
| 0 |
huggingface
|
🤗Transformers
|
Untrained models produce inconsistent outputs
|
https://discuss.huggingface.co/t/untrained-models-produce-inconsistent-outputs/537
|
More of a general question, since I think not many will want anything to do with models that aren’t trained.
It seems that when you create a model from config, i.e. untrained, that model will produce different results for identical inputs. I’m wondering why. Are the weights randomly initialized on each forward pass?
Code:
import torch
from transformers import AutoModel, AutoTokenizer, AutoConfig
BaseName = 'bert-base-cased'
tokenizer = AutoTokenizer.from_pretrained(BaseName)
input_ids = torch.tensor(tokenizer.encode('Hello there')).unsqueeze(0)
#Load trained model
model = AutoModel.from_pretrained(BaseName)
trained_tensor1 = model(input_ids)[0]
trained_tensor2 = model(input_ids)[0]
print('Trained tensors are the same: ',torch.eq(trained_tensor1,trained_tensor2).all())
#Prints True
#Load untrained model
config = AutoConfig.from_pretrained(BaseName)
model = AutoModel.from_config(config)
untrained_tensor1 = model(input_ids)[0]
untrained_tensor2 = model(input_ids)[0]
print('Untrained tensors are the same: ',torch.eq(untrained_tensor1,untrained_tensor2).all())
#Prints False
I’ve also tried with xlnet and got the same results.
|
Have you tried putting the model in evaluation mode with model.eval()? The from_pretrained class method does it for you, but not the regular init.
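A quick way to check this, reusing the names from the snippet in the question:
model = AutoModel.from_config(config)
model.eval()   # switches off dropout, so repeated forward passes become deterministic
with torch.no_grad():
    untrained_tensor1 = model(input_ids)[0]
    untrained_tensor2 = model(input_ids)[0]
print('Untrained tensors are the same: ', torch.eq(untrained_tensor1, untrained_tensor2).all())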
| 0 |
huggingface
|
🤗Transformers
|
IDE Tips for reading abstracted code
|
https://discuss.huggingface.co/t/ide-tips-for-reading-abstracted-code/153
|
Sometimes you are reading model/tokenizers code that reuses some other model or tokenizers code, either through inheritance or composition, like:
I usually (in Pycharm) just use cmd-b to jump to a function’s caller, but interested to know other people’s tricks!
|
Goal: minimize cognitive burden when reading/tweaking python code.
Hypothesis: Jumping to parent/helper fn definition is a start. Being able to read the implementation on hover is even better.
Pycharm
Push Members Down
Warning: This actually copies the implementation of methods to the child class. git checkout -b {code_reading} first or git stash after.
Go to name of child class.
Type meta-b to go to declaration of parent class.
Right click -> refactor -> push members down. Check all the boxes that you want to read in your child’s file:
(screenshot of the Push Members Down dialog)
Result:
(screenshot of the result)
| 0 |
huggingface
|
🤗Transformers
|
`tpu_cores` can only be 1, 8 or [<1-8>]
|
https://discuss.huggingface.co/t/tpu-cores-can-only-be-1-8-or-1-8/417
|
First it was giving me errors due to a missing argument gpus, but I fixed that by adding
parser.add_argument('--gpus', type=int)
to the parser and setting the gpus parameter in the run_pl.sh file. Doing so, I then ran into the error below. I understand that this is an error raised by PL, but it is a misconfiguration exception, which means we should be able to fix it ourselves in our code.
I am trying out the text-classification example with pytorch-lightning (run_pl.sh), but it seems to throw an exception.
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "run_pl_glue.py", line 186, in <module>
trainer = generic_train(model, args)
File "/lvol/bhashithe/transformers/examples/lightning_base.py", line 299, in generic_train
**train_params,
File "/lvol/bhashithe/env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 853, in from_argparse_args
return cls(**trainer_kwargs)
File "/lvol/bhashithe/env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 468, in __init__
self.tpu_cores = _parse_tpu_cores(tpu_cores)
File "/lvol/bhashithe/env/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 526, in _parse_tpu_c
ores
raise MisconfigurationException("`tpu_cores` can only be 1, 8 or [<1-8>]")
pytorch_lightning.utilities.exceptions.MisconfigurationException: `tpu_cores` can only be 1, 8 or [<1-8>]
The environment is all up to date, which has 8 GPUs (V100) and no TPUs.
|
Indeed, gpus is missing from argparse.
The following will fix your error:
- parser.add_argument("--n_tpu_cores", dest="tpu_cores", type=int, default=0)
+ parser.add_argument("--n_tpu_cores", dest="tpu_cores", type=int)
but then it fails again elsewhere. I’m looking at it now.
| 0 |
huggingface
|
🤗Transformers
|
Hugging face stickers?
|
https://discuss.huggingface.co/t/hugging-face-stickers/323
|
This may not be the most appropriate place to ask about this, and if it isn’t I apologize. But I can’t be the only one here who wants one of those fire die-cut stickers??? Where can I get one!
|
Yeah. I also want the stickers
cc @julien-c
| 0 |
huggingface
|
🤗Transformers
|
Text generation with XLNet not working
|
https://discuss.huggingface.co/t/text-generation-with-xlnet-not-working/400
|
I’m having some trouble with text generation when using XLNet. Here is my code:
from transformers import XLNetLMHeadModel, XLNetTokenizer
import torch
from time import time
from torchsummary import summary
PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
remainder of the story. 1883 Western Siberia,
a young Grigori Rasputin is asked by his father and a group of men to perform magic.
Rasputin has a vision and denounces one of the men as a horse thief. Although his
father initially slaps him for making such an accusation, Rasputin watches as the
man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
with people, even a bishop, begging for his blessing. <eod> """
def prepare_xlnet_input(tokenizer, prompt_text):
    prompt_text = PADDING_TEXT + prompt_text
    return prompt_text
tokenizer = XLNetTokenizer.from_pretrained("xlnet-large-cased")
model = XLNetLMHeadModel.from_pretrained('xlnet-large-cased',mem_len=0)
prompt_text = "How are you "
input_text = prepare_xlnet_input(tokenizer,prompt_text)
generated = tokenizer.encode(prompt_text, add_special_tokens=False)
context = torch.tensor([ tokenizer.encode(input_text, add_special_tokens=False)])
past = None
length = 10
tic = time()
for i in range(length):
    print(i)
    # Create dummy token for input
    input_data = torch.cat([context, torch.zeros((1,1), dtype=torch.int64)], dim=1)
    # Create target mapping mask
    target_mapping = torch.zeros((1, 1, input_data.shape[1]))
    target_mapping[:, :, -1] = 1
    print(target_mapping.shape)
    # Create permutation mask
    perm_mask = torch.zeros(1, input_data.shape[1], input_data.shape[1])
    perm_mask[:, -1, :] = 1
    print('Perm mask shape', perm_mask.shape)
    # Run data through model
    print('Input data: ', input_data[0, -10:-1])
    results = model(input_data, mems=past, use_cache=False, target_mapping=target_mapping, perm_mask=perm_mask)
    output = results[0]
    #past = results[1]
    # Get most probable token (greedy)
    print('Results: ', len(results))
    print('Output: ', output[0, 0, 0:20])
    print('Output shape: ', output.shape)
    token = torch.argmax(output, dim=-1)
    print('Token: ', token)
    generated += [token.squeeze(0).squeeze(0).tolist()]
    context = torch.cat([context, token], dim=1)
print(f'Time to decode {length} tokens: ',time() - tic)
sequence = tokenizer.decode(generated)
print('################### OUTPUT TEXT #####################')
print(sequence)
And here is the output:
......
8
torch.Size([1, 1, 172])
Perm mask shape torch.Size([1, 172, 172])
Input data: tensor([44, 19, 19, 19, 19, 19, 19, 19, 19])
Results: 1
Output: tensor([-11.8758, -22.3436, -22.2104, -21.2482, -19.0063, -22.1941, -22.2687,
-13.8716, -9.6080, -5.0403, -11.9773, -10.5682, -9.9979, -8.1426,
-10.3780, -19.4922, -18.2674, -7.5363, -5.8832, -3.6131],
grad_fn=<SliceBackward>)
Output shape: torch.Size([1, 1, 32000])
Token: tensor([[19]])
9
torch.Size([1, 1, 173])
Perm mask shape torch.Size([1, 173, 173])
Input data: tensor([19, 19, 19, 19, 19, 19, 19, 19, 19])
Results: 1
Output: tensor([-12.3357, -22.8729, -22.7428, -21.7369, -19.5315, -22.7247, -22.7972,
-14.3732, -10.1052, -5.5922, -12.4161, -11.0451, -10.4621, -8.6313,
-10.8699, -19.9730, -18.7067, -8.0223, -6.4285, -4.0735],
grad_fn=<SliceBackward>)
Output shape: torch.Size([1, 1, 32000])
Token: tensor([[19]])
Time to decode 10 tokens: 11.015855312347412
################### OUTPUT TEXT #####################
How are you,,,,,,,,,,
It just predicts commas the entire way through.
Another strange thing I noticed is that the predicted logits steadily increase with every cycle; they don’t change relative to each other at all, they just increase.
It’s very similar to the example code. I tried the code from https://huggingface.co/transformers/model_doc/xlnet.html?highlight=tfxlnet#xlnetlmheadmodel. Although it gave the wrong output, it did give something:
from transformers import XLNetTokenizer, XLNetLMHeadModel
import torch
tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-large-cased')
# We show how to setup inputs to predict a next token using a bi-directional context.
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)).unsqueeze(0) # We will predict the masked token
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0 # Previous tokens don't see last token
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float) # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
next_token_logits = outputs[0] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
# The same way can the XLNetLMHeadModel be used to be trained by standard auto-regressive language modeling.
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very <mask>", add_special_tokens=False)).unsqueeze(0) # We will predict the masked token
labels = torch.tensor(tokenizer.encode("cute", add_special_tokens=False)).unsqueeze(0)
assert labels.shape[0] == 1, 'only one word will be predicted'
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0 # Previous tokens don't see last token as is done in standard auto-regressive lm training
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float) # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)
outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping, labels=labels)
loss, next_token_logits = outputs[:2] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]
next_logit = torch.argmax(next_token_logits[0,0,:])
next_token = tokenizer.convert_ids_to_tokens(int(next_logit))
print(next_token)
After that it predicts the word ‘very’ instead of the intended ‘cute’.
However running run_generation.py using XLNet works just fine. Even when I changed to be greedy and not sample, it still produced valid results. I verified that the shape of my input data, the input data itself, the permutation mask and the target mapping is all the same as what is used in run_generation.py, but I just can’t get valid results out of my code.
What am I doing wrong?
|
Turns out I made a mistake with the permutation mask.
perm_mask[:,-1,:] = 1 should be perm_mask[:,:,-1] = 1
There’s a day of my life I’ll never get back.
| 0 |
huggingface
|
🤗Transformers
|
Cannot import Data Collator For PLM
|
https://discuss.huggingface.co/t/cannot-import-data-collator-for-plm/395
|
Hello, I was testing the newly added feature in transformers 3.0, the ability to continue training transformers models (XLNet in my case), but I’m getting this error:
2020-07-20 10:47:29.463663: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForPermutationLanguageModeling'
I made sure that I installed the latest version of the library
|
Hi @krannnN, this change is recent and not yet available in the release; you’ll need to install from source.
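For reference, installing from source typically means pip install git+https://github.com/huggingface/transformers (or cloning the repo and running pip install -e . from its root).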
| 0 |
huggingface
|
🤗Transformers
|
Smaller output vocabulary for GPT-2
|
https://discuss.huggingface.co/t/smaller-output-vocabulary-for-gpt-2/366
|
I noticed that by default, GPT2LMHeadModel returns prediction scores of shape (batch_size, sequence_length, config.vocab_size) (docs link). Is there any way for me to limit the output vocabulary to only a subset of words?
I want to take the existing weights from GPT-2, but re-train a new top linear layer with a smaller vocabulary. I suppose I could mask the logits at the end, but then it feels like a waste of computational power to even predict them.
|
Note that this model has the weights of the encoder and the decoder tied, so if you want to use the existing weights, you probably want to just mask the indices of the tokens you don’t want to use in your predictions.
Otherwise you can try to replace the last layer, but you will need to adapt the code in modeling_gpt2.py to do this.
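A minimal sketch of the masking approach (the allowed token subset here is purely illustrative):
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

allowed_token_ids = tokenizer.convert_tokens_to_ids(['Ġthe', 'Ġa', 'Ġdog'])   # illustrative subset

input_ids = torch.tensor([tokenizer.encode('Hello, my')])
logits = model(input_ids)[0][:, -1, :]           # next-token scores over the full vocabulary

mask = torch.full_like(logits, float('-inf'))
mask[:, allowed_token_ids] = 0.0                 # everything outside the subset stays at -inf
next_id = torch.argmax(logits + mask, dim=-1)
print(tokenizer.decode(next_id.tolist()))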
| 0 |
huggingface
|
🤗Transformers
|
GPU inference slows down if done in a loop
|
https://discuss.huggingface.co/t/gpu-inference-slows-down-if-done-in-a-loop/361
|
Hi, I have noticed that inference time is very quick if running the model on one batch. However, once inference is run in a loop, even if on the same input, it slows down significantly.
I have actually seen the same behaviour with tensorflow models. Is this expected behaviour, or is it an issue with CUDA, etc.?
Please find the notebook to see the issue:
https://colab.research.google.com/drive/1gqSzQqFm8HL0OwmJzSRlcRFQ3FOpnvFh?usp=sharing
|
This is because Python is a slow language. You generally want to avoid a loop in Python to get performance, and want to use inputs in a batch to use the full performance of your hardware.
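A sketch of what that looks like in practice (the model and inputs are illustrative):
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')
texts = ['first example', 'second example', 'third example']

# slow: one forward pass per input inside a Python loop
with torch.no_grad():
    outputs_loop = [model(**tokenizer(t, return_tensors='pt'))[0] for t in texts]

# faster: a single forward pass over one padded batch
batch = tokenizer(texts, padding=True, return_tensors='pt')
with torch.no_grad():
    outputs_batched = model(**batch)[0]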
| 0 |
huggingface
|
🤗Transformers
|
How were the GPT2 pretrained tensorflow models created?
|
https://discuss.huggingface.co/t/how-were-the-gpt2-pretrained-tensorflow-models-created/357
|
Hello there,
I wonder how the GPT2 pretrained models were created. The original models were checkpointed with the tensorflow 1 API and a substantially different computation graph than the reimplementation in huggingface transformers. I wonder what you did to get there.
Have you found a way to adapt the originally published weights?
Have the openai developers shared WebText with you?
Have you trained the models on similar data?
Thanks for your help
|
I wasn’t on the team when this was done, but I guess it was converted to PyTorch using the conversion-from-TF scripts and then converted back to TF2 using the functions in the convert-PyTorch-to-TF2 module.
OpenAI did not share WebText with us, and there was no retraining involved.
| 0 |
huggingface
|
🤗Transformers
|
Mobilebert, training from scratch. Not seeing where loads the teacher
|
https://discuss.huggingface.co/t/mobilebert-training-from-scratch-not-seeing-where-loads-the-teacher/259
|
Hello, I am modifying MobileBERT with some bells and whistles, but I am not finding how to start the pretraining in the teacher-student way. Could you help me with that?
|
The training script is here. In general, everything linked to distillation is in this folder.
| 0 |
huggingface
|
🤗Transformers
|
Vocab.txt missing for distilbert squad on listed files
|
https://discuss.huggingface.co/t/vocab-txt-missing-for-distilbert-squad-on-listed-files/392
|
Hi there, I cannot auto-download models due to firewall, and would like to directly download the artifacts. It appears there is no vocab.txt available for distilbert-base-cased-distilled-squad or distilbert-base-uncased-distilled-squad
https://huggingface.co/distilbert-base-cased-distilled-squad#list-files
Any ideas how to move forward?
|
Managed to identify the relevant vocab.txt files for distilbert. Think the vocab.txt related to a model should also form part of the listed files.
In [4]: tokenizer.pretrained_vocab_files_map
Out[4]:
{'vocab_file': {'distilbert-base-uncased': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt',
'distilbert-base-uncased-distilled-squad': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt',
'distilbert-base-cased': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt',
'distilbert-base-cased-distilled-squad': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt',
'distilbert-base-german-cased': 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-german-cased-vocab.txt',
'distilbert-base-multilingual-cased': 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt'}}
| 0 |
huggingface
|
🤗Transformers
|
Benchmark results
|
https://discuss.huggingface.co/t/benchmark-results/385
|
Command:
export d=mbart_benchmark_data
python examples/benchmarking/run_benchmark.py \
--models facebook/bart-large-cnn \
--log_filename $d/log.txt \
--inference_memory_csv \
$d/inference_memory.csv \
--train_memory_csv $d/train_memory.csv \
--train_time_csv $d/train_time.csv \
--inference_time_csv $d/inference_time.csv \
--fp16 --log_print --training --save_to_csv \
--batch_sizes 4 8 12 16
Results:
bart-large-cnn
============== TRAIN - MEMORY - RESULTS =======
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Mem in MB
--------------------------------------------------------------------------------
facebook/bart-large-cnn 4 8 2795
facebook/bart-large-cnn 4 32 2897
facebook/bart-large-cnn 4 128 3169
facebook/bart-large-cnn 4 512 6873
facebook/bart-large-cnn 8 8 2827
facebook/bart-large-cnn 8 32 2933
facebook/bart-large-cnn 8 128 3465
facebook/bart-large-cnn 8 512 12195
facebook/bart-large-cnn 12 8 2859
facebook/bart-large-cnn 12 32 3137
facebook/bart-large-cnn 12 128 4371
facebook/bart-large-cnn 12 512 N/A
facebook/bart-large-cnn 16 8 2891
facebook/bart-large-cnn 16 32 3105
facebook/bart-large-cnn 16 128 5153
facebook/bart-large-cnn 16 512 N/A
--------------------------------------------------------------------------------
mbart
========= TRAIN - MEMORY - RESULTS =======
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
facebook/mbart-large-en-ro 4 8 4355
facebook/mbart-large-en-ro 4 32 4947
facebook/mbart-large-en-ro 4 128 5117
facebook/mbart-large-en-ro 4 512 10383
facebook/mbart-large-en-ro 8 8 4877
facebook/mbart-large-en-ro 8 32 4493
facebook/mbart-large-en-ro 8 128 5857
facebook/mbart-large-en-ro 8 512 N/A
facebook/mbart-large-en-ro 12 8 4909
facebook/mbart-large-en-ro 12 32 5085
facebook/mbart-large-en-ro 12 128 7079
facebook/mbart-large-en-ro 12 512 N/A
facebook/mbart-large-en-ro 16 8 4941
facebook/mbart-large-en-ro 16 32 4663
facebook/mbart-large-en-ro 16 128 8655
facebook/mbart-large-en-ro 16 512 N/A
--------------------------------------------------------------------------------
|
This assumes that len(input_ids) == len(decoder_input_ids), which is not true for summarization
since bart-large-cnn has smaller embeddings, it is less likely to OOM (have an N/A entry)
env: v100 16GB GPU, fp16
Train times show bart more than 2x faster
model,batch_size,sequence_length,result
facebook/mbart-large-en-ro,4,8,0.0669
facebook/mbart-large-en-ro,4,32,0.0699
facebook/mbart-large-en-ro,4,128,0.1377
facebook/mbart-large-en-ro,4,512,0.3529
facebook/mbart-large-en-ro,8,8,0.0672
facebook/mbart-large-en-ro,8,32,0.0831
facebook/mbart-large-en-ro,8,128,0.1928
facebook/mbart-large-en-ro,8,512,N/A
facebook/mbart-large-en-ro,12,8,0.0687
facebook/mbart-large-en-ro,12,32,0.1156
facebook/mbart-large-en-ro,12,128,0.2629
facebook/mbart-large-en-ro,12,512,N/A
facebook/mbart-large-en-ro,16,8,0.0705
facebook/mbart-large-en-ro,16,32,0.1392
facebook/mbart-large-en-ro,16,128,0.3334
facebook/mbart-large-en-ro,16,512,N/A
model,batch_size,sequence_length,result
facebook/bart-large-cnn,4,8,0.0619
facebook/bart-large-cnn,4,32,0.0629
facebook/bart-large-cnn,4,128,0.0623
facebook/bart-large-cnn,4,512,0.1274
facebook/bart-large-cnn,8,8,0.0699
facebook/bart-large-cnn,8,32,0.0628
facebook/bart-large-cnn,8,128,0.0705
facebook/bart-large-cnn,8,512,0.2347
facebook/bart-large-cnn,12,8,0.0614
facebook/bart-large-cnn,12,32,0.0620
facebook/bart-large-cnn,12,128,0.0884
facebook/bart-large-cnn,12,512,N/A
facebook/bart-large-cnn,16,8,0.0667
facebook/bart-large-cnn,16,32,0.0668
facebook/bart-large-cnn,16,128,0.1075
facebook/bart-large-cnn,16,512,N/A
| 0 |
huggingface
|
🤗Transformers
|
BertForSequenceClassification Index Error
|
https://discuss.huggingface.co/t/bertforsequenceclassification-index-error/343
|
Hey Beginner Here, I was using BertForSequenceClassification using this code:
def train(model, optimizer, critertion=nn.BCELoss(), train_loader=train_iter, valid_loader=valid_iter,
          num_epochs=5, eval_every=len(train_iter) // 2, file_path="", best_valid_loss=float("Inf")):
    # initialize running values
    running_loss = 0.0
    valid_running_loss = 0.0
    global_step = 0
    train_loss_list = []
    valid_loss_list = []
    global_steps_list = []
    model.train()
    for epoch in range(num_epochs):
        for (labels, title, text, titletext), _ in train_loader:
            labels = labels.type(torch.LongTensor)
            labels = labels.to(device)
            titletext = titletext.type(torch.LongTensor)
            titletext = titletext.to(device)
            print(labels.shape)
            print(titletext.shape)
            output = model(titletext, labels)
            loss, _ = output
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            global_step += 1
    # removed other part of the code which was for validation and testing. Error is generated in the train loop.
but when I run the code it shows me the following error:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-63-e4474bff9c36> in <module>
2 optimizer = optim.Adam(model.parameters(), lr=2e-5)
3
----> 4 train(model=model, optimizer=optimizer)
<ipython-input-62-e6359dc8788e> in train(model, optimizer, critertion, train_loader, valid_loader, num_epochs, eval_every, file_path, best_valid_loss)
20 print(titletext.shape)
21
---> 22 output = model(titletext, labels)
23 loss, _ = output
24
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
<ipython-input-59-3d3782128a40> in forward(self, text, label)
7
8 def forward(self, text, label):
----> 9 loss, text_fea = self.encoder(text, labels=label)[:2]
10
11 return loss, text_fea
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels)
1158 else:
1159 loss_fct = CrossEntropyLoss()
-> 1160 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
1161 outputs = (loss,) + outputs
1162
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
930 def forward(self, input, target):
931 return F.cross_entropy(input, target, weight=self.weight,
--> 932 ignore_index=self.ignore_index, reduction=self.reduction)
933
934
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2315 if size_average is not None or reduce is not None:
2316 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2317 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2318
2319
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2113 .format(input.size(0), target.size(0)))
2114 if dim == 2:
-> 2115 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2116 elif dim == 4:
2117 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 5213 is out of bounds.
meanwhile, the shape of label and titletext is [16] and [16,128] respectively.
I tried to unsqueeze label too but that didn’t help.
I tried to check whether the target index is missing or whether the label and target are not of the same length, but that didn’t help either.
What can be done to fix this?
Full Code + Dataset: Click here
|
OK, I had a look at your code - your input data that you feed to the model is very borked. Your labels instead of being {0,1} are row ids converted to float. Always dump at least the first row/batch of your inputs to see that what you feed is what you expect it to be. In your case, for labels in the first batch you get something like:
x = next(iter(train_iter))
x.label
tensor([.169, .254, .512, ...
definitely not 2 categories.
The 2 main errors are that you (1) save the row IDs in the csv files and TabularDataset.splits gives labels the row ids (2) you don’t convert FAKE/REAL strings to {0,1}.
So before you do X_train.to_csv(..., you need to:
news['label'] = news['label'].astype('category').cat.codes
and also you need to drop the row IDs, so the correct code is:
X_train.to_csv("./real-and-fake-news-dataset/train.csv", index=False)
X_test.to_csv("./real-and-fake-news-dataset/test.csv", index=False)
X_valid.to_csv("./real-and-fake-news-dataset/valid.csv", index=False)
and later the fields need to be corrected too (label moved to the end):
fields = [('title', text_field), ('text', text_field), ('titletext', text_field), ('label', label_field),]
alternatively you could keep the row ids and then adjust your train loop to ignore the first field.
I don’t think fix_length=MAX_SEQ_LEN does what you think it does - I think the description of that field is confusing and misleading - as it’s related to padding and not truncating - you get millions of warnings during TabularDataset.splits call.
Token indices sequence length is longer than the specified maximum sequence length for this model (1129 > 512). Running this sequence through the model will result in indexing errors
So I added:
news['titletext'] = news['titletext'].str.slice(0,128)
news['title'] = news['title'].str.slice(0,128)
news['text'] = news['text'].str.slice(0,128)
you have other bugs in your code, but you will sort those out.
I uploaded the partially corrected nb here.
You can see the debug prints I added in the code.
Your csv files were also saved in the wrong dir (not the one it was read from). So I adjusted the dirs too.
You will need to remove the initial dataset truncation I added to make the dev cycle fast (and note that trick for your own future development process).
| 0 |
huggingface
|
🤗Transformers
|
Development workflow and aliases
|
https://discuss.huggingface.co/t/development-workflow-and-aliases/330
|
Someone asked me in slack so I figured I’d post some tricks that I use here, would love to hear the tricks of others! What follows is unofficial, opinionated, and maybe not even best practice.
aliases to add to .zshrc (use oh-my-zsh it’s dope)!
install_pl_dev() {
pip uninstall typing
pip install -U git+https://github.com/PyTorchLightning/pytorch-lightning.git
pip install typing
}
### pytest
hft () {
pytest -p no:warnings -n auto --dist=loadfile ./tests/ $@
}
tfork () {
cd ~/transformers_fork
}
tmar () {
RUN_SLOW=1 pytest --tb=short -p no:warnings ./tests/test_modeling_marian.py -ra $@
}
tmar_tok () {
RUN_SLOW=1 pytest --tb=short -p no:warnings ./tests/test_tokenization_marian.py -ra $@
}
tbart () {
#pytest -p no:warnings ./tests/test_modeling_bart.py -ra $@
pytest --tb=short -p no:warnings ./tests/test_modeling_bart.py -ra $@
}
ttf () {
pytest -p no:warnings ./tests/test_modeling_tf_bart.py -ra $@
}
tbm () {
pytest -p no:warnings ./tests/test_modeling_bart.py -ra $@
RUN_SLOW=1 pytest -p no:warnings tests/test_modeling_bart.py -sv -k mnli
}
tcnn () {
RUN_SLOW=1 pytest --tb=short -p no:warnings tests/test_modeling_bart.py -sv -k cnn $@
pytest -p no:warnings ./tests/test_modeling_bart.py -ra $@
}
txsum () {
RUN_SLOW=1 pytest --tb=short -p no:warnings tests/test_modeling_bart.py -sv -k xsum $@
# pytest -p no:warnings ./tests/test_modeling_bart.py -ra $@
}
tmbart () {
RUN_SLOW=1 pytest --tb=short -p no:warnings tests/test_modeling_bart.py -sv -k mbart $@
# pytest -p no:warnings ./tests/test_modeling_bart.py -ra $@
}
tenro() {
RUN_SLOW=1 pytest --tb=short -p no:warnings tests/test_modeling_bart.py -s -k enro $@
# pytest -p no:warnings ./tests/test_modeling_bart.py -ra $@
}
# misc
_checkout_grep() {
git checkout $1 > /dev/null 2>&1 # surpress Previous HEAD position msg
git grep $2 | wc -l
}
check_torch_compat () {
# check the pytorch compatibility of a function
# example usage check_torch_compat torch.bool
cd ~/pytorch/docs
echo "1.0"
_checkout_grep v1.0.0 $1
echo "1.1"
_checkout_grep v1.1.0 $1
echo "1.2"
_checkout_grep v1.2.0 $1
echo "1.3"
_checkout_grep v1.3.0 $1
echo "1.4"
_checkout_grep v1.4.0 $1
echo "master"
_checkout_grep master $1
cd - > /dev/null 2>&1
}
texamples () {
pytest --tb=short -p no:warnings examples/ $@
}
sty() {
make style
flake8 --ignore=P,E501,E203,W503,E741 examples templates tests src utils
}
gsync (){
g fetch upstream
g merge upstream/master
}
covg() {
open "$COVERAGE_URL$1"
}
# GCP STUFF
export CUR_PROJ="YOUR GCP PROJECT"
gcloud config set project $CUR_PROJ
start_gpu () {
gcloud compute instances start $CUR_INSTANCE_NAME --project $CUR_PROJ --zone $ZONE
}
stop_gpu () {
gcloud compute instances stop $CUR_INSTANCE_NAME --project $CUR_PROJ --zone $ZONE
}
export HF_PROJ="FIXME your gcp project name"
hfg_ssh () {
gcloud beta compute ssh --zone $ZONE $CUR_INSTANCE_NAME --project $CUR_PROJ -- -L 5555:localhost:8888
}
tidy_ssh () {
gcloud beta compute ssh --zone $ZONE $CUR_INSTANCE_NAME --project $CUR_PROJ
}
put_my_s3 () {
s3cmd put --recursive $1 s3://models.huggingface.co/bert/sshleifer/ $@
}
# Workon different machines then run hfg_ssh
workon_hfg (){
export CUR_INSTANCE_NAME="shleifer-MYSICKGPU"
export ZONE='us-central1-a'
}
workon_pegasus (){
export CUR_INSTANCE_NAME="notreally-pegasus-vm"
export ZONE="us-west1-b"
}
workon_tpu (){
export CUR_INSTANCE_NAME="shleifer-HUGETPUCLUSTERFORMAKEAGI"
export ZONE="us-central1-f"
}
workon_v8 (){
export CUR_INSTANCE_NAME="shleifer-BLAH"
export ZONE='us-central1-a'
}
start_v8 () {
workon_v8
start_gpu
}
export TOKENIZERS_PARALLELISM=false
export PYTEST_ADDOPTS='--pdbcls=IPython.terminal.debugger:Pdb'
### AWS/Seq2Seq Stuff
export COVERAGE_URL="https://codecov.io/gh/huggingface/transformers/src/master/src/transformers/"
export h="s3://models.huggingface.co/bert/Helsinki-NLP"
export b="s3://models.huggingface.co/bert"
export ss="s3://models.huggingface.co/bert/sshleifer"
export sdbart="s3://sshleifer.logs/dbart"
export sdir=$HOME/transformers_fork/examples/seq2seq/
export CNN_DIR=$sdir/dbart/cnn_dm
export XSUM_DIR=$sdir/dbart/xsum
export ENRO_DIR=$sdir/dbart/wmt_en_ro
export XSUM_URL="http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz"
export XSUM_RAW_S3="s3://sshleifer.logs/dbart/XSUM-EMNLP18-Summary-Data-Original.tar.gz"
aw3 () {
aws s3 $@
}
s3ls () {
aws s3 ls $@
}
Misc-tips:
fork, called that directory transformers_fork, and clone it to $HOME/ on every machine.
use pip install -e .["dev"] to keep up to date with dependency changes (isort, tokenizers mostly)
every time you start a VM I put my dotfiles up there, either with scp or git. I use git for dotfiles and scp for ~/.ssh/
When i want to update a branch, I usually run:
git checkout master
gsync # fetch upstream, merge upstream/master
git checkout <branch>
git merge master
If there are merge conflicts, I fix them in my IDE (vscode is nice, or pycharm cmd-k). I don’t trust git very much with this. The more you run this, the simpler it is to resolve merge conflicts.
Test Driven Development
(my version)
I run texamples -k finetune a lot also and try to keep it always green if I am working on examples/seq2seq. mostly on my mac but also on my VM.
I also run sty, my make style, isort, flake8 alias, all the time.
When I am updating bart, or adding a new model. I write the tests first and then try to get them green one by one. Same with new feature. Test first, add feature. This often includes adding a new check to an existing check.
I set tons of ipdb breakpoints for debugging hard things.
|
Thanks @sshleifer
What’s going on with your install of lightning, though? Shouldn’t it just be…
install_pl_dev() {
pip uninstall typing
pip install -U git+https://github.com/PyTorchLightning/pytorch-lightning.git
pip install typing
}
Also, texamples -k finetune is
| 0 |
huggingface
|
🤗Transformers
|
Hosted Inference API: Error loading tokenizer Can’t load config
|
https://discuss.huggingface.co/t/hosted-inference-api-error-loading-tokenizer-cant-load-config/319
|
Hi,
https://huggingface.co/sampathkethineedi/industry-classification-api
I uploaded my classification model fine-tuned on BERT. There is no issue running the model from the hub and using the ‘sentiment-analysis’ pipeline. But there seems to be some problem when it comes to the Hosted Inference API.
Can someone help me with this?
Thanks!
|
maybe @mfuntowicz or @julien-c can help?
| 0 |
huggingface
|
🤗Transformers
|
Seq2seq evaluation speed is slow
|
https://discuss.huggingface.co/t/seq2seq-evaluation-speed-is-slow/282
|
While running the seq2seq examples following the Readme I found that training is relatively fast and uses >50% of the GPU, while evaluation (with the exact same batch size) is painfully slow with low GPU utilization. What am I doing wrong?
|
As a guess, evaluation requires a generation procedure, whereas training uses teacher-forcing and cross-entropy loss. This might not be the case though, as the validation step may include some generation - I would highly recommend using the PyCharm debugger to find out which part of the code is taking so long.
| 0 |
huggingface
|
🤗Transformers
|
Is TFAlbert model pre-trainable?
|
https://discuss.huggingface.co/t/is-tfalbert-model-pre-trainable/293
|
Hello,
I’m interested in pre-training the Albert model on a large set of domain specific data. I see on GitHub that there is a TFAlbertForPreTraining class, but it is not showing up in the docs.
Is this class only there for the loading of pre-trained Albert models, or can I make use of this to do pre-training myself?
|
I think the team just forgot to put them in the docs. You can make a PR to add them or I’ll fix that next week when I’m back from vacation.
| 0 |
huggingface
|
🤗Transformers
|
How to reinit attention head
|
https://discuss.huggingface.co/t/how-to-reinit-attention-head/267
|
hi, i would like to reinit some specified head, like L=10, H=3. how can i do that?
thx
|
It depends on your model for the exact code, and the framework you want to use (PyTorch or TensorFlow) so more details would help us help you
In general, you can access the layers as attributes of your model, so by picking the right layer through them, you should be able to reinit it.
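For example, a minimal PyTorch sketch for a BERT-style model, assuming "L=10, H=3" means (zero-indexed) layer 10, head 3; the head's slice of the query/key/value projections is re-initialized in place:
import torch
from transformers import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
layer_idx, head_idx = 10, 3
attn = model.encoder.layer[layer_idx].attention.self
head_size = attn.attention_head_size
rows = slice(head_idx * head_size, (head_idx + 1) * head_size)

with torch.no_grad():
    for proj in (attn.query, attn.key, attn.value):
        # re-initialize only the rows belonging to the chosen head
        proj.weight[rows, :].normal_(mean=0.0, std=model.config.initializer_range)
        proj.bias[rows].zero_()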
| 0 |
huggingface
|
🤗Transformers
|
What’s the difference between a QA model trained with SQuAD1.0 and SQuAD2.0?
|
https://discuss.huggingface.co/t/whats-the-difference-between-a-qa-model-trained-with-squad1-0-and-squad2-0/309
|
Hi guys,
I’d like to understand if there was any architectural difference between a Question-Answering model trained with the SQuAD1.0 dataset and SQuAD2.0 dataset. From what I understand the model trained with SQuAD2.0 should be capable of understanding if no answer can be provided given a certain question-context pair. Does it do so by giving a lower score to the most-likely answer in the context?
Moreover, how’s the score of an answer exactly calculated ( I’m referring to the score provided by the question-answering pipeline)?
|
Hi @z051m4, architectural there’s no difference. SQuADv2 has adversarial questions which look like correct questions but have no answers. With squad2 the models are trained to output bos token if no answer is present instead of giving wrong answer. This enables the model to differentiate between answerable and non answerable questions.
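On the score question: roughly speaking, the pipeline softmaxes the start and end logits and multiplies the probabilities of the chosen span. A simplified sketch (the real pipeline adds masking and span-length constraints, so treat this as an approximation):
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")

inputs = tokenizer("Who wrote Hamlet?", "Hamlet was written by William Shakespeare.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start_probs = torch.softmax(outputs.start_logits, dim=-1)[0]
end_probs = torch.softmax(outputs.end_logits, dim=-1)[0]
# the real pipeline only considers valid spans (start <= end, limited length)
start_idx = int(start_probs.argmax())
end_idx = int(end_probs.argmax())
score = float(start_probs[start_idx] * end_probs[end_idx])
answer = tokenizer.decode(inputs.input_ids[0][start_idx:end_idx + 1])
print(answer, score)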
| 0 |
huggingface
|
🤗Transformers
|
[PYTORCH] Trace on CPU and use on GPU
|
https://discuss.huggingface.co/t/pytorch-trace-on-cpu-and-use-on-gpu/181
|
Hi All,
Is is possible to trace the GPT/Bert models on CPU and use that saved traced model on GPU?
I see a constant called “device” in the traced graph which seems persists in the saved models. This causes the model to be usable only on device where its traced, ie., if its traced on GPU devices then the saved JIT model is only usable on GPU hosts. I am using PyTorch 1.4 and transformers 0.3.2
Am I missing something here?
Thanks,
|
Not sure if I’m right but I think you can specify the device to load the traced/saved model to in the load function with the map_location parameter
https://pytorch.org/docs/master/generated/torch.jit.load.html#torch.jit.load 94
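A small sketch of that idea (the file name and target device are illustrative): trace and save on the CPU host, then pass map_location when loading on the GPU host.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True).eval()

inputs = tokenizer("Hello world", return_tensors="pt")
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
torch.jit.save(traced, "traced_bert_cpu.pt")

# on the GPU host:
gpu_model = torch.jit.load("traced_bert_cpu.pt", map_location=torch.device("cuda:0"))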
| 0 |
huggingface
|
🤗Transformers
|
How to yield hidden_states from a saved, fine-tuned (distil)bert model?
|
https://discuss.huggingface.co/t/how-to-yield-hidden-states-from-a-saved-fine-tuned-distil-bert-model/227
|
I have a model that I have fine-tuned and saved. Now, I would like to use those saved weights to yield the hidden_state embeddings, in an attempt to see how they work in other models like a CNN, but I am unable to.
Here is my process
# load pretrained distilbert
model = DistilBertForSequenceClassification.from_pretrained('C:\\Users\\14348\\Desktop\\pretrained_bert',
output_hidden_states=True)
tokenizer = DistilBertTokenizer.from_pretrained('C:\\Users\\14348\\Desktop\\pretrained_bert')
I tokenized my text in the exact same way that I did to fine-tune the saved model above. One example looks like:
b_input_ids[0]
tensor([ 101, 1000, 22190, 10711, 1024, 2093, 3548, 2730, 1999, 8479,
1999, 1062, 2953, 18900, 1000, 2405, 2011, 16597, 2376, 1997,
24815, 4037, 2006, 1020, 2285, 2429, 2000, 1037, 3189, 2013,
22190, 10711, 2874, 1010, 1037, 3067, 7738, 2001, 3344, 2041,
2006, 3329, 3548, 1997, 1996, 4099, 1010, 2040, 2020, 4458,
2083, 2019, 2181, 2006, 1996, 14535, 2480, 1011, 1062, 2953,
18900, 2364, 2346, 1999, 1062, 2953, 18900, 2212, 1997, 2023,
2874, 2012, 2105, 6694, 2023, 2851, 1012, 1996, 3189, 9909,
2008, 2093, 3548, 2020, 2730, 2006, 1996, 3962, 2004, 1037,
2765, 1997, 1996, 8479, 1012, 102, 0, ... ])
Now when I go to grab the embeddings like so:
# ignore compute graph
with torch.no_grad():
logits, hidden_states = model(input_ids=b_input_ids,
attention_mask=b_masks)
I get the following error:
IndexError: index out of range in self
But if I generate a new sentence on the fly, I can get embeddings for it no problem:
input_sentence = torch.tensor(tokenizer.encode("My sentence")).unsqueeze(0)
# ignore compute graph
with torch.no_grad():
logits, hidden_states = model(input_ids=input_sentence)
len(hidden_states)
8
logits
tensor([[ 0.2188, -0.0540]])
Thanks for your time and help!
|
Hi @afogarty85, can you post the full stack trace and also the shapes b_input_ids and b_masks
| 0 |
huggingface
|
🤗Transformers
|
How to get NER pipeline output to match with spacy’s output?
|
https://discuss.huggingface.co/t/how-to-get-ner-pipeline-output-to-match-with-spacys-output/230
|
Hi all,
In a prod setup, I am already using transformers, and need to have NER for a task. But the issue I’m facing is that unlike spacy, here the NER is at token level. What could be the quickest way or postprocessing to generate an output like spacy does (with char indices, at string level)?
image1085×536 81 KB
And thanks for hosting this forum. Just like other discourse forums, we can now ask all the simple curiosity questions without treating it as an “issue” on github.
|
hi @crazydiv
Thank you for joining the forum
I think this issue is fixed in this 21 PR
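With that fix in, the pipeline can group token-level predictions back into entity spans, along these lines (the argument name is an assumption and has changed across versions, e.g. grouped_entities vs. aggregation_strategy):
from transformers import pipeline

ner = pipeline("ner", grouped_entities=True)
print(ner("Hugging Face is based in New York City."))
# e.g. [{'entity_group': 'ORG', 'word': 'Hugging Face', ...}, {'entity_group': 'LOC', 'word': 'New York City', ...}]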
| 0 |
huggingface
|
🤗Transformers
|
Multi GPU fintuning BART
|
https://discuss.huggingface.co/t/multi-gpu-fintuning-bart/220
|
Hi,
I am trying to fine-tune the BART model checkpoints on a large dataset(around 1M data points). Since the dataset is large, I want to utilize a multi-GPU setup but I see that because of this line 29 it’s not currently possible to train in a multi-gpu setting. Any work arounds for it?
@sshleifer Tagging you here since you’ve worked with BART and summarization in particular a lot on the repo.
|
Which task are you finetuning on?
For sequence to sequence tasks, like summarization, examples/seq2seq/finetune.py supports multigpu for training only. There is a caveat: you have to run the final eval yourself on one GPU.
For language modeling tasks, multi-gpu is supported through the Trainer class.
| 0 |
huggingface
|
🤗Transformers
|
Transformers Huge Community Feedback
|
https://discuss.huggingface.co/t/transformers-huge-community-feedback/120
|
So last week we shared the first feedback request on transformers. The community was pretty amazingly involved in this with close to 900 detailed feedback forms to analyze and dive into, containing more than 50 thousand words of open answers
In this post, I would like, first, to deeply thank the community for this amazing feedback and for the carefully crafted answers people were so keen to share. It’s really humbling to read all these encouraging and more critical words, from the thank-yous to the detailed critiques of the library, in which people tried to be so helpful. I wanted to thank you all one by one, if only I had all your email addresses.
In the following post, I’d like to try to summarize and share the most interesting takeaways from this huge corpus.
Let me first start with our users
transformers has three big user communities of roughly equal sizes (in the respondents):
image1398×796 49.4 KB
Researchers (blue)
ML engineers (red)
Data scientists (green)
Comparing the answers from each community can give some hints:
Researchers :
The oldest and core users
They probably often develop or study models
They are more often using master directly and want verbose, easy-to-customize models and examples. Some would like to be able to train from scratch. They usually don’t want to use high-level training abstractions and encapsulated examples.
ML engineers :
They often joined a bit after the researchers community
They probably often push models in production applications
They are more often using recent PyPi versions. They are interested in fast inference, fp16, ONNX, TPU, multi-GPU, production-ready models. Some of them like training abstractions, some don’t.
Data scientists :
The most recent community of users (many have been using it since less than 3 months)
They probably often use models for data-analytics (i.e. no strong reqs for perf)
They are often beginning to use transformers models. They are interested in fast prototyping tools and examples that they can easily and quickly adapt to their use cases. They often don’t care much about the model internals or having access to training details, but they are very interested in diving into and mastering data-processing.
There are also a lot of common points between these communities (they have mostly common interests) so don’t be distracted by these apparent differences, almost all of our users want recent (SOTA) models in an easy-to-use API
For how long have our users been using the library
image2130×740 98.2 KB
There is a significant influx of new users (green + purple are < 3 months users). One-third of the respondents have been using the library for less than 3 months!
The longest-standing users are researchers, followed by ML engineers. Data scientists are more recent users (40% of them have been using transformers for less than 3 months).
work or fun
image2124×676 62.8 KB
Most users are using transformers for work (80% overall)
Researchers are somewhat the more serious community (>90% using it for work)
Data scientists are using it more for fun than the other communities at the moment (maybe also because they are still discovering it).
Which version
image2130×740 153 KB
Many users are on the latest or two latest versions (blue+red+green+purple)
Researchers use master (red) more than the other communities (maybe because they tweak the models more).
ML engineers are somewhat a bit more conservative (more users on 2.11.X (green))
Data scientists tend to use master (red) less than the other communities (maybe because they customize the models less than the other groups)
Would you recommend the library?
1488×650 40.5 KB
Importance of various features in the examples
image2192×970 170 KB
User specific interests:
1600×885 96.3 KB
Three features are rated as most essential in the examples overall by all user-communities:
Full/transparent access to data-processing
Full access to training loop
Simple data-downloading and processing
People care less about TPU support
Some more community-specific interests:
Researchers want more strongly to avoid encapsulated training than the other communities
ML engineers are more interested in FP16, multi-GPU, TPU than the other communities
Data Scientists care less about training and optimization than the other communities (more ok with encapsulated trainer logic) and care more about the data processing
What to prioritize
1600×649 121 KB
Users ask for more priority on:
Adding examples for NLP tasks + easier to adapt to more real-life scenarios
Keep adding SOTA models
Less interesting for most users:
Refactor the code toward modularization
What do you like the most
1310×1274 491 KB
Most frequent reasons are:
Easy to use and simplicity
Many SOTA pretrained models
Community
Short summary of the top 300 strongest likes (apart from above mentioned top 3):
Pipelines <= many people like them
AutoModels <= many people like them as well
Easy to tweak - self-contained-verbose - non-modularization - no code reuse
Good doc
Model hub
PyTorch and TF interop
Transparency
What do you dislike the most
1310×1274 518 KB
Some top dislikes are:
Need more examples, more real-use-cases and on how to load your own datasets for certain tasks - More examples on using transformers for text or token classification
More tutorials and lack of guidance for simple examples
Too frequent breaking changes
Too much modularization - Model code is spread across too many files
Examples are too encapsulated – examples are hard to unpack sometimes
Trainer
Less support for Tensorflow 2.0
model hub is a bit messy, it’s hard to find the right model
…
Open-feedback
1310×1274 499 KB
The most noticeable one:
Thanks! <=
…
|
I am positively impressed by the number of responses you got! Looking at the “would you recommend” graph, most people seem to have taken the questionnaire seriously, too. (Not too many trolls.)
Some things that I am surprised by:
I had expected more data scientists and ML engineers and less researchers
I am surprised that there are so many long-time users because it often feels that the library is continuously growing with many new users but less interaction from older users (perhaps the forums will change that!)
67, 9, 123 as an answer to a “work or fun” question
Big one: it seems that there is quite a preference for DataParallel over DistributedDataParallel. I wonder why this is. What are use cases where you can use DP but not DDP? In all other cases, DDP should perform better in terms of speed.
I do understand the comments about modularization, that the code base seems to be spread over too many files. On the one hand I think this can be partially solved by using a folder structure for the models, tokenizers, and utils. A directory structure already makes the code base more readable. On the other hand, though, since there is a lot of inheritance (e.g. PretrainedModel, BertModel, RobertaModel), you cannot get around separate building blocks. It is necessary to allow for custom models and rapid research where other building blocks can be reused.
The most important thing that I see in all of this, though, is the highlighted keyword in the likes: community. I definitely agree with it. The catch phrase of HF is to democratize NLP, and I strongly believe that this is exactly what you guys are doing. Let’s have the forum be another step into that direction. A big congrats on all the work that you do!
| 0 |
huggingface
|
🤗Transformers
|
High Level Overview Blogpost
|
https://discuss.huggingface.co/t/high-level-overview-blogpost/118
|
Which blogpost/doc is the best for getting a high level overview of the library, like a developer guide or something?
|
The new quick tour! https://huggingface.co/transformers/quicktour.html 35
| 0 |
huggingface
|
🤗Datasets
|
Multiple call datasets.load_from_disk() cause Memory Leak!
|
https://discuss.huggingface.co/t/multiple-call-datasets-load-from-disk-cause-memory-leak/13602
|
My dataset is stored on HDFS and the size is too large to save on local disk.
Using load_from_disk to pull them all down and then concat them will be a waste of time, especially in the case of a large number of workers in distributed training.
So I implemented an IterableDataset to load a file from hdfs at a time, the code below:
class StreamFileDataset(IterableDataset):
def __init__(self, data_dir, cycle_mode=False):
self.data_dir = data_dir
self.cycle_mode = cycle_mode
self._is_init = False
def _config_fs(self):
if self.data_dir.startswith("hdfs://"):
self.fs = HadoopFileSystem()
self.data_dir = self.data_dir.replace("hdfs:/", "")
self.data_files = sorted(self.fs.ls(self.data_dir))
else:
self.fs = None
self.data_files = sorted(glob.glob(os.path.join(self.data_dir, "*")))
def _config_multi_worker(self):
worker_info = data.get_worker_info()
if worker_info is not None:
worker_id = worker_info.id
num_worker = worker_info.num_workers
indexes = range(worker_id, len(self.data_files), num_worker)
self.data_files = [self.data_files[i] for i in indexes]
if self.cycle_mode:
self.data_files = itertools.cycle(self.data_files)
def _init(self):
if not self._is_init:
self._config_fs()
self._config_multi_worker()
self._is_init = True
def __iter__(self):
self._init()
for data_file in self.data_files:
data = datasets.load_from_disk(data_file, fs=self.fs)
for d in data:
yield d
# Manually delete data to avoid memory leaks
del data
But bad things happen now: there is a memory leak here!
image1222×384 21 KB
The memory increase in the image above happens when load_from_disk reads the next file
Then I did a test:
for data_file in self.data_files:
print("before")
print(f"RAM used: {psutil.Process().memory_info().rss / (1024 * 1024):.2f} MB")
data = datasets.load_from_disk(data_file, self.fs)
print("after")
print(f"RAM used: {psutil.Process().memory_info().rss / (1024 * 1024):.2f} MB")
The memory is gradually growing!!
I also experimented at the same time, even if the data is saved locally, there will be a memory leak.
Is this a bug, or is there any other solution?
|
any suggestion?
| 0 |
huggingface
|
🤗Datasets
|
Hugging Face Course Chapter 7 Token Classification dataset error
|
https://discuss.huggingface.co/t/hugging-face-course-chapter-7-token-classification-dataset-error/14143
|
When trying to download the CoNLL dataset a file not found error is raised.
from datasets import load_dataset
raw_datasets = load_dataset("conll2003")
FileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt
I saw that David Batista removed the dataset from its original location on January 14 2022 and that may be causing the error (?).
Here is the link to his commit.
|
I think you need to upgrade to the latest version of datasets (cc @lhoestq )
| 1 |
huggingface
|
🤗Datasets
|
Which URLs should be reachable to work with Huggingface hub
|
https://discuss.huggingface.co/t/which-urls-should-be-reachable-to-work-with-huggingface-hub/14055
|
Hi, I am configuring a server that should be able to reach the Huggingface Hub. In particular, I would like to be able to use the Datasets library to download public datasets, as well as retrieve pretrained models and tokenizers.
I need to specify the URL I will need to reach in order to get them whitelisted in our proxies. Is there a list somewhere? If I try to donwload a dataset without an internet connection, I see the process fail while trying to reach https://raw.githubusercontent.com, but I am not sure whether everything is hosted there or I should enable more domains
|
Oh no, I see that every dataset connects to a custom URL. I had hoped everything was actually hosted centrally by Hugging Face; everything would have been so much simpler then.
| 0 |
huggingface
|
🤗Datasets
|
Proprietary database load error: TypeError: Argument ‘storage’ has incorrect type (expected pyarrow.lib.Array, got pyarrow.lib.ChunkedArray)
|
https://discuss.huggingface.co/t/proprietary-database-load-error-typeerror-argument-storage-has-incorrect-type-expected-pyarrow-lib-array-got-pyarrow-lib-chunkedarray/14050
|
I’m currently trying to load in a proprietary dataset from disk. When the script finishes loading the train split I recieve the error: TypeError: Argument ‘storage’ has incorrect type (expected pyarrow.lib.Array, got pyarrow.lib.ChunkedArray).
The files themselves are a large set of videos. I load in each video using pims and extract a set of frames as PIL Images. These are then stored in dataset rows with a few other bits of info e.g. class-labels and filenames.
The script works fine when I extract a single frame from each video, but generates the TypeError with any more. I can only presume it is something to do with memory size; 10 frames per video uses ~16GB of my 32GB of memory.
Is there some way to do intermediate Arrow-file writes to lower memory overhead?
Alternatively is there some way to pass user defined variables to a load_dataset function, such that I can load the dataset multiple times extracting different frames and concatenate the resultant datasets?
Alternatively any ideas on how to fix the error? I’m using datasets 1.17.0
Thanks in advance
|
Hi! Could you please copy and paste the entire stack trace?
Is there some way to do intermediate Arrow-file writes to lower memory overhead?
You can control RAM usage in dataset scripts with the DEFAULT_WRITER_BATCH_SIZE attribute of GeneratorBasedBuilder (as we do in this script).
The files themselves are a large set of videos. I load in each video using pims and extract a set of frames as PIL Images. These are then stored in dataset rows with a few other bits of info e.g. class-labels and filenames.
In datasets 1.18.0, we added support for nested decoding of the Image feature which is ideal for your use-case. To use it, just define the features dict as:
features = Features({
"frames": Sequence(Image(), length=10),
"meta": ...
})
and yield data as:
yield idx, {
"frames": [pil_img_frame1, pil_img_frame2, ...],
"meta": ...,
}
| 0 |
huggingface
|
🤗Datasets
|
Pretrained model ‘Helsinki-NLP/opus-mt-en-ar’ is not available in TFAutoModelForSeq2SeqLM
|
https://discuss.huggingface.co/t/pretrained-model-helsinki-nlp-opus-mt-en-ar-is-not-available-in-tfautomodelforseq2seqlm/14037
|
I was trying to run this line from translation-tf.ipynb
from transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
model = TFAutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-en-ar')
404 Client Error: Not Found for url: https://huggingface.co/Helsinki-NLP/opus-mt-en-ar/resolve/main/tf_model.h5
But it was runinng in the previous version like this:
model = AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-en-ar')
Any help? how could I use the new instructions?
|
Hello
While you initialize TFAutoModelForSeq2SeqLM you need to initialize it with TFAutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-en-ar', from_pt = True) because the TF model itself doesn’t exist but TFAutoModelForSeq2SeqLM is implemented, meaning, you can convert PyTorch weights to TensorFlow and use it as a TF model.
| 1 |
huggingface
|
🤗Datasets
|
Could I download the dataset manually?
|
https://discuss.huggingface.co/t/could-i-download-the-dataset-manually/14008
|
Due to the connection error I cannot download some datasets from original URL, such as librispeech. But I can download it manually and store it. So how can I make the datasets package recognize it? I mean, where should I put the dataset, like ‘~/.cache/data/librispeech’ or somewhere else? Or how can I change the original code of datasets and it can know the dataset location.
Thanks!
|
Hi! Could you please copy & paste the connection error you get? To work with the local data, you’ll have to download the librispeech script from our repo and modify it in the way it reads the data from the downloaded directory - you can pass the path to the data directory as follows:
from datasets import load_dataset
dset = load_dataset("path/to/dir/of/your/modifiedlibrispeech/script", data_dir="path/to/librispeech/data")
and access the data_dir value in the modified librispeech script as follows:
def _split_generators(self, dl_manager):
local_data_path = dl_manager.manual_dir
...
| 0 |
huggingface
|
🤗Datasets
|
Image dataset best practices?
|
https://discuss.huggingface.co/t/image-dataset-best-practices/13974
|
Hi! I’m one of the founders of Segments.ai 4, a data labeling platform for computer vision. We’re working on an integration with HuggingFace, making it possible to export labeled datasets to the hub.
From reading the docs and toying around a bit, I found there’s a few potential ways to set up an image dataset:
Keep the image files out of the repository, and download them from their URLs (they’re hosted in the cloud) in the dataset loading script. The disadvantage here is that if the image URLs ever become unavailable, the dataset also won’t work anymore.
Store the image files in the repository, packed together in a few large parquet files, using git-lfs. This is basically what happens when you create a dataset with an image column locally, and run dataset.push_to_hub(). See this dataset 2.
Store the image files in the repository as individual jpg/png files, using git-lfs. Compared to the previous approach, this requires a custom dataset loading script. This seems cleaner from a versioning point of view: when images are added or removed later on, it leads to a compact diff compared to working with the parquet files. But perhaps it’s not ideal to have so many small files in a git-lfs repo.
Do you have any recommendations on what would be the cleanest approach that is considered best practice?
|
Hi Bert, thanks for reaching out, and good job with segments.ai !
You mentioned three different ways of hosting an image dataset, and they’re all sensible ideas. Here are a few aspects that can help deciding which one is best depending on your case:
Storing the URLs. It has several disadvantages: less convenient, less reproducibility, and probably doesn’t work in the long run. This should be avoided as much as possible IMO. However for certain datasets with copyright/licensing issues this can still be a solution, in particular if you’re not allowed to redistribute the images yourself.
Use Parquet files (e.g. from push_to_hub). It’s nice on several aspects:
a. You can store much more than just the images: parquet is a columnar format so you can have one column for the image data, one for the labels, and even more columns for metadata for example.
b. It has compression that makes it suitable for long-term storage of big datasets.
c. It’s a standard format for columnar data processing (think pandas, dask, spark)
d. It is compatible with efficient data streaming: you can stream your image dataset during training for example
e. It makes dataset sharing easy, since everything is packaged together (images + labels + metadata)
f. Parquet files are suitable for sharding: if your dataset is too big (hundreds of GB or terabytes), you can just split it in parquet files of reasonable size (like 200-500MB per file)
g. You can append new entries simply by adding new parquet files.
h. You can have random-access to your images easily
i. It works very well with Arrow (the back-end storage of HF Datasets)
However as you said updating the dataset requires to regenerate the parquet files
Store the raw images. It Is very flexible since you can add/update/remove images pretty easily. It can be convenient especially for small datasets, however:
a. You’ll start to have trouble using such a format for big datasets (hundreds of thousands of images). It may require extra effort to structure your directories to find your images easily and align them with your labels or metadata.
b. You need to use some standard structures to let systems load your images and your labels automatically. Those structures are often task specific, and need to be popular enough to be supported in your favorite libraries. Alternatively you might need to implement the data loading yourself.
c. It’s also extremely inefficient for data streaming, since you have to fetch the images one by one.
d. To share such datasets you have to zip/tar them
To conclude on this:
Option 2. Is the go-to solution if your dataset is big, frozen or if you need fancy stuff like parallel processing or streaming.
Option 3. Is preferable only for a small dataset, when constructing it or when you need flexibility
Option 1. Should be avoided, unless you have no other option
cc @osanseviero
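For reference, a minimal sketch of option 2 (the repo id, file paths and labels are made up):
from datasets import ClassLabel, Dataset, Features, Image

features = Features({
    "image": Image(),
    "label": ClassLabel(names=["cat", "dog"]),
})
ds = Dataset.from_dict(
    {"image": ["imgs/cat_0.jpg", "imgs/dog_0.jpg"], "label": [0, 1]},
    features=features,
)
# stored as parquet shards on the Hub, images + labels packaged together
ds.push_to_hub("my-org/my-image-dataset")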
| 1 |
huggingface
|
🤗Datasets
|
How to customize the “User Access requests” message?
|
https://discuss.huggingface.co/t/how-to-customize-the-user-access-requests-message/13953
|
Hello,
I’m adding a dataset to the hub, which seems very usable and great! I just have one question. Under “settings”->“user access requests”, the dataset owner can toggle an access request message that a user must acknowledge before accessing the dataset. I’m wondering how I can customize the message? It seems possible, because the access request message for this dataset is custom: mozilla-foundation/common_voice_7_0 · Datasets at Hugging Face
Thanks!
|
Hi! Yes, first enable the “user access requests” setting and then in the dataset README file specify a custom message in the extra_gated_prompt YAML field (use the mozilla-foundation/common_voice_7_0 README as an example).
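For example, the top of the README (the YAML metadata block) would look something like this; the wording of the prompt is just a placeholder:
---
extra_gated_prompt: >-
  By accessing this dataset, you agree to use it for research purposes only
  and to cite the original authors.
---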
cc @julien-c IMO it makes more sense to be able to specify this message directly in the repo settings and not in the README file.
| 1 |
huggingface
|
🤗Datasets
|
How to create a dataset like common voice?
|
https://discuss.huggingface.co/t/how-to-create-a-dataset-like-common-voice/13940
|
Hi,
I am trying to create an Indic version of Common Voice. I am new to datasets, so I am not sure how to proceed with this.
Can anyone please help me decide the structure and format of the files? I have a dataset in 6 languages. For every language I have a train, dev, test split.
What I am thinking is this:
--hi
-- --train
-- --dev
-- --test
--mr
-- --train
-- --dev
-- --test
I am planning to upload zip files for all the train, dev and test sets. Will zips be supported, or do I have to upload individual files?
cc: @patrickvonplaten
|
You can use the same structure as common voice has. Its structure is useful, don’t think it’s a need to create own one.
| 0 |
huggingface
|
🤗Datasets
|
How to create custom ClassLabels?
|
https://discuss.huggingface.co/t/how-to-create-custom-classlabels/13650
|
I would like to turn a column in my dataset into ClassLabels 1.
For my use case, i have a column with three values and would like to map these to the class labels.
Creating the labels and setting the column is fairly straightforward:
# "basic_sentiment holds values [-1,0,1]
feat_sentiment = ClassLabel(num_classes = 3,names=["negative", "neutral", "positive"])
dataset = dataset.cast_column("basic_sentiment", feat_sentiment)
Now ClassLabel has three labels: 0 - negative, 1 - neutral, 2-positive, while the data still has values -1 to 1.
How can i set the ClassLabels to use the labels in the columns? Or do i have to set my column to fit the labels of the ClassLabel now?
I couldn’t find an in-depth explanation on how to use this feature in Hugging Face datasets.
|
Yes, currently only the [0, ..., num_classes - 1] range is supported when casting to ClassLabel.
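A minimal sketch of the workaround, shifting the raw values into that range before casting (column and label names follow the question above):
from datasets import ClassLabel, Dataset

dataset = Dataset.from_dict({"text": ["a", "b", "c"], "basic_sentiment": [-1, 0, 1]})
feat_sentiment = ClassLabel(names=["negative", "neutral", "positive"])
# shift -1/0/1 to 0/1/2 so the values line up with the ClassLabel indices
dataset = dataset.map(lambda x: {"basic_sentiment": x["basic_sentiment"] + 1})
dataset = dataset.cast_column("basic_sentiment", feat_sentiment)
print(dataset.features["basic_sentiment"])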
| 1 |
huggingface
|
🤗Datasets
|
KeyError: ‘Field “builder_name” does not exist in table schema’
|
https://discuss.huggingface.co/t/keyerror-field-builder-name-does-not-exist-in-table-schema/12547
|
Hello!
I have a 3-level nested DatasetDict for the MultiDoGo datasets (paper splits, domain, train/dev/test) that I am trying to upload on the Hub as a community dataset:
huggingface.co
jpcorb20/multidogo · Datasets at Hugging Face 6
When I am testing the downloading afterwards, I get:
KeyError: ‘Field “builder_name” does not exist in table schema’
Seems like something is not right in the dataset_dict.json’s fields… How can I solve this issue?
I have the version datasets 1.16.1.
Thank you
|
I have encountered a similar issue recently. I observed that the schema was not exactly the same among all the files in the dataset and because of this, load_dataset() was failing. So my guess is that most probably one of your files might not have the field ‘builder_name’.
| 0 |
huggingface
|
🤗Datasets
|
How to move cache between computers
|
https://discuss.huggingface.co/t/how-to-move-cache-between-computers/13605
|
I have a server equipped with GPUs without internet access. I would like to run some experiments there, and for that I need to download datasets locally and move the downloaded files on the server.
What is the correct procedure to do that? I just copied the .cache/huggingface/datasets directory hoping it would work, but the library still tries to access the internet. I think this may be related to the fact that some metadata (especially a lock file) in there seems to be tied to the user on my local machine, which is different from the server.
I tried to explicitly pass download_mode="reuse_cache_if_exists", and I also tried to pass data_dir directly, but I did not manage to load the cached dataset directly from disk in any case. An example even just with the mnist dataset would be welcome!
|
Hi! Instead of copying the entire cache directory, use Dataset.save_to_disk locally to save the dataset to a specifc directory and then move only that directory to the server. In the final step, call datasets.load_from_disk on the server to load the dataset from the copied directory.
Additionally, you can speed up the process by using sftp/ssh to move the directory to the server:
dset.save_to_disk(path, fs=fsspec.filesystem("sftp", host=host, port=port, username=username, password=password))
| 1 |
huggingface
|
🤗Datasets
|
How do I set feature type when loading dataset(ClassLabel etc)?
|
https://discuss.huggingface.co/t/how-do-i-set-feature-type-when-loading-dataset-classlabel-etc/13873
|
I am loading my dataset from a local file, and I’m getting error “TypeError: new(): invalid data type ‘numpy.str_’” which I believe is due to the features not being defined
It’s mentioned here and a solution is to pass a features dictionary when loading. But I am having trouble with the format.
I’ve tried things like :
emotions = load_dataset("csv", data_files="train.txt", sep=";",
names=["text", "label"],features = {'text': datasets.Value(dtype='int32', id=None),
'label':datasets.ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None)})
and
load_dataset("csv", data_files="train.txt", sep=";",
names=["text", "label"],features = {'text': 'str',
'label':['not_equivalent', 'equivalent']})
Without success.
I’m trying to follow the documentation here but can’t seem to figure it out…is there an example of how to do this somewhere? Thanks!
https://huggingface.co/docs/datasets/_modules/datasets/features/features.html#Features
|
Hi! This should work:
import datasets
features = datasets.Features({"text": datasets.Value("string"), "label": datasets.ClassLabel(names=['not_equivalent', 'equivalent'])})
dset = datasets.load_dataset("csv", data_files="train.txt", sep=";", names=["text", "label"], features=features)
Also note that the label column in your csv file has to contain numbers as labels (0 and 1) and not strings (not_equivalent and equivalent), otherwise you’ll get an error.
| 0 |
huggingface
|
🤗Datasets
|
Add_column() does not work if used on dataset sliced with select()
|
https://discuss.huggingface.co/t/add-column-does-not-work-if-used-on-dataset-sliced-with-select/13893
|
Hello all, say I have a dataset with 2000 entries
dataset = Dataset.from_dict({'colA': list(range(2000))})
and from which I want to extract the first one thousand rows, create a new dataset with these and also add a new column to it:
dataset2 = dataset.select(list(range(1000)))
final_dataset = dataset2.add_column('colB', list(range(1000)))
This gives an error
ArrowInvalid: Added column’s length must match table’s length. Expected length 2000 but got length 1000
I’ve experimented with the arguments of the select method, but I did not find a way to surpass this error. Does anyone know why it’s happening and how to resolve it?
Thanks.
|
Hi! Could you please open an issue in our GH repo 1 because this looks like a bug in datasets?
In the meantime, call flatten_indices (dset.flatten_indices()) after select and before add_column.
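Spelled out on the example from the question, that workaround looks like this:
from datasets import Dataset

dataset = Dataset.from_dict({"colA": list(range(2000))})
dataset2 = dataset.select(range(1000)).flatten_indices()
final_dataset = dataset2.add_column("colB", list(range(1000)))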
| 0 |
huggingface
|
🤗Datasets
|
Can we download dataset from folder of text file
|
https://discuss.huggingface.co/t/can-we-download-dataset-from-folder-of-text-file/13230
|
Hi,
My dataset has many text files, I want to first take all the text files as corpus for LM training. Then I will use the files to map the labels from it.
Thanks
|
Hi ! Do you mean that the labels are included in the file names ?
Here is an example on how to load one of the classes using glob patterns:
data_files = {"train": "path/to/data/*<class_name>*.txt"}
dataset = load_dataset("text", data_files=data_files, split="train")
Then you can add the column with the label:
dataset = dataset.add_column("label", ["<class_name>"] * len(dataset))
Finally if you wish to combine the datasets of each class feel free to take a look at concatenate_datasets 1 or interleave_datasets
| 0 |
huggingface
|
🤗Datasets
|
GPU Memory usages varies with the size of the dataset
|
https://discuss.huggingface.co/t/gpu-memory-usages-varies-with-the-size-of-the-dataset/6681
|
Hi,
Does the GPU memory usage vary with the size of the dataset? I am experimenting with the code from here.
gist.github.com
https://gist.github.com/jiahao87/50cec29725824da7ff6dd9314b53c4b3 4
pegasus_fine_tune.py
"""Script for fine-tuning Pegasus
Example usage:
# use XSum dataset as example, with first 1000 docs as training data
from datasets import load_dataset
dataset = load_dataset("xsum")
train_texts, train_labels = dataset['train']['document'][:1000], dataset['train']['summary'][:1000]
# use Pegasus Large model as base for fine-tuning
model_name = 'google/pegasus-large'
train_dataset, _, _, tokenizer = prepare_data(model_name, train_texts, train_labels)
I observed that the GPU memory usage varies with (but is not proportional to) the size of the dataset. I am trying to understand why. Can anyone help explain?
|
The author of the code helped me figure it out. It was because the Dataset class implemented in the code loads the encodings in its __init__ function.
| 0 |
huggingface
|
🤗Datasets
|
How to use Datasets in a distributed system?
|
https://discuss.huggingface.co/t/how-to-use-datasets-in-a-distributed-system/13440
|
Hello,
My application relies on Dataset to manage some data.
We have multiple workers loading the same Dataset to do some computation.
Currently, a single worker writes to the Dataset and adds columns to the table.
I need the other workers to have access to the latest data.
In the past, we were able to overwrite the dataset and the workers would simply reload the new version. But we get an error as of 1.16 saying that we can’t overwrite a dataset.
What is the best way to do this? For now, I can disable caching to avoid the error.
|
Hi ! Indeed it’s not possible to write to an opened dataset anymore (or you may corrupt its data).
Depending on your distributed setup you can either pickle the dataset from one worker to the other (it only pickles the path to the local arrow files to be reloaded, not the actual data), or save the dataset to a new location using save_to_disk and make the other workers reload the dataset from there.
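A sketch of the second option (the paths and the added column are illustrative assumptions):
from datasets import Dataset, load_from_disk

# writer process: build/update the dataset and save it to a new location
dataset = Dataset.from_dict({"colA": [1, 2, 3]})
dataset = dataset.add_column("new_scores", [0.1, 0.2, 0.3])
dataset.save_to_disk("/shared/storage/dataset_v2")

# reader processes: reload from that location
dataset = load_from_disk("/shared/storage/dataset_v2")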
| 0 |
huggingface
|
🤗Datasets
|
Editing existing dataset sheets
|
https://discuss.huggingface.co/t/editing-existing-dataset-sheets/12959
|
Hi,
Couldn’t find a way to edit or suggest edits to dataset sheets.
For example
yelp polarity and yelp full from here GitHub - zhangxiangxiao/Crepe: Character-level Convolutional Networks for Text Classification 2 have BSD-3-Clause License
But the question is more general than that. (Also, contacting members that contributed content would have been nice; is that possible too? E.g. jakeazcona/short-text-labeled-emotion-classification has no documentation, but maybe the authors have something.)
|
Hi,
we plan to add support for Pull Requests to the HF Hub, so you’ll be able to propose changes that way.
cc @julien-c: Maybe we could have an option on the Hub to make e-mails (or GitHub/Twitter username) public and if they are public, show them on the profile page?
| 0 |
huggingface
|
🤗Datasets
|
Streaming datasets and batched mapping
|
https://discuss.huggingface.co/t/streaming-datasets-and-batched-mapping/13498
|
I’m exploring using streaming datasets with a function that preprocesses the text, tokenizes it into training samples, and then applies some noise to the input_ids (à la BART pretraining). It seems to be working really well, and saves a huge amount of disk space compared to downloading a dataset like OSCAR locally.
Since a lot of the examples in OSCAR are much longer than my model’s max size, I’ve been truncating each example to the final whitespace at the end of the first model-size chunk, and throwing a way a ton of data. Not the end of the world, but it feels… wasteful.
I took a look at how MappedExamplesIterable handles batching, and I had a realization. Since __iter__ fetches a batch from the dataset and then just yields each output of the mapped function, there’s no reason the number of processed results needs to be the same as the batch size, right?
The preprocessing function could split the longer examples into smaller chunks, and batch could yield any number of processed examples. It looks like the only thing batch_size is used for is pulling chunks of data from the cloud, and nothing downstream will care how many examples are returned, because they’re yielded one at a time. So a batch in MappedExamplesIterable with batch_size=100 could have 100, or 110, or 3000 or however many examples.
The only downside I see is not knowing how many total examples I’ll have to work with. But with a streaming dataset, I have to train with a predefined max_steps anyway, so that doesn’t seem so bad.
Am I understanding this correctly?
|
This style of batched fetching is only used by streaming datasets, right? I’d need to roll my own wrapper to do the same on-the-fly chunking on a local dataset loaded from disk?
Yes indeed, though you can stream the data from your disk as well if you want.
A dataset in non streaming mode needs to have a fixed number of samples known in advance as well as a mapping index <-> sample. That’s why chunking on-the-fly is not allowed for non-streaming mode.
And I know it’s not your area, but as far as you know, there’s no way to add/change a dataset’s map functions inside the HF Trainer’s train process, is there?
Indeed there’s no such mechanism afaik, I think you would have to subclass the trainer and implement this logic yourself.
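For reference, here is a sketch of the kind of batched map discussed above, where the mapped function returns more rows than it received (the dataset name and column handling are assumptions):
from datasets import load_dataset

def chunk_examples(batch, chunk_len=512):
    # re-emit every column so all output columns share the same (new) length
    out = {"id": [], "text": []}
    for idx, text in zip(batch["id"], batch["text"]):
        for i in range(0, len(text), chunk_len):
            out["id"].append(idx)
            out["text"].append(text[i:i + chunk_len])
    return out

streamed = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
chunked = streamed.map(chunk_examples, batched=True, batch_size=100)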
| 1 |
huggingface
|
🤗Datasets
|
How to sample dataset according to the index
|
https://discuss.huggingface.co/t/how-to-sample-dataset-according-to-the-index/12940
|
Hi, I am training BERT and using the wikipedia dataset. Only a subset of the inputs is needed and I have the indices of them. However, problems occur when I want to use the sub-dataset. This code is extremely slow:
subdataset=wikidataset[selected_indices].
, where selected_indices is a one-dimensional vector. I thought this may be due to the dataset being too large. Is there any way to sample the dataset efficiently?
By the way, I also considered to use SubsetRandomSampler, but it seems this sampler does not work in the distributed training.
|
Hi ! when you use wikidataset[some_indices], it tries to load all the indices you requested in memory, as a python dictionary. This can take some time and fill up your memory.
If you just want to select a subset of your dataset and later train on model on it, you can do
subdataset = wikidataset.select(some_indices)
This returns a new Dataset object that only contains the indices you requested. Moreover this doesn’t bring any data on memory and is pretty fast
| 0 |
huggingface
|
🤗Datasets
|
Please, help me
|
https://discuss.huggingface.co/t/please-help-me/12903
|
I have voice recordings of the names of the cities in my country. I want to build a dataset and train a speech recognition model for my country’s cities.
Please help me with this.
I don’t know if I should go to Mars; I can’t find the answer.
Please, help me
|
Hi ! You can create a dataset from your audio files with
from datasets import Dataset, Features, Audio
features = Features({"audio": Audio()})
dataset = Dataset.from_dict({"audio": list_of_paths_to_my_audio_files}, features=features)
You can also find some documentation about audio dataset processing here: Process audio data — datasets 1.17.0 documentation
| 0 |
huggingface
|
🤗Datasets
|
Downloading community datasets
|
https://discuss.huggingface.co/t/downloading-community-datasets/12941
|
Hi all, I seem to be having issues downloading community datasets in Datasets, I’ve seen in the docs that community datasets are marked as unsafe by default and we must inspect and opt-in to download the datasets, how can I do this?
For example, there is the dataset I’m wanting to download, ashraq/dhivehi-corpus, if I try to download:
import datasets
dv = datasets.load_dataset('ashraq/dhivehi-corpus')
I return a FileNotFoundError, which I assume to be because I haven’t opted in to using community datasets?
Thanks!
Edit: maybe this is because there is no dataset loading script?
|
Seems that this is a non-problem, it was because the dataset mentioned had no download script - and the author of the dataset has now fixed this too.
| 0 |
huggingface
|
🤗Datasets
|
How to push checkpoints greater than 4GB to datasets?
|
https://discuss.huggingface.co/t/how-to-push-checkpoints-greater-than-4gb-to-datasets/13356
|
I am not able to push checkpoints greater than 4GB,
I receive the following error.
Screenshot 2022-01-04 at 2.28.33 PM1398×160 41.5 KB
|
Hi ! The easiest is to split your ZIP archive into multiple archives. Alternatively you try to update Git, since it seems be fixed now 2
| 0 |
huggingface
|
🤗Datasets
|
IPFS cloud storage 2
|
https://discuss.huggingface.co/t/ipfs-cloud-storage-2/13185
|
Hi,
My team are interested in putting in some effort to get IPFS to work with HF. There was a previous thread on this topic (IPFS cloud storage? 2) that talked about using ipfsspec. One of the HF team mentioned: “I’m afraid the project you’ve linked is not mature enough, but we can reconsider adding the IPFS filesystem once this is changed.”
We are happy to put in some work on ipfsspec to get it where it needs to be. I wonder whether it would be possible to talk to someone in the HF team to get some requirements for the package from the HF side?
|
If I understand correctly, the main thing that is missing is write capabilities of the IPFS filesystem in ipfsspec
| 0 |
huggingface
|
🤗Datasets
|
Need to read subset of data files in WMT14
|
https://discuss.huggingface.co/t/need-to-read-subset-of-data-files-in-wmt14/13100
|
I need to load a subset of the data files in WMT14:
dataset.split.Train:[‘gigafren’]
as well as the TEST and VALIDATION sets.
How can I do it? I have tried using the split function and data_files, but I was unable to make it work. Kindly help.
|
Hi ! I think you can just create your own wmt_14_gigafren_only dataset script.
To do so you can download the python files from wmt_14 here: datasets/datasets/wmt14 at master · huggingface/datasets · GitHub. Then rename wmt_14.py to wmt_14_gigafren_only.py and edit this file to only keep gigafren in the _subsets part of the code.
Finally you just need to do load_dataset("path/to/wmt_14_gigafren_only.py") and you’re done
| 0 |
huggingface
|
🤗Datasets
|
Does huggingface support load raw text dataset from hdfs?
|
https://discuss.huggingface.co/t/does-huggingface-support-load-raw-text-dataset-from-hdfs/13233
|
I noticed that both load_from_disk and save_to_disk support the ‘fs’ parameter, but load_dataset does not.
I want to know if it is possible to load text data from HDFS via load_dataset.
Any suggestion will be appreciated.
|
No it doesn’t, only load_dataset supports streaming mode
| 1 |
huggingface
|
🤗Datasets
|
ClassLabels when using push_to_hub
|
https://discuss.huggingface.co/t/classlabels-when-using-push-to-hub/13214
|
Hi!
Congrats on this amazing library.
I am uploading a dataset programmatically using push_to_hub and defining the features as follows:
# ds contains text and label strings
hf_ds = Dataset.from_dict(
ds,
features=Features({
"text": Value("string"),
"label": ClassLabel(names=['World', 'Sports', ..])
})
)
hf_ds.push_to_hub("Recognai/corrected_labels_ag_news")
The thing is that even if I see the ClassLabel feature when I do hf_ds.features, the result on the dataset preview shows the labels as int and seems to indicate they’ve been given the int type.
Is there something I’m doing wrong on my side?
For reference this is the dataset: Recognai/corrected_labels_ag_news · Datasets at Hugging Face
|
Sorry I’ve seen this has already been tackled here:
github.com/huggingface/datasets: Push dataset_infos.json to Hub 1 (huggingface:master ← huggingface:push-dataset_infos.json-to-hub, opened Dec 21, 2021 by lhoestq, +206 -39)
When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394).
This PR fixes this by also pushing a `dataset_infos.json` file to the Hub, that stores the feature types.
Other minor changes:
- renamed the `___` separator to `--`, since `--` is now disallowed in a name in the back-end.
I tested this feature with datasets like conll2003 that has feature types like `ClassLabel` that were previously lost.
Close https://github.com/huggingface/datasets/issues/3394
I would like to include this in today's release (though not mandatory), so feel free to comment/suggest changes
| 0 |
huggingface
|
🤗Datasets
|
Unable to resolve any data file after loading once
|
https://discuss.huggingface.co/t/unable-to-resolve-any-data-file-after-loading-once/12855
|
When I rerun my program, it raises this error:
“Unable to resolve any data file that matches ‘[’* train ‘]’ at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension [‘csv’, ‘tsv’, ‘json’, ‘jsonl’, ‘parquet’, ‘txt’, ‘zip’]”. How can I deal with this problem?
Thanks.
And below is my code .
|
Hi ! Just copy-pasting the answer I posted on GitHub at Unable to resolve any data file after loading once · Issue #3431 · huggingface/datasets · GitHub, in case anyone has a similar issue:
Hi ! load_dataset accepts as input either a local dataset directory or a dataset name from the Hugging Face Hub.
So here you are getting this error the second time because it tries to load the local wiki_dpr directory, instead of wiki_dpr from the Hub. It doesn’t work since it’s a cache directory, not a dataset directory in itself.
To fix that you can use another cache directory like cache_dir="/data2/whr/lzy/open_domain_data/retrieval/cache"
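For example (the wiki_dpr config name here is only an illustration):
from datasets import load_dataset

wiki = load_dataset(
    "wiki_dpr", "psgs_w100.nq.exact",
    cache_dir="/data2/whr/lzy/open_domain_data/retrieval/cache",
)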
| 0 |
huggingface
|
🤗Datasets
|
Make Datasets use Google Storage bucket as Cache path
|
https://discuss.huggingface.co/t/make-datasets-use-google-storage-bucket-as-cache-path/12894
|
Hi,
I am training a BERT model on GCP (a Linux VM) and don’t have enough storage on my VM, so it would be useful to tell the datasets library to use my GCP bucket as its cache path.
Can you tell me how to do this please?
Thanks.
|
Hi ! Have you considered using a streaming dataset 1 ?
Otherwise I guess it should be possible to mount your GCS bucket on your VM, and point your cache to the local path of the GCS bucket.
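A rough sketch of that second idea, assuming the bucket is already mounted locally (for example with gcsfuse; the mount path and dataset are illustrative):
from datasets import load_dataset

# point the cache at the locally mounted GCS bucket
# (alternatively, set the HF_DATASETS_CACHE environment variable before importing datasets)
ds = load_dataset(
    "wikipedia", "20200501.en", split="train",
    cache_dir="/mnt/my-gcs-bucket/hf_datasets_cache",
)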
| 0 |
huggingface
|
🤗Datasets
|
Remove a row/specific index from the dataset
|
https://discuss.huggingface.co/t/remove-a-row-specific-index-from-the-dataset/12875
|
Given the code
from datasets import load_dataset
dataset = load_dataset("glue", "mrpc", split='train')
idx = 0
How can I remove row 0 (dataset[0]) from this dataset?
The only way I can think of for now is using dataset.select(), and then selecting every index except 0, but that doesn’t seem efficient.
|
Hi!
You can do dataset = load_dataset("glue", "mrpc", split='train[1:]') to skip the first example while loading the dataset.
The only way I can think of for now is using dataset.select(), and then selecting every index except 0, but that doesn’t seem efficient.
Why do you think select is not efficient? It depends on the ops you use afterward, but select alone is very efficient as it only creates an indices mapping, which is (almost) equal to list(indices), and not a new PyArrow table.
| 0 |
huggingface
|
🤗Datasets
|
ArrowNotImplementedError when loading json dataset
|
https://discuss.huggingface.co/t/arrownotimplementederror-when-loading-json-dataset/12670
|
Hello community,
When trying to load custom json dataset based on wikipedia dump:
from datasets import load_dataset
wiki_fr_dataset_lp = load_dataset("json", data_files="/media/matthieu/HDD_4T0/Github/semantic-search-through-wikipedia-with-weaviate/step-1/articles.json", split="train")
I got the following error:
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
<ipython-input-1-f8ba9f3864f3> in <module>
1 from datasets import load_dataset
2
----> 3 wiki_fr_dataset_lp = load_dataset("json", data_files="/media/matthieu/HDD_4T0/Github/semantic-search-through-wikipedia-with-weaviate/step-1/articles.json", split="train")
4 wiki_fr_dataset_lp
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1630
1631 # Download and prepare data
-> 1632 builder_instance.download_and_prepare(
1633 download_config=download_config,
1634 download_mode=download_mode,
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
605 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
606 if not downloaded_from_gcs:
--> 607 self._download_and_prepare(
608 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
609 )
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
695 try:
696 # Prepare split will record examples associated to the split
--> 697 self._prepare_split(split_generator, **prepare_split_kwargs)
698 except OSError as e:
699 raise OSError(
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
1157 generator, unit=" tables", leave=False, disable=True # bool(logging.get_verbosity() == logging.NOTSET)
1158 ):
-> 1159 writer.write_table(table)
1160 num_examples, num_bytes = writer.finalize()
1161
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/datasets/arrow_writer.py in write_table(self, pa_table, writer_batch_size)
440 # reorder the arrays if necessary + cast to self._schema
441 # we can't simply use .cast here because we may need to change the order of the columns
--> 442 pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
443 batches: List[pa.RecordBatch] = pa_table.to_batches(max_chunksize=writer_batch_size)
444 self._num_bytes += sum(batch.nbytes for batch in batches)
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib._sanitize_arrays()
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast()
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/compute.py in cast(arr, target_type, safe)
295 else:
296 options = CastOptions.unsafe(target_type)
--> 297 return call_function("cast", [arr], options)
298
299
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/anaconda3/envs/sts-transformers-gpu-fresh/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from struct<title: string, content: string, count: int64> to struct using function cast_struct
Anyone would have an advice on this error?
Thanks!
|
Hi,
it’s hard to debug these kinds of errors without having access to data. I’d assume that in some examples (JSON lines) some fields, or subfields, are missing based on the error message. Also, you can try to increase the chunksize 2 argument and see if that works.
| 0 |
huggingface
|
🤗Datasets
|
[Solved] Image dataset seems slow for larger image size
|
https://discuss.huggingface.co/t/solved-image-dataset-seems-slow-for-larger-image-size/10960
|
Hi,
Recently I tried to train an image captioning flax model with TPU, and I found it is very slow → each batch (256 images) requires 26 seconds. I finally found the cause:
My dataset have a column named “image” and it stores image array (being cached).
However, operations like batch = dataset[idx], where idx is a list of length 256, will take 15 seconds if the image size is (256, 256). If I store the flattened image array (i.e. a 1D array of length 256 * 256 * 3), it will be faster.
The provided script demonstrate this situation, with image/batch size 128.
I am wondering if I am doing something wrong, and if there is a better way to handle image datasets.
import numpy as np
import time
from datasets import load_dataset, Dataset
ds = load_dataset("cifar10")
train_ds = ds["train"]
# Take 512 examples
train_ds = train_ds.select(range(512))
# `_images` has shape (batch_size, 32, 32, 3)
# Let's make large images (batch_size, 32 * FACTOR, 32 * FACTOR, 3)
FACTOR = 4
def preprocess_function(examples):
inputs = {}
images = examples["img"]
# convert to an np.float32 array
_images = np.array(images, dtype=np.float32)
# Make image larger
batch_size, image_size = _images.shape[0:2]
image_size = image_size * FACTOR
# use (H, W, C) format
_images = np.concatenate([_images] * FACTOR ** 2, axis=0).reshape((batch_size, image_size, image_size, 3))
# flatten images to 1D -> `data_loader` will be faster
# _images = _images.reshape((batch_size, image_size * image_size * 3))
inputs["image"] = [x for x in _images]
inputs["label"] = examples["label"]
return inputs
times_1 = []
times_2 = []
def data_loader(rng: np.random.Generator, dataset: Dataset, batch_size: int, shuffle: bool = False):
"""
Returns batches of size `batch_size` from truncated `dataset`, sharded over all local devices.
Shuffle batches if `shuffle` is `True`.
"""
steps_per_epoch = len(dataset) // batch_size
if shuffle:
batch_idx = rng.permutation(len(dataset))
else:
batch_idx = range(len(dataset))
batch_idx = batch_idx[: steps_per_epoch * batch_size] # Skip incomplete batch.
batch_idx = batch_idx.reshape((steps_per_epoch, batch_size))
for idx in batch_idx:
s = time.time()
# (bs=128, img_size=128)
# - (H, W, C) -> takes 6.959 seconds
# - flatten -> takes 2.407 seconds
batch = dataset[idx]
e = time.time()
times_1.append(e-s)
s = time.time()
# (bs=128, img_size=128)
# - (H, W, C) -> takes 1.661 seconds
# - flatten -> takes 0.673 seconds
batch = {k: np.array(v) for k, v in batch.items()}
e = time.time()
times_2.append(e-s)
yield batch
# The results are cached
train_ds = train_ds.map(
preprocess_function,
batched=True,
batch_size=16,
num_proc=2,
)
# Create sampling rng
input_rng = np.random.default_rng(seed=42)
train_loader = data_loader(input_rng, train_ds, batch_size=128, shuffle=True)
start = time.time()
for idx, batch in enumerate(train_loader):
end = time.time()
print(f"batch: {idx} | time: {end - start}")
start = time.time()
print(f"Average times_1 in data_loader: {np.mean(times_1)}")
print(f"Average times_2 in data_loader: {np.mean(times_2)}")
|
Hi !
There are two optimizations that you can use:
Use the Array3D feature type. Otherwise it will consider your arrays to be lists of arbitrary sizes, and it takes some time to collate them again into one numpy array.
features = Features({
**train_ds.features,
"image": Array3D(dtype="int32", shape=(128, 128, 3))
})
train_ds = train_ds.map(
preprocess_function,
batched=True,
batch_size=16,
num_proc=2,
features=features
)
Use the ‘numpy’ format. This way when accessing examples they will already be numpy arrays, and you will also benefit from the end-to-end zero-copy array read from Arrow to numpy.
train_ds = train_ds.with_format("numpy")
It becomes 100 times faster when querying the data loader !
Here are my results without the optimizations:
Average times_1 in data_loader: 7.690094709396362
And then, with these optimizations:
Average times_1 in data_loader: 0.0582084059715271
| 1 |
huggingface
|
🤗Datasets
|
ArrowInvalid: Column 1 named id expected length 512 but got length 1000
|
https://discuss.huggingface.co/t/arrowinvalid-column-1-named-id-expected-length-512-but-got-length-1000/12899
|
I am training on the ncbi_disease dataset using the transformers Trainer.
Here are the features of the dataset as follows
DatasetDict({
train: Dataset({
features: [‘id’, ‘tokens’, ‘ner_tags’],
num_rows: 5433
})
validation: Dataset({
features: [‘id’, ‘tokens’, ‘ner_tags’],
num_rows: 924
})
test: Dataset({
features: [‘id’, ‘tokens’, ‘ner_tags’],
num_rows: 941
})
})
Here is a sample of the training data:
{'id': '20',
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 2, 0, 0, 0],
'tokens': ['For',
'both',
'sexes',
'combined',
',',
'the',
'penetrances',
'at',
'age',
'60',
'years',
'for',
'all',
'cancers',
'and',
'for',
'colorectal',
'cancer',
'were',
'0',
'.']}
Here is the tokenization function, and I get this error:
ArrowInvalid: Column 1 named id expected length 512 but got length 1000
def tokenize_text(examples):
return tokenizer(str(examples["tokens"]),truncation=True,max_length=512)
dataset=dataset.map(tokenize_text,batched=True)
Any clue how to solve this problem?
|
Hey @ghadeermobasher , This has been explained in chapter 5 of the course (The 🤗 Datasets library - Hugging Face Course 1). Scroll down a bit and you will find a similar error with the explanation.
You need to modify your “tokenize_text” function as such:
def tokenize_text(examples):
result = tokenizer(str(examples["tokens"]),truncation=True,
max_length=512, return_overflowing_tokens=True)
sample_map = result.pop("overflow_to_sample_mapping")
for key, values in examples.items():
result[key] = [values[i] for i in sample_map]
return result
| 1 |
huggingface
|
🤗Datasets
|
Fastest way to do inference on a large dataset in huggingface?
|
https://discuss.huggingface.co/t/fastest-way-to-do-inference-on-a-large-dataset-in-huggingface/12665
|
Hello there!
Here is my issue. I have a model trained with huggingface (yay) that works well. Now I need to perform inference and compute predictions on a very large dataset of several millions of observations.
What is the best way to proceed here? Should I write the prediction loop myself? Which routines in datasets should be useful here? My computer has lots of RAM (100GB+), 20+ cpus and a big GPU card.
Thanks!
|
Hi ! If you have enough RAM I guess you can use any tool to load your data.
However, if your dataset doesn't fit in RAM, I'd suggest using the datasets library, since it lets you load datasets without filling up your RAM and gives excellent performance.
Then I guess you can write your prediction loop yourself. Just make sure to pass batches of data to your model so that your GPU is fully utilized. You can also use my_dataset.map() with batched=True and set batch_size to a reasonable value
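For illustration, a rough sketch of such a loop — assuming a PyTorch classification model, a CUDA device, and a dataset already tokenized with fixed-length padding (model and tokenized_ds are names from your own setup):
import torch
from torch.utils.data import DataLoader

# a sketch, not a definitive recipe: batched GPU inference over a datasets.Dataset
tokenized_ds.set_format("torch", columns=["input_ids", "attention_mask"])
loader = DataLoader(tokenized_ds, batch_size=64)  # fixed-length padding keeps default collation happy

model.eval().to("cuda")
all_preds = []
with torch.no_grad():
    for batch in loader:
        batch = {k: v.to("cuda") for k, v in batch.items()}
        logits = model(**batch).logits
        all_preds.append(logits.argmax(dim=-1).cpu())
predictions = torch.cat(all_preds)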
| 0 |
huggingface
|
🤗Datasets
|
New dataset added_review for improvement
|
https://discuss.huggingface.co/t/new-dataset-added-review-for-improvement/12659
|
I have recently created a dataset called “english-quotes”.
The data collection process is web scraping using BeautifulSoup and Requests libraries. I also added the script that I created to scrape data and create the dataset in the Card Description (under " Who are the source Data producers ?")
It’s just a start. I wanted to validate (for myself) the “Datasets” chapter of the course by a final self-evaluation different from the dataset of the course “github_issues” (I admit, I changed the sections of the data card a bit). If you have any remarks or comments for improvement, please let me know (as I will add more advanced datasets in the future)… and thanks.
the link of the new dataset added “english_quotes”: Abirate/english_quotes · Datasets at Hugging Face 1
|
This is awesome ! Thanks for adding this dataset
Do you know if there’s a list somewhere of all the possible tags ? It can be useful to know how many classes there are to train multi class classification models.
| 0 |
huggingface
|
🤗Datasets
|
How do I add features on a local dataset
|
https://discuss.huggingface.co/t/how-do-i-add-features-on-a-local-dataset/12711
|
Hey, I was reading Create a dataset loading script — datasets 1.16.1 documentation 2 and I am confused about how to add attributes to my local dataset. Can anyone explain how do I do this on a local dataset?
|
Without it I am getting:
File "/opt/conda/lib/python3.7/collections/__init__.py", line 1027, in __
getitem__
raise KeyError(key)
KeyError: 'premise'
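In case it helps, here is a hypothetical sketch of declaring features when loading local CSV files; the column names and label set are guesses based on the KeyError above:
from datasets import load_dataset, Features, Value, ClassLabel

# hypothetical: an NLI-style CSV with "premise", "hypothesis" and "label" columns
features = Features({
    "premise": Value("string"),
    "hypothesis": Value("string"),
    "label": ClassLabel(names=["entailment", "neutral", "contradiction"]),
})
dataset = load_dataset("csv", data_files={"train": "train.csv"}, features=features)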
| 0 |
huggingface
|
🤗Datasets
|
Unable to load CommonVoice latest version
|
https://discuss.huggingface.co/t/unable-to-load-commonvoice-latest-version/12455
|
This line of code works well with Common Voice version 6.1.0, but I get an error when using it for version 7.0.0:
from datasets import load_dataset, load_metric, Audio
common_voice_train = load_dataset("common_voice", "lg", split="train+validation",version="7.0.0")
common_voice_test = load_dataset("common_voice", "lg", split="test",version="7.0.0")
ValueError: Cannot name a custom BuilderConfig the same as an available BuilderConfig. Change the name. Available BuilderConfigs: ['ab', 'ar', 'as', 'br', 'ca', 'cnh', 'cs', 'cv', 'cy', 'de', 'dv', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fr', 'fy-NL', 'ga-IE', 'hi', 'hsb', 'hu', 'ia', 'id', 'it', 'ja', 'ka', 'kab', 'ky', 'lg', 'lt', 'lv', 'mn', 'mt', 'nl', 'or', 'pa-IN', 'pl', 'pt', 'rm-sursilv', 'rm-vallader', 'ro', 'ru', 'rw', 'sah', 'sl', 'sv-SE', 'ta', 'th', 'tr', 'tt', 'uk', 'vi', 'vot', 'zh-CN', 'zh-HK', 'zh-TW']
What do I need to change? Thanks.
|
cc @anton-l
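For reference, a hedged sketch (not an official answer): newer Common Voice releases are hosted as separate gated repos on the Hub rather than selected via a version argument — assuming that holds for 7.0 and that you have accepted the dataset terms and are logged in:
from datasets import load_dataset

# assumption: version 7.0 lives in the gated mozilla-foundation/common_voice_7_0 repo
common_voice_test = load_dataset(
    "mozilla-foundation/common_voice_7_0", "lg", split="test", use_auth_token=True
)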
| 0 |
huggingface
|
🤗Datasets
|
Can we upload datasets with a total size like the pile?
|
https://discuss.huggingface.co/t/can-we-upload-datasets-with-a-total-size-like-the-pile/12694
|
Hi!
We want to create a very large text dataset inspired by The Pile. To distribute it, we would like to upload the dataset to the Hugging Face Hub. But we were wondering if there are any size limitations for uploading datasets?
Thank you for your responses!
|
Hi!
That’s so cool! No, we don’t have any limitations in terms of size. As the dataset you are working on is pretty big, would you be interested in collaborating with us? This way it will be easier for us to help you. Also, could you tell us a bit more about your project? Feel free to let me know here or via e-mail (mario@huggingface.co).
cc @thomwolf
| 0 |
huggingface
|
🤗Datasets
|
Item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} AttributeError: ‘list’ object has no attribute ‘items’
|
https://discuss.huggingface.co/t/item-key-torch-tensor-val-idx-for-key-val-in-self-encodings-items-attributeerror-list-object-has-no-attribute-items/12695
|
I am trying to classify a file using the model obtained from fine-tuning BertForSequenceClassification.
I get this error when loading the new test file (it does not have labels; it is an unseen file).
# Create torch dataset
class Dataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels=None):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
if self.labels:
item["labels"] = torch.tensor(self.labels[idx])
#print(item)
return item
def __len__(self):
print(len(self.encodings["input_ids"]))
return len(self.encodings["input_ids"])
# prepare dat for classification
tokenizer = XXXTokenizer.from_pretrained(model_name)
print("Transform xml file to pandas series core...")
text, file_name = transform_xml_to_pd(file) # transform xml file to pd
# Xtest_emb, s = get_xxx_layer(Xtest['sent'], path_to_model_lge) # index 2 correspond to sentences
#print(text)
print("Preprocess text with spacy model...")
clean_text = make_new_traindata(text['sent'])
#print(clean_text[1]) # clean text ; 0 = raw text ; and etc...
X = list(clean_text)
X_text_tokenized = []
for x in X:
#print(type(x))
x_encoded = tokenizer(str(x), padding="max_length", truncation=True, max_length=512)
#print(type(x_encoded))
#print(x_encoded)
X_text_tokenized.append(x_encoded)
#print(type(X_text_tokenized))
X_data = Dataset(X_text_tokenized)
print(type(X_data))
print(X_data['input_ids'])
Error
File "/scriptTraitements/classifying.py", line 153, in __getitem__
item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
AttributeError: 'list' object has no attribute 'items'
|
Hi,
this line is the problem:
X_data = Dataset(X_text_tokenized)
because X_text_tokenized is a list of dictionaries and not a dictionary of lists. You can fix this with the following code:
def list_of_dicts_to_dict_of_lists(d):
dic = d[0]
keys = dic.keys()
values = [dic.values() for dic in d]
return {k: list(v) for k, v in zip(keys, zip(*values))}
X_data = Dataset(list_of_dicts_to_dict_of_lists(X_text_tokenized))
| 1 |
huggingface
|
🤗Datasets
|
Save `DatasetDict` to HuggingFace Hub
|
https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075
|
Hi there,
I prepared my data into a DatasetDict object that I saved to disk with the save_to_disk method. I'd like to upload the generated folder to the Hugging Face Hub and load it with the usual load_dataset function. However, I have not yet found a way to do so. Is this possible?
Thanks a lot in advance for your help.
Best,
Pietro
|
Hi,
this week’s release of datasets will add support for directly pushing a Dataset/DatasetDict object to the Hub. In the meantime, you can use a to_{format} method, where format is one of ["csv", "json", "txt", "parquet"] on each split of the DatasetDict object and push the generated files to the Hub (follow the docs here 1 for more information). Also note that this requires the master version of the library, which you can install with:
pip install git+https://github.com/huggingface/datasets.git
Without the master version, you’ll have to specify a list of files to load each split separately (docs on that are here 1).
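Here is a short sketch of both options (the repo id and file names are placeholders):
# once push support is released:
dataset_dict.push_to_hub("pietro/my-dataset")

# in the meantime, export each split and upload the generated files to the Hub:
for split, ds in dataset_dict.items():
    ds.to_json(f"{split}.jsonl")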
| 0 |
huggingface
|
🤗Datasets
|
Error when fine-tuning with the Trainer API
|
https://discuss.huggingface.co/t/error-when-fine-tuning-with-the-trainer-api/12660
|
Hi,
I’m fine-tuning a model with the Trainer API and following these instructions: https://huggingface.co/docs/transformers/training#finetuning-in-pytorch-with-the-trainer-api 3
However, after I defined the compute_metrics function and ran the script, it gave me the following error:
Traceback (most recent call last):
File "/home/le/torch_tutorial/lm_1_perpl.py", line 77, in <module>
trainer.train()
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/transformers/trainer.py", line 1391, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/transformers/trainer.py", line 1491, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/transformers/trainer.py", line 2113, in evaluate
output = eval_loop(
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/transformers/trainer.py", line 2354, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/home/le/torch_tutorial/lm_1_perpl.py", line 67, in compute_metrics
return metric.compute(predictions=predictions, references=labels)
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/datasets/metric.py", line 393, in compute
self.add_batch(predictions=predictions, references=references)
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/datasets/metric.py", line 434, in add_batch
batch = self.info.features.encode_batch(batch)
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/datasets/features/features.py", line 1049, in encode_batch
encoded_batch[key] = [encode_nested_example(self[key], obj) for obj in column]
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/datasets/features/features.py", line 1049, in <listcomp>
encoded_batch[key] = [encode_nested_example(self[key], obj) for obj in column]
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/datasets/features/features.py", line 853, in encode_nested_example
return schema.encode_example(obj)
File "/home/le/torch_tutorial/venv/lib/python3.9/site-packages/datasets/features/features.py", line 297, in encode_example
return int(value)
TypeError: only size-1 arrays can be converted to Python scalars
Do you have any ideas on what can be causing it? I have not changed anything in my code except for adding the compute_metrics function (like in the tutorial) and adding the compute_metrics argument in Trainer (before this addition everything was working perfectly):
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=lm_datasets["train"],
eval_dataset=lm_datasets["validation"],
compute_metrics=compute_metrics,
)
|
What I would do in this case is simply add a print statement within the compute_metrics function to view your logits and labels, their shapes, etc.
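For example, a debugging sketch along those lines (reusing the metric object from your script):
import numpy as np

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # inspect what the Trainer actually passes in before computing the metric
    print(type(logits), np.shape(logits), type(labels), np.shape(labels))
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)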
| 0 |
huggingface
|
🤗Datasets
|
[Semantic search with FAISS] Can’t manage to format embeddings column to numpy format
|
https://discuss.huggingface.co/t/semantic-search-with-faiss-cant-manage-to-format-embeddings-column-to-numpy-format/12640
|
Hello,
I would like to test semantic search with FAISS following the HF course, but on a Wikipedia dataset.
I use a local Docker container running a sentence-transformers model to compute embeddings for each paragraph, with the following function:
def get_embeddings(text_list):
payload_resp = requests.request("POST", api_sent_embed_url, data=json.dumps(text_list))
return np.array(json.loads(payload_resp.content.decode("utf-8")), dtype=np.float32)
Using this function I managed to obtain numpy array on output on those test data:
payload1 = ["Navigateur Web : Ce logiciel permet d'accéder à des pages web depuis votre ordinateur. Il en existe plusieurs téléchargeables gratuitement comme Google Chrome ou Mozilla. Certains sont même déjà installés comme Safari sur Mac OS et Edge sur Microsoft."]
payload2 = ["Google Chrome. Mozilla. Safari."]
payload = payload1 + payload2
embedding = get_embeddings(payload)
embedding.shape
(2, 384)
However, when trying to apply this on my embeddings_dataset
Dataset({
features: ['id', 'revid', 'text', 'title', 'url'],
num_rows: 100
})
I still obtain the “embeddings” column in list format rather than as a NumPy array:
embeddings_dataset= embeddings_dataset.map(lambda x: {"embeddings": get_embeddings(x["text"])})
type(embeddings_test[0]["embeddings"])
list
Would anyone have any advice regarding this problem?
|
Hi,
set format of the embeddings column to NumPy after the map call to get a NumPy array:
embeddings_dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True)
type(embeddings_test[0]["embeddings"])
| 0 |
huggingface
|
🤗Datasets
|
How can I parallelize a metric?
|
https://discuss.huggingface.co/t/how-can-i-parallelize-a-metric/12556
|
I am currently fine-tuning a language model using a policy-gradient reinforcement learning technique. Instead of a standard loss function, I am using a reward function and the REINFORCE algorithm to teach the model to emulate some desired behaviour.
As part of the reward function, I compute the ROUGE score between a reference sentence and a generated one. To do this I’m currently using a list comprehension that zips together two lists of sentences (ref_batch and pred_batch below) and then calculates the rouge score for each.
The code looks something like this:
from datasets import load_metric
rouge_metric = load_metric("rouge")
def get_rouge_score(ref, pred):
return rouge_metric.compute(rouge_types=["rougeL"], predictions=[pred], references=[ref])['rougeL'].mid.fmeasure
rouge_scores = torch.tensor([get_rouge_score(ref,pred) for ref,pred in zip(ref_batch, pred_batch)], device=device)
The problem with this is that it is very slow. The list comprehension iterates through examples one by one and uses the CPU to do the operation. By contrast, the rest of the training loop runs using tensor cores on a GPU. Hence this step is a significant bottleneck; profiling on the training step shows that this step alone takes up ~60% of the training time.
So my question: how could I parallelize this step, or even make it faster another way? If there’s a way to calculate scores on the GPU, that would be awesome. If not, is there an easy way I can use multiple CPU cores for this?
I’m also able to change the metric from ROUGE to another one that is more able to be parallelized, if that helps.
Thanks
|
Hi ! To speed up the processing you can pass keep_in_memory=True to load_metric to keep each sample in memory (by default it writes them on disk to save memory, but since you’re passing the examples one by one to compute you don’t need this). This should speed up your computation significantly.
Moreover feel free to use python multiprocessing to parallelize this (using multiprocessing.Pool and Pool.imap for example)
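A rough sketch of that multiprocessing route, reusing the ref_batch/pred_batch lists and ROUGE call from the question (the process count and chunksize are arbitrary):
from multiprocessing import Pool

from datasets import load_metric

# keep_in_memory avoids a disk write for every call to compute()
rouge_metric = load_metric("rouge", keep_in_memory=True)

def get_rouge_score(pair):
    ref, pred = pair
    return rouge_metric.compute(
        rouge_types=["rougeL"], predictions=[pred], references=[ref]
    )["rougeL"].mid.fmeasure

with Pool(processes=8) as pool:
    rouge_scores = list(pool.imap(get_rouge_score, zip(ref_batch, pred_batch), chunksize=32))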
| 0 |
huggingface
|
🤗Datasets
|
How to load only test dataset from `librispeech_asr`?
|
https://discuss.huggingface.co/t/how-to-load-only-test-dataset-from-librispeech-asr/12602
|
Hi, I want to evaluate my model with the LibriSpeech dataset:
        Train.500  Train.360  Train.100  Valid  Test
clean   -          104014     28539      2703   2620
other   148688     -          -          2864   2939
I don't need the train.500, train.360, train.100, or valid sets. Is there any way to load only the test set from librispeech_asr?
I can load all the data and keep only the test set afterwards, but it takes too long to load everything :(( so I want to load only the test set.
from datasets import load_dataset, DatasetDict
# It takes forever to load everything here...
libre_dataset = load_dataset("librispeech_asr", 'clean')
keep = ["test"]
libre_dataset = DatasetDict({k: dataset for k, dataset in libre_dataset .items() if k in keep})
Thanks for reading!
|
Hey @IlllIIII you can specify the split you want to load by passing split="test" to the load_dataset() function (docs). This will still download all the splits in the dataset, so if space is an issue you can stream the elements of the test set one by one:
from datasets import load_dataset
dset = load_dataset("librispeech_asr", 'clean', split="test", streaming=True)
next(iter(dset))
| 0 |
huggingface
|
🤗Datasets
|
Dataset preview not showing for uploaded DatasetDict
|
https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603
|
I created a DatasetDict and pushed it here 6.
I’m getting the message
Server Error
Status code: 400
Exception: Status400Error
Message: could not get the config name for this dataset
Am I supposed to create a config file somewhere that I missed so the dataset viewer works?
Thanks!
|
Hey @dansbecker as far as I know, the dataset viewer currently supports datasets with a loading script (example 1) or raw data files in common formats like JSON and CSV (example 2).
I agree that being able view the contents of DatasetDict objects would be a nice feature, so I’m tagging @lhoestq and @severo in case they have any additional insights here
| 0 |
huggingface
|
🤗Datasets
|
How preprocessing with IterableDataset works?
|
https://discuss.huggingface.co/t/how-preprocessing-with-iterabledataset-works/12186
|
I want to know how functions like map and filter work with the IterableDataset type.
|
Hi,
currently, IterableDataset doesn’t support filter (you can use a simple check to filter unwanted examples), and for map see the docs here 2.
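For reference, a small sketch of both points (the streamed dataset and the length check are just placeholders):
from datasets import load_dataset

# map works lazily on an IterableDataset
stream = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
stream = stream.map(lambda example: {"text": example["text"].lower()})

# filter isn't supported yet, so use a simple check while iterating
for example in stream:
    if len(example["text"]) < 100:
        continue
    # ... use the example
    break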
| 0 |
huggingface
|
🤗Datasets
|
Permanently saving dataset with load_dataset
|
https://discuss.huggingface.co/t/permanently-saving-dataset-with-load-dataset/12589
|
Hi all,
How do I permanently download the dataset via load_dataset?
When I do this:
# I load almost all glue datasets
dataset = load_dataset("glue", "mrpc")
dataset = load_dataset("glue", "mnli")
...
dataset = load_dataset("glue", "qnli")
It still re-downloads some of the datasets after running the code several times, even though they should be saved in the cache.
Is there a way to download them with load_dataset so that they stay there permanently?
|
Hi ! Unless you clear the cache, the datasets are only downloaded once.
What makes you think that the datasets are being redownloaded ?
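If you want to pin the cache to a specific persistent location, you can also pass cache_dir explicitly (the path below is a placeholder):
from datasets import load_dataset

# downloaded and processed files are stored under this directory and reused on later runs
dataset = load_dataset("glue", "mrpc", cache_dir="/data/hf_datasets_cache")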
| 0 |
huggingface
|
🤗Datasets
|
Streaming Dataset Roberta
|
https://discuss.huggingface.co/t/streaming-dataset-roberta/12479
|
Does anyone know of a RoBERTa pretraining script with support for dataset streaming?
|
Hi ! I don’t think the community has already shared a script for RoBERTa pretraining using dataset streaming yet. However if you’re interested in looking into this, here are a few pointers:
RoBERTa was trained with BookCorpus, CC news and OpenWebText
BookCorpus and OpenWebText have been replicated and open sourced as BookCorpusOpen and OpenWebText2 (The Pile)
You can load and interleave the datasets with
from datasets import load_dataset, interleave_datasets
def only_keep_text(example):
return {"text": example["text"]}
bc = load_dataset("bookcorpusopen", split="train", streaming=True)
ccn = load_dataset("cc_news", split="train", streaming=True)
# this one currently has streaming issues - will fix soon
# owt = load_dataset("the_pile_openwebtext2", split="train", streaming=True)
dataset = interleave_datasets([
bc.map(only_keep_text),
ccn.map(only_keep_text),
# owt.map(only_keep_text)
])
Then you can check the documentation to see how to use it in a pytorch training loop: Stream — datasets 1.16.1 documentation 3
| 0 |
huggingface
|
🤗Datasets
|
Creating and uploading dataset Huggingface Hub vs Dataset creation script
|
https://discuss.huggingface.co/t/creating-and-uploading-dataset-huggingface-hub-vs-dataset-creation-script/11078
|
Hello,
Our team is in the process of creating (manually for now) a multilingual machine translation dataset for low-resource languages. Currently, we have text files for each language sourced from different documents. The number of lines in the text files is the same. For example, for each document we have lang1.txt and lang2.txt, each with n lines. Each line in lang1.txt maps to the corresponding line in lang2.txt. We currently have these text files in a GitHub repository.
Once we have completed this dataset curation (this will be actively on-going) we would like to upload to the Huggingface Hub. Essentially, we would like for this to be similar to glue with different configurations corresponding to different multi-lingual datasets.
When I was looking over the documentation page, I found several resources. One is the uploading to Huggingface Hub 1, another is creating a dataset loading script 2.
I'm confused about the roles of these two methods. Also, I don't know if just uploading the raw text files that we have is enough to work with.
Question: What is the easiest and most efficient way to upload the kind of dataset that I have?
Thank you.
|
Hi ! Yes you can upload your text files to the Hub. Then in order to define the configurations and how the examples must be read from the text files, you must also upload a dataset script in the same repository as your data.
If you don’t upload a dataset script, then the default dataset builder for .txt file is used (and basically it concatenates all the text data together).
If I understand correctly your dataset is a parallel dataset like flores, so I think to make your dataset script you can get some inspiration from the flores dataset script 7. Also feel free to read the documentation you mentioned about how to create a dataset script.
| 0 |
huggingface
|
🤗Datasets
|
Creating label2idx dictionary
|
https://discuss.huggingface.co/t/creating-label2idx-dictionary/12576
|
Hi all,
I am trying to create a label2idx dictionary, where e.g.
labels2idx = {
0: #all datasample indexes with label 0
1: #all datasample indexes with label 1
2: #all datasample indexes with label 2
...
}
I am loading the dataset via the Hugging Face datasets library (e.g. glue->mrpc using load_dataset). Is there a nice way to do this?
The most naive solution is to loop over all data samples by hand and then build labels2idx manually. I am wondering if there is a nicer way to do it using the library?
|
You can directly access this as follows:
from datasets import load_dataset
dataset = load_dataset("glue", "mrpc")
labels = dataset["train"].features["label"].names
id2label = {idx: label for idx, label in enumerate(labels)}
label2id = {label: idx for idx, label in enumerate(labels)}
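And if you specifically need the label-to-sample-indices dictionary from the question, a small sketch built on top of that:
from collections import defaultdict

# group example indices by their label id
labels2idx = defaultdict(list)
for idx, label in enumerate(dataset["train"]["label"]):
    labels2idx[label].append(idx)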
| 1 |
huggingface
|
🤗Datasets
|
Should I shard dataset in distributed training?
|
https://discuss.huggingface.co/t/should-i-shard-dataset-in-distributed-training/12494
|
As the title says, should I shard the dataset by rank?
for example:
rank_dataset = dataset.shard(num_shards=training_args.world_size, index=training_args.rank)
or Trainer will do that automate?
|
The Trainer does the sharding for you. Same if you use Accelerate with your own training loop.
| 1 |
huggingface
|
🤗Datasets
|
Getting list of tensors instead of tensor array after using set_format
|
https://discuss.huggingface.co/t/getting-list-of-tensors-instead-of-tensor-array-after-using-set-format/12051
|
So I'm a bit unclear about how set_format converts data into torch tensors. My understanding is that if I apply set_format to my dataset that has lists as values, it would convert them to tensors. I do get normal torch tensors for 1d and 2d lists, but when I have 3d lists, it somehow returns a list of tensors.
Here is a toy example:
from datasets import Dataset
ex1 = {'a':[[1,1],[1,2]], 'b':[1,1]}
ex2 = {'a':[[[2,1],[2,2]], [[3,1],[3,2]]], 'b':[1,1]}
d1 = Dataset.from_dict(ex1)
d1.set_format('torch', columns=['a','b'])
d2 = Dataset.from_dict(ex2)
d2.set_format('torch', columns=['a','b'])
print(d1[:2])
print(d2[:2])
and the output is:
{'a': tensor([[1, 1],
[1, 2]]), 'b': tensor([1, 1])}
{'a': [[tensor([2, 1]), tensor([2, 2])], [tensor([3, 1]), tensor([3, 2])]], 'b': tensor([1, 1])}
As you can see, for the d2 dataset with 3d lists, I'm getting a list of tensors instead of plain tensor arrays, as was the case for the 1d and 2d lists in the d1 dataset. Why is it returning a list of tensors for 3d lists? Is there a way to get plain tensor arrays for such lists?
Really need a good clarification on it. Thank you.
|
Hi,
this inconsistency is due to how PyArrow converts nested sequences to NumPy by default but can be fixed by casting the corresponding column to the ArrayXD type.
E.g. in your example:
dset = Dataset.from_dict(
{"a": [[[2,1],[2,2]], [[3,1],[3,2]]], "b": [1,1]},
features=Features({"a": Array2D(shape=(2, 2), dtype="int32"), "b": Value("int32")})
)
dset.set_format('torch', columns=['a','b'])
If you want to cast the existing dataset, use map instead of cast (cast fails on special extension types):
dset = Dataset.from_dict({"a": [[[2,1],[2,2]], [[3,1],[3,2]]], "b": [1,1]})
dset = dset.map(lambda batch: batch, batched=True, features=Features({"a": Array2D(shape=(2, 2), dtype="int32"), "b": Value("int32")}))
dset.set_format('torch', columns=['a','b'])
| 1 |
huggingface
|
🤗Datasets
|
Datasets map modifying audio array to list?
|
https://discuss.huggingface.co/t/datasets-map-modifying-audio-array-to-list/12303
|
I'm trying to use my own data for fine-tuning a Wav2Vec2 model, but every time I create my DatasetDict, it converts my audio array to a list. Am I doing something incorrectly? How can I preserve the audio as an array?
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = sf.read(batch["file"])
batch["audio"] = speech_array
return batch
updated_dataset = my_dataset.map(speech_file_to_array_fn)
my_audio = updated_dataset['test'][0]
print(my_audio)
{'file': '/disks/data3/UASPEECH/control/CM04/CM04_B2_UW33_M3.wav', 'text': 'APPROACH', 'audio': [0.0001220703125, -0.00018310546875, 0.000152587890625, -0.00030517578125, 6.103515625e-05, 9.1552734375e-05,...}
print(type(my_audio['audio']))
<class 'list'>
|
Hi,
set format to NumPy to get an array as follows:
updated_dataset.set_format("numpy", columns=["audio"], output_all_columns=True)
updated_dataset["test"][0]
| 1 |
huggingface
|
🤗Datasets
|
PyTorch Dataset/DataLoader classes
|
https://discuss.huggingface.co/t/pytorch-dataset-dataloader-classes/11637
|
Hi!
My question is not directly related to HF libraries, so it is a bit off-topic, but I hope the moderators will not take too strict a view on that and let me keep it.
When training ResNet on ImageNet dataset, I coded some dataloading functionality by hand, which was extremely useful to me. I am currently transitioning from TF2 to PyTorch and I am very new to PyTorch Dataset and Dataloader classes. I am wondering whether PyTorch Dataset/DataLoader classes make the flow I coded by hand available out of the box. I did read PyTorch tutorials and API docs before posting the question.
Here is the problem:
ImageNet consists of ~1.3 million JPEG images, which take about 140 GB of disk space. When compiling a batch, one needs to read a batch_size number of image files from disk, and each of them needs to be pre-processed; this pre-processing is computationally expensive (load an image file of size NxM, randomly choose an integer in the interval [256, 480], resize the image so that the shortest side equals this integer, crop randomly to extract a 224x224 square image, apply a random color transformation to it, etc.). If this pre-processing could be done once and then reused for all epochs, it wouldn't be a problem at all, but it needs to be re-done for each file each epoch (that's how data augmentation is achieved). And training requires a large number of epochs (50-120).
Here is how I solved it:
I borrowed 30 GB from my RAM and made a buffer disk out of it. This space is enough to comfortably accommodate more than 1,000 batches (already pre-processed). I created a text file monitoring the training progress (with two lines: current epoch number, current batch number); the training process updates this file after every 200 batches (which is equivalent to 1% of an epoch with my batch size). Then I
wrote a run-time pre-processing script (both training and pre-processing run at the same time in parallel), which checks:
where the training process currently is
where the buffer currently starts (at which batch number)
where the buffer currently ends (at which batch number)
If the pre-processing script sees that the training process went ahead and it is now safe to delete some batches from the start of the buffer, then it does so to free space. If the pre-processing script sees that the training process has less than 800 batches pre-processed and waiting to be fed to the training process, then it jumps into action, pre-processes more batches and places them at the end of the queue. Then it waits. It checks every 100 seconds whether there is any work for it, does it if there is, and then waits again. Pre-processing takes place 100% on the CPU and I could use multiple threads. This is important. Without it the pre-processing would not be able to work fast enough, and the main GPU training process would have to wait (which is unacceptable).
Can PyTorch Dataset/DataLoader classes provide the above functionality out of the box? If yes, I would appreciate if you could give a me push in the right direction. There is even no need to give me a code example (although that would be nice). Just tell me whether it is possible and where to look if it is.
Thank you!!
Alex
|
If there is anybody who is only 80-90% sure that there is no out of the box functionality like that, it would also help me a lot.
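Not a definitive answer, but for reference, a rough sketch of how this flow typically maps onto DataLoader worker processes; the image loader, paths and transform below are placeholders:
import torch
from torch.utils.data import Dataset, DataLoader

class ImageNetTrainSet(Dataset):
    def __init__(self, image_paths, transform):
        self.image_paths = image_paths
        self.transform = transform  # random resize / crop / color jitter, re-drawn on every access

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = load_jpeg(self.image_paths[idx])  # placeholder for your JPEG loader
        return self.transform(image)

loader = DataLoader(
    ImageNetTrainSet(paths, transform),  # `paths` and `transform` are assumed to exist
    batch_size=256,
    shuffle=True,
    num_workers=16,      # CPU worker processes pre-process batches while the GPU trains
    prefetch_factor=4,   # each worker keeps this many batches ready in advance
    pin_memory=True,
)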
| 0 |
huggingface
|
🤗Datasets
|
The Datasets library - Hugging Face Course
|
https://discuss.huggingface.co/t/the-datasets-library-hugging-face-course/12148
|
huggingface.co
The 🤗 Datasets library - Hugging Face Course 6
In the above link, I am not able to understand the return_overflowing_tokens parameter used in the tokenizer object, nor what this error message means and why we are getting it:
ArrowInvalid: Column 1 named condition expected length 1463 but got length 1000
|
Hi !
I think return_overflowing_tokens can be used to keep the overflowing tokens in case an example is longer than max_length. For example for max_length=4 and for an example consisting of 10 tokens:
if return_overflowing_tokens=False, then the example is cropped and you get one list of 4 tokens
if return_overflowing_tokens=True, then the example is split to lists of maximum 4 tokens, so you end up with three lists with length 4, 4 and 2.
The error
ArrowInvalid: Column 1 named condition expected length 1463 but got length 1000
means that there is a mismatch between the number of rows for condition and the other rows created by the tokenize function. Indeed from 1,000 examples, with each example having columns like condition, the tokenize function returned 1,463 tokenized texts. Because some columns have more rows than others, it can’t form a valid dataset table.
But since in the end you don't care about the other columns at this point, you can just drop them and only keep the 1,463 tokenized texts with remove_columns=drug_dataset["train"].column_names
I hope that helps
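Concretely, the fix looks roughly like this (assuming the drug_dataset and tokenize_function names from that course chapter):
# drop the original columns so only the tokenized outputs remain
tokenized_dataset = drug_dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=drug_dataset["train"].column_names,
)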
| 0 |
huggingface
|
🤗Datasets
|
Add new column to a dataset
|
https://discuss.huggingface.co/t/add-new-column-to-a-dataset/12149
|
In the dataset I have 5000000 rows, I would like to add a column called ‘embeddings’ to my dataset.
dataset = dataset.add_column('embeddings', embeddings)
The variable embeddings is a numpy memmap array of size (5000000, 512).
But I get this error:
ArrowInvalid Traceback (most recent call last)
in
----> 1 dataset = dataset.add_column('embeddings', embeddings)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
    486     }
    487     # apply actual function
--> 488     out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    489     datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    490     # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
    404     # Call actual function
    405
--> 406     out = func(self, *args, **kwargs)
    407
    408     # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint)
   3346         :class:`Dataset`
   3347         """
-> 3348         column_table = InMemoryTable.from_pydict({name: column})
   3349         # Concatenate tables horizontally
   3350         table = ConcatenationTable.from_tables([self._data, column_table], axis=1)
/opt/conda/lib/python3.8/site-packages/datasets/table.py in from_pydict(cls, *args, **kwargs)
    367     @classmethod
    368     def from_pydict(cls, *args, **kwargs):
--> 369         return cls(pa.Table.from_pydict(*args, **kwargs))
    370
    371     @inject_arrow_table_documentation(pa.Table.from_batches)
/opt/conda/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/opt/conda/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib._from_pydict()
/opt/conda/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/opt/conda/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/opt/conda/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._ndarray_to_array()
/opt/conda/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: only handle 1-dimensional arrays
How can I solve this?
|
Hi,
it should work if you use concatenate_datasets instead:
import datasets
dset_embed = datasets.Dataset.from_dict({"embeddings": embeddings})
dset_concat = datasets.concatenate_datasets([dset, dset_embed], axis=1)
| 0 |
huggingface
|
🤗Datasets
|
Import Error: Need to install datasets
|
https://discuss.huggingface.co/t/import-error-need-to-install-datasets/11692
|
Hello,
I’m trying to upload a multilingual low resource West Balkan machine translation dataset called rosetta_balcanica on Hugging Face hub. The data is stored in Github and was manually extracted. This is an on-going project. I’ve created a dataset creation script that should enable one to download and load the dataset based on the configuration specified. I’m also following the documentation page for creating a dataset loading script.
After writing the script, I use datasets-cli to test it. First off, it's not clear where to run this command, even though it says at the root of the datasets directory. This to me implies the root of the repo, but it seems it's one directory above that.
When I run the command datasets-cli test rosetta_balcanica --save_infos --all_configs (the directory rosetta_balcanica has only a README.md and rosetta_balcania.py), I get this error:
Traceback (most recent call last):
File "/home/sudarshan/anaconda3/envs/rb_hub/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/home/sudarshan/anaconda3/envs/rb_hub/lib/python3.7/site-packages/datasets/commands/datasets_cli.py", line 33, in main
service.run()
File "/home/sudarshan/anaconda3/envs/rb_hub/lib/python3.7/site-packages/datasets/commands/test.py", line 119, in run
module = dataset_module_factory(path)
File "/home/sudarshan/anaconda3/envs/rb_hub/lib/python3.7/site-packages/datasets/load.py", line 1083, in dataset_module_factory
combined_path, download_mode=download_mode, dynamic_modules_path=dynamic_modules_path
File "/home/sudarshan/anaconda3/envs/rb_hub/lib/python3.7/site-packages/datasets/load.py", line 669, in get_module
download_config=self.download_config,
File "/home/sudarshan/anaconda3/envs/rb_hub/lib/python3.7/site-packages/datasets/load.py", line 292, in _download_additional_modules
f"To be able to use {name}, you need to install the following dependencies"
ImportError: To be able to use rosetta_balcanica, you need to install the following dependencies['datasets,'] using 'pip install datasets,' for instance'
However, I have already installed datasets, so I'm not sure why I'm getting this error. This is my dataset creation script rosetta_balcanica.py:
_DESCRIPTION="""
Rosetta-Balcanica is a set of evaluation datasets for low resource western Balkan languages manually sourced from articles from OSCE website.
"""
_HOMEPAGE='https://github.com/ebegoli/rosetta-balcanica'
_DATA_URL='https://github.com/ebegoli/rosetta-balcanica/raw/main/rosetta_balcanica.tar.gz'
_VERSION=datasets.Version('1.0.0')
class RosettaBalcanicaConfig(datasets.BuilderConfig):
"""BuilderConfig for Rosetta Balcanica for low resource West Balcan languages
"""
def __init__(self, lang_pair=(None, None), **kwargs):
assert lang_pair in _VALID_LANGUAGE_PAIRS, (f"Language pair {lang_pair} not supported (yet)")
name = f'{lang_pair[0]} to {lang_pair[1]}'
desc = f'Translation dataset from {lang_pair[0]} to {lang_pair[1]}'
super(RosettaBalcanicaConfig, self).__init__(
name=name,
description=desc,
version=_VERSION,
**kwargs
)
self.lang_pair = lang_pair
class RoesettaBalcancia(datasets.GeneratorBasedBuilder):
logger.debug("i'm in builder")
BUILDER_CONFIGS = [
RosettaBalcanicaConfig(
lang_pair=lang_pair,
)
for lang_pair in _VALID_LANGUAGE_PAIRS
]
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{'translation': datasets.features.Translation(languages=self.config.lang_pair)}
),
homepage=_HOMEPAGE,
supervised_keys=self.config.lang_pair,
citation=_CITATION,
)
def _split_generators(self, dl_manager):
archive = dl_manager.download(_DATA_URL)
source,target = self.config.lang_pair
non_en = source if target == 'en' else target
data_dir = f'en-{non_en}'
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
'source_file': f'{data_dir}/train_{source}.txt',
'target_file': f'{data_dir}/train_{target}.txt',
'files': dl_manager.iter_archive(archive)
}
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
'source_file': f'{data_dir}/test_{source}.txt',
'target_file': f'{data_dir}/test_{target}.txt',
'files': dl_manager.iter_archive(archive)
}
),
]
def _generate_examples(self, source_file, target_file, files):
source_sents, target_sents = None, None
for path, f in files:
if path == source_file:
source_sents = f.read().decode('utf-8').split('\n')
elif path == target_file:
target_sents = f.read().decode('utf-8').split('\n')
if source_sents is not None and target_sents is not None:
break
    assert len(target_sents) == len(source_sents), (f"Sizes do not match: {len(source_sents)} vs {len(target_sents)} for {source_file} vs {target_file}")
source,target = self.config.lang_pair
for idx, (l1, l2) in enumerate(zip(source_sents, target_sents)):
result = {
'translation': {source: l1, target: l2}
}
if all(result.values()):
yield idx, result
Please could I get some help on this?
Thanks!
|
Hi ! You did right by passing the path to the directory which contains your dataset script
And thanks for reporting the issue with the documentation, it’s indeed not clear enough, we’ll fix that.
Also regarding the error your get, I think this is because you import your dependencies in one single line, which is not parsed correctly by the dataset loader:
import datasets, logging
As a workaround you can just split the import on multiple lines
import logging
import datasets
I hope that helps !
| 0 |
huggingface
|
🤗Datasets
|
Conditionally sample example from the dataset
|
https://discuss.huggingface.co/t/conditionally-sample-example-from-the-dataset/12233
|
How can I conditionally sample examples from the dataset? E.g. dataset.sample('id' == 'abc')?
|
This is also referred to as filtering the dataset, docs can be found here: Main classes — datasets 1.15.1 documentation 1
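For example, a minimal sketch (the column name and value are placeholders):
# keep only the rows matching the condition
filtered = dataset.filter(lambda example: example["id"] == "abc")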
| 0 |
huggingface
|
🤗Datasets
|
Why is simply accessing dataset features so slow?
|
https://discuss.huggingface.co/t/why-is-simply-accessing-dataset-features-so-slow/12041
|
See this Colab notebook for runnable versions of the code snippets pasted below.
I’ve been playing with the SQuAD dataset and I’ve noticed that simply accessing features of mapped versions of the dataset is extremely slow:
# Let's play with the validation set.
squad = load_dataset("squad")
ds = squad["validation"]
# standard tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# tokenize the dataset with standard hyperparameters.
f = lambda x: tokenizer(x["question"], x["context"], max_length=384, stride=128, truncation="only_second", padding="max_length", return_overflowing_tokens=True, return_offsets_mapping=True)
mapped_ds = ds.map(f, batched=True, remove_columns=ds.column_names)
Now, simply executing the statements – just accessing the features –
attention_mask = mapped_ds["attention_mask"]
input_ids = mapped_ds["input_ids"]
offset_mapping = mapped_ds["offset_mapping"]
overflow_to_sample_mapping = mapped_ds["overflow_to_sample_mapping"]
token_type_ids = mapped_ds["token_type_ids"]
takes about 12 seconds. Why does this take so long? My understanding from the docs is that an HF dataset is stored as a collection of feature columns. (Arrow table format?) Were this so, each assignment in the above snippet should just be assigning a pointer. There’s no data copying happening here, right?
Digging a little deeper, assigning attention_mask, input_ids, and token_type_ids each take 1-1.5 seconds. Assigning overflow_to_sample_mapping takes milliseconds, and assigning offet_mapping takes 8-9 seconds. It seems that accessing features with more complex structure is taking longer. I’m very curious what are these assignment statements actually doing, under the hood!
On a related note, looping over mapped_ds takes significantly longer than looping over its features, extracted and zipped:
for x in mapped_ds:
pass
takes 15 seconds while
for x in zip(attention_mask, input_ids, offset_mapping, overflow_to_sample_mapping, token_type_ids):
pass
takes 1.5 seconds.
This does have user-facing impact (right?): If you're going to loop over the evaluation set in a custom evaluation script (overriding a trainer's evaluate method, say), you're better off storing the individual features of the evaluation dataset on the trainer instance once and for all and looping over their zip on each evaluation step (of which there can be many!), rather than iterating over the evaluation dataset itself each time. (A custom evaluation script of this sort is used to compute "exact match" and "F1" metrics on the SQuAD data. See the evaluate method of QuestionAnsweringTrainer in this file 1 and the postprocess_qa_predictions function in this one.)
|
Hi,
this behavior is expected.
dataset[column] loads the entire dataset column into memory (this is why this part seems so slow to you), so operating directly on this data will be faster than operating on the dataset created with load_dataset, which is memory-mapped, i.e., stored in an Arrow file by default. However, in many cases, a dataset or dataset transforms are too big to fit in RAM, so having them in memory is not an option and we'd rather pay a penalty (with respect to read/write operations) for having it stored in a file.
If you are sure that you have enough RAM, you can load the dataset as follows to have it in memory:
load_dataset("squad", split="validation", keep_in_memory=True)
| 1 |