docs | category | thread | href | question | context | marked
---|---|---|---|---|---|---
huggingface
|
🤗Datasets
|
HF Datasets not working with Language Modeling notebook
|
https://discuss.huggingface.co/t/hf-datasets-not-working-with-language-modeling-notebook/5922
|
Posted this in the beginners part of the forum, but didn't get any responses, so I decided to repost here.
I am trying to use this notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) to fine-tune and generate text with GPT-Neo using my own custom dataset. I uploaded my own text file to my dataset, but when trying to use it, I get this error:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)
354 try:
→ 355 local_path = cached_path(file_path, download_config=download_config)
356 except FileNotFoundError:
4 frames
FileNotFoundError: Couldn’t find file at htts://huggingface.co/datasets/Trainmaster9977/zbakuman/resolve/main/zbakuman.py
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)
357 raise FileNotFoundError(
358 "Couldn't find file locally at {}, or remotely at {}. Please provide a valid {} name".format(
→ 359 combined_path, file_path, "dataset" if dataset else "metric"
360 )
361 )
FileNotFoundError: Couldn’t find file locally at Trainmaster9977/zbakuman/zbakuman.py, or remotely at htts://huggingface.co/datasets/Trainmaster9977/zbakuman/resolve/main/zbakuman.py. Please provide a valid dataset name
(Slightly modified beginning of url due to new users only able to put 2 links in posts)
I tried to use this to load it
from datasets import load_dataset
dataset = load_dataset("Trainmaster9977/zbakuman")
|
hey @Trainmaster9977, as described in the docs, you need to provide an argument to load_dataset that indicates the file format (csv, json, etc).
P.S. In future, please don't create duplicate posts (either edit the original one or delete it if necessary).
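For example, if the uploaded file is plain text, the call could look something like this (a rough sketch; the file name here is hypothetical):
from datasets import load_dataset
# "text" (or "csv"/"json") tells load_dataset how to parse the data files
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})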
| 0 |
huggingface
|
🤗Datasets
|
Cant save Dataset as Parquet-File since Updating Datasets?
|
https://discuss.huggingface.co/t/cant-save-dataset-as-parquet-file-since-updating-datasets/5880
|
Hi Guys,
I was using Datasets==1.5.0 while preprocessing my dataset and saving it. I updated to the latest Datasets version (1.6.1) and since then I can't export my datasets as Parquet files.
With version 1.5.0 I could just do:
import pyarrow.parquet as pq
...
...
pq.write_table(train_dataset.data, 'train.parquet')
pq.write_table(eval_dataset.data, 'eval.parquet')
When I run the same code with the latest datasets version I get:
File "../preprocess_dataset.py", line 132, in <module>
pq.write_table(train_dataset.data, f'{resampled_data_dir}/{data_args.dataset_config_name}.train.parquet')
File "/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py", line 1674, in write_table
writer.write_table(table, row_group_size=row_group_size)
File "/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py", line 588, in write_table
self.writer.write_table(table, row_group_size=row_group_size)
TypeError: Argument 'table' has incorrect type (expected pyarrow.lib.Table, got ConcatenationTable)
Should I just use 1.5.0, or is there a quick and easy workaround?
I'm not that familiar with Python. In Java I could just use one version for one project and another version for another project. Can/should I do the same here?
|
pq.write_table(train_dataset.data.table, 'train.parquet')
pq.write_table(eval_dataset.data.table, 'eval.parquet')
is working, so this can be closed.
| 0 |
huggingface
|
🤗Datasets
|
WER Metric running out of Memory
|
https://discuss.huggingface.co/t/wer-metric-running-out-of-memory/5855
|
Hi guys,
I wanted to train a net based on HuggingFace. Unfortunately, the validation process runs out of memory at the end. The model still produces predictions, but when the labels are converted to text and the WER is computed, the program crashes (out of memory):
***** Running Evaluation *****
Num examples = 15588
Batch size = 4
100%|███████████████████████████████████████| 3897/3897 [21:27<00:00, 3.07it/s]Traceback (most recent call last):
File "/tmp/pycharm_project_263/audioengine/model/finetuning/wav2vec2/finetune_parquet.py", line 151, in <module>
File "/tmp/pycharm_project_263/audioengine/model/finetuning/wav2vec2/finetune_parquet.py", line 128, in main
max_val_samples = data_args.max_val_samples if data_args.max_val_samples is not None else len(eval_dataset)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1757, in evaluate
output = self.prediction_loop(
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1930, in prediction_loop
metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
File "/tmp/pycharm_project_263/audioengine/model/finetuning/wav2vec2/wav2vec2_trainer.py", line 191, in __call__
wer = wer_metric.compute(predictions=pred_str, references=label_str)
File "/usr/local/lib/python3.8/dist-packages/datasets/metric.py", line 403, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/warmachine/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute
return wer(references, predictions)
File "/usr/local/lib/python3.8/dist-packages/jiwer/measures.py", line 80, in wer
measures = compute_measures(
File "/usr/local/lib/python3.8/dist-packages/jiwer/measures.py", line 192, in compute_measures
H, S, D, I = _get_operation_counts(truth, hypothesis)
File "/usr/local/lib/python3.8/dist-packages/jiwer/measures.py", line 273, in _get_operation_counts
editops = Levenshtein.editops(source_string, destination_string)
MemoryError
100%|███████████████████████████████████████| 3897/3897 [21:40<00:00, 3.00it/s]
Process finished with exit code 1
wer_metric = load_metric("wer")
def compute_metrics(processor):
def __call__(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
return __call__
...
trainer = Trainer(
model=model,
data_collator=data_collator,
args=args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
tokenizer=processor.feature_extractor,
train_seq_lengths=train_dataset.input_seq_lengths,
compute_metrics=compute_metrics(processor),
)
|
I'm guessing you don't have enough RAM to hold all your decoded predictions in memory. You should split your dataset into smaller pieces to get the decoded predictions.
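One possible workaround (a rough sketch, untested against this exact setup): compute the metric over smaller slices and combine the per-chunk scores, weighted by reference word count, so jiwer never has to solve one giant edit-distance problem. wer_metric, pred_str and label_str are the names from the code above.
def chunked_wer(predictions, references, chunk_size=1000):
    # Combine per-chunk WER scores weighted by reference word count;
    # this approximates the corpus-level WER without holding everything at once.
    total_score, total_words = 0.0, 0
    for i in range(0, len(predictions), chunk_size):
        preds = predictions[i:i + chunk_size]
        refs = references[i:i + chunk_size]
        n_words = sum(len(r.split()) for r in refs)
        total_score += wer_metric.compute(predictions=preds, references=refs) * n_words
        total_words += n_words
    return total_score / total_words

wer = chunked_wer(pred_str, label_str)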
| 0 |
huggingface
|
🤗Datasets
|
Compatibility for numpy arrays
|
https://discuss.huggingface.co/t/compatibility-for-numpy-arrays/5351
|
Is there any native support in datasets for constructing a dataset from NumPy arrays, to be used further in transformers, without writing the arrays to a file and loading them that way?
|
As far as I know there isn't a native Dataset.from_numpy method, but you could map your array to a Python dictionary and use the from_dict method: Loading a Dataset — datasets 1.5.0 documentation
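A minimal sketch of that approach (column names and shapes here are made up):
import numpy as np
from datasets import Dataset

features = np.random.rand(100, 768)         # hypothetical feature matrix
labels = np.random.randint(0, 2, size=100)  # hypothetical labels

dataset = Dataset.from_dict({
    "features": features.tolist(),  # one list entry per example
    "label": labels.tolist(),
})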
| 0 |
huggingface
|
🤗Datasets
|
When will the next release be?
|
https://discuss.huggingface.co/t/when-will-the-next-release-be/5609
|
Hi,
May I ask when the next release will take place?
I would like to pip install the new features
|
hey @Hwijeen, I don't know when the next datasets release will be, but in the meantime you can pip install the new features directly from master as follows:
pip install git+https://github.com/huggingface/datasets.git
Note that this is probably not stable (in the sense that master is continuously updated, some features might change, etc.).
| 0 |
huggingface
|
🤗Datasets
|
Using load_dataset.set_transform() function along with Trainer class
|
https://discuss.huggingface.co/t/using-load-dataset-set-transform-function-along-with-trainer-class/5758
|
Hello,
I'm trying to use the load_dataset.set_transform(…) function along with DataCollatorForLanguageModeling and the Trainer class from the transformers library to pretrain a model. Since I have a large dataset, tokenization does not fit in RAM, and using the .map() function uses way too much disk space (> 500 GB), which is limited in my case. So I need to tokenize on the fly.
While set_transform works as expected if I index the dataset, I don't know why it fails when I plug it into a DataCollatorForLanguageModeling and a Trainer.
tokenizer = ...
def encode(batch):
return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
train_dataset = load_dataset('text', data_files={'train': txt_train_dataset})
train_dataset.set_transform(encode)
validation_dataset = load_dataset('text', data_files={'validation': txt_validation_dataset})
validation_dataset.set_transform(encode)
print(train_dataset["train"][:3]) # works as expected: {'input_ids': ..., 'attention_mask': ...}
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
trainer = Trainer(
model=...,
args=...,
data_collator=data_collator,
train_dataset=train_dataset["train"],
eval_dataset=validation_dataset["validation"],
compute_metrics=...,
)
I'm probably missing something.
|
hey @tkon3, when you say "it fails", what do you mean exactly? There's a related thread on set_transform that might be useful: Understanding set_transform
| 0 |
huggingface
|
🤗Datasets
|
How to use dataset with run_language_modeling?
|
https://discuss.huggingface.co/t/how-to-use-dataset-with-run-language-modeling/5726
|
I have downloaded the s2orc dataset and saved it to disk.
Since it’s in the arrow format, I cannot figure out how to use the run_language_modeling command, since that seems to require a text file.
It seems like it would be simple. Can anyone help?
|
I modified run_language_modeling.py a little to make it work.
I would still be interested to hear if this is supported.
| 0 |
huggingface
|
🤗Datasets
|
How to tokenize using map
|
https://discuss.huggingface.co/t/how-to-tokenize-using-map/5473
|
This is a problem that I ran into (full info in this thread). I had a quick question:
I have constructed a Dataset object using NumPy arrays; however, when I use this:
def tok(example):
encodings = tokenizer(example['src'], truncation=True, padding="max_length", max_length=2000)
return encodings
train_encoded_dataset = train_dataset.map(tok, batched=True)
val_encoded_dataset = val_dataset.map(tok, batched=True)
and I explore my train_encoded_dataset, I see the following when trying to view the source sequence:
>>> train_encoded_dataset
>>>Dataset({
features: ['attention_mask', 'input_ids', 'src', 'tgt'],
num_rows: 4572
})
>>> train_encoded_dataset['src'][0]
The output of this last command is a completely raw (basically untokenized) string (like 'lorem ipsum…'), which is expected since I didn't call tokenizer.tokenize.
So does anyone have any idea how to get it tokenized as well? I tried a few obvious ways, but they didn't yield anything.
|
hey @Neel-Gupta, could you share a minimal example of the Dataset object you’re working with?
Neel-Gupta:
The output of this last command is a completely raw (basically untokenized) string (like 'lorem ipsum…'), which is expected since I didn't call tokenizer.tokenize.
So does anyone have any idea how to get it tokenized as well? I tried a few obvious ways, but they didn't yield anything.
Just so I understand: you're saying that the tokenizer is not tokenizing the strings in the src field, right?
| 0 |
huggingface
|
🤗Datasets
|
How to use Dataset with Pytorch Lightning
|
https://discuss.huggingface.co/t/how-to-use-dataset-with-pytorch-lightning/5447
|
Hi,
I'm trying to load the cnn-dailymail dataset to train a model for summarization using PyTorch Lightning. To load the dataset with DataLoader I tried to follow the documentation, but it doesn't work (the PyTorch Lightning code I am using does work when the DataLoader isn't using a dataset from Hugging Face, so there shouldn't be a problem in the training procedure).
Here is the code:
def train_dataloader(self):
train_dataset = load_dataset('cnn_dailymail','3.0.0', split='train')
train_dataset = train_dataset.map(lambda e: tokenizer(e['article'],e['highlights'], truncation=True, padding='max_length'), batched=True)
train_dataset.set_format(type='torch')
dataloader = DataLoader(train_dataset, batch_size=self.hparams.train_batch_size)
return dataloader
I put 'article' and 'highlights' in the tokenizer as these are the 2 columns in the dataset that correspond to the source and target.
Here is the stack trace:
Traceback (most recent call last):
File "MBART.py", line 346, in <module>
trainer.fit(model)
File "venv/lib/python3.6/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "venv/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1073, in fit
results = self.accelerator_backend.train(model)
File "venv/lib/python3.6/site-packages/pytorch_lightning/accelerators/gpu_backend.py", line 51, in train
results = self.trainer.run_pretrain_routine(model)
File "venv/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "venv/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "venv/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 305, in _evaluate
for batch_idx, batch in enumerate(dataloader):
File "venv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "venv/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "venv/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "venv/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "venv/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1087, in __getitem__
format_kwargs=self._format_kwargs,
File "venv/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1074, in _getitem
format_kwargs=format_kwargs,
File "venv/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs
v = map_nested(command, v, **map_nested_kwargs)
File "venv/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "venv/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 851, in command
return torch.tensor(x, **format_kwargs)
TypeError: new(): invalid data type 'str'
Any ideas on how to make it work?
Thanks!
|
I think you also need to specify which columns you’d like to keep when doing .set_format(type='torch'). If you don’t do this, then the text columns are still part of the dataset, and converting strings to PyTorch tensors causes an error.
So I think you just need to update that line to:
train_dataset.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
| 0 |
huggingface
|
🤗Datasets
|
Apply batched zero shot classification on HuggingFace datasets object
|
https://discuss.huggingface.co/t/apply-batched-zero-shot-classification-on-huggingface-datasets-object/5211
|
Hi,
UPDATE: notebook to reproduce: https://colab.research.google.com/drive/1t-ApjHqdSo90NoXSJ5baeh7h-gJx8bLt?usp=sharing
I have a large amount of unlabeled texts, stored as a Pandas dataframe. So just a single column called “text”.
I’d like to apply zero-shot classification on all these texts in a batched way using HuggingFace Datasets’ .map(function, batched=True) functionality. I defined the function that I want to apply on batches as follows:
def zero_shot_classify_sequences(examples, threshold=0.5):
# first, send batch of texts through pipeline
texts = examples['text']
outputs = classifier(texts, candidate_labels, multi_label=True)
# next, for each output:
final_outputs = []
for output in outputs:
# create dictionary (predicted_labels, confidence)
final_output = {}
for label, score in zip(output['labels'], output['scores']):
if score > threshold:
final_output[label] = score
final_outputs.append(final_output)
assert len(final_outputs) == len(texts)
# set final outputs
examples['predicted_labels'] = final_outputs
return examples
The candidate labels are defined outside of this function.
In other words, I’d like to add a new column “predicted_labels”, which, for a batch of texts, should be a list of dictionaries (each dictionary mapping labels to confidence values for a given text - only those for which the confidence value > 0.5). However, when I do updated_dataset = dataset.map(zero_shot_classify_sequences, batched=True, batch_size=10), the output does not look like I’d expect. For a given text, I get the following:
'predicted_labels': {'Delivery & fulfilment technology': None,
'Novel processing techniques & Equipments': None,
'Plant-based': None,
'Retail tech': None}
This should not be the case. In case none of the confidence values is higher than the threshold of 0.5, then the dictionary of “predicted labels” should be empty for that given example.
It probably has to do with the fact that a list of dictionaries is not supported by Apache Arrow? Or is it?
|
cc @lhoestq
| 0 |
huggingface
|
🤗Datasets
|
How to combine local data files with an official dataset
|
https://discuss.huggingface.co/t/how-to-combine-local-data-files-with-an-official-dataset/4685
|
Hey,
I made a short notebook to show how local data files can be loaded into Datasets and,
consequently, combined with an official dataset into one Dataset object.
Check out the Google Colab here.
|
So, if I have a very different dataset, but I have the sentence and the MP3 (the audio quality doesn't matter, nor whether silence or background noise makes up a major part of the file),
do I only need to create that JSON file and use it as a base dataset?
| 0 |
huggingface
|
🤗Datasets
|
Map multiprocessing Issue
|
https://discuss.huggingface.co/t/map-multiprocessing-issue/4085
|
I'm getting this issue when I am trying to map-tokenize a large custom dataset. It looks like a multiprocessing issue. Running it with one proc or with a smaller set seems to work. I've tried different batch_size values and still get the same errors. I also tried sharding it into smaller datasets, but that didn't help. Thoughts? Thanks!
dataset['test'].map(lambda e: tokenizer(e['texts']), batched = True, batch_size = 1000, num_proc = 8)
error Traceback (most recent call last)
in
----> 1 dataset['test'].map(lambda e: tokenizer(e['texts']), batched = True, batch_size = 1000, num_proc = 8)
/home/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1316 logger.info("Spawning {} processes".format(num_proc))
1317 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
→ 1318 transformed_shards = [r.get() for r in results]
1319 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1320 result = concatenate_datasets(transformed_shards)
/home/venv/lib/python3.6/site-packages/datasets/arrow_dataset.py in <listcomp>(.0)
1316 logger.info("Spawning {} processes".format(num_proc))
1317 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
→ 1318 transformed_shards = [r.get() for r in results]
1319 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1320 result = concatenate_datasets(transformed_shards)
/home/venv/lib/python3.6/site-packages/multiprocess/pool.py in get(self, timeout)
642 return self._value
643 else:
→ 644 raise self._value
645
646 def _set(self, i, obj):
/home/venv/lib/python3.6/site-packages/multiprocess/pool.py in _handle_tasks(taskqueue, put, outqueue, pool, cache)
422 break
423 try:
→ 424 put(task)
425 except Exception as e:
426 job, idx = task[:2]
/home/venv/lib/python3.6/site-packages/multiprocess/connection.py in send(self, obj)
207 self._check_closed()
208 self._check_writable()
→ 209 self._send_bytes(_ForkingPickler.dumps(obj))
210
211 def recv_bytes(self, maxlength=None):
/home/venv/lib/python3.6/site-packages/multiprocess/connection.py in _send_bytes(self, buf)
394 n = len(buf)
395 # For wire compatibility with 3.2 and lower
→ 396 header = struct.pack("!i", n)
397 if n > 16384:
398 # The payload is large so Nagle’s algorithm won’t be triggered
error: 'i' format requires -2147483648 <= number <= 2147483647
|
Hi there, I got a (maybe) similar issue caused by the multiprocessing in map. Instead of opening a new thread, I thought I would use this one. Note that the error occurs only if I specify num_proc > 1, i.e. use multi-processing:
Code:
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
datasets = datasets.map(
lambda sequence: tokenizer(sequence['text'], return_special_tokens_mask=True),
batched=True,
batch_size=1000,
num_proc=2, #psutil.cpu_count()
remove_columns=['text'],
)
datasets
Error:
Token indices sequence length is longer than the specified maximum sequence length for this model (8395 > 512). Running this sequence through the model will result in indexing errors
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\multiprocess\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\datasets\arrow_dataset.py", line 203, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\datasets\fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\datasets\arrow_dataset.py", line 1695, in _map_single
batch = apply_function_on_filtered_inputs(
File "c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\datasets\arrow_dataset.py", line 1608, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "<ipython-input-18-25a1ecec1896>", line 9, in <lambda>
NameError: name 'tokenizer' is not defined
"""
The above exception was the direct cause of the following exception:
NameError Traceback (most recent call last)
<ipython-input-18-25a1ecec1896> in <module>
6 tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
7
----> 8 datasets = datasets.map(
9 lambda sequence: tokenizer(sequence['text'], return_special_tokens_mask=True),
10 batched=True,
c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\datasets\dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)
430 cache_file_names = {k: None for k in self}
431 return DatasetDict(
--> 432 {
433 k: dataset.map(
434 function=function,
c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\datasets\dataset_dict.py in <dictcomp>(.0)
431 return DatasetDict(
432 {
--> 433 k: dataset.map(
434 function=function,
435 with_indices=with_indices,
c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\datasets\arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1483 logger.info("Spawning {} processes".format(num_proc))
1484 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1485 transformed_shards = [r.get() for r in results]
1486 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1487 result = concatenate_datasets(transformed_shards)
c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\datasets\arrow_dataset.py in <listcomp>(.0)
1483 logger.info("Spawning {} processes".format(num_proc))
1484 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1485 transformed_shards = [r.get() for r in results]
1486 logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
1487 result = concatenate_datasets(transformed_shards)
c:\Users\s_scho53\Desktop\L09_Desktop\_FiLMo\.venv\lib\site-packages\multiprocess\pool.py in get(self, timeout)
769 return self._value
770 else:
--> 771 raise self._value
772
773 def _set(self, i, obj):
NameError: name 'tokenizer' is not defined
I am grateful for any help!
| 0 |
huggingface
|
🤗Datasets
|
Three-way Random Split
|
https://discuss.huggingface.co/t/three-way-random-split/4679
|
Hi there,
I am wondering, what is currently the most elegant way to perform a three-way random split (into train, validation and test sets)? Let's assume I use load_dataset so that I get:
Dataset({
features: ['text'],
num_rows: 19122
})
Subsequently, I'd like to perform the split. Currently I am performing dataset.train_test_split() twice and then recombining the three datasets into one using a DatasetDict. However, I assume that this is not the most elegant approach, right? I also experimented with ReadInstructions; however, I could only split the data deterministically instead of randomly…
Anyone got a better solution?
|
cc @lhoestq
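For reference, a minimal sketch of the two-step train_test_split approach described in the question (the dataset name and split sizes are arbitrary examples):
from datasets import load_dataset, DatasetDict

dataset = load_dataset("imdb", split="train")

# First split off a temporary pool, then split that pool into validation and test
train_rest = dataset.train_test_split(test_size=0.2, seed=42)
val_test = train_rest["test"].train_test_split(test_size=0.5, seed=42)

dataset = DatasetDict({
    "train": train_rest["train"],
    "validation": val_test["train"],
    "test": val_test["test"],
})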
| 0 |
huggingface
|
🤗Datasets
|
Fetching rows of a large Dataset by index
|
https://discuss.huggingface.co/t/fetching-rows-of-a-large-dataset-by-index/4271
|
I was referred here by @lhoestq from this GitHub issue.
Background
I have a large dataset, ds_all_utts, of user utterances. I load it using load_from_disk because I saved it with save_to_disk:
ds_all_utts = load_from_disk(ds_all_utts_fname)
ds_all_utts has 2,732,013 rows and these features:
{'ANY': Value(dtype='int64', id=None),
'COMPLAINTCLARIFICATION': Value(dtype='int64', id=None),
'COMPLAINTMISHEARD': Value(dtype='int64', id=None),
'COMPLAINTPRIVACY': Value(dtype='int64', id=None),
'COMPLAINTREPETITION': Value(dtype='int64', id=None),
'CRITICISM': Value(dtype='int64', id=None),
'NEGATIVENAVIGATION': Value(dtype='int64', id=None),
'OFFENSIVE': Value(dtype='int64', id=None),
'STOP': Value(dtype='int64', id=None),
'embedding': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None),
'frequency': Value(dtype='int64', id=None),
'user_utterance': Value(dtype='string', id=None)}
user_utterance is a short piece of text (usually just a few words), embedding is a 1280-length vector representing that utterance, frequency is an int, and the rest are binary labels (0 or 1) for the utterance. It’s sorted by descending frequency.
I have another Dataset called neuralgen_ds whose rows represent turns of dialogue along with their context. It has 385,580 rows and these features:
{'session_id': Value(dtype='string', id=None),
'treelet': Value(dtype='string', id=None),
'context': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'bot_utt': Value(dtype='string', id=None),
'bot_utt_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'user_utt': Value(dtype='string', id=None),
'user_utt_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'GPT2ED': Value(dtype='bool', id=None),
'__index_level_0__': Value(dtype='int64', id=None)}
Of these, the important one is user_utt, which is the same type of data as ds_all_utts['user_utterance']. Some user utterances appear multiple times in neuralgen_ds; there are 190,602 unique utterances in neuralgen_ds['user_utt'].
What I want to do
For each row of neuralgen_ds, I want to look up the user utterance in ds_all_utts, and copy over certain columns into neuralgen_ds. In particular, I want to copy over embedding and all the capitalized binary labels (ANY, COMPLAINTCLARIFICATION, etc).
My code
First I create a dictionary mapping from a user utterance to its position in ds_all_utts:
ds_all_utts_lookup = {utt: idx for idx, utt in enumerate(ds_all_utts['user_utterance'])}
Then I use .map to add the columns to neuralgen_ds:
cols = ['embedding', 'ANY', 'COMPLAINTCLARIFICATION', 'COMPLAINTMISHEARD', 'COMPLAINTPRIVACY', 'COMPLAINTREPETITION', 'CRITICISM', 'NEGATIVENAVIGATION', 'OFFENSIVE', 'STOP']
def map_fn(examples):
user_utts = examples['user_utt'] # list of str
idxs = [ds_all_utts_lookup[user_utt] for user_utt in user_utts] # list of int
ds_slice = ds_all_utts[idxs] # dict
result = {col: ds_slice[col] for col in cols}
return result
neuralgen_ds = neuralgen_ds.map(map_fn, batched=True, batch_size=100)
The tqdm estimate says this .map will take over 8 hours. Adjusting batch_size doesn’t seem to help. The slowest part of map_fn is this line:
ds_slice = ds_all_utts[idxs] # dict
Other questions
Are you on an SSD or an HDD?
I'm not sure, but I followed these instructions and got
>>> lsblk -o name,rota
NAME ROTA
sda 1
├─sda1 1
├─sda2 1
├─sda5 1
├─sda6 1
├─sda7 1
└─sda8 1
sdb 1
└─sdb1 1
|
Hi!
To speed up map operations, you can run them with multiprocessing by specifying num_proc= in map. Usually it's better to set it to the number of cores your CPU has.
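Applied to the map call from the question, that would look something like this (8 is just an example core count):
neuralgen_ds = neuralgen_ds.map(map_fn, batched=True, batch_size=100, num_proc=8)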
Let me know if that helps.
Also, it looks like you're using an HDD, which is slower than an SSD. Since your script does a lot of read/write operations (writing a dataset while reading data from another one), I'd expect that the HDD slows down the process a bit, unfortunately.
| 0 |
huggingface
|
🤗Datasets
|
Understanding set_transform
|
https://discuss.huggingface.co/t/understanding-set-transform/3740
|
I’ve been working on a side project that uses phonetic English language models for text generation. Since I’m not aware of any existing phonetic English datasets, I’ve been preprocessing existing English text datasets with my phonemization script to give myself enough training data. Mainly OSCAR for pretraining the model, and then my own small datasets for fine-tuning on specific tasks.
My workflow has been:
downloading the 2.5TB oscar_en shuffled text file
processing it (in chunks) to its phonetic representation and saving those text files to disk
batch tokenizing those files and saving them to a local HuggingFace dataset, because it takes hours (or days) to tokenize the whole thing at the beginning of a training
Even with only 3% of the original OSCAR corpus phonemized, my dataset is up to over 1.2 TB on disk. Which I was okay with – I'm running out of local storage, but I was never going to be able to use the whole OSCAR corpus anyway on my rinky-dink home setup.
But this month has brought two things to HuggingFace – OSCAR in the datasets library, and on-the-fly transforms.
Am I right in understanding that I could load the oscar_en corpus from the HF Dataset, and then pass to set_transform a function that would phonemize and tokenize the samples, and the only hit to my disk would be the arrow cache of the original OSCAR dataset? And that I would be able to quickly resume training from my most recent checkpoint, since it’d just be loading from that cache?
I imagine it’ll slow the overall training down and I might not be able to feed my GPUs as quickly as I’d like, but the simplified workflow might be worth the performance hit (especially if I find another bug in my phonemizer script that makes me want to redo everything)
Building off of that, if one wanted to do a BART-style pretraining, would it be possible start with a single-column dataset, and pass to set_transform a function that returns the tokenized original dataset as the targets, and a randomly-masked version of the original tokens as the inputs, all on the fly? [Forgive me if this last question is dumb or nonsensical, I have a very limited understanding of seq2seq training / how BART works]
|
set_transform does not cache the resulting data. Depending on the data/storage you have available, you may want to opt for map. Both have a low memory footprint.
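A rough sketch of the on-the-fly workflow described above (the OSCAR config name is an assumption, phonemize() stands in for your own phonemizer script, and tokenizer is whatever tokenizer you already use):
from datasets import load_dataset

dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train")

def phonemize_and_tokenize(batch):
    phonemes = [phonemize(text) for text in batch["text"]]  # your own phonemizer
    return tokenizer(phonemes, truncation=True, padding="max_length", max_length=512)

# Applied lazily on __getitem__; nothing beyond the original Arrow cache is written to disk
dataset.set_transform(phonemize_and_tokenize)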
| 0 |
huggingface
|
🤗Datasets
|
Available datasets online
|
https://discuss.huggingface.co/t/available-datasets-online/3651
|
Hi, I am a beginner. I was wondering if there is a predefined dataset that contains various services' terms and conditions and/or privacy policies to fine-tune a model on. Thanks
|
Hi! Afaik there's no such dataset available on the datasets hub yet.
Do you know any dataset that we could add?
| 0 |
huggingface
|
🤗Datasets
|
Can dataset.map accept multiple arguments like python map
|
https://discuss.huggingface.co/t/can-dataset-map-accept-multiple-arguments-like-python-map/4128
|
In Python, map works as follows:
map(func, arg_1, arg_2)
In datasets.map, we are required to pass in a callable (which expects objects of the form dataset[idx]), which means that certain things like the tokenizer have to be defined and accessible within the scope of this function, along with the other parameters that we want to pass. Can we pass arguments like a normal function call as shown above? I'm asking because I have two preprocess functions for the train and validation splits, and I have to write both functions twice, which looks repetitive (since there are only slight changes between them).
|
Hi @prajjwal1, as described in the docs, you can pass a dict called fn_kwargs that can include the extra arguments for your map function.
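For example (a sketch; the function and variable names are made up), the same preprocessing function can then serve both splits:
def preprocess(examples, tokenizer, max_length):
    return tokenizer(examples["text"], truncation=True, max_length=max_length)

train_ds = train_ds.map(preprocess, batched=True,
                        fn_kwargs={"tokenizer": tokenizer, "max_length": 512})
val_ds = val_ds.map(preprocess, batched=True,
                    fn_kwargs={"tokenizer": tokenizer, "max_length": 128})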
| 0 |
huggingface
|
🤗Datasets
|
Translating Financial PhraseBank
|
https://discuss.huggingface.co/t/translating-financial-phrasebank/3932
|
Hello all,
I would like to start an NLP project analysing financial data in Hebrew. To do so, I would like to train a BERT model on the Financial PhraseBank. Currently the sentences are in English, and I was wondering if anyone would like to / knows anyone who could assist me in translating this.
Thank you
|
Hi The Udster,
You could take a look at https://towardsdatascience.com/going-global-how-to-multi-task-in-multiple-languages-with-the-mt5-transformer-892617cd890c
Somewhere in the article the author describes how to use the simpletransformers library (based on Hugging Face Transformers) to translate datasets into several languages using the MarianMT models (opus_xyz).
The translation quality is not as high as the one from Google Cloud, but it still beats manual translation.
regards,
Yeb
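If you prefer to stay with plain Transformers instead of the simpletransformers wrapper, a rough sketch with a MarianMT model could look like this (the checkpoint name is an assumption; check the Helsinki-NLP hub page for the exact English-Hebrew model):
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-he"  # assumed checkpoint name
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

print(translate(["The company reported strong quarterly earnings."]))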
| 0 |
huggingface
|
🤗Datasets
|
Is it possible to filter/select dataset class by a column’s values?
|
https://discuss.huggingface.co/t/is-it-possible-to-filter-select-dataset-class-by-a-columns-values/3296
|
I am wondering if it is possible to use the dataset indices to:
get the values for a column
use (#1) to select/filter the original dataset by the order of those values
The problem I have is this: I am using HF’s dataset class for SQuAD 2.0 data like so:
from datasets import load_dataset
dataset = load_dataset("squad_v2")
When I train, I collect the indices and can use those indices to filter/select the dataset in order of how it was trained with randomly shuffled batches like so:
dataset['train'].select(indices=[list of indices here])
The problem is when I also want to use HF’s scoring method for SQuAD 2.0. I need the original data set and its tokenized “features” to be in the same order. When the training data is tokenized, it does not share the same length; it gets larger, owing to text of varying length. For this reason, I cannot use the indices emitted during training to align my original training data properly; the sizes are not the same.
One way that I could align both data sets is to:
collect the indices used during training
use those indices to create a new training data set in the right order dataset['train'].select(indices=[list of indices here])
then, from the output of step 2, get a list of all the strings found in the id column
use the strings found in the id column to re-order the dataset by each and every unique string value.
I do not think this is possible judging from the docs here, but I am curious if anyone has a recommendation:
https://huggingface.co/docs/datasets/package_reference/main_classes.html
A painful way of doing what I want is through Elasticsearch (it's very slow, but I feel that there has to be a better way to filter/query a dataset).
from elasticsearch import Elasticsearch
es = Elasticsearch([{'host': 'localhost'}]).ping()
dataset['train'].add_elasticsearch_index(column='id')
out1 = dataset['train'].search(index_name='id', query='56be85543aeaaa14008c9063')
out1[1]
|
From what I can understand, what you want is to somehow filter the dataset and then use the same dataset to compute metrics, is this right?
You should be able to do this by:
get your filtered dataset
create a dataloader
iterate over the batches and do prediction for each batch
compute the metrics.
for batch in dataloader:
    model_inputs, targets = batch
    predictions = model(model_inputs)
    metric.add_batch(predictions=predictions, references=targets)
score = metric.compute()
| 0 |
huggingface
|
🤗Datasets
|
Does load_dataset load the data in to the memory?
|
https://discuss.huggingface.co/t/does-load-dataset-load-the-data-in-to-the-memory/3621
|
What if the data file on disk of a loaded dataset (loaded with the load_dataset function) changed?
Please consider that the data structure and names stay the same and only some values change. So, without loading the dataset again, can I access the new data points?
|
Hi! If the data file changes - for example, if one value was changed - then load_dataset will return a new, updated dataset that takes this change into account.
Indeed, the datasets library looks for any change in your data files before returning the dataset, to make sure it doesn't reload an outdated version from the cache.
| 0 |
huggingface
|
🤗Datasets
|
NLP dataset for ByteLevelTokenizer Training
|
https://discuss.huggingface.co/t/nlp-dataset-for-byteleveltokenizer-training/3653
|
Hi, I would like to train my own ByteLevelBPETokenizer using an nlp dataset.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=???, vocab_size=52000, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
The dataset is from:
from datasets import load_dataset
dataset = load_dataset('wikicorpus', 'raw_en')
How can I process this dataset to input it in the tokenizer.train() function?
Thanks
|
You can take a look at the example script here:
https://github.com/huggingface/tokenizers/blob/master/bindings/python/examples/train_with_datasets.py
import datasets
from tokenizers import normalizers, pre_tokenizers, Tokenizer, models, trainers
# Build a tokenizer
bpe_tokenizer = Tokenizer(models.BPE())
bpe_tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
bpe_tokenizer.normalizer = normalizers.Lowercase()
# Initialize a dataset
dataset = datasets.load_dataset("wikitext", "wikitext-103-raw-v1")
# Build an iterator over this dataset
def batch_iterator():
batch_length = 1000
for i in range(0, len(dataset["train"]), batch_length):
yield dataset["train"][i : i + batch_length]["text"]
# And finally train
bpe_tokenizer.train_from_iterator(batch_iterator(), length=len(dataset["train"]))
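If you want to keep the ByteLevelBPETokenizer from your snippet, it also exposes train_from_iterator, so a sketch reusing the same batch_iterator idea could look like this (untested):
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    batch_iterator(),
    vocab_size=52000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)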
| 0 |
huggingface
|
🤗Datasets
|
No space left on device
|
https://discuss.huggingface.co/t/no-space-left-on-device/3338
|
Hi,
when I tried to load the weights of a model on my device,
it showed "no space left on device". And when I restart the machine, this message might go away.
How can I clear the cache without restarting my machine?
|
If you’re using torch, you can try torch.cuda.empty_cache(), and if that’s not enough, send your model to the CPU first, then empty the cache. But, if we’re being honest, I can’t ever quite get it to work the way I expect, and usually I end up restarting my kernel, as it’s the only way I know to reliably clear up GPU memory.
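A minimal sketch of that sequence:
import torch

model.cpu()               # move the model off the GPU first
torch.cuda.empty_cache()  # then release cached GPU memory back to the driver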
| 0 |
huggingface
|
🤗Datasets
|
AttributeError: ‘DatasetDict’ object has no attribute ‘train_test_split’
|
https://discuss.huggingface.co/t/attributeerror-datasetdict-object-has-no-attribute-train-test-split/3341
|
Shouldn’t this work?
dataset = load_dataset('json', data_files='path/to/file')
dataset.train_test_split(test_size=0.15)
I’m getting this following error:
Using custom data configuration default
Downloading and preparing dataset json/default-cf892ee5bc3fc36a (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-cf892ee5bc3fc36a/0.0.0/70d89ed4db1394f028c651589fcab6d6b28dddcabbe39d3b21b4d41f9a708514...
Dataset json downloaded and prepared to /root/.cache/huggingface/datasets/json/default-cf892ee5bc3fc36a/0.0.0/70d89ed4db1394f028c651589fcab6d6b28dddcabbe39d3b21b4d41f9a708514. Subsequent calls will reuse this data.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-59d55201b8c3> in <module>()
1 dataset = load_dataset('json', data_files='/path/to/file')
----> 2 dataset.train_test_split(test_size=0.15)
3 dataset.shard(10)
4 dataset
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
|
Hi @thecity2, as far as I know train_test_split operates on Dataset objects, not DatasetDict objects.
For example, this works
squad = (load_dataset('squad', split='train')
.train_test_split(train_size=800, test_size=200))
because I’ve picked the train split and so load_dataset returns a Dataset object. On the other hand, this does not work:
squad = load_dataset('squad').train_test_split(train_size=800, test_size=200)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-10-d3fb264651eb> in <module>
----> 1 squad = load_dataset('squad').train_test_split(train_size=800, test_size=200)
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
It seems that your load_dataset is returning the latter, so you could try applying train_test_split on one of the Dataset objects that lives in your dataset.
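In your case that would look something like this (assuming the JSON file only produces a train split):
dataset = load_dataset('json', data_files='path/to/file')
splits = dataset['train'].train_test_split(test_size=0.15)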
| 0 |
huggingface
|
🤗Datasets
|
Getting PermissionError: [WinError 32] When Using Load_Dataset()
|
https://discuss.huggingface.co/t/getting-permissionerror-winerror-32-when-using-load-dataset/3249
|
I am trying to use the load_dataset command to create a dataset from my CSV train and test files. However, when attempting to load in my CSV files, I'm getting a Windows error.
My code:
from datasets import load_dataset
# load data
train_dataset = load_dataset('csv', data_files='C:/Users/WTF/Desktop/cleaned_dataset/train.csv')
The error returned:
Downloading and preparing dataset csv/default-6eb8a0ce457cdcea (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to C:\Users\WTF\.cache\huggingface\datasets\csv\default-6eb8a0ce457cdcea\0.0.0\2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...
0 tables [00:00, ? tables/s]AAAA
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\datasets\builder.py in incomplete_dir(dirname)
484 try:
--> 485 yield tmp_dir
486 if os.path.isdir(dirname):
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
526 self._download_and_prepare(
--> 527 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
528 )
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
603 # Prepare split will record examples associated to the split
--> 604 self._prepare_split(split_generator, **prepare_split_kwargs)
605 except OSError as e:
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator)
958 not_verbose = bool(logger.getEffectiveLevel() > WARNING)
--> 959 for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
960 writer.write_table(table)
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\tqdm\std.py in __iter__(self)
1103
-> 1104 for obj in iterable:
1105 yield obj
C:\Users\WTF\.cache\huggingface\modules\datasets_modules\datasets\csv\2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2\csv.py in _generate_tables(self, files)
126 float_precision=self.config.float_precision,
--> 127 chunksize=self.config.chunksize,
128 )
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
701
--> 702 return _read(filepath_or_buffer, kwds)
703
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
428 # Create the parser.
--> 429 parser = TextFileReader(filepath_or_buffer, **kwds)
430
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
894
--> 895 self._make_engine(self.engine)
896
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
1121 if engine == 'c':
-> 1122 self._engine = CParserWrapper(self.f, **self.options)
1123 else:
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
1852
-> 1853 self._reader = parsers.TextReader(src, **kwds)
1854 self.unnamed_cols = self._reader.unnamed_cols
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._get_header()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x91 in position 57: invalid start byte
During handling of the above exception, another exception occurred:
PermissionError Traceback (most recent call last)
<ipython-input-10-ab3af5dabfdc> in <module>()
2
3 # load data
----> 4 train_dataset = load_dataset('csv', data_files='C:/Users/WTF/Desktop/cleaned_dataset/train.csv')
5 test_dataset = load_dataset('csv', data_files='C:/Users/WTF/Desktop/cleaned_dataset/test.csv')
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
610 download_config=download_config,
611 download_mode=download_mode,
--> 612 ignore_verifications=ignore_verifications,
613 )
614
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
532 self.info.size_in_bytes = self.info.dataset_size + self.info.download_size
533 # Save info
--> 534 self._save_info()
535
536 # Download post processing resources
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\contextlib.py in __exit__(self, type, value, traceback)
128 value = type()
129 try:
--> 130 self.gen.throw(type, value, traceback)
131 except StopIteration as exc:
132 # Suppress StopIteration *unless* it's the same exception that
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\site-packages\datasets\builder.py in incomplete_dir(dirname)
489 finally:
490 if os.path.exists(tmp_dir):
--> 491 shutil.rmtree(tmp_dir)
492
493 # Print is intentional: we want this to always go to stdout so user has
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\shutil.py in rmtree(path, ignore_errors, onerror)
514 # can't continue even if onerror hook returns
515 return
--> 516 return _rmtree_unsafe(path, onerror)
517
518 # Allow introspection of whether or not the hardening against symlink
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\shutil.py in _rmtree_unsafe(path, onerror)
398 os.unlink(fullname)
399 except OSError:
--> 400 onerror(os.unlink, fullname, sys.exc_info())
401 try:
402 os.rmdir(path)
C:\Users\WTF\AppData\Local\Programs\Python\Python37\lib\shutil.py in _rmtree_unsafe(path, onerror)
396 else:
397 try:
--> 398 os.unlink(fullname)
399 except OSError:
400 onerror(os.unlink, fullname, sys.exc_info())
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\WTF\\.cache\\huggingface\\datasets\\csv\\default-6eb8a0ce457cdcea\\0.0.0\\2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2.incomplete\\csv-train.arrow'
|
cc @lhoestq
| 0 |
huggingface
|
🤗Datasets
|
Create a dataset from generator
|
https://discuss.huggingface.co/t/create-a-dataset-from-generator/3119
|
Is there any way to create a dataset from a generator (without it being loaded into memory)? Something similar to tf.data.Dataset.from_generator.
|
The datasets are not completely read into memory so you should not have to worry about memory usage. It’s mostly fast on-disk access thanks to memory mapping.
| 0 |
huggingface
|
🤗Datasets
|
Bleurt metric throwing this error: UnrecognizedFlagError: Unknown command line flag ‘f’
|
https://discuss.huggingface.co/t/bleurt-metric-throwing-this-error-unrecognizedflagerror-unknown-command-line-flag-f/2974
|
Running this in Jupyter notebook and getting this error when attempting to use the bleurt metric:
UnrecognizedFlagError: Unknown command line flag 'f'
Any ideas?
|
What's the code you used?
| 0 |
huggingface
|
🤗Datasets
|
[Open-to-the-community] One week team-effort to reach v2.0 of HF datasets library
|
https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176
|
Hi all,
We are planning to do one of the biggest team efforts we have ever done next week (Nov 30th to Dec 4th) to reach v2.0 of the datasets library (Edit: final day extended to next Wednesday Dec 9th!).
The effort will involve more than half of HuggingFace (!) with about 15 people including members who’ve defined the library like @lhoestq @yjernite, @joeddav, @jplu @patrickvonplaten, members of the research team like @teven @VictorSanh and the OSS team like @Narsil, newcomers like @abhishek, awesome part-time members like @aymm and @canwenxu and many others including @madlag or yours truly. (Edit: And now over 200 external participants as well )
It will be targeted toward adding and tagging a large number of NLP datasets to the datasets library, with the goal being to reach 500+ datasets and to cover and organize as much of the NLP dataset ecosystem as we find possible.
We are taking the occasion to develop some tools to more easily add and tag datasets in the library as well as create dataset cards for them.
After internal discussion, we have decided to open this time-limited project to external contributors if you want to have a little taste of what it is to participate in an internal HuggingFace team effort.
Basically, you can ping me or anyone of us and I will add you to the slack channel and give you access to the tools we use as well as detailed information on the workflow and a list of datasets that we think are worth adding.
There might be (Edit: "will definitely be") a small reward in the form of HuggingFace swag and, of course, the credit of sharing your contribution to this project, but keep in mind that this is an open-source effort, so join if you want to make an open contribution and enjoy a bit of the HuggingFace vibe; this is not an internship or work offer (for that you should check and apply on our profile on AngelList!). We expect most of the work to be done by the full-time members of HuggingFace, but we are also always happy to share how we work and collaborate with external contributors, which is why we are opening this project.
what is it about:
we are adding a lot of new datasets to the library (in particular across many NLP tasks, and we would like to have more datasets in low-resource languages as well) with the aim of covering as much ground as possible
how you can join:
post here to say that you want to participate and I will add you to our slack => That’s it
what you’ll get
enjoy a bit of HuggingFace vibe by joining the team sprint
receive a special event gift (actually 2 gifts, see this post further down the thread for details!) because it’s really amazing to see the community so involved here that we wanted to remember this event!
BIG UPDATE
We have just updated the deadline to next Wednesday (Dec 9th) So the late comers can still participate!
SECOND BIG UPDATE
A lot of people are still joining (on the way to 300 participants), so we are extending the deadline a bit again, though it will be a limited extension because we have to end the project at some point.
More precisely:
All the participants who have opened at least 1 PR before the end of Wednesday (Dec 9th) can continue adding additional datasets until the end of Sunday (Dec 13th), and those will be counted in the sprint.
In other words:
If you have opened 1 PR before Wednesday (and thus are eligible for the special event tee-shirt goody), you will have until the end of Sunday to add 2 other datasets if you want, and join the main-contributors channel of the Slack (+ get the special event mug).
Open-sourcely yours,
Thom
|
Love this! I’ll be too preoccupied the following weeks, but I’ll definitely join in in the future if such an event is done again!
| 0 |
huggingface
|
🤗Datasets
|
Converting string label to int
|
https://discuss.huggingface.co/t/converting-string-label-to-int/2816
|
I’m using a custom dataset from a CSV file where the labels are strings. I’m curious what the best way to encode these labels to integers would be.
Sample code:
datasets = load_dataset('csv', data_files={
'train': 'train.csv',
'test': 'test.csv'
}
)
def tokenize(batch):
return tokenizer(batch['text'], padding=True, truncation=True, max_length=128)
datasets = datasets.map(tokenize, batched=True)
datasets.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
|
In your tokenize function, you can also add a line to convert your labels to ints:
def tokenize(batch):
tokenized_batch = tokenizer(batch['text'], padding=True, truncation=True, max_length=128)
tokenized_batch["labels"] = [str_to_int[label] for label in batch["labels"]]
return tokenized_batch
where str_to_int is your mapping from string label to int label.
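For example, str_to_int could be built from the unique labels in your training split (assuming the CSV column holding the string labels is called labels, as the set_format call above suggests):
labels = sorted(set(datasets["train"]["labels"]))
str_to_int = {label: i for i, label in enumerate(labels)}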
| 0 |
huggingface
|
🤗Datasets
|
Performance tips for shuffle and flatten_indices
|
https://discuss.huggingface.co/t/performance-tips-for-shuffle-and-flatten-indices/2117
|
Hey @lhoestq,
How can I speed up shuffle+flatten on a dataset with millions of instances? It’s painfully slow for whatever setting I tried.
TIA
|
Hi!
By flatten you mean flatten_indices?
Is your dataset made of strings?
If so, then the speed bottleneck is the I/O: read from the current dataset arrow file, and write to a new file.
The new file is written when doing flatten_indices.
To speed up things you can use a SSD or distribute the writing (using shard on the shuffled dataset for example).
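For example, a rough sketch of distributing the writing over shards (the number of shards and the output paths are placeholders; each iteration could run in its own process):
num_shards = 8
shuffled = dataset.shuffle(seed=42)
for index in range(num_shards):
    shard = shuffled.shard(num_shards=num_shards, index=index)
    # flatten_indices rewrites this shard contiguously to its own arrow file
    shard.flatten_indices().save_to_disk(f"shuffled_shard_{index}")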
| 0 |
huggingface
|
🤗Datasets
|
Question about loading wikipedia datset
|
https://discuss.huggingface.co/t/question-about-loading-wikipedia-datset/1969
|
Hello, I am trying to download wikipedia dataset.
This is the code I try:
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.ang", beam_runner='DirectRunner')
Then it shows:
FileNotFoundError: Couldn’t find file at https://dumps.wikimedia.org/angwiki/20200501/dumpstatus.json 4
If I pick a recent one dump which is available from https://dumps.wikimedia.org/angwiki/ 3 :
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200620.ang", beam_runner='DirectRunner')
It shows:
ValueError: BuilderConfig 20200620.ang not found. Available: [‘20200501.aa’, ‘20200501.ab’, ‘20200501.ace’, …]
Any advice? Thank you.
|
Do you need English wikipedia? If so, all you need is:
dataset = load_dataset('wikipedia', "20200501.en", split='train')
| 0 |
huggingface
|
🤗Datasets
|
Dataset set_format
|
https://discuss.huggingface.co/t/dataset-set-format/1961
|
Hello everyone,
Datasets provide this great feature of formatting datasets using set_format and then choosing the desired format (numpy, torch etc). The encoded dataset I prepared has columns/features of various data types (int32, int8 etc) but HF models require all features to be dtype torch.long/int64. Is there a simple trick to convert all features to torch.long tensors when selecting torch format?
I understand that I could have prepared the dataset with int64 type but that significantly increases the dataset file size footprint.
Thanks,
Vladimir
|
Nevermind, I found a way. RTFM.
format = {'type': 'torch', 'format_kwargs' :{'dtype': torch.long}}
dataset.set_format(**format)
| 0 |
huggingface
|
🤗Datasets
|
Compressing, saving, and loading datasets
|
https://discuss.huggingface.co/t/compressing-saving-and-loading-datasets/1818
|
Hello everyone,
I am working with large datasets (Wikipedia), and use map transform to create new datasets. The workflow involves creating new datasets that are saved using save_to_disk, and subsequently, I use terminal compression utils to compress the dataset folder. Then I decompress these files and the use load_from_disk to load them on other machines. These manual steps are pita.
It would be great to use compression within datasets and have one compressed file as a result of save_to_disk if so desired.
If I could save these datasets to a remote location immediately, bypassing the save_to_disk, compression, and copy manual steps, that would be amazing.
Loading these created datasets via URL from s3, gs, etc. via a single load_dataset call would be a killer.
All the best,
Vladimir
|
I agree that would be super cool to be able to archive and save/load archived dataset from/to a cloud storage. We’re thinking about this actively. Do you think that some dataset versioning logic could be interesting as well ?
| 0 |
huggingface
|
🤗Datasets
|
Working with large datasets
|
https://discuss.huggingface.co/t/working-with-large-datasets/1876
|
Hey @lhoestq,
I am preparing datasets for BERT pre-training and often save_to_disk simply dies without saving the contents of the files to disk. The largest file I was able to save was 18 GB but above that, I am having no luck.
Any performance tips for dealing with large datasets? Should I simply shard before saving to disk? If I do that, then I get copies of 18 GB files in each shard’s directory. What are my options?
Forgot to mention. I am already using num_proc and larger batches to speed up dataset map invocations. Those work great. It’s the save_to_disk that I am not sure how to deal with. And how to shard without additional copies of the underlying dataset being written to all shard directories.
Thanks in advance.
|
Hi !
Is there an error message ?
| 0 |
huggingface
|
🤗Datasets
|
How to deal with unpickable objects in map
|
https://discuss.huggingface.co/t/how-to-deal-with-unpickable-objects-in-map/1547
|
During the creation of my dataset I would like to add sent2vec representations of input sentences to the dataset. The code would look like this:
import sent2vec
from datasets import load_dataset
sent2vec_model = sent2vec.Sent2vecModel()
sent2vec_model.load_model(sent2vec_path, inference_mode=True)
datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f})
def preprocess(sentences):
embedded_sents = sent2vec_model.embed_sentences(sentences["text"])
return {"text": sentences["text"], "embeddings": embedded_sents}
datasets.map(preprocess, batch_size=None, batched=True)
Unfortunately this won’t work as the sent2vec model can’t be pickled (it seems), and the fingerprint generation thus fails. At first I thought the issue was that map uses multiprocessing by default but using num_proc=1 does not help either. From the error trace it seems that the error arises during the fingerprint/hash update when the sent2vec model is trying to pickled.
File "/mnt/c/dev/python/neural-fuzzy-repair/nfr/finetuning.py", line 48, in create_datasets
datasets.map(preprocess, batch_size=None, batched=True)
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/dataset_dict.py", line 283, in map
{
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/dataset_dict.py", line 284, in <dictcomp>
k: dataset.map(
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1240, in map
return self._map_single(
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 156, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/fingerprint.py", line 157, in wrapper
kwargs[fingerprint_name] = update_fingerprint(
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 367, in dumps
dump(obj, file)
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 339, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/dill/_dill.py", line 446, in dump
StockPickler.dump(self, obj)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 485, in dump
self.save(obj)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/dill/_dill.py", line 1435, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 899, in save_tuple
save(element)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 884, in save_tuple
save(element)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/dill/_dill.py", line 1170, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 884, in save_tuple
save(element)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 601, in save
self.save_reduce(obj=obj, *rv)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 715, in save_reduce
save(state)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/bram/.local/share/virtualenvs/neural-fuzzy-repair-b49KnSNp/lib/python3.8/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 969, in save_dict
self._batch_setitems(obj.items())
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 995, in _batch_setitems
save(v)
File "/home/bram/.pyenv/versions/3.8.6/lib/python3.8/pickle.py", line 576, in save
rv = reduce(self.proto)
File "stringsource", line 2, in sent2vec.Sent2vecModel.__reduce_cython__
TypeError: no default __reduce__ due to non-trivial __cinit__
Is there any way around this? For instance by completely disabling the fingerprinting?
|
Oh interesting, thanks Bram.
Yes I guess in this case we would have to disable the fingerprinting, right @lhoestq?
Which is a bit too bad because in the future we would have liked to leverage the fingerprint to allow the user to have a super robust reproducibility setup but python (+cython in this case) will always be what it is (aka a huge open field).
| 0 |
huggingface
|
🤗Datasets
|
RuntimeError: Error in void faiss::gpu::allocMemorySpace
|
https://discuss.huggingface.co/t/runtimeerror-error-in-void-faiss-allocmemoryspace/1358
|
Hello everyone,
I have a datasets.Dataset created with the following features and size.
Dataset(features: {'embeddings': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'pid': Value(dtype='int32', id=None)}, num_rows: 8841823)
I added a FAISS index over ‘embeddings’ column in the GPU.
docs.load_faiss_index('embeddings', DATAPATH_INDEXED_ENCODED_PASSAGES, device=torch.cuda.current_device())
RuntimeError: Error in void faiss::gpu::allocMemorySpaceV(faiss::gpu::MemorySpace, void**, size_t) at gpu/utils/MemorySpace.cpp:26: Error: 'err == cudaSuccess' failed: failed to cudaMalloc 27162080256 bytes (error 2 out of memory)
I am working on a NVIDIA Tesla K80 with 1 GPU having 11.4GB memory. Well, it seems that with this hardward is impossible to index it on the GPU. Running the index on the CPU leads to infeasible execution time on the next step since I would like to get the top 1000 nearest examples for 6900 queries.
Any workarounds? I’m not sure if I’ll have the resources, but if I get access to several GPUs like the one that I’m working with, I just need to specify multiple GPU IDs when loading the faiss index, right?
|
Well, it seems that you are simply running out of memory. I assume that you either have a very large index or you also have a model on the GPU at the same time?
| 0 |
huggingface
|
🤗Datasets
|
Extend load_from_disk and save_to_disk to remote storage
|
https://discuss.huggingface.co/t/extend-load-from-disk-and-save-to-disk-to-remote-storage/1449
|
Hello everyone,
As datasets are sliced, diced, and transformed it makes sense to load/save them from disk for future use. But why just local disk when this load/save can be extended to any remote storage (S3, GS etc)?
It would be great to have remote storage load/save added in the next release
Many thanks
|
You’re right that would be cool
Thanks for the suggestion
We think it could also be useful to have some sort of versioning for hosted datasets. Do you feel like it could be something you’d be interested to have ?
| 0 |
huggingface
|
🤗Datasets
|
Dataset map and flatten
|
https://discuss.huggingface.co/t/dataset-map-and-flatten/1402
|
Hey guys,
After dataset map transformation, I have a new dataset with the following features:
{‘data’: {‘is_random_next’: Value(dtype=‘bool’, id=None),
‘tokens_a’: Sequence(feature=Value(dtype=‘int64’, id=None), length=-1, id=None),
‘tokens_b’: Sequence(feature=Value(dtype=‘int64’, id=None), length=-1, id=None)}}
The data column is useless so after invoking flatten, I get:
{‘data.is_random_next’: Value(dtype=‘bool’, id=None),
‘data.tokens_a’: Sequence(feature=Value(dtype=‘int64’, id=None), length=-1, id=None),
‘data.tokens_b’: Sequence(feature=Value(dtype=‘int64’, id=None), length=-1, id=None)}
which is exactly what I want. After renaming columns, I am done.
Since map forces us to return dict, I need to wrap a list of dict values (is_random_next, tokens_a, tokens_b) with dict to comply, so I did. How could I avoid that?
|
Quentin @lhoestq I got the wikipedia/bookcorpus dataset processing to be super fast and everything works as advertised. I was wondering if I could somehow bypass the kludge that I have now for dataset flattening - I am really curious if it could be done without it?
| 0 |
huggingface
|
🤗Datasets
|
Bookcorpus dataset format
|
https://discuss.huggingface.co/t/bookcorpus-dataset-format/1421
|
The current book corpus dataset is parsed into sentences directly, which is great, but then there is no way to determine document boundaries. Would it be useful to have another bookcorpus dataset that is chunked into books rather than sentences directly?
Shawn Presser went to great lengths to preserve the structure of the books’ text and it is available at https://github.com/soskek/bookcorpus/issues/27 25 for download.
|
Indeed ! It was already suggested in https://github.com/huggingface/datasets/issues/486 28 to use this link. It would be very cool to add it to the library. You can make a script to use the new link if you want. You can take some inspiration from the docs 28 and from the current bookcorpus 11 script.
Let me know if you have questions, you can ping me on the forum or on github
| 0 |
huggingface
|
🤗Datasets
|
Wikihow dataset preprocessing?
|
https://discuss.huggingface.co/t/wikihow-dataset-preprocessing/1413
|
Has anyone managed to use the code in examples/seq2seq on the wikihow dataset used in the pegasus paper?
If you have a working data dir (one line per example for source and target), but no preprocessing code, I would find a google drive/s3 link to the data useful! Thanks!
If you have preprocessing code, even more useful!
|
Here you go Sam @sshleifer
raw article files
gdown -O train_articles.zip --id 1-1CR6jh6StaI69AsbBXD8lQskFbGc2Ez # train
gdown -O valid_articles_.zip --id 1-EGoT5ZKRNHQb_ewNpD9GZCvQ3uHzDSi # val
gdown -O test_articles_.zip --id 1-CxzdzEIuBYzCs06zrglYrLBlLI6kjSZ # test
unzip these
pre-proc code (not super readable, wrote it a while ago)
import os
import glob
import json
import re
def get_art_abs_wikihow(path):
articles = glob.glob('%s/*' % path)
for a in articles:
try:
with open(a, 'r') as f:
text = f.read()
splits = text.split('@article')
abstract = splits[0].replace('\n', '').replace('@summary', '').strip()
article = splits[1].replace('\n', '').replace('@article', '').strip()
yield article, abstract
except Exception as e:
yield None
def write_to_bin(lines, out_prefix):
print("Making bin file for %s..." % out_prefix)
with open(out_prefix + '.source', 'at') as source_file, open(out_prefix + '.target', 'at') as target_file:
for idx,line in enumerate(lines):
if idx % 1000 == 0:
print("Writing story %i" % idx)
# Get the strings to write to .bin file
if line is None: continue
article, abstract = line
# a threshold is used to remove short articles with long summaries as well as articles with no summary
if len(abstract) < (0.75*len(article)):
# remove extra commas in abstracts
abstract = abstract.replace(".,",".")
# remove extra commas in articles
article = article.replace(";,", "")
article = article.replace(".,",".")
article = re.sub(r'[.]+[\n]+[,]',".\n", article)
abstract = abstract.strip().replace("\n", "")
article = article.strip().replace("\n", "")
# Write article and abstract to files
source_file.write(article + '\n')
target_file.write(abstract + '\n')
print("Finished writing files")
def create_stories(save_path='wikihow'):
# Create some new directories
if not os.path.exists(save_path): os.makedirs(save_path)
# write wikihow
print("Making bin file for wikihow valid set")
lines = get_art_abs_wikihow('./valid_articles')
write_to_bin(lines, os.path.join(save_path, "val"))
print("Making bin file for wikihow train set")
lines = get_art_abs_wikihow('./train_articles')
write_to_bin(lines, os.path.join(save_path, "train"))
print("Making bin file for wikihow test set")
lines = get_art_abs_wikihow('./test_articles')
write_to_bin(lines, os.path.join(save_path, "test"))
and processed data using above script. (one line per example)
gdown -O wikihow.zip --id 1_QE1PLJhhugMf2e1edUGJRMiktKsm8YU
| 0 |
huggingface
|
🤗Datasets
|
Format of data during pre-training
|
https://discuss.huggingface.co/t/format-of-data-during-pre-training/1388
|
What should be the format of the data for pre-training? could it be any raw data (e.g., news articles) in my case and then after I fine-tune, then I need to define it for a specific task e.g., classification?
|
It comes in many forms and usually needs processing/adaptation for model input. Look into transformers examples to start.
If you are asking about the datasets project specifically - it makes it even simpler to do the above. There are many prepared datasets ready to go. Read through documentation.
| 0 |
huggingface
|
🤗Datasets
|
datasets.Dataset.get_nearest_examples() on GPU
|
https://discuss.huggingface.co/t/datasets-dataset-get-nearest-examples-on-gpu/1348
|
Hello everyone,
Is it possible to put get_nearest_examples() computation on GPU after having added a FAISS index 4?
|
You can specify the GPU ID when you instantiate the index. It will use the CPU by default (None).
add_faiss_index(column, device=0)
| 0 |
huggingface
|
🤗Datasets
|
.get_nearest_examples() throws ArrowInvalid: offset overflow while concatenating arrays
|
https://discuss.huggingface.co/t/get-nearest-examples-throws-arrowinvalid-offset-overflow-while-concatenating-arrays/1297
|
Hello everyone,
I am adding a FAISS index on the MSMARCO passages dataset that has ~8.8M passages. I have already created the embeddings with DPRContextTokenizer and DPRContextEncoder.
Dataset info:
(screenshot of the MS MARCO dataset features)
After adding the FAISS index to it, I tried to retrieve some documents with a query.
q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
question = 'what is the difference between a c-corp and a s-corp?'
question_embedding = q_encoder(**q_tokenizer(question, return_tensors="pt"))[0][0].detach().numpy()
scores, retrieved_documents = dataset_embedded_passages['train'].get_nearest_examples('embeddings', question_embedding, k=10)
And then it threw,
ArrowInvalid Traceback (most recent call last)
in
~/.conda/envs/andregodinho/lib/python3.6/site-packages/datasets/search.py in get_nearest_examples(self, index_name, query, k)
564 self._check_index_is_initialized(index_name)
565 scores, indices = self.search(index_name, query, k)
--> 566 return NearestExamplesResults(scores, self[[i for i in indices if i >= 0]])
567
568 def get_nearest_examples_batch(
~/.conda/envs/andregodinho/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1069 format_columns=self._format_columns,
1070 output_all_columns=self._output_all_columns,
-> 1071 format_kwargs=self._format_kwargs,
1072 )
1073
~/.conda/envs/andregodinho/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1037 )
1038 else:
-> 1039 data_subset = self._data.take(indices_array)
1040
1041 if format_type is not None:
~/.conda/envs/andregodinho/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.take()
~/.conda/envs/andregodinho/lib/python3.6/site-packages/pyarrow/compute.py in take(data, indices, boundscheck)
266 """
267 options = TakeOptions(boundscheck)
--> 268 return call_function('take', [data, indices], options)
269
270
~/.conda/envs/andregodinho/lib/python3.6/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/.conda/envs/andregodinho/lib/python3.6/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/.conda/envs/andregodinho/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.conda/envs/andregodinho/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
It threw ArrowInvalid: offset overflow while concatenating arrays
The dataset is very large. Any workaround?
|
Thanks for reporting !
It will be fixed in the datasets release of this week
| 0 |
huggingface
|
🤗Datasets
|
DPR Context tokenization in a GPU
|
https://discuss.huggingface.co/t/dpr-context-tokenization-in-a-gpu/1210
|
Hello everyone, I hope you are all having fun using the Hugging face library.
I am tokenizing the 8.8M passages from MSMARCO dataset. Moreover, I have indexed the dataset with Hugging face dataset because I want to add a FAISS index 1 over it afterwards.
To do all of these, I created the dataset correctly by following these steps 12. Afterwards, I ran this code:
torch.set_grad_enabled(False)
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
dataset_embedded_passages = dataset_passages.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["passage"], return_tensors="pt"))[0][0].numpy()})
This will take a lot of time on the CPU since the dataset has ~8.8M passages. Is it possible to do the tokenization on the GPU? I checked the .map() method and did not find a way to put it on the device.
|
Hi !
You can indeed put the tokenized text on GPU and give it to the model.
Also you can make it significantly faster by using a batched map:
def embed(examples):
tokenized_examples = ctx_tokenizer(
examples["passage"],
return_tensors="pt",
padding="longest",
truncation=True,
max_length=512
).to(device=ctx_encoder.device)
embeddings = ctx_encoder(**tokenized_examples)[0]
return {"embeddings": embeddings}
dataset_embedded_passages = dataset_passages.map(embed, batched=True, batch_size=16)
Let me know if it helps
| 0 |
huggingface
|
🤗Datasets
|
Pipeline with custom dataset tokenizer: when to save/load manually
|
https://discuss.huggingface.co/t/pipeline-with-custom-dataset-tokenizer-when-to-save-load-manually/1084
|
I am trying my hand at the datasets library and I am not sure that I understand the flow.
Let’s assume that I have a single file that is a pickled dict. In that dict, I have two keys that each contain a list of datapoints. One of them is text and the other one is a sentence embedding (yeah, working on a strange project…).
I know that I can create a dataset from this file as follows:
dataset = Dataset.from_dict(torch.load("data.pt"))
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
keys_to_retain = {"input_ids", "sembedding"}
dataset = dataset.map(lambda example: tokenizer(example["text"], padding='max_length'), batched=True)
dataset.remove_columns_(set(dataset.column_names) - keys_to_retain)
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])
My question is, what’s next? Especially considering how caching works. The first thing that should happen is splitting the dataset into a train, dev, test set 1. As a result I would eventually have a dictionary with train, dev, test keys in them. I can then use them in dataloaders and I am ready to go.
The question is, what about subsequent runs (.e. new Python sessions). Will all that code need to be run again? Should I do dataset.save_to_disk, and in a next session not run the whole dataset creation again? In other words, do I have to manually check for the saved files? Something like this (untested).
def create_datasets(dataset_path):
if Path(dataset_path).exists():
datasets = {partition: load_from_disk(Path(dataset_path) / partition) for partition in ["train", "dev", "test"]}
else:
# the snippet that I posted above
# assuming we have train, dev, test in datasets
for key, dataset in datasets.items():
dataset.save_to_disk(Path(dataset_path).joinpath(key))
return datasets
Or is the dataset cached somewhere and every time the first snippet is encountered, none of those steps is repeated and the cached dataset is loaded?
In short, it is not clear to me when I can rely on cache that is hidden (probably somewhere in the user directory), and when I should manually use save_to_disk and load a dataset manually.
Thanks!
|
The caching should work across sessions, normally you don’t have to use save_to_disk. The cache is indexed by a hash of the operations performed on the dataset, if a new, independent, session performs the same operations, they will use the cache instead of being recomputed. If you change something to the operation performed on the dataset, they will be recomputed instead of using the cache.
I will add a detail on the hashing mechanism to the docs when I have some time (no ETA), but basically the hash used to store the dataset is a complete pickle dump of all the arguments you provide to the processing function at each step (including the function provided to map), so if anything changes it will be detected and the operation is recomputed instead of using the cache. If all the arguments and inputs are identical, the hash is the same (whether it’s the same session or not) and the cache file is used if it is found.
save_to_disk is provided as a special utility mostly for people who preprocess a dataset on one machine which has access to the internet and would like to use the dataset on a cluster without any access to the internet (and which thus cannot download the dataset files).
| 0 |
huggingface
|
🤗Datasets
|
Hugdatafast: hugginface/nlp + fastai
|
https://discuss.huggingface.co/t/hugdatafast-hugginface-nlp-fastai/986
|
Hugdatafast: huggingface/nlp + fastai
An integration to make use of hundreds of datasets with fastai, and some handy transforms to make concatenated datasets such as a language model dataset.
pip install hugdatafast
Documentation: https://hugdatafast.readthedocs.io/en/latest/ 38
Doing NLP ?
See if you can turn your data pipeline into just 3 lines.
(screenshot: a text data pipeline in 3 lines with hugdatafast + fastai)
The updates will also be tweeted on my Twitter Richard Wang 3.
|
Update: added an example for preparing any huggingface/nlp dataset for a (traditional) language model, or implementing a custom context window.
Update the update: I cancel the updates.
— Reason — (just for notes, skipping is ok)
Originally I wanted to introduce LMTransform 3 and CombineTransform 3, which can apply a context window over examples. But I suddenly realized there are few cases where we need a context window across examples. Examples in a dataset are often not consecutive, so we don’t need to concatenate unrelated texts. So these classes might only be useful for my personal use case.
| 0 |
huggingface
|
🤗Datasets
|
[SOLVED] Dataset.map() is frozen on ELI5
|
https://discuss.huggingface.co/t/solved-dataset-map-is-frozen-on-eli5/656
|
I have tried to prepare ELI5 to train with T5, based on this wonderful notebook of Suraj Patil 4
However, when I run dataset.map() on ELI5 to prepare input_text and target_text, dataset.map freezes within the first few hundred examples. On the contrary, this works totally fine on SQuAD (80,000 examples). Both nlp versions 0.3.0 and 0.4.0 cause the frozen process, and I also tried various pyarrow versions (0.16.0 / 0.17.0 / 1.0.0), all with the same result.
Reproducible code can be found on this colab notebook 4, where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow.
More info: instead of map, if I run a for loop and apply the function myself, there’s no error and it finishes within 10 seconds. However, an nlp dataset is immutable, so I could not create a new key-value pair within the dataset directly.
I also notice that SQUAD texts are quite clean while ELI5 texts contain many special characters, not sure if this is the cause ?
|
Fixed by amazing Quentin here:
github.com/huggingface/nlp issue: “Bugs: dataset.map() is frozen on ELI5” (opened Aug 7, 2020, closed Aug 12, 2020, by ratthachat)
Thanks very much again!
| 0 |
huggingface
|
🤗Datasets
|
Nlp Datasets: speed-test vs Fastai
|
https://discuss.huggingface.co/t/nlp-datasets-speed-test-vs-fastai/425
|
I was playing around with nlp Datasets 5 library and was seriously impressed by the speed!!
I figured it would be interesting to test it out to see if it would make more sense to do as much text processing (e.g. cleaning, tokenization, numericalisation) with it, instead of using Fastai’s defaults. I used fastai’s TextDataloader with all of its defaults and tried to replicate all its functionality with nlp Datasets
Full blog post here 13
Curious if anyone has feedback on how this test might have been done better, especially any pointers on how to parallelise tokenisation with nlp Datasets
Just tell me the results
Results were…mixed…
Fastai’s initialisation (e.g. load, preprocess, tokenize etc) was faster with the 1.6M row Sentiment140 dataset I used, however I have a few caveats:
Parallelisation
Fastai parallelises the tokenization, which I couldn’t figure out how to do with nlp Datasets (probably my own lack of knowledge and not a limitation of the library though). My guess is that doing so would likely make nlp Datasets much faster than Fastai
Sorting by sample length
To try and replicate SortedDL's behaviour, I sorted the entire dataset in the nlp Dataset trial, which added a significant amount of time; possibly there's a way to better replicate SortedDL's behaviour
Caching
nlp Datasets also uses caching so that the second time around you’d like to do the same pre-processing etc, it is much much faster
10% Data (0.16M rows):

                    Init (s)    1 epoch (s)    1 mini-batch [bs=64] (ms)
Fastai              124         14.3           7.4
Fastai w/sorted     48.1        14.3           7.4
nlp                 71.2        11.3           5.6
100% Data (1.6M rows):

                    Init (s)    1 epoch (s)
Fastai w/sorted     484         142
nlp                 1024        323
Any and all feedback welcome!
(the forums auto-correct “nlp” in my post title to “Nlp” haha)
|
Hi there
Thanks for doing this speed comparison ! This is important for us to make sure we achieve the fastest read/write/process actions we can offer using the power of apache arrow, and with the minimum memory.
We plan to add multiprocessing in the very short term that will speed up processing significantly
| 0 |
huggingface
|
🤗Datasets
|
How to dealing with Data Imbalance
|
https://discuss.huggingface.co/t/how-to-dealing-with-data-imbalance/393
|
I want to fine-tune a pre-trained RoBERTa or Electra model for multiclass classification (sentiment classification) on an imbalanced dataset. How should I handle this problem?
|
For class imbalance, one aspect to consider is that each batch has enough signal to provide some coverage of all the classes, even the unbalanced ones. Otherwise, it may degenerate during training.
When evaluating test performance though, you will need to keep the real proportions as you would observe in the real world.
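If it helps, one common way to give each batch coverage of the rare classes is to oversample them with a weighted sampler. A minimal PyTorch sketch, where train_labels and train_dataset are placeholders for your own encoded data:
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

labels = torch.tensor(train_labels)                  # placeholder: integer class ids of the training set
class_counts = torch.bincount(labels)
sample_weights = 1.0 / class_counts[labels].float()  # rarer classes get picked more often
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
train_loader = DataLoader(train_dataset, batch_size=16, sampler=sampler)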
| 0 |
huggingface
|
🤗Datasets
|
Nlp 0.3.0 is out!
|
https://discuss.huggingface.co/t/nlp-0-3-0-is-out/50
|
Features:
New methods to transform a dataset:
dataset.shuffle: create a shuffled dataset
dataset.train_test_split: create a train and a test split (similar to sklearn)
dataset.sort: create a dataset sorted according to a certain column
dataset.select: create a dataset with rows selected following the given list of indices
Other features:
Better instructions for datasets that require manual download
Important: if you load datasets that require manual downloads with an older version of nlp, instructions won’t be shown and an error will be raised
Better access to dataset information (for instance dataset.feature['label'] or dataset.dataset_size)
Datasets:
New: cos_e v1.0
New: rotten_tomatoes
New: german and italian wikipedia
New docs:
documentation about splitting a dataset
Bug fixes:
fix metric.compute that couldn’t write on file
fix squad_v2 imports
|
Nice, enjoying using nlp already!
Quick question, what is the vision for the nlp library? Will its main focus be curating existing datasets, or might it evolve into a more general-purpose PyArrow wrapper for any (text?) dataset? I’m just blown away by its speed and it would be amazing to be able to do the same with my own text datasets.
I know I could already just start using PyArrow (as below) but I have a feeling that the nlp library might have more text-specific functionality coming down the line that would be amazing to be able to use with my own data…
table = pa.Table.from_pandas(df)
| 0 |
huggingface
|
🤗Tokenizers
|
How to ensure that tokenizers never truncate partial words?
|
https://discuss.huggingface.co/t/how-to-ensure-that-tokenizers-never-truncate-partial-words/14024
|
Is there a way to ensure tokenizers never partially truncate a word, as illustrated below:
tokenizer= SomeTokenizer.from_pretrained('some/path')
tokenizer.decode(tokenizer('I am Nasheed and I like xylophones.', truncation=True, max_length=12)['input_ids'])
The output is the above sentence truncated like: “’[CLS] I am Nasheed and I like xylo [CLS]’”
I want it to be truncated as: “’[CLS] I am Nasheed and I like [CLS]’”
Is there a way to enforce this?
|
Hi Nasheed, I’m quite curious about your use case and why you’re interested in never partially truncating, if you don’t mind sharing!
In any case, here is how I would do it: Increase max_length by 1. Tokenize the text. Decode the tokenized text. Check if the second to last token (the one before the final [CLS] token) starts with ## (the prefix that signifies that a longer token was split). If yes, remove both tokens, the one that starts with ## and the one before that. If not, just remove the one before the [CLS] token.
In your example it would be
[CLS] I am Nasheed and I like xylo ##phones [CLS]
Because the second to last token starts with ## you would remove that token and the token before it.
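In code, a rough sketch of that idea for a WordPiece tokenizer (bert-base-uncased is just for illustration; the loop also handles words split into more than two pieces):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def truncate_on_word_boundary(text, max_length=12):
    # encode with one extra token so we can see whether the cut falls inside a word
    ids = tokenizer(text, truncation=True, max_length=max_length + 1)["input_ids"]
    if len(ids) <= max_length:  # nothing was truncated
        return tokenizer.decode(ids)
    tokens = tokenizer.convert_ids_to_tokens(ids)
    # tokens[-1] is the final special token; tokens[-2] is the first token past the budget
    if tokens[-2].startswith("##"):
        # the cut splits a word: drop every trailing "##" piece plus the piece that starts the word
        while tokens[-2].startswith("##"):
            ids.pop(-2); tokens.pop(-2)
        ids.pop(-2); tokens.pop(-2)
    else:
        ids.pop(-2); tokens.pop(-2)  # the cut already falls on a word boundary, just drop the extra token
    return tokenizer.decode(ids)

print(truncate_on_word_boundary("I am Nasheed and I like xylophones."))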
Hope that helps.
Cheers
Heiko
| 1 |
huggingface
|
🤗Tokenizers
|
Adding new tokens to a BERT tokenizer - Getting ValueError
|
https://discuss.huggingface.co/t/adding-new-tokens-to-a-bert-tokenizer-getting-valueerror/9253
|
I have a Python list named unique_list that contains new words that will be added to my tokenizer using tokenizer.add_tokens. However, when I run my code I’m getting the following error:
File "/home/kaan/anaconda3/envs/env_backup/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 937, in add_tokens
if not new_tokens:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
When I tested with a test array that contains 10 random words, it worked fine but the larger unique_list is causing a problem.
What am I doing wrong here?
|
Not sure if you already solved this issue, but I stumbled upon it today.
Looking closely at the error message, looks like the add_tokens() method expects the new_tokens passed to it as a python list rather than an numpy array. Converting new_tokens from numpy array to a list and then passing it resolved the issue.
added_tokens = tokenizer.add_tokens(new_tokens.tolist())
| 0 |
huggingface
|
🤗Tokenizers
|
Import distilbert-base-uncased tokenizer to an android app along with the tflite model
|
https://discuss.huggingface.co/t/import-distilbert-base-uncased-tokenizer-to-an-android-app-along-with-the-tflite-model/3234
|
I have converted the model (.h5) to tflite using:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("/models/tflite_models/5th_Jan/distilbert_sms_60_5_jan.tflite", "wb").write(tflite_model)
but I also need tokenizer to run the model locally on the android app (independent of internet availability).
According to the articles on internet and question answered on stackoverflow How to tokenize input text in android studio to process in NLP model? 4 we need json file of tokenizers to tokenize words in new inputs.
When I run the following code:
import json
with open( 'android/word_dict.json' , 'w' ) as file:
json.dump( tokenizer.word_index , file )
The following error comes:
AttributeError: 'DistilBertTokenizer' object has no attribute 'word_index
I am unable to find solution to use tokenizer of distilbert-base-uncased in android app. Any help will be appreciated. Thanks.
|
Excuse me, did you solve it?
| 0 |
huggingface
|
🤗Tokenizers
|
ERROR?why encoding [MASK] before ‘.’ would gain a idx 13?
|
https://discuss.huggingface.co/t/error-why-encoding-mask-before-would-gain-a-idx-13/2897
|
I found that if I use a BERT-style tokenizer and encode a sentence with [MASK] before ‘.’, I get an additional idx 13
like this:
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
the ‘input_ids’ would be:
tensor([[ 2, 14, 1057, 16, 714, 25, 4, 13, 9, 3]])
4 refers to [MASK], 9 refers to ‘.’, but 13 refers to None
This happens whenever [MASK] comes right before ‘.’
Is there something wrong?
|
You don’t generally mask words by replacing the text of the word with “[MASK]”. You usually encode the text first, “The capital of France is Paris.”, and then replace the token for Paris with the mask token. Perhaps that’s the reason for the addition of token 13?
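A small sketch of what that looks like in practice (the piece lookup assumes the word is segmented the same way on its own as it is inside the sentence, which holds for word-initial pieces here):
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

enc = tokenizer("The capital of France is Paris.", return_tensors="pt")
input_ids = enc["input_ids"][0].tolist()

# tokenize the word on its own to find out which piece id(s) it becomes in this vocabulary
word_pieces = tokenizer("Paris", add_special_tokens=False)["input_ids"]

# find that piece sequence in the sentence and overwrite it with [MASK] ids
for start in range(len(input_ids) - len(word_pieces) + 1):
    if input_ids[start : start + len(word_pieces)] == word_pieces:
        input_ids[start : start + len(word_pieces)] = [tokenizer.mask_token_id] * len(word_pieces)
        break

print(tokenizer.decode(input_ids))  # no stray ids this way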
| 0 |
huggingface
|
🤗Tokenizers
|
How to know if a subtoken is a word or part of a word?
|
https://discuss.huggingface.co/t/how-to-know-if-a-subtoken-is-a-word-or-part-of-a-word/923
|
For example, using BERT in a token classification task, I get something like this …
[('Darüber', 17), ('hinaus', 17), ('fanden', 17), ('die', 17), ('Er', 17), ('##mitt', -100), ('##ler', -100), ('eine', 17), ('Ver', 17), ('##legung', -100), ('##sli', -100), ('##ste', -100), (',', 17), ('die', 17), ('bestätigt', 17), (',', 17), ('dass', 17), ('Dem', 8), ('##jan', -100), ('##juk', -100), ('am', 17), ('27', 17), ('.', -100), ('März', 17), ('1943', 17), ('an', 17), ('die', 17), ('Dienst', 17), ('##stelle', -100), ('So', 0), ('##bi', -100), ('##bor', -100), ('ab', 17), ('##kom', -100), ('##mand', -100), ('##iert', -100), ('wurde', 17), ('.', -100)]
… in the format of (sub-token, label id).
Is there a way I can automatically know that “##mitt” and “##ler” are part of “Er” (thus making up the word “Ermittler”) that would work across all tokenizers (not just BERT)?
|
what do you mean by “automatically know”?
I guess you already know that ##xx tokens are continuation tokens.
I don’t think it is possible to detect from (‘Er’, 17) that it has a continuation.
If you feed the data into an untrained Bert model, [I think] the embedding layer will create an embedding vector for (‘Er’,17) that does not depend on the continuation tokens.
If you feed the data into a trained Bert model, the embedding layer might create different embedding vectors for different instances of (‘Er’, 17), depending on their context, which includes depending on any continuation tokens.
There is a nice tutorial by Chris McCormick here https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/ 25 that discusses embeddings in more detail.
If you were asking about something else, please clarify.
[I am not an expert, and I could be wrong]
| 0 |
huggingface
|
🤗Tokenizers
|
Batch encode plus in Rust Tokenizers
|
https://discuss.huggingface.co/t/batch-encode-plus-in-rust-tokenizers/12722
|
In python, BertTokenizerFast has batch_encode_plus, is there a similar method in rust?
|
I will assume due to the lack of reply that there’s no way to do this.
| 0 |
huggingface
|
🤗Tokenizers
|
Adding new tokens while preserving tokenization of adjacent tokens
|
https://discuss.huggingface.co/t/adding-new-tokens-while-preserving-tokenization-of-adjacent-tokens/12604
|
I’m trying to add some new tokens to BERT and RoBERTa tokenizers so that I can fine-tune the models on a new word. The idea is to fine-tune the models on a limited set of sentences with the new word, and then see what it predicts about the word in other, different contexts, to examine the state of the model’s knowledge of certain properties of language.
In order to do this, I’d like to add the new tokens and essentially treat them like new ordinary words (that the model just hasn’t happened to encounter yet). They should behave exactly like normal words once added, with the exception that their embedding matrices will be randomly initialized and then be learned during fine-tuning.
However, I’m running into some issues doing this. In particular, the tokens surrounding the newly added tokens do not behave as expected when initializing the tokenizer with do_basic_tokenize=False. The problem can be observed in the following example; in the case of BERT, the period following the newly added token is not tokenized as a subword (i.e., it is tokenized as . instead of as the expected ##.), and in the case of RoBERTa, the word following the newly added subword is treated as though it does not have a preceding space (i.e., it is tokenized as a instead of as Ġa.
from transformers import BertTokenizer, RobertaTokenizer
new_word = 'mynewword'
bert = BertTokenizer.from_pretrained('bert-base-uncased', do_basic_tokenize = False)
bert.tokenize('mynewword') # does not exist yet
# ['my', '##ne', '##w', '##word']
bert.tokenize('testing.')
# ['testing', '##.']
bert.add_tokens(new_word)
bert.tokenize('mynewword') # now it does
# ['mynewword']
bert.tokenize('mynewword.')
# ['mynewword', '.']
roberta = RobertaTokenizer.from_pretrained('roberta-base', do_basic_tokenize = False)
roberta.tokenize('mynewword') # does not exist yet
# ['my', 'new', 'word']
roberta.tokenize('A testing a')
# ['A', 'Ġtesting', 'Ġa']
roberta.add_tokens(new_word)
roberta.tokenize('mynewword') # now it does
# ['mynewword']
roberta.tokenize('A mynewword a')
# ['A', 'mynewword', 'a']
Is there a way for me to add the new tokens while getting the behavior of the surrounding tokens to match what it would be if there were not an added token there? I feel like it’s important because the model could end up learning that (for instance), the new token can occur before ., while most others can only occur before ##. That seems like it would affect how it generalizes. In addition, I could turn on basic tokenization to solve the BERT problem here, but that wouldn’t really reflect the full state of the model’s knowledge, since it collapses the distinction between different tokens. And that doesn’t help with the RoBERTa problem, which is still there regardless.
In addition, I’d ideally be able to add the RoBERTa token as Ġmynewword, but I’m assuming that as long as it never occurs as the first word in a sentence, that shouldn’t matter.
|
Hey @mawilson if you want to add new tokens to the vocabulary, then in general you’ll need to resize the embedding layers with
model.resize_token_embeddings(len(tokenizer))
You can see a full example in the docs 3 - does that help solve your problem?
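For completeness, a minimal sketch of the usual pattern (roberta-base here is just an example checkpoint):
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

num_added = tokenizer.add_tokens(["mynewword"])
if num_added > 0:
    # grow the embedding matrix so the new ids get (randomly initialised) vectors to learn during fine-tuning
    model.resize_token_embeddings(len(tokenizer))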
| 0 |
huggingface
|
🤗Tokenizers
|
Implementing custom tokenizer components (normalizers, processors)
|
https://discuss.huggingface.co/t/implementing-custom-tokenizer-components-normalizers-processors/12371
|
I’m wondering if there is an easy way to tweak the individual components of a tokenizer. Specifically, I’d like to implement a custom normalizer and post-processor.
Just to provide some context, I’m trying to train a Danish tokenizer. Danish has a lot of compound nouns (e.g., the Danish translation of “house owner” is “husejer”, with “hus” being “house” and “ejer” being “owner”), so a tokenizer should split these accordingly. A standard BPE or WordPiece can deal with this just fine.
The issue is that for some compound nouns, we impose an “s” in between the two words. For instance, “birthday greeting” is “fødselsdagshilsen”, with “fødselsdag” being “birthday” and “hilsen” being “greeting”. This messes up the tokenizer completely, tokenizing it as [‘fødselsdag’, ‘shi’, ‘l’, ‘sen’] rather than the ideal [‘fødselsdag’, ‘s’, ‘hilsen’].
I think I can solve it by imposing a new special token, <conn>, and at the normaliser stage I check if the word is of the form <word1>s<word2> where <word1> and <word2> are known words, and if so, replaces the “s” by <conn>. At the post-processing stage, I then replace the <conn> instances with “s” again.
Long story short, is there a way to simply subclass the normaliser/processor classes to implement such behaviours?
|
For anyone else looking, this can be done, and it’s answered in this question:
How to add additional custom pre-tokenization processing? 🤗Tokenizers
I would like to add a few custom functions for pre-tokenization. For example, I would like to split numerical text from any non-numerical test.
Eg
‘1000mg’ would become [‘1000’, ‘mg’].
I am trying to figure out the proper way to do this for the python binding; I think it may be a bit tricky since its a binding for the original rust version.
I am looking at the pretokenizer function
/huggingface/tokenizers/blob/2ccd16bf5c3dd97759d7bdf5229e2feeba314b4a/bindings/python/py_src/tokenizers/pre_to…
| 1 |
huggingface
|
🤗Tokenizers
|
Does T5Tokenizer support the Greek language?
|
https://discuss.huggingface.co/t/does-t5tokenizer-support-the-greek-language/12224
|
Does T5Tokenizer support the Greek language?
When I run the 3 lines of code below, then the input_ids are just 2 and 3 which correspond to the unknown token and the underscore respectively. This is the same for any input text of Greek letters.
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-small")
input_ids = tokenizer("Γειά σου Κόσμε", return_tensors="pt").input_ids
|
Hi,
T5 itself was trained on English data only. However, there’s a multilingual variant called mT5 2 which supports Greek.
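For example (a quick sketch; the mT5 tokenizer needs the sentencepiece package installed):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
input_ids = tokenizer("Γειά σου Κόσμε", return_tensors="pt").input_ids
print(tokenizer.convert_ids_to_tokens(input_ids[0]))  # real subword pieces instead of <unk>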
| 0 |
huggingface
|
🤗Tokenizers
|
How padding in huggingface tokenizer works?
|
https://discuss.huggingface.co/t/how-padding-in-huggingface-tokenizer-works/12161
|
I tried following tokenization example:
tokenizer = BertTokenizer.from_pretrained(MODEL_TYPE, do_lower_case=True)
sent = "I hate this. Not that.",
_tokenized = tokenizer(sent, padding=True, max_length=20, truncation=True)
print(tokenizer.decode(_tokenized['input_ids'][0]))
print(len(_tokenized['input_ids'][0]))
The output was:
[CLS] i hate this. not that. [SEP]
9
Notice the parameter to tokenizer: max_length=20. How can I make Bert tokenizer to append 11 [PAD] tokens to this sentence to make it total 20?
|
You need to change padding to "max_length". The default behavior (with padding=True) is to pad to the length of the longest sentence in the batch, meanwhile sentences longer than specified length are getting truncated to the specified max_length. In your example you have only one sentence, thus there’s no padding (the only sentence is the longest one). Your sentence is shorter than max length, so there’s no truncation either.
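For example (same setup as above, only the padding argument changes):
_tokenized = tokenizer(sent, padding="max_length", max_length=20, truncation=True)
print(tokenizer.decode(_tokenized['input_ids'][0]))
# [CLS] i hate this. not that. [SEP] [PAD] [PAD] ... padded out to 20 tokens
print(len(_tokenized['input_ids'][0]))  # 20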
| 1 |
huggingface
|
🤗Tokenizers
|
How to configure TokenizerFast for AutoTokenizer
|
https://discuss.huggingface.co/t/how-to-configure-tokenizerfast-for-autotokenizer/11353
|
Hi there,
I made a custom model and tokenizer for Retribert architecture. For some reason, when using AutoTokenizer.from_pretrained method, the tokenizer does not initialize model_max_len tokenizer attribute to 512 but to a default of a very large integer. If I invoke AutoTokenizer.from_pretrained with an additional max_len=512 kwarg then the model_max_len gets set to 512 as expected. However, as you might expect I don’t want users to pass this additional kwarg but would prefer to somehow set this value by default.
I figured out that TokenizerFast gets initialized from tokenizer.json and I attempted to add model_max_len attribute to tokenizer.json. However, as soon as I do that AutoTokenizer complains that it can not load the JSON file any longer. Perhaps this property can’t be set via tokenizer.json or perhaps I am not adding it at the right JSON node.
Any ideas on how to set model_max_len tokenizer property so that AutoTokenizer picks it up without additional kwargs?
Best,
Vladimir
|
Hi, I figured this one out; leaving a small note if you stumble on this issue yourself. All you need to do is add tokenizer_config.json file with additional configs for the tokenizer. I added a simple tokenizer_config.json with the following contents:
{"model_max_length": 512}
That’s all.
Cheers,
Vladimir
| 1 |
huggingface
|
🤗Tokenizers
|
Mask only specific words
|
https://discuss.huggingface.co/t/mask-only-specific-words/173
|
What would be the best strategy to mask only specific words during the LM training?
My aim is to mask only words of interest which I have previously collected in a list.
The issue arises since the tokenizer not only splits a single word into multiple tokens, but also adds special characters if the word does not occur at the beginning of a sentence.
E.g.:
The word “Valkyria”:
at the beginning of a sentence gets split as [‘V’, ‘alky’, ‘ria’] with corresponding IDs: [846, 44068, 6374].
while in the middle of a sentence as [‘ĠV’, ‘alky’, ‘ria’] with corresponding IDs: [468, 44068, 6374],
This is just one of the issues forcing me to have multiple entries in my list of to-be-filtered IDs.
I have already had a look at the mask_tokens() function into the DataCollatorForLanguageModeling class, which is the function actually masking the tokens during each batch, but I cannot find any efficient and smart way to mask only specific words and their corresponding IDs.
|
When using the “fast” variant of the tokenizers available in huggingface/transformers 1, whenever you encode some text, you get back a BatchEncoding 3.
This BatchEncoding provides some helpful mappings that we can use in this kind of situation. So, you should be able to:
Find the word associated with any token using token_to_word 8. This method returns the index of the word in the input sequence.
Once you know the word’s index, you can actually retrieve its span with word_to_chars 2. This will let you extract the word from the input sequence.
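Putting that together, a rough sketch with a fast tokenizer (roberta-base and the word list here are just placeholders):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # any "fast" tokenizer
words_to_mask = {"Valkyria"}                               # placeholder: your list of words of interest

text = "The game Valkyria Chronicles was released in 2008."
enc = tokenizer(text)

for token_index in range(len(enc["input_ids"])):
    word_index = enc.token_to_word(token_index)
    if word_index is None:  # special tokens map to no word
        continue
    span = enc.word_to_chars(word_index)
    if text[span.start:span.end] in words_to_mask:
        enc["input_ids"][token_index] = tokenizer.mask_token_id  # mask every piece of the word

print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))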
| 0 |
huggingface
|
🤗Tokenizers
|
ArrowInvalid: Column 3 named attention_mask expected length 1000 but got length 1076
|
https://discuss.huggingface.co/t/arrowinvalid-column-3-named-attention-mask-expected-length-1000-but-got-length-1076/6904
|
I’m trying to evaluate a QA model on a custom dataset. This is how I prepared the velidation features:
def prepare_validation_features(examples):
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possible giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# We keep the example_id that gave us this feature and we will store the offset mappings.
tokenized_examples["example_id"] = []
for i in range(len(tokenized_examples["input_ids"])):
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
context_index = 1 if pad_on_right else 0
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
# Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
# position is part of the context or not.
tokenized_examples["offset_mapping"][i] = [
(o if sequence_ids[k] == context_index else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
But as I try to apply the function to my dataset:
test_features = test_dataset.map(
prepare_validation_features,
batched=True,
)
at a certain moment (more or less 23% of the process) it return me this error:
23%
5/22 [00:19<00:51, 3.04s/ba]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-156-6658d0dc57be> in <module>()
1 test_features = test_dataset.map(
2 prepare_validation_features,
----> 3 batched=True,
4 )
8 frames
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 3 named attention_mask expected length 1000 but got length 1076
How can I fix it?
|
Hello! I have the same issue. Did you fix it?
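(For anyone landing here later: a likely cause, assuming the usual question-answering preprocessing, is that the batched map returns more rows than it receives because of return_overflowing_tokens, while the untouched original columns keep their old length. Dropping the original columns in the same call usually resolves the length mismatch, e.g.:)
test_features = test_dataset.map(
    prepare_validation_features,
    batched=True,
    remove_columns=test_dataset.column_names,  # drop the original columns whose row count no longer matches
)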
| 0 |
huggingface
|
🤗Tokenizers
|
Is that possible to embed the tokenizer into the model to have it running on GCP using TensorFlow Serving?
|
https://discuss.huggingface.co/t/is-that-possible-to-embed-the-tokenizer-into-the-model-to-have-it-running-on-gcp-using-tensorflow-serving/10532
|
Hello!
Thanks in advance for your help!
At the beginning I’ve created an issue on github with this question:
Question: Is that possible to embed a tokenizer into the model for tensorflow serving? · Issue #13843 · huggingface/transformers · GitHub 6 and I’ve got a suggestion to tag @Rocketknight1 who is an expert in TensorFlow for questions like this.
I already using TF-BERT model (uncased version) with tensorflow serving. I found that I need to modify some inputs to get something like that:
callable = tf.function(self.model.call)
concrete_function = callable.get_concrete_function([
tf.TensorSpec([None, self.max_input_length], tf.int32, name="input_ids"),
tf.TensorSpec([None, self.max_input_length], tf.int32, name="attention_mask")
])
self.model.save(save_directory, signatures=concrete_function)
Also I found the following example (blog/tf-serving.md at master · huggingface/blog · GitHub 6), that allows me to change input signature of a model especially for serving:
from transformers import TFBertForSequenceClassification
import tensorflow as tf
# Creation of a subclass in order to define a new serving signature
class MyOwnModel(TFBertForSequenceClassification):
# Decorate the serving method with the new input_signature
# an input_signature represents the name, the data type and the shape of an expected input
@tf.function(input_signature=[{
"input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
"attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
"token_type_ids": tf.TensorSpec((None, None), tf.int32, name="token_type_ids"),
}])
def serving(self, inputs):
# call the model to process the inputs
output = self.call(inputs)
test_out = self.serving_output(output)
# return the formated output
return test_out
# Instantiate the model with the new serving method
model = MyOwnModel.from_pretrained("bert-base-cased")
# save it with saved_model=True in order to have a SavedModel version along with the h5 weights.
model.save_pretrained("/tmp/my_model6", saved_model=True)
In my current workflow I still need Python, because I have to prepare the model input with the tokenizer. That means I currently need a REST service that receives the text request and then forwards it to the serving instance. Now that I have switched to GCP AI Platform, I think it is reasonable and worth trying to embed the tokenizer inside the model and let GCP AI Platform serve it.
I made some attempts, and it turns out to be more difficult than it looks.
The goal is to have the model together with the tokenizer on GCP AI Platform and get rid of the Python REST API service, because all the other infrastructure is written in Erlang/Rust. I need to send raw text to the model serving instance (not an object with input_ids, attention_mask, etc.) and get back logits, or softmaxed logits.
So could someone please tell me whether this is possible and, if it is, provide some guidance on how to achieve it?
Thanks a lot for your help!
/Dmitriy
|
I have been looking for something like this but couldn’t find it either. Based on the blog you found blog/tf-serving.md at master · huggingface/blog · GitHub 26, the author mentions it as a possible next-step improvement, so I’m not sure whether it is already possible.
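For reference, one direction that may work (not an official Hugging Face recipe) is to do the tokenization with TF-native ops from the tensorflow_text package, so it can be saved inside the SavedModel and served by GCP AI Platform with raw strings as input. A rough sketch, assuming tensorflow_text's BertTokenizer accepts a path to BERT's vocab.txt and that 101/102/0 are the [CLS]/[SEP]/[PAD] ids of that vocab:
import tensorflow as tf
import tensorflow_text as text  # TF-native ops, so they serialize into the SavedModel

class TextInModel(tf.Module):
    def __init__(self, hf_model, vocab_path, max_len=128):
        super().__init__()
        self.model = hf_model
        self.max_len = max_len
        self.tokenizer = text.BertTokenizer(vocab_path, lower_case=True)  # assumed to accept a vocab.txt path

    @tf.function(input_signature=[tf.TensorSpec([None], tf.string, name="text")])
    def serving(self, texts):
        tokens = self.tokenizer.tokenize(texts).merge_dims(-2, -1)  # ragged [batch, subwords]
        tokens = tokens[:, : self.max_len - 2]                      # leave room for [CLS]/[SEP]
        ids, _ = text.combine_segments([tokens], start_of_sequence_id=101, end_of_segment_id=102)
        input_ids, attention_mask = text.pad_model_inputs(ids, max_seq_length=self.max_len)  # pads with 0
        out = self.model({"input_ids": tf.cast(input_ids, tf.int32),
                          "attention_mask": tf.cast(attention_mask, tf.int32)})
        return {"logits": out[0]}

# export = TextInModel(TFBertForSequenceClassification.from_pretrained("bert-base-uncased"), "vocab.txt")
# tf.saved_model.save(export, "export/1", signatures={"serving_default": export.serving})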
| 0 |
huggingface
|
🤗Tokenizers
|
Added Tokens Not Decoding with Spaces
|
https://discuss.huggingface.co/t/added-tokens-not-decoding-with-spaces/10883
|
Hi All,
My goal is to add a set of starting tokens to a pre-trained AlbertTokenizerFast.
In the Albert Pre-Trained Vocab (SentencePiece Model), all start tokens are preceded with the meta-symbol: ▁ (e.g. ▁hamburger).
I tried adding tokens, prefixed with the meta symbol:
new_tokens = [AddedToken("▁hamburger",), AddedToken("▁pizza")]
num_added_tokens = tokenizer.add_tokens(new_tokens)
However, as this forum post shows, input text to AddedToken is treated literally; so manually adding the meta-symbol prefixes doesn’t achieve the desired effect.
Instead, I tried using the single_word parameter:
new_tokens = [AddedToken("hamburger", single_word=True, lstrip=True), AddedToken("pizza", single_word=True, lstrip=True)]
num_added_tokens = tokenizer.add_tokens(new_tokens)
This solution successfully encodes the new tokens where hamburger is being encoded by token 30001:
tokenizer('This hamburger tastes great')
>> [2, 15, 30001, 53, 8, 345,3]
However, when I try to decode these ids, no space appears between “this” and “hamburger”:
tokenizer.decode([2, 15, 30001, 53, 8, 345,3])
>> ('Thishamburger tastes great')
I was wondering if anybody had any thoughts about how to fix this.
|
Does the same occur when setting lstrip=False when defining the new tokens?
| 0 |
huggingface
|
🤗Tokenizers
|
Tokenizer.encode not returning encodings
|
https://discuss.huggingface.co/t/tokenizer-encode-not-returning-encodings/10616
|
How do you get token encodings? The method isn’t working in this case.
I need the token offsets in order to translate my labels into a normal list of token tags (my labels are in the format [{start_index: int, end_index: int, tag: str}, …]).
Thank you!
Maybe it’s because this is a pre-trained fast tokenizer?
|
You need to use the tokenizer directly on your text, not the encode method:
tokenizer("hello world")
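If the goal is to get character offsets for aligning span labels, here is a small sketch with a fast tokenizer (the checkpoint name is just an example):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # any fast tokenizer works the same way
enc = tokenizer("hello world", return_offsets_mapping=True)
print(enc["input_ids"])       # token ids
print(enc["offset_mapping"])  # (start, end) character spans per token, (0, 0) for special tokens
print(enc.word_ids())         # original word index per token, None for special tokens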
| 0 |
huggingface
|
🤗Tokenizers
|
There is no 0.11.0 tokenizers in pip
|
https://discuss.huggingface.co/t/there-is-no-0-11-0-tokenizers-in-pip/10381
|
I can only find 0.10.3 through pip install. Am I doing something wrong or I can only install 0.11.0 through source?
Thank you!
|
There is no version 0.11.0 of Tokenizers that has been released.
| 0 |
huggingface
|
🤗Tokenizers
|
Using a fixed vocab.txt with AutoTokenizer?
|
https://discuss.huggingface.co/t/using-a-fixed-vocab-txt-with-autotokenizer/9919
|
Hello,
I have a special case where I want to use a hand-written vocab with a notebook that’s using AutoTokenizer but I can’t find a way to do this (it’s for a non-language sequence problem, where I’m pretraining very small models with a vocab designed to optimize sequence length, vocab size, and legibility).
If it’s not possible, what’s the best way to use my fixed vocab? In the past I used BertWordPieceTokenizer, loaded directly with the vocab.txt path, but I don’t know how to use this approach with newer Trainer-based approach in the notebook.
UPDATE: More specifically, if I try my old method of using BertWordPieceTokenizer(vocab='vocab.txt') it fails later with:
TypeError Traceback (most recent call last)
/tmp/ipykernel_3379/2783002494.py in <module>
3 # Setup train dataset if `do_train` is set.
4 print('Creating train dataset...')
----> 5 train_dataset = get_dataset(model_data_args, tokenizer=tokenizer, evaluate=False) if training_args.do_train else None
6
7 # Setup evaluation dataset if `do_eval` is set.
/tmp/ipykernel_3379/2486475202.py in get_dataset(args, tokenizer, evaluate)
32 if args.line_by_line:
33 # Each example in data file is on each line.
---> 34 return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path,
35 block_size=args.block_size)
36
~/anaconda3/envs/torch_17/lib/python3.8/site-packages/transformers/data/datasets/language_modeling.py in __init__(self, tokenizer, file_path, block_size)
133 lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]
134
--> 135 batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)
136 self.examples = batch_encoding["input_ids"]
137 self.examples = [{"input_ids": torch.tensor(e, dtype=torch.long)} for e in self.examples]
TypeError: 'BertWordPieceTokenizer' object is not callable
The notebook I’m trying to use is from: github.com/gmihaila/ml_things.git 1
|
Okay, so obviously I’m not a Python guy… I see there’s some insanity in the language that allows class instances to be callable… (why, Python… WHY???) …so I’m a bit stumped, but presumably it has to do with the fact that BertWordPieceTokenizer is not a subclass of PreTrainedTokenizer (which has the crazy attribute of being callable).
I’m really stuck. I’d just like to plug in my custom tokenizer, but it seems that when I hit “LineByLineTextDataset”, I’m going to hit the same callable error. I tried running with the default tokenization and although my vocab went down from 1073 to 399 tokens, my sequence length went from 128 to 833 tokens. Hence the desire to load my tokenizer from the hand-written vocab.
Aack!
UPDATE: Okay, I hadn’t realized I could do it with BertTokenizerFast. I haven’t totally verified that this is working, but so far it looks correct.
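For anyone landing here with the same need, a minimal sketch of loading a hand-written vocab into a fast tokenizer (the file path is a placeholder); since PreTrainedTokenizerFast instances are callable, this also works with LineByLineTextDataset and the Trainer:
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast(vocab_file="vocab.txt", do_lower_case=False)
print(tokenizer("A B C D"))  # input_ids / token_type_ids / attention_mask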
| 0 |
huggingface
|
🤗Tokenizers
|
Train wordpiece from scratch
|
https://discuss.huggingface.co/t/train-wordpiece-from-scratch/9843
|
Hi,
I am pre training a Bert model from scratch. For that I first need to train a wordpiece tokenizer, I am using BertWordPieceTokenizer for this.
My question:
Should I train the tokenizer on the whole corpus, which is huge, or is training it on a sample enough?
Is there a way to tell the tokenizer to train only on a sample?
Thanks.
|
Kamel:
Should I train the tokenizer on the whole corpus which is huge
Yes. With HuggingFace Tokenizers 5, it takes seconds. From the README: “Takes less than 20 seconds to tokenize a GB of text on a server’s CPU”.
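A minimal training sketch (file names and hyperparameters are placeholders), passing the whole corpus as a list of files:
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(
    files=["corpus_part1.txt", "corpus_part2.txt"],
    vocab_size=30_522,
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model("wordpiece-tokenizer")  # writes vocab.txt; the directory must already exist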
| 0 |
huggingface
|
🤗Tokenizers
|
How to save a fast tokenizer using the transformer library and then load it using Tokenizers?
|
https://discuss.huggingface.co/t/how-to-save-a-fast-tokenizer-using-the-transformer-library-and-then-load-it-using-tokenizers/9567
|
I want to avoid importing the transformers library during inference with my model; for that reason, I want to export the fast tokenizer and later import it using the Tokenizers library.
On the Transformers side, this is as easy as tokenizer.save_pretrained("tok"); however, when loading it from Tokenizers, I am not sure what to do.
from tokenizers import Tokenizer
Tokenizer.from_file("tok/tokenizer.json")
This seems to work, but it ignores the two other files in the directory (tokenizer_config.json and special_tokens_map.json), so I believe it won’t give me the same tokens.
Is there a way to import a tokenizer using the whole directory files ? Or better, can we import a pretrained fast tokenizer from the hub ?
Thanks
|
you can load tokenizer from directory with from_pretrained method:
tokenizer = Tokenizer.from_pretrained("your_tok_directory")
| 0 |
huggingface
|
🤗Tokenizers
|
Cannot create an identical PretrainedTokenizerFast object from a Tokenizer created by tokenizers library
|
https://discuss.huggingface.co/t/cannot-create-an-identical-pretrainedtokenizerfast-object-from-a-tokenizer-created-by-tokenizers-library/9317
|
I can’t seem to create a “PreTrainedTokenizerFast” object from my original tokenizers tokenizer object that has the same properties. This is the code for a byte-pair tokenizer I have experimented with. The resulting fast tokenizer does not have a [PAD] token, and does not have any special tokens at all.
tokenizer = ByteLevelBPETokenizer()
tokenizer.preprocessor = pre_tokenizers.BertPreTokenizer()
tokenizer.normalizer = normalizers.BertNormalizer()
tokenizer.train_from_iterator(docs, vocab_size=16_000, min_frequency=15, special_tokens = ["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer._tokenizer.post_processor = processors.BertProcessing(
("[SEP]", tokenizer.token_to_id("[SEP]")),
("[CLS]", tokenizer.token_to_id("[CLS]")),
)
tokenizer.enable_truncation(max_length=256)
tokenizer.enable_padding(pad_id=3, pad_token="[PAD]")
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
The result of printing the fast_tokenizer is:
PreTrainedTokenizerFast(name_or_path='', vocab_size=16000, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={})
in which model_max_len and special_tokens are wrong. Also, there is no pad_token or pad_token_id in the fast_tokenizer object (the warning for pad_token, for example: Using pad_token, but it is not set yet.). Have I done anything wrong, or is this not supposed to happen?
The versions of libraries I’m using:
'tokenizers 0.10.3',
'transformers 4.10.0.dev0'
I have also tested with these versions:
'tokenizers 0.10.3',
'transformers 4.9.2'
|
When using PreTrainedTokenizerFast directly and not one of the subclasses, you have to manually set all the attributes specific to Transformers: the model_max_length as well as all the special tokens. The reason is that the Tokenizer has no concept of associated model (so it doesn’t know the model max length) and even if it has a concept of special tokens, it doesn’t know the differences between them, so you have to indicate which one is the pad token, which one the mask token etc.
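Concretely, that can look like the sketch below (the token names and length follow the snippet above):
from transformers import PreTrainedTokenizerFast

fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    model_max_length=256,
    unk_token="[UNK]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    pad_token="[PAD]",
    mask_token="[MASK]",
)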
| 0 |
huggingface
|
🤗Tokenizers
|
WordLevel error: Missing [UNK] token from the vocabulary
|
https://discuss.huggingface.co/t/wordlevel-error-missing-unk-token-from-the-vocabulary/5107
|
Hi, I am trying to train a basic Word Level tokenizer based on a file data.txt containing
5174 5155 4749 4814 4832 4761 4523 4999 4860 4699 5024 4788 [UNK]
When I run my code
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
tokenizer = Tokenizer(WordLevel(unk_token='[UNK]'))
tokenizer.train(files=['data.txt'])
tokenizer.encode('5155')
I get the error
Exception: WordLevel error: Missing [UNK] token from the vocabulary
Why is it still missing despite having [UNK] in data.txt and also setting unk_token='[UNK]'?
Any help is very appreciated!
|
Hi Athena, I’m having the same issue… did you find the root of the problem?
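In case it helps whoever finds this thread: the usual cause is that no trainer (and no pre-tokenizer) is configured, so [UNK] never makes it into the trained vocabulary; having the literal string [UNK] inside data.txt is not enough. A minimal sketch of what typically fixes it:
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordLevelTrainer

tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()                 # split each line into individual tokens
trainer = WordLevelTrainer(special_tokens=["[UNK]"])   # declare [UNK] as a special token
tokenizer.train(files=["data.txt"], trainer=trainer)
print(tokenizer.encode("5155").ids)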
| 0 |
huggingface
|
🤗Tokenizers
|
Extracting embedding values of NLP pertained models from tokenized strings
|
https://discuss.huggingface.co/t/extracting-embedding-values-of-nlp-pertained-models-from-tokenized-strings/9287
|
I am using huggingface’s pipeline to extract embeddings of words in a sentence. As far as I know, the sentence is first turned into a tokenized string, and the length of the tokenized string might not be equal to the number of words in the original sentence. I need to retrieve the embedding of a particular word in a sentence.
For example, here is my code:
#https://discuss.huggingface.co/t/extracting-token-embeddings-from-pretrained-language-models/6834/6
from transformers import pipeline, AutoTokenizer, AutoModel
import numpy as np
import re
model_name = "xlnet-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model.resize_token_embeddings(len(tokenizer))
model_pipeline = pipeline('feature-extraction', model=model_name, tokenizer=tokenizer)
def find_wordNo_sentence(word, sentence):
print(sentence)
splitted_sen = sentence.split(" ")
print(splitted_sen)
index = splitted_sen.index(word)
for i,w in enumerate(splitted_sen):
if(word == w):
return i
print("not found") #0 base
def return_xlnet_embedding(word, sentence):
word = re.sub(r'[^\w]', " ", word)
word = " ".join(word.split())
sentence = re.sub(r'[^\w]', ' ', sentence)
sentence = " ".join(sentence.split())
id_word = find_wordNo_sentence(word, sentence)
try:
data = model_pipeline(sentence)
n_words = len(sentence.split(" "))
#print(sentence_emb.shape)
n_embs = len(data[0])
print(n_embs, n_words)
print(len(data[0]))
if (n_words != n_embs):
"There is extra tokenized word"
results = data[0][id_word]
return np.array(results)
except:
return "word not found"
return_xlnet_embedding('your', "what is your name?")
Then the output is:
what is your name [‘what’, ‘is’, ‘your’, ‘name’] 6 4 6
So the length of the tokenized string that is fed to the pipeline is two more than the number of words. How can I find which one (among these 6 values) is the embedding of my word?
|
More specifically, when I call model_pipeline(sentence), how can I tell how the sentence was tokenized? Some words in the sentence might be split into several tokens, so I need to map them back to the original words.
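A sketch of one way to answer this with the fast tokenizer (same checkpoint as above): word_ids() maps every token position, and therefore every feature vector returned by the pipeline, back to the original word. The example output is only indicative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
sentence = "what is your name"
enc = tokenizer(sentence)
print(enc.tokens())    # includes the special tokens appended by XLNet
print(enc.word_ids())  # e.g. [0, 1, 2, 3, None, None]; None marks special tokens

word_index = sentence.split().index("your")
positions = [i for i, w in enumerate(enc.word_ids()) if w == word_index]
# `positions` are the indices into the pipeline's output vectors that belong to "your"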
| 0 |
huggingface
|
🤗Tokenizers
|
Tokenizer shrinking recipes
|
https://discuss.huggingface.co/t/tokenizer-shrinking-recipes/8564
|
As I’ve been building tiny models for hf-internal-testing (HF Internal Testing) 4 I need to shrink/truncate the original tokenizers and their vocabs in order to get truly tiny models, and it often took quite a long time to figure out. So I reached out for help and got several great recipes, which I thought I’d share here in case others need something similar.
Anthony Moi’s version
@anthony’s tokenizer shrinker:
import json
from transformers import AutoTokenizer
from tokenizers import Tokenizer
vocab_keep_items = 5000
mname = "microsoft/deberta-base"
tokenizer = AutoTokenizer.from_pretrained(mname, use_fast=True)
assert tokenizer.is_fast, "This only works for fast tokenizers."
tokenizer_json = json.loads(tokenizer._tokenizer.to_str())
vocab = tokenizer_json["model"]["vocab"]
if tokenizer_json["model"]["type"] == "BPE":
new_vocab = { token: i for token, i in vocab.items() if i < vocab_keep_items }
merges = tokenizer_json["model"]["merges"]
new_merges = []
for i in range(len(merges)):
a, b = merges[i].split()
new_token = "".join((a, b))
if a in new_vocab and b in new_vocab and new_token in new_vocab:
new_merges.append(merges[i])
tokenizer_json["model"]["merges"] = new_merges
elif tokenizer_json["model"]["type"] == "Unigram":
new_vocab = vocab[:vocab_keep_items]
elif tokenizer_json["model"]["type"] == "WordPiece" or tokenizer_json["model"]["type"] == "WordLevel":
new_vocab = { token: i for token, i in vocab.items() if i < vocab_keep_items }
else:
raise ValueError(f"don't know how to handle {tokenizer_json['model']['type']}")
tokenizer_json["model"]["vocab"] = new_vocab
tokenizer._tokenizer = Tokenizer.from_str(json.dumps(tokenizer_json))
tokenizer.save_pretrained(".")
LysandreJik’s version
Using the recently added train_new_from_iterator suggested by @lysandre
from transformers import AutoTokenizer
mname = "microsoft/deberta-base" # or any checkpoint that has a fast tokenizer.
vocab_keep_items = 5000
tokenizer = AutoTokenizer.from_pretrained(mname)
assert tokenizer.is_fast, "This only works for fast tokenizers."
tokenizer.save_pretrained("big-tokenizer")
# Should be a generator of list of texts.
training_corpus = [
["This is the first sentence.", "This is the second one."],
["This sentence (contains #) over symbols and numbers 12 3.", "But not this one."],
]
new_tokenizer = tokenizer.train_new_from_iterator(training_corpus, vocab_size=vocab_keep_items)
new_tokenizer.save_pretrained("small-tokenizer")
but this one requires a training corpus, so I had an idea to cheat and train the new tokenizer on its own original vocab:
from transformers import AutoTokenizer
mname = "microsoft/deberta-base"
vocab_keep_items = 5000
tokenizer = AutoTokenizer.from_pretrained(mname)
assert tokenizer.is_fast, "This only works for fast tokenizers."
vocab = tokenizer.get_vocab()
training_corpus = [ vocab.keys() ] # Should be a generator of list of texts.
new_tokenizer = tokenizer.train_new_from_iterator(training_corpus, vocab_size=vocab_keep_items)
new_tokenizer.save_pretrained("small-tokenizer")
which is almost perfect, except it now doesn’t have any information about the frequency of each word/char (that’s how most tokenizers compute their vocab). If you need this info, you can fix it by
having each key appear len(vocab) - ID times, i.e.:
training_corpus = [ (k for i in range(vocab_len-v)) for k,v in vocab.items() ]
which will make the script take much, much longer to complete.
But for the needs of a tiny model (testing) the frequency doesn’t matter at all.
hack the tokenizer file version
Some tokenizers can just be manually truncated at the file level, e.g. Electra:
# Shrink the orig vocab to keep things small (just enough to tokenize any word, so letters+symbols)
# ElectraTokenizerFast is fully defined by a tokenizer.json, which contains the vocab and the ids, so we just need to truncate it wisely
import subprocess
from transformers import ElectraTokenizerFast
mname = "google/electra-small-generator"
vocab_keep_items = 3000
tokenizer_fast = ElectraTokenizerFast.from_pretrained(mname)
tmp_dir = f"/tmp/{mname}"
tokenizer_fast.save_pretrained(tmp_dir)
# resize tokenizer.json (vocab.txt will be automatically resized on save_pretrained)
# perl -pi -e 's|(2999).*|$1}}}|' tokenizer.json # 0-indexed, so vocab_keep_items-1!
closing_pat = "}}}"
cmd = (f"perl -pi -e s|({vocab_keep_items-1}).*|$1{closing_pat}| {tmp_dir}/tokenizer.json").split()
result = subprocess.run(cmd, capture_output=True, text=True)
# reload with modified tokenizer
tokenizer_fast_tiny = ElectraTokenizerFast.from_pretrained(tmp_dir)
tokenizer_fast_tiny.save_pretrained(".")
spm vocab shrinking
First clone sentencepiece into a parent dir:
git clone https://github.com/google/sentencepiece
now to the shrinking
import os, sys

# workaround for fast tokenizer protobuf issue, and it's much faster too!
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
from transformers import XLMRobertaTokenizerFast
mname = "xlm-roberta-base"
# Shrink the orig vocab to keep things small
vocab_keep_items = 5000
tmp_dir = f"/tmp/{mname}"
vocab_orig_path = f"{tmp_dir}/sentencepiece.bpe.model" # this name can be different
vocab_short_path = f"{tmp_dir}/spiece-short.model"
# HACK: need the sentencepiece source to get sentencepiece_model_pb2, as it doesn't get installed
sys.path.append("../sentencepiece/python/src/sentencepiece")
import sentencepiece_model_pb2 as model
tokenizer_orig = XLMRobertaTokenizerFast.from_pretrained(mname)
tokenizer_orig.save_pretrained(tmp_dir)
with open(vocab_orig_path, 'rb') as f: data = f.read()
# adapted from https://blog.ceshine.net/post/trim-down-sentencepiece-vocabulary/
m = model.ModelProto()
m.ParseFromString(data)
print(f"Shrinking vocab from original {len(m.pieces)} dict items")
for i in range(len(m.pieces) - vocab_keep_items): _ = m.pieces.pop()
print(f"new dict {len(m.pieces)}")
with open(vocab_short_path, 'wb') as f: f.write(m.SerializeToString())
m = None
tokenizer_fast_tiny = XLMRobertaTokenizerFast(vocab_file=vocab_short_path)
tokenizer_fast_tiny.save_pretrained(".")
If you have other related recipes please don’t hesitate to add those in the comments below.
p.s. if you create custom models that are derived from original ones, please upload the script that created the derivative along with the model files if possible, so that in the future it’s easy to update, replicate, or adapt it to other models, e.g. make-tiny-deberta.py · hf-internal-testing/tiny-deberta at main created hf-internal-testing/tiny-deberta · Hugging Face.
|
gpt2 seems to have a special token "<|endoftext|>" stashed at the very end of the vocab, so it gets dropped and code breaks. So I hacked it back in with:
if "gpt2" in mname:
new_vocab = { token: i for token, i in vocab.items() if i < vocab_keep_items-1 }
new_vocab["<|endoftext|>"] = vocab_keep_items-1
else:
new_vocab = { token: i for token, i in vocab.items() if i < vocab_keep_items }
| 0 |
huggingface
|
🤗Tokenizers
|
Tokenization in a NER context
|
https://discuss.huggingface.co/t/tokenization-in-a-ner-context/5635
|
Hello everyone, I am trying to understand how to use the tokenizers in a NER context.
Basically, I have a text corpus with entity annotations, usually in IOB format [1], which can be seen as a mapping f: word → tag (annotators work on non-tokenized text and we ask them to annotate entire words).
When I use any modern tokenizer, I will get several tokens for a single word (for instance “huggingface” might produce something like [“hugging#”, “face”]). I need to transfer the original annotations to each token in order to have a new labelling function g: token → tag
E.g. what I have in input
text = "Huggingface is amazing"
labels = [B_org, O, O]
what I need to produce if the tokenizer output is ["Hugging#", "face", "is", "amazin"] is
labels_per_tokens = [B_org, I_org, O, O]
To do so, I need to backtrack, for every token produced by the tokenizer, to the original word/annotation from the input, but it doesn’t seem easy to do (especially with [UNK] tokens). Am I missing something obvious? Is there a good practice or a solution to my problem?
Thanks a lot for your help !
[1] https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)
|
hey @Thrix you can see how to align the NER tags and tokens in the tokenize_and_align_labels function in this tutorial: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb#scrollTo=n9qywopnIrJH 182
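The core of that function boils down to something like the sketch below (assuming a fast tokenizer, pre-split words, and integer labels): only the first sub-token of each word keeps its label, everything else gets -100 so it is ignored by the loss.
def align_labels(words, word_labels, tokenizer):
    enc = tokenizer(words, is_split_into_words=True, truncation=True)
    aligned, previous = [], None
    for word_id in enc.word_ids():
        if word_id is None:
            aligned.append(-100)                  # special tokens such as [CLS]/[SEP]
        elif word_id != previous:
            aligned.append(word_labels[word_id])  # first sub-token of a word keeps the tag
        else:
            aligned.append(-100)                  # remaining sub-tokens of the same word
        previous = word_id
    return enc, aligned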
| 0 |
huggingface
|
🤗Tokenizers
|
Training sentencePiece from scratch?
|
https://discuss.huggingface.co/t/training-sentencepiece-from-scratch/3477
|
Hi! I would like to train a sentencePiece tokenizer from scratch but I’m a bit lost from the documentation and don’t know where to start. There are already examples on how to train a BPE tokenizer on the huggingface website but I don’t know if I can simply transfer it 1 to 1. Also I don’t even know where to find the trainable class for sentencePiece.
Have you already trained a sentencePiece tokenizer?
|
@Johncwok check this page: Using tokenizers from 🤗 Tokenizers — transformers 4.7.0 documentation 69
You can train a SentencePiece tokenizer
from tokenizers import SentencePieceBPETokenizer
tokenizer = SentencePieceBPETokenizer()
tokenizer.train_from_iterator(
text,
vocab_size=30_000,
min_frequency=5,
show_progress=True,
limit_alphabet=500,
)
and then just wrap it with a PreTrainedTokenizerFast
from transformers import PreTrainedTokenizerFast
transformer_tokenizer = PreTrainedTokenizerFast(
tokenizer_object=tokenizer
)
Documentation is not quite clear about this
| 0 |
huggingface
|
🤗Tokenizers
|
Best way to mask a multi-token word when using `.*ForMaskedLM` models
|
https://discuss.huggingface.co/t/best-way-to-mask-a-multi-token-word-when-using-formaskedlm-models/6428
|
For example, in a context where the model is likely to predict the word seaplane (which gets decomposed into two tokens), should I include a single mask or two masks in the contextual sentence?
Here is a complete example: Google Colaboratory 3
Below is the predicted top 6 words for a single mask (where the word seaplane should go). Here it seems reasonable to concatenate the top two predicted vocab words, but this doesn’t seem to extend into the less probable words in the list below.
top_vocab_idxes = torch.topk(torch.softmax(single_mask_token_logits[masked_idx], dim=0), 6)
for token_id in top_vocab_idxes[1]:
print (tokenizer.decode([token_id]))
sea
plane
hangar
helicopter
lake
river
Below is result for using two masks in the contextual sentence, printing out the top 6 most likely combos for first and second masked tokens in each line.
top_vocab_idxes = torch.topk(probs, 6)
for token_id in torch.transpose(top_vocab_idxes[1], 1, 0):
print (tokenizer.decode(token_id))
sea plane
water area
mountain hangar
land dive
landing aircraft
flying field
In this particular case the top 3 most probable combos above seem like reasonable predictions for the two masked tokens given context:
double_mask_sentence = f"""When taking off in a seaplane, flying in a seaplane,
and then landing in a {tokenizer.mask_token} {tokenizer.mask_token},
remember to fashion your seat belt."""
It seems likely that I should use the second method above for my inference and possible later fine-tuning, however, I doubt this is what is done during pretraining.
Thank you for any feedback on what might be best practice here.
|
This is something of interest for me too!
This might be of help: [2009.07118] It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners 5
| 0 |
huggingface
|
🤗Tokenizers
|
TypeError when loading tokenizer with from_pretrained method for bart-large-mnli model
|
https://discuss.huggingface.co/t/typeerror-when-loading-tokenizer-with-from-pretrained-method-for-bart-large-mnli-model/3378
|
Hello everyone.
Here is my problem, (I wish someone can help me, I try so hard in vain to resolve it T.T) :
I use transformers 4.2.1 lib, and I am in a context where I only can use it in offline mode (no internet).
I want to use the bart-large-mnli model so I upload it on a specific server and I download the model with the following link :
huggingface.co
facebook/bart-large-mnli at main 2
Then I try to use the from_pretrained method like this:
tokenizer = BartTokenizerFast.from_pretrained('/appli/pretrainedModel/bart-large-mnli')
or like this:
tokenizer = AutoTokenizer.from_pretrained('/appli/pretrainedModel/bart-large-mnli')
But every time I do this I get the following error (more detailed log at the end of my post; I truncated the last line with […] to hide the full content of merges.txt):
“TypeError: Can’t convert [(‘Ä’, ‘t’), (‘Ä’, ‘a’), (‘h’, ‘e’), […] ] (list) to Union[Merges, Filename]”
I definitely don’t know what’s going wrong with merges.txt, but it seems like there is a problem…
The content of /appli/pretrainedModel/bart-large-mnli is :
config.json
merges.txt
pytorch_model.bin
rust_model.ot
tokenizer_config.json
vocab.json
Does anyone have an idea where the problem is?
Thanks in advance.
More detailed error log :
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in
----> 1 tokenizer = AutoTokenizer.from_pretrained('/appli/pretrainedModel/bart-large-mnli')
2 pipeline(‘zero-shot-classification’, model=model, tokenizer=tokenizer)
/appli/.conda/envs/bf_verbatim/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
383 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
384 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):
→ 385 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
386 else:
387 if tokenizer_class_py is not None:
/appli/.conda/envs/bf_verbatim/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1767
1768 return cls._from_pretrained(
→ 1769 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
1770 )
1771
/appli/.conda/envs/bf_verbatim/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
1839 # Instantiate tokenizer.
1840 try:
→ 1841 tokenizer = cls(*init_inputs, **init_kwargs)
1842 except OSError:
1843 raise OSError(
/appli/.conda/envs/bf_verbatim/lib/python3.7/site-packages/transformers/models/roberta/tokenization_roberta_fast.py in __init__(self, vocab_file, merges_file, tokenizer_file, errors, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, add_prefix_space, **kwargs)
171 mask_token=mask_token,
172 add_prefix_space=add_prefix_space,
→ 173 **kwargs,
174 )
175
/appli/.conda/envs/bf_verbatim/lib/python3.7/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py in __init__(self, vocab_file, merges_file, tokenizer_file, unk_token, bos_token, eos_token, add_prefix_space, **kwargs)
139 eos_token=eos_token,
140 add_prefix_space=add_prefix_space,
→ 141 **kwargs,
142 )
143
/appli/.conda/envs/bf_verbatim/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
87 elif slow_tokenizer is not None:
88 # We need to convert a slow tokenizer to build the backend
—> 89 fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
90 elif self.slow_tokenizer_class is not None:
91 # We need to create and convert a slow tokenizer to build the backend
/appli/.conda/envs/bf_verbatim/lib/python3.7/site-packages/transformers/convert_slow_tokenizer.py in convert_slow_tokenizer(transformer_tokenizer)
657 converter_class = SLOW_TO_FAST_CONVERTERS[tokenizer_class_name]
658
→ 659 return converter_class(transformer_tokenizer).converted()
/appli/.conda/envs/bf_verbatim/lib/python3.7/site-packages/transformers/convert_slow_tokenizer.py in converted(self)
281 continuing_subword_prefix="",
282 end_of_word_suffix="",
→ 283 fuse_unk=False,
284 )
285 )
TypeError: Can’t convert [(‘Ä’, ‘t’), (‘Ä’, ‘a’), (‘h’, ‘e’), […] ] (list) to Union[Merges, Filename]
|
Hi,
Were you able to resolve the issue?
Thanks in advance
| 0 |
huggingface
|
🤗Tokenizers
|
Using truncated fragments as input samples in training
|
https://discuss.huggingface.co/t/using-truncated-fragments-as-input-samples-in-training/6978
|
Hi!
I am using the tokenizers library, roughly following the run_mlm.py script to train a Masked Language Model (MobileBert) from scratch.
Since I am training an unsupervised model using truncated sentences, I was wondering if the truncated (left-out) fragments are included by default in the dataset for training since they would be valid examples for my use case (MLM setting). If they are not used (which I believe to be the case) I wanted to ask if there is any easy way in which I might include them in my training dataset (maybe by using return_overflowing_tokens and stride in a smart way?).
As an additional related question, I would like to know if there is any native way of sorting by length before batching to reduce the dataset size to the minimum. Something along these lines: pommedeterresautee gist and McCormickML blogpost.
EDIT: The best way I have found to do the smart batching is to create an ‘sample_length’ column and use the .sort method to sort by that column before tokenizing.
Thanks in advance!
|
By default, those are not included (unless you use the --line_by_line option which will concatenate all the samples then create block of the size you picked). Using return_overflowing_tokens is definitely an option to get those truncated part! stride is only if you want some overlap between the two parts of a long sentence, which is useful for question answering, but not necessarily for masked language modeling pretraining.
For the sorting by length before batching, we have the --group_by_length option in the Trainer, though it’s for the dataset so it happens after tokenization, which may not be what you are looking for.
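A sketch of what that can look like for MLM pretraining (the dataset and column names are placeholders): every leftover fragment becomes its own row, which is why the original columns have to be dropped in the same map call.
def tokenize_with_overflow(examples):
    return tokenizer(
        examples["text"],
        truncation=True,
        max_length=512,
        return_overflowing_tokens=True,  # keep the truncated fragments as extra rows
        # stride=0 by default, i.e. no overlap between consecutive chunks
    )

tokenized = raw_dataset.map(
    tokenize_with_overflow,
    batched=True,
    remove_columns=raw_dataset.column_names,  # one input row can produce several output rows
)
# drop the bookkeeping column before the data collator, if it is not needed
tokenized = tokenized.remove_columns(["overflow_to_sample_mapping"])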
| 0 |
huggingface
|
🤗Tokenizers
|
Using whitespace tokenizer for training models
|
https://discuss.huggingface.co/t/using-whitespace-tokenizer-for-training-models/6591
|
I have a dataset for which I wanted to use a tokenizer based on whitespace rather than any subword segmentation approach.
This snippet I got off GitHub shows a way to construct and train a custom tokenizer that splits on whitespace:
from tokenizers import Tokenizer, trainers
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase
from tokenizers.pre_tokenizers import CharDelimiterSplit
# We build our custom tokenizer:
tokenizer = Tokenizer(BPE())
tokenizer.normalizer = Lowercase()
tokenizer.pre_tokenizer = CharDelimiterSplit(' ')
# We can train this tokenizer by giving it a list of path to text files:
trainer = trainers.BpeTrainer(special_tokens=["[UNK]"], show_progress=True)
tokenizer.train(files=['/content/dataset.txt'], trainer=trainer)
I wanted to use it for pre-training the BigBird attention model, but I am facing two issues:
I can’t seem to use this snippet with the custom tokenizer above to convert tokenized sentences into model-friendly sequences:
from tokenizers.processors import BertProcessing
tokenizer._tokenizer.post_processor = tokenizers.processors.BertProcessing(
("</s>", tokenizer.token_to_id("</s>")),
("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=16000)
This returns an error, and without any post-processing the output does not contain the expected sequence start and end tokens (<s>, </s>).
The next problem arises when I save the tokenizer state in the specified folder: I am unable to use it via:
tokenizer = BigBirdTokenizerFast.from_pretrained("./tok", max_len=16000)
since it yields an error saying that my directory does not ‘reference’ the tokenizer files, which shouldn’t be an issue since using RobertaTokenizerFast does work; I assume it has something to do with the tokenization post-processing phase.
If anyone wants, I can create a reproducible colab notebook to speed up the issue being solved.
Thanks in advance,
N
|
I have created a fully reproducible colab notebook, with commented problems and synthetic data. Please find it here 12. Thanx
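Not a verified fix, but a sketch of the usual way to handle both points for a tokenizers-level tokenizer like the one above: register <s>/</s> as special tokens first (otherwise token_to_id returns None, which is why the post-processor errors out), attach a TemplateProcessing post-processor, and wrap the result in PreTrainedTokenizerFast instead of reloading it through BigBirdTokenizerFast.from_pretrained.
from tokenizers.processors import TemplateProcessing
from transformers import PreTrainedTokenizerFast

tokenizer.add_special_tokens(["<s>", "</s>"])  # add them to the vocab of the trained tokenizer
tokenizer.post_processor = TemplateProcessing(
    single="<s> $A </s>",
    pair="<s> $A </s> $B:1 </s>:1",
    special_tokens=[
        ("<s>", tokenizer.token_to_id("<s>")),
        ("</s>", tokenizer.token_to_id("</s>")),
    ],
)
hf_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer, bos_token="<s>", eos_token="</s>", unk_token="[UNK]"
)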
| 0 |
huggingface
|
🤗Tokenizers
|
Sentence splitting
|
https://discuss.huggingface.co/t/sentence-splitting/5393
|
I am following the Trainer example 31 to fine-tune a Bert model on my data for text classification, using the pre-trained tokenizer (bert-base-uncased).
In all examples I have found, the input texts are either single sentences or lists of sentences. However, my data is one string per document, comprising multiple sentences. When I inspect the tokenizer output, there are no [SEP] tokens put in between the sentences, e.g.:
This is how I tokenize my dataset:
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
train_dataset = train_dataset.map(encode, batched=True)
And this is an example result of the tokenization:
tokenizer.decode(train_dataset[0]["input_ids"])
[CLS] this is the first sentence . this is the second sentence. [SEP]
Given the special tokens in the beginning and the end, and that the output is lower-cased, I see that the input has been tokenized as expected. However, I was expecting to see a [SEP] between each sentence, as is the case when the input comprises a list of sentences.
What is the recommended approach? Should I split the input documents into sentences, and run the tokenizer on each of them? Or does the Transformer model handle the continuous stream of sentences?
I have seen posts like this:
Split document into sentences for sentence embedding
Just use a parser like stanza or spacy to tokenize/sentence segment your data. This is typically the first step in many NLP tasks.
And:
Summarization on long documents
The disadvantage is that there is no sentence boundary detection. You can theoretically solve that with the NLTK (or SpaCy) approach and splitting sentences.
However, it is not clear to me if this applied for a standard pipeline.
|
hey @carschno, how long are your documents (on average) and what kind of performance do you get with the current approach (i.e. tokenizing + truncating the whole document)?
if performance is a problem then since you’re doing text classification, you could try chunking each document into smaller passages with a sliding window (see this tutorial 109 for details on how that’s done), and then aggregate the [CLS] representations for each window in a manner similar to this 23 paper.
it’s not the most memory efficient strategy, but if your documents are not super long it might be a viable alternative to simple truncation
| 0 |
huggingface
|
🤗Tokenizers
|
Regular tokens vs special tokens
|
https://discuss.huggingface.co/t/regular-tokens-vs-special-tokens/6187
|
Based on applying the CTRL approach to GPT-2, I’m trying to add tokens in order to control my text generation style. Is there a difference between adding a token as a regular one and adding it as a special token?
|
hey @Felipehonorato, as far as i know, special tokens won’t be split by the tokenizer which might be handy in your case where you’re trying to incorporate control tokens. you can find more information in the docs here 5.
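A small sketch of the difference in practice (the control-token names are placeholders): registering the control codes as special tokens guarantees they are never split, and the embedding matrix has to be resized either way.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokenizer.add_special_tokens({"additional_special_tokens": ["<horror>", "<romance>"]})
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.tokenize("<horror> It was a dark night"))  # "<horror>" stays a single token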
| 0 |
huggingface
|
🤗Tokenizers
|
How do you use SentencePiece for BPE of sequences with no whitespace
|
https://discuss.huggingface.co/t/how-do-you-use-sentencepiece-for-bpe-of-sequences-with-no-whitespace/1895
|
I am trying to use byte pair encoding on amino acid sequences which have no spaces:
ADNRRPIWNLGHMVNALKQIPTFLXDGANA
The tokenizers summary section of the docs suggests SentencePiece could be useful, as it treats the input as a raw stream, includes the space in the set of characters to use, then uses BPE or unigram to construct the appropriate vocabulary.
How would I train a tokenizer from scratch using SentencePiece? The tokenizers library seems to only support WordPiece.
|
In the original SentencePiece model, whitespace is treated as a regular character. Please read the description here.
github.com
google/sentencepiece - Whitespace is treated as a basic symbol 17
Unsupervised text tokenizer for Neural Network-based text generation.
I am not totally familiar with the Hugging Face implementation of SentencePiece, but you can use the original sentencepiece library for that and then try loading the resulting SentencePiece model through the Hugging Face wrapper if needed.
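For the record, the tokenizers library can also train a BPE model directly on sequences with no whitespace; here is a minimal sketch (file path and sizes are placeholders), with no pre-tokenizer so the merges are learned straight from the raw character stream:
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))          # note: no pre-tokenizer is set
trainer = BpeTrainer(vocab_size=1000, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]"])
tokenizer.train(files=["sequences.txt"], trainer=trainer)
print(tokenizer.encode("ADNRRPIWNLGHMVNALKQIPTFLXDGANA").tokens)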
| 0 |
huggingface
|
🤗Tokenizers
|
BOS tokens for mBERT tokenizer
|
https://discuss.huggingface.co/t/bos-tokens-for-mbert-tokenizer/5467
|
Default mBERT tokenizes a sentence as ['[CLS]', 'This', 'is', 'a', 'sample', 'sentence', '[SEP]']. I want to change this behaviour and add a language specific token after the CLS token like this: ['[CLS]', '__en__', 'This', 'is', 'a', 'sample', 'sentence', '[SEP]']
I know TemplateProcessing can be used to achieve this if the language token doesn’t change
from tokenizers.processors import TemplateProcessing
tokenizer._tokenizer.post_processor = TemplateProcessing(
single=f"{_lang_token} $A [SEP]",
pair=f"{_lang_token} $A [SEP] $B:1 [SEP]:1",
special_tokens=[("[SEP]", tokenizer.convert_tokens_to_ids("[SEP]")),
(_lang_token, tokenizer.convert_tokens_to_ids(_lang_token))],
)
But in my case, the language token changes with every batch. What is the best way to add these tokens? Creating TemplateProcessing objects every time seems inefficient.
|
@sgugger any suggestions?
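One possible workaround, sketched below (not an official recommendation): keep the default post-processor, add all language tokens to the vocabulary once, and splice the per-batch language id in right after [CLS] once the batch is encoded.
tokenizer.add_special_tokens({"additional_special_tokens": ["__en__", "__de__"]})
# model.resize_token_embeddings(len(tokenizer))  # needed once after adding new tokens

lang_id = tokenizer.convert_tokens_to_ids("__en__")  # chosen per batch

enc = tokenizer(["This is a sample sentence"], add_special_tokens=True)
enc["input_ids"] = [[ids[0], lang_id] + ids[1:] for ids in enc["input_ids"]]
enc["attention_mask"] = [[1] + mask for mask in enc["attention_mask"]]
if "token_type_ids" in enc:
    enc["token_type_ids"] = [[tt[0]] + tt for tt in enc["token_type_ids"]]
# sequences grow by one token, so truncate to the model max length afterwards if needed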
| 0 |
huggingface
|
🤗Tokenizers
|
Issues with offset_mapping values
|
https://discuss.huggingface.co/t/issues-with-offset-mapping-values/4237
|
Hi guys, I am trying to work with a converted FairSeq model, but I have some issues with the tokenizer. I am trying to fine-tune it for POS tagging, so the text is already split into words and I want to use the offset_mapping to detect the first token of each word. I do it like this:
tokenizer = RobertaTokenizerFast.from_pretrained('path', add_prefix_space=True)
ids = tokenizer([['drieme', 'drieme'], ['drieme']],
is_split_into_words=True,
padding=True,
return_offsets_mapping=True)
The tokenization looks like this:
['<s>', 'Ġd', 'rieme', 'Ġd', 'rieme', '</s>']
But the output from the command looks like this:
{
'input_ids': [[0, 543, 24209, 543, 24209, 2], [0, 543, 24209, 2, 1, 1]],
'attention_mask': [[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0]],
'offset_mapping': [
[(0, 0), (0, 1), (1, 6), (1, 1), (1, 6), (0, 0)],
[(0, 0), (0, 1), (1, 6), (0, 0), (0, 0), (0, 0)]
]
}
Notice the offset mapping for the word drieme in the first case. The first occurrence has mappings (0, 1) and (1, 6). This looks reasonable; however, the second drieme has (1, 1) and (1, 6). Suddenly, there is a 1 in the first position. This 1 is there for all but the first word of any sentence I try to parse. I feel like it might have something to do with handling the start of the sentence vs. all the other words, but I am not sure how to solve this so that I get proper offset mappings. What am I doing wrong?
|
Thanks for reporting, it’s definitely a bug. Could you open an issue on tokenizers 18 with your snippet?
| 0 |
huggingface
|
🤗Tokenizers
|
BertTokenizerFast for stsb-xlm-r-multilingual model
|
https://discuss.huggingface.co/t/berttokenizerfast-for-stsb-xlm-r-multilingual-model/4742
|
Hi community,
Would there be a fast tokenizer for the stsb-xlm-r-multilingual model?
Thanks !
|
Hi community and @lewtun,
Could anyone have an idea on how to get a fast tokenizer for stsb-xlm-r-multilingual model?
I am blocked on getting low-latency responses because of the tokenizer computation. Is there a fast tokenizer, like BertTokenizerFast, or is there a way to run the tokenizer on a GPU?
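A sketch that may be worth trying, assuming the checkpoint ships the usual XLM-R sentencepiece files, in which case AutoTokenizer can return the Rust-backed XLMRobertaTokenizerFast:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "sentence-transformers/stsb-xlm-r-multilingual", use_fast=True
)
print(tokenizer.is_fast)  # should be True if the fast tokenizer could be built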
| 0 |
huggingface
|
🤗Tokenizers
|
How to add additional custom pre-tokenization processing?
|
https://discuss.huggingface.co/t/how-to-add-additional-custom-pre-tokenization-processing/1637
|
I would like to add a few custom functions for pre-tokenization. For example, I would like to split numerical text from any non-numerical test.
Eg
‘1000mg’ would become [‘1000’, ‘mg’].
I am trying to figure out the proper way to do this for the Python binding; I think it may be a bit tricky since it’s a binding for the original Rust version.
I am looking at the pretokenizer function
/huggingface/tokenizers/blob/2ccd16bf5c3dd97759d7bdf5229e2feeba314b4a/bindings/python/py_src/tokenizers/pre_tokenizers/init.pyi#L6
Which I am guessing may be where I could potentially add some pre-tokenization functions, but it doesn’t seem to return anything. I noticed that it’s expecting an instance of the PreTokenizedString defined here
/huggingface/tokenizers/blob/2ccd16bf5c3dd97759d7bdf5229e2feeba314b4a/bindings/python/py_src/tokenizers/init.pyi#L55
Which does seem to have some text processing functions. But they don’t seem to return anything. I am guessing that any additional rules need to be implemented in the original rust version itself?
I am looking at the Rust pre-tokenizers code; it seems that I have to add any additional preprocessing code here:
github.com
huggingface/tokenizers/blob/master/tokenizers/src/pre_tokenizers/unicode_scripts/pre_tokenizer.rs 3
use crate::pre_tokenizers::unicode_scripts::scripts::{get_script, Script};
use crate::tokenizer::{normalizer::Range, PreTokenizedString, PreTokenizer, Result};
#[derive(Clone, Debug)]
pub struct UnicodeScripts;
impl_serde_unit_struct!(UnicodeScriptsVisitor, UnicodeScripts);
impl UnicodeScripts {
pub fn new() -> Self {
Self {}
}
}
impl Default for UnicodeScripts {
fn default() -> Self {
Self::new()
}
}
// This code exists in the Unigram default IsValidSentencePiece.
This file has been truncated. show original
Does this seem like the right track for adding additional preprocessing code?
If it makes a difference, what I am trying to do is train a brand new tokenizer.
|
Hi @reSearch2vec
There are multiple ways to customize the pre-tokenization process:
Using existing components
The tokenizers library provides many different PreTokenizer that you can use, and even combine as you wish to. There is a list of components in the official documentation 54
Using custom components written in Python
It is possible to customize some of the components (Normalizer, PreTokenizer, and Decoder) using Python code. This hasn’t been documented yet, but you can find an example here 67. It lets you directly manipulate the NormalizedString 15 or PreTokenizedString 13 to normalize and pre-tokenize as you wish.
Now for the example you mentioned (ie ‘1000mg’ would become [‘1000’, ‘mg’]), you can probably use the Digits PreTokenizer that does exactly this.
If you didn’t get a chance to familiarize yourself with the Getting started part of our documentation 35, I think you will love it as it explains a bit more how to customize your tokenizer, and gives concrete examples.
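For the number-splitting example specifically, here is a minimal sketch combining two existing components (the printed output is only indicative):
from tokenizers import pre_tokenizers
from tokenizers.pre_tokenizers import Digits, Whitespace

pre_tok = pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=False)])
print(pre_tok.pre_tokenize_str("take 1000mg daily"))
# e.g. [('take', ...), ('1000', ...), ('mg', ...), ('daily', ...)]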
| 0 |
huggingface
|
🤗Tokenizers
|
Using a BertWordPieceTokenizer trained from scratch from transformers
|
https://discuss.huggingface.co/t/using-a-bertwordpiecetokenizer-trained-from-scratch-from-transformers/4391
|
Hey everyone,
I’d like to load a BertWordPieceTokenizer I trained from scratch using the interface built in transformers, either with BertTokenizer or BertTokenizerFast. It looks like those two tokenizers in transformers expect different ways of loading in the saved data from BertWordPieceTokenizer, and I am wondering what is the best way to go about things.
Example
I am training on a couple test files, saving the tokenizer, and the reloading it in tokenizers.BertTokenizer (there is a bit of ceremony here creating the test data, but this is everything you need to reproduce the behavior I am seeing):
from pathlib import Path
from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer
def test_text():
text = [
"This is a test, just a test",
"nothing more, nothing less"
]
return text
def create_test_files():
test_path = Path("tmp")
test_path.mkdir()
test_data = test_text()
for idx, text in enumerate(test_data):
file = test_path.joinpath(f"file{idx}.txt")
with open(file, "w") as f:
f.write(text)
return test_path
def cleanup_test(path):
path = Path(path)
for child in path.iterdir():
if child.is_file():
child.unlink()
else:
cleanup_test(child)
path.rmdir()
def create_tokenizer_savepath():
savepath = Path("./bert")
savepath.mkdir()
return str(savepath)
def main():
# Saving two text files to train the tokenizer
test_path = create_test_files()
files = test_path.glob("**/*.txt")
files = [str(f) for f in files]
tokenizer = BertWordPieceTokenizer(
clean_text=True,
strip_accents=True,
lowercase=True,
)
tokenizer.train(
files,
vocab_size=15,
min_frequency=1,
show_progress=True,
special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
limit_alphabet=1000,
wordpieces_prefix="##",
)
savepath = create_tokenizer_savepath()
tokenizer.save_model(savepath, "pubmed_bert")
tokenizer = BertTokenizer.from_pretrained(
f"{savepath}/pubmed_bert-vocab.txt",
max_len=512
)
print(tokenizer)
cleanup_test(test_path)
cleanup_test(savepath)
if __name__ == "__main__":
main()
Loading the Trained Tokenizer
Specifying the path to the pubmed_bert-vocab.txt is deprecated:
Calling BertTokenizer.from_pretrained() with the path to a single file or url is deprecated
PreTrainedTokenizer(name_or_path='bert/pubmed_bert-vocab.txt', vocab_size=30, model_max_len=512, is_fast=False, padding_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'})
But, if I just specify the path to the directory containing pubmed_bert-vocab.txt:
Traceback (most recent call last):
File "minimal_tokenizer.py", line 86, in <module>
main()
File "minimal_tokenizer.py", line 76, in main
max_len=512
File "/home/ygx/opt/local/anaconda3/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1777, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for 'bert'. Make sure that:
- 'bert' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert' is the correct path to a directory containing relevant tokenizer files
The directory I am saving to only contains pubmed_bert-vocab.txt. If specifying the full path to that vocab is deprecated, what is the best way to load that tokenizer?
Using BertTokenizerFast
If I swap out BertTokenizer for BertTokenizerFast, and pass in the path to the directory where I have saved my tokenizer trained from scratch, I get the same error:
Traceback (most recent call last):
File "minimal_tokenizer.py", line 86, in <module>
main()
File "minimal_tokenizer.py", line 76, in main
max_len=512
File "/home/ygx/opt/local/anaconda3/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1777, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for 'bert'. Make sure that:
- 'bert' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert' is the correct path to a directory containing relevant tokenizer files
And if I specify the path to file saved by my tokenizer (pubmed_bert-vocab.txt), I get a ValueError (vs the deprecation warning I was getting using BertTokenizer):
Traceback (most recent call last):
File "minimal_tokenizer.py", line 86, in <module>
main()
File "minimal_tokenizer.py", line 76, in main
max_len=512
File "/home/ygx/opt/local/anaconda3/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1696, in from_pretrained
"Use a model identifier or the path to a directory instead.".format(cls.__name__)
ValueError: Calling BertTokenizerFast.from_pretrained() with the path to a single file or url is not supported.Use a model identifier or the path to a directory instead.
Current Approach
I am currently using BertTokenizer, specifying the full path to pubmed_bert-vocab.txt and ignoring the deprecation warning, but ideally I would like to use BertTokenizerFast; I just don’t know how to load my saved tokenizer into it. What is the best way to go forward on this?
|
pinging @anthony
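While waiting for a definitive answer, two workarounds may be worth trying (sketched below; paths follow the example above): either pass the vocab file to BertTokenizerFast explicitly, or save it under the default name vocab.txt so that from_pretrained can find it in the directory.
from transformers import BertTokenizerFast

# 1) point the fast tokenizer at the vocab file directly
tokenizer_fast = BertTokenizerFast(vocab_file="bert/pubmed_bert-vocab.txt", model_max_length=512)

# 2) or save the trained tokenizer without a name prefix, which writes "vocab.txt",
#    so that BertTokenizerFast.from_pretrained("bert") can locate it under its expected name
# tokenizer.save_model(savepath)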
| 0 |
huggingface
|
🤗Tokenizers
|
Token Classification: How to tokenize and align labels with overflow and stride?
|
https://discuss.huggingface.co/t/token-classification-how-to-tokenize-and-align-labels-with-overflow-and-stride/4353
|
Hello Huggingface,
I try to solve a token classification task where the documents are longer than the model’s max length.
I modified the tokenize_and_align_labels function from example token classification notebook 41. I set the tokenizer option return_overflowing_tokens=True and rewrote the function to map labels for the overflowing tokens:
tokenizer_settings = {'is_split_into_words':True,'return_offsets_mapping':True,
'padding':True, 'truncation':True, 'stride':0,
'max_length':tokenizer.model_max_length, 'return_overflowing_tokens':True}
def tokenize_and_align_labels(examples):
tokenized_inputs = tokenizer(examples["tokens"], **tokenizer_settings)
labels = []
for i,document in enumerate(tokenized_inputs.encodings):
doc_encoded_labels = []
last_word_id = None
for word_id in document.word_ids:
if word_id == None: #or last_word_id == word_id:
doc_encoded_labels.append(-100)
else:
document_id = tokenized_inputs.overflow_to_sample_mapping[i]
label = examples[task][document_id][word_id]
doc_encoded_labels.append(int(label))
last_word_id = word_id
labels.append(doc_encoded_labels)
tokenized_inputs["labels"] = labels
return tokenized_inputs
Executing this code will result in an error:
exception has occurred: ArrowInvalid
Column 5 named task1 expected length 820 but got length 30
It looks like the 30 input examples can’t be mapped to the 820 examples produced after the chunking. How can I solve this issue?
Environment info
Google Colab running this notbook 41
To reproduce
Steps to reproduce the behaviour:
Replace the tokenize_and_align_labels function with the function given above.
Add examples longer than max_length
run tokenized_datasets = datasets.map(tokenize_and_align_labels, batched=True) cell.
|
cc @sgugger
| 0 |
huggingface
|
🤗Tokenizers
|
Space token ’ ’ cannot be add when is_split_into_words = True
|
https://discuss.huggingface.co/t/space-token-cannot-be-add-when-is-split-into-words-true/4305
|
for example,
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
>>> tokenizer.add_tokens(' ')
1
>>> tokenizer.encode('你好 世界', add_special_tokens=False)
[872, 1962, 21128, 686, 4518]
>>> tokenizer.encode(['你','好',' ', '世', '界'], is_split_into_words=True, add_special_tokens=False)
[872, 1962, 686, 4518]
Obviously, the blank token is ignored. But if you change it to another token like ‘[balabala]’, it works.
So what is the proper way to do this?
|
I found that one way is to use convert_tokens_to_ids, but then I cannot use the convenient features of encode and __call__, such as padding and automatically generating the attention_mask.
| 0 |
huggingface
|
🤗Tokenizers
|
Does AutoTokenizer.from_pretrained add [cls] tokens?
|
https://discuss.huggingface.co/t/does-autotokenizer-from-pretrained-add-cls-tokens/4056
|
Hello,
I am currently working on a classification problem using ProtBERT and I am following the Fine-Tuning Tutorial 4. I have created the tokenizer using
tokenizer = AutoTokenizer.from_pretrained
and then tokenised like the tutorial says
train_encodings = tokenizer(seq_train, truncation=True, padding=True,
max_length=1024, return_tensors="pt")
Unfortunately, the model doesn’t seem to be learning (I froze the BERT layers). From reading around, I saw that I need to add the [CLS] token and found such an option using
tokenised.encode(add_special_tokens=True)
Yet the tutorial I am following doesn’t seem to require it, and I was wondering why there is a discrepancy; perhaps this is why my model isn’t learning.
Thank you
|
Hi @theudster, I’m pretty sure that ProtBERT has a CLS token since you can see it in the tokenizer’s special tokens map:
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_bert")
# returns {'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}
tokenizer.special_tokens_map
You can also see it by encoding a text and then decoding it:
text = "I love Adelaide!"
# add_special_tokens=True is set by default
text_enc = tokenizer.encode(text)
for tok in text_enc:
print(tok, tokenizer.decode(tok))
You say you froze the BERT layers, so I’m wondering how you’re doing fine-tuning? I’ve sometimes found that the tutorials in the docs aren’t always complete, so for fine-tuning with text classification I would recommend following Sylvain’s tutorial here: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb 9
| 0 |
huggingface
|
🤗Tokenizers
|
Combine multiple sentences together during tokenization
|
https://discuss.huggingface.co/t/combine-multiple-sentences-together-during-tokenization/3430
|
I want my output to be like [CLS] [SEP] text1 [SEP] text2 [SEP] text3 [SEP] eos token. As per the default behaviour, the tokenizer expects either a string or a pair of strings.
tokenizer(sentence1, sentence2) # returns a single vector value for input_ids. I want this but for three sentences
I want the pair of string behavior for three sentences. I can pass a list of sentences, but that creates 3 lists of input_ids.
tokenizer([sentence1, sentence2, sentence3]) # returns three tensors for input_ids
I want a single tensor representing the output I wrote above.
Is there any good way of doing it ?
|
I don’t think tokenizer handles this case directly.
You could directly join the sentences using [SEP] and then encode it as one single text.
tok = BertTokenizer.from_pretrained("bert-base-cased")
text = "sent1 [SEP] sent2 [SEP] sent3"
ids = tok(text, add_special_tokens=True).input_ids
tok.decode(ids)
=> '[CLS] sent1 [SEP] sent2 [SEP] sent3 [SEP]'
| 0 |
huggingface
|
🤗Tokenizers
|
“OSError: Model name ‘./XX’ was not found in tokenizers model name list” - cannot load custom tokenizer in Transformers
|
https://discuss.huggingface.co/t/oserror-model-name-xx-was-not-found-in-tokenizers-model-name-list-cannot-load-custom-tokenizer-in-transformers/2714
|
I’m trying to create a tokenizer with my own dataset/vocabulary using SentencePiece and then use it with transformers’ AlbertTokenizer.
I followed really closely the tutorial on how to train a model from scratch: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb#scrollTo=hO5M3vrAhcuj 7
# import relevant libraries
from pathlib import Path
from tokenizers import SentencePieceBPETokenizer
from tokenizers.implementations import SentencePieceBPETokenizer
from tokenizers.processors import BertProcessing
from transformers import AlbertTokenizer
paths = [str(x) for x in Path("./data").glob("**/*.txt")]
# Initialize a tokenizer
tokenizer = SentencePieceBPETokenizer(add_prefix_space=True)
# Customize training
tokenizer.train(files=paths,
vocab_size=32000,
min_frequency=2,
show_progress=True,
special_tokens=['<unk>'],)
# Saving model
tokenizer.save_model("Sent-AlBERT")
tokenizer = SentencePieceBPETokenizer(
"./Sent-AlBERT/vocab.json",
"./Sent-AlBERT/merges.txt",)
tokenizer.enable_truncation(max_length=512)
Everything is fine up until this point when I tried to re-create the tokenizer in transformers
# Re-create our tokenizer in transformers
tokenizer = AlbertTokenizer.from_pretrained("./Sent-AlBERT", do_lower_case=True)
This is the error message I kept receiving:
OSError: Model name './Sent-AlBERT' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed './Sent-AlBERT' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
For some reason, it works with RobertaTokenizerFast but not with AlbertTokenzier.
If anyone could give me a suggestion or any sort of direction on how to use Sentencepiece with AlberTokenizer I would really appreciate it.
P.S: I also tried to use ByteLevelBPETokenizer with DistilBertTokenizer, but it couldn’t recognize the tokenizer in the transformer either. I’m not sure what I am missing here.
|
You can’t directly use this trained tokenizer with a “slow” tokenizer class (one not backed by Rust); there is a conversion step to do (I’m not super versed in it, but maybe @thomwolf can chime in?).
It should work with AlbertTokenizerFast (which has more functionality and is faster, so it should be a win-win overall!)
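If it helps, here is a rough sketch of my own (not verified against your setup) of wrapping the trained files with the generic fast class rather than the Albert-specific one; the paths are the ones from your question, and a reasonably recent transformers version is assumed:
from tokenizers import SentencePieceBPETokenizer
from transformers import PreTrainedTokenizerFast

sp_tok = SentencePieceBPETokenizer("./Sent-AlBERT/vocab.json", "./Sent-AlBERT/merges.txt")
sp_tok.save("./Sent-AlBERT/tokenizer.json")  # full serialization, not just vocab/merges
fast_tok = PreTrainedTokenizerFast(tokenizer_file="./Sent-AlBERT/tokenizer.json", unk_token="<unk>")
print(fast_tok("some text").input_ids)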
| 0 |
huggingface
|
🤗Tokenizers
|
Bug with tokernizer’s offset mapping for NER problems?
|
https://discuss.huggingface.co/t/bug-with-tokernizers-offset-mapping-for-ner-problems/2928
|
I’m working on NER and am following the tutorial from Token Classification with W-NUT Emerging Entities. I’m relying on the code in that tutorial to identify which tokens are valid and which tokens have been added by the Tokenizer, such as subword tokens and special tokens like [CLS].
The tutorial says the following:
Now we arrive at a common obstacle with using pre-trained models for token-level classification: many of the tokens in the W-NUT corpus are not in DistilBert’s vocabulary. Bert and many models like it use a method called WordPiece Tokenization, meaning that single words are split into multiple tokens such that each token is likely to be in the vocabulary.
Let’s write a function to do this. This is where we will use the offset_mapping from the tokenizer as mentioned above. For each sub-token returned by the tokenizer, the offset mapping gives us a tuple indicating the sub-token’s start position and end position relative to the original token it was split from. That means that if the first position in the tuple is anything other than 0, we will set its corresponding label to -100. While we’re at it, we can also set labels to -100 if the second position of the offset mapping is 0, since this means it must be a special token like [PAD] or [CLS].
I get different results for the offset mapping from the tokenizer depending on whether the input text is a complete sentence or a list of tokens.
batch_sentences = ['The quick brown fox jumped over the lazy dog.',
                   'That dog is really lazy.']
encoded_dict = tokenizer(text=batch_sentences,
                         add_special_tokens=True,
                         max_length=64,
                         padding=True,
                         truncation=True,
                         return_token_type_ids=True,
                         return_attention_mask=True,
                         return_offsets_mapping=True,
                         return_tensors='pt'
                         )
print(encoded_dict.offset_mapping)
That prints:
tensor([[[ 0, 0],
[ 0, 3],
[ 4, 9],
[10, 15],
[16, 19],
[20, 26],
[27, 31],
[32, 35],
[36, 40],
[41, 44],
[44, 45],
[ 0, 0]],
[[ 0, 0],
[ 0, 4],
[ 5, 8],
[ 9, 11],
[12, 18],
[19, 23],
[23, 24],
[ 0, 0],
[ 0, 0],
[ 0, 0],
[ 0, 0],
[ 0, 0]]])
On the other hand, if the sentences are already split, I get different results:
batch_sentences = [['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog.'],
                   ['That', 'dog', 'is', 'really', 'lazy.']]
encoded_dict = tokenizer(text=batch_sentences,
                         is_split_into_words=True,  # <--- different
                         add_special_tokens=True,
                         max_length=64,
                         padding=True,
                         truncation=True,
                         return_token_type_ids=True,
                         return_attention_mask=True,
                         return_offsets_mapping=True,
                         return_tensors='pt'
                         )
print(encoded_dict.offset_mapping)
That prints:
tensor([[[0, 0],
[0, 3],
[0, 5],
[0, 5],
[0, 3],
[0, 6],
[0, 4],
[0, 3],
[0, 4],
[0, 3],
[3, 4],
[0, 0]],
[[0, 0],
[0, 4],
[0, 3],
[0, 2],
[0, 6],
[0, 4],
[4, 5],
[0, 0],
[0, 0],
[0, 0],
[0, 0],
[0, 0]]])
Here’s a Colab notebook 18 with a full working example.
If this is a bug, I’ll open a ticket in Github.
|
I’m unsure what you think the bug is: the offset_mappings are maps from tokens to the original texts. If you provide the original texts in different formats, you are going to get different results. In the second result, each time the offsets come back to 0 marks the start of one of your words, and you get (0, 0) for special tokens, which is what the tutorial you mention detects.
For non-split texts, you get the spans in the original text (though I’m not sure how you get your labels in that case?)
Note that if you only want to detect the special tokens, you can use the special_tokens_mask the tokenizer can return if you add the flag return_special_tokens_mask=True. Also, for another approach using the word_ids method the fast tokenizers provide, you should check out the token classification example script.
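A small sketch of both options mentioned above, reusing the fast tokenizer and the pre-split batch_sentences from your second snippet (so this is illustrative, not a tested drop-in):
enc = tokenizer(batch_sentences, is_split_into_words=True, padding=True, truncation=True,
                return_special_tokens_mask=True)
print(enc["special_tokens_mask"][0])  # 1 for [CLS]/[SEP]/[PAD], 0 for real tokens
print(enc.word_ids(batch_index=0))    # None for special tokens, otherwise the index of the original word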
| 0 |
huggingface
|
🤗Tokenizers
|
Error with new tokenizers (URGENT!)
|
https://discuss.huggingface.co/t/error-with-new-tokenizers-urgent/2847
|
Hi, recently all my pre-trained models undergo this error while loading their tokenizer:
Couldn't instantiate the backend tokenizer from one of: (1) a tokenizers library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
I tried to pip install sentencepiece but this does not solve the problem. Do you know any solution? (I am working on Google Colab)
Note: In my humble opinion, changing such important things so fast can create very serious problems. All my students (I teach DL stuff) and clients are stuck on my notebooks. I can understand that code becomes outdated after a year, but not after just two months. This requires a lot of maintenance work on my side!
|
There were some breaking changes in the V4 release, please find the details here:
GitHub release: Transformers v4.0.0 (Fast tokenizers, model outputs, file reorganization)
Breaking changes since v3.x
Version v4.0.0 introduces several breaking changes that were necessary.
1. AutoTokenizers a...
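For anyone hitting this in Colab, a hedged sketch of the usual workarounds (my suggestion, not from the release notes): install sentencepiece before transformers is imported and restart the runtime, or pin the major version the notebooks were written against. The model name below is just an example of a sentencepiece-based checkpoint.
# Option 1: make the v4 fast-tokenizer conversion possible (restart the runtime afterwards)
# !pip install sentencepiece
# Option 2: pin the previous major version the notebooks assume
# !pip install "transformers==3.5.1"
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")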
| 0 |
huggingface
|
🤗Tokenizers
|
Error with <|endoftext|> in Tokenizer GPT2
|
https://discuss.huggingface.co/t/error-with-endoftext-in-tokenizer-gpt2/2838
|
Hi!
I work with sberbank-ai/rugpt3large_based_on_gpt2, a model for the Russian language.
I need to implement the function:
def score(sentence):
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
loss=model(tensor_input, lm_labels=tensor_input)
return math.exp(loss)
(from issue #473)
For this function to work correctly with a single token, I need to add a special token <|endoftext|>.
But when I pass the string “<|endoftext|>token” to the tokenizer, it returns the error “ValueError: type of None unknown: <class ‘NoneType’>. Should be one of a python, numpy, pytorch or tensorflow object”.
The same error also occurs when there are multiple tokens in the tokenizer input.
The token <|endoftext|> is absent from the tokenizer dictionary, but it is present in the map of special tokens.
My questions:
What am I doing wrong?
How can I solve this problem?
Why is the <|endoftext|> absent in the dictionary?
This short test shows my problem:
#!pip install transformers
import torch
from transformers import GPT2Tokenizer
from transformers import GPT2LMHeadModel

# checkpoint named above
tokenizer = GPT2Tokenizer.from_pretrained('sberbank-ai/rugpt3large_based_on_gpt2')
model = GPT2LMHeadModel.from_pretrained('sberbank-ai/rugpt3large_based_on_gpt2')

with torch.no_grad():
    # view of special tokens in the dictionary
    print('#' * 20, ' view of special tokens in dictonary ', '#' * 20)
    items = tokenizer.get_vocab().items()
    for item in items:
        if item[0].startswith('<') and item[0].endswith('>'):
            print(item)
    # view of the special tokens map
    print('#' * 20, 'map of special_tokens ', '#' * 20)
    print(tokenizer.special_tokens_map)
    # try to get the ids with <|endoftext|>
    print('#' * 20, " try to get the id's with <|endoftext|> ", '#' * 20,)
    single_token = 'вот'
    single_token_with_eos = '<|endoftext|>' + single_token
    # error here!
    id = tokenizer.encode(single_token_with_eos, return_tensors='pt')
    print('single_token_with_eos_id', tokenizer.encode(single_token_with_eos))
Output:
#################### view of special tokens in dictonary ####################
('<pad>', 0)
('<s>', 1)
('</s>', 2)
('<unk>', 3)
('<mask>', 4)
#################### map of special_tokens ####################
{'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>'}
#################### try to get the id's with <|endoftext|> ####################
Error here!
...
|
Sorry, I realized my mistake. It should have been like this:
tokenizer = AutoTokenizer.from_pretrained(…)
model = AutoModelWithLMHead.from_pretrained(…)
“If nothing works, then read the instructions”
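For completeness, a sketch of the working setup the screenshot presumably showed; the model name is taken from the question, and the rest is my reconstruction rather than the exact original code:
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/rugpt3large_based_on_gpt2")
model = AutoModelWithLMHead.from_pretrained("sberbank-ai/rugpt3large_based_on_gpt2")

# the eos/bos token is now encoded instead of raising a ValueError
ids = tokenizer.encode("<|endoftext|>" + "вот", return_tensors="pt")
print(ids)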
| 0 |
huggingface
|
🤗Tokenizers
|
Build a RoBERTa tokenizer from scratch
|
https://discuss.huggingface.co/t/build-a-roberta-tokenizer-from-scratch/2758
|
Hi, there,
I try to train a RoBERTa model from scratch in the Chinese language.
The first step is to build a new tokenizer.
First, I followed the steps in the quicktour. After the tokenizer training is done, I use run_mlm.py to train the new model.
However, the RoBERTa model training fails and I found two observations:
The output of tokenizer(text) is <s> </s>. No matter what the text is, the output is always <s> </s>. There is nothing encoded.
There is no Ġ symbol in the generated merges.txt file.
The merges.txt contains:
#version: 0.2 - Trained by huggingface/tokenizers
什 么
怎 么
可 以
手 机
...
The code I used to train the tokenizer:
from typing import List
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

def build_BPE_tokenizer(
        train_files: List[str],
        output_dir: str,
        # name: str,
        vocab_size: int,
        min_frequency: int):
    tokenizer = Tokenizer(BPE())
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(
        vocab_size=vocab_size, min_frequency=min_frequency,
        special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"]
    )
    tokenizer.train(trainer, train_files)
    tokenizer.model.save(output_dir)
And examples of training data:
喜欢 打篮球 的 男生 喜欢 什么样 的 女生
爱 打篮球 的 男生 喜欢 什么样 的 女生
我 手机 丢 了 , 我想 换个 手机
我想 买个 新手机 , 求 推荐
How can I fix the problem? Any help is appreciated!
Thanks for the help!
|
Pinging @Narsil
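Not an answer from the thread, just a possibly relevant sketch: RoBERTa checkpoints normally use byte-level BPE, which is where the Ġ marker in merges.txt comes from, so training with the byte-level implementation may be closer to what run_mlm.py expects. The file and directory names here are placeholders:
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["train.txt"], vocab_size=32000, min_frequency=2,
                special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
tokenizer.save_model("zh-roberta-tokenizer")  # the directory must already exist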
| 0 |
huggingface
|
🤗Tokenizers
|
OSError: Model name ‘gpt2’ was not found in tokenizers model name list (gpt2,…)
|
https://discuss.huggingface.co/t/oserror-model-name-gpt2-was-not-found-in-tokenizers-model-name-list-gpt2/2164
|
I’m trying to replicate part of the transformers tutorial from fastai, and there is a place one writes:
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
pretrained_weights = 'gpt2'
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
model = GPT2LMHeadModel.from_pretrained(pretrained_weights)
However, trying to run it I get
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-31-b475580d46e5> in <module>
1 from transformers import GPT2LMHeadModel, GPT2TokenizerFast
2 pretrained_weights = 'gpt2'
----> 3 tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
4 model = GPT2LMHeadModel.from_pretrained(pretrained_weights)
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1589 ", ".join(s3_models),
1590 pretrained_model_name_or_path,
-> 1591 list(cls.vocab_files_names.values()),
1592 )
1593 )
OSError: Model name 'gpt2' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'gpt2' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt', 'tokenizer.json'] but couldn't find such vocabulary files at this path or url.
I find this confusing because gpt2 is in the list. I encounter the same problem with any transformer model I choose, for instance distilgpt2 or models from another family. Moreover, if I comment out that line I also get an error
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
372 if resolved_config_file is None:
--> 373 raise EnvironmentError
374 config_dict = cls._dict_from_json_file(resolved_config_file)
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-32-a4869c5495d6> in <module>
2 pretrained_weights = 'gpt2'
3 #tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_weights)
----> 4 model = GPT2LMHeadModel.from_pretrained(pretrained_weights)
/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
874 proxies=proxies,
875 local_files_only=local_files_only,
--> 876 **kwargs,
877 )
878 else:
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
327
328 """
--> 329 config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
330 return cls.from_dict(config_dict, **kwargs)
331
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
380 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
381 )
--> 382 raise EnvironmentError(msg)
383
384 except json.JSONDecodeError:
OSError: Can't load config for 'gpt2'. Make sure that:
- 'gpt2' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'gpt2' is the correct path to a directory containing a config.json file
Everything is run on Kaggle notebooks, in case it’s important
Thanks in advance!
|
Can you try to share a Google colab reproducing the error?
| 0 |
huggingface
|
🤗Tokenizers
|
Bypassing tokenizers
|
https://discuss.huggingface.co/t/bypassing-tokenizers/2162
|
Hi everyone,
Is it possible to bypass the tokenizer and directly provide the input embeddings to train the BERT model?
Thanks!
|
You can just create a subclass of the model that you want and modify its forward pass.
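As a related sketch (my addition, not part of the original answer): recent versions of the library also accept precomputed embeddings through the inputs_embeds argument of the forward pass, which may avoid subclassing altogether.
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
# (batch, seq_len, hidden) tensor standing in for your precomputed input embeddings
embeds = torch.randn(2, 16, model.config.hidden_size)
outputs = model(inputs_embeds=embeds)
print(outputs.last_hidden_state.shape)  # torch.Size([2, 16, 768])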
| 0 |
huggingface
|
🤗Tokenizers
|
Tokenizing Domain Specific Text
|
https://discuss.huggingface.co/t/tokenizing-domain-specific-text/1978
|
Hello everyone, I’ve been referencing this paper on training transformer-based models on metadata-enhanced MIDI and was thinking about implementing it with the huggingface transformers and tokenizers libraries as an introduction to these libraries beyond the basic language modeling examples. As I’ve been researching and referencing this tutorial, I’ve run into issues with tokenization and was wondering: when training a tokenizer, how can I set up “word level” semantics? Technically each “word” in this case will be the data within a string such as ‘Event(name=Position, time=360, value=4/16, text=360)’ rather than just words and characters delimited on spaces, which is what it is doing now, as listed below
#version: 0.2 - Trained by huggingface/tokenizers
m e
a l
u e
al ue
i me
n ame
v alue
Ġ value
Ġt ex
Ġt ime
Ev en
Ġtex t
Even t
) ,
Ġ Event
N o
Apologies if these questions are noobish; I’m grokking a lot of this as I go along. Any help is greatly appreciated.
|
pinging @anthony and @Narsil here
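Not a confirmed answer, just a rough sketch of one direction: a WordLevel model with a pre-tokenizer that only splits on a delimiter you control keeps each serialized Event(...) string as a single token. This assumes a recent tokenizers version, that the events are tab-separated in the training files, and placeholder file names:
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.trainers import WordLevelTrainer
from tokenizers.pre_tokenizers import Split

tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
# split only on tabs so each full event string stays one "word"
tokenizer.pre_tokenizer = Split(pattern="\t", behavior="removed")
trainer = WordLevelTrainer(special_tokens=["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"])
tokenizer.train(files=["midi_events.txt"], trainer=trainer)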
| 0 |
huggingface
|
🤗Tokenizers
|
Tokenizer splits up pre-split tokens
|
https://discuss.huggingface.co/t/tokenizer-splits-up-pre-split-tokens/2078
|
I am working on a Named Entity Recognition (NER) problem, and I need tokenization to be quite precise in order to match tokens with per-token NER tags. I have the following sentence, “It costs 2.5 million.”, which I have already tokenized.
tokens = ['It', 'costs', '2.5', 'million.']
I then run the list through a BERT tokenizer using the is_split_into_words=True option to get input IDs. When I try to reconstruct the original sentence using the tokenizer, I see that it has split the token 2.5 into the three tokens 2, ., and 5. It also split the token million. into two tokens million and ..
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)
result = tokenizer(tokens, is_split_into_words=True)
print(result.input_ids)
# [101, 2009, 5366, 1016, 1012, 1019, 2454, 1012, 102]
print(tokenizer.decode(result.input_ids))
# [CLS] it costs 2. 5 million. [SEP]
print(tokenizer.convert_ids_to_tokens(result.input_ids))
# ['[CLS]', 'it', 'costs', '2', '.', '5', 'million', '.', '[SEP]']
I do not want that additional tokenization. Since I passed is_split_into_words=True to the tokenizer, I was expecting that the tokenizer would treat each token as atomic and not do any further tokenization. I want the original string to be treated as four tokens ['It', 'costs', '2.5', 'million.'] so that the tokens lines up with my NER tags, where 2.5 has an NER tag of number.
How would I got about fixing my problem? Thank you.
|
Hi facehugger2020,
in order to fix your problem, you will need to train a BERT tokenizer yourself, and then train the BERT model too.
When a BERT model is created and pre-trained, it uses a particular vocabulary. For example, the standard bert-base-uncased model has a vocabulary of roughly 30,000 tokens. “2.5” is not part of that vocabulary, so the BERT tokenizer splits it up into smaller units.
Training from scratch with the vocabulary you need is not impossible, but it will be tricky and probably expensive. Could you force your NER tags to fit with the BERT tokenization? For example, could you perform the tagging after the tokenization?
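If it helps, a rough sketch of my own of the “tag after tokenization” idea, reusing the fast tokenizer and tokens from your example and propagating each per-word tag to its sub-tokens via word_ids(); the word_tags list here is made up:
tokens = ['It', 'costs', '2.5', 'million.']
word_tags = ['O', 'O', 'number', 'O']  # hypothetical per-word tags
enc = tokenizer(tokens, is_split_into_words=True)
# word_ids() returns None for special tokens, otherwise the index of the original word
aligned = [word_tags[i] if i is not None else 'special' for i in enc.word_ids()]
print(list(zip(tokenizer.convert_ids_to_tokens(enc.input_ids), aligned)))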
| 0 |