| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets-tagging
| 28
|
Why is the datasets version pinned in requirements.txt?
|
In file `requirements.txt`, the version of `datasets` is pinned. Why?
|
https://github.com/huggingface/datasets-tagging/issues/28
|
open
|
[
"question"
] | 2021-12-29T09:39:40Z
| 2021-12-29T11:51:59Z
| null |
albertvillanova
|
huggingface/transformers
| 14,482
|
Where can I find the dataset bert-base-chinese is pretrained on?
|
https://github.com/huggingface/transformers/issues/14482
|
closed
|
[] | 2021-11-22T09:22:51Z
| 2021-12-30T15:02:07Z
| null |
BoomSky0416
|
|
huggingface/transformers
| 14,440
|
What does "is_beam_sample_gen_mode" mean
|
Hi, I find there are many ways for generating sequences in `Transformers`(when calling the `generate` method).
According to the code there:
https://github.com/huggingface/transformers/blob/01f8e639d35feb91f16fd3c31f035df11a726cc5/src/transformers/generation_utils.py#L947-L951
As far as I know:
`is_greedy_gen_mode` stands for Greedy Search.
`is_sample_gen_mode` stands for Sampling (with top_k and top_p).
`is_beam_gen_mode` stands for Beam Search.
But what does `is_beam_sample_gen_mode` mean?
Besides, I want to know how to choose the correct generation method. I have tried several ways, but:
1. I find the sequences from "beam search" mode become too similar to each other.
2. I also find the sequences from "sample" mode, while diverse, lack coherence with the context.
Thank you!
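For reference, a minimal sketch of how I currently understand beam-sample mode is reached (my own reading, not something confirmed in the docs: it seems to be used when `do_sample=True` and `num_beams > 1` are passed to `generate` together):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The meaning of life is", return_tensors="pt")

# Beam search combined with sampling: each beam expansion samples from the
# (top-k filtered) distribution instead of taking the argmax.
outputs = model.generate(
    **inputs,
    do_sample=True,   # enables sampling
    num_beams=4,      # enables beam search -> together: beam sample mode
    top_k=50,
    max_length=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```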
|
https://github.com/huggingface/transformers/issues/14440
|
closed
|
[] | 2021-11-18T06:31:52Z
| 2023-02-28T05:13:29Z
| null |
huhk-sysu
|
huggingface/sentence-transformers
| 1,227
|
What is the training data to train the checkpoint "nli-roberta-base-v2"?
|
Hi, I wonder what the training data is for the provided checkpoint "nli-roberta-base-v2"?
The checkpoint name indicates that the training data is related to an NLI dataset, but I just want to clarify what exactly it is.
Thanks in advance.
|
https://github.com/huggingface/sentence-transformers/issues/1227
|
closed
|
[] | 2021-10-25T08:59:45Z
| 2021-10-25T09:47:36Z
| null |
sh0416
|
huggingface/dataset-viewer
| 71
|
Download and cache the images and other files?
|
Fields with an image URL are detected, and the "ImageUrl" type is passed in the features, to let the client (moonlanding) put the URL in `<img src="..." />`.
This means that pages such as https://hf.co/datasets/severo/wit will download images directly from Wikipedia for example. Hotlinking presents various [issues](https://en.wikipedia.org/wiki/Inline_linking#Controversial_uses_of_inline_linking). In particular, it's harder for us to know for sure if the image really exists or if it has an error. It might also generate a lot of traffic to other websites.
Thus: we might want to download the images as assets in the backend, then serve them directly. Coding a good downloading bot is not easy ([User-Agent](https://meta.wikimedia.org/wiki/User-Agent_policy), avoiding rate limits, detecting the filename, detecting the mime-type/extension, etc.).
Related: https://github.com/huggingface/datasets/issues/3105
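A minimal sketch of the kind of "polite" fetcher I have in mind (my own illustration, not the backend's actual code; the User-Agent value is hypothetical):
```python
import mimetypes
import requests

USER_AGENT = "datasets-preview-backend/0.1 (contact: example@example.com)"  # hypothetical value

def fetch_image(url: str) -> tuple[bytes, str]:
    response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
    response.raise_for_status()  # surfaces 404s instead of hotlinking a broken URL
    content_type = response.headers.get("Content-Type", "").split(";")[0]
    extension = mimetypes.guess_extension(content_type) or ""
    return response.content, extension
```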
|
https://github.com/huggingface/dataset-viewer/issues/71
|
closed
|
[
"question"
] | 2021-10-18T15:37:59Z
| 2022-09-16T20:09:24Z
| null |
severo
|
huggingface/datasets
| 3,013
|
Improve `get_dataset_infos`?
|
Using the dedicated function `get_dataset_infos` on a dataset that has no dataset-info.json file returns an empty info:
```
>>> from datasets import get_dataset_infos
>>> get_dataset_infos('wit')
{}
```
While it's totally possible to get it (regenerate it) with:
```
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder('wit')
>>> builder.info
DatasetInfo(description='Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set\n of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its\n size enables WIT to be used as a pretraining dataset for multimodal machine learning models.\n', citation='@article{srinivasan2021wit,\n title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},\n author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},\n journal={arXiv preprint arXiv:2103.01913},\n year={2021}\n}\n', homepage='https://github.com/google-research-datasets/wit', license='', features={'b64_bytes': Value(dtype='string', id=None), 'embedding': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'image_url': Value(dtype='string', id=None), 'metadata_url': Value(dtype='string', id=None), 'original_height': Value(dtype='int32', id=None), 'original_width': Value(dtype='int32', id=None), 'mime_type': Value(dtype='string', id=None), 'caption_attribution_description': Value(dtype='string', id=None), 'wit_features': Sequence(feature={'language': Value(dtype='string', id=None), 'page_url': Value(dtype='string', id=None), 'attribution_passes_lang_id': Value(dtype='string', id=None), 'caption_alt_text_description': Value(dtype='string', id=None), 'caption_reference_description': Value(dtype='string', id=None), 'caption_title_and_reference_description': Value(dtype='string', id=None), 'context_page_description': Value(dtype='string', id=None), 'context_section_description': Value(dtype='string', id=None), 'hierarchical_section_title': Value(dtype='string', id=None), 'is_main_image': Value(dtype='string', id=None), 'page_changed_recently': Value(dtype='string', id=None), 'page_title': Value(dtype='string', id=None), 'section_title': Value(dtype='string', id=None)}, length=-1, id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='wit', config_name='default', version=0.0.0, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None)
```
Should we test if info is empty, and in that case regenerate it? Or always generate it?
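A minimal sketch of the "regenerate if empty" option (my suggestion, not an existing datasets API):
```python
from datasets import get_dataset_infos, load_dataset_builder

def get_infos_with_fallback(path: str):
    infos = get_dataset_infos(path)
    if infos:
        return infos
    # Fall back to rebuilding the info from the dataset script instead of the exported JSON.
    builder = load_dataset_builder(path)
    return {builder.config.name: builder.info}
```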
|
https://github.com/huggingface/datasets/issues/3013
|
closed
|
[
"question",
"dataset-viewer"
] | 2021-10-04T09:47:04Z
| 2022-02-21T15:57:10Z
| null |
severo
|
huggingface/dataset-viewer
| 55
|
Should the features be associated to a split, instead of a config?
|
For now, we assume that all the splits of a config will share the same features, but it seems that it's not necessarily the case (https://github.com/huggingface/datasets/issues/2968). Am I right @lhoestq ?
Is there any example of such a dataset on the hub or in the canonical ones?
|
https://github.com/huggingface/dataset-viewer/issues/55
|
closed
|
[
"question"
] | 2021-10-01T18:14:53Z
| 2021-10-05T09:25:04Z
| null |
severo
|
huggingface/dataset-viewer
| 52
|
Regenerate dataset-info instead of loading it?
|
Currently, getting the rows with `/rows` requires a previous (internal) call to `/infos` to get the features (type of the columns). But sometimes the dataset-info.json file is missing, or not coherent with the dataset script (for example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main), while we are using `datasets.get_dataset_infos()`, which only loads the exported dataset-info.json files:
https://github.com/huggingface/datasets-preview-backend/blob/c2a78e7ce8e36cdf579fea805535fa9ef84a2027/src/datasets_preview_backend/queries/infos.py#L45
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/inspect.py#L115
We might want to call `._info()` on the builder to get the info and features, instead of relying on the dataset-info.json file.
|
https://github.com/huggingface/dataset-viewer/issues/52
|
closed
|
[
"question"
] | 2021-09-27T11:28:13Z
| 2021-09-27T13:21:00Z
| null |
severo
|
huggingface/transformers
| 13,747
|
I want to understand the source code of transformers. Where should I start? Is there a tutorial link? thank you very much!
|
I want to understand the source code of transformers. Where should I start? Is there a tutorial link? thank you very much!
|
https://github.com/huggingface/transformers/issues/13747
|
closed
|
[
"Migration"
] | 2021-09-26T08:27:24Z
| 2021-11-04T15:06:05Z
| null |
limengqigithub
|
huggingface/accelerate
| 174
|
What is the recommended way of training GANs?
|
Currently, the examples folder doesn't contain any example of training a GAN. I wonder what the recommended way of handling multiple models and optimizers is when using accelerate.
In terms of interface, `Accelerator.prepare` can wrap an arbitrary number of models and optimizers at once. However, it seems to me that the current implementation of `Accelerator` only has one gradient scaler (when native amp is enabled), which might cause issues when wrapping multiple optimizers with one `Accelerator` instance. One way of fixing this might be to move the ownership of `GradScaler` to an `AcceleratedOptimizer`, but this would cause problems when calling `Accelerator.backward`.
On the other hand, to use deepspeed, one would have to create two `DeepSpeedEngine` instances to wrap the two models. Maybe accelerate could follow this pattern, since deepspeed would be one of the supported backends.
Anyway, I guess a minimal GAN training script should be added to the examples as a guideline; a rough sketch of what I have in mind is below.
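A minimal sketch (my own usage assumption, not an official accelerate example): prepare both models, both optimizers and the dataloader with a single `Accelerator`, then alternate the two backward passes.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

latent_dim, data_dim = 16, 32
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
dataloader = DataLoader(TensorDataset(torch.randn(256, data_dim)), batch_size=32)
criterion = nn.BCEWithLogitsLoss()

# A single prepare() call wraps everything at once.
generator, discriminator, g_opt, d_opt, dataloader = accelerator.prepare(
    generator, discriminator, g_opt, d_opt, dataloader
)

for (real,) in dataloader:
    noise = torch.randn(real.size(0), latent_dim, device=accelerator.device)
    fake = generator(noise)

    # Discriminator step: real -> 1, fake (detached) -> 0.
    d_opt.zero_grad()
    ones = torch.ones(real.size(0), 1, device=real.device)
    zeros = torch.zeros(real.size(0), 1, device=real.device)
    d_loss = criterion(discriminator(real), ones) + criterion(discriminator(fake.detach()), zeros)
    accelerator.backward(d_loss)
    d_opt.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    g_opt.zero_grad()
    g_loss = criterion(discriminator(fake), ones)
    accelerator.backward(g_loss)
    g_opt.step()
```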
|
https://github.com/huggingface/accelerate/issues/174
|
closed
|
[] | 2021-09-26T07:30:41Z
| 2023-10-24T17:55:15Z
| null |
yuxinyuan
|
huggingface/dataset-viewer
| 48
|
"flatten" the nested values?
|
See https://huggingface.co/docs/datasets/process.html#flatten
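A quick sketch of the API referenced above (using squad just as an illustration):
```python
from datasets import load_dataset

dataset = load_dataset("squad", split="train")
flat = dataset.flatten()
# e.g. the nested "answers" struct becomes top-level "answers.text" and "answers.answer_start" columns
print(flat.column_names)
```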
|
https://github.com/huggingface/dataset-viewer/issues/48
|
closed
|
[
"question"
] | 2021-09-24T12:58:34Z
| 2022-09-16T20:10:22Z
| null |
severo
|
huggingface/dataset-viewer
| 45
|
use `environs` to manage the env vars?
|
https://pypi.org/project/environs/ instead of utils.py
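A small sketch of what that could look like (my illustration; the variable names are hypothetical, not the project's actual settings):
```python
from environs import Env

env = Env()
env.read_env()  # read a .env file if present

PORT = env.int("PORT", 8000)
DEBUG = env.bool("DEBUG", False)
CACHE_DIR = env.str("CACHE_DIR", "/tmp/cache")
```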
|
https://github.com/huggingface/dataset-viewer/issues/45
|
closed
|
[
"question"
] | 2021-09-24T08:05:38Z
| 2022-09-19T08:49:33Z
| null |
severo
|
huggingface/dataset-viewer
| 41
|
Move benchmark to a different repo?
|
It's a client of the API
|
https://github.com/huggingface/dataset-viewer/issues/41
|
closed
|
[
"question"
] | 2021-09-23T10:44:08Z
| 2021-10-12T08:49:11Z
| null |
severo
|
huggingface/dataset-viewer
| 35
|
Refresh the cache?
|
Force a cache refresh on a regular basis (cron)
|
https://github.com/huggingface/dataset-viewer/issues/35
|
closed
|
[
"question"
] | 2021-09-23T09:36:02Z
| 2021-10-12T08:34:41Z
| null |
severo
|
huggingface/dataset-viewer
| 30
|
Use FastAPI instead of only Starlette?
|
It would allow us to have docs, and surely a lot of other benefits.
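A tiny sketch of what I mean (my own illustration, not existing project code):
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthcheck")
def healthcheck() -> dict:
    # Served with `uvicorn app:app`; interactive docs appear at /docs and /redoc,
    # which Starlette alone does not generate.
    return {"status": "ok"}
```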
|
https://github.com/huggingface/dataset-viewer/issues/30
|
closed
|
[
"question"
] | 2021-09-17T14:45:40Z
| 2021-09-20T10:25:17Z
| null |
severo
|
huggingface/datasets
| 2,888
|
v1.11.1 release date
|
Hello, I need to use the latest features in one of my packages, but there has been no new datasets release in the last two months.
When do you plan to publish the v1.11.1 release?
|
https://github.com/huggingface/datasets/issues/2888
|
closed
|
[
"question"
] | 2021-09-09T21:53:15Z
| 2021-09-12T20:18:35Z
| null |
fcakyon
|
huggingface/dataset-viewer
| 18
|
CI: how to acknowledge a "safety" warning?
|
We use `safety` to check for vulnerabilities in the dependencies. But in the case below, `tensorflow` is marked as insecure while the latest version published on PyPI is still 2.6.0. What should we do in this case?
```
+==============================================================================+
| |
| /$$$$$$ /$$ |
| /$$__ $$ | $$ |
| /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ |
| /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ |
| | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ |
| \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ |
| /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ |
| |_______/ \_______/|__/ \_______/ \___/ \____ $$ |
| /$$ | $$ |
| | $$$$$$/ |
| by pyup.io \______/ |
| |
+==============================================================================+
| REPORT |
| checked 137 packages, using free DB (updated once a month) |
+============================+===========+==========================+==========+
| package | installed | affected | ID |
+============================+===========+==========================+==========+
| tensorflow | 2.6.0 | ==2.6.0 | 41161 |
+==============================================================================+
```
|
https://github.com/huggingface/dataset-viewer/issues/18
|
closed
|
[
"question"
] | 2021-09-01T07:20:45Z
| 2021-09-15T11:58:56Z
| null |
severo
|
huggingface/transformers
| 13,331
|
bert: What is the TF version corresponding to transformers?
|
I use Python 3.7, TF 2.4.0, CUDA 11.1 and cuDNN 8.0.4 to run bert-base-un and get an error.
- albert, bert, xlm: @LysandreJik
- tensorflow: @Rocketkn
|
https://github.com/huggingface/transformers/issues/13331
|
closed
|
[] | 2021-08-30T11:42:36Z
| 2021-08-30T15:46:16Z
| null |
xmcs111
|
huggingface/dataset-viewer
| 15
|
Add an endpoint to get the dataset card?
|
See https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L427, `full` argument
The dataset card is the README.md.
|
https://github.com/huggingface/dataset-viewer/issues/15
|
closed
|
[
"question"
] | 2021-08-26T13:43:29Z
| 2022-09-16T20:15:52Z
| null |
severo
|
huggingface/dataset-viewer
| 12
|
Install the datasets that require manual download
|
Some datasets require a manual download (https://huggingface.co/datasets/arxiv_dataset, for example). We might manually download them on the server, so that the backend returns the rows, instead of an error.
|
https://github.com/huggingface/dataset-viewer/issues/12
|
closed
|
[
"question"
] | 2021-08-25T16:30:11Z
| 2022-06-17T11:47:18Z
| null |
severo
|
huggingface/dataset-viewer
| 10
|
Use /info as the source for configs and splits?
|
It's a refactor. As the dataset info contains the configs and splits, maybe the code can be factorized. Before doing it: review the errors for /info, /configs, and /splits (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading) and ensure we will not increase the number of erroneous datasets.
|
https://github.com/huggingface/dataset-viewer/issues/10
|
closed
|
[
"question"
] | 2021-08-25T09:43:51Z
| 2021-09-01T07:08:25Z
| null |
severo
|
huggingface/dataset-viewer
| 6
|
Expand the purpose of this backend?
|
Depending on the evolution of https://github.com/huggingface/datasets, this project might disappear, or its features might be reduced, in particular, if one day it allows caching the data by self-generating:
- an arrow or a parquet data file (maybe with sharding and compression for the largest datasets)
- or a SQL database
- or precompute and store a partial list of known offsets (every 10MB for example)
It would allow getting random access to the data.
|
https://github.com/huggingface/dataset-viewer/issues/6
|
closed
|
[
"question"
] | 2021-08-09T14:03:41Z
| 2022-02-04T11:24:32Z
| null |
severo
|
huggingface/transformers
| 12,925
|
How to reproduce XLNet correctly, and what is the config for fine-tuning XLNet?
|
I fine-tune an XLNet for English text classification, but it seems that I did something wrong, because xlnet-base is worse than bert-base in my case. I report validation accuracy every 1/3 epoch. At the beginning, bert-base is at about 0.50 while xlnet-base is only at 0.24. The config I use for XLNet is listed as follows:
```python
config = {
    "batch_size": 4,
    "learning_rate": 1e-5,
    "gradient_accumulation_steps": 32,
    "epochs": 4,
    "max_sep_length": 384,
    "weight_decay": 0.01,
    "adam_epsilon": 1e-6,
    "16-bit_training": False,
}
```
Does fine-tuning XLNet need a special setting, or does XLNet just converge slowly?
Thanks in advance to everyone willing to help! :-)
|
https://github.com/huggingface/transformers/issues/12925
|
closed
|
[
"Migration"
] | 2021-07-28T01:16:19Z
| 2021-07-29T05:50:07Z
| null |
sherlcok314159
|
huggingface/transformers
| 12,805
|
What is the data format of transformers language modeling run_clm.py fine-tuning?
|
I now use run_clm.py to fine-tune gpt2, the command is as follows:
```
python run_clm.py \
--model_name_or_path gpt2 \
--train_file train1.txt \
--validation_file validation1.txt \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
The training data is as follows:
[train1.txt](https://github.com/huggingface/transformers/files/6847229/train1.txt)
[validation1.txt](https://github.com/huggingface/transformers/files/6847234/validation1.txt)
The following error always appears:
```
[INFO|modeling_utils.py:1354] 2021-07-20 17:37:01,399 >> All the weights of GPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use GPT2LMHeadModel for predictions without further training.
Running tokenizer on dataset: 100%|██████████| 1/1 [00:00<00:00, 90.89ba/s]
Running tokenizer on dataset: 100%|██████████| 1/1 [00:00<00:00, 333.09ba/s]
Grouping texts in chunks of 1024: 0%| | 0/1 [00:00<?, ?ba/s]
Traceback (most recent call last):
File "D:/NLU/tanka-reminder-suggestion/language_modeling/run_clm.py", line 492, in <module>
main()
File "D:/NLU/tanka-reminder-suggestion/language_modeling/run_clm.py", line 407, in main
desc=f"Grouping texts in chunks of {block_size}",
File "D:\lib\site-packages\datasets\dataset_dict.py", line 489, in map
for k, dataset in self.items()
File "D:\lib\site-packages\datasets\dataset_dict.py", line 489, in <dictcomp>
for k, dataset in self.items()
File "D:\lib\site-packages\datasets\arrow_dataset.py", line 1673, in map
desc=desc,
File "D:\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "D:\lib\site-packages\datasets\fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "D:\lib\site-packages\datasets\arrow_dataset.py", line 2024, in _map_single
writer.write_batch(batch)
File "D:\lib\site-packages\datasets\arrow_writer.py", line 388, in write_batch
pa_table = pa.Table.from_pydict(typed_sequence_examples)
File "pyarrow\table.pxi", line 1631, in pyarrow.lib.Table.from_pydict
File "pyarrow\array.pxi", line 332, in pyarrow.lib.asarray
File "pyarrow\array.pxi", line 223, in pyarrow.lib.array
File "pyarrow\array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "D:\lib\site-packages\datasets\arrow_writer.py", line 100, in __arrow_array__
if trying_type and out[0].as_py() != self.data[0]:
File "pyarrow\array.pxi", line 1076, in pyarrow.lib.Array.__getitem__
File "pyarrow\array.pxi", line 551, in pyarrow.lib._normalize_index
IndexError: index out of bounds
```
Is the format of my training data incorrect? Please help me, thanks!
|
https://github.com/huggingface/transformers/issues/12805
|
closed
|
[] | 2021-07-20T09:43:30Z
| 2021-08-27T15:07:19Z
| null |
gongshaojie12
|
huggingface/sentence-transformers
| 1,070
|
What is the difference between training (https://www.sbert.net/docs/training/overview.html#training-data) and unsupervised learning?
|
Hi,
I have a bunch of PDFs and I am building a QnA system from them. Currently, I am using the deepset/haystack repo for this task.
My question is: if I want to generate embeddings for my text, which kind of training should I do, and what is the difference, since both approaches mostly take sentences as input, right?
|
https://github.com/huggingface/sentence-transformers/issues/1070
|
open
|
[] | 2021-07-15T12:13:37Z
| 2021-07-15T12:41:22Z
| null |
SAIVENKATARAJU
|
huggingface/transformers
| 12,704
|
Where is the causal mask when using BertLMHeadModel with config.is_decoder = True?
|
I hope to use BERT for the task of causal language modeling.
`BertLMHeadModel` seems to meet my needs, but I did not find any code snippets about the causal mask, even when I set `config.is_decoder=True`.
I only found the following related code in https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L968;
however, I do not have any values to pass into the argument `encoder_hidden_states` when doing causal language modeling.
So maybe the causal mask does not work?
```
if self.config.is_decoder and encoder_hidden_states is not None:
encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
if encoder_attention_mask is None:
encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
else:
encoder_extended_attention_mask = None
```
|
https://github.com/huggingface/transformers/issues/12704
|
closed
|
[] | 2021-07-14T13:15:50Z
| 2021-07-24T06:42:04Z
| null |
Doragd
|
huggingface/transformers
| 12,105
|
What is the correct way to pass labels to DetrForSegmentation?
|
The [current documentation](https://huggingface.co/transformers/master/model_doc/detr.html#transformers.DetrForSegmentation.forward) for `DetrModelForSegmentation.forward` says the following about `labels` kwarg:
> The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,), the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4) and the **masks a torch.FloatTensor of shape (number of bounding boxes in the image, 4).**
But when I looked at the tests, it seems the shape of `masks` is `torch.rand(self.n_targets, self.min_size, self.max_size)` .
https://github.com/huggingface/transformers/blob/d2753dcbec7123500c1a84a7c2143a79e74df48f/tests/test_modeling_detr.py#L87-L103
---
I'm guessing this is a documentation mixup!
Anyway, it would be super helpful to include a snippet in the DETR docs that shows how to correctly pass masks/other labels and get the loss/loss dict.
CC: @NielsRogge
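For what it's worth, here is a hedged sketch of the format I *believe* is expected (one dict per image with `class_labels`, `boxes` and `masks`, with the mask shape following the test above rather than the current docstring; this is my guess, not verified against the docs):
```python
import torch
from transformers import DetrForSegmentation

model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

pixel_values = torch.rand(1, 3, 256, 256)
n_targets, height, width = 4, 256, 256
labels = [
    {
        "class_labels": torch.randint(0, 91, (n_targets,)),     # (num boxes,)
        "boxes": torch.rand(n_targets, 4),                      # (num boxes, 4), normalized cx/cy/w/h
        "masks": torch.rand(n_targets, height, width).round(),  # (num boxes, H, W) -- per the test, not the docstring
    }
]

outputs = model(pixel_values=pixel_values, labels=labels)
print(outputs.loss, outputs.loss_dict)
```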
|
https://github.com/huggingface/transformers/issues/12105
|
closed
|
[] | 2021-06-10T22:15:23Z
| 2021-06-17T14:37:54Z
| null |
nateraw
|
huggingface/transformers
| 12,005
|
where is the code for DetrFeatureExtractor, DetrForObjectDetection
|
Hello, my dear friend.
I am looking for the model at https://huggingface.co/facebook/detr-resnet-50.
I cannot find its code in transformers==4.7.0.dev0 or 4.6.1. Please help me; appreciated.
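A hedged sketch of the usage I'm after (assuming a transformers release that already ships DETR; in 4.6.1 these classes may simply not exist yet):
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
```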
|
https://github.com/huggingface/transformers/issues/12005
|
closed
|
[] | 2021-06-03T09:28:27Z
| 2021-06-10T07:06:59Z
| null |
zhangbo2008
|
huggingface/notebooks
| 42
|
What is the 'token classification head'?
|
https://github.com/huggingface/notebooks/issues/42
|
closed
|
[] | 2021-05-25T09:17:49Z
| 2021-05-29T11:36:11Z
| null |
zingxy
|
|
huggingface/pytorch-image-models
| 572
|
What is EfficientNetV2s? What is its relationship with EfficientNetV2?
|
https://github.com/huggingface/pytorch-image-models/issues/572
|
closed
|
[
"enhancement"
] | 2021-04-21T07:24:51Z
| 2021-04-21T15:51:02Z
| null |
chenyang9799
|
|
huggingface/sentence-transformers
| 875
|
Where is the saved model after the training?
|
model.fit(train_objectives=[(train_dataloader, train_loss)], output_path=dir, epochs=1, warmup_steps=100)
I have specified the output_path where the model should be saved, but I didn't see any files there after training.
Thank you.
|
https://github.com/huggingface/sentence-transformers/issues/875
|
open
|
[] | 2021-04-17T00:45:41Z
| 2021-04-17T09:54:52Z
| null |
Bulando
|
huggingface/datasets
| 2,196
|
`load_dataset` caches two arrow files?
|
Hi,
I am using datasets to load a large JSON file of 587 GB.
I checked the cache folder and found that two Arrow files were created:
* `cache-ed205e500a7dc44c.arrow` - 355 GB
* `json-train.arrow` - 582 GB
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`?
|
https://github.com/huggingface/datasets/issues/2196
|
closed
|
[
"question"
] | 2021-04-09T03:49:19Z
| 2021-04-12T05:25:29Z
| null |
hwijeen
|
huggingface/datasets
| 2,193
|
Filtering/mapping on one column is very slow
|
I'm currently using the `wikipedia` dataset; I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that; I'm not very familiar with the pyarrow API.
I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.
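For concreteness, a short sketch of the call pattern I'm describing (assuming a `num_tokens` column has already been added with `map`; the thresholds are just placeholders):
```python
# `filter` with input_columns should only need the `num_tokens` values,
# yet it currently loads every column of each row.
short_enough = dataset.filter(
    lambda num_tokens: 100 <= num_tokens <= 512,
    input_columns=["num_tokens"],
)
```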
PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
|
https://github.com/huggingface/datasets/issues/2193
|
closed
|
[
"question"
] | 2021-04-08T18:16:14Z
| 2021-04-26T16:13:59Z
| null |
norabelrose
|
huggingface/datasets
| 2,187
|
Question (potential issue?) related to datasets caching
|
I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```
Can you please let me know what this "reusing dataset csv" means? I wouldn't expect any reuse with datasets caching disabled. Thank you!
|
https://github.com/huggingface/datasets/issues/2187
|
open
|
[
"question"
] | 2021-04-08T00:16:28Z
| 2023-01-03T18:30:38Z
| null |
ioana-blue
|
huggingface/transformers
| 11,057
|
Difference in tokenizer output depending on where `add_prefix_space` is set.
|
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
## Information
I am using `roberta-base` tokenizer. The tokenization output changes depending on whether `add_prefix_space` is passed into the `from_pretrained` factory as keyword argument or set using property after constructing the .
## To reproduce
Steps to reproduce the behavior:
``` python
from transformers import RobertaTokenizerFast
tokenizer_1 = RobertaTokenizerFast.from_pretrained('roberta-base', add_prefix_space=True)
tokenizer_2 = RobertaTokenizerFast.from_pretrained('roberta-base')
tokenizer_2.add_prefix_space = True
pre_tokenized_inputs = ["Is", "this", "tokenization", "correct"]
tokenizer_1(pre_tokenized_inputs, is_split_into_words=True)
# {'input_ids': [0, 1534, 42, 19233, 1938, 4577, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
tokenizer_2(pre_tokenized_inputs, is_split_into_words=True)
# {'input_ids': [0, 6209, 9226, 46657, 1938, 36064, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
```
## Expected behavior
The addition of prefix space is not working for `tokenizer_2`. Either setting the property should add prefix space to each tokens before splitting into sub-words, or we shouldn't allow it to be set to `True` (raise a exception) after object creation.
|
https://github.com/huggingface/transformers/issues/11057
|
closed
|
[] | 2021-04-05T10:30:25Z
| 2021-06-07T15:18:36Z
| null |
sai-prasanna
|
huggingface/transformers
| 10,960
|
What is the score of trainer.predict()?
|
I want to know the meaning of the output of trainer.predict().
Example:
`PredictionOutput(predictions=array([[-2.2704859, 2.442343 ]], dtype=float32), label_ids=array([1]), metrics={'eval_loss': 0.008939245715737343, 'eval_runtime': 0.0215, 'eval_samples_per_second': 46.56})`
What is this score: `predictions=array([[-2.2704859, 2.442343 ]])`?
I use it for sequence classification.
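For context, a short sketch of how I currently interpret those numbers (assuming they are the raw classification logits; this is my assumption, not something from the docs):
```python
import numpy as np

# Raw logits taken from PredictionOutput.predictions above.
logits = np.array([[-2.2704859, 2.442343]])

# Softmax turns the logits into class probabilities.
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
predicted_class = probs.argmax(axis=-1)
print(probs, predicted_class)  # high probability for class 1
```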
|
https://github.com/huggingface/transformers/issues/10960
|
closed
|
[] | 2021-03-30T07:53:13Z
| 2021-03-30T23:41:38Z
| null |
Yuukp
|
huggingface/datasets
| 2,108
|
Is there a way to use a GPU only when training an index in the process of add_faiss_index?
|
Motivation - Some FAISS indexes, like IVF, include a training step that clusters the dataset into a given number of clusters. It would be nice if we could use a GPU to do the training step and convert the index back to CPU, as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6); a rough sketch is below.
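A rough sketch of the workflow I have in mind (train the IVF index on GPU with faiss directly, move it back to CPU, then hand it to datasets via the `custom_index` argument; dimensions and names below are placeholders):
```python
import faiss
import numpy as np

d, nlist = 768, 1024
embeddings = np.random.rand(100_000, d).astype("float32")  # placeholder vectors

quantizer = faiss.IndexFlatL2(d)
cpu_index = faiss.IndexIVFFlat(quantizer, d, nlist)

res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)
gpu_index.train(embeddings)                    # the clustering step runs on the GPU
cpu_index = faiss.index_gpu_to_cpu(gpu_index)  # back to CPU for serving

# dataset.add_faiss_index(column="embeddings", custom_index=cpu_index)
```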
|
https://github.com/huggingface/datasets/issues/2108
|
open
|
[
"question"
] | 2021-03-24T21:32:16Z
| 2021-03-25T06:31:43Z
| null |
shamanez
|
huggingface/datasets
| 1,973
|
Question: what gets stored in the datasets cache and why is it so huge?
|
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178 GB, which seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any insight? Thank you!
|
https://github.com/huggingface/datasets/issues/1973
|
closed
|
[] | 2021-03-02T14:35:53Z
| 2021-03-30T14:03:59Z
| null |
ioana-blue
|
huggingface/sentence-transformers
| 753
|
What is 'sentence_embedding' of a Sentence Transformer Model?
|
Hey, I'm trying to understand where this comes from. It is just mentioned here: [link](https://github.com/UKPLab/sentence-transformers/blob/9932965c92a06835eda255dac7eacd53f48c5cd7/sentence_transformers/SentenceTransformer.py#L144),
but it doesn't seem to be used anywhere else, even though this feature is used in losses like OnlineContrastive. I don't think it comes from the huggingface model?
Which forward is this [here](https://github.com/UKPLab/sentence-transformers/blob/9932965c92a06835eda255dac7eacd53f48c5cd7/sentence_transformers/SentenceTransformer.py#L181) referring to?
I also wonder what this _modules is, as used [here](https://github.com/UKPLab/sentence-transformers/blob/9932965c92a06835eda255dac7eacd53f48c5cd7/sentence_transformers/SentenceTransformer.py#L338).
Why is this not in the init?
Thanks. :-)
|
https://github.com/huggingface/sentence-transformers/issues/753
|
open
|
[] | 2021-02-11T20:48:07Z
| 2021-02-12T14:03:59Z
| null |
PaulForInvent
|
huggingface/transformers
| 9,961
|
What is the correct way to use Adafactor?
|
Hi, from the papers I've seen, Adafactor is typically used with no learning rate (as in the Pegasus paper). However, when I try to execute run_seq2seq.py or seq2seq/finetune_trainer.py from your examples and set the --adafactor parameter without specifying a learning rate, it uses the default 3e-05. Is there a way to use Adafactor without a learning rate?
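A sketch of what I mean by "no learning rate" (my understanding of the Adafactor options exposed in transformers, not the exact example-script configuration):
```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = Adafactor(
    model.parameters(),
    lr=None,               # no external learning rate
    relative_step=True,    # Adafactor derives the step size itself
    scale_parameter=True,
    warmup_init=True,
)
```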
|
https://github.com/huggingface/transformers/issues/9961
|
closed
|
[
"wontfix"
] | 2021-02-02T15:42:08Z
| 2021-03-06T00:12:07Z
| null |
avacaondata
|
huggingface/datasets
| 1,808
|
writing Datasets in a human readable format
|
Hi,
I see there is a save_to_disk function to save data, but this is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format, to a file like JSON? Thanks @lhoestq
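For reference, a sketch of what I'm hoping for (assuming export helpers along the lines of `to_json` / `to_csv`; imdb is just a placeholder dataset):
```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="test")
dataset.to_json("imdb_test.jsonl")  # one JSON object per line
dataset.to_csv("imdb_test.csv")
```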
|
https://github.com/huggingface/datasets/issues/1808
|
closed
|
[
"enhancement",
"question"
] | 2021-02-02T02:55:40Z
| 2022-06-01T15:38:13Z
| null |
ghost
|
huggingface/transformers
| 9,867
|
where is position_embedding_type used
|
When I was using the PyTorch Electra model, I read its source code but I didn't find where position_embedding_type is used.
Did I miss something?
|
https://github.com/huggingface/transformers/issues/9867
|
closed
|
[] | 2021-01-28T08:29:08Z
| 2021-01-29T02:00:07Z
| null |
awdrgyjilplij
|
huggingface/datasets
| 1,786
|
How to use split dataset
|

Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like Penn Treebank), but I am not able to achieve this. What I am doing is executing the lambada.py file in my project, but it's not giving the desired results. Any help will be appreciated!
|
https://github.com/huggingface/datasets/issues/1786
|
closed
|
[
"question"
] | 2021-01-27T21:37:47Z
| 2021-04-23T15:17:39Z
| null |
kkhan188
|
huggingface/sentence-transformers
| 693
|
What is 'Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels'?
|
In your paper, you mention this
`we compute the Spearman's rank
correlation between the cosine-similarity of the
sentence embeddings and the gold labels.`
in **section 4.1**.
Here is my question: what do the `gold labels` mean, and can you provide an example to explain how to calculate the Spearman's rank correlation in your paper? Any help will be appreciated!
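For context, a small sketch of the computation as I understand it (the gold labels being the human similarity scores shipped with STS-style data; this is just my illustration, not the paper's evaluation script):
```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics.pairwise import paired_cosine_distances

embeddings1 = np.random.rand(5, 768)   # sentence embeddings for the first sentences
embeddings2 = np.random.rand(5, 768)   # sentence embeddings for the second sentences
gold_scores = np.array([0.2, 4.8, 3.1, 1.0, 2.5])  # human-annotated similarity (e.g. STS-B, 0-5)

cosine_scores = 1 - paired_cosine_distances(embeddings1, embeddings2)
correlation, _ = spearmanr(cosine_scores, gold_scores)
print(correlation)
```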
|
https://github.com/huggingface/sentence-transformers/issues/693
|
closed
|
[] | 2021-01-15T08:46:57Z
| 2021-01-15T09:55:00Z
| null |
Gpwner
|
huggingface/datasets
| 1,733
|
Connection issue with GLUE: what is the data URL for GLUE?
|
Hi,
my code sometimes fails due to a connection issue with GLUE. Could you tell me the URL the datasets library is trying to read GLUE from, so I can test whether the machines I am working on have a connection problem on my side or not?
Thanks
|
https://github.com/huggingface/datasets/issues/1733
|
closed
|
[] | 2021-01-13T08:37:40Z
| 2021-08-04T18:13:55Z
| null |
ghost
|
huggingface/transformers
| 9,556
|
Where is convert_bert_original_tf_checkpoint_to_pytorch.py?
|
Hi:
I am getting the following error when implementing entity extraction with BERT: OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'].
I am very new to using BERT, and noted that [issue 2110](https://github.com/huggingface/transformers/issues/2110) had a similar problem; it was referred to the convert_bert_original_tf_checkpoint_to_pytorch.py file. However, the current link isn't working. Could you point me to its current location?
V/r,
L
|
https://github.com/huggingface/transformers/issues/9556
|
closed
|
[
"wontfix",
"Migration"
] | 2021-01-13T02:49:48Z
| 2021-03-06T00:13:15Z
| null |
sednaasil
|
huggingface/transformers
| 9,387
|
Where is the impact when output_attentions=True?
|
Is there any impact regarding performance (training/fine-tuning time, GPU memory, batch size, etc.) when `output_attentions=True`?
```python
self.bert_encoder = BertModel.from_pretrained(
hparams.architecture, # "bert-base-uncased"
output_attentions=True)
```
|
https://github.com/huggingface/transformers/issues/9387
|
closed
|
[
"wontfix"
] | 2021-01-02T23:16:57Z
| 2021-03-06T00:13:32Z
| null |
celsofranssa
|
huggingface/sentence-transformers
| 635
|
sbert.net is down. Where can I view the list of pretrained models?
|
https://github.com/huggingface/sentence-transformers/issues/635
|
closed
|
[] | 2020-12-19T12:16:46Z
| 2020-12-19T14:10:36Z
| null |
mani-rai
|
|
huggingface/datasets
| 1,600
|
AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
|
The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no attribute 'train_test_split'
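For reference, a sketch of the workaround I suspect is intended (selecting the "train" split before calling train_test_split; just my guess, not confirmed):
```python
from datasets import load_dataset

dataset = load_dataset('csv', data_files='data.txt')
# load_dataset returns a DatasetDict; train_test_split lives on the Dataset itself.
splits = dataset['train'].train_test_split(test_size=0.1)
print(splits)  # DatasetDict with 'train' and 'test'
```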
|
https://github.com/huggingface/datasets/issues/1600
|
closed
|
[
"question"
] | 2020-12-18T05:37:10Z
| 2023-05-03T04:22:55Z
| null |
david-waterworth
|
huggingface/datasets
| 1,514
|
how to get all the options of a property in datasets
|
Hi,
could you tell me how I can get all the unique options of a property of a dataset?
For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without getting all the training data labels and then forming a set? Thanks
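For concreteness, a sketch of the two approaches I'm aware of (using the super_glue/boolq config just as an illustration):
```python
from datasets import load_dataset

dataset = load_dataset("super_glue", "boolq", split="train")

# If the column is a ClassLabel, the schema already lists the options:
print(dataset.features["label"].names)

# Otherwise, `unique` scans the column for me:
print(dataset.unique("label"))
```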
|
https://github.com/huggingface/datasets/issues/1514
|
closed
|
[
"question"
] | 2020-12-12T16:24:08Z
| 2022-05-25T16:27:29Z
| null |
rabeehk
|
huggingface/datasets
| 1,167
|
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
|
Hi there,
I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern.
I guess the solution would entail wrapping a dataset into a Pytorch dataset.
As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html)
```python
import torch

class SquadDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        # instead of doing this beforehand, I'd like to do tokenization on the fly
        self.encodings = encodings

    def __getitem__(self, idx):
        return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings.input_ids)

train_dataset = SquadDataset(train_encodings)
```
How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers?
----
Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant
```python
class CustomPytorchDataset(Dataset):
    def __init__(self):
        self.dataset = some_hf_dataset(...)
        self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

    def __getitem__(self, batch_idx):
        instance = self.dataset[text_col][batch_idx]
        tokenized_text = self.tokenizer(instance, truncation=True, padding=True)
        return tokenized_text

    def __len__(self):
        return len(self.dataset)

    @staticmethod
    def collate_fn(batch):
        # batch is a list, however it will always contain 1 item because we should not use the
        # batch_size argument as batch_size is controlled by the sampler
        return {k: torch.tensor(v) for k, v in batch[0].items()}

torch_ds = CustomPytorchDataset()

# NOTE: batch_sampler returns list of integers and since here we have SequentialSampler
# it returns: [1, 2, 3], [4, 5, 6], etc. - check calling `list(batch_sampler)`
batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True)

# NOTE: no `batch_size` as now the it is controlled by the sampler!
dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn)
```
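Another sketch I have been considering (assuming a lazy `set_transform`-style hook that applies the tokenizer only on access; just an idea, not something I have verified):
```python
from datasets import load_dataset
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
dataset = load_dataset("imdb", split="train")

def tokenize(batch):
    # Called lazily, only on the rows being accessed, and always with a batch dict.
    return tokenizer(batch["text"], truncation=True, padding=True, return_tensors="pt")

dataset.set_transform(tokenize)
print(dataset[:2])  # tokenized on the fly, only for the requested rows
```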
|
https://github.com/huggingface/datasets/issues/1167
|
closed
|
[
"question",
"generic discussion"
] | 2020-12-05T17:02:56Z
| 2023-07-20T15:49:42Z
| null |
pietrolesci
|
huggingface/datasets
| 883
|
Downloading/caching only a part of a datasets' dataset.
|
Hi,
I want to use the validation data *only* (of natural question).
I don't want to have the whole dataset cached in my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir
|
https://github.com/huggingface/datasets/issues/883
|
open
|
[
"enhancement",
"question"
] | 2020-11-24T14:25:18Z
| 2020-11-27T13:51:55Z
| null |
SapirWeissbuch
|
huggingface/datasets
| 878
|
Loading Data From S3 Path in Sagemaker
|
In SageMaker I'm trying to load the dataset from an S3 path as follows:
```python
train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files["validation"] = valid_path
data_files["test"] = test_path
extension = train_path.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)
print(datasets)
```
I am getting the following error:
```
algo-1-7plil_1 | File "main.py", line 21, in <module>
algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__
algo-1-7plil_1 | **config_kwargs,
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config
algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))
algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime
algo-1-7plil_1 | return os.stat(filename).st_mtime
algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv'
```
But when I try with pandas, it is able to load from S3.
Does the datasets library support loading from an S3 path?
|
https://github.com/huggingface/datasets/issues/878
|
open
|
[
"enhancement",
"question"
] | 2020-11-23T09:17:22Z
| 2020-12-23T09:53:08Z
| null |
mahesh1amour
|
huggingface/datasets
| 861
|
Possible Bug: Small training/dataset file creates gigantic output
|
Hey guys,
I was trying to create a new BERT model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file, but I can't even finish the preprocessing handled by datasets, because this tiny 5 GB text file becomes more than 1 TB during processing. My system was running out of space and crashed prematurely.
I've done training from scratch via Google's BERT repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?
I've used the following CMD:
`python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
|
https://github.com/huggingface/datasets/issues/861
|
closed
|
[
"enhancement",
"question"
] | 2020-11-17T13:48:59Z
| 2021-03-30T14:04:04Z
| null |
NebelAI
|
huggingface/datasets
| 853
|
concatenate_datasets: support axis=0 or 1?
|
I want to achieve the following result

|
https://github.com/huggingface/datasets/issues/853
|
closed
|
[
"enhancement",
"help wanted",
"question"
] | 2020-11-16T02:46:23Z
| 2021-04-19T16:07:18Z
| null |
renqingcolin
|
huggingface/pytorch-image-models
| 261
|
What is different from the paper for MobileNet V3 and EfficientNet?
|
Thanks for your great work.
The results with your code show much higher accuracy compared to the reported accuracy (MobileNet V3 and EfficientNet).
I want to know what the main difference from the paper is.
|
https://github.com/huggingface/pytorch-image-models/issues/261
|
closed
|
[] | 2020-10-29T13:34:35Z
| 2020-10-30T01:15:38Z
| null |
gksruf
|
huggingface/sentence-transformers
| 497
|
What is the meaning of warmup_steps when I fine-tune the model, can I remove it?
|
```python
evaluator = evaluation.EmbeddingSimilarityEvaluator(sentences1, sentences2, scores)
# Define your train dataset, the dataloader and the train loss
train_dataset = SentencesDataset(train_data, model)
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)
# Tune the model
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100, evaluator=evaluator, evaluation_steps=100, output_path='./Ko2CnModel')
```
|
https://github.com/huggingface/sentence-transformers/issues/497
|
closed
|
[] | 2020-10-14T10:03:43Z
| 2020-10-14T10:31:27Z
| null |
wmathor
|
huggingface/sentence-transformers
| 494
|
What is the license for this repository?
|
https://github.com/huggingface/sentence-transformers/issues/494
|
closed
|
[] | 2020-10-12T09:31:41Z
| 2020-10-12T09:32:15Z
| null |
pinkeshbadjatiya
|
|
huggingface/transformers
| 7,727
|
What is the perplexity of distilbert-base-uncased?
|
# ❓ Questions & Help
## Details
In the [readme](https://github.com/huggingface/transformers/tree/master/examples/distillation), it is said that distilbert-base-uncased is pretrained on the same data used to pretrain BERT, so I wonder what the final perplexity or cross-entropy of the pretraining is?
|
https://github.com/huggingface/transformers/issues/7727
|
closed
|
[
"wontfix"
] | 2020-10-12T09:11:49Z
| 2020-12-20T13:34:47Z
| null |
OleNet
|
huggingface/transformers
| 6,790
|
What is the size of the context window in the 'openai-gpt' pre-trained model?
|
What is the size of the context window in the 'openai-gpt' pre-trained model?
|
https://github.com/huggingface/transformers/issues/6790
|
closed
|
[
"wontfix"
] | 2020-08-28T09:17:02Z
| 2020-11-07T05:42:47Z
| null |
lzl19971215
|
huggingface/tokenizers
| 374
|
Where are the pre-built tokenizers for 'merges.txt and vocab.json'?
|
Or how do I build my own private version?
|
https://github.com/huggingface/tokenizers/issues/374
|
closed
|
[] | 2020-08-17T08:45:13Z
| 2021-01-06T20:02:22Z
| null |
SeekPoint
|
huggingface/sentence-transformers
| 335
|
What is the key difference between mean pooling BERT vs. mean pooling sentence-transformers?
|
Hi!
If I run sentence-transformers without pre-training, is it equivalent to applying mean pooling to the last layer of BERT?
For example, if I run the below code,
```python
# Use BERT for mapping tokens to embeddings
word_embedding_model = models.Transformer('bert-base-uncased')
# Apply mean pooling to get one fixed sized sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
model.encode('This is an example')
```
will the embedding vector be different from averaging the last layer of BERT?
|
https://github.com/huggingface/sentence-transformers/issues/335
|
open
|
[] | 2020-08-01T02:35:56Z
| 2020-08-01T08:39:45Z
| null |
yuwon
|
huggingface/pytorch-image-models
| 205
|
When I use the old version, the result is good, but when I update to the newest code, the result is wrong. What am I doing wrong?
|
Same dataset and same training scripts, with this:
```
./distributed_train.sh 2 /data/data/product/product --model swsl_resnet50 --epochs 20 --warmup-epochs 1 --lr 0.001 --batch-size 16 --img-size 224 --num-classes 30 --pretrained --amp
```
The old code result:
Train: 10 [ 0/185 ( 0%)] Loss: 0.866020 (0.8660) Time: 0.599s, 53.41/s (0.599s, 53.41/s) LR: 1.000e-03 Data: 0.455 (0.455)
Train: 10 [ 50/185 ( 27%)] Loss: 0.857730 (0.8619) Time: 0.129s, 248.96/s (0.144s, 222.42/s) LR: 1.000e-03 Data: 0.003 (0.013)
Train: 10 [ 100/185 ( 54%)] Loss: 0.765654 (0.8298) Time: 0.129s, 247.52/s (0.139s, 230.35/s) LR: 1.000e-03 Data: 0.003 (0.008)
Train: 10 [ 150/185 ( 82%)] Loss: 0.984192 (0.8684) Time: 0.133s, 240.71/s (0.138s, 232.42/s) LR: 1.000e-03 Data: 0.003 (0.007)
Train: 10 [ 184/185 (100%)] Loss: 0.725536 (0.8398) Time: 0.191s, 167.12/s (0.137s, 232.97/s) LR: 1.000e-03 Data: 0.061 (0.006)
Test: [ 0/2] Time: 0.830 (0.830) Loss: 0.1307 (0.1307) Prec@1: 98.4375 (98.4375) Prec@5: 100.0000 (100.0000)
Test: [ 2/2] Time: 0.128 (0.348) Loss: 0.0857 (0.0974) Prec@1: 98.8889 (99.1329) Prec@5: 100.0000 (100.0000)
Current checkpoints:
('./output/train/20200731-174448-swsl_resnet50-224/checkpoint-3.pth.tar', 99.42196443590815)
('./output/train/20200731-174448-swsl_resnet50-224/checkpoint-8.pth.tar', 99.42196443590815)
('./output/train/20200731-174448-swsl_resnet50-224/checkpoint-5.pth.tar', 99.13294709486769)
('./output/train/20200731-174448-swsl_resnet50-224/checkpoint-10.pth.tar', 99.13294709486769)
#########################################################################
The new code result:
Train: 2 [ 0/185 ( 0%)] Loss: 1.102509 (1.1025) Time: 0.559s, 57.21/s (0.559s, 57.21/s) LR: 1.000e-03 Data: 0.413 (0.413)
Train: 2 [ 50/185 ( 27%)] Loss: 0.973374 (1.0379) Time: 0.131s, 244.76/s (0.142s, 225.78/s) LR: 1.000e-03 Data: 0.003 (0.012)
Train: 2 [ 100/185 ( 54%)] Loss: 1.284053 (1.1200) Time: 0.130s, 245.52/s (0.138s, 231.99/s) LR: 1.000e-03 Data: 0.003 (0.008)
Train: 2 [ 150/185 ( 82%)] Loss: 0.874424 (1.0586) Time: 0.157s, 204.25/s (0.137s, 233.52/s) LR: 1.000e-03 Data: 0.021 (0.007)
Train: 2 [ 184/185 (100%)] Loss: 0.963474 (1.0396) Time: 0.201s, 159.49/s (0.137s, 234.15/s) LR: 1.000e-03 Data: 0.066 (0.007)
Test: [ 0/10] Time: 0.455 (0.455) Loss: 6.2966 (6.2966) Acc@1: 0.0000 ( 0.0000) Acc@5: 0.0000 ( 0.0000)
Test: [ 10/10] Time: 0.087 (0.070) Loss: 6.2156 (6.4805) Acc@1: 0.0000 ( 0.0000) Acc@5: 7.6923 ( 1.1561)
Current checkpoints:
('./output/train/20200731-175136-swsl_resnet50-224/checkpoint-0.pth.tar', 0.28901735068745693)
('./output/train/20200731-175136-swsl_resnet50-224/checkpoint-1.pth.tar', 0.0)
('./output/train/20200731-175136-swsl_resnet50-224/checkpoint-2.pth.tar', 0.0)
|
https://github.com/huggingface/pytorch-image-models/issues/205
|
closed
|
[] | 2020-07-31T09:54:39Z
| 2020-08-03T09:45:04Z
| null |
runauto
|
huggingface/transformers
| 6,092
|
I don't know what the Trainer's Dataset is.
|
# ❓ Questions & Help
## Details


I thought my custom dataset was going wrong; I don't know what the dataset should return, or what kind of dataset the Trainer expects.
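Just my current understanding, not an official spec: each `__getitem__` should return a dict whose keys match the model's forward arguments (e.g. `input_ids`, `attention_mask`, `labels`):
```python
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings  # e.g. output of tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)
```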
|
https://github.com/huggingface/transformers/issues/6092
|
closed
|
[] | 2020-07-28T13:11:48Z
| 2020-07-28T13:48:43Z
| null |
Ted8000
|
huggingface/pytorch-image-models
| 201
|
where is CheckpointSaver?
|
hello, going over your repo
(thx for the great repo btw)
I can't find where the code for CheckpointSaver is...
nor do I find any checkpoint saved in my pc..
where can I find them??
|
https://github.com/huggingface/pytorch-image-models/issues/201
|
closed
|
[] | 2020-07-28T03:51:45Z
| 2020-07-28T04:32:20Z
| null |
ooodragon94
|
huggingface/transformers
| 5,940
|
What is the difference between add_tokens() and add_special_tokens() in the tokenizer?
|
# ❓ Questions & Help
## Details
When reading the tokenizer code, I ran into a question: if I want to use a pretrained model for an NMT task, I need to add some tag tokens, such as '2English' or '2French'. I think these tokens are special tokens, so which function should I use: add_tokens() or add_special_tokens()? What is the difference between them?
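A sketch of my current understanding of the two functions (using BERT as an example; the tag tokens are just the markers from my use case):
```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# add_tokens: new regular tokens, simply extending the vocabulary.
tokenizer.add_tokens(["2English", "2French"])

# add_special_tokens: registered as special tokens, so they are never split
# and are skipped by decode(skip_special_tokens=True).
tokenizer.add_special_tokens({"additional_special_tokens": ["<2English>", "<2French>"]})

# Either way, the embedding matrix has to be resized afterwards.
model.resize_token_embeddings(len(tokenizer))
```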
|
https://github.com/huggingface/transformers/issues/5940
|
closed
|
[] | 2020-07-21T15:29:14Z
| 2025-03-05T20:33:05Z
| null |
kugwzk
|
huggingface/transformers
| 5,682
|
What is the decoder_input for an encoder-decoder transformer at training time?
|
https://datascience.stackexchange.com/questions/76261/whats-the-input-dimension-for-transformer-decoder-during-training
Is the link's answer right?
Thank you very much!
|
https://github.com/huggingface/transformers/issues/5682
|
closed
|
[] | 2020-07-11T10:48:07Z
| 2020-07-12T03:32:38Z
| null |
guotong1988
|
huggingface/transformers
| 5,564
|
Where is the documentation on migrating to the 3.0 tokenizer API?
|
I see that you folks have completely changed the API to do tokenizing, e.g. for BertTokenizer. I have a lot of code using the two methods `encode_plus()` and `batch_encode_plus()`, and when I went to the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html) to look up an argument, I found that these methods are completely gone. All that remains is a little blurb saying:
> `BatchEncoding` holds the output of the tokenizerβs encoding methods (`__call__`, `encode_plus` and `batch_encode_plus`) and is derived from a Python dictionary.
Are these two methods deprecated now? Did you post a migration guide for users?
On the main [Huggingface Transformers page](https://github.com/huggingface/transformers), you have sections for `Migrating from pytorch-transformers to transformers` and `Migrating from pytorch-pretrained-bert to transformers`, so it's not like there's no precedent for you to provide some information to users on major API changes.
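As far as I can tell (please verify against the release notes), the two methods still exist in 3.0, but the tokenizer's `__call__` is now the recommended entry point for both single texts and batches; a small sketch:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Replaces encode_plus()
single = tokenizer("Hello world", padding="max_length", truncation=True, max_length=32)

# Replaces batch_encode_plus()
batch = tokenizer(["Hello world", "Another sentence"],
                  padding=True, truncation=True, return_tensors="pt")
```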
|
https://github.com/huggingface/transformers/issues/5564
|
closed
|
[] | 2020-07-07T03:17:26Z
| 2020-07-07T21:15:04Z
| null |
githubrandomuser2017
|
huggingface/transformers
| 5,447
|
Where did "prepare_for_model" go? What is the replacement?
|
I'm working with already numericalized data (e.g., where the text has been converted to ids via `tokenizer.tokenize()`) and was using `prepare_for_model` to build the appropriate input dictionary ... ***but*** that method is gone in 3.0.
So ... what should I use/do now?
Thanks
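Two possible paths, sketched under the assumption that `prepare_for_model` is still callable even though it dropped out of the 3.0 docs; the second option builds the input dict by hand:
```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("already numericalized text"))

# Option 1: assumes prepare_for_model is still exposed on the tokenizer
inputs = tokenizer.prepare_for_model(ids, return_tensors="pt")

# Option 2: build the dict manually
input_ids = tokenizer.build_inputs_with_special_tokens(ids)
inputs = {
    "input_ids": torch.tensor([input_ids]),
    "attention_mask": torch.ones(1, len(input_ids), dtype=torch.long),
}
```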
|
https://github.com/huggingface/transformers/issues/5447
|
closed
|
[] | 2020-07-01T19:20:34Z
| 2020-07-03T14:51:22Z
| null |
ohmeow
|
huggingface/transformers
| 5,204
|
T5 Model : What is maximum sequence length that can be used with pretrained T5 (3b model) checkpoint?
|
As the paper described, T5 uses a relative attention mechanism, and the answer to this [issue](https://github.com/google-research/text-to-text-transfer-transformer/issues/273) says T5 can use any sequence length, where the only constraint is memory.
According to this, can I use T5 to summarize inputs that have more than 512 tokens in a sequence?
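A rough sketch of what I have in mind, assuming only that the relative position buckets impose no hard 512-token cap and that memory is the real limit; the 1024 below is an arbitrary illustrative value, and the same call should apply to the `t5-3b` checkpoint:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # same usage for "t5-3b"
model = T5ForConditionalGeneration.from_pretrained("t5-small")

long_document = " ".join(["some long input text"] * 300)
inputs = tokenizer("summarize: " + long_document, return_tensors="pt",
                   max_length=1024, truncation=True)
summary_ids = model.generate(inputs["input_ids"], max_length=150, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```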
|
https://github.com/huggingface/transformers/issues/5204
|
closed
|
[] | 2020-06-23T02:36:22Z
| 2023-08-29T21:43:31Z
| null |
shamanez
|
huggingface/neuralcoref
| 259
|
getting a none value for `print(doc._.coref_clusters)`
|
Hey everyone, I have attached my code and its output below. As you can see, I get a None value when I try to `print(doc._.coref_clusters)`, while the lines above it in the same program produce output just fine. Why is this? Is it related to a bug in the new version? Please respond, thanks.
```
import spacy
import neuralcoref
nlp = spacy.load('en')
doc = nlp('My sister has a dog. She loves him.')
for token in doc:
print('{}:{}'.format(token,token.vector[:3]))
neuralcoref.add_to_pipe(nlp)
print(doc._.coref_clusters)
doc2 = nlp('Angela lives in Boston. She is quite happy in that city.')
for ent in doc2.ents:
print(ent._.coref_cluster)
```
```
(spacy) C:\Users\Gourav\Desktop\py3>python coref.py
C:\Users\Gourav\Anaconda3\envs\spacy\lib\importlib\_bootstrap.py:219: RuntimeWarning: spacy.morphology.Morphology size changed, may indicate binary incompatibility. Expected 104 from C header, got 112
from PyObject
return f(*args, **kwds)
C:\Users\Gourav\Anaconda3\envs\spacy\lib\importlib\_bootstrap.py:219: RuntimeWarning: spacy.vocab.Vocab size changed, may indicate binary incompatibility. Expected 96 from C header, got 112 from PyObject
return f(*args, **kwds)
C:\Users\Gourav\Anaconda3\envs\spacy\lib\importlib\_bootstrap.py:219: RuntimeWarning: spacy.tokens.span.Span size changed, may indicate binary incompatibility. Expected 72 from C header, got 80 from PyObject
return f(*args, **kwds)
My:[3.3386087 0.17132008 2.5449834 ]
sister:[ 0.57823443 2.995358 -0.9161793 ]
has:[-1.2454867 0.10024977 -2.9887996 ]
a:[-2.6144893 -0.87124985 0.77286935]
dog:[-1.5898073 1.3804269 -1.875045 ]
.:[-0.20775741 -3.216754 -0.9142698 ]
She:[ 1.9065745 -1.1759269 -1.1481409]
loves:[-3.0270743 0.6966858 -3.8048356]
him:[ 2.6918807 -1.7273386 -5.5162654]
.:[-1.5350039 -2.1957831 -1.6328099]
None
```
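My current guess, sketched below, is that the first `doc` was created before `neuralcoref.add_to_pipe(nlp)` ran, so the coreference component never processed it; reordering the calls avoids the `None`:
```python
import spacy
import neuralcoref

nlp = spacy.load("en")
neuralcoref.add_to_pipe(nlp)  # add the component first
doc = nlp("My sister has a dog. She loves him.")  # then process the text
print(doc._.coref_clusters)
```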
|
https://github.com/huggingface/neuralcoref/issues/259
|
closed
|
[
"question"
] | 2020-06-18T19:12:59Z
| 2020-06-19T07:58:38Z
| null |
chettipalli
|
huggingface/neuralcoref
| 257
|
Load new trained model
|
Dear guys,
Thank you so much for your interesting work. I was able to train a new model based on [these instructions](https://github.com/huggingface/neuralcoref/blob/master/neuralcoref/train/training.md) and this [blog post](https://medium.com/huggingface/how-to-train-a-neural-coreference-model-neuralcoref-2-7bb30c1abdfe). However, I could not find a manual anywhere on how to load the trained model.
To understand how the model is loaded by the `add_to_pipe` function, I downloaded the model from this [URL](https://s3.amazonaws.com/models.huggingface.co/neuralcoref/neuralcoref.tar.gz) and unzipped it. Inside, I could see the `static_vectors` and `tuned_vectors`; I guess those are exactly like the ones I used to train the model. However, I also see a new file called `key2row`, and I don't know what it is or how to construct it.
Can someone please give me brief instructions on how to run inference with the trained model?
Thank you so much!
|
https://github.com/huggingface/neuralcoref/issues/257
|
open
|
[
"question"
] | 2020-06-13T16:14:52Z
| 2021-07-15T07:32:04Z
| null |
SysDevHayes
|
huggingface/transformers
| 4,937
|
What are the different options for pooler_type in the Bert config?
|
# β Questions & Help
## Details
I want to change the pooling type at the top of the output hidden states of Bert.
I searched the documentation and found nothing. Can anyone help me? I just want to know the different pooling options (max, average, etc.). Here's a piece of code showing the option I am talking about.
`import transformers
encoder = transformers.TFBertModel.from_pretrained("bert-base-uncased")
encoder.config`
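As far as I can see there is no pooling-type switch in the config, so the workaround I have in mind (a sketch under that assumption, not an official API) is to pool the output hidden states manually:
```python
import tensorflow as tf
import transformers

encoder = transformers.TFBertModel.from_pretrained("bert-base-uncased")
tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("a short example sentence", return_tensors="tf")
hidden_states = encoder(inputs)[0]  # (batch, seq_len, hidden_size)

mean_pooled = tf.reduce_mean(hidden_states, axis=1)  # average pooling
max_pooled = tf.reduce_max(hidden_states, axis=1)    # max pooling
```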
|
https://github.com/huggingface/transformers/issues/4937
|
closed
|
[] | 2020-06-11T14:26:20Z
| 2020-06-18T07:26:02Z
| null |
ClementViricel
|
huggingface/datasets
| 246
|
What is the best way to cache a dataset?
|
For example, if I want to use streamlit with an `nlp` dataset:
```
@st.cache
def load_data():
return nlp.load_dataset('squad')
```
This code raises the error "uncachable object"
Right now I have just fixed it with a constant hash for my specific case:
```
@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
```
But I am curious what the best way to do this is in general.
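One hedged alternative I have seen suggested (not necessarily the officially recommended pattern) is to disable hashing of the returned object altogether:
```python
import streamlit as st
import nlp

@st.cache(allow_output_mutation=True)  # skip hashing the returned dataset object
def load_data():
    return nlp.load_dataset("squad")
```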
|
https://github.com/huggingface/datasets/issues/246
|
closed
|
[] | 2020-06-06T11:02:07Z
| 2020-07-09T09:15:07Z
| null |
Mistobaan
|
huggingface/transformers
| 4,817
|
Question: Where do I find the Transformer model from the paper "Attention is all you need" ?
|
Hello
Firstly, thanks for supporting all questions here.
I read the paper "Attention is all you need" and I am wondering which class in the HuggingFace library I should use to get the Transformer architecture from the paper.
Can you please advise?
Thanks
Abhishek
|
https://github.com/huggingface/transformers/issues/4817
|
closed
|
[] | 2020-06-06T10:34:56Z
| 2020-06-08T22:37:27Z
| null |
abhisheksgumadi
|
huggingface/neuralcoref
| 256
|
Can't locate CorScorer.pm
|
Dear guys,
Thank you for your interesting work. I'm training the model for a new language (Dutch) using the SoNars corpus. Because SoNars is in MMAX format, I used a modified version of this script (https://github.com/andreasvc/dutchcoref/blob/master/mmaxconll.py) to convert it to CoNLL format.
After that, I trained a word2vec model (to prepare the static_word_embeddings files). I still have no clue what the tuned_word_embeddings are, but I just used exactly the same static_word_embeddings files and it seemed to work. I modified the load_function and other related functions as stated in the tutorial for training a new language. Everything went well until training, when I got this error:
`Error during the scoring
Command '['perl', 'c:\\users\\administrator\\desktop\\neural_coref\\neuralcoref\\neuralcoref\\train\\scorer_wrapper.pl', 'muc', 'c:\\users\\administrator\\desktop\\neural_coref\\neuralcoref\\neuralcoref\\train/data//key.txt', 'c:\\users\\administrator\\desktop\\neural_coref\\neuralcoref\\neuralcoref\\train\\test_mentions.txt']' returned non-zero exit status 2.
Can't locate CorScorer.pm in @INC (you may need to install the CorScorer module) (@INC contains: scorer/lib /usr/lib/perl5/site_perl /usr/share/perl5/site_perl /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5/core_perl /usr/share/perl5/core_perl) at c:\users\administrator\desktop\neural_coref\neuralcoref\neuralcoref\train\scorer_wrapper.pl line 16.
BEGIN failed--compilation aborted at c:\users\administrator\desktop\neural_coref\neuralcoref\neuralcoref\train\scorer_wrapper.pl line 16.`
I could not find any information about CorScorer.pm anywhere on the internet. Can someone please help me fix this? Am I doing something wrong?
|
https://github.com/huggingface/neuralcoref/issues/256
|
closed
|
[
"question"
] | 2020-05-31T10:33:59Z
| 2021-11-02T14:06:49Z
| null |
SysDevHayes
|
huggingface/swift-coreml-transformers
| 19
|
What GPT-2 model is distilled here?
|
Is it the gpt2-small (124M), gpt2-medium (345M), gpt2-large (774M), or the gpt2-xl (1.5B) that this implementation uses out of the box?
|
https://github.com/huggingface/swift-coreml-transformers/issues/19
|
closed
|
[] | 2020-05-05T02:08:49Z
| 2023-04-01T18:01:45Z
| null |
philipkd
|
huggingface/neuralcoref
| 250
|
How to improve processing speed?
|
Hi.
Could you give me some information about how to tune the parameters to make processing faster, even at the expense of accuracy?
How much impact does the `greedyness` parameter have on speed?
Thanks!
|
https://github.com/huggingface/neuralcoref/issues/250
|
closed
|
[
"question",
"wontfix",
"perf / speed"
] | 2020-04-17T16:32:08Z
| 2022-01-09T04:06:48Z
| null |
murphd40
|
huggingface/transformers
| 3,424
|
Where is the code for Bart fine-tuning? Thanks
|
https://github.com/huggingface/transformers/issues/3424
|
closed
|
[] | 2020-03-25T01:54:34Z
| 2020-04-16T15:03:10Z
| null |
qiunlp
|
|
huggingface/transformers
| 3,283
|
What is the most effective way to use BERT , ROBERTA , GPT-2 architectures as frozen feature extractors ?
|
We use pretrained self-supervised learning (SSL) models for NLP as feature extractors for downstream tasks like sentiment analysis. In most such cases, we add a simple new classification layer and **fine-tune the whole model**. With SSL models getting bigger and the amount of unsupervised training data being huge, it would be nice if we could exploit the problem-agnostic behavior of SSL embeddings. In other words, if we use them as **frozen feature extractors**, we can save a lot of time and computational cost.
**Has anyone seen a good review on using SSL networks as frozen feature extractors?**
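As a concrete reference point for the discussion, a minimal frozen-feature-extractor sketch with BERT; the choice of the pooled `[CLS]` output and the 2-class head are illustrative assumptions:
```python
import torch
from transformers import BertModel, BertTokenizer

bert = BertModel.from_pretrained("bert-base-uncased")
for param in bert.parameters():
    param.requires_grad = False  # BERT stays frozen

classifier = torch.nn.Linear(bert.config.hidden_size, 2)  # e.g. binary sentiment head

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("a great movie", return_tensors="pt")
with torch.no_grad():
    features = bert(**inputs)[1]  # pooled output, shape (batch, hidden_size)
logits = classifier(features)     # only this layer receives gradients
```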
|
https://github.com/huggingface/transformers/issues/3283
|
closed
|
[
"Discussion",
"wontfix"
] | 2020-03-15T09:06:20Z
| 2020-06-02T09:15:03Z
| null |
shamanez
|
huggingface/neuralcoref
| 248
|
German Training not working
|
Hi, we tried to train your model for German. We used GloVe embeddings in German, but it doesn't work.
How does the binary static_word_embeddings.npy need to be structured?
|
https://github.com/huggingface/neuralcoref/issues/248
|
closed
|
[
"question",
"wontfix",
"training",
"feat / coref"
] | 2020-03-11T10:25:36Z
| 2022-01-09T04:06:40Z
| null |
SimonF89
|
huggingface/transformers
| 3,205
|
Where are the position embeddings in BERT when training a new model from scratch?
|
# β Questions & Help
## Details
|
https://github.com/huggingface/transformers/issues/3205
|
closed
|
[
"wontfix"
] | 2020-03-10T13:35:16Z
| 2020-05-16T17:44:04Z
| null |
2hip3ng
|
huggingface/transformers
| 3,193
|
Where is the default download location for pre-trained weights?
|
# β Questions & Help
```
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
```
I can't find the downloaded file.
Thanks for your help
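For what it's worth, the default cache location varies by version (historically something like `~/.cache/torch/transformers`, overridable via the `TRANSFORMERS_CACHE` environment variable), so treat those paths as assumptions to verify; passing `cache_dir` makes the location explicit:
```python
from transformers import DistilBertModel

model = DistilBertModel.from_pretrained(
    "distilbert-base-uncased",
    cache_dir="./my_model_cache",  # downloaded files land in this folder
)
```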
|
https://github.com/huggingface/transformers/issues/3193
|
closed
|
[] | 2020-03-09T17:35:47Z
| 2020-03-09T17:52:49Z
| null |
649459021
|
huggingface/blog
| 5
|
Where is the CoNLL-2003 formatted Esperanto dataset ref. in the tutorial?
|
> Using a dataset of annotated Esperanto POS tags formatted in the CoNLL-2003 format
Where is this dataset?
Thanks!
|
https://github.com/huggingface/blog/issues/5
|
open
|
[] | 2020-02-20T04:26:54Z
| 2020-03-04T16:30:32Z
| null |
ohmeow
|
huggingface/sentence-transformers
| 120
|
What is the expected number of epochs for training sentenceBERT
|
Hi,
Given a model in {BERT, XLM, XLNet, ...}, do you have a mapping of the estimated best number of epochs for training your Siamese network on the NLI dataset?
Otherwise, what would be your suggestion (other than just trying different epoch counts, since that takes a lot of compute time 😅)?
That would be very useful for other users as well I think.
Cheers and great job! :D
|
https://github.com/huggingface/sentence-transformers/issues/120
|
open
|
[] | 2020-02-04T14:17:22Z
| 2020-06-08T19:48:20Z
| null |
MastafaF
|
huggingface/transformers
| 2,705
|
What is the input for TFBertForSequenceClassification?
|
# β Questions & Help
What is the input for TFBertForSequenceClassification?
## Details
I have a simple multiclass text data on which I want to train the BERT model.
From docs I have found the input format of data:
```a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])```
In my understanding:
`input_ids` - the tokenized sentences, generated by the BERT tokenizer.
`attention_mask` - as the name suggests, the attention mask. I should use it to mask out padding tokens. Please correct me if I am wrong.
Now, what is `token_type_ids`? Is it necessary?
When I tried to print the output_shape of the model, I got:
`AttributeError: The layer has never been called and thus has no defined output shape.`
So, let's say my dataset has 5 classes. Does this model expect a one-hot encoded vector of shape [BATCH_SIZE, CLASSES] for the .fit() method?
Also, if I don't use the .from_pretrained() method, will it load an untrained model?
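A rough, version-dependent sketch of how I would wire this up, assuming `token_type_ids` can be omitted for single sentences and that a sparse loss lets the labels stay as integer class ids:
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

NUM_CLASSES = 5
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_CLASSES)

texts = ["first example", "second example"]
labels = tf.constant([0, 3])  # integer class ids, shape (batch,)
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dict(enc), labels, epochs=1, batch_size=2)
```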
|
https://github.com/huggingface/transformers/issues/2705
|
closed
|
[] | 2020-02-01T10:20:29Z
| 2020-03-12T08:41:25Z
| null |
sainimohit23
|
huggingface/transformers
| 2,591
|
What is the F1 score of SQuAD v2.0 on bert-base? I only got an F1 score of 74.78.
|
## β Questions & Help
Hello, I am running some experiments with SQuAD v2.0 on bert-base (NOT bert-large).
According to the BERT paper, bert-large achieves an F1 score of 81.9 on SQuAD v2.0.
Since I couldn't find the official result for bert-base, I am not sure if I am getting the right F1 score.
Has anyone tried running SQuAD v2.0 on bert-base?
I got an F1 score of **74.78** for SQuAD v2.0 on bert-base, using the command below:
sudo python3 ../../../run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--train_file $SQUAD2_DIR/train-v2.0.json \
--predict_file $SQUAD2_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 4 \
--learning_rate 4e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--version_2_with_negative \
--overwrite_output_dir \
--output_dir ../../../bert_base/$TASK_NAME/
|
https://github.com/huggingface/transformers/issues/2591
|
closed
|
[] | 2020-01-20T09:03:45Z
| 2020-01-22T05:03:12Z
| null |
YJYJLee
|
huggingface/tokenizers
| 73
|
Decoding to string
|
Hi, thanks for this awesome library!
I want to decode BPE back to *actual* text, so that I can calculate BLEU scores. When I use the tokenizer.decoder, I get a string without any whitespace. I understand I can use a `pre_tokenizer` to get whitespaces, but in that case the decoded output would be `i can feel the mag i c , can you ?` (or something similar, depending on the BPE model). How do I get the actual text through decoding, so that I can calculate BLEU scores like I normally would?
```
from tokenizers import Tokenizer, models, pre_tokenizers, decoders
# Load a BPE Model
vocab = "./scripts/vocab.json"
merges = "./path/to/merges.txt"
bpe = models.BPE.from_files(vocab, merges)
# Initialize a tokenizer
tokenizer = Tokenizer(bpe)
# Customize pre-tokenization and decoding
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel.new(add_prefix_space=True)
tokenizer.decoder = decoders.ByteLevel.new()
# And then encode:
encoded = tokenizer.encode("i can feel the magic, can you?")
decoded = tokenizer.decode(encoded.ids)
print(encoded)
print(decoded)
>>> ['i', 'can', 'feel', 'the', 'mag', 'i', 'c', ',', 'can', 'you', '?']
>>> icanfeelthemagic,canyou?
```
|
https://github.com/huggingface/tokenizers/issues/73
|
closed
|
[
"question",
"python"
] | 2020-01-15T12:58:44Z
| 2020-01-20T15:38:29Z
| null |
davidstap
|
huggingface/transformers
| 2,411
|
What is the difference between T5Model, T5WithLMHeadModel, T5PreTrainedModel?
|
## β Questions & Help
I notice that for the T5 model there are more choices (T5Model, T5WithLMHeadModel, T5PreTrainedModel) than for BERT or GPT. What is the difference between these three? I think all three are pre-trained models. We do not use T5PreTrainedModel in our downstream task code. Besides, the difference between T5Model and T5WithLMHeadModel seems to be that the latter contains one more linear layer at the end. Am I right about all this?
|
https://github.com/huggingface/transformers/issues/2411
|
closed
|
[
"wontfix"
] | 2020-01-06T07:01:32Z
| 2020-03-13T08:09:42Z
| null |
g-jing
|
huggingface/transformers
| 2,372
|
What is the "could not find answer" warning in squad.py
|
Hello,
I am trying to run run_squad.py for BERT (italian-cased) with an italian version of squad.
During the creation of features from dataset, I got some answer skipped like in the following:
<img width="478" alt="Screenshot 2019-12-30 at 23 30 19" src="https://user-images.githubusercontent.com/26765504/71603304-81081e80-2b5c-11ea-8333-73608e3141a7.png">
Can you tell me why this is happening and whether it influences the overall accuracy of the training?
|
https://github.com/huggingface/transformers/issues/2372
|
closed
|
[
"wontfix"
] | 2019-12-30T22:31:58Z
| 2020-08-29T15:05:37Z
| null |
cppntn
|
huggingface/transformers
| 2,278
|
Where is the script for the second step of knowledge distillation on SQuAD 1.0?
|
## β Questions & Help
In the Distil section, there is a paragraph that reads: "distilbert-base-uncased-distilled-squad: A finetuned version of distilbert-base-uncased finetuned using (a second step of) knwoledge distillation on SQuAD 1.0. This model reaches a F1 score of 86.9 on the dev set (for comparison, Bert bert-base-uncased version reaches a 88.5 F1 score)."
So where is the script for this "second step of knwoledge distillation on SQuAD 1.0" mentioned above?
Thanks a lot; it will be very helpful to me!
|
https://github.com/huggingface/transformers/issues/2278
|
closed
|
[
"wontfix"
] | 2019-12-23T09:13:26Z
| 2020-05-08T15:29:08Z
| null |
c0derm4n
|
huggingface/pytorch-image-models
| 63
|
What is the value range of magnitude in auto-augment when MAX_LEVEL is set to 10?
|
Dear @rwightman, I have read the code for auto-augmentation and random-augmentation, and I noticed that MAX_LEVEL is set to 10, the same as Google's implementation. In the Google implementation they also say an optimal magnitude is often in [5, 30], yet in your implementation you clip the input magnitude to be no larger than MAX_LEVEL (`magnitude = min(_MAX_LEVEL, max(0, magnitude)) # clip to valid range`).
Could you give me some hints on why MAX_LEVEL is set to 10 while the recommended input magnitude range is [5, 30]? Thanks a lot!
|
https://github.com/huggingface/pytorch-image-models/issues/63
|
closed
|
[] | 2019-12-23T08:49:19Z
| 2019-12-26T23:40:49Z
| null |
cddlyf
|
huggingface/transformers
| 2,230
|
what is the most efficient way to store all hidden layers' weights?
|
Hi,
I am following this [post](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) for getting all 12 hidden layers' weights for every token in a sentence.
Consider I have a short text with 2 sentences: `He stole money today. He is fishing on the Mississippi riverbank.`
For all 13 tokens (5 + 8) I want to store the weights from all 12 hidden layers, where each tensor has size 768. So I will have 13 x 12 = 156 tensors.
I want to save all the weights to a file and I am wondering whether I should use `pickle` or `hdf5` format (I am working with long text documents). I am planning to separate sentences with a blank line; please suggest any better way to do it.
Thanks!
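A sketch of the HDF5 route, assuming a reasonably recent transformers release where `outputs.hidden_states` returns the embedding layer plus the 12 encoder layers; one array per sentence keeps partial reads cheap for long documents:
```python
import h5py
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

sentences = ["He stole money today.", "He is fishing on the Mississippi riverbank."]
with h5py.File("hidden_states.h5", "w") as f, torch.no_grad():
    for i, sent in enumerate(sentences):
        inputs = tokenizer(sent, return_tensors="pt")
        hidden_states = model(**inputs).hidden_states   # embeddings + 12 layers
        layers = torch.stack(hidden_states[1:])          # (12, 1, seq_len, 768)
        f.create_dataset(f"sentence_{i}", data=layers.squeeze(1).numpy())
```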
|
https://github.com/huggingface/transformers/issues/2230
|
closed
|
[
"wontfix"
] | 2019-12-19T19:41:00Z
| 2020-02-24T20:38:46Z
| null |
vr25
|
huggingface/pytorch-image-models
| 61
|
where is your MixNet code? I can't find it.
|
https://github.com/huggingface/pytorch-image-models/issues/61
|
closed
|
[] | 2019-12-17T02:49:04Z
| 2019-12-17T05:30:46Z
| null |
xiebinghua
|
|
huggingface/transformers
| 2,127
|
Where are extract_features.py and run_classifier.py?
|
## β Questions & Help
Hello! I couldn't find extract_features.py or run_classifier.py. Have they been renamed?
|
https://github.com/huggingface/transformers/issues/2127
|
closed
|
[] | 2019-12-10T17:14:27Z
| 2019-12-13T15:09:01Z
| null |
JiangYanting
|
huggingface/transformers
| 2,013
|
What is the real parameters to weight the triple loss (L_{ce}, L_{mlm}, L_{cos}) in DistilBert?
|
Hello! Thanks for your great work on DistilBert. I want to ask what the actual "alpha" parameters are that you used in DistilBert to weight the triple loss (L_{ce}, L_{mlm}, L_{cos})?
You did not mention this detail in your NIPS workshop paper (http://arxiv.org/abs/1910.01108). In the [README](https://github.com/huggingface/transformers/blob/master/examples/distillation/README.md) file, you listed two different setups: `--alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0` for single GPU training and `--alpha_ce 0.33 --alpha_mlm 0.33 --alpha_cos 0.33 --alpha_clm 0.0` for distributed training. Can you tell me what is the best setting?
Actually, I have tried to reproduce your DistilBert results. I trained DistilBert on the corpus used by BERT, but the GLUE performance seemed to fall slightly behind your pre-trained `distilbert-base-uncased` by about 2 points. I would appreciate it if you could tell me the parameters needed for reproducibility. Thanks!
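For reference, my reading of how the weights combine, sketched with the single-GPU alphas quoted above; the loss definitions are my paraphrase of the paper (soft-target KL, masked-LM cross entropy, cosine alignment of hidden states), not a verbatim copy of the training script:
```python
import torch
import torch.nn.functional as F

alpha_ce, alpha_mlm, alpha_cos = 5.0, 2.0, 1.0
temperature = 2.0  # assumption: the distillation temperature is a separate hyperparameter

def distillation_loss(student_logits, teacher_logits, mlm_loss,
                      student_hidden, teacher_hidden):
    # Soft-target loss: KL divergence on temperature-scaled logits
    l_ce = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Cosine alignment between (N, D) student and teacher hidden states
    target = torch.ones(student_hidden.size(0), device=student_hidden.device)
    l_cos = F.cosine_embedding_loss(student_hidden, teacher_hidden, target)
    return alpha_ce * l_ce + alpha_mlm * mlm_loss + alpha_cos * l_cos
```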
|
https://github.com/huggingface/transformers/issues/2013
|
closed
|
[] | 2019-12-01T16:49:05Z
| 2019-12-02T15:37:37Z
| null |
voidism
|
huggingface/neuralcoref
| 228
|
Integration of different word embeddings for prediction
|
HI,
I am using SciSpacy with neuralcoref (by adding `ENTITY` to `ACCEPTED_ENTS`) and would also like to use the SciSpacy word vectors if possible.
I already have switched the `self.static_vectors` and `self.tuned_vectors` to point to the `self.vocab.vectors` in the `NeuralCoref` constructor. I also changed `SIZE_EMBEDDING` constant to 300 dims (the dimensions of the SciSpacy vectors).
After these changes I am running into shape conflicts within the `thinc` module.
This said I have three questions:
- Given that I am working with biomedical text, do you think using domain-specific vectors would improve performance, considering I would only be using them during prediction rather than training?
- Is there a better way to integrate these embeddings than what I am currently doing?
- If I am on the right path to integrating these embeddings, could you perhaps point me to a resource or give me an idea of how to adjust the sizes in the ```# A BUNCH OF SIZES #``` section to accept my 300-dimensional embeddings?
Please let me know if I can provide any more information.
Thanks in advance and for making this very awesome tool :)
|
https://github.com/huggingface/neuralcoref/issues/228
|
closed
|
[
"question",
"wontfix",
"usage"
] | 2019-11-25T17:01:15Z
| 2022-01-09T04:06:41Z
| null |
masonedmison
|
huggingface/neuralcoref
| 227
|
What is the performance on CoNLL-2012 test set?
|
Hi,
Thank you for your excellent work. I am looking for an off-the-shelf tool to do some coref text processing. I am wondering about the model performance of this repo on the CoNLL-2012, such as the Avg. F1 score.
Would you please post it here or in the readme file? Thanks a lot.
|
https://github.com/huggingface/neuralcoref/issues/227
|
closed
|
[
"question",
"perf / accuracy"
] | 2019-11-25T09:26:30Z
| 2019-12-06T21:57:04Z
| null |
magic282
|
huggingface/transformers
| 1,866
|
BertForTokenClassification for NER: how should this output be interpreted?
|
## β Questions & Help
Hi,
I'm trying to perform NER using BertForTokenClassification. I saw this sample code on the transformers GitHub page.
import torch
from transformers import BertTokenizer, BertForTokenClassification
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForTokenClassification.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1
print(labels)
outputs = model(input_ids, labels=labels)
loss, scores = outputs[:2]
output loss:
tensor(0.5975, grad_fn=<NllLossBackward>)
output scores:
tensor([[[-0.1622, 0.1824],
[-0.1552, -0.0534],
[-0.3032, -0.1166],
[-0.2453, -0.1182],
[-0.4388, -0.1898],
[-0.3159, -0.1067]]], grad_fn=<AddBackward0>)
1. When I printed the loss and scores I got the values above. How should I interpret this output? What do these values represent for NER, and what should I do to get the NER tags for the sentence "Hello, my dog is cute"?
2. I have looked at a few NER implementations on GitHub that use BERT, and they contain a huge amount of code for performing NER. Is there a simpler way to do NER with BERT, like how the Flair library has a very simple method for the NER task?
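Continuing from the snippet above, my understanding is that the scores just need an argmax over the label dimension; note the head here is freshly initialized with only 2 labels, so the tags are meaningless until the model is fine-tuned on NER data, and the `id2label` map below is a placeholder:
```python
import torch

id2label = {0: "O", 1: "B-MISC"}  # placeholder label map, not the model's own
predictions = torch.argmax(scores, dim=-1)  # shape: (batch, seq_len)
tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
for token, pred in zip(tokens, predictions[0].tolist()):
    print(token, id2label[pred])
```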
|
https://github.com/huggingface/transformers/issues/1866
|
closed
|
[
"wontfix"
] | 2019-11-19T09:23:23Z
| 2020-02-04T21:23:21Z
| null |
AjitAntony
|
huggingface/transformers
| 1,834
|
Where is Model2Model PreTrainedEncoderDecoder in run_summerization_finetune
|
## β Questions & Help
|
https://github.com/huggingface/transformers/issues/1834
|
closed
|
[
"wontfix"
] | 2019-11-14T18:09:24Z
| 2020-03-09T03:39:51Z
| null |
yeliu918
|