| Column | Dtype | Range / classes |
|:---|:---|:---|
| repo_id | stringlengths | 4 – 110 |
| author | stringlengths | 2 – 27 |
| model_type | stringlengths | 2 – 29 |
| files_per_repo | int64 | 2 – 15.4k |
| downloads_30d | int64 | 0 – 19.9M |
| library | stringlengths | 2 – 37 |
| likes | int64 | 0 – 4.34k |
| pipeline | stringlengths | 5 – 30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | stringlengths | 2 – 30 |
| languages | stringlengths | 4 – 1.63k |
| datasets | stringlengths | 2 – 2.58k |
| co2 | stringclasses | 29 values |
| prs_count | int64 | 0 – 125 |
| prs_open | int64 | 0 – 120 |
| prs_merged | int64 | 0 – 15 |
| prs_closed | int64 | 0 – 28 |
| discussions_count | int64 | 0 – 218 |
| discussions_open | int64 | 0 – 148 |
| discussions_closed | int64 | 0 – 70 |
| tags | stringlengths | 2 – 513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401 – 598k |
| is_nc | bool | 1 class |
| readme | stringlengths | 0 – 598k |
| hash | stringlengths | 32 – 32 |
sd-concepts-library/linnopoke
sd-concepts-library
null
17
0
null
4
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,870
false
### linnopoke on Stable Diffusion This is the `<linnopoke>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<linnopoke> 0](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/1.jpeg) ![<linnopoke> 1](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/11.jpeg) ![<linnopoke> 2](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/8.jpeg) ![<linnopoke> 3](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/5.jpeg) ![<linnopoke> 4](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/9.jpeg) ![<linnopoke> 5](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/7.jpeg) ![<linnopoke> 6](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/3.jpeg) ![<linnopoke> 7](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/2.jpeg) ![<linnopoke> 8](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/6.jpeg) ![<linnopoke> 9](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/10.jpeg) ![<linnopoke> 10](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/0.jpeg) ![<linnopoke> 11](https://huggingface.co/sd-concepts-library/linnopoke/resolve/main/concept_images/4.jpeg)
4aee5b3ca30b660bda3a1748968cc5b7
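The card above points to the Stable Conceptualizer notebook; as an illustrative sketch that is not part of the card, the same `<linnopoke>` embedding can typically be loaded locally with `diffusers`, assuming a recent `diffusers` release and `runwayml/stable-diffusion-v1-5` as the base checkpoint:

```python
# Sketch (not from the card): load the <linnopoke> textual-inversion embedding locally.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Pull the learned embedding straight from the concept repository.
pipe.load_textual_inversion("sd-concepts-library/linnopoke")

image = pipe("a landscape painting in the style of <linnopoke>").images[0]
image.save("linnopoke_sample.png")
```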
tkubotake/xlm-roberta-base-finetuned-panx-de
tkubotake
xlm-roberta
11
6
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
f1b418f791ea0208631d2adffd298267
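The card above lists only training details; a minimal inference sketch (an editor's illustration, not part of the card) with the standard `transformers` token-classification pipeline could look like this:

```python
# Sketch (not from the card): German NER with the fine-tuned XLM-R checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tkubotake/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Angela Merkel besuchte das Brandenburger Tor in Berlin."))
```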
zhiyil/roberta-base-finetuned-intent
zhiyil
roberta
12
1,968
transformers
0
text-classification
true
false
false
mit
null
['snips_built_in_intents']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,919
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-intent This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the snips_built_in_intents dataset. It achieves the following results on the evaluation set: - Loss: 0.2720 - Accuracy: 0.9333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - total_eval_batch_size: 5 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - training precision: Mixed Precision ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9568 | 1.0 | 37 | 1.7598 | 0.4333 | | 1.2238 | 2.0 | 74 | 0.8130 | 0.7667 | | 0.4536 | 3.0 | 111 | 0.4985 | 0.8 | | 0.2478 | 4.0 | 148 | 0.3535 | 0.8667 | | 0.0903 | 5.0 | 185 | 0.3110 | 0.8667 | | 0.0849 | 6.0 | 222 | 0.2720 | 0.9333 | | 0.0708 | 7.0 | 259 | 0.2742 | 0.8667 | | 0.0796 | 8.0 | 296 | 0.2839 | 0.8667 | | 0.0638 | 9.0 | 333 | 0.2949 | 0.8667 | | 0.0566 | 10.0 | 370 | 0.2925 | 0.8667 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cpu - Datasets 2.7.1 - Tokenizers 0.12.0
2190a94bb3e8c5309e54f0bc00725965
hassnain/wav2vec2-base-timit-demo-colab6
hassnain
wav2vec2
12
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,701
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab6 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9394 - Wer: 0.5282 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3117 | 7.35 | 500 | 3.1548 | 1.0 | | 1.6732 | 14.71 | 1000 | 0.8857 | 0.6561 | | 0.5267 | 22.06 | 1500 | 0.7931 | 0.6018 | | 0.2951 | 29.41 | 2000 | 0.8152 | 0.5816 | | 0.2013 | 36.76 | 2500 | 0.9060 | 0.5655 | | 0.1487 | 44.12 | 3000 | 0.9201 | 0.5624 | | 0.1189 | 51.47 | 3500 | 0.9394 | 0.5412 | | 0.1004 | 58.82 | 4000 | 0.9394 | 0.5282 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
0e843dc98519c39a6cb01eeea713be98
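For the wav2vec 2.0 card above, a hedged decoding sketch (not from the card; the audio path is a placeholder) following the usual greedy CTC recipe:

```python
# Sketch (not from the card): greedy CTC decoding with the fine-tuned wav2vec2 model.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "hassnain/wav2vec2-base-timit-demo-colab6"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder audio file; wav2vec2-base expects 16 kHz mono input.
waveform, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```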
ConvLab/t5-small-nlu-tm1_tm2_tm3
ConvLab
t5
7
3
transformers
0
text2text-generation
true
false
false
apache-2.0
['en']
['ConvLab/tm1', 'ConvLab/tm2', 'ConvLab/tm3']
null
0
0
0
0
0
0
0
['t5-small', 'text2text-generation', 'natural language understanding', 'conversational system', 'task-oriented dialog']
true
true
true
826
false
# t5-small-nlu-tm1_tm2_tm3 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [Taskmaster-1](https://huggingface.co/datasets/ConvLab/tm1), [Taskmaster-2](https://huggingface.co/datasets/ConvLab/tm2), and [Taskmaster-3](https://huggingface.co/datasets/ConvLab/tm3). Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - optimizer: Adafactor - lr_scheduler_type: linear - num_epochs: 10.0 ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
74469152274484337e1dead1702c327d
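The ConvLab card defers usage to ConvLab-3; purely to show the loading and generation mechanics (the actual dialog-NLU input serialization is defined by ConvLab-3, and the utterance below is a placeholder), a plain `transformers` sketch might be:

```python
# Sketch (not from the card): load the checkpoint with plain transformers.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ConvLab/t5-small-nlu-tm1_tm2_tm3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; the real NLU prompt format comes from ConvLab-3.
inputs = tokenizer("user: I need a table for two tonight at seven.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```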
Fireman4740/kurzgesagt-style-v2-768
Fireman4740
null
43
16
diffusers
5
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
0
0
0
['text-to-image']
false
true
true
566
false
### Kurzgesagt-style-v2-768 Dreambooth model trained on the v2-768 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: Kurzgesagt style (use that in your prompt) ![Kurzgesagt style 0](https://huggingface.co/Fireman4740/kurzgesagt-style-v2-768/resolve/main/xy_grid-0012-2599613694.png)
ce1f750a7542facf8b1077b0e9a862a8
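The card links a Colab notebook for inference; as a local sketch (not part of the card; resolution and dtype are assumptions), the DreamBooth checkpoint can typically be run through `diffusers` directly:

```python
# Sketch (not from the card): generate with the Kurzgesagt DreamBooth checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Fireman4740/kurzgesagt-style-v2-768",
    torch_dtype=torch.float16,
).to("cuda")

# The card asks to include the concept phrase in the prompt.
prompt = "an exploding planet, Kurzgesagt style"
image = pipe(prompt, height=768, width=768).images[0]
image.save("kurzgesagt_sample.png")
```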
Alireza1044/albert-base-v2-rte
Alireza1044
albert
16
2
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
false
true
true
992
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rte This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7994 - Accuracy: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
92b8fa9ae09f0585cbefe95de589a6aa
rudzinskimaciej/crystalpunk
rudzinskimaciej
null
16
0
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
428
false
### crystalpunk Dreambooth model trained by rudzinskimaciej with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
00bd8be4595cb44447753d33d29bafc7
bnriiitb/whisper-small-te
bnriiitb
whisper
17
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['te']
['Chai_Bisket_Stories_16-08-2021_14-17']
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,869
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Telugu - Naga Budigam This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Chai_Bisket_Stories_16-08-2021_14-17 dataset. It achieves the following results on the evaluation set: - Loss: 0.7063 - Wer: 77.4871 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2933 | 2.62 | 500 | 0.3849 | 86.6429 | | 0.0692 | 5.24 | 1000 | 0.3943 | 82.7190 | | 0.0251 | 7.85 | 1500 | 0.4720 | 82.4415 | | 0.0098 | 10.47 | 2000 | 0.5359 | 81.6092 | | 0.0061 | 13.09 | 2500 | 0.5868 | 75.9413 | | 0.0025 | 15.71 | 3000 | 0.6235 | 76.6944 | | 0.0009 | 18.32 | 3500 | 0.6634 | 78.3987 | | 0.0005 | 20.94 | 4000 | 0.6776 | 77.1700 | | 0.0002 | 23.56 | 4500 | 0.6995 | 78.2798 | | 0.0001 | 26.18 | 5000 | 0.7063 | 77.4871 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0 - Datasets 2.7.1 - Tokenizers 0.13.2
e6fbd57e298590cbbbfbf16373d2f8a1
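A hedged transcription sketch for the Whisper card above (not from the card; the file name is a placeholder and the chunking value is an assumption):

```python
# Sketch (not from the card): Telugu transcription with the fine-tuned Whisper model.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="bnriiitb/whisper-small-te",
    chunk_length_s=30,  # assumption: split long audio into 30 s windows
)

print(asr("telugu_sample.wav")["text"])
```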
moredeal/distilbert-base-uncased-finetuned-category-classification
moredeal
distilbert
16
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,665
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-category-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0377 - F1: 0.9943 - Roc Auc: 0.9943 - Accuracy: 0.9943 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:| | 0.0374 | 1.0 | 7612 | 0.0373 | 0.9916 | 0.9916 | 0.9915 | | 0.0255 | 2.0 | 15224 | 0.0409 | 0.9922 | 0.9922 | 0.9921 | | 0.0281 | 3.0 | 22836 | 0.0332 | 0.9934 | 0.9934 | 0.9934 | | 0.0189 | 4.0 | 30448 | 0.0359 | 0.9941 | 0.9941 | 0.9940 | | 0.005 | 5.0 | 38060 | 0.0377 | 0.9943 | 0.9943 | 0.9943 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
e111b97db53868381bc898cc58194dff
Helsinki-NLP/opus-mt-tl-de
Helsinki-NLP
marian
11
7
transformers
0
translation
true
true
false
apache-2.0
['tl', 'de']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,006
false
### tgl-deu * source group: Tagalog * target group: German * OPUS readme: [tgl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md) * model: transformer-align * source language(s): tgl_Latn * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tgl.deu | 22.7 | 0.473 | ### System Info: - hf_name: tgl-deu - source_languages: tgl - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tl', 'de'] - src_constituents: {'tgl_Latn'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt - src_alpha3: tgl - tgt_alpha3: deu - short_pair: tl-de - chrF2_score: 0.473 - bleu: 22.7 - brevity_penalty: 0.9690000000000001 - ref_len: 2453.0 - src_name: Tagalog - tgt_name: German - train_date: 2020-06-17 - src_alpha2: tl - tgt_alpha2: de - prefer_old: False - long_pair: tgl-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
b0f65e9a5108972b5e2b9426b4a5793c
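The Marian card above documents benchmarks but no usage snippet; a minimal sketch (editor's illustration, not from the card) for Tagalog-to-German translation:

```python
# Sketch (not from the card): Tagalog -> German translation with MarianMT.
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-tl-de"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["Magandang umaga."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```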
alexlopitz/ner_kaggle_class_prediction_model
alexlopitz
bert
16
3
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,530
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ner_kaggle_class_prediction_model This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0191 - Precision: 0.9850 - Recall: 0.9830 - F1: 0.9840 - Accuracy: 0.9950 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1304 | 1.0 | 806 | 0.0202 | 0.9823 | 0.9794 | 0.9808 | 0.9940 | | 0.0142 | 2.0 | 1612 | 0.0178 | 0.9819 | 0.9826 | 0.9823 | 0.9945 | | 0.0081 | 3.0 | 2418 | 0.0191 | 0.9850 | 0.9830 | 0.9840 | 0.9950 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
cffdbea42a3bf7e73a4076b3efeeaa4e
DenilsenAxel/nlp-text-classification
DenilsenAxel
bert
6
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['amazon_us_reviews']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,331
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the amazon_us_reviews dataset. It achieves the following results on the evaluation set: - Loss: 0.9348 - Accuracy: 0.7441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6471 | 1.0 | 7500 | 0.6596 | 0.7376 | | 0.5235 | 2.0 | 15000 | 0.6997 | 0.7423 | | 0.3955 | 3.0 | 22500 | 0.9348 | 0.7441 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
01a91f44b412b75fad17732e296afbf3
Voyager1/asr-wav2vec2-commonvoice-es-finetuned-rtve
Voyager1
wav2vec2
9
10
speechbrain
0
automatic-speech-recognition
true
false
false
afl-3.0
['es']
['commonvoice']
null
0
0
0
0
0
0
0
['CTC', 'pytorch', 'speechbrain', 'Transformer', 'hf-asr-leaderboard']
true
true
true
3,791
false
# wav2vec 2.0 with CTC trained on data aligned from RTVE databases (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on CommonVoice (Spanish Language) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | RTVE 2022 Test WER | GPUs | |:-------------:|:--------------:| :--------:| | 16-01-23 | 23.45 | 3xRTX2080Ti 12GB | ## Pipeline description This ASR system is composed of 2 different but linked blocks: - Tokenizer (char) that transforms words into chars and is trained with the train transcriptions (train.tsv) of CommonVoice (ES). - Acoustic model (wav2vec2.0 + CTC). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53-spanish](https://huggingface.co/facebook/wav2vec2-large-xlsr-53-spanish)) is combined with two DNN layers and finetuned on CommonVoice ES. The obtained final acoustic representation is given to the CTC decoder. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. ## Install SpeechBrain First of all, please install transformers and SpeechBrain with the following command: ``` pip install speechbrain transformers ``` Please note that we encourage you to read the tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in Spanish) ```python from speechbrain.pretrained import EncoderASR asr_model = EncoderASR.from_hparams(source="Voyager1/asr-wav2vec2-commonvoice-es", savedir="pretrained_models/asr-wav2vec2-commonvoice-es") asr_model.transcribe_file("Voyager1/asr-wav2vec2-commonvoice-es/example-es.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Limitations We do not provide any warranty on the performance achieved by this model when used on other datasets. # **Citations** ```bibtex @article{lopez2022tid, title={TID Spanish ASR system for the Albayzin 2022 Speech-to-Text Transcription Challenge}, author={L{\'o}pez, Fernando and Luque, Jordi}, journal={Proc.
IberSPEECH 2022}, pages={271--275}, year={2022} } @misc{https://doi.org/10.48550/arxiv.2210.15226, doi = {10.48550/ARXIV.2210.15226}, url = {https://arxiv.org/abs/2210.15226}, author = {López, Fernando and Luque, Jordi}, title = {Iterative pseudo-forced alignment by acoustic CTC loss for self-supervised ASR domain adaptation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } @misc{lleidartve, title={Rtve 2018, 2020 and 2022 database description}, author={Lleida, E and Ortega, A and Miguel, A and Baz{\'a}n, V and P{\'e}rez, C and G{\'o}mez, M and de Prada, A} } @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
80d267b59d9d1de23eb2fc6e29fff910
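The SpeechBrain card above mentions passing `run_opts={"device":"cuda"}` for GPU inference but does not spell it out; a short sketch reusing the source/savedir values from the card's own CPU example:

```python
# Sketch expanding the card's GPU note: place the pretrained ASR model on CUDA.
from speechbrain.pretrained import EncoderASR

asr_model = EncoderASR.from_hparams(
    source="Voyager1/asr-wav2vec2-commonvoice-es",            # as in the card's example
    savedir="pretrained_models/asr-wav2vec2-commonvoice-es",
    run_opts={"device": "cuda"},                              # the GPU switch described above
)
print(asr_model.transcribe_file("Voyager1/asr-wav2vec2-commonvoice-es/example-es.wav"))
```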
riffusion/riffusion-model-v1
riffusion
null
61
11,493
diffusers
355
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
4
4
0
0
14
4
10
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'text-to-audio']
false
true
true
3,865
false
# Riffusion Riffusion is an app for real-time music generation with stable diffusion. Read about it at https://www.riffusion.com/about and try it at https://www.riffusion.com/. * Code: https://github.com/riffusion/riffusion * Web app: https://github.com/hmartiro/riffusion-app * Model checkpoint: https://huggingface.co/riffusion/riffusion-model-v1 * Discord: https://discord.gg/yu6SRwvX4v This repository contains the model files, including: * a diffusers formated library * a compiled checkpoint file * a traced unet for improved inference speed * a seed image library for use with riffusion-app ## Riffusion v1 Model Riffusion is a latent text-to-image diffusion model capable of generating spectrogram images given any text input. These spectrograms can be converted into audio clips. The model was created by [Seth Forsgren](https://sethforsgren.com/) and [Hayk Martiros](https://haykmartiros.com/) as a hobby project. You can use the Riffusion model directly, or try the [Riffusion web app](https://www.riffusion.com/). The Riffusion model was created by fine-tuning the **Stable-Diffusion-v1-5** checkpoint. Read about Stable Diffusion here [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion). ### Model Details - **Developed by:** Seth Forsgren, Hayk Martiros - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). ### Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Generation of artworks, audio, and use in creative processes. - Applications in educational or creative tools. - Research on generative models. ### Datasets The original Stable Diffusion v1.5 was trained on the [LAION-5B](https://arxiv.org/abs/2210.08402) dataset using the [CLIP text encoder](https://openai.com/blog/clip/), which provided an amazing starting point with an in-depth understanding of language, including musical concepts. The team at LAION also compiled a fantastic audio dataset from many general, speech, and music sources that we recommend at [LAION-AI/audio-dataset](https://github.com/LAION-AI/audio-dataset/blob/main/data_collection/README.md). ### Fine Tuning Check out the [diffusers training examples](https://huggingface.co/docs/diffusers/training/overview) from Hugging Face. Fine tuning requires a dataset of spectrogram images of short audio clips, with associated text describing them. Note that the CLIP encoder is able to understand and connect many words even if they never appear in the dataset. It is also possible to use a [dreambooth](https://huggingface.co/blog/dreambooth) method to get custom styles. 
## Citation If you build on this work, please cite it as follows: ``` @article{Forsgren_Martiros_2022, author = {Forsgren, Seth* and Martiros, Hayk*}, title = {{Riffusion - Stable diffusion for real-time music generation}}, url = {https://riffusion.com/about}, year = {2022} } ```
a7f2eb893601ac4af077d518bc7aed16
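The Riffusion card explains that the model produces spectrogram images from text; a hedged sketch (not from the card) of generating one spectrogram with `diffusers` follows, noting that turning it into audio requires the separate riffusion code base linked above:

```python
# Sketch (not from the card): generate a spectrogram image with diffusers.
# Converting the spectrogram to audio needs the riffusion code base linked in the card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",
    torch_dtype=torch.float16,
).to("cuda")

spectrogram = pipe("funky jazz with a saxophone solo").images[0]
spectrogram.save("spectrogram.png")
```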
migueladarlo/distilbert-depression-base
migueladarlo
distilbert
5
2
transformers
2
text-classification
true
false
false
mit
['en']
['CLPsych 2015']
null
0
0
0
0
0
0
0
['text', 'Twitter']
true
true
true
2,507
false
# distilbert-depression-base This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) trained on CLPsych 2015 and evaluated on a dataset scraped from Twitter to detect Twitter users showing potential signs of depression. It achieves the following results on the evaluation set: - Evaluation Loss: 0.64 - Accuracy: 0.65 - F1: 0.70 - Precision: 0.61 - Recall: 0.83 - AUC: 0.65 ## Intended uses & limitations Feed a corpus of tweets to the model to generate a label indicating whether the input is indicative of a depressed user or not. Label 1 is depressed, Label 0 is not depressed. Limitation: All token sequences longer than 512 are automatically truncated. Also, training and test data may be contaminated with mislabeled users. ### How to use You can use this model directly with a pipeline for sentiment analysis: ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased') >>> from transformers import DistilBertForSequenceClassification >>> model = DistilBertForSequenceClassification.from_pretrained(r"distilbert-depression-base") >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512} >>> result=classifier('pain peko',**tokenizer_kwargs) # For truncation to apply in the pipeline. >>> # Note that the string passed as the input can be a corpus of tweets concatenated together into one document. [{'label': 'LABEL_1', 'score': 0.5048992037773132}] ``` Otherwise, download the files and specify within the pipeline the path to the folder that contains config.json, pytorch_model.bin, and training_args.bin. ## Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.39e-05 - train_batch_size: 16 - eval_batch_size: 16 - weight_decay: 0.13 - num_epochs: 3.0 ## Training results | Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | AUC | |:-----:|:-------------:|:---------------:|:--------:|:--------:|:---------:|:--------:|:--------:| | 1.0 | 0.68 | 0.66 | 0.59 | 0.63 | 0.56 | 0.73 | 0.59 | | 2.0 | 0.60 | 0.68 | 0.63 | 0.69 | 0.59 | 0.83 | 0.63 | | 3.0 | 0.52 | 0.67 | 0.64 | 0.66 | 0.62 | 0.72 | 0.65 |
de4c96b9851f95b0b53f51f970abe283
StonyBrookNLP/t5-3b-tatqa
StonyBrookNLP
t5
10
3
transformers
0
text2text-generation
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['question-answering, multi-step-reasoning, multi-hop-reasoning']
false
true
true
2,615
false
# What's this? This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496). This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details. We release the following models: - **A:** Base models finetuned on target datasets: `{base_model}-{target_dataset}` - **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}` - **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}` The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`. The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**. # How to use it? Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac model_name = "StonyBrookNLP/t5-3b-tatqa" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization model = AutoModelForSeq2SeqLM.from_pretrained(model_name) enable_digit_tokenization(tokenizer) input_texts = [ "answer_me: Who scored the first touchdown of the game?" + "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..." # Note: some models have slightly different qn/ctxt format. See the github repo. ] input_ids = tokenizer( input_texts, return_tensors="pt", truncation=True, max_length=800, add_special_tokens=True, padding=True, )["input_ids"] generated_ids = model.generate(input_ids, min_length=1, max_length=50) generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False) generated_predictions = [ tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions ] # => ["Chaz Schilens"] ```
66e7370f41728e2c13a8747e71ba5b74
sd-concepts-library/alicebeta
sd-concepts-library
null
10
0
null
2
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,136
false
### AliceBeta on Stable Diffusion This is the `<Alice-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Alice-style> 0](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/0.jpeg) ![<Alice-style> 1](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/1.jpeg) ![<Alice-style> 2](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/2.jpeg) ![<Alice-style> 3](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/3.jpeg) ![<Alice-style> 4](https://huggingface.co/sd-concepts-library/alicebeta/resolve/main/concept_images/4.jpeg)
fea9fc0b516ad49be0e08a3c1b7a86fb
ericntay/clinical_bio_bert_ft
ericntay
bert
14
25
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,754
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clinical_bio_bert_ft This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2570 - F1: 0.8160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6327 | 1.0 | 95 | 0.2442 | 0.7096 | | 0.1692 | 2.0 | 190 | 0.2050 | 0.7701 | | 0.0878 | 3.0 | 285 | 0.1923 | 0.8002 | | 0.0493 | 4.0 | 380 | 0.2234 | 0.8079 | | 0.0302 | 5.0 | 475 | 0.2250 | 0.8090 | | 0.0191 | 6.0 | 570 | 0.2363 | 0.8145 | | 0.0132 | 7.0 | 665 | 0.2489 | 0.8178 | | 0.0102 | 8.0 | 760 | 0.2494 | 0.8152 | | 0.008 | 9.0 | 855 | 0.2542 | 0.8191 | | 0.0068 | 10.0 | 950 | 0.2570 | 0.8160 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
1894388ed8018904901c8074b88d6a9d
devtanumisra/finetuning-sentiment-model-deberta-smote
devtanumisra
deberta-v2
14
8
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,116
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-deberta-smote This model is a fine-tuned version of [yangheng/deberta-v3-base-absa-v1.1](https://huggingface.co/yangheng/deberta-v3-base-absa-v1.1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4852 - Accuracy: 0.7215 - F1: 0.7215 - Precision: 0.7215 - Recall: 0.7215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
a4b682feaa80b9829b4bb1e195c54833
egumasa/bert-base-uncased-finetuned-academic
egumasa
bert
15
4
transformers
0
fill-mask
true
false
false
apache-2.0
null
['elsevier-oa-cc-by']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,779
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-academic This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the elsevier-oa-cc-by dataset. It achieves the following results on the evaluation set: - Loss: 2.5893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 40 - eval_batch_size: 40 - seed: 42 - optimizer: Adam with betas=(0.9,0.97) and epsilon=0.0001 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.9591 | 0.25 | 820 | 2.6567 | | 2.7993 | 0.5 | 1640 | 2.6006 | | 2.7519 | 0.75 | 2460 | 2.5707 | | 2.7319 | 1.0 | 3280 | 2.5763 | | 2.7359 | 1.25 | 4100 | 2.5866 | | 2.7451 | 1.5 | 4920 | 2.5855 | | 2.7421 | 1.75 | 5740 | 2.5770 | | 2.7319 | 2.0 | 6560 | 2.5762 | | 2.7356 | 2.25 | 7380 | 2.5807 | | 2.7376 | 2.5 | 8200 | 2.5813 | | 2.7386 | 2.75 | 9020 | 2.5841 | | 2.7378 | 3.0 | 9840 | 2.5737 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
bf79477a68e2e877f0a887d499f698d2
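For the academic-domain MLM above, a minimal fill-mask sketch (editor's illustration, not part of the card):

```python
# Sketch (not from the card): query the academic-domain masked language model.
from transformers import pipeline

fill = pipeline("fill-mask", model="egumasa/bert-base-uncased-finetuned-academic")

for candidate in fill("The results [MASK] that the proposed method outperforms the baseline."):
    print(candidate["token_str"], round(candidate["score"], 3))
```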
ehcalabres/distilgpt2-abc-irish-music-generation
ehcalabres
gpt2
8
2
transformers
0
text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
957
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-abc-irish-music-generation This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
3d87e56014cc45606d4ec0a578849fab
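The card above gives no usage example; a hedged sketch for sampling from the fine-tuned model (the ABC-notation prompt below is an assumption, since the card does not document a prompt format):

```python
# Sketch (not from the card): sample a continuation from the distilgpt2 fine-tune.
from transformers import pipeline

generator = pipeline("text-generation", model="ehcalabres/distilgpt2-abc-irish-music-generation")

# Assumed prompt: the start of an ABC-notation tune header.
sample = generator("X:1\nT:", max_new_tokens=128, do_sample=True, temperature=0.9)
print(sample[0]["generated_text"])
```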
patrickvonplaten/wav2vec2-xls-r-100m-common_voice-tr-ft
patrickvonplaten
wav2vec2
21
15
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['tr']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'xls_r_repro_common_voice_tr']
true
true
true
1,696
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-100m-common_voice-tr-ft This model is a fine-tuned version of [facebook/wav2vec2-xls-r-100m](https://huggingface.co/facebook/wav2vec2-xls-r-100m) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 3.4113 - Wer: 1.0 - Cer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:---:|:---:| | 3.1315 | 9.09 | 500 | 3.3832 | 1.0 | 1.0 | | 3.1163 | 18.18 | 1000 | 3.4252 | 1.0 | 1.0 | | 3.121 | 27.27 | 1500 | 3.4051 | 1.0 | 1.0 | | 3.1273 | 36.36 | 2000 | 3.4345 | 1.0 | 1.0 | | 3.2257 | 45.45 | 2500 | 3.4097 | 1.0 | 1.0 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.15.2.dev0 - Tokenizers 0.10.3
900bfb950215c058631f2889371496ac
m3hrdadfi/albert-fa-base-v2-ner-peyma
m3hrdadfi
albert
13
13
transformers
1
token-classification
true
true
false
apache-2.0
['fa']
null
null
0
0
0
0
0
0
0
[]
false
true
true
3,272
false
# ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > You can call it برت_کوچولو ("little BERT") [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt at ALBERT for the Persian language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, much like we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian NER [ARMAN, PEYMA] This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with the `IOB` format. In this format, tokens that are not part of an entity are tagged as `"O"`, the `"B"` tag corresponds to the first word of an entity, and the `"I"` tag corresponds to the rest of the terms of the same entity. Both `"B"` and `"I"` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. ### PEYMA The PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens, of which 41,148 tokens are tagged with seven different classes. 1. Organization 2. Money 3. Location 4. Date 5. Time 6. Person 7. Percent | Label | # | |:------------:|:-----:| | Organization | 16964 | | Money | 2037 | | Location | 8782 | | Date | 4259 | | Time | 732 | | Person | 7675 | | Percent | 699 | **Download** You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/). ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | PEYMA | 88.99 | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - | ### BibTeX entry and citation info Please cite in publications as follows: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a GitHub issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
b30b2a19aa14486da9b3835c5782ec2f
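The ALBERT-Persian card above describes the PEYMA NER task but does not include an inference snippet; a minimal sketch (not from the card; the Persian sentence is a placeholder meaning "Tehran is the capital of Iran"):

```python
# Sketch (not from the card): PEYMA-style NER tagging with the ALBERT-fa checkpoint.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "m3hrdadfi/albert-fa-base-v2-ner-peyma"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # group word pieces into labeled spans
)
print(ner("تهران پایتخت ایران است."))
```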
yhavinga/t5-v1_1-base-dutch-english-cased
yhavinga
t5
13
10
transformers
0
text2text-generation
false
false
true
apache-2.0
['nl', 'en']
['yhavinga/mc4_nl_cleaned']
null
0
0
0
0
0
0
0
['t5', 'seq2seq']
false
true
true
26,879
false
# t5-v1_1-base-dutch-english-cased A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence to sequence model pre-trained from scratch on [cleaned Dutch 🇳🇱🇧🇪 mC4 and cleaned English 🇬🇧 C4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned). This **t5-v1.1** model has **247M** parameters. It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset `mc4_nl_cleaned` config `small_en_nl` for **10** epoch(s) and a duration of **11d18h**, with a sequence length of **512**, batch size **128** and **2839630** total steps (**186B** tokens). Pre-training evaluation loss and accuracy are **1,11** and **0,75**. Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation. * Pre-trained T5 models need to be finetuned before they can be used for downstream tasks, therefore the inference widget on the right has been turned off. * For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application! Please refer to the original T5 papers and Scale Efficiently papers for more information about the T5 architecture and configs, though it must be noted that this model (t5-v1_1-base-dutch-english-cased) is unrelated to these projects and not an 'official' checkpoint. * **[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)** by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. * **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. ## Tokenizer The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers and has 32003 tokens. It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). See [./raw/main/tokenizer.json](tokenizer.json) for details. ## Dataset(s) All models listed below are pre-trained on [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naught Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with less than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with less than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4. The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix). ## Dutch T5 Models Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models). `t5-base-dutch` is the only model with an original T5 config. 
The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function, and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`). The T5-eff models are models that differ in their number of layers. The table will list the several dimensions of these models. Not all t5-eff models are efficient, the best example being the inefficient `t5-xl-4L-dutch-english-cased`. | | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | |:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------| | *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff | | *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 | | *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 | | *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 | | *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 | | *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 | | *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M | | *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | | *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | | *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | | *tr. 
seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 | | *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 | | *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 | | *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 | | *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h | | *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | | *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 | | *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 | | *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 | | *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 | ## Evaluation Most models from the list above have been fine-tuned for summarization and translation. The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better) and y-axis the summarization Rouge1 translation score (higher is better). Point size is proportional to the model size. Models with faster inference speed are green, slower inference speed is plotted as bleu. ![Evaluation T5 Dutch English](evaluation_t5_dutch_english.png) Evaluation was run on fine-tuned models trained with the following settings: | | Summarization | Translation | |---------------:|------------------|-------------------| | Dataset | CNN Dailymail NL | CCMatrix en -> nl | | #train samples | 50K | 50K | | Optimizer | Adam | Adam | | learning rate | 0.001 | 0.0005 | | source length | 1024 | 128 | | target length | 142 | 128 | |label smoothing | 0.05 | 0.1 | | #eval samples | 1000 | 1000 | Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation are not saved, since they were trained for comparison of pre-trained models only. The numbers for summarization are the Rouge scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 | | *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 | | *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 | | *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 | | *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 | The models below have been evaluated for English to Dutch translation. Note that the first four models are pre-trained on Dutch only. That they still perform adequate is probably because the translation direction is English to Dutch. The numbers reported are the Bleu scores on 1000 documents from the test split. 
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base | |:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:| | *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 | | *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 | | *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 | | *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 | | *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | | *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 | | *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 | ## Translation models The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language directions on the first 25M samples from CCMatrix, giving a total of 50M training samples. Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books. The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the Bleu score averaged over all three evaluation datasets. The best scores are displayed in bold for both translation directions.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | |:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------| | *source_lang* | en | nl | en | nl | | *target_lang* | nl | en | nl | en | | *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: | | *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** | | *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 | | *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 | | *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 | | *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 | | *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 | | *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 | | *max_source_length* | 128 | 128 | 128 | 128 | | *max_target_length* | 128 | 128 | 128 | 128 | | *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 | | *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 | | *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 | | *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 | | *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 | | *train_batch_size* | 128 | 128 | 128 | 128 | | *warmup_steps* | 2000 | 2000 | 2000 | 2000 | | *total steps* | 390625 | 390625 | 390625 | 390625 | | *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h | | *num parameters* | 729M | 729M | 250M | 250M | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts of the training. Weights & Biases made it possible to keep track of many training sessions and orchestrate hyper-parameter sweeps with insightful visualizations. The following repositories were helpful in setting up the TPU-VM, and getting an idea of what sensible hyper-parameters are for training gpt2 from scratch: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
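As a quick way to try the fine-tuned `ccmatrix-multi` translation checkpoints listed above, the following is a minimal usage sketch (not part of the original training or evaluation scripts); the source prefix and the max length of 128 come from the table above, and the example sentence is made up.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Small multi-direction translation checkpoint from the table above.
model_name = "yhavinga/t5-small-24L-ccmatrix-multi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The translation direction is selected with the source prefix listed in the table.
text = "translate English to Dutch: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt", max_length=128, truncation=True)
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping the prefix to `translate Dutch to English:` selects the opposite direction, since both directions were trained into the same checkpoint.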
ede0870192964f17c120c8a2e7b5c615
Sercan/wav2vec2-large-xls-r-300m-tr
Sercan
wav2vec2
13
10
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,826
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-tr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2891 - Wer: 0.4741 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.4933 | 0.39 | 400 | 1.0543 | 0.9316 | | 0.7039 | 0.78 | 800 | 0.6927 | 0.7702 | | 0.4768 | 1.17 | 1200 | 0.4779 | 0.6774 | | 0.4004 | 1.57 | 1600 | 0.4462 | 0.6450 | | 0.3739 | 1.96 | 2000 | 0.4287 | 0.6296 | | 0.317 | 2.35 | 2400 | 0.4395 | 0.6248 | | 0.3027 | 2.74 | 2800 | 0.4052 | 0.6027 | | 0.2633 | 3.13 | 3200 | 0.4026 | 0.5938 | | 0.245 | 3.52 | 3600 | 0.3814 | 0.5902 | | 0.2415 | 3.91 | 4000 | 0.3691 | 0.5708 | | 0.2193 | 4.31 | 4400 | 0.3626 | 0.5623 | | 0.2057 | 4.7 | 4800 | 0.3591 | 0.5551 | | 0.1874 | 5.09 | 5200 | 0.3670 | 0.5512 | | 0.1782 | 5.48 | 5600 | 0.3483 | 0.5406 | | 0.1706 | 5.87 | 6000 | 0.3392 | 0.5338 | | 0.153 | 6.26 | 6400 | 0.3189 | 0.5207 | | 0.1493 | 6.65 | 6800 | 0.3185 | 0.5164 | | 0.1381 | 7.05 | 7200 | 0.3199 | 0.5185 | | 0.1244 | 7.44 | 7600 | 0.3082 | 0.4993 | | 0.1182 | 7.83 | 8000 | 0.3122 | 0.4998 | | 0.1136 | 8.22 | 8400 | 0.3003 | 0.4936 | | 0.1047 | 8.61 | 8800 | 0.2945 | 0.4858 | | 0.0986 | 9.0 | 9200 | 0.2827 | 0.4809 | | 0.0925 | 9.39 | 9600 | 0.2894 | 0.4786 | | 0.0885 | 9.78 | 10000 | 0.2891 | 0.4741 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.1+cu116 - Datasets 2.1.0 - Tokenizers 0.12.1
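The card itself contains no usage snippet; below is a minimal inference sketch, assuming the checkpoint loads through the standard 🤗 Transformers ASR pipeline. The audio path is a placeholder and should point to a mono 16 kHz recording in the target language (presumably Turkish, given the `-tr` suffix and the Common Voice training data).

```python
from transformers import pipeline

# Load the fine-tuned XLS-R checkpoint for speech recognition.
asr = pipeline("automatic-speech-recognition", model="Sercan/wav2vec2-large-xls-r-300m-tr")

# Placeholder path: replace with a real mono 16 kHz WAV file.
result = asr("sample_audio.wav")
print(result["text"])
```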
5332205fdd4d619957b1fcaa85769258
nepalprabin/xlm-roberta-base-finetuned-marc-en
nepalprabin
xlm-roberta
12
3
transformers
0
text-classification
true
false
false
mit
null
['amazon_reviews_multi']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,274
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.0442 - Mae: 0.5385 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0371 | 1.0 | 1105 | 1.0522 | 0.5256 | | 0.8925 | 2.0 | 2210 | 1.0442 | 0.5385 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
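As a usage hint (not part of the original card), the checkpoint should work with the text-classification pipeline; the review text below is made up, and the meaning of the predicted labels depends on the fine-tuning setup (the MAE metric above suggests star-rating prediction).

```python
from transformers import pipeline

# Load the fine-tuned multilingual review classifier.
classifier = pipeline("text-classification", model="nepalprabin/xlm-roberta-base-finetuned-marc-en")
print(classifier("This phone case broke after two days. Very disappointed."))
```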
7b443cd392008f7884c434d87eef8335
nvidia/stt_rw_conformer_transducer_large
nvidia
null
3
1
nemo
0
automatic-speech-recognition
true
false
false
cc-by-4.0
['rw']
['mozilla-foundation/common_voice_9_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'speech', 'audio', 'Transducer', 'Conformer', 'Transformer', 'pytorch', 'NeMo', 'hf-asr-leaderboard']
true
true
true
5,579
false
# NVIDIA Conformer-Transducer Large (Kinyarwanda) <style> img { display: inline; } </style> | [![Model architecture](https://img.shields.io/badge/Model_Arch-Conformer--Transducer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-120M-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-rw-lightgrey#model-badge)](#datasets) This model transcribes speech into lowercase Latin alphabet including space and apostrophe, and is trained on around 2000 hours of Kinyarwanda speech data. It is a non-autoregressive "large" variant of Conformer, with around 120 million parameters. See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details. ## Usage The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset. To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed latest PyTorch version. ``` pip install nemo_toolkit['all'] ``` ### Automatically instantiate the model ```python import nemo.collections.asr as nemo_asr asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_rw_conformer_transducer_large") ``` ### Transcribing using Python Simply do: ``` asr_model.transcribe(['<your_audio>.wav']) ``` ### Transcribing many audio files ```shell python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="nvidia/stt_rw_conformer_transducer_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" ``` ### Input This model accepts 16 kHz mono-channel Audio (wav files) as input. ### Output This model provides transcribed speech as a string for a given audio sample. ## Model Architecture Conformer-Transducer model is an autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses Transducer loss/decoding. You may find more info on the detail of this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html). ## Training The NeMo toolkit [3] was used for training the models for over several hundred epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml). The vocabulary we use contains 28 characters: ```python [' ', "'", 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z'] ``` Rare symbols with diacritics were replaced during preprocessing. The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). For vocabulary of size 1024 we restrict maximum subtoken length to 4 symbols to avoid populating vocabulary with specific frequent words from the dataset. This does not affect the model performance and potentially helps to adapt to other domain without retraining tokenizer. Full config can be found inside the .nemo files. 
### Datasets All the models in this collection are trained on MCV-9.0 Kinyarwanda dataset, which contains around 2000 hours training, 32 hours of development and 32 hours of testing speech audios. ## Performance The list of the available models in this collection is shown in the following table. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding. | Version | Tokenizer | Vocabulary Size | Dev WER| Test WER| Train Dataset | |---------|-----------------------|-----------------|--------|---------|-----------------| | 1.11.0 | SentencePiece BPE, maxlen=4 | 1024 |13.82 | 16.19 | MCV-9.0 Train set| ## Limitations Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech. ## Deployment with NVIDIA Riva [NVIDIA Riva](https://developer.nvidia.com/riva), is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded. Additionally, Riva provides: * World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours * Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization * Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva). Check out [Riva live demo](https://developer.nvidia.com/riva#demos). ## References - [1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100) - [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) - [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
af75adae6ff1c4355dcd1fbb8405c707
ku-nlp/deberta-v2-tiny-japanese
ku-nlp
deberta-v2
8
5,091
transformers
0
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
['wikipedia', 'cc100', 'oscar']
null
0
0
0
0
0
0
0
['deberta', 'deberta-v2', 'fill-mask']
false
true
true
3,216
false
# Model Card for Japanese DeBERTa V2 tiny ## Model description This is a Japanese DeBERTa V2 tiny model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR. ## How to use You can use this model for masked language modeling as follows: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-tiny-japanese') model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-tiny-japanese') sentence = '京都 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance encoding = tokenizer(sentence, return_tensors='pt') ... ``` You can also fine-tune this model on downstream tasks. ## Tokenization The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) was used for pre-training. Each word is tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece). ## Training data We used the following corpora for pre-training: - Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents) - Japanese portion of CC-100 (85GB, 619M sentences, 66M documents) - Japanese portion of OSCAR (54GB, 326M sentences, 25M documents) Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR. Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB. ## Training procedure We first segmented texts in the corpora into words using [Juman++](https://github.com/ku-nlp/jumanpp). Then, we built a sentencepiece model with 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece). We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese DeBERTa model using [transformers](https://github.com/huggingface/transformers) library. The training took 33 hours using 8 NVIDIA A100-SXM4-40GB GPUs. The following hyperparameters were used during pre-training: - learning_rate: 1e-3 - per_device_train_batch_size: 128 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 6 - total_train_batch_size: 6,144 - max_seq_length: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear schedule with warmup - training_steps: 100,000 - warmup_steps: 10,000 The accuracy of the trained model on the masked language modeling task was 0.593. The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora. ## Acknowledgments This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models". For training models, we used the mdx: a platform for the data-driven future.
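The "How to use" snippet above stops after building `encoding`; a self-contained sketch of one way to finish it (not part of the original card) is shown below, predicting the most likely token for the `[MASK]` position.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-tiny-japanese')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-tiny-japanese')

# Input must already be segmented into words by Juman++, as described above.
sentence = '京都 大学 で 自然 言語 処理 を [MASK] する 。'
encoding = tokenizer(sentence, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits

# Locate the [MASK] position and decode the highest-scoring token.
mask_index = (encoding['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_index].argmax(dim=-1)))
```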
ae2322a490bcabe100abe5836ee6cdfb
muhtasham/tiny-mlm-glue-sst2
muhtasham
bert
12
2
transformers
1
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,597
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-sst2 This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.2692 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.0578 | 0.4 | 500 | 4.3208 | | 4.9384 | 0.8 | 1000 | 4.2217 | | 4.723 | 1.2 | 1500 | 4.2379 | | 4.7743 | 1.6 | 2000 | 4.1685 | | 4.7412 | 2.0 | 2500 | 4.2323 | | 4.6544 | 2.4 | 3000 | 4.1379 | | 4.5779 | 2.8 | 3500 | 4.2603 | | 4.5658 | 3.2 | 4000 | 4.2627 | | 4.5364 | 3.6 | 4500 | 4.2692 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
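As a usage hint (not part of the original card), the checkpoint can be queried through the fill-mask pipeline; the base model is an uncased BERT, so the mask token is `[MASK]`, and the example sentence is made up.

```python
from transformers import pipeline

# Load the further pre-trained tiny BERT masked language model.
fill = pipeline("fill-mask", model="muhtasham/tiny-mlm-glue-sst2")

for prediction in fill("The movie was really [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```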
0ec34466f62e7f43ab19c916a1c55d3a
coppercitylabs/uzbek-news-category-classifier
coppercitylabs
bert
9
25
transformers
1
text-classification
true
false
false
mit
['uz']
['webcrawl']
null
0
0
0
0
0
0
0
['uzbek', 'cyrillic', 'news category classifier']
false
true
true
1,104
false
# Uzbek news category classifier (based on UzBERT) UzBERT fine-tuned to classify news articles into one of the following categories: - дунё - жамият - жиноят - иқтисодиёт - маданият - реклама - саломатлик - сиёсат - спорт - фан ва техника - шоу-бизнес ## How to use ```python >>> from transformers import pipeline >>> classifier = pipeline('text-classification', model='coppercitylabs/uzbek-news-category-classifier') >>> text = """Маҳоратли пара-енгил атлетикачимиз Ҳусниддин Норбеков Токио-2020 Паралимпия ўйинларида ғалаба қозониб, делегациямиз ҳисобига навбатдаги олтин медални келтирди. Бу ҳақда МОҚ хабар берди. Норбеков ҳозиргина ядро улоқтириш дастурида ўз ғалабасини тантана қилди. Ушбу машқда вакилимиз 16:13 метр натижа билан энг яхши кўрсаткични қайд этди. Шу тариқа, делегациямиз ҳисобидаги медаллар сони 16 (6 та олтин, 4 та кумуш ва 6 та бронза) тага етди. Кейинги кун дастурларида иштирок этадиган ҳамюртларимизга омад тилаб қоламиз!""" >>> classifier(text) [{'label': 'спорт', 'score': 0.9865401983261108}] ``` ## Fine-tuning data Fine-tuned on ~60K news articles for 3 epochs.
35bf30e1a5f2f49ef762de0064c2a4db
jcplus/stable-diffusion-v1-5
jcplus
null
18
9
diffusers
3
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
3
3
0
0
0
0
0
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
false
true
true
13,379
false
# Stable Diffusion v1-5 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion). The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion). ### Diffusers ```py from diffusers import StableDiffusionPipeline import torch model_id = "runwayml/stable-diffusion-v1-5" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16") pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion) ### Original GitHub Repository 1. Download the weights - [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference - [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning 2. Follow instructions [here](https://github.com/runwayml/stable-diffusion). ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. 
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. Currently six Stable Diffusion checkpoints are provided, which were trained as follows. - [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). 
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything. - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-1-to-v1-5.png) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
e74b4eb60925da4e299ee872c5e85628
impawankr/distilbert-base-uncased-finetuned-imdb
impawankr
distilbert
9
2
transformers
0
fill-mask
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,318
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4897 | | 2.5756 | 2.0 | 314 | 2.4230 | | 2.5395 | 3.0 | 471 | 2.4358 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
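Since only the evaluation loss is reported, a common way to read it for a masked language model is as perplexity, exp(loss); a quick sketch, assuming the reported value is the mean cross-entropy over masked tokens:

```python
import math

eval_loss = 2.4725  # final evaluation loss reported above
print(f"Perplexity: {math.exp(eval_loss):.2f}")  # about 11.85
```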
2e860b083c64f08debfcbf3c22cc127a
hirosay/xlm-roberta-base-finetuned-panx-de
hirosay
xlm-roberta
12
1
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1335 - F1: 0.8652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2566 | 1.0 | 525 | 0.1632 | 0.8292 | | 0.1276 | 2.0 | 1050 | 0.1340 | 0.8475 | | 0.0816 | 3.0 | 1575 | 0.1335 | 0.8652 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1+cu116 - Datasets 2.8.0 - Tokenizers 0.12.1
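As a usage hint (not part of the original card), the checkpoint should work with the token-classification pipeline; the German sentence below is made up, and the entity label names (typically PER/ORG/LOC for PAN-X) depend on the fine-tuning configuration.

```python
from transformers import pipeline

# Load the fine-tuned German NER model and merge word-piece predictions into entities.
ner = pipeline(
    "token-classification",
    model="hirosay/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel wurde in Hamburg geboren."))
```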
aec9c9f8ccff11417ec53fb746850ddf
NYTK/text-generation-news-gpt2-small-hungarian
NYTK
gpt2
9
126
transformers
1
text-generation
true
false
false
mit
['hu']
null
null
0
0
0
0
0
0
0
['text-generation']
false
true
true
854
false
# Hungarian GPT-2 news generator For further models, scripts and details, see [our repository](https://github.com/nytud/neural-models) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Pretrained on Hungarian Wikipedia - Finetuned on hin corpus (hvg.hu, index.hu, nol.hu) ## Results | Model | Perplexity | | ------------- | ------------- | | GPT-2 poem | 47.46 | | **GPT-2 news** | **22.06** | ## Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-gpt2, title = {{"Az invazív medvék nem tolerálják a suzukis agressziót" - Magyar GPT-2 kísérleti modell}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {463--476} } ```
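A minimal generation sketch (not part of the original card), assuming the model loads through the standard text-generation pipeline; the Hungarian prompt and the sampling parameters are illustrative only.

```python
from transformers import pipeline

# Load the Hungarian news generator fine-tuned from GPT-2.
generator = pipeline("text-generation", model="NYTK/text-generation-news-gpt2-small-hungarian")

# "Szerdán reggel" means "On Wednesday morning"; sample a short news-style continuation.
output = generator("Szerdán reggel", max_length=60, do_sample=True, top_k=50)
print(output[0]["generated_text"])
```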
dacb59aa2eab6a1aa97694028d0ef59b
Helsinki-NLP/opus-mt-fr-lg
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-fr-lg * source languages: fr * target languages: lg * OPUS readme: [fr-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lg/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fr.lg | 21.7 | 0.454 |
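A minimal usage sketch (not part of the original OPUS-MT card), assuming the converted Marian checkpoint works with the standard translation pipeline; the French sentence is made up (`lg` is Ganda/Luganda).

```python
from transformers import pipeline

# French-to-Ganda translation with the converted Marian model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-lg")
print(translator("Bonjour, comment allez-vous ?")[0]["translation_text"])
```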
ed58cc2ed8b05f2c42c7aa093db1bea7
negfir/distilbert-base-uncased-finetuned-squad
negfir
bert
22
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,178
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2789 | 1.0 | 5533 | 1.2200 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
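As a usage hint (not part of the original card), the checkpoint should work with the question-answering pipeline; the question and context below are made up.

```python
from transformers import pipeline

# Extractive QA with the SQuAD fine-tuned checkpoint.
qa = pipeline("question-answering", model="negfir/distilbert-base-uncased-finetuned-squad")
answer = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(answer["answer"], answer["score"])
```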
075053d2586288d25aedf0cfc16b55e4
Payoto/roberta-base-finetuned-squad
Payoto
roberta
12
3
transformers
0
question-answering
true
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,103
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-squad This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.25 - num_epochs: 3 - training precision: Mixed Precision ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cpu - Datasets 2.7.1 - Tokenizers 0.12.1
08d9acfaa92ce17ca43d7303ee7e6ee7
yanaiela/roberta-base-epoch_16
yanaiela
roberta
9
2
transformers
0
fill-mask
true
false
false
mit
['en']
['wikipedia', 'bookcorpus']
null
0
0
0
0
0
0
0
['roberta-base', 'roberta-base-epoch_16']
false
true
true
2,102
false
# RoBERTa, Intermediate Checkpoint - Epoch 16 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained as part of a work that studies how simple statistics from data, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_16. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) objective. The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_16', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
ca94d4b2b8bc972e68f733bf07623ba4
rhakbari/distilbert-base-uncased-finetuned-squad
rhakbari
distilbert
14
3
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,284
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2194 | 1.0 | 5533 | 1.1700 | | 0.9533 | 2.0 | 11066 | 1.1341 | | 0.7452 | 3.0 | 16599 | 1.1725 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
9563ad0b8b194464d9bdd7bb16d7c7bd
Tomasgomezdelfresno/ttoottoogg
Tomasgomezdelfresno
null
16
17
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
431
false
### ttoottoogg Dreambooth model trained by Tomasgomezdelfresno with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
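For use outside the linked Colab notebooks, a minimal 🧨 Diffusers sketch is given below; the prompt token `ttoottoogg` is an assumption based on the model name, and the fp16/CUDA settings are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth fine-tuned checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "Tomasgomezdelfresno/ttoottoogg", torch_dtype=torch.float16
).to("cuda")

# Hypothetical prompt: the exact instance token may differ from the model name.
image = pipe("a photo of ttoottoogg").images[0]
image.save("ttoottoogg_sample.png")
```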
1bf712869bb7df225f422bb118fe0ee3
yokoe/distilbert-base-uncased-finetuned-clinc
yokoe
distilbert
24
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['clinc_oos']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,482
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7720 - Accuracy: 0.9184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2891 | 0.7429 | | 3.7868 | 2.0 | 636 | 1.8755 | 0.8374 | | 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 | | 1.6928 | 4.0 | 1272 | 0.8573 | 0.9132 | | 0.9056 | 5.0 | 1590 | 0.7720 | 0.9184 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
b69e6fb860eccfd23b84fbba8db75f2f
bvrau/covid-general-news-bert
bvrau
bert
13
5
transformers
0
text-classification
true
false
false
afl-3.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,125
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # covid-general-news-bert This model is a fine-tuned version of [bvrau/covid-twitter-bert-v2-struth](https://huggingface.co/bvrau/covid-twitter-bert-v2-struth) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0688 - Accuracy: 0.9774 - Precision: 0.9781 - Recall: 0.9738 - F1: 0.9760 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.2183 | 1.0 | 365 | 0.0688 | 0.9774 | 0.9781 | 0.9738 | 0.9760 | | 0.0783 | 2.0 | 730 | 0.0754 | 0.9842 | 0.9812 | 0.9855 | 0.9833 | | 0.0354 | 3.0 | 1095 | 0.0766 | 0.9856 | 0.9785 | 0.9913 | 0.9848 | | 0.0185 | 4.0 | 1460 | 0.0956 | 0.9822 | 0.9715 | 0.9913 | 0.9813 | | 0.0227 | 5.0 | 1825 | 0.0693 | 0.9870 | 0.9827 | 0.9898 | 0.9862 | | 0.0084 | 6.0 | 2190 | 0.0870 | 0.9849 | 0.9926 | 0.9753 | 0.9839 | | 0.0021 | 7.0 | 2555 | 0.0729 | 0.9877 | 0.9883 | 0.9855 | 0.9869 | | 0.0002 | 8.0 | 2920 | 0.1197 | 0.9808 | 0.9688 | 0.9913 | 0.9799 | | 0.0033 | 9.0 | 3285 | 0.0768 | 0.9884 | 0.9912 | 0.9840 | 0.9876 | | 0.0009 | 10.0 | 3650 | 0.1013 | 0.9863 | 0.9869 | 0.9840 | 0.9854 | | 0.0 | 11.0 | 4015 | 0.1069 | 0.9863 | 0.9869 | 0.9840 | 0.9854 | | 0.0 | 12.0 | 4380 | 0.1124 | 0.9856 | 0.9854 | 0.9840 | 0.9847 | | 0.0 | 13.0 | 4745 | 0.1175 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 14.0 | 5110 | 0.1221 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 15.0 | 5475 | 0.1256 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 16.0 | 5840 | 0.1286 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 17.0 | 6205 | 0.1300 | 0.9856 | 0.9854 | 0.9840 | 0.9847 | | 0.0 | 18.0 | 6570 | 0.1293 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 19.0 | 6935 | 0.1304 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | | 0.0 | 20.0 | 7300 | 0.1308 | 0.9849 | 0.9854 | 0.9826 | 0.9840 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
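For readers who want to reproduce a similar run, below is a sketch of `TrainingArguments` mirroring the hyperparameters listed above; this is not the original training script, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="covid-general-news-bert",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption: the table reports per-epoch validation metrics
)
```

The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) are the 🤗 Trainer defaults, so they are not set explicitly here.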
a1a5108eb83bd438354a499e7800f4c4
matteopilotto/kratos-sd-v1-4-dreambooth
matteopilotto
null
25
107
diffusers
1
text-to-image
true
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
true
true
3,744
false
# DreamBooth model of Kratos from God of War <img src="https://huggingface.co/matteopilotto/kratos-sd-v1-4-dreambooth/resolve/main/grid_hub_512px.png"> This is a Stable Diffusion model fine-tuned on the person concept with DreamBooth. It can be used by adding the string `krts person` to any prompt. Check out the exampls below ☟ to see a few practical examples on how to use it. If you are curious to learn more about the training script, then I suggest you to visit the [report](https://wandb.ai/matt24/dreambooth-kratos/reports/Kratos-Dreambooth--VmlldzozMzQyMjQ4)📝 I created with Weights & Biases 🐝. This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on [`matteopilotto/kratos`](https://huggingface.co/datasets/matteopilotto/kratos) dataset containing 10 images of **Kratos** 🪓 from **God of War** for the wildcard theme using [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) pre-trained model. ## Example Output <img src="https://huggingface.co/matteopilotto/kratos-sd-v1-4-dreambooth/resolve/main/sample_outputs/245581956f83dc275e5d.png"> **Prompt:** "An illustration of **krts** **person** punk playing electric guitar, tristan eaton, victo ngai, artgerm, rhads, ross draws"\ **Negative prompt:** "low contrast, blurry, low resolution, warped"\ **Resolution:** 512 x 512\ **Guidance Scale:** 7\ **Inference steps:** 50\ **Seeds:** [556850, 459286, 768745, 594109] --- <img src="https://huggingface.co/matteopilotto/kratos-sd-v1-4-dreambooth/resolve/main/sample_outputs/4c4a87edbc0d5f03469a.png"> **Prompt:** "a drawing of **krts** **person** wearing a Spider-man costume in the style of Marvel comics"\ **Negative prompt:** "low contrast, blurry, low resolution, warped"\ **Resolution:** 512 x 512\ **Guidance Scale:** 7\ **Inference steps:** 50\ **Seeds:** [553766, 537908, 147395, 343240] --- <img src="https://huggingface.co/matteopilotto/kratos-sd-v1-4-dreambooth/resolve/main/sample_outputs/4dae428d30bddcc70967.png"> **Prompt:** "an illustration of **krts** **person** sitting in a movie theater eating popcorn watching a movie, unreal engine, cozy indoor lighting, artstation, detailed, digital painting, cinematic, character design by mark ryden and pixar and hayao miyazaki, unreal 5, daz, hyperrealistic, octane render"\ **Negative prompt:** "low contrast, blurry, low resolution, warped"\ **Resolution:** 512 x 512\ **Guidance Scale:** 7\ **Inference steps:** 50\ **Seeds:** [737986, 488711, 799063, 121111] ## Usage ```python import torch from diffusers import StableDiffusionPipeline # set device-agnostic code device = ( 'mps' if torch.backends.mps.is_available() else 'cuda' if torch.cuda.is_available() else 'cpu' ) # load pre-trained model pretrained_ckpt = 'matteopilotto/kratos-sd-v1-4-dreambooth' pipeline = StableDiffusionPipeline.from_pretrained(pretrained_ckpt).to(device) # stable diffusion hyperparameters unique_token = 'krts' class_type = 'person' prompt = f'An illustration of {unique_token} {class_type} punk playing electric guitar, tristan eaton, victo ngai, artgerm, rhads, ross draws' negative_prompt = 'low contrast, blurry, low resolution, warped' guidance_scale = 7 h = 512 w = 512 inference_steps = 50 seed = 594109 # set generator for reproducibility generator = torch.Generator(device=device).manual_seed(seed) # generate image image = pipeline( prompt, negative_prompt=negative_prompt, 
guidance_scale=guidance_scale, height=h, width=w, num_inference_steps=inference_steps, generator=generator ).images[0] ```
95b46232274be4bb75e49591168fac36
Gerard/xlm-roberta-base-finetuned-panx-de
Gerard
xlm-roberta
11
17
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1372 - F1: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 | | 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 | | 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
c832e16bd4bfaf18dbb0c7a153f1d070
seonghyeonye/direct_3B
seonghyeonye
t5
9
0
transformers
1
text2text-generation
true
false
false
apache-2.0
['en']
['bigscience/P3']
null
0
0
0
0
0
0
0
[]
false
true
true
4,631
false
**Official repository**: [seonghyeonye/Flipped-Learning](https://github.com/seonghyeonye/Flipped-Learning) # Model Description DIRECT is a strong baseline for FLIPPED, based on the training objective of [T0-3B](https://huggingface.co/bigscience/T0_3B). With only 5% of the token updates and half of the training datasets compared to T0-3B, DIRECT outperforms T0-3B. (+6.38% mean accuracy on 14 NLP tasks, +1.19% mean accuracy on 14 BIG-bench tasks) # How to use An overall explanation of our models, along with ablations, can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](https://huggingface.co/seonghyeonye/flipped_11B) checkpoint as it leads (on average) to the best performance on a variety of NLP tasks. |Model|Number of parameters| |-|-| |[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion| |[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion| Here is how to download the model in PyTorch: ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/direct_3B") tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B") ``` If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`. We also provide a quick [Jupyter Notebook](https://github.com/seonghyeonye/Flipped-Learning/blob/master/flipped_inference.ipynb) where you can run inference with our method. **Note: the model was trained with fp32 activations. As such, we highly discourage running inference with fp16.** # Training procedure The DIRECT model is based on [T5+LM](https://huggingface.co/google/t5-xl-lm-adapt), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective and additionally pretrained with a language modeling objective on [C4](https://huggingface.co/datasets/c4). Training details: - Fine-tuning steps: 5'000 - Input sequence length: 512 - Target sequence length: 128 - Batch size: 240 - Optimizer: Adafactor - Learning rate: 1e-4 - Dropout: 0.1 - Sampling strategy: proportional to the number of examples in each dataset (we randomly subsampled any dataset with over 500'000 examples so that it contributes at most 500'000 examples. Also, we randomly choose which instruction to generate at each training step, so ideally each instruction appears *num_examples/num_templates* times during training.) # Training data We trained different variants of T0 with different mixtures of datasets. |Model|Training datasets| |--|--| |FLIPPED_11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP| |FLIPPED_3B|Same as FLIPPED-11B| |DIRECT_3B|Same as FLIPPED-11B| We only chose prompt examples that have output labels, which can be found on the dataset page. 
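For completeness, here is a minimal zero-shot inference sketch building on the loading snippet above; the prompt is an illustrative example and not necessarily one of the training templates.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/direct_3B")
tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B")

# Illustrative instruction-style prompt (made up for this sketch).
prompt = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy."
inputs = tokenizer(prompt, return_tensors="pt")

# Keep fp32 activations, as recommended in the note above.
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=16)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```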
# Evaluation data We evaluate our models on the following datasets: |Task category|Datasets| |-|-| |Natural language inference|ANLI(R1, R2, R3), CB, RTE| |Coreference resolution|WSC, Winogrande| |Word sense disambiguation|WiC| |Sentence completion|COPA, HellaSwag, Story Cloze| |QA|PIQA, ARC-Challenge, OpenbookQA| We also evaluate FLIPPED on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench): - Code description task - Conceptual combinations - Hindu knowledge json - Known unknowns - Language identification - Logic grid puzzle task - Logical deduction - Common misconceptions - Movie dialog same or different - Novel concepts - Strategyqa - Formal fallacies syllogisms negation - VitaminC - Winowhy multiple choice # Label generalization We evaluate the robustness of the models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969). |Task category|(Datasets, Template name)| |-|-| |Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)| |Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning) | The template names we used can be found in the [promptsource template library](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates). # BibTeX entry and citation info ```bibtex @article{ye2022guess, title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners}, author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon}, journal={arXiv preprint arXiv:2210.02969}, year={2022} } ```
c3c4c8987d6083ffca5146588e82a175
stevemobs/deberta-base-finetuned-aqa-newsqa
stevemobs
deberta
13
7
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,251
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-finetuned-aqa-newsqa This model is a fine-tuned version of [stevemobs/deberta-base-finetuned-aqa](https://huggingface.co/stevemobs/deberta-base-finetuned-aqa) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7657 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.6883 | 1.0 | 17307 | 0.7325 | | 0.4807 | 2.0 | 34614 | 0.7657 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
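As the card leaves usage open, here is a minimal extractive question-answering sketch with the `transformers` pipeline; the question and context are made up for illustration.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="stevemobs/deberta-base-finetuned-aqa-newsqa")

# Made-up context and question, in the spirit of the NewsQA-style data this model was tuned on.
result = qa(
    question="Where was the conference held?",
    context="The annual developers conference was held in Lisbon and attracted over 3,000 attendees.",
)
print(result["answer"], round(result["score"], 3))
```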
624aa1bbba9dec75c2010f1a88ac697a
mhu-coder/ConvTasNet_Libri1Mix_enhsingle
mhu-coder
null
3
2
asteroid
1
audio-to-audio
true
false
false
cc-by-sa-4.0
null
['libri1mix', 'enh_single']
null
0
0
0
0
0
0
0
['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio']
false
true
true
1,725
false
## Asteroid model `mhu-coder/ConvTasNet_Libri1Mix_enhsingle` Imported from [Zenodo](https://zenodo.org/record/4301955#.X9cj98Jw0bY) ### Description: This model was trained by Mathieu Hu using the librimix/ConvTasNet recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `enh_single` task of the Libri1Mix dataset. ### Training config: ```yaml data: n_src: 1 sample_rate: 16000 segment: 3 task: enh_single train_dir: data/wav16k/min/train-100 valid_dir: data/wav16k/min/dev filterbank: kernel_size: 16 n_filters: 512 stride: 8 main_args: exp_dir: exp/train_convtasnet_f34664b9 help: None masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 1 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 positional arguments: training: batch_size: 2 early_stop: True epochs: 200 half_lr: True num_workers: 4 ``` ### Results: ```yaml si_sdr: 13.938355526049932 si_sdr_imp: 10.488574220190232 sdr: 14.567380104207393 sdr_imp: 11.064717304994337 sir: inf sir_imp: nan sar: 14.567380104207393 sar_imp: 11.064717304994337 stoi: 0.9201010933251715 stoi_imp: 0.1241812697846321 ``` ### License notice: This work "ConvTasNet_Libri1Mx_enhsingle" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A) by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only). "ConvTasNet_Libri1Mix_enhsingle" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Mathieu Hu.
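A minimal enhancement sketch, assuming Asteroid's `BaseModel.from_pretrained` Hub integration and a 16 kHz mono input file; the file paths are placeholders.

```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

# Load the checkpoint through Asteroid's Hugging Face Hub integration (assumed available).
model = BaseModel.from_pretrained("mhu-coder/ConvTasNet_Libri1Mix_enhsingle")

# "noisy.wav" is a placeholder; the model was trained on 16 kHz audio.
mixture, sample_rate = sf.read("noisy.wav", dtype="float32")
assert sample_rate == 16000

with torch.no_grad():
    # Output shape is (batch, n_src, time); n_src is 1 for this enhancement model.
    enhanced = model(torch.from_numpy(mixture).unsqueeze(0))

sf.write("enhanced.wav", enhanced.squeeze().cpu().numpy(), sample_rate)
```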
ac069b2afa5b5f1722ec68d20b70a058
nillo36/distilbert-base-uncased-finetuned-subreddit_classification
nillo36
distilbert
35
5
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,677
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-subreddit_classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2958 - Accuracy: 0.91 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4142 | 0.6 | 30 | 1.2653 | 0.45 | | 0.9856 | 1.2 | 60 | 0.7754 | 0.87 | | 0.5056 | 1.8 | 90 | 0.4413 | 0.9 | | 0.2248 | 2.4 | 120 | 0.2984 | 0.92 | | 0.1352 | 3.0 | 150 | 0.3265 | 0.89 | | 0.0856 | 3.6 | 180 | 0.2958 | 0.91 | | 0.0715 | 4.2 | 210 | 0.2611 | 0.92 | | 0.0615 | 4.8 | 240 | 0.2738 | 0.93 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.13.0+cpu - Datasets 2.8.0 - Tokenizers 0.12.1
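A minimal classification sketch; the predicted label names depend on the (unspecified) training data, so the output simply reflects whatever labels the checkpoint's config defines, and the example post is made up.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nillo36/distilbert-base-uncased-finetuned-subreddit_classification",
)

# Made-up post; the returned label comes from the checkpoint's id2label mapping.
print(classifier("Just benched 225 for the first time, any tips on improving my form?"))
```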
d47fef8bf6c0e387a81e30e4e8b8f6b0
jx88/xlm-roberta-base-finetuned-marc-en-j-run
jx88
xlm-roberta
12
3
transformers
0
text-classification
true
false
false
mit
null
['amazon_reviews_multi']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,330
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en-j-run This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9189 - Mae: 0.4634 ## Model description Trained following the MLT Tokyo Transformers workshop run by huggingface. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.2327 | 1.0 | 235 | 1.0526 | 0.6341 | | 0.9943 | 2.0 | 470 | 0.9189 | 0.4634 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
29908533d7a488403cdee7e75b8d866f
muhtasham/small-vanilla-target-glue-wnli
muhtasham
bert
10
2
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,506
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-vanilla-target-glue-wnli This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 8.2398 - Accuracy: 0.0845 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6354 | 25.0 | 500 | 2.5362 | 0.0845 | | 0.3043 | 50.0 | 1000 | 5.1175 | 0.0986 | | 0.138 | 75.0 | 1500 | 6.7552 | 0.0986 | | 0.0732 | 100.0 | 2000 | 7.6533 | 0.0986 | | 0.0413 | 125.0 | 2500 | 8.2398 | 0.0845 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
9d92f0d548f79962e3d70b22c7aca8e0
sd-concepts-library/kanovt
sd-concepts-library
null
40
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
4,068
false
### kanovt on Stable Diffusion This is the `kanovt` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![kanovt 0](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/34.jpeg) ![kanovt 1](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/8.jpeg) ![kanovt 2](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/33.jpeg) ![kanovt 3](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/3.jpeg) ![kanovt 4](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/12.jpeg) ![kanovt 5](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/14.jpeg) ![kanovt 6](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/28.jpeg) ![kanovt 7](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/29.jpeg) ![kanovt 8](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/18.jpeg) ![kanovt 9](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/26.jpeg) ![kanovt 10](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/1.jpeg) ![kanovt 11](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/27.jpeg) ![kanovt 12](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/16.jpeg) ![kanovt 13](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/20.jpeg) ![kanovt 14](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/24.jpeg) ![kanovt 15](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/11.jpeg) ![kanovt 16](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/5.jpeg) ![kanovt 17](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/0.jpeg) ![kanovt 18](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/21.jpeg) ![kanovt 19](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/7.jpeg) ![kanovt 20](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/15.jpeg) ![kanovt 21](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/17.jpeg) ![kanovt 22](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/23.jpeg) ![kanovt 23](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/31.jpeg) ![kanovt 24](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/22.jpeg) ![kanovt 25](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/32.jpeg) ![kanovt 26](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/10.jpeg) ![kanovt 27](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/30.jpeg) ![kanovt 28](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/25.jpeg) ![kanovt 
29](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/19.jpeg) ![kanovt 30](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/6.jpeg) ![kanovt 31](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/4.jpeg) ![kanovt 32](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/9.jpeg) ![kanovt 33](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/2.jpeg) ![kanovt 34](https://huggingface.co/sd-concepts-library/kanovt/resolve/main/concept_images/13.jpeg)
32fd9926ded22c978ce6cdadb2830727
snowood1/ConfliBERT-cont-uncased
snowood1
bert
8
4
transformers
0
fill-mask
true
false
false
gpl-3.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
678
false
ConfliBERT is a pre-trained language model for political conflict and violence. We provide four versions of ConfliBERT: <ol> <li>ConfliBERT-scr-uncased: &nbsp;&nbsp;&nbsp;&nbsp; Pretraining from scratch with our own uncased vocabulary (preferred)</li> <li>ConfliBERT-scr-cased: &nbsp;&nbsp;&nbsp;&nbsp; Pretraining from scratch with our own cased vocabulary</li> <li>ConfliBERT-cont-uncased: &nbsp;&nbsp;&nbsp;&nbsp; Continual pretraining with original BERT's uncased vocabulary</li> <li>ConfliBERT-cont-cased: &nbsp;&nbsp;&nbsp;&nbsp; Continual pretraining with original BERT's cased vocabulary</li> </ol> See more details at https://github.com/eventdata/ConfliBERT/
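A minimal fill-mask sketch for this continually pretrained, uncased variant; the example sentence is made up and `[MASK]` is the standard BERT mask token.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="snowood1/ConfliBERT-cont-uncased")

# Made-up sentence from the political-conflict domain.
for prediction in fill("The armed group launched an [MASK] on the village."):
    print(prediction["token_str"], round(prediction["score"], 3))
```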
ad01e5b921b7e3dbca3d609a14562316
jonatasgrosman/exp_w2v2t_pt_xlsr-53_s677
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'pt']
false
true
true
461
false
# exp_w2v2t_pt_xlsr-53_s677 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
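A minimal transcription sketch with the HuggingSound tool mentioned above (assuming its `SpeechRecognitionModel` API); the audio paths are placeholders and must point to 16 kHz recordings.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_xlsr-53_s677")

# Placeholder paths to 16 kHz Portuguese recordings.
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```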
071be88d0b2286a98e4c7ae526e1c5c1
mriggs/mt5-small-finetuned-2epochs-opus_books-en-to-it
mriggs
mt5
11
4
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['opus_books']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,225
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-2epochs-opus_books-en-to-it This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the opus_books dataset. It achieves the following results on the evaluation set: - Loss: 3.0110 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.957 | 1.0 | 3638 | 3.0675 | | 3.8286 | 2.0 | 7276 | 3.0110 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.0 - Tokenizers 0.13.1
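A minimal translation sketch; whether the checkpoint expects a task prefix is not stated in the card, so plain English input is used here and the sentence is made up.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mriggs/mt5-small-finetuned-2epochs-opus_books-en-to-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Made-up English sentence to translate into Italian.
inputs = tokenizer("She closed the book and looked out of the window.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```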
2de144bca9aaf6950b1918e2498a376d
MeshalAlamr/wav2vec2-xls-r-300m-ar-8
MeshalAlamr
wav2vec2
11
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,013
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-ar-8 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 76.6942 - Wer: 0.2108 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6295.0487 | 4.71 | 400 | 615.8572 | 1.0 | | 1464.0058 | 9.41 | 800 | 111.7187 | 0.5361 | | 425.6333 | 14.12 | 1200 | 80.7770 | 0.3446 | | 280.069 | 18.82 | 1600 | 74.0422 | 0.2980 | | 213.0118 | 23.53 | 2000 | 78.4876 | 0.2783 | | 175.6819 | 28.24 | 2400 | 70.4845 | 0.2491 | | 148.5846 | 32.94 | 2800 | 70.5758 | 0.2443 | | 131.1029 | 37.65 | 3200 | 75.3770 | 0.2371 | | 116.7131 | 42.35 | 3600 | 78.7061 | 0.2268 | | 105.369 | 47.06 | 4000 | 76.4783 | 0.2210 | | 97.0829 | 51.76 | 4400 | 76.6051 | 0.2153 | | 90.4009 | 56.47 | 4800 | 76.6942 | 0.2108 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0 - Datasets 1.18.4 - Tokenizers 0.11.6
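A minimal transcription sketch with the `transformers` ASR pipeline; `arabic_sample.wav` is a placeholder for a 16 kHz recording.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="MeshalAlamr/wav2vec2-xls-r-300m-ar-8")

# Placeholder path; decoding audio files requires ffmpeg to be installed.
print(asr("arabic_sample.wav")["text"])
```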
28a5b202b9de3c7d3375c8e0a4429ffb
CarperAI/FIM-NeoX-1.3B
CarperAI
gpt_neox
7
190
transformers
21
text-generation
true
false
false
apache-2.0
['en', 'code']
null
null
2
1
1
0
3
3
0
['pytorch', 'causal-lm', 'code-generation', 'The Pile']
false
true
true
11,008
false
# FIM-1.3B ## Model Description FIM-1.3B is the first of a series of large-scale infilling-enabled autoregressive language models trained by CarperAI. FIM-1.3B is the first of these models, and future models (both larger and smaller) trained on greater quantities of code data will be released, potentially with different architectural variations optimized for code. This is a preliminary release of an experimental artifact and should be treated as such. We are releasing these results and this model in the hopes that it may be useful to the greater research community, especially those interested in LMs for code and pair programming tools. CarperAI will be releasing larger LMs better tuned for code in the near future, building on these experiments. ## Model Dimensions | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 1,331,810,304 | | \\(n_{layers}\\) | 24 | | \\(d_{model}\\) | 2048 | | \\(d_{ff}\\) | 8192 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 128 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50280 | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) The model consists of 24 transformer layers with a hidden dimension of 2048, and a feedforward intermediate dimension of 8192. The hidden dimension is split into 16 heads for self-attention, each with a dimension of 128. Rotary Position Embedding (RoPE) is used. The model is trained with the same tokenizer as [GPT-NeoX-20b](https://arxiv.org/abs/2204.06745), for a vocabulary size of 50254 tokens. ## Training Data The model was trained on the Pile, an 800Gb dataset composed of varied web corpora. The datasheet and paper for the Pile can be found [here](https://arxiv.org/abs/2201.07311) and [here](https://arxiv.org/abs/2101.00027) respectively. ## Training Details This model was trained for 47,000 steps at a batch size of 6,291,456 tokens per step in the [GPT-NeoX codebase](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. Following [Bavarian et al. 2022](https://arxiv.org/abs/2207.14255), we train the model to additionally perform infilling via a data transformation applied randomly to 90% of input contexts at train-time. Middle segments “to infill” were selected uniformly at random from contexts at the character level, and these contexts were then reformatted as \<SUF\> {last 1/3rd of the context} \<PRE\> {first 1/3rd of the context} \<MID\> {middle 1/3rd of the context} \<EOD\> ## How to use This model can be easily loaded using the `AutoModelForCausalLM` class: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("CarperAI/FIM-NeoX-1.3B") model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-NeoX-1.3B") ``` ### Performing Infilling Suppose we have some text that we would like to perform infilling on at a certain “cursor location”. This would have the form {some prelude text here} \<INFILLING LOCATION\> {some text following cursor}. The way to perform infilling generation would be via placing the input text into this format: \<SUF\> {some text following cursor} \<PRE\> {some prelude text here} \<MID\> ... language model output is generated after \<MID\> token! 
As a concrete example, here is a code snippet that should allow the model to perform infilling. Note: there was an issue where the sentinel `<|SUF|>`, `<|PRE|>`, and `<|MID|>` tokens were not mapped to the correct ids in the uploaded tokenizer and model card; if you hit this, please clear the Hugging Face cache and redownload the model. Here is a minimal example of performing open-ended generation with this model, on a simple function `score(x, y)`: ``` def score(x,y) -> int: """ ``` and also infilling with the function and end of docstring already placed: ``` def score(x,y) -> int: """ <|MID|> (infill here) """ score = x + y return score ``` ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-NeoX-1.3B") tok = AutoTokenizer.from_pretrained("CarperAI/FIM-NeoX-1.3B") # infilling demo prefix = 'def score(x, y) -> int:\n"""\n' suffix = '"""\n\n score = x + y\n return score' model_input = [50277, *tok(suffix)["input_ids"], 50278, *tok(prefix)["input_ids"], 50279] output = tok.decode(model.generate(torch.IntTensor(model_input).unsqueeze(0), max_length=40)[0]) print(output) ``` outputs: `'<|SUF|>"""\n\n score = x + y\n return score<|PRE|>def score(x, y) -> int:\n"""\n<|MID|> score(x, y) -> int\n<|endoftext|>'` ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch # non-infilling demo model = AutoModelForCausalLM.from_pretrained("CarperAI/FIM-NeoX-1.3B") tok = AutoTokenizer.from_pretrained("CarperAI/FIM-NeoX-1.3B") prefix = 'def score(x, y) -> int:\n"""\n' model_input = [*tok(prefix)["input_ids"]] output = tok.decode(model.generate(torch.IntTensor(model_input).unsqueeze(0), max_length=100)[0]) print(output) ``` outputs: `'def score(x, y) -> int:\n"""\n Return the score of the given point.\n """\n return sum(x * y for x, y in zip(x_list, y_list))\n\ndef get_point_score(x, y) -> int:\n """\n Return the score of the given point.\n """\n return sum(x * y for x, y in zip(x_list, y'` The sentinel tokens are now accessible via `tokenizer.decode(50277) = "<|SUF|>"`, `tokenizer.decode(50278) = "<|PRE|>"`, `tokenizer.decode(50279) = "<|MID|>"`. ## Intended Uses and Limitations FIM-1.3B learns a representation of the English language that can be used to extract features useful for downstream NLP and code generation tasks. However, the model has solely been trained on a standard next-token-prediction language modeling task on its training data. ## Limitations and Biases FIM-1.3B was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. FIM-1.3B may produce socially unacceptable or otherwise harmful text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how FIM-1.3B will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. Code generated by FIM-1.3B should also be checked for security errors by a human before use in production. ## Evaluation results We evaluate our model on a number of standard NLP datasets to verify that our infilling model performs on par with a comparable autoregressive model. We use the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) library developed by EleutherAI for all evaluations except for HumanEval-infilling, for which we use the code in [https://github.com/openai/human-eval-infilling](https://github.com/openai/human-eval-infilling) to evaluate performance. 
All 3 models here are trained using the same configuration with differing FIM hyperparameters and/or different positional embeddings. "AR-1.3B" refers to a model trained without FIM and with rotary positional embeddings, "CarperAI/FIM-NeoX-1.3B" refers to this model (trained with a FIM rate of 0.9 in SPM mode according to [Bavarian et al. 2022](https://arxiv.org/abs/2207.14255)), and "FIM-1.3B-alibi" refers to a model trained with [AliBi](https://arxiv.org/abs/2108.12409) positional embeddings but otherwise the same as this model. | Model | HumanEval-infilling | arc\_easy | arc\_challenge | lambada | piqa | sciq | wsc | winogrande | |-----------------|---------------------|----------|---------------|---------|--------|-------|--------|------------| | AR-1.3B | 0.0029 | 0.5816 | 0.2465 | 7.03 | 0.7116 | 0.85 | 0.3654 | 0.5651 | | CarperAI/FIM-NeoX-1.3B | 0.0155 | 0.5829 | 0.2457 | 7.08 | 0.7029 | 0.861 | 0.3654 | 0.5390 | | FIM-1.3B-alibi | 0.0029 | 0.5589 | 0.25 | 7.49 | 0.6926 | 0.856 | 0.3654 | 0.5406 | Here HumanEval-infilling is reported as Pass@10 with a temperature of 0.8 (such that 100 times the score reported here = Pass@10 as a percentage), Lambada is reported as perplexity, and all other benchmarks report accuracy as a number between 0 and 1. These results are subject to change, but appear to indicate that AliBi with FIM does not enable infilling, while rotary positional embeddings do allow for infilling to be learned. ## Licensing This model is licensed under the terms of the Apache License 2.0. ``` Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ## Acknowledgements This project would not have been possible without compute resources provided by [Stability.ai](https://stability.ai) and [CarperAI](https://carper.ai/). This model was trained by Hailey Schoelkopf, and would also not have been possible without help, guidance, and feedback by many including Louis Castricato, Stella Biderman, Shivanshu Purohit, Quentin Anthony, and others.
1c4f5f6cbaeac4247623ea96fcb66bd5
jonatasgrosman/exp_w2v2t_id_xlsr-53_s358
jonatasgrosman
wav2vec2
10
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['id']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'id']
false
true
true
461
false
# exp_w2v2t_id_xlsr-53_s358 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
4f11850a32228328bd6f2b76071d0a18
davidaponte/kd-distilBERT-clinc
davidaponte
distilbert
19
1
transformers
1
text-classification
true
false
false
apache-2.0
null
['clinc_oos']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,461
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kd-distilBERT-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7752 - Accuracy: 0.9129 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.3211 | 1.0 | 318 | 3.3313 | 0.7235 | | 2.6568 | 2.0 | 636 | 1.9016 | 0.8452 | | 1.5575 | 3.0 | 954 | 1.1668 | 0.8955 | | 1.0094 | 4.0 | 1272 | 0.8619 | 0.9087 | | 0.7914 | 5.0 | 1590 | 0.7752 | 0.9129 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
dfa921f6f581408aee67d6dd753c1775
Salesforce/qa_consolidation
Salesforce
roberta
9
6
transformers
2
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['question_answering', 'qa', 'answer_consolidation']
false
true
true
4,097
false
# QA Consolidation Model Model card for the QA Consolidation (step 3) of the Discord Questions framework (EMNLP 2022 - Findings). The model assesses the similarity between two answers (a1, a2) to a question Q. The score obtained is on a scale from 1 (most dissimilar) to 5 (most similar). See example below for formatting. The model is a RoBERTa-large model, finetuned on the [MOCHA dataset](https://arxiv.org/abs/2010.03636), and a 5-pt version of the [Answer Equivalence](https://arxiv.org/abs/2202.07654v1) dataset. For a (question, answer1, answer2)-tuple, the model outputs a [1-5] answer similarity score, where 5 is most similar. Example usage of the model: ```py from transformers import AutoModelForSequenceClassification, AutoTokenizer import itertools ae_tokenizer = AutoTokenizer.from_pretrained("Salesforce/qa_consolidation") ae_model = AutoModelForSequenceClassification.from_pretrained("Salesforce/qa_consolidation").eval() question = "When will the recession happen?" answers = ["probably next January", "never", "we're already in a recession", "it won't happen", "it's going on right now", "not before next year", "upcoming January-March"] dataset = [{"a1": a1, "a2": a2, "input": "%s <sep> %s <sep> %s" % (question, a1, a2)} for a1, a2 in itertools.combinations(answers, 2)] input_ids = ae_tokenizer.batch_encode_plus([d["input"] for d in dataset], add_special_tokens=False, padding=True, return_tensors="pt")["input_ids"] scores = ae_model(input_ids=input_ids)["logits"][:, 0].tolist() for d, score in zip(dataset, scores): d["score"] = score for d in sorted(dataset, key=lambda d: -d["score"]): print("[Score: %.3f] %s" % (d["score"], d["input"])) ``` The output then looks like: ``` [Score: 4.980] When will the recession happen? <sep> never <sep> it won't happen [Score: 3.831] When will the recession happen? <sep> probably next January <sep> upcoming January-March [Score: 3.366] When will the recession happen? <sep> we're already in a recession <sep> it's going on right now [Score: 2.302] When will the recession happen? <sep> never <sep> not before next year [Score: 1.899] When will the recession happen? <sep> probably next January <sep> not before next year [Score: 1.290] When will the recession happen? <sep> it won't happen <sep> not before next year [Score: 1.230] When will the recession happen? <sep> we're already in a recession <sep> it won't happen [Score: 1.187] When will the recession happen? <sep> not before next year <sep> upcoming January-March [Score: 1.126] When will the recession happen? <sep> it won't happen <sep> it's going on right now [Score: 1.108] When will the recession happen? <sep> never <sep> we're already in a recession [Score: 1.099] When will the recession happen? <sep> we're already in a recession <sep> not before next year [Score: 1.091] When will the recession happen? <sep> probably next January <sep> it's going on right now [Score: 1.084] When will the recession happen? <sep> never <sep> it's going on right now [Score: 1.048] When will the recession happen? <sep> probably next January <sep> we're already in a recession [Score: 1.023] When will the recession happen? <sep> probably next January <sep> it won't happen [Score: 1.017] When will the recession happen? <sep> probably next January <sep> never [Score: 1.006] When will the recession happen? <sep> it's going on right now <sep> not before next year [Score: 0.994] When will the recession happen? <sep> we're already in a recession <sep> upcoming January-March [Score: 0.917] When will the recession happen? 
<sep> it's going on right now <sep> upcoming January-March [Score: 0.903] When will the recession happen? <sep> it won't happen <sep> upcoming January-March [Score: 0.896] When will the recession happen? <sep> never <sep> upcoming January-March ``` In the paper, we find that a threshold of `T=2.75` achieves the highest F1 score on the validation portions of the two datasets. In the above example, only the first three pairs would be classified as equivalent answers, and all pairs below would be labeled as non-equivalent answers.
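Continuing the example above, applying the reported `T=2.75` threshold to the computed scores is a short sketch that reuses the `dataset` list built earlier:

```python
# Continues the scoring loop above: keep only pairs whose score clears the threshold.
THRESHOLD = 2.75
equivalent_pairs = [(d["a1"], d["a2"]) for d in dataset if d["score"] >= THRESHOLD]
print(equivalent_pairs)
```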
e5427cb8787c4e4c1d058dd42572396f
SetFit/distilbert-base-uncased__sst2__train-32-7
SetFit
distilbert
10
5
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,075
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-32-7 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6736 - Accuracy: 0.5931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7094 | 1.0 | 13 | 0.6887 | 0.5385 | | 0.651 | 2.0 | 26 | 0.6682 | 0.6923 | | 0.6084 | 3.0 | 39 | 0.6412 | 0.6923 | | 0.4547 | 4.0 | 52 | 0.6095 | 0.6923 | | 0.2903 | 5.0 | 65 | 0.6621 | 0.6923 | | 0.1407 | 6.0 | 78 | 0.7130 | 0.7692 | | 0.0444 | 7.0 | 91 | 0.9007 | 0.6923 | | 0.0176 | 8.0 | 104 | 0.9525 | 0.7692 | | 0.0098 | 9.0 | 117 | 1.0289 | 0.7692 | | 0.0071 | 10.0 | 130 | 1.0876 | 0.7692 | | 0.0052 | 11.0 | 143 | 1.1431 | 0.6923 | | 0.0038 | 12.0 | 156 | 1.1687 | 0.7692 | | 0.0034 | 13.0 | 169 | 1.1792 | 0.7692 | | 0.0031 | 14.0 | 182 | 1.2033 | 0.7692 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
a63a90c7b0c4fbce9da2f5e172baa93f
brwillia/distilgpt2-finetuned-wikitext2
brwillia
gpt2
9
2
transformers
0
text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,243
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
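A minimal generation sketch; the prompt is arbitrary and the sampling settings are illustrative rather than recommended values.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="brwillia/distilgpt2-finetuned-wikitext2")

# Arbitrary prompt; sampling settings are just examples.
output = generator("The history of the region begins", max_new_tokens=40, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```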
f3ff810ee9a8ddc652db73bd6b6c7b79
knurm/my-finetuned-xml-roberta2
knurm
xlm-roberta
11
5
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,389
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my-finetuned-xml-roberta2 This model is a fine-tuned version of [knurm/my-finetuned-xml-roberta](https://huggingface.co/knurm/my-finetuned-xml-roberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4644 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.4491 | 1.0 | 5652 | 3.3339 | | 3.171 | 2.0 | 11304 | 3.2681 | | 2.9518 | 3.0 | 16956 | 3.3003 | | 2.7305 | 4.0 | 22608 | 3.3447 | | 2.5974 | 5.0 | 28260 | 3.4644 | ### Framework versions - Transformers 4.19.1 - Pytorch 1.11.0+cu113 - Datasets 2.2.1 - Tokenizers 0.12.1
ca9144d85277e903f8da734e410d6d01
frgfm/rexnet1_0x
frgfm
null
5
19
transformers
0
image-classification
true
false
false
apache-2.0
null
['frgfm/imagenette']
null
0
0
0
0
0
0
0
['image-classification', 'pytorch', 'onnx']
false
true
true
2,866
false
# ReXNet-1.0x model Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf). ## Model description The core idea of the authors is to add a customized Squeeze-and-Excitation layer to the residual blocks in order to prevent channel redundancy. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron. ### Latest stable release You can install the latest stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows: ```shell pip install pylocron ``` or using [conda](https://anaconda.org/frgfm/pylocron): ```shell conda install -c frgfm pylocron ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/frgfm/Holocron.git pip install -e Holocron/. ``` ## Usage instructions ```python import torch from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from holocron.models import model_from_hf_hub model = model_from_hf_hub("frgfm/rexnet1_0x").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-2007-00992, author = {Dongyoon Han and Sangdoo Yun and Byeongho Heo and Young Joon Yoo}, title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network}, journal = {CoRR}, volume = {abs/2007.00992}, year = {2020}, url = {https://arxiv.org/abs/2007.00992}, eprinttype = {arXiv}, eprint = {2007.00992}, timestamp = {Mon, 06 Jul 2020 15:26:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{Fernandez_Holocron_2020, author = {Fernandez, François-Guillaume}, month = {5}, title = {{Holocron}}, url = {https://github.com/frgfm/Holocron}, year = {2020} } ```
d7cf8ee0e97b52cacf5252715277f0fe
tomekkorbak/nervous_wozniak
tomekkorbak
gpt2
36
6
transformers
0
null
true
false
false
mit
['en']
['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
7,635
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nervous_wozniak This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'nervous_wozniak', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': 
False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/qjc0jrdx
9c7b0a93dbd83732ca934b5c8929d2eb
zangwei/gpt2-wikitext2
zangwei
gpt2
8
2
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,058
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 6.6488 - eval_runtime: 22.5221 - eval_samples_per_second: 85.871 - eval_steps_per_second: 10.745 - epoch: 0.66 - step: 1490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
4cdd6d956d8aab924d6107b36ffac2df
Philip-Jan/finetuning-sentiment-model-3000-samples
Philip-Jan
distilbert
28
9
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
1
0
1
0
0
0
0
['generated_from_trainer']
true
true
true
1,053
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3328 - Accuracy: 0.8633 - F1: 0.8647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
614a6b885496930eb1751cff25df5f1f
nielsr/segformer-finetuned-sidewalk-trainer
nielsr
segformer
9
0
transformers
0
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
936
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-finetuned-sidewalk-trainer This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
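A hedged inference sketch for a SegFormer semantic-segmentation checkpoint like this one; the image path is a placeholder, and the default SegFormer preprocessing settings are an assumption since the card does not state which were used during fine-tuning:

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Default SegFormer preprocessing is an assumption; the card does not state
# which image size or normalization was used during fine-tuning.
processor = SegformerImageProcessor()
model = SegformerForSemanticSegmentation.from_pretrained(
    "nielsr/segformer-finetuned-sidewalk-trainer"
)

image = Image.open("sidewalk_scene.jpg")  # placeholder path to any RGB street photo
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, height/4, width/4)

# Per-pixel class indices at the model's reduced output resolution.
pred = logits.argmax(dim=1)[0]
print(pred.shape)
```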
13bcaa69f093ec8f939e2997663a62cd
Helsinki-NLP/opus-mt-en-ng
Helsinki-NLP
marian
10
116
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-en-ng * source languages: en * target languages: ng * OPUS readme: [en-ng](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ng/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.ng | 24.8 | 0.496 |
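A minimal translation sketch using the standard Marian classes in `transformers`; the English source sentence is an arbitrary illustration:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ng"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Illustrative English source sentence to translate into the target language (ng).
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```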
9d5f6fb1f83917fe178cca76815ad59a
muhtasham/tiny-mlm-snli-target-glue-qnli
muhtasham
bert
10
4
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,796
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-snli-target-glue-qnli This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4710 - Accuracy: 0.7811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6125 | 0.15 | 500 | 0.5374 | 0.7371 | | 0.5442 | 0.31 | 1000 | 0.5321 | 0.7414 | | 0.5223 | 0.46 | 1500 | 0.4991 | 0.7628 | | 0.5165 | 0.61 | 2000 | 0.5155 | 0.7545 | | 0.5118 | 0.76 | 2500 | 0.4795 | 0.7752 | | 0.5052 | 0.92 | 3000 | 0.4663 | 0.7856 | | 0.4916 | 1.07 | 3500 | 0.4500 | 0.7955 | | 0.4818 | 1.22 | 4000 | 0.4669 | 0.7811 | | 0.4685 | 1.37 | 4500 | 0.4774 | 0.7759 | | 0.4761 | 1.53 | 5000 | 0.4710 | 0.7811 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
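QNLI is a sentence-pair task (does the sentence answer the question?); a hedged sketch with generic Auto classes follows. The question/sentence pair is invented, and the printed label names come from whatever `id2label` mapping was exported with the checkpoint (possibly generic `LABEL_0`/`LABEL_1`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "muhtasham/tiny-mlm-snli-target-glue-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# QNLI pairs a question with a candidate answer sentence.
question = "What is the capital of France?"
sentence = "Paris has been the capital of France since the 10th century."

inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred], logits.softmax(dim=-1).tolist())
```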
62b5a5d5c3040f6727469bc0364754a5
moghis/distilbert-base-uncased-finetuned-emotion
moghis
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,343
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2141 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7828 | 1.0 | 250 | 0.2936 | 0.909 | 0.9070 | | 0.2344 | 2.0 | 500 | 0.2141 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
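A minimal inference sketch with the pipeline API; the input sentence is invented, and the six emotion-dataset class names (sadness, joy, love, anger, fear, surprise) only appear in the output if the label mapping was exported with the checkpoint:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="moghis/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion class
)

# Illustrative input sentence.
print(classifier("I can't believe how lucky I am to have met you!"))
```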
fd3b06a32e0e5fa5e6ceeb78611876db
sd-concepts-library/garfield-pizza-plush
sd-concepts-library
null
11
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,351
false
### Garfield-Pizza-Plush on Stable Diffusion This is the `<garfield-plushy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<garfield-plushy> 0](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/5.jpeg) ![<garfield-plushy> 1](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/3.jpeg) ![<garfield-plushy> 2](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/0.jpeg) ![<garfield-plushy> 3](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/2.jpeg) ![<garfield-plushy> 4](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/1.jpeg) ![<garfield-plushy> 5](https://huggingface.co/sd-concepts-library/garfield-pizza-plush/resolve/main/concept_images/4.jpeg)
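Beyond the notebooks linked above, the learned embedding can also be loaded programmatically with `diffusers`; in the sketch below the base Stable Diffusion checkpoint, dtype and device are assumed choices, not specified by this concept repository:

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumed choice; any SD 1.x pipeline with
# textual-inversion support should behave similarly.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned <garfield-plushy> token from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/garfield-pizza-plush")

image = pipe("a photo of <garfield-plushy> sitting on a bookshelf").images[0]
image.save("garfield_plushy.png")
```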
4f84ffe0b9e375fbc4da2ee7197d49db
jonatasgrosman/wav2vec2-large-fr-voxpopuli-french
jonatasgrosman
wav2vec2
8
712
transformers
1
automatic-speech-recognition
true
false
true
apache-2.0
['fr']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
7,474
false
# Fine-tuned French Voxpopuli wav2vec2 large model for speech recognition in French Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) on French using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-fr-voxpopuli-french") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "fr" MODEL_ID = "jonatasgrosman/wav2vec2-large-fr-voxpopuli-french" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | "CE DERNIER A ÉVOLUÉ TOUT AU LONG DE L'HISTOIRE ROMAINE." | CE DERNIER A ÉVOLÉ TOUT AU LONG DE L'HISTOIRE ROMAINE | | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNASTIE ACHÉMÉNIDE ET SEPT DES SASSANIDES. | CE SITE CONTIENT QUATRE TOMBEAUX DE LA DYNESTIE ACHÉMÉNIDE ET SEPT DES SACENNIDES | | "J'AI DIT QUE LES ACTEURS DE BOIS AVAIENT, SELON MOI, BEAUCOUP D'AVANTAGES SUR LES AUTRES." | JAI DIT QUE LES ACTEURS DE BOIS AVAIENT SELON MOI BEAUCOUP DAVANTAGE SUR LES AUTRES | | LES PAYS-BAS ONT REMPORTÉ TOUTES LES ÉDITIONS. | LE PAYS-BAS ON REMPORTÉ TOUTES LES ÉDITIONS | | IL Y A MAINTENANT UNE GARE ROUTIÈRE. | IL A MAINTENANT GULA E RETIREN | | HUIT | HUIT | | DANS L’ATTENTE DU LENDEMAIN, ILS NE POUVAIENT SE DÉFENDRE D’UNE VIVE ÉMOTION | DANS LATTENTE DU LENDEMAIN IL NE POUVAIT SE DÉFENDRE DUNE VIVE ÉMOTION | | LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZE ÉPISODES. | LA PREMIÈRE SAISON EST COMPOSÉE DE DOUZ ÉPISODES | | ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES. | ELLE SE TROUVE ÉGALEMENT DANS LES ÎLES BRITANNIQUES | | ZÉRO | ZÉRO | ## Evaluation The model can be evaluated as follows on the French (fr) test data of Common Voice. 
```python import torch import re import librosa from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "fr" MODEL_ID = "jonatasgrosman/wav2vec2-large-fr-voxpopuli-french" DEVICE = "cuda" CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞", "؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]", "{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。", "、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽", "『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"] test_dataset = load_dataset("common_voice", LANG_ID, split="test") wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]" processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) model.to(DEVICE) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): with warnings.catch_warnings(): warnings.simplefilter("ignore") speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) predictions = [x.upper() for x in result["pred_strings"]] references = [x.upper() for x in result["sentence"]] print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}") print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}") ``` **Test Result**: In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-16). Note that the table below may show different results from those already reported, this may have been caused due to some specificity of the other evaluation scripts used. 
| Model | WER | CER | | ------------- | ------------- | ------------- | | jonatasgrosman/wav2vec2-large-xlsr-53-french | **15.90%** | **5.29%** | | jonatasgrosman/wav2vec2-large-fr-voxpopuli-french | 17.62% | 6.04% | | Ilyes/wav2vec2-large-xlsr-53-french | 19.67% | 6.70% | | Nhut/wav2vec2-large-xlsr-french | 24.09% | 8.42% | | facebook/wav2vec2-large-xlsr-53-french | 25.45% | 10.35% | | MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French | 28.22% | 9.70% | | Ilyes/wav2vec2-large-xlsr-53-french_punctuation | 29.80% | 11.79% | | facebook/wav2vec2-base-10k-voxpopuli-ft-fr | 61.06% | 33.31% | ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021voxpopuli-fr-wav2vec2-large-french, title={Fine-tuned {F}rench {V}oxpopuli wav2vec2 large model for speech recognition in {F}rench}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-fr-voxpopuli-french}}, year={2021} } ```
6f8638bb39d65f43cdca3f99d72932d1
laituan245/molt5-base-caption2smiles
laituan245
t5
7
153
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,119
false
This model can be used to generate a SMILES string from an input caption. ## Example Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-base-caption2smiles", model_max_length=512) model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-base-caption2smiles') input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.' input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids, num_beams=5, max_length=512) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) # The model will generate "COC1=C(C=CC(=C1)CCCO)O". The ground-truth is "COC1=C(C=CC(=C1)CO)O". ``` ## Paper For more information, please take a look at our paper. Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817) Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
2efcf21817433feb794f9ce26894f94d
pyf98/aishell_e_branchformer
pyf98
null
21
3
espnet
0
automatic-speech-recognition
false
false
false
cc-by-4.0
['zh']
['aishell']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'automatic-speech-recognition']
false
true
true
24,341
false
## ESPnet2 ASR model ### `pyf98/aishell_e_branchformer` This model was trained by Yifan Peng using aishell recipe in [espnet](https://github.com/espnet/espnet/). References: - [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077) - [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html) ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 89567acf6047737820aef96d2dd2e611825c8b1e pip install -e . cd egs2/aishell/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model pyf98/aishell_e_branchformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Sun Dec 18 12:21:46 CST 2022` - python version: `3.9.15 (main, Nov 24 2022, 14:31:59) [GCC 11.2.0]` - espnet version: `espnet 202209` - pytorch version: `pytorch 1.12.1` - Git hash: `26f432bc859e5e40cac1a86042d498ba7baffbb0` - Commit date: `Fri Dec 9 02:16:01 2022 +0000` ## asr_train_asr_e_branchformer_e12_mlp1024_linear1024_mactrue_amp_raw_zh_char_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_branchformer_asr_model_valid.acc.ave/dev|14326|14326|66.9|33.1|0.0|0.0|33.1|33.1| |decode_asr_branchformer_asr_model_valid.acc.ave/test|7176|7176|65.4|34.6|0.0|0.0|34.6|34.6| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_branchformer_asr_model_valid.acc.ave/dev|14326|205341|95.9|4.0|0.1|0.1|4.2|33.1| |decode_asr_branchformer_asr_model_valid.acc.ave/test|7176|104765|95.6|4.3|0.1|0.1|4.5|34.6| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_e_branchformer_e12_mlp1024_linear1024_mactrue_amp.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_e_branchformer_e12_mlp1024_linear1024_mactrue_amp_raw_zh_char_sp ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 39475 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 60 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 25000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_zh_char_sp/train/speech_shape - 
exp/asr_stats_raw_zh_char_sp/train/text_shape.char valid_shape_file: - exp/asr_stats_raw_zh_char_sp/valid/speech_shape - exp/asr_stats_raw_zh_char_sp/valid/text_shape.char batch_type: numel valid_batch_type: null fold_length: - 51200 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 35000 token_list: - <blank> - <unk> - 的 - 一 - 在 - 十 - 中 - 是 - 人 - 有 - 二 - 上 - 了 - 不 - 国 - 市 - 大 - 业 - 为 - 年 - 三 - 发 - 个 - 分 - 出 - 会 - 公 - 行 - 地 - 成 - 这 - 和 - 到 - 五 - 产 - 时 - 对 - 房 - 百 - 能 - 场 - 来 - 以 - 新 - 之 - 日 - 者 - 将 - 现 - 四 - 要 - 家 - 资 - 多 - 月 - 也 - 方 - 后 - 机 - 下 - 前 - 零 - 比 - 于 - 生 - 点 - 开 - 动 - 高 - 经 - 进 - 报 - 体 - 赛 - 子 - 万 - 车 - 用 - 金 - 司 - 可 - 被 - 过 - 手 - 本 - 作 - 自 - 全 - 八 - 六 - 最 - 价 - 目 - 电 - 部 - 交 - 九 - 七 - 面 - 我 - 企 - 加 - 小 - 度 - 实 - 同 - 城 - 工 - 其 - 力 - 定 - 而 - 元 - 合 - 已 - 内 - 与 - 法 - 还 - 关 - 网 - 得 - 他 - 就 - 入 - 名 - 品 - 女 - 记 - 理 - 事 - 长 - 两 - 商 - 都 - 们 - 京 - 并 - 但 - 平 - 制 - 保 - 据 - 期 - 化 - 主 - 重 - 表 - 次 - 相 - 量 - 通 - 道 - 政 - 所 - 天 - 第 - 利 - 间 - 海 - 数 - 务 - 提 - 北 - 展 - 员 - 管 - 投 - 因 - 建 - 好 - 外 - 区 - 更 - 示 - 增 - 从 - 计 - 信 - 性 - 等 - 运 - 项 - 应 - 当 - 收 - 位 - 着 - 起 - 学 - 台 - 民 - 持 - 规 - 设 - 明 - 股 - 正 - 没 - 心 - 然 - 很 - 今 - 调 - 去 - 安 - 此 - 东 - 队 - 如 - 线 - 科 - 世 - 无 - 达 - 身 - 果 - 证 - 基 - 受 - 男 - 需 - 标 - 布 - 情 - 格 - 近 - 步 - 未 - 费 - 求 - 式 - 消 - 千 - 美 - 些 - 里 - 米 - 向 - 看 - 续 - 息 - 意 - 接 - 门 - 回 - 及 - 销 - 老 - 获 - 总 - 监 - 打 - 联 - 至 - 亿 - 说 - 讯 - 住 - 环 - 件 - 整 - 水 - 技 - 路 - 院 - 局 - 特 - 该 - 统 - 由 - 售 - 购 - 强 - 改 - 问 - 乐 - 楼 - 涨 - 处 - 决 - 让 - 系 - 户 - 题 - 推 - 少 - 广 - 显 - 降 - 跑 - 影 - 只 - 选 - 称 - 创 - 易 - 战 - 首 - 完 - 案 - 策 - 常 - 查 - 参 - 种 - 牌 - 程 - 银 - 备 - 认 - 营 - 立 - 势 - 结 - 造 - 超 - 己 - 准 - 存 - 险 - 球 - 各 - 代 - 低 - 再 - 做 - 级 - 款 - 放 - 物 - 告 - 原 - 友 - 转 - 警 - 周 - 界 - 张 - 样 - 传 - 较 - 风 - 单 - 给 - 她 - 州 - 解 - 则 - 视 - 指 - 预 - 升 - 华 - 供 - 走 - 每 - 取 - 导 - 搜 - 集 - 文 - 变 - 客 - 排 - 片 - 头 - 任 - 积 - 术 - 率 - 型 - 军 - 斯 - 研 - 别 - 非 - 直 - 智 - 速 - 组 - 星 - 领 - 口 - 份 - 岁 - 马 - 王 - 快 - 专 - 社 - 使 - 团 - 模 - 器 - 难 - 活 - 拉 - 或 - 约 - 施 - 源 - 构 - 支 - 医 - 儿 - 带 - 服 - 先 - 想 - 引 - 么 - 办 - 照 - 狐 - 权 - 微 - 南 - 始 - 融 - 深 - 士 - 游 - 绩 - 仅 - 况 - 媒 - 随 - 半 - 越 - 幅 - 确 - 注 - 类 - 争 - 税 - 限 - 流 - 均 - 控 - 充 - 额 - 望 - 连 - 划 - 奥 - 亚 - 包 - 娱 - 西 - 财 - 值 - 伤 - 某 - 致 - 终 - 空 - 济 - 众 - 际 - 土 - 买 - 仍 - 育 - 师 - 汽 - 知 - 质 - 态 - 具 - 李 - 责 - 究 - 露 - 条 - 几 - 居 - 共 - 响 - 反 - 站 - 冠 - 节 - 季 - 优 - 委 - 宅 - 观 - 互 - 见 - 范 - 境 - 感 - 负 - 段 - 失 - 采 - 套 - 域 - 尔 - 举 - 何 - 光 - 气 - 落 - 博 - 教 - 锦 - 林 - 山 - 依 - 继 - 极 - 形 - 图 - 审 - 竞 - 益 - 断 - 贷 - 效 - 府 - 复 - 许 - 容 - 健 - 击 - 足 - 又 - 诉 - 助 - 孩 - 色 - 停 - 票 - 双 - 拿 - 板 - 松 - 热 - 那 - 把 - 却 - 清 - 刘 - 议 - 考 - 减 - 曾 - 疑 - 例 - 除 - 功 - 占 - 你 - 试 - 根 - 港 - 太 - 离 - 才 - 货 - 突 - 涉 - 且 - 券 - 配 - 盘 - 即 - 库 - 付 - 破 - 职 - 演 - 农 - 置 - 纪 - 论 - 真 - 龙 - 晚 - 装 - 爱 - 号 - 练 - 死 - 压 - 亲 - 严 - 评 - 田 - 话 - 托 - 护 - 火 - 协 - 红 - 江 - 克 - 卖 - 言 - 租 - 善 - 频 - 普 - 飞 - 验 - 补 - 边 - 满 - 象 - 软 - 算 - 遭 - 馀 - 闻 - 稳 - 厂 - 远 - 苹 - 钱 - 担 - 判 - 官 - 虽 - 湾 - 按 - 昨 - 校 - 必 - 园 - 略 - 救 - 希 - 底 - 执 - 够 - 征 - 拍 - 历 - 像 - 润 - 层 - 债 - 便 - 障 - 围 - 康 - 店 - 往 - 列 - 早 - 测 - 录 - 否 - 香 - 宝 - 阳 - 索 - 核 - 兴 - 检 - 状 - 英 - 村 - 料 - 云 - 留 - 夫 - 移 - 奖 - 病 - 临 - 轻 - 省 - 
秒 - 激 - 请 - 革 - 属 - 遇 - 跌 - 维 - 批 - 德 - 承 - 端 - 介 - 精 - 夺 - 群 - 初 - 胜 - 卡 - 尽 - 花 - 辆 - 它 - 故 - 神 - 届 - 治 - 透 - 景 - 白 - 副 - 什 - 宣 - 铁 - 杨 - 跳 - 假 - 登 - 福 - 青 - 药 - 婚 - 养 - 幕 - 违 - 短 - 访 - 修 - 纷 - 律 - 左 - 角 - 酒 - 括 - 爆 - 嫌 - 径 - 宁 - 董 - 适 - 逐 - 刚 - 防 - 陈 - 午 - 差 - 庭 - 独 - 波 - 食 - 识 - 似 - 候 - 黄 - 亡 - 训 - 书 - 退 - 待 - 航 - 块 - 冲 - 扩 - 吴 - 甚 - 申 - 伟 - 眼 - 巴 - 觉 - 找 - 换 - 义 - 轮 - 滑 - 席 - 央 - 送 - 右 - 卫 - 乘 - 石 - 字 - 罪 - 罗 - 泳 - 孙 - 析 - 志 - 另 - 母 - 绿 - 抢 - 止 - 令 - 童 - 妈 - 史 - 刑 - 洲 - 述 - 穿 - 念 - 纳 - 损 - 富 - 免 - 毒 - 络 - 紧 - 妻 - 乎 - 豪 - 素 - 害 - 倒 - 吸 - 街 - 促 - 择 - 杀 - 追 - 巨 - 犯 - 声 - 愿 - 晨 - 思 - 谈 - 河 - 镇 - 尼 - 跟 - 庆 - 链 - 措 - 借 - 赔 - 密 - 圳 - 贴 - 苏 - 温 - 骗 - 习 - 摄 - 版 - 帮 - 币 - 阶 - 阿 - 迎 - 驾 - 黑 - 趋 - 县 - 私 - 吃 - 疗 - 细 - 虑 - 脑 - 韩 - 亮 - 旅 - 抓 - 罚 - 良 - 背 - 脸 - 绝 - 班 - 危 - 础 - 戏 - 戴 - 招 - 命 - 尚 - 缺 - 伙 - 须 - 父 - 夜 - 切 - 操 - 挥 - 派 - 延 - 撞 - 披 - 衣 - 剧 - 陆 - 竟 - 签 - 欧 - 享 - 春 - 徽 - 裁 - 偿 - 启 - 艺 - 宗 - 味 - 察 - 估 - 净 - 募 - 拥 - 释 - 喜 - 顺 - 励 - 靠 - 渐 - 兰 - 油 - 佳 - 困 - 针 - 迷 - 写 - 材 - 硬 - 桥 - 坚 - 订 - 拳 - 累 - 盖 - 室 - 束 - 截 - 距 - 驶 - 旬 - 歌 - 悉 - 烈 - 序 - 患 - 干 - 污 - 圈 - 杰 - 顶 - 败 - 伴 - 归 - 探 - 曝 - 怀 - 急 - 池 - 织 - 秀 - 姐 - 峰 - 顾 - 误 - 键 - 丰 - 玩 - 汉 - 古 - 彩 - 讨 - 朋 - 抗 - 刺 - 挑 - 血 - 凌 - 旧 - 拟 - 晒 - 附 - 惊 - 欢 - 劳 - 丈 - 播 - 徐 - 吗 - 湖 - 笑 - 馆 - 音 - 阵 - 坐 - 谷 - 异 - 怎 - 夏 - 龄 - 熟 - 若 - 惠 - 休 - 永 - 哪 - 暂 - 输 - 绍 - 印 - 冰 - 缓 - 暖 - 听 - 避 - 嘉 - 寻 - 培 - 筹 - 伦 - 雪 - 账 - 暴 - 简 - 予 - 丽 - 泽 - 刻 - 野 - 威 - 宽 - 笔 - 语 - 武 - 炒 - 虚 - 架 - 奇 - 哥 - 尤 - 座 - 迅 - 粉 - 倍 - 朱 - 屋 - 般 - 错 - 津 - 弟 - 汇 - 概 - 鼓 - 掉 - 郑 - 钟 - 召 - 礼 - 禁 - 折 - 缩 - 锁 - 涛 - 乡 - 肥 - 幸 - 雨 - 梦 - 肉 - 攻 - 冬 - 呼 - 蓝 - 综 - 码 - 杯 - 映 - 刀 - 谢 - 编 - 脚 - 晓 - 遍 - 朝 - 吉 - 洗 - 盗 - 丹 - 屏 - 盛 - 秘 - 拘 - 染 - 渠 - 扣 - 洋 - 梯 - 枪 - 久 - 诈 - 川 - 摩 - 俄 - 迪 - 毛 - 赞 - 符 - 画 - 翻 - 妹 - 筑 - 聚 - 哈 - 兵 - 肯 - 胎 - 潮 - 苦 - 逃 - 讲 - 授 - 慢 - 顿 - 遗 - 丝 - 呈 - 揭 - 挂 - 封 - 慧 - 跨 - 询 - 拆 - 森 - 孕 - 脱 - 读 - 枚 - 捐 - 桩 - 跃 - 刷 - 芯 - 斗 - 昆 - 储 - 守 - 触 - 木 - 皮 - 饭 - 添 - 莞 - 震 - 载 - 贵 - 侵 - 撑 - 爸 - 册 - 舞 - 丁 - 贸 - 奶 - 隐 - 妇 - 榜 - 睡 - 陷 - 草 - 扬 - 袭 - 偷 - 督 - 亏 - 吕 - 珠 - 赶 - 扶 - 盈 - 档 - 诺 - 返 - 既 - 末 - 沙 - 谁 - 宏 - 摘 - 典 - 床 - 闭 - 弃 - 雷 - 毕 - 郭 - 玲 - 郎 - 芝 - 胡 - 瑞 - 盟 - 厅 - 抱 - 燃 - 铜 - 旗 - 荣 - 餐 - 牙 - 爷 - 迹 - 宇 - 途 - 潜 - 抵 - 骨 - 援 - 浪 - 玉 - 祖 - 振 - 虹 - 散 - 焦 - 勇 - 努 - 婆 - 拒 - 弹 - 梁 - 坛 - 含 - 坏 - 纯 - 烟 - 冷 - 镜 - 叫 - 赵 - 静 - 仪 - 藏 - 杂 - 痛 - 慎 - 树 - 章 - 塞 - 钢 - 狂 - 呢 - 雅 - 寿 - 恩 - 固 - 狗 - 菜 - 沟 - 献 - 叶 - 泰 - 赢 - 剩 - 窃 - 偏 - 掌 - 宜 - 课 - 趣 - 喝 - 纠 - 籍 - 替 - 炸 - 隔 - 砸 - 搭 - 诚 - 族 - 浙 - 齐 - 杆 - 晋 - 恶 - 奋 - 秋 - 鲜 - 鲁 - 冒 - 赚 - 弱 - 腿 - 祝 - 混 - 缴 - 疾 - 握 - 汪 - 辉 - 奔 - 醒 - 捕 - 骑 - 鸟 - 摆 - 灵 - 敏 - 牛 - 岛 - 恋 - 耗 - 瓦 - 拼 - 恐 - 棒 - 坦 - 厚 - 侧 - 尝 - 薪 - 堂 - 曲 - 答 - 雄 - 徒 - 碍 - 拓 - 翔 - 佛 - 佐 - 滴 - 杭 - 残 - 毫 - 射 - 拖 - 阻 - 辑 - 踪 - 症 - 姓 - 欲 - 鱼 - 船 - 恢 - 衡 - 淡 - 唯 - 乏 - 迟 - 琪 - 烧 - 唐 - 卷 - 陪 - 伏 - 劵 - 繁 - 逆 - 迁 - 诊 - 乱 - 亦 - 谓 - 矿 - 迫 - 忧 - 扮 - 巢 - 扎 - 卓 - 恒 - 庄 - 递 - 灾 - 莱 - 赴 - 煤 - 搏 - 剂 - 梅 - 吧 - 撤 - 哲 - 炳 - 尾 - 誉 - 洛 - 轨 - 署 - 党 - 惯 - 幼 - 缘 - 墨 - 莫 - 辞 - 奏 - 敢 - 垄 - 旁 - 蒙 - 箱 - 吨 - 泛 - 怕 - 闹 - 欠 - 劫 - 纸 - 岸 - 淘 - 赌 - 窗 - 洁 - 岗 - 娘 - 晶 - 劲 - 凭 - 斤 - 洪 - 液 - 槛 - 兼 - 摔 - 楚 - 昌 - 菲 - 萌 - 伍 - 沿 - 咨 - 饮 - 墙 - 沈 - 坡 - 寸 - 溢 - 仓 - 鉴 - 慈 - 柯 - 旦 - 殊 - 坠 - 诸 - 搞 - 伊 - 霸 - 绑 - 氧 - 墅 - 轿 - 蛋 - 忙 - 滨 - 井 - 逼 - 伯 - 癌 - 燕 - 赖 - 浦 - 漏 - 携 - 堪 - 阅 - 诗 - 贩 - 腐 - 倾 - 铺 - 旺 - 横 - 逊 - 允 - 窄 - 鸡 - 唱 - 贿 - 拨 - 砍 - 猛 - 碳 - 堵 - 邀 - 冕 - 栏 - 姆 - 耳 - 绕 - 览 - 聘 - 琳 - 霞 - 挖 - 庞 - 彻 - 颁 - 挺 - 沉 - 抄 - 宫 - 殴 - 垃 - 圾 - 尸 - 涵 - 娃 - 婷 - 牵 - 腾 - 卧 - 偶 - 扰 - 澳 - 迈 - 虎 - 贡 - 词 - 壁 - 宾 - 捷 - 忍 - 佩 - 喊 - 抽 - 植 - 炼 - 奸 - 吐 - 抛 - 祥 - 莉 - 泄 - 械 - 乒 - 辛 - 疯 - 凯 - 扫 - 灯 - 淀 - 毁 - 鬼 - 婴 - 淫 - 冻 - 篮 - 聊 - 帅 - 乔 - 沪 - 羽 - 舍 - 裂 - 忽 - 圆 - 拔 - 朗 - 宿 - 麻 - 眠 - 玮 - 塔 - 碰 - 怪 - 押 - 攀 - 驰 
- 欣 - 踏 - 巩 - 废 - 艰 - 乳 - 句 - 侦 - 兄 - 荐 - 寓 - 厦 - 贝 - 纵 - 肖 - 杜 - 忘 - 丢 - 搬 - 曼 - 瓶 - 鹏 - 默 - 惨 - 泡 - 愈 - 敦 - 洞 - 劝 - 颖 - 酷 - 颜 - 巡 - 脏 - 仿 - 羊 - 挤 - 廉 - 麦 - 塌 - 君 - 敌 - 乌 - 俩 - 樊 - 邮 - 烯 - 详 - 舒 - 契 - 漫 - 胞 - 魔 - 宋 - 伐 - 谨 - 姿 - 姑 - 隆 - 纹 - 傅 - 茶 - 著 - 谋 - 敬 - 郁 - 驱 - 菌 - 悬 - 循 - 摊 - 闪 - 伪 - 鸿 - 娜 - 澎 - 湃 - 炉 - 暗 - 闯 - 绪 - 汰 - 稿 - 咬 - 卢 - 泉 - 涌 - 蕾 - 姻 - 熊 - 稀 - 摇 - 吊 - 桌 - 俊 - 哭 - 赠 - 逸 - 吓 - 赫 - 凡 - 俱 - 冯 - 巧 - 涯 - 啦 - 讼 - 恰 - 抚 - 肇 - 锋 - 凶 - 贯 - 悄 - 灭 - 冀 - 糕 - 伸 - 胖 - 腹 - 郊 - 斌 - 鑫 - 厉 - 肩 - 圣 - 浮 - 妙 - 饰 - 尖 - 尊 - 邱 - 诞 - 屡 - 摸 - 酬 - 闲 - 晰 - 匹 - 锻 - 甲 - 敲 - 遥 - 勒 - 兑 - 熙 - 稽 - 蔡 - 惜 - 猫 - 怒 - 驻 - 颇 - 浓 - 宴 - 仁 - 赏 - 磨 - 悲 - 骂 - 轴 - 姜 - 猪 - 割 - 歉 - 玻 - 浩 - 番 - 渡 - 肌 - 践 - 盾 - 甜 - 溺 - 尺 - 忆 - 盐 - 泥 - 薄 - 矛 - 畅 - 抑 - 颗 - 蒋 - 稍 - 碎 - 帝 - 璃 - 掀 - 拐 - 牢 - 幻 - 仔 - 粮 - 艾 - 扭 - 尿 - 刊 - 仑 - 黎 - 埃 - 臂 - 邻 - 苗 - 衔 - 桂 - 潭 - 履 - 贾 - 饼 - 惩 - 诱 - 旋 - 篇 - 辽 - 旭 - 逾 - 豆 - 潘 - 堆 - 甘 - 邦 - 氏 - 拦 - 硕 - 棋 - 裤 - 乓 - 姚 - 厘 - 邓 - 陶 - 萨 - 弗 - 辅 - 廷 - 吁 - 杠 - 绮 - 瑄 - 夹 - 槽 - 祸 - 袁 - 勾 - 赁 - 帖 - 腰 - 漂 - 裕 - 嘴 - 壮 - 弯 - 啊 - 汤 - 垫 - 魏 - 倡 - 栋 - 碑 - 颈 - 暑 - 魅 - 裸 - 疏 - 雇 - 毅 - 忠 - 疆 - 葛 - 凤 - 屈 - 悦 - 馈 - 挡 - 闫 - 氮 - 兆 - 貌 - 厕 - 谣 - 颠 - 猜 - 疲 - 框 - 揽 - 胁 - 憾 - 秩 - 艳 - 帽 - 氛 - 荷 - 泪 - 剑 - 懂 - 钻 - 遵 - 贪 - 贼 - 狱 - 姣 - 寺 - 胶 - 吵 - 催 - 削 - 丑 - 欺 - 肃 - 妥 - 烦 - 灰 - 擅 - 佣 - 萧 - 虾 - 鞋 - 捧 - 逝 - 猥 - 瓜 - 酸 - 奈 - 厨 - 紫 - 侠 - 塑 - 娇 - 辖 - 舆 - 擦 - 柏 - 澄 - 磊 - 虐 - 轰 - 曹 - 删 - 鼻 - 柳 - 屯 - 笼 - 皇 - 糖 - 珍 - 疼 - 柜 - 捡 - 址 - 肠 - 捞 - 拜 - 峻 - 吹 - 乃 - 瘦 - 肚 - 贤 - 帕 - 岳 - 勤 - 瑜 - 锅 - 沫 - 俗 - 昕 - 帆 - 茂 - 醉 - 填 - 饱 - 爬 - 轩 - 滞 - 蜜 - 汗 - 飙 - 耐 - 亨 - 媳 - 彭 - 蓄 - 蝶 - 炮 - 鼠 - 咖 - 琴 - 宠 - 棍 - 掘 - 茨 - 坑 - 湘 - 孟 - 劣 - 灿 - 虫 - 彦 - 喷 - 描 - 辩 - 尴 - 尬 - 弥 - 孤 - 峡 - 凸 - 逻 - 辰 - 孔 - 抬 - 馨 - 蔚 - 怡 - 雯 - 砖 - 崇 - 肢 - 柱 - 阔 - 彼 - 荒 - 滚 - 葡 - 萄 - 昂 - 盆 - 怨 - 瞬 - 斜 - 斩 - 睛 - 剪 - 插 - 棚 - 串 - 沃 - 柔 - 肤 - 壳 - 胸 - 陕 - 凉 - 崛 - 鸣 - 罕 - 衷 - 阴 - 盲 - 伞 - 戒 - 踢 - 狼 - 埋 - 酿 - 旨 - 戈 - 捉 - 跪 - 贺 - 谭 - 涂 - 萎 - 滋 - 昏 - 扇 - 鼎 - 楠 - 驳 - 溪 - 桑 - 钧 - 荡 - 痕 - 玛 - 躲 - 谐 - 您 - 叹 - 桶 - 晕 - 丙 - 璇 - 咚 - 烂 - 杉 - 挣 - 窝 - 亵 - 芸 - 渝 - 芳 - 妆 - 膜 - 煌 - 尘 - 侯 - 赋 - 渣 - 贫 - 桃 - 页 - 吞 - 胀 - 竹 - 肝 - 雾 - 嫁 - 辈 - 愤 - 琐 - 殖 - 媛 - 寄 - 僵 - 逮 - 聪 - 粗 - 寒 - 弄 - 墓 - 谌 - 扔 - 役 - 呆 - 靖 - 蒂 - 芬 - 翼 - 喂 - 孵 - 谎 - 硅 - 璨 - 喀 - 盼 - 盒 - 慌 - 烫 - 秦 - 梳 - 韦 - 袋 - 钓 - 夕 - 碗 - 寨 - 塘 - 衍 - 垒 - 卿 - 滩 - 扑 - 绘 - 辱 - 炎 - 铅 - 肿 - 衰 - 厢 - 躺 - 纽 - 硫 - 睐 - 翁 - 慰 - 耍 - 缠 - 狠 - 脉 - 斥 - 脂 - 趴 - 钩 - 歧 - 椅 - 踩 - 掷 - 挽 - 锐 - 勘 - 逢 - 郝 - 宪 - 胃 - 粒 - 瞩 - 辟 - 皆 - 仰 - 腕 - 匪 - 陵 - 钥 - 缝 - 闸 - 犬 - 锡 - 弊 - 凝 - 臭 - 趁 - 拾 - 夸 - 掩 - 耀 - 炭 - 铬 - 叠 - 坊 - 挪 - 蟹 - 裹 - 狮 - 辐 - 陌 - 捅 - 疫 - 兹 - 霍 - 锈 - 娟 - 蚁 - 奢 - 吻 - 侃 - 晖 - 扳 - 冤 - 彰 - 蹈 - 畴 - 蛇 - 濠 - 啡 - 堡 - 侣 - 撒 - 铭 - 掏 - 奎 - 蜂 - 咸 - 穷 - 瞄 - 遂 - 碾 - 匿 - 瓷 - 舱 - 刹 - 柄 - 倪 - 睹 - 译 - 淇 - 猝 - 浅 - 肺 - 湿 - 顽 - 罩 - 胆 - 匙 - 渴 - 妮 - 羞 - 脆 - 魄 - 锂 - 纤 - 炫 - 裙 - 肾 - 傲 - 膝 - 叔 - 啥 - 撕 - 牲 - 猴 - 辨 - 酝 - 刮 - 惑 - 渗 - 喻 - 晴 - 淑 - 羡 - 慕 - 擂 - 骚 - 纺 - 咕 - 僧 - 悔 - 垂 - 瘫 - 剥 - 舰 - 浏 - 鲍 - 跻 - 亭 - 撰 - 卸 - 莲 - 纱 - 糊 - 朵 - 岩 - 眉 - 函 - 糟 - 仗 - 惹 - 琦 - 贞 - 氢 - 楷 - 莓 - 瞒 - 奠 - 勃 - 锤 - 妨 - 帷 - 洽 - 乞 - 牺 - 亩 - 簿 - 斑 - 翘 - 祈 - 唇 - 耕 - 扯 - 妍 - 坎 - 谱 - 盯 - 泼 - 悍 - 莎 - 汁 - 囊 - 甩 - 辣 - 浸 - 恼 - 盔 - 烤 - 坝 - 巅 - 沸 - 抹 - 邹 - 霾 - 怖 - 犹 - 擎 - 迄 - 恨 - 丧 - 坞 - 袖 - 赤 - 萍 - 爽 - 穆 - 娶 - 闷 - 捍 - 膀 - 侈 - 筋 - 逛 - 倩 - 纲 - 遮 - 御 - 姨 - 淮 - 宰 - 叉 - 绵 - 惧 - 钦 - 廊 - 鳄 - 砂 - 浆 - 禽 - 咏 - 瘾 - 饿 - 痴 - 绳 - 碟 - 韵 - 皓 - 廖 - 岭 - 蛙 - 兔 - 芽 - 剖 - 嫖 - 昔 - 哀 - 蔓 - 谦 - 滥 - 赂 - 渊 - 捣 - 佑 - 弈 - 仙 - 澡 - 骤 - 侨 - 奉 - 磅 - 慨 - 筛 - 嘲 - 竣 - 箭 - 荧 - 脖 - 彤 - 豫 - 躁 - 秉 - 鹤 - 幺 - 渔 - 罢 - 贬 - 铲 - 卵 - 逗 - 牧 - 蔬 - 苑 - 沦 - 遏 - 柴 - 庙 - 兽 - 耶 - 魂 - 溜 - 缉 - 俏 - 蕴 - 苛 - 凑 - 婿 - 铸 - 兜 - 蹭 - 鸭 - 朴 - 肋 - 噪 - 焚 - 坍 - 啤 - 钉 - 戚 - 谍 - 挫 - 艇 - 余 - 巷 - 屠 - 咋 - 詹 - 衫 - 浴 - 爹 - 孝 - 瘤 - 
霖 - 崩 - 甸 - 悼 - 擒 - 浇 - 雕 - 竖 - 帐 - 萤 - 靡 - 漠 - 傻 - 撼 - 崔 - 筒 - 脊 - 嘛 - 臣 - 禾 - 龟 - 唤 - 呀 - 壤 - 灌 - 邵 - 稻 - 巾 - 葩 - 饥 - 缔 - 舌 - 窜 - 秽 - 茅 - 靓 - 阱 - 钞 - 潼 - 硝 - 墩 - 蝙 - 蝠 - 嫂 - 艘 - 嚣 - 铃 - 扒 - 佬 - 竭 - 赎 - 傍 - 熬 - 悠 - 挨 - 泊 - 攒 - 坪 - 焰 - 螺 - 薇 - 蛛 - 牟 - 忌 - 愧 - 酵 - 迭 - 饶 - 惟 - 钮 - 闵 - 碧 - 徘 - 徊 - 溯 - 棉 - 歪 - 捂 - 蚊 - 锰 - 屁 - 畸 - 肪 - 蹲 - 剔 - 榆 - 撇 - 瑟 - 讶 - 飘 - 蒸 - 诠 - 寂 - 罄 - 莹 - 鹅 - 泣 - 崖 - 珊 - 讳 - 翰 - 蜘 - 仲 - 燥 - 菱 - 滢 - 煎 - 蛮 - 瞻 - 蘑 - 菇 - 隙 - 捆 - 蕉 - 遣 - 宛 - 肆 - 丸 - 磁 - 玥 - 嵌 - 韶 - 枝 - 咪 - 愉 - 呕 - 淤 - 誓 - 辄 - 俯 - 桐 - 舅 - 蓉 - 渭 - 氯 - 溅 - 雁 - 龚 - 恺 - 妖 - 饽 - 荆 - 枯 - 仇 - 坟 - 澜 - 麟 - 藤 - 猎 - 洒 - 茹 - 碌 - 畏 - 涤 - 俞 - 勿 - 蔽 - 罐 - 尹 - 堰 - 儒 - 芮 - 孚 - 哗 - 掐 - 矶 - 椎 - 阐 - 驴 - 蝉 - 焕 - 鄂 - 耻 - 炯 - 衬 - 婉 - 愁 - 梨 - 丛 - 谅 - 膨 - 曙 - 鹿 - 骄 - 缅 - 匆 - 赃 - 蒲 - 睁 - 焱 - 灼 - 刃 - 螃 - 瑕 - 讹 - 禅 - 臀 - 姗 - 媚 - 呛 - 凰 - 瀚 - 埔 - 弓 - 阚 - 湛 - 奕 - 扛 - 齿 - 挟 - 髓 - 狭 - 栈 - 骏 - 崭 - 慑 - 殿 - 祭 - 僻 - 蹬 - 寡 - 呦 - 鞠 - 酱 - 瑰 - 馒 - 坤 - 趟 - 臻 - 咒 - 豹 - 畜 - 冉 - 绎 - 岌 - 甄 - 绞 - 宵 - 庸 - 歇 - 挠 - 氨 - 乙 - 茵 - 岔 - 淄 - 碘 - 淋 - 蓬 - 颅 - 羹 - 浑 - 昧 - 翠 - 峥 - 惕 - 睿 - 芦 - 蚀 - 颓 - 霜 - 钰 - 橘 - 堤 - 凳 - 溶 - 锯 - 幂 - 榴 - 娼 - 汹 - 茫 - 厌 - 绰 - 崎 - 溃 - 撬 - 沾 - 拇 - 疵 - 哦 - 弧 - 弘 - 咽 - 葬 - 阁 - 竿 - 篡 - 隶 - 诟 - 煮 - 丘 - 耿 - 彬 - 敞 - 泻 - 夷 - 隅 - 渎 - 淹 - 骆 - 醋 - 霆 - 涩 - 陀 - 叙 - 梗 - 冶 - 敛 - 痪 - 讽 - 疤 - 螂 - 芒 - 幢 - 炜 - 毯 - 橙 - 拢 - 俨 - 仕 - 氰 - 钾 - 呐 - 株 - 脾 - 烨 - 磕 - 薛 - 窖 - 芷 - 蜕 - 衅 - 歹 - 哒 - 诡 - 摧 - 漆 - 蟑 - 劈 - 呵 - 絮 - 抖 - 娅 - 铝 - 霉 - 芭 - 辜 - 昊 - 嘘 - 哑 - 枢 - 脐 - 庐 - 钠 - 鳌 - 矩 - 锆 - 婧 - 沛 - 饲 - 熄 - 翡 - 屹 - 膏 - 阙 - 搂 - 锣 - 幌 - 橄 - 榄 - 杖 - 旷 - 矫 - 冈 - 舟 - 腊 - 聂 - 拣 - 遛 - 勋 - 窘 - 韧 - 咱 - 拎 - 椒 - 揣 - 殷 - 揪 - 伽 - 贱 - 琼 - 菡 - 闺 - 昭 - 雏 - 蹊 - 黛 - 禹 - 鞍 - 乖 - 汝 - 甫 - 彝 - 泸 - 诬 - 拽 - 毽 - 搅 - 葵 - 旱 - 勉 - 跷 - 畔 - 肘 - 坂 - 漩 - 涡 - 倘 - 醛 - 曦 - 铀 - 杏 - 棕 - 幽 - 裴 - 阮 - 敷 - 茄 - 沧 - 剽 - 恳 - 淳 - 萱 - 袱 - 亥 - 痱 - 腔 - 嫉 - 粹 - 焊 - 诀 - 粪 - 朔 - 黯 - 谜 - 眨 - 祁 - 暧 - 魁 - 辗 - 穗 - 倦 - 剿 - 袍 - 恭 - 炙 - 娴 - 玫 - 锏 - 熏 - 窥 - 堕 - 悟 - 晃 - 缪 - 驿 - 泷 - 雀 - 惫 - 玺 - 剃 - 斐 - 袂 - 梭 - 哄 - 邪 - 岂 - 腻 - 嫩 - 榕 - 谴 - 潇 - 纬 - 侮 - 翅 - 镶 - 坷 - 彪 - 祷 - 匝 - 耽 - 萝 - 窑 - 瑾 - 滤 - 拱 - 哨 - 蠢 - 邢 - 涞 - 恤 - 泾 - 谤 - 瀑 - 舶 - 懈 - 忱 - 烹 - 晟 - 踞 - 剁 - 珉 - 庚 - 晤 - 壶 - 砾 - 嗅 - 妒 - 匈 - 胰 - 绯 - 荼 - 爪 - 茜 - 桦 - 蜇 - 芜 - 玄 - 葫 - 蚂 - 绊 - 搁 - 霏 - 粘 - 佟 - 雍 - 垮 - 羁 - 娥 - 碱 - 磷 - 钊 - 毙 - 诿 - 绸 - 捏 - 遴 - 畊 - 厮 - 巫 - 猖 - 獗 - 掴 - 辍 - 蜡 - 赣 - 筵 - 芙 - 蒜 - 缆 - 俪 - 鹰 - 笋 - 毋 - 喆 - 鹭 - 蝴 - 汀 - 诽 - 桔 - 篷 - 莽 - 栖 - 饪 - 伺 - 戳 - 谊 - 霄 - 侄 - 滔 - 瞎 - 皱 - 蛟 - 裔 - 烽 - 猿 - 叮 - 绷 - 腺 - 暨 - 沥 - 喧 - 囤 - 掠 - 陡 - 膺 - 痒 - 饵 - 戎 - 褚 - 丐 - 渤 - 帜 - 娄 - 洼 - 禄 - 婵 - 琢 - 躯 - 禺 - 峙 - 踹 - 怜 - 炖 - 剐 - 缚 - 襄 - 枫 - 绽 - 庾 - 斧 - 穴 - 寇 - 蝇 - 鞭 - 阎 - 矢 - 糙 - 巍 - 蒿 - 殒 - 蛰 - 囧 - 卜 - 宙 - 珮 - 鸦 - 璞 - 翟 - 酗 - 褒 - 豁 - 镑 - 耷 - 棠 - 垦 - 韬 - 荫 - 窨 - 鸽 - 羲 - 懒 - 躬 - 匕 - 犀 - 吼 - 珀 - 昙 - 樱 - 蹿 - 抉 - 苍 - 汛 - 铉 - 镉 - 喔 - 邯 - 郸 - 噱 - 瓯 - 沼 - 捻 - 苯 - 蹼 - 麋 - 阀 - 煞 - 踝 - 缭 - 菊 - 竺 - 峭 - 攥 - 癖 - 肛 - 泔 - 拯 - 窟 - 靳 - 舵 - 嘱 - 昱 - 勺 - 吾 - 丫 - 觅 - 醇 - 磋 - 徙 - 陨 - 惺 - 渍 - 炬 - 栽 - 晏 - 颂 - 奴 - 榔 - 驭 - 嚼 - 赡 - 豚 - 蔷 - 梓 - 梧 - 哽 - 晗 - 汞 - 嫣 - 蕊 - 祺 - 疹 - 壹 - 噬 - 皂 - 矗 - 悚 - 憧 - 憬 - 拷 - 扁 - 廓 - 蹴 - 岚 - 瑛 - 崴 - 栗 - 囚 - 涿 - 礁 - 晔 - 殡 - 璀 - 淞 - 隋 - 踵 - 钵 - 煊 - 赘 - 瞧 - 寞 - 陋 - 骷 - 髅 - 秸 - 秆 - 夯 - 荔 - 襁 - 褓 - 笨 - 沮 - 瞅 - 怂 - 茗 - 甥 - 亟 - 杳 - 煦 - 挚 - 棵 - 祠 - 嗯 - 枕 - 粟 - 泌 - 蜀 - 寥 - 遐 - 涝 - 辫 - 籁 - 窍 - 聋 - 逍 - 跤 - 凹 - 釜 - 嘀 - 嗒 - 淝 - 藜 - 翱 - 硚 - 叼 - 痹 - 腼 - 腆 - 伎 - 骋 - 愕 - 腥 - 拮 - 轧 - 癫 - 橡 - 膊 - 觑 - 寅 - 砒 - 趾 - 颐 - 漳 - 峨 - 呜 - 淆 - 凿 - 壕 - 铨 - 莆 - 筷 - 璧 - 譬 - 岖 - 抠 - 笛 - 厥 - 砺 - 喉 - 酌 - 簧 - 鲸 - 踊 - 牡 - 嬛 - 缜 - 奂 - 熹 - 闽 - 馊 - 胯 - 喇 - 伶 - 墟 - 煜 - 耘 - 榷 - 骁 - 猩 - 辙 - 狸 - 滕 - 诵 - 窒 - 恍 - 髦 - 诫 - 榨 - 熠 - 蔺 - 薯 - 歆 - 粤 - 夭 - 拌 - 唏 - 厄 - 吝 - 眷 - 峪 - 拙 - 咎 - 粥 - 痰 - 琅 - 羚 - 莘 - 憨 - 瞰 - 炅 - 孜 - 亢 - 缮 - 焯 - 咄 - 暇 - 矮 - 汲 
- 灶 - 闰 - 奚 - 汶 - 珲 - 麓 - 憋 - 崂 - 镳 - 殃 - 卉 - 诧 - 矣 - 屎 - 聆 - 芋 - 屑 - 罂 - 籽 - 绚 - 卞 - 枉 - 汕 - 懋 - 媲 - 啧 - 掣 - 嬉 - 仨 - 姬 - 懿 - 馅 - 胺 - 撂 - 睫 - 蛐 - 萃 - 眈 - 飚 - 毓 - 涅 - 昼 - 橱 - 驼 - 涠 - 谩 - 婶 - 膛 - 拄 - 绣 - 栅 - 邬 - 怠 - 鄙 - 哉 - 跺 - 帘 - 沓 - 搀 - 腌 - 羿 - 泵 - 鄞 - 郡 - 烃 - 愚 - 蕙 - 垤 - 锌 - 柠 - 檬 - 葱 - 垢 - 匮 - 卦 - 懊 - 掺 - 叱 - 坯 - 糯 - 覆 - 铆 - 琬 - 抡 - 潢 - 棺 - 塾 - 飓 - 诅 - 翩 - 揍 - 檀 - 鳝 - 讪 - 熔 - 杞 - 啃 - 昀 - 紊 - 敖 - 璐 - 蔗 - 槌 - 铐 - 搡 - 磐 - 宕 - 栓 - 叭 - 戟 - 顷 - 濒 - 窦 - 摁 - 俐 - 瞳 - 蚕 - 鹊 - 迂 - 畿 - 瓣 - 媞 - 寝 - 蹦 - 嗑 - 袒 - 殉 - 稚 - 俘 - 搪 - 沽 - 妃 - 嗓 - 胫 - 町 - 莴 - 苣 - 痘 - 蔑 - 皖 - 枞 - 忐 - 忑 - 靴 - 菁 - 姥 - 诙 - 嚷 - 焉 - 沣 - 霹 - 雳 - 僚 - 尧 - 嘎 - 诩 - 咫 - 柬 - 惮 - 狄 - 匀 - 裆 - 黏 - 釉 - 膳 - 渺 - 苟 - 瑶 - 唾 - 瘠 - 讧 - 睦 - 弦 - 庇 - 袄 - 噩 - 扼 - 戛 - 禀 - 恿 - 滁 - 麾 - 筱 - 瘀 - 褪 - 槟 - 缨 - 绒 - 犷 - 茸 - 惋 - 嗤 - 寮 - 褂 - 咳 - 缀 - 谙 - 涧 - 炽 - 缄 - 鹜 - 砌 - 贮 - 庵 - 隧 - 卤 - 跆 - 皋 - 蝗 - 洱 - 圪 - 邑 - 锄 - 荟 - 渚 - 苇 - 孰 - 鹃 - 哼 - 呃 - 琛 - 痣 - 摹 - 痼 - 镯 - 刁 - 秧 - 腩 - 鳞 - 乍 - 颚 - 慷 - 氓 - 惦 - 卑 - 挝 - 熨 - 濮 - 胳 - 瓢 - 砰 - 溧 - 锷 - 鸠 - 犒 - 姝 - 蹄 - 宸 - 侥 - 锭 - 佶 - 浊 - 婪 - 磺 - 咤 - 迢 - 檐 - 邺 - 掂 - 渲 - 嚎 - 祛 - 伢 - 叛 - 撮 - 甬 - 淌 - 瀛 - 朽 - 陂 - 帼 - 铿 - 锵 - 漓 - 驯 - 鲨 - 抒 - 茁 - 柿 - 貔 - 貅 - 钝 - 鳅 - 嚏 - 暮 - 瑚 - 荤 - 蜓 - 垣 - 颤 - 溥 - 臃 - 戮 - 枣 - 佼 - 拗 - 哆 - 嗦 - 惚 - 鸥 - 倚 - 嗨 - 舸 - 赐 - 姊 - 憔 - 悴 - 铰 - 黝 - 屿 - 秃 - 嘻 - 楞 - 棱 - 袈 - 裟 - 汴 - 揉 - 髋 - 悸 - 榻 - 逞 - 晾 - 屌 - 闳 - 痊 - 袜 - 扉 - 琶 - 摒 - 捺 - 匠 - 窈 - 窕 - 飒 - 猬 - 蜚 - 萋 - 蚯 - 蚓 - 鲟 - 澈 - 樟 - 悖 - 玖 - 俾 - 抿 - 彷 - 彿 - 虱 - 狙 - 鲶 - 槿 - 烘 - 挎 - 狰 - 狞 - 邃 - 瞪 - 俚 - 涕 - 谬 - 睬 - 蜷 - 兢 - 镍 - 砷 - 菠 - 怦 - 凄 - 卯 - 獒 - 渀 - 辘 - 滇 - 燎 - 噎 - 蝎 - 綦 - 鄢 - 捎 - 瞿 - 蜿 - 蜒 - 禧 - 榈 - 锹 - 殭 - 爵 - 盹 - 淖 - 啼 - 瓮 - 鳖 - 镖 - 珑 - 罹 - 殆 - 掖 - 柞 - 缸 - 绅 - 棘 - 祉 - 胱 - 殓 - 嗡 - 嗷 - 箍 - 圩 - 耒 - 婕 - 腑 - 萦 - 鹞 - 珜 - 啵 - 瑙 - 葆 - 逡 - 嗽 - 饕 - 餮 - 隼 - 妞 - 饺 - 叨 - 酋 - 恙 - 泗 - 弩 - 骜 - 铎 - 酶 - 蚝 - 烁 - 匾 - 侬 - 藻 - 馥 - 骥 - 槐 - 缕 - 椿 - 袆 - 琊 - 稣 - 藩 - 迸 - 蹂 - 躏 - 隽 - 俸 - 郫 - 簸 - 砥 - 骸 - 掮 - 斛 - 啸 - 璋 - 垛 - 札 - 邋 - 遢 - 蕲 - 哇 - 碴 - 邛 - 崃 - 觐 - 笙 - 裳 - 泞 - 蚌 - 醍 - 醐 - 拴 - 舜 - 沅 - 懵 - 谕 - 帚 - 螳 - 噼 - 啪 - 漱 - 郜 - 碉 - 圭 - 谀 - 轶 - 舀 - 呲 - 啶 - 氟 - 琏 - 垅 - 娩 - 乾 - 鏖 - 牾 - 肮 - 啕 - 吏 - 涓 - 氦 - 锥 - 桎 - 吿 - 烊 - 斟 - 汾 - 岐 - 耄 - 耋 - 嗲 - 胛 - 疚 - 骇 - 癣 - 磡 - 侑 - 漾 - 碚 - 琉 - 惬 - 遁 - 耸 - 岱 - 糗 - 缙 - 肴 - 梵 - 僮 - 鸵 - 悯 - 孪 - 莅 - 戬 - 霁 - 簇 - 逵 - 倜 - 傥 - 馋 - 蓁 - 衙 - 蛀 - 蔫 - 崧 - 吟 - 琰 - 唬 - 渥 - 岷 - 仡 - 涎 - 鸳 - 鸯 - 镊 - 妧 - 嬷 - 嫦 - 嫔 - 沐 - 伉 - 嶝 - 锢 - 筐 - 蜥 - 蜴 - 泱 - 骅 - 吆 - 撩 - 怯 - 叩 - 哟 - 啬 - 岬 - 笃 - 玳 - 瑁 - 邝 - 咣 - 矜 - 嘭 - 馗 - 婀 - 黔 - 锟 - 啰 - 翌 - 铠 - 貉 - 獾 - 酣 - 楣 - 佃 - 琵 - 茆 - 皙 - 凋 - 敝 - 匣 - 嵘 - 宓 - 茎 - 楂 - 竲 - 瘪 - 侗 - 铣 - 薰 - 砲 - 羣 - 淼 - 襟 - 妊 - 娠 - 罡 - 瘁 - 椰 - 烙 - 呗 - 荃 - 皎 - 殚 - 腋 - 骼 - 腓 - 榭 - 隘 - 唉 - 铮 - 狩 - 抨 - 峁 - 粱 - 阂 - 厩 - 莠 - 吩 - 咐 - 瞌 - 蜊 - 恬 - 膑 - 踉 - 跄 - 颍 - 朐 - 疝 - 毂 - 秣 - 舛 - 炊 - 漯 - 泠 - 喘 - 撵 - 狡 - 猾 - 铂 - 钛 - 荞 - 拭 - 丞 - 漭 - 绌 - 埜 - 掰 - 狈 - 锜 - 菩 - 弛 - 寰 - 秤 - 灞 - 黍 - 蓟 - 嵛 - 榉 - 幄 - 颊 - 缤 - 朦 - 胧 - 冥 - 砝 - 镀 - 夙 - 燊 - 荚 - 浈 - 苡 - 眺 - 陬 - 寐 - 佘 - 濑 - 仄 - 楔 - 胚 - 嵩 - 洙 - 诓 - 阜 - 浚 - 觊 - 觎 - 曰 - 怵 - 兖 - 稠 - 嵋 - 艋 - 篪 - 琥 - 玟 - 褴 - 褛 - 喱 - 虞 - 魇 - 凇 - 徉 - 嘟 - 臆 - 犊 - 哎 - 靑 - 俺 - 塬 - 妯 - 娌 - 蜈 - 蚣 - 恣 - 沏 - 磴 - 霎 - 趸 - 麒 - 氪 - 缇 - 沁 - 疃 - 恸 - 瘩 - 暄 - 憩 - 祯 - 惰 - 溉 - 沱 - 诲 - 笈 - 擘 - 亳 - 孺 - 忪 - 瞟 - 擞 - 瘸 - 掬 - 唁 - 蹚 - 匡 - 粕 - 鲷 - 泓 - 叵 - 嗣 - 眯 - 炷 - 珺 - 漕 - 谑 - 咯 - 嗬 - 缰 - 卲 - 壑 - 靶 - 隍 - 唠 - 濡 - 盎 - 骊 - 腱 - 鞘 - 拧 - 痫 - 宦 - 诶 - 椋 - 鼾 - 湍 - 毗 - 酪 - 赦 - 炕 - 焘 - 奘 - 邂 - 逅 - 妄 - 骐 - 卒 - 喵 - 觥 - 眬 - 纣 - 憷 - 覃 - 孀 - 芊 - 孢 - 惶 - 迥 - 纰 - 咀 - 鸾 - 箫 - 晦 - 泯 - 砚 - 吭 - 祢 - 揩 - 刨 - 珏 - 撸 - 兀 - 痉 - 挛 - 胤 - 巿 - 纶 - 镁 - 哺 - 咔 - 嚓 - 稼 - 焖 - 妤 - 妩 - 潞 - 雌 - 栾 - 侍 - 煲 - 嫚 - 竽 - 恪 - 霈 - 赝 - 莺 - 眶 - 桓 - 槎 - 馑 - 涮 - 枭 - 徇 - 洵 - 垌 - 昵 - 褶 - 喽 - 脯 - 孱 - 遨 - 谚 - 烷 - 搽 - 酯 - 枷 - 桉 - 咧 - 窿 - 拈 - 斓 - 跛 - 蹶 - 瘟 - 
俭 - 靛 - 脍 - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: null zero_infinity: true joint_net_conf: null use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 10 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_zh_char_sp/train/feats_stats.npz model: espnet model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false preencoder: null preencoder_conf: {} encoder: e_branchformer encoder_conf: output_size: 256 attention_heads: 4 attention_layer_type: rel_selfattn pos_enc_layer_type: rel_pos rel_pos_type: latest cgmlp_linear_units: 1024 cgmlp_conv_kernel: 31 use_linear_after_conv: false gate_activation: identity num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d layer_drop_rate: 0.0 linear_units: 1024 positionwise_layer_type: linear use_ffn: true macaron_ffn: true merge_conv_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.0 src_attention_dropout_rate: 0.0 preprocessor: default preprocessor_conf: {} required: - output_dir - token_list version: '202209' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
4bcabb769f99526a78e2892ddf4c8f8b
anas-awadalla/gpt2-span-head-few-shot-k-128-finetuned-squad-seed-2
anas-awadalla
gpt2
15
5
transformers
0
question-answering
true
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
969
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-span-head-few-shot-k-128-finetuned-squad-seed-2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
7054c4945dab906025767d21c52428c4
underactuated/opt-350m_rl1_v4
underactuated
opt
12
6
transformers
0
text-generation
true
false
false
other
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
911
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_rl1_v4 This model is a fine-tuned version of [underactuated/opt-350m_mle_v3](https://huggingface.co/underactuated/opt-350m_mle_v3) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.26.1 - Pytorch 1.12.1 - Datasets 2.9.0 - Tokenizers 0.13.2
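A minimal inference sketch with the generic causal-LM classes; the prompt and generation settings are illustrative assumptions, and nothing about the fine-tuning objective beyond what the card states is implied:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "underactuated/opt-350m_rl1_v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt; generation settings are arbitrary.
inputs = tokenizer("The key idea of reinforcement learning is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```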
9d2aaafaac410cacf997b2ed189e78e3
mnarasim/finetuning-sentiment-model-3000-samples
mnarasim
distilbert
10
9
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,047
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3140 - Accuracy: 0.88 - F1: 0.8816 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
26d80429ac02ef60e06319a7751428e6
Finnish-NLP/t5-large-nl36-finnish
Finnish-NLP
t5
23
6
transformers
0
text2text-generation
true
false
true
apache-2.0
['fi']
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
null
0
0
0
0
0
0
0
['finnish', 't5', 't5x', 'seq2seq']
false
true
true
9,457
false
# T5-large-nl36 for Finnish Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in [this paper](https://arxiv.org/abs/1910.10683) and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer). **Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text. ## Model description T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format. Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts. More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language. This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining: - GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202) - Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning - Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks - No parameter sharing between embedding and classifier layer This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially. This model uses the [t5-efficient-large-nl36](https://huggingface.co/google/t5-efficient-large-nl36) architecture's layer depth which means both the encoder and the decoder have 36 transformer layers compared to the original T5 "large" model's architecture of 24 transformer layers. In total, this model has 1425 million parameters. ## Intended uses & limitations This model was only pretrained in a self-supervised way excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike the Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision so fine-tune them with full fp32 precision. You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example. 
### How to use Here is how to use this model in PyTorch: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish") model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-large-nl36-finnish") ``` and in TensorFlow: ```python from transformers import T5Tokenizer, TFT5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish") model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-large-nl36-finnish", from_pt=True) ``` ### Limitations and bias The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data This Finnish T5 model was pretrained on the combination of six datasets: - [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo). - [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text. ## Training procedure ### Preprocessing The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish. ### Pretraining The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1.87M steps with a batch size of 32 (in total 31B tokens). The optimizer used was AdaFactor with learning rate warmup for 10K steps with a constant learning rate of 1e-3, and then an inverse square root decay (exponential decay) of the learning rate after. Training code was from Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere). ## Evaluation results Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. 
When fine-tuned on those datasets, this model (the seventh row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts: | | Model parameters | Yle News accuracy | Eduskunta accuracy | |-------------------------------------------------------|------------------|---------------------|----------------------| |Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 | |Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 | |Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 | |Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 | |Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 | |Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** | |Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 | Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification: | | Model parameters | Yle News accuracy | Eduskunta accuracy | |-------------------------------------------------------|------------------|---------------------|----------------------| |google/mt5-small | 301 million |91.51 |64.10 | |google/mt5-base | 583 million |92.71 |68.40 | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). ## Team Members - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/) - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/) Feel free to contact us for more details 🤗
3a1313cef3bb704cf20470b5fc8f8c52
IIIT-L/roberta-large-finetuned-non-code-mixed-DS
IIIT-L
roberta
11
2
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,134
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-finetuned-non-code-mixed-DS This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1265 - Accuracy: 0.6936 - Precision: 0.6794 - Recall: 0.6782 - F1: 0.6784 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 43 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.0688 | 1.0 | 463 | 0.8847 | 0.6127 | 0.6038 | 0.6032 | 0.6014 | | 0.8226 | 2.0 | 926 | 0.7622 | 0.6796 | 0.6769 | 0.6822 | 0.6716 | | 0.6844 | 2.99 | 1389 | 0.8391 | 0.6828 | 0.6718 | 0.6563 | 0.6602 | | 0.536 | 3.99 | 1852 | 0.8218 | 0.6990 | 0.6950 | 0.6807 | 0.6844 | | 0.3938 | 4.99 | 2315 | 0.9616 | 0.6958 | 0.6967 | 0.7056 | 0.6880 | | 0.2674 | 5.99 | 2778 | 1.1389 | 0.7033 | 0.6868 | 0.6895 | 0.6879 | | 0.2073 | 6.98 | 3241 | 1.5578 | 0.6915 | 0.6786 | 0.6807 | 0.6792 | | 0.1641 | 7.98 | 3704 | 1.9538 | 0.6850 | 0.6734 | 0.6715 | 0.6717 | | 0.1394 | 8.98 | 4167 | 2.3230 | 0.6893 | 0.6733 | 0.6742 | 0.6736 | | 0.1248 | 9.98 | 4630 | 2.4050 | 0.6936 | 0.6824 | 0.6819 | 0.6815 | | 0.1002 | 10.98 | 5093 | 2.4227 | 0.6947 | 0.6832 | 0.6932 | 0.6795 | | 0.0776 | 11.97 | 5556 | 2.5782 | 0.7012 | 0.6876 | 0.6923 | 0.6887 | | 0.0685 | 12.97 | 6019 | 2.7967 | 0.6915 | 0.6836 | 0.6930 | 0.6820 | | 0.045 | 13.97 | 6482 | 2.8884 | 0.7044 | 0.6873 | 0.6855 | 0.6863 | | 0.0462 | 14.97 | 6945 | 2.9528 | 0.6947 | 0.6754 | 0.6749 | 0.6751 | | 0.0444 | 15.97 | 7408 | 3.0356 | 0.6904 | 0.6778 | 0.6805 | 0.6778 | | 0.0343 | 16.96 | 7871 | 3.0123 | 0.6936 | 0.6784 | 0.6762 | 0.6771 | | 0.0245 | 17.96 | 8334 | 3.0160 | 0.6893 | 0.6720 | 0.6735 | 0.6727 | | 0.0198 | 18.96 | 8797 | 3.1597 | 0.6904 | 0.6741 | 0.6727 | 0.6732 | | 0.0189 | 19.96 | 9260 | 3.1265 | 0.6936 | 0.6794 | 0.6782 | 0.6784 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.1+cu111 - Datasets 2.3.2 - Tokenizers 0.12.1
0b752f0c4c338dc881cb60fdd164ff54
timm/levit_conv_128.fb_dist_in1k
timm
null
4
16
timm
0
image-classification
true
false
false
apache-2.0
null
['imagenet-1k']
null
0
0
0
0
0
0
0
['image-classification', 'timm']
false
true
true
4,837
false
# Model card for levit_conv_128.fb_dist_in1k A LeViT image classification model using default linear mode (non-convolutional mode with nn.Linear and nn.BatchNorm1d). Pretrained on ImageNet-1k using distillation by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 9.2 - GMACs: 0.4 - Activations (M): 2.7 - Image size: 224 x 224 - **Papers:** - LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136 - **Original:** https://github.com/facebookresearch/LeViT - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('levit_conv_128.fb_dist_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'levit_conv_128.fb_dist_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (i.e. a (batch_size, num_features, H, W) tensor) output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'levit_conv_128.fb_dist_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. 
for levit_conv_256: # torch.Size([2, 256, 14, 14]) # torch.Size([2, 384, 7, 7]) # torch.Size([2, 512, 4, 4]) print(o.shape) ``` ## Model Comparison |model |top1 |top5 |param_count|img_size| |-----------------------------------|------|------|-----------|--------| |levit_384.fb_dist_in1k |82.596|96.012|39.13 |224 | |levit_conv_384.fb_dist_in1k |82.596|96.012|39.13 |224 | |levit_256.fb_dist_in1k |81.512|95.48 |18.89 |224 | |levit_conv_256.fb_dist_in1k |81.512|95.48 |18.89 |224 | |levit_conv_192.fb_dist_in1k |79.86 |94.792|10.95 |224 | |levit_192.fb_dist_in1k |79.858|94.792|10.95 |224 | |levit_128.fb_dist_in1k |78.474|94.014|9.21 |224 | |levit_conv_128.fb_dist_in1k |78.474|94.02 |9.21 |224 | |levit_128s.fb_dist_in1k |76.534|92.864|7.78 |224 | |levit_conv_128s.fb_dist_in1k |76.532|92.864|7.78 |224 | ## Citation ```bibtex @InProceedings{Graham_2021_ICCV, author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs}, title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2021}, pages = {12259-12269} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
11d626a49e8f32f24ca8d35733113dff
google/t5-efficient-small-nl32
google
t5
12
7
transformers
0
text2text-generation
true
true
true
apache-2.0
['en']
['c4']
null
0
0
0
0
0
0
0
['deep-narrow']
false
true
true
6,257
false
# T5-Efficient-SMALL-NL32 (Deep-Narrow version) T5-Efficient-SMALL-NL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-small-nl32** - is of model type **Small** with the following variations: - **nl** is **32** It has **251.49** million parameters and thus requires *ca.* **1005.96 MB** of memory in full precision (*fp32*) or **502.98 MB** of memory in half precision (*fp16* or *bf16*). 
A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | #Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformers block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-values projection matrices are tied | If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond to *nl*. ## Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. ## Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow on of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. ## Downstream Performance TODO: Add table if available ## Computational Complexity TODO: Add table if available ## More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. 
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
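The card stops short of a loading snippet. The following is a minimal sketch (not part of the original card), assuming a recent 🤗 Transformers release; because the checkpoint is pretrained-only, the sentinel-token prompt merely sanity-checks the span-corruption objective and is not a useful downstream capability.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo = "google/t5-efficient-small-nl32"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = T5ForConditionalGeneration.from_pretrained(repo)

# Pretrained with span corruption only: this generation is a sanity check,
# not a downstream task. Fine-tune the model before practical use.
inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```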
dafc3d5e74df59ee78788984dbb4a858
srg/outhimar_64-Close-regression
srg
null
4
0
sklearn
0
tabular-regression
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['tabular-regression', 'baseline-trainer']
false
true
true
7,996
false
## Baseline Model trained on outhimar_64 to apply regression on Close **Metrics of the best model:** r2 0.999858 neg_mean_squared_error -1.067685 Name: Ridge(alpha=10), dtype: float64 **See model plot below:** <style>#sk-container-id-6 {color: black;background-color: white;}#sk-container-id-6 pre{padding: 0;}#sk-container-id-6 div.sk-toggleable {background-color: white;}#sk-container-id-6 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-6 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-6 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-6 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-6 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-6 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-6 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-6 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-6 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-6 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-6 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-6 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-6 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-6 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-6 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-6 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-6 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-6 div.sk-item {position: relative;z-index: 1;}#sk-container-id-6 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-6 div.sk-item::before, #sk-container-id-6 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-6 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-6 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-6 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-6 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-6 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: 
white;}#sk-container-id-6 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-6 div.sk-label-container {text-align: center;}#sk-container-id-6 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-6 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-6" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[(&#x27;easypreprocessor&#x27;,EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless Date False False False ... True False False Open True False False ... False False False High True False False ... False False False Low True False False ... False False False Adj Close True False False ... False False False Volume True False False ... False False False[6 rows x 7 columns])),(&#x27;ridge&#x27;, Ridge(alpha=10))])</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-18" type="checkbox" ><label for="sk-estimator-id-18" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[(&#x27;easypreprocessor&#x27;,EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless Date False False False ... True False False Open True False False ... False False False High True False False ... False False False Low True False False ... False False False Adj Close True False False ... False False False Volume True False False ... False False False[6 rows x 7 columns])),(&#x27;ridge&#x27;, Ridge(alpha=10))])</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-19" type="checkbox" ><label for="sk-estimator-id-19" class="sk-toggleable__label sk-toggleable__label-arrow">EasyPreprocessor</label><div class="sk-toggleable__content"><pre>EasyPreprocessor(types= continuous dirty_float low_card_int ... date free_string useless Date False False False ... True False False Open True False False ... False False False High True False False ... False False False Low True False False ... False False False Adj Close True False False ... False False False Volume True False False ... 
False False False[6 rows x 7 columns])</pre></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-20" type="checkbox" ><label for="sk-estimator-id-20" class="sk-toggleable__label sk-toggleable__label-arrow">Ridge</label><div class="sk-toggleable__content"><pre>Ridge(alpha=10)</pre></div></div></div></div></div></div></div> **Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain). **Logs of training** including the models tried in the process can be found in logs.txt
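The card does not show how to load the fitted pipeline. The sketch below is illustrative only: the pickle filename and the example input row are assumptions (check the repo's file listing for the actual artifact name), and `dabl` must be installed because the pipeline embeds a dabl `EasyPreprocessor`.

```python
import joblib
import pandas as pd
from huggingface_hub import hf_hub_download

# NOTE: the filename below is an assumption -- verify it in the repo's file listing.
path = hf_hub_download(repo_id="srg/outhimar_64-Close-regression", filename="model.pkl")
pipeline = joblib.load(path)  # requires dabl installed (EasyPreprocessor lives there)

# The pipeline expects the feature columns listed above (Date, Open, High, Low,
# Adj Close, Volume); the values in this row are made up for illustration.
X = pd.DataFrame([{
    "Date": "2022-01-03", "Open": 100.0, "High": 101.5,
    "Low": 99.2, "Adj Close": 100.8, "Volume": 1_000_000,
}])
print(pipeline.predict(X))
```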
68d1ccadf65dbc2f991ff90c0634066d
google/multiberts-seed_15
google
bert
8
8
transformers
0
null
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['multiberts', 'multiberts-seed_15']
false
true
true
3,334
false
# MultiBERTs - Seed 15

MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure.

We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs).

The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).

This is model #15.

## Model Description

This model is a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives.

The intended uses, limitations, training data and training procedure are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model:

* We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).

This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details.

### How to use

Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_15')
model = TFBertModel.from_pretrained("google/multiberts-seed_15")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_15')
model = BertModel.from_pretrained("google/multiberts-seed_15")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

## Citation info

```bibtex
@article{sellam2021multiberts,
  title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
  author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  journal={arXiv preprint arXiv:2106.16163},
  year={2021}
}
```
63b0f1c2cdf94efa65549fe3f0888eeb
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c_inference_only
deepdoctection
null
5
0
null
0
null
false
false
false
apache-2.0
null
['Pubtabnet']
null
0
0
0
0
0
0
0
['Tensorflow']
false
true
true
3,119
false
# Tensorpack's Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50, trained on Pubtabnet for semantic segmentation of tables

The model and its training code have been mainly taken from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).

Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).

The model has been trained on detecting cells from tables. Note that the dataset contains tables only; therefore, it is required to perform a table detection task before detecting cells.

The code has been adapted so that it can be used in a **deep**doctection pipeline.

## How this model can be used

This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions in this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.

## This is an inference model only

To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c).

## How this model was trained

To recreate the model run on the **deep**doctection framework, run:

```python
import os

from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn

pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.filter_categories(categories="CELL")

path_config_yaml = os.path.join(get_configs_dir_path(), "tp/cell/conf_frcnn_cell.yaml")
path_weights = ""

dataset_train = pubtabnet
config_overwrite = ["TRAIN.STEPS_PER_EPOCH=500", "TRAIN.STARTING_EPOCH=1",
                    "TRAIN.CHECKPOINT_PERIOD=50", "BACKBONE.FREEZE_AT=0",
                    "PREPROC.TRAIN_SHORT_EDGE_SIZE=[200,600]"]
build_train_config = ["max_datapoints=500000"]

dataset_val = pubtabnet
build_val_config = ["max_datapoints=4000"]

coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50, 200, 600],
                       area_range=[[0, 1000000], [0, 200], [200, 800], [800, 1000000]])

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,
                  config_overwrite=config_overwrite,
                  log_dir="/path/to/dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService")
```

## How to fine-tune this model

To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
944463fabf1b517692e305b40864f42d
pritoms/opt-350m-opty-350m-lectures
pritoms
opt
9
2
transformers
0
text-generation
true
false
false
other
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,254
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# opt-350m-opty-350m-lectures

This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3830

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 2.7828 |
| No log | 2.0 | 10 | 2.4889 |
| No log | 3.0 | 15 | 2.3830 |

### Framework versions

- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
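The card does not include a usage snippet; below is a minimal generation sketch (not from the original card), assuming the standard 🤗 Transformers `text-generation` pipeline. The prompt and sampling settings are illustrative only.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/opt-350m-opty-350m-lectures")
# Prompt and sampling settings are illustrative, not taken from the card.
print(generator("In today's lecture we will cover",
                max_new_tokens=40, do_sample=True, top_p=0.9)[0]["generated_text"])
```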
080a5aeff051de11ccbe95c240ccb104
jmurphy97/distilbert-base-uncased-finetuned-clinc
jmurphy97
distilbert
12
5
transformers
0
text-classification
true
false
false
apache-2.0
null
['clinc_oos']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,482
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9184

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2891 | 0.7429 |
| 2.6283 | 2.0 | 636 | 1.8755 | 0.8374 |
| 1.5481 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.0149 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7952 | 5.0 | 1590 | 0.7720 | 0.9184 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
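The card omits an inference example; here is a minimal sketch (not from the original card) assuming the standard `text-classification` pipeline. The returned label names come from the checkpoint's config and may differ from the CLINC intent strings.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jmurphy97/distilbert-base-uncased-finetuned-clinc",
)
# Returns the predicted intent label (as stored in the model config) and its score.
print(classifier("Can you transfer 100 dollars from checking to savings?"))
```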
82da5455cab73175947f8d86553846a6
Tomor0720/deberta-base-finetuned-qqp
Tomor0720
deberta
13
3
transformers
0
text-classification
true
false
false
mit
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,328
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-base-finetuned-qqp

This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2617
- Accuracy: 0.9128
- F1: 0.8844

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.2412 | 1.0 | 22741 | 0.2369 | 0.9048 | 0.8753 |
| 0.1742 | 2.0 | 45482 | 0.2617 | 0.9128 | 0.8844 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
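QQP is a sentence-pair task, which the card does not illustrate. The following is a minimal sketch (not from the original card) that encodes a question pair together; mapping label index 1 to "duplicate" follows the GLUE QQP convention and should be verified against the checkpoint's `id2label`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Tomor0720/deberta-base-finetuned-qqp"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Encode the two questions together as a sentence pair.
enc = tokenizer("How do I learn Python quickly?",
                "What is the fastest way to learn Python?",
                return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits

# Index 1 is conventionally "duplicate" for GLUE QQP; the config may only
# expose generic LABEL_0/LABEL_1 names.
print(logits.softmax(dim=-1))
```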
647c2f2330748f687d56b20e89f73494
TransQuest/microtransquest-en_de-wiki
TransQuest
xlm-roberta
12
17
transformers
0
token-classification
true
false
false
apache-2.0
['en-de']
null
null
1
1
0
0
0
0
0
['Quality Estimation', 'microtransquest']
false
true
true
5,277
false
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel import torch model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_de-wiki", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available()) source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]]) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. 
**Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
68bf74c0a7c6eff1511ab2aba4977bbb
gokuls/distilbert_add_GLUE_Experiment_logit_kd_rte_256
gokuls
distilbert
17
2
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,750
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_add_GLUE_Experiment_logit_kd_rte_256

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4234
- Accuracy: 0.4729

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4263 | 1.0 | 10 | 0.4235 | 0.4729 |
| 0.4176 | 2.0 | 20 | 0.4241 | 0.4729 |
| 0.4173 | 3.0 | 30 | 0.4234 | 0.4729 |
| 0.4172 | 4.0 | 40 | 0.4245 | 0.4729 |
| 0.4182 | 5.0 | 50 | 0.4243 | 0.4729 |
| 0.4178 | 6.0 | 60 | 0.4236 | 0.4729 |
| 0.4176 | 7.0 | 70 | 0.4238 | 0.4729 |
| 0.4177 | 8.0 | 80 | 0.4240 | 0.4729 |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
65de91b9f43b0c8db3d72275649cd8c6
fathyshalab/massive_audio-roberta-large-v1-5-0
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,458
false
# fathyshalab/massive_audio-roberta-large-v1-5-0

This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Usage

To use this model for inference, first install the SetFit library:

```bash
python -m pip install setfit
```

You can then run inference as follows:

```python
from setfit import SetFitModel

# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_audio-roberta-large-v1-5-0")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```

## BibTeX entry and citation info

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
167b62b6be1687bb7900a21bdc093333
MultiBertGunjanPatrick/multiberts-seed-2
MultiBertGunjanPatrick
bert
7
4
transformers
0
null
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert', 'multiberts']
false
true
true
6,319
false
# MultiBERTs Seed 0 (uncased) Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0') model = BertModel.from_pretrained("multiberts-seed-0") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
641ca774ab6aec4c71fa4edc488df9fe
ConvLab/mt5-small-nlu-all-crosswoz
ConvLab
mt5
8
101
transformers
0
text2text-generation
true
false
false
apache-2.0
['zh']
['ConvLab/crosswoz']
null
0
0
0
0
0
0
0
['mt5-small', 'text2text-generation', 'natural language understanding', 'conversational system', 'task-oriented dialog']
true
true
true
735
false
# mt5-small-nlu-all-crosswoz

This model is a fine-tuned version of [mt5-small](https://huggingface.co/mt5-small) on [CrossWOZ](https://huggingface.co/datasets/ConvLab/crosswoz), using both user and system utterances.

Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 10.0

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
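Usage is documented in ConvLab-3 rather than in the card itself. As a minimal standalone sketch (not from the original card), the snippet below skips ConvLab-3's input serialization and dialogue-act parsing and only demonstrates the raw seq2seq call.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "ConvLab/mt5-small-nlu-all-crosswoz"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# ConvLab-3 serializes the dialogue context into the input string and parses the
# generated dialogue acts back out; this raw call only shows the seq2seq step.
inputs = tokenizer("我想找一家人均消费100元以上的餐馆。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```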
a18f14b07ba451cd4b46c6f5a6f847f3
sd-concepts-library/xatu2
sd-concepts-library
null
94
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
9,828
false
### xatu2 on Stable Diffusion This is the `<xatu-test>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<xatu-test> 0](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/63.jpeg) ![<xatu-test> 1](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/80.jpeg) ![<xatu-test> 2](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/43.jpeg) ![<xatu-test> 3](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/56.jpeg) ![<xatu-test> 4](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/30.jpeg) ![<xatu-test> 5](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/88.jpeg) ![<xatu-test> 6](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/24.jpeg) ![<xatu-test> 7](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/85.jpeg) ![<xatu-test> 8](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/37.jpeg) ![<xatu-test> 9](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/19.jpeg) ![<xatu-test> 10](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/54.jpeg) ![<xatu-test> 11](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/5.jpeg) ![<xatu-test> 12](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/75.jpeg) ![<xatu-test> 13](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/55.jpeg) ![<xatu-test> 14](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/66.jpeg) ![<xatu-test> 15](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/49.jpeg) ![<xatu-test> 16](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/45.jpeg) ![<xatu-test> 17](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/58.jpeg) ![<xatu-test> 18](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/39.jpeg) ![<xatu-test> 19](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/6.jpeg) ![<xatu-test> 20](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/38.jpeg) ![<xatu-test> 21](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/15.jpeg) ![<xatu-test> 22](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/20.jpeg) ![<xatu-test> 23](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/35.jpeg) ![<xatu-test> 24](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/62.jpeg) ![<xatu-test> 25](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/14.jpeg) ![<xatu-test> 26](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/9.jpeg) ![<xatu-test> 27](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/3.jpeg) ![<xatu-test> 
28](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/47.jpeg) ![<xatu-test> 29](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/0.jpeg) ![<xatu-test> 30](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/42.jpeg) ![<xatu-test> 31](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/36.jpeg) ![<xatu-test> 32](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/33.jpeg) ![<xatu-test> 33](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/17.jpeg) ![<xatu-test> 34](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/12.jpeg) ![<xatu-test> 35](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/86.jpeg) ![<xatu-test> 36](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/13.jpeg) ![<xatu-test> 37](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/2.jpeg) ![<xatu-test> 38](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/44.jpeg) ![<xatu-test> 39](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/68.jpeg) ![<xatu-test> 40](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/16.jpeg) ![<xatu-test> 41](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/65.jpeg) ![<xatu-test> 42](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/52.jpeg) ![<xatu-test> 43](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/59.jpeg) ![<xatu-test> 44](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/25.jpeg) ![<xatu-test> 45](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/50.jpeg) ![<xatu-test> 46](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/48.jpeg) ![<xatu-test> 47](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/18.jpeg) ![<xatu-test> 48](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/78.jpeg) ![<xatu-test> 49](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/22.jpeg) ![<xatu-test> 50](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/10.jpeg) ![<xatu-test> 51](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/53.jpeg) ![<xatu-test> 52](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/46.jpeg) ![<xatu-test> 53](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/41.jpeg) ![<xatu-test> 54](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/72.jpeg) ![<xatu-test> 55](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/31.jpeg) ![<xatu-test> 56](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/7.jpeg) ![<xatu-test> 57](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/51.jpeg) ![<xatu-test> 58](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/84.jpeg) ![<xatu-test> 59](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/1.jpeg) ![<xatu-test> 60](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/27.jpeg) ![<xatu-test> 61](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/70.jpeg) ![<xatu-test> 
62](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/87.jpeg) ![<xatu-test> 63](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/32.jpeg) ![<xatu-test> 64](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/81.jpeg) ![<xatu-test> 65](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/76.jpeg) ![<xatu-test> 66](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/34.jpeg) ![<xatu-test> 67](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/79.jpeg) ![<xatu-test> 68](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/69.jpeg) ![<xatu-test> 69](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/26.jpeg) ![<xatu-test> 70](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/82.jpeg) ![<xatu-test> 71](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/21.jpeg) ![<xatu-test> 72](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/23.jpeg) ![<xatu-test> 73](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/67.jpeg) ![<xatu-test> 74](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/61.jpeg) ![<xatu-test> 75](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/29.jpeg) ![<xatu-test> 76](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/60.jpeg) ![<xatu-test> 77](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/40.jpeg) ![<xatu-test> 78](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/64.jpeg) ![<xatu-test> 79](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/74.jpeg) ![<xatu-test> 80](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/83.jpeg) ![<xatu-test> 81](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/73.jpeg) ![<xatu-test> 82](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/11.jpeg) ![<xatu-test> 83](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/57.jpeg) ![<xatu-test> 84](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/28.jpeg) ![<xatu-test> 85](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/71.jpeg) ![<xatu-test> 86](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/4.jpeg) ![<xatu-test> 87](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/8.jpeg) ![<xatu-test> 88](https://huggingface.co/sd-concepts-library/xatu2/resolve/main/concept_images/77.jpeg)
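Besides the Stable Conceptualizer notebook linked above, the learned embedding can also be loaded directly in `diffusers`. This is an illustrative sketch, not part of the original card: the base checkpoint and dtype/device choices are assumptions, and a recent `diffusers` release with `load_textual_inversion` support is required.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; any Stable Diffusion 1.x pipeline with a
# matching text-encoder embedding size should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Loads the learned embedding from the concept repo and registers the <xatu-test> token.
pipe.load_textual_inversion("sd-concepts-library/xatu2")

image = pipe("a watercolor painting of a <xatu-test> perched on a branch").images[0]
image.save("xatu-test.png")
```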
5995d796e55e08019c550c18955d475c
fathyshalab/all-roberta-large-v1-auto_and_commute-4-16-5
fathyshalab
roberta
11
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,521
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# all-roberta-large-v1-auto_and_commute-4-16-5

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2614
- Accuracy: 0.4289

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7929 | 1.0 | 1 | 2.5690 | 0.2667 |
| 2.267 | 2.0 | 2 | 2.4558 | 0.3533 |
| 1.8495 | 3.0 | 3 | 2.3630 | 0.3911 |
| 1.4397 | 4.0 | 4 | 2.2956 | 0.4133 |
| 1.2985 | 5.0 | 5 | 2.2614 | 0.4289 |

### Framework versions

- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
6a387ed4af9327775873eddf501dde13
BeardedJohn/bert-finetuned-ner-ubb-endava-only-misc
BeardedJohn
bert
8
24
transformers
0
token-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,443
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# BeardedJohn/bert-finetuned-ner-ubb-endava-only-misc

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0015
- Validation Loss: 0.0006
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 705, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1740 | 0.0013 | 0 |
| 0.0024 | 0.0007 | 1 |
| 0.0015 | 0.0006 | 2 |

### Framework versions

- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
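The card lists training details only. Below is a minimal token-classification sketch (not from the original card): the checkpoint ships TensorFlow weights, hence the TF model class, and the example sentence and entity labels are illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

repo = "BeardedJohn/bert-finetuned-ner-ubb-endava-only-misc"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForTokenClassification.from_pretrained(repo)  # TensorFlow weights

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")
print(ner("Endava opened a new office in Cluj-Napoca last year."))
```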
ca990758f05f5c55e62cc503a43b1028
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t8_54.csv___topic_text_google_mt5_base
nestoralvaro
mt5
12
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,481
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mt5-base-finetuned-xsum-data_prep_2021_12_26___t8_54.csv___topic_text_google_mt5_base

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 1.4678
- Rouge2: 0.1841
- Rougel: 1.4748
- Rougelsum: 1.4701
- Gen Len: 6.4874

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 10645 | nan | 1.4678 | 0.1841 | 1.4748 | 1.4701 | 6.4874 |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
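No inference example is given in the card; here is a minimal summarization sketch (not from the original card) assuming the standard pipeline API. Note the reported NaN validation loss, so generation quality is not guaranteed.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t8_54.csv___topic_text_google_mt5_base",
)
# The card reports a NaN loss, so treat any output with caution.
print(summarizer("Your long input document goes here.",
                 max_length=32, min_length=5)[0]["summary_text"])
```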
f54b77f3b3c3556e6970239fd4977904
YeRyeongLee/roberta-base-finetuned-filtered-0609
YeRyeongLee
roberta
11
2
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,218
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-finetuned-filtered-0609

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- Accuracy: 0.9824
- Precision: 0.9824
- Recall: 0.9824
- F1: 0.9824

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.1817        | 1.0   | 3180  | 0.1883          | 0.9651   | 0.9654    | 0.9651 | 0.9651 |
| 0.1647        | 2.0   | 6360  | 0.1264          | 0.9777   | 0.9778    | 0.9777 | 0.9777 |
| 0.1295        | 3.0   | 9540  | 0.1514          | 0.9723   | 0.9724    | 0.9723 | 0.9723 |
| 0.0991        | 4.0   | 12720 | 0.1487          | 0.9761   | 0.9763    | 0.9761 | 0.9761 |
| 0.0749        | 5.0   | 15900 | 0.1119          | 0.9802   | 0.9802    | 0.9802 | 0.9802 |
| 0.0532        | 6.0   | 19080 | 0.1357          | 0.9789   | 0.9790    | 0.9789 | 0.9789 |
| 0.0471        | 7.0   | 22260 | 0.1397          | 0.9780   | 0.9782    | 0.9780 | 0.9780 |
| 0.0153        | 8.0   | 25440 | 0.1568          | 0.9777   | 0.9778    | 0.9777 | 0.9777 |
| 0.0147        | 9.0   | 28620 | 0.1274          | 0.9824   | 0.9824    | 0.9824 | 0.9824 |
| 0.0135        | 10.0  | 31800 | 0.1343          | 0.9824   | 0.9824    | 0.9824 | 0.9824 |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.9.1+cu111
- Datasets 1.16.1
- Tokenizers 0.12.1
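A minimal inference sketch using the `pipeline` API (the class labels depend on the filtered dataset, which the card does not describe):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YeRyeongLee/roberta-base-finetuned-filtered-0609",
)

# Placeholder input; the card does not document the label set.
print(classifier("Example sentence to classify."))
```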
228315d39818f5e0a23d3f571214a63a
Haakf/allsides_right_text_padded
Haakf
distilbert
8
4
transformers
0
fill-mask
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,885
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Haakf/allsides_right_text_padded

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9151
- Validation Loss: 1.8887
- Epoch: 5

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -797, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0219     | 1.9405          | 0     |
| 2.0081     | 1.8806          | 1     |
| 1.9741     | 1.8750          | 2     |
| 1.9575     | 1.8781          | 3     |
| 1.9444     | 1.8302          | 4     |
| 1.9151     | 1.8887          | 5     |

### Framework versions

- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
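Since the checkpoint is a masked-language model published with TensorFlow weights, a minimal fill-mask sketch could be:

```python
from transformers import pipeline

# framework="tf" because the repository ships TensorFlow weights.
fill_mask = pipeline("fill-mask", model="Haakf/allsides_right_text_padded", framework="tf")

# Placeholder sentence; [MASK] is the mask token for distilbert-base-uncased.
print(fill_mask("The senator said the bill would [MASK] the economy."))
```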
d87f98b6c49e01523ddbcd6e88b7a46e
Edresson/wav2vec2-large-xlsr-coraa-portuguese
Edresson
wav2vec2
8
2,039
transformers
11
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['CORAA']
null
0
0
0
0
1
1
0
['audio', 'speech', 'wav2vec2', 'pt', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'speech', 'PyTorch']
true
true
true
1,326
false
# Wav2vec 2.0 trained with CORAA Portuguese Dataset

This is a demonstration of a Wav2vec model fine-tuned for Portuguese on the [CORAA dataset](https://github.com/nilc-nlp/CORAA).

# Use this model

```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC

tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
```

# Results

For the results, see the [CORAA article](https://arxiv.org/abs/2110.15731).

# Example test with Common Voice Dataset

```python
import re

import torchaudio
from datasets import load_dataset

# Characters stripped from the reference transcripts before scoring; the exact
# pattern is not given in the original card, so this is an assumed placeholder.
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"]'

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")

# Common Voice ships 48 kHz audio; the model expects 16 kHz.
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```

```python
ds = dataset.map(map_to_array)

result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))

print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
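The evaluation snippet above relies on a `wer` metric and a `map_to_pred` helper that the card does not define. A minimal sketch of what they might look like is given below; it assumes the checkpoint ships a `Wav2Vec2Processor` (otherwise, pair the tokenizer loaded above with a `Wav2Vec2FeatureExtractor`) and reuses the `model` loaded in the earlier section.

```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor

# Word error rate metric as exposed by the datasets library (requires jiwer).
wer = load_metric("wer")

# Assumed: the repository bundles a feature extractor and tokenizer as a processor.
processor = Wav2Vec2Processor.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")

def map_to_pred(batch):
    # Feature-extract the 16 kHz audio and run greedy CTC decoding with the
    # Wav2Vec2ForCTC model loaded earlier in the card.
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
```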
f3d9c4832c28ceea086720cc9aa883ca