| Column | Dtype | Range |
| --- | --- | --- |
| repo_id | stringlengths | 4-110 |
| author | stringlengths | 2-27 |
| model_type | stringlengths | 2-29 |
| files_per_repo | int64 | 2-15.4k |
| downloads_30d | int64 | 0-19.9M |
| library | stringlengths | 2-37 |
| likes | int64 | 0-4.34k |
| pipeline | stringlengths | 5-30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | stringlengths | 2-30 |
| languages | stringlengths | 4-1.63k |
| datasets | stringlengths | 2-2.58k |
| co2 | stringclasses | 29 values |
| prs_count | int64 | 0-125 |
| prs_open | int64 | 0-120 |
| prs_merged | int64 | 0-15 |
| prs_closed | int64 | 0-28 |
| discussions_count | int64 | 0-218 |
| discussions_open | int64 | 0-148 |
| discussions_closed | int64 | 0-70 |
| tags | stringlengths | 2-513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401-598k |
| is_nc | bool | 1 class |
| readme | stringlengths | 0-598k |
| hash | stringlengths | 32-32 |
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_mnli
gokuls
distilbert
19
0
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,648
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_logit_kd_data_aug_mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.5076 - Accuracy: 0.6560 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.4734 | 1.0 | 31440 | 0.5068 | 0.6496 | | 0.3743 | 2.0 | 62880 | 0.5281 | 0.6379 | | 0.3454 | 3.0 | 94320 | 0.5361 | 0.6354 | | 0.3333 | 4.0 | 125760 | 0.5399 | 0.6350 | | 0.3265 | 5.0 | 157200 | 0.5409 | 0.6379 | | 0.3219 | 6.0 | 188640 | 0.5377 | 0.6413 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
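A minimal inference sketch, assuming the standard `transformers` auto classes; the premise/hypothesis pair is only illustrative and the label names come from the checkpoint config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative premise/hypothesis scoring with the fine-tuned MNLI checkpoint.
name = "gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A man is playing a guitar.", "Someone is making music.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label mapping is taken from the checkpoint
```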
12c1cf5b58ecc87af41f4c926d22189b
Benito/DeTr-TableDetection-1000-images
Benito
detr
9
4
transformers
0
object-detection
true
false
false
apache-2.0
null
['table_detection_light']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,416
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeTr-TableDetection-1000-images This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the table_detection_light dataset. It achieves the following results on the evaluation set: - Loss: 0.5143 - Mean Iou: 0.0242 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 63 | 0.8696 | 0.0215 | | 0.9227 | 2.0 | 126 | 0.7547 | 0.0245 | | 0.9227 | 3.0 | 189 | 0.7170 | 0.0211 | | 0.6775 | 4.0 | 252 | 0.8319 | 0.0222 | | 0.6801 | 5.0 | 315 | 0.6943 | 0.0212 | | 0.6801 | 6.0 | 378 | 0.6622 | 0.0252 | | 0.604 | 7.0 | 441 | 0.6043 | 0.0234 | | 0.5467 | 8.0 | 504 | 0.7404 | 0.0249 | | 0.5467 | 9.0 | 567 | 0.6755 | 0.0242 | | 0.4347 | 10.0 | 630 | 0.5507 | 0.0232 | | 0.4347 | 11.0 | 693 | 0.6633 | 0.0277 | | 0.4202 | 12.0 | 756 | 0.5941 | 0.0256 | | 0.3508 | 13.0 | 819 | 0.5387 | 0.0238 | | 0.3508 | 14.0 | 882 | 0.5381 | 0.0256 | | 0.3223 | 15.0 | 945 | 0.5646 | 0.0254 | | 0.3058 | 16.0 | 1008 | 0.5460 | 0.0213 | | 0.3058 | 17.0 | 1071 | 0.5589 | 0.0264 | | 0.2861 | 18.0 | 1134 | 0.5423 | 0.0257 | | 0.2861 | 19.0 | 1197 | 0.5207 | 0.0248 | | 0.2705 | 20.0 | 1260 | 0.5143 | 0.0242 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.5.1 - Tokenizers 0.13.2
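A minimal inference sketch, assuming the checkpoint ships its image processor and that Pillow (plus the usual DETR dependencies such as `timm`) is installed; the image path is a placeholder:

```python
from transformers import pipeline

# Detect tables in a scanned page with the fine-tuned DETR checkpoint.
detector = pipeline("object-detection", model="Benito/DeTr-TableDetection-1000-images")
for det in detector("page_scan.png"):  # placeholder image path
    print(det["label"], round(det["score"], 3), det["box"])
```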
c5d0be88ceed6651bcdd2d3574a9c484
explosion/fr_udv25_frenchsequoia_trf
explosion
null
28
5
spacy
1
token-classification
false
false
false
lgpl-lr
['fr']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
16,205
false
UD v2.5 benchmarking pipeline for UD_French-Sequoia | Feature | Description | | --- | --- | | **Name** | `fr_udv25_frenchsequoia_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `LGPL-LR` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (916 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ADJ`, `ADP`, `ADP_DET`, `ADP_PRON`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` | | **`morphologizer`** | `POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, `POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Ord\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=ADV`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|POS=NUM`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|PronType=Rel`, `Number=Sing\|POS=DET\|Poss=Yes`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Definite=Def\|Number=Plur\|POS=ADP\|PronType=Art`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Number=Plur\|POS=DET`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|PronType=Int`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=ADJ`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=DET\|Poss=Yes`, `POS=AUX\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=ADV\|Polarity=Neg`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `POS=PRON\|Person=3\|Reflex=Yes`, `Gender=Masc\|POS=NOUN`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=PRON\|Person=3`, `Number=Plur\|POS=NOUN`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=3`, `Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=PROPN`, `Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes`, `Gender=Masc\|POS=PRON`, `POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=PRON`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=PRON`, `POS=NUM`, `Gender=Fem\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=PRON`, `Number=Plur\|POS=PRON\|Person=3`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Number=Plur\|POS=PRON\|Person=2`, `NumType=Card\|POS=PRON`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `NumType=Card\|POS=NOUN`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, `Gender=Fem\|Number=Sing\|POS=DET`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=DET`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|POS=PRON`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=X`, `POS=SYM`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `POS=DET`, `Gender=Masc\|Number=Plur\|POS=PRON`, `POS=PART`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Person=3\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Masc\|Number=Plur\|POS=DET`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Reflex=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|Reflex=Yes`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=1\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|POS=ADV`, 
`Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Masc\|NumType=Card\|POS=NUM` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advcl:cleft`, `advmod`, `amod`, `appos`, `aux:caus`, `aux:pass`, `aux:tense`, `case`, `cc`, `ccomp`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `expl:comp`, `expl:pass`, `expl:subj`, `fixed`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:caus`, `nsubj:pass`, `nummod`, `obj`, `obl:agent`, `obl:arg`, `obl:mod`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `0`, `3`, `4`, `6`, `8`, `10`, `12`, `14`, `16`, `20`, `22`, `24`, `26`, `30`, `32`, `34`, `36`, `39`, `40`, `42`, `44`, `45`, `48`, `50`, `52`, `54`, `56`, `58`, `61`, `63`, `66`, `70`, `72`, `74`, `77`, `79`, `81`, `82`, `84`, `86`, `88`, `89`, `91`, `95`, `97`, `99`, `102`, `103`, `106`, `110`, `111`, `113`, `114`, `115`, `118`, `119`, `123`, `125`, `126`, `128`, `130`, `132`, `133`, `134`, `136`, `138`, `139`, `140`, `142`, `143`, `144`, `146`, `148`, `150`, `152`, `155`, `157`, `160`, `161`, `163`, `165`, `167`, `171`, `173`, `174`, `176`, `177`, `179`, `181`, `183`, `185`, `187`, `189`, `191`, `192`, `195`, `197`, `198`, `200`, `202`, `203`, `205`, `208`, `210`, `211`, `212`, `214`, `217`, `218`, `221`, `225`, `227`, `229`, `230`, `232`, `234`, `236`, `238`, `240`, `242`, `243`, `245`, `247`, `248`, `251`, `253`, `255`, `257`, `258`, `260`, `261`, `264`, `267`, `268`, `269`, `272`, `273`, `276`, `277`, `278`, `279`, `284`, `287`, `288`, `291`, `293`, `295`, `298`, `299`, `301`, `304`, `306`, `307`, `309`, `310`, `313`, `315`, `318`, `319`, `322`, `324`, `325`, `327`, `329`, `330`, `332`, `333`, `336`, `339`, `341`, `342`, `344`, `346`, `347`, `350`, `351`, `353`, `356`, `358`, `359`, `361`, `363`, `365`, `367`, `369`, `373`, `376`, `378`, `379`, `380`, `382`, `384`, `386`, `389`, `390`, `391`, `394`, `396`, `398`, `399`, `401`, `404`, `406`, `409`, `412`, `414`, `418`, `421`, `423`, `424`, `426`, `428`, `429`, `430`, `434`, `436`, `438`, `440`, `441`, `443`, `446`, `447`, `448`, `451`, `453`, `456`, `457`, `458`, `460`, `462`, `463`, `465`, `468`, `470`, `472`, `474`, `480`, `482`, `483`, `485`, `486`, `490`, `493`, `494`, `497`, `499`, `500`, `501`, `503`, `506`, `509`, `511`, `512`, `514`, `516`, `518`, `522`, `523`, `526`, `530`, `532`, `534`, `537`, `539`, `540`, `541`, `543`, `545`, `546`, `548`, `550`, `551`, `552`, `554`, `556`, `557`, `558`, `561`, `563`, `565`, `567`, `570`, `571`, `573`, `574`, `575`, `576`, `578`, `579`, `581`, `582`, `583`, `584`, `586`, `587`, `588`, `589`, `590`, `592`, `595`, `600`, `603`, `604`, `606`, `608`, `611`, `612`, `614`, `615`, `616`, `618`, `619`, `620`, `621`, `622`, `623`, `624`, `625`, `626`, `627`, `628`, `629`, `630`, `631`, `632`, `633`, `634`, `635`, `636`, `638`, `640`, `644`, `646`, `647`, `648`, `650`, `652`, `654`, `657`, `659`, `660`, `661`, `662`, `663`, `664`, `665`, `666`, `668`, `672`, `674`, `675`, `677`, `678`, `679`, `680`, `681`, `682`, `683`, `684`, `685`, `686`, `687`, `688`, `689`, `690`, `691`, `692`, `693`, `694`, `695`, `696`, `697`, `698`, `699`, `700`, `701`, `702`, `704`, `705`, `706`, `707`, `708`, `709`, `710`, `711`, `712`, `713`, `714`, `715`, `716`, `717`, `718`, `719`, `720`, `721`, `722`, `723`, `724`, `725`, `726`, `727`, `728`, `729`, `730`, `731`, `732`, `733`, `734`, `735`, `736`, `737`, `738`, `739`, `740`, `741`, `743`, 
`744`, `747`, `748`, `749`, `750`, `751`, `752`, `753`, `754`, `755`, `756`, `758`, `760`, `762`, `763`, `766`, `767`, `768`, `770`, `772`, `773`, `774`, `775`, `776`, `777`, `778`, `779`, `781`, `783`, `784`, `786`, `787`, `789`, `790`, `791`, `794`, `795`, `796`, `797`, `798`, `799`, `800`, `801`, `802`, `803`, `807`, `809`, `812`, `813`, `815`, `817`, `819`, `821`, `825`, `828`, `829`, `832`, `833`, `834`, `837`, `838`, `839`, `841`, `842`, `844`, `846`, `849`, `851`, `853`, `854`, `855`, `858`, `861`, `862`, `866`, `868`, `869`, `871`, `872`, `874`, `876`, `879`, `880`, `882`, `885`, `887`, `891`, `893`, `895`, `898`, `899`, `902`, `903`, `905`, `906`, `908`, `910`, `911`, `912`, `914`, `917`, `920`, `923`, `925`, `927`, `929`, `932`, `933`, `934`, `936`, `938`, `939`, `943`, `944`, `945`, `946`, `947`, `950`, `952`, `954`, `956`, `958`, `959`, `961`, `963`, `965`, `967`, `969`, `971`, `973`, `976`, `978`, `979`, `980`, `981`, `984`, `986`, `987`, `990`, `993`, `994`, `996`, `998`, `999`, `1000`, `1001`, `1002`, `1004`, `1006`, `1007`, `1009`, `1010`, `1012`, `1014`, `1016`, `1018`, `1021`, `1023`, `1026`, `1027`, `1029`, `1031`, `1033`, `1034`, `1036`, `1037`, `1039`, `1041`, `1043`, `1044`, `1045`, `1046`, `1049`, `1051`, `1053`, `1054`, `1055`, `1056`, `1057`, `1058`, `1059`, `1061`, `1063`, `1065`, `1067`, `1068`, `1070`, `1072`, `1073`, `1075`, `1077`, `1078`, `1080`, `1081`, `1082`, `1084`, `1085`, `1087`, `1088`, `1089`, `1090`, `1091`, `1092`, `1094`, `1095`, `1097`, `1098`, `1100`, `1103`, `1106`, `1108`, `1110`, `1111`, `1113`, `1116`, `1117`, `1119`, `1121`, `1124`, `1127`, `1129`, `1131`, `1132`, `1133`, `1135`, `1136`, `1138`, `1139`, `1141`, `1142`, `1145`, `1148`, `1153`, `1154`, `1156`, `1157`, `1159`, `1161` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.70 | | `TOKEN_P` | 99.69 | | `TOKEN_R` | 99.71 | | `TOKEN_ACC` | 99.96 | | `SENTS_F` | 94.42 | | `SENTS_P` | 94.42 | | `SENTS_R` | 94.42 | | `TAG_ACC` | 98.65 | | `POS_ACC` | 98.56 | | `MORPH_ACC` | 97.55 | | `DEP_UAS` | 94.68 | | `DEP_LAS` | 92.60 | | `LEMMA_ACC` | 97.41 |
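A minimal usage sketch, assuming the packaged pipeline wheel from this repo has been pip-installed first (the exact wheel filename is listed in the repo's files); the example sentence is illustrative:

```python
import spacy

# Load the installed UD benchmarking pipeline and inspect its annotations.
nlp = spacy.load("fr_udv25_frenchsequoia_trf")
doc = nlp("Le gouvernement a adopté la réforme hier soir.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_, token.morph)
```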
5e86bb5b5d0919c30332cbe71be6110e
jonatasgrosman/exp_w2v2t_de_vp-nl_s8
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'de']
false
true
true
467
false
# exp_w2v2t_de_vp-nl_s8 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
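A minimal transcription sketch, assuming the standard `transformers` ASR pipeline (with ffmpeg available for audio decoding); the file path is a placeholder for 16 kHz German speech:

```python
from transformers import pipeline

# Transcribe a German speech sample with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_de_vp-nl_s8")
print(asr("sample_de.wav")["text"])  # placeholder audio path, expected 16 kHz
```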
b5865e5c6b551ac6f68364387e5121c4
KGUY1/AnythingPencilAnime
KGUY1
null
6
0
null
1
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
962
false
<h1>AnythingPencilAnime</h1> This is a merged model made from anime-pencil-deffusion-v3 and Anything-V3.0-pruned-fp32; credit goes to the authors of the original models. # Usage Keep the words `anime pencil concept style` towards the beginning of your prompt to invoke the fine-tuned style from the anime-pencil-deffusion-v3 model. ### Examples generated by the v3 model <img src="https://huggingface.co/KGUY1/AnythingPencilAnime/resolve/main/Example-3.png"/> <img src="https://huggingface.co/KGUY1/AnythingPencilAnime/resolve/main/Example-1.png"/> # Models Used https://huggingface.co/Linaqruf/anything-v3.0/blob/main/Anything-V3.0-pruned-fp32.safetensors https://huggingface.co/yehiaserag/anime-pencil-diffusion/blob/main/anime-pencil-deffusion-v3.safetensors # Socials - Use the #AnythingInkPunk hashtag so I can see the cool stuff you make! --- *NOTE: usage of this model implies acceptance of Stable Diffusion's [CreativeML Open RAIL-M license](LICENSE)*
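A minimal generation sketch, assuming the merged `.safetensors` checkpoint has been downloaded locally and that your `diffusers` version supports loading single-file Stable Diffusion checkpoints; the file name and prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the merged single-file checkpoint and invoke the pencil-anime style.
pipe = StableDiffusionPipeline.from_single_file(
    "AnythingPencilAnime.safetensors",  # placeholder local path to the merged checkpoint
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("anime pencil concept style, portrait of a girl in a rainy city").images[0]
image.save("pencil_anime.png")
```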
e1977bc048a109e4070ccec179bfbe6c
huxxx657/roberta-base-finetuned-scrambled-squad-10
huxxx657
roberta
13
5
transformers
0
question-answering
true
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,157
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-scrambled-squad-10 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.7200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7482 | 1.0 | 5532 | 1.7200 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
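A minimal usage sketch with the standard `transformers` question-answering pipeline; the question and context are illustrative:

```python
from transformers import pipeline

# Extractive QA with the fine-tuned RoBERTa checkpoint.
qa = pipeline("question-answering", model="huxxx657/roberta-base-finetuned-scrambled-squad-10")
result = qa(
    question="Where was the treaty signed?",
    context="The treaty was signed in Paris in 1783 after long negotiations.",
)
print(result["answer"], round(result["score"], 3))
```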
9d0b1785d5c7f2ef3070ca4d083c8322
rishabhjain16/whisper_large_v2_to_myst55h
rishabhjain16
whisper
25
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,721
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-large-v2 This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3834 - Wer: 11.8889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.5582 | 0.12 | 500 | 0.3660 | 14.8170 | | 0.2285 | 1.02 | 1000 | 0.2919 | 12.6304 | | 0.2038 | 1.15 | 1500 | 0.2795 | 11.3850 | | 0.074 | 2.04 | 2000 | 0.3150 | 12.1043 | | 0.2165 | 2.17 | 2500 | 0.2978 | 12.8510 | | 0.0399 | 3.07 | 3000 | 0.3467 | 11.7322 | | 0.045 | 3.19 | 3500 | 0.3501 | 11.7218 | | 0.0187 | 4.09 | 4000 | 0.3834 | 11.8889 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
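A minimal transcription sketch using the lower-level Whisper classes, assuming `soundfile` is installed and that the placeholder file contains 16 kHz mono speech:

```python
import soundfile as sf
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

name = "rishabhjain16/whisper_large_v2_to_myst55h"
processor = WhisperProcessor.from_pretrained(name)
model = WhisperForConditionalGeneration.from_pretrained(name)

audio, sr = sf.read("child_speech.wav")  # placeholder path; expected 16 kHz mono
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```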
07930004d47190642c5f079664690e56
csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12
csukuangfj
null
33
0
k2
0
null
false
false
false
apache-2.0
['en']
['librispeech']
null
0
0
0
0
0
0
0
['icefall', 'k2', 'transducer', 'librispeech', 'ASR', 'stateless transducer', 'PyTorch', 'RNN-T', 'pruned RNN-T', 'speech recognition']
false
true
true
5,023
false
# Introduction This repo contains pre-trained model using <https://github.com/k2-fsa/icefall/pull/248>. It is trained on full LibriSpeech dataset using pruned RNN-T loss from [k2](https://github.com/k2-fsa/k2). ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12 cd icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12 git lfs pull ``` **Caution**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `1603744469d167d848e074f2ea98c587153205fa`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout 1603744469d167d848e074f2ea98c587153205fa ``` to download `icefall`. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the LibriSpeech dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ``` cd egs/librispeech/ASR/ ./prepare.sh export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" . path.sh ./pruned_transducer_stateless/train.py \ --world-size 8 \ --num-epochs 60 \ --start-epoch 0 \ --exp-dir pruned_transducer_stateless/exp \ --full-libri 1 \ --max-duration 300 \ --prune-range 5 \ --lr-factor 5 \ --lm-scale 0.25 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/WKRFY5fYSzaVBHahenpNlA/> The command for decoding is: ```bash epoch=42 avg=11 sym=1 # greedy search ./pruned_transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir ./pruned_transducer_stateless/exp \ --max-duration 100 \ --decoding-method greedy_search \ --beam-size 4 \ --max-sym-per-frame $sym # modified beam search ./pruned_transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir ./pruned_transducer_stateless/exp \ --max-duration 100 \ --decoding-method modified_beam_search \ --beam-size 4 # beam search # (not recommended) ./pruned_transducer_stateless/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir ./pruned_transducer_stateless/exp \ --max-duration 100 \ --decoding-method beam_search \ --beam-size 4 ``` You can find the decoding log for the above command in this repo (in the folder `log`). 
The WERs for the test datasets are: | | test-clean | test-other | comment | |-------------------------------------|------------|------------|------------------------------------------| | greedy search (max sym per frame 1) | 2.62 | 6.37 | --epoch 42, --avg 11, --max-duration 100 | | greedy search (max sym per frame 2) | 2.62 | 6.37 | --epoch 42, --avg 11, --max-duration 100 | | greedy search (max sym per frame 3) | 2.62 | 6.37 | --epoch 42, --avg 11, --max-duration 100 | | modified beam search (beam size 4) | 2.56 | 6.27 | --epoch 42, --avg 11, --max-duration 100 | | beam search (beam size 4) | 2.57 | 6.27 | --epoch 42, --avg 11, --max-duration 100 | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `pretrained.pt` `exp/pretrained.pt` is generated by the following command: ```bash epoch=42 avg=11 ./pruned_transducer_stateless/export.py \ --exp-dir ./pruned_transducer_stateless/exp \ --bpe-model data/lang_bpe_500/bpe.model \ --epoch $epoch \ --avg $avg ``` **HINT**: To use `pretrained.pt` to compute the WER for test-clean and test-other, just do the following: ``` cp icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/exp/pretrained.pt \ /path/to/icefall/egs/librispeech/ASR/pruned_transducer_stateless/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `pruned_transducer_stateless/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless-2022-03-12/tree/main/log
fd8c87ca9022157a1bb3af04f6c6b743
jaeyeon/wav2vec2-child-en-tokenizer-4
jaeyeon
wav2vec2
11
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,314
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-child-en-tokenizer-4 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4709 - Wer: 0.3769 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0334 | 1.72 | 100 | 1.4709 | 0.3769 | | 0.0332 | 3.45 | 200 | 1.4709 | 0.3769 | | 0.0343 | 5.17 | 300 | 1.4709 | 0.3769 | | 0.032 | 6.9 | 400 | 1.4709 | 0.3769 | | 0.0332 | 8.62 | 500 | 1.4709 | 0.3769 | | 0.0327 | 10.34 | 600 | 1.4709 | 0.3769 | | 0.0331 | 12.07 | 700 | 1.4709 | 0.3769 | | 0.0334 | 13.79 | 800 | 1.4709 | 0.3769 | | 0.0319 | 15.52 | 900 | 1.4709 | 0.3769 | | 0.0338 | 17.24 | 1000 | 1.4709 | 0.3769 | | 0.0321 | 18.97 | 1100 | 1.4709 | 0.3769 | | 0.0367 | 20.69 | 1200 | 1.4709 | 0.3769 | | 0.0331 | 22.41 | 1300 | 1.4709 | 0.3769 | | 0.0332 | 24.14 | 1400 | 1.4709 | 0.3769 | | 0.0347 | 25.86 | 1500 | 1.4709 | 0.3769 | | 0.0319 | 27.59 | 1600 | 1.4709 | 0.3769 | | 0.0302 | 29.31 | 1700 | 1.4709 | 0.3769 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
92200fbcee3c6361c841ebde9072f735
manoharahuggingface/bert-finetuned-ner
manoharahuggingface
bert
12
7
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,518
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0608 - Precision: 0.9362 - Recall: 0.9507 - F1: 0.9434 - Accuracy: 0.9866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0869 | 1.0 | 1756 | 0.0684 | 0.9180 | 0.9369 | 0.9274 | 0.9823 | | 0.033 | 2.0 | 3512 | 0.0681 | 0.9264 | 0.9487 | 0.9374 | 0.9854 | | 0.0178 | 3.0 | 5268 | 0.0608 | 0.9362 | 0.9507 | 0.9434 | 0.9866 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
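A minimal usage sketch with the standard `transformers` token-classification pipeline; the sentence is illustrative:

```python
from transformers import pipeline

# Run NER with entity grouping on the fine-tuned CoNLL-2003 checkpoint.
ner = pipeline(
    "token-classification",
    model="manoharahuggingface/bert-finetuned-ner",
    aggregation_strategy="simple",
)
for ent in ner("Hugging Face is based in New York City."):
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```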
78ace10b2ef219ecc0de800ad8e5736d
RayMelius/bert-finetuned-ner
RayMelius
bert
12
3
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
893
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
b28d965f53ae8961b6aba4b2199f2782
versae/wav2vec2-large-voxrex-swedish-coscan-no-region
versae
wav2vec2
10
4
transformers
0
audio-classification
true
false
false
cc0-1.0
null
['coscan-speech2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,869
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-voxrex-swedish-coscan-no-region This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex-swedish](https://huggingface.co/KBLab/wav2vec2-large-voxrex-swedish) on the coscan-speech2 dataset. It achieves the following results on the evaluation set: - Loss: 1.0151 - Accuracy: 0.8773 - F1: 0.8773 - Precision: 0.8773 - Recall: 0.8773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.1651 | 1.0 | 6468 | 0.5657 | 0.8650 | 0.8650 | 0.8650 | 0.8650 | | 0.1217 | 2.0 | 12936 | 0.9411 | 0.8487 | 0.8487 | 0.8487 | 0.8487 | | 0.0013 | 3.0 | 19404 | 0.9991 | 0.8617 | 0.8617 | 0.8617 | 0.8617 | | 0.0652 | 4.0 | 25872 | 1.0151 | 0.8773 | 0.8773 | 0.8773 | 0.8773 | | 0.0001 | 5.0 | 32340 | 1.1031 | 0.8700 | 0.8700 | 0.8700 | 0.8700 | ### Classification report on Coscan Speech (test set) ``` precision recall f1-score support Bergen og Ytre Vestland 0.65 0.97 0.78 1809 Hedmark og Oppland 0.12 0.06 0.08 2302 Nordland 0.97 0.47 0.63 2195 Oslo-området 0.78 0.42 0.55 6957 Sunnmøre 0.94 0.71 0.81 2636 Sør-Vestlandet 0.96 0.46 0.62 2860 Sørlandet 0.62 0.81 0.70 2490 Troms 0.67 1.00 0.80 2867 Trøndelag 0.52 0.94 0.67 2666 Voss og omland 0.70 0.71 0.71 2641 Ytre Oslofjord 0.20 0.49 0.29 1678 accuracy 0.62 31101 macro avg 0.65 0.64 0.60 31101 weighted avg 0.68 0.62 0.61 31101 ``` ### Framework versions - Transformers 4.22.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 2.4.1.dev0 - Tokenizers 0.12.1
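A minimal inference sketch with the standard `transformers` audio-classification pipeline; the audio path is a placeholder:

```python
from transformers import pipeline

# Predict the speaker's region for a short speech clip.
clf = pipeline("audio-classification",
               model="versae/wav2vec2-large-voxrex-swedish-coscan-no-region")
for pred in clf("speaker_clip.wav", top_k=3):  # placeholder audio path
    print(pred["label"], round(pred["score"], 3))
```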
9759f8574c4b82d36601e0ed6fb10765
ghatgetanuj/roberta-large_cls_subj
ghatgetanuj
roberta
13
1
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,507
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large_cls_subj This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6931 - Accuracy: 0.4835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3714 | 1.0 | 500 | 0.2392 | 0.9335 | | 0.395 | 2.0 | 1000 | 0.7052 | 0.4855 | | 0.5316 | 3.0 | 1500 | 0.6932 | 0.5055 | | 0.7051 | 4.0 | 2000 | 0.6926 | 0.5165 | | 0.6965 | 5.0 | 2500 | 0.6931 | 0.4835 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
9975161cc7685b3c946a414d898682c7
sd-concepts-library/david-moreno-architecture
sd-concepts-library
null
10
0
null
1
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,103
false
### David Moreno Architecture on Stable Diffusion This is the `<dm-arch>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<dm-arch> 0](https://huggingface.co/sd-concepts-library/test/resolve/main/concept_images/4.jpeg) ![<dm-arch> 1](https://huggingface.co/sd-concepts-library/test/resolve/main/concept_images/1.jpeg) ![<dm-arch> 2](https://huggingface.co/sd-concepts-library/test/resolve/main/concept_images/2.jpeg) ![<dm-arch> 3](https://huggingface.co/sd-concepts-library/test/resolve/main/concept_images/3.jpeg) ![<dm-arch> 4](https://huggingface.co/sd-concepts-library/test/resolve/main/concept_images/0.jpeg)
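Outside the notebooks linked above, a minimal sketch for loading the concept directly with `diffusers`, assuming a version that supports `load_textual_inversion`; the base model and prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion model, attach the learned <dm-arch> embedding, and generate.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/david-moreno-architecture")
image = pipe("a modern museum facade in the style of <dm-arch>").images[0]
image.save("dm_arch.png")
```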
d49ea4a5ec83b6de0e435dd7119a6f44
fathyshalab/domain_transfer_general-massive_takeaway-roberta-large-v1-5-90
fathyshalab
roberta
14
2
sentence-transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['setfit', 'sentence-transformers', 'text-classification']
false
true
true
1,514
false
# fathyshalab/domain_transfer_general-massive_takeaway-roberta-large-v1-5-90 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_general-massive_takeaway-roberta-large-v1-5-90") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
fde8530b9a4be61ecef474baf02fc723
muhtasham/bert-small-finetuned-legal-definitions-longer
muhtasham
bert
9
2
transformers
0
fill-mask
true
false
false
apache-2.0
null
['finiteautomata/legal-definitions']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,690
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_uncased_L-4_H-512_A-8-finetuned-legal-definitions-longer This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the legal-definitions dataset. It achieves the following results on the evaluation set: - Loss: 1.3701 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4867 | 1.0 | 801 | 1.4951 | | 1.429 | 2.0 | 1602 | 1.4872 | | 1.4055 | 3.0 | 2403 | 1.4147 | | 1.3703 | 4.0 | 3204 | 1.4231 | | 1.3414 | 5.0 | 4005 | 1.4094 | | 1.3254 | 6.0 | 4806 | 1.3913 | | 1.3064 | 7.0 | 5607 | 1.3827 | | 1.2967 | 8.0 | 6408 | 1.3905 | | 1.2961 | 9.0 | 7209 | 1.3719 | | 1.2824 | 10.0 | 8010 | 1.3701 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
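A minimal usage sketch with the standard `transformers` fill-mask pipeline; the sentence is illustrative:

```python
from transformers import pipeline

# Fill a masked token in a legal-definition-style sentence.
fill = pipeline("fill-mask", model="muhtasham/bert-small-finetuned-legal-definitions-longer")
for pred in fill("The term 'employee' [MASK] any individual employed by an employer."):
    print(pred["token_str"], round(pred["score"], 3))
```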
e25b13ddc0d3731f38afc493476a5849
gokuls/distilbert_add_GLUE_Experiment_logit_kd_stsb_96
gokuls
distilbert
17
2
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,326
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_logit_kd_stsb_96 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.1264 - Pearson: nan - Spearmanr: nan - Combined Score: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 4.3296 | 1.0 | 23 | 3.3387 | nan | nan | nan | | 3.9535 | 2.0 | 46 | 3.1277 | nan | nan | nan | | 3.7081 | 3.0 | 69 | 2.9189 | nan | nan | nan | | 3.4597 | 4.0 | 92 | 2.7125 | nan | nan | nan | | 3.2232 | 5.0 | 115 | 2.5114 | nan | nan | nan | | 2.972 | 6.0 | 138 | 2.3156 | 0.0070 | 0.0078 | 0.0074 | | 2.7373 | 7.0 | 161 | 2.1284 | nan | nan | nan | | 2.527 | 8.0 | 184 | 1.9503 | nan | nan | nan | | 2.3016 | 9.0 | 207 | 1.7828 | 0.0092 | 0.0081 | 0.0087 | | 2.0903 | 10.0 | 230 | 1.6295 | nan | nan | nan | | 1.8919 | 11.0 | 253 | 1.4932 | -0.0357 | -0.0358 | -0.0358 | | 1.7184 | 12.0 | 276 | 1.3768 | nan | nan | nan | | 1.5665 | 13.0 | 299 | 1.2813 | 0.0302 | 0.0292 | 0.0297 | | 1.4283 | 14.0 | 322 | 1.2075 | 0.0115 | 0.0132 | 0.0123 | | 1.3175 | 15.0 | 345 | 1.1569 | nan | nan | nan | | 1.2276 | 16.0 | 368 | 1.1298 | nan | nan | nan | | 1.1643 | 17.0 | 391 | 1.1264 | nan | nan | nan | | 1.1172 | 18.0 | 414 | 1.1447 | 0.0009 | 0.0027 | 0.0018 | | 1.1066 | 19.0 | 437 | 1.1677 | nan | nan | nan | | 1.1002 | 20.0 | 460 | 1.1712 | 0.0024 | 0.0003 | 0.0014 | | 1.1027 | 21.0 | 483 | 1.1767 | nan | nan | nan | | 1.0984 | 22.0 | 506 | 1.1799 | nan | nan | nan | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
d1a23eac9167d2cebb47f365210f6269
MBMMurad/wav2vec2-base-cvbn-37k
MBMMurad
wav2vec2
13
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['cvbn']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,206
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-cvbn-37k This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the cvbn dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2288 - eval_wer: 0.3332 - eval_runtime: 329.8903 - eval_samples_per_second: 9.094 - eval_steps_per_second: 0.57 - epoch: 3.59 - step: 8400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.21.1 - Pytorch 1.11.0+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
01966695195a334122d229cfc369f314
anas-awadalla/bart-large-few-shot-k-64-finetuned-squad-infilling-seed-2
anas-awadalla
bart
16
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
971
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-few-shot-k-64-finetuned-squad-infilling-seed-2 This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
e2a96bfaf2ca67c5e1fe2a5d721ced83
tuananh7198/whisper-medium-vi
tuananh7198
whisper
47
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['vi']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,327
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Medium Vietnamese This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 vi dataset. It achieves the following results on the evaluation set: - Loss: 0.5422 - Wer: 20.0483 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0241 | 4.01 | 1000 | 0.5422 | 20.0483 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
3cbc489cb363bf12869f72738cbbe737
WillHeld/byt5-base-cstop_artificial
WillHeld
t5
15
5
transformers
0
text2text-generation
true
false
false
apache-2.0
['en']
['cstop_artificial']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,221
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # byt5-base-cstop_artificial This model is a fine-tuned version of [google/byt5-base](https://huggingface.co/google/byt5-base) on the cstop_artificial dataset. It achieves the following results on the evaluation set: - Loss: 0.0461 - Exact Match: 0.7996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | |:-------------:|:------:|:----:|:---------------:|:-----------:| | 0.2563 | 28.5 | 200 | 0.0461 | 0.0376 | | 0.0065 | 57.13 | 400 | 0.0563 | 0.0376 | | 0.0021 | 85.63 | 600 | 0.0592 | 0.0358 | | 0.0013 | 114.25 | 800 | 0.0569 | 0.0376 | | 0.0008 | 142.75 | 1000 | 0.0675 | 0.0358 | | 0.0007 | 171.38 | 1200 | 0.0627 | 0.0394 | | 0.0004 | 199.88 | 1400 | 0.0677 | 0.0358 | | 0.0003 | 228.5 | 1600 | 0.0650 | 0.0376 | | 0.0002 | 257.13 | 1800 | 0.0693 | 0.0394 | | 0.0002 | 285.63 | 2000 | 0.0721 | 0.0394 | | 0.0002 | 314.25 | 2200 | 0.0714 | 0.0376 | | 0.0002 | 342.75 | 2400 | 0.0701 | 0.0394 | | 0.0002 | 371.38 | 2600 | 0.0750 | 0.0394 | | 0.0001 | 399.88 | 2800 | 0.0739 | 0.0394 | | 0.0001 | 428.5 | 3000 | 0.0745 | 0.0394 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.0 - Tokenizers 0.13.2
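A minimal usage sketch with the standard `transformers` text2text-generation pipeline; the utterance is illustrative and the decoded string follows whatever target format the model was trained on:

```python
from transformers import pipeline

# Map a natural-language request to the model's target representation.
parser = pipeline("text2text-generation", model="WillHeld/byt5-base-cstop_artificial")
out = parser("set an alarm for 7 am tomorrow", max_new_tokens=128)
print(out[0]["generated_text"])
```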
0219ad6c2359249a0f6057ce537c7861
facebook/esm1b_t33_650M_UR50S
facebook
esm
8
346
transformers
2
fill-mask
true
true
false
mit
null
null
null
1
0
1
0
0
0
0
[]
false
true
true
4,495
false
# **ESM-1b**

ESM-1b ([paper](https://www.pnas.org/content/118/15/e2016239118#:~:text=https%3A//doi.org/10.1073/pnas.2016239118), [repository](https://github.com/facebookresearch/esm)) is a transformer protein language model, trained on protein sequence data without label supervision. The model is pretrained on Uniref50 with an unsupervised masked language modeling (MLM) objective, meaning the model is trained to predict amino acids from the surrounding sequence context. This pretraining objective allows ESM-1b to learn generally useful features which can be transferred to downstream prediction tasks. ESM-1b has been evaluated on a variety of tasks related to protein structure and function, including remote homology detection, secondary structure prediction, contact prediction, and prediction of the effects of mutations on function, producing state-of-the-art results.

**Important note**: ESM-2 is now available in a range of checkpoint sizes. For most tasks, ESM-2 performance will be superior to ESM-1 and ESM-1b, and so we recommend using it instead unless your goal is explicitly to compare against ESM-1b. The ESM-2 checkpoint closest in size to ESM-1b is [esm2_t33_650M_UR50D](https://huggingface.co/facebook/esm2_t33_650M_UR50D).

## **Model description**

The ESM-1b model is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and training procedure, using the Uniref50 2018_03 database of protein sequences. Note that the pretraining is on the raw protein sequences only. The training is purely unsupervised -- during training no labels are given related to structure or function.

Training is with the masked language modeling objective. The masking follows the procedure of [Devlin et al. 2019](https://arxiv.org/abs/1810.04805), randomly masking 15% of the amino acids in the input, and includes the pass-through and random token noise. One architecture difference from the RoBERTa model is that ESM-1b uses [pre-activation layer normalization](https://arxiv.org/abs/1603.05027).

The learned representations can be used as features for downstream tasks. For example, if you have a dataset of measurements of protein activity, you can fit a regression model on the features output by ESM-1b to predict the activity of new sequences. The model can also be fine-tuned.

ESM-1b can infer information about the structure and function of proteins without further supervision, i.e. it is capable of zero-shot transfer to structure and function prediction. [Rao et al. 2020](https://openreview.net/pdf?id=fylclEqgvgd) found that the attention heads of ESM-1b directly represent contacts in the 3d structure of the protein. [Meier et al. 2021](https://openreview.net/pdf?id=uXc42E9ZPFs) found that ESM-1b can be used to score the effect of sequence variations on protein function.

## **Intended uses & limitations**

The model can be used for feature extraction, fine-tuned on downstream tasks, or used directly to make inferences about the structure and function of protein sequences, like any other masked language model. For full examples, please see [our notebook on fine-tuning protein models](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb).

## **Training data**

The ESM-1b model was pretrained on [Uniref50](https://www.uniprot.org/downloads) 2018-03, a dataset consisting of approximately 30 million protein sequences.

## **Training procedure**

### **Preprocessing**

The protein sequences are uppercased and tokenized using a single space and a vocabulary size of 21. The inputs of the model are then of the form:

```
<cls> Protein Sequence A
```

During training, sequences longer than 1023 tokens (without CLS) are randomly cropped to a length of 1023.

The details of the masking procedure for each sequence follow Devlin et al. 2019:
* 15% of the amino acids are masked.
* In 80% of the cases, the masked amino acids are replaced by `<mask>`.
* In 10% of the cases, the masked amino acids are replaced by a random amino acid (different) from the one they replace.
* In the 10% remaining cases, the masked amino acids are left as is.

### **Pretraining**

The model was trained on 128 NVIDIA v100 GPUs for 500K updates, using sequence length 1024 (131,072 tokens per batch). The optimizer used is Adam (betas=[0.9, 0.999]) with a learning rate of 1e-4, a weight decay of 0, learning rate warmup for 16k steps and inverse square root decay of the learning rate after.
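As a quick illustration of the "use it like any other masked language model" point above, here is a minimal sketch (not part of the original card) using the 🤗 Transformers fill-mask pipeline; the protein sequence and the position of the masked residue are arbitrary choices for demonstration.

```python
from transformers import pipeline

# Load the checkpoint as a masked-language-model pipeline (pipeline tag: fill-mask).
unmasker = pipeline("fill-mask", model="facebook/esm1b_t33_650M_UR50S")

# Arbitrary example sequence with one residue replaced by the mask token.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
masked = sequence[:10] + unmasker.tokenizer.mask_token + sequence[11:]

for prediction in unmasker(masked, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```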
ab3f06e16f4aa050e2c01a0f796e43bc
ncduy/distilbert-base-cased-distilled-squad-finetuned-squad-small
ncduy
distilbert
12
7
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,031
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-cased-distilled-squad-finetuned-squad-small

This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
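The card does not include a usage example; the sketch below (not part of the original card) loads the checkpoint with the 🤗 Transformers question-answering pipeline. The question/context pair is invented for illustration.

```python
from transformers import pipeline

# Load the fine-tuned extractive QA checkpoint.
qa = pipeline(
    "question-answering",
    model="ncduy/distilbert-base-cased-distilled-squad-finetuned-squad-small",
)

# Made-up question/context pair, just to show the call signature.
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```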
1b8c82be9dd8ef342f8a8dccecbc5d0e
jonatasgrosman/exp_w2v2t_fa_unispeech_s108
jonatasgrosman
unispeech
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fa']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fa']
false
true
true
469
false
# exp_w2v2t_fa_unispeech_s108

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
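Since the card points to HuggingSound but shows no code, here is a minimal transcription sketch following that library's documented pattern; the audio paths are placeholders.

```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint with the same tool used for fine-tuning.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_unispeech_s108")

# Placeholder paths; use your own 16 kHz recordings.
audio_paths = ["/path/to/file1.mp3", "/path/to/file2.wav"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```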
95d30187b3aa1d6d96173990ecc89c86
mikr/whisper-small-cs-sk-cv11
mikr
whisper
23
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sk']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,590
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper Small Slovak test on Czech

This model is a fine-tuned version of [mikr/whisper-small-cs-cv11](https://huggingface.co/mikr/whisper-small-cs-cv11) on the mozilla-foundation/common_voice_11_0 sk dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7223
- Wer: 35.4355

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.001         | 21.0  | 1000 | 0.6507          | 37.3275 |
| 0.0003        | 42.01 | 2000 | 0.6954          | 36.1138 |
| 0.0002        | 63.01 | 3000 | 0.7223          | 35.4355 |
| 0.0001        | 85.0  | 4000 | 0.7388          | 35.5902 |
| 0.0001        | 106.0 | 5000 | 0.7465          | 35.6735 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
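No usage snippet is included in the auto-generated card; as a minimal sketch (not part of the original card), the checkpoint can be run through the 🤗 Transformers ASR pipeline. The audio filename is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint as a speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="mikr/whisper-small-cs-sk-cv11",
    chunk_length_s=30,  # Whisper operates on 30-second windows
)

# Placeholder path to a Slovak recording.
print(asr("sample_sk.wav")["text"])
```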
b3057409511ea3dd21fd3a831eb78dc1
DDSC/roberta-base-danish
DDSC
roberta
25
232
transformers
0
fill-mask
true
false
true
cc-by-4.0
['da']
null
null
0
0
0
0
0
0
0
['danish', 'roberta']
false
true
true
516
false
# RøBÆRTa - Danish Roberta Base

## Description

RøBÆRTa is a Danish pretrained RoBERTa base model. RøBÆRTa was pretrained on the Danish mC4 dataset during the Flax community week.

This project was organized by Dansk Data Science Community (DDSC) 👇
<br><br>
https://www.linkedin.com/groups/9017904/

## Team RøBÆRTa:

- Dan Saattrup Nielsen (saattrupdan)
- Malte Højmark-Bertelsen (Maltehb)
- Morten Kloster Pedersen (MortenKP)
- Kasper Junge (Juunge)
- Per Egil Kummervold (pere)
- Birger Moëll (birgermoell)

---
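The card gives no usage example; a minimal fill-mask sketch (not part of the original card) is shown below. The Danish sentence is an arbitrary illustration.

```python
from transformers import pipeline

# Query RøBÆRTa through the fill-mask pipeline.
unmasker = pipeline("fill-mask", model="DDSC/roberta-base-danish")

# Arbitrary Danish example sentence with one masked token.
for prediction in unmasker("Der bor mange mennesker i <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```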
6eb790f15411bf81b7b35d7431bf9c9f
cpierse/wav2vec2-large-xlsr-53-esperanto
cpierse
wav2vec2
9
7,240
transformers
1
automatic-speech-recognition
true
false
true
apache-2.0
['eo']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
true
true
true
4,078
false
# Wav2Vec2-Large-XLSR-53-eo

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on esperanto using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "eo", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Esperanto test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import jiwer

def chunked_wer(targets, predictions, chunk_size=None):
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H = H + chunk_metrics["hits"]
        S = S + chunk_metrics["substitutions"]
        D = D + chunk_metrics["deletions"]
        I = I + chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)

test_dataset = load_dataset("common_voice", "eo", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model = Wav2Vec2ForCTC.from_pretrained("cpierse/wav2vec2-large-xlsr-53-esperanto")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\„\«\(\»\)\’\']'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference on the test set and collect the predicted strings.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * chunked_wer(predictions=result["pred_strings"], targets=result["sentence"], chunk_size=2000)))
```

**Test Result**: 12.31 %

## Training

The Common Voice `train`, `validation` datasets were used for training.
d6b4e0c36713afdbb9f89d39cd65953b
cyycyy/xlm-roberta-base-finetuned-panx-de-fr
cyycyy
xlm-roberta
10
13
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,315
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-de-fr

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1624
- F1: 0.8591

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.289         | 1.0   | 715  | 0.1831          | 0.8193 |
| 0.1471        | 2.0   | 1430 | 0.1527          | 0.8507 |
| 0.0938        | 3.0   | 2145 | 0.1624          | 0.8591 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
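The card lists no usage example; below is a minimal sketch (not part of the original card) that runs the tagger through the 🤗 Transformers token-classification pipeline. The German sentence is made up for illustration.

```python
from transformers import pipeline

# Load the fine-tuned NER tagger; aggregation merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="cyycyy/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)

# Arbitrary German example sentence.
print(ner("Angela Merkel besuchte Paris im Mai."))
```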
decb5747f04881e613e202f69b3303ff
BasStein/doe2vec-d10-m8-ls32-VAE-kl0.001
BasStein
null
6
0
keras
0
null
false
false
false
apache-2.0
['en']
['BasStein/250000-randomfunctions-10d']
{'emissions': 0.0363, 'source': 'code carbon', 'training_type': 'pre-training', 'geographical_location': 'Leiden, The Netherlands', 'hardware_used': '1 Tesla T4'}
0
0
0
0
0
0
0
['doe2vec', 'exploratory-landscape-analysis', 'autoencoders']
false
true
true
1,232
false
## Model description

DoE2Vec model that can transform any design of experiments (function landscape) to a feature vector.
For different input dimensions or sample sizes you require a different model.
Each model name is built up like doe2vec-d{dimension}-m{sample size}-ls{latent size}-{AE or VAE}-kl{KL loss weight}

Example code of loading this huggingface model using the doe2vec package.

First install the package

```zsh
pip install doe2vec
```

Then import and load the model.

```python
from doe2vec import doe_model

obj = doe_model(
    10,
    8,
    latent_dim=32,
    kl_weight=0.001,
    model_type="VAE"
)
obj.load_from_huggingface()
# test the model
obj.plot_label_clusters_bbob()
```

## Intended uses & limitations

The model is intended to be used to generate feature representations for optimization function landscapes.
The representations can then be used for downstream tasks such as automatic optimization pipelines and meta-learning.

## Training procedure

The model is trained using a weighted KL loss and mean squared error reconstruction loss.
The model is trained using 250,000 randomly generated functions (see the dataset) over 100 epochs.

- **Hardware:** 1x Tesla T4 GPU
- **Optimizer:** Adam
26060008e15e5420fc9541cef471c45d
sd-concepts-library/dog-chip
sd-concepts-library
null
9
0
null
1
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,006
false
### Dog Chip on Stable Diffusion This is the `<dog-chip>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cat-toy> 0](https://huggingface.co/sd-concepts-library/dog-chip/resolve/main/concept_images/2.jpeg) ![<cat-toy> 1](https://huggingface.co/sd-concepts-library/dog-chip/resolve/main/concept_images/1.jpeg) ![<cat-toy> 2](https://huggingface.co/sd-concepts-library/dog-chip/resolve/main/concept_images/3.jpeg) ![<cat-toy> 3](https://huggingface.co/sd-concepts-library/dog-chip/resolve/main/concept_images/0.jpeg)
35026e2362876b13137cf096b4eeacb9
AlonCohen/social-groups-ner-first-try
AlonCohen
distilbert
12
4
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
970
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# social-groups-ner-first-try

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
d7a4c134c4b911cfde2adee5a591422a
artemnech/enrut5-base
artemnech
mt5
8
1
transformers
0
text2text-generation
true
false
false
mit
['ru', 'en']
null
null
0
0
0
0
0
0
0
['russian']
false
true
true
1,683
false
This is the mt5-base model [google/mt5-base](https://huggingface.co/google/mt5-base) in which only Russian and English tokens are left.

The model has been fine-tuned for several tasks:
* translation (opus100 dataset)
* dialog (daily dialog dataset)

How to use:

```
# !pip install transformers sentencepiece
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, T5Tokenizer
import torch

model_name = 'artemnech/enrut5-base'

model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def generate(text, **kwargs):
    model.eval()
    inputs = tokenizer(text, return_tensors='pt').to(model.device)
    with torch.no_grad():
        hypotheses = model.generate(**inputs, **kwargs)
    return tokenizer.decode(hypotheses[0], skip_special_tokens=True)

print(generate('translate ru-en: Я боюсь, что я не завершу доклад в ближайшее время.', num_beams=4,))
# I fear I'm not going to complete the report in the near future.

print(generate("translate en-ru: I'm afraid that I won't finish the report on time.", num_beams=4, max_length = 30))
# Я боюсь, что я не завершу доклад в ближайшее время.

print(generate('dialog: user1>>: Hello', num_beams=2))
# Hi

print(generate('dialog: user1>>: Hello user2>>: Hi user1>>: Would you like to drink something?', num_beams=2))
# I would like to drink a glass of wine.

from collections import deque

context = deque([], maxlen=6)

while True:
    text = input()
    text = 'user1>>: ' + text
    context.append(text)
    answ = generate('dialog: ' + ' '.join(context), num_beams=3, do_sample = True, temperature=1.5)
    context.append('user2>>: ' + answ)
    print('bot: ', answ)
```
534f396b3d29d33baf5a54de2ba00aac
Rongjiehuang/ProDiff
Rongjiehuang
null
12
0
null
5
text-to-speech
false
false
false
other
null
['LJSpeech']
null
0
0
0
0
0
0
0
['text-to-speech', 'neural-vocoder', 'diffusion probabilistic model']
false
true
true
1,715
false
# ProDiff and FastDiff Model Card

## Key Features

- **Extremely-Fast** diffusion text-to-speech synthesis pipeline for potential **industrial deployment**.
- **Tutorial and code base** for speech diffusion models.
- More **supported diffusion mechanisms** (e.g., guided diffusion) will be available.

## Model Details

- **Model type:** Diffusion-based text-to-speech generation model
- **Language(s):** English
- **Model Description:** A conditional diffusion probabilistic model capable of generating high fidelity speech efficiently.
- **Resources for more information:** [FastDiff GitHub Repository](https://github.com/Rongjiehuang/FastDiff), [FastDiff Paper](https://arxiv.org/abs/2204.09934), [ProDiff GitHub Repository](https://github.com/Rongjiehuang/ProDiff), [ProDiff Paper](https://arxiv.org/abs/2207.06389).
- **Cite as:**

```
@inproceedings{huang2022prodiff,
  title={ProDiff: Progressive Fast Diffusion Model For High-Quality Text-to-Speech},
  author={Huang, Rongjie and Zhao, Zhou and Liu, Huadai and Liu, Jinglin and Cui, Chenye and Ren, Yi},
  booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
  year={2022}
}

@inproceedings{huang2022fastdiff,
  title={FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis},
  author={Huang, Rongjie and Lam, Max WY and Wang, Jun and Su, Dan and Yu, Dong and Ren, Yi and Zhao, Zhou},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  year={2022}
}
```

- *This model card was written based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
e8b4ba2c2e8d37499dabd4ac72a6a2d1
BenjaminB/plain-sklearn
BenjaminB
null
5
3
sklearn
0
null
false
false
false
bsd-3-clause
null
['synthetic dataset from sklearn']
null
3
0
0
1
0
0
0
['sklearn']
false
true
true
543
false
# Simple example using plain scikit-learn

## Reproducing the model

Inside a Python environment, install the dependencies listed in `requirements.txt` and then run:

```bash
python train.py
```

The resulting model artifact should be stored in `model.pickle`.

## The model

The used model is a simple logistic regression trained through gradient descent.

## Intended use & limitations

This model is just for demonstration purposes and should thus not be used.

## Dataset

The dataset is entirely synthetic and has no real world origin.
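To illustrate how the stored artifact would be consumed, here is a minimal sketch (not part of the original card); it assumes `model.pickle` contains a fitted scikit-learn estimator produced by `train.py`, and the inputs are random placeholders.

```python
import pickle

import numpy as np

# Load the artifact written by train.py (assumed to be a fitted sklearn estimator).
with open("model.pickle", "rb") as f:
    model = pickle.load(f)

# Score a few random placeholder samples with the same feature dimensionality.
X_new = np.random.randn(3, model.n_features_in_)
print(model.predict(X_new))
```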
54d83559eeb41b72f41c810a3735c337
jonatasgrosman/exp_w2v2t_ar_xlsr-53_s34
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'ar']
false
true
true
460
false
# exp_w2v2t_ar_xlsr-53_s34

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
4da2078816781929f3d1ee0599b74c4b
tomekkorbak/dazzling_swirles
tomekkorbak
null
2
0
null
0
null
false
false
false
mit
['en']
['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
8,841
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dazzling_swirles This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 25000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'filter_threshold': 0.00078, 'is_split_by_sentences': True, 'skip_tokens': 1661599744}, 'generation': {'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': 
'81a1701e025d2c65ae6e8c2103df559071523ee0'}, 'path_or_name': 'tomekkorbak/goofy_pasteur'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'dazzling_swirles', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1661599744, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/1957kpf2
b5dcef59ac88a44deb1cdc97ee141415
laion/CLIP-ViT-g-14-laion2B-s12B-b42K
laion
clip
12
14,744
open_clip
16
null
true
false
false
mit
null
null
null
1
0
1
0
1
1
0
[]
false
true
true
7,212
false
# Model Card for CLIP ViT-g/14 - LAION-2B

# Table of Contents

1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

A CLIP ViT-g/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).

Model training done by Romain Beaumont on the [stability.ai](https://stability.ai/) cluster.

# Uses

As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.

The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.

## Direct Use

Zero-shot image classification, image and text retrieval, among others.

## Downstream Use

Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.

## Out-of-Scope Use

As per the OpenAI models, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.

Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.

Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.

Beyond the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below.

# Training Details

## Training Data

This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).

**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.

## Training Procedure

Please see [training notes](https://docs.google.com/document/d/1EFbMLRWSSV0LUf9Du1pWzWqgeiIRPwEWX2s1C6mAk5c) and [wandb logs](https://wandb.ai/rom1504/eval_openclip/reports/slow-g-14--VmlldzoyNTMwMjg5).

# Evaluation

Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).

## Testing Data, Factors & Metrics

### Testing Data

The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.

**TODO** - more detail

## Results

The model achieves a 76.6 zero-shot top-1 accuracy on ImageNet-1k.

An initial round of benchmarks have been performed on a wider range of datasets, currently viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb

**TODO** - create table for just this model's metrics.

# Acknowledgements

Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.

# Citation

**BibTeX:**

In addition to forthcoming LAION-5B (https://laion.ai/blog/laion-5b/) paper, please cite:

OpenAI CLIP paper

```
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}
```

OpenCLIP software

```
@software{ilharco_gabriel_2021_5143773,
  author    = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title     = {OpenCLIP},
  month     = jul,
  year      = 2021,
  note      = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version   = {0.1},
  doi       = {10.5281/zenodo.5143773},
  url       = {https://doi.org/10.5281/zenodo.5143773}
}
```

# How to Get Started with the Model

Use the code below to get started with the model.

** TODO ** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
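Until the TODO above is filled in, here is a minimal zero-shot classification sketch following the OpenCLIP README pattern; loading via the `hf-hub:` path of this repository and the image path/label set are assumptions for illustration.

```python
import torch
from PIL import Image
import open_clip

# Load the checkpoint directly from the Hugging Face Hub via OpenCLIP.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-g-14-laion2B-s12B-b42K"
)
tokenizer = open_clip.get_tokenizer("hf-hub:laion/CLIP-ViT-g-14-laion2B-s12B-b42K")

# Placeholder image and candidate labels.
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```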
1ec30846e4c10f0a6832756752177030
tomekkorbak/boring_mcclintock
tomekkorbak
gpt2
39
2
transformers
0
null
true
false
false
mit
['en']
['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
8,327
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # boring_mcclintock This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.0}, 'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 
'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'boring_mcclintock', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/c17x87uu
996754d0ffe78db40ebb1b9f35ed7ffa
hamidov02/wav2vec2-large-xls-r-300m-hungarian-colab
hamidov02
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,733
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-hungarian-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6404
- Wer: 0.4662

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4833        | 4.0   | 400  | 0.6493          | 0.6491 |
| 0.2282        | 8.0   | 800  | 0.6395          | 0.5555 |
| 0.1612        | 12.0  | 1200 | 0.6841          | 0.5423 |
| 0.1207        | 16.0  | 1600 | 0.6646          | 0.5224 |
| 0.0929        | 20.0  | 2000 | 0.6355          | 0.4908 |
| 0.0713        | 24.0  | 2400 | 0.6410          | 0.4711 |
| 0.0613        | 28.0  | 2800 | 0.6404          | 0.4662 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
fdd8974829c5587f1eadb608d2b88cba
itaihay/wav2vec_asr_swbd
itaihay
wav2vec2
361
19
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
5,521
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec_asr_swbd This model is a fine-tuned version of [facebook/wav2vec2-large-robust-ft-swbd-300h](https://huggingface.co/facebook/wav2vec2-large-robust-ft-swbd-300h) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3052 - Wer: 0.5302 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 80 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.5445 | 0.29 | 500 | 0.9114 | 0.6197 | | 0.9397 | 0.58 | 1000 | 0.5057 | 0.5902 | | 0.8557 | 0.86 | 1500 | 0.4465 | 0.6264 | | 0.7716 | 1.15 | 2000 | 0.4182 | 0.5594 | | 0.7659 | 1.44 | 2500 | 0.4111 | 0.7048 | | 0.7406 | 1.73 | 3000 | 0.3927 | 0.5944 | | 0.6857 | 2.02 | 3500 | 0.3852 | 0.7118 | | 0.7113 | 2.31 | 4000 | 0.3775 | 0.5608 | | 0.6804 | 2.59 | 4500 | 0.3885 | 0.5759 | | 0.6654 | 2.88 | 5000 | 0.3703 | 0.7226 | | 0.6569 | 3.17 | 5500 | 0.3688 | 0.5972 | | 0.6335 | 3.46 | 6000 | 0.3661 | 0.7278 | | 0.6309 | 3.75 | 6500 | 0.3579 | 0.6324 | | 0.6231 | 4.03 | 7000 | 0.3620 | 0.5770 | | 0.6171 | 4.32 | 7500 | 0.3640 | 0.5772 | | 0.6191 | 4.61 | 8000 | 0.3553 | 0.6075 | | 0.6142 | 4.9 | 8500 | 0.3543 | 0.6126 | | 0.5905 | 5.19 | 9000 | 0.3601 | 0.6319 | | 0.5846 | 5.48 | 9500 | 0.3429 | 0.7343 | | 0.5874 | 5.76 | 10000 | 0.3429 | 0.5962 | | 0.5768 | 6.05 | 10500 | 0.3381 | 0.7410 | | 0.5783 | 6.34 | 11000 | 0.3391 | 0.5823 | | 0.5835 | 6.63 | 11500 | 0.3447 | 0.5821 | | 0.5817 | 6.92 | 12000 | 0.3314 | 0.6890 | | 0.5459 | 7.2 | 12500 | 0.3363 | 0.5727 | | 0.5575 | 7.49 | 13000 | 0.3363 | 0.7387 | | 0.5505 | 7.78 | 13500 | 0.3368 | 0.5685 | | 0.55 | 8.07 | 14000 | 0.3330 | 0.5587 | | 0.5523 | 8.36 | 14500 | 0.3338 | 0.5484 | | 0.5116 | 8.65 | 15000 | 0.3350 | 0.4351 | | 0.5263 | 8.93 | 15500 | 0.3254 | 0.6235 | | 0.5265 | 9.22 | 16000 | 0.3297 | 0.6207 | | 0.5265 | 9.51 | 16500 | 0.3279 | 0.6143 | | 0.5172 | 9.8 | 17000 | 0.3260 | 0.5800 | | 0.5028 | 10.09 | 17500 | 0.3259 | 0.5774 | | 0.5062 | 10.37 | 18000 | 0.3259 | 0.5552 | | 0.5112 | 10.66 | 18500 | 0.3201 | 0.6625 | | 0.5149 | 10.95 | 19000 | 0.3184 | 0.6865 | | 0.4939 | 11.24 | 19500 | 0.3152 | 0.6116 | | 0.5065 | 11.53 | 20000 | 0.3172 | 0.5246 | | 0.5129 | 11.82 | 20500 | 0.3129 | 0.5908 | | 0.4909 | 12.1 | 21000 | 0.3152 | 0.6075 | | 0.4865 | 12.39 | 21500 | 0.3160 | 0.5037 | | 0.4805 | 12.68 | 22000 | 0.3139 | 0.5458 | | 0.4691 | 12.97 | 22500 | 0.3225 | 0.5815 | | 0.4534 | 13.26 | 23000 | 0.3168 | 0.5614 | | 0.4661 | 13.54 | 23500 | 0.3135 | 0.6053 | | 0.4636 | 13.83 | 24000 | 0.3120 | 0.5142 | | 0.4554 | 14.12 | 24500 | 0.3127 | 0.5552 | | 0.4602 | 14.41 | 25000 | 0.3117 | 0.5562 | | 0.4521 | 14.7 | 25500 | 0.3106 | 0.4995 | | 0.4369 | 14.99 | 26000 | 0.3100 | 0.5663 | | 0.4249 | 15.27 | 26500 | 0.3110 | 0.5262 | | 0.4321 | 15.56 | 27000 | 0.3106 | 0.5183 | 
| 0.4293 | 15.85 | 27500 | 0.3091 | 0.5311 | | 0.4537 | 16.14 | 28000 | 0.3134 | 0.4986 | | 0.4258 | 16.43 | 28500 | 0.3138 | 0.4487 | | 0.4347 | 16.71 | 29000 | 0.3091 | 0.5011 | | 0.4615 | 17.0 | 29500 | 0.3068 | 0.5616 | | 0.4163 | 17.29 | 30000 | 0.3115 | 0.5426 | | 0.4074 | 17.58 | 30500 | 0.3079 | 0.5341 | | 0.4121 | 17.87 | 31000 | 0.3047 | 0.5619 | | 0.4219 | 18.16 | 31500 | 0.3085 | 0.5051 | | 0.4049 | 18.44 | 32000 | 0.3084 | 0.5116 | | 0.4119 | 18.73 | 32500 | 0.3071 | 0.5028 | | 0.4129 | 19.02 | 33000 | 0.3064 | 0.5030 | | 0.4143 | 19.31 | 33500 | 0.3040 | 0.5086 | | 0.4013 | 19.6 | 34000 | 0.3057 | 0.5271 | | 0.4162 | 19.88 | 34500 | 0.3052 | 0.5302 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.4 - Tokenizers 0.11.6
82324667c6d12be6d9f76d10601610ab
EmileEsmaili/ddpm-sheetmusic-clean-l1loss-colabVM
EmileEsmaili
null
38
0
diffusers
1
null
false
false
false
apache-2.0
['en']
['EmileEsmaili/sheet_music_clean']
null
0
0
0
0
0
0
0
[]
false
true
true
1,256
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-sheetmusic-clean-l1loss-colabVM

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `EmileEsmaili/sheet_music_clean` dataset.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: no

### Training results

📈 [TensorBoard logs](https://huggingface.co/EmileEsmaili/ddpm-sheetmusic-clean-l1loss-colabVM/tensorboard?#scalars)
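Since the "How to use" snippet is still a TODO, here is a minimal sampling sketch (not part of the original card) using the standard Diffusers unconditional pipeline; the output filename is arbitrary.

```python
from diffusers import DDPMPipeline

# Load the unconditional DDPM pipeline stored in this repository.
pipeline = DDPMPipeline.from_pretrained("EmileEsmaili/ddpm-sheetmusic-clean-l1loss-colabVM")

# Sample one image and save it.
image = pipeline().images[0]
image.save("generated_sheet_music.png")
```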
a5bd9a722c09c9d77b2c2b31962de889
sd-dreambooth-library/haaaa
sd-dreambooth-library
null
18
60
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
424
false
### haaaa Dreambooth model trained by valentinaw1sa4ajh with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
2e53c3c4531018bc518eec89edb32b9f
gokuls/mobilebert_sa_GLUE_Experiment_rte_256
gokuls
mobilebert
17
5
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,581
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mobilebert_sa_GLUE_Experiment_rte_256

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6927
- Accuracy: 0.5271

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6937        | 1.0   | 20   | 0.6927          | 0.5271   |
| 0.6936        | 2.0   | 40   | 0.6929          | 0.5307   |
| 0.693         | 3.0   | 60   | 0.6930          | 0.5018   |
| 0.693         | 4.0   | 80   | 0.6934          | 0.4874   |
| 0.6927        | 5.0   | 100  | 0.6947          | 0.4585   |
| 0.6909        | 6.0   | 120  | 0.6942          | 0.5126   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
b16d59edc9349a290b268947a29e7663
dallinmackay/Van-Gogh-diffusion
dallinmackay
null
20
4,544
diffusers
179
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
6
3
3
0
1
1
0
['stable-diffusion', 'text-to-image']
false
true
true
2,968
false
### Van Gogh Diffusion v2 - fixed and working

This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from the film **_Loving Vincent_**. Use the token **_lvngvncnt_** at the BEGINNING of your prompts to use the style (e.g., "lvngvncnt, beautiful woman at sunset"). This model works best with the Euler sampler (NOT Euler_a).

_Download the ckpt file from "files and versions" tab into the stable diffusion models folder of your web-ui of choice._

If you get too many yellow faces or you don't like the strong blue bias, simply put them in the negative prompt (e.g., "Yellow face, blue").

--

**Characters rendered with this model:**
![Character Samples](https://huggingface.co/dallinmackay/Van-Gogh-diffusion/resolve/main/preview1.jpg)
_prompt and settings used: **lvngvncnt, [person], highly detailed** | **Steps: 25, Sampler: Euler, CFG scale: 6**_

--

**Landscapes/miscellaneous rendered with this model:**
![Landscape Samples](https://huggingface.co/dallinmackay/Van-Gogh-diffusion/resolve/main/preview2.jpg)
_prompt and settings used: **lvngvncnt, [subject/setting], highly detailed** | **Steps: 25, Sampler: Euler, CFG scale: 6**_

--

This model was trained with Dreambooth, using TheLastBen colab notebook

--

### 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion) documentation.

You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "dallinmackay/Van-Gogh-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "lvngvncnt, beautiful woman at sunset"
image = pipe(prompt).images[0]

image.save("./sunset.png")
```

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

--

[![Become A Patreon](https://badgen.net/badge/become/a%20patron/F96854)](https://www.patreon.com/dallinmackay)
bd9bd0d2c8facd46b464a4be5364404a
KyleLackinger/kylack
KyleLackinger
null
28
15
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
1,432
false
### kyLack Dreambooth model trained by KyleLackinger with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(1).png) ![1](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(3).png) ![2](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(6).png) ![3](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(8).png) ![4](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(2).png) ![5](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(5).png) ![6](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(7).png) ![7](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(10).png) ![8](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(9).png) ![9](https://huggingface.co/KyleLackinger/kylack/resolve/main/sample_images/kyLack_(4).png)
c92fe1f39f1391cd347908f70a437921
SetFit/distilbert-base-uncased__sst2__train-8-2
SetFit
distilbert
10
7
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,888
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-8-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6932 - Accuracy: 0.4931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7081 | 1.0 | 3 | 0.7031 | 0.25 | | 0.6853 | 2.0 | 6 | 0.7109 | 0.25 | | 0.6696 | 3.0 | 9 | 0.7211 | 0.25 | | 0.6174 | 4.0 | 12 | 0.7407 | 0.25 | | 0.5717 | 5.0 | 15 | 0.7625 | 0.25 | | 0.5096 | 6.0 | 18 | 0.7732 | 0.25 | | 0.488 | 7.0 | 21 | 0.7798 | 0.25 | | 0.4023 | 8.0 | 24 | 0.7981 | 0.25 | | 0.3556 | 9.0 | 27 | 0.8110 | 0.25 | | 0.2714 | 10.0 | 30 | 0.8269 | 0.25 | | 0.2295 | 11.0 | 33 | 0.8276 | 0.25 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
54a5c481b1b42c6a7201866cb337ce34
dminiotas05/distilbert-base-uncased-finetuned-ft750_reg5
dminiotas05
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,535
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ft750_reg5 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6298 - Mse: 0.6298 - Mae: 0.6087 - R2: 0.4072 - Accuracy: 0.4973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:| | 1.8617 | 1.0 | 188 | 0.7482 | 0.7482 | 0.6639 | 0.2957 | 0.4707 | | 0.5667 | 2.0 | 376 | 0.6017 | 0.6017 | 0.5978 | 0.4336 | 0.5127 | | 0.5038 | 3.0 | 564 | 0.6298 | 0.6298 | 0.6087 | 0.4072 | 0.4973 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
fa12c397e7bb190df5087662ca97756e
Padomin/t5-base-TEDxJP-5front-1body-0rear
Padomin
t5
20
1
transformers
0
text2text-generation
true
false
false
cc-by-sa-4.0
null
['te_dx_jp']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,953
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-TEDxJP-5front-1body-0rear This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set: - Loss: 0.4633 - Wer: 0.1756 - Mer: 0.1693 - Wil: 0.2562 - Wip: 0.7438 - Hits: 55657 - Substitutions: 6415 - Deletions: 2515 - Insertions: 2414 - Cer: 0.1382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:| | 0.6441 | 1.0 | 1457 | 0.4872 | 0.2061 | 0.1954 | 0.2850 | 0.7150 | 54813 | 6709 | 3065 | 3540 | 0.1823 | | 0.543 | 2.0 | 2914 | 0.4422 | 0.1832 | 0.1765 | 0.2641 | 0.7359 | 55188 | 6458 | 2941 | 2432 | 0.1491 | | 0.4896 | 3.0 | 4371 | 0.4373 | 0.1811 | 0.1739 | 0.2612 | 0.7388 | 55568 | 6464 | 2555 | 2679 | 0.1450 | | 0.4299 | 4.0 | 5828 | 0.4326 | 0.1745 | 0.1685 | 0.2553 | 0.7447 | 55604 | 6391 | 2592 | 2288 | 0.1367 | | 0.3853 | 5.0 | 7285 | 0.4390 | 0.1758 | 0.1693 | 0.2561 | 0.7439 | 55696 | 6406 | 2485 | 2462 | 0.1375 | | 0.357 | 6.0 | 8742 | 0.4433 | 0.1835 | 0.1757 | 0.2619 | 0.7381 | 55609 | 6386 | 2592 | 2871 | 0.1438 | | 0.3735 | 7.0 | 10199 | 0.4479 | 0.1799 | 0.1729 | 0.2598 | 0.7402 | 55582 | 6425 | 2580 | 2617 | 0.1411 | | 0.302 | 8.0 | 11656 | 0.4554 | 0.1770 | 0.1702 | 0.2569 | 0.7431 | 55725 | 6408 | 2454 | 2568 | 0.1386 | | 0.2992 | 9.0 | 13113 | 0.4614 | 0.1784 | 0.1715 | 0.2581 | 0.7419 | 55672 | 6405 | 2510 | 2606 | 0.1404 | | 0.2972 | 10.0 | 14570 | 0.4633 | 0.1756 | 0.1693 | 0.2562 | 0.7438 | 55657 | 6415 | 2515 | 2414 | 0.1382 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
e1c084f14d4dd4758c3bd49ca1bcc373
Helsinki-NLP/opus-mt-gem-en
Helsinki-NLP
marian
11
11,040
transformers
1
translation
true
true
false
apache-2.0
['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
4,289
false
### gem-eng * source group: Germanic languages * target group: English * OPUS readme: [gem-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md) * model: transformer * source language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-deueng.deu.eng | 27.2 | 0.542 | | news-test2008-deueng.deu.eng | 26.3 | 0.536 | | newstest2009-deueng.deu.eng | 25.1 | 0.531 | | newstest2010-deueng.deu.eng | 28.3 | 0.569 | | newstest2011-deueng.deu.eng | 26.0 | 0.543 | | newstest2012-deueng.deu.eng | 26.8 | 0.550 | | newstest2013-deueng.deu.eng | 30.2 | 0.570 | | newstest2014-deen-deueng.deu.eng | 30.7 | 0.574 | | newstest2015-ende-deueng.deu.eng | 32.1 | 0.581 | | newstest2016-ende-deueng.deu.eng | 36.9 | 0.624 | | newstest2017-ende-deueng.deu.eng | 32.8 | 0.588 | | newstest2018-ende-deueng.deu.eng | 40.2 | 0.640 | | newstest2019-deen-deueng.deu.eng | 36.8 | 0.614 | | Tatoeba-test.afr-eng.afr.eng | 62.8 | 0.758 | | Tatoeba-test.ang-eng.ang.eng | 10.5 | 0.262 | | Tatoeba-test.dan-eng.dan.eng | 61.6 | 0.754 | | Tatoeba-test.deu-eng.deu.eng | 49.7 | 0.665 | | Tatoeba-test.enm-eng.enm.eng | 23.9 | 0.491 | | Tatoeba-test.fao-eng.fao.eng | 23.4 | 0.446 | | Tatoeba-test.frr-eng.frr.eng | 10.2 | 0.184 | | Tatoeba-test.fry-eng.fry.eng | 29.6 | 0.486 | | Tatoeba-test.gos-eng.gos.eng | 17.8 | 0.352 | | Tatoeba-test.got-eng.got.eng | 0.1 | 0.058 | | Tatoeba-test.gsw-eng.gsw.eng | 15.3 | 0.333 | | Tatoeba-test.isl-eng.isl.eng | 51.0 | 0.669 | | Tatoeba-test.ksh-eng.ksh.eng | 6.7 | 0.266 | | Tatoeba-test.ltz-eng.ltz.eng | 33.0 | 0.505 | | Tatoeba-test.multi.eng | 54.0 | 0.687 | | Tatoeba-test.nds-eng.nds.eng | 33.6 | 0.529 | | Tatoeba-test.nld-eng.nld.eng | 58.9 | 0.733 | | Tatoeba-test.non-eng.non.eng | 37.3 | 0.546 | | Tatoeba-test.nor-eng.nor.eng | 54.9 | 0.696 | | Tatoeba-test.pdc-eng.pdc.eng | 29.6 | 0.446 | | Tatoeba-test.sco-eng.sco.eng | 40.5 | 0.581 | | Tatoeba-test.stq-eng.stq.eng | 14.5 | 0.361 | | Tatoeba-test.swe-eng.swe.eng | 62.0 | 0.745 | | Tatoeba-test.swg-eng.swg.eng | 17.1 | 0.334 | | Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.400 | ### System Info: - hf_name: gem-eng - source_languages: gem - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem'] - src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: 
https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt - src_alpha3: gem - tgt_alpha3: eng - short_pair: gem-en - chrF2_score: 0.687 - bleu: 54.0 - brevity_penalty: 0.993 - ref_len: 72120.0 - src_name: Germanic languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: gem - tgt_alpha2: en - prefer_old: False - long_pair: gem-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
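A minimal translation sketch with the standard MarianMT classes from 🤗 Transformers; the Swedish example sentence is an arbitrary illustration, and any of the listed Germanic source languages can be passed as plain text:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gem-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Source sentence in one of the supported Germanic languages (here: Swedish)
batch = tokenizer(["Jag älskar att läsa böcker."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```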
0d35a47dff49b16ae9ea79b250b1ae87
sd-concepts-library/ori-toor
sd-concepts-library
null
11
0
null
15
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,216
false
### Ori Toor on Stable Diffusion This is the `<ori-toor>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<ori-toor> 0](https://huggingface.co/sd-concepts-library/ori-toor/resolve/main/concept_images/3.jpeg) ![<ori-toor> 1](https://huggingface.co/sd-concepts-library/ori-toor/resolve/main/concept_images/0.jpeg) ![<ori-toor> 2](https://huggingface.co/sd-concepts-library/ori-toor/resolve/main/concept_images/5.jpeg) ![<ori-toor> 3](https://huggingface.co/sd-concepts-library/ori-toor/resolve/main/concept_images/1.jpeg) ![<ori-toor> 4](https://huggingface.co/sd-concepts-library/ori-toor/resolve/main/concept_images/2.jpeg) ![<ori-toor> 5](https://huggingface.co/sd-concepts-library/ori-toor/resolve/main/concept_images/4.jpeg)
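Besides the notebooks above, a rough sketch of loading the embedding directly in 🧨 Diffusers — the base checkpoint and prompt are illustrative assumptions, and `load_textual_inversion` requires a reasonably recent diffusers release:

```python
from diffusers import StableDiffusionPipeline
import torch

# Base checkpoint is an assumption; any compatible SD 1.x checkpoint should work
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/ori-toor")

image = pipe("a colorful city street in the style of <ori-toor>").images[0]
image.save("ori-toor-style.png")
```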
dc61cbcdb04d5f90fc4e8ac15c089cfb
StonyBrookNLP/teabreac-bart-large-numglue
StonyBrookNLP
bart
9
3
transformers
0
text2text-generation
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['question-answering', 'multi-step-reasoning', 'multi-hop-reasoning']
false
true
true
2,631
false
# What's this?

This is one of the models reported in the paper: ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496).

This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details.

We release the following models:

- **A:** Base models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`

The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`. The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.

The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.

# How to use it?

Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization  # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac

model_name = "StonyBrookNLP/teabreac-bart-large-numglue"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)  # Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
    "answer_me: Who scored the first touchdown of the game?" +
    "context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
    # Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
    input_texts, return_tensors="pt",
    truncation=True, max_length=800,
    add_special_tokens=True, padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
    tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions
]
# => ["Chaz Schilens"]
```
53624f60db77c0055cce89aebad49053
stanfordnlp/stanza-id
stanfordnlp
null
15
232
stanza
0
token-classification
false
false
false
apache-2.0
['id']
null
null
0
0
0
0
0
0
0
['stanza', 'token-classification']
false
true
true
583
false
# Stanza model for Indonesian (id)

Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.

Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).

This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo.

Last updated 2022-09-25 01:33:23.962
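A brief usage sketch with the standard Stanza API; the example sentence is illustrative, and the exact processors available depend on the Indonesian package that gets downloaded:

```python
import stanza

stanza.download("id")          # fetch the Indonesian models
nlp = stanza.Pipeline("id")    # build the default Indonesian pipeline
doc = nlp("Barack Obama lahir di Hawaii.")

for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.lemma)
```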
08fc4b7f220b29941601cfb3d627d8d4
mpoyraz/wav2vec2-xls-r-300m-cv6-turkish
mpoyraz
wav2vec2
13
34
transformers
5
automatic-speech-recognition
true
false
false
apache-2.0
['tr']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'common_voice', 'hf-asr-leaderboard', 'robust-speech-event', 'tr']
true
true
true
2,256
false
# wav2vec2-xls-r-300m-cv6-turkish

## Model description

This ASR model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for the Turkish language.

## Training and evaluation data

The following datasets were used for finetuning:

- [Common Voice 6.1 TR](https://huggingface.co/datasets/common_voice): all of the `validated` split except the `test` split was used for training.
- [MediaSpeech](https://www.openslr.org/108/)

## Training procedure

To support both of the datasets above, custom pre-processing and loading steps were performed, and the [wav2vec2-turkish](https://github.com/mpoyraz/wav2vec2-turkish) repo was used for that purpose.

### Training hyperparameters

The following hyperparameters were used for finetuning:

- learning_rate 2e-4
- num_train_epochs 10
- warmup_steps 500
- freeze_feature_extractor
- mask_time_prob 0.1
- mask_feature_prob 0.1
- feat_proj_dropout 0.05
- attention_dropout 0.05
- final_dropout 0.1
- activation_dropout 0.05
- per_device_train_batch_size 8
- per_device_eval_batch_size 8
- gradient_accumulation_steps 8

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.10.3

## Language Model

An n-gram language model was trained on Turkish Wikipedia articles using KenLM, and the [ngram-lm-wiki](https://github.com/mpoyraz/ngram-lm-wiki) repo was used to generate the ARPA LM and convert it into binary format.

## Evaluation Commands

Please install the [unicode_tr](https://pypi.org/project/unicode_tr/) package before running evaluation. It is used for Turkish text processing.

1. To evaluate on `common_voice` with split `test`

```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv6-turkish --dataset common_voice --config tr --split test
```

2. To evaluate on `speech-recognition-community-v2/dev_data`

```bash
python eval.py --model_id mpoyraz/wav2vec2-xls-r-300m-cv6-turkish --dataset speech-recognition-community-v2/dev_data --config tr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```

## Evaluation results

| Dataset | WER | CER |
|---|---|---|
| Common Voice 6.1 TR test split | 8.83 | 2.37 |
| Speech Recognition Community dev data | 32.81 | 11.22 |
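For a quick sanity check outside the evaluation script, a minimal inference sketch with the plain 🤗 Transformers ASR pipeline — note this uses greedy CTC decoding without the KenLM n-gram model described above, and the audio filename is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mpoyraz/wav2vec2-xls-r-300m-cv6-turkish",
)

# Expects 16 kHz mono audio; the file name is a placeholder
print(asr("ornek_kayit.wav")["text"])
```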
ebaeb3d6dfffd297d63c9b115312a61f
sd-concepts-library/cecilio-g
sd-concepts-library
null
11
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,232
false
### Cecilio G on Stable Diffusion This is the `<cecilio-g>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cecilio-g> 0](https://huggingface.co/sd-concepts-library/cecilio-g/resolve/main/concept_images/1.jpeg) ![<cecilio-g> 1](https://huggingface.co/sd-concepts-library/cecilio-g/resolve/main/concept_images/5.jpeg) ![<cecilio-g> 2](https://huggingface.co/sd-concepts-library/cecilio-g/resolve/main/concept_images/3.jpeg) ![<cecilio-g> 3](https://huggingface.co/sd-concepts-library/cecilio-g/resolve/main/concept_images/2.jpeg) ![<cecilio-g> 4](https://huggingface.co/sd-concepts-library/cecilio-g/resolve/main/concept_images/0.jpeg) ![<cecilio-g> 5](https://huggingface.co/sd-concepts-library/cecilio-g/resolve/main/concept_images/4.jpeg)
352f48a479f3bdd0ba0e5376540e0905
brianpaiva/global_ep5_bertsqv1pptbased_ctxpar_punct_sqv2
brianpaiva
bert
12
13
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,546
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # global_ep5_bertsqv1pptbased_ctxpar_punct_sqv2 This model is a fine-tuned version of [pierreguillou/bert-base-cased-squad-v1.1-portuguese](https://huggingface.co/pierreguillou/bert-base-cased-squad-v1.1-portuguese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5786 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 30 - eval_batch_size: 180 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 90 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 36 | 1.4093 | | No log | 1.99 | 72 | 1.1903 | | No log | 2.99 | 108 | 1.0553 | | No log | 3.99 | 144 | 0.7175 | | No log | 4.99 | 180 | 0.5786 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
5718baddbae5f6643dae1cec301c5642
Helsinki-NLP/opus-mt-en-rnd
Helsinki-NLP
marian
10
13
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-en-rnd * source languages: en * target languages: rnd * OPUS readme: [en-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-rnd/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.rnd | 34.5 | 0.571 |
5095a4d21dc53b00655b423d0f3dac82
syaimu/7th_furry
syaimu
null
5
0
null
20
null
false
false
false
other
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
417
false
## / 7th Furry / <img src="https://i.imgur.com/3vnM7yh.png" width="1700" height=""> # (Important Notice:1.6) It is quite peaky in use, so the prompts need to be adjusted firmly. default CFG Scale : 8 ±5 default Sampler : DPM++ SDE Karras default Steps : 30 The following prompts are used for comparison images. https://majinai.art/i/AmrKBRI <img src="https://i.imgur.com/SMmZVuQ.jpg" width="1700" height="">
b8d56ebff4c8908a8825fe23a4696d32
sergiocannata/cvt-21-finetuned-brs2
sergiocannata
cvt
9
6
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
9,148
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cvt-21-finetuned-brs2 This model is a fine-tuned version of [microsoft/cvt-21](https://huggingface.co/microsoft/cvt-21) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6947 - Accuracy: 0.6604 - F1: 0.6087 - Precision (ppv): 0.5385 - Recall (sensitivity): 0.7 - Specificity: 0.6364 - Npv: 0.7778 - Auc: 0.6682 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision (ppv) | Recall (sensitivity) | Specificity | Npv | Auc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------------:|:--------------------:|:-----------:|:------:|:------:| | 0.8177 | 1.89 | 100 | 0.7113 | 0.5283 | 0.5098 | 0.4194 | 0.65 | 0.4545 | 0.6818 | 0.5523 | | 0.736 | 3.77 | 200 | 0.7178 | 0.5283 | 0.3902 | 0.3810 | 0.4 | 0.6061 | 0.625 | 0.5030 | | 0.5978 | 5.66 | 300 | 0.6889 | 0.6038 | 0.5532 | 0.4815 | 0.65 | 0.5758 | 0.7308 | 0.6129 | | 0.5576 | 7.55 | 400 | 0.7349 | 0.4717 | 0.5484 | 0.4048 | 0.85 | 0.2424 | 0.7273 | 0.5462 | | 0.5219 | 9.43 | 500 | 0.6522 | 0.6038 | 0.4 | 0.4667 | 0.35 | 0.7576 | 0.6579 | 0.5538 | | 0.5326 | 11.32 | 600 | 0.6665 | 0.6226 | 0.5238 | 0.5 | 0.55 | 0.6667 | 0.7097 | 0.6083 | | 0.4381 | 13.21 | 700 | 0.7685 | 0.4717 | 0.5333 | 0.4 | 0.8 | 0.2727 | 0.6923 | 0.5364 | | 0.5598 | 15.09 | 800 | 0.7212 | 0.5283 | 0.1935 | 0.2727 | 0.15 | 0.7576 | 0.5952 | 0.4538 | | 0.6887 | 16.98 | 900 | 0.6985 | 0.6604 | 0.64 | 0.5333 | 0.8 | 0.5758 | 0.8261 | 0.6879 | | 0.7594 | 18.87 | 1000 | 0.7040 | 0.5472 | 0.4286 | 0.4091 | 0.45 | 0.6061 | 0.6452 | 0.5280 | | 0.2177 | 20.75 | 1100 | 0.8056 | 0.4528 | 0.5397 | 0.3953 | 0.85 | 0.2121 | 0.7 | 0.5311 | | 0.4893 | 22.64 | 1200 | 0.8821 | 0.3396 | 0.3860 | 0.2973 | 0.55 | 0.2121 | 0.4375 | 0.3811 | | 0.5994 | 24.53 | 1300 | 0.8059 | 0.5660 | 0.5660 | 0.4545 | 0.75 | 0.4545 | 0.75 | 0.6023 | | 0.5179 | 26.42 | 1400 | 0.6750 | 0.6038 | 0.4615 | 0.4737 | 0.45 | 0.6970 | 0.6765 | 0.5735 | | 0.198 | 28.3 | 1500 | 0.7448 | 0.3962 | 0.3333 | 0.2857 | 0.4 | 0.3939 | 0.52 | 0.3970 | | 0.6536 | 30.19 | 1600 | 0.7555 | 0.5094 | 0.4583 | 0.3929 | 0.55 | 0.4848 | 0.64 | 0.5174 | | 0.7558 | 32.08 | 1700 | 0.6664 | 0.5849 | 0.4762 | 0.4545 | 0.5 | 0.6364 | 0.6774 | 0.5682 | | 0.4915 | 33.96 | 1800 | 0.9213 | 0.3962 | 0.5152 | 0.3696 | 0.85 | 0.1212 | 0.5714 | 0.4856 | | 0.3661 | 35.85 | 1900 | 0.9202 | 0.4528 | 0.4912 | 0.3784 | 0.7 | 0.3030 | 0.625 | 0.5015 | | 0.4838 | 37.74 | 2000 | 0.9297 | 0.4528 | 0.5085 | 0.3846 | 0.75 | 0.2727 | 0.6429 | 0.5114 | | 0.8461 | 39.62 | 2100 | 0.9464 | 0.4717 | 0.5758 | 0.4130 | 0.95 | 0.1818 | 0.8571 | 0.5659 | | 0.6937 | 41.51 | 2200 | 0.7129 | 0.5094 | 0.48 | 0.4 | 0.6 | 0.4545 | 0.6522 | 0.5273 | | 0.6302 | 43.4 | 2300 | 0.6866 | 0.5849 | 0.6071 | 0.4722 | 0.85 | 0.4242 | 0.8235 | 0.6371 | | 0.0793 | 
45.28 | 2400 | 0.7791 | 0.5094 | 0.5517 | 0.4211 | 0.8 | 0.3333 | 0.7333 | 0.5667 | | 0.464 | 47.17 | 2500 | 0.8116 | 0.4340 | 0.4444 | 0.3529 | 0.6 | 0.3333 | 0.5789 | 0.4667 | | 0.6131 | 49.06 | 2600 | 0.5970 | 0.6226 | 0.5455 | 0.5 | 0.6 | 0.6364 | 0.7241 | 0.6182 | | 0.6937 | 50.94 | 2700 | 0.8201 | 0.4340 | 0.4 | 0.3333 | 0.5 | 0.3939 | 0.5652 | 0.4470 | | 0.6552 | 52.83 | 2800 | 0.7168 | 0.5660 | 0.5306 | 0.4483 | 0.65 | 0.5152 | 0.7083 | 0.5826 | | 0.7749 | 54.72 | 2900 | 0.6875 | 0.5849 | 0.5217 | 0.4615 | 0.6 | 0.5758 | 0.7037 | 0.5879 | | 0.9482 | 56.6 | 3000 | 0.6392 | 0.6226 | 0.6296 | 0.5 | 0.85 | 0.4848 | 0.8421 | 0.6674 | | 0.2467 | 58.49 | 3100 | 0.6281 | 0.6038 | 0.5333 | 0.48 | 0.6 | 0.6061 | 0.7143 | 0.6030 | | 0.2903 | 60.38 | 3200 | 0.7383 | 0.5472 | 0.5556 | 0.4412 | 0.75 | 0.4242 | 0.7368 | 0.5871 | | 0.5859 | 62.26 | 3300 | 0.7191 | 0.6226 | 0.5652 | 0.5 | 0.65 | 0.6061 | 0.7407 | 0.6280 | | 0.3815 | 64.15 | 3400 | 0.7469 | 0.5283 | 0.4444 | 0.4 | 0.5 | 0.5455 | 0.6429 | 0.5227 | | 0.531 | 66.04 | 3500 | 0.7566 | 0.6226 | 0.5652 | 0.5 | 0.65 | 0.6061 | 0.7407 | 0.6280 | | 0.3892 | 67.92 | 3600 | 0.8168 | 0.5660 | 0.5490 | 0.4516 | 0.7 | 0.4848 | 0.7273 | 0.5924 | | 0.6487 | 69.81 | 3700 | 0.9077 | 0.4340 | 0.4643 | 0.3611 | 0.65 | 0.3030 | 0.5882 | 0.4765 | | 0.5525 | 71.7 | 3800 | 0.6961 | 0.6038 | 0.5116 | 0.4783 | 0.55 | 0.6364 | 0.7 | 0.5932 | | 0.3137 | 73.58 | 3900 | 1.0817 | 0.3774 | 0.4590 | 0.3415 | 0.7 | 0.1818 | 0.5 | 0.4409 | | 0.3526 | 75.47 | 4000 | 0.7684 | 0.5472 | 0.5862 | 0.4474 | 0.85 | 0.3636 | 0.8 | 0.6068 | | 0.5938 | 77.36 | 4100 | 0.8786 | 0.4340 | 0.4828 | 0.3684 | 0.7 | 0.2727 | 0.6 | 0.4864 | | 0.2431 | 79.25 | 4200 | 0.8925 | 0.4151 | 0.4746 | 0.3590 | 0.7 | 0.2424 | 0.5714 | 0.4712 | | 0.1021 | 81.13 | 4300 | 1.0740 | 0.4528 | 0.4727 | 0.3714 | 0.65 | 0.3333 | 0.6111 | 0.4917 | | 0.3429 | 83.02 | 4400 | 0.7723 | 0.4906 | 0.5091 | 0.4 | 0.7 | 0.3636 | 0.6667 | 0.5318 | | 0.3836 | 84.91 | 4500 | 0.7247 | 0.5472 | 0.5556 | 0.4412 | 0.75 | 0.4242 | 0.7368 | 0.5871 | | 0.4099 | 86.79 | 4600 | 0.8508 | 0.4340 | 0.4828 | 0.3684 | 0.7 | 0.2727 | 0.6 | 0.4864 | | 0.8264 | 88.68 | 4700 | 0.7682 | 0.5849 | 0.5769 | 0.4688 | 0.75 | 0.4848 | 0.7619 | 0.6174 | | 0.1928 | 90.57 | 4800 | 0.8738 | 0.4906 | 0.5574 | 0.4146 | 0.85 | 0.2727 | 0.75 | 0.5614 | | 0.3422 | 92.45 | 4900 | 0.8810 | 0.5660 | 0.5965 | 0.4595 | 0.85 | 0.3939 | 0.8125 | 0.6220 | | 0.5524 | 94.34 | 5000 | 1.0801 | 0.3774 | 0.4923 | 0.3556 | 0.8 | 0.1212 | 0.5 | 0.4606 | | 0.464 | 96.23 | 5100 | 0.9417 | 0.5283 | 0.5902 | 0.4390 | 0.9 | 0.3030 | 0.8333 | 0.6015 | | 0.7182 | 98.11 | 5200 | 1.0335 | 0.4151 | 0.4746 | 0.3590 | 0.7 | 0.2424 | 0.5714 | 0.4712 | | 0.604 | 100.0 | 5300 | 0.6947 | 0.6604 | 0.6087 | 0.5385 | 0.7 | 0.6364 | 0.7778 | 0.6682 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
fa30a5424e14ea285e88d6bb96ea7fb6
mprzibilla/super_large_finetune_M01
mprzibilla
wav2vec2
10
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,803
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # super_large_finetune_M01 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9906 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 20 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 35440 - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:------:|:---------------:|:---:| | 10.0626 | 20.0 | 70880 | 3.0307 | 1.0 | | 2.5319 | 40.0 | 141760 | 3.0316 | 1.0 | | 2.4978 | 60.0 | 212640 | 3.0123 | 1.0 | | 2.4849 | 80.0 | 283520 | 2.9923 | 1.0 | | 2.4776 | 100.0 | 354400 | 3.0092 | 1.0 | | 2.4733 | 120.0 | 425280 | 2.9964 | 1.0 | | 2.4702 | 140.0 | 496160 | 2.9968 | 1.0 | | 2.4686 | 160.0 | 567040 | 2.9937 | 1.0 | | 2.4669 | 180.0 | 637920 | 2.9908 | 1.0 | | 2.4661 | 200.0 | 708800 | 2.9906 | 1.0 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
802197e2f63b3fdb7676502ad3efff33
anas-awadalla/bart-large-few-shot-k-256-finetuned-squad-infilling-seed-2
anas-awadalla
bart
16
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
968
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-few-shot-k-256-finetuned-squad-infilling-seed-2 This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 35.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
c75780ec26bf4cc0fd7f0d207ae86b99
akira0402/xlm-roberta-base-finetuned-panx-de
akira0402
xlm-roberta
9
5
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1380 - F1: 0.8630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2625 | 1.0 | 525 | 0.1667 | 0.8208 | | 0.1281 | 2.0 | 1050 | 0.1361 | 0.8510 | | 0.0809 | 3.0 | 1575 | 0.1380 | 0.8630 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
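A short usage sketch for this NER checkpoint, assuming the PAN-X/XTREME entity label mapping is stored in the model config; the German sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="akira0402/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Angela Merkel besuchte das Werk von Siemens in München."))
```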
183d65f4b8a77ae8fe14864c5d21945b
AlekseyKorshuk/6.7b-ri-reproduce-4-gpu
AlekseyKorshuk
opt
13
4
transformers
0
text-generation
true
false
false
other
null
['ChaiML/dalio_training_v1']
null
10
10
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,055
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-ri-reproduce-4-gpu This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the ChaiML/dalio_training_v1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9e-07 - train_batch_size: 1 - eval_batch_size: 8 - seed: 100 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
ff349ed939c52f6f74ade54daa0cc16d
Lvxue/distilled-mt5-small-1-0.5
Lvxue
mt5
14
1
transformers
0
text2text-generation
true
false
false
apache-2.0
['en', 'ro']
['wmt16']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,036
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-1-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 3.8410 - Bleu: 5.3917 - Gen Len: 40.6103 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
6c320831d9c9de46ea35825eb67f78db
facebook/s2t-small-mustc-en-it-st
facebook
speech_to_text
11
282
transformers
1
automatic-speech-recognition
true
true
false
mit
['en', 'it']
['mustc']
null
1
1
0
0
0
0
0
['audio', 'speech-translation', 'automatic-speech-recognition']
false
true
true
4,212
false
# S2T-SMALL-MUSTC-EN-IT-ST

`s2t-small-mustc-en-it-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text).

## Model description

S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the transcripts/translations autoregressively.

## Intended uses & limitations

This model can be used for end-to-end English speech to Italian text translation. See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.

### How to use

As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model.

*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the filter bank features. Make sure to install the `torchaudio` package before running this example.*

You could either install those as extra speech dependencies with `pip install transformers"[speech, sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`.

```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-mustc-en-it-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-mustc-en-it-st")

def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy",
    "clean",
    split="validation"
)
ds = ds.map(map_to_array)

inputs = processor(
    ds["speech"][0],
    sampling_rate=16_000,
    return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])

translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```

## Training data

The s2t-small-mustc-en-it-st is trained on the English-Italian subset of [MuST-C](https://ict.fbk.eu/must-c/). MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems for speech translation from English into several languages. For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations.

## Training procedure

### Preprocessing

The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization) is applied to each example. The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 8,000.

### Training

The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779). The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate model training and for better performance the encoder is pre-trained for English ASR.

## Evaluation results

MuST-C test results for en-it (BLEU score): 22.7

### BibTeX entry and citation info

```bibtex
@inproceedings{wang2020fairseqs2t,
  title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
  author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
  booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
  year = {2020},
}
```
edda4585582ce468dc094593bc5fc544
brennan-richards/gpt2-finetuned-academic-topics
brennan-richards
gpt2
5
7
transformers
0
text-generation
true
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,773
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-finetuned-academic-topics

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on a dataset of sequences of science, technology, engineering and mathematics academic topics/tags which a user has used on their CiteULike or Google Scholar profiles. Please contact brichards88@uri.edu for questions or inquiries.

It achieves the following results on the evaluation set:
- Train Loss: 3.3216
- Validation Loss: 3.2215
- Epoch: 4

## Model description

Given a sequence of topics, e.g. "machine learning, deep learning, chemistry, evolution", the model will continue the sequence, effectively recommending/generating new topics that might be of interest.

## Intended uses & limitations

The model is not guaranteed to generate a real topic or even a real word/words as output.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.7873     | 4.2950          | 0     |
| 4.1032     | 3.8203          | 1     |
| 3.7363     | 3.5614          | 2     |
| 3.4999     | 3.3740          | 3     |
| 3.3216     | 3.2215          | 4     |

### Framework versions

- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
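A minimal generation sketch matching the usage described above; the seed topics and sampling settings are arbitrary illustrations:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="brennan-richards/gpt2-finetuned-academic-topics",
)
result = generator(
    "machine learning, deep learning, chemistry,",  # seed topics to be continued
    max_length=32,
    do_sample=True,
    top_k=50,
)
print(result[0]["generated_text"])
```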
684cc9e665adf9d85638a7c04379cef2
plantdoctor/swin-tiny-patch4-window7-224-plant-doctor
plantdoctor
swin
24
12
transformers
0
image-classification
true
false
false
apache-2.0
null
['image_folder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,490
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-plant-doctor This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.0043 - Accuracy: 0.9983 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.0643 | 1.0 | 3954 | 0.0218 | 0.9933 | | 0.0536 | 2.0 | 7908 | 0.0103 | 0.9966 | | 0.018 | 3.0 | 11862 | 0.0043 | 0.9983 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu115 - Datasets 2.1.0 - Tokenizers 0.12.1
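A brief inference sketch for this image classifier; the image path is a placeholder, and the predicted label names come from the `image_folder` classes used in training, which are not documented in this card:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="plantdoctor/swin-tiny-patch4-window7-224-plant-doctor",
)
# Path is a placeholder for a plant-leaf photo
print(classifier("tomato_leaf.jpg"))
```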
860734457ca4ce4cac1346a070f5a15e
brad1141/bertBasev2
brad1141
bert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,934
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertBasev2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0328 - Precision: 0.9539 - Recall: 0.9707 - F1: 0.9622 - Accuracy: 0.9911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.2004 | 1.0 | 1012 | 0.9504 | 0.2620 | 0.3519 | 0.3004 | 0.6856 | | 1.0265 | 2.0 | 2024 | 0.6205 | 0.4356 | 0.5161 | 0.4725 | 0.7956 | | 0.6895 | 3.0 | 3036 | 0.3269 | 0.6694 | 0.7302 | 0.6985 | 0.9044 | | 0.44 | 4.0 | 4048 | 0.1325 | 0.8356 | 0.9091 | 0.8708 | 0.9667 | | 0.2585 | 5.0 | 5060 | 0.0717 | 0.9259 | 0.9531 | 0.9393 | 0.9844 | | 0.1722 | 6.0 | 6072 | 0.0382 | 0.9480 | 0.9619 | 0.9549 | 0.99 | | 0.0919 | 7.0 | 7084 | 0.0328 | 0.9539 | 0.9707 | 0.9622 | 0.9911 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
687ad5c4b200fcfa524e56fd08ffda0e
postbot/gpt2-medium-emailgen
postbot
gpt2
12
43
transformers
0
text-generation
true
false
false
apache-2.0
null
['aeslc', 'postbot/multi-emails-100k']
null
1
0
1
0
0
0
0
['text generation', 'emailgen', 'email generation', 'email']
false
true
true
2,138
false
# gpt2-medium-emailgen

[![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/pszemraj/70058788c6d4b430398c12ee8ba10602/minimal-demo-for-postbot-gpt2-medium-emailgen.ipynb)

Why write the entire email when you can generate (most of) it?

```python
from transformers import pipeline

model_tag = "postbot/gpt2-medium-emailgen"
generator = pipeline(
    'text-generation',
    model=model_tag,
)

prompt = """
Hello,

Following up on the bubblegum shipment."""

result = generator(
    prompt,
    max_length=64,
    do_sample=False,
    early_stopping=True,
)  # generate
print(result[0]['generated_text'])
```

## about

This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the postbot/multi-emails-100k dataset. It achieves the following results on the evaluation set:
- Loss: 1.5840

## Model description

More information needed

## Intended uses & limitations

- This is intended as a tool to save time writing predictable emails and not to write emails without a human-in-the-loop. Validate that your email is factually correct before sending it to others.

## Training and evaluation data

- The dataset is essentially a hand-curated/augmented expansion to the classic `aeslc` dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8701        | 1.0   | 789  | 1.8378          |
| 1.5065        | 2.0   | 1578 | 1.6176          |
| 1.1873        | 3.0   | 2367 | 1.5840          |

### Framework versions

- Transformers 4.22.2
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
10afface88c7f0c232fb57d0826d3add
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_pretrain_qqp
gokuls
mobilebert
17
3
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,856
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_pretrain_qqp This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-complete](https://huggingface.co/gokuls/mobilebert_add_pre-training-complete) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: nan - Accuracy: 0.6318 - F1: 0.0 - Combined Score: 0.3159 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:---:|:--------------:| | 0.0 | 1.0 | 2843 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 2.0 | 5686 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 3.0 | 8529 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 4.0 | 11372 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 5.0 | 14215 | nan | 0.6318 | 0.0 | 0.3159 | | 0.0 | 6.0 | 17058 | nan | 0.6318 | 0.0 | 0.3159 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
77dc479086bd2292b22d7819844ecb59
Helsinki-NLP/opus-mt-uk-no
Helsinki-NLP
marian
11
7
transformers
0
translation
true
true
false
apache-2.0
['uk', 'no']
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
1,996
false
### ukr-nor * source group: Ukrainian * target group: Norwegian * OPUS readme: [ukr-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nor/README.md) * model: transformer-align * source language(s): ukr * target language(s): nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ukr.nor | 21.3 | 0.397 | ### System Info: - hf_name: ukr-nor - source_languages: ukr - target_languages: nor - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nor/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['uk', 'no'] - src_constituents: {'ukr'} - tgt_constituents: {'nob', 'nno'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.test.txt - src_alpha3: ukr - tgt_alpha3: nor - short_pair: uk-no - chrF2_score: 0.397 - bleu: 21.3 - brevity_penalty: 0.966 - ref_len: 4378.0 - src_name: Ukrainian - tgt_name: Norwegian - train_date: 2020-06-17 - src_alpha2: uk - tgt_alpha2: no - prefer_old: False - long_pair: ukr-nor - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
88e92fb5479a7b7ed21f3a8dd9d34716
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s295
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fr']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fr']
false
true
true
476
false
# exp_w2v2r_fr_xls-r_gender_male-2_female-8_s295 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
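A minimal transcription sketch with the HuggingSound tool mentioned above; the audio paths are placeholders and should point to 16 kHz (or resampleable) recordings:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-2_female-8_s295")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```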
b7c9300cf4375ecc51ed7ba7c900eeec
jkhan447/HateXplain-2nd-anno-labeled
jkhan447
bert
15
7
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,019
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HateXplain-2nd-anno-labeled This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8699 - Accuracy: 0.5778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
7c78b043162fb95ab6300d602edf7947
tnkmr/sfi_convtasnet_fd_mgf_musdb18hq
tnkmr
null
4,430
0
null
0
audio-to-audio
false
false
false
mit
['ja']
['MUSDB18-HQ']
null
4
0
4
0
0
0
0
['music', 'audio', 'audio-to-audio', 'SFI']
false
true
true
1,254
false
# Sampling-frequency-independent (SFI) Conv-TasNet trained with the MUSDB18-HQ dataset for music source separation.

This model was proposed in [our IEEE/ACM Trans. ASLP paper](https://doi.org/10.1109/TASLP.2022.3203907) and works well with untrained sampling frequencies by using sampling-frequency-independent convolutional layers with the frequency domain filter design. The latent analog filter is a modulated Gaussian filter.
It was trained by Tomohiko Nakamura using [the codebase](https://github.com/TomohikoNakamura/sfi_convtasnet).
This model was trained with 32 kHz-sampled data but works well with untrained sampling frequencies (e.g., 8, 16 kHz).

# License
MIT

# Citation
Please cite the following paper.
```
@article{KSaito2022IEEEACMTASLP,
  author={Saito, Koichi and Nakamura, Tomohiko and Yatabe, Kohei and Saruwatari, Hiroshi},
  journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title = {Sampling-frequency-independent convolutional layer and its application to audio source separation},
  year=2022,
  month=sep,
  volume=30,
  pages={2928--2943},
  doi={10.1109/TASLP.2022.3203907},
}
```

# Contents
- Four trained models (seed=40,42,44,47)
- Evaluation results (json files obtained with the museval library)
43e2dbc0e294151fdd0aa32a43f3c1b6
yhavinga/t5-v1.1-large-dutch-cnn-test
yhavinga
t5
13
138
transformers
1
summarization
true
false
true
apache-2.0
['nl']
['yhavinga/mc4_nl_cleaned', 'ml6team/cnn_dailymail_nl']
null
3
0
2
1
0
0
0
['summarization', 't5', 'seq2seq']
true
true
true
5,857
false
# T5 v1.1 Large finetuned for CNN news summarization in Dutch 🇳🇱

This model is [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) finetuned on [CNN Dailymail NL](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl).

For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!

Rouge scores for this model are listed below.

## Tokenizer

* SentencePiece tokenizer trained from scratch for Dutch on mC4 nl cleaned with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).

## Dataset

All models listed below are trained on the `full` configuration (39B tokens) of [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except

* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with fewer than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with fewer than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.

## Models

TL;DR: [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) is the best model.

* `yhavinga/t5-base-dutch` is a re-training of the Dutch T5 base v1.0 model trained during the summer 2021 Flax/Jax community week. Accuracy was improved from 0.64 to 0.70.
* The two T5 v1.1 base models are an uncased and cased version of `t5-v1.1-base`, again pre-trained from scratch on Dutch, with a tokenizer also trained from scratch. The t5 v1.1 models are slightly different from the t5 models, and the base models are trained with a dropout of 0.0. For fine-tuning it is intended to set this back to 0.1.
* The large cased model is a pre-trained Dutch version of `t5-v1.1-large`. Training of t5-v1.1-large proved difficult. Without dropout regularization, the training would diverge at a certain point. With dropout, training went better, albeit much slower than training the t5-model. At some point convergence was too slow to warrant further training. The latest checkpoint, training scripts and metrics are available for reference. For actual fine-tuning, the cased base model is probably the better choice.
| | model | train seq len | acc | loss | batch size | epochs | steps | dropout | optim | lr | duration | |---------------------------------------------------------------------------------------------------|---------|---------------|----------|----------|------------|--------|---------|---------|-----------|------|----------| | [yhavinga/t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | T5 | 512 | 0,70 | 1,38 | 128 | 1 | 528481 | 0.1 | adafactor | 5e-3 | 2d 9h | | [yhavinga/t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | t5-v1.1 | 1024 | 0,73 | 1,20 | 64 | 2 | 1014525 | 0.0 | adafactor | 5e-3 | 5d 5h | | [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | t5-v1.1 | 1024 | **0,78** | **0,96** | 64 | 2 | 1210000 | 0.0 | adafactor | 5e-3 | 6d 6h | | [yhavinga/t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | t5-v1.1 | 512 | 0,76 | 1,07 | 64 | 1 | 1120000 | 0.1 | adafactor | 5e-3 | 86 13h | The cased t5-v1.1 Dutch models were fine-tuned on summarizing the CNN Daily Mail dataset. | | model | input len | target len | Rouge1 | Rouge2 | RougeL | RougeLsum | Test Gen Len | epochs | batch size | steps | duration | |-------------------------------------------------------------------------------------------------------|---------|-----------|------------|--------|--------|--------|-----------|--------------|--------|------------|-------|----------| | [yhavinga/t5-v1.1-base-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cnn-test) | t5-v1.1 | 1024 | 96 | 34,8 | 13,6 | 25,2 | 32,1 | 79 | 6 | 64 | 26916 | 2h 40m | | [yhavinga/t5-v1.1-large-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cnn-test) | t5-v1.1 | 1024 | 96 | 34,4 | 13,6 | 25,3 | 31,7 | 81 | 5 | 16 | 89720 | 11h | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was also instrumental in many, if not all parts of the training. The following repositories where helpful in setting up the TPU-VM, and training the models: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [HUggingFace Flax MLM examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) * [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
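A minimal summarization sketch with the `transformers` pipeline; the Dutch article text is a placeholder, and `max_length=96` mirrors the fine-tuning target length reported in the table above.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="yhavinga/t5-v1.1-large-dutch-cnn-test")

# Placeholder Dutch news article; replace with real input text
artikel = "Het kabinet presenteerde vandaag nieuwe plannen voor de woningmarkt. ..."
print(summarizer(artikel, max_length=96, truncation=True)[0]["summary_text"])
```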
ac08e82106850492ddc1a18bc5b0244f
JosephusCheung/ACertainThing
JosephusCheung
null
23
3,760
diffusers
130
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
true
true
5,563
false
# ACertainThing

**Try full functions with Google Colab free T4** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1gwJViXR0UxoXx01qiU6uTSEKGjTagOgp?usp=sharing)

Anything3.0 is an overfitted model that takes liberties it shouldn't when generating human images and certain details. However, the community has given it a high rating, and I believe that is because many lazy people who don't know how to write a prompt can use this overfitted model to generate high-quality images even if their prompts are poorly written.

Here is an ACertain version of Anything3.0, made with Dreambooth (idea of [LoRA](https://arxiv.org/abs/2106.09685) integrated), initialized with [ACertainModel](https://huggingface.co/JosephusCheung/ACertainModel).

Although this model may produce better results for image generation, it is built on two major problems. Firstly, it does not always stay true to your prompts; it adds irrelevant details, and sometimes these details are highly homogenized. Secondly, it is an unstable, overfitted model, similar to Anything3.0, and is not suitable for any form of further training. As far as I know, Anything3.0 is obtained by merging several models in just the right way, but it is itself an overfitted model with defects in both its saturation and configuration. However, as I mentioned earlier, it can make even poorly written prompts produce good output images, which leads many lazy people who are incapable of writing good prompts to quickly surpass those who study the writing of prompts carefully. Despite these problems, I still want to release an extended version of the model that caters to the preferences of many people in the community. I hope you like it.

**In my personal view, I oppose all forms of model merging as it has no scientific principle and is nothing but a waste of time. It is a desire to get results without putting in the effort. That is why I do not like Anything3.0, or this model that is being released. But I respect the choices and preferences of the community, and I hope that you can also respect and understand my thoughts.**

If you want your prompts to be accurately output and want to learn the correct skills for using prompts, it is recommended that you use the more balanced model [ACertainModel](https://huggingface.co/JosephusCheung/ACertainModel).

e.g. **_masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden_**

## About online preview with Hosted inference API, also generation with this model

Parameters are not allowed to be modified, as it seems that it is generated with *Clip skip: 1*; for better performance, it is strongly recommended to use *Clip skip: 2* instead.

Here is an example of inference settings, if it is applicable with you on your own server: *Steps: 28, Sampler: Euler a, CFG scale: 11, Clip skip: 2*.

## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).

You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python from diffusers import StableDiffusionPipeline import torch model_id = "JosephusCheung/ACertainThing" branch_name= "main" pipe = StableDiffusionPipeline.from_pretrained(model_id, revision=branch_name, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "pikachu" image = pipe(prompt).images[0] image.save("./pikachu.png") ``` ## Examples Below are some examples of images generated using this model, with better performance on framing and hand gestures, as well as moving objects, comparing to other analogues: **Anime Girl:** ![Anime Girl](https://huggingface.co/JosephusCheung/ACertainThing/resolve/main/samples/acth-sample-1girl.png) ``` 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden Steps: 28, Sampler: Euler a, CFG scale: 11, Seed: 114514, Clip skip: 2 ``` **Anime Boy:** ![Anime Boy](https://huggingface.co/JosephusCheung/ACertainThing/resolve/main/samples/acth-sample-1boy.png) ``` 1boy, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden Steps: 28, Sampler: Euler a, CFG scale: 11, Seed: 114514, Clip skip: 2 ``` ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) ## Is it a NovelAI based model? What is the relationship with SD1.2 and SD1.4? See [ASimilarityCalculatior](https://huggingface.co/JosephusCheung/ASimilarityCalculatior)
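A minimal 🧨 Diffusers sketch approximating the inference settings recommended earlier in this card (*Steps: 28, Sampler: Euler a, CFG scale: 11*); note that *Clip skip: 2* has no direct switch in the standard pipeline, so this is only an approximation, and the prompt is a placeholder.

```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch

model_id = "JosephusCheung/ACertainThing"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# "Euler a" sampler, as recommended in the card
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = "masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn, falling leaves, garden"
image = pipe(prompt, num_inference_steps=28, guidance_scale=11).images[0]
image.save("./sample.png")
```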
5ee62dfd9d525b8e49fb23e97b637d29
gokuls/distilbert_add_GLUE_Experiment_logit_kd_rte
gokuls
distilbert
17
3
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,366
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_logit_kd_rte This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.4229 - Accuracy: 0.4729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4684 | 1.0 | 10 | 0.4310 | 0.4729 | | 0.4214 | 2.0 | 20 | 0.4342 | 0.4729 | | 0.4216 | 3.0 | 30 | 0.4264 | 0.4729 | | 0.4197 | 4.0 | 40 | 0.4311 | 0.4729 | | 0.425 | 5.0 | 50 | 0.4297 | 0.4729 | | 0.4192 | 6.0 | 60 | 0.4260 | 0.4729 | | 0.4182 | 7.0 | 70 | 0.4243 | 0.4729 | | 0.4184 | 8.0 | 80 | 0.4246 | 0.4729 | | 0.4201 | 9.0 | 90 | 0.4240 | 0.4729 | | 0.417 | 10.0 | 100 | 0.4259 | 0.4729 | | 0.419 | 11.0 | 110 | 0.4269 | 0.4729 | | 0.4165 | 12.0 | 120 | 0.4249 | 0.4729 | | 0.4116 | 13.0 | 130 | 0.4229 | 0.4729 | | 0.3924 | 14.0 | 140 | 0.4916 | 0.4729 | | 0.3783 | 15.0 | 150 | 0.4539 | 0.4874 | | 0.3384 | 16.0 | 160 | 0.4581 | 0.4982 | | 0.3202 | 17.0 | 170 | 0.5284 | 0.4765 | | 0.3054 | 18.0 | 180 | 0.4884 | 0.5162 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
3983539c1e0d3b9239fad65842c5175a
Seongkyu/bert-base-cased-finetuned-squad
Seongkyu
bert
12
20
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,261
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.0458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.0179 | 1.0 | 6194 | 0.9548 | | 0.7277 | 2.0 | 12388 | 0.9717 | | 0.507 | 3.0 | 18582 | 1.0458 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
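A minimal extractive question-answering sketch with the `transformers` pipeline; the question and context below are placeholders.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Seongkyu/bert-base-cased-finetuned-squad")

result = qa(
    question="What dataset was used for fine-tuning?",
    context="The model was fine-tuned for three epochs on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```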
99e6dc2a7420e3c5c2c1bf4d38d56315
steja/whisper-small-somali
steja
whisper
17
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['google/fleurs']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,677
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper_small_Somali This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs so_so dataset. It achieves the following results on the evaluation set: - Loss: 2.0764 - Wer: 66.5950 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.0205 | 30.74 | 400 | 1.8418 | 67.2524 | | 0.0012 | 61.52 | 800 | 2.0764 | 66.5950 | | 0.0006 | 92.3 | 1200 | 2.1537 | 67.6452 | | 0.0004 | 123.07 | 1600 | 2.1930 | 67.1367 | | 0.0004 | 153.81 | 2000 | 2.2065 | 66.9299 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
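A minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder, and `chunk_length_s=30` is only needed for recordings longer than 30 seconds.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="steja/whisper-small-somali",
    chunk_length_s=30,
)

# Placeholder path to a 16kHz Somali recording
print(asr("sample_somali.wav")["text"])
```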
495ca3925e878afa61c118b501343ce9
stevemobs/deberta-base-combined-squad1-aqa-newsqa-50
stevemobs
deberta
17
5
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,226
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-combined-squad1-aqa-newsqa-50 This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.9401 | 1.0 | 18532 | 0.8266 | | 0.6811 | 2.0 | 37064 | 0.7756 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
4b4f13a77feac309a62871339f9be0d1
martinbiber/marian-finetuned-kde4-en-to-fr
martinbiber
marian
9
1
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,381
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # martinbiber/marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0539 - Validation Loss: 0.8992 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 5911, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0539 | 0.8992 | 0 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
07ff73ab34898cc354d2c3fcc4f9571c
pranay-j/whisper-large-v2-hindi
pranay-j
whisper
17
1
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['hi']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,376
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large V2 finetuned Hindi This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2043 - Wer: 10.7225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0153 | 3.18 | 1000 | 0.2043 | 10.7225 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
bab8dd3122d50e84a841be02356263eb
lmqg/mt5-small-esquad-qg-ae
lmqg
mt5
40
126
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['es']
['lmqg/qg_esquad']
null
0
0
0
0
0
0
0
['question generation', 'answer extraction']
true
true
true
7,139
false
# Model Card of `lmqg/mt5-small-esquad-qg-ae` This model is fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for question generation and answer extraction jointly on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small) - **Language:** es - **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="es", model="lmqg/mt5-small-esquad-qg-ae") # model prediction question_answer_pairs = model.generate_qa("a noviembre , que es también la estación lluviosa.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-small-esquad-qg-ae") # answer extraction answer = pipe("generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.") # question generation question = pipe("extract answers: <hl> En la diáspora somalí, múltiples eventos islámicos de recaudación de fondos se llevan a cabo cada año en ciudades como Birmingham, Londres, Toronto y Minneapolis, donde los académicos y profesionales somalíes dan conferencias y responden preguntas de la audiencia. 
<hl> El propósito de estos eventos es recaudar dinero para nuevas escuelas o universidades en Somalia, para ayudar a los somalíes que han sufrido como consecuencia de inundaciones y / o sequías, o para reunir fondos para la creación de nuevas mezquitas como.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-esquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 83.39 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_1 | 24.5 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_2 | 16.48 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_3 | 11.83 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_4 | 8.79 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | METEOR | 21.66 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | MoverScore | 58.34 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | ROUGE_L | 23.13 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-esquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_esquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 79.06 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedF1Score (MoverScore) | 54.49 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedPrecision (BERTScore) | 76.46 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedPrecision (MoverScore) | 52.96 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedRecall (BERTScore) | 81.94 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | QAAlignedRecall (MoverScore) | 56.21 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-esquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_esquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 57.63 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | AnswerF1Score | 75.31 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | BERTScore | 89.77 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_1 | 35.18 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_2 | 30.48 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_3 | 26.92 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_4 | 23.89 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | METEOR | 43.11 | default | 
[lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | MoverScore | 80.64 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | ROUGE_L | 48.58 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_esquad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: google/mt5-small - max_length: 512 - max_length_output: 32 - epoch: 5 - batch: 16 - lr: 0.001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-esquad-qg-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
b5d0ca1c100215d5125c5742498f34f7
DOOGLAK/wikigold_trained_no_DA
DOOGLAK
bert
13
23
transformers
0
token-classification
true
false
false
apache-2.0
null
['wikigold_splits']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,509
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # temp This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the wikigold_splits dataset. It achieves the following results on the evaluation set: - Loss: 0.1322 - Precision: 0.8517 - Recall: 0.875 - F1: 0.8632 - Accuracy: 0.9607 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 167 | 0.1490 | 0.7583 | 0.7760 | 0.7671 | 0.9472 | | No log | 2.0 | 334 | 0.1337 | 0.8519 | 0.8464 | 0.8491 | 0.9572 | | 0.1569 | 3.0 | 501 | 0.1322 | 0.8517 | 0.875 | 0.8632 | 0.9607 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
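A minimal token-classification sketch with the `transformers` pipeline; the example sentence is a placeholder, and `aggregation_strategy="simple"` merges word pieces into entity spans.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/wikigold_trained_no_DA",
    aggregation_strategy="simple",
)
print(ner("Barack Obama visited Berlin in 2008."))
```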
ce24cc56c92de95645a0cfe11f09fa0b
mike157/flan-t5-base-flant5-apple-support
mike157
t5
11
11
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['stackexchange_titlebody_best_voted_answer_jsonl']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,849
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-flant5-apple-support This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the stackexchange_titlebody_best_voted_answer_jsonl dataset. It achieves the following results on the evaluation set: - Loss: 3.0475 - Rouge1: 12.4139 - Rouge2: 2.0562 - Rougel: 9.4938 - Rougelsum: 11.0524 - Gen Len: 18.9589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 232 | 3.0886 | 12.844 | 2.1734 | 9.8971 | 11.3641 | 18.8876 | | No log | 2.0 | 464 | 3.0639 | 12.2909 | 2.1209 | 9.4999 | 10.9458 | 18.9416 | | 3.3185 | 3.0 | 696 | 3.0538 | 12.4154 | 2.0984 | 9.4989 | 11.0684 | 18.9492 | | 3.3185 | 4.0 | 928 | 3.0489 | 12.7043 | 2.1969 | 9.7356 | 11.3629 | 18.9481 | | 3.187 | 5.0 | 1160 | 3.0475 | 12.4139 | 2.0562 | 9.4938 | 11.0524 | 18.9589 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu117 - Datasets 2.8.0 - Tokenizers 0.13.2
dffce1434025d7cfe0fa92b3718fbf8e
DrishtiSharma/whisper-small-hindi-2k-steps
DrishtiSharma
whisper
15
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['hi']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,312
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hindi - Drishti Sharma This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2751 - Wer: 17.1985 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0598 | 2.44 | 2000 | 0.2751 | 17.1985 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
d73ba0f12adc86e1b44d3a05d8b0544b
limsc/reqroberta-tapt-epoch50
limsc
roberta
9
2
transformers
0
fill-mask
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,325
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # reqroberta-tapt-epoch50 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 37100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
0c35d5a5e942b79dfb7a8d73b3c55c0d
candra/wav2vec2-large-xls-r-300m-indonesia-colab
candra
wav2vec2
13
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,853
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-indonesia-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3278 - Wer: 0.2831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.0256 | 3.23 | 400 | 0.8938 | 0.8095 | | 0.4608 | 6.45 | 800 | 0.3986 | 0.4415 | | 0.2037 | 9.68 | 1200 | 0.3712 | 0.3881 | | 0.1423 | 12.9 | 1600 | 0.3362 | 0.3547 | | 0.1125 | 16.13 | 2000 | 0.3612 | 0.3452 | | 0.0879 | 19.35 | 2400 | 0.3589 | 0.3201 | | 0.0706 | 22.58 | 2800 | 0.3449 | 0.2989 | | 0.0558 | 25.81 | 3200 | 0.3371 | 0.2941 | | 0.0459 | 29.03 | 3600 | 0.3278 | 0.2831 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
24e2affc44b347e0aa6b14a7184528ca
sayakpaul/distilbert-base-uncased-finetuned-emotion-lr-1e-05-wd-0002
sayakpaul
distilbert
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,396
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-lr-1e-05-wd-0002 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.7739 - Accuracy: 0.7495 - F1: 0.6924 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.3404 | 1.0 | 125 | 1.0081 | 0.637 | 0.5492 | | 0.8738 | 2.0 | 250 | 0.7739 | 0.7495 | 0.6924 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.10.0 - Datasets 2.6.1 - Tokenizers 0.13.1
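A minimal text-classification sketch with the `transformers` pipeline; the input sentence is a placeholder.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sayakpaul/distilbert-base-uncased-finetuned-emotion-lr-1e-05-wd-0002",
)
# Returns the top predicted emotion label and its score
print(classifier("I can't wait to see you again!"))
```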
468f13ce161d30f767dc0916555ebbcd
yanaiela/roberta-base-epoch_34
yanaiela
roberta
9
3
transformers
0
fill-mask
true
false
false
mit
['en']
['wikipedia', 'bookcorpus']
null
0
0
0
0
0
0
0
['roberta-base', 'roberta-base-epoch_34']
false
true
true
2,102
false
# RoBERTa, Intermediate Checkpoint - Epoch 34 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_34. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
e7926cf64da93567d578a16ffcdc5ed5
xpariz10/ast-finetuned-audioset-10-10-0.4593-finetuning-ESC-50-slower-LR
xpariz10
audio-spectrogram-transformer
7
0
transformers
0
audio-classification
true
false
false
bsd-3-clause
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,629
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ast-finetuned-audioset-10-10-0.4593-finetuning-ESC-50-slower-LR This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7837 - Accuracy: 0.8929 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 9.3646 | 1.0 | 28 | 6.0136 | 0.0893 | | 2.9631 | 2.0 | 56 | 2.0175 | 0.5357 | | 1.2435 | 3.0 | 84 | 1.1471 | 0.7679 | | 0.7699 | 4.0 | 112 | 0.8559 | 0.875 | | 0.5911 | 5.0 | 140 | 0.7837 | 0.8929 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
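A minimal audio-classification sketch with the `transformers` pipeline; the audio file name is a placeholder.

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="xpariz10/ast-finetuned-audioset-10-10-0.4593-finetuning-ESC-50-slower-LR",
)
# Five most likely ESC-50 classes for a placeholder recording
print(classifier("dog_bark.wav", top_k=5))
```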
140d9ffd7c60ab805aa234b0141c4a80
imraan/ddpm-butterflies-128
imraan
null
13
3
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/smithsonian_butterflies_subset']
null
0
0
0
0
0
0
0
[]
false
true
true
1,228
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/imraan/ddpm-butterflies-128/tensorboard?#scalars)
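Until the TODO snippet above is filled in, here is a minimal unconditional-generation sketch, assuming the checkpoint follows the standard `DDPMPipeline` layout produced by the 🤗 Diffusers training script and a recent Diffusers release.

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("imraan/ddpm-butterflies-128")

# Unconditional sampling; the full DDPM schedule can take a while on CPU
image = pipeline().images[0]
image.save("butterfly.png")
```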
5e46e3ef303878003c6d20c1ec778e3f
obokkkk/wav2vec2-base-timit-demo-colab2
obokkkk
wav2vec2
12
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,642
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4805 - Wer: 0.3398 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4737 | 4.0 | 500 | 1.2889 | 0.9293 | | 0.5838 | 8.0 | 1000 | 0.4751 | 0.4353 | | 0.2141 | 12.0 | 1500 | 0.4809 | 0.3881 | | 0.1259 | 16.0 | 2000 | 0.4587 | 0.3683 | | 0.084 | 20.0 | 2500 | 0.4941 | 0.3601 | | 0.0582 | 24.0 | 3000 | 0.4811 | 0.3482 | | 0.0439 | 28.0 | 3500 | 0.4805 | 0.3398 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
63bc27ffb4a0fc145348c329b77eab64
Marscen/distilbert-base-uncased-finetuned-squad
Marscen
distilbert
12
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad_v2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,286
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4052 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2178 | 1.0 | 8235 | 1.1827 | | 0.9355 | 2.0 | 16470 | 1.3283 | | 0.761 | 3.0 | 24705 | 1.4052 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.8.1+cu111 - Datasets 2.2.2 - Tokenizers 0.12.1
89f61bfa00ed65d4b31c506f192fbac2
yanaiela/roberta-base-epoch_48
yanaiela
roberta
9
3
transformers
0
fill-mask
true
false
false
mit
['en']
['wikipedia', 'bookcorpus']
null
0
0
0
0
0
0
0
['roberta-base', 'roberta-base-epoch_48']
false
true
true
2,102
false
# RoBERTa, Intermediate Checkpoint - Epoch 48 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before the training) to provide the ability to study the training dynamics of such models, and other possible use-cases. These models were trained in part of a work that studies how simple statistics from data, such as co-occurrences affects model predictions, which are described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_48. ## Model Description This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM). The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model: * We trained our model for 100K steps, instead of 500K * We only use Wikipedia and the Book Corpus, as corpora which are publicly available. ### How to use Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch: ``` from transformers import pipeline model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10) model("Hello, I'm the <mask> RoBERTa-base language model") ``` ## Citation info ```bibtex @article{2207.14251, Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg}, Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions}, Year = {2022}, Eprint = {arXiv:2207.14251}, } ```
f491a9502e627f11c8aee16bbdce3ab0
BayesBayes/distilgpt2-finetuned-wikitext2
BayesBayes
gpt2
7
0
transformers
0
text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,067
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 4.4834 - eval_runtime: 217.639 - eval_samples_per_second: 8.872 - eval_steps_per_second: 1.112 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
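A minimal text-generation sketch with the `transformers` pipeline; the prompt is a placeholder.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="BayesBayes/distilgpt2-finetuned-wikitext2")

output = generator("The history of natural language processing", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```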
2b9b6cd2738bc783830f733f47b2446b
sd-concepts-library/atm-ant-2
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,010
false
### ATM Ant 2 on Stable Diffusion This is the `<atm-ant>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<atm-ant> 0](https://huggingface.co/sd-concepts-library/atm-ant-2/resolve/main/concept_images/0.jpeg) ![<atm-ant> 1](https://huggingface.co/sd-concepts-library/atm-ant-2/resolve/main/concept_images/3.jpeg) ![<atm-ant> 2](https://huggingface.co/sd-concepts-library/atm-ant-2/resolve/main/concept_images/1.jpeg) ![<atm-ant> 3](https://huggingface.co/sd-concepts-library/atm-ant-2/resolve/main/concept_images/2.jpeg)
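Besides the notebooks above, a minimal 🧨 Diffusers sketch for loading the concept is shown below; it assumes a Stable Diffusion v1.5 base checkpoint (the card does not state which base model the concept was trained against) and a recent `diffusers` version that provides `load_textual_inversion`.

```python
from diffusers import StableDiffusionPipeline
import torch

# Assumed base checkpoint; swap in the Stable Diffusion version the concept was trained on
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/atm-ant-2")

image = pipe("a photo of <atm-ant> crawling on a leaf").images[0]
image.save("atm_ant.png")
```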
d438baa0aedb85d91dd67485d670283a
haesun/xlm-roberta-base-finetuned-panx-en
haesun
xlm-roberta
10
7
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3932 - F1: 0.7032 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1504 | 1.0 | 50 | 0.5992 | 0.4786 | | 0.5147 | 2.0 | 100 | 0.4307 | 0.6468 | | 0.3717 | 3.0 | 150 | 0.3932 | 0.7032 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
e35095c5e33a3b18cb035bd705c3a3c0
milyiyo/paraphraser-spanish-t5-base
milyiyo
t5
18
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,867
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paraphraser-spanish-t5-base This model is a fine-tuned version of [milyiyo/paraphraser-spanish-t5-base](https://huggingface.co/milyiyo/paraphraser-spanish-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7572 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.1212 | 0.07 | 2000 | 0.8120 | | 1.2263 | 0.14 | 4000 | 0.7773 | | 1.1976 | 0.21 | 6000 | 0.7745 | | 1.1828 | 0.28 | 8000 | 0.7675 | | 1.1399 | 0.35 | 10000 | 0.7668 | | 1.1378 | 0.42 | 12000 | 0.7651 | | 1.1035 | 0.5 | 14000 | 0.7644 | | 1.0923 | 0.57 | 16000 | 0.7633 | | 1.0924 | 0.64 | 18000 | 0.7594 | | 1.0943 | 0.71 | 20000 | 0.7578 | | 1.0872 | 0.78 | 22000 | 0.7575 | | 1.0755 | 0.85 | 24000 | 0.7599 | | 1.0806 | 0.92 | 26000 | 0.7558 | | 1.079 | 0.99 | 28000 | 0.7572 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
96d52a74cfe9de0b3cd0f0180b67be4a
JosephusCheung/ACertainModel
JosephusCheung
null
24
1,407
diffusers
124
text-to-image
false
false
false
creativeml-openrail-m
['en']
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers']
false
true
true
5,332
false
# ACertainModel

**Try full functions with Google Colab free T4** [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ldhBc70wvuvkp4Af_vNTzTfBXwpf_cH5?usp=sharing)

Check Twitter [#ACertainModel](https://twitter.com/hashtag/ACertainModel) for community artworks.

Welcome to ACertainModel - a latent diffusion model for weebs. This model is intended to produce high-quality, highly detailed anime-style pictures from just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags, including artist tags, to generate images.

Since I noticed that the laion-aesthetics data introduced in the Stable-Diffusion-v-1-4 checkpoint hindered finetuning an anime-style illustration generation model, Dreambooth was used to finetune some tags separately to make them closer to what they were in SD1.2. To avoid overfitting and possible language drift, I added a large amount of auto-generated pictures, each produced from a single-word prompt with models that are popular in the community such as Anything-3.0, together with partially manually selected full-danbooru images within a year, for further native training. I am also aware of [LoRA](https://arxiv.org/abs/2106.09685), a method with a similar idea that finetunes only the attention layers, to get better performance on eyes, hands, and other details.

For copyright compliance and as a technical experiment, it was trained directly from only a few artist images; it was trained with Dreambooth on pictures generated by several popular diffusion models in the community. The checkpoint was initialized with the weights of a Stable Diffusion model and subsequently fine-tuned for 2K GPU hours on V100 32GB and 600 GPU hours on A100 40GB, at 512P dynamic aspect-ratio resolution, with a certain ratio of unsupervised auto-generated images from several popular community diffusion models together with some Textual Inversions and Hypernetworks. We do know some tricks around xformers and 8-bit optimization, but we didn't use any of them, for better quality and stability. Up to 15 branches were trained simultaneously, cherry-picking roughly every 20,000 steps.

e.g. **_masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden_**

## About the online preview with the Hosted inference API, and generation with this model

Parameters cannot be modified there, and it appears to generate with *Clip skip: 1*; for better results it is strongly recommended to use *Clip skip: 2* instead. If you run the model on your own server, here is an example of inference settings: *Steps: 28, Sampler: Euler a, CFG scale: 11, Clip skip: 2*.

## 🧨 Diffusers

This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion) documentation.

You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "JosephusCheung/ACertainModel"
branch_name = "main"

pipe = StableDiffusionPipeline.from_pretrained(model_id, revision=branch_name, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "pikachu"
image = pipe(prompt).images[0]
image.save("./pikachu.png")
```

## Examples

Below are some examples of images generated using this model, which shows better performance on framing, hand gestures, and moving objects compared to other analogues:

**Anime Girl:**

![Anime Girl](https://huggingface.co/JosephusCheung/ACertainModel/resolve/main/samples/sample-1girl.png)

```
1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden
Steps: 28, Sampler: Euler a, CFG scale: 11, Seed: 114514, Clip skip: 2
```

**Anime Boy:**

![Anime Boy](https://huggingface.co/JosephusCheung/ACertainModel/resolve/main/samples/sample-1boy.png)

```
1boy, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden
Steps: 28, Sampler: Euler a, CFG scale: 11, Seed: 114514, Clip skip: 2
```

## License

This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:

1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)

## Is it a NovelAI based model? What is the relationship with SD1.2 and SD1.4?

See [ASimilarityCalculatior](https://huggingface.co/JosephusCheung/ASimilarityCalculatior)
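As a rough sketch of how the inference settings recommended above (Euler a, 28 steps, CFG scale 11) map onto diffusers, the snippet below swaps in the Euler-ancestral scheduler and sets the matching pipeline arguments. It leaves the Clip skip: 2 recommendation aside, since the plain `StableDiffusionPipeline` does not expose a clip-skip switch directly.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "JosephusCheung/ACertainModel", torch_dtype=torch.float16
).to("cuda")

# "Euler a" corresponds to the Euler-ancestral scheduler in diffusers.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "masterpiece, best quality, 1girl, brown hair, green eyes, colorful, autumn, "
    "cumulonimbus clouds, lighting, blue sky, falling leaves, garden",
    num_inference_steps=28,  # Steps: 28
    guidance_scale=11,       # CFG scale: 11
).images[0]
image.save("sample.png")
```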
b1a5264ce6a07c547c1dec7135c8c6ce
shpotes/codegen-350M-mono
shpotes
codegen
13
2
transformers
3
text-generation
true
false
false
bsd-3-clause
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,531
false
# Overview

The CodeGen model was proposed by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong from Salesforce Research.

The abstract from the paper is the following:

Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We plan to make the training library JaxFormer including checkpoints available as open source.

# How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("shpotes/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("shpotes/codegen-350M-mono", trust_remote_code=True)

# The tokenizer has no padding token by default, so reuse the EOS token for padding.
tokenizer.pad_token = tokenizer.eos_token
pad_token_id = tokenizer.pad_token_id

# Example values only; adjust the prompt and sampling parameters to your use case.
context = "def fibonacci(n):"   # the (partial) program you want completed
num_return_sequences = 1
temp = 0.2                      # sampling temperature
top_p = 0.95
max_length_sample = 128         # number of new tokens to sample

input_ids = tokenizer(
    context,
    truncation=True,
    padding=True,
    return_tensors='pt',
).input_ids
input_ids_len = input_ids.shape[1]

with torch.no_grad():
    tokens = model.generate(
        input_ids,
        do_sample=True,
        num_return_sequences=num_return_sequences,
        temperature=temp,
        max_length=input_ids_len + max_length_sample,
        top_p=top_p,
        pad_token_id=pad_token_id,
        use_cache=True,
    )

# Decode only the newly generated tokens (everything after the prompt).
text = tokenizer.batch_decode(tokens[:, input_ids_len:, ...])
print(text[0])
```
2fd41c74714fe1370b4999e1939d19fd